From: Dennis E. Hamilton [dennis.hamilton@acm.org]
Sent: Saturday, 01 September 2001 15:07
To: Theory-Edge
Subject: CyberMetaphysics: Cyberneurons and the Scale of Human Experience


        At http://groups.yahoo.com/group/theory-edge/message/3798
        --- In theory-edge@y..., vznuri@e... wrote:

        [ ... ]

        a metaphysical question: is there any
        more to human experience than the
        encoded information which
        goes back and forth through the nervous
        system?

        [ ... ]

Hmm, the scale of human experience is a theme that interests me, and I thought I had something to offer for this question.

OH OH, WHAT ARE WE ACTUALLY TALKING ABOUT?

Looking more closely, I realize that I don't know what the question is (independent of whether it is a metaphysical question or not). 

My experience is not of the encoded information going back and forth through my nervous system.  That is, I don't experience that.  So there is a question here about what human experience is and its connection to events in the nervous system.  One thing I can say about the experience I am aware of is that there are evidently lots of things going on in my nervous system that do not arise in my experience.  But as to whether there is more or less at the level of human experience than at the level of nervous-system activity, I don't think that is a well-formed question yet.

So, it would seem that there needs to be more careful alignment on terminology to go much further.  (I do not propose to get alignment on what metaphysics is, however.)

WHAT I THOUGHT WE COULD BE TALKING ABOUT

Looking at the apparatus of one human body seems too narrow a view when looking at human experience.  That is, maybe even human experience has an existence that is not confined or reduced that way.  I wanted to suggest that, and to consider the scale of human experience.  One example is that we are able to talk about human experience, so it somehow has an existence beyond/between ourselves, in the sense that we communicate about it.  (I do not mean to imply that human experience is situated somewhere in physical reality.  I can't figure out where "I" am, let alone where what I am experiencing is.)

As an example of what I mean about scale, it is useful to notice things that are beyond human experience yet that we operate with and even postulate having control over.  Some people committed to creating a collective intelligence (though perhaps not committed to the technological singularity, I can't be sure) have discussed this in another setting.  Here are some observations that, I suggest, point to there being more to human experience, and to things in our awareness that lie beyond our direct experience.  The full text is at

        http://groups.yahoo.com/group/unrev-II/message/2388

"1.     Consider that there is already global collective intelligence.
[clearly not the one that we are dreaming of, but one that is perhaps already in place].  I am not proposing this as a fact; I am proposing it as something to consider.

"2.     If we were the neurons of a global collective intelligence, would we be aware of it, and would it matter one way or the other whether we were?  For that matter, would this "collective intelligence" be aware of us, and would it matter one way or the other whether it (or they) did?  [Terminology point: differentiation of collective intelligence from collective consciousness is worth considering too.  And wondering just what the conscious attention would be on.  Us?  Seems unlikely.  I don't know about you, but I don't even know how to contemplate my neurons, and I'm certainly not moved to do it.  Is it even possible?]

"3.     One of the things that fascinate me about the theory of evolution, and the theory of economics, for a system closer to the one that we may be looking for in this conversation, seems to be the following.  If the real world provides a valid interpretation of those (macro-) theories, then it is irrelevant whether we individuals are aware of those theories or not, and it is irrelevant whether we believe them or not, cooperate with them or not (whatever that could mean), and so on.  (Consider that there has been no "escape" from evolution.  Consider that the theory of evolution applies just fine.  Why do we find that idea objectionable?  Consider that it doesn't make any difference -- in the framework of evolution -- whether we do or not.)

"4.     Consider the prospects for the neurons of a collective intelligence actively controlling the emergence of the collective intelligence through their apparent autonomous behavior.  I notice that we have this conceit that the forces of evolution are somehow in our hands (and that the "natural" and the "artificial" are different, etc.).  It would appear that our having a theory of economics has led to some kind of economic efficiency in the world, yet I am distrustful of that.  (I have been noticing externalities, for example, at the household level and how, in my household, there is excessive use of the automobile, lack of commitment around recycling, cleaning up ones own mess, and so on, although it is clear what, by extension, the inevitable global consequences are.  Self-indulgence is winning, referenced to my local view of things.  Moving externalities to others is not merely malignant corporate behavior, by a long shot.  The practice is internalized far more locally, in my experience.)"

ANOTHER LOOK AT THE QUESTION

I don't think that we get to the full scale of human experience by confining ourselves to the boundaries of one physical individual.  Most of what shows up in my cognitive experience is not delimited that way.

What's interesting, and perhaps misleading, about computer science is that algorithms are formulated in just this way.  That is, there are definite inputs and outputs, and the algorithm operates within the prescribed rules on the given inputs and nothing else.  Algorithms are viewed independently of any computational setting (although one must assume a computational mechanism).  But there is no hidden information, and there are no unidentified external influences.  It seems to take all of that to arrive at an acceptable formal basis for computation.  (In our speaking of algorithms there are often unstated assumptions, yes, but it all tends to work out anyhow.)
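
To make that closed character concrete, here is a minimal sketch (Python is my choice of notation here, nothing from the original exchange).  Everything the algorithm can respond to is in its arguments, and everything it produces is in its return value:

        # Euclid's algorithm: a closed computation.  The inputs are exactly
        # the two arguments, the output is exactly the return value -- no
        # hidden state, no outside influences.
        def gcd(a: int, b: int) -> int:
            while b != 0:
                a, b = b, a % b
            return a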

But to look at a computing *system*, studying algorithms isn't enough.  The system is open to stimuli from "outside," and its outputs impact that external world and influence future inputs.  To comprehend that, we need to zoom back and expand our view to macroscopic behaviors in which the computing system is just one participant.
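
Here, by way of contrast, is a sketch of the open case.  The Environment class below is a hypothetical stand-in of my own for everything "outside" the computing system; the only point is the shape of the loop, in which outputs alter the world that produces the next inputs:

        import random

        class Environment:
            """A toy stand-in for the world outside the computing system."""
            def __init__(self) -> None:
                self.state = 0.0

            def stimulus(self) -> float:
                # What arrives is partly a consequence of earlier outputs.
                return self.state + random.gauss(0, 1)

            def absorb(self, output: float) -> None:
                # The system's output leaves a trace in the world.
                self.state = 0.9 * self.state + 0.1 * output

        def run(env: Environment, steps: int) -> None:
            for _ in range(steps):
                x = env.stimulus()   # input from "outside"
                y = 0.5 * x          # the system's own (closed) algorithm
                env.absorb(y)        # output influences future inputs

        run(Environment(), 100)

Studying gcd tells you everything about gcd; studying the inner algorithm here tells you very little about what the loop as a whole does over time.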

I say the same applies in studying human systems and the reach of human experience, without regard to the metaphysics of the matter.  There is far more involved than some algorithm operating on a set of neuronal states (inputs) to give rise to a "human experience." 

Here's another way I have of looking at this.  Consider that an individual expression in the genetic code is far too small to fully determine a human being.  Even as a message to some mechanism, the message is far too small.  For the message to be so small, there must be much more in the mechanism than in the code.  I say it is valuable to wonder where and what the mechanism is, and what its scale and dispersal are.  And how is it preserved/sustained over time when the individuals that are its apparent product have but these tiny codes to pass among themselves?
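
As a rough illustration of what I mean by "far too small," here is some back-of-envelope arithmetic.  The figures are commonly cited orders of magnitude that I am supplying for the comparison, not anything precise:

        # Rough, commonly cited orders of magnitude -- my own illustration.
        base_pairs  = 3.2e9      # approximate size of the human genome
        bits_per_bp = 2          # four possible bases = 2 bits each
        genome_bits = base_pairs * bits_per_bp      # ~6.4e9 bits, ~800 MB

        synapses = 1e14          # order-of-magnitude synapse count in one brain

        print(f"genome:   ~{genome_bits:.1e} bits (~{genome_bits / 8 / 1e6:.0f} MB)")
        print(f"synapses: ~{synapses:.0e} connections to be specified somehow")

Even at one bit per connection, the wiring of a single nervous system outruns the genome by something like four orders of magnitude, which is one way of saying that most of the specification must live in the mechanism and its surroundings rather than in the message.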


-- Dennis