Saturday, January 31, 2004

The privacy of consciousness

See the note to the following post for the origins of this material.

We can in fact "measure" consciousness, at least in principle, as indicated by operations that directly stimulate the brains of waking patients, who report the resulting conscious experiences, or qualia (to actually measure a conscious experience, we would need to run those operations in reverse, in a sense: instead of noting experiences as we apply electrical current, we would note current changes as patients were presented with experiences). But of course this is just needles on dials, ink on graph paper, or whatever, not consciousness itself. We're pretty sure, in such a situation, that we are measuring consciousness because we believe that other people, including this patient, are conscious, and the patient is telling us his or her experience. But still, all we have is the report and the recording instruments – we can't seem to observe consciousness itself. It seems intuitively obvious that consciousness is, in a very fundamental sense, private ... but why, exactly?

Well, again, let's suppose that consciousness were a mechanism that consisted of two major components, fitted to one another -- call them World and Actor, or W and A. And then consider two different conscious mechanisms, C1 and C2:

C1 = W1-A1
C2 = W2-A2

Any observation of any form – measurement, direct perception, whatever – reaches the awareness of the observer only through W. But that awareness itself is only present in the connection between W and A. So if C1 wanted to observe the consciousness of C2, say, even if C1 were to take apart or dissect C2, all that C1 would be able to observe would be what came to it via W1 – through that portal, so to speak, it could observe C2 as a whole, or observe the separate parts W2 and A2, but it could not *observe* C2's consciousness because that's a phenomenon that only occurs at the Actor (or maybe Awareness) level, i.e., at the A1 or A2 level. For C1 really to become aware of C2's consciousness, it would need to get its A1 component in direct connection with C2's World component – that is, it would have to dissect not just C2 but also itself, and then re-wire its own A1 to C2's world:

C1-2 = W2-A1

Which would make an interesting basis for a science-fiction plot (though no doubt Philip K. Dick has already done something like that), but is otherwise unlikely.
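The W-A decomposition might be sketched in code -- a toy model only, with the class names, methods, and tuple representation invented here for illustration:

```python
# Toy model of the two-component conscious mechanism C = W-A.
# All observation reaches an Actor only through its own World component,
# so C1 can inspect C2's parts but never occupy C2's W2-A2 connection.

class World:
    """First stage: maps raw environmental signals to an internal 'world'."""
    def __init__(self, name):
        self.name = name

    def present(self, signal):
        # The world re-presents the signal, stamped by the portal it came through.
        return (self.name, signal)

class Actor:
    """Second stage: awareness lives only at the W-A connection."""
    def __init__(self, name, world):
        self.name = name
        self.world = world

    def experience(self, signal):
        return self.world.present(signal)

# Two conscious mechanisms, C1 = W1-A1 and C2 = W2-A2:
w1, w2 = World("W1"), World("W2")
c1, c2 = Actor("A1", w1), Actor("A2", w2)

# C1 "observing" C2 still only yields what comes through W1:
assert c1.experience("C2-as-object") == ("W1", "C2-as-object")

# The science-fiction rewiring C1-2 = W2-A1:
c1.world = w2
assert c1.experience("sky") == ("W2", "sky")
```

The last two lines play out the rewiring: only by swapping its own World component for W2 does C1's Actor get direct access to C2's world.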

Qualia again ("blue is not a number")

NOTE: This post, slightly edited, is taken from what to me was/is an interesting thread on Ray Kurzweil's "Mind.X" forum. The title refers to the handle of one of the more active participants in that thread, whose questioning regarding this project has been a spur to get me to think not just about how to refine the ideas here, but also how to improve the communication of them. What follows here and in the post above are some of the results.

Take the color blue -- not the concept "blue" which gathers together all the particular perceptions of blueness, but a specific perception of blue itself, the quale or experience of blue. It's not a number, precisely; it's a color. But what exactly does it mean to say that it's a "color"? What else could it be? Could it be a number, for example? The problem with that is that number itself is a concept not a perception, not a quale. We humans understand numbers because we invented the concept (and we tend to think that computers deal in numbers or "computations" because that's what we invented computers for), but how would number work in place of color in a conscious but non-linguistic organism? Not well, I think, because it's an abstraction, and what we need is a concrete experience. Similarly, even though we believe that blue "represents" a certain frequency of light, it's hard to see (so to speak) how there could be a direct perception of frequency, because "frequency" too is just a concept. If we had the problem of designing an artifact that perceived light frequencies (as opposed to simply being affected by frequencies, as is a digital camera, say, or for that matter, anything that's not transparent), we would need to come up with a way of presenting to it a simple, immediate, irreducible, and distinguishable quality for each frequency that we wanted the artifact to be able to distinguish. And that, for evolved organisms, is just what color does and is.
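The design problem in this paragraph -- presenting a simple, immediate, distinguishable quality per frequency -- might be sketched as follows; the `Quale` class and the band edges are my own illustrative inventions:

```python
# A sketch of the artifact described above: each light-frequency band gets a
# simple, distinguishable quality. The quale is an opaque token -- deliberately
# NOT a number, so no arithmetic or ordering leaks in.

class Quale:
    """An opaque, unanalyzable token: distinct from every other, nothing more."""
    def __init__(self, label):
        self._label = label   # for our convenience only; the artifact can't "read" it
    def __repr__(self):
        return f"<quale {self._label}>"

RED, GREEN, BLUE = Quale("red"), Quale("green"), Quale("blue")

def perceive(frequency_thz):
    """Map a frequency onto a quality; the perceiver gets the quale, not the number."""
    if frequency_thz < 480:
        return RED
    elif frequency_thz < 580:
        return GREEN
    else:
        return BLUE

assert perceive(450) is RED
assert perceive(650) is BLUE
assert perceive(450) is not perceive(650)   # distinguishable -- and that's all
```

The point of the opaque class is that the perceiver can compare qualia only for sameness and difference, never inspect the frequency "behind" them.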


But there's another part of the story to come, and it may be the hardest part -- this involves explaining the distinction between perception and simple effect, because that gets to the core of the idea of consciousness as such. What we need, here, is a mechanism that is able to "apprehend" the various qualities presented to it, or to which it turns its "attention", but isn't simply determined by them -- "apprehension" meaning that the perceiving mechanism is affected by the qualities that it perceives but has them available as input for further processing. Ultimately, of course, such a mechanism is as determinate as any other system in nature, but insofar as we look just at the connection between the "perceiving system" and the "world" of qualities, on all channels, that it's able to perceive, then this sort of apprehension opens up a certain free play, so to speak, a looseness of connection that isn't present in other kinds of mechanism -- the world affects the perceiver but doesn't determine it.

Tuesday, January 27, 2004

Dialogue with the Dualist

Dualist: You see, the question is why should this specific sensation, this "redness", accompany the whole chain of events that follow from the impact of photons of a certain energy on the retina?


Me: Well, any specific sensation is probably arbitrary -- but it must be some sensation --


D: But why "some"? Why should there be any sensation? Not to mention, where does such a thing come from? How, out of simple matter and energy, do we get sensation, this redness?


M: How else could information be conveyed?


D: Well, lots of ways, but let's take one: as a bit stream.


M: So are you asking why we don't perceive a string of 1's and 0's instead of the sensation red?


D: Yes.


M: Okay, but now you've just broken the problem into smaller chunks -- how would we perceive the first bit in the stream, say a "1"? As an actual numeral?


D: As any sort of token. As a certain voltage level, say.


M: And how would we perceive that voltage level? We'll assume not as a needle pointing at some number on an internal volt-meter -- but how else? As a little shock?


D: Well, maybe.


M: But then wouldn't that be a sensation?


D: Alright, but we've actually dodged around the main point -- why does there need to be this inner perception at all? This is just the old homunculus answer, isn't it -- our eyes send a signal to a little television screen inside our skulls, where a little man watches?


M: Yes, and then how does he (or she) see? But it's precisely sensation that puts a stop to that infinite regress. I think you're right if you're just objecting to my use of the word "perceive" in this regard -- I really should have used "experience". So my question should have been: how do we experience the first bit in the stream? Or how do we experience any information?


D: Well, information is just difference --


M: Exactly! So then it hardly matters whether we experience this "difference" a bit at a time (so to speak), or in 24-bit chunks -- millions of colors! -- the simple point is that the tokens of information must be different, and that's all that sensations, or qualia, are.


D: No! That's not all they are! They're actual feelings -- that is, they actually feel like something, they're not mere abstract differences, which is the whole point here.


M: But my point is that they must feel like something for that information to be actually experienced --


D: Alright, fine, but then it's experience that's the issue here -- why and whence does this come about? Why should there be experience at all?


M: Okay, now we're getting to the heart of the matter -- or the heart of the heart. You'll grant that without experience there is no consciousness?


D: Yes, fine.


M: So really you're asking why should there be consciousness, yes?


D: Yes. I'm not happy playing the anti-Socrates, by the way, but I'll put up with it a while longer.


M: Thank you. (I'm sure I'll return the favor.) But what kind of answer to that question would satisfy you? If I could show that consciousness was functional, would that do it?


D: Umm --


M: Maybe not. Perhaps you're not really asking why there should be consciousness, but how there can be consciousness?


D: Look, it just comes down to the fact that you don't find red or blue in nature, nor in our brains, but only in our minds.


M: Yes, that far we can go together. But for you, I think, that's pretty much the end of the road. Whereas I would like to take at least another step or two, or try to, by saying, first of all, what that word "mind" means, and then saying what it means for something to be "in the mind".


D (laughing): Those are ... giant steps, wouldn't you say?


M (tentatively): Umm ... maybe.

(Possibly to be continued.)

Sunday, January 25, 2004

"Observation", "experience" and "self-observation"

We can use the two-part structure of consciousness to (re)define some terms:
  • "observation", in its broad sense, refers to the first stage of consciousness, in which environmental signals are mapped to a fabricated "world";
  • "experience" refers to the second stage of consciousness, in which a behavior-determining system accesses that "world", as well as other sources of information such as memory and imagination.

Thus, observation is not experience, nor experience observation. One consequence of which must be that "self-observation" isn't true observation, but rather a form of experience. That is, one can't really observe oneself, one can only experience oneself.

This last point takes us beyond the stated limits of this project (which is focused on consciousness, as opposed to self-consciousness) -- but it also relates to the way in which we investigate, or even just think about, consciousness per se, and so it's worth pursuing a little. Why is it that self-awareness seems at once to be like observation, but also to be different (especially so, perhaps, in the peculiarly slippery, "glassy", protean qualities of its object)? Let's make a quick hypothesis: the advent of language, or of a token-based communication system, allowed the development of a third "layer" or stage of consciousness, and, in particular, the formation of a "self" which represented the whole of consciousness, an inherently recursive information structure. From the vantage point of this third stage, this self, then, consciousness itself can appear as something observed, even though this construct is entirely within experience, and so has none of the "hardness" or durability of true observation, its self-referential nature making it especially unstable.

Saturday, January 24, 2004

Consciousness as disconnection

Comparing the two simple entities below -- earthworm and thermostat -- and the alternative "wiring" options in each case leads to an imprecise but suggestive formula for the structure that is at the basis of conscious awareness:

Consciousness is a disconnection from the environment -- a disconnection that allows attention to the world.

In other words, the binary or two-stage model of consciousness introduces a gap, and in the space of that gap there arises information or phenomenal awareness as a new kind of re-connection.

The earthworm and the thermostat

An earthworm is an example of a simple entity that seems clearly to demonstrate purposeful behavior and some form and degree of sensitivity -- we might balk at ascribing consciousness to it, but it doesn't seem like an impossible stretch to imagine that it actually "feels" in some way. Whether or not this is actually the case depends, according to the argument here, on its neurophysiology, or on how its simple behavioral options are connected to its simple sensory inputs. If input is directly connected to output – that is, if its behavior consists just of reflex arcs – then we can at least say that it has no need of feeling. But if its neural inputs, however crude (sensitivity to temperature, nutrients, light perhaps) are instead connected to an intermediate neural structure that modulates and "represents" the external stimuli, and if there is another neural structure that is able to use these representations, however simply, in order to determine behavior, then we might very well say that the earthworm does indeed "feel", because some level of phenomenal awareness would then be required as the means of connecting the two intermediate neural structures.

This is just saying again what was said below, but relying upon at least a plausible intuition that even a very simple mobile organism can feel. But now let's consider a thermostat, a control device that also exhibits what might be called "purposive behavior", though only with a certain amount of metaphorical licence since in this case the mechanism involved is very obvious: a thermometer falling below a set level triggers a switch to start a furnace. Nevertheless, here we have another case, like the earthworm, of a simple entity that receives environmental input and determines its behavior in light of that input. Suppose that we complicate the thermostat a bit by adding another input channel besides temperature – perhaps a clock, say, or a light-meter, so that the triggering temperature can be set lower after a certain time, or with the onset of darkness. How would these two "sensory" inputs work together? One way – perhaps the more likely way – would be to connect both directly to a more complicated switch that relied upon a double trigger to start the furnace. But another way – perhaps a more flexible way – might be to connect both inputs not directly to the switch but to an intermediate layer, where their signals could be represented as qualitatively distinct "tokens" on a linear scale – and then to make a second layer out of a simple processing chip that could accept these tokens as input and determine upon its "behavior" – whether or not to start the furnace – based upon its processing of these inputs. In this case, would we be as tempted to say of the thermostat what we were of the worm – that, in some fashion and to some degree, it "feels"?

Well, perhaps not, and for a number of reasons, some of them good ones – such as the fact that the thermostat is a special-purpose device whereas the worm is an autonomous entity, and much more complex than even this artificially complicated thermostat. But this is also a test of our intuition in this entire area, and, just as the intuition that the sun revolves around the earth misled us in the past, so the sense that even quite simple evolved organisms can feel, while even quite sophisticated designed ones cannot, might be a mere prejudice.
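The two wiring options for the complicated thermostat can be sketched side by side; the setpoints and token names are invented purely for illustration:

```python
# Option 1: both inputs wired directly into a double-trigger switch.
def direct_switch(temp_c, is_dark, setpoint=20, night_setpoint=17):
    setpoint = night_setpoint if is_dark else setpoint
    return temp_c < setpoint   # True -> start the furnace

# Option 2: inputs first mapped to qualitatively distinct "tokens" in an
# intermediate layer, which a simple processing stage then reads -- the
# loose-coupled, two-stage arrangement the text likens to feeling.
def intermediate_layer(temp_c, is_dark):
    tokens = []
    tokens.append("COLD" if temp_c < 17 else "COOL" if temp_c < 20 else "WARM")
    tokens.append("DARK" if is_dark else "LIGHT")
    return tokens

def processing_stage(tokens):
    if "COLD" in tokens:
        return True
    if "COOL" in tokens and "LIGHT" in tokens:
        return True
    return False

# Both arrangements produce the same furnace behavior here...
assert direct_switch(15, True) == processing_stage(intermediate_layer(15, True)) == True
assert direct_switch(18, True) == processing_stage(intermediate_layer(18, True)) == False
# ...but only the second has an internal layer of represented "qualities".
```

Behaviorally the two are indistinguishable; the difference lies entirely in the internal structure, which is exactly the point at issue.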

Wednesday, January 21, 2004

"What is it like to be" X?

Let's stipulate that it isn't like anything to be a rock, or a plant, or, just by itself, a running computer. It may be like something, however simple, to be a worm, and it's very probably like something to be a turtle, say, or a bat, or a dog. That is, it can only be "like something" to be something if the thing we're imagining has a so-called "inner world" – or, in the theory developed here, has a certain internal control structure that makes use of an inner world. So in this sense it would be entirely appropriate to say that it would be "like something" to be a machine that possessed such a control structure as well, a binary control system with attention.

Tuesday, January 20, 2004

Qualia: primordial information tokens

In explanations of Shannon's theory, information is typically represented simply by tokens, each of which is distinct or unique, but otherwise quite arbitrary. But such a "token" is really a derivative concept, a kind of abstraction of the notion of qualitative difference. Before any such abstractions, before there were names of things, before there were objects, even, or things themselves, there were qualia. That is, qualia are logically prior to any other form of information - tokens, cyphers, names or numbers or quantities are all, in one way or another, derived from the primary form of information, qualia. So a "quale", by itself, is neither "hot" nor "cold", but simply a unique, distinctive bearer of information - e.g., what conveys the information "red", before anything else, is just redness. And that's all that redness does.
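A small check of the claim that information depends only on the distinctness of tokens, not on what they are: Shannon entropy comes out identical for a stream of numerals and a stream of colour-words with the same pattern of differences.

```python
import math
from collections import Counter

# Shannon entropy depends only on the tokens' distinguishability and relative
# frequencies -- not on what the tokens "are". A stream of colour-qualia and a
# stream of numerals carry identical information if they differ in the same way.

def entropy(stream):
    counts = Counter(stream)
    n = len(stream)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

as_numerals = [0, 1, 1, 0, 1, 0, 0, 1]
as_qualia   = ["red", "blue", "blue", "red", "blue", "red", "red", "blue"]

assert entropy(as_numerals) == entropy(as_qualia) == 1.0
```

All the formula "sees" is the pattern of sameness and difference -- which is the sense in which a bare quale is already a full-fledged information token.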

So now let's bring this back to the idea of consciousness as a binary control system. We're accustomed to think of awareness as centered in, and originating from, a point - as in a "point of view". But suppose we assume, instead, that "awareness" is actually a function of a fairly complex subsystem of consciousness - a "behavior-determining" subsystem – with a number of parts and processes, all of which are necessarily outside of awareness (below, before, or just, in any case, beyond). And just as its own component parts and processes are inherently out of reach of the behavior-determining system that is able to manifest awareness, so the processes - neural pathways, silicon-etched circuits, or anything else - that underlie the world-making system are out of reach of its "awareness". All that is present to that awareness is the information space so formed, the space that we call the "world". And all that a particular state - the particular state upon which attention is focused at a particular time - of that information space is, or can be, made out of is qualia.

It's in this sense that qualia are not just functional, but are logically necessary to any system, like consciousness, which presents an information space for the attention of a decision-making process.

Friday, January 16, 2004

Why zombies lurch

Even without the FX, Hollywood zombies are more realistic than philosophical zombies because they usually lurch when they stumble toward you - Hollywood seems to understand that trying to get by without a consciousness makes things like walking difficult, not to mention talking. Philosophical zombies, on the other hand, are supposed to be indistinguishable from us, by any known test, and yet lack so-called "inner states" altogether. And from the apparent fact that such a chimera is a "conceptual possibility" (see David Chalmers, "Self-Ascription Without Qualia: A Case-Study") we can apparently deduce that these inner states - consciousness, qualia, whatever - are entirely dispensable, "epiphenomenal", or just "along for the ride". Now, maybe it's just me, but, with apologies to David Chalmers, this looks very much like a case of assuming that which you set out to prove - i.e., of begging the question. The alternative view - namely, that inner states, including qualia, are highly desirable if not indispensable for any kind of complex behavior control - is no more defeated by a mere conceptual possibility than a theory of gravity would be by imagining that someone could levitate.

But what Chalmers and others are doing in making this sort of argument, of course, is appealing to the same sort of deeply rooted intuition that has anchored philosophical debate in this area for a very long time - the simple idea that we can, in principle, follow the so-called "physical processes" underlying any given mental state to completion and never encounter the actual state itself - hence that state (which we can hardly deny, though some try to) is simply some mysterious extra, a superfluity, an "epiphenomenon". Zombies are just one more kick at that can (as are, for some strange reason, examples from things Chinese - e.g., Block's "Chinese Nation" or Searle's "Chinese Room"). Thus, goes the argument, in all its forms, the ineradicable gulf between mental states and physical processes.

Like many things that won't go away, there's something at once both superficial (the reason we'd like it to go away) and deep (the reason it won't) about this reasoning. On a superficial level, it sometimes seems like this kind of argument is just some sort of level mistake, like someone saying they followed every process occurring in the City Hall, but never encountered the city government itself - hence, city government must be either an illusion or an "epiphenomenon". But no, obviously, it's simply an abstraction, a way of grouping a set of concrete activities and entities. The problem with trying to explain away dualism in the same way, though, is that the situation is somehow reversed: unlike a city government, the mental states under investigation seem to be the very essence of concrete, primary, irreducible experience, and are the raw material out of which any concept of "physical process" must be made. And this is only part of what makes the issue a deep one - there are also matters involving the very nature of "explanation" and of "awareness".

For this and other reasons, qualia remain at the crux of the issue of consciousness itself. Still, the clumsiness of movie zombies ought to give us a clue about the functional efficacy of an "inner life".

On applying the right intuition

One very important source of confusion in this area, I find, is the erasure of the distinction between consciousness per se, and what might be called "linguistic consciousness", or what I refer to below as self-consciousness. I do believe that self-consciousness can only arise as a result of language -- in large part because the notion of a "self" is a creation of language -- but that's no doubt an issue for another time. The important point here is simply that there can be awareness - qualia, feels, experience, etc. - without necessarily language or even "thoughts", and certainly without awareness of the awareness. Of course, whenever we think about this kind of thing, we're necessarily being self-aware -- i.e., meta-aware -- and so there's an understandable tendency to merge these two quite distinct levels or even kinds of awareness -- but we should resist that. One technique for helping to do so, and keep our intuitions focused on the right level, is to imagine the world from an animal's perspective, say a dog or a cat. (There may be some who would deny conscious experience to animals, but if so the best argument against them would be to recommend they get a pet.)

An exercise in Applied Philosophy

-- which is how I've described this project elsewhere.

If it were actually implemented, I think, as I say in the Prefatory note, that it might provide us with a concrete platform with which to finally go beyond thought-experiments and actually try out some conjectures. My idea, in other words, is not so much to build functioning robots, but to use this project, with its concrete, practical focus, as a means to help clarify some of the very confused and confusing notions that swirl around this whole area. For this reason, there will likely be an ongoing focus in this blog on philosophical issues, and on how they're affected by even a proposal for a project of this sort. So the project, even without implementation, might function as a kind of extended thought-experiment.


Tuesday, January 13, 2004

Project Proposal for a Synthetic Mind: Prefatory

[NOTE: if you got here from a link like this: http://mindsif.blogspot.com , then here's the new blogger URL: http://syntheticmind.blogspot.com/ ]


Is it possible to create a mind? That is, a machine that is actually aware (though not necessarily self-aware), as opposed to one that merely imitates the behavior of aware organisms? Part of the difficulty inherent in this question is arriving at some criteria for success. "Consciousness" is an elusive phenomenon in good part because we lack observable access to it, outside of our own minds -- that is, it seems to be inherently non-objective, which is often thought to be a disqualification for scientific study. Yet few people, including scientists, would be willing to doubt that it exists.

The following series of notes makes the claim that conscious awareness is indeed a real phenomenon, occurring in the same physical reality as other phenomena, and made up of the same "stuff" as the rest of the world. Moreover, since consciousness is a function of structure, not of materials, it should be possible to produce a system that manifested an artificial or synthetic consciousness by reproducing that structure in hardware and software. Though ultimate criteria for success will still be hard to come by, such a construction would be able to offer a testbed for a variety of investigative strategies by which such criteria might eventually be formulated. These notes, of course, are merely descriptive, and a suggestion of a starting point.


First, some clarifications


  • "Consciousness", as used here, is not the same as "self-awareness", but rather refers simply to "awareness", and is assumed, in some degree, to be a likely characteristic of all organisms beyond a minimal level of neurological complexity; self-awareness or "self-consciousness" is a different and more complex phenomenon that is, as far as we know, found only in human beings, and only from the time they're able to speak – it's beyond the scope of this outline, but will be addressed in a brief note at the end.
  • The term "synthetic consciousness" is used rather than the more expected "artificial consciousness" in order to avoid association with attempts simply to simulate or mimic apparently conscious behavior – this project, instead, aims to reproduce the phenomenon of "awareness" itself.

Design for a mind

Start off with a design for an organism:
  • Situate it in an "environment"
  • Give it -- or define it as -- a "body"
  • Provide the body with a number of modes of input (fuel and signals) and output (waste and behaviors)
So far, this fits perfectly the behaviourist model, in which the organism is simply a black box. But, since we're trying to design the organism, we need to design that box itself -- so we need to come up with the processing that will derive behaviors from signals (ignoring fuel and waste). Call this the Control System (or CS).

Behavior as reflex

The simplest strategy in designing this CS would be to come up with a table that lists all possible, or at least all relevant, signals in one column and matches them with the appropriate behavior in the next column – the CS as a table look-up, and all behavior as simple reflex. Would this work?


Not for anything other than the simplest of organisms. Because, among other things, the import of the signal needs to be able to vary considerably depending upon its context, and this context is not just the immediate environment, or, rather, the complete set of signals being received moment by moment, but also the state of the organism itself, including "memories", "expectations", and "purposes" (the quotes being just a reminder that these terms are suggestive place-holders at best at this point).
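The table look-up CS, and its failure mode, might be sketched like this (the signals, behaviors, and toy context logic are invented for illustration):

```python
# The CS as pure table look-up: every signal keyed directly to a behavior.
REFLEX_TABLE = {
    "light": "move_away",
    "food": "approach",
    "touch": "withdraw",
}

def reflex_cs(signal):
    return REFLEX_TABLE[signal]

assert reflex_cs("food") == "approach"

# The failure mode described above: the right behavior depends on context --
# concurrent signals plus internal state -- which a flat table cannot encode
# without a row for every combination (a combinatorial explosion).
def context_sensitive_cs(signal, other_signals, internal_state):
    if signal == "food" and "light" in other_signals:
        return "wait"                      # danger overrides appetite
    if signal == "food" and internal_state.get("sated"):
        return "ignore"
    return REFLEX_TABLE[signal]

assert context_sensitive_cs("food", {"light"}, {}) == "wait"
assert context_sensitive_cs("food", set(), {"sated": True}) == "ignore"
```

Even this tiny example needs special-case rules the moment context matters; scaling the flat table to cover every combination of signals, memories, and purposes is exactly what fails.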

Simple learned response


So the next strategy is to allow for a more complex CS that will be able to process the signals as they're received, and will perhaps allow for at least learned responses. But this, just as such, is difficult, since every new signal (and these are constant) requires the CS to have to recalculate or recalibrate all of the other signals impinging upon it, together with the rapidly shifting internal state of the organism, in order to decide upon a single action. The problem is that the CS is still, in a sense, wired directly into environmental stimuli, and lacks a holistic view that would provide a basis for better decision-making. It's clearly more sophisticated than the simple table look-up, but still a control system with limited flexibility.

The consciousness strategy

And so we come to the consciousness strategy: divide the problem in two, by splitting the CS into two sub-systems, loosely coupled. On the one hand, let us have a portion of the organism given over to the task of constructing a unified so-called "world" out of all the signals from each of the organism's signal channels (e.g., sight and hearing), but freed from the task of deciding upon a particular behavior or response – call this the "World Making System" or WMS. On the other hand, there is another portion of the control system that has the job of determining action, and takes selective portions of the prefabricated world as one of the inputs to that determination -- call this the "Behavior Determining System" or BDS. The loose coupling between the two systems -- "loose" because of the variety of inputs and factors that eventually result in behavior -- is described in the phenomenon of "attention", which can be focused in various directions, under algorithmic control.

And with this we have something that can rightly be called a "mind", whether implemented in cellular agglomerations, or silicon based circuits, or something else entirely. This is the defining structure of consciousness, and its functional advantage as a control system resides entirely in the flexibility gained through the loose coupling between its component subsystems.
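A minimal sketch of the two-subsystem CS, with attention as the loose coupling; the class names follow the text's WMS/BDS labels, but the toy decision logic is invented:

```python
# A World Making System that fuses all channels into one "world", and a
# Behavior Determining System that reads selected portions of it via attention.

class WorldMakingSystem:
    def build_world(self, channel_signals):
        # Unify per-channel signals into a single structure; no decisions here.
        return {channel: signal for channel, signal in channel_signals.items()}

class BehaviorDeterminingSystem:
    def __init__(self):
        self.attention = None   # loose coupling: focus is set, not hardwired

    def attend(self, channel):
        self.attention = channel

    def decide(self, world):
        focus = world.get(self.attention)
        if focus == "predator":
            return "flee"
        if focus == "food":
            return "approach"
        return "explore"

wms, bds = WorldMakingSystem(), BehaviorDeterminingSystem()
world = wms.build_world({"sight": "food", "hearing": "predator"})

bds.attend("sight")
assert bds.decide(world) == "approach"
bds.attend("hearing")               # same world, different focus, new behavior
assert bds.decide(world) == "flee"
```

The flexibility claimed in the paragraph above shows up in the last four lines: the identical world yields different behavior depending only on where attention is directed.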

The "world"

The key to this peculiar structure is that deceptively simple term "world". This is meant to be understood here as purely a construct, internal to the organism itself, a product of a process that unifies the various signals impinging on it, which usually arise as inputs from its environment -- but it's something quite distinct from that environment (or from "reality", whatever that might be taken to mean).

What is that world made out of? We can list some of its phenomenal characteristics:
  • Before anything else, the world is defined as a space, centered upon the body of the organism.
  • Each sensory channel defines a qualitatively unique set of qualitative states, and each channel wraps the entire world like a sheet or fabric; thus, multiple sensory channels are redundant in terms of the world's "totality" -- they add information, but not completion.
  • Signals arriving to consciousness are then mapped to this fabric in such a way that they are given a location in the predefined space of the world, and in this way the signals on various channels can be spatially related to one another (the absence of a signal is itself just a part -- possibly a significant part -- of the fabric)
  • Signals on the same channel may be qualitatively distinct, but they can also be blended or merged with one another, to form a kind of spectrum; signals on different channels are inherently distinct.
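The fabric described in this list might be sketched as a data structure -- a 1-D space for brevity, with the channel names and qualities invented for illustration:

```python
# A toy version of the "world": a space centered on the body, each sensory
# channel a full sheet over that space, and signals mapped to locations.

LOCATIONS = range(-2, 3)          # space centered on the body at 0
CHANNELS = ("sight", "hearing")

def empty_world():
    # Every channel wraps the whole space; absence of signal is part of the fabric.
    return {ch: {loc: None for loc in LOCATIONS} for ch in CHANNELS}

def map_signal(world, channel, location, quality):
    world[channel][location] = quality

world = empty_world()
map_signal(world, "sight", -1, "red")
map_signal(world, "hearing", -1, "rustle")

# Signals on different channels are spatially related through the shared space:
assert world["sight"][-1] == "red" and world["hearing"][-1] == "rustle"
# Absence of signal is itself just part of the fabric:
assert world["sight"][2] is None
```

Note that each channel's sheet exists in full even where nothing is mapped to it -- the sense in which extra channels add information but not completion.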


Attention

Attention is the key to Consciousness as a control system -- it's the phenomenon that provides the so-called "loose coupling" between the BDS and this fabricated world. The BDS must be able to have "passive" access to the entire world (i.e., signals from the world must be able to interrupt it) at the same time as having "active" or focused access to a portion of that world. But there are other inputs to the BDS than just the world at any one moment:
  • such as memories, or stored fragments of the world,
  • anticipations (imaginations), or constructed fragments of the world,
  • emotions, or conditional motivators,
  • drives, or inherent ("hardwired"), long-term motivators,
  • purposes, or immediate motivations.


These go together to determine the BDS focus or attention at any one moment, and that attention in turn provides information that, in the specific context, determines behavior.
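These inputs to attention might be sketched as a simple salience function; the scoring scheme, signals, and weights are invented purely for illustration:

```python
# Attention as the list above describes it: memories, drives, and purposes
# jointly bias which part of the world the BDS focuses on, while any channel
# can still interrupt "passively".

def choose_focus(world, memories, drives, purposes, interrupt=None):
    if interrupt in world:           # passive access: the world can break in
        return interrupt
    scores = {}
    for channel, signal in world.items():
        score = 0
        score += memories.get(signal, 0)    # past significance of this signal
        score += drives.get(signal, 0)      # hardwired long-term motivators
        score += purposes.get(signal, 0)    # immediate goals
        scores[channel] = score
    return max(scores, key=scores.get)

world = {"sight": "berry", "hearing": "rustle"}
drives = {"berry": 2}                   # hunger biases toward food
memories = {"rustle": 1}                # rustling once meant a predator
purposes = {}

assert choose_focus(world, memories, drives, purposes) == "sight"
assert choose_focus(world, memories, drives, purposes, interrupt="hearing") == "hearing"
```

The same world yields a different focus as the motivational inputs shift -- which is all the "loose coupling" between world and behavior amounts to in this sketch.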

But exactly what kind of access is this "attention"? In what way is the BDS able to "apprehend" the world without being wired directly into it? The answer can only be that it apprehends ("perceives", "feels", or just receives as input) the signals mapped onto sensory channels as so-called "qualia". Just as attention is the key to consciousness, so qualia are the key to attention.


Qualia

Qualia -- "raw feels", the redness of red, the taste of the world -- have been the real basis for the idea of "mind" through the whole of the modern era (as opposed to the conceptually easier notion of intelligence). It's qualia that philosophers have repeatedly either foundered upon or clung to as irreducible evidence of the radical, ontological split between the mental and the physical. It's not hard to imagine, for example, even something as simple as a worm having some sort of "feeling", however rudimentary -- but it's nearly impossible to imagine even the most sophisticated silicon-based machine having any more feeling than a brick.


Notice, though, that the problem isn't just with machines, but with neuronal implementations of mind as well: just as we can't find smell or sound as we examine circuits and logic-gates, so we seem to miss colours or textures as we dissect the axons and dendrites of the brain. It's as though there's a whole other level that we seem unable to access, and this has led to weak, puzzled notions of some sort of superfluous, "epiphenomenal" parallel functioning, whereby mental events simply run in tandem with brain events. (And then into further mystifications surrounding the "self" as some sort of purely mental entity trapped inside a causal mechanism, and able to exercise "free will", if at all, only by taking advantage of some quantum loophole, etc.)


Looked at in another way, though, it's difficult to know what we really expected to find -- some kind of personal theater in the brain? How, other than through access to qualitative difference, would the phenomenon of attention be possible at all? Any system in which this phenomenon can be manifested, regardless of how it's made or what it's made of, necessarily must have access to such differences, or "qualia".


So the intuition that a worm may have feeling but a super-computer may not isn't necessarily wrong, though it is in the end misleading. The point, simply, is that "feeling" doesn't arise from complexity per se (and even less from some mysterious properties of carbon chemistry or quantum physics), but from the particular two-part structure of world and actor outlined above -- a worm may very well possess such a structure, a computer may very well not. But for any such structure, regardless of substrate, qualia are an inescapable feature. In this sense, qualia are functional, not epiphenomenal, and a part of the same physical world as bricks, neurons, and transistors. The "mental" is simply a structural level within a particular kind of physical structure.


UPDATE Jan 20/04:
This section is not well explained. See the entry above, on "Qualia: primordial information tokens" for a more fleshed-out argument for the logical necessity of qualia to a system like consciousness.


Further UPDATE Feb 4/04:

Still more explanation of the "hard problem", the central mystery, that everyday enigma -- qualia: Qualia again ("blue is not a number"), The privacy of consciousness, and Bracketing qualia. But really much of the blog so far is taken up with this stubbornly resistant issue, in one way or another.

In Summary


Consciousness is a loosely coupled binary control system, the two components of which are a World Making System (WMS) and a Behavior Determining System (BDS), with "attention" being the process that loosely couples them ("loose" in the sense that other factors and contexts are also involved in determining the operation of the two systems). The system or "organism" that Consciousness controls receives signals, on a variety of signal channels (i.e., "senses"), from its environment, and transmits those to Consciousness. The WMS component of Consciousness receives those signals, modulates them by mapping them to prefabricated "spectra", one for each channel – giving rise to so-called "qualia" – and relates the different spectra to one another by relating them all to a generated, common "space" centered upon the body of the organism – this is the "world" of a conscious system. The BDS component, under the influence of innate or hard-wired motivations or drives, forms particular goals or "purposes", and these in turn govern "attention" – a heightened awareness of, or focus on, a part of that generated world, serving as an input to a decision-making process. Other inputs are memory, anticipation or imagination, and emotion, and all of this together determines action or behavior on a moment-by-moment basis.
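The whole summary can be caricatured as a single control loop -- a minimal sketch under the assumption of dictionary-valued worlds and a single active purpose, with every function and name below being illustrative, not a proposed implementation:

```python
# Minimal caricature of the summary loop: WMS maps raw channel signals into
# a common world; attention selects a focus under a purpose; the BDS turns
# what appears at that focus into behavior. All names are placeholders.

def wms(raw_signals):
    """Map each (channel, location, value) signal into a common space,
    so signals on different channels can be related to one another."""
    world = {}
    for channel, location, value in raw_signals:
        world.setdefault(location, {})[channel] = value  # qualia as mapped signals
    return world

def bds(world, purpose):
    """Attention: focus on the location most relevant to the current purpose,
    then decide a behavior from what appears there."""
    focus = purpose if purpose in world else max(world, key=lambda loc: len(world[loc]))
    percept = world[focus]
    return ("approach", focus) if percept.get("vision", 0) > 0.5 else ("ignore", focus)

raw = [("vision", "food", 0.9), ("smell", "food", 0.4), ("hearing", "noise", 0.7)]
action = bds(wms(raw), purpose="food")
print(action)  # ("approach", "food")
```

Note what the caricature deliberately preserves: the BDS never touches the raw signals, only the generated world -- the loose coupling that the summary describes.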

A note on self-consciousness

This is yet another layer or level of complexity within a conscious system, with some of the following characteristics:
  • It is generated by, or is an aspect of, the learning of "language", or a token-based communication system, which is also the basis of so-called "cultural" systems of social organization – self-consciousness in the sense used here would simply not exist in non-language-using organisms.
  • The "self" so generated – the "I" or the "me" – is thus an inherently social or cultural phenomenon, and might be considered to be the representative of the larger cultural group within the individual consciousness. (This is the answer, by the way, to the worry – see Sartre or Wittgenstein – over solipsism: the very idea of an "I" implies a "you".)
  • With the formation of this "self" there arises a kind of meta-conscious level which is itself only loosely coupled to the whole of Consciousness as a control system – the control system is now extended out beyond the individual to the culture, operating within the individual through such mechanisms as guilt, praise, etc. (forming sub-systems such as "conscience", "ideals", the Freudian super-ego, etc.)
  • And there also arises the potential for this sort of meta-conscious formation to be repeated, as the self can examine its self, forming another, meta-meta-conscious level – the limits of this process being simply the practical ones of diminishing returns and limited processing capabilities.