Tuesday 18 February 2014

Thought Experiment - what is the mystery of consciousness?

Zoom right in.  Here are atoms, familiar particles, nucleus, electrons.  Ignore the nucleus - all the action is in the electrons, binding and flitting between atoms.  There is much strange here, but nothing we haven't seen before.  It all follows quantum mechanics, reality matching our calculations to parts in billions.  These atoms could be anywhere, from a droplet of water to the heart of a star.

Zoom out.  Now we see large molecules, the information-rich structure of which reveals that we are looking at biology.  Even so, there are no new laws, no new particles or forces.  It's the domain where the quantum fades into the classical Newtonian.  We can use simpler models to understand what's going on, because quantum effects smear out; the throw of each individual quantum die has little effect.  Normal physics applies, as quantum electrodynamics averages out to become familiar chemistry.

Zoom out.  Now we see cells.  Cells with tens of thousands of connections.  The connections tell us that we are within a brain, as nowhere else in biology do cells reach out and touch so many others; a hundred billion neurons, each only a small number of cell-steps from any other.  The cell count shows us it's an intelligent brain: human, but it need not be - a young whale has as much volume of brain as a human, and an adult considerably more.  But this is a human brain, linked to human biology.  Still, no surprises for the physicist, the chemist or the biologist; indeed the physicist sees only common atoms and electrons doing what atoms and electrons do throughout our world.  To the physicist, a brain is no more interesting than a lump of rock.  Cause and effect flow smoothly from atom to atom, electromagnetism and gravity the only forces that respond to causes and have effects that matter.

Zoom out.  The pink jelly of a brain, safe within a tough skull.  Holes in that skull allow nerves to spread out from the brain and touch the whole body.  Anywhere can send signals to the brain, anywhere can feel pain (except the brain itself!).  Some of the nerves control muscles, allowing movement.  Watch.  Some of those nerves are firing now.  Nerves to the face, the lungs, the lips, the mouth, the vocal cords.

Listen.  The firing of nerves leads to a voice, which says "the mystery of consciousness".

This is the paradox - the reason why we say that consciousness is mysterious is utterly lacking in mystery.

So what, exactly, is the mystery?

14 comments:

Anonymous said...

Steve,
I don't expect this to be news for you, but for the benefit of the casual reader, and to see if I can keep it short and sweet, I'll try an answer.

What you describe in your "zoom" paragraphs is usually called the "easy problem". And it's not easy at all: we don't know how neurons encode information; we don't even know if there are many different codes within each brain, so we don't have a clue how information is processed. In turn, we don't know what kind of processes create what we generally call "consciousness", or, as I like to define it, "the ability to know who I am, what I'm doing, and for what purpose".
To make matters worse, studying the neuronal mechanisms that generate consciousness is particularly complicated:
a) we don't have an accepted reductionist/conceptual model of what consciousness does.
b) it's hard to experimentally separate whatever activity is due to consciousness from all the rest.
See http://www.edge.org/response-detail/25457 for a short example.

Then, there is the "Hard Problem", as isolated and dissected by David Chalmers. Your post suggests that you don't see it, and I guess you don't because you are looking for a scientific/reductionist problem, while the hard problem is mostly philosophical. I conceptualise it as follows: once we understand the computations, how can we explain that some particular data manipulation generates a first-person perspective? You take some numbers, you change them, and voilà, this manipulation of numbers gives the calculator pleasure, or pain, or attracts its interest, or...
Chalmers' point is that even when we know everything about the number-crunching, we'll still need an explanation of how data handling generates first-person experience. And he's right about the need for such an explanation. I would not call it a hard problem, but that's another story.

Steve Zara said...

What I am trying to illustrate in this post is one of the major difficulties for anyone who takes the view that there is a Hard Problem, which is how the existence of such a problem fits with what we know to be true from science.

Chalmers' well-known position that there will be something missing from our explanations when all the analysis of neuronal activity has been done is hugely question-begging, because he has never shown exactly where the explanatory gap actually is!

The key to all this is the matter of knowledge. Where does our belief that there really is a Hard Problem come from? Given that science says we can get a complete neuronal explanation for that belief, what exactly is missing?

Anonymous said...

I was trying to highlight how the Hard Problem makes little sense from a strictly scientific perspective, but does make sense from a philosophical one.

I think the key is 1st person vs. 3rd person, or "objective", epistemologies. The latter is an oddball, as it has to rely on the first just to exist, yet requires the assumption that it doesn't.

In our case, science probably doesn't need to address the hard problem (HP), but this won't help anyone who does think the HP exists. If they think it does (1st person), it does, and they will look for a bridge to cross the explanatory gap.
If you as a scientist don't feel there is any gap (or are unable to see it), and are entirely satisfied with the "easy explanation" (the 3rd person account), this fact alone won't change the needs of the HP-believer.
In one sense, the HP is self-referential: it needs to be addressed because it is recognised as a genuine problem by at least some people. In my own case, I do think it needs an explanation, specifically because if/when the "easy explanation" is ready, science risks becoming even more alien to anyone who can't find the 3rd person account exhaustive. And we would end up with even more runaway nonsense: quantum woo linked to magical free will, information integration that collapses wave functions, an Internet more conscious than humans, and who knows what else.
Another way to describe the situation is that there is an empirical question, "what input-driven computations can generate a first person point of view?", and a philosophical question, "how can input-driven computations generate a first person point of view?". Unsurprisingly, many people think that the philosophical question needs to be addressed before the empirical one. You and I think that answering the empirical question will necessarily make the second one go away (or at least, make it easily addressed).

For the time being, since the 3rd person account is not even in sight, I can accept that people may believe that solving the "easy problem" will not help solve the hard one, but I'm placing strong bets that this isn't the case (that's the other story, not ready to tell it, I'm afraid).
I am deeply disturbed by the amount of absurdities that are used to fill the gap, but I can see that they are somewhat unavoidable at the moment.

What is needed is a philosophical account of how computation can generate a 1st person point of view (and consequently generate a self-defining "self"). In your Twitter feed you said "I believe that complex systems which can self-model are conscious", and I think you are on the right track: self-modelling is a required ingredient for generating a first-person point of view (though not the only one). Also, because you share this belief with me, the Hard Problem stops being a problem (or a hard one), because you are already aware that it's possible for computations to generate a first-person point of view using some sort of "sensory" input (to do so, you need to model the self within the environment), but:
1) we still don't know if that is indeed what's going on, because the easy problem still stands.
2) because of 1) it is still 100% legitimate and intellectually honest to look for a different explanatory bridge. Especially if you happen to disagree with the belief you and I share.
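To make the self-modelling idea a little more concrete, here is a toy sketch (purely illustrative; the class and its methods are invented for this comment, and nothing here is meant as a claim about neuroscience): an agent whose internal map of the environment contains an entry for the agent itself, so that anything it "knows" about its own state is known only via that model.

```python
# Illustrative sketch only: an agent whose world-model includes itself.
# All names are hypothetical; this is a cartoon of self-modelling, nothing more.

class SelfModellingAgent:
    def __init__(self, name):
        self.name = name
        self.world_model = {}  # the agent's internal map of its environment

    def sense(self, entity, state):
        # Incorporate "sensory" input about some entity into the model.
        self.world_model[entity] = state

    def act(self, new_state):
        # Acting changes the agent's actual state...
        self.state = new_state
        # ...and a *self-modelling* agent also updates the entry it keeps
        # about itself, alongside its entries for everything else.
        self.sense(self.name, new_state)

    def report(self):
        # The agent can answer questions about itself only via its own model.
        return self.world_model.get(self.name)

agent = SelfModellingAgent("me")
agent.sense("door", "open")
agent.act("walking")
print(agent.report())  # -> walking
```

The point of the cartoon is only that the self appears in the model on the same footing as the door: whatever first-person perspective amounts to, on this view it is built from the system modelling itself within its environment.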

The difference between you and me seems to be that I can see how not believing that "it's possible for computations to generate a first-person point of view" justifies the HP. Or maybe you don't see how it is possible not to believe the above. I don't know, and shouldn't attribute thoughts to your mind, so I'll stop here.
Hope some of the above makes some sense to you!

Frank O'Dwyer said...

"science risks to become even more alien to anyone that can't find the 3rd person account exhaustive"

Well, it's a HP because the 3rd person account *isn't* exhaustive, not really.

It's easy to imagine a similar account where, instead of zooming out to reveal a biological system, you zoom out to reveal an animatronic system. It, too, could speak the words "the mystery of consciousness", but (by construction) it wouldn't understand what it was saying.

Also missing from the 3rd person account is the fact that the 3rd person is *also* a first person. If the observer behind the camera had no concept of self, it would still be possible for it to observe the same facts and tell the same story. It would be able to give the answer, but it wouldn't understand the question, nor would it understand why it was a HP either.

It's also interesting to note that the 1st person account of consciousness is in many regards the story of an unshakeable conviction that things are true which, according to the third person account, are demonstrably not true - i.e. the self is largely an illusion and you are largely a character in a story made up by your brain.

Steve Zara said...

In order to understand what science says about consciousness, it isn't necessary to have anything like a complete description of how consciousness arises. What matters is that science has revealed all the components of the system (the brain) that is having thoughts and beliefs about consciousness. When it comes to understanding why we say that there is a mystery about consciousness, we know that there is a complete scientific epistemology. We know that it exists because we know that there is a complete causal system that results in beliefs in the brain. This is utterly uncontroversial science.

Given that such a complete epistemology exists in principle (due to physics and causality), this means that it's hard to see how any actual Hard Problem-type mystery can make sense, because we know all the aspects of consciousness that lead to the Hard Problem have a physical representation. I'm going to have to blog more about this...

Frank O'Dwyer said...

When I say that the 3rd person account is incomplete I don't mean that it fails to explain how consciousness arises, nor that its explanation is incomplete, rather I mean that it fails to even describe it - 'it' being the visceral sense of 1st person-ness that I and (I assume) you have.

The fact that the 'it' that has these feelings has a physical description and we can say a great deal about cause and effect does not address this, IMO. So of course with an account like that, it doesn't seem mysterious, because it leaves the mysterious part out. A description of thermodynamics is not a description of the feeling of the sun warming your bones. Nor is it an explanation of why there is a 'you' to feel that, as opposed to a sophisticated robot experiencing a temperature increase.

Sure we can describe the 'it' having the experience as something like a goal seeking agent which is computing a model of self and environment, and seeks to ensure its own (or its genes) survival, etc. But when it comes to consciousness, so what? We can imagine and describe similar systems which are not or might not be conscious - an ant colony, say, or a robot.

I would say we even have some access to this, since we do not just model ourselves, we have models of other people. However we do not have the same visceral sense of *being* other people when we try to predict what they'll do or empathise with them. We do not even have it when we think of people such as siblings or offspring (where we have more genetic interest). Why then do we have it when the subject is ourselves?

Steve Zara said...

I agree that there are questions about the "visceral feeling" - indeed, no matter how much information and explanation we may get from neuroscience, it may never be enough to give us the impression that that visceral feeling is explained.

My question is - what actually is there left to explain, if neuroscience can, in principle, even explain why we report having a visceral feeling? What does the explanatory gap actually look like?

Frank O'Dwyer said...

Well, one part of the mystery is whether neuroscience is really where the explanation will come from, or whether biological neurons are just one possible implementation of consciousness.

What would happen if each of our neurons were progressively replaced with a digital equivalent, for example? Is there a point at which we would not be conscious or is it just the computation that matters?

Think of a virtual machine in computing - if we had a reductionist description of its underlying implementation (the Intel architecture it happened to run on, transistors, etc.), would we really gain the higher-order insight that the *virtual* machine could run on something very different (such as an ARM chip, a network, or a room full of people making calculations with pen and paper) and still be fundamentally the same thing? Would we even notice the virtual machine at all? I think this is far from obvious.
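The virtual-machine point can be made concrete with a toy sketch (purely illustrative; the instruction set here is invented): a tiny stack machine whose behaviour is fixed entirely by its program, so the same program gives the same answer whatever substrate executes the rules.

```python
# A toy illustration of substrate-independence: a tiny "virtual machine"
# whose behaviour is determined by its program, not by the hardware underneath.
# Whether the host is an Intel chip, an ARM chip, or a room full of people
# following these rules with pen and paper, the same program yields the same result.

def run(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4, on any conceivable implementation of these rules
program = [("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]
print(run(program))  # -> 20
```

Nothing about the answer depends on what `run` itself is made of; the question in this thread is whether consciousness is like that program, or whether it is tied to one particular substrate.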

If we had a (rather large :-) room full of machines calculating the same inputs and outputs as the neurons inside a conscious being's skull, would that room be conscious too? Or is there something special about biological neurons, and biological neurons only, that results in consciousness? If so, what and why?

I think questions like these are certainly of interest, and are certainly mysterious and hard, at least so far.

Steve Zara said...

There is a great book called 'The Mind's I' by Dennett and Hofstadter which has a variety of thought experiments about such things.

Frank O'Dwyer said...

Thanks for the recommendation - I'd heard of the book but never read it. I really enjoy Dennett's stuff on this.

I guess all I am saying is that I am leery of a wholly reductionist explanation, as it seems to me it can miss higher-order questions and in turn blind us to higher-order answers.

Also, I think that language itself stacks the deck in favour of the reductionist, to the extent that anyone trying to articulate a more holistic intuition will always sound like a mystic and/or a babbling idiot. (Of course, many of them are :-)

What if, for example, the sense of 1st person-ness that we find somehow mysterious and seek to explain is just another unshakeable conviction, which turns out to be shakeable after all? What if the deep sense of 'self' and 'other' that we all(?) have is akin to an optical illusion rather than anything real? It seems to me that there are compelling reasons to think this may be the case, and far from contradicting the scientific or neuroscientific knowledge we have, it is rather suggested by it. If that were true, it would indeed take 'the mystery of consciousness' off the table, but it would replace it with an even worse can of worms :-)

I can elaborate if you like, giving examples from scientific results, but this is already quite long. In any case thanks for the very well written and thought provoking piece.

Steve Zara said...

I do suspect that the sense of person-ness is something like an illusion, but not quite, and I find it hard to know what could be missing.

Thank you for the conversation.

Anonymous said...

Frank:
I wish to add my thanks as well. You managed to explain my point much better than I did.
However, "a wholly reductionist explanation [...] can miss higher order questions and in turn blind us to higher order answers" is a somewhat dangerous way of putting it.
It scares me because it provides ammo to those who say that science should keep its dirty hands off "sacred" topics such as the mystery of consciousness, but also ethics, aesthetics, art and so forth. Or, if you wish, the really interesting things about humans. And that's a pity, because this kind of position in turn validates non-rigorous ways of creating pseudo-knowledge about the contended subjects (e.g. it gives undue legitimacy to notions/beliefs that look solid but are ultimately based on preconceived and untested assumptions).
I'm not criticising, though. I think your doubts about reductionism are legitimate; I just wanted to attach a little warning to them.

Steve Zara said...

I'm not really sure what a 'wholly reductionist explanation' means. I have not come across anyone who suggests that all we need to do to understand humanity is look at the operation of brain cells. The way I look at things is like this: There are many stories that can be told about something that happens in the human mind. One story might involve concepts such as art and emotion, and another story might involve patterns of firing and connections of brain cells. Both stories are true, but one story may be useless at helping us to understand what is going on, depending on what we want to understand. I have seen some reactions to reductionism which insist that there is some extra reality that is involved in the story that involves art and emotion. There isn't. There is no more reality present than in the story of brain cells or a story of particles. It's just that you aren't going to be able to find intelligible meaning in those more fine-grained descriptions.