3/20/2005

Body & Soul I

I just finished a course at UCI on cognitive and computational neuroscience. The basic project of the field is to study the brain, learn how it works, and eventually build a computational model of it – a computer that thinks, sees, understands language, etc. The underlying assumptions are very naturalistic, of course. If matter is all there is and we’re just computers made of meat, then artificial intelligence must be possible1. If it can be done with neurons, it can be done with circuits.

All of this made me wonder about exactly how much of the computational neuroscience project is possible, given the existence of the mind/soul. By “soul”, I mean the non-material portion of human persons (or perhaps of all living organisms), whatever that might be. Mind is a little more specific – the non-material seat of intelligence, creativity, etc. Since I just took a course in neuroscience, many of my musings are focused on the human mind in particular, but what I’m really after is broader than that. If it is impossible to “build” a human, what about a dog or a cat? A housefly? If I seem to use soul and mind interchangeably, or jump back and forth between talking about humans and animals, this is why.

What exactly does the soul “do”, and who/what has it? Presumably, at least some of the soul’s properties and functions are irreducible to physical brain states or machine instructions. So an understanding of the soul will place boundaries on the achievability of the aims of neuroscience. What I would like is a falsifiable claim or set of claims: I want to know precisely what computational neuroscience is incapable of producing.

A distinction ought to be drawn here between observable behavior and essence. Creating a robot whose actions are for all practical purposes indistinguishable from human actions is not the same as recreating the human mind in software. Consider Deep Blue, the IBM chess computer. If you were shown a complete chess game between Deep Blue and Garry Kasparov, could you tell which side was Deep Blue and which was Kasparov (questions of playing style aside)? I doubt it. However, Deep Blue does not really understand how to play chess – not in any meaningful sense. It just processes a very complex set of rules to determine where to move. Knowledge, comprehension, understanding – these are properties of minds, not rule-based systems. Following rules and knowing how to play chess are two different things.
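
To make the point concrete, here is a toy sketch of what “processing rules to determine where to move” looks like: score each position with a fixed material-counting rule and search a few moves ahead. It assumes the third-party python-chess library purely for the board mechanics, and it is nothing like Deep Blue’s actual program – the only point is that a perfectly legal, competent-looking move can fall out of blind rule application.

    # A toy illustration of purely rule-based move selection, in the spirit of
    # (though enormously simpler than) Deep Blue: score positions with a fixed
    # material-counting rule and search a few moves ahead. Uses the third-party
    # python-chess library (pip install chess) only for the board mechanics.
    import chess

    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def material(board):
        # The entire "evaluation" rule: white's material minus black's.
        score = 0
        for piece in board.piece_map().values():
            value = PIECE_VALUES[piece.piece_type]
            score += value if piece.color == chess.WHITE else -value
        return score

    def minimax(board, depth):
        # Try every legal move, recurse, and keep whichever move the rule
        # scores best for the side to move. No chess "knowledge" is involved.
        if depth == 0 or board.is_game_over():
            return material(board), None
        best_score, best_move = None, None
        for move in board.legal_moves:
            board.push(move)
            score, _ = minimax(board, depth - 1)
            board.pop()
            if (best_score is None
                    or (board.turn == chess.WHITE and score > best_score)
                    or (board.turn == chess.BLACK and score < best_score)):
                best_score, best_move = score, move
        return best_score, best_move

    print(minimax(chess.Board(), depth=3))  # a legal move, chosen without comprehension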

Another example is John Searle’s Chinese room argument:
Imagine that you are locked in a room, and in this room are several baskets full of Chinese symbols. Imagine that you (like me) do not understand a word of Chinese, but that you are given a rule book in English for manipulating the Chinese symbols. The rules specify the manipulations of symbols purely formally, in terms of their syntax, not their semantics. So the rule might say: ‘Take a squiggle-squiggle out of basket number one and put it next to a squoggle-squoggle sign from basket number two.’ Now suppose that some other Chinese symbols are passed into the room, and that you are given further rules for passing back Chinese symbols out of the room. Suppose that unknown to you the symbols passed into the room are called “questions” by the people outside the room, and the symbols you pass back out of the room are called “answers to the questions.” Suppose, furthermore, that the programmers are so good at designing the programs and that you are so good at manipulating the symbols, that very soon your answers are indistinguishable from those of a native Chinese speaker.2

It should be intuitively obvious that there is a difference (though not an observable one) between this man and a native Chinese speaker – the native Chinese speaker understands Chinese. There are, then, two questions on the table: (1) “is it possible to create the equivalent of a human (or animal) mind within a physical system?”, and (2) “is it possible to mimic human (or animal) behavior, with a reasonably low margin of error, within a rule-based system?”3
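
The kind of purely syntactic rule-following Searle has in mind is easy to sketch in code: map incoming strings of symbols to outgoing strings of symbols by lookup alone. The “rule book” entries below are invented for illustration; the point is that at no step does the program attach meaning to any of the symbols it handles.

    # A minimal sketch of Searle-style syntactic rule-following: incoming
    # symbol strings are mapped to outgoing symbol strings by lookup alone.
    # The "rule book" entries are invented for illustration; nothing here
    # attaches meaning to any symbol.
    RULE_BOOK = {
        "你好吗?": "我很好, 谢谢.",          # an input "question" paired with an output "answer"
        "你叫什么名字?": "我叫王小明.",
    }

    def room(symbols_in):
        # Apply the rules; if no rule matches, pass back a stock string of symbols.
        return RULE_BOOK.get(symbols_in, "对不起, 我不明白.")

    print(room("你好吗?"))  # a fluent-looking reply from a program that understands nothing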

The Christian answer to the first question must be a resounding no, at least with regard to human minds. But why, precisely? What is it about human minds that can’t be modeled by a purely physical system? I’ll give some suggestions and address the second question in a future post.



1While a lot of work continues to be done in AI, the scientific community has by and large abandoned its hope that AI will produce anything resembling intelligent life anytime soon. Most of the hope has shifted to neuroscience – if we aren’t clever enough to create AI, apparently our best hope is to find out how the brain works and duplicate that.
2quoted in Moreland and Rae, Body & Soul, pp. 165-66.
3Note that I am completely ignoring the deep epistemological questions that (1) raises. How can we know that some property x differentiates two things if x is unobservable? I can get away with ignoring this because I’m not really a philosopher.