To: Charles R.
From: Geoffrey Klempner
Subject: Could a computer that thinks also will?
Date: 29th November 2010 12:09
Thank you for your email of 19 November, with your third essay for the Philosophy of Mind program, in response to the question, ''A computer can think and make decisions, but it cannot WILL.' Is that a convincing argument against a materialist view of the nature of the self?'
This is a well-argued, and indeed brave, essay, in which you make major concessions to the arguments of materialism, in order ultimately to stake out a place for 'the sacred' as the object of an attitude of mind which turns away from the 'biological-material dimensions of existence and toward God, toward an interior, subjective, non-empirical experience of the Sacred.'
If I am reading this correctly, an artificial person, constructed by man, would be able to experience the Sacred in exactly the same way as a person who is 'natural born'. If we are, as you say, involved in chasing away, or pushing back, a 'God of the gaps' then we must surely not erect barriers which would prevent artificial persons from joining in worship alongside the rest of humanity.
But let's first retrace our steps. A being that thinks, you say, must also be capable of willing, because thinking and willing are not two separate processes but part of a single capacity. One of your arguments is that every decision or act of willing is preceded by a thinking process, whether we are aware of this or not. For example, as a competent driver I slow down on the motorway when I see traffic queues ahead. I don't have to consciously think, 'Uh huh, traffic queues ahead, must slow down.'
However, you could just as easily have argued that an act of will precedes every thinking process, whether we are aware of this act of will or not. Sitting in my attic at home, I decide to direct my thoughts (yet again) to that intractable philosophical problem which I will never solve, and yet which I will never give up trying to get to grips with. An effort is needed to maintain my concentration as I scramble about on the smooth rock face. Yet it is also true that any thought that I might think is an action, a 'mental act'.
We need to be a bit more precise about this, in order to deal with possible counterexamples. 'Thinking' is an action verb. To think is more than merely to have ideas occur in one's mind. If you say to me, 'Don't think about dragons,' obviously I can't help having the idea of a dragon occur in my mind, but I can make the decision to think about something else. I can will the direction of my thoughts. Some thoughts may indeed seem involuntary, but then some physical actions are too.
Why can't present-day computers will? The obvious answer is, 'because they can't think.' Every computer built up to the present day is basically a calculating machine. These machines have impressive capacities to represent states of affairs in the world and draw conclusions about them (e.g. Deep Blue, the chess-playing computer that beat Garry Kasparov), but these representations are not *beliefs* because (I would argue) they are not *for* the entity that has the representations. This crucial notion of 'being for' is what still makes the difference (at the present time) between human beings and computers.
Before I take this any further, there is another issue to consider. The quote from Marvin Minsky arguably contains (by implication, though not explicitly) a non sequitur. If the nervous system obeys the laws of physics and chemistry then, in principle, there could be an artificially produced physical entity which did everything that we can do so far as thinking is concerned. However, it does not follow that such a physical entity would be a Turing machine. The question of what else it could be is a matter of ongoing controversy, but the upshot is that it has yet to be proved that a biological organism works in fundamentally the same way (if you dig down far enough) as a computer. (A Turing machine is an idealized model of computation which every programmable computer instantiates.)
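Since the argument turns on what a Turing machine is, a minimal sketch may help make the idealization concrete. The function and the toy 'bit-inverting' machine below are my own illustration, not anything from the correspondence: a Turing machine is nothing more than a tape, a read/write head, a current state, and a finite transition table.

```python
# A minimal Turing machine sketch (illustrative only): the transition
# table maps (state, symbol) -> (symbol to write, head move, next state).

def run_turing_machine(table, tape, state, halt_state):
    """Step the machine until it reaches halt_state; return the tape."""
    cells = dict(enumerate(tape))  # sparse tape; blank cells read as '_'
    head = 0
    while state != halt_state:
        symbol = cells.get(head, '_')
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == 'R' else -1
    return ''.join(cells[i] for i in sorted(cells)).strip('_')

# A one-state machine that scans right, inverting each bit, and halts
# when it reaches a blank cell.
INVERT = {
    ('scan', '0'): ('1', 'R', 'scan'),
    ('scan', '1'): ('0', 'R', 'scan'),
    ('scan', '_'): ('_', 'R', 'done'),
}

print(run_turing_machine(INVERT, '1011', 'scan', 'done'))  # -> 0100
```

The point of the idealization is that *any* programmable computer, however elaborate, can be described by such a table; the open question in the letter is whether a biological organism can be.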
So we have two thoughts: the first is that beliefs must be 'for' the entity in question. The second thought is that it may well turn out to be the case that what biology is able to create cannot be reproduced by means of an assembly of silicon chips, or in any similar manner.
In order for beliefs to be 'for' an entity, it must also have desires. An entity which has beliefs and desires would have to be an agent, capable of initiating actions in the world, in order to satisfy those desires. It follows that a computer which could 'will' would also be self-moving, as we are. If biology is necessary for thought, then this self-moving entity would have to be a biological organism. Perhaps this is what we are heading for: the first 100 per cent artificially constructed 'human being'.
And what, then, would be the consequences for religion? If we 'proved', finally, that we could do what previously it was believed only God could do, would that make us Gods? Or would it be the final proof that there is no room for God in a material universe? I agree with you that the answer is, No. There is room for religion, room for the sacred. But the question is what this *means*. What is prayer, what does it mean, if it does not involve some form of communication or dialogue with the Eternal Thou (to use Martin Buber's phrase)? Is this actual communication, in some sense, or merely 'as if'? Are we, as in traditional religious belief, putting ourselves in relation to something actually existing outside us, outside 'material existence'? Or, if not, what is the alternative?
All the best,