To: David T
From: Geoffrey Klempner
Subject: Could a computer be capable of an act of will?
Date: 20 September 2007 12:54
Dear Dave,
Thank you for your email of 10 September, with your third essay for the Philosophy of Mind program, in response to the question, '"A computer can think and make decisions, but it cannot will" - is that a convincing argument against a materialist view of the nature of the self?'
As you clearly point out, the argument is unconvincing, if only because it assumes - without argument - that the only way in which materialism can be true is if the human brain is a Turing machine. There are plenty of philosophers today who would reject that assumption.
As you also show, interpreting the argument as an appeal to the subjective 'feel' or quale of willing is equally a non-starter, as is the idea that I just 'know' that I have free will, and moreover that this is something that no mere material entity behaving according to laws of cause and effect can possess.
I take all the blame for inviting these red herrings in my formulation of the original question.
The question I wanted to raise (I guess) is about the idea of 'willing' as such, and whether such an event or process is, as alleged, inconsistent with a computational or 'generate and test' model of decision making.
We are all familiar with the scenario where an agent goes through a process of reasoning, following up various possible courses of action to their conclusions, deciding which outcome is optimal, all things considered - and then does something different, or nothing.
All things considered, the Volvo is the ideal car for my growing family. Then I go and blow my cash on a Mercedes two-seater instead, forcing my wife to take over the role of taxi driver.
All things considered, the best action in view of uncertainty over enemy strength and activity is to wait until more intelligence is received. On a hunch, the commander decides to risk a frontal assault and wins a brilliant victory.
I've chosen these examples because neither would be interpreted as 'weakness of will'. Weakness of will indicates that something is going wrong, and it's difficult to argue that the possibility of 'things going wrong' in this way is a property which human beings possess and computers don't (or can't). By contrast, acting on impulse, or on a hunch, or against all advice can turn out to have been the right thing to do, not just by virtue of hindsight but given the actual circumstances which the agent faced.
What seems to be wrong here is the model of 'rational decision making', which we know from numerous examples fails to match the way human beings actually deliberate.
What does that show? Nothing, in my view. For all we know, the decision to buy the Mercedes or to risk the assault did arise as a result of a process of computation. The mistaken assumption is that the rational structures which we impose on our behaviour necessarily reflect - or are even capable in principle of reflecting - what is really going on.
Kant emphasized that human judgement cannot be reduced to rules. 'Examples are the go-cart of judgement.' That isn't to say that the process of making a judgement is ultimately anomic or random, but rather that any rules we can give are ad hoc, partial, mere rationalizations.
Your example of a computational program with an added randomizing device (to cope with Buridan's ass scenarios) is a perfect description of a chess computer. Whatever move the chess computer makes, we know that it was provided for by the initial rules - such as the plus or minus values assigned to certain types of position, calculated in fractions of a pawn. We know what these rules are because we made the device, even though the result can be that the chess computer does things that surprise or even amaze us.
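To fix ideas, here is a toy sketch of such a device in Python (the move names and feature values are invented purely for illustration): every candidate move is scored by rules fixed in advance, and a randomizing device breaks exact ties so the program never stalls between equally rated options.

    import random

    # Hypothetical feature values, in fractions of a pawn
    # (1.0 = the value of one pawn). Invented for illustration.
    FEATURE_VALUES = {
        'controls_centre': 0.25,
        'doubles_pawns': -0.5,
        'opens_rook_file': 0.3,
    }

    def score(features):
        # Sum the pre-assigned plus or minus values for a move.
        return sum(FEATURE_VALUES[f] for f in features)

    def choose(candidates):
        # Generate and test: score every candidate move, keep the best.
        scored = [(score(fs), move) for move, fs in candidates]
        best = max(s for s, _ in scored)
        # Buridan's ass case: if several moves tie exactly, a
        # randomizing device picks one, so the program never stalls.
        return random.choice([m for s, m in scored if s == best])

    # Nf3 and Nc3 tie at +0.25; the randomizer decides between them.
    print(choose([('Nf3', ['controls_centre']),
                  ('Nc3', ['controls_centre']),
                  ('b4',  ['doubles_pawns', 'opens_rook_file'])]))

Whatever move this little program picks, the choice was provided for by the table of values and the tie-breaking rule - which is just the point about the chess computer.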
This is not to claim that folk psychology is a false reflection of what is 'really going on inside the brain'. On the contrary, folk psychology is sufficiently flexible and subtle to recognize that deciding isn't the same as rationalizing, that not all decisions can be explained.
I would argue that willing is deciding. If I decide to do x now, then I *must* do x now, as a matter of logic. There is no alternative. If I don't do x, it is not because a mysterious 'act of willing' was absent, but because I didn't really decide to do x. I merely rationalized that I 'ought' to do x but something held me back. In retrospect, I might decide that it was a 'brilliant hunch', or 'cowardice', or any number of explanations that folk psychology has developed in order to make sense of this kind of situation.
All the best,
Geoffrey