
Determinism, free will and moral responsibility


To: Mark C.
From: Geoffrey Klempner
Subject: Determinism, free will and moral responsibility
Date: 8 June 2007 12:22

Dear Mark,

Thank you for your email of 2 June, with your essay, 'Determinism', in response to units 1 and 2 of Possible World Machine.

You raise a number of interesting points. The first concerns the very idea that we are 'arguing' over determinism and its consequences. If human beings are ultimately 'machines' determined by causality, then it is hard to see how the very idea of a 'good' or 'bad' argument, or of 'valid' or 'invalid' reasoning, can get a grip. As you wrote your essay, you made choices about the 'best' way to express your ideas, or about what logically 'follows' from a given thought. These notions seemingly make no sense if all you are is a 'machine'. A machine doesn't raise 'questions'. You switch it on, and it goes, end of story.

In that case, the very idea of 'arguing' against free will on the basis of determinism makes no sense, because a person can only argue if they have the ability to reason and make choices.

In response, a determinist who argues against free will would say that human beings are a particular kind of machine, a 'deciding machine', which is set up in such a way as to evaluate situations and make decisions. Chess-playing computers are an example of such machines. A chess-playing computer can be 'good' or 'bad' at chess, and this is judged objectively in terms of results. Similarly, a human being who is 'good' at practical reasoning tends to make decisions which lead to better consequences for that person, compared to a human being who is 'bad' at practical reasoning.

Philosophers who argue that the human brain is just a very complex computer running a 'program' assume that the human brain is set up by evolution in such a way as to acquire its programming from experience. However, another point you raise concerns the possibility that human 'machines' have been made by a 'higher being'.

Here is a possible scenario. Aliens from the planet Zog were originally responsible for putting human beings on the earth. Human beings owe their natural characteristics to conscious design decisions made by the Zogs. One of these characteristics, say, is aggressiveness and the delight in murder and mayhem. When a human being goes on a murdering rampage, would we then be inclined to say that the Zogs are at least partly to blame?

Compare this to the 'Manchurian Candidate' scenario where (in the novel) American GIs are captured in Korea and brainwashed, then return to the US where they conduct assassinations. If one of these assassins were captured, we would not regard him as guilty of murder because he was merely acting as a result of his brainwashing. Is there any real difference between the Zog scenario and the Manchurian scenario? There seems to be. The GIs have no choice but to murder; they are helpless victims of their brainwashing, whereas even if we were put on Earth by the Zogs, we still have a choice whether to murder or not.

You make the point that human beings are not blank slates. You want to say that some of the influences on us 'determine' us to act, while others leave us with the power of free choice. However, the determinist will respond that it only appears that we have a 'choice' because we do not know all the facts. On the determinist scenario, if you ran the history of the world again, everything would be the same, including every decision ever made by every human being. There is no room for deviation at any point. As you point out, there is a distinction which we make, in ethics and in law, between acts which 'deserve' punishment and acts for which the agent is not responsible, but this is consistent with the recognition that we are not 'ultimately' free. Whether an action is 'free' in the sense of 'worthy of praise or blame' is just a useful distinction made for practical purposes. (Hence the so-called 'compatibilist' view of free will.)

Another point that could be made is that, far from finding determinism a restriction on our freedom, we demand that our actions be predictable. You find a fifty pound note in the bus station and hand it in to Lost Property. A friend jokes, 'Why didn't you keep it, no-one would have found out,' and you respond angrily, 'You should have known me better than that!' It was your 'free' decision, yet given your character it was a decision which you could not fail to make. (This is a point made by F.H. Bradley in his book 'Ethical Studies'.)

Your speculation about being transported to an alien world is intriguing. On one possible reading, we take with us the character which we formed on earth, but find ourselves confronted by experiences which are so strange and perplexing that we don't know what to do. What exactly does this show? That we lack 'free will' in this scenario, or that our freedom of action is severely limited by our lack of knowledge? For example, I am asked to press one of two switches. I don't comprehend what will happen if I press switch A rather than switch B, so I can't be blamed if bad consequences follow from choosing A. Yet it is still my free (in the sense of unconstrained) choice to select A. That is to say, I had the power to move my hand to the left rather than the right; nothing forced me to move it one way rather than the other. This is a very limited freedom, but it is still 'freedom', at least in the compatibilist sense.

All the best,

Geoffrey