International Society for Philosophers

Scepticism, consciousness and moral theory

To: Kyriakos C.
From: Geoffrey Klempner
Subject: Scepticism, consciousness and moral theory
Date: 5th October 2011 14:48

Dear Kyriakos,

Thank you for your email of 2 October, with your second essay for the Possible World Machine, in response to the question, 'Explore some of the issues surrounding the attribution of consciousness to machines and non-human animals,' and your email of 3 October, with your response to unit 7, on Morality.

Scepticism and consciousness

One big issue which you raise concerns the moral consequences of attributing consciousness. Here, one might distinguish between two questions: whether non-human animals are capable of consciously suffering, and what ethical consequences follow from this; and whether there is a sense in which non-human animals might be admitted as members of the moral community.

The latter could, conceivably, happen if experiments with apes made sufficient progress. Just as a child can be blamed for bad behaviour, so the experimenter might remonstrate with an ape who has successfully been taught a simplified version of English: 'You should not have stolen the banana!' (This would be different from 'punishing' an animal, say a dog, where our only concern is stimulus-response.) Maybe.

It might seem incredible that anyone could claim that animals don't suffer, yet Peter Carruthers (who was Head of the Philosophy Dept at Sheffield when I taught some classes there) in his book 'The Animals Issue' argues that animals are not 'conscious' of their pleasure or pain in the way that human beings are. So, if you are a utilitarian (like Peter Singer), it would be a fallacy to weigh a human pain against the pain suffered by an ape, because the two pains are incommensurable.

Is HAL conscious? An argument given in the program is that a computer program capable of consciousness would necessarily have beliefs and desires. Having desires, it must, necessarily, be capable of performing actions which satisfy those desires. So it must have real physical needs, the capacity for pleasure or pain, which it is able to provide for by the appropriate physical behaviour. But, as you say, even so there is no implication that HAL's 'consciousness' would be similar to human consciousness. Could we even communicate, or share the same language?

In Blade Runner, the android Rachel is presented as someone who genuinely believes she is a human being. She has fond memories of her childhood, precious photographs and mementoes. But then, in response to questioning, she begins to suspect the horrifying truth. What we have here is a technology fully capable of reproducing the biological structure of any animal (such as a snake) or human being.

To make a real human being you need something else: a life, a childhood from early infancy, through all the stages of learning and socialization. What the constructors of Rachel have done is merely provide her with a patchwork substitute, a simulacrum of 'memory', made up from genuine memories of other human subjects and pasted together in a form which resembles a coherent story. But this resemblance crumbles under close examination. This is the purpose of the specially designed psychometric tests conducted by the blade runner. Rachel displays intelligence, but she is not a genuine example of a 'conscious being', because of what she lacks.

You say some things about morality in your essay which would imply that we are not necessarily bound to behave ethically towards extra-terrestrial intelligent aliens. This would be speciesism. While a preference for our own species can be defended (in the same way as a preference for one's nation, or family, or spouse), I don't think there could be a moral case for placing an alien species (or by parity of reasoning, a genuine artificially constructed intelligence) altogether outside our moral community.

Unit 7

I was interested in your distinction between the 'efficiency' of a moral system and its 'consistency'. An efficient system would be one which always gives an intelligible, actionable answer to any moral question. A consistent system would be one which did not lead to dilemmas and conflicts. Kant's duty ethics is efficient, but it leads to conflicts of duties. Utilitarianism avoids conflict, because there is always an answer in principle to the question of 'the greatest happiness for the greatest number' but it is inefficient, because of the difficulty of calculating that answer.

My own view is that there cannot be a moral theory, period. As moral beings, each of us is equipped with all that is necessary to make moral decisions in the real world; but in the real world there are also many situations, or potential situations, where we find ourselves faced with insoluble dilemmas.

The trolley problem has been the focus of much debate in moral philosophy. A trolley with 20 passengers is on a collision course which will lead to their certain death. The only way to divert the trolley involves causing the death of an innocent bystander. Would you kill 1 in order to save the lives of 20? Kantian ethics says no, while utilitarianism says yes. And so on. The problem is that you can alter the numbers in any way you like. At some point the Kantian will have to say 'I don't know', and likewise (at a different point) the utilitarian.

'Do moral values determine beliefs or do beliefs determine moral values?' Beliefs about what, exactly? I have just marked an essay by one of my University of London students addressing the question (for the 'Ethics: Contemporary Perspectives' module), 'Are there moral facts?' The hallmarks of a 'moral fact' would have to be that it is (a) capable of being ascertained by the normal methods by which we ascertain facts, such as sense perception, experiment etc., but (b) such that knowing this fact necessarily entails a particular action.

According to David Hume, in his discussion of the gap between 'is' and 'ought' and also G.E. Moore, in his account of what he terms the 'naturalistic fallacy', such facts would be highly problematic. It is up to us to decide what we value, given the facts. The facts can't make that decision for us. But where does the decision come from? What justifies it?

I would argue that we should not look for metaphysical 'objects' to serve as moral facts. Rather, there are rational constraints on human behaviour which derive from the very fact that we are 'persons' in relation to other 'persons'. To deny the claims of ethics is equivalent to denying that other 'persons' are real; in other words, it is equivalent to solipsism. (The question of solipsism will be discussed later in the program.)

All the best,

Geoffrey