How do you know the author of these words has a mind?


To: Louis G.
From: Geoffrey Klempner
Subject: How do you know the author of these words has a mind?
Date: 15 February 2007 11:04

Dear Louis,

Thank you for your email of 7 February, with your notes on unit 5 of the Philosophy of Mind program, and your second essay, in response to the question, 'How do you know that the author of these words has a mind?'

When is scepticism about another mind justified? It is a tragic fact about the human condition that we DO entertain scepticism regarding the motives and beliefs of others, and sometimes this scepticism turns out to be well founded.

Human beings exhibit the most remarkable ability to fake, act, pretend. Sometimes people are caught out, and sometimes they take their secrets to the grave.

What we have to rely on, in trying to determine whether a person is being honest with us or not, is not only 'generalisation from our own case', which has a genuine part to play (as when we ask ourselves, 'What would I do in that situation?'), but also a host of factors based on our knowledge of that person's previous behaviour, of human psychology, of possible motives for deception, and so on. Think of a police interrogation. Or a lover who doubts whether the person he or she loves is being faithful. Or great actors and actresses, with their spellbinding ability to assume a character and personality which is not theirs.

Arguably, this is one of the most pervasive themes of human life.

It is the genius of philosophy, however, to have invented a whole new kind of scepticism. Just to give it a label, I'm going to call this 'metaphysical doubt about other minds', in contrast with 'real doubt about other minds'.

The best way to explain the difference is to consider a situation where we have some reason to question whether a person is being honest with us. 'Do you really love me, or are you just trying to get me into bed?' Of course, sometimes we don't know our own feelings for sure. But let's assume that we have a clear case where a person's words and actions can only be interpreted either as words and deeds of love, or else deliberate deception.

Real doubt can be occasioned by any number of things. As I indicated, there are circumstances which would confirm the doubt or help remove it.

But what about metaphysical doubt? The point about metaphysical doubt is that it has nothing whatsoever to do with a person's words or actions. 'How do I know that you are not a mindless zombie who talks and behaves in every way as someone with a mind would do?' is a question which cannot be answered by any words or actions - by hypothesis. If someone has real doubts about me, I can try to allay those doubts by the things that I do and say. But if someone has metaphysical doubts, then nothing I say or do will make any difference at all.

I am interested in the fact that it is possible to have metaphysical doubts about other minds. I would dispute what you say in paragraph 2, that the problem is 'more challenging for the materialist'. On the contrary, on the hypothesis of materialism, metaphysical doubt about other minds cannot even be entertained. If all the physical requirements are met, then there is no room for the hypothesis that nevertheless 'all is dark inside'. On the other hand, if mind-body dualism is true, then it does seem possible that there could be, say, a zombie double of GK who talks and behaves in every way like me. In fact, this is the argument David Chalmers gives in support of dualism: that we can conceive of the logical possibility of my having a zombie double.

Turing's Test is based on the commonsense principle that if something looks like a duck, walks like a duck and quacks like a duck, then it is a duck. Of course, we know that this is not true. A child's electronic duck does all of these things. But it doesn't have kidneys, a heart, a liver and so on. If you dissect a duck and find that it has all the correct internal organs, then that's pretty good evidence that it is a duck.

We understand the difference between a 'fake' dialogue, as generated by the famous 'Eliza' program, and a genuine dialogue. But, thinking of the duck analogy, is the capacity for dialogue the only essential trait of intelligence? I am not satisfied that it is. The Chinese Room scenario gives one very good reason for doubt. To be intelligent is not just to generate the appropriate words in the appropriate situations, but to understand the words so generated. It is true that we can talk about lots of things that we don't fully understand. But generating words none of which one understands is not 'talk' but merely making a noise.
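
To make the point vivid, here is a minimal sketch, in Python, of how an Eliza-style program produces its replies. The rules and names are my own invention for illustration, not Weizenbaum's actual script; the replies are 'appropriate' only because surface patterns in the input are matched and slotted into canned templates, with no understanding anywhere in the process.

    # Minimal sketch of an Eliza-style responder (illustrative rules only).
    # Replies are produced by matching surface patterns, not by understanding.
    import re

    RULES = [
        (r"\bI feel (.+)", "Why do you feel {0}?"),
        (r"\bI am (.+)", "How long have you been {0}?"),
        (r"\bbecause (.+)", "Is that the real reason?"),
    ]

    def reply(utterance):
        for pattern, template in RULES:
            match = re.search(pattern, utterance, re.IGNORECASE)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return "Please go on."   # default when nothing matches

    print(reply("I feel nobody understands me"))
    # -> Why do you feel nobody understands me?

Such a program will 'converse' indefinitely, yet there is nothing it could be said to mean by any of its words.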

I would argue that an entity cannot have beliefs unless it has desires, and cannot have desires unless it has needs, together with the capacity to satisfy those needs through physical agency. This is a conceptual claim about the nature of what it is to be a 'subject'.

Regarding non-human animals, it is again a conceptual question what kinds of feelings or experiences it makes sense to attribute to a given subject. An earthworm cannot feel anguish, although maybe it does 'feel' something when you tread on it. A dog cannot feel despair at the destruction of its life's work, although it might be upset to be deprived of its toy ball. These are conceptual points. If someone said, 'You can never know for sure. Maybe a feeling of anguish is occurring in the earthworm and you would never know,' that is just plain nonsense (albeit a nonsense which dualism encourages).

One interpretation of the question, 'How do you know that the author of these words has a mind?' is in terms of metaphysical doubt, which I talked about earlier. If you are prepared to entertain metaphysical doubt, then it is possible that the author, GK, is in fact a zombie who talks and acts in every way like a human being.

It is also possible to entertain real doubts. There's a site on the internet where you can have fun generating 'post-modern' philosophy essays. A program throws seemingly meaningful words and phrases together in a passable imitation of a philosophy essay. So it is logically possible that for the second set of essay questions, I used one of these programs rather than taking the trouble to compose six essay titles. To someone who had no knowledge at all of philosophy, many philosophical essay titles no doubt do look like gibberish.
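
For what it's worth, a toy version of such a generator might look like the following sketch. The phrase lists are my own invention, chosen only to show how grammatical-looking titles can be assembled at random with nothing behind them; I am not reproducing the actual program on that site.

    # Toy 'post-modern essay title' generator (phrase lists invented for
    # illustration). It stitches fragments together at random; the result
    # merely looks meaningful.
    import random

    ADJECTIVES = ["post-structural", "dialectical", "phenomenological"]
    NOUNS = ["subtext", "narrative", "discourse"]
    THINKERS = ["Derrida", "Foucault", "Lyotard"]

    def essay_title():
        return ("The {} {}: rereading {} against {}"
                .format(random.choice(ADJECTIVES), random.choice(NOUNS),
                        random.choice(THINKERS), random.choice(THINKERS)))

    for _ in range(3):
        print(essay_title())

The output looks plausible enough at a glance; the difference only shows when one asks whether anything is being understood or meant.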

When will a computer be able to genuinely produce philosophy? How many years is it likely to be before the Director of Studies of Pathways is a computer running an AI program?

If AI is finally achieved then, as I indicated above, it would have to involve the creation of an intelligent entity which has needs and desires and not merely the capacity for generating words. In that case, there will be nothing to prevent us from saying that it has a mind.

All the best,

Geoffrey