
International Society for Philosophers

Possible worlds and the problem of other minds


To: Gordon F.
From: Geoffrey Klempner
Subject: Possible worlds and the problem of other minds
Date: 21 September 2006 09:23

Dear Gordon,

Thank you for your email of 5 September, with your first essay for units 1-3 of The Possible World Machine in response to the question, 'Explore the use of "possible worlds" in philosophy, illustrating your argument with an example of a problem that involves the notion of possible worlds', and for your email of 20 September, with your essay for units 4-6 in response to the question, 'How do you know that the author of these words has a mind?'

Possible worlds

Your account of 'personal worlds' does not involve any conflict with the distinction drawn in chapter 1 of Naive Metaphysics between the objective world and 'my subjective world'.

As I understand it from your account, the personal world of subject S would be defined as the possible world in which all and only the things that S believes are true. If S's beliefs are 'omega inconsistent' (i.e. there is inconsistency but S is not in a position to identify where the inconsistency lies) then this defines an 'impossible world', but one which S does not recognize to be impossible. Our personal worlds overlap to the extent that we share beliefs.
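To make the definition vivid, here is a toy sketch in Python - the propositions and function names are purely illustrative - in which a personal world is modelled as a set of believed propositions, and the overlap of two personal worlds as the intersection of the two sets:

    # Toy model (illustrative names): a subject's personal world is the set
    # of propositions he or she believes; two personal worlds overlap in
    # exactly the beliefs the subjects share.
    def personal_world(beliefs):
        return frozenset(beliefs)

    def shared_world(w1, w2):
        # The overlap of two personal worlds: beliefs held in common.
        return w1 & w2

    s = personal_world({"grass is green", "possible worlds are real"})
    t = personal_world({"grass is green", "possible worlds are fictions"})
    print(shared_world(s, t))  # frozenset({'grass is green'})

On this model, an 'impossible world' would simply be a belief set harbouring a contradiction that its owner has not located.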

It is true that we can also talk of the 'world' of the autistic person, or someone who is colour blind, or deaf. In this case there could be a perfect coincidence - as defined in terms of belief - between the worlds of someone who is, e.g. deaf, and someone who is not deaf, yet we would want to say that in an important sense their worlds are profoundly 'different'. If this notion could be sharpened up, this would yield a different kind of personal world - call it a 'perceptual world'.

In your careful account of modality and ideas of possible worlds you don't go so far as to actually describe a philosophical problem which involves the notion of possible worlds. However, the contrast which you draw between 'possible actual worlds' conceived as linguistic entities and possible worlds not so conceived relates to the debate alluded to in unit 1 between those, such as David Lewis, who view possible worlds as 'real' worlds, and those who define possible worlds in linguistic terms.

Amongst those who take a realist view of possible worlds, Saul Kripke and David Lewis differ on one fundamental issue, namely the question of 'trans-world identity'. Kripke wants to take talk of possible worlds seriously, and not just reduce possible worlds to a linguistic construct, yet at the same time he insists on the common sense notion that when, e.g. I think about the things that GK might have done yesterday but did not, I am not thinking about a 'counterpart' of myself existing in some other possible world but about MYSELF.

Why 'believe' in possible worlds, if we can do all we want to do, or say all we want to say with your 'possible actual worlds'? Do you have a view on this?

My suspicion is that the question that the linguistic analysis fails to address is the very same question that Lewis's account also fails to address, namely the nature of 'the possible' as such. A linguistic entity, or a world in a different time and space, remains stubbornly actual, something that 'exists'. Yet when we turn our minds to the possible, we are aiming at precisely the opposite of this: at a state of affairs that does not exist but might have existed.

Yesterday, I had an important decision to make (whether or not to go ahead with my business consultancy partnership). In time, I will be in a position to express satisfaction with the decision I made, or entertain regrets. The object of my thought - how things might have been had I decided differently - is not something that 'is' but rather something that 'is not', that is the whole point.

The difficulty is finding a way to capture this idea in philosophically illuminating terms; otherwise we have to rest content with saying that the notion of possibility, or possible worlds, is 'sui generis'.

Other minds

You point out a straightforward way of answering the question, 'Does the author of these words have a mind?' which does not raise any difficult philosophical questions. There is no need to give a philosophical analysis of the concept 'mind'. Just say that in our day-to-day lives we meet and converse with people who express their thoughts and behave in characteristic ways which imply thinking and deliberating, and that these are the kinds of thing we mean by 'having a mind'.

On the other hand, there are children's toys - speaking dolls, or action men which yell 'pass me the ammo!' - to which we would not attribute minds, even though they appear to produce utterances of the kind which, under other circumstances, we would explain as coming from subjects with minds.

I have a version of Eliza on my computer, along with a configuration file which you can add to at will. It does not seem too far-fetched to suppose that, given enough time, one could program Eliza to carry out a pretty convincing discussion on any given philosophical topic. Maybe the configuration file would have to be a few (or many) millions of words long, but that kind of thing is child's play given the enormous size of programs being produced for the latest PCs. This would not have been possible in the days when Eliza was first designed, because back then processors ran at a comparative snail's pace.
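To show how little machinery is needed, here is a minimal Eliza-style sketch in Python. The rule format and responses are invented for illustration - Weizenbaum's original program differed in detail - but the principle of pattern-matching against a configuration file of canned replies is the same:

    import random
    import re

    # Invented rule table standing in for Eliza's configuration file: each
    # pattern maps to a list of canned responses; '{0}' is filled with the
    # text captured from the user's utterance.
    RULES = [
        (r"i think (.*)", ["Why do you think {0}?",
                           "What follows if {0}?"]),
        (r"(.*) troubles me", ["How long has {0} troubled you?"]),
        (r".*", ["Please go on.", "Can you say more about that?"]),
    ]

    def respond(utterance):
        # Try each rule in turn; the catch-all '.*' guarantees some reply.
        for pattern, replies in RULES:
            match = re.fullmatch(pattern, utterance.strip(" .!?"),
                                 re.IGNORECASE)
            if match:
                return random.choice(replies).format(*match.groups())

    print(respond("I think Eliza has a mind"))
    # e.g. "Why do you think Eliza has a mind?"

A longer configuration file means more rules, not a different mechanism: the program never does anything other than match and substitute.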

I'm pretty sure that the method you suggest - cribbing questions from the units using software that selects likely statements and turns them into questions - could be made to work. In that case, the answer to the question is that you can't be sure that the author of the questions has a mind. The questions could have been generated by a computer. It is irrelevant that the software was designed by a human being. The programmer did not need to think of these questions in order to write the program, or indeed know anything about philosophy.
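Here is a sketch of the kind of question-generating software you describe, under the assumption that declarative sentences harvested from the units can be inverted by a crude fixed template (the sentences and the rule are made up for illustration):

    # Hypothetical 'cribbing' generator: take likely statements from the
    # course units and turn each into a question by a fixed template. The
    # programmer need not think of any of these questions in advance.
    UNIT_STATEMENTS = [
        "trans-world identity is a problem for modal realism",
        "possible worlds can be defined in linguistic terms",
    ]

    def to_question(statement):
        # Invented transformation rule: wrap the statement in a question frame.
        return "Is it true that " + statement + "?"

    for s in UNIT_STATEMENTS:
        print(to_question(s))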

Eliza with a souped-up configuration file looks like a likely candidate for the Turing Test. How long she can keep up a convincing philosophical discussion depends on the skill of the questioner. Yet even if the discussion carried on indefinitely, we would be very reluctant to say that Eliza is 'doing the same thing' as we do when we express our thoughts, because we can see how the trick is done.

Now at last we are driven back to the philosophical question: what is it REALLY to have a mind? It can't just be 'to behave like something with a mind'. In that case, it must have something to do with the source of the behaviour. One theory - dualism - says that only subjects with 'souls' have minds. Anything without a soul is just a more complicated version of Eliza. One unfortunate consequence of dualism, however, is that no-one can ever know whether anyone else has a mind. I can never know that about you, and you can never know that about me.

If you reject dualism, on the other hand, then the challenge is to provide a convincing account of the essential difference between the kind of program that runs on the brain (if we can even call it a 'program') and a giant look-up table like Eliza.

All the best,

Geoffrey