
International Society for Philosophers

John Searle and the mind-body problem


To: Charles C.
From: Geoffrey Klempner
Subject: John Searle and the mind-body problem
Date: 20 April 2003 13:32

Dear Charley,

Thank you for your e-mail of 9 May, with your interesting essay for units 4-6 of Searching for the Soul, 'About Identity: A Philosophical Discussion'.

This essay is really about John Searle's 'solution' to the mind-body problem. Although your own experience of the connection between the mental and the physical suggests that they are not related as two separate substances, as Descartes thought, it would still be possible for a diehard Cartesian interactionist to propose an 'explanation' of how the physical brain, in its interaction with the non-physical mind, is impaired by physical damage or disease, but is able to overcome this impairment with physical aids, such as drugs. Possible, though perhaps implausible given all the other information that we have.

There is a lot to say about each of the arguments which Searle refers to: Kripke's argument from 'Naming and Necessity' about rigid and non-rigid designators, Nagel's 'What Is It Like to Be a Bat?', and Jackson's knowledge argument for qualia.

However, as I understand his position, Searle has something to add to all these negative arguments: a positive conception of mind as a biological phenomenon. In the world of nature, wherever mind occurs, we are dealing with facts whose essential nature is to be approachable only from the 'inside'. Hence the claim that for consciously apprehended mental items, there is no distinction between the way they seem and the way they are. If there were a distinction to be made, then we would have to be in a position to describe or explain what *would* count as a way of apprehending them as they really are, as opposed to the way they merely appear. But Searle believes that such an approach is ruled out by the very nature of the mental.

When I wrote my paper 'Truth and Subjective Knowledge' (now on the Wood Paths at http://klempner.freeshell.org/articles/shap.html) I had moved towards a very similar view, although at the time I did not associate it with Searle. In essence, my view is that it is of the very nature of the mind that there are things to be *known* which are accessible only to the subject, but not, however, in any way which would entail the possibility of a 'private language' in Wittgenstein's sense.

I do not know for sure whether Searle would go along with my view that these subjective 'things to be known' are simply brain states. However, the crucial point is that I also hold that these brain states cannot be apprehended objectively (e.g. by pointing a 'cerebroscope' at the brain as Richard Rorty once thought) but only 'by the subject whose brain it is'. In other words, the only way to literally *see* a brain state (in this sense) is, e.g., to feel a pain or a tickle, or to experience red or Middle C. There is no way of getting at these 'things' - identifying or classifying them - from the outside.

These subjective 'things' cannot in fact be *objects* of judgement because that is already to turn them into something which they are not. Whether I say, 'The post box is red', or 'The post box looks red to me', I am making objective judgements with objective truth conditions. The subjective 'object' or 'thing' which is given only to me (wrongly characterized as qualia) cannot be an object of 'judgement' in any sense of the word. (So there cannot be any room, e.g. for the speculation that what 'looks from the inside' red to you might 'look' green to me.)

You might think that the connection between the mind and biology, on this view, would be that anything that can be put together and formed through technology is capable of being apprehended objectively (because we would know everything there is to know about how it works). I am worried about this, however, because it leaves open the possibility that while one can't prove that the brain is 'running a program', as the AI theorists believe, one *also* cannot prove that the brain is *not* running a program. How can we be so sure?

The well-known alternative to the AI view, also consistent with the view that the mind is a product of the brain, is that the mind essentially involves a 'connectionist' structure which cannot be dissected analytically. In a computer lab, you can set up a connectionist system and give it the opportunity to 'learn' things (like pattern recognition). But you can't then analyse the 'rules' which the system is following in producing its results.
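To make the point concrete, here is a minimal sketch of such a system in Python: a tiny feed-forward network trained by back-propagation to reproduce the XOR pattern. The details (network size, learning rate, number of training passes) are arbitrary illustrative choices. After training, the network should reproduce the pattern, yet all it has 'learned' is an opaque list of numerical weights, not anything one could read off as a rule.

# A minimal sketch of a connectionist system: a tiny feed-forward network
# trained by back-propagation on the XOR pattern. Network size, learning
# rate and number of passes are arbitrary illustrative choices.

import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

H = 4  # number of hidden units (arbitrary)
# Each hidden unit has two input weights plus a bias; the output unit
# has one weight per hidden unit plus a bias.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
w_output = [random.uniform(-1, 1) for _ in range(H + 1)]

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # the XOR pattern
rate = 0.5

def forward(x):
    hidden = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    out = sigmoid(sum(w_output[i] * hidden[i] for i in range(H)) + w_output[H])
    return hidden, out

for _ in range(10000):
    for x, target in data:
        hidden, out = forward(x)
        # Nudge every weight a little in the direction that reduces the error.
        d_out = (target - out) * out * (1 - out)
        for i in range(H):
            d_h = d_out * w_output[i] * hidden[i] * (1 - hidden[i])
            w_hidden[i][0] += rate * d_h * x[0]
            w_hidden[i][1] += rate * d_h * x[1]
            w_hidden[i][2] += rate * d_h
        for i in range(H):
            w_output[i] += rate * d_out * hidden[i]
        w_output[H] += rate * d_out

# The trained network should now reproduce the pattern (outputs near 0 or 1)...
for x, target in data:
    print(x, target, round(forward(x)[1], 2))

# ...but what it has 'learned' is just this opaque collection of numbers.
print("hidden weights:", w_hidden)
print("output weights:", w_output)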

This leads to a picture of a physical universe which, by its very nature, creates 'recesses' that cannot be penetrated, as it were, by 'folding over' itself. Some, but not necessarily all, of these recesses are the home of the mental.

All the best,

Geoffrey