To: Sandeep M.
From: Geoffrey Klempner
Subject: Problems for a materialist view of the mind
Date: 14th December 2010 13:08
Thank you for your email of 6 December, with your first essay for your Pathways Introduction to Philosophy program, in response to the question, 'What difficulties stand in the way of a materialist view of the mind, according to which thoughts, feelings and sensations are ultimately nothing more than processes in the brain?'
This is a well-researched essay which covers some of the main materialist theories and summarizes the objections or difficulties that might be raised against them. I will follow the structure of your essay and discuss each of the varieties of materialism in turn.
The mind-body identity theory is actually a term covering a variety of theories which differ significantly in what they claim about the relation between the mental and the physical. Perhaps the most important distinction is between 'type' identity and 'token' identity. 'Pain is the stimulation of C-fibres' is a type-identity statement. Wherever there is pain, there you will find C-fibre stimulation, because that's just what pain is. The main objection to this comes from functionalism which, as you state, accepts that different physical structures can realize the same function (just as a mousetrap can be made of wood or plastic). In principle, perhaps, an artificial person or android could 'feel pain' even though its 'brain' consisted of chips, not of fibres.
The alternative is token identity. But what kind of thing is a mental token? Where will you find the actual, particular occurrence of my memory of the drive I had last Saturday? Physical items are individuated spatio-temporally, but mental items are not located 'in' space; they only occur in time. One theory, put forward by Donald Davidson, relies on the observation that events are individuated strictly by their causal relations. There is some event in my brain which has the same causes and effects as the event of my remembering the drive. That's simply what it means to state that the two events are 'identical'.
You refer to Ryle's celebrated book 'The Concept of Mind' in relation to the theory of behaviourism. There is another philosopher whose work has been lumped together (wrongly, in my view) with Ryle, namely the later Wittgenstein. Wittgenstein's view of the nature of the mental avoids claims about logical equivalence (such as the idea that 'He ironed the shirt carefully' can be analysed fully in terms of counterfactual conditional statements about possible behaviour). Nor did Wittgenstein ever dismiss the inner life as a mere 'ghost in the machine'. What he did say is: 'A nothing would serve as well as a something about which nothing can be said.' If you consider all the possible situations in which we use a term like 'pain', the criteria which we typically appeal to in applying it, the 'language game' of its use, then you have described its meaning. If you then go on to insist, 'But there is *something there*, something terrible,' you haven't succeeded in saying anything. You already said you were in terrible pain, didn't you? And didn't I understand exactly what you meant?
You describe Searle's famous critique of functionalism, the 'Chinese Room' thought experiment. I've always thought that Searle is cheating a bit. Here is a thought experiment which has been used in reply to Searle. Imagine that my brain cells are dying, and that tiny, self-sacrificing intelligent alien beings have secretly decided to take the place of each deceased brain cell, performing exactly the same function that it performed. My brain continues working, exactly as it did before. Of course, we would not expect the aliens to 'understand' anything. Like the man in the Chinese Room, they are merely following instructions. Yet the functional entity which is my brain as a whole 'learns', 'understands', writes philosophy emails etc.
However, I do agree with Searle that functionalists (such as Daniel Dennett in his excellent book 'Consciousness Explained') overstate the case. We don't know that the human brain functions as a 'Turing machine' (the template for all possible computer programs). In fact, there is a growing body of evidence that it works in an entirely different way, not in terms of '1's and '0's. What this is an argument for is not dualism (Searle isn't a dualist) but rather a limit to the ambitions of Artificial Intelligence. It is entirely possible that no one will ever write the 'program' of the human brain in the way that biologists have successfully mapped the human genome.
All the best,
Geoffrey