The concept of Free Will has sparked many of the most pointless arguments in philosophy. When someone says that they can prove that Free Will doesn’t exist, my first reply is sometimes “well, you’re free to think that,” which might be followed by the more thoughtful “there is no way to live other than as if there is free will.” But it takes a bit more to get to the heart of what I see as the problem with the philosophical question of Free Will.
I think that the whole debate about Free Will is flawed from the start because the concept is not well defined. Many people agree, but many of them merely engage in semantic furniture rearrangement such as saying we really should be talking about Free Choice or some such. But the flaw is much deeper than that. The term Free Will might need some work, but the real flaw is in the assumption of who it is who possesses this Free Will, or makes that Free Choice. The view of the human mind emerging from the study of cognitive neuroscience is very different from the views assumed by most people in their day to day lives, and by philosophers over the history of philosophy. We have historically assumed a model of the human mind and the human “self” that has been shown to be deeply flawed by modern science.
The standard intuitive view of the “self” and ideas like “you”, “me”, or “him” are based on what we have needed in the past for social interaction. I view it as similar to the simplification in physics where you can assume the mass of a planet is concentrated at a point in order to calculate its orbit. You can use the mass-at-a-point assumption to steer a rocket to Mars pretty well, but when you start getting close to Mars and want to land, the simplification is no longer useful. You need to know the radius, the local topography, maybe even some local geology in order to find a spot to land safely. Similarly, our intuitive view of a self is very like a person concentrated at a point. This works amazingly well for many social interactions, but if you want to get closer, such as in an intimate relationship, or even try to figure out internal motivations, the simplified idea of a person-at-a-point is an impediment to understanding. This intuitive person-at-a-point is very like the old “homunculus” model of the human mind, in which a little man in your head makes all of the decisions. That model makes no logical sense, since the question of how the mind works is just pushed into the homunculus, who would then need another little man in his own head, and so on. These little men are all an attempt to rationalize the person-concentrated-at-a-point intuitive view of the mind, but they have little to do with the way the human mind actually works.
Modern cognitive neuroscience has shown very clearly that humans are a complex fabric of interwoven simultaneous processes. We are nowhere near a full understanding of the implications of that fact. It is, however, clear to me that centuries worth of philosophy on the bookshelves needs to be re-evaluated in light of the non-punctiform self.
It’s bad enough to see philosophers wasting their time arguing Free Will without a good concept of who is having it, but it makes me sad to see actual neuroscientists talking about it as if they might be looking for the homunculus neuron. Take this recent article in Nature as an example:
The main voice of reason in there is Michael Gazzaniga:
“Neuroscientists also sometimes have misconceptions about their own field, says Michael Gazzaniga, a neuroscientist at the University of California, Santa Barbara. In particular, scientists tend to see preparatory brain activity as proceeding stepwise, one bit at a time, to a final decision. He suggests that researchers should instead think of processes working in parallel, in a complex network with interactions happening continually. The time at which one becomes aware of a decision is thus not as important as some have thought.”
Translated into simple language, this might as well say: “the entire rest of this article is pointless”, since the article is about neuroscientists trying to pinpoint when and where a decision is made. They pose the Free Will definition question, but without ever phrasing it as if it might be deeper than a problem in semantics:
There are conceptual issues — and then there is semantics. “What would really help is if scientists and philosophers could come to an agreement on what free will means,” says Glannon.
They never really address the aspect of it most needing definition. It’s very sad to see so many scientists and philosophers, whom Nature presents so favorably here, just wandering in the darkness of their unexamined assumptions.
One prominent philosopher who is helpful on this subject is Daniel Dennett. He talks about related problems in many books, but his book “Freedom Evolves” is directly about the question of Free Will. He attempts to deal with the fact that the mind is not all happening serially, but is instead distributed over many smaller parts operating in parallel. Each of these parts is less complex than the whole, definitely not a homunculus, but from the combination of all of them, complexity arises. The best, and most humorous, summary I’ve seen of his idea is the one Dennett cites here:
‘Some years ago, there was a lovely philosopher of science and journalist in Italy named Giulio Giorello, and he did an interview with me. And I don’t know if he wrote it or not, but the headline in Corriere della Sera when it was published was “Sì, abbiamo un’anima. Ma è fatta di tanti piccoli robot” – “Yes, we have a soul, but it’s made of lots of tiny robots.” And I thought, exactly.’
The “robots” being neurons of course, which can be viewed as deterministic, i.e. robots. Dennett goes through a very complex argument to present a way in which something like what we think of as Free Will might arise from a complex network of tiny automata, such as neurons. It’s far from a settled argument, but Dennett points toward ways to break out of the fallacies that traditional philosophy finds itself mired in because of an outmoded concept of what a mind is.
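The flavor of Dennett’s idea can be illustrated with a toy sketch (my own illustration, not Dennett’s model): a population of trivially simple deterministic units, each far too dumb to “decide” anything on its own, nonetheless yields a clear collective verdict. There is no single unit you can point to as the place where the decision happens.

```python
def tiny_robot(evidence, bias):
    # Each unit is a trivial deterministic rule: it "fires" if the
    # evidence, seen through its own fixed bias, exceeds zero.
    return 1 if evidence + bias > 0 else 0

# A population of 100 simple units, each with a slightly different bias,
# spanning -0.5 to +0.49.
biases = [-0.5 + i / 100.0 for i in range(100)]

def population_decision(evidence):
    # The "decision" is a property of the whole population: a majority
    # vote across all units. No individual unit's output settles it.
    votes = sum(tiny_robot(evidence, b) for b in biases)
    return votes > len(biases) // 2

print(population_decision(0.3))   # True  -- most units fire
print(population_decision(-0.3))  # False -- most units stay silent
```

The point of the toy is only that asking “which unit made the decision?” has no answer, just as Gazzaniga suggests there is no single moment or place where a brain’s decision is made.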
Freedom Evolves also has another favorite Dennett quote:
’Some philosophers can’t bear to say simple things, like “Suppose a dog bites a man.” They feel obliged instead to say, “Suppose a dog d bites a man m at time t,” thereby demonstrating their unshakable commitment to logical rigor, even though they don’t go on to manipulate any formulae involving d, m, and t. Talk about time t is ubiquitous in philosophical definitions but seldom given any serious work to do.’ *
The misguided attempt to define a mind as concentrated at a point in space or a decision as happening at a point in time is a relic of outmoded intuitive ways of seeing ourselves. And this is not just an old intuition: when most people use a computer metaphor for the mind, they are thinking in terms of the kinds of computers most people have been exposed to, ones which serialize a single thread of instructions through a central processing unit. (Advanced parallel-processing technology is beyond most people’s experience.) In the kind of computers most people are familiar with, a decision does happen at a particular time which can be measured with impressive accuracy, and one could point to the electronic gate on the chip where it actually happened. However, as computers become complex enough to model minds of complexity comparable to human minds, the decision will be much harder to locate, just as it is in our minds.
If we want to come to terms with ourselves and how our minds work, and maybe with what a term like Free Will might mean, we first need to come to terms with a picture of a mind where many diverse components of “thought” are happening simultaneously in different brain structures and what we experience consciously is something like the froth on top of a bubbling stew of unconscious processes.
* (Dennett is essentially apologizing here for quoting another philosopher’s statement involving a time t, and goes on to point out that the use of time t is actually useful in the particular quote he is citing.)