


5 Aug Alan Dent

Intellectual question: I don't go along with Francis Combes's faith in inevitability. My hunch is that in complex systems determinism throws up contingency. The universe is governed by breathtakingly deterministic laws, but if I trip on the stairs carrying Harrison tomorrow and break my neck, this isn't because at the Big Bang it was inevitable but because there are so many converging circumstances that even though the laws of physics apply and I can't fly to save myself I'm not bound to trip. If this is so then contingency becomes enormously influential. In human society the possible converging happenings mean that though there are clear historical patterns, the details defy them. Was the industrial revolution inevitable? How could it have been when 70,000 years ago our numbers fell to 2,000? We could easily have disappeared. It happened because of an amazing concatenation of circumstances. But contingency is a kind of determinism: we aren't free to control it. So what freedom do we have?


6 Aug Ken Clay

I agree with you entirely apropos inevitability and consider this one of Marxism's worst errors (don't worry Comrades - the revolution is inevitable - all we need to do is wait, but a bit of stirring, while we wait, might speed things up a bit). Whether Marx himself actually believed this is less certain but it was certainly the received wisdom in the CPGB when I was in it. Philosophically Ted Honderich wrote a big fat book A Theory of Determinism (OUP 1988) which I dashed out and stumped up £50 for (yis I was rich in them days). Ted made the astonishing assertion that determinism was now the philosophical orthodoxy and that free will was a superstitious relic, like heaven, hell and reincarnation, held only by ignorant hoi polloi. (See Appendix 1) The indeterminate behaviour of subatomic particles is irrelevant.

I didn't believe it then and I don't now. I'm puzzled by your assertion that "contingency is a kind of determinism" - that sounds like an oxymoron. The fact that something is unpredictable doesn't mean it isn't determined. One of the illustrative arguments for determinism was the drowning man - should he yell for help or should he resign himself to a pre-determined fate? The determinist would say he should yell since it may be pre-determined that he is saved after a lot of yelling. I don't think there's a clear answer but most (non-philosophical) humans have a strong conviction they have free will. Yis, Ken and most humans believe in some kind of God...Er..yis but... Ted's view has strange codicils - eg blame and praise apropos human behaviour are inappropriate - geniuses just are (they don't deserve it) - and so are criminals (it's not their fault). Then comes the work-around. Criminals aren't to be blamed for their genes and social conditioning - but we should, nevertheless, lock them up to protect society. Likewise Beethoven and Shakespeare aren't to be lauded - but we should value them - and if say Bill is having trouble paying his rent we should bung him, or if Ludwig is going a bit mutt an jeff we should give him a free hearing aid (because we value his works). Yis, sounds like semantic juggling. But there you have it. I doubt you could blag the Honderich from OUP but the Harris library may have it.


7 Aug Alan Dent

I take your point that the fact something is unpredictable doesn't mean it isn't determined. But contingency is different. It isn't just unpredictability, it means things may occur or not. There's a degree of chance. Ramachandran says this about language. It developed by using parts of the brain which evolved for other things and it did so because some individuals had more nuanced sound-making and others copied them (because of mirror neurons). Then natural selection did the work. Like Darwin, Ramachandran thinks it all began with sex. Language originated in courtship rituals. Once a subtle use of sound became a survival advantage it very quickly got preserved as species-specific and in double quick time a repertoire of pleasant noises for attracting mates became the works of Homer. The point is, there's nothing inevitable about language. It happened but it didn't have to. Yet at the same time there is determinism behind it. Determinism throws up contingency.


8 Aug Ken Clay

I'm not sure that Darwinism and Ramachandran's notions get us out of the contingency/determinism crux. We can have the intuition that language just popped up by accident (and then prevailed due to evolutionary pressures) but the determined determinist would say that, at the lowest level, the genetic mutations enabling this development were inevitable. I find this hard to believe (as you do I guess) but where's the logical refutation of this determinist theory? We've only got what we've got - and variants are hypothetical no matter how much we're convinced it could've been otherwise.


11 Aug Alan Dent

I think we have to distinguish between accident and contingency. There are no accidents in nature as there's no intention. But contingency’s a different matter. The question is can we prove determinism isn't absolute? It may seem commonsensical that it can't be but it's commonsensical that time is absolute. There has to be a way of proving it. I think Ramachandran gives us a starting point: unless the structure of the human brain changed quite quickly allowing language to emerge then it must be the case that language was always possible (ie for the entire 150-200,000 years homo sapiens has been around) but didn't appear until a particular set of contingent conditions made it easier. Language appeared about 40,000 years ago, so that's a long period during which we had the capacity to be linguistic but didn't make it. If this is right then language must be contingent and if that's so then it points to significant contingency within evolution. Doesn't natural selection cobble together the best it can from the materials at hand, without any teleology? So we have five fingers but why not six (Anne Boleyn did, I believe)? Well, surely because five fingers do the trick. It's enough for the purposes the hand evolved for in specific circumstances.

What about prior to the emergence of life? More than ten billion years. Were the physical processes of the universe entirely determined? I've got no idea if there's a way to question that but as far as biology goes, there is a gap into which we can force contingency. Why does this matter? Because if life isn't ruled by deterministic forces then the determinism of economic theories starts to look shaky. Both Smith and Marx rely on determinism. Smith thinks it must be the case that a multiplicity of competing self-interests will result in social peace because a deterministic deus ex machina intervenes. He simply assumed determinism from his belief that the universe is governed by a benign deity. Hence, nothing can be totally evil as god wouldn't permit it. Partial evil must serve god's beneficial ends. Capitalism, which looks greedy and vicious, is god's plan for social peace. Only in the superficial sense of people pursuing their narrow self-interest does Smith believe in choice. Behind that is a strict determinism. The much trickier matter is whether we can choose or are subject to a delusion of choice. But if we can establish that determinism isn't absolute we start to prise open a window onto choice being real.


12 Aug Ken Clay

I too would like to think that we make meaningful moral choices (rather than have the illusion we do) and are the masters of our fate (within certain biological and environmental limits) but there are a couple of points your note raises: “unless the structure of the human brain changed quite quickly allowing language to emerge” Couldn’t this have been the case? Surely now we observe the changing structure of a single human brain with the acquisition of knowledge (ie new dendrites and synapses – the brain as a muscle which grows with use). Perhaps you refer to a far greater change (but Neanderthal brains were bigger than ours). Maybe it’s crucially the cerebral cortex.  I believe geniuses who grapple with problems sometimes reach a solution after their brains have changed, over years, to suit.

I also wonder how we know there was a 150,000 year gap of silence. It’s a bit like wondering how the Romans spoke Latin. We don’t know and never will.

Like all philosophical questions the problem of free will and determinism remains one because there’s no definite, logical answer. We may pass on, like we did with Berkeley’s solipsism, or Hume’s denial of cause and effect, or Kant’s noumenon, saying yes, we can’t disprove it but we’re not persuaded – it’s a dead-end.

As a species we seem to be convinced we have choices and I find it hard to accept that this feeling will wither away like belief in God. I think William James said something to this effect – that a universe devoid of moral value was too horrible to contemplate – and therefore he couldn’t accept it. Hardly a logical rebuttal. Another 19C thinker, Vaihinger, thought we must act “as if” some notions were true until they can be verified (a kind of categorical imperative). Surely such defeated fatalism plays, as you fear, straight into conservative hands, but I’m not sure we have to have a logical rebuttal to get us back on track.


12 Aug Alan Dent

So, why couldn't the structure of the human brain have changed quickly to produce a capacity for language? The answer is that the emergence of language was too sudden. Brain size isn't the crucial issue: how the brain is structured is what matters. The big Neanderthal brain didn't produce language. How do we know language arose suddenly? The conjecture is that the astonishing flowering of culture which happened about 40,000 years ago couldn't have taken place without language. To put it more accurately, it was the emergence of language which powered it. As we know the hominid brain was its current size about 150,000 - 200,000 years ago, it follows there must have been this extraordinarily long period in which our ancestors were confined to grunts and moans but nothing like generative language.

The sudden emergence points to language having emerged from the use of parts of the brain which evolved for other things: rule-making for example. This isn't an exclusively linguistic capacity but it's crucial to generative language ie an infinite number of possible sentences on the basis of a finite set of rules. Now, if this thesis is right, then language is a contingency. If something as fundamental to our humanity as language is contingent, what role can we sensibly assign to determinism in the unfolding of human history? If language is but might not have been, then it hardly makes any sense to claim that capitalism had to be, or socialism has to be. This doesn't prove that we can make moral choices, but it does suggest that the strict and simple laws which govern the world explored by physics break down once we arrive at human consciousness and society. For a physicist, electromagnetism, gravity and the strong and weak nuclear forces work on a dozen fundamental particles to produce everything in the universe. But one of those things is the human brain and though it may have emerged through the operation of strict determinism its unconscionable number of neurons and breathtaking plasticity mean that it breaks free, to at least some extent, from the forces which formed it.
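That phrase – an infinite number of possible sentences on the basis of a finite set of rules – is easy to demonstrate. Here is a minimal Python sketch; the three-rule grammar is invented purely for illustration and has nothing to do with any real language:

```python
import itertools

# A toy grammar, invented for illustration: three rule sets, but the
# recursion in S -> S "and" S makes the derivable language unbounded.
GRAMMAR = {
    "S": [["NP", "VP"], ["S", "and", "S"]],
    "NP": [["birds"], ["apes"]],
    "VP": [["sing"], ["grunt"]],
}

def expand(symbol, depth):
    """Yield every sentence derivable from `symbol` within `depth` rule applications."""
    if symbol not in GRAMMAR:          # a terminal word: yield it as-is
        yield [symbol]
        return
    if depth == 0:                     # budget exhausted: no derivation completes
        return
    for rule in GRAMMAR[symbol]:
        # expand each symbol of the rule, then combine the alternatives
        parts = [list(expand(s, depth - 1)) for s in rule]
        for combo in itertools.product(*parts):
            yield [word for part in combo for word in part]

print(len({" ".join(s) for s in expand("S", 2)}))   # 4 distinct sentences
print(len({" ".join(s) for s in expand("S", 3)}))   # 20 distinct sentences
print(len({" ".join(s) for s in expand("S", 4)}))   # larger still - and so on without limit
```

Each extra level of recursion enlarges the set, so no finite list ever exhausts what the three rules can generate – which is all "generative" means here.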

Yet more important than this is the fact that our brains have evolved to make us cultural. Even in a very small culture, one in which there are no more than 150 individuals, there is enough scope for difference within and between those 150 brains for strict determinism to have a hard time. Suppose we assume that 99.9 per cent of what goes on in the brain is determined; that tiny scope for contingency in each individual becomes very wide when multiplied by the thousands of social exchanges between 150 people. And in a society of millions even the tiniest leeway for contingency means that the heavy weight of determinism is constantly shaken by the appearance of what contingency can produce. Language is a good example of how culture has to trigger what is inborn. You have to be part of a linguistic community to assimilate language. It's true of almost every human capacity that it requires a culture for its development. Cultures are inordinately variable, and even if that is a result of determinism, it creates something new which fights free of the determinism which created it...
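For what it's worth, the arithmetic of that multiplication can be sketched in a few lines. The 99.9 per cent figure and the one-exchange-per-pair assumption are just the illustrative numbers above, not measurements:

```python
# Illustrative assumption from the paragraph above: 99.9 per cent of what
# goes on in an exchange is determined, leaving 0.1 per cent to contingency.
p_contingent = 0.001

# 150 people form 150*149/2 = 11,175 distinct pairs.
pairs = 150 * 149 // 2

# Probability that NOT ONE exchange turns out contingent, assuming one
# exchange per pair and independence between exchanges.
p_none = (1 - p_contingent) ** pairs

print(pairs)                  # 11175
print(1 - p_none > 0.9999)    # True: a contingent event somewhere is near-certain
```

Even a sliver of indeterminacy per exchange, compounded across a community, makes some contingent outcome all but certain – which is the letter's point about culture multiplying contingency.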


13 Aug Ken Clay

I think you are quite right to say that a culture is facilitated by language and that if there was a recognisable efflorescence at a certain point then this is persuasive evidence of language acquisition at the same time. But just what would these indicator relics be? Large settlements? Pyramids? Cave paintings? Can we be sure there was a 100,000-year period of mindless grunting? (And does your recent experience as an educator of youth suggest a regression to this state?) It would be nit-picking (like a determinist) to suggest there might have been a 50,000-year transition period when these primitives used only the present indicative before they developed the conditional. Yes, I joke, but do accept that this imaginative property of language, the creation of alternative realities, and the accumulation and transmission of knowledge, the wisdom of the tribe, is what distinguishes us from apes. Language is indeed the key. But I still don’t see why language acquisition has to be a contingency. Yes it’s not evolutionary on the time scale of, say, the development of the opposable thumb; it is startling and sudden and revolutionary and certainly unpredictable but it could still be determined.

Just where freedom (contingency if you like) enters into this hypothetically deterministic world is the nub of it. Even the great Ted holds his hand up in the face of neuroscience and quantum physics – it isn’t a proof, but neither is it a disproof he maintains – in fact the crazy sod thinks these disciplines support his hypothesis. 

As to the miraculous emergent properties of brains (consciousness) we run up against another classic conundrum of philosophy – the Mind/Body problem. Yes it is inexplicable how that gunk in our skulls can even at its lowest level produce the sensation of a patch of red much less compose Beethoven’s 9th. But even so – the determinist is not torpedoed – he sails on.

I’m playing devil’s advocate here of course. I don’t believe this stuff either but I hope I’m demonstrating that indeterminacy is logically unprovable, and, therefore, the search for the killer rebuttal is futile. I guess, like me, you are somewhat disheartened by the current political apathy – what one commentator called, referring to the baleful effects of determinism “Asiatic fatalism”. You hope to wheel out the neuroscientific or quantum physical argument which will demolish such inertia. It’s certainly worth scanning the scientific and philosophical scene for any promising harbingers but I’m not holding my breath. And are our fellow oik victims of recent events going to be energised by this anyway? No fucker out there (at least none that I know) believes in determinism to start with. However I think there may be a fatal inclination towards historical determinism (see Appendix 2 for a summary of this) which, for an indeterminist (ie most oiks) is a logical disconnection since what’s history but the record of acts of individuals en masse?


15 Aug Alan Dent

Let’s come at the question from a different angle. It’s long been held, following Saussure, that language is an enclosed system of signs in which there is difference without positive elements; this absence of positive elements points to the arbitrary nature of words. There is no necessary connection between the sound tree and the oak in your garden. It’s simply a matter of convention: all English speakers employ this particular combination of phonemes to designate a particular referent. But why should words be arbitrary? Why should units of speech bear no imprint of what they refer to? Given the nature of the evolution of language, it would be surprising if words didn’t contain some residual onomatopoeia, especially if language did emerge in courtship ritual. The theory that it arose as a means of making division of labour easier is persuasive but less likely: the need to reproduce is significantly more urgent than that to divide labour.

If language did begin as part of courtship, the emotional resonance of words would have been more important than their denotative meaning. We know that in conversation some 90% of meaning is conveyed non-verbally. This is what we would expect if language began as a means of expressing emotional, especially erotic, content. Logically, if language emerged this way, then all words would require some emotional grounding, even those with apparently little emotional content. The notion of the arbitrary quality of words comes from the opacity of language’s origin: when we ask ourselves why the sound door stands for what lets us enter a room, there seems to be no answer; but if by some miracle we could have access to a record of the entire evolution of every language, perhaps we would see things differently.

Ramachandran, as I’ve indicated, has picked up on the discovery of mirror neurons as significant in understanding how language emerged. The fact that our ancestors’ brains were already large and complex long before language appeared rules out the possibility of a specific language site which produces speech regardless of environmental pressure. We know, especially from feral children, that speech requires a social trigger. In a way, this rehearses the manner in which language might have emerged: parts of the brain specialised for producing sound (which exist in many animals) became much more sophisticated and specialised under environmental pressure. The pressure came because organs of speech had developed to produce emotionally charged, highly modulated sounds in courtship ritual and the existence of mirror neurons meant imitation could spread these very rapidly. Mirror neurons, discovered by Rizzolatti and Gallese in 1995, are crucial to the theory of how language may have evolved. What Rizzolatti and Gallese found is that neurons in the ventral premotor area of macaque monkeys fire whenever the individual performs a more or less complex action: putting a nut in its mouth, pulling on a lever etc. The remarkable thing, however, is that the same neurons fire when a monkey watches another perform the action.

These monkeys must possess a primitive mind-reading capacity. Mirror neurons are likely to be fundamental in ensuring social behaviour. Yet how do we know they exist in humans? Firstly, Ramachandran studied patients suffering from the odd condition known as anosognosia. Paralysed down the left-hand side by a right hemisphere stroke, about 5% of patients deny their paralysis. This is explained by the specialization of different areas of the brain: the right side tends towards reality-testing while the left is more associated with repression, confabulation, reaction-formation and so on. Hence, when some of the reality-testing capacity is lost due to damage, the left side can exert greater influence and the result is denial. Yet these patients deny not only their own paralysis, but that of other patients suffering from the same condition! This denial is explained by damage to the mirror neurons. The capacity to understand what is going on in someone else’s mind from what they do depends upon these neurons firing. If they are damaged then it would follow that paralysis would not necessarily be read. Secondly, researchers have found that pain neurons in the anterior cingulate which fire when, for example, someone is pricked with a pin, also fire when people watch others being pricked. The hypothesis then is that we are mind readers thanks to our mirror neurons. But how did the operation of these neurons combine with the development of the organs of speech to kick-start language? And why did this happen so late? Why were our ancestors a silent or grunting species for 150,000 years or so when the brain structures for language were in place?

The important idea here is precisely contingency. This sits oddly with a deterministic view of the universe, yet it is paradoxical rather than contradictory. There was no absolute reason why language should have evolved as it did, or indeed that it should have evolved at all, but if, even in only one place, our ancestors began to produce more sophisticated sounds, and if these related to particular, precise elements of their reality, then the extraordinary capacity for imitation made possible by mirror neurons could have spread and developed the practice rapidly until it became the generative language we take for granted today. This negates the Chomskian view that the brain possesses a Language Acquisition Device which simply came into existence, more or less out of nowhere, but it doesn’t, of course, undermine the idea of specific language sites in the brain. On the contrary, Rizzolatti considers that mirror neurons may well be homologous to Broca’s area, responsible for some essential linguistic processing. If it is the case that language emerged more or less contingently, that is, that in a specific time and place some individuals began to exploit the capacities of their vocal systems more subtly and were quickly copied by others; and if it is also the case that the first sounds our ancestors made were to convey emotion rather than denotative content, it would follow that words contain something in their sounds which relates to their meaning. The notion that the sound of words is arbitrary fits with the idea of language as a hard-wired system produced by some brain circuitry which simply arrived from who knows where. But the view of language as contingent and emerging from a previous sound system through rapid imitation made possible by mirror neurons points much more in the direction of the sound of a word itself containing emotional content and meaning.

It’s hard to see this with a word like tree; what might the connection be? But when we remember the word is derived from the Old Norse tre, which in turn is related to the Old Saxon trio, and that these probably derive from the Greek doru, meaning wood; and when we recall that this is related to the French dur, meaning hard, and the English durable, we can begin to discern a connection between the word, the sound tree, and its referent. The initial phoneme of durable is reminiscent of that of jug. It’s easy to make jug a short sound but durable wants to be elongated (this is not a question of the fallacious notion of long and short vowels), and the span of the sound rehearses the word’s meaning. The sound of the word tends to impose an elongation which mirrors the meaning. Ramachandran’s Bouba/Kiki test is one of his typically simple but penetrating experiments: two shapes, one cloudlike and fluffy, the other spiky like shards of glass, are displayed. People are told that in Martian language one is a Bouba, the other a Kiki. Almost without exception they attribute Bouba to the cloudlike shape and Kiki to the other. When they are asked why, they explain that Bouba sounds soft, gentle, cuddly and Kiki sounds harsh, sharp, cutting. What’s interesting about this is that something auditory is interpreted in tactile terms. How does a word sound cuddly?

Ramachandran’s explanation is a pre-existing cross-translation between different parts of the brain, specifically Broca’s and Wernicke’s areas, which are implicated in language processing, and the fusiform gyrus, which is a visual area. Hence, sound is intimately connected to what is seen. Why shouldn’t it then be so connected to what is touched, tasted, smelt? As language is the means we use to express experience, it would be odd if it emerged as an enclosed system unrelated to the brain’s multiple capacities. Rather, it’s much more likely the earliest sounds our ancestors used were principally redolent of their referent in their sound. If we think of two phrases very often used in English, I love you and I hate you, we can see it’s impossible to say the former without at least a minimal pursing of the lips: the sound love is produced by the tip of the tongue first touching the alveolar ridge, just behind the front teeth, then falling to a neutral position as the mouth opens to produce the vowel, after which the lips purse to make the v sound, ending with a slight contact of the bottom teeth on the top lip. The lips make a shape similar to that assumed for kissing. The organs of speech imitate the meaning of the phrase. Try to say I love you in a vicious way. Though not impossible, the incongruity between sound and meaning is striking. In the same way, it’s impossible to say I hate you without the final consonant of hate having a slightly spitting quality, and to produce the vowel, the lips have to stretch slightly over the teeth in something reminiscent of a snarl. Try saying I hate you in a tender way and the incongruity is once more obvious.

These are easy examples because they relate to powerful feelings. It’s much more difficult to see how sound and sense conjoin in a sentence like: The book is green. There is, apparently, no link between the sound green and the frequency of light which we see as green. This apparent absence is what gave rise to the notion of the arbitrary nature of words, but it is merely apparent. Somehow, in the long history of the emergence of that word a connection can be found. We could begin to uncover it by recalling the connection in Old High German between the words for green and grow. Green is the colour of growth, of leaves, grass and many plants. What happens to your lips when you say the word green is that they begin slightly pursed and then stretch. This stretching makes you linger slightly on the vowel (once more I stress this has nothing to do with the nonsense about short and long vowels). It is simply that the very sound of the word enforces a slight musical elongation. What is the emotional resonance of that? It is quite gentle, soft, appealing. It isn’t a hard, metallic, abrupt sound. Here, though the argument is tenuous, is its link to the colour: the gentleness of fructifying nature, the pleasantness of spring and summer greenery.

Of course, this can seem like stretching a point, but we are trying to reach back into the evolution of words which began their life perhaps thousands of years ago. Just what kind of sound, we can wonder, did our ancestors produce when they were excited by the arrival of new leaves and shoots? We could write a big book exploring how the sounds of words mirror their meaning, how, that is, language is intrinsically deterministic. Yet, though there is powerful determinism at work within language, it looks likely no deterministic influence ensured its emergence. Suppose we now make a leap and ask: was there a deterministic influence which ensured the emergence of homo sapiens, or is this too contingent? And suppose we go even further and ask: was the Big Bang inevitable or could the forces at work prior to it have engendered something quite different? Is the universe as we know it a contingency even though within it there is overwhelming determinism? Our thinking operates within a frame in which determinism is taken for granted, but suppose the whole of existence depends on contingency: wouldn’t it then be probable that we would have evolved with a capacity to spot and exploit contingency, and might this be the origin of an ability to choose? And then let’s make another leap and say the law of supply and demand is no law at all: it depends on human decision. There is no inevitability in a supplier raising the price of a commodity when its supply is scarce. That behaviour depends on a prior decision to put the pursuit of profit above all else.


19 Aug Ken Clay

Thanks for your extended reflections on linguistics. I found these much more entertaining than my own somewhat arid thoughts on free will and determinism. I read Ramachandran’s Phantoms in the Brain when it came out and found that fascinating too. The idea that creating a mirage of a severed limb relieved sufferers from phantom limb pain even though they knew it was a trick was very strange – as if our higher, critical consciousness could be bypassed as far as pain generation is concerned (and, of course, pain is instantiated by the brain, sometimes after a period of painless reflection, rather than instantaneously generated by a wound). I thought Ram’s Reith lectures good too although mirror neurones are only briefly mentioned. His news on volitions – that they precede actions by about a second but are hidden from consciousness so that the act and the volition appear to synchronise – is quite extraordinary. I can’t find his Oxford lecture on language acquisition. I’m not convinced it’s all sexually driven either. I expect it’s because old Ram grew up surrounded by those temple carvings – I mean, some of those girls make Jordan look like a Belsen victim. And the idea that primitive hom sap lived in an environment not unlike that of the medieval troubadours where girls got wooed rather than simply assaulted seems fanciful. Anyway who says that girls prefer all that warbling? If this were the case no crooner would be able to move without being mobbed by horny teenagers…er…yis..moving swiftly on.

These investigations of damaged brains are very revealing – Oliver Sacks has written on this also. But, to quote the philosopher Jerry Fodor, who has railed against the optimistic predictions of those who claim to have solved the Mind/Body problem (another knotty intractable) via neuroscience - what they reveal “is just geography”. Knowing what bits of the brain fire up when we do things comes nowhere near explaining the phenomenon of consciousness. He, along with ex-Hartlepool oik Colin McGinn at Rutgers, came to be known as the New Jersey Nihilists, believing that we’d never understand how the brain works coz we’ve only got brains to do the job. I guess they’d put determinism into that unsolvable category.

Saussure I know only secondhand from people like Barthes. I must say the idea that the sound of a word itself contains emotional content and meaning seems far-fetched and your examples a bit too selective – “tree” may be related to the Greek “doru” and French “dur” but what about “arbre” or “baum” or any number of Chinese, Arab or Inuit words for this entity which sound nowt like “tree”? Actually there probably isn’t an Inuit word for tree since there aren’t any up there in the Arctic. Likewise your exposition of “I love you” (reminiscent of the opening lines of Lolita) and “I hate you” – yes, the latter is jagged and sharp, but so is “je t’aime”. All this seems to be in the great tradition of Gallic gobbledegook. And doesn’t this go against Saussure’s assertion that the connection between the signifier and the signified is arbitrary (thank you Wikipedia)?

If I might, like the great Bryan Magee on the sofa, summarise your position perhaps that would help re-focus the debate. Your initial perplexity about the nature of chance in human affairs leads into an exposition of Ram’s mirror neurone phenomenon in which you think you’ve found strong evidence of chance. You accept some degree of determinism but assert that “determinism throws up contingency”. Well, to the thoroughgoing determinist that’s like saying you’re a bit pregnant. It’s either one or the other – you can’t have both. (Hume – see Appendix 4). We can’t re-run history and we’ve only got what we’ve got – it had to be like that even though historians make a living thinking about what-ifs. I have come across some discussions which define free will merely as “acting without constraint” but that doesn’t cut any ice either since the determinist maintains that you did what you did and could have done no other, no matter how extended the wrestles with your conscience. You merely had the illusion of decision making.

Trawling for supporters of your view (and mine come to that) I dug up Popper’s analysis from his Objective Knowledge (1972) (see Appendix 4). He’s a common-sense kind of philosopher (if that’s not an oxymoron) who will have no truck with beliefs in the immateriality of the external world or the arcane meaning of words. He thinks there can be an in-between stage involving determinism and contingency. One sympathises but it still isn’t a disproof of determinism.  

To revert briefly to your opening statement about tripping with Harrison on the stair. Here you seem to be saying that the intersecting causal chains are so complex – the state of the carpet, your shoes etc – “there are so many converging circumstances that even though the laws of physics apply and I can't fly to save myself I'm not bound to trip” – that pure chance must be entering into things. This surely is simply being overwhelmed by unpredictability – and we’re not even considering human motives and decision making. If you walked into the kitchen and the ceiling fell on you, surely you can accept that this is just the end result of many determined physical events. I can’t see any mystery in these examples. The nub must be in human volition. Odd that Popper agrees with Compton about quantum jumps being a possible source of indeterminism but then relegates this to irrational decisions, reserving for rational decisions a more privileged status. That’s KP – ever the pragmatist – if it has no practical value it can be stuck in the not-interesting box and forgotten about. One begins to wonder why he’s even on the philosophy shelf at all.
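
The point at issue here – that strictly deterministic laws needn't yield practical predictability – can be made concrete with a toy computation (a sketch only: the logistic map is a stock example from chaos theory, not anything drawn from the correspondence or from Honderich or Popper). The rule below fixes each state completely from the last, yet two starting points differing by one part in a billion soon disagree across the map's whole range.

```python
# The logistic map x -> r*x*(1-x) is strictly deterministic: each state
# fixes the next one completely. Yet at r = 4 it is chaotic, so any
# imprecision in our knowledge of the starting point is amplified
# (roughly doubled) at every step.

def trajectory(x0, r=4.0, steps=50):
    """Iterate the map, returning the whole deterministic history."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2)           # the world as it is
b = trajectory(0.2 + 1e-9)    # the same laws, known a shade less exactly

print(abs(a[5] - b[5]))       # still minuscule after 5 steps
# After ~30 steps the billionth-part error has grown to fill the range:
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))  # no longer small
```

On this picture the trip on the stairs is like the fortieth iterate: fully determined, yet hopelessly unpredictable in practice.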


22 Aug Alan Dent

What do we mean by determinism ? Isn't it that the universe is ruled by laws ? Now, does it follow that those laws must engender only one possible outcome, ie that there can be no contingency and everything that happens has to happen ? Isn't it possible for the operation of strict laws to produce a multiplicity of outcomes ? If we return to the example of the near extinction of our species some 70,000 years ago, do we have to conclude that survival was assured by the inevitable working out of events ? Just what laws would ensure that a small group of our ancestors managed to avoid the cataclysm ? From one realm to another the laws alter: the laws of physics are not those of chemistry or biology, nor the laws of biology those of psychology or sociology. History is not propelled by the law of gravity. The laws of physics gave rise to chemical reactions which in turn gave rise to biology, but chemistry breaks free of the prior laws, as does biology. When it comes to consciousness (a term which, as Ramachandran says, hides a great deal of ignorance) we are dealing with the laws of the operation of the brain, but much more than that: the working of culture on the brain. This is how consciousness is realized...


26 Aug Ken Clay

I’m beginning to see where your problem lies. But I still can’t see the logic of “Isn't it possible for the operation of strict laws to produce a multiplicity of outcomes?” Laplace’s famous remark* – that if we knew the position and velocity of every particle in the universe we’d be able to predict the future – still holds despite some uncertainties introduced by modern quantum theory. My new Appendix 5 reinforces this assertion and I include it since it’s by a physics prof and bang up to date. Since neither of us is qualified to criticize or hypothesize on the truth of quantum mechanics and its significance for determinism, this is powerful support for Honderich’s startling statement that free will is now a folk myth and determinism the new orthodoxy.

Another confusion, I think, is the idea that chemistry doesn’t obey the laws of physics. It does, and so does biology (think of the DNA molecule), and so do sociology and economics. And history is affected by the law of gravity, since if Harold hadn’t been shot in the eye we’d all be speaking some guttural pidgin rather than the immensely flexible instrument I am writing in now. Nothing escapes the laws of physics. Nowhere can we say “of course this result contradicts the law of gravity or thermodynamics but it’s true nevertheless”. I go further and say that chemistry and biology are entirely reducible to the laws of physics. Economics and sociology are still unpredictable because they deal with large numbers of socially influenced individuals. As we go up the scale of complexity the emergent properties of large-scale aggregations may be new and unknown, but they are still underwritten by the laws of the basic physical elements. For instance, even though we know the exact detailed characteristics of an oxygen and a hydrogen atom it would be very difficult to predict Niagara Falls. Of course in that example we’re working backwards, but you see the problem in the other direction.
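
This idea of emergent behaviour underwritten by, yet not obvious from, the basic laws can be sketched with Conway's Game of Life (a standard illustration, assumed here rather than taken from the letters). Each cell follows one fixed local rule, yet a "glider" that walks across the grid emerges, and nothing in the single-cell rule mentions movement.

```python
# Conway's Game of Life: every cell obeys the same fixed local rule
# (a dead cell with exactly 3 live neighbours is born; a live cell with
# 2 or 3 live neighbours survives; everything else is dead next step).
# Nothing in that rule mentions movement, yet a "glider" -- a shape that
# walks diagonally across the grid -- emerges from it.

from collections import Counter

def step(cells):
    """One deterministic update of a set of live (x, y) cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in cells)}

# The classic five-cell glider.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):  # after four steps the glider repeats its shape...
    state = step(state)

# ...shifted one cell diagonally: emergent motion from a motionless rule.
print(state == {(x + 1, y + 1) for x, y in glider})  # True
```

The behaviour is wholly determined by the rule, yet "glider" is a description that only makes sense at the level of the aggregate – much like Niagara Falls and the water molecule.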

Everything in the universe, including brains, is made of nothing but elementary particles; these combine into molecules of stuff, and neurons are nothing but large accumulations of molecules of stuff, but when you get billions of neurons interconnected you get – consciousness! Who could have predicted that? Dualism, now largely discredited, posits a mental universe running alongside a physical one. Descartes’ insuperable problem was to explain how an immaterial thought impinged on a material body – ie how does a volition make my arm lift? Well, it never could in his world. A desperate attempt to rescue dualism is the odd theory of epiphenomenalism. This states that there are mental phenomena and physical ones but the mental has no causal power over the physical. I just happen to have the thought “I’ll lift my arm” at the same time as my arm lifts – but there’s no effectual causation. Yis – quite barmy. Another solution to the mind/body problem was Russell’s neutral monism, in which he posits that mental and physical qualities are just two aspects of one basic stuff.

I agree with Ram concerning our ignorance of how the brain works – those who say it’s just a big computer and soon we’ll make one which is conscious are crazy. But if we accept that physics best describes the nature and interaction of the basic elements of the universe, and that these interactions are governed by laws (which, admittedly, are provisional – like any scientific theory – just the best interpretation we’ve got up to now), then we’re driven to accept determinism. Unpredictability and novelty are endemic features of reality, but maybe we’re inclined to imagine free will, volition and intentions as creative agents in an open-ended future because that seems the only sensible explanation to our limited intelligence.

I still find it hard to accept determinism, but then a lot of revolutionary ideas are strange at first. Darwinism throws no light on it: evolution would be just as true if determinism were the case – there’s no teleology in it and plenty of novel mutants become rapidly extinct. But if free will is a myth then what evolutionary purpose does it serve? How does it make us fitter to survive? Does an illusion of morality and responsibility for choices promote greater social cohesion? And even if it did, may not this be a temporary, local, species-prolonging strategy in a universe determined to collapse no matter what we do?



* We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.

—Pierre Simon Laplace, A Philosophical Essay on Probabilities


Appendix 1
6.8 CONCLUSION ON THE TRUTH OF THE THEORY OF DETERMINISM From Ted Honderich’s A Theory of Determinism OUP 1988 p373 

Appendix 2 OBJECTIONS TO DETERMINISM From Historical Determinism by W.H. Dray The Encyclopedia of Philosophy Vol 2 - Paul Edwards (Ed) 1967 MacMillan p376 

Appendix 3  The Emerging Mind - REITH LECTURES 2003 RAMACHANDRAN (for the complete Reith lectures go to http://www.bbc.co.uk/radio4/reith2003/)

Appendix 4 OBJECTIVE KNOWLEDGE Karl R Popper OUP 1972 p226 

Appendix 5 From Scientific American  September 2010 p19 Lawrence M. Krauss



From Ted Honderich’s A Theory of Determinism OUP 1988 p373 


At the end of our consideration of neuroscience and Quantum Theory, one final conclusion was drawn. It was that this book's theory of determinism cannot be taken as established, as true. If the most that can ever be said for any theory is something less than that—that it is in some sense confirmed or to some large degree probabilified—then the theory of determinism cannot be taken as in that way confirmed or to that degree probabilified.     

With respect to the question of whether the theory could be judged false, an interim conclusion was drawn. It was said that a consideration of Quantum Theory and of course neuroscience could not establish that the theory of determinism was false, or that it lacked very strong support. On the contrary, it was said, taking into account only neuroscience and Quantum Theory, the theory of determinism could be judged to be very strongly supported. However, there remained to be considered certain philosophical objections to the truth or acceptability of the theory—the objections of which we now know.

Those objections, although the first and last will have further consideration in another form, certainly do not weaken the claim to truth of the theory of determinism.

My final conclusion about the theory—it would be better to say my final official conclusion—is that it is very strongly supported, certainly as well supported as a general theory of indeterminism of the non-mental world, and, further, that it is greatly superior to indeterminist theories of the mind. Attempts have been made, certainly, to specify more precise and graded conceptions of the acceptability of theories—more precise than 'very strongly supported', 'weakly supported', and the like. There seems little to be gained by these attempts to rigorize what is in the end and fundamentally a matter of informed judgement. (Brown, 1987; P. F. Strawson, 1952, pp. 233 ff.) We shall not pursue the matter.

It is my own inescapable conviction, which will be no news, that the determinist theory of this book is true, or at any rate has the strength of, say, the theory of evolution. This conviction, I grant, goes beyond the sum of the evidence for and against. Such a conviction, however, is no idiosyncrasy. It is a conviction about our existence similar to those of very many thinkers in various disciplines, including a number of sciences. Such convictions are had by most thinkers of a certain discriminable cast of mind. They are convictions that stand in connection with the fact that philosophers generally have not begun to give up the question of the consequences of determinism as a question whose traditional presupposition—determinism as against near-determinism (pp. 2, 9, 336)—has been established as false. Quantum Theory has had no such effect in its seventy years.

The convictions in question are fundamental to a long and dominant scientific and philosophical tradition, which has in it, among others, the greatest leaders of science and the most acute of philosophers.

Hume, foremost among the latter, necessarily depended, in large part, on one of the supporting grounds for determinism other than neuroscience, the ground of our ordinary experience of the non-mental world.

It is universally allowed that matter, in all its operations, is actuated by a necessary force, and that every natural effect is so precisely determined by the energy of its cause that no other effect, in such particular circumstances, could possibly have resulted from it. (1902 (1748), p. 82)

There is more reason now for being convinced of a determinist theory of the mind. To revert to the principal ground, it derives from the history of scientific inquiry since Hume's time into the Central Nervous System, above all the brain. We are not at all in Hume's position.

In abbreviation of all of this, but mainly of what was said above of determinism as very strongly supported, I shall speak of the likely truth of the theory of determinism. As will be gathered, that is not to imply that no probability whatever attaches to its denial.


From The Encyclopedia of Philosophy Vol 2 - Paul Edwards (Ed) 1967 MacMillan p376 

From Historical Determinism by W.H. Dray 



It has been objected, first, that history is a realm in which events sometimes occur "by chance"—it being assumed that what happens by chance cannot happen of necessity. Certainly, historians often report what happened in such terms. And chance has been regarded by some of them almost as a principle of historical interpretation. Thus J. B. Bury, in his Later Roman Empire, represented the success of the barbarians in penetrating the Roman Empire as due to a succession of coincidences—the "historical surprise" of the onslaught of the Asiatic Huns, which drove the Goths west and south; the lucky blow that killed a Roman emperor when the Goths engaged a Roman army that just happened to be in their way; the untimely death of that emperor's talented successor before he had arranged for the assimilation of those tribesmen who had settled within the imperial border; the unhappy fact that the two sons who subsequently divided the empire were both incompetent, and so on. Bury's example does at least afford a strong argument against the notion that history is a self-determining system—one of the assumptions of the doctrine of historical inevitability. It illustrates the intrusion of nonhistorical factors into the historical process—an untimely death, for example—Bury's awareness of which led him to object to any search for what he called "general" causes. Bury's example makes clearer, too, the inappropriateness of a science like astronomy as a model for social and historical explanation. For the solar system, unlike human society, is virtually isolated from such external influences. This makes it possible for us to make astronomical predictions without taking into account anything but the description of the state of the system itself at any time and to predict accurately for long periods ahead. In history the situation is very different. The sufficient conditions of historical events are seldom to be found in other historical events.

But does the admission of chance, as Bury described it, count against the whole doctrine of historical determinism in the scientific sense? In support of their claim that it must, historical indeterminists sometimes cite parallels in physical inquiry. Modern subatomic physics, for example, whether correctly or not, has often been said to be indeterministic precisely because it regards certain aspects of the behavior of single electrons as matters of chance. Yet it may be questioned whether any of the contingencies, accidents, or unlucky "breaks" mentioned by Bury were matters of chance in the physicist's sense. For there is no reason to think of any of them as uncaused. What is peculiar about them is that they occur (to use a common phrase) at the intersection of two or more relatively independent causal chains. But there is nothing in such coincidences, determinists will maintain, that enables us to say that what occurs at the "intersections" could not be deduced from prior statements of conditions and appropriate laws, provided we took all the relevant conditions into account.

In practice, of course, a historian may not be in a position to explain why a given coincidence occurred; at least one relevant chain—the biological one leading to the emperor's death, for example—may be beyond the scope of his kind of inquiry. What happened may consequently be represented by him as something unforeseen—perhaps even as the intrusion of the "irrational" into the course of events. Here the notion of chance is extended from the paradigm case where an event is said to have no cause at all to one where the cause is simply unknown because nonhistorical.

The notion is commonly extended further (as Bury's example illustrates) to events whose causes, although not beyond the range of historical inquiry, are beyond the immediate range of the historian's interests—the appearance of the Huns, for example. This makes it misleading to define "chance event" in history, as some have done, as an event that has historical effects but lacks historical causes. The causes of the invasion of the Huns simply lie outside the story the historian is telling. The judgment that a historical event happened by chance is thus a function of what the historian (and his readers) are concerned about. (This also covers the case where "by chance" seems chiefly to mean "unplanned.") It follows that, from one standpoint, an event may properly be judged to be a chance occurrence, while from another it clearly could not be: the activities of the Huns, for example, were scarcely a matter of chance from their own standpoint. Speculative philosophers of history, if they aim to take the additional standpoints of God or "History" into account, will obviously have further problems when deciding whether something was a chance occurrence. The issues thus raised are doubtless of considerable interest for a general account of the logic of historical narration. It is difficult to see, however, that they have any important bearing on the acceptability of historical determinism.


A second consideration often advanced against the determinist assumption is that history is a realm of novelty and that its course must therefore remain not only unforeseen but unforeseeable, even if we take into account the broadest possible range of antecedent conditions. The fact that what the historian discovers is often surprising is thus held to have an objective basis in human creativity, from which periodically there emerge events and conditions with radically novel characteristics. Such "emergence," it is often claimed, rules out the possibility of scientific prediction before the event because prediction is necessarily based on laws and theories that relate types of characteristics already known. In this connection it is interesting to note a "proof" offered by Popper that some historical events at least are unpredictable in principle. If we accept the common assumption that some historical events are dependent in part on the growth of human knowledge, Popper pointed out, then it is logically impossible that we should be able to predict them before they occur. For ex hypothesi, one of their conditions must remain unknown to us.

Confronted by such an argument, determinists would want to make clear that, as they conceive it, determinism does not entail predictability, even though it has, unfortunately, sometimes been defined in terms of predictability. An event can be determined even though it is not known to be so. Popper himself did not regard the argument cited above as counting against historical determinism; indeed, his own statement of it strongly suggested that the unpredictability of the events in question actually follows from their being determined in a certain way, that is, by a set of conditions that are less than sufficient in the absence of as yet unattained human knowledge. All that is required by the doctrine of determinism, however, is that events have sufficient conditions, whether or not they can be known before the fact. It would thus be better, perhaps, to define the notion in terms of explicability rather than predictability. Determinists often point out that the emergent characteristics of natural things can be explained in the scientific sense, although they could not have been predicted before they first emerged. In his "Determinism in History," Ernest Nagel cited the emergence of the qualities of water out of a combination of hydrogen and oxygen. These are emergent and novel in the sense of not being possessed by the original elements and not being deducible from information about the behavior of these elements in isolation. Yet we have been able to frame laws governing the emergence of these originally novel attributes under specifiable conditions that allow us to deduce and now even to predict the attributes.

A likely reply is that whereas the emergence of the characteristics of water is a recurring, experimentally testable phenomenon, the emergence of novelty in the course of history is not. At least some historical events and conditions, it may be said, are unique and hence not subject to scientific explanation even after the fact. In considering this rejoinder, however, it is important not to misunderstand the claims of scientific determinism. For these do not include the deducibility in principle of the occurrence of historical events "in all their concrete actuality." Only events as historians represent them in their narratives are said to be so deducible. And their descriptions of events, it will be argued, are necessarily phrased in terms that apply, although not necessarily in the same combinations, to events at other times and places.

It may of course be doubted that we shall ever actually discover the determining conditions of such historical novelties as Alexander's use of the phalanx, Caesar Augustus' imperial policy, or the organization of the medieval church, under descriptions as highly detailed as historians customarily apply to them—a problem scarcely touched by the consideration, advanced by Nagel, that social science has sought, with some measure of success, to discover the conditions under which men act creatively. Yet determinists will regard these as merely "practical" difficulties, not bearing on the basic issue. That issue, they will maintain, is whether the novelties that can be recognized by historical inquiry are such as to rule out their subsumability under laws "in principle." Unless historians' knowledge can be said to go beyond any description of such novelties in terms of a unique conjunction of recurring characteristics, the argument from historical novelty will be deemed to have missed its mark.

In fact, this further and highly debatable claim is one that some historical theorists would be quite prepared to make. They would point out, for example, that we can listen to Mozart's music and read Newton's scientific writings—two examples of creativity cited by Nagel—and, by thus enjoying direct acquaintance with radical historical novelty, discover more than could be conveyed by any description in terms of recurring characteristics. Ordinary historical knowledge of novel military tactics, imperial policies, or institutional organizations, they would maintain, would similarly go beyond what could be expressed without reference, either explicitly or implicitly, to named individuals, groups, or periods. They would consequently represent historical narrative as employing concrete universals—like "Renaissance" or "Gothic"—as well as abstract ones. And since scientific laws can be framed only in terms of abstract universals, they would claim that warranted assertions of novelty expressed in terms of concrete universals do undermine the assumption of determinism.


A third and even more common argument against accepting a determinist view of historical events turns on the claim that history is a realm not only of chance and novelty but of human freedom. The subject matter of history, it is sometimes said, is not mere "events" but human "actions," in a distinctive sense quite familiar to plain men who deliberate and decide what to do. If the historian is not to misrepresent such a subject matter, the argument goes, then he must take seriously the notion of choosing between alternatives. As Johan Huizinga expressed it, in his "Idea of History" (in Fritz Stern, ed., The Varieties of History), "the historian must put himself at a point in the past at which the known factors still seem to permit different outcomes. If he speaks of Salamis, then it must be as if the Persians might still win." In Historical Inevitability, Isaiah Berlin gave a further and even more familiar reason for adopting the standpoint of "agency." "If determinism were true, . . ." he wrote, "the notion of human responsibility, as ordinarily understood, would no longer apply." For an ascription of responsibility requires the assumption that the agent was "in control," that he could have acted otherwise than he did. Historical accounts, in other words, like the moralistic ones plain men ordinarily give of their own and others' actions, presuppose "freedom of the will." And this is held to be incompatible with the assumption of determinism.

Few philosophical problems have been discussed as exhaustively (or as inconclusively) as the problem of freedom of the will, and it is quite impossible in this context to do justice to the subtleties involved. There are, however, two chief ways of handling the present objection. Historical determinists can try to explain away the problem of freedom by arguing that, although moralistic accounts properly regard historical agents as free, the sense in which they must do so is quite compatible with the deterministic assumption. Libertarians, correspondingly, can try to give an account of historic causation that does not rule out an action's being both caused and undetermined. For historians, either of these ways out of the difficulty would presumably be more acceptable than the outright denial of the legitimacy of either moral appraisal or causal explanation in historical accounts. For, with no obvious sign of strain, historians generally offer both.

The determinist case often turns on the contention that the sense of freedom involved in attributing responsibility to a moral agent is not the "could have done otherwise" of absolute indeterminism; that sense implies only that the agent would have done otherwise if certain antecedents—his circumstances or his character, for example—had been a little different. Indeed, it is often argued that the test of whether the agent is really "in control," and hence responsible, is whether he acts differently on another occasion when the conditions have been changed—say, by his having been praised or blamed, rewarded or punished. It is therefore not the agent's freedom in the sense of his action's being uncaused that is at stake. The determinist, in arguing this way, conceives himself, furthermore, as accepting, not rejecting, the notion that the moral categories the historian uses are those of the plain man. What is denied is that the "ordinary" sense of "free" is the unconditional "freedom of the will" of the metaphysicians. As for Huizinga's claim that the historian must think of the agent's problem as if there were real possibilities open to him, this would be regarded as a purely methodological point. What is brought out thereby is the applicability to actions of a concept of understanding that requires us, quite properly, to view them in relation to what the agents thought about their situations, including any illusions they may have had about them.

Many libertarians might accept the latter contention. But most would surely repudiate the claim that responsibility requires freedom only in a sense compatible with determinism. To ascribe responsibility to a person whose actions necessarily follow from antecedent events, Berlin declared, is "stupid and cruel," and he meant rationally incoherent, not just foolish. In a sense alleged to be central to our notion of responsibility, such a person could not have done otherwise. Must a libertarian who takes such a stand, then, abandon the possibility of explaining actions causally? Some, at least, would say, No, provided we recognize that the term "cause," when applied to human actions, bears a special sense. Thus, according to R. G. Collingwood, the causes (in a distinctively historical sense) of "the free and deliberate act of a conscious and responsible agent" are to be sought in the agent's "thought" about his situation, his reasons for deciding to act (Essay on Metaphysics). What a libertarian will deny is that any combination of such "rational" causes that excludes the agent's decision to act—since the latter falls into the historian's explanandum, not his explanans—is a sufficient condition of his action. Such causes become "effective," it might be said, only through an agent's deciding to act upon them. Yet when he does so, reference to them as his "reasons" will explain what he did in the sense of making it understandable. What such reference will not and need not do is explain his action in the sense of showing its performance to be deducible from sufficient antecedent conditions.

It is generally agreed that the conflict between historical determinists and indeterminists cannot be resolved by the offering of proofs or disproofs. Modern scientific determinists, in any case, seldom state their position dogmatically. According to Nagel, for example, all that can be claimed is that the principle of determinism has "regulative" status as a presupposition of the possibility of scientific inquiry—a principle which must therefore govern the scientific study of history as well. What is particularly interesting about theories of rational causation is the conceptual foundation they offer for denying that the principle of determinism is a necessary presupposition even of seeking explanations when the subject matter is human action: they show at least the conceivability of explanatory inquiry on libertarian principles. It must be conceded, however, that few contemporary philosophers regard indeterminism as an acceptable assumption to carry into historical or social investigation.


For examples of determinist or near-determinist views of history, see H. T. Buckle, A History of Civilization in England (London, 1899) or E. Huntington, Mainsprings of Civilization (New York, 1945). The works of various speculative and single-factor theorists mentioned above may also be consulted: Patrick Gardiner's Theories of History (Glencoe, Ill., 1959) contains relevant extracts from the works of Vico, Hegel, Marx, Plekhanov, Buckle, Tolstoy, Spengler, Toynbee, Croce, and Collingwood. For a contemporary attack on deterministic views, both of the scientific and metaphysical kinds, see Isaiah Berlin, Historical Inevitability (London, 1954) and the reply offered by E. H. Carr in What Is History? (London, 1961). For a moderate defense of the deterministic assumption against such attacks, see Ernest Nagel, "Determinism in History," Philosophy and Phenomenological Research, Vol. 20 (1960), 291-317. The viability of indeterministic historical and social scientific inquiry is argued for in Alan Donagan, "Social Science and Historical Antinomianism," Revue Internationale de Philosophie, Vol. 11 (1957), 433-449. The role of the individual in history is discussed in Sidney Hook, The Hero in History (New York, 1943). Johan Huizinga's "Idea of History" is included in English translation in Fritz Stern, ed., The Varieties of History (New York, 1956), pp. 290-303. The claim that historians use "cause" in a special sense is developed by R. G. Collingwood in An Essay on Metaphysics (Oxford, 1940), which should be read in conjunction with his The Idea of History (Oxford, 1946). See further the bibliography to GREAT MAN THEORY OF HISTORY.

W. H. Dray



From The Emerging Mind - Reith Lectures 2003 - Ramachandran
Now imagine a patient with a right hemisphere stroke, left side paralyzed. The patient is sending a command to move the arm, he is getting a visual signal saying it is not moving, so there is a discrepancy. His right hemisphere is damaged; his left hemisphere goes about its job of denial and confabulation, smoothing over the discrepancy and saying, all is fine, don't worry. On the other hand, if the left hemisphere is damaged and the right side is paralyzed, the right hemisphere is functioning fine, notices the discrepancy between the motor command and the lack of visual feedback and says, my god, you are paralyzed. This was an outlandish idea but it's now been tested with brain imaging experiments and shown to be essentially correct.

Now this syndrome is quite bizarre - a person denying that he or she is paralyzed - but what we found about seven or eight years ago is something even more amazing: some of these patients will deny that another patient is paralyzed. So my patient is sitting here, and another patient is sitting in a wheelchair - I'll call him patient B - and I tell patient B to move his arm. Patient B of course is paralyzed and doesn't move. Then I ask my patient - is that patient moving his arm? And the patient says yes, of course he is moving his arm. He is engaging in denial of other people's disabilities.

Now at first this didn't make any sense to me. Then I came across some studies by Giacomo Rizzolatti, experiments done on monkeys. If you record from parts of the frontal lobes which are concerned with motor commands, you find there are cells which fire when the monkey performs certain specific movements: one cell will fire when the monkey reaches out and grabs a peanut, another cell will fire when the monkey pulls something, yet another cell when the monkey pushes something. That's well known. These are motor command neurons. But Rizzolatti found that some of these neurons will also fire when the monkey watches another monkey performing the same action. So you find a peanut-grabbing neuron which fires when the monkey grabs a peanut - and when the monkey watches another monkey grab a peanut, it fires too. It's quite extraordinary, because the visual image of somebody else grabbing the peanut is utterly different; you have to do an internal mental transformation to do that computation and for that neuron to fire. Rizzolatti calls these mirror neurons. Another name for them is monkey-see monkey-do neurons, and these neurons, I think, are the ones that are damaged in these patients.

Because think about what's involved in judging somebody else's movements. Maybe you need to do a virtual reality internal simulation of what that person is doing, and that may involve the activity of these very same neurons, these mirror neurons. So these mirror neurons, instead of being some kind of curiosity, hold important implications for understanding many aspects of human nature: how you read somebody else's movements, their intentions, their actions - many aspects of what is called a theory of other minds, a sophisticated theory of other people's behaviour. We think it is this system of neurons that is damaged in these patients. The patient can therefore no longer construct an internal model of somebody else's actions.

I also want to argue that these neurons may have played an important role in human evolution, and I am going to talk about this at length in my Oxford lecture on the emergence of language and abstract thinking. Because think about it: one of the hallmarks of our species is what we call culture, and culture depends crucially on imitation - of your parents, of your teachers - and the imitation of complex skills may require the participation of mirror neurons. So what I'm arguing is that somewhere around 50,000 years ago the mirror neuron system became sufficiently sophisticated that there was an explosive evolution of this ability to mime complex actions, in turn leading to the cultural transmission of information, which is what characterises us humans.



So we’ve talked about hysterical patients with hysterical paralysis. Now let’s go back to normals and do a PET scan when you’re voluntarily moving your finger using your free will. A second to three-fourths of a second prior to moving your finger, I get the EEG potential and it’s called the Readiness Potential. It’s as though the brain events are kicking in a second prior to your actual finger movement, even though your conscious intention of moving the finger coincides almost exactly with the wiggle of the finger. Why? Why is the mental sensation of willing the finger delayed by a second, coming a second after the brain events kick in as monitored by the EEG? What might the evolutionary rationale be?

The answer is, I think, that there is an inevitable neural delay before the signal arising in the brain cascades through the brain and the message arrives to wiggle your finger. There's going to be a delay because of neural processing - just like the satellite interviews on TV which you've all been watching. So natural selection has ensured that the subjective sensation of willing is delayed deliberately to coincide not with the onset of the brain commands but with the actual execution of the command by your finger, so that you feel you're moving it.

And this in turn is telling you something important. It's telling you that the subjective sensations that accompany brain events must have an evolutionary purpose. For if they had no purpose and merely accompanied brain events - as many philosophers believe (this is called epiphenomenalism) - in other words, if the subjective sensation of willing were like a shadow that moves with you as you walk but plays no causal role in making you move, then why would evolution bother delaying the signal so that it coincides with your finger movement?

So you see the amazing paradox is that on the one hand the experiment shows that free will is illusory, right? It can't be causing the brain events because the events kick in a second earlier. But on the other hand it has to have some function because if it didn't have a function, why would evolution bother delaying it? But if it does have a function, what could it be other than moving the finger? So maybe our very notion of causation requires a radical revision here as happened in quantum physics. OK, enough of free will. It's all philosophy!



OK, it's time to conclude now. I hope that I've convinced you that even though the behaviour of many patients with mental illness seems bizarre, we can now begin to make sense of the symptoms using our knowledge of basic brain mechanisms. You can think of mental illness as disturbances of consciousness and of self, two words that conceal depths of ignorance. Let me try to summarise in the remaining five or ten minutes what my own view of consciousness is. There are really two problems here - the problem of the subjective sensations or qualia and the problem of the self. The problem of qualia is the more difficult one.

The question is: how does the flux of ions in little bits of jelly in my brain give rise to the redness of red, the flavour of marmite or mattar paneer, or wine? Matter and mind seem so utterly unlike each other. Well, one way out of this dilemma is to think of them really as two different ways of describing the world, each of which is complete in itself. Just as we can describe light as made up of particles or waves - and there's no point in asking which is correct, because they're both correct and yet utterly unlike each other. And the same may be true of mental events and physical events in the brain.

But what about the self? The last remaining great mystery in science, it's something that everybody's interested in - and especially if you're from India, like me. Now obviously self and qualia are two sides of the same coin. You can't have free-floating sensations or qualia with no-one to experience them, and you can't have a self completely devoid of sensory experiences, memories or emotions. For example, as we saw in Cotard's syndrome, sensations and perceptions lose all their significance and meaning - and this leads to a dissolution of self.

What exactly do people mean when they speak of the self? Its defining characteristics are fourfold. First of all, continuity. You've a sense of time, a sense of past, a sense of future. There seems to be a thread running through your personality, through your mind. Second, closely related is the idea of unity or coherence of self. In spite of the diversity of sensory experiences, memories, beliefs and thoughts, you experience yourself as one person, as a unity.

So there's continuity, there's unity. And then there's the sense of embodiment or ownership - yourself as anchored to your body. And fourth is a sense of agency, what we call free will, your sense of being in charge of your own destiny. I moved my finger.
Now as we've seen in my lectures so far, these different aspects of self can be differentially disturbed in brain disease, which leads me to believe that the self really isn't one thing, but many. Just like love or happiness, we have one word but it's actually lumping together many different phenomena. For example, if I stimulate your right parietal cortex with an electrode (you're conscious and awake) you will momentarily feel that you are floating near the ceiling watching your own body down below. You have an out-of-the-body experience. The embodiment of self is abandoned. One of the axiomatic foundations of your Self is temporarily abandoned. And this is true of each of those aspects of self I was talking about. They can be selectively affected in brain disease.

Keeping this in mind, I see three ways in which the problem of self might be tackled by neuroscience. First, maybe the problem of self is a straightforward empirical problem. Maybe there is a single, very elegant, Pythagorean Aha! solution to the problem, just like DNA base-pairing was a solution to the riddle of heredity. I think this is unlikely, but I could be wrong.

Second, given my earlier remarks about the self, the notion of the self as being defined by a set of attributes - embodiment, agency, unity, continuity - maybe we will succeed in explaining each of these attributes individually in terms of what's going on in the brain. Then the problem of what is the self will vanish or recede into the background.

Third, maybe the solution to the problem of the self won't be a straightforward empirical one. It may instead require a radical shift in perspective, the sort of thing that Einstein did when he rejected the assumption that things can move at arbitrarily high velocities. When we finally achieve such a shift in perspective, we may be in for a big surprise and find that the answer was staring at us all along. I don't want to sound like a New Age guru, but there are curious parallels between this idea and the Hindu philosophical view that there is no essential difference between self and others or that the self is an illusion.

Now I have no clue what the solution to the problem of self is, what the shift in perspective might be. If I did I would dash off a paper to Nature today, and overnight I'd be the most famous scientist alive. But just for fun let me have a crack at it, at what the solution might look like.

Our brains are essentially model-making machines. We need to construct useful virtual reality simulations of the world that we can act on. Within the simulation, we also need to construct models of other people's minds, because we're intensely social creatures, us primates. We need to do this so we can predict their behaviour. We are, after all, the Machiavellian primate. For example, you want to know whether what he did was a wilful action, in which case he might repeat it, or whether it was involuntary, in which case it's quite benign. Indeed evolution may have given us this skill even before self-awareness emerged in the brain. But once this mechanism is in place, you can also apply it to the particular creature who happens to occupy this particular body, called Ramachandran.

At a very rudimentary level this is what happens each time a new-born baby mimics your behaviour. Stick your tongue out next time you see a new-born baby and the baby will stick its tongue out, mimicking your behaviour, instantly dissolving the boundary, the arbitrary barrier, between self and others. And we even know that this is carried out by a specific group of neurons in the brain, in your frontal lobes, called the mirror neurons. The bonus from this might be self-awareness.

With this I'd like to conclude this whole series of lectures. As I said in my first lecture, my goal was not to give you a complete survey of our knowledge of the brain. That would take fifty hours, not five. But I hope I've succeeded in conveying to you the sense of excitement that my colleagues and I experience each time we try to tackle one of these problems, whether you're talking about hysteria, phantom limbs, free will, the meaning of art, denial, or neglect or any one of these syndromes which we talked about in earlier lectures. Second, I hope I've convinced you that by studying these strange cases and asking the right questions, we neuroscientists can begin to answer some of those lofty questions that thinking people have been preoccupied with since the dawn of history. What is free will? What is body image? What is the self? Who am I? - questions that until recently were the province of philosophy.
No enterprise is more vital for the wellbeing and survival of the human race. This is just as true now as it was in the past. Remember that politics, colonialism, imperialism and war also originate in the human brain.




Karl R Popper 

OUP 1972 p226 

Like Compton I am among those who take the problem of physical determinism seriously, and like Compton I do not believe that we are mere computing machines (though I readily admit that we can learn a great deal from computing machines—even about ourselves). Thus, like Compton, I am a physical indeterminist: physical indeterminism, I believe, is a necessary prerequisite for any solution of our problem. We have to be indeterminists; yet I shall try to show that indeterminism is not enough.

With this statement, indeterminism is not enough, I have arrived, not merely at a new point, but at the very heart of my problem.

The problem may be explained as follows.

If determinism is true, then the whole world is a perfectly running flawless clock, including all clouds, all organisms, all animals, and all men. If, on the other hand, Peirce's or Heisenberg's or some other form of indeterminism is true, then sheer chance plays a major role in our physical world. But is chance really more satisfactory than determinism?

The question is well known. Determinists like Schlick have put it in this way: '. . . freedom of action, responsibility, and mental sanity, cannot reach beyond the realm of causality: they stop where chance begins. ... a higher degree of randomness .. . [simply means] a higher degree of irresponsibility.'

I may perhaps put this idea of Schlick's in terms of an example I have used before: to say that the black marks made on white paper which I produced in preparation for this lecture were just the result of chance is hardly more satisfactory than to say that they were physically predetermined. In fact, it is even less satisfactory. For some people may perhaps be quite ready to believe that the text of my lecture can be in principle completely explained by my physical heredity, and my physical environment, including my upbringing, the books I have been reading, and the talks I have listened to; but hardly anybody will believe that what I am reading to you is the result of nothing but chance—just a random sample of English words, or perhaps of letters, put together without any purpose, deliberation, plan, or intention.

The idea that the only alternative to determinism is just sheer chance was taken over by Schlick, together with many of his views on the subject, from Hume, who asserted that 'the removal' of what he called 'physical necessity' must always result in 'the same thing with chance. As objects must either be conjoin'd or not, . . . 'tis impossible to admit of any medium betwixt chance and an absolute necessity'.

I shall later argue against this important doctrine according to which the only alternative to determinism is sheer chance. Yet I must admit that the doctrine seems to hold good for the quantum-theoretical models which have been designed to explain, or at least to illustrate, the possibility of human freedom. This seems to be the reason why these models are so very unsatisfactory.

Compton himself designed such a model, though he did not particularly like it. It uses quantum indeterminacy, and the unpredictability of a quantum jump, as a model of a human decision of great moment. It consists of an amplifier which amplifies the effect of a single quantum jump in such a way that it may either cause an explosion or destroy the relay necessary for bringing the explosion about. In this way one single quantum jump may be equivalent to a major decision. But in my opinion the model has no similarity to any rational decision. It is, rather, a model of a kind of decision-making where people who cannot make up their minds say: 'Let us toss a penny.' In fact, the whole apparatus for amplifying a quantum jump seems rather unnecessary: tossing a penny, and deciding on the result of the toss whether or not to pull a trigger, would do just as well. And there are of course computers with built-in penny-tossing devices for producing random results, where such are needed.

It may perhaps be said that some of our decisions are like penny-tosses: they are snap-decisions, taken without deliberation, since we often do not have enough time to deliberate. A driver or a pilot has sometimes to take a snap-decision like this; and if he is well trained, or just lucky, the result may be satisfactory; otherwise not.

I admit that the quantum-jump model may be a model for such snap-decisions; and I even admit that it is conceivable that something like the amplification of a quantum-jump may actually happen in our brains if we make a snap-decision. But are snap-decisions really so very interesting? Are they characteristic of human behaviour—of rational human behaviour?

I do not think so; and I do not think that we shall get much further with quantum jumps. They are just the kind of examples which seem to lend support to the thesis of Hume and Schlick that perfect chance is the only alternative to perfect determinism. What we need for understanding rational human behaviour—and indeed, animal behaviour—is something intermediate in character between perfect chance and perfect determinism—something intermediate between perfect clouds and perfect clocks.

Hume's and Schlick's ontological thesis that there cannot exist anything intermediate between chance and determinism seems to me not only highly dogmatic (not to say doctrinaire) but clearly absurd; and it is understandable only on the assumption that they believed in a complete determinism in which chance has no status except as a symptom of our ignorance. (But even then it seems to me absurd, for there is, clearly, something like partial knowledge, or partial ignorance.) For we know that even highly reliable clocks are not really perfect, and Schlick (if not Hume) must have known that this is largely due to factors such as friction—that is to say, to statistical or chance effects. And we also know that our clouds are not perfectly chance-like, since we can often predict the weather quite successfully, at least for short periods.


From Scientific American, September 2010, p. 19

Lawrence M. Krauss – Foundation Professor, Arizona State University

No area of physics stimulates more nonsense in the public arena than quantum mechanics—and with good reason. No one intuitively understands quantum mechanics because all of our experience involves a world of classical phenomena where, for example, a baseball thrown from pitcher to catcher seems to take just one path, the one described by Newton's laws of motion. Yet at a microscopic level, the universe behaves quite differently. Electrons traveling from one place to another do not take any single path but instead, as Feynman first demonstrated, take every possible path at the same time.

Moreover, although the underlying laws of quantum mechanics are completely deterministic—I need to repeat this, they are completely deterministic—the results of measurements can only be described probabilistically. This inherent uncertainty, enshrined most famously in the Heisenberg uncertainty principle, implies that various combinations of physical quantities can never be measured with absolute accuracy at the same time. Associated with that fact, but in no way equivalent to it, is the dilemma that when we measure a quantum system, we often change it in the process, so that the observer may not always be separated from that which is observed.

When science becomes this strange, it inevitably generates possibilities for confusion, and with confusion comes the opportunity for profit. I hereby wish to bestow my Worst Abusers of Quantum Mechanics for Fun and Profit (but Mostly Profit) award on the following:

DEEPAK CHOPRA: I have read numerous pieces by him on why quantum mechanics provides rationales for everything from the existence of God to the possibility of changing the past. Nothing I have ever read, however, suggests he has enough understanding of quantum mechanics to pass an undergraduate course I might teach on the subject.

THE SECRET: This best-selling book, which spawned a self-help industry, seems to be built in part on the claim that quantum physics implies a "law of attraction" that suggests good thoughts will make good things happen. It doesn't.

TRANSCENDENTAL MEDITATION: TMers argue that they can fly by achieving a "lower quantum-mechanical ground state" and that the more people who practice TM, the less violent the world will become. This last idea at least is in accord with quantum mechanics, to the extent that if everyone on the planet did nothing but meditate there wouldn't be time for violence (or acts of kindness, either).

For the record: Quantum mechanics does not deny the existence of objective reality. Nor does it imply that mere thoughts can change external events. Effects still require causes, so if you want to change the universe, you need to act on it.

Feynman once said, "Science is imagination in a straitjacket." It is ironic that in the case of quantum mechanics, the people without the straitjackets are generally the nuts.