Fodor’s critique of neo-Darwinism

The philosopher Jerry Fodor (1935-2017) has just died.
Here is a brief reprise by Stephen Metcalf in The New Yorker of Fodor’s compelling critique of modern Darwinism.  For some reason, Fodor was ostracised by many philosophers and biologists for this critique.
An excerpt:

“Neo-Darwinism is taken as axiomatic,” he wrote in “What Darwin Got Wrong,” co-written with Massimo Piattelli-Palmarini, a cognitive scientist, and published in 2010. “It goes literally unquestioned. A view that looks to contradict it, either directly or by implication, is ipso facto rejected, however plausible it may otherwise seem.” Fodor thought that the neo-Darwinists had confused the loyalty oath of modernity—nature is without conscious design, species evolve over time, the emergence of Homo sapiens was without meaning or telos—with blind adherence to the fallacy known as “natural selection.” That species are a product of evolutionary descent was uncontroversial to Fodor, an avowed atheist; that the mechanism guiding the process was adaptation via a competition for survival—this, Fodor believed, had to be wrong.
Fodor attacked neo-Darwinism on a purely conceptual and scientific basis—its own turf, in other words. He thought that it suffered from a “free rider” problem: too many of our phenotypic traits have no discernible survival value, and therefore could not plausibly be interpreted as products of adaptation. “Selection theory cannot distinguish the trait upon which fitness is contingent from the trait that has no effect on fitness (and is merely a free rider),” he wrote. “Advertising to the contrary notwithstanding, natural selection can’t be a general mechanism that connects phenotypic variation with variation in fitness. So natural selection can’t be the mechanism of evolution.”
“What Darwin Got Wrong” was greeted with dismissive howls—and it is possible Fodor got the biology wrong.  But he got the ideology exactly right. Fodor was interested in how the distinction between an adaptation and a free rider might apply to our own behavior. It seems obvious to us that the heart is for circulating blood and not for making thump-thump noises. (Fodor did not believe this “for” was defensible, either, but that is for another day.) Pumping is therefore an “adaptation,” the noise is a “free rider.” Is there really a bright sociobiological line dividing, say, the desire to mate for life from the urge to stray? The problem isn’t that drawing a line is hard; it’s that it’s too easy: you simply call the behavior you like an adaptation, the one you don’t like a free rider. Free to concoct a just-so story, you may now encode your own personal biases into something called “human nature”.
Once you’ve made that error, the nonfiction best-seller list is yours for the asking. Everyone loves a mirror disguised as a windowpane: you tell whatever story your readership wants to hear, about whatever behavior it wants to see dignified. So the habits of successful people have been made, over the past thirty years, into derivatives of the savannah and the genetic eons, and “natural selection” has been stretched from a bad metaphor into an industry. Nobody was better at exposing this silliness than Fodor, whose occasional review-essays in the L.R.B. were masterpieces of a plainspoken and withering sarcasm. To Steven Pinker’s suggestion that we read fiction because “it supplies us with a mental catalogue of the fatal conundrums we might face someday,” for instance, Fodor replied, “What if it turns out that, having just used the ring that I got by kidnapping a dwarf to pay off the giants who built me my new castle, I should discover that it is the very ring that I need in order to continue to be immortal and rule the world?”

Bibliography:
Stephen Metcalf [2017]: Jerry Fodor’s Enduring Critique of Neo-Darwinism. The New Yorker. December 12, 2017.

Minds do precisely everything computers do not do: Hart on Dennett

Daniel Dennett thinks consciousness is an illusion.  This claim is refuted by a moment’s introspection, but perhaps some philosophers are less trustful of introspection than is everyone else.  Certainly, based on this particular case, some philosophers would have good reason to distrust their own reasoning abilities.
It is nice to read a rebuttal of Dennett’s manifestly ridiculous idea by another philosopher, David Bentley Hart (writing in The New Atlantis).  Here is an excerpt:

Dennett is an orthodox neo-Darwinian, in the most gradualist of the sects. Everything in nature must for him be the result of a vast sequence of tiny steps. This is a fair enough position, but the burden of any narrative of emergence framed in those terms is that the stochastic logic of the tale must be guarded with untiring vigilance against any intrusion by “higher causes.” But, where consciousness is concerned, this may very well be an impossible task.
The heart of Dennett’s project, as I have said, is the idea of “uncomprehending competences,” molded by natural selection into the intricate machinery of mental existence. As a model of the mind, however, the largest difficulty this poses is that of producing a credible catalogue of competences that are not dependent for their existence upon the very mental functions they supposedly compose.
Certainly Dennett fails spectacularly in his treatment of the evolution of human language. As a confirmed gradualist in all things, he takes violent exception to any notion of an irreducible, innate, universal grammar, like that proposed by Noam Chomsky, Robert Berwick, Richard Lewontin, and others. He objects even when those theories reduce the vital evolutionary saltation between pre-linguistic and linguistic abilities to a single mutation, like the sudden appearance in evolutionary history of the elementary computational function called “Merge,” which supposedly all at once allowed for the syntactic combination of two distinct elements, such as a noun and a verb.
Fair enough. From Dennett’s perspective, after all, it would be hard to reconcile this universal grammar — an ability that necessarily began as an internal faculty of thought, dependent upon fully formed and discrete mental concepts, and only thereafter expressed itself in vocal signs — with a truly naturalist picture of reality. So, for Dennett, language must have arisen out of social practices of communication, rooted in basic animal gestures and sounds in an initially accidental association with features of the environment. Only afterward could these elements have become words, spreading and combining and developing into complex structures of reference. There must then, he assumes, have been “proto-languages” that have since died away, liminal systems of communication filling up the interval between animal vocalizations and human semiotic and syntactic capacities.
Unfortunately, this simply cannot be. There is no trace in nature even of primitive languages, let alone proto-languages; all languages possess a full hierarchy of grammatical constraints and powers. And this is not merely an argument from absence, like the missing fossils of all those dragons or unicorns that must have once existed. It is logically impossible even to reverse-engineer anything that would qualify as a proto-language. Every attempt to do so will turn out secretly to rely on the syntactic and semiotic functions of fully developed human language. But Dennett is quite right about how immense an evolutionary saltation the sudden emergence of language would really be. Even the simple algorithm of Merge involves, for instance, a crucial disjunction between what linguists call “structural proximity” and “linear proximity” — between, that is, a hypotactic or grammatical connection between parts of a sentence, regardless of their spatial and temporal proximity to one another, and the simple sequential ordering of signifiers in that sentence. Without such a disjunction, nothing resembling linguistic practice is possible; yet that disjunction can itself exist nowhere except in language.
Dennett, however, writes as if language were simply the cumulative product of countless physical ingredients. It begins, he suggests, in mere phonology. The repeated sound of a given word somehow embeds itself in the brain and creates an “anchor” that functions as a “collection point” for syntactic and semantic meanings to “develop around the sound.” But what could this mean? Are semiotic functions something like iron filings and phonemes something like magnets? What is the physical basis for these marvelous congelations in the brain? The only possible organizing principle for such meanings would be that very innate grammar that Dennett denies exists — and this would seem to require distinctly mental concepts. Not that Dennett appears to think the difference between phonemes and concepts an especially significant one. He does not hesitate, for instance, to describe the “synanthropic” aptitudes that certain organisms (such as bedbugs and mice) acquire in adapting themselves to human beings as “semantic information” that can be “mindlessly gleaned” from the “cycle of generations.”
But there is no such thing as mindless semantics. True, it is imaginable that the accidental development of arbitrary pre-linguistic associations between, say, certain behaviors and certain aspects of a physical environment might be preserved by natural selection, and become beneficial adaptations. But all semantic information consists in the interpretation of signs, and of conventions of meaning in which signs and references are formally separable from one another, and semiotic relations are susceptible of combination with other contexts of meaning. Signs are intentional realities, dependent upon concepts, all the way down. And between mere accidental associations and intentional signs there is a discontinuity that no gradualist — no pleonastic — narrative can span.
Similarly, when Dennett claims that words are “memes” that reproduce like a “virus,” he is speaking pure gibberish. Words reproduce, within minds and between persons, by being intentionally adopted and employed.
Here, as it happens, lurks the most incorrigibly problematic aspect of Dennett’s project. The very concept of memes — Richard Dawkins’s irredeemably vague notion of cultural units of meaning or practice that invade brains and then, rather like genetic materials, thrive or perish through natural selection — is at once so vapid and yet so fantastic that it is scarcely tolerable as a metaphor. But a depressingly substantial part of Dennett’s argument requires not only that memes be accorded the status of real objects, but that they also be regarded as concrete causal forces in the neurology of the brain, whose power of ceaseless combination creates most of the mind’s higher functions. And this is almost poignantly absurd.
Perhaps it is possible to think of intentional consciousness as having arisen from an improbable combination of purely physical ingredients — even if, as yet, the story of that seemingly miraculous metabolism of mechanism into meaning cannot be imagined. But it seems altogether bizarre to think of intentionality as the product of forces that would themselves be, if they existed at all, nothing but acts of intentionality. What could memes be other than mental conventions, meanings subsisting in semiotic practices? As such, their intricate interweaving would not be the source, but rather the product, of the mental faculties they inhabit; they could possess only such complexity as the already present intentional powers of the mind could impose upon them. And it is a fairly inflexible law of logic that no reality can be the emergent result of its own contingent effects.
This is why, also, it is difficult to make much sense of Dennett’s claim that the brain is “a kind of computer,” and mind merely a kind of “interface” between that computer and its “user.” The idea that the mind is software is a fairly popular delusion at the moment, but that hardly excuses a putatively serious philosopher for perpetuating it — though admittedly Dennett does so in a distinctive way. Usually, when confronted by the computational model of mind, it is enough to point out that what minds do is precisely everything that computers do not do, and therein lies much of a computer’s usefulness.
Really, it would be no less apt to describe the mind as a kind of abacus. In the physical functions of a computer, there is neither a semantics nor a syntax of meaning. There is nothing resembling thought at all. There is no intentionality, or anything remotely analogous to intentionality or even to the illusion of intentionality. There is a binary system of notation that subserves a considerable number of intrinsically mindless functions. And, when computers are in operation, they are guided by the mental intentions of their programmers and users, and they provide an instrumentality by which one intending mind can transcribe meanings into traces, and another can translate those traces into meaning again. But the same is true of books when they are “in operation.” And this is why I spoke above of a “Narcissan fallacy”: computers are such wonderfully complicated and versatile abacuses that our own intentional activity, when reflected in their functions, seems at times to take on the haunting appearance of another autonomous rational intellect, just there on the other side of the screen. It is a bewitching illusion, but an illusion all the same. And this would usually suffice as an objection to any given computational model of mind.
But, curiously enough, in Dennett’s case it does not, because to a very large degree he would freely grant that computers only appear to be conscious agents. The perversity of his argument, notoriously, is that he believes the same to be true of us.
For Dennett, the scientific image is the only one that corresponds to reality. The manifest image, by contrast, is a collection of useful illusions, shaped by evolution to provide the interface between our brains and the world, and thus allow us to interact with our environments. The phenomenal qualities that compose our experience, the meanings and intentions that fill our thoughts, the whole world of perception and interpretation — these are merely how the machinery of our nervous systems and brains represent reality to us, for purely practical reasons. Just as the easily manipulated icons on a computer’s screen conceal the innumerable “uncomprehending competences” by which programs run, even while enabling us to use those programs, so the virtual distillates of reality that constitute phenomenal experience permit us to master an unseen world of countless qualityless and purposeless physical forces.
Very well. In a sense, Dennett’s is simply the standard modern account of how the mind relates to the physical order. The extravagant assertion that he adds to this account, however, is that consciousness itself, understood as a real dimension of wholly first-person phenomenal experience and intentional meaning, is itself only another “user-illusion.” That vast abyss between objective physical events and subjective qualitative experience that I mentioned above does not exist. Hence, that seemingly magical transition from the one to the other — whether a genetic or a structural shift — need not be explained, because it has never actually occurred.
The entire notion of consciousness as an illusion is, of course, rather silly. Dennett has been making the argument for most of his career, and it is just abrasively counterintuitive enough to create the strong suspicion in many that it must be more philosophically cogent than it seems, because surely no one would say such a thing if there were not some subtle and penetrating truth hidden behind its apparent absurdity. But there is none. The simple truth of the matter is that Dennett is a fanatic: He believes so fiercely in the unique authority and absolutely comprehensive competency of the third-person scientific perspective that he is willing to deny not only the analytic authority, but also the actual existence, of the first-person vantage. At the very least, though, he is an intellectually consistent fanatic, inasmuch as he correctly grasps (as many other physical reductionists do not) that consciousness really is irreconcilable with a coherent metaphysical naturalism. Since, however, the position he champions is inherently ridiculous, the only way that he can argue on its behalf is by relentlessly, and in as many ways as possible, changing the subject whenever the obvious objections are raised.
For what it is worth, Dennett often exhibits considerable ingenuity in his evasions — so much ingenuity, in fact, that he sometimes seems to have succeeded in baffling even himself. For instance, at one point in this book he takes up the question of “zombies” — the possibility of apparently perfectly functioning human beings who nevertheless possess no interior affective world at all — but in doing so seems to have entirely forgotten what the whole question of consciousness actually is. He rejects the very notion that we “have ‘privileged access’ to the causes and sources of our introspective convictions,” as though knowledge of the causes of consciousness were somehow germane to the issue of knowledge of the experience of consciousness. And if you believe that you know you are not a zombie “unwittingly” imagining that you have “real consciousness with real qualia,” Dennett’s reply is a curt “No, you don’t” — because, you see, “The only support for that conviction is the vehemence of the conviction itself.”
It is hard to know how to answer this argument without mockery. It is quite amazing how thoroughly Dennett seems to have lost the thread here. For one thing, a zombie could not unwittingly imagine anything, since he would possess no consciousness at all, let alone reflective consciousness; that is the whole point of the imaginative exercise. Insofar as you are convinced of anything at all, whether vehemently or tepidly, you do in fact know with absolute certitude that you yourself are not a zombie. Nor does it matter whether you know where your convictions come from; it is the very state of having convictions as such that apprises you of your intrinsic intentionality and your irreducibly private conscious experience.
Simply enough, you cannot suffer the illusion that you are conscious because illusions are possible only for conscious minds. This is so incandescently obvious that it is almost embarrassing to have to state it. But this confusion is entirely typical of Dennett’s position. In this book, as he has done repeatedly in previous texts, he mistakes the question of the existence of subjective experience for the entirely irrelevant question of the objective accuracy of subjective perceptions, and whether we need to appeal to third-person observers to confirm our impressions. But, of course, all that matters for this discussion is that we have impressions at all.
Moreover, and perhaps most bizarrely, Dennett thinks that consciousness can be dismissed as an illusion — the fiction of an inner theater, residing in ourselves and in those around us — on the grounds that behind the appearance of conscious states there are an incalculable number of “uncomprehending competences” at work in both the unseen machinery of our brains and the larger social contexts of others’ brains. In other words, because there are many unknown physical concomitants to conscious states, those states do not exist. But, of course, this is the very problem at issue: that the limpid immediacy and incommunicable privacy of consciousness is utterly unlike the composite, objective, material sequences of physical causality in the brain, and seems impossible to explain in terms of that causality — and yet exists nonetheless, and exists more surely than any presumed world “out there.”
That, as it happens, may be the chief question Dennett neglects to ask: Why presume that the scientific image is true while the manifest image is an illusion when, after all, the scientific image is a supposition of reason dependent upon decisions regarding methods of inquiry, whereas the manifest image — the world as it exists in the conscious mind — presents itself directly to us as an indubitable, inescapable, and eminently coherent reality in every single moment of our lives? How could one possibly determine here what should qualify as reality as such? Dennett certainly provides small reason why anyone else should adopt the prejudices he cherishes. The point of From Bacteria to Bach and Back is to show that minds are only emergent properties of our brains, and brains only aggregates of mindless elements and forces. But it shows nothing of the sort.
The journey the book promises to describe turns out to be the real illusion: Rather than a continuous causal narrative, seamlessly and cumulatively progressing from the most primitive material causes up to the most complex mental results, it turns out to be a hopelessly recursive narrative, a long, languid lemniscate of a tale, twisting back and forth between low and high — between the supposed basic ingredients underlying the mind’s evolution and the fully realized mental phenomena upon which those ingredients turn out to be wholly dependent. It is nearly enough to make one suspect that Dennett must have the whole thing backward.
Perhaps the scientific and manifest images are both accurate. Then again, perhaps only the manifest image is. Perhaps the mind inhabits a real Platonic order of being, where ideal forms express themselves in phenomenal reflections, while the scientific image — a mechanistic regime devoid of purpose and composed of purely particulate causes, stirred only by blind, random impulses — is a fantasy, a pale abstraction decocted from the material residues of an immeasurably richer reality. Certainly, if Dennett’s book encourages one to adopt any position at all, reason dictates that it be something like the exact reverse of the one he defends. The attempt to reduce the phenomena of mental existence to a purely physical history has been attempted before, and has so far always failed. But, after so many years of unremitting labor, and so many enormous books making wildly implausible claims, Dennett can at least be praised for having failed on an altogether majestic scale.”

Reference:
David Bentley Hart [2017]:  “The Illusionist,” The New Atlantis, Number 53, Summer/Fall 2017, pp. 109-121.

Teleology Watch

I wonder about New Scientist magazine.  A recent article on plant evolution is framed as a teleological argument, starting with its headline:
“Plants have evolved forgetfulness to wipe out memory of stress”
An entity may do action A which has consequence X. But that is very different to saying that the entity did action A in order to achieve outcome X. The theory of evolution makes no assumptions of intentionality. Indeed, quite the reverse – the classical Darwinian theory assumes that outcomes of evolutionary processes (whether beneficial, neutral, or detrimental) are the result of changes (e.g., mutations) that happen apparently by random chance. Only with epigenetics and modern Lamarckism is there perhaps a role for non-random changes.
Moreover, evolution is a theory at the species level and across time. It makes no sense to talk about evolution of an individual or of a single generation. Yet only individual plants and animals have any intentionality.  Who or what is the entity that could have an intention for a species to evolve towards a certain goal? The entity would have to be both a cross-individual and a cross-generational collective. Ain’t such a thing, at least not in the material realm.

Cause and effect in human health

Despite what most of the medical profession would have us believe, doctors have very little understanding of the actual causes of, or best treatments for, the obesity epidemic currently sweeping the West.   What little scientific evidence there is on the relationship between exercise and body weight indicates that increasing exercise leads to increased weight (presumably because more activity makes the exerciser hungrier).   And the extensive scientific evidence on the relationship between dieting and weight indicates very strongly that this relationship is complicated, subject to contextual factors, and highly non-linear, with so-called “set points” that result in increased fat storage when calorie intake goes down significantly, for instance.

Popper vs Kuhn

“For Popper scientific communities are politically virtuous because they permit unfettered criticism.  A scientific community is, by (Popper’s) definition, an open society.  Kuhn had to be shouted down because he seemed to deny this claim.”

Page 920 of B. Larvor [2000]: Review of I. Lakatos and P. Feyerabend, “For and Against Method“. British Journal for the Philosophy of Science, 51: 919-922.

Does evo-psych explain anything at all?

Evolutionary psychology and evolutionary sociology have long struck me as arrant nonsense, because they ignore human free will and self-reflection, and thus our ability to rise above our own nature.   There are no pianos on the savanna, as I have remarked before, so an evolutionary psychologist will have a major challenge to explain a desire to play the piano in evolutionary terms.
Christopher Booker, in a review of E. O. Wilson’s new book, The Social Conquest of Earth, takes a similar view of the flaws of evolutionary theory when applied to human behaviours:

It is our ability to escape from the rigid frame of instinct which explains almost everything that distinguishes human beings from any other form of life. But one looks in vain to Wilson to recognise this, let alone to explain how it could have come about in terms of Darwinian evolutionary theory. No attribute of Darwinians is more marked than their inability to grasp just how much their theory cannot account for, from all those evolutionary leaps which require a host of interdependent things to develop more or less simultaneously to be workable, to that peculiarity of human consciousness which has allowed us to step outside the instinctive frame and to ‘conquer the Earth’ far more comprehensively than ants.
But it is this which also gives us our disintegrative propensity, individually and collectively, to behave egocentrically, presenting us with all those problems which distinguish us from all the other species which still live in unthinking obedience to the dictates of nature. All these follow from that split from our selfless ‘higher nature’, with which over the millennia our customs, laws, religion and artistic creativity have tried their best to re-integrate us.
Nothing is more comical about Darwinians than the contortions they get into in trying to explain those ‘altruistic’ aspects of human nature which might seem to contradict their belief that the evolutionary drive is always essentially self-centred (seen at its most extreme in Dawkins’s ‘selfish gene’ theory). Wilson’s thesis finally crumbles when he comes up with absurdly reductionist explanations for the emergence of the creative arts and religion. Forget Bach’s B Minor Mass or the deeper insights of the Hindu scriptures — as a lapsed Southern Baptist, he caricatures the religious instinct of mankind as little more than the stunted form of faith he escaped from.
His attempt to unravel what makes human nature unique is entirely a product of that limited ‘left-brain thinking’ which leads to cognitive dissonance.
Unable to think outside the Darwinian box, his account lacks any real warmth or wider understanding. Coming from ‘the most celebrated heir to Darwin’, his book may have won wide attention and praise. But all it really demonstrates is that the real problem with Darwinians is their inability to see just how much their beguilingly simple theory simply cannot explain.”

Influential Books

This is a list of non-fiction books and articles which have greatly influenced me – making me see the world differently or act in it differently. They are listed chronologically according to when I first encountered them.

  • 2023 – Clare Carlisle [2018]: “Habit, Practice, Grace: Towards a Philosophy of Religious Life.” In: F. Ellis (Editor): New Models of Religious Understanding. Oxford University Press, pp. 97–115.
  • 2022 – Sean Hewitt [2022]: All Down Darkness Wide. Jonathan Cape.
  • 2022 – Stewart Copeland [2009]: Strange Things Happen: A Life with “The Police”, Polo and Pygmies.
  • 2019 – Mary Le Beau (Inez Travers Cunningham Stark Boulton, 1888-1958) [1956]:  Beyond Doubt: A Record of Psychic Experience.
  • 2019 – Zhores A Medvedev [1983]: Andropov: An Insider’s Account of Power and Politics within the Kremlin.
  • 2016 – Lafcadio Hearn [1897]: Gleanings in Buddha-Fields: Studies of Hand and Soul in the Far East. London, UK: Kegan Paul, Trench, Trubner & Company Limited.
  • 2015 – Benedict Taylor [2011]: Mendelssohn, Time and Memory. The Romantic Conception of Cyclic Form. Cambridge UP.
  • 2010 – Hans Kundnani [2009]: Utopia or Auschwitz: Germany’s 1968 Generation and the Holocaust.
    London, UK: Hurst and Company.
  • 2009 – J. Scott Turner [2007]:  The Tinkerer’s Accomplice: How Design Emerges from Life Itself. Harvard UP. (Mentioned here.)
  • 2008 – Stefan Aust [2008]: The Baader-Meinhof Complex. Bodley Head.
  • 2008 – A. J. Liebling [2008]: World War II Writings. New York City, NY, USA: The Library of America.
  • 2008 – Pierre Delattre [1993]:  Episodes. St. Paul, MN, USA: Graywolf Press.
  • 2006 – Mark Evan Bonds [2006]: Music as Thought: Listening to the Symphony in the Age of Beethoven. Princeton UP.
  • 2006 – Kyle Gann [2006]: Music Downtown: Writings from the Village Voice. UCal Press.
  • 2005 – Clare Asquith [2005]: Shadowplay: The Hidden Beliefs and Coded Politics of William Shakespeare. Public Affairs.
  • 2004 – Igal Halfin [2003]: Terror in My Soul: Communist Autobiographies on Trial. Cambridge, MA, USA: Harvard UP.
  • 2002 – Philip Mirowski [2002]: Machine Dreams: Economics Becomes a Cyborg Science. Cambridge University Press.
  • 2001 – George Leonard [2000]: The Way of Aikido: Life Lessons from an American Sensei.
  • 2000 – Stephen E. Toulmin [1990]:  Cosmopolis:  The Hidden Agenda of Modernity. University of Chicago Press.
  • 1999 – Michel de Montaigne [1580-1595]: Essays.
  • 1997 – James Pritchett [1993]:  The Music of John Cage. Cambridge UP.
  • 1996 – George Fowler [1995]:  Dance of a Fallen Monk: A Journey to Spiritual Enlightenment.
    Doubleday.
  • 1995 – Chungliang Al Huang and Jerry Lynch [1992]:  Thinking Body, Dancing Mind.  New York: Bantam Books.
  • 1995 – Jon Kabat-Zinn [1994]: Wherever You Go, There You Are.
  • 1995 – Charlotte Joko Beck [1993]: Nothing Special: Living Zen.
  • 1993 – George Leonard [1992]: Mastery: The Keys to Success and Long-Term Fulfillment.
  • 1992 – Henry Adams [1907/1918]: The Education of Henry Adams.
  • 1990 – Trevor Leggett [1987]:  Zen and the Ways. Tuttle.
  • 1989 – Grant McCracken [1988]:  Culture and Consumption.
  • 1989 – Teresa Toranska [1988]:  Them:  Stalin’s Polish Puppets.  Translated by Agnieszka Kolakowska. HarperCollins. (Mentioned here.)
  • 1988 – Henry David Thoreau [1865]:  Cape Cod.
  • 1988 – Rupert Sheldrake [1988]: The Presence of the Past: Morphic Resonance and the Habits of Nature.
  • 1988 – Dan Rose [1987]: Black American Street Life: South Philadelphia, 1969-1971. UPenn Press.
  • 1987 – Susan Sontag [1966]: Against Interpretation. Farrar, Straus and Giroux.
  • 1987 – Gregory Bateson [1972]: Steps to an Ecology of Mind. U Chicago Press.
  • 1987 – Jay Neugeboren [1968]:  Reflections at Thirty.
  • 1985 – Esquire Magazine Special Issue [June 1985]: The Soul of America.
  • 1985 – Brian Willan [1984]: Sol Plaatje: A Biography.
  • 1982 – John Miller Chernoff [1979]: African Rhythm and African Sensibility: Aesthetics and Social Action in African Musical Idioms. University of Chicago Press.
  • 1981 – Walter Rodney [1972]: How Europe Underdeveloped Africa. Bogle-L’Ouverture Publications.
  • 1980 – James A. Michener [1971]: Kent State: What Happened and Why.
  • 1980 – Andre Gunder Frank [1966]:  The Development of Underdevelopment. Monthly Review Press.
  • 1980 – Paul Feyerabend [1975]: Against Method: Outline of an Anarchistic Theory of Knowledge.
  • 1979 – Aldous Huxley [1945]:  The Perennial Philosophy.
  • 1978 – Christmas Humphreys [1949]:  Zen Buddhism.
  • 1977 – Raymond Smullyan [1977]:  The Tao is Silent.
  • 1976 – Bertrand Russell [1951-1969]: The Autobiography.  George Allen & Unwin.
  • 1975 – Jean-Francois Revel [1972]:  Without Marx or Jesus: The New American Revolution Has Begun.
  • 1974 – Charles Reich [1970]: The Greening of America.
  • 1973 – Selvarajan Yesudian and Elisabeth Haich [1953]:  Yoga and Health. Harper.
  • 1972 – Robin Boyd [1960]: The Australian Ugliness.

String theorists in knots

Last week’s Observer carried a debate over the status of string theory between a theoretical physicist, Michael Duff, and a science journalist, James Baggott.  Mostly, they talk past each other.  There is much in what they say that could provoke comment, but since time is short, I will address only one statement.
Duff’s final contribution includes these words:

“Finally, you offer no credible alternative. If you don’t like string theory the answer is simple: come up with a better one.”

This is plain wrong for several reasons.  First, we would have no scientific progress at all if critics of scientific theories first had to develop an alternative theory before they could advance their criticisms.   Indeed, public voicing of criticisms of a theory is one of the key motivations for other scientists to look for alternatives in the first place.  So Duff has the horse and the cart backwards here.  
Secondly, “come up with a better one“?  “Better“?  What does “better“ mean?  Duff has missed precisely the main point of the critics of string theory!  We have no way of knowing – not even in principle, let alone in practice – whether string theory is any good or not, nor whether it accurately describes reality.  We have no experimental evidence by which to assess it, and most likely (since it posits and models alleged additional dimensions of spacetime that are inaccessible to us) we will never have any way to obtain such empirical evidence.  As I have argued before, theology has more empirical support – the personal spiritual experiences of religious believers and practitioners – than does string theory.  So, suppose we did come up with an alternative theory to string theory:  how then could we tell which theory was the better of the two?
Pure mathematicians, like theologians, don’t use empirical evidence as a criterion for evaluating theories.  Instead, they use subjective criteria such as beauty, elegance, and self-coherence.   There is nothing at all wrong with this.  But such criteria ain’t science, which by its nature is a social activity.

Green intelligence

Are plants intelligent?  Here are 10 reasons for thinking so.  I suspect the reason we don’t naturally consider the activities of plants to be evidence of intelligent behaviour is primarily that the timescales over which these activities are undertaken are typically longer than those of animal behaviours.  We humans have trouble seeing outside our own normal frames of reference.  (HT: JV)

Bayesianism in science

Bayesians are so prevalent in Artificial Intelligence (and, to be honest, so strident) that it can sometimes be lonely being a Frequentist.   So it is nice to see a critical review of Nate Silver’s new book on prediction from a frequentist perspective.   The reviewers are Gary Marcus and Ernest Davis from New York University, and here are some paras from their review in The New Yorker:

Silver’s one misstep comes in his advocacy of an approach known as Bayesian inference. According to Silver’s excited introduction,
Bayes’ theorem is nominally a mathematical formula. But it is really much more than that. It implies that we must think differently about our ideas.
Lost until Chapter 8 is the fact that the approach Silver lobbies for is hardly an innovation; instead (as he ultimately acknowledges), it is built around a two-hundred-fifty-year-old theorem that is usually taught in the first weeks of college probability courses. More than that, as valuable as the approach is, most statisticians see it as only a partial solution to a very large problem.
A Bayesian approach is particularly useful when predicting outcome probabilities in cases where one has strong prior knowledge of a situation. Suppose, for instance (borrowing an old example that Silver revives), that a woman in her forties goes for a mammogram and receives bad news: a “positive” mammogram. However, since not every positive result is real, what is the probability that she actually has breast cancer? To calculate this, we need to know four numbers. The fraction of women in their forties who have breast cancer is 0.014, which is about one in seventy. The fraction who do not have breast cancer is therefore 1 – 0.014 = 0.986. These fractions are known as the prior probabilities. The probability that a woman who has breast cancer will get a positive result on a mammogram is 0.75. The probability that a woman who does not have breast cancer will get a false positive on a mammogram is 0.1. These are known as the conditional probabilities. Applying Bayes’s theorem, we can conclude that, among women who get a positive result, the fraction who actually have breast cancer is (0.014 x 0.75) / ((0.014 x 0.75) + (0.986 x 0.1)) = 0.1, approximately. That is, once we have seen the test result, the chance is about ninety per cent that it is a false positive. In this instance, Bayes’s theorem is the perfect tool for the job.
This technique can be extended to all kinds of other applications. In one of the best chapters in the book, Silver gives a step-by-step description of the use of probabilistic reasoning in placing bets while playing a hand of Texas Hold ’em, taking into account the probabilities on the cards that have been dealt and that will be dealt; the information about opponents’ hands that you can glean from the bets they have placed; and your general judgment of what kind of players they are (aggressive, cautious, stupid, etc.).
But the Bayesian approach is much less helpful when there is no consensus about what the prior probabilities should be. For example, in a notorious series of experiments, Stanley Milgram showed that many people would torture a victim if they were told that it was for the good of science. Before these experiments were carried out, should these results have been assigned a low prior (because no one would suppose that they themselves would do this) or a high prior (because we know that people accept authority)? In actual practice, the method of evaluation most scientists use most of the time is a variant of a technique proposed by the statistician Ronald Fisher in the early 1900s. Roughly speaking, in this approach, a hypothesis is considered validated by data only if the data pass a test that would be failed ninety-five or ninety-nine per cent of the time if the data were generated randomly. The advantage of Fisher’s approach (which is by no means perfect) is that to some degree it sidesteps the problem of estimating priors where no sufficient advance information exists. In the vast majority of scientific papers, Fisher’s statistics (and more sophisticated statistics in that tradition) are used.
Unfortunately, Silver’s discussion of alternatives to the Bayesian approach is dismissive, incomplete, and misleading. In some cases, Silver tends to attribute successful reasoning to the use of Bayesian methods without any evidence that those particular analyses were actually performed in Bayesian fashion. For instance, he writes about Bob Voulgaris, a basketball gambler,
Bob’s money is on Bayes too. He does not literally apply Bayes’ theorem every time he makes a prediction. But his practice of testing statistical data in the context of hypotheses and beliefs derived from his basketball knowledge is very Bayesian, as is his comfort with accepting probabilistic answers to his questions. 
But, judging from the description in the previous thirty pages, Voulgaris follows instinct, not fancy Bayesian math. Here, Silver seems to be using “Bayesian” not to mean the use of Bayes’s theorem but, rather, the general strategy of combining many different kinds of information.
To take another example, Silver discusses at length an important and troubling paper by John Ioannidis, “Why Most Published Research Findings Are False,” and leaves the reader with the impression that the problems that Ioannidis raises can be solved if statisticians use a Bayesian approach rather than following Fisher. Silver writes:
[Fisher’s classical] methods discourage the researcher from considering the underlying context or plausibility of his hypothesis, something that the Bayesian method demands in the form of a prior probability. Thus, you will see apparently serious papers published on how toads can predict earthquakes… which apply frequentist tests to produce “statistically significant” but manifestly ridiculous findings. 
But NASA’s 2011 study of toads was actually important and useful, not some “manifestly ridiculous” finding plucked from thin air. It was a thoughtful analysis of groundwater chemistry that began with a combination of naturalistic observation (a group of toads had abandoned a lake in Italy near the epicenter of an earthquake that happened a few days later) and theory (about ionospheric disturbance and water composition).
The real reason that too many published studies are false is not because lots of people are testing ridiculous things, which rarely happens in the top scientific journals; it’s because in any given year, drug companies and medical schools perform thousands of experiments. In any study, there is some small chance of a false positive; if you do a lot of experiments, you will eventually get a lot of false positive results (even putting aside self-deception, biases toward reporting positive results, and outright fraud)—as Silver himself actually explains two pages earlier. Switching to a Bayesian method of evaluating statistics will not fix the underlying problems; cleaning up science requires changes to the way in which scientific research is done and evaluated, not just a new formula.
It is perfectly reasonable for Silver to prefer the Bayesian approach—the field has remained split for nearly a century, with each side having its own arguments, innovations, and work-arounds—but the case for preferring Bayes to Fisher is far weaker than Silver lets on, and there is no reason whatsoever to think that a Bayesian approach is a “think differently” revolution. “The Signal and the Noise” is a terrific book, with much to admire. But it will take a lot more than Bayes’s very useful theorem to solve the many challenges in the world of applied statistics.” [Links in original]
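
As a quick check of the arithmetic in the mammogram example quoted above, here is a minimal Python sketch (my own illustration, not code from the review) applying Bayes’ theorem to the review’s numbers:

```python
# A minimal check of the mammogram example quoted above (my own sketch,
# not from the review).  Bayes' theorem gives P(cancer | positive test)
# from the prior and the two conditional probabilities.

def posterior(prior, sensitivity, false_positive_rate):
    """P(hypothesis | positive evidence), by Bayes' theorem."""
    true_positive = prior * sensitivity                  # P(positive and cancer)
    false_positive = (1 - prior) * false_positive_rate   # P(positive and no cancer)
    return true_positive / (true_positive + false_positive)

p = posterior(prior=0.014, sensitivity=0.75, false_positive_rate=0.1)
print(f"P(cancer | positive mammogram) = {p:.3f}")
# Prints 0.096: as the review says, roughly ninety per cent of positive
# results in this population are false positives.
```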
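
The review’s point about mass experimentation is just as easy to simulate.  A small sketch (again my own illustration): when thousands of experiments each test an effect that does not exist, the conventional 5% significance threshold alone produces hundreds of “significant” results, with no fraud or bias required:

```python
# Simulating the multiple-experiments point above (my own sketch).
# Under a true null hypothesis, p-values are uniformly distributed on
# [0, 1], so each experiment has probability alpha of a false positive.

import random

random.seed(42)
n_experiments = 10_000   # e.g. a year of drug-company and medical-school studies
alpha = 0.05             # conventional significance threshold

false_positives = sum(random.random() < alpha for _ in range(n_experiments))
print(f"{false_positives} of {n_experiments} null experiments reached p < {alpha}")
# Roughly 500 spurious "findings" are expected by chance alone.
```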

It is also worth adding that there is a very good reason the experimental sciences adopted Frequentist approaches (what the reviewers call Fisher’s methods) in journal publications.  That reason is that science is intended to be a search for objective truth using objective methods.  Experiments are – or should be – replicable by anyone.  How can subjective methods play any role in such an enterprise?  Why should the journal Nature or any of its readers care what the prior probabilities of the experimenters were before an experiment?  If these prior probabilities make a difference to the posterior (post-experiment) probabilities, then this is the insertion of a purely subjective element into something that should be objective and replicable.  And if the actual numeric values of the prior probabilities don’t matter to the posterior probabilities (as some Bayesian theorems would suggest), then why does the methodology include them?
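
To make that last objection concrete, here is a small sketch (my own illustration, reusing the mammogram numbers from the review above) showing how the same experimental evidence yields very different posteriors under different priors:

```python
# How the posterior moves with the assumed prior (my own sketch),
# holding the test characteristics from the mammogram example fixed.

def posterior(prior, sensitivity=0.75, false_positive_rate=0.1):
    true_positive = prior * sensitivity
    false_positive = (1 - prior) * false_positive_rate
    return true_positive / (true_positive + false_positive)

for prior in (0.001, 0.014, 0.1, 0.5):
    print(f"prior = {prior:5.3f}  ->  posterior = {posterior(prior):.3f}")
# prior = 0.001  ->  posterior = 0.007
# prior = 0.014  ->  posterior = 0.096
# prior = 0.100  ->  posterior = 0.455
# prior = 0.500  ->  posterior = 0.882
```

Two researchers who agree on the experimental data but disagree on the prior will publish different conclusions – which is precisely the subjective element objected to above.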