Minds do precisely everything computers do not do: Hart on Dennett

Daniel Dennett thinks consciousness is an illusion.  This claim is refuted by a moment’s introspection, but perhaps some philosophers are less trustful of introspection than is everyone else.  Certainly, based on this particular case, some philosophers would have good reason to distrust their own reasoning abilities.
It is nice to read a rebuttal of Dennett’s manifestly ridiculous idea by another philosopher, David Bentley Hart (writing in The New Atlantis).  Here is an excerpt:

Dennett is an orthodox neo-Darwinian, in the most gradualist of the sects. Everything in nature must for him be the result of a vast sequence of tiny steps. This is a fair enough position, but the burden of any narrative of emergence framed in those terms is that the stochastic logic of the tale must be guarded with untiring vigilance against any intrusion by “higher causes.” But, where consciousness is concerned, this may very well be an impossible task.
The heart of Dennett’s project, as I have said, is the idea of “uncomprehending competences,” molded by natural selection into the intricate machinery of mental existence. As a model of the mind, however, the largest difficulty this poses is that of producing a credible catalogue of competences that are not dependent for their existence upon the very mental functions they supposedly compose.
Certainly Dennett fails spectacularly in his treatment of the evolution of human language. As a confirmed gradualist in all things, he takes violent exception to any notion of an irreducible, innate, universal grammar, like that proposed by Noam Chomsky, Robert Berwick, Richard Lewontin, and others. He objects even when those theories reduce the vital evolutionary saltation between pre-linguistic and linguistic abilities to a single mutation, like the sudden appearance in evolutionary history of the elementary computational function called “Merge,” which supposedly all at once allowed for the syntactic combination of two distinct elements, such as a noun and a verb.
Fair enough. From Dennett’s perspective, after all, it would be hard to reconcile this universal grammar — an ability that necessarily began as an internal faculty of thought, dependent upon fully formed and discrete mental concepts, and only thereafter expressed itself in vocal signs — with a truly naturalist picture of reality. So, for Dennett, language must have arisen out of social practices of communication, rooted in basic animal gestures and sounds in an initially accidental association with features of the environment. Only afterward could these elements have become words, spreading and combining and developing into complex structures of reference. There must then, he assumes, have been “proto-languages” that have since died away, liminal systems of communication filling up the interval between animal vocalizations and human semiotic and syntactic capacities.
Unfortunately, this simply cannot be. There is no trace in nature even of primitive languages, let alone proto-languages; all languages possess a full hierarchy of grammatical constraints and powers. And this is not merely an argument from absence, like the missing fossils of all those dragons or unicorns that must have once existed. It is logically impossible even to reverse-engineer anything that would qualify as a proto-language. Every attempt to do so will turn out secretly to rely on the syntactic and semiotic functions of fully developed human language. But Dennett is quite right about how immense an evolutionary saltation the sudden emergence of language would really be. Even the simple algorithm of Merge involves, for instance, a crucial disjunction between what linguists call “structural proximity” and “linear proximity” — between, that is, a hypotactic or grammatical connection between parts of a sentence, regardless of their spatial and temporal proximity to one another, and the simple sequential ordering of signifiers in that sentence. Without such a disjunction, nothing resembling linguistic practice is possible; yet that disjunction can itself exist nowhere except in language.
Dennett, however, writes as if language were simply the cumulative product of countless physical ingredients. It begins, he suggests, in mere phonology. The repeated sound of a given word somehow embeds itself in the brain and creates an “anchor” that functions as a “collection point” for syntactic and semantic meanings to “develop around the sound.” But what could this mean? Are semiotic functions something like iron filings and phonemes something like magnets? What is the physical basis for these marvelous congelations in the brain? The only possible organizing principle for such meanings would be that very innate grammar that Dennett denies exists — and this would seem to require distinctly mental concepts. Not that Dennett appears to think the difference between phonemes and concepts an especially significant one. He does not hesitate, for instance, to describe the “synanthropic” aptitudes that certain organisms (such as bedbugs and mice) acquire in adapting themselves to human beings as “semantic information” that can be “mindlessly gleaned” from the “cycle of generations.”
But there is no such thing as mindless semantics. True, it is imaginable that the accidental development of arbitrary pre-linguistic associations between, say, certain behaviors and certain aspects of a physical environment might be preserved by natural selection, and become beneficial adaptations. But all semantic information consists in the interpretation of signs, and of conventions of meaning in which signs and references are formally separable from one another, and semiotic relations are susceptible of combination with other contexts of meaning. Signs are intentional realities, dependent upon concepts, all the way down. And between mere accidental associations and intentional signs there is a discontinuity that no gradualist — no pleonastic — narrative can span.
Similarly, when Dennett claims that words are “memes” that reproduce like a “virus,” he is speaking pure gibberish. Words reproduce, within minds and between persons, by being intentionally adopted and employed.
Here, as it happens, lurks the most incorrigibly problematic aspect of Dennett’s project. The very concept of memes — Richard Dawkins’s irredeemably vague notion of cultural units of meaning or practice that invade brains and then, rather like genetic materials, thrive or perish through natural selection — is at once so vapid and yet so fantastic that it is scarcely tolerable as a metaphor. But a depressingly substantial part of Dennett’s argument requires not only that memes be accorded the status of real objects, but that they also be regarded as concrete causal forces in the neurology of the brain, whose power of ceaseless combination creates most of the mind’s higher functions. And this is almost poignantly absurd.
Perhaps it is possible to think of intentional consciousness as having arisen from an improbable combination of purely physical ingredients — even if, as yet, the story of that seemingly miraculous metabolism of mechanism into meaning cannot be imagined. But it seems altogether bizarre to think of intentionality as the product of forces that would themselves be, if they existed at all, nothing but acts of intentionality. What could memes be other than mental conventions, meanings subsisting in semiotic practices? As such, their intricate interweaving would not be the source, but rather the product, of the mental faculties they inhabit; they could possess only such complexity as the already present intentional powers of the mind could impose upon them. And it is a fairly inflexible law of logic that no reality can be the emergent result of its own contingent effects.
This is why, also, it is difficult to make much sense of Dennett’s claim that the brain is “a kind of computer,” and mind merely a kind of “interface” between that computer and its “user.” The idea that the mind is software is a fairly popular delusion at the moment, but that hardly excuses a putatively serious philosopher for perpetuating it — though admittedly Dennett does so in a distinctive way. Usually, when confronted by the computational model of mind, it is enough to point out that what minds do is precisely everything that computers do not do, and therein lies much of a computer’s usefulness.
Really, it would be no less apt to describe the mind as a kind of abacus. In the physical functions of a computer, there is neither a semantics nor a syntax of meaning. There is nothing resembling thought at all. There is no intentionality, or anything remotely analogous to intentionality or even to the illusion of intentionality. There is a binary system of notation that subserves a considerable number of intrinsically mindless functions. And, when computers are in operation, they are guided by the mental intentions of their programmers and users, and they provide an instrumentality by which one intending mind can transcribe meanings into traces, and another can translate those traces into meaning again. But the same is true of books when they are “in operation.” And this is why I spoke above of a “Narcissan fallacy”: computers are such wonderfully complicated and versatile abacuses that our own intentional activity, when reflected in their functions, seems at times to take on the haunting appearance of another autonomous rational intellect, just there on the other side of the screen. It is a bewitching illusion, but an illusion all the same. And this would usually suffice as an objection to any given computational model of mind.
But, curiously enough, in Dennett’s case it does not, because to a very large degree he would freely grant that computers only appear to be conscious agents. The perversity of his argument, notoriously, is that he believes the same to be true of us.
For Dennett, the scientific image is the only one that corresponds to reality. The manifest image, by contrast, is a collection of useful illusions, shaped by evolution to provide the interface between our brains and the world, and thus allow us to interact with our environments. The phenomenal qualities that compose our experience, the meanings and intentions that fill our thoughts, the whole world of perception and interpretation — these are merely how the machinery of our nervous systems and brains represent reality to us, for purely practical reasons. Just as the easily manipulated icons on a computer’s screen conceal the innumerable “uncomprehending competences” by which programs run, even while enabling us to use those programs, so the virtual distillates of reality that constitute phenomenal experience permit us to master an unseen world of countless qualityless and purposeless physical forces.
Very well. In a sense, Dennett’s is simply the standard modern account of how the mind relates to the physical order. The extravagant assertion that he adds to this account, however, is that consciousness itself, understood as a real dimension of wholly first-person phenomenal experience and intentional meaning, is itself only another “user-illusion.” That vast abyss between objective physical events and subjective qualitative experience that I mentioned above does not exist. Hence, that seemingly magical transition from the one to the other — whether a genetic or a structural shift — need not be explained, because it has never actually occurred.
The entire notion of consciousness as an illusion is, of course, rather silly. Dennett has been making the argument for most of his career, and it is just abrasively counterintuitive enough to create the strong suspicion in many that it must be more philosophically cogent than it seems, because surely no one would say such a thing if there were not some subtle and penetrating truth hidden behind its apparent absurdity. But there is none. The simple truth of the matter is that Dennett is a fanatic: He believes so fiercely in the unique authority and absolutely comprehensive competency of the third-person scientific perspective that he is willing to deny not only the analytic authority, but also the actual existence, of the first-person vantage. At the very least, though, he is an intellectually consistent fanatic, inasmuch as he correctly grasps (as many other physical reductionists do not) that consciousness really is irreconcilable with a coherent metaphysical naturalism. Since, however, the position he champions is inherently ridiculous, the only way that he can argue on its behalf is by relentlessly, and in as many ways as possible, changing the subject whenever the obvious objections are raised.
For what it is worth, Dennett often exhibits considerable ingenuity in his evasions — so much ingenuity, in fact, that he sometimes seems to have succeeded in baffling even himself. For instance, at one point in this book he takes up the question of “zombies” — the possibility of apparently perfectly functioning human beings who nevertheless possess no interior affective world at all — but in doing so seems to have entirely forgotten what the whole question of consciousness actually is. He rejects the very notion that we “have ‘privileged access’ to the causes and sources of our introspective convictions,” as though knowledge of the causes of consciousness were somehow germane to the issue of knowledge of the experience of consciousness. And if you believe that you know you are not a zombie “unwittingly” imagining that you have “real consciousness with real qualia,” Dennett’s reply is a curt “No, you don’t” — because, you see, “The only support for that conviction is the vehemence of the conviction itself.”
It is hard to know how to answer this argument without mockery. It is quite amazing how thoroughly Dennett seems to have lost the thread here. For one thing, a zombie could not unwittingly imagine anything, since he would possess no consciousness at all, let alone reflective consciousness; that is the whole point of the imaginative exercise. Insofar as you are convinced of anything at all, whether vehemently or tepidly, you do in fact know with absolute certitude that you yourself are not a zombie. Nor does it matter whether you know where your convictions come from; it is the very state of having convictions as such that apprises you of your intrinsic intentionality and your irreducibly private conscious experience.
Simply enough, you cannot suffer the illusion that you are conscious because illusions are possible only for conscious minds. This is so incandescently obvious that it is almost embarrassing to have to state it. But this confusion is entirely typical of Dennett’s position. In this book, as he has done repeatedly in previous texts, he mistakes the question of the existence of subjective experience for the entirely irrelevant question of the objective accuracy of subjective perceptions, and whether we need to appeal to third-person observers to confirm our impressions. But, of course, all that matters for this discussion is that we have impressions at all.
Moreover, and perhaps most bizarrely, Dennett thinks that consciousness can be dismissed as an illusion — the fiction of an inner theater, residing in ourselves and in those around us — on the grounds that behind the appearance of conscious states there are an incalculable number of “uncomprehending competences” at work in both the unseen machinery of our brains and the larger social contexts of others’ brains. In other words, because there are many unknown physical concomitants to conscious states, those states do not exist. But, of course, this is the very problem at issue: that the limpid immediacy and incommunicable privacy of consciousness is utterly unlike the composite, objective, material sequences of physical causality in the brain, and seems impossible to explain in terms of that causality — and yet exists nonetheless, and exists more surely than any presumed world “out there.”
That, as it happens, may be the chief question Dennett neglects to ask: Why presume that the scientific image is true while the manifest image is an illusion when, after all, the scientific image is a supposition of reason dependent upon decisions regarding methods of inquiry, whereas the manifest image — the world as it exists in the conscious mind — presents itself directly to us as an indubitable, inescapable, and eminently coherent reality in every single moment of our lives? How could one possibly determine here what should qualify as reality as such? Dennett certainly provides small reason why anyone else should adopt the prejudices he cherishes. The point of From Bacteria to Bach and Back is to show that minds are only emergent properties of our brains, and brains only aggregates of mindless elements and forces. But it shows nothing of the sort.
The journey the book promises to describe turns out to be the real illusion: Rather than a continuous causal narrative, seamlessly and cumulatively progressing from the most primitive material causes up to the most complex mental results, it turns out to be a hopelessly recursive narrative, a long, languid lemniscate of a tale, twisting back and forth between low and high — between the supposed basic ingredients underlying the mind’s evolution and the fully realized mental phenomena upon which those ingredients turn out to be wholly dependent. It is nearly enough to make one suspect that Dennett must have the whole thing backward.
Perhaps the scientific and manifest images are both accurate. Then again, perhaps only the manifest image is. Perhaps the mind inhabits a real Platonic order of being, where ideal forms express themselves in phenomenal reflections, while the scientific image — a mechanistic regime devoid of purpose and composed of purely particulate causes, stirred only by blind, random impulses — is a fantasy, a pale abstraction decocted from the material residues of an immeasurably richer reality. Certainly, if Dennett’s book encourages one to adopt any position at all, reason dictates that it be something like the exact reverse of the one he defends. The attempt to reduce the phenomena of mental existence to a purely physical history has been attempted before, and has so far always failed. But, after so many years of unremitting labor, and so many enormous books making wildly implausible claims, Dennett can at least be praised for having failed on an altogether majestic scale.”

 
Reference:
David Bentley Hart [2017]:  “The Illusionist,” The New Atlantis, Number 53, Summer/Fall 2017, pp. 109-121.

Honesty of intention

Some people I have encountered in this life have impressed me with their integrity of purpose: the coherence, sincerity, and compelling nature of their objectives and mission.  Sometimes these objectives have been political, as in the case of Don Day and Bill Mansfield. In other cases, they have been spiritual or religious, as in the case of Jes Albert Moeller, whom I first met in 1984. There are other people whose purposes are both political and spiritual, something that seems to have been true of Vaclav Havel.
 
In my experience, this human attribute is rare.  And I have never seen or heard anyone else talk of it, until now.  In Judith Wright’s autobiography, she speaks (page 234) of her partner and later husband, the philosopher Jack McKinney, meeting her father:

That my father was grieved by my relationship with Jack is undeniable but, once they met, he gave in to Jack’s obvious honesty of intention and the needs of my own that Jack was filling. . . . “

Judith Wright [1999]: Half a Lifetime.  Edited by Patricia Clarke. Melbourne, Australia: Text Publishing.

Courage and luck

A dialogue from Nadine Gordimer’s novel, A Guest of Honour (page 222):

‘Why does playing safe always seem to turn out to be so dangerous?’
‘It’s unlucky . . .
. . . because you’re too scared to take a chance.’
‘It’s unlucky to lack courage?’
‘That’s it.  You have to go ahead into what’s coming, trust to luck.  Because if you play safe you don’t have any, anyway.’
‘It’s forfeited?’
‘Yes.’

The Way

A recurring theme here has been the complexity of most important real-world decision-making, contrary to the models used in much of economics and computer science.  Some relevant posts are here, here, and here.  On this topic, I recently came across a wonderful cartoon by Michael Leunig, entitled “The Way”:
[Image: Michael Leunig, “The Way” (2012)]
I am very grateful to Michael Leunig for permission to reproduce this cartoon here.

Green intelligence

Are plants intelligent?  Here are 10 reasons for thinking so.  I suspect the reason we don’t naturally consider the activities of plants to be evidence of intelligent behaviour is primarily because the timescales over which these activities are undertaken are typically longer than those for animal behaviours.  We humans have trouble seeing outside our own normal frames of reference.  (HT: JV)

What do mathematicians do?

Over at the AMS Graduate Student Blog, Jean Joseph wonders what it is that mathematicians do, asking whether what they do is simply to solve problems:

After I heard someone ask about what a mathematician does, I myself wonder what it means to do mathematics if all what one can answer is that mathematicians do mathematics. Solving problems have been considered by some as the main activity of a mathematician, which might then be the answer to the question. But, could reading and writing about mathematics or crafting a new theory be considered as serious mathematical activities or mere extracurricular activities?”

Not all mathematics is problem-solving, as we’ve discussed here before, and I think it would be a great shame if the idea were to take hold that all mathematicians do is solve problems.  As Joseph says, this view does not account for many activities that we know mathematicians engage in which are nowhere near problem-solving, such as creating theories, defining concepts, writing expositions, teaching, and so on.
I view mathematics (and the related disciplines in the pure mathematical universe) as the rigorous study of structure and relationship.   What mathematicians do, then, is to rigorously study structure and relationship.  They do this by creating, sharing and jointly manipulating abstract mental models, seeking always to understand the properties and inter-relations of these models.
Some of these models may arise from, or be applied to, particular domains or particular problems, but mathematicians (at least, pure mathematicians) are typically chiefly interested in the abstract models themselves and their formal properties, rather than the applications.  In some parts of mathematics (eg, algebra) written documents such as research papers and textbooks provide accurate descriptions of these mental models.  In other parts (eg, geometry), the written documents can only approximate the mental models.     As mathematician William Thurston once said:

There were published theorems that were generally known to be false, or where the proofs were generally known to be incomplete. Mathematical knowledge and understanding were embedded in the minds and in the social fabric of the community of people thinking about a particular topic. This knowledge was supported by written documents, but the written documents were not really primary.
I think this pattern varies quite a bit from field to field. I was interested in geometric areas of mathematics, where it is often pretty hard to have a document that reflects well the way people actually think. In more algebraic or symbolic fields, this is not necessarily so, and I have the impression that in some areas documents are much closer to carrying the life of the field. But in any field, there is a strong social standard of validity and truth.
. . .
When people are doing mathematics, the flow of ideas and the social standard of validity is much more reliable than formal documents. People are usually not very good in checking formal correctness of proofs, but they are quite good at detecting potential weaknesses or flaws in proofs.”

Listening to music by jointly reading the score

Another quote from Bill Thurston, this with an arresting image of mathematical communication:

We have an inexorable instinct to convey through speech content that is not easily spoken.  Because of this tendency, mathematics takes a highly symbolic, algebraic, and technical form.  Few people listening to a technical discourse are hearing a story. Most readers of mathematics (if they happen not to be totally baffled) register only technical details – which are essentially different from the original thoughts we put into mathematical discourse.  The meaning, the poetry, the music, and the beauty of mathematics are generally lost.  It’s as if an audience were to attend a concert where the musicians, unable to perform in a way the audience could appreciate, just handed out copies of the score.  In mathematics, it happens frequently that both the performers and the audience are oblivious to what went wrong, even though the failure of communication is obvious to all.” (Thurston 2011, page xi)  

Reference:
William P. Thurston [2011]:   Foreword.   The Best Writing on Mathematics: 2010.  Edited by Mircea Pitici.  Princeton, NJ, USA:  Princeton University Press.

Mathematical thinking and software

Further to my post citing Keith Devlin on the difficulties of doing mathematics online, I have heard from one prominent mathematician that he now does all his mathematics using LaTeX, not paper or whiteboard, and thus disagrees with Devlin’s (and my) views.  Thinking about why this may be, and about my own experiences using LaTeX, it occurred to me that one’s experience with thinking-support software, whether word-processing packages such as MS-WORD or mark-up languages such as LaTeX, will very much depend on the TYPE of thinking one is doing.
If one is thinking with words and text, or text-like symbols such as algebra, the right-handed folk among us are likely to be using the left hemispheres of our brains.  If one is thinking in diagrams, as in geometry or graph theory or much of engineering including computing, the right-handed among us are more likely to be using the right hemispheres of our brains.  Yet MS-WORD and LaTeX are entirely text-based, and their use requires the heavy involvement of our left hemispheres (for the northpaws among us).  One doesn’t draw an arrow in LaTeX, for example, but instead types a command such as \rightarrow or \uparrow.  If one is already using one’s left hemisphere to do the mathematical thinking, as most algebraists would be, then the cognitive load in using the software will be a lot less than if one is using one’s right hemisphere for the mathematical thinking.  Activities which require both hemispheres are typically very challenging to most of us, since co-ordination between the two hemispheres adds further cognitive overhead.
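To make the arrow example concrete, here is a minimal illustrative LaTeX fragment (my own sketch, not taken from any of the texts discussed): even a simple labelled arrow or commutative square, the kind of thing a geometer would sketch freehand in seconds, has to be assembled entirely out of typed commands.

% Illustrative sketch only: a labelled arrow and a commutative square,
% specified as typed commands rather than drawn by hand.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
  X \xrightarrow{\; f \;} Y    % a single labelled arrow
\]
\[
  \begin{array}{ccc}           % a commutative square, approximated with an array
    A & \longrightarrow & B \\
    \downarrow &  & \downarrow \\
    C & \longrightarrow & D
  \end{array}
\]
\end{document}

Nothing in this fragment is drawn; every arrow is named, and the spatial arrangement of the square is encoded in the column-and-row syntax of the array environment.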
I find LaTeX immeasurably better than any other word-processor for writing text: it and I work at the same speed (which is not true of MS-WORD for me, for example), and I am able to do my verbal thinking in it.  In this case, writing is a form of thinking, not merely the subsequent expression of thoughts I’ve already had.  However, I cannot do my mathematical or formal thinking in LaTeX, and the software is at best a tool for the subsequent expression of thoughts already worked out elsewhere – mentally, on paper, or on a whiteboard.  My formal thinking is usually about structure and relationship, and less often about algebraic symbol manipulation.
Bill Thurston, the geometer I recently quoted, said:

I was interested in geometric areas of mathematics, where it is often pretty hard to have a document that reflects well the way people actually think.  In more algebraic or symbolic fields, this is not necessarily so, and I have the impression that in some areas documents are much closer to carrying the life of the field.”  [Thurston 1994, p. 169]

It is interesting that many non-mathematical writers also do their thinking about structure not in the document itself or as they write, but outside it and beforehand, and often using tools such as post-it notes on boards; see the recent  article by John McPhee in The New Yorker for examples from his long writing life.
References:
John McPhee [2013]: “Structure: Beyond the picnic-table crisis,” The New Yorker, 14 January 2013, pages 46-55.
William P. Thurston [1994]: “On proof and progress in mathematics,” Bulletin of the American Mathematical Society, 30 (2): 161-177.

Mathematical hands

With MOOCs fast becoming the teaching trend-du-jour in western universities, it is easy to imagine that all disciplines and all ways of thinking are equally amenable to information technology.  This is simply not true, and mathematical thinking in particular requires hand-written drawing and symbolic manipulation.  Nobody ever acquired skill in a mathematical discipline without doing exercises and problems him or herself, writing on paper or a board with his or her own hands.  The physical manipulation by the hand holding the pen or pencil is necessary to gain facility in the mental manipulation of the mathematical concepts and their relationships.
Keith Devlin recounts his recent experience teaching a MOOC course on mathematics, and the deleterious use by students of the typesetting package LaTeX for doing assignments:

We have, it seems, become so accustomed to working on a keyboard, and generating nicely laid out pages, we are rapidly losing, if indeed we have not already lost, the habit—and love—of scribbling with paper and pencil. Our presentation technologies encourage form over substance. But if (free-form) scribbling goes away, then I think mathematics goes with it. You simply cannot do original mathematics at a keyboard. The cognitive load is too great.

Why is this?  A key reason is that current mathematics-producing software is clunky, cumbersome, finicky, and not WYSIWYG (What You See Is What You Get).  The most widely used such software is LaTeX (and its relatives), which is a mark-up and command language; when compiled, these commands generate mathematical symbols.  Using LaTeX does not involve direct manipulation of the symbols, but only their indirect manipulation.  One has first to imagine (or indeed, draw by hand!) the desired symbols or mathematical notation, which one then creates using the appropriate generative LaTeX commands.  Only when these commands are compiled can the user see the effects they intended to produce.  Facility with pen-and-paper, by contrast, enables direct manipulation of symbols, with (eventually) the pen-in-hand being experienced as an extension of the user’s physical body and mind, and not as something other.  Expert musicians, archers, surgeons, jewellers, and craftsmen often have the same experience with their particular instruments, feeling them to be extensions of their own body and not external tools.
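As a small illustration of this indirectness (my own example, not Devlin’s), the markup one actually types bears little visual resemblance to the notation it eventually produces, and that notation becomes visible only after the file has been compiled (with pdflatex, say):

% What the user types is a line of commands, not symbols.
% Only after compilation does it render as the familiar statement
% that the sum of 1/n^2 from n = 1 to infinity equals pi^2/6.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
  \sum_{n=1}^{\infty} \frac{1}{n^{2}} = \frac{\pi^{2}}{6}
\]
\end{document}

Each intermediate step (imagining the notation, recalling the commands, compiling, inspecting the output) sits between hand and symbol, which is precisely the gap between direct and indirect manipulation described above.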
Experienced writers too can feel this way about their use of a keyboard, but language processing software is generally WYSIWYG (or close enough not to matter).  Mathematics-making software is a long way from allowing the user to feel that they are directly manipulating the symbols in their head, as a pen-in-hand mathematician feels.  Without direct manipulation, hand and mind are not doing the same thing at the same time, and thus – a fortiori – keyboard-in-hand is certainly not simultaneously manipulating concept-in-mind, nor is keyboard-in-hand simultaneously expressing or evoking concept-in-mind.
I am sure that a major source of the problem here is that too many people – and especially most of the chattering classes – mistakenly believe the only form of thinking is verbal manipulation.  Even worse, some philosophers believe that one can only think by means of words.     Related posts on drawing-as-a-form-of-thinking here, and on music-as-a-form-of-thinking here.
[HT:  Normblog]