Archive for the 'AI' Category

Minds do precisely everything computers do not do: Hart on Dennett

Daniel Dennett thinks consciousness is an illusion.  This claim is refuted by a moment’s introspection, but perhaps some philosophers are less trustful of introspection than is everyone else.  Certainly, based on this particular case, some philosophers would have good reason to distrust their own reasoning abilities.

It is nice to read a rebuttal of Dennett’s manifestly ridiculous idea by another philosopher, David Bentley Hart (writing in The New Atlantis).  Here is an excerpt:

Dennett is an orthodox neo-Darwinian, in the most gradualist of the sects. Everything in nature must for him be the result of a vast sequence of tiny steps. This is a fair enough position, but the burden of any narrative of emergence framed in those terms is that the stochastic logic of the tale must be guarded with untiring vigilance against any intrusion by “higher causes.” But, where consciousness is concerned, this may very well be an impossible task.

The heart of Dennett’s project, as I have said, is the idea of “uncomprehending competences,” molded by natural selection into the intricate machinery of mental existence. As a model of the mind, however, the largest difficulty this poses is that of producing a credible catalogue of competences that are not dependent for their existence upon the very mental functions they supposedly compose.

Certainly Dennett fails spectacularly in his treatment of the evolution of human language. As a confirmed gradualist in all things, he takes violent exception to any notion of an irreducible, innate, universal grammar, like that proposed by Noam Chomsky, Robert Berwick, Richard Lewontin, and others. He objects even when those theories reduce the vital evolutionary saltation between pre-linguistic and linguistic abilities to a single mutation, like the sudden appearance in evolutionary history of the elementary computational function called “Merge,” which supposedly all at once allowed for the syntactic combination of two distinct elements, such as a noun and a verb.

Fair enough. From Dennett’s perspective, after all, it would be hard to reconcile this universal grammar — an ability that necessarily began as an internal faculty of thought, dependent upon fully formed and discrete mental concepts, and only thereafter expressed itself in vocal signs — with a truly naturalist picture of reality. So, for Dennett, language must have arisen out of social practices of communication, rooted in basic animal gestures and sounds in an initially accidental association with features of the environment. Only afterward could these elements have become words, spreading and combining and developing into complex structures of reference. There must then, he assumes, have been “proto-languages” that have since died away, liminal systems of communication filling up the interval between animal vocalizations and human semiotic and syntactic capacities.

Unfortunately, this simply cannot be. There is no trace in nature even of primitive languages, let alone proto-languages; all languages possess a full hierarchy of grammatical constraints and powers. And this is not merely an argument from absence, like the missing fossils of all those dragons or unicorns that must have once existed. It is logically impossible even to reverse-engineer anything that would qualify as a proto-language. Every attempt to do so will turn out secretly to rely on the syntactic and semiotic functions of fully developed human language. But Dennett is quite right about how immense an evolutionary saltation the sudden emergence of language would really be. Even the simple algorithm of Merge involves, for instance, a crucial disjunction between what linguists call “structural proximity” and “linear proximity” — between, that is, a hypotactic or grammatical connection between parts of a sentence, regardless of their spatial and temporal proximity to one another, and the simple sequential ordering of signifiers in that sentence. Without such a disjunction, nothing resembling linguistic practice is possible; yet that disjunction can itself exist nowhere except in language.

Dennett, however, writes as if language were simply the cumulative product of countless physical ingredients. It begins, he suggests, in mere phonology. The repeated sound of a given word somehow embeds itself in the brain and creates an “anchor” that functions as a “collection point” for syntactic and semantic meanings to “develop around the sound.” But what could this mean? Are semiotic functions something like iron filings and phonemes something like magnets? What is the physical basis for these marvelous congelations in the brain? The only possible organizing principle for such meanings would be that very innate grammar that Dennett denies exists — and this would seem to require distinctly mental concepts. Not that Dennett appears to think the difference between phonemes and concepts an especially significant one. He does not hesitate, for instance, to describe the “synanthropic” aptitudes that certain organisms (such as bedbugs and mice) acquire in adapting themselves to human beings as “semantic information” that can be “mindlessly gleaned” from the “cycle of generations.”

But there is no such thing as mindless semantics. True, it is imaginable that the accidental development of arbitrary pre-linguistic associations between, say, certain behaviors and certain aspects of a physical environment might be preserved by natural selection, and become beneficial adaptations. But all semantic information consists in the interpretation of signs, and of conventions of meaning in which signs and references are formally separable from one another, and semiotic relations are susceptible of combination with other contexts of meaning. Signs are intentional realities, dependent upon concepts, all the way down. And between mere accidental associations and intentional signs there is a discontinuity that no gradualist — no pleonastic — narrative can span.

Similarly, when Dennett claims that words are “memes” that reproduce like a “virus,” he is speaking pure gibberish. Words reproduce, within minds and between persons, by being intentionally adopted and employed.

Here, as it happens, lurks the most incorrigibly problematic aspect of Dennett’s project. The very concept of memes — Richard Dawkins’s irredeemably vague notion of cultural units of meaning or practice that invade brains and then, rather like genetic materials, thrive or perish through natural selection — is at once so vapid and yet so fantastic that it is scarcely tolerable as a metaphor. But a depressingly substantial part of Dennett’s argument requires not only that memes be accorded the status of real objects, but that they also be regarded as concrete causal forces in the neurology of the brain, whose power of ceaseless combination creates most of the mind’s higher functions. And this is almost poignantly absurd.

Perhaps it is possible to think of intentional consciousness as having arisen from an improbable combination of purely physical ingredients — even if, as yet, the story of that seemingly miraculous metabolism of mechanism into meaning cannot be imagined. But it seems altogether bizarre to think of intentionality as the product of forces that would themselves be, if they existed at all, nothing but acts of intentionality. What could memes be other than mental conventions, meanings subsisting in semiotic practices? As such, their intricate interweaving would not be the source, but rather the product, of the mental faculties they inhabit; they could possess only such complexity as the already present intentional powers of the mind could impose upon them. And it is a fairly inflexible law of logic that no reality can be the emergent result of its own contingent effects.

This is why, also, it is difficult to make much sense of Dennett’s claim that the brain is “a kind of computer,” and mind merely a kind of “interface” between that computer and its “user.” The idea that the mind is software is a fairly popular delusion at the moment, but that hardly excuses a putatively serious philosopher for perpetuating it — though admittedly Dennett does so in a distinctive way. Usually, when confronted by the computational model of mind, it is enough to point out that what minds do is precisely everything that computers do not do, and therein lies much of a computer’s usefulness.

Really, it would be no less apt to describe the mind as a kind of abacus. In the physical functions of a computer, there is neither a semantics nor a syntax of meaning. There is nothing resembling thought at all. There is no intentionality, or anything remotely analogous to intentionality or even to the illusion of intentionality. There is a binary system of notation that subserves a considerable number of intrinsically mindless functions. And, when computers are in operation, they are guided by the mental intentions of their programmers and users, and they provide an instrumentality by which one intending mind can transcribe meanings into traces, and another can translate those traces into meaning again. But the same is true of books when they are “in operation.” And this is why I spoke above of a “Narcissan fallacy”: computers are such wonderfully complicated and versatile abacuses that our own intentional activity, when reflected in their functions, seems at times to take on the haunting appearance of another autonomous rational intellect, just there on the other side of the screen. It is a bewitching illusion, but an illusion all the same. And this would usually suffice as an objection to any given computational model of mind.

But, curiously enough, in Dennett’s case it does not, because to a very large degree he would freely grant that computers only appear to be conscious agents. The perversity of his argument, notoriously, is that he believes the same to be true of us.

For Dennett, the scientific image is the only one that corresponds to reality. The manifest image, by contrast, is a collection of useful illusions, shaped by evolution to provide the interface between our brains and the world, and thus allow us to interact with our environments. The phenomenal qualities that compose our experience, the meanings and intentions that fill our thoughts, the whole world of perception and interpretation — these are merely how the machinery of our nervous systems and brains represent reality to us, for purely practical reasons. Just as the easily manipulated icons on a computer’s screen conceal the innumerable “uncomprehending competences” by which programs run, even while enabling us to use those programs, so the virtual distillates of reality that constitute phenomenal experience permit us to master an unseen world of countless qualityless and purposeless physical forces.

Very well. In a sense, Dennett’s is simply the standard modern account of how the mind relates to the physical order. The extravagant assertion that he adds to this account, however, is that consciousness itself, understood as a real dimension of wholly first-person phenomenal experience and intentional meaning, is itself only another “user-illusion.” That vast abyss between objective physical events and subjective qualitative experience that I mentioned above does not exist. Hence, that seemingly magical transition from the one to the other — whether a genetic or a structural shift — need not be explained, because it has never actually occurred.

The entire notion of consciousness as an illusion is, of course, rather silly. Dennett has been making the argument for most of his career, and it is just abrasively counterintuitive enough to create the strong suspicion in many that it must be more philosophically cogent than it seems, because surely no one would say such a thing if there were not some subtle and penetrating truth hidden behind its apparent absurdity. But there is none. The simple truth of the matter is that Dennett is a fanatic: He believes so fiercely in the unique authority and absolutely comprehensive competency of the third-person scientific perspective that he is willing to deny not only the analytic authority, but also the actual existence, of the first-person vantage. At the very least, though, he is an intellectually consistent fanatic, inasmuch as he correctly grasps (as many other physical reductionists do not) that consciousness really is irreconcilable with a coherent metaphysical naturalism. Since, however, the position he champions is inherently ridiculous, the only way that he can argue on its behalf is by relentlessly, and in as many ways as possible, changing the subject whenever the obvious objections are raised.

For what it is worth, Dennett often exhibits considerable ingenuity in his evasions — so much ingenuity, in fact, that he sometimes seems to have succeeded in baffling even himself. For instance, at one point in this book he takes up the question of “zombies” — the possibility of apparently perfectly functioning human beings who nevertheless possess no interior affective world at all — but in doing so seems to have entirely forgotten what the whole question of consciousness actually is. He rejects the very notion that we “have ‘privileged access’ to the causes and sources of our introspective convictions,” as though knowledge of the causes of consciousness were somehow germane to the issue of knowledge of the experience of consciousness. And if you believe that you know you are not a zombie “unwittingly” imagining that you have “real consciousness with real qualia,” Dennett’s reply is a curt “No, you don’t” — because, you see, “The only support for that conviction is the vehemence of the conviction itself.”

It is hard to know how to answer this argument without mockery. It is quite amazing how thoroughly Dennett seems to have lost the thread here. For one thing, a zombie could not unwittingly imagine anything, since he would possess no consciousness at all, let alone reflective consciousness; that is the whole point of the imaginative exercise. Insofar as you are convinced of anything at all, whether vehemently or tepidly, you do in fact know with absolute certitude that you yourself are not a zombie. Nor does it matter whether you know where your convictions come from; it is the very state of having convictions as such that apprises you of your intrinsic intentionality and your irreducibly private conscious experience.

Simply enough, you cannot suffer the illusion that you are conscious because illusions are possible only for conscious minds. This is so incandescently obvious that it is almost embarrassing to have to state it. But this confusion is entirely typical of Dennett’s position. In this book, as he has done repeatedly in previous texts, he mistakes the question of the existence of subjective experience for the entirely irrelevant question of the objective accuracy of subjective perceptions, and whether we need to appeal to third-person observers to confirm our impressions. But, of course, all that matters for this discussion is that we have impressions at all.

Moreover, and perhaps most bizarrely, Dennett thinks that consciousness can be dismissed as an illusion — the fiction of an inner theater, residing in ourselves and in those around us — on the grounds that behind the appearance of conscious states there are an incalculable number of “uncomprehending competences” at work in both the unseen machinery of our brains and the larger social contexts of others’ brains. In other words, because there are many unknown physical concomitants to conscious states, those states do not exist. But, of course, this is the very problem at issue: that the limpid immediacy and incommunicable privacy of consciousness is utterly unlike the composite, objective, material sequences of physical causality in the brain, and seems impossible to explain in terms of that causality — and yet exists nonetheless, and exists more surely than any presumed world “out there.”

That, as it happens, may be the chief question Dennett neglects to ask: Why presume that the scientific image is true while the manifest image is an illusion when, after all, the scientific image is a supposition of reason dependent upon decisions regarding methods of inquiry, whereas the manifest image — the world as it exists in the conscious mind — presents itself directly to us as an indubitable, inescapable, and eminently coherent reality in every single moment of our lives? How could one possibly determine here what should qualify as reality as such? Dennett certainly provides small reason why anyone else should adopt the prejudices he cherishes. The point of From Bacteria to Bach and Back is to show that minds are only emergent properties of our brains, and brains only aggregates of mindless elements and forces. But it shows nothing of the sort.

The journey the book promises to describe turns out to be the real illusion: Rather than a continuous causal narrative, seamlessly and cumulatively progressing from the most primitive material causes up to the most complex mental results, it turns out to be a hopelessly recursive narrative, a long, languid lemniscate of a tale, twisting back and forth between low and high — between the supposed basic ingredients underlying the mind’s evolution and the fully realized mental phenomena upon which those ingredients turn out to be wholly dependent. It is nearly enough to make one suspect that Dennett must have the whole thing backward.

Perhaps the scientific and manifest images are both accurate. Then again, perhaps only the manifest image is. Perhaps the mind inhabits a real Platonic order of being, where ideal forms express themselves in phenomenal reflections, while the scientific image — a mechanistic regime devoid of purpose and composed of purely particulate causes, stirred only by blind, random impulses — is a fantasy, a pale abstraction decocted from the material residues of an immeasurably richer reality. Certainly, if Dennett’s book encourages one to adopt any position at all, reason dictates that it be something like the exact reverse of the one he defends. The attempt to reduce the phenomena of mental existence to a purely physical history has been attempted before, and has so far always failed. But, after so many years of unremitting labor, and so many enormous books making wildly implausible claims, Dennett can at least be praised for having failed on an altogether majestic scale.

 

Reference:

David Bentley Hart [2017]:  “The Illusionist,” The New Atlantis, Number 53, Summer/Fall 2017, pp. 109-121.




Philosophy as a creative art

I quoted poet Don Paterson on what he saw as Shakespeare’s use of the act of poetry-writing to learn what he intended to say in the poem being written.    And now, here is poet and philosopher George Santayana writing to William James in the same vein on philosophy:

If philosophy were the attempt to solve a given problem, I should see reason to be discouraged about its success; but it strikes me that it is rather an attempt to express a half-undiscovered reality, just as art is, and that two different renderings, if they are expressive, far from cancelling each other add to each other’s value . . . I confess I do not see why we should be so vehemently curious about the absolute truth, which is not to be made or altered by our discovery of it.  But philosophy seems to me to be its own reward, and its justification lies in the delight and dignity of the art itself. [Letter to William James, 1887-12-15, quoted in Kirkwood 1961, pp. 43-44.]

Reference:

M. M. Kirkwood [1961]: Santayana:  Saint of the Imagination.  Toronto, Canada:  University of Toronto Press.




AI’s first millennium: prepare to celebrate

A search algorithm is a computational procedure (an algorithm) for finding a particular object or objects in a larger collection of objects.    Typically, these algorithms search for objects with desired properties whose identities are otherwise not yet known.   Search algorithms (and search generally) have been an integral part of artificial intelligence and computer science this last half-century, since the first working AI program, designed to play checkers, was written in 1951-2 by Christopher Strachey.    At each round, that program evaluated the alternative board positions that resulted from potential next moves, thereby searching for the “best” next move for that round.
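The kind of search Strachey’s program performed can be sketched in miniature: generate every successor state of the current position, score each with an evaluation function, and pick the move leading to the highest score.  The toy “positions” and scoring function below are invented purely for illustration; this is a one-ply sketch of the idea, not Strachey’s actual program.

```python
def best_next_move(position, legal_moves, apply_move, evaluate):
    """One-ply game-tree search: score every successor position
    and return the move leading to the highest-valued one."""
    return max(legal_moves(position),
               key=lambda move: evaluate(apply_move(position, move)))

# Toy example: "positions" are integers, moves add an offset,
# and the evaluation function prefers positions near 10.
legal_moves = lambda p: [-1, +1, +3]
apply_move = lambda p, m: p + m
evaluate = lambda p: -abs(p - 10)

print(best_next_move(7, legal_moves, apply_move, evaluate))  # +3 reaches 10
```

A full game-playing program would recurse over several plies of moves and counter-moves, but the structure — enumerate, evaluate, select — is the same.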

The first search algorithm in modern times apparently dates from 1895:  a depth-first search algorithm to solve a maze, due to amateur French mathematician Gaston Tarry (1843-1913).  Now, in a recent paper by logician Wilfrid Hodges, the date for the first search algorithm has been pushed back much further:  to the third decade of the second millennium, the 1020s.  Hodges translates and analyzes a logic text of Persian Islamic philosopher and mathematician, Ibn Sina (aka Avicenna, c. 980 – 1037) on methods for finding a proof of a syllogistic claim when some premises of the syllogism are missing.   Representation of domain knowledge using formal logic and automated reasoning over these logical representations (ie, logic programming) has become a key way in which intelligence is inserted into modern machines;  searching for proofs of claims (“potential theorems”) is how such intelligent machines determine what they know or can deduce.  It is nice to think that automated theorem-proving is almost 990 years old.
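The depth-first idea that Tarry’s maze procedure anticipates can be sketched in a few lines: follow each passage as far as it goes, backtracking when stuck.  The grid maze below is invented for illustration, and this is a modern stack-based rendering rather than Tarry’s original rule for traversing corridors.

```python
def solve_maze(maze, start, goal):
    """Depth-first search through a grid: 0 = passage, 1 = wall.
    Returns a path of (row, col) cells, or None if no route exists."""
    rows, cols = len(maze), len(maze[0])
    stack, seen = [(start, [start])], {start}
    while stack:
        (r, c), path = stack.pop()        # take the most recently found cell
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))        # mark so we never revisit
                stack.append(((nr, nc), path + [(nr, nc)]))
    return None

maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(solve_maze(maze, (0, 0), (0, 2)))  # winds down, across, and back up
```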

References:

B. Jack Copeland [2000]:  What is Artificial Intelligence?

Wilfrid Hodges [2010]: Ibn Sina on analysis: 1. Proof search. or: abstract state machines as a tool for history of logic.  pp. 354-404, in: A. Blass, N. Dershowitz and W. Reisig (Editors):  Fields of Logic and Computation. Lecture Notes in Computer Science, volume 6300.  Berlin, Germany:  Springer.   A version of the paper is available from Hodges’ website.

Gaston Tarry [1895]: Le problème des labyrinthes. Nouvelles Annales de Mathématiques, 14: 187-190.




As we once thought

The Internet, the World-Wide-Web and hypertext were all forecast by Vannevar Bush, in a July 1945 article for The Atlantic, entitled  As We May Think.  Perhaps this is not completely surprising since Bush had a strong influence on WW II and post-war military-industrial technology policy, as Director of the US Government Office of Scientific Research and Development.  Because of his influence, his forecasts may to some extent have been self-fulfilling.

However, his article also predicted automated machine reasoning using both logic programming, the computational use of formal logic, and computational argumentation, the formal representation and manipulation of arguments.  These areas are both now important domains of AI and computer science which developed first in Europe and which are still much stronger there than in the USA.   An excerpt:

The scientist, however, is not the only person who manipulates data and examines the world about him by the use of logical processes, although he sometimes preserves this appearance by adopting into the fold anyone who becomes logical, much in the manner in which a British labor leader is elevated to knighthood. Whenever logical processes of thought are employed—that is, whenever thought for a time runs along an accepted groove—there is an opportunity for the machine. Formal logic used to be a keen instrument in the hands of the teacher in his trying of students’ souls. It is readily possible to construct a machine which will manipulate premises in accordance with formal logic, simply by the clever use of relay circuits. Put a set of premises into such a device and turn the crank, and it will readily pass out conclusion after conclusion, all in accordance with logical law, and with no more slips than would be expected of a keyboard adding machine.

Logic can become enormously difficult, and it would undoubtedly be well to produce more assurance in its use. The machines for higher analysis have usually been equation solvers. Ideas are beginning to appear for equation transformers, which will rearrange the relationship expressed by an equation in accordance with strict and rather advanced logic. Progress is inhibited by the exceedingly crude way in which mathematicians express their relationships. They employ a symbolism which grew like Topsy and has little consistency; a strange fact in that most logical field.

A new symbolism, probably positional, must apparently precede the reduction of mathematical transformations to machine processes. Then, on beyond the strict logic of the mathematician, lies the application of logic in everyday affairs. We may some day click off arguments on a machine with the same assurance that we now enter sales on a cash register. But the machine of logic will not look like a cash register, even of the streamlined model.

Edinburgh sociologist, Donald MacKenzie, wrote a nice history and sociology of logic programming and the use of logic in computer science, Mechanizing Proof: Computing, Risk, and Trust.  The only flaw of this fascinating book is an apparent misunderstanding throughout that theorem-proving by machines refers only to proving (or not) of theorems in mathematics.    Rather, theorem-proving in AI refers to proving claims in any domain of knowledge represented by a formal, logical language.    Medical expert systems, for example, may use theorem-proving techniques to infer the presence of a particular disease in a patient; the claims being proved (or not) are theorems of the formal language representing the domain, not necessarily mathematical theorems.
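The style of domain theorem-proving described here can be sketched with a minimal forward-chaining engine over Horn clauses — repeatedly fire any rule whose premises are already established until nothing new can be derived.  The medical rules below are invented for illustration; no real expert system is this simple.

```python
def forward_chain(facts, rules):
    """Derive every provable atom from a set of facts and a list of
    Horn rules, each rule a (premises, conclusion) pair."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and premises <= known:
                known.add(conclusion)   # rule fires: conclusion is proved
                changed = True
    return known

# Toy medical knowledge base (invented for illustration).
rules = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "suspected_pneumonia"),
]
print(forward_chain({"fever", "cough", "chest_pain"}, rules))
```

Here “suspected_pneumonia” is a theorem of the little formal language of symptoms and diseases — provable from the facts and rules — and not a mathematical theorem in MacKenzie’s sense.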

References:

Donald MacKenzie [2001]:  Mechanizing Proof: Computing, Risk, and Trust.  Cambridge, MA, USA:  MIT Press.

Vannevar Bush [1945]:  As We May Think.  The Atlantic, July 1945.




Bayesian statistics

One of the mysteries to anyone trained in the frequentist hypothesis-testing paradigm of statistics, as I was, and still adhering to it, as I do, is how Bayesian approaches seemed to have taken the academy by storm.   One wonders, first, how a theory based – and based explicitly – on a measure of uncertainty defined in terms of subjective personal beliefs, could be considered even for a moment for an inter-subjective (ie, social) activity such as Science.    One wonders, second, how a theory justified by appeals to such socially-constructed, culturally-specific, and readily-contestable activities as gambling (ie, so-called Dutch-book arguments) could be taken seriously as the basis for an activity (Science) aiming for, and claiming to achieve, universal validity.   One wonders, third, how the fact that such justifications, even if gambling presents no moral, philosophical or other qualms,  require infinite sequences of gambles is not a little troubling for all of us living in this finite world.  (You tell me you are certain to beat me if we play an infinite sequence of gambles? Then, let me tell you, that I have a religion promising eternal life that may interest you in turn.)

One wonders, fourth, where are recorded all the prior distributions of beliefs which this theory requires investigators to articulate before doing research.  Surely someone must be writing them down, so that we consumers of science can know that our researchers are honest, and hold them to potential account.   That there is such a disconnect between what Bayesian theorists say researchers do and what those researchers demonstrably do should trouble anyone contemplating a choice of statistical paradigms, surely.   Finally, one wonders how a theory that requires non-zero probabilities be allocated to models of which the investigators have not yet heard or even which no one has yet articulated, for those models to be tested, passes muster at the statistical methodology corral.

To my mind, Bayesianism is a theory from some other world – infinite gambles, imagined prior distributions, models that disregard time or requirements for constructability,  unrealistic abstractions from actual scientific practice – not from our own.
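The dependence on subjective priors complained of above is easy to exhibit with the textbook conjugate Beta-Binomial model: the same data yield different posteriors for different analysts, simply because they started from different priors.  A minimal sketch, with invented numbers:

```python
def beta_binomial_posterior(alpha, beta, heads, tails):
    """Conjugate update: a Beta(alpha, beta) prior on a coin's bias,
    combined with binomial data, gives a Beta(alpha + heads,
    beta + tails) posterior."""
    return alpha + heads, beta + tails

def posterior_mean(alpha, beta):
    return alpha / (alpha + beta)

data = (7, 3)  # 7 heads, 3 tails observed
for prior in [(1, 1), (10, 10), (1, 5)]:  # three analysts, three priors
    a, b = beta_binomial_posterior(*prior, *data)
    # e.g. (1, 1) -> 0.667, (10, 10) -> 0.567, (1, 5) -> 0.5
    print(prior, "->", round(posterior_mean(a, b), 3))
```

Identical evidence, three different conclusions about the coin — which is exactly the feature a frequentist finds disquieting and a Bayesian regards as honest bookkeeping.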

So, how could the Bayesians make as much headway as they have these last six decades? Perhaps it is due to an inherent pragmatism of statisticians – using whatever techniques work, without much regard as to their underlying philosophy or incoherence therein.  Or perhaps the battle between the two schools of thought has simply been asymmetric:  the Bayesians being more determined to prevail (in my personal experience, to the point of cultism and personal vitriol) than the adherents of frequentism.  Greg Wilson’s 2001 PhD thesis explored this question, although without finding definitive answers.

Now,  Andrew Gelman and the indefatigable Cosma Shalizi have written a superb paper, entitled “Philosophy and the practice of Bayesian statistics”.  Their paper presents another possible reason for the rise of Bayesian methods:  that Bayesianism, when used in actual practice, is most often a form of hypothesis-testing, and thus not as untethered to reality as the pure theory would suggest.  Their abstract:

A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism.  We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science.

Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.
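The model checking that Gelman and Shalizi emphasise can be sketched as a posterior predictive check: simulate replicate datasets from the fitted model and ask whether the observed data would be surprising among them.  The coin-flip model and numbers below are invented for illustration, not taken from their paper.

```python
import random

def posterior_predictive_pvalue(observed, alpha, beta, n_sims=10_000, seed=0):
    """Posterior predictive check for a Beta-Binomial coin model:
    the proportion of simulated replicate datasets showing at least
    as many heads as the observed data."""
    rng = random.Random(seed)
    n, heads = len(observed), sum(observed)
    a, b = alpha + heads, beta + (n - heads)   # conjugate posterior
    extreme = 0
    for _ in range(n_sims):
        theta = rng.betavariate(a, b)          # draw a bias from the posterior
        rep = sum(rng.random() < theta for _ in range(n))  # replicate data
        extreme += rep >= heads
    return extreme / n_sims

observed = [1] * 9 + [0]  # 9 heads in 10 flips
print(posterior_predictive_pvalue(observed, 1, 1))
```

A p-value near 0 or 1 would signal that the model fails to reproduce the data — and it is this step of confronting the fitted model with the world, rather than Bayesian updating as such, that Gelman and Shalizi identify as hypothetico-deductive practice.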

References:

Andrew Gelman and Cosma Rohilla Shalizi [2010]:  Philosophy and the practice of Bayesian statistics.  Available from arXiv.

Gregory D. Wilson [2001]:   Articulation Theory and Disciplinary Change:  Unpacking the Bayesian-Frequentist Paradigm Conflict in Statistical Science.  PhD Thesis,  Rhetoric and Professional Communication Programme, New Mexico State University.  Las Cruces, NM, USA.  July 2001.



Doing a PhD

These are some notes on deciding to do a PhD, notes I wrote some years ago after completing my own PhD.

Choosing a PhD program is one of the hardest decisions we can make. For a start, most of us only make this decision once in our lives, and so we have no prior personal experience to go on.

Second, the success or otherwise of a PhD depends a great deal on factors about which we have little advance knowledge or control, including, for example:

Continue reading ‘Doing a PhD’




Maps and territories and knowledge

Seymour Papert, one of the pioneers of Artificial Intelligence, once wrote (1988, p. 3), “Artificial Intelligence should become the methodology for thinking about ways of knowing.”   I would add “and ways of acting”.

Some time back, I wrote about the painting of spirit-dreamtime maps by Australian aboriginal communities as proof of their relationship to specific places:  Only people with traditional rights to the specific place would have the necessary dreamtime knowledge needed to make the painting, an argument whose compelling force has been recognized by Australian courts.  These paintings are a form of map, showing (some of) the spirit relationships of the specific place.  The argument they make is a very interesting one, along the lines of:

What I am saying is true, by virtue of the mere fact that I am saying it, since only someone having the truth would be able to make such an utterance (ie, the painting).

Another example of this type of argument is given by Rory Stewart, in his account of his walk across Afghanistan.   Stewart does not carry a paper map of the country he is walking through, lest he be thought a foreign spy (p. 211).   Instead, he learns and memorizes a list of the villages and their headmen, in the order he plans to walk through them.  Like the aboriginal dreamtime paintings, mere knowledge of this list provides proof of his right to be in the area.  Like the paintings, the list is a type of map of the territory, a different way of knowing.  And also like the paintings, possession of this knowledge leads others, when they learn of the possession, to act differently towards the possessor.  Here’s Stewart on his map (p. 213):

It was less accurate the further you were from the speaker’s home . . .  But I was able to add details from villages along the way, till I could chant the stages from memory.

Day one:  Commandant Maududi in Badgah.  Day two:  Abdul Rauf Ghafuri in Daulatyar.  Day three:  Bushire Khan in Sang-i-zard.  Day four:  Mir Ali Hussein Beg of Katlish.  Day five: Haji Nasir-i-Yazdani Beg of Qala-e-Nau.  Day six:  Seyyed Kerbalahi of Siar Chisme . . .

I recited and followed this song-of-the-places-in-between as a map.  I chanted it even after I had left the villages, using the list as credentials.  Almost everyone recognized the names, even from a hundred kilometres away.  Being able to chant it made me half belong:  it reassured hosts who were not sure whether to take me in and it suggested to anyone who thought of attacking me that I was linked to powerful names. (page 213)

Because AI is (or should be) about ways of knowing and doing in the world, it therefore has close links to the social sciences, particularly anthropology, and to the humanities.

References:

Seymour Papert [1988]: One AI or Many? Daedalus, 117 (1) (Winter 1988):  1-14.

Rory Stewart [2004]: The Places in Between. London, UK:  Picador, pp. 211-214.




The websearch-industrial complex

I think it is now well-known that the creation of the Internet was sponsored by the US Government, through its military research funding agency, ARPA (later DARPA).   It is perhaps less well-known that Google arose from a $4.5 million research project also sponsored by the US Government, through the National Science Foundation.   Let no one say that the USA has an economic system involving “free” enterprise.

In the primordial ooze of Internet content several hundred million seconds ago (1993), fewer than 100 Web sites inhabited the planet. Early clans of information seekers hunted for data among the far larger populations of text-only Gopher sites and FTP file-sharing servers. This was the world in the years before Google.

Continue reading ‘The websearch-industrial complex’




Computers in conflict

ArgAIBook

Academic publishers Springer have just released a new book on Argumentation in Artificial Intelligence.  From the blurb:

This volume is a systematic, expansive presentation of the major achievements in the intersection between two fields of inquiry: Argumentation Theory and Artificial Intelligence. Contributions from international researchers who have helped shape this dynamic area offer a progressive development of intuitions, ideas and techniques, from philosophical backgrounds, to abstract argument systems, to computing arguments, to the appearance of applications producing innovative results. Each chapter features extensive examples to ensure that readers develop the right intuitions before they move from one topic to another.

In particular, the book exhibits an overview of key concepts in Argumentation Theory and of formal models of Argumentation in AI. After laying a strong foundation by covering the fundamentals of argumentation and formal argument modeling, the book expands its focus to more specialized topics, such as algorithmic issues, argumentation in multi-agent systems, and strategic aspects of argumentation. Finally, as a coda, the book explores some practical applications of argumentation in AI and applications of AI in argumentation.
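To give a concrete taste of the abstract argument systems the blurb mentions, here is a minimal sketch of a Dung-style argumentation framework and its grounded extension (the most skeptical set of collectively defensible arguments). The three-argument example is invented for illustration and is not taken from the book.

```python
# Minimal sketch of a Dung-style abstract argumentation framework.
# The grounded extension is the least fixed point of the characteristic
# function F(S) = {a : a is acceptable with respect to S}.

def grounded_extension(arguments, attacks):
    """arguments: a set of names; attacks: a set of (attacker, target) pairs."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def acceptable(a, s):
        # a is acceptable w.r.t. s if every attacker of a is itself
        # attacked by some argument in s
        return all(any((d, b) in attacks for d in s) for b in attackers[a])

    s = set()
    while True:
        nxt = {a for a in arguments if acceptable(a, s)}
        if nxt == s:
            return s
        s = nxt

# Example: a attacks b, b attacks c.
# a is unattacked, so it is in; a defends c against b, so c is in; b is out.
args = {"a", "b", "c"}
atk = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atk)))  # ['a', 'c']
```

The iteration starts from the unattacked arguments and repeatedly adds whatever they defend, which is why the grounded semantics is the natural algorithmic entry point into the topics the book covers.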

References:

Previous posts on argumentation can be found here.

Iyad Rahwan and Guillermo R. Simari (Editors) [2009]:  Argumentation in Artificial Intelligence.  Berlin, Germany:  Springer.




The Gamelatron

A robot Gamelan orchestra, thanks to Aaron Taylor Kuffner, Eric Singer and Lemur.


(Photo:  Gisella Somentino).