Archive for the 'AI' Category

Philosophy as a creative art

I quoted the poet Don Paterson on what he saw as Shakespeare’s use of the act of writing a poem to discover what he intended to say in it. And now here is the poet and philosopher George Santayana, writing to William James in the same vein about philosophy:

If philosophy were the attempt to solve a given problem, I should see reason to be discouraged about its success; but it strikes me that it is rather an attempt to express a half-undiscovered reality, just as art is, and that two different renderings, if they are expressive, far from cancelling each other add to each other’s value . . . I confess I do not see why we should be so vehemently curious about the absolute truth, which is not to be made or altered by our discovery of it.  But philosophy seems to me to be its own reward, and its justification lies in the delight and dignity of the art itself. [Letter to William James, 1887-12-15, quoted in Kirkwood 1961, pp. 43-44.]

Reference:

M. M. Kirkwood [1961]: Santayana:  Saint of the Imagination.  Toronto, Canada:  University of Toronto Press.




AI’s first millennium: prepare to celebrate

A search algorithm is a computational procedure for finding a particular object or objects in a larger collection of objects.  Typically, these algorithms search for objects with desired properties whose identities are otherwise not yet known.  Search algorithms (and search generally) have been an integral part of artificial intelligence and computer science for the last half-century, since the first working AI program, designed to play checkers, was written in 1951-2 by Christopher Strachey.  At each round, that program evaluated the alternative board positions that resulted from potential next moves, thereby searching for the “best” next move for that round.
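In modern terms, that round-by-round evaluation is a one-move-lookahead search. Here is a minimal Python sketch of the idea; the function names (legal_moves, apply_move, evaluate) are hypothetical placeholders standing in for whatever the game supplies, not anything taken from Strachey’s program:

```python
# One-move lookahead: evaluate every position reachable from the current
# board and choose the move that leads to the highest-scoring position.
# legal_moves, apply_move and evaluate are placeholders to be supplied
# by the game being played.

def best_next_move(board, legal_moves, apply_move, evaluate):
    """Return the legal move whose resulting position evaluates highest."""
    best_move, best_score = None, float("-inf")
    for move in legal_moves(board):
        score = evaluate(apply_move(board, move))
        if score > best_score:
            best_move, best_score = move, score
    return best_move
```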

The first search algorithm in modern times apparently dates from 1895:  a depth-first search algorithm to solve a maze, due to the amateur French mathematician Gaston Tarry (1843-1913).  Now, in a recent paper, the logician Wilfrid Hodges has pushed the date of the first search algorithm back much further:  to the third decade of the second millennium, the 1020s.  Hodges translates and analyzes a logic text by the Persian Islamic philosopher and mathematician Ibn Sina (aka Avicenna, c. 980 – 1037) on methods for finding a proof of a syllogistic claim when some premises of the syllogism are missing.  Representation of domain knowledge using formal logic, and automated reasoning over these logical representations (ie, logic programming), has become a key way in which intelligence is inserted into modern machines;  searching for proofs of claims (“potential theorems”) is how such intelligent machines determine what they know or can deduce.  It is nice to think that automated theorem-proving is almost 990 years old.
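For the maze case, depth-first search can be sketched in a few lines of Python. This is ordinary recursive depth-first search rather than a line-by-line rendering of Tarry’s 1895 method, and the toy maze at the end is invented purely for illustration:

```python
# Depth-first search over a maze represented as a dict mapping each cell
# to the list of cells reachable from it.

def depth_first_path(maze, start, goal, visited=None):
    """Return a list of cells from start to goal, or None if unreachable."""
    if visited is None:
        visited = set()
    if start == goal:
        return [start]
    visited.add(start)
    for neighbour in maze.get(start, []):
        if neighbour not in visited:
            path = depth_first_path(maze, neighbour, goal, visited)
            if path is not None:
                return [start] + path
    return None

# A four-cell toy maze with one dead end.
maze = {"A": ["B", "C"], "B": ["A"], "C": ["A", "D"], "D": ["C"]}
print(depth_first_path(maze, "A", "D"))  # ['A', 'C', 'D']
```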

References:

B. Jack Copeland [2000]:  What is Artificial Intelligence?

Wilfrid Hodges [2010]: Ibn Sina on analysis: 1. Proof search. or: abstract state machines as a tool for history of logic.  pp. 354-404, in: A. Blass, N. Dershowitz and W. Reisig (Editors):  Fields of Logic and Computation. Lecture Notes in Computer Science, volume 6300.  Berlin, Germany:  Springer.   A version of the paper is available from Hodges’ website, here.

Gaston Tarry [1895]: Le problème des labyrinthes. Nouvelles Annales de Mathématiques, 14: 187-190.





As we once thought

The Internet, the World Wide Web and hypertext were all forecast by Vannevar Bush, in a July 1945 article for The Atlantic entitled As We May Think.  Perhaps this is not completely surprising, since Bush had a strong influence on WW II and post-war military-industrial technology policy as Director of the US Government’s Office of Scientific Research and Development.  Because of his influence, his forecasts may to some extent have been self-fulfilling.

However, his article also predicted automated machine reasoning using both logic programming (the computational use of formal logic) and computational argumentation (the formal representation and manipulation of arguments).  Both are now important areas of AI and computer science; they developed first in Europe and are still much stronger there than in the USA.  An excerpt:

The scientist, however, is not the only person who manipulates data and examines the world about him by the use of logical processes, although he sometimes preserves this appearance by adopting into the fold anyone who becomes logical, much in the manner in which a British labor leader is elevated to knighthood. Whenever logical processes of thought are employed—that is, whenever thought for a time runs along an accepted groove—there is an opportunity for the machine. Formal logic used to be a keen instrument in the hands of the teacher in his trying of students’ souls. It is readily possible to construct a machine which will manipulate premises in accordance with formal logic, simply by the clever use of relay circuits. Put a set of premises into such a device and turn the crank, and it will readily pass out conclusion after conclusion, all in accordance with logical law, and with no more slips than would be expected of a keyboard adding machine.

Logic can become enormously difficult, and it would undoubtedly be well to produce more assurance in its use. The machines for higher analysis have usually been equation solvers. Ideas are beginning to appear for equation transformers, which will rearrange the relationship expressed by an equation in accordance with strict and rather advanced logic. Progress is inhibited by the exceedingly crude way in which mathematicians express their relationships. They employ a symbolism which grew like Topsy and has little consistency; a strange fact in that most logical field.

A new symbolism, probably positional, must apparently precede the reduction of mathematical transformations to machine processes. Then, on beyond the strict logic of the mathematician, lies the application of logic in everyday affairs. We may some day click off arguments on a machine with the same assurance that we now enter sales on a cash register. But the machine of logic will not look like a cash register, even of the streamlined model.

The Edinburgh sociologist Donald MacKenzie wrote a nice history and sociology of logic programming and the use of logic in computer science, Mechanizing Proof: Computing, Risk, and Trust.  The only flaw of this fascinating book is an apparent misunderstanding throughout that theorem-proving by machines refers only to the proving (or not) of theorems in mathematics.  Rather, theorem-proving in AI refers to proving claims in any domain of knowledge represented in a formal, logical language.  Medical expert systems, for example, may use theorem-proving techniques to infer the presence of a particular disease in a patient; the claims being proved (or not) are theorems of the formal language representing the domain, not necessarily mathematical theorems.
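To make the distinction concrete, here is a minimal Python sketch of forward-chaining inference over if-then rules in a toy diagnostic domain. The rules and facts are invented for illustration (they are not real medicine); the point is only that the derived conclusion is a theorem of this little knowledge base, not a theorem of mathematics:

```python
# Forward chaining over Horn-clause-style rules: keep applying any rule
# whose premises are all established until no new conclusions appear.
# The rules and facts below are purely illustrative.

rules = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "suspect_pneumonia"),
]
facts = {"fever", "cough", "chest_pain"}

def forward_chain(rules, facts):
    """Return the set of all facts derivable from the rules and initial facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# "suspect_pneumonia" is a theorem of this knowledge base.
print("suspect_pneumonia" in forward_chain(rules, facts))  # True
```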

References:

Donald MacKenzie [2001]:  Mechanizing Proof: Computing, Risk, and Trust.  Cambridge, MA, USA:  MIT Press.

Vannevar Bush [1945]:  As we may think.  The Atlantic, July 1945.





Bayesian statistics

One of the mysteries to anyone trained in the frequentist hypothesis-testing paradigm of statistics, as I was, and still adhering to it, as I do, is how Bayesian approaches seem to have taken the academy by storm.  One wonders, first, how a theory based – and based explicitly – on a measure of uncertainty defined in terms of subjective personal beliefs could be considered even for a moment for an inter-subjective (ie, social) activity such as Science.  One wonders, second, how a theory justified by appeals to such socially-constructed, culturally-specific, and readily-contestable activities as gambling (ie, so-called Dutch-book arguments) could be taken seriously as the basis for an activity (Science) aiming for, and claiming to achieve, universal validity.  One wonders, third, how the fact that such justifications, even if gambling presents no moral, philosophical or other qualms, require infinite sequences of gambles is not a little troubling for all of us living in this finite world.  (You tell me you are certain to beat me if we play an infinite sequence of gambles? Then let me tell you that I have a religion promising eternal life that may interest you in turn.)

One wonders, fourth, where are recorded all the prior distributions of beliefs which this theory requires investigators to articulate before doing research.  Surely someone must be writing them down, so that we consumers of science can know that our researchers are honest, and can hold them to account.  That there is such a disconnect between what Bayesian theorists say researchers do and what those researchers demonstrably do should trouble anyone contemplating a choice of statistical paradigms, surely.  Finally, one wonders how a theory that requires non-zero probabilities to be allocated to models which the investigators have not yet heard of, or which no one has yet articulated, before those models can be tested, passes muster at the statistical methodology corral.

To my mind, Bayesianism is a theory from some other world – infinite gambles, imagined prior distributions, models that disregard time or requirements for constructability,  unrealistic abstractions from actual scientific practice – not from our own.

So, how could the Bayesians make as much headway as they have these last six decades? Perhaps it is due to an inherent pragmatism of statisticians – using whatever techniques work, without much regard for their underlying philosophy or the incoherence therein.  Or perhaps the battle between the two schools of thought has simply been asymmetric:  the Bayesians being more determined to prevail (in my personal experience, to the point of cultism and personal vitriol) than the adherents of frequentism.  Greg Wilson’s 2001 PhD thesis explored this question, although without finding definitive answers.

Now,  Andrew Gelman and the indefatigable Cosma Shalizi have written a superb paper, entitled “Philosophy and the practice of Bayesian statistics”.  Their paper presents another possible reason for the rise of Bayesian methods:  that Bayesianism, when used in actual practice, is most often a form of hypothesis-testing, and thus not as untethered to reality as the pure theory would suggest.  Their abstract:

A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science.

Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.
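To see what that kind of model checking looks like in practice, here is a minimal numpy sketch of a posterior predictive check for a conjugate Beta-Binomial model. The data, prior and test statistic are invented purely for illustration, and the sketch is mine rather than anything taken from the paper:

```python
# Posterior predictive check: simulate replicated data sets from the
# fitted model and compare a test statistic on the replicates with its
# observed value. Data, prior and statistic here are made up.

import numpy as np

rng = np.random.default_rng(0)
y_obs = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])   # observed binary outcomes
n_obs = len(y_obs)

# Posterior for the success probability under a uniform Beta(1, 1) prior.
alpha = 1 + y_obs.sum()
beta = 1 + n_obs - y_obs.sum()

# Draw parameters from the posterior, then replicate whole data sets.
theta = rng.beta(alpha, beta, size=5000)
y_rep = rng.binomial(1, theta[:, None], size=(5000, n_obs))

# Test statistic: total number of successes. A posterior predictive
# p-value near 0 or 1 signals misfit of this aspect of the data.
t_obs = y_obs.sum()
t_rep = y_rep.sum(axis=1)
print("posterior predictive p-value:", (t_rep >= t_obs).mean())
```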

References:

Andrew Gelman and Cosma Rohilla Shalizi [2010]:  Philosophy and the practice of Bayesian statistics.  Available from arXiv.  Blog post here.

Gregory D. Wilson [2001]:   Articulation Theory and Disciplinary Change:  Unpacking the Bayesian-Frequentist Paradigm Conflict in Statistical Science.  PhD Thesis,  Rhetoric and Professional Communication Programme, New Mexico State University.  Las Cruces, NM, USA.  July 2001.





Doing a PhD

These are some notes on deciding to do a PhD, notes I wrote some years ago after completing my own PhD.

Choosing a PhD program is one of the hardest decisions we can make. For a start, most of us only make this decision once in our lives, and so we have no prior personal experience to go on.

Second, the success or otherwise of a PhD depends a great deal on factors about which we have little advance knowledge or control, including, for example:

Continue reading ‘Doing a PhD’




Maps and territories and knowledge

Seymour Papert, one of the pioneers of Artificial Intelligence, once wrote (1988, p. 3), “Artificial Intelligence should become the methodology for thinking about ways of knowing.”   I would add “and ways of acting”.

Some time back, I wrote about the painting of spirit-dreamtime maps by Australian aboriginal communities as proof of their relationship to specific places:  only people with traditional rights to the specific place would have the dreamtime knowledge needed to make the painting, an argument whose compelling force has been recognized by Australian courts.  These paintings are a form of map, showing (some of) the spirit relationships of the specific place.  The argument they make is a very interesting one, along the lines of:

What I am saying is true, by virtue of the mere fact that I am saying it, since only someone having the truth would be able to make such an utterance (ie, the painting).

Another example of this type of argument is given by Rory Stewart, in his account of his walk across Afghanistan.   Stewart does not carry a paper map of the country he is walking through, lest he be thought a foreign spy (p. 211).   Instead, he learns and memorizes a list of the villages and their headmen, in the order he plans to walk through them.  Like the aboriginal dreamtime paintings, mere knowledge of this list provides proof of his right to be in the area.  Like the paintings, the list is a type of map of the territory, a different way of knowing.  And also like the paintings, possession of this knowledge leads others, when they learn of the possession, to act differently towards the possessor.  Here’s Stewart on his map (p. 213):

It was less accurate the further you were from the speaker’s home . . .  But I was able to add details from villages along the way, till I could chant the stages from memory.

Day one:  Commandant Maududi in Badgah.  Day two:  Abdul Rauf Ghafuri in Daulatyar.  Day three:  Bushire Khan in Sang-izard.  Day four:  Mir Ali Hussein Beg of Katlish.  Day five: Haji Nasir-i-Yazdani Beg of Qala-eNau.  Day six:  Seyyed Kerbalahi of Siar Chisme . . .

I recited and followed this song-of-the-places-in-between as a map.  I chanted it even after I had left the villages, using the list as credentials.  Almost everyone recognized the names, even from a hundred kilometres away.  Being able to chant it made me half belong:  it reassured hosts who were not sure whether to take me in and it suggested to anyone who thought of attacking me that I was linked to powerful names. (page 213)

Because AI is (or should be) about ways of knowing and doing in the world, it therefore has close links to the social sciences, particularly anthropology, and to the humanities.

References:

Seymour Papert [1988]: One AI or Many? Daedalus, 117 (1) (Winter 1988):  1-14.

Rory Stewart [2004]: The Places in Between. London, UK:  Picador, pp. 211-214.





The websearch-industrial complex

I think it is now well-known that the creation of the Internet was sponsored by the US Government, through its military research funding agency ARPA (later DARPA).  It is perhaps less well-known that Google arose from a $4.5 million research project also sponsored by the US Government, through the National Science Foundation.  Let no one say that the USA has an economic system involving “free” enterprise.

In the primordial ooze of Internet content several hundred million seconds ago (1993), fewer than 100 Web sites inhabited the planet. Early clans of information seekers hunted for data among the far larger populations of text-only Gopher sites and FTP file-sharing servers. This was the world in the years before Google.

Continue reading ‘The websearch-industrial complex’





Computers in conflict


Academic publishers Springer have just released a new book on Argumentation in Artificial Intelligence.  From the blurb:

This volume is a systematic, expansive presentation of the major achievements in the intersection between two fields of inquiry: Argumentation Theory and Artificial Intelligence. Contributions from international researchers who have helped shape this dynamic area offer a progressive development of intuitions, ideas and techniques, from philosophical backgrounds, to abstract argument systems, to computing arguments, to the appearance of applications producing innovative results. Each chapter features extensive examples to ensure that readers develop the right intuitions before they move from one topic to another.

In particular, the book exhibits an overview of key concepts in Argumentation Theory and of formal models of Argumentation in AI. After laying a strong foundation by covering the fundamentals of argumentation and formal argument modeling, the book expands its focus to more specialized topics, such as algorithmic issues, argumentation in multi-agent systems, and strategic aspects of argumentation. Finally, as a coda, the book explores some practical applications of argumentation in AI and applications of AI in argumentation.

References:

Previous posts on argumentation can be found here.

Iyad Rahwan and Guillermo R. Simari (Editors) [2009]:  Argumentation in Artificial Intelligence.  Berlin, Germany:  Springer.





The Gamelatron

A robot Gamelan orchestra, thanks to Aaron Taylor Kuffner, Eric Singer and Lemur.


(Photo:  Gisella Somentino).