RIP: Peter Geach

The death has occurred of British philosopher and logician Peter Geach (1916-2013).
There is a famous story, perhaps apocryphal, of the logician Alfred Tarski, Polish-born but in American exile from WW II, asking his American colleague Emil Post, of the City College of New York, why he, Post, was the only prominent propositional logician who was not Polish. Post replied that he was not American-born, but had come to the USA as a child, having in fact been born in Poland (then part of the Russian Empire). It has seemed at times that Poland cornered the market in logicians, and we find yet another example in Peter Geach: according to his Guardian obituary, his maternal grandparents were Polish.

Long ago, I wrote an essay, in a logic course taught by Paul Thom and Malcolm Rennie, exploring a system of entailment due to Geach.  Then, as now, pure mathematicians mostly disparaged logic, and my university offered no further courses in the discipline that has since become the single most important to artificial intelligence and automated reasoning.   Universities are very good at preparing their graduates for the past; for the future, not so much.

Combining actions

How might two actions be combined?  Well, depending on the actions, we may be able to do one action and then the other, or we may be able to do the other and then the one, or maybe not.  We call such a combination a sequence or concatenation of the two actions.  In some cases, we may be able to do the two actions in parallel, both at the same time.  We may have to start them simultaneously, or we may be able to start one before the other.  Or, we may have to ensure they finish together, or that they jointly meet some other intermediate synchronization targets.
In some cases, we may be able to interleave them: doing part of one action, then part of the second, then part of the first again, and so on, which management consultants in telecommunications call multiplexing.  For many human physical activities, such as learning to play the piano or learning to play golf, interleaving is how parallel activities are first learnt and complex motor skills acquired: first play a few bars of music with only the left hand, then the same bars with only the right, and keep practising each hand on its own; only after the two hands have each been practised individually do we try playing with both hands together.
Computer science, which I view as the science of delegation, knows a great deal about how actions may be combined, how they may be distributed across multiple actors, and what the meanings and consequences of these different combinations are likely to be.     It is useful to have a list of the possibilities.  Let us suppose we have two actions, represented by A and B respectively.   Then we may be able to do the following compound actions:

  • Sequence:  The execution of A followed by the execution of B, denoted A ;  B
  • Iterate: A executed n times, denoted A ^ n  (This is sequential execution of a single action.)
  • Parallelize: Both A and B are executed together, denoted A & B
  • Interleave:  Action A is partly executed, followed by part-execution of B, followed by continued part-execution of A, etc, denoted A || B
  • Choose:  Either A is executed or B is executed but not both, denoted A v B
  • Combinations of the above:  For example, with interleaving, only one action is ever being executed at one time.  But it may be that the part-executions of A and B can overlap, so we have a combination of Parallel and Interleaved compositions of A and B.

Depending on the nature of the domain and the nature of the actions, not all of these compound actions will necessarily be possible.  For instance, if action B has pre-conditions that must hold before it can be executed, then the prior execution of A has to achieve these pre-conditions in order for the sequence A ; B to be feasible.
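To make these combinators concrete, here is a minimal Python sketch (my own illustration, not drawn from any particular process algebra): an action is modelled as a pre-condition test plus a state-transforming effect, and the sequence, iteration, choice and interleaving operators are built on top of that. True simultaneous execution (A & B) is omitted, since it would need threads or processes rather than a few lines of sketch; the names Action, seq, iterate, choose and interleave are invented for this example.

```python
from itertools import chain, zip_longest

class Action:
    """A toy action: a pre-condition test plus a state-transforming effect."""
    def __init__(self, name, pre, effect):
        self.name, self.pre, self.effect = name, pre, effect

    def run(self, state):
        if not self.pre(state):
            raise RuntimeError(f"pre-condition of {self.name} not met")
        return self.effect(state)

def seq(a, b):
    """Sequence A ; B -- run A, then run B on the resulting state."""
    return Action(f"({a.name};{b.name})", a.pre, lambda s: b.run(a.run(s)))

def iterate(a, n):
    """Iterate A ^ n -- sequential execution of the single action A, n >= 1 times."""
    act = a
    for _ in range(n - 1):
        act = seq(act, a)
    return act

def choose(a, b):
    """Choice A v B -- execute exactly one of the two (here, A if its pre-condition holds)."""
    return Action(f"({a.name}v{b.name})",
                  lambda s: a.pre(s) or b.pre(s),
                  lambda s: (a if a.pre(s) else b).run(s))

def interleave(a_steps, b_steps):
    """Interleave A || B -- alternate part-executions drawn from two lists of steps."""
    def effect(state):
        for step in chain.from_iterable(zip_longest(a_steps, b_steps)):
            if step is not None:
                state = step.run(state)
        return state
    return Action("interleaved", lambda s: True, effect)

# Example: B's pre-condition is the flag that A sets, so A ; B is feasible but B ; A is not.
A = Action("A", pre=lambda s: True,          effect=lambda s: {**s, "flag": True})
B = Action("B", pre=lambda s: s.get("flag"), effect=lambda s: {**s, "done": True})
print(seq(A, B).run({}))    # {'flag': True, 'done': True}
# seq(B, A).run({}) would raise, because B's pre-condition is not met at the start.
```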
This stuff may seem very nitty-gritty, but anyone who has ever asked a teenager to do a task they don't wish to do will know all the variations in which a required task can be done after, or alongside, or intermittently with, or be replaced entirely by, some other task the teen would prefer to do.  Machines, it turns out, are much like recalcitrant and literal-minded teenagers when it comes to commanding them to do stuff.

The epistemology of intelligence

I have in the past discussed some of the epistemological challenges facing an intelligence agency – here and here.  I now see that I am not the only person to think about these matters, and that academic philosophers have started to write articles for learned journals on the topic, eg,  Herbert (2006) and Dreisbach (2011).
In essence, Herbert makes a standard argument from the philosophy of knowledge: that knowledge (by someone of some proposition p) comprises three necessary elements: belief by that someone in p, p being true, and a justification by that someone for his or her belief in p.  The first and most obvious criticism of this approach, particularly in intelligence work, is that answering the question, Is p true?, is surely the objective of the analysis, not its starting point.  A person (or an organization) may hold numerous beliefs without being able to say whether the propositions in question are true or not.  Any justification is an attempt to reach a judgement about whether the propositions should be believed, so saying that one can only know something when it is also true points everything in exactly the wrong direction, putting the cart before the horse.  It defines knowledge as something almost impossible to verify, and recalls the conflict between constructivist and non-constructivist mathematicians.  How else can we know something is true except by some adequate process of justification?  Our only knowledge, then, surely comprises justified belief rather than justified true belief.  I think the essential problem here is that all knowledge, except perhaps some conclusions drawn by deduction, is uncertain, and this standard philosophical approach simply ignores that uncertainty.
Dreisbach presents other criticisms (also long-standing) of the justified true belief model of knowledge, but both authors ignore a more fundamental problem with this approach: much of intelligence activity aims to identify the intentions of other actors, be they states (such as the USSR or Iraq) or groups and individuals (such as potential terrorists).  Intentions, as any marketing researcher can tell you, are very slippery things.  Even a person having, or believed by others to have, an intention may not realize they have it, may not understand themselves well enough to realize they have it, or may not be able to express to others that they have it even when they do realize it.  Moreover, intentions about the non-immediate future are particularly slippery: you can ask potential purchasers of some new gizmo all you want before the gizmo is for sale, and still learn nothing accurate about how those very same purchasers will actually react when they are finally able to purchase it.  In short, there is no fact of the matter with intentions, and so it makes no sense to represent them as propositions.  Accordingly, we cannot evaluate whether or not p is true, and the justified true belief model collapses.  It would be better to ask (as good marketing researchers do): does the person in question have a strong tendency to act in a certain way in future, and if so, what factors will likely encourage, inhibit, or preclude such action?
However, a larger problem looms with both these papers, since both are written as if their authors believe the primary purpose of intelligence analysis is to garner knowledge in a vacuum.  Knowledge is an intermediate objective of intelligence activity, but it is surely subordinate to the wider diplomatic, military or political objectives of the government or society the intelligence activity is part of.  The CIA was not collecting information about the USSR, for example, out of a disinterested, ivory-tower concern with the progress of socialism in one country, but because the USA and the USSR were engaged in a global conflict.  Accordingly, there are no neutral actions: every action, every policy, every statement, even every belief of each side may have consequences for the larger strategic interaction the two sides are engaged in.  A rational and effective intelligence agency should not just be asking:
Is p true?
but also:

  • What are the consequences of us believing p to be true?
  • What are the consequences of us believing p not to be true?
  • What are the consequences of the other side believing that we believe p to be true?
  • What are the consequences of the other side believing that we do not believe p to be true?
  • What are the consequences of the other side believing that we are conflicted internally about the truth of p?
  • What are the consequences of the other side initially believing that we believe p to be true and then coming to believe that we do not believe p?
  • What are the consequences of the other side initially believing that we do not believe p to be true and then coming to believe that we do in fact believe p?
  • What are the consequences of the other side being conflicted about whether or not they should believe p?
  • What are the consequences of the other side being conflicted about whether or not we believe p?

and so on.   I give an example of the possible strategic interplay between a protagonist’s beliefs and his or her antagonist’s intentions here.
A decision to believe or not believe p may then become a strategic one, taken after analysis of these various consequences and their implications.   An effective intelligence agency, of course, will need to keep separate accounts for what it really believes and what it wants others to believe it believes.  This can result in all sorts of organizational schizophrenia, hidden agendas, and paranoia (Holzman 2008), with consequent challenges for those writing histories of espionage.  Call these mind-games if you wish, but such analyses helped the British manipulate and eventually control Nazi German remote intelligence efforts in British and other allied territory during World War II (through the famous XX system).
Likewise, many later intelligence efforts by all major participants in the Cold War were attempts, some successful, some not, to manipulate the beliefs of opponents.  The Nosenko case (Bagley 2007) is perhaps the most famous of these, but there were many.  In the context of the XX operation, it is worth mentioning that the USA inserted scores of teams of spies and saboteurs into the Democratic Republic of Vietnam (North Vietnam) during the Second Indochinese War, only to have every single team either captured and executed, or captured and turned; only the use of secret duress codes by some of the landed agents in their communications back enabled the USA to infer that these agents were being played by their DRV captors.
Intelligence activities are about the larger strategic interaction between the relevant stakeholders as much as (or more than) they are about the truth of propositions.  Neither Herbert nor Dreisbach seems to grasp this, which makes their analyses disappointingly impoverished.
References:
Tennent H. Bagley [2007]:  Spy Wars.  New Haven, CT, USA:  Yale University Press.
Christopher Dreisbach [2011]:  The challenges facing an IC epistemologist-in-residence.  International Journal of Intelligence and CounterIntelligence, 24: 757-792.
Matthew Herbert [2006]:  The intelligence analyst as epistemologist.  International Journal of Intelligence and CounterIntelligence, 19:  666-684.
Michael Holzman [2008]:  James Jesus Angleton, the CIA and the Craft of Counterintelligence.  Boston, MA, USA: University of Massachusetts Press.

The Matherati: Index

The psychologist Howard Gardner identified nine distinct types of human intelligence. It is perhaps not surprising that people with great verbal and linguistic dexterity have long had a word to describe themselves, the Literati. Those of us with mathematical and logical reasoning capabilities I have therefore been calling the Matherati, defined here. I have tried to salute members of this group as I recall or encounter them.

This page lists the people I have currently written about or mentioned, in alpha order:
Alexander d’Arblay, John Aris, John Atkinson, John Bennett, Christophe Bertrand, Matthew Piers Watt Boulton, Joan Burchardt, David Caminer, Boris N. Delone, the Delone family, Nicolas Fatio de Duillier, Michael Dummett, Sean Eberhard, Edward Frenkel, Martin Gardner, Kurt Godel, Charles Hamblin, Thomas Harriott, Martin Harvey, Fritz John, Ernest Kaye, Robert May, Robin Milner, Isaac Newton, Henri Poincare, Mervyn Pragnell, Malcolm Rennie, Dennis Ritchie, Ibn Sina, Adam Spencer, Bella Subbotovskaya, Bill Thurston, Alan Turing, Alexander Yessenin-Volpin.

And lists:
20th-Century Mathematicians.

A salute to Charles Hamblin

This short biography of Australian philosopher and computer scientist Charles L. Hamblin was initially commissioned by the Australian Computer Museum Society.

Charles Leonard Hamblin (1922-1985) was an Australian philosopher and one of Australia’s first computer scientists. His main early contributions to computing, which date from the mid-1950s, were the development and application of reverse Polish notation and the zero-address store. He was also the developer of one of the first computer languages, GEORGE. Since his death, his ideas have become influential in the design of computer interaction protocols, and are expected to shape the next generation of e-commerce and machine-communication systems.
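To give a flavour of the first of those contributions: in reverse Polish notation the operands precede the operator, so an expression can be evaluated with a simple stack, with no parentheses or precedence rules. The short Python sketch below is a generic illustration of that principle only; it is not a reconstruction of Hamblin's GEORGE language or of any particular machine implementation.

```python
def eval_rpn(tokens):
    """Evaluate a reverse Polish expression, e.g. ['3', '4', '+', '2', '*'].

    Operands are pushed onto a stack; each operator pops its two operands,
    applies itself, and pushes the result back."""
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b, a = stack.pop(), stack.pop()   # note the order: a was pushed first
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_rpn("3 4 + 2 *".split()))   # (3 + 4) * 2 = 14.0
```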
Continue reading ‘A salute to Charles Hamblin’

The Matherati

Howard Gardner’s theory of multiple intelligences includes an intelligence he called Logical-Mathematical Intelligence, the ability to reason about numbers, shapes and structure, to think logically and abstractly.   In truth, there are several different capabilities in this broad category of intelligence – being good at pure mathematics does not necessarily make you good at abstraction, and vice versa, and so the set of great mathematicians and the set of great computer programmers, for example, are not identical.
But there is definitely a cast of mind we might call mathmind.   As well as the usual suspects, such as Euclid, Newton and Einstein, there are many others with this cast of mind.  For example, Thomas Harriott (c. 1560-1621), inventor of the less-than symbol and the first person to draw the Moon as seen through a telescope, was one.   Newton’s friend, Nicolas Fatio de Duillier (1664-1753), was another.   In the talented 18th-century family of Charles Burney, whose relatives and children included musicians, dancers, artists, and writers (and an admiral), Charles’ grandson Alexander d’Arblay (1794-1837), the son of the writer Fanny Burney, was 10th wrangler in the Mathematics Tripos at Cambridge in 1818, and played chess to a high standard.  He was friends with Charles Babbage, also a student at Cambridge at the time, and a member of the Analytical Society which Babbage had co-founded; this society was an attempt to modernize the teaching of pure mathematics in Britain by importing the rigor and notation of continental analysis, which d’Arblay had already encountered as a school student in France.
And there are people with mathmind right up to the present day.   The Guardian a year ago carried an obituary, written by a family member, of Joan Burchardt, who was described as follows:

My aunt, Joan Burchardt, who has died aged 91, had a full and interesting life as an aircraft engineer, a teacher of physics and maths, an amateur astronomer, goat farmer and volunteer for Oxfam. If you had heard her talking over the gate of her smallholding near Sherborne, Dorset, you might have thought she was a figure from the past. In fact, if she represented anything, it was the modern, independent-minded energy and intelligence of England. In her 80s she mastered the latest computer software coding.

Since language and text have dominated modern Western culture these last few centuries, our culture’s histories are mostly written in words.   These histories favor the literate, who naturally tend to write about each other.    Clive James’ book of a lifetime’s reading and thinking, Cultural Amnesia (2007), for instance, lists just one musician and one film-maker among its 126 profiles, and includes not a single mathematician or scientist.     It is testimony to text’s continuing dominance in our culture, despite our society’s deep-seated, long-standing reliance on sophisticated technology and engineering, that we do not celebrate the Matherati more.
On this page you will find an index to Vukutu posts about the Matherati.
FOOTNOTE: The image above shows the equivalence classes of directed homotopy (or, dihomotopy) paths in 2-dimensional spaces with two holes (shown as the black and white boxes). The two diagrams model situations where there are two alternative courses of action (eg, two possible directions) represented respectively by the horizontal and vertical axes.  The paths on each diagram correspond to different choices of interleaving of these two types of actions.  The word directed is used because actions happen in sequence, represented by movement from the lower left of each diagram to the upper right.  The word homotopy refers to paths which can be smoothly deformed into one another without crossing one of the holes.  The upper diagram shows there are just two classes of dihomotopically-equivalent paths from lower-left to upper-right, while the lower diagram (where the holes are positioned differently) has three such dihomotopic equivalence classes.  Of course, depending on the precise definitions of action combinations, the upper diagram may in fact reveal four equivalence classes, if paths that first skirt above the black hole and then beneath the white one (or vice versa) are permitted.  Applications of these ideas occur in concurrency theory in computer science and in theoretical physics.
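For readers who prefer code to prose, here is a rough Python sketch of the idea described in the footnote: it enumerates the monotone ("directed") lattice paths from the lower-left to the upper-right corner of a small grid, discards those that run into either of two rectangular holes, and groups the survivors according to which side of each hole they pass. The grid size and hole positions below are invented for illustration; with these particular positions the sketch reports three classes, echoing the lower diagram described above.

```python
from itertools import combinations

N = 8                                                   # paths run from (0, 0) to (N, N)
HOLE_A = {(x, y) for x in (2, 3) for y in (5, 6)}       # hole positions invented for illustration
HOLE_B = {(x, y) for x in (5, 6) for y in (2, 3)}

def paths():
    """Yield every monotone path as the list of lattice points it visits."""
    for right_steps in combinations(range(2 * N), N):   # choose which steps go right
        x = y = 0
        pts = [(0, 0)]
        for step in range(2 * N):
            if step in right_steps:
                x += 1
            else:
                y += 1
            pts.append((x, y))
        yield pts

def side(pts, hole):
    """'above' or 'below': which way a hole-avoiding path passes the hole."""
    left = min(x for x, _ in hole)                      # check at the hole's leftmost column
    ys = [y for x, y in pts if x == left]
    return 'above' if min(ys) > max(y for _, y in hole) else 'below'

classes = {}
for pts in paths():
    if set(pts) & (HOLE_A | HOLE_B):                    # path runs into a hole: discard it
        continue
    key = (side(pts, HOLE_A), side(pts, HOLE_B))
    classes[key] = classes.get(key, 0) + 1

print(classes)    # three equivalence classes for these hole positions
```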

AI's first millennium: prepare to celebrate

A search algorithm is a computational procedure (an algorithm) for finding a particular object or objects in a larger collection of objects.    Typically, these algorithms search for objects with desired properties whose identities are otherwise not yet known.   Search algorithms (and search generally) have been an integral part of artificial intelligence and computer science this last half-century, since the first working AI program, designed to play checkers, was written in 1951-2 by Christopher Strachey.    At each round, that program evaluated the alternative board positions that resulted from potential next moves, thereby searching for the “best” next move for that round.
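In schematic form, that evaluate-the-alternatives loop is just a one-move (one-ply) search: generate the legal moves, score the position each one leads to, and keep the best. The few lines of Python below are a generic illustration of the idea with an invented toy "game"; they are not a reconstruction of Strachey's checkers program, whose board evaluation was far more elaborate.

```python
def best_next_move(position, legal_moves, apply_move, evaluate):
    """One-ply search: try each legal move and keep the one whose
    resulting position scores highest under the evaluation function."""
    return max(legal_moves(position),
               key=lambda m: evaluate(apply_move(position, m)))

# Toy usage: positions are integers, moves add or subtract, and closer to 13 is better.
print(best_next_move(10,
                     legal_moves=lambda p: [-3, -1, +2, +5],
                     apply_move=lambda p, m: p + m,
                     evaluate=lambda p: -abs(p - 13)))   # prints 2
```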
The first search algorithm in modern times apparently dates from 1895: a depth-first search algorithm to solve a maze, due to the amateur French mathematician Gaston Tarry (1843-1913).  Now, in a recent paper, the logician Wilfrid Hodges has pushed the date of the first search algorithm back much further: to the third decade of the second millennium, the 1020s.  Hodges translates and analyzes a logic text by the Persian Islamic philosopher and mathematician Ibn Sina (aka Avicenna, c. 980-1037) on methods for finding a proof of a syllogistic claim when some premises of the syllogism are missing.   Representation of domain knowledge using formal logic, and automated reasoning over these logical representations (ie, logic programming), has become a key way in which intelligence is inserted into modern machines;  searching for proofs of claims (“potential theorems”) is how such intelligent machines determine what they know or can deduce.  It is nice to think that automated theorem-proving is almost 990 years old.
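Tarry stated his procedure as rules for a traveller walking a physical labyrinth; in modern terms, a depth-first search over a grid maze might be sketched as below. This is a generic, assumed formulation, not a transcription of Tarry's 1895 rules or of Ibn Sina's proof-search method.

```python
def depth_first_path(maze, start, goal):
    """Depth-first search on a grid maze ('#' = wall); returns one path or None."""
    rows, cols = len(maze), len(maze[0])
    stack, seen = [(start, [start])], {start}
    while stack:
        (r, c), path = stack.pop()            # explore the most recently found cell first
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                stack.append(((nr, nc), path + [(nr, nc)]))
    return None                               # no route exists

maze = ["..#.",
        ".##.",
        "....",
        ".#.."]
print(depth_first_path(maze, (0, 0), (3, 3)))
```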
References:
B. Jack Copeland [2000]:  What is Artificial Intelligence?
Wilfrid Hodges [2010]: Ibn Sina on analysis: 1. Proof search. Or: abstract state machines as a tool for history of logic.  pp. 354-404, in: A. Blass, N. Dershowitz and W. Reisig (Editors):  Fields of Logic and Computation. Lecture Notes in Computer Science, volume 6300.  Berlin, Germany:  Springer.   A version of the paper is available from Hodges’ website, here.
Gaston Tarry [1895]: Le problème des labyrinthes. Nouvelles Annales de Mathématiques, 14: 187-190.

As we once thought


The Internet, the World Wide Web and hypertext were all forecast by Vannevar Bush in a July 1945 article for The Atlantic entitled As We May Think.  Perhaps this is not completely surprising, since Bush had a strong influence on WW II and post-war military-industrial technology policy as Director of the US Office of Scientific Research and Development.  Because of his influence, his forecasts may to some extent have been self-fulfilling.
However, his article also predicted automated machine reasoning using both logic programming (the computational use of formal logic) and computational argumentation (the formal representation and manipulation of arguments).  These are both now important domains of AI and computer science, which developed first in Europe and which are still much stronger there than in the USA.   An excerpt:

The scientist, however, is not the only person who manipulates data and examines the world about him by the use of logical processes, although he sometimes preserves this appearance by adopting into the fold anyone who becomes logical, much in the manner in which a British labor leader is elevated to knighthood. Whenever logical processes of thought are employed—that is, whenever thought for a time runs along an accepted groove—there is an opportunity for the machine. Formal logic used to be a keen instrument in the hands of the teacher in his trying of students’ souls. It is readily possible to construct a machine which will manipulate premises in accordance with formal logic, simply by the clever use of relay circuits. Put a set of premises into such a device and turn the crank, and it will readily pass out conclusion after conclusion, all in accordance with logical law, and with no more slips than would be expected of a keyboard adding machine.
Logic can become enormously difficult, and it would undoubtedly be well to produce more assurance in its use. The machines for higher analysis have usually been equation solvers. Ideas are beginning to appear for equation transformers, which will rearrange the relationship expressed by an equation in accordance with strict and rather advanced logic. Progress is inhibited by the exceedingly crude way in which mathematicians express their relationships. They employ a symbolism which grew like Topsy and has little consistency; a strange fact in that most logical field.
A new symbolism, probably positional, must apparently precede the reduction of mathematical transformations to machine processes. Then, on beyond the strict logic of the mathematician, lies the application of logic in everyday affairs. We may some day click off arguments on a machine with the same assurance that we now enter sales on a cash register. But the machine of logic will not look like a cash register, even of the streamlined model.

Edinburgh sociologist Donald MacKenzie wrote a nice history and sociology of logic programming and the use of logic in computer science, Mechanizing Proof: Computing, Risk, and Trust.  The only flaw of this fascinating book is an apparent misunderstanding throughout that theorem-proving by machines refers only to the proving (or not) of theorems in mathematics.    Rather, theorem-proving in AI refers to proving claims in any domain of knowledge represented by a formal, logical language.    Medical expert systems, for example, may use theorem-proving techniques to infer the presence of a particular disease in a patient; the claims being proved (or not) are theorems of the formal language representing the domain, not necessarily mathematical theorems.
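A toy illustration of that point: the same forward-chaining inference loop "proves" claims whether the rules encode mathematics or medicine. The rules and facts below are invented purely for illustration and are not taken from any real expert system.

```python
def forward_chain(facts, rules):
    """Derive everything that follows from the facts under rules of the
    form (premises, conclusion), i.e. simple Horn clauses."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Invented toy 'medical' knowledge base, for illustration only.
rules = [
    (("fever", "cough"), "suspect_flu"),
    (("suspect_flu", "test_positive"), "diagnose_flu"),
]
facts = {"fever", "cough", "test_positive"}
print("diagnose_flu" in forward_chain(facts, rules))   # True: the claim is derivable
```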
References:
Donald MacKenzie [2001]:  Mechanizing Proof: Computing, Risk, and Trust.  Cambridge, MA, USA:  MIT Press.
Vannevar Bush [1945]:  As We May Think.  The Atlantic, July 1945.

Vale: Stephen Toulmin

The Anglo-American philosopher Stephen Toulmin has just died, aged 87.   One of the areas to which he made major contributions was argumentation, the theory of argument, and his work found and finds application not only in philosophy but also in computer science.
For instance, under the direction of John Fox, the Advanced Computation Laboratory at Europe’s largest medical research charity, Cancer Research UK (formerly the Imperial Cancer Research Fund), applied Toulmin’s model of argument in computer systems built and deployed in the 1990s to handle conflicting arguments in a given domain.  An example was a system for advising medical practitioners on the arguments for and against prescribing a particular drug to a patient with a particular medical history and disease presentation.  One company commercializing these ideas in medicine is Infermed.    Other applications include the automated prediction of chemical properties such as toxicity (see, for example, the work of Lhasa Ltd), and dynamic optimization of extraction processes in mining.
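The sketch below gives the flavour of such a system in miniature: arguments for and against a claim are collected and weighed, and a recommendation reports which side prevails. It is an invented Python illustration with made-up argument weights; it is not the actual machinery built by Fox's group or by Infermed.

```python
from dataclasses import dataclass

@dataclass
class Argument:
    claim: str        # e.g. "prescribe drug X"
    direction: str    # "for" or "against"
    ground: str       # the evidence or reason offered
    weight: float     # how strongly the ground supports or undermines the claim

def recommend(claim, arguments):
    """Aggregate the arguments bearing on a claim and report which side prevails."""
    relevant = [a for a in arguments if a.claim == claim]
    score = sum(a.weight if a.direction == "for" else -a.weight for a in relevant)
    verdict = "for" if score > 0 else "against" if score < 0 else "undecided"
    return verdict, relevant

# Invented example arguments about prescribing a drug.
args = [
    Argument("prescribe drug X", "for", "effective against the presenting disease", 0.8),
    Argument("prescribe drug X", "against", "patient history of renal impairment", 0.6),
    Argument("prescribe drug X", "against", "interaction with current medication", 0.5),
]
verdict, support = recommend("prescribe drug X", args)
print(verdict)                                     # 'against': the counter-arguments outweigh
for a in support:
    print(f"  [{a.direction}] {a.ground} (weight {a.weight})")
```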
For me, Toulmin’s most influential work was his book Cosmopolis, which identified and deconstructed the main biases evident in contemporary western culture since the work of Descartes:

  • A bias for the written over the oral
  • A bias for the universal over the local
  • A bias for the general over the particular
  • A bias for the timeless over the timely.

Formal logic as a theory of human reasoning can be seen as an example of these biases at work. In contrast, argumentation theory attempts to reclaim the theory of reasoning from formal logic, with an approach able to deal with conflicts and gaps, and with special cases, and less subject to such biases.    Norm’s dispute with Larry Teabag is a recent example of resistance to the puritanical, Descartian desire to impose abstract formalisms onto practical reasoning, quite contrary to local and particular sense.
Another instance of Descartian autism is the widespread deletion of economic history from graduate programs in economics and the associated privileging of deductive reasoning in abstract mathematical models over other forms of argument (eg, narrative accounts, laboratory and field experiments, field samples and surveys, computer simulation, etc) in economic theory.  One consequence of this autism is the Great Moral Failure of Macroeconomics in the Great World Recession of 2008-onwards.
References:
S. E. Toulmin [1958]:  The Uses of Argument.  Cambridge, UK: Cambridge University Press.
S. E. Toulmin [1990]: Cosmopolis:  The Hidden Agenda of Modernity.  Chicago, IL, USA: University of Chicago Press.

Theatre Lakatos

Last night, I caught a new Australian play derived from the life of the logician Kurt Godel, called Incompleteness.  The play is by playwright Steven Schiller and actor Steven Phillips, and was performed at Melbourne’s famous experimental theatre space, La Mama, in Carlton. Both script and performance were superb:  congratulations to both playwright and actor, and to all involved in the production.
Godel was famous for having kept every piece of paper he’d ever encountered, and the set design (pictured here) included many file storage boxes.  Some of these were arranged in a checkerboard pattern on the floor, with gaps between them.  As the Godel character (Phillips) tried to prove something, he took successive steps along diagonal and zigzag paths through this pattern, sometimes retracing his steps when potential chains of reasoning did not succeed.   This was the best artistic representation I have seen of the process of attempting to do mathematical proof:  Imre Lakatos’ philosophy of mathematics made theatrical flesh.
 

There is a photograph of the La Mama billboard at Paola’s site.