The epistemology of intelligence

I have in the past discussed some of the epistemological challenges facing an intelligence agency – here and here.  I now see that I am not the only person to think about these matters, and that academic philosophers have started to write articles for learned journals on the topic, eg,  Herbert (2006) and Dreisbach (2011).
In essence, Herbert makes a standard argument from the philosophy of knowledge: that knowledge (by someone of some proposition p) comprises three necessary elements: belief by that someone in p, p being true, and a justification by that someone for his or her belief in p. The first and most obvious criticism of this approach, particularly in intelligence work, is that answering the question, Is p true?, is surely the objective of any analysis, not its starting point. A person (or an organization) may hold numerous beliefs without being able to say whether the propositions in question are true. Any justification is an attempt to generate a judgement about whether the propositions should be believed, so saying that one can only know something when it is also true has everything pointing in exactly the wrong direction, putting the cart before the horse. It defines knowledge to be something almost impossible to verify, in a manner akin to the conflict between constructivist and non-constructivist mathematicians. How else can we know something is true except by some adequate process of justification? Our only knowledge, then, surely comprises justified belief, rather than justified true belief. I think the essential problem here is that all knowledge, except perhaps some conclusions drawn by deduction, is uncertain, and this standard philosophical approach simply ignores uncertainty.
Dreisbach presents other criticisms (also long-standing) of the justified true belief model of knowledge, but both authors ignore a more fundamental problem with this approach. That is that much of intelligence activity aims to identify the intentions of other actors, be they states (such as the USSR or Iraq), or groups and individuals (such as potential terrorists). Intentions, as any marketing researcher can tell you, are very slippery things: even a person having, or believed by others to have, an intention may not realize they have it, may not understand themselves well enough to realize they have it, or may not be able to express to others that they have it, even when they do realize it. Moreover, intentions about the non-immediate future are particularly slippery: you can ask potential purchasers of some new gizmo all you want before the gizmo is for sale, and still learn nothing accurate about how those very same purchasers will actually react when they are finally able to purchase it. In short, there is no fact of the matter with intentions, and thus it makes no sense to represent them as propositions. Accordingly, we cannot evaluate whether or not p is true, and the justified true belief model collapses. It would be better to ask (as good marketing researchers do): Does the person in question have a strong tendency to act in future in a certain way, and if so, what factors will likely encourage, inhibit or preclude their acting that way?
However, a larger problem looms with both these papers, since both are written as if the respective author believes the primary purpose of intelligence analysis is to garner knowledge in a vacuum. Knowledge is an intermediate objective of intelligence activity, but it is surely subordinate to the wider diplomatic, military or political objectives of the government or society the intelligence activity is part of. The CIA was not collecting information about the USSR, for example, out of a disinterested, ivory-tower-ish concern with the progress of socialism in one country, but because the USA and the USSR were engaged in a global conflict. Accordingly, there are no neutral actions – every action, every policy, every statement, even every belief of each side may have consequences for the larger strategic interaction in which the two sides are engaged. A rational and effective intelligence agency should not just be asking:
Is p true?
but also:

  • What are the consequences of us believing p to be true?
  • What are the consequences of us believing p not to be true?
  • What are the consequences of the other side believing that we believe p to be true?
  • What are the consequences of the other side believing that we do not believe p to be true?
  • What are the consequences of the other side believing that we are conflicted internally about the truth of p?
  • What are the consequences of the other side initially believing that we believe p to be true and then coming to believe that we do not believe p?
  • What are the consequences of the other side initially believing that we do not believe p to be true and then coming to believe that we do in fact believe p?
  • What are the consequences of the other side being conflicted about whether or not they should believe p?
  • What are the consequences of the other side being conflicted about whether or not we believe p?

and so on.   I give an example of the possible strategic interplay between a protagonist’s beliefs and his or her antagonist’s intentions here.
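As a toy illustration (my own sketch, with entirely made-up stances and payoff numbers, not drawn from either paper), such an analysis amounts to tabulating the consequences of each combination of what we actually believe and what we lead the other side to think we believe, and then choosing strategically:

```python
# Hypothetical consequence values (to "us") for each pair of
# (actual belief about p, belief we signal to the other side).
# The numbers are invented purely for illustration.
consequences = {
    ("believe p", "signal belief"): 2,
    ("believe p", "signal disbelief"): 5,   # eg, conceal that we know their plan
    ("disbelieve p", "signal belief"): 1,   # eg, feign having been deceived
    ("disbelieve p", "signal disbelief"): 3,
}

# The strategically best stance is the one with the highest consequence value.
best = max(consequences, key=consequences.get)
print(best)  # -> ('believe p', 'signal disbelief')
```

The point of the sketch is only that, once consequences like those listed above are assessed, the choice of what to believe and what to appear to believe becomes an optimization over stances, not a pure search for truth.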
A decision to believe or not believe p may then become a strategic one, taken after analysis of these various consequences and their implications.   An effective intelligence agency, of course, will need to keep separate accounts for what it really believes and what it wants others to believe it believes.  This can result in all sorts of organizational schizophrenia, hidden agendas, and paranoia (Holzman 2008), with consequent challenges for those writing histories of espionage.  Call these mind-games if you wish, but such analyses helped the British manipulate and eventually control Nazi German remote intelligence efforts in British and other allied territory during World War II (through the famous XX system).
Likewise, many later intelligence efforts from all major participants in the Cold War were attempts –some successful, some not – to manipulate the beliefs of opponents.   The Nosenko case (Bagley 2007) is perhaps the most famous of these, but there were many.   In the context of the XX action, it is worth mentioning that the USA landed scores of teams of spies and saboteurs into the Democratic Republic of Vietnam (North Vietnam) during the Second Indochinese War, only to have every single team either be captured and executed, or captured and turned; only the use of secret duress codes by some landed agents communicating back enabled the USA to infer that these agents were being played by their DRV captors.
Intelligence activities are about the larger strategic interaction between the relevant stakeholders as much (or more) than they are about the truth of propositions.  Neither Herbert nor Dreisbach seems to grasp this, which makes their analysis disappointingly impoverished.
References:
Tennent H. Bagley [2007]:  Spy Wars.  New Haven, CT, USA:  Yale University Press.
Christopher Dreisbach [2011]:  The challenges facing an IC epistemologist-in-residence.  International Journal of Intelligence and CounterIntelligence, 24: 757-792.
Matthew Herbert [2006]:  The intelligence analyst as epistemologist.  International Journal of Intelligence and CounterIntelligence, 19:  666-684.
Michael Holzman [2008]:  James Jesus Angleton, the CIA and the Craft of Counterintelligence.  Boston, MA, USA: University of Massachusetts Press.

Oral culture

For about the last 300 years, and especially since the introduction of universal public education in the late 19th century, western culture has been dominated by text and writing. Elizabethan culture, by contrast, was primarily oral: Shakespeare, for example, wrote his plays to be performed, not read, and did not even bother to arrange definitive versions for printing.

One instance of the culture-wide turn from speech to text was a switch from spoken to written mathematics tests in the west which occurred at Cambridge in the late 18th century, as I discuss here.  There is nothing intrinsically better about written examinations over spoken ones, especially when standardized and not tailored for each particular student.  This is true even for mathematics, as is shown by the fact that oral exams are still the norm in university mathematics courses in the Russian-speaking world; Russia continues to produce outstanding mathematicians.

Adventurer and writer Rory Stewart, now an MP, has an interesting post about the oral culture of the British Houses of Parliament, perhaps the last stronghold of argument-through-speech in public culture. The only other place in modern life where speech still reigns supreme, though not quite as public, is the courtroom.

What use are models?

What are models for?   Most developers and users of models, in my experience, seem to assume the answer to this question is obvious, and thus never raise it.   In fact, modeling has many potential purposes, and some of these conflict with one another.   Some of the criticisms made of particular models arise from misunderstandings or misperceptions of the purposes of those models and of the modeling activities which led to them.
Liking cladistics as I do, I thought it useful to list all the potential purposes of models and modeling.   The only discussion of this topic that I know of is a brief one by game theorist Ariel Rubinstein in an appendix to a book on modeling rational behaviour (Rubinstein 1998).  Rubinstein considers several alternative purposes for economic modeling, but ignores many others.   My list is as follows (to be expanded and annotated in due course):

  • 1. To better understand some real phenomena or existing system.   This is perhaps the most commonly perceived purpose of modeling, in the sciences and the social sciences.
  • 2. To predict (some properties of) some real phenomena or existing system.  A model aiming to predict some domain may be successful without aiding our understanding of the domain at all.  Isaac Newton’s model of the motion of planets, for example, was predictive but not explanatory.   I understand that physicist David Deutsch argues that predictive ability is not an end of scientific modeling but a means, since it is how we assess and compare alternative models of the same phenomena.    This is wrong on both counts:  prediction IS an end of much modeling activity (especially in business strategy and public policy domains), and it is not the only means we use to assess models.  Indeed, for many modeling activities, calibration and prediction are problematic, and so predictive capability may not even be possible as a form of model assessment.
  • 3. To manage or control (some properties of) some real phenomena or existing system.
  • 4. To better understand a model of some real phenomena or existing system.  Arguably, most economic theorizing and modeling falls into this category, and Rubinstein’s preferred purpose is of this type.   Macro-economic models, if they are calibrated at all, are calibrated against artificial, human-defined variables such as employment, GDP and inflation, variables which may themselves bear a tenuous and dynamic relationship to any underlying economic reality.   Micro-economic models, if they are calibrated at all, are often calibrated with stylized facts, abstractions and simplifications of reality which economists have come to regard as representative of the domain in question.    In other words, economic models are not usually calibrated against reality directly, but against other models of reality.  Similarly, large parts of contemporary mathematical physics (such as string theory and brane theory) have no access to any physical phenomena other than via the mathematical model itself:  our only means of apprehension of vibrating strings in inaccessible dimensions beyond the four we live in, for instance, is through the mathematics of string theory.    In this light, it seems nonsense to talk about the effectiveness, reasonable or otherwise, of mathematics in modeling reality, since how could we tell?
  • 5. To predict (some properties of) a model of some real phenomena or existing system.
  • 6. To better understand, predict or manage some intended (not-yet-existing) artificial system, so to guide its design and development.   Understanding a system that does  not yet exist is qualitatively different to understanding an existing domain or system, because the possibility of calibration is often absent and because the model may act to define the limits and possibilities of subsequent design actions on the artificial system.  The use of speech act theory (a model of natural human language) for the design of artificial machine-to-machine languages, or the use of economic game theory (a mathematical model of a stylized conceptual model of particular micro-economic realities) for the design of online auction sites are examples here.   The modeling activity can even be performative, helping to create the reality it may purport to describe, as in the case of the Black-Scholes model of options pricing.
  • 7. To provide a locus for discussion between relevant stakeholders in some business or public policy domain.  Most large-scale business planning models have this purpose within companies, particularly when multiple partners are involved.  Likewise, models of major public policy issues, such as epidemics, have this function.  In many complex domains, such as those in public health, models provide a means to tame and domesticate the complexity of the domain.  This helps stakeholders to jointly consider concepts, data, dynamics, policy options, and assessment of potential consequences of policy options,  all of which may need to be socially constructed. 
  • 8. To provide a means for identification, articulation and potentially resolution of trade-offs and their consequences in some business or public policy domain.   This is the case, for example, with models of public health risk assessment of chemicals or new products by environmental protection agencies, and models of epidemics deployed by government health authorities.
  • 9. To enable rigorous and justified thinking about the assumptions and their relationships to one another in modeling some domain.   Business planning models usually serve this purpose.   They may be used to inform actions, both to eliminate or mitigate negative consequences and to enhance positive consequences, as in retroflexive decision making.
  • 10. To enable a means of assessment of managerial competencies of the people undertaking the modeling activity. Investors in start-ups know that the business plans of the company founders are likely to be out of date very quickly.  The function of such business plans is not to model reality accurately, but to force rigorous thinking about the domain, and to provide a means by which potential investors can challenge the assumptions and thinking of management as way of probing the managerial competence of those managers.    Business planning can thus be seen to be a form of epideictic argument, where arguments are assessed on their form rather than their content, as I have argued here.
  • 11. As a means of play, to enable the exercise of human intelligence, ingenuity and creativity, in developing and exploring the properties of models themselves.  This purpose is true of that human activity known as doing pure mathematics, and perhaps of most of that academic activity known as doing mathematical economics.   As I have argued before, mathematical economics is closer to theology than to the modeling undertaken in the natural sciences. I see nothing wrong with this being a purpose of modeling, although it would be nice if academic economists were honest enough to admit that their use of public funds was primarily in pursuit of private pleasures, and any wider social benefits from their modeling activities were incidental.

POSTSCRIPT (Added 2011-06-17):  I have just seen Joshua Epstein’s 2008 discussion of the purposes of modeling in science and social science.   Epstein lists 17 reasons to build explicit models (in his words, although I have added the label “0” to his first reason):

0. Prediction
1. Explain (very different from predict)
2. Guide data collection
3. Illuminate core dynamics
4. Suggest dynamical analogies
5. Discover new questions
6. Promote a scientific habit of mind
7. Bound (bracket) outcomes to plausible ranges
8. Illuminate core uncertainties
9. Offer crisis options in near-real time. [Presumably, Epstein means “crisis-response options” here.]
10. Demonstrate tradeoffs / suggest efficiencies
11. Challenge the robustness of prevailing theory through perturbations
12. Expose prevailing wisdom as incompatible with available data
13. Train practitioners
14. Discipline the policy dialog
15. Educate the general public
16. Reveal the apparently simple (complex) to be complex (simple).

These are at a lower level than my list, and I believe some of his items are the consequences of purposes rather than purposes themselves, at least for honest modelers (eg, #11, #12, #16).
References:
Joshua M Epstein [2008]: Why model? Keynote address to the Second World Congress on Social Simulation, George Mason University, USA.  Available here (PDF).
Robert E Marks [2007]:  Validating simulation models: a general framework and four applied examples. Computational Economics, 30 (3): 265-290.
David F Midgley, Robert E Marks and D Kunchamwar [2007]:  The building and assurance of agent-based models: an example and challenge to the field. Journal of Business Research, 60 (8): 884-893.
Robert Rosen [1985]: Anticipatory Systems. Pergamon Press.
Ariel Rubinstein [1998]: Modeling Bounded Rationality. Cambridge, MA, USA: MIT Press.  Zeuthen Lecture Book Series.
Ariel Rubinstein [2006]: Dilemmas of an economic theorist. Econometrica, 74 (4): 865-883.

On Getting Things Done

New York Times Op-Ed writer David Brooks has two superb articles about the skills needed to be a success in contemporary technological society, the skills I refer to as Getting-Things-Done Intelligence.  One is a short article in The New York Times (2011-01-17), reacting to the common, but wrong-headed, view that technical skill is all you need for success, and the other is a long, fictional disquisition in The New Yorker (2011-01-17) on the social skills of successful people.  From the NYT article:

Practicing a piece of music for four hours requires focused attention, but it is nowhere near as cognitively demanding as a sleepover with 14-year-old girls. Managing status rivalries, negotiating group dynamics, understanding social norms, navigating the distinction between self and group — these and other social tests impose cognitive demands that blow away any intense tutoring session or a class at Yale.
Yet mastering these arduous skills is at the very essence of achievement. Most people work in groups. We do this because groups are much more efficient at solving problems than individuals (swimmers are often motivated to have their best times as part of relay teams, not in individual events). Moreover, the performance of a group does not correlate well with the average I.Q. of the group or even with the I.Q.’s of the smartest members.
Researchers at the Massachusetts Institute of Technology and Carnegie Mellon have found that groups have a high collective intelligence when members of a group are good at reading each others’ emotions — when they take turns speaking, when the inputs from each member are managed fluidly, when they detect each others’ inclinations and strengths.
Participating in a well-functioning group is really hard. It requires the ability to trust people outside your kinship circle, read intonations and moods, understand how the psychological pieces each person brings to the room can and cannot fit together.
This skill set is not taught formally, but it is imparted through arduous experiences. These are exactly the kinds of difficult experiences Chua shelters her children from by making them rush home to hit the homework table.

These articles led me to ask exactly what is involved in reading a social situation?  Brooks mentions some of the relevant aspects, but not all.   To be effective, a manager needs to parse the social situation of the groups he or she must work with – those under, those over and peer groups to the side – to answer questions such as the following:

  • Who has power or influence over each group?  Is this exercised formally or informally?
  • What are the norms and practices of the group, both explicit and implicit, known and unconscious?
  • Who in the group is reliable as a witness?   Whose stories can be believed?
  • Who has agendas and what are these?
  • Who in the group is competent or capable or intelligent?  Whose promises to act can be relied upon?  Who, in contrast, needs to be monitored or managed closely?
  • What constraints does the group or its members operate under?  Can these be removed or side-stepped?
  • What motivates the members of the group?  Can or should these motivations be changed, or enhanced?
  • Who is open to new ideas, to change, to improvements?
  • What obstacles and objections will arise in response to proposals for change?  Who will raise these?  Will these objections be explicit or hidden?
  • Who will resist or oppose change?  In what ways? Who will exercise pocket vetoes?

Parsing new social situations – ie, answering these questions in a specific situation – is not something done in a few moments.  It may take years of observation and participation to understand a new group in which one is an outsider.  People who are good at this may be able to parse the key features of a new social landscape within a few weeks or months, depending on the level of access they have, and the willingness of the group members to trust them.     Good management consultants, provided their sponsors are sufficiently senior, can often achieve an understanding within a few weeks.   Experience helps.
Needless to say, most academic research is pretty useless for these types of questions.  Management theory has either embarked on the reduce-and-quantify-and-replicate model of academic psychology, or else produced the narrative descriptions of successful organizations found in most books by business gurus.   Narrative descriptions of failures would be far more useful.
The best training for being able to answer such questions – apart from experience of life – is the study of anthropology or literature:  Anthropology because it explores the social structures of other cultures and the factors within a single lifetime which influence these structures, and Literature because it explores the motivations and consequences of human actions and interactions.   The golden age of television drama we are currently fortunate to be witness to also provides good training for viewers in human motivations, actions and interactions.  It is no coincidence, in my view, that the British Empire was created and run by people mostly trained in Classics, with its twofold combination of the study of alien cultures and literatures, together with the analytical rigor and intellectual discipline acquired through the incremental learning of those difficult subjects, Latin and Ancient Greek languages.
UPDATE (2011-02-16): From Noam Scheiber’s profile of US Treasury Secretary Timothy Geithner in The New Republic (2011-02-10):

“Tim’s real strength … is that he’s really quick at reading the culture of any institutions,” says Leslie Lipschitz, a former Geithner deputy.

The profile also makes evident Geithner’s agonistic planning approach to policy – seeking to incorporate opposition and minority views into both policy formation processes and the resulting policies.

A salute to Charles Hamblin

This short biography of Australian philosopher and computer scientist Charles L. Hamblin was initially commissioned by the Australian Computer Museum Society.

Charles Leonard Hamblin (1922-1985) was an Australian philosopher and one of Australia’s first computer scientists. His main early contributions to computing, which date from the mid-1950s, were the development and application of reverse Polish notation and the zero-address store. He was also the developer of one of the first computer languages, GEORGE. Since his death, his ideas have become influential in the design of computer interaction protocols, and are expected to shape the next generation of e-commerce and machine-communication systems.
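The connection between reverse Polish notation and a zero-address machine can be seen in a minimal stack evaluator (a sketch of my own, of course, not Hamblin’s code): operands are pushed onto a stack, and each operator pops its arguments from the stack, so instructions need name no operand addresses at all.

```python
# Minimal evaluator for reverse Polish (postfix) notation.
def eval_rpn(tokens):
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # operands come off the stack,
            a = stack.pop()   # so the operator itself names no addresses
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# (3 + 4) * 5, written in RPN, needs no parentheses:
print(eval_rpn("3 4 + 5 *".split()))  # -> 35.0
```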

Santayana on Stickney

George Santayana was friends with Joe Trumbull Stickney.  In 1952, five decades after Stickney died from a brain tumour, Santayana wrote a letter about their friendship to William Kirkwood.  The letter is reproduced in facsimile in M. Kirkwood’s life of Santayana (1961, pp. 234-235).

Via di Santo Stefano Rotundo, 6
Rome, May 27, 1952
To Professor Wm. A. Kirkwood, Ph. D.
Trinity College, Toronto
Dear Sir,
It was a happy impulse that prompted you to think that the books you speak of and their annotations, and especially the lines in praise of Homer written by my friend Stickney would interest me. They have called up vividly in my mind the quality of his mind, although the verses represent a much earlier feeling for the classics, and a more conventional mood than he had in the years when we had our frequent moral fencing bouts; for there was a contrary drift in our views in spite of great sympathy in our tastes and pursuits. These verses are signed Sept. 15/ 90. Now Stickney graduated at Harvard in 1895, so that five years earlier he must have been about 17 years old. This explains to me the tone of the verses and also the fact that they advance line by line, seldom or never running over and breaking the next line at the cesura or before it, as he would surely have done in his maturity, when he doted on the dramatic interruptions of Shakespeare’s lines in Antony and Cleopatra in particular, and in all the later plays in general. [page break]
I see clearly the greater mastery and strength of impassioned drama, if impassioned drama is what you are in sympathy with; but I like to warn dogmatic critics of what a more naive art achieves in its impartial and peaceful labour and the risk that overcharged movement or surpluses [?] runs of drowning in its deathbed [?] waters. Every form of art has its charm and is appropriate in its place; but it is moral cramp to admit only one form of art to be legitimate or important. The reminder of this old debate that I had with Stickney who enlightened me more (precisely about the abuse of rhetoric) than I ever could enlighten him about the relativity of everything has been a pleasant reminder of younger days: although I am not sure that much progress towards reason and justice has been made since by critical opinion.
With best thanks and regards
Yours sincerely
G. Santayana

Reference:
M. M. Kirkwood [1961]:  Santayana:  Saint of the Imagination.  Toronto, Canada:  University of Toronto Press.
Previous posts on George Santayana here, and Joe Stickney here.

Dialogs over actions

In the post below, I mentioned the challenge for knowledge engineers of representing know-how, a task which may require explicit representation of actions, and sometimes also of utterances over actions.  The know-how involved in steering a large sailing ship with its diverse crew surely includes the knowledge of who to ask (or to command) to do what, when, and how to respond when these requests (or commands) are ignored, or fail to be executed successfully or timeously.
One might imagine epistemology – the philosophy of knowledge – would be of help here.  Philosophers, however, have been seduced since Aristotle by propositions (factual statements about the world having truth values), largely ignoring actions and their representation.   Philosophers of language have also mostly focused on speech acts – utterances which act to change the world – rather than on utterances about actions themselves.  Even among speech act theorists the obsession with propositions is strong, with attempts to analyze utterances which are demonstrably not propositions (eg, commands) by means of implicit assertive statements – propositions asserting something about the world, where “the world” is extended to include internal mental states and intangible social relations between people – which these utterances allegedly imply.  With only a few exceptions (Thomas Reid 1788, Adolf Reinach 1913, Juergen Habermas 1981, Charles Hamblin 1987), philosophers of language have mostly ignored utterances about actions.
Consider the following two statements:

I promise you to wash the car.
I command you to wash the car.

The two statements have almost identical English syntax.   Yet their meanings, and the intentions of their speakers, are very distinct.  For a start, the action of washing the car would be done by different people – the speaker and the hearer, respectively (assuming for the moment that the command is validly issued, and accepted).  Similarly, the power to retract or revoke the action of washing the car rests with different people – with the hearer (as the recipient of the promise) and the speaker (as the commander), respectively.
Linguists generally use “semantics” to refer to the real-world referents of syntactically-correct expressions, while “pragmatics” refers to those aspects of the meaning and use of an expression which do not arise from its relationship (or lack of one) to things in the world, such as the speaker’s intentions.  For neither of these two expressions does it make sense to speak of a truth value:  a promise may be questioned as to its sincerity, its feasibility, or its appropriateness, etc, but not its truth or falsity;  likewise, a command may be questioned as to its legal validity, its feasibility, or its morality, etc, but again not its truth or falsity.
For utterances about actions, such as promises, requests, entreaties and commands, truth-value semantics makes no sense.  Instead, we generally need to consider two pragmatic aspects.  The first is uptake, the acceptance of the utterance by the hearer (an aspect first identified by Reid and by Reinach), an acceptance which generally creates a social commitment to execute the action described in the utterance by one or other party to the conversation (speaker or hearer).    Once uptaken, a second pragmatic aspect comes into play:  the power to revoke or retract the social commitment to execute the action.  This revocation power does not necessarily lie with the original speaker; only the recipient of a promise may cancel it, for example, and not the original promiser.  The revocation power also does not necessarily lie with the uptaker, as commands readily indicate.
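To make these two pragmatic aspects concrete, here is an illustrative sketch (my own, deliberately simplified, and not drawn from any of the cited authors): a commitment record in which uptake creates the social commitment, and in which the party who must act and the party who may revoke both depend on the type of utterance rather than on who spoke.

```python
# Who becomes committed to act, and who holds the power of revocation,
# for each utterance type. These two tables are the whole point:
# the roles differ, and neither is fixed to the speaker or the hearer.
COMMITS = {"promise": "speaker", "command": "hearer"}   # who must act
REVOKES = {"promise": "hearer", "command": "speaker"}   # who may cancel

class Commitment:
    def __init__(self, kind, speaker, hearer, action):
        self.kind, self.speaker, self.hearer = kind, speaker, hearer
        self.action = action
        self.active = False

    def uptake(self):
        # Uptake by the hearer creates the social commitment,
        # and returns the name of the party committed to act.
        self.active = True
        roles = {"speaker": self.speaker, "hearer": self.hearer}
        return roles[COMMITS[self.kind]]

    def revoke(self, who):
        # Only the holder of the revocation power may cancel.
        roles = {"speaker": self.speaker, "hearer": self.hearer}
        if self.active and who == roles[REVOKES[self.kind]]:
            self.active = False
            return True
        return False

promise = Commitment("promise", "Ann", "Bob", "wash the car")
print(promise.uptake())        # -> Ann: the promiser must act
print(promise.revoke("Ann"))   # -> False: only Bob, the recipient, may cancel
print(promise.revoke("Bob"))   # -> True
```

With `kind="command"` the roles invert: the hearer must act, and only the speaker may revoke, mirroring the car-washing examples above.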
Why would a computer scientist be interested in such humanistic arcana?  The more tasks we delegate to intelligent machines, the more they need to co-ordinate actions with others of like kind.  Such co-ordination requires conversations comprising utterances over actions, and, for success, these require agreed syntax, semantics and pragmatics.  To give just one example:  the use of intelligent devices by soldiers has made the modern battlefield a place of overwhelming information collection, analysis and communication.  Much of this communication can be done by intelligent software agents, which is why the US military, inter alia, sponsors research applying the philosophy of language and the philosophy of argumentation to machine communications.
Meanwhile, the philistine British Government intends to cease funding tertiary education in the arts and the humanities.   Even utilitarians should object to this.
References:
Juergen Habermas [1984/1981]:   The Theory of Communicative Action:  Volume 1:  Reason and the Rationalization of Society.  London, UK:  Heinemann.   (Translation by T. McCarthy of:  Theorie des kommunikativen Handelns, Band I:  Handlungsrationalität und gesellschaftliche Rationalisierung.  Frankfurt, Germany:  Suhrkamp, 1981.)
Charles  L. Hamblin [1987]:  Imperatives. Oxford, UK:  Basil Blackwell.
P. McBurney and S. Parsons [2007]: Retraction and revocation in agent deliberation dialogs. Argumentation, 21 (3): 269-289.

Adolph Reinach [1913]:  Die apriorischen Grundlagen des bürgerlichen Rechtes.  Jahrbuch für Philosophie und phänomenologische Forschung, 1: 685-847.

Good decisions

Which decisions are good decisions?
Since 1945, mainstream economists have arrogated the word “rational” to describe a mode of decision-making which they consider to be best.   This method, called maximum-expected-utility (MEU) decision-making, makes several strong assumptions:  that the decision-maker has only a finite set of possible action-options, and knows what these are; that she knows the possible consequences of each of these actions and can quantify (or at least estimate) them, on a single, common, numerical scale of value (the payoffs); that she knows a finite and complete collection of the uncertain events that are possible and which may impact the consequences and their values; and that she knows (or at least can estimate) the probabilities of these uncertain events, again on a common numerical scale of uncertainty.  The MEU decision procedure is then to quantify the consequences of each action-option, weighting each consequence by the probabilities of the uncertain events which influence it.
The decision-maker then selects that action-option which has the maximum expected consequential value, ie the consequential value weighted by the probabilities of the uncertain events. Such decision-making, in an abuse of language that cries out for criminal charges, is then called rational by economists.   Bayesian statistician Dennis Lindley even wrote a book about MEU which included the stunningly-arrogant sentence, “The main conclusion [of this book] is that there is essentially only one way to reach a decision sensibly.”
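For concreteness, the MEU procedure just described can be sketched in a few lines of code. The action-options, payoffs and probabilities below are invented purely for illustration:

```python
# Maximum-expected-utility (MEU) choice over a finite set of action-options.
# All numbers here are invented for illustration only.

# Two uncertain events (states of the world) with assumed probabilities.
probabilities = {"boom": 0.6, "recession": 0.4}

# Payoff of each action-option under each uncertain event,
# all on a single common numerical scale of value.
payoffs = {
    "expand": {"boom": 100, "recession": -40},
    "hold":   {"boom": 30,  "recession": 10},
}

def expected_utility(action):
    # weight each consequence by the probability of the event producing it
    return sum(probabilities[s] * payoffs[action][s] for s in probabilities)

# The MEU rule: pick the action with maximum expected consequential value.
best = max(payoffs, key=expected_utility)
print(best, expected_utility(best))   # expand 44.0
```

Note how much the procedure presupposes: every option, every event, every payoff and every probability must already be in hand before the first line runs.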

Rational?  This method is not even feasible, let alone sensible or good!
First, where do all these numbers come from?  With the explicit assumptions that I have listed, economists are assuming that the decision-maker has some form of perfect knowledge.  Well, no one making any real-world decisions has that much knowledge.  Of course, economists often respond, estimates can be used when the knowledge is missing.  But whose estimates?   Sourced from where?   Updated when? Anyone with any corporate or public policy experience knows straight away that consensus on such numbers for any half-way important problem will be hard to find.  Worse than that, any consensus achieved should immediately be suspected and interrogated, since it may be evidence of groupthink.    There simply is no certainty about the future, and if a group of people all do agree on what it holds, down to quantified probabilities and payoffs, they deserve the comeuppance they are likely to get!
Second, the MEU principle simply averages across uncertain events.   What of action-options with potentially catastrophic outcomes?   Their small likelihood of occurrence may mean they disappear in the averaging process, but no real-world decision-maker – at least, none with any experience or common sense – would risk a catastrophic outcome, despite their estimated low probabilities.   Wall Street trading firms have off-street (and often off-city) backup IT systems, and sometimes even entire backup trading floors, ready for those rare events.
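The averaging problem is easy to exhibit numerically. In the sketch below (with invented numbers), the MEU rule prefers an option carrying a small chance of ruin, while a worst-case (maximin) rule, closer to what an experienced decision-maker might apply, rejects it:

```python
# How a rare but ruinous outcome nearly vanishes in the MEU average.
# All numbers are invented for illustration only.

probabilities = {"normal": 0.999, "disaster": 0.001}

payoffs = {
    "risky": {"normal": 110, "disaster": -5000},  # catastrophic, but rare
    "safe":  {"normal": 100, "disaster": 100},
}

def expected_utility(action):
    return sum(probabilities[s] * payoffs[action][s] for s in probabilities)

def worst_case(action):
    # a maximin decision-maker guards against the worst possible outcome
    return min(payoffs[action].values())

meu_choice = max(payoffs, key=expected_utility)   # prefers "risky" (104.89 vs 100)
maximin_choice = max(payoffs, key=worst_case)     # prefers "safe"
```

The 0.1% chance of losing 5000 shaves only 5 units off the risky option's average, so MEU still recommends courting ruin; the maximin rule, by contrast, refuses it outright.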
Third, look at all the assumptions not made explicit in this framework.  There is no mention of the time allowed for the decision, so apparently the decision-maker has infinities of time available.  No mention is made of the processing or memory resources available for making the decision, so apparently she has infinite quantities of these as well.   That makes a change from most real-world decisions:  what a pleasant utopia this MEU-land must be.  Nothing is said – at least nothing explicit – about taking into account the historical or other contexts of the decision, such as past decisions by this or related decision-makers, technology standards, legacy systems, organization policies and constraints, legal, regulatory or ethical constraints, or the strategies of the company or the society in which the decision-maker sits.   How could a decision procedure which ignores such issues be considered, even for a moment, rational?   I think only an academic could ignore context in this way; no business person I know would do so, since certain unemployment would be the result.  And how could members of an academic discipline purporting to be a social science accept and disseminate a decision-making framework which ignores such social, contextual features?
And do the selected action-options just execute themselves?  Nothing is said in this framework about consultation with stakeholders during the decision-process, so presumably the decision-maker has no one to report to, no board members or stockholders or division presidents or ward chairmen or electors to manage or inform or liaise with or mollify or reward or appease or seek re-election from, no technical departments to seek feasibility approval from, no implementation staff to motivate or inspire, no regulators or ethicists or corporate counsel to seek legal approval from, no funders or investors to raise finance from, no suppliers to convince to accept orders with, no distribution channels to persuade to schedule throughput with,  no competitors to second-guess or outwit, and no actual, self-immolating protesters outside one’s office window to avert one’s eyes from and feel guilt about for years afterward.*
For many complex decisions, the ultimate success or failure of the decision can depend significantly on the degree to which those having to execute the decision also support it.  Consequently, the choice of a specific action-option (and the logical reasoning process used to select it) may be far less important for the success of the decision than that key stakeholders feel they have been consulted appropriately during the reasoning process.  In other words, the quality of the decision may depend much more on how and with whom the decision-maker reasons than on the particular conclusion she reaches.   Arguably this is true of almost all significant corporate strategy decisions and major public policy decisions:  There is ultimately no point sending your military to prop up an anti-communist regime in South-East Asia, for example, if your own soldiers come to feel they should not be there (as I discuss here, regarding another decision to go to war).
Mainstream economists have a long way to go before they will have a theory of good decision-making.   In the meantime, it would behoove them to show some humility when criticizing the decision-making processes of human beings.**
Notes and Bibliography:
Oskar Lange [1945-46]:  The scope and method of economics.  The Review of Economic Studies, 13 (1): 19-32.
Dennis Lindley [1985]:  Making Decisions.  Second Edition. London, UK: John Wiley and Sons.
Leonard J. Savage [1954]: The Foundations of Statistics.  New York, NY, USA:  Wiley.
* I’m sure Robert McNamara, statistician and decision-theory whizz kid, never considered the reactions of self-immolating protesters when making decisions early in his career, but having seen one outside his office window late in his time as Secretary of Defense he seems to have done so subsequently.
** Three-toed sloth comments dialogically and amusingly on MEU theory here.

In defence of futures thinking

Norm at Normblog has a post defending theology as a legitimate area of academic inquiry, after an attack on theology by Oliver Kamm.  (Since OK’s post is behind a paywall, I have not read it, so my comments here may be awry with respect to that post.)  Norm argues, very correctly, that it is legitimate for theology, considered as a branch of philosophy, to reflect, inter alia, on the properties of entities whose existence has not yet been proven.  In strong support of Norm, let me add:  Not just in philosophy!
In business strategy, good decision-making requires consideration of the consequences of potential actions, which in turn requires consideration of the potential actions of other actors and stakeholders in response to the first set of actions.  These actors may include entities whose existence is not yet known or even suspected, for example, future competitors to a product whose launch creates a new product category.   Why, there’s even a whole branch of strategy analysis devoted to scenario planning, a discipline that began in the military analysis of alternative post-nuclear worlds, and whose very essence involves the creation of imagined futures (for forecasting and prognosis) and/or imagined pasts (for diagnosis and analysis).   Every good air-crash investigation, medical diagnosis, and police homicide investigation, for instance, involves the creation of imagined alternative pasts, and often the creation of imaginary entities in those imagined pasts, whose fictional attributes we may explore at length.   Arguably, in one widespread view of the philosophy of mathematics, pure mathematicians do nothing but explore the attributes of entities without material existence.
And not just in business, medicine, the military, and the professions.   In computer software engineering, no new software system development is complete without due and rigorous consideration of the likely actions of users or other actors with and on the system, for example.   Users and actors here include those who are the intended target users of the system, as well as malevolent or whimsical or poorly-behaved or bug-ridden others, both human and virtual, not all of whom may even exist when the system is first developed or put into production.      If creative articulation and manipulation of imaginary futures (possible or impossible) is to be outlawed, not only would we have no literary fiction or much poetry, we’d also have few working software systems either.

Agonistic planning

One key feature of the Kennedy and Johnson administrations identified by David Halberstam in his superb account of the development of US policy on Vietnam, The Best and the Brightest, was groupthink:  the failure of White House national security, foreign policy and defense staff to propose or even countenance alternatives to the prevailing views on Vietnam, especially when these alternatives were in radical conflict with the prevailing wisdom.   Among the junior staffers working in those administrations was Richard Holbrooke, now the US Special Representative for Afghanistan and Pakistan in the Obama administration.  A New Yorker profile of Holbrooke last year included this statement by him, about the need for policy planning processes to incorporate agonism:

“You have to test your hypothesis against other theories,” Holbrooke said. “Certainty in the face of complex situations is very dangerous.” During Vietnam, he had seen officials such as McGeorge Bundy, Kennedy’s and Johnson’s national-security adviser, “cut people to ribbons because the views they were getting weren’t acceptable.” Washington promotes tactical brilliance framed by strategic conformity—the facility to outmaneuver one’s counterpart in a discussion, without questioning fundamental assumptions. A more farsighted wisdom is often unwelcome. In 1975, with Bundy in mind, Holbrooke published an essay in Harpers in which he wrote, “The smartest man in the room is not always right.” That was one of the lessons of Vietnam. Holbrooke described his method to me as “a form of democratic centralism, where you want open airing of views and opinions and suggestions upward, but once the policy’s decided you want rigorous, disciplined implementation of it. And very often in the government the exact opposite happens. People sit in a room, they don’t air their real differences, a false and sloppy consensus papers over those underlying differences, and they go back to their offices and continue to work at cross-purposes, even actively undermining each other.”  (page 47)
Of course, Holbrooke’s positing of policy development as distinct from policy implementation is itself a dangerous simplification of the reality for most complex policy, both private and public, where the relationship between the two is usually far messier.    The details of policy, for example, are often only decided, or even able to be decided, at implementation-time, not at policy design-time.    Do you sell your new hi-tech product via retail outlets, for instance?  The answer may depend on whether there are outlets available to collaborate with you (not tied to competitors) and technically capable of selling it, and these facts may not be known until you approach the outlets.  Moreover, if the stakeholders implementing (or constraining implementation) of a policy need to believe they have been adequately consulted in policy development for the policy to be executed effectively (as is the case with major military strategies in democracies, for example here), then a further complication to this reductive distinction exists.
 
 
UPDATE (2011-07-03):
British MP Rory Stewart recounts another instance of Holbrooke’s agonist approach to policy in this post-mortem tribute: Holbrooke, although disagreeing with Stewart on policy toward Afghanistan, insisted that Stewart present his case directly to US Secretary of State Hillary Clinton in a meeting that Holbrooke arranged.
 
References:

David Halberstam [1972]:  The Best and the Brightest.  New York, NY, USA: Random House.
George Packer [2009]:  The last mission: Richard Holbrooke’s plan to avoid the mistakes of Vietnam in Afghanistan.  The New Yorker, 2009-09-28, pp. 38-55.