42 again

Following my recent post on the meaning of life, I recalled Georges Perec’s great novel, Life: A User’s Manual, which I first encountered in a 1987 book review by Paul Auster in the New York Times, here.

If anyone can be called the central character in this shifting, kaleidoscopic work, it would have to be Percival Bartlebooth, an eccentric English millionaire whose insane and useless 50-year project serves as an emblem for the book as a whole. Realizing as a young man that his wealth has doomed him to a life of boredom, Bartlebooth undertakes to study the art of watercolor with Serge Valene for a period of 10 years. Although he has no aptitude whatsoever for painting, he eventually reaches a satisfactory level of competence. Then, in the company of a servant, he sets out on a 20-year voyage around the world with the sole intention of painting watercolors of 500 different harbors and seaports.
As soon as one of these pictures is finished, he sends it to a man in Paris by the name of Gaspard Winckler, who also lives in the building. Winckler is an expert puzzle-maker whom Bartlebooth has hired to turn the watercolors into 750-piece jigsaw puzzles. One by one, the puzzles are made and stored in wooden boxes. When Bartlebooth returns from his travels and settles back into his apartment, he will methodically go about putting the puzzles together in chronological order. By means of an elaborate chemical process, the borders of the puzzle pieces have been glued together in such a way that the seams are no longer visible, thus restoring the watercolor to its original integrity. The painting, good as new, can then be removed from its wooden backing and sent to the place where it was originally executed. There it will be dipped into a detergent solution that eliminates all traces of the painting, yielding a clean and unmarked sheet of paper.
In other words, Bartlebooth will be left with nothing, the same thing he started with.

The idea of wasting the second half of your life trying to make sense of all you did in the first half is one I have found increasingly insightful as I age.

FWIW, Auster’s 1987 review appears to have been plagiarized, without any acknowledgement, in this 1999 post.

The science of delegation

Most people, if they think about the topic at all, probably imagine computer science involves the programming of computers.  But what are computers?  In most cases, these are just machines of one form or another.  And what is programming?  Well, it is the issuing of instructions (“commands” in the programming jargon) for the machine to do something or other, or to achieve some state or other.   Thus, I view Computer Science as nothing more or less than the science of delegation.
When delegating a task to another person, we are likely to be more effective (as the delegator or commander) the more we know about the skills and capabilities and current commitments and attitudes of that person (the delegatee or commandee).   So too with delegating to machines.   Accordingly, a large part of theoretical computer science is concerned with exploring the properties of machines, or rather, the deductive properties of mathematical models of machines.  Other parts of the discipline concern the properties of languages for commanding machines, including their meaning (their semantics) – this is programming language theory.  Because the vast majority of lines of program code nowadays are written by teams of programmers, not individuals, much of computer science – part of the branch known as software engineering – is concerned with how best to organize and manage and evaluate the work of teams of people.   Because most machines are controlled by humans and act in concert for or with or to humans, another, related branch of this science of delegation deals with the study of human-machine interactions.   In both these branches, computer science reveals itself to have a side which connects directly with the human and social sciences, something not true of the other sciences often grouped with Computer Science: pure mathematics, physics, or chemistry. 
And from its modern beginnings 70 years ago, computer science has been concerned with trying to automate whatever can be automated – in other words, with delegating the task of delegating.  This is the branch known as Artificial Intelligence.   We have intelligent machines which can command other machines, and manage and control them in the same way that humans could.   But not all bilateral relationships between machines are those of commander-and-subordinate.  More often, in distributed networks machines are peers of one another, intelligent and autonomous (to varying degrees).  Thus, commanding is useless – persuasion is what is needed for one intelligent machine to ensure that another machine does what the first desires.  And so, as one would expect in a science of delegation, computational argumentation arises as an important area of study.
 

Strategic Programming

Over the last 40-odd years, a branch of Artificial Intelligence called AI Planning has developed.  One way to view Planning is as automated computer programming: 

  • Write a program that takes as input an initial state, a final state (“a goal”), and a collection of possible atomic actions, and produces as output another computer program comprising a combination of the actions (“a plan”) guaranteed to take us from the initial state to the final state. 

A prototypical example is robot motion:  Given an initial position (e.g., here), a means of locomotion (e.g., the robot can walk), and a desired end-position (e.g., over there), AI Planning seeks to empower the robot to develop a plan to walk from here to over there.   If some or all the actions are non-deterministic, or if there are other possibly intervening effects in the world, then the “guaranteed” modality may be replaced by a “likely” modality. 
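As a minimal sketch of this definition (in Python, using a toy grid world and action set of my own invention rather than any standard planning formalism), breadth-first search over states takes an initial state, a goal state and a collection of atomic actions, and returns a sequence of actions – a plan – guaranteed to reach the goal whenever one exists:

```python
from collections import deque

# Toy deterministic planning problem: a robot moving on a small grid.
# The atomic actions are single-step moves; a plan is a sequence of action names.
ACTIONS = {
    "up":    (0, -1),
    "down":  (0, 1),
    "left":  (-1, 0),
    "right": (1, 0),
}

def plan(initial, goal, width=5, height=5):
    """Breadth-first search: return a list of action names taking us
    from `initial` to `goal`, or None if no such plan exists."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, actions_so_far = frontier.popleft()
        if state == goal:
            return actions_so_far                    # the plan
        for name, (dx, dy) in ACTIONS.items():
            nxt = (state[0] + dx, state[1] + dy)
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions_so_far + [name]))
    return None

print(plan((0, 0), (2, 1)))    # a shortest plan, eg ['down', 'right', 'right']
```

Real planners use richer representations of actions (pre-conditions and effects) and far cleverer search, but the input-output contract is the same as in the definition above.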
Another way to view Planning is in contrast to Scheduling:

  • Scheduling is the orderly arrangement of a collection of tasks guaranteed to achieve some goal from some initial state, when we know in advance the initial state, the goal state, and the tasks.
  • Planning is the identification and orderly arrangement of tasks guaranteed to achieve some goal from some initial state, when we know in advance the initial state and the goal state, but we don’t yet know the tasks;  we only know in advance the atomic actions from which tasks may be constructed.

Relating these ideas to my business experience, I realized that a large swathe of complex planning activities in large companies involves something at a higher level of abstraction.  Henry Mintzberg called these activities “Strategic Programming”:

  • Strategic Programming is the identification and prioritization of a finite collection of programs or plans, given an initial state and a set of desirable end-states or objectives (possibly conflicting).  A program comprises an ordered collection of tasks, and these tasks and their ordering we may or may not know in advance.

Examples abound in complex business domains.   You wake up one morning to find yourself the owner of a national mobile telecommunications licence, and with funds to launch a network.  You have to buy the necessary equipment and deploy and connect it, in order to provide your new mobile network.   Your first decision is where to provide coverage:  you could aim to provide nationwide coverage, and not open your service to the public until the network has been installed and connected nationwide.  This is the strategy Orange adopted when launching PCS services in mainland Britain in 1994.   One downside of waiting till you’ve covered the nation before selling any service to customers is that revenues are delayed. 
Another downside is that a competitor may launch service before you, and that happened to Orange:  Mercury One2One (as it then was) offered service to the public in 1993, when they had only covered the area around London.   The upside of that strategy for One2One was early revenues.  The downside was that customers could not use their phones outside the island of coverage, essentially inside the M25 ring-road.   For some customer segments, wide-area or nationwide coverage may not be very important, so an early launch may be appropriate if those customer segments are being targeted.  But an early launch won’t help customers who need wider-area coverage, and – unless marketing communications are handled carefully – the early launch may position the network operator in the minds of such customers as permanently providing inadequate service.   The expectations of both current target customers and customers who are not currently targets need to be explicitly managed to avoid such mis-perceptions.
In this example, the different coverage rollout strategies ended up at the same place eventually, with both networks providing nationwide coverage.  But the two operators took different paths to that same end-state.   How to identify, compare, prioritize, and select between these different paths is the very stuff of marketing and business strategy, ie, of strategic programming.  It is why business decision-making is often very complex and intellectually very demanding.   Let no one say (as academics are wont to do) that decision-making in business is a doddle.   Everything is always more complicated than it looks from outside, and identifying and choosing between alternative programs is among the most complex of decision-making activities.

Combining actions

How might two actions be combined?  Well, depending on the actions, we may be able to do one action and then the other, or we may be able to do the other and then the one, or maybe not.  We call such a combination a sequence or concatenation of the two actions.  In some cases, we may be able to do the two actions in parallel, both at the same time.  We may have to start them simultaneously, or we may be able to start one before the other.  Or, we may have to ensure they finish together, or that they jointly meet some other intermediate synchronization targets.
In some cases, we may be able to interleave them, doing part of one action, then part of the second, then part of the first again, what management consultants in telecommunications call multiplexing.   For many human physical activities – such as learning to play the piano or learning to play golf – interleaving is how parallel activities are first learnt and complex motor skills acquired:  first play a few bars of music on the piano with only the left hand, then the same bars with only the right, and keep practicing the hands on their own, and only after the two hands are each practiced individually do we try playing the piano with the two hands together.
Computer science, which I view as the science of delegation, knows a great deal about how actions may be combined, how they may be distributed across multiple actors, and what the meanings and consequences of these different combinations are likely to be.     It is useful to have a list of the possibilities.  Let us suppose we have two actions, represented by A and B respectively.   Then we may be able to do the following compound actions:

  • Sequence:  The execution of A followed by the execution of B, denoted A ;  B
  • Iterate: A executed n times, denoted A ^ n  (This is sequential execution of a single action.)
  • Parallelize: Both A and B are executed together, denoted A & B
  • Interleave:  Action A is partly executed, followed by part-execution of B, followed by continued part-execution of A, etc, denoted A || B
  • Choose:  Either A is executed or B is executed but not both, denoted A v B
  • Combinations of the above:  For example, with interleaving, only one action is ever being executed at one time.  But it may be that the part-executions of A and B can overlap, so we have a combination of Parallel and Interleaved compositions of A and B.

Depending on the nature of the domain and the nature of the actions, not all of these compound actions will necessarily be possible.  For instance, if action B has some pre-conditions before it can be executed, then the prior execution of A has to successfully achieve these pre-conditions in order for the sequence A ; B to be feasible.
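To make these compositions concrete, here is a minimal Python sketch in which an action is represented simply as the list of its atomic steps (the representation and names are illustrative only, not a standard process calculus); true parallel execution is omitted, since it requires genuine concurrency rather than list manipulation:

```python
import itertools

# An action is modelled, very crudely, as the ordered list of its atomic steps.

def sequence(a, b):
    """A ; B  -- all the steps of A, then all the steps of B."""
    return a + b

def iterate(a, n):
    """A ^ n  -- A executed n times in sequence."""
    return a * n

def choose(a, b, pick_first=True):
    """A v B  -- execute one of the two actions, but not both."""
    return a if pick_first else b

def interleave(a, b):
    """A || B  -- alternate part-executions of A and B."""
    merged = []
    for step_a, step_b in itertools.zip_longest(a, b):
        if step_a is not None:
            merged.append(step_a)
        if step_b is not None:
            merged.append(step_b)
    return merged

A = ["a1", "a2", "a3"]
B = ["b1", "b2"]
print(sequence(A, B))      # ['a1', 'a2', 'a3', 'b1', 'b2']
print(iterate(A, 2))       # ['a1', 'a2', 'a3', 'a1', 'a2', 'a3']
print(interleave(A, B))    # ['a1', 'b1', 'a2', 'b2', 'a3']
print(choose(A, B))        # ['a1', 'a2', 'a3']
```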
This stuff may seem very nitty-gritty, but anyone who’s ever asked a teenager to do some task they don’t wish to do will know all the variations in which a required task can be done after, or alongside, or intermittently with, or be replaced instead by, some other task the teen would prefer to do.    Machines, it turns out, are much like recalcitrant and literal-minded teenagers when it comes to commanding them to do stuff.

Taking a view vs. maximizing expected utility

The standard or classical model in decision theory is called Maximum Expected Utility (MEU) theory, which I have excoriated here and here (and which Cosma Shalizi satirized here).   Its flaws and weaknesses for real decision-making have been pointed out by critics since its inception, six decades ago.  Despite this, the theory is still taught in economics classes and MBA programs as a normative model of decision-making.
A key feature of MEU is that the decision-maker is required to identify ALL possible action options, and ALL consequential states of these options.   He or she then reasons ACROSS these consequences by adding together the utilities of the consequential states, weighted by the likelihood that each state will occur.
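For concreteness, here is a minimal sketch of the MEU procedure in Python (the action options, states, probabilities and payoffs are all invented for illustration): each action’s expected utility is the probability-weighted sum of the utilities of its consequential states, and the procedure selects the action with the maximum.

```python
# Hypothetical decision problem: two action options, three uncertain states.
probabilities = {"boom": 0.3, "steady": 0.5, "bust": 0.2}

# utility[action][state]: the payoff of the consequential state, on a common scale.
utility = {
    "expand": {"boom": 120, "steady": 40, "bust": -80},
    "hold":   {"boom": 30,  "steady": 25, "bust": 10},
}

def expected_utility(action):
    return sum(probabilities[s] * utility[action][s] for s in probabilities)

for action in utility:
    print(action, expected_utility(action))      # expand: 40.0, hold: 23.5

print("MEU choice:", max(utility, key=expected_utility))    # 'expand'
```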
However, financial and business planners do something completely contrary to this in everyday financial and business modeling.   In developing a financial model for a major business decision or for a new venture, the collection of possible actions is usually infinite and the space of possible consequential states even more so.  Making human sense of the possible actions and the resulting consequential states is usually a key reason for undertaking the financial modeling activity, and so cannot be an input to the modeling.  Because of the explosion in the number of states and in their internal complexity, business planners cannot articulate all the actions and all the states, nor even usually a subset of these beyond a mere handful.
Therefore, planners typically choose to model just 3 or 4 states – usually called cases or scenarios – with each of these combining a complex mix of (a) assumed actions, (b) assumed stakeholder responses and (c) environmental events and parameters.  The assumptions and parameter values are instantiated for each case, the model run, and  the outputs of the 3 or 4 cases compared with one another.  The process is usually repeated with different (but close) assumptions and parameter values, to gain a sense of the sensitivity of the model outputs to those assumptions.
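A toy sketch of this scenario-based approach, in Python (the model, the cases and all the numbers are invented), shows the contrast with MEU: a handful of named cases, each bundling a set of assumptions, are run through the same model, compared side by side, and then re-run with nearby parameter values to gauge sensitivity.

```python
# A one-line "financial model": five-year net cash flow of a hypothetical venture.
def model(subscribers, arpu, churn, capex):
    revenue = sum(subscribers * arpu * 12 * (1 - churn) ** year for year in range(5))
    return revenue - capex

# Three cases, each a bundle of assumed actions, stakeholder responses and parameters.
cases = {
    "Base Case":  dict(subscribers=100_000, arpu=30, churn=0.15, capex=25_000_000),
    "Best Case":  dict(subscribers=150_000, arpu=35, churn=0.10, capex=25_000_000),
    "Worst Case": dict(subscribers=60_000,  arpu=25, churn=0.25, capex=30_000_000),
}

for name, assumptions in cases.items():
    print(name, round(model(**assumptions)))

# Simple sensitivity check: perturb one assumption and re-run the base case.
for churn in (0.10, 0.15, 0.20):
    tweaked = dict(cases["Base Case"], churn=churn)
    print("Base Case with churn =", churn, "->", round(model(**tweaked)))
```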
Often the scenarios will be labeled “Best Case”, “Worst Case”, “Base Case”, etc to identify the broad underlying principles that are used to make the relevant assumptions in each case.   Actually adopting a financial model for (say) a new venture means assuming that one of these cases is close enough to current reality and its likely future development in the domain under study – ie, that one case is realistic.   People in the finance world call this adoption of one case “taking a view” on the future.
Taking a view involves assuming (at least pro tem) that one trajectory (or one class of trajectories) describes the evolution of the states of some system.  Such betting on the future is the complete opposite cognitive behaviour to reasoning over all the possible states before choosing an action, which the protagonists of the MEU model insist we all do.   Yet the MEU model continues to be taught as a normative model for decision-making to MBA students who will spend their post-graduation life doing business planning by taking a view.
 

Imaginary beliefs

In a discussion of the utility of religious beliefs, Norm makes this claim:

“A person can’t intelligibly say, ‘I know that p is false, but it’s useful for me to think it’s true, so I will.’”

(Here, p is some proposition – that is, some statement about the world which may be either true or false, but not both and not neither.)
In fact, a person can indeed intelligibly say this, and pure mathematicians do it all the time.   Perhaps the example in mathematics which is easiest to grasp is the use of the square root of minus one, the number usually denoted by the symbol i.   Negative numbers cannot have square roots, since there are no numbers which when squared (multiplied by themselves) lead to a negative number.  However, it turns out that believing that these imaginary numbers do exist leads to a beautiful and subtle mathematical theory, called the theory of complex numbers. This theory has multiple practical applications, from mathematics to physics to engineering.  One area of application we have known for about a  century is the theory of alternating current in electricity;  blogging – among much else of modern life – would perhaps be impossible, or at least very different, without this belief in imaginary entities underpinning the theory of electricity.
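A small illustration in Python (with invented component values) of just how productive this ‘belief’ is: treat the impedance of an alternating-current circuit as a complex number, and the magnitude and phase of the current fall out of ordinary arithmetic.

```python
import cmath
import math

# Series RLC circuit driven at 50 Hz; impedance treated as a complex number.
R, L, C = 10.0, 0.1, 100e-6        # ohms, henries, farads (illustrative values)
f = 50.0
omega = 2 * math.pi * f

Z = complex(R, omega * L - 1 / (omega * C))    # Z = R + j(wL - 1/wC)
V = 230.0                                      # volts (rms), taken as the phase reference

I = V / Z                                      # the current phasor
print("Impedance:", Z)
print("|I| =", round(abs(I), 2), "A, phase =", round(math.degrees(cmath.phase(I)), 2), "degrees")
```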
And, as I have argued before (eg, here and here), effective business strategy development and planning under uncertainty requires holding multiple incoherent beliefs about the world simultaneously.   The scenarios created by scenario planners are examples of such mutually inconsistent beliefs about the world.   Most people – and most companies – find it difficult to maintain and act upon mutually-inconsistent beliefs.   For that reason the company that pioneered the use of scenario planning, Shell, has always tried to ensure that probabilities are never assigned to scenarios, because managers tend to give greater credence and hence attention to scenarios having higher probabilities.  The utilitarian value of scenario planning is greatest when planners consider seriously the consequences of low-likelihood, high-impact scenarios (as Shell found after the OPEC oil price shock of 1973), not the scenarios they think are most probable.  To do this well, planners need to believe statements that they judge to be false, or at least act as if they believe these statements.
Here and here I discuss another example, taken from espionage history.

What use are models?

What are models for?   Most developers and users of models, in my experience, seem to assume the answer to this question is obvious and thus never raise it.   In fact, modeling has many potential purposes, and some of these conflict with one another.   Some of the criticisms made of particular models arise from mis-understandings or mis-perceptions of the purposes of those models, and the modeling activities which led to them.
Liking cladistics as I do, I thought it useful to list all the potential purposes of models and modeling.   The only discussion of this topic that I know of is a brief one by game theorist Ariel Rubinstein in an appendix to a book on modeling rational behaviour (Rubinstein 1998).  Rubinstein considers several alternative purposes for economic modeling, but ignores many others.   My list is as follows (to be expanded and annotated in due course):

  • 1. To better understand some real phenomena or existing system.   This is perhaps the most commonly perceived purpose of modeling, in the sciences and the social sciences.
  • 2. To predict (some properties of) some real phenomena or existing system.  A model aiming to predict some domain may be successful without aiding our understanding of the domain at all.  Isaac Newton’s model of the motion of planets, for example, was predictive but not explanatory.   I understand that physicist David Deutsch argues that predictive ability is not an end of scientific modeling but a means, since it is how we assess and compare alternative models of the same phenomena.    This is wrong on both counts:  prediction IS an end of much modeling activity (especially in business strategy and public policy domains), and it is not the only means we use to assess models.  Indeed, for many modeling activities, calibration and prediction are problematic, and so predictive capability may not even be possible as a form of model assessment.
  • 3. To manage or control (some properties of) some real phenomena or existing system.
  • 4. To better understand a model of some real phenomena or existing system.  Arguably, most of economic theorizing and modeling falls into this category, and Rubinstein’s preferred purpose is this type.   Macro-economic models, if they are calibrated at all, are calibrated against artificial, human-defined, variables such as employment, GDP and inflation, variables which may themselves bear a tenuous and dynamic relationship to any underlying economic reality.   Micro-economic models, if they are calibrated at all, are often calibrated with stylized facts, abstractions and simplifications of reality which economists have come to regard as representative of the domain in question.    In other words, economic models are not usually calibrated against reality directly, but against other models of reality.  Similarly, large parts of contemporary mathematical physics (such as string theory and brane theory) have no access to any physical phenomena other than via the mathematical model itself:  our only means of apprehension of vibrating strings in inaccessible dimensions beyond the four we live in, for instance, is through the mathematics of string theory.    In this light, it seems nonsense to talk about the effectiveness, reasonable or otherwise, of mathematics in modeling reality, since how could we tell?
  • 5. To predict (some properties of) a model of some real phenomena or existing system.
  • 6. To better understand, predict or manage some intended (not-yet-existing) artificial system, so to guide its design and development.   Understanding a system that does  not yet exist is qualitatively different to understanding an existing domain or system, because the possibility of calibration is often absent and because the model may act to define the limits and possibilities of subsequent design actions on the artificial system.  The use of speech act theory (a model of natural human language) for the design of artificial machine-to-machine languages, or the use of economic game theory (a mathematical model of a stylized conceptual model of particular micro-economic realities) for the design of online auction sites are examples here.   The modeling activity can even be performative, helping to create the reality it may purport to describe, as in the case of the Black-Scholes model of options pricing.
  • 7. To provide a locus for discussion between relevant stakeholders in some business or public policy domain.  Most large-scale business planning models have this purpose within companies, particularly when multiple partners are involved.  Likewise, models of major public policy issues, such as epidemics, have this function.  In many complex domains, such as those in public health, models provide a means to tame and domesticate the complexity of the domain.  This helps stakeholders to jointly consider concepts, data, dynamics, policy options, and assessment of potential consequences of policy options,  all of which may need to be socially constructed. 
  • 8. To provide a means for identification, articulation and potentially resolution of trade-offs and their consequences in some business or public policy domain.   This is the case, for example, with models of public health risk assessment of chemicals or new products by environmental protection agencies, and models of epidemics deployed by government health authorities.
  • 9. To enable rigorous and justified thinking about the assumptions and their relationships to one another in modeling some domain.   Business planning models usually serve this purpose.   They may be used to inform actions, both to eliminate or mitigate negative consequences and to enhance positive consequences, as in retroflexive decision making.
  • 10. To provide a means of assessing the managerial competencies of the people undertaking the modeling activity. Investors in start-ups know that the business plans of the company founders are likely to be out of date very quickly.  The function of such business plans is not to model reality accurately, but to force rigorous thinking about the domain, and to provide a means by which potential investors can challenge the assumptions and thinking of management as a way of probing the managerial competence of those managers.    Business planning can thus be seen to be a form of epideictic argument, where arguments are assessed on their form rather than their content, as I have argued here.
  • 11. As a means of play, to enable the exercise of human intelligence, ingenuity and creativity, in developing and exploring the properties of models themselves.  This purpose is true of that human activity known as doing pure mathematics, and perhaps of most of that academic activity known as doing mathematical economics.   As I have argued before, mathematical economics is closer to theology than to the modeling undertaken in the natural sciences. I see nothing wrong with this being a purpose of modeling, although it would be nice if academic economists were honest enough to admit that their use of public funds was primarily in pursuit of private pleasures, and any wider social benefits from their modeling activities were incidental.

POSTSCRIPT (Added 2011-06-17):  I have just seen Joshua Epstein’s 2008 discussion of the purposes of modeling in science and social science.   Epstein lists 17 reasons to build explicit models (in his words, although I have added the label “0” to his first reason):

0. Prediction
1. Explain (very different from predict)
2. Guide data collection
3. Illuminate core dynamics
4. Suggest dynamical analogies
5. Discover new questions
6. Promote a scientific habit of mind
7. Bound (bracket) outcomes to plausible ranges
8. Illuminate core uncertainties
9. Offer crisis options in near-real time. [Presumably, Epstein means “crisis-response options” here.]
10. Demonstrate tradeoffs / suggest efficiencies
11. Challenge the robustness of prevailing theory through perturbations
12. Expose prevailing wisdom as incompatible with available data
13. Train practitioners
14. Discipline the policy dialog
15. Educate the general public
16. Reveal the apparently simple (complex) to be complex (simple).

These are at a lower level than my list, and I believe some of his items are the consequences of purposes rather than purposes themselves, at least for honest modelers (eg, #11, #12, #16).
References:
Joshua M Epstein [2008]: Why model? Keynote address to the Second World Congress on Social Simulation, George Mason University, USA.  Available here (PDF).
Robert E Marks [2007]:  Validating simulation models: a general framework and four applied examples. Computational Economics, 30 (3): 265-290.
David F Midgley, Robert E Marks and D Kunchamwar [2007]:  The building and assurance of agent-based models: an example and challenge to the field. Journal of Business Research, 60 (8): 884-893.
Robert Rosen [1985]: Anticipatory Systems. Pergamon Press.
Ariel Rubinstein [1998]: Modeling Bounded Rationality. Cambridge, MA, USA: MIT Press.  Zeuthen Lecture Book Series.
Ariel Rubinstein [2006]: Dilemmas of an economic theorist. Econometrica, 74 (4): 865-883.

On Getting Things Done

New York Times Op-Ed writer, David Brooks, has two superb articles about the skills needed to be a success in contemporary technological society, the skills I refer to as Getting-Things-Done Intelligence.  One is a short article in The New York Times (2011-01-17), reacting to the common, but wrong-headed, view that technical skill is all you need for success, and the other a long, fictional disquisition in The New Yorker (2011-01-17) on the social skills of successful people.  From the NYT article:

“Practicing a piece of music for four hours requires focused attention, but it is nowhere near as cognitively demanding as a sleepover with 14-year-old girls. Managing status rivalries, negotiating group dynamics, understanding social norms, navigating the distinction between self and group — these and other social tests impose cognitive demands that blow away any intense tutoring session or a class at Yale.
Yet mastering these arduous skills is at the very essence of achievement. Most people work in groups. We do this because groups are much more efficient at solving problems than individuals (swimmers are often motivated to have their best times as part of relay teams, not in individual events). Moreover, the performance of a group does not correlate well with the average I.Q. of the group or even with the I.Q.’s of the smartest members.
Researchers at the Massachusetts Institute of Technology and Carnegie Mellon have found that groups have a high collective intelligence when members of a group are good at reading each other’s emotions — when they take turns speaking, when the inputs from each member are managed fluidly, when they detect each other’s inclinations and strengths.
Participating in a well-functioning group is really hard. It requires the ability to trust people outside your kinship circle, read intonations and moods, understand how the psychological pieces each person brings to the room can and cannot fit together.
This skill set is not taught formally, but it is imparted through arduous experiences. These are exactly the kinds of difficult experiences Chua shelters her children from by making them rush home to hit the homework table.”

These articles led me to ask exactly what is involved in reading a social situation?  Brooks mentions some of the relevant aspects, but not all.   To be effective, a manager needs to parse the social situation of the groups he or she must work with – those under, those over and peer groups to the side – to answer questions such as the following:

  • Who has power or influence over each group?  Is this exercised formally or informally?
  • What are the norms and practices of the group, both explicit and implicit, known and unconscious?
  • Who in the group is reliable as a witness?   Whose stories can be believed?
  • Who has agendas and what are these?
  • Who in the group is competent or capable or intelligent?  Whose promises to act can be relied upon?  Who, in contrast, needs to be monitored or managed closely?
  • What constraints does the group or its members operate under?  Can these be removed or side-stepped?
  • What motivates the members of the group?  Can or should these motivations be changed, or enhanced?
  • Who is open to new ideas, to change, to improvements?
  • What obstacles and objections will arise in response to proposals for change?  Who will raise these?  Will these objections be explicit or hidden?
  • Who will resist or oppose change?  In what ways? Who will exercise pocket vetoes?

Parsing new social situations – ie, answering these questions in a specific situation – is not something done in a few moments.  It may take years of observation and participation to understand a new group in which one is an outsider.  People who are good at this may be able to parse the key features of a new social landscape within a few weeks or months, depending on the level of access they have, and the willingness of the group members to trust them.     Good management consultants, provided their sponsors are sufficiently senior, can often achieve an understanding within a few weeks.   Experience helps.
Needless to say, most academic research is pretty useless for these types of questions.  Management theory has either embarked on the reduce-and-quantify-and-replicate model of academic psychology, or else undertaken the narrative descriptions of successful organizations found in most books by business gurus.   Narrative descriptions of failures would be far more useful.
The best training for being able to answer such questions – apart from experience of life – is the study of anthropology or literature:  Anthropology because it explores the social structures of other cultures and the factors within a single lifetime which influence these structures, and Literature because it explores the motivations and consequences of human actions and interactions.   The golden age of television drama we are currently fortunate to be witness to also provides good training for viewers in human motivations, actions and interactions.  It is no coincidence, in my view, that the British Empire was created and run by people mostly trained in Classics, with its twofold combination of the study of alien cultures and literatures, together with the analytical rigor and intellectual discipline acquired through the incremental learning of those difficult subjects, the Latin and Ancient Greek languages.
UPDATE (2011-02-16): From Noam Scheiber’s profile of US Treasury Secretary Timothy Geithner in The New Republic (2011-02-10):

“Tim’s real strength … is that he’s really quick at reading the culture of any institutions,” says Leslie Lipschitz, a former Geithner deputy.

The profile also makes evident Geithner’s agonistic planning approach to policy – seeking to incorporate opposition and minority views into both policy formation processes and the resulting policies.

Distributed cognition

Some excerpts from an ethnographic study of the operations of a Wall Street financial trading firm, bearing on distributed cognition and joint-action planning:

“This emphasis on cooperative interaction underscores that the cognitive tasks of the arbitrage trader are not those of some isolated contemplative, pondering mathematical equations and connected only to a screen-world.  Cognition at International Securities is a distributed cognition.  The formulas of new trading patterns are formulated in association with other traders.  Truly innovative ideas, as one senior trader observed, are slowly developed through successions of discreet one-to-one conversations.
. . .
An idea is given form by trying it out, testing it on others, talking about it with the “math guys,” who, significantly, are not kept apart (as in some other trading rooms),  and discussing its technical intricacies with the programmers (also immediately present).”   (p. 265)
“The trading room thus shows a particular instance of Castells’ paradox:  As more information flows through networked connectivity, the more important become the kinds of interactions grounded in a physical locale. New information technologies, Castells (2000) argues, create the possibility for social interaction without physical contiguity.  The downside is that such interactions can become repetitive and programmed in advance.  Given this change, Castells argues that as distanced, purposeful, machine-like interactions multiply, the value of less-directed, spontaneous, and unexpected interactions that take place in physical contiguity will become greater (see also Thrift 1994; Brown and Duguid 2000; Grabhar 2002).  Thus, for example, as surgical techniques develop together with telecommunications technology, the surgeons who are intervening remotely on patients in distant locations are disproportionately clustering in two or three neighbourhoods of Manhattan where they can socialize with each other and learn about new techniques, etc.” (p. 266)
“One exemplary passage from our field notes finds a senior trader formulating an arbitrageur’s version of Castells’ paradox:
“It’s hard to say what percentage of time people spend on the phone vs. talking to others in the room.   But I can tell you the more electronic the market goes, the more time people spend communicating with others inside the room.”  (p. 267)
Of the four statistical arbitrage robots, a senior trader observed:
“We don’t encourage the four traders in statistical arb to talk to each other.  They sit apart in the room.  The reason is that we have to keep diversity.  We could really get hammered if the different robots would have the same P&L [profit and loss] patterns and the same risk profiles.”  (p. 283)

References:
Daniel Beunza and David Stark [2008]:  Tools of the trade:  the socio-technology of arbitrage in a Wall Street trading room.  In:  Trevor Pinch and Richard Swedberg (Editors):  Living in a Material World:  Economic Sociology Meets Science and Technology Studies. Cambridge, MA, USA: MIT Press.  Chapter 8, pp. 253-290.
M. Castells [1996]:  The Information Age:  Economy, Society and Culture. Blackwell, Second Edition.

Good decisions

Which decisions are good decisions?
Since 1945, mainstream economists have arrogated the word “rational” to describe a mode of decision-making which they consider to be best.   This method, called maximum expected utility (MEU) decision-making, assumes that the decision-maker has only a finite set of possible action-options and that she knows what these are;  that she knows the possible consequences of each of these actions and can quantify (or at least estimate) these consequences, and can do so on a single, common, numerical scale of value (the payoffs);  that she knows a finite and complete collection of the uncertain events which are possible and which may impact the consequences and their values;  and that she knows (or at least can estimate) the probabilities of these uncertain events, again on a common numerical scale of uncertainty.  The MEU decision procedure is then to quantify the consequences of each action-option, weighting them by the relative likelihood of their arising, according to the probabilities of the uncertain events which influence them.
The decision-maker then selects that action-option which has the maximum expected consequential value, ie the consequential value weighted by the probabilities of the uncertain events. Such decision-making, in an abuse of language that cries out for criminal charges, is then called rational by economists.   Bayesian statistician Dennis Lindley even wrote a book about MEU which included the stunningly-arrogant sentence, “The main conclusion [of this book] is that there is essentially only one way to reach a decision sensibly.”

Rational?  This method is not even feasible, let alone sensible or good!
First, where do all these numbers come from?  With the explicit assumptions that I have listed, economists are assuming that the decision-maker has some form of perfect knowledge.  Well, no one making any real-world decisions has that much knowledge.  Of course, economists often respond, estimates can be used when the knowledge is missing.  But whose estimates?   Sourced from where?   Updated when? Anyone with any corporate or public policy experience knows straight away that consensus on such numbers for any half-way important problem will be hard to find.  Worse than that, any consensus achieved should immediately be suspected and interrogated, since it may be evidence of groupthink.    There simply is no certainty about the future, and if a group of people all do agree on what it holds, down to quantified probabilities and payoffs, they deserve the comeuppance they are likely to get!
Second, the MEU principle simply averages across uncertain events.   What of action-options with potentially catastrophic outcomes?   Their small likelihood of occurrence may mean they disappear in the averaging process, but no real-world decision-maker – at least, none with any experience or common sense – would risk a catastrophic outcome, despite its estimated low probability.   Wall Street trading firms have off-street (and often off-city) backup IT systems, and sometimes even entire backup trading floors, ready for those rare events.
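A tiny worked example (with invented numbers) makes the point: the averaging in MEU can rank a potentially ruinous option above a perfectly safe one.

```python
# Each option is a list of (probability, utility) pairs over uncertain events.
def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

risky = [(0.99, 100), (0.01, -4000)]    # expected utility = 99 - 40 = 59
safe  = [(1.00, 40)]                    # expected utility = 40

print(expected_utility(risky), expected_utility(safe))
# MEU prefers 'risky' (59 > 40), even though it courts a catastrophic -4000 outcome.
```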
Third, look at all the assumptions not made explicit in this framework.  There is no mention of the time allowed for the decision, so apparently the decision-maker has infinities of time available.  No mention is made of the processing or memory resources available for making the decision, so apparently she has infinities of these also.   That makes a change from most real-world decisions:  what a pleasant utopia this MEU-land must be.  Nothing is said – at least nothing explicit – about taking into account the historical or other contexts of the decision, such as past decisions by this or related decision-makers, technology standards, legacy systems, organization policies and constraints, legal, regulatory or ethical constraints, or the strategies of the company or the society in which the decision-maker sits.   How could a decision procedure which ignores such issues be considered, even for a moment, rational?   I think only an academic could ignore context in this way; no business person I know would do so, since certain unemployment would be the result.  And how could members of an academic discipline purporting to be a social science accept and disseminate a decision-making framework which ignores such social, contextual features?
And do the selected action-options just execute themselves?  Nothing is said in this framework about consultation with stakeholders during the decision-process, so presumably the decision-maker has no one to report to, no board members or stockholders or division presidents or ward chairmen or electors to manage or inform or liaise with or mollify or reward or appease or seek re-election from, no technical departments to seek feasibility approval from, no implementation staff to motivate or inspire, no regulators or ethicists or corporate counsel to seek legal approval from, no funders or investors to raise finance from, no suppliers to convince to accept orders with, no distribution channels to persuade to schedule throughput with,  no competitors to second-guess or outwit, and no actual, self-immolating protesters outside one’s office window to avert one’s eyes from and feel guilt about for years afterward.*
For many complex decisions, the ultimate success or failure of the decision can depend significantly on the degree to which those having to execute the decision also support it.  Consequently, the choice of a specific action-option (and the logical reasoning process used to select it) may be far less important for success of the decision than that key stakeholders feel that they have been consulted appropriately during the reasoning process.  In other words, the quality of the decision may depend much more on how and with whom the decision-maker reasons than on the particular conclusion she reaches.   Arguably this is true of almost all significant corporate strategy decisions and major public policy decisions:  There is ultimately no point sending your military to prop up an anti-communist regime in South-East Asia, for example, if your own soldiers come to feel they should not be there (as I discuss here, regarding another decision to go to war).
Mainstream economists have a long way to go before they will have a theory of good decision-making.   In the meantime, it would behoove them to show some humility when criticizing the decision-making processes of human beings.**
Notes and Bibliography:
Oskar Lange [1945-46]:  The scope and method of economics.  The Review of Economic Studies, 13 (1): 19-32.
Dennis Lindley [1985]:  Making Decisions.  Second Edition. London, UK: John Wiley and Sons.
L James Savage [1950]: The Foundations of Statistics.  New York, NY, USA:  Wiley.
* I’m sure Robert McNamara, statistician and decision-theory whizz kid, never considered the reactions of self-immolating protesters when making decisions early in his career, but having seen one outside his office window late in his time as Secretary of Defense he seems to have done so subsequently.
** Three-toed sloth comments dialogically and amusingly on MEU theory here.