Archive for the 'Decision theory' Category
Norm Ornstein in The Atlantic on criticisms of Bam that he’s not as good at cajoling and arm-twisting as was LBJ, not as good at shooting-the-breeze as was Clinton, and not as good at hard-ball negotiation as was Reagan. An excerpt:
But there was one downside: the reactivation of one of the most enduring memes and myths about the presidency, and especially the Obama presidency. Like Rasputin (or Whac-A-Mole), it keeps coming back even after it has been bludgeoned and obliterated by facts and logic. I feel compelled to whack this mole once more.
The meme is what Matthew Yglesias, writing in 2006, referred to as “the Green Lantern Theory of Geopolitics,” and has been refined by Greg Sargent and Brendan Nyhan into the Green Lantern Theory of the presidency. In a nutshell, it attributes heroic powers to a president—if only he would use them. And the holders of this theory have turned it into the meme that if only Obama used his power of persuasion, he could have the kind of success that LBJ enjoyed with the Great Society, that Bill Clinton enjoyed in his alliance with Newt Gingrich that gave us welfare reform and fiscal success, that Ronald Reagan had with Dan Rostenkowski and Bill Bradley to get tax reform, and so on.
If only Obama had dealt with Congress the way LBJ did—persuading, cajoling, threatening, and sweet-talking members to attain his goals—his presidency would not be on the ropes and he would be a hero. If only Obama would schmooze with lawmakers the way Bill Clinton did, he would have much greater success. If only Obama would work with Republicans and not try to steamroll them, he could be a hero and have a fiscal deal that would solve the long-term debt problem.
If only the proponents of this theory would step back and look at the realities of all these presidencies (or would read or reread the Richard Neustadt classic, Presidential Power).
I do understand the sentiment here and the frustration over the deep dysfunction that has taken over our politics. It is tempting to believe that a president could overcome the tribalism, polarization, and challenges of the permanent campaign, by doing what other presidents did to overcome their challenges. It is not as if passing legislation and making policy was easy in the old days.
But here is the reality, starting with the Johnson presidency. I do not want to denigrate LBJ or downplay his remarkable accomplishments and the courage he displayed in taking on his own base, Southern Democrats, to enact landmark civil-rights and voting-rights laws that have done more to transform America in a positive way than almost anything else in our lifetimes. And it is a fact that the 89th Congress, that of the Great Society, can make the case for having more sweeping accomplishments, from voting rights to Medicare to elementary and secondary education reform, than any other.
LBJ had a lot to do with the agenda, and the accomplishments. But his drive for civil rights was aided in 1964 by having the momentum following John F. Kennedy’s assassination, and the partnership of Republicans Everett Dirksen and Bill McCulloch, detailed beautifully in new books by Clay Risen and Todd Purdum. And Johnson was aided substantially in 1965-66 by having swollen majorities of his own party in both chambers of Congress—68 of 100 senators, and 295 House members, more than 2-to-1 margins. While Johnson needed, and got, substantial Republican support on civil rights and voting rights to overcome Southern Democrats’ opposition, he did not get a lot of Republicans supporting the rest of his domestic agenda. He had enough Democrats supporting those policies to ensure passage, and he got enough GOP votes on final passage of key bills to ensure the legitimacy of the actions.
Johnson deserves credit for horse-trading (for example, finding concessions to give to Democrat Wilbur Mills, chairman of the House Ways and Means Committee, to get his support for Medicare), but it was the numbers that made the difference. Consider what happened in the next two years, after the 1966 midterm elections depleted Democratic ranks and enlarged Republican ones. LBJ was still the great master of Congress—but without the votes, the record was anything but robust. All the cajoling and persuading and horse-trading in the world did not matter.
Now briefly consider other presidents. Ronald Reagan was a master negotiator, and he has the distinction of having two major pieces of legislation, tax reform and immigration reform, enacted in his second term, without the overwhelming numbers that Johnson enjoyed in 1965-66. What Reagan did have, just like Johnson had on civil rights, was active and eager partners from the other party. The drive for tax reform did not start with Reagan, but with Democrats Bill Bradley and Dick Gephardt, whose reform bill became the template for the law that ultimately passed. They, and Ways and Means Chairman Dan Rostenkowski, were delighted to make their mark in history (and for Bradley and Gephardt, to advance their presidential ambitions) by working with the lame-duck Republican president. The same desire to craft transformative policy was there for both Alan Simpson and Ron Mazzoli, a Senate Republican and a House Democrat, who put together immigration legislation with limited involvement by the White House.
As for Bill Clinton, he was as politically adept as any president in modern times, and as charismatic and compelling as anyone. But the reality is that these great talents did not convince a single Republican to support his economic plan in 1993, nor enough Democrats to pass the plan for a crucial seven-plus months; did not stop the Republicans under Speaker Newt Gingrich from shutting down the government twice; and did not stop the House toward the end of his presidency from impeaching him on shaky grounds, with no chance of conviction in the Senate. The brief windows of close cooperation in 1996, after Gingrich’s humiliation following the second shutdown, were opened for pragmatic, tactical reasons by Republicans eager to win a second consecutive term in the majority, and ended shortly after they had accomplished that goal.
When Obama had the numbers, not as robust as LBJ’s but robust enough, he had a terrific record of legislative accomplishments. The 111th Congress ranks just below the 89th in terms of significant and far-reaching enactments, from the components of the economic stimulus plan to the health care bill to Dodd/Frank and credit-card reform. But all were done with either no or minimal Republican support. LBJ and Reagan had willing partners from the opposite party; Obama has had none. Nothing that he could have done would have changed the clear, deliberate policy of Republicans uniting to oppose and obstruct his agenda, that altered long-standing Senate norms to use the filibuster in ways it had never been employed before, including in the LBJ, Reagan, and Clinton eras, that drew sharp lines of total opposition on policies like health reform and raising taxes as part of a broad budget deal.
Could Obama have done more to bond with lawmakers? Sure, especially with members of his own party, which would help more now, when he is in the throes of second-term blues, than it would have when he achieved remarkable party unity in his first two years. But the brutal reality, in today’s politics, is that LBJ, if he were here now, could not be the LBJ of the Great Society years in this environment. Nobody can, and to demand otherwise is both futile and foolish.
Different knowledge disciplines mean different things by the verb “to understand”. For economists and physicists, a domain or a problem is not understood unless and until it is modeled, and often only by a particular type of model. For most economists, for instance, agent-based models do not provide understanding, because they only show sufficient and not necessary conclusions. For mechanical engineers, understanding usually only comes from a physical prototype. For computer programmers, understanding happens through and with the writing of a software programme for the problem. For legal scholars, it arises with and from the writing of a narrative text reflecting on the problem and its issues.
Here is economist and game theorist Ariel Rubinstein on models in economics:
“Mere purposive rationality unaided by such phenomena as art, religion, dream and the like, is necessarily pathogenic and destructive of life; and . . . its virulence springs specifically from the circumstance that life depends upon interlocking circuits of contingency, while consciousness can see only such short arcs of such circuits as human purpose may direct.”
Gregory Bateson: “Style, Grace and Information in Primitive Art.” Page 146 in: Steps to an Ecology of Mind. New York, NY: Ballantine Books.
George Santayana said something similar in his Sonnet III:
It is not wisdom to be only wise,
And on the inward vision close the eyes,
But it is wisdom to believe the heart.
A recurring theme here has been the complexity of most important real-world decision-making, contrary to the models used in much of economics and computer science. Some relevant posts are here, here, and here. On this topic, I recently came across a wonderful cartoon by Michael Leunig, entitled “The Way” (click on the image to enlarge it):
I am very grateful to Michael Leunig for permission to reproduce this cartoon here.
Most people, if they think about the topic at all, probably imagine computer science involves the programming of computers. But what are computers? In most cases, these are just machines of one form or another. And what is programming? Well, it is the issuing of instructions (“commands” in the programming jargon) for the machine to do something or other, or to achieve some state or other. Thus, I view Computer Science as nothing more or less than the science of delegation.
When delegating a task to another person, we are likely to be more effective (as the delegator or commander) the more we know about the skills and capabilities and current commitments and attitudes of that person (the delegatee or commandee). So too with delegating to machines. Accordingly, a large part of theoretical computer science is concerned with exploring the properties of machines, or rather, the deductive properties of mathematical models of machines. Other parts of the discipline concern the properties of languages for commanding machines, including their meaning (their semantics) – this is programming language theory. Because the vast majority of lines of program code nowadays are written by teams of programmers, not individuals, much of computer science – part of the branch known as software engineering – is concerned with how best to organize, manage, and evaluate the work of teams of people. Because most machines are controlled by humans and act in concert for or with or to humans, another, related branch of this science of delegation deals with the study of human-machine interactions. In both these branches, computer science reveals itself to have a side which connects directly with the human and social sciences, something not true of the other sciences often grouped with Computer Science: pure mathematics, physics, or chemistry.
And from its modern beginnings 70 years ago, computer science has been concerned with trying to automate whatever can be automated – in other words, with delegating the task of delegating. This is the branch known as Artificial Intelligence. We have intelligent machines which can command other machines, and manage and control them in the same way that humans could. But not all bilateral relationships between machines are those of commander-and-subordinate. More often, in distributed networks machines are peers of one another, intelligent and autonomous (to varying degrees). Thus, commanding is useless – persuasion is what is needed for one intelligent machine to ensure that another machine does what the first desires. And so, as one would expect in a science of delegation, computational argumentation arises as an important area of study.
Over the last 40-odd years, a branch of Artificial Intelligence called AI Planning has developed. One way to view Planning is as automated computer programming:
- Write a program that takes as input an initial state, a final state (“a goal”), and a collection of possible atomic actions, and produces as output another computer programme comprising a combination of the actions (“a plan”) guaranteed to take us from the initial state to the final state.
A prototypical example is robot motion: Given an initial position (e.g., here), a means of locomotion (e.g., the robot can walk), and a desired end-position (e.g., over there), AI Planning seeks to empower the robot to develop a plan to walk from here to over there. If some or all the actions are non-deterministic, or if there are other possibly intervening effects in the world, then the “guaranteed” modality may be replaced by a “likely” modality.
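That input–output contract can be made concrete with a toy sketch. Assuming a one-dimensional world of positions 0 to 5 and two atomic actions (the state space and action names here are invented for illustration, not any standard planner's API), a breadth-first search over states is perhaps the simplest way to produce such a plan:

```python
from collections import deque

def plan(initial, goal, actions):
    """Breadth-first search over states: return a shortest sequence of
    action names taking `initial` to `goal`, or None if no plan exists."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for name, act in actions:
            nxt = act(state)  # each action maps a state to a new state, or None
            if nxt is not None and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, steps + [name]))
    return None

# A robot on a line of positions 0..5, with two atomic actions.
actions = [
    ("step-right", lambda p: p + 1 if p < 5 else None),
    ("step-left",  lambda p: p - 1 if p > 0 else None),
]

print(plan(0, 3, actions))  # a plan from "here" (0) to "over there" (3)
```

Real planners use far richer action representations and heuristics, but the contract is the same: an initial state, a goal, and atomic actions in; a combination of actions out.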
Another way to view Planning is in contrast to Scheduling:
- Scheduling is the orderly arrangement of a collection of tasks guaranteed to achieve some goal from some initial state, when we know in advance the initial state, the goal state, and the tasks.
- Planning is the identification and orderly arrangement of tasks guaranteed to achieve some goal from some initial state, when we know in advance the initial state and the goal state, but we don’t yet know the tasks; we only know in advance the atomic actions from which tasks may be constructed.
Relating these ideas to my business experience, I realized that a large swathe of complex planning activities in large companies involves something at a higher level of abstraction. Henry Mintzberg called these activities “Strategic Programming”:
- Strategic Programming is the identification and prioritization of a finite collection of programs or plans, given an initial state and a set of desirable end-states or objectives (possibly conflicting). A program comprises an ordered collection of tasks, and these tasks and their ordering we may or may not know in advance.
Examples abound in complex business domains. You wake up one morning to find yourself the owner of a national mobile telecommunications licence, and with funds to launch a network. You have to buy the necessary equipment and deploy and connect it, in order to provide your new mobile network. Your first decision is where to provide coverage: you could aim to provide nationwide coverage, and not open your service to the public until the network has been installed and connected nationwide. This is the strategy Orange adopted when launching PCS services in mainland Britain in 1994. One downside of waiting till you’ve covered the nation before selling any service to customers is that revenues are delayed.
Another downside is that a competitor may launch service before you, and that happened to Orange: Mercury One2One (as it then was) offered service to the public in 1993, when they had only covered the area around London. The upside of that strategy for One2One was early revenues. The downside was that customers could not use their phones outside the island of coverage, essentially inside the M25 ring-road. For some customer segments, wide-area or nationwide coverage may not be very important, so an early launch may be appropriate if those customer segments are being targeted. But an early launch won’t help customers who need wider-area coverage, and – unless marketing communications are handled carefully – the early launch may position the network operator in the minds of such customers as permanently providing inadequate service. The expectations of both current target customers and customers who are not currently targets need to be explicitly managed to avoid such mis-perceptions.
In this example, the different coverage rollout strategies ended up at the same place eventually, with both networks providing nationwide coverage. But the two operators took different paths to that same end-state. How to identify, compare, prioritize, and select between these different paths is the very stuff of marketing and business strategy, i.e., of strategic programming. It is why business decision-making is often very complex and often intellectually very demanding. Let no one say (as academics are wont to do) that decision-making in business is a doddle. Everything is always more complicated than it looks from outside, and identifying and choosing between alternative programs is among the most complex of decision-making activities.
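One common way such paths are compared in practice is by discounting each strategy's projected cash flows and comparing net present values. A minimal sketch, with entirely hypothetical cash-flow numbers standing in for the two rollout strategies (late nationwide launch versus early partial-coverage launch):

```python
def npv(cashflows, rate=0.10):
    """Net present value of yearly cash flows (year 0 undiscounted)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical 5-year cash flows (units arbitrary):
# Strategy A: wait for nationwide coverage, so revenues start late.
# Strategy B: launch early on partial coverage, earning smaller early revenues.
strategy_a = [-100, -50, 80, 120, 120]
strategy_b = [-100, 20, 60, 100, 120]

print("A:", round(npv(strategy_a), 1))
print("B:", round(npv(strategy_b), 1))
```

A single NPV comparison is of course a drastic simplification: it ignores exactly the positioning and expectation-management effects discussed above, which is part of why strategic programming remains a judgment-laden activity rather than an arithmetic one.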
This week’s leadership-challenge-that-wasn’t in the Federal Parliamentary Caucus of the Australian Labor Party saw the likely end of Kevin Rudd’s political career. At the last moment he bottled it, having calculated that he did not have the numbers to win a vote of his caucus colleagues and so deciding not to stand. Ms Gillard was re-elected leader of the FPLP unopposed. Why Rudd failed to win caucus support is explained clearly in subsequent commentary by one of his former speech-writers, James Button:
The trick to government, Paul Keating once said, is to pick three big things and do them well. But Rudd opened a hundred policy fronts, and focused on very few of them. He centralised decision-making in his office yet could not make difficult decisions. He called climate change the greatest moral challenge of our time, then walked away from introducing an emissions trading scheme. He set a template for governing that Labor must move beyond.
On Thursday, for the third time in three years, a large majority of Rudd’s caucus colleagues made it clear that they did not want him as leader. Yet for years Rudd seemed as if he would never be content until he returned as leader. On Friday he said that he would never again seek the leadership of the party. He must keep his word, or else the impasse will destabilise and derail the party until he leaves Parliament.
Since losing the prime ministership, Rudd never understood that for his prospects to change within the government he had to openly acknowledge, at least in part, that there were sensible reasons why Gillard and her supporters toppled him in 2010. Then, as hard as it would have been, he had to get behind Gillard, just as Bill Hayden put aside his great bitterness and got behind Bob Hawke and joined his ministry after losing the Labor leadership to him in 1983.
Yes, Rudd’s execution was murky and brutal and should have been done differently, perhaps with a delegation of senior ministers going to Rudd first to say change or go. Yes, the consequences have been catastrophic for Gillard and for the ALP. “Blood will have blood,” as Dennis Glover, a former Gillard speechwriter who also wrote speeches for Rudd, said in a newspaper on Thursday.
But why did it happen? Why did so many Labor MPs resolve to vote against Rudd that he didn’t dare stand? Why was he thrashed in his 2012 challenge? Why have his numbers not significantly moved, despite all the government’s woes?
Because – it must be said again – Rudd was a poor prime minister. To his credit, he led the government’s brave and decisive response to the global financial crisis. His apology speech changed Australia and will be remembered for years to come. But beyond that he has few achievements, and the way he governed brought him down.
At the time of his 2012 challenge, seven ministers went public with fierce criticisms of Rudd’s governing style. When most of them made it clear they would not serve again in a Rudd cabinet, many commentators wrote this up as slander and character assassination of Rudd, or as one of those vicious but mysterious internal brawls that afflict the Labor Party from time to time. They missed the essential points: that the criticisms came from a diverse and representative set of ministers, and they had substance.
If the word of these seven ministers is not enough, consider the reporting of Rudd’s treatment of colleagues by Fairfax journalist David Marr in his 2010 Quarterly Essay, Power Trip. Or the words of Glover, who wrote last year that as a “member of the Gang of Four Hundred or So (advisers and speechwriters) I can assure you that the chaos and frustration described by Gillard supporters during February’s failed leadership challenge rang very, very true with about 375 of us.”
Consider the reporting of Rudd’s downfall by ABC journalist Barrie Cassidy in his book, Party Thieves. Never had numbers tumbled so quickly, Cassidy wrote. “That’s because Rudd himself drove them. His own behaviour had caused deep-seated resentment to take root.” Leaders had survived slumps before and would again. But “Rudd was treated differently because he was different: autocratic, exclusive, disrespectful and at times flat-out abusive”. Former Labor minister Barry Cohen told Cassidy: “If Rudd was a better bloke he would still be leader. But he pissed everybody off.”
These accounts tallied with my own observations when I worked as a speechwriter for Rudd in 2009. While my own experience of Rudd was both poor and brief, I worked with many people – 40 or more – who worked closely with him. Their accounts were always the same. While Rudd was charming to the outside world, behind closed doors he treated people with rudeness and contempt. At first I kept waiting for my colleagues to give me another side of Rudd: that he could be difficult but was at heart a good bloke. Yet apart from some conversations in which people praised his handling of the global financial crisis, no one ever did.
Since he lost power, is there any sign that Rudd has reflected on his time in office, accepted that he made mistakes, that he held deep and unaccountable grudges and treated people terribly?
Did he reflect on the rages he would fly into when people gave him advice he didn’t want, how he would put those people into what his staff called “the freezer”, sometimes not speaking to them for months or more? Did he reflect on the way he governed in a near permanent state of crisis, how his reluctance to make decisions until the very last moment coupled with a refusal to take unwelcome advice led his government into chaos by the middle of 2010, when his obsessive focus on his health reforms left the government utterly unprepared to deal with the challenges of the emissions trading scheme, the budget, the Henry tax review and the mining tax? To date there is no sign that he has learnt from the failures of his time as prime minister.
Through his wife, Rudd is currently the richest member of the Australian Commonwealth Parliament, and perhaps the richest person ever to be an MP. He is also fluent in Mandarin Chinese and famously intelligent, although perhaps not as bright as his predecessors as Labor leader, Gough Whitlam or Doc Evatt, or former ministers, Isaac Isaacs, Ted Theodore or Barry Jones. It is possible, of course, to have a first-rate mind and a second-rate temperament. An autocratic management style – unpopular within the Labor Party at any time, as Evatt and Whitlam both learnt – is even less appropriate when the Party lacks a majority in the House, and has to rely on a permanent, floating two-up game of ad hoc negotiations with Green and Independent MPs to pass legislation.
How might two actions be combined? Well, depending on the actions, we may be able to do one action and then the other, or we may be able do the other and then the one, or maybe not. We call such a combination a sequence or concatenation of the two actions. In some cases, we may be able to do the two actions in parallel, both at the same time. We may have to start them simultaneously, or we may be able to start one before the other. Or, we may have to ensure they finish together, or that they jointly meet some other intermediate synchronization targets.
In some cases, we may be able to interleave them, doing part of one action, then part of the second, then part of the first again, what management consultants in telecommunications call multiplexing. For many human physical activities – such as learning to play the piano or learning to play golf – interleaving is how parallel activities are first learnt and complex motor skills acquired: first play a few bars of music on the piano with only the left hand, then the same bars with only the right, and keep practicing the hands on their own, and only after the two hands are each practiced individually do we try playing the piano with the two hands together.
Computer science, which I view as the science of delegation, knows a great deal about how actions may be combined, how they may be distributed across multiple actors, and what the meanings and consequences of these different combinations are likely to be. It is useful to have a list of the possibilities. Let us suppose we have two actions, represented by A and B respectively. Then we may be able to do the following compound actions:
- Sequence: The execution of A followed by the execution of B, denoted A ; B
- Iterate: A executed n times, denoted A ^ n (This is sequential execution of a single action.)
- Parallelize: Both A and B are executed together, denoted A & B
- Interleave: Action A is partly executed, followed by part-execution of B, followed by continued part-execution of A, etc, denoted A || B
- Choose: Either A is executed or B is executed but not both, denoted A v B
- Combinations of the above: For example, with interleaving, only one action is ever being executed at one time. But it may be that the part-executions of A and B can overlap, so we have a combination of Parallel and Interleaved compositions of A and B.
Depending on the nature of the domain and the nature of the actions, not all of these compound actions may necessarily be possible. For instance, if action B has some pre-conditions before it can be executed, then the prior execution of A has to successfully achieve these pre-conditions in order for the sequence A ; B to be feasible.
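These compositions, and the precondition constraint on sequencing, can be sketched in code. Here actions are modelled in a simple STRIPS-like style as (preconditions, add-effects, delete-effects) over a set of facts; the kettle example and all the fact names are invented for illustration:

```python
# An action is a triple (preconditions, add-effects, delete-effects),
# each a frozenset of facts. All fact names here are hypothetical.
def applicable(state, action):
    pre, _add, _delete = action
    return pre <= state  # every precondition must hold in the current state

def apply_action(state, action):
    _pre, add, delete = action
    return (state - delete) | add

def sequence(state, *acts):
    """A ; B ; ... — returns the resulting state, or None if some
    action's preconditions are unmet when its turn comes."""
    for a in acts:
        if not applicable(state, a):
            return None
        state = apply_action(state, a)
    return state

fill = (frozenset(), frozenset({"kettle-filled"}), frozenset())
boil = (frozenset({"kettle-filled"}), frozenset({"water-boiled"}), frozenset())

print(sequence(frozenset(), fill, boil))  # feasible: fill ; boil
print(sequence(frozenset(), boil, fill))  # infeasible: boil's precondition fails
```

Choice (A v B) and iteration (A ^ n) can be written as similar combinators; genuinely parallel and interleaved compositions need a richer model that tracks partial execution, which is exactly where process calculi and concurrency theory come in.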
This stuff may seem very nitty-gritty, but anyone who’s ever asked a teenager to do some task they don’t wish to do, will know all the variations in which a required task can be done after, or alongside, or intermittently with, or be replaced instead by, some other task the teen would prefer to do. Machines, it turns out, are much like recalcitrant and literal-minded teenagers when it comes to commanding them to do stuff.
The standard or classical model in decision theory is called Maximum Expected Utility (MEU) theory, which I have excoriated here and here (and which Cosma Shalizi satirized here). Its flaws and weaknesses for real decision-making have been pointed out by critics since its inception, six decades ago. Despite this, the theory is still taught in economics classes and MBA programs as a normative model of decision-making.
A key feature of MEU is that the decision-maker is required to identify ALL possible action options, and ALL consequential states of these options. He or she then reasons ACROSS these consequences by adding together the utilities of the consequential states, weighted by the likelihood that each state will occur.
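As a worked toy example (the actions, probabilities, and utilities below are all hypothetical), the MEU prescription amounts to no more than this:

```python
def expected_utility(outcomes):
    """outcomes: (probability, utility) pairs, one per consequential state."""
    return sum(p * u for p, u in outcomes)

# Every action and every consequential state must be enumerated up front.
actions = {
    "launch": [(0.3, 100), (0.7, -20)],  # hypothetical numbers
    "wait":   [(1.0, 5)],
}
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))
```

The arithmetic is trivial; the heroic assumption, as the rest of this post argues, is the enumeration in the `actions` dictionary itself.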
However, financial and business planners do something completely contrary to this in everyday financial and business modeling. In developing a financial model for a major business decision or for a new venture, the collection of possible actions is usually infinite and the space of possible consequential states even more so. Making human sense of the possible actions and the resulting consequential states is usually a key reason for undertaking the financial modeling activity, and so cannot be an input to the modeling. Because of the explosion in the number of states and in their internal complexity, business planners cannot articulate all the actions and all the states, nor even usually a subset of these beyond a mere handful.
Therefore, planners typically choose to model just 3 or 4 states – usually called cases or scenarios – with each of these combining a complex mix of (a) assumed actions, (b) assumed stakeholder responses and (c) environmental events and parameters. The assumptions and parameter values are instantiated for each case, the model run, and the outputs of the 3 or 4 cases compared with one another. The process is usually repeated with different (but close) assumptions and parameter values, to gain a sense of the sensitivity of the model outputs to those assumptions.
Often the scenarios will be labeled “Best Case”, “Worst Case”, “Base Case”, etc to identify the broad underlying principles that are used to make the relevant assumptions in each case. Actually adopting a financial model for (say) a new venture means assuming that one of these cases is close enough to current reality and its likely future development in the domain under study – i.e., that one case is realistic. People in the finance world call this adoption of one case “taking a view” on the future.
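A sketch of this working practice, with a deliberately toy revenue model and invented parameter values for the three cases:

```python
def venture_cashflow(subscribers, arpu, churn, years=3):
    """Toy revenue model: monthly ARPU times a churning subscriber base.
    All parameter names and values are illustrative, not a real model."""
    total, base = 0.0, float(subscribers)
    for _ in range(years):
        total += base * arpu * 12
        base *= (1 - churn)  # subscribers lost each year
    return total

# Three cases, each bundling a complex mix of assumptions into a few numbers:
cases = {
    "Base Case":  dict(subscribers=100_000, arpu=30.0, churn=0.20),
    "Best Case":  dict(subscribers=150_000, arpu=35.0, churn=0.10),
    "Worst Case": dict(subscribers=60_000,  arpu=25.0, churn=0.35),
}
for name, assumptions in cases.items():
    print(name, round(venture_cashflow(**assumptions)))
```

Note what is absent: no enumeration of all possible states, and no probability weights across them. The planner runs a handful of instantiated cases, perturbs the assumptions to test sensitivity, and then takes a view.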
Taking a view involves assuming (at least pro tem) that one trajectory (or one class of trajectories) describes the evolution of the states of some system. Such betting on the future is the complete opposite cognitive behaviour to reasoning over all the possible states before choosing an action, which the proponents of the MEU model insist we all do. Yet the MEU model continues to be taught as a normative model for decision-making to MBA students who will spend their post-graduation life doing business planning by taking a view.