Archive for the 'Computing-as-interaction' Category

cod-Bourbakism

Economic historians Philip Mirowski and Edward Nik-Khah have published a new book on the role of information in post-war economics.    The introductory chapter contains a nice, high-level summary of the failures of the standard model of decision-making in mainstream microeconomics, Maximum Expected Utility (MEU) theory, or so-called rational choice theory.  Because the MEU model continues to dominate academic economics despite working neither in practice nor in theory, I have written about it often before, for example here, here and here.  Listen to Mirowski and Nik-Khah:
Given the massive literature on so-called rationality in the social sciences, it gives one pause to observe what a dark palimpsest the annals of rational choice has become. The modern economist, who avoids philosophy and psychology as the couch potato avoids the gym, has almost no appreciation for the rich archive of paradoxes of rationality. This has come to pass primarily by insisting upon a distinctly peculiar template as the necessary starting point of all discussion, at least from the 1950s onwards. Neoclassical economists frequently characterize their schema as comprising three components: (a) a consistent well-behaved preference ordering reflecting the mindset of some individual; (b) the axiomatic method employed to describe mental manipulations of (a) as comprising the definition of “rational choice”; and (c) reduction of all social phenomena to be attributed to the activities of individual agents applying (b) to (a). These three components may be referred to in shorthand as: “utility” functions, formal axiomatic definitions (including maximization provisions and consistency restrictions), and some species of methodological individualism.
The immediate response is to marvel at how anyone could have confused this extraordinary contraption with the lush forest of human rationality, however loosely defined. Start with component (a). The preexistence of an inviolate preference order rules out of bounds most phenomena of learning, as well as the simplest and most commonplace of human experiences—that feeling of changing one’s mind. The obstacles that this doctrine pose for problems of the treatment of information turns out to be central to our historical account. People have been frequently known to make personally “inconsistent” evaluations of events both observed and unobserved; yet in rational choice theory, committing such a solecism is the only real mortal sin—one that gets you harshly punished at minimum and summarily drummed out of the realm of the rational in the final analysis. Now, let’s contemplate component (b). That dogma insists the best way to enshrine rationality is by mimicking a formal axiomatic system—as if that were some sterling bulwark against human frailty and oblique hidden flaws of hubris. One would have thought Gödel’s Theorem might have chilled the enthusiasm for this format, but curiously, the opposite happened instead. Every rational man within this tradition is therefore presupposed to conform to his own impregnable axiom system—something that comes pre-loaded, like Microsoft on a laptop. This cod-Bourbakism ruled out many further phenomena that one might otherwise innocently call “rational”: an experimental or pragmatic stance toward the world; a life where one understands prudence as behaving different ways (meaning different “rationalities”) in different contexts; a self-conception predicated on the possibility that much personal knowledge is embodied, tacit, inarticulate, and heavily emotion driven.  
Furthermore, it strangely banishes many computational approaches to cognition: for instance, it simply elides the fact that much algorithmic inference can be shown to be noncomputable in practice; or a somewhat less daunting proposition, that it is intractable in terms of the time and resources required to carry it out. The “information revolution” in economics primarily consisted of the development of Rube Goldberg–type contraptions to nominally get around these implications. Finally, contemplate component (c): complaints about methodological individualism are so drearily commonplace in history that it would be tedious to reproduce them here. Suffice it to say that (c) simply denies the very existence of social cognition in its many manifestations as deserving of the honorific “rational.”
There is nothing new about any of these observations. Veblen’s famous quote summed them up more than a century ago: “The hedonistic conception of man is that of a lightning calculator of pleasures and pains, who oscillates like a homogeneous globule of desire of happiness under the impulse of stimuli that shift him about the area, but leave him intact.”  The roster of latter-day dissenters is equally illustrious, from Herbert Simon to Amartya Sen to Gerd Gigerenzer, if none perhaps is quite up to his snuff in stylish prose or withering skepticism. It is commonplace to note just how ineffectual their dissent has been in changing modern economic practice.
Why anyone would come to mistake this virtual system of billiard balls careening across the baize as capturing the white-hot conviction of rationality in human life is a question worthy of a few years of hard work by competent intellectual historians; but that does not seem to be what we have been bequeathed. In its place sits the work of (mostly) historians of economics and a few historians of science treating these three components of rationality as if they were more or less patently obvious, while scouring over fine points of dispute concerning the formalisms involved, and in particular, an inordinate fascination for rival treatments of probability theory within that framework. We get histories of ordinal versus cardinal utility, game theory, “behavioral” peccadillos, preferences versus “capacities,” social choice theory, experimental interventions, causal versus evidential decision theory, formalized management theory, and so forth, all situated within a larger framework of the inexorable rise of neoclassical economics. Historians treat components (a–c) as if they were the obvious touchstone of any further research, the alpha and omega of what it means to be “rational.” Everything that comes after this is just a working out of details or a cleaning up of minor glitches. If and when this “rational choice” complex is observed taking root within political science, sociology, biology, or some precincts of psychology, it is often treated as though it had “migrated” intact from the economists’ citadel. If that option is declined, then instead it is intimated that “science” and the “mathematical tools” made the figures in question revert to certain stereotypic caricatures of rationality. [Mirowski and Nik-Khah 2017, locations 318-379 of the Kindle edition].

Reference:

Philip Mirowski and Edward Nik-Khah [2017]: The Knowledge We Have Lost in Information: The History of Information in Modern Economics. Oxford, UK: Oxford University Press.




Life in Crypto Valley

Lake Zug this morning from Zug Harbour, Zug, Switzerland.




London Life: Ethereum DevCon1


Gibson Hall, London, venue for DevCon1, 9-13 November 2015.  There was some irony in holding a conference to discuss technology developments in blockchains and distributed ledgers in a grand, neo-classical heritage-listed building erected in 1865. At least it was fitting that a technology currently taking the financial world by storm should be debated in what was designed to be a banking hall (for Westminster Bank). The audience was split fairly evenly between dreadlocked libertarians & cryptocurrency enthusiasts and bankers & lawyers in smart suits: cyberpunk meets Gordon Gekko.




Hackathon life


At the first Internet of Things and Distributed Ledgers Hackathon, Barclays Rise Hackspace, Notting Hill, London, 7 November 2015.




The science of delegation

Most people, if they think about the topic at all, probably imagine computer science involves the programming of computers.  But what are computers?  In most cases, these are just machines of one form or another.  And what is programming?  Well, it is the issuing of instructions (“commands” in the programming jargon) for the machine to do something or other, or to achieve some state or other.   Thus, I view Computer Science as nothing more or less than the science of delegation.

When delegating a task to another person, we are likely to be more effective (as the delegator or commander) the more we know about the skills, capabilities, current commitments and attitudes of that person (the delegatee or commandee).   So too with delegating to machines.   Accordingly, a large part of theoretical computer science is concerned with exploring the properties of machines, or rather, the deductive properties of mathematical models of machines.  Other parts of the discipline concern the properties of languages for commanding machines, including their meaning (their semantics) – this is programming language theory.  Because the vast majority of lines of program code nowadays are written by teams of programmers, not individuals, much of computer science – part of the branch known as software engineering – is concerned with how best to organize, manage and evaluate the work of teams of people.   Because most machines are controlled by humans and act in concert for or with or to humans, another, related branch of this science of delegation deals with the study of human-machine interactions.   In both these branches, computer science reveals itself to have a side which connects directly with the human and social sciences, something not true of the other sciences often grouped with computer science: pure mathematics, physics, or chemistry.

And from its modern beginnings 70 years ago, computer science has been concerned with trying to automate whatever can be automated – in other words, with delegating the task of delegating.  This is the branch known as Artificial Intelligence.   We have intelligent machines which can command other machines, and manage and control them in the same way that humans could.   But not all bilateral relationships between machines are those of commander-and-subordinate.  More often, in distributed networks machines are peers of one another, intelligent and autonomous (to varying degrees).  Thus, commanding is useless – persuasion is what is needed for one intelligent machine to ensure that another machine does what the first desires.  And so, as one would expect in a science of delegation, computational argumentation arises as an important area of study.

 




When are agent models or systems appropriate?

In July 2005, inspired by a talk on formation flying by unmanned aircraft by Sandor Veres at the Liverpool Agents in Space Symposium, I wrote down some rules of thumb I have been using informally for determining whether an agent-based modeling (ABM) approach is appropriate for a particular application domain.  Appropriateness is assessed by answering the following questions:

1. Are there multiple entities in the domain, or can the domain be represented as if there are?
2. Do the entities have access to potentially different information sources or do they have potentially different beliefs? For example, differences may be due to geographic, temporal, legal, resource or conceptual constraints on the information available to the entities.
3. Do the entities have potentially different goals or objectives? This will typically be the case if the entities are owned or instructed by different people or organizations.
4. Do the entities have potentially different preferences (or utilities) over their goals or objectives?
5. Are the relationships between the entities likely to change over time?
6. Does a system representing the domain have multiple threads of control?

If the answers are YES to Question 1 and also YES to any other question, then an agent-based approach is appropriate. If the answer to Question 1 is NO, or if the answers are YES to Question 1 but NO to all other questions, then a traditional object-based approach is more appropriate.
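These rules of thumb can be expressed directly in code. Here is a minimal sketch; the function name and the encoding of answers as a mapping are my own, purely for illustration:

```python
def agent_based_appropriate(answers):
    """Apply the rules of thumb above.

    `answers` maps question numbers 1-6 to True (YES) or False (NO).
    An agent-based approach is indicated only when Question 1 is
    answered YES together with at least one of Questions 2-6;
    otherwise a traditional object-based approach is more appropriate.
    """
    if not answers[1]:
        return False  # no multiple entities, so no agents needed
    return any(answers[q] for q in range(2, 7))


# Example: multiple entities (Q1) with potentially different goals (Q3)
print(agent_based_appropriate(
    {1: True, 2: False, 3: True, 4: False, 5: False, 6: False}))  # True
```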

Traditional object-oriented systems involve static relationships between non-autonomous entities sharing the same beliefs, preferences and goals, all within a system having a single thread of control.




What use are models?

What are models for?   Most developers and users of models, in my experience, seem to assume the answer to this question is obvious, and thus never raise it.   In fact, modeling has many potential purposes, and some of these conflict with one another.   Some of the criticisms made of particular models arise from misunderstandings or misperceptions of the purposes of those models, and of the modeling activities which led to them.

Liking cladistics as I do, I thought it useful to list all the potential purposes of models and modeling.   The only discussion of this topic that I know is a brief one by game theorist Ariel Rubinstein, in an appendix to a book on modeling rational behaviour (Rubinstein 1998).  Rubinstein considers several alternative purposes for economic modeling, but ignores many others.   My list is as follows (to be expanded and annotated in due course):

  • 1. To better understand some real phenomena or existing system.   This is perhaps the most commonly perceived purpose of modeling, in the sciences and the social sciences.
  • 2. To predict (some properties of) some real phenomena or existing system.  A model aiming to predict some domain may be successful without aiding our understanding of the domain at all.  Isaac Newton’s model of the motion of planets, for example, was predictive but not explanatory.   I understand that physicist David Deutsch argues that predictive ability is not an end of scientific modeling but a means, since it is how we assess and compare alternative models of the same phenomena.    This is wrong on both counts:  prediction IS an end of much modeling activity (especially in business strategy and public policy domains), and it is not the only means we use to assess models.  Indeed, for many modeling activities, calibration and prediction are problematic, and so predictive capability may not even be possible as a form of model assessment.
  • 3. To manage or control (some properties of) some real phenomena or existing system.
  • 4. To better understand a model of some real phenomena or existing system.  Arguably, most of economic theorizing and modeling falls into this category, and Rubinstein’s preferred purpose is of this type.   Macro-economic models, if they are calibrated at all, are calibrated against artificial, human-defined variables such as employment, GDP and inflation, variables which may themselves bear a tenuous and dynamic relationship to any underlying economic reality.   Micro-economic models, if they are calibrated at all, are often calibrated with stylized facts, abstractions and simplifications of reality which economists have come to regard as representative of the domain in question.    In other words, economic models are not usually calibrated against reality directly, but against other models of reality.  Similarly, large parts of contemporary mathematical physics (such as string theory and brane theory) have no access to any physical phenomena other than via the mathematical model itself:  our only means of apprehension of vibrating strings in inaccessible dimensions beyond the four we live in, for instance, is through the mathematics of string theory.    In this light, it seems nonsense to talk about the effectiveness, reasonable or otherwise, of mathematics in modeling reality, since how could we tell?
  • 5. To predict (some properties of) a model of some real phenomena or existing system.
  • 6. To better understand, predict or manage some intended (not-yet-existing) artificial system, so to guide its design and development.   Understanding a system that does  not yet exist is qualitatively different to understanding an existing domain or system, because the possibility of calibration is often absent and because the model may act to define the limits and possibilities of subsequent design actions on the artificial system.  The use of speech act theory (a model of natural human language) for the design of artificial machine-to-machine languages, or the use of economic game theory (a mathematical model of a stylized conceptual model of particular micro-economic realities) for the design of online auction sites are examples here.   The modeling activity can even be performative, helping to create the reality it may purport to describe, as in the case of the Black-Scholes model of options pricing.
  • 7. To provide a locus for discussion between relevant stakeholders in some business or public policy domain.  Most large-scale business planning models have this purpose within companies, particularly when multiple partners are involved.  Likewise, models of major public policy issues, such as epidemics, have this function.  In many complex domains, such as those in public health, models provide a means to tame and domesticate the complexity of the domain.  This helps stakeholders to jointly consider concepts, data, dynamics, policy options, and assessment of potential consequences of policy options,  all of which may need to be socially constructed. 
  • 8. To provide a means for identification, articulation and potentially resolution of trade-offs and their consequences in some business or public policy domain.   This is the case, for example, with models of public health risk assessment of chemicals or new products by environmental protection agencies, and models of epidemics deployed by government health authorities.
  • 9. To enable rigorous and justified thinking about the assumptions and their relationships to one another in modeling some domain.   Business planning models usually serve this purpose.   They may be used to inform actions, both to eliminate or mitigate negative consequences and to enhance positive consequences, as in retroflexive decision making.
  • 10. To enable a means of assessment of managerial competencies of the people undertaking the modeling activity. Investors in start-ups know that the business plans of the company founders are likely to be out of date very quickly.  The function of such business plans is not to model reality accurately, but to force rigorous thinking about the domain, and to provide a means by which potential investors can challenge the assumptions and thinking of management as a way of probing the managerial competence of those managers.    Business planning can thus be seen to be a form of epideictic argument, where arguments are assessed on their form rather than their content, as I have argued here.
  • 11. As a means of play, to enable the exercise of human intelligence, ingenuity and creativity, in developing and exploring the properties of models themselves.  This purpose is true of that human activity known as doing pure mathematics, and perhaps of most of that academic activity known as doing mathematical economics.   As I have argued before, mathematical economics is closer to theology than to the modeling undertaken in the natural sciences. I see nothing wrong with this being a purpose of modeling, although it would be nice if academic economists were honest enough to admit that their use of public funds was primarily in pursuit of private pleasures, and any wider social benefits from their modeling activities were incidental.

POSTSCRIPT (Added 2011-06-17):  I have just seen Joshua Epstein’s 2008 discussion of the purposes of modeling in science and social science.   Epstein lists 17 reasons to build explicit models (in his words, although I have added the label “0” to his first reason):

0. Prediction
1. Explain (very different from predict)
2. Guide data collection
3. Illuminate core dynamics
4. Suggest dynamical analogies
5. Discover new questions
6. Promote a scientific habit of mind
7. Bound (bracket) outcomes to plausible ranges
8. Illuminate core uncertainties
9. Offer crisis options in near-real time. [Presumably, Epstein means “crisis-response options” here.]
10. Demonstrate tradeoffs / suggest efficiencies
11. Challenge the robustness of prevailing theory through perturbations
12. Expose prevailing wisdom as incompatible with available data
13. Train practitioners
14. Discipline the policy dialog
15. Educate the general public
16. Reveal the apparently simple (complex) to be complex (simple).

These are at a lower level than my list, and I believe some of his items are the consequences of purposes rather than purposes themselves, at least for honest modelers (eg, #11, #12, #16).

References:

Joshua M Epstein [2008]: Why model? Keynote address to the Second World Congress on Social Simulation, George Mason University, USA.  Available here (PDF).

Robert E Marks [2007]:  Validating simulation models: a general framework and four applied examples. Computational Economics, 30 (3): 265-290.

David F Midgley, Robert E Marks and D Kunchamwar [2007]:  The building and assurance of agent-based models: an example and challenge to the field. Journal of Business Research, 60 (8): 884-893.

Robert Rosen [1985]: Anticipatory Systems. Pergamon Press.

Ariel Rubinstein [1998]: Modeling Bounded Rationality. Cambridge, MA, USA: MIT Press.  Zeuthen Lecture Book Series.

Ariel Rubinstein [2006]: Dilemmas of an economic theorist. Econometrica, 74 (4): 865-883.




A salute to Charles Hamblin

This short biography of Australian philosopher and computer scientist Charles L. Hamblin was initially commissioned by the Australian Computer Museum Society.

Charles Leonard Hamblin (1922-1985) was an Australian philosopher and one of Australia’s first computer scientists. His main early contributions to computing, which date from the mid 1950s, were the development and application of reverse Polish notation and the zero-address store. He was also the developer of one of the first computer languages, GEORGE. Since his death, his ideas have become influential in the design of computer interaction protocols, and are expected to shape the next generation of e-commerce and machine-communication systems.

Continue reading ‘A salute to Charles Hamblin’




Dialogs over actions

In the post below, I mentioned the challenge for knowledge engineers of representing know-how, a task which may require explicit representation of actions, and sometimes also of utterances over actions.  The know-how involved in steering a large sailing ship with its diverse crew surely includes the knowledge of who to ask (or to command) to do what, when, and how to respond when these requests (or commands) are ignored, or fail to be executed successfully or timeously.

One might imagine epistemology – the philosophy of knowledge – would be of help here.  Philosophers, however, have been seduced since Aristotle by propositions (factual statements about the world having truth values), largely ignoring actions and their representation.   Philosophers of language have also mostly focused on speech acts – utterances which act to change the world – rather than on utterances about actions themselves.  Even among speech act theorists the obsession with propositions is strong, with attempts to analyze utterances which are demonstrably not propositions (eg, commands) by means of implicit assertive statements – propositions asserting something about the world, where “the world” is extended to include internal mental states and intangible social relations between people – which these utterances allegedly imply.  With only a few exceptions (Thomas Reid 1788, Adolf Reinach 1913, Juergen Habermas 1981, Charles Hamblin 1987), philosophers of language have mostly ignored utterances about actions.

Consider the following two statements:

I promise you to wash the car.

I command you to wash the car.

The two statements have almost identical English syntax.   Yet their meanings, and the intentions of their speakers, are very distinct.  For a start, the action of washing the car would be done by different people – the speaker and the hearer, respectively (assuming for the moment that the command is validly issued, and accepted).  Similarly, the power to retract or revoke the action of washing the car rests with different people – with the hearer (as the recipient of the promise) and the speaker (as the commander), respectively.

Linguists generally use “semantics” to refer to the real-world referents of syntactically-correct expressions, while “pragmatics” refers to those aspects of the meaning and use of an expression not captured by its relationship (or lack of one) to things in the world, such as the speaker’s intentions.  For neither of these two expressions does it make sense to speak of their truth value:  a promise may be questioned as to its sincerity, or its feasibility, or its appropriateness, etc, but not its truth or falsity;  likewise, a command may be questioned as to its legal validity, or its feasibility, or its morality, etc, but also not its truth or falsity.

For utterances about actions, such as promises, requests, entreaties and commands, truth-value semantics makes no sense.  Instead, we generally need to consider two pragmatic aspects.  The first is uptake, the acceptance of the utterance by the hearer (an aspect first identified by Reid and by Reinach), an acceptance which generally creates a social commitment to execute the action described in the utterance by one or other party to the conversation (speaker or hearer).    Once uptaken, a second pragmatic aspect comes into play:  the power to revoke or retract the social commitment to execute the action.  This revocation power does not necessarily lie with the original speaker; only the recipient of a promise may cancel it, for example, and not the original promiser.  The revocation power also does not necessarily lie with the uptaker, as commands readily indicate.
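These two pragmatic aspects can be made explicit in a toy data structure. The following sketch is my own illustration, not any standard protocol; the class and its rules simply encode the observations above about who performs an action and who may revoke the commitment to it:

```python
from dataclasses import dataclass

@dataclass
class ActionUtterance:
    """A promise or a command concerning an action (toy model)."""
    kind: str       # "promise" or "command"
    speaker: str
    hearer: str
    action: str
    uptaken: bool = False

    def performer(self):
        # A promise commits its speaker to act; a command (once
        # accepted) commits its hearer.
        return self.speaker if self.kind == "promise" else self.hearer

    def revoker(self):
        # Revocation power: only the recipient may cancel a promise;
        # only the issuer may cancel a command.
        return self.hearer if self.kind == "promise" else self.speaker

    def uptake(self):
        # Uptake by the hearer creates the social commitment.
        self.uptaken = True

    def revoke(self, who):
        if not self.uptaken:
            raise ValueError("no commitment exists before uptake")
        if who != self.revoker():
            raise ValueError(f"{who} lacks the power to revoke this")
        self.uptaken = False
```

With this encoding, “I promise you to wash the car” and “I command you to wash the car” differ precisely in who performs the action and who may revoke the commitment, despite their near-identical surface syntax.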

Why would a computer scientist be interested in such humanistic arcana?  The more tasks we delegate to intelligent machines, the more they need to co-ordinate actions with others of like kind.  Such co-ordination requires conversations comprising utterances over actions, and, for success, these require agreed syntax, semantics and pragmatics.  To give just one example:  the use of intelligent devices by soldiers has made the modern battlefield a place of overwhelming information collection, analysis and communication.  Lots of this communication can be done by intelligent software agents, which is why the US military, inter alia, sponsors research applying the philosophy of language and the philosophy of argumentation to machine communications.

Meanwhile, the philistine British Government intends to cease funding tertiary education in the arts and the humanities.   Even utilitarians should object to this.

References:

Juergen Habermas [1984/1981]:   The Theory of Communicative Action:  Volume 1:  Reason and the Rationalization of Society.  London, UK:  Heinemann.   (Translation by T. McCarthy of:  Theorie des Kommunikativen Handelns, Band I, Handlungsrationalität und gesellschaftliche Rationalisierung. Suhrkamp, Frankfurt, Germany, 1981.)

Charles  L. Hamblin [1987]:  Imperatives. Oxford, UK:  Basil Blackwell.

P. McBurney and S. Parsons [2007]: Retraction and revocation in agent deliberation dialogs. Argumentation, 21 (3): 269-289.

Adolf Reinach [1913]:  Die apriorischen Grundlagen des bürgerlichen Rechtes.  Jahrbuch für Philosophie und phänomenologische Forschung, 1: 685-847.




Antikythera

An orrery is a machine for predicting the movements of heavenly bodies.   The oldest known orrery is the Antikythera Mechanism, created in Greece around 2100 years ago, and rediscovered in 1901 in a shipwreck near the island of Antikythera (hence its name).   The high quality and precision of its components indicate that this device was not unique, since the making of high-quality mechanical components is not trivial, and is not usually achieved with just one attempt (something Charles Babbage found, and which delayed his development of computing machinery immensely).

It took until 2006 and the development of x-ray tomography for a plausible theory of the purpose and operations of the Antikythera Mechanism to be proposed (Freeth et al. 2006).   The machine was said to be a physical exemplification of late Greek theories of cosmology, in particular the idea that the motion of a heavenly body could be modeled by an epicycle – ie, a body traveling around a circle which is itself moving around some second circle.  This model provided an explanation for the fact that many heavenly bodies appear to move at different speeds at different times of the year, and sometimes even (appear to) move backwards.
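The epicycle idea is simple to state computationally: the body’s apparent position is the sum of two rotating vectors, a small circle carried around a larger one. A minimal sketch follows; the radii and angular speeds are arbitrary illustrative values, not reconstructed from the Mechanism:

```python
import math

def epicycle_position(t, R=5.0, r=1.0, Omega=1.0, w=7.0):
    """Position at time t of a body on a small circle (radius r,
    angular speed w) carried around a larger deferent circle
    (radius R, angular speed Omega), as seen from the centre."""
    return (R * math.cos(Omega * t) + r * math.cos(w * t),
            R * math.sin(Omega * t) + r * math.sin(w * t))

def apparent_angular_motion(t, dt=1e-6):
    """Finite-difference cross product of successive position vectors:
    positive means prograde apparent motion, negative retrograde."""
    x0, y0 = epicycle_position(t)
    x1, y1 = epicycle_position(t + dt)
    return (x0 * y1 - y0 * x1) / dt

# Prograde at t = 0; retrograde half an epicycle-phase later, just as
# the planets sometimes appear to move backwards against the stars.
print(apparent_angular_motion(0.0) > 0)          # True
print(apparent_angular_motion(math.pi / 6) > 0)  # False
```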

There have been two recent developments.  One is the re-creation of the machine (or, rather, an interpretation of it) using Lego components.

The second has arisen from a more careful examination of the details of the mechanism.  According to Marchant (2010), some people now believe that the mechanism exemplifies Babylonian, rather than Greek, cosmology.   Babylonian astronomers modeled the movements of heavenly bodies by assuming each body traveled along just one circle, but at two different speeds:  movement in one period of the year being faster than during the other part of the year.

If this second interpretation of the Antikythera Mechanism is correct, then perhaps it was the mechanism itself (or others like it) which gave late Greek astronomers the idea for an epicycle model.   In support of this view is the fact that, apparently, gearing mechanisms and the epicycle model both appeared around the same time, with gears perhaps a little earlier.   So late Greek cosmology (and perhaps late geometry) may have arisen in response to, or at least alongside, practical developments and physical models.   New ideas in computing typically follow the same trajectory – first they exist in real, human-engineered, systems; then, we develop a formal, mathematical theory of them.   Programmable machines, for instance, were invented in the textile industry in the first decade of the 19th century (eg, the Jacquard Loom), but a mathematical theory of programming did not appear until the 1960s.   Likewise, we have had a fully-functioning, scalable, global network enabling multiple, asynchronous, parallel, sequential and interleaved interactions since Arpanet four decades ago, but we still lack a thorough mathematical theory of interaction.

And what have the Babylonians ever done for us?   Apart from giving us our units for measuring of time (divided into 60) and of angles (into 360 degrees)?

References:

T Freeth, Y Bitsakis, X Moussas, JH Seiradaki, A Tselikas, H Mangou, M Zafeiropoulou, R Hadland, D Bate, A Ramsey, M Allen, A Crawley, P Hockley, T Malzbender, D Gelb, W Ambrisco and MG Edmunds [2006]:  Decoding the ancient Greek astronomical calculator known as the Antikythera Mechanism.  Nature, 444:  587-591.  30 November 2006.

J. Marchant [2010]:  Mechanical inspiration.  Nature, 468:  496-498.  25 November 2010.