cod-Bourbakism

Economic historians Philip Mirowski and Edward Nik-Khah have published a new book on the role of information in post-war economics.  The introductory chapter contains a nice, high-level summary of the failures of the standard model of decision-making in mainstream microeconomics, Maximum Expected Utility (MEU) Theory, also called rational choice theory.  Because the MEU model continues to dominate academic economics despite working neither in practice nor in theory, I have written about it often before, for example here, here and here.  Listen to Mirowski and Nik-Khah:
“Given the massive literature on so-called rationality in the social sciences, it gives one pause to observe what a dark palimpsest the annals of rational choice has become. The modern economist, who avoids philosophy and psychology as the couch potato avoids the gym, has almost no appreciation for the rich archive of paradoxes of rationality. This has come to pass primarily by insisting upon a distinctly peculiar template as the necessary starting point of all discussion, at least from the 1950s onwards. Neoclassical economists frequently characterize their schema as comprising three components: (a) a consistent well-behaved preference ordering reflecting the mindset of some individual; (b) the axiomatic method employed to describe mental manipulations of (a) as comprising the definition of “rational choice”; and (c) reduction of all social phenomena to be attributed to the activities of individual agents applying (b) to (a). These three components may be referred to in shorthand as: “utility” functions, formal axiomatic definitions (including maximization provisions and consistency restrictions), and some species of methodological individualism.
The immediate response is to marvel at how anyone could have confused this extraordinary contraption with the lush forest of human rationality, however loosely defined. Start with component (a). The preexistence of an inviolate preference order rules out of bounds most phenomena of learning, as well as the simplest and most commonplace of human experiences—that feeling of changing one’s mind. The obstacles that this doctrine pose for problems of the treatment of information turns out to be central to our historical account. People have been frequently known to make personally “inconsistent” evaluations of events both observed and unobserved; yet in rational choice theory, committing such a solecism is the only real mortal sin—one that gets you harshly punished at minimum and summarily drummed out of the realm of the rational in the final analysis. Now, let’s contemplate component (b). That dogma insists the best way to enshrine rationality is by mimicking a formal axiomatic system—as if that were some sterling bulwark against human frailty and oblique hidden flaws of hubris. One would have thought Gödel’s Theorem might have chilled the enthusiasm for this format, but curiously, the opposite happened instead. Every rational man within this tradition is therefore presupposed to conform to his own impregnable axiom system—something that comes pre-loaded, like Microsoft on a laptop. This cod-Bourbakism ruled out many further phenomena that one might otherwise innocently call “rational”: an experimental or pragmatic stance toward the world; a life where one understands prudence as behaving different ways (meaning different “rationalities”) in different contexts; a self-conception predicated on the possibility that much personal knowledge is embodied, tacit, inarticulate, and heavily emotion driven.  Furthermore, it strangely banishes many computational approaches to cognition: for instance, it simply elides the fact that much algorithmic inference can be shown to be noncomputable in practice; or a somewhat less daunting proposition, that it is intractable in terms of the time and resources required to carry it out. The “information revolution” in economics primarily consisted of the development of Rube Goldberg–type contraptions to nominally get around these implications. Finally, contemplate component (c): complaints about methodological individualism are so drearily commonplace in history that it would be tedious to reproduce them here. Suffice it to say that (c) simply denies the very existence of social cognition in its many manifestations as deserving of the honorific “rational.”
There is nothing new about any of these observations. Veblen’s famous quote summed them up more than a century ago: “The hedonistic conception of man is that of a lightning calculator of pleasures and pains, who oscillates like a homogeneous globule of desire of happiness under the impulse of stimuli that shift him about the area, but leave him intact.”  The roster of latter-day dissenters is equally illustrious, from Herbert Simon to Amartya Sen to Gerd Gigerenzer, if none perhaps is quite up to his snuff in stylish prose or withering skepticism. It is commonplace to note just how ineffectual their dissent has been in changing modern economic practice.
Why anyone would come to mistake this virtual system of billiard balls careening across the baize as capturing the white-hot conviction of rationality in human life is a question worthy of a few years of hard work by competent intellectual historians; but that does not seem to be what we have been bequeathed. In its place sits the work of (mostly) historians of economics and a few historians of science treating these three components of rationality as if they were more or less patently obvious, while scouring over fine points of dispute concerning the formalisms involved, and in particular, an inordinate fascination for rival treatments of probability theory within that framework. We get histories of ordinal versus cardinal utility, game theory, “behavioral” peccadillos, preferences versus “capacities,” social choice theory, experimental interventions, causal versus evidential decision theory, formalized management theory, and so forth, all situated within a larger framework of the inexorable rise of neoclassical economics. Historians treat components (a–c) as if they were the obvious touchstone of any further research, the alpha and omega of what it means to be “rational.” Everything that comes after this is just a working out of details or a cleaning up of minor glitches. If and when this “rational choice” complex is observed taking root within political science, sociology, biology, or some precincts of psychology, it is often treated as though it had “migrated” intact from the economists’ citadel. If that option is declined, then instead it is intimated that “science” and the “mathematical tools” made the figures in question revert to certain stereotypic caricatures of rationality.” [Mirowski and Nik-Khah 2017, locations 318-379 of the Kindle edition].

Reference:
Philip Mirowski and Edward Nik-Khah [2017]: The Knowledge We Have Lost in Information: The History of Information in Modern Economics. Oxford, UK: Oxford University Press.

Possible Worlds

This is a list of movies which play with alternative possible realities, in various ways:

  • It’s a Wonderful Life [Frank Capra, USA 1946]
  • Przypadek (Blind Chance) [Krzysztof Kieslowski, Poland 1987]
  • Lola Rennt (Run Lola Run) [Tom Tykwer, Germany 1998]
  • Sliding Doors [Peter Howitt, UK 1998]
  • The Family Man [Brett Ratner, USA 2000]
  • Me Myself I [Pip Karmel, Australia 2000]

On the topic of possible worlds, this post may be of interest.

Blockchains are the new black!

In late 2014, the first edition of DevCon (labelled DevCon0, in computing fashion), the Ethereum developers’ conference, was held in Berlin with a dozen or so participants.  A year later, several hundred people attended DevCon1 in the City of London.  The participants were a mix of pony-tailed hacker libertarians and besuited, besotted bankers, and I happened to speak at the event.  Since New Year, I have participated in a round-table discussion on technology and entrepreneurship with a Deputy Prime Minister of Singapore, previewed a smart-contracts project undertaken by a prominent former member of Anonymous, briefed a senior business journalist on the coming revolution in financial and regulatory technology, and presented to 120 people at a legal breakfast on distributed ledgers and blockchains. That audience was, as it happens, the quietest and most attentive I have ever encountered.
For only the second time in my adult life, we are experiencing a great dramatic sea-change in technology infrastructure, and this period feels exactly like the early days of the Web.  In 1994–1997, every corporation and their sister was intent on getting online, but most did not know how, and skills were scarce.  Great fortunes were to be made in clicks and mortar:  IBM took out full-page ads in the WSJ offering to put your company online in only 3 months and for just $1 million!  Today, sisters are urging investment in blockchains, and as much as $1 billion of venture funding went to blockchain and Bitcoin startups in 2015 alone.
The Web revolution helped make manifest the Information Society, by putting information online and making it easily accessible.  But, as I have argued before, most real work uses information but is not about information per se.  Rather, real work is about doing stuff, getting things done, and getting them done with, by, and through other people or organizations. Exchanges, promises, and commitments are as important as facts in such a world.  The blockchain revolution will manifest the Joint-Action Society, by putting transactions and commitments online in a way that is effectively irrevocable and non-repudiable.  The leaders of this revolution are likely to arise from banking, finance and insurance, since those sectors are where the applications are most compelling and the business needs most pressing. So expect this revolution to be led not from Silicon Valley, but from within the citadels of global banking: New York and London, Paris and Frankfurt, and perhaps Tokyo and Singapore.

Strategic Programming

Over the last 40-odd years, a branch of Artificial Intelligence called AI Planning has developed.  One way to view Planning is as automated computer programming: 

  • Write a program that takes as input an initial state, a final state (“a goal”), and a collection of possible atomic actions, and  produces as output another computer programme comprising a combination of the actions (“a plan”) guaranteed to take us from the initial state to the final state. 

A prototypical example is robot motion:  Given an initial position (e.g., here), a means of locomotion (e.g., the robot can walk), and a desired end-position (e.g., over there), AI Planning seeks to empower the robot to develop a plan to walk from here to over there.   If some or all the actions are non-deterministic, or if there are other possibly intervening effects in the world, then the “guaranteed” modality may be replaced by a “likely” modality. 
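As a toy illustration (a minimal sketch in Python, with hypothetical action names), the simplest possible planner is a breadth-first search over states, where each atomic action is modelled as a function from states to states.  Real AI planners use richer action representations (e.g., STRIPS-style preconditions and effects) and far cleverer search, but the input/output contract is the one just described:

```python
from collections import deque

def plan(initial_state, goal_state, actions):
    """Breadth-first search for a sequence of atomic actions (a plan)
    taking initial_state to goal_state.

    `actions` maps an action name to a function: state -> state
    (returning None if the action is inapplicable in that state).
    States must be hashable.  If no plan exists, this may not terminate
    when the state space is infinite -- it is only a sketch.
    """
    frontier = deque([(initial_state, [])])
    visited = {initial_state}
    while frontier:
        state, partial_plan = frontier.popleft()
        if state == goal_state:
            return partial_plan  # the output "programme": a list of actions
        for name, apply_action in actions.items():
            next_state = apply_action(state)
            if next_state is not None and next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, partial_plan + [name]))
    return None

# Hypothetical robot-motion example: states are grid positions.
actions = {
    "step-north": lambda pos: (pos[0], pos[1] + 1),
    "step-east":  lambda pos: (pos[0] + 1, pos[1]),
}
print(plan((0, 0), (2, 1), actions))
# -> a 3-step plan such as ['step-north', 'step-east', 'step-east']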
Another way to view Planning is in contrast to Scheduling:

  • Scheduling is the orderly arrangement of a collection of tasks guaranteed to achieve some goal from some initial state, when we know in advance the initial state, the goal state, and the tasks (see the sketch after this list).
  • Planning is the identification and orderly arrangement of tasks guaranteed to achieve some goal from some initial state, when we know in advance the initial state and the goal state, but we don’t yet know the tasks;  we only know in advance the atomic actions from which tasks may be constructed.
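The scheduling half of this contrast is computationally much easier: when the tasks and their precedence constraints are known in advance, an orderly arrangement is just a topological sort.  A minimal sketch, with hypothetical task names:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Known tasks and their prerequisites (a hypothetical example).
prerequisites = {
    "pour-foundation": {"dig-site"},
    "erect-frame":     {"pour-foundation"},
    "fit-roof":        {"erect-frame"},
    "dig-site":        set(),
}

schedule = list(TopologicalSorter(prerequisites).static_order())
print(schedule)
# ['dig-site', 'pour-foundation', 'erect-frame', 'fit-roof']
```

The planner sketched earlier had to discover the tasks; the scheduler merely orders them.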

Relating these ideas to my business experience, I realized that a large swathe of complex planning activities in large companies involves something at a higher level of abstraction.  Henry Mintzberg called these activities “Strategic Programming”:

  • Strategic Programming is the identification and prioritization of a finite collection of programs or plans, given an initial state and a set of desirable end-states or objectives (possibly conflicting).  A program comprises an ordered collection of tasks, and these tasks and their ordering we may or may not know in advance.

Examples abound in complex business domains.  You wake up one morning to find yourself the owner of a national mobile telecommunications licence, and with funds to launch a network.  You have to buy the necessary equipment, and deploy and connect it, in order to provide your new mobile network.  Your first decision is where to provide coverage:  you could aim to provide nationwide coverage, and not open your service to the public until the network has been installed and connected nationwide.  This is the strategy Orange adopted when launching PCS services in mainland Britain in 1994.  One downside of waiting until you have covered the nation before selling any service to customers is that revenues are delayed.
Another downside is that a competitor may launch service before you, and that is what happened to Orange:  Mercury One2One (as it then was) offered service to the public in 1993, when it had covered only the area around London.  The upside of that strategy for One2One was early revenues.  The downside was that customers could not use their phones outside the island of coverage, essentially inside the M25 ring-road.  For some customer segments, wide-area or nationwide coverage may not be very important, so an early launch may be appropriate if those segments are being targeted.  But an early launch won’t help customers who need wider-area coverage, and – unless marketing communications are handled carefully – the early launch may position the network operator in the minds of such customers as permanently providing inadequate service.  The expectations of both current target customers and customers who are not currently targets need to be explicitly managed to avoid such mis-perceptions.
In this example, the different coverage rollout strategies ended up at the same place eventually, with both networks providing nationwide coverage.  But the two operators took different paths to that same end-state.  How to identify, compare, prioritize, and select between these different paths is the very stuff of marketing and business strategy, i.e., of strategic programming.  It is why business decision-making is often very complex and intellectually very demanding.  Let no one say (as academics are wont to do) that decision-making in business is a doddle.  Everything is always more complicated than it looks from outside, and identifying and choosing between alternative programs is among the most complex of decision-making activities.
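As a toy illustration of comparing such paths, here is a sketch in Python that ranks the two rollout strategies above by the net present value of their cash flows; every number is hypothetical and purely for illustration:

```python
def npv(cashflows, discount_rate):
    """Net present value of a list of annual cashflows (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical annual cashflows in $m over six years.
nationwide_first = [-500, -300, 150, 400, 450, 500]   # build first, earn later
early_regional   = [-250,   50, 100, 200, 350, 450]   # earn early, build out

for name, flows in [("nationwide-first", nationwide_first),
                    ("early-regional", early_regional)]:
    print(f"{name}: NPV = {npv(flows, 0.10):.0f}")
```

Real strategic programming, of course, must also weigh objectives that resist monetization – competitive pre-emption, brand positioning, regulatory goodwill – which is precisely why prioritizing programs is hard.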

Taking a view vs. maximizing expected utility

The standard or classical model in decision theory is called Maximum Expected Utility (MEU) theory, which I have excoriated here and here (and which Cosma Shalizi satirized here).   Its flaws and weaknesses for real decision-making have been pointed out by critics since its inception, six decades ago.  Despite this, the theory is still taught in economics classes and MBA programs as a normative model of decision-making.
A key feature of MEU is that the decision-maker is required to identify ALL possible action options, and ALL consequential states of these options.  He or she then reasons ACROSS these consequences by adding together the utilities of the consequential states, each weighted by the likelihood that that state will occur.
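In symbols, the MEU prescription is the familiar one: choose the action that maximizes probability-weighted utility over the entire state space.

```latex
a^{*} \;=\; \operatorname*{arg\,max}_{a \in A} \; \sum_{s \in S} P(s \mid a)\, U(s)
```

Here A is the set of ALL possible actions, S the set of ALL consequential states, P(s|a) the probability that state s results from action a, and U(s) the utility of s.  The two universal quantifications are exactly the requirement that real planners, as described next, cannot discharge.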
However, financial and business planners do something completely contrary to this in everyday financial and business modeling.  In developing a financial model for a major business decision or for a new venture, the collection of possible actions is usually infinite, and the space of possible consequential states even more so.  Making human sense of the possible actions and the resulting consequential states is usually a key reason for undertaking the financial modeling activity, and so cannot be an input to the modeling.  Because of the explosion in the number of states and in their internal complexity, business planners cannot articulate all the actions and all the states, nor usually even a subset of these beyond a mere handful.
Therefore, planners typically choose to model just 3 or 4 states – usually called cases or scenarios – with each of these combining a complex mix of (a) assumed actions, (b) assumed stakeholder responses, and (c) environmental events and parameters.  The assumptions and parameter values are instantiated for each case, the model is run, and the outputs of the 3 or 4 cases are compared with one another.  The process is usually repeated with different (but close) assumptions and parameter values, to gain a sense of the sensitivity of the model outputs to those assumptions.
Often the scenarios will be labeled “Best Case”, “Worst Case”, “Base Case”, etc, to identify the broad underlying principles used to make the relevant assumptions in each case.  Actually adopting a financial model for (say) a new venture means assuming that one of these cases is close enough to current reality and its likely future development in the domain under study – i.e., that one case is realistic.  People in the finance world call this adoption of one case “taking a view” on the future.
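A minimal sketch of this case-based workflow, in Python, with all parameter names and values hypothetical:

```python
def venture_model(subscribers, price, churn, cost_base):
    """A toy one-line 'financial model': crude 5-year cumulative margin
    for a subscription venture."""
    margin = 0.0
    for year in range(5):
        revenue = subscribers * price * 12
        margin += revenue - cost_base
        subscribers *= (1 - churn) * 1.20   # assumed 20% gross growth p.a.
    return margin

scenarios = {
    "Best Case":  dict(subscribers=60_000, price=30, churn=0.10, cost_base=12e6),
    "Base Case":  dict(subscribers=50_000, price=25, churn=0.20, cost_base=14e6),
    "Worst Case": dict(subscribers=35_000, price=22, churn=0.30, cost_base=16e6),
}

for name, assumptions in scenarios.items():
    print(f"{name}: {venture_model(**assumptions):,.0f}")

# Crude sensitivity check: perturb each Base Case assumption by +/-10%.
base = scenarios["Base Case"]
for key in base:
    for factor in (0.9, 1.1):
        tweaked = {**base, key: base[key] * factor}
        print(f"Base Case with {key} x {factor}: "
              f"{venture_model(**tweaked):,.0f}")
```

The point of the exercise is not any single output number, but the comparison across cases and the sensitivity of outputs to the assumptions.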
Taking a view involves assuming (at least pro tem) that one trajectory (or one class of trajectories) describes the evolution of the states of some system.  Such betting on the future is the complete opposite cognitive behaviour to reasoning over all the possible states before choosing an action, which the proponents of the MEU model insist we all do.  Yet the MEU model continues to be taught as a normative model of decision-making to MBA students who will spend their post-graduation lives doing business planning by taking a view.
 

Time, gentlemen, please

Much discussion again over at Language Log about a claim of the form “Language L has no word for concept C”.  This time, it was the claim by Wade Davis (whose strange use of the past tense indicates he has forgotten, or is unaware, that many Australian Aboriginal languages are still in use) that:

“In not one of the hundreds of Aboriginal dialects and languages was there a word for time.”

The rebuttal of this claim by Mark Liberman was incisive and decisive.   Davis was using this claim to support a more general argument:  that traditional Australian Aboriginal cultures had different notions of and metaphors for time to those we mostly have in the modern Western world.
We in the contemporary educated West typically use a spatial metaphor for time, where the past is in one abstract place, the present in another non-overlapping abstract place, and the future in yet a third non-overlapping abstract place.    In this construal of time, causal influence travels in one direction only:  from the past to the present, and from the present to the future.   Nothing in either the present  or the future may influence the past, which is fixed and unchangeable.   Events in the future may perhaps be considered to influence the present, depending on how much fluidity we allow the present to have.  However, most of us would argue that it is not events in the future that influence events in the present, but our present perceptions of possible future events that influence events and actions in the present.
Modern Western Europeans typically think of the place that represents the past as being behind them, and the future ahead.   People raised in Asian cultures often think of the abstract place that is the past as being below them (or above them), and the future above (or below).   But all consider these abstract places to be non-overlapping, and even non-contiguous.
Traditional Australian Aboriginal cultures, as Davis argues, construe time very differently, and influences may flow in all directions.  A better spatial metaphor for Aboriginal notions of time would be to consider a modern city, where there are many different types of transport and communications, each viewable as a network:  rivers, canals, roads, bus-only road corridors, railways, underground rail tunnels, underground sewage or water drains, cycleways, footpaths, air-transport corridors, electricity networks, fixed-link telecommunications networks, wireless telecommunications networks, etc.  A map of each of these networks could be created (and usually is) for specific audiences.  A map of the city itself could then be formed by combining these separate maps, overlaid upon one another as layers in a stack.  Each layer describes a separate aspect of reality, but the reality of the actual entire city is complex and more than merely the sum of these parts.  Events or perceptions in one layer may influence events or perceptions in other layers, without any limitations on the directions of causality between layers.
Traditional Aboriginal notions of time are similar, with pasts, the present and futures all being construed as separate layers stacked over the same geographic space – in this case actual geographic country, not an abstract spatial representation of time.  Each generation of people who have lived, or who will live, in the specific region (“country” in modern Aboriginal English) will have created a layer in the stack.   Influence travels between the different layers in any and all directions, so events in the distant past or the distant future may influence events in the present, and events in the present may influence events in the past and the future.
Many religions – for example, Roman Catholicism, Hinduism, and African cosmologies – allow for such multi-directional causal influences via a non-material realm of saints or spirits, usually the souls of the dead, who may have power to guide the actions of the living in the light of the spirits’ better knowledge of the future.  Causal influence can thus travel, via such spirit influences, from future to present.  Similarly, the view in modern physics (relativity theory, strictly, rather than quantum mechanics) of space-time as a single 4-dimensional manifold allows for influences across the dimension of time as well as those of space.
I am reminded of an experience I once witnessed where the only sensible explanation of a colleague’s passionate enthusiasm for a particular future course of action was his foreknowledge of the specific details of the outcome of that course of action.  But these details he did not know and could not have known at the time of his enthusiasm,  prior to the course of action being executed.  In other words, only a causal influence from future to present provided a sensible explanation for this enthusiasm, and this explanation only became evident as the future turned into the present, and the details of the outcome emerged.  Until that point, he could not justify or explain his passionate enthusiasm, which seemed to be a form of madness, even to him.    Contemporary Western cosmology does not provide such time-reversing explanations, but many other cultures do; and current theories of quantum entanglement also seem to.
Contemporary westerners, particularly those trained in western science, have a hard time understanding such alternative cosmologies, in my experience.  I have posted before about the difficulties most westerners have, for instance,  in understanding Taoist/Zen notions of synchronicity of events, which westerners typically mis-construe as random chance.

Glasperlenspielen

Lars Pålsson Syll on “orthodox, mainstream, neoclassical economics”:

“Economic theory today consists mainly in investigating economic models.
Neoclassical economics has since long given up on the real world and contents itself with proving things about thought up worlds. Empirical evidence only plays a minor role in economic theory (cf. Hausman [1997]), where models largely functions as a substitute for empirical evidence.  But “facts kick”, as Gunnar Myrdal used to say. Hopefully humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on mathematical deductivist modeling as the only scientific activity worthy of pursuing in economics will give way to methodological pluralism based on ontological considerations rather than formalistic tractability.
If not, we will have to keep on wondering – with Robert Solow and other thoughtful persons – what planet the economic theoretician is on.”  [page 54]

I agree with the general thrust of this essay, which resonates with some of my own thoughts on the Glass Bead Game of Economics, for example,  here and here.

Mind you, I don’t agree with everything that Syll says in this essay.  For example, he argues that good predictive capabilities require models to bear resemblance to their target domains.    But we know many counter-examples to this claim, from Newton’s model of planetary motion to Friedman’s billiard players.    Prediction and explanation are two orthogonal dimensions of a model, which may or may not be related in any particular case.

His essay also overlooks the fact that the so-called “real world” which is the target domain of economic models contains, at least in the case of macro-economics, mostly humanly-constructed artefacts, such as the “variables” known as inflation and unemployment rates. Having sat in working parties defining and redefining such artefacts, I am always surprised that any economist could possibly imagine they are modeling an independent reality.

Reference:
Lars Pålsson Syll [2010]:  What is (wrong with) economic theory?  Real-world Economics Review, 55: 23-57.

Shackle on Rational Expectations

The Rational Expectations model in economics assumes that each economic agent (whether an individual or a company) can predict the future as perfectly as the modelers themselves.  To anyone living outside the rarefied bubble of mathematical economics, this is simply ridiculous.  It is clear that no one associated with that theory has ever made any real business decisions, or suffered their consequences.
Here is non-mainstream economist George Shackle, writing to Bryan Hopkins on 1980-08-20:

‘Rational expectations’ remains for me a sort of monster living in a cave. I have never ventured into the cave to see what he is like, but I am always uneasily aware that he may come out and eat me. If you will allow me to stir the cauldron of mixed metaphors with a real flourish, I shall suggest that ‘rational expectations’ is neo-classical theory clutching at the last straw.
Observable circumstances offer us suggestions as to what may be the sequel of this act or that one. How can we know what invisible circumstances may take effect in time to come, of which no hint can now be gained? I take it that ‘rational expectations’ assumes that we can work out what will happen as a consequence of this or that course of action. I should rather say that at most we can hope to set bounds to what can happen, at best and at worst, within a stated length of time from ‘the present’, and can invent an endless diversity of possibilities lying between them. [Italics in original]

Of course, unlike John Muth or Robert Lucas, Shackle had actual real-world experience of investment decision-making, gained from his work during WW II on national infrastructure planning.
Reference:
George L. S. Shackle [1980]:  Letter to Bryan Hopkins.  Quoted in:  Stephen L. Littlechild [2003]: Reflections on George Shackle:  Three Excerpts from the Shackle Collection.  The Review of Austrian Economics, 16 (1): 113-117.

The Great British Rail Network Franchise Disaster of 2012

The British papers are full of stories about The Great British Rail Network Franchise Disaster of 2012.  Like Bristow’s Great Tea Trolley Disaster of 1967, we may never learn the real reasons behind the disaster — errors are alleged in calculations (arithmetic errors? using multi-line spreadsheets?) undertaken by senior civil servants, now suspended.  But one item leapt out at me:

Government sources said “heads will definitely roll in the department” over the affair, adding that “the minister cannot be expected to be responsible for a very technical models with hundreds of lines in a spreadsheet”.
The key error seems to have been to underestimate the potential value of the franchise – where the company pays a premium to the Government, rather than receiving a subsidy.
The department said mistakes had been made over estimates of the number of passengers who would use the route and the way inflation was calculated. Three civil servants have been suspended.

Why on earth are government civil servants estimating future passenger numbers and rates of inflation?  Surely that is the business of the bidders.  Only the bidders, after all, have the expertise, the experience, and the motivated self-interest to make these forecasts as accurately and realistically as possible.  The Government should be making its franchise decision on whatever criteria it thinks appropriate (e.g., the number of jobs created, the novelty of services provided, the public fares charged, the money payments offered for the franchise, etc), but not trying to second-guess the business plans of the train operators.  Any demand forecast will depend on assumptions about the actual services offered, the actual prices charged for these services, and the actions undertaken to market, promote, distribute, and sell them, and none of these assumptions is within the purview of the Government.
Indeed, not only do civil servants not know these marketing plans, civil servants — in my extensive experience of submitting telecommunications licence applications — do not even have the expertise needed to assess such plans.  How can they tell whether a marketing plan is effective or not?  Feasible or not?  Sensible or not?  Even experienced marketers can get market planning wrong, so how much more so civil servants with no commercial experience at all, no direct stake in the outcome, and no ear to the market ground?  A famous British example of marketing ignorance by civil servants was the refusal by British Treasury officials during the 1960s to approve (what is now) British Telecom’s proposed telecoms switch upgrades, since the proposed switches allowed for itemized billing of calls:  “What user would need that?” asked the refusenik officials.
A decade of telecommunications licences awarded by beauty contests finally convinced Governments around the world to put aside any attempt to plan the businesses involved, and just ask potential operators to pay what they think each licence is worth, via auctions.   Of course, British regional rail network franchises are monopolies, so it is appropriate for franchise allocation decisions to be based on criteria additional to the amount of money offered for the franchise.   It is even appropriate for these criteria to include subjective and qualitative factors, such as the degree of risk of the bidder going bankrupt during the franchise period.  Even so, I cannot see a need for a Government to be predicting customer demand,  or even assessing the predictions of customer demand made by the bidders.  They should leave that job to the people with the most to lose for getting the forecasts wrong.
If, for some reason, the Government does need its own independent forecast of demand, it should outsource the creation of the forecast (strictly, the forecast model) to some outside entity with the expertise, the experience, and the motivated self-interest to make these forecasts (or model) as accurately and realistically as possible.   Outsourcing would also more likely ensure that the generation of such demand forecasts is independent from their use in any evaluation of franchise bids, so that neither decision — deciding the forecasts nor choosing the franchise winners — could corruptly influence the other.

Embedded network data

In June, I saw a neat presentation by mathematician Dr Tiziana Di Matteo on her work summarizing high-dimensional network data.  Essentially, she and her colleagues embed their data as a graph on a 2-dimensional surface.   This process, of course, loses information from the original data, but what remains is (argued to be) the most important features of the original data.
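As I understand the cited papers (see the references below), the flagship construction is the Planar Maximally Filtered Graph (PMFG): retain the strongest similarities that can still be drawn on a surface of genus 0, i.e., keep the graph planar.  Here is a minimal sketch of the greedy construction in Python, assuming networkx’s planarity test and a made-up similarity matrix; the published implementations are considerably more efficient:

```python
import itertools
import random

import networkx as nx

def pmfg(similarity):
    """Greedy Planar Maximally Filtered Graph (sketch).

    `similarity` maps node pairs (i, j) to a similarity weight
    (e.g., a correlation).  Edges are tried in decreasing order of
    weight and kept only if the graph remains planar.
    """
    graph = nx.Graph()
    for (i, j), w in sorted(similarity.items(), key=lambda kv: -kv[1]):
        graph.add_edge(i, j, weight=w)
        is_planar, _ = nx.check_planarity(graph)
        if not is_planar:
            graph.remove_edge(i, j)
    return graph

# Tiny hypothetical example: 5 assets with made-up pairwise similarities.
random.seed(0)
nodes = ["A", "B", "C", "D", "E"]
sims = {(i, j): random.random() for i, j in itertools.combinations(nodes, 2)}
g = pmfg(sims)
print(sorted(g.edges()))  # at most 3n - 6 = 9 edges, all planar
```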
Seeing this, I immediately thought of the statistical moments of a probability distribution – the mean, the variance, the skewness, the kurtosis, etc.  Each of these summarizes an aspect of the distribution – respectively, its location, its variability, its symmetry, its peakedness, etc.  The moments may be read off from the coefficients of the Taylor series expansion (the sum of derivatives of increasing order) of the distribution’s moment-generating function, assuming that such an expansion exists.
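Concretely, when the moment-generating function exists in a neighbourhood of zero, the moments are its Taylor coefficients:

```latex
M_X(t) \;=\; \mathbb{E}\left[e^{tX}\right]
       \;=\; \sum_{n=0}^{\infty} \frac{\mathbb{E}[X^n]}{n!}\, t^n,
\qquad \text{so} \qquad
\mathbb{E}[X^n] \;=\; M_X^{(n)}(0).
```

Each truncation of this series summarizes the distribution to a given order, just as each embedding surface would summarize the network data to a given resolution.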
So, as I said to Dr Di Matteo, the obvious thing to do next (at least obvious to me) would be to embed their original network data in a sequence of surfaces of increasing dimension:  a 3-dimensional surface, a 4-dimensional surface, and so on, akin to the Taylor series expansion of a distribution.     Each such embedding would retain some features of the data and not others.  Each embedding would thus summarize the data in a certain way.   The trick will be in the choice of surfaces, and the appropriate surfaces may well depend on features of the original network data.
One may think of these various sequences of embeddings or Taylor series expansions as akin to the chain complexes in algebraic topology, which are means of summarizing the increasing-dimensional connectedness properties of a topological space.  So there would also be a more abstract treatment in which the topological embeddings would be a special case.
References:
M. Tumminello, T. Aste, T. Di Matteo, and R. N. Mantegna [2005]:  A tool for filtering information in complex systems.  Proceedings of the National Academy of Sciences of the United States of America (PNAS), 102 (30): 10421-10426.
W. M. Song, T. Di Matteo and T. Aste [2012]:  Hierarchical information clustering by means of topologically embedded graphs. PLoS ONE, 7:  e31929.