Blockchains are the new black!

In late 2014, the first edition of DevCon (labelled DevCon0, in computing fashion), the Ethereum developers’ conference, was held in Berlin with a dozen or so participants.  A year later, several hundred people attended DevCon1 in the City of London.  The participants were a mix of pony-tailed hacker libertarians and besuited, besotted bankers, and I happened to speak at the event.  Since the New Year, I have participated in a round-table discussion on technology and entrepreneurship with a Deputy Prime Minister of Singapore, previewed a smart contracts project undertaken by a prominent former member of Anonymous, briefed a senior business journalist on the coming revolution in financial and regulatory technology, and presented to 120 people at a legal breakfast on distributed ledgers and blockchains.  That audience was, as it happens, the quietest and most attentive I have ever encountered.
For only the second time in my adult life, we are experiencing a dramatic sea-change in technology infrastructure, and this period feels exactly like the early days of the Web.  In 1994-1997, every corporation and their sister was intent on getting online, but most did not know how, and skills were scarce.  Great fortunes were to be made in clicks and mortar:  IBM took out full-page ads in the WSJ offering to put your company online in only 3 months and for just $1 million!  Today, sisters are urging investment in blockchains, and as much as $1 billion of venture funding went to blockchain and Bitcoin startups in 2015 alone.
The Web revolution helped make manifest the Information Society, by putting information online and making it easily accessible.  But, as I have argued before, most real work uses information but is not about information per se.  Rather, real work is about doing stuff, getting things done, and getting them done with, by, and through other people or organizations.  Exchanges, promises, and commitments are as important as facts in such a world.  The blockchain revolution will manifest the Joint-Action Society, by putting transactions and commitments online in a way that is effectively irrevocable and non-repudiable.  The leaders of this revolution are likely to arise from banking, finance, and insurance, since those sectors are where the applications are most compelling and the business needs most pressing.  So expect this revolution to be led not from Silicon Valley, but from within the citadels of global banking: New York and London, Paris and Frankfurt, and perhaps Tokyo and Singapore.
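Technically, what makes such records effectively irrevocable is hash-chaining: each block of transactions commits to a cryptographic hash of the block before it, so altering any earlier entry changes every subsequent hash and is immediately detectable.  The following is a minimal sketch in Python of that idea (the function and field names are invented for illustration, not taken from any particular blockchain):

    import hashlib
    import json

    def block_hash(block):
        # Hash the block's contents, including the hash of its predecessor.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append_block(chain, transactions):
        # Each new block commits to the hash of the block before it.
        prev_hash = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"prev_hash": prev_hash, "transactions": transactions})

    def verify(chain):
        # Valid only if every stored prev_hash still matches a recomputed hash.
        return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
                   for i in range(1, len(chain)))

    ledger = []
    append_block(ledger, ["Alice promises to pay Bob 10"])
    append_block(ledger, ["Bob commits to deliver goods to Alice"])
    print(verify(ledger))                                # True
    ledger[0]["transactions"][0] = "Alice promises to pay Bob 1"
    print(verify(ledger))                                # False: tampering detected

In a real distributed ledger, the verification step is carried out independently by many parties, which is what turns tamper-evidence on one machine into effective irrevocability across a network.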

London life – the past and the future

In the same week:

  • A meeting at Google Campus London – a superb space, a hive of activity, buzzing with energy and ideas, casual, and a wonderful vibe.
  • An invitation to a reception at the British Computer Society (BCS):  jacket and tie compulsory for all.

Here we are, one-sixth of the way into the 21st century, and the BCS is still insisting on formal dress?  Do they also require that only unmarried women be allowed to program these new-fangled machines?  That social, religious, and intellectual radical, Charles Babbage, would be appalled at such deference to established tradition.
You know that one of the people sitting beside you at Google Campus is the next Zuckerberg or Brin.  Maybe it is even you.  Not a single person at Google Campus was wearing a tie or a suit, though.  I doubt anyone intent on changing the future – or even the present! – is attending events requiring formal dress, but I guess the past is not evenly distributed either.
NOTE:  An early review of the Google London Campus is here.

Progress in computing

Computer science typically proceeds by first doing something, and then thinking carefully about it:  engineering usually precedes theory.  Some examples:

  • The first programmable device in modern times was the Jacquard Loom, a textile loom that could weave different patterns depending on the instruction cards fed into it.   This machine dates from the first decade of the 19th century, but we did not have a formal, mathematical theory of programming until the 1960s.
  • Charles Babbage designed various machines to undertake automated calculations in the first half of the 19th century, but we did not have a mathematical theory of computation until Alan Turing’s film-projector model (the Turing machine) a century later.
  • We’ve had a fully-functioning, scalable, global network enabling multiple, asynchronous, parallel, sequential and interleaved interactions since Arpanet four decades ago, but we still lack a fully-developed mathematical theory of interaction.  In particular, Turing’s film projectors seem inadequate to model interactive computational processes, such as those where new inputs arrive or partial outputs are delivered before processing is complete, or those processes which are infinitely divisible and decentralizable, or nearly so (a toy illustration follows this list).
  • The first mathematical theory of communications (due to Claude Shannon) dates only from the 1940s, and that theory explicitly ignores the meaning of messages; yet smoke signals were already being used for communication in ancient China, in ancient Greece, and in medieval-era southern Africa.  In the half-century since Shannon, computer scientists have used speech act theory from the philosophy of language to develop semantic theories of interactive communication.  Arguably, however, we still lack a good formal, semantically-rich account of dialogs and utterances about actions.
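
As a toy illustration of the missing theory of interaction mentioned above (my own sketch, not a formal model): a Turing-style computation takes its whole input up front and reveals nothing until it halts, whereas an interactive process accepts new inputs and delivers partial outputs while it is still running.  Python generators make the contrast concrete:

    def batch_sum(xs):
        # Turing-style: the whole input is available before computation starts,
        # and nothing is observable until the single final answer appears.
        return sum(xs)

    def interactive_sum():
        # Interactive-style: inputs arrive one at a time, and a partial output
        # (the running total) is delivered after each one, long before the
        # process is finished - indeed, it need never finish.
        total = 0
        while True:
            x = yield total          # emit a partial output, wait for the next input
            total += x

    print(batch_sum([3, 5, 7]))      # 15, produced in one go

    proc = interactive_sum()
    next(proc)                       # start the process (yields the initial total, 0)
    for value in [3, 5, 7]:
        print(proc.send(value))      # 3, then 8, then 15, as the inputs arrive

The second process never needs to halt to be useful, which is precisely the behaviour the classical model struggles to capture.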

An important consequence of this feature of the discipline is that theory and practice are strongly coupled and symbiotic.   We need practice to test and validate our theories, of course.   But our theories are not (in general) theories of something found in Nature, but theories of practice and of the objects and processes created by practice.  Favouring theory over practice risks creating a sterile, infeasible discipline out of touch with reality – a glass bead game such as string theory or pre-Crash mathematical economics.   Favouring practice over theory risks losing the benefits of intelligent thought and modeling about practice, and thus inhibiting our understanding about limits to practice.   Neither can exist very long or effectively without the other.

Mathematical hands

With MOOCs fast becoming the teaching trend du jour in western universities, it is easy to imagine that all disciplines and all ways of thinking are equally amenable to information technology.  This is simply not true, and mathematical thinking in particular requires hand-written drawing and symbolic manipulation.  Nobody ever acquired skill in a mathematical discipline without doing exercises and problems him or herself, writing on paper or a board with his or her own hands.  The physical manipulation by the hand holding the pen or pencil is necessary to gain facility in the mental manipulation of the mathematical concepts and their relationships.
Keith Devlin recounts his recent experience teaching a mathematics MOOC, and the deleterious use by students of the typesetting system LaTeX for doing assignments:

We have, it seems, become so accustomed to working on a keyboard, and generating nicely laid out pages, we are rapidly losing, if indeed we have not already lost, the habit—and love—of scribbling with paper and pencil. Our presentation technologies encourage form over substance. But if (free-form) scribbling goes away, then I think mathematics goes with it. You simply cannot do original mathematics at a keyboard. The cognitive load is too great.

Why is this?  A key reason is that current mathematics-producing software is clunky, cumbersome, finicky, and not WYSIWYG (What You See Is What You Get).  The most widely used such software is LaTeX (and its relatives), which is a mark-up and command language; when compiled, these commands generate mathematical symbols.  Using LaTeX does not involve direct manipulation of the symbols, but only their indirect manipulation.  One has first to imagine (or indeed, draw by hand!) the desired symbols or mathematical notation, which one then creates using the appropriate generative LaTeX commands.  Only when these commands are compiled can the user see the effects they intended to produce.  Facility with pen-and-paper, by contrast, enables direct manipulation of symbols, with the pen-in-hand eventually being experienced as an extension of the user’s physical body and mind, and not as something other.  Expert musicians, archers, surgeons, jewellers, and craftsmen often have the same experience with their particular instruments, feeling them to be extensions of their own body and not external tools.
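For example, to produce a displayed formula such as the Gaussian integral, the LaTeX user types commands like the following and sees the rendered symbols only after the file is compiled (the formula is arbitrary, chosen purely for illustration):

    \documentclass{article}
    \begin{document}
    % The symbols below are not manipulated directly; they are generated
    % from these commands only when the file is compiled.
    \begin{equation}
      \int_{0}^{\infty} e^{-x^{2}} \, dx = \frac{\sqrt{\pi}}{2}
    \end{equation}
    \end{document}

A mathematician with pen in hand simply draws the integral sign and the symbols directly; the LaTeX user must first translate the intended notation into commands and then wait for compilation to discover whether the output matches what was in their head.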
Experienced writers too can feel this way about their use of a keyboard, but word-processing software is generally WYSIWYG (or close enough not to matter).  Mathematics-making software is a long way from allowing the user to feel that they are directly manipulating the symbols in their head, as a pen-in-hand mathematician does.  Without direct manipulation, hand and mind are not doing the same thing at the same time, and thus – a fortiori – keyboard-in-hand is certainly not simultaneously manipulating concept-in-mind, nor is keyboard-in-hand simultaneously expressing or evoking concept-in-mind.
I am sure that a major source of the problem here is that too many people – and especially most of the chattering classes – mistakenly believe the only form of thinking is verbal manipulation.  Even worse, some philosophers believe that one can only think by means of words.  Related posts on drawing-as-a-form-of-thinking here, and on music-as-a-form-of-thinking here.
[HT:  Normblog]

The corporate culture of Microsoft

Anyone with friends or associates working for Microsoft these last few years has heard stories of its bizarre internal employee appraisal system, called stack ranking:  every group, no matter how wonderful or effective, must include some poor performers – by decree, regardless of how the group actually performed.  One is reminded of Stalin’s imposition of quotas on the intelligence agencies for finding spies in the USSR in the 1930s.  With this system, it is not sufficient that people achieve their objectives or perform well: for some to be rated as performing well, others in the same team must be rated as performing poorly.  There are three extremely negative outcomes of this system:  first, good and even very good performers get rated as performing poorly; second, immense effort is spent by almost everyone in ensuring that others do badly in the ratings; and third, team spirit dissolves.
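To see how blunt the arithmetic is, here is a minimal sketch of a forced-distribution ranking in Python, using invented scores and an illustrative 20/70/10 quota (close to the two-seven-one split described in the excerpt below): the bucket sizes are fixed in advance, so someone lands at the bottom no matter how strong the team is.

    # Illustrative quota: top 20%, middle 70%, bottom 10% - fixed in advance.
    QUOTA = [("top", 0.2), ("middle", 0.7), ("bottom", 0.1)]

    def forced_ranking(scores):
        # Sort the team from strongest to weakest, then cut at the quota
        # boundaries regardless of how good the absolute scores are.
        ordered = sorted(scores, key=scores.get, reverse=True)
        buckets, start = {}, 0
        for label, fraction in QUOTA:
            end = start + round(fraction * len(ordered))
            buckets[label] = ordered[start:end]
            start = end
        return buckets

    # A team of ten strong performers, all scoring between 90 and 99 out of 100.
    team = {"dev%d" % i: 90 + i for i in range(10)}
    print(forced_ranking(team))
    # Two are rated top, seven middle, one bottom - by quota, not by merit.

Even a team of superstars ends up with someone in the bottom bucket, which is exactly the scenario the Vanity Fair excerpt below describes.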
The August issue of Vanity Fair has a long profile of Microsoft and its current ills, Microsoft’s Lost Decade, by Kurt Eichenwald, here, which discusses this system in detail.     Here is a description of  the system and its consequences:

At the center of the cultural problems was a management system called “stack ranking.” Every current and former Microsoft employee I interviewed—every one—cited stack ranking as the most destructive process inside of Microsoft, something that drove out untold numbers of employees. The system—also referred to as “the performance model,” “the bell curve,” or just “the employee review”—has, with certain variations over the years, worked like this: every unit was forced to declare a certain percentage of employees as top performers, then good performers, then average, then below average, then poor.
“If you were on a team of 10 people, you walked in the first day knowing that, no matter how good everyone was, two people were going to get a great review, seven were going to get mediocre reviews, and one was going to get a terrible review,” said a former software developer. “It leads to employees focusing on competing with each other rather than competing with other companies.”
Supposing Microsoft had managed to hire technology’s top players into a single unit before they made their names elsewhere—Steve Jobs of Apple, Mark Zuckerberg of Facebook, Larry Page of Google, Larry Ellison of Oracle, and Jeff Bezos of Amazon—regardless of performance, under one of the iterations of stack ranking, two of them would have to be rated as below average, with one deemed disastrous.
For that reason, executives said, a lot of Microsoft superstars did everything they could to avoid working alongside other top-notch developers, out of fear that they would be hurt in the rankings. And the reviews had real-world consequences: those at the top received bonuses and promotions; those at the bottom usually received no cash or were shown the door.
Outcomes from the process were never predictable. Employees in certain divisions were given what were known as M.B.O.’s—management business objectives—which were essentially the expectations for what they would accomplish in a particular year. But even achieving every M.B.O. was no guarantee of receiving a high ranking, since some other employee could exceed the assigned performance. As a result, Microsoft employees not only tried to do a good job but also worked hard to make sure their colleagues did not.
“The behavior this engenders, people do everything they can to stay out of the bottom bucket,” one Microsoft engineer said. “People responsible for features will openly sabotage other people’s efforts. One of the most valuable things I learned was to give the appearance of being courteous while withholding just enough information from colleagues to ensure they didn’t get ahead of me on the rankings.”
Worse, because the reviews came every six months, employees and their supervisors—who were also ranked—focused on their short-term performance, rather than on longer efforts to innovate.
“The six-month reviews forced a lot of bad decision-making,” one software designer said. “People planned their days and their years around the review, rather than around products. You really had to focus on the six-month performance, rather than on doing what was right for the company.”
There was some room for bending the numbers a bit. Each team would be within a larger Microsoft group. The supervisors of the teams could have slightly more of their employees in the higher ranks so long as the full group met the required percentages. So, every six months, all of the supervisors in a single group met for a few days of horse trading.
On the first day, the supervisors—as many as 30—gather in a single conference room. Blinds are drawn; doors are closed. A grid containing possible rankings is put up—sometimes on a whiteboard, sometimes on a poster board tacked to the wall—and everyone breaks out Post-it notes. Names of team members are scribbled on the notes, then each manager takes a turn placing the slips of paper into the grid boxes. Usually, though, the numbers don’t work on the first go-round. That’s when the haggling begins.
“There are some pretty impassioned debates and the Post-it notes end up being shuffled around for days so that we can meet the bell curve,” said one Microsoft manager who has participated in a number of the sessions. “It doesn’t always work out well. I myself have had to give rankings to people that they didn’t deserve because of this forced curve.”
The best way to guarantee a higher ranking, executives said, is to keep in mind the realities of those behind-the-scenes debates—every employee has to impress not only his or her boss but bosses from other teams as well. And that means schmoozing and brown-nosing as many supervisors as possible.
“I was told in almost every review that the political game was always important for my career development,” said Brian Cody, a former Microsoft engineer. “It was always much more on ‘Let’s work on the political game’ than on improving my actual performance.”
Like other employees I interviewed, Cody said that the reality of the corporate culture slowed everything down. “It got to the point where I was second-guessing everything I was doing,” he said. “Whenever I had a question for some other team, instead of going to the developer who had the answer, I would first touch base with that developer’s manager, so that he knew what I was working on. That was the only way to be visible to other managers, which you needed for the review.”
I asked Cody whether his review was ever based on the quality of his work. He paused for a very long time. “It was always much less about how I could become a better engineer and much more about my need to improve my visibility among other managers.”
In the end, the stack-ranking system crippled the ability to innovate at Microsoft, executives said. “I wanted to build a team of people who would work together and whose only focus would be on making great software,” said Bill Hill, the former manager. “But you can’t do that at Microsoft.”

And, according to Eichenwald, Microsoft had an early lead in e-reader technology that was lost due to the company’s cultural bias in favour of the Windows look-and-feel:

The spark of inspiration for the device had come from a 1979 work of science fiction, The Hitchhiker’s Guide to the Galaxy, by Douglas Adams. The novel put forth the idea that a single book could hold all knowledge in the galaxy. An e-book, the Microsoft developers believed, would bring Adams’s vision to life. By 1998 a prototype of the revolutionary tool was ready to go. Thrilled with its success and anticipating accolades, the technology group sent the device to Bill Gates—who promptly gave it a thumbs-down. The e-book wasn’t right for Microsoft, he declared.
“He didn’t like the user interface, because it didn’t look like Windows,” one programmer involved in the project recalled. But Windows would have been completely wrong for an e-book, team members agreed. The point was to have a book, and a book alone, appear on the full screen. Real books didn’t have images from Microsoft Windows floating around; putting them into an electronic version would do nothing but undermine the consumer experience.
The group working on the initiative was removed from a reporting line to Gates and folded into the major-product group dedicated to software for Office, the other mammoth Microsoft moneymaker besides Windows. Immediately, the technology unit was reclassified from one charged with dreaming up and producing new ideas to one required to report profits and losses right away.
“Our entire plan had to be moved forward three to four years from 2003–04, and we had to ship a product in 1999,” said Steve Stone, a founder of the technology group. “We couldn’t be focused anymore on developing technology that was effective for consumers. Instead, all of a sudden we had to look at this and say, ‘How are we going to use this to make money?’ And it was impossible.”
Rushing the product to market cost Microsoft dearly. The software had been designed to run on a pad with touch-screen technology, a feature later popularized with the iPhone. Instead, the company pushed out Microsoft Reader, to run on the Microsoft Pocket PC, a small, phone-size device, and, soon after, on Windows. The plan to give consumers something light and simple that would allow them to read on a book-size screen was terminated.
The death of the e-book effort was not simply the consequence of a desire for immediate profits, according to a former official in the Office division. The real problem for his colleagues was that a simple touch-screen device was seen as a laughable distraction from the tried-and-true ways of dealing with data. “Office is designed to inputting with a keyboard, not a stylus or a finger,” the official said. “There were all kinds of personal prejudices at work.”
Indeed, executives said, Microsoft failed repeatedly to jump on emerging technologies because of the company’s fealty to Windows and Office. “Windows was the god—everything had to work with Windows,” said Stone. “Ideas about mobile computing with a user experience that was cleaner than with a P.C. were deemed unimportant by a few powerful people in that division, and they managed to kill the effort.”
This prejudice permeated the company, leaving it unable to move quickly when faced with challenges from new competitors. “Every little thing you want to write has to build off of Windows or other existing products,” one software engineer said. “It can be very confusing, because a lot of the time the problems you’re trying to solve aren’t the ones that you have with your product, but because you have to go through the mental exercise of how this framework works. It just slows you down.”

RIP: Ernest Kaye

While on the subject of Britain’s early lead in computing, I should mention the recent death of Ernest Kaye (1922-2012).  Kaye was the last surviving member of the design team of the LEO computer, the pioneering business and accounting machine developed by J. Lyons & Co., the tea-shop chain, in the early 1950s.  As with jet aircraft, computers were another technological lead gained and then squandered by British companies.
Kaye’s Guardian obituary is here.   A post on his LEO colleague John Aris is here.  An index to Vukutu posts on the Matherati is here.