
The graphic element of the BarcampAfrica UK logo

BarcampAfrica UK is a gathering of technologically inspired Africans and their friends, to be held in London on 7th November. Barcamps are a form of ‘unconference’, events in which the overall focus is declared, but a detailed agenda is only worked out on the day, and people are encouraged to come along prepared to take part in running a workshop. I’m lending my support to the organising team (thus far my main contribution was the logo design), and I plan to facilitate a workshop on the day.

The first Barcamp Africa was held last year in northern California, the brainchild of Ellen Petry Leanse and Kaushal Jhalla. There have been several since, one recently in Cameroon. Much of the impetus for the UK barcamp has come from young Africans in the BCS. The overall focus of the event will be on what technology (especially computing and communications technologies) can contribute to positive human, economic and environmental development in the African continent.

My Barcamp focus will be on information sharing/publishing. Information design, information sharing and electronic publishing are my personal areas of interest and knowledge. The workshop I intend to run at BarcampAfrica UK will focus on how to make it easier for African institutions to publish and disseminate useful information.

The high cost of publishing software

Ghanaian high school students in the National Museum, Accra. Photo Conrad Taylor

A few years ago, I spent a couple of weeks in Ghana. I went over with Ghanaian friends, but spent time wandering around Accra on my own. One of my visits was to Ghana’s National Museum, a small circular building that is not well known even to Ghanaians. If you want to get there by taxi, ask to be taken to ‘Edvy’s restaurant’: located within the museum grounds, it is better known than the museum itself!

While browsing the exhibitions, I got talking to the museum’s curator, Joseph Gazari Seini, about methods for captioning exhibits; most of theirs were either typed, or hand-written. I explained the kind of work I do, and he invited me to his office.

Pulling out a folder, he showed me the text for a book, a report of some important archaeological work. It had been written on a computer and the print-out had been made on an inkjet printer. The folder also contained a collection of colour photographic prints showing aspects of the excavation and the objects found. Joe Gazari explained to me that they would love to get materials like this published, but they didn’t know how to go about it, and they hadn’t the means.

I wondered if there was some way I could help, and later that week I had a meeting with Joe’s boss. I’m afraid it ended in misunderstanding; in retrospect I think they thought I was some rich oburoni offering to finance the whole venture. But what I did do when I got back to England was contact Adobe Systems Inc. Their software — FrameMaker or InDesign, Photoshop and Illustrator — would be excellent tools for book-publishing.

I understood that Adobe had some history of charitable giving; would they donate software to this cause? No. They do support projects, but only in their back yard, for example in San Jose, California. And the amount of money it would cost the museum in Accra to buy legitimate copies of professional publishing software would be way beyond their budgets.

Is free, open-source software the answer?

So this is one of the barriers: the cost of decent software. What about free, open source software? That is certainly a route worth exploring. There is OpenOffice Writer of course, but a word-processing program doesn’t have all the features you would expect of proper publishing software, such as flexible layouts, extensive support for graphics formats, and CMYK colour for commercial printing.

There is also the Scribus desktop publishing program. The last time I tried it, it was awful, but that was two years ago and maybe version 1.3.3.x is OK. I have to say that the page about installing on Mac OS X doesn’t inspire much confidence…

Another possibility would be to use the venerable typesetting language TeX, invented by Donald Knuth in the late 1970s — free implementations are available for a variety of computer operating systems. On the down side, typesetting in TeX is not a WYSIWYG experience: you prepare a file using a text editor, embedding formatting codes, and feed the result to the TeX engine, which processes the file in a batch run. The results can be of very good quality and the method has great promise for textbooks, but it is difficult, if not impossible, to lay out publications in which there is an intimate relationship between text layout and graphics.
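
To make the batch idea concrete, here is a minimal sketch of that workflow driven from Python. It is only an illustration: the file name is made up, and it assumes a TeX distribution such as TeX Live (providing the pdflatex command) is installed.

    # Sketch of the TeX batch workflow: the document is plain text with
    # embedded formatting commands, written to disk and then processed in
    # one non-interactive run by a TeX engine (here pdflatex).
    import subprocess
    from pathlib import Path

    source = r"""
    \documentclass{article}
    \begin{document}
    \section*{Excavations at the National Museum}
    Text is marked up with commands such as \emph{emphasis} rather than
    being styled interactively; the engine lays out the pages in a batch run.
    \end{document}
    """

    Path("report.tex").write_text(source)                    # hypothetical file name
    subprocess.run(["pdflatex", "report.tex"], check=True)   # produces report.pdf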

Fonts for African languages

Another issue that is a barrier for publishing in some of the African vernacular languages is that they require characters that cannot be found in the standard fonts. In early 2000, I conducted a personal investigation of this issue, writing a 55-page illustrated report called Typesetting African Languages, which I have now parked at The Internet Archive as a PDF. (Warning – it is now out of date, and may contain errors.)

In my report, I divided the languages into five grades of difficulty in respect of typesetting them:

  1. Languages which use the same characters as English, without any diacritics — examples include Swahili, Somali, Zulu and Xhosa. These languages can even be processed using 7-bit ASCII or sent as telex without orthographic compromise!
  2. Languages which use accents but in a way that is similar to Western European languages like French or Portuguese, so can be typeset using standard computer fonts. Examples include Kikuyu and Tswana.
  3. Languages which use ‘ordinary’ letterforms but in unusual combinations, such as a dot under a vowel, or an accent over a consonant. These could be typeset with systems like TeX that build accented characters from components. Examples include the Nigerian languages Igbo, Edo and Yorùbá, the Zambian language Nyanja, and Wolof (Senegal).
  4. Languages which have additional letterforms that are not found in standard fonts — for example Fulfulde, Krio, Twi and Hausa.
  5. Languages which are written with non-Latin script systems. Apart from the Arabic script, there is also the Tifinagh script of the Berber people, and the Ge’ez script of Ethiopia and Eritrea.
A passage from a text in Yorùbá, typeset using MacTeX and Computer Modern fonts.

At the time that I wrote my study, the options for grade three and grade four languages looked quite problematic. I did have a go at typesetting a passage of Yorùbá using the MacTeX system and the Computer Modern type family. In this system, the accents and letters are composed by TeX macros, rather than stored as precomposed letterforms, so with a bit of hacking I was able to produce the result illustrated here.
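
For readers who like to see the nuts and bolts: the same trick of building an accented letter from pieces can be done with Unicode combining marks. The Python sketch below is purely illustrative; the character chosen is the Yorùbá ẹ́ (an e with a dot below and an acute tone mark), and the comment notes the rough LaTeX counterpart for the dot-below accent.

    # Building a 'grade three' character from a base letter plus combining
    # marks, much as TeX composes accents from components (the dot below is
    # what LaTeX's \d{e} accent command produces).
    import unicodedata

    DOT_BELOW = "\u0323"   # COMBINING DOT BELOW
    ACUTE     = "\u0301"   # COMBINING ACUTE ACCENT

    composed = unicodedata.normalize("NFC", "e" + DOT_BELOW + ACUTE)
    print(composed, [hex(ord(c)) for c in composed])
    # NFC folds e + dot-below into the precomposed U+1EB9 (ẹ); the acute stays
    # as a separate combining mark, since no single precomposed code point
    # exists for the full combination.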

But a lot has happened since. For a start, more publishing software now encodes text in the Unicode system, which allocates a unique code point to every character in the world’s writing systems. There are now fonts available in the extended TrueType and OpenType formats – fonts like Arial Unicode, Lucida Sans Unicode and Adobe’s ‘Pro’ font families – which contain hundreds or even thousands of characters, though usually not the ones which are needed for the African languages I have mentioned.
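
If you want to check whether a particular font actually covers the characters a given language needs, the open-source fontTools library can read a font’s character-to-glyph map. The sketch below is only an illustration: the font path is a placeholder, and the handful of code points shown is nowhere near a complete list for any language.

    # Check a font's cmap for a few characters used in West African languages.
    # Requires the fontTools package (pip install fonttools); the font file
    # name here is hypothetical.
    from fontTools.ttLib import TTFont

    NEEDED = {
        "ɛ": 0x025B,  # open e (Twi, Ewe)
        "ɔ": 0x0254,  # open o (Twi, Ewe)
        "ẹ": 0x1EB9,  # e with dot below (Yorùbá, Igbo)
        "ƙ": 0x0199,  # k with hook (Hausa)
    }

    font = TTFont("SomeFont.ttf")
    cmap = font["cmap"].getBestCmap()   # maps Unicode code points to glyph names
    for char, codepoint in NEEDED.items():
        print(f"U+{codepoint:04X} {char}:", "present" if codepoint in cmap else "MISSING")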

The growth in the number of ‘large glyph set’ fonts has in turn led to the development of mechanisms which make it easier to insert characters for which there is no keyboard keystroke. Good examples are the Glyph Palette in Adobe InDesign and the Character Palette system utility in Mac OS X.

A Twi (Akan) proverb typeset in two free-of-charge fonts, Gentium and SIL Charis

What Africa really needs is fonts which contain the glyphs for African languages – and which are free of charge! Fortunately this is now beginning to happen. Gentium, a beautiful font, was designed by Victor Gaultney, originally I believe as part of a research project at the University of Reading. Victor started the drawings for Gentium (the name means ‘of the nations’ in Latin) in the same year I was conducting my informal research. It now encompasses full extended Latin, Greek and Cyrillic alphabets, is hosted by SIL International, and is available at no charge under the SIL Open Font License, which allows modification and redistribution.

SIL International is a Christian organisation with a 75-year history of documenting and supporting the world’s less well known languages. Originally a summer school (the ‘Summer Institute of Linguistics’), SIL has grown into an organisation with knowledge of 2,550 languages. It publishes The Ethnologue and has also created a number of fonts that support African languages well. Charis is a family of four fonts – roman, italic, bold and bold italic. It is based on Bitstream Charter, originally designed by Matthew Carter, which in 1992 was donated by the Bitstream foundry to the X Consortium.

It’s a good start, and there are other initiatives along these lines of which I am aware. A friend of mine, Dave Crossland, recently obtained his Masters at the University of Reading, in the course of which he created a font which he intends to release using an Open Font license. The other day, Dave showed me the font running on his Google Android ’phone, and very crisp and legible it looked too. Although Dave’s font doesn’t have the character support for, say, Twi and Yorùbá, the licensing provisions he has chosen make it possible for other type-wranglers to take the baton and run the next mile.

Even TeX doesn’t stand still. In the course of preparing this blog post I found a reference to XeTeX, a relatively new TeX engine which supports Unicode, OpenType and Apple Advanced Typography. It too is looked after by SIL.

THIS WEEK, the British Computer Society launched its new identity, as part of what it calls the ‘transformation’ programme. In fact it now wants to be called only ‘the BCS’, along the lines of BAA and HSBC. But I predict that those of us who have been BCS members for some time will find it hard to drop the habitual mention of British, or Computer, or for that matter Society (BCS now wants to be called a ‘Chartered Institute’ instead).

On Wednesday, 23 September, the BCS held a Member Groups Convention at the Royal Society’s building in Carlton House Terrace. The BCS is home to scores of local branches, some International Sections, plus more than fifty Specialist Groups. The ‘SGs’ are volunteer-run societies, embedded within the BCS, which cater to special interests such as Fortran, computer history and conservation, project management, health informatics, security, interaction design, information retrieval, artificial intelligence… the list is a long one. In addition, there is a very active Young Professionals Group.

The activities of the member groups form a vital part of the life of the BCS, and the Convention was held so that the elected and employed officers of the Society, I mean the Institute, could tell a collection of the organisers and activists from those member groups about the BCS’s new orientation and aspirations for the future. It also gave an opportunity for those of us who work as volunteers within the BCS member groups to think about how we can contribute to these goals.

Adrian Walmsley, BCS vice-president for member services, stands beside one of the two London cabs sporting BCS marketing messages.

The new visual identity of the BCS is a strong green colour, suffused with yellow glowing bits, and a half-shield logo device — as illustrated on one of two taxis currently carrying the BCS’s advertising messages on the streets of London. (Appropriately, the taxi shown has WiFi inside and a satellite data uplink!) There is also a new slogan to accompany the logo — Enabling the information society — and it is this new mission statement that I want to comment on here.

Of information and computers

As Alan Turing predicted, the conversion of data and information into numbers allowed glorified electronic adding machines to evolve into general-purpose information appliances. The Lyons catering company took a revolutionary step when it commissioned the creators of the EDSAC machine at Cambridge University to design a general-purpose business computer, which made its début in 1951 as the Lyons Electronic Office, or LEO computer. This was pressed into service collating the requirements of Lyons’ nationwide chain of tea-rooms, managing the inventory, and controlling the production and dispatch of baked products to the tea-shops.

Today, only the smallest enterprises are run without computers, and the management of data and information using computers is central to the operations of government and public services too. But computing is also essential to scientific research and technology development — and has itself become the object of scientific research and technology development.

I have no idea just when the term ‘Information Technology’ or ‘IT’ started to be used as a synonym for computing, but I suspect the reason was to ‘big up’ the image of computing and its practitioners to business bosses and politicians. Personally I do not like the term. Firstly, because there are many other uses for computers apart from in managing information, and calling all computing ‘IT’ ignores that. Secondly, many non-computing technologies have been employed through the centuries in connection with information, for example in the writing and printing of books.

Hence my reservations about the BCS’s move to suppress pronunciation of the C-word, while describing itself as the Chartered Institute for IT.

The information society

On the other hand, I feel very positive about the adoption by the BCS of ‘Enabling the information society’ as a statement of its mission, which I find both humble and ambitious at the same time.

Why humble? Because the slogan does not claim that computer people are the only heroes of the information society. There are, after all, other people who know a thing or two about how to deal with information: researchers, teachers, writers and editors, publishers, library and information science people.

Consider my craft practice as a writer, information designer, illustrator, photographer, typographer and media producer. I used to practise all these crafts without a computer, but without a doubt the Apple Macintosh, the PostScript language, digital cameras, all that lovely software, and of course the Internet, have been very enabling indeed.

The computer and communications industries, now thoroughly converged, are providing the world with the infrastructure and tools which information and media workers like me rely on to create, manage and disseminate information products, and various cultural creations too. And it’s not only about production: in the last two decades, what I prefer to call CCTs — computing and communications technologies — have also become the means by which ordinary people access information, enjoy cultural media creations like films and music, and communicate with each other. Thus CCTs, and the people who know how to create them and make them work, are essential enablers to all these practices.

It’s true, ‘information society’ is more of a sloganeering term than a clear definition. There is some reasonably authoritative precedent for it, however. In 2003 in Geneva and 2005 in Tunis, the United Nations held a two-part conference called WSIS, the World Summit on the Information Society. It was managed by UNESCO and the ITU, and I’m afraid the British Government didn’t take it very seriously, but at least I can say that the BCS Specialist Group community did.

Led by John Lindsay, then chair of the Developing Countries Specialist Group, a group of us held a workshop in January 2003 to consider the WSIS challenge and themes, and what might be the response of professional societies working within the information sphere. (Full account of the meeting as PDF, available at the Internet Archive.)

Certainly on the UNESCO side of the WSIS organisation, the concept of the Information Society carried with it a commitment to promoting wider access to the technical means of accessing and sharing information. In preparatory papers for WSIS, UNESCO also identified the need to promote what they called ‘information literacy’, the skills people need to be able to find the information they need, evaluate its worth and make critical use of it. And information literacy was the topic we focussed on in the January 2003 BCS-DCSG workshop.

The BCS as enabler

In explaining the importance of the new mission statement, BCS President Elizabeth Sparrow spoke to the Member Groups Convention of several ways in which the BCS can work towards enabling the information society.

For a start, society needs information-oriented computer systems that are fit for purpose, and one of the BCS’s principal aims is to promote professionalism among computing practitioners. The BCS has also committed itself to bridging the gap between education, practice and research. I would expect the member groups to contribute a lot to this bridging process, because they already provide a focus for collaboration between academics and practitioners.

The BCS is a charity and has a Royal Charter which commits it to working for the public good. One of the commitments described in the transformation launch document (also entitled Enabling the information society) is to ‘Informing public policy on how IT can contribute to society’ and another is to ‘Ensuring everyone benefits from IT’.

To be honest, in recent years the BCS and its member groups have had a difficult relationship; the groups have felt under-appreciated and over-controlled, and there is suspicion that there are some BCS staff who wish the groups would just go away and let the staff get on with running the whole show. But the mood at this week’s Member Groups Convention was much more positive, and with this new expression of the BCS’s role as helping to enable the information society, we have an agenda with which the member groups can engage with enthusiasm, and to which we have a lot to contribute.

This I think is particularly true for the Specialist Groups, many of which are engaged with external ‘communities of use’ for computer and communications technologies and information-handling systems (health informatics is a good example, and so is electronic publishing). In a way we act as a natural interface between the BCS and wider communities of information users.

One informal project that I have been involved in running is the discussion community KIDMM (on Knowledge, Information, Data and Metadata Management), which started as a collaboration between members of a dozen or so BCS SGs. It held a founding workshop in March 2006 and now, three and a half years on, has held three further events and has eighty members on its discussion list. Most are BCS members, but many have joined us from other communities equally committed to enabling the information society.

Dave Snowden at the Gurteen Knowledge Cafe at the BT Tower in London: 17 February 2009. Photo Conrad Taylor.

33 YEARS AGO, I lived in a commune at Wick Court, a Jacobean manor house situated between Bath and Bristol, which was then the head office and conference centre of the Student Christian Movement. Another of the 17 or so communards was the SCM’s Chairman, Dave Snowden, a spare, bearded Welsh Marxist Catholic philosophy and physics graduate. Taking no prisoners in argument, and delighting in winding people up, Dave gave the impression of having slipped from another historical epoch, though whether it was from some future time, 1920s Russia or perhaps the Counter-Reformation was less easy to judge.

Our paths separated after Wick Court. Last summer I was again in the Bristol region, staying with my friend Bob Bater, who works in the area of knowledge management — a field I have strayed into from my lifelong interest in information design and communication. Bob told me about the work of a company called Cognitive Edge and its founder Dave Snowden, a description of whom sounded remarkably familiar. So when I had the opportunity to hear a lecture by Dave Snowden at a Gurteen Knowledge Cafe event at the BT Tower in London, I made an effort to go along.

Quite apart from the joy of reunion (and some grand views of London from the revolving restaurant at the top of the tower), it was also fascinating to discover how Dave’s thinking and mine have been oscillating around some of the same attractors over the years. Mind you, he has made a great deal more use of them than I have, through his many consultancies with major enterprises, and even state security organisations.

The topic for the day’s Cafe session was How can we best keep employees engaged in their work, in the current economic climate? But Dave’s talk addressed a much wider set of issues around what organisations should do to adapt to a complex and crisis-ridden world — one in which the ‘lessons’ of the past are no longer a guide for future action, and ‘management by objectives’ doesn’t equip us with the agility and intelligence to respond to emergent threats and opportunities.

Paradigm shifts in management thinking

The word paradigm has many sloppy uses, but Dave deploys it strictly to mean a theoretical framework: very close to the use established by Thomas Kuhn in his book on the sociology of scientific knowledge, The Structure of Scientific Revolutions. For a period of time, a paradigm dominates because it seems best to organise our thinking about the world; but then, a new approach that flies in the face of received wisdom and which is initially ridiculed gets taken up; and at a certain point, a ‘paradigm shift’ occurs and the former heresy becomes the dominant idea. The revolutions associated with Copernicus and Darwin are two such instances.

In the last 120 years, said Dave, two paradigms in succession have dominated management thinking. The first of these, dubbed Scientific Management (or ‘Taylorism’ after Frederick Winslow Taylor, its progenitor), focuses on the function of the business organisation and takes a command and control approach to the organisation of work. In this period, the emphases are on mass production, time and motion, automation and efficiency. The Machine is the model, and people are viewed as components of the system.

The second paradigm identified by Dave Snowden is Business Process Re-engineering, which took off in the 1990s. The belief in BPR is that one should look anew at the company’s objectives, defining the outcomes or goals to be achieved and the values or behaviours staff should adopt. Dave notes that this makes several assumptions, among them that you can pre-define your goals, and that you can rely on a form of causality in which results are repeatable and the past is a guide to the future. The approach also relies on scalable and reliable technology (particularly information and communication technology) through which metrics can be gathered and control can be driven.

And now, said Dave, we are entering a third paradigm, which it was his job to explain. Rather than projecting an idealised future, this approach seeks to understand and manage the evolutionary potential of the present. It is people-focused, exploits mass collaboration and pervasive social computing, and relies on distributed rather than centralised cognition.

Dave also displayed for us a quote from Seneca:

The greatest loss of time is delay and expectation, which depend upon the future. We let go the present, which we have in our power, and look forward to that which depends upon chance, and so relinquish a certainty for an uncertainty.

Catastrophes, niches and predators

Dave argues that times of economic crisis also tend to coincide with paradigm shifts, the times when the previously dominant set of ideas appear weak and the opportunity arises for the new model to take hold. Focusing on evolutionary potential and emergent trends in the present is particularly beneficial in opening our attention to novel possibilities for ways of doing business.

An idea he borrows from evolutionary biology is ‘Dominant Predator Theory’. This holds that after a major disturbance of the ecology, there will be one predator species that will first make the move into the new niche, occupy it and dominate it. The speed with which the predator can move into this niche is key to its dominance; and agile companies will be the ones that emerge from the current crisis in best shape.

Systems and agents: three configurations

To understand how an organisation might exploit the third paradigm, Dave presented three system models: ordered, chaotic and complex. By ‘system’ he means networks that have coherence, though that need not imply sharp boundaries. ‘Agents’ are defined as anything which acts within a system. An agent could be an individual person, or a grouping; an idea can also be an agent, for example the myth-structures which largely determine how we make decisions within the communities and societies within which we live.

  • Ordered systems are ones in which the actions of agents are constrained by the system, making the behaviour of the agents predictable. Most management theory is predicated on this view of the organisation.
  • Chaotic systems are ones in which the agents are unconstrained and independent of each other. This is the domain of statistical analysis and probability. We have tended to assume that markets are chaotic; but this has been a simplistic view.
  • Complex systems are ones in which the agents are lightly constrained by the system, and through their mutual interactions with each other and with the system environment, the agents also modify the system. As a result, the system and its agents ‘co-evolve’. This, in fact, is a better model for understanding markets, and organisations.

Most people understand how to think about order and chaos, but understanding complexity requires a new mind-set. One property of complex dynamic systems is that patterns may take shape within them that can rapidly escalate if they find reinforcement. Many people know the phrase ‘the butterfly effect’, which expresses how interactions between agents can sometimes build rapidly into something that would have been hard to predict. The key to managing complex systems is therefore to attune to spotting emerging trends as early as you can, sometimes referred to as weak signal detection. If you have the means of detecting signals early, you can move to boost those that you view as positive, and dampen down those which are negative.

The children’s party metaphor

I don’t remember the younger Dave as a raconteur, but he’s certainly an accomplished one now. To explain the difference between the above three systems and how to manage them, he told stories about the rules around birthday parties for his 12-year-old son and his 16-year-old daughter.

What would happen if you were prepared to model a 12-year-old’s party as a chaotic system? Anything could happen, including the house being reduced to ashes. What would an ‘ordered system’, ‘management by objectives’ approach look like? Dave spun out a fantasy process starting with declaring the objectives for the party, pinning up motivational posters, writing a project plan with clear milestones at which progress would be measured against declared outcomes, starting the party with a motivational video and PowerPoint, and after the party, reviewing the outcomes and updating the best-practice database. (That raised a laugh and a ripple of applause.)

Far better to model the kids’ party as a complex system. You set boundaries (‘cross that and you’re dead!’) — but you keep them flexible, you see if you can stimulate the formation of ‘attractor mechanisms’ that will set up some good patterns, and you keep a weather-eye open for bad patterns that you will want to disrupt. Your aim is to manage the emergence of beneficial coherence. And the benefits in business of adopting this process, Dave asserted, are that you get as dramatic an improvement as business process re-engineering ever got you, and at a low cost if you spot the patterns early and move appropriately.

Dave’s 16-year-old daughter’s party raised the stakes to a new level, as a previous incident involving a couple of friends and a bottle of vodka had given them cause to know. Here, the rules were more carefully set: who could be invited, what kind of alcohol etc. Shortly before the impending event, Dave was working with a well known client organisation in Langley, Va. USA, and sympathetic friends there loaned him an extensive array of surveillance equipment. With the house fully bugged, Dave and his wife stayed in their bedroom, tuned in, and only twice had to make forays so as to be on hand to disrupt possibly negative emergent patterns. ‘In a complex system, what counts is your weak signal detection,’ Dave explained.

Distributed cognition

Swindon’s Magic Roundabout

Distributed Cognition is a key concept in Dave Snowden’s vision of the new way to manage radically networked organisations, and he explained it with reference to the so-called ‘Magic Roundabout’ in Swindon. I’ve been trying to get my head round it (according to Dave, American visitors can’t get their cars round it!), and my best way of conceptualising the roundabout is as a very compact ring road, with traffic in both directions, linked to five radial roads by a mini-roundabout at each junction.

The Magic Roundabout offloads onto drivers the decisions about how to interpret the traffic flows, when to go and to stop, which way to turn and so on. The roundabout has no traffic lights. That’s what distributed cognition is about: you rely on the intelligence of the agents in the system. Since it was introduced it has never jammed up.

When the Magic Roundabout was first laid out, the partitions within the roadways were set up on a temporary basis, movable on the basis of observations by policemen and road traffic engineers about how people reacted to them. The moral that Dave drew from that was: ‘Move away from attempting failsafe design towards setting up safe-fail experimentation’, because you cannot analyse the problem space fully in advance, and you have to be prepared to adjust systems interactively until you find that they work.

Putting distributed cognition to work

Within complex systems, for the organisation to become alerted to early weak signals of pattern formation, Dave argues that it is necessary to work with objects of fine granularity — be these objects organisational ones or informational ones.

As for the granularity of levels of human organisation at which distributed cognition works, we should also be aware of certain numerical thresholds: ‘Organisations work differently as 5 or less individuals, 15 or less, and 150 or less’. 150 is Dunbar’s number [Wikipedia ref], the limit on the number of acquaintances a normal person can maintain; it is also a typical size for a military fighting unit such as a company, and W.L. Gore & Associates divide business units when they exceed this size. Dave described fifteen as a ‘limit of trust’, which he related to the size of the typical extended family; and five as Miller’s Number [Wikipedia ref], the limit of short-term memory. (Note: George Miller actually defined this limit as ‘the magical number seven, plus or minus two.’)

Spotting the ball, missing the beast

Dave then got us to take part in an experiment which demonstrated both the strengths of distributed cognition, and one of the failings of all human cognition about which we should be on our guard. He prepared us for this by getting us to think about the fairground competition that invites us to guess how many jelly beans are in a large jar. If the first guess is visibly posted up, subsequent guesses tend to bracket this; whereas if each person remains unaware of the other guesses, cognition remains distributed and the average of the guesses is more accurate.
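
A toy simulation makes the jelly-bean point (the numbers here are invented purely for illustration): independent guesses scattered around the true value average out well, whereas guesses anchored on the first posted guess inherit whatever bias it had.

    # Wisdom-of-crowds versus anchoring, with made-up numbers.
    import random

    random.seed(1)
    TRUE_COUNT = 500          # the actual number of jelly beans
    CROWD = 200

    independent = [random.gauss(TRUE_COUNT, 100) for _ in range(CROWD)]

    first_posted = random.gauss(TRUE_COUNT, 100)   # may be well off the mark
    anchored = [random.gauss(first_posted, 30) for _ in range(CROWD)]

    print("truth:", TRUE_COUNT)
    print("mean of independent guesses:", round(sum(independent) / CROWD))
    print("first posted guess:", round(first_posted))
    print("mean of anchored guesses:", round(sum(anchored) / CROWD))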

Our task was to watch a video in which three students dressed in white T-shirts walked within a circle passing a basketball between them, while another three dressed in black T-shirts circled among them, likewise passing a ball. How many times was the ball passed between the students in the white shirts?

This visual attention test wasn’t new to me, as I had read about Daniel Simons’ experiment in Scientific American a couple of years ago. Most people concentrating on this task fail completely to notice that half way through the clip (see it here), someone in a gorilla suit shambles in from the right, grins at the camera and beats his chest, then leaves. Many in this audience likewise missed the gorilla — but as a thumbs-up to distributed cognition, the reported numbers of passes neatly bracketed the correct answer, which is 14. (I anticipated the gorilla, but failed to count three of the passes as a result!)

Given how we were prepped for the task, the gorilla was a typical ‘weak signal’ that slipped off most people’s radar. There is a moral to this: we don’t tend to notice what we are not looking for. According to Dave, people on the mild side of the autistic spectrum do tend to notice the gorilla despite concentrating on the task.

Stories and filters

With both positive and negative effects, human beings and societies are closely attuned to stories. Indeed a book by John D Niles characterises our species as Homo narrans. Families, workgroups, companies and whole cultures have stories, and dominant narratives act as primary filters through which we see (or fail to see) data. (Indeed, I see this as related to the idea of a ‘paradigm’ anyway.)

But what if we were able to capture stories and make use of them to build business intelligence? Dave’s clients are coming to value the freeform comment over conventional forms of survey, and also in preference to the output of focus groups (in which the danger of bias from the facilitator is high). At Cognitive Edge they have been developing methods and software tools which harness the illuminating power of stories alongside distributed cognition, and make the raw data available directly to decision makers together with useful quantitative summaries and visualisations — what Dave calls disintermediation.

Dave gave several examples of this in practice, but one will suffice for here. In the My Visit project, National Museums Liverpool wanted to collect and analyse feedback from the hundreds of thousands of school children who visit and interact with its staff. The children leave small text comments, and are asked to ‘self-signify’ the comment fragments by placing them on a sliding scale between two negative polar opposites. Examples of scales:

  • From ‘staff patronise the children’ to ‘staff are too childlike and pathetic’
  • From ‘too much to see and it’s overwhelming’ to ‘not enough to keep me interested’
  • From ‘rushed from place to place; missed things’ to ‘too much time in one place’

The software provided by Cognitive Edge (SenseMaker) presents the spectra of results as a histogram. Obviously, what the museum management would like to see would be positive stories that sit comfortably in the middle of these negative extremes. But to detect problems and fix them early, what they keep their eye on is the histogram columns at the negative edges. The long-term pattern displays as blue bars, the last 48 hours’ results show in red. Behind the simple display is a database of all the comment fragments, and the management can drill down quickly to read the individual, unmediated complaints.
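
As a rough illustration of that display logic (this is my own sketch, not the SenseMaker product; the data, scale and field layout are invented), one can imagine each fragment carrying a position on a 0–1 scale and a timestamp, binned once for the long-term view and once for the last 48 hours:

    # Illustrative only: bin self-signified fragments into a histogram,
    # separating the long-term view ('blue bars') from the last 48 hours
    # ('red bars') so that spikes at the negative edges stand out early.
    from datetime import datetime, timedelta

    fragments = [
        # (position on one scale: 0.0 and 1.0 are the two negative poles,
        #  0.5 is the comfortable middle), timestamp of the comment
        (0.05, datetime(2009, 9, 20, 10, 0)),
        (0.50, datetime(2009, 9, 24, 11, 0)),
        (0.95, datetime(2009, 9, 24, 14, 0)),
        (0.48, datetime(2009, 9, 24, 15, 30)),
    ]

    def histogram(positions, bins=10):
        counts = [0] * bins
        for pos in positions:
            counts[min(int(pos * bins), bins - 1)] += 1
        return counts

    now = datetime(2009, 9, 25, 9, 0)
    cutoff = now - timedelta(hours=48)

    all_time = histogram([p for p, _ in fragments])
    last_48h = histogram([p for p, t in fragments if t >= cutoff])

    print("all time :", all_time)   # long-term pattern
    print("last 48h :", last_48h)   # recent pattern; watch the outer bins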

The SenseMaker display lets museum staff monitor the emergence of trends and drill down to read comments left by children. Image by permission of Dave Snowden.

Feedback from the system has helped the museum to refine and develop over twenty learning activities for children, and has quadrupled the figures for learning visits.

Dave showed other examples covering e.g. attitudes of employees to their leadership, scenario elicitation within a Canadian forestry service, and so on. One of the latest uses of SenseMaker sounds fascinating and I look forward to learning more: the Children of the World project aims to create a cultural map of the world, getting children to gather stories from their families that are reflections about past and present life, future hopes and aspirations. Starting in Liverpool, the project will branch out soon to Australia and Canada, Bangladesh and Africa.

Shifting paradigms

Summing up, Dave argued that while there is nothing wrong with the BPR paradigm for the things it’s good at, it doesn’t deal well with complexity. So what are the practical implications for organisations prepared to make the switch?

  • The demands of weak signal detection imply setting aside periodical analyses based on surveys and focus groups, in favour of narrative-based research that continuously captures and displays a disintermediated evidence base, as described above.
  • Rather than determining outcomes and measuring performance against targets (what’s ruining UK health and education), shift to measuring the impacts of activities and allow for emergence and adjustment.
  • Rather than using centralised scenario planning, set up systems which allow for scenarios to be generated by employees.
  • Break people out of their boxes. If you can assemble ‘crews’ with a membership that spans functional job boundaries and put the crews ‘on watch’, you will have at your service a team whose skills exceed those of any one of its members.
  • Best practice databases are all very well, but they lead you to rely on what you ‘learned’ in the past. Dave recommends knowledge management practices that collect narrative fragments.
  • ‘Practice-Informed Theory’ similarly assumes stability and doesn’t deal well with emergent behaviours and situations. Indeed Dave felt that more ‘Theory-Informed Practice’ is desirable. For example, if there were more professional ethical principles applied in banking and accounting, would we be in the mess we’re in now?

WITH HYPERINFLATION IN ZIMBABWE currently running at over eleven million percent, the presses at the state-owned Fidelity Printers are finding it hard to keep up with demand. Within weeks of a new banknote denomination being created — for example, the Z$100 billion note released on 21st July and worth about 7 pence — it has almost entirely lost its value. Yet the presses are crucial in maintaining Mugabe in power: just imagine what would happen if the soldiers could no longer be paid.

On 24th July, The Guardian reported an unexpected threat to the Mugabe régime: the German company Giesecke & Devrient, which supplies the watermarked banknote paper on which the Zimbabwean currency is printed, cut off supplies under pressure from the German government. Harare has anxiously been trying to find an alternative supply, reportedly from Malaysia.

Intriguingly, the newspaper also reported that Fidelity were in a panic about the European software that they use to create the new banknote designs:

A source inside Fidelity Printers said the software issue had created an air of panic. “It’s a major problem. They are very concerned that the licence will be withdrawn or not renewed. They are trying to find ways around it, looking at the software, but it’s very technical. They are in a panic because without the software they can’t print anything,” he said.

The software in question is supplied by an Austro-Hungarian company, Jura JSP GmbH. Their ’GS’ high-security pre-press software suite is essentially a layout tool for banknotes and securities that also generates all the complex anti-forgery features needed.

However, also on 24th July, and obviously in response to the press speculation, the Jura Group put out a press release that was strangely self-contradictory. It included these statements:

The software delivered in 2001 in accordance with the contract allows only for the graphic design of banknotes, and serves in particular for applying forgeryproof security features on banknotes. It is stressed here that the production of banknotes using the software of JURA JSP can be ruled out for technical reasons. Therefore, the Mugabe regime can produce banknotes anytime without the software by JURA JSP – by loosing [sic] the high security features.

It is de facto impossible to prevent Fidelity Printers and Refiners (PVT) Ltd. from using the software, since the software was installed locally and cannot be removed by JURA JSP.

Mind you, it is odd to think of there being such panic about installing sophisticated security features on billion-dollar banknotes that can hardly buy a biscuit, and will be worthless in weeks. What forger would waste their time copying that?

It is equally odd to think that the ability of a government to pay its troops may rest in the hands of a few graphic designers and press operators.

IT IS USEFUL, having emailed someone who works in an office, to receive back an automated message that they are not at work today, and so you cannot expect an immediate answer. But having just received an ‘Out of Office’ message once again from a staff member of the British Computer Society — on a Saturday, note you, when I didn’t expect him to be there anyway! — I think it’s time these tools gained some intelligence.

The BCS is far from unique in relying for much of what it does on a dispersed army of volunteers, who work from home, and mostly out of office hours and at the weekend. I shall be spending much of this August Bank Holiday weekend preparing for a BCS meeting. Now, I know that BCS staff don’t spend their weekends at the office, and so does everybody else. Couldn’t the ‘Out of Office’ message system be tailored to shut down over the weekend?

(End of small rant.)