Author: Richard Barbrook

NEW YORK PROPHECIES: THE IMAGINARY FUTURE OF ARTIFICIAL INTELLIGENCE

The Future Is What It Used To Be

‘Biological intelligence is fixed, because it is an old, mature paradigm, but the new paradigm of non-biological computation and intelligence is growing exponentially. The crossover will be in the 2020s and after that, at least from a hardware perspective, non-biological computation will dominate…’
Kurzweil 2004, p. 3.

At the beginning of the 21st century, the dream of artificial intelligence is deeply embedded within the modern imagination. From childhood onwards, people in the developed world are told that computers will one day be able to reason – and even feel emotions – just like humans. In science fiction stories, thinking machines have long been favourite characters. Audiences have grown up with images of robot buddies like Data in Star Trek TNG and of pitiless monsters like the cyborg in The Terminator (Startrek.com 2005; Cameron 1984). These science fiction fantasies are encouraged by confident predictions from prominent computer scientists: continual improvements in hardware and software will eventually lead to the creation of artificial intelligences more powerful than the ‘biological intelligence’ of the human mind.

Commercial developers are already looking forward to selling sentient machines which can do the housework and help the elderly (Honda 2004). Some computer scientists even believe that the creation of ‘non-biological intelligences’ is a spiritual quest. In California, Ray Kurzweil, Vernor Vinge and their colleagues are eagerly waiting for the Singularity: the First Coming of the Silicon Messiah (Vinge 1993; Bell 2004). Whether inspired by money or mysticism, all these advocates of artificial intelligence share the conviction that the present must be understood as the future in embryo – and the future illuminates the potential of the present. Every advance in computing technology is heralded as another step towards the creation of a fully conscious machine. The prophecy of artificial intelligence comes closer to fulfilment with the launch of each new piece of software or hardware. It is not what computers can do now that is important but what they are about to do. The present is the beta version of a science fiction dream: the imaginary future.

Despite its cultural prominence, the meme of sentient machines is vulnerable to theoretical exorcism. Far from being a free-floating signifier, this prophecy is deeply rooted in time and space. Not surprisingly, contemporary boosters of artificial intelligence rarely acknowledge the antiquity of the concept itself. They want to move forwards, not look backwards. Yet, it’s over forty years since the dream of thinking machines first gripped the public’s imagination. The imaginary future of artificial intelligence has a long history. Analysing the original version of this prophecy is the precondition for understanding its contemporary iterations. With this motivation in mind, let’s go back to the second decade of the Cold War when the world’s biggest computer company put on a show about the wonders of thinking machines in the financial capital of the wealthiest and most powerful nation on the planet…

A Millennium Of Progress

On the 22nd April 1964, the New York World’s Fair was opened to the general public. During the next two years, this modern wonderland welcomed over 51 million visitors. Every section of the American elite was represented at the exposition: the federal government, US state governments, large corporations, financial institutions, industry lobbies and religious groups. The World’s Fair proved that the USA was the leader in everything: consumer goods, democratic politics, show business, modernist architecture, fine art, religious tolerance, domestic living and, above all else, new technology. As one of the exposition’s advertising slogans implied, a ‘millennium of progress’ had culminated in the American century (Stanton 2004, 2004a; Luce 1941).

This patriotic message was celebrated in the awe-inspiring displays of new technologies at the World’s Fair. Writers and film-makers had long fantasised about travelling to other worlds. Now, in NASA’s Space Park, the public could admire the huge rockets which had taken American astronauts into orbit (Editors of Time-Life Books 1964, p. 208; Laurence 1964, pp. 2-14). Despite its early setback when the Russians launched the first satellite in 1957, the USA was now on the verge of overtaking its rival in the ‘space race’ (Schefter 1999, pp. 145-231). Best of all, visitors to the World’s Fair were told that they too would have the opportunity to become astronauts in their own lifetimes. In the General Motors’ Futurama pavilion, Americans of the 1980s were shown taking their holidays on the moon (Editors of Time-Life Books 1964, p. 222). Other corporations were equally confident that the achievements of the present would soon be surpassed by the triumphs of tomorrow. At its Progressland pavilion, General Electric predicted that electricity generated by nuclear fusion reactors would be ‘too cheap to meter’ (Editors of Time-Life Books 1964, pp. 90-92; Laurence 1964, pp. 40-43). In the imaginary future of the World’s Fair, Americans would not only become space tourists but also be blessed with free energy.

For many corporations, the most effective method of proving their technological modernity was showcasing a computer. Clairol’s machine selected ‘the most flattering hair shades’ for female visitors and Parker Pen’s mainframe matched American kids with ‘pen pals’ in foreign countries (Editors of Time-Life Books 1964, pp. 86, 90). However impressive they might have appeared to their audience, these exhibits were nothing more than advertising gimmicks. In contrast, IBM was able to dedicate its pavilion exclusively to the wonders of computing as a distinct technology. For over a decade, this corporation had been America’s leading mainframe manufacturer. In 1961, a single product – the IBM 1401 – had accounted for a quarter of all the computers operating in the USA (Pugh 1995, pp. 265-267). In the minds of many Americans, IBM was computing. Just before the opening of the World’s Fair, the corporation had launched a series of products which would maintain its dominance over the industry for another two decades: the System/360 (DeLamarter 1986, pp. 54-146). Seizing the opportunity for self-promotion offered by the exposition, the bosses of IBM commissioned a pavilion designed to eclipse all others. Eero Saarinen – the renowned Finnish architect – supervised the construction of the building: a white, corporate-logo-embossed, egg-shaped theatre which was suspended high in the air by 45 rust-coloured metal trees. Underneath this striking feature were interactive exhibits celebrating IBM’s contribution to the computer industry (Stern, Mellins and Fishman 1997, pp. 1046-1047).

For the theatre itself, Charles and Ray Eames – the couple who epitomised American modernist design – created the main attraction at the IBM pavilion: ‘The Information Machine’. After taking their places in the 500-seat ‘People Wall’, visitors were elevated into the egg-shaped structure. Once they were inside, a narrator introduced a ‘mind-blowing’ multi-media show about how the machines exhibited in the IBM pavilion were forerunners of the sentient machines of the future. Computers were in the process of acquiring consciousness (Editors of Time-Life Books 1964, pp. 70-74; Laurence 1964, pp. 57-58). In the near future, every American would own a devoted mechanical servant just like Robby the Robot in the popular 1956 sci-fi film Forbidden Planet (Wilcox 1999). At the New York World’s Fair, IBM proudly announced that this dream of artificial intelligence was finally about to be realised. With the launch of the System/360 series, mainframes were now powerful enough to construct the prototypes of a fully conscious computer.

The IBM pavilion’s stunning combination of avant-garde architecture and multi-media performance was a huge hit with both the press and the public. Alongside space rockets and nuclear reactors, the computer had confirmed its place as one of the three iconic technologies of modern America. The ideological message of these machines was clear-cut: the present was the future in embryo. Within the IBM pavilion, computers existed in two time frames at once. On the one hand, the current models on display were prototypes of the sentient machines of the future. On the other hand, the vision of computer consciousness showed the true potential of the mainframes exhibited in the IBM pavilion. At the 1964 New York World’s Fair, the launch of the System/360 series was celebrated as the harbinger of the imaginary future of artificial intelligence.

‘Duplicating the problem-solving and information-handling capabilities of the [human] brain is not far off; it would be surprising if it were not accomplished within the next decade.’
Simon 1965, p. 39.

Inventing the Thinking Machine

The futurist fantasies of IBM’s multi-media show were inspired by the dispassionate logic of the academy. Alan Turing – the founding father of computer science – had defined the development of artificial intelligence as the long-term goal of this new discipline. In the mid-1930s, this Cambridge mathematician had published the seminal article which described the abstract model for a programmable computer: the ‘universal machine’ (Turing 2004). During the Second World War, his team of engineers had created a pioneering electronic calculator to speed up the decryption of German military signals. When the conflict was over, Turing moved to Manchester University to join a team of researchers who were building a programmable machine. As proposed in his 1936 article, software would be used to enable the hardware to perform a variety of different tasks. On 21st June 1948, before he’d even taken up his new post, Turing’s colleagues switched on the world’s first electronic stored-program computer: Baby. The theoretical concept described in an academic journal had taken material form as an enormous metal box filled with valves, switches, wires and dials (Turing 2004a; Agar 2001, pp. 3-5, 113-124; Hodges 1992, pp. 314-402).
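
For readers unfamiliar with Turing’s abstraction, a minimal sketch may help: a ‘machine’ is nothing more than a finite table of rules for reading and writing symbols on an unbounded tape, and a program that can simulate any such table is, in miniature, the universal machine, with software (the rule table) directing general-purpose hardware (the simulator). The Python below is purely illustrative; the particular rule table is an arbitrary example and comes from neither Turing’s papers nor this article’s sources.

```python
# Illustrative sketch only: a toy simulator for Turing's abstract 'machine'.
# The rule table below is an arbitrary example (a small machine that halts
# after writing six 1s), not taken from Turing's 1936 paper.

from collections import defaultdict

# rules: (state, symbol_read) -> (symbol_to_write, head_move, next_state)
RULES = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'C'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'B'),
    ('C', 0): (1, -1, 'B'), ('C', 1): (1, +1, 'HALT'),
}

def run(rules, max_steps=1000):
    tape = defaultdict(int)   # unbounded tape; blank cells read as 0
    head, state = 0, 'A'
    for _ in range(max_steps):
        if state == 'HALT':
            break
        symbol, move, state = rules[(state, tape[head])]
        tape[head] = symbol   # write, then move the read/write head
        head += move
    return sum(tape.values())  # number of 1s left on the tape

if __name__ == '__main__':
    print(run(RULES))  # prints 6 for this example rule table
```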

For Turing, Baby was much more than just an improved version of the office tabulator. When software could control hardware, counting became consciousness. In a series of seminal articles, Turing argued that his mathematical machine was the precursor of an entirely new life form: the mechanical mathematician. He backed up this prediction by defining human intelligence as what computers could do. Since calculating was a sophisticated type of thinking, calculating machines must be able to think. If children acquired knowledge through education, educational software could create knowledgeable computers. Because the human brain worked like a machine, it was obvious that a machine could behave like an electronic brain (Turing 2004a, 2004b, 2004d, 2004e; Schaffer 2000). According to Turing, although the early computers weren’t yet powerful enough to fulfil their true potential, continual improvements in hardware and software would – sooner or later – overcome these limitations. In the second half of the twentieth century, computing technology was rapidly evolving towards its preordained destiny: artificial intelligence.

‘The memory capacity of the human brain is probably of the order of ten thousand million binary digits. But most of this is probably used in remembering visual impressions, and other comparatively wasteful ways. One might reasonably hope to be able to make some real progress [towards artificial intelligence] with a few million digits [of computer memory].’
Turing 2004a, p. 393.

In his most famous article, Turing described a test for identifying the winner of this race to the future. Once an observer couldn’t tell whether they were talking with a human or a computer in an on-line conversation, then there was no longer any substantial difference between the two types of consciousness. If the imitation was indistinguishable from the original, the machine must be thinking. The computer had passed the test (Turing 2004c, pp. 441-448; Schaffer 2000). From this point onwards, computers were much more than just practical tools and tradable commodities. As Turing’s articles explained, the imaginary future of artificial intelligence revealed the transformative potential of this new technology. Despite their shortcomings, the current models of computers were forerunners of the sentient machines to come.

By the late-1940s, the catechism of artificial intelligence had been fixed. Within computing, what was and what will be were one and the same thing. Despite this achievement, Turing was a prophet whose influence was waning within his own country. The computer might have been invented in Britain, but its indebted government lacked the resources to dominate the development of this new technology (Agar 2003, pp. 266-278). Across the Atlantic, the situation was very different. During the Second World War, the American government had also provided generous funding for research into electronic calculation. Crucially, when victory was won, the scientists working on these projects had little difficulty in keeping their funding. While money was scarce in Britain, the USA could easily afford to pay for cutting-edge research into new technologies. Once the Cold War was underway, American politicians had no problem in justifying these subsidies to their constituents (Leslie 1993, pp. 1-13; Lewontin 1997).

In the USA, computer scientists possessed another major advantage over their British rivals: the meta-theory of cybernetics. During the late-1940s and early-1950s, a group of prominent American intellectuals came together at the Macy conferences to explore ways of breaking down the barriers between the various academic disciplines (Heims 1991). From the outset, Norbert Wiener was recognised as the guru of this endeavour. While working at MIT, he had devised cybernetics as a new theoretical framework for analysing the behaviour of both humans and machines. The input of information about the surrounding environment led to the output of actions designed to transform the environment. Dubbed ‘feedback’, this cycle of stimulus and response reversed the spread of entropy within the universe. Order could be created out of chaos (Wiener 1948, pp. 74-136). According to Wiener, this master theory described all forms of purposeful behaviour. Whether in humans or machines, there was continual feedback between information and action. The same mathematical equations could be used to describe the behaviour of living organisms and technological systems (Wiener 1948, pp. 168-191). Echoing Turing, this theory implied that it was difficult to tell the difference between humans and their machines (Wiener 1948, pp. 21, 32-33). In 1948, Wiener outlined his new master theory in a book filled with pages of mathematical proofs: Cybernetics – or control and communication in the animal and the machine.
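
Wiener’s feedback loop is easy to state concretely. As a purely illustrative sketch (the thermostat scenario and every number in it are invented for this example, not drawn from Cybernetics), the short Python below shows the cycle he described: information about the environment comes in, corrective action goes out, and the altered environment is sensed again on the next pass.

```python
# Illustrative sketch only: a toy negative-feedback loop of the kind Wiener's
# cybernetics describes. A controller senses its environment (input of
# information), acts to reduce the error (output of action), and the changed
# environment feeds back into the next sensing step. All numbers are arbitrary.

def run_thermostat(target=20.0, temperature=5.0, gain=0.3, steps=20):
    readings = []
    for _ in range(steps):
        error = target - temperature   # information: gap between goal and world
        heating = gain * error         # action: proportional corrective response
        temperature += heating         # the environment changes...
        readings.append(round(temperature, 2))
    return readings                    # ...and is sensed again on the next loop

if __name__ == '__main__':
    # The readings converge on the target: 'order created out of chaos'.
    print(run_thermostat())
```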

Much to his surprise, this academic had written a bestseller. For the first time, a common set of abstract concepts covered both the natural sciences and the social sciences. Wiener’s text had provided potent metaphors for describing the new hi-tech world of Cold War America. Even if they didn’t understand his mathematical equations, readers could easily recognise cybernetic systems within the social institutions and communication networks which dominated their everyday lives. Feedback, information and systems soon became incorporated into popular speech (Conway and Siegelman 2005, pp. 171-194; Heims 1991, pp. 271-272). Despite this public acclaim, Wiener remained an outsider within the US intelligentsia. Flouting the ideological orthodoxies of Cold War America, this guru was a pacifist and a socialist.

In the early-1940s, Wiener – like almost every US scientist – had believed that developing weapons to defeat Nazi Germany benefited humanity. When the Cold War started, his military-funded colleagues claimed that their research work was also contributing to the struggle against an aggressive totalitarian enemy (Lewontin 1997). Challenging this patriotic consensus, Wiener argued that American scientists should adopt a very different stance in the confrontation with Russia. He warned that the nuclear arms race could lead to the destruction of humanity. Faced with this dangerous new situation, responsible scientists should refuse to carry out military research (Conway and Siegelman 2005, pp. 237-243, 255-271). During the 1950s and early-1960s, Wiener’s political dissidence inspired his socialist interpretation of cybernetics. In the epoch of corporate monopolies and atomic weaponry, the theory that explained the behaviour of both humans and machines must be used to place humans in control of their machines. Abandoning his earlier enthusiasm for Turing’s prophecy, Wiener now emphasised the dangers posed by sentient computers (Wiener 1966, pp. 52-60, 1967, pp. 239-254). Above all, this attempt to build artificial intelligences was a diversion from the urgent task of creating social justice and global peace.

‘The world of the future will be an ever more demanding struggle against the limitations of our own intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.’
Wiener 1966, p. 69.

By opposing the militarisation of scientific research, the founder of cybernetics had embarrassed his sponsors among the US elite. Fortunately for the rulers of America, there was another brilliant mathematician at the Macy conferences who was also a fanatical Cold War warrior: John von Neumann. Traumatised by the nationalisation of his family’s bank during the 1919 Hungarian revolution, von Neumann held anti-communist convictions so extreme that in the mid-1940s he had argued in favour of launching a pre-emptive war to stop Russia acquiring nuclear weapons (Heims 1980, pp. 235-236, 244-251). While playing a leading role in developing the atomic bomb, von Neumann had already applied his mathematical and organisational talents to the new field of computing. When the first Macy conference was held in 1946, his team of researchers were working on building a prototype mainframe for the US navy (Ceruzzi 2003, pp. 21-24). In von Neumann, the American empire had found a guru without any trace of heresy.

At the early Macy conferences, the political differences among their attendees weren’t apparent. United by the anti-fascist struggle, Wiener and von Neumann could champion the same concept of cybernetics (Conway and Siegelman 2005, pp. 143-149; Heims 1980, pp. 201-207). But, as their politics diverged, these two stars of the Macy conferences began advocating rival interpretations of this meta-theory. In its left-wing version, artificial intelligence was denounced as the apotheosis of technological domination. When he formulated his right-wing remix, von Neumann took cybernetics in exactly the opposite direction. Above all, his interpretation emphasised that this master theory had been inspired by the prophecy of thinking machines. Back in the mid-1930s, Turing had briefly worked with von Neumann at Princeton. A decade before his own involvement in computing, this Hungarian scientist had known all about the idea of the universal machine. When two Chicago psychologists applied Turing’s theory in the early-1940s to explain the processes of human thought, von Neumann became fascinated by the implications of their speculations.

Since the mechanical calculator was modelled on the human brain, Warren McCulloch and Walter Pitts argued that consciousness was synonymous with calculation. Like the electrical contacts of an IBM tabulator, neurons were switches which transmitted information in binary form (McCulloch and Pitts 1943). Entranced by this inversion of Turing’s line of argument, von Neumann became convinced that it was theoretically possible to build a thinking machine. If neurons acted as switches within the human brain, valves could be used to create an electronic brain (von Neumann 1966, pp. 43-46, 1976, pp. 308-311). When he was working on computer research for the US military, von Neumann used the human brain as the model for his eponymous computer architecture. Echoing Turing, this prophet claimed that – as the number of valves in a computer approached those of the neurons in the brain – the machine would be able to think (von Neumann 1966, pp. 36-41, 1976, pp. 296-300, 2000, pp. 39-52). Within a decade, von Neumann and his colleagues would be equipping the US military with cybernetic soldiers capable of fighting and winning a nuclear war.
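
The McCulloch-Pitts model itself is simple enough to write down. As an illustrative sketch only (the weights and thresholds are arbitrary examples, not taken from the 1943 paper), a ‘neuron’ is a binary threshold switch, and small networks of such switches reproduce elementary logic, which is why calculation and thought could be equated so easily.

```python
# Illustrative sketch only: the McCulloch-Pitts picture of a neuron as a
# binary threshold switch. Inputs and output are 0 or 1; the unit 'fires'
# when the weighted sum of its inputs reaches a threshold. The weights and
# thresholds below are arbitrary examples, not taken from the 1943 paper.

def mp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Simple logic gates fall out of the model, which is why networks of such
# 'switches' could be treated as carrying out calculation:
AND = lambda a, b: mp_neuron([a, b], weights=[1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], weights=[1, 1], threshold=1)
NOT = lambda a: mp_neuron([a], weights=[-1], threshold=0)

if __name__ == '__main__':
    print(AND(1, 1), OR(0, 1), NOT(1))  # prints: 1 1 0
```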

‘Dr. McCulloch: How about designing computing machines so that if they were damaged in air raids … they could replace parts … and continue to work?
Dr. von Neumann: These are really quantitative rather than qualitative questions.’
von Neumann 1976, p. 324.

By the early-1950s, the USA’s academic and corporate research teams had seized the leadership of computing from their British rivals. From then onwards, all of the most advanced machines were made in America (Ceruzzi 2003, pp. 13-46). Yet, by creating cybernetics without Wiener, von Neumann had ensured that these US laboratories would also follow Turing’s path to the imaginary future: building artificial intelligence. The metaphor of feedback now proved that computers operated like humans. Inputs of information led to outputs of action. If the mind operated like a machine, then it must be possible to develop a machine which duplicated the functions of the mind. Computers could already calculate faster than their human inventors. Mastering the complexities of mathematical logic must be the first step towards endowing these machines with the other attributes of human consciousness. Language was a set of rules which could be codified as software. Learning from new experiences could be programmed into hardware (Minsky 2004, 2004a). Throughout the 1950s and early-1960s, American scientists worked hard to build the thinking machine. Once it had enough processing power, the computer would achieve consciousness (Edwards 1996, pp. 239-273). When IBM launched its System/360 mainframe at the 1964 World’s Fair, Turing’s dream appeared to be close to realisation.

Cold War Computing

A quarter of a century earlier, one of the stars of the 1939 New York World’s Fair had been Elektro: a robot which – according to its publicists – ‘… could walk, talk, count on its fingers, puff a cigarette, and distinguish between red and green with the aid of a photoelectric cell’ (Eames and Eames 1973, p. 105). For all its fakery, this exhibit was the first iteration of the imaginary future of artificial intelligence. Before the 1939 World’s Fair, robots in books and movies had almost always been portrayed as Frankenstein monsters intent on destroying their human creators (Shelley 1969; Lang 2003). Inspired by his visits to the exposition, Isaac Asimov decided to change this negative image. Just like Elektro, the robots in his sci-fi stories were safe and friendly products of a large corporation (Asimov 1968, 1968a). By the time that the 1964 New York World’s Fair opened, this positive image of artificial intelligence had become one of the USA’s most popular imaginary futures. In both science fiction and science fact, the robot servant was the symbol of better times to come.

At the 1939 New York World’s Fair, Elektro was competing against the technological icon of the moment: the motor car. The must-see attractions were Democracity – a model featured in the New York State’s Perisphere building – and Futurama – a diorama inside the General Motors’ pavilion. Both exhibits promoted a vision of an affluent and hi-tech America of the 1960s. In this imaginary future, the majority of the population lived in family homes in the suburbs and commuted to work in their own motor cars (Exposition Publications 1939, pp. 42-45, 207-209). For most visitors to the 1939 New York World’s Fair, this prophecy of consumer prosperity must have seemed like a utopian dream. The American economy was still recovering from the worst recession in the nation’s history, Europe was on the brink of another devastating civil war and East Asia was already engulfed by murderous conflicts. Yet, by the time that the 1964 World’s Fair opened, the most famous prediction of the 1939 exposition had been realised. However sceptical visitors might have been back in 1939, the Democracity and Futurama dioramas seemed remarkably prescient twenty-five years later. By the early-1960s, America was a suburban-dwelling, car-owning consumer society. The imaginary future had become contemporary reality.

‘The motor car … directs [social] behaviour from economics to speech. Traffic circulation is one of the main functions of a society … Space [in urban areas] is conceived in terms of motoring needs and traffic problems take precedence over accommodation … it is a fact that for many people the car is perhaps the most substantial part of their ‘living conditions’.’
Lefebvre 1984, p. 100.

Since the predictions of the 1939 exposition had largely come true, visitors to the 1964 New York World’s Fair could have been forgiven for believing that its three main imaginary futures would also be realised. Who could doubt that – by 1990 at the latest – the majority of Americans would be enjoying the delights of space tourism and unmetered electricity? Best of all, they would be living in a world where sentient machines were their devoted servants. However, the American public’s confidence in these imaginary futures would have been founded upon a mistaken sense of continuity. Despite being held on the same site and having many of the same exhibitors, the 1964 World’s Fair had a very different focus from its 1939 antecedent. Twenty-five years earlier, the centrepiece of the exposition had been the motor car: a mass produced consumer product. In contrast, the stars of the show at the 1964 World’s Fair were state-funded technologies for fighting the Cold War. Computers calculated the trajectories which would guide American missiles armed with nuclear bombs to destroy Russian cities and their unfortunate inhabitants (Isaacs and Dowling 1998, pp. 230-243). While its 1939 predecessor had showcased motorised transportation for the masses, the 1964 exposition celebrated the machines of atomic armageddon.

When the IBM pavilion was being designed, the corporation had to deal with this public relations problem. Like nuclear reactors and space rockets, computers had also been developed as Cold War weaponry. ENIAC – the first prototype mainframe built in America – was a machine for calculating tables to improve the accuracy of artillery guns (Ceruzzi 2003, p. 15). From the early-1950s onwards, IBM’s computer division was focused on winning orders from the Department of Defence (Pugh 1995, pp. 167-172). Using mainframes supplied by the corporation, the US military prepared for nuclear war, organised invasions of ‘unfriendly’ countries, directed the bombing of enemy targets, paid the wages of its troops, ran complex war games and managed its supply chain (Berkeley 1962, pp. 56-7, 59-60, 137-145). Thanks to the American taxpayers, IBM had become the technological leader of the computer industry.

When the 1964 New York World’s Fair opened, IBM was still closely involved in a wide variety of military projects. Yet, its pavilion was dedicated to promoting the sci-fi fantasy of thinking machines. Like the predictions of unmetered energy and space tourism, the imaginary future of artificial intelligence distracted visitors at the World’s Fair from discovering the original motivation for developing IBM’s mainframes: killing millions of Russian civilians. Although the superpowers’ imperial hegemony depended upon atomic weapons, the threat of global annihilation made their possession increasingly problematic. Two years earlier, the USA and Russia had almost blundered into a catastrophic war over Cuba (Dallek 2003, pp. 535-574). Despite disaster being only narrowly averted, the superpowers were incapable of stopping the arms race. In the bizarre logic of the Cold War, the prevention of an all-out confrontation between the two blocs depended upon the continual growth in the number of nuclear weapons held by both sides. The ruling elites of the USA and Russia had difficulties in admitting to themselves – let alone to their citizens – the deep irrationality of this new form of military competition. In a rare moment of lucidity, American analysts invented an ironic acronym for this high-risk strategy of ‘mutually assured destruction’: MAD (Isaacs and Dowling 1998, pp. 230-243; Kahn 1960, pp. 119-189).

Not surprisingly, the propagandists of both sides justified the enormous waste of resources on the arms race by promoting the peaceful applications of the leading Cold War technologies. By the time that the 1964 New York World’s Fair opened, the weaponry of genocide had been successfully repackaged into people-friendly products. Nuclear power would soon be providing unmetered energy for everyone. Space rockets would shortly be taking tourists for holidays on the moon. Almost all traces of the military origins of these technologies had disappeared. Similarly, in the IBM pavilion, the System/360 mainframe was promoted as the harbinger of artificial intelligence. Visitors were expected to admire the technological achievements of the corporation, not to question its dubious role in the arms race. The horrors of the Cold War present had been hidden by the marvels of the imaginary future.

IBM’s ideological legerdemain was made much easier by one of the distinguishing features of industrial modernity: the breaking of the explicit links between products and their producers. For millennia, warriors and priests had overtly extracted the surpluses of the peasantry for their own benefit. But, when Europeans started to privatise land ownership and mechanise handicraft production, a new – and more advanced – economic system was born: liberal capitalism. Entrepreneurs proved that price competition could indirectly coordinate human labour much more efficiently than the direct methods of feudalism. Adventurers discovered that selling commodities in the world market was much more profitable than rack-renting peasants in one locality. For the first time in human history, the productive potential of collective labour was being realised (Smith 1976, pp. 1-287, 401-445; Marx 1976, pp. 762-940).

In this new economy, people were required to interact with each other through things: commodities, money and capital. The distribution and division of labour across the economy was regulated by the prices and wages set by market competition. However, the demise of the aristocracy and the priesthood hadn’t ended class rule. When labour was bought and sold in the capitalist economy, equality within the marketplace resulted in inequality inside the workplace (Marx 1976, pp. 270-280; Rubin 1972, pp. 77-253). Because commodities were exchanged with others of equivalent value, this new form of class rule was very different from its predecessor. Indirect exploitation had replaced direct domination. Above all, the impersonal movements of the markets now determined the destiny of individuals. Things, not people, now ruled the world.

‘The mysterious character of the commodity-form consists … simply in the fact that the commodity reflects the social characteristics of [women and] men’s own labour as objective characteristics of the products of labour themselves, as the socio-natural properties of these things. Hence it also reflects the social relations of the producers to the sum total of labour as a social relation between objects, a relation which exists apart from and outside the producers.’
Marx 1976, pp. 164-165.

The growth of capitalism created a much more complex division of labour within the economy. In parallel with the proliferation of factory and office jobs, scientific research also emerged as a distinct profession (Bahr 1980; Smith 1976, pp. 7-16). Successful firms grew by not only employing more workers but also investing in new machinery. In the fetishised world of capitalism, the growth in productivity caused by the increasingly sophisticated cooperation of factory labour and scientific research was expressed as the development of cutting-edge technologies. With human creativity hidden behind the commodity, the process of modernity had acquired a highly visible object as its subject: the ‘… automatic system of machinery … a moving power that moves itself.’ (Marx 1973, p. 692).

During the mid-twentieth century, the fetishisation of technology reached its apotheosis in the prophecy of artificial intelligence. Living in a society where new machinery appeared to be the driving force of social evolution, Turing and von Neumann’s claim that machines were evolving into living beings didn’t seem outlandish. When computers were operating, the labour involved in developing their hardware and writing their programs wasn’t immediately visible. Mesmerised by technological fetishism, the admirers of Turing and von Neumann had convinced themselves that an electronic brain would soon be able to think just like a human brain. In 1964, the calculating power of the System/360 mainframe was so great that this IBM machine seemed to be on the threshold of consciousness. The creation was about to become the creator.

Cybernetic Supremacy

At the 1964 World’s Fair, imaginary futures temporarily succeeded in concealing the primary purpose of its three iconic technologies from the American public. But even the flashiest showmanship couldn’t hide dodgy use values forever. As the decades passed, none of the predictions made at the World’s Fair about the key Cold War technologies were realised. Energy remained metered, tourists didn’t visit the moon and computers never became intelligent. Unlike the prescient vision of motoring for the masses at the 1939 World’s Fair, the prophecies about the star technologies of the 1964 exposition seemed almost absurd twenty-five years later. Hyper-reality had collided with reality – and lost.

Despite the failure of its prophecy, IBM suffered no damage. In stark contrast with nuclear power and space travel, computing was the Cold War technology which successfully escaped from the Cold War. Right from the beginning, machines made for the US military were also sold to commercial clients (Pugh 1995, pp. 152-155). By the time that IBM built its pavilion for the 1964 World’s Fair, the imaginary future of artificial intelligence had to hide more than the unsavoury military applications of computing. Commodity fetishism also performed its classic function of concealing the role of human labour within production. Computers were described as ‘thinking’ so the hard work involved in designing, building, programming and operating them could be discounted. Above all, the prophecy of artificial intelligence obscured the role of technological innovation within American workplaces.

The invention of computers came at an opportune moment for big business. During the first half of the twentieth century, large corporations had become the dominant institutions of the American economy. Henry Ford’s giant car factory became the eponymous symbol of the new social paradigm: Fordism (Ford and Crowther 1922; Aglietta 1979). When it boosted their profits, corporations replaced the indirect regulation of production by markets with direct supervision by bureaucrats. As the wage-bill for white-collar employees steadily rose, businesses needed increasing amounts of equipment to raise productivity within the office. Long before the invention of the computer, Fordist corporations were running an information economy with tabulators, typewriters and other types of office equipment (Beniger 1986, pp. 291-425). However, by the beginning of the 1950s, the mechanisation of clerical labour had stalled. Increases in productivity in the office were lagging well behind those in the factory. When the first computers appeared on the market, corporate managers quickly realised that the new technology offered a solution to this pressing problem (Sobel 1981, pp. 95-184). The work of large numbers of tabulator operators could now be done by a much smaller group of engineers using a mainframe (Berkeley 1962, p. 5). Even better, the new technology of computing enabled capitalists to deepen their control over their organisations. Much more information about many more topics could now be collected and processed in increasingly complex ways. The managers were masters of all that they surveyed.

Almost from its first appearance in the workplace, the mainframe was caricatured – with good reason – as the mechanical perfection of bureaucratic tyranny. In Asimov’s sci-fi stories, Mr and Mrs Average were the owners of robot servants. Yet, when the first computers arrived in America’s factories and offices, this new technology was controlled by the bosses not the workers. In 1952, Kurt Vonnegut published a sci-fi novel which satirised the authoritarian ambitions of corporate computing. In his dystopian future, the ruling elite had delegated the management of society to an omniscient artificial intelligence.

‘EPICAC XIV … decided how many [of] everything America and her customers could have and how much they would cost. And it … would decide how many engineers and managers and … civil servants, and of what skills, would be needed to deliver the goods; and what I.Q. and aptitude levels would separate the useful men [and women] from the useless ones, and how many … could be supported at what pay level…’
Vonnegut 1969, p. 106.

At the 1964 World’s Fair, the IBM pavilion promised that thinking machines would be the servants of all of humanity. Yet, at the same time, its sales personnel were telling the bosses of large corporations that computers were hard-wiring bureaucratic authority into modern society. Herbert Simon – one of America’s leading management theorists and a former colleague of von Neumann – foresaw that the increasing power of mainframes would enable companies to automate more and more clerical tasks. Just like in the factory, machines were taking over from human labour in the office. When artificial intelligence arrived, mainframes would almost completely replace bureaucratic and technical labour within manufacturing. The ultimate goal was the creation of the fully automated workplace. Companies would then no longer need either blue-collar or white-collar workers to make products or provide services. Even most managers would become surplus to requirements. Instead, thinking machines would be running the factories and offices of America (Simon 1965, pp. 26-52). In the imaginary future of artificial intelligence, the corporation and the computer would be one and the same thing.

This prediction was founded upon von Neumann’s conservative interpretation of cybernetics. In his management theory texts, Simon argued that the workings of a firm resembled the operations of a computer. As in McCulloch and Pitts’ psychology, this identification was made in two directions. Managing workers was equated with programming a computer. Writing software was like drawing up a business plan. Both employees and machinery were controlled by orders issued from above. The workers’ dystopia of the Big Brother mainframe had now mutated into the capitalist utopia of cybernetic Fordism. Ironically, the credibility of Simon’s managerial ideology depended upon his readers forgetting the fierce criticisms of corporate computing made by the founding father of cybernetics. Echoing Marx, Wiener had warned that the primary role of new technology under capitalism was to intensify the exploitation of the workers. Instead of creating more leisure time and improving living standards, the computerisation of the economy under Fordism would increase unemployment and cut wages (Wiener 1967, pp. 206-221). If Vonnegut’s dystopia was to be avoided, American trade unionists and political activists must mobilise against the corporate Golem (Wiener 1966, pp. 54-55). According to Wiener, cybernetics proved that artificial intelligence threatened the freedoms of humanity.

‘Let us remember that the automatic machine … is the precise equivalent of slave labour. Any labour which competes with slave labour must accept the economic conditions of slave labour.’
Wiener 1967, p. 220.

For business executives, Vonnegut and Wiener’s nightmare was their computer daydream. However, this corporate vision of cybernetic Fordism meant forgetting the history of Fordism itself. This economic paradigm had been founded upon the successful co-ordination of mass production with mass consumption. Ironically, since these exhibits were more closely connected to social reality, Democracity and Futurama in 1939 provided a much more accurate prediction of the development path of computing than the IBM pavilion did in 1964. Just like motor cars twenty-five years earlier, this new technology was also slowly being transformed from a rare, hand-made machine into a ubiquitous, factory-produced commodity. As in the Ford factories, IBM’s System/360 mainframes were manufactured on assembly-lines (Pugh, Johnson and Palmer 1991, pp. 87-105, 204-210). These opening moves towards the mass production of computers anticipated what would be the most important advance in this sector twenty-five years later: the mass consumption of computers. In its formal design, the 1964 System/360 mainframe was a bulky and expensive prototype of the much smaller and cheaper IBM PCs of 1989.

The imaginary future of artificial intelligence was a way of avoiding thinking about the likely social consequences of the widespread ownership of computers. In the early-1960s, Big Brother mainframe belonged to big government and big business. Above all, feedback was knowledge of the ruled monopolised by the rulers. However, as Wiener had pointed out, Fordist production would inevitably transform expensive mainframes into cheap commodities (Wiener 1967, pp. 210-211). In turn, increasing ownership of computers was likely to disrupt the existing social order. For the feedback of information within human institutions was most effective when it was two-way (Wiener 1967, pp. 67-73). By reconnecting conception and execution, cybernetic Fordism threatened the social hierarchies which underpinned Fordism itself.

‘… the simple coexistence of two items of information is of relatively small value, unless these two items can be effectively combined in some mind … which is able to fertilise one by means of the other. This is the very opposite of the organisation in which every member travels a preassigned path …’
Wiener 1967, p. 172.

At the 1964 World’s Fair, this possibility was definitely not part of IBM’s imaginary future. Rather than aiming to produce ever greater numbers of more efficient machines at cheaper prices, the corporation was focused on steadily increasing the capabilities of its computers to preserve its near-monopoly over the military and corporate market. Instead of room-sized machines shrinking down into desktops, laptops and, eventually, mobile phones, IBM was convinced that computers would always be large and bulky mainframes. The corporation fervently believed that – if this path of technological progress was extrapolated – artificial intelligence must surely result. Crucially, this conservative recuperation of cybernetics implied that sentient machines would inevitably evolve into lifeforms which were more advanced than mere humans. The Fordist separation between conception and execution would have achieved its technological apotheosis.

Not surprisingly, IBM was determined to counter this unsettling interpretation of its own futurist propaganda. At the 1964 World’s Fair, the corporation’s pavilion emphasised the utopian possibilities of computing. Yet, despite its best efforts, IBM couldn’t entirely avoid the ambiguity inherent within the imaginary future of artificial intelligence. This fetishised ideology could only appeal to all sections of American society if computers fulfilled the deepest desires of both sides within the workplace. Therefore, in the exhibits at its pavilion, IBM promoted a single vision of the imaginary future which combined two incompatible interpretations of artificial intelligence. On the one hand, workers were told that all their needs would be satisfied by sentient robots: servants who never tired, complained or questioned orders. On the other hand, capitalists were promised that their factories and offices would be run by thinking machines: producers who never slacked off, expressed opinions or went on strike. Robby the Robot had become indistinguishable from EPICAC XIV. If only at the level of ideology, IBM had reconciled the class divisions of 1960s America. In the imaginary future, workers would no longer need to work and employers would no longer need employees. The sci-fi fantasy of artificial intelligence had successfully distracted people from questioning the impact of computing within the workplace. After visiting IBM’s pavilion at the 1964 World’s Fair, it was all too easy to believe that everyone would win when the machines acquired consciousness.

Inventing New Futures

Forty years later, we’re still waiting for the imaginary future of artificial intelligence. In the intervening period, we’ve been repeatedly promised its imminent arrival. Yet, despite continual advances in hardware and software, machines are still incapable of ‘thinking’. The nearest thing to artificial intelligence which most people have encountered is the computer-controlled character in video games. But, as the growing popularity of on-line gaming demonstrates, a virtual opponent is a poor substitute for a human player. Looking back at the history of this imaginary future, it is obvious that neither the optimistic nor the pessimistic versions of artificial intelligence have been realised. Robby the Robot isn’t our devoted servant and EPICAC XIV doesn’t control our lives. Instead of evolving into thinking machines, computers have become consumer goods. Room-sized mainframes have kept on shrinking into smaller and smaller machines. Computers are everywhere in the modern world – and their users are all too aware that they’re dumb.

Repeated failure should have discredited the imaginary future of artificial intelligence for good. Yet, its proponents remain unrepentant. Four decades on from the 1964 World’s Fair, IBM is still claiming that its machines are on the verge of acquiring consciousness (Bell 2004, p. 2). The persistence of this sci-fi fantasy demonstrates the continuing importance of von Neumann’s conservative appropriation of cybernetics within the computer industry. As in the early-1960s, artificial intelligence still provides a great cover story for the development of new military technologies. Bringing on the Singularity seems much more friendly than collaborating with American imperialism. Even more importantly, this imaginary future continues to disguise the impact of computing within the workplace. Both managers and workers are still being promised technological fixes for socio-economic problems. The dream of sentient machines makes better media copy than the reality of cybernetic Fordism. At the beginning of the 21st century, artificial intelligence remains the dominant ideological manifestation of the promise of computing.

The credibility of this imaginary future depends upon forgetting its embarrassing history. Looking back at how earlier versions of the prophecy were repeatedly discredited encourages deep scepticism about its contemporary iterations. Our own personal frustrations with computer technology should demonstrate the improbability of its transformation into the Silicon Messiah. Forty years after the New York World’s Fair, artificial intelligence has become an imaginary future from the distant past. What is needed instead is a much more sophisticated analysis of the potential of computing. Wiener – not von Neumann – must be our cybernetic guru. The study of history should inform the reinvention of the future. Messianic mysticism must be replaced by pragmatic materialism. Above all, this new image of the future should celebrate computers as tools for augmenting human intelligence and creativity. Praise for top-down hierarchies of control must be superseded by the advocacy of two-way sharing of information. Let’s be inspired and passionate about imagining our own visions of the better times to come (Barbrook and Schultz 1997; Barbrook 2000).

==================================================

The arguments in this article are developed further in Richard Barbrook, Imaginary Futures: from thinking machines to the global village, Pluto, London 2007. Check out the book’s website: <www.imaginaryfutures.net>.

=================================================

References

Agar, J. (2001) Turing and the Universal Machine: the making of the modern computer, Cambridge: Icon.

Agar, J. (2003) The Government Machine: a revolutionary history of the computer, Cambridge Mass: MIT Press.

Aglietta, M. (1979) A Theory of Capitalist Regulation: the US experience, London: Verso.

Asimov, I. (1968) I, Robot, London: Panther.

Asimov, I. (1968a) The Rest of the Robots, London: Panther.

Bahr, H.-D. (1980) ‘The Class Structure of Machinery: notes on the value form’ in P. Slater (ed.), Outlines of a Critique of Technology, London: Ink Links, pp. 101-141.

Barbrook, R. (2000) ‘Cyber-communism: how the Americans are superseding capitalism in cyberspace’, Science as Culture, Vol. 9, No. 1, pp. 5-40, <www.imaginaryfutures.net/2007/04/17/cyber-communism-how-the-americans-are-superseding-capitalism-in-cyberspace>.

Barbrook, R. and Schultz, P. (1997) ‘The Digital Artisans Manifesto’, ZKP 4, Ljubljana: nettime, pp. 52-53, <www.ljudmila.org/nettime/zkp4/72.htm>.

Bell, J. (2004) ‘Exploring the ‘Singularity’’, <www.kurzweilai.net/meme/frame.html?main=/articles/art0584.html?m%3D1>.

Beniger, J. (1986) The Control Revolution: technological and economic origins of the information society, Cambridge Mass: Harvard University Press.

Berkeley, E. (1962) The Computer Revolution, New York: Doubleday.

Cameron, J. (dir.) (1984) The Terminator, MGM/United Artists.

Ceruzzi, P. (2003) A History of Modern Computing, Cambridge Mass: MIT Press.

Conway, F. and Siegelman, J. (2005) Dark Hero of the Information Age: in search of Norbert Wiener father of cybernetics, New York: Basic Books.

Dallek, R. (2003) John F. Kennedy: an unfinished life 1917-1963, London: Penguin.

DeLamarter, R. (1986) Big Blue: IBM’s use and abuse of power, London: Pan.

Eames, C. and Eames, R. (1973) A Computer Perspective: a sequence of 20th century ideas, events and artefacts from the history of the information machine, Cambridge Mass: Harvard University Press.

Editors of Time-Life Books (1964) Official Guide New York World’s Fair 1964/5, New York: Time.

Edwards, P. (1996) The Closed World: computers and the politics of discourse in Cold War America, Cambridge Mass: MIT Press.

Exposition Publications (1939) Official Guide Book of the New York World’s Fair 1939, New York: Exposition Publications.

Ford, H. and Crowther, S. (1922) My Life and Work, London: William Heinemann.

Heims, S. (1980) John von Neumann and Norbert Wiener: from mathematics to the technologies of life and death, Cambridge Mass: MIT Press.

Heims, S. (1991) The Cybernetics Group, Cambridge Mass: MIT Press.

Hodges, A. (1992) Alan Turing: the enigma, London: Vintage.

Honda (2004) ‘Asimo’, <world.honda.com/ASIMO>.

Isaacs, J. and Dowling, T. (1998) Cold War: for 45 years the world held its breath, London: Bantam.

Kahn, H. (1960) On Thermonuclear War, Princeton: Princeton University Press.

Kurzweil, R. (2004) ‘The Intelligent Universe’, <www.kurzweilai.net/meme/frame.html?main=memelist.html?m=3%23534>.

Lang, F. (dir.) (2003) Metropolis, Eurekavideo.

Laurence, W. (1964) Science at the Fair, New York: New York 1964-1965 Fair Corporation.

Lefebvre, H. (1984) Everyday Life in the Modern World, New Brunswick: Transaction Publications.

Leslie, S. (1993) The Cold War and American Science: the military-industrial complex at MIT and Stanford, New York: Columbia University Press.

Lewontin, R.C. (1997) ‘The Cold War and the Transformation of the Academy’ in A. Schiffrin (ed.), The Cold War and the University, New York: New Press, pp. 1-34.

Luce, H. (1941) The American Century, New York: Time.

Marx, K. (1973) Grundrisse, London: Penguin.

Marx, K. (1976) Capital Volume 1: a critique of political economy, London: Penguin.

McCulloch, W. and Pitts, W. (1943) ‘A Logical Calculus of the Ideas Immanent in Nervous Activity’, Bulletin of Mathematical Biophysics, Volume 5, pp. 115-133.

Minsky, M. (2004) ‘Steps Towards Artificial Intelligence’, <web.media.edu/~minsky/papers/steps.html>.

Minsky, M. (2004a) ‘Matter, Mind and Models’, <web.media.edu/~minsky/papers/MatterMindModels.txt>.

von Neumann, J. (1966) The Theory of Self-Reproducing Automata, Urbana: University of Illinois Press.

von Neumann, J. (1976) ‘The General and Logical Theory of Automata’ in Collected Works Volume V: design of computers, theory of automata and numerical analysis, Oxford: Pergamon Press, pp. 288-326.

von Neumann, J. (2000) The Computer and the Brain, New Haven: Yale University Press.

Pugh, E. (1995) Building IBM: shaping an industry and its technology, Cambridge Mass: MIT Press.

Pugh, E., Johnson, L. and Palmer, J. (1991) IBM’s 360 and Early 370 Systems, Cambridge Mass: MIT Press.

Rubin, I. (1972) Essays on Marx’s Theory of Value, Detroit: Black & Red.

Schaffer, S. (2000) ‘OK Computer’, <www.imaginaryfutures.net/2007/04/16/ok-computer-by-simon-schaffer>.

Schefter, J. (1999) The Race: the definitive story of America’s battle to beat Russia to the moon, London: Century.

Shelley, M. (1969) Frankenstein: the modern Prometheus, Oxford: Oxford University Press.

Simon, H. (1965) The Shape of Automation for Men and Management, New York: Harper.

Smith, A. (1976) An Inquiry into the Nature and Causes of the Wealth of Nations Volume 1 & Volume 2, Chicago: University of Chicago Press.

Sobel, R. (1981) IBM: colossus in transition, New York: Truman Talley.

Stanton, J. (2004) ‘Best of the World’s Fair’, <naid.sppsr.ucla.edu/ny64fair/map-docs/bestoffair.htm>.

Stanton, J. (2004a) ‘Building the 1964 World’s Fair’, <naid.sppsr.ucla.edu/ny64fair/map-docs/buildingfair.htm>.

Startrek.com (2005), ‘Data’, <www.startrek.com/startrek/view/series/TNG/character/1112457.html>.

Stern, R., Mellins, T. and Fishman, D. (1997) New York 1960: architecture and urbanism between the Second World War and the Bicentennial, Köln: Benedikt Taschen.

Turing, A. (2004) ‘On Computable Numbers, with an Application to the Entscheidungsproblem’ in B. J. Copeland (ed.), The Essential Turing: seminal writings in computing, logic, philosophy, artificial intelligence and artificial life plus the secrets of the Enigma, Oxford: Oxford University Press, pp. 58-90.

Turing, A. (2004a) ‘Lecture on the Automatic Computing Engine’ in B. J. Copeland (ed.), The Essential Turing: seminal writings in computing, logic, philosophy, artificial intelligence and artificial life plus the secrets of the Enigma, Oxford: Oxford University Press, pp. 378-394.

Turing, A. (2004b) ‘Intelligent Machinery’ in B. J. Copeland (ed.), The Essential Turing: seminal writings in computing, logic, philosophy, artificial intelligence and artificial life plus the secrets of the Enigma, Oxford: Oxford University Press, pp. 410-432.

Turing, A. (2004c) ‘Computing Machinery and Intelligence’ in B. J. Copeland (ed.), The Essential Turing: seminal writings in computing, logic, philosophy, artificial intelligence and artificial life plus the secrets of the Enigma, Oxford: Oxford University Press, pp. 433-464.

Turing, A. (2004d) ‘Intelligent Machinery, a Heretical Theory’ in B. J. Copeland (ed.), The Essential Turing: seminal writings in computing, logic, philosophy, artificial intelligence and artificial life plus the secrets of the Enigma, Oxford: Oxford University Press, pp. 472-475.

Turing, A. (2004e) ‘Can Digital Computers Think?’ in B. J. Copeland (ed.), The Essential Turing: seminal writings in computing, logic, philosophy, artificial intelligence and artificial life plus the secrets of the Enigma, Oxford: Oxford University Press, pp. 482-486.

Vinge, V. (1993) ‘The Coming Technological Singularity: how to survive in the post-human era’, VISION-21 Symposium, 30-31 March, <www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html>.

Vonnegut, K. (1969) Player Piano, St. Albans: Panther.

Wiener, N. (1948) Cybernetics – or control and communication in the animal and the machine, New York: John Wiley.

Wiener, N. (1966) God & Golem, Inc.: a comment on certain points where cybernetics impinges on religion, Cambridge Mass: MIT Press.

Wiener, N. (1967) The Human Use of Human Beings: cybernetics and society, New York: Avon Books.

Wilcox, F. (dir.) (1999) Forbidden Planet, Turner Entertainment.
