Archive for September, 2007

Author to Host Leiden University Conference – 3rd October 2007

Sunday, September 30th, 2007

Dr. Richard Barbrook will host and speak at the conference “The Futures of Digital Technology” at Leiden University in The Netherlands on 3rd October 2007. He joins academics, digital professionals and writers, including the popular science fiction writer China Miéville. Please see the attached press release and conference flyer for more information about this event.

Download the workshop programme (PDF)

Download the press release (Word)

INTERNATIONAL POLITICAL ECONOMY LECTURES

Wednesday, September 19th, 2007

1POL547
INTERNATIONAL POLITICAL ECONOMY
SEMESTER 1 2007-2008

Dr. Richard Barbrook
Room 408 Wells Street
Extension 2313
R.Barbrook (a) wmin.ac.uk

Option: International Political Economy

Code: 1POL547

Staff: Richard Barbrook and Nik Howard

Time and Location: Thursdays 2.00pm – 5.00pm
Fyvie Lecture Theatre

Assessment Scheme: Essay (2,500 – 3,000 words): 30%
Unseen Examination: 70%

Assessment Criteria: Full details are given in your handbook.

Criteria for grading assessed work will include the following:

Structure and Quality of Argument
Use of Evidence
Content
Writing, Presentation and Communication Skills

Assignment Deadline: Tuesday 8th January 2008

Office Hours: Please email or phone to make an appointment

Room 408 Wells Street

Website: http://www.imaginaryfutures.net/author/ipe/

There is a link in the blogroll on the bottom right-hand side of the Imaginary Futures website.

Key text: Meghnad Desai, Marx’s Revenge: the resurgence of capitalism and the death of statist socialism, Verso, London 2002.

The quickest and easiest way to obtain a copy of this book is from abebooks.

Essay Questions

LECTURE PROGRAMME

1. 27th September
Introduction: What is International Political Economy and why does it determine your life? (RB)

2. 4th October
The Lessons of History: the East Asian miracle and Japan’s bubble economy of the late 1980s. (NH)

3. 11th October
The 18th and 19th Century Theories of the Global Political Economy: Liberalism versus Nationalism versus Marxism. (RB)

4. 18th October
The 20th Century Theories of the Global Political Economy:
Leninism versus Keynesianism versus Neo-Liberalism. (RB)

5. 25th October
Theory into Practice: North Korea and South Korea. (NH)

6. 1st November
Building the World Market: the rise and fall of the British empire 1649–1914. (RB)

7. 8th November
Private Study Week

8. 15th November
Managing the Global Economy: the ascendancy and decline of the American empire 1914-2007. (RB)

9. 22nd November
Money as Command: chaos and order in the international financial system. (RB)

10. 29th November
The Carbon Economy: the affluent society and the limits to growth. (RB)

11. 6th December
The Global Village: the Net as dotcom capitalism and cybernetic communism. (RB)

12. 13th December
Think Locally, Act Globally: beyond the neo-liberal paradigm. (RB)

READING LIST

Aglietta, M. (1979) A Theory of Capitalist Regulation: the US experience, Verso, London.

Albert, M. (1993) Capitalism vs. Capitalism, Four Wall Eight Windows, New York.

Ambrose, S. (1971) Rise to Globalism: American foreign policy 1938-1970, Penguin, London.

Aoki Masahiko, Kim Hyung-ki and Okuno-Fujiwara Masahiro (eds.) (1998) The Role of Government in East Asian Economic Development: Comparative Institutional Analysis, Oxford University Press, New York.

Atkinson, B. and Johns, S. (2001) Studying Economics, Palgrave.

Bacevich, A. (2002) American Empire: the realities and consequences of U.S. diplomacy, Harvard University Press, Cambridge Mass.

Barbrook, R. (1990) ‘Mistranslations: Lipietz in London and Paris’, http://www.imaginaryfutures.net/2007/02/01/mistranslations-lipietz-in-london-and-paris-by-richard-barbrook/

Barbrook, R. (2006) The Class of the New, Open Mute, http://www.theclassofthenew.net

Barbrook, R. (2007), Imaginary Futures: from artificial intelligence to the global village, Pluto, London, http://www.imaginaryfutures.net

Bell, D. (1973) The Coming of Post-Industrial Society: a venture in social forecasting, Basic Books, New York.

Bello, W. and Rosenfeld, S. (1992) Dragons in Distress: Asia’s Miracle Economies in Crisis, Penguin Books, London.

Berry, C. (1997) Social Theory of the Scottish Enlightenment, Edinburgh University Press, Edinburgh.

Bhagwati, J. (2004) In Defence of Globalisation, Oxford University Press.

Brenner, R. (2002) The Boom and the Bubble: the US in the world economy, Verso, London.

Buchan, J. (2006) Adam Smith and The Pursuit of Perfect Liberty, Profile.

Cain, P.J. and Hopkins, A.G. (1993) British Imperialism: innovation and expansion 1688-1914, Longman, London.

Cassidy, J. (2002) dot.con: the greatest story ever sold, Penguin, London.

Castells, M. (1996) The Rise of the Network Society: the information age – economy, society and culture volume 1, Blackwell, Oxford.

Chiu, Stephen W.K. and So, Alvin Y. (1995) East Asia and the World Economy, Sage Publications, California, London and New Delhi.

Crump, J. (2003) Nikkeiren and Japanese Capitalism, Routledge/Curzon, Richmond, England.

Cumings, B. (2004) North Korea: Another Country, New Press.

Meadows, D. and the Club of Rome (1972) The Limits to Growth, Earth Island, London.

Desai, M. (2002) Marx’s Revenge: the resurgence of capitalism and the death of statist socialism, Verso, London.

Diamond, J (1997) Guns, Germs and Steel: The Fates of Human Societies, New York: Norton.

Earle, E. M. ‘Adam Smith, Alexander Hamilton and Friedrich List: The Economic Foundations of Military Power’, in Earle, E. M. (ed.) Makers of Modern Strategy.

Eckert, C. J. et al. (1990) Korea Old and New: A History, Ilchokak Publishers, Seoul.

Economides, S and Wilson, P (2001) The Economic Factor in International Relations.

Fallows, J. (1994) Looking at the Sun: The Rise of the New East Asian Economic and Political System, Vintage.

Florida, R. (2002) The Rise of the Creative Class: and how it’s transforming work, leisure, community and everyday life, Basic, New York.

Frieden, J and Lake, D (eds) (2001) International Political Economy: Perspectives on Global Power and Wealth, New York: St Martins Press.

Fukuyama, F. (1992) The End of History and the Last Man, Penguin, London.

Galbraith, J.K. (1961) The Great Crash 1929, Penguin, London.

Galbraith, J.K. (1970) The Affluent Society, Penguin, London.

Gilpin, R (2001) Global Political Economy, Princeton University Press.

Gittings, J. (2006) The Changing Face of China: from Mao to market, Oxford University Press, Oxford.

Godard, R and Cronin, P (ed) (2003) International Political Economy, Palgrave.

Gruber, H. (1991) Red Vienna: experiment in working-class culture 1919-1934, Oxford University Press, Oxford.

Gunder Frank, A. (1969) Capitalism and Underdevelopment in Latin America: historical studies of Chile and Brazil, Monthly Review, New York.

Halliday, J. and McCormack, G. (1973) Japanese Imperialism Today: ‘Co-Prosperity in Greater East Asia’, Penguin Books.

Harris, N. (1987) The End of the Third World: newly industrialising countries and the decline of an ideology, Penguin, London.

Hayek, F. A. (1944) The Road to Serfdom, Routledge, London.

Held, D and McGrew, A. (1999) Global Transformations: Politics, Economics and Culture, Cambridge: Polity Press.

Helleiner, E. (1994) States and the Re-Emergence of Global Finance, Cornell.

Hersh, J. (1993) The USA and the Rise of East Asia since 1945: Dilemmas of the Postwar International Political Economy, Macmillan, London.

Hettne, B. (ed) (2001) International Political Economy, London: Zed Books.

Hilferding, R. (1981) Finance Capital: a study of the latest phase of capitalist development, Routledge & Kegan Paul, London.

Hirst, P. and Thompson, G. (2000) Globalisation in Question, Cambridge: Polity Press.

Hobsbawm, E. (1968) Industry and Empire: an economic history of Britain since 1750, Penguin, London.

Hobson, J.A. (1902) Imperialism: a study, George Allen & Unwin, London.

Huntington, S. (2002) The Clash of Civilisations and the Remaking of World Order, Free Press, London.

Hyam, R. (2002) Britain’s Imperial Century 1815-1914: a study of empire and expansion, Palgrave Macmillan, Basingstoke.

Irwin, D. A. (1996) Against the Tide: An Intellectual History of Free Trade, Princeton University Press.

Johnson, C. (1982). MITI and the Japanese Miracle: The Growth of Industrial Policy, 1925-1975, Stanford University Press.

Jones, G. S. (2005) An End to Poverty: A Historical Debate, Profile Books.

Kalecki, M. (1972) The Last Phase in the Transformation of Capitalism, Monthly Review, New York.

Kant, I. (1983) ‘To Perpetual Peace: a philosophical sketch’, Perpetual Peace and Other Essays, Hackett, Indianapolis, pages 107-143.

Kelly, K. (1998) New Rules for the New Economy: 10 ways that the network economy is changing everything, Fourth Estate, London.

Kennedy, P. (1989) The Rise and Fall of Great Powers: economic change and military conflict from 1500 to 2000, Fontana, London.

Keynes, J.M. (1936) The General Theory of Employment, Interest and Money, MacMillan, London.

Kihl, Young Whan, ed (1994) Korea and the World: Beyond the Cold War, Westview Press, Boulder Colorado.

Krugman, P. (1994) ‘The Myth of Asia’s Miracle: A Cautionary Fable’, Foreign Affairs, November.

Landes, D. (1998) The Wealth and Poverty of Nations, New York: W W Norton.

Lenin, V.I. (1965) Imperialism: the highest stage of capitalism, Progress, Moscow.

Lipietz, A. (1987) Mirages and Miracles: the crises of global Fordism, Verso, London.

Luxemburg, R. (1963) The Accumulation of Capital, Routledge, London.

Maddison, A. (2001) The World Economy: A Millennial Perspective, OECD.

Mandel, E. (1980) Long Waves of Capitalist Development, CUP, Cambridge.

Mao, Z. (1967) ‘Analysis of the Classes in Chinese Society’, Selected Works of Mao Zedong Volume 1, Foreign Languages Press, Beijing, pages 13-21.

Marx, K. (1976) Capital Volume 1: a critique of political economy, Penguin, London.

Marx, K. (1978) Capital Volume 2: a critique of political economy, Penguin/New Left Review, London.

Marx, K. (1981) Capital Volume 3: a critique of political economy, Penguin/New Left Review, London.

McLean, B. and Elkind, P. (2004) The Smartest Guys in the Room: the amazing rise and scandalous fall of Enron, Portfolio, New York.

McLuhan, M. (1964) Understanding Media: the extensions of man, Routledge & Kegan Paul, London.

Meiksins Wood, E. (2003) Empire of Capital, Verso, London.

von Mises, L. (1947) Planned Chaos, Foundation for Economic Education, Irvington-On-Hudson.

Negri, T. (1988) ‘Keynes and the Capitalist Theory of the State’, Revolution Retrieved: selected writings on Marx, Keynes, capitalist crisis & new social subjects 1967-83, Red Notes, London, pages 9-42.

Negri T. and Hardt, M. (2000) Empire, Harvard University Press, Cambridge Mass.

Ocalan, A. (2007) Prison Writings: the roots of civilisation, Pluto, London.

Odell, J. S. (2000) Negotiating the World Economy, Cornell University Press.

Okina Y., Shirakawa M., and Shiratsuka S., ‘The Asset Price Bubble and Monetary Policy: Japan’s Experience in the Late 1980s and the Lessons’, Monetary and Economic Studies (February 2001).

Pilbeam, K. (2006) Finance and Financial Markets, Palgrave.

Phillips, N. (2006) Globalising International Political Economy, Palgrave.

Pressman, S. (1999) Fifty Major Economists, Routledge, London.

Reich, R. (1991) The Work of Nations: a blueprint for the future, Simon & Schuster, London.

Ricardo, D. (1911) The Principles of Political Economy and Taxation, Everyman, London.

Richardson, B. (1997) Japanese Democracy: Power, Coordination, and Performance, Yale University Press, New Haven and London.

Ridderstråle, J. and Nordström, K. (2000) Funky Business: talent makes capital dance, ft.com, London.

Roberts, P. (2004) The End of Oil, Bloomsbury.

Rostow, W.W. (1960) The Stages of Economic Growth: a non-communist manifesto, Cambridge University Press, Cambridge.

Rubin, I.I. (1979) A History of Economic Thought, Ink Links, London.

Sachs, J. (2005) The End of Poverty, Penguin.

Scholte, J. (2000) Globalisation: A Critical Introduction, London: Macmillan.

Schumpeter, J. (1976) Capitalism, Socialism and Democracy, Harper, New York.

Smith, A. (1976) An Inquiry into the Nature and Causes of the Wealth of Nations Volume 1 & Volume 2, University of Chicago Press, Chicago.

Stiglitz, Joseph and Yusuf S. eds. (2001) Rethinking the East Asian Miracle, World Bank and Oxford University Press, Washington DC and New York.

Stiglitz, Joseph (2003) Globalization and Its Discontents, Penguin Books, London.

Stubbs, S. and Underhill, G. (1999) Political Economy and the Changing Global Order, Palgrave.

Szporluk, R. (1988) Communism & Nationalism: Karl Marx versus Friedrich List, OUP, Oxford.

Todd, E. (2004) After the Empire: the breakdown of the American order, Constable, London.

Toynbee, A. (1946) A Study of History, Oxford University Press, Oxford.

van Wolferen, K. (1990) The Enigma of Japanese Power: People and Politics in a Stateless Nation, Random Books.

Wallerstein, I. (1979) The Capitalist World-Economy, Cambridge University Press, Cambridge.

Warren, B. (1980) Imperialism: Pioneer of Capitalism, Verso, London.

White, D. (1996) The American Century: the Rise & Decline of the United States as a World Power, Yale University Press, New Haven.

Wood, C. (2005) The Bubble Economy: Japan’s Extraordinary Speculative Boom of the ’80s and the Dramatic Bust of the ’90s, Barnes and Noble.

World Bank (1993) The East Asian Miracle: Economic Growth and Public Policy, Oxford University Press.

World Bank (2004) World Development Indicators, Washington DC: World Bank.

Yarbrough, B. and Yarbrough, R. (1997) The World Economy: Trade and Finance, Dryden Press.

PERIODICALS

The Economist
Capital & Class
Journal of Political Economy
International Studies Perspectives
The World Economy
Third World Quarterly
Race and Class
New Left Review
Journal of World Trade
OPEC Review
Asian Pacific Review
World Development
The Journal of Asian Studies
The Journal of Japanese Studies
Japan Forum

WEBSITES

http://www.imf.org/external/data.htm

http://www.iccwbo.org/

http://www.europa.eu.int

http://worldbank.org/

http://eldis.org/

http://www.leftbusinessobserver.com/

http://www.newunionism.net/

http://www.un.org/climatechange/

http://www.japanesestudies.org.uk

https://www.cia.gov/library/publications/the-world-factbook/

Assignment

Answer ONE question, typed in 12 point or above and approximately 2,500 words in length.

Submit to the Assessment Letterbox in the SSHL Undergraduate Office, Wells Street by 6.00pm, Tuesday 8th January 2008.

Severe penalties will apply to late entries.

Cronopis Associats Interview

Saturday, September 15th, 2007

Andrés Lomeña of Cronopis Associats interviewed Richard Barbrook on 13th September 2007.

Question 1

AL: Franco Bifo Berardi criticises your essay The Holy Fools. In his opinion, you simplify the rhizomatic thought of Deleuze and Guattari, equating it with technonomadism and The Californian Ideology. Berardi argues that the state cannot solve the self-organisational structure of the Net. My question is: what ethical and aesthetic paradigm should we adopt for the Internet, given that the May ’68 Revolution was defeated?

RB: Bifo is attacking me for the very crime which inspired my article! In late-1990s London, Hari Kunzru and others at Wired UK were arguing that Gilles Deleuze and Félix Guattari were proponents of the Californian ideology. Knowing these gurus’ political history and theoretical writings, I was curious as to why it was so easy to confuse their particular brand of hippie leftism with its apparent opposite: dotcom neo-liberalism. A decade earlier, an anarchist comrade who was running a pirate radio station in Paris had jokingly told me that Deleuze and Guattari – and their milieu – should be called the “Pol Pot tendency”. Only a few years earlier, Michel Foucault had been praising the Islamist seizure of power in Iran as a revolution against modernity. When D&G fantasised about nomads destroying the city in A Thousand Plateaus, they – scarily – did mean what they were saying! In The Holy Fools, I tried to explain this intellectual paradox: these prophets of hardcore anti-modernism had provided the theoretical tools for their own post-modernist recuperation. As Bifo’s loud protests demonstrate, there is obviously an embarrassing similarity between New Left anti-statism and New Right anti-statism. Much more seriously, D&G were also proponents of a post-structuralist approach which is completely compatible with the tenets of McLuhanism. Information – not humanity – is the subject of history. Artists and intellectuals are the new class of the new. So maybe it wasn’t so surprising that D&G’s vision of the Net was so easily misinterpreted as a celebration of the dominant imaginary future of the American empire: the information society…

Question 2

AL: You are really witty when you quote and analyse Eric S. Raymond’s essays. You are the first one to do so. It demonstrates that you study the phenomena of the Internet from inside, without keeping your distance; we need to contaminate ourselves to talk about the Net (the metaphor would be: “we cannot talk about dirtiness if we do not get dirty”). In my opinion, there is a maladjustment between the generations. Is there more hope for our generation, totally immersed in new media?

RB: I’m from the punk rock generation – my formative moment was seeing the Sex Pistols and discovering Situationism as a 20-year old student in 1976. When the Net took off in England two decades later, I thought that technology had finally caught up with what me and my mates had been doing for all of our adult lives. In 2007, instead of climbing tower blocks to install pirate transmitters like we did in the 1980s, my students run their radio stations from their bedrooms – and, most wonderfully, have many more listeners than we ever did. Respect! Of course, it is easy for people from my generation to say that the youth aren’t as politically engaged as we were, but the historical moment is very different. If nothing else, today’s 20-somethings have the advantage over us that Leninism is ancient history. In my Imaginary Futures book, I argue that they also possess the privilege of living within the information society. It is difficult to believe that the Net will liberate humanity when almost everyone you know has a broadband connection – and corporate capitalism is more in control of the global economy than it has ever been.

Question 3

AL: The New Left sought anarcho-communism, based on gift economy. However, the New Economy of the cyberspace is an advanced way of social democracy: anarcho-communism is sponsored by corporate capital. This seems paradoxical. Is it possible to have a collapse of the system? All of this is quite disconcerting.

RB: The contemporary symbiosis of cybernetic communism and dotcom capitalism is only a paradox if – like Deleuze – you are anti-Hegelian! However, if you study history, this contradictory phenomenon is unexceptional. The feudal monarchy played a key role in the rise of capitalism and – against its own intentions – destroyed its own patriarchal power. Stalinist states industrialised their economies and – in the process – undermined the social foundations of totalitarian rule. We should not be surprised that corporate capitalism is similarly unaware of its own historical mission. Kevin Kelly in New Rules for the New Economy says that dotcom entrepreneurs should adopt the maxim of “follow the free”: commercialising the innovations of the hi-tech gift economy. But, as the music industry has found out to its cost, the opposite is also taking place: the decommodification of proprietary information. Don’t be disconcerted, enjoy the paradox!

Question 4

AL: I would like to read a critical evaluation of Google. What do you think? Do they symbolise a huge change in the recent history of the Net?

RB: So would I. What is interesting is that Google makes its money out of searches not content. When most of the information on the Net is made by amateurs who work for nothing and pay for their own hosting, then the profit-making point is owning the large numbers of servers needed to catalogue and sort this data. Google is getting rich off what neo-classical economists call a “natural monopoly”: a privatised public utility. I wonder if Bifo disapproves of the French state’s plans to launch a European competitor to this American hegemon?!

Question 5

AL: Creative Commons modified the map of the Internet. Many people think that this licence is a restrictive choice (with a libertarian make-up). They would prefer the deregulated situation before Creative Commons. I do not have an opinion here. What about you?

RB: Tellingly, Tim Berners-Lee – the inventor of the web – didn’t release html under a copyleft licence because its provisions were too restrictive. Despite its name, Creative Commons is also a form of private property. If you want to operate within the contemporary economy, this licence does offer some protection against your work being ripped off or being used inappropriately. If you were a cynic, copyleft could also be seen as the last stand of intellectual property lawyers against the decommodification of information on the Net. Imprisoning teenagers for sharing music or movie files with each other is absurd in the 2000s. According to its promoters, Creative Commons is the only way that commercial organisations can sue each other for making money out of what private individuals are doing on a daily basis…

Question 6

AL: What do you think about the Technorealism Manifesto? I sort of agree with it. It is very symptomatic of a lot of manifestos that I’ve read on the Internet.

RB: Sweet. We should encourage all signs of resistance in America against the neo-liberal hegemony which dominates that long suffering country.

Question 7

AL: Is it possible to have a new Luddite movement in our society? (I still remember a Thomas Pynchon article in which he was really ironic about the Luddites.)

RB: I’m all in favour of celebrating Luddism as long as we’re talking about the Luddites in early-19th century England. These heroic rebels were founders of the Labour movement in this country. Unfortunately, many on the Left believe the smears of the liberal bourgeoisie against them. Contrary to the dictionary definition, the Luddites were NOT against all new technologies – only those which deskilled and displaced artisanal workers. Spinning Jennies deserved to be smashed – and Jacquard Looms were rightly cherished. If we want to learn from the Luddites, we should welcome technologies which make our work more interesting and our lives more pleasurable…

Question 8

AL: We must extend the political history of the Internet into all areas of life. However, if we begin by transforming cyberspace and only then try to change our society… it seems to me that we are adopting a naive reformism. The Internet is a good battlefield, but it’s not the only field where we can fight, right?

RB: The Net is a tool not a talisman. When Debord published The Society of the Spectacle in 1967, only the privileged few could make radio and television programmes. Four decades later, anyone with the time and money in the North can broadcast their efforts to a global audience over the Net. Contrary to Debord’s expectations, the smashing of the spectacle didn’t require a proletarian revolution. Does this make us backsliding reformists? Or does it mean that we realise that we live within a historical process in which capital and labour – consciously or unconsciously – compete to mould society in their own interests? If we’re lefties, our ambition is to ensure that our side is more conscious of what we’re doing than our opponents are.

Question 9

AL: How do you value the role of John Perry Barlow in the history of the Internet? Everyone can laugh at his neo-liberal utopianism, but he founded the EFF, among other things. Maybe we should consider his Declaration of Independence of Cyberspace like poetry, not like a political programme.

RB: John Perry Barlow was also Dick Cheney’s campaign manager when he stood for the US Senate in 1978! Conclusion: you can smoke good weed and still be on the wrong side of the barricades.

Question 10

AL: Is there anything else that you want to add?

RB: Check out the website of my new book: http://www.imaginaryfutures.net

NEW YORK PROPHECIES: THE IMAGINARY FUTURE OF ARTIFICIAL INTELLIGENCE

Saturday, September 8th, 2007

The Future Is What It Used To Be

‘Biological intelligence is fixed, because it is an old, mature paradigm, but the new paradigm of non-biological computation and intelligence is growing exponentially. The crossover will be in the 2020s and after that, at least from a hardware perspective, non-biological computation will dominate…’
Kurzweil 2004, p. 3.

At the beginning of the 21st century, the dream of artificial intelligence is deeply embedded within the modern imagination. From childhood onwards, people in the developed world are told that computers will one day be able to reason – and even feel emotions – just like humans. In science fiction stories, thinking machines have long been favourite characters. Audiences have grown up with images of robot buddies like Data in Star Trek TNG and of pitiless monsters like the cyborg in The Terminator (Startrek.com 2005; Cameron 1984). These science fiction fantasies are encouraged by confident predictions from prominent computer scientists. Continual improvements in hardware and software will eventually lead to the creation of artificial intelligences more powerful than the ‘biological intelligence’ of the human mind.

Commercial developers are already looking forward to selling sentient machines which can do the housework and help the elderly (Honda 2004). Some computer scientists even believe that the creation of ‘non-biological intelligences’ is a spiritual quest. In California, Ray Kurzweil, Vernor Vinge and their colleagues are eagerly waiting for the Singularity: the First Coming of the Silicon Messiah (Vinge 1993; Bell 2004). Whether inspired by money or mysticism, all these advocates of artificial intelligence share the conviction that the present must be understood as the future in embryo – and the future illuminates the potential of the present. Every advance in computing technology is heralded as another step towards the creation of a fully conscious machine. The prophecy of artificial intelligence comes closer to fulfilment with the launch of each new piece of software or hardware. It is not what computers can do now that is important but what they are about to do. The present is the beta version of a science fiction dream: the imaginary future.

Despite its cultural prominence, the meme of sentient machines is vulnerable to theoretical exorcism. Far from being a free-floating signifier, this prophecy is deeply rooted in time and space. Not surprisingly, contemporary boosters of artificial intelligence rarely acknowledge the antiquity of the concept itself. They want to move forwards not look backwards. Yet, it’s over forty years since the dream of thinking machines first gripped the public’s imagination. The imaginary future of artificial intelligence has a long history. Analysing the original version of this prophecy is the precondition for understanding its contemporary iterations. With this motivation in mind, let’s go back to the second decade of the Cold War when the world’s biggest computer company put on a show about the wonders of thinking machines in the financial capital of the most powerful and wealthiest nation on the planet…

A Millennium Of Progress

On the 22nd April 1964, the New York World’s Fair was opened to the general public. During the next two years, this modern wonderland welcomed over 51 million visitors. Every section of the American elite was represented at the exposition: the federal government, US state governments, large corporations, financial institutions, industry lobbies and religious groups. The World’s Fair proved that the USA was the leader in everything: consumer goods, democratic politics, show business, modernist architecture, fine art, religious tolerance, domestic living and, above all else, new technology. As one of the exposition’s advertising slogans implied, a ‘millennium of progress’ had culminated in the American century (Stanton, 2004, 2004a; Luce 1941).

This patriotic message was celebrated in the awe-inspiring displays of new technologies at the World’s Fair. Writers and film-makers had long fantasised about travelling to other worlds. Now, in NASA’s Space Park, the public could admire the huge rockets which had taken American astronauts into orbit (Editors of Time-Life Books 1964, p. 208; Laurence 1964, pp. 2-14). Despite its early setback when the Russians launched the first satellite in 1957, the USA was now on the verge of overtaking its rival in the ‘space race’ (Schefter 1999, pp. 145-231). Best of all, visitors to the World’s Fair were told that they too would have the opportunity to become astronauts in their own lifetimes. In the General Motors’ Futurama pavilion, Americans of the 1980s were shown taking their holidays on the moon. (Editors of Time-Life Books 1964, p. 222). Other corporations were equally confident that the achievements of the present would soon be surpassed by the triumphs of tomorrow. At its Progressland pavilion, General Electric predicted that electricity generated by nuclear fusion reactors would be ‘too cheap to meter’ (Editors of Time-Life Books 1964, pp. 90-92; Laurence 1964, pp. 40-43). In the imaginary future of the World’s Fair, Americans would not only become space tourists but also be blessed with free energy.

For many corporations, the most effective method of proving their technological modernity was showcasing a computer. Clairol’s machine selected ‘the most flattering hair shades’ for female visitors and Parker Pen’s mainframe matched American kids with ‘pen pals’ in foreign countries (Editors of Time-Life Books 1964, pp. 86, 90). However impressive they might have appeared to their audience, these exhibits were nothing more than advertising gimmicks. In contrast, IBM was able to dedicate its pavilion exclusively to the wonders of computing as a distinct technology. For over a decade, this corporation had been America’s leading mainframe manufacturer. In 1961, a single product – the IBM 1401 – had accounted for a quarter of all the computers operating in the USA (Pugh 1995, pp. 265-267). In the minds of many Americans, IBM was computing. Just before the opening of the World’s Fair, the corporation had launched a series of products which would maintain its dominance over the industry for another two decades: the System/360 (DeLamarter 1986, pp. 54-146). Seizing the opportunity for self-promotion offered by the exposition, the bosses of IBM commissioned a pavilion designed to eclipse all others. Eero Saarinen – the renowned Finnish architect – supervised the construction of the building: a white, corporate-logo-embossed, egg-shaped theatre which was suspended high in the air by 45 rust-coloured metal trees. Underneath this striking feature were interactive exhibits celebrating IBM’s contribution to the computer industry (Stern, Mellins and Fishman 1997, pp. 1046-1047).

For the theatre itself, Charles and Ray Eames – the couple who epitomised American modernist design – created the main attraction at the IBM pavilion: ‘The Information Machine’. After taking their places in the 500-seat ‘People Wall’, visitors were elevated upwards into the egg-shaped structure. Once inside, a narrator introduced a ‘mind-blowing’ multi-media show about how the machines exhibited in the IBM pavilion were forerunners of the sentient machines of the future. Computers were in the process of acquiring consciousness. (Editors of Time-Life Books 1964, pp. 70-74; Laurence 1964, pp. 57-58). In the near future, every American would own a devoted mechanical servant just like Robby the Robot in the popular 1956 sci-fi film Forbidden Planet (Wilcox 1999). At the New York World’s Fair, IBM proudly announced that this dream of artificial intelligence was finally about to be realised. With the launch of the System/360 series, mainframes were now powerful enough to construct the prototypes of a fully conscious computer.

The IBM pavilion’s stunning combination of avant-garde architecture and multi-media performance was a huge hit with both the press and the public. Alongside space rockets and nuclear reactors, the computer had confirmed its place as one of the three iconic technologies of modern America. The ideological message of these machines was clear-cut: the present was the future in embryo. Within the IBM pavilion, computers existed in two time frames at once. On the one hand, the current models on display were prototypes of the sentient machines of the future. On the other hand, the vision of computer consciousness revealed the true potential of the mainframes on show. At the 1964 New York World’s Fair, the launch of the System/360 series was celebrated as the harbinger of the imaginary future of artificial intelligence.

‘Duplicating the problem-solving and information-handling capabilities of the [human] brain is not far off; it would be surprising if it were not accomplished within the next decade.’
Simon 1965, p. 39.

Inventing the Thinking Machine

The futurist fantasies of IBM’s multi-media show were inspired by the dispassionate logic of the academy. Alan Turing – the founding father of computer science – had defined the development of artificial intelligence as the long-term goal of this new discipline. In the mid-1930s, this Cambridge mathematician had published the seminal article which described the abstract model for a programmable computer: the ‘universal machine’ (Turing 2004). During the Second World War, his team of engineers had created a pioneering electronic calculator to speed up the decryption of German military signals. When the conflict was over, Turing moved to Manchester University to join a team of researchers who were building a programmable machine. As proposed in his 1936 article, software would be used to enable the hardware to perform a variety of different tasks. On 21st June 1948, before he’d even taken up his new post, Turing’s colleagues switched on the world’s first electronic stored-program computer: Baby. The theoretical concept described in an academic journal had taken material form as an enormous metal box filled with valves, switches, wires and dials (Turing 2004a; Agar 2001, pp. 3-5, 113-124; Hodges 1992, pp. 314-402).

For Turing, Baby was much more than just an improved version of the office tabulator. When software could control hardware, counting became consciousness. In a series of seminal articles, Turing argued that his mathematical machine was the precursor of an entirely new life form: the mechanical mathematician. He backed up this prediction by defining human intelligence as what computers could do. Since calculating was a sophisticated type of thinking, calculating machines must be able to think. If children acquired knowledge through education, educational software could create knowledgeable computers. Because the human brain worked like a machine, it was obvious that a machine could behave like an electronic brain (Turing 2004a, 2004b, 2004d, 2004e; Schaffer 2000). According to Turing, although the early computers weren’t yet powerful enough to fulfil their true potential, continual improvements in hardware and software would – sooner or later – overcome these limitations. In the second half of the twentieth century, computing technology was rapidly evolving towards its preordained destiny: artificial intelligence.

‘The memory capacity of the human brain is probably of the order of ten thousand million binary digits. But most of this is probably used in remembering visual impressions, and other comparatively wasteful ways. One might reasonably hope to be able to make some real progress [towards artificial intelligence] with a few million digits [of computer memory].’
Turing 2004a, p. 393

In his most famous article, Turing described a test for identifying the winner of this race to the future. Once an observer couldn’t tell whether they were talking with a human or a computer in an on-line conversation, then there was no longer any substantial difference between the two types of consciousness. If the imitation was indistinguishable from the original, the machine must be thinking. The computer had passed the test (Turing 2004c, pp. 441-448; Schaffer 2000). From this point onwards, computers were much more than just practical tools and tradable commodities. As Turing’s articles explained, the imaginary future of artificial intelligence revealed the transformative potential of this new technology. Despite their shortcomings, the current models of computers were forerunners of the sentient machines to come.

By the late-1940s, the catechism of artificial intelligence had been fixed. Within computing, what was and what would be were one and the same thing. Despite this achievement, Turing was a prophet whose influence was waning within his own country. The computer might have been invented in Britain, but its indebted government lacked the resources to dominate the development of this new technology (Agar 2003, pp. 266-278). Across the Atlantic, the situation was very different. During the Second World War, the American government had also provided generous funding for research into electronic calculation. Crucially, when victory was won, scientists working on these projects didn’t encounter severe problems in maintaining their funding. While money was scarce in Britain, the USA could easily afford to pay for cutting-edge research into new technologies. Once the Cold War was underway, American politicians had no problem in justifying these subsidies to their constituents (Leslie 1993, pp. 1-13; Lewontin 1997).

In the USA, computer scientists possessed another major advantage over their British rivals: the meta-theory of cybernetics. During the late-1940s and early-1950s, a group of prominent American intellectuals came together at the Macy conferences to explore ways of breaking down the barriers between the various academic disciplines (Heims 1991). From the outset, Norbert Wiener was recognised as the guru of this endeavour. While working at MIT, he had devised cybernetics as a new theoretical framework for analysing the behaviour of both humans and machines. The input of information about the surrounding environment led to the output of actions designed to transform the environment. Dubbed ‘feedback’, this cycle of stimulus and response reversed the spread of entropy within the universe. Order could be created out of chaos (Wiener 1948, pp. 74-136). According to Wiener, this master theory described all forms of purposeful behaviour. Whether in humans or machines, there was continual feedback between information and action. The same mathematical equations could be used to describe the behaviour of living organisms and technological systems (Wiener 1948, pp. 168-191). Echoing Turing, this theory implied that it was difficult to tell the difference between humans and their machines (Wiener 1948, pp. 21, 32-33). In 1948, Wiener outlined his new master theory in a book filled with pages of mathematical proofs: Cybernetics – or control and communication in the animal and the machine.

Much to his surprise, this academic had written a bestseller. For the first time, a common set of abstract concepts covered both the natural sciences and the social sciences. Wiener’s text had provided potent metaphors for describing the new hi-tech world of Cold War America. Even if they didn’t understand his mathematical equations, readers could easily recognise cybernetic systems within the social institutions and communication networks which dominated their everyday lives. Feedback, information and systems soon became incorporated into popular speech (Conway and Siegelman 2005, pp. 171-194; Heims 1991, pp. 271-272). Despite this public acclaim, Wiener remained an outsider within the US intelligentsia. Flouting the ideological orthodoxies of Cold War America, this guru was a pacifist and a socialist.

In the early-1940s, Wiener – like almost every US scientist – had believed that developing weapons to defeat Nazi Germany benefited humanity. When the Cold War started, his military-funded colleagues claimed that their research work was also contributing to the struggle against an aggressive totalitarian enemy (Lewontin 1997). Challenging this patriotic consensus, Wiener argued that American scientists should adopt a very different stance in the confrontation with Russia. He warned that the nuclear arms race could lead to the destruction of humanity. Faced with this dangerous new situation, responsible scientists should refuse to carry out military research (Conway and Siegelman 2005, pp. 237-243, 255-271). During the 1950s and early-1960s, Wiener’s political dissidence inspired his socialist interpretation of cybernetics. In the epoch of corporate monopolies and atomic weaponry, the theory that explained the behaviour of both humans and machines must be used to place humans in control of their machines. Abandoning his earlier enthusiasm for Turing’s prophecy, Wiener now emphasised the dangers posed by sentient computers (Wiener 1966, pp. 52-60, 1967, pp. 239-254). Above all, this attempt to build artificial intelligences was a diversion from the urgent task of creating social justice and global peace.

‘The world of the future will be an ever more demanding struggle against the limitations of our own intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.’
Wiener 1966, p. 69.

By opposing the militarisation of scientific research, the founder of cybernetics had embarrassed his sponsors among the US elite. Fortunately for the rulers of America, there was another brilliant mathematician at the Macy conferences who was also a fanatical Cold Warrior: John von Neumann. Traumatised by the nationalisation of his family’s bank during the 1919 Hungarian revolution, the Hungarian émigré held anti-communist convictions so extreme that, in the mid-1940s, he had argued in favour of launching a pre-emptive war to stop Russia acquiring nuclear weapons (Heims 1980, pp. 235-236, 244-251). While playing a leading role in developing the atomic bomb, von Neumann had already applied his mathematical and organisational talents to the new field of computing. When the first Macy conference was held in 1946, his team of researchers were working on building a prototype mainframe for the US navy (Ceruzzi 2003, pp. 21-24). In von Neumann, the American empire had found a guru without any trace of heresy.

At the early Macy conferences, the political differences among the attendees weren’t apparent. United by the anti-fascist struggle, Wiener and von Neumann could champion the same concept of cybernetics (Conway and Siegelman 2005, pp. 143-149; Heims 1980, pp. 201-207). But, as their politics diverged, these two stars of the Macy conferences began advocating rival interpretations of this meta-theory. In its left-wing version, artificial intelligence was denounced as the apotheosis of technological domination. When he formulated his right-wing remix, von Neumann took cybernetics in exactly the opposite direction. Above all, his interpretation emphasised that this master theory had been inspired by the prophecy of thinking machines. Back in the mid-1930s, Turing had briefly worked with von Neumann at Princeton. A decade before his own involvement in computing, this Hungarian scientist had known all about the idea of the universal machine. When, in the early-1940s, two Chicago psychologists applied Turing’s theory to explain the processes of human thought, von Neumann became fascinated by the implications of their speculations.

Since the mechanical calculator was modelled on the human brain, Warren McCulloch and Walter Pitts argued that consciousness was synonymous with calculation. Like the electrical contacts of an IBM tabulator, neurons were switches which transmitted information in binary form (McCulloch and Pitts 1943). Entranced by this inversion of Turing’s line of argument, von Neumann became convinced that it was theoretically possible to build a thinking machine. If neurons acted as switches within the human brain, valves could be used to create an electronic brain (von Neumann 1966, pp. 43-46, 1976, pp. 308-311). When he was working on computer research for the US military, von Neumann used the human brain as the model for his eponymous computer architecture. Echoing Turing, this prophet claimed that – as the number of valves in a computer approached those of the neurons in the brain – the machine would be able to think (von Neumann 1966, pp. 36-41, 1976, pp. 296-300, 2000, pp. 39-52). Within a decade, von Neumann and his colleagues would be equipping the US military with cybernetic soldiers capable of fighting and winning a nuclear war.

‘Dr. McCulloch: How about designing computing machines so that if they were damaged in air raids … they could replace parts … and continue to work?
Dr. von Neumann: These are really quantitative rather than qualitative questions.’
von Neumann 1976, p. 324

By the early-1950s, the USA’s academic and corporate research teams had seized the leadership of computing from their British rivals. From then onwards, all of the most advanced machines were made in America (Ceruzzi 2003, pp. 13-46). Yet, by creating cybernetics without Wiener, von Neumann had ensured that these US laboratories would also follow Turing’s path to the imaginary future: building artificial intelligence. The metaphor of feedback now proved that computers operated like humans. Inputs of information led to outputs of action. If the mind operated like a machine, then it must be possible to develop a machine which duplicated the functions of the mind. Computers could already calculate faster than their human inventors. Mastering the complexities of mathematical logic must be the first step towards endowing these machines with the other attributes of human consciousness. Language was a set of rules which could be codified as software. Learning from new experiences could be programmed into hardware (Minsky 2004, 2004a). Throughout the 1950s and early-1960s, American scientists worked hard to build the thinking machine. Once it had enough processing power, the computer would achieve consciousness (Edwards 1996, pp. 239-273). When IBM launched its System/360 mainframe at the 1964 World’s Fair, Turing’s dream appeared to be close to realisation.

Cold War Computing

A quarter of a century earlier, one of the stars of the 1939 New York World’s Fair had been Electro: a robot which – according to its publicists – ‘… could walk, talk, count on its fingers, puff a cigarette, and distinguish between red and green with the aid of a photoelectric cell’ (Eames and Eames 1973, p. 105). For all its fakery, this exhibit was the first iteration of the imaginary future of artificial intelligence. Before the 1939 World’s Fair, robots in books and movies had almost always been portrayed as Frankenstein monsters intent on destroying their human creators (Shelley 1969; Lang 2003). Inspired by his visits to the exposition, Isaac Asimov decided to change this negative image. Just like Electro, the robots in his sci-fi stories were safe and friendly products of a large corporation (Asimov, 1968, 1968a). By the time that the 1964 New York World’s Fair opened, this positive image of artificial intelligence had become one of the USA’s most popular imaginary futures. In both science fiction and science fact, the robot servant was the symbol of better times to come.

At the 1939 New York World’s Fair, Electro was competing against the technological icon of the moment: the motor car. The must-see attractions were Democracity – a model featured in the New York State’s Perisphere building – and Futurama – a diorama inside the General Motors’ pavilion. Both exhibits promoted a vision of an affluent and hi-tech America of the 1960s. In this imaginary future, the majority of the population lived in family homes in the suburbs and commuted to work in their own motor cars (Exposition Publications 1939, pp. 42-45, 207-209). For most visitors to the 1939 New York World’s Fair, this prophecy of consumer prosperity must have seemed like a utopian dream. The American economy was still recovering from the worst recession in the nation’s history, Europe was on the brink of another devastating civil war and East Asia was already engulfed by murderous conflicts. Yet, by the time that the 1964 World’s Fair opened, the most famous prediction of the 1939 exposition had been realised. However sceptical visitors might have been back in 1939, the Democracity and Futurama dioramas seemed remarkably prescient twenty-five years later. By the early-1960s, America was a suburban-dwelling, car-owning consumer society. The imaginary future had become contemporary reality.

‘The motor car … directs [social] behaviour from economics to speech. Traffic circulation is one of the main functions of a society … Space [in urban areas] is conceived in terms of motoring needs and traffic problems take precedence over accommodation … it is a fact that for many people the car is perhaps the most substantial part of their ‘living conditions’.’
Lefebvre 1984, p. 100.

Since the predictions of the 1939 exposition had largely come true, visitors to the 1964 New York World’s Fair could have been forgiven for believing that its three main imaginary futures would also be realised. Who could doubt that – by 1990 at the latest – the majority of Americans would be enjoying the delights of space tourism and unmetered electricity? Best of all, they would be living in a world where sentient machines were their devoted servants. However, the American public’s confidence in these imaginary futures would have been founded upon a mistaken sense of continuity. Despite being held on the same site and having many of the same exhibitors, the 1964 World’s Fair had a very different focus from its 1939 antecedent. Twenty-five years earlier, the centrepiece of the exposition had been the motor car: a mass produced consumer product. In contrast, the stars of the show at the 1964 World’s Fair were state-funded technologies for fighting the Cold War. Computers calculated the trajectories which would guide American missiles armed with nuclear bombs to destroy Russian cities and their unfortunate inhabitants (Isaacs and Dowling 1998, pp. 230-243). While its 1939 predecessor had showcased motorised transportation for the masses, the 1964 exposition celebrated the machines of atomic armageddon.

When the IBM pavilion was being designed, the corporation had to deal with this public relations problem. Like nuclear reactors and space rockets, computers had also been developed as Cold War weaponry. ENIAC – the first prototype mainframe built in America – was a machine for calculating tables to improve the accuracy of artillery guns (Ceruzzi 2003, p. 15). From the early-1950s onwards, IBM’s computer division was focused on winning orders from the Department of Defense (Pugh 1995, pp. 167-172). Using mainframes supplied by the corporation, the US military prepared for nuclear war, organised invasions of ‘unfriendly’ countries, directed the bombing of enemy targets, paid the wages of its troops, ran complex war games and managed its supply chain (Berkeley 1962, pp. 56-57, 59-60, 137-145). Thanks to the American taxpayers, IBM had become the technological leader of the computer industry.

When the 1964 New York World’s Fair opened, IBM was still closely involved in a wide variety of military projects. Yet, its pavilion was dedicated to promoting the sci-fi fantasy of thinking machines. Like the predictions of unmetered energy and space tourism, the imaginary future of artificial intelligence distracted visitors at the World’s Fair from discovering the original motivation for developing IBM’s mainframes: killing millions of Russian civilians. Although the superpowers’ imperial hegemony depended upon atomic weapons, the threat of global annihilation made their possession increasingly problematic. Two years earlier, the USA and Russia had almost blundered into a catastrophic war over Cuba (Dallek 2003, pp. 535-574). Despite disaster being only narrowly averted, the superpowers were incapable of stopping the arms race. In the bizarre logic of the Cold War, the prevention of an all-out confrontation between the two blocs depended upon the continual growth in the number of nuclear weapons held by both sides. The ruling elites of the USA and Russia had difficulties in admitting to themselves – let alone to their citizens – the deep irrationality of this new form of military competition. In a rare moment of lucidity, American analysts invented an ironic acronym for this high-risk strategy of ‘mutually assured destruction’: MAD (Isaacs and Dowling 1998, pp. 230-243; Kahn 1960, pp. 119-189).

Not surprisingly, the propagandists of both sides justified the enormous waste of resources on the arms race by promoting the peaceful applications of the leading Cold War technologies. By the time that the 1964 New York World’s Fair opened, the weaponry of genocide had been successfully repackaged into people-friendly products. Nuclear power would soon be providing unmetered energy for everyone. Space rockets would shortly be taking tourists for holidays on the moon. Almost all traces of the military origins of these technologies had disappeared. Similarly, in the IBM pavilion, the System/360 mainframe was promoted as the harbinger of artificial intelligence. Visitors were expected to admire the technological achievements of the corporation, not to question its dubious role in the arms race. The horrors of the Cold War present had been hidden by the marvels of the imaginary future.

IBM’s ideological legerdemain was made much easier by one of the distinguishing features of industrial modernity: the breaking of the explicit links between products and their producers. For millennia, warriors and priests had overtly extracted the surpluses of the peasantry for their own benefit. But, when Europeans started to privatise land ownership and mechanise handicraft production, a new – and more advanced – economic system was born: liberal capitalism. Entrepreneurs proved that price competition could indirectly coordinate human labour much more efficiently than the direct methods of feudalism. Adventurers discovered that selling commodities in the world market was much more profitable than rack-renting peasants in one locality. For the first time in human history, the productive potential of collective labour was being realised (Smith 1976, pp. 1-287, 401-445; Marx 1976, pp. 762-940).

In this new economy, people were required to interact with each other through things: commodities, money and capital. The distribution and division of labour across the economy was regulated by the prices and wages set by market competition. However, the demise of the aristocracy and the priesthood hadn’t ended class rule. When labour was bought and sold in the capitalist economy, equality within the marketplace resulted in inequality inside the workplace (Marx 1976, pp. 270-280; Rubin 1972, pp. 77-253). Because commodities were exchanged with others of equivalent value, this new form of class rule was very different from its predecessor. Indirect exploitation had replaced direct domination. Above all, the impersonal movements of the markets now determined the destiny of individuals. Things, not people, now ruled the world.

‘The mysterious character of the commodity-form consists … simply in the fact that the commodity reflects the social characteristics of [women and] men’s own labour as objective characteristics of the products of labour themselves, as the socio-natural properties of these things. Hence it also reflects the social relations of the producers to the sum total of labour as a social relation between objects, a relation which exists apart from and outside the producers.’
Marx 1976, pp. 164-165

The growth of capitalism created a much more complex division of labour within the economy. In parallel with the proliferation of factory and office jobs, scientific research also emerged as a distinct profession (Bahr 1980; Smith 1976, pp. 7-16). Successful firms grew by not only employing more workers but also investing in new machinery. In the fetishised world of capitalism, the growth in productivity caused by the increasingly sophisticated cooperation of factory labour and scientific research was expressed as the development of cutting-edge technologies. With human creativity hidden behind the commodity, the process of modernity had acquired a highly visible object as its subject: the ‘… automatic system of machinery … a moving power that moves itself.’ (Marx 1973, p. 692).

During the mid-twentieth century, the fetishisation of technology reached its apotheosis in the prophecy of artificial intelligence. Living in a society where new machinery appeared to be the driving force of social evolution, Turing and von Neumann’s claim that machines were evolving into living beings didn’t seem outlandish. When computers were operating, the labour involved in developing their hardware and writing their programs wasn’t immediately visible. Mesmerised by technological fetishism, the admirers of Turing and von Neumann had convinced themselves that an electronic brain would soon be able to think just like a human brain. In 1964, the calculating power of the System/360 mainframe was so great that this IBM machine seemed to be on the threshold of consciousness. The creation was about to become the creator.

Cybernetic Supremacy

At the 1964 World’s Fair, imaginary futures temporarily succeeded in concealing the primary purpose of its three iconic technologies from the American public. But even the flashiest showmanship couldn’t hide dodgy use values forever. As the decades passed, none of the predictions made at the World’s Fair about the key Cold War technologies were realised. Energy remained metered, tourists didn’t visit the moon and computers never became intelligent. Unlike the prescient vision of motoring for the masses at the 1939 World’s Fair, the prophecies about the star technologies of the 1964 exposition seemed almost absurd twenty-five years later. Hyper-reality had collided with reality – and lost.

Despite the failure of its prophecy, IBM suffered no damage. In stark contrast with nuclear power and space travel, computing was the Cold War technology which successfully escaped from the Cold War. Right from the beginning, machines made for the US military were also sold to commercial clients (Pugh 1995, pp. 152-155). By the time that IBM built its pavilion for the 1964 World’s Fair, the imaginary future of artificial intelligence had to hide more than the unsavoury military applications of computing. Commodity fetishism also performed its classic function of concealing the role of human labour within production. Computers were described as ‘thinking’ so that the hard work involved in designing, building, programming and operating them could be discounted. Above all, the prophecy of artificial intelligence obscured the role of technological innovation within American workplaces.

The invention of computers came at an opportune moment for big business. During the first half of the twentieth century, large corporations had become the dominant institutions of the American economy. Henry Ford’s giant car factory became the eponymous symbol of the new social paradigm: Fordism (Ford and Crowther 1922; Aglietta 1979). When it boosted their profits, corporations replaced the indirect regulation of production by markets with direct supervision by bureaucrats. As the wage-bill for white-collar employees steadily rose, businesses needed increasing amounts of equipment to raise productivity within the office. Long before the invention of the computer, Fordist corporations were running an information economy with tabulators, typewriters and other types of office equipment (Beniger 1986, pp. 291-425). However, by the beginning of the 1950s, the mechanisation of clerical labour had stalled. Increases in productivity in the office were lagging well behind those in the factory. When the first computers appeared on the market, corporate managers quickly realised that the new technology offered a solution to this pressing problem (Sobel 1981, pp. 95-184). The work of large numbers of tabulator operators could now be done by a much smaller group of engineers using a mainframe (Berkeley 1962, p. 5). Even better, the new technology of computing enabled capitalists to deepen their control over their organisations. Much more information about many more topics could now be collected and processed in increasingly complex ways. The managers were masters of all that they surveyed.

Almost from its first appearance in the workplace, the mainframe was caricatured – with good reason – as the mechanical perfection of bureaucratic tyranny. In Asimov’s sci-fi stories, Mr and Mrs Average were the owners of robot servants. Yet, when the first computers arrived in America’s factories and offices, this new technology was controlled by the bosses not the workers. In 1952, Kurt Vonnegut published a sci-fi novel which satirised the authoritarian ambitions of corporate computing. In his dystopian future, the ruling elite had delegated the management of society to an omniscient artificial intelligence.

‘EPICAC XIV … decided how many [of] everything America and her customers could have and how much they would cost. And it … would decide how many engineers and managers and … civil servants, and of what skills, would be needed to deliver the goods; and what I.Q. and aptitude levels would separate the useful men [and women] from the useless ones, and how many … could be supported at what pay level…’
Vonnegut 1969, p. 106.

At the 1964 World’s Fair, the IBM pavilion promised that thinking machines would be the servants of all of humanity. Yet, at the same time, its sales personnel were telling the bosses of large corporations that computers were hard-wiring bureaucratic authority into modern society. Herbert Simon – one of America’s leading management theorists and a former colleague of von Neumann – foresaw that the increasing power of mainframes would enable companies to automate more and more clerical tasks. Just like in the factory, machines were taking over from human labour in the office. When artificial intelligence arrived, mainframes would almost completely replace bureaucratic and technical labour within manufacturing. The ultimate goal was the creation of the fully automated workplace. Companies would then no longer need either blue-collar or white-collar workers to make products or provide services. Even most managers would become surplus to requirements. Instead, thinking machines would be running the factories and offices of America (Simon 1965, pp. 26-52). In the imaginary future of artificial intelligence, the corporation and the computer would be one and the same thing.

This prediction was founded upon von Neumann’s conservative interpretation of cybernetics. In his management theory texts, Simon argued that the workings of a firm resembled the operations of a computer. As in McCulloch and Pitts’ psychology, this identification was made in two directions. Managing workers was equated with programming a computer. Writing software was like drawing up a business plan. Both employees and machinery were controlled by orders issued from above. The workers’ dystopia of Big Brother mainframe had now mutated into the capitalist utopia of cybernetic Fordism. Ironically, the credibility of Simon’s managerial ideology depended upon his readers forgetting the fierce criticisms of corporate computing made by the founding father of systems theory. Echoing Marx, Wiener had warned that the primary role of new technology under capitalism was to intensify the exploitation of the workers. Instead of creating more leisure time and improving living standards, the computerisation of the economy under Fordism would increase unemployment and cut wages (Wiener 1967, pp. 206-221). If Vonnegut’s dystopia was to be avoided, American trade unionists and political activists must mobilise against the corporate Golem (Wiener 1966, pp. 54-55). According to Wiener, cybernetics proved that artificial intelligence threatened the freedoms of humanity.

‘Let us remember that the automatic machine … is the precise equivalent of slave labour. Any labour which competes with slave labour must accept the economic conditions of slave labour.’
Wiener 1967, p. 220.

For business executives, Vonnegut and Wiener’s nightmare was their computer daydream. However, this corporate vision of cybernetic Fordism meant forgetting the history of Fordism itself. This economic paradigm had been founded upon the successful co-ordination of mass production with mass consumption. Ironically, since these exhibits were more closely connected to social reality, Democracity and Futurama in 1939 provided a much more accurate prediction of the development path of computing than the IBM pavilion did in 1964. Just like motor cars twenty-five years earlier, this new technology was also slowly being transformed from a rare, hand-made machine into a ubiquitous, factory-produced commodity. As in the Ford factories, IBM’s System/360 mainframes were manufactured on assembly-lines (Pugh, Johnson and Palmer 1991, pp. 87-105, 204-210). These opening moves towards the mass production of computers anticipated what would be the most important advance in this sector twenty-five years later: the mass consumption of computers. In its formal design, the 1964 System/360 mainframe was a bulky and expensive prototype of the much smaller and cheaper IBM PCs of 1989.

The imaginary future of artificial intelligence was a way of avoiding thinking about the likely social consequences of the widespread ownership of computers. In the early-1960s, Big Brother mainframe belonged to big government and big business. Above all, feedback was knowledge of the ruled monopolised by the rulers. However, as Wiener had pointed out, Fordist production would inevitably transform expensive mainframes into cheap commodities (Wiener 1967, pp. 210-211). In turn, increasing ownership of computers was likely to disrupt the existing social order. For the feedback of information within human institutions was most effective when it was two-way (Wiener 1967, pp. 67-73). By reconnecting conception and execution, cybernetic Fordism threatened the social hierarchies which underpinned Fordism itself.

‘… the simple coexistence of two items of information is of relatively small value, unless these two items can be effectively combined in some mind … which is able to fertilise one by means of the other. This is the very opposite of the organisation in which every member travels a preassigned path …’
Wiener 1967, p. 172.

At the 1964 World’s Fair, this possibility was definitely not part of IBM’s imaginary future. Rather than aiming to produce ever greater numbers of more efficient machines at cheaper prices, the corporation was focused on steadily increasing the capabilities of its computers to preserve its near-monopoly over the military and corporate market. Instead of room-sized machines shrinking down into desktops, laptops and, eventually, mobile phones, IBM was convinced that computers would always be large and bulky mainframes. The corporation fervently believed that – if this path of technological progress was extrapolated – artificial intelligence must surely result. Crucially, this conservative recuperation of cybernetics implied that sentient machines would inevitably evolve into lifeforms which were more advanced than mere humans. The Fordist separation between conception and execution would have achieved its technological apotheosis.

Not surprisingly, IBM was determined to counter this unsettling interpretation of its own futurist propaganda. At the 1964 World’s Fair, the corporation’s pavilion emphasised the utopian possibilities of computing. Yet, despite its best efforts, IBM couldn’t entirely avoid the ambiguity inherent within the imaginary future of artificial intelligence. This fetishised ideology could only appeal to all sections of American society if computers fulfilled the deepest desires of both sides within the workplace. Therefore, in the exhibits at its pavilion, IBM promoted a single vision of the imaginary future which combined two incompatible interpretations of artificial intelligence. On the one hand, workers were told that all their needs would be satisfied by sentient robots: servants who never tired, complained or questioned orders. On the other hand, capitalists were promised that their factories and offices would be run by thinking machines: producers who never slacked off, expressed opinions or went on strike. Robby the Robot had become indistinguishable from EPICAC XIV. If only at the level of ideology, IBM had reconciled the class divisions of 1960s America. In the imaginary future, workers would no longer need to work and employers would no longer need employees. The sci-fi fantasy of artificial intelligence had successfully distracted people from questioning the impact of computing within the workplace. After visiting IBM’s pavilion at the 1964 World’s Fair, it was all too easy to believe that everyone would win when the machines acquired consciousness.

Inventing New Futures

Forty years later, we’re still waiting for the imaginary future of artificial intelligence. In the intervening period, we’ve been repeatedly promised its imminent arrival. Yet, despite continual advances in hardware and software, machines are still incapable of ‘thinking’. The nearest thing to artificial intelligence which most people have encountered is the characters in video games. But, as the growing popularity of on-line gaming demonstrates, a virtual opponent is a poor substitute for a human player. Looking back at the history of this imaginary future, it is obvious that neither the optimistic nor the pessimistic versions of artificial intelligence have been realised. Robby the Robot isn’t our devoted servant and EPICAC XIV doesn’t control our lives. Instead of evolving into thinking machines, computers have become consumer goods. Room-sized mainframes have kept on shrinking into smaller and smaller machines. Computers are everywhere in the modern world – and their users are all too aware that they’re dumb.

Repeated failure should have discredited the imaginary future of artificial intelligence for good. Yet, its proponents remain unrepentant. Four decades on from the 1964 World’s Fair, IBM is still claiming that its machines are on the verge of acquiring consciousness (Bell 2004, p. 2). The persistence of this sci-fi fantasy demonstrates the continuing importance of von Neumann’s conservative appropriation of cybernetics within the computer industry. As in the early-1960s, artificial intelligence still provides a great cover story for the development of new military technologies. Bringing on the Singularity seems much more friendly than collaborating with American imperialism. Even more importantly, this imaginary future continues to disguise the impact of computing within the workplace. Both managers and workers are still being promised technological fixes for socio-economic problems. The dream of sentient machines makes better media copy than the reality of cybernetic Fordism. At the beginning of the 21st century, artificial intelligence remains the dominant ideological manifestation of the promise of computing.

The credibility of this imaginary future depends upon forgetting its embarrassing history. Looking back at how earlier versions of the prophecy were repeatedly discredited encourages deep scepticism about its contemporary iterations. Our own personal frustrations with computer technology should demonstrate the improbability of its transformation into the Silicon Messiah. Forty years after the New York World’s Fair, artificial intelligence has become an imaginary future from the distant past. What is needed instead is a much more sophisticated analysis of the potential of computing. Wiener – not von Neumann – must be our cybernetic guru. The study of history should inform the reinvention of the future. Messianic mysticism must be replaced by pragmatic materialism. Above all, this new image of the future should celebrate computers as tools for augmenting human intelligence and creativity. Praise for top-down hierarchies of control must be superseded by the advocacy of two-way sharing of information. Let’s be inspired and passionate about imagining our own visions of the better times to come (Barbrook and Schultz 1997; Barbrook 2000).

==================================================

The arguments in this article are developed further in Richard Barbrook, Imaginary Futures: from thinking machines to the global village, Pluto, London 2007. Check out the book’s website: <www.imaginaryfutures.net>.

=================================================

References

Agar, J. (2001) Turing and the Universal Machine: the making of the modern computer, Cambridge: Icon.

Agar, J. (2003) The Government Machine: a revolutionary history of the computer, Cambridge Mass: MIT Press.

Aglietta, M. (1979) A Theory of Capitalist Regulation: the US experience, London: Verso.

Asimov, I. (1968) I, Robot, London: Panther.

Asimov, I. (1968a) The Rest of the Robots, London: Panther.

Bahr, H.-D. (1980) ‘The Class Structure of Machinery: notes on the value form’ in P. Slater (ed.), Outlines of a Critique of Technology, London: Ink Links, pp. 101-141.

Barbrook, R. (2000) ‘Cyber-communism: how the Americans are superseding capitalism in cyberspace’, Science as Culture, No. 1, Vol. 9, pp. 5-40, <www.imaginaryfutures.net/2007/04/17/cyber-communism-how-the-americans-are-superseding-capitalism-in-cyberspace>.

Barbrook, R. and Schultz, P. (1997) ‘The Digital Artisans Manifesto’, ZKP 4, Ljubljana: nettime, pp. 52-53, <www.ljudmila.org/nettime/zkp4/72.htm>.

Bell, J. (2004) ‘Exploring the “Singularity”’, <www.kurzweilai.net/meme/frame.html?main=/articles/art0584.html?m%3D1>.

Beniger, J. (1986) The Control Revolution: technological and economic origins of the information society, Cambridge Mass: Harvard University Press.

Berkeley, E. (1962) The Computer Revolution, New York: Doubleday.

Cameron, J. (dir.) (1984) The Terminator, MGM/United Artists.

Ceruzzi, P. (2003) A History of Modern Computing, Cambridge Mass: MIT Press.

Conway, F. and Siegelman, J. (2005) Dark Hero of the Information Age: in search of Norbert Wiener, the father of cybernetics, New York: Basic Books.

Dallek, R. (2003) John F. Kennedy: an unfinished life 1917-1963, London: Penguin.

DeLamarter, R. (1986) Big Blue: IBM’s use and abuse of power, London: Pan.

Eames, C. and Eames, R. (1973) A Computer Perspective: a sequence of 20th century ideas, events and artefacts from the history of the information machine, Cambridge Mass: Harvard University Press.

Editors of Time-Life Books (1964) Official Guide New York World’s Fair 1964/5, New York: Time.

Edwards, P. (1996) The Closed World: computers and the politics of discourse in Cold War America, Cambridge Mass: MIT Press.

Exposition Publications (1939) Official Guide Book of the New York World’s Fair 1939, New York: Exposition Publications.

Ford, H. and Crowther, S. (1922) My Life and Work, London: William Heinemann.

Heims, S. (1980) John von Neumann and Norbert Wiener: from mathematics to the technologies of life and death, Cambridge Mass: MIT Press.

Heims, S. (1991) The Cybernetics Group, Cambridge Mass: MIT Press.

Hodges, A. (1992) Alan Turing: the enigma, London: Vintage.

Honda (2004) ‘Asimo’, <world.honda.com/ASIMO>.

Isaacs, J. and Dowling, T. (1998) Cold War: for 45 years the world held its breath, London: Bantam.

Kahn, H. (1960) On Thermonuclear War, Princeton: Princeton University Press.

Kurzweil, R. (2004) ‘The Intelligent Universe’, <www.kurzweilai.net/meme/frame.html?main=memelist.html?m=3%23534>.

Lang, F. (dir.) (2003) Metropolis, Eurekavideo.

Laurence, W. (1964) Science at the Fair, New York: New York 1964-1965 Fair Corporation.

Lefebvre, H. (1984) Everyday Life in the Modern World, New Brunswick: Transaction Publications.

Leslie, S. (1993) The Cold War and American Science: the military-industrial complex at MIT and Stanford, New York: Columbia University Press.

Lewontin, R.C. (1997) ‘The Cold War and the Transformation of the Academy’ in A. Schiffin (ed.), The Cold War and the University, New Press, New York, pp. 1-34.

Luce, H. (1941) The American Century, New York: Time.

Marx, K. (1973) Grundrisse, London: Penguin.

Marx, K. (1976) Capital Volume 1: a critique of political economy, London: Penguin.

McCulloch, W. and Pitts, W. (1943) ‘A Logical Calculus of the Ideas Immanent in Nervous Activity’, Bulletin of Mathematical Biophysics, Volume 5, pp. 115-133.

Minsky, M. (2004) ‘Steps Towards Artificial Intelligence’, <web.media.edu/~minsky/papers/steps.html>.

Minsky, M. (2004a) ‘Matter, Mind and Models’, <web.media.edu/~minsky/papers/MatterMindModels.txt>.

von Neumann, J. (1966) The Theory of Self-Reproducing Automata, Urbana: University of Illinois Press.

von Neumann, J. (1976) ‘The General and Logical Theory of Automata’ in Collected Works Volume V: design of computers, theory of automata and numerical analysis, Oxford: Pergamon Press, pp. 288-326.

von Neumann, J. (2000) The Computer and the Brain, New Haven: Yale University Press.

Pugh, E. (1995) Building IBM: shaping an industry and its technology, Cambridge Mass: MIT Press.

Pugh, E., Johnson, L. and Palmer, J. (1991) IBM’s 360 and Early 370 Systems, Cambridge Mass: MIT Press.

Rubin, I. (1972) Essays on Marx’s Theory of Value, Detroit: Black & Red.

Schaffer, S. (2000) ‘OK Computer’, <www.imaginaryfutures.net/2007/04/16/ok-computer-by-simon-schaffer>.

Schefter, J. (1999) The Race: the definitive story of America’s battle to beat Russia to the moon, London: Century.

Shelley, M. (1969) Frankenstein: the modern Prometheus, Oxford: Oxford University Press.

Simon, H. (1965) The Shape of Automation for Men and Management, New York: Harper.

Sobel, R. (1981) IBM: colossus in transition, New York: Truman Talley.

Smith, A. (1976) An Inquiry into the Nature and Causes of the Wealth of Nations Volume 1 & Volume 2, Chicago: University of Chicago Press.

Stanton, J. (2004) ‘Best of the World’s Fair’, <naid.sppsr.ucla.edu/ny64fair/map-docs/bestoffair.htm>.

Stanton, J. (2004a) ‘Building the 1964 World’s Fair’, <naid.sppsr.ucla.edu/ny64fair/map-docs/buildingfair.htm>.

Startrek.com (2005), ‘Data’, <www.startrek.com/startrek/view/series/TNG/character/1112457.html>.

Stern, R., Mellins, T. and Fishman, D. (1997) New York 1960: architecture and urbanism between the Second World War and the Bicentennial, Köln: Benedikt Taschen.

Turing, A., (2004) ‘On Computable Numbers, with an Application to the Entscheidungsproblem’ in B. J. Copeland (ed.), The Essential Turing: seminal writings in computing, logic, philosophy, artificial intelligence and artificial life plus the secrets of the Enigma, Oxford: Oxford University Press, pp. 58-90.

Turing, A., (2004a) ‘Lecture on the Automatic Computing Engine’ in B. J. Copeland (ed.), The Essential Turing: seminal writings in computing, logic, philosophy, artificial intelligence and artificial life plus the secrets of the Enigma, Oxford: Oxford University Press, pp. 378-394.

Turing, A., (2004b) ‘Intelligent Machinery’ in B. J. Copeland (ed.), The Essential Turing: seminal writings in computing, logic, philosophy, artificial intelligence and artificial life plus the secrets of the Enigma, Oxford: Oxford University Press, pp. 410-432.

Turing, A., (2004c) ‘Computing Machinery and Intelligence’ in B. J. Copeland (ed.), The Essential Turing: seminal writings in computing, logic, philosophy, artificial intelligence and artificial life plus the secrets of the Enigma, Oxford: Oxford University Press, pp. 433-464.

Turing, A., (2004d) ‘Intelligent Machinery, a Heretical Theory’ in B. J. Copeland (ed.), The Essential Turing: seminal writings in computing, logic, philosophy, artificial intelligence and artificial life plus the secrets of the Enigma, Oxford: Oxford University Press, pp. 472-475.

Turing, A., (2004e) ‘Can Digital Computers Think?’ in B. J. Copeland (ed.), The Essential Turing: seminal writings in computing, logic, philosophy, artificial intelligence and artificial life plus the secrets of the Enigma, Oxford: Oxford University Press, pp. 482-486.

Vinge, V. (1993) ‘The Coming Technological Singularity: how to survive in the post-human era’, VISION-21 Symposium, 30-31 March, <www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html>.

Vonnegut, K. (1969) Player Piano, St. Albans: Panther.

Wiener, N. (1948) Cybernetics: or control and communication in the animal and the machine, New York: John Wiley.

Wiener, N. (1966) God & Golem, Inc.: a comment on certain points where cybernetics impinges on religion, Cambridge Mass: MIT Press.

Wiener, N. (1967) The Human Use of Human Beings: cybernetics and society, New York: Avon Books.

Wilcox, F. (dir.) (1999) Forbidden Planet, Turner Entertainment.