
It’s not all lightbulbs

If we abandon the cult of the Great White Innovator, we will understand the history of technology in a much deeper way

W Patrick McCray

In the General Electric Research Laboratory, circa 1949. Photo by Bettmann/Getty

W Patrick McCray is professor of history at the University of California, Santa Barbara. His latest book is The Visioneers: How a Group of Elite Scientists Pursued Space Colonies, Nanotechnologies, and a Limitless Future (2012). He is currently working on a book on art-technology collaborations for MIT Press.

Published in association with The Maintainers, an Aeon Partner




Edited by Sam Haselby


Innovation has become a defining ideology of our time. Be disruptive, move fast, break things! And everyone knows – right? – what innovation looks like. Just Google the word. You’ll see lots of lightbulbs. Lightbulbs represent a sudden flash of inventiveness experienced by Thomas Edison or other mythic geniuses.

Innovation, as an infinite progression of advertisements, political campaigns and university incubators tells us, is Always A Very Good Thing. And, like all myths, this one holds some truth. Technological innovation has raised standards of living, and made populations healthier, safer and smarter.

But it isn’t always true. Precisely because innovation matters so much, it’s essential to understand how advances in science and technology actually happen and affect the world, and to reflect more critically on our collective myths about innovation.

First, forget all those images that a web search gives. The driving forces of innovation are not mythic isolated geniuses, almost always represented as men, be it Edison or Steve Jobs. That view is at best misleading, the history of technology and science’s version of the Great (White) Man approach to history. For instance, Edison almost never worked alone. The more than 2 billion smartphones used around the world today function not because of Jobs’s singular genius, not even because of the private sector, but because of research and development funded by an entrepreneurial state.

The history of technology is too important to be left to the technologists. Relying on PayPal’s founders Elon Musk or Peter Thiel to tell us how that history goes is like turning to Bill Clinton or Newt Gingrich to tell the political history of the 1990s. Books such as Walter Isaacson’s The Innovators (2014) or Steven Johnson’s How We Got to Now (2014) give us accounts of lone genius men toiling in industrial labs and Bay Area garages. This view of innovation – narrow and shallow – casts a long shadow, one that obscures the broad and deep currents that actually drive technological innovation and shape its impact on society.

Instead, consider the Otts. Somewhere in Kansas during the early years of the Great Depression, Bill Ott and his daughter Lizzie did something different with their car. By removing the rear tyre and adding a drive belt, they built a homemade car-powered washing machine. As an ‘innovation thought leader’ at Davos or TED might say, the Otts hacked the automobile and re-invented the washing machine. Stated simply – they innovated. So how come you haven’t heard of the Otts? Because the Great White Man narrative of innovation ignores the critical role that anonymous, unrecognised people such as Bill and Lizzie Ott play in the incrementalism that is the real stuff of technological change. Most of the time, innovators don’t move fast and break things.

The Ott family with their Model T Ford washing machine. Photo courtesy Ronald Kline, Cornell University

Over the past two centuries, almost all professional scientists and engineers have worked not to cut down the old trees of technologies and knowledge and grow new ones, but to nurture and prune the existing ones. In corporate-based science and technology, disruption is very rare; continuity rules, and it is continuity that makes change and advance possible. At different times in history, such disruption was even discouraged. At the great industrial labs of the early 20th century, companies such as General Electric (GE) or AT&T didn’t want their engineers and scientists to create excessive technological novelty – tens of millions of company dollars had been invested to build existing technological systems. Instead, research managers such as Willis R Whitney, head of GE’s research, sought incremental improvements that would marginally advance the company’s technologies and extend its intellectual property regime. C E Kenneth Mees, who ran Kodak’s research lab for decades, noted in 1920 that corporate research managers did not seek brilliant, eccentric (and unpredictable) geniuses. Provided that a researcher was well-trained, anyone could make a contribution to research ‘even though he be entirely untouched by anything that might be considered the fire of genius’.

As we redefine our sense of what an innovator is and what talents she might possess, we start to see that the industrial revolutions of the past few centuries did not have one single global meaning. The economic reshuffling, social upheaval and environmental exploitation of modern industrial revolutions look very different from the perspective of a person living in Europe than from the perspective of people in Asia or Africa, for example. If we leave the shadow of the cult of the Great White Innovator theory of historical change, we can see farther, and deeper.

The global view shifts the focus from Manchester, Lowell, Detroit and Silicon Valley. It involves accepting that innovation and technological change are more than just making things. Ironically, this allows us to begin to glimpse a more familiar world where activities such as maintenance, repair, use and re-use, recycling, obsolescence and disappearance dominate. A much more global picture, one that includes people whose lives and contributions the Great White Innovator narrative marginalised, comes into view. The Lizzie Otts of the world can take their proper place as participants and contributors.


Every year, I teach a course on the history of technology. At the start of each term, I ask my students at the University of California to finish this sentence: ‘Technology is…?’

The responses are predictable. To most undergraduates, technology means the machines and devices around them – cars, laptops, smartphones and, yes, lightbulbs. At the end of term, I ask them the question again. If I’m lucky and have taught a good course, my students will have come to understand that technology is more than just things. It’s more complex and richer than just the machines around them. It includes things we don’t typically think of as things, such as patents, regulations, professional accreditations and, of course, the institutions that make these things.

Take one example – technical standards. When you go to the hardware store and buy a screw to replace one that’s broken, you probably feel pretty confident that when the label says a ‘3/8 metal screw with 32 threads per inch’, that’s what you’re getting. That’s because American and European bureaucrats and engineers worked for decades to establish standards. Without these, interchangeable parts and global trade would have been practically impossible. Largely ignored, often invisible, standards created stability in technological systems. Whether it’s screws or shipping containers, standards transformed the novel into the mundane, and made the local into the global.
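The interchangeability that standards buy can be sketched in a few lines of code – a toy model, not any official thread designation or real fastener-standards API; the spec format and helper names here are invented for illustration:

```python
# A minimal sketch of why shared standards enable interchangeable parts.
# The spec string "diameter-threads_per_inch" (e.g. '3/8-32') is illustrative,
# not an official thread designation.

from fractions import Fraction

def parse_spec(spec: str):
    """Parse a 'diameter-threads_per_inch' spec like '3/8-32'."""
    diameter, tpi = spec.rsplit("-", 1)
    return Fraction(diameter), int(tpi)

def interchangeable(screw: str, nut: str) -> bool:
    """Two fasteners mate only if diameter AND thread count agree exactly."""
    return parse_spec(screw) == parse_spec(nut)

def pitch_mm(spec: str) -> float:
    """Thread pitch in millimetres: 1 inch / threads-per-inch, 25.4 mm per inch."""
    _, tpi = parse_spec(spec)
    return 25.4 / tpi

print(interchangeable("3/8-32", "3/8-32"))  # True
print(interchangeable("3/8-32", "3/8-24"))  # False: same diameter, wrong pitch
print(round(pitch_mm("3/8-32"), 3))         # 0.794 mm between threads
```

The point of the sketch is the equality check: once everyone parses a spec the same way, compatibility stops being a negotiation between workshops and becomes a lookup.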

Making standards wasn’t about making new material objects exactly. Establishing standards meant building consensus via some sort of political process. For screw threads – a mundane, possibly quite boring example – this required a series of national and international meetings, and input from professional engineering societies (itself something of an ‘innovation’ as engineers in the US and overseas began to organise themselves politically). The goal wasn’t disruption and moving fast but rather reaching agreement and creating technological stability. This political engineering sometimes meant overcoming complaints that the standards promoted by large companies such as AT&T stifled innovation and further centralised corporate power. It took the action of national organisations to override resistance. In 1924, the president of the American Standards Association argued that standards were ‘the liberator’ that relegated problems that had already been solved to the realm of the routine.

As political artefacts, standards embody certain ideologies. For the internet, it is an aspiration towards openness – open systems, open access, open source. In the US, this ideology has deep historical roots. Some ideas inherent in this openness can be traced from the civil liberties driving resistance towards England’s Stamp Act in the mid-18th century to 20th-century ideals of open societies as alternatives to fascist and communist regimes. The philosopher Langdon Winner argued in 1980 that artefacts have politics, beliefs and assumptions about the world and society that are embedded and written into their very fabric.

As a result, technical standards – the very ‘things’ that allow my laptop and your iPhone to seamlessly (more or less) connect to networks as we move about the planet – require the International Organization for Standardization (ISO), as well as recognition and cooperation from state agencies such as the US Federal Communications Commission or the International Telecommunication Union. Techno-libertarians might claim ‘I made it’ but the reality is that, without international standards, whatever they made wouldn’t work very well.


Core ideas and beliefs are additional ‘things’ that underpin our technological world. Central among them is a pervasive ideology – the quest for efficiency – that runs through past and present industrial revolutions. The pursuit of greater efficiency and rational operation flowed from the automatic flour mill that Oliver Evans patented in 1790 to the stopwatch-obsessed scientific managers who applied their techniques to the management of factory and home. Frederick Winslow Taylor’s quest for the ‘one best way’ continued into 1960s-era modernisation plans for the world’s poor regions. Capitalist and communist systems alike embraced it, competing to outdo one another in productivity and efficiency. The same ideal burns bright in today’s descriptions of a forthcoming ‘Fourth Industrial Revolution’, where the cyber and physical worlds are linked.

At the beginning of Joseph Conrad’s novel Heart of Darkness (1902), Marlow holds forth on what distinguishes the British empire from its predecessors or rival imperialists. ‘What saves us is efficiency,’ he claims, ‘the devotion to efficiency.’ Conrad wrote his book when machines were the measure of a culture. Efficiency enabled the civilised to control the savage. A beacon for industrial revolutions, a devotion to efficiency illuminated the path from the waterwheel to social control and, in Britain’s case, to an unprecedented global empire.

Efficiency, therefore, is not some timeless universal value but something grounded deeply in particular historical circumstances. At various times, efficiency was a way of quantifying machine performance – think: steam engines – and an accounting principle coupled to the new applied sciences of mechanics and thermodynamics. It was also about conservation and stability. By the early 20th century – the apogee of Taylorism – experts argued that increases in efficiency would realise the full potential of individuals and industries. Dynamism and conservatism worked together in the pursuit of ever-greater efficiency.

But a broad look at the history of technology plainly shows that other values often take precedence over efficiency, even in the modern era. It would, for example, offer several advantages in efficiency if, instead of every apartment or home having its own kitchen, multiple families shared a communal kitchen, and indeed in some parts of the world they do. But in the prevalent ideology of domesticity, every family or even single person must have their own kitchen, and so it is.

Nor, despite what Silicon Valley-based techno-libertarians might argue, does technological change automatically translate to increased efficiency. Sometimes, efficiency – like the lone eccentric innovator – is not wanted. In the 1960s, for instance, the US military encouraged metal-working firms, via its contracting process, to adopt expensive numerically controlled machine tools. The lavish funding the Department of Defense devoted to promoting the technology didn’t automatically yield clear economic advantages. However, the new machines – ones that smaller firms were hard-pressed to adopt – increased centralisation of the metalworking industry and, arguably, diminished economic competition. Meanwhile, on the shop floor, the new manufacturing innovations gave supervisors greater oversight over production. At one large manufacturing company, numerical control was referred to as a ‘management system’, not a new tool for cutting metal. Imperatives besides efficiency drove technological change.

The history of technological change is full of examples of roads not taken. There are many examples of seemingly illogical choices made by firms and individuals. This shouldn’t surprise us – technological change has always been a deep and multilayered process, one that unfolds in fits and starts and unevenly in time and space. It’s not like the ‘just so stories’ of pop history and Silicon Valley public relations departments.

Although technology is most assuredly not just things, there’s no denying its fundamental materiality. The physical reality of technologies settles like sediment. Over time, technologies, like mountains or old cities, form layers that a geologist might conjure and a historian can try to understand. Technologies stack. Look at this painting:

 American Progress (1872) by John Gast

Here, liberty glides forward across the North American continent. Settlers follow in her wake. Natives and nature scatter before her. She holds a telegraph cable and unspools it alongside the tracks of an advancing railway. Seen one way, this is a portrait of American manifest destiny. Seen another, it’s a vivid example of how interdependent this era’s transportation and communication systems were. The pattern isn’t unique to the US. From 19th-century steamship routes and submarine telegraph cables to today’s data communication systems, the same paths are traced again and again, each new system layered on top of the last.


As the historian of computing Nathan Ensmenger puts it, geography shapes technology and vice versa. In the early 20th century, the Southern Pacific was one of the largest railroad companies in the US. By 1930, the company and its subsidiaries operated more than 13,000 miles of track. In the 1970s, a unit of Southern Pacific maintained a series of microwave communication towers along its railway lines. Microwave communications gave way to a network of fibre-optic cables laid along railway tracks. Around 1978, the Southern Pacific Communications Company began providing a long-distance phone service. When this split from the larger railroad company, the firm needed a new name. The choice was Southern Pacific Railroad Internal Network Telecommunications. With its original infrastructure built on 19th-century railroad lines, SPRINT became one of the largest wireless service providers in the US through incremental change and layers built on top of layers.

As they layer and stack, technologies persist over time. For instance, 19th-century Japan was a world where steam and sail, railroads and rickshaws all shared common space. Industrial revolutions were distributed unequally in place and time. In the Second World War, the most common transport for the German army wasn’t tanks and other motorised vehicles but horses. The technological world wasn’t flat. This is the world, still, today. It is lumpy and bumpy, with old and new technologies accumulating on top of and beside each other.

Our prevailing focus on the shock of the technological new often obscures or distorts how we see the old and the preexisting. It’s common to hear that the 19th-century telegraph was the equivalent of today’s internet. In fact, there’s a bestseller about it, The Victorian Internet (1998) by Tom Standage. Except the comparison doesn’t hold. Sending telegrams 100 years ago was too expensive for most people. For decades, the telegraph was a pricey, elite technology. What was truly innovative for the majority of people c1900 was cheap postage. During the heyday of the so-called Victorian internet, transoceanic postal systems made communication cheap, reliable and fast, and the flow of information grew significantly more accessible and democratic. Although hard to imagine today, bureaucrats and business leaders alike spoke about cheap postage in the laudatory terms we now hear applied to emerging technologies. By not seeing these older technologies in the past, we stand in danger of ignoring the value and potential of technologies that exist now in favour of those about to be. We get, for instance, breathless stories about Musk’s Hyperloop while neglecting to build public transport systems based on existing, proven technologies, or even to maintain the ones we have.


If we maintain a narrow and shallow view of innovation, notions of making (new) stuff too easily predominate. In the 1880s, the insurance executive-turned-entrepreneur George Eastman and his colleagues invented new types of photographic film. This film was easier to use and develop but, still, sales were stagnant. Then Eastman had the idea of going for the untapped market of people who wanted to try photography but found it intimidating. In 1888, Eastman’s company introduced the Kodak camera with the slogan: ‘You press the button, we do the rest.’ For $25 – a large sum at the time – one could buy a camera preloaded with 100 exposures. When done, the amateur photographer simply sent the camera to Eastman Kodak where the film was removed and processed, while the developed pictures, along with the camera re-loaded with fresh film, were sent back. More than inventing a new camera, Eastman’s company invented a new community of users – amateur photographers. And, of course, Eastman’s entrepreneurial initiative would have been impossible without the existence of a robust government-created postal network. His system stacked on top of an existing one, just as much of today’s ‘disruptive innovation’ relies on the internet.

Today, this same narrowness persists in popular perceptions of what a ‘technology company’ is. As Ian Bogost recently noted in The Atlantic, the ‘technology’ in the tech sector is typically restricted to computer-related companies such as Apple and Alphabet while the likes of GE, Ford or Chevron are overlooked. This is absurd. Surely Boeing – which makes things – is a ‘tech company’, as is Amazon, which delivers things using Boeing’s things. Revising our sense of what technology is – or who does innovation – reshapes and improves our understanding of what a technology company is.

One cause of this confusion, I believe, stems from our decades-long fascination with Silicon Valley: once a romance, it now has all the hallmarks of a dysfunctional relationship. Just as ‘computer’ is a synecdoche for ‘technology’, Silicon Valley has come to reflect a certain monoculture of thought and expression about technology. One must tread carefully here, of course. Just as the medieval Catholic Church or the Cold War Kremlin were not monolithic entities, there is not one single Silicon Valley. Rather, it’s a complex assemblage of workers, managers, investors, engineers, et al. Unfortunately, some technology pundits ignore this diversity and reduce Silicon Valley to a caricature landscape of disruptive startups.

Ironically, many high-tech intellectuals present an extreme perspective of technology that rejects its ‘thinginess’. A persistent flaw in today’s digital boosterism is forgetting that all the stuff that makes the internet and the web work is actually made of something – silicon, plastic, rare-earth minerals mined in Bolivia or China. The Foxconn workers in Shenzhen who assemble iPhones and other high-tech devices certainly see it that way. Popular terminology – the ‘Cloud’ being the most pernicious – obscures the undeniable (but not all-encompassing) materiality of technology. So do maps of the internet that represent its complex physical infrastructure as a network of disembodied nodes and flowcharts.

Perhaps most simply, what you will almost never hear from the tech industry pundits is that innovation is not always good. Crack cocaine and the AK-47 were innovative products. ISIS and Los Zetas are innovative organisations. Historians have long shown that innovation doesn’t even always create jobs. It sometimes destroys them. Automation and innovation, from the 1920s through the 1950s, displaced tens of thousands of workers. Recall the conflict between Spencer Tracy (a proponent of automation) and Katharine Hepburn (an anxious reference librarian) in the film Desk Set (1957).

And what of the broader societal benefits that innovation brings? In Technological Medicine (2009), Stanley Joel Reiser makes a compelling case that, in the world of healthcare, innovation can bring gains and losses – and the winners are not always the patients. The innovation of the artificial respirator, for example, has saved countless lives. It has also opened new ethical, legal and policy debates over, literally, the meaning of life and death. And there are real questions about the ethics of resource expenditure in medical innovation. Can spending large amounts pursuing innovative treatments or cures for exotic, rare diseases be ethical when the same monies could without question save millions of people afflicted with common, treatable conditions?


It’s unrealistic to imagine that the international obsession with innovation will change any time soon. Even histories of nation-states are linked to narratives, rightly or wrongly, of political and technological innovation and progress. To be sure, technology and innovation have been central drivers of the US’s economic prosperity, national security and social advancement. The very centrality of innovation, which one could argue has taken on the position of a national mantra, makes a better understanding of how it actually works, and its limitations, vital. Then we can see that continuity and incrementalism are a much more realistic representation of technological change.

At the same time, when we step out of the shadow of innovation, we get new insights about the nature of technological change. By taking this broader perspective, we start to see the complexity of that change in new ways. It’s then we notice the persistent layering of older technologies. We appreciate the essential role of users and maintainers as well as traditional innovators such as Bill Gates, Steve Jobs and, yes, Bill and Lizzie Ott. We start to see the intangibles – the standards and ideologies that help to create and order technology systems, making them work, at least most of the time. We start to see that technological change does not demand that we move fast and break things. Understanding the role that standards, ideologies and institutions – the non-thing aspects of technology – play makes it possible to see how technological change actually happens, and who makes it happen. It makes it possible to understand the true topography of technology and the world today.
