Everyone is panicking about the death of reading. The statistics look damning: the share of Americans who read for pleasure on an average day has fallen by more than 40 per cent over the past 20 years, according to research published in iScience this year. The OECD calls the 2022 decline in educational outcomes ‘unprecedented’ across developed nations. In the OECD’s latest adult-skills survey, Denmark and Finland were the only participating countries where average literacy proficiency improved over the past decade. Your nephew speaks in TikTok references. Democracy itself apparently hangs by the thread of our collective attention span.
This narrative has a seductive simplicity. Screens are destroying civilisation. Children can no longer think. We are witnessing the twilight of the literate mind. A recent Substack essay by James Marriott proclaimed the arrival of a ‘post-literate society’ and invited us to accept this as a fait accompli. (Marriott also writes for The Times.) The diagnosis is familiar: technology has fundamentally degraded our capacity for sustained thought, and there’s nothing to be done except write elegiac essays from a comfortable distance.
I spend my working life in a university library, watching how people actually engage with information. What I observe doesn’t match this narrative. Not because the problems aren’t real, but because the diagnosis is wrong.
The declinist position rests on a category error: treating ‘screen culture’ as a unified phenomenon with inherent cognitive properties. As if the same device that delivers both algorithmically curated rage-bait and the complete works of Shakespeare were itself the problem, rather than how we choose to use it.
Consider a simple observation. The same person who cannot get through a novel can watch a three-hour video essay on the decline of the Ottoman Empire. The same teenager who supposedly lacks attention span can maintain focus on a game for hours while parsing a complex narrative across multiple storylines, coordinating with teammates and adapting strategy in real time. That’s not inferior cognition. It’s different cognition. And the difference isn’t the screen. It’s the environment.
The dominant platforms have been deliberately engineered to fragment attention in service of advertising revenue
Gloria Mark, Chancellor’s Professor of Informatics at the University of California, Irvine, has tracked attention spans on screens for two decades. In 2004, people averaged two and a half minutes on any screen before switching tasks. By 2016, that had fallen to 47 seconds. This is frequently cited as evidence that screens inherently fragment attention. But look closer at what Mark’s research actually shows. The fragmentation correlates not with screens in general but with specific design patterns: notification systems, variable reward schedules, infinite scroll. These are choices made by specific companies for specific economic reasons. They are not inherent properties of the medium.
Peer-reviewed research demonstrates that social media platforms exploit variable reward schedules – the same psychological mechanism that makes gambling addictive. Users don’t know what they’ll find when they open an app; they might see hundreds of likes or nothing at all. This unpredictability acts as a powerful reinforcement signal, often discussed in terms of dopamine ‘reward prediction error’, keeping people checking habitually. This isn’t because screens are inherently attention-destroying. It’s because the dominant platforms have been deliberately engineered to fragment attention in service of advertising revenue.
We have been here before. Not just once, but repeatedly, in a pattern so consistent it reveals something essential about how cultural elites respond to changes in how knowledge moves through society.
In the late 19th century, more than a million boys’ periodicals were sold per week in Britain. These ‘penny dreadfuls’ offered sensational stories of crime, horror and adventure that critics condemned as morally corrupting and intellectually shallow. Already by the 1850s, up to 100 publishers were churning out this penny fiction. Victorian commentators wrung their hands over the degradation of youth, the death of serious thought, the impossibility of competing with such lurid entertainment.
But walk backwards through history, and the pattern repeats with eerie precision. In the 18th and early 19th centuries, novel-reading itself was the existential threat. The terms used were identical to today’s moral panic: ‘reading epidemic’, ‘reading mania’, ‘reading rage’, ‘reading fever’, ‘reading lust’, ‘insidious contagion’. The journal Sylph worried in 1796 that women ‘of every age, of every condition, contract and retain a taste for novels … the depravity is universal.’
Late-Victorian schooling became entangled with anxiety about what working-class children were reading
The predicted disasters were apocalyptic. J W Goethe’s epistolary novel The Sorrows of Young Werther (1774) was blamed for triggering copycat suicides across Europe. Johann Peter Frank’s six-volume A System of Complete Medical Police (1779-1819) listed ‘reading of poisonous novels’ among the causes of suicide. Arthur Schopenhauer in 1851 described ‘bad books’ as ‘intellectual poison’. If the manipulative potential of novels were truly that great, as one historian dryly notes, women would have been eloping in hordes.
They didn’t. The disaster never materialised. But the panic served its purpose.
What’s revealing about these panics is who was doing the panicking and why. In 1533, Thomas More denounced Protestant texts as ‘deadly poisons’ threatening to infect readers with ‘contagious pestilence’. Today, the Cato Institute’s research on historical literacy notes that in the 17th and 18th centuries, ‘some people considered literacy’s spread subversive or corrupting. The expansion of literacy from a tiny elite to the general population scared a lot of conservatives.’
Here’s the detail that crystallises the pattern: in England and Wales, compulsory attendance was formalised by the 1880 Education Act, and late-Victorian schooling became entangled with anxiety about what newly literate working-class children were reading – with ‘penny dreadfuls’ and ‘reading trash’ a recurring target of cultural commentary and educational concern. The panic wasn’t really about literacy declining. It was about literacy escaping elite control.
Go back further still, to the foundational panic. Socrates worried that writing would ‘produce forgetfulness in the minds of those who learn to use it, because they will not practise their memory.’ He feared readers would ‘seem to know many things, when they are for the most part ignorant’, and warned about confusion and moral disorientation. The irony, as the scholar Walter Ong noted in 1985, is that the weakness in Plato’s position is that he put these misgivings about writing into writing.
The pattern extends into the 20th century with mechanical precision. In 1941, the American paediatrician Mary Preston claimed that more than half of the children she studied were ‘severely addicted’ to radio and movie crime dramas, consumed ‘much as a chronic alcoholic does drink’. The psychiatrist Fredric Wertham testified before the US Congress that comics cause ‘chronic stimulation, temptation and seduction’, as he put it in his book Seduction of the Innocent (1954), and called them more dangerous than Hitler. Thirteen American states passed restrictive laws. The comics historian Carol Tilley later exposed the flaws in Wertham’s research, but by then the damage was done.
Amy Orben, a psychologist studying technology panics, identifies the ‘Sisyphean cycle’: each generation fears new media will corrupt youth; politicians exploit these fears while deflecting from systemic issues like inequality and educational underfunding; research begins too late; and by the time evidence accumulates showing mixed effects dependent on context, a new technology emerges and the cycle restarts.
The penny dreadfuls didn’t follow you into your bedroom at midnight, vibrating with notifications
How do we know these panics were exaggerated? The predicted disasters never arrived. Adolescent aggression continued after comic-book restrictions – because comics weren’t the cause. Novels didn’t trigger mass elopements. Radio didn’t destroy children’s capacity for thought. Each panic uses identical rhetoric: addiction metaphors, moral corruption, passive victimhood, apocalyptic predictions. Each time, the research eventually shows complex effects mediated by content, context and individual differences. And, each time, when the disaster fails to materialise, attention simply shifts to the next technology.
These publications and technologies existed alongside serious thought. The penny dreadfuls didn’t prevent Charles Dickens, John Stuart Mill or Charles Darwin from flourishing. What’s different now isn’t the existence of shallow content, which has always been abundant. What’s different is the existence of delivery mechanisms actively engineered to prevent the kind of attention that serious thought requires. The penny dreadfuls didn’t follow you into your bedroom at midnight, vibrating with notifications.
This distinction matters because it changes everything about the available responses. If the problem is screens themselves, then we need cultural revival, a return to books, perhaps even a neo-Luddite retreat from technology. But if the problem is design, then we need design activism and regulatory intervention. The same screens that fragment attention can support it. The same technologies that extract human attention can cultivate it. The question is who designs them, for what purposes, and under what constraints.
In the library, I watch people navigate information in ways that would have seemed impossible to previous generations. A research question that once required weeks of archival work now takes hours. But more than efficiency has changed. The nature of synthesis itself has transformed.
Ideas now move through multiple channels simultaneously. A documentary provides emotional resonance and visual evidence. Its transcript enables the precision needed to locate a specific argument. A newsletter unpacks the implications. A podcast allows the ideas to marinate during a commute. Each mode contributes something the others cannot. This isn’t decline. It’s expansion.
What strikes me most is the difference between people who’ve learned to construct what I call ‘containers for attention’ – bounded spaces and practices where different modes of engagement become possible – and those who haven’t. The distinction isn’t about intelligence or discipline. It’s about environmental architecture. Some people have learned to watch documentaries with a notebook, listen to podcasts during walks when their minds can wander productively, read physical books in deliberately quiet spaces with phones left behind. They’re not rejecting technology. They’re choreographing it.
Literacy is about something deeper: the capacity to construct and navigate environments where understanding becomes possible
Others are drowning, attempting sustained thought in environments engineered to prevent it. They sit with laptops open, seven tabs competing for attention, notifications sliding in from three different apps, phones vibrating every few minutes. They’re trying to read serious material while fighting a losing battle against behavioural psychology weaponised at scale. They believe their inability to focus is a personal failure rather than a design problem. They don’t realise they’re trying to think in a space optimised to prevent thinking.
This is where my understanding of literacy has fundamentally shifted. I used to believe, as I was taught, that literacy was primarily about decoding text. But watching how people actually learn and think has convinced me that literacy is about something deeper: the capacity to construct and navigate environments where understanding becomes possible.
Consider those who flourish with audiobooks but struggle with printed text. For years, educators told them they had learning disabilities, by which they meant: disabilities that prevented learning through the one true method we recognise. But they don’t have learning disabilities. The instruction has a disability – it can’t accommodate different neurological architectures. Give them the same text as audio, and suddenly the ‘disability’ vanishes. The ideas that were opaque on the page become transparent in sound. Not because audio is superior to text, but because particular neurologies process spoken language more fluently than written symbols.
Research in universal design for learning has demonstrated this definitively. The neuropsychologist David H Rose, co-founder of the Center for Applied Special Technology, notes that ‘each brain is made of billions of interconnected neurons that form unique pathways. Like fingerprints, no two brains are alike.’ Studies show that ‘the need to overcome learning disabilities raises the focus on the “disability of the instruction”, not only the learning disability of the learner.’ When we insist on a single mode of engagement, we’re not identifying who can think and who cannot. We’re identifying who happens to think in the particular way our systems recognise.
Libraries are adapting. We’ve created what I call a ‘habitat for multimodal literacy’. The silent reading room remains, sacred and inviolate. But it’s been joined by maker spaces where people think with their hands, where building physical models while running computer simulations reveals things neither mode alone could teach. Recording studios where oral traditions find new life, where explaining ideas aloud to an imagined audience requires different cognitive work than writing an essay, often producing more sophisticated analysis. Collaborative zones where knowledge emerges through dialogue, where ideas stuck in one person’s head become visible and available for others to extend, challenge, refine.
These aren’t concessions to declining attention spans. They’re recognitions that human understanding has always been richer than any single medium could contain. We’re not abandoning literacy. We’re discovering what literacy meant all along: not just the ability to decode symbols on a page, but the capacity to move fluently between all the ways humans encode meaning.
The people who cannot sit through novels aren’t broken. They’re adapted to an environment we built
The pattern I observe repeatedly: people who ‘can’t focus’ on traditional texts can maintain extraordinary concentration when working across modes. They struggle with philosophy textbooks but thrive when they can listen to lectures while taking visual notes, discuss ideas in study groups, and write while pacing. This isn’t deficit. It’s difference. And our responsibility is to build environments where that difference becomes an asset rather than an obstacle.
But expansion without architecture is chaos, and that’s where we’ve stumbled. The people who cannot sit through novels aren’t broken. They’re adapted to an environment we built. We hand them infinite information and wonder why they drown. We give them tools designed to fracture attention and blame them when their attention fractures. We build a world that profits from distraction, then pathologise the distracted.
The cognitive operations that the declinists valorise – sustained attention, logical development, revision, the capacity to build complex arguments – aren’t properties of paper. They’re properties of writing as a practice. Immanuel Kant didn’t need bound paper specifically to write the Critique of Pure Reason (1781); he needed a medium that allowed him to externalise thought, revise it, and develop it over time. Digital documents do this as effectively as paper. The problem is that most digital engagement isn’t writing-based. It’s consumption of algorithmically curated feeds optimised by sophisticated behavioural engineering to maximise time-on-platform.
We haven’t become post-literate. We’ve become post-monomodal. Text hasn’t disappeared; it’s been joined by a symphony of other channels. Your brain now routinely performs feats that would have seemed impossible to your grandparents. You parse information simultaneously across text, image, sound and motion. You navigate conversations that jump between platforms and formats. You synthesise understanding from fragments scattered across a dozen different sources.
The real problem isn’t mode but habitat. We don’t struggle with video versus books. We struggle with feeds versus focus. One happens in a casino designed for endless pull-to-refresh, the other in an ecosystem designed for contemplation.
Reading worked so well for so long not because text is magic, but because books came with built-in boundaries. They end. Pages stay still. Libraries provide quiet. These weren’t features of literacy itself but of the habitats where literacy lived. We need to rebuild those habitats for a world where meaning travels through many channels at once.
This is where libraries become more essential, not less. The library of the future isn’t a warehouse for books. It’s a gymnasium for attention. It’s where communities go to practise different modes of understanding. The reading room remains sacred, but it’s joined by recording booths, visualisation labs and collaborative spaces where people learn to translate ideas between formats. Libraries become the place where you learn not just to read, but to move fluently between all the ways humans share meaning.
To name the actors responsible and then treat the outcome as inevitable is to provide them cover
What troubles me most about the declinist position is not its diagnosis but its conclusion. The commentators who lament the post-literate society often identify the same villains I do. They recognise that technology companies are, in Marriott’s words, ‘actively working to destroy human enlightenment’, that tech oligarchs ‘have just as much of a stake in the ignorance of the population as the most reactionary feudal autocrat.’
And then they surrender. As Marriott says: ‘Nothing will ever be the same again. Welcome to the post-literate society.’
This is the move I cannot follow. To name the actors responsible and then treat the outcome as inevitable is to provide them cover. If the crisis is a force of nature, ‘screens’ destroying civilisation like some technological weather system, then there’s nothing to be done but write elegiac essays from a comfortable distance. But if the crisis is the product of specific design choices made by specific companies for specific economic reasons, then those choices can be challenged, regulated, reversed.
The fatalism, however beautifully expressed, serves the very interests it condemns. The technology companies would very much like us to believe that what they’re doing to human attention is simply the inevitable result of technological progress rather than something they’re doing to us, something that could, with sufficient political will, be stopped.
Your inability to focus isn’t a moral failing. It’s a design problem. You’re trying to think in environments built to prevent thinking. You’re trying to sustain attention in spaces engineered to shatter it. You’re fighting algorithms explicitly optimised to keep you scrolling, not learning.
The solution isn’t discipline. It’s architecture. Build different defaults. Create different spaces. Establish different rhythms. Make depth as easy as distraction currently is. Make thinking feel as natural as scrolling currently does.
What if, instead of mourning some imaginary golden age of pure text, we got serious about designing for depth across all modes? Every video could come with a searchable transcript. Every article could offer multiple entry points for different levels of attention. Our devices could recognise when we’re trying to think and protect that thinking. Schools could teach students to translate between modes the way they once taught translation between languages.
Books aren’t going anywhere. They remain unmatched for certain kinds of sustained, complex thinking. But they’re no longer the only game in town for serious ideas. A well-crafted video essay can carry philosophical weight. A podcast can enable the kind of long-form thinking we associate with written essays. An interactive visualisation can reveal patterns that pages of description struggle to convey.
The choice isn’t between books and screens. The choice is between intentional design and profitable chaos
The future belongs to people who can dance between all modes without losing their balance. Someone who can read deeply when depth is needed, skim efficiently when efficiency matters, listen actively during a commute, and watch critically when images carry the argument. This isn’t about consuming more. It’s about choosing consciously.
We stand at an inflection point. We can drift into a world where sustained thought becomes a luxury good, where only the privileged have access to the conditions that enable deep thinking. Or we can build something unprecedented: a culture that preserves the best of print’s cognitive gifts while embracing the possibilities of a world where ideas travel through light, sound and interaction.
The choice isn’t between books and screens. The choice is between intentional design and profitable chaos. Between habitats that cultivate human potential and platforms that extract human attention.
The civilisations that thrive won’t be the ones that retreat into text or surrender to the feed. They’ll be the ones that understand a simple truth: every idea has a natural form, and wisdom lies in matching the mode to the meaning. Some ideas want to be written. Others need to be seen. Still others must be heard, felt or experienced. The mistake is forcing all ideas through a single channel, whether that channel is a book or a screen.
Your great-grandchildren won’t read less than you do. They’ll read differently, as part of a richer symphony of sense-making. Whether that symphony sounds like music or noise depends entirely on the choices we make right now about the shape of our tools, the structure of our schools, and the design of our days.
The elegant lamenters offer a eulogy. I’m more interested in a fight.