From digital failures in the UK’s justice system to the Post Office tragedy and widespread anxiety about artificial intelligence across the world, the pace of technological change is testing the capacity of society’s core institutions to adapt and adopt.
Over half a century ago, futurist Alvin Toffler’s book Future Shock predicted a transformation to super-industrialism: population explosion, information overload in daily life, and hordes of nomadic workers.
The writer’s forecasting record is mixed. While he was prescient about constant technological and labour churn, with short-lived products and what we’d now call the gig economy, he got it wrong on demographics, where public concerns now centre on populations that are getting older and smaller rather than an overcrowded planet.
But his real contribution lies in recognising the importance of velocity in technological change, not just its destination. He popularised the term ‘future shock’ to mean the perception of ‘too much change in too short a period of time’.
Toffler, who died in 2016, might have been surprised to see how much of society over the last decade or so has been rooted in persistence and nostalgia, not dizzying change. In the long 2010s, culture became increasingly derivative. Remakes and sequels dominated cinema box offices, decades-old musical hits were revived on TikTok, and throwback fashion cycled back. Even visions of the future were rooted in retro aesthetics.
In the second half of the 2020s, the rapid and disorienting development that Future Shock describes seems to be returning. Driverless cars are roaming the streets of San Francisco, Chongqing and soon London, while chatbots powered by artificial intelligence (AI) are coding, comforting and seducing. Today, there is a broad sense of both possibility and unease.
Last week at the Economics Observatory, four articles traced the human consequences of the rocky integration of technology into public life. The first three tracked a shift that is now in its maturity: the digitalisation of business and services, with two calamities and a hard-fought success. The final piece looked at public reactions to a trend with the velocity and disruption that Toffler identified: the swift development and integration of advanced AI.
Trust in digital systems: the state, the courts and the postal service
In the first piece in our series, Joe Tomlinson (King’s College London and the Institute for Fiscal Studies, IFS) examines recent serious failures in the digital systems of HM Courts and Tribunals Service (HMCTS). A cyber-attack on the Legal Aid Agency exposed personal details of vulnerable people and other data going back to 2007. Soon after, a leaked internal report suggested that HMCTS covered up software bugs that allegedly caused evidence to go missing. (HMCTS disputes this account and says that an internal investigation found ‘no evidence’ of any impact on case outcomes.)
The failures came after years of budget cuts, with Ministry of Justice spending down 24% per capita since 2008, raising serious questions about the country's capacity to deliver justice.
The Post Office Horizon scandal offers a still-darker example of the human consequences of failures to recognise and take accountability for problems with digital systems. Over 700 sub-postmasters were wrongfully prosecuted based on faulty accounting software, with 236 imprisoned.
Here at the Observatory, Jay Dwyer-Joyce (University College London) explains how blind faith in technology, combined with a legal presumption of reliability, led to an illusion of infallibility that ruined lives.
Both these examples show the dangers of treating information technology (IT) as a purely technical concern. They were consequences of the decades-long IT transformation, in which capabilities and services have been automated without an equivalent investment in their stewardship. In the case of the Post Office, deploying disruptive systems without the scrutiny they needed proved fatal for a number of people.
The next article in our series provides a counterpoint: a success informed by the digitalisation failures that came before it. Tom Loosemore, who started and scaled the UK's Government Digital Service, recounts the creation of GOV.UK, the 'boring' but effective platform that consolidated 2,500 fragmented government websites. As well as an internal alignment of incentives, key to this transformation was transparency: the GOV.UK team worked in the open, publishing source code for scrutiny, with features and limitations on public display.
These three examples, two failures and one success, emerged from the same wave of digitalisation. They are separated not by the technology employed but by the institutions surrounding it. Horizon and HMCTS show what happens when systems are deployed without the understanding or resources needed for accountability to keep up.
In contrast, GOV.UK succeeded in part by building systems simple enough to be trustworthy, and transparent enough to be scrutinised. The question now is whether society can learn from the last wave in time for the next.
AI: the next future shock?
My own piece on attitudes to AI reveals public concerns akin to Toffler’s in the development and integration of generative AI.
In a poll published last autumn by Pew Research Center, respondents reported widespread anxiety around AI, with concerns outweighing excitement in every country surveyed except Israel and South Korea. People report distrust of AI almost everywhere – from financial services to sports and travel. Workers worldwide worry that AI will replace their jobs, and one in five Britons even see the technology as the most likely cause of human extinction.
The survey results also reflect the same divisions that Future Shock anticipated. As technological whiplash alienates older generations, surveys of AI sentiment show the old as the most worried. And having secured their prosperity, the richest societies report the greatest imbalance of perceived risks over benefits, while fast-growing developing economies see AI as an opportunity.
Keeping pace
Toffler's insight wasn't about where we are heading – he was often wrong about that – it was that the rate of change could outpace our ability to adapt. The examples of Horizon and HMCTS show what this can look like: budgets cut, blind faith placed in new systems, and accountability left behind.
Anxiety over AI reflects a society burned by past failures. Whether we have learned from these experiences remains to be seen.
Where can I find out more?
- Will digital failures undermine trust in the justice system?
- Trust and technology: what went wrong with the Post Office?
- The UK government’s digital transformation: how did it come about?
- Who trusts AI?
Who are experts on this question?
- Joe Tomlinson
- Jay Dwyer-Joyce
- Tom Loosemore