Trust and technology: what went wrong with the Post Office?

The Post Office scandal brought into sharp focus the risks of placing blind faith in technology. To learn from this troubling episode, and to start rebuilding public trust, policy-makers must challenge the presumption within the legal system that machines are infallible.

Over the past few years, the Post Office has found itself at the heart of one of the most devastating miscarriages of justice in British legal history. Although the scandal was thrust into the national spotlight in early 2024 with ITV’s drama Mr Bates vs The Post Office, its roots stretch back over two decades, and its consequences have been severe for hundreds of innocent people.

Beginning in 1999, the Post Office prosecuted more than 700 sub-postmasters and employees for theft, fraud and false accounting. These charges were based on discrepancies flagged by ‘Horizon’, the organisation’s accounting system. But it later emerged that Horizon was riddled with bugs and flaws – errors that could themselves have caused the financial shortfalls it reported.

Despite mounting evidence of the Horizon system’s poor reliability, the Post Office pursued legal action against its own sub-postmasters. Between 1999 and 2015, more than 900 people were wrongfully prosecuted on the basis of Horizon data (around 700 of them by the Post Office itself, the rest by other agencies), with 236 sent to prison. Many others were forced to repay fictitious shortfalls or had their contracts terminated, leading to bankruptcies, family breakdowns and at least thirteen suicides.

The scandal has since been described as one of the most widespread miscarriages of justice in British history. A statutory inquiry revealed that senior staff at the Post Office and at Fujitsu (the Japanese company behind the Horizon system) knew, or should have known, about the system’s flaws, yet continued to rely on its data in court. The fallout has exposed deep systemic failures in the UK’s legal and regulatory frameworks, prompting calls for reform and leaving a lasting scar on public trust in the justice system.

Presiding over the civil group litigation brought by 555 sub-postmasters (Bates and others v Post Office Ltd), Mr Justice Fraser concluded in 2019 that it was indeed possible for errors in the Horizon software to have caused the apparent shortfalls in branch accounts, casting serious doubt on the legitimacy of the original convictions. This judgement marked a turning point in the story. In March 2020, the Criminal Cases Review Commission (CCRC) referred 39 cases to the Court of Appeal, all of which were later quashed. The CCRC acknowledged that the scale of the injustice could be far greater, estimating that up to 750 cases might require review – an unprecedented figure in British legal history.

Computer reliability and the illusion of infallibility

While the Post Office scandal makes for compelling television, it also raises deeper and more troubling questions: how did this happen? How was it possible for a trusted public institution to prosecute hundreds of employees based on such unreliable evidence?

Several factors contributed to this outcome. One was the stark imbalance of power between the Post Office – a well-resourced, institutionally trusted organisation – and the sub-postmasters, many of whom lacked the means to mount a proper legal defence. Some represented themselves; others accepted plea deals simply to avoid the stress and cost of a prolonged court battle.

Equally troubling was the Post Office’s aggressive prosecutorial stance. Sub-postmasters were falsely told they were the only ones experiencing issues with Horizon, despite widespread discrepancies across branches and internal awareness of the system’s flaws. The organisation pressed ahead with prosecutions even when it knew the evidence was unreliable.

Yet beyond these legal and institutional dynamics lies a subtler, systemic issue: the unquestioning trust placed in computer-generated evidence. Horizon’s outputs were treated as infallible by investigators, lawyers and judges, reflecting a broader societal tendency to place too much trust in digital systems.

This is not an isolated phenomenon. The Horizon scandal is emblematic of a wider challenge facing public institutions: how to responsibly govern the use of data and algorithms in decision making. Similar concerns have emerged in other public bodies, such as the Department for Work and Pensions’ use of machine learning to assess Universal Credit claims, and the Home Office’s deployment of algorithmic tools in immigration decisions. In both cases, critics have warned that opaque systems may reinforce bias, limit accountability and erode public trust.

Recognising these risks, the UK government has begun to implement reforms. The Cabinet Office’s Central Digital and Data Office has launched an algorithmic transparency standard for public sector bodies, requiring clear documentation of how and why algorithms are used, what data they rely on, and the level of human oversight involved. Similarly, the Public Authority Algorithmic and Automated Decision-Making Systems Bill, currently under review in Parliament, proposes independent dispute resolution mechanisms and stronger safeguards against bias and discrimination in automated systems.

The Horizon scandal is not just a failure of technology; it is a failure of governance, ethics and oversight. It underscores the urgent need for public institutions to earn trust not through reputation alone, but through transparency, accountability and a willingness to question the systems they rely on.

The legal presumption of reliability

The UK’s legal framework reinforces this bias towards trusting computers. The courts apply a presumption that:

“In the absence of evidence to the contrary, the courts will presume that mechanical instruments (including computer systems) were in order at the material time.”

This means that unless actively challenged, any data produced by a computer system is assumed to be reliable. In engineering terms, reliability is defined as a system’s ability to perform as required, without failure, for a given time interval under specified conditions. But in practice, the legal presumption can be dangerously simplistic.

In the case of the sub-postmasters, while it was not explicitly cited in the civil or criminal proceedings, this presumption placed an unreasonable burden on individuals to prove that Horizon was faulty – something they were ill-equipped to do. Most lacked the technical expertise to challenge the system’s integrity, and even those who had the knowledge did not have access to the necessary internal data. The Post Office, meanwhile, was under no obligation to demonstrate Horizon’s reliability and was in a position to withhold critical information that might have cast doubt on the system, including known error logs.

This presumption has come under increasing scrutiny. Barrister Stephen Mason, a long-time critic of the principle, has argued for years that courts should treat computer evidence with greater caution. With the spotlight now firmly on the Post Office case, legal reform is gaining momentum. Amendments to this presumption are being actively considered, and it’s likely we’ll see significant changes in the near future – changes that could reshape how digital evidence is treated in British courts.

Trust, transparency and accountability

Transparency in software is essential for trusting its outputs. We may not need to understand every detail of a system’s workings day to day, but when errors occur – especially in legal or financial contexts – access to those workings becomes critical. In the Post Office Horizon scandal, sub-postmasters were accused on the basis of data they could not access or challenge. Crucially, they were not even asking for full system disclosure: their legal teams knew such requests would likely be dismissed as ‘fishing expeditions’ (where a party seeks broad access to documentation in the hope of uncovering relevant evidence). The courts assumed the system was functioning correctly and saw no need to examine its inner workings, making it nearly impossible to trace errors or contest the evidence.

Software used in legal or financial contexts must be held to higher standards. Trust should be built on rigorous testing, transparency and accountability. Mechanisms for auditing and verifying outputs must be embedded into these systems from the outset.

Estonia’s X-Road platform offers a compelling contrast. As the backbone of the country’s e-Government infrastructure, X-Road enables secure, decentralised data exchange across Estonian public and private sectors. It supports services ranging from healthcare to digital voting and is widely regarded as a model for transparent digital governance.

Unlike Horizon, X-Road is built on principles of openness and accountability. Every transaction is logged and traceable, allowing independent audits. Its open-source nature means the codebase is publicly available for inspection and improvement, fostering trust through verifiable transparency.

Horizon’s complexity and secrecy, by contrast, made it impossible for sub-postmasters to understand how their figures were generated or to identify errors. Estonia’s approach empowers users to question and verify system behaviour, and if discrepancies arise, there are mechanisms to investigate and resolve them.

Systems like X-Road show that complexity need not be a barrier to transparency. With thoughtful design, open governance and user empowerment, digital infrastructure can serve the public interest without compromising justice or truth.
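
To make ‘verifiable transparency’ concrete, the sketch below illustrates one common technique for tamper-evident audit logging: each log entry records a cryptographic hash of the entry before it, so any retrospective alteration breaks the chain and can be detected by an independent auditor. It is a minimal, hypothetical example written in Python for illustration only; it does not describe how X-Road (or Horizon) is actually implemented.

```python
import hashlib
import json

def _record_hash(record: dict) -> str:
    # Hash the canonical JSON form of a record (stable key order).
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_entry(log: list, event: dict) -> None:
    # Each entry embeds the hash of the previous entry, chaining them together.
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {"event": event, "prev_hash": prev_hash}
    entry["hash"] = _record_hash({"event": event, "prev_hash": prev_hash})
    log.append(entry)

def verify_log(log: list) -> bool:
    # An independent auditor can recompute every hash; any edited or
    # deleted entry breaks the chain and the check fails.
    prev_hash = "genesis"
    for entry in log:
        expected = _record_hash({"event": entry["event"], "prev_hash": prev_hash})
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

# Example: record two branch transactions, then tamper with the first.
log = []
append_entry(log, {"branch": "A123", "type": "sale", "amount": 120.00})
append_entry(log, {"branch": "A123", "type": "refund", "amount": -15.00})
print(verify_log(log))            # True: the chain is intact
log[0]["event"]["amount"] = 999.00
print(verify_log(log))            # False: the alteration is detectable
```

The point is not the specific code but the design principle: when audit mechanisms like this are built in from the outset, a defendant or an independent auditor can check whether records have been altered, rather than having to take a system’s outputs on faith.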

While X-Road has not yet faced the same level of legal scrutiny as Horizon, and operates under a different governmental framework, it offers a hopeful blueprint. As society becomes increasingly reliant on digital systems, the Horizon scandal serves as a stark reminder of the dangers of misplaced trust, and the need for transparency in technology.

Conclusion: lessons beyond the scandal

The Post Office scandal is a sobering reminder of what happens when institutions place blind faith in technology and fail to listen to the people affected by it. It challenges us to rethink how we treat digital evidence and how we build trust in the systems we rely on.

For the hundreds of sub-postmasters whose lives were upended, justice has come painfully late. Yet their courage in speaking out has sparked a national reckoning – one that may lead to lasting change in how we design, deploy and scrutinise the technologies that shape our lives.

More broadly, the scandal reveals a deeper crisis: the erosion of trust. When digital systems fail and accountability is absent, public confidence suffers – not just in the technology, but in the organisations that use it. As artificial intelligence and algorithmic decision-making become more embedded in governance, finance and law, the risks grow more complex and consequential. We must ensure these systems serve the public interest, not obscure it.

Encouragingly, reforms are underway. The UK is reviewing the legal presumption that computer systems are inherently reliable, a shift that could rebalance the burden of proof in courtrooms. Meanwhile, the introduction of algorithmic transparency standards is helping public bodies document and disclose how automated systems influence decisions. These steps reflect a growing recognition that trust must be earned through transparency, accountability and meaningful oversight.

Where can I find out more?

Who are the experts on this question?

  • Peter Bernard Ladkin
  • Paul Marshall
  • Steven Murdoch
Author: J Dwyer-Joyce
Image: KHellon for iStock