Digital welfare dystopia

By Samuel Woodhams | Digital rights researcher and journalist

Algorithms to determine welfare payments and detect fraud are becoming standard practice around the world. From Manchester to Melbourne, people’s lives are being shaped by secretive tools that determine who is eligible for what, and how much debt is owed.

Although the technology has been around for some time, the outbreak of COVID-19 renewed enthusiasm for the digital welfare state and, for thousands of cash-strapped public bodies, the promise of increased efficiency and lower costs has proven irresistible.

But the tools come with significant hidden costs. They violate our privacy, exacerbate inequality and often get things wrong, with sometimes terrifying consequences.

In short, these tools don’t improve our welfare, they threaten it. And unless we alter their course, we will tumble “zombie-like into a digital welfare dystopia,” as the former United Nations special rapporteur Philip Alston memorably said.

Staff members from Qian Ji Data Co take photos of the villagers for a facial data collection project, which would serve for developing artificial intelligence (AI) and machine learning technology, in Jia county, Henan province, China March 20, 2019. REUTERS/Cate Cadell

Privacy infringing algorithms

The products used in the digital welfare state all operate slightly differently, and the data analysed also varies. They can be used to assess someone’s eligibility for support, determine how much someone receives, and predict whether someone is likely to claim too much. Typically, this means the tools will access information about someone’s employment status, number of children, gender, age, and where they live.

To amalgamate these disparate data points, vendors are tasked with combining several existing databases into one huge dataset containing millions of rows of sensitive information. In doing so, they encourage mass surveillance and discriminatory profiling, while benefiting companies that already threaten everybody’s privacy.
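To picture what that joining and scoring can look like in practice, consider the sketch below. It is deliberately crude and entirely hypothetical: the records, field names, weights and thresholds are invented and describe no real vendor’s product. It simply shows how, once databases are merged, a handful of rules can turn personal attributes, including where someone lives, into a “risk” score.

```python
import pandas as pd

# Entirely hypothetical sketch: invented records, field names, weights and
# thresholds. It shows the general pattern -- merge several databases into
# one profile per person, then score the profile -- not any real product.

benefits = pd.DataFrame({
    "citizen_id": [101, 102, 103],
    "employment_status": ["unemployed", "part-time", "unemployed"],
    "num_children": [2, 0, 3],
})
tax = pd.DataFrame({
    "citizen_id": [101, 102, 103],
    "declared_income": [9500, 14200, 8100],
})
housing = pd.DataFrame({
    "citizen_id": [101, 102, 103],
    "neighbourhood": ["Northside", "Riverview", "Northside"],
})

# Step 1: several existing databases become one dataset of sensitive profiles.
profiles = benefits.merge(tax, on="citizen_id").merge(housing, on="citizen_id")

# Step 2: a crude "fraud risk" score built from those attributes. Note how
# the neighbourhood term acts as a proxy for poverty (and often ethnicity),
# which is how bias in the data becomes bias in the output.
def risk_score(row):
    score = 0.0
    if row["employment_status"] == "unemployed":
        score += 0.3
    if row["declared_income"] < 10000:
        score += 0.3
    if row["neighbourhood"] == "Northside":  # postcode used as a risk factor
        score += 0.4
    return score

profiles["risk"] = profiles.apply(risk_score, axis=1)

# Step 3: anyone above an arbitrary threshold is flagged for investigation.
print(profiles[profiles["risk"] >= 0.6][["citizen_id", "neighbourhood", "risk"]])
```

Even in this toy example, two of the three claimants are flagged purely because of attributes that say nothing about whether they have actually broken any rules.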

The companies involved range from credit-rating giants to specialised data-mining firms. In other words, they are the same data-hungry corporations driving the datafication of public life under surveillance capitalism.

Automated discrimination

Given the sensitive nature of the data analysed, the use of automated decision-making tools in this context carries an obvious risk of discrimination. Thankfully, there is growing awareness that algorithms can reflect and entrench the biases contained within existing data sets.

As a 2019 United Nations report on the UK’s digital welfare state concluded: “Algorithms and other forms of AI are highly likely to reproduce and exacerbate biases reflected in existing data and policies. In-built forms of discrimination can fatally undermine the right to social protection for key groups and individuals.”

Protesters demonstrate against IT company Atos’s involvement in tests for incapacity benefits outside the Department for Work and Pensions in London August 31, 2012. REUTERS/Neil Hall

But it’s not just the use of skewed data that may exacerbate inequity. The way the technology is deployed is often discriminatory, as well.

In the Netherlands, an algorithm eerily similar to the one that falsely accused thousands of people of benefit fraud is still in use in Utrecht. But it is not deployed across the entire city: it targets only people who live in the low-income neighbourhood of Overvecht.

This establishes a clear double standard and means that those on society’s periphery bear the brunt of surveillance and the potential ramifications of the technology’s faults.
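A back-of-the-envelope sketch, again purely hypothetical, shows why selective deployment matters. Assume the tool wrongly flags a small, fixed share of whoever it screens (the rate below is invented); if it is only ever run on one neighbourhood, every wrongful accusation lands there by construction.

```python
import random

random.seed(0)
FALSE_POSITIVE_RATE = 0.05  # invented figure, purely for illustration

# 1,000 claimants in the targeted neighbourhood, 9,000 everywhere else.
claimants = (["Overvecht"] * 1000) + (["rest of the city"] * 9000)

wrongly_flagged = {"Overvecht": 0, "rest of the city": 0}
for neighbourhood in claimants:
    if neighbourhood != "Overvecht":
        continue  # the tool is simply never run on these claimants
    if random.random() < FALSE_POSITIVE_RATE:
        wrongly_flagged[neighbourhood] += 1

print(wrongly_flagged)
# Every wrongful flag falls on Overvecht -- not because its residents behave
# differently, but because only they were put under the algorithm's lens.
```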

As authorities pursue these tools, the idea that a citizen can be knowable, quantifiable and predictable risks spreading further into local and national governance. In the process, the structural issues that influence criminality are likely to be ignored in favour of an individualistic approach cloaked in technology’s veneer of objectivity.

But they do work, right?

There’s now overwhelming evidence that these tools don’t even work very well.

In Australia, nearly half a million people receiving welfare support were wrongly accused of lying about their income and pursued for debts they did not owe. In the Netherlands, tens of thousands of people were wrongly accused of owing money to the state by an algorithm that breached EU human rights legislation. And in Britain, the Department for Work and Pensions (DWP) has been found to use a secretive algorithm that “targets disabled people in a disproportionate, unfair and discriminatory way.”

So, what can be done when the algorithm gets it wrong? Unfortunately, people who want to contest a decision often face years of bureaucracy, with authorities rarely admitting their mistakes.

Screenshot of Melanie Klieve speaking via video link at the Royal Commission hearing into Robodebt, an automated debt recovery scheme that wrongly calculated that welfare recipients owed money, in Brisbane, Australia. December 5, 2022. Thomson Reuters Foundation/Seb Starcevic

In part, that’s because authorities themselves appear unaware of exactly how their tools work, and are unwilling, or unable, to explain them to citizens.

In 2021, I filed a Freedom of Information request with the DWP to find out more about their self-proclaimed “cutting-edge artificial intelligence” tool designed to catch people responsible for large-scale benefits fraud. But, as is often the case, they refused to divulge any new information.

The complete lack of transparency and accountability was summarised perfectly by Lina Dencik, co-director of the Data Justice Lab at Cardiff University: “Rather than the state being accountable to its citizens, the datafied welfare state is premised on the reverse, making citizens’ lives increasingly transparent to those who are able to collect and analyse data, at the same time knowing increasingly little about how or for what purpose the data is collected.”

Remedies

The use of algorithms in the digital welfare state is forcing society’s most vulnerable to become test subjects for opaque tools that few appear to fully understand. Increasing the efficacy, transparency and accountability of these tools is important, but I think we need to look beyond that.

Instead, authorities should ask themselves whether processes that can have such a profound impact on citizens’ well-being should ever be automated. And whether the promise of lower costs and increased efficiency will ever really be worth the risk.

Any views expressed in this newsletter are those of the author and not of Context or the Thomson Reuters Foundation.