Automation, uncertainty, and the Robodebt scheme
Michelle Lazarus and Joel Townsend
The recently concluded Royal Commission into the Robodebt Scheme exposed the manifest flaws in an automated system used for raising social security debts.
Among the many lessons we can draw from the wider Robodebt scandal is the need to design systems (whether human or automated) with a complex, uncertain world in mind.
While Robodebt wasn’t an artificial intelligence system, it’s a cautionary tale as we contemplate an increasingly automated future, especially in the context of substantial developments in AI.
Robodebt was a process by which Centrelink took annual Australian Taxation Office income data, and averaged it into fortnightly instalments. If this averaged amount didn’t match income declared by a social security recipient, and the person didn’t respond to a request for further information, they were assumed to have incorrectly declared their income, and a debt was raised against them.
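To see why averaging was so error-prone, consider a minimal sketch of the logic (a hedged illustration only – the figures, function names and the simple mismatch rule are assumptions for demonstration, not the actual Centrelink implementation). A person with seasonal earnings who declared every fortnight accurately is still flagged, because their annual total smoothed over 26 fortnights no longer resembles what they actually earned in any particular fortnight.

```python
# Illustrative sketch of income-averaging logic; not the real Centrelink system.
# All figures, names and thresholds are invented for demonstration.

FORTNIGHTS_PER_YEAR = 26


def averaged_fortnightly_income(annual_ato_income: float) -> float:
    """Smooth an annual ATO income figure into equal fortnightly amounts."""
    return annual_ato_income / FORTNIGHTS_PER_YEAR


def flag_discrepancy(annual_ato_income: float, declared_fortnightly: list[float]) -> bool:
    """Return True if the averaged figure disagrees with any declared fortnight.
    Under the averaging approach, a mismatch was treated as possible under-declaration."""
    averaged = averaged_fortnightly_income(annual_ato_income)
    return any(abs(averaged - declared) > 0.01 for declared in declared_fortnightly)


# A casual worker who earned $1000 a fortnight for half the year and nothing for
# the other half, and declared every fortnight accurately:
declared = [1000.0] * 13 + [0.0] * 13
annual_total = sum(declared)  # $13,000 reported to the ATO

# Averaging spreads the $13,000 evenly ($500 a fortnight), so both the working and
# the non-working fortnights look "wrong" even though nothing was misreported.
print(flag_discrepancy(annual_total, declared))  # True -> a "debt" would be raised
```

In the scheme itself, a flag like this became a debt unless the person could supply further information disproving it – the burden of resolving the mismatch fell on the recipient, not on the system that produced it.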
These debt-raising decisions were unfair, and were seen as such by many of those receiving notifications of automatically generated debts.
The findings of the royal commission will shed important light on issues relating to the extent of automation of government decision-making, review processes, and the social security system. What’s clear from the evidence put before the royal commission is that many people recognised Robodebt’s flaws, but failed to take issue with the outputs of the automated system.
Automation bias and uncertainty tolerance
Why did so many people choose to trust the Robodebt automated system over the drumbeat of criticism that it was unlawful, and its outcomes were flawed?
One factor, among many, is likely the human tendency towards uncertainty intolerance.
As humans, we tend to crave certainty, despite the complex world we live in being filled – at every turn – with uncertainty. When faced with uncertainty, especially when risk and liability are involved (as in Robodebt), we seem to focus on suppressing uncertainty.
The appeal of an “objective” non-human decision-making machine can be an alluring salve to treat the discomfort of uncertainty.
The automated system used in Robodebt didn’t hedge when it produced an output. Humans needed to decide whether the technology provided a proper basis for raising a social security debt. Too few acted when questions about its adequacy were raised, and the flawed, unlawful system persisted for years.
If we, as individuals and as a species, are more uncertainty-intolerant, we are less able to sit with the discomfort uncertainty typically provokes, and more likely to perceive the output of an automated system as “certain”.
This trust in technology can become so extreme that we allow it to run without monitoring or oversight – a tendency termed automation bias (or automation misuse).
The American version of the satirical television mockumentary series The Office illustrated this well in an episode in season four. Michael Scott, the show’s protagonist, played by Steve Carell, drives his car into a lake because the GPS tells him to take “a right turn”. Michael does so because “the machine knows”, despite clear visual evidence that this would result in the car plummeting into the lake.
This example illustrates the important role human judgment has in interpreting automated systems.
Those engaging with such platforms must be able to adaptively respond to uncertainty – to acknowledge the limitations of automated recommendations, sit with the discomfort of uncertainty, and still make a decision.
If they accept that uncertainty is ever-present in complex decision-making, the tendency towards automation bias and misuse may be reduced. The alternative to drawing on this human skill could be ending up in the lake.
Those working in automation (including AI and machine learning) agree that human intellect remains critical across domains including medicine, policing, and law. We’re the ones responsible for the final decision, best placed to know what to ask of the technology, and what to make of its recommendations. But this process, too, is filled with uncertainty.
AI isn’t as ‘certain’ as it may first appear
Faced with an opaque decision-making system, members of our community may well be more inclined to conclude it’s unfair. “Procedural justice” research confirms that people more readily see decisions as just when they can see and understand the reasoning process behind them.
Robodebt was opaque; it’s taken Senate committee hearings, judicial review proceedings, a class action and the royal commission to unpick the details of the scheme.
True artificial intelligence systems are likely to be harder still to unpick, making decisions without ever showing their work – the humans subject to government decisions made using AI might never be able to understand the precise reasoning behind them.
Much can be learned here from the literature about the human capacity to tolerate uncertainty.
At Monash, we recognise that a key component of education is preparing students to respond adaptively to the uncertainty naturally present in our complex and unpredictable world.
More broadly, research into uncertainty tolerance has important implications as government considers implementing more automated decision-making, and the increasing use of artificial intelligence.
Hard lessons learned
Uncertainty is everywhere – we can’t eliminate it even with technology. Attempts to replace uncertainty with the mirage of AI certainty can also be dangerous. Technology itself introduces uncertainties, as the steps that AI takes towards an output are often proprietary.
So what are some ways forward?
- Developing workplaces that acknowledge and value the central role of humans in decision-making and judgment. This requires recognising the complex and messy nature of decisions such as those involved in social security.
- Building in processes and protocols for effective AI management, making it clear who bears accountability for automated systems.
- Developing ourselves, the systems we work within, and our communities to be more uncertainty-tolerant. Ultimately, this can help us work better together with the natural uncertainties present in our complex world, and with the algorithmic “certainties” that automated systems introduce.
About the Authors
Michelle Lazarus
Associate Professor, Faculty of Medicine, Nursing and Health Sciences
Michelle is the Director of the Centre for Human Anatomy Education and Curriculum Integration Network lead for the Monash Centre for Scholarship in Health Education. She leads an education research team focused on exploring the role that education has in preparing learners for their future careers. She is an award-winning educator, recognised with the Australian Award for University Teaching Excellence in 2021 and a Senior Fellowship of the Higher Education Academy (SFHEA), and is the author of The Uncertainty Effect: How to Survive and Thrive Through the Unexpected.
Joel Townsend
Associate Professor, Law Experiential Education, Faculty of Law, Monash University
Joel is the Director of Monash Law Clinics, a community legal centre which provides assistance to disadvantaged members of the community, and gives law students an early experience of legal practice. Joel worked for nearly 15 years at Victoria Legal Aid before coming to Monash. He has broad experience and deep expertise in public law. At VLA, he was centrally involved in litigation and public advocacy seeking to bring the Robodebt scheme to an end.