Did we learn nothing from robodebt?

NDIS automation will put vulnerable lives at the mercy of machines. Full article by Dr Georgia van Toorn. 

16 Dec 2025

Article originally published in The Guardian

Never again. That was the resounding message delivered through the robodebt royal commission by the thousands of Australians whose lives and livelihoods were upended by the cruellest experiment in bureaucratic automation the country has ever seen.

Now, with plans under way to automate the calculation of individual NDIS support plans, it appears we’ve reached new heights of institutional amnesia. Once again, vital lifelines to support will be placed at the mercy of automated systems. Only this time, the mechanisms for review and redress will be vanishingly thin.

In plans revealed by Guardian Australia this week, the National Disability Insurance Agency has set its sights on reforming the way individual disability support budgets are determined, reducing the scope for human discretion and giving data and algorithms a greater role.

Currently, a person’s support budget is determined through a combination of computer-based tools and human discretion. These tools help generate an initial support plan that an NDIA delegate modifies to suit individual needs and circumstances.

From mid-2026, a new assessment tool will be introduced, the data from which will be fed into software to generate a budget. The NDIA delegate’s role will be limited to that of accepting or rejecting the budget.

It is hard to overstate the potential dangers of replacing discretion with algorithms in a system as vital and consequential as the NDIS.

There is no doubt people with disability are all too familiar with problems of bias in human decision-making. But removing human discretion altogether is not the answer. In fact, it raises the stakes dramatically.

In a more automated system, a lot will hinge on the quality of data fed into the algorithm. If that data is inaccurate – if the assessment fails to capture a true picture of a person’s support needs – the resulting budget will be flawed and the person will pay the price.

According to revelations in Guardian Australia this week, there will be no safety net – no scope for either the NDIA delegate or the Administrative Appeals Tribunal to amend the budget. In other words, the computer will have the final say.

In clear-cut cases where a person’s disability aligns with standard categories, a more automated process may well suffice. It might succeed in streamlining the process and reducing administrative delays. But as is so often the case with digital tools, the benefits and harms will be unevenly distributed. The people who are most likely to have their funding cut or their supports misjudged are those whose disabilities are complex, fluctuating or not easily captured through standardised assessments.

A more automated process may be particularly perilous for people whose disabilities are affected or compounded by social factors – such as poverty, racial, sexual or gender discrimination, and entanglements with the carceral system – which fall well beyond the bounds of data captured by existing assessment tools. Just like with robodebt, the system will inflict its heaviest toll on the communities least able to bear it.

In discussions of budgets and algorithms, we must be mindful that what’s at stake for NDIS participants is nothing less than life-sustaining support. Cuts to budgets translate into fewer hours of funded support, fewer showers, reduced access to essential therapies, compromised independence and an increased risk of social isolation and harm. For some, it will be the difference between living independently or being forced into institutional care. In the worst cases, as the disability royal commission has made painfully clear, reductions in support can be a threat to life.

People with disabilities have historically been described as the canaries in the coalmine, subjected to experiments that would never be tolerated for other groups. Yet what we are seeing now in the automation of support planning is something perhaps even more troubling – that is, the conscious acceptance of predictable harms. Once efficiency justifies foreseeable harm, we have entered dangerous territory.

But it is not too late. The Morrison government attempted similar reforms in 2021, but they were ultimately deemed too risky and politically untenable. These new reforms present a chance, indeed the obligation, to insist once again that algorithmic efficiency should never come at the expense of human need.

Dr Georgia van Toorn is an Associate Investigator at the UNSW node of the ARC Centre of Excellence for Automated Decision-Making & Society (ADM+S). 

Above image: Dr van Toorn (left) at the film screening of I Am Not a Number, available on SBS On Demand