Greg Baker comments on the release of the voluntary AI safety standard by DISR
By: Greg Baker
Photo by Jamillah Knowles & We and AI / Better Images of AI / People and Ivory Tower AI 2 / CC-BY 4.0
The release of the voluntary AI safety standard by the Department of Industry, Science and Resources (DISR) represents an attempt to guide AI development in Australia. It’s well-written and clear, with contributions from top government, academic, and industry minds. Yet, I have reservations.
The core issue with the standard is that it assumes AI will continue to be centrally controlled, primarily used for cost-reduction purposes -- as it has been in traditional AI projects.
This is flatly contradicted by what we've seen over the last 18 months. The AI activities attracting the most interest have been individuals spontaneously discovering they can do things they couldn't do before. Just a few obvious ones: coming up with new ideas, overcoming writer's block, having a personal tutor on just about any topic, and being able to automate through programming.
It's that last one --- automating activities by prompt engineering --- that will revolutionise workplaces. It's a bottom-up activity that doesn't follow a mandated process. There's no need for a priesthood of data scientists to make AI happen any more.
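To make the point concrete, here is a minimal sketch of the kind of bottom-up automation an individual employee can now wire up themselves, assuming an OpenAI API key and the official Python client; the task, prompt, and model name are hypothetical illustrations, not anything drawn from the standard.

```python
# Hypothetical example of "automation by prompt engineering": turning incoming
# support emails into triage notes, with no data-science team involved.
# Assumes the official OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable; the model name is illustrative only.
from openai import OpenAI

client = OpenAI()

def triage_email(body: str) -> str:
    """Ask the model to summarise and prioritise a single email."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You triage support emails. Reply with a one-line "
                        "summary and a priority of LOW, MEDIUM or HIGH."},
            {"role": "user", "content": body},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(triage_email("Our invoices have shown the wrong GST since Tuesday."))
```

The point is not the particular task but that an individual can set this up in an afternoon, which is precisely the behaviour a centralised accountability model does not anticipate.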
Given this context, it's clear why the DISR’s AI safety standard doesn't fit the future of AI. The principles in the standard assume AI will remain a centrally controlled, organisationally visible activity, monitored and regulated by processes that belong to a pre-2022 world. Take, for instance, the first principle: “Establish, implement and publish an accountability process.” This presumes that AI will always be centrally deployed and its usage monitored at the leadership level. But in a world where individuals control their own AI systems, how realistic is this?
Similarly, the second principle's emphasis on a "risk management process" hinges on the idea that AI deployments will be rare, highly controlled events. They won't be. Principles like "protect AI systems and implement data governance systems" (#3) and "test AI models and systems to evaluate model performance" (#4) make sense in a world where AI models are trained in-house. However, that’s no longer how AI works. The largest models today are built by a handful of companies, and the vast majority of users rely on these pre-trained models. AI is infrastructure, not a project.
The safety standard’s focus on centralised accountability, risk management, and governance doesn’t reflect this. It's a framework for supervised machine learning, and nowadays that is just one tiny backwater part of AI.
The conversations we should be having are about:
- The need for regulation to prevent an AI system from both having access to financial instruments to pay people and having the authority to request that things be done. That is a dangerous combination, yet it is perfectly legal today.
- Frameworks for handling white-collar labour disruption: does a 25% change in someone's job amount to constructive dismissal? Most white-collar workers are going to experience that kind of change this decade.
- Negotiating overseas enclaves of Australian territory to build solar-powered data centres operating 24/7, ensuring Australian data remains on Australian soil even when it’s nighttime in Perth and Sydney.
- Protections for employees who fully automate their own jobs.
This piece was originally published by Greg Baker here.