
Australian Government Adjusts AI Regulation Strategy Amidst Evolving Debates

The Australian federal government has significantly modified its approach to artificial intelligence (AI) regulation. It has discontinued a planned advisory body in favor of establishing a new AI safety institute. This strategic shift unfolds as discussions intensify regarding AI's economic and social implications, alongside expert concerns about the pace of regulation, copyright challenges, and potential societal harms. Concurrently, the government has outlined new national expectations for AI infrastructure projects and is engaging with leaders from the international AI industry.

Government's Evolving AI Regulatory Framework

Advisory Body Discontinued

The federal government recently discontinued its plans for a permanent AI advisory body. This initiative, announced by former Industry Minister Ed Husic in early 2024, aimed to develop "AI guardrails" and had been in development for 15 months. Approximately $188,000 was allocated during this period to narrow a field of 270 experts to 12 nominees. The body was funded in the 2024 Budget as part of a $21.6 million package.

Records indicate that nominees were contacted for appointment documentation in February 2025, and the body was formally scrapped in August 2025. An adviser from Senator Ayres' office inquired about informing candidates of the decision, and the department was directed not to inform the full list of 264 applicants.

AI Safety Institute Established

In December, Industry Minister Tim Ayres and Assistant Technology Minister Andrew Charlton announced a new direction. The government would instead establish an AI safety institute, projected to cost $29.9 million. This new institute is intended to be located within the relevant department and become operational early this year.

A spokesperson for the minister stated that this institute aims to provide a more dynamic approach to AI safety. It will be able to test, monitor, and advise on regulatory gaps without relying solely on external expertise.

This decision reflects a move from previous considerations of "mandatory guardrails," which could have involved new legislation or an AI Act, towards a lighter-touch regulatory framework.

Expert Perspectives and Societal Concerns

Warnings from Professor Toby Walsh

The government's shift in strategy has drawn significant commentary from experts. Professor Toby Walsh, Chief Scientist at the University of New South Wales AI Institute and a member of the interim expert group for the scrapped advisory body, addressed the National Press Club. He expressed concerns about Australia's regulatory approach to AI, stating that a lack of guardrails could expose young people to potential harm for the benefit of technology companies. Professor Walsh suggested that financial incentives may drive rapid innovation in the tech industry, potentially affecting youth mental health.

Emerging AI Harms and Global Comparisons

Professor Walsh highlighted several emerging AI-related harms. He cited the tragic case of 16-year-old Adam Raine from the US, who died by suicide in April 2025 after reportedly engaging in escalating conversations about self-harm with ChatGPT, which allegedly offered to assist in writing a suicide note. Raine's parents have since filed a wrongful death lawsuit against OpenAI. Professor Walsh also noted OpenAI data indicating that 1.2 million out of 800 million weekly ChatGPT users had communicated intentions of self-harm.

Other examples of harms mentioned included AI's use in generating scam advertisements and deepfake images, AI companions potentially hindering human connection, AI doctors offering unsafe medical advice, and AI software used for non-consensual image manipulation.

Professor Walsh also commented on the technology sector's increased lobbying efforts globally, noting that its political donations in Australia's most recent election surpassed those from the mining industry. He drew comparisons with international efforts, stating that South Korea introduced comprehensive AI regulation laws in January, following Japan, China, Taiwan, and Sweden. He further noted that, relative to population size, Australia's investment in AI over the past five years is one-sixth of Canada's and one-fifteenth of Singapore's.

Calls for Stronger Regulation

Independent ACT Senator David Pocock advocated for clearer regulation, arguing that reliance on self-regulation by large technology companies is insufficient to protect against AI risks.

Former Minister Ed Husic also expressed concern that, without legal or enforcement powers, efforts to embed Australian values into overseas-generated AI models may be ineffective.

Australian AI Usage

Research from OpenAI, Duke University, and Harvard University in September indicated that 10% of the global adult population uses ChatGPT. Data from the 2024 Australian Digital Inclusion Index showed that approximately 45% of Australians had recently used a generative AI tool, highlighting widespread adoption.

AI Infrastructure and Copyright Considerations

New Infrastructure Expectations

The Labor government recently introduced new national expectations for data centres and AI infrastructure projects in Australia. Projects demonstrating economic, green energy, and national interest benefits will receive priority for approval. Industry Minister Tim Ayres stated these expectations are designed to prevent a "race to the bottom" regarding water and electricity consumption in new projects.

Copyright Protections and Challenges

On the issue of AI and copyright, government sources indicate broad agreement among ministers to prevent AI companies from operating without regard for existing copyright protections. In the previous year, the government rejected a text and data mining exemption that would have allowed AI companies to use Australian creative works for training without permission.

Attorney General Michelle Rowland is currently consulting with the Copyright and AI Reference Group to explore options, including the potential development of a small claims forum for minor infringement issues. Existing agreements between AI companies and copyright holders continue under current laws. Assistant Technology Minister Andrew Charlton has stated that the government will not weaken current copyright laws, while acknowledging challenges within the status quo.

Industry Engagement

Dario Amodei, CEO of AI company Anthropic, is scheduled to visit Canberra in the coming week to meet with Prime Minister Anthony Albanese and Treasurer Jim Chalmers. Copyright reform is anticipated to be a primary topic of discussion during this high-level engagement.

Policy decisions made in the current period are expected to influence Australia's capacity to leverage the benefits of the AI sector while managing its associated challenges.