In a collective effort to address the risks posed by Artificial Intelligence (AI), Australia's AI safety leaders have urged the Minister for Industry and Science, Ed Husic MP, to take AI safety seriously as he moves to regulate the industry.
Greg Sadler, spokesperson for Australians For AI Safety, expressed concern that the Minister has dismissed the possibility of catastrophic or existential harms from AI, brushing such warnings off as “darkly negative.” Despite growing global awareness and mounting real-world examples of AI-related harms, the Minister’s stance remains unchanged.
The letter, supported by a diverse group of individuals and organizations spanning Australian AI expertise, calls on the nation to participate in global agreements that address long-term risks and to increase support for AI safety research in Australian universities.
On 1 June, the Albanese Government took its first steps towards modernizing Australia’s AI laws: it published a report by Australia’s Chief Scientist and invited Australians to share their perspectives on governing AI.
Far from being alarmist, the letter’s signatories advocate a measured approach to AI safety. Their primary objective is to make AI safety an essential part of AI development rather than an issue ignored altogether.
The United Nations has also raised concerns over AI risks. Secretary-General António Guterres warned of the potential for catastrophic or existential risks from generative AI, describing the threat to humanity as comparable to the risk of nuclear war.
Richard Dazeley, Professor of Artificial Intelligence and Machine Learning at Deakin University and a signatory to the letter, emphasized the need for transparency and regulation in AI labs. He stressed the importance of understanding emerging AI models and their capabilities before they are widely deployed.
While some jurisdictions are taking strides towards AI safety by establishing safety-focused government labs, such as Singapore’s AI Verify Foundation and the European Commission’s European Centre for Algorithmic Transparency, Australia has yet to propose a similar national lab.
Jenna Ong, a community organizer from Canberra, expressed relief that experts are addressing this critical issue. Community groups nationwide are discussing how to respond to Ed Husic’s consultation, urging governments to heed AI safety experts’ advice and take a long-term view of the catastrophic risks that highly capable AI agents could pose.
Bridget Loughhead, a participant in a recent AI forum in Melbourne, expressed disappointment that the Government’s 42-page AI discussion paper failed to mention the pressing issue of AI safety. She emphasized that Australia has a history of leadership in nuclear safety and biosecurity and should strive to lead in AI safety as well.