Australian government officials are considering a mandatory labelling requirement for AI-generated material to mitigate the potential harms of the rapidly advancing technology.
The move comes amid growing concern over the rise of deepfake images, videos, and lifelike AI-generated voice recordings that can deceive and mislead the public. Industry Minister Ed Husic, who is spearheading the government’s efforts to regulate artificial intelligence (AI), recently met with Sam Altman, CEO of OpenAI, the company behind ChatGPT, to discuss the urgent need for measures to address these challenges.
The proposed labelling system would serve as a visual cue, alerting viewers that the media they are consuming was generated by AI algorithms. By making the origin and nature of content transparent, it aims to help individuals make informed decisions, build critical media literacy, and guard against the spread of misinformation.
However, implementing effective labelling for AI-generated content presents several challenges. Reliable technical methods for accurately detecting and authenticating AI-generated media have yet to be developed, and standards and guidelines must be established to ensure consistency across platforms and industries. Collaboration between government bodies, industry stakeholders, and AI research organisations will be crucial to devising frameworks that balance innovation with public trust.
As AI algorithms and tools have grown increasingly sophisticated, it has become harder for the average person to discern real from fabricated content. Deepfake images and videos can convincingly replicate individuals or events, spreading misinformation with damaging consequences. Similarly, lifelike AI-generated voice recordings can create the illusion of interaction with a human, blurring the line between AI and human communication.
Professor Geoff Webb from Monash University’s Department of Data Science & AI, Faculty of Information Technology, said: “The Australian government is to be commended on the considered and practical approach they are taking to the real and immediate risks associated with AI.
“Vested interests are generating much hysteria and overblown hyperbole. But we can rest assured that there is no risk that we will be subjugated by AI overlords. Nonetheless, there are many immediate risks associated with the current technologies.
“I am cautious about exactly how effective mandatory labelling of AI-generated content would be. It is unlikely to stop bad actors, such as a foreign power seeking to disrupt an Australian election or referendum.
“The best way to ensure that Australia is not left at the mercy of the foreign interests that currently control these technologies is to increase Australian workforce training and investment in Australian AI research and development.”