By Dan Pearce, General Counsel, Holding Redlich
Rapid advancements in AI and high-profile privacy breaches are driving a wave of government legislative reform, which will determine how businesses can develop, use and rely on AI tools – whether for automating operations, enhancing customer service or making data-driven decisions.
Since releasing the Voluntary AI Safety Standard and its proposals paper on mandatory guardrails in September 2024, the government has been consulting on how to determine what constitutes a high-risk setting for AI deployment and how to effectively legislate the proposed reforms to AI regulation.
The government has also announced a National AI Capability Plan (Plan) to grow investment, strengthen AI capabilities, boost AI skills and secure economic resilience. The Plan is expected to be released at the end of 2025, after targeted and public consultation. More recently, the Commonwealth’s House of Representatives Standing Committee on Employment, Education and Training delivered its report, ‘The Future of Work’, where it called for AI products used by employers in recruitment, remuneration and training to be regarded as “high risk” for the purposes of attracting the mandatory guardrails.
Overseas, President Trump moved away from the responsible AI safety rules that had been introduced by the previous administration, prioritising AI development as a key focus for the US. The release of the Chinese generative AI product, DeepSeek, caused market turmoil, signalling that the US’s lead in AI could be quickly eroded by cheaper alternatives. At the global AI summit in February, France committed billions to AI initiatives and indicated a preference for innovation over increased regulation.
Cyber Security Act update
New subordinate legislation under the Cyber Security Act, referred to as the Cyber Security Rules, was made on 4 March 2025. The following measures will take effect in the coming months:
- new security standards for smart devices – from March 2026, manufacturers and suppliers of consumer-grade smart devices must meet the required security standards for those devices, such as secure default settings, unique device passwords, regular security updates and encryption of sensitive data
- ransomware payment reporting – from 30 May 2025, organisations with an annual turnover of $3 million or more (i.e. other than small businesses below that threshold) must report ransomware payments to the Australian Signals Directorate within 72 hours
- new Cyber Incident Review Board – the Board will be in operation from 30 May 2025 to review and assess major cyber incidents that impact Australia’s defence or cause serious public concern.
Although not all of the above measures will apply to SMEs, any business working with larger organisations may face tighter security expectations, requiring improved cyber resilience.
Introduction of ‘digital duty of care’
A recent report reviewing Australia’s online safety laws has recommended introducing a “digital duty of care” requiring large social media platforms to take reasonable steps to actively protect users from harmful content. Under this ‘duty’, platforms would be required to conduct regular risk assessments to identify harmful content on their platform, to be transparent about the results, and to respond to user complaints about that content.
Users would also have a right to submit complaints to a regulator, potentially similar to the EU’s Digital Services Coordinator, which could take action if the platform fails to address the issue. There could also be significant fines and other penalties for non-compliance.
The proposed duty is along similar lines to that announced by the government in November 2024 (which it plans to pursue if re-elected in 2025) and would likely draw on the examples recently established in the EU and the UK. If implemented, the duty is also seen as an alternative approach to combatting online misinformation and disinformation, particularly after the withdrawal of the bill that was specifically aimed at such conduct.
Some commentators have also advocated for this duty as a preferable alternative to the government’s recent legislation that will introduce an effective ban on children under 16 having access to social media platforms, which is due to be implemented by December 2025. Further work will be needed to define the types of harm that would attract the duty, and the report suggests separating the Online Safety Act regime from the existing national classification scheme for computer games and film to provide more flexibility in dealing with harms such as eating disorders.
Ongoing privacy reform
Last but not least, the regulatory environment for the privacy of personal information continues to evolve. Recent developments include the legislation effecting major changes to the Commonwealth Privacy Act passed in December 2024, new guidance on privacy issues with the use of AI for developers and businesses using AI, and the Privacy Commissioner’s determination on the use of facial recognition technology by Bunnings released in November 2024.
Steps SMEs should take now
Now is an ideal time for SMEs to consider their compliance readiness, as we may continue to see further developments in AI, cyber security and privacy this year. SMEs should review their current cyber security processes, strengthen their data security, assess their AI usage, and ensure staff are well-trained to handle any cyber security or privacy breaches.
This article provides general information only and does not constitute legal advice.