The Australian government has unveiled a budget prioritising two key areas: fostering innovation and ensuring social well-being.
This dual focus reflects a balancing act common to economic planning. While some see support for small and medium-sized enterprises (SMEs) and deep-tech startups as crucial for future growth, others emphasise the need for immediate assistance for those most affected by current economic challenges. The budget aims to address both long-term aspirations, such as the transition to renewable energy, and immediate needs, highlighting how closely social welfare and economic prosperity are intertwined.
The government has also announced a significant investment in science and technology, aiming to realign research and development policy with national priorities and bolster innovation outcomes. Over the next eight years, $38.2 million will be dedicated to supporting a diverse STEM workforce, with a particular focus on women and marginalised groups. In a bid to integrate Australia’s artificial intelligence expertise, $21.6 million will be allocated over five years, including the establishment of an AI Advisory Body. A further $8.5 million over two years will enhance the government’s capacity to lead a safe and responsible AI agenda.
To ensure the continuity of Australia’s measurement science capabilities, $145.4 million will be invested over two years, with a particular focus on critical funding needs. Further investments include $479.9 million over nine years to support the development of a world-first quantum computer by PsiQuantum in collaboration with the Queensland Government, and $25.9 million over two years to guarantee affordable nuclear medicines for Australians through the Australian Nuclear Science and Technology Organisation (ANSTO). In tandem with these initiatives, various Commonwealth departments will receive a combined $288 million over four years to roll out the Digital ID scheme, including a pilot for a government-backed digital wallet, while a program to accelerate digital trade processes will receive $29.9 million over four years to advance trade digitisation efforts.
For the safe and responsible deployment of AI, $39.9 million over five years will be directed towards various programs, with a focus on reshaping the National AI Centre into an AI advisory body. This transition will involve its relocation from CSIRO to the Department of Industry, Science and Resources.
Prophet CEO and Co-Founder Jordan Taylor-Bartels
“Fiscal relaxation coupled with the precarious level of the cash rate has injected considerable uncertainty into the economic outlook, complicating projections and potentially hindering growth prospects for brands. Last night’s budget made it clear that interest rates are going to stay higher for longer, with the treasurer anticipating that the RBA won’t drop rates by any more than 0.75 per cent in the next two years. Naturally, many marketers are looking at how to augment traditional campaign strategies with tools like predictive analysis to ensure they cut down on wasted spend.
A lot of these functions increasingly harness AI to produce the best results – something which we’ve embraced as an opportunity in developing the Prophet platform. As an Australian-based martech business, we’re always looking to see how Federal Government policies and funding may further enable us in creating world-class solutions for marketers.
The investment of $39.9 million for AI policy and regulation to ensure the safe and responsible development and deployment of AI is good to see, but the Government still appears to be focused almost exclusively on building guardrails rather than also enabling innovation.
We would have liked to see a further commitment by the Federal Government specifically to support and help fund locally developed AI innovations. We have the opportunity to put Australia on the map in this rapidly evolving industry, but this does require further cooperation from the Government to ensure success.
With the Government pledging a significant $1.7 billion for investments in innovation, science and digital capabilities, it’s a shame there was very little else in the budget for homegrown startups specifically. For example, the $4.8 million allocated to boost the Austrade Landing Pads program seems likely to benefit only a very small number of startups specifically seeking expansion in Indonesia and Vietnam.
Elsewhere in the budget, almost $40 million in funding for a range of STEM programs to increase diversity in education and industry will indirectly support the AI and tech sectors by fostering a skilled workforce, which is a positive. The introduction of a new National Innovation visa aimed at attracting talented migrants could further support Australian AI development by facilitating access to global talent. We’re hopeful this brings more talent to Australia’s AI frontier.”
Charles Ferguson, APAC General Manager at G-P
“The government’s $39.9 million investment into regulating AI use is a welcome one, as recent research by G-P found that 87% of Australian business executives plan to invest more into AI in the coming year.
At the same time, these executives are more concerned about the financial penalties and fines due to the incorrect use of AI than their counterparts around the world. As part of these new programs and policies, it is important that the government clearly stipulate guidelines for the responsible use of AI, so that businesses can gain regulatory clarity and better set themselves up for success.
“More must also be done to support AI innovation among SMEs in order to prevent a gulf in AI adoption between smaller and larger businesses. This is especially important as our research found that smaller businesses are taking a more cautious approach to AI adoption: 65% of executives at medium-sized businesses said they were concerned about adopting AI too soon without a strategy and resources, compared to 60% of executives at enterprises. Medium-sized businesses are also dedicating less spend towards AI than their larger counterparts.”
Craig Nielsen, VP, Asia Pacific & Japan, GitLab
“The Australian Government’s Budget announcement of investment into AI regulation underscores the need for action to address emerging AI risks. The regulatory framework we put in place today will help organisations future-proof how they evolve with AI, ensuring they reap the benefits of AI without creating vulnerabilities.
“All organisations aiming to benefit from AI must share the responsibility of its ethical adoption, not solely regulators. To integrate AI properly, leaders need to assess how AI fits into their broader goals and their security and privacy policies. Without the proper guardrails in place, such as controls over how AI tools store and protect data, organisations are vulnerable to security risks, fines, customer attrition, and reputational damage.
“To contain and protect valuable IP, organisations should create strict policies outlining the approved usage of AI-generated code. When incorporating third-party platforms for AI, organisations benefit from a thorough due diligence assessment ensuring that their data, both model prompts and outputs, will not be used for AI/ML model training and fine-tuning, which could otherwise inadvertently expose their intellectual property to other businesses.
“The promise of AI is far-reaching, from enhancing software development and marketing to finance and beyond. However, the long-term viability of AI can only be achieved with transparency and strategic implementation. Especially in a world where every company is a software company, AI-powered workflows will enhance efficiency and reduce cycle times in every phase of the software development lifecycle. By automating software delivery and securing end-to-end software supply chains, organisations can balance speed and security: gaining full visibility into their threat landscape and establishing policies to aid compliance, so they can deliver secure software faster.”
Peter Marelas, Chief Architect, APJ, New Relic
“The AI inclusions in the Budget announcement are a welcome step forward in ensuring that AI is developed and used in a responsible and beneficial manner. The Budget sets forth a clear vision for the future of AI in Australia, and it provides a roadmap for how different stakeholders can work together to achieve this vision. The focus on the core principles of safety, security, transparency and accountability is a clear call to action for organisations to start adopting responsible practices. These include controlling the way data for AI systems is collected and used, and gaining visibility into model inputs, outputs and decision-making processes to build a clearer understanding of how AI systems operate.
“For any regulatory response to work, a culture of compliance will become a necessity. Implementing full visibility and control across the breadth and depth of the entire AI stack now will enable organisations to get on the front foot with these regulations. An additional consideration is the cost implication of compliance: businesses could be required to disclose details about their operations, including the total compute power and clusters associated with their AI models. There is also a heightened focus on ensuring the cybersecurity of AI models, which can add even more operational overhead. Practices like AI observability emerge as a crucial element to guide and support software organisations through the emerging regulatory landscape of AI. By providing in-depth insights into and logging of AI system operations, AI observability enables more informed decision-making about where and how to invest in compliance-related initiatives. It provides the necessary tools and processes to ensure the safety, transparency and continuous improvement needed in AI systems, while offering visibility to meet new requirements as they are established.”