Agentic AI is already operating in Australian businesses, making decisions and acting across systems. Here’s why governance can no longer be delayed.
Why this matters: New global research from TrendAI, drawing on 3,700 business and IT decision makers across 23 countries, finds organisations are deploying agentic AI faster than governance frameworks can keep up.
For most of its recent history, artificial intelligence in the enterprise has been a tool that waited. You prompted it, it responded. You asked, it answered. Agentic AI is different. These systems set their own sub-goals, take sequences of actions, access data, interact with external platforms, and complete complex multi-step tasks with little to no human intervention. They do not wait to be asked. They act.
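To make that distinction concrete, the sketch below shows the kind of plan-act-observe loop that sits at the heart of most agentic systems: the software chooses its own next step, calls a tool, records the result, and repeats until it decides it is done. This is a minimal, self-contained illustration; the planner and tools here are stand-in stubs, not any vendor's product or the systems covered in the research.

```python
# A minimal, self-contained sketch of a plan-act-observe agent loop.
# The planner and tools are stand-in stubs, not any real product's API.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str          # which tool to invoke, or "done"
    args: dict = field(default_factory=dict)

def plan_next_step(goal: str, history: list) -> Action:
    """Stub planner: in a real agent, a model chooses the next action."""
    if not history:
        return Action("search", {"query": goal})
    if len(history) == 1:
        return Action("summarise", {"text": history[-1]["result"]})
    return Action("done")

TOOLS = {
    # Stand-in tools; real agents call live systems (APIs, databases, code).
    "search": lambda query: f"results for {query!r}",
    "summarise": lambda text: f"summary of {text!r}",
}

def run_agent(goal: str, max_steps: int = 10) -> list[dict]:
    """Pursue a goal step by step without further human prompts."""
    history: list[dict] = []
    for step in range(max_steps):
        action = plan_next_step(goal, history)  # agent picks its own sub-goal
        if action.name == "done":
            break
        result = TOOLS[action.name](**action.args)  # agent acts on a system
        history.append({"step": step, "action": action.name, "result": result})
    return history

print(run_agent("quarterly churn report"))
```

Nothing in that loop asks a person before acting, which is exactly the property the rest of this article is about.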
That shift is significant for business. The productivity ceiling that came with human-paced AI interaction is gone. Agents run in parallel, around the clock, across systems. But the same quality that makes agentic AI powerful is what makes its governance so urgent. When an agent makes a decision, triggers a process, or interacts with a critical system, the consequences are real. Sometimes they are irreversible. And in most organisations right now, there is no clear agreement on who is responsible when something goes wrong.
New global research from TrendAI puts numbers to that gap. The study, drawing on 3,700 business and IT decision makers across 23 countries including Australia, found that 67% of organisations have felt pressured to approve AI despite known security concerns. Almost one in five Australian respondents, 19%, described those concerns as extreme and said they were overridden anyway to keep pace with competitors and internal demand.
Rachel Jin, Chief Platform and Business Officer and Head of TrendAI, said the findings reflect a structural problem that goes beyond individual organisations. “Organisations are not lacking awareness of risk, they’re lacking the conditions to manage it. When deployment is driven by competitive pressure rather than governance maturity, you create a situation where AI is embedded into critical systems without the controls needed to manage it safely.”
In Australia, 68% of organisations say AI is advancing more quickly than they can secure it. Almost two thirds, 64%, report having comprehensive AI policies in place. But more than 40% say unclear regulation, compliance standards, and a lack of internal governance remain key barriers to safe adoption. In practice, AI is frequently operationalised before the rules governing its use are fully established, and 44% of senior business decision makers report only a moderate understanding of the legal frameworks that govern AI.
Srujan Talakokkula, Managing Director ANZ at TrendAI, said the confidence businesses project about AI preparedness is masking a deeper uncertainty. “While many organisations across Australia and New Zealand report strong confidence in AI preparedness and strong recognition of AI’s role in combating AI-driven threats, there is a clear gap in understanding of legal frameworks governing AI and differing views on accountability and human oversight across both business and IT leadership.”
The governance problem is being compounded by how AI rollout decisions are made inside organisations. When security teams are cut out of top-down deployment decisions, workarounds follow. The use of unsanctioned or shadow AI tools is growing as a direct result. And the threat environment is not standing still. Recent TrendAI threat research shows attackers are already using AI to automate reconnaissance, accelerate phishing campaigns, and lower the barrier to entry for cybercrime, increasing both the speed and scale of attacks on organisations whose own governance is still being written.
Nowhere is the accountability gap more visible than in agentic AI specifically. Less than half of respondents globally, 44%, believe agentic AI will significantly improve cyber defence in the short term. In Australia, the concerns cluster in a handful of areas. Almost half, 45%, say AI agents accessing sensitive data is their biggest risk. Over a third, 34%, point to autonomous code deployment. Almost one in three, 31%, fear abuse of trusted AI status, while 30% are concerned about hallucinations or false outputs.
Nearly a third of business decision makers globally admit they have no observability or auditability over these systems at all. That means organisations cannot see what their agents did, cannot reconstruct why, and cannot reliably intervene once they are running. Some 54% of Australian respondents support the introduction of AI kill switch mechanisms to shut systems down in the event of failure or misuse, though nearly half remain unsure whether that is even the right approach. And less than half of Australian business decision makers, 42%, believe a human should always remain in the loop on AI-driven security operations.
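In engineering terms, a kill switch and basic auditability are simpler than the terminology suggests. The sketch below, with hypothetical names throughout and no connection to any vendor's controls, gates every agent action behind a revocable flag and writes an append-only audit record, so actions can be halted and later reconstructed.

```python
# Illustrative only: a revocable gate plus an append-only audit trail
# wrapped around agent actions. All names here are hypothetical.
import json, threading, time

class AgentGovernor:
    """Gates agent actions behind a kill switch and logs each one."""

    def __init__(self, audit_path: str = "agent_audit.log"):
        self._enabled = threading.Event()
        self._enabled.set()                 # agent allowed to act by default
        self._audit_path = audit_path

    def kill(self) -> None:
        """Flip the kill switch: all subsequent actions are refused."""
        self._enabled.clear()

    def execute(self, agent_id: str, action: str, fn, *args, **kwargs):
        if not self._enabled.is_set():
            raise RuntimeError(f"kill switch active: {action} refused")
        record = {"ts": time.time(), "agent": agent_id, "action": action}
        try:
            result = fn(*args, **kwargs)    # the actual agent action
            record["status"] = "ok"
            return result
        except Exception as exc:
            record["status"] = f"error: {exc}"
            raise
        finally:
            # Append-only audit record: what ran, when, and how it ended,
            # so the action can be reconstructed after the fact.
            with open(self._audit_path, "a") as f:
                f.write(json.dumps(record) + "\n")

governor = AgentGovernor()
governor.execute("agent-7", "fetch_report", lambda: "report contents")
governor.kill()                             # operator shuts the agent down
# Any further governor.execute(...) call would now raise.
```

The hard part, as the survey numbers suggest, is not the mechanism but deciding who holds the switch and what the audit trail must capture.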
Jin said that lack of consensus is itself the warning sign. “Agentic AI is moving organisations into a new risk category. Our research shows the concerns are already clear, from sensitive data exposure to loss of oversight. Without visibility and control, organisations are deploying systems they don’t fully understand or govern, and that risk is only going to increase unless action is taken.”
Talakokkula said the answer starts with visibility across the full AI lifecycle. “With governance challenges intensifying and AI-driven threats becoming more sophisticated, visibility of assets and risk management across the entire AI lifecycle is critical. This research highlights the importance of working with trusted partners that allow organisations to safely deploy and scale AI.”
The pattern the research reveals is consistent across markets. The conditions that make agentic AI deployment fast are, in many organisations, the same conditions making it ungoverned. Competitive pressure, unclear accountability, and frameworks still being written after systems are already running are combining to create a risk gap that is growing alongside the technology itself. The governance conversation has not kept pace. The AI did not wait for it.
