Data security is the number one reason small business owners hesitate to adopt AI. This week in Let’s Talk, our experts share the simplest, safest ways to start without putting your business at risk.
For many small business owners, the biggest barrier to adopting AI isn’t figuring out where to start. It’s not knowing what happens to their data once they do. Customer records, financial information, internal documents: the fear of feeding sensitive business information into an unfamiliar system is real and reasonable. This week in Let’s Talk, we asked our panel the question that holds most owners back: what is the simplest way to start using AI in your business without risking a data leak? From choosing the right tools to setting clear boundaries for your team, our experts share the practical first steps that let you move forward with confidence.
Let’s Talk!
Aaron Bugal, Field CISO, APJ, Sophos
“The safest way to begin is to start with enterprise AI tools that sit inside the systems you already trust, rather than letting employees experiment with public consumer apps on their own.
For most businesses, that means choosing AI services that operate within your existing productivity or cloud environment, apply the same access controls your staff already use, and come with clear commercial commitments around how data is handled. That gives you a much safer starting point than relying on unsanctioned tools where sensitive information can be pasted in without oversight.
Just as importantly, start with low-risk use cases. Use AI first for tasks like drafting internal summaries, improving meeting notes, helping with research, or streamlining routine administration. Those are practical ways to build confidence and productivity without exposing your most sensitive customer, financial, or intellectual property data on day one – getting familiar with the tools through low-risk activities will help smooth the adoption and potential unwanted use.
The other critical step is governance. Businesses don’t need to solve every AI policy question before they begin, but they do need some basic guardrails: clear guidance on what data can and cannot be entered, visibility for IT and security teams, and controls such as identity management, logging, and data loss prevention. The real risk usually isn’t AI itself – it’s employees using it without any oversight.
In simple terms, the best way to start using AI safely is to bring it in through the front door: approved tools, clear rules, limited initial use cases, and a measured rollout. That lets organisations capture the productivity benefits of AI while keeping security, compliance, and trust intact.”
Adam Beavis, Vice President and Country Manager, ANZ, Databricks
“The easiest way to start using AI without risking data leaks is to begin with low-risk, internal use cases built on a governed data foundation. Think summarising documents or automating reporting, where the data is already secure and controlled. This allows businesses to see value quickly while keeping sensitive information contained.
The real differentiator is what sits underneath. AI is only as secure as the data it touches. Rather than “bolting on” controls later, data platforms are designed to embed governance directly into how data and AI are used. Capabilities like Databricks’ Unity Catalog allow businesses to centrally manage access to data across tables, files, and models, with fine-grained controls. This ensures sensitive information can be masked or restricted – before it is used in AI workflows.
Businesses that succeed are those that treat data governance as a core capability from the beginning. Start small, get your data controls right from day one, and you’ll be in a position to scale AI across the business with confidence, without putting customer trust at risk.”
Craig Nielsen, Vice President, Asia Pacific & Japan, GitLab
“Security teams should treat AI as a force multiplier for existing capabilities, not a replacement for expertise. Start with high-volume, low-context tasks where AI excels: automated triage of vulnerability reports, initial classification of security incidents, and pattern recognition across security telemetry.
We’ve seen success using AI to generate initial security release documentation and perform preliminary bug bounty triage, reducing response times while maintaining human oversight for critical decisions. The key is establishing clear boundaries: AI handles the data processing and initial analysis, while security professionals provide context, validate findings, and make strategic decisions.
Implement feedback loops where human corrections train your AI systems to better understand your specific threat landscape. This collaborative model scales security operations without sacrificing the nuanced judgment that only experienced practitioners can provide.”
Lisa Fortey, General Manager, Logicalis Australia
“The simplest way for startups and SMEs to use AI without risking data leaks is to establish strong guardrails before feeding it business data.
Without that foundation, it becomes difficult to manage what AI can access, interpret, or generate. Even well-intentioned use can quickly expose sensitive information or create unintended risk when boundaries aren’t clearly defined.
Organisations can protect against data risk by limiting scope early, setting clear rules around where AI is used, restricting access to appropriate data, and giving teams guidance on how to apply it responsibly.
Start small. Begin with low-risk, well-understood tasks rather than opening AI up across the organisation. Done right, this approach delivers value quickly. Early wins tend to come from focused use cases that streamline manual work and improve visibility, turning existing processes into faster, more insight-driven outcomes without increasing exposure.
For startups and SMEs, it comes down to discipline early. Control how AI is used from the outset, and it becomes far easier to expand safely over time.”
David Torgerson, VP of Technology and Security, Lucid Software
“The safest way to start using AI is to begin with a tightly scoped, low-risk use case. Rather than attempting a company-wide rollout, start with a single, well-defined task, like an admin workflow. This allows you to build a controlled environment, validating results and establishing clear accountability before scaling up.
AI requires access to large datasets to be effective, which naturally creates exposure. Lucid’s research shows a gap in Australia’s adoption readiness: while 80% of Aussie workers are using AI tools, 51% say security concerns are the biggest barrier to wider use. Hackers no longer target just the underlying database; they target the AI layer itself. AI also tends to surface underlying gaps in process and documentation, which can introduce risk if left unaddressed.
Treat AI as a distinct layer in your technology stack, with its own governance, controls, and accountability. By securing one workflow at a time and ensuring your data and processes are well defined, you can scale AI confidently without introducing unnecessary risk.”
Greg Statton, VP and CTO APJ, Cohesity
“The simplest and safest way to start using AI is to ask not what you can do with AI, but what AI can do for you. We must view AI as a real-life problem-solver, not a product.
To achieve this, we can start small, with low-risk data and clear use-cases. Start with singular tasks, for example, drafting FAQs, summarising documents, or creating marketing content, and avoid using sensitive customer or financial information early on. This allows us to test AI’s value without introducing unnecessary risk.
Just as importantly, businesses must ensure the right data protection policies are implemented from the outset. That means knowing who can access your data, only using trusted AI tools, and ensuring you have secure, reliable backups in place. After all, strong cyber resilience isn’t about simply stopping attacks, it’s about making sure your data is protected and recoverable as you adopt new technologies like AI, so that when attacks occur you can recover quickly.
From there, measure results and expand use-cases across the business, bringing more data in as your confidence grows. As AI evolves, particularly with “agentic” systems that can take on actions, the need for strong data controls and resilience only increases.
Businesses that take this approach, starting small while building on a resilient data foundation, can unlock AI’s benefits quickly without exposing themselves to unnecessary risk.”
Darren Guccione, CEO, Keeper Security, Inc.
“The simplest way to start using AI safely is to treat it like any other privileged system from day one. Begin with a tightly controlled, low-risk use case. Then apply core security principles: enforce least-privilege access, restrict what data AI tools can see and ensure all interactions are logged and auditable. This is zero trust security in practice, where no user, device or system is trusted by default. Crucially, avoid connecting AI tools directly to sensitive systems or datasets without identity security brokering and governance. Every AI tool deployed introduces a new identity and access pathway – each one a potential attack surface that must be secured the same as any privileged human user.
Don’t rely on AI platforms to secure themselves. Enforce encrypted storage, role-based access controls and centralised visibility at the access layer – before any AI tool ever touches your data. Start small, secure the access layer first and expand only when you have full visibility and policy enforcement in place.”
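The least-privilege and audit-logging principles described above can be sketched in a few lines. This is a minimal illustration only, not a production control: the role names, data scopes, and the `call_model` stub are all invented for the example.

```python
from datetime import datetime, timezone

# Illustrative role -> permitted data scopes mapping (least privilege).
ROLE_SCOPES = {
    "support": {"public_docs"},
    "finance": {"public_docs", "internal_reports"},
}

audit_log = []  # In practice this would be a tamper-evident log store.

def call_model(prompt: str) -> str:
    """Stand-in for a real AI API call."""
    return f"summary of: {prompt[:30]}"

def governed_ai_call(user: str, role: str, scope: str, prompt: str) -> str:
    """Allow the call only if this role may touch this data scope,
    and record every attempt so interactions stay auditable."""
    allowed = scope in ROLE_SCOPES.get(role, set())
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "scope": scope,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{role} may not send {scope} data to the AI tool")
    return call_model(prompt)
```

Note that a refused request is still logged before the error is raised, which is the “logged and auditable” property the quote calls for: the audit trail captures attempts, not just successes.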
Inbal Rodnay, AI strategist and technology adoption expert, Inbal Rodnay
“Balancing client confidentiality, compliance and risk is not only about which AI tool to use but which version. The best place to start is with the AI already in use. If you run on Microsoft, that’s Copilot. If you use Google, it’s Gemini. Since you’ve already paid for the platform, you aren’t adding another subscription or another login for your team to forget or to learn how to use.
To keep data safe, use a business account, never a personal one. On a paid business plan, your data stays inside your environment and isn’t used to train the AI. That’s the protection you’re paying for, and it’s your strongest line of defence. Before anyone starts using AI, check your subscription agreement and confirm you’re on a business tier.
If you’re using a free or personal version, you don’t have the same protections. Using a free AI account and manually redacting or anonymising everything isn’t a sensible workaround: it breaks the flow of work, and humans being human, something will eventually slip through. It’s a policy prone to error. Free versions are generally more limiting, and you should never share confidential client or business information on a free account.
Before building a custom chatbot or automating an onboarding process, start small. Experiment with summarising documents or drafting emails, and avoid pasting sensitive client data until you’re confident in your setup. It’s important to build AI literacy. A business subscription protects your data at the technical level, but it does nothing to stop someone from pasting a client’s tax file number into the wrong tab because they don’t understand the consequences of doing that. Your team needs to know what these tools are good at, where things can go wrong, and when to slow down and check the output before it goes anywhere. Train your people first: help them understand how the tool works and the intent behind using AI, show them how to collaborate with it, and establish guardrails and policies for AI use.”
Daniel Garcia, Vice President and General Manager for APAC, Kaseya
“For many Australian SMBs, AI feels like a double-edged sword: essential for growth but a potential nightmare for data privacy. Kaseya’s 2026 State of the MSP Report reveals that while 48% of providers identify AI and automation as the top client need for the year ahead, a cautious 34% are concerned that AI could introduce new security risks.
The simplest and safest way to start is not by adding new, standalone tools for individual touchpoints, but through consolidating your tech stack. Our research highlights that product complexity is now a primary barrier to effective security. When AI is scattered across siloed, disconnected applications, data leaks are more likely to occur.
SMBs should instead consolidate AI capabilities within a unified platform to ensure data stays within governed guardrails. By applying automation first to high-volume, internal tasks like monitoring and alert management, you harden your defensive posture rather than weakening it. Simplifying your stack turns AI into a talent multiplier that avoids the noise of disconnected tools while keeping security as a foundational core.”
David Hayes, Director of Sales APAC, Arctic Wolf
“The simplest way to start using AI without risking a data leak is to recognise that cybersecurity isn’t just about technology, it’s about people. While AI is a massive driver for growth, the rush for AI productivity often outpaces established data governance and security protocols.
We are seeing that today in Australia, where a cyberattack is reported every six minutes: SMEs face the same sophisticated, AI-enabled threats as global enterprises but lack the deep bench of security experts needed for robust risk management.
To bridge this gap, SMEs must move from reactive firefighting to a proactive risk management framework. The goal is not to reinvent cybersecurity, but to evolve by reinforcing existing controls:
- Implement the basics: Mandatory Multi-Factor Authentication (MFA) is a non-negotiable, low-cost hurdle that prevents leaked credentials from becoming a total system compromise.
- Establish AI governance: Create clear AI usage policies for employees to ensure proprietary data stays within secure, private environments.
- Continuous awareness: Develop a comprehensive human risk management strategy. This fosters a security-conscious workforce that knows their role in preventing attacks.
In 2026, being proactive is the only road to sustainable peace of mind and growth for SMEs.”
Darren Lonsdale, Managing Director, ANZ, Prosci
“AI adoption remains a challenge for many organisations, with nearly half reporting difficulties and 48% still facing a user proficiency gap. When it comes to using AI safely, the biggest risk isn’t the technology itself; it’s the people who use it.
The simplest way to start using AI without risking data leaks is to begin with governance. Safe adoption requires clear policies, defined guardrails and a shared understanding of what data can and cannot be used. However, governance alone isn’t enough; organisations need a structured approach to ensure these guardrails are understood, adopted and consistently applied across the business.
These initiatives need to be aligned to core business objectives and embedded within the broader organisational strategy as AI adoption is not a standalone technology shift, but a change in how people work. Without this foundation, even the most sophisticated tools can introduce unnecessary risk.
From there, organisations should take a structured, phased approach. Focus on a small number of low-risk use cases and scale intentionally. This allows teams to build confidence and refine controls before broader rollout.
Equally important is developing a capable workforce and culture that supports responsible innovation. Role-based training, with clear “dos and don’ts”, helps employees understand both the opportunities and risks of AI tools. Programs such as Prosci’s AI Adoption Workshop provide a framework and guidance to build this capability, enabling employees to use AI safely and supporting effective adoption from the outset.”
Kawin Boonyapredee, APAC CISO Advisor, KnowBe4
“Many organisations believe the safest and fastest way to adopt AI is by focusing on technology controls, yet evidence shows human behaviour remains the greatest source of risk. KnowBe4’s latest State of Human Risk report found that 90% of organisations have experienced security incidents caused by employee mistakes. That risk only increases as employees experiment with AI tools without clear guidance on how sensitive data should be handled.
The good news is that employees already demonstrate stronger security awareness in the workplace. Our recent survey of working Australians found that 53% put more thought into protecting work accounts than personal ones, reflecting the impact of clear policies, training and accountability.
That’s why education should come first. When employees understand what information is sensitive, how AI systems process data and what should never be shared, they make better informed decisions every day. Building a strong security culture helps people use AI confidently and responsibly, while reducing the risk of accidental data exposure.”
Luke Creswick, Principal Solutions Consultant, Asia Pacific, iManage
“There are a few steps you can take to ensure effective AI use in your business while protecting both your own and your clients’ data.
- Start by ensuring that the underlying data that fuels your AI initiatives is appropriately organised, secured, and governed (for example, in a modern, cloud-native DMS). Not only does this help protect any sensitive data, but it also enables AI to deliver more accurate and reliable outcomes based on the context that your DMS provides.
- If available, use AI capabilities that are already built into the systems that organise, secure, and govern your data (i.e. your DMS). This way, you don’t have to transfer any potentially sensitive data from one system to another. Removing data from a secure system can increase information security risks.
- When using dedicated AI solutions, ensure they, and your DMS (where you organise, secure, and govern your data), support Model Context Protocol (MCP). This provides an extra layer of security, allowing data to remain protected in place while providing external AI solutions access to it.”
MJ Robotham, Director, APAC, NinjaOne
“AI promises to transform nearly any part of business it touches, but to remain secure, organisations must take a focused, deliberate approach to AI adoption when first implementing it.
Rather than allowing employees to experiment freely with public AI platforms and LLMs, organisations should work with their IT teams to first identify opportunities to automate low risk use cases, such as summarising dense documentation, and automating other routine tasks that bring value to the business without requiring access to sensitive company or customer data.
From there, the priority for IT teams should be around maintaining visibility and control. AI adoption should remain inside IT’s purview, to reduce the risks posed by unsanctioned tools and unmanaged AI usage that can impact compliance and security. If organisations do not know which tools are being used, where data is flowing, or which endpoints are interacting with those tools, it becomes difficult to properly secure them.
This is where endpoint management becomes imperative. Businesses need a clear view of devices, applications, and users across their domain before they embark on their AI adoption journey. With strong endpoint management, IT teams can enforce guardrails, monitor employee AI usage, and minimise the risk of sensitive information being shared inappropriately.
AI will undoubtedly bring value to many organisations, but the safest first step is not broad deployment. It is controlled implementation, grounded in visibility, governance, and the IT oversight needed to keep innovation secure.”
Maria Kathopoulis, CEO & Chief Marketing Officer, UNTMD
“One of the biggest misconceptions about AI is that using it automatically exposes your company’s data to the internet. In reality, most modern AI platforms provide enterprise controls where your data is not used to train public models and remains within secure environments.
The simplest way for businesses to start is by focusing on low-risk internal productivity use cases. Tasks such as meeting summaries, drafting documents, analysing reports or creating marketing content can deliver immediate efficiency gains without involving sensitive customer information.
Many organisations are also experimenting with AI governance layers such as OpenClaw, which sit between employees and AI tools to monitor usage, control prompts and create audit trails. These systems can help businesses understand how AI is being used across teams and reduce the risk of uncontrolled data sharing.
However, technology alone is not the solution. Some companies mistakenly assume that installing a control layer automatically eliminates risk. In practice, security still depends on clear internal policies, role-based access controls and staff training.
The safest way to adopt AI is gradually: start with controlled use cases, establish governance early and expand adoption as your systems mature.”
Morgan Wilson, Founder & Director, creditte chartered accountants & advisors
“AI is useful. It is also genuinely risky if you hand it sensitive data without thinking.
The simplest way to start is to separate what is internal from what is not. Use AI tools for tasks that involve no client data, no financials, no personally identifiable information. Drafting social posts, summarising industry articles, writing job ads. That is a safe starting point.
Before you touch any paid tool, read its privacy policy. Specifically look for whether your inputs are used to train the model. Many free and low-cost tools are. That matters when the input is a client contract or a profit and loss statement.
If you are using AI inside your business operations, default to tools that offer a data processing agreement. Check before you commit, not after.
Start narrow. One use case. No sensitive data. Build the habit before you build the capability.”
Shruta Satam, Co-founder and CEO, Pustakh
“The simplest way to start is to pick one repetitive, low-stakes task and let AI handle it. Drafting meeting summaries, summarising research, or preparing first drafts of internal documents are good starting points. The key is choosing tasks where a mistake is easy to catch and correct.
On data security, the rule is simple: never paste confidential client data, financial records, or personally identifiable information into a public AI tool. Most data leak risks come not from the AI itself but from what people put into it. Use tools that offer enterprise privacy settings or data-off modes, and establish a clear internal policy before you start.
The biggest mistake businesses make is waiting for a perfect AI strategy before beginning. Start small, stay curious, and build from there. AI does not need to transform your entire operation overnight. One well-chosen task, done consistently, builds the muscle memory and confidence to go further.
Start simple. Stay safe. Scale gradually.”
Anthony Daniel, Managing Director, ANZ and the Pacific Islands, WatchGuard Technologies
“The simplest way for SMEs to start using AI is to begin with trusted tools that are already built with strong security controls, rather than introducing unvetted standalone platforms into the environment. This reduces complexity and limits unnecessary exposure from the outset.
The real risk is not in AI itself, but how it’s used. Employees may unintentionally upload sensitive company or customer data into public tools without visibility or clear policies. WatchGuard’s H2 2025 Internet Security Report found that unique endpoint malware surged by more than 1,548% in just six months. Every unmanaged AI tool is a potential entry point for these threats.
To reduce this risk, SMEs should prioritise access control before scaling AI adoption. Many businesses are moving away from traditional VPN-based access toward a Zero Trust approach, where every device, user and connection is continuously verified. This is critical in an AI context, as it helps prevent sensitive data being exposed through compromised accounts or unsecured endpoints.
A practical starting point is to focus on contained, low-risk use cases, such as automation or internal productivity, using AI within platforms you already trust. This allows businesses to gain value while maintaining control. Before implementing AI in your business, establish clear AI usage policies: define what data can be shared, enforce multi-factor authentication, and apply Zero Trust principles. This ensures AI adoption remains secure and sustainable.”
Ramesh Thiagalingam, VP of MLOps and Platform Engineering, ELMO Software
“Start by treating AI like any other vendor—with clear guardrails, not blanket bans. The single biggest risk isn’t AI itself; it’s employees pasting customer data, contracts or payroll information into free public tools to “save time”. Silence from leadership gets read as permission, so the first step is a simple ‘Acceptable Use’ policy that tells your team which tools are approved, what data is allowed in them, and what’s off-limits.
From there, start with enterprise-grade tools—ChatGPT Team, Microsoft Copilot, Claude for Work—where your data is contractually excluded from model training and stays inside your tenancy. Avoid free consumer versions for anything work-related.
Pick one low-risk, high-value use case to begin with: meeting summaries, drafting internal comms, or cleaning up spreadsheets. Roll it out to a small pilot group, capture what works, then expand.
Finally, make sure someone owns AI governance—usually IT or Security—so new tools get reviewed before they’re adopted rather than after. Frameworks like ISO 42001 give you a structured path if you need one. Safe AI adoption isn’t about saying no. It’s about saying yes, with a plan.”
Scott Maynard, Founder, Excite Media
“The best way to avoid data leaks when using AI tools is to have a healthy level of scepticism about what happens to the information you input.
Once you press “enter”, you no longer have control over where it’s sent or stored. So treat the information you enter accordingly.
Put simply, don’t enter any information you’re not happy to have stored in some mysterious place on the internet!
This doesn’t mean you can’t still leverage these AI tools for your business. It just means you should be conscious of what you enter, and try to “anonymise” the information where you can. For example, don’t use real people or business names. Or if you’re working with financial information, use dummy numbers that will still allow you to have the discussion. You can then apply the outcomes to your real numbers offline.
These tools can hugely benefit your business, just be conscious of what you share so you get the upside while minimising the risk.”
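The anonymisation step described above can be approximated with a simple find-and-replace pass before anything is pasted into a tool. A minimal sketch follows; the patterns are deliberately crude and purely illustrative, and real PII detection needs far more care than three regular expressions.

```python
import re

def anonymise(text: str) -> str:
    """Replace obvious identifiers with placeholders so the
    discussion can happen on dummy values rather than real ones."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "<EMAIL>", text)   # email addresses
    text = re.sub(r"\$[\d,]+(?:\.\d{2})?", "<AMOUNT>", text)     # dollar figures
    text = re.sub(r"\b(?:\d[ -]?){8,12}\d\b", "<NUMBER>", text)  # long digit runs (phones, accounts)
    return text
```

The idea is that the placeholder text still supports the conversation with the AI tool, and the real names and numbers are applied back offline, as the quote suggests.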
Niall McConville, Managing Director, Bombora Advice
“The simplest way to start using AI is to keep it practical and use a bit of common sense.
AI is best thought of as an assistant, not a search engine or a source of truth. It’s very good at helping you draft documents, summarise information and think through ideas, but it shouldn’t replace your judgement or be trusted blindly.
Avoid free versions where you don’t know how your data is being used; you may think the tool is free, but your data is likely being used, and that’s the actual price you are paying. It’s worth paying for a reputable business‑grade option and understanding the data settings.
Finally, set some simple guardrails so staff know what’s allowed and what isn’t. Used sensibly, AI can save time without creating unnecessary risk.”
Sam Makwana, General Manager, Impressive
“The simplest way to start using AI safely in business is to avoid connecting sensitive systems, like websites or cloud databases, directly to AI platforms. Direct API connections can pose security risks and cause data leaks. Instead, use AI as an external or read-only tool for generating ideas, analysing non-sensitive data, or drafting outputs that you can apply manually.
Additionally, anonymise any data before input, restrict AI access to only what’s necessary, and regularly review outputs for unintended exposure. Starting with low-risk use cases allows businesses to gain AI benefits without compromising security. Never use AI just for the sake of it. Always conduct an internal cost-benefit analysis before deciding to integrate AI into any of your existing systems.”
George Spink, Executive General Manager, Decoda
“The most effective way to integrate AI without compromising security is to stop treating it as a software purchase and start treating it as a cultural shift.
Most data leaks occur due to a lack of awareness rather than technical failure, so the first step must be to establish clear policies. Your team needs to understand that any data submitted to general AI tools can potentially be used to train the model, making it public property.
Start by auditing the tools you already use. Most providers have settings to disable data sharing for model training, so ensure these are toggled off. As you scale, move to enterprise accounts. This isn’t about surveillance, it’s about visibility. These tiers provide audit logs and prompt tracking, allowing you to identify whether sensitive data – such as customer PII or confidential IP – is being handled inappropriately.
For high-stakes operations, the goal is zero-data retention. We’ve found the most secure progression is hosting models through infrastructure providers like AWS or Azure within Australia. This ensures data only exists in the prompt and is never used for training.
For absolute control in sensitive environments, running open-source models locally on your own hardware eliminates the third party. Start simple, lock down your settings, and build your infrastructure as your data sensitivity grows.”
Sally Branson, Managing Director, Sally Branson Consulting Group
“Start with the basics: understand what AI tool you are using, who owns your data when you use it, and what that platform’s terms of service actually say. Most data leaks do not happen because of sophisticated attacks. They happen because someone typed confidential client information into a free AI tool without reading the fine print.
For SMEs, the simplest starting point is to choose one tool, read its privacy policy, and create a short internal protocol before anyone uses it. What can go in, what cannot, and who approves exceptions.
Then communicate that to your team and your stakeholders. Clients and partners increasingly want to know how the businesses they work with are using AI. Getting ahead of that conversation is not just good governance, it is good reputation management.
AI is not going away. The businesses that use it well will build trust. The ones that stumble into it will spend considerably more than the cost of a policy fixing the fallout.”
Victor Horlenko, Head of AI Innovations, Devart
“Most AI data leaks aren’t sophisticated attacks – they’re employees pasting client emails into free chatbots. Two things fix this: the right infrastructure and basic governance.
Infrastructure. Pick one sanctioned AI provider. Free tiers often train on your data; unofficial resellers may log everything and route prompts through servers you haven’t vetted. Your realistic options are an enterprise tier (ChatGPT Enterprise, Claude for Enterprise, Copilot, Gemini for Workspace), AI through your existing cloud (Bedrock, Azure OpenAI, Vertex AI), or self-hosted open models. Whatever you choose, get a contract with a no-training clause and wire it into your SSO and logging.
Governance. Infrastructure alone doesn’t stop shadow AI — the personal accounts and browser extensions employees use to move faster. You need a one-page policy naming what’s approved and what data is off-limits, data classification humans can actually apply, and a lightweight approval path so the right way is also the easy way.
The smallest version that works: one approved tool, one short policy, one named owner, thirty minutes of training, and a ninety-day review. Everything else can evolve from there.”
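The “one approved tool plus a data classification humans can actually apply” idea above can be made concrete with a pre-flight check like the following. The classification labels and tool name are illustrative assumptions for the sketch, not recommendations.

```python
# Illustrative three-level classification a non-specialist can apply,
# mapped to the set of tools each level may be sent to.
POLICY = {
    "public": {"approved-assistant"},       # marketing copy, published docs
    "internal": {"approved-assistant"},     # meeting notes, internal summaries
    "confidential": set(),                  # client data, financials: no AI tools
}

def may_send(classification: str, tool: str) -> bool:
    """Return True only if the policy explicitly allows this
    classification to go to this tool (default deny)."""
    return tool in POLICY.get(classification, set())
```

Default deny matters here: an unlisted classification or an unsanctioned tool is blocked automatically, which keeps the approved path the easy path.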
Clayton Pilat, Practice Lead – Artificial Intelligence, Synechron Australia
“To ensure safe AI deployment, governance is critical from day one. Begin with small, managed projects that don’t involve sensitive data. For example, summarise meeting notes that don’t include confidential client information.
Start by creating a basic inventory of where and how AI will be used, including the data it will access. Effective governance requires assessing risk exposure and prioritising critical, high-impact AI systems.
Then put guardrails in place early to promote ethical practices by embedding privacy, fairness and transparency. These also enable efficient operations by monitoring usage, managing costs and detecting any anomalies.
It’s important to manage access by treating every AI agent as a distinct, governed identity with unique credentials. Apply a “least privilege” approach with strict permissions and continuous oversight.
As models and regulations evolve, your governance measures must adapt accordingly. In Australia, privacy and data legislation are continuously updated, and regulators now expect AI tools to be used in ways that comply with the Privacy Act. While minimal disclosure was once common, opacity around AI use is now becoming a liability.”
Matt Hodgson, Founder, Bring Digital Performance
“We started with a business account on a platform that has enterprise privacy settings built in. Claude Teams, ChatGPT Team, and Gemini for Workspace all prevent your prompts from being used to train the underlying models. That one step removes most of the risk before you’ve even opened the tool.
From there, set one rule for your team: no client names, no financial figures, no personally identifiable information in prompts. Use placeholders and descriptive context instead. It takes 10 minutes to explain and it works.
Then pick one task that currently eats time and test AI on it. At Bring, we use it to analyse SEO data quickly, spotting patterns and anomalies across complex data sets that would otherwise take hours to interpret manually. The time saving was immediate and the risk was minimal.
The businesses winning with AI right now are not the ones that moved fastest. They are the ones who set clear boundaries early and built confidence from there. Start with one use case, prove it works, then expand.”
Richard Taylor, Head of Innovation, Spinach
“Redaction before Action
The “Zero-Data” approach is your safest bet. Use AI for reasoning, not record-keeping. Instead of uploading a sensitive contract, paste a generic version with all PII (Personally Identifiable Information) scrubbed. Ask for the analysis, then apply those insights back to your private files locally. If you must use live data, bypass “Free” tiers immediately. Enterprise versions of tools like Microsoft Copilot or ChatGPT Team provide “commercial data protection,” ensuring your inputs aren’t used to train the global model.”
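To make Taylor's "redaction before action" step concrete, here is a minimal scrub-then-restore sketch. The entity list is supplied by hand purely for illustration; a production workflow would detect PII with a vetted redaction library rather than a manual mapping:

```python
# Redact-and-restore for the "Zero-Data" approach: placeholders go to the
# AI, real values stay local. The entities dict here is illustrative.

def redact(text: str, entities: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Replace each sensitive value with a placeholder; return text + reverse map."""
    reverse = {}
    for i, (label, value) in enumerate(entities.items()):
        placeholder = f"[{label.upper()}_{i}]"
        text = text.replace(value, placeholder)
        reverse[placeholder] = value
    return text, reverse

def restore(text: str, reverse: dict[str, str]) -> str:
    """Re-insert the original values into the AI's output, locally."""
    for placeholder, value in reverse.items():
        text = text.replace(placeholder, value)
    return text

contract = "Acme Pty Ltd agrees to pay Jane Citizen $48,000 by 30 June."
scrubbed, mapping = redact(contract, {"party": "Acme Pty Ltd",
                                      "person": "Jane Citizen",
                                      "amount": "$48,000"})
print(scrubbed)
# [PARTY_0] agrees to pay [PERSON_1] [AMOUNT_2] by 30 June.
# ...send `scrubbed` out for analysis, then apply the insights locally:
print(restore(scrubbed, mapping))
```

The key property is that the reverse mapping never leaves your environment, so the AI only ever reasons over the generic version.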
Seamus Phan, CTO, McGallen & Bolden Pte Ltd
“The fear of “data leakage” is the biggest anchor dragging down AI adoption for many organizations, from government agencies and multinational corporations to tech startups and emerging domestic businesses. Most leaders rightly hesitate to feed proprietary secrets into the cloud’s global maw. But there is a pragmatic, simple, and cost-effective alternative: offline RAG (Retrieval-Augmented Generation), without all the worries of agentic AI.
Think of RAG as digital sovereignty. By running models with tools locally querying your own confidential documents, you eliminate external API risks and recurring costs. Your sensitive data never leaves your office environment, or even your individual space.
The workflow is straightforward. You ingest your internal documents, such as DOCs, PDFs, CSVs, or reports, into a local vector database. The AI then retrieves relevant chunks to answer questions based strictly on your facts and never anything outside of that, which effectively reduces hallucinations.
For added safety, redact Personally Identifiable Information (PII) before ingestion. If you have a decent GPU, the performance is snappy, secure, and achieves high accuracy on domain-specific data.
AI need not be a cloud-only mystery. Keep your intelligence local, your costs low, and your data under your own space.”
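As a toy illustration of the retrieval half of that offline workflow, the sketch below ranks local document chunks against a question using bag-of-words cosine similarity. A real setup would use a local embedding model and vector database, as Phan describes; the point here is only that nothing leaves the machine:

```python
import math
from collections import Counter

# Stand-in for vector retrieval: bag-of-words cosine similarity lets the
# sketch run with no dependencies and no data leaving the machine.

def tokenize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question."""
    q = tokenize(question)
    return sorted(chunks, key=lambda c: cosine(q, tokenize(c)), reverse=True)[:k]

chunks = [
    "Refund policy: customers may return goods within 30 days of purchase.",
    "Office hours are 9am to 5pm, Monday to Friday.",
    "Shipping: orders over $100 ship free within Australia.",
]
context = retrieve("How many days do customers have to return goods?", chunks, k=1)
print(context[0])  # the refund-policy chunk
# A local model would then answer using only this retrieved context.
```

In a full offline RAG pipeline, the retrieved chunks become the grounding context for a locally hosted model, which is what keeps answers anchored to your own facts.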
Airing Wang, Co-Founder, AI JAR
“The simplest way for businesses to start using AI without risking data leaks is to treat it as a decision-support layer, not a data repository.
One of the biggest misconceptions is that companies need to upload internal data to get value from AI. In reality, the safest and fastest entry point is using AI on external intelligence such as market trends, consumer behaviour and competitor activity before bringing any proprietary information into the mix.
This gives businesses a low-risk way to generate market insights, product ideas and go-to-market strategies using structured, credible external data, without exposing sensitive internal files. It mirrors how traditional new product development often begins: with market scoping and opportunity sizing before any internal rollout.
From there, businesses can introduce internal data gradually and in a controlled way, with clear governance, role-based access and defined workflows. The key is progression, not full integration on day one.
The businesses moving fastest today are not the ones building the most complex AI systems. They are the ones starting with a clear use case, trusted data inputs and outputs that support real decisions.
AI should not create new risk. It should reduce uncertainty.”
Karrina Mountfort, Founder, The AI Assembly
“Most data leaks from AI do not come from the tools themselves. They come from people who were never told what not to share, or which settings to change.
So start somewhere safe. Marketing and content is the obvious place. It is almost entirely public information, it is one of the most time-consuming tasks for small teams, and the cost of getting it wrong is low. If something does leak, it is probably your brand story, your service offer, or your expertise. That is not a risk. That is reach. You want AI to know and readily show your details.
Small teams that do not show up consistently fly under the radar. AI can change that, without touching anything sensitive.
Get comfortable there first. Build your confidence. Then, when your foundations are solid and your team knows what responsible use looks like, that is when you look at deeper integration.”
Mareike Niedermeier, Founder and CEO, Sales Savvy Online
“Separate what you’re asking AI to do from what data it needs to do it. Most of the highest-value use cases, such as drafting content, summarising research, generating ideas, and writing frameworks, require zero sensitive information.
No client data. No financial records. No proprietary processes. Start there.
Use a paid tier of a reputable tool rather than a free version.
Free tiers have weaker data protections and often use your inputs for model training.
Don’t paste customer lists, financial reports, or internal IP into any AI tool until you’ve read its data policy and understand exactly where that input goes.
The risk isn’t AI. It’s treating a public tool like a private one.
Keep sensitive data out of it, build confidence with low-risk tasks first, and the data-leak concern largely solves itself before you ever need to tackle the more complex use cases.”
Ivan Logan, Founder, CEO and Director, Omnivore
“Treat AI as you would any external partner to your business. If you wouldn’t share certain information with a third party, you shouldn’t be sharing it with AI. This is particularly true of any personally identifiable or sensitive data, or anything you wouldn’t want made public. It’s worth formalising this in your employee handbook, just as you would any other workplace policy.
At the same time, AI outputs should be treated like any contractor deliverable: they need to be checked for quality, accuracy and appropriateness. The business should define a set of approved tools that have been assessed for their intended use, rather than relying on unverified platforms.
Importantly, there should always be a human in the loop as the final checkpoint before anything is used externally. Realise that the field is evolving fast. Review how you are using AI on a regular basis and adjust your policies as required.”
Kate Massey, General Manager, APAC, Athos Commerce
“Australian businesses are moving quickly from AI curiosity to practical use. KPMG research reveals that 65% of businesses are already using AI, with 49% of employees using it regularly at work. The challenge is no longer whether to start, but how to do it safely while protecting intellectual property.
The most effective approach is to focus on AI that operates within trusted platforms, where data stays contained, access is controlled, and usage is auditable. By doing so, businesses can reduce both accidental leakage from ad hoc use and structural risk from under-secured custom builds.
For retailers, secure, embedded AI platforms are already enhancing product discovery, personalisation, and merchandising. These AI systems can automate manual tasks like product data feed management, while also understanding shopper intent to deliver more relevant, personalised shopping experiences. Crucially, they do so without exposing sensitive data externally, such as product performance, margin or sales history.
Real gains do not come from building bespoke AI applications. That approach often slows progress and introduces risk. Immediate value comes from established, embedded AI technologies that reduce manual effort, increase output, and maintain data control.”
Julian Vido, AI Safety Lead, MYOB
“New MYOB analysis shows SMEs using AI are growing 2.8 times faster than those that aren’t, with most adopters reporting time savings and productivity gains.
But our research also reveals a readiness gap: two‑thirds of SME employers aren’t actively looking for AI experience when they hire, 72% have no plans to offer AI training and 89% of SMEs don’t have a responsible AI policy in place. Without basic upskilling and guardrails, it’s harder for teams to use AI safely and effectively, even when it’s built into platforms they already use.
Further insights from the National AI Centre (NAIC) indicate that 40% of SMEs now use AI in some capacity, but for the large number who aren’t (and don’t plan to), the main barriers cited are trust and risk concerns.
The Australian Government’s National AI Plan, AI Adopt Program and Guidance for AI Adoption provide practical guidance to SMEs, supporting them to bring AI into day‑to‑day operations responsibly. The newly established Australian AI Safety Institute will add independent testing and advice on emerging risks, giving businesses more confidence in the tools they choose.
For SMEs, that points to a simple approach. Pick a specific, well-understood problem that AI is proven to solve. Use features embedded in platforms you already trust, and be clear about what data can go in. Build a responsible AI culture with your team: agree the rules together, write them down, and keep talking about what’s working and what isn’t. If you get stuck, the NAIC now provides a template AI policy and screening tool to get you started.”
Mark Khabe, Co-Founder, PRIME BPM
“There is a clear shift in how business leaders think about AI. They are no longer asking “Does AI work?” but rather “How do I use AI while keeping my data safe?”
This clearly shows that while AI has now moved from hype cycle into real adoption, data privacy concerns continue to limit adoption. The key lies in starting with low-risk, high-impact use cases that don’t require exposing sensitive data.
Improving internal operations is one of the simplest and safest entry points. Businesses can start with non-sensitive processes, such as HR hiring and onboarding, procurement workflows, internal approvals, expense management, etc. Mapping workflows, analysing inefficiencies, and identifying improvement opportunities in these core processes delivers immediate gains in efficiency and productivity without putting critical data at risk.
AI is particularly effective when used to handle repetitive and manual tasks. By accelerating activities, such as documenting processes, analysing patterns, and generating actionable insights, it lets teams focus on strategy and execution, where human expertise matters the most.
Equally important is governance. Use enterprise-grade AI tools with clear data boundaries, access controls, and auditability, and avoid sharing raw business-critical data in open, public AI environments.
By starting small, businesses can build confidence and scale AI safely, without compromising trust or security.”
Lee Robson, Founder & Director, iStories
“Start with your communications. It’s a low-risk, high-return entry point because you’re not feeding AI your financial data, customer records, or IP. You’re giving it your story.
The practical first step is using purpose-built AI tools designed for business use, not free consumer apps where your inputs may train the model. Look for platforms with clear data handling policies, no training on user inputs, and enterprise-grade security underpinning the output.
The rule of thumb: never input what you wouldn’t publish. Master that discipline first, then gradually expand how you use AI from there.”
Alfie Lagos, Founder, Lexlab & Neuravo
“The biggest data leak risk with AI is far more mundane than people think. Right now, someone on your team is copying client data into a free-tier tool that trains on every input. Nobody’s auditing it, nobody’s asked what happens to that data, and the tool is quietly absorbing everything. That’s the leak. And it’s happening in most businesses already.
So before you adopt anything, ask three questions of your AI provider. Does this tool train its models on my inputs? Where is my data stored and for how long? And can I get a written data processing agreement that confirms both? If your provider can’t answer those clearly, walk away. Most major platforms now offer business or enterprise-tier subscriptions that explicitly wall off your data from model training, but you need to verify that. Don’t assume a paid plan equals a safe plan. Read the terms. Ask for the documentation.
We chose Claude internally at Lexlab. One tool, company-wide. The impact has been significant across QA’ing, research, analysis, and reporting. But the tool mattered far less than the commitment. We rolled it out with structured training and weekly follow-ups to make sure it sticks. Not a one-off lunch and learn that everyone forgets by Friday.
My advice: stop spreading thin across half a dozen AI experiments with no governance. Pick one tool. Verify the data protections yourself. Train your people properly. And don’t double down, triple down. The businesses pulling ahead aren’t running the most pilots. They’re the ones going deep on one.”
Peter Fraser, Founder and Principal, Stratagem Corporate Advisory
“AI can save your business hours each week, help win customers faster and reduce admin, but only if you use it wisely and protect your data. My top three tips are:
- Start with one task that wastes time
Do not try to transform the whole business overnight. Begin with a practical use case such as drafting proposals, replying to customer enquiries, creating marketing content, summarising meetings or streamlining admin. Quick wins build confidence without exposing sensitive systems.
- Prevent data leaks from day one
Never upload customer records, contracts, pricing, payroll details or confidential business plans into open AI tools. Use trusted business-grade platforms with strong privacy settings, access controls and security protections designed to help prevent data leaks and misuse.
- Give your team simple rules
Make it clear which AI tools are approved, what information cannot be entered, and when a human must review output before it goes to a customer or supplier. Most data leaks happen through confusion, not bad intent.
The businesses getting the most from AI are not always the biggest. They are the ones starting early, learning fast and putting trust at the centre. Begin small, stay smart and grow with confidence.”
David Fischl, Legal Digital Transformation Lead Partner and Corporate and Commercial Team Lead Partner, Hicksons | Hunt & Hunt | Holman Webb
“Data leaks come from your employees making honest mistakes, such as putting your business’s confidential information into the wrong applications. The simplest way to start using AI securely is to design your approach around how your people use it.
Start with the provider. Use a reputable AI service that contracts to keep your information confidential and agrees not to train its models on your data. Check the data-handling terms, where the data is stored, and the confidentiality arrangements. Free applications rarely offer this.
The provider is only half the job. You also need an internal AI policy and training, so your staff know which tools are approved and how to use them properly. The policy should be specific about how tools are used, not just which ones. The same AI product can sit under a commercial contract when accessed through a work account, and under standard consumer terms when the same person logs in from their phone.
Once your people are using a secure tool and know what they’re doing with it, you quickly start seeing where AI genuinely helps. That’s when the real work begins. Businesses that get this right are running more profitably, giving their staff better days at work, and looking after their customers better too.”
Craig Butler, Managing Director ANZ, Kinectify
“The simplest way to use AI in your business without risking data leaks is to avoid using sensitive data entirely. Most of the early productivity wins don’t require it.
Begin with internal, non-confidential workflows where AI can compress hours of work into minutes. Tasks such as summarising meeting notes, drafting job descriptions, and researching unfamiliar topics are a great way to start with AI, as they carry minimal data risk, deliver genuine efficiency gains and allow teams to build fluency with the tools before the stakes go up.
A few practical guardrails can also go a long way towards securing your data: use reputable AI providers with strong security measures, and implement a business-wide AI usage policy that specifies how, what and when to use AI within the business.
When adopting AI in business, treat it as a highly capable but junior teammate. Always review, never rubber-stamp.”
Andrew Kay, Director of Systems Engineering, APJ, Illumio
“When it comes to AI, the goal shouldn’t be to eliminate risk entirely; that’s not realistic for any business today. As AI becomes more powerful and more widely used, cyber risk comes with it. That’s true for large enterprises and it’s true for small businesses, too.
AI does change the security picture. It can make it easier for attackers to move faster, spot weaknesses, and slip through gaps that are hard to see. But it also offers real advantages for small and midsize businesses. AI can improve productivity, lower costs, and help smaller teams compete with much larger organisations. Avoiding AI because of risk can hold your business back just as much as a security incident can.
A more practical approach is to assume that issues will happen at some point, and design your business so they don’t become disasters when they do. That means focusing less on perfection and more on resilience. This is where a Zero Trust mindset matters. In simple terms, that means:
- Only giving people, systems, and AI tools access to what they actually need.
- Clearly defining what your AI systems are allowed to touch.
- Continuously watching how systems and data interact.
If something goes wrong, strong visibility and clear boundaries help you isolate the problem quickly, instead of letting it spread. The goal isn’t to stop every incident, it’s to limit the impact, keep your business running, and recover fast. For small businesses starting with AI, that’s the safest path forward: adopt AI thoughtfully, set clear boundaries, and make sure you can see and contain problems early. That’s how you use AI with confidence, without letting one mistake put everything at risk.”
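Kay's least-privilege and clear-boundary points can be sketched as a deny-by-default permission check: every AI tool gets an explicit allowlist, and anything not granted is refused. The agent names and data scopes below are hypothetical:

```python
# Deny-by-default access control for AI tools: access requires an
# explicit grant. Agent names and scopes are illustrative only.

AGENT_PERMISSIONS = {
    "meeting-summariser": {"calendar", "meeting_notes"},
    "marketing-drafter": {"public_web", "brand_guidelines"},
}

def can_access(agent: str, scope: str) -> bool:
    """Access requires an explicit grant; unknown agents get nothing."""
    return scope in AGENT_PERMISSIONS.get(agent, set())

def request_data(agent: str, scope: str) -> str:
    if not can_access(agent, scope):
        # In practice this denial would also be logged for monitoring,
        # giving the visibility needed to isolate problems quickly.
        return f"DENIED: {agent} has no grant for {scope}"
    return f"OK: {agent} may read {scope}"

print(request_data("meeting-summariser", "meeting_notes"))
print(request_data("meeting-summariser", "customer_records"))
```

Because the default is denial, a compromised or misbehaving tool is contained to the scopes it was explicitly given, which is the "limit the impact" outcome described above.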
Craig Stockdale, ANZ Country Managing Director, Wasabi Technologies
“As AI adoption accelerates globally, businesses are facing increasing data sprawl as information moves across more systems, teams and tools. The challenge is not just using AI, but ensuring the right foundations and guardrails are in place so it can be used safely, without exposing sensitive information.
The simplest way to begin is to start with a low-risk use case, such as drafting internal checklists or summarising non-sensitive documents, and restricting AI to a clearly approved, permissioned set of data.
When AI is embedded directly into business systems, leaks typically happen through the pathways you already have, like over-permissioned folders, misconfigured connectors and outputs that are easily reshared, so the real risk becomes what data AI systems are allowed to access in the first place. This is where your data foundation matters.
Thinking strategically about where your data lives, how quickly you can retrieve it, who can access it and how it is protected creates a boundary between safe and unsafe data use. This should be reinforced with strong identity controls, such as multi-factor authentication, ensuring that access to sensitive systems and AI-connected data is verified beyond just a password. Strong governance at the storage layer, such as tight access controls, immutability and hidden copies, is crucial to ensuring AI systems only interact with approved datasets and act as the final line of defence against tampering or data loss.
Businesses that keep data protection front of mind are the ones that will be able to scale AI safely without increasing the risk of data leakage or loss of control.”
Nigel Tan, APAC Security Evangelist Director, Delinea
“The safest way to use AI in your business is by starting small, identifying use cases and deciding what data is needed to deliver the outcome you expect. Most AI tools now allow you to create AI agents that can help you, sometimes even autonomously. But you have to define what information these agents can access in the process, and for how long.
One important point to keep in mind: be vigilant and resist the temptation to give AI too much access without knowing the risks or how to avoid them.
In our Identity Security Report, we found that 90% of Australian organisations are feeling pressure to loosen AI controls and move as quickly as possible. The problem is that giving AI too much access without proper controls exposes businesses to new types of cyberattacks or critical information leaks. The benefits of rushing AI adoption are not worth the risks.”
Des Viranna, Head of Advisory and AI, Altis Consulting
“The safest starting point is the most practical: use AI only on information you would be comfortable sharing publicly.
Most AI tools store or learn from data you send them. Leave confidential or commercially sensitive data out until you have the right controls in place.
My tips to start using AI safely:
- Create a safe testing environment: Setting up a separate space to test AI tools before applying them to real data protects you and helps you learn where AI adds value before committing.
- Read the provider agreement: Most reputable AI providers publish a data processing agreement explaining what they do with your inputs. Check whether your data is used for model training and if you can opt out. Many skip this.
- Use AI as a thinking partner: Before a tough client conversation, pricing decision or new service idea, talk it through with AI. Ask it to challenge your thinking or suggest options. It’s like speaking with a knowledgeable colleague.
- Brief your team: A short conversation about what not to share goes further than software settings.
The organisations that get the most from AI aren’t the ones that move fastest. They build structure first, test deliberately and scale what works.”
Nirlep Adhikari, Founder & Director, Mount Mindforce
“Start at the edges, not the core.
Use AI for the things that don’t touch your sensitive data first. Drafting communications, summarising meeting notes, researching competitors, generating marketing copy. Low risk, immediate value, zero exposure.
The mistake I see founders make is reaching straight for the centre of their business — automating workflows that sit on top of customer data, financial records, proprietary logic. That’s where the risk lives.
A simple rule: if the AI tool you’re using needs access to your core database or customer records, stop and ask three questions. Who owns the data I’m feeding this? Where is it stored? Is this platform training its models on my inputs?
If you’re unsure about any of those three answers, that’s the moment to bring in someone who can audit your stack before you go further.
Build speed at the edges. Build trust at the core. That’s how you use AI without handing your business away.”
Dean Swan, Vice President & GM APJ, monday.com
“The simplest way to start using AI in your business without risking data leaks is to keep it inside the systems where your data already lives and is already governed.
Using a standalone AI tool for sensitive information is like reviewing a confidential contract in a café down the street. The café isn’t malicious; it’s just not built to protect it. Embedded AI is reading that contract at your desk, inside a building with badge access, cameras, and a shredder. Same work, completely different risk profile.
Most data leaks happen at the point of exposure. When an employee runs sensitive data through a tool that sits entirely outside the business’s security perimeter, the problem is the environment the model is being used in.
The conversation is shifting away from standalone AI tools and toward AI that’s embedded in the platforms teams already use, where it inherits the permissions, access controls, and audit trails you’ve already built. You’re not creating a new surface area for risk, you’re extending capabilities you already trust.
From there, the best starting point is narrow. Pick one low-stakes workflow and let AI operate within that contained process. You’ll learn how your teams actually use it, where the governance gaps are, and where the real value sits, all without betting the business on a single rollout.”
Santanu Dutt, Vice President and Head of Technology, Asia-Pacific & Japan, Zscaler
“The simplest starting point is surprisingly not the AI tool itself. It is understanding and controlling your data.
First, classify your data. Know what information your business holds, what is sensitive, and what is regulated. Without this baseline, it’s impossible to safely introduce AI into daily workflows.
Second, tighten identity and access controls. Clearly define who can access what data, why they need it, and ensure that access is monitored. AI tools are powerful, but without strong identity governance, they can unintentionally expose information to the wrong users or systems.
The principles of a strong security and architecture posture around your data, identity and access do not change when you introduce AI; if anything, AI makes them even more necessary.
Once these foundations are in place, organisations should guide employees towards approved enterprise AI tools rather than leaving them to experiment with public ones. Well‑intentioned staff may upload sensitive files to access AI platforms without realising the risks.
The same discipline will also apply to AI agents. As they become part of the workforce, they must be treated like any new employee – given verified identities, restricted roles, and continuous oversight.
Lastly, applying the philosophy of Zero Trust and leveraging a Zero Trust platform (versus legacy security technology) become a necessity for AI adoption.”
Gareth Russell, Field CTO for APAC, Commvault
“The simplest way to start using AI without risking data leaks is to make two early decisions: choose the right tools, and understand your data.
First, avoid free, consumer-grade AI tools. These often fall into the realm of shadow IT and can introduce risk. Instead, choose business-tier AI platforms, where providers are contractually obligated not to train on your inputs and offer clear data handling terms.
But tool choice is only half the equation. You can’t prevent data leakage if you don’t know where your sensitive data resides. This is where many fall short, treating data awareness as something to address later. In reality, it should be the starting point.
When you have data visibility, you can understand your exposure and apply guardrails. For example, should an AI agent have access to a shared drive full of over-permissioned HR files or customer records? AI will use whatever it can access, and surface it. Knowing your data allows you to set clear policies on what AI can and cannot access, restrict sensitive sources, and enforce controls that prevent this exposure.
Just as important, data awareness enables you to respond effectively if something goes wrong. If a prompt exposes sensitive information, an agent writes to the wrong location, or output becomes public, you need to quickly answer what was exposed, who was affected, and what needs to be recovered.
Because AI, for now, is only as safe as the data you allow it to access. Without the right guardrails, it increases the risk of unintended data exposure.”
Muthukumar T, Partner, Befree
“The fear of data leaks is one of the most common reasons Australian SMEs hesitate to adopt AI, and it’s a legitimate concern. But the answer isn’t to avoid AI altogether. It’s to be deliberate about where and how you introduce it.
Start with a simple rule: keep sensitive data out of general-purpose AI tools. Public AI platforms – the kind you access through a browser – are not designed for confidential client records, payroll data, or financial information. Use them for drafting, research, and internal communications. Nothing that, if leaked, would damage a client relationship or trigger a compliance issue.
For your finance and back-office operations, the safer path is working with platforms and partners that have built data security into the architecture itself. ISO 27001 certification, role-based access controls, and encrypted workflows aren’t features to negotiate; they’re the baseline.
This is where many SMEs find the outsourcing model genuinely useful. Rather than managing AI adoption in-house and hoping your team gets it right, you partner with a team that has already built compliant, automation-enabled processes around your data. The AI benefit arrives without the risk landing on your desk.
AI is not going away, and businesses across industries that engage with it thoughtfully now will be better positioned than those scrambling to catch up later. The key word is thoughtfully, because speed without structure is where data leaks actually happen.”
Russell Todd, Security Solutions Lead, Avanade Australia
“The simplest way to start using AI without risking data leaks is to work inside the systems that your business already trusts.
For organisations running on Microsoft 365, for instance, that means starting with tools like Copilot, which operate within existing security, compliance, and access controls. Your data stays in your environment, rather than being exposed to unknown third-party platforms, making it a far safer place to start for everyday use.
The step that most businesses overlook is putting basic governance in place before rolling AI out more broadly. Guidance on what data can and can’t be used – particularly customer information, financial data and anything sensitive – should be clearly defined. It doesn’t need to be complex, but it needs to be explicit.
Our research shows two in three Australian business and IT leaders see poor data governance as their biggest barrier to AI adoption. In most cases, it comes down to a lack of clarity on safe use rather than the technology itself. Getting those fundamentals right before you start is what separates businesses that adopt AI confidently from those that never move beyond experimentation.”
Sonia Eland, Executive Vice President and Country Manager, Australia and New Zealand, HCLTech
“The simplest way to start using AI without increasing the risk of data leakage is to approach adoption through governance, not just experimentation. Begin with clearly defined, low-sensitivity use cases and establish boundaries around how information is handled before introducing any tools into workflows.
This means being deliberate about data classification, ensuring that sensitive or proprietary information is excluded from early use, and embedding oversight into every interaction. AI should be treated as an extension of existing processes – subject to the same controls, review mechanisms, and accountability as any other business system.
Equally important is building organisational awareness. Teams need a practical understanding of how AI systems process inputs, where risks may arise, and how to apply sound judgement when engaging with them. This creates a culture of responsible use rather than reactive risk management.
Starting small, with structured guardrails and a focus on repeatable practices, allows businesses to build confidence while maintaining control. Over time, this measured approach enables more sophisticated applications without compromising data integrity or trust.”
Billy Loizou, AVP and General Manager, Amperity
“The simplest way to start with AI isn’t to roll it out across the business. It’s to begin with a single, well-governed use case on top of data you trust.
To use AI safely, customer data needs to be unified in an environment where identity is resolved, permissions are enforced, and data doesn’t need to move every time it’s used. Focus on one dataset or use case, get governance right, then expand.
What’s changing is how teams act on that data. Instead of exporting datasets into multiple systems, they can work directly where the data lives. With solutions like Amperity’s Customer Data Agent, teams can explore data, build audiences, and activate campaigns using natural language, all within a controlled environment.
Amperity is now available in AWS regions in Sydney and Melbourne, helping organisations run within local cloud infrastructure and better align with regional data residency and privacy requirements.
AI raises the bar for control. The organisations that get this right build the foundation first, so they can move fast without creating risk.”
Matthew Owens, Director, Annexa
“Most organisations don’t have a formal policy on which AI tools employees can use with business data – and in the absence of one, people default to whatever is easiest and most familiar. That usually means personal ChatGPT or Claude accounts, which may be using your conversations to train AI models by default.
Switching to commercial accounts on the same platforms turns training off by default and puts data handling under proper governance, and the cost difference is marginal compared to the risk being removed.
That one change removes the most significant data risk without requiring any new technology investment or complex implementation. The next step is connecting AI directly to your business systems rather than exporting data out of them to use AI elsewhere. Platforms like NetSuite now allow AI tools to work directly with live business data inside the system, operating within the same permission controls that govern every other user.
The AI can only see what the user can see and every interaction is logged. The organisations getting this right have been deliberate about which tools are sanctioned for business data and make sure that expectation is understood across the team.”
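That “AI sees only what the user sees” principle can be sketched as a thin gate in front of the record store, reusing the existing access-control list and writing every attempt to an audit log. The class and field names below are hypothetical, not any particular vendor’s API:

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

@dataclass
class RecordStore:
    records: dict[str, str]    # record id -> contents
    acl: dict[str, set[str]]   # user -> record ids they may already read

    def fetch_for_ai(self, user: str, record_id: str) -> str:
        """Serve a record to an AI assistant only if the requesting user
        could already see it, and log every attempt either way."""
        allowed = record_id in self.acl.get(user, set())
        audit_log.info("user=%s record=%s allowed=%s", user, record_id, allowed)
        if not allowed:
            raise PermissionError(f"{user} may not read {record_id}")
        return self.records[record_id]
```

Because the gate checks the same ACL that governs every other user, the AI assistant cannot be used as a side door into records the person asking could not open themselves.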
Heather Marano, Director, AwardGenie
“Many SMEs mistakenly believe that paid subscriptions for generic generative AI tools automatically protect their data. In reality, these platforms often use inputs for model training by default, creating a data leak risk where your proprietary strategy could eventually resurface in a competitor’s AI output.
A robust solution requires a three-pronged approach: strong AI policies, in-house training, and specialist “closed-loop” AI tools. Without all three, employees will inevitably seek out “shadow AI” – using personal AI accounts to stay productive and leaking data despite good intentions.
The most robust AI-powered tools have “privacy by design” as the foundation, ensuring captured data is never used to train the underlying AI model. At AwardGenie, for example, we handle highly sensitive business data – used in award submissions – within a closed-loop workflow to completely mitigate the risk of leaks.
Ultimately, security comes from a proactive approach to both processes and platforms, ensuring your intellectual property remains exactly where it belongs: with you.”
Sarah Richardson, CEO, Australian Loyalty Association
“We work in the loyalty space where AI acts as the foundational layer to enhance programs and customer interactions. Data-driven personalisation is becoming far more sophisticated, with brands using agentic AI and advanced analytics to deliver more relevant experiences at scale.
The Australian Loyalty Association research shows more than 40% of consumers are willing to use AI tools to speed up shopping decisions and identify better value, highlighting how quickly AI-assisted discovery is entering everyday purchasing behaviour.
In loyalty, customer data is one of the most valuable assets a brand holds. To use AI effectively and safely, this treasure trove of data first needs to be unified across touchpoints so brands have a clear, accurate view of each member.
Equally important is the technology stack that supports loyalty. It must promote strong governance and ethical management, ensuring data is accessed responsibly and used only for clearly defined purposes. AI should operate within these guardrails, enhancing personalisation and decision-making without compromising trust.
For leaders, the best way to begin is not by implementing AI everywhere at once, but by starting small and solving real problems. That might mean improving how offers are recommended, helping members discover the best way to redeem points, or identifying behaviours that strengthen loyalty. By focusing on practical use cases and building on well-governed customer data, organisations can introduce AI in a way that improves loyalty experiences while protecting the integrity of the data that powers them.
We teach this very topic at The Australian Loyalty Association’s Artificial Intelligence in Loyalty course. The leaders that learn from us understand how organisations can prepare for the next wave of AI-driven commerce, where autonomous agents increasingly assist consumers with purchasing decisions, recommendations and brand interactions. In this new digital era, loyalty strategies must evolve to remain relevant in a world of AI-mediated customer relationships.”
Mark West, RVP ANZ, Kong
“AI adoption is really picking up speed across APAC, and that’s driving a big surge in the volume of LLM and MCP traffic that organisations have to handle. The issue is that the more these agents start talking to each other without proper governance in place, the tougher it gets to keep visibility, control costs, or stay on top of compliance.
There are also alarming new figures emerging around enterprise data leaks. IBM’s Cost of a Data Breach Report finds 86 per cent of organisations have no visibility into their AI data flows, and 20 per cent of security breaches are now classified as ‘shadow AI incidents’.
For businesses looking to start with AI, the priority should be putting governance and security foundations in place before scaling adoption. Organisations in our region need to think seriously about how they manage AI responsibly from the outset, particularly as systems become more interconnected.
What is increasingly needed is a connectivity layer across AI systems that provides oversight and control. With the right infrastructure in place, teams can govern multi-agent traffic in a single environment, helping improve visibility, strengthen compliance and reduce the risk of data leakage. Kong’s Agent Gateway is an example of this.
Ultimately, adopting an AI control plane today helps prepare organisations for evolving governance requirements while supporting ethical innovation over the next decade.”
Kathryn Giudes, Founder and Managing Director, ORCA Opti
“Your team is already using AI. Do you know what they are sending and saying to commercial AI tools?
Employees are pasting client data into ChatGPT, drafting proposals in Gemini, running code through Claude, on personal accounts your IT team can’t see. Every one of those prompts leaves your network, gets logged on foreign infrastructure, and is governed by Terms of Service your legal team has never reviewed (or agreed to). Most platforms grant themselves broad rights over everything you submit, retain data for months or years, and can change those terms at any time.
This isn’t a future risk. It’s happening now, in every department, on every device.
The simplest way to start using AI safely is to choose technology with guardrails built into the architecture, not bolted on afterwards. That means AI that operates within your existing security perimeter, with clear rules on what data can be queried, with full visibility over how AI is being used across the organisation.
At ORCA Opti, we built AI Guardian to solve exactly this problem. It applies real-time guardrails around every AI interaction, both inside our platform and across external AI tools your people are already using. It detects risks before they become incidents, protects sensitive data before it leaves, and gives leadership a tool to combat shadow AI usage, while allowing the team to remain productive.
No data leaves your security boundary. No prompts are logged to external model providers. No content licences are granted to third parties. Because it runs on Australian-hosted infrastructure, your AI capability remains operational even if international access is disrupted.
AI is powerful. But without guardrails, it’s your biggest unmonitored data channel. With the right architecture, it becomes a secure operational advantage, not a liability.”
Jonathan Reeve, Regional Director ANZ, Eagle Eye
“If you have a loyalty program, begin small with AI and build on strong data foundations.
Rather than treating AI as a large-scale transformation project, businesses should identify a single problem with clear commercial impact and run a focused proof of concept.
At Eagle Eye, we believe in delivering value quickly, building confidence internally, and then expanding. AI is evolving too rapidly for three-year roadmaps, so progress comes from small steps taken at pace.
AI agents have straightforward but demanding requirements from loyalty infrastructure. They need simplicity, meaning machine-readable rules with no hidden terms. They need speed, specifically sub-second response times during peak traffic. And they need instantaneous accuracy, covering real-time balance verification and offer adjudication. How quickly a brand’s loyalty and personalisation systems respond will have a big impact on whether agents can work with them at all.
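To illustrate what a machine-readable rule with no hidden terms might look like, here is a hedged sketch of an offer rule plus an in-process adjudication step. The field names and points maths are invented for illustration and are not any vendor’s actual schema:

```python
from dataclasses import dataclass

# Hypothetical shape of a machine-readable offer rule; every term an
# AI agent needs to reason about is an explicit field.
@dataclass(frozen=True)
class OfferRule:
    offer_id: str
    min_spend_cents: int
    points_multiplier: int

def adjudicate(rule: OfferRule, basket_cents: int, balance: int) -> tuple[bool, int]:
    """Decide, in-process and well under a second, whether the offer
    applies and what the member's new points balance would be."""
    if basket_cents < rule.min_spend_cents:
        return False, balance
    earned = (basket_cents // 100) * rule.points_multiplier
    return True, balance + earned
```

Because the rule is plain structured data with pure-function adjudication, an agent can verify a balance and test an offer against a basket in real time rather than scraping terms from a web page.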
That is why governance and infrastructure are critical. AI tools rely on access to customer and operational data, so organisations must ensure the systems connecting that data are secure, structured and well-governed before scaling use cases. This is particularly important as AI agents increasingly participate in loyalty and commerce.
In this environment, the performance of underlying systems becomes critical. AI agents need machine-readable rules, fast APIs and real-time access to accurate information. Businesses that start with controlled pilots, connect data responsibly and ensure infrastructure can respond in real time will be best positioned to unlock AI’s benefits without exposing sensitive information.”
