Let’s Talk: What should SMEs know before deploying AI across their operations?

In this week’s edition of Let’s Talk, our experts explore the pressing legal questions confronting business owners deploying AI-generated content across their operations.

AI tools promise remarkable efficiency gains for small businesses, with research showing 46 per cent of adopters reporting operational improvements.

But behind the productivity boost lies uncertain legal terrain that could expose businesses to significant risk. From copyright ownership gaps to potential infringement claims and privacy breaches, Australian SME owners face a complex regulatory landscape that remains in flux as the government rejects broad exceptions and moves towards creator compensation frameworks.

Let’s Talk!

Mark Lukie, Director of Solutions Architects for APAC, Barracuda Networks

“The AI risk hiding in plain sight

The most immediate risk is not AI outputs; it is the input. When employees use publicly available AI tools, they can upload internal information, including documents, client data and financial records, in an effort to generate content. Once that data leaves your environment, you have limited control over how it is stored, used or protected. This is shadow AI: the use of AI tools at work without formal approval or oversight. Employees turn to widely available tools to summarise documents, analyse data, or generate content because they are convenient. It is rarely deliberate misuse. The problem is that in doing so, they may share internal information with systems that sit entirely outside your security and compliance frameworks.

The practical response for business leaders is not to block AI use outright. Employees will find a way regardless. Instead, understand which tools are being used and what data is being shared through them. Establish clear policies that distinguish between approved and unapproved tools. Where possible, direct staff toward enterprise-grade options that keep data within your control. Shadow AI is already in your business. The question is whether you know where.”

Billy Loizou, Area Vice President and General Manager, APAC, Amperity

“Every business owner I speak to right now is asking the same question: why isn’t our AI investment delivering the results we expected? The answer usually has less to do with the AI itself and more to do with the customer data behind it.

Most businesses sit on enormous amounts of customer data, but it’s fragmented across systems, channels, and identifiers. A customer who shops in-store, browses online, and interacts through a loyalty program often exists as multiple disconnected records. When AI operates without a complete and trusted understanding of that customer, it doesn’t eliminate inefficiency. It scales it.

The bigger issue is speed. Businesses are getting better at collecting customer signals, but many still struggle to turn those signals into decisions quickly enough to matter. By the time teams manually connect the data, analyse it, and act, the moment has often passed.

That’s where AI is starting to shift the equation. We’re moving toward systems where marketers can describe goals in plain language and have the underlying intelligence and execution handled automatically. But those systems only work when the underlying customer data is connected, governed, and identity-resolved first.

Otherwise, businesses risk more than poor customer experiences. They risk AI making decisions based on incomplete records, outdated preferences, or inaccurate context. That creates operational, legal, and reputational exposure at scale.

The companies that succeed with AI won’t necessarily be the ones using the most tools. They’ll be the ones that can turn trusted customer context into decisions and action in real time.”

Jonathan Reeve, Regional Director, ANZ, Eagle Eye

“Something important is happening in the retail sector right now, and most businesses haven’t felt it yet – but they will.

AI shopping agents are beginning to sit between your customers and their purchase decisions.

Woolworths is already piloting this, upgrading its Olive chatbot from a customer service tool into an AI agent that can build a shopping basket on a customer’s behalf. When an agent is evaluating where to shop and which offers to activate, it doesn’t browse like a human. It sends an API call to your loyalty platform and expects a response in milliseconds. If your system can’t keep up, the agent simply optimises towards the best price. Your program becomes invisible.

Many businesses are already doing personalisation, but it isn’t accessible in real time: the underlying legacy platforms often cannot respond within a few hundred milliseconds. That worked fine when humans were making decisions at their own pace. It is completely inadequate when a machine is deciding in real time.

The good news is that the foundation required for the agentic era – real-time, individualised, API-accessible loyalty – is the same foundation that already delivers better experiences for human shoppers today. You don’t need to build two separate systems, you just need to move. Batch-cycle loyalty programs that can’t verify an offer or a points balance in under half a second are already approaching their use-by date, even if most customers haven’t noticed yet.
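The response-time budget described above can be sketched in a few lines. This is a purely illustrative sketch, not any vendor's actual integration: the lookup function, field names and timings are hypothetical, with a short sleep standing in for a real-time loyalty platform and a timeout standing in for the agent's patience.

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

RESPONSE_BUDGET_S = 0.5  # the sub-half-second window an AI shopping agent allows

def loyalty_lookup(customer_id: str) -> dict:
    """Stand-in for an API call to a loyalty platform (hypothetical endpoint)."""
    time.sleep(0.05)  # a real-time platform answers within tens of milliseconds
    return {"points": 1200, "active_offer": "10% off basket"}

def agent_choose_offer(customer_id: str) -> str:
    """The agent only counts an offer it can verify inside its time budget."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(loyalty_lookup, customer_id)
        try:
            response = future.result(timeout=RESPONSE_BUDGET_S)
            return response["active_offer"]
        except FutureTimeout:
            # Loyalty platform too slow: the agent optimises on price alone
            # and the program effectively becomes invisible
            return "best-price-only"

print(agent_choose_offer("cust-42"))
```

If `loyalty_lookup` were a batch-cycle system taking seconds rather than milliseconds, the timeout branch would fire and the offer would never be considered, which is the invisibility problem the quote describes.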

There’s a legal and data-ownership dimension here that most retailers haven’t considered either. When an AI agent makes a purchase on a customer’s behalf, questions around offer validity, terms and conditions, and promotional liability become significantly more complex. Businesses that haven’t reviewed their program rules, data agreements, and customer consent frameworks with agentic commerce in mind are carrying quiet legal exposure.

The businesses that invest in real-time infrastructure now won’t just be ready for AI agents. They’ll already be winning with human customers.”

Maria Kathopoulis, CEO & Chief Marketing Officer, UNTMD Media

“Most SMEs are using AI content without understanding the legal exposure attached to it.

First, ownership is not always clear. In many jurisdictions, AI-generated content without sufficient human input may not qualify for copyright protection. That means you may not fully own what you are publishing.

Second, liability still sits with you. If AI produces misleading, defamatory, or infringing content, your business is responsible, not the tool.

Third, training data risk matters. AI models are trained on vast datasets, and there is ongoing legal debate around whether outputs can unintentionally replicate copyrighted material. This creates potential exposure, particularly in commercial use.

Fourth, brand risk is immediate. AI content often lacks nuance, accuracy, and compliance awareness. In regulated industries, this can create legal and reputational damage quickly.

The practical approach is simple.

Use AI for acceleration, not autonomy.

Maintain human review layers for anything customer-facing.

Avoid publishing sensitive, legal, or medical claims without verification.

Document how content is created if ownership becomes relevant.

AI is a leverage tool. But without controls, it introduces legal ambiguity most SMEs are not equipped to manage.”

Mareike Niedermeier, Founder and CEO, Sales Savvy Online

“First, ownership issues arise with AI-generated content. In many jurisdictions, this type of content lacks clear copyright protection, meaning the “asset” you think you’re creating may not legally belong to you. It’s essential to review the platform’s terms of service, as they can vary significantly.

Second, be aware of data input risks. The information you provide to an AI tool may be used to train future models, depending on the platform and your settings. If you input client data, confidential strategies, or sensitive business information into a free AI tool without understanding its privacy policy, you are taking an unassessed risk.

Third, consider your liability for the output. If the AI-generated content contains factual, legal, or financial errors, you are responsible for what you publish, not the tool.

Always treat AI output as a starting point rather than a final product. Verify everything before sharing it with clients or posting it on public platforms.”

Sally Branson, Managing Director, Sally Branson Consulting Group

“Every SME owner using AI-generated content needs to understand three things before they go any further: who owns what you create with it, what legal exposure you’re carrying, and whether your stakeholders would be surprised to learn you’d used it.

That last one is where most businesses underestimate the risk.

Undisclosed AI use in customer communications, marketing, or public-facing content isn’t just a legal question. It’s a trust question. And trust, once lost, is expensive to rebuild.

The practical answer is straightforward: be transparent about how and where you’re using AI in your communications. Not with a disclaimer buried in fine print, but as a clear, consistent position your team understands and your stakeholders can rely on.

The second risk is more subtle. AI content used without proper human oversight can misrepresent your brand voice, your values, or your expertise. When that happens at scale, the damage compounds quickly.

Use AI as a tool. Keep a human in the loop for anything that represents your business publicly. Disclose where it’s material. And make sure whoever is generating your content on your behalf, in-house or outsourced, understands the standards you’re held to.”

Aaron Kay, Founder, NxxGen Studio

“For small and medium-sized business owners, AI-generated content can truly be a shortcut to faster marketing, lower production costs and more creative output. But before using it in a business, there are three things every owner should understand: ownership, originality and responsibility.

The first mistake is assuming that because your business generated an image, video, article or campaign asset using AI, you automatically have full ownership of it. The reality is more complex. Different AI platforms have different terms around commercial use, licensing and reuse of generated outputs. SMEs should always check whether the content can legally be used in advertising, ecommerce, social media or paid campaigns. Often, AI image-generation tools do not permit commercial use of their outputs.

The second issue is originality. AI tools are trained on large datasets, and while outputs may look new, there can still be risks around resemblance to existing creative work, brand assets, people, products or copyrighted material.

This matters especially when AI is used for visuals, fashion, product imagery or branded campaigns. At NxxGen Studio, we generate all of our content using detailed workflow processes to ensure it complies with likeness and IP regulations, and so that we can commercialise our own original IP and license it to our clients.

The third point is accountability. If an AI-generated asset misleads customers, uses someone’s likeness without permission, copies a competitor’s style too closely or makes inaccurate claims, the business using that content is still responsible. Brands should not simply copy other businesses; instead, take key elements as inspiration and add your own flair, touch and aesthetics to ensure your outputs are original.

At NxxGen Studio we deliver high-quality, professional-grade image and video assets for fashion brands, saving them up to 90 per cent in costs while fast-tracking content delivery from weeks to within 72 hours.”

Jon Michail, Group CEO, Image Group International

“AI doesn’t own the risk… You do.

The moment it goes live, it becomes your claim, your judgment, and your liability.

The real issue isn’t legality. It’s whether your reputation can carry it.

More importantly, liability does not transfer to the tool. If the content is misleading, inaccurate, or infringes on someone else’s rights, the accountability lies with the business using it.

But in our experience with leaders, the greater risk is not legal; it’s reputational authority. Generic or poorly validated AI content signals a lack of judgment and can erode trust quickly. In markets where credibility drives decisions, that damage compounds over time.

AI should be treated as a drafting tool, not an authority. Every output requires human oversight (preferably with lived experience), verification, and alignment with your brand standards.

The real question is not whether you can use AI. It’s whether you and your reputation are prepared to stand behind what it produces.”

David Dahdah, Head of Client Success and AI Innovation, The Big Smoke

“In most countries, AI-generated content is not copyrightable. No human made it, so how could anyone hold copyright? That means a competitor can lift your AI-written landing page and you can do precisely nothing about it.

The infringement risk runs both ways. AI models train on copyrighted material and can reproduce it without flagging it. Run AI copy through a plagiarism checker before it goes live. For images, use platforms that licence their training data and actually offer indemnification.

If you deliver AI-generated work to clients, check your contracts. Most include originality warranties. Using AI without disclosure could put you in breach, and your professional indemnity insurance almost certainly has not caught up with this yet.

Australia does not have an AI-specific law yet, but the Australian Consumer Law already covers you. Misleading or deceptive conduct is misleading or deceptive conduct, regardless of whether a human or a model wrote it. The ACCC has flagged AI-generated fake reviews and fabricated claims as a priority enforcement area. Mandatory guardrails are coming too, and the federal government’s proposals are in progress. Being upfront now costs nothing. Being caught hiding it later will.

Use AI for drafts. Apply your own judgement. Keep records. Stop seeing it as something to hide.”

Vivienne Winborne, Senior Marketing Manager, One Little Seed

“My main concerns about AI are ethical issues, data privacy, and the risk of dying of boredom from the AI-generated slop.

Starting with the ethical concerns, many AI algorithms become “black boxes,” where users have limited understanding of how information is compiled and which sources are used. It can lead to built-in bias in decision-making (for example, hiring decisions), but can also make it hard for your company to explain outcomes or decisions to your customers.

Data privacy is another huge issue, with a Shadow AI Report estimating that 98% of organisations have team members using unsanctioned AI, and that around 70% are aware of employees sharing sensitive data with AI tools.

Lastly, too many businesses are using it to generate reams of generic, boring content. With 60% of Google searches ending without a click, the information people consume is being shaped by the summaries that AI tools serve up. And since AI prioritises unique insights, authority and consistent, easy-to-interpret information, boring content won’t cut it.

My biggest tips for business owners are to have a firm plan, an AI policy, and to seek expert support if it isn’t available internally.”

Melissa Laurie, CEO, Oysterly Media

“Australian consumers no longer take content at face value. Oysterly Media’s recent research of 1,200 Australians found 82.5 per cent now cross-check before they buy, and more than 8 in 10 want AI generated images and video clearly labelled.

For SMEs using AI to produce blogs, social posts and product imagery, there is a trust risk as well as a legal one.

For example, Australian copyright law only protects material generated by humans. Without provable human authorship, you may struggle to assert your rights against a competitor copying your content. The flip side of the same coin is that you can also inherit someone else’s infringement.

AI tools are trained on huge datasets, some of it copyrighted, meaning you could unknowingly publish content that belongs to someone else. Liability sits with the business that publishes, not the platform.

That said, AI generated imagery can offer genuine value to SMEs in the right circumstances. Product shoots are expensive, and sourcing high quality images for niche products, seasonal campaigns or concept visuals is not always practical. AI can meaningfully fill that gap.

The key is transparency. Businesses that label AI generated content clearly meet consumer expectations and build trust that will convert browsers into buyers.

The smartest SMEs use AI as a tool, not as a shortcut, and they keep humans in the loop.”

Michael Russell, Managing Director, Finwave Finance

“AI-generated content is moving faster than Australian law, and most SME owners are operating on assumptions that have not been tested yet.

The first thing to understand is ownership. Under Australian copyright law, copyright requires a human author. Content generated purely by AI with no meaningful human creative input sits in uncertain territory. That does not mean you cannot use it, but it does mean you may not own it in the way you think you do, and neither might anyone else.

The second issue is accuracy and liability. If AI-generated content published under your business name contains false claims, misleading comparisons, or errors in regulated areas like finance, health, or legal services, the ACCC and relevant regulators do not care how the content was produced. You signed off on it. That makes it your problem.

The third is disclosure. While Australia has no blanket AI labelling law yet, consumer protection obligations around misleading conduct apply. If AI-generated content creates a false impression, the origin is unlikely to be a defence.

Practical advice: treat AI output the same way you would treat copy from a junior contractor. Review it, fact-check it, and take ownership of it before it goes anywhere near a customer.

The tool is not the risk. Publishing without thinking is.”

Steven Richardson, Co-Founder, SILQ

“Four things SME and legal practice owners must understand before using AI-generated content.

1. Content ownership is not guaranteed. Content created solely by AI may not belong to you, meaning competitors can reproduce it. Australian Copyright Law requires a human author, so producing and sharing AI content without human review can raise issues.

2. Professional duty applies. AI will affect lawyers in two ways: While it can lead to faster research and quicker document turnaround, anything produced still requires qualified review. A false citation or outdated statute remains the practitioner’s responsibility.

3. Confidentiality. Content produced by AI is in the public domain. For lawyers, this could breach legal privilege and raise concerns under the Privacy Act.

4. Risks of copyright infringement or humiliation. AI trains on datasets that may include copyrighted or made-up material. If the output is poor, the legal exposure sits with you, not AI.

Key takeaway: Process is protection. Document how AI is being used, who reviews it, and how final decisions are made. The paper trail matters if a dispute arises.

SILQ legal practice management software automates processes to streamline legal administration. When exploring AI, we encourage clients to be guided by their Law Society.”

Natalie Ashes, CEO, WorkClone

“Most SME owners assume that if they paid for content, they own it. With AI, that assumption can be wrong.

WorkClone’s latest survey on AI in the Workplace found that 90% of employees now use AI tools, with 74% doing so weekly. Much of that usage is generating content, marketing copy, customer communications and product descriptions. But ownership of AI-generated content is genuinely contested. In most jurisdictions, copyright does not automatically attach to content produced by AI without meaningful human authorship. Content your business paid to create may not legally belong to you.

There is a second exposure most owners haven’t considered. The ACCC has signalled that deploying AI-generated content without disclosure, including fake reviews or AI-written testimonials, may constitute misleading conduct under the Australian Consumer Law.

The fix is not to avoid AI. It is to use it deliberately. That means understanding what your tool’s terms actually say about ownership, ensuring a human meaningfully shapes the output, and taking legal advice before deploying AI content in customer-facing contexts where authenticity would reasonably be assumed.

AI moves fast. The legal frameworks are catching up. The gap between the two is where liability quietly accumulates.”

Christopher Bruce, Special Counsel, DBH Lawyers

“We are seeing AI tools increasingly being adopted by SMEs to improve efficiency. While there can be significant benefits, the adoption of AI also presents its own unique legal risks and challenges.

Every AI platform will have its own terms of service. Those terms will typically, among other things, indicate whether the platform can be used for commercial purposes, who has rights over the content generated, and what limitations on liability apply. It is important to be familiar with these terms if AI is being used on a commercial basis, where the consequences can be significant.

AI content generally doesn’t have the benefit of copyright protection unless there is significant human contribution. This means that it will likely be difficult to prevent third parties (and in particular competitors) from copying such content.

An SME is normally liable for the consequences of its AI-generated output. Care needs to be exercised so as not to make inaccurate claims, infringe trademarks, mislead, breach confidentiality or publish harmful content.

Used responsibly, AI is a helpful tool, but in a commercial context, SMEs need to carefully consider these issues and have in place appropriate governance and review processes to mitigate legal risks.”

Tracey Mylecharane, Director, TM Legal Atelier

“Three critical things SME owners must understand about AI-generated content.

1. No human author = no copyright protection: Under the Copyright Act 1968 (Cth), only original works created by humans are protected. If AI generates your marketing materials or website copy without substantial human authorship, you likely have no copyright protection, which means you may have no recourse if your material is copied.

2. Training data risks: Many AI tools learn from existing content, and that content may be protected by copyright. Using AI-generated content could inadvertently infringe on someone else’s intellectual property, exposing your business to claims.

3. Contract implications: If your contracts promise that you will deliver original work or grant copyright licenses to clients to use your work, AI-generated content could breach those terms because you are not producing original work. Your contracts need to address whether AI tools are used and how ownership is handled.

The takeaway: Use AI as a tool, not a replacement for human creativity. Always review, edit, and add your expertise. And update your contracts and terms of service to address AI use explicitly. Prevention beats disputes every time.”

Dunja Lewis, Co-Founder and Chief Innovation Officer, AIUC Global

“Small and medium-sized businesses are rapidly adopting AI, whether leadership has formally approved it or not. Staff are already using AI to write emails, generate proposals, review resumes, summarise documents and analyse data – often through tools the organisation has never assessed or approved.

At the same time, a growing industry of AI consultants and online “experts” are pushing governance frameworks, policies and compliance templates as the solution. While governance remains important, many organisations are discovering that policies alone are no longer enough. Employees frequently bypass restrictive or impractical controls when operational pressure, productivity demands and convenience outweigh governance processes.

The result is a growing gap between documented AI governance and actual AI usage inside organisations.

This matters because customer and stakeholder trust is eroding quickly. Data breaches, confidential information leakage and inaccurate AI-generated outputs linked to poorly governed AI usage are becoming increasingly visible – and increasingly reported by media outlets.

This is why cultural change, open conversations about responsible AI usage and practical transparency measures are becoming critical for SMEs. Frameworks such as the AI Usage Classifications developed by AIUC Global help organisations create visibility, support effective review processes and encourage responsible AI usage without driving it underground.”

Aparna Watal, Partner, Halfords IP

“GenAI is becoming a standard business tool for marketing copy, imagery, social media, even product descriptions. But most business owners are adopting it without understanding the legal minefield underneath.

The first thing to grasp is that ownership of AI-generated content is not settled. Under Australian copyright law, protection generally requires human authorship. If a piece of content is produced with minimal human direction or agency, like a quick prompt and a generated image, it may attract little to no copyright protection. That means competitors could freely copy it, and you’d have limited recourse.

But here’s where it gets more complicated. The AI tools generating that content may themselves have been trained on copyrighted works, such as images, articles, designs, often without the rights holders’ knowledge or consent. Australia is currently in active policy debate about whether a ‘text and data mining’ exception should be introduced to legitimise this kind of training. Until that’s resolved, outputs that are substantially similar to protected works could expose your business to infringement claims you didn’t see coming.

There’s also a practical brand risk. If you’re using AI-generated content to build your business identity, logos, brand imagery, campaign visuals, and that content lacks clear ownership or infringes on someone else’s rights, your IP foundations are on shaky ground. I’ve seen how quickly brand disputes escalate; the cost of fixing the problem after the fact is always far greater than getting it right from the start.

Treat AI-generated content like any other asset. Check your platform’s terms, document your human input and editorial decisions, and get your trade marks registered so your brand remains distinctly yours, regardless of how the content was made.”

Kathryn Giudes, Founder and Managing Director, ORCA Opti

“Most business owners ask the wrong question about AI. They ask “what can it do?” before asking “what does it do with my data?”

Every prompt with the major commercial AI models does two things at once: it generates a response, and it sends your data out of your organisation. Under consumer and Pro-tier terms, which cover most commercial AI use, that data is logged, sent to the US, retained for years, reviewed by humans, and used by default to train the next model. It cannot be fully deleted. For US-headquartered providers, your data sits under US legal jurisdiction, accessible to foreign governments via the CLOUD Act.

The IP question compounds this. Australian copyright requires human authorship. Content generated entirely by AI may attract no protection, meaning competitors can use it freely. Unless you’ve added genuine human value, you cannot prove it’s yours. And because AI models were trained on vast amounts of third-party material, that same output may infringe someone else’s rights.

Then there’s the attack surface. A chatbot, co-pilot or automation tool creates structural vulnerabilities that no contract or terms of use address. AI can be tricked into exfiltrating your data through hidden instructions buried in emails, documents, or web pages. Major vendors have classified some of these as unfixable by design.

The businesses getting AI right treat it with the same rigour they’d apply to any system handling their most sensitive information. Because that’s exactly what it is.

At ORCA Opti, we built AI guardian for this reason: a layer that keeps your logic, knowledge, agents and workflows inside your security boundary, while still leveraging the power of the major models. Your data stays local. Nothing is logged.”

Sumi Roy, Content Writer, Ecommerceally

“Certainly, AI-generated works can offer both efficiency and scalability, but SME owners need a clear path to ownership of the work created and must identify any liabilities relating to infringement. Many AI platforms distribute generated content under their own licensing agreements, which do not guarantee ownership.

Critically, the legal liability lies with the business itself rather than the supplying AI platform. If AI-generated content infringes intellectual property laws or is misleading, the legal liabilities are those of the end user, not the content supplier.

Data protection is another important issue. Depending on the platform’s policies, an AI tool may store user-supplied information for model training or process it to generate other content. This can trigger data privacy, confidentiality and regulatory compliance issues, particularly when businesses handle sensitive customer, financial or commercial data.

Therefore, businesses should review the terms associated with AI tools and apply human judgment before using AI-generated content as a finished product. With internal policies and review processes in place, SMEs can use AI responsibly without exposing themselves to legal complications.”

Matthew Owens, Director, Annexa

“What’s changing right now in AI is how it’s starting to show up inside the systems that finance teams rely on every day. That’s where it becomes more relevant: it’s no longer sitting outside the business as a separate tool, but starting to influence the workflows and decisions happening inside core systems.

The right foundations enable speed, visibility, and efficiency at scale. The wrong ones cause operational drag and vulnerabilities that are expensive to diagnose and harder to unwind.

At Annexa, a big part of our work is helping finance leaders identify where AI can genuinely improve business processes and how to introduce it safely into operational systems. Here are common themes that tend to catch people out with AI-generated content.

The distinction between consumer and commercial AI accounts matters more than most teams realise. Consumer accounts on platforms like Claude Free, Pro and Max and standard ChatGPT accounts can now be set to train on your conversations unless you actively opt out. Data from those accounts can be retained for up to five years if training is enabled. This applies even to paid consumer plans – Claude Pro is a consumer account, not a commercial one.

There’s also what’s sometimes called shadow AI: if an employee is using a personal Claude Pro or free ChatGPT account to analyse sensitive data, that data may be feeding model training without anyone in the organisation being aware.

The right safeguard is to establish a clear organisational policy that defines which AI tools are approved for use with finance data and ensures those tools are accessed via enterprise or API pathways – not personal accounts.
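A policy like this can be reduced to a small, enforceable check. The sketch below is purely illustrative (the tool names and pathway labels are hypothetical, not any vendor's terminology): an allowlist under which finance data may only flow through enterprise or API pathways, never personal accounts.

```python
# Illustrative allowlist: (tool, pathway) pairs approved for finance data.
# Tool and pathway labels are hypothetical examples, not vendor terms.
APPROVED_FOR_FINANCE_DATA = {
    ("claude", "enterprise"),
    ("claude", "api"),
    ("chatgpt", "enterprise"),
    ("chatgpt", "api"),
}

def is_approved(tool: str, pathway: str) -> bool:
    """True only when the tool is accessed via an approved pathway."""
    return (tool.lower(), pathway.lower()) in APPROVED_FOR_FINANCE_DATA

# A personal Claude Pro account is a consumer pathway, so it fails the check
print(is_approved("Claude", "API"))           # True
print(is_approved("Claude", "consumer-pro"))  # False
```

The value of writing the rule down this way is that it can be wired into request routing or audit tooling, rather than living only in a policy document.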

These policies change – sometimes with limited notice. Building a periodic review of your approved AI tools and their current data terms into your governance calendar is a simple step that keeps your organisation ahead of the risk.

As embedded AI shifts from emerging capability to standard infrastructure, the businesses best placed to take advantage will be those that already have their data, systems, and integrations in order.”

Tracy Moore, General Manager, Data and AI, MYOB

“What many SMEs don’t realise about AI-generated content is that using it does not automatically mean you own it – or that it’s legally safe to publish.

As AI becomes part of everyday business life, from marketing copy to proposals, many owners are moving faster than ever, which can make it tough to stop and consider the risks. In fact, MYOB’s latest Business Monitor shows 84 per cent of SMEs don’t have clearly documented responsible AI use policies in place.

There are three things every SME should know about safely using AI for business.

  • First, you may not own what you publish. Platform terms vary, so businesses should understand them before building marketing or customer assets around AI-generated content.
  • Second, confirm accuracy. AI produces polished, plausible copy, but it does not check facts. Every output should be treated as a draft: verify facts, review tone and check originality before publishing.
  • Third, treat the prompt like you would an email. Financial data, customer information and commercially sensitive material should never be pasted into public AI tools.

Importantly, SMEs don’t have to navigate this alone. The National AI Centre provides practical guidance, including template policies and assessment tools, to help businesses adopt AI responsibly. At MYOB, we’ve aligned our approach to this guidance, embedding governance and safety into how AI features are designed and delivered, so businesses can use them with greater confidence.”

Yajush Gupta

Yajush writes for Dynamic Business and previously covered business news at Reuters.