Anish Sinha, COO and cofounder of upcover

Small businesses face new insurance options as deepfake fraud surges 1,600%

Despite 89% confidence in spotting fakes, Australians correctly identify AI-generated scams only 42% of the time

What’s happening: Australian businesses can now access specialist insurance coverage addressing deepfake fraud, as new research reveals only 42% accuracy in detecting AI-generated scams.

Why this matters: Nearly nine in ten Australians are at least somewhat confident they can spot an AI-generated scam, but new research from CommBank shows they correctly distinguish between real and AI-generated images only 42% of the time.

Australian businesses now face a deepfake detection crisis, with new research revealing most people cannot identify AI-generated fraud even when they believe they can.

Australians correctly distinguish between real and AI-generated images only 42% of the time, below the 50% expected from random guessing, according to research from Commonwealth Bank involving 1,988 respondents nationally.

The findings expose a dangerous gap between confidence and capability. Nearly nine in ten Australians (89%) are at least somewhat confident they can spot an AI-generated scam, yet their actual detection rate falls below even random guessing.

Trust exploited

The technology works because it targets fundamental human instincts, according to Professor Monica Whitty, Professor of Human Factors in Cyber Security at Monash University.

“Humans tend to trust faces, voices and familiar people. Deepfakes take advantage of that instinct,” Professor Whitty said.

Less than half of Australians (42%) are familiar with AI-enhanced scams, despite deepfakes exploding across social media platforms, websites, messaging apps, and telecommunication channels.

The attacks have accelerated rapidly. Around one in four Australians (27%) say they have witnessed a deepfake scam in the past year, with investment scams representing 59% of incidents, business email compromise 40%, and relationship scams 38%.

For small businesses, the exposure is particularly acute. Only around four in ten (41%) small business owners are familiar with deepfake scams, even though half (50%) of the deepfake scam attempts small businesses reported arrived by email.

Detection fails

Age provides little protection against the technology. Australians aged over 65 are only 6% less accurate than younger respondents, showing that deepfakes can fool people of all ages.

The sophistication continues to advance. More than 50% of fraud cases in Australia now involve AI and deepfakes, with Australians reporting about $119 million in scam-related losses in the first four months of 2025 alone.

“The findings reveal a growing gap between confidence and reality, and that gap is exactly what scammers are looking to exploit as they increasingly turn to AI to target everyday Australians and small businesses,” said James Roberts, General Manager of Group Fraud at CommBank.

The human factor compounds the vulnerability. Only one in five (20%) Australians say they have set up a safe word with their loved ones to confirm it’s really them, despite nearly three-quarters of Australians (74%) agreeing that they should.

Professor Whitty noted that many Australians don’t talk openly about deepfake scams, with only a third discussing AI-generated scams with their relatives or friends.

Coverage arrives

Against this backdrop, specialist insurance protection addressing deepfake attacks has become available for Australian businesses through providers like upcover, which recently launched Coalition’s Deepfake Response Endorsement for eligible cyber insurance policies.

The coverage includes deepfake forensic analysis, legal support to remove fraudulent content from online platforms, and crisis communications assistance. “We’ve already seen this with larger organisations, but a convincing deepfake of a founder or CEO can wipe out trust with customers, investors or staff overnight,” said Anish Sinha, COO and cofounder of upcover.

The timing reflects a wider insurance gap. The Insurance Council of Australia states that 20% of Australian SMEs and 35 to 70% of larger businesses have standalone cyber insurance. “Deepfakes are the new frontier; they target trust, the most valuable asset that any business has,” said Connor McKay, Head of Business Development, Australia, at Coalition.

Protection steps

Roberts emphasised that fundamental security practices remain effective even as technology evolves. “The good news is that the steps that keep people safe don’t need to evolve at the same speed as the technology does. Deepfakes might be new, but the same tried and tested habits remain your best defence, even against AI-powered scams,” Roberts said.

For businesses implementing protection measures, cybersecurity experts recommend never relying solely on audio or visual cues for authentication. Instead, require secondary verification for financial transactions, such as callback confirmations or digital signatures.

Organisations should develop a robust cybersecurity culture, encouraging employees to practise good cybersecurity hygiene and act as human firewalls across all business activities, including work-from-home arrangements, according to guidance from KPMG.

Security frameworks should replace implicit trust with continuous validation by implementing a zero-trust model, multi-factor authentication, behavioural biometrics, single sign-on, password management, and privileged access management.

Verification protocols should extend beyond technology. Only 55% of small businesses had cross-checked supplier payment details in the last six months, despite email remaining the primary delivery method for deepfake attempts.

Train employees to confirm wire-transfer requests via direct phone calls to pre-established numbers. At home, establish a family password or secret question to help verify urgent requests from supposed relatives; at work, confirm unusual executive requests by calling the person directly on their known contact number.

Security awareness training should include practical detection techniques. When verifying audio, listen for longer-than-usual pauses between words and sentences, as the person's voice may also sound flat and emotionless. For video calls, watch out for unnatural eye movements, patchy skin tones, odd lighting, awkward postures or body movements, and poor lip syncing.

“Scammers are using AI to create fake investment videos, deepfake celebrities, and even voice and text clones of loved ones, senior executives and government officials. Talking openly about this technology is one of the easiest ways to help stay ahead of it,” Roberts added.

Yajush Gupta

Yajush writes for Dynamic Business and previously covered business news at Reuters.
