Zed Law founder explains what OpenAI’s legal advice policy means for your business

OpenAI didn’t ban legal advice from ChatGPT. Zed Law founder Ryan Zahrai explains what actually changed and why the legal AI vacuum matters for Australian businesses.

There’s been a lot of noise online about ChatGPT being “banned” from giving legal and health advice. It hasn’t been.

What OpenAI actually did was tighten the rules so the platform can’t hand out personalised legal or medical advice unless there’s a qualified professional involved. No tailored legal documents, no specific recommendations, no acting as your lawyer or doctor. The system can still explain concepts, unpack jargon, or help you prep for meetings — it just won’t tell you whether to sue someone, sack someone, or sign something.

And that’s exactly how it should be.

Why this matters more than most people think

If you’re running a business, you’ve probably used AI to draft something: a policy, a clause, a contract template. It’s fast, it’s confident, and it sounds right. But sounding right and being right are two very different things.

The reality is that general-purpose AI has a hallucination rate of roughly 15 to 30 per cent. That’s fine if you’re writing marketing copy or brainstorming ideas. It’s not fine when you’re relying on it to interpret workplace law or corporate governance duties.

I’ve seen the consequences firsthand. One client came to me convinced they could lodge a general protections claim for what was basically a breach of contract. Another believed they could claim compensation from a company because it failed to tick certain governance boxes. ChatGPT had told them both they could. They were wrong. And the problem wasn’t that AI gave them advice — it’s that they believed it.

The OpenAI policy change isn’t a crackdown; it’s a correction. The company basically said, “If the job needs a licence, get someone licensed involved.” That’s not anti-innovation; it’s common sense.

The social-media storm missed the point

Most of the online commentary missed the nuance. People saw headlines about “bans” and “restrictions” and assumed OpenAI had backflipped on its mission. In reality, this update keeps AI useful while protecting users from themselves.

You can still ask ChatGPT to explain a legal principle or summarise a regulation in plain English, and it will do that. What it won’t do is pretend to know your business, your employees, or your contracts. That boundary is healthy, as it reduces risk, helps users stay compliant, and reinforces the idea that AI is a tool, not a replacement.

The legal-AI vacuum

Of course, where there’s a gap, there’s opportunity.

OpenAI’s move created what I call a legal-AI vacuum. Businesses still want the speed and efficiency of AI, but they also need the reliability and protection that come from human oversight. The DIY model doesn’t cut it anymore, and that’s where responsible legal-AI platforms come in.

At Zed Law, we’ve worked closely with Veraty to design a model that fills this space responsibly. The technology handles the volume — generating, comparing, summarising — and where the stakes are high, a lawyer validates the output before it’s used. Clients are producing contracts faster, saving serious money, and staying compliant in the process, while law firms are completing work sooner, moving away from time billing towards value pricing, and improving their margins.

Across the board, Veraty’s law firm users are seeing an uplift in recurring revenue, and its business users are saving money year on year by removing rework and avoiding disputes. Those results come from connecting AI to the right advice at the right time, not from replacing legal advice.

Human in the loop is the blueprint

If the early phase of AI was about automation, the next phase is about integration. The real value will sit in the systems that combine automation with expertise.

Having a “qualified person in the loop” isn’t red tape. It’s what keeps innovation sustainable. It’s what ensures that AI doesn’t just move fast, but moves correctly. And it’s what turns tools like Veraty from clever software into genuine business infrastructure.

Because when you’re dealing with contracts, employment law, governance, or compliance, speed without accuracy is just risk wearing running shoes.

Australia’s edge in compliant AI

Globally, regulators are catching up fast. The EU’s AI Act, the UK’s AI white paper, and draft frameworks here in Australia are all moving in the same direction: high-risk AI domains such as legal, medical, and government services must be supervised by qualified humans.

That alignment is good news. It sets a clear expectation that responsible innovation means designing smarter systems around people – not removing them.

Australia’s legal tech scene has quietly emerged as a leader here. We were building compliance-first platforms before it was fashionable. The advantage is agility – we’re smaller, we can adapt faster, and we don’t have to unwind legacy models that were built for volume billing instead of verified outcomes.

What this means for business operators

If you’re a founder, HR manager, or corporate lead, this change isn’t something to panic over. It’s an opportunity to get your processes right.

Let AI handle the grunt work: summarising, comparing, preparing. But don’t let it decide strategy, interpret rights, or make calls that could end up in court. Loop in professionals early and make verification part of your workflow.

AI is at its best when it’s used as a co-pilot, not a captain.

And if you’re thinking “we don’t have time for that”, I’d go as far as to say that you probably don’t have time for the fallout from getting it wrong, either.

The bottom line

OpenAI’s policy is a guardrail, not an obstacle. It reminds us that the power of AI doesn’t come from pretending to be human; it comes from working alongside humans who know where the legal lines are drawn.

So, use AI responsibly; let it draft, compare, and brainstorm. Then get a lawyer to confirm the rest. That’s not slowing down; that’s how you build trust at scale.

AI is powerful when it knows its limits — and when you know yours. The smartest operators aren’t ditching their lawyers; they’re building them into their AI workflows.

Ryan Zahrai