Australia’s Federal Court now has formal rules on how AI can be used in legal proceedings. If you ever find yourself in a dispute, these rules apply to you too, not just the lawyers.
What’s happening: Chief Justice Debra Mortimer of the Federal Court of Australia issued a new Generative AI Practice Note, setting out rules for how AI can and cannot be used in court proceedings. The rules apply to everyone involved in Federal Court matters, including litigants without legal representation, witnesses, and third parties required to produce documents.
Why this matters: If your business ever ends up in a legal dispute, these obligations apply to you directly, not just your lawyer. And more broadly, the practice note signals that AI accountability is becoming a formal standard in Australia’s legal system.
Australia’s Federal Court has spent the better part of two years watching AI enter the legal system, sometimes helpfully, sometimes badly.
Courts in Australia and overseas have encountered cases where AI-generated documents included fictional case citations, fabricated legal sources, and confident but entirely wrong statements of law. The Federal Court has now decided it has seen enough to act.
Last week, Chief Justice Debra Mortimer issued the Generative AI Practice Note, GPN-AI, setting out the court’s expectations for how AI tools can and cannot be used in Federal Court proceedings. The practice note applies to everyone who appears before or files documents with the court. That includes litigants who do not have lawyers, witnesses, and third parties required to produce documents under subpoena or court order. It is not a ban on AI. It is a framework for using it responsibly, with consequences for those who do not.
What the practice note says
The practice note is built around three core expectations. Anyone using generative AI must have a basic understanding of what it can and cannot do. Any use of AI must not undermine the administration of justice. And if the court requires it, a person must disclose whether and how AI was used in preparing documents for a proceeding.
For documents filed in proceedings, whether pleadings, written submissions, or lists of documents, the person responsible for preparing the document must be able to confirm that the facts stated can actually be proved, that legal authorities cited genuinely exist and support the stated proposition, that evidence cited is real and likely to be admissible, and that chronologies are accurate. If AI was used in preparing those documents, the responsible person must have checked all of that themselves. The AI does not carry the responsibility. The person does.
For affidavits, witness statements, and expert reports, the rules are tighter still. A witness who makes a statement is representing that the document reflects their own recollection, knowledge, and experience. An expert providing a report has an overriding duty to assist the court impartially and must offer their own opinion and reasoning. Where AI was used to summarise or analyse information that a witness relies on, or to create images, recordings, or other materials presented to the court, that use must be disclosed in the body of the document.
Nandan Subramaniam, Principal of Zed Law, a specialist corporate and commercial law firm, welcomes the practice note but is direct about what it does and does not change. “The Federal Court isn’t anti-AI; it’s anti-laziness,” he said. “There’s a meaningful difference between a lawyer who uses AI as a thinking tool and one who uses it as a shortcut around thinking. The new rules don’t change anything for firms that were already doing this properly. For the rest, this is a wake-up call that was a long time coming.”
Where the risks are
The practice note is explicit about what can go wrong. Generative AI may produce results that are not accurate, entirely fictitious, or plainly wrong. It may generate case citations that do not exist, provide incorrect or misleading statements of law, introduce factual errors, and then confirm that all of it is accurate if asked. The practice note describes the presentation of false or inaccurate information to the court as unacceptable, and Chief Justice Mortimer notes it is likely to frustrate the just resolution of proceedings. Using AI in ways that contradict the practice note can attract adverse costs orders and, for lawyers, may put them in breach of their professional obligations.
For SME owners who find themselves in a legal dispute, the practical implication is clear. If you or your team use AI to help prepare documents for court proceedings, you cannot simply file what the AI produces. You are responsible for verifying every fact, every citation, and every claim in those documents. The court will hold you to that standard whether you have a lawyer or not.
The confidentiality problem
The practice note raises a separate concern that is particularly relevant for businesses. When confidential information is entered into a generally accessible AI tool, it may become available to other people. Users may not know where that information is stored, how it is used, or who can access it. The court identifies several categories of information that may be legally restricted, including information subject to court orders on confidentiality or suppression, legally privileged material, and information that is otherwise private or confidential without the consent of the relevant party.
The warning is direct: entering restricted information into an AI tool in a way that does not accord with the obligations attached to it must not occur. The practice note also warns that using a closed or enterprise AI tool does not automatically solve the problem, because outputs from that tool could still be used in ways that breach confidentiality obligations if not handled carefully.
For SME owners who use AI tools in their day-to-day operations and also find themselves in legal proceedings, this is a specific risk to be aware of. Information that comes up in a legal context carries obligations that do not disappear just because it is convenient to paste it into a chat interface.
What SMEs should take from this
Subramaniam’s broader concern is not that the rules go too far, but that how they are implemented matters enormously. “We need to be careful that accountability doesn’t tip into discouragement,” he said. “The courts should be actively embracing AI where it helps lawyers do better work faster and at lower cost. That’s not a threat to the administration of justice. It’s entirely consistent with it. The goal should be clear standards and transparency, not a chilling effect on adoption.”
For SME owners, that tension is worth sitting with. The Federal Court's practice note does not say "do not use AI". It says: use it responsibly, know its limits, verify its outputs, and disclose its use when required. Those are reasonable standards, and they apply beyond the courtroom too. Any business using AI to produce documents, summarise information, or support decision-making is operating in a space where the same principles of accuracy, accountability, and transparency are becoming baseline expectations.
The practice note will be reviewed and updated as the technology evolves. The Federal Court has also indicated it plans to hold a symposium in the coming months on the challenges and benefits of generative AI in proceedings. For now, the rules are in effect, and anyone who appears before the Federal Court needs to understand them.
