General
March 27, 2026

AI as a confidante? Legal privilege and the ever-increasing use of AI

Widely available AI tools like ChatGPT, Gemini and Claude have become go-to sources of guidance and advice. These platforms readily offer suggestions on almost any topic. Many people now turn to them not just for everyday queries, but for more complex problems. Potential legal claims, potential legal liability and contract negotiations are topics that often fall into this category.

When seeking advice from a lawyer on these topics, the confidential communications that take place are usually protected by privilege – meaning clients can refuse to disclose those communications in the event of legal proceedings.

Interacting with an AI platform about these topics may feel private, even though users are generally aware that their conversations may be stored, accessed, or used to train the underlying models. Yet many people are candid in their conversations with AI tools and share details that they consider to be private and confidential. Could records of those conversations be required to be disclosed in legal proceedings?

In the United States, it appears the answer is “Yes”. The recent decision of the United States District Court (Southern District of New York) in United States v Heppner considered this issue and found that communications with an AI platform were not privileged, and so were not protected from disclosure and use in legal proceedings.

Legal privilege: a brief overview

Privilege is a form of legal protection. It covers communications where clients seek and receive legal advice from their legal adviser (known in New Zealand as “legal advice privilege”). When a communication is protected by legal advice privilege, it doesn’t need to be disclosed in legal proceedings. Legal advice privilege allows clients to speak freely with their legal advisers, knowing those communications won’t be revealed.

Another type of privilege in New Zealand is “litigation privilege”, which applies to material prepared by clients, legal advisers or third parties, for the dominant purpose of preparing for a legal proceeding. Unlike legal advice privilege, it is not essential that the material is communicated to or from a legal adviser.

For material to be (and remain) privileged, it must be confidential. Privilege will be “waived” where it is disclosed in circumstances that are inconsistent with a claim to confidentiality.

In New Zealand, there have been no cases yet that have considered whether communications with an AI platform are protected by privilege – but there is now a case in the United States that provides guidance.

United States v Heppner

In United States v Heppner, Mr Heppner was indicted and arrested on a number of charges, including securities fraud and wire fraud. The FBI executed a search warrant at Mr Heppner’s home and seized documents and electronic devices, including copies of communications that Mr Heppner had with the generative AI platform Claude. Mr Heppner’s legal counsel argued these communications occurred after Mr Heppner knew he was under investigation, and in anticipation of Mr Heppner being indicted, and outlined an overall defence strategy and potential arguments responding to the anticipated charges.

The Court considered whether these communications were privileged and found they were not. The following points were influential in the Court’s decision:

• The communications were not between Mr Heppner and his legal counsel. There was no suggestion that Claude was an attorney. In fact, the US Government directly asked Claude if it was and its response was, unsurprisingly, “I’m not a lawyer and can’t provide formal legal advice or recommendations”.

• The communications were not confidential. The Court found that, generally, AI users do not have substantial privacy interests in information they voluntarily disclose to publicly accessible AI platforms. Claude’s Privacy Policy let users know that Anthropic collected data on users’ inputs and Claude’s outputs, used that data to train Claude, and that the data could be disclosed to third parties, including government authorities.

• The communications were not for the purpose of Mr Heppner obtaining legal advice from his legal counsel. Claude was not a legal adviser, and Mr Heppner had not been told by his lawyers to use Claude.

Mr Heppner’s legal counsel argued the communications could be privileged because they incorporated the legal advice counsel had given to Mr Heppner. The Court’s view was that Mr Heppner had waived privilege in that advice by sharing it with Claude.

Implications for New Zealand (and Australia)

The Court’s decision in United States v Heppner has not yet been considered in New Zealand.

There is an important distinction between the United States and New Zealand in that, instead of “litigation privilege” which exists in New Zealand (and Australia), the United States has the “work product doctrine”. The work product doctrine is similar in that it protects material prepared in anticipation of litigation, but there is a further requirement that the material be “prepared by or at the behest of counsel”. In the Heppner decision there was no evidence that Mr Heppner’s legal counsel had directed him to communicate with Claude or use Claude to prepare material to assist his defence.

However, litigation privilege (as with any privilege) still relies on confidentiality. The Court’s comments in the Heppner decision suggest interactions with Claude are not sufficiently confidential to be privileged. These findings are specific to Claude and Anthropic’s terms, and it is possible that other AI platforms whose terms provide greater assurances around confidentiality and data protection (as many private AI tools on the market – particularly enterprise-grade tools – now do) will be considered sufficiently confidential to maintain a privilege claim. However, there is still a risk, and in practice most people will not know what those terms say.

The Heppner decision highlights the tension between modern technology and legal principles developed long before AI tools like Claude existed. The Court emphasised that privilege is grounded in a “trusting human relationship”, something it found cannot exist between an AI tool and a user. It also rejected the idea that using other Internet-based software, such as cloud-based word processing applications, creates any inherent expectation of privilege. These are issues that will require further consideration, and we will monitor developments with interest.

While not directly applicable, the underlying principles of the Heppner decision could have significant implications for other areas – for example, companies looking to protect their innovations and products via patents. A fundamental pillar of patentability is novelty, i.e. the innovation must be new. While the specific requirements differ slightly from jurisdiction to jurisdiction, if disclosure by an innovator to an AI model is considered to be non-confidential (i.e. inadvertent public disclosure), that disclosure could put any subsequent patent for that innovation at risk of being invalid.

If you have any questions about the way you or your business uses AI tools, please get in touch.
