Tech
May 1, 2024

Lawyers and Generative AI – New NZ Law Society guidance released

It was encouraging to see the New Zealand Law Society Te Kāhui Ture o Aotearoa publish Generative AI guidance for lawyers recently, discussing what Gen AI is and what lawyers need to consider before using it in their practice. As the Law Society emphasises, Gen AI has significant potential for the legal profession, as well as risks and ethical issues that need to be managed carefully.

Use of Generative AI policy

As a firm, we have been considering the impact of Gen AI on the legal profession for some time. In September last year, we decided that the impact of Gen AI on what we do was significant enough to require a policy. We published that policy to provide transparency, encourage debate and assist other lawyers who might be grappling with the same issues. You can read the Hudson Gavin Martin Policy on Use of Generative AI here.

It was good to see the Law Society promote the importance of a policy in its guidance note, stating that law firms should:

Have a clear policy for all staff about how the firm uses AI. This should include topics such as protection of confidentiality and privilege, monitoring and unauthorised use, and quality assurance.

The Law Society also stated that a “lawyer practising on own account who allows the use of Gen AI in a way that is not adequately monitored or checked or who allows a situation to arise where staff are using Gen AI in an unauthorised manner also risks breaching r11 and 11.1 (Proper professional practice – administering, supervising and managing a legal practice)”. Given that many clients say they expect their firms to be using Gen AI tools, and that publicly available tools like ChatGPT are within easy reach of every lawyer, it’s important for firms to address this issue proactively.

Quality assurance and competence

As we would expect, the Law Society’s guidance discusses the important issue of accuracy and quality in legal advice when using Gen AI tools. We know that the current generation of Gen AI tools can “hallucinate”, i.e. fabricate or misstate cases, legislation and other information, presenting it as fact. We also know that there is limited New Zealand content available to “train” legal tools, and much is still uncertain about the way Gen AI tools create content – there are concerns that tools may develop in a way that creates biased, discriminatory, or misleading content, or that infringes the intellectual property rights of others.

The Law Society has made it clear in the guidance that improper, negligent, or incompetent use of Gen AI (including reliance on defective or misleading outputs) could lead to a serious breach of the Conduct and Client Care Rules. Careful human oversight to review outputs and apply professional judgement is always needed. Ultimately, the human lawyer will remain responsible for any AI-created legal content.

We think it’s important for firms to consider their approach to checking the outputs of Gen AI tools and build this into their policies and processes. This requires consideration of supervision and training approaches, as well as detailed due diligence of specific Gen AI tools before they are used.

Privacy, confidentiality, and cybersecurity

Understandably, the Law Society guidance specifies the need for robust security systems and processes to protect client confidentiality, privacy and privilege when inputting data to Gen AI tools.

While the need to use Gen AI tools in a way that is secure, legally compliant, and ethically sound goes almost without saying, we also believe it’s important to contribute to the ongoing development and evolution of Generative AI tools (as that ultimately benefits our clients and the wider legal profession). Our approach to contributing data for training purposes is therefore appropriately risk-based: we will look for ways to help train these systems, for example by permitting training on non-confidential or anonymised data, or by working with providers to test their solutions in a safe ‘sandbox’ environment.

Wider considerations

The Law Society comments constructively in the guidance on the broader ethical issues the legal profession is considering in relation to Gen AI, such as client consent, billing practices, and staff engagement.

The Law Society highlights that good client and staff communication will be essential, including staff training that covers your firm’s policy on the use of Gen AI, ethical and professional obligations, privacy, and information security. We would go further to include giving lawyers specific training on the use of AI tools (for example in relation to prompts).

Thoughtful change management is also required, as the use of Gen AI in legal practice naturally gives rise to questions about the future of work in the profession. We are of the firm view that Gen AI tools will assist and augment lawyers, not replace them. However, these tools have the potential to fundamentally change how we work, and it is important to reflect on the broader impacts on our people. We think a joined-up approach is required here – so that the introduction of AI is not simply a ‘tech’ issue, but a people and culture one too.

Likewise, our policy highlights the importance of continuing to communicate in our own voice. Gen AI tools can produce content that is clear, concise, and well-written. However, the content is based on patterns and probabilities derived from data, so is inherently generic. As lawyers, our humanity remains one of our key assets.

An AI-enabled future

The issues associated with implementing legal Gen AI tools are complex and rapidly evolving. As the Law Society notes, the purpose of its guidance is to assist lawyers, not to substitute for legal advice or expert technical input. If you would like to discuss how Generative AI might impact you, please don’t hesitate to contact us.
