Tech
September 25, 2023

Innovate at the speed of trust – Privacy Commissioner releases new guidance on artificial intelligence tools

Four months after outlining his “expectations” around how New Zealand businesses and organisations should use generative artificial intelligence, the Privacy Commissioner has now published more detailed guidance on the use of (all) AI tools.

The Commissioner’s position is clear: the Privacy Act applies to the use of AI. The successful use of AI tools comes down to building trust, and privacy must be a demonstrable starting point for the responsible use of AI tools.  

AI tools create “special issues” for privacy

The Privacy Commissioner takes the view that AI tools present specific challenges for privacy because they enable new ways to gather and combine personal information, and because they can make it harder to see, understand and explain how personal information is used. Because even experts can’t always explain how AI tools generate a particular output or decision, organisations should think especially hard before uploading personal information into these tools.

The Commissioner believes that it is critical for organisations to understand what is in the training data for the AI tool, how relevant and reliable it is for the intended purpose, and whether the training data is gathered and processed in ways that comply with legal obligations (not to mention the organisation’s ethical values).

Privacy-by-design

The Privacy Commissioner is strongly of the view that privacy should be considered in the early planning and implementation of AI tools in an organisation: “The best time to do privacy work is as soon as possible, especially for AI tools”.

The Information Privacy Principles (IPPs) in the Privacy Act set out the legal requirements for collecting, using, and sharing personal information. They apply whether an organisation is building its own AI tools, using AI tools formally to support a business function, or merely has team members who are informally using AI in their work. They also apply where overseas organisations supply AI tools for use in New Zealand.

The Commissioner states that before using AI tools, organisations need to understand enough about how they work to be confident that they can, and do, uphold the IPPs when using them. To be effective, this will require collaboration across cross-functional teams within each organisation.

More than box ticking

This process of understanding needs to be meaningful and demonstrable. The Commissioner’s view is that the “best way” to start is a Privacy Impact Assessment – which needs to be updated regularly.

Complying with the IPPs may require significant change to how AI tools are used in an organisation. For example, while having a “human in the loop” to check outputs before use is a good idea, it may not be enough on its own to uphold good privacy practices because it’s well established that people overseeing computer systems can suffer “automation blindness” and fail to notice errors and mistakes.

This high standard is consistent with the Commissioner’s previously published expectation that agencies using generative AI tools should only do so with “senior leadership approval” based on a full consideration of the risks and mitigations.

Applying the IPPs to AI tools

The IPPs specify the legal requirements around:

• How personal information is collected (IPPs 1 – 4).

• How personal information is used and protected (IPPs 5 – 10).

• How personal information is shared (IPPs 11 – 12).

In this latest guidance, the Commissioner states his view that the IPPs apply to each stage of building and using AI tools:

• Collecting training data – that is, gathering training data as a resource to inform the behaviour of an AI tool.

• Training a model – that is, processing data to create or refine a model to drive the behaviour of an AI tool.

• User input – such as user-supplied prompts or other information.

• Receiving a response – that is, generated outputs.

• Taking action or decisions based on the use of the AI tool – including providing it for public use.

What does this mean in practice?

The Privacy Commissioner has published a list of “key questions” that organisations should be asking themselves when using AI tools to ensure compliance with the IPPs.

Is the training data behind the AI tool relevant, reliable, and ethical?

Any time an organisation seeks out or obtains personal information, it is “collecting” it for the purposes of the Privacy Act. In general, agencies must obtain personal information directly from the person it is about (IPP2) and must be transparent about the information being collected and how it will be used (IPP3). Agencies must also ensure that the manner of collection of personal information is lawful, fair and does not unreasonably intrude on personal affairs, particularly when collecting information from children or young people (IPP4).

Using an AI tool to generate an output about a person (e.g., asking an AI tool a question that generates information about a person) may amount to the collection of personal information – bearing in mind that personal information includes information about a person that is inaccurate or made up, including fake social profiles and deepfake images. AI tools reproduce patterns from their training data and, without a good understanding of the training data and processes used to develop the tool, an organisation cannot know whether it includes personal information collected in a way that breaches IPPs 1 – 4. For example, it may not be clear to an organisation whether an AI tool has been trained on responsibly collected data – there may be a risk the training data includes personal information obtained in a data breach.

Organisations therefore need to analyse the risks involved in using AI tools without understanding the training data – and what steps they can take to mitigate those risks.

The Commissioner also makes the point that it may be risky to rely on the statutory exception for “publicly available” information, because that requires an assumption about the way the training data was obtained. For example, training data scraped from the internet may include sources that require a login to access, such as social media profiles.

What was the purpose for collecting personal information? Is your use related?

In general, personal information can only be collected where it is necessary for a lawful purpose (IPP1). Organisations need to clearly identify the purposes for collecting personal information, and then limit use and disclosure to those purposes or a directly related purpose (IPPs 10 and 11). In practice this means that organisations need to think carefully about why they need to collect information – and then not collect more than is needed for that purpose.

The rapid development of more readily available AI tools has created new ways to use and disclose information for a legitimate business purpose. However, these uses may not be directly related to the purpose for which the information was originally collected.

If an organisation has already collected information and is now intending to use it in an AI tool (to train the tool or as a user prompt to generate a specific output), then it needs to analyse carefully the purpose for which it originally collected the information – and consider whether feeding the information into AI is directly related to that purpose.

The Privacy Commissioner’s position is that if an organisation wants to use personal information to train an AI tool, this needs to be made clear at the time the information is collected.

The Commissioner is less direct about whether specific disclosure is required at the time of collection if an organisation only wants to use the personal information in an AI tool (e.g., to generate an output) – presumably because that use may still be directly related to the other purpose for which the information was collected. However, the Commissioner is clear that, in the interests of fairness, transparency is always a good idea – if personal information is collected to train, refine, or use AI tools, organisations should consider clearly explaining this to people and offering them the chance to opt out from that particular use.
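To make this concrete, here is a minimal sketch (in Python, purely for illustration; the guidance does not prescribe any particular mechanism) of how disclosed collection purposes and an AI opt-out might be recorded alongside personal information, so that a later AI use can be checked against them. All names are our own, not drawn from the guidance:

```python
from dataclasses import dataclass, field

# Hypothetical record tying personal information to its disclosed collection
# purposes and any AI opt-out, so later use can be checked (IPPs 1, 10, 11).
# Field and function names are illustrative, not taken from the guidance.
@dataclass
class CollectedInfo:
    subject_id: str
    data: str
    purposes: set[str] = field(default_factory=set)  # purposes disclosed at collection
    ai_opt_out: bool = False                         # the person declined AI use

def may_use_for_ai(record: CollectedInfo, intended_purpose: str) -> bool:
    """Permit AI use only if the purpose was disclosed and the person has not opted out."""
    return intended_purpose in record.purposes and not record.ai_opt_out
```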

How are you keeping track of the information you collect and use with AI tools?

When an agency holds information about a person, that person can ask to access that information (IPP6) and to have it corrected (IPP7).

Organisations need to analyse how they will comply with these access and correction principles if they use AI tools. For example, the original training data, the pre-trained model, the inputs, and the outputs may all contain personal information – yet offer no practical way to access or correct it.

The Commissioner states that it is essential that organisations develop procedures for how they will respond to requests from individuals to access and correct their personal information in this context – before putting an AI tool into use. Can an organisation still be confident that it can provide and correct personal information when asked to do so?
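As a purely illustrative sketch, record-keeping of this kind might look like the following, assuming a hypothetical in-house audit log (the function and field names are our own, not drawn from the guidance):

```python
import json
from datetime import datetime, timezone

# Hypothetical audit log: records where personal information enters an AI
# workflow, so that access (IPP6) and correction (IPP7) requests can be
# answered. Field names are illustrative, not prescribed by the guidance.
def log_ai_use(log_path: str, subject_id: str, stage: str, detail: str) -> None:
    """Append one auditable record of personal information used with an AI tool."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,   # who the information is about
        "stage": stage,             # e.g. "prompt", "output", "training-data"
        "detail": detail,           # what was used and why
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def records_about(log_path: str, subject_id: str) -> list[dict]:
    """Retrieve every logged use of a person's information, e.g. for an IPP6 request."""
    with open(log_path, encoding="utf-8") as f:
        return [r for r in map(json.loads, f) if r["subject_id"] == subject_id]
```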

Organisations will also need to be more vigilant about verifying the identity of an individual requesting their personal information, as AI tools make it easier to realistically impersonate people.

How are you testing that AI tools are accurate and fair for your intended purpose? Are you talking with people and communities with an interest in these issues?

IPP8 requires agencies that hold personal information not to use or disclose that information without taking reasonable steps to ensure it is accurate, up to date, complete, relevant, and not misleading.

The accuracy issues with generative AI outputs (both of fact and logic) are already well-known, and the Commissioner cautions organisations to take a critical approach to accuracy claims made by providers of other AI tools. The question becomes what “reasonable steps” organisations need to take to assure themselves that AI tools will uphold the accuracy principle. The Commissioner warns that – depending on the nature of the intended use and the level of risk – this may require independent testing and auditing. Completing and updating a Privacy Impact Assessment will be essential. An organisation will need to consider how accurate and reliable an AI tool is at each stage – and this may require an investigation of the training process sitting behind the tool.
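For illustration, a spot-check of this kind might look like the sketch below, which assumes a hypothetical `ask` callable standing in for the AI tool and a small set of verified reference answers. A real assessment would be far more extensive:

```python
from typing import Callable

# Illustrative accuracy spot-check (IPP8): run verified question/answer pairs
# through the AI tool and measure how often it reproduces the right answer.
# `ask` stands in for whatever tool is under assessment.
def accuracy_rate(ask: Callable[[str], str], reference: dict[str, str]) -> float:
    """Fraction of verified reference answers the tool reproduces correctly."""
    correct = sum(
        1 for question, expected in reference.items()
        if ask(question).strip().lower() == expected.strip().lower()
    )
    return correct / len(reference)

# A real assessment would use a much larger sample, fuzzier matching, and
# independent review of the failures, as the guidance contemplates.
```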

One obvious commercial benefit from the use of AI tools is in automated decision-making, which can be quicker and more cost-effective. However, the Commissioner warns that the direct impact of automated decision-making on outcomes for people increases the risks of inaccuracy, and means that human review of decisions will generally be required to uphold IPP8.

He reiterates that while human review prior to acting on AI outputs is important to reduce the risks, simply having a “human in the loop” may not be enough given automation blindness. The use of AI tools should “maintain and complement” – but not replace – the processes and responsibilities already in place in organisations to uphold accuracy and fairness.
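One way to keep the human role active rather than a rubber stamp is to combine forced review of low-confidence outputs with random auditing of the rest. The sketch below is illustrative only and assumes the AI tool reports a confidence score:

```python
import random

# Illustrative review gate for automated decisions. Assumes the AI tool
# reports a confidence score. Low-confidence decisions always go to a person;
# a random sample of the rest is also reviewed, so human oversight is active
# checking rather than rubber-stamping (which invites automation blindness).
CONFIDENCE_FLOOR = 0.8   # below this, review is mandatory
AUDIT_RATE = 0.1         # fraction of high-confidence decisions still reviewed

def needs_human_review(confidence: float) -> bool:
    return confidence < CONFIDENCE_FLOOR or random.random() < AUDIT_RATE
```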

A less obvious risk of breaching IPP8 arises because there may be gaps or biases in the training data, which limit the accuracy and fairness of the AI tool. Most organisations will adopt AI tools that are not designed for and in consultation with Aotearoa New Zealand communities (as most public-facing AI tools now available have been developed overseas). This may lead to inaccuracy in the form of bias or exclusion (particularly of poorly represented peoples). The Commissioner’s view is that completing a good Privacy Impact Assessment may require engaging with the community, including Māori, to help the organisation understand what accuracy and fairness means – for example, engaging with Māori about the potential risks and impacts to the taonga of their information.

What are you doing to track and manage new risks to information from AI tools?

Organisations must protect personal information against loss, unauthorised access, and other misuse (IPP5). This includes adopting reasonable cybersecurity measures to protect information, e.g., using two-factor authentication.

AI tools supercharge this cybersecurity risk. The use of AI tools leads to more sharing of information with third party providers, and there is no prospect that the provider of a publicly available AI tool (for example) will ring-fence information inputted by a particular organisation. If that information is then used for future training purposes, it may be leaked or re-identified. AI tools can also make it easier to perpetrate other cybercrimes, such as impersonating real people or creating fake identities online.

To uphold IPP5, agencies must keep personal information secure – including prompts and training data. Organisations will therefore need to consider whether they can use AI tools without sharing data back to the provider, or rely on contractual terms that stop the provider from using input data for training. They will also need privacy breach response plans that deal specifically with the potential harms arising from using AI tools.
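By way of illustration, a first line of defence might be stripping obvious identifiers from prompts before they leave the organisation. The sketch below uses deliberately simplistic patterns; a real deployment would rely on dedicated PII-detection tooling and the contractual controls described above:

```python
import re

# Illustrative redaction of obvious identifiers before a prompt is sent to
# an external AI tool (IPP5). These regexes are simplistic placeholders; a
# real deployment would use dedicated PII detection plus contractual controls
# preventing the provider from training on inputs.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?64|0)[\s-]?\d(?:[\s-]?\d){7,9}\b"),  # NZ-style numbers
}

def redact(prompt: str) -> str:
    """Replace recognisable identifiers with placeholders before sending."""
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Email jane.doe@example.co.nz or call 021 234 5678 about her claim."))
# -> "Email [EMAIL] or call [PHONE] about her claim."
```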

Safety first

If an organisation is not sure about an AI tool, then the Commissioner states that the safest approach is to avoid putting personal information into it – and to ensure that everyone across the organisation is complying with this. Many employees are already informally using AI tools in their work and an organisation may need to “catch up” to avoid unknowingly putting people’s privacy at risk.

As the Privacy Commissioner stated at the recent Aotearoa AI Summit, when it comes to personal information, the agency of the individual is key – organisations should innovate at the speed of trust to ensure that the benefits of AI tools don’t come at the expense of people’s privacy.

Thinking about privacy when implementing AI tools in your organisation is vital. Please contact us if you need any guidance in this area. You may also want to take a look at our own Policy on the Use of Generative AI, which you can read here.
