November 13, 2024

AI and advertising – What producers need to know

Recently I had the pleasure of speaking on a panel hosted by Advertising Producers Aotearoa: “AI is here to stay, use it to make Creative Production slay”, alongside industry experts in VFX, post production, agency creative and talent management.

The commercial benefits of using AI to generate ad creative and copy are obvious. My role was to talk to the legal implications of using generative AI tools in the production of content and what producers need to be thinking about.

Here are the key takeaways from the points we discussed.

Who owns your AI-generated content? Is it you?

Intellectual property ownership of content is always a matter of concern for content producers and their clients.

But can AI-generated content even attract intellectual property protection at law? This differs between jurisdictions. For example, in the United States works created solely by AI generally can’t be protected by copyright.

Stephen Thaler, an inventor who is now somewhat infamous for various IP claims he has made, recently tested this. In November 2018, Mr Thaler filed an application with the US Copyright Office to register copyright in an artwork called “A Recent Entrance to Paradise” that was autonomously created by a computer algorithm. In denying Mr Thaler’s application, the Copyright Office restated its opinion that the US Copyright Act provides protection only to works created by human beings.

Mr Thaler then brought proceedings against the Copyright Office, contesting the human authorship requirement and urging that AI be acknowledged as an author (where it otherwise meets authorship criteria), with any copyright ownership vesting in the AI’s owner. In August 2023, the United States District Court for the District of Columbia ruled that artwork generated entirely by an artificial system absent human involvement is not eligible for protection under the US Copyright Act – US copyright law protects only works of human creation.

The US Copyright Office has confirmed that works created partly by technological tools – including AI tools – might be eligible for copyright protection, provided that a “human had creative control over the work’s expression”. So, if a human author arranges or modifies AI-generated material, the human-authored aspects may still be copyrightable provided they are sufficiently creative (which would be a case-by-case assessment depending on how the AI tool was used). In any case, Mr Thaler has appealed the District Court’s decision.

Unlike the US (and many other overseas jurisdictions), the New Zealand Copyright Act 1994 does contemplate the creation of works purely by computer – so a work created using an AI tool could potentially be protected by copyright so long as the other requirements are met.

Under New Zealand law, in the case of a literary, dramatic, musical or artistic work that is computer generated, the author (i.e. the person who has created the work) is the person by whom the arrangements necessary for the creation of the work are undertaken. So, in the context of AI-generated work, the critical question becomes: who “made the arrangements necessary” for the creation of the work?

While this issue hasn’t been tested in the New Zealand courts, keeping a complete record of all AI inputs/outputs on a project may help prove that you made the arrangements necessary for the creation of the work – and that therefore you own it.
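
For productions that generate content through scripted pipelines, that record can be kept automatically. Below is a minimal sketch of what logging each generation step might look like (a hypothetical illustration only – the log format, field names and file name are assumptions, not a prescribed or legally tested format):

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical project log file – one JSON record per line.
LOG_FILE = Path("ai_generation_log.jsonl")

def log_generation(tool: str, operator: str, prompt: str, output: str) -> None:
    """Append a record of one AI generation step to the project log.

    Recording who directed the tool, with what inputs, and what came back
    may help show who made the "arrangements necessary" for the work.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,          # generative tool and version used
        "operator": operator,  # person directing the tool
        "prompt": prompt,      # input supplied to the tool
        # Hash of the output so the exact version produced can be verified later.
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage:
log_generation(
    tool="image-model-v1",
    operator="J. Producer",
    prompt="storyboard frame 3, product hero shot, dusk lighting",
    output="<base64-encoded image data>",
)
```

Even a simple spreadsheet capturing the same details – who prompted what, with which tool, and when – would serve the same evidential purpose.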

The terms applicable to the use of AI tools are also important. These terms can vary on ownership of output generated by the tools, with ownership or extensive use rights remaining with the tool provider in some cases. Similarly, if you are engaging third parties to create content and those third parties use AI tools to provide services to you, then it’s important to ensure that your agreements with those third parties also address the use of AI tools and who owns the output.

Your risk of infringement

At present, there is a risk of copyright infringement in using AI tools to generate content.

There is a tidal wave of ongoing litigation globally against companies (including Microsoft, GitHub, Stability AI, Meta and OpenAI) alleging copyright infringement – both at the training data input stage (training algorithms by scraping existing content) and at the output stage (outputs that could be unlawful copies or derivative works). Many of these cases have been stripped back through successful motions to dismiss, leaving only claims of direct copyright infringement arising from unauthorised copying for training purposes – which seems to be emerging as the decisive legal issue in the infringement cases. Another unsettled legal issue is who is liable if the output infringes – can the user be liable as well as the tool provider?

Usually, the terms of service with the tool provider exclude all liability for outputs. There is therefore some risk for users of generative AI tools trained on copyright works without permission: if an output is substantially similar to a work in the training data, it could infringe the copyright of that work’s author.

Another infringement risk to be aware of in the US market involves the right of publicity, which protects against unauthorised commercial use of a person’s name or likeness. AI tools trained on vast quantities of images of well-known individuals can intentionally or inadvertently violate the right of publicity, by exploiting names, voices, photographs or likenesses to generate outputs that are digital replicas of identifiable individuals.

Can an indemnity protect you?

Indemnity has a specific legal meaning, but it’s useful to think of it as a shield that protects you when a trigger event occurs. That is, if the trigger event happens, the person who gave me the indemnity effectively acts as my shield and protects me from any loss or damage I suffer because of that event.

In relation to AI-generated content, the trigger event is generally a claim that the content infringes intellectual property or breaches someone’s privacy.

In practical terms, let’s say I ask X to create some content for me. Usually (in the absence of AI tools being used), I would expect X to promise me that the content X creates for me will not infringe anyone else’s IP and that if it does, then X will indemnify me (i.e. be my shield). This is standard practice and content creators are comfortable with this, as they have control over what they create: either because it’s their original work or because they obtain the relevant authorisations for third party content to be included in the work.

Where X uses an AI tool to generate that content, a different risk profile arises. In the case of a generative AI tool, the model or algorithm underlying that tool will have been trained on a set of data – which, in the case of publicly available tools, may be taken from the internet or elsewhere (depending on the tool). This creates an infringement risk in relation to the output (as discussed above) – i.e. the tool’s output may replicate someone else’s work. As a result, where tools have been trained on public or non-proprietary data, the tool provider will not provide any assurances that the output of that tool will be non-infringing.

So, if X does not get this assurance from the tool provider, can X give me the assurance (and the indemnity) that they would normally provide regarding non-infringement in AI-generated content? There is a clear risk here for X, which may mean that an indemnity is not agreed to.

There are sometimes ways to mitigate the risk so that X can be comfortable to indemnify me and their other clients. For example, some tool providers provide an indemnity to their users (in this case X) because they have trained their tools on proprietary or licensed data – so they can be sure that the output will not infringe.

What is “ethical” AI?

It is key for any business that the use of AI aligns with its values. So, before launching into using AI, think about how your values should impact its use and what governance you want to introduce around the use of AI.

An AI policy should be your starting point. You may already have policies that AI can sit under, but if not then consider putting one in place. This was something we did at Hudson Gavin Martin – we have open sourced that policy and you can find it here.

The use of AI also naturally gives rise to questions about the future of work in an industry – so thoughtful change management is required when introducing AI tools in a business, particularly when workers may be worried about the impact of AI on their jobs.

What can producers do to navigate the legal implications of AI?

Both the opportunities and the risks of an AI tool depend on its context:

• The nature of the use case;

• The nature of the tool; and

• The nature of the data, e.g. any time the tool is being used to process personal information and/or confidential information, the risks increase.

You can make an assessment at the start of any project about where the risk sits based on the context. For example:  

• An internal use case may be less risky than a use case where the content will be used in the media.

• A tool trained on proprietary data will be less risky than one trained on publicly available data sets.

• One tool provider may be willing to provide protections that another tool provider will not.  

You (or your lawyer) will need to read the fine print. For example, we have seen terms of use under which the tool provider can elect to train the AI tool on your data, with no right for you to terminate. Permitting your data to be used to “train” the tool could lead to your proprietary material (and/or confidential information) being used to inform the AI tool’s responses to future prompts. It also increases your information security risks.

Bear in mind that the law around AI (including as it relates to data, privacy, and intellectual property) is only just emerging; you will need to periodically review your approach to the use of AI tools, including your contracts and your “Use of AI” policy, to ensure that any new legal or ethical issues are addressed.

If this discussion raises questions about your business’ use of AI-generated content, please get in touch.
