As TMT lawyers, we have worked closely with clients as they grapple with how to approach AI for their businesses. Approaches range from suspicion (and in some cases outright rejection) to complete infatuation. But by now most businesses have come to accept (and embrace) the new, new normal.
For businesses eager to leverage the power of AI, having a clear AI policy should be the starting point. This was something we did early on at Hudson Gavin Martin (we have open sourced that policy and you can find it here), and something that we’ve assisted our clients with.
If you’re still in the market for an AI policy, or want to sanity check the approach that you’ve taken, here are some key themes that we’ve seen:
• Businesses want to harness AI – especially to improve customer experiences and to unlock business efficiencies.
• AI needs to be used responsibly and in strict compliance with a business’ legal, regulatory and security obligations.
• AI is a new tool, but it is not so different from other things a business might already procure – adopting it sits somewhere between procuring new technology and onboarding a new contractor.
• Most businesses have a vetting process of some kind before new technology can be implemented or a new contractor can be onboarded – that process applies equally to AI.
• Similarly, many businesses will have a number of existing policies which already cover AI. This may be a good opportunity to revise your existing policies to ensure that the expectations are streamlined and clear – and that any policy that you adopt for AI is consistent with your established policies.
• AI tools should be assessed for:
- Data and privacy risks – for example:
o Where does your data go?
o What control do you have over it?
o Will personal information be at risk?
o Will commercially valuable information be at risk?
o What does the AI provider want to do with your data?
- Reliability of output – for example:
o How does the AI make decisions?
o What data has the AI been trained on?
o How will you ensure there is still a human assessment of the AI output?
- IP risks – for example:
o Have the AI tools been trained on legitimately sourced data?
o What assurances does your AI provider give to address this?
- Ownership of output – for example:
o Will you own what you expect to own?
o Is your use of the output restricted in any way?
• This important vetting process can be seen as a "handbrake" on innovation, which can result in business teams trying to circumvent the process. This use of "shadow tech" (i.e. the unauthorised use of technology within a business) is not an AI-specific phenomenon, but unvetted AI tools may pose a higher risk than other unvetted technology.
• To better support eager business teams, many clients have explored different ways to ensure that AI tools can be available safely but at speed. Options include:
- Using access controls/restrictions to prevent unauthorised AI being accessed on work devices;
- Using an internal "marketplace" to enable speedy access to approved tech/AI;
- Using whitelists and blacklists to guide use of AI; and/or
- Using prompt-based tools that help users self-assess an AI tool, with any green-lit tools notified to the relevant internal team for further assessment before being added to the relevant "marketplace"/"whitelist".
• Principles-based AI policies work best, because principles keep a policy relevant and a business focused while the underlying technology constantly evolves. Some key principles include:
- Transparency (including telling customers/users that you are using AI);
- Accountability (including for the decisions of AI);
- Empowering your people (including to challenge AI decisions/outputs); and
- Protecting commercially sensitive information (both personal information and other confidential information that you don’t want absorbed into the AI through machine learning).
• Businesses should remain vigilant and carefully consider whether the use of AI is appropriate in any given context. For example, while it may be tempting to use AI to transcribe every internal meeting, this could backfire if a dispute arises and those transcripts become discoverable.
For more information, or help with your own AI policy, please feel free to get in touch.