2024 September 16
Many of us have done it: you’ve had to pull together a document, report, or tender, and have thrown it into one of the many AI chatbots available. But is doing so putting your business at risk?
Using solutions such as ChatGPT, Microsoft Copilot, or Google’s Gemini frees up time for other work. But what are the risks of your employees using these programmes for work purposes, and is your client data in jeopardy?
Here are a few things to think about when your people use these programmes, and what you can do to help mitigate the risks.
Any client or sensitive data that is put into a generative AI product could be collected by the software owner or third parties, putting it at risk of being inappropriately shared. Sharing client data through one of these systems could also breach the Privacy Act 2020, which covers generative AI. In fact, the Privacy Commissioner has openly stated that his Office will be working to ensure the legislation is complied with in this regard.
Storing client data in the cloud may expose it to vulnerabilities, especially if the cloud provider’s security is compromised.
AI chatbots will go out and seek information related to a topic, but what comes back may not always be accurate. This means that, without realising it, you may be passing on false or misleading information to your customers, suppliers, or even employees. In some cases, the output could even be discriminatory, particularly when AI is used to generate HR-related documents.
AI chatbots may inadvertently reveal sensitive company or client data if they are not properly secured or configured. This could lead to data breaches, regulatory violations, and financial losses.
AI may struggle to understand morally complex queries, especially in situations that require a deep understanding of human emotions and cultural sensitivities.
There is a potential risk of plagiarism when using AI to generate materials, especially if those materials go back into the public domain. For example, if you use AI chatbots to generate content which you then publish on your website as your own material, parts of that content could have been lifted directly from others’ work. This is why lawsuits over AI and plagiarised work have begun in the US.
This includes the unauthorised use of another party’s intellectual property, notably ideas, designs, or text. If the plagiarised material is used for commercial gain, it can lead to intellectual property infringement claims.
If your employees are using their own AI chatbot accounts to generate work-related documents, they, and not you, may become the copyright owner of the work. It also means the employee could legally share their generated documents with other parties without your permission.
Some Kiwi organisations, including Government departments, have already outright banned the use of these AI systems at work, while others have created their own platforms, such as PwC’s ‘ChatPwC’.
What are your options to help mitigate the risks of AI use in the workplace?
If you need advice on your liability in this regard, feel free to drop us a line.