Is staff use of AI chatbots putting your business at risk?

16 September 2024

Many of us have done it. You’ve had to pull together a document, report, or tender, and have thrown it into one of the many AI chatbots available. But is doing so putting your business at risk?

Using solutions such as ChatGPT, Microsoft Copilot, or Google’s Gemini frees up time for other work. But what are the risks of your employees using these programmes for work purposes, and is your client data in jeopardy?

Here are a few things to think about if your people are using these programmes, and what you can do to help mitigate the risks.

Risk of client data being exposed

Any client or sensitive data entered into a generative AI product could be collected by the software owner or third parties, putting it at risk of being inappropriately shared. Sharing client data through one of these systems could also breach the Privacy Act 2020, which applies to the use of generative AI. In fact, the Privacy Commissioner has openly stated that his Office will be working to ensure the legislation is complied with in this regard.

Storing client data in the cloud may also expose it to vulnerabilities, especially if the cloud provider’s security is compromised.

Accuracy of data

AI chatbots will seek out information related to a topic, but what comes back is not always accurate. This means that, without realising it, you may be passing on false or misleading information to your customers, suppliers, or even employees. In some cases, it is believed the output could even be discriminatory, for example when used to generate HR-related documents.

AI chatbots may inadvertently reveal sensitive company or client data if they are not properly secured or configured. This could lead to data breaches, regulatory violations, and financial losses.

AI may also struggle with morally complex queries, especially in situations that require a deep understanding of human emotions and cultural sensitivities.

Risk of plagiarism

There is a potential risk of plagiarism when using AI to generate materials, especially if those materials are then published as your own. For example, if you use AI chatbots to generate content for your website, parts of that content could have been lifted directly from others’ work. This is why lawsuits around AI and plagiarised work have already begun in the US.

This includes the unauthorised use of another party’s intellectual property, notably ideas, designs, or text. If the plagiarised material is used for commercial gain, it can lead to intellectual property infringement claims.

Who owns the data?

If your employees use their own AI chatbot accounts to generate work-related documents, they, and not your business, may become the copyright owner of the work. It also means the employee could legally share their generated documents with other parties without your permission.

What you can do to mitigate the risks

Some Kiwi organisations, including Government departments, have already banned the use of these AI systems at work outright, while others have created their own platforms, such as PwC’s ‘ChatPwC’.

What are your options to help mitigate the risks of AI use in the workplace?

  • First, understand how and when your employees are using generative AI.
  • Create a company policy around the use of AI chatbots that sets out clear guidelines for how they are to be used, or not used, as the case may be. It is important that the policy is well communicated so your employees understand why you are creating it.
  • If there is a genuine need for these tools, get an Enterprise account. While it will cost more, there is less risk of client data being leaked, as information put into the system remains private to the organisation and is not shared with third parties.
  • Ask your CIO or IT support provider about blocking certain sites so that employees don’t accidentally use a non-approved application (a simple sketch of the idea follows this list).
  • Talk to your insurance broker about your use of these tools and what liability cover you may need as a backstop for any breaches.
  • Conduct regular security assessments, such as audits, and address potential security weaknesses before they can be exploited.
  • Encrypt sensitive data (see the short example after this list).
  • Develop an incident response plan that includes instructions in case of a data breach, including containment strategies.
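
To make the site-blocking idea concrete, here is a minimal, purely illustrative Python sketch of the logic a web proxy or DNS filter applies when deciding whether a requested host is on a deny-list. In practice your IT provider would configure this in your firewall or proxy product rather than in code, and the domain names and the is_allowed helper below are hypothetical examples, not any real product’s interface.

```python
# Illustrative deny-list check, as a proxy or DNS filter might apply it.
# The hosts listed here are examples only; a real blocklist would be
# maintained by your IT provider in your firewall/proxy configuration.
BLOCKED_HOSTS = {"chat.openai.com", "gemini.google.com"}

def is_allowed(host: str) -> bool:
    """Return False for hosts on the deny-list, including their subdomains."""
    host = host.lower().rstrip(".")
    return not any(host == b or host.endswith("." + b) for b in BLOCKED_HOSTS)

print(is_allowed("chat.openai.com"))  # False - non-approved AI tool
print(is_allowed("example.co.nz"))    # True  - ordinary business site
```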
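
For the encryption point, here is a minimal sketch using the Fernet recipe from the widely used Python cryptography library (symmetric encryption). The client record is a placeholder, and the key handling is deliberately simplified: in practice the key would be stored in a secrets manager, well away from the data it protects.

```python
# A minimal sketch of encrypting a client record at rest with Fernet.
# Requires the "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

# Generate the key once and keep it somewhere safer than the data itself
# (e.g. a secrets manager); losing the key means losing the data.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"Client: Example Ltd, Policy: 12345"  # placeholder record
token = fernet.encrypt(record)    # ciphertext, safe to store in the cloud
original = fernet.decrypt(token)  # recoverable only with the key
assert original == record
```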

If you need advice on your liability in this regard, feel free to drop us a line.