Understand the risks associated with generative AI
Last updated: 12 February 2025
Generative AI is a wonderfully powerful technology. Enabling generative AI in your chatbot automation will unlock a huge amount of benefits. However, like any advanced technology, it comes with inherent risks. This document outlines key risks associated with generative AI and the steps we take to mitigate them.
Hallucination
Hallucinations are situations where an AI generates an answer that is factually incorrect, fabricated, or non-sensical; even while they seem confident.
There are many reasons generative AI may hallucinate. It’s important to understand that, when generating an answer, a large language model (LLM) predicts the next word in a sequence using the patterns it learned from its training data. However, LLMs are not trained in the factual accuracy of these predictions.
Thus, generative AI models can occasionally generate incorrect or misleading information.
There are several measures we take (or recommend you take) to minimise the risk of hallucinations:
We ground all generative AI responses in content. When you create a generative AI chatbot on our platform, you also upload documents. Your chatbot will generate answers based on this content, though not solely.
We continuously refine our models to improve their performance and accuracy.
We recommend clearly displaying in your chatbot automation that answers generated may be inaccurate and should be double-checked.
Bias
AI generates answers based on the knowledge it has been fed, both via its foundational models and the documents you manually import. Any bias contained in either of these data sources may unintentionally reinforce stereotypes or provide biased answers.
There are several measures we take (or recommend you take) to minimise the risk of bias:
We attempt to collect data from a diverse range of sources to minimise the risk of bias.
We use advanced machine learning techniques to increase the size and diversity of our training data, including data augmentation, data balancing, and neural network architectures to prevent overfitting.
We use human evaluation to review the training data to identify and remove any biased or offensive content.
We regularly update the training data to ensure that it remains representative and unbiased, and to reflect changes in language usage and cultural norms.
We recommend customers regularly audit the content they import into the chatbot’s knowledge to identify and remove bias.
Privacy and personal information
Chatbots process user queries, which may contain sensitive or personal data. Some users may feel familiar enough to share sensitive information with the automation, such as their name, contact details, etc., creating a risk of exposure to private information.
There are several measures we take (or recommend you take) to minimise the risk of exposure:
We encrypt all communications in transit and at rest.
We enable customers to set their own data retention policies.
We use various LLM techniques ('Guards') to prevent personally identifiable information from being processed or communicated via generative AI.
We recommend clearly instructing your chatbot users not to divulge any private information.
Manipulation and unethical content
Malicious actors may attempt to manipulate the chatbot by injecting deceptive prompts to override its intended behaviour. Unethical actors may also use a chatbot’s general knowledge to generate harmful or misleading content.
There are several measures we take (or recommend you take) to minimise the risk of manipulation:
Every generative AI request contains at least one chunk of grounded content and a customer-written prompt.
We use various LLM techniques ('Guards') to flag and prevent manipulations such as prompt injections.
We regularly test and update our LLMs to ensure robust defences against emerging threats.