De-risking AI

Generative AI has become increasingly popular in organisations, but it has risks. Wisely AI has identified five key risks organisations should know when using Generative AI. 

The first risk is anthropomorphising AI chatbots. This means projecting human motivations onto their behaviour, which can compromise ourselves. As we interact with chatbots, we may start to assume that they have human-like intentions and capabilities, which can lead to misunderstandings or even exploitation.

The second risk is training data vulnerabilities. Malicious data sets collected from the internet, as well as copyrighted data sets, have made their way into publicly available AI chatbots. This can lead to unintended or even harmful responses from the chatbots.

The third risk is hallucinations. AI chatbots can generate erroneous or entirely fictional responses, particularly when responding to vague or ambiguous instructions. This can lead to confusion or distrust between the user and the chatbot.

The fourth risk concerns privacy, data security, and data sovereignty. It’s important to closely inspect and classify potential inputs to chatbots to ensure that personal, private, commercially sensitive, or legally restricted data is never shared with a public service.

The fifth and final risk is prompt attacks. These attacks can be ‘prompt subversions,’ which coax an AI chatbot into generating responses its creators have explicitly forbidden. Alternatively, ‘prompt injections’ can ‘pervert’ a chatbot’s goals, secretly turning it into an agent acting against the interests of its user.

To mitigate these risks, organisations should follow guidance provided by experts in the field of Generative AI. This can include implementing strict data privacy and security measures, training chatbots with diverse and representative data sets, and regularly monitoring chatbot responses for potential anomalies.

Source: Wisely AI


Posted

in

, ,

by

Comments

Leave a Reply

Your email address will not be published. Required fields are marked *