Generative AI: vulnerable systems like any other

The business world is passionate about generative artificial intelligence. But to avoid unpleasant surprises, it’s better to take the time to think about their security before deploying them.

The paper lists the different configurations of a generative AI system and attack scenarios. It offers about thirty security recommendations. It mainly focuses on LLM (Large Language Model) tools designed to create text or computer code.

Upstream and production attacks

Computer attacks targeting generative AI systems fall into three main categories:

  • Manipulation attacks rely on malicious requests sent to a production system.
  • Infection attacks target an AI system during its training phase.
  • Finally, exfiltration attacks aim to steal information from the AI ​​system in production.

Among the intended attacks, a third party that has, for example, access to the data used to train the model can use that access to “poison” the data. Enough to hijack the system once in production, for example by training it to respond to a specific request to trigger a malicious action.

But even without having access to the training data or the system during training, an attacker with access to other components used by the AI ​​model could pose a risk.

The report identifies several possible impacts of these attacks. For example, the risk to the reputation of the organisation that provides chatbot-type services to the general public. In this regard, we remember Microsoft’s unfortunate experience with its chatbot Tay).

Also cited is the liberalisation of an attack against other business applications related to generative AI systems.

Attention at all levels

In all cases, Anssi recommends conducting an AI risk analysis study prior to the training phase. The goal? Identify the various elements related to AI and the sub-parties responsible for processing the organisation’s data.

The agency also called for some caution to be exercised regarding the final applications of AI and recommended against “the automated use of AI systems for critical actions on IS.”

If generative AI tools are viewed by many publishers as a powerful way to automate tasks, Anssi urges designers to maintain some caution and human control over these actions. For example, by strictly maintaining logs of the processing performed by the AI ​​model. But also by “strictly” separating them in a special technical environment.

The guide finally devotes a section to Gen AI-based source code generation tools, such as Microsoft’s Copilot. Here again, Anssi calls for caution and recommends systematic control of AI-generated source code and limiting its use to the most critical applications.

Leave a Comment