Source: European Commission

General FAQ

Why do we need rules for general-purpose AI models?

AI promises huge benefits to our economy and society. General-purpose AI models play an important role in that regard, as they can be used for a variety of tasks and therefore form the basis for a range of downstream AI systems, used in Europe and worldwide.

The AI Act aims to ensure that general-purpose AI models are safe and trustworthy.

To achieve that aim, it is crucial that providers of general-purpose AI models possess a good understanding of their models along the entire AI value chain, both to enable the integration of such models into downstream AI systems and to fulfil their obligations under the AI Act.

As explained in more detail below, providers of general-purpose AI models must draw up and provide technical documentation of their models to the AI Office and downstream providers, must put in place a copyright policy, and must publish a training content summary.

In addition, providers of general-purpose AI models posing systemic risks, which may be the case either because the models are very capable or because they have a significant impact on the internal market for other reasons, must notify the Commission, assess and mitigate systemic risks, perform model evaluations, report serious incidents, and ensure adequate cybersecurity of their models.

In this way, the AI Act contributes to safe and trustworthy innovation in Europe.

What are general-purpose AI models?

The AI Act defines a general-purpose AI model as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications” (Article 3(63)).

The Recitals to the AI Act further clarify which models should be deemed to display significant generality and to be capable of performing a wide range of distinct tasks.

According to Recital 98, “whereas the generality of a model could, inter alia, also be determined by a number of parameters, models with at least a billion of parameters and trained with a large amount of data using self-supervision at scale should be considered to display significant generality and to competently perform a wide range of distinctive tasks.”

Recital 99 adds that “large generative AI models are a typical example for a general-purpose AI model, given that they allow for flexible generation of content, such as in the form of text, audio, images or video, that can readily accommodate a wide range of distinctive tasks.”

Note that significant generality and the ability to competently perform a wide range of distinct tasks may be achieved by models within a single modality, such as text, audio, images, or video, if the modality is flexible enough. This may also be achieved by models that were developed, fine-tuned, or otherwise modified to be particularly good at a specific task.

The AI Office intends to provide further clarifications on what should be considered a general-purpose AI model, drawing on insights from the Commission’s Joint Research Centre, which is currently working on a scientific research project addressing this and other questions.

What are general-purpose AI models with systemic risk?

Systemic risks are risks of large-scale harm from the most advanced (i.e. state-of-the-art) models at any given point in time or from other models that have an equivalent impact (see Article 3(65)). Such risks can manifest themselves, for example, through the lowering of barriers for chemical or biological weapons development, unintended issues of control over autonomous general-purpose AI models, or harmful discrimination or disinformation at scale (Recital 110). The most advanced models at any given point in time may pose systemic risks, including novel risks, as they are pushing the state of the art. At the same time, some models below the threshold reflecting the state of the art may also pose systemic risks, for example, through reach, scalability, or scaffolding.

Accordingly, the AI Act classifies a general-purpose AI model as a general-purpose AI model with systemic risk if it is one of the most advanced models at that point in time or if it has an equivalent impact (Article 51(1)). Which models are considered general-purpose AI models with systemic risk may change over time, reflecting the evolving state of the art and potential societal adaptation to increasingly advanced models. Currently, general-purpose AI models with systemic risk are developed by a handful of companies, although this may also change over time.

To capture the most advanced models, the AI Act initially lays down a threshold of 10^25 floating-point operations (FLOP) used for training the model (Article 51(1)(a) and (2)). Training a model that meets this threshold is currently estimated to cost tens of millions of euros (Epoch AI, 2024). The AI Office will continuously monitor technological and industrial developments, and the Commission may, by way of delegated act, update the threshold to ensure that it continues to single out the most advanced models as the state of the art evolves (Article 51(3)). For example, the value of the threshold itself could be adjusted, and/or additional thresholds introduced.
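The threshold refers to cumulative compute used for training. As an illustration only, a provider might estimate training compute with the widely used 6 × parameters × training-tokens heuristic for dense transformer models (a community rule of thumb, e.g. applied in Epoch AI's estimates, and not part of the AI Act itself; the model figures below are hypothetical):

```python
# Hedged sketch: estimate training compute and compare it against the
# AI Act's initial 10^25 FLOP threshold (Article 51(1)(a) and (2)).
# The 6*N*D approximation is a common heuristic, NOT a legal definition;
# actual compute accounting may differ.

TRAINING_COMPUTE_THRESHOLD_FLOP = 1e25  # initial threshold in the AI Act


def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Approximate training compute via the 6*N*D rule of thumb."""
    return 6 * parameters * training_tokens


def meets_systemic_risk_threshold(parameters: float, training_tokens: float) -> bool:
    """Check the estimate against the 10^25 FLOP threshold."""
    return estimated_training_flop(parameters, training_tokens) >= TRAINING_COMPUTE_THRESHOLD_FLOP


# Hypothetical example: a 100-billion-parameter model trained on 20 trillion tokens
flop = estimated_training_flop(100e9, 20e12)  # about 1.2e25 FLOP
print(flop, meets_systemic_risk_threshold(100e9, 20e12))
```

Under this heuristic, the example model's estimated 1.2 × 10^25 FLOP would exceed the threshold, whereas a much smaller model (say, 1 billion parameters on 1 trillion tokens, roughly 6 × 10^21 FLOP) would fall far below it.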