AI Act: EU countries headed to tiered approach on foundation models amid broader compromise


The EU approach to powerful AI models is taking shape as European countries discuss possible concessions in the upcoming negotiations on the world’s first comprehensive Artificial Intelligence (AI) rulebook.

The Spanish presidency of the EU Council of Ministers shared on Tuesday (17 October) a document in preparation for the next political negotiation with the European Parliament and Commission on 24 October, the so-called trilogues.

The document, seen by Euractiv, details a series of possible landing zones on the AI Act, a flagship legislative proposal to regulate AI based on its capacity to cause harm. The compromises concern critical areas of the text, including how to deal with foundation models, large machine-learning models trained on vast data sets that generate responses based on a specific stimulus.


Foundation models & general-purpose AI

A key aspect in the discussions of the AI law has been how to regulate AI models that do not have a specific purpose. Following the meteoric rise of ChatGPT (powered by OpenAI’s GPT-3.5), significant public attention has been dedicated to foundation models.

The European Parliament was the first to suggest that this type of AI system, on which other AI solutions can be built, should be subject to a particular regime following a tiered approach, an idea the Council has taken on board.

The Spaniards introduced a possible definition of foundation models as an “AI model that is capable to competently perform a wide range of distinctive tasks”. They suggested establishing benchmarks via implementing acts to assess such capabilities.

The proposed approach includes horizontal transparency obligations for all foundation models, notably documenting the modelling and training process and evaluating the model against established benchmarks before market launch.

Following launch on the EU market, foundation model providers would have to provide information and documentation to downstream economic operators.

The Spanish presidency is also pitching the introduction of a new category, the ‘very capable foundation models’, which should be subject to additional obligations because their “capabilities go beyond the current state-of-the-art and may not yet be fully understood”.

Again, the definition of the benchmarks to designate this category was left to implementing acts, but the Spaniards did suggest a possible way to set a threshold in terms of the amount of computing used for the training, which would require constant updating.

Other metrics mentioned that could be used are the data consumed in training and the potential impact on users, namely in high-risk applications built on the model.

These very capable foundation models would have to undergo regular vetting by external red teams, be subject to compliance controls by independent auditors, and establish a risk mitigation system before market launch.

A third category Madrid envisages covers general-purpose AI systems built on foundation models and used at scale. A system is considered to be used at scale if it has either over 10,000 registered business users or 45 million registered end users, although qualitative considerations also come into play.

For this third category, the obligations include regular external vetting to uncover vulnerabilities and establishing a risk mitigation system. Finally, all general-purpose AI providers should state whether their system can be used for high-risk purposes and act accordingly.

Regarding the thorny copyright issue, the presidency wants foundation model providers to demonstrate that they have taken adequate measures to ensure their system is trained in compliance with EU copyright law, notably allowing rightsholders to opt out.

For generative AI models, Spain wants the providers to ensure their output is detectable as artificially generated or manipulated with a technical solution that is effective, interoperable and considers state-of-the-art technology.


Governance

In contrast with the European Parliament, EU member states initially left the enforcement of the AI law largely in the hands of national authorities. However, when it comes to foundation models, the document notes that “the complexity and capability of these models and systems are such that centralising expertise would be important”.

As a result, the presidency has accepted the idea of an AI Office, introduced by the Parliament, explicitly tasked with overseeing the new rules on foundation models and general-purpose AI systems used at scale, defining the auditing procedures, and carrying out compliance controls and investigations.


Biometric identification

A major point of contention between the EU Council and Parliament concerns the use of real-time biometric identification systems by law enforcement. MEPs are pushing for a complete ban, whilst EU governments want to keep some exceptions, as per the original Commission text.

The proposed compromise further narrows these exceptions, limiting them to searching for victims of abduction and human trafficking, preventing imminent threats such as terrorist attacks, and prosecuting only the most severe crimes. Additional safeguards are also proposed.

MEPs introduced an obligation to obtain judicial authorisation for the ex-post use of biometric identification systems. Here, the presidency would like to remove the need for authorisation for initial generalised checks, but not for targeted searches.

Banned practices

The EU Parliament’s mandate includes banning emotion recognition in law enforcement, border management, workplaces and education institutions. For the presidency, a limited prohibition in this area would need to exclude group screenings and include certain exemptions, for instance on safety grounds.

Alternatively, Madrid proposes accepting the ban in workplaces and education while pushing back on law enforcement and border management.

MEPs also banned the biometric categorisation of protected data such as religious beliefs and political orientation, but the Spaniards want to introduce a carve-out for law enforcement.

Meanwhile, parliamentarians are pushing for a ban on predictive policing. Here, the concession might be additional wording as part of the prohibition on social scoring practices.

In border control, the Spanish presidency wishes to remove the verification of authenticity of travel documents from the list of high-risk use cases.

While apparently accepting the high-risk classification for detecting individuals in border management activities, the Spaniards want to remove the categorisation for forecasting migration trends.


Law enforcement

The EU Council introduced significant exceptions for law enforcement. The presidency suggests limiting the exemption that spares police forces from registering their high-risk systems in the public database, and introducing a timeline for large-scale IT systems in the justice and home affairs area to come into line with the AI Act.

National security

While the EU Council excluded national security from the scope of the AI Act, Spain is proposing a ‘balanced compromise’ wording recalling that this area falls under the responsibility of member states. Alternatively, more flexible wording could be drawn from the Data Act.


High-risk use cases

The European Parliament’s version has a broad conception of the biometric identification systems that could fall under the high-risk category. In contrast, the Council is concerned that casting the net too widely would also cover cases where the person is actively participating, for instance by giving fingerprints.

The presidency wishes to include emotion recognition and biometric categorisation in the high-risk category rather than ban them outright. As an additional safeguard, it proposes introducing a third-party conformity assessment.

[Edited by Nathalie Weatherald]
