
August 2, 2025: A New Milestone in the Implementation of the AI Act

Article IT and Data Protection | 31/07/25 | 3 min. | Mahasti Razavi, Eden Gall

Following the entry into force of the European Union’s Artificial Intelligence Regulation (Regulation (EU) 2024/1689) on August 1, 2024, and the ban on unacceptable-risk AI systems in February 2025, a new stage begins on August 2, 2025: the application of Chapter V to general-purpose AI models (GPAI) made available in the EU. 
 

What is a GPAI model? 

The AI Act defines a GPAI model as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market” (Art. 3(63) AI Act). 

The Regulation establishes two distinct compliance regimes: 

  • Standard GPAI models 

  • GPAI models presenting systemic risk 

 

Obligations for GPAI model providers 

As of August 2, 2025, GPAI providers must, among other things (Art. 53 AI Act): 

  • Prepare and keep up-to-date technical documentation describing the model, its training and testing processes, and evaluation results, including at minimum the information listed in Annex XI 

  • Provide documentation to AI system providers intending to integrate the GPAI model, including at minimum the information listed in Annex XII 

  • Implement a copyright and related rights compliance policy 

  • Publish a summary of the data used to train the model 

  • Cooperate with competent authorities 

Lighter obligations apply to open-source GPAI models, unless they present systemic risk. 

 

Additional obligations for GPAI models with systemic risk 

A GPAI model is deemed to present systemic risk if it has “high-impact capabilities, as assessed on the basis of appropriate methodologies and technical tools,” or if it is designated as such by the European Commission under the criteria set out in Annex XIII (Art. 51 AI Act). Models whose cumulative training compute exceeds 10^25 floating-point operations (FLOPs) are presumed to present systemic risk. 
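
For orientation only, cumulative training compute is often approximated with the widely used heuristic of roughly six floating-point operations per parameter per training token. The sketch below is a purely illustrative calculation against the 10^25 FLOPs presumption; the heuristic and the model figures are assumptions, not taken from the Regulation.

```python
# Illustrative sketch only: rough estimate of cumulative training compute using
# the common heuristic FLOPs ~ 6 x parameters x training tokens. The model size
# and token count below are hypothetical and not taken from the AI Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold under Art. 51 AI Act


def estimate_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate cumulative training compute with the 6 * N * D heuristic."""
    return 6 * num_parameters * num_training_tokens


# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS:
    print("Presumed to present systemic risk (threshold exceeded)")
else:
    print("Below the 10^25 FLOPs presumption threshold")
```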

For these models, additional obligations apply (Art. 55 AI Act): 

  • Notification to the European Commission 

  • Independent model evaluations (including red teaming tests) 

  • Risk assessment and mitigation measures 

  • Monitoring of serious incidents 

  • Enhanced cybersecurity requirements 

 

Sanctions for non-compliance 

The European Commission may impose effective, proportionate, and dissuasive fines of up to 3% of the provider’s total worldwide annual turnover or EUR 15 million, whichever is higher. 
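
As a purely illustrative reading of the “whichever is higher” rule, the sketch below compares the two ceilings for a hypothetical provider; the turnover figure is an assumption.

```python
# Illustrative sketch only: the ceiling on GPAI fines is the higher of
# EUR 15 million and 3% of total worldwide annual turnover (Art. 101 AI Act).

def max_gpai_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Return the applicable ceiling, i.e. the higher of the two amounts."""
    return max(0.03 * worldwide_annual_turnover_eur, 15_000_000)

# Hypothetical provider with EUR 2 billion worldwide annual turnover:
print(f"Maximum fine: EUR {max_gpai_fine_eur(2_000_000_000):,.0f}")  # EUR 60,000,000
```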

However, the enforcement of fines against GPAI providers is deferred until August 2, 2026, the general date of application of the AI Act. 

 

GPAI Code of Practice (July 10, 2025) 

Developed by a group of independent experts, the GPAI Code of Practice is structured around three chapters: 

  • Transparency (standardized documentation form) 

  • Copyright compliance 

  • Safety and security (for systemic-risk models) 

Until harmonized standards conferring a presumption of conformity are adopted, GPAI providers may rely on the Code of Practice to demonstrate compliance. Doing so reduces their administrative burden and enhances legal certainty. 

Several GPAI providers have already announced their intention to adhere to this Code. 

 

Other provisions entering into force on August 2, 2025 

In addition to Chapter V, the following provisions of the AI Act also take effect: 

  • Chapter III, Section 4 and Chapter VII: rules on notifying authorities and notified bodies, and the governance framework (AI Office, AI Board, advisory forum, scientific panel, and designation of national competent authorities) 

  • Chapter XII: penalties for non-compliance with the AI Act (excluding fines for GPAI providers, which apply from August 2, 2026) 
