
Digital Omnibus Regulation on AI

Article IT and Data Protection | 19/12/25 | 17 min. | Marc Mossé, Eden Gall, Benjamin Fontani

Will the European principle of Smart Regulation eventually benefit artificial intelligence? One can only hope so. As Regulation (EU) 2024/1689 of 13 June 2024 establishing the first horizontal legal framework for artificial intelligence (the “AI Act”) is being phased in, the European Commission is already moving to recalibrate and streamline the regime through a targeted “Omnibus” package designed to make implementation more consistent, proportionate, and operational across the EU.
 

This initiative follows the Draghi Report (2024), which attributed part of the European Union’s competitiveness gap to regulatory complexity and excessive compliance costs. In response, the Commission committed, for the 2024–2029 term, to an ambitious legislative simplification agenda designed to address these imbalances. The Commission estimates that these measures could reduce administrative costs borne by businesses by up to five billion euros by 2029.

Key changes under consideration include:

  • adjusting the application timeline for obligations applicable to high-risk AI systems;
  • extending the documentation and procedural simplifications currently available to SMEs to mid-sized companies ("SMCs");
  • making post-market monitoring more flexible;
  • centralizing supervision of certain systems within the AI Office;
  • allowing the processing of certain sensitive data to detect and correct potential bias; and
  • broadening access to regulatory sandboxes and real-world testing pathways.

 

  1. The Digital Package

As part of its Digital Package intended to streamline and simplify several areas of EU digital legislation, the Commission has put forward an “Omnibus” proposal that would amend the legal frameworks governing artificial intelligence, cybersecurity, and data.

Two additional initiatives complement the package. One is aimed at implementing the Data Union strategy (in particular by facilitating access to high-quality data for AI), and the other would establish European business wallets for companies.

The Commission’s stated objective is to reduce the administrative burden borne by economic operators, especially in the digital sector, in order to strengthen European competitiveness while maintaining a high level of protection for fundamental rights, security, and transparency. The aim is to enable European companies to allocate more resources toward innovation and fewer toward administrative formalities.

In 2025, the Commission undertook extensive consultations on the implementation of the AI Act to identify potential operational challenges. Three public consultations were held in spring 2025, supplemented by a call for evidence specifically devoted to the Digital Package, allowing stakeholders to report the obstacles encountered in practice. An SME panel was also convened to better capture the specific needs of small and medium-sized businesses.

Against that backdrop, on 19 November 2025, the Commission presented the Digital Omnibus package, which includes three draft regulations and one Commission Communication, as follows:

  • a “Digital Omnibus” Regulation covering data, cybersecurity, and data protection (proposal for Regulation 2025/0360 (COD), COM(2025) 837 final);
  • a “Digital Omnibus” Regulation on AI (proposal for Regulation 2025/0359 (COD), COM(2025) 836 final);
  • a Regulation establishing European electronic business wallets; and
  • a Commission Communication presenting the new European data strategy.

 

  2. The Digital Omnibus Regulation on AI

 

At the core of the package, the Digital Omnibus Regulation on AI introduces targeted amendments to the AI Act intended to facilitate its implementation within a more appropriate and proportionate framework. The AI part of the Digital Package takes the form of a proposal for Regulation 2025/0359 (COD), COM(2025) 836 final, which amends both the AI Act and Regulation (EU) 2018/1139 on aviation safety.

The consultations conducted in 2025 identified several obstacles likely to undermine the AI Act’s effective application and enforcement of its key provisions, notably delays in designating national competent authorities and conformity assessment bodies and, critically, the absence of harmonized standards, guidelines, and compliance tools relating to high-risk AI systems.

Given that the AI Act is scheduled to enter into application progressively through the summer of 2027, these gaps create a risk of a significant increase in compliance costs for businesses and public authorities, and a risk of stifling innovation due to the lack of a sufficiently clear, predictable, and operational framework to ensure effective implementation by the originally scheduled deadlines.

The proposal therefore aims to simplify implementation of the AI Act and significantly reduce administrative burdens for stakeholders. The Commission has accompanied its proposal with a detailed assessment of the potential savings expected from the adoption of these adjustments.

 

    2.1. Entry into application of obligations applicable to high-risk AI systems
 

Acknowledging the difficulty for Member States and businesses to meet the initial deadlines for obligations applicable to high-risk AI systems[1], scheduled to apply in August 2026[2] or August 2027[3], the Commission proposes to make the applicability of those obligations contingent on the actual availability of the instruments needed to ensure compliance, including harmonized standards, common specifications, guidelines, and assessment procedures.

Under the AI Act, compliance with harmonized standards developed by European standardization bodies, particularly CEN and CENELEC, allows providers to demonstrate conformity with the Regulation’s requirements. In the absence of such standards prior to the application of high-risk AI obligations, it becomes very difficult for operators to demonstrate compliance and implement the required measures within the mandatory deadlines.

The Digital Omnibus Regulation on AI therefore provides that the Chapter III requirements for high-risk AI systems would become applicable only once the Commission has formally confirmed the availability of the relevant compliance documents. Those obligations would then apply:

  • 6 months after the Commission decision for high-risk AI systems referred to in Article 6(2) and Annex III;
  • 12 months after the Commission decision for high-risk AI systems referred to in Annex I.

 

However, if no Commission decision is adopted, Sections 1, 2, and 3 of Chapter III, covering high-risk AI systems and the requirements and obligations applicable to providers and deployers of such systems, would in any event become applicable no later than:

  • 2 December 2027 for high-risk AI systems referred to in Article 6(2) and Annex III; and
  • 2 August 2028 for high-risk AI systems referred to in Annex I.

 

This mechanism would defer the binding application of the high-risk AI regime for as long as key compliance tools remain incomplete, while preserving a mandatory backstop date to ensure that the regime ultimately becomes fully effective.

Any such shift in the timeline, however, remains subject to the adoption of the Digital Omnibus by the European legislator. Unlike the so-called “stop the clock” directive adopted ahead of the Omnibus regulation amending the CSRD (sustainability reporting) and CS3D (due diligence) directives to adjust companies’ compliance dates, the Commission chose not to propose a separate instrument for digital matters. In other words, if the proposal were adopted after, or too close to, the newly suggested dates, it would have little or no effect on the timetable. One may infer that the Commission opted for this approach to increase, even indirectly, the pressure on the legislator to adopt the text as quickly as possible without substantially amending it. The same logic applies to the transitional period described below.

 

    2.2. Additional transitional period for certain AI systems and general-purpose AI models
 

In addition, the Digital Omnibus Regulation on AI introduces an extra six-month transitional period, until 2 February 2027, for providers of AI systems already on the market or in service. This gives them time to implement technical measures in existing generative AI systems to comply with the transparency requirements in Article 50(2). These obligations include implementing functionalities that make content machine-readable and detectable as artificially generated or manipulated.
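Article 50(2) leaves the marking technique open; industry initiatives such as C2PA pursue a comparable goal through embedded, signed provenance manifests. As a loose, hypothetical illustration of the underlying idea only (the function names and JSON fields below are invented for this sketch and are not drawn from the Regulation or any standard), a machine-readable sidecar record can bind a content hash to a declaration that the content was artificially generated:

```python
import hashlib
import json

def provenance_record(content: bytes, generator: str) -> str:
    """Build a hypothetical machine-readable provenance record:
    a JSON sidecar binding a hash of the content to a declaration
    that it was artificially generated."""
    return json.dumps({
        "ai_generated": True,  # machine-detectable declaration
        "generator": generator,  # identifies the producing system
        "sha256": hashlib.sha256(content).hexdigest(),  # ties record to content
    }, sort_keys=True)

def is_declared_ai_generated(content: bytes, record: str) -> bool:
    """Check that the record matches this content and declares it AI-generated."""
    data = json.loads(record)
    return (data.get("ai_generated") is True
            and data.get("sha256") == hashlib.sha256(content).hexdigest())

record = provenance_record(b"some generated text", "example-model")
print(is_declared_ai_generated(b"some generated text", record))  # True
print(is_declared_ai_generated(b"tampered text", record))        # False
```

A production mechanism would additionally need cryptographic signing and a scheme for attaching the record to the content itself (e.g. in file metadata or a watermark), which this sketch deliberately omits.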

 

    2.3. Extension of the favorable regime to SMCs
 

The Digital Omnibus extends the simplification measures available to SMEs to mid-sized companies (SMCs). These companies would in particular benefit from lighter technical documentation requirements, a proportionate quality management system, consideration of their interests when codes of conduct and guidelines are developed, and caps on fines calibrated to their size.

The text also introduces definitions of SMEs and SMCs that are expected to align with the definitions set out in the annex to Commission Recommendation 2003/361/EC and the annex to Commission Recommendation 2025/3500/EC.

This extension, drawn from the Omnibus IV package, pursues an economic fairness objective, supporting the competitiveness of innovative actors that, while not large companies, were bearing compliance costs disproportionate to their resources.

 

    2.4. AI literacy
 

Under Article 4 of the AI Act, providers and deployers of AI systems must ensure that their staff have sufficient “AI literacy”. In practice, this obligation is broad and lacks operational detail, making it difficult to implement consistently.

The Commission observes that a one-size-fits-all approach to AI literacy is not suited to all categories of actors and may fail to achieve its intended purpose. The proposal would therefore shift the emphasis toward an obligation for the Commission and Member States to encourage skills development and AI awareness among the staff of providers and deployers.

However, this reduction in burden should not obscure that the obligation to ensure meaningful human oversight of AI systems remains in place and that training and upskilling contribute to that oversight. Therefore, companies should not neglect this aspect in their compliance roadmaps.

 

    2.5. The AI Office
 

The Digital Omnibus Regulation on AI would strengthen the AI Office’s powers. In particular, the AI Office would be granted exclusive supervisory competence over:

  • AI systems based on general-purpose AI models, where the system and the model are provided by the same actor, except for systems linked to products governed by sector-specific legislation listed in Annex I to the AI Act. Sectoral authorities would remain responsible for supervising AI systems linked to products covered by that EU harmonized legislation; and
  • AI systems embedded in, or constituting, a very large online platform or a very large online search engine[4] within the meaning of Regulation (EU) 2022/2065 (Digital Services Act). For such platforms and engines, the issue is closely tied to the assessment and mitigation of systemic risks. The use of AI by these actors can significantly increase their societal impact, in particular with respect to fundamental rights and the spread of disinformation.

 

To carry out these tasks, the Commission would have to adopt an implementing act precisely defining the AI Office’s powers and the applicable procedures, including its ability to impose fines and other administrative sanctions.

At the same time, the text expressly sets out several procedural safeguards for economic operators, including detailed reasoning for decisions, information on available avenues of appeal, and the right to be heard within a maximum period of ten days.

In addition, the proposed Regulation provides that by 2028 an EU-level regulatory sandbox administered by the AI Office would be created. This mechanism would complement national sandboxes already provided for under the AI Act, enabling innovative AI systems to be tested within a coordinated cross-border framework.

To take on these new tasks, the Commission estimates that the AI Office would need an additional 53 full-time equivalents, part of which could be covered through internal redeployment. In parallel, Member States’ workloads would decrease, in particular because some systems would then be supervised directly at EU level.

 

    2.6. Codes of practice
 

The proposed Digital Omnibus Regulation on AI would amend Articles 50(7) and 56(6) of the AI Act to remove the Commission’s ability to approve codes of practice by implementing act with general application across the Union. The Commission would nevertheless retain the ability to adopt an implementing act laying down common rules for the application of certain obligations where existing codes of practice are deemed insufficient.

The Commission therefore seeks to move from an ex ante validation model to a more targeted “catch-up” mechanism where sector-specific self-regulation proves inadequate.

 

    2.7. Legal basis for data processing
 

The proposed Regulation introduces a new Article 4a into the AI Act, replacing Article 10(5). This provision creates a legal basis allowing providers and deployers of AI systems or models, not limited to high-risk AI systems, to process, on an exceptional basis, special categories of personal data for the purpose of ensuring detection and correction of bias in AI systems, subject to certain conditions.

 

    2.8. Additional simplification measures
 

The Commission’s proposal includes several additional simplification provisions:

  • Post-market monitoring: the proposal removes the obligation for providers to use a harmonized template for post-market monitoring plans, thereby allowing operators to tailor monitoring arrangements to sector-specific features. The stated aim is to avoid the rigidity of a one-size-fits-all template and to encourage proportionate approaches, without compromising the general requirement for continuous vigilance;
  • European database of high-risk AI systems: AI systems that fall within a domain listed in Annex III but are exempt from classification as high-risk under Article 6(3) would no longer be subject to registration;
  • Single application for conformity assessment: the text provides that conformity assessment bodies may submit a single application and undergo a single assessment procedure when they act both under the AI Act and under sectoral EU harmonization legislation listed in Annex I;
  • Real-world testing: the proposal allows for written agreements between the Commission and Member States enabling certain high-risk AI systems, particularly those in Section B of Annex I, to be tested in real-world conditions outside regulatory sandboxes, under joint supervision. This option does not apply to AI systems prohibited under the AI Act.

Several technical adjustments are also included to clarify that, in certain cases, the sectoral assessment procedure must be followed where an AI system is integrated into a product already covered by EU harmonization legislation, and to improve consistency with Regulation (EU) 2018/1139 in the aviation sector.

Finally, numerous guidelines are expected to enhance legal certainty and clarity for stakeholders, including guidance on classifying high-risk systems, applying transparency obligations, and the scope of the research exemption.

 

  3. Procedure

Proposal for Regulation 2025/0359 (COD) falls under the ordinary legislative procedure, involving both the European Parliament and the Council. After being presented by the Commission, the text was transmitted to the two co-legislators, which must now begin a formal and substantive review of the proposed amendments to the AI Act and Regulation (EU) 2018/1139. Once each institution has adopted its respective position, trilogue negotiations will begin in order to reach a political agreement enabling final adoption.

Given the AI Act’s phased applicability timetable, the co-legislators are expected to conduct the negotiations quickly, with the aim of adopting the text before summer 2026 and avoiding a scenario in which certain provisions, particularly those tied to deadlines, are adopted too late to be effective.

In parallel, the Commission will continue its supporting work (drafting guidelines, preparing implementing acts, etc.) to ensure the consistent, operational, and proportionate implementation of the European AI framework.

Despite the adoption of several standardization mandates by the Commission, no essential harmonized standards are yet available as the first compliance deadline approaches. This difficulty is compounded by the delay in publishing the numerous guidelines that the Commission was supposed to develop, particularly regarding the classification of high-risk systems.

 

These delays are at the heart of the difficulties identified by the Commission and place economic operators in a situation of legal uncertainty.

 

[1] Including risk management, data quality and governance, logging and traceability, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity.

[2] Entry into application of provisions relating to high-risk AI systems listed in Annex III of the AI Act.

[3] Entry into application of provisions relating to high-risk AI systems listed in Annex I of the AI Act.

[4] Article 33 of the Digital Services Act.
