This piece is authored by Daniel Schnurr, CERRE Research Fellow and University of Regensburg
Last week, the European Commission announced its Digital Omnibus package alongside its broader Data Union Strategy. Adjustments to the EU’s data and AI rules had been eagerly anticipated as a way to boost Europe’s competitiveness and digital sovereignty, but have also proven controversial. The proposed changes are positive, albeit limited, steps to streamline the EU’s digital rules – but beyond these tactical changes, questions remain about the Commission’s overarching digital strategy.
A willingness to update digital rules more quickly
After introducing numerous digital regulations during its previous mandate, the new Commission has signalled a shift towards implementation, consolidation, and simplification. The Digital Omnibus package tries to deliver on this commitment.
It is encouraging that the Commission appears willing to revise and even repeal legislation that has only recently entered into force but has proven ineffective or has been overtaken by the rapid pace of technological change. Consolidating cybersecurity and data protection reporting obligations is straightforward and sensible. Moreover, the launch of a broader ‘Digital Fitness Check’ – an examination of how well digital rules are working – suggests that Brussels is serious about reviewing existing rules to maintain a coherent and innovation-friendly rulebook.
Unfinished business consolidating data rules
Consolidating EU data rules could address concerns about coherence and inconsistencies that we have highlighted in previous CERRE work. However, the GDPR (which mandates protections for personal data) and the Data Act (which mandates sharing of certain data) remain separate and their interaction will likely raise continued tensions.
The Data Act amendments cut obligations on small mid-cap companies – adopting a more “asymmetric” regulatory approach. As we have previously argued, this should help innovation and competition. However, for a genuine, large-scale impact on data availability, the simplifications to the Data Act’s data-sharing obligations would have needed to be considerably more ambitious (see our earlier proposals).
GDPR: Small steps, but big political battles ahead
The proposed GDPR amendments are politically contentious. Several move in a more innovation-friendly direction. But civil-society organisations have already expressed strong concerns. Given the considerable resistance expected from MEPs during the legislative process, it remains uncertain how much of this shift will ultimately survive.
Companies would be allowed to use publicly available personal data for training and operating AI systems, without the user’s active consent. This change appears inevitable if the EU wants to allow AI providers to keep collecting and using large-scale training datasets from the internet. A key question will be how to allow users to opt out.
Simplifying the rules on pseudonymisation could have significant positive effects. In practice, pseudonymised data is often treated almost as strictly as fully personal data, discouraging its reuse. Clearer and more proportionate rules could unlock additional value while enabling a broader range of safe and privacy-respecting processing techniques.
At the same time, it is important to ensure that the precise wording of the amendments does not lead to unintended side effects. For example, current changes to the GDPR could undermine the viability of data donations, which serve as important instruments for empirical research on digital services. This would contradict the goal of facilitating data collection and processing for scientific research.
One missed opportunity is the lack of additional GDPR exemptions or flexibilities for smaller firms, which could have eased regulatory burdens on these companies and boosted innovation. Small firms already benefit from exemptions in the AI Act and the Data Act.
Goodbye to cookie banners – hello to paywalls?
The Commission also proposed updates to the long-stalled ePrivacy Directive. The plan would deem cookies lawful by default for certain “low-risk” purposes, reducing the prevalence of the now-infamous cookie banners. Moving to automated, machine-readable signals of users’ choices could help cut cookie banners without undermining the GDPR’s consent principle. However, this approach will require standardisation processes, which have often proven lengthy and difficult, especially when they do not align with the interests of major stakeholders.
If users could more easily enforce a blanket “no data collection” preference, we could eventually see a broader shift towards “pay-or-consent” models for funding content and services. However, whether such models are compatible with GDPR principles remains highly contentious and has, in the case of large online platforms, been largely rejected by the European Data Protection Board.
Pragmatic adjustments to the AI Act
The Commission proposes targeted adjustments to the new AI Act: granting additional time for implementing the rules for high-risk AI, without changing the general risk-tiered framework.
The extended deadline is reasonable: firms have little guidance on how to apply the high-risk rules, especially in certain sectors, and standardisation efforts are still far from completion. Industry and civil society attention has so far focused on general-purpose AI models, not domain-specific high-risk AI systems.
Strengthening regulatory sandboxes and real-world testing is a welcome idea, particularly in the context of increasingly autonomous, general-purpose agentic AI systems, where it is hard to fully understand the risks before deployment. Providing greater flexibility in post-deployment monitoring will allow operators to develop more context-specific compliance solutions. This idea aligns with recommendations in our forthcoming CERRE issue paper on technological neutrality in the AI Act. Other good ideas include centralising more enforcement within the AI Office, cutting burdens on small mid-cap firms, and removing the AI literacy obligation – which was confusing and poorly understood.
Conclusion: positive steps, but what is the strategy?
The proposed changes are positive, albeit limited, steps to streamline the EU’s digital rules, but questions remain about the Commission’s overarching digital strategy. Recent regulatory proposals avoid imposing new obligations and focus more on supply-side measures and capacity building. But how these initiatives will meaningfully address Europe’s challenges remains vague.
In particular, fundamental questions remain about the Data Union Strategy’s goal of “scaling up access to quality data for AI and innovation”. What incentives will businesses have to share their data, given their concerns about trade secrets, privacy, and competition, as the Commission itself acknowledges? How will unlocking data for AI deliver concrete benefits for European businesses and consumers, while not fuelling market concentration? How can AI Gigafactories and other flagship initiatives support organisations to develop sustainable business models capable of competing with global players over the long term?
Answers to these questions are crucial if the EU’s digital and data rules are to be useful parts of the EU’s overarching digital strategy. Only then does Europe have a chance to develop a meaningful data- and AI-driven digital economy.