This piece is authored by Marco Bassini, Assistant Professor of Fundamental Rights and Artificial Intelligence at the Tilburg Institute for Law, Technology, and Society – Tilburg University.
Recent legislative initiatives in the EU – most notably the Digital Services Act and the Digital Markets Act – have set out objectives for the digital sector, marking a clear departure from the earlier phase characterised as ‘digital liberalism.’
Against this background, the AI Act is a litmus test for the EU’s broader regulatory ambitions. The legislation seeks to articulate the values to which uses of AI should conform, diverging from the market-oriented and state-centric models that underpin AI governance in the US and China respectively. Anchored in the protection of fundamental rights, the AI Act aims to deliver this vision through a structured, risk-based approach. Yet, as the US and China compete to extend their technological leadership, the EU’s value-oriented stance is encountering growing political and economic pressures.
Even if, as Anu Bradford argues, the perceived dichotomy between digital innovation and regulation is a ‘false choice,’ it remains to be seen whether the EU’s value-centric posture will prove adequate in a world where control over innovation increasingly equates to geopolitical influence.
As noted in previous blogposts, the challenges posed to EU regulation by the US call for better – not necessarily less – lawmaking. The key issue, in other words, lies in how to regulate rather than whether to regulate. These reflections are particularly pertinent to the AI Act, which stands as one of the most recent and distinctive achievements of EU digital lawmaking. For the AI Act to fulfil its intended objectives, the EU must ensure that compliance does not devolve into a merely formalistic, box-ticking exercise. This is especially important at a time of global uncertainty, in which Europe – unlike some of its competitors – does not currently hold technological leadership and is often perceived as an ‘underdog’ in the battle for digital sovereignty.
The AI Act unequivocally reflects the EU’s universal regulatory ambition, positioning the EU as a first mover in the global governance of AI. Take the law’s scope, for example. The AI Act applies to providers that place AI systems on the market or put them into service within the EU, irrespective of whether those providers are established in the EU or in a third country. It also extends to providers and deployers of AI systems based or operating outside the Union, as long as the output of those systems is used within the EU. In practice, the AI Act thus operates with extraterritorial reach, capturing many entities outside the EU.
Moreover, the AI Act may trigger obligations under EU law for activities not explicitly governed by its provisions. For instance, providers of general-purpose AI models must comply with certain obligations under EU copyright law – particularly the obligation to respect reservations of rights under the text-and-data-mining exception – even though the training of such models may occur entirely outside EU territory and would, in principle, fall outside the scope of application of the EU’s Copyright Directive. Finally, compliance with the General Data Protection Regulation (GDPR) – a well-established requirement for many actors operating within EU digital markets and services – remains a foundational element of the EU’s broader approach. Unsurprisingly, the AI Act is increasingly viewed as part of the Union’s data protection framework, insofar as it imposes obligations that intersect with principles enshrined in the GDPR, such as data quality. Concerns about the business models and data governance practices of non-EU-based companies, including OpenAI and DeepSeek, have primarily come from national data protection authorities, which have sought to enforce the GDPR in relation to key processing activities in the AI lifecycle, particularly the training of AI models.
Accordingly, compliance with the AI Act will entail more than conformity with a rather standardised product safety framework; it will also require meaningful integration with existing EU legal norms governing data protection and the safeguarding of European values. The AI Act is therefore central to the EU’s concept of digital sovereignty. As noted above, the real face of European AI regulation is yet to be unveiled: the Code of Practice for General Purpose AI models will have a significant impact on the fulfilment of the ambitions behind the AI Act, including the safeguarding of fundamental rights. Only once the puzzle of AI governance is complete will a more reliable assessment of the likelihood of those ambitions being met be possible.
However, if the goal of EU lawmakers is to ensure that compliance is not merely ceremonial, the Commission must balance legal certainty with flexibility. A significant example is the requirement, for high-risk AI systems, to assess the impact of the system on fundamental rights. Yet many aspects remain unclear. There is, first of all, no specific indication of the fundamental rights against which the impact of AI systems must be assessed and mitigated. While all the fundamental rights protected under the EU’s Charter of Fundamental Rights are in principle covered by this assessment, business actors would benefit from greater precision, given that many of them lack human rights expertise. Likewise, many businesses will struggle to determine whether these obligations apply to them at all: the obligations attach only to high-risk systems, and it will not always be easy to map AI systems onto the risk taxonomy in the AI Act. As noted above, the Commission has released guidelines on the notion of AI systems and on the interpretation of prohibited AI practices. However, these documents add a further layer of complexity and do not always reduce uncertainty. Similarly, while the requirements introduced under the Code of Practice for General Purpose AI models constitute a co-regulatory effort, they may render compliance more burdensome.
The fact that scholars have described the AI Act as a ‘medley of product safety legislation and fundamental rights protection’ is further testament to the difficulty of identifying a clear and well-defined approach in the AI Act. Furthermore, some scholars fear a ‘Brussels side-effect’, suggesting that a regulatory strategy that is protective of fundamental rights on paper could ultimately fail to deliver on its promises, and that it would therefore not be advisable to export it. Worse still, this would mean that the efforts to build the EU’s value-based posture have been misplaced, and that the AI Act would risk isolating Europe rather than delivering a global regulatory standard for AI.