Publications
#Tech, Media, Telecom

Meaningful XAI Based on User-Centric Design Methodology

July 10, 2023
Document(s)
Download the report here

The Artificial Intelligence Act proposal lies at the heart of the European approach to artificial intelligence (‘AI’). This approach focuses on excellence and trust, aiming to boost research and industrial capacity while ensuring safety and protecting fundamental rights. The EU aims to become a world-class hub for AI that is human-centric and trustworthy, and such an approach needs to be guided by concrete rules. In April 2021, the European Commission presented its AI package, which includes a communication on fostering a European approach to AI, a review of the Coordinated Plan on Artificial Intelligence with Member States, the AI Act proposal, and an accompanying impact assessment.

Building trust in AI involves implementing a legal framework that addresses fundamental rights, safety risks, liability, and sector-specific regulations. By pursuing this approach, the EU seeks to shape the future of AI and create a resilient digital Europe for the Digital Decade. Against this background, CERRE authors Winston Maxwell and Bruno Dumas have produced this report on explainable AI (‘XAI’), which delves into the concept of ‘explainability’ in AI-based systems and considers both ‘local’ and ‘global’ explanations. Local explanations pertain to specific algorithmic outputs and are context-sensitive, while global explanations cover the functioning of the system as a whole.
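
To make this local/global distinction concrete, the following minimal sketch (not taken from the report; the toy scoring model, feature names, and weights are purely illustrative assumptions) shows how a single linear model yields one system-wide explanation but a different, input-specific explanation for each output.

import numpy as np

# Toy linear scoring model; feature names and weights are illustrative only.
feature_names = ["income", "debt_ratio", "years_employed"]
weights = np.array([0.6, -0.9, 0.3])
bias = 0.1

def score(x):
    # Model output for a single (normalised) input vector.
    return float(weights @ x + bias)

def global_explanation():
    # System-level view: which features the model relies on overall.
    return dict(zip(feature_names, weights.tolist()))

def local_explanation(x):
    # Output-specific view: each feature's contribution to this particular score.
    return dict(zip(feature_names, (weights * x).tolist()))

applicant = np.array([0.8, 0.5, 0.2])
print("score:", round(score(applicant), 3))    # e.g. 0.19
print("global:", global_explanation())         # the same for every input
print("local:", local_explanation(applicant))  # changes with the input

In this toy setting, the global explanation is simply the fixed weight vector, while the local explanation weights each feature by the value it took for that particular input.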

The report explores the characteristics of end users, their tasks, and the information they require to use AI systems. It emphasises that explanations are not a one-size-fits-all solution and can sometimes have unintended consequences, such as creating complacency or exacerbating cognitive biases.

Two use cases, one on an educational recommender system and another on explainable AI for scientific discoveries, are presented to illustrate the application of explainability techniques. ‘Meaningfulness’ is a crucial aspect of explanations, and the report examines its interpretation within the EU’s General Data Protection Regulation framework, noting that it is subjective and influenced by user characteristics, context, and task. The concept of ‘meaningfulness’ is further explored through coherence, purpose, and significance, three levels that contribute to meaningful interactions.

Overall, the report underscores the importance of tailoring explanations to the user and specific task while considering the challenges and potential impact of measuring meaningfulness. The report concludes by making the following recommendations for policymakers:

  • The AI Act proposal should include specific guidance on local explainability in the risk analysis and management systems of AI providers, to be adapted as necessary by the users of AI systems;
  • The AI Act proposal should adopt a user-centric design methodology as a way of measuring whether a given explanation is “meaningful” in light of its audience and purpose;
  • The EU Charter should be used to underpin the approach to explainability in AI-based systems and to provide a foundation for a harmonised methodology;
  • European standardisation bodies could develop a framework for meaningful explanations in the context of harmonised standards;
  • The European Centre for Algorithmic Transparency can also potentially develop a user-centric design methodology;
  • The European Commission should pursue the harmonisation of explainable AI requirements across different EU legislation and case law; and
  • The European Commission should also pursue the encouragement of user-centric design methodology to achieve meaningful explanations.
Author(s)
Winston Maxwell
Research Fellow
CERRE and Télécom Paris

Winston Maxwell is Director of the Law & Digital Technology Studies department at Télécom Paris – Institut Polytechnique de Paris, where he teaches and writes on subjects related to the regulation of data, AI and telecommunications. He previously had a career in private practice as a partner of the international law firm Hogan Lovells.

Winston completed his law degree (JD) at Cornell, his PhD in economics at Télécom Paris, and his HDR (Habilitation à Diriger des Recherches) at the University of Paris Panthéon Sorbonne. His research focuses on the regulation of AI, in particular human control over algorithmic systems, explainability, and bias. Winston coordinates the “Operational AI Ethics” program at Télécom Paris, which includes AI ethics teaching at Institut Polytechnique de Paris. In addition to being a research fellow at CERRE, Winston is a member of the Data and AI Ethics Council of Orange, the Scientific Advisory Board of ARCOM, and the Ethics Board of the Paris Institute of Advanced Studies. He also contributes to standardisation activities on trustworthy AI within ISO/IEC JTC21/WG 4.

Bruno Dumas
Professor
Namur Digital Institute (NADI), University of Namur.

Bruno Dumas obtained his PhD in computer science in 2010 at the University of Fribourg, Switzerland. His thesis focused on the creation of multimodal interfaces along three axes: software architectures for multimodal interfaces, multimodal interaction modelling, and multimodal fusion algorithms. He then worked for four years as a postdoctoral researcher at the Vrije Universiteit Brussel, in Belgium, where he deepened his knowledge of cross-media interactive systems. He is now a professor at the University of Namur, where he leads the EXUI research group and is co-president of the NADI research institute. His main research domains revolve around human-computer interaction, multimodal interfaces, and augmented/mixed reality, as well as how the evolution of IT affects users in their daily lives and society.

