Winston Maxwell: CERRE Research Fellow and Expert in Data Regulation and Artificial Intelligence
Each month, CERRE introduces a member of its academic team, showcasing their background, presenting the ongoing projects they are working on, and addressing the topical cross-cutting issues everyone wants to know about.
Winston, you have a varied and successful legal background. Can you give us a glimpse into your expertise and particular research interests?
Much of my career as a lawyer was devoted to telecommunications regulation and data protection. As an academic, I am now focusing on the Human Rights aspects of Artificial Intelligence.
You are a newly appointed Research Fellow, but you have collaborated with CERRE before. What did your previous research projects look into?
My previous CERRE research projects were related to telecommunications regulation, focusing on investment incentives for fibre (see previous reports on ‘Cooperation between firms to deploy very high capacity networks’ (2020) and ‘Implementing co-investment and network sharing’ (2020)).
We are interested in delving into the topic of Artificial Intelligence (AI) with you today. CERRE is active in this field (see the previous CERRE study on ‘EU Liability rules for the age of Artificial Intelligence’ (2021)), and is closely following the negotiations of the EU’s Artificial Intelligence Act (EU AI Act) between co-legislators. In your opinion, what does the EU AI Act aim to achieve?
The EU AI Act is an ambitious attempt to provide a common regulatory framework for AI, particularly “high-risk” AI applications. The challenge is to create a regulatory framework that is flexible enough to cover the multiple uses of AI. AI is just software, but it’s what you do with the software that counts. That’s why the EU AI Act refers to existing regulations, such as the regulation of medical devices, automobiles, or banking systems.
The proposal seems to have sparked quite a lot of debate in the European institutions already. Which provisions of the EU AI Act do you think will be the most contentious, or the most difficult to find common ground on?
The idea of creating a list of “high-risk” AI applications is controversial because some applications will inevitably be left out. The list of high-risk applications will be updated regularly, but we can still expect a lag and some gaps in coverage.
Another controversial topic is “What is AI?”. The proposed regulation covers much more than machine learning, including within its scope rule-based decision trees and statistical approaches that have been around for decades — what some call GOFAI, “Good Old-Fashioned AI”.
The last controversial topic that comes to mind is whether Human Rights aspects can be addressed in the framework of safety certifications. The EU AI Act proposes a conformity assessment process with a CE mark, the kind of process used to certify the safety of refrigerators or medical devices. That’s an attractive idea, but it remains to be seen how Human Rights can be addressed.
Are there any outstanding issues that need to be addressed or areas that are not sufficiently covered by the current proposal?
The ethical and Human Rights dimension needs more work. For example, it would make sense for the EU AI Act to require risk assessments for high-risk AI applications to address each of the AI ethics principles contained in the HLEG (High-Level Expert Group) guidelines. This would transform the non-binding HLEG guidelines into operational requirements.
Is there an overlap between the EU AI Act and other EU legislation? Does this create any potential issues?
There’s considerable overlap with the GDPR. The GDPR requires an impact assessment for high-risk data processing, and the EU AI Act requires a risk assessment for high-risk AI applications. But the definition of high-risk is not synchronised between the two texts, and neither is the risk assessment process. They operate in separate silos.
From my point of view, anything that is “high-risk” under the GDPR should also be “high-risk” under the EU AI Act (assuming AI is involved), and the GDPR impact assessment on fundamental rights should feed directly into the risk assessment done under the EU AI Act.
Going forward, what other policy debates are you going to be keeping an eye on in 2022?
In addition to the EU AI Act, I’m also watching the Digital Services Act (DSA), which will provide a regulatory framework for algorithms involved in content moderation. The Human Rights impacts are significant since algorithms often make an initial call on whether content should be blocked or not.
Thank you very much for your time and input, Winston.
Winston Maxwell is Director of the Law & Digital Technology Studies department at Télécom Paris – Institut Polytechnique de Paris, where he teaches and writes on subjects related to the regulation of data, AI and telecommunications.