This piece is authored by Sally Broughton Micova, CERRE Academic Co-Director and an Associate Professor in Communications Policy and Politics at the University of East Anglia (UEA).
We have entered a new phase in the implementation of the Digital Services Act’s provisions on managing systemic risk from very large online platforms and search engines (VLOPSEs). In the last week of November 2024, the designated services released public versions of their systemic risk assessments. This means that we finally get a glimpse into how they have understood, and how they believe they are mitigating, the broad list of harms for which they need to manage risk. These harms include the dissemination of illegal content, negative effects on all fundamental rights, and impingement on major societal systems such as electoral processes, public health, and security. Regulators, academics, civil society, and the providers of designated services can now start looking across the services to learn what is working and where, and to identify where systemic risks are not being managed well enough.
In 2023, Andrea Calef and I argued in our CERRE Report Elements for Effective Systemic Risk Assessment Under the DSA that there is a need to establish what “good” looks like in the mitigation of the systemic risks covered by the DSA and to set benchmarks for what should be achieved in each of the risk areas. Pointing to the highly interconnected nature of digital ecosystems, including through common exposure to malicious actors, we called for iterative learning through inclusive evaluation of systemic risk management that looks across designated services and risk areas. The publication of the risk assessments and audit reports provides an opportunity to begin doing that, but given the lack of detail in most of them and the variety of structures used, these reports are more likely to be useful in helping researchers target their requests for data access to fill in the gaps.
Diverging approaches to understanding risk
One thing we can already see in the public versions of the risk assessments is that each provider has created its own way of grouping or breaking down the risk areas, which complicates meta-analysis. For example, Amazon created four overarching categories: illegal content, fundamental rights, democratic processes, and public health. TikTok broke illegal content down into CSAM, terrorist content, and hate speech, and reduced the public health risk area to medical misinformation. Google worked from a set of 40 risk statements indicating how it defined each of the areas listed in the DSA’s Article 34(1), including each of the named fundamental rights plus the freedom to conduct a business.
In the last year, I have been working with colleagues from CERRE’s academic team and from the UEA Centre for Competition Policy on establishing benchmarks for the mitigation of systemic risks – defining “good” – and identifying what metrics and data points can be used to evaluate against them. In my recent CERRE issue paper Systemic Risk in Digital Services: Benchmarks for Evaluating Management of Risk of Terrorist Content Dissemination, I noted five specific issues to look for in the recently published systemic risk assessments in relation to this type of illegal content, including evidence on each service’s engagement with collaborative efforts, its handling of borderline content, and its reliance on third parties for mitigation measures.
An initial reading of the risk assessment reports indicates that there is some evidence of services’ engagement with collaborative efforts on illegal content, and some reports acknowledge the use of third parties for content moderation and intelligence. However, the reports lack the level of detail needed to understand how big a role external shared resources, such as hash databases, or commercial providers of content identification, red-team testing, and insight play in mitigating the risk of illegal content dissemination. Understanding these different factors will be crucial for seeing vulnerabilities shared across the ecosystem and for ensuring fundamental rights are protected.
Benchmarks: prevention, fundamental rights and transparency
In setting out the benchmarks for terrorist content, I argued that there is a need to balance exposure prevention targets with fundamental rights targets, and for transparency regarding relationships with law enforcement and other Member State authorities. There is some evidence in the audit reports of the designated companies that some auditors looked at accuracy rates for illegal content detection and the information given to users whose content was removed, though this was mainly in relation to compliance with other elements of the DSA, not in the context of systemic risk to fundamental rights or illegal content dissemination. The audit reports produced by EY concerning the services of Meta and Google, for example, helpfully note the benchmarks set by the companies in relation to Article 14(4), which requires service terms to be enforced in a proportionate and rights-respecting manner. However, these are in the form of intentions and definitions rather than specific targets.
Some risk assessment reports contained evidence on how the services considered fundamental rights in relation to terrorist content removal. For example, Meta reported reviewing its Dangerous Organisations and Individuals Community Standards policies in both the Facebook and Instagram reports. However, there does not seem to be enough evidence in the risk assessment or the audit reports to provide a clear picture of whether fundamental rights targets have been set in relation to illegal content mitigations and what those might be.
Risk assessments and audits: impetus for change?
Looking at the risk assessment and audit reports from the perspective of the benchmarks and specific issues related to the systemic risk of terrorist content dissemination, it is possible to find some grains of information for an evidence base on whether the benchmarks are being achieved. There are likely other nuggets in relation to the other risk areas as well. Overall, however, the reports do not tell us much. The lack of detail and comparability is a hindrance, but the amount of insight to be gained is also limited by the fact that these are compliance documents. In the risk assessment reports, service providers aim to demonstrate that they are doing their best to understand and mitigate the risks, and, in relation to Articles 34 and 35, the auditors are checking whether the services have processes and policies in place and are reporting as required.
It is evident that the auditors can initiate change in the services. For example, the auditors found LinkedIn lacking in relation to the information given back to users on notice and takedown, and regarding undue constraints on users’ ability to make complaints (both with freedom of expression implications). Consequently, the service committed in its implementation report to fixing those issues by early 2025. Meta’s auditors took issue with the lack of accuracy measures in the transparency reporting for both of its services, and they reported that this had been rectified by the time of publication. This means that if regulators or the wider stakeholder community set more explicit expectations in relation to systemic risk, the audit process may be one mechanism to push services towards meeting them.
The need for further evidence
Now that stakeholders, regulators, and the services themselves have some view into systemic risk assessment and mitigation by the designated services, the wider community can engage in evaluating the DSA’s risk management approach more broadly. This means considering shared resources and vulnerabilities and comparable evidence on the impact of mitigations, in addition to the compliance of individual services. Together with fellow CERRE academic Daniel Schnurr and my CCP colleagues Andrea Calef and Bryn Enstone, I put forward an agenda for learning and improving risk management across the areas covered by the DSA in Cross-Cutting Issues for DSA Systemic Risk Management: An Agenda for Cooperation. We argued for meta-analysis, a taxonomy of harms, and strategies for consistent use of information-gathering tools across services and risk areas. We pointed to specific functionalities and service features that merit examination. However, the reality is that, for the realisation of these strategic tools and many of our other recommendations across the systemic risk series, we will need much more evidence than is available in the risk assessments.