The EU’s DSA Proposal: The end of self-regulation?


This post has been drafted by Sergi Gálvez Duran.

As far back as 2011, Eli Pariser's book The Filter Bubble presented the dangers of personalized algorithms for managing information online. Ten years later, health misinformation during COVID-19 is a clear example of how fake news on social media can affect people's behaviour (vaccination rates, mask-wearing, etc.). Until now, technology companies have been the actors setting the rules on what information can be shown on the platforms they control and how the propagation of misinformation is monitored. In other words, they operate under a self-regulation model. For example, Twitter prohibits the promotion of political content, and Facebook recently modified its advertising policies to allow it to restrict certain political ads. However, self-regulation should be limited when important countervailing values are on the line, such as freedom of expression, privacy, or non-discrimination.

Towards legal intervention

Algorithms designed to capture users' attention are also likely to promote misleading clickbait, viral stories, and the spread of misinformation. In this context, the EU's proposed Digital Services Act ("DSA") seeks to regulate, amongst other things, the way algorithmic systems shape information flows online. The DSA's recitals show that this is indeed an area of concern for EU legislators: Recital 62 expressly notes that algorithmic recommender systems "can play an important role in the amplification of certain messages, the viral dissemination of information and the stimulation of online behaviour".

The legislative intent to regulate how algorithms drive online content is set forth in Article 29 of the proposed DSA. First, Article 29(1) provides that Very Large Online Platforms ("VLOPs") shall set out in their terms and conditions the main parameters used in their recommender systems. Additionally, Article 29(1) stipulates that VLOPs shall explain "any options for the recipients of the service to modify or influence those main parameters that they may have made available, including at least one option which is not based on profiling". To some extent, the EU Commission's attempt to require transparency towards users should be welcomed.
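By way of illustration, the following minimal sketch (entirely hypothetical; the class and function names are this post's own, not drawn from the DSA or from any platform's code) shows what a profiling-based option and a non-profiling option could look like side by side: a personalized ranking driven by a predicted engagement score versus a plain reverse-chronological feed.

```python
# Hypothetical sketch of the choice Article 29(1) envisages: a feed that can
# be ranked with or without profiling. All names are illustrative only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    text: str
    published: datetime
    predicted_engagement: float  # score produced by a profiling model

def rank_feed(posts: list[Post], use_profiling: bool) -> list[Post]:
    if use_profiling:
        # Profiling-based option: order by the personalized engagement score.
        return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
    # Non-profiling option: plain reverse-chronological order.
    return sorted(posts, key=lambda p: p.published, reverse=True)

posts = [
    Post("viral clickbait", datetime(2021, 5, 1), predicted_engagement=0.9),
    Post("fact-checked news", datetime(2021, 5, 2), predicted_engagement=0.3),
]
print([p.text for p in rank_feed(posts, use_profiling=True)])   # clickbait first
print([p.text for p in rank_feed(posts, use_profiling=False)])  # newest first
```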

However, the reality is more complicated. The EU Commission's view that users will be able to shape their online experience simply because they can influence some parameters wrongly assumes that those parameters are the ones that determine the final recommendations. Many recommender systems are based on machine learning models in which it is not clear what the "main parameters" are or what effects they have. Moreover, access to the main parameters does not necessarily ensure that users understand how information is prioritized for them, especially when the parameters are numerous and users' knowledge is limited.
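The difficulty can be illustrated with a toy ranker in the style of matrix factorization, a common recommender design (the dimensions and data below are invented stand-ins, not any platform's actual model): the final ordering emerges from tens of thousands of learned weights, none of which maps onto a disclosable "main parameter".

```python
# Hypothetical illustration of why "main parameters" are hard to name in a
# learned recommender: the ranking is driven by many trained weights, not by
# a handful of human-readable settings. Dimensions here are toy-sized.
import numpy as np

rng = np.random.default_rng(0)

# In a real system these embeddings would be learned from behavioural data
# (clicks, watch time, ...); here they are random stand-ins.
user_embedding = rng.normal(size=64)            # one row of a user matrix
item_embeddings = rng.normal(size=(1000, 64))   # one row per candidate item

# The "recommendation" is simply the items with the highest dot-product score.
scores = item_embeddings @ user_embedding
top_items = np.argsort(scores)[::-1][:10]
print(top_items)

# There is no single weight one could disclose as "the" main parameter:
# every one of the 64 * 1001 numbers above contributes to the final order.
```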

Also, research shows that users do not generally read or understand terms and conditions, because almost all of them are long and incomprehensible. In fact, most users who are not presented with a policy by default never click to read it, and those who do click end up merely skimming through its text (see, e.g., Nili Steinfeld). Moreover, as pointed out by the European Data Protection Supervisor, "[t]erms and conditions of very large online platforms are often even more complex because they inform about the many related services platform offer".

On a different note, it is questionable whether the opt-out mechanism for profiling introduced by Article 29(1) is compatible with Article 22(1) of the General Data Protection Regulation ("GDPR"), at least with regard to fully automated systems used by online platforms to determine the information displayed. Article 22(1) GDPR prohibits decisions based solely on automated processing, including profiling. This prohibition is subject to (essentially) two derogations: the necessity of the processing for entering into or performing a contract, and the data subject's explicit consent.

With regard to the first exception (necessity for entering into or performing a contract), one might wonder: is extensive automated profiling really necessary to suggest online content? As for the second, the fact that recommender systems are based on profiling by default leaves no room for the exception of the data subject's explicit consent. Moreover, the Court of Justice of the European Union has held that "active consent is thus now expressly laid down in Regulation 2016/679" and that "[R]ecital 32 expressly precludes 'silence, pre-ticked boxes or inactivity' from constituting consent" (Planet49, C-673/17, EU:C:2019:801, paragraph 62).

The DSA Proposal is ineffective in countering power asymmetry

The DSA's focus on increasing transparency towards users to tackle disinformation can make certain contributions to the public accountability of VLOPs. However, the proposal still relies on neoliberal governance and self-regulation (technology companies' terms and conditions) to address the problem of disinformation. Recommender systems are programmed to rank information and show it to users on the basis of their profiles. This, together with VLOPs' profit-driven incentive to prioritize clicks over safety, leads algorithms to promote misleading content over factual information. In other words, the DSA proposal fails to confront the asymmetrical nature of the power relationship between platforms and users.

A radical proposal to rebalance the power asymmetry between online platforms and users would be to let users switch off recommender systems altogether. Yet the DSA does not go that far. Article 29(2) establishes that "where several options are available" (i.e., if online platforms allow users to choose), VLOPs shall provide an easily accessible functionality on their online interface allowing users to select and modify their preferred option for each of the recommender systems that determine the relative order of information presented. Since algorithms are opaque and platforms are restricting access to their public Application Programming Interfaces (APIs), the justification for granting platforms the legal power to decide what people perceive as true diminishes. Providing users with limited choices is not necessarily the same as protecting them from misinformation flows.
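The limitation is easy to see in a small hypothetical sketch (the option names and the commented-out "off" entry are assumptions made for illustration): the user chooses, but only from a menu that the platform itself defines.

```python
# Hypothetical sketch of the Article 29(2) functionality: the user may pick
# among options, but only among those the platform chooses to offer.
AVAILABLE_OPTIONS = {
    "personalised": "ranking based on your profile",
    "chronological": "newest first, no profiling",
    # An "off" entry is what a more radical rule could require, but the DSA
    # proposal does not oblige platforms to offer it:
    # "off": "no recommender system at all",
}

def select_option(choice: str) -> str:
    if choice not in AVAILABLE_OPTIONS:
        raise ValueError(f"platform does not offer option {choice!r}")
    return AVAILABLE_OPTIONS[choice]

print(select_option("chronological"))  # works: offered by the platform
try:
    print(select_option("off"))
except ValueError as e:
    print(e)  # fails: the menu is set by the platform, not the user
```

Nothing in Article 29(2) obliges the platform to put "off" on that menu.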

If the EU legislator cares about this power asymmetry, then an ecosystem combining democratic and independent oversight is needed to ensure genuine scrutiny of recommender systems.
