The Right Way to Regulate Digital Harms - OPINION

22 December 2020

by David Kaye and Jason Pielemeier 

As the European Commission’s recent Digital Services Act demonstrates, lawmakers around the world are scrambling, with good reason, to address the extremism, disinformation, and manipulation that have consumed the digital ecosystem, distorted public discourse, and deepened polarization in recent years. And yet their efforts carry risks. Just as rules governing online domains can bolster democracy by promoting inclusive, informed debate, they can also be abused to inhibit freedom of expression.

Fortunately, international human rights law offers a set of principles that can guide regulation in a way that addresses toxic content while promoting freedom of expression. To help illuminate this process, our organization, the Global Network Initiative (GNI), recently brought together experts from across industry and the human-rights community to examine scores of content-regulation initiatives in more than a dozen countries, and provide relevant recommendations.

The first human-rights principle that must be applied is “legality,” which emphasizes the need for clear definitions adopted through democratic processes. Such definitions are missing in Tanzania, for example, which instead has rules barring online content that “promotes annoyance,” among other vague harms. When it is not clear what content is and is not allowed, governments are free to stretch the rules to maximize their power to restrict speech, users cannot know what constitutes lawful conduct, and courts and companies struggle to enforce the rules fairly.

Another vital principle is “legitimacy,” which dictates that governments may limit expression only for specific compelling reasons, such as the rights of others, public health, and public order. The principle of “necessity” then demands that restrictions be tailored to fulfill those legitimate goals and be proportionate to the interest being protected. No regulation should be adopted if a less speech-restrictive rule could do the job.

A human rights-focused approach helps to prevent disproportionate consequences. In this respect, the European Union’s proposed regulation on preventing the dissemination of terrorist or extremist content online misses the mark. The regulation would require companies of all types and sizes to remove terrorist content within one hour and to introduce proactive measures to filter such material. The dominant companies can afford to comply with such rules, but the rules would raise barriers to entry for innovative new players and lead to the disproportionate removal of all sorts of acceptable content.

But companies themselves can and should apply rules that advance human rights, regardless of government regulation. Here, transparency, due process, and accountability are essential.

For starters, social media companies must be much more forthcoming about how they regulate content. This means sharing as much information as possible publicly, and providing legitimately sensitive information to regulators and independent experts through vetted access regimes or multi-stakeholder arrangements, similar to the one GNI has created for sharing information about company responses to government demands.

With this information, governments can ensure that intermediaries moderate content consistently and fairly. To this end, regulators, given appropriate resources and expertise (and, ideally, engaging experts and users’ rights advocates), should be tasked with providing guidance for and oversight of content-moderation systems. At the same time, companies should be required to introduce mechanisms that give users greater control over what they see and share.

Ultimately, however, responsibility for moderating sensitive content should not fall solely on private companies. Instead, governments should put democratically accountable organs, like courts, in charge. France’s Law Against the Manipulation of Information, while imperfect, seeks to do that, providing an expedited process for judges to review alleged election-related disinformation. That way, companies are not the ones making these difficult, politically sensitive determinations. By contrast, France’s Constitutional Council recently struck down a hate-speech law, in part because it circumvented the courts.

No matter how clear the rules and how efficient the moderation systems, regulators and companies will make mistakes. That is why the final piece of the puzzle is dispute resolution and remedy. Companies should allow users to appeal content moderation decisions to independent bodies, with special consideration for vulnerable groups, like children, and those serving the public interest, like journalists. Governments and regulators should also be subject to transparency and accountability mechanisms.

Toxic content hardly began with the Internet. But online platforms have made it easier than ever to spread such content further and faster. If we are going to limit its spread without crushing freedom of expression, we need clear and comprehensive regulatory approaches based on human-rights principles. Otherwise, even rules designed with the best intentions could end up silencing the vulnerable and strengthening the powerful. That’s a lesson the world’s authoritarians know all too well.


David Kaye, former UN Special Rapporteur for Freedom of Opinion and Expression, is Independent Board Chair of the Global Network Initiative.

Jason Pielemeier is Policy Director of the Global Network Initiative.

Read the original article on project-syndicate.org.

