Does Facebook control the political debate?

Digital disinformation

Philipp Mueller

is an academic advisor at the Institute for Media and Communication Studies at the University of Mannheim. His work focuses on political communication, the reception and effects of journalistic media content, and media change and digitization.

What are Facebook, Twitter and Co. doing against disinformation? Should they merely warn their users about false reports, or should they remove such reports and other problematic content outright? Some measures are useful, others are counterproductive.

With an advertising campaign in the summer of 2018, Facebook sought to draw attention to the fact that the network is increasingly taking action against fake news. (© picture alliance / ZUMA Press)

Social network sites (SNS) on the Internet attract, alongside much desirable content, a whole range of problematic material: insults, discrimination, conspiracy theories, political propaganda and false reports. But which countermeasures are appropriate and sensible? Facebook in particular has made various attempts in recent years to respond to a growing public debate on this question while maintaining the principle of open access for all types of content. The approaches range from working with so-called "fact-checking" partners, who investigate reports of dubious content, to adjusting the algorithms that decide which content is displayed to users. Yet the prevailing tone of the public debate remains critical and, above all, calls for problematic content to be deleted. To make social network sites tackle this more consistently, the German federal government passed the Network Enforcement Act (NetzDG) in 2017.

In the case of insults, defamation or incitement to hatred, the situation seems clear. Such statements violate applicable law and must be removed from the platforms for that reason alone. The only debatable point is whether the decision as to when such an offense has been committed can and should be made by the platform operators themselves, or whether it would have to be decided by a court in each individual case.

When it comes to deliberately spread false reports, on the other hand, which currently circulate through public debate under the term "fake news", the solution is less clear. Within the framework of freedom of expression in democratic states, everyone is, for good reason, free to assert untruths, provided that these assertions do not directly harm anyone else. This is regulated in this way, among other things, because in individual cases it is not easy to determine unambiguously whether a statement is true or untrue. If the spread of untruths per se were prevented, this could contribute to the suppression of socially marginalized, but not necessarily untrue, opinions and assertions.

The problem of "overblocking"

Research suggests that false reports are more likely to be believed if they confirm their readers' pre-existing worldview than if they contradict it. [1] Even so, any additional distribution can have harmful effects, provided the report reaches the right readers. In this respect, it appears desirable that messages that can be clearly identified as false reach as few users as possible. Nevertheless, the deletion of false reports should be treated with caution, because it is often impossible to demonstrate conclusively whether a statement is true. The sociology of knowledge also suggests that whether a statement is held to be true in a society always depends on the cultural and historical context. [2] If supposed untruths were rigidly deleted from social network sites, genuinely true content could therefore be deleted as well. This is referred to as "overblocking".

The problem is exacerbated when the decision about which content to delete lies in the hands of companies with their own interests at stake. The NetzDG, which came into force in 2017, obliges SNS operators, under threat of heavy fines, to promptly review and, if necessary, delete content that users or complaint bodies report as problematic. To avoid the threat of fines, operators could therefore delete more content than would actually be sensible. [3] The creation of a public ombudsman body, comprising various civil society actors and overseeing the deletions carried out by the operators, therefore appears advisable.

Surely one can still say that!

A second problem is the impression that deletions can create: that it is politically unpopular content in particular that is being removed. The entire debate about fake news on the Internet must be viewed against the background of current political developments. All over the world we are experiencing an upswing in right-wing populist movements, which claim a conspiracy of social elites against the interests of the people and demand national isolation. These demands are directed against the socially and economically liberal camp that has dominated political events in recent decades. Within this debate, the "fake news" accusation has mutated into a polemical term used to discredit information coming from the other side. [4]

If it is primarily false reports promoting right-wing populist positions that are deleted from social network sites, this could be interpreted as further evidence of the alleged elite conspiracy, even where the content is demonstrably untrue. On the one hand, this would strengthen right-wing populist arguments among their supporters; on the other, it could lead those supporters to seek out new, specialized platforms for exchanging views on the Internet, thus furthering a fragmentation of society's information environments. In order not to add fuel to the fire of the current debates, content should therefore be deleted cautiously and only where there is a verifiable violation of the law, ideally confirmed by an independent body.

Warn instead of deleting?

So if deleting content is not the first choice for countering false reports on the Internet, what about warning notices? Facebook, for example, works in many countries with journalistic service providers who provide information on false reports spread via the network and contribute their own articles correcting the misinformation. In Germany, these are the Correctiv research network and, since March 2019, the German Press Agency. Content identified as a false report in this way is not deleted by Facebook but algorithmically downranked and thus displayed to users with a lower probability. In addition, unlike on Twitter, users on Facebook receive two types of warning:

  • If users want to share a post that has been disputed by fact checkers and distribute it further, a pop-up message appears indicating that the report has been disputed.
  • If users view a disputed article, links to correcting articles from the fact-checking partners appear beneath it under the heading "More on the topic". Originally, Facebook had used a clearer marking for this: a red warning symbol and the note "Disputed by fact checkers" appeared directly below the message. However, this variant was soon abandoned.
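
The two-stage flow described above can be illustrated with a minimal Python sketch. It is purely illustrative: the data structures, function names and warning texts are assumptions made for this example and do not describe Facebook's actual systems.

```python
# Illustrative sketch of the two warning types described above.
# All names, structures and texts are hypothetical assumptions for
# this example; they do not describe Facebook's actual systems.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: str
    text: str
    disputed: bool = False                                # flagged by fact-checking partners
    fact_checks: list[str] = field(default_factory=list)  # links to correcting articles


def share_warning(post: Post) -> str | None:
    """Pop-up warning shown when a user tries to share a disputed post."""
    if post.disputed:
        return "This report has been disputed by independent fact checkers. Share anyway?"
    return None  # no warning, sharing proceeds as usual


def related_articles(post: Post) -> list[str]:
    """Correcting articles shown under a disputed post, headed 'More on the topic'."""
    return post.fact_checks if post.disputed else []


if __name__ == "__main__":
    post = Post(
        post_id="42",
        text="Miracle cure X heals everything!",
        disputed=True,
        fact_checks=["https://example.org/fact-check-miracle-cure-x"],
    )
    print(share_warning(post))      # warning before further distribution
    print(related_articles(post))   # links displayed beneath the post
```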


A big problem, however, is that users may experience warnings as paternalism and react with anger and defiance. In the psychological literature this phenomenon is called reactance. [6] If people perceive their personal freedom of action as being restricted from the outside, they react with anger and carry out the targeted behavior all the more defiantly. In the context of persuasive media messages, i.e. messages intended to convince, a so-called "boomerang effect" can result: when users notice that they are to be persuaded in a certain direction, for example by means of a warning (this is called "persuasion knowledge" [7]), they assume an illegitimate attempt to influence them and trust their original convictions all the more.

At the same time, it is questionable whether a warning headed simply "More on the topic" is even recognized as a correction. It is now common on news websites to place links to further articles on a topic at the end of an article or even within the text. In terms of content, these linked articles usually offer in-depth, but not outright contradicting, information on a topic. The design of the warning notices used by Facebook since the end of 2017 matches that of such links to related articles, and only closer inspection and reading make it clear that they contain contradicting information. These warnings could be so subtle that in many cases they simply go unnoticed.

A third problem for the effectiveness of warnings arises from the so-called "sleeper effect". [8] This is the observation, first made in the 1950s, that some time after receiving news content, people often still remember the message itself but have forgotten the source of the information and how they assessed that source at the time of reception. This means that warnings, once read, are more likely to be forgotten than the actual content of a false report.

Between public task and algorithmic selection

On the basis of the arguments gathered here, it can be concluded that deletion can have a number of problematic consequences: it interferes with freedom of expression on the one hand and could be used in populist discourses as evidence of a conspiracy of social elites on the other. At the same time, of course, reports that violate applicable law, for example because they contain defamation or hate speech, have to be deleted. However, this measure should be applied sparingly and not on the platform companies' own authority; rather, it is a public task.

There are also doubts about warnings. If formulated too drastically, they could trigger reactance. If formulated too weakly, they could be overlooked. In addition, they may be forgotten over longer periods of time. Yet they cannot be dispensed with entirely. In particular, warnings shown before the dissemination, i.e. sharing, of false reports appear sensible, since personal recommendation from user to user is one of the central mechanisms in the spread of false reports on the Internet. [9]

Another starting point for curbing the spread of false reports is algorithmic selection. Pre-programmed decision rules define which content is shown to users with what probability on SNS platforms. From the point of view of democratic theory, it seems desirable that a variety of information sources, topics and opinions receive attention. At the same time, this makes it possible to considerably restrict the dissemination of information clearly identified as false, without any deletions or warnings. According to its own statements, Facebook has increasingly implemented this approach recently. Apparently with some success: a study from the USA shows that the proportion of false reports on Facebook has decreased significantly since 2017, while it continued to increase on Twitter.
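
What such algorithmic devaluation could look like in principle is shown in the following minimal sketch. The scoring formula and the penalty factor are assumptions chosen for this illustration, not a description of any platform's real ranking algorithm.

```python
# Illustrative sketch: downranking items flagged as false reports.
# The scoring formula and penalty factor are assumed values for this
# example, not any platform's actual ranking algorithm.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class FeedItem:
    item_id: str
    engagement_score: float  # e.g. predicted user interactions
    flagged_false: bool      # marked as a false report by fact checkers


FLAG_PENALTY = 0.2  # flagged items keep only 20 % of their score (assumed value)


def ranking_score(item: FeedItem) -> float:
    """Base score, reduced for items flagged as false (downranked, not deleted)."""
    score = item.engagement_score
    if item.flagged_false:
        score *= FLAG_PENALTY
    return score


def rank_feed(items: list[FeedItem]) -> list[FeedItem]:
    """Order the feed so that downranked items are displayed less prominently."""
    return sorted(items, key=ranking_score, reverse=True)


if __name__ == "__main__":
    feed = [
        FeedItem("a", engagement_score=0.9, flagged_false=True),
        FeedItem("b", engagement_score=0.5, flagged_false=False),
    ]
    for item in rank_feed(feed):
        print(item.item_id, round(ranking_score(item), 2))
    # Item "a" drops below "b" despite its higher raw engagement score.
```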

These diverging developments on the two social network sites once again show how problematic it can be that the few central information platforms on the Internet are in the hands of private companies that are only partially accountable to the general public. In the end, it is up to these companies to decide whether and which measures are taken against the spread of false information.