How might we alleviate our society’s misinformation problem? One suggestion goes as follows: the problem is that people are so ignorant, poorly informed, gullible and irrational that they lack the ability to discern credible information and real expertise from incredible information and fake expertise. People should take stronger personal and civic responsibility to be informed, and educate themselves in critical thinking and media literacy – if not for their own sake, then for the sake of society at large. People are too freewheeling with what they believe and what they share, and need to start thinking more critically about what they encounter and whom to trust.
This view places the primary responsibility for our current informational predicament – and the responsibility to mend it – on individuals. It views them as somehow cognitively deficient. An attractive aspect of this view is that it suggests a solution (people need to become smarter) directly where the problem seems to lie (people are not smart). Simply put, if we want to stop the spread of misinformation, people need to take responsibility for thinking better and learn how to stop spreading it. A closer philosophical and social scientific look at issues of responsibility with regard to information suggests that this view is mistaken on several counts.
First, consider a commonly accepted requirement for someone to be responsible for something. According to what is known to philosophers as the epistemic condition of responsibility, one can only be held responsible for outcomes that one caused knowingly. For people to cause actions knowingly, they must have some level of awareness of what they are doing, what the consequences of the actions are, that there are alternatives to the action, and that the action is morally significant. If they do not fulfil these conditions, it is (at least intuitively) difficult to hold them fully and personally responsible. If I break a family heirloom while sleepwalking, or press a button that has the opposite effect to what I think it does, or do not realise that it is morally significant which kind of face mask I choose to wear during the COVID-19 pandemic, all despite my sincere efforts to do the right thing, it’s difficult to blame me for the undesirable moral consequences of my actions. I simply wasn’t aware that what I was doing was wrong.
The epistemic condition of responsibility is a complicated, hotly debated concept among moral philosophers. However, if we accept such a condition, it raises an even more complicated philosophical question when applied to the proposed responsibility to be informed. How responsible are people for knowing what they know and share? Or, since awareness and knowledge are prerequisites of responsibility, how knowingly do people know and share things?
Structural aspects of our environments can corrupt even our best intentions
There are two arguments that support the view that people’s belief-forming and information-sharing behaviour generally does not fulfil the epistemic condition of responsibility. According to the first argument, what anyone knows, how they treat new claims that they encounter, and the choices that they make in sharing new information are based on their previous experiences. These experiences are in turn based on still previous experiences, and so on, until we reach a first experience (birth), an event for which no newborn can be responsible.
An undesirable feature of this argument is that, because it relies on a kind of fundamental determinism, people do not seem to have any responsibilities at all: their behaviour is ultimately based on an involuntary sequence of events along which they merely ride. If we want to avoid this nihilistic conclusion, a second, more contingent argument can be made instead. People do have some degree of agency, but how they form beliefs and their choices to share them take place in the overlapping collection of information environments of old and new forms of media, educational institutions, cultural institutions, workplaces, and other physical and virtual spaces. Individuals’ behaviour in these environments is greatly affected by their designs: the problems of our malfunctioning information environments are not simply the summation of individual bad habits. As the philosopher Miranda Fricker observes, there is a limit to what the virtue of individuals can achieve in the face of structures of unequal power. Sometimes, looking beyond the level of individuals, we can see that the structural aspects of our environments can corrupt even our best intentions.
Social media is a good example of where this is happening. Users are at the mercy of non-transparent algorithms that favour certain kinds of content. If the algorithms favour extreme content, promote poorly corroborated claims, and form rabbit holes that are easy to fall down, it is unsurprising that users unwittingly slip up. Coming to carefully formed conclusions is very difficult even under the best conditions; in these hostile environments, users are not so much incapable as they are tricked. Crucially, users cannot influence these algorithms democratically, nor can they collectively decide the mechanisms and rules of their online public fora. Many are not even aware that an algorithm is at work shaping their entire experience of the platform.
Considering these two points together – that the designs of citizens’ information environments are not in their own hands, and that these environments make forming sound beliefs difficult – it does not seem like individual citizens are the ones responsible for the problems of our current collective information environment.
If disinformation can be made to be profitable, we should not expect those who profit to self-regulate
However, even if we accept that citizens are not primarily causally responsible for our poor information environments, it could be argued that they nonetheless have a remedial responsibility to mend them. There are two reasons why it is unclear whether positing a personal responsibility to be informed improves our information practices. First, it is reasonable to expect that a lot of people will react poorly to simply being blamed for their ignorance. As the political theorist Iris Marion Young argued in her book Responsibility for Justice (2011): ‘Rhetorics of blame in public discussion of social problems … usually produce defensiveness and unproductive blame-switching.’ In the worst case, blame might even exacerbate the problems of our information environments by deepening polarisation. Blame-switching is especially dangerous if citizens do not themselves collectively agree that they are the problem, which they clearly do not.
Even if there were a mass willingness to accept accountability, or if a responsibility could be articulated without blaming citizens, there is no guarantee that citizens would be successful in actually practising their responsibility to be informed. As I said, even the best intentions are often manipulated. Critical thinking, rationality and identifying the correct experts are extremely difficult things to practise effectively on their own, much less in warped information environments. This is not to say that people’s intentions are universally good, but that even sincere, well-meaning efforts do not necessarily have desirable outcomes. This speaks against proposing a greater individual responsibility for misinformation because, if even the best intentions can be corrupted, then there isn’t a great chance of success.
But all this does not mean that citizens should not be encouraged to do what they can. It is prudent to stay mindful of the sources of online information, to pause and consider a story’s context, bias or satirical nature before sharing it, and to check the publication date. Disinformation is especially effective when repeated often, so it’s good to remain critical of especially outlandish content encountered multiple times. And while there is great concern over deepfake technology, far more ubiquitous are ‘cheapfake’ videos, which are manipulated using conventional editing techniques. Reverse image searches can also reveal suspicious sources for specific images.
Leaning away from individual responsibility means that the burden should be shifted to those who have structural control over our information environments. Solutions to our misinformation epidemic are effective when they are structural and address the problem at its roots. In the case of online misinformation, we should understand that technology giants aim to create profit, not public democratic goods. If disinformation can be made to be profitable, we should not expect those who profit to self-regulate and adopt a responsibility toward information by default. Placing accountability and responsibility, by democratic means, not only on technology companies but also on government, regulatory bodies, traditional media and political parties is a good first step toward fostering information environments that encourage good knowledge practices. This step provides a realistic distribution of both causal and effective remedial responsibility for our misinformation problem without nihilistically throwing out the entire concept of responsibility – which we should never do.