10 questions to ask before covering mis- and dis-information, by Nic Dias and Claire Wardle.
From the post:
Can silence be the best response to mis- and dis-information?
First Draft has been asking ourselves this question since the French election, when we had to make difficult decisions about what information to publicly debunk for CrossCheck. We became worried that – in cases where rumours, misleading articles or fabricated visuals were confined to niche communities – addressing the content might actually help to spread it farther.
As Alice Marwick and Rebecca Lewis noted in their 2017 report, Media Manipulation and Disinformation Online, “[F]or manipulators, it doesn’t matter if the media is reporting on a story in order to debunk or dismiss it; the important thing is getting it covered in the first place.” Buzzfeed’s Ryan Broderick seemed to confirm our concerns when, on the weekend of the #MacronLeaks trend, he tweeted that 4channers were celebrating news stories about the leaks as a “form of engagement.”
We have since faced the same challenges in the UK and German elections. Our work convinced us that journalists, fact-checkers and civil society urgently need to discuss when, how and why we report on examples of mis- and dis-information and the automated campaigns often used to promote them. Of particular importance is defining a “tipping point” at which mis- and dis-information becomes beneficial to address. We offer 10 questions below to spark such a discussion.
Before that, though, it’s worth briefly mentioning the other ways that coverage can go wrong. Many research studies examine how corrections can be counterproductive by ingraining falsehoods in memory or making them more familiar. Ultimately, the impact of a correction depends on complex interactions between factors like subject, format and audience ideology.
Reports of disinformation campaigns, amplified through the use of bots and cyborgs, can also be problematic. Experiments suggest that conspiracy-like stories can inspire feelings of powerlessness and lead people to report lower likelihoods to engage politically. Moreover, descriptions of how bots and cyborgs were found give their operators the opportunity to change strategies and better evade detection. In a month awash with revelations about Russia’s involvement in the US election, it’s more important than ever to discuss the implications of reporting on these kinds of activities.
Following the French election, First Draft has switched from the public-facing model of CrossCheck to a model where we primarily distribute our findings via email to newsroom subscribers. Our election teams now focus on stories that are predicted (by NewsWhip’s “Predicted Interactions” algorithm) to be shared widely. We also commissioned research on the effectiveness of the CrossCheck debunks and are awaiting its results to evaluate our methods.
…
The ten questions (see the post) should provoke useful discussions in newsrooms around the world.
I have three additional questions that round Nic Dias and Claire Wardle's list to a baker's dozen:
- How do you define mis- or dis-information?
- How do you evaluate information to classify it as mis- or dis-information?
- Are your evaluations of specific information as mis- or dis-information public?
Defining dis- or mis-information
The standard definitions (Merriam-Webster) for:
disinformation: false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth
misinformation: incorrect or misleading information
would find nodding agreement from parties as diverse as Al Jazeera, the CIA, the European Union and Recep Tayyip Erdoğan.
However, what is or is not disinformation or misinformation varies from one of those parties to the next.
Before reaching the ten questions of Nic Dias and Claire Wardle, define what you mean by disinformation or misinformation, ideally with numerous examples, especially ones that sit close to the boundaries of your definitions.
Otherwise, all your readers know is that, on the basis of a definition of disinformation/misinformation known only to you, some piece of information has been judged untrustworthy.
Documenting your process for classifying information as dis- or mis-information
Assuming you do arrive at a common definition of misinformation or disinformation, what process do you use to classify information according to those definitions? Ask your editor? That seems like a poor choice, but no doubt it happens.
Do you consult and abide by an opinion found on Snopes? Or PolitiFact? Or FactCheck.org? Do all three have to agree for a judgement of misinformation or disinformation? What about other sources?
What sources do you consider definitive on the question of mis- or disinformation? Do you keep that list updated? How did you choose those sources over others?
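To make that discussion concrete, here is a minimal sketch, in Python, of what an explicit decision rule might look like. The verdict labels, source names and unanimity requirement are all illustrative assumptions, not anyone's actual editorial policy:

```python
from enum import Enum
from typing import Dict

class Verdict(Enum):
    """Simplified verdict labels a fact-checking source might publish."""
    TRUE = "true"
    MISLEADING = "misleading"
    FALSE = "false"

# Hypothetical policy: these sources must all agree before a claim is labelled.
REQUIRED_SOURCES = ("Snopes", "PolitiFact", "FactCheck.org")
NEGATIVE = {Verdict.FALSE, Verdict.MISLEADING}

def unanimous_misinformation(verdicts: Dict[str, Verdict]) -> bool:
    """Classify a claim as misinformation only if every required source
    has reviewed it and rated it false or misleading."""
    return all(verdicts.get(source) in NEGATIVE for source in REQUIRED_SOURCES)

# Two of the three sources rate the claim false; the third has not reviewed it.
verdicts = {"Snopes": Verdict.FALSE, "PolitiFact": Verdict.FALSE}
print(unanimous_misinformation(verdicts))  # False -- no unanimous judgement yet
```

Whatever rule you adopt, writing it down this explicitly forces you to answer the questions above: which sources count, how many must agree, and what happens when they disagree.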
Documenting your evaluation of information as dis- or mis-information
Having a process for evaluating information is great.
But have you followed that process? If challenged, how would you establish the process was followed for a particular piece of information?
Is your documentation office “lore,” or something more substantial?
An online form that captures the information, its source, the fact-check sources consulted, the date, the decision and the person making the decision would take only seconds to populate. In addition to documenting the decision, you can build up a record of each source's reliability.
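As a rough illustration only, with field names that are my assumptions rather than any newsroom's actual system, such a form might map onto a record like this:

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class FactCheckDecision:
    """One row in the hypothetical decision log described above."""
    claim: str                     # the information being evaluated
    claim_source: str              # where the information appeared
    fact_check_sources: List[str]  # fact-check sources consulted
    date_checked: date             # when the sources were consulted
    decision: str                  # e.g. "misinformation", "disinformation", "accurate"
    decided_by: str                # person responsible for the call

# Populating a record takes seconds and leaves an auditable trail.
log: List[FactCheckDecision] = [
    FactCheckDecision(
        claim="Example claim circulating on social media",
        claim_source="example.com",
        fact_check_sources=["Snopes", "PolitiFact"],
        date_checked=date.today(),
        decision="misinformation",
        decided_by="J. Editor",
    )
]

# Over time the same log doubles as a per-source reliability record.
flagged_by_source = Counter(r.claim_source for r in log if r.decision != "accurate")
print(flagged_by_source)  # Counter({'example.com': 1})
```

The point is not the particular fields but that the log is structured, dated and attributable, so a challenged decision can be traced back to the process that produced it.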
Conclusion
Vagueness makes it easy to discuss and condemn mis- or dis-information, but it makes it hard to build a process for evaluating information or a common ground for classifying it, to say nothing of documenting your decisions about specific items.
Don’t be the black box of whim and caprice users experience at Twitter, Facebook and Google. You can do better than that.