Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

October 5, 2019

Automatic News Comment Generation

Filed under: Artificial Intelligence,Natural Language Processing,Social Media — Patrick Durusau @ 3:09 pm

Read, Attend and Comment: A Deep Architecture for Automatic News Comment Generation by Ze Yang, Can Xu, Wei Wu, Zhoujun Li.

Abstract: Automatic news comment generation is beneficial for real applications but has not attracted enough attention from the research community. In this paper, we propose a “read-attend-comment” procedure for news comment generation and formalize the procedure with a reading network and a generation network. The reading network comprehends a news article and distills some important points from it, then the generation network creates a comment by attending to the extracted discrete points and the news title. We optimize the model in an end-to-end manner by maximizing a variational lower bound of the true objective using the back-propagation algorithm. Experimental results on two public datasets indicate that our model can significantly outperform existing methods in terms of both automatic evaluation and human judgment.

A tweet said this was a “dangerous” paper, so I had to follow the link.

This research could be abused, but how many news comments have you read lately? The comments made by this approach would have to degrade a lot to approach the average human comment.

Anyone interested in abusive and/or inane comments can scrape comments from Facebook or Twitter, set up a cron job and pop off the next comment for posting. That is several orders of magnitude less effort than the approach of this paper.
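For the curious, a minimal sketch of that pipeline, with the file name, schedule, and posting stub all hypothetical:

```python
# Minimal sketch of the low-effort pipeline described above. The queue file
# and post_comment() stub are hypothetical; wire them to your own setup.
# Example crontab entry: */30 * * * * python3 post_next_comment.py
import json
from pathlib import Path

QUEUE = Path("scraped_comments.json")  # a JSON list of scraped comment strings

def post_comment(text: str) -> None:
    # Stub: replace with whatever posting mechanism you use.
    print(f"posting: {text}")

def pop_next_comment() -> None:
    comments = json.loads(QUEUE.read_text())
    if comments:
        post_comment(comments.pop(0))           # take the oldest comment
        QUEUE.write_text(json.dumps(comments))  # persist the shortened queue

if __name__ == "__main__":
    pop_next_comment()
```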

I wonder: would coherence of comments across a large number of articles be an indicator that a bot is involved?

September 27, 2019

Weaponizing Your Information?

Filed under: Advertising,Fake News,Social Media,Social Networks — Patrick Durusau @ 8:29 pm

Study: Weaponized misinformation from political parties is now a global problem by Cara Curtis.

Social media, a tool created to guard freedom of speech and democracy, has increasingly been used in more sinister ways.

Memory check! Memory check! Is that how you remember the rise of social media? Have you ever thought of Usenet as guarding freedom of speech (maybe) and democracy (unlikely)?

The Global Information Disorder report, the basis for Curtis’ report, treats techniques and tactics at a high level, leaving you to fill in the details for an information campaign. I prefer “information,” as “disinformation” is in the eye of the reader.

I don’t have cites (apologies) to advertising literature on the shaping of information content for ads. Techniques known to work for advertisers, who have spent decades and $billions sharpening their techniques, should work for spreading information as well. Suggested literature?

March 28, 2019

Terrorist Usage of Twitter and Social Media (AKA Advertising)

Filed under: Advertising,Censorship,Social Media,Terrorism — Patrick Durusau @ 8:29 pm

Primer: Terrorist Usage of Twitter and Social Media

I mention this as an example of a catchy title for what is otherwise an “advertising on social media” post. Consider this re-write of the lead paragraph:

In recent years the Internet and social media has rapidly grown and become a part of everyday life for many people.  For example, YouTube alone has nearly two billion active users each month, has one billion hours of content watched every day, and over 300 hours of new video uploaded every minute (Aslam, 2019).  Other social media platforms also generate huge amounts of users and views.  The wide reach of these and other platforms has given many people and groups the opportunity to be heard when they otherwise would not have a voice.  While in many cases this opportunity is celebrated for supporting free speech, advertisers can take advantage of this access to reach and entice people that would otherwise be outside their influence.  Advertisers are becoming increasingly aware of, and taking advantage of, the global access the Internet and social media gives them.  These advertisers are no longer limited to recruiting new buyers in their physical sphere of influence; they can entice and recruit new buyers from anywhere around the world.  Advertisers are also using the Internet to encourage and carry out sales (physical and cyber) around the world…

The bolded text replaces text in the original.

For all of the bleating and whining about terrorists on social media, what is being discussed is advertising. Any decent introduction to advertising is more useful to terrorists and their opponents than all of the literature on terrorist use of social media.

Critics of terrorist advertising miss the validity of terrorist ads in the eyes of their target populations. Twenty- to thirty-year-old males in most cultures know they lack the ability to make a difference for their families and communities. Structural inequalities guarantee that lack of ability. Those have been the “facts” all their lives. Terrorists offer the chance, perhaps not to make a difference, but at least not to grow bent and old under the weight of oppression.

Your counter ad? …. There’s the problem with countering terrorist advertising. The facts underlying those ads are well known and have no persuasive refutation. Change the underlying facts as experienced by terrorists and their families and terrorist ads will die of their own accord. Keep the underlying facts and …, well, you know how that turns out.


August 3, 2018

Russian Bot Spotting, Magic Bullets, New York Times Tested

Filed under: Bots,Social Media,Twitter — Patrick Durusau @ 4:39 pm

How to Spot a Russian Bot by Daniel Costa-Roberts.

Spotting purported Russian bots on Twitter is a popular pastime for people unaware the “magic bullet” theory of communication has been proven false. One summary of “magic bullet” thinking:


The media (magic gun) fired the message directly into audience head without their own knowledge. The message cause the instant reaction from the audience mind without any hesitation is called “Magic Bullet Theory”. The media (needle) injects the message into audience mind and it cause changes in audience behavior and psyche towards the message. Audience are passive and they can’t resist the media message is called “Hypodermic Needle Theory”.

The “magic bullet” is an attractive theory for those selling advertising, but there is no scientific evidence to support it:


The magic bullet theory is based on assumption of human nature and it was not based on any empirical findings from research. Few media scholars do not accepting this model because it’s based on assumption rather than any scientific evidence. In 1938, Lazarsfeld and Herta Herzog testified the hypodermic needle theory in a radio broadcast “The War of the Worlds” (a famous comic program) by insert a news bulletin which made a widespread reaction and panic among the American Mass audience. Through this investigation he found the media messages may affect or may not affect audience.

“People’s Choice” a study conducted by Lazarsfeld in 1940 about Franklin D. Roosevelt election campaign and the effects of media messages. Through this study Lazarsfeld disproved the Magic Bullet theory and added audience are more influential in interpersonal than a media messages.

Nevertheless, Mother Jones and Costa-Roberts outline five steps to spot a Russian bot:

  1. Hyperactivity – more than 50 or 60 tweets per day
  2. Suspicious images – stock avatar
  3. URL shorteners – use indicates a bot
  4. Multiple languages – polyglot indicates a bot
  5. Unlikely popularity – for given # of followers

OK, so let’s test those steps against a known non-Russian bot that favors the US government, the New York Times.

  1. Hyperactivity – The New York Times joined Twitter 2 March 2007, 4,173 days ago, with 328,555 tweets as of this afternoon, so 78.73 on average per day. That’s hyperactive.
  2. Suspicious images – NYT symbol
  3. URL shorteners – always – signals a bot (displays nytimes.com, but if you check the links, a URL shortener)
  4. Multiple languages – Nope.
  5. Unlikely popularity – In which direction? NYT has 41,665,676 followers and only 17,145 likes, or one like for every 2,430 followers.

On balance I would say the New York Times isn’t a Russian bot, but given its like-to-follower ratio, it needs to work on its social media posts.
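The arithmetic is easy to check; a quick sketch reproducing both ratios:

```python
# Reproducing the hyperactivity and popularity calculations above.
days_on_twitter = 4173
tweets = 328_555
print(f"tweets per day: {tweets / days_on_twitter:.2f}")  # ~78.73

followers = 41_665_676
likes = 17_145
print(f"followers per like: {followers / likes:.0f}")     # ~2430
```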

Maybe the New York Times needs to hire a Russian bot farm?

July 18, 2018

Apologies for the Silence!

Filed under: Social Media — Patrick Durusau @ 12:18 pm

After years of posting on a daily basis, I fell into a slump on 30 June 2018 and haven’t posted since.

Sorry!

Part of the blame goes to social media, Facebook/Twitter, where I wasted time every day correcting people who were wrong. 😉

Both are tiresome and bottomless pits of error.

The sight of people wrapping themselves in flag and country over remarks concerning the US intelligence community, shocked me back into some semblance of sanity.

There are no words, no facts, no persuasive techniques that will sway anyone in the grip of such delusions.

That being the case, I was wasting my time trying to do so.

I’ve still been collecting links, so I have a large backlog of potential posts.

Spread the word! I’m back!

June 30, 2018

What’s Your Viral Spread Score?

Filed under: Fake News,News,Social Media,Social Networks — Patrick Durusau @ 4:13 pm

The Hoaxy homepage reports:

Visualize the spread of claims and fact checking.

Of course, when you get into the details, out of the box, Hoaxy isn’t set up to measure your ability to spread virally.

From the FAQ:


How does Hoaxy search work?

The Hoaxy corpus tracks the social sharing of links to stories published by two types of websites: (1) Low-credibility sources that often publish inaccurate, unverified, or satirical claims according to lists compiled and published by reputable news and fact-checking organizations. (2) Independent fact-checking organizations, such as snopes.com, politifact.com, and factcheck.org, that routinely fact check unverified claims.

What does the visualization show?

Hoaxy visualizes two aspects of the spread of claims and fact checking: temporal trends and diffusion networks. Temporal trends plot the cumulative number of Twitter shares over time. The user can zoom in on any time interval. Diffusion networks display how claims spread from person to person. Each node is a Twitter account and two nodes are connected if a link to a story is passed between those two accounts via retweets, replies, quotes, or mentions. The color of a connection indicates the type of information: claims and fact checks. Clicking on an edge reveals the tweet(s) and the link to the shared story; clicking on a node reveals claims shared by the corresponding user. The network may be pruned for performance.

(emphasis in original)

Bottom line: you won’t be able to ask someone for their Hoaxy score. Sorry.

On the bright side, the Hoaxy frontend and backend source code is available, so you can create a customized version (not using the Hoaxy name) with different capabilities.

The other good news is that you can study the techniques of messages that do spread virally, so you can get better at creating messages that go viral.
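As a toy illustration of the diffusion networks Hoaxy describes, here is a sketch using networkx; the share data is invented:

```python
# Toy diffusion network in Hoaxy's style: nodes are accounts, directed
# edges are links to a story passed between accounts. Data is invented.
import networkx as nx

shares = [("alice", "bob"), ("alice", "carol"), ("bob", "dave"),
          ("carol", "eve"), ("dave", "frank")]
G = nx.DiGraph(shares)

# Two crude "viral spread" indicators: audience reached and cascade depth.
reached = nx.descendants(G, "alice")
depth = max(nx.shortest_path_length(G, "alice").values())
print(f"accounts reached from alice: {len(reached)}, cascade depth: {depth}")
```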

April 25, 2018

Breaking Non-News: Twitter Has Echo Chambers (Co-Occupancy of Echo Chambers)

Filed under: Politics,Social Media,Social Networks — Patrick Durusau @ 9:55 am

Political Discourse on Social Media: Echo Chambers, Gatekeepers, and the Price of Bipartisanship by Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis, Michael Mathioudakis.

Abstract:

Echo chambers, i.e., situations where one is exposed only to opinions that agree with their own, are an increasing concern for the political discourse in many democratic countries. This paper studies the phenomenon of political echo chambers on social media. We identify the two components in the phenomenon: the opinion that is shared (‘echo’), and the place that allows its exposure (‘chamber’ — the social network), and examine closely at how these two components interact. We define a production and consumption measure for social-media users, which captures the political leaning of the content shared and received by them. By comparing the two, we find that Twitter users are, to a large degree, exposed to political opinions that agree with their own. We also find that users who try to bridge the echo chambers, by sharing content with diverse leaning, have to pay a ‘price of bipartisanship’ in terms of their network centrality and content appreciation. In addition, we study the role of ‘gatekeepers’, users who consume content with diverse leaning but produce partisan content (with a single-sided leaning), in the formation of echo chambers. Finally, we apply these findings to the task of predicting partisans and gatekeepers from social and content features. While partisan users turn out relatively easy to identify, gatekeepers prove to be more challenging.

This is an interesting paper from a technical perspective, especially their findings on gatekeepers, but political echo chambers in Twitter is hardly surprising. Nor are political echo chambers new.

SourceWatch has a limited (time wise) history of echo chambers and attributes the creation of echo chambers to conservatives:

…conservatives pioneered the “echo chamber” technique,…

Amusing but I would not give conservatives that much credit.

Consider the echo chambers created by the Wall Street Journal (WSJ) versus the Guardian (formerly National Guardian, published in New York City), a Marxist publication, in the 1960s.

Or the differing content read by pro versus anti-war activists in the same time period. Or racists versus pro-integration advocates. Or pro versus anti Roe v. Wade, 410 U.S. 113, 93 S. Ct. 705, 35 L. Ed. 2d 147, 1973 U.S. LEXIS 159 (1973), supporters.

Echo chambers existed before the examples I have listed but those are sufficient to show echo chambers are not new, despite claims by those who missed secondary education history classes.

The charge of “echo chamber” by SourceWatch, for example, carries with it an assumption that information delivered via an “echo chamber” is false, harmful, etc., versus their information, which leads to the truth, light and the American way. (Substitute whatever false totems you have for “the American way.”)

I don’t doubt the sincerity of SourceWatch. I doubt approaching others saying “…you need to crawl out from under your rock so I can enlighten you with the truth” leads to a reduction in echo chambers.

Becoming a gatekeeper, with a foot in two or more echo chambers, won’t reduce the number of echo chambers either. But it does have the potential to open gateways between them.

You’ve tried beating on occupants of other echo chambers with little or no success. Why not try co-occupying their echo chambers for a while?

February 28, 2018

Liberals Amping Right Wing Conspiracies

Filed under: Fake News,News,Social Media,Social Networks — Patrick Durusau @ 9:19 pm

You read the headline correctly: Liberals Amping Right Wing Conspiracies.

It’s the only reasonable conclusion after reading Molly McKew‘s post: How Liberals Amped up a Paranoid Shooting Conspiracy Theory.

From the post:


This terminology camouflages the war for minds that is underway on social media platforms, the impact that this has on our cognitive capabilities over time, and the extent to which automation is being engaged to gain advantage. The assumption, for example, that other would-be participants in social media information wars who choose to use these same tactics will gain the same capabilities or advantage is not necessarily true. This is a playing field that is hard to level: Amplification networks have data-driven, machine learning components that work better with refinement over time. You can’t just turn one on and expect it to work perfectly.

The vast amounts of content being uploaded every minute cannot possibly be reviewed by human beings. Algorithms, and the poets who sculpt them, are thus given an increasingly outsized role in the shape of our information environment. Human minds are on a battlefield between warring AIs—caught in the crossfire between forces we can’t see, sometimes as collateral damage and sometimes as unwitting participants. In this blackbox algorithmic wonderland, we don’t know if we are picking up a gun or a shield.

McKew has a great description of the amplification in the Parkland shooting conspiracy case, but it’s after the fact and not a basis for predicting the next amplification event.

Any number of research projects suggest themselves:

  • Observing and testing social media algorithms against content
  • Discerning patterns in amplified content
  • Testing refinement of content
  • Building automated tools to apply lessons in amplification

No doubt all those are underway in various guises for any number of reasons. But are you going to share in those results to protect your causes?

February 14, 2018

Don’t Delete Evil Data [But Remember the Downside of “Evidence”]

Filed under: Archives,Preservation,Social Media — Patrick Durusau @ 8:56 pm

Don’t Delete Evil Data by Lam Thuy Vo.

From the post:

The web needs to be a friendlier place. It needs to be more truthful, less fake. It definitely needs to be less hateful. Most people agree with these notions.

There have been a number of efforts recently to enforce this idea: the Facebook groups and pages operated by Russian actors during the 2016 election have been deleted. None of the Twitter accounts listed in connection to the investigation of the Russian interference with the last presidential election are online anymore. Reddit announced late last fall that it was banning Nazi, white supremacist, and other hate groups.

But even though much harm has been done on these platforms, is the right course of action to erase all these interactions without a trace? So much of what constitutes our information universe is captured online—if foreign actors are manipulating political information we receive and if trolls turn our online existence into hell, there is a case to be made for us to be able to trace back malicious information to its source, rather than simply removing it from public view.

In other words, there is a case to be made to preserve some of this information, to archive it, structure it, and make it accessible to the public. It’s unreasonable to expect social media companies to sidestep consumer privacy protections and to release data attached to online misconduct willy-nilly. But to stop abuse, we need to understand it. We should consider archiving malicious content and related data in responsible ways that allow for researchers, sociologists, and journalists to understand its mechanisms better and, potentially, to demand more accountability from trolls whose actions may forever be deleted without a trace.

By some unspecified mechanism, I would support preservation of all social media, as well as having it publicly available if it were publicly posted originally. Any restriction or permission to see/use the data will lead to the same abuses we see now.

Twitter, among others, talks about abuse but no one can prove or disprove whatever Twitter cares to say.

There is a downside to preserving social media. You have probably seen the NBC News story on 200,000 tweets that are the smoking gun on Russian interference with the 2016 elections.

Well, except that if you look at the tweets, that’s about as far from a smoking gun on Russian interference as anything you can imagine.

By analogy, that’s why intelligence analysts always say they have evidence and give you their conclusions, but not the evidence. Too much danger you will discover their report is completely fictional.

Or when not wholly fictional, serves their or their agency’s interest.

Keeping evidence is risky business. Just so you are aware.

February 2, 2018

PubMed Commons to be Discontinued

Filed under: Bioinformatics,Medical Informatics,PubMed,Social Media — Patrick Durusau @ 5:10 pm

PubMed Commons to be Discontinued

From the post:

PubMed Commons has been a valuable experiment in supporting discussion of published scientific literature. The service was first introduced as a pilot project in the fall of 2013 and was reviewed in 2015. Despite low levels of use at that time, NIH decided to extend the effort for another year or two in hopes that participation would increase. Unfortunately, usage has remained minimal, with comments submitted on only 6,000 of the 28 million articles indexed in PubMed.

While many worthwhile comments were made through the service during its 4 years of operation, NIH has decided that the low level of participation does not warrant continued investment in the project, particularly given the availability of other commenting venues.

Comments will still be available, see the post for details.

Good time for the reminder that even negative results from an experiment are valuable.

Even more so in this case because discussion/comment facilities are non-trivial components of a content delivery system. Time and resources not spent on comment facilities could be put in other directions.

Where do discussions of medical articles take place and can they be used to automatically annotate published articles?

January 6, 2018

21 Recipes for Mining Twitter Data with rtweet

Filed under: R,Social Media,Tweets,Twitter — Patrick Durusau @ 5:26 pm

21 Recipes for Mining Twitter Data with rtweet by Bob Rudis.

From the preface:

I’m using this as way to familiarize myself with bookdown so I don’t make as many mistakes with my web scraping field guide book.

It’s based on Matthew R. Russell’s book. That book is out of distribution and much of the content is in Matthew’s “Mining the Social Web” book. There will be many similarities between his “21 Recipes” book and this book on purpose. I am not claiming originality in this work, just making an R-centric version of the cookbook.

As he states in his tome, “this intentionally terse recipe collection provides you with 21 easily adaptable Twitter mining recipes”.

Rudis has posted about this editing project at: A bookdown “Hello World” : Twenty-one (minus two) Recipes for Mining Twitter with rtweet, which you should consult if you want to contribute to this project.

Working through 21 Recipes for Mining Twitter Data with rtweet will give you experience proofing a text and if you type in the examples (no cut-n-paste), you’ll develop rtweet muscle memory.

Enjoy!

November 10, 2017

Who Has More Government Censorship of Social Media, Canada or US?

Filed under: Censorship,Government,Social Media — Patrick Durusau @ 5:31 pm

Federal government blocking social media users, deleting posts by Elizabeth Thompson.

From the post:

Canadian government departments have quietly blocked nearly 22,000 Facebook and Twitter users, with Global Affairs Canada accounting for nearly 20,000 of the blocked accounts, CBC News has learned.

Moreover, nearly 1,500 posts — a combination of official messages and comments from readers — have been deleted from various government social media accounts since January 2016.

However, there could be even more blocked accounts and deleted posts. In answer to questions tabled by Opposition MPs in the House of Commons, several departments said they don’t keep track of how often they block users or delete posts.

It is not known how many of the affected people are Canadian.

It’s also not known how many posts were deleted or users were blocked prior to the arrival of Prime Minister Justin Trudeau’s government.

But the numbers shed new light on how Ottawa navigates the world of social media — where it can be difficult to strike a balance between reaching out to Canadians while preventing government accounts from becoming a destination for porn, hate speech and abuse.

US Legal Issues

Davison v. Loudoun County Board of Supervisors

Meanwhile, south of the Canadian border, last July (2017), a US district court decision carried the headline: Federal Court: Public Officials Cannot Block Social Media Users Because of Their Criticism.


Davison v. Loudoun County Board of Supervisors (Davidson) involved the chair of the Loudoun County Board of Supervisors, Phyllis J. Randall. In her capacity as a government official, Randall runs a Facebook page to keep in touch with her constituents. In one post to the page, Randall wrote, “I really want to hear from ANY Loudoun citizen on ANY issues, request, criticism, compliment, or just your thoughts.” She explicitly encouraged Loudoun residents to reach out to her through her “county Facebook page.”

Brian C. Davidson, a Loudon denizen, took Randall up on her offer and posted a comment to a post on her page alleging corruption on the part of Loudoun County’s School Board. Randall, who said she “had no idea” whether Davidson’s allegations were true, deleted the entire post (thereby erasing his comment) and blocked him. The next morning, she decided to unblock him. During the intervening 12 hours, Davidson could view or share content on Randall’s page but couldn’t comment on its posts or send it private messages.

Davidson sued, alleging a violation of his free speech rights. As U.S. District Judge James C. Cacheris explained in his decision, Randall essentially conceded in court that she had blocked Davidson “because she was offended by his criticism of her colleagues in the County government.” In other words, she “engaged in viewpoint discrimination,” which is generally prohibited under the First Amendment.

Blocking Twitter users by President Trump has led to other litigation.

Knight First Amendment Institute at Columbia University v. Trump (1:17-cv-05205)

You can track filings in Knight First Amendment Institute at Columbia University v. Trump courtesy of the Court Listener Project. Please put the Court Listener project on your year end donation list.

US Factual Issues

The complaint outlines the basis for the case, both legal and factual, but does not recite any data on blocking of social media accounts by federal agencies. It would not have to, since that data isn’t really relevant to the issue at hand, but it would be useful to know the standard practice among US government agencies.

I can suggest where to start looking for that answer: U.S. Digital Registry, which, as of today, lists 10,877 social media accounts.

You could ask the agencies in question, via FOIA requests, for lists of blocked accounts.

Twitter won’t allow you to see the list of blocked users for accounts other than your own. Of course, that rule depends on your level of access. You’ll find similar situations for other social media providers.

Assuming you have blocked-user lists, obtained by official or self-help means, comparing blocked users across agencies, by their demographics, etc., would make a nice data-driven journalism project. Yes?
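A minimal sketch of that comparison, assuming the FOIA responses arrive as one list of blocked account names per agency (the file layout is hypothetical):

```python
# Compare blocked-account lists across agencies: one text file of account
# names per agency in a "foia" directory (layout hypothetical).
from collections import Counter
from pathlib import Path

agencies = {p.stem: set(p.read_text().split())
            for p in Path("foia").glob("*.txt")}

# Accounts blocked by more than one agency are the interesting overlap.
counts = Counter(account for blocked in agencies.values() for account in blocked)
shared = [account for account, n in counts.items() if n > 1]
print(f"accounts blocked by multiple agencies: {len(shared)}")
```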

October 30, 2017

Bottery

Filed under: Bots,Social Media — Patrick Durusau @ 7:58 pm

Bottery – A conversational agent prototyping platform by katecompton@

From the webpage:

Bottery is a syntax, editor, and simulator for prototyping generative contextual conversations modeled as finite state machines.

Bottery takes inspiration from the Tracery opensource project for generative text (also by katecompton@ in a non-google capacity) and the CheapBotsDoneQuick bot-hosting platform, as well as open FSM-based storytelling tools like Twine.

Like Tracery, Bottery is a syntax that specifies the script of a conversation (a map) with JSON. Like CheapBotsDoneQuick, the BotteryStudio can take that JSON and run a simulation of that conversation in a nice Javascript front-end, with helpful visualizations and editting ability.

The goal of Bottery is to help everyone, from designers to writers to coders, be able to write simple and engaging contextual conversational agents, and to test them out in a realistic interactive simulation, mimicking how they’d work on a “real” platform like API.AI.

Not a bot to take your place on social media but it does illustrate the potential of such a bot.

Drive your social “engagement” score with a bot!

Hmmm: gather up comments and your responses on, say, Facebook, then compare a new comment for similarity against past comments and select the closest response, with or without an opportunity to override the automatic reply.
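A minimal sketch of that idea with TF-IDF and cosine similarity via scikit-learn; the comment/response history is invented:

```python
# Nearest-neighbor reply picker: match a new comment to past comments by
# TF-IDF cosine similarity and reuse the reply you gave last time.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

history = [  # (past comment, your past reply), invented for illustration
    ("You are wrong about encryption", "Read the key exchange thread first."),
    ("Great post, thanks!", "Glad it helped."),
    ("What sources back this up?", "Citations are at the end of the post."),
]
past_comments = [comment for comment, _ in history]

vectorizer = TfidfVectorizer().fit(past_comments)
past_vectors = vectorizer.transform(past_comments)

def suggest_reply(new_comment: str) -> str:
    sims = cosine_similarity(vectorizer.transform([new_comment]), past_vectors)
    return history[sims.argmax()][1]  # reply paired with the closest comment

print(suggest_reply("Do you have any sources for this claim?"))
```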

Enjoy!

September 29, 2017

@niccdias and @cward1e on Mis- and Dis-information [Additional Questions]

Filed under: Authoring Topic Maps,Journalism,News,Reporting,Social Media,Topic Maps — Patrick Durusau @ 7:50 pm

10 questions to ask before covering mis- and dis-information by Nic Dias and Claire Wardle.

From the post:

Can silence be the best response to mis- and dis-information?

First Draft has been asking ourselves this question since the French election, when we had to make difficult decisions about what information to publicly debunk for CrossCheck. We became worried that – in cases where rumours, misleading articles or fabricated visuals were confined to niche communities – addressing the content might actually help to spread it farther.

As Alice Marwick and Rebecca Lewis noted in their 2017 report, Media Manipulation and Disinformation Online, “[F]or manipulators, it doesn’t matter if the media is reporting on a story in order to debunk or dismiss it; the important thing is getting it covered in the first place.” Buzzfeed’s Ryan Broderick seemed to confirm our concerns when, on the weekend of the #MacronLeaks trend, he tweeted that 4channers were celebrating news stories about the leaks as a “form of engagement.”

We have since faced the same challenges in the UK and German elections. Our work convinced us that journalists, fact-checkers and civil society urgently need to discuss when, how and why we report on examples of mis- and dis-information and the automated campaigns often used to promote them. Of particular importance is defining a “tipping point” at which mis- and dis-information becomes beneficial to address. We offer 10 questions below to spark such a discussion.

Before that, though, it’s worth briefly mentioning the other ways that coverage can go wrong. Many research studies examine how corrections can be counterproductive by ingraining falsehoods in memory or making them more familiar. Ultimately, the impact of a correction depends on complex interactions between factors like subject, format and audience ideology.

Reports of disinformation campaigns, amplified through the use of bots and cyborgs, can also be problematic. Experiments suggest that conspiracy-like stories can inspire feelings of powerlessness and lead people to report lower likelihoods to engage politically. Moreover, descriptions of how bots and cyborgs were found give their operators the opportunity to change strategies and better evade detection. In a month awash with revelations about Russia’s involvement in the US election, it’s more important than ever to discuss the implications of reporting on these kinds of activities.

Following the French election, First Draft has switched from the public-facing model of CrossCheck to a model where we primarily distribute our findings via email to newsroom subscribers. Our election teams now focus on stories that are predicted (by NewsWhip’s “Predicted Interactions” algorithm) to be shared widely. We also commissioned research on the effectiveness of the CrossCheck debunks and are awaiting its results to evaluate our methods.

The ten questions (see the post) should provoke useful discussions in newsrooms around the world.

I have three additional questions that round Nic Dias and Claire Wardle‘s list to a baker’s dozen:

  1. How do you define mis- or dis-information?
  2. How do you evaluate information to classify it as mis- or dis-information?
  3. Are your evaluations of specific information as mis- or dis-information public?

Defining dis- or mis-information

The standard definitions (Merriam-Webster) for:

disinformation: false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth

misinformation: incorrect or misleading information

would find nodding agreement from Al Jazeera and the CIA, to the European Union and Recep Tayyip Erdoğan.

However, what is or is not disinformation or misinformation would vary from one of those parties to another.

Before reaching the ten questions of Nic Dias and Claire Wardle, define what you mean by disinformation or misinformation. Hopefully with numerous examples, especially ones that are close to the boundaries of your definitions.

Otherwise, all your readers know is that on the basis of some definition of disinformation/misinformation known only to you, information has been determined to be untrustworthy.

Documenting your process to classify as dis- or mis-information

Assuming you do arrive at a common definition of misinformation or disinformation, what process do you use to classify information according to those definitions? Ask your editor? That seems like a poor choice but no doubt it happens.

Do you consult and abide by an opinion found on Snopes? Or Politifact? Or FactCheck.org? Do all three have to agree for a judgement of misinformation or disinformation? What about other sources?

What sources do you consider definitive on the question of mis- or disinformation? Do you keep that list updated? How did you choose those sources over others?

Documenting your evaluation of information as dis- or mis-information

Having a process for evaluating information is great.

But have you followed that process? If challenged, how would you establish the process was followed for a particular piece of information?

Is your documentation office “lore,” or something more substantial?

An online form that captures the information, its source, the fact-check source consulted (with date), the decision, and the person making the decision would take only seconds to populate. In addition to documenting the decision, you can build up a record of a source’s reliability.
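One sketch of how such a record might be structured; every field name here is a suggestion, not a standard:

```python
# A possible structure for the classification record described above.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ClassificationRecord:
    information: str        # the claim being evaluated
    source: str             # where the claim appeared
    fact_check_source: str  # e.g. Snopes, PolitiFact, FactCheck.org
    consulted_on: str       # date the fact-check source was consulted
    decision: str           # "misinformation", "disinformation", "neither"
    decided_by: str         # person making the call

record = ClassificationRecord(
    information="Candidate X said Y at rally Z",
    source="https://example.com/story",
    fact_check_source="snopes.com",
    consulted_on=date.today().isoformat(),
    decision="misinformation",
    decided_by="desk editor",
)
print(json.dumps(asdict(record), indent=2))  # append to your audit log
```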

Conclusion

Vagueness makes discussion and condemnation of mis- or dis-information easy, but it makes it difficult to have a process for evaluating information, a common ground for classifying that information, to say nothing of documenting your decisions on specific information.

Don’t be the black box of whim and caprice users experience at Twitter, Facebook and Google. You can do better than that.

September 27, 2017

Salvation for the Left Behind on Twitter’s 280 Character Limit

Filed under: Social Media,Twitter — Patrick Durusau @ 3:14 pm

If you are one of the “left behind” on Twitter’s expansion to a 280 character limit, don’t despair!

Robert Graham (@ErrataRob) rides to your rescue with: Browser hacking for 280 character tweets.

Well, truth is, Bob covers more than simply reaching the new 280-character limit for the left behind: HTTP requests, an introduction to Chrome’s DevTools, and command-line use of cURL.

Take a few minutes to walk through Bob’s post.
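The general pattern Bob demonstrates is: capture a browser request, then replay it with modified fields. A sketch in Python, where the endpoint, cookie, and field names are placeholders and not Twitter’s actual API:

```python
# Capture a request in Chrome DevTools ("Copy as cURL"), then replay it
# with a modified field. All values below are placeholders.
import requests

response = requests.post(
    "https://example.com/api/create_post",       # placeholder endpoint
    headers={"Cookie": "session=PASTE_FROM_DEVTOOLS"},
    data={"status": "A post longer than the web form would normally allow"},
)
print(response.status_code)
```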

A little knowledge of browsers and tools will put you far ahead of your management.

May 26, 2017

Thank You, Scott – SNL

Filed under: Facebook,Social Media,Twitter — Patrick Durusau @ 8:49 pm

I posted this to Facebook; search for “Thanks Scott SNL” to find my post or those of others.

Included this note (with edits):

Appropriate social media warriors (myself included). From sexism and racism to fracking and pipelines, push back in the real world if you [want] change. Push back on social media for a warm but meaningless feeling of solidarity.

For me the “real world” includes cyberspace, where pushing can have consequences.

You?

April 23, 2017

Dissing Facebook’s Reality Hole and Impliedly Censoring Yours

Filed under: Censorship,Facebook,Free Speech,Social Media — Patrick Durusau @ 4:42 pm

Climbing Out Of Facebook’s Reality Hole by Mat Honan.

From the post:

The proliferation of fake news and filter bubbles across the platforms meant to connect us have instead divided us into tribes, skilled in the arts of abuse and harassment. Tools meant for showing the world as it happens have been harnessed to broadcast murders, rapes, suicides, and even torture. Even physics have betrayed us! For the first time in a generation, there is talk that the United States could descend into a nuclear war. And in Silicon Valley, the zeitgeist is one of melancholy, frustration, and even regret — except for Mark Zuckerberg, who appears to be in an absolutely great mood.

The Facebook CEO took the stage at the company’s annual F8 developers conference a little more than an hour after news broke that the so-called Facebook Killer had killed himself. But if you were expecting a somber mood, it wasn’t happening. Instead, he kicked off his keynote with a series of jokes.

It was a stark disconnect with the reality outside, where the story of the hour concerned a man who had used Facebook to publicize a murder, and threaten many more. People used to talk about Steve Jobs and Apple’s reality distortion field. But Facebook, it sometimes feels, exists in a reality hole. The company doesn’t distort reality — but it often seems to lack the ability to recognize it.

I can’t say I’m fond of the Facebook reality hole but unlike Honan:


It can make it harder to use its platforms to harass others, or to spread disinformation, or to glorify acts of violence and destruction.

I have no desire to censor any of the content that anyone cares to make and/or view on it. Bar none.

The “default” reality settings desired by Honan and others are a thumb on the scale for some cause they prefer over others.

They are entitled to their preference, but I object to their setting the range of preferences enjoyed by others.

You?

April 5, 2017

Mastodon (Tor Access Recommended)

Filed under: Social Media,Twitter — Patrick Durusau @ 8:00 pm

Mastodon

From the homepage:

Mastodon is a free, open-source social network. A decentralized alternative to commercial platforms, it avoids the risks of a single company monopolizing your communication. Pick a server that you trust — whichever you choose, you can interact with everyone else. Anyone can run their own Mastodon instance and participate in the social network seamlessly.

What sets Mastodon apart:

  • Timelines are chronological
  • Public timelines
  • 500 characters per post
  • GIFV sets and short videos
  • Granular, per-post privacy settings
  • Rich block and muting tools
  • Ethical design: no ads, no tracking
  • Open API for apps and services

… (emphasis in original)

No regex for filtering posts but it does have:

  • Block notifications from non-followers
  • Block notifications from people you don’t follow

One or both should cover most of the harassment cases.
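If you want the missing regex filtering, a client-side sketch is simple enough; the posts and patterns are invented:

```python
# Client-side sketch of the regex post filtering Mastodon lacks.
import re

MUTE_PATTERNS = [re.compile(p, re.IGNORECASE)
                 for p in (r"\bcrypto\b", r"giveaway")]

posts = ["Free crypto giveaway!!!", "New blog post on topic maps"]
visible = [p for p in posts
           if not any(rx.search(p) for rx in MUTE_PATTERNS)]
print(visible)  # ['New blog post on topic maps']
```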

I was surprised by the “Pick a server that you trust…” suggestion.

Really? A remote server being run by someone unknown to me? Bad enough that I have to “trust” my ISP, to a degree, but an unknown?

You really need a Tor-based email account and to use Tor for access to Mastodon. Seriously.

March 10, 2017

Creating A Social Media ‘Botnet’ To Skew A Debate

Filed under: Education,Government,Politics,Social Media,Twitter — Patrick Durusau @ 5:34 pm

New Research Shows How Common Core Critics Built Social Media ‘Botnets’ to Skew the Education Debate by Kevin Mahnken.

From the post:

Anyone following education news on Twitter between 2013 and 2016 would have been hard-pressed to ignore the gradual curdling of Americans’ attitudes toward the Common Core State Standards. Once seen as an innocuous effort to lift performance in classrooms, they slowly came to be denounced as “Dirty Commie agenda trash” and a “Liberal/Islam indoctrination curriculum.”

After years of social media attacks, the damage is impressive to behold: In 2013, 83 percent of respondents in Education Next’s annual poll of Americans’ education attitudes felt favorably about the Common Core, including 82 percent of Republicans. But by the summer of 2016, support had eroded, with those numbers measuring only 50 percent and 39 percent, respectively. The uproar reached such heights, and so quickly, that it seemed to reflect a spontaneous populist rebellion against the most visible education reform in a decade.

Not so, say researchers with the University of Pennsylvania’s Consortium for Policy Research in Education. Last week, they released the #commoncore project, a study that suggests that public animosity toward Common Core was manipulated — and exaggerated — by organized online communities using cutting-edge social media strategies.

As the project’s authors write, the effect of these strategies was “the illusion of a vociferous Twitter conversation waged by a spontaneous mass of disconnected peers, whereas in actuality the peers are the unified proxy voice of a single viewpoint.”

Translation: A small circle of Common Core critics were able to create and then conduct their own echo chambers, skewing the Twitter debate in the process.

The most successful of these coordinated campaigns originated with the Patriot Journalist Network, a for-profit group that can be tied to almost one-quarter of all Twitter activity around the issue; on certain days, its PJNET hashtag has appeared in 69 percent of Common Core–related tweets.

The team of authors tracked nearly a million tweets sent during four half-year spans between September 2013 and April 2016, studying both how the online conversation about the standards grew (more than 50 percent between the first phase, September 2013 through February 2014, and the third, May 2015 through October 2015) and how its interlocutors changed over time.

Mahnken talks as though creating a ‘botnet’ to defeat adoption of the Common Core State Standards is a bad thing.

I never cared for #commoncore because testing makes money for large and small testing vendors. It has no other demonstrated impact on the educational process.

Let’s assume you want to build a championship high school baseball team. To do that, various officious intermeddlers, who have no experience with baseball, fund creation of the Common Core Baseball Standards.

Every three years, every child is tested against the Common Core Baseball Standards and their performance recorded. No funds are allocated for additional training for gifted performers, equipment, baseball fields, etc.

By the time these students reach high school, will you have the basis for a championship team? Perhaps, but if you do, it’s due to random chance and not the Common Core Baseball Standards.

If you want a championship high school baseball team, you fund training, equipment, and baseball fields, in addition to spending money on the best facilities for your hoped-for championship high school team. Consistently and over time, you spend money.

The key to better education results isn’t testing, but funding based on the education results you hope to achieve.

I do commend the #commoncore project website for being an impressive presentation of Twitter data, even though it is clearly a propaganda machine for pro-Common Core advocates.

The challenge here is to work backwards from what the project observed to the principles and tactics that made #stopcommoncore so successful. That is, we know it has succeeded, at least to some degree, but how do we replicate that success on other issues?

Replication is how science demonstrates the reliability of a technique.

Looking forward to hearing your thoughts, suggestions, etc.

Enjoy!

February 25, 2017

Availability Cascades [Activists Take Note, Big Data Project?]

Filed under: Cascading,Chaos,Government,Social Media,Social Networks — Patrick Durusau @ 8:37 pm

Availability Cascades and Risk Regulation by Timur Kuran and Cass R. Sunstein, Stanford Law Review, Vol. 51, No. 4, 1999, U of Chicago, Public Law Working Paper No. 181, U of Chicago Law & Economics, Olin Working Paper No. 384.

Abstract:

An availability cascade is a self-reinforcing process of collective belief formation by which an expressed perception triggers a chain reaction that gives the perception of increasing plausibility through its rising availability in public discourse. The driving mechanism involves a combination of informational and reputational motives: Individuals endorse the perception partly by learning from the apparent beliefs of others and partly by distorting their public responses in the interest of maintaining social acceptance. Availability entrepreneurs – activists who manipulate the content of public discourse – strive to trigger availability cascades likely to advance their agendas. Their availability campaigns may yield social benefits, but sometimes they bring harm, which suggests a need for safeguards. Focusing on the role of mass pressures in the regulation of risks associated with production, consumption, and the environment, Professor Timur Kuran and Cass R. Sunstein analyze availability cascades and suggest reforms to alleviate their potential hazards. Their proposals include new governmental structures designed to give civil servants better insulation against mass demands for regulatory change and an easily accessible scientific database to reduce people’s dependence on popular (mis)perceptions.

Not recent, 1999, but a useful starting point for the study of availability cascades.

The authors want to insulate civil servants, where I want to exploit availability cascades to drive their responses, but that’s a question of perspective and not practice.

Google Scholar reports 928 citations of Availability Cascades and Risk Regulation, so it has had an impact on the literature.

However, availability cascades are not a recipe science. Networks, Crowds, and Markets: Reasoning About a Highly Connected World by David Easley and Jon Kleinberg, especially chapters 16 and 17, provides a background for developing such insights.
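For a feel of the mechanism, here is a toy sequential cascade in the spirit of Easley and Kleinberg’s chapter 16, assuming each agent counts predecessors’ public choices plus one private signal:

```python
# Toy information cascade: agents choose in sequence, following the
# majority of earlier public choices plus their own private signal
# (ties broken by the signal). Long runs of identical choices = cascade.
import random

def run_cascade(n_agents: int = 20, signal_accuracy: float = 0.7) -> list:
    truth = True
    choices = []
    for _ in range(n_agents):
        signal = truth if random.random() < signal_accuracy else not truth
        votes = choices + [signal]
        yes = sum(votes)
        if yes * 2 == len(votes):          # tie: trust your own signal
            choices.append(signal)
        else:
            choices.append(yes * 2 > len(votes))
    return choices

random.seed(1)
print(run_cascade())
```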

I started to suggest this would make a great big data project, but big data projects are limited to where you have, well, big data. You certainly have that with Facebook, Twitter, etc., but that leaves a lot of the world’s population and social activity on the table.

That is, to avoid junk results, you would need survey instruments to track any chain reactions outside of the bots that dominate social media.

Very high-end advertising, which still misses with alarming regularity, would be a good place to look for tips on availability cascades. Advertisers have a profit motive that keeps them interested.

February 23, 2017

Building an Online Profile:… [Toot Your Own Horn]

Filed under: Marketing,Social Media,Social Networks — Patrick Durusau @ 9:50 am

Building an Online Profile: Social Networking and Amplification Tools for Scientists by Antony Williams.

Seventy-seven slides from a February 22, 2017 presentation at NC State University on building an online profile.

Pure gold, whether you are building your own profile or one for an alternate identity. 😉

One slide in particular stands out.

Take the “toot your own horn” advice to heart.

Your posts/work will never be perfect so don’t wait for that before posting.

Any errors you make are likely to go unnoticed until you correct them.

January 19, 2017

Empirical Analysis Of Social Media

Filed under: Government,Politics,Social Media — Patrick Durusau @ 11:01 am

How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, not Engaged Argument by Gary King, Jennifer Pan, and Margaret E. Roberts. American Political Science Review, 2017. (Supplementary Appendix)

Abstract:

The Chinese government has long been suspected of hiring as many as 2,000,000 people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people. Many academics, and most journalists and activists, claim that these so-called “50c party” posts vociferously argue for the government’s side in political and policy debates. As we show, this is also true of the vast majority of posts openly accused on social media of being 50c. Yet, almost no systematic empirical evidence exists for this claim, or, more importantly, for the Chinese regime’s strategic objective in pursuing this activity. In the first large scale empirical analysis of this operation, we show how to identify the secretive authors of these posts, the posts written by them, and their content. We estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime’s strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. We infer that the goal of this massive secretive operation is instead to regularly distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime. We discuss how these results fit with what is known about the Chinese censorship program, and suggest how they may change our broader theoretical understanding of “common knowledge” and information control in authoritarian regimes.

I differ from the authors on some of their conclusions but this is an excellent example of empirical as opposed to wishful analysis of social media.

Wishful analysis of social media includes the farcical claims that social media is an effective recruitment tool for terrorists. Such claims are made too often to dignify with a citation, but never with empirical evidence, only an author’s repetition of the common “wisdom.”

In contrast, King et al. are careful to say what their analysis does and does not support, finding in a number of cases, the evidence contradicts commonly held thinking about the role of the Chinese government in social media.

One example I found telling was the lack of evidence that anyone is paid for pro-government social media comments.

In the authors’ words:


We also found no evidence that 50c party members were actually paid fifty cents or any other piecemeal amount. Indeed, no evidence exists that the authors of 50c posts are even paid extra for this work. We cannot be sure of current practices in the absence of evidence but, given that they already hold government and Chinese Communist Party (CCP) jobs, we would guess this activity is a requirement of their existing job or at least rewarded in performance reviews.
… (at pages 10-11)

Here I differ from the authors’ “guess”:

…this activity is a requirement of their existing job or at least rewarded in performance reviews.

Kudos to the authors for labeling this a “guess,” although one expects the mainstream press and members of Congress to take it as written in stone.

However, the authors presume positive posts about the government of China can only result from direct orders or pressure from superiors.

That’s a major weakness in this paper and similar analysis of social media postings.

The simpler explanation of pro-government posts is that a poster is reporting the world as they see it. (Think Occam’s razor.)

As for sharing them with the so-called “propaganda office,” perhaps they are attempting to curry favor. The small number of posters makes it difficult to credit their motives (unknown) and behavior (partially known) as representative of the estimated 2 million posters.

Moreover, out of a population that nears 1.4 billion, the existence of 2 million individuals with a positive view of the government isn’t difficult to credit.

This is an excellent paper that will repay a close reading several times over.

Take it also as a warning about ideologically based assumptions that can mar or even invalidate otherwise excellent empirical work.

PS:

Additional reading:

From Gary King’s webpage on the article:

This paper follows up on our articles in Science, “Reverse-Engineering Censorship In China: Randomized Experimentation And Participant Observation”, and the American Political Science Review, “How Censorship In China Allows Government Criticism But Silences Collective Expression”.

January 4, 2017

Eight Years of the Republican Weekly Address

Filed under: Government,Politics,Prediction,Social Media — Patrick Durusau @ 5:23 pm

We looked at eight years of the Republican Weekly Address by Jesse Rifkin.

From the post:

Every week since Ronald Reagan started the tradition in 1982, the president delivers a weekly address. And every week, the opposition party delivers an address as well.

What can the Weekly Republican Addresses during the Obama era reveal about how the GOP has attempted to portray themselves to the American public, by the public policy topics they discussed and the speakers they featured? To find out, GovTrack Insider analyzed all 407 Weekly Republican Addresses for which we could find data during the Obama era, the first such analysis of the weekly addresses as best we can tell. (See the full list of weekly addresses here.)

Sometimes they discuss the same topic as the president’s weekly address — particularly common if a noteworthy event occurs in the news that week — although other times it’s on an unrelated topic of the party’s choosing. It also features a rotating cast of Republicans delivering the speech, most of them congressional, unlike the White House which has almost always featured President Obama, with Vice President Joe Biden occasionally subbing in.

On the issues, we found that Republicans have almost entirely refrained from discussing such inflammatory social issues as abortion, guns, or same-sex marriage in their weekly addresses, despite how animating such issues are to their base. They also were remarkably silent on Donald Trump until the week before the election.

We also find that while Republicans often get slammed on women’s rights and minority issues, Republican congressional women and African Americans are at least proportionally represented in the weekly addresses, compared to their proportions in Congress, if not slightly over-represented — but Hispanics are notably under-represented.

You have seen credible claims, such as On Predicting Social Unrest Using Social Media by Rostyslav Korolov, et al., and less credible claims from others: the CIA claims it can predict some social unrest up to 5 days ahead.

Rumor has it that the CIA has a Word template named, appropriately enough: theRussiansDidIt. I can neither confirm nor deny that rumor.

Taking credible actors at their word, are you aware of any parallel research on weekly addresses by Congress and following congressional action?

A very light skimming of the literature on predicting Supreme Court decisions comes up with: Competing Approaches to Predicting Supreme Court Decision Making by Andrew D. Martin, Kevin M. Quinn, Theodore W. Ruger, and Pauline T. Kim (2004), Algorithm predicts US Supreme Court decisions 70% of time by David Kravets (2014), and Fantasy Scotus (a Supreme Court fantasy league with cash prizes).

Congressional voting has been studied as well, for instance, Predicting Congressional Voting – Social Identification Trumps Party. (Now there’s an unfortunate headline for searchers.)

Congressional votes are important, but so are the progress of bills, the order in which issues are addressed, etc., and it is the reflection of those less formal aspects in weekly addresses from Congress that could be interesting.

The weekly speeches may be as divorced from any shared reality as comments inserted in the Congressional Record. On the other hand, a partially successful model, other than the timing of donations, may be possible.
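A first-pass sketch of such a model, predicting follow-on floor action from address text alone; the addresses and labels are invented for illustration:

```python
# Text-classification sketch: does an address's topic see floor action?
# Training data here is invented; real labels would come from bill records.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

addresses = [
    "This week we focus on tax relief for working families",
    "The president's health care law is failing Americans",
    "We must secure our borders and enforce the law",
    "Small businesses need regulatory relief to grow",
]
floor_action = [1, 0, 0, 1]  # invented: 1 = a related bill moved that month

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(addresses, floor_action)
print(model.predict(["Congress will act on tax reform this session"]))
```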

November 29, 2016

Gab – Censorship Lite?

Filed under: Free Speech,Social Media,Twitter — Patrick Durusau @ 5:52 pm

I submitted my email today at Gab and got this message:

Done! You’re #1320420 in the waiting list.

Only three rules:

Illegal Pornography

We have a zero tolerance policy against illegal pornography. Such material will be instantly removed and the owning account will be dealt with appropriately per the advice of our legal counsel. We reserve the right to ban accounts that share such material. We may also report the user to local law enforcement per the advice our legal counsel.

Threats and Terrorism

We have a zero tolerance policy for violence and terrorism. Users are not allowed to make threats of, or promote, violence of any kind or promote terrorist organizations or agendas. Such users will be instantly removed and the owning account will be dealt with appropriately per the advice of our legal counsel. We may also report the user to local and/or federal law enforcement per the advice of our legal counsel.

What defines a ‘terrorist organization or agenda’? Any group that is labelled as a terrorist organization by the United Nations and/or United States of America classifies as a terrorist organization on Gab.

Private Information

Users are not allowed to post others’ confidential information, including but not limited to, credit card numbers, street numbers, SSNs, without their expressed authorization.

If Gab is listening, I can get the rules down to one:

Court Ordered Removal

When Gab receives an order from a court of competent jurisdiction, at (service address), directing removal of identified, posted content, that content will be removed.

Simple, fair, gets Gab and its staff out of the censorship business and provides a transparent remedy.

At no cost to Gab!

What’s there not to like?

Gab should review my posts: Monetizing Hate Speech and False News and Preserving Ad Revenue With Filtering (Hate As Renewal Resource), while it is in closed beta.

Twitter and Facebook can keep spending uncompensated time and effort trying to be universal and fair censors. Gab has the opportunity to reach up and grab those $100 bills flying overhead for filtered news services.

What is the New York Times if not an opinionated and poorly run filter on all the possible information it could report?

Apply that same lesson to social media!

PS: Seriously, before going public, I would go to the one court-based rule on content. There’s no profit and no wins in censoring any content on your own. Someone will always want more or less. Courts get paid to make those decisions.

Check with your lawyers, but if you don’t look at any content, you can’t be charged with constructive notice of it. Unless and until someone points it out; then you have to follow the DMCA, court orders, etc.

November 21, 2016

PubMed comments & their continuing conversations

Filed under: PubMed,Social Media — Patrick Durusau @ 8:40 pm

PubMed comments & their continuing conversations

From the post:

We have many options for communication. We can choose platforms that fit our style, approach, and time constraints. From pop culture to current events, information and opinions are shared and discussed across multiple channels. And scientific publications are no exception.

PubMed Commons was established to enable commenting in PubMed, the largest biomedical literature database. In the past year, commenters posted to more than 1,400 publications. Of those publications, 80% have a single comment today, and 12% have comments from multiple members. The conversation carries forward in other venues.

Sometimes comments pull in discussion from other locations or spark exchanges elsewhere. Here are a few examples where social media prompted PubMed Commons posts or continued the commentary on publications.

An encouraging review of examples of sane discussion through the use of comments.

Contrast that with the abandonment of comments by some media outlets, NPR for example: NPR Website To Get Rid Of Comments by Elizabeth Jensen.

My takeaway from Jensen’s account was that NPR likes its own free speech, but is not so much interested in the free speech of others.

See also: Have Comment Sections on News Media Websites Failed?, for op-ed pieces at the New York Times from a variety of perspectives.

Perhaps comments on news sites are examples of casting pearls before swine? (Matthew 7:6)

November 5, 2016

Freedom of Speech/Press – Great For “Us” – Not So Much For You (Wikileaks)

Filed under: Free Speech,Politics,Social Media,Wikileaks — Patrick Durusau @ 8:33 pm

The New York Times, sensing a possible defeat of its neo-liberal agenda on November 8, 2016, has loosed the dogs of war on social media in general and Wikileaks in particular.

Consider the sleight of hand in Farhad Manjoo’s How the Internet Is Loosening Our Grip on the Truth, which argues on one hand,


You’re Not Rational

The root of the problem with online news is something that initially sounds great: We have a lot more media to choose from.

In the last 20 years, the internet has overrun your morning paper and evening newscast with a smorgasbord of information sources, from well-funded online magazines to muckraking fact-checkers to the three guys in your country club whose Facebook group claims proof that Hillary Clinton and Donald J. Trump are really the same person.

A wider variety of news sources was supposed to be the bulwark of a rational age — “the marketplace of ideas,” the boosters called it.

But that’s not how any of this works. Psychologists and other social scientists have repeatedly shown that when confronted with diverse information choices, people rarely act like rational, civic-minded automatons. Instead, we are roiled by preconceptions and biases, and we usually do what feels easiest — we gorge on information that confirms our ideas, and we shun what does not.

This dynamic becomes especially problematic in a news landscape of near-infinite choice. Whether navigating Facebook, Google or The New York Times’s smartphone app, you are given ultimate control — if you see something you don’t like, you can easily tap away to something more pleasing. Then we all share what we found with our like-minded social networks, creating closed-off, shoulder-patting circles online.

This gets to the deeper problem: We all tend to filter documentary evidence through our own biases. Researchers have shown that two people with differing points of view can look at the same picture, video or document and come away with strikingly different ideas about what it shows.

You caught the invocation of authority by Manjoo, “researchers have shown,” etc.

But did you notice he never shows his other hand?

If the public is so bat-shit crazy that it takes all social media content as equally trustworthy, what are we to do?

Well, that is the question isn’t it?

Manjoo invokes “dozens of news outlets” who are tirelessly but hopelessly fact checking on our behalf in his conclusion.

The strong implication is that without the help of “media outlets,” you are a bundle of preconceptions and biases doing what feels easiest.

“News outlets,” on the other hand, are free from those limitations.

You bet.

If you thought Manjoo was bad, enjoy seething through Zeynep Tufekci’s claims that Wikileaks is an opponent of privacy, a sponsor of censorship, and an opponent of democracy, all in a little over 1,000 words (1,069 by exact count): Wikileaks Isn’t Whistleblowing.

It’s a breathtaking piece of half-truths.

For example, playing for your sympathy, Tufekci invokes the need of dissidents for privacy. Even to the point of invoking the ghost of the former Soviet Union.

Tufekci overlooks, and hopes you do as well, that these emails weren’t from dissidents but from people who traded in and on the whims and caprices at the pinnacles of American power.

Perhaps realizing that is too transparent a ploy, she recounts other data dumps by Wikileaks to which she objects. As lawyers say, if the facts are against you, pound on the table.

In an echo of Manjoo, did you know you are too dumb to distinguish critical information from trivial?

Tufekci writes:


These hacks also function as a form of censorship. Once, censorship worked by blocking crucial pieces of information. In this era of information overload, censorship works by drowning us in too much undifferentiated information, crippling our ability to focus. These dumps, combined with the news media’s obsession with campaign trivia and gossip, have resulted in whistle-drowning, rather than whistle-blowing: In a sea of so many whistles blowing so loud, we cannot hear a single one.

I don’t think you are that dumb.

Do you?

But who will save us? You can guess Tufekci’s answer, but here it is in full:


Journalism ethics have to transition from the time of information scarcity to the current realities of information glut and privacy invasion. For example, obsessively reporting on internal campaign discussions about strategy from the (long ago) primary, in the last month of a general election against a different opponent, is not responsible journalism. Out-of-context emails from WikiLeaks have fueled viral misinformation on social media. Journalists should focus on the few important revelations, but also help debunk false misinformation that is proliferating on social media.

If you weren’t frightened into agreement by the end of her parade of horrors:


We can’t shrug off these dangers just because these hackers have, so far, largely made relatively powerful people and groups their targets. Their true target is the health of our democracy.

So now Wikileaks is gunning for democracy?

You bet. 😉

Journalists of my youth, think Vietnam, Watergate, were aggressive critics of government and the powerful. The Panama Papers project is evidence that level of journalism still exists.

Instead of whining about releases by Wikileaks and others, journalists* need to step up and provide context they see as lacking.

It would sure beat the hell out of repeating news releases from military commanders, “justice” department mouthpieces, and official but “unofficial” leaks from the American intelligence community.

* Like any generalization, this is grossly unfair to the many journalists who work on behalf of the public every day but lack the megaphone of the government lapdog New York Times. To those journalists, and only them, do I apologize in advance for any offense given. The rest of you, take such offense as is appropriate.

October 17, 2016

How To Read: “War Goes Viral” (with caution, propaganda ahead)

Filed under: Politics,Social Media — Patrick Durusau @ 7:14 pm


War Goes Viral – How social media is being weaponized across the world by Emerson T. Brooking and P. W. Singer.

One of the highlights of the post reads:


Perhaps the greatest danger in this dynamic is that, although information that goes viral holds unquestionable power, it bears no special claim to truth or accuracy. Homophily all but ensures that. A multi-university study of five years of Facebook activity, titled “The Spreading of Misinformation Online,” was recently published in Proceedings of the National Academy of Sciences. Its authors found that the likelihood of someone believing and sharing a story was determined by its coherence with their prior beliefs and the number of their friends who had already shared it—not any inherent quality of the story itself. Stories didn’t start new conversations so much as echo preexisting beliefs.

This extreme ideological segregation, the authors concluded, “comes at the expense of the quality of the information and leads to proliferation of biased narratives fomented by unsubstantiated rumors, mistrust, and paranoia.” As smartphone cameras and streaming video turn every bystander into a reporter (and everyone with an internet connection into an analyst), “truth” becomes a matter of emotional resonance.

Ooooh, “…’truth’ becomes a matter of emotional resonance.”

That is always true but give the authors their due, “War Goes Viral” is a masterful piece of propaganda to the contrary.

Calling something “propaganda,” or “media bias” is easy and commonplace.

Let’s do the hard part and illustrate why that is the case with “War Goes Viral.”

The tag line:

How social media is being weaponized across the world

preps us to think:

Someone or some group is weaponizing social media.

So before even starting the article proper, we are prepared to be on the look out for the “bad guys.”

The authors are happy to oblige with #AllEyesOnISIS, first paragraph, second sentence. “The self-styled Islamic State…” appears in the second paragraph and ISIS in the third paragraph. Not much doubt who the “bad guys” are at this point in the article.

Listing only each change of current actors, “bad guys” in red, the article from start to finish names:

  • Islamic State
  • Russia
  • Venezuela
  • China
  • U.S. Army training to combat “bad guys”
  • Israel – neutral
  • Islamic State (Hussain)

The authors leave you with little doubt who they see as the “bad guys,” a one-sided view of propaganda and social media in particular.

For example, there is:

No mention of the Voice of America (VOA), perhaps one of the longest-running continuous disinformation campaigns in history.

No mention of Pentagon admits funding online propaganda war against Isis.

No mention of any number of similar projects and programs which weren’t constructed with an eye on “truth and accuracy” by the United States.

The treatment here is as one-sided as the “weaponized” social media of which the authors complain.

Not that the authors are lacking in skill. They piggyback their own slant onto The Spreading of Misinformation Online:


This extreme ideological segregation, the authors concluded, “comes at the expense of the quality of the information and leads to proliferation of biased narratives fomented by unsubstantiated rumors, mistrust, and paranoia.” As smartphone cameras and streaming video turn every bystander into a reporter (and everyone with an internet connection into an analyst), “truth” becomes a matter of emotional resonance.

How much of that is supported by The Spreading of Misinformation Online?

  • First sentence
  • Second sentence
  • Both sentences

The answer is:

This extreme ideological segregation, the authors concluded, “comes at the expense of the quality of the information and leads to proliferation of biased narratives fomented by unsubstantiated rumors, mistrust, and paranoia.”

The remainder of that paragraph was invented out of whole cloth by the authors and positioned with “truth” in quotes to piggyback on the legitimate academic work just quoted.

As smartphone cameras and streaming video turn every bystander into a reporter (and everyone with an internet connection into an analyst), “truth” becomes a matter of emotional resonance.

That line is popular cant among media and academic types, but it is no more than that.

Skilled reporting can put information in a broad context and weave a coherent narrative, but disparaging social media authors doesn’t make that any more likely.

“War Goes Viral” being a case in point.

June 13, 2016

How Do I Become A Censor?

Filed under: Censorship,Free Speech,Social Media — Patrick Durusau @ 12:22 pm

You read about censorship or efforts at censorship on a daily basis.

But none of those reports answers the burning question of the social media age: How Do I Become A Censor?

I mean, what’s the use of reading about other people censoring your speech if you aren’t free to censor theirs? Where the fun in that?

Andrew Golis has an answer for you in: Comments are usually garbage. We’re adding comments to This.!.

Three steps to becoming a censor:

  1. Build a social media site that accepts comments
  2. Declare a highly subjective set of ass-hat rules
  3. Censor user comments

There being no third-party arbiters, you are now a censor! Feel the power surging through your fingers. Crush dangerous thoughts, memes or content with a single return. The safety and sanity of your users is now your responsibility.

Heady stuff. Yes?

If you think this is parody, check out the This. Community Guidelines for yourself:


With that in mind, This. is absolutely not a place for:

Violations of law. While this is expanded upon below, it should be clear that we will not tolerate any violations of law when using our site.

Hate speech, malicious speech, or material that’s harmful to marginalized groups. Overtly discriminating against an individual belonging to a minority group on the basis of race, ethnicity, national origin, religion, sex, gender, sexual orientation, age, disability status, or medical condition won’t be tolerated on the site. This holds true whether it’s in the form of a link you post, a comment you make in a conversation, a username or display name you create (no epithets or slurs), or an account you run.

Harassment; incitements to violence; or threats of mental, emotional, cyber, or physical harm to other members. There’s a line between civil disagreement and harassment. You cross that line by bullying, attacking, or posing a credible threat to members of the site. This happens when you go beyond criticism of their words or ideas and instead attack who they are. If you’ve got a vendetta against a certain member, do not police and criticize that member’s every move, post, or comment on a conversation. Absolutely don’t take this a step further and organize or encourage violence against this person, whether through doxxing, obtaining dirt, or spreading that person’s private information.

Violations of privacy. Respect the sanctity of our members’ personal information. Don’t con them – or the infrastructure of our site – to obtain, post, or disseminate any information that could threaten or harm our members. This includes, but isn’t limited to, credit card or debit card numbers; social or national security numbers; home addresses; personal, non-public email addresses or phone numbers; sexts; or any other identifying information that isn’t already publicly displayed with that person’s knowledge.

Sexually-explicit, NSFW, obscene, vulgar, or pornographic content. We’d like for This. to be a site that someone can comfortably scroll through in a public space – say a cafe, or library. We’re not a place for sexually-explicit or pornographic posts, comments, accounts, usernames, or display names. The internet is rife with spaces for you to find people who might share your passion for a certain Pornhub video, but This. isn’t the place to do that. When it comes to nudity, what we do allow on our site is informative or newsworthy – so, for example, if you’re sharing this article on Cameroon’s breast ironing tradition, that’s fair game. Or a good news or feature article about Debbie Does Dallas. But, artful as it may be, we won’t allow actual footage of Debbie Does Dallas on the site. (We understand that some spaces on the internet are shitty at judging what is and isn’t obscene when it comes to nudity, so if you think we’ve pulled your post off the site because we’re a bunch of unreasonable prudes, we’ll be happy to engage.)

Excessively violent content. Gore, mutilation, bestiality, necrophilia? No thanks! There’s a distinction between a potentially upsetting image that’s worth consuming (think of some of the best war photography) and something you’d find in a snuff film. It’s not always an easy distinction to make – real life is pretty brutal, and some of the images we probably need to see are the hardest to stomach – but we also don’t want to create an overwhelmingly negative experience for anyone who visits the site and happens to stumble upon a painful image.

Promotion of self-harm, eating disorders, alcohol or drug abuse, or similar forms of destructive behavior. The internet is, sadly, also rife with spaces where people get off on encouraging others to hurt themselves. If you’d like to do that, get off our site and certainly seek help.

Username squatting. Dovetailing with that, we reserve the right to take back a username that is not being actively used and give it to someone else who’d like it – especially if it’s, say, an esteemed publication, organization, or person. We’re also firmly against attempts to buy or sell stuff in exchange for usernames.

Use of the This. brand, trademark, or logo without consent. You also cannot use the This. name or anything associated with the brand without our consent – unless, of course, it’s a news item. That means no creating accounts, usernames, or display names that use our brand.

Spam. Populating the site with spammy accounts is antithetical to our mission – being the place to find the absolute best in media. If you’ve created accounts that are transparently selling, say, “installation help for Macbooks” or some other suspicious form of tech support, or advertising your “viral video” about Justin Bieber that’s got a suspiciously low number of views, you don’t belong on our site. That contradicts why we exist as a platform – to give members a noise-free experience they can’t find elsewhere on the web.

Impersonation of others. Dovetailing with that – though we’d all like to be The New York Times or Diana Ross, don’t pretend to be them. Don’t create an identity on the site in the likeness of a company or person who isn’t you. If you decide, for some reason, to create a parody account of a public figure or organization – though we can think of better sites to do that on, frankly – make sure you make that as clear as possible in your display name, avatar, and bio.

Infringement of copyright or intellectual property rights. Don’t post copyrighted works without the permission of its original owner or creator. This extends, for example, to copying and pasting a copyrighted set of words into a comment and passing it off as your own without credit. If you think someone has unlawfully violated your own copyright, please follow the DMCA procedures set forth in our Terms of Service.

Mass or automated registration and following. We’ve worked hard to build the site’s infrastructure. If you manipulate that in any way to game your follow count or register multiple spam accounts, we’ll have to terminate your account.

Exploits, phishing, resource abuse, or fraudulent content. Do not scam our members into giving you money, or mislead our members through misrepresenting a link to, say, a virus.

Exploitation of minors. Do not post any material regarding minors that’s sexually explicit, violent, or harmful to their safety. Don’t solicit or request their private or personally identifiable information. Leave them alone.

So how do we take punitive action against anyone who violates these? Depends on the severity of the offense. If you’re a member with a good track record who seems to have slipped up, we’ll shoot you an email telling you why your content was removed. If you’ve shared, written, or done something flagrantly and recklessly violating one of these rules, we’ll ban you from the site through deleting your account and all that’s associated with it. And if we feel it’s necessary or otherwise believe it is required, we will work with law enforcement to handle any risk to one of our members, the This. community in general, or to public safety.

To put it plainly – if you’re an asshole, we’ll kick you off the site.

Let’s make that a little more concrete.

I want to say: “Former Vice-President Dick Cheney should be tortured for a minimum of thirty (30) years and be kept alive for that purpose, as a penalty for his war crimes.”

I can’t say that on This. because:

  • “incitement to violence” If torture is ok, then so is other violence.
  • “harmful to marginalized groups” If you think of sociopaths as a marginalized group.
  • “harassment” Cheney is a victim too. He didn’t start life as a moral leper.
  • “excessively violent content” Assume I illustrate the torture Cheney should suffer.

Rules broken vary by the specific content of my speech.

Remind me to pass this advice along to: Jonathan “I Want To Be A Twitter Censor” Weisman. All he needs to do is build a competitor to Twitter and he can censor to his heart’s delight.

The build-your-own-platform advice isn’t just my opinion. This. confirms it:

If you don’t like these rules, feel free to create your own platform! There are a lot of awesome, simple ways to do that. That’s what’s so lovely about the internet.

March 31, 2016

Onlinecensorship.org Launches First Report (PDF)

Filed under: Censorship,Free Speech,Social Media,Tweets,Twitter — Patrick Durusau @ 2:36 pm

Onlinecensorship.org Launches First Report (PDF).

Reposting:

Onlinecensorship.org is pleased to share our first report "Unfriending Censorship: Insights from four months of crowdsourced data on social media censorship." The report draws on data gathered directly from users between November 2015 and March 2016.

We asked users to send us reports when they had their content or accounts taken down on six social media platforms: Facebook, Flickr, Google+, Instagram, Twitter, and YouTube. We have aggregated and analyzed the collected data across geography, platform, content type, and issue areas to highlight trends in social media censorship. All the information presented here is anonymized, with the exception of case study examples we obtained with prior approval by the user.

Here are some of the highlights:

  • This report covers 161 submissions from 26 countries, regarding content in eleven languages.
  • Facebook was the most frequently reported platform, and account suspensions were the most reported content type.
  • Nudity and false identity were the most frequent reasons given to users for the removal of their content.
  • Appeals seem to present a particular challenge. A majority of users (53%) did not appeal the takedown of their content, 50% of whom said they didn’t know how and 41.9% of whom said they didn’t expect a response. In only four cases was content restored, while in 50 the user didn’t get a response.
  • We received widespread reports that flagging is being used for censorship: 61.6% believed this was the cause of the content takedown.

While we introduced some measures to help us verify reports (such as giving respondents the opportunity to send us screenshots that support their claims), we did not work with the companies to obtain this data and thus cannot claim it is representative of all content takedowns or user experiences. Instead, it shows how a subset of the millions of social media users feel about how their content takedowns were handled, and the impact it has had on their lives.

The full report is available for download and distribution under Creative Commons licensing.

As the report itself notes, 161 reports across 6 social media platforms in 4 months isn’t a representative sample of censoring in social media.

Twitter alone brags about closing 125,000 ISIS accounts since mid-2015 (report dated 5 February 2016).

Closing ISIS accounts is clearly censorship of political speech, whatever hand waving and verbal gymnastics Twitter wants to employ to justify its practices. Including terms of service.

Censorship, on whatever basis, by whoever practiced, by whatever mechanism (including appeals), will always step on legitimate speech of some speakers.

The non-viewing of content has one and only one legitimate locus of control, a user’s browser for web content.

Browsers and/or web interfaces for Twitter, Facebook, etc., should enable users to block users, content by keywords, or even classifications offered by social media services.
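The filtering logic itself is almost embarrassingly small. A toy sketch in Python for brevity (in practice it would live in a browser extension or userscript, and the post fields are my assumptions):

# Toy sketch of user-side filtering: the reader, not the platform,
# decides what never reaches the screen. Field names are assumptions.
blocked_users = {"spammer123"}
blocked_keywords = {"giveaway", "winner"}

def visible(post: dict) -> bool:
    """Keep a post only if it trips none of this user's filters."""
    if post["author"] in blocked_users:
        return False
    text = post["text"].lower()
    return not any(word in text for word in blocked_keywords)

timeline = [
    {"author": "spammer123", "text": "You are a WINNER! Click here."},
    {"author": "friend", "text": "New paper on topic maps."},
]
print([p["text"] for p in timeline if visible(p)])
# -> only the post from "friend" survives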

Poof!

All need for collaboration with governments, issues of what content to censor, appeal processes, etc., suddenly disappear.

Enabling users to choose the content that will be displayed in their browsers empowers listeners as well as speakers, with prejudice towards none.

Yes?

March 30, 2016

Jihadist Wannabes: You Too Can Be A Western Lackey

Filed under: Government,Politics,Social Media — Patrick Durusau @ 2:16 pm

Eric Geller’s piece Why ISIS is winning the online propaganda war is far too long to read but has several telling insights for jihadist wannabes.

First, perhaps without fully realizing it, Geller points out that the appeal of ISIS is based on facts, not messages:


Young Muslims who feel torn between what can seem like two different worlds, who long for structure and meaning in their lives, are ISIS’s best targets. They seek a coherent picture of the world—and ISIS is ready to offer one. Imagine being 19 years old, living in a major American city, and not understanding how a terrorist attack in Paris can change the way your fellow subway passengers look at you. If that prejudice or bigotry mystified you, you might gravitate toward someone offering an explanation that felt like it fit with your experiences. You might start watching YouTube videos about the supposedly irreconcilable differences between the West and the Islamic world. ISIS shapes its content to appeal to this person and others who lack a framework for understanding world events and are willing to embrace a radical one.

The other psychological factor that ISIS exploits is the natural desire for purpose. ISIS is a bonafide regional power, and to people who already feel out of place in Western society and crave a sense of direction, joining ISIS offers that purpose, that significance. They can become part of something bigger than themselves. They can fight for a cause. ISIS’s messages don’t just offer a framework for understanding the world; they also offer the chance to help shape it. These messages “make people feel like they matter in the world,” Beutel said, by promising “a sense of honor and self-esteem, and the ability to actively live out those desires.”

There are also more pragmatic promises, tailored to people who are not only spiritually aimless but economically frustrated and emotionally unfulfilled. Liang described this part of the appeal as, “Come and you will have a real life. You will have a salary. You will have a job. You will have a wife. You will have a house.”

“This is appealing to people who have, really, no future,” she said.

I can see how ISIS would be appealing to:

…people who have, really, no future…

Noting that the “no future” is a fact, not idle speculation. All Muslim youth do and will continue to face discrimination, especially in the aftermath of terrorist attacks.

The West and Islamic worlds are irreconcilable only to the extent leaders in both worlds profit from that view. Sane members of those and other traditions relish and welcome such diversity. The Islamic world has a better record of toleration of diversity than one can claim for any Western power.

Second, Geller illustrates how the focus on message is at odds with changing the reality for Muslim youth:


If the U.S. and its allies want to dissuade would-be jihadists from joining ISIS, they need to start from square one. “We need a compelling story that makes our story better than theirs,” Liang said. “And so far their story is trumping ours.”

The anti-extremist story can’t just be a paean to human rights and liberal democratic values. It must provide clear promises about what the Middle East will look like if ISIS is defeated. “What are we going to do if we take back the land that [ISIS] is inhabiting at the moment?” Liang said. “What government are we going to set up, and how legitimate will it be? If you look at, right now, the Iraqi state, it’s extremely corrupt, and it has to prove that it will be the better alternative.”

Part of the challenge that counter-narrative designers face is that the anti-extremist story can’t just be a sweeping theoretical message. It has to be pragmatic, full of real promises. But no one has a clear idea of how to do this. “To be totally honest, we haven’t cracked that nut yet,” the former senior administration official said. “Maybe it is liberal values and a democratic order and human rights and democratic values. I would hope that that would be the case. But I don’t think that there’s evidence yet that that would be equally compelling as a narrative or a set of values.

“Everyone agrees [that] we can’t just counter-message,” the official added. “We have to promote alternative messages. But nobody understands or agrees or has the answer in terms of what are the alternate courses of action or pathways that one could offer.”

How’s that for a bizarre story line? There is no effort to change the reality as experienced by Muslim youth, but they should just suck it up and not join ISIS?

One of the problems with “messaging” is the West’s insistence on dictating who will deliver the message and controlling what else they may say.

Not to mention that being discovered to be a Western lackey damages the credibility of anti-jihadists.

I’m not sure who edited Geller’s piece but there is this gem:


While the big-picture thinkers devise a story, others should focus on a bevy of vital changes to how counter-narratives are produced and distributed. For one thing, the content is too grim. Instead of going dark, Beutel said, go light: Offer would-be jihadists hope. Humanize ISIS’s foot soldiers instead of demonizing them, so that your intended audience understands that you care about their fate and not just taking them off the battlefield. “When you have people who are espousing incredibly hateful worldviews, the tendency is to want to demonize them—to want to shut them out [in order] to isolate them,” Beutel said. “More often than not, that actually repulses people rather than [getting] them to open up.” (emphasis added)

Gee, “espousing incredibly hateful worldviews,” I don’t think of ISIS first in that regard. Do you? There’s a list of governments and leaders that I would put way ahead of ISIS.

Maybe, just maybe, urging people to not join groups you are trying to destroy that are resisting corrupt Western toady governments just isn’t persuasive?

Have you stopped corrupting Muslim governments? Have you stopped supporting governments that oppress Muslims? Have you stopped playing favorites between Muslim factions? Have you taken any steps to promote a safe and diverse environment for Muslims in your society?

Or the overall question: Have you made a positive different in the day to day lives of Muslims? (from their point of view, not yours)

Messaging not based on having done (not promised, accomplished) those and other things is an invitation to be a Western lackey. Who wants that?


All the attribution of a high level of skill to ISIS messaging is merely a reflection of the tone-deafness of West-dictated messaging. Strict hierarchical control over both messages and speakers, using messages that appeal to the sender and not the receiver, and valuing message over reality are only some of the flaws in Western anti-jihadist programs.

The one possible redeeming point is the use of former jihadists. ISIS, being composed of people, is subject to the same failings as governments/groups/movements everywhere. I’m not sure how any government could claim to be superior to ISIS in that regard.


BTW, I’m not an ISIS “cheerleader” as Geller put it. I have serious disagreement with ISIS on a number of issues, social policies and targeting being prominent ones. I do agree on the need to fight against corrupt, Western-dictated Muslim governments. Contrary to current US foreign policy.
