Archive for the ‘Social Media’ Category

Liberals Amping Right Wing Conspiracies

Wednesday, February 28th, 2018

You read the headline correctly: Liberals Amping Right Wing Conspiracies.

It’s the only reasonable conclusion after reading Molly McKew’s post: How Liberals Amped up a Paranoid Shooting Conspiracy Theory.

From the post:

This terminology camouflages the war for minds that is underway on social media platforms, the impact that this has on our cognitive capabilities over time, and the extent to which automation is being engaged to gain advantage. The assumption, for example, that other would-be participants in social media information wars who choose to use these same tactics will gain the same capabilities or advantage is not necessarily true. This is a playing field that is hard to level: Amplification networks have data-driven, machine learning components that work better with refinement over time. You can’t just turn one on and expect it to work perfectly.

The vast amounts of content being uploaded every minute cannot possibly be reviewed by human beings. Algorithms, and the poets who sculpt them, are thus given an increasingly outsized role in the shape of our information environment. Human minds are on a battlefield between warring AIs—caught in the crossfire between forces we can’t see, sometimes as collateral damage and sometimes as unwitting participants. In this blackbox algorithmic wonderland, we don’t know if we are picking up a gun or a shield.

McKew has a great description of the amplification in the Parkland shooting conspiracy case, but it’s after the fact and not a basis for predicting the next amplification event.

Any number of research projects suggest themselves:

  • Observing and testing social media algorithms against content
  • Discerning patterns in amplified content
  • Testing refinement of content
  • Building automated tools to apply lessons in amplification

No doubt all those are underway in various guises for any number of reasons. But are you going to share in those results to protect your causes?
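One of those projects, discerning patterns in amplified content, can start surprisingly small. A minimal sketch (sample posts invented; a real pipeline would pull from a platform API) that flags near-duplicate messages pushed by many accounts:

```python
from collections import Counter
import re

def normalize(text):
    """Collapse a post to a crude fingerprint: lowercase words only,
    so trivially varied copies of the same message hash alike."""
    return " ".join(re.findall(r"[a-z0-9]+", text.lower()))

def amplification_clusters(posts, min_copies=3):
    """Group posts by fingerprint; fingerprints repeated at least
    min_copies times are candidate amplification campaigns."""
    counts = Counter(normalize(p) for p in posts)
    return {fp: n for fp, n in counts.items() if n >= min_copies}

# Hypothetical sample: one message pushed by several accounts, plus noise.
posts = ["Crisis actor EXPOSED!!!", "crisis actor exposed",
         "Crisis actor exposed...", "I had soup for lunch"]
print(amplification_clusters(posts))
# {'crisis actor exposed': 3}
```

Real amplification networks vary wording deliberately, so production systems use fuzzier fingerprints (shingling, MinHash), but the counting logic is the same.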

Don’t Delete Evil Data [But Remember the Downside of “Evidence”]

Wednesday, February 14th, 2018

Don’t Delete Evil Data by Lam Thuy Vo.

From the post:

The web needs to be a friendlier place. It needs to be more truthful, less fake. It definitely needs to be less hateful. Most people agree with these notions.

There have been a number of efforts recently to enforce this idea: the Facebook groups and pages operated by Russian actors during the 2016 election have been deleted. None of the Twitter accounts listed in connection to the investigation of the Russian interference with the last presidential election are online anymore. Reddit announced late last fall that it was banning Nazi, white supremacist, and other hate groups.

But even though much harm has been done on these platforms, is the right course of action to erase all these interactions without a trace? So much of what constitutes our information universe is captured online—if foreign actors are manipulating political information we receive and if trolls turn our online existence into hell, there is a case to be made for us to be able to trace back malicious information to its source, rather than simply removing it from public view.

In other words, there is a case to be made to preserve some of this information, to archive it, structure it, and make it accessible to the public. It’s unreasonable to expect social media companies to sidestep consumer privacy protections and to release data attached to online misconduct willy-nilly. But to stop abuse, we need to understand it. We should consider archiving malicious content and related data in responsible ways that allow for researchers, sociologists, and journalists to understand its mechanisms better and, potentially, to demand more accountability from trolls whose actions may forever be deleted without a trace.

By some as yet unspecified mechanism, I would support preservation of all social media, and making it publicly available if it was publicly posted originally. Any restriction or permission to see/use the data will lead to the same abuses we see now.

Twitter, among others, talks about abuse but no one can prove or disprove whatever Twitter cares to say.

There is a downside to preserving social media. You have probably seen the NBC News story on 200,000 tweets billed as the smoking gun on Russian interference in the 2016 elections.

Well, except that if you look at the tweets, that’s about as far from a smoking gun on Russian interference as anything you can imagine.

By analogy, that’s why intelligence analysts always say they have evidence and give you their conclusions, but not the evidence. There is too much danger that you will discover their report is completely fictional.

Or when not wholly fictional, serves their or their agency’s interest.

Keeping evidence is risky business. Just so you are aware.

PubMed Commons to be Discontinued

Friday, February 2nd, 2018

PubMed Commons to be Discontinued

From the post:

PubMed Commons has been a valuable experiment in supporting discussion of published scientific literature. The service was first introduced as a pilot project in the fall of 2013 and was reviewed in 2015. Despite low levels of use at that time, NIH decided to extend the effort for another year or two in hopes that participation would increase. Unfortunately, usage has remained minimal, with comments submitted on only 6,000 of the 28 million articles indexed in PubMed.

While many worthwhile comments were made through the service during its 4 years of operation, NIH has decided that the low level of participation does not warrant continued investment in the project, particularly given the availability of other commenting venues.

Comments will still be available, see the post for details.

Good time for the reminder that even negative results from an experiment are valuable.

Even more so in this case, because discussion/comment facilities are non-trivial components of a content delivery system. Time and resources not spent on comment facilities can be put toward other priorities.

Where do discussions of medical articles take place and can they be used to automatically annotate published articles?

21 Recipes for Mining Twitter Data with rtweet

Saturday, January 6th, 2018

21 Recipes for Mining Twitter Data with rtweet by Bob Rudis.

From the preface:

I’m using this as way to familiarize myself with bookdown so I don’t make as many mistakes with my web scraping field guide book.

It’s based on Matthew R. Russell’s book. That book is out of distribution and much of the content is in Matthew’s “Mining the Social Web” book. There will be many similarities between his “21 Recipes” book and this book on purpose. I am not claiming originality in this work, just making an R-centric version of the cookbook.

As he states in his tome, “this intentionally terse recipe collection provides you with 21 easily adaptable Twitter mining recipes”.

Rudis has posted about this editing project at: A bookdown “Hello World” : Twenty-one (minus two) Recipes for Mining Twitter with rtweet, which you should consult if you want to contribute to this project.

Working through 21 Recipes for Mining Twitter Data with rtweet will give you experience proofing a text, and if you type in the examples (no cut-n-paste), you’ll develop rtweet muscle memory.
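The recipes themselves are R/rtweet, but the tasks translate to any language. A hedged Python sketch of one recipe-style task, counting the top hashtags in a batch of tweets (sample tweets invented so the sketch runs without API credentials):

```python
from collections import Counter
import re

# Hypothetical stand-ins for tweets fetched with rtweet's search_tweets();
# plain strings here so no API access is required.
tweets = [
    "Learning #rstats with #rtweet",
    "More #rstats recipes",
    "Nothing to see here",
]

def top_hashtags(texts, n=5):
    """Extract hashtags (case-folded) and return the n most common."""
    tags = re.findall(r"#\w+", " ".join(texts).lower())
    return Counter(tags).most_common(n)

print(top_hashtags(tweets))
# [('#rstats', 2), ('#rtweet', 1)]
```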


Who Has More Government Censorship of Social Media, Canada or US?

Friday, November 10th, 2017

Federal government blocking social media users, deleting posts by Elizabeth Thompson.

From the post:

Canadian government departments have quietly blocked nearly 22,000 Facebook and Twitter users, with Global Affairs Canada accounting for nearly 20,000 of the blocked accounts, CBC News has learned.

Moreover, nearly 1,500 posts — a combination of official messages and comments from readers — have been deleted from various government social media accounts since January 2016.

However, there could be even more blocked accounts and deleted posts. In answer to questions tabled by Opposition MPs in the House of Commons, several departments said they don’t keep track of how often they block users or delete posts.

It is not known how many of the affected people are Canadian.

It’s also not known how many posts were deleted or users were blocked prior to the arrival of Prime Minister Justin Trudeau’s government.

But the numbers shed new light on how Ottawa navigates the world of social media — where it can be difficult to strike a balance between reaching out to Canadians while preventing government accounts from becoming a destination for porn, hate speech and abuse.

US Legal Issues

Davison v. Loudoun County Board of Supervisors

Meanwhile, south of the Canadian border, last July (2017), a US district court decision carried the headline: Federal Court: Public Officials Cannot Block Social Media Users Because of Their Criticism.

Davison v. Loudoun County Board of Supervisors (Davidson) involved the chair of the Loudoun County Board of Supervisors, Phyllis J. Randall. In her capacity as a government official, Randall runs a Facebook page to keep in touch with her constituents. In one post to the page, Randall wrote, “I really want to hear from ANY Loudoun citizen on ANY issues, request, criticism, compliment, or just your thoughts.” She explicitly encouraged Loudoun residents to reach out to her through her “county Facebook page.”

Brian C. Davidson, a Loudon denizen, took Randall up on her offer and posted a comment to a post on her page alleging corruption on the part of Loudoun County’s School Board. Randall, who said she “had no idea” whether Davidson’s allegations were true, deleted the entire post (thereby erasing his comment) and blocked him. The next morning, she decided to unblock him. During the intervening 12 hours, Davidson could view or share content on Randall’s page but couldn’t comment on its posts or send it private messages.

Davidson sued, alleging a violation of his free speech rights. As U.S. District Judge James C. Cacheris explained in his decision, Randall essentially conceded in court that she had blocked Davidson “because she was offended by his criticism of her colleagues in the County government.” In other words, she “engaged in viewpoint discrimination,” which is generally prohibited under the First Amendment.

Blocking of Twitter users by President Trump has led to other litigation.

Knight First Amendment Institute at Columbia University v. Trump (1:17-cv-05205)

You can track filings in Knight First Amendment Institute at Columbia University v. Trump courtesy of the Court Listener Project. Please put the Court Listener project on your year end donation list.

US Factual Issues

The complaint outlines the basis for the case, both legal and factual, but does not recite any data on blocking of social media accounts by federal agencies. It would not have to, since such data is not strictly relevant to the issue at hand, but it would be useful to know the standard practice among US government agencies.

I can suggest where to start looking for that answer: U.S. Digital Registry, which as of today lists 10,877 social media accounts.

You could also ask the agencies in question, via FOIA requests, for their lists of blocked accounts.

Twitter won’t allow you to see the list of blocked users for accounts other than your own. Of course, that rule depends on your level of access. You’ll find similar situations for other social media providers.

Assuming you have blocked users by official or self-help means, comparing blocked users across agencies, by their demographics, etc., would make a nice data-driven journalism project. Yes?
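As a starting point, once you had per-agency blocklists in hand (the agency names and handles below are invented placeholders), the comparison itself is just set arithmetic:

```python
from collections import Counter

# Hypothetical per-agency blocklists; in practice these would come
# from FOIA responses or self-help scraping.
blocked = {
    "Global Affairs": {"@user_a", "@user_b", "@user_c"},
    "Treasury": {"@user_b", "@user_d"},
    "State": {"@user_b", "@user_c"},
}

def blocked_everywhere(agency_sets):
    """Accounts blocked by every agency in the sample."""
    return set.intersection(*agency_sets.values())

def block_counts(agency_sets):
    """For each account, how many agencies block it."""
    return Counter(u for s in agency_sets.values() for u in s)

print(blocked_everywhere(blocked))   # {'@user_b'}
print(block_counts(blocked).most_common(2))
```

Joining those counts against account demographics would be the data-driven journalism step.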


Bottery – A conversational agent prototyping platform

Monday, October 30th, 2017

Bottery – A conversational agent prototyping platform by katecompton@

From the webpage:

Bottery is a syntax, editor, and simulator for prototyping generative contextual conversations modeled as finite state machines.

Bottery takes inspiration from the Tracery opensource project for generative text (also by katecompton@ in a non-google capacity) and the CheapBotsDoneQuick bot-hosting platform, as well as open FSM-based storytelling tools like Twine.

Like Tracery, Bottery is a syntax that specifies the script of a conversation (a map) with JSON. Like CheapBotsDoneQuick, the BotteryStudio can take that JSON and run a simulation of that conversation in a nice Javascript front-end, with helpful visualizations and editting ability.

The goal of Bottery is to help everyone, from designers to writers to coders, be able to write simple and engaging contextual conversational agents, and to test them out in a realistic interactive simulation, mimicking how they’d work on a “real” platform like API.AI.
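The map-as-FSM idea is easy to see in miniature. A sketch of a two-turn conversation machine (plain Python, not Bottery’s actual JSON syntax; the states and replies are invented):

```python
# A minimal finite-state conversation in the spirit of Bottery's maps.
MAP = {
    "start":  {"say": "Hi! Coffee or tea?",
               "moves": {"coffee": "coffee", "tea": "tea"}},
    "coffee": {"say": "One coffee coming up.", "moves": {}},
    "tea":    {"say": "A nice cup of tea, then.", "moves": {}},
}

def step(state, user_input):
    """Advance the FSM: match the input against the current state's
    transitions, staying put on unrecognized input."""
    moves = MAP[state]["moves"]
    next_state = moves.get(user_input.strip().lower(), state)
    return next_state, MAP[next_state]["say"]

state = "start"
print(MAP[state]["say"])        # Hi! Coffee or tea?
state, reply = step(state, "tea")
print(reply)                    # A nice cup of tea, then.
```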

Not a bot to take your place on social media but it does illustrate the potential of such a bot.

Drive your social “engagement” score with a bot!

Hmmm: gather up the comments and your responses on, say, Facebook; compare each new comment for similarity against the stored comments; then select the response paired with the closest match. With or without an opportunity to override the automatic response.
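That nearest-match scheme can be sketched in a few lines with standard-library string similarity (the comment/response history below is invented, and the 0.4 threshold is an arbitrary knob for when to defer to a human):

```python
import difflib

# Hypothetical history of (comment received, reply given) pairs.
history = [
    ("happy birthday!", "Thanks so much!"),
    ("did you see the game last night?", "I missed it, how was it?"),
    ("this article is wrong", "Which part do you disagree with?"),
]

def auto_reply(new_comment, pairs, threshold=0.4):
    """Pick the stored reply whose triggering comment is most similar
    to the new comment; return None below the threshold so a human
    can step in (the override mentioned above)."""
    scored = [(difflib.SequenceMatcher(None, new_comment.lower(),
                                       old.lower()).ratio(), reply)
              for old, reply in pairs]
    score, reply = max(scored)
    return reply if score >= threshold else None

print(auto_reply("happy birthday", history))
# Thanks so much!
```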


@niccdias and @cward1e on Mis- and Dis-information [Additional Questions]

Friday, September 29th, 2017

10 questions to ask before covering mis- and dis-information by Nic Dias and Claire Wardle.

From the post:

Can silence be the best response to mis- and dis-information?

First Draft has been asking ourselves this question since the French election, when we had to make difficult decisions about what information to publicly debunk for CrossCheck. We became worried that – in cases where rumours, misleading articles or fabricated visuals were confined to niche communities – addressing the content might actually help to spread it farther.

As Alice Marwick and Rebecca Lewis noted in their 2017 report, Media Manipulation and Disinformation Online, “[F]or manipulators, it doesn’t matter if the media is reporting on a story in order to debunk or dismiss it; the important thing is getting it covered in the first place.” Buzzfeed’s Ryan Broderick seemed to confirm our concerns when, on the weekend of the #MacronLeaks trend, he tweeted that 4channers were celebrating news stories about the leaks as a “form of engagement.”

We have since faced the same challenges in the UK and German elections. Our work convinced us that journalists, fact-checkers and civil society urgently need to discuss when, how and why we report on examples of mis- and dis-information and the automated campaigns often used to promote them. Of particular importance is defining a “tipping point” at which mis- and dis-information becomes beneficial to address. We offer 10 questions below to spark such a discussion.

Before that, though, it’s worth briefly mentioning the other ways that coverage can go wrong. Many research studies examine how corrections can be counterproductive by ingraining falsehoods in memory or making them more familiar. Ultimately, the impact of a correction depends on complex interactions between factors like subject, format and audience ideology.

Reports of disinformation campaigns, amplified through the use of bots and cyborgs, can also be problematic. Experiments suggest that conspiracy-like stories can inspire feelings of powerlessness and lead people to report lower likelihoods to engage politically. Moreover, descriptions of how bots and cyborgs were found give their operators the opportunity to change strategies and better evade detection. In a month awash with revelations about Russia’s involvement in the US election, it’s more important than ever to discuss the implications of reporting on these kinds of activities.

Following the French election, First Draft has switched from the public-facing model of CrossCheck to a model where we primarily distribute our findings via email to newsroom subscribers. Our election teams now focus on stories that are predicted (by NewsWhip’s “Predicted Interactions” algorithm) to be shared widely. We also commissioned research on the effectiveness of the CrossCheck debunks and are awaiting its results to evaluate our methods.

The ten questions (see the post) should provoke useful discussions in newsrooms around the world.

I have three additional questions that round Nic Dias and Claire Wardle’s list to a baker’s dozen:

  1. How do you define mis- or dis-information?
  2. How do you evaluate information to classify it as mis- or dis-information?
  3. Are your evaluations of specific information as mis- or dis-information public?

Defining dis- or mis-information

The standard definitions (Merriam Webster) for:

disinformation: false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth

misinformation: incorrect or misleading information

would find nodding agreement from Al Jazeera and the CIA to the European Union and Recep Tayyip Erdoğan.

However, what is or is not disinformation or misinformation would vary from one of those parties to another.

Before reaching the ten questions of Nic Dias and Claire Wardle, define what you mean by disinformation or misinformation. Hopefully with numerous examples, especially ones that are close to the boundaries of your definitions.

Otherwise, all your readers know is that on the basis of some definition of disinformation/misinformation known only to you, information has been determined to be untrustworthy.

Documenting your process to classify as dis- or mis-information

Assuming you do arrive at a common definition of misinformation or disinformation, what process do you use to classify information according to those definitions? Ask your editor? That seems like a poor choice but no doubt it happens.

Do you consult and abide by an opinion found on Snopes? Or Politifact? Or do multiple fact-checkers have to agree for a judgment of misinformation or disinformation? What about other sources?

What sources do you consider definitive on the question of mis- or disinformation? Do you keep that list updated? How did you choose those sources over others?

Documenting your evaluation of information as dis- or mis-information

Having a process for evaluating information is great.

But have you followed that process? If challenged, how would you establish the process was followed for a particular piece of information?

Is your documentation office “lore,” or something more substantial?

An online form that captures the information, its source, the fact-checking source consulted, the date, the decision, and the person making the decision would take only seconds to populate. In addition to documenting the decision, you can build up a record of a source’s reliability.
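A sketch of such a record and the reliability tally it enables (the field names are illustrative, not a standard schema):

```python
from dataclasses import dataclass
from datetime import date
from collections import Counter

@dataclass
class Classification:
    """One documented mis/disinformation decision."""
    information: str
    source: str          # where the information appeared
    fact_checkers: list  # sources consulted, e.g. ["Snopes"]
    checked_on: date
    decision: str        # "misinformation", "disinformation", "ok"
    decided_by: str

def source_track_record(records):
    """Tally decisions per source to build a reliability picture."""
    return Counter((r.source, r.decision) for r in records)

# Invented example record.
records = [
    Classification("moon made of cheese", "example.social",
                   ["Snopes"], date(2017, 9, 29),
                   "misinformation", "editor"),
]
print(source_track_record(records))
```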


Vagueness makes discussion and condemnation of mis- or dis-information easy, but it also makes it difficult to maintain a process for evaluating information or common ground for classifying it, to say nothing of documenting your decisions on specific information.

Don’t be the black box of whim and caprice users experience at Twitter, Facebook and Google. You can do better than that.

Salvation for the Left Behind on Twitter’s 280 Character Limit

Wednesday, September 27th, 2017

If you are one of the “left behind” on Twitter’s expansion to a 280 character limit, don’t despair!

Robert Graham (@ErrataRob) rides to your rescue with: Browser hacking for 280 character tweets.

Truth is, Bob covers more than simply reaching the new 280-character limit for the left behind: he walks through HTTP requests, introduces Chrome’s DevTools, and demonstrates command-line use of cURL.

Take a few minutes to walk through Bob’s post.

A little knowledge of browsers and tools will put you far ahead of your management.

Thank You, Scott – SNL

Friday, May 26th, 2017

I posted this to Facebook, search for “Thanks Scott SNL” to find my post or that of others.

Included this note (with edits):

Appropriate social media warriors (myself included). From sexism and racism to fracking and pipelines, push back in the real world if you [want] change. Push back on social media for a warm but meaningless feeling of solidarity.

For me the “real world,” includes cyberspace, where pushing can have consequences.


Dissing Facebook’s Reality Hole and Impliedly Censoring Yours

Sunday, April 23rd, 2017

Climbing Out Of Facebook’s Reality Hole by Mat Honan.

From the post:

The proliferation of fake news and filter bubbles across the platforms meant to connect us have instead divided us into tribes, skilled in the arts of abuse and harassment. Tools meant for showing the world as it happens have been harnessed to broadcast murders, rapes, suicides, and even torture. Even physics have betrayed us! For the first time in a generation, there is talk that the United States could descend into a nuclear war. And in Silicon Valley, the zeitgeist is one of melancholy, frustration, and even regret — except for Mark Zuckerberg, who appears to be in an absolutely great mood.

The Facebook CEO took the stage at the company’s annual F8 developers conference a little more than an hour after news broke that the so-called Facebook Killer had killed himself. But if you were expecting a somber mood, it wasn’t happening. Instead, he kicked off his keynote with a series of jokes.

It was a stark disconnect with the reality outside, where the story of the hour concerned a man who had used Facebook to publicize a murder, and threaten many more. People used to talk about Steve Jobs and Apple’s reality distortion field. But Facebook, it sometimes feels, exists in a reality hole. The company doesn’t distort reality — but it often seems to lack the ability to recognize it.

I can’t say I’m fond of the Facebook reality hole but unlike Honan:

It can make it harder to use its platforms to harass others, or to spread disinformation, or to glorify acts of violence and destruction.

I have no desire to censor any of the content that anyone cares to make and/or view on it. Bar none.

The “default” reality settings desired by Honan and others are a thumb on the scale for some cause they prefer over others.

They are entitled to their preference, but I object to their setting the range of preferences enjoyed by others.


Mastodon (Tor Access Recommended)

Wednesday, April 5th, 2017


From the homepage:

Mastodon is a free, open-source social network. A decentralized alternative to commercial platforms, it avoids the risks of a single company monopolizing your communication. Pick a server that you trust — whichever you choose, you can interact with everyone else. Anyone can run their own Mastodon instance and participate in the social network seamlessly.

What sets Mastodon apart:

  • Timelines are chronological
  • Public timelines
  • 500 characters per post
  • GIFV sets and short videos
  • Granular, per-post privacy settings
  • Rich block and muting tools
  • Ethical design: no ads, no tracking
  • Open API for apps and services

… (emphasis in original)

No regex for filtering posts but it does have:

  • Block notifications from non-followers
  • Block notifications from people you don’t follow

One or both should cover most of the harassment cases.

I was surprised by the “Pick a server that you trust…” suggestion.

Really? A remote server being run by someone unknown to me? Bad enough that I have to “trust” my ISP, to a degree, but an unknown?

You really need a Tor-based email account, and you should use Tor for access to Mastodon. Seriously.

Creating A Social Media ‘Botnet’ To Skew A Debate

Friday, March 10th, 2017

New Research Shows How Common Core Critics Built Social Media ‘Botnets’ to Skew the Education Debate by Kevin Mahnken.

From the post:

Anyone following education news on Twitter between 2013 and 2016 would have been hard-pressed to ignore the gradual curdling of Americans’ attitudes toward the Common Core State Standards. Once seen as an innocuous effort to lift performance in classrooms, they slowly came to be denounced as “Dirty Commie agenda trash” and a “Liberal/Islam indoctrination curriculum.”

After years of social media attacks, the damage is impressive to behold: In 2013, 83 percent of respondents in Education Next’s annual poll of Americans’ education attitudes felt favorably about the Common Core, including 82 percent of Republicans. But by the summer of 2016, support had eroded, with those numbers measuring only 50 percent and 39 percent, respectively. The uproar reached such heights, and so quickly, that it seemed to reflect a spontaneous populist rebellion against the most visible education reform in a decade.

Not so, say researchers with the University of Pennsylvania’s Consortium for Policy Research in Education. Last week, they released the #commoncore project, a study that suggests that public animosity toward Common Core was manipulated — and exaggerated — by organized online communities using cutting-edge social media strategies.

As the project’s authors write, the effect of these strategies was “the illusion of a vociferous Twitter conversation waged by a spontaneous mass of disconnected peers, whereas in actuality the peers are the unified proxy voice of a single viewpoint.”

Translation: A small circle of Common Core critics were able to create and then conduct their own echo chambers, skewing the Twitter debate in the process.

The most successful of these coordinated campaigns originated with the Patriot Journalist Network, a for-profit group that can be tied to almost one-quarter of all Twitter activity around the issue; on certain days, its PJNET hashtag has appeared in 69 percent of Common Core–related tweets.

The team of authors tracked nearly a million tweets sent during four half-year spans between September 2013 and April 2016, studying both how the online conversation about the standards grew (more than 50 percent between the first phase, September 2013 through February 2014, and the third, May 2015 through October 2015) and how its interlocutors changed over time.
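The share statistics quoted above reduce to a simple per-day computation. A sketch with invented sample tweets standing in for the study’s corpus:

```python
# Hypothetical daily tweet samples; in the study these would be drawn
# from the ~1M Common Core tweets collected between 2013 and 2016.
daily_tweets = {
    "2015-06-01": ["#PJNET stop it", "#commoncore news", "#PJNET again"],
    "2015-06-02": ["#commoncore poll", "plain tweet"],
}

def hashtag_share(tweets_by_day, tag):
    """Fraction of each day's tweets containing the hashtag: the
    statistic behind claims like 'PJNET appeared in 69% of tweets.'"""
    return {day: sum(tag.lower() in t.lower() for t in tweets) / len(tweets)
            for day, tweets in tweets_by_day.items()}

print(hashtag_share(daily_tweets, "#PJNET"))
# {'2015-06-01': 0.6666666666666666, '2015-06-02': 0.0}
```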

Mahnken talks as though creating a ‘botnet’ to defeat adoption of the Common Core State Standards is a bad thing.

I never cared for #commoncore because testing makes money for large and small testing vendors. It has no other demonstrated impact on the educational process.

Let’s assume you want to build a championship high school baseball team. To do that, various officious intermeddlers, who have no experience with baseball, fund creation of the Common Core Baseball Standards.

Every three years, every child is tested against the Common Core Baseball Standards and their performance recorded. No funds are allocated for additional training for gifted performers, equipment, baseball fields, etc.

By the time these students reach high school, will you have the basis for a championship team? Perhaps, but if you do, it is due to random chance and not the Common Core Baseball Standards.

If you want a championship high school baseball team, you fund training, equipment, and baseball fields, in addition to spending money on the best facilities for your hoped-for championship team. Consistently and over time, you spend money.

The key to better education results isn’t testing, but funding based on the education results you hope to achieve.

I do commend the #commoncore project website for being an impressive presentation of Twitter data, even though it is clearly a propaganda machine for pro-Common Core advocates.

The challenge here is to work backwards from what the project observed to the principles and tactics that made #stopcommoncore so successful. That is, we know it succeeded, at least to some degree, but how do we replicate that success on other issues?

Replication is how science demonstrates the reliability of a technique.

Looking forward to hearing your thoughts, suggestions, etc.


Availability Cascades [Activists Take Note, Big Data Project?]

Saturday, February 25th, 2017

Availability Cascades and Risk Regulation by Timur Kuran and Cass R. Sunstein, Stanford Law Review, Vol. 51, No. 4, 1999, U of Chicago, Public Law Working Paper No. 181, U of Chicago Law & Economics, Olin Working Paper No. 384.

From the abstract:

An availability cascade is a self-reinforcing process of collective belief formation by which an expressed perception triggers a chain reaction that gives the perception of increasing plausibility through its rising availability in public discourse. The driving mechanism involves a combination of informational and reputational motives: Individuals endorse the perception partly by learning from the apparent beliefs of others and partly by distorting their public responses in the interest of maintaining social acceptance. Availability entrepreneurs – activists who manipulate the content of public discourse – strive to trigger availability cascades likely to advance their agendas. Their availability campaigns may yield social benefits, but sometimes they bring harm, which suggests a need for safeguards. Focusing on the role of mass pressures in the regulation of risks associated with production, consumption, and the environment, Professor Timur Kuran and Cass R. Sunstein analyze availability cascades and suggest reforms to alleviate their potential hazards. Their proposals include new governmental structures designed to give civil servants better insulation against mass demands for regulatory change and an easily accessible scientific database to reduce people’s dependence on popular (mis)perceptions.

Not recent, 1999, but a useful starting point for the study of availability cascades.

The authors want to insulate civil servants where I want to exploit availability cascades to drive their responses, but that’s a question of perspective, not practice.

Google Scholar reports 928 citations of Availability Cascades and Risk Regulation, so it has had an impact on the literature.

However, availability cascades are not a recipe science. Networks, Crowds, and Markets: Reasoning About a Highly Connected World by David Easley and Jon Kleinberg, especially chapters 16 and 17, provides a background for developing such insights.
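For intuition, the urn-style information cascade from chapter 16 of Easley and Kleinberg can be simulated in a few lines. This sketch simplifies their Bayesian agents to vote counting (a common classroom shortcut), so treat it as illustrative only:

```python
import random

def run_cascade(n_agents=20, signal_accuracy=2 / 3, truth="blue", seed=1):
    """Sequential guessing in the urn model: each agent receives a noisy
    private signal and sees earlier public guesses. Counting guesses as
    extra signals, a few early votes can lock in a cascade that later
    private signals never overturn."""
    random.seed(seed)
    other = "red" if truth == "blue" else "blue"
    guesses = []
    for _ in range(n_agents):
        signal = truth if random.random() < signal_accuracy else other
        blue = guesses.count("blue") + (signal == "blue")
        red = guesses.count("red") + (signal == "red")
        if blue > red:
            guess = "blue"
        elif red > blue:
            guess = "red"
        else:
            guess = signal  # tie: follow your own signal
        guesses.append(guess)
    return guesses

print(run_cascade())
```

Runs with different seeds show both correct and incorrect cascades, which is the phenomenon an availability entrepreneur tries to trigger on purpose.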

I started to suggest this would make a great big data project but big data projects are limited to where you have, well, big data. Certainly have that with Facebook, Twitter, etc., but that leaves a lot of the world’s population and social activity on the table.

That is, to avoid junk results, you would need survey instruments to track any chain reactions outside of the bots that dominate social media.

Very high end advertising, which still misses with alarming regularity, would be a good place to look for tips on availability cascades. They have a profit motive to keep them interested.

Building an Online Profile:… [Toot Your Own Horn]

Thursday, February 23rd, 2017

Building an Online Profile: Social Networking and Amplification Tools for Scientists by Antony Williams.

Seventy-seven slides from a February 22, 2017 presentation at NC State University on building an online profile.

Pure gold, whether you are building your own profile or one for an alternate identity. 😉

I like this slide in particular:

Take the “toot your own horn” advice to heart.

Your posts/work will never be perfect so don’t wait for that before posting.

Any errors you make are likely to go unnoticed until you correct them.

Empirical Analysis Of Social Media

Thursday, January 19th, 2017

How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, not Engaged Argument by Gary King, Jennifer Pan, and Margaret E. Roberts. American Political Science Review, 2017. (Supplementary Appendix)

From the abstract:

The Chinese government has long been suspected of hiring as many as 2,000,000 people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people. Many academics, and most journalists and activists, claim that these so-called “50c party” posts vociferously argue for the government’s side in political and policy debates. As we show, this is also true of the vast majority of posts openly accused on social media of being 50c. Yet, almost no systematic empirical evidence exists for this claim, or, more importantly, for the Chinese regime’s strategic objective in pursuing this activity. In the first large scale empirical analysis of this operation, we show how to identify the secretive authors of these posts, the posts written by them, and their content. We estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime’s strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. We infer that the goal of this massive secretive operation is instead to regularly distract the public and change the subject, as most of the these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime. We discuss how these results fit with what is known about the Chinese censorship program, and suggest how they may change our broader theoretical understanding of “common knowledge” and information control in authoritarian regimes.

I differ from the authors on some of their conclusions but this is an excellent example of empirical as opposed to wishful analysis of social media.

Wishful analysis of social media includes the farcical claim that social media is an effective recruitment tool for terrorists. The claim is made too often to dignify with a citation, and never with empirical evidence, only an author’s repetition of the common “wisdom.”

In contrast, King et al. are careful to say what their analysis does and does not support, finding that in a number of cases the evidence contradicts commonly held thinking about the role of the Chinese government in social media.

One example I found telling was the lack of evidence that anyone is paid for pro-government social media comments.

In the authors’ words:

We also found no evidence that 50c party members were actually paid fifty cents or any other piecemeal amount. Indeed, no evidence exists that the authors of 50c posts are even paid extra for this work. We cannot be sure of current practices in the absence of evidence but, given that they already hold government and Chinese Communist Party (CCP) jobs, we would guess this activity is a requirement of their existing job or at least rewarded in performance reviews.
… (at pages 10-11)

Here I differ from the authors’ “guess”:

…this activity is a requirement of their existing job or at least rewarded in performance reviews.

Kudos to the authors for labeling this a “guess,” although one expects the mainstream press and members of Congress to take it as written in stone.

However, the authors presume positive posts about the government of China can only result from direct orders or pressure from superiors.

That’s a major weakness in this paper and similar analysis of social media postings.

The simpler explanation of pro-government posts is that a poster is reporting the world as they see it. (Think Occam’s Razor.)

As for posters sharing their posts with the so-called “propaganda office,” perhaps they are attempting to curry favor. The small number of posters makes it difficult to credit their motives (unknown) and behavior (partially known) as representative of the estimated 2 million posters.

Moreover, out of a population that nears 1.4 billion, the existence of 2 million individuals with a positive view of the government isn’t difficult to credit.
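The scale claims are easy to sanity-check. Here is a back-of-the-envelope sketch; the figures are the paper’s, but the arithmetic below is my own illustration, not a computation from the paper itself:

```python
# Sanity-checking the scale estimates around King, Pan & Roberts.
# The input figures are from the discussion above; the division is illustrative.
posts_per_year = 448_000_000   # estimated fabricated comments per year
posters = 2_000_000            # long-suspected number of 50c posters
population = 1_400_000_000     # approximate population of China

per_poster = posts_per_year / posters   # average posts per poster per year
share = posters / population * 100      # posters as a share of the population

print(f"{per_poster:.0f} posts per poster per year")  # 224
print(f"{share:.2f}% of the population")              # 0.14
```

At roughly 224 posts per poster per year, under one post a day, the volume requires no dedicated full-time workforce, and 2 million posters amount to about 0.14% of the population, which is why their existence isn’t difficult to credit.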

This is an excellent paper that will repay a close reading several times over.

Take it also as a warning about ideologically based assumptions that can mar or even invalidate otherwise excellent empirical work.


Additional reading:

From Gary King’s webpage on the article:

This paper follows up on our articles in Science, “Reverse-Engineering Censorship In China: Randomized Experimentation And Participant Observation”, and the American Political Science Review, “How Censorship In China Allows Government Criticism But Silences Collective Expression”.

Eight Years of the Republican Weekly Address

Wednesday, January 4th, 2017

We looked at eight years of the Republican Weekly Address by Jesse Rifkin.

From the post:

Every week since Ronald Reagan started the tradition in 1982, the president delivers a weekly address. And every week, the opposition party delivers an address as well.

What can the Weekly Republican Addresses during the Obama era reveal about how the GOP has attempted to portray themselves to the American public, by the public policy topics they discussed and the speakers they featured? To find out, GovTrack Insider analyzed all 407 Weekly Republican Addresses for which we could find data during the Obama era, the first such analysis of the weekly addresses as best we can tell. (See the full list of weekly addresses here.)

Sometimes they discuss the same topic as the president’s weekly address — particularly common if a noteworthy event occurs in the news that week — although other times it’s on an unrelated topic of the party’s choosing. It also features a rotating cast of Republicans delivering the speech, most of them congressional, unlike the White House which has almost always featured President Obama, with Vice President Joe Biden occasionally subbing in.

On the issues, we found that Republicans have almost entirely refrained from discussing such inflammatory social issues as abortion, guns, or same-sex marriage in their weekly addresses, despite how animating such issues are to their base. They also were remarkably silent on Donald Trump until the week before the election.

We also find that while Republicans often get slammed on women’s rights and minority issues, Republican congressional women and African Americans are at least proportionally represented in the weekly addresses, compared to their proportions in Congress, if not slightly over-represented — but Hispanics are notably under-represented.

You have seen credible claims of predicting social unrest, such as On Predicting Social Unrest Using Social Media by Rostyslav Korolov, et al., and less credible claims from others, such as the CIA’s claim that it can predict some social unrest up to 5 days ahead.

Rumor has it that the CIA has a Word template named, appropriately enough: theRussiansDidIt. I can neither confirm nor deny that rumor.

Taking credible actors at their word, are you aware of any parallel research on weekly addresses by Congress and subsequent congressional action?

A very light skimming of the literature on predicting Supreme Court decisions comes up with: Competing Approaches to Predicting Supreme Court Decision Making by Andrew D. Martin, Kevin M. Quinn, Theodore W. Ruger, and Pauline T. Kim (2004), Algorithm predicts US Supreme Court decisions 70% of time by David Kravets (2014), and Fantasy Scotus (a Supreme Court fantasy league with cash prizes).

Congressional voting has been studied as well, for instance, Predicting Congressional Voting – Social Identification Trumps Party. (Now there’s an unfortunate headline for searchers.)

Congressional votes are important, but so are the progress of bills, the order in which issues are addressed, etc., and it is the reflection of those less formal aspects in weekly addresses from Congress that could be interesting.

The weekly speeches may be as divorced from any shared reality as comments inserted in the Congressional Record. On the other hand, a partially successful model, other than the timing of donations, may be possible.

Gab – Censorship Lite?

Tuesday, November 29th, 2016

I submitted my email today at Gab and got this message:

Done! You’re #1320420 in the waiting list.

Only three rules:

Illegal Pornography

We have a zero tolerance policy against illegal pornography. Such material will be instantly removed and the owning account will be dealt with appropriately per the advice of our legal counsel. We reserve the right to ban accounts that share such material. We may also report the user to local law enforcement per the advice of our legal counsel.

Threats and Terrorism

We have a zero tolerance policy for violence and terrorism. Users are not allowed to make threats of, or promote, violence of any kind or promote terrorist organizations or agendas. Such users will be instantly removed and the owning account will be dealt with appropriately per the advice of our legal counsel. We may also report the user to local and/or federal law enforcement per the advice of our legal counsel.

What defines a ‘terrorist organization or agenda’? Any group that is labelled as a terrorist organization by the United Nations and/or United States of America classifies as a terrorist organization on Gab.

Private Information

Users are not allowed to post others’ confidential information, including but not limited to, credit card numbers, street numbers, SSNs, without their expressed authorization.

If Gab is listening, I can get the rules down to one:

Court Ordered Removal

When Gab receives a court order from a court of competent jurisdiction ordering the removal of identified, posted content, at (service address), the posted, identified content will be removed.

Simple, fair, gets Gab and its staff out of the censorship business and provides a transparent remedy.

At no cost to Gab!

What’s not to like?

Gab should review my posts: Monetizing Hate Speech and False News and Preserving Ad Revenue With Filtering (Hate As Renewal Resource), while it is in closed beta.

Twitter and Facebook can keep spending uncompensated time and effort trying to be universal and fair censors. Gab has the opportunity to reach up and grab those $100 bills flying overhead for filtered news services.

What is the New York Times if not an opinionated and poorly run filter on all the possible information it could report?

Apply that same lesson to social media!

PS: Seriously, before going public, I would go to the one court-based rule on content. There’s no profit and no win in censoring any content on your own. Someone will always want more or less. Courts get paid to make those decisions.

Check with your lawyers, but if you don’t look at any content, you can’t be charged with constructive notice of it. Unless and until someone points it out; then you have to follow the DMCA, court orders, etc.

PubMed comments & their continuing conversations

Monday, November 21st, 2016

PubMed comments & their continuing conversations

From the post:

We have many options for communication. We can choose platforms that fit our style, approach, and time constraints. From pop culture to current events, information and opinions are shared and discussed across multiple channels. And scientific publications are no exception.

PubMed Commons was established to enable commenting in PubMed, the largest biomedical literature database. In the past year, commenters posted to more than 1,400 publications. Of those publications, 80% have a single comment today, and 12% have comments from multiple members. The conversation carries forward in other venues.

Sometimes comments pull in discussion from other locations or spark exchanges elsewhere. Here are a few examples where social media prompted PubMed Commons posts or continued the commentary on publications.

An encouraging review of examples of sane discussion through the use of comments.

Contrast that with the abandonment of comments by some media outlets, NPR for example: NPR Website To Get Rid Of Comments by Elizabeth Jensen.

My takeaway from Jensen’s account was that NPR likes its own free speech, but is not so interested in the free speech of others.

See also: Have Comment Sections on News Media Websites Failed?, for op-ed pieces at the New York Times from a variety of perspectives.

Perhaps comments on news sites are examples of casting pearls before swine? (Matthew 7:6)

Freedom of Speech/Press – Great For “Us” – Not So Much For You (Wikileaks)

Saturday, November 5th, 2016

The New York Times, sensing a possible defeat of its neo-liberal agenda on November 8, 2016, has loosed the dogs of war on social media in general and Wikileaks in particular.

Consider the sleight of hand in Farhad Manjoo’s How the Internet Is Loosening Our Grip on the Truth, which argues on one hand,

You’re Not Rational

The root of the problem with online news is something that initially sounds great: We have a lot more media to choose from.

In the last 20 years, the internet has overrun your morning paper and evening newscast with a smorgasbord of information sources, from well-funded online magazines to muckraking fact-checkers to the three guys in your country club whose Facebook group claims proof that Hillary Clinton and Donald J. Trump are really the same person.

A wider variety of news sources was supposed to be the bulwark of a rational age — “the marketplace of ideas,” the boosters called it.

But that’s not how any of this works. Psychologists and other social scientists have repeatedly shown that when confronted with diverse information choices, people rarely act like rational, civic-minded automatons. Instead, we are roiled by preconceptions and biases, and we usually do what feels easiest — we gorge on information that confirms our ideas, and we shun what does not.

This dynamic becomes especially problematic in a news landscape of near-infinite choice. Whether navigating Facebook, Google or The New York Times’s smartphone app, you are given ultimate control — if you see something you don’t like, you can easily tap away to something more pleasing. Then we all share what we found with our like-minded social networks, creating closed-off, shoulder-patting circles online.

This gets to the deeper problem: We all tend to filter documentary evidence through our own biases. Researchers have shown that two people with differing points of view can look at the same picture, video or document and come away with strikingly different ideas about what it shows.

You caught the invocation of authority by Manjoo, “researchers have shown,” etc.

But did you notice he never shows his other hand?

If the public is so bat-shit crazy that it takes all social media content as equally trustworthy, what are we to do?

Well, that is the question isn’t it?

In his conclusion, Manjoo invokes “dozens of news outlets” tirelessly but hopelessly fact-checking on our behalf.

The strong implication is that without the help of “media outlets,” you are a bundle of preconceptions and biases doing what feels easiest.

“News outlets,” on the other hand, are free from those limitations.

You bet.

If you thought Manjoo was bad, enjoy seething through Zeynep Tufekci’s claims that Wikileaks is an opponent of privacy, a sponsor of censorship and an opponent of democracy, all in a little over 1,000 words (1,069 by exact count): Wikileaks Isn’t Whistleblowing.

It’s a breathtaking piece of half-truths.

For example, playing for your sympathy, Tufekci invokes the need of dissidents for privacy. Even to the point of invoking the ghost of the former Soviet Union.

Tufekci overlooks, and hopes you do as well, that these emails weren’t from dissidents, but from people who traded in and on the whims and caprices at the pinnacles of American power.

Perhaps realizing that is too transparent a ploy, she recounts other data dumps by Wikileaks to which she objects. As lawyers say, if the facts are against you, pound on the table.

In an echo of Manjoo, did you know you are too dumb to distinguish critical information from trivial?

Tufekci writes:

These hacks also function as a form of censorship. Once, censorship worked by blocking crucial pieces of information. In this era of information overload, censorship works by drowning us in too much undifferentiated information, crippling our ability to focus. These dumps, combined with the news media’s obsession with campaign trivia and gossip, have resulted in whistle-drowning, rather than whistle-blowing: In a sea of so many whistles blowing so loud, we cannot hear a single one.

I don’t think you are that dumb.

Do you?

But who will save us? You can guess Tufekci’s answer, but here it is in full:

Journalism ethics have to transition from the time of information scarcity to the current realities of information glut and privacy invasion. For example, obsessively reporting on internal campaign discussions about strategy from the (long ago) primary, in the last month of a general election against a different opponent, is not responsible journalism. Out-of-context emails from WikiLeaks have fueled viral misinformation on social media. Journalists should focus on the few important revelations, but also help debunk false misinformation that is proliferating on social media.

If you weren’t frightened into agreement by the end of her parade of horrors:

We can’t shrug off these dangers just because these hackers have, so far, largely made relatively powerful people and groups their targets. Their true target is the health of our democracy.

So now Wikileaks is gunning for democracy?

You bet. 😉

Journalists of my youth (think Vietnam, Watergate) were aggressive critics of government and the powerful. The Panama Papers project is evidence that that level of journalism still exists.

Instead of whining about releases by Wikileaks and others, journalists* need to step up and provide context they see as lacking.

It would sure beat the hell out of repeating news releases from military commanders, “justice” department mouthpieces, and official but “unofficial” leaks from the American intelligence community.

* Like any generalization this is grossly unfair to the many journalists who work on behalf of the public every day but lack the megaphone of the government lapdog New York Times. To those journalists, and only them, do I apologize in advance for any offense given. The rest of you, take such offense as is appropriate.

How To Read: “War Goes Viral” (with caution, propaganda ahead)

Monday, October 17th, 2016


War Goes Viral – How social media is being weaponized across the world by Emerson T. Brooking and P. W. Singer.

One of the highlights of the post reads:

Perhaps the greatest danger in this dynamic is that, although information that goes viral holds unquestionable power, it bears no special claim to truth or accuracy. Homophily all but ensures that. A multi-university study of five years of Facebook activity, titled “The Spreading of Misinformation Online,” was recently published in Proceedings of the National Academy of Sciences. Its authors found that the likelihood of someone believing and sharing a story was determined by its coherence with their prior beliefs and the number of their friends who had already shared it—not any inherent quality of the story itself. Stories didn’t start new conversations so much as echo preexisting beliefs.

This extreme ideological segregation, the authors concluded, “comes at the expense of the quality of the information and leads to proliferation of biased narratives fomented by unsubstantiated rumors, mistrust, and paranoia.” As smartphone cameras and streaming video turn every bystander into a reporter (and everyone with an internet connection into an analyst), “truth” becomes a matter of emotional resonance.

Ooooh, “…’truth’ becomes a matter of emotional resonance.”

That is always true but give the authors their due, “War Goes Viral” is a masterful piece of propaganda to the contrary.

Calling something “propaganda,” or “media bias” is easy and commonplace.

Let’s do the hard part and illustrate why that is the case with “War Goes Viral.”

The tag line:

How social media is being weaponized across the world

preps us to think:

Someone or some group is weaponizing social media.

So before even starting the article proper, we are prepared to be on the look out for the “bad guys.”

The authors are happy to oblige with #AllEyesOnISIS, first paragraph, second sentence. “The self-styled Islamic State…” appears in the second paragraph and ISIS in the third paragraph. Not much doubt who the “bad guys” are at this point in the article.

Listing only each change of actor (“bad guys” marked in red in the original post), the article from start to finish names:

  • Islamic State
  • Russia
  • Venezuela
  • China
  • U.S. Army training to combat “bad guys”
  • Israel – neutral
  • Islamic State (Hussain)

The authors leave you with little doubt who they see as the “bad guys,” a one-sided view of propaganda and social media in particular.

For example, there is:

No mention of the Voice of America (VOA), perhaps one of the longest running, continuous disinformation campaigns in history.

No mention of Pentagon admits funding online propaganda war against Isis.

No mention of any number of similar projects and programs which weren’t constructed with an eye on “truth and accuracy” by the United States.

The treatment here is as one-sided as the “weaponized” social media of which the authors complain.

Not that the authors are lacking in skill. They piggyback their own slant onto The Spreading of Misinformation Online:

This extreme ideological segregation, the authors concluded, “comes at the expense of the quality of the information and leads to proliferation of biased narratives fomented by unsubstantiated rumors, mistrust, and paranoia.” As smartphone cameras and streaming video turn every bystander into a reporter (and everyone with an internet connection into an analyst), “truth” becomes a matter of emotional resonance.

How much of that is supported by The Spreading of Misinformation Online?

  • First sentence
  • Second sentence
  • Both sentences

The answer is:

This extreme ideological segregation, the authors concluded, “comes at the expense of the quality of the information and leads to proliferation of biased narratives fomented by unsubstantiated rumors, mistrust, and paranoia.”

The remainder of that paragraph was invented out of whole cloth by the authors and positioned with “truth” in quotes to piggyback on the legitimate academic work just quoted.

As smartphone cameras and streaming video turn every bystander into a reporter (and everyone with an internet connection into an analyst), “truth” becomes a matter of emotional resonance.

is popular cant among media and academic types, but no more than that.

Skilled reporting can put information in a broad context and weave a coherent narrative, but disparaging social media authors doesn’t make that any more likely.

“War Goes Viral” being a case in point.

How Do I Become A Censor?

Monday, June 13th, 2016

You read about censorship or efforts at censorship on a daily basis.

But none of those reports answers the burning question of the social media age: How Do I Become A Censor?

I mean, what’s the use of reading about other people censoring your speech if you aren’t free to censor theirs? Where’s the fun in that?

Andrew Golis has an answer for you in: Comments are usually garbage. We’re adding comments to This.!.

Three steps to becoming a censor:

  1. Build a social media site that accepts comments
  2. Declare highly subjective ass-hat rules
  3. Censor user comments

There being no third-party arbiters, you are now a censor! Feel the power surging through your fingers. Crush dangerous thoughts, memes or content with a single return. The safety and sanity of your users is now your responsibility.

Heady stuff. Yes?

If you think this is parody, check out the This. Community Guidelines for yourself:

With that in mind, This. is absolutely not a place for:

Violations of law. While this is expanded upon below, it should be clear that we will not tolerate any violations of law when using our site.

Hate speech, malicious speech, or material that’s harmful to marginalized groups. Overtly discriminating against an individual belonging to a minority group on the basis of race, ethnicity, national origin, religion, sex, gender, sexual orientation, age, disability status, or medical condition won’t be tolerated on the site. This holds true whether it’s in the form of a link you post, a comment you make in a conversation, a username or display name you create (no epithets or slurs), or an account you run.

Harassment; incitements to violence; or threats of mental, emotional, cyber, or physical harm to other members. There’s a line between civil disagreement and harassment. You cross that line by bullying, attacking, or posing a credible threat to members of the site. This happens when you go beyond criticism of their words or ideas and instead attack who they are. If you’ve got a vendetta against a certain member, do not police and criticize that member’s every move, post, or comment on a conversation. Absolutely don’t take this a step further and organize or encourage violence against this person, whether through doxxing, obtaining dirt, or spreading that person’s private information.

Violations of privacy. Respect the sanctity of our members’ personal information. Don’t con them – or the infrastructure of our site – to obtain, post, or disseminate any information that could threaten or harm our members. This includes, but isn’t limited to, credit card or debit card numbers; social or national security numbers; home addresses; personal, non-public email addresses or phone numbers; sexts; or any other identifying information that isn’t already publicly displayed with that person’s knowledge.

Sexually-explicit, NSFW, obscene, vulgar, or pornographic content. We’d like for This. to be a site that someone can comfortably scroll through in a public space – say a cafe, or library. We’re not a place for sexually-explicit or pornographic posts, comments, accounts, usernames, or display names. The internet is rife with spaces for you to find people who might share your passion for a certain Pornhub video, but This. isn’t the place to do that. When it comes to nudity, what we do allow on our site is informative or newsworthy – so, for example, if you’re sharing this article on Cameroon’s breast ironing tradition, that’s fair game. Or a good news or feature article about Debbie Does Dallas. But, artful as it may be, we won’t allow actual footage of Debbie Does Dallas on the site. (We understand that some spaces on the internet are shitty at judging what is and isn’t obscene when it comes to nudity, so if you think we’ve pulled your post off the site because we’re a bunch of unreasonable prudes, we’ll be happy to engage.)

Excessively violent content. Gore, mutilation, bestiality, necrophilia? No thanks! There’s a distinction between a potentially upsetting image that’s worth consuming (think of some of the best war photography) and something you’d find in a snuff film. It’s not always an easy distinction to make – real life is pretty brutal, and some of the images we probably need to see are the hardest to stomach – but we also don’t want to create an overwhelmingly negative experience for anyone who visits the site and happens to stumble upon a painful image.

Promotion of self-harm, eating disorders, alcohol or drug abuse, or similar forms of destructive behavior. The internet is, sadly, also rife with spaces where people get off on encouraging others to hurt themselves. If you’d like to do that, get off our site and certainly seek help.

Username squatting. Dovetailing with that, we reserve the right to take back a username that is not being actively used and give it to someone else who’d like it – especially if it’s, say, an esteemed publication, organization, or person. We’re also firmly against attempts to buy or sell stuff in exchange for usernames.

Use of the This. brand, trademark, or logo without consent. You also cannot use the This. name or anything associated with the brand without our consent – unless, of course, it’s a news item. That means no creating accounts, usernames, or display names that use our brand.

Spam. Populating the site with spammy accounts is antithetical to our mission – being the place to find the absolute best in media. If you’ve created accounts that are transparently selling, say, “installation help for Macbooks” or some other suspicious form of tech support, or advertising your “viral video” about Justin Bieber that’s got a suspiciously low number of views, you don’t belong on our site. That contradicts why we exist as a platform – to give members a noise-free experience they can’t find elsewhere on the web.

Impersonation of others. Dovetailing with that – though we’d all like to be The New York Times or Diana Ross, don’t pretend to be them. Don’t create an identity on the site in the likeness of a company or person who isn’t you. If you decide, for some reason, to create a parody account of a public figure or organization – though we can think of better sites to do that on, frankly – make sure you make that as clear as possible in your display name, avatar, and bio.

Infringement of copyright or intellectual property rights. Don’t post copyrighted works without the permission of its original owner or creator. This extends, for example, to copying and pasting a copyrighted set of words into a comment and passing it off as your own without credit. If you think someone has unlawfully violated your own copyright, please follow the DMCA procedures set forth in our Terms of Service.

Mass or automated registration and following. We’ve worked hard to build the site’s infrastructure. If you manipulate that in any way to game your follow count or register multiple spam accounts, we’ll have to terminate your account.

Exploits, phishing, resource abuse, or fraudulent content. Do not scam our members into giving you money, or mislead our members through misrepresenting a link to, say, a virus.

Exploitation of minors. Do not post any material regarding minors that’s sexually explicit, violent, or harmful to their safety. Don’t solicit or request their private or personally identifiable information. Leave them alone.

So how do we take punitive action against anyone who violates these? Depends on the severity of the offense. If you’re a member with a good track record who seems to have slipped up, we’ll shoot you an email telling you why your content was removed. If you’ve shared, written, or done something flagrantly and recklessly violating one of these rules, we’ll ban you from the site through deleting your account and all that’s associated with it. And if we feel it’s necessary or otherwise believe it is required, we will work with law enforcement to handle any risk to one of our members, the This. community in general, or to public safety.

To put it plainly – if you’re an asshole, we’ll kick you off the site.

Let’s make that a little more concrete.

I want to say: “Former Vice-President Dick Cheney should be tortured for a minimum of thirty (30) years and be kept alive for that purpose, as a penalty for his war crimes.”

I can’t say that on This. because:

  • “incitement to violence” If torture is ok, then so is other violence.
  • “harmful to marginalized group” If you think of sociopaths as a marginalized group.
  • “harassment” Cheney is a victim too. He didn’t start life as a moral leper.
  • “excessively violent content” Assume I illustrate the torture Cheney should suffer.

Rules broken vary by the specific content of my speech.

Remind me to pass this advice along to: Jonathan “I Want To Be A Twitter Censor” Weisman. All he needs to do is build a competitor to Twitter and he can censor to his heart’s delight.

The build-your-own-platform advice isn’t just my opinion. This. confirms it:

If you don’t like these rules, feel free to create your own platform! There are a lot of awesome, simple ways to do that. That’s what’s so lovely about the internet. Launches First Report (PDF)

Thursday, March 31st, 2016 Launches First Report (PDF).

 is pleased to share our first report “Unfriending Censorship: Insights from four months of crowdsourced data on social media censorship.” The report draws on data gathered directly from users between November 2015 and March 2016.

We asked users to send us reports when they had their content or accounts taken down on six social media platforms: Facebook, Flickr, Google+, Instagram, Twitter, and YouTube. We have aggregated and analyzed the collected data across geography, platform, content type, and issue areas to highlight trends in social media censorship. All the information presented here is anonymized, with the exception of case study examples we obtained with prior approval by the user.

Here are some of the highlights:

  • This report covers 161 submissions from 26 countries, regarding content in eleven languages.
  • Facebook was the most frequently reported platform, and account suspensions were the most reported content type.
  • Nudity and false identity were the most frequent reasons given to users for the removal of their content.
  • Appeals seem to present a particular challenge. A majority of users (53%) did not appeal the takedown of their content, 50% of whom said they didn’t know how and 41.9% of whom said they didn’t expect a response. In only four cases was content restored, while in 50 the user didn’t get a response.
  • We received widespread reports that flagging is being used for censorship: 61.6% believed this was the cause of the content takedown.
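The nested percentages in the appeals bullet can be combined into overall shares of all respondents. A small sketch; the compound figures below are my own arithmetic, not numbers stated in the report:

```python
# Combining the report's nested appeal percentages into overall shares.
# Compound results are my own arithmetic, not figures from the report.
did_not_appeal = 0.53         # share of all respondents who did not appeal
did_not_know_how = 0.50       # of those, share who didn't know how
expected_no_response = 0.419  # of those, share who expected no response

overall_no_know = did_not_appeal * did_not_know_how
overall_no_hope = did_not_appeal * expected_no_response

print(f"{overall_no_know:.1%} of all respondents didn't know how to appeal")  # 26.5%
print(f"{overall_no_hope:.1%} of all respondents expected no response")       # 22.2%
```

In other words, roughly a quarter of all respondents never appealed simply because they didn’t know how, which underlines the appeals problem the report highlights.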

While we introduced some measures to help us verify reports (such as giving respondents the opportunity to send us screenshots that support their claims), we did not work with the companies to obtain this data and thus cannot claim it is representative of all content takedowns or user experiences. Instead, it shows how a subset of the millions of social media users feel about how their content takedowns were handled, and the impact it has had on their lives.

The full report is available for download and distribution under Creative Commons licensing.

As the report itself notes, 161 reports across 6 social media platforms in 4 months isn’t a representative sample of censorship on social media.

Twitter alone brags about closing 125,000 ISIS accounts since mid-2015 (report dated 5 February 2016).

Closing ISIS accounts is clearly censorship of political speech, whatever hand waving and verbal gymnastics Twitter wants to employ to justify its practices. Including terms of service.

Censorship, on whatever basis, by whoever practiced, by whatever mechanism (including appeals), will always step on legitimate speech of some speakers.

For web content, the non-viewing of content has one and only one legitimate locus of control: the user’s browser.

Browsers and/or web interfaces for Twitter, Facebook, etc., should enable users to block users, content by keywords, or even classifications offered by social media services.


All need for collaboration with governments, decisions about what content to censor, appeal processes, etc., suddenly disappears.

Enabling users to choose the content that will be displayed in their browsers empowers listeners as well as speakers, with prejudice towards none.
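A browser-side filter of this kind is straightforward to sketch. Here is a minimal Python toy; the post fields, names, and blocking rules are all illustrative, not any platform’s actual API:

```python
# Sketch of user-controlled, client-side filtering -- the kind of blocking
# a browser or social media front end could hand entirely to the user.
def should_display(post, blocked_users, blocked_keywords):
    """Apply the user's own rules; return True if the post may be shown."""
    if post["author"] in blocked_users:
        return False
    text = post["text"].lower()
    return not any(kw.lower() in text for kw in blocked_keywords)

posts = [
    {"author": "alice", "text": "Cat pictures all day"},
    {"author": "spammer", "text": "Buy now!!!"},
    {"author": "bob", "text": "Hot take on politics"},
]

visible = [
    p for p in posts
    if should_display(p, blocked_users={"spammer"}, blocked_keywords={"politics"})
]
print([p["author"] for p in visible])  # ['alice']
```

The point of the sketch is where the rules live: in the user’s hands, not in a centralized takedown process.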


Jihadist Wannabes: You Too Can Be A Western Lackey

Wednesday, March 30th, 2016

Eric Geller’s piece Why ISIS is winning the online propaganda war is far too long to read but has several telling insights for jihadist wannabes.

First, perhaps without fully realizing it, Geller points out that the appeal of ISIS is based on facts, not messages:

Young Muslims who feel torn between what can seem like two different worlds, who long for structure and meaning in their lives, are ISIS’s best targets. They seek a coherent picture of the world—and ISIS is ready to offer one. Imagine being 19 years old, living in a major American city, and not understanding how a terrorist attack in Paris can change the way your fellow subway passengers look at you. If that prejudice or bigotry mystified you, you might gravitate toward someone offering an explanation that felt like it fit with your experiences. You might start watching YouTube videos about the supposedly irreconcilable differences between the West and the Islamic world. ISIS shapes its content to appeal to this person and others who lack a framework for understanding world events and are willing to embrace a radical one.

The other psychological factor that ISIS exploits is the natural desire for purpose. ISIS is a bonafide regional power, and to people who already feel out of place in Western society and crave a sense of direction, joining ISIS offers that purpose, that significance. They can become part of something bigger than themselves. They can fight for a cause. ISIS’s messages don’t just offer a framework for understanding the world; they also offer the chance to help shape it. These messages “make people feel like they matter in the world,” Beutel said, by promising “a sense of honor and self-esteem, and the ability to actively live out those desires.”

There are also more pragmatic promises, tailored to people who are not only spiritually aimless but economically frustrated and emotionally unfulfilled. Liang described this part of the appeal as, “Come and you will have a real life. You will have a salary. You will have a job. You will have a wife. You will have a house.”

“This is appealing to people who have, really, no future,” she said.

I can see how ISIS would be appealing to:

…people who have, really, no future…

Noting that the “no future,” is a fact, not idle speculation. All Muslim youth do and will continue to face discrimination, especially in the aftermath of terrorist attacks.

The West and Islamic worlds are irreconcilable only to the extent leaders in both worlds profit from that view. Sane members of those and other traditions relish and welcome such diversity. The Islamic world has a better record of toleration of diversity than one can claim for any Western power.

Second, Geller illustrates how the focus on message is at odds with changing the reality for Muslim youth:

If the U.S. and its allies want to dissuade would-be jihadists from joining ISIS, they need to start from square one. “We need a compelling story that makes our story better than theirs,” Liang said. “And so far their story is trumping ours.”

The anti-extremist story can’t just be a paean to human rights and liberal democratic values. It must provide clear promises about what the Middle East will look like if ISIS is defeated. “What are we going to do if we take back the land that [ISIS] is inhabiting at the moment?” Liang said. “What government are we going to set up, and how legitimate will it be? If you look at, right now, the Iraqi state, it’s extremely corrupt, and it has to prove that it will be the better alternative.”

Part of the challenge that counter-narrative designers face is that the anti-extremist story can’t just be a sweeping theoretical message. It has to be pragmatic, full of real promises. But no one has a clear idea of how to do this. “To be totally honest, we haven’t cracked that nut yet,” the former senior administration official said. “Maybe it is liberal values and a democratic order and human rights and democratic values. I would hope that that would be the case. But I don’t think that there’s evidence yet that that would be equally compelling as a narrative or a set of values.

“Everyone agrees [that] we can’t just counter-message,” the official added. “We have to promote alternative messages. But nobody understands or agrees or has the answer in terms of what are the alternate courses of action or pathways that one could offer.”

How’s that for a bizarre story line? There is no effort to change the reality as experienced by Muslim youth, but they should just suck it up and not join ISIS?

One of the problems with “messaging” is that the West wants to dictate who will deliver the message and to control what else they may say.

Not to mention that being discovered to be a Western lackey damages the credibility of anti-jihadists.

I’m not sure who edited Geller’s piece but there is this gem:

While the big-picture thinkers devise a story, others should focus on a bevy of vital changes to how counter-narratives are produced and distributed. For one thing, the content is too grim. Instead of going dark, Beutel said, go light: Offer would-be jihadists hope. Humanize ISIS’s foot soldiers instead of demonizing them, so that your intended audience understands that you care about their fate and not just taking them off the battlefield. “When you have people who are espousing incredibly hateful worldviews, the tendency is to want to demonize them—to want to shut them out [in order] to isolate them,” Beutel said. “More often than not, that actually repulses people rather than [getting] them to open up.” (emphasis added)

Gee, “espousing incredibly hateful worldviews,” I don’t think of ISIS first in that regard. Do you? There’s a list of governments and leaders that I would put way ahead of ISIS.

Maybe, just maybe, urging people to not join groups you are trying to destroy that are resisting corrupt Western toady governments just isn’t persuasive?

Have you stopped corrupting Muslim governments? Have you stopped supporting governments that oppress Muslims? Have you stopped playing favorites between Muslim factions? Have you taken any steps to promote a safe and diverse environment for Muslims in your society?

Or the overall question: Have you made a positive difference in the day to day lives of Muslims? (from their point of view, not yours)

Messaging not based on having done (not promised, accomplished) those and other things is an invitation to be a Western lackey. Who wants that?

All the attribution of a high level of skill to ISIS messaging is merely a reflection of the tone-deafness of West dictated messaging. Strict hierarchical control over both messages and speakers, using messages that appeal to the sender and not the receiver, valuing message over reality, are only some of the flaws in Western anti-jihadist programs.

The one possible redeeming point is the use of former jihadists. ISIS, being composed of people, is subject to the same failings as governments/groups/movements everywhere. I’m not sure how any government could claim to be superior to ISIS in that regard.

BTW, I’m not an ISIS “cheerleader” as Geller put it. I have serious disagreement with ISIS on a number of issues, social policies and targeting being prominent ones. I do agree on the need to fight against corrupt, Western-dictated Muslim governments. Contrary to current US foreign policy.

NCSU Offers Social Media Archives Toolkit for Libraries [Defeating Censors]

Sunday, February 28th, 2016

NCSU Offers Social Media Archives Toolkit for Libraries by Matt Enis.

From the post:

North Carolina State University (NCSU) Libraries recently debuted a free, web-based social media archives toolkit designed to help cultural heritage organizations develop social media collection strategies, gain knowledge of ways in which peer institutions are collecting similar content, understand current and potential uses of social media content by researchers, assess the legal and ethical implications of archiving this content, and develop techniques for enriching collections of social media content at minimal cost. Tools for building and enriching collections include NCSU’s Social Media Combine—which pre-assembles the open source Social Feed Manager, developed at George Washington University for Twitter data harvesting, and NCSU’s own open source Lentil program for Instagram—into a single package that can be deployed on Windows, OSX, and Linux computers.

“By harvesting social media data (such as Tweets and Instagram photos), based on tags, accounts, or locations, researchers and cultural heritage professionals are able to develop accurate historical assessments and democratize access to archival contributors, who would otherwise never be represented in the historical record,” NCSU explained in an announcement.

“A lot of activity that used to take place as paper correspondence is now taking place on social media—the establishment of academic and artistic communities, political organizing, activism, awareness raising, personal and professional interactions,” Jason Casden, interim associate head of digital library initiatives, told LJ. Historians and researchers will want to have access to this correspondence, but unlike traditional letters, this content is extremely ephemeral and can’t be collected retroactively like traditional paper-based collections.

“So we collect proactively—as these events are happening or shortly after,” Casden explained.
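Proactive, tag-based harvesting of the kind Casden describes can be sketched in a few lines. This is an illustrative toy with hypothetical record fields, not the actual Social Feed Manager or Lentil interfaces:

```python
import json

# Toy sketch of proactive harvesting: watch a stream of posts and append
# any that match a watched tag to a JSON-lines archive, as the events
# happen rather than retroactively.
def harvest(stream, tags, archive_path):
    """Append posts matching any watched tag to the archive; return count kept."""
    watched = {t.lower() for t in tags}
    kept = 0
    with open(archive_path, "a", encoding="utf-8") as archive:
        for post in stream:
            if {t.lower() for t in post.get("tags", [])} & watched:
                archive.write(json.dumps(post) + "\n")
                kept += 1
    return kept

stream = [
    {"id": 1, "text": "march downtown", "tags": ["Protest", "local"]},
    {"id": 2, "text": "lunch", "tags": ["food"]},
]
kept = harvest(stream, ["protest"], "social_archive.jsonl")
print(kept)  # 1
```

The real tools add the hard parts (authenticated API access, rate limiting, deduplication), but the proactive collect-as-it-happens loop is the core idea.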

I saw this too late today to install but I’m sure I will be posting about it later this week!

Do you see the potential of such tooling for defeating would-be censors of Twitter and other social media?

More on that later this week as well.

The Social-Network Illusion That Tricks Your Mind – (Terrorism As Majority Illusion)

Friday, December 25th, 2015

The Social-Network Illusion That Tricks Your Mind

From the post:

One of the curious things about social networks is the way that some messages, pictures, or ideas can spread like wildfire while others that seem just as catchy or interesting barely register at all. The content itself cannot be the source of this difference. Instead, there must be some property of the network that changes to allow some ideas to spread but not others.

Today, we get an insight into why this happens thanks to the work of Kristina Lerman and pals at the University of Southern California. These people have discovered an extraordinary illusion associated with social networks which can play tricks on the mind and explain everything from why some ideas become popular quickly to how risky or antisocial behavior can spread so easily.

Network scientists have known about the paradoxical nature of social networks for some time. The most famous example is the friendship paradox: on average your friends will have more friends than you do.

This comes about because the distribution of friends on social networks follows a power law. So while most people will have a small number of friends, a few individuals have huge numbers of friends. And these people skew the average.

Here’s an analogy. If you measure the height of all your male friends, you’ll find that the average is about 170 centimeters. If you are male, on average, your friends will be about the same height as you are. Indeed, the mathematical notion of “average” is a good way to capture the nature of this data.

But imagine that one of your friends was much taller than you—say, one kilometer or 10 kilometers tall. This person would dramatically skew the average, which would make your friends taller than you, on average. In this case, the “average” is a poor way to capture this data set.
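The friendship paradox itself is easy to verify on a toy network. A minimal Python sketch (the network below is invented for illustration; one hub node stands in for the power-law skew):

```python
# Demonstrate the friendship paradox on a tiny synthetic network.
# Node 0 is a "hub" friends with everyone; the others know only the hub
# and perhaps one neighbor -- a crude stand-in for a power-law degree
# distribution.
friends = {
    0: [1, 2, 3, 4, 5],
    1: [0, 2],
    2: [0, 1],
    3: [0, 4],
    4: [0, 3],
    5: [0],
}

degree = {node: len(nbrs) for node, nbrs in friends.items()}

# Average number of friends, taken over people.
avg_degree = sum(degree.values()) / len(degree)

# Average number of friends, taken over friends of people: a high-degree
# node is counted once per friendship, which skews this mean upward.
friend_degrees = [degree[f] for nbrs in friends.values() for f in nbrs]
avg_friend_degree = sum(friend_degrees) / len(friend_degrees)

print(avg_degree)         # ~2.33 friends per person
print(avg_friend_degree)  # 3.0 friends per friend
assert avg_friend_degree > avg_degree  # the friendship paradox
```

On average, your friends really do have more friends than you do, purely as an artifact of who gets counted how often.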

If that has you interested, see:

The Majority Illusion in Social Networks by Kristina Lerman, Xiaoran Yan, Xin-Zeng Wu.


Social behaviors are often contagious, spreading through a population as individuals imitate the decisions and choices of others. A variety of global phenomena, from innovation adoption to the emergence of social norms and political movements, arise as a result of people following a simple local rule, such as copy what others are doing. However, individuals often lack global knowledge of the behaviors of others and must estimate them from the observations of their friends’ behaviors. In some cases, the structure of the underlying social network can dramatically skew an individual’s local observations, making a behavior appear far more common locally than it is globally. We trace the origins of this phenomenon, which we call “the majority illusion,” to the friendship paradox in social networks. As a result of this paradox, a behavior that is globally rare may be systematically overrepresented in the local neighborhoods of many people, i.e., among their friends. Thus, the “majority illusion” may facilitate the spread of social contagions in networks and also explain why systematic biases in social perceptions, for example, of risky behavior, arise. Using synthetic and real-world networks, we explore how the “majority illusion” depends on network structure and develop a statistical model to calculate its magnitude in a network.
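The core of the “majority illusion” can likewise be demonstrated on a toy star network, where a behavior held by one node in five looks locally universal. The network and majority threshold here are illustrative, not the paper’s model:

```python
# Toy demonstration of the "majority illusion": a behavior that is
# globally rare can look locally common when it sits on high-degree nodes.
friends = {
    0: [1, 2, 3, 4],            # hub
    1: [0], 2: [0], 3: [0], 4: [0],
}
active = {0}  # only the hub exhibits the behavior: 1 of 5 nodes globally

def sees_majority_active(node):
    """Does this node observe the behavior in a majority of its friends?"""
    nbrs = friends[node]
    return sum(n in active for n in nbrs) > len(nbrs) / 2

fooled = [n for n in friends if sees_majority_active(n)]
print(len(active) / len(friends))  # globally: 0.2
print(fooled)                      # nodes 1-4 each see 100% active friends
```

Four of five nodes observe an “active majority” even though only 20% of the network is active, which is the illusion in miniature.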

Research has not reached the stage of enabling the manipulation of public opinion to reflect the true rarity of terrorist activity in the West.

That being the case, being factually correct that Western fear of terrorism is a majority illusion isn’t as profitable as tying products to that illusion.

Are you a debunker or fact-checker? (take the survey, it’s important)

Tuesday, December 1st, 2015

Are you a debunker or fact-checker? Take this survey to help us understand the issue by Craig Silverman.

From the post:

Major news events like the Paris attacks quickly give birth to false rumors, hoaxes and viral misinformation. But there is a small, growing number of people and organizations who seek to quickly identify, verify or debunk and spread the truth about such misinformation that arises during major news events. As much as possible, they want to stop a false rumor before it spreads.

These real-time debunkers, some of which First Draft has profiled recently, face many challenges. But by sharing such challenges and possible solutions, it is possible to find collective answers to the problem of false information.

The survey below intends to gather tips and tactics from those doing this work in an attempt to identify best practices to be shared.

If you are engaged in this type of work — or have experimented with it in the past — we ask that you please take a moment to complete this quick survey and share your advice.

I don’t know if it is possible for debunking or fact-checking to run faster than rumor and falsehood but that should not keep us from trying.

Definitely take the survey!


BEOMAPS: Ad-hoc topic maps for enhanced search in social network data

Thursday, November 5th, 2015

BEOMAPS: Ad-hoc topic maps for enhanced search in social network data by Peter Dolog, Martin Leginus, and ChengXiang Zhai.

From the webpage:

The aim of this project is to develop a novel system – a proof of concept that will enable more effective search, exploration, analysis and browsing of social media data. The main novelty of the system is an ad-hoc multi-dimensional topic map. The ad-hoc topic map can be generated and visualized according to multiple predefined dimensions e.g., recency, relevance, popularity or location based dimension. These dimensions will provide a better means for enhanced browsing, understanding and navigating to related relevant topics from underlying social media data. The ad-hoc aspect of the topic map allows user-guided exploration and browsing of the underlying social media topics space. It enables the user to explore and navigate the topic space through user-chosen dimensions and ad-hoc user-defined queries. Similarly, as in standard search engines, we consider the possibility of freely defined ad-hoc queries to generate a topic map as a possible paradigm for social media data exploration, navigation and browsing. An additional benefit of the novel system is an enhanced query expansion to allow users narrow their difficult queries with the terms suggested by an ad-hoc multi-dimensional topic map. Further, ad-hoc topic maps enable the exploration and analysis of relations between individual topics, which might lead to serendipitous discoveries.

This looks very cool and accords with some recent thinking I have been doing on waterfall versus agile authoring of topic maps.

The conference paper on this project is lodged behind a paywall at:

Beomap: Ad Hoc Topic Maps for Enhanced Exploration of Social Media Data, with this abstract:

Social media is ubiquitous. There is a need for intelligent retrieval interfaces that will enable a better understanding, exploration and browsing of social media data. A novel two dimensional ad hoc topic map is proposed (called Beomap). The main novelty of Beomap is that it allows a user to define an ad hoc semantic dimension with a keyword query when visualizing topics in text data. This not only helps to impose more meaningful spatial dimensions for visualization, but also allows users to steer browsing and exploration of the topic map through ad hoc defined queries. We developed a system to implement Beomap for exploring Twitter data, and evaluated the proposed Beomap in two ways, including an offline simulation and a user study. Results of both evaluation strategies show that the new Beomap interface is better than a standard interactive interface.
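The Beomap idea of one fixed dimension plus one ad-hoc, query-defined dimension can be sketched with toy scoring functions. The word-overlap relevance and recency measures below are stand-ins of my own, not the paper’s actual model:

```python
# Sketch of the Beomap idea: place posts in two dimensions, one fixed
# (recency) and one defined ad hoc by a user's keyword query.
def overlap_score(text, query):
    """Fraction of query words appearing in the text (toy relevance)."""
    words = set(text.lower().split())
    q = set(query.lower().split())
    return len(words & q) / len(q)

def beomap_points(posts, query, now):
    """Map each post to (ad-hoc relevance, recency, id) coordinates."""
    return [
        (overlap_score(p["text"], query), 1 / (1 + now - p["time"]), p["id"])
        for p in posts
    ]

posts = [
    {"id": "a", "text": "election fraud rumor", "time": 9},
    {"id": "b", "text": "cat videos", "time": 10},
]
points = beomap_points(posts, "election rumor", now=10)
print(points)  # [(1.0, 0.5, 'a'), (0.0, 1.0, 'b')]
```

Changing the query re-derives one axis of the map on the fly, which is the “ad hoc” part: the user steers the visualization rather than accepting fixed dimensions.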

It has attracted 224 downloads as of today so I would say it is a popular chapter on topic maps.

I have contacted the authors in an attempt to locate a copy that isn’t behind a paywall.


#OpKKK Engaged: … [Dangers of skimming news]

Monday, November 2nd, 2015

#OpKKK Engaged: Anonymous Begins Exposing Politicians with Ties to the KKK by Matt Agorist.

From the post:

Over the last week, the hacktivist group known as Anonymous has been saying that they will expose some 1,000 United States politicians who have ties to the KKK.

On Monday morning, just past midnight, the hackers made good on their “threat.”

In a video posted to the We are Anonymous Facebook page Monday, the group began to release the names of those they claim have ties to the KKK.

In the video, Anonymous states that they are not going to release the home addresses of the politicians in fear of violent retaliation against the accused racists. But the group did release the politicians’ full name, the municipalities in which they work, and the addresses of their political offices.

While the video doesn’t come close to the promised 1,000 names, Anonymous did release a partial list.

It is rumored that Anonymous plans to release all the names on the 5th of November, also known as Guy Fawkes Night. Anonymous is synonymous with Guy Fawkes as the popular mask is used to shield the faces of its members during their broadcasts and protests.

I must admit I had my own doubts about #OpKKK when I heard the number “1,000.”

That illustrates the danger of half-listening or skimming a news feed.

The number was surely higher than “1,000.”

Turns out that #OpKKK means a direct connection to the KKK, not just support for its policies. That was my mistake. If merely supporting KKK objectives counted, the number would be far higher.

Take that as a lesson to read carefully when going through news stories.

Failure to release addresses

The failure to release addresses of those named, “so nobody gets it in their mind to take out their own justice against them” is highly questionable.

Who does Anonymous think is going to administer “justice?”

With 1,000 political leaders having direct ties to the KKK, who in the government will call them to account?

The position of Anonymous reminds me of Wikileaks, the Guardian and the New York Times failing to release all of the Snowden documents and other government document dumps in full.

“Oh, oh, we are afraid to endanger lives.” Really? The government from whom those documents were liberated has not hesitated to murder, torture and blight the lives of hundreds of thousands, if not millions.

Rather than giving the people legitimate targets for at least some of those mis-deeds, you wring your hands and worry about possible collateral damage?

Sorry, all social change involves collateral damage to innocents. Why do you think it doesn’t? The current systematic and structural racism exacts a heavy toll on innocents every day in the United States. Is that more acceptable because you don’t have blood on your hands from it? (Or don’t think you do.)

The term for that is “moral cowardice.”

Is annoying the status quo, feeling righteous, associating with other “righteous” folk, all there is to your view of social change?

If it is, welcome to the status quo from here to eternity.

Tracie Powell: “We’re supposed to challenge power…

Sunday, October 18th, 2015

Tracie Powell: “We’re supposed to challenge power…it seems like we’ve abdicated that to social media” by Laura Hazard Owen.

From the post:

Tracie Powell tries not to use the word “diversity” anymore.

“When you talk about diversity, people’s eyes glaze over,” Powell, the founder of All Digitocracy, told me. The site covers tech, policy, and the impact of media on communities that Powell describes as “emerging audiences” — people of color and of different sexual orientations and gender identities.

I first heard Powell speak at the LION conference for hyperlocal publishers in Chicago earlier this month, where she stood in front of the almost entirely white audience to discuss how journalists and news organizations can get better at reporting for more people.

I followed up with Powell, who is currently a John S. Knight Journalism Fellow at Stanford, to hear more. “If we [as journalists] don’t do a better job at engaging with these audiences, we’re dead,” Powell said. “Our survival depends on reaching these emerging audiences.”

Here’s a lightly condensed and edited version of our conversation.

Warning: Challenging power is far more risky than supporting fiery denunciations of the most vulnerable and least powerful in society.

From women facing hard choices about pregnancy, rape victims, survivors of abuse both physical and emotional, or those who have lived with doubt, discrimination and deprivation as day to day realities, victims of power aren’t hard to find.

One of the powers that needs to be challenged is the news media itself. Take for example the near constant emphasis on gun violence and mass shootings. If you were to take the news media at face value, you would be frightened to go outside.

But, a 2013 Pew Center Report, Gun Homicide Rate Down 49% Since 1993 Peak; Public Unaware, tells a different tale:


Not as satisfying as taking down a representative or senator but in the long run, influencing the mass media may be a more reliable path to challenging power.

Images for Social Media

Friday, August 21st, 2015

23 Tools and Resources to Create Images for Social Media

From the post:

Through experimentation and iteration, we’ve found that including images when sharing to social media increases engagement across the board — more clicks, reshares, replies, and favorites.

Using images in social media posts is well worth trying with your profiles.

As a small business owner or a one-man marketing team, is this something you can pull off by yourself?

At Buffer, we create all the images for our blogposts and social media sharing without any outside design help. We rely on a handful of amazing tools and resources to get the job done, and I’ll be happy to share with you the ones we use and the extras that we’ve found helpful or interesting.

If you tend to scroll down numbered lists (like I do), you will be left thinking the creators of the post don’t know how to count:

Fifteen (15), the end of the numbered list, isn’t 23.

If you look closely, there are several lists of unnumbered resources. So, you’re thinking that they do know how to count, but some of the items are unnumbered.

The total should then be 23, but it’s not. There are thirteen (13) unnumbered items, which, added to fifteen (15), makes twenty-eight (28).

So, I suspect the title should read: 28 Tools and Resources to Create Images for Social Media.

In any event, it’s a fair collection of tools that, with some effort on your part, can increase your social media presence.