Archive for the ‘Social Media’ Category

Dissing Facebook’s Reality Hole and Impliedly Censoring Yours

Sunday, April 23rd, 2017

Climbing Out Of Facebook’s Reality Hole by Mat Honan.

From the post:

The proliferation of fake news and filter bubbles across the platforms meant to connect us have instead divided us into tribes, skilled in the arts of abuse and harassment. Tools meant for showing the world as it happens have been harnessed to broadcast murders, rapes, suicides, and even torture. Even physics have betrayed us! For the first time in a generation, there is talk that the United States could descend into a nuclear war. And in Silicon Valley, the zeitgeist is one of melancholy, frustration, and even regret — except for Mark Zuckerberg, who appears to be in an absolutely great mood.

The Facebook CEO took the stage at the company’s annual F8 developers conference a little more than an hour after news broke that the so-called Facebook Killer had killed himself. But if you were expecting a somber mood, it wasn’t happening. Instead, he kicked off his keynote with a series of jokes.

It was a stark disconnect with the reality outside, where the story of the hour concerned a man who had used Facebook to publicize a murder, and threaten many more. People used to talk about Steve Jobs and Apple’s reality distortion field. But Facebook, it sometimes feels, exists in a reality hole. The company doesn’t distort reality — but it often seems to lack the ability to recognize it.

I can’t say I’m fond of Facebook’s reality hole, but unlike Honan:


It can make it harder to use its platforms to harass others, or to spread disinformation, or to glorify acts of violence and destruction.

I have no desire to censor any of the content that anyone cares to make and/or view on it. Bar none.

The “default” reality settings desired by Honan and others are a thumb on the scale for some cause they prefer over others.

They are entitled to their preference, but I object to their setting the range of preferences enjoyed by others.

You?

Mastodon (Tor Access Recommended)

Wednesday, April 5th, 2017

Mastodon

From the homepage:

Mastodon is a free, open-source social network. A decentralized alternative to commercial platforms, it avoids the risks of a single company monopolizing your communication. Pick a server that you trust — whichever you choose, you can interact with everyone else. Anyone can run their own Mastodon instance and participate in the social network seamlessly.

What sets Mastodon apart:

  • Timelines are chronological
  • Public timelines
  • 500 characters per post
  • GIFV sets and short videos
  • Granular, per-post privacy settings
  • Rich block and muting tools
  • Ethical design: no ads, no tracking
  • Open API for apps and services

… (emphasis in original)

There is no regex for filtering posts, but it does have:

  • Block notifications from non-followers
  • Block notifications from people you don’t follow

One or both should cover most of the harassment cases.
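Until server-side regex filtering arrives, the effect is easy to approximate client-side. Here is a minimal sketch, assuming posts arrive as dicts with a `content` field (the field name and structure are illustrative, not Mastodon’s actual API schema):

```python
import re

def filter_posts(posts, patterns):
    """Drop any post whose content matches one of the regex patterns."""
    compiled = [re.compile(p, re.IGNORECASE) for p in patterns]
    return [post for post in posts
            if not any(rx.search(post["content"]) for rx in compiled)]

timeline = [
    {"content": "Mastodon is a free, open-source social network."},
    {"content": "BUY CHEAP PILLS NOW!!!"},
    {"content": "Pick a server that you trust."},
]
clean = filter_posts(timeline, [r"buy cheap", r"pills?\s+now"])
```

The same approach works on any timeline you can pull down as structured data; the filtering happens entirely on your machine, so no server operator has to approve your patterns.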

I was surprised by the “Pick a server that you trust…” suggestion.

Really? A remote server being run by someone unknown to me? Bad enough that I have to “trust” my ISP, to a degree, but an unknown?

You really need a Tor-based email account, and you should use Tor to access Mastodon. Seriously.

Creating A Social Media ‘Botnet’ To Skew A Debate

Friday, March 10th, 2017

New Research Shows How Common Core Critics Built Social Media ‘Botnets’ to Skew the Education Debate by Kevin Mahnken.

From the post:

Anyone following education news on Twitter between 2013 and 2016 would have been hard-pressed to ignore the gradual curdling of Americans’ attitudes toward the Common Core State Standards. Once seen as an innocuous effort to lift performance in classrooms, they slowly came to be denounced as “Dirty Commie agenda trash” and a “Liberal/Islam indoctrination curriculum.”

After years of social media attacks, the damage is impressive to behold: In 2013, 83 percent of respondents in Education Next’s annual poll of Americans’ education attitudes felt favorably about the Common Core, including 82 percent of Republicans. But by the summer of 2016, support had eroded, with those numbers measuring only 50 percent and 39 percent, respectively. The uproar reached such heights, and so quickly, that it seemed to reflect a spontaneous populist rebellion against the most visible education reform in a decade.

Not so, say researchers with the University of Pennsylvania’s Consortium for Policy Research in Education. Last week, they released the #commoncore project, a study that suggests that public animosity toward Common Core was manipulated — and exaggerated — by organized online communities using cutting-edge social media strategies.

As the project’s authors write, the effect of these strategies was “the illusion of a vociferous Twitter conversation waged by a spontaneous mass of disconnected peers, whereas in actuality the peers are the unified proxy voice of a single viewpoint.”

Translation: A small circle of Common Core critics were able to create and then conduct their own echo chambers, skewing the Twitter debate in the process.

The most successful of these coordinated campaigns originated with the Patriot Journalist Network, a for-profit group that can be tied to almost one-quarter of all Twitter activity around the issue; on certain days, its PJNET hashtag has appeared in 69 percent of Common Core–related tweets.

The team of authors tracked nearly a million tweets sent during four half-year spans between September 2013 and April 2016, studying both how the online conversation about the standards grew (more than 50 percent between the first phase, September 2013 through February 2014, and the third, May 2015 through October 2015) and how its interlocutors changed over time.
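The headline numbers in the study are ratios of this kind: the share of tweets in a period that carry a given hashtag. A toy sketch of that measurement, using made-up tweets rather than the study’s actual data:

```python
from collections import Counter

def hashtag_share(tweets, tag):
    """Fraction of tweets in each period that contain the hashtag."""
    total, hits = Counter(), Counter()
    for period, text in tweets:
        total[period] += 1
        if tag.lower() in text.lower():
            hits[period] += 1
    return {p: hits[p] / total[p] for p in total}

# Hypothetical sample; real studies run this over millions of tweets.
sample = [
    ("2015-05", "Stop #commoncore now! #PJNET"),
    ("2015-05", "My kid's #commoncore homework is fine, actually"),
    ("2015-06", "#PJNET twitter rally tonight #commoncore"),
    ("2015-06", "#PJNET keep the pressure up #commoncore"),
]
shares = hashtag_share(sample, "#PJNET")
```

That a single group’s hashtag can reach 69 percent of a topic’s tweets on some days is exactly what this kind of per-period ratio makes visible.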

Mahnken talks as though creating a ‘botnet’ to defeat adoption of the Common Core State Standards is a bad thing.

I never cared for #commoncore because testing makes money for large and small testing vendors. It has no other demonstrated impact on the educational process.

Let’s assume you want to build a championship high school baseball team. To do that, various officious intermeddlers, who have no experience with baseball, fund creation of the Common Core Baseball Standards.

Every three years, every child is tested against the Common Core Baseball Standards and their performance recorded. No funds are allocated for additional training for gifted performers, equipment, baseball fields, etc.

By the time these students reach high school, will you have the basis for a championship team? Perhaps, but if you do, it is due to random chance and not the Common Core Baseball Standards.

If you want a championship high school baseball team, you fund training, equipment, and baseball fields, in addition to spending money on the best facilities for your hoped-for championship team. Consistently and over time, you spend money.

The key to better education results isn’t testing, but funding based on the education results you hope to achieve.

I do commend the #commoncore project website for being an impressive presentation of Twitter data, even though it is clearly a propaganda machine for pro-Common Core advocates.

The challenge here is to work backwards from what the project observed to the principles and tactics that made #stopcommoncore so successful. That is, we know it succeeded, at least to some degree, but how do we replicate that success on other issues?

Replication is how science demonstrates the reliability of a technique.

Looking forward to hearing your thoughts, suggestions, etc.

Enjoy!

Availability Cascades [Activists Take Note, Big Data Project?]

Saturday, February 25th, 2017

Availability Cascades and Risk Regulation by Timur Kuran and Cass R. Sunstein, Stanford Law Review, Vol. 51, No. 4, 1999, U of Chicago, Public Law Working Paper No. 181, U of Chicago Law & Economics, Olin Working Paper No. 384.

Abstract:

An availability cascade is a self-reinforcing process of collective belief formation by which an expressed perception triggers a chain reaction that gives the perception of increasing plausibility through its rising availability in public discourse. The driving mechanism involves a combination of informational and reputational motives: Individuals endorse the perception partly by learning from the apparent beliefs of others and partly by distorting their public responses in the interest of maintaining social acceptance. Availability entrepreneurs – activists who manipulate the content of public discourse – strive to trigger availability cascades likely to advance their agendas. Their availability campaigns may yield social benefits, but sometimes they bring harm, which suggests a need for safeguards. Focusing on the role of mass pressures in the regulation of risks associated with production, consumption, and the environment, Professor Timur Kuran and Cass R. Sunstein analyze availability cascades and suggest reforms to alleviate their potential hazards. Their proposals include new governmental structures designed to give civil servants better insulation against mass demands for regulatory change and an easily accessible scientific database to reduce people’s dependence on popular (mis)perceptions.

Not recent, 1999, but a useful starting point for the study of availability cascades.

The authors want to insulate civil servants, where I want to exploit availability cascades to drive their responses, but that’s a question of perspective and not practice.

Google Scholar reports 928 citations of Availability Cascades and Risk Regulation, so it has had an impact on the literature.

However, availability cascades are not a recipe science. Networks, Crowds, and Markets: Reasoning About a Highly Connected World by David Easley and Jon Kleinberg, especially chapters 16 and 17, provides the background for developing such insights.
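The core mechanism Easley and Kleinberg describe is sequential Bayesian herding: each person holds a private signal and sees everyone’s earlier choices, and once the public evidence outweighs any single signal, private information stops mattering. A stripped-down sketch of that model (the signal sequences below are illustrative, not data):

```python
def cascade_choices(signals):
    """Sequential accept/reject choices under Bayesian herding.

    Each agent holds a private boolean signal and sees all earlier
    choices. Outside a cascade, a rational agent's choice reveals the
    signal; once the net count of revealed signals reaches +/-2, later
    agents ignore their own signal and the cascade locks in.
    """
    net = 0
    choices = []
    for s in signals:
        if net >= 2:            # up-cascade: everyone accepts
            choices.append(True)
        elif net <= -2:         # down-cascade: everyone rejects
            choices.append(False)
        else:                   # choice still reveals the private signal
            choices.append(s)
            net += 1 if s else -1
    return choices

# Two early 'accept' signals lock in a cascade; all later signals,
# however numerous, are ignored.
run = cascade_choices([True, True, False, False, False])
```

An availability entrepreneur’s job, in these terms, is to supply the first two loud “signals” and let the lock-in do the rest.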

I started to suggest this would make a great big data project but big data projects are limited to where you have, well, big data. Certainly have that with Facebook, Twitter, etc., but that leaves a lot of the world’s population and social activity on the table.

That is, to avoid junk results, you would need survey instruments to track any chain reactions outside of the bots that dominate social media.

Very high-end advertising, which still misses with alarming regularity, would be a good place to look for tips on availability cascades. Advertisers have a profit motive to keep them interested.

Building an Online Profile:… [Toot Your Own Horn]

Thursday, February 23rd, 2017

Building an Online Profile: Social Networking and Amplification Tools for Scientists by Antony Williams.

Seventy-seven slides from a February 22, 2017 presentation at NC State University on building an online profile.

Pure gold, whether you are building your own profile or one for an alternate identity. 😉

I like the “toot your own horn” slide in particular; take that advice to heart.

Your posts/work will never be perfect so don’t wait for that before posting.

Any errors you make are likely to go unnoticed until you correct them.

Empirical Analysis Of Social Media

Thursday, January 19th, 2017

How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, not Engaged Argument by Gary King, Jennifer Pan, and Margaret E. Roberts. American Political Science Review, 2017. (Supplementary Appendix)

Abstract:

The Chinese government has long been suspected of hiring as many as 2,000,000 people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people. Many academics, and most journalists and activists, claim that these so-called “50c party” posts vociferously argue for the government’s side in political and policy debates. As we show, this is also true of the vast majority of posts openly accused on social media of being 50c. Yet, almost no systematic empirical evidence exists for this claim, or, more importantly, for the Chinese regime’s strategic objective in pursuing this activity. In the first large scale empirical analysis of this operation, we show how to identify the secretive authors of these posts, the posts written by them, and their content. We estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime’s strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. We infer that the goal of this massive secretive operation is instead to regularly distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime. We discuss how these results fit with what is known about the Chinese censorship program, and suggest how they may change our broader theoretical understanding of “common knowledge” and information control in authoritarian regimes.

I differ from the authors on some of their conclusions but this is an excellent example of empirical as opposed to wishful analysis of social media.

Wishful analysis of social media includes the farcical claim that social media is an effective recruitment tool for terrorists. The claim is made too often to dignify with a citation, but never with empirical evidence, only an author’s repetition of the common “wisdom.”

In contrast, King et al. are careful to say what their analysis does and does not support, finding in a number of cases, the evidence contradicts commonly held thinking about the role of the Chinese government in social media.

One example I found telling was the lack of evidence that anyone is paid for pro-government social media comments.

In the authors’ words:


We also found no evidence that 50c party members were actually paid fifty cents or any other piecemeal amount. Indeed, no evidence exists that the authors of 50c posts are even paid extra for this work. We cannot be sure of current practices in the absence of evidence but, given that they already hold government and Chinese Communist Party (CCP) jobs, we would guess this activity is a requirement of their existing job or at least rewarded in performance reviews.
… (at pages 10-11)

Here I differ from the authors’ “guess”:

…this activity is a requirement of their existing job or at least rewarded in performance reviews.

Kudos to the authors for labeling this a “guess,” although one expects the mainstream press and members of Congress to take it as written in stone.

However, the authors presume positive posts about the government of China can only result from direct orders or pressure from superiors.

That’s a major weakness in this paper and similar analysis of social media postings.

The simpler explanation of pro-government posts is a poster is reporting the world as they see it. (Think Occam’s Razor.)

As for sharing them with the so-called “propaganda office,” perhaps they are attempting to curry favor. The small number of posters makes it difficult to credit their motives (unknown) and behavior (partially known) as representative for the estimated 2 million posters.

Moreover, out of a population that nears 1.4 billion, the existence of 2 million individuals with a positive view of the government isn’t difficult to credit.

This is an excellent paper that will repay a close reading several times over.

Take it also as a warning about ideologically based assumptions that can mar or even invalidate otherwise excellent empirical work.

PS:

Additional reading:

From Gary King’s webpage on the article:

This paper follows up on our articles in Science, “Reverse-Engineering Censorship In China: Randomized Experimentation And Participant Observation”, and the American Political Science Review, “How Censorship In China Allows Government Criticism But Silences Collective Expression”.

Eight Years of the Republican Weekly Address

Wednesday, January 4th, 2017

We looked at eight years of the Republican Weekly Address by Jesse Rifkin.

From the post:

Every week since Ronald Reagan started the tradition in 1982, the president delivers a weekly address. And every week, the opposition party delivers an address as well.

What can the Weekly Republican Addresses during the Obama era reveal about how the GOP has attempted to portray themselves to the American public, by the public policy topics they discussed and the speakers they featured? To find out, GovTrack Insider analyzed all 407 Weekly Republican Addresses for which we could find data during the Obama era, the first such analysis of the weekly addresses as best we can tell. (See the full list of weekly addresses here.)

Sometimes they discuss the same topic as the president’s weekly address — particularly common if a noteworthy event occurs in the news that week — although other times it’s on an unrelated topic of the party’s choosing. It also features a rotating cast of Republicans delivering the speech, most of them congressional, unlike the White House which has almost always featured President Obama, with Vice President Joe Biden occasionally subbing in.

On the issues, we found that Republicans have almost entirely refrained from discussing such inflammatory social issues as abortion, guns, or same-sex marriage in their weekly addresses, despite how animating such issues are to their base. They also were remarkably silent on Donald Trump until the week before the election.

We also find that while Republicans often get slammed on women’s rights and minority issues, Republican congressional women and African Americans are at least proportionally represented in the weekly addresses, compared to their proportions in Congress, if not slightly over-represented — but Hispanics are notably under-represented.

You have seen credible claims, such as On Predicting Social Unrest Using Social Media by Rostyslav Korolov, et al., and less credible claims from others, such as the CIA’s claim that it can predict some social unrest up to 5 days ahead.

Rumor has it that the CIA has a Word template named, appropriately enough: theRussiansDidIt. I can neither confirm nor deny that rumor.

Taking credible actors at their word, are you aware of any parallel research on weekly addresses by Congress and following congressional action?

A very lite skimming of the literature on predicting Supreme Court decisions comes up with: Competing Approaches to Predicting Supreme Court Decision Making by Andrew D. Martin, Kevin M. Quinn, Theodore W. Ruger, and Pauline T. Kim (2004), Algorithm predicts US Supreme Court decisions 70% of time by David Kravets (2014), Fantasy Scotus (a Supreme Court fantasy league with cash prizes).

Congressional voting has been studied as well, for instance, Predicting Congressional Voting – Social Identification Trumps Party. (Now there’s an unfortunate headline for searchers.)

Congressional votes are important, but so is the progress of bills, the order in which issues are addressed, etc., and it is the reflection of those less formal aspects in weekly addresses from Congress that could be interesting.

The weekly speeches may be as divorced from any shared reality as comments inserted in the Congressional Record. On the other hand, a partially successful model, other than the timing of donations, may be possible.

Gab – Censorship Lite?

Tuesday, November 29th, 2016

I submitted my email today at Gab and got this message:

Done! You’re #1320420 in the waiting list.

Only three rules:

Illegal Pornography

We have a zero tolerance policy against illegal pornography. Such material will be instantly removed and the owning account will be dealt with appropriately per the advice of our legal counsel. We reserve the right to ban accounts that share such material. We may also report the user to local law enforcement per the advice of our legal counsel.

Threats and Terrorism

We have a zero tolerance policy for violence and terrorism. Users are not allowed to make threats of, or promote, violence of any kind or promote terrorist organizations or agendas. Such users will be instantly removed and the owning account will be dealt with appropriately per the advice of our legal counsel. We may also report the user to local and/or federal law enforcement per the advice of our legal counsel.

What defines a ‘terrorist organization or agenda’? Any group that is labelled as a terrorist organization by the United Nations and/or United States of America classifies as a terrorist organization on Gab.

Private Information

Users are not allowed to post other’s confidential information, including but not limited to, credit card numbers, street numbers, SSNs, without their expressed authorization.

If Gab is listening, I can get the rules down to one:

Court Ordered Removal

When Gab receives a court order from a court of competent jurisdiction ordering the removal of identified, posted content, at (service address), the posted, identified content will be removed.

Simple, fair, gets Gab and its staff out of the censorship business and provides a transparent remedy.

At no cost to Gab!

What’s there not to like?

Gab should review my posts: Monetizing Hate Speech and False News and Preserving Ad Revenue With Filtering (Hate As Renewal Resource), while it is in closed beta.

Twitter and Facebook can keep spending uncompensated time and effort trying to be universal and fair censors. Gab has the opportunity to reach up and grab those $100 bills flying overhead for filtered news services.

What is the New York Times if not an opinionated and poorly run filter on all the possible information it could report?

Apply that same lesson to social media!

PS: Seriously, before going public, I would go to the one court-based rule on content. There’s no profit and no wins in censoring any content on your own. Someone will always want more or less. Courts get paid to make those decisions.

Check with your lawyers, but if you don’t look at any content, you can’t be charged with constructive notice of it. Unless and until someone points it out; then you have to follow the DMCA, court orders, etc.

PubMed comments & their continuing conversations

Monday, November 21st, 2016

PubMed comments & their continuing conversations

From the post:

We have many options for communication. We can choose platforms that fit our style, approach, and time constraints. From pop culture to current events, information and opinions are shared and discussed across multiple channels. And scientific publications are no exception.

PubMed Commons was established to enable commenting in PubMed, the largest biomedical literature database. In the past year, commenters posted to more than 1,400 publications. Of those publications, 80% have a single comment today, and 12% have comments from multiple members. The conversation carries forward in other venues.

Sometimes comments pull in discussion from other locations or spark exchanges elsewhere. Here are a few examples where social media prompted PubMed Commons posts or continued the commentary on publications.

An encouraging review of examples of sane discussion through the use of comments.

Contrast the abandonment of comments by some media outlets, NPR for example: NPR Website To Get Rid Of Comments by Elizabeth Jensen.

My takeaway from Jensen’s account was that NPR likes its own free speech, but is not so interested in the free speech of others.

See also: Have Comment Sections on News Media Websites Failed?, for op-ed pieces at the New York Times from a variety of perspectives.

Perhaps comments on news sites are examples of casting pearls before swine? (Matthew 7:6)

Freedom of Speech/Press – Great For “Us” – Not So Much For You (Wikileaks)

Saturday, November 5th, 2016

The New York Times, sensing a possible defeat of its neo-liberal agenda on November 8, 2016, has loosed the dogs of war on social media in general and Wikileaks in particular.

Consider the sleight of hand in Farhad Manjoo’s How the Internet Is Loosening Our Grip on the Truth, which argues on one hand,


You’re Not Rational

The root of the problem with online news is something that initially sounds great: We have a lot more media to choose from.

In the last 20 years, the internet has overrun your morning paper and evening newscast with a smorgasbord of information sources, from well-funded online magazines to muckraking fact-checkers to the three guys in your country club whose Facebook group claims proof that Hillary Clinton and Donald J. Trump are really the same person.

A wider variety of news sources was supposed to be the bulwark of a rational age — “the marketplace of ideas,” the boosters called it.

But that’s not how any of this works. Psychologists and other social scientists have repeatedly shown that when confronted with diverse information choices, people rarely act like rational, civic-minded automatons. Instead, we are roiled by preconceptions and biases, and we usually do what feels easiest — we gorge on information that confirms our ideas, and we shun what does not.

This dynamic becomes especially problematic in a news landscape of near-infinite choice. Whether navigating Facebook, Google or The New York Times’s smartphone app, you are given ultimate control — if you see something you don’t like, you can easily tap away to something more pleasing. Then we all share what we found with our like-minded social networks, creating closed-off, shoulder-patting circles online.

This gets to the deeper problem: We all tend to filter documentary evidence through our own biases. Researchers have shown that two people with differing points of view can look at the same picture, video or document and come away with strikingly different ideas about what it shows.

You caught the invocation of authority by Manjoo, “researchers have shown,” etc.

But did you notice he never shows his other hand?

If the public is so bat-shit crazy that it takes all social media content as equally trustworthy, what are we to do?

Well, that is the question isn’t it?

Manjoo invokes “dozens of news outlets” who are tirelessly but hopelessly fact checking on our behalf in his conclusion.

The strong implication is that without the help of “media outlets,” you are a bundle of preconceptions and biases doing what feels easiest.

“News outlets,” on the other hand, are free from those limitations.

You bet.

If you thought Manjoo was bad, enjoy seething through Zeynep Tufekci’s claims that Wikileaks is an opponent of privacy, a sponsor of censorship, and an opponent of democracy, all in a little over 1,000 words (1,069 by exact count): Wikileaks Isn’t Whistleblowing.

It’s a breathtaking piece of half-truths.

For example, playing for your sympathy, Tufekci invokes the need of dissidents for privacy. Even to the point of invoking the ghost of the former Soviet Union.

Tufekci overlooks, and hopes you do as well, that these emails weren’t from dissidents, but from people who traded in and on the whims and caprices at the pinnacles of American power.

Perhaps realizing that is too transparent a ploy, she recounts other data dumps by Wikileaks to which she objects. As lawyers say, if the facts are against you, pound on the table.

In an echo of Manjoo, did you know you are too dumb to distinguish critical information from trivial?

Tufekci writes:


These hacks also function as a form of censorship. Once, censorship worked by blocking crucial pieces of information. In this era of information overload, censorship works by drowning us in too much undifferentiated information, crippling our ability to focus. These dumps, combined with the news media’s obsession with campaign trivia and gossip, have resulted in whistle-drowning, rather than whistle-blowing: In a sea of so many whistles blowing so loud, we cannot hear a single one.

I don’t think you are that dumb.

Do you?

But who will save us? You can guess Tufekci’s answer, but here it is in full:


Journalism ethics have to transition from the time of information scarcity to the current realities of information glut and privacy invasion. For example, obsessively reporting on internal campaign discussions about strategy from the (long ago) primary, in the last month of a general election against a different opponent, is not responsible journalism. Out-of-context emails from WikiLeaks have fueled viral misinformation on social media. Journalists should focus on the few important revelations, but also help debunk false misinformation that is proliferating on social media.

If you weren’t frightened into agreement by the end of her parade of horrors:


We can’t shrug off these dangers just because these hackers have, so far, largely made relatively powerful people and groups their targets. Their true target is the health of our democracy.

So now Wikileaks is gunning for democracy?

You bet. 😉

Journalists of my youth, think Vietnam, Watergate, were aggressive critics of government and the powerful. The Panama Papers project is evidence that level of journalism still exists.

Instead of whining about releases by Wikileaks and others, journalists* need to step up and provide context they see as lacking.

It would sure beat the hell out of repeating news releases from military commanders, “justice” department mouthpieces, and official but “unofficial” leaks from the American intelligence community.

* Like any generalization, this is grossly unfair to the many journalists who work on behalf of the public every day but lack the megaphone of the government lapdog New York Times. To those journalists, and only them, do I apologize in advance for any offense given. The rest of you, take such offense as is appropriate.

How To Read: “War Goes Viral” (with caution, propaganda ahead)

Monday, October 17th, 2016


War Goes Viral – How social media is being weaponized across the world by Emerson T. Brooking and P. W. Singer.

One of the highlights of the post reads:


Perhaps the greatest danger in this dynamic is that, although information that goes viral holds unquestionable power, it bears no special claim to truth or accuracy. Homophily all but ensures that. A multi-university study of five years of Facebook activity, titled “The Spreading of Misinformation Online,” was recently published in Proceedings of the National Academy of Sciences. Its authors found that the likelihood of someone believing and sharing a story was determined by its coherence with their prior beliefs and the number of their friends who had already shared it—not any inherent quality of the story itself. Stories didn’t start new conversations so much as echo preexisting beliefs.

This extreme ideological segregation, the authors concluded, “comes at the expense of the quality of the information and leads to proliferation of biased narratives fomented by unsubstantiated rumors, mistrust, and paranoia.” As smartphone cameras and streaming video turn every bystander into a reporter (and everyone with an internet connection into an analyst), “truth” becomes a matter of emotional resonance.

Ooooh, “…’truth’ becomes a matter of emotional resonance.”

That has always been true, but give the authors their due: “War Goes Viral” is a masterful piece of propaganda to the contrary.

Calling something “propaganda,” or “media bias” is easy and commonplace.

Let’s do the hard part and illustrate why that is the case with “War Goes Viral.”

The tag line:

How social media is being weaponized across the world

preps us to think:

Someone or some group is weaponizing social media.

So before even starting the article proper, we are prepared to be on the look out for the “bad guys.”

The authors are happy to oblige with #AllEyesOnISIS, first paragraph, second sentence. “The self-styled Islamic State…” appears in the second paragraph and ISIS in the third paragraph. Not much doubt who the “bad guys” are at this point in the article.

Listing only each change of current actors, “bad guys” in red, the article from start to finish names:

  • Islamic State
  • Russia
  • Venezuela
  • China
  • U.S. Army training to combat “bad guys”
  • Israel – neutral
  • Islamic State (Hussain)

The authors leave you with little doubt who they see as the “bad guys,” a one-sided view of propaganda and social media in particular.

For example, there is:

No mention of Voice of America (VOA), perhaps one of the longest-running continuous disinformation campaigns in history.

No mention of Pentagon admits funding online propaganda war against Isis.

No mention of any number of similar projects and programs which weren’t constructed with an eye on “truth and accuracy” by the United States.

The treatment here is as one-sided as the “weaponized” social media of which the authors complain.

Not that the authors are lacking in skill. They piggyback their own slant onto The Spreading of Misinformation Online:


This extreme ideological segregation, the authors concluded, “comes at the expense of the quality of the information and leads to proliferation of biased narratives fomented by unsubstantiated rumors, mistrust, and paranoia.” As smartphone cameras and streaming video turn every bystander into a reporter (and everyone with an internet connection into an analyst), “truth” becomes a matter of emotional resonance.

How much of that is supported by The Spreading of Misinformation Online?

  • First sentence
  • Second sentence
  • Both sentences

The answer is:

This extreme ideological segregation, the authors concluded, “comes at the expense of the quality of the information and leads to proliferation of biased narratives fomented by unsubstantiated rumors, mistrust, and paranoia.”

The remainder of that paragraph was invented out of whole cloth by the authors and positioned, with “truth” in quotes, to piggyback on the legitimate academic work just quoted.

As smartphone cameras and streaming video turn every bystander into a reporter (and everyone with an internet connection into an analyst), “truth” becomes a matter of emotional resonance.

That is popular cant among media and academic types, but it is no more than that.

Skilled reporting can put information in a broad context and weave a coherent narrative, but disparaging social media authors doesn’t make that any more likely.

“War Goes Viral” being a case in point.

How Do I Become A Censor?

Monday, June 13th, 2016

You read about censorship or efforts at censorship on a daily basis.

But none of those reports answers the burning question of the social media age: How Do I Become A Censor?

I mean, what’s the use of reading about other people censoring your speech if you aren’t free to censor theirs? Where’s the fun in that?

Andrew Golis has an answer for you in: Comments are usually garbage. We’re adding comments to This.!.

Three steps to becoming a censor:

  1. Build a social media site that accepts comments
  2. Declare a set of highly subjective ass-hat rules
  3. Censor user comments

There being no third-party arbiters, you are now a censor! Feel the power surging through your fingers. Crush dangerous thoughts, memes or content with a single return. The safety and sanity of your users is now your responsibility.

Heady stuff. Yes?

If you think this is parody, check out the This. Community Guidelines for yourself:


With that in mind, This. is absolutely not a place for:

Violations of law. While this is expanded upon below, it should be clear that we will not tolerate any violations of law when using our site.

Hate speech, malicious speech, or material that’s harmful to marginalized groups. Overtly discriminating against an individual belonging to a minority group on the basis of race, ethnicity, national origin, religion, sex, gender, sexual orientation, age, disability status, or medical condition won’t be tolerated on the site. This holds true whether it’s in the form of a link you post, a comment you make in a conversation, a username or display name you create (no epithets or slurs), or an account you run.

Harassment; incitements to violence; or threats of mental, emotional, cyber, or physical harm to other members. There’s a line between civil disagreement and harassment. You cross that line by bullying, attacking, or posing a credible threat to members of the site. This happens when you go beyond criticism of their words or ideas and instead attack who they are. If you’ve got a vendetta against a certain member, do not police and criticize that member’s every move, post, or comment on a conversation. Absolutely don’t take this a step further and organize or encourage violence against this person, whether through doxxing, obtaining dirt, or spreading that person’s private information.

Violations of privacy. Respect the sanctity of our members’ personal information. Don’t con them – or the infrastructure of our site – to obtain, post, or disseminate any information that could threaten or harm our members. This includes, but isn’t limited to, credit card or debit card numbers; social or national security numbers; home addresses; personal, non-public email addresses or phone numbers; sexts; or any other identifying information that isn’t already publicly displayed with that person’s knowledge.

Sexually-explicit, NSFW, obscene, vulgar, or pornographic content. We’d like for This. to be a site that someone can comfortably scroll through in a public space – say a cafe, or library. We’re not a place for sexually-explicit or pornographic posts, comments, accounts, usernames, or display names. The internet is rife with spaces for you to find people who might share your passion for a certain Pornhub video, but This. isn’t the place to do that. When it comes to nudity, what we do allow on our site is informative or newsworthy – so, for example, if you’re sharing this article on Cameroon’s breast ironing tradition, that’s fair game. Or a good news or feature article about Debbie Does Dallas. But, artful as it may be, we won’t allow actual footage of Debbie Does Dallas on the site. (We understand that some spaces on the internet are shitty at judging what is and isn’t obscene when it comes to nudity, so if you think we’ve pulled your post off the site because we’re a bunch of unreasonable prudes, we’ll be happy to engage.)

Excessively violent content. Gore, mutilation, bestiality, necrophilia? No thanks! There’s a distinction between a potentially upsetting image that’s worth consuming (think of some of the best war photography) and something you’d find in a snuff film. It’s not always an easy distinction to make – real life is pretty brutal, and some of the images we probably need to see are the hardest to stomach – but we also don’t want to create an overwhelmingly negative experience for anyone who visits the site and happens to stumble upon a painful image.

Promotion of self-harm, eating disorders, alcohol or drug abuse, or similar forms of destructive behavior. The internet is, sadly, also rife with spaces where people get off on encouraging others to hurt themselves. If you’d like to do that, get off our site and certainly seek help.

Username squatting. Dovetailing with that, we reserve the right to take back a username that is not being actively used and give it to someone else who’d like it – especially if it’s, say, an esteemed publication, organization, or person. We’re also firmly against attempts to buy or sell stuff in exchange for usernames.

Use of the This. brand, trademark, or logo without consent. You also cannot use the This. name or anything associated with the brand without our consent – unless, of course, it’s a news item. That means no creating accounts, usernames, or display names that use our brand.

Spam. Populating the site with spammy accounts is antithetical to our mission – being the place to find the absolute best in media. If you’ve created accounts that are transparently selling, say, “installation help for Macbooks” or some other suspicious form tech support, or advertising your “viral video” about Justin Bieber that’s got a suspiciously low number of views, you don’t belong on our site. That contradicts why we exist as a platform – to give members a noise-free experience they can’t find elsewhere on the web.

Impersonation of others. Dovetailing with that – though we’d all like to be The New York Times or Diana Ross, don’t pretend to be them. Don’t create an identity on the site in the likeness of a company or person who isn’t you. If you decide, for some reason, to create a parody account of a public figure or organization – though we can think of better sites to do that on, frankly – make sure you make that as clear as possible in your display name, avatar, and bio.

Infringement of copyright or intellectual property rights. Don’t post copyrighted works without the permission of its original owner or creator. This extends, for example, to copying and pasting a copyrighted set of words into a comment and passing it off as your own without credit. If you think someone has unlawfully violated your own copyright, please follow the DMCA procedures set forth in our Terms of Service.

Mass or automated registration and following. We’ve worked hard to build the site’s infrastructure. If you manipulate that in any way to game your follow count or register multiple spam accounts, we’ll have to terminate your account.

Exploits, phishing, resource abuse, or fraudulent content. Do not scam our members into giving you money, or mislead our members through misrepresenting a link to, say, a virus.

Exploitation of minors. Do not post any material regarding minors that’s sexually explicit, violent, or harmful to their safety. Don’t solicit or request their private or personally identifiable information. Leave them alone.

So how do we take punitive action against anyone who violates these? Depends on the severity of the offense. If you’re a member with a good track record who seems to have slipped up, we’ll shoot you an email telling you why your content was removed. If you’ve shared, written, or done something flagrantly and recklessly violating one of these rules, we’ll ban you from the site through deleting your account and all that’s associated with it. And if we feel it’s necessary or otherwise believe it is required, we will work with law enforcement to handle any risk to one of our members, the This. community in general, or to public safety.

To put it plainly – if you’re an asshole, we’ll kick you off the site.

Let’s make that a little more concrete.

I want to say: “Former Vice-President Dick Cheney should be tortured for a minimum of thirty (30) years and be kept alive for that purpose, as a penalty for his war crimes.”

I can’t say that on This. because:

  • “incitement to violence” If torture is OK, then so is other violence.
  • “harmful to marginalized group” If you think of sociopaths as a marginalized group.
  • “harassment” Cheney is a victim too. He didn’t start life as a moral leper.
  • “excessively violent content” Assume I illustrate the torture Cheney should suffer.

Rules broken vary by the specific content of my speech.

Remind me to pass this advice along to: Jonathan “I Want To Be A Twitter Censor” Weisman. All he needs to do is build a competitor to Twitter and he can censor to his heart’s delight.

The “build your own platform” advice isn’t just my opinion. This. confirms it:

If you don’t like these rules, feel free to create your own platform! There are a lot of awesome, simple ways to do that. That’s what’s so lovely about the internet.

Onlinecensorship.org Launches First Report (PDF)

Thursday, March 31st, 2016

Onlinecensorship.org Launches First Report (PDF).

Reposting:

Onlinecensorship.org is pleased to share our first report "Unfriending Censorship: Insights from four months of crowdsourced data on social media censorship." The report draws on data gathered directly from users between November 2015 and March 2016.

We asked users to send us reports when they had their content or accounts taken down on six social media platforms: Facebook, Flickr, Google+, Instagram, Twitter, and YouTube. We have aggregated and analyzed the collected data across geography, platform, content type, and issue areas to highlight trends in social media censorship. All the information presented here is anonymized, with the exception of case study examples we obtained with prior approval by the user.

Here are some of the highlights:

  • This report covers 161 submissions from 26 countries, regarding content in eleven languages.
  • Facebook was the most frequently reported platform, and account suspensions were the most reported content type.
  • Nudity and false identity were the most frequent reasons given to users for the removal of their content.
  • Appeals seem to present a particular challenge. A majority of users (53%) did not appeal the takedown of their content, 50% of whom said they didn’t know how and 41.9% of whom said they didn’t expect a response. In only four cases was content restored, while in 50 the user didn’t get a response.
  • We received widespread reports that flagging is being used for censorship: 61.6% believed this was the cause of the content takedown.

While we introduced some measures to help us verify reports (such as giving respondents the opportunity to send us screenshots that support their claims), we did not work with the companies to obtain this data and thus cannot claim it is representative of all content takedowns or user experiences. Instead, it shows how a subset of the millions of social media users feel about how their content takedowns were handled, and the impact it has had on their lives.

The full report is available for download and distribution under Creative Commons licensing.

As the report itself notes, 161 reports across 6 social media platforms in 4 months isn’t a representative sample of censoring in social media.

Twitter alone brags about closing 125,000 ISIS accounts since mid-2015 (report dated 5 February 2016).

Closing ISIS accounts is clearly censorship of political speech, whatever hand waving and verbal gymnastics Twitter wants to employ to justify its practices. Including terms of service.

Censorship, on whatever basis, by whoever practiced, by whatever mechanism (including appeals), will always step on legitimate speech of some speakers.

The non-viewing of content has one and only one legitimate locus of control: for web content, the user’s browser.

Browsers and/or web interfaces for Twitter, Facebook, etc., should enable users to block users, content by keywords, or even classifications offered by social media services.
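Such blocking needs nothing from the platform beyond the raw feed. A minimal sketch of that user-side control, assuming hypothetical post fields (“author”, “text”, “labels”) rather than any actual platform API:

```python
# Sketch of user-controlled feed filtering: the service delivers everything,
# and the user's own client decides what to display.
# Field names ("author", "text", "labels") are illustrative assumptions.

def visible(post, blocked_users, blocked_keywords, blocked_labels):
    """Apply the user's own settings; True means display the post."""
    if post["author"] in blocked_users:
        return False
    text = post["text"].lower()
    if any(kw.lower() in text for kw in blocked_keywords):
        return False
    # "labels" stands in for classifications a service might offer.
    if blocked_labels & set(post.get("labels", ())):
        return False
    return True


def filter_feed(feed, blocked_users=(), blocked_keywords=(), blocked_labels=()):
    users, labels = set(blocked_users), set(blocked_labels)
    return [p for p in feed if visible(p, users, blocked_keywords, labels)]
```

Everything here runs on the user’s side; the service never has to decide what anyone else may see.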

Poof!

All need for collaboration with governments, issues of what content to censor, appeal processes, etc., suddenly disappear.

Enabling users to choose the content that will be displayed in their browsers empowers listeners as well as speakers, with prejudice towards none.

Yes?

Jihadist Wannabes: You Too Can Be A Western Lackey

Wednesday, March 30th, 2016

Eric Geller’s piece Why ISIS is winning the online propaganda war is far too long to read but has several telling insights for jihadist wannabes.

First, perhaps without fully realizing it, Geller points out that the appeal of ISIS is based on facts, not messages:


Young Muslims who feel torn between what can seem like two different worlds, who long for structure and meaning in their lives, are ISIS’s best targets. They seek a coherent picture of the world—and ISIS is ready to offer one. Imagine being 19 years old, living in a major American city, and not understanding how a terrorist attack in Paris can change the way your fellow subway passengers look at you. If that prejudice or bigotry mystified you, you might gravitate toward someone offering an explanation that felt like it fit with your experiences. You might start watching YouTube videos about the supposedly irreconcilable differences between the West and the Islamic world. ISIS shapes its content to appeal to this person and others who lack a framework for understanding world events and are willing to embrace a radical one.

The other psychological factor that ISIS exploits is the natural desire for purpose. ISIS is a bonafide regional power, and to people who already feel out of place in Western society and crave a sense of direction, joining ISIS offers that purpose, that significance. They can become part of something bigger than themselves. They can fight for a cause. ISIS’s messages don’t just offer a framework for understanding the world; they also offer the chance to help shape it. These messages “make people feel like they matter in the world,” Beutel said, by promising “a sense of honor and self-esteem, and the ability to actively live out those desires.”

There are also more pragmatic promises, tailored to people who are not only spiritually aimless but economically frustrated and emotionally unfulfilled. Liang described this part of the appeal as, “Come and you will have a real life. You will have a salary. You will have a job. You will have a wife. You will have a house.”

“This is appealing to people who have, really, no future,” she said.

I can see how ISIS would be appealing to:

…people who have, really, no future…

Noting that the “no future” is a fact, not idle speculation. All Muslim youth do and will continue to face discrimination, especially in the aftermath of terrorist attacks.

The West and Islamic worlds are irreconcilable only to the extent leaders in both worlds profit from that view. Sane members of those and other traditions relish and welcome such diversity. The Islamic world has a better record of toleration of diversity than one can claim for any Western power.

Second, Geller illustrates how the focus on message is at odds with changing the reality for Muslim youth:


If the U.S. and its allies want to dissuade would-be jihadists from joining ISIS, they need to start from square one. “We need a compelling story that makes our story better than theirs,” Liang said. “And so far their story is trumping ours.”

The anti-extremist story can’t just be a paean to human rights and liberal democratic values. It must provide clear promises about what the Middle East will look like if ISIS is defeated. “What are we going to do if we take back the land that [ISIS] is inhabiting at the moment?” Liang said. “What government are we going to set up, and how legitimate will it be? If you look at, right now, the Iraqi state, it’s extremely corrupt, and it has to prove that it will be the better alternative.”

Part of the challenge that counter-narrative designers face is that the anti-extremist story can’t just be a sweeping theoretical message. It has to be pragmatic, full of real promises. But no one has a clear idea of how to do this. “To be totally honest, we haven’t cracked that nut yet,” the former senior administration official said. “Maybe it is liberal values and a democratic order and human rights and democratic values. I would hope that that would be the case. But I don’t think that there’s evidence yet that that would be equally compelling as a narrative or a set of values.

“Everyone agrees [that] we can’t just counter-message,” the official added. “We have to promote alternative messages. But nobody understands or agrees or has the answer in terms of what are the alternate courses of action or pathways that one could offer.”

How’s that for a bizarre story line? There is no effort to change the reality as experienced by Muslim youth, but they should just suck it up and not join ISIS?

One of the problems with “messaging” is that the West insists on dictating who will deliver the message and on controlling what else they may say.

Not to mention that being discovered to be a Western lackey damages the credibility of anti-jihadists.

I’m not sure who edited Geller’s piece but there is this gem:


While the big-picture thinkers devise a story, others should focus on a bevy of vital changes to how counter-narratives are produced and distributed. For one thing, the content is too grim. Instead of going dark, Beutel said, go light: Offer would-be jihadists hope. Humanize ISIS’s foot soldiers instead of demonizing them, so that your intended audience understands that you care about their fate and not just taking them off the battlefield. “When you have people who are espousing incredibly hateful worldviews, the tendency is to want to demonize them—to want to shut them out [in order] to isolate them,” Beutel said. “More often than not, that actually repulses people rather than [getting] them to open up.” (emphasis added)

Gee, “espousing incredibly hateful worldviews,” I don’t think of ISIS first in that regard. Do you? There’s a list of governments and leaders that I would put way ahead of ISIS.

Maybe, just maybe, urging people to not join groups you are trying to destroy that are resisting corrupt Western toady governments just isn’t persuasive?

Have you stopped corrupting Muslim governments? Have you stopped supporting governments that oppress Muslims? Have you stopped playing favorites between Muslim factions? Have you taken any steps to promote a safe and diverse environment for Muslims in your society?

Or the overall question: Have you made a positive difference in the day-to-day lives of Muslims? (from their point of view, not yours)

Messaging not backed by having done (accomplished, not merely promised) those and other things is an invitation to be a Western lackey. Who wants that?


All the attribution of a high level of skill to ISIS messaging is merely a reflection of the tone-deafness of Western-dictated messaging. Strict hierarchical control over both messages and speakers, using messages that appeal to the sender and not the receiver, valuing message over reality, are only some of the flaws in Western anti-jihadist programs.

The only possible redeeming point is the use of former jihadists. ISIS, being composed of people, is subject to the same failings as governments/groups/movements everywhere. I’m not sure how any government could claim to be superior to ISIS in that regard.


BTW, I’m not an ISIS “cheerleader,” as Geller put it. I have serious disagreements with ISIS on a number of issues, social policies and targeting being prominent ones. I do agree on the need to fight against corrupt, Western-dictated Muslim governments, contrary to current US foreign policy.

NCSU Offers Social Media Archives Toolkit for Libraries [Defeating Censors]

Sunday, February 28th, 2016

NCSU Offers Social Media Archives Toolkit for Libraries by Matt Enis.

From the post:

North Carolina State University (NCSU) Libraries recently debuted a free, web-based social media archives toolkit designed to help cultural heritage organizations develop social media collection strategies, gain knowledge of ways in which peer institutions are collecting similar content, understand current and potential uses of social media content by researchers, assess the legal and ethical implications of archiving this content, and develop techniques for enriching collections of social media content at minimal cost. Tools for building and enriching collections include NCSU’s Social Media Combine—which pre-assembles the open source Social Feed Manager, developed at George Washington University for Twitter data harvesting, and NCSU’s own open source Lentil program for Instagram—into a single package that can be deployed on Windows, OSX, and Linux computers.

“By harvesting social media data (such as Tweets and Instagram photos), based on tags, accounts, or locations, researchers and cultural heritage professionals are able to develop accurate historical assessments and democratize access to archival contributors, who would otherwise never be represented in the historical record,” NCSU explained in an announcement.

“A lot of activity that used to take place as paper correspondence is now taking place on social media—the establishment of academic and artistic communities, political organizing, activism, awareness raising, personal and professional interactions,” Jason Casden, interim associate head of digital library initiatives, told LJ. Historians and researchers will want to have access to this correspondence, but unlike traditional letters, this content is extremely ephemeral and can’t be collected retroactively like traditional paper-based collections.

“So we collect proactively—as these events are happening or shortly after,” Casden explained.
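The proactive collection Casden describes reduces to matching a live stream against tracked tags and archiving hits the moment they arrive. A minimal sketch, with an assumed post format rather than the actual Social Feed Manager or Lentil interfaces:

```python
# Sketch of proactive, tag-based harvesting: ephemeral social media content
# can't be collected retroactively, so matching posts are archived as they
# arrive. The post format is an illustrative assumption, not the real
# Social Feed Manager or Lentil API.

def harvest(stream, tracked_tags, archive):
    """Append any post carrying a tracked tag to the archive as it arrives."""
    tracked = {t.lower() for t in tracked_tags}
    for post in stream:
        if tracked & {t.lower() for t in post.get("tags", ())}:
            archive.append(post)  # persist now; there is no retroactive option
    return archive
```

In practice the archive would be durable storage and the stream a platform API, but the shape of the problem is the same: collect while the event is happening, or not at all.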

I saw this too late today to install but I’m sure I will be posting about it later this week!

Do you see the potential of such tooling for defeating would-be censors of Twitter and other social media?

More on that later this week as well.

The Social-Network Illusion That Tricks Your Mind – (Terrorism As Majority Illusion)

Friday, December 25th, 2015

The Social-Network Illusion That Tricks Your Mind

From the post:

One of the curious things about social networks is the way that some messages, pictures, or ideas can spread like wildfire while others that seem just as catchy or interesting barely register at all. The content itself cannot be the source of this difference. Instead, there must be some property of the network that changes to allow some ideas to spread but not others.

Today, we get an insight into why this happens thanks to the work of Kristina Lerman and pals at the University of Southern California. These people have discovered an extraordinary illusion associated with social networks which can play tricks on the mind and explain everything from why some ideas become popular quickly to how risky or antisocial behavior can spread so easily.

Network scientists have known about the paradoxical nature of social networks for some time. The most famous example is the friendship paradox: on average your friends will have more friends than you do.

This comes about because the distribution of friends on social networks follows a power law. So while most people will have a small number of friends, a few individuals have huge numbers of friends. And these people skew the average.

Here’s an analogy. If you measure the height of all your male friends, you’ll find that the average is about 170 centimeters. If you are male, on average, your friends will be about the same height as you are. Indeed, the mathematical notion of “average” is a good way to capture the nature of this data.

But imagine that one of your friends was much taller than you—say, one kilometer or 10 kilometers tall. This person would dramatically skew the average, which would make your friends taller than you, on average. In this case, the “average” is a poor way to capture this data set.
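The skew is easy to check with a few lines of code. A minimal sketch on a toy star network (one well-connected hub, five spokes):

```python
# The friendship paradox on a toy network: one "hub" with many friends
# skews the numbers, so most people's friends have more friends than they do.

from statistics import mean

# Undirected friendships: node 0 knows everyone, everyone else knows only node 0.
friends = {0: [1, 2, 3, 4, 5]}
for i in range(1, 6):
    friends[i] = [0]

# Average number of friends per person: (5 + 1 + 1 + 1 + 1 + 1) / 6.
avg_friend_count = mean(len(friends[p]) for p in friends)

# For each person, the average friend count of their friends, then averaged.
avg_friends_of_friends = mean(
    mean(len(friends[f]) for f in friends[p]) for p in friends
)
```

Here the average person has about 1.67 friends, but the average person’s friends have about 4.33 friends: the paradox in miniature.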

If that has you interested, see:

The Majority Illusion in Social Networks by Kristina Lerman, Xiaoran Yan, Xin-Zeng Wu.

Abstract:

Social behaviors are often contagious, spreading through a population as individuals imitate the decisions and choices of others. A variety of global phenomena, from innovation adoption to the emergence of social norms and political movements, arise as a result of people following a simple local rule, such as copy what others are doing. However, individuals often lack global knowledge of the behaviors of others and must estimate them from the observations of their friends’ behaviors. In some cases, the structure of the underlying social network can dramatically skew an individual’s local observations, making a behavior appear far more common locally than it is globally. We trace the origins of this phenomenon, which we call “the majority illusion,” to the friendship paradox in social networks. As a result of this paradox, a behavior that is globally rare may be systematically overrepresented in the local neighborhoods of many people, i.e., among their friends. Thus, the “majority illusion” may facilitate the spread of social contagions in networks and also explain why systematic biases in social perceptions, for example, of risky behavior, arise. Using synthetic and real-world networks, we explore how the “majority illusion” depends on network structure and develop a statistical model to calculate its magnitude in a network.
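The effect is small enough to demonstrate by hand. A toy construction of my own (not the authors’ model): put a globally rare attribute on the two best-connected nodes, and most of the network sees a local majority.

```python
# The "majority illusion" on a toy graph: an attribute held by only 25%
# of nodes looks like a local majority to most of the network, because
# it sits on the high-degree nodes. My own construction, not the paper's model.

edges = [(0, 1)] + [(0, i) for i in range(2, 8)] + [(1, i) for i in range(2, 8)]
active = {0, 1}  # 2 of 8 nodes: 25% globally

neighbors = {n: set() for n in range(8)}
for a, b in edges:
    neighbors[a].add(b)
    neighbors[b].add(a)

def sees_local_majority(n):
    nbrs = neighbors[n]
    return sum(f in active for f in nbrs) > len(nbrs) / 2

fooled = sum(sees_local_majority(n) for n in range(8))
# Nodes 2-7 each have exactly two friends, both active, so 6 of 8 nodes
# observe an "active" majority among their friends.
```

Swap “active” for “afraid of terrorism” and the relevance to the post is obvious: what your friends appear to believe is a poor estimate of what the population believes.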

Research has not reached the stage of enabling the manipulation of public opinion to reflect the true rarity of terrorist activity in the West.

That being the case, being factually correct that Western fear of terrorism is a majority illusion isn’t as profitable as tying products to that illusion.

Are you a debunker or fact-checker? (take the survey, it’s important)

Tuesday, December 1st, 2015

Are you a debunker or fact-checker? Take this survey to help us understand the issue by Craig Silverman.

From the post:

Major news events like the Paris attacks quickly give birth to false rumors, hoaxes and viral misinformation. But there is a small, growing number of people and organizations who seek to quickly identify, verify or debunk and spread the truth about such misinformation that arises during major news events. As much as possible, they want to stop a false rumor before it spreads.

These real-time debunkers, some of which First Draft has profiled recently, face many challenges. But by sharing such challenges and possible solutions, it is possible to find collective answers to the problem of false information.

The survey below intends to gather tips and tactics from those doing this work in an attempt to identify best practices to be shared.

If you are engaged in this type of work — or have experimented with it in the past — we ask that you please take a moment to complete this quick survey and share your advice.

I don’t know if it is possible for debunking or fact-checking to run faster than rumor and falsehood but that should not keep us from trying.

Definitely take the survey!

BEOMAPS:…

Thursday, November 5th, 2015

BEOMAPS: Ad-hoc topic maps for enhanced search in social network data. by Peter Dolog, Martin Leginus, and ChengXiang Zhai.

From the webpage:


The aim of this project is to develop a novel system – a proof of concept that will enable more effective search, exploration, analysis and browsing of social media data. The main novelty of the system is an ad-hoc multi-dimensional topic map. The ad-hoc topic map can be generated and visualized according to multiple predefined dimensions e.g., recency, relevance, popularity or location based dimension. These dimensions will provide a better means for enhanced browsing, understanding and navigating to related relevant topics from underlying social media data. The ad-hoc aspect of the topic map allows user-guided exploration and browsing of the underlying social media topics space. It enables the user to explore and navigate the topic space through user-chosen dimensions and ad-hoc user-defined queries. Similarly, as in standard search engines, we consider the possibility of freely defined ad-hoc queries to generate a topic map as a possible paradigm for social media data exploration, navigation and browsing. An additional benefit of the novel system is an enhanced query expansion to allow users narrow their difficult queries with the terms suggested by an ad-hoc multi-dimensional topic map. Further, ad-hoc topic maps enable the exploration and analysis of relations between individual topics, which might lead to serendipitous discoveries.

This looks very cool and accords with some recent thinking I have been doing on waterfall versus agile authoring of topic maps.

The conference paper on this project is lodged behind a paywall at:

Beomap: Ad Hoc Topic Maps for Enhanced Exploration of Social Media Data, with this abstract:

Social media is ubiquitous. There is a need for intelligent retrieval interfaces that will enable a better understanding, exploration and browsing of social media data. A novel two dimensional ad hoc topic map is proposed (called Beomap). The main novelty of Beomap is that it allows a user to define an ad hoc semantic dimension with a keyword query when visualizing topics in text data. This not only helps to impose more meaningful spatial dimensions for visualization, but also allows users to steer browsing and exploration of the topic map through ad hoc defined queries. We developed a system to implement Beomap for exploring Twitter data, and evaluated the proposed Beomap in two ways, including an offline simulation and a user study. Results of both evaluation strategies show that the new Beomap interface is better than a standard interactive interface.
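The core idea, mapping each post to a point whose x-coordinate is an ad hoc, query-defined dimension and whose y-coordinate is a fixed one such as recency, can be sketched as follows. The token-overlap scoring is my own simplifying assumption, not the paper’s method:

```python
# Sketch of a Beomap-style layout: each post gets a 2D coordinate where
# x = similarity to an ad hoc keyword query and y = recency, both in [0, 1].
# Naive token overlap stands in for whatever scoring the authors use.

def query_similarity(text, query):
    doc, q = set(text.lower().split()), set(query.lower().split())
    return len(doc & q) / len(q) if q else 0.0

def beomap_coords(posts, query, now):
    """posts: list of (text, timestamp); returns list of (text, (x, y))."""
    oldest = min(t for _, t in posts)
    span = max(now - oldest, 1)
    return [
        (text, (query_similarity(text, query), 1 - (now - t) / span))
        for text, t in posts
    ]
```

Change the query and the map re-forms along a new semantic axis, which is the “ad hoc” aspect: the user, not the system, chooses the dimension used to explore the topic space.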

It has attracted 224 downloads as of today, so I would say it is a popular chapter on topic maps.

I have contacted the authors in an attempt to locate a copy that isn’t behind a paywall.

Enjoy!

#OpKKK Engaged: … [Dangers of skimming news]

Monday, November 2nd, 2015

#OpKKK Engaged: Anonymous Begins Exposing Politicians with Ties to the KKK by Matt Agorist.

From the post:

Over the last week, the hacktivist group known as Anonymous has been saying that they will expose some 1,000 United States politicians who have ties to the KKK.

On Monday morning, just past midnight, the hackers made true on their “threat.”

In a video posted to the We are Anonymous Facebook page Monday, the group began to release the names of those they claim have ties to the KKK.

In the video, Anonymous states that they are not going to release the home addresses of the politicians in fear of violent retaliation against the accused racists. But the group did release the politicians’ full name, the municipalities in which they work, and the addresses of their political offices.

While the video doesn’t come close to the promised 1,000 names, Anonymous did release a partial list.

It is rumored that Anonymous plans to release all the names on the 5th of November, also known as Guy Fawkes Night. Anonymous is synonymous with Guy Fawkes as the popular mask is used to shield the faces of its members during their broadcasts and protests.

I must admit I had my own doubts about #OpKKK when I heard the number “1,000.”

That illustrates the danger of half-listening or skimming a news feed.

The number was surely higher than “1,000.”

Turns out that #OpKKK means a direct connection to the KKK, not just support for its policies. That was my mistake. Counting mere supporters of KKK objectives, the number would be far higher.

Take that as a lesson to read carefully when going through news stories.

Failure to release addresses

The failure to release addresses of those named, “so nobody gets it in their mind to take out their own justice against them” is highly questionable.

Who does Anonymous think is going to administer “justice?”

With 1,000 political leaders having direct ties to the KKK, who in the government will call them to account?

The position of Anonymous reminds me of Wikileaks, the Guardian and the New York Times failing to release all of the Snowden documents and other government document dumps in full.

“Oh, oh, we are afraid to endanger lives.” Really? The government from whom those documents were liberated has not hesitated to murder, torture and blight the lives of hundreds of thousands, if not millions.

Rather than giving the people legitimate targets for at least some of those misdeeds, you wring your hands and worry about possible collateral damage?

Sorry, all social change involves collateral damage to innocents. Why do you think it doesn’t? The current systematic and structural racism exacts a heavy toll on innocents every day in the United States. Is that more acceptable because you don’t have blood on your hands from it? (Or don’t think you do.)

The term for that is “moral cowardice.”

Is annoying the status quo, feeling righteous, associating with other “righteous” folk, all there is to your view of social change?

If it is, welcome to the status quo from here to eternity.

Tracie Powell: “We’re supposed to challenge power…

Sunday, October 18th, 2015

Tracie Powell: “We’re supposed to challenge power…it seems like we’ve abdicated that to social media” by Laura Hazard Owen.

From the post:

Tracie Powell tries not to use the word “diversity” anymore.

“When you talk about diversity, people’s eyes glaze over,” Powell, the founder of All Digitocracy, told me. The site covers tech, policy, and the impact of media on communities that Powell describes as “emerging audiences” — people of color and of different sexual orientations and gender identities.

I first heard Powell speak at the LION conference for hyperlocal publishers in Chicago earlier this month, where she stood in front of the almost entirely white audience to discuss how journalists and news organizations can get better at reporting for more people.

I followed up with Powell, who is currently a John S. Knight Journalism Fellow at Stanford, to hear more. “If we [as journalists] don’t do a better job at engaging with these audiences, we’re dead,” Powell said. “Our survival depends on reaching these emerging audiences.”

Here’s a lightly condensed and edited version of our conversation.

Warning: Challenging power is far more risky than supporting fiery denunciations of the most vulnerable and least powerful in society.

From women facing hard choices about pregnancy, to rape victims, to survivors of abuse both physical and emotional, to those who have lived with doubt, discrimination and deprivation as day-to-day realities, victims of power aren’t hard to find.

One of the powers that needs to be challenged is the news media itself. Take for example the near constant emphasis on gun violence and mass shootings. If you were to take the news media at face value, you would be frightened to go outside.

But a 2013 Pew Center report, Gun Homicide Rate Down 49% Since 1993 Peak; Public Unaware, tells a different tale:

[Chart: Pew Research Center, gun homicide rate down 49% since its 1993 peak]

Not as satisfying as taking down a representative or senator but in the long run, influencing the mass media may be a more reliable path to challenging power.

Images for Social Media

Friday, August 21st, 2015

23 Tools and Resources to Create Images for Social Media

From the post:

Through experimentation and iteration, we’ve found that including images when sharing to social media increases engagement across the board — more clicks, reshares, replies, and favorites.

Using images in social media posts is well worth trying with your profiles.

As a small business owner or a one-man marketing team, is this something you can pull off by yourself?

At Buffer, we create all the images for our blogposts and social media sharing without any outside design help. We rely on a handful of amazing tools and resources to get the job done, and I’ll be happy to share with you the ones we use and the extras that we’ve found helpful or interesting.

If you tend to scroll down numbered lists (like I do), you will be left thinking the creators of the post don’t know how to count:

[Screenshot: the post’s title, promising 23 tools]

because:

[Screenshot: the post’s numbered list, which ends at item 15]

the end of the numbered list isn’t 23.

If you look closely, there are several lists of unnumbered resources. So you think: they do know how to count, it’s just that some of the items are unnumbered.

That would be a tidy explanation, but it doesn’t add up either. There are thirteen (13) unnumbered items, which added to fifteen (15) makes twenty-eight (28), not twenty-three.

So I suspect the title should read: 28 Tools and Resources to Create Images for Social Media.

In any event, it’s a fair collection of tools that, with some effort on your part, can increase your social media presence.

Enjoy!

Twitter As Investment Tool

Thursday, May 21st, 2015

Social Media, Financial Algorithms and the Hack Crash by Tero Karppi and Kate Crawford.

Abstract:

@AP: Breaking: Two Explosions in the White House and Barack Obama is injured’. So read a tweet sent from a hacked Associated Press Twitter account @AP, which affected financial markets, wiping out $136.5 billion of the Standard & Poor’s 500 Index’s value. While the speed of the Associated Press hack crash event and the proprietary nature of the algorithms involved make it difficult to make causal claims about the relationship between social media and trading algorithms, we argue that it helps us to critically examine the volatile connections between social media, financial markets, and third parties offering human and algorithmic analysis. By analyzing the commentaries of this event, we highlight two particular currents: one formed by computational processes that mine and analyze Twitter data, and the other being financial algorithms that make automated trades and steer the stock market. We build on sociology of finance together with media theory and focus on the work of Christian Marazzi, Gabriel Tarde and Tony Sampson to analyze the relationship between social media and financial markets. We argue that Twitter and social media are becoming more powerful forces, not just because they connect people or generate new modes of participation, but because they are connecting human communicative spaces to automated computational spaces in ways that are affectively contagious and highly volatile.

The social sciences lag behind the computer sciences in making their publications publicly accessible, and this article too sits behind a firewall, so all I can report on is the abstract.

On the other hand, I’m not sure how much practical advice you could gain from the article as opposed to the volumes of commentary following the incident itself.

The research reminds me of Malcolm Gladwell, author of The Tipping Point and similar works.

While I have greatly enjoyed several of Gladwell’s books, including the Tipping Point, it is one thing to look back and say: “Look, there was a tipping point.” It is quite another to be in the present and successfully say: “Look, there is a tipping point and we can make it tip this way or that.”

In retrospect, we all credit ourselves with near omniscience when our plans succeed and we invent fanciful explanations about what we knew or realized at the time. Others, equally skilled, dedicated and competent, who started at the same time, did not succeed. Of course, the conservative media (and ourselves if we are honest), invent narratives to explain those outcomes as well.

Of course, deliberate manipulation of the market with false information, via Twitter or not, is illegal. The best you can do is look for a pattern of news and/or tweets that produces a downward move in a particular stock, which then recovers, and apply that pattern more broadly. You won’t make millions of dollars off any one transaction, but gains on that scale are exactly the sort of thing that draws regulatory attention.
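To make that “dip then recover” pattern concrete, here is a toy sketch (all figures invented, and emphatically not trading advice) that flags tweet timestamps followed by a sharp drop and a quick recovery in a minute-by-minute price series:

```python
# Toy sketch: flag "dip then recover" patterns in a minute-by-minute price
# series following tweet timestamps. All figures here are invented.

def dip_and_recover(prices, tweet_minutes, dip_pct=0.02, window=5):
    """Return tweet minutes followed by a drop of at least dip_pct that
    recovers to within 0.5% of the pre-tweet price inside `window` minutes."""
    flagged = []
    for t in tweet_minutes:
        if t + window >= len(prices):
            continue  # not enough data after the tweet
        base = prices[t]
        low = min(prices[t + 1:t + window + 1])
        end = prices[t + window]
        dipped = (base - low) / base >= dip_pct
        recovered = abs(end - base) / base <= 0.005
        if dipped and recovered:
            flagged.append(t)
    return flagged

# Hypothetical index values around a hoax tweet at minute 3: a sharp dip,
# then recovery within five minutes.
prices = [100.0, 100.1, 100.0, 100.2, 98.0, 97.5, 99.2, 100.1, 100.3]
flagged = dip_and_recover(prices, tweet_minutes=[3])
```

Even this toy version shows why the strategy is fraught: the thresholds are arbitrary, and any pattern reliable enough to trade on is reliable enough for a regulator to notice.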

Exposure to Diverse Information on Facebook [Skepticism]

Saturday, May 9th, 2015

Exposure to Diverse Information on Facebook by Eytan Bakshy, Solomon Messing, and Lada Adamic.

From the post:

As people increasingly turn to social networks for news and civic information, questions have been raised about whether this practice leads to the creation of “echo chambers,” in which people are exposed only to information from like-minded individuals [2]. Other speculation has focused on whether algorithms used to rank search results and social media posts could create “filter bubbles,” in which only ideologically appealing content is surfaced [3].

Research we have conducted to date, however, runs counter to this picture. A previous 2012 research paper concluded that much of the information we are exposed to and share comes from weak ties: those friends we interact with less often and are more likely to be dissimilar to us than our close friends [4]. Separate research suggests that individuals are more likely to engage with content contrary to their own views when it is presented along with social information [5].

Our latest research, released today in Science, quantifies, for the first time, exactly how much individuals could be and are exposed to ideologically diverse news and information in social media [1].

We found that people have friends who claim an opposing political ideology, and that the content in peoples’ News Feeds reflect those diverse views. While News Feed surfaces content that is slightly more aligned with an individual’s own ideology (based on that person’s actions on Facebook), who they friend and what content they click on are more consequential than the News Feed ranking in terms of how much diverse content they encounter.

The Science paper: Exposure to Ideologically Diverse News and Opinion

The definition of an “echo chamber” is implied in the authors’ conclusion:


By showing that people are exposed to a substantial amount of content from friends with opposing viewpoints, our findings contrast concerns that people might “list and speak only to the like-minded” while online [2].

The racism of the Deep South existed in spite of interaction between whites and blacks. So “echo chamber” should not be defined as association of like with like, at least not entirely. The Deep South was an echo chamber of racism, but not for lack of diversity in social networks.

Besides lacking a useful definition of “echo chamber,” the authors ignore the role of confirmation bias (aka the “backfire effect”) when people are confronted with contrary thoughts or evidence. For some readers, seeing a New York Times editorial disagree with their position can make them feel better about being on the “right side.”

That people are exposed to diverse information on Facebook is interesting, but until there is a meaningful definition of “echo chambers,” the role Facebook plays in the maintenance of “echo chambers” remains unknown.

Bias? What Bias?

Monday, March 16th, 2015

Scientists Warn About Bias In The Facebook And Twitter Data Used In Millions Of Studies by Brid-Aine Parnell.

From the post:

Social media like Facebook and Twitter are far too biased to be used blindly by social science researchers, two computer scientists have warned.

Writing in today’s issue of Science, Carnegie Mellon’s Juergen Pfeffer and McGill’s Derek Ruths have warned that scientists are treating the wealth of data gathered by social networks as a goldmine of what people are thinking – but frequently they aren’t correcting for inherent biases in the dataset.

If folks didn’t already know that scientists were turning to social media for easy access to the pat statistics on thousands of people, they found out about it when Facebook allowed researchers to adjust users’ news feeds to manipulate their emotions.

Both Facebook and Twitter are such rich sources for heart pounding headlines that I’m shocked, shocked that anyone would suggest there is bias in the data! 😉

Not surprisingly, people participate in social media for reasons entirely of their own and quite unrelated to the interests or needs of researchers. Particular types of social media attract different demographics than other types. I’m not sure how you could “correct” for those biases, unless you wanted to collect better data for yourself.
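For what it’s worth, survey researchers do attempt corrections of this kind with post-stratification weighting: reweight the sample so its demographic mix matches the population’s. A toy sketch with invented figures shows both the mechanics and the limitation — weighting fixes the mix you can measure, not the self-selection you can’t:

```python
# Toy post-stratification sketch with invented figures. Reweighting aligns
# the sample's demographic mix with the population's, but it cannot undo
# self-selection into the platform in the first place.

sample_share = {"18-29": 0.50, "30-49": 0.35, "50+": 0.15}       # hypothetical sample mix
population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}   # hypothetical census mix
support_in_sample = {"18-29": 0.70, "30-49": 0.55, "50+": 0.40}  # hypothetical outcome

# Raw estimate: weight each group by its share of the (skewed) sample.
raw = sum(sample_share[g] * support_in_sample[g] for g in sample_share)

# Post-stratified estimate: weight each group by its population share.
weighted = sum(population_share[g] * support_in_sample[g] for g in sample_share)
```

With these invented numbers the raw estimate (about 0.60) overstates the weighted one (about 0.51), because the sample over-represents the most supportive group — which is the sense in which Twitter data can mislead the unwary.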

Not that there are any bias-free data sets, but some biases are so obvious they hardly warrant mentioning. Except that institutions like the Brookings Institution bump and grind on Twitter data until they can prove the significance of terrorist social media. Brookings knows better, but terrorism is a popular topic.

Not to make data carry all the blame, the test most often applied to data is:

Will this data produce a result that merits more funding and/or will please my supervisor?

I first saw this in a tweet by Persontyle.

The ISIS Twitter Census

Saturday, March 7th, 2015

The ISIS Twitter Census: Defining and describing the population of ISIS supporters on Twitter by J.M. Berger and Jonathon Morgan.

This is the Brookings Institution report that I said was forthcoming in: Losing Your Right To Decide, Needlessly.

From the Executive Summary:

The Islamic State, known as ISIS or ISIL, has exploited social media, most notoriously Twitter, to send its propaganda and messaging out to the world and to draw in people vulnerable to radicalization.

By virtue of its large number of supporters and highly organized tactics, ISIS has been able to exert an outsized impact on how the world perceives it, by disseminating images of graphic violence (including the beheading of Western journalists and aid workers and more recently, the immolation of a Jordanian air force pilot), while using social media to attract new recruits and inspire lone actor attacks.

Although much ink has been spilled on the topic of ISIS activity on Twitter, very basic questions remain unanswered, including such fundamental issues as how many Twitter users support ISIS, who they are, and how many of those supporters take part in its highly organized online activities.

Previous efforts to answer these questions have relied on very small segments of the overall ISIS social network. Because of the small, cellular nature of that network, the examination of particular subsets such as foreign fighters in relatively small numbers, may create misleading conclusions.

My suggestion is that you skim the “group think” sections on ISIS and move quickly to Section 3, Methodology. That will put you into a position to evaluate the various and sundry claims about ISIS and what may or may not be supported by their methodology.

I am still looking for a metric for “successful” use of social media. So far, no luck.

SocioViz (Danger?)

Tuesday, February 24th, 2015

SocioViz (Danger?)

From the website:

SocioViz is a social media analytics platform powered by Social Network Analysis metrics

Are you a Social Media Marketer, Digital Journalist or Social Researcher? Have a try and jump on board!

After you login, you give SocioViz access to your Twitter account and it generates a visual graph of your connections.

But there is no “about us” link. The tos (terms of service) and privacy links just reload the login page. The only other links share SocioViz on a variety of social media sites. A quick search did not find any other significant information.

[Screenshot: the SocioViz login page]

Sort of like Luke in the trash compactor, I have a very bad feeling about this. 😉

Anyone know more about this site?

I don’t like opaque social sites seeking access to my accounts. Maybe it’s nothing but poor design, but it is so far beyond the pale that I suspect a less generous explanation.

If you are feeling really risky, search for SocioViz, the site will turn up in the first few hits. I am reluctant to even repeat its address online.

How Do Others See You Online?

Thursday, January 1st, 2015

The question isn’t “how do you see yourself online?” but “how do others see you online?”

Allowing for the vagaries of memory, selective unconscious editing, self-justification, etc., I’m quite confident that how others see us online isn’t the same thing as how we see ourselves.

The saying “know thyself” is often repeated and for practical purposes, is about as effective as a poke with a sharp stick. It hurts but there’s not much other benefit to be had.

Farhad Manjoo writes in ThinkUp Helps the Social Network User See the Online Self about the startup ThinkUp, which offers an analytics service for your participation in social networks.

Unlike your “selective” memory, ThinkUp gives you a report based on all your tweets, posts, etc., and breaks them down in ways you probably would not anticipate. The service creates enough distance between you and the report that you get a glimpse of yourself as others may be seeing you.

Beyond whatever value self-knowledge has for you, ThinkUp, as Farhad learns from experience, can make you a more effective user of social media. You are already spending time on social media, so why not spend it more effectively?
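If you want a rough, do-it-yourself flavor of such a report, a few lines over a tweet archive go a long way. This sketch uses an invented archive format, not ThinkUp’s actual data model:

```python
# Toy ThinkUp-style self-report over a tweet archive: what share of your
# tweets are replies, and who you reply to most. Archive format is invented.
from collections import Counter

tweets = [
    {"text": "Morning!", "reply_to": None},
    {"text": "@alice agreed", "reply_to": "alice"},
    {"text": "@bob no way", "reply_to": "bob"},
    {"text": "@alice still agreed", "reply_to": "alice"},
]

replies = [t for t in tweets if t["reply_to"]]
reply_share = len(replies) / len(tweets)
top_contacts = Counter(t["reply_to"] for t in replies).most_common(2)
```

Seeing that, say, three quarters of your output is replies to the same two people is exactly the kind of distance-creating fact a service like this surfaces.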

Everything You Need To Know About Social Media Search

Sunday, December 14th, 2014

Everything You Need To Know About Social Media Search by Olsy Sorokina.

From the post:

For the past decade, social networks have been the most universally consistent way for us to document our lives. We travel, build relationships, accomplish new goals, discuss current events and welcome new lives—and all of these events can be traced on social media. We have created hashtags like #ThrowbackThursday and apps like Timehop to reminisce on all the past moments forever etched in the social web in form of status updates, photos, and 140-character phrases.

Major networks demonstrate their awareness of the role they play in their users’ lives by creating year-end summaries such as Facebook’s Year in Review, and Twitter’s #YearOnTwitter. However, much of the emphasis on social media has been traditionally placed on real-time interactions, which often made it difficult to browse for past posts without scrolling down for hours on end.

The bias towards real-time messaging has changed in a matter of a few days. Over the past month, three major social networks announced changes to their search functions, which made finding old posts as easy as a Google search. If you missed out on the news or need a refresher, here’s everything you need to know.

I suppose Olsy means in addition to search in general sucking.

Interested tidbit on Facebook:


This isn’t Facebook’s first attempt at building a search engine. The earlier version of Graph Search gave users search results in response to longer-form queries, such as “my friends who like Game of Thrones.” However, the semantic search never made it to the mobile platforms; many supposed that using complex phrases as search queries was too confusing for an average user.

Does anyone have any user research on the ability of users to use complex phrases as search queries?

I ask because if users have difficulty authoring “complex” semantics and difficulty querying with “complex” semantics, it stands to reason they may have difficulty interpreting “complex” semantic results. Yes?

If all three of those are the case, then how do we impart the value-add of “complex” semantics without tripping over one of those limitations?

Olsy also covers Instagram and Twitter. Twitter’s advanced search looks like the standard include/exclude, etc. type of “advanced” search. “Advanced” forty years ago in the early OPACs, maybe, but not really “advanced” now.

Catch up on these new search features. They will provide at least a minimum of grist for your topic map mill.

The 2014 Social Media Glossary: 154 Essential Definitions

Saturday, October 25th, 2014

The 2014 Social Media Glossary: 154 Essential Definitions by Matt Foulger.

From the post:

Welcome to the 2014 edition of the Hootsuite Social Media Glossary. This is a living document that will continue to grow as we add more terms and expand our definitions. If there’s a term you would like to see added, let us know in the comments!

I searched but did not find an earlier version of this glossary on the Hootsuite blog. I have posted a comment asking for pointers to the earlier version(s).

In the meantime, you may want to compare: The Ultimate Glossary: 120 Social Media Marketing Terms Explained by Kipp Bodnar. From 2011 but if you don’t know the terms, even a 2011 posting may be helpful.

We all accept the notion that language evolves, but within a domain that evolution is gradual, shifting along with thinking in the domain, which makes it harder for domain members to see.

Change in a rapidly evolving vocabulary, such as the one used in social media, may be easier to spot.

Web Apps in the Cloud: Even Astronomers Can Write Them!

Wednesday, October 22nd, 2014

Web Apps in the Cloud: Even Astronomers Can Write Them!

From the post:

Philip Cowperthwaite and Peter K. G. Williams work in time-domain astronomy at Harvard. Philip is a graduate student working on the detection of electromagnetic counterparts to gravitational wave events, and Peter studies magnetic activity in low-mass stars, brown dwarfs, and planets.

Astronomers that study GRBs are well-known for racing to follow up bursts immediately after they occur — thanks to services like the Gamma-ray Coordinates Network (GCN), you can receive an email with an event position less than 30 seconds after it hits a satellite like Swift. It’s pretty cool that we professionals can get real-time notification of stars exploding across the universe, but it also seems like a great opportunity to convey some of the excitement of cutting-edge science to the broader public. To that end, we decided to try to expand the reach of GCN alerts by bringing them on to social media. Join us for a surprisingly short and painless tale about the development of YOITSAGRB, a tiny piece of Python code on the Google App Engine that distributes GCN alerts through the social media app Yo.

If you’re not familiar with Yo, there’s not much to know. Yo was conceived as a minimalist social media experience: users can register a unique username and send each other a message consisting of “Yo,” and only “Yo.” You can think of it as being like Twitter, but instead of 140 characters, you have zero. (They’ve since added more features such as including links with your “Yo,” but we’re Yo purists so we’ll just be using the base functionality.) A nice consequence of this design is that the Yo API is incredibly straightforward, which is convenient for a “my first web app” kind of project.
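As a rough sketch of the plumbing (my own illustration, not the authors’ App Engine code — the Yo endpoint and parameters below are assumptions based on Yo’s published API), the whole pipeline fits in a handful of lines:

```python
# Minimal sketch of the pipeline (not the authors' App Engine code): turn a
# GCN-style alert into a "Yo" broadcast. The Yo endpoint and its parameters
# are assumptions based on Yo's published API.
from urllib import request as urlreq
from urllib.parse import urlencode

YO_API_TOKEN = "YOUR-API-TOKEN"  # hypothetical placeholder

def build_yo_request(api_token, link=None):
    """Build the POST request that broadcasts a Yo to all subscribers."""
    payload = {"api_token": api_token}
    if link:
        payload["link"] = link  # e.g. a URL encoding the burst position
    data = urlencode(payload).encode()
    return urlreq.Request("https://api.justyo.co/yoall/", data=data,
                          method="POST")

def handle_gcn_alert(alert, send=urlreq.urlopen):
    """If the alert is a GRB notice, fire off the Yo broadcast."""
    if alert.get("notice_type") != "GRB":
        return None
    req = build_yo_request(YO_API_TOKEN, link=alert.get("url"))
    return send(req)  # injecting `send` keeps the sketch testable offline

# Dry run with a fake sender so nothing touches the network:
sent = []
handle_gcn_alert({"notice_type": "GRB", "url": "http://example.org/grb"},
                 send=lambda req: sent.append(req.full_url))
```

The simplicity is the point: because a Yo carries no payload beyond its sender, the “message” is entirely in the fact of its arrival — which is what the commentary below is about.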

While “Yo” has been expanded to include more content, the origin remains an illustration of the many meanings that can be signaled by the same term. In this case, the detection of a gamma-ray burst in the known universe.

Or “Yo” could mean it is time to start some other activity when received from a particular sender. Or a message could be composed entirely of “Yo’s,” with the identities of the different senders carrying the significance. Or “Yo’s” sent at particular times to compose a message. Or “Yo’s” sent merely to leave the impression that messages were being sent. 😉

So, does a “Yo” have any semantics separate and apart from that read into it by a “Yo” recipient?