Archive for the ‘Free Speech’ Category

Be Undemocratic – Think For Other People – Courtesy of Slate

Wednesday, December 14th, 2016

Feeling down? Left out of the “big boys” internet censor game by the likes of Facebook and Twitter?

Dry your eyes! Slate has ridden to your rescue!

Will Oremus writes in: Only You Can Stop the Spread of Fake News:


Slate has created a new tool for internet users to identify, debunk, and—most importantly—combat the proliferation of bogus stories. Conceived and built by Slate developers, with input and oversight from Slate editors, it’s a Chrome browser extension called This Is Fake, and you can download and install it for free either on its home page or in the Chrome web store. The point isn’t just to flag fake news; you probably already know it when you see it. It’s to remind you that, anytime you see fake news in your feed, you have an opportunity to interrupt its viral transmission, both within your network and beyond.

I’m glad Slate is taking the credit/blame for This Is Fake.

Can you name a more undemocratic position than assuming your fellow voters are incapable of making intelligent choices about the news they consume?

Well, everybody but you and your friends. Right?

Thanks for your offer to help Slate, but no thanks.

How To Brick A School Bus, Data Science Helps Park It (Part 1)

Tuesday, December 13th, 2016

Apologies for being a day late! I was working on how the New York Times acted as a bullhorn for those election-interfering Russian hackers.

We left off in Data Science and Protests During the Age of Trump [How To Brick A School Bus…] with:

  • How best to represent these no free speech and/or no free assembly zones on a map?
  • What data sets do you need to make protesters effective under these restrictions?
  • What questions would you ask of those data sets?
  • How to decide between viral/spontaneous action versus publicly known but lawful conduct, up until the point it becomes unlawful?

I started this series of posts because the Women’s March on Washington wasn’t able to obtain a protest permit from the National Park Service due to a preemptive reservation by the Presidential Inauguration Committee.

Since then, the Women’s March on Washington has secured a protest permit (sic) from the Metropolitan Police Department.

If you are interested in protests organized for the convenience of government:

“People from across the nation will gather at the intersection of Independence Avenue and Third Street SW, near the U.S. Capitol, at 10:00am” on Jan. 21, march organizers said in a statement on Friday.

Each to their own.

Bricking A School Bus

We are all familiar with the typical school bus:

[Image: a typical school bus]

By Die4kids (Own work) [GFDL or CC BY-SA 3.0], via Wikimedia Commons

The saying, “no one size fits all,” applies to the load capacity of school buses. For example, the North Carolina School Bus Safety Web posted this spreadsheet detailing the empty weight (column I) and maximum weight (column R) of a variety of school bus sizes. For best results, get the GVWR (Gross Vehicle Weight Rating, the maximum loaded weight) for your bus and then weigh it empty on reliable scales.

Once you have both numbers, subtract the empty weight from the GVWR and divide the difference by 4,000 pounds, the approximate weight of one cubic yard of concrete. The result is the amount of concrete that you can have poured into your bus as part of the bricking process.
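As a hedged sketch of that arithmetic (the weights below are illustrative, not taken from the spreadsheet):

```python
# Rough payload-to-concrete calculation. 4,000 lb per cubic yard is
# an approximation; wet concrete actually runs closer to 4,050 lb.
CONCRETE_LB_PER_CUBIC_YARD = 4000

def concrete_capacity(gvwr_lb, empty_weight_lb):
    """Cubic yards of concrete a bus can hold without exceeding its GVWR."""
    payload_lb = gvwr_lb - empty_weight_lb
    if payload_lb <= 0:
        raise ValueError("GVWR must exceed the empty weight")
    return payload_lb / CONCRETE_LB_PER_CUBIC_YARD

# Hypothetical bus: 29,000 lb GVWR, 17,000 lb empty.
print(concrete_capacity(29000, 17000))  # 3.0
```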

I use the phrase “your bus” deliberately because pouring concrete into a school bus that doesn’t belong to you would be destruction of private property and thus a crime. Don’t commit crimes. Use your own bus.

Once the concrete has hardened (for stability), drive to a suitable location. It’s a portable barricade, at least for a while.

At a suitable location, puncture the tires on one side and tip the bus over. Remove/burn the tires.

Consulting line 37 of the spreadsheet, that bus gives you a barricade of almost 30,000 pounds, with no wheels.

Congratulations!

I’m still working on the data science aspects of where to park. More on that in How To Brick A School Bus, Data Science Helps Park It (Part 2), which I will post tomorrow.

Facebook Patents Tool To Think For You

Wednesday, December 7th, 2016

My apologies but Facebook thinks you are too stupid to detect “fake news.” Facebook will compensate for your stupidity with a process submitted for a US patent. For free!

Facebook is patenting a tool that could help automate removal of fake news by Casey Newton.

From the post:

As Facebook works on new tools to stop the spread of misinformation on its network, it’s seeking to patent technology that could be used for that purpose. This month the US Trademark and Patent Office published Facebook’s application for Patent 0350675: “systems and methods to identify objectionable content.” The application, which was filed in June 2015, describes a sophisticated system for identifying inappropriate text and images and removing them from the network.

As described in the application, the primary purpose of the tool is to improve the detection of pornography, hate speech, and bullying. But last month, Zuckerberg highlighted the need for “better technical systems to detect what people will flag as false before they do it themselves.” The patent published Thursday, which is still pending approval, offers some ideas for how such a system could work.

A Facebook spokeswoman said the company often seeks patents for technology that it never implements, and said this patent should not be taken as an indication of the company’s future plans. The spokeswoman declined to comment on whether it was now in use.

The system described in the application is largely consistent with Facebook’s own descriptions of how it currently handles objectionable content. But it also adds a layer of machine learning to make reporting bad posts more efficient, and to help the system learn common markers of objectionable content over time — tools that sound similar to the anticipatory flagging that Zuckerberg says is needed to combat fake news.

If you substitute “user” for “administrator” where it appears in the text, Facebook would be enabling users to police the content they view.

Why Facebook finds users making decisions about the content they view objectionable isn’t clear. Suggestions on that question?

The process doesn’t appear to be either accountable or transparent.

If I can’t see the content that is removed by Facebook, how do I make judgments about why it was removed and/or how that compares to content about to be uploaded to Facebook?

Urge Facebook users to demand empowering them to make decisions about the content they view.

Urge Facebook shareholders to pressure management to abandon this quixotic quest to be an internet censor.

Attn: “Fake News” Warriors! Where’s The Harm In Terrorist Propaganda?

Tuesday, December 6th, 2016

Facebook, Microsoft, Twitter, and YouTube team up to stop terrorist propaganda by Justin Carissimo.

Justin’s report is true, at least in the sense that Facebook, Microsoft, Twitter, and YouTube are collaborating to censor “terrorist propaganda.”

Justin’s post also propagates the “fake news” that online content from terrorists “…threaten our national security and public safety….”

Really? You would think after all these years of terrorist propaganda, there would be evidence to support that claim.

True enough, potential terrorists can meet online, but “recruitment” is a far different tale than reading online terrorist content. Consider ISIS and the Lonely Young American, a tale told to support the idea of online recruiting but which is one of the better refutations of that danger.

It’s not hard to whistle up alleged social science studies of online “terrorist propaganda,” but the impacts of that so-called propaganda are speculation at best, when not actually fantasies of the authors.

“Fake News” warriors should challenge the harmful terrorist propaganda narrative as well as those that are laughably false (denying climate change for example).

Resisting EU Censorship

Monday, December 5th, 2016

US tech giants like Facebook could face new EU laws forcing them to tackle hate speech by Arjun Kharpal.

From the post:

U.S. technology giants including Facebook, Twitter, Microsoft, and Google’s YouTube could face new laws forcing them to deal with online hate speech if they don’t tackle the problem themselves, the European Commission warned.

In May, the four U.S. firms unveiled a “code of conduct” drawn up in conjunction with the Commission, the European Union’s executive arm, to take on hate speech on their platforms. It involved a series of commitments including a pledge to review the majority of notifications of suspected illegal hate speech in less than 24 hours and remove or disable access to the content if necessary. Another promise was to provide regular training to staff around hate speech.

But six months on, the Commission is not happy with the progress. EU Justice Commissioner Vera Jourova has commissioned a report, set to be released later this week, which claims that progress in removing offending material has been too slow.

I posted about this ill-fated “code of conduct” under Four Horsemen Of Internet Censorship + One. I pointed out the only robust solution to the “hate speech” problem was to enable users to filter the content they see, as opposed to the EU.

Fast forward 2 internet years (3 months = 1 internet year) and the EU is seeking to increase its censorship powers and not to empower users to regulate the content they consume.

Adding injury to insult, the EU proposes directives that require uncompensated expenditures on the part of its victims, Facebook, Twitter, Microsoft, and Google, to meet criteria that can only be specified user by user.

Why the first refuge of the EU for disagreeable speech is censorship I don’t know. What I do know is any tolerance of EU censorship demands encourages even more outrageous censorship demands.

The usual suspects should push back and push back hard against EU demands for censorship.

Enabling users to filter content means users can shape incoming streams to fit their personal sensitivities and dislikes, without impinging on the rights of others.

Had Facebook, Twitter, Microsoft, and Google started developing shareable content filters when they proposed their foolish “code of conduct” to the EU last May, they would either be available or nearly so by today.

Social media providers should not waste any further time attempting to censor on behalf of the EU or users. Enable users to censor their own content and get out of the censorship business.

There’s no profit in the censorship business. In fact, there is only expense and wasted effort.

PS: The “EU report” in question won’t be released until Wednesday, December 7, 2016 (or so I am told).

Internet Censor(s) Spotted in Mirror

Wednesday, November 30th, 2016

How to solve Facebook’s fake news problem: experts pitch their ideas by Nicky Woolf.

From the post:

The impact of fake news, propaganda and misinformation has been widely scrutinized since the US election. Fake news actually outperformed real news on Facebook during the final weeks of the election campaign, according to an analysis by Buzzfeed, and even outgoing president Barack Obama has expressed his concerns.

But a growing cadre of technologists, academics and media experts are now beginning the quixotic process of trying to think up solutions to the problem, starting with a rambling 100+ page open Google document set up by Upworthy founder Eli Pariser.

Woolf captures the essential wrongness with the now 120 pages of suggestions, quoting Claire Wardle:


“The biggest challenge is who wants to be the arbiter of truth and what truth is,” said Claire Wardle, research director for the Tow Center for Digital Journalism at Columbia University. “The way that people receive information now is increasingly via social networks, so any solution that anybody comes up with, the social networks have to be on board.”

Don’t worry, selecting the arbiter of truth and what truth is won’t be difficult.

The authors of these suggestions see their favorite candidate every day:

[Image: a mirror]

So long as they aren’t seeing my image (substitute your name/image) in the mirror, I’m not interested in any censorship proposal.

Personally, even if offered the post of Internet Censor, I would turn it down.

I can’t speak for you but I am unable to be equally impartial to all. Nor do I trust anyone else to be equally impartial.

The “solution” to “fake news,” if you think that is a meaningful term, is more news, not less.

Enable users to easily compare and contrast news sources, if they so choose. Freedom means being free to make mistakes as well as good choices (from some point of view).

Gab – Censorship Lite?

Tuesday, November 29th, 2016

I submitted my email today at Gab and got this message:

Done! You’re #1320420 in the waiting list.

Only three rules:

Illegal Pornography

We have a zero tolerance policy against illegal pornography. Such material will be instantly removed and the owning account will be dealt with appropriately per the advice of our legal counsel. We reserve the right to ban accounts that share such material. We may also report the user to local law enforcement per the advice our legal counsel.

Threats and Terrorism

We have a zero tolerance policy for violence and terrorism. Users are not allowed to make threats of, or promote, violence of any kind or promote terrorist organizations or agendas. Such users will be instantly removed and the owning account will be dealt with appropriately per the advice of our legal counsel. We may also report the user to local and/or federal law enforcement per the advice of our legal counsel.

What defines a ‘terrorist organization or agenda’? Any group that is labelled as a terrorist organization by the United Nations and/or United States of America classifies as a terrorist organization on Gab.

Private Information

Users are not allowed to post other’s confidential information, including but not limited to, credit card numbers, street numbers, SSNs, without their expressed authorization.

If Gab is listening, I can get the rules down to one:

Court Ordered Removal

When Gab receives a court order from a court of competent jurisdiction ordering the removal of identified, posted content, at (service address), the posted, identified content will be removed.

Simple, fair, gets Gab and its staff out of the censorship business and provides a transparent remedy.

At no cost to Gab!

What’s there not to like?

Gab should review my posts: Monetizing Hate Speech and False News and Preserving Ad Revenue With Filtering (Hate As Renewal Resource), while it is in closed beta.

Twitter and Facebook can keep spending uncompensated time and effort trying to be universal and fair censors. Gab has the opportunity to reach up and grab those $100 bills flying overhead for filtered news services.

What is the New York Times if not an opinionated and poorly run filter on all the possible information it could report?

Apply that same lesson to social media!

PS: Seriously, before going public, I would go to the one court-based rule on content. There’s no profit and no wins in censoring any content on your own. Someone will always want more or less. Courts get paid to make those decisions.

Check with your lawyers but if you don’t look at any content, you can’t be charged with constructive notice of it. Unless and until someone points it out, then you have to follow the DMCA, court orders, etc.

Mute Account vs. Mute Word/Hashtag – Ineffectual Muting @Twitter

Thursday, November 17th, 2016

[Image: hate speech on Twitter]

I mentioned yesterday the distinction between muting an account versus the new muting by word or #hashtag at Twitter.

Take a moment to check my sources at Twitter support to make sure I have the rules correctly stated. I’ll wait.

(I’m not a journalist but readers should be able to satisfy themselves that claims I make are at least plausible.)

No feedback from Twitter on the don’t appear in your timeline vs. do appear in your timeline distinction.

Why would I want to only block notifications of what I think of as hate speech and still have those tweets in my timeline?

Then it occurred to me:

If you can block tweets from appearing in your timeline by word or hashtag, you can block advertising tweets from appearing in your timeline.

You cannot effectively mute hate speech @Twitter because the same mechanism would let you mute advertising.

What about it Twitter?

Must feminists, people of color, minorities of all types be subjected to hate speech in order to preserve your revenue streams?


Not that I object to Twitter having revenue streams from advertising, but it needs to be more sophisticated than the Nigerian spammer model now in use. Charge a higher price for targeted advertising that users are unlikely to block.

For example, I would be highly unlikely to block ads for cs theory/semantic integration tomes. On the other hand, I would follow a mute list that blocked histories of famous cricket matches. (Apologies to any cricket players in the audience.)

In my post: Twitter Almost Enables Personal Muting + Roving Citizen-Censors I offer a solution that requires only minor changes based on data Twitter already collects plus regexes for muting. It puts what you see entirely in the hands of users.

That enables Twitter to get out of the censorship business altogether, something it doesn’t do well anyway, and puts users in charge of what they see. A win-win from my perspective.

Alt-right suspensions lay bare Twitter’s consistency [hypocrisy] problem

Thursday, November 17th, 2016

Alt-right suspensions lay bare Twitter’s consistency problem by Nausicaa Renner.

From the post:

TWITTER SUSPENDED A NUMBER OF ACCOUNTS associated with the alt-right, USA Today reported this morning. This move was bound to be divisive: While Twitter has banned and suspended users in the past (prominently, Milo Yiannopoulos for incitement), USA Today points out the company has never suspended so many at once—at least seven in this case. Richard Spencer, one of the suspended users and prominent alt-righter, also had a verified account on Twitter. He claims, “I, and a number of other people who have just got banned, weren’t even trolling.”

If this is true, it would be a powerful political statement, indeed. As David Frum notes in The Atlantic, “These suspensions seem motivated entirely by viewpoint, not by behavior.” Frum goes on to argue that a kingpin strategy on Twitter’s part will only strengthen the alt-right’s audience. But we may never know Twitter’s reasoning for suspending the accounts. Twitter declined to comment on its moves, citing privacy and security reasons.

(emphasis in original)

Contrary to the claims of the Southern Poverty Law Center (SPLC) to Twitter, these users may not have been suspended for violating Twitter’s terms of service, but for their viewpoints.

Like the CIA, FBI and NSA, Twitter uses secrecy to avoid accountability and transparency for its suspension process.

The secrecy – avoidance of accountability/transparency pattern is one you should commit to memory. It is quite common.

Twitter needs to develop better muting options for users and abandon account suspension (save on court order) altogether.

Twitter Almost Enables Personal Muting + Roving Citizen-Censors

Wednesday, November 16th, 2016

Investigating news reports of Twitter enabling muting of words and hashtags led me to Advanced muting options on Twitter. Also relevant is Muting accounts on Twitter.

Alex Hern‘s post: Twitter users to get ability to mute words and conversations prompted this search because I found:

After nine years, Twitter users will finally be able to mute specific conversations on the site, as well as filter out all tweets with a particular word or phrase from their notifications.

The much requested features are being rolled out today, according to the company. Muting conversations serves two obvious purposes: users who have a tweet go viral will no longer have to deal with thousands of replies from strangers, while users stuck in an interminable conversation between people they don’t know will be able to silently drop out of the discussion.

A broader mute filter serves some clear general uses as well. Users will now be able to mute the names of popular TV shows, for instance, or the teams playing in a match they intend to watch later in the day, from showing up in their notifications, although the mute will not affect a user’s main timeline. “This is a feature we’ve heard many of you ask for, and we’re going to keep listening to make it better and more comprehensive over time,” says Twitter in a blogpost.

to be too vague to be useful.

Starting with Advanced muting options on Twitter, you don’t have to read far to find:

Note: Muting words and hashtags only applies to your notifications. You will still see these Tweets in your timeline and via search. The muted words and hashtags are applied to replies and mentions, including all interactions on those replies and mentions: likes, Retweets, additional replies, and Quote Tweets.

That’s the second paragraph and displayed with a highlighted background.

So, “muting” of words and hashtags only stops notifications.

“Muted” offensive or inappropriate content is still visible “in your timeline and search.”

Perhaps really muting based on words and hashtags will be a paid subscription feature?

The other curious aspect is that “muting” an account carries an entirely different meaning.

The first sentence in Muting accounts on Twitter reads:

Mute is a feature that allows you to remove an account’s Tweets from your timeline without unfollowing or blocking that account.

Quick Summary:

  • Mute account – Tweets don’t appear in your timeline.
  • Mute by word or hashtag – Tweets do appear in your timeline.

How lame is that?

Solution That Avoids Censorship

The solution to Twitter’s “hate speech” problem, which means different things to different people, isn’t hard to imagine:

  1. Mute by account, word, hashtag or regex – Tweets don’t appear in your timeline.
  2. Mute lists can be shared and/or followed by others.

Which means that if I trust N’s judgment on “hate speech,” I can follow their mute list. That saves me the effort of constructing my own mute list and perhaps even encourages the construction of public mute lists.
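A minimal sketch of how such shareable mute lists might work client-side, assuming a simple author/text representation of a tweet (none of the names or structures here reflect Twitter’s actual API):

```python
import re

class MuteList:
    """A shareable mute list: accounts plus word/hashtag/regex patterns."""

    def __init__(self, accounts=None, patterns=None):
        self.accounts = set(accounts or [])
        # Words, hashtags, and regexes all compile to the same thing.
        self.patterns = [re.compile(p, re.IGNORECASE) for p in (patterns or [])]

    def follow(self, other):
        """Follow someone else's list by merging it into your own."""
        self.accounts |= other.accounts
        self.patterns += other.patterns

    def allows(self, author, text):
        """True if a tweet should appear in the timeline at all."""
        if author in self.accounts:
            return False
        return not any(p.search(text) for p in self.patterns)

mine = MuteList(accounts={"@spammer"})
theirs = MuteList(patterns=[r"#fakenews\b", r"\bcricket\b"])
mine.follow(theirs)  # trust their judgment, inherit their filters

print(mine.allows("@friend", "New post on semantic integration"))  # True
print(mine.allows("@friend", "Highlights of the cricket match"))   # False
print(mine.allows("@spammer", "Anything at all"))                  # False
```

Filtering happens entirely on the user’s side, so nothing is removed from Twitter itself; two users with different lists simply see different timelines.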

Twitter has the technical capability to produce such a solution in short order so you have to wonder why they haven’t? I have no delusion of being the first person to have imagined such a solution. Twitter? Comments?

The Alternative Solution – Roving Citizen-Censors

The alternative to a clean and non-censoring solution is covered in the USA Today report Twitter suspends alt-right accounts:

Twitter suspended a number of accounts associated with the alt-right movement, the same day the social media service said it would crack down on hate speech.

Among those suspended was Richard Spencer, who runs an alt-right think tank and had a verified account on Twitter.

The alt-right, a loosely organized group that espouses white nationalism, emerged as a counterpoint to mainstream conservatism and has flourished online. Spencer has said he wants blacks, Asians, Hispanics and Jews removed from the U.S.

[I personally find Richard Spencer’s views abhorrent and report them here only by way of example.]

From the report, Twitter didn’t go gunning for Richard Spencer’s account but the Southern Poverty Law Center (SPLC) did.

The SPLC didn’t follow more than 100 white supremacists to counter their outlandish claims or to offer a counter-narrative. It followed them to gather evidence of alleged violations of Twitter’s terms of service and to request removal of those accounts.

Government censorship of free speech is bad enough, enabling roving bands of self-righteous citizen-censors to do the same is even worse.

The counter-claim that Twitter isn’t the government, it’s not censorship, etc., is intellectually and morally dishonest. Technically true in a U.S. constitutional law sense, but suppression of speech is the goal and that’s censorship, whatever fig leaf the SPLC wants to put on it. They should be honest enough to claim and defend the right to censor the speech of others.

I would not vote in their favor, that is to say, grant them a right to censor the speech of others. They are free to block speech they don’t care to hear, which is what my solution to “hate speech” on Twitter enables.

Support muting, not censorship or roving bands of citizen-censors.

Preventing Another Trump – Censor Facebook To Protect “Dumb” Voters

Saturday, November 12th, 2016

Facebook can no longer be ‘I didn’t do it’ boy of global media by Emily Bell.


Barack Obama called out the fake news problem directly at a rally in Michigan on the eve of the election: “And people, if they just repeat attacks enough, and outright lies over and over again, as long as it’s on Facebook and people can see it, as long as it’s on social media, people start believing it….And it creates this dust cloud of nonsense.”

Yesterday, Zuckerberg disputed this, saying that “the idea that fake news on Facebook… influenced the election…is a pretty crazy idea” and defending the “diversity” of information Facebook users see. Adam Mosseri, the company’s VP of Product Development, said Facebook must work on “improving our ability to detect misinformation.” This line is part of Zuckerberg’s familiar but increasingly unconvincing narrative that Facebook is not a media company, but a tech company. Given the shock of Trump’s victory and the universal finger-pointing at Facebook as a key player in the election, it is clear that Zuckerberg is rapidly losing that argument.

In fact, Facebook, now the most influential and powerful publisher in the world, is becoming the “I didn’t do it” boy of global media. Clinton supporters and Trump detractors are searching for reasons why a candidate who lied so frequently and so flagrantly could have made it to the highest office in the land. News organizations, particularly cable news, are shouldering part of the blame for failing to report these lies for what they were. But a largely hidden sphere of propagandistic pages that target and populate the outer reaches of political Facebook are arguably even more responsible.

You can tell Bell has had several cups of the Obama kool-aid by her uncritical acceptance of Barack Obama’s groundless attacks on “…fake news problem….”

Does Bell examine the incidence of “fake news” in other elections?

No.

Does Bell specify which particular “fake news” stories should have been corrected?

No.

Does Bell explain why voters can’t distinguish “fake news” from truthful news?

No.

Does Bell explain why mainstream media is better than voters at detecting “fake news?”

No.

Does Bell explain why she should be the judge over reporting during the 2016 Presidential election?

No.

Does Bell explain why she and Obama consider voters to be dumber than themselves?

No.

Do I think Bell or anyone else should be censoring Facebook for “false news?”

No.

How about you?

Freedom of Speech/Press – Great For “Us” – Not So Much For You (Wikileaks)

Saturday, November 5th, 2016

The New York Times, sensing a possible defeat of its neo-liberal agenda on November 8, 2016, has loosed the dogs of war on social media in general and Wikileaks in particular.

Consider the sleight of hand in Farhad Manjoo’s How the Internet Is Loosening Our Grip on the Truth, which argues on one hand,


You’re Not Rational

The root of the problem with online news is something that initially sounds great: We have a lot more media to choose from.

In the last 20 years, the internet has overrun your morning paper and evening newscast with a smorgasbord of information sources, from well-funded online magazines to muckraking fact-checkers to the three guys in your country club whose Facebook group claims proof that Hillary Clinton and Donald J. Trump are really the same person.

A wider variety of news sources was supposed to be the bulwark of a rational age — “the marketplace of ideas,” the boosters called it.

But that’s not how any of this works. Psychologists and other social scientists have repeatedly shown that when confronted with diverse information choices, people rarely act like rational, civic-minded automatons. Instead, we are roiled by preconceptions and biases, and we usually do what feels easiest — we gorge on information that confirms our ideas, and we shun what does not.

This dynamic becomes especially problematic in a news landscape of near-infinite choice. Whether navigating Facebook, Google or The New York Times’s smartphone app, you are given ultimate control — if you see something you don’t like, you can easily tap away to something more pleasing. Then we all share what we found with our like-minded social networks, creating closed-off, shoulder-patting circles online.

This gets to the deeper problem: We all tend to filter documentary evidence through our own biases. Researchers have shown that two people with differing points of view can look at the same picture, video or document and come away with strikingly different ideas about what it shows.

You caught the invocation of authority by Manjoo, “researchers have shown,” etc.

But did you notice he never shows his other hand?

If the public is so bat-shit crazy that it takes all social media content as equally trustworthy, what are we to do?

Well, that is the question isn’t it?

Manjoo invokes “dozens of news outlets” who are tirelessly but hopelessly fact checking on our behalf in his conclusion.

The strong implication is that without the help of “media outlets,” you are a bundle of preconceptions and biases doing what feels easiest.

“News outlets,” on the other hand, are free from those limitations.

You bet.

If you thought Manjoo was bad, enjoy seething through Zeynep Tufekci’s claims that Wikileaks is an opponent of privacy, sponsor of censorship and opponent of democracy, all in a little over 1,000 words (1,069 by exact count). Wikileaks Isn’t Whistleblowing.

It’s a breathtaking piece of half-truths.

For example, playing for your sympathy, Tufekci invokes the need of dissidents for privacy. Even to the point of invoking the ghost of the former Soviet Union.

Tufekci overlooks and hopes you do as well, that these emails weren’t from dissidents, but from people who traded in and on the whims and caprices at the pinnacles of American power.

Perhaps realizing that is too transparent a ploy, she recounts other data dumps by Wikileaks to which she objects. As lawyers say, if the facts are against you, pound on the table.

In an echo of Manjoo, did you know you are too dumb to distinguish critical information from trivial?

Tufekci writes:


These hacks also function as a form of censorship. Once, censorship worked by blocking crucial pieces of information. In this era of information overload, censorship works by drowning us in too much undifferentiated information, crippling our ability to focus. These dumps, combined with the news media’s obsession with campaign trivia and gossip, have resulted in whistle-drowning, rather than whistle-blowing: In a sea of so many whistles blowing so loud, we cannot hear a single one.

I don’t think you are that dumb.

Do you?

But who will save us? You can guess Tufekci’s answer, but here it is in full:


Journalism ethics have to transition from the time of information scarcity to the current realities of information glut and privacy invasion. For example, obsessively reporting on internal campaign discussions about strategy from the (long ago) primary, in the last month of a general election against a different opponent, is not responsible journalism. Out-of-context emails from WikiLeaks have fueled viral misinformation on social media. Journalists should focus on the few important revelations, but also help debunk false misinformation that is proliferating on social media.

If you weren’t frightened into agreement by the end of her parade of horrors:


We can’t shrug off these dangers just because these hackers have, so far, largely made relatively powerful people and groups their targets. Their true target is the health of our democracy.

So now Wikileaks is gunning for democracy?

You bet. 😉

The journalists of my youth (think Vietnam, Watergate) were aggressive critics of government and the powerful. The Panama Papers project is evidence that journalism of that caliber still exists.

Instead of whining about releases by Wikileaks and others, journalists* need to step up and provide context they see as lacking.

It would sure beat the hell out of repeating news releases from military commanders, “justice” department mouthpieces, and official but “unofficial” leaks from the American intelligence community.

* Like any generalization, this is grossly unfair to the many journalists who work on behalf of the public every day but lack the megaphone of the government lapdog New York Times. To those journalists, and only them, do I apologize in advance for any offense given. The rest of you, take such offense as is appropriate.

Bias in Data Collection: A UK Example

Monday, October 10th, 2016

Kelly Fiveash‘s story, UK’s chief troll hunter targets doxxing, virtual mobbing, and nasty images starts off:

Trolls who hurl abuse at others online using techniques such as doxxing, baiting, and virtual mobbing could face jail, the UK’s top prosecutor has warned.

New guidelines have been released by the Crown Prosecution Service to help cops in England and Wales determine whether charges—under part 2, section 44 of the 2007 Serious Crime Act—should be brought against people who use social media to encourage others to harass folk online.

It even includes “encouraging” statistics:


According to the most recent publicly available figures—which cite data between May 2013 and December 2014—1,850 people were found guilty in England and Wales of offences under section 127 of the Communications Act 2003. But the numbers reveal a steady climb in charges against trolls. In 2007, there were a total of 498 defendants found guilty under section 127 in England and Wales, compared with 693 in 2008, 873 in 2009, 1,186 in 2010 and 1,286 in 2011.

But the “most recent publicly available figures” don’t ring true, do they?

Imagine that: 1,850 trolls out of a total England and Wales population of 57 million. (England 53.9 million, Wales 3.1 million, mid-2013)

Really?

Let’s look at the referenced government data, 25015 Table.xls.

For the months of May 2013 to December 2014, there are only monthly totals of convictions.

What data is not being collected?

Among other things:

  1. Offenses reported to law enforcement
  2. Offenses investigated by law enforcement (not the same as #1)
  3. Conduct in question
  4. Relationship, if any, between the alleged offender/victim
  5. Race, economic status, location, social connections of alleged offender/victim
  6. Law enforcement and/or prosecutors involved
  7. Disposition of cases without charges being brought
  8. Disposition of cases after charges brought but before trial
  9. Charges dismissed by courts and acquittals
  10. Judges who try and/or dismiss charges
  11. Penalties imposed upon guilty plea and/or conviction
  12. Appeals and results on appeal, judges, etc.
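For comparison, here is a minimal sketch in Python, with entirely hypothetical field names, of the kind of per-case record that would make those twelve questions answerable. Filling in only what the Ministry of Justice currently publishes shows how sparse the official record is:

```python
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class TrollCaseRecord:
    """Hypothetical per-case record covering the twelve gaps listed above."""
    reported_to_police: bool                      # 1. reported to law enforcement
    investigated: bool                            # 2. actually investigated
    conduct: str                                  # 3. conduct in question
    offender_victim_relationship: Optional[str]   # 4. relationship, if any
    offender_demographics: dict = field(default_factory=dict)  # 5. race, status, location...
    officials_involved: list = field(default_factory=list)     # 6. police/prosecutors
    disposition_pre_charge: Optional[str] = None  # 7. disposed of without charges
    disposition_pre_trial: Optional[str] = None   # 8. disposed of before trial
    court_outcome: Optional[str] = None           # 9. dismissal, acquittal, conviction
    judge: Optional[str] = None                   # 10. judge who tried/dismissed
    penalty: Optional[str] = None                 # 11. sentence imposed
    appeal_result: Optional[str] = None           # 12. appeals and results

# A record as sparse as the published monthly conviction totals allow:
published = TrollCaseRecord(reported_to_police=True, investigated=True,
                            conduct="s.127 communication",
                            offender_victim_relationship=None)
missing = [k for k, v in asdict(published).items() if v in (None, {}, [])]
print(len(missing))  # most of the twelve fields are simply absent
```

Nine of the twelve fields are empty, which is exactly the point: the data model, not the cost of vellum, is what makes the questions unanswerable.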

All that information exists for every reported “troll” case and is recorded at some point in the criminal justice process, or could be discerned from those records.

Can you guess who isn’t collecting that information?

The TheyWorkForYou site reports at: Communications Act 2003, Jeremy Wright, The Parliamentary Under-Secretary of State for Justice, saying:


The Ministry of Justice Court Proceedings Database holds information on defendants proceeded against, found guilty and sentenced for criminal offences in England and Wales. This database holds information on offences provided by the statutes under which proceedings are brought but not the specific circumstances of each case. It is not possible to separately identify, in all cases brought under section 127 of the Communications Act 2003, whether a defendant sent or caused to send information to an individual or a small group of individuals or made the information widely available to the public. This detailed information may be held by the courts on individual case files which due to their size and complexity are not reported to Justice Analytical Services. As such this information can be obtained only at disproportionate cost.
… (emphasis added)

I was unaware that courts in England and Wales were still recording their proceedings on vellum. That would make it expensive to gather the data together manually. (NOT!)

How difficult is it for any policy organization, whether seeking greater protection from trolls or opposing classes of prosecution on discrimination and free speech grounds, to gather the same data?

Here is a map of the Crown Prosecution Service districts:

[Map: Crown Prosecution Service areas]

Counting the sub-offices in each area, I get forty-three separate offices.

But that’s only cases that are considered for prosecution and that’s unlikely to be the same number as reported to the police.

Checking for police districts in England, I get thirty-nine.

[Map: police districts in England]

Plus, another four areas for Wales:

[Map: police areas in Wales]

The Wikipedia article List of law enforcement agencies in the United Kingdom, Crown dependencies and British Overseas Territories has links for all these police areas, which in the interest of space, I did not repeat here.

I wasn’t able to quickly find a map of English criminal courts, although you can locate them by postcode at: Find the right court or tribunal. My suspicion is that Crown Prosecution Service areas correspond to criminal courts. But verify that for yourself.

In order to collect the information already in the possession of the government, you would have to search records in 43 police districts, 43 Crown Prosecution Service offices, plus as many as 43 criminal courts in which defendants may be prosecuted. All over England and Wales. With unhelpful clerks all along the way.

All while the government offers the classic excuse:

As such this information can be obtained only at disproportionate cost.

Disproportionate because:

Abuse of discretion, lax enforcement, favoritism, and discrimination by police officers, Crown prosecutors, and judges could be demonstrated as statistical facts?

Governments are old hands at not collecting evidence they prefer to not see thrown back in their faces.

For example: FBI director calls lack of data on police shootings ‘ridiculous,’ ‘embarrassing’.

Non-collection of data is a source of bias.

What bias is behind the failure to collect troll data in the UK?

What are we allowed to say? [Criticism]

Saturday, September 24th, 2016

What are we allowed to say? by David Bromwich.

From the post:

Free speech is an aberration – it is best to begin by admitting that. In most societies throughout history and in all societies some of the time, censorship has been the means by which a ruling group or a visible majority cleanses the channels of communication to ensure that certain conventional practices will go on operating undisturbed. It is not only traditional cultures that see the point of taboos on speech and expressive action. Even in societies where faith in progress is part of a common creed, censorship is often taken to be a necessary means to effect improvements that will convey a better life to all. Violent threats like the fatwa on Salman Rushdie and violent acts like the assassinations at Charlie Hebdo remind us that a militant religion is a dangerous carrier of the demand for the purification of words and images. Meanwhile, since the fall of Soviet communism, liberal bureaucrats in the North Atlantic democracies have kept busy constructing speech codes and guidelines on civility to soften the impact of unpleasant ideas. Is there a connection between the two?

Probably an inbred trait of human nature renders the attraction of censorship perennial. Most people (the highly literate are among the worst) believe that what is good for them will be good for others. Besides, a regime of censorship must claim to derive its authority from settled knowledge and not opinion. Once enforcement and exclusion have done their work, this assumption becomes almost irresistible; and it is relied on to produce a fortunate and economical result: self-censorship. We stay out of trouble by gagging ourselves. Among the few motives that may strengthen the power of resistance is the consciousness of having been deeply wrong oneself, either regarding some abstract question or in personal or public life. Another motive of resistance occasionally pitches in: a radical, quasi-physical horror of seeing people coerce other people without having to supply reasons. For better or worse, this second motive is likely to be mixed with misanthropy.

As far back as one can trace the vicissitudes of public speech and its suppression, the case for censorship seems to have begun in the need for strictures against blasphemy. The introductory chapter of Blasphemy, by the great American legal scholar Leonard Levy, covers ‘the Jewish trial of Jesus’; it is followed in close succession, in Levy’s account, by the Christian invention of the concept of heresy and the persecution of the Socinian and Arminian heretics and later of the Ranters, Antinomians and early Quakers. After an uncertain interval of state prosecutions and compromises in the 19th century, Levy’s history closes at the threshold of a second Enlightenment in the mid-20th: the endorsement by the North Atlantic democracies of a regime of almost unrestricted freedom of speech and expression.
… (emphasis in original)

Bromwich’s essay runs some twenty pages in print so refresh your coffee before starting!

It is a “must” read but not without problems.

The focus on Charlie Hebdo and The Satanic Verses, gives readers a “safe context” in which to consider the issue of “free speech.”

The widespread censorship of “jihadist” speech, which for the most part passes unnoticed and without even a cursory nod towards “free speech,” is a more current and confrontational example.

Does Bromwich use safe examples to “stay out of trouble by gagging [himself]?”

Hundreds of thousands have been silenced by Western tech companies. Yet in an essay on freedom of speech, they don’t merit a single mention.

The failure to mention the largest current example of anti-freedom of speech in a freedom of speech essay, should disturb every attentive reader.

Disturb them to ask: What of freedom of speech today? Not as a dry and desiccated abstraction but freedom of speech in the streets.

Where is the freedom of speech to incite others to action? Freedom of speech to oppose corrupt governments? Freedom of speech to advocate harsh measures against criminal oppressors?

The invocation of Milton and Mill lays the groundwork for a confrontation with government-urged, if not government-required, censorship, but the opportunity is wasted on the vagaries of academic politics.

Freedom of speech is important on college campuses but people are dying where freedom of speech is being denied. To showcase the former over the latter is a form of censorship itself.

If the question is censorship, as Milton and Mill would agree, the answer is no. (full stop)

PS: For those who raise the bugaboo of child pornography, there are laws against the sexual abuse of children, laws that raise no freedom of speech issues.

Possession of child pornography is attacked because it gives the appearance of meaningful action, while allowing the cash flow from its production and distribution to continue unimpeded.

Are You A Closet Book Burner? Google Crowdsources Censorship!

Friday, September 23rd, 2016

YouTube is cleaning up and it wants your help! by Lisa Vaas.

From the post:

Google is well aware that the hair-raising comments of YouTube users have turned the service into a fright fest.

It’s tried to drain the swamp. In February 2015, for example, it created a kid-safe app that would keep things like, oh, say, racist/anti-Semitic/homophobic comments or zombies from scaring the bejeezus out of young YouTubers.

Now, Google’s trying something new: it’s soliciting “YouTube Heroes” to don their mental hazmat suits and dive in to do some cleanup.

You work hard to make YouTube better for everyone… and like all heroes, you deserve a place to call home.

Google has renamed the firemen of Fahrenheit 451 to YouTube Heroes.

Positive names cannot change the fact that censors by any name, are in fact just that, censors.

Google has taken censorship to a new level in soliciting the participation of the close-minded, the intolerant, the bigoted, the fearful, etc., from across the reach of the Internet, to censor YouTube.

Google does own YouTube and if it wants to turn it into a pasty gray pot of safe gruel, it certainly can do so.

As censors flood into YouTube, free thinkers, explorers, users who prefer new ideas over pablum, need to flood out of YouTube.

Ad revenue needs to fall as this ill-advised campaign, “come be a YouTube censor” succeeds.

Only falling ad revenue will stop this foray into the folly of censorship by Google.

First steps:

  1. Don’t post videos to YouTube.
  2. Avoid watching videos on YouTube as much as possible.
  3. Urge others not to post to or use YouTube.
  4. Post videos to other venues.
  5. Speak out against YouTube censorship.
  6. Urge YouTube authors to post/repost elsewhere.

“Safe place” means a place safe from content control at the whim and caprice of governments, corporations and even other individuals.

What’s so hard to “get” about that?

Let’s Offend Mark Zuckerberg! Napalm-Girl – Please Repost Image

Friday, September 9th, 2016

Facebook deletes Norwegian PM’s post as ‘napalm girl’ row escalates by Alice Ross and Julia Carrie Wong.

[Photo: Nick Ut’s Pulitzer Prize-winning “napalm girl” photograph]

From the post:

Facebook has deleted a post by the Norwegian prime minister in an escalating row over the website’s decision to remove content featuring the Pulitzer-prize winning “napalm girl” photograph from the Vietnam war.

Erna Solberg, the Conservative prime minister, called on Facebook to “review its editing policy” after it deleted her post voicing support for a Norwegian newspaper that had fallen foul of the social media giant’s guidelines.

Solberg was one of a string of Norwegian politicians who shared the iconic image after Facebook deleted a post from Tom Egeland, a writer who had included the Nick Ut picture as one of seven photographs he said had “changed the history of warfare”.

I remember when I first saw that image during the Vietnam War. As if the suffering of the young girl wasn’t enough, the photo captures the seeming indifference of the soldiers in the background.

This photo certainly changed the approach of the U.S. military to press coverage of wars. From TV cameras recording live footage of battles and the wounded in Vietnam, present-day coverage is highly sanitized and “safe” for any viewing audience.

There are the obligatory shots of the aftermath of “terrorist” bombings but where is the live reporting on allied bombing of hospitals, weddings, schools and the like? Where are the shrieking wounded and death rattles?

Too much of that and American voters might get the idea that war has real consequences, for real people. Well, war always does, but it is the profit consequences that concern military leadership and their future employers. Can’t have military spending without a war and a supposed enemy.

Zuckerberg should not shield us and especially not children from the nasty side of war.

Sanitized and “safe” reporting of wars is a recipe for the continuation of the same.

Read more about the photo and the photographer who took it: Nick Ut’s Napalm Girl Helped End the Vietnam War. Today in L.A., He’s Still Shooting

You can’t really tell from the photo, but the girl’s (Kim Phuc’s) skin was melting off in strips. That’s the reality of war that needs to be brought home to everyone who supports war to achieve abstract policy goals and objectives.

From Preaching to Meddling – Censors, I-94s, Usage of Social Media

Thursday, August 25th, 2016

Tony Romm has a highly amusing account of how internet censors, Google, Facebook and Twitter, despite their own censorship efforts, object to social media screening on I-94 (think international arrival and departure) forms.

Tony writes Tech slams Homeland Security on social media screening:

Internet giants including Google, Facebook and Twitter slammed the Obama administration on Monday for a proposal that would seek to weed out security threats by asking foreign visitors about their social media accounts.

The Department of Homeland Security for months has weighed whether to prompt foreign travelers arriving on visa waivers to disclose the social media websites they use — and their usernames for those accounts — as it seeks new ways to spot potential terrorist sympathizers. The government unveiled its draft plan this summer amid widespread criticism that authorities aren’t doing enough to monitor suspicious individuals for signs of radicalization, including the married couple who killed 14 people in December’s mass shooting in San Bernardino, Calif.

But leading tech companies said Monday that the proposal could “have a chilling effect on use of social media networks, online sharing and, ultimately, free speech online.”
….

Google, Facebook and Twitter casually censor hundreds of thousands of users every year so their sudden concern for free speech is puzzling.

Until you catch the line:

have a chilling effect on use of social media networks, online sharing

Translation: chilling effect on market share of social media, diminished advertising revenues and online sales.

The reaction of Google, Facebook and Twitter reminds me of the elderly woman in church who would shout “Amen!” when the preacher talked about the dangers of alcohol, “Amen!” when he spoke against smoking, “Amen!” when he spoke of the shame of gambling, but was curiously silent when the preacher said that dipping snuff was also sinful.

After the service, as the parishioners left the church, the preacher stopped the woman to ask about her change in demeanor. The woman said, “Well, but you went from preaching to meddling.”

😉

Speaking against terrorism, silencing users by the hundred thousand, is no threat to the revenue streams of Google, Facebook and Twitter. Easy enough and they benefit from whatever credibility that buys with governments.

Disclosure of social media use, which could have some adverse impact on revenue, the government has gone from preaching to meddling.

The revenue stream impacts imagined by Google, Facebook and Twitter are just that, imagined. The actual impact is unknown. But fear of an adverse impact is so great that all three have swung into frantic action.

That’s a good measure of their commitment to free speech versus their revenue streams.

Having said all that, the Agency Information Collection Activities: Arrival and Departure Record (Forms I-94 and I-94W) and Electronic System for Travel Authorization is about as lame and ineffectual an anti-terrorist proposal as I have seen since 9/11.

You can see the comments on the I-94 farce, which I started to collect but then didn’t.

I shouldn’t say this for free but here’s one insight into “radicalization:”

Use of social media to exchange “radical” messages is a symptom of “radicalization,” not its cause.

You can convince yourself of that fact.

Despite expensive efforts to stamp out child pornography (radical messages), sexual abuse of children (radicalization) continues. The consumption of child pornography doesn’t cause sexual abuse of children, rather it is consumed by sexual abusers of children. The market is driving the production of the pornography. No market, no pornography.

So why the focus on child pornography?

It’s visible (like social media), it’s easy to find (like tweets), it’s abhorrent (ditto for beheadings), and cheap (unlike uncovering real sexual abuse of children and/or actual terrorist activity).

The same factors explain the mis-guided and wholly ineffectual focus on terrorism and social media.

First Amendment Secondary? [Full Text – Response to Stay]

Monday, August 22nd, 2016

Backpage.com defies sex trafficking subpoena despite Senate contempt vote by David Kravets.

From the post:

The First Amendment has been good, really good to the online classified ads portal Backpage.com. In 2015, the US Constitution helped Backpage dodge a lawsuit from victims of sex trafficking. What’s more, a federal judge invoked the First Amendment and crucified an Illinois sheriff—who labeled Backpage a “sex trafficking industry profiteer”—because the sheriff coerced Visa and Mastercard to refrain from processing payments to the site. The judge said Cook County Sheriff Thomas Dart’s anti-Backpage lobbying amounted to “an informal extralegal prior restraint of speech” because Dart’s actions were threatening the site’s financial survival.

But the legal troubles didn’t end there for Backpage, which The New York Times had labeled “the leading site for trafficking of women and girls in the United States.”

Kravets does a great job of linking to the primary documents in this case and while quoting from the government’s response to the request for a stay, does not include a link for the government’s response.

For your research and reading convenience, RESPONSE IN OPPOSITION [1631269] filed by Senate Permanent Subcommittee on Investigations to motion to stay case. A total of 128 pages.

In that consolidated document, Schedule A of the subpoena runs from page 40 to page 50, although the government contends in its opposition that it tried to be more reasonable than it appears.

Even more disturbing than the Senate’s fishing expedition into the records of Backpage is the justification for disregarding the First Amendment:

The Subcommittee is investigating the serious problem of human trafficking on the Internet—much of which takes place on Backpage’s website—and has subpoenaed Mr. Ferrer for documents relating to Backpage’s screening for illegal trafficking. It is important for the Subcommittee’s investigation of Internet sex trafficking to understand what methods the leading online marketplace for sex advertisements employs to screen out illegal sex trafficking on its website. Mr. Ferrer has no First Amendment right to ignore a subpoena for documents about Backpage’s business practices related to that topic. He has refused to identify his First Amendment interests except in sweeping generalities and failed even to attempt to show that any such interests outweigh important governmental interests served by the Subcommittee’s investigation. Indeed, Mr. Ferrer cannot make any balancing argument because he refused to search for responsive documents or produce a privilege log describing them, claiming that the First Amendment gave him blanket immunity from having to carry out these basic duties of all subpoena respondents.

As serious a problem as human trafficking surely is, there are no exceptions to the First Amendment because a crime is a serious one. Just as there are no exceptions to the Fourth or Fifth Amendments because a crime is a serious one.

If you are interested in the “evidence” cited against Backpage, S. Hrg. 114–179 Human Trafficking Investigation (November 2015) runs some 260 pages and details the commission of illegal human trafficking by others, not Backpage.

Illegal sex trafficking undoubtedly occurs in the personal ads of the New York Times (NYT), but the Senate hasn’t favored the NYT with such a subpoena.

Kravets reports Backpage is due to respond to the government by 4:00 p.m. Wednesday of this week. I will post a copy of that response as soon as it is available.

235,000 Voices Cried Out And Were Suddenly Silenced

Saturday, August 20th, 2016

Yahoo! News carried this report of censorship: Twitter axes 235,000 more accounts in terror crackdown.

From the post:

Twitter on Thursday announced that it has cut off 235,000 more accounts for violating its policies regarding promotion of terrorism at the global one-to-many messaging service.

The latest account suspensions raised to 360,000 the total number of accounts sidelined since the middle of 2015 and was helping “drive meaningful results” in curbing the activity, according to the San Francisco-based company.

Twitter has been under pressure to balance protecting free speech at the service with not providing a stage for terrorist groups to spread violent messages and enlist people to their causes.

The latest account suspensions came since February, when Twitter announced that it had neutralized 125,000 accounts for violating rules against violent threats and promotion of terrorism.

“Since that announcement, the world has witnessed a further wave of deadly, abhorrent terror attacks across the globe,” Twitter said in a blog post.

When you read Twitter’s blog post, An update on our efforts to combat violent extremism, out of 235,000 accounts, how many are directly tied to a terrorist attack?

Would you guess:


235,000?

150,000?

100,000?

50,000?

25,000?

10,000?

5,000?

Twitter reports 0 accounts as being tied to terrorist attacks.

Odd considering that Twitter says:

Since that announcement, the world has witnessed a further wave of deadly, abhorrent terror attacks across the globe

“…wave of deadly, abhorrent terror attacks…” What wave?

From March of 2016 until July 31, the List of terrorist incidents, 2016 lists some 864 attacks.

A far cry from the almost 1/4 million silenced accounts.

Of course, “terrorism” depends on your definition, the Global Terrorism Database lists over 6,000 terrorist attacks for the time period March 2015 until July 31, 2015.

Even using 2015’s 6,000 attack figure, that’s a long way from 235,000 Twitter accounts.

If you think “…wave of deadly, abhorrent terror attacks…” is just marketing talk on the part of Twitter, the evidence is on your side.

Anyone who thinks they may be in danger of being silenced by Twitter should obtain regular archives of their tweets. Don’t allow your history to be stolen by Twitter.
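One low-tech way to do that is to request your archive from Twitter’s settings page and keep a parsed copy somewhere Twitter can’t delete it. A minimal sketch, assuming the export wraps each batch of tweets in a JavaScript assignment ahead of a JSON array (as Twitter’s archive files have done; the exact variable prefix varies by export version):

```python
import json

def parse_archive_chunk(js_text: str) -> list:
    """Strip the leading 'Grailbird.data... = ' (or similar) JavaScript
    assignment and parse the remaining JSON array of tweet objects."""
    _, _, payload = js_text.partition("=")
    return json.loads(payload)

# Hypothetical one-tweet chunk in the archive's shape:
chunk = 'Grailbird.data.tweets_2016_08 = [{"id_str": "1", "text": "hello"}]'
tweets = parse_archive_chunk(chunk)
print(tweets[0]["text"])
```

Once the tweets are plain JSON on your own disk, reposting or republishing them elsewhere is straightforward; suspension only erases the copy Twitter holds.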

I do have a question for anyone working on this issue:

Are there efforts to create non-Twitter servers that make fair use of the Twitter API, so that archives of tweets, and even new accounts, could continue silenced accounts? Say, a Dark Web Not-Twitter server?

I ask because Twitter continues to demonstrate that “free speech” is subject to its whim and caprice.

A robust and compatible alternative to Twitter, especially if archives can be loaded, would enable free speech for many diverse groups.

Another Data Point On Twitter Censorship Practices

Sunday, August 14th, 2016

[Image: Twitter censorship example (Olympics-related account suspension)]

Alert! Non-Lobbyists Have Personal Contact For Members Of Congress!

Sunday, August 14th, 2016

Hacker posts contact information for almost 200 congressional Democrats

Summary: Guccifer 2.0 posted a spreadsheet with the personal contact details of almost 200 Democratic members of Congress.

Sorry, I don’t see why non-lobbyists having the personal contact information of members of Congress is a bad thing.

The very thought of non-lobbyists contacting members of Congress provoked frantic activity at WordPress, which promptly disabled the Guccifer 2.0 page because of:

receipt of a valid complaint regarding the publication of private information (WordPress blocks latest Guccifer 2.0 docs)

The WordPress model of democracy looks something like this:

[Image: the WordPress model of democracy (donation amounts vs. access)]

I’m not vouching for the donation amounts and/or the amount of access you get for those amounts. It varies from congressional district to district.

Check with your local representative for current prices and access.

If and when you meet with your representative, be sure to ask for their new cellphone number.

Twitter Too Busy With Censorship To Care About Abuse

Saturday, August 13th, 2016

Complaints about Twitter ignoring cases of abuse are quite common, “A Honeypot For Assholes” [How To Monetize Assholes/Abuse]. I may have stumbled on why Twitter “ignores” abuse cases.

Twitter staff aren’t “ignoring” abuse cases, they are too damned busy being ad hoc government censors to handle abuse cases.

Consider: How Israel is trying to enforce gag orders beyond its borders by Michael Schaeffer Omer-Man.

From the post:

Israeli authorities are taking steps to block their own citizens from reading materials published online in other countries, including the United States.

The Israeli State Attorney’s Office Cyber Division has sent numerous take-down requests to Twitter and other media platforms in recent months, demanding that they remove certain content, or block Israeli users from viewing it.

In an email viewed by +972, dated August 2, 2016, Twitter’s legal department notified American blogger Richard Silverstein that the Israeli State Attorney claimed a tweet of his violates Israeli law. The tweet in question had been published 76 days earlier, on May 18. Silverstein has in the past broken stories that Israeli journalists have been unable to report due to gag orders, including the Anat Kamm case.

Without demanding that he take any specific action, Twitter asked Silverstein to let its lawyers know, “if you decide to voluntarily remove the content.” The American blogger, who says he has not stepped foot in any Israeli jurisdiction for two decades, refused, noting that he is not bound by Israeli law. Twitter is based in California.

Two days later, Twitter sent Silverstein a follow-up email, informing him that it was now blocking Israeli users from viewing the tweet in question. Or in Twitter-talk, “In accordance with applicable law and our policies, Twitter is now withholding the following Tweet(s) in Israel.”

It’s no wonder Twitter lacks the time and resources to think of robust solutions that enable free speech and, at the same time, protect users who aren’t interested in listening to the free speech of certain others.

Both rights are equally important but Twitter has its hands full responding in an ad hoc fashion to unreasonable demands.

Adopt a policy of delivering any content, anywhere, from any author and empower users to choose what they see.

The seething ball of lawyers, which adds no value for Twitter or its users, will suddenly melt away.

No issues to debate.

Governments block content on their own or they don’t.

Users block content on their own or they don’t.

BTW, 972mag.com needs your financial support to keep up this type of reporting. If you are having a good month, keep them in mind.

Twitter Censor Strikes Again (and again, and again)

Saturday, August 13th, 2016

Twitter censors accounts for reasons known only to itself, but in this case, truth-telling is one obvious trigger for Twitter censorship:

[Image: suspended Twitter account notice]

Twitter censors accounts every day that don’t make the news and those are just as serious violations of free speech as this instance.

Twitter could trivially empower users to have free speech and the equally important right to not listen but also for reasons known only to Twitter, has chosen not to do so.

Free speech and the right to not listen are equally important.

What’s so difficult to understand about that?

Twitter Censorship On Behalf Of Turkish Government

Wednesday, August 10th, 2016

[Image: Twitter notice of content withheld in Turkey]

The link Post Coup Censorship takes you to a list of twenty-three (23) journalist/publicist accounts verified as withheld by Twitter in Turkey.

I have tweeted to Efe Kerem Sözeri about this issue and was advised the censorship is based on IP addresses. Sözeri points out that use of a VPN is one easy means of avoiding the censorship.

Hopefully that was more productive than a rant about Twitter’s toadyism and its self-anointed role of preventing abuse (as opposed to empowering Twitter users to avoid abuse on their own).

Twitter Nanny Says No! No!

Thursday, July 21st, 2016

[image: Twitter ban notice]

For the other side of this story, enjoy Milo Yiannopoulos’s Twitter ban, explained by Aja Romano, where Aja is supportive of Twitter and its self-anointed role as arbiter of social values.

From my point of view, the facts are fairly simple:

Milo Yiannopoulos (formerly @Nero) has been banned from Twitter on the basis of his speech and the speech of others who agree with him.

What more needs to be said?

I have not followed, read, reposted or retweeted any tweets by Milo Yiannopoulos (formerly @Nero). And would not even if someone sent them to me.

I choose to not read that sort of material and so can anyone else. Including the people who complain in Aja’s post.

The Twitter Nanny becomes censor in insisting that no one be able to read tweets from Milo Yiannopoulos (formerly @Nero).

I’ve heard the argument that the First Amendment doesn’t apply to Twitter, which is true, but irrelevant. Only one country in the world has the First Amendment as stated in the US Constitution but that doesn’t stop critics from decrying censorship by other governments.

Or is it only censorship if you agree with the speech being suppressed?

Censorship of speech that I find disturbing, sexist, racist, misogynistic, dehumanizing, transphobic, homophobic, supporting terrorism, is still censorship.

And it is still wrong.

We only have ourselves to blame for empowering Twitter to act as a social media censor. Central point of failure and all that jazz.

Suggestions on a free speech alternative to Twitter?

Buffoons A Threat To Cartoonists?

Wednesday, June 29th, 2016

How social media has changed the landscape for editorial cartooning by Ann Telnaes.

At the center of the social media outrage that Ann describes was her cartoon:

[image: Ann Telnaes’ Ted Cruz cartoon]

I did not see the original Washington Post political attack ad featuring Cruz and his daughters, but the use of family as props is traditional American politics. I took Ann’s cartoon as criticism of that practice in general and Cruz’s use of it in particular.

Even more of a tradition in American politics, is the intellectually and morally dishonest failure to engage the issue at hand. Rather than responding to the criticism of his exploitation of his own children, Cruz attacked Ann as though she was the one at fault.

That should not have been unexpected, given Cruz’s party is responsible for the “Checkers” speech and other notable acts of national deception. (If you don’t know the “Checkers” speech, check it out. TV was just becoming a player in national politics, much like social media now.)

As you can tell, I think the response by Cruz and others was a deliberate distortion of the original cartoon, and the abuse heaped upon Ann was certainly unjustified. But what I am missing is the threat posed by “social media lynch mobs.”

What if every buffoon on Fox, social media, etc., all took to social media to criticize Ann’s cartoon?

Certainly a waste of electricity and data packets, but so what? They are theirs to waste.

Ann’s fellow cartoonists recognized the absurdity of the criticism, as would any rational person familiar with American politics.

Ann suggests:


How should the journalism community protect cartoonists so they can do their jobs? We need to educate and be ready the next time a cartoonist aims his or her satire against a thin-skinned politician or interest group looking for an opportunity to manipulate fair criticism. Be aware when a false narrative is being presented to deflect the actual intent of a cartoon; talk to your editors and come up with a plan to counter the misinformation.

Sorry, what other than “false narratives” were you expecting? Shouldn’t we make that assumption at the outset and prepare to press forward with the “true narrative?”

Ann almost captures my approach when she says:

It has been said cartoonists are on the front lines of the war to defend free speech.

The war to defend free speech is quite real. If you doubt that, browse the pages of Index on Censorship.

Where I differ from Ann is that I don’t see the braying of every buffoon social media has to offer as a threat to free speech.

Better filters are the answer to buffoons on social media.
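The “better filters” claim is the technical point here: a reader-side mute filter suppresses buffoonery without censoring anyone, because the posts still exist upstream. A minimal sketch in Python (the function name and term list are illustrative assumptions, not any real social media feature):

```python
import re

def mute_filter(posts, muted_terms):
    """Hide posts containing any muted term, matched as whole words,
    case-insensitively. Nothing is removed upstream -- only this
    reader's own view changes."""
    patterns = [re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
                for term in muted_terms]
    return [p for p in posts
            if not any(pat.search(p) for pat in patterns)]

posts = [
    "Ann's cartoon was spot on",
    "OUTRAGE!!! boycott the cartoonist",
    "thoughtful take on political satire",
]
# Muting "boycott" and "outrage" hides only the second post.
print(mute_filter(posts, ["boycott", "outrage"]))
```

`re.escape` keeps user-supplied terms from being misread as regex syntax, and the `\b` word boundaries avoid muting innocent substrings.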

Mapping Media Freedom

Tuesday, June 14th, 2016

Mapping Media Freedom

From the webpage:

Journalists and media workers are confronting relentless pressure simply for doing their job. Mapping Media Freedom identifies threats, violations and limitations faced by members of the press throughout European Union member states, candidates for entry and neighbouring countries.

My American readers should not be misled by the current map image:

[image: Mapping Media Freedom map]

If it is true the United States is free from press suppression, something I seriously doubt, it won’t be long before it starts to rack up incidents on this site.

Just today, Newt Gingrich, a truly unpleasant waste of human skin, proposed re-igniting the witch hunt committees of the 1950’s. Newt Gingrich Suggests Reforming House Un-American Committee In Wake Of Orlando Shooting.

The so-called “presumptive” candidates for President, Clinton and Trump, have called for tech companies to aid in the suppression of jihadist content and even the closing off of parts of the internet.

At least once a week, visit Mapping Media Freedom and do what you can to support the media everywhere.

More Censorship Is Coming – To The USA

Tuesday, June 14th, 2016

Hillary Clinton says tech companies need to ‘step up’ fight against ISIS propaganda by Amar Toor.

From the post:

Hillary Clinton said this week that if elected president, she would work with major technology companies to “step up” counter-terrorism efforts, including surveillance of social media and campaigns to combat jihadist propaganda online. As Reuters reports, the presumptive Democratic presidential nominee made the comments in a speech in Cleveland Monday, one day after a gunman killed 49 people and left 53 wounded at a gay nightclub in Orlando.

Clinton did not provide details on how she would work with tech companies, though her comments add to the ongoing debate over privacy and national security, which has intensified following recent terrorist attacks in both the US and Europe. In her speech, the former secretary of state called for an “intelligence surge,” saying that security agencies “need better intelligence to discover and disrupt terrorist plots before they can be carried out.” She also called on the government and tech companies to “use all our capabilities to counter jihadist propaganda online.”

“As president, I will work with our great tech companies from Silicon Valley to Boston to step up our game,” Clinton said. “We have to [do] a better job intercepting ISIS’ communications, tracking and analyzing social media posts and mapping jihadist networks, as well as promoting credible voices who can provide alternatives to radicalization.”

What does it mean to “counter jihadist propaganda online?”

Does it include factual reports about the aims of jihadists and the abuses they seek to correct?

For example, the Declaration of Independence was once considered “propaganda.”

Does it include factual reports of terrorist bombings by coalition forces on jihadists positions?

Question: Who do you root for in the Star Wars movies, the tiny band of rebels or the empire?

Does it include calling on young people to actively resist corrupt and oppressive governments?

Let’s see…

Even anarchy itself, that bugbear held up by the tools of power (though truly to be deprecated) is infinitely less dangerous to mankind than arbitrary government. Anarchy can be but of short duration; for when men are at liberty to pursue that course which is most conducive to their own happiness, they will soon come into it, and for the rudest state of nature, order and good government must soon arise. But tyranny, when once established, entails its curse on a nation to the latest period of time; unless some daring genius, inspired by Heaven, shall unappalled by danger, bravely form and execute the arduous design of restoring liberty and life to his enslaved, murdered country.” [AN ORATION DELIVERED MARCH 6, 1775, AT THE… Joseph Warren (1741-1775) Boston: Printed by Messieurs Edes and Gill, and by J. Greenleaf, 1775 E297 W54, Fighting Words, a collection at Utah State University.]

Updated to use modern language, would that qualify?

As I remember the First Amendment, all of those qualify as protected free speech.

Clinton and her separated-at-birth twin, Donald Trump, can try to impose censorship on the legitimate speech of jihadists.

Let’s all lend the jihadists a hand and repeat their legitimate speech on a regular basis.

I for one would like to hear what the jihadists have to say for themselves.

Wouldn’t you?

How Do I Become A Censor?

Monday, June 13th, 2016

You read about censorship or efforts at censorship on a daily basis.

But none of those reports answers the burning question of the social media age: How Do I Become A Censor?

I mean, what’s the use of reading about other people censoring your speech if you aren’t free to censor theirs? Where’s the fun in that?

Andrew Golis has an answer for you in: Comments are usually garbage. We’re adding comments to This.!.

Three steps to becoming a censor:

  1. Build a social media site that accepts comments
  2. Declare a highly subjective set of ass-hat rules
  3. Censor user comments

There being no third-party arbiters, you are now a censor! Feel the power surging through your fingers. Crush dangerous thoughts, memes or content with a single return. The safety and sanity of your users is now your responsibility.

Heady stuff. Yes?

If you think this is parody, check out the This. Community Guidelines for yourself:


With that in mind, This. is absolutely not a place for:

Violations of law. While this is expanded upon below, it should be clear that we will not tolerate any violations of law when using our site.

Hate speech, malicious speech, or material that’s harmful to marginalized groups. Overtly discriminating against an individual belonging to a minority group on the basis of race, ethnicity, national origin, religion, sex, gender, sexual orientation, age, disability status, or medical condition won’t be tolerated on the site. This holds true whether it’s in the form of a link you post, a comment you make in a conversation, a username or display name you create (no epithets or slurs), or an account you run.

Harassment; incitements to violence; or threats of mental, emotional, cyber, or physical harm to other members. There’s a line between civil disagreement and harassment. You cross that line by bullying, attacking, or posing a credible threat to members of the site. This happens when you go beyond criticism of their words or ideas and instead attack who they are. If you’ve got a vendetta against a certain member, do not police and criticize that member’s every move, post, or comment on a conversation. Absolutely don’t take this a step further and organize or encourage violence against this person, whether through doxxing, obtaining dirt, or spreading that person’s private information.

Violations of privacy. Respect the sanctity of our members’ personal information. Don’t con them – or the infrastructure of our site – to obtain, post, or disseminate any information that could threaten or harm our members. This includes, but isn’t limited to, credit card or debit card numbers; social or national security numbers; home addresses; personal, non-public email addresses or phone numbers; sexts; or any other identifying information that isn’t already publicly displayed with that person’s knowledge.

Sexually-explicit, NSFW, obscene, vulgar, or pornographic content. We’d like for This. to be a site that someone can comfortably scroll through in a public space – say a cafe, or library. We’re not a place for sexually-explicit or pornographic posts, comments, accounts, usernames, or display names. The internet is rife with spaces for you to find people who might share your passion for a certain Pornhub video, but This. isn’t the place to do that. When it comes to nudity, what we do allow on our site is informative or newsworthy – so, for example, if you’re sharing this article on Cameroon’s breast ironing tradition, that’s fair game. Or a good news or feature article about Debbie Does Dallas. But, artful as it may be, we won’t allow actual footage of Debbie Does Dallas on the site. (We understand that some spaces on the internet are shitty at judging what is and isn’t obscene when it comes to nudity, so if you think we’ve pulled your post off the site because we’re a bunch of unreasonable prudes, we’ll be happy to engage.)

Excessively violent content. Gore, mutilation, bestiality, necrophilia? No thanks! There’s a distinction between a potentially upsetting image that’s worth consuming (think of some of the best war photography) and something you’d find in a snuff film. It’s not always an easy distinction to make – real life is pretty brutal, and some of the images we probably need to see are the hardest to stomach – but we also don’t want to create an overwhelmingly negative experience for anyone who visits the site and happens to stumble upon a painful image.

Promotion of self-harm, eating disorders, alcohol or drug abuse, or similar forms of destructive behavior. The internet is, sadly, also rife with spaces where people get off on encouraging others to hurt themselves. If you’d like to do that, get off our site and certainly seek help.

Username squatting. Dovetailing with that, we reserve the right to take back a username that is not being actively used and give it to someone else who’d like it – especially if it’s, say, an esteemed publication, organization, or person. We’re also firmly against attempts to buy or sell stuff in exchange for usernames.

Use of the This. brand, trademark, or logo without consent. You also cannot use the This. name or anything associated with the brand without our consent – unless, of course, it’s a news item. That means no creating accounts, usernames, or display names that use our brand.

Spam. Populating the site with spammy accounts is antithetical to our mission – being the place to find the absolute best in media. If you’ve created accounts that are transparently selling, say, “installation help for Macbooks” or some other suspicious form tech support, or advertising your “viral video” about Justin Bieber that’s got a suspiciously low number of views, you don’t belong on our site. That contradicts why we exist as a platform – to give members a noise-free experience they can’t find elsewhere on the web.

Impersonation of others. Dovetailing with that – though we’d all like to be The New York Times or Diana Ross, don’t pretend to be them. Don’t create an identity on the site in the likeness of a company or person who isn’t you. If you decide, for some reason, to create a parody account of a public figure or organization – though we can think of better sites to do that on, frankly – make sure you make that as clear as possible in your display name, avatar, and bio.

Infringement of copyright or intellectual property rights. Don’t post copyrighted works without the permission of its original owner or creator. This extends, for example, to copying and pasting a copyrighted set of words into a comment and passing it off as your own without credit. If you think someone has unlawfully violated your own copyright, please follow the DMCA procedures set forth in our Terms of Service.

Mass or automated registration and following. We’ve worked hard to build the site’s infrastructure. If you manipulate that in any way to game your follow count or register multiple spam accounts, we’ll have to terminate your account.

Exploits, phishing, resource abuse, or fraudulent content. Do not scam our members into giving you money, or mislead our members through misrepresenting a link to, say, a virus.

Exploitation of minors. Do not post any material regarding minors that’s sexually explicit, violent, or harmful to their safety. Don’t solicit or request their private or personally identifiable information. Leave them alone.

So how do we take punitive action against anyone who violates these? Depends on the severity of the offense. If you’re a member with a good track record who seems to have slipped up, we’ll shoot you an email telling you why your content was removed. If you’ve shared, written, or done something flagrantly and recklessly violating one of these rules, we’ll ban you from the site through deleting your account and all that’s associated with it. And if we feel it’s necessary or otherwise believe it is required, we will work with law enforcement to handle any risk to one of our members, the This. community in general, or to public safety.

To put it plainly – if you’re an asshole, we’ll kick you off the site.

Let’s make that a little more concrete.

I want to say: “Former Vice-President Dick Cheney should be tortured for a minimum of thirty (30) years and be kept alive for that purpose, as a penalty for his war crimes.”

I can’t say that on This. because:

  • “incitement to violence” If torture is ok, then so is other violence.
  • “harmful to marginalized group” If you think of sociopaths as a marginalized group.
  • “harassment” Cheney is a victim too. He didn’t start life as a moral leper.
  • “excessively violent content” Assume I illustrate the torture Cheney should suffer.

Rules broken vary by the specific content of my speech.

Remind me to pass this advice along to: Jonathan “I Want To Be A Twitter Censor” Weisman. All he needs to do is build a competitor to Twitter and he can censor to his heart’s delight.

The build-your-own-platform advice isn’t just my opinion. This. confirms it:

If you don’t like these rules, feel free to create your own platform! There are a lot of awesome, simple ways to do that. That’s what’s so lovely about the internet.

Art and the Law: [UK Focused]

Sunday, June 12th, 2016

Art and the Law: Guides to the legal framework and its impact on artistic freedom of expression by Jodie Ginsberg, chief executive, Index on Censorship.

From the post:

Freedom of expression is essential to the arts. But the laws and practices that protect and nurture free expression are often poorly understood both by practitioners and by those enforcing the law.

As part of Index on Censorship’s work on art and offence, Index has published a series of law packs intended to address questions about legal limits related to free expression and the arts.

We intend them as “living” documents, to be enhanced and developed in partnership with arts groups so that artistic freedom is nurtured and nourished.

This work builds on an earlier study by Index on Censorship, Taking the Offensive, which showed how self-censorship manifests itself in arts organisations and institutions.

Descriptions of:

Child Protection: PDF | web

Counter Terrorism: PDF | web

Obscene Publications: PDF | web

Public Order: PDF | web

Race and Religion: PDF | web

along with numerous other resources appear on this page.

Realize these are UK-specific and the laws on such matters vary widely. That’s not a criticism but an observation for the safety of readers. Check your local laws with qualified legal advisers.

Unlike Jonathan “I Want To Be A Twitter Censor” Weisman, my advice for when you find offensive content, is to look away.

What other people choose to create, publish, perform, listen to, view, read, etc., is their business and certainly none of yours.

Criminal acts against other people, children in particular, are already unlawful; censorship isn’t required to outlaw them.