Archive for the ‘Free Speech’ Category

Combating YouTube Censorship (Carry Banned Videos Yourself)

Monday, May 29th, 2017

Memorial Day is always a backwards-looking holiday, but reading How Terrorists Slip Beheading Videos Past YouTube’s Censors by Rita Katz felt like time-warping to the 1950s.

Other jihadi propaganda on the video-sharing platform may be visually more low-key, but are just as insidious in their own ways.

There is a grim bit in comedian Dave Chappelle’s new Netflix special about clicking “don’t like” on an Islamic State beheading video.

“How is this guy cutting peoples’ heads off on YouTube?” Chappelle asks, noting the absurdity of it.

Don’t like. Click.

In reality, reports of extremist content littering YouTube aren’t new. But when hundreds of major advertisers began suspending contracts with YouTube and Google in recent months, boycotting the massive video-sharing platform over concerns with such explicit content, things got a lot more real.

Google services—namely YouTube—are the most plentiful and important links used by terrorist organizations to disseminate their propaganda. And despite all of YouTube’s efforts to keep them out thus far, such groups still manage to sneak their media onto its servers.
… (emphasis in original)

Whatever label you want to apply to another group, “terrorist,” “al Qaeda,” etc., censorship is and remains censorship.

Censorship and intimidation were practiced during the Red Scare of the 1940s and ’50s; lives and careers were ruined, and we weren’t one whit safer than we would have been without it.

Want to combat YouTube censorship?

When videos are censored by YouTube, carry them on your site.

Suggested header: “Banned on YouTube,” to make such videos easy to find.

It won’t stop YouTube’s censorship but it can defeat its intended outcome.
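If you do carry censored videos yourself, a minimal sketch of the suggested “Banned on YouTube” page might look like the following. (The directory name and the .mp4 extension are my illustrative assumptions, nothing more.)

```python
# A minimal sketch, not a production setup: build a "Banned on YouTube"
# index page for videos you host yourself.
from pathlib import Path

def banned_index(video_dir: str) -> str:
    """Return HTML linking every .mp4 in video_dir under the suggested header."""
    links = "\n".join(
        f'<li><a href="{p.name}">{p.stem}</a></li>'
        for p in sorted(Path(video_dir).glob("*.mp4"))
    )
    return f"<h1>Banned on YouTube</h1>\n<ul>\n{links}\n</ul>"
```

Write the result to an index.html alongside the videos and serve the directory with something as simple as `python -m http.server`; the videos then survive regardless of YouTube’s takedowns.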

Introduction: The New Face of Censorship

Saturday, May 6th, 2017

Introduction: The New Face of Censorship by Joel Simon.

From the post:

In the days when news was printed on paper, censorship was a crude practice involving government officials with black pens, the seizure of printing presses and raids on newsrooms. The complexity and centralization of broadcasting also made radio and television vulnerable to censorship even when the governments didn’t exercise direct control of the airwaves. After all, frequencies can be withheld; equipment can be confiscated; media owners can be pressured.

New information technologies–the global, interconnected internet; ubiquitous social media platforms; smart phones with cameras–were supposed to make censorship obsolete. Instead, they have just made it more complicated.

Does anyone still believe the utopian mantras that information wants to be free and the internet is impossible to censor or control?

The fact is that while we are awash in information, there are tremendous gaps in our knowledge of the world. The gaps are growing as violent attacks against the media spike, as governments develop new systems of information control, and as the technology that allows information to circulate is co-opted and used to stifle free expression.

The work of Joel Simon and the Committee to Protect Journalists is invaluable. The challenges, dangers and hazards for journalists around the world are constant and unrelenting.

I have no doubt about Simon’s account of suppression of journalists. His essay is a must read for everyone who opposes censorship, at least in its obvious forms.

A more subtle form of censorship is practiced in the United States: self-censorship.

How many stories on this theme have you read in the last couple of weeks? U.S. spy agency abandons controversial surveillance technique

Now, how many of those same stories mentioned that the NSA has a long and storied history of lying to the American public, presidents and Congress?

By my count, which wasn’t exhaustive, the total is 0.

Instead of challenging this absurd account, Reuters repeats the NSA’s claims as though they were true and fails to remind the public that it is relying on a habitual liar.

Show of hands, how many readers think the Reuters staff forgot that the NSA is a hotbed of liars and cheats?

There is little cause for government censorship of US media outlets. They censor themselves before the government can even ask.

Support the Committee to Protect Journalists and perhaps their support of journalists facing real censorship will shame US media into growing a spine.

3,000 New Censorship Jobs At Facebook

Friday, May 5th, 2017

Quick qualification test for censorship jobs at Facebook:

  • Are you more moral than most people?
  • Are you more religious than most people?
  • Are you more sensitive than most people?
  • Do you want to suppress “harmful” content?
  • Do you enjoy protecting people who are easily misled (unlike you)?
  • Do you support the United States, its agencies, offices and allies?
  • Do you recognize Goldman Sachs, Chase and all other NYSE listed companies as people with rights?

If you answered one or more of these questions with “yes,” congratulations! You have passed a pre-qualification test for one of the 3,000 new censorship positions for Facebook.

(Disclaimer: It is not known if Facebook will recognize this pre-qualification test and may have other tests or questions for actual applicants.)

For further details, see: Will Facebook actually hire 3,000 content moderators, or will they outsource? by Annalee Newitz.

Censorship is the question. The answer is no.

EU Censorship Emboldens Torpid UK Parliament Members

Tuesday, May 2nd, 2017

Social media companies “shamefully far” from tackling illegal and dangerous content

From the webpage:

The Home Affairs Committee has strongly criticised social media companies for failing to take down and take sufficiently seriously illegal content – saying they are “shamefully far” from taking sufficient action to tackle hate and dangerous content on their sites.

The Committee recommends the Government should assess whether failure to remove illegal material is in itself a crime and, if not, how the law should be strengthened. They recommend that the Government also consult on a system of escalating sanctions to include meaningful fines for social media companies which fail to remove illegal content within a strict timeframe.
… (emphasis in original)

I can only guess the recent EU censorship spasm, EU’s Unfunded Hear/See No Evil Policy, has made the UK parliament bold. Or at least bolder than usual.

What leaves me puzzled, though, is that “hate crimes” are by definition crimes. Yes? And the UK already has laws against hate crimes, police officials to enforce those laws, courts in which to try those suspected of hate crimes, and prisons in the event they are convicted. Yes?

If all that’s true, then for social media, really media in general, you need only one rule:

If what you see, hear and/or read disturbs you, look, listen and/or read something else.

It’s really that simple. No costs to social media companies, no extra personnel to second guess what some number of UK parliament members find to be “hate and dangerous content,” no steady decay of the right to speak without government pre-approval, etc.

As far as what other people prefer to see, hear and/or read, well, that’s really none of your business.

EU’s Unfunded Hear/See No Evil Policy

Friday, April 28th, 2017

EU lawmakers vote to make YouTube fight online hate speech by Julia Fioretti.

From the post:

Video-sharing platforms such as Google’s YouTube and Vimeo will have to take measures to protect citizens from content containing hate speech and incitement to violence under measures voted by EU lawmakers on Tuesday.

The proliferation of hate speech and fake news on social media has led to companies coming under increased pressure to take it down quickly, while internet campaigners have warned an excessive crackdown could endanger freedom of speech.

Members of the culture committee in the European Parliament voted on a legislative proposal that covers everything from 30 percent quotas for European works on video streaming websites such as Netflix to advertising times on TV to combating hate speech.

Ironically, the reported vote was by the “CULT” committee. No, I’m not making that up! I can prove that from the documents page:

From the report,


Amendment 18

(28) Some of the content stored on video-sharing platforms is not under the editorial responsibility of the video-sharing platform provider. However, those providers typically determine the organisation of the content, namely programmes or user-generated videos, including by automatic means or algorithms. Therefore, those providers should be required to take appropriate measures to protect minors from content that may impair their physical, mental or moral development and protect all citizens from incitement to violence or hatred directed against a group of persons or a member of such a group defined by reference to sex, race, colour, religion, descent or national or ethnic origin.
… (emphasis in original)

In addition to being censorship, unfunded censorship at that, the EU report runs afoul of the racist reality of the EU.

If you’re up for some difficult reading, consider Intolerance, Prejudice and Discrimination – A European Report by Forum Berlin, Andreas Zick, Beate Küpper, and Andreas Hövermann.

From page 13 of the report:

  • Group-focused enmity is widespread in Europe. It is weakest in the Netherlands, and strongest in Poland and Hungary. With respect to anti-immigrant attitudes, anti-Muslim attitudes and racism there are only minor differences between the countries, while differences in the extent of anti-Semitism, sexism and homophobia are much more marked.
  • About half of all European respondents believe there are too many immigrants in their country. Between 17 percent in the Netherlands and more than 70 percent in Poland believe that Jews seek to benefit from their forebears’ suffering during the Nazi era. About one third of respondents believe there is a natural hierarchy of ethnicity. Half or more condemn Islam as “a religion of intolerance”. A majority in Europe also subscribe to sexist attitudes rooted in traditional gender roles and demand that: “Women should take their role as wives and mothers more seriously.” With a figure of about one third, Dutch respondents are least likely to affirm sexist attitudes. The proportion opposing equal rights for homosexuals ranges between 17 percent in the Netherlands and 88 percent in Poland; they believe it is not good “to allow marriages between two men or two women”.

At the risk of insulting our simian relatives, this new EU policy can be summarized by:

(source: Three Wise Monkeys)

Suppressing hate speech does not result in less hate, only in less evidence of it.

While this legislation is pending, YouTube and Vimeo should occasionally suspend access of EU viewers for an hour. EU voters may decide they need more responsible leadership.

Dissing Facebook’s Reality Hole and Impliedly Censoring Yours

Sunday, April 23rd, 2017

Climbing Out Of Facebook’s Reality Hole by Mat Honan.

From the post:

The proliferation of fake news and filter bubbles across the platforms meant to connect us have instead divided us into tribes, skilled in the arts of abuse and harassment. Tools meant for showing the world as it happens have been harnessed to broadcast murders, rapes, suicides, and even torture. Even physics have betrayed us! For the first time in a generation, there is talk that the United States could descend into a nuclear war. And in Silicon Valley, the zeitgeist is one of melancholy, frustration, and even regret — except for Mark Zuckerberg, who appears to be in an absolutely great mood.

The Facebook CEO took the stage at the company’s annual F8 developers conference a little more than an hour after news broke that the so-called Facebook Killer had killed himself. But if you were expecting a somber mood, it wasn’t happening. Instead, he kicked off his keynote with a series of jokes.

It was a stark disconnect with the reality outside, where the story of the hour concerned a man who had used Facebook to publicize a murder, and threaten many more. People used to talk about Steve Jobs and Apple’s reality distortion field. But Facebook, it sometimes feels, exists in a reality hole. The company doesn’t distort reality — but it often seems to lack the ability to recognize it.

I can’t say I’m fond of the Facebook reality hole, but unlike Honan, who urges that:


It can make it harder to use its platforms to harass others, or to spread disinformation, or to glorify acts of violence and destruction.

I have no desire to censor any of the content that anyone cares to make and/or view on it. Bar none.

The “default” reality settings desired by Honan and others are a thumb on the scale for some cause they prefer over others.

They are entitled to their preference, but I object to their setting the range of preferences enjoyed by others.

You?

Naming German Censors

Friday, April 7th, 2017

Germany gives social networks 24 hours to delete criminal content by Simon Sharwood.

From the post:

Germany has followed through on its proposal to make social networks remove slanderous hate speech and fake news or face massive fines.

The nation’s Bundesministerium der Justiz und für Verbraucherschutz (Federal Ministry of Justice and Consumer Protection) has announced that cabinet approved a plan to force social network operators to create a complaints mechanism allowing members of the public to report content that online translate-o-tronic services categorise as “insults, libel, slander, public prosecutions, crimes, and threats.”

The Bill approved by Cabinet proposes that social networks be required to establish complaints officer who is subject to local law and gets the job of removing obviously criminal content 24 hours after receiving a complaint. A seven-day deadline will apply to content that’s not immediately identifiable as infringing. Social networks will also be required to inform complainants of the outcome of their takedown requests and to provide quarterly summaries of their activities.

The ministry’s statement also suggests that those who feel aggrieved by material posted about them should be able to learn the true identity of the poster.

A Faktenpapier (PDF) on the Bill says that if the deadlines mentioned above aren’t met the social network’s designated complaints-handler could be fined up to five million Euros, while the network itself could cop a fine of 50 million Euros. An appeal to Germany’s courts will be possible.

Sharwood’s post is a great summary of this censorship proposal but fails to identify those responsible for it.

“Germany” in the abstract sense isn’t responsible for it. And to say the “Cabinet,” leaves the average reader no more informed than saying “Germany.”

Perhaps this helps: German Cabinet / Censors:

Peter Altmaier, Alexander Dobrindt, Sigmar Gabriel, Hermann Gröhe, Barbara Hendricks, Ursula von der Leyen, Heiko Maas, Thomas de Maizière, Angela Merkel, Gerd Müller, Andrea Nahles, Wolfgang Schäuble, Christian Schmidt, Manuela Schwesig, Johanna Wanka, Brigitte Zypries.

I don’t have their staff listings yet, but that’s a start on piercing the veil that “Germany” and “Cabinet” put between the reader and wannabe censors.

Other veils that hide/protect censors that need piercing?

UK Proposes to Treat Journalists As Spies (Your Response Here)

Sunday, March 19th, 2017

UK’s proposed Espionage Act will treat journalists like spies by Roy Greenslade.

From the post:

Journalists in Britain are becoming increasingly alarmed by the government’s apparent determination to prevent them from fulfilling their mission to hold power to account. The latest manifestation of this assault on civil liberties is the so-called Espionage Act. If passed by parliament, it could lead to journalists who obtain leaked information, along with the whistle blowers who provide it to them, serving lengthy prison sentences.

In effect, it would equate journalists with spies, and its threat to press freedom could not be more stark. It would not so much chill investigative journalism as freeze it altogether.

The proposal is contained in a consultation paper, “Protection of Official Data,” which was drawn up by the Law Commission. Headed by a senior judge, the commission is ostensibly independent of government. Its function is to review laws and recommend reforms to ensure they are fairer and more modern.

But fairness is hardly evident in the proposed law. Its implications for the press were first highlighted in independent news website The Register by veteran journalist Duncan Campbell, who specializes in investigating the U.K. security services.

Comments on the public consultation document can be registered here.

Greenslade reports criticism of the proposal earned this response from the government:


In response, both Theresa May’s government and the Law Commission stressed that it was an early draft of the proposed law change. Then the commission followed up by extending the public consultation period by a further month, setting a deadline of May 3.

Early draft, last draft or the final form from parliament, journalists should treat the proposed Espionage Act as a declaration of war on the press.

Being classified as spies, journalists should start acting as spies. Spies that offer no quarter and who take no prisoners.

Develop allies in other countries who are willing to publish information detrimental to your government.

The government has chosen a side and it’s not yours. What more need be said?

Continuing Management Fail At Twitter

Monday, March 6th, 2017

Twitter management continues to fail.

Consider its censoring of the account of Lauri Love (a rumored hacker).

Competent management at Twitter would be licensing the rights to create shareable mutes/filters for all posts from Lauri Love.

The FBI, Breitbart, US State Department, and others would vie for users of their filters, which block “dangerous and/or seditious content.”

Filters licensed in increments, depending on how many shares you want to enable.

Twitter with no censorship at all would drive the market for such filters.

Licensing filters by number of shares provides a steady revenue stream, and Twitter could shed its censorship-prone barnacles. More profit, reduced costs, what’s not to like?

PS: I ask nothing for this suggestion. Getting Twitter out of the censorship game on behalf of governments is benefit enough for me.
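The licensing idea above can be sketched in a few lines. (All names and structures here are invented for illustration; none of this is a real Twitter API.)

```python
# Sketch: a "shared filter" is just a set of handles its publisher wants
# muted; users opt in by subscribing. Publisher and handle names are
# hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class SharedFilter:
    publisher: str
    muted_handles: set[str] = field(default_factory=set)

def visible(tweet_author: str, subscriptions: list[SharedFilter]) -> bool:
    """A tweet is shown unless a filter the user subscribes to mutes its author."""
    return all(tweet_author not in f.muted_handles for f in subscriptions)
```

A user who subscribes to a hypothetical “FBI” filter stops seeing the muted accounts; everyone else’s timeline is untouched, and Twitter collects a licensing fee instead of censorship complaints.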

White House blocks news organizations from press briefing [Opsec vs. Boromir, Ethics]

Friday, February 24th, 2017

White House blocks news organizations from press briefing by Dylan Byers, Sara Murray and Kevin Liptak.

From the post:

CNN and other news outlets were blocked Friday from an off-camera White House press briefing, raising alarm among media organizations and First Amendment watchdogs.

The New York Times, the Los Angeles Times, Politico and BuzzFeed were also excluded from the meeting, which is known as a gaggle and is less formal than the televised Q-and-A session in the White House briefing room. The gaggle was held by White House press secretary Sean Spicer.

In a brief statement defending the move, administration spokeswoman Sarah Sanders said the White House “had the pool there so everyone would be represented and get an update from us today.”

The pool usually includes a representative from one television network and one print outlet. In this case, four of the five major television networks — NBC, ABC, CBS and Fox News — were invited and attended the meeting, while only CNN was blocked.

And while The New York Times was kept out, conservative media organizations Breitbart News, The Washington Times and One America News Network were also allowed in.
… (emphasis in original)

Good opsec counsels silence in the face of such an outrage but as Boromir says in The Fellowship of the Ring:

“But always I have let my horn cry at setting forth, and though thereafter we may walk in the shadows, I will not go forth as a thief in the night.” (emphasis added)

I trust this outrage obviates “ethical” concerns over distinctions between leaking, hacking, or other means of obtaining government information?

Twitter reduces reach of users it believes are abusive [More Opaque Censorship]

Friday, February 17th, 2017

Twitter reduces reach of users it believes are abusive

More opaque censorship from Twitter:

Twitter has begun temporarily decreasing the reach of tweets from users it believes are engaging in abusive behaviour.

The new action prevents tweets from users Twitter has identified as being abusive from being displayed to people who do not follow them for 12 hours, thus reducing the user’s reach.

If the user were to mention someone who does not follow them on the social media site, that person would not see the tweet in their notifications. Again, this would last for 12 hours.

If the user who had posted abusive tweets was retweeted by someone else, this tweet would not be able to be seen by people who do not follow them, again reducing their Twitter reach.
… (emphasis in original)

I’m assuming this is one of the changes Ed Ho alluded to in An Update on Safety (February 7, 2017) when he said:

Collapsing potentially abusive or low-quality Tweets:

Our team has also been working on identifying and collapsing potentially abusive and low-quality replies so the most relevant conversations are brought forward. These Tweet replies will still be accessible to those who seek them out. You can expect to see this change rolling out in the coming weeks.
… (emphasis in original)

No announcements for:

  • Grounds for being deemed “abusive.”
  • Process for contesting designation as “abusive.”

Twitter is practicing censorship, the basis for which is opaque and the censored have no impartial public forum for contesting that censorship.

In the interest of space, I forgo the obvious historical comparisons.

All of which could have been avoided by granting Twitter users:

The ability to create and share filters for tweets.

Even a crude filtering mechanism should enable me to filter tweets that contain my Twitter handle, but that don’t originate from anyone I follow.
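That crude mechanism can be sketched in a few lines. (The tweet shape and the handle are illustrative assumptions, not Twitter’s actual API.)

```python
# Sketch of a user-controlled filter: drop tweets that mention my handle
# unless they come from someone I follow. Field names are hypothetical.
def keep_tweet(tweet: dict, my_handle: str, following: set[str]) -> bool:
    """Return True if the tweet should appear in my notifications."""
    mentions_me = my_handle.lower() in tweet["text"].lower()
    from_followed = tweet["author"] in following
    return (not mentions_me) or from_followed
```

The point of the sketch: the rule runs on my stream, under my control, and nobody else’s timeline is touched.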

So Ed Ho, why aren’t users being empowered to filter their own streams?

Defeating “Fake News” Without Mark Zuckerberg

Sunday, January 1st, 2017

Despite a lack of proof that “fake news” is a problem, Mark Zuckerberg and others have taken up the banner of public censor on behalf of us all, whether any of us are interested in their assistance or not.

In countering calls for and toleration of censorship, you may find it helpful to point out that “fake news” isn’t new.

There are any number of spot instances of fake news. Michael J. Socolow reports in: Reporting and punditry that escaped infamy:


As the day wore on, real reporting receded, giving way to more speculation. Right-wing commentator Fulton Lewis Jr. told an audience five hours after the attack that he shared the doubts of many American authorities that the Japanese were truly responsible. He “reported” that US military officials weren’t convinced Japanese pilots had the skills to carry out such an impressive raid. The War Department, he said, is “concerned to find out who the pilots of these planes are—whether they are Japanese pilots. There is some doubt as to that, some skepticism whether they may be pilots of some other nationality, perhaps Germans, perhaps Italians,” he explained. The rumor that Germans bombed Pearl Harbor lingered on the airwaves, with NBC reporting, on December 8, that eyewitnesses claimed to have seen Nazi swastikas painted on some of the bombers.

You may object that there was much confusion, that the pundits weren’t trying to deceive, or any number of other excuses. And you can repeat those for other individual instances of “fake news.” They simply don’t compare, you might say, to the flood of intentionally “fake” publications available today.

I disagree, but point taken. Let’s look back to an event that, like the internet, enabled a comparative flood of information to reach readers: the invention of the printing press.

Elizabeth Eisenstein in The Printing Revolution in Early Modern Europe characterizes the output of the first fifty years of printing presses saying:

…it seems necessary to qualify the assertion that the first half-century of printing gave “a great impetus to wide dissemination of accurate knowledge of the sources of Western thought, both classical and Christian.” The duplication of the hermetic writings, the sibylline prophecies, the hieroglyphics of “Horapollo” and many other seemingly authoritative, actually fraudulent esoteric writings worked in the opposite direction, spreading inaccurate knowledge even while paving the way for purification of Christian sources later on.
…(emphasis added) (page 48)

I take Eisenstein to mean that knowingly fraudulent materials were being published, which seems to be the essence of the charge against the authors of “fake news” today.

As far as the quantity of the printing press equivalent to “fake news,” she remarks:


Compared to the large output of unscholarly vernacular materials, the number of trilingual dictionaries and Greek or even Latin editions seems so small that one wonders whether the term “wide dissemination” ought to be applied to the latter case at all.
… (page 48)

To be fair, “unscholarly vernacular materials” include both texts intended to be accurate and “fake” ones.

The Printing Revolution in Early Modern Europe is the abridged version of Eisenstein’s The printing press as an agent of change: communications and cultural transformations in early modern Europe, which has the footnotes and references to enable more precision on early production figures.

Suffice it to say, however, that no 15th-century equivalent of Mark Zuckerberg arrived upon the scene to save everyone from “…actually fraudulent esoteric writings … spreading inaccurate knowledge….”

The world didn’t need Mark Zuckerberg’s censoring equivalent in the 15th century and it doesn’t need him now.

Facebook’s Censoring Rules (Partial)

Wednesday, December 21st, 2016

Facebook’s secret rules of deletion by Till Krause and Hannes Grassegger.

From the post:

Facebook refuses to disclose the criteria that deletions are based on. SZ-Magazin has gained access to some of these rules. We show you some excerpts here – and explain them.

Introductory words

These are excerpts of internal documents that explain to content moderators what they need to do. To protect our sources, we have made visual edits to maintain confidentiality. While the rules are constantly changing, these documents provide the first-ever insights into the current guidelines that Facebook applies to delete contents.

Insight into a part of the byzantine world of Facebook deletion/censorship rules.

Pointers to more complete leaks of Facebook rules please!

Achtung! Germany Hot On The Censorship Trail

Tuesday, December 20th, 2016

Germany threatens to fine Facebook €500,000 for each fake news post by Mike Murphy.

Mike reports that fears are spreading that fake news could impact German parliamentary elections set for 2017.

One source of those fears is the continued sulking of Clinton campaign staff who fantasize that “fake news” cost Sec. Clinton the election.

Anything is possible, as they say, but to date there have been only accusations: between sobs and sighs, no proof has been offered that “fake news” had any impact on the election at all.

Do you seriously think the “fake news” that the Pope had endorsed Trump impacted the election? Really?

If “fake news” is something other than an excuse for censorship (United States, UK, Germany, etc.), collect the “fake news” stories that you claim impacted the election.

Measure the impact of that “fake news” on volunteers following standard social science protocols.

Or do “fake news” criers fear the factual results of such a study?

PS: If you realize that “fake news” isn’t something new but quite traditional, you will enjoy ‘Fake News’ in America: Homegrown, and Far From New by Chris Hedges.

The Biggest Fake News…

Sunday, December 18th, 2016

Naval Ravikant tweeted:

The biggest fake news is that the “fake news” debate is about anything other than censorship.

Any story/report/discussion/debate over “fake news,” should start with the observation that regulation, filtering, tagging, etc., of “fake news” is a form of censorship.

Press advocates of regulation, filtering, tagging “fake news” until they admit advocating censorship.

The only acceptable answer to censorship is NO. Well, perhaps Hell NO! but you get the idea.

Fight Censorship – Expand Content Flow! Censor Overflow!

Sunday, December 18th, 2016

Facebook, Twitter and others have undertaken demented and pernicious censorship campaigns. Depending upon your politics and preferences, some of their rationales may or may not be compelling to you.

All censorship solutions fail to honor the fundamental right of all users to choose, for themselves, whether or not to listen to or view any content. Instead, these censors seek to impose their choices on everyone.

I’m indifferent to the motivations of censors, some of which I would find personally compelling. The fact remains that users and only users should exercise the right of choice over the content they consume. I would not interfere with that right, even to further my own views on appropriate content.

Having said all that, you have no doubt noticed that your freedom to consume the content of your choice is being rapidly curtailed by the aforementioned censors and others.

One practical defense against these censorious vermin is to explode the flow of content. Producing a condition I call “censor overflow.”

Radio.Garden (which I posted on yesterday) is one source of new content.

Here are some others:

Australian Live Radio Some 268 “proper” radio stations (no internet-only stations) from Australia.

InternetRadio As of today, 39,539 internet radio stations. Even more intriguing is the capability to create your own radio station. Servers are in London and the US so you will need to self-censor or find concealment for your station if you want to be edgy.

Listenlive.eu Over 4000 “proper” radio stations from across Europe.

Live-Radio.net Another “proper” radio station listing but this time with worldwide coverage.

RadioGuide.fm A focus on online radio stations from around the world, now numbering more than 3000 stations.

Radio Station World A wider worldwide listing which expressly includes:

RadioStationWorld is an informational directory dealing with the radio broadcasters worldwide. We depend on many people around the world to help us keep the RadioStationWorld listings up to date. (And much thanks to those that take some time to help keep information up-to-date!) Some of the features you will find on our site include listings of local radio stations on the web, radio station that offer streaming webcast services, and in depth listings of local radio broadcast stations including digital radio throughout North America. Also featured are national and regional broadcast networks, shortwave radio, satellite radio, hospital radio, cable radio, closed circuit/campus radio and radio service providers, as well as a growing list of links to sites that deal with the radio broadcasting industry. Enjoy RadioStationWorld, we hope you find this site useful to whatever your needs are, but remember, we do depend on people like yourself to help update in an ever changing broadcast industry. [Correction: The shortwave radio broadcast listing has been withdrawn and the provided link points to a dead resource.]

TuneIn Radio

TuneIn enables people to discover, follow and listen to what’s most important to them — from sports, to news, to music, to talk. TuneIn provides listeners access to over 100,000 real radio stations and more than four million podcasts streaming from every continent.(emphasis in original)

For the sake of completeness, avoid the List of Internet radio stations at Wikipedia. It is too outdated to be anything other than a waste of time.

Contribute content, writing, sound, music, videos, graphics, images, anything that can bring us closer to a state of censor overload!

No promises that censors will tire and go away, after all, censors have been censoring since Plato’s Republic.

But, we have more opportunities to bury censors in a tidal wave of content.

Which will be almost as enjoyable as the content in which we bury them.

Sigh, Tolerance for Censorship is High

Friday, December 16th, 2016

Almost half of Americans believe government ‘responsible’ for tackling fake news by Alastair Reid.

From the post:

Americans are increasingly concerned about the impact of fake news and believe the government bears responsibility in stopping its spread, according to a new survey published today by the Pew Research Center.

Almost 90 per cent of respondents believe fake news causes a “great deal” or “some” confusion about “the basic facts of current events”, and 45 per cent think the government, politicians or elected officials have a “great deal of responsibility” in stopping the spread of fake news.

I am less concerned with the 75 per cent of people who believe fake stories to be true (BuzzFeed News) than with the 45 per cent who find it acceptable for government to combat fake news.

I don’t know of any government or tech company I would trust to filter the content I see.

You?

The full Pew report.

Be Undemocratic – Think For Other People – Courtesy of Slate

Wednesday, December 14th, 2016

Feeling down? Left out of the “big boys” internet censor game by the likes of Facebook and Twitter?

Dry your eyes! Slate has ridden to your rescue!

Will Oremus writes in: Only You Can Stop the Spread of Fake News:


Slate has created a new tool for internet users to identify, debunk, and—most importantly—combat the proliferation of bogus stories. Conceived and built by Slate developers, with input and oversight from Slate editors, it’s a Chrome browser extension called This Is Fake, and you can download and install it for free either on its home page or in the Chrome web store. The point isn’t just to flag fake news; you probably already know it when you see it. It’s to remind you that, anytime you see fake news in your feed, you have an opportunity to interrupt its viral transmission, both within your network and beyond.

I’m glad Slate is taking the credit/blame for This is Fake.

Can you name a more undemocratic position than assuming your fellow voters are incapable of making intelligent choices about the news they consume?

Well, everybody but you and your friends. Right?

Thanks for your offer to help Slate, but no thanks.

How To Brick A School Bus, Data Science Helps Park It (Part 1)

Tuesday, December 13th, 2016

Apologies for being a day late! I was working on how the New York Times acted as a bullhorn for those election interfering Russian hackers.

We left off in Data Science and Protests During the Age of Trump [How To Brick A School Bus…] with:

  • How best to represent these no free speech and/or no free assembly zones on a map?
  • What data sets do you need to make protesters effective under these restrictions?
  • What questions would you ask of those data sets?
  • How to decide between viral/spontaneous action versus publicly known but lawful conduct, up until the point it becomes unlawful?

I started this series of posts because the Women’s March on Washington wasn’t able to obtain a protest permit from the National Park Service due to a preemptive reservation by the Presidential Inauguration Committee.

Since then, the Women’s March on Washington has secured a protest permit (sic) from the Metropolitan Police Department.

If you are interested in protests organized for the convenience of government:

“People from across the nation will gather at the intersection of Independence Avenue and Third Street SW, near the U.S. Capitol, at 10:00am” on Jan. 21, march organizers said in a statement on Friday.

Each to their own.

Bricking A School Bus

We are all familiar with the typical school bus:

[Image: school-bus-460]

By Die4kids (Own work) [GFDL or CC BY-SA 3.0], via Wikimedia Commons

The saying “no one size fits all” applies to the load capacity of school buses. For example, the North Carolina School Bus Safety Web posted this spreadsheet detailing the empty weight (column I) and maximum weight (column R) of a variety of school bus sizes. For best results, get the GVWR (Gross Vehicle Weight Rating, the maximum loaded weight) for your bus and then weigh it on reliable scales.

Once you have the actual weight of your bus, subtract it from the GVWR to get the payload capacity, then divide that payload by 4,000 pounds, the approximate weight of one cubic yard of concrete. The result is the number of cubic yards of concrete you can have poured into your bus as part of the bricking process.
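As a sanity check on that arithmetic, here is a minimal sketch. The weights in the example are hypothetical; substitute the GVWR and scale weight for your own bus:

```python
CONCRETE_LBS_PER_CUBIC_YARD = 4000  # approximate weight of one cubic yard of concrete

def concrete_capacity_cubic_yards(gvwr_lbs, actual_weight_lbs):
    """Cubic yards of concrete a bus can carry without exceeding its GVWR."""
    payload_lbs = gvwr_lbs - actual_weight_lbs
    return payload_lbs / CONCRETE_LBS_PER_CUBIC_YARD

# Hypothetical example: a 30,000 lb GVWR bus weighing 14,000 lbs on the scales
print(concrete_capacity_cubic_yards(30000, 14000))  # → 4.0
```

At roughly 4,000 pounds per cubic yard, even a modest payload translates into a substantial mass of hardened concrete.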

I use the phrase “your bus” deliberately because pouring concrete into a school bus that doesn’t belong to you would be destruction of private property and thus a crime. Don’t commit crimes. Use your own bus.

Once the concrete has hardened (for stability), drive to a suitable location. It’s a portable barricade, at least for a while.

At a suitable location, puncture the tires on one side and tip the bus over. Remove/burn the tires.

Consulting line 37 of the spreadsheet, with that bus, you have a barricade of almost 30,000 pounds, with no wheels.

Congratulations!

I’m still working on the data science aspects of where to park. More on that in How To Brick A School Bus, Data Science Helps Park It (Part 2), which I will post tomorrow.

Facebook Patents Tool To Think For You

Wednesday, December 7th, 2016

My apologies but Facebook thinks you are too stupid to detect “fake news.” Facebook will compensate for your stupidity with a process submitted for a US patent. For free!

Facebook is patenting a tool that could help automate removal of fake news by Casey Newton.

From the post:

As Facebook works on new tools to stop the spread of misinformation on its network, it’s seeking to patent technology that could be used for that purpose. This month the US Patent and Trademark Office published Facebook’s application for Patent 0350675: “systems and methods to identify objectionable content.” The application, which was filed in June 2015, describes a sophisticated system for identifying inappropriate text and images and removing them from the network.

As described in the application, the primary purpose of the tool is to improve the detection of pornography, hate speech, and bullying. But last month, Zuckerberg highlighted the need for “better technical systems to detect what people will flag as false before they do it themselves.” The patent published Thursday, which is still pending approval, offers some ideas for how such a system could work.

A Facebook spokeswoman said the company often seeks patents for technology that it never implements, and said this patent should not be taken as an indication of the company’s future plans. The spokeswoman declined to comment on whether it was now in use.

The system described in the application is largely consistent with Facebook’s own descriptions of how it currently handles objectionable content. But it also adds a layer of machine learning to make reporting bad posts more efficient, and to help the system learn common markers of objectionable content over time — tools that sound similar to the anticipatory flagging that Zuckerberg says is needed to combat fake news.

If you substitute “user” for “administrator” where it appears in the text, Facebook would be enabling users to police the content they view.

Why Facebook finds it objectionable for users to make decisions about the content they view isn’t clear. Suggestions on that question?

The process doesn’t appear to be either accountable or transparent.

If I can’t see the content that is removed by Facebook, how do I make judgments about why it was removed and/or how that compares to content about to be uploaded to Facebook?

Urge Facebook users to demand empowering them to make decisions about the content they view.

Urge Facebook shareholders to pressure management to abandon this quixotic quest to be an internet censor.

Attn: “Fake News” Warriors! Where’s The Harm In Terrorist Propaganda?

Tuesday, December 6th, 2016

Facebook, Microsoft, Twitter, and YouTube team up to stop terrorist propaganda by Justin Carissimo.

Justin’s report is true, at least in the sense that Facebook, Microsoft, Twitter, and YouTube are collaborating to censor “terrorist propaganda.”

Justin’s post also propagates the “fake news” that online content from terrorists “…threaten our national security and public safety….”

Really? You would think after all these years of terrorist propaganda, there would be evidence to support that claim.

True enough, potential terrorists can meet online, but “recruitment” is a far different tale than reading online terrorist content. Consider ISIS and the Lonely Young American, a tale told to support the idea of online recruiting that is, in fact, one of the better refutations of that danger.

It’s not hard to whistle up alleged social science studies of online “terrorist propaganda,” but the impacts of that so-called propaganda are speculation at best, when not outright fantasies of the authors.

“Fake News” warriors should challenge the harmful terrorist propaganda narrative as well as those that are laughably false (denying climate change for example).

Resisting EU Censorship

Monday, December 5th, 2016

US tech giants like Facebook could face new EU laws forcing them to tackle hate speech by Arjun Kharpal.

From the post:

U.S. technology giants including Facebook, Twitter, Microsoft, and Google’s YouTube could face new laws forcing them to deal with online hate speech if they don’t tackle the problem themselves, the European Commission warned.

In May, the four U.S. firms unveiled a “code of conduct” drawn up in conjunction with the Commission, the European Union’s executive arm, to take on hate speech on their platforms. It involved a series of commitments including a pledge to review the majority of notifications of suspected illegal hate speech in less than 24 hours and remove or disable access to the content if necessary. Another promise was to provide regular training to staff around hate speech.

But six months on, the Commission is not happy with the progress. EU Justice Commissioner Vera Jourova has commissioned a report, set to be released later this week, which claims that progress in removing offending material has been too slow.

I posted about this ill-fated “code of conduct” under Four Horsemen Of Internet Censorship + One. I pointed out that the only robust solution to the “hate speech” problem is to enable users, not the EU, to filter the content they see.

Fast forward 2 internet years (3 months = 1 internet year) and the EU is seeking to increase its censorship powers and not to empower users to regulate the content they consume.

Adding injury to insult, the EU proposes directives that require uncompensated expenditures on the part of its victims, Facebook, Twitter, Microsoft, and Google, to meet criteria that can only be specified user by user.

Why the first refuge of the EU for disagreeable speech is censorship I don’t know. What I do know is any tolerance of EU censorship demands encourages even more outrageous censorship demands.

The usual suspects should push back and push back hard against EU demands for censorship.

Enabling users to filter content means users can shape incoming streams to fit their personal sensitivities and dislikes, without impinging on the rights of others.

Had Facebook, Twitter, Microsoft, and Google started developing shareable content filters when they proposed their foolish “code of conduct” to the EU last May, they would either be available or nearly so by today.

Social media providers should not waste any further time attempting to censor on behalf of the EU or users. Enable users to censor their own content and get out of the censorship business.

There’s no profit in the censorship business. In fact, there is only expense and wasted effort.

PS: The “EU report” in question won’t be released until Wednesday, December 7, 2016 (or so I am told).

Internet Censor(s) Spotted in Mirror

Wednesday, November 30th, 2016

How to solve Facebook’s fake news problem: experts pitch their ideas by Nicky Woolf.

From the post:

The impact of fake news, propaganda and misinformation has been widely scrutinized since the US election. Fake news actually outperformed real news on Facebook during the final weeks of the election campaign, according to an analysis by Buzzfeed, and even outgoing president Barack Obama has expressed his concerns.

But a growing cadre of technologists, academics and media experts are now beginning the quixotic process of trying to think up solutions to the problem, starting with a rambling 100+ page open Google document set up by Upworthy founder Eli Pariser.

Woolf captures the essential wrongness of the now-120-page document of suggestions, quoting Claire Wardle:


“The biggest challenge is who wants to be the arbiter of truth and what truth is,” said Claire Wardle, research director for the Tow Center for Digital Journalism at Columbia University. “The way that people receive information now is increasingly via social networks, so any solution that anybody comes up with, the social networks have to be on board.”

Don’t worry, selecting the arbiter of truth and what truth is won’t be difficult.

The authors of these suggestions see their favorite candidate every day:

[Image: mirror-460]

So long as they aren’t seeing my image (substitute your name/image) in the mirror, I’m not interested in any censorship proposal.

Personally, even if offered the post of Internet Censor, I would turn it down.

I can’t speak for you but I am unable to be equally impartial to all. Nor do I trust anyone else to be equally impartial.

The “solution” to “fake news,” if you think that is a meaningful term, is more news, not less.

Enable users to easily compare and contrast news sources, if they so choose. Freedom means being free to make mistakes as well as good choices (from some point of view).

Gab – Censorship Lite?

Tuesday, November 29th, 2016

I submitted my email today at Gab and got this message:

Done! You’re #1320420 in the waiting list.

Only three rules:

Illegal Pornography

We have a zero tolerance policy against illegal pornography. Such material will be instantly removed and the owning account will be dealt with appropriately per the advice of our legal counsel. We reserve the right to ban accounts that share such material. We may also report the user to local law enforcement per the advice our legal counsel.

Threats and Terrorism

We have a zero tolerance policy for violence and terrorism. Users are not allowed to make threats of, or promote, violence of any kind or promote terrorist organizations or agendas. Such users will be instantly removed and the owning account will be dealt with appropriately per the advice of our legal counsel. We may also report the user to local and/or federal law enforcement per the advice of our legal counsel.

What defines a ‘terrorist organization or agenda’? Any group that is labelled as a terrorist organization by the United Nations and/or United States of America classifies as a terrorist organization on Gab.

Private Information

Users are not allowed to post other’s confidential information, including but not limited to, credit card numbers, street numbers, SSNs, without their expressed authorization.

If Gab is listening, I can get the rules down to one:

Court Ordered Removal

When Gab receives a court order from a court of competent jurisdiction ordering the removal of identified, posted content, at (service address), the posted, identified content will be removed.

Simple, fair, gets Gab and its staff out of the censorship business and provides a transparent remedy.

At no cost to Gab!

What’s not to like?

Gab should review my posts: Monetizing Hate Speech and False News and Preserving Ad Revenue With Filtering (Hate As Renewal Resource), while it is in closed beta.

Twitter and Facebook can keep spending uncompensated time and effort trying to be universal and fair censors. Gab has the opportunity to reach up and grab those $100 bills flying overhead for filtered news services.

What is the New York Times if not an opinionated and poorly run filter on all the possible information it could report?

Apply that same lesson to social media!

PS: Seriously, before going public, I would go to the one court-based rule on content. There’s no profit and no wins in censoring any content on your own. Someone will always want more or less. Courts get paid to make those decisions.

Check with your lawyers, but if you don’t look at any content, you can’t be charged with constructive notice of it. Unless and until someone points it out; then you have to follow the DMCA, court orders, etc.

Mute Account vs. Mute Word/Hashtag – Ineffectual Muting @Twitter

Thursday, November 17th, 2016

[Image: twitter-hate-speech-460]

I mentioned yesterday the distinction between muting an account versus the new muting by word or #hashtag at Twitter.

Take a moment to check my sources at Twitter support to make sure I have the rules correctly stated. I’ll wait.

(I’m not a journalist, but readers should be able to satisfy themselves that the claims I make are at least plausible.)

No feedback from Twitter on the “don’t appear in your timeline” vs. “do appear in your timeline” distinction.

Why would I want to only block notifications of what I think of as hate speech and still have those tweets in my timeline?

Then it occurred to me:

If you can block tweets from appearing in your timeline by word or hashtag, you can block advertising tweets from appearing in your timeline.

Twitter cannot let you effectively mute hate speech, because any mechanism that removed tweets from your timeline by word or hashtag could also remove advertising.

What about it Twitter?

Must feminists, people of color, minorities of all types be subjected to hate speech in order to preserve your revenue streams?


Not that I object to Twitter having revenue streams from advertising, but it needs a more sophisticated model than the Nigerian-spammer approach now in use. Charge a higher price for targeted advertising that users are unlikely to block.

For example, I would be highly unlikely to block ads for cs theory/semantic integration tomes. On the other hand, I would follow a mute list that blocked histories of famous cricket matches. (Apologies to any cricket players in the audience.)

In my post: Twitter Almost Enables Personal Muting + Roving Citizen-Censors I offer a solution that requires only minor changes based on data Twitter already collects plus regexes for muting. It puts what you see entirely in the hands of users.

That enables Twitter to get out of the censorship business altogether, something it doesn’t do well anyway, and puts users in charge of what they see. A win-win from my perspective.

Alt-right suspensions lay bare Twitter’s consistency [hypocrisy] problem

Thursday, November 17th, 2016

Alt-right suspensions lay bare Twitter’s consistency problem by Nausicaa Renner.

From the post:

TWITTER SUSPENDED A NUMBER OF ACCOUNTS associated with the alt-right, USA Today reported this morning. This move was bound to be divisive: While Twitter has banned and suspended users in the past (prominently, Milo Yiannopoulos for incitement), USA Today points out the company has never suspended so many at once—at least seven in this case. Richard Spencer, one of the suspended users and prominent alt-righter, also had a verified account on Twitter. He claims, “I, and a number of other people who have just got banned, weren’t even trolling.”

If this is true, it would be a powerful political statement, indeed. As David Frum notes in The Atlantic, “These suspensions seem motivated entirely by viewpoint, not by behavior.” Frum goes on to argue that a kingpin strategy on Twitter’s part will only strengthen the alt-right’s audience. But we may never know Twitter’s reasoning for suspending the accounts. Twitter declined to comment on its moves, citing privacy and security reasons.

(emphasis in original)

Contrary to the claims of the Southern Poverty Law Center (SPLC) to Twitter, these users may not have been suspended for violating Twitter’s terms of service, but for their viewpoints.

Like the CIA, FBI and NSA, Twitter uses secrecy to avoid accountability and transparency for its suspension process.

The secrecy – avoidance of accountability/transparency pattern is one you should commit to memory. It is quite common.

Twitter needs to develop better muting options for users and abandon account suspension (save on court order) altogether.

Twitter Almost Enables Personal Muting + Roving Citizen-Censors

Wednesday, November 16th, 2016

Investigating news reports of Twitter enabling muting of words and hashtags lead me to Advanced muting options on Twitter. Also relevant is Muting accounts on Twitter.

Alex Hern‘s post: Twitter users to get ability to mute words and conversations prompted this search because I found:

After nine years, Twitter users will finally be able to mute specific conversations on the site, as well as filter out all tweets with a particular word or phrase from their notifications.

The much requested features are being rolled out today, according to the company. Muting conversations serves two obvious purposes: users who have a tweet go viral will no longer have to deal with thousands of replies from strangers, while users stuck in an interminable conversation between people they don’t know will be able to silently drop out of the discussion.

A broader mute filter serves some clear general uses as well. Users will now be able to mute the names of popular TV shows, for instance, or the teams playing in a match they intend to watch later in the day, from showing up in their notifications, although the mute will not affect a user’s main timeline. “This is a feature we’ve heard many of you ask for, and we’re going to keep listening to make it better and more comprehensive over time,” says Twitter in a blogpost.

to be too vague to be useful.

Starting with Advanced muting options on Twitter, you don’t have to read far to find:

Note: Muting words and hashtags only applies to your notifications. You will still see these Tweets in your timeline and via search. The muted words and hashtags are applied to replies and mentions, including all interactions on those replies and mentions: likes, Retweets, additional replies, and Quote Tweets.

That’s the second paragraph and displayed with a high-lighted background.

So, “muting” of words and hashtags only stops notifications.

“Muted” offensive or inappropriate content is still visible “in your timeline and search.”

Perhaps really muting based on words and hashtags will be a paid subscription feature?

The other curious aspect is that “muting” an account carries an entirely different meaning.

The first sentence in Muting accounts on Twitter reads:

Mute is a feature that allows you to remove an account’s Tweets from your timeline without unfollowing or blocking that account.

Quick Summary:

  • Mute account – Tweets don’t appear in your timeline.
  • Mute by word or hashtag – Tweets do appear in your timeline.

How lame is that?

Solution That Avoids Censorship

The solution to Twitter’s “hate speech” problem, where “hate speech” means different things to different people, isn’t hard to imagine:

  1. Mute by account, word, hashtag or regex – Tweets don’t appear in your timeline.
  2. Mute lists can be shared and/or followed by others.

Which means that if I trust N’s judgment on “hate speech,” I can follow their mute list. That saves me the effort of constructing my own mute list and perhaps even encourages the construction of public mute lists.

Twitter has the technical capability to produce such a solution in short order so you have to wonder why they haven’t? I have no delusion of being the first person to have imagined such a solution. Twitter? Comments?
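To make the proposal concrete, here is a minimal sketch of such a shareable mute list. Everything in it (the class name, the sample accounts and patterns) is hypothetical illustration of the scheme, not any actual Twitter feature or API:

```python
import re

class MuteList:
    """A shareable mute list: suppress tweets by account, word, hashtag, or regex."""

    def __init__(self, accounts=(), patterns=()):
        self.accounts = set(accounts)
        # Words and hashtags are just simple regexes, so one mechanism covers all cases.
        self.patterns = [re.compile(p, re.IGNORECASE) for p in patterns]

    def mutes(self, author, text):
        """True if a tweet should be dropped from the timeline entirely."""
        if author in self.accounts:
            return True
        return any(p.search(text) for p in self.patterns)

# Following someone else's mute list is just loading their entries.
shared = MuteList(accounts={"@spammer"},
                  patterns=[r"#badhashtag", r"\bcricket\b"])

print(shared.mutes("@spammer", "anything at all"))       # → True
print(shared.mutes("@friend", "Great #badhashtag"))      # → True
print(shared.mutes("@friend", "semantic integration"))   # → False
```

Because a mute list is plain data, publishing, following, and merging lists is trivial, which is the point: the filtering decision stays with the user who chose to follow the list.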

The Alternative Solution – Roving Citizen-Censors

The alternative to a clean and non-censoring solution is covered in the USA Today report Twitter suspends alt-right accounts:

Twitter suspended a number of accounts associated with the alt-right movement, the same day the social media service said it would crack down on hate speech.

Among those suspended was Richard Spencer, who runs an alt-right think tank and had a verified account on Twitter.

The alt-right, a loosely organized group that espouses white nationalism, emerged as a counterpoint to mainstream conservatism and has flourished online. Spencer has said he wants blacks, Asians, Hispanics and Jews removed from the U.S.

[I personally find Richard Spencer’s views abhorrent and report them here only by way of example.]

From the report, Twitter didn’t go gunning for Richard Spencer’s account but the Southern Poverty Law Center (SPLC) did.

The SPLC didn’t follow more than 100 white supremacists to counter their outlandish claims or to offer a counter-narrative. They followed them to gather evidence of alleged violations of Twitter’s terms of service and to request removal of those accounts.

Government censorship of free speech is bad enough, enabling roving bands of self-righteous citizen-censors to do the same is even worse.

The counter-claim that Twitter isn’t the government, so it’s not censorship, etc., is intellectually and morally dishonest. It is technically true in the U.S. constitutional law sense, but suppression of speech is the goal and that’s censorship, whatever fig leaf the SPLC wants to put on it. They should be honest enough to claim and defend the right to censor the speech of others.

I would not vote in their favor, that is, I would not grant that they have a right to censor the speech of others. They are free to block speech they don’t care to hear, which is what my solution to “hate speech” on Twitter enables.

Support muting, not censorship or roving bands of citizen-censors.

Preventing Another Trump – Censor Facebook To Protect “Dumb” Voters

Saturday, November 12th, 2016

Facebook can no longer be ‘I didn’t do it’ boy of global media by Emily Bell.


Barack Obama called out the fake news problem directly at a rally in Michigan on the eve of the election: “And people, if they just repeat attacks enough, and outright lies over and over again, as long as it’s on Facebook and people can see it, as long as it’s on social media, people start believing it….And it creates this dust cloud of nonsense.”

Yesterday, Zuckerberg disputed this, saying that “the idea that fake news on Facebook… influenced the election…is a pretty crazy idea” and defending the “diversity” of information Facebook users see. Adam Mosseri, the company’s VP of Product Development, said Facebook must work on “improving our ability to detect misinformation.” This line is part of Zuckerberg’s familiar but increasingly unconvincing narrative that Facebook is not a media company, but a tech company. Given the shock of Trump’s victory and the universal finger-pointing at Facebook as a key player in the election, it is clear that Zuckerberg is rapidly losing that argument.

In fact, Facebook, now the most influential and powerful publisher in the world, is becoming the “I didn’t do it” boy of global media. Clinton supporters and Trump detractors are searching for reasons why a candidate who lied so frequently and so flagrantly could have made it to the highest office in the land. News organizations, particularly cable news, are shouldering part of the blame for failing to report these lies for what they were. But a largely hidden sphere of propagandistic pages that target and populate the outer reaches of political Facebook are arguably even more responsible.

You can tell Bell has had several cups of the Obama kool-aid by her uncritical acceptance of Barack Obama’s groundless attacks on “…fake news problem….”

Does Bell examine the incidence of “fake news” in other elections?

No.

Does Bell specify which particular “fake news” stories should have been corrected?

No.

Does Bell explain why voters can’t distinguish “fake news” from truthful news?

No.

Does Bell explain why mainstream media is better than voters at detecting “fake news?”

No.

Does Bell explain why she should be the judge over reporting during the 2016 Presidential election?

No.

Does Bell explain why she and Obama consider voters to be dumber than themselves?

No.

Do I think Bell or anyone else should be censoring Facebook for “false news?”

No.

How about you?

Freedom of Speech/Press – Great For “Us” – Not So Much For You (Wikileaks)

Saturday, November 5th, 2016

The New York Times, sensing a possible defeat of its neo-liberal agenda on November 8, 2016, has loosed the dogs of war on social media in general and Wikileaks in particular.

Consider the sleight of hand in Farhad Manjoo’s How the Internet Is Loosening Our Grip on the Truth, which argues on one hand,


You’re Not Rational

The root of the problem with online news is something that initially sounds great: We have a lot more media to choose from.

In the last 20 years, the internet has overrun your morning paper and evening newscast with a smorgasbord of information sources, from well-funded online magazines to muckraking fact-checkers to the three guys in your country club whose Facebook group claims proof that Hillary Clinton and Donald J. Trump are really the same person.

A wider variety of news sources was supposed to be the bulwark of a rational age — “the marketplace of ideas,” the boosters called it.

But that’s not how any of this works. Psychologists and other social scientists have repeatedly shown that when confronted with diverse information choices, people rarely act like rational, civic-minded automatons. Instead, we are roiled by preconceptions and biases, and we usually do what feels easiest — we gorge on information that confirms our ideas, and we shun what does not.

This dynamic becomes especially problematic in a news landscape of near-infinite choice. Whether navigating Facebook, Google or The New York Times’s smartphone app, you are given ultimate control — if you see something you don’t like, you can easily tap away to something more pleasing. Then we all share what we found with our like-minded social networks, creating closed-off, shoulder-patting circles online.

This gets to the deeper problem: We all tend to filter documentary evidence through our own biases. Researchers have shown that two people with differing points of view can look at the same picture, video or document and come away with strikingly different ideas about what it shows.

You caught the invocation of authority by Manjoo, “researchers have shown,” etc.

But did you notice he never shows his other hand?

If the public is so bat-shit crazy that it takes all social media content as equally trustworthy, what are we to do?

Well, that is the question isn’t it?

In his conclusion, Manjoo invokes “dozens of news outlets” tirelessly but hopelessly fact-checking on our behalf.

The strong implication is that without the help of “media outlets,” you are a bundle of preconceptions and biases doing what feels easiest.

“News outlets,” on the other hand, are free from those limitations.

You bet.

If you thought Manjoo was bad, enjoy seething through Zeynep Tufekci’s claims that Wikileaks is an opponent of privacy, a sponsor of censorship and an opponent of democracy, all in a little over 1,000 words (1,069 by exact count). Wikileaks Isn’t Whistleblowing.

It’s a breathtaking piece of half-truths.

For example, playing for your sympathy, Tufekci invokes the need of dissidents for privacy. Even to the point of invoking the ghost of the former Soviet Union.

Tufekci overlooks, and hopes you do as well, that these emails weren’t from dissidents, but from people who traded in and on the whims and caprices at the pinnacles of American power.

Perhaps realizing that is too transparent a ploy, she recounts other data dumps by Wikileaks to which she objects. As lawyers say, if the facts are against you, pound on the table.

In an echo of Manjoo, did you know you are too dumb to distinguish critical information from trivial?

Tufekci writes:

These hacks also function as a form of censorship. Once, censorship worked by blocking crucial pieces of information. In this era of information overload, censorship works by drowning us in too much undifferentiated information, crippling our ability to focus. These dumps, combined with the news media’s obsession with campaign trivia and gossip, have resulted in whistle-drowning, rather than whistle-blowing: In a sea of so many whistles blowing so loud, we cannot hear a single one.

I don’t think you are that dumb.

Do you?

But who will save us? You can guess Tufekci’s answer, but here it is in full:

Journalism ethics have to transition from the time of information scarcity to the current realities of information glut and privacy invasion. For example, obsessively reporting on internal campaign discussions about strategy from the (long ago) primary, in the last month of a general election against a different opponent, is not responsible journalism. Out-of-context emails from WikiLeaks have fueled viral misinformation on social media. Journalists should focus on the few important revelations, but also help debunk false misinformation that is proliferating on social media.

If her parade of horrors hasn’t frightened you into agreement by now, here is her closing flourish:

We can’t shrug off these dangers just because these hackers have, so far, largely made relatively powerful people and groups their targets. Their true target is the health of our democracy.

So now Wikileaks is gunning for democracy?

You bet. 😉

The journalists of my youth, think Vietnam, think Watergate, were aggressive critics of government and the powerful. The Panama Papers project is evidence that that level of journalism still exists.

Instead of whining about releases by Wikileaks and others, journalists* need to step up and provide the context they see as lacking.

It would sure beat the hell out of repeating news releases from military commanders, “justice” department mouthpieces, and official but “unofficial” leaks from the American intelligence community.

* Like any generalization, this is grossly unfair to the many journalists who work on behalf of the public every day but lack the megaphone of the government lapdog New York Times. To those journalists, and only them, do I apologize in advance for any offense given. The rest of you, take such offense as is appropriate.

Bias in Data Collection: A UK Example

Monday, October 10th, 2016

Kelly Fiveash‘s story, UK’s chief troll hunter targets doxxing, virtual mobbing, and nasty images starts off:

Trolls who hurl abuse at others online using techniques such as doxxing, baiting, and virtual mobbing could face jail, the UK’s top prosecutor has warned.

New guidelines have been released by the Crown Prosecution Service to help cops in England and Wales determine whether charges—under part 2, section 44 of the 2007 Serious Crime Act—should be brought against people who use social media to encourage others to harass folk online.

It even includes “encouraging” statistics:

According to the most recent publicly available figures—which cite data between May 2013 and December 2014—1,850 people were found guilty in England and Wales of offences under section 127 of the Communications Act 2003. But the numbers reveal a steady climb in charges against trolls. In 2007, there were a total of 498 defendants found guilty under section 127 in England and Wales, compared with 693 in 2008, 873 in 2009, 1,186 in 2010 and 1,286 in 2011.

But those “most recent publicly available figures” don’t ring true, do they?

Imagine that: 1,850 trolls out of a combined England and Wales population of 57 million. (England 53.9 million, Wales 3.1 million, mid-2013)

Really?
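A back-of-the-envelope check of the figures quoted above makes the point. This is only a sketch; the conviction counts and mid-2013 population estimate are taken from the article itself:

```python
# Section 127 convictions in England and Wales, per the figures quoted above.
convictions = {2007: 498, 2008: 693, 2009: 873, 2010: 1186, 2011: 1286}

# The "steady climb": year-over-year growth in convictions.
years = sorted(convictions)
for prev, curr in zip(years, years[1:]):
    growth = (convictions[curr] - convictions[prev]) / convictions[prev]
    print(f"{prev} -> {curr}: {growth:+.0%}")

# 1,850 convictions (May 2013 - Dec 2014) against a mid-2013 population
# of roughly 57 million (England 53.9M + Wales 3.1M).
rate_per_100k = 1850 / 57_000_000 * 100_000
print(f"Convictions per 100,000 people: {rate_per_100k:.1f}")
```

Roughly three convictions per 100,000 people over a twenty-month window. Whatever one thinks of the prosecutions, the raw counts tell us almost nothing without the rest of the pipeline data.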

Let’s look at the referenced government data, 25015 Table.xls.

For the months of May 2013 to December 2014, there are only monthly totals of convictions.

What data is not being collected?

Among other things:

  1. Offenses reported to law enforcement
  2. Offenses investigated by law enforcement (not the same as #1)
  3. Conduct in question
  4. Relationship, if any, between the alleged offender/victim
  5. Race, economic status, location, social connections of alleged offender/victim
  6. Law enforcement and/or prosecutors involved
  7. Disposition of cases without charges being brought
  8. Disposition of cases after charges brought but before trial
  9. Charges dismissed by courts and acquittals
  10. Judges who try and/or dismiss charges
  11. Penalties imposed upon guilty plea and/or conviction
  12. Appeals and results on appeal, judges, etc.

All that information exists for every reported case of “trolls,” and is recorded at some point in the criminal justice process or could be discerned from those records.
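To make the gap concrete, here is a sketch of the kind of per-case record the government could be keeping, one field per item in the list above. The class and field names are my own invention, not any official Ministry of Justice schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TrollCaseRecord:
    """Hypothetical per-case record covering the uncollected fields above."""
    reported_to_police: bool                     # 1. offense reported
    investigated: bool                           # 2. offense investigated
    conduct: str                                 # 3. conduct in question
    offender_victim_relationship: Optional[str]  # 4. relationship, if any
    offender_demographics: dict = field(default_factory=dict)  # 5. race, status, etc.
    victim_demographics: dict = field(default_factory=dict)    # 5. (victim side)
    officials_involved: list = field(default_factory=list)     # 6. police/prosecutors
    disposition_pre_charge: Optional[str] = None   # 7. dropped before charges
    disposition_pre_trial: Optional[str] = None    # 8. dropped after charges
    court_outcome: Optional[str] = None            # 9. dismissal/acquittal/conviction
    judge: Optional[str] = None                    # 10. judge(s) trying/dismissing
    penalty: Optional[str] = None                  # 11. sentence on plea/conviction
    appeal_result: Optional[str] = None            # 12. appeals and results
```

Even a minimal structure like this, filled in as each case moves through the system, would let the patterns of discretion, dismissal, and sentencing discussed below be queried as statistical facts rather than anecdotes.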

Can you guess who isn’t collecting that information?

The TheyWorkForYou site, reporting on the Communications Act 2003, quotes Jeremy Wright, the Parliamentary Under-Secretary of State for Justice, as saying:

The Ministry of Justice Court Proceedings Database holds information on defendants proceeded against, found guilty and sentenced for criminal offences in England and Wales. This database holds information on offences provided by the statutes under which proceedings are brought but not the specific circumstances of each case. It is not possible to separately identify, in all cases brought under section 127 of the Communications Act 2003, whether a defendant sent or caused to send information to an individual or a small group of individuals or made the information widely available to the public. This detailed information may be held by the courts on individual case files which due to their size and complexity are not reported to Justice Analytical Services. As such this information can be obtained only at disproportionate cost.
… (emphasis added)

I was unaware that courts in England and Wales were still recording their proceedings on vellum. That would make manually gathering the data expensive. (NOT!)

How difficult would it be for any policy organization, whether seeking greater protection from trolls or opposing classes of prosecution on grounds of discrimination and free speech, to gather the same data?

Here is a map of the Crown Prosecution Service districts:

[Map: Crown Prosecution Service areas]

Counting the sub-offices in each area, I get forty-three separate offices.

But those are only the cases considered for prosecution, and that is unlikely to be the same number as the cases reported to the police.

Checking for police districts in England, I get thirty-nine.

[Map: territorial police forces of England]

Plus, another four areas for Wales:

[Map: territorial police forces of Wales]

The Wikipedia article List of law enforcement agencies in the United Kingdom, Crown dependencies and British Overseas Territories has links for all these police areas, which in the interest of space, I did not repeat here.

I wasn’t able to quickly find a map of English criminal courts, although you can locate them by postcode at: Find the right court or tribunal. My suspicion is that Crown Prosecution Service areas correspond to criminal courts. But verify that for yourself.

In order to collect the information already in the possession of the government, you would have to search records in 43 police districts, 43 Crown Prosecution Service offices, plus as many as 43 criminal courts in which defendants may be prosecuted. All over England and Wales. With unhelpful clerks all along the way.

All while the government offers the classic excuse:

As such this information can be obtained only at disproportionate cost.

Disproportionate because abuse of discretion, lax enforcement, favoritism, and discrimination by police officers, Crown prosecutors, and judges could be demonstrated as statistical facts?

Governments are old hands at not collecting evidence they prefer to not see thrown back in their faces.

For example: FBI director calls lack of data on police shootings ‘ridiculous,’ ‘embarrassing’.

Non-collection of data is a source of bias.

What bias is behind the failure to collect troll data in the UK?