Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

December 6, 2016

Attn: “Fake News” Warriors! Where’s The Harm In Terrorist Propaganda?

Filed under: Censorship,Free Speech,Government,Journalism — Patrick Durusau @ 7:34 pm

Facebook, Microsoft, Twitter, and YouTube team up to stop terrorist propaganda by Justin Carissimo.

Justin’s report is true, at least in the sense that Facebook, Microsoft, Twitter, and YouTube are collaborating to censor “terrorist propaganda.”

Justin’s post also propagates the “fake news” that online content from terrorists “…threaten our national security and public safety….”

Really? You would think after all these years of terrorist propaganda, there would be evidence to support that claim.

True enough, potential terrorists can meet online, but “recruitment” is a far different tale than reading online terrorist content. Consider ISIS and the Lonely Young American, a tale told to support the idea of online recruiting that is, in fact, one of the better refutations of that danger.

It’s not hard to whistle up alleged social science studies of online “terrorist propaganda,” but the impacts of that so-called propaganda are speculation at best, when not outright fantasies of the authors.

“Fake News” warriors should challenge the harmful terrorist propaganda narrative as well as narratives that are laughably false (climate change denial, for example).

December 5, 2016

Resisting EU Censorship

Filed under: Censorship,Free Speech,Government — Patrick Durusau @ 10:45 am

US tech giants like Facebook could face new EU laws forcing them to tackle hate speech by Arjun Kharpal.

From the post:

U.S. technology giants including Facebook, Twitter, Microsoft, and Google’s YouTube could face new laws forcing them to deal with online hate speech if they don’t tackle the problem themselves, the European Commission warned.

In May, the four U.S. firms unveiled a “code of conduct” drawn up in conjunction with the Commission, the European Union’s executive arm, to take on hate speech on their platforms. It involved a series of commitments including a pledge to review the majority of notifications of suspected illegal hate speech in less than 24 hours and remove or disable access to the content if necessary. Another promise was to provide regular training to staff around hate speech.

But six months on, the Commission is not happy with the progress. EU Justice Commissioner Vera Jourova has commissioned a report, set to be released later this week, which claims that progress in removing offending material has been too slow.

I posted about this ill-fated “code of conduct” under Four Horsemen Of Internet Censorship + One. I pointed out the only robust solution to the “hate speech” problem was to enable users, not the EU, to filter the content they see.

Fast forward 2 internet years (3 months = 1 internet year) and the EU is seeking to increase its censorship powers and not to empower users to regulate the content they consume.

Adding injury to insult, the EU proposes directives that require uncompensated expenditures on the part of its victims, Facebook, Twitter, Microsoft, and Google, to meet criteria that can only be specified user by user.

Why censorship is the EU’s first refuge for disagreeable speech, I don’t know. What I do know is that any tolerance of EU censorship demands encourages even more outrageous censorship demands.

The usual suspects should push back and push back hard against EU demands for censorship.

Enabling users to filter content means users can shape incoming streams to fit their personal sensitivities and dislikes, without impinging on the rights of others.

Had Facebook, Twitter, Microsoft, and Google started developing shareable content filters when they proposed their foolish “code of conduct” to the EU last May, those filters would be available, or nearly so, by now.

Social media providers should not waste any further time attempting to censor on behalf of the EU or users. Enable users to censor their own content and get out of the censorship business.

There’s no profit in the censorship business. In fact, there is only expense and wasted effort.

PS: The “EU report” in question won’t be released until Wednesday, December 7, 2016 (or so I am told).

November 30, 2016

Internet Censor(s) Spotted in Mirror

Filed under: Censorship,Free Speech,Government — Patrick Durusau @ 9:04 pm

How to solve Facebook’s fake news problem: experts pitch their ideas by Nicky Woolf.

From the post:

The impact of fake news, propaganda and misinformation has been widely scrutinized since the US election. Fake news actually outperformed real news on Facebook during the final weeks of the election campaign, according to an analysis by Buzzfeed, and even outgoing president Barack Obama has expressed his concerns.

But a growing cadre of technologists, academics and media experts are now beginning the quixotic process of trying to think up solutions to the problem, starting with a rambling 100+ page open Google document set up by Upworthy founder Eli Pariser.

Woolf captures the essential wrongness of the now 120 pages of suggestions, quoting Claire Wardle:


“The biggest challenge is who wants to be the arbiter of truth and what truth is,” said Claire Wardle, research director for the Tow Center for Digital Journalism at Columbia University. “The way that people receive information now is increasingly via social networks, so any solution that anybody comes up with, the social networks have to be on board.”

Don’t worry, selecting the arbiter of truth and what truth is won’t be difficult.

The authors of these suggestions see their favorite candidate every day:

[Image: mirror]

So long as they aren’t seeing my image (substitute your name/image) in the mirror, I’m not interested in any censorship proposal.

Personally, even if offered the post of Internet Censor, I would turn it down.

I can’t speak for you but I am unable to be equally impartial to all. Nor do I trust anyone else to be equally impartial.

The “solution” to “fake news,” if you think that is a meaningful term, is more news, not less.

Enable users to easily compare and contrast news sources, if they so choose. Freedom means being free to make mistakes as well as good choices (from some point of view).

November 24, 2016

China Gets A Facebook Filter, But Not You

Filed under: Censorship,Facebook,Government,News — Patrick Durusau @ 6:11 pm

Facebook ‘quietly developing censorship tool’ for China by Bill Camarda.

From the post:


That’s one take on the events that might have led to today’s New York Times expose: it seems Facebook has tasked its development teams with “quietly develop[ing] software to suppress posts from appearing in people’s news feeds in specific geographic areas”.

As “current and former Facebook employees” told the Times, Facebook wouldn’t do the suppression themselves, nor need to. Rather:

It would offer the software to enable a third party – in this case, most likely a partner Chinese company – to monitor popular stories and topics that bubble up as users share them across the social network… Facebook’s partner would then have full control to decide whether those posts should show up in users’ feeds.

This is a step beyond the censorship Facebook has already agreed to perform on behalf of governments such as Turkey, Russia and Pakistan. In those cases, Facebook agreed to remove posts that had already “gone live”. If this software were in use, offending posts could be halted before they ever appeared in a local user’s news feed.

You can’t filter your own Facebook timeline or share your filter with other Facebook users, but the Chinese government can filter the timelines of 721,000,000+ internet users?

My proposal for Facebook filters would generate income for Facebook, filter writers and enable the 3,600,000,000+ internet users around the world to filter their own content.
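To make that proposal concrete, here is a minimal sketch, in Python, of what a shareable filter could look like: the filter is plain data that one user, or a paid filter writer, publishes, and another user subscribes to and applies to their own feed. The FeedItem type, the example filter, and its field names are all hypothetical illustrations, not anything Facebook actually offers.

```python
import json
from dataclasses import dataclass

# Hypothetical feed item; Facebook exposes no such user-side filtering hook.
@dataclass
class FeedItem:
    author: str
    text: str

# A filter is plain data, so it can be published, shared, or sold.
SHARED_FILTER = json.loads("""
{
  "name": "example-graphic-violence-filter",
  "blocked_authors": ["SpamNewsDaily"],
  "blocked_terms": ["beheading", "execution video"],
  "blocked_hashtags": ["#gore"]
}
""")

def is_hidden(item: FeedItem, flt: dict) -> bool:
    """True if the subscriber's chosen filter says this item should be hidden."""
    if item.author in flt["blocked_authors"]:
        return True
    text = item.text.lower()
    if any(term in text for term in flt["blocked_terms"]):
        return True
    return any(tag.lower() in text for tag in flt["blocked_hashtags"])

feed = [
    FeedItem("SpamNewsDaily", "Shocking execution video #gore"),
    FeedItem("FriendA", "Pictures from the lake this weekend"),
]

for item in feed:
    if not is_hidden(item, SHARED_FILTER):
        print(item.author, ":", item.text)
```

The point is that the filtering decision stays with the reader: unsubscribe from the filter and the hidden items reappear, with no one else’s timeline affected.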

All of Zuckerberg’s ideas:

Stronger detection. The most important thing we can do is improve our ability to classify misinformation. This means better technical systems to detect what people will flag as false before they do it themselves.

Easy reporting. Making it much easier for people to report stories as fake will help us catch more misinformation faster.

Third party verification. There are many respected fact checking organizations and, while we have reached out to some, we plan to learn from many more.

Warnings. We are exploring labeling stories that have been flagged as false by third parties or our community, and showing warnings when people read or share them.

Related articles quality. We are raising the bar for stories that appear in related articles under links in News Feed.

Disrupting fake news economics. A lot of misinformation is driven by financially motivated spam. We’re looking into disrupting the economics with ads policies like the one we announced earlier this week, and better ad farm detection.

Listening. We will continue to work with journalists and others in the news industry to get their input, in particular, to better understand their fact checking systems and learn from them.

Taken together, these ideas enthrone Zuckerberg as Censor of the Internet.

His blinding lust to be Censor of the Internet* is responsible for Zuckerberg passing up $millions, if not $billions, in filtering revenue.

Facebook shareholders should question this loss of revenue at every opportunity.

* Zuckerberg’s “lust” to be “Censor of the Internet” is an inference based on the Facebook-centered nature of his “ideas” for dealing with “fake news.” Unpaid censorship instead of profiting from user-centered filtering is a sign of poor judgment and/or madness.

November 17, 2016

Alt-right suspensions lay bare Twitter’s consistency [hypocrisy] problem

Filed under: Censorship,Free Speech,Twitter — Patrick Durusau @ 10:10 am

Alt-right suspensions lay bare Twitter’s consistency problem by Nausicaa Renner.

From the post:

TWITTER SUSPENDED A NUMBER OF ACCOUNTS associated with the alt-right, USA Today reported this morning. This move was bound to be divisive: While Twitter has banned and suspended users in the past (prominently, Milo Yiannopoulos for incitement), USA Today points out the company has never suspended so many at once—at least seven in this case. Richard Spencer, one of the suspended users and prominent alt-righter, also had a verified account on Twitter. He claims, “I, and a number of other people who have just got banned, weren’t even trolling.”

If this is true, it would be a powerful political statement, indeed. As David Frum notes in The Atlantic, “These suspensions seem motivated entirely by viewpoint, not by behavior.” Frum goes on to argue that a kingpin strategy on Twitter’s part will only strengthen the alt-right’s audience. But we may never know Twitter’s reasoning for suspending the accounts. Twitter declined to comment on its moves, citing privacy and security reasons.

(emphasis in original)

Contrary to the claims of the Southern Poverty Law Center (SPLC) to Twitter, these users may not have been suspended for violating Twitter’s terms of service, but for their viewpoints.

Like the CIA, FBI and NSA, Twitter uses secrecy to avoid accountability and transparency for its suspension process.

The secrecy – avoidance of accountability/transparency pattern is one you should commit to memory. It is quite common.

Twitter needs to develop better muting options for users and abandon account suspension (save on court order) altogether.

November 16, 2016

Twitter Almost Enables Personal Muting + Roving Citizen-Censors

Filed under: Censorship,Free Speech,Tweets,Twitter — Patrick Durusau @ 12:40 pm

Investigating news reports of Twitter enabling muting of words and hashtags led me to Advanced muting options on Twitter. Also relevant is Muting accounts on Twitter.

Alex Hern’s post, Twitter users to get ability to mute words and conversations, prompted this search, because I found the following passage:

After nine years, Twitter users will finally be able to mute specific conversations on the site, as well as filter out all tweets with a particular word or phrase from their notifications.

The much requested features are being rolled out today, according to the company. Muting conversations serves two obvious purposes: users who have a tweet go viral will no longer have to deal with thousands of replies from strangers, while users stuck in an interminable conversation between people they don’t know will be able to silently drop out of the discussion.

A broader mute filter serves some clear general uses as well. Users will now be able to mute the names of popular TV shows, for instance, or the teams playing in a match they intend to watch later in the day, from showing up in their notifications, although the mute will not affect a user’s main timeline. “This is a feature we’ve heard many of you ask for, and we’re going to keep listening to make it better and more comprehensive over time,” says Twitter in a blogpost.

to be too vague to be useful.

Starting with Advanced muting options on Twitter, you don’t have to read far to find:

Note: Muting words and hashtags only applies to your notifications. You will still see these Tweets in your timeline and via search. The muted words and hashtags are applied to replies and mentions, including all interactions on those replies and mentions: likes, Retweets, additional replies, and Quote Tweets.

That’s the second paragraph, and it is displayed with a highlighted background.

So, “muting” of words and hashtags only stops notifications.

“Muted” offensive or inappropriate content is still visible “in your timeline and search.”

Perhaps really muting based on words and hashtags will be a paid subscription feature?

The other curious aspect is that “muting” an account carries an entirely different meaning.

The first sentence in Muting accounts on Twitter reads:

Mute is a feature that allows you to remove an account’s Tweets from your timeline without unfollowing or blocking that account.

Quick Summary:

  • Mute account – Tweets don’t appear in your timeline.
  • Mute by word or hashtag – Tweets do appear in your timeline.

How lame is that?

Solution That Avoids Censorship

The solution to Twitter’s “hate speech,” which means different things to different people, isn’t hard to imagine:

  1. Mute by account, word, hashtag or regex – Tweets don’t appear in your timeline.
  2. Mute lists can be shared and/or followed by others.

Which means that if I trust N’s judgment on “hate speech,” I can follow their mute list. That saves me the effort of constructing my own mute list and perhaps even encourages the construction of public mute lists.

Twitter has the technical capability to produce such a solution in short order, so you have to wonder why it hasn’t. I have no delusion of being the first person to have imagined such a solution. Twitter? Comments?
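For what it’s worth, here is a minimal sketch of that two-point proposal, assuming nothing about Twitter’s internals: mute rules keyed by account, word, hashtag, or regex, plus the ability to merge someone else’s published mute list into your own. All names, structures, and the sample list are hypothetical illustrations.

```python
import re
from dataclasses import dataclass, field

@dataclass
class MuteList:
    accounts: set = field(default_factory=set)
    words: set = field(default_factory=set)
    hashtags: set = field(default_factory=set)
    regexes: list = field(default_factory=list)

    def follow(self, other: "MuteList") -> None:
        """Merge a shared mute list, e.g. one published by a user whose judgment you trust."""
        self.accounts |= other.accounts
        self.words |= other.words
        self.hashtags |= other.hashtags
        self.regexes += other.regexes

    def mutes(self, author: str, text: str) -> bool:
        """True if the tweet should not appear in the timeline at all."""
        lowered = text.lower()
        return (
            author.lower() in self.accounts
            or any(word in lowered for word in self.words)
            or any(tag in lowered for tag in self.hashtags)
            or any(re.search(pattern, text, re.IGNORECASE) for pattern in self.regexes)
        )

# My own (empty) list, plus N's published list that I choose to follow.
mine = MuteList()
ns_list = MuteList(words={"white nationalism"}, regexes=[r"\bgo back to\b"])
mine.follow(ns_list)

timeline = [
    ("troll99", "They should all go back to where they came from"),
    ("friend", "New paper on topic maps is out"),
]
print([tweet for tweet in timeline if not mine.mutes(*tweet)])
```

Nothing is deleted and nothing is hidden from anyone who has not chosen the list, which is the difference between muting and censorship.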

The Alternative Solution – Roving Citizen-Censors

The alternative to a clean and non-censoring solution is covered in the USA Today report Twitter suspends alt-right accounts:

Twitter suspended a number of accounts associated with the alt-right movement, the same day the social media service said it would crack down on hate speech.

Among those suspended was Richard Spencer, who runs an alt-right think tank and had a verified account on Twitter.

The alt-right, a loosely organized group that espouses white nationalism, emerged as a counterpoint to mainstream conservatism and has flourished online. Spencer has said he wants blacks, Asians, Hispanics and Jews removed from the U.S.

[I personally find Richard Spencer’s views abhorrent and report them here only by way of example.]

From the report, Twitter didn’t go gunning for Richard Spencer’s account but the Southern Poverty Law Center (SPLC) did.

The SPLC didn’t follow more than 100 white supremacists to counter their outlandish claims or to offer a counter-narrative. They followed them to gather evidence of alleged violations of Twitter’s terms of service and to request removal of those accounts.

Government censorship of free speech is bad enough, enabling roving bands of self-righteous citizen-censors to do the same is even worse.

The counter-claim that Twitter isn’t the government, it’s not censorship, etc., is intellectually and morally dishonest. It is technically true in the U.S. constitutional law sense, but suppression of speech is the goal, and that’s censorship, whatever fig leaf the SPLC wants to put on it. They should be honest enough to claim and defend the right to censor the speech of others.

I would not vote in their favor, that is to say, I would not grant that they have a right to censor the speech of others. They are free to block speech they don’t care to hear, which is what my solution to “hate speech” on Twitter enables.

Support muting, not censorship or roving bands of citizen-censors.

November 12, 2016

Preventing Another Trump – Censor Facebook To Protect “Dumb” Voters

Filed under: Censorship,Free Speech,Government,Journalism,News,Politics,Reporting — Patrick Durusau @ 9:01 pm

Facebook can no longer be ‘I didn’t do it’ boy of global media by Emily Bell.


Barack Obama called out the fake news problem directly at a rally in Michigan on the eve of the election: “And people, if they just repeat attacks enough, and outright lies over and over again, as long as it’s on Facebook and people can see it, as long as it’s on social media, people start believing it….And it creates this dust cloud of nonsense.”

Yesterday, Zuckerberg disputed this, saying that “the idea that fake news on Facebook… influenced the election…is a pretty crazy idea” and defending the “diversity” of information Facebook users see. Adam Mosseri, the company’s VP of Product Development, said Facebook must work on “improving our ability to detect misinformation.” This line is part of Zuckerberg’s familiar but increasingly unconvincing narrative that Facebook is not a media company, but a tech company. Given the shock of Trump’s victory and the universal finger-pointing at Facebook as a key player in the election, it is clear that Zuckerberg is rapidly losing that argument.

In fact, Facebook, now the most influential and powerful publisher in the world, is becoming the “I didn’t do it” boy of global media. Clinton supporters and Trump detractors are searching for reasons why a candidate who lied so frequently and so flagrantly could have made it to the highest office in the land. News organizations, particularly cable news, are shouldering part of the blame for failing to report these lies for what they were. But a largely hidden sphere of propagandistic pages that target and populate the outer reaches of political Facebook are arguably even more responsible.

You can tell Bell has had several cups of the Obama kool-aid by her uncritical acceptance of Barack Obama’s groundless attacks on the “…fake news problem….”

Does Bell examine the incidence of “fake news” in other elections?

No.

Does Bell specify which particular “fake news” stories should have been corrected?

No.

Does Bell explain why voters can’t distinguish “fake news” from truthful news?

No.

Does Bell explain why mainstream media is better than voters at detecting “fake news?”

No.

Does Bell explain why she should be the judge over reporting during the 2016 Presidential election?

No.

Does Bell explain why she and Obama consider voters to be dumber than themselves?

No.

Do I think Bell or anyone else should be censoring Facebook for “false news?”

No.

How about you?

October 10, 2016

Bias in Data Collection: A UK Example

Filed under: Censorship,Free Speech,Law — Patrick Durusau @ 8:56 pm

Kelly Fiveash’s story, UK’s chief troll hunter targets doxxing, virtual mobbing, and nasty images, starts off:

Trolls who hurl abuse at others online using techniques such as doxxing, baiting, and virtual mobbing could face jail, the UK’s top prosecutor has warned.

New guidelines have been released by the Crown Prosecution Service to help cops in England and Wales determine whether charges—under part 2, section 44 of the 2007 Serious Crime Act—should be brought against people who use social media to encourage others to harass folk online.

It even includes “encouraging” statistics:


According to the most recent publicly available figures—which cite data between May 2013 and December 2014—1,850 people were found guilty in England and Wales of offences under section 127 of the Communications Act 2003. But the numbers reveal a steady climb in charges against trolls. In 2007, there were a total of 498 defendants found guilty under section 127 in England and Wales, compared with 693 in 2008, 873 in 2009, 1,186 in 2010 and 1,286 in 2011.

But the “most recent publicly available figures” don’t ring true, do they?

Imagine that: 1,850 trolls out of a combined England and Wales population of 57 million. (England 53.9 million, Wales 3.1 million, mid-2013)

Really?

Let’s look at the referenced government data, 25015 Table.xls.

For the months of May 2013 to December 2014, there are only monthly totals of convictions.

What data is not being collected?

Among other things:

  1. Offenses reported to law enforcement
  2. Offenses investigated by law enforcement (not the same as #1)
  3. Conduct in question
  4. Relationship, if any, between the alleged offender/victim
  5. Race, economic status, location, social connections of alleged offender/victim
  6. Law enforcement and/or prosecutors involved
  7. Disposition of cases without charges being brought
  8. Disposition of cases after charges brought but before trial
  9. Charges dismissed by courts and acquittals
  10. Judges who try and/or dismiss charges
  11. Penalties imposed upon guilty plea and/or conviction
  12. Appeals and results on appeal, judges, etc.

All that information exists for every reported case of “trolls,” and is recorded at some point in the criminal justice process or could be discerned from those records.

Can you guess who isn’t collecting that information?

The TheyWorkForYou site, at Communications Act 2003, reports Jeremy Wright, the Parliamentary Under-Secretary of State for Justice, as saying:


The Ministry of Justice Court Proceedings Database holds information on defendants proceeded against, found guilty and sentenced for criminal offences in England and Wales. This database holds information on offences provided by the statutes under which proceedings are brought but not the specific circumstances of each case. It is not possible to separately identify, in all cases brought under section 127 of the Communications Act 2003, whether a defendant sent or caused to send information to an individual or a small group of individuals or made the information widely available to the public. This detailed information may be held by the courts on individual case files which due to their size and complexity are not reported to Justice Analytical Services. As such this information can be obtained only at disproportionate cost.
… (emphasis added)

I was unaware that courts in England and Wales were still recording their proceedings on vellum. That would make it expensive to manually gather that data together. (NOT!)

How difficult is it for any policy organization, whether seeking greater protection from trolls or opposing classes of prosecution on discrimination and free speech grounds, to gather the same data?

Here is a map of the Crown Prosecution Service districts:

[Image: map of Crown Prosecution Service districts]

Counting the sub-offices in each area, I get forty-three separate offices.

But that’s only cases that are considered for prosecution and that’s unlikely to be the same number as reported to the police.

Checking for police districts in England, I get thirty-nine.

[Image: map of police districts in England]

Plus, another four areas for Wales:

[Image: map of police areas in Wales]

The Wikipedia article List of law enforcement agencies in the United Kingdom, Crown dependencies and British Overseas Territories has links for all these police areas, which in the interest of space, I did not repeat here.

I wasn’t able to quickly find a map of English criminal courts, although you can locate them by postcode at: Find the right court or tribunal. My suspicion is that Crown Prosecution Service areas correspond to criminal courts. But verify that for yourself.

In order to collect the information already in the possession of the government, you would have to search records in 43 police districts, 43 Crown Prosecution Service offices, plus as many as 43 criminal courts in which defendants may be prosecuted. All over England and Wales. With unhelpful clerks all along the way.

All while the government offers the classic excuse:

As such this information can be obtained only at disproportionate cost.

Disproportionate because:

Abuse of discretion, lax enforcement, favoritism, and discrimination by police officers, Crown prosecutors, and judges could be demonstrated as statistical facts?

Governments are old hands at not collecting evidence they prefer to not see thrown back in their faces.

For example: FBI director calls lack of data on police shootings ‘ridiculous,’ ‘embarrassing’.

Non-collection of data is a source of bias.

What bias is behind the failure to collect troll data in the UK?

September 23, 2016

Are You A Closet Book Burner? Google Crowdsources Censorship!

Filed under: Censorship,Free Speech — Patrick Durusau @ 12:52 pm

YouTube is cleaning up and it wants your help! by Lisa Vaas.

From the post:

Google is well aware that the hair-raising comments of YouTube users have turned the service into a fright fest.

It’s tried to drain the swamp. In February 2015, for example, it created a kid-safe app that would keep things like, oh, say, racist/anti-Semitic/homophobic comments or zombies from scaring the bejeezus out of young YouTubers.

Now, Google’s trying something new: it’s soliciting “YouTube Heroes” to don their mental hazmat suits and dive in to do some cleanup.

You work hard to make YouTube better for everyone… and like all heroes, you deserve a place to call home.

Google has renamed the firemen of Fahrenheit 451 to YouTube Heroes.

Positive names cannot change the fact that censors, by any name, are just that: censors.

Google has taken censorship to a new level in soliciting the participation of the close-minded, the intolerant, the bigoted, the fearful, etc., from across the reach of the Internet, to censor YouTube.

Google does own YouTube and if it wants to turn it into a pasty gray pot of safe gruel, it certainly can do so.

As censors flood into YouTube, free thinkers, explorers, users who prefer new ideas over pablum, need to flood out of YouTube.

Ad revenue needs to fall as this ill-advised “come be a YouTube censor” campaign succeeds.

Only falling ad revenue will stop this foray into the folly of censorship by Google.

First steps:

  1. Don’t post videos to Google.
  2. Avoid watching videos on Google as much as possible.
  3. Urge others not to post to or use YouTube.
  4. Post videos to other venues.
  5. Speak out against YouTube censorship.
  6. Urge YouTube authors to post/repost elsewhere

“Safe place” means a place safe from content control at the whim and caprice of governments, corporations and even other individuals.

What’s so hard to “get” about that?

September 9, 2016

Let’s Offend Mark Zuckerberg! Napalm-Girl – Please Repost Image

Filed under: Censorship,Free Speech — Patrick Durusau @ 11:01 am

Facebook deletes Norwegian PM’s post as ‘napalm girl’ row escalates by Alice Ross and Julia Carrie Wong.

[Image: Nick Ut’s “napalm girl” photograph]

From the post:

Facebook has deleted a post by the Norwegian prime minister in an escalating row over the website’s decision to remove content featuring the Pulitzer-prize winning “napalm girl” photograph from the Vietnam war.

Erna Solberg, the Conservative prime minister, called on Facebook to “review its editing policy” after it deleted her post voicing support for a Norwegian newspaper that had fallen foul of the social media giant’s guidelines.

Solberg was one of a string of Norwegian politicians who shared the iconic image after Facebook deleted a post from Tom Egeland, a writer who had included the Nick Ut picture as one of seven photographs he said had “changed the history of warfare”.

I remember when I first saw that image during the Vietnam War. As if the suffering of the young girl wasn’t enough, the photo captures the seeming indifference of the soldiers in the background.

This photo certainly changed the approach of the U.S. military to press coverage of wars. From TV cameras recording live footage of battles and the wounded in Vietnam, present-day coverage is highly sanitized and “safe” for any viewing audience.

There are the obligatory shots of the aftermath of “terrorist” bombings but where is the live reporting on allied bombing of hospitals, weddings, schools and the like? Where are the shrieking wounded and death rattles?

Too much of that and American voters might get the idea that war has real consequences, for real people. Well, war always does, but it is the profit consequences that concern military leadership and their future employers. Can’t have military spending without a war and a supposed enemy.

Zuckerberg should not shield us and especially not children from the nasty side of war.

Sanitized and “safe” reporting of wars is a recipe for the continuation of the same.

Read more about the photo and the photographer who took it: Nick Ut’s Napalm Girl Helped End the Vietnam War. Today in L.A., He’s Still Shooting

You can’t really tell from the photo, but the skin of the girl (Kim Phuc) was melting off in strips. That’s the reality of war that needs to be brought home to everyone who supports war to achieve abstract policy goals and objectives.

August 29, 2016

ISIS Turns To Telegram App After Twitter Crackdown [Farce Alert + My Telegram Handle]

Filed under: Censorship,Cybersecurity,Encryption,Government,Telegram App,Twitter — Patrick Durusau @ 4:01 pm

ISIS Turns To Telegram App After Twitter Crackdown

From the post:

With the micro-blogging site Twitter coming down heavily on ISIS-sponsored accounts, the terrorist organisation and its followers are fast joining the heavily-encrypted messaging app Telegram built by a Russian developer.

On Telegram, the ISIS followers are laying out detailed plans to conduct bombing attacks in the west, voanews.com reported on Monday.

France and Germany have issued statements that they now want a crackdown against them on Telegram.

“Encrypted communications among terrorists constitute a challenge during investigations. Solutions must be found to enable effective investigation… while at the same time protecting the digital privacy of citizens by ensuring the availability of strong encryption,” the statement said.

Really?

Oh, did you notice the source? “Voanews.com reported on Monday.”

If you skip over to that post: IS Followers Flock to Telegram After being Driven from Twitter (I don’t want to shame the author, so I omit their name), it reads in part:

With millions of IS loyalists communicating with one another on Telegram and spreading their message of radical Islam and extremism, France and Germany last week said that they want a continent wide effort to allow for a crackdown on Telegram.

“Encrypted communications among terrorists constitute a challenge during investigations,” France and Germany said in a statement. “Solutions must be found to enable effective investigation… while at the same time protecting the digital privacy of citizens by ensuring the availability of strong encryption.”

On private Telegram channels, IS followers have laid out detailed plans to poison Westerners and conduct bombing attacks, reports say.

What? “…millions of IS loyalists…?” IS, in total, has maybe 30K active fighters. Millions of loyalists? Documentation? Citation of some sort? Being the Voice of America, I’d say they pulled that number out of a dark place.

Meanwhile, while complaining about strong encryption, they are somehow privy to:

detailed plans to poison Westerners and conduct bombing attacks, reports say.

You do know wishing Westerners would choke on their Fritos doesn’t constitute a plan. Yes?

Nor does wishing for an unspecified bomb, to be exploded at some unspecified location at no particular time, constitute planning.

Not to mention that “reports say” is a euphemism for: “…we just made it up.”

Get yourself to Telegram!

[Images: Telegram promotional graphics listing its selling points]

They left out my favorite:

Annoy governments seeking to invade a person’s privacy.

Reclaim your privacy today! Telegram!


Caveat: I tried using one device for the SMS to set up my smartphone. Nada, nyet, no joy. Had to use my cellphone number to set up the account on the cellphone. OK, but annoying.

BTW, on Telegram, my handle is @PatrickDurusau.

Yes, my real name. Which excludes this account from anything requiring OpSec. 😉

August 26, 2016

A Tiny Whiff Of Freedom – But Only A Tiny One

Filed under: Censorship,Government — Patrick Durusau @ 9:03 am

No guarantees that it will last, but CNN reports: French court suspends burkini bans.

Just in case you haven’t defamed the French police recently, do use the image from that article or from my post: Defame the French Police Today!

I am sickened that anyone finds it acceptable for men to force women to disrobe.

It is even more disturbing that no one in the immediate area intervened on her behalf.

Police abuse will continue and escalate until average citizens step up and intervene.

August 25, 2016

From Preaching to Meddling – Censors, I-94s, Usage of Social Media

Filed under: Censorship,Free Speech,Government — Patrick Durusau @ 10:31 am

Tony Romm has a highly amusing account of how internet censors, Google, Facebook and Twitter, despite their own censorship efforts, object to social media screening on I-94 (think international arrival and departure) forms.

Tony writes Tech slams Homeland Security on social media screening:

Internet giants including Google, Facebook and Twitter slammed the Obama administration on Monday for a proposal that would seek to weed out security threats by asking foreign visitors about their social media accounts.

The Department of Homeland Security for months has weighed whether to prompt foreign travelers arriving on visa waivers to disclose the social media websites they use — and their usernames for those accounts — as it seeks new ways to spot potential terrorist sympathizers. The government unveiled its draft plan this summer amid widespread criticism that authorities aren’t doing enough to monitor suspicious individuals for signs of radicalization, including the married couple who killed 14 people in December’s mass shooting in San Bernardino, Calif.

But leading tech companies said Monday that the proposal could “have a chilling effect on use of social media networks, online sharing and, ultimately, free speech online.”
….

Google, Facebook and Twitter casually censor hundreds of thousands of users every year, so their sudden concern for free speech is puzzling.

Until you catch the line:

have a chilling effect on use of social media networks, online sharing

Translation: chilling effect on market share of social media, diminished advertising revenues and online sales.

The reaction of Google, Facebook and Twitter reminds me of the elderly woman in church who would shout “Amen!” when the preacher talked about the dangers of alcohol, “Amen!” when he spoke against smoking, “Amen!” when he spoke of the shame of gambling, but was curiously silent when the preacher said that dipping snuff was also sinful.

After the service, as the parishioners left the church, the preacher stopped the woman to ask about her change in demeanor. The woman said, “Well, but you went from preaching to meddling.”

😉

Speaking against terrorism and silencing users by the hundreds of thousands is no threat to the revenue streams of Google, Facebook and Twitter. It is easy enough, and they benefit from whatever credibility it buys with governments.

With disclosure of social media use, which could have some adverse impact on revenue, the government has gone from preaching to meddling.

The revenue stream impacts imagined by Google, Facebook and Twitter are just that, imagined. The actual impact is unknown. But fear of an adverse impact is so great that all three have swung into frantic action.

That’s a good measure of their commitment to free speech versus their revenue streams.

Having said all that, the Agency Information Collection Activities: Arrival and Departure Record (Forms I-94 and I-94W) and Electronic System for Travel Authorization is about as lame and ineffectual an anti-terrorist proposal as I have seen since 9/11.

You can see the comments on the I-94 farce, which I started to collect but then didn’t.

I shouldn’t say this for free but here’s one insight into “radicalization:”

Use of social media to exchange “radical” messages is a symptom of “radicalization,” not its cause.

You can convince yourself of that fact.

Despite expensive efforts to stamp out child pornography (radical messages), sexual abuse of children (radicalization) continues. The consumption of child pornography doesn’t cause sexual abuse of children, rather it is consumed by sexual abusers of children. The market is driving the production of the pornography. No market, no pornography.

So why the focus on child pornography?

It’s visible (like social media), it’s easy to find (like tweets), it’s abhorrent (ditto for beheadings), and cheap (unlike uncovering real sexual abuse of children and/or actual terrorist activity).

The same factors explain the misguided and wholly ineffectual focus on terrorism and social media.

August 22, 2016

First Amendment Secondary? [Full Text – Response to Stay]

Filed under: Censorship,Free Speech,Government — Patrick Durusau @ 8:49 pm

Backpage.com defies sex trafficking subpoena despite Senate contempt vote by David Kravets.

From the post:

The First Amendment has been good, really good to the online classified ads portal Backpage.com. In 2015, the US Constitution helped Backpage dodge a lawsuit from victims of sex trafficking. What’s more, a federal judge invoked the First Amendment and crucified an Illinois sheriff—who labeled Backpage a “sex trafficking industry profiteer”—because the sheriff coerced Visa and Mastercard to refrain from processing payments to the site. The judge said Cook County Sheriff Thomas Dart’s anti-Backpage lobbying amounted to “an informal extralegal prior restraint of speech” because Dart’s actions were threatening the site’s financial survival.

But the legal troubles didn’t end there for Backpage, which The New York Times had labeled “the leading site for trafficking of women and girls in the United States.”

Kravets does a great job of linking to the primary documents in this case but, while quoting from the government’s response to the request for a stay, does not include a link to that response.

For your research and reading convenience, RESPONSE IN OPPOSITION [1631269] filed by Senate Permanent Subcommittee on Investigations to motion to stay case. A total of 128 pages.

In that consolidated document, Schedule A of the subpoena runs from page 40 to page 50, although the government contends in its opposition that it tried to be more reasonable than it appears.

Even more disturbing than the Senate’s fishing expedition into the records of Backpage is the justification for disregarding the First Amendment:

The Subcommittee is investigating the serious problem of human trafficking on the Internet—much of which takes place on Backpage’s website—and has subpoenaed Mr. Ferrer for documents relating to Backpage’s screening for illegal trafficking. It is important for the Subcommittee’s investigation of Internet sex trafficking to understand what methods the leading online marketplace for sex advertisements employs to screen out illegal sex trafficking on its website. Mr. Ferrer has no First Amendment right to ignore a subpoena for documents about Backpage’s business practices related to that topic. He has refused to identify his First Amendment interests except in sweeping generalities and failed even to attempt to show that any such interests outweigh important governmental interests served by the Subcommittee’s investigation. Indeed, Mr. Ferrer cannot make any balancing argument because he refused to search for responsive documents or produce a privilege log describing them, claiming that the First Amendment gave him blanket immunity from having to carry out these basic duties of all subpoena respondents.

As serious a problem as human trafficking surely is, there are no exceptions to the First Amendment because a crime is a serious one. Just as there are no exceptions to the Fourth or Fifth Amendments because a crime is a serious one.

If you are interested in the “evidence” cited against Backpage, S. Hrg. 114–179 Human Trafficking Investigation (November 2015) runs some 260 pages and details the commission of illegal human trafficking by others, not Backpage.

Illegal sex trafficking undoubtedly occurs in the personal ads of the New York Times (NYT), but the Senate hasn’t favored the NYT with such a subpoena.

Kravets reports Backpage is due to respond to the government by 4:00 p.m. Wednesday of this week. I will post a copy of that response as soon as it is available.

August 20, 2016

235,000 Voices Cried Out And Were Suddenly Silenced

Filed under: Censorship,Free Speech,Government — Patrick Durusau @ 1:19 pm

Yahoo! News carried this report of censorship: Twitter axes 235,000 more accounts in terror crackdown.

From the post:

Twitter on Thursday announced that it has cut off 235,000 more accounts for violating its policies regarding promotion of terrorism at the global one-to-many messaging service.

The latest account suspensions raised to 360,000 the total number of accounts sidelined since the middle of 2015 and was helping “drive meaningful results” in curbing the activity, according to the San Francisco-based company.

Twitter has been under pressure to balance protecting free speech at the service with not providing a stage for terrorist groups to spread violent messages and enlist people to their causes.

The latest account suspensions came since February, when Twitter announced that it had neutralized 125,000 accounts for violating rules against violent threats and promotion of terrorism.

“Since that announcement, the world has witnessed a further wave of deadly, abhorrent terror attacks across the globe,” Twitter said in a blog post.

When you read Twitter’s blog post, An update on our efforts to combat violent extremism, out of 235,000 accounts, how many are directly tied to a terrorist attack?

Would you guess:


235,000?

150,000?

100,000?

50,000?

25,000?

10,000?

5,000?

Twitter reports 0 accounts as being tied to terrorist attacks.

Odd considering that Twitter says:

Since that announcement, the world has witnessed a further wave of deadly, abhorrent terror attacks across the globe

“…wave of deadly, abhorrent terror attacks…” What wave?

From March 2016 until July 31, 2016, the List of terrorist incidents, 2016 lists some 864 attacks.

A far cry from the almost 1/4 million silenced accounts.

Of course, “terrorism” depends on your definition; the Global Terrorism Database lists over 6,000 terrorist attacks for the period March 2015 until July 31, 2015.

Even using 2015’s 6,000 attack figure, that’s a long way from 235,000 Twitter accounts.

If you think “…wave of deadly, abhorrent terror attacks…” is just marketing talk on the part of Twitter, the evidence is on your side.

Anyone who thinks they may be in danger of being silenced by Twitter should obtain regular archives of their tweets. Don’t allow your history to be stolen by Twitter.
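As a starting point, here is a hedged sketch of pulling such an archive yourself with Python, assuming you have API credentials (an app-only bearer token) and using the v1.1 statuses/user_timeline endpoint current as I write. That endpoint is rate limited and returns only roughly the most recent 3,200 tweets, so run it regularly; the token value below is a placeholder.

```python
import json
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"   # placeholder; obtain via a Twitter app
SCREEN_NAME = "PatrickDurusau"

URL = "https://api.twitter.com/1.1/statuses/user_timeline.json"
HEADERS = {"Authorization": "Bearer " + BEARER_TOKEN}

def archive_tweets(screen_name):
    """Page backwards through the timeline until the API returns nothing."""
    tweets, max_id = [], None
    while True:
        params = {
            "screen_name": screen_name,
            "count": 200,
            "tweet_mode": "extended",
            "include_rts": "true",
        }
        if max_id is not None:
            params["max_id"] = max_id
        page = requests.get(URL, headers=HEADERS, params=params).json()
        if not isinstance(page, list) or not page:
            break                       # error response or no older tweets
        tweets.extend(page)
        max_id = min(t["id"] for t in page) - 1
    return tweets

if __name__ == "__main__":
    archive = archive_tweets(SCREEN_NAME)
    with open(SCREEN_NAME + "-archive.json", "w") as out:
        json.dump(archive, out)
    print("Saved", len(archive), "tweets")
```

Twitter’s own settings page also offers a “request your archive” download, which is the simpler route if you only need your own account.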

I do have a question for anyone working on this issue:

Are there efforts to create non-Twitter servers that make fair use of the Twitter API, so that archives of tweets, and even new accounts, could continue silenced accounts? Say, a Dark Web Not-Twitter server?

I ask because Twitter continues to demonstrate that “free speech” is subject to its whim and caprice.

A robust and compatible alternative to Twitter, especially if archives can be loaded, would enable free speech for many diverse groups.

August 14, 2016

Another Data Point On Twitter Censorship Practices

Filed under: #gamergate,Censorship,Free Speech,Twitter — Patrick Durusau @ 1:07 pm

[Image: twitter-censor-olympics screenshot]

Alert! Non-Lobbyists Have Personal Contact For Members Of Congress!

Filed under: Censorship,Free Speech,Government — Patrick Durusau @ 12:54 pm

Hacker posts contact information for almost 200 congressional Democrats

Summary: Guccifer 2.0 posted a spreadsheet with the personal contact details of almost 200 Democratic members of Congress.

Sorry, I don’t see why non-lobbyists having personal contact information for members of Congress is a bad thing.

The very thought of non-lobbyists contacting members of Congress provoked frantic activity at WordPress, which promptly disabled the Guccifer 2.0 page because of:

receipt of a valid complaint regarding the publication of private information. (WordPress blocks latest Guccifer 2.0 docs)

The WordPress model of democracy looks something like this:

[Image: wordpress-democracy graphic of donation amounts and access]

I’m not vouching for the donation amounts and/or the amount of access you get for those amounts. It varies from congressional district to district.

Check with your local representative for current prices and access.

If and when you meet with your representative, be sure to ask for their new cellphone number.

August 13, 2016

Twitter Too Busy With Censorship To Care About Abuse

Filed under: #gamergate,Censorship,Free Speech,Government — Patrick Durusau @ 9:29 pm

Complaints about Twitter ignoring cases of abuse are quite common; see “A Honeypot For Assholes” [How To Monetize Assholes/Abuse]. I may have stumbled on why Twitter “ignores” abuse cases.

Twitter staff aren’t “ignoring” abuse cases, they are too damned busy being ad hoc government censors to handle abuse cases.

Consider: How Israel is trying to enforce gag orders beyond its borders by Michael Schaeffer Omer-Man.

From the post:

Israeli authorities are taking steps to block their own citizens from reading materials published online in other countries, including the United States.

The Israeli State Attorney’s Office Cyber Division has sent numerous take-down requests to Twitter and other media platforms in recent months, demanding that they remove certain content, or block Israeli users from viewing it.

In an email viewed by +972, dated August 2, 2016, Twitter’s legal department notified American blogger Richard Silverstein that the Israeli State Attorney claimed a tweet of his violates Israeli law. The tweet in question had been published 76 days earlier, on May 18. Silverstein has in the past broken stories that Israeli journalists have been unable to report due to gag orders, including the Anat Kamm case.

Without demanding that he take any specific action, Twitter asked Silverstein to let its lawyers know, “if you decide to voluntarily remove the content.” The American blogger, who says he has not stepped foot in any Israeli jurisdiction for two decades, refused, noting that he is not bound by Israeli law. Twitter is based in California.

Two days later, Twitter sent Silverstein a follow-up email, informing him that it was now blocking Israeli users from viewing the tweet in question. Or in Twitter-talk, “In accordance with applicable law and our policies, Twitter is now withholding the following Tweet(s) in Israel.”

It’s no wonder Twitter lacks the time and resources to think of robust solutions that enable free speech and, at the same time, protect users who aren’t interested in listening to the free speech of certain others.

Both rights are equally important but Twitter has its hands full responding in an ad hoc fashion to unreasonable demands.

Adopt a policy of delivering any content, anywhere, from any author and empower users to choose what they see.

The seething ball of lawyers, which adds no value for Twitter or its users, will suddenly melt away.

No issues to debate.

Governments block content on their own or they don’t.

Users block content on their own or they don’t.

BTW, 972mag.com needs your financial support to keep up this type of reporting. If you are having a good month, keep them in mind.

Twitter Censor Strikes Again (and again, and again)

Filed under: Censorship,Free Speech,Twitter — Patrick Durusau @ 3:54 pm

Twitter censors accounts for reasons known only to itself, but in this case, truth telling is one obvious trigger for Twitter censorship:

[Image: twitter-censors-again screenshot]

Twitter censors accounts every day that don’t make the news and those are just as serious violations of free speech as this instance.

Twitter could trivially empower users to have free speech and the equally important right to not listen but, for reasons known only to Twitter, has chosen not to do so.

Free speech and the right to not listen are equally important.

What’s so difficult to understand about that?

August 12, 2016

Government Toadies Target “Propaganda”

Filed under: Censorship,Government — Patrick Durusau @ 10:16 am

Sam Schechner gives a “heads up,” in Tech Giants Target Terrorist Propaganda, on plans by tech companies to counter “propaganda.”

From the post:

Nearly half a million teenagers and young adults who had posted content with terms like “sharia” or “mujahideen” began last fall seeing a series of animated videos pop up on their Facebook news feeds.

In one, cartoon figures with guns appear underneath an Islamic State flag. “Do not be confused by what extremists say, that you must reject the new world. You don’t need to pick,” the narrator says. “Remember, peace up. Extremist thinking out.”

The videos are part of three experiments—funded by Google parent Alphabet Inc., with help from Facebook Inc. and Twitter Inc.—that explore how to use the machinery of online advertising to counterbalance the growing wave of extremist propaganda on the internet, both from Islamist radicals and far-right groups.

The goal: See what kinds of messages and targeting could reach potential extremists before they become radicalized—and then quickly roll the model out to content producers across the internet.

The study, detailed in a report set to be published Monday by London-based think tank Institute for Strategic Dialogue, is a step toward understanding what techniques work, said Yasmin Green, who heads the counter-radicalization efforts at Jigsaw, the Alphabet unit formerly known as Google Ideas.

Sam never gives you the link to the report from the “London-based think tank Institute for Strategic Dialogue,” which you can find at: The Impact of Counter-Narratives.

Which might lead you to discover another August 2016 publication: “Shooting in the right direction”: Anti-ISIS Foreign Fighters in Syria and Iraq, a study on recruitment and facilitating the use of anti-ISIS foreign fighters in Syria and Iraq.

The Institute for Strategic Dialogue (ISD) would be better named the “Institute for Strategic Propaganda.”

It isn’t “propaganda” as such that the ISD seeks to counter, but the choice of particular propaganda.

A simple count of the lives of Arabs blighted or ended by the Western Powers since 9/11 (just to pick a well known starting point) will leave you wondering who the terrorists are in this “conflict.”

If that weren’t enough disappointment, Google, Facebook and others are enabling this foolish effort by not demanding payment for their work. The lack of budget-busting expenses encourages governments to act irresponsibly.

August 10, 2016

Twitter Censorship On Behalf Of Turkish Government

Filed under: Censorship,Free Speech,Government,Tweets,Twitter — Patrick Durusau @ 11:05 am

[Image: twitter-turkey-censor screenshot]

The link Post Coup Censorship takes you to a list of twenty-three (23) journalist/publicist accounts verified as withheld by Twitter in Turkey.

I have tweeted to Efe Kerem Sözeri about this issue and was advised the censorship is based on IP addresses. Sözeri points out that use of a VPN is one easy means of avoiding the censorship.

Hopefully that was more productive than a rant about Twitter’s toadyism and self-anointed role to prevent abuse (as opposed to empowering Twitter users to avoid abuse on their own).

July 27, 2016

The Right to be Forgotten in the Media: A Data-Driven Study

Filed under: Censorship,EU,Privacy — Patrick Durusau @ 4:55 pm

The Right to be Forgotten in the Media: A Data-Driven Study

Abstract:

Due to the recent “Right to be Forgotten” (RTBF) ruling, for queries about an individual, Google and other search engines now delist links to web pages that contain “inadequate, irrelevant or no longer relevant, or excessive” information about that individual. In this paper we take a data-driven approach to study the RTBF in the traditional media outlets, its consequences, and its susceptibility to inference attacks. First, we do a content analysis on 283 known delisted UK media pages, using both manual investigation and Latent Dirichlet Allocation (LDA). We find that the strongest topic themes are violent crime, road accidents, drugs, murder, prostitution, financial misconduct, and sexual assault. Informed by this content analysis, we then show how a third party can discover delisted URLs along with the requesters’ names, thereby putting the efficacy of the RTBF for delisted media links in question. As a proof of concept, we perform an experiment that discovers two previously-unknown delisted URLs and their corresponding requesters. We also determine 80 requesters for the 283 known delisted media pages, and examine whether they suffer from the “Streisand effect,” a phenomenon whereby an attempt to hide a piece of information has the unintended consequence of publicizing the information more widely. To measure the presence (or lack of presence) of a Streisand effect, we develop novel metrics and methodology based on Google Trends and Twitter data. Finally, we carry out a demographic analysis of the 80 known requesters. We hope the results and observations in this paper can inform lawmakers as they refine RTBF laws in the future.

Not collecting data prior to laws and policies seems to be a trademark of the legislative process.

Otherwise, the “Right to be Forgotten” (RTBF) nonsense, which only impacts searching and then only in particular ways, could have been avoided.

The article does helpfully outline how to discover delistings, working from 283 known delisted links.

Seriously? Considering that Facebook has 1 billion+ users, that is a lot of ink and electrons to spill over a minimum of 283 delisted links.

It’s time for the EU to stop looking for mites and mole hills to attack.

Especially since they are likely to resort to outright censorship as their next move.

That always ends badly.

July 21, 2016

Twitter Nanny Says No! No!

Filed under: Censorship,Free Speech,News,Reporting,Tweets,Twitter — Patrick Durusau @ 2:36 pm

[Image: twitter-nanny screenshot]

For the other side of this story, enjoy Milo Yiannopoulos’s Twitter ban, explained by Aja Romano, where Aja is supportive of Twitter and its self-anointed role as arbiter of social values.

From my point of view, the facts are fairly simple:

Milo Yiannopoulos (formerly @Nero) has been banned from Twitter on the basis of his speech and the speech of others who agree with him.

What more needs to be said?

I have not followed, read, reposted or retweeted any tweets by Milo Yiannopoulos (formerly @Nero). And would not even if someone sent them to me.

I choose to not read that sort of material and so can anyone else. Including the people who complain in Aja’s post.

The Twitter Nanny becomes a censor in insisting that no one be able to read tweets from Milo Yiannopoulos (formerly @Nero).

I’ve heard the argument that the First Amendment doesn’t apply to Twitter, which is true, but irrelevant. Only one country in the world has the First Amendment as stated in the US Constitution but that doesn’t stop critics from decrying censorship by other governments.

Or is it only censorship if you agree with the speech being suppressed?

Censorship of speech that I find disturbing, sexist, racist, misogynistic, dehumanizing, transphobic, homophobic, supporting terrorism, is still censorship.

And it is still wrong.

We only have ourselves to blame for empowering Twitter to act as a social media censor. Central point of failure and all that jazz.

Suggestions on a free speech alternative to Twitter?

July 15, 2016

Google = No Due Process

Filed under: Censorship — Patrick Durusau @ 1:25 pm

Not new but noteworthy headline about Google: Google deletes artist’s blog and a decade of his work along with it by Ethan Chiel.

From the post:

Artist Dennis Cooper has a big problem on his hands: Most of his artwork from the past 14 years just disappeared.

It’s gone because it was kept entirely on his blog, which the experimental author and artist has maintained on the Google-owned platform Blogger since 2002 (Google bought the service in 2003). At the end of June, Cooper says he discovered he could no longer access his Blogger account and that his blog had been taken offline.

As you know without even reading Ethan’s post, Google has not been responsive to Dennis Cooper or to others inquiring on his behalf.

Cooper failed to keep personal backups of his work, but when your files are stored with Google, what’s the point? Doesn’t Google keep backups? Of course they do, but that doesn’t help Cooper in this case.

The important lesson here is that as a private corporation, Google isn’t obligated to give any user notice or an opportunity to be heard before their content is blocked. Or in short, no due process.

Instead of pestering Google with new antitrust charges, the EU could require that Google maintain backups of any content it blocks and require it to deliver that content to the person posting it upon request.

Such a law should cover all content hosting services and, consequently, benefit everyone living in the EU.

Unlike the headline grabbing antitrust charges against Google.

June 14, 2016

Mapping Media Freedom

Filed under: Censorship,Free Speech,Journalism,News,Reporting — Patrick Durusau @ 6:51 pm

Mapping Media Freedom

From the webpage:

Journalists and media workers are confronting relentless pressure simply for doing their job. Mapping Media Freedom identifies threats, violations and limitations faced by members of the press throughout European Union member states, candidates for entry and neighbouring countries.

My American readers should not be misled by the current map image:

[Map image: Mapping Media Freedom]

If it is true that the United States is free from press suppression, something I seriously doubt, it won’t be long before it starts racking up incidents on this site.

Just today, Newt Gingrich, a truly unpleasant waste of human skin, proposed re-igniting the witch-hunt committees of the 1950s. Newt Gingrich Suggests Reforming House Un-American Committee In Wake Of Orlando Shooting.

The so-called “presumptive” candidates for President, Clinton and Trump, have called for tech companies to aid in the suppression of jihadist content and even the closing off of parts of the internet.

At least once a week, visit Mapping Media Freedom and do what you can to support the media everywhere.

More Censorship Is Coming – To The USA

Filed under: Censorship,Free Speech — Patrick Durusau @ 9:32 am

Hillary Clinton says tech companies need to ‘step up’ fight against ISIS propaganda by Amar Toor.

From the post:

Hillary Clinton said this week that if elected president, she would work with major technology companies to “step up” counter-terrorism efforts, including surveillance of social media and campaigns to combat jihadist propaganda online. As Reuters reports, the presumptive Democratic presidential nominee made the comments in a speech in Cleveland Monday, one day after a gunman killed 49 people and left 53 wounded at a gay nightclub in Orlando.

Clinton did not provide details on how she would work with tech companies, though her comments add to the ongoing debate over privacy and national security, which has intensified following recent terrorist attacks in both the US and Europe. In her speech, the former secretary of state called for an “intelligence surge,” saying that security agencies “need better intelligence to discover and disrupt terrorist plots before they can be carried out.” She also called on the government and tech companies to “use all our capabilities to counter jihadist propaganda online.”

“As president, I will work with our great tech companies from Silicon Valley to Boston to step up our game,” Clinton said. “We have to [do] a better job intercepting ISIS’ communications, tracking and analyzing social media posts and mapping jihadist networks, as well as promoting credible voices who can provide alternatives to radicalization.”

What does it mean to “counter jihadist propaganda online?”

Does it include factual reports about the aims of jihadists and the abuses they seek to correct?

For example, the Declaration of Independence was once considered “propaganda.”

Does it include factual reports of terrorist bombings by coalition forces on jihadists positions?

Question: Who do you root for in the Star Wars movies, the tiny band of rebels or the empire?

Does it include calling on young people to actively resist corrupt and oppressive governments?

Let’s see…

“Even anarchy itself, that bugbear held up by the tools of power (though truly to be deprecated) is infinitely less dangerous to mankind than arbitrary government. Anarchy can be but of short duration; for when men are at liberty to pursue that course which is most conducive to their own happiness, they will soon come into it, and for the rudest state of nature, order and good government must soon arise. But tyranny, when once established, entails its curse on a nation to the latest period of time; unless some daring genius, inspired by Heaven, shall unappalled by danger, bravely form and execute the arduous design of restoring liberty and life to his enslaved, murdered country.” [AN ORATION DELIVERED MARCH 6, 1775, AT THE… Joseph Warren (1741-1775) Boston: Printed by Messieurs Edes and Gill, and by J. Greenleaf, 1775 E297 W54, Fighting Words, a collection at Utah State University.]

Updated to use modern language, would that qualify?

As I remember the First Amendment, all of those qualify as protected free speech.

Clinton and her separated-at-birth twin, Donald Trump, can try to impose censorship on the legitimate speech of jihadists.

Let’s all lend the jihadists a hand and repeat their legitimate speech on a regular basis.

I for one would like to hear what the jihadists have to say for themselves.

Wouldn’t you?

June 13, 2016

How Do I Become A Censor?

Filed under: Censorship,Free Speech,Social Media — Patrick Durusau @ 12:22 pm

You read about censorship or efforts at censorship on a daily basis.

But none of those reports answers the burning question of the social media age: How Do I Become A Censor?

I mean, what’s the use of reading about other people censoring your speech if you aren’t free to censor theirs? Where’s the fun in that?

Andrew Golis has an answer for you in: Comments are usually garbage. We’re adding comments to This.!.

Three steps to becoming a censor:

  1. Build a social media site that accepts comments
  2. Declare highly subjective ass-hat rules
  3. Censor user comments

There being no third-party arbiters, you are now a censor! Feel the power surging through your fingers. Crush dangerous thoughts, memes or content with a single return. The safety and sanity of your users is now your responsibility.

Heady stuff. Yes?

If you think this is parody, check out the This. Community Guidelines for yourself:


With that in mind, This. is absolutely not a place for:

Violations of law. While this is expanded upon below, it should be clear that we will not tolerate any violations of law when using our site.

Hate speech, malicious speech, or material that’s harmful to marginalized groups. Overtly discriminating against an individual belonging to a minority group on the basis of race, ethnicity, national origin, religion, sex, gender, sexual orientation, age, disability status, or medical condition won’t be tolerated on the site. This holds true whether it’s in the form of a link you post, a comment you make in a conversation, a username or display name you create (no epithets or slurs), or an account you run.

Harassment; incitements to violence; or threats of mental, emotional, cyber, or physical harm to other members. There’s a line between civil disagreement and harassment. You cross that line by bullying, attacking, or posing a credible threat to members of the site. This happens when you go beyond criticism of their words or ideas and instead attack who they are. If you’ve got a vendetta against a certain member, do not police and criticize that member’s every move, post, or comment on a conversation. Absolutely don’t take this a step further and organize or encourage violence against this person, whether through doxxing, obtaining dirt, or spreading that person’s private information.

Violations of privacy. Respect the sanctity of our members’ personal information. Don’t con them – or the infrastructure of our site – to obtain, post, or disseminate any information that could threaten or harm our members. This includes, but isn’t limited to, credit card or debit card numbers; social or national security numbers; home addresses; personal, non-public email addresses or phone numbers; sexts; or any other identifying information that isn’t already publicly displayed with that person’s knowledge.

Sexually-explicit, NSFW, obscene, vulgar, or pornographic content. We’d like for This. to be a site that someone can comfortably scroll through in a public space – say a cafe, or library. We’re not a place for sexually-explicit or pornographic posts, comments, accounts, usernames, or display names. The internet is rife with spaces for you to find people who might share your passion for a certain Pornhub video, but This. isn’t the place to do that. When it comes to nudity, what we do allow on our site is informative or newsworthy – so, for example, if you’re sharing this article on Cameroon’s breast ironing tradition, that’s fair game. Or a good news or feature article about Debbie Does Dallas. But, artful as it may be, we won’t allow actual footage of Debbie Does Dallas on the site. (We understand that some spaces on the internet are shitty at judging what is and isn’t obscene when it comes to nudity, so if you think we’ve pulled your post off the site because we’re a bunch of unreasonable prudes, we’ll be happy to engage.)

Excessively violent content. Gore, mutilation, bestiality, necrophilia? No thanks! There’s a distinction between a potentially upsetting image that’s worth consuming (think of some of the best war photography) and something you’d find in a snuff film. It’s not always an easy distinction to make – real life is pretty brutal, and some of the images we probably need to see are the hardest to stomach – but we also don’t want to create an overwhelmingly negative experience for anyone who visits the site and happens to stumble upon a painful image.

Promotion of self-harm, eating disorders, alcohol or drug abuse, or similar forms of destructive behavior. The internet is, sadly, also rife with spaces where people get off on encouraging others to hurt themselves. If you’d like to do that, get off our site and certainly seek help.

Username squatting. Dovetailing with that, we reserve the right to take back a username that is not being actively used and give it to someone else who’d like it it – especially if it’s, say, an esteemed publication, organization, or person. We’re also firmly against attempts to buy or sell stuff in exchange for usernames.

Use of the This. brand, trademark, or logo without consent. You also cannot use the This. name or anything associated with the brand without our consent – unless, of course, it’s a news item. That means no creating accounts, usernames, or display names that use our brand.

Spam. Populating the site with spammy accounts is antithetical to our mission – being the place to find the absolute best in media. If you’ve created accounts that are transparently selling, say, “installation help for Macbooks” or some other suspicious form tech support, or advertising your “viral video” about Justin Bieber that’s got a suspiciously low number of views, you don’t belong on our site. That contradicts why we exist as a platform – to give members a noise-free experience they can’t find elsewhere on the web.

Impersonation of others. Dovetailing with that – though we’d all like to be The New York Times or Diana Ross, don’t pretend to be them. Don’t create an identity on the site in the likeness of a company or person who isn’t you. If you decide, for some reason, to create a parody account of a public figure or organization – though we can think of better sites to do that on, frankly – make sure you make that as clear as possible in your display name, avatar, and bio.

Infringement of copyright or intellectual property rights. Don’t post copyrighted works without the permission of its original owner or creator. This extends, for example, to copying and pasting a copyrighted set of words into a comment and passing it off as your own without credit. If you think someone has unlawfully violated your own copyright, please follow the DMCA procedures set forth in our Terms of Service.

Mass or automated registration and following. We’ve worked hard to build the site’s infrastructure. If you manipulate that in any way to game your follow count or register multiple spam accounts, we’ll have to terminate your account.

Exploits, phishing, resource abuse, or fraudulent content. Do not scam our members into giving you money, or mislead our members through misrepresenting a link to, say, a virus.

Exploitation of minors. Do not post any material regarding minors that’s sexually explicit, violent, or harmful to their safety. Don’t solicit or request their private or personally identifiable information. Leave them alone.

So how do we take punitive action against anyone who violates these? Depends on the severity of the offense. If you’re a member with a good track record who seems to have slipped up, we’ll shoot you an email telling you why your content was removed. If you’ve shared, written, or done something flagrantly and recklessly violating one of these rules, we’ll ban you from the site through deleting your account and all that’s associated with it. And if we feel it’s necessary or otherwise believe it is required, we will work with law enforcement to handle any risk to one of our members, the This. community in general, or to public safety.

To put it plainly – if you’re an asshole, we’ll kick you off the site.

Let’s make that a little more concrete.

I want to say: “Former Vice-President Dick Cheney should be tortured for a minimum of thirty (30) years and be kept alive for that purpose, as a penalty for his war crimes.”

I can’t say that on This. because:

  • “incitement to violence” If torture is ok, then so is other violence.
  • “harmful to marginalized group” If you think of sociopaths as a marginalized group.
  • “harassment” Cheney is a victim too. He didn’t start life as a moral leper.
  • “excessively violent content” Assume I illustrate the torture Cheney should suffer.

Which rules are broken varies with the specific content of my speech.

Remind me to pass this advice along to: Jonathan “I Want To Be A Twitter Censor” Weisman. All he needs to do is build a competitor to Twitter and he can censor to his heart’s delight.

The “build your own platform” advice isn’t just my opinion. This. confirms it:

If you don’t like these rules, feel free to create your own platform! There are a lot of awesome, simple ways to do that. That’s what’s so lovely about the internet.

June 12, 2016

Art and the Law: [UK Focused]

Filed under: Art,Censorship,Free Speech,Law — Patrick Durusau @ 4:29 pm

Art and the Law: Guides to the legal framework and its impact on artistic freedom of expression by Jodie Ginsberg, chief executive, Index on Censorship.

From the post:

Freedom of expression is essential to the arts. But the laws and practices that protect and nurture free expression are often poorly understood both by practitioners and by those enforcing the law.

As part of Index on Censorship’s work on art and offence, Index has published a series of law packs intended to address questions about legal limits related to free expression and the arts.

We intend them as “living” documents, to be enhanced and developed in partnership with arts groups so that artistic freedom is nurtured and nourished.

This work builds on an earlier study by Index on Censorship, Taking the Offensive, which showed how self-censorship manifests itself in arts organisations and institutions.

Descriptions of:

Child Protection: PDF | web

Counter Terrorism: PDF | web

Obscene Publications: PDF | web

Public Order: PDF | web

Race and Religion: PDF | web

along with numerous other resources appear on this page.

Realize these are UK-specific and that the laws on such matters vary widely. That’s not a criticism but an observation for the safety of readers. Check your local laws with qualified legal advisers.

Unlike Jonathan “I Want To Be A Twitter Censor” Weisman, my advice when you find offensive content is to look away.

What other people choose to create, publish, perform, listen to, view, read, etc., is their business and certainly none of yours.

Criminal acts against other people, children in particular, are already unlawful, and censorship isn’t required to outlaw them.

June 11, 2016

Jonathan Weisman: Don’t Let The Door Hit You

Filed under: Censorship,Free Speech,Journalism — Patrick Durusau @ 10:44 am

Jonathan Weisman has decided to leave Twitter for reasons he sets forth (at length) at: Why I Quit Twitter — and Left Behind 35,000 Followers.

The essence of his complaint: Twitter failed to censor the speech of other Twitter users.

I offer no defense for the offensive and crude tweets Weisman received via Twitter.

However, as The Times’s deputy Washington editor, Weisman had the resources to filter his Twitter stream to remove such posts on his own.
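
Filtering is not hard. Here is a minimal sketch of the kind of client-side keyword filter any reader can run over tweets they have already pulled down; the blocklist and the tweet structure are hypothetical stand-ins, not anything Twitter or The Times actually provides:

# Hypothetical blocklist; each reader picks their own terms
BLOCKLIST = {"slur1", "slur2", "offensive_term"}

def is_acceptable(tweet_text, blocklist=BLOCKLIST):
    # False if the tweet contains any blocked term
    words = {w.strip(".,!?\"'()").lower() for w in tweet_text.split()}
    return words.isdisjoint(blocklist)

def filter_timeline(tweets, blocklist=BLOCKLIST):
    # Keep only the tweets the reader is willing to see
    return [t for t in tweets if is_acceptable(t["text"], blocklist)]

# Placeholder timeline; in practice these would come from the Twitter API
timeline = [
    {"user": "someone", "text": "An ordinary tweet about the news."},
    {"user": "troll", "text": "A tweet containing offensive_term."},
]
print(filter_timeline(timeline))

Swap in whatever terms you don’t want to see. The point is that the choice stays with the reader, not with Twitter.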

But avoiding the offensive tweets wasn’t his goal.

Weisman’s goal is to silence others and to enlist Twitter in that task.

I have to agree that Twitter’s use of its “terms of service” is arbitrary and capricious, not to mention lacking transparency, but that’s all the more reason to discard content rules from “terms of service,” not make them more onerous.

Weisman’s parting shot is to describe Twitter as a “…cesspool of hate….”

Humanity has a number of such cesspools as well as large swaths of people who fall somewhere between there and sainthood. No reason to expect social media that reflects society to be any different.

Jonathan Weisman leaving Twitter is the loss of another advocate for censorship and there can never be too few of those.

PS: The New York Times needs to seriously think about why it employs a censorship advocate as its deputy Washington Editor.

June 2, 2016

#NoTROHere – Defending Free Speech and MuckRock

Filed under: Censorship,Free Speech — Patrick Durusau @ 3:27 pm

Court grants Temporary Restraining Order forcing removal of MuckRock documents by Michael Morisy.

From the post:

A King County, Washington court has ordered MuckRock to un-publish documents that the City of Seattle released to one of our users, Phil Mocek, via a public records request concerning private contractors who bid on the city’s smart utility meter program.

Although MuckRock has complied with the order, we disagree with the court’s decision and are confident that ultimately we will vindicate our right to publish the documents that Mr. Mocek lawfully obtained.

I don’t see my name or yours on the TRO or any of the other documents.

To enable you (and me) to compare the original documents to both the redacted copies and the highly amusing affidavits filed in support of the facially flawed TRO, find the following:

Landis-Gyr-Managed-Services-Report-2015-Final.pdf

Req-9-Security-Overview.pdf

Disclaimer: I did NOT obtain these documents from MuckRock or anyone known to me to be associated with MuckRock or named in any of the pleadings referenced above. (Just to forestall any groundless accusations of contempt, etc.)

Request: In the unlikely event that someone serves me with a TRO, please repost the documents on your blogs, discussion lists, twitter feeds, etc.

If Landis+Gyr wants to violate the U.S. Constitution and state constitutions in every county/parish (Louisiana) in the United States, let’s give them that opportunity. #NoTROHere

Before I forget, all the legal documents are here, including the affidavits.

PS: I know I am violating the advice I gave in Avoiding Imperial (Computer Fraud and Abuse Act (CFAA)) Entanglement – Identification but:

  • I never claimed to be perfectly consistent, and
  • The public is the ultimate arbiter of the conduct of its government. Concealment of information by government serves only to breed mistrust and a lack of confidence in outcomes.

Look for my analysis of the public documents vs. the affidavits next Monday, 6 June 2016.
