Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

September 25, 2019

Banned By Twitter

Filed under: Censorship,Free Speech,Twitter — Patrick Durusau @ 7:28 pm

Twitter is vigilant about protecting the feelings of people who deny vaccines to children and even let children die in their custody. I’m speaking of CBP/ICE agents and the following notice I received from Twitter:

Twitter Suspension

Isn’t that amazing? No doubt had Twitter been around when the Brown Shirts and SS were popular, it would be protecting their feelings as well.

Apologies for the long silence! I hope to resume at least daily postings starting with this one.

April 28, 2019

Ex-Police Chief Outs Self as Extremist!

Filed under: Censorship,Government — Patrick Durusau @ 4:20 pm

The ex-Met Police assistant commissioner Sir Mark Rowley has outed himself as an extremist (or an idiot, take your pick) in remarks to BBC Radio 4’s Today programme, saying:

The top-ranked search referred to by Sir Mark takes users to the Wikipedia entry for Anjem Choudary, who was released from prison last year, halfway through a five-year jail term for encouraging support for the so-called Islamic State group.

He told Today: “I think I mentioned on your programme a few months ago, if you Google ‘British Muslim spokesman’ you get Anjem Choudary. That’s a disgrace.”

Sir Mark said: “These algorithms are designed to push us towards contentious material because that feeds their bottom line of advertising revenues, by pushing readers to extremist material.”

This is something Google denies, pointing out that it actually wants to get people off the platform and on to a third-party site as quickly as possible.

‘Extremist’ Google algorithms concern ex-police chief

Extremist may sound harsh, but using the results of one Google search to condemn search algorithms, untested and unseen, is clearly extreme. Public policy cannot reasonably be based on ad hoc reports by public figures and their reactions to search result content. Any student writing a paper on the recent history of Muslims in the UK would likely appreciate the pointer to Anjem Choudary.

Unless, that is, Sir Mark intends to expunge Choudary from BBC and other news reports held in libraries, and to prohibit discussion of Choudary online and in the news. Oops, Sir Mark has already violated his own rule! His discussion of Choudary as “British Muslim spokesman” now shows up as the first “hit” in a competing search engine.


March 28, 2019

Terrorist Usage of Twitter and Social Media (AKA Advertising)

Filed under: Advertising,Censorship,Social Media,Terrorism — Patrick Durusau @ 8:29 pm

Primer: Terrorist Usage of Twitter and Social Media

I mention this as an example of a catchy title for what is otherwise an “advertising on social media” post. Consider this re-write of the lead paragraph:

In recent years the Internet and social media has rapidly grown and become a part of everyday life for many people.  For example, YouTube alone has nearly two billion active users each month, has one billion hours of content watched every day, and over 300 hours of new video uploaded every minute (Aslam, 2019).  Other social media platforms also generate huge amounts of users and views.  The wide reach of these and other platforms has given many people and groups the opportunity to be heard when they otherwise would not have a voice.  While in many cases this opportunity is celebrated for supporting free speech, advertisers can take advantage of this access to reach and entice people that would otherwise be outside their influence.  Advertisers are becoming increasingly aware of, and taking advantage of, the global access the Internet and social media gives them.  These advertisers are no longer limited to recruiting new buyers in their physical sphere of influence; they can entice and recruit new buyers from anywhere around the world.  Advertisers are also using the Internet to encourage and carry out sales (physical and cyber) around the world…

The bolded text replaces text in the original.

For all of the bleating and whining about terrorists on social media, what is being discussed is advertising. Any decent introduction to advertising is more useful to terrorists and their opponents than all of the literature on terrorist use of social media.

Critics of terrorist advertising miss the validity of terrorist ads in the eyes of their target populations. Twenty- to thirty-year-old males in most cultures know they lack the ability to make a difference for their families and communities. Structural inequalities guarantee that lack of ability. Those have been the “facts” all their lives. Terrorists offer the chance, perhaps not to make a difference, but at least not to grow bent and old under the weight of oppression.

Your counter ad? …. There’s the problem with countering terrorist advertising. The facts underlying those ads are well known and have no persuasive refutation. Change the underlying facts as experienced by terrorists and their families and terrorist ads will die of their own accord. Keep the underlying facts and …, well, you know how that turns out.


December 3, 2018

Distributed Denial of Secrets (#DDoSecrets) – There’s a New Censor in Town

Filed under: Censorship,CIA,Leaks,NSA — Patrick Durusau @ 6:59 pm

Distributed Denial of Secrets (#DDoSecrets) (ddosecretspzwfy7.onion/)

From a tweet by @NatSecGeek:

Distributed Denial of Secrets (#DDoSecrets), a collective/distribution system for leaked and hacked data, launches today with over 1 TB of data from our back catalogue (more TK).

Great right? Well, maybe not so great:

Our goal is to preserve info and ensure its available to those who need it. When possible, we will distribute complete datasets to everyone. In some instances, we will offer limited distribution due to PII or other sensitive info. #DDoSecrets currently has ~15 LIMDIS releases.

As we’re able, #DDoSecrets will produce sanitized versions of these datasets for public distribution. People who can demonstrate good cause for a copy of the complete dataset will be provided with it.

Raphael Satter, in Leak site’s launch shows dilemma of radical transparency, documents the sad act of self-mutilation (self-censorship) by #DDoSecrets.

Hosting the Ashley Madison hack drew criticism from Joseph Cox (think Motherboard) and Gabriella Coleman (McGill University anthropologist). The Ashley Madison data is available for searching (by email for example https://ashley.cynic.al/), so the harm of a bulk release isn’t clear.

What is clear is the reasoning of Coleman:


Best said the data would now be made available to researchers privately on a case-by-case basis, a decision that mollified some critics.

“Much better,” said Coleman after reviewing the newly pared-back site. “Exactly the model we might want.”

I am not surprised this is the model Coleman wants, academics are legendary for treating access as a privilege, thus empowering themselves to sit in judgment on others.

Let me explicitly say that I have no doubt that Emma Best will be as even-handed with such judgments as anyone.

But once we concede any basis for censorship, the withholding of information of any type, then we are cast into a darkness from which there is no escape. A censor claims to have withheld only X, but how are we to judge? We have no access to the original data. Only its mutilated, bastard child.

Emma Best is likely the least intrusive censor you can find but what is your response when the CIA or the NSA makes the same claim?

Censorship is a danger when practiced by anyone for any reason.

Support the project and leak to it, but always condition deposits on raw release of the leaks by #DDoSecrets.

October 2, 2018

More Free Speech Lost at Twitter

Filed under: Censorship,Free Speech,Hacking,Twitter — Patrick Durusau @ 7:19 pm

Twitter bans distribution of hacked materials ahead of US midterm elections by Catalin Cimpanu.

From the post:


Twitter already had rules in place that prohibited the distribution of hacked materials that contain private information or trade secrets, but after Monday’s update, the platform’s review teams will also ban accounts that claim responsibility for a hack, make hacking threats, or issue incentives to hack specific people and accounts.

Nevertheless, the social network hasn’t been that successful, barely putting a dent in spam-related reports, with the number of complaints going down from 17,000 in May to only 16,000 in September. More work needs to be done, and Twitter just gave its staff sharper teeth to go about their job.

See Cimpanu’s post for the full scope of the damage being done to free speech at Twitter.

Any Twitter investors with insight into how much Twitter wastes on its censorship operations every year?

As an investor, I would want to see some ROI from censorship. You?

September 25, 2018

Twitter’s Quest to Police Public Conversation [Note on feminist power analysis]

Filed under: Censorship,Free Speech,Twitter — Patrick Durusau @ 10:05 am

Not satisfied with suppressing the free speech of millions, Twitter is expanding the power of its faceless censors to seek out and silence dehumanizing language.

From their post:


For the last three months, we have been developing a new policy to address dehumanizing language on Twitter. Language that makes someone less than human can have repercussions off the service, including normalizing serious violence. Some of this content falls within our hateful conduct policy (which prohibits the promotion of violence against or direct attacks or threats against other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease), but there are still Tweets many people consider to be abusive, even when they do not break our rules. Better addressing this gap is part of our work to serve a healthy public conversation.

With this change, we want to expand our hateful conduct policy to include content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target. Many scholars have examined the relationship between dehumanization and violence. For example, Susan Benesch has described dehumanizing language as a hallmark of dangerous speech, because it can make violence seem acceptable, and Herbert Kelman has posited that dehumanization can reduce the strength of restraining forces against violence.

Let’s be clear: I don’t tweet, re-tweet or otherwise amplify any of the content that is now, or would be in the future, forbidden as “dehumanizing language.”

At the same time, it is every user’s right to determine for themselves what content, harmful and/or dehumanizing, they wish to say or view.

It would be trivially easy for Twitter to implement filters that users could “follow” in order to avoid either harmful or dehumanizing speech, tuned to their specific choices. The same is true for followable block lists of users known to spew such nonsense. A sketch of the idea follows.
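To make the suggestion concrete, here is a minimal sketch in Python of a followable filter list. Everything in it (the FilterList class, the sample terms, the tweet fields) is hypothetical illustration, not any actual Twitter API:

```python
from dataclasses import dataclass, field

@dataclass
class FilterList:
    """A user-curated, shareable filter that other users could 'follow'."""
    name: str
    blocked_users: set = field(default_factory=set)   # accounts to hide
    blocked_terms: set = field(default_factory=set)   # words/phrases to hide

    def allows(self, tweet: dict) -> bool:
        """Return True if the tweet passes this filter."""
        if tweet["author"] in self.blocked_users:
            return False
        text = tweet["text"].lower()
        return not any(term in text for term in self.blocked_terms)

def filter_timeline(timeline, followed_filters):
    """Keep only tweets allowed by every filter the user chooses to follow."""
    return [t for t in timeline if all(f.allows(t) for f in followed_filters)]

# Hypothetical usage: a user follows a filter list curated by someone else.
no_dehumanizing = FilterList("no-dehumanizing", blocked_terms={"vermin", "subhuman"})
timeline = [{"author": "@example", "text": "Hello world"}]
print(filter_timeline(timeline, [no_dehumanizing]))
```

The point of the design is that the choice stays with the user: following or unfollowing a filter is the user’s decision, not a central censor’s.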

For reasons unknown to me, Twitter and its fellow travelers want to police the “public conversation,” so that their nameless and faceless censors can shape it.

Twitter censorship favors the same values I do, but even so, I find it objectionable in all respects.

If you know anyone working at Twitter, challenge them to empower users with followable content filters and block lists.

I have and all I get is silence in response.

PS: If you are interested in feminist power analysis, silence is the response of the privileged when challenged. They don’t even have to acknowledge your argument or produce facts. Just silence. Maybe I should write a post: Twitter and Patterns of Privilege. What do you think?

September 11, 2018

Censorship Fail (no surprise) at Facebook

Filed under: Censorship,Facebook,Free Speech — Patrick Durusau @ 6:01 pm

Facebook’s idea of ‘fact-checking’: Censoring ThinkProgress because conservative site told them to by Ian Millhiser

From the post:

Last year, Facebook announced that it would partner with The Weekly Standard, a conservative magazine, to “fact check” news articles that are shared on Facebook. At the time, ThinkProgress expressed alarm at this decision.

The Weekly Standard has a history of placing right-wing ideology before accurate reporting. Among other things, it labeled the Iraq War “A War to Be Proud Of” in 2005, and it ran an article in 2017 labeling climate science “Dadaist Science,” and promoted that article with the phrase “look under the hood on climate change ‘science’ and what you see isn’t pretty.”

The Weekly Standard brought its third-party “fact-checking” power to bear against ThinkProgress on Monday, when the outlet determined a ThinkProgress story about Supreme Court nominee Brett Kavanaugh was “false,” a category defined by Facebook to indicate “the primary claim(s) in this content are factually inaccurate.”

To save you the suspense, the ThinkProgress story was true by any literate reading of its report and the claims by The Weekly Standard are false.

Millhiser details the financial impact of a “false” rating from Facebook, which reverberates through the system, and the lack of responsiveness of The Weekly Standard when questioned about its “false” rating.

The Weekly Standard has been empowered by Facebook to become a scourge on free expression. Hold Facebook and The Weekly Standard accountable for their support and acts of censorship.

September 10, 2018

Make Yourself and Staff Legitimate Military Targets

Filed under: Censorship,Free Speech — Patrick Durusau @ 8:20 pm

YouTube Shuts Down All Syrian State Channels As Idlib Assault Begins

From the post:

Syrian state YouTube channels have been shut down this morning just as the Syrian Army’s ground offensive has officially begun.

This includes the following now terminated Syrian state and pro-government channels: Syrian Presidency, Syria MoD (Ministry of Defense), SANA, and Sama TV. This follows YouTube reportedly closing Syria’s Ortas News last week.

The post goes on to point out that perhaps this latest censorship by YouTube is just that, more censorship.

However, YouTube and its staff should be aware that coordination, apparent or otherwise, with forces opposed to the Syrian government, makes them legitimate military targets.

They are unlikely military targets, but if you are allergic to military action and employed by YouTube, you should consider other employment at your earliest opportunity.

August 30, 2018

Censorship: Compensating for Poor Design, Assumed User Incompetence

Filed under: Censorship,Free Speech — Patrick Durusau @ 12:58 pm

Tumblr is explicitly banning hate speech, posts that celebrate school shootings, and revenge porn by Shannon Liao.

From the post:

Tumblr is changing its community guidelines to more explicitly ban hate speech, glorifying violence, and revenge porn. The new rules go into effect on September 10th.

“It’s on all of us to create a safe, constructive, and empowering environment,” Tumblr writes in its blog post. “Our community guidelines need to reflect the reality of the internet and social media today.” The previous version of the guidelines can still be viewed on GitHub for comparison.

Some people cheer censorship of undefined “hate speech, glorifying violence, and revenge porn.” At least until they realize that censorship is made necessary by poor design and assumptions about user incompetence.

Poor Design

The filtering options for a Tumblr account are especially sparse:

  • “Safe” mode is a shot-in-the-dark filter with no known settings.
  • You can only choose “tags” to filter on. As though “tags” are going to be assigned in good faith by bad actors.

A better design of filtering would include user (with wildcarding), terms (with wildcarding), tags, dates (with ranges), along with the ability to “follow” filters created by other Tumblr users. (That could be a commercial incentive for users to create and sell such filters.)
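As a sketch of that richer design (the field names, patterns and rule layout below are hypothetical, not Tumblr’s API), wildcard matching and date ranges are straightforward to express:

```python
from datetime import date
from fnmatch import fnmatch

def matches_filter(post, rule):
    """Return True if a post is caught by a user-defined filter rule.

    A rule might look like:
        {"users": ["spam-*"],                      # wildcarded blog names
         "terms": ["*revenge porn*"],              # wildcarded text patterns
         "tags": ["gore"],                         # exact tags
         "dates": (date(2018, 1, 1), date(2018, 9, 10))}  # inclusive range
    """
    if any(fnmatch(post["user"], pat) for pat in rule.get("users", [])):
        return True
    if any(fnmatch(post["text"].lower(), pat) for pat in rule.get("terms", [])):
        return True
    if set(post["tags"]) & set(rule.get("tags", [])):
        return True
    if "dates" in rule:
        start, end = rule["dates"]
        if start <= post["posted"] <= end:
            return True
    return False
```

Rules like these, published and “followed” the way blogs are followed, would leave the choice of what to see with each user.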

Centralized censorship at Tumblr is an attempt to correct for an engineering failure, a failure that denies users the ability to choose the content they wish to view.

Assuming User Incompetence

Closely allied with the lack of even minimal, shareable filters is the Tumblr assumption that users are incompetent to filter their own content. Hence, Tumblr has to step in to filter content for everyone.

I don’t recall Tumblr (or any other Internet censor) offering any evidence that users are incapable of choosing the content they wish to view or avoid.

Are you incapable of making that choice?

I ask because the Spanish Inquisition censors made similar fact-free assumptions about readers. Why should Tumblr repeat the mistakes of the Spanish Inquisition?

Censorship shouts at everyone that they aren’t competent to choose their own reading materials.

Conclusion

Tumblr isn’t the only Internet forum that is covering up poor design and making false assumptions about users and their competence in choosing material. I mention it here only as a sign that censorship is spreading and should be resisted without quarter.

I think you are smart enough to choose the content you wish to view and I extend that assumption to all other users.

Do you disagree?

April 25, 2018

DoS of Censorship at YouTube?

Filed under: Censorship — Patrick Durusau @ 8:42 am

YouTube publishes deleted videos report (BBC)

From the post:

During the last quarter of 2017, 8.3 million videos were deleted for violating “community guidelines.”

If you don’t approve of censorship of others, consider recycling deleted videos, altered to produce a different key signature to increase the friction at YouTube for their deletion.

Think of overloading censors as a form of Denial-of-Service attack.

For example, if files known to be unacceptable were uploaded with altered key signatures and then reported by the uploader, from distributed accounts, it would create a loop of uploads and complaints. The overload helps protect other videos and also increases YouTube’s friction in maintaining its reign of censorship.

Further automation is possible by training an AI to search the web for videos that human censors at YouTube would find to violate “community standards.”

If you fall into the “I want to be effective, not so much clever” category, scraping YouTube deletion notices and then searching for those files elsewhere is an effective, albeit brain-dead, alternative for discovering unacceptable files at YouTube.

Censorship of others is wrong.* (full stop)

Anyone who practices it, merits whatever fate befalls them.

* Note the limitation to “censorship of others.” I routinely “censor/filter” what I choose to view, discuss, etc. I leave everyone else with an identical freedom to choose what they wish to view, discuss, etc. Please extend the same courtesy to me. Thanks!

March 31, 2018

More Google Censorship – ‘Kodi’ Banned from Auto-Complete

Filed under: Censorship,Free Speech,Intellectual Property (IP) — Patrick Durusau @ 7:42 pm

Google Adds ‘Kodi’ to Autocomplete Piracy Filter

From the post:

Google has banned the term “Kodi” from the autocomplete feature of its search engine. This means that the popular software and related suggestions won’t appear unless users type out the full term. Google has previously taken similar measures against “pirate” related terms and confirms that Kodi is targeted because it’s “closely associated with copyright infringement.”

In recent years entertainment industry groups have repeatedly urged Google to ramp up its anti-piracy efforts.

These remarks haven’t fallen on deaf ears and Google has made several changes to its search algorithms to make copyright-infringing material less visible.

In addition to censoring a legitimate project, Kodi, Google is reported to be acting on behalf of entertainment industry groups, gasp, without being paid.

That’s anti-capitalist! It conditions entertainment industry groups and the anti-piracy crowd to expect free handouts. (Property class privilege for any Marxists in the audience.)

To hell with that!

I urge you to not censor at all, but if you do, make others pay dearly for the privilege.

Forced to pay for censorship, entertainment/anti-piracy groups will collect legitimate data on piracy to determine their cost/benefit ratio for censorship. (Legitimate data being defined as data unchanged by membership calendars and fund raising drives.)

March 24, 2018

The Dark Web = Freedom of Speech

Filed under: Censorship,Free Speech,Privacy — Patrick Durusau @ 4:49 pm

Freedom of speech never was all that popular in the United States and recently it has become even less so.

Craigslist personals, some subreddits disappear after FOSTA passage by Cyrus Farivar.

From the post:

In the wake of this week’s passage of the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) bill in both houses of Congress on Wednesday, Craigslist has removed its “Personals” section entirely, and Reddit has removed some related subreddits, likely out of fear of future lawsuits.

FOSTA, which awaits the signature of President Donald Trump before becoming law, removes some portions of Section 230 of the Communications Decency Act. The landmark 1996 law shields website operators that host third-party content (such as commenters, for example) from civil liability. The new bill is aimed squarely at Backpage, a notorious website that continues to allow prostitution advertisements and has been under federal scrutiny for years.

I am deeply saddened to report that the House vote was 388 ayes and 25 noes and the Senate vote was 97 to 2.

You can follow the EFF lead as they piss and moan about this latest outrage. But all their activity (and fund raising) didn’t prevent its passage. So, what are the odds the EFF will get it repealed? That’s what I thought.

I’m not looking for Craigslist to jump to the Dark Web, but certainly subreddits should be able to make the switch. The more subreddits, new sites and services that switch to the Dark Web, the more its usage and bandwidth will grow. I’m looking forward to the day when the default configuration of new computers is for the Dark Web, with the “open” web an optional choice carrying appropriate warnings.

If you are not (yet) a Dark Web jockey, try: How To Access Notorious Dark Web Anonymously (10 Step Guide). Enough to get you started and to demonstrate the potential of the Dark Web.

December 28, 2017

Twitter Taking Sides – Censorship-Wise

Filed under: Censorship,Free Speech,Twitter — Patrick Durusau @ 10:16 pm

@wikileaks pointed out that Twitter’s censorship policies are taking sides:

Accounts that affiliate with organizations that use or promote violence against civilians to further their causes. Groups included in this policy will be those that identify as such or engage in activity — both on and off the platform — that promotes violence. This policy does not apply to military or government entities and we will consider exceptions for groups that are currently engaging in (or have engaged in) peaceful resolution.
… (emphasis added)

Does Twitter need a new logo? Birds with government insignia dropping bombs on civilians?

December 8, 2017

Contra Censors: Tor Bridges and Pluggable Transports [Please Donate to Tor]

Filed under: Censorship,Tor — Patrick Durusau @ 1:08 pm

Tor at the Heart: Bridges and Pluggable Transports by ssteele.

From the post:


Censors block Tor in two ways: they can block connections to the IP addresses of known Tor relays, and they can analyze network traffic to find use of the Tor protocol. Bridges are secret Tor relays—they don’t appear in any public list, so the censor doesn’t know which addresses to block. Pluggable transports disguise the Tor protocol by making it look like something else—for example like HTTP or completely random.

Ssteele points out that censorship, even censorship of Tor, is getting worse, so the time to learn these tools is now. Don’t wait until Tor has gone dark for you to respond.
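If you want a head start, a minimal torrc sketch for using obfs4 bridges looks roughly like the following. The address, port, fingerprint and cert are placeholders, real bridge lines come from bridges.torproject.org or the Tor Browser’s bridge request feature, and the obfs4proxy path varies by system:

```
# Use bridges instead of publicly listed relays
UseBridges 1

# Tell Tor how to launch the obfs4 pluggable transport (path is system-dependent)
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy

# One or more bridge lines obtained from bridges.torproject.org (placeholder values)
Bridge obfs4 192.0.2.10:443 0123456789ABCDEF0123456789ABCDEF01234567 cert=EXAMPLECERTSTRING iat-mode=0
```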

December seems to be when all the begging bowls come out from a number of worthwhile projects.

I should be pitching my cause at this point but instead, please donate to support the Tor project.

November 10, 2017

Who Has More Government Censorship of Social Media, Canada or US?

Filed under: Censorship,Government,Social Media — Patrick Durusau @ 5:31 pm

Federal government blocking social media users, deleting posts by Elizabeth Thompson.

From the post:

Canadian government departments have quietly blocked nearly 22,000 Facebook and Twitter users, with Global Affairs Canada accounting for nearly 20,000 of the blocked accounts, CBC News has learned.

Moreover, nearly 1,500 posts — a combination of official messages and comments from readers — have been deleted from various government social media accounts since January 2016.

However, there could be even more blocked accounts and deleted posts. In answer to questions tabled by Opposition MPs in the House of Commons, several departments said they don’t keep track of how often they block users or delete posts.

It is not known how many of the affected people are Canadian.

It’s also not known how many posts were deleted or users were blocked prior to the arrival of Prime Minister Justin Trudeau’s government.

But the numbers shed new light on how Ottawa navigates the world of social media — where it can be difficult to strike a balance between reaching out to Canadians while preventing government accounts from becoming a destination for porn, hate speech and abuse.

US Legal Issues

Davison v. Loudoun County Board of Supervisors

Meanwhile, south of the Canadian border, last July (2017), a US district court decision carried the headline: Federal Court: Public Officials Cannot Block Social Media Users Because of Their Criticism.


Davison v. Loudoun County Board of Supervisors (Davidson) involved the chair of the Loudoun County Board of Supervisors, Phyllis J. Randall. In her capacity as a government official, Randall runs a Facebook page to keep in touch with her constituents. In one post to the page, Randall wrote, “I really want to hear from ANY Loudoun citizen on ANY issues, request, criticism, compliment, or just your thoughts.” She explicitly encouraged Loudoun residents to reach out to her through her “county Facebook page.”

Brian C. Davidson, a Loudon denizen, took Randall up on her offer and posted a comment to a post on her page alleging corruption on the part of Loudoun County’s School Board. Randall, who said she “had no idea” whether Davidson’s allegations were true, deleted the entire post (thereby erasing his comment) and blocked him. The next morning, she decided to unblock him. During the intervening 12 hours, Davidson could view or share content on Randall’s page but couldn’t comment on its posts or send it private messages.

Davidson sued, alleging a violation of his free speech rights. As U.S. District Judge James C. Cacheris explained in his decision, Randall essentially conceded in court that she had blocked Davidson “because she was offended by his criticism of her colleagues in the County government.” In other words, she “engaged in viewpoint discrimination,” which is generally prohibited under the First Amendment.

Blocking of Twitter users by President Trump has led to other litigation.

Knight First Amendment Institute at Columbia University v. Trump (1:17-cv-05205)

You can track filings in Knight First Amendment Institute at Columbia University v. Trump courtesy of the Court Listener Project. Please put the Court Listener project on your year end donation list.

US Factual Issues

The complaint outlines the basis for the case, both legal and factual, but does not recite any data on blocking of social media accounts by federal agencies. It would not have to; that data isn’t really relevant to the issue at hand, but it would be useful to know the standard practice among US government agencies.

I can suggest where to start looking for that answer: the U.S. Digital Registry, which, as of today, lists 10,877 social media accounts.

You could ask the agencies in question, via FOIA requests, for lists of blocked accounts.

Twitter won’t allow you to see the list of blocked users for accounts other than your own. Of course, that rule depends on your level of access. You’ll find similar situations for other social media providers.

Assuming you have obtained lists of blocked users by official or self-help means, comparing blocked users across agencies, by their demographics, etc., would make a nice data-driven journalism project. Yes?
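A minimal sketch of the comparison step, assuming you have already obtained per-agency lists of blocked handles (the CSV filenames and the “handle” column below are hypothetical, standing in for whatever the FOIA responses actually deliver):

```python
import csv
from itertools import combinations

def load_blocked(path, column="handle"):
    """Read one agency's blocked-account list into a set of lowercase handles."""
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

# Hypothetical filenames, one per agency FOIA response.
agencies = {
    "State": load_blocked("state_blocked.csv"),
    "DHS": load_blocked("dhs_blocked.csv"),
    "EPA": load_blocked("epa_blocked.csv"),
}

# How many accounts does each agency block, and how much do agencies overlap?
for name, blocked in agencies.items():
    print(f"{name}: {len(blocked)} blocked accounts")

for (a, set_a), (b, set_b) in combinations(agencies.items(), 2):
    print(f"{a} and {b}: {len(set_a & set_b)} accounts blocked by both")
```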

November 9, 2017

Google Doc Lock – Google As Censor

Filed under: Censorship,Free Speech — Patrick Durusau @ 9:21 am

Monica Chin reports, in Google is locking people out of documents, and you should be worried, that Google’s role as censor has taken an ugly turn.

From the post:


“This morning, we made a code push that incorrectly flagged a small percentage of Google docs as abusive, which caused those documents to be automatically blocked,” the company told Mashable. “A fix is in place and all users should have access to their docs.”

Google added, “We apologize for the disruption and will put processes in place to prevent this from happening again.”

Still, the incident raises important questions about the control Google Docs users have over their own content. The potential to lose access to an important document because it hasn’t yet been polished to remove certain references or sensitive material has concrete implications for the way Google Docs is used.

For many who work in media and communications, Google Docs serves as a drafting tool, allowing writers and editors to collaborate. And, of course, it’s necessary and important for writers to retain ownership of documents that are early versions of their final product — no matter how raw — so as to put a complete draft through the editorial process.

Nobody should be writing hate speech or death threats in their Google docs — or anywhere.

But if Google’s flagging system is so glitchy as to incorrectly target other content, a Google Docs user on a deadline needs to be on their toes. Bale tweeted that she no longer plans to write in Google Docs. Until Google fully resolves this issue, perhaps other journalists should follow her lead.

Chin’s suggestion:

Nobody should be writing hate speech or death threats in their Google docs — or anywhere.

is clearly not the answer to Google censorship.

What if you are a novelist who is unfortunate enough to be using Google Docs to write about white supremacy in the Trump White House? Unlikely I know (sarcasm) but it isn’t hard to think of fictional content that qualifies as “hate speech” or “death threats.” Nor should novelists be required to mark their writings as “fiction” to escape Google censorship.

A Google Docs lock has No Notice, No Opportunity to Be Heard Prior to Lockout, and No Transparent Process.

Three very good reasons to not use Google Docs at all.

October 4, 2017

Defeating Israeli Predictive Policing Algorithm

Filed under: Censorship,Free Speech — Patrick Durusau @ 4:53 pm

The Israeli algorithm criminalizing Palestinians for online dissent by Nadim Nashif and Marwa Fatafta.

From the post:

The Palestinian Authority’s (PA) arrest of West Bank human rights defender Issa Amro for a Facebook post last month is the latest in the the PA’s recent crackdown on online dissent among Palestinians. Yet it’s a tactic long used by Israel, which has been monitoring social media activity and arresting Palestinians for their speech for years – and has recently created a computer algorithm to aid in such oppression.

Since 2015, Israel has detained around 800 Palestinians because of content they wrote or shared online, mainly posts that are critical of Israel’s repressive policies or share the reality of Israeli violence against Palestinians. In the majority of these cases, those detained did not commit any attack; mere suspicion was enough for their arrest.

The poet Dareen Tatour, for instance, was arrested on October 2015 for publishing a poem about resistance to Israel’s 50-year-old military rule on her Facebook page. She spent time in jail and has been under house arrest for over a year and a half. Civil rights groups and individuals in Israel, the Occupied Palestinian Territory (OPT), and abroad have criticized Israel’s detention of Tatour and other Palestinian internet users as violations of civil and human rights.

Israeli officials have accused social media companies of hosting and facilitating what they claim is Palestinian incitement. The government has pressured these companies, most notably Facebook, to remove such content. Yet the Israeli government is mining this content. Israeli intelligence has developed a predictive policing system – a computer algorithm – that analyzes social media posts to identify Palestinian “suspects.”

One response to Israel’s predictive policing is to issue a joint statement: Predictive Policing Today: A Shared Statement of Civil Rights Concerns.

Another response, undertaken by Nadim Nashif and Marwa Fatafta, is to document the highly discriminatory and oppressive use of Israel’s predictive policing.

Both of those responses depend upon 1) the Israeli government agreeing it has acted wrongfully, and 2) the Israeli government in fact changing its behavior.

No particular reflection on the Israeli government but I don’t trust any government claiming, unverified, to have changed its behavior. How would you ever know for sure? Trusting any unverified answer from any government (read party) is a fool’s choice.

Discovering the Israeli algorithm for social media based arrests

What facts do we have about Israeli monitoring of social media?

  1. Identity of those arrested on basis of social media posts
  2. Content posted prior to their arrests
  3. Content posted by others who were not arrested
  4. Relationships with others, etc.

Think of the problem as being similar to breaking the Enigma machine during WWII. We don’t have to duplicate the algorithm in use by Israel; we only have to duplicate its output. We have on hand some of the inputs and the outcomes of those inputs to start our research.

Moreover, as Israel uses social media monitoring, present guesses at the algorithm can be refined on the basis of more arrests.

Knowing Israel’s social media algorithm is cold comfort to arrested Palestinians, but that knowledge can help prevent future arrests or make the cost of the method too high to be continued.
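A minimal sketch of the “duplicate its output” step, assuming a research team has assembled a corpus of public posts labeled by outcome (arrested or not). The file name and column layout are hypothetical, and this is textbook text classification standing in for the unknown Israeli system, not a reconstruction of it:

```python
import csv
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical corpus: one row per post, columns "text" and "arrested" (0 or 1).
with open("labeled_posts.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))
texts = [row["text"] for row in rows]
labels = [int(row["arrested"]) for row in rows]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0)

# Fit a proxy model: if it predicts the observed arrests well, it approximates
# the real system's output without ever seeing its internals.
vectorizer = TfidfVectorizer(min_df=2)
model = LogisticRegression(max_iter=1000)
model.fit(vectorizer.fit_transform(X_train), y_train)

print(classification_report(y_test, model.predict(vectorizer.transform(X_test))))
```

As more arrests become public, the proxy can be refit, which is the refinement step described above.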

Social Media Noise Based on Israeli Social Media Algorithm

What makes predictive policing algorithms effective is their narrowing of the field of suspects to a manageable number. If instead of every male between the ages of 16 and 30 you have 20 suspects with scattered geographic locations, you can reduce the number of viable suspects fairly quickly.

But that depends upon being able to distinguish between all the males between the ages of 16 and 30. What if, based on a discovered parallel to the Israeli predictive policing algorithm, a group of 15,000 or 20,000 young men were “normalized” so that they all present the Israeli algorithm with the same profile?

Then instead of 2 or 3 people who seem angry enough to commit violence, the algorithm sees 10,000 people, real and fake, right on the edge of extreme violence.

Judicious use of social media noise, informed by a parallel to the Israeli social media algorithm, could make the Israeli algorithm useless in practice. There would be too much noise for it to be effective. Or the resources required to eliminate the noise would be prohibitively expensive.

For predictive policing algorithms based on social media, “noise” is its Achilles heel.

PS: Actually defeating a predictive policing algorithm, to say nothing of generating noise on social media, isn’t a one-man-band sort of project. It calls for experts in data mining, predictive algorithms, data analysis and social media, plus support personnel. Perhaps a multi-university collaboration?

PPS: I don’t dislike the Israeli government any more or less than any other government. It was happenstance Israel was the focus of this particular article. I see the results of such research as applicable to all other governments and private entities (such as Facebook, Twitter).

October 3, 2017

Facebook Hiring 1,000+ Censors

Filed under: Censorship,Facebook,Free Speech — Patrick Durusau @ 4:49 pm

Facebook‘s assault on free speech, translated into physical terms:

That is a scene of violence as Spanish police assault voters during the Catalonia independence referendum.

Facebook is using social and mainstream media to cloak its violence in high-minded terms:

  1. “…thwart deceptive ads crafted to knock elections off course.” Facebook knows the true “course” of elections?
  2. “…hot-button issues to turn people against one another ahead of last year’s US election.” You never saw the Willie Horton ad?
  3. “Many appear to amplify racial and social divisions.” Ditto on the Willie Horton ad.
  4. “…exacerbating political clashes ahead of and following the 2016 US presidential election.” Such as: 10 Most-Shared 2012 Republican Campaign Ads on YouTube
  5. “…ads that touted fake or misleading news or drove traffic to pages with such messages…” And Facebook is going to judge this? The same Facebook that knows “how” elections are supposed to go?

Quotations from Facebook beefing up team to thwart election manipulation by Glenn Chapman.

Like the Spanish police, Facebook has chosen the side of oppression and censorship, however much it wants to hide that fact.

When you think of Facebook, think of police swinging their batons, beating, kicking protesters.

Choose your response to Facebook and anyone proven to be a Facebook censor accordingly.

September 30, 2017

Female Journalists Fight Online Harassment [An Anti-Censorship Response]

Filed under: Censorship,Feminism,Free Speech,Journalism,News,Reporting — Patrick Durusau @ 2:24 pm

(Before you tweet, pro or con, I take everything Ricchiardi reports as true and harassment of women as an issue that must be addressed.)

Female Journalists Fight Online Harassment by Sherry Ricchiardi.

From the post:

Online tormentors have called Swedish broadcaster Alexandra Pascalidou a “dirty whore,” a “Greek parasite” (a reference to her ethnic heritage), a “stupid psycho,” “ugly liar” and “biased hater.” They have threatened her with gang rape and sexual torture in hideous detail.

But Pascalidou has chosen to fight back by speaking out publicly, as often as she can, against the online harassment faced by female journalists. In November 2016, she testified before a European commission about the impact of gender-based trolling. “(The perpetrators’) goal is our silence,” she told the commission. “It’s censorship hidden behind the veil of freedom of speech. Their freedom becomes our prison.”

In April 2017, Pascalidou appeared on a panel at the International Journalism Festival in Italy, discussing how to handle sexist attacks online. She described the vitriol and threats as “low-intense, constant warfare.”

“Some say switch it off, it’s just online,” she told The Sydney Morning Herald. “It doesn’t count. But it does count, and it’s having a real impact on our lives. Hate hurts. And it often fuels action IRL (in real life).”

Other media watchdogs have taken notice. International News Safety Institute director Hannah Storm has called online harassment “the scourge of the moment in our profession” and a “major threat to the safety and security of women journalists.”

“When women journalists are the target, online harassment quickly descends into sexualized hate or threats more often than with men,” she added. “Women are more likely to be subjected to graphic sexual and physical violence.”

You will be hard-pressed to find a more radical supporter of free speech than myself. I don’t accept the need for censorship of any content, for any reason, by any public or private entity.

Having said that, users should be enabled to robustly filter speech they encounter, so as to avoid harassment, threats, etc. But they are filtering their information streams and not mine. There’s a difference.

Online harassment is consistent with the treatment of women IRL (in real life). Cultural details will vary but the all encompassing abuse described in Woman at point zero by Nawāl Saʻdāwī can be found in any culture.

The big answer is to change the treatment of women in society, which in turn will reduce online harassment. But big answers don’t provide relief to women who are suffering online now. Ricchiardi lists a number of medium answers, the success of which will vary from one newsroom to another.

I have a small answer that isn’t seeking a global, boil-the-ocean answer.

Follow female journalists on Twitter and other social media. Don’t be silent in the face of public harassment.

You can consider one or more of the journalists from Leading women journalists – A public list by Ellie Van Houtte.

Personally I’m looking for local or not-yet-leading female journalists to follow. A different perspective on the news than my usual feed plus an opportunity to be supportive in a hostile environment.

Being supportive requires no censorship and supplies aid where it is needed the most.

Yes?

September 28, 2017

EU Humps Own Leg – Demands More Censorship From Tech Companies

Filed under: Censorship,EU,Free Speech,Government — Patrick Durusau @ 8:09 pm

In its mindless pursuit of the marginal and irrelevant, the EU is ramping up pressure on tech companies to censor more speech.

Security Union: Commission steps up efforts to tackle illegal content online

Brussels, 28 September 2017

The Commission is presenting today guidelines and principles for online platforms to increase the proactive prevention, detection and removal of illegal content inciting hatred, violence and terrorism online.

As a first step to effectively fight illegal content online, the Commission is proposing common tools to swiftly and proactively detect, remove and prevent the reappearance of such content:

  • Detection and notification: Online platforms should cooperate more closely with competent national authorities, by appointing points of contact to ensure they can be contacted rapidly to remove illegal content. To speed up detection, online platforms are encouraged to work closely with trusted flaggers, i.e. specialised entities with expert knowledge on what constitutes illegal content. Additionally, they should establish easily accessible mechanisms to allow users to flag illegal content and to invest in automatic detection technologies.
  • Effective removal: Illegal content should be removed as fast as possible, and can be subject to specific timeframes, where serious harm is at stake, for instance in cases of incitement to terrorist acts. The issue of fixed timeframes will be further analysed by the Commission. Platforms should clearly explain to their users their content policy and issue transparency reports detailing the number and types of notices received. Internet companies should also introduce safeguards to prevent the risk of over-removal.
  • Prevention of re-appearance: Platforms should take measures to dissuade users from repeatedly uploading illegal content. The Commission strongly encourages the further use and development of automatic tools to prevent the re-appearance of previously removed content.

… (emphasis in original)

Taking Twitter as an example, EU terrorism concerns are generously described as coke-fueled fantasies.

Twitter Terrorism By The Numbers

Don’t take my claims about Twitter as true without evidence! Such as statistics gathered on Twitter and Twitter’s own reports.

Twitter Statistics:

Total Number of Monthly Active Twitter Users: 328 million (as of 8/12/17)

Total Number of Tweets sent per Day: 500 million (as of 1/24/17)

Number of Twitter Daily Active Users: 100 million (as of 1/24/17)

Government terms of service reports Jan – Jun 30, 2017

The report lists 338 government reports covering 1200 accounts suspended for promotion of terrorism.

Got that? From Twitter’s official report, 1200 accounts suspended for promotion of terrorism.

I read that to say 1200 accounts out of 328 million monthly users.

Aren’t you just shaking in your boots?

But it gets better, Twitter has a note on promotion of terrorism:

During the reporting period of January 1, 2017 through June 30, 2017, a total of 299,649 accounts were suspended for violations related to promotion of terrorism, which is down 20% from the volume shared in the previous reporting period. Of those suspensions, 95% consisted of accounts flagged by internal, proprietary spam-fighting tools, while 75% of those accounts were suspended before their first tweet. The Government TOS reports included in the table above represent less than 1% of all suspensions in the reported time period and reflect an 80% reduction in accounts reported compared to the previous reporting period.

We have suspended a total of 935,897 accounts in the period of August 1, 2015 through June 30, 2017.

That’s more than the 1200 reported by governments, but comparing 935,897 total suspensions against 328 million monthly users, and assuming all those suspensions were warranted (more on that in a minute), “terrorism” accounts were less than 1/3 of 1% of all Twitter accounts.
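The arithmetic is easy to check (figures taken from the statistics and the Twitter report quoted above):

```python
monthly_active_users = 328_000_000   # Twitter monthly active users, as of 8/12/17
total_suspensions = 935_897          # suspensions for promotion of terrorism, 8/1/15 - 6/30/17
government_reported = 1_200          # accounts in government TOS reports, 1/1/17 - 6/30/17

print(f"{total_suspensions / monthly_active_users:.3%}")    # ~0.285%, i.e. under 1/3 of 1%
print(f"{government_reported / monthly_active_users:.5%}")  # ~0.00037%
```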

The EU is urging more pro-active censorship over less than 1/3 of 1% of all Twitter accounts.

Please help the EU find something more trivial and less dangerous to harp on.

The Dangers of Twitter Censorship

Known Unknowns: An Analysis of Twitter Censorship in Turkey by Rima S. Tanash et al. studies Twitter censorship in Turkey:

Twitter, widely used around the world, has a standard interface for government agencies to request that individual tweets or even whole accounts be censored. Twitter, in turn, discloses country-by-country statistics about this censorship in its transparency reports as well as reporting specific incidents of censorship to the Chilling Effects web site. Twitter identifies Turkey as the country issuing the largest number of censorship requests, so we focused our attention there. Collecting over 20 million Turkish tweets from late 2014 to early 2015, we discovered over a quarter million censored tweets—two orders of magnitude larger than what Twitter itself reports. We applied standard machine learning / clustering techniques, and found the vast bulk of censored tweets contained political content, often critical of the Turkish government. Our work establishes that Twitter radically under-reports censored tweets in Turkey, raising the possibility that similar trends hold for censored tweets from other countries as well. We also discuss the relative ease of working around Twitter’s censorship mechanisms, although we can not easily measure how many users take such steps.

Are you surprised that:

  1. Censors lie about the amount of censoring done, or
  2. Censors censor material critical of governments?

It’s not only users in Turkey who have been victimized by Twitter censorship. Alfons López Tena has great examples of unacceptable Twitter censorship in: Twitter has gone from bastion of free speech to global censor.

You won’t notice Twitter censorship if you don’t care about Arab world news or Catalan independence. And, after all, you really weren’t interested in those topics anyway. (sarcasm)

Next Steps

The EU wants an opaque, private party to play censor for content on a worldwide basis. In pursuit of a gnat in the flood of social media content.

What could possibly go wrong? Well, as long as you don’t care about the Arab world, Catalan independence, or well, criticism of government in general. You don’t care about those things, right? Otherwise you might be a terrorist in the eyes of the EU and Twitter.

The EU needs to be distracted from humping its own leg and promoting censorship of social media.

Suggestions?

PS: Other examples of inappropriate Twitter censorship abound but the answer to all forms of censorship is NO. Clear, clean, easy to implement. Don’t want to see content? Filter your own feed, not mine.

September 26, 2017

571 threats to press freedom in first half of 2017 [Hiding the Perpetrators?]

Filed under: Censorship,Free Speech,Government,Journalism,News,Reporting — Patrick Durusau @ 6:09 pm

Mapping Media Freedom verifies 571 threats to press freedom in first half of 2017

First Limit on Coverage

When reading this report, which is excellent coverage of assaults on press freedom, bear in mind the following limitation:

Mapping Media Freedom identifies threats, violations and limitations faced by members of the press throughout European Union member states, candidates for entry and neighbouring countries.

You will not read about US-based and other threats to press freedom that fall outside the purview of Mapping Media Freedom.

From the post:

Index on Censorship’s database tracking violations of press freedom recorded 571 verified threats and limitations to media freedom during the first two quarters of 2017.

During the first six months of the year: three journalists were murdered in Russia; 155 media workers were detained or arrested; 78 journalists were assaulted; 188 incidents of intimidation, which includes psychological abuse, sexual harassment, trolling/cyberbullying and defamation, were documented; 91 criminal charges and civil lawsuits were filed; journalists and media outlets were blocked from reporting 91 times; 55 legal measures were passed that could curtail press freedom; and 43 pieces of content were censored or altered.

“The incidents reported to the Mapping Media Freedom in the first half of 2017 tell us that the task of keeping the public informed is becoming much harder and more dangerous for journalists. Even in countries with a tradition of press freedom journalists have been harassed and targeted by actors from across the political spectrum. Governments and law enforcement must redouble efforts to battle impunity and ensure fair treatment of journalists,” Hannah Machlin, Mapping Media Freedom project manager, said.

This is a study of threats, violations and limitations to media freedom throughout Europe as submitted to Index on Censorship’s Mapping Media Freedom platform. It is made up of two reports, one focusing on Q1 2017 and the other on Q2 2017.

You can obtain the report in PDF format.

Second Limit on Coverage

As I read about incident after incident, following the links, I only see “the prosecutor,” “the police,” “traffic police,” “its publisher,” “the publisher of the channel,” and similar opaque prose.

Surely “the prosecutor” and “the publisher” were known to the person reporting the incident. If that is the case, then why hide the perpetrators? What does that gain for freedom of the press?

Am I missing some unwritten rule that requires members of the press to be perpetual victims?

Exposing the perpetrators to the bright light of public scrutiny, enables local and remote defenders of press freedom to join in defense of the press.

Yes?

September 22, 2017

Torrent Sites: Preserving “terrorist propaganda” and “evil material”

Filed under: Censorship,Cybersecurity,Free Speech,Government,Security — Patrick Durusau @ 1:37 pm

I mentioned torrent sites in Responding to Theresa May on Free Speech as a way to help preserve and spread “terrorist propaganda” and “evil material.”

My bad, I forgot to post a list of torrent sites for you to use!

Top 15 Most Popular Torrent Sites 2017 reads in part:

The list of the worlds most popular torrent sites has seen a lot of changes in recent months. While several torrent sites have shut down, some newcomers joined the list. With the shutdown of Torrentz.eu and Kickass Torrents, two of the largest sites in the torrenting scene disappeared. Since then, Torrentz2 became a popular successor of Torrentz.eu and Katcr.co is the community driven version of the former Kickass Torrents.

Finding torrents can be stressful as most of the top torrent sites are blocked in various countries. A torrent proxy let you unblock your favorite site in a few seconds.

While browsing the movies, music or tv torrents sites list you can find some good alternatives to The Pirate Bay, Extratorrent, RARBG and other commonly known sites. This list features the most popular torrent download sites:

The list changes over time so check back at Torrents.me.

As a distributed system (with a distributed hash table for peer discovery), BitTorrent preserves content across all of the computers that have downloaded it.

I’m working towards the mention of torrent sites making Theresa May‘s sphincter eat her underpants. (HT, Dilbert)

September 21, 2017

Responding to Theresa May on Free Speech

Filed under: Censorship,Free Speech — Patrick Durusau @ 1:32 pm

Google and Facebook among tech giants Theresa May will order to remove extremist content by Rob Merrick.

Theresa May and her cadre of censorious thugs pose a clear and present danger to free speech on the Internet. No news there but the danger she poses has increased.

From the post:

The world’s biggest technology firms will be told to take down terrorist propaganda in as little as one hour, as Theresa May seeks to dramatically reduce the danger of it inspiring further atrocities.

The Prime Minister will also challenge them to develop technology to prevent “evil material” ever appearing on the web, as they are forced to defend their efforts in public for the first time.

Facebook, Microsoft, Google and Twitter are among the firms who will face their critics in New York, having agreed to set up a Global Internet Forum to Counter Terrorism.

At the heart of the plan is a target for terror propaganda to be taken down within one to two hours – the crucial period during which most of it is disseminated.

My response to Theresa May’s latest assault on free speech doesn’t depend on the “details” of her proposal. Proposing to suppress “terrorist propaganda” and “evil material” is a clear violation of free speech. What is “terrorist propaganda” and “evil material” is left free for individuals to judge for themselves, in a free society.

No one should dignify her assault on free speech by debating the details of how much or what kind of free speech will be suppressed. Her request for censoring of any speech, should be rejected unconditionally.

Sadly, Merrick reports that censorship of content now takes as little as 36 hours, as opposed to 30 days a year ago. The tech giants mentioned above have been laboring mightily to censor the Internet and are no less guilty than Theresa May in that regard.

The loss of free speech has been debated and lamented over that same year, when 30 days of freedom shrank to 36 hours. Or in equivalent terms, going from 720 hours of freedom to only 36, a reduction of 95%.

With a loss of 95% of practical freedom for “terrorist propaganda” and “evil material,” I’d say that lamenting the loss of free speech has been largely ineffectual. You?

Practical Responses to Theresa May and Her Cadre of Internet Censors

If lamenting the loss of freedom of speech (and other rights) on Facebook, Twitter, the web isn’t effectual, what is? What follows are my suggestions, feel free to share yours.

1. Upload/Download “terrorist propaganda” and “evil material” to Torrent Sites

As Robert Graham, @ErrataBob reminds us at: Did You Miss The Macron Leak? @ErrataBob To The Rescue!, a “distributed hash network” preserves files even if the original link has been deleted.

If high-tech toadies remove “terrorist propaganda” and “evil material” from a Torrent download site, the content is preserved on the computers of everyone who has downloaded it.

Uploading and downloading using Torrent is a value-add activity for every user. The larger the group that downloads, the greater the preservation of the content. Enlist your followers/users today!

2. Generate and Share “evil material”

I’ve only looked at a small amount of “evil material” on the Internet but what I have seen, well, I’m not impressed. The “bomb making” recipes I have seen pose almost as much danger to their maker as they do to any intended victims. There is a certain romance to making your own ordinance, but there’s a reason professional armies don’t. Yes?

But pointing out the repetitious and dubious nature of bomb making recipes on the Internet won’t stop Theresa May. That being the case, I suggest prodding her to a fever pitch with imaginative and innovative ways to create chaos.

If you think about it for a few minutes, bombs, cars and guns are the simplest of tools. Defending freedom of speech requires imagination.

Anyone up for war gaming a future event in London? (Speech only, no action.)

3. Governments and Tech Giants Who Support Censorship

For governments, tech giants and staffers supporting censorship of “terrorist propaganda” and “evil material,” we should all draw inspiration from this slightly altered lyric:

In their styes with all their backing
They don’t care what goes on around
In their eyes there’s something lacking
What they need’s a damn good hacking

(with apologies to the Beatles, “Piggies”)

Governments and tech giants have chosen to be censors of free speech. They can just as easily choose to be supporters of free speech.

Their choices dictate how they should be seen and treated by others.

PS: For your image recognition software, Theresa May:

September 17, 2017

Rewarding UK Censorship Demands

Filed under: Censorship,Free Speech,Government — Patrick Durusau @ 4:13 pm

Image of the Daily Mail from Twitter:

No link to the online version. It’s easy enough to find on your own. Besides, regular reading of the Daily Mail increases your risk of rumored appointment by the accidental president of the United States. As your mother often said, “you are what you read.”

The story claims:


Theresa May will order internet giants to clamp down on extremism following yesterday’s Tube terror attack.

Where “extremism” doesn’t include the daily bombing runs and other atrocities committed by the West.

I don’t expect better from the Daily Mail but the government’s hysteria over online content is clearly misplaced.

The inability of a group to make a successful “fairy light” bomb speaks volumes about the threat posed by online bomb-making plans.

Bomb-making plans make great wannabe reading and tough-guy talk for cell meetings, and evidence for the police when discovered in your possession, but in and of themselves they are hardly worthy of notice. The same can be said for “radical” literature of all stripes.

Still, it seems a shame for the UK’s paranoid delusions to go unrewarded, especially in light of the harm it intends to free speech for all Internet users.

Suggestions?

September 14, 2017

Self-Censorship and Privilege on the Internet

Filed under: Censorship,CIA,Journalism,News,NSA,Reporting — Patrick Durusau @ 4:42 pm

Sloppy U.S. Spies Misused A Covert Network For Personal Shopping — And Other Stories From Internal NSA Documents by Micah Lee, Margot Williams, Talya Cooper.

From the post:

NSA agents successfully targeted “the entire business chain” connecting foreign cafes to the internet, bragged about an “all-out effort” to spy on liberated Iraq, and began systematically trying to break into virtual private networks, according to a set of internal agency news reports dating to the first half of 2005.

British spies, meanwhile, were made to begin providing new details about their informants via a system of “Intelligence Source Descriptors” created in response to intelligence failures in Iraq. Hungary and the Czech Republic pulled closer to the National Security Agency.

And future Intercept backer Pierre Omidyar visited NSA headquarters for an internal conference panel on “human networking” and open-source intelligence.

These stories and more are contained in a batch of 294 articles from SIDtoday, the internal news website of the NSA’s core Signals Intelligence Directorate. The Intercept is publishing the articles in redacted form as part of an ongoing project to release material from the files provided by NSA whistleblower Edward Snowden.

In addition to the aforementioned highlights, summarized in further detail below, the documents show how the NSA greatly expanded a secret eavesdropping partnership with Ethiopia’s draconian security forces in the Horn of Africa, as detailed in an investigation by longtime Intercept contributor Nick Turse. They describe the NSA’s operations at a base in Digby, England, where the agency worked with its British counterpart GCHQ to help direct drones in the Middle East and tap into communications through the Arab Spring uprisings, according to a separate article by Intercept reporter Ryan Gallagher. And they show how the NSA and GCHQ thwarted encryption systems used to protect peer-to-peer file sharing through the apps Kazaa and eDonkey, as explained here by Intercept technologist Micah Lee.

NSA did not comment for this article.

If you are interested in reporting based on redacted versions of twelve-year-old news (the last half of 2005), this is the article for you.

The authors proclaim self-censorship and privilege saying:


The Intercept is publishing the articles in redacted form as part of an ongoing project to release material from the files provided by NSA whistleblower Edward Snowden.

These authors can milk their treasure trove of unredacted SIDtoday reports, giving them an obvious advantage over other journalists.

Not as great an advantage as being white and male, but it is a privilege unrelated to merit, one that violates any concept of equal access.

Other reporters or members of the public might notice connections unseen by the Intercept authors.

We won’t ever know since the Intercept, along with other media outlets, is quick to call foul on the privileges of others while clinging to its own.

PS: The lack of efforts by intelligence agencies to stop the SIDtoday series is silent testimony to its lack of importance. The SIDtoday series is little better than dated office gossip and not a complete (redacted) account of the same.

Meaningful intelligence reporting derails initiatives, operations, exposes criminal excesses with named defendants and holds the intelligence community accountable to the public. Not to be confused with the SIDtoday series and its like.

September 1, 2017

Google As Censorship Repeat Offender : The Kashmir Hill Story

Filed under: Censorship,Free Speech — Patrick Durusau @ 10:59 am

That Google is a censorship repeat offender surprises no one. Censorship is part and parcel of its toadyism to governments and its delusional war against “dangerous” ideas.

Kashmir Hill’s story of Google censorship puts a personal spin on a pattern of censorship too massive to adequately appreciate.

Reporter: Google successfully pressured me to take down critical story by Timothy B. Lee.

From the post:

The recent furor over a Google-funded think tank firing an anti-Google scholar has inspired Gizmodo journalist Kashmir Hill to tell a story about the time Google used its power to squash a story that was embarrassing to the company.

The incident occurred in 2011. Hill was a cub reporter at Forbes, where she covered technology and privacy. At the time, Google was actively promoting Google Plus and was sending representatives to media organizations to encourage them to add “+1” buttons to their sites. Hill was pulled into one of these meetings, where the Google representative suggested that Forbes would be penalized in Google search results if it didn’t add +1 buttons to the site.

Hill thought that seemed like a big story, so she contacted Google’s PR shop for confirmation. Google essentially confirmed the story, and so Hill ran with it under the headline: “Stick Google Plus Buttons On Your Pages, Or Your Search Traffic Suffers.”

Hill described what happened next:

No government, itself a practitioner of censorship, will punish Google for this or its continuing acts of censorship.

Some things you can do:

  • Follow and support Kashmir Hill, who is likely to catch a lot of shit over this report.
  • Follow and support Ars Technica, anyone for boosting their search results?
  • Vote with your feet for other search services.
  • Place ads with other search services.
  • Hackers, well, do what you do best.

And to those who respond: “Well, that’s just good business.”

For some sense of “good business,” sure. But users are also free to make their own choices about “good business.”

If Google ad revenue takes a measurable hit between now and December 31, 2017, user choices may be heard.

August 25, 2017

DOJ Wanted To Hunt Down DisruptJ20.org Visitors

Filed under: Censorship,Free Speech,Government,Politics,Protests,Tor — Patrick Durusau @ 2:34 pm

National Public Radio (NPR) details the Department of Justice (DOJ) request for web records from DisruptJ20.org, which organized protests against the coronation of the current U.S. president, in Government Can Search Inauguration Protest Website Records, With Safeguards and Justice Department Narrows Request For Visitor Logs To Inauguration Protest Website. (The second story has the specifics on the demand.)

The narrowed DOJ request excludes:

f. DreamHost shall not disclose records that constitute HTTP requests and error logs.

A win for casual visitors this time, but no guarantees for next time.

The NPR stories detail this latest governmental over-reaching but the better question is:

How to avoid being scooped up if such a request were granted?

One-word answer: Tor!

What is Tor?

Tor is free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security.

Why Anonymity Matters

Tor protects you by bouncing your communications around a distributed network of relays run by volunteers all around the world: it prevents somebody watching your Internet connection from learning what sites you visit, and it prevents the sites you visit from learning your physical location.

What’s your default browser?

If your answer is anything but Tor, you are putting yourself and others at risk.
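
Browsing isn’t the only traffic you can push through Tor. As a small illustration, here is a minimal sketch of making an ordinary web request over Tor from Python. It assumes a Tor client is already running locally on its default SOCKS port (9050) and that requests is installed with SOCKS support (pip install requests[socks]); the check.torproject.org endpoint is used only as a convenient way to confirm the request really exited through Tor.

```python
# Minimal sketch: route an HTTP request through a locally running Tor client.
# Assumes Tor is listening on 127.0.0.1:9050, its default SOCKS port.
import requests

# socks5h (note the "h") resolves DNS inside Tor as well, so neither your
# resolver nor the destination site ever sees your real IP address.
TOR_PROXY = "socks5h://127.0.0.1:9050"
proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

resp = requests.get("https://check.torproject.org/api/ip",
                    proxies=proxies, timeout=60)
print(resp.json())  # expected: {"IsTor": true, "IP": "<some exit node address>"}
```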

August 24, 2017

Blasphemy and Related Laws (Censorship)

Filed under: Censorship,Free Speech,Religion — Patrick Durusau @ 10:49 am

Years ago I encountered a description of a statement as being so vile that it made:

…strong men curse and women faint…

The author did not capture the statement and I don’t remember the book with that description. Based on the sexism in the quote, I’m assuming either the work or the time described was late 19th century.

Suggestions?

Blasphemy is a possible subject area for such a statement and the Library of Congress has helpfully compiled:

Blasphemy and Related Laws.

Description:

This report surveys laws criminalizing blasphemy, defaming religion, harming religious feelings, and similar conduct in 77 jurisdictions. In some instances the report also addresses laws criminalizing proselytization. Laws prohibiting incitement to religious hatred and violence are outside the scope of this report, although in some cases such laws are mentioned where they are closely intertwined with blasphemy. The report focuses mostly on laws at the national level, and while it aims to cover the majority of countries with such laws, it does not purport to be comprehensive.

I recognize not blaspheming in the presence of believers as a social courtesy, but the only true blasphemy, in my view, is censorship of the speech of others.

Censorship of blasphemy implies a Deity threatened by human speech. That is a slander of any Deity worthy of worship.

August 23, 2017

Censors To Hate: Alison Saunders, Crown Prosecution Services

Filed under: Censorship,Free Speech,Government — Patrick Durusau @ 2:52 pm

There is no complete list of censors to hate, but take all the posts marked censorship as a starting point for an incomplete list.

Alison Saunders, in “Hate is hate. Online abusers must be dealt with harshly,” announces the bizarre proposition:


the Crown Prosecution Service (CPS) today commits to treat online hate crimes as seriously as those committed face to face.

Not distinguishing between face-to-face and online hate crimes places the value of a University of Leeds legal education in question.

Unlike the target of a face-to-face hate crime, every online user has access to an on/off button that immediately terminates any attempted hate crime.

Moreover, applications worthy of use offer a variety of filtering mechanisms by which an intended victim of a hate crime can avoid contact with a would-be abuser.

Saunders claims 15,000 hate crime prosecutions in 2015–2016, more than the Crown Prosecution Service has ever prosecuted before, but fails to point out that the conviction rate was 82.9%.

If these were all online crimes, Saunders and the CPS would be prosecuting roughly 1 in 6 cases where no crime was committed.

Or put differently, if you are charged with a hate crime, there is better than a four-out-of-five chance you will be convicted.
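
A quick check of those figures (granting, for the sake of argument, that a failed prosecution means no crime was committed):

\[
1 - 0.829 = 0.171 \approx \tfrac{1}{6},
\qquad
0.829 \approx \tfrac{5}{6} > \tfrac{4}{5}.
\]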

Are you more or less likely to make a strong objection or post if there is a four-out-of-five chance of being convicted of a crime?

Check your local laws before acting on any hatred for Alison Saunders or Crown Prosecution Services.

Citizens of the world must oppose censors and censorship everywhere. If you can’t criticize local censorship, speak out against censors elsewhere.

August 19, 2017

Rethinking (read abandoning) Free Speech

Filed under: Censorship,Free Speech — Patrick Durusau @ 4:22 pm

If “The A.C.L.U. Needs to Rethink Free Speech” by K-Sue Park were an exercise in legal logic, Park would get an F.

These paragraphs capture Park’s argument:


After the A.C.L.U. was excoriated for its stance, it responded that “preventing the government from controlling speech is absolutely necessary to the promotion of equality.” Of course that’s true. The hope is that by successfully defending hate groups, its legal victories will fortify free-speech rights across the board: A rising tide lifts all boats, as it goes.

While admirable in theory, this approach implies that the country is on a level playing field, that at some point it overcame its history of racial discrimination to achieve a real democracy, the cornerstone of which is freedom of expression.

I volunteered with the A.C.L.U. as a law student in 2011, and I respect much of its work. But it should rethink how it understands free speech. By insisting on a narrow reading of the First Amendment, the organization provides free legal support to hate-based causes. More troubling, the legal gains on which the A.C.L.U. rests its colorblind logic have never secured real freedom or even safety for all.

For marginalized communities, the power of expression is impoverished for reasons that have little to do with the First Amendment. Numerous other factors in the public sphere chill their voices but amplify others.

Without doubt, the government, American society in general, and the legal system in particular are not blind to race, gender, class, or any other meaningful distinction. Marginalized communities bear the brunt of that lack of blindness.

If the legal system deprives those with privilege and power of free speech, what do logic and experience dictate will be the impact on marginalized communities?

Are you expecting a different free speech result for the marginalized from courts that discriminate against them?

If yes, call your mother to say your failure at legal logic is putting the marginalized in harm’s way. (post her reaction)
