Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

May 19, 2016

Allo, Allo, Google and the Government Can Both Hear You

Filed under: Government,Privacy,Security — Patrick Durusau @ 10:35 am

Google’s Allo fails to use end-to-end encryption by default by Graham Cluley.

The lack of end-to-end encryption by default in Google’s Allo might look like a concession to law enforcement.

Graham points out that, given the choice between no government or Google spying and both government and Google spying, Google chose the latter.

Anyone working on wrappers for apps to encrypt their output and/or to go dark in terms of reporting to the mother ship?

PS: Yes, Allo offers encryption you can “turn on” but will you trust encryption from someone who obviously wants to spy on you? Your call.

May 13, 2016

Receding Trust In Internet Privacy

Filed under: Cybersecurity,Privacy,Security — Patrick Durusau @ 8:49 pm

You may have seen this post on Twitter:

[Image: trust-internet-01-450]

So, what is this:

…single problem that we just can’t seem to solve[?]

The Washington Post headline was even more lurid: Why a staggering number of Americans have stopped using the Internet the way they used to.

The government post releasing the data was somewhat calmer: Lack of Trust in Internet Privacy and Security May Deter Economic and Other Online Activities by Rafi Goldberg.

Rafi writes:

Every day, billions of people around the world use the Internet to share ideas, conduct financial transactions, and keep in touch with family, friends, and colleagues. Users send and store personal medical data, business communications, and even intimate conversations over this global network. But for the Internet to grow and thrive, users must continue to trust that their personal information will be secure and their privacy protected.

NTIA’s analysis of recent data shows that Americans are increasingly concerned about online security and privacy at a time when data breaches, cybersecurity incidents, and controversies over the privacy of online services have become more prominent. These concerns are prompting some Americans to limit their online activity, according to data collected for NTIA in July 2015 by the U.S. Census Bureau. This survey included several privacy and security questions, which were asked of more than 41,000 households that reported having at least one Internet user.

Perhaps the most direct threat to maintaining consumer trust is negative personal experience. Nineteen percent of Internet-using households—representing nearly 19 million households—reported that they had been affected by an online security breach, identity theft, or similar malicious activity during the 12 months prior to the July 2015 survey. Security breaches appear to be more common among the most intensive Internet-using households. For example, while 9 percent of online households that used just one type of computing device (either a desktop, laptop, tablet, Internet-connected mobile phone, wearable device, or TV-connected device) reported security breaches, 31 percent of those using at least five different types of devices suffered this experience (see Figure 1).

No real surprises in the report until you reach:


NTIA’s initial analysis only scratches the surface of this important area, but it is clear that policymakers need to develop a better understanding of mistrust in the privacy and security of the Internet and the resulting chilling effects. In addition to being a problem of great concern to many Americans, privacy and security issues may reduce economic activity and hamper the free exchange of ideas online.

I’m sorry, given that almost 1 out of every 5 households surveyed had suffered from an online security breach, what is there to “…better understand…” about their mistrust?

The Internet, their computers and other online devices, etc., are all insecure.

What seems to be the problem with acknowledging that fact?

It’s misleading for the Washington Post to wave its hands and say this is a “…single problem that we just can’t seem to solve.”

Online services and computers can be made less insecure, but no computer system is completely secure. (Not even the ones used by the NSA. Remember Snowden.)

Nor can computer systems be less insecure without some effort from users.

I know, I know, I’m blaming all those users who get hacked. Teaching users to protect themselves has some chance of a positive outcome. Wringing your hands over poor hacked users whom someone should be protecting has none.

Educate yourself about basic computer security and be careful out there. The number of assholes on the Internet seems to multiply geometrically. Even leaving state actors to one side.

May 12, 2016

OKCupid data and Scientific Censorship

Filed under: Cybersecurity,Privacy,Security — Patrick Durusau @ 2:40 pm

Scientific consent, data, and doubling down on the internet by Oliver Keyes.

From the post:

There is an excellent Tim Minchin song called If You Open Your Mind Too Much, Your Brain Will Fall Out. I’m sad to report that the same is also true of your data and your science.

At this point in the story I’d like to introduce you to Emil Kirkegaard, a self-described “polymath” at the University of Aarhus who has neatly managed to tie every single way to be irresponsible and unethical in academic publishing into a single research project. This is going to be a bit long, so here’s a TL;DR: linguistics grad student with no identifiable background in sociology or social computing doxes 70,000 people so he can switch from publishing pseudoscientific racism to publishing pseudoscientific homophobia in the vanity journal that he runs.

Yeah, it’s just as bad as it sounds.

The Data

Yesterday morning I woke up to a Twitter friend pointing me to a release of OKCupid data, by Kirkegaard. Having now spent some time exploring the data, and reading both public statements on the work and the associated paper: this is without a doubt one of the most grossly unprofessional, unethical and reprehensible data releases I have ever seen.

There are two reasons for that. The first is very simple; Kirkegaard never asked anyone. He didn’t ask OKCupid, he didn’t ask the users covered by the dataset – he simply said ‘this is public so people should expect it’s going to be released’.

This is bunkum. A fundamental underpinning of ethical and principled research – which is not just an ideal but a requirement in many nations and in many fields – is informed consent. The people you are studying or using as a source should know that you are doing so and why you are doing so.

And the crucial element there is “informed”. They need to know precisely what is going on. It’s not enough to simply say ‘hey, I handed them a release buried in a pile of other paperwork and they signed it’: they need to be explicitly and clearly informed.

Studying OKCupid data doesn’t allow me to go through that process. Sure: the users “put it on the internet” where everything tends to end up public (even when it shouldn’t). Sure: the users did so on a site where the terms of service explicitly note they can’t protect your information from browsing. But the fact of the matter is that I work in this field and I don’t read the ToS, and most people have a deeply naive view of how ‘safe’ online data is and how easy it is to backtrace seemingly-meaningless information to a real life identity.

In fact, gathering of the data began in 2014, meaning that a body of the population covered had no doubt withdrawn their information from the site – and thus had a pretty legitimate reason to believe that information was gone – when Kirkegaard published. Not only is there not informed consent, there’s good reason to believe there’s an implicit refusal of consent.

The actual data gathered is extensive. It covers gender identity, sexuality, race, geographic location; it covers BDSM interests, it covers drug usage and similar criminal activity, it covers religious beliefs and their intensity, social and political views. And it does this for seventy thousand different people. Hell, the only reason it doesn’t include profile photos, according to the paper, is that it’d take up too much hard-drive space.

Which nicely segues into the second reason this is a horrifying data dump: it is not anonymised in any way. There’s no aggregation, there’s no replacement-of-usernames-with-hashes, nothing: this is detailed demographic information in a context that we know can have dramatic repercussions for subjects.

This isn’t academic: it’s willful obtuseness from a place of privilege. Every day, marginalised groups are ostracised, excluded and persecuted. People made into the Other by their gender identity, sexuality, race, sexual interests, religion or politics. By individuals or by communities or even by nation states, vulnerable groups are just that: vulnerable.

This kind of data release pulls back the veil from those vulnerable people – it makes their outsider interests or traits clear and renders them easily identifiable to their friends and communities. It’s happened before. This sort of release is nothing more than a playbook and checklist for stalkers, harassers, rapists.

It’s the doxing of 70,000 people for a fucking paper.

I offer no defense for Emil Kirkegaard’s paper, its methods or its conclusions.

I have more sympathy for Oliver’s concerns over consent and anonymised data than for, say, the International Consortium of Investigative Journalists (ICIJ) and their concealment of the details from the Panama Papers, but only just.

It is in the very nature of data “leaks” that no consent is asked of or given by those exposed by the “leak.”

Moreover, anonymised data sounds suspiciously like ICIJ saying they can protect the privacy of the “innocents” in the Panama Papers leak.

I don’t know, hiding from the tax man doesn’t raise a presumption of innocence to me. You?

Someone has to decide who the “innocents” are, or who merits the protection of anonymised data. To claim either one means you have someone in mind to fill that august role.

In our gender-skewed academic systems, would that be your more than likely male department head?

My caveat to Oliver’s post is that even with good intentions, the power to censor data releases is a very dangerous one. One that reinforces the power of those who possess it.

The less dangerous strategy is to teach users that if information is recorded, it will be leaked. Perhaps not today, maybe not tomorrow, but certainly by the day after that.

Choose what information you record carefully.

May 9, 2016

Who Is Special Agent Mark W. Burnett? (FBI)

Filed under: FBI,Government,Privacy,Tor — Patrick Durusau @ 10:38 am

In FBI Harassment, Tor developer isis agora lovecruft describes a tale of FBI harassment that begins with this business card:

[Image: burnett-fbi business card]

The card was left while no one was at home. At best, a business card is a weak indicator of a visitor’s identity; it was later confirmed, in various conversations between counsel and the FBI, that Mark W. Burnett had visited. See the original post for the harassment story.

What can we find out about Special Agent Mark W. Burnett? Reasoning that if the FBI is watching us, we damned sure better be watching them.

The easiest thing to find is that Mark W. Burnett isn’t a “special agent in charge,” as per the FBI webpage for the Los Angeles office. A “special agent in charge” is a higher “rank” than a “special agent.”

Turning to Google, here’s a screenshot of my results:

[Image: burnett-google search results]

The first two “hits” are the same Special Agent Mark W. Burnett (the second one requires a password) but the first one says in relevant part:

Special Luncheon Speaker – Mr. Mark W. Burnett, FBI Cyber Special Agent, who will discuss the Bureau’s efforts regarding cyber security measures

The event was:

3rd Annual West Coast Cyber Security Summit
Special Report on Cyber Technology and Its Impact on the Banking Community
The California Club
538 South Flower Street, Los Angeles, CA 90071
Tuesday, May 13, 2014

If you don’t know the California Club, as the song says “…you aren’t supposed to be here.”

So we know that Mark W. Burnett was working for the FBI in May of 2014.

The third “hit” is someone who says they know a Mark W. Burnett but it doesn’t go any further than that.

The last two “hits” are interesting because they both point to the Congressional Record on February 1, 2010, wherein the Senate confirms the promotion of a “Mark. W. Burnett” to the rank of colonel in the United States Army.

I searched U.S. District Court decisions at Justia but could not find any cases where Mark W. Burnett appeared.

The handwritten “desk phone” detracts from the professionalism of the business card. It also indicates that Mark hasn’t been in the Los Angeles office long enough to get better cards.

What do you know about Special Agent Mark W. Burnett?

PS: There are hundreds of FBI agents from Los Angeles on LinkedIn but Mark W. Burnett isn’t one of them. At least not by that name.

May 6, 2016

Electronic Frontier Foundation (EFF) 2015 Annual Report – (Highly Summarized)

Filed under: Electronic Frontier Foundation,Privacy — Patrick Durusau @ 1:14 pm

Electronic Frontier Foundation (EFF) 2015 Annual Report

If you have ever read an annual report, from any organization, you remember it as a stultifying experience. You could sense your life force ebbing away. 😉

To save you from a similar experience with the Electronic Frontier Foundation (EFF) 2015 Annual Report, I’ll hit high points in their own words:

Technology

Let’s Encrypt

A free, automated, and open certificate authority (CA), run for the public’s benefit, puts a secure Internet within reach.

Privacy Badger

Our browser extension, which automatically blocks hidden trackers that would otherwise spy on your web browsing habits, leaves beta.

Panopticlick

The latest version of our tracking and fingerprinting detection tool includes new tests, updating its ability to uniquely identify browsers with current techniques.

Activism

USA Freedom

After more than two years of work in the wake of the Snowden revelations, this bill’s passage marks the first significant reform on NSA surveillance in over 30 years.

Who Has Your Back?

Our yearly report—which documents the practices of major Internet companies and service providers, judges their publicly available policies, and highlights best practices—goes global.

Street Level Surveillance

Our new Web portal is loaded with comprehensive, easy-to-access information on police spying tools like license plate readers, biometric collection devices, and “Stingrays.”

Law

NSA Cases

EFF fights unconstitutional gag orders on behalf of clients forced to remain anonymous.

Save Podcasting

EFF successfully challenged the bogus podcasting patent owned by Personal Audio LLC.

ECPA

California is now the largest state to adopt digital privacy protections including both the content of messages and location data.

DMCA Exemptions

In the U.S. Copyright Office’s latest triennial rulemaking, EFF requested—and secured—6 anti-circumvention exemptions in 4 different categories.

Net Neutrality

Title II reclassification drew bright-line rules to protect the open Internet.

All of which is to say:

Join the EFF today!

Two hundred and ninety-eight words down to that last “!”

What more needs to be said?

April 29, 2016

Privacy Protects Murderers

Filed under: Journalism,News,Privacy,Reporting — Patrick Durusau @ 9:06 pm

What a broad shadow “privacy” can cast.

A week or so ago, in Keeping Panama Papers Secret? Law Firms, Journalists and Privacy, I pointed out the specious “we’re protecting privacy” claims of Suddeutsche Zeitung.

Now, the United States cites “privacy concerns” in not revealing the identities of sixteen military personnel who murdered 42 people and wounded 37 others in an attack on a Doctors Without Borders (MSF) hospital in Afghanistan last year. US: Afghan MSF hospital air strike was not a war crime

The acts of the aircraft crews may not be war crimes; they could only act on the information given to them by others. But the casual indifference that produced the wholly inadequate information systems on which they relied certainly could support command-level charges of war crimes.

Moving war crimes charges up the chain of command could well result in much needed accountability.

But, like the case with Suddeutsche Zeitung, accountability is something that is desired for others. Never for those calling upon privacy.

April 28, 2016

U.S. Government Surveillance Breeds Meekness, Fear and Self-Censorship [Old News]

Filed under: Cybersecurity,Government,Privacy,Security — Patrick Durusau @ 7:45 pm

New Study Shows Mass Surveillance Breeds Meekness, Fear and Self-Censorship by Glenn Greenwald.

From the post:

A newly published study from Oxford’s Jon Penney provides empirical evidence for a key argument long made by privacy advocates: that the mere existence of a surveillance state breeds fear and conformity and stifles free expression. Reporting on the study, the Washington Post this morning described this phenomenon: “If we think that authorities are watching our online actions, we might stop visiting certain websites or not say certain things just to avoid seeming suspicious.”

The new study documents how, in the wake of the 2013 Snowden revelations (of which 87% of Americans were aware), there was “a 20 percent decline in page views on Wikipedia articles related to terrorism, including those that mentioned ‘al-Qaeda,’ “car bomb’ or ‘Taliban.’” People were afraid to read articles about those topics because of fear that doing so would bring them under a cloud of suspicion. The dangers of that dynamic were expressed well by Penney: “If people are spooked or deterred from learning about important policy matters like terrorism and national security, this is a real threat to proper democratic debate.”

As the Post explains, several other studies have also demonstrated how mass surveillance crushes free expression and free thought. A 2015 study examined Google search data and demonstrated that, post-Snowden, “users were less likely to search using search terms that they believed might get them in trouble with the US government” and that these “results suggest that there is a chilling effect on search behavior from government surveillance on the Internet.”

While I applaud Greenwald and others who are trying to expose the systematic dismantling of civil liberties in the United States, at least as enjoyed by the privileged, the breeding of meekness, fear and self-censorship is hardly new.

Meekness, fear and self-censorship are especially not new to the non-privileged.

Civil Rights:

Many young activists of the 1960s saw their efforts as a new departure and themselves as a unique generation, not as actors with much to learn from an earlier, labor-infused civil rights tradition. Persecution, censorship, and self-censorship reinforced that generational divide by sidelining independent black radicals, thus whitening the memory and historiography of the Left and leaving later generations with an understanding of black politics that dichotomizes nationalism and integrationism.

The Long Civil Rights Movement and the Political Uses of the Past by Jacquelyn Dowd Hall, at page 1253.

Communism:

Those who might object to a policy that is being defended on the grounds that it is protecting threats to the American community may remain silent rather than risk isolation. Arguably, this was the greatest long-term consequence of McCarthyism. No politician thereafter could be seen to be soft on Communism, so that America could slide, almost by consensus, into a war against Vietnamese communists without rigorous criticism of successive administrations’ policies ever being mounted. Self-censoring of political and social debate among politicians and others can act to counter the positive effects of the country’s legal rights of expression.

Political Conflict in America by Alan Ware, pages 63-64.

The breeding of meekness, fear and self-censorship has long been a tradition in the United States. A tradition far older than the Internet.

A tradition that was enforced by fear of loss of employment, social isolation, loss of business.

You may recall in Driving Miss Daisy when her son (Boolie) worries about not getting invited to business meetings if he openly supports Dr. Martin Luther King. You may mock Boolie now, but that was a day-to-day reality. Still is, most places.

How to respond?

Supporting Wikileaks, Greenwald and other journalists is a start towards resisting surveillance, but don’t take it as a given that journalists will be able to preserve free expression for all of us.

As a matter of fact, journalists have been shown to be as reticent as the non-privileged:


Even the New York Times, the most aggressive news organization throughout the year of investigations, proved receptive to government pleas for secrecy. The Times refused to publicize President Ford’s unintentional disclosure of assassination plots. It joined many other papers in suppressing the Glomar Explorer story and led the editorial attacks on the Pike committee and on Schorr. The real question, as Tom Wicker wrote in 1978, is not “whether the press had lacked aggressiveness in challenging the national-security mystique, but why?” Why, indeed, did most journalists decide to defer to the administration instead of pursuing sensational stories?

Challenging the Secret Government by Kathryn S. Olmsted, at page 183.

You may have noticed the lack of national press organs in the United States challenging the largely fictional “war on terrorism.” There is the odd piece, You’re more likely to be fatally crushed by furniture than killed by a terrorist by Andrew Shaver, but those are easily missed in the maelstrom of unquestioning coverage of any government press release on terrorism.

My suggestion? Don’t be meek, fearful or self-censor. Easier said than done but every instance of meekness, fearfulness or self-censorship, is another step towards the docile population desired by governments and others.

Let’s disappoint them together.

April 25, 2016

Anonymity and Privacy – Lesson 1

Filed under: Cybersecurity,Privacy,Security — Patrick Durusau @ 3:10 pm

“Welcome to ‘How to Triforce’ advanced”

Transcript of the first OnionIRC class on anonymity and privacy.

From the introduction:

Welcome to the first of (hopefully) many lessons to come here on the OnionIRC, coming to you live from The Onion Routing network! This lesson is an entry-level course on Anonymity and Privacy.

Some of you may be wondering why we are doing this. What motivates us? Some users have shown concern that this network might be “ran by the feds” and other such common threads of discussion in these dark corners of the web. I assure you, our goal is to educate. And I hope you came to learn. No admin here will ask you something that would compromise your identity nor ask you to do anything illegal. We may, however, give you the tools and knowledge necessary to commit what some would consider a crime. (Shout out to all the prisons out there that need a good burning!) What you do with the knowledge you obtain here is entirely your business.

We are personally motivated to participate in this project for various reasons. Over the last five years we have seen the numbers of those aligning with Anonymous soaring, while the average users’ technical knowhow has been on the decline. The average Anonymous “member” believes that DDoS and Twitter spam equates to hacking & activism, respectively. While this course is not covering these specific topics, we think this is a beginning to a better understanding of what “hacktivism” is & how to protect yourself while subverting corrupt governments.

Okay, enough with the back story. I’m sure you are all ready to start learning.

An important but somewhat jumpy discussion of OpSec (Operational Security) occurs between time marks 0:47 and 1:04.

Despite what you read in these notes, you have a substantial advantage over the NSA or any large organization when it comes to Operational Security.

You and you alone are responsible for your OpSec.

All large organizations, including the NSA, are vulnerable through employees (current/former), contractors (current/former), oversight committees, auditors, public recruitment, etc. They all leak, some more than others.

Given the NSA’s footprint, you should have better than NSA-grade OpSec from the outset. If you don’t, you need a safer hobby. Try binging on X-Files reruns.

The chat is informative, sometimes entertaining, and tosses out a number of useful tidbits but you will get more details out of the notes.

Enjoy!

April 12, 2016

Anonymous Chat Service

Filed under: Cybersecurity,Encryption,Government,Privacy,Security,Tor — Patrick Durusau @ 7:43 pm

From the description:

The continued effort of governments around the globe to censor our seven sovereign seas has not gone unnoticed. This is why we, once again, raise our Anonymous battle flags to expose their corruption and disrupt their surveillance operations. We are proud to present our new chat service residing within the remote island coves of the deep dark web. The OnionIRC network is designed to allow for full anonymity and we welcome any and all to use it as a hub for anonymous operations, general free speech use, or any project or group concerned about privacy and security looking to build a strong community. We also intend to strengthen our ranks and arm the current and coming generations of internet activists with education. Our plan is to provide virtual classrooms where, on a scheduled basis, ‘teachers’ can give lessons on any number of subjects. This includes, but is not limited to: security culture, various hacking/technical tutorials, history lessons, and promoting how to properly utilize encryption and anonymity software. As always, we do not wish for anyone to rely on our signal alone. As such, we will also be generating comprehensible documentation and instructions on how to create your own Tor hidden-service chat network in order to keep the movement decentralized. Hackers, activists, artists and internet citizens, join us in a collective effort to defend the internet and our privacy.

Come aboard or walk the plank.

We are Anonymous,
we’ve been expecting you.

Protip: This is not a website, it’s an IRC chat server. You must use an IRC chat client to connect. You cannot connect simply through a browser.

Some popular IRC clients are: irssi, weechat, hexchat, mIRC, & many more https://en.wikipedia.org/wiki/Compari…

Here is an example guide for connecting with Hexchat: https://ghostbin.com/paste/uq7bt/raw

To access our IRC network you must be connecting through the Tor network! https://www.torproject.org/

Either download the Tor browser or install the Tor daemon, then configure your IRC client’s proxy settings to pass through Tor or ‘torify’ your client depending on your setup.

If you are connecting to Tor with the Tor browser, keep in mind that the Tor browser must be open & running for you to pass your IRC client through Tor.

How you configure your client to pass through Tor will vary depending on the client.
Hostname: onionirchubx5363.onion

Port: 6667 No SSL, but don’t worry! Tor connections to hidden-services are end-to-end encrypted already! Thank you based hidden-service gods!

In the near future we will be releasing some more extensive client-specific guides and how-to properly setup Tor for transparent proxying (https://trac.torproject.org/projects/…) & best use cases.
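If you are curious what “configure your IRC client’s proxy settings to pass through Tor” actually means on the wire, here is a minimal sketch. A torified client sends a SOCKS5 CONNECT request to Tor’s local proxy with the hostname unresolved, which is why the .onion name never hits your local DNS. This is illustrative only (the field layout comes from RFC 1928); your IRC client does this for you.

```python
def socks5_connect_request(host: str, port: int) -> bytes:
    """Build a SOCKS5 CONNECT request (RFC 1928) for a hostname.

    ATYP 0x03 (domain name) hands the hostname to the proxy unresolved,
    so a .onion address is resolved inside Tor, not by your DNS resolver.
    """
    name = host.encode("ascii")
    # VER=0x05, CMD=0x01 (CONNECT), RSV=0x00, ATYP=0x03, then len + name + port
    return bytes([0x05, 0x01, 0x00, 0x03, len(name)]) + name + port.to_bytes(2, "big")

req = socks5_connect_request("onionirchubx5363.onion", 6667)
print(req.hex())
```

A client “passing through Tor” sends requests like this to Tor’s local SOCKS port (commonly 127.0.0.1:9050 for the daemon, 9150 for the Tor Browser bundle) instead of opening a direct connection, which is also why the Tor browser or daemon must be running first.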

This is excellent news!

With more good news promised in the near future (watch the video).

Go dark, go very dark!

An Introduction to Threat Modeling

Filed under: Cybersecurity,Government,Privacy — Patrick Durusau @ 2:09 pm

An Introduction to Threat Modeling

From the post:

There is no single solution for keeping yourself safe online. Digital security isn’t about which tools you use; rather, it’s about understanding the threats you face and how you can counter those threats. To become more secure, you must determine what you need to protect, and whom you need to protect it from. Threats can change depending on where you’re located, what you’re doing, and whom you’re working with. Therefore, in order to determine what solutions will be best for you, you should conduct a threat modeling assessment.

The five questions in the assessment:

  1. What do you want to protect?
  2. Who do you want to protect it from?
  3. How likely is it that you will need to protect it?
  4. How bad are the consequences if you fail?
  5. How much trouble are you willing to go through in order to try to prevent those?

are useful whether you are discussing cyber, physical or national security.
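The five questions can even be jotted down as a rough scoring sketch. To be clear, the field names, sample threats and 1-5 scales below are my own illustrative assumptions, not part of the EFF’s guide:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    asset: str       # 1. What do you want to protect?
    adversary: str   # 2. Who do you want to protect it from?
    likelihood: int  # 3. How likely is it? (1-5, my assumed scale)
    impact: int      # 4. How bad are the consequences? (1-5)
    effort: int      # 5. How much trouble will you go through? (1-5)

def priority(t: Threat) -> int:
    # Crude expected-harm rank: likelihood times impact. Question 5 then
    # tells you which high-ranked threats you are actually willing to address.
    return t.likelihood * t.impact

threats = [
    Threat("banking credentials", "phishers", likelihood=4, impact=4, effort=3),
    Threat("source identities", "state actors", likelihood=2, impact=5, effort=5),
    Threat("browsing history", "ad trackers", likelihood=5, impact=2, effort=1),
]
for t in sorted(threats, key=priority, reverse=True):
    print(f"{t.asset}: {priority(t)}")
```

The point of the sketch is not the arithmetic but the discipline: forcing yourself to write down an asset, an adversary and honest likelihood/impact estimates before buying or installing anything.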

Assuming, that is, you accept the proposition that a “…no sparrow shall fall…” security system is literally impossible.

In the light of terrorist attacks, talking heads call for them to “…never happen again….” Nonsense. Of course terror attacks will happen again, no matter what counter-measures are taken.

Consider bank robberies, for instance. We know where all the banks are located, so there is never any question of where bank robberies will take place. But, given other values, such as customer convenience, it isn’t possible to prevent all bank robberies.

There is an acceptable rate of bank robbery and security measures keep it roughly at that rate.

The same is true for cyber, physical or national security.

This threat assessment exercise will help you create a fact-based assessment of your risk and the steps you take to counter it.

Better a fact-based assessment than the talking head variety.

I first saw this in a tweet by the EFF.

California Surveillance Sweep – Official News

Filed under: Government,Privacy — Patrick Durusau @ 1:31 pm

As I predicted in California Surveillance Sweep – Success!, a preliminary report by Dave Maass, Here are 79 California Surveillance Tech Policies. But Where Are the Other 90?, outlines the success:

Laws are only as strong as their enforcement.

That’s why last weekend more than 30 citizen watchdogs joined EFF’s team to hold California law enforcement and public safety agencies accountable. Together, we combed through nearly 170 California government websites to identify privacy and usage policies for surveillance technology that must now be posted online under state law.

You can tell from the headline that some 90 websites are missing surveillance policies required by law.

See Dave’s post for early analysis of the results, more posts to follow on the details.

This crowd-sourcing was an experiment for the EFF and I am hopeful they will provide similar opportunities to participate in the future.

Age has made me less useful at the barricades but I can still wield a keyboard. It was a real joy to contribute to such a cause.

Along those lines, consider joining the Electronic Frontier Alliance:

a new network we’ve [EFF] launched to increase grassroots activism on digital civil liberties issues around the country

Most of my readers have digital skills oppressors only dream about.

It’s up to you where you put them to work.

April 11, 2016

Knights of Ignorance (Burr and Feinstein) Hold Tourney With No Opponents

Filed under: Cryptography,Government,Journalism,News,Privacy,Reporting — Patrick Durusau @ 8:27 pm

Burr And Feinstein Plan One Sided Briefing For Law Enforcement To Bitch About ‘Going Dark’ by Mike Masnick.

From the post:

With the world mocking the sheer ignorance of their anti-encryption bill, Senators Richard Burr and Dianne Feinstein are doubling down by planning a staff “briefing” on the issue of “going dark” with a panel that is made up entirely of law enforcement folks. As far as we can tell, it hasn’t been announced publicly, but an emailed announcement was forwarded to us, in which they announce the “briefing” (notably not a “hearing“) on “barriers to law enforcement’s ability to lawfully access the electronic evidence they need to identify suspects, solve crimes, exonerate the innocent and protect communities from further crime.” The idea here is to convince others in Congress to support their ridiculous bill by gathering a bunch of staffers and scaring them with bogeyman stories of “encryption caused a crime wave!” As such, it’s no surprise that the panelists aren’t just weighted heavily in one direction, they’re practically flipping the boat. Everyone on the panel comes from the same perspective, and will lay out of the argument for “encryption bad!”

An upside to the approaching farce is that it identifies the people who claim to possess “facts” supporting the “encryption bad” position.

Given fair warning of their identities, what can you say about these “witnesses?”

Do you think some enterprising reporter will press them for detailed facts and not illusory hand waving? (I realize Senators are never pressed, not really, for answers. Reporters want the next interview. But these witnesses aren’t Senators.)

For example, Hillar C. Moore, III, has campaigned for a misdemeanor jail to incarcerate traffic offenders in order to lower violent crime.

“He said Wednesday that he believes the jail is an urgent public safety tool that could lower violent crime in the city. “This summer, we didn’t have the misdemeanor jail, and while it’s not responsible for every murder, this is responsible for the crime rate being slightly higher,” Moore said. “Baton Rouge could have done better than other cities, but we missed out on that. It’s time for everyone to get on board and stop looking the other way.”

Moore’s office asked the East Baton Rouge Parish Metro Council in recent weeks for authorization to use dedicated money to open a misdemeanor jail on a temporary basis, two weeks at a time for the next several months, to crack down on repeat offenders who refuse to show up in court.

The request was rejected by the council, after opponents accused law enforcement officials of using the jail to target nonviolent, low-income misdemeanor offenders as a way to shake them down for money for the courts. More than 60 percent of misdemeanor warrants are traffic-related offenses, and critics angrily took issue with a proposal that potentially could result in jailing traffic violators.”

Evidence and logic aren’t Hillar’s strong points.

That’s one fact about one of the prospective nut-job witnesses.

What’s your contribution to discrediting this circus of fools?

April 10, 2016

California Surveillance Sweep – Success!

Filed under: Government,Privacy — Patrick Durusau @ 8:57 am

I mentioned the California Surveillance Sweep effort in Walking the Walk on Privacy.

Just a reminder:

Join EFF on Saturday, April 9 for a first-of-its-kind crowdsourcing campaign to hold California law enforcement agencies accountable for their use of surveillance technologies.

Volunteers like you will help us track down the privacy and usage policies of law enforcement agencies across California and add them to our database. We’ll show you how to do it, and you can be anywhere with an Internet connection to participate.

That was yesterday and I got word this morning that the effort was a complete success!

The EFF will be announcing more details but I wanted to give a quick shout out to everyone who participated in this effort!

It isn’t a hammer strike against the forces of darkness but then successful resistance rarely has that luxury.


The project design made participation easy; elements worth repeating in future projects include:

  • Each volunteer was sent a set of small tasks which could be completed in a short period of time. Data entry immediately followed each task, generating a sense of accomplishment.
  • The data entry form was short and well labeled and designed.
  • Multiple volunteers got the same tasks to enable cross-checking.
  • Users chose one-time handles for this project only, encouraging a sense of being in the “resistance.” We’re all human and grouping is something that comes naturally to us. Even with anonymous others in a common cause. Encourage that feeling.

Every project will be different but those principles are the ones I observed in operation in the California Surveillance Sweep.
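The small-tasks and cross-checking principles above are easy to mechanize. Here is a minimal sketch in Python; the volunteer handles and agency names are invented for illustration, and it assumes the redundancy level never exceeds the number of volunteers:

```python
import itertools

# Invented volunteers and tasks; each task goes to `redundancy`
# distinct volunteers so their data entries can be cross-checked.
volunteers = ["v1", "v2", "v3", "v4"]
tasks = ["agency_a", "agency_b", "agency_c", "agency_d", "agency_e"]

def assign(tasks, volunteers, redundancy=2):
    # Round-robin over volunteers keeps the workload balanced.
    pool = itertools.cycle(volunteers)
    assignment = {}
    for task in tasks:
        picked = []
        while len(picked) < redundancy:
            v = next(pool)
            if v not in picked:  # never the same volunteer twice per task
                picked.append(v)
        assignment[task] = picked
    return assignment

plan = assign(tasks, volunteers)
```

Because assignment is round-robin, each agency lands with two different volunteers and no volunteer is overloaded relative to the others.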

Others?

April 5, 2016

NSA-grade surveillance software: IBM i2 Analyst’s Notebook (Really?)

Filed under: Government,Graphs,Neo4j,NSA,Privacy,Social Networks — Patrick Durusau @ 8:20 pm

I stumbled across Revealed: Denver Police Using NSA-Grade Surveillance Software which had this description of “NSA-grade surveillance software…:”


Intelligence gathered through Analyst’s Notebook is also used in a more active way to guide decision making, including with deliberate targeting of “networks” which could include loose groupings of friends and associates, as well as more explicit social organizations such as gangs, businesses, and potentially political organizations or protest groups. The social mapping done with Analyst’s Notebook is used to select leads, targets or points of intervention for future actions by the user. According to IBM, the i2 software allows the analyst to “use integrated social network analysis capabilities to help identify key individuals and relationships within networks” and “aid the decision-making process and optimize resource utilization for operational activities in network disruption, surveillance or influencing.” Product literature also boasts that Analyst’s Notebook “includes Social Network Analysis capabilities that are designed to deliver increased comprehension of social relationships and structures within networks of interest.”

Analyst’s Notebook is also used to conduct “call chaining” (show who is talking to who) and analyze telephone metadata. A software extension called Pattern Tracer can be used for “quickly identifying potential targets”. In the same vein, the Esri Edition of Analyst’s Notebook integrates powerful geo-spatial mapping, and allows the analyst to conduct “Pattern-of-Life Analysis” against a target. A training video for Analyst’s Notebook Esri Edition demonstrates the deployment of Pattern of Life Analysis in a military setting against an example target who appears appears to be a stereotyped generic Muslim terrorism suspect:

Perhaps I’m overly immune to IBM marketing pitches but I didn’t see anything in this post that could not be done with Python, R and standard visualization techniques.
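To illustrate the point, the “identify key individuals” and “call chaining” capabilities quoted above reduce to elementary graph operations. A sketch in plain Python, over invented call metadata (no IBM software required):

```python
from collections import defaultdict, deque

# Invented call metadata: (caller, callee) pairs of the kind a
# cell carrier would hold.
calls = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("carol", "dave"), ("dave", "eve"), ("eve", "frank"),
    ("carol", "eve"),
]

# Build an undirected adjacency list.
graph = defaultdict(set)
for a, b in calls:
    graph[a].add(b)
    graph[b].add(a)

# "Identify key individuals": highest degree = most contacts.
key_person = max(graph, key=lambda n: len(graph[n]))

# "Call chaining": breadth-first search finds everyone within
# a given number of hops of a target.
def within_hops(start, max_hops):
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue
        for nbr in graph[node]:
            if nbr not in seen:
                seen[nbr] = seen[node] + 1
                queue.append(nbr)
    return seen

chain = within_hops("alice", 2)
```

With real data you would reach for networkx and a plotting library, but the analysis itself is this simple: degree counts and breadth-first search.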

I understand that IBM markets the i2 Analyst’s Notebook (and training too) as:

…deliver[ing] timely, actionable intelligence to help identify, predict, prevent and disrupt criminal, terrorist and fraudulent activities.

to a reported tune of over 2,500 organizations worldwide.

However, bear in mind that the software alone isn’t delivering that value; the analyst, the right data, and the IBM software together are. That is, the software is at best one third of what is required for meaningful results.

That insight seems to have gotten lost in IBM’s marketing pitch for the i2 Analyst’s Notebook and its use by the Denver police.

But to be fair, I have included, below the horizontal bar, the complete list of features for the i2 Analyst’s Notebook.

Do you see any that can’t be duplicated with standard software?

I don’t.

That’s another reason to object to the Denver Police falling into the clutches of maintenance agreements/training on software that is likely irrelevant to their day to day tasks.


IBM® i2® Analyst’s Notebook® is a visual intelligence analysis environment that can optimize the value of massive amounts of information collected by government agencies and businesses. With an intuitive and contextual design it allows analysts to quickly collate, analyze and visualize data from disparate sources while reducing the time required to discover key information in complex data. IBM i2 Analyst’s Notebook delivers timely, actionable intelligence to help identify, predict, prevent and disrupt criminal, terrorist and fraudulent activities.

i2 Analyst’s Notebook helps organizations to:

Rapidly piece together disparate data

Identify key people, events, connections and patterns

Increase understanding of the structure, hierarchy and method of operation

Simplify the communication of complex data

Capitalize on rapid deployment that delivers productivity gains quickly

Be sure to leave a comment if you see “NSA-grade” capabilities. We would all like to know what those are.

March 30, 2016

Walking the Walk on Privacy

Filed under: Government,Privacy,Security — Patrick Durusau @ 4:26 pm

Many people grumble about government surveillance but how many do you know who have taken concrete steps to combat that surveillance?

That many. Huh.

Sounds like government surveillance has and will maintain the upper hand.

Unless, the people under surveillance organize to do something about it.

The Electronic Frontier Foundation (EFF) is organizing an effort to enable you, yes you, to do exactly that!

California Surveillance Sweep

From the post:

Join EFF on Saturday, April 9 for a first-of-its-kind crowdsourcing campaign to hold California law enforcement agencies accountable for their use of surveillance technologies.

Volunteers like you will help us track down the privacy and usage policies of law enforcement agencies across California and add them to our database. We’ll show you how to do it, and you can be anywhere with an Internet connection to participate.

What: California Surveillance Sweep

Date: Saturday, April 9

Time: 12 pm – 4 pm PT

Where: Anywhere (virtual participation); San Francisco (details TBD)

I bitch as much about privacy as anyone and have any number of unsound suggestions in that regard.

This effort by the EFF is a low-risk effort to hoist the surveillance state on its own laws.

Given the propensity for national law enforcement to lie I’m not betting on state and local law enforcement being any more truthful.

Still, you can’t say you haven’t exhausted all traditional remedies unless you have.

I signed up.

Will you?

March 11, 2016

How-To Defeat Analysis of Seized Cellphones

Filed under: Government,Privacy — Patrick Durusau @ 5:51 pm

Jason Hernandez commented on my post FOIA Confirms Lawless Nature of FBI Sky Spies [Dark Art/Dark Clouds] and the use of Cellebrite to extract data from seized cellphones.

Here is a visualization of you and the data on your “regular” cellphone:

[Image: “An Austin parcels tractor” (Wikimedia Commons)]

Unsolicited advice:

Never appear public with anything but a clean, unused burner phone.

Call forwarding and burner phones are too cheap for you to drag a parcel-tractor load of electronic baggage around with you.

That doesn’t help with cellphones you use in secure locations but it’s a start in the right direction.

March 10, 2016

FOIA Confirms Lawless Nature of FBI Sky Spies [Dark Art/Dark Clouds]

Filed under: Cybersecurity,FBI,Government,Privacy — Patrick Durusau @ 5:32 pm

FOIA Confirms Lawless Nature of FBI Sky Spies

From the post:

The Electronic Frontier Foundation (EFF) released documents received in response to a Freedom of Information Act lawsuit that confirm the use of cell-site simulators in surveillance aircraft and the shocking disregard for oversight or regulatory policy what-so-ever. The federal government is flying spy-planes over US soil as the EFF put it, “without any policies or legal guidance.” North Star Post has been reporting on these activities since our founding following the independent disclosure of FBI operated domestic aerial surveillance on May 26th, 2015.

The EFF reports: the FBI’s “first successful airborne geolocation mission involving cellular technology” apparently occurred sometime in 2009, even as late as April 2014 lawyers from the FBI’s Office of General Counsel were discussing the need to develop a “coordinated policy” and “determine any legal concerns.”

NSP most prominently reported on the FBI evasion of established policy in regards to warrants for the use of cell-site simulator deployment in October of last year.

Aircraft have been identified as part of the FBI, DEA, DHS and other fleets, with many aircraft flying on a daily basis. The fleet is predominantly single-engine Cessna aircraft, with most flying 4-5 hours in looped patterns and circles with a radius of 2-2.5 miles. The 2+ mile figure is most likely the range of the DRT box although this has yet to be substantiated by government documents.

More details at the post will help you with tracking these planes and other details.

Security Syllogism:

All software/hardware have vulnerabilities.

DRT boxes are hardware and software.

Therefore, DRT boxes have vulnerabilities.

Yes? It’s been a while but I think that works.
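It does work; this is the classic Barbara syllogism, and it type-checks directly. A sketch in Lean 4 (the predicate names are mine):

```lean
-- All software/hardware have vulnerabilities; DRT boxes are
-- software/hardware; therefore DRT boxes have vulnerabilities.
example (Device : Type)
    (SoftwareOrHardware Vulnerable DRTBox : Device → Prop)
    (all_vuln : ∀ d, SoftwareOrHardware d → Vulnerable d)
    (drt_is_sw : ∀ d, DRTBox d → SoftwareOrHardware d) :
    ∀ d, DRTBox d → Vulnerable d :=
  fun d hd => all_vuln d (drt_is_sw d hd)
```

The only premise open to dispute is the first one, and decades of CVE listings argue for it.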

While tracking airplanes and complaining about illegal law enforcement activity is useful, how much more useful would be vulnerabilities in DRT boxes?

DRT boxes promiscuously accept input, always a bad starting point for any hardware/software.

It could be as simple as building a directional “fake” cellphone that overloads the DRT box with noise.

Experts who have access to or who liberate DRT boxes can no doubt provide better advice than I can.

But on the whole, I’m not inclined to trust lawbreakers who, having been caught, plead that they can now be trusted to follow the rules, still without any oversight.

That just strikes me as wholly implausible if not idiotic. The best defense is a good offense.

North Star Post has started a series on aerial surveillance: Part 1.

If you don’t know North Star Post (I didn’t), you should check them out. Follow @NStarPost.

I have no connections with North Star Post but consider it a public service to recommend you follow useful accounts, even ones that aren’t mine.

PS: If you do run across hacking information for DRT boxes, please post and/or re-post prominently. It’s not so much a matter that I become aware of it but that the public at large is enabled to defend itself.

March 8, 2016

FBI Has More Privacy Than Average US Citizen

Filed under: FBI,Government,Privacy — Patrick Durusau @ 4:20 pm

FBI quietly changes its privacy rules for accessing NSA data on Americans by Spencer Ackerman.

From the post:

The FBI has quietly revised its privacy rules for searching data involving Americans’ international communications that was collected by the National Security Agency, US officials have confirmed to the Guardian.

The classified revisions were accepted by the secret US court that governs surveillance, during its annual recertification of the agencies’ broad surveillance powers. The new rules affect a set of powers colloquially known as Section 702, the portion of the law that authorizes the NSA’s sweeping “Prism” program to collect internet data. Section 702 falls under the Foreign Intelligence Surveillance Act (Fisa), and is a provision set to expire later this year.

Spencer’s report is marred by what it can’t state:


But the PCLOB’s new compliance report, released on Saturday, found that the administration has submitted “revised FBI minimization procedures” that address at least some of the group’s concerns about “many” FBI agents who use NSA-gathered data.

“Changes have been implemented based on PCLOB recommendations, but we cannot comment further due to classification,” said Christopher Allen, a spokesman for the FBI.

Sharon Bradford Franklin, a spokesperson for the PCLOB, said the classification prevented her from describing the rule changes in detail, but she said they move to enhance privacy. She could not say when the rules actually changed – that, too, is classified.

“They do apply additional limits” to the FBI, Franklin said.

Timothy Barrett, a spokesman for the office of the director of national intelligence, also confirmed the change to FBI minimization rules.

We know how trustworthy government has proven itself to be, Pentagon Papers, Watergate, Iran-Contra, the Afghan War Diaries, the Snowden leaks, just to hit the highlights.

Here is what Snowden said was being collected:

[Image: PRISM Collection Details slide]

By National Security Agency, original image | source, Public Domain, https://commons.wikimedia.org/w/index.php?curid=26526602

So where is the danger in limiting the FBI (to pick one example at random) from monitoring all chats from New York state to overseas locations? Such a limit only means the FBI has to have some cause to invade the privacy of a given individual.

It doesn’t say what cause, doesn’t say which individual.

What privacy for the FBI does do is conceal incompetence and waste of resources, and perpetuate a lack of effective outside oversight of the FBI.

Otherwise the FBI would not have to recruit the mentally ill to carry out terrorist preparations at the FBI’s behest. It would have real, non-FBI sponsored terrorists to arrest.

Now there’s a category for terrorists: non-FBI sponsored terrorists.

Is anyone doing data mining on FBI “terrorist” arrests?

V Sign Biometrics [Building Privacy Zones a/k/a Unobserved Spaces]

Filed under: Biometrics,Identity,Privacy — Patrick Durusau @ 2:58 pm

Machine-Learning Algorithm Aims to Identify Terrorists Using the V Signs They Make

From the post:

Every age has its iconic images. One of the more terrifying ones of the 21st century is the image of a man in desert or army fatigues making a “V for victory” sign with raised arm while standing over the decapitated body of a Western victim. In most of these images, the perpetrator’s face and head are covered with a scarf or hood to hide his identity.

That has forced military and law enforcement agencies to identify these individuals in other ways, such as with voice identification. This is not always easy or straightforward, so there is significant interest in finding new ways.

Today, Ahmad Hassanat at Mu’tah University in Jordan and a few pals say they have found just such a method. These guys say they have worked out how to distinguish people from the unique way they make V signs; finger size and the angle between the fingers is a useful biometric measure like a fingerprint.

The idea of using hand geometry as a biometric indicator is far from new. Many anatomists have recognized that hand shape varies widely between individuals and provides a way to identify them, if the details can be measured accurately. (emphasis in original)

The review notes this won’t give you personal identity but would have to be combined with other data.

Overview of: Victory Sign Biometric for Terrorists Identification by Ahmad B. A. Hassanata, Mahmoud B. Alhasanat, Mohammad Ali Abbadi, Eman Btoush, Mouhammd Al-Awadi.

Abstract:

Covering the face and all body parts, sometimes the only evidence to identify a person is their hand geometry, and not the whole hand- only two fingers (the index and the middle fingers) while showing the victory sign, as seen in many terrorists videos. This paper investigates for the first time a new way to identify persons, particularly (terrorists) from their victory sign. We have created a new database in this regard using a mobile phone camera, imaging the victory signs of 50 different persons over two sessions. Simple measurements for the fingers, in addition to the Hu Moments for the areas of the fingers were used to extract the geometric features of the shown part of the hand shown after segmentation. The experimental results using the KNN classifier were encouraging for most of the recorded persons; with about 40% to 93% total identification accuracy, depending on the features, distance metric and K used.
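The KNN classifier the abstract mentions is nothing exotic. A minimal sketch in plain Python; the finger measurements and person labels below are invented for illustration, not taken from the paper’s database:

```python
import math
from collections import Counter

# Invented toy data: each sample is (index-finger length,
# middle-finger length, angle between fingers) -- stand-ins for
# the geometric features extracted from V-sign images.
training = [
    ((7.1, 7.9, 31.0), "person_a"),
    ((7.0, 8.1, 30.0), "person_a"),
    ((6.2, 7.0, 24.0), "person_b"),
    ((6.3, 6.9, 25.0), "person_b"),
]

def knn(sample, k=3):
    # Rank training samples by Euclidean distance, then vote
    # among the k nearest neighbors.
    dists = sorted(
        (math.dist(sample, feats), label) for feats, label in training
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

guess = knn((7.0, 8.0, 30.5))
```

A real system would normalize the features and tune k and the distance metric, which is exactly where the paper’s 40% to 93% accuracy spread comes from.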

All of which makes me suspect that giving a surveillance camera the “finger,” and indeed your height, gait, and any other physical mannerism, are all fodder for surveillance systems.

Hotels and businesses need to construct privacy zones for customers to arrive and depart free from surveillance.

February 19, 2016

Motion Forcing Apple to comply with FBI [How Does Baby Blue’s Make Law More Accessible?]

Filed under: Cybersecurity,FBI,Government,Privacy,Security — Patrick Durusau @ 5:24 pm

The DoJ is trying to force Apple to comply with FBI by Nicole Lee.

I mention this because Nicole includes a link to: Case 5:16-cm-00010-SP Document 1 Filed 02/19/16 Page 1 of 35 Page ID #:1, which is the GOVERNMENT’S MOTION TO COMPEL APPLE INC. TO COMPLY WITH THIS COURT’S FEBRUARY 16, 2016 ORDER COMPELLING ASSISTANCE IN SEARCH; EXHIBIT.

Whatever the Justice Department wants to contend to the contrary, a hearing date of March 22, 2016 on this motion is ample evidence that the government has no “urgent need” for information, if any, on the cell phone in question. The government’s desire to waste more hours and resources on dead suspects is quixotic at best.

Now that Baby Blue’s Manual of Legal Citation (Baby Blue’s) is online and legal citations are no longer captives of the Bluebook® gang, tell me again how Baby Blue’s has increased public access to the law?

This is, after all, a very important public issue and the public should be able to avail itself of the primary resources.

You will find Baby Blue’s doesn’t help much in that regard.

Contrast Baby Blue’s citation style advice with adding hyperlinks to the authorities cited in the Department of Justice’s “memorandum of points and authorities:”

Federal Cases

Central Bank of Denver v. First Interstate Bank of Denver, 551 U.S. 164 (1994).

General Construction Company v. Castro, 401 F.3d 963 (9th Cir. 2005)

In re Application of the United States for an Order Directing a Provider of Communication Services to Provide Technical Assistance to the DEA, 2015 WL 5233551, at *4-5 (D.P.R. Aug. 27, 2015)

In re Application of the United States for an Order Authorizing In-Progress Trace of Wire Commc’ns over Tel. Facilities (Mountain Bell), 616 F.2d 1122 (9th Cir. 1980)

In re Application of the United States for an Order Directing X to Provide Access to Videotapes (Access to Videotapes), 2003 WL 22053105, at *3 (D. Md. Aug. 22, 2003) (unpublished)

In re Order Requiring [XXX], Inc., to Assist in the Execution of a Search Warrant Issued by This Court by Unlocking a Cellphone (In re XXX) 2014 WL 5510865, at *2 (S.D.N.Y. Oct. 31, 2014)

Konop v. Hawaiian Airlines, Inc., 302 F.3d 868 (9th Cir. 2002)

Pennsylvania Bureau of Correction v. United States Marshals Service, 474 U.S. 34 (1985)

Plum Creek Lumber Co. v. Hutton, 608 F.2d 1283 (9th Cir. 1979)

Riley v. California, 134 S. Ct. 2473 (2014) [For some unknown reason, local rules must allow varying citation styles for U.S. Supreme Courts decisions.]

United States v. Catoggio, 698 F.3d 64 (2nd Cir. 2012)

United States v. Craft, 535 U.S. 274 (2002)

United States v. Fricosu, 841 F.Supp.2d 1232 (D. Co. 2012)

United States v. Hall, 583 F. Supp. 717 (E.D. Va. 1984)

United States v. Li, 55 F.3d 325, 329 (7th Cir. 1995)

United States v. Navarro, No. 13-CR-5525, ECF No. 39 (W.D. Wa. Nov. 13, 2013)

United States v. New York Telephone Co., 434 U.S. 159 (1977)

Federal Statutes

18 U.S.C. 2510

18 U.S.C. 3103

28 U.S.C. 1651

47 U.S.C. 1001

47 U.S.C. 1002

First, I didn’t format a single one of these citations. I copied them “as is” into a search engine, so Baby Blue’s played no role in those searches.

Second, I added hyperlinks to a variety of sources for both the case law and statutes to make the point that one citation can resolve to a multitude of places.

Some places are commercial and have extra features while others are non-commercial and may have fewer features.

If instead of individual hyperlinks, I had a nexus for each case, perhaps using its citation as its public name, then I could attach pointers to a multitude of resources that all offer the same case or statute.

If you have WestLaw, LexisNexis or some other commercial vendor, you could choose to find the citation there. If you prefer non-commercial access to the same material, you could choose one of those access methods.

That nexus is what we call a topic in topic maps (“proxy” in the TMRM) and it would save every user, commercial or non-commercial, the sifting of search results that I performed this afternoon.
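A toy sketch of such a nexus in Python; the citation strings come from the list above, but the endpoint names are placeholders of my invention, not actual services of record:

```python
# A citation acts as the public name of a "topic" (TMRM proxy):
# one nexus, many access points, commercial and open alike.
nexus = {
    "Riley v. California, 134 S. Ct. 2473 (2014)": {
        "commercial": ["westlaw", "lexisnexis"],
        "open": ["courtlistener", "supremecourt.gov"],
    },
    "United States v. New York Telephone Co., 434 U.S. 159 (1977)": {
        "commercial": ["westlaw"],
        "open": ["courtlistener"],
    },
}

def resolve(citation, prefer="open"):
    """Return the reader's preferred access points for a citation,
    falling back to whatever else the nexus records."""
    entry = nexus.get(citation, {})
    return entry.get(prefer, []) or [
        src for srcs in entry.values() for src in srcs
    ]

sources = resolve("Riley v. California, 134 S. Ct. 2473 (2014)")
```

One lookup replaces the afternoon of search-result sifting described above, and each reader gets the access method they prefer.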

The hyperlinks I used above make some of the law more accessible but not as accessible as it could be.

Creating a nexus/topic/proxy for each of these citations would enable users to pick pre-formatted citations (good-bye to formatting manuals for most of us) and the law material most accessible to them.

That sounds like greater public access to the law to me.

You?


Read the government’s “Memorandum of Points and Authorities” with a great deal of care.

For example:

The government is also aware of multiple other unpublished orders in this district and across the country compelling Apple to assist in the execution of a search warrant by accessing the data on devices running earlier versions of iOS, orders with which Apple complied.5

Be careful! Footnote 5 refers to a proceeding in the Eastern District of New York where the court sua sponte raised the issue of its authority under the All Writs Act. Footnote 5 recites no sources or evidence for the prosecutor’s claim of “…multiple other unpublished orders in this district and across the country….” None.

My impression is the government’s argument is mostly bluster and speculation. Plus repeating that Apple has betrayed its customers in the past and the government doesn’t understand its reluctance now. Business choices are not subject to government approval, or at least they weren’t the last time I read the U.S. Constitution.

Yes?

February 15, 2016

BMG Seeks to Violate Privacy Rights – Cox Refuses to Aid and Abet

Filed under: Cybersecurity,Intellectual Property (IP),Privacy,Security — Patrick Durusau @ 4:58 pm

Cox Refuses to Spy on Subscribers to Catch Pirates by Ernesto Van der Sar.

From the post:

Last December a Virginia federal jury ruled that Internet provider Cox Communications was responsible for the copyright infringements of its subscribers.

The ISP was found guilty of willful contributory copyright infringement and must pay music publisher BMG Rights Management $25 million in damages.

The verdict was a massive victory for the music company and a disaster for Cox, but the case is not closed yet.

A few weeks ago BMG asked the court to issue a permanent injunction against Cox Communications, requiring the Internet provider to terminate the accounts of pirating subscribers and share their details with the copyright holder.

In addition BMG wants the Internet provider to take further action to prevent infringements on its network. While the company remained vague on the specifics, it mentioned the option of using invasive deep packet inspection technology.

Last Friday, Cox filed a reply pointing out why BMG’s demands go too far, rejecting the suggestion of broad spying and account termination without due process.

“To the extent the injunction requires either termination or surveillance, it imposes undue hardships on Cox, both because the order is vague and because it imposes disproportionate, intrusive, and punitive measures against households and businesses with no due process,” Cox writes (pdf).

Read the rest of Ernesto’s post for sure but here’s a quick summary:

Cox.com is spending money to protect your privacy.

I don’t live in a Cox service area but if you do, sign up with Cox and say their opposition to BMG is driving your new subscription. Positive support always rings louder than protesters with signs and litter.

BMG.com is spending money to violate your privacy.

BMG is a subsidiary of Bertelsmann, which claims 112,037 employees.

I wonder how many of those employees have signed off on the overreaching and abusive positions of BMG?

Perhaps members of the public oppressed by BMG and/or Bertelsmann should seek them out to reason with them.

Bearing in mind that “rights” depend upon rules you choose to govern your discussions/actions.

February 11, 2016

UK Parliament Reports on the Draft Investigatory Powers Bill

Filed under: Cybersecurity,Government,Privacy,Security — Patrick Durusau @ 7:32 pm

I have stumbled on several “news” reports about the Investigatory Powers Bill in the UK.

Reports from committees in Parliament have started appearing, but are those reports linked in breathless accounts of the horrors of the Investigatory Powers bill?

You already know the answer to that question!

I did find UK surveillance bill condemned by a Parliamentary committee, for the third time by Cory Doctorow, which pointed to the Joint Select Committee recommendations for changes in the IP Bill.

For two other reports, Cory relied on reporting in Wired.co.uk, which quoted many sources but failed to link to the reports themselves.

To get you started with the existing primary criticisms of the Investigatory Powers Bill:

There was a myth the Internet (later the WWW) would provide greater access to information, along the lines of the Memex.

Deep information is out there and when you find it, please insert a link to it.

You and everyone who reads your paper, post, tweet, etc. will be better off for it.

February 8, 2016

Governments Race To Bottom On Privacy Rights

Filed under: Government,Privacy,Security — Patrick Durusau @ 2:30 pm

British spies want to be able to suck data out of US Internet giants by Cory Doctorow.

Cory points out that a recent US/UK agreement subjects U.S. citizens to surveillance under British laws that no one understands and that don’t require even a fig leaf of judicial approval.

The people of the United States fought one war to free themselves of arbitrary and capricious British rule. See the Declaration of Independence.

Is the stage being set for a war to enforce the constitution that resulted from the last war the United States waged against the UK?

February 4, 2016

Truthful Paedophiles On The Darknet?

Filed under: Government,Privacy,Tor — Patrick Durusau @ 3:13 pm

There is a credibility flaw in Cryptopolitik and the Darknet by Daniel Moore & Thomas Rid that I overlooked yesterday (The Dark Web, “Kissing Cousins,” and Pornography). Perhaps it was just too obvious to attract attention.

Moore and Rid write:

The pornographic content was perhaps the most distressing. Websites dedicated to providing links to videos purporting to depict rape, bestiality and paedophilia were abundant. One such post at a supposedly nonaffiliated content-sharing website offered a link to a video of ‘a 12 year old girl … getting raped at school by 4 boys’.52 Other examples include a service that sold online video access to the vendor’s own family members:

My two stepsisters … will be pleased to show you their little secrets. Well, they are rather forced to show them, but at least that’s what they are used to.53

Several communities geared towards discussing and sharing illegitimate fetishes were readily available, and appeared to be active. Under the shroud of anonymity, various users appeared to seek vindication of their desires, providing words of support and comfort for one another in solidarity against what was seen as society’s unjust discrimination against non-mainstream sexual practices. Users exchanged experiences and preferences, and even traded content. One notable example from a website called Pedo List included a commenter freely stating that he would ‘Trade child porn. Have pics of my daughter.’54 There appears to be no fear of retribution or prosecution in these illicit communities, and as such users apparently feel comfortable enough to share personal stories about their otherwise stifled tendencies. (page 23)

Despite their description of hidden services as dens of iniquity and crime, those who use them are suddenly paragons of truthfulness, at least when it suits the authors’ purpose?

Doesn’t crediting the content of the Darknet as truthful, as opposed to being wishful, fantasy, or even police officers posing to investigate (some would say entrap) others, strain the imagination?

Some of the content is no doubt truthful but policy arguments need to be based on facts, not a collection of self-justifying opinions from like minded individuals.

A quick search on the string (without quotes):

police officers posing as children sex rings

Returns 9.7 million “hits.”

It isn’t possible to say how many of those police officers appeared in the postings collected by Moore & Rid.

But in science, there is this thing called the burden of proof. That is, simply asserting a conclusion, even while citing equally non-evidence-based conclusions, isn’t sufficient to prove a claim.

Moore & Rid had the burden to prove that the Darknet is a wicked place that poses all sorts of dangers and hazards.

As I pointed out yesterday, The Dark Web, “Kissing Cousins,” and Pornography, their “proof” is non-replicable conclusions about a small part of the Darkweb.

Earlier today I realized their conclusions depend upon a truthful criminal element using the Darkweb.

What do you think about the presumption that criminals are truthful?

Sounds doubtful to me!

February 3, 2016

The Dark Web, “Kissing Cousins,” and Pornography

Filed under: Cybersecurity,Government,Privacy,Security,Tor — Patrick Durusau @ 7:57 pm

Dark web is mostly illegal, say researchers by Lisa Vaas.

You can tell where Lisa comes out on the privacy versus law enforcement issue by the slant of her conclusion:

Users, what’s your take: are hidden services worth the political firestorm they generate? Are they worth criminals escaping justice?

Illegal is a slippery concept.

Marriage of first “kissing” cousins is “illegal” in:

Arkansas, Delaware, Idaho, Iowa, Kansas, Kentucky, Louisiana, Michigan, Minnesota, Mississippi, Missouri, Montana, Nebraska, Nevada, New Hampshire, North Dakota, Ohio, Oklahoma, Oregon, Pennsylvania, South Dakota, Texas, Washington, West Virginia, and Wyoming.

Marriage of first “kissing” cousins is legal in:

Alabama, Alaska, California, Colorado, Connecticut, District of Columbia, Florida, Georgia, Hawaii, Maryland, Massachusetts, New Jersey, New Mexico, New York, North Carolina (first cousins but not double first), Rhode Island, South Carolina, Tennessee, Vermont, and Virginia.

There are some other nuances I didn’t capture and for those see: State Laws Regarding Marriages Between First Cousins.

If you read Cryptopolitik and the Darknet by Daniel Moore & Thomas Rid carefully, you will spot a number of problems with their methodology and reasoning.

First and foremost, no definitions were offered for their taxonomy (at page 20):

  • Arms
  • Drugs
  • Extremism
  • Finance
  • Hacking
  • Illegitimate pornography
  • Nexus
  • Other illicit
  • Social
  • Violence
  • Other
  • None

Readers and other researchers are left to wonder what was included or excluded from each of those categories.

In science, that would be called an inability to replicate the results. As if this were science.

Moore & Rid recite anecdotal accounts of particular pornography sites, calculated to shock the average reader, but that’s not the same thing as enabling replication of their research. Or a fair characterization of all the pornography encountered.

They presumed that text was equivalent to image content, so they discarded all images (pages 19-20). Which left them unable to test that presumption. Hmmm, untested assumptions in science?

The unknown basis for classification identified 122 sites (page 21) as pornographic out of an initial set of 5,205 sites.

If you accept Tor’s estimate of 30,000 hidden services that announce themselves every day, Moore & Rid have found that illegal pornography (whatever that means) is:

122 / 30,000 ≈ 0.0041

Moore & Rid have established that “illegal” porn is roughly 0.41% of the Dark Net.
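The arithmetic can be sanity-checked in a few lines. The figures below (122 classified sites, 30,000 daily hidden services) are the ones quoted in this post; note that a fraction of 0.0041 is 0.41% once converted to a percentage:

```python
# Back-of-envelope check of the Moore & Rid figures quoted above:
# 122 sites classified as pornographic, against Tor's estimate of
# roughly 30,000 hidden services announcing themselves daily.
pornographic_sites = 122
estimated_hidden_services = 30_000

fraction = pornographic_sites / estimated_hidden_services
print(f"fraction:   {fraction:.6f}")         # ~0.004067
print(f"percentage: {fraction * 100:.2f}%")  # ~0.41%
```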

I should be grateful Moore & Rid have so carefully documented the tiny part of the Dark Web concerned with their notion of “illegal” pornography.

But, when you encounter “reasoning” such as:


The other quandary is how to deal with darknets. Hidden services have already damaged Tor, and trust in the internet as a whole. To save Tor – and certainly to save Tor’s reputation – it may be necessary to kill hidden services, at least in their present form. Were the Tor Project to discontinue hidden services voluntarily, perhaps to improve the reputation of Tor browsing, other darknets would become more popular. But these Tor alternatives would lack something precious: a large user base. In today’s anonymisation networks, the security of a single user is a direct function of the number of overall users. Small darknets are easier to attack, and easier to de-anonymise. The Tor founders, though exceedingly idealistic in other ways, clearly appreciate this reality: a better reputation leads to better security.85 They therefore understand that the popularity of Tor browsing is making the bundled-in, and predominantly illicit, hidden services more secure than they could be on their own. Darknets are not illegal in free countries and they probably should not be. Yet these widely abused platforms – in sharp contrast to the wider public-key infrastructure – are and should be fair game for the most aggressive intelligence and law-enforcement techniques, as well as for invasive academic research. Indeed, having such clearly cordoned-off, free-fire zones is perhaps even useful for the state, because, conversely, a bad reputation leads to bad security. Either way, Tor’s ugly example should loom large in technology debates. Refusing to confront tough, inevitable political choices is simply irresponsible. The line between utopia and dystopia can be disturbingly thin. (pages 32-33)

it’s hard to say nothing and see public discourse soiled with this sort of publication.

First, there is no evidence presented that hidden services have damaged Tor and/or trust in the Internet as a whole. Even the authors concede that Tor is the most popular option for anonymous browsing and hidden services. That doesn’t sound like damage to me. You?

Second, the authors dump all hidden services in the “bad, very bad” basket, despite their own research classifying only 0.41% of the Dark Net as illicit pornography. They use stock “go to” examples to shock readers in place of evidence and reasoning.

Third, the charge that Tor “[r]efus[es] to confront tough, inevitable political choices” and is therefore “simply irresponsible” is false. Demonstrably false, because the authors themselves point out that Tor developers made a conscious choice not to take political considerations into account (page 25).

Since Moore & Rid disagree with that choice, they resort to name calling, terming the decision “simply irresponsible.” Moore & Rid are entitled to their opinions but they aren’t going to persuade even a semi-literate audience with name calling.

Take Cryptopolitik and the Darknet as an example of how to not write a well researched and reasoned paper. Although, that isn’t a bar to publication as you can see.

February 2, 2016

Google to deliver wrong search results to would-be jihadis[, gays, unwed mothers, teenagers, Muslims]

Filed under: Censorship,Government,Privacy,Security — Patrick Durusau @ 8:52 pm

Google to deliver wrong search results to would-be jihadis by David Barrett.

From the post:

Jihadi sympathisers who type extremism-related words into Google will be shown anti-radicalisation links instead, under a pilot scheme announced by the internet giant.

The new technology means people at risk of radicalisation will be presented with internet links which are the exact opposite of what they were searching for.

Dr Anthony House, a senior Google executive, revealed the pilot scheme in evidence to MPs scrutinising the role of internet companies in combating extremism.

It isn’t hard to see where this slippery road leads.

If any of the current Republican candidates are elected to the U.S. presidency, Google will:

Respond to gay sex or gay related searches with links for praying yourself straight.

Unwed mothers requesting abortion services will have their personal information forwarded to right-to-birth organizations and sent graphic anti-abortion images by email.

Teenagers seeking birth control advice will only see – Abstinence or Hell!

Muslims, well, unless Trump has deported all of them, will see anti-Muslim links.

Unlike bad decisions by government, Google can effectively implement demented schemes such as this one.

Censoring of search results to favor any side, policy, or position is just that: censorship.

If you forfeit the rights of others, you have no claim to rights yourself.

Your call.

January 13, 2016

Cracka Bags DNI James Clapper! Kudos!

Filed under: Cybersecurity,Privacy — Patrick Durusau @ 3:53 pm

US Intelligence director’s personal e-mail, phone hacked by Sean Gallagher.

From the post:

The same individual or group claiming to be behind a recent breach of the personal e-mail account of CIA Director John Brennan now claims to be behind the hijacking of the accounts of Director of National Intelligence James Clapper. The Office of the Director of National Intelligence confirmed to Motherboard that Clapper was targeted and that the case has been forwarded to law enforcement.

Someone going by the moniker “Cracka,” claiming to be with a group of “teenage hackers” called “Crackas With Attitude,” told Motherboard’s Lorenzo Franceschi-Bicchiarai that he had gained access to Clapper’s Verizon FiOS account and changed the settings for his phone service to forward all calls to the Free Palestine Movement. Cracka also claimed to have gained access to Clapper’s personal e-mail account and his wife’s Yahoo account.

See the rest of Sean’s post for the details but really, good show!

If Cracka is not an NSA operative, this hack was:

  1. Without national security letters to phone providers
  2. Without the melting NSA data center in Utah
  3. Without highly paid consultants and contractors
  4. Without government-only grade hardware
  5. Without secret information about phone networks
  6. etc.

Sounds like all our data is already easily available to government agents, just not the ones who think of Excel as data processing. 😉

As I said before, Cracka/s needs to dump all the data they can access and then announce the hack.

Unless and until the government joins its citizens in the same goldfish bowl, its attitude towards privacy will never change. Perhaps not even then but it’s worth a shot.

PS: For the automatic: But people will get hurt from open data dumping! And your point? People are being hurt now by government secrecy and invasions of their privacy.

I wonder what your basis is for choosing who it is acceptable to hurt across an entire nation? I forgot, that’s a secret isn’t it?

January 12, 2016

[Don’t] …Join the National Security State

Filed under: Free Speech,Government,Privacy,Security — Patrick Durusau @ 10:13 pm

Social Media Companies Should Decline the Government’s Invitation to Join the National Security State by Hugh Handeyside.

The pressure on social media companies to limit or take down content in the name of national security has never been greater. Resolving any ambiguity about how much the Obama administration values the companies’ cooperation, the White House on Friday dispatched the highest echelon of its national security team — including the Attorney General, the FBI Director, the Director of National Intelligence, and the NSA Director — to Silicon Valley for a meeting with technology executives chaired by the White House Chief of Staff himself. The agenda for the meeting tried to convey a locked-arms sense of camaraderie, asking, “How can we make it harder for terrorists to leveraging [sic] the internet to recruit, radicalize, and mobilize followers to violence?”

Congress, too, has been turning up the heat. On December 16, the House passed the Combat Terrorist Use of Social Media Act, which would require the President to submit a report on “United States strategy to combat terrorists’ and terrorist organizations’ use of social media.” The Senate is considering a far more aggressive measure which would require providers of Internet communications services to report to government authorities when they have “actual knowledge” of “apparent” terrorist activity (a requirement that, because of its vagueness and breadth, would likely harm user privacy and lead to over-reporting).

The government is of course right that terrorists use social media, including to recruit others to their cause. Indeed, social media companies already have systems in place for catching real threats, incitement, or actual terrorism. But the notion that social media companies can or should scrub their platforms of all potentially terrorism-related content is both unrealistic and misguided. In fact, mandating affirmative monitoring beyond existing practices would sweep in protected speech and turn the social media companies into a wing of the national security state.

The reasons not to take that route are both practical and principled. On a technical level, it would be extremely difficult, if not entirely infeasible, to screen for actual terrorism-related content in the 500 million tweets that are generated each day, or the more than 400 hours of video uploaded to YouTube each minute, or the 300 million daily photo uploads on Facebook. Nor is it clear what terms or keywords any automated screening tools would use — or how using such terms could possibly exclude beliefs and expressive activity that are perfectly legal and non-violent, but that would be deeply chilled if monitored for potential links to terrorism.
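Quick arithmetic on the volume figures Hugh cites shows why affirmative monitoring at that scale is daunting. This is a rough sketch using only the numbers quoted above:

```python
# Back-of-envelope arithmetic on the platform volumes quoted above.
tweets_per_day = 500_000_000      # daily tweets, per the post
seconds_per_day = 24 * 60 * 60    # 86,400

tweets_per_second = tweets_per_day / seconds_per_day
print(f"{tweets_per_second:,.0f} tweets per second")  # ~5,787

video_hours_per_minute = 400      # YouTube uploads, per the post
video_hours_per_day = video_hours_per_minute * 60 * 24
print(f"{video_hours_per_day:,} hours of video per day")  # 576,000
```

At roughly 5,800 tweets per second, even a classifier spending a single millisecond per tweet would need half a dozen cores running continuously around the clock, before a single frame of video is examined.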

Hugh makes a great case why social media companies should resist becoming arms of the national security state.

You should read his essay in full and I would add only one additional point:

Do you and/or your company want to be remembered for resisting the security state or as collaborators? The choice is that simple.

January 5, 2016

Back from the Dead: Politwoops

Filed under: Journalism,Privacy,Tweets,Twitter — Patrick Durusau @ 7:07 pm

Months after Twitter revoked API access, Politwoops is back, tracking the words politicians take back by Joseph Lichterman.

From the post:

We’ll forgive you if you missed the news, since it was announced on New Year’s Eve: Politwoops, the service which tracks politicians’ deleted tweets, is coming back after Twitter agreed to let it access the service’s API once again.

On Tuesday, the Open State Foundation, the Dutch nonprofit that runs the international editions of Politwoops, said it was functioning again in 25 countries, including the United Kingdom, the Netherlands, Ireland, and Turkey. The American version of Politwoops, operated by the Sunlight Foundation, isn’t back up yet, but the foundation said in a statement that “in the coming days and weeks, we’ll be working behind the scenes to get Politwoops up and running.”

Excellent news!

Politwoops will be reporting tweets that politicians send and then suddenly regret.

I don’t disagree with Twitter that any user can delete their tweets but strongly disagree that I can’t capture the original tweet and at a minimum, point to its absence from the “now” Twitter archive.

Politicians should not be allowed to hide from their sporadic truthful tweets.
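A Politwoops-style tracker boils down to diffing successive snapshots of a public timeline against an archive that never forgets. The sketch below is purely illustrative: the hand-built snapshot dictionaries and the `record_snapshot` helper are my own assumptions, not Politwoops’ actual implementation or any real Twitter API.

```python
# Minimal sketch of Politwoops-style deletion tracking: archive every tweet
# ever seen, and flag IDs that vanish from the live timeline. Fetching is
# omitted; the snapshots here are hand-built stand-ins for whatever feed
# supplies a politician's current public tweets.

archive: dict[int, str] = {}  # tweet id -> archived text (never forgets)

def record_snapshot(live_tweets: dict[int, str]) -> list[tuple[int, str]]:
    """Archive new tweets; return (id, text) pairs no longer live, i.e. deleted."""
    deleted = [(tid, text) for tid, text in archive.items()
               if tid not in live_tweets]
    archive.update(live_tweets)  # the archive only ever grows
    return deleted

# Two successive snapshots of the same timeline:
record_snapshot({101: "Proud to vote yes today.", 102: "Big mistake!"})
print(record_snapshot({101: "Proud to vote yes today."}))
# -> [(102, 'Big mistake!')]
```

The key design point is that deletion is inferred by absence: once a tweet ID is archived, it stays archived, so a later snapshot that lacks it is evidence the politician took it back.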

January 1, 2016

How to Avoid Being a Terrorism “False Positive” in 2016

Filed under: Government,Privacy,Security — Patrick Durusau @ 7:56 pm

For all of the fear mongering about terrorists and terrorism, I’m more worried about being a “false positive” for terrorism than terrorism.

Radley Balko wrote about a swat raid on an entirely innocent family in: Federal judge: Drinking tea, shopping at a gardening store is probable cause for a SWAT raid on your home, saying:

Last week, U.S. District Court Judge John W. Lungstrum dismissed every one of the Hartes’s claims. Lungstrum found that sending a SWAT team into a home first thing in the morning based on no more than a positive field test and spotting a suspect at a gardening store was not a violation of the Fourth Amendment. He found that the police had probable cause for the search, and that the way the search was conducted did not constitute excessive force. He found that the Hartes had not been defamed by the raid or by the publicity surrounding it. He also ruled that the police were under no obligation to know that drug testing field kits are inaccurate, nor were they obligated to wait for the more accurate lab tests before conducting the SWAT raid. The only way they’d have a claim would be if they could show that the police lied about the results, deliberately manipulated the tests or showed a reckless disregard for the truth — and he ruled that the Hartes had failed to do so.

If you think that’s a sad “false positive” story, consider Jean Charles de Menezes who was murdered by London Metropolitan Police for sitting on a bus. He was executed with 7 shots to the head, while being physically restrained by another police officer.

Home Secretary Charles Clarke (at that time) is quoted by the BBC saying:

“I very, very much regret what happened.

“I hope [the family] understand the police were trying to do their very best under very difficult circumstances.”

What “very difficult circumstances?” Menezes was sitting peacefully on a bus, unarmed and unaware that he was about to be attacked by three police officers. What’s “very difficult” about those circumstances?

Ah, but it was the day after bombings in London and the usual suspects had spread fear among the police and the public. The “very difficult circumstances” victimized the police, the public and of course, Menezes.

If you live in the United States, there is the ongoing drum roll of police shooting unarmed black men, when they don’t murder a neighbor on the way.

No doubt the police need to exercise more restraint but the police are being victimized by the toxic atmosphere of fear generated by public officials as well as those who profit from fear-driven public policies.

You do realize the TSA agents at airports are supplied by contractors. Yes? $Billions in contracts.

Less fear, fewer TSA (if any at all) = Loss of $Billions in contracts

With that kind of money at stake, the toxic atmosphere of fear will continue to grow.

How can you reduce your personal odds of being a terrorism “false positive” in 2016?

The first thing is to realize that the police may look like the “enemy” but they really aren’t. For the most part they are underpaid, under-trained, ordinary people who have a job most of us wouldn’t take on a bet. There are bad cops, have no doubt, but the good ones out-number the bad ones.

The police are being manipulated by the real bad actors, the ones who drive and profit from the fear machine.

The second thing to do is for you and your community to reach out to the police officers who regularly patrol your community. Get to know them by volunteering at police events or inviting them to your own.

Create an environment where the police don’t see a young black man but Mr. H’s son, you know Mr. H, he helped with the litter campaign last year, etc.

Getting to know the local police and getting the police to know your community won’t solve every problem but it may lower the fear level enough to save lives, one of which may be your own.

You won’t be any worse off and on the up side, enough good community relations may result in the police being on your side when it is time to oust the fear mongers.
