Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

February 23, 2016

Anti-Encryption National Commission News 24 February 2016

Filed under: Cryptography,Cybersecurity,Government,Security — Patrick Durusau @ 3:07 pm

Shedding Light on ‘Going Dark’: Practical Approaches to the Encryption Challenge.

WHEN: Wednesday, February 24, 2016 12:00 p.m. to 1:30 p.m. ET
WHERE: Bipartisan Policy Center, 1225 Eye Street NW, Suite 1000, Washington, DC, 20005

REGISTER NOW

From the post:

The spate of terrorist attacks last year, especially those in Paris and San Bernardino, raised the specter of terrorists using secure digital communications to evade intelligence and law enforcement agencies and, in the words of FBI Director James Comey, “go dark.” The same technologies that companies use to keep Americans safe when they shop online and communicate with their friends and family on the Internet are the same technologies that terrorists and criminals exploit to disguise their illicit activity.

In response to this challenge, House Homeland Security Committee Chairman Michael McCaul (R-TX) and Sen. Mark Warner (D-VA), a member of the Senate Intelligence Committee, have proposed a national commission on security and technology challenges in the digital age. The commission would bring together experts who understand the complexity and the stakes to develop viable recommendations on how to balance competing digital security priorities.

Please join the Bipartisan Policy Center on February 24 for a conversation with the two lawmakers as they roll out their legislation creating the McCaul-Warner Digital Security Commission followed by a panel discussion highlighting the need to take action on this critical issue.

Ironically, I won’t be able to watch the live streaming of this event because:

The video you are trying to watch is using the HTTP Live Streaming protocol which is only support in iOS devices.

Any discussion of privacy or First Amendment rights must begin without the presumption that any balancing or trade-off is necessary.

While it is true that some trade-offs have been made in the past, the question that should begin the anti-encryption discussion is whether terrorism is any more than a fictional threat.

Since 9/11, it has been 5278 days without a terrorist being sighted at a U.S. airport.
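If you want to check that count, here is a quick sketch in Python, counting from the 9/11 attacks to the date of this post:

    from datetime import date

    # Days from September 11, 2001 to February 23, 2016 (the date of this post)
    print((date(2016, 2, 23) - date(2001, 9, 11)).days)  # 5278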

One explanation for those numbers is the number of terrorists in the United States is extremely small.

The FBI routinely takes advantage of people suffering from mental illness to create terrorist “threats,” which the FBI then eliminates. So those arrests should be removed from any count of “terrorists” in our midst.

Before any discussion of “balancing” takes place, challenge the need for balancing at all.

PS: Find someone with an unhacked iOS device on which to watch this presentation.

I first saw this in a post by Cory Doctorow, U.S. lawmakers expected to introduce major encryption bill.

February 21, 2016

FBI Must Reveal Its Hack – Maybe

Filed under: Cybersecurity,Security — Patrick Durusau @ 2:46 pm

Judge Rules FBI Must Reveal Malware It Used to Hack Over 1,000 Computers by Joseph Cox.

From the post:

On Wednesday, a judge ruled that defense lawyers in an FBI child pornography case must be provided with all of the code used to hack their client’s computer.

When asked whether the code would include the exploit used to bypass the security features of the Tor Browser, Colin Fieman, a federal public defender working on the case, told Motherboard in an email, simply, “Everything.”

“The declaration from our code expert was quite specific and comprehensive, and the order encompasses everything he identified,” he continued.

Fieman is defending Jay Michaud, a Vancouver public schools administration worker. Michaud was arrested after the FBI seized ‘Playpen’, a highly popular child pornography site on the dark web, and then deployed a network investigative technique (NIT)—the agency’s term for a hacking tool.

This NIT grabbed suspects’ real IP address, MAC address, and pieces of other technical information, and sent them to a government controlled server.

The case has drawn widespread attention from civil liberties activists because, from all accounts, one warrant was used to hack the computers of unknown suspects all over the world. On top of this, the defense has argued that because the FBI kept the dark web site running in order to deploy the NIT, that the agency, in effect, distributed child pornography. Last month, a judge ruled that the FBI’s actions did not constitute “outrageous conduct.”

If that sounds like a victory for those trying to protect users from government overreaching, consider the Department of Justice response to questions about the ruling:


“The court has granted the defense’s third motion to compel, subject to the terms of the protective order currently in place,” Carr wrote to Motherboard in an email.

I’m just guessing but I suspect “…the terms of the protective order currently in place,…” means that post-arrest the public may find out about the FBI hack but not before.

February 20, 2016

Saturday Night And You Ain’t Got Nobody?

Filed under: Cybersecurity,Security — Patrick Durusau @ 9:43 pm

If you are spending Saturday night alone, take a look at More IoT insecurity: The surveillance camera that anyone can log into by Paul Ducklin.

It won’t help your social life but you are likely to see people who do have social lives.

The message here should be clear:

Your security is your responsibility, no one else’s.

Cybersecurity and Business ROI

Filed under: Cybersecurity,Security — Patrick Durusau @ 10:35 am

Cybersecurity is slowing down my business, say majority of chief execs by Kat Hall.

From the post:

Cisco Live Chief execs polled in a major survey have little time for their cybersecurity folk and believe complying with security regulations hampers business.

Some 71 per cent of 1,000 top bosses surveyed by Cisco feel that efforts to shore up IT defences slows the pace of commerce. The study is due to be published next month.

Big cheeses cheesed off with security staff getting in the way of profit may well rid themselves of their troublesome priests, though: Craig Williams, senior technical leader at Cisco’s security biz Talos, believes quite a few bods working in computer security will not be in the sector in the next five years.

The profit motive is responsible for vulnerable software. Fittingly, the profit motive is also responsible for the lack of effective efforts to protect against vulnerable software.

Does it seem odd that the business community views cybersecurity, both the original software vulnerabilities and the efforts to guard against them, through the balance sheet of profit and loss?

That is, even though data breaches can and do occur, if they are reasonable in scope and cost it is easier to simply roll on and keep making a profit.

If you think about it, only the government and the uninformed (are those different groups?) think cybersecurity should be free and that it should never fail.

Neither one of those is the case nor will they ever be the case.

Security is always a question of how much security you can afford and for what purpose.

At the next report of a data breach, ask how the costs of the breach compare to the cost of preventing it.

And to whom? If a business suffers a data breach but the primary cost is to its customers, how does ROI work in that situation for the business? Or for the consumer? Am I going to move because the State of Georgia suffers data breaches?

I don’t recall that question ever being asked. Do you?

February 19, 2016

Motion Forcing Apple to comply with FBI [How Does Baby Blue’s Make Law More Accessible?]

Filed under: Cybersecurity,FBI,Government,Privacy,Security — Patrick Durusau @ 5:24 pm

The DoJ is trying to force Apple to comply with FBI by Nicole Lee.

I mention this because Nicole includes a link to: Case 5:16-cm-00010-SP Document 1 Filed 02/19/16 Page 1 of 35 Page ID #:1, which is the GOVERNMENT’S MOTION TO COMPEL APPLE INC. TO COMPLY WITH THIS COURT’S FEBRUARY 16, 2016 ORDER COMPELLING ASSISTANCE IN SEARCH; EXHIBIT.

Whatever the Justice Department wants to contend to the contrary, a hearing date of March 22, 2016 on this motion is ample evidence that the government has no “urgent need” for information, if any, on the cell phone in question. The government’s desire to waste more hours and resources on dead suspects is quixotic at best.

Now that Baby Blue’s Manual of Legal Citation (Baby Blue’s) is online and legal citations are no longer captives of the Bluebook® gang, tell me again how Baby Blue’s has increased public access to the law?

This is, after all, a very important public issue and the public should be able to avail itself of the primary resources.

You will find Baby Blue’s doesn’t help much in that regard.

Contrast Baby Blue’s citation style advice with adding hyperlinks to the authorities cited in the Department of Justice’s “memorandum of points and authorities:”

Federal Cases

Central Bank of Denver v. First Interstate Bank of Denver, 551 U.S. 164 (1994).

General Construction Company v. Castro, 401 F.3d 963 (9th Cir. 2005)

In re Application of the United States for an Order Directing a Provider of Communication Services to Provide Technical Assistance to the DEA, 2015 WL 5233551, at *4-5 (D.P.R. Aug. 27, 2015)

In re Application of the United States for an Order Authorizing In-Progress Trace of Wire Commc’ns over Tel. Facilities (Mountain Bell), 616 F.2d 1122 (9th Cir. 1980)

In re Application of the United States for an Order Directing X to Provide Access to Videotapes (Access to Videotapes), 2003 WL 22053105, at *3 (D. Md. Aug. 22, 2003) (unpublished)

In re Order Requiring [XXX], Inc., to Assist in the Execution of a Search Warrant Issued by This Court by Unlocking a Cellphone (In re XXX), 2014 WL 5510865, at *2 (S.D.N.Y. Oct. 31, 2014)

Konop v. Hawaiian Airlines, Inc., 302 F.3d 868 (9th Cir. 2002)

Pennsylvania Bureau of Correction v. United States Marshals Service, 474 U.S. 34 (1985)

Plum Creek Lumber Co. v. Hutton, 608 F.2d 1283 (9th Cir. 1979)

Riley v. California, 134 S. Ct. 2473 (2014) [For some unknown reason, local rules must allow varying citation styles for U.S. Supreme Court decisions.]

United States v. Catoggio, 698 F.3d 64 (2nd Cir. 2012)

United States v. Craft, 535 U.S. 274 (2002)

United States v. Fricosu, 841 F.Supp.2d 1232 (D. Co. 2012)

United States v. Hall, 583 F. Supp. 717 (E.D. Va. 1984)

United States v. Li, 55 F.3d 325, 329 (7th Cir. 1995)

United States v. Navarro, No. 13-CR-5525, ECF No. 39 (W.D. Wa. Nov. 13, 2013)

United States v. New York Telephone Co., 434 U.S. 159 (1977)

Federal Statutes

18 U.S.C. 2510

18 U.S.C. 3103

28 U.S.C. 1651

47 U.S.C. 1001

47 U.S.C. 1002

First, I didn’t format a single one of these citations. I copied them “as is” into a search engine, so Baby Blue’s played no role in those searches.

Second, I added hyperlinks to a variety of sources for both the case law and statutes to make the point that one citation can resolve to a multitude of places.

Some places are commercial and have extra features while others are non-commercial and may have fewer features.

If instead of individual hyperlinks, I had a nexus for each case, perhaps using its citation as its public name, then I could attach pointers to a multitude of resources that all offer the same case or statute.

If you have WestLaw, LexisNexis or some other commercial vendor, you could choose to find the citation there. If you prefer non-commercial access to the same material, you could choose one of those access methods.

That nexus is what we call a topic in topic maps (“proxy” in the TMRM) and it would save every user, commercial or non-commercial, the sifting of search results that I performed this afternoon.
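A minimal sketch of what such a nexus might look like in code, assuming nothing fancier than a citation string and the places it resolves to (the class name and URLs below are mine, purely for illustration, not any topic map software’s API):

    # A hypothetical "nexus" (topic/proxy) for a legal citation:
    # one public name, many places where the same case can be read.
    class CitationTopic:
        def __init__(self, citation):
            self.citation = citation   # pre-formatted public name
            self.resources = {}        # source name -> URL

        def add_resource(self, source, url):
            self.resources[source] = url

        def resolve(self, preferred=None):
            """Return the preferred source's URL if we have it, else any source."""
            if preferred in self.resources:
                return self.resources[preferred]
            return next(iter(self.resources.values()))

    riley = CitationTopic("Riley v. California, 134 S. Ct. 2473 (2014)")
    riley.add_resource("commercial", "https://example-commercial-service/riley")   # placeholder URLs
    riley.add_resource("non-commercial", "https://example-free-repository/riley")
    print(riley.resolve(preferred="non-commercial"))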

The hyperlinks I used above make some of the law more accessible but not as accessible as it could be.

Creating a nexus/topic/proxy for each of these citations would enable users to pick pre-formatted citations (good-bye to formatting manuals for most of us) and the law material most accessible to them.

That sounds like greater public access to the law to me.

You?


Read the government’s “Memorandum of Points and Authorities” with a great deal of care.

For example:

The government is also aware of multiple other unpublished orders in this district and across the country compelling Apple to assist in the execution of a search warrant by accessing the data on devices running earlier versions of iOS, orders with which Apple complied.5

Be careful! Footnote 5 refers to a proceeding in the Eastern District of New York where the court sua sponte raised the issue of its authority under the All Writs Act. Footnote 5 recites no sources or evidence for the prosecutor’s claim of “…multiple other unpublished orders in this district and across the country….” None.

My impression is that the government’s argument is mostly bluster and speculation, plus repeating that Apple has betrayed its customers in the past and that the government doesn’t understand its reluctance now. Business choices are not subject to government approval, or at least they weren’t the last time I read the U.S. Constitution.

Yes?

John McAfee As Unpaid Intern?

Filed under: Cybersecurity,Security — Patrick Durusau @ 10:17 am

I read with disappointment John McAfee’s JOHN MCAFEE: I’ll decrypt the San Bernardino phone free of charge so Apple doesn’t need to place a back door on its product.

McAfee writes:


So here is my offer to the FBI. I will, free of charge, decrypt the information on the San Bernardino phone, with my team. We will primarily use social engineering, and it will take us three weeks. If you accept my offer, then you will not need to ask Apple to place a back door in its product, which will be the beginning of the end of America.

I don’t object to McAfee breaking the security on the San Bernardino phone, but I do object to him doing it for free.

McAfee donating services to governments with budgets in the $trillions sets a bad precedent.

First, it enables and encourages the government to continue hiring from the shallow end of the talent/gene pool for technical services. When it is stymied by ROT-13, some prince or princess will come riding in to save the day.

Second, we know the use of unpaid interns damages labor markets; see Unpaid internships: A scourge on the labor market.

Third, and perhaps most importantly, “free” services lead governments and others to treat those services as having little or no value, degrading the value of the same services in the future.

McAfee’s estimate of breaking the encryption on the San Bernardino phone in three weeks seems padded to me. I suspect there will be eighteen days of drunken debauchery concluded by three (3) actual days of work when the encryption is broken. For a total of twenty-one (21) days.

Open request to John McAfee: Please withdraw your offer to break the encryption on the San Bernardino phone for free. Charge at least $1 million on the condition that it is tax free. The bar you set for the hacker market will benefit everyone in that market.


The FBI’s interest in breaking encryption on the San Bernardino phone has nothing to do with that incident. Both shooters in that spur-of-the-moment incident are dead and no amount of investigation is going to change that. What the FBI wants is a routine method of voiding such encryption in the future.

To that end, sell the FBI the decrypted phone and not the decryption method, so you can maintain a market for phone decryption on a case-by-case basis. Coupled with a high enough price for the service, that will help keep FBI intrusions into iPhones to a minimum.

February 17, 2016

Support Apple and Tim Cook!

Filed under: Cybersecurity,Security — Patrick Durusau @ 4:08 pm

Tim Cook’s open letter summarizes the demand being made on Apple by the FBI:


Specifically, the FBI wants us to make a new version of the iPhone operating system, circumventing several important security features, and install it on an iPhone recovered during the investigation. In the wrong hands, this software — which does not exist today — would have the potential to unlock any iPhone in someone’s physical possession.

The FBI may use different words to describe this tool, but make no mistake: Building a version of iOS that bypasses security in this way would undeniably create a backdoor. And while the government may argue that its use would be limited to this case, there is no way to guarantee such control.

That’s insane.

First, unless it has gone unreported, the FBI hasn’t offered to pay Apple for such an operating system. Last time I looked, involuntary servitude was prohibited by the US constitution.

Second, Apple is free to choose the products it wishes to create and cannot be forced to create any particular product, even if paid.

The FBI’s position runs counter to any principled notion of liberty. The FBI would have our liberty subject to the whim and caprice of law enforcement agencies.

I don’t use any Apple products but if a defense fund for Apple is created to resist this absurdity, I will certainly be in line to contribute to it.

So should we all.

February 16, 2016

Choose Memory over Memorex (sex tape stolen)

Filed under: Security — Patrick Durusau @ 5:56 pm

If you have never seen Ella Fitzgerald in the classic “Is it live, or is it Memorex?” commercial, do take a moment to view the video.

With regard to sex acts between consenting adults, you should choose memory over Memorex or any other type of recording.

Lisa Vaas covers the consequences of recording such acts in Teacher’s sex tape stolen from hacked Dropbox, posted on school site.

Given the option to remember an occasion or to record it, choose human memory.

Thus far, human memory cannot be accessed without you or a live witness.

Breach Fatigue? (Safe for Work)

Filed under: Cybersecurity,Security — Patrick Durusau @ 2:57 pm

Sorry! After my report of Nathan’s Million to One Shot, Doc post, I could not resist titling this post with “Breach Fatigue.”

Sarah Kuranda reports on expected lower spending on security, including this quote:

Wright said some customers interviewed by Technology Business Research also cited what some are calling “breach fatigue” as a reason behind lower security spending. Year after year of mega breaches have caused massive jumps in reactionary security spending, Wright said companies are now saying, “There’s not much more I can do.” (emphasis added) [Is The Security Spending Party Over?]

“…[M]assive jumps in reactionary security spending…” have benefited the security services/software vendors but not appreciably increased enterprise security. That much is known.

What remains unknown is why companies say:

There’s not much more I can do.

Pose this scenario to your nearest business manager/executive:

Assume that all the locks are broken on your new Lexus and it isn’t possible to remove the ignition key:

[Image: 2016 Lexus]

Here are the options enterprises have followed to protect the Lexus:

  1. Surround the Lexus with a chain-link fence, with missing sections. (defective security software)
  2. Surround the Lexus with a chain-link fence, with a gate-lock with the key in it. (defective security software design)
  3. Staff the gate with personnel who can’t recognize authorized users. (poor security training)
  4. Purchase broken/insecure solutions to protect a broken/insecure vehicle. (poor strategy)

No doubt, enterprises can continue to throw money at defective software to protect defective software, with continuing mega-breach results.

To that extent, realizing they are throwing good money after bad is a positive sign. Sort of.

What more can enterprises do? Invest in, and require, secure software. It is more costly, but layering broken software on top of broken software has failed.

Why not try something more plausible?

February 15, 2016

BMG Seeks to Violate Privacy Rights – Cox Refuses to Aid and Abet

Filed under: Cybersecurity,Intellectual Property (IP),Privacy,Security — Patrick Durusau @ 4:58 pm

Cox Refuses to Spy on Subscribers to Catch Pirates by Ernesto Van der Sar.

From the post:

Last December a Virginia federal jury ruled that Internet provider Cox Communications was responsible for the copyright infringements of its subscribers.

The ISP was found guilty of willful contributory copyright infringement and must pay music publisher BMG Rights Management $25 million in damages.

The verdict was a massive victory for the music company and a disaster for Cox, but the case is not closed yet.

A few weeks ago BMG asked the court to issue a permanent injunction against Cox Communications, requiring the Internet provider to terminate the accounts of pirating subscribers and share their details with the copyright holder.

In addition BMG wants the Internet provider to take further action to prevent infringements on its network. While the company remained vague on the specifics, it mentioned the option of using invasive deep packet inspection technology.

Last Friday, Cox filed a reply pointing out why BMG’s demands go too far, rejecting the suggestion of broad spying and account termination without due process.

“To the extent the injunction requires either termination or surveillance, it imposes undue hardships on Cox, both because the order is vague and because it imposes disproportionate, intrusive, and punitive measures against households and businesses with no due process,” Cox writes (pdf).

Read the rest of Ernesto’s post for sure but here’s a quick summary:

Cox.com is spending money to protect your privacy.

I don’t live in a Cox service area but if you do, sign up with Cox and say their opposition to BMG is driving your new subscription. Positive support always rings louder than protesters with signs and litter.

BMG.com is spending money to violate your privacy.

BMG is a subsidiary of Bertelsmann, which claims 112,037 employees.

I wonder how many of those employees have signed off on the overreaching and abusive positions of BMG?

Perhaps members of the public oppressed by BMG and/or Bertelsmann should seek them out to reason with them.

Bearing in mind that “rights” depend upon rules you choose to govern your discussions/actions.

February 12, 2016

Brick an iOS Device with Date Setting (local or remote)

Filed under: Cybersecurity,Security — Patrick Durusau @ 4:33 pm

iOS bug warning: Setting this date on your iPhone or iPad will kill your device permanently by Justin Ferris.

From the post:

No one is quite sure yet why this happens, and Apple is still looking into it. However, the best guess is that iOS sees the date January 1, 1970, as either zero or a negative number, and that causes some or all of the iOS functions that require a date to crash.

Now, you might be thinking this isn’t a big deal, because you’d never set your gadget to this date. And it actually is a long process to do it. However, maybe you have a friend who’s a prankster or an ex-friend with a grudge that has access to your gadget. Or it could be done remotely, in the right circumstances.
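The “zero or a negative number” guess is easier to picture if you know that Unix-style clocks count seconds from January 1, 1970 (UTC). A minimal sketch of that arithmetic (my illustration of epoch time, not Apple’s code):

    from datetime import datetime, timedelta, timezone

    # Unix time counts seconds since 1970-01-01 00:00:00 UTC ("the epoch").
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    print(epoch.timestamp())  # 0.0

    # Set the same wall-clock date in any timezone east of UTC and the
    # corresponding Unix timestamp is negative.
    beijing = timezone(timedelta(hours=8))
    print(datetime(1970, 1, 1, tzinfo=beijing).timestamp())  # -28800.0

    # Code that treats 0 or a negative timestamp as "missing" or an error is
    # one plausible way a 1/1/1970 setting could wedge a device at boot.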

Not all iOS devices are affected; see http://www.komando.com/happening-now/347426/ios-date-bug-kills-iphone-and-ipad for the models affected.

Imagine that, bricking an iOS device with a date setting.

What will Apple think of next?

It may be the case that no one could have guessed this would be an issue.

However, a January 1, 1970 date entry that bricks any device sold after 12 February 2016 should be treated as a case of strict liability against the manufacturer. At a minimum.

February 11, 2016

UK Parliament Reports on the Draft Investigatory Powers Bill

Filed under: Cybersecurity,Government,Privacy,Security — Patrick Durusau @ 7:32 pm

I have stumbled on several “news” reports about the Investigatory Powers Bill in the UK.

Reports from committees in Parliament have started appearing, but are those reports linked in breathless accounts of the horrors of the Investigatory Powers bill?

You already know the answer to that question!

I did find UK surveillance bill condemned by a Parliamentary committee, for the third time by Cory Doctorow, which pointed to the Joint Select Committee recommendations for changes in the IP Bill.

For two other reports, Cory relied on secondhand reporting in Wired.co.uk, which quoted many sources but failed to link to the reports themselves.

To get you started with the existing primary criticisms of the Investigatory Powers Bill:

There was a myth the Internet (later the WWW) would provide greater access to information, along the lines of the Memex.

Deep information is out there and when you find it, please insert a link to it.

You and everyone who reads your paper, post, tweet, etc. will be better off for it.

February 10, 2016

“Butts In Seats” Management At The FBI

Filed under: Cybersecurity,Government,Security — Patrick Durusau @ 5:48 pm

The FBI Wants $38 More Million to Buy Encryption-Breaking Technology by Lorenzo Franceschi-Bicchierai.

From the post:

For more than a year, FBI Director James Comey has been publicly complaining about how much of a hard time his agents, as well as local and state cops, are having when they encounter encryption during their investigations.

Now, the FBI is asking for more money to break encryption when needed.

In its budget request for next year, the FBI asked for $38.3 more million on top of the $31 million already requested last year to “develop and acquire” tools to get encrypted data, or to unmask internet users who hide behind a cloak of encryption. This money influx is designed to avoid “going dark,” an hypothetical future where the rise of encryption technologies make it impossible for cops and feds to track criminal suspects, or to access and intercept the information or data they need to solve crimes and investigations.

Great story, and the total requested by the FBI adds up to $69.3 million.

From further in the post:


For Julian Sanchez, one of the authors of a recent report on going dark, which concluded that technology is actually helping law enforcement, rather than hindering it, is skeptical that the FBI even needs all this money.

“$38.3 million is a hefty chunk of change to dole out for a ‘problem’ the FBI has so steadfastly refused to publicly quantify in any meaningful way,“ he told me. “First let’s see some hard numbers about how often encryption is a serious obstacle to investigations and what the alternatives are; then maybe we’ll be in a position to know how much it’s reasonable to spend addressing the issue.“

But to be fair to Director Comey, there isn’t a metric in the possession of the FBI (or anyone else) that would justify spending one dollar on breaking encryption any more than it would justify $1 million or $1 billion.

Those numbers simply don’t exist. How do we know that?

I’m willing to concede that the publicists for the FBI are probably dishonest but they’re not stupid.

If there were any evidence, even evidence that had to be perverted to support the case for encryption-breaking research, it would be on a flashing banner on the FBI website.

What you are seeing from Director Comey is a “butts in seats” management style.

The more “butts” you can get into seats, yours or contractors’, the greater the prestige of your department and the patronage you can dispense. You may think those are not related to the mission of the department.

You would be right but so what? What made you think that appropriations have any relationship to the mission of the department? The core mission of the department is to survive and increase its influence. Mission is something you put on flyers. Nothing more.

I don’t mean to denigrate the heads-down staff, doing their jobs as best they can even with interventions from their political masters, but they aren’t the ones who set policy or waste funds on “butts in seats” management plans.

Congress needs to empower inspectors general and the Government Accountability Office to vet agency budget proposals prior to submission to Congress. Unsubstantiated items should be deleted from those requests and not restored by Congress during the budgetary process.

It’s called evidence based management for anyone unfamiliar with the practice.

Bringing the CIA to Heel

Filed under: Government,Politics,Security — Patrick Durusau @ 3:19 pm

Cory Doctorow reports in CIA boss flips out when Ron Wyden reminds him that CIA spied on the Senate, that John Brennan, CIA Director, had a tantrum when asked about the CIA spying on the Senate Select Committee on Intelligence.

See Cory’s post for the details and an amusing video of the incident.

The easiest way to bring the CIA to heel is for the Senate to publicly release all classified documents that come into its possession and to de-criminalize leaks from any U.S. government agency.

Starting at least with the Pentagon Papers and probably before, every government leak has demonstrated the incompetence and fundamental dishonesty of members of the U.S. government.

We should stop even pretending to listen to the fanciful list of horrors that “will result” if classified information is leaked.

Some post-9/11 torturers might face retribution but since U.S. authorities won’t pursue the cases, is that a bad thing?

I am untroubled by the claims, “…but we did it for you/country/flag….” That is as self-serving as “…orders are orders….” And to my mind, just as reprehensible.

February 9, 2016

Not-so-secret atomic tests:… […how an earlier era viewed citizens’ rights and safety.]

Filed under: Government,Politics,Security — Patrick Durusau @ 8:25 pm

Not-so-secret atomic tests: Why the photographic film industry knew what the American public didn’t by Tim Barribeau.

From the post:

It’s one of the dark marks of the U.S. Government in the 20th century — a complete willingness to expose unwitting citizens to dangerous substances in the name of scientific advancement. It happened with the Tuskegee syphilis experiment, with the MKUltra mind control project and with the atomic bomb testing of the 1940s and 50s. The Atomic Energy Commission (AEC) knew that dangerous levels of fallout were being pumped into the atmosphere, but didn’t bother to tell anyone. Well, anyone except the photographic film industry, that is.

Photographic film is particularly radiosensitive — that’s the reason why you see dosimeters made from the stuff, as they can be used to detect gamma, X-ray and beta particles. But in 1946, Kodak customers started complaining about film they had bought coming out fogged.


Kodak complained to the Atomic Energy Commission and that Government agency agreed to give Kodak advanced information on future tests, including ‘expected distribution of radioactive material in order to anticipate local contamination.’

In fact, the Government warned the entire photographic industry and provided maps and forecasts of potential contamination. Where, I ask, were the maps for dairy farmers? Where were the warnings to parents of children in these areas? So here we are, Mr. Chairman. The Government protected rolls of film, but not the lives of our kids. There is something wrong with this picture.

Senator Harkin’s remarks about dairy farms and children reveals the dark side of this story. It’s not enough that the AEC was knowingly releasing fallout into American skies, but that one of the side effects they were aware of was that it could enter the food supply, and potentially cause long term health problems. The I-131 would fall on the ground, be eaten by cattle through radioactive feed, and through their milk, be passed on to the public. Your thyroid needs iodine to function, so it builds up stores of iodine from the environment, and high concentrations of I-131 are directly linked to higher risks of radiogenic thyroid cancer — especially from exposure during childhood. And that’s exactly what happened to thousands of American children.

It turns out there’s a relatively easy way to prevent thyroid cancer after exposure to I-131 — standard iodine supplements will do. But if you’re unaware of the fallout, you wouldn’t know to take the countermeasure. The atmospheric tests have been linked to up to 75,000 cases of thyroid cancer in the U.S. alone. To this day, the National Cancer Institute runs a program to help people identify if they were exposed, and between 1951 and 1962, it was an awful lot of people.

[Image: radiation exposure map]

If the story weren’t disturbing enough, consider the closing note from the editor:

[Ed. note: This piece ranges far from our normal digital photography fare, but we found it an interesting historical note on a moment in time when the photo industry, military development and public health all intersected, and on how an earlier era viewed citizens’ rights and safety.]

Really?

The atomic test piece was published in 2013.

In 2015/16, it was discovered that the entire city of Flint, Michigan had been deliberately poisoned by its state government. New information is appearing on a daily basis as the crisis continues.

The present era has little concern for citizens, their rights and safety. If you don’t believe that, consider all the reports of bad water elsewhere that have begun to surface. Mark Ruffalo: We’re Heading Toward a National Water Crisis.

To demonstrate her lack of concern for the citizens of Flint, Hillary Clinton wants to incorporate them in the planning of the recovery process. To “empower” them.

Pure BS. Every citizen in Flint wants potable drinking water and safe water for their families to use for bathing, laundry, etc. Empowerment isn’t going to do any of those things.

Let’s stop harming people first and play the privilege/power shell game later, if we have to play it at all.

Perjurer’s Report: Worldwide Threat Assessment…

Filed under: Government,Security — Patrick Durusau @ 7:35 pm

Worldwide Threat Assessment of the US Intelligence Community by James R. Clapper, Director of National Intelligence.

From the introduction:

Chairman Burr, Vice Chairman Feinstein, Members of the Committee, thank you for the invitation to offer the United States Intelligence Community’s 2016 assessment of threats to US national security. My statement reflects the collective insights of the Intelligence Community’s extraordinary men and women, whom I am privileged and honored to lead. We in the Intelligence Community are committed every day to provide the nuanced, multidisciplinary intelligence that policymakers, warfighters, and domestic law enforcement personnel need to protect American lives and America’s interests anywhere in the world.

The order of the topics presented in this statement does not necessarily indicate the relative importance or magnitude of the threat in the view of the Intelligence Community.

Information available as of February 3, 2016 was used in the preparation of this assessment.

You may remember that in March of 2013, Director Clapper deliberately perjured himself before this self-same committee.

It’s entirely possible that some truths appear in the assessment Clapper presented, but those are either inadvertent or appear where a lie could not improve the story.

One of the difficulties of government agents lying when it suits their purposes, is that other members of government and/or the public have no means to distinguish self-serving lies from an occasional truth.

If your interests are served by the threat assessment, make what use of it you will, being mindful that leaks may suddenly discredit both it and any proposal you advance based upon it.

$19 Billion in “Protection Money” and Not One Incentive For Secure Code

Filed under: Cybersecurity,Government,Security — Patrick Durusau @ 6:19 pm

Protecting U.S. Innovation From Cyberthreats by Barack Obama (current President of the United States).

From the statement:

More than any other nation, America is defined by the spirit of innovation, and our dominance in the digital world gives us a competitive advantage in the global economy. However, our advantage is threatened by foreign governments, criminals and lone actors who are targeting our computer networks, stealing trade secrets from American companies and violating the privacy of the American people.

Networks that control critical infrastructure, like power grids and financial systems, are being probed for vulnerabilities. The federal government has been repeatedly targeted by cyber criminals, including the intrusion last year into the Office of Personnel Management in which millions of federal employees’ personal information was stolen. Hackers in China and Russia are going after U.S. defense contractors. North Korea’s cyberattack on Sony in 2014 destroyed data and disabled thousands of computers. With more than 100 million Americans’ personal data compromised in recent years—including credit-card information and medical records—it isn’t surprising that nine out of 10 Americans say they feel like they’ve lost control of their personal information.

These cyberthreats are among the most urgent dangers to America’s economic and national security. That’s why, over the past seven years, we have boosted cybersecurity in government—including integrating and quickly sharing intelligence about cyberthreats—so we can act on threats even faster. We’re sharing more information to help companies defend themselves. We’ve worked to strengthen protections for consumers and students, guard the safety of children online, and uphold privacy and civil liberties. And thanks to bipartisan support in Congress, I signed landmark legislation in December that will help bolster cooperation between government and industry.

That’s why, today, I’m announcing our new Cybersecurity National Action Plan, backed by my proposal to increase federal cybersecurity funding by more than a third, to over $19 billion. This plan will address both short-term and long-term threats, with the goal of providing every American a basic level of online security.

First, I’m proposing a $3 billion fund to kick-start an overhaul of federal computer systems. It is no secret that too often government IT is like an Atari game in an Xbox world. The Social Security Administration uses systems and code from the 1960s. No successful business could operate this way. Going forward, we will require agencies to increase protections for their most valued information and make it easier for them to update their networks. And we’re creating a new federal position, Chief Information Security Officer—a position most major companies have already adopted—to drive these changes across government.

The Social Security Administration is no doubt running systems and code from the 1960s, which is no doubt why you so seldom hear its name in data breach stories.

Social Security Numbers, sure, those flooded from the Office of Personnel Management, but that wasn’t the fault of the Social Security Administration.

To be fair, the SSA has experienced data breaches, but self-inflicted ones like leaking information on 14,000 “live” people in a list of 90 million deceased Americans.

In case you are wondering, in round numbers that means SSA staff made an error in roughly 0.0156% of the records in question (14,000 out of 90 million).

I should be so careful! So should you! 😉

That’s a quite remarkably low error rate. Consider that a batter is “hot” if they hit more than 3 times out of 10.
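If you want to check my arithmetic, here is the back-of-the-envelope calculation, using the 14,000-in-90-million figures above:

    # 14,000 "live" people leaked out of a list of roughly 90 million deceased Americans
    error_rate = 14_000 / 90_000_000 * 100
    print(f"{error_rate:.4f}%")  # 0.0156%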

Sorry, back to the main story.

President Obama’s “protection money” will delay the onset of incentives for producing secure code and systems.

Following the money, vendors/contractors will pursue strategies that layer more insecure code on top of already insecure code. After all, that’s what the President is paying for and that’s what he is going to get.

Pay close attention to any attempt to “upgrade” the information systems at the Social Security Administration. The net effect will be to bring the SSA to a modern level of insecurity.

The more code produced by the Cybersecurity National Action Plan, the more attack surfaces for hackers.

There is an upside to the President’s plan.

The surplus of hacking opportunities will doom some hackers to cycles of indecision and partial hacks. They will jump from one breach story to another.

How to calculate an ROI on surplus hacking opportunities isn’t clear. Suggestions?

February 8, 2016

The 2016 cyber security roadmap – [Progress on Security B/C of Ransomware?]

Filed under: Cybersecurity,Security — Patrick Durusau @ 5:25 pm

The 2016 cyber security roadmap by Chloe Green.

From the post:

2014 was heralded as the ‘year of the data breach’ – but we’d seen nothing yet. From unprecedented data theft to crippling hacktivism attacks and highly targeted state-sponsored hacks, 2015 has been the bleakest year yet for the cyber security of businesses and organisations.

High profile breaches at Ashley Madison, TalkTalk and JD Wetherspoons have brought the protection of personal and enterprise data into the public consciousness.

In the war against cybercrime, companies are facing off against ever more sophisticated and crafty approaches, while the customer data they hold grows in value, and those that fail to protect it find themselves increasingly in the media and legislative spotlight with nowhere to hide.

We asked a panel of leading industry experts to highlight the major themes for enterprise cyber security in 2016 and beyond.

There isn’t a lot of comfort coming from industry experts these days: some advice on mitigation strategies and a warning that ransomware is about to come into its own in 2016. I believe the phrase was “…corporate and not consumer rates…” for ransoms.

A surge in ransomware may be a good thing for the software industry. It would attach a cost to insecure software and practices.

When ransomware extracts commercially unacceptable costs from users of software, users will demand better software from developers.

Financial incentives all the way around. Incentives for hackers to widely deploy ransomware, incentives for software users to watch their bottom line and, last but not least, incentives for developers to implement more robust testing and development processes.

Ransomware may do what reams of turgid prose in journals, conference presentations, books and classrooms have failed to do. Ransomware can create financial incentives for software users to demand better software engineering and testing. Not to mention liability for defects in software.

Faced with financial demands, the software industry will be forced to adopt better software development processes. Those unable to produce sufficiently secure (no software being perfect) software will collapse under the weight of falling sales or liability litigation.

Hackers will be forced to respond to improvements in software quality, for their own financial gain, creating a virtuous circle of improving software security.

Governments Race To Bottom On Privacy Rights

Filed under: Government,Privacy,Security — Patrick Durusau @ 2:30 pm

British spies want to be able to suck data out of US Internet giants by Cory Doctorow.

Cory points out that a recent US/UK agreement subjects U.S. citizens to surveillance under British laws that no one understands and that don’t require even a fig leaf of judicial approval.

The people of the United States fought one war to free themselves of arbitrary and capricious British rule. Declaration of Independence.

Is the stage being set for a war to enforce the constitution that resulted from the last war the United States waged against the UK?

February 4, 2016

Beating Body Scanners

Filed under: Government,Security — Patrick Durusau @ 9:10 pm

Just on the off chance that some government mandates wholly ineffectual full body scanners for security purposes, Jonathan Corbett has two videos that demonstrate the ease with which such scanners can be defeated, completely.

Oh, I forgot, the US government has mandated such scanners!

Jonathan maintains a great site at: http://professional-troublemaker.com/. You can follow him @_JonCorbett.

Jon is right about the scanners being ineffectual but being effective wasn’t part of the criteria for purchasing the systems. Scanners were purchased to give the impression of frenzied activity, even if it was totally ineffectual.

What would happen if a terrorist did attack an airport, through one of the hundreds of daily lapses in security? What would the government say if it weren’t engaged in non-stop but meaningless activity?

Someone would say, falsely, that it was inaction on the part of the government that enabled the attack.

Stuff and nonsense.

“Terrorist” attacks, actually violence committed by criminals under another name, can and will happen no matter what measures are taken by the government, short of an all-nude policy beginning at the perimeter of the airport and a prohibition on shipping anything larger than a clear quart zip-lock bag, whether with passengers or as cargo.

Even then it isn’t hard to imagine several dozen ways to carry out “terrorist” attacks at any airport.

The sooner government leaders begin to educate their citizens that some risks are simply unavoidable, the sooner money can stop being wasted on visible but ineffectual efforts like easily defeated body scanners.

Comodo Chromodo browser – Danger! Danger! – Discontinue Use

Filed under: Cybersecurity,Security — Patrick Durusau @ 8:19 pm

Comodo Chromodo browser does not enforce same origin policy and is based on an outdated version of Chromium

From the overview:

Comodo Chromodo browser, version 45.8.12.392, 45.8.12.391, and possibly earlier, does not enforce same origin policy, which allows for the possibility of cross-domain attacks by malicious or compromised web hosts. Chromodo is based on an outdated release of Chromium with known vulnerabilities.

Solution

The CERT/CC is currently unaware of a practical solution to this problem and recommends the following workarounds.

Disable JavaScript

Disabling JavaScript may mitigate cross-domain scripting attacks. For instructions, refer to Comodo’s help page.

Note that disabling JavaScript may not protect against known vulnerabilities in the version of Chromium on which Chromodo is based. For this reason, users should prioritize implementing the following workaround.

Discontinue use

Until these issues are addressed, consider discontinuing use of Chromodo.
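If you are wondering what the “same origin policy” is: browsers are supposed to treat two URLs as the same origin only when scheme, host, and port all match, and to keep scripts from one origin from reading content belonging to another. A rough sketch of that comparison (my illustration, not Chromium’s code):

    from urllib.parse import urlsplit

    DEFAULT_PORTS = {"http": 80, "https": 443}

    def origin(url):
        """Reduce a URL to the (scheme, host, port) triple used for origin checks."""
        parts = urlsplit(url)
        port = parts.port or DEFAULT_PORTS.get(parts.scheme)
        return (parts.scheme, parts.hostname, port)

    def same_origin(a, b):
        return origin(a) == origin(b)

    print(same_origin("https://bank.example.com/account", "https://bank.example.com/api"))  # True
    print(same_origin("https://bank.example.com/account", "https://evil.example.net/"))     # False

    # A browser that skips this check lets a script served from evil.example.net
    # read responses from bank.example.com -- the cross-domain attack the advisory describes.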

Discontinue use is about as extreme a workaround as I can imagine.

Too bad the Comodo site doesn’t say anything about refunds and/or compensation for damaged customers.

Would you say that without any penalty, there is no incentive for Comodo to produce better software?

Or to put it differently, where is the downside to Comodo producing buggy software?

Where does that impact their bottom line?

I first saw this in a tweet by SecuriTay.

SQL Injection Hall-Of-Shame / Internet-of-Things Hall-Of-Shame

Filed under: Cybersecurity,Security — Patrick Durusau @ 4:13 pm

SQL Injection Hall-Of-Shame by Arthur Hicken.

From the webpage:

In this day and age it’s ridiculous how frequently large organizations are falling prey to SQL Injection which is almost totally preventable as I’ve written previously.

Note that this is a work in progress. If I’ve missed something you’re aware of please let me know in the comments at the bottom of the page.

Don’t let this happen to you! For some simple tips see the OWASP SQL Injection Prevention Cheat Sheet. For more security info check out the security resources page and the book SQL Injection Attacks and Defense or Basics of SQL injection Analysis, Detection and Prevention: Web Security for more info.

IOT HALL-OF-SHAME

With the rise of internet enabled devices in the Internet of Things or IoT the need for software security is becoming even more important. Unfortunately many device makers seem to put security on the back burner or not even understand the basics of cybersecurity.

I am maintaining here a list of known hacks for “things”. The list is short at the moment but will grow, and is often more generic than it could be. It’s kind of in reverse-chronological order, based on the date that the hack was published. Please assist – if you’re aware of additional thing-hacks please let me know in the comments at the bottom of the page.
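Since the quoted page calls SQL injection “almost totally preventable,” here is the textbook prevention in miniature, using Python’s sqlite3 purely as a stand-in for whatever database you actually run:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

    user_input = "' OR '1'='1"  # a classic injection payload

    # Vulnerable: the payload is spliced into the SQL text and changes its meaning.
    vulnerable = "SELECT * FROM users WHERE name = '" + user_input + "'"
    print(conn.execute(vulnerable).fetchall())  # returns every row

    # Safe: a parameterized query treats the payload as data, never as SQL.
    safe = "SELECT * FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # returns []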

I assume you find “wall-of-shame” efforts as entertaining as I do.

I am aware of honor-shame debates from a biblical studies perspective, on which see: Complete Bibliography of Honor-Shame Resources

“Complete” is a relative term when used regarding any bibliography in biblical studies and this appears to have at least one resource from 2011, but none later. You can run the references forward to collect more recent literature.

But the question with shaming techniques is: are they effective?

As a case in point, consider Researchers find it’s terrifyingly easy to hack traffic lights where the post points out:

In fact, the most upsetting passage in the entire paper is the dismissive response issued by the traffic controller vendor when the research team presented its findings. According to the paper, the vendor responsible stated that it “has followed the accepted industry standard and it is that standard which does not include security.”

We can entertain ourselves by shaming vendors all day but only the “P” word will drive greater security.

“P” as in penalty.

Vormetric found that to be the case in What Drives Compliance? Hint: The P Word Missing From Cybersecurity Discussions.

Be entertained by wall-of-shame efforts but lobby for compliance enforced by penalties. (Known to anthropologists as a fear culture.)

February 3, 2016

The Dark Web, “Kissing Cousins,” and Pornography

Filed under: Cybersecurity,Government,Privacy,Security,Tor — Patrick Durusau @ 7:57 pm

Dark web is mostly illegal, say researchers by Lisa Vaas.

You can tell where Lisa comes out on the privacy versus law enforcement issue by the slant of her conclusion:

Users, what’s your take: are hidden services worth the political firestorm they generate? Are they worth criminals escaping justice?

Illegal is a slippery concept.

Marriage of first “kissing” cousins is “illegal” in:

Arkansas, Delaware, Idaho, Iowa, Kansas, Kentucky, Louisiana, Michigan, Minnesota, Mississippi, Missouri, Montana, Nebraska, Nevada, New Hampshire, North Dakota, Ohio, Oklahoma, Oregon, Pennsylvania, South Dakota, Texas, Washington, West Virginia, and Wyoming.

Marriage of first “kissing” cousins is legal in:

Alabama, Alaska, California, Colorado, Connecticut, District of Columbia, Florida, Georgia, Hawaii, Maryland, Massachusetts, New Jersey, New Mexico, New York, North Carolina (first cousins but not double first), Rhode Island, South Carolina, Tennessee, Vermont, and Virginia.

There are some other nuances I didn’t capture and for those see: State Laws Regarding Marriages Between First Cousins.

If you read Cryptopolitik and the Darknet by Daniel Moore & Thomas Rid carefully, you will spot a number of problems with their methodology and reasoning.

First and foremost, no definitions were offered for their taxonomy (at page 20):

  • Arms
  • Drugs
  • Extremism
  • Finance
  • Hacking
  • Illegitimate pornography
  • Nexus
  • Other illicit
  • Social
  • Violence
  • Other
  • None

Readers and other researchers are left to wonder what was included or excluded from each of those categories.

In science, that would be called an inability to replicate the results. As if this were science.

Moore & Rid recite anecdotal accounts of particular pornography sites, calculated to shock the average reader, but that’s not the same thing as enabling replication of their research. Or a fair characterization of all the pornography encountered.

They presumed that text was equivalent to image content, so they discarded all images (pages 19-20). Which left them unable to test that presumption. Hmmm, untested assumptions in science?

Their undefined classification scheme identified 122 sites (page 21) as pornographic out of the initial set of 5,205 sites.

If you accept Tor’s estimate of 30,000 hidden services that announce themselves every day, Moore & Rid have found that illegal pornography (whatever that means) is:

122 / 30000 = 0.004066667, i.e. about 0.41%

Moore & Rid have established that “illegal” porn is roughly 0.41% of the Dark Net.

I should be grateful Moore & Rid have so carefully documented the tiny part of the Dark Web concerned with their notion of “illegal” pornography.

But, when you encounter “reasoning” such as:


The other quandary is how to deal with darknets. Hidden services have already damaged Tor, and trust in the internet as a whole. To save Tor – and certainly to save Tor’s reputation – it may be necessary to kill hidden services, at least in their present form. Were the Tor Project to discontinue hidden services voluntarily, perhaps to improve the reputation of Tor browsing, other darknets would become more popular. But these Tor alternatives would lack something precious: a large user base. In today’s anonymisation networks, the security of a single user is a direct function of the number of overall users. Small darknets are easier to attack, and easier to de-anonymise. The Tor founders, though exceedingly idealistic in other ways, clearly appreciate this reality: a better reputation leads to better security.85 They therefore understand that the popularity of Tor browsing is making the bundled-in, and predominantly illicit, hidden services more secure than they could be on their own. Darknets are not illegal in free countries and they probably should not be. Yet these widely abused platforms – in sharp contrast to the wider public-key infrastructure – are and should be fair game for the most aggressive intelligence and law-enforcement techniques, as well as for invasive academic research. Indeed, having such clearly cordoned-off, free-fire zones is perhaps even useful for the state, because, conversely, a bad reputation leads to bad security. Either way, Tor’s ugly example should loom large in technology debates. Refusing to confront tough, inevitable political choices is simply irresponsible. The line between utopia and dystopia can be disturbingly thin. (pages 32-33)

it’s hard to say nothing while public discourse is soiled with this sort of publication.

First, there is no evidence presented that hidden services have damaged Tor and/or trust in the Internet as a whole. Even the authors concede that Tor is the most popular option for anonymous browsing and hidden services. That doesn’t sound like damage to me. You?

Second, the authors dump all hidden services in the “bad, very bad” basket, despite their own research classifying only about 0.41% of the Dark Net as illicit pornography. They use stock “go to” examples to shock readers in place of evidence and reasoning.

Third, the charge that Tor has “[r]efused to confront tough, inevitable political choices is simply irresponsible” is false. Demonstrably false because the authors point out that Tor developers made a conscious choice to not take political considerations into account (page 25).

Since Moore & Rid disagree with that choice, they resort to name calling, terming the decision “simply irresponsible.” Moore & Rid are entitled to their opinions but they aren’t going to persuade even a semi-literate audience with name calling.

Take Cryptopolitik and the Darknet as an example of how to not write a well researched and reasoned paper. Although, that isn’t a bar to publication as you can see.

They are deadly serious about crypto backdoors [And of the CIA and Chinese Underwear]

Filed under: Cryptography,Cybersecurity,Government,Security — Patrick Durusau @ 3:25 pm

They are deadly serious about crypto backdoors by Robert Graham.

From the post:

Julian Sanchez (@normative) has an article questioning whether the FBI is serious about pushing crypto backdoors, or whether this is all a ploy pressuring companies like Apple to give them access. I think they are serious — deadly serious.

The reason they are only half-heartedly pushing backdoors at the moment is that they believe we, the opposition, aren’t serious about the issue. After all, the 4rth Amendment says that a “warrant of probable cause” gives law enforcement unlimited power to invade our privacy. Since the constitution is on their side, only irrelevant hippies could ever disagree. There is no serious opposition to the proposition. It’ll all work itself out in the FBI’s favor eventually. Among the fascist class of politicians, like the Dianne Feinsteins and Lindsay Grahams of the world, belief in this principle is rock solid. They have absolutely no doubt.

But the opposition is deadly serious. By “deadly” I mean this is an issue we are willing to take up arms over. If congress were to pass a law outlawing strong crypto, I’d move to a non-extradition country, declare the revolution, and start working to bring down the government. You think the “Anonymous” hackers were bad, but you’ve seen nothing compared to what the tech community would do if encryption were outlawed.

On most policy questions, there are two sides to the debate, where reasonable people disagree. Crypto backdoors isn’t that type of policy question. It’s equivalent to techies what trying to ban guns would be to the NRA.

What he says.

Crypto backdoors are a choice between a policy that benefits the government at the expense of everyone else (crypto backdoors) and a policy that benefits everyone at the expense of the government (no crypto backdoors). It’s really that simple.

When I say crypto backdoors benefit the government, I mean that quite literally. Collecting data, via crypto backdoors and otherwise, enables government functionaries to pretend to be engaged in meaningful responses to serious issues.

Collecting and shoveling data from desk to desk is about as useless an activity as can be imagined.

Basis for that claim? Glad you asked!

If you haven’t read: Chinese Underwear and Presidential Briefs: What the CIA Told JFK and LBJ About Mao by Steve Usdin, do so.

Steve covers the development of the “presidential brief” and its long failure to provide useful information about China and Mao in particular. The CIA long opposed declassification of historical presidential briefs based on the need to protect “sources and methods.”

The presidential briefs for the Kennedy and Johnson administrations have been released and here is what Steve concludes:

In any case, at least when it comes to Mao and China, the PDBs released to date suggest that the CIA may have fought hard to keep these documents secret not to protect “sources and methods,” but rather to conceal its inability to recruit sources and failure to provide sophisticated analyses.

Past habits of the intelligence community explain rather well why they have no, repeat, no examples of how strong encryption has interfered with national security. There are none.

The paranoia about “crypto backdoors” is another way to engage in “known to be useless” action. It puts butts in seats and inflates agency budgets.


Unlike Robert, should Congress ban strong cryptography, I won’t be moving to a non-extradition country. Some of us need to be here when local police come to their senses and defect.

February 2, 2016

Google to deliver wrong search results to would-be jihadis [and gays, unwed mothers, teenagers, Muslims]

Filed under: Censorship,Government,Privacy,Security — Patrick Durusau @ 8:52 pm

Google to deliver wrong search results to would-be jihadis by David Barrett.

From the post:

Jihadi sympathisers who type extremism-related words into Google will be shown anti-radicalisation links instead, under a pilot scheme announced by the internet giant.

The new technology means people at risk of radicalisation will be presented with internet links which are the exact opposite of what they were searching for.

Dr Anthony House, a senior Google executive, revealed the pilot scheme in evidence to MPs scrutinising the role of internet companies in combating extremism.

It isn’t hard to see where this slippery road leads.

If any of the current Republican candidates are elected to the U.S. presidency, Google will:

Respond to gay sex or gay-related searches with links for praying yourself straight.

Unwed mothers requesting abortion services will have their personal information forwarded to right-to-birth organizations and will be sent graphic anti-abortion images by email.

Teenagers seeking birth control advice will only see – Abstinence or Hell!

Muslims, well, unless Trump has deported all of them, will see anti-Muslim links.

Unlike governments, whose bad decisions are often badly implemented, Google can effectively implement demented schemes such as this one.

Censoring search results to favor any side, policy, or position is just that: censorship.

If you forfeit the rights of others, you have no claim to rights yourself.

Your call.

“A little sinister!!” (NRO’s Octopus Logo)

Filed under: Government,Security — Patrick Durusau @ 8:14 pm

“A little sinister!!” The story behind National Reconnaissance Office’s octopus logo by JPat Brown.

From the post:

When the National Reconnaissance Office (NRO) announced the upcoming launch of their NROL-39 mission back in December 2013, they didn’t get quite the response they had hoped.

[Image: NROL-39 mission logo]

That might have had something to do with the mission logo being a gigantic octopus devouring the Earth.

The logo was widely lampooned as emblematic of the intelligence community’s tone-deafness to public sentiment. Incidentally, an octopus enveloping the planet also so happens to be the logo of SPECTRE, the international criminal syndicate that James Bond is always thwarting. So there’s that.

Privacy and security researcher Runa Sandvik wanted to know who approved this and why, so she filed a FOIA with the NRO for the development materials that went into the logo. A few months later, the NRO delivered.

This is a great read and one you need to save to your local server. Especially for days when you think the U.S. government is conspiring against its citizens. It should be so well-organized.

All sorts of government outrages are the product of the same decision-making process as this lame-looking octopus.

At the very least they could have gotten John Romita Jr. to do something a bit more creative:

[Image: Doctor Octopus. Fair use.]

More than “a little sinister” but why not be honest?

January 31, 2016

Danger of Hackers vs. AI

Filed under: Artificial Intelligence,Cybersecurity,Security — Patrick Durusau @ 9:11 pm

An interactive graphical history of large data breaches by Mark Gibbs.

From the post:

If you’re trying to convince your management to beef up the organization’s security to protect against data breaches, an interactive infographic from Information Is Beautiful might help.

Built with IIB’s forthcoming VIZsweet data visualization tools, the World’s Biggest Data Breaches visualization combines data from DataBreaches.net, IdTheftCentre, and press reports to create a timeline of breaches that involved the loss of 30,000 or more records (click the image below to go to the interactive version). What’s particularly interesting is that while breaches were caused by accidental publishing, configuration errors, inside job, lost or stolen computer, lost or stolen media, or just good old poor security, the majority of events and the largest, were due to hacking.

Make sure the powers that be understand that you don’t have to be a really big organization for a serious data breach to happen.

See Mark’s post for the image and link to the interactive graphic.
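
If you would rather make the case with numbers than with an infographic, the same filtering is trivial to reproduce. Here is a minimal sketch in Python, assuming a hypothetical CSV export with columns year, organisation, records_lost, and method; those column names are illustrative, not the actual layout used by Information Is Beautiful.

```python
import csv
from collections import Counter

def records_lost_by_method(path, threshold=30_000):
    """Sum records lost per breach method, counting only breaches at or
    above the threshold (30,000 records, matching the visualization).

    Assumes a hypothetical CSV with columns: year, organisation,
    records_lost, method.
    """
    totals = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            try:
                lost = int(row["records_lost"].replace(",", ""))
            except (KeyError, ValueError, AttributeError):
                continue  # skip rows with missing or malformed counts
            if lost >= threshold:
                totals[row.get("method", "unknown")] += lost
    return totals

if __name__ == "__main__":
    for method, records in records_lost_by_method("breaches.csv").most_common():
        print(f"{method}: {records:,} records")
```

Run that against whatever breach data you can export and, per the article, the “hacked” bucket dominates both the count of events and the largest ones.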

Hackers (human intelligence) are kicking cybersecurity’s ass 24 x 7.

The danger of AI (artificial intelligence)? Maybe, someday, it might be a problem, but we don’t know whether, when, or to what extent.

What priority do you assign these issues in your IT budget?

If you said hackers are #1, congratulations! You have an evidence-based IT budgeting process.

Otherwise, well, see you at DragonCon. I’m sure you will have lots of free time when you aren’t in the unemployment line.

PS: Heavy spending on what is mis-labeled as “artificial intelligence” is perfectly legitimate. Think of it as training computers to do tasks humans can’t do or that machines can do more effectively. Calling it AI loads it with unnecessary baggage.

January 28, 2016

Federal Cybersecurity Egg Roll Event Continues (DHS)

Filed under: Cybersecurity,Security — Patrick Durusau @ 4:26 pm

If you remember my posts, “Cybersecurity Sprint or Multi-Year Egg Roll?” from last June (2015), and Fed Security Sprint – Ans: Multi-Year Egg Roll (Nov. 2015), there is further confirmation of the projected duration of the egg roll from the GAO.

The GAO report in question is DHS Needs to Enhance Capabilities, Improve Planning, and Support Greater Adoption of Its National Cybersecurity Protection System.

The executive summary prepares the reader for 61 pages of grim reading:

The Department of Homeland Security’s (DHS) National Cybersecurity Protection System (NCPS) is partially, but not fully, meeting its stated system objectives:

  • Intrusion detection: NCPS provides DHS with a limited ability to detect potentially malicious activity entering and exiting computer networks at federal agencies. Specifically, NCPS compares network traffic to known patterns of malicious data, or “signatures,” but does not detect deviations from predefined baselines of normal network behavior. In addition, NCPS does not monitor several types of network traffic and its “signatures” do not address threats that exploit many common security vulnerabilities and thus may be less effective.
  • Intrusion prevention: The capability of NCPS to prevent intrusions (e.g., blocking an e-mail determined to be malicious) is limited to the types of network traffic that it monitors. For example, the intrusion prevention function monitors and blocks e-mail. However, it does not address malicious content within web traffic, although DHS plans to deliver this capability in 2016.
  • Analytics: NCPS supports a variety of data analytical tools, including a centralized platform for aggregating data and a capability for analyzing the characteristics of malicious code. In addition, DHS has further enhancements to this capability planned through 2018.
  • Information sharing: DHS has yet to develop most of the planned functionality for NCPS’s information-sharing capability, and requirements were only recently approved. Moreover, agencies and DHS did not always agree about whether notifications of potentially malicious activity had been sent or received, and agencies had mixed views about the usefulness of these notifications. Further, DHS did not always solicit—and agencies did not always provide—feedback on them.

In addition, while DHS has developed metrics for measuring the performance of NCPS, they do not gauge the quality, accuracy, or effectiveness of the system’s intrusion detection and prevention capabilities. As a result, DHS is unable to describe the value provided by NCPS.

Regarding future stages of the system, DHS has identified needs for selected capabilities. However, it had not defined requirements for two capabilities: to detect (1) malware on customer agency internal networks or (2) threats entering and exiting cloud service providers. DHS also has not considered specific vulnerability information for agency information systems in making risk-based decisions about future intrusion prevention capabilities.

Federal agencies have adopted NCPS to varying degrees. The 23 agencies required to implement the intrusion detection capabilities had routed some traffic to NCPS intrusion detection sensors. However, only 5 of the 23 agencies were receiving intrusion prevention services, but DHS was working to overcome policy and implementation challenges. Further, agencies have not taken all the technical steps needed to implement the system, such as ensuring that all network traffic is being routed through NCPS sensors. This occurred in part because DHS has not provided network routing guidance to agencies. As a result, DHS has limited assurance regarding the effectiveness of the system.
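
The first bullet is worth pausing over. Signature matching and baseline anomaly detection are very different animals, and the difference is easy to see in a toy example. A minimal sketch, with made-up signatures and made-up traffic sizes, not anything NCPS actually runs:

```python
import statistics

# Toy "signatures": byte patterns known to appear in malicious payloads.
SIGNATURES = [b"cmd.exe /c", b"<script>alert(", b"\x90\x90\x90\x90"]

def signature_hit(payload: bytes) -> bool:
    """Signature-based detection: flag traffic containing a known-bad pattern.
    Anything not on the list sails through."""
    return any(sig in payload for sig in SIGNATURES)

def baseline_deviation(history, observed):
    """Baseline-based detection: score how far an observation (here, request
    size in bytes) deviates from normal behaviour. A z-score above ~3 is
    suspicious even when no signature matches."""
    mean = statistics.mean(history)
    spread = statistics.pstdev(history) or 1.0
    return abs(observed - mean) / spread

normal_sizes = [512, 480, 530, 495, 510, 520]            # made-up baseline
print(signature_hit(b"GET /index.html HTTP/1.1"))        # False: no known signature
print(signature_hit(b"POST /run?arg=cmd.exe /c del"))    # True: signature hit
print(baseline_deviation(normal_sizes, 50_000) > 3)      # True: anomalous size
```

Signature matching only catches what is already on the list; the baseline check flags the oddball request even though no signature matches, which is exactly the capability the GAO says NCPS lacks.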

The brightest part of the report is that DHS “concurred with GAO’s recommendations.”

That’s a far cry from the state of total denial at the Office of Personnel Management last year. DHS is acknowledging its problems. Whether that translates into fixing those problems remains to be seen.

(Do you know the fate of the management incompetents at OPM? Just curious which agency is having their incompetence inflicted on it now.)

I truly hate to say anything nice about the DHS but one must give the devil his due.

Unfortunately for the DHS, elected leaders don’t understand that need, desire, and importance are all non-factors in technical success. You may not like Mendelian genetics, but as Stalin discovered, you pursue other models at your own risk.

The same is true for cybersecurity.

Looking For Hidden Tear (ransomware) Source?

Filed under: Cybersecurity,Security — Patrick Durusau @ 10:18 am

Disappointed to read David Bisson advocating security through secrecy in Ransomware author tries to blackmail security researcher into taking down ‘educational’ malware project.

From the post:

The author of the Magic ransomware unsuccessfully attempted to blackmail a security researcher into taking down two open-source ‘educational’ malware projects on GitHub.

Magic, a malicious program which is written in C# and which demands 1 Bitcoin from its victims, is the second strain of ransomware discovered in January to have been built on malware that has been made available to the public for ‘educational’ purposes.

The first threat, Ransom_Cryptear.B, is based on an open-source project called Hidden Tear, which is currently hosted by Turkish security researcher Utku Sen on his GitHub page.

Whether Sen is able to recover the victims’ files without working with the ransomware author remains to be seen. However, what is abundantly clear is Sen’s foolishness in releasing ransomware code as open-source. Though such a move might have educational motives at heart, this will not stop malicious and inexperienced attackers from co-opting the ransomware code for their own purposes.

Going forward, researchers should never make ransomware code available beyond the labs where they study it. Ordinary users will surely benefit in the long run.

See David’s post for the details. My concern is his advocacy of non-publication of ransomware code.

The McAfee Labs 2016 Threats Predictions report makes it clear that “malicious and inexperienced attackers” are not the real source of concern when it comes to ransomware.

…..
In 2015 we saw ransomware-as-a-service hosted on the Tor network and using virtual currencies for payments. We expect to see more of this in 2016, as inexperienced cybercriminals will gain access to this service while staying relatively anonymous.

Although a few families—including CryptoWall 3, CTB-Locker, and CryptoLocker—dominate the current ransomware landscape, we predict that new variants of these families and new families will surface with new stealth functionalities. For example, new variants may start to silently encrypt data. These encrypted files will be backed up and eventually the attacker will pull the key, resulting in encrypted files both on the system and in the backup. Other new variants might use kernel components to hook the file system and encrypt files on the fly, as the user accesses them.
….. (at page 24)

Amateurs aren’t building “ransomware-as-a-service” sites and there’s no reason to pretend otherwise.

Moreover, the “good old boy network” of security researchers hasn’t protected anyone from ransomware if the McAfee Labs and similar reports are to be credited. If concealment of security flaws and malware were effective, there should be some evidence to that effect. Yes?

In the absence of evidence (dare I say “data”?), we should dismiss concealment as a cybersecurity strategy for what it is: utter speculation. Speculation that favors a particular class of researchers. (Can you guess their gender statistics?)

In case you are interested, the Github page for Hidden Tear now reads in part:

This project is abandoned. If you are a researcher and want the code, contact me with your university or company e-mail http://utkusen.com/en/contact.html

Well, no harm done. If you are looking for the master.zip file for Hidden Tear, check the Internet Archive: Wayback Machine, or more directly, the backup of the Hidden Tear project on 26 January 2016.

You can presume that copies have been made of the page and master.zip file, just in case something unfortunate happens to the copies at the Internet Archive: Wayback Machine.

Better software, user education, and legal action against criminals are all legitimate and effective means of combating the known problem of ransomware.

Concealing ransomware code is a form of privilege. As we all know, privilege has an unhappy record in computer programming and society in general. Don’t support it, here or anywhere.

Large-scale Conspiracies Fail On Revelation? – A Contrary Example

Filed under: Peer Review,Security — Patrick Durusau @ 8:00 am

Large-scale conspiracies would quickly reveal themselves, equations show

From the post:

While we can all keep a secret, a study by Dr David Robert Grimes suggests that large groups of people sharing in a conspiracy will very quickly give themselves away. The study is published online by journal PLOS ONE.

Dr Grimes, a physicist working in cancer research, is also a science writer and broadcaster. His profile means that he receives many communications from people who believe in science-related conspiracies. Those messages prompted him to look at whether large-scale collusions were actually tenable.

He explained: ‘A number of conspiracy theories revolve around science. While believing the moon landings were faked may not be harmful, believing misinformation about vaccines can be fatal. However, not every belief in a conspiracy is necessarily wrong — for example, the Snowden revelations confirmed some theories about the activities of the US National Security Agency.

He then looked at the maximum number of people who could take part in an intrigue in order to maintain it. For a plot to last five years, the maximum was 2521 people. To keep a scheme operating undetected for more than a decade, fewer than 1000 people can be involved. A century-long deception should ideally include fewer than 125 collaborators. Even a straightforward cover-up of a single event, requiring no more complex machinations than everyone keeping their mouth shut, is likely to be blown if more than 650 people are accomplices.
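
Those ceilings fall out of a fairly simple exposure model. Here is a minimal sketch of the general idea, assuming each participant independently leaks with a small fixed probability per year; the leak rate below is an illustrative stand-in, not Grimes’ fitted estimate, so consult the paper for his actual parameters.

```python
import math

def exposure_probability(n_people, years, p_leak_per_year=4e-6):
    """Probability that a secret shared by n_people is exposed within
    `years`, assuming each person independently leaks with probability
    p_leak_per_year each year (illustrative constant, not Grimes' value)."""
    leak_any_year = 1.0 - (1.0 - p_leak_per_year) ** n_people
    return 1.0 - math.exp(-leak_any_year * years)

# Under parameters like these, the quoted ~2,500-person ceiling for a
# five-year plot corresponds to an exposure probability of roughly 5%.
for n in (125, 650, 1000, 2521, 10_000, 100_000):
    print(f"{n:>7} people: {exposure_probability(n, years=5):.3f}")
```

Add more people and the odds of a leak climb quickly, which is the intuition behind the headline figures.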

Dr. Grimes equates revelation with “failure” of a conspiracy.

But what of conspiracies that are “revealed” yet don’t fail? Conspiracies sustained in spite of revelation of the true state of affairs.

Peer review has been discredited too often to require citation. But, for the sake of tradition: NIH grants could be assigned by lottery as effectively as the present grant process, …lotteries to pick NIH research-grant recipients, editors and peer reviewers fail to catch basic errors, Science self-corrects – instantly, and replication is a hit or miss affair, Replication in Psychology?.

There are literally thousands of examples of peer review as preached not being realized in practice. Yet every journal in the humanities and sciences, and the conferences for both, continues to practice and swear by peer review in the face of known evidence to the contrary.

Dr. Grimes fails to account for the maintenance of the peer-review conspiracy, one of the most recent outrages being the finding that falsification of research results is not misconduct (Pressure on controversial nanoparticle paper builds).

How is it that both the conspiracy and the contrary facts are revealed over and over again, yet the conspiracy attracts new adherents every year?

BTW, the conspiracy against citizens of the United States and the world continues, despite the revelations of Edward Snowden.

Perhaps revelation isn’t “failure” for a conspiracy but simply another stage in its life-cycle?

You can see this work in full at: David Robert Grimes. On the Viability of Conspiratorial Beliefs. PLOS ONE, 2016; 11 (1): e0147905 DOI: 10.1371/journal.pone.0147905.

