Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

January 27, 2016

OUCH!

Filed under: Cybersecurity,Security — Patrick Durusau @ 9:51 pm

OUCH! Security Awareness Newsletter

From the post:

“Wow! This is the first security awareness document that our users really like! Thank you, SANS”

That note came from the CISO of an 8,000 employee organization. OUCH! is the world’s leading, free security awareness newsletter designed for the common computer user. Published every month and in multiple languages, each edition is carefully researched and developed by the SANS Securing The Human team, SANS instructor subject matter experts and team members of the community. Each issue focuses on and explains a specific topic and actionable steps people can take to protect themselves, their family and their organization. OUCH! is distributed under the Creative Commons BY-NC-ND 4.0 license. You are encouraged to distribute OUCH! within your organization or share with family and friends, the only limitation is you cannot modify nor sell OUCH!.

The OUCH! newsletter and all of its translations are done by community volunteers. As such, some languages may not be available upon initial publication date, but will be added as soon as they are. Be sure to review our other free resources for security awareness programs such as presentations, posters and planning materials on our Resources Page.

You probably won’t benefit from this but may know users who will. Fairly commonplace advice.

You can subscribe to the newsletter but must have a SANS account.

Be aware your users may compare your password requirements with those at SANS:

Passwords must be at least 10 characters long and contain 5 unique characters

Passwords must also include at least one of each of the following: number, uppercase letter, lowercase letter, and special character ( ! £ $ % ^ & * ( ) @ # ? < > . )
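The SANS rules quoted above are mechanical enough to check in a few lines. A minimal sketch in Python, using the special-character set listed in the post (a real deployment would of course enforce more than this):

```python
# The special characters quoted in the SANS policy above.
SPECIALS = set("!£$%^&*()@#?<>.")

def meets_sans_rules(password: str) -> bool:
    """Check the quoted SANS rules: at least 10 characters, at least 5
    unique characters, and at least one digit, uppercase letter,
    lowercase letter, and special character."""
    if len(password) < 10:
        return False
    if len(set(password)) < 5:
        return False
    has_digit = any(c.isdigit() for c in password)
    has_upper = any(c.isupper() for c in password)
    has_lower = any(c.islower() for c in password)
    has_special = any(c in SPECIALS for c in password)
    return has_digit and has_upper and has_lower and has_special
```

Note that rules like these say nothing about guessability; "Password1!" sails through.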

I suppose that helps, but do you remember Schneier’s first book on cryptography? In the introduction he says there are two kinds of cryptography: the kind that keeps your kid sister from reading your files, and the kind that stops major governments. His book is about the latter. Words to that effect.

The SANS passwords will keep your kid sister out, at least to middle school, maybe.

PS: Best name for a newsletter I have seen in a long time. Suggestions for a topic map newsletter name?

January 26, 2016

Undocumented Admin Access – Backdoor or Feature? – You Decide

Filed under: Cybersecurity,Security — Patrick Durusau @ 11:43 am

Adrian Bridgwater uncovers a cybersecurity version of three card monte in Fortinet on SSH vulnerabilities: look, this really isn’t a backdoor, honest.

Fortinet created an undocumented method to communicate with FortiManager devices. Or in Fortinet’s own security warning:

An undocumented account used for communication with authorized FortiManager devices exists on some versions of FortiOS, FortiAnalyzer, FortiSwitch and FortiCache.

On vulnerable versions, and provided “Administrative Access” is enabled for SSH, this account can be used to log in via SSH in Interactive-Keyboard mode, using a password shared across all devices. It gives access to a CLI console with administrative rights.

In an update to previous attempts at obfuscation, Fortinet says:

As previously stated, this vulnerability is an unintentional consequence of a feature that was designed with the intent of providing seamless access from an authorized FortiManager to registered FortiGate devices. It is important to note, this is not a case of a malicious backdoor implemented to grant unauthorized user access.

Even on a generous reading, Fortinet created a “feature” that benefited only Fortinet, did not disclose it to its customers, and that “feature” lessened those customers’ security.

If “backdoor” is limited to malicious third parties, perhaps we should call this a “designed security defect” by a manipulative first party.

January 24, 2016

Searching For Sleeping Children? (IoT)

Filed under: Cybersecurity,IoT - Internet of Things,Security — Patrick Durusau @ 7:35 am

Internet of Things security is so bad, there’s a search engine for sleeping kids by J.M. Porup.

From the post:

Shodan, a search engine for the Internet of Things (IoT), recently launched a new section that lets users easily browse vulnerable webcams.

The feed includes images of marijuana plantations, back rooms of banks, children, kitchens, living rooms, garages, front gardens, back gardens, ski slopes, swimming pools, colleges and schools, laboratories, and cash register cameras in retail stores, according to Dan Tentler, a security researcher who has spent several years investigating webcam security.

“It’s all over the place,” he told Ars Technica UK. “Practically everything you can think of.”

We did a quick search and turned up some alarming results:
….

Just so you know, the images from webcams are a premium feature of Shodan.
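To give a feel for the kind of filtering involved, here is a sketch over hypothetical, made-up banner records of the general shape a Shodan-style search returns. Real Shodan results are far richer, require an API key, and are queried over the network; the record fields and IPs below are purely illustrative:

```python
# Hypothetical, simplified banners of the kind a device search engine
# might return (real Shodan banners carry much more metadata).
banners = [
    {"ip_str": "203.0.113.10", "port": 554,  "product": "webcam"},
    {"ip_str": "203.0.113.11", "port": 22,   "product": "OpenSSH"},
    {"ip_str": "203.0.113.12", "port": 8080, "product": "webcam"},
]

def likely_webcams(results):
    """Keep only banners whose product string suggests a camera feed."""
    return [b for b in results if "webcam" in b.get("product", "").lower()]

cams = likely_webcams(banners)
```

The point is that once devices announce themselves, finding the insecure ones is a trivial filter, not an act of wizardry.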

As the insecure IoT continues to spread, coupling the latest face recognition software with webcam feeds and public image databases could be a viable service. Early warning for those seeking to avoid detection, and video evidence for those hoping for it.

Similar to detective agencies but on a web scale.

There are digital cameras with IP connectivity that are less obvious than the ones from Wowwee.

But most of them still scream “digital camera,” in white cases with obvious lenses, etc.

Porup reports that the FTC is attempting to be proactive about webcam security but penalties after a substantial number of insecure webcams appear won’t help those already exposed on the Internet.

January 21, 2016

What Drives Compliance? Hint: The P Word Missing From Cybersecurity Discussions

Filed under: Cybersecurity,Security — Patrick Durusau @ 8:52 pm

Majority of Organizations Have False Sense of Data Security by David Weldon.

From the post:

A majority of organizations equate IT security compliance with actual strong defense, and are thereby leaving their data at risk to cyber incidents through a false sense of security.

That is the conclusion of the 2016 Vormetric Data Threat Report, released today by analyst firm 451 Research and Vormetric, a leader in enterprise data security.

The fourth annual report, which polled 1,100 senior IT security executives at large enterprises worldwide, details the rates of data breach and compliance failures, perceptions of threats to data, data security stances and IT security spending plans. The study looked at physical, virtual, big data and cloud environments.

The bad news: 91 percent of organizations are vulnerable to data threats by not taking IT security measures beyond what is required by industry standards or government regulation.

“Compliance” occurs 44 times in the report, the third and fourth times in:

We’re also seeing encouraging signs that data security is moving beyond serving as merely a compliance checkbox. Though compliance remains a top reason for both securing sensitive data and spending on data security products and services, implementing security best practices posted the largest gain across all regions.

Why would compliance be the top reason for data security measures?

I consulted Compliance Week, a leading compliance zine that featured on its enforcement blog: Court: Compliance Officers Must Ensure Compliance With AML Laws by Jaclyn Jaeger.

Here’s the lead paragraph from that story:

A federal district court this month upheld a $1 million fine imposed against the former chief compliance officer for MoneyGram International, finding that individual officers, including chief compliance officers, of financial institutions may be held responsible for ensuring compliance with the anti-money laundering provisions of the Bank Secrecy Act.

A $1 million fine is an incentive in favor of compliance.

A very large incentive.

Let’s compare the incentives for compliance versus cybersecurity:

Non-Compliance $1 million
Data Breach $0.00

I selected the first compliance penalty I saw; such penalties run an entire range, some higher and some lower. The crucial point is that non-compliance carries penalties. Substantial ones in some cases.

Compare the iPhone “cookie theft bug” that took 18 months to fix: penalty imposed on the vendor, $0.00.

Cybersecurity proposals without a stick are a waste of storage and more importantly, your time.

January 18, 2016

90% banking, payment, health apps – Are Insecure – Surprised?

Filed under: Cybersecurity,Security — Patrick Durusau @ 8:40 pm

Most Health and Financial Mobile Apps Are Rife With Vulnerabilities by Tara Seals.

From the post:

When it comes to mobile app security, there appears to be a disparity between consumer confidence in the level of security incorporated into mobile health and finance apps, and the degree to which those apps are actually vulnerable to common hack techniques (code tampering and reverse-engineering). In turn this has clear implications for both patient safety and data security.

According to Arxan Technologies’ 5th Annual State of Application Security Report, the majority of app users and app executives believe their apps to be secure. A combined 84% of respondents said that the offerings are “adequately secure,” and 63% believe that app providers are doing “everything they can” to protect their mobile health and finance apps.

Yet, nearly all of the apps that Arxan assessed, (90% of them in fact, including popular banking and payment apps and government-approved health apps), proved to be vulnerable to at least two of the Open Web Application Security Project (OWASP) Mobile Top 10 Risks, which could result in privacy violations, theft of customer credentials and other malicious acts, including device tampering.

I’m not proud, I’ll admit to being surprised.

I thought 100% of banking, payment and health care apps would be found to be vulnerable.

Perhaps the 90% range was just on cursory review.

Seriously.

After decades of patch-after-vulnerability-found, with no financial incentives to change that practice, what did you expect?

The real surprise for me was anyone thinking off-the-shelf apps were secure at all. Ever.

Such users are either not following the news or have a crack pipe for a security consultant.

January 17, 2016

Your Apple Malware Protection Is Good For 5 Minutes

Filed under: Cybersecurity,Security — Patrick Durusau @ 5:02 pm

Researcher Bypasses Apple’s Updated Malware Protection in ‘5 Minutes’ by Lorenzo Franceschi-Bicchierai.

From the post:

Apple’s Mac computers have long been considered safer than their Windows-powered counterparts—so much so that the common belief for a long time was that they couldn’t get viruses or malware. Even Apple adopted that cliche for marketing purposes.

The reality, however, is slightly different. Trojans have targeted Mac computers for years, and things don’t seem to be improving. In fact, cybercriminals created more malware targeting Macs in 2015 than in the past five years combined, according to one study. Since 2012, Apple has tried to protect users with Gatekeeper, a feature designed to block common threats such as fake antivirus products, infected torrent files, and fake Flash installers—all malicious software that Mac users might download while regularly browsing the internet.

But it looks like Gatekeeper’s walls aren’t as strong as they should be. Patrick Wardle, a security researcher who works for the security firm Synack, has been poking holes in Gatekeeper for months. In fact, Wardle is still finding ways to bypass Gatekeeper, even after Apple issued patches for two of the vulnerabilities he found last year.

As it is designed now, Gatekeeper checks apps downloaded from the internet to see if they are digitally signed by either Apple or a developer recognized by Apple. If so, Gatekeeper lets the app run on the machine. If not, Gatekeeper prevents the user from installing and executing the app.
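The policy as described reduces to a simple check. A deliberately simplified model in Python, to make the logic concrete; this is an illustration of the stated policy, not Apple’s implementation, and the signer names are hypothetical:

```python
# Toy model of the Gatekeeper policy described above: an app downloaded
# from the internet runs only if signed by Apple or a developer Apple
# recognizes. Real Gatekeeper verifies cryptographic signatures; this
# sketch just models the decision.
RECOGNIZED_SIGNERS = {"Apple", "Example Developer LLC"}  # hypothetical

def gatekeeper_allows(app: dict) -> bool:
    """Decide whether the modeled policy lets an app run."""
    if not app.get("downloaded_from_internet", True):
        return True  # only quarantined internet downloads are screened
    return app.get("signer") in RECOGNIZED_SIGNERS
```

Wardle’s bypasses work not by forging a signature but by getting past the check, which is why fixing one reported hole at a time keeps failing.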

That Apple and Wardle have been going back and forth for months, with Wardle sans the actual source code, is further evidence of the software quality you get when there is no liability for security flaws in software.

You would think that when a flaw was discovered in Gatekeeper, a full review would be undertaken to find and fix all of its security issues.

No, Apple fixed only the security issue(s) pointed out to it, and no others.

Would that change if there were legal liability for security defects?

There’s only one way to find out.

January 14, 2016

Successful Cyber War OPS As Of 2016.01.05 – (But Fear Based Marketing Works)

Filed under: Cybersecurity,Government,Humor,Marketing,Security — Patrick Durusau @ 5:49 pm

From the text just below the interactive map:

This map lists all unclassified Cyber Squirrel Operations that have been released to the public that we have been able to confirm. There are many more executed ops than displayed on this map however, those ops remain classified.

You can select by squirrel or other animal, year, even month and the map shows successful cyber operations.

Squirrels are in the lead with 623 successes, versus one success by the United States (Stuxnet).

Be careful who you show this map to.

Any sane person will laugh and agree that squirrels are a larger danger to the U.S. power grid than any fantasized terrorist.

On the other hand, non-laughing people are making money from speaking engagements, consultations, government contracts, etc., all premised on fear of terrorists attacking the U.S. power grid.

People who laugh at the Cyber Squirrel 1 map, not so much.

They say it is the lizard part of your brain that controls “…fight, flight, feeding, fear, freezing-up, and fornication.”

That accords with my view that if we aren’t talking about fear, greed or sex, then we aren’t talking about marketing.

Are you willing to promote world views and uses of technology (think big data) that you know are in fact false or useless? At least in the current fear-of-terrorists mode, it’s nearly a guaranteed payday.

Or are you looking for work from employers who realize that if you are willing to lie to gain a contract or consulting gig, you are quite willing to lie to them as well?

Your call.

PS: You can get CyberSquirrel1 Unit Patches, 5 for $5.00, but if you put them on your laptop, you may have to leave it at home, depending upon the client.

January 12, 2016

Who Is Lying About Encryption?

Filed under: Cybersecurity,Government,Security — Patrick Durusau @ 10:39 pm

Canadian Cops Can Decrypt PGP BlackBerrys Too by Joseph Cox.

From the post:

On Monday, Motherboard reported that leading Dutch forensics investigators say they are able to read encrypted messages sent on PGP BlackBerry phones—custom devices which are advertised as more suited for secure communication than off-the-shelf models.

A myriad of other law enforcement agencies would not comment on whether they have this capability, but court documents reviewed by Motherboard show that the Royal Canadian Mounted Police (RCMP) can also decrypt messages from PGP BlackBerrys.

“This encryption was previously thought to be undefeatable,” one 2015 court document in a drug trafficking case reads, referring to the PGP encryption used to secure messages on a BlackBerry device. “The RCMP technological laboratory destroyed this illusion and extracted from this phone 406 e-mails, 25 address book entries and other information all of which had been protected.”

In another case from 2015, centering around charges of kidnap and assault, three out of four BlackBerrys seized by the RCMP were analysed by the “Technical Assistance Team in Ottawa and the contents were decrypted and reports prepared.”

Reports such as this one make you wonder who is lying about encryption?

This report makes current encryption sound like a cheap bicycle lock that can be defeated by anyone.

On the other hand, there are known luddites like FBI Director James Comey, who claim that government must be able to read encrypted files.

Is the “we can’t read the files” simply a ploy for more funding?

Or is current encryption really as good as the “rhythm” method of birth control?

Complicating matters, encryption is a tough subject: even honest experts disagree about techniques and their safety.

Even with your best encryption, remember two rules for transmitting digital data:

  1. Send as little data as possible.
  2. What data you send should have as short a life span as possible.

For example, “Meet at location N in 20 minutes,” has an operational lifespan of about 25 minutes. Beyond that, even if broken, it’s useless.
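Rule 2 can be made mechanical by attaching an explicit expiry to each message, so that even a later break yields nothing actionable. A small sketch, using the example message and window above:

```python
from datetime import datetime, timedelta

def make_message(text: str, sent_at: datetime, lifespan_minutes: int) -> dict:
    """Pair a message with the moment it stops mattering."""
    return {"text": text,
            "expires": sent_at + timedelta(minutes=lifespan_minutes)}

def is_stale(message: dict, now: datetime) -> bool:
    """True once the operational window has closed."""
    return now >= message["expires"]

sent = datetime(2016, 1, 12, 12, 0)
msg = make_message("Meet at location N in 20 minutes", sent, 25)
# Within 25 minutes the message matters; after that, a successful
# decryption hands an adversary nothing useful.
```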

BTW, don’t save on burner phones by using the same phone day after day. Why do you think they call them “burner” phones?

Note the Canadian case with 406 e-mails. That’s just irresponsible.

[Don’t] …Join the National Security State

Filed under: Free Speech,Government,Privacy,Security — Patrick Durusau @ 10:13 pm

Social Media Companies Should Decline the Government’s Invitation to Join the National Security State by Hugh Handeyside.

The pressure on social media companies to limit or take down content in the name of national security has never been greater. Resolving any ambiguity about how much the Obama administration values the companies’ cooperation, the White House on Friday dispatched the highest echelon of its national security team — including the Attorney General, the FBI Director, the Director of National Intelligence, and the NSA Director — to Silicon Valley for a meeting with technology executives chaired by the White House Chief of Staff himself. The agenda for the meeting tried to convey a locked-arms sense of camaraderie, asking, “How can we make it harder for terrorists to leveraging [sic] the internet to recruit, radicalize, and mobilize followers to violence?”

Congress, too, has been turning up the heat. On December 16, the House passed the Combat Terrorist Use of Social Media Act, which would require the President to submit a report on “United States strategy to combat terrorists’ and terrorist organizations’ use of social media.” The Senate is considering a far more aggressive measure which would require providers of Internet communications services to report to government authorities when they have “actual knowledge” of “apparent” terrorist activity (a requirement that, because of its vagueness and breadth, would likely harm user privacy and lead to over-reporting).

The government is of course right that terrorists use social media, including to recruit others to their cause. Indeed, social media companies already have systems in place for catching real threats, incitement, or actual terrorism. But the notion that social media companies can or should scrub their platforms of all potentially terrorism-related content is both unrealistic and misguided. In fact, mandating affirmative monitoring beyond existing practices would sweep in protected speech and turn the social media companies into a wing of the national security state.

The reasons not to take that route are both practical and principled. On a technical level, it would be extremely difficult, if not entirely infeasible, to screen for actual terrorism-related content in the 500 million tweets that are generated each day, or the more than 400 hours of video uploaded to YouTube each minute, or the 300 million daily photo uploads on Facebook. Nor is it clear what terms or keywords any automated screening tools would use — or how using such terms could possibly exclude beliefs and expressive activity that are perfectly legal and non-violent, but that would be deeply chilled if monitored for potential links to terrorism.

Hugh makes a great case why social media companies should resist becoming arms of the national security state.

You should read his essay in full and I would add only one additional point:

Do you and/or your company want to be remembered for resisting the security state or as collaborators? The choice is that simple.

January 11, 2016

50 Predictions for the Internet of Things in 2016 (Ebola Moment for Software Development?)

Filed under: Cybersecurity,IoT - Internet of Things,Security — Patrick Durusau @ 5:03 pm

50 Predictions for the Internet of Things in 2016 by David Oro.

From the post:

Earlier this year I wrote a piece asking “Do you believe the hype?” It called out an unlikely source of hype: the McKinsey Global Institute. The predictions for IoT in the years to come are massive. Gartner believes IoT is a central tenet of top strategic technology trends in 2016. Major technology players are also taking Big Swings. Louis Columbus, writing for Forbes, gathered all the 2015 market forecasts and estimates here.

So what better way to end the year and look into the future than by asking the industry for their predictions for the IoT in 2016. We asked for predictions aimed at the industrial side of the IoT. What new technologies will appear? Which companies will succeed or fail? What platforms will take off? What security challenges will the industry face? Will enterprises finally realize the benefits of IoT? We heard from dozens of startups, big players and industry soothsayers. In no particular order, here are the Internet of Things Predictions for 2016.

I count nine (9) statements from various industry leaders on IoT and you have to register to see the other forty-one (41).

I don’t have a prediction but do have a question:

Will an insecure IoT in 2016 cause enough damage to motivate better hardware/software engineering and testing practices?

I ask because 2015 was a banner year for data breaches: Data Breach Reports (ITRC), December 31, 2015, reports 169,068,506 records exposed in 2015.

Yet, where is the widespread discussion about better software engineering? (silence)

Yes, yes, let’s have more penalties for hackers, which have yet to be shown to improve cybersecurity.

Yes, yes, let’s all be more aware of security threats, except that most can’t be mitigated by those aware of them.

Apparently the exposure of 169,068,506 records in 2015 wasn’t enough to get anyone’s attention. Or at least anyone who could influence the software development process.

Odd, because just the rumor of Ebola was enough to change medical intake procedures everywhere, from hospitals to general practices to dentists.

When is the Ebola moment coming for software engineering?

January 7, 2016

Flying while trans [Ikarran]: still unbelievably horrible

Filed under: Government,Security — Patrick Durusau @ 4:12 pm

Flying while trans: still unbelievably horrible by Cory Doctorow.

Cory writes:

Cary Gabriel Costello is a trans-man in Milwaukee. Two-thirds of the time when he flies, the TSA has a complete freakout over the “anomalies” his body displays on the full-body scanner.

See Cory’s post for his commentary on this atrocity.

What Cory describes is a precursor to an episode of Babylon 5 titled “Infection.” The gist of the story is that an alien race had been repeatedly attacked, so they created a biological weapon that would destroy any entity that wasn’t “pure Ikarran.”

The definition of “pure Ikarran,” like “loyal American,” was set by fanatics and extremists, with the result that no Ikarran could possibly fit the profile of a “pure Ikarran.”

Here’s how the use of that weapon ended for the Ikarrans:

Sinclair: So who set the parameters of what it meant to be a pure Ikarran?

Franklin: A coalition of religious fanatics and military extremists that ran the government. They programmed the weapons with a level of standards based on ideology, not science.

Sinclair: Like the Nazi ideal of the perfect Aryan during World War Two.

Franklin: Exactly. The next invasion, eleven of the machines were released. And they stopped the invaders by killing anything that didn’t match their profile of the pure, perfect Ikarran. When they were finished, the machines turned on their creators. They began a process of extermination based on the slightest deviation from what they were programmed to consider normal. They killed; they kept killing until the last Ikarran was dead. (Dialogue from http://www.midwinter.com/lurk/countries/us/guide/004.weapons.html)

Now the TSA is using a profile of what a “loyal American” looks like in a full-body scanner. Data mining of social media is already underway to determine what a “loyal American” says online.

The Ikarran experience led to complete genocide. That won’t happen here, but security fanatics are well on the way to taking all of us to a far darker place than the Japanese internment camps of WWII.

January 6, 2016

Sloth As Character Flaw and Security Acronym

Filed under: Cybersecurity,Security — Patrick Durusau @ 9:05 pm

Fatally weak MD5 function torpedoes crypto protections in HTTPS and IPSEC by Dan Goodin.

From the post:

If you thought MD5 was banished from HTTPS encryption, you’d be wrong. It turns out the fatally weak cryptographic hash function, along with its only slightly stronger SHA1 cousin, are still widely used in the transport layer security protocol that underpins HTTPS. Now, researchers have devised a series of attacks that exploit the weaknesses to break or degrade key protections provided not only by HTTPS but also other encryption protocols, including Internet Protocol Security and secure shell.

The attacks have been dubbed SLOTH—short for security losses from obsolete and truncated transcript hashes. The name is also a not-so-subtle rebuke of the collective laziness of the community that maintains crucial security regimens forming a cornerstone of Internet security. And if the criticism seems harsh, consider this: MD5-based signatures weren’t introduced in TLS until version 1.2, which was released in 2008. That was the same year researchers exploited cryptographic weaknesses in MD5 that allowed them to spoof valid HTTPS certificates for any domain they wanted. Although SHA1 is considerably more resistant to so-called cryptographic collision attacks, it too is considered to be at least theoretically broken. (MD5 signatures were subsequently banned in TLS certificates but not other key aspects of the protocol.)

“Notably, we have found a number of unsafe uses of MD5 in various Internet protocols, yielding exploitable chosen-prefix and generic collision attacks,” the researchers wrote in a technical paper scheduled to be discussed Wednesday at the Real World Cryptography Conference 2016 in Stanford, California. “We also found several unsafe uses of SHA1 that will become dangerous when more efficient collision-finding algorithms for SHA1 are discovered.”

Dan’s final sentence touches on the main reason for cyberinsecurity:

The findings generate yet another compelling reason why technical architects should wean themselves off the SHA1 and MD5 functions, even if it generates short-term pain for people who still use older hardware that aren’t capable of using newer, more secure algorithms.

What kind of pain?

Economic pain.

Amazing that owners of older hardware are allowed to endanger everyone with newer hardware.

At least until you realize that no cybersecurity discussion starts with the one source of cybersecurity problems: bugs in software.

Increasing penalties for cybercrime isn’t going to decrease the rate of software bugs that make cybercrime possible.

Incentives for the production of better written and tested code, an option heretofore not explored, might. With enough incentive, even the sloth that leads to software bugs might be reduced, but I would not hold my breath.

January 1, 2016

How to Avoid Being a Terrorism “False Positive” in 2016

Filed under: Government,Privacy,Security — Patrick Durusau @ 7:56 pm

For all of the fear mongering about terrorists and terrorism, I’m more worried about being a “false positive” for terrorism than terrorism.

Radley Balko wrote about a SWAT raid on an entirely innocent family in Federal judge: Drinking tea, shopping at a gardening store is probable cause for a SWAT raid on your home, saying:

Last week, U.S. District Court Judge John W. Lungstrum dismissed every one of the Hartes’s claims. Lungstrum found that sending a SWAT team into a home first thing in the morning based on no more than a positive field test and spotting a suspect at a gardening store was not a violation of the Fourth Amendment. He found that the police had probable cause for the search, and that the way the search was conducted did not constitute excessive force. He found that the Hartes had not been defamed by the raid or by the publicity surrounding it. He also ruled that the police were under no obligation to know that drug testing field kits are inaccurate, nor were they obligated to wait for the more accurate lab tests before conducting the SWAT raid. The only way they’d have a claim would be if they could show that the police lied about the results, deliberately manipulated the tests or showed a reckless disregard for the truth — and he ruled that the Hartes had failed to do so.

If you think that’s a sad “false positive” story, consider Jean Charles de Menezes, who was murdered by London Metropolitan Police for sitting on a Tube train. He was executed with 7 shots to the head, while being physically restrained by another police officer.

Home Secretary Charles Clarke (at that time) is quoted by the BBC saying:

“I very, very much regret what happened.

“I hope [the family] understand the police were trying to do their very best under very difficult circumstances.”

What “very difficult circumstances?” Menezes was sitting peacefully on a train, unarmed and unaware that he was about to be attacked by three police officers. What’s “very difficult” about those circumstances?

Ah, but it was the day after bombings in London and the usual suspects had spread fear among the police and the public. The “very difficult circumstances” victimized the police, the public and of course, Menezes.

If you live in the United States, there is the ongoing drum roll of police shooting unarmed black men, when they don’t murder a neighbor on the way.

No doubt the police need to exercise more restraint but the police are being victimized by the toxic atmosphere of fear generated by public officials as well as those who profit from fear-driven public policies.

You do realize the TSA agents at airports are supplied by contractors. Yes? $Billions in contracts.

Less fear, fewer TSA (if any at all) = Loss of $Billions in contracts

With that kind of money at stake, the toxic atmosphere of fear will continue to grow.

How can you reduce your personal odds of being a terrorism “false positive” in 2016?

The first thing is to realize that the police may look like the “enemy” but they really aren’t. For the most part they are underpaid, under-trained, ordinary people who have a job most of us wouldn’t take on a bet. There are bad cops, have no doubt, but the good ones out-number the bad ones.

The police are being manipulated by the real bad actors, the ones who drive and profit from the fear machine.

The second thing to do is for you and your community to reach out to the police officers who regularly patrol your community. Get to know them by volunteering at police events or inviting them to your own.

Create an environment where the police don’t see “a young black man” but Mr. H’s son; you know Mr. H, he helped with the litter campaign last year, etc.

Getting to know the local police and getting the police to know your community won’t solve every problem but it may lower the fear level enough to save lives, one of which may be your own.

You won’t be any worse off and on the up side, enough good community relations may result in the police being on your side when it is time to oust the fear mongers.

December 30, 2015

Leave Your Passport At Home – Push Back At The TSA

Filed under: Government,Security — Patrick Durusau @ 9:14 am

TSA threatens to stop accepting driver’s licenses from nine states as of Jan 10 by Cory Doctorow.

Cory reports that extensions to the 2005 REAL ID Act are due to expire on January 10, 2016. States/territories facing consequences include “Alaska, California, Illinois, Missouri, New Jersey, New Mexico, South Carolina, and Washington (as well as Puerto Rico, Guam, and the US Virgin Islands).”

At issue is whether the TSA must accept your state driver’s license as legitimate identification.

Just checking the passenger traffic numbers for California’s two largest airports and one in Illinois, I found:

Los Angeles International – 68,491,451 passengers (Jan-Nov. 2015)

San Francisco International – 41,906,798 passengers (Jan-Oct. 2015)

Chicago O’Hare – 70,823,493 passengers (Jan-Nov. 2015)

I’m not an air travel analyst but 181,221,742 passengers must represent a substantial amount of airline revenue.

At these three airports alone, the TSA is conspiring to inconvenience, delay and harass that group of 181,221,742 paying customers.
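For the skeptical, the arithmetic checks out (a quick Python sanity check using the figures reported above):

```python
# Reported passenger counts for the three airports (2015, partial years)
airports = {
    "Los Angeles International (Jan-Nov)": 68_491_451,
    "San Francisco International (Jan-Oct)": 41_906_798,
    "Chicago O'Hare (Jan-Nov)": 70_823_493,
}

total = sum(airports.values())
print(f"{total:,}")  # prints 181,221,742
```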

If I had that many customers, threatened by the Not-1-Terrorist-Caught TSA, I would be using face time with House/Senate members to head off this PR nightmare.

If I had a driver’s license from any of these states, that is all that I would take to the airport.

Remember the Stasi fell because people simply stopped obeying.

Maybe this overreaching by the TSA will be its downfall, ending literally $Billions in lost productive time, the groping of women and children, and the humiliation of passengers in numerous ways.

It’s time to call the TSA to heel. Leave your passport at home.

It’s a “best practice” for citizens who want to live in a free country.

Windows 10 covertly sends your disk-encryption keys to Microsoft

Filed under: Cybersecurity,Microsoft,Security — Patrick Durusau @ 8:01 am

Windows 10 covertly sends your disk-encryption keys to Microsoft by Cory Doctorow.

Cory gives a harrowing list of “unprecedented anti-user features” in Windows 10.

It is a must read for anyone trying to build support for a move to an open source OS.

Given the public reception of the Snowden revelations, are the “unprecedented anti-user features” a deliberate strategy by Microsoft to escape the clutches of both US and other governments demanding invasion of user privacy?

There has to be a sufficient market for MS to transition to application and OS support for enterprise level open source software and weaning enterprises off of Windows 10 would be one way to establish that market.

After all, GM isn’t going to call your local IT shop for support, even with an open source OS. Much more likely to call Microsoft, which has the staff and historical expertise to manage enterprise systems.

Sure, MS may lose the thin-margin projects at the bottom if it becomes entirely an open source organization but imagine the impact it will have on big data startups.

The high end/high profit markets in software will remain whether the income is from licensing or support/customization services.

That would certainly explain the recent trend towards open source projects at MS. And driving customers away from Windows 10 is probably easier than spiking the Windows/Office teams embedded at MS.

Corporate politics, don’t you just love it? 😉

If management talks about switching to Windows 10, you know the sign to give your co-workers from Helix:

[Image: “run like hell” animation from Helix]

For non-Helix fans: RUN LIKE HELL!

December 29, 2015

Man Bites Dog – News!

Filed under: Cybersecurity,Security — Patrick Durusau @ 9:04 pm

Raspberry Pi declines bribe to pre-install malware by Robert Abel.

Robert reports that the Raspberry Pi Foundation was offered a bribe to install malware on its product and refused!

I wonder how many US manufacturers could make the same claim for their hardware or software?

Of course, in the United States, the request would have been accompanied by a National Security Letter or some other offense against freedom of speech.

FYI, no oppressive government has ever been overthrown or reformed by people who meekly follow its arbitrary dictates. Just saying.

December 25, 2015

Coal in your stocking: Cybersecurity Act of 2015

Filed under: Cybersecurity,Security — Patrick Durusau @ 1:37 pm

How does the Cybersecurity Act of 2015 change the Internet surveillance laws? by Orin Kerr.

From the post:

The Omnibus Appropriations Act that President Obama signed into law last week has a provision called the Cybersecurity Act of 2015. The Cyber Act, as I’ll call it, includes sections about Internet monitoring that modify the Internet surveillance laws. This post details those changes, focusing on how the act broadens powers of network operators to conduct surveillance for cybersecurity purposes. The upshot: The Cyber Act expands those powers in significant ways, although how far isn’t entirely clear.

Orin covers the present state of provider monitoring which sets a good background for the changes introduced by the Cybersecurity Act of 2015. He breaks the new authorizations into: monitoring, defensive measures and the ability to share data. If you are a policy wonk, definitely worth a read with an eye towards uncertainty and ambiguity in the new authorizations.

It isn’t clear how relevant Orin’s post is for law enforcement and intelligence agencies, since they have amply demonstrated their willingness to disobey the law and the lack of consequences for the same.

Service providers should be on notice from Orin’s post about the ambiguous parts of the act. On the other hand, Congress will grant retroactive immunity for law breaking at the instigation of law enforcement, so that ambiguity may or may not impact corporate policy.

Users: The Cybersecurity Act of 2015 is confirmation that the only person who can be trusted with your security is you. (full stop)

December 22, 2015

Not Our Backdoor? Gasp!

Filed under: Cybersecurity,Government,Security — Patrick Durusau @ 3:47 pm

US Gov’t Agencies Freak Out Over Juniper Backdoor; Perhaps They’ll Now Realize Why Backdoors Are A Mistake by Mike Masnick

From the post:

Last week, we wrote about how Juniper Networks had uncovered some unauthorized code in its firewall operating system, allowing knowledgeable attackers to get in and decrypt VPN traffic. While the leading suspect still remains the NSA, it’s been interesting to watch various US government agencies totally freak out over their own networks now being exposed:


The FBI is investigating the breach, which involved hackers installing a back door on computer equipment, U.S. officials told CNN. Juniper disclosed the issue Thursday along with an emergency security patch that it urged customers to use to update their systems “with the highest priority.”

The concern, U.S. officials said, is that sophisticated hackers who compromised the equipment could use their access to get into any company or government agency that used it.

One U.S. official described it as akin to “stealing a master key to get into any government building.”

And, yes, this equipment is used all throughout the US government:


Juniper sells computer network equipment and routers to big companies and to U.S. government clients such as the Defense Department, Justice Department, FBI and Treasury Department. On its website, the company boasts of providing networks that “US intelligence agencies require.”

Its routers and network equipment are widely used by corporations, including for secure communications. Homeland Security officials are now trying to determine how many such systems are in use for U.S. government networks.

As regular readers know, disclosure disrupts zero-day markets, but this is a case where I would favor short-term non-disclosure.

Non-disclosure to allow an informal network of hackers to drain as much information from government sources as their encrypted AWS storage could hold. Not bothering to check the data, just sucking down whatever is within reach. Any government, any network.

That happy state of affairs didn’t happen, so you will have to fall back on poor patch maintenance. And after all, it is the holidays: the least senior staffers will be in charge, if anyone at all, and their holiday leave comes before patch maintenance.

Just guessing, I would say you have until March before most of the holes close up, possibly longer. BTW, given historical patch behavior, that’s March of 2017.

What stories are you going to find because of this backdoor? Make them costly to the government in question. Might disabuse them of favoring backdoors.

December 20, 2015

‘*Star Wars Spoiler*’ Security

Filed under: Humor,Politics,Security — Patrick Durusau @ 10:57 am

ISIS Secures Comms By Putting ‘*Star Wars Spoiler*’ Before Every Message.

From the post:

The Islamic State has developed a new, incredibly effective way to safeguard their communications, according to intelligence sources. By putting the phrase “Star Wars Spoiler” in message headers, the group has essentially eliminated any chance of their messages being read by United States intelligence services even if they are intercepted.

“It’s been three days since any of us have had any intelligence at all on ISIS maneuvers and plans,” Capt. Mark Newman, Army intelligence officer, said in an interview. “We’re trying to put people who have seen the movie on the rotator out to the sandbox, but that’s pretty much making everyone lie about whether or not they’d seen Episode VII.”

Reporters have been unable to see any of the classified intelligence reports, not because Edward Snowden didn’t leak them, but because much of the staff has not seen Episode VII yet either. The ISIS Twitter account, however, was more difficult to avoid looking at.

More effective than former Queen Hillary’s position that wishes trump (sorry) known principles of cryptography:

It doesn’t do anybody any good if terrorists can move toward encrypted communication that no law enforcement agency can break into before or after. There must be some way. I don’t know enough about the technology, Martha, to be able to say what it is, but I have a lot of confidence in our tech experts. (Last Democrat sham debate of 2015)

At a minimum, that’s dishonest and at maximum, delusional. Stalin was the same way about genetics, you recall.

If Hillary can lie to herself and the American public about encryption, ask yourself: what else is she willing to lie about?

December 17, 2015

Why Big Data Fails to Detect Terrorists

Filed under: Astroinformatics,BigData,Novelty,Outlier Detection,Security — Patrick Durusau @ 10:15 pm

Kirk Borne tweeted a link to his presentation, Big Data Science for Astronomy & Space, and more specifically to slides 24 and 25 on novelty detection and surprise discovery.

Casting about for more resources to point out, I found Novelty Detection in Learning Systems by Stephen Marsland.

The abstract for Stephen’s paper:

Novelty detection is concerned with recognising inputs that differ in some way from those that are usually seen. It is a useful technique in cases where an important class of data is under-represented in the training set. This means that the performance of the network will be poor for those classes. In some circumstances, such as medical data and fault detection, it is often precisely the class that is under-represented in the data, the disease or potential fault, that the network should detect. In novelty detection systems the network is trained only on the negative examples where that class is not present, and then detects inputs that do not fit into the model that it has acquired, that is, members of the novel class.

This paper reviews the literature on novelty detection in neural networks and other machine learning techniques, as well as providing brief overviews of the related topics of statistical outlier detection and novelty detection in biological organisms.

The rest of the paper is very good and worth your time to read but we need not venture beyond the abstract to demonstrate why big data cannot, by definition, detect terrorists.

The root of the terrorist detection problem summarized in the first sentence:

Novelty detection is concerned with recognising inputs that differ in some way from those that are usually seen.

So, what are the inputs of a terrorist that differ from the inputs usually seen?

That’s a simple enough question.

Previously committing a terrorist suicide attack is a definite tell but it isn’t a useful one.

Obviously the TSA doesn’t know because it has never caught a terrorist, despite its profile and wannabe psychics watching travelers.

You can churn big data 24×7 but if you don’t have a baseline of expected inputs, no input is going to stand out from the others.
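To make the point concrete, here is a minimal sketch of the statistical outlier idea Marsland surveys: an input only "stands out" relative to a baseline distribution. The traffic numbers below are invented for illustration:

```python
import statistics

def novelty_scores(baseline, inputs):
    """Score inputs by how far they deviate from the baseline distribution (z-score)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return [(x, abs(x - mu) / sigma) for x in inputs]

# With a well-defined baseline of "usual" inputs, an outlier stands out:
normal_traffic = [100, 102, 98, 101, 99, 103, 97]
scores = novelty_scores(normal_traffic, [101, 500])
print(scores)  # the 500 gets a huge z-score; the 101 does not
```

Without `normal_traffic`, the function has nothing to compare against, which is exactly the problem: no one has a baseline of "usual" inputs for terrorists.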

The San Bernardino attackers were not detected because their inputs didn’t vary enough for the couple to stand out.

Even if they had been selected for close and unconstitutional monitoring of their electronic traffic, bank accounts, social media, phone calls, etc., there is no evidence that current data techniques would have detected them.
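The base-rate arithmetic behind this is unforgiving. A back-of-the-envelope sketch, with every figure an illustrative assumption: even an implausibly accurate detector, screening the whole population for a handful of actual attackers, buries its true hits in false positives:

```python
population = 322_000_000     # U.S. population screened (assumption)
terrorists = 100             # hypothetical number of actual attackers
sensitivity = 0.99           # detector flags 99% of real attackers (generous)
false_positive_rate = 0.01   # and wrongly flags 1% of everyone else (generous)

true_hits = terrorists * sensitivity                             # ~99 people
false_alarms = (population - terrorists) * false_positive_rate   # ~3.2 million people

# Probability that a flagged person is actually a terrorist:
precision = true_hits / (true_hits + false_alarms)
print(f"{precision:.6f}")  # prints 0.000031 -- the flagged pool is almost entirely innocent
```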

Before you invest or continue paying for big data to detect terrorists, ask the simple questions:

What is your baseline from which variance will signal a terrorist?

How often has it worked?

Once you have a dead terrorist, you can start from the dead terrorist and search your big data, but that’s an entirely different starting point.

Given the weeks, months and years of finger pointing following a terrorist attack, speed really isn’t an issue.

#IntelGroup

Filed under: Cybersecurity,Government,Security — Patrick Durusau @ 5:05 pm

#IntelGroup

From the about page:

“The control of information is something the elite always does, particularly in a despotic form of government. Information, knowledge, is power. If you can control information, you can control people.” -Tom Clancy

Intelgroup was was created to first and foremost help amplify and spread the message of Anonymous wide and far. Like Anonymous Intelgroup started off as an idea and thru hard work and a lot of lulz We have become a well respected source for global information. Known for our exclusive one on one interviews with Acitivists , Hackers and Victims of Police Brutality , as well as in depth looks at Anonymous Operations. We Constantly keep you intellectually involved in the movement. And we continue to evolve as often as Information evolves . We Are Not Mainstream Media . We are “Think For Yourself Media” Welcome To Our Page . Welcome to Intelgroup.

Follow Us On Twitter : @AnonIntelGroup

Like Our Page On Facebook : https://www.facebook.com/IntelGroup

We Are Also On Instagram : @intelgroup

Check out our YouTube Channel : http://youtube.com/anonintelgroup1

I’m all for more and not less information about government and its activities.

And everyone has to pick the battles they want to fight.

What puzzles me is the disparity in reports of government insecurity, say the Office of Personnel Management, and the silence on the full 6,000+ page Senate Report on torture.

The most recent figure I could find for the Senate is 6,097 people on staff, as of 2009 (Vital Statistics on Congress).

Out of over 6,000 potential sources, none of the news organizations, hacktivists, etc. was able to obtain a copy of the full report?

That seems too remarkable to credit.

Even more remarkable is the near perfect security of all members of Congress, federal agencies and PACs.

I can’t imagine it is a lack of malfeasance and corruption that accounts for the lack of leaks.

What’s your explanation for the lack of leaks?

December 16, 2015

Vulnerable Printers on the Net

Filed under: Cybersecurity,Security — Patrick Durusau @ 10:42 pm

Bob Gourley tweeted:

Using Shodan I found 40,471 vulnerable printers connected to the Net. (Of the 5 at GMU 1 is low on yellow toner).

[Image: screenshot of Bob Gourley’s Shodan tweet]

In case you don’t know Shodan, check it out! Search for “things” on the Insecure Internet of Things (IIoT).

I guess the question for 2016 is going to be: Are you a White or Black Hat on the Insecure Internet of Things (IIoT)?
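Shodan’s API returns JSON "banners" for matched devices; a sketch of how a tally like Gourley’s might be made, using hypothetical banner records in the shape Shodan returns (the real query needs the `shodan` library and an API key; the IPs below are from the documentation range):

```python
# Hypothetical Shodan-style results: each match carries an IP, port, and banner text.
matches = [
    {"ip_str": "203.0.113.10", "port": 9100, "data": "HP LaserJet ... JetDirect"},
    {"ip_str": "203.0.113.11", "port": 631,  "data": "IPP/2.0 printer ready"},
    {"ip_str": "203.0.113.12", "port": 80,   "data": "Apache httpd"},
]

# Port 9100 (raw JetDirect) and 631 (IPP) are classic open-printer ports.
printer_ports = {9100, 631}
exposed = [m for m in matches if m["port"] in printer_ports]
print(len(exposed), "exposed printers")  # prints: 2 exposed printers
```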

Privacy Alert! – CISA By Friday (18 December 2015) Time to Raise Hell!

Filed under: Cybersecurity,Government,Privacy,Security — Patrick Durusau @ 9:19 pm

Lawmakers Have Snuck CISA Into a Bill That Is Guaranteed to Become a Law by Jason Koebler.

From the post:

To anyone who has protested the sweeping, vague, and privacy-killing iterations of the Cybersecurity Information Sharing and Protection Act or the Cybersecurity Information Sharing Act over the last several years, sorry, lawmakers have heard you, and they have ignored you.

That sounds bleak, but lawmakers have stripped the very bad CISA bill of almost all of its privacy protections and have inserted the full text of it into a bill that is essentially guaranteed to be passed and will certainly not be vetoed by President Obama.

CISA allows private companies to pass your personal information and online goings-on to the federal government and local law enforcement if it suspects a “cybersecurity threat,” a term so broadly defined that it can apply to “anomalous patterns of communication” and can be used to gather information about just about any crime, cyber or not.

At 2 AM Wednesday morning, Speaker of the House Paul Ryan unveiled a 2000-page budget bill that will fund the federal government well into next year. The omnibus spending bill, as it’s usually referred to, is the result of countless hours of backroom dealings and negotiations between Republicans and Democrats.

Without the budget bill (or a short-term emergency measure), the government shuts down, as it did in 2013 for 16 days when lawmakers couldn’t reach a budget deal. It contains dozens of measures that make the country run, and once it’s released and agreed to, it’s basically a guarantee to pass. Voting against it or vetoing it is politically costly, which is kind of the point: Republicans get some things they want, Democrats get some things they want, no one is totally happy but they live with it anyway. This is how countless pieces of bad legislation get passed in America—as riders on extremely important pieces of legislation that are politically difficult to vote against.

See Jason’s post for the full story but you get the gist of it, your privacy rights will be terminated to a large degree this coming Friday.

I don’t accept Jason’s fatalism, however.

There still remains time for members of Congress to strip the rider from the budget bill, but only if everyone raises hell with their representatives and senators between now and Friday.

We need to overload every switchboard in the 202 area code with legitimate, personal calls to representatives and senators. Fill up every voice mail box, every online message storage, etc.

Those of you with personal phone numbers, put them to good use. Call, now!

This may not make any difference, but, members of Congress can’t say they weren’t warned before taking this fateful step.

When Congress signals it doesn’t care about our privacy, then we damned sure don’t have to care about theirs.

We Know How You Feel [A Future Where Computers Remain Imbeciles]

We Know How You Feel by Raffi Khatchadourian.

From the post:

Three years ago, archivists at A.T. & T. stumbled upon a rare fragment of computer history: a short film that Jim Henson produced for Ma Bell, in 1963. Henson had been hired to make the film for a conference that the company was convening to showcase its strengths in machine-to-machine communication. Told to devise a faux robot that believed it functioned better than a person, he came up with a cocky, boxy, jittery, bleeping Muppet on wheels. “This is computer H14,” it proclaims as the film begins. “Data program readout: number fourteen ninety-two per cent H2SOSO.” (Robots of that era always seemed obligated to initiate speech with senseless jargon.) “Begin subject: Man and the Machine,” it continues. “The machine possesses supreme intelligence, a faultless memory, and a beautiful soul.” A blast of exhaust from one of its ports vaporizes a passing bird. “Correction,” it says. “The machine does not have a soul. It has no bothersome emotions. While mere mortals wallow in a sea of emotionalism, the machine is busy digesting vast oceans of information in a single all-encompassing gulp.” H14 then takes such a gulp, which proves overwhelming. Ticking and whirring, it begs for a human mechanic; seconds later, it explodes.

The film, titled “Robot,” captures the aspirations that computer scientists held half a century ago (to build boxes of flawless logic), as well as the social anxieties that people felt about those aspirations (that such machines, by design or by accident, posed a threat). Henson’s film offered something else, too: a critique—echoed on television and in novels but dismissed by computer engineers—that, no matter a system’s capacity for errorless calculation, it will remain inflexible and fundamentally unintelligent until the people who design it consider emotions less bothersome. H14, like all computers in the real world, was an imbecile.

Today, machines seem to get better every day at digesting vast gulps of information—and they remain as emotionally inert as ever. But since the nineteen-nineties a small number of researchers have been working to give computers the capacity to read our feelings and react, in ways that have come to seem startlingly human. Experts on the voice have trained computers to identify deep patterns in vocal pitch, rhythm, and intensity; their software can scan a conversation between a woman and a child and determine if the woman is a mother, whether she is looking the child in the eye, whether she is angry or frustrated or joyful. Other machines can measure sentiment by assessing the arrangement of our words, or by reading our gestures. Still others can do so from facial expressions.

Our faces are organs of emotional communication; by some estimates, we transmit more data with our expressions than with what we say, and a few pioneers dedicated to decoding this information have made tremendous progress. Perhaps the most successful is an Egyptian scientist living near Boston, Rana el Kaliouby. Her company, Affectiva, formed in 2009, has been ranked by the business press as one of the country’s fastest-growing startups, and Kaliouby, thirty-six, has been called a “rock star.” There is good money in emotionally responsive machines, it turns out. For Kaliouby, this is no surprise: soon, she is certain, they will be ubiquitous.

This is a very compelling look at efforts that have in practice made computers more responsive to the emotions of users, with the goal of influencing users based upon the emotions that are detected.

Sound creepy already?

The article is fairly long but a great insight into progress already being made and that will be made in the not too distant future.

However, “emotionally responsive machines” remain the same imbeciles as they were in the story of H14. That is to say, they can only “recognize” emotions much as they can “recognize” color. To be sure, such a machine “learns,” but its reaction upon recognition remains a matter of programming and/or training.
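A toy sketch of the point, with invented feature names: however the recognizer was trained, at run time its "reaction" is a fixed mapping from measured features to a programmed response, and with nothing to measure there is nothing to respond to:

```python
# Toy "emotion recognizer": whatever the training, the run-time behavior
# is a fixed mapping from measured features to a programmed response.
def recognize(face_features):
    if face_features is None:  # no face found (e.g., a hard mask)
        return "unknown"
    if face_features.get("mouth_curve", 0) > 0.5:
        return "happy"
    if face_features.get("brow_furrow", 0) > 0.5:
        return "angry"
    return "neutral"

print(recognize({"mouth_curve": 0.9}))  # prints: happy
print(recognize(None))                  # prints: unknown -- nothing to "respond" to
```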

The next wave of startups will create programmable emotional images of speakers, nudging the privacy arms race just another step down the road. If I were investing in startups, I would concentrate on those seeking to defeat emotionally responsive computers.

If you don’t want to wait for a high tech way to defeat emotionally responsive computers, may I suggest a fairly low tech solution:

Wear a mask!

One of my favorites:

[Image: Egyptian Guy Fawkes mask]

(From https://commons.wikimedia.org/wiki/Category:Masks_of_Guy_Fawkes. There are several unusual images there.)

Or choose any number of other masks at your nearest variety store.

A hard mask that conceals your eyes and movement of your face will defeat any “emotionally responsive computer.”

If you are concerned about your voice giving you away, search for “voice changer” for over 4 million “hits” on software to alter your vocal characteristics. Much of it for free.

Defeating “emotionally responsive computers” remains like playing checkers against an imbecile. If you lose, it’s your own damned fault.

PS: If you have a Max Headroom type TV and don’t want to wear a mask all the time, consider this solution for its camera:

[Image: cutting tool]

Any startups yet based on defeating the Internet of Things (IoT)? Predicting 2016/17 will be the year for those to take off.

December 15, 2015

eSymposium on Hacktivism (Defeating Hactivists)

Filed under: Cybersecurity,Security — Patrick Durusau @ 8:57 pm

eSymposium on Hacktivism

This showed up in my inbox today with the following description:

These vigilante-style, politically motivated attacks are meant to embarrass executives by publicizing their secret dealings. What can authorities do to go after those behind these illegal activities, and how can corporations better protect themselves so incidents such as those that happened at the NSA, RSA, Twitter, PayPal, Sony, Pfizer, the FBI, a number of police forces, the U.S. military and many other entities, don’t happen to them? We’ll take a deep dive.

Registration requires far more information than I am willing to disclose for a “free” eSymposium so someone else will have to fill in the details.

I can offer advice on defeating hacktivists:

  1. Don’t make deals that will embarrass you if made public.
  2. Don’t lie when questioned about any deal, information, etc.
  3. Don’t cheat or steal.
  4. Don’t do #1-#3 to cover for someone else’s incompetence or dishonesty.

Is any of that unclear?

As far as I can tell, that is a 100% foolproof defense against hacktivists.

Questions?

December 14, 2015

Fixing Bugs In Production

Filed under: Humor,Privacy,Programming,Security — Patrick Durusau @ 8:48 pm

MΛHDI posted this to twitter and it is too good not to share:

Amusing now but what happens when the illusion of “static data” disappears and economic activity data is streamed from every transaction point?

Your code and analysis will need to specify the time boundaries of the data that underlie your analysis. Depending on the level of your analysis, it may quickly become outdated as new data streams in for further analysis.
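A sketch of what "specifying time boundaries" might look like in practice: the same query over a data stream yields different answers as the window advances (all data below is invented):

```python
from datetime import datetime, timedelta

# Invented stream of (timestamp, amount) transaction records, one per hour
stream = [(datetime(2015, 12, 1) + timedelta(hours=h), 100 + h) for h in range(48)]

def window_total(records, start, end):
    """Total activity within an explicit time boundary [start, end)."""
    return sum(amount for ts, amount in records if start <= ts < end)

day1 = window_total(stream, datetime(2015, 12, 1), datetime(2015, 12, 2))
day2 = window_total(stream, datetime(2015, 12, 2), datetime(2015, 12, 3))
print(day1, day2)  # same query, different windows, different answers
```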

To do the level of surveillance that law enforcement longs for in the San Bernardino attack, you would need real time sales transaction data for the last 5 years, plus bank records and “see something say something” reports on 322+ million citizens of the United States.

Now imagine fixing bugs in that production code, when arrest and detention, if not more severe consequences await.

December 10, 2015

FBI Official Acknowledges Using Top Secret Hacking Weapons

Filed under: Cybersecurity,Government,Security — Patrick Durusau @ 11:11 am

FBI Official Acknowledges Using Top Secret Hacking Weapons by Robert Hackett.

From the post:

The Federal Bureau of Investigation recently made an unprecedented admission: It uses undisclosed software vulnerabilities when hacking suspects’ computers.

Amy Hess, head of the FBI’s science and technology arm, recently went on the record about the practice with the Washington Post. “Hess acknowledged that the bureau uses zero-days,” the Post reported on Tuesday, using industry-speak for generally unknown computer bugs. The name derives from the way such flaws blind side security pros. By the time attackers have begun taking advantage of these coding flubs, software engineers are left with zero days to fix them.

Never before has an FBI official conceded the point, the Post notes. That’s noteworthy. Although the news itself is not exactly a shocker. It is well known among cybersecurity and privacy circles that the agency has had a zero day policy in place since 2010, thanks to documents obtained by the American Civil Liberties Union and published earlier this year on Wired. And working groups had been assembled at least two years earlier to begin mapping out that policy, as a document obtained by the Electronic Frontier Foundation privacy organization and also published on Wired shows. Now though, Hess, an executive assistant director with the FBI, seems to have confirmed the activity.

(People surmised as much after the FBI was outed as a customer of the Italian spyware firm Hacking Team after hackers stole some of its internal documents and published them online this year, too.)

The agency’s “network investigative techniques,” as these hacking operations are known, originate inside the FBI’s Operational Technology Division in an enclave known as its Remote Operations Unit, according to the Post. They’re rarely discussed publicly, and many privacy advocates have a number of concerns about the system, which they say could potentially be abused or have unsavory consequences.

Robert does a great job in covering this latest admission by the FBI and pointing to other resources to fill in its background.

It’s hard to think of a better precedent for this use of hacking weapons than Silverthorne Lumber Co., Inc. v. United States, 251 U.S. 385 (1920).

The opinion for the majority of the Supreme Court was delivered by Justice Holmes at the height of his career. It isn’t long so I quote the opinion in full:

This is a writ of error brought to reverse a judgment of the District Court fining the Silverthorne Lumber Company two hundred and fifty dollars for contempt of court and ordering Frederick W. Silverthorne to be imprisoned until he should purge himself of a similar contempt. The contempt in question was a refusal to obey subpoenas and an order of Court to produce books and documents of the company before the grand jury to be used in regard to alleged violation of the statutes of the United States by the said Silverthorne and his father. One ground of the refusal was that the order of the Court infringed the rights of the parties under the Fourth Amendment of the Constitution of the United States.

The facts are simple. An indictment upon a single specific charge having been brought against the two Silverthornes mentioned, they both were arrested at their homes early in the morning of February 25, and were detained in custody a number of hours. While they were thus detained, representatives of the Department of Justice and the United States marshal, without a shadow of authority, went to the office of their company and made a clean sweep of all the books, papers and documents found there. All the employes were taken or directed to go to the office of the District Attorney of the United States, to which also the books, &c., were taken at once. An application, was made as soon as might be to the District

Page 251 U. S. 391

Court for a return of what thus had been taken unlawfully. It was opposed by the District Attorney so far as he had found evidence against the plaintiffs in error, and it was stated that the evidence so obtained was before the grand jury. Color had been given by the District Attorney to the approach of those concerned in the act by an invalid subpoena for certain documents relating to the charge in the indictment then on file. Thus, the case is not that of knowledge acquired through the wrongful act of a stranger, but it must be assumed that the Government planned or at all events ratified, the whole performance. Photographs and copies of material papers were made, and a new indictment was framed based upon the knowledge thus obtained. The District Court ordered a return of the originals, but impounded the photographs and copies. Subpoenas to produce the originals then were served, and, on the refusal of the plaintiffs in error to produce them, the Court made an order that the subpoenas should be complied with, although it had found that all the papers had been seized in violation of the parties’ constitutional rights. The refusal to obey this order is the contempt alleged. The Government now, while in form repudiating and condemning the illegal seizure, seeks to maintain its right to avail itself of the knowledge obtained by that means which otherwise it would not have had.

The proposition could not be presented more nakedly. It is that, although, of course, its seizure was an outrage which the Government now regrets, it may study the papers before it returns them, copy them, and then may use the knowledge that it has gained to call upon the owners in a more regular form to produce them; that the protection of the Constitution covers the physical possession, but not any advantages that the Government can gain over the object of its pursuit by doing the forbidden act. Weeks v. United States, 232 U. S. 383, to be sure, had established that laying the papers directly before the grand jury was

unwarranted, but it is taken to mean only that two steps are required instead of one. In our opinion, such is not the law. It reduces the Fourth Amendment to a form of words. 232 U.S. 393. The essence of a provision forbidding the acquisition of evidence in a certain way is that not merely evidence so acquired shall not be used before the Court, but that it shall not be used at all. Of course, this does not mean that the facts thus obtained become sacred and inaccessible. If knowledge of them is gained from an independent source they may be proved like any others, but the knowledge gained by the Government’s own wrong cannot be used by it in the way proposed. The numerous decisions, like Adams v. New York, 192 U. S. 585, holding that a collateral inquiry into the mode in which evidence has been got will not be allowed when the question is raised for the first time at the trial, are no authority in the present proceeding, as is explained in Weeks v. United States, 232 U.S. 383, 394, 395. Whether some of those decisions have gone too far or have given wrong reasons it is unnecessary to inquire; the principle applicable to the present case seems to us plain. It is stated satisfactorily in Flagg v. United States, 233 Fed.Rep. 481, 483. In Linn v. United States, 251 Fed.Rep. 476, 480, it was thought that a different rule applied to a corporation, on the ground that it was not privileged from producing its books and papers. But the rights of a corporation against unlawful search and seizure are to be protected even if the same result might have been achieved in a lawful way.

In classic Holmes style, the crux of the case is the use of illegal means to gain information, which then shapes the use of more lawful means of investigation:

It is that, although, of course, its seizure was an outrage which the Government now regrets, it may study the papers before it returns them, copy them, and then may use the knowledge that it has gained to call upon the owners in a more regular form to produce them; that the protection of the Constitution covers the physical possession, but not any advantages that the Government can gain over the object of its pursuit by doing the forbidden act.

Concealment of the use of “top secret hacking weapons,” like flight, is more than ample evidence of an institutional “guilty mind” when it comes to the illegal gathering of evidence. If it were pursuing lawful means of investigation, the FBI would not go to such lengths to conceal its activities. Interviews with witnesses, physical evidence, records, wiretaps, pen registers, etc. are all lawful and disclosed means of investigation, in general and in individual cases.

The FBI as an organization has created a general exception to all criminal laws and to the protections offered to United States citizens, applicable when and where its agents, not courts, decide such exceptions are necessary.

The privacy of individual citizens is at risk, but the greater danger is the FBI becoming a lawless enterprise whose goals and priorities take precedence over both the laws of the United States and its Constitution.

The United States suffers from murders, rapes and bank robberies every week of the year, yet none of those grim statistics has forced the wholesale abandonment of the rule of law by law enforcement agencies. Prefacing attacks with the word “terrorist” should have no different result.

December 7, 2015

Toxic Gas Detector Alert!

Filed under: Cybersecurity,IoT - Internet of Things,Security — Patrick Durusau @ 9:55 pm

For years the Chicken Littles of infrastructure security have been warning of nearly impossible cyber-attacks on utilities and other critical infrastructure.

Despite nearly universal scorn from security experts, once those warnings are made they are dutifully repeated by an uncritical press and echoed by elected public officials.

Infrastructure was not insecure originally, but the Internet of Things is catching up to it, making what was once secure, insecure.

Consider Mark Stockley‘s report: Industrial gas detectors vulnerable to a remote ‘attacker with low skill’.

From the post:

Users of Honeywell’s Midas and Midas Black gas detectors are being urged to patch their firmware to protect against a pair of critical, remotely exploitable vulnerabilities.

These extremely serious vulnerabilities, found by researcher Maxim Rupp and reported by ICS-CERT (the Industrial Control Systems Cyber Emergency Response Team) in advisory ICSA-15-309-02, are simple enough to be exploited by an “attacker with low skill”:

Successful exploitation of these vulnerabilities could allow a remote attacker to gain unauthenticated access to the device, potentially allowing configuration changes, as well as the initiation of calibration or test processes.

…These vulnerabilities could be exploited remotely.

…An attacker with low skill would be able to exploit these vulnerabilities.

So, how bad is the problem?

You judge:

Midas and Midas Black gas detectors are used worldwide in numerous industrial sectors including chemical, manufacturing, energy, food, agriculture and water to:

…detect many key toxic, ambient and flammable gases in a plant. The device monitors points up to 100 feet (30 meters) away while using patented technology to regulate flow rates and ensure error-free gas detection.

The vulnerabilities could allow the devices’ authentication to be bypassed completely by path traversal (CVE-2015-7907) or to be compromised by attackers grabbing an administrator’s password as it’s transmitted in clear text (CVE-2015-7908).
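To see why a path traversal flaw like CVE-2015-7907 lets an attacker bypass authentication entirely, consider a minimal sketch of the underlying bug class. This is an illustration only, not Honeywell's actual firmware code (which has not been published); the paths and function names are invented:

```python
import os

WEB_ROOT = "/var/www/device"

def insecure_fetch(requested):
    # Naive concatenation: "../" sequences in the request escape the
    # web root once the operating system resolves the path.
    return os.path.join(WEB_ROOT, requested)

def secure_fetch(requested):
    # Normalize first, then verify the result is still under the web root.
    candidate = os.path.normpath(os.path.join(WEB_ROOT, requested))
    if not candidate.startswith(WEB_ROOT + os.sep):
        raise PermissionError("path traversal attempt blocked")
    return candidate

# The attacker requests a file outside the protected directory:
attack = "../../etc/passwd"
assert insecure_fetch(attack) == "/var/www/device/../../etc/passwd"
assert secure_fetch("index.html") == "/var/www/device/index.html"
```

A device that serves its web interface the insecure way hands over any file its web server can read, credentials included, with no login required.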

That’s still not a full picture of the danger posed by these vulnerabilities. Take a look at the sales brochure on the Midas Gas Detector and you will find this chart of the “over 35 gases” the Midas Gas Detector can detect:

35-gases

Several nasty gases are on the list: Ammonia (caustic, hazardous), Arsine (highly toxic, flammable), Chlorine (extremely dangerous, poisonous to all living organisms), Hydrogen cyanide, and Hydrogen fluoride (“Hydrogen fluoride is a highly dangerous gas, forming corrosive and penetrating hydrofluoric acid upon contact with living tissue. The gas can also cause blindness by rapid destruction of the corneas.”)

Bear in mind that patch application doesn’t have an encouraging history: Potent, in-the-wild exploits imperil customers of 100,000 e-commerce sites

Honeywell has put the detection of extremely dangerous gases at the mercy of script kiddies.

Suggestion: If you work on-site where Midas Gas Detectors may be in use, inquire before setting foot on the site whether the relevant models are present and whether they have been patched.

Bear in mind that the risk of “…corrosive and penetrating hydrofluoric acid upon contact with living tissue…” is yours in some situations. I would ask first.

Untraceable communication — guaranteed

Filed under: Cybersecurity,Privacy,Security — Patrick Durusau @ 8:50 pm

Untraceable communication — guaranteed by Larry Hardesty.

From the post:

Anonymity networks, which sit on top of the public Internet, are designed to conceal people’s Web-browsing habits from prying eyes. The most popular of these, Tor, has been around for more than a decade and is used by millions of people every day.

Recent research, however, has shown that adversaries can infer a great deal about the sources of supposedly anonymous communications by monitoring data traffic though just a few well-chosen nodes in an anonymity network. At the Association for Computing Machinery Symposium on Operating Systems Principles in October, a team of MIT researchers presented a new, untraceable text-messaging system designed to thwart even the most powerful of adversaries.

The system provides a strong mathematical guarantee of user anonymity, while, according to experimental results, permitting the exchange of text messages once a minute or so.

“Tor operates under the assumption that there’s not a global adversary that’s paying attention to every single link in the world,” says Nickolai Zeldovich, an associate professor of computer science and engineering, whose group developed the new system. “Maybe these days this is not as good of an assumption. Tor also assumes that no single bad guy controls a large number of nodes in their system. We’re also now thinking, maybe there are people who can compromise half of your servers.”

Because the system confuses adversaries by drowning telltale traffic patterns in spurious information, or “noise,” its creators have dubbed it “Vuvuzela,” after the noisemakers favored by soccer fans at the 2010 World Cup in South Africa.

Pay particular attention to the generation of dummy messages as “noise.”
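As I understand the approach from the abstract, each server injects a random number of dummy messages per round, with the count drawn so that observed traffic volumes satisfy differential privacy. A rough sketch of that idea using Laplace noise; the function and parameters are my own illustration, not the authors' code:

```python
import math
import random

def dummy_message_count(mu, scale):
    """Number of cover messages to inject this round.

    mu is a baseline large enough that the noisy count is rarely
    clipped at zero; scale trades privacy against bandwidth overhead.
    (Illustrative values, not those from the Vuvuzela paper.)"""
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return max(0, round(mu + noise))

# Whether 2 or 200 users were active this round, an observer sees
# roughly mu messages on the wire.
real_messages = 7
total_on_wire = real_messages + dummy_message_count(mu=300, scale=10)
```

The point of the noise is that the observed message count in any round reveals almost nothing about how many real conversations it contains.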

In topic map terms, I would say that the association between the sender and a particular message, or between the receiver and a particular message, has had its identity obscured.

That is the reverse of the usual application of topic map principles, which is a strong indication that the means used to identify those associations are also establishing associations and their identities. Perhaps not in traditional TMDM terms, but they are associations with identities nonetheless.

For some unknown reason, the original post did not have a link to the article, Vuvuzela: Scalable Private Messaging Resistant to Traffic Analysis by Jelle van den Hooff, David Lazar, Matei Zaharia, and Nickolai Zeldovich.

The non-technical post concludes:

“The mechanism that [the MIT researchers] use for hiding communication patterns is a very insightful and interesting application of differential privacy,” says Michael Walfish, an associate professor of computer science at New York University. “Differential privacy is a very deep and sophisticated theory. The observation that you could use differential privacy to solve their problem, and the way they use it, is the coolest thing about the work. The result is a system that is not ready for deployment tomorrow but still, within this category of Tor-inspired academic systems, has the best results so far. It has major limitations, but it’s exciting, and it opens the door to something potentially derived from it in the not-too-distant future.”

It isn’t clear how such a system would defeat an adversary that has access to all the relevant nodes, where “relevant nodes” is a manageable subset of all the possible nodes in the world. It’s unlikely that any adversary, aside from the NSA, CIA and other known money pits, would attempt to monitor all network traffic.

But monitoring all network traffic is both counter-productive and unnecessary. In general, one does not set out from the Washington Monument in search of spies based in the United States. Or at least people who hope to catch spies don’t. I can’t speak for the NSA or CIA.

While you could search for messages between people unknown to you, that sounds like a very low-grade ore mining project. You could find a diamond in the rough, but it’s unlikely.

The robustness of this proposal should assume that both the sender and receiver have been identified and their network traffic is being monitored.

I think what I am groping toward is the notion that “noise” comes too late in this proposal. If either party is known, or suspected, it may be time-consuming to complete the loop on the messages, but adding noise at the servers is more of an annoyance than serious security.

At least when the adversary can effectively monitor the relevant nodes. Assuming that the adversary can’t perform such monitoring seems like a risky proposition.

Thoughts?

IoT: Move Past the Rhetoric and Focus on Success

Filed under: Cybersecurity,IoT - Internet of Things,Security — Patrick Durusau @ 8:03 pm

Move Past the Rhetoric and Focus on Success

A recent missive from Booz Allen on the Internet of Things.

Two critical points that I want to extract for your consideration:


New Models for Security

The proliferation of IoT devices drastically increases the attack surface and creates attractive, and sometimes easy, targets for attackers. Traditional means of securing networks will no longer suffice as attack risks increase exponentially. We will help you learn how to think about security in an IoT world and new security models.

[page 4]

You have to credit Booz Allen with being up front about “…attack risks increas[ing] exponentially,” considering that “Hello Barbie” picked up an STD on her first Christmas.

Do you have a grip on your current exposure to cyber-risk? What is that going to look like when it increases exponentially?

I’m not a mid-level manager but I would be wary of increasing cyber-risk exponentially, especially without a concrete demonstration of value add from the Internet-of-Things.

The second item:

Interoperability is Key to Everything

IoT implementations typically contain hundreds of sensors embedded in different “things”, connected to gateways and the Cloud, with data flowing back and forth via a communication protocol. If each node within the system “speaks” the same language, then the implementation functions seamlessly. When these nodes don’t talk with each other, however, you’re left with an Internet of one or some things, rather than an Internet of everything. [page 4]

IoT implementation, at its core, is the integration of dozens and up to tens of thousands of devices seamlessly communicating with each other, exchanging information and commands, and revealing insights. However, when devices have different usage scenarios and operating requirements that aren’t compatible with other devices, the system can break down. The ability to integrate different elements or nodes within broader systems, or bringing data together to drive insights and improve operations, becomes more complicated and costly. When this occurs, IoT can’t reach its potential, and rather than an Internet of everything, you see siloed Internets of some things.
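The mismatches hiding behind that paragraph are usually mundane: two sensors reporting the same quantity with different field names, nesting and units. A toy sketch of the shim code every “seamless” IoT deployment quietly accumulates (the payload schemas here are invented, not from any real vendor):

```python
def normalize(reading):
    """Map two hypothetical vendor payloads onto one internal schema."""
    if "temp_f" in reading:
        # Vendor A: flat payload, temperature in Fahrenheit.
        return {"celsius": (reading["temp_f"] - 32) * 5 / 9,
                "device": reading["id"]}
    if "measurement" in reading:
        # Vendor B: nested payload, temperature in Celsius.
        return {"celsius": reading["measurement"]["value"],
                "device": reading["sensor"]["serial"]}
    raise ValueError("unknown payload schema")

vendor_a = {"id": "a-17", "temp_f": 68.0}
vendor_b = {"sensor": {"serial": "b-09"}, "measurement": {"value": 20.0}}

# Both devices report the same temperature, in incompatible shapes.
assert normalize(vendor_a)["celsius"] == normalize(vendor_b)["celsius"] == 20.0
```

Every additional vendor adds another branch, and every firmware update can break one, which is why an Internet of some things is the realistic outcome.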

Haven’t we seen this play before? Wasn’t it called the Semantic Web? I suppose it’s now called the Failed Semantic Web (FSW)?

Booz Allen would be more forthright to say, “…the system is broken down…” rather than “…the system can break down.”

I can’t think of a better way to build a failing IoT project than to presume that interoperability exists now (or that it is likely to exist, outside highly constrained circumstances).

Let’s take a simpler problem than everything and limit it to interchange of pricing data in the energy market. As you might expect, there is a standard, a rather open one, on that topic: Energy Market Information Exchange (EMIX) Version 1.0.

That specification is dated 11 January of 2012, which means on 11 January 2016, it will be four years old.

As of today, a search on “Energy Market Information Exchange (EMIX) Version 1.0” produces 695 “hits,” but 327 of them are at Oasis-open.org, the organization where a TC produced this specification.

Even more interesting, only three pages of results are returned with the notation that beyond 30 results, the rest have been suppressed as duplicates.

So, at three years and three hundred and thirty days, Energy Market Information Exchange (EMIX) Version 1.0 has thirty (30) non-duplicate “hits?”

I can’t say that inspires me with a lot of hope for impact on interoperability in the U.S. Energy Market. Like the work Booz Allen cites, this too was sponsored by NIST and the DOE (EISA).

One piece of advice from Booz Allen is worth following:

Start Small

You may actually have IoT implementations within your organization that you aren’t aware of. And if you have any type of health wearable, you are actually already participating in IoT. You don’t have to instrument every car, road, and sign to have an Internet of some things. [page 10]

Building the Internet of Things for everybody should not be on your requirements list.

An Internet of Some Things will be your things and with proper planning it will improve your bottom line. (Contrary to the experience with the Semantic Web.)

