Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

January 13, 2017

Security Design: Stop Trying to Fix the User (Or Catch Offenders)

Filed under: Cybersecurity, Security — Patrick Durusau @ 4:09 pm

Security Design: Stop Trying to Fix the User by Bruce Schneier.

From the post:

Every few years, a researcher replicates a security study by littering USB sticks around an organization’s grounds and waiting to see how many people pick them up and plug them in, causing the autorun function to install innocuous malware on their computers. These studies are great for making security professionals feel superior. The researchers get to demonstrate their security expertise and use the results as “teachable moments” for others. “If only everyone was more security aware and had more security training,” they say, “the Internet would be a much safer place.”

Enough of that. The problem isn’t the users: it’s that we’ve designed our computer systems’ security so badly that we demand the user do all of these counterintuitive things. Why can’t users choose easy-to-remember passwords? Why can’t they click on links in emails with wild abandon? Why can’t they plug a USB stick into a computer without facing a myriad of viruses? Why are we trying to fix the user instead of solving the underlying security problem?

Traditionally, we’ve thought about security and usability as a trade-off: a more secure system is less functional and more annoying, and a more capable, flexible, and powerful system is less secure. This “either/or” thinking results in systems that are neither usable nor secure.

Non-reliance on users is a good first step.

An even better second step would be to create financial incentives for taking Bruce's first step.

Financial incentives similar to those in products liability cases, where a “reasonable care” standard evolves over time. No product has to be perfect, but there are clear expectations about how bad a product is allowed to be.

Liability should fall not only on the producer of the software but also on the enterprises using it, when third parties are harmed by data breaches.

Claims about the complexity of software are true, but can you honestly say that software is more complex than drug interactions across an unknown population? Yet, we have products liability standards for those cases.

Without financial incentives, substantial financial incentives such as those in products liability, cybersecurity experts (Bruce excepted) will still be trying to “fix the user” a decade from now.

The romantic quest to capture and punish those guilty of cybercrime hasn’t worked so well. One collection of cybercrime statistics pointed out that detected cybercrime incidents increased by 38% in the last year.

Tell me, do you know of any statistics showing a 38% increase in the arrest and prosecution of cybercriminals in the last year? No? That’s what I thought.

With estimated cybercrime prevention spending at $80 billion this year and an estimated cybercrime cost of $2 trillion by 2019, you don’t seem to be getting very much return on your investment.
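As a back-of-envelope sketch using only the figures cited above ($80 billion in prevention spending this year, roughly $2 trillion in cybercrime costs projected by 2019; treating the $2 trillion as an annual figure is an assumption for illustration), the mismatch is easy to quantify:

```python
# Back-of-envelope comparison of cybercrime prevention spending vs. projected losses.
# Figures are those cited in the post; treating the $2 trillion as an annual loss
# figure for 2019 is an illustrative assumption, not a claim from the post.

annual_prevention_spend = 80e9   # ~$80 billion spent on prevention this year
projected_losses_2019 = 2e12     # ~$2 trillion in cybercrime costs by 2019

ratio = projected_losses_2019 / annual_prevention_spend
print(f"Projected losses are roughly {ratio:.0f}x prevention spending.")
# Prints: Projected losses are roughly 25x prevention spending.
```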

We know that fixing users doesn’t work and capturing cybercriminals is a dicey proposition.

Both of those issues can be addressed by establishing incentives for more secure software. (Liability developed case by case in the courts takes legislative misjudgment out of the loop, enabling the organic growth of software liability principles.)
