Why Apple’s Recent Security Flaw Is So Scary by Brian Barrett.
From the post:
On Friday, Apple quietly released iOS 7.0.6, explaining in a brief release note that it fixed a bug in which “an attacker with a privileged network position may capture or modify data in sessions protected by SSL/TLS.” That’s the understated version. Another way to put it? Update your iPhone right now.
Oh, and by the way, OS X has the same issues—except there’s no fix out yet.
…
Google’s Adam Langley detailed the specifics of the bug in his personal blog, if you’re looking to stare at some code. But essentially, it comes down to one simple extra line out of nearly 2,000. As ZDNet points out, one extra “goto fail;” statement tucked in about a third of the way through means that the SSL verification will succeed in almost every case, regardless of whether the keys match up or not.
Langley’s take, and the most plausible? That it could have happened to anybody:
This sort of subtle bug deep in the code is a nightmare. I believe that it’s just a mistake and I feel very bad for whomever might have slipped in an editor and created it.
I am sure editing mistakes happen, but what puzzles me is why such a “…subtle bug deep in the code…” wasn’t detected during QA.
No matter how subtle or how deep the bug, if passing invalid SSL keys works, you have a bug.
It might be very hard to locate the bug in the code, but detecting it under any sane testing conditions should have been trivial. Yes?
Or was the bug discovered in testing but its cause couldn’t be easily located, so the code shipped anyway?
All the more reason to have sufficient subject identities to track both coding and testing, and the orders related to the same.
[…] recent Apple “goto fail” farce would not happen in an open source product. Some tester, intentionally or accidentally […]
Pingback by Open Source: Option of the Security Conscious « Another Word For It — March 10, 2014 @ 10:00 am