Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

September 22, 2016

Hacker-Proof Code Confirmed [Can Liability Be Far Behind?]

Filed under: Formal Methods,Law,Programming — Patrick Durusau @ 8:56 pm

Hacker-Proof Code Confirmed by Kevin Hartnett.

From the post:

In the summer of 2015 a team of hackers attempted to take control of an unmanned military helicopter known as Little Bird. The helicopter, which is similar to the piloted version long-favored for U.S. special operations missions, was stationed at a Boeing facility in Arizona. The hackers had a head start: At the time they began the operation, they already had access to one part of the drone’s computer system. From there, all they needed to do was hack into Little Bird’s onboard flight-control computer, and the drone was theirs.

When the project started, a “Red Team” of hackers could have taken over the helicopter almost as easily as it could break into your home Wi-Fi. But in the intervening months, engineers from the Defense Advanced Research Projects Agency (DARPA) had implemented a new kind of security mechanism — a software system that couldn’t be commandeered. Key parts of Little Bird’s computer system were unhackable with existing technology, its code as trustworthy as a mathematical proof. Even though the Red Team was given six weeks with the drone and more access to its computing network than genuine bad actors could ever expect to attain, they failed to crack Little Bird’s defenses.

“They were not able to break out and disrupt the operation in any way,” said Kathleen Fisher, a professor of computer science at Tufts University and the founding program manager of the High-Assurance Cyber Military Systems (HACMS) project. “That result made all of DARPA stand up and say, oh my goodness, we can actually use this technology in systems we care about.”

Reducing the verification requirement to a manageable size appears to be the key to DARPA’s success.

That is, rather than verifying the entire program, only the critical parts, such as those that keep hackers out, need to be verified.
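The post does not show any of the HACMS code, and the sketch below is not it. It is only a minimal, hypothetical Haskell illustration of the idea: confine the security-critical behavior behind one small interface, so that only that small piece has to be examined or verified, while the rest of the system can remain ordinary code. The Command type and parseCommand function are invented for the example.

```haskell
-- Minimal sketch, not HACMS code: keep the security-critical logic behind
-- one small, checkable boundary so only that boundary needs verification.

-- In a real program the Command constructor would not be exported, so the
-- unverified bulk of the system could not forge a command; every command
-- would have to pass through parseCommand.
newtype Command = Command String

-- The small, critical piece: the only way to obtain a Command.
parseCommand :: String -> Maybe Command
parseCommand s
  | s `elem` ["hover", "land", "return"] = Just (Command s)
  | otherwise                            = Nothing

-- The large, unverified rest of the system only ever sees valid Commands.
execute :: Command -> IO ()
execute (Command c) = putStrLn ("executing: " ++ c)

main :: IO ()
main = mapM_ (maybe (putStrLn "rejected") execute . parseCommand)
             ["hover", "rm -rf /"]
```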

If this spreads, failure to formally verify critical parts of software would be a natural place to begin imposing liability for poorly written code.

PS: Would formal proof of data integration be a value-add?

June 19, 2016

Formal Methods for Secure Software Construction

Filed under: Cybersecurity,Formal Methods,Programming,Software,Software Engineering — Patrick Durusau @ 10:51 am

Formal Methods for Secure Software Construction by Ben Goodspeed.

Abstract:

The objective of this thesis is to evaluate the state of the art in formal methods usage in secure computing. From this evaluation, we analyze the common components and search for weaknesses within the common workflows of secure software construction. An improved workflow is proposed and appropriate system requirements are discussed. The systems are evaluated and further tools in the form of libraries of functions, data types and proofs are provided to simplify work in the selected system. Future directions include improved program and proof guidance via compiler error messages, and targeted proof steps.

Goodspeed chose Idris for this project, saying:

The criteria for selecting a language for this work were expressive power, theorem proving ability (sufficient to perform universal quantification), extraction/compilation, and performance. Idris has sufficient expressive power to be used as a general purpose language (by design) and has library support for many common tasks (including web development). It supports machine verified proof and universal quantification over its datatypes and can be directly compiled to produce efficiently sized executables with reasonable performance (see section 10.1 for details). Because of these characteristics, we have chosen Idris as the basis for our further work. (at page 57)

The other contenders were Coq, Agda, Haskell, and Isabelle.

Ben provides examples of using Idris and his Proof Driven Development (PDD), but stops well short of solving the problem of secure software construction.
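None of the thesis’s Idris code is reproduced here. As a rough, hypothetical stand-in, the Haskell sketch below (using GADTs, which only approximate Idris’s dependent types) shows the kind of compiler-checked guarantee at stake: a vector indexed by its length, so taking the head of an empty vector is a type error rather than a runtime failure. The names and types are illustrative, not from the thesis.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Hypothetical sketch, not from Goodspeed's thesis: a length-indexed vector.
-- The property "non-empty" is checked by the compiler, not tested at runtime.

data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- vhead only accepts provably non-empty vectors; vhead VNil does not compile.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

main :: IO ()
main = print (vhead (VCons (1 :: Int) VNil))
```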

While waiting upon the arrival of viable methods for secure software construction, shouldn’t formal methods be useful in uncovering and documenting failures in current software?

My reasoning: the greater specificity and exactness of formal methods will draw attention to gaps and failures that custom and practice conceal.

Akin to the human eye eliding over mistakes such as “When the the cat runs.”

The average reader “auto-corrects” for the second “the” in that sentence, even when they know the word appears twice.

Perhaps that is a better way to say it: Formal methods avoid the human tendency to auto-correct or elide over unknown outcomes in code.
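Exhaustiveness checking is a very modest slice of formal methods, but it is a concrete illustration of that point: the tooling refuses to elide over a case the reader’s eye would skip. The Haskell sketch below is hypothetical; the Outcome type and describe function are invented for the example.

```haskell
{-# OPTIONS_GHC -Wincomplete-patterns #-}

-- A minimal sketch of one mechanical check formal tooling provides:
-- exhaustiveness. A human reader "auto-corrects" past a missing case;
-- the compiler does not. Delete the Timeout equation below and GHC
-- reports the uncovered pattern instead of silently eliding it.

data Outcome = Success | Timeout | Refused

describe :: Outcome -> String
describe Success = "request succeeded"
describe Timeout = "request timed out"
describe Refused = "connection refused"

main :: IO ()
main = mapM_ (putStrLn . describe) [Success, Timeout, Refused]
```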

August 11, 2014

Archive of Formal Proofs

Filed under: Formal Methods,GPU,Proof Theory — Patrick Durusau @ 1:47 pm

Archive of Formal Proofs

From the webpage:

The Archive of Formal Proofs is a collection of proof libraries, examples, and larger scientific developments, mechanically checked in the theorem prover Isabelle. It is organized in the way of a scientific journal, is indexed by dblp and has an ISSN: 2150-914x. Submissions are refereed. The preferred citation style is available [here].

It may not be tomorrow but if I don’t capture this site today, I will need to find it in the near future!

Just skimming I did see an entry of interest to the GPU crowd: Syntax and semantics of a GPU kernel programming language by John Wickerson.

Abstract:

This document accompanies the article “The Design and Implementation of a Verification Technique for GPU Kernels” by Adam Betts, Nathan Chong, Alastair F. Donaldson, Jeroen Ketema, Shaz Qadeer, Paul Thomson and John Wickerson. It formalises all of the definitions provided in Sections 3 and 4 of the article.

I first saw this in a tweet by Computer Science.

June 6, 2014

Myth Busting Doubts About Formal Methods

Filed under: Formal Methods,Modeling,Programming — Patrick Durusau @ 6:38 pm

Use of Formal Methods at Amazon Web Services by Chris Newcombe, Tim Rath, Fan Zhang, Bogdan Munteanu, Marc Brooker, and Michael Deardeuff. (PDF)

From the paper:

Since 2011, engineers at Amazon Web Services (AWS) have been using formal specification and model-checking to help solve difficult design problems in critical systems. This paper describes our motivation and experience, what has worked well in our problem domain, and what has not. When discussing personal experiences we refer to authors by their initials.

At AWS we strive to build services that are simple for customers to use. That external simplicity is built on a hidden substrate of complex distributed systems. Such complex internals are required to achieve high-availability while running on cost-efficient infrastructure, and also to cope with relentless rapid business-growth. As an example of this growth; in 2006 we launched S3, our Simple Storage Service. In the 6 years after launch, S3 grew to store 1 trillion objects [1]. Less than a year later it had grown to 2 trillion objects, and was regularly handling 1.1 million requests per second [2].

S3 is just one of tens of AWS services that store and process data that our customers have entrusted to us. To safeguard that data, the core of each service relies on fault-tolerant distributed algorithms for replication, consistency, concurrency control, auto-scaling, load-balancing, and other coordination tasks. There are many such algorithms in the literature, but combining them into a cohesive system is a major challenge, as the algorithms must usually be modified in order to interact properly in a real-world system. In addition, we have found it necessary to invent algorithms of our own. We work hard to avoid unnecessary complexity, but the essential complexity of the task remains high.

The authors are not shy about arguing for the value of formal methods for complex systems:

In industry, formal methods have a reputation of requiring a huge amount of training and effort to verify a tiny piece of relatively straightforward code, so the return on investment is only justified in safety-critical domains such as medical systems and avionics. Our experience with TLA+ has shown that perception to be quite wrong. So far we have used TLA+ on 6 large complex real-world systems. In every case TLA+ has added significant value, either finding subtle bugs that we are sure we would not have found by other means, or giving us enough understanding and confidence to make aggressive performance optimizations without sacrificing correctness. We now have 7 teams using TLA+, with encouragement from senior management and technical leadership. (emphasis added)
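AWS’s TLA+ specifications are not in the excerpt above, and the sketch below is not TLA+. It is only a minimal Haskell illustration of what model checking does: enumerate every reachable state of a system and test an invariant in each one, rather than relying on a handful of hand-picked test runs. The two-client lock protocol and its invariant are invented for the example.

```haskell
import qualified Data.Set as Set

-- Hypothetical sketch of model checking: exhaustively explore the reachable
-- states of a tiny two-client lock protocol and check mutual exclusion in
-- every one of them.

type State = (Bool, Bool)  -- (client A holds the lock, client B holds the lock)

step :: State -> [State]
step (a, b) =
  [ (True,  b) | not a, not b ] ++   -- A acquires (only if the lock is free)
  [ (a, True)  | not a, not b ] ++   -- B acquires (only if the lock is free)
  [ (False, b) | a ] ++              -- A releases
  [ (a, False) | b ]                 -- B releases

invariant :: State -> Bool
invariant (a, b) = not (a && b)      -- never both clients hold the lock

-- Depth-first exploration of all reachable states, checking the invariant.
check :: Set.Set State -> [State] -> Bool
check _    []       = True
check seen (s:rest)
  | s `Set.member` seen = check seen rest
  | not (invariant s)   = False
  | otherwise           = check (Set.insert s seen) (step s ++ rest)

main :: IO ()
main = print (check Set.empty [(False, False)])  -- True: invariant holds everywhere
```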

Hard to argue with “real-world” success. Yes?

Well, especially if you want your system to be successful. Compare Amazon’s S3 with the ill-fated healthcare site.

The paper also covers what formal methods cannot do and recounts how this was sold to programmers within Amazon.

I suggest reading the paper more than once and following all the links in the bibliography, but if you are in a hurry, at least see these two:

Lamport, L. The TLA Home Page; http://research.microsoft.com/en-us/um/people/lamport/tla/tla.html

Lamport, L. The Wildfire Challenge Problem; http://research.microsoft.com/en-us/um/people/lamport/tla/wildfire-challenge.html

Public forum of the TLA+ user community; https://groups.google.com/forum/?hl=en&fromgroups#!forum/tlaplus

Which leaves me with the question: How do you create a reliability guarantee for a topic map? Manual inspection doesn’t scale.

I first saw this in a tweet by Marc Brooker.
