Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

September 20, 2018

Software disenchantment (a must read)

Filed under: Computer Science,Design,Programming,Software,Software Engineering — Patrick Durusau @ 3:34 pm

Software disenchantment by Nikita Prokopov.

From the post:


Windows 95 was 30Mb. Today we have web pages heavier than that! Windows 10 is 4Gb, which is 133 times as big. But is it 133 times as superior? I mean, functionally they are basically the same. Yes, we have Cortana, but I doubt it takes 3970 Mb. But whatever Windows 10 is, is Android really 150% of that?

Google keyboard app routinely eats 150 Mb. Is an app that draws 30 keys on a screen really five times more complex than the whole Windows 95? Google app, which is basically just a package for Google Web Search, is 350 Mb! Google Play Services, which I do not use (I don’t buy books, music or videos there)—300 Mb that just sit there and which I’m unable to delete.

Yep, that and more. Brim full of hurtful remarks but also suggestions for a leaner, faster and more effective future.

Prokopov doesn’t mention malware, but “ratio of bugs per line of code” has a good summary of the various estimates of bugs per line of code.
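
To see what those ratios imply at today's scale, here is a back-of-the-envelope sketch (the defect densities below are commonly quoted ballpark figures, used purely for illustration, not measurements of any particular product):

    # Back-of-the-envelope: expected residual defects for a large codebase at a
    # few commonly quoted defect densities (illustrative numbers only).
    lines_of_code = 10_000_000                 # a hypothetical large application
    for defects_per_kloc in (0.5, 5.0, 25.0):  # best-in-class .. industry-average guesses
        expected = lines_of_code / 1000 * defects_per_kloc
        print(f"{defects_per_kloc:>4} defects/KLOC -> ~{expected:,.0f} residual defects")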

Government programmers and their contractors should write as much bloated code as their funding will support.

Programmers working in the public interest should read Prokopov deeply and follow his advice.

June 30, 2017

Reinventing Wheels with No Wheel Experience

Filed under: Cybersecurity,Programming,Security,Software Engineering — Patrick Durusau @ 9:33 am

Rob Graham, @ErrataRob, captured an essential truth when he tweeted:

Wheel re-invention is inherent in every new programming language, every new library and, no doubt, nearly every new program.

How much “wheel experience” does any programmer have across the breadth of software vulnerabilities?

Hard to imagine meaningful numbers on the “wheel experience” of programmers in general, but vulnerability reports make it clear that either “wheel experience” is lacking or the lesson didn’t stick. Your call.

Vulnerabilities may occur in any release so standard practice is to check every release, however small. Have your results independently verified by trusted others.

PS: For the details on systemd, see: Sergey Bratus and the systemd thread.

December 17, 2016

Radio Show Host Manual

Filed under: Radio,Software Engineering — Patrick Durusau @ 9:07 pm

Host manual for the Software Engineering Radio

The manual to read if you want to do a show for Software Engineering Radio, and quite possibly the manual for any radio show.

Why?

Consider the numbers (page 7, although engineers haven’t figured out pagination yet):

  • is in its 11th year with over 270 episodes;
  • published three times monthly by IEEE Software magazine;
  • is downloaded in aggregate 180,000 times or more per month (including current and back catalog), with each show reaching 30,000-40,000 within three months;
  • was named the #1 rated developer podcast based on an aggregation of hacker news comments;
  • appeared in The Simple Programmer’s ultimate list of developer podcasts;
  • was included among 11 podcasts that will make you a better software engineer;
  • is highly rated on iTunes “Top Podcasts” under the category Software:How To;
  • features thought leaders in the field (Eric Evans, David Heinemeier Hansson, Kent Beck, The Gang of Four, Rich Hickey, Michael Nygard, James Turnbull, Michael Stonebraker, Adrian Cockroft, Martin Fowler, Martin Odersky, Eric Brewer,…);
  • a demographic survey we did a few years ago indicated that most of our listeners are software engineers with 5-10 years experience, architects, and technical managers.

Twenty-eight pages of information and suggestions.

    Instead of trolling internet censors and their suggestions, create high quality content. (Advice to myself as much as anyone else.)

    June 19, 2016

    Formal Methods for Secure Software Construction

    Filed under: Cybersecurity,Formal Methods,Programming,Software,Software Engineering — Patrick Durusau @ 10:51 am

    Formal Methods for Secure Software Construction by Ben Goodspeed.

    Abstract:

    The objective of this thesis is to evaluate the state of the art in formal methods usage in secure computing. From this evaluation, we analyze the common components and search for weaknesses within the common workflows of secure software construction. An improved workflow is proposed and appropriate system requirements are discussed. The systems are evaluated and further tools in the form of libraries of functions, data types and proofs are provided to simplify work in the selected system. Future directions include improved program and proof guidance via compiler error messages, and targeted proof steps.

    Goodspeed chose Idris for this project, saying:

    The criteria for selecting a language for this work were expressive power, theorem proving ability (sufficient to perform universal quantification), extraction/compilation, and performance. Idris has sufficient expressive power to be used as a general purpose language (by design) and has library support for many common tasks (including web development). It supports machine verified proof and universal quantification over its datatypes and can be directly compiled to produce efficiently sized executables with reasonable performance (see section 10.1 for details). Because of these characteristics, we have chosen Idris as the basis for our further work. (at page 57)

    The other contenders were Coq, Agda, Haskell, and Isabelle.

    Ben provides examples of using Idris and his Proof Driven Development (PDD), but stops well short of solving the problem of secure software construction.
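
    To give a flavor of “machine verified proof and universal quantification over its datatypes” without installing Idris, here is a minimal sketch in Lean, a comparable proof assistant (my example, not taken from the thesis):

        -- Properties checked for every input by the proof checker, not by sampling tests.
        example : ∀ n : Nat, n + 0 = n :=
          fun _ => rfl                 -- holds by computation

        theorem my_add_comm (m n : Nat) : m + n = n + m :=
          Nat.add_comm m n             -- reusing a proof from Lean's standard library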

    While waiting upon the arrival of viable methods for secure software construction, shouldn’t formal methods be useful in uncovering and documenting failures in current software?

    The reasoning: the greater specificity and exactness of formal methods will draw attention to gaps and failures concealed by custom and practice.

    Akin to the human eye eliding over mistakes such as “When the the cat runs.”

    The average reader “auto-corrects” for the presence of the second “the” in that sentence, even knowing there are two occurrences of the word “the.”

    Perhaps that is a better way to say it: Formal methods avoid the human tendency to auto-correct or elide over unknown outcomes in code.

    June 4, 2016

    Learning to like design documents

    Filed under: Programming,Software,Software Engineering — Patrick Durusau @ 8:41 pm

    Learning to like design documents by Julia Evans.

    From the post:

    Hi everyone! Today we’re going to talk about software engineering and process!

    A design document is where, before starting to implement a system, you write up a thing explaining what the system is supposed to do first and how you’re planning to accomplish that. I think there are basically two goals:

    • tell people what you’re doing
    • figure out design problems with the system before you’ve been coding for 2 months

    I understand that it’s super important to think ahead a lot before huge projects, but a little bit of thinking can be helpful even for smaller projects. I asked some people recently if they write design docs for small projects and some of them said “yeah totally! small ones! it helps! :D”.

    I used to get kind of grumpy when someone was like “hey julia can you write a design document for your system?” It would seem like a reasonable idea, though, so I’d try to do it! But the first couple of times I tried to write one I felt like it didn’t actually really help me! I liked the idea in principle, but I didn’t really know how to apply it and I felt like it was hard to get good feedback.

    Last week I wrote a design doc and I thought it was sort of helpful. Here are some current thoughts.

    Be forewarned that Julia is a gifted writer and you will enjoy her posts more than your design documents. 😉

    Still, Julia makes a great case for the use of design documents (a/k/a “documentation”).

    Unless your job security is tied up in undocumented, spaghetti COBOL code (or its equivalent in another language), try putting Julia’s advice into action.

    If you are looking for really broad but practical reading in programming, check out Julia’s list of all her posts. Pick one at random every week. You won’t be disappointed.

    April 23, 2016

    The New Normal

    Filed under: Design,Programming,Software Engineering — Patrick Durusau @ 8:09 pm

    The New Normal, a series by Michael Nygard.

    I encountered one of the more recent posts in this series and went looking for its beginning: The New Normal: Failure is a Good Thing.

    From that starting post:


    Everything breaks. It’s just a question of when and how badly.

    What we need is a new approach where “continuous partial failure” is the normal state of affairs. Continuous partial failure opens the doors to making big changes happen because you’re already good at executing the small stuff.

    In subsequent posts, I’ll talk about moving from the mentality of preventing problems to actually promoting them. I’ll look at the aging models for achieving resiliency and introduce microservices as an extension of the concept of antifragility into the design of IT infrastructure, applications, and organizations.

    Along the way, I’ll share some stories about Netflix and their classic Chaos Monkey, how Amazon is becoming an increasingly terrifying competitor, the significance of maneuverability and the art of war, the unforeseen consequences of outsourcing and how Cognitect’s simple and sharp tools play a pivotal role in shaping the new IT blueprint.

    Does anyone seriously doubt the proposition: Everything breaks?

    From a security perspective, I would not argue with Everything’s broken.

    I’m starting at the beginning and working my way forward in this series. It promises to be seriously rewarding.

    Enjoy!

    January 16, 2016

    End The Lack Of Diversity On The Internet Today!

    Filed under: Design,Diversity,Programming,Software,Software Engineering — Patrick Durusau @ 4:04 pm

    Julia Evans tweeted earlier today:

    “programmers are 0.66% of internet users, and build the software that everyone uses” – @heddle317

    The strengths of having diversity on teams, including software teams, are well known and I won’t repeat those arguments here.

    See: Why Diverse Teams Create Better Work, Diversity and Work Group Performance, More Diverse Personalities Mean More Successful Teams, Managing Groups and Teams/Diversity, or How Diversity Makes Us Smarter, for five entry points into the literature on diversity.

    With 0.66% of internet users writing software for everyone, do you see the lack of diversity?

    One response is to turn people into “Linus Torvalds” so we have a broader diversity of people programming. Good thought but I don’t know of anyone who wants to be a Linus Torvalds. (Sorry Linus.)

    There’s a great benefit to having more people master programming but long-term, it’s not a solution to the lack of diversity in the production of software for the Internet.

    Even if the number of people writing software for the Internet went up ten-fold, that’s only 6.6% of the population of Internet users. Far too monotone to qualify as any type of diversity.

    There is another way to increase diversity in the production of Internet software.

    Warnings: You will have to express your intuitive experience in words. You will have to communicate your experiences to programmers. Some programmers will think they know a “better way” for you to experience the interface. Always remember your experience is the user’s experience, unlike theirs.

    You can use software built for the Internet, comment on it, track your comments and respond to comments from programmers. Programmers won’t seek you or your comments out, so volunteering is the only option.

    Programmers have their views, but if software doesn’t meet the needs, habits and customs of users, it’s useless.

    Programmers can only learn the needs, habits and customs of users from you.

    Are you going to help end this lack of diversity and help programmers write better software, or not?

    December 26, 2015

    Five Key Phases of Software Development – Ambiguity

    Filed under: Humor,Software,Software Engineering — Patrick Durusau @ 1:05 pm

    [image: development]

    It isn’t clear to me if the answer is wrong because:

    • Failure to follow instructions: No description followed the five (5) stages.
    • Five stages as listed were incorrect?

    A popular alternative answer to the same question:

    [image: development_life_cycle]

    I have heard rumors and exhortations about requirements and documentation/testing but their incidence in practice is said to be low to non-existent.

    As far as “designing” the program, isn’t bargaining what “agile programming” is all about? Showing users the latest misunderstanding of their desires and arguing it is in fact better than their original requests? Sounds like bargaining to me.

    Anger may be a bit brief for “code the program” but after having lost arguments with users and been told to make the UI a particular, less than best way, isn’t anger a fair description?

    Acceptance is a no-brainer for “operate and maintain the system.” If no one is actively trying to change the system, what other name would you have for that state?

    On the whole, it was failure to follow instructions and supply a description of each stage that led to the answer being marked as incorrect. 😉

    However, should you ever take the same exam, may I suggest that you give the popular alternative, although mythic, answer to such a question.

    Like everyone else, software professionals don’t appreciate their myths being questioned or disputed.

    I first saw the test results in a tweet by Elena Williams.

    November 13, 2015

    Reverse Engineering Challenges

    Filed under: Programming,Reverse Engineering,Software Engineering — Patrick Durusau @ 4:42 pm

    Reverse Engineering Challenges by Dennis Yurichev.

    After the challenge/exercise listing:

    About the website

    Well, “challenges” is a loud word; these are rather just exercises.

    Some exercises were in my book for beginners, some were in my blog, and I eventually decided to keep them all in one single place like this website, so be it.

    The source code of this website is also available at GitHub: https://github.com/dennis714/challenges.re. I would love to get any suggestions and notices about misspellings and typos.

    Exercise numbers

    There is no correlation between exercise number and hardness. Sorry: I add new exercises occasionally and I can’t use some fixed numbering system, so numbers are chaotic and have no meaning at all.

    On the other hand, I can assure you, exercise numbers will never change, so my readers can refer to them, and they are also referred to from my book for beginners.

    Duplicates

    There are some pieces of code which really do the same thing, but in different ways. Or maybe it is implemented for different architectures (x86 and Java VM/.NET). That’s OK.

    A major resource for anyone interested in learning reverse engineering!

    If you are in the job market, Dennis concludes with this advice:

    How can I measure my performance?

    • As far as I can realize, if a reverse engineer can solve most of these exercises, he is a hot target for head hunters (programming jobs in general).
    • Those who can solve from ¼ to ½ of all levels, perhaps, can freely apply for reverse engineering/malware analyst/vulnerability research job positions.
    • If you feel even the first level is too hard for you, you may probably drop the idea of learning RE.

    You have a target, the book and the exercises. The rest is up to you.

    November 12, 2015

    The Architecture of Open Source Applications

    Filed under: Books,Computer Science,Programming,Software,Software Engineering — Patrick Durusau @ 9:08 pm

    The Architecture of Open Source Applications

    From the webpage:

    Architects look at thousands of buildings during their training, and study critiques of those buildings written by masters. In contrast, most software developers only ever get to know a handful of large programs well—usually programs they wrote themselves—and never study the great programs of history. As a result, they repeat one another’s mistakes rather than building on one another’s successes.

    Our goal is to change that. In these two books, the authors of four dozen open source applications explain how their software is structured, and why. What are each program’s major components? How do they interact? And what did their builders learn during their development? In answering these questions, the contributors to these books provide unique insights into how they think.

    If you are a junior developer, and want to learn how your more experienced colleagues think, these books are the place to start. If you are an intermediate or senior developer, and want to see how your peers have solved hard design problems, these books can help you too.

    Follow us on our blog at http://aosabook.org/blog/, or on Twitter at @aosabook and using the #aosa hashtag.

    I happened upon these four books because of a tweet that mentioned: Early Access Release of Allison Kaptur’s “A Python Interpreter Written in Python” Chapter, which I found to be the tenth chapter of “500 Lines.”

    OK, but what the hell is “500 Lines?” Poking around a bit I found The Architecture of Open Source Applications.

    Which is the source for the material I quote above.

    Do you learn from example?

    Let me give you the flavor of three of the completed volumes and the “500 Lines” that is in progress:

    The Architecture of Open Source Applications: Elegance, Evolution, and a Few Fearless Hacks (vol. 1), from the introduction:

    Carpentry is an exacting craft, and people can spend their entire lives learning how to do it well. But carpentry is not architecture: if we step back from pitch boards and miter joints, buildings as a whole must be designed, and doing that is as much an art as it is a craft or science.

    Programming is also an exacting craft, and people can spend their entire lives learning how to do it well. But programming is not software architecture. Many programmers spend years thinking about (or wrestling with) larger design issues: Should this application be extensible? If so, should that be done by providing a scripting interface, through some sort of plugin mechanism, or in some other way entirely? What should be done by the client, what should be left to the server, and is “client-server” even a useful way to think about this application? These are not programming questions, any more than where to put the stairs is a question of carpentry.

    Building architecture and software architecture have a lot in common, but there is one crucial difference. While architects study thousands of buildings in their training and during their careers, most software developers only ever get to know a handful of large programs well. And more often than not, those are programs they wrote themselves. They never get to see the great programs of history, or read critiques of those programs’ designs written by experienced practitioners. As a result, they repeat one another’s mistakes rather than building on one another’s successes.

    This book is our attempt to change that. Each chapter describes the architecture of an open source application: how it is structured, how its parts interact, why it’s built that way, and what lessons have been learned that can be applied to other big design problems. The descriptions are written by the people who know the software best, people with years or decades of experience designing and re-designing complex applications. The applications themselves range in scale from simple drawing programs and web-based spreadsheets to compiler toolkits and multi-million line visualization packages. Some are only a few years old, while others are approaching their thirtieth anniversary. What they have in common is that their creators have thought long and hard about their design, and are willing to share those thoughts with you. We hope you enjoy what they have written.

    The Architecture of Open Source Applications: Structure, Scale, and a Few More Fearless Hacks (vol. 2), from the introduction:

    In the introduction to Volume 1 of this series, we wrote:

    Building architecture and software architecture have a lot in common, but there is one crucial difference. While architects study thousands of buildings in their training and during their careers, most software developers only ever get to know a handful of large programs well… As a result, they repeat one another’s mistakes rather than building on one another’s successes… This book is our attempt to change that.

    In the year since that book appeared, over two dozen people have worked hard to create the sequel you have in your hands. They have done so because they believe, as we do, that software design can and should be taught by example—that the best way to learn how to think like an expert is to study how experts think. From web servers and compilers through health record management systems to the infrastructure that Mozilla uses to get Firefox out the door, there are lessons all around us. We hope that by collecting some of them together in this book, we can help you become a better developer.

    The Performance of Open Source Applications, from the introduction:

    It’s commonplace to say that computer hardware is now so fast that most developers don’t have to worry about performance. In fact, Douglas Crockford declined to write a chapter for this book for that reason:

    If I were to write a chapter, it would be about anti-performance: most effort spent in pursuit of performance is wasted. I don’t think that is what you are looking for.

    Donald Knuth made the same point thirty years ago:

    We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

    but between mobile devices with limited power and memory, and data analysis projects that need to process terabytes, a growing number of developers do need to make their code faster, their data structures smaller, and their response times shorter. However, while hundreds of textbooks explain the basics of operating systems, networks, computer graphics, and databases, few (if any) explain how to find and fix things in real applications that are simply too damn slow.

    This collection of case studies is our attempt to fill that gap. Each chapter is written by real developers who have had to make an existing system faster or who had to design something to be fast in the first place. They cover many different kinds of software and performance goals; what they have in common is a detailed understanding of what actually happens when, and how the different parts of large applications fit together. Our hope is that this book will—like its predecessor The Architecture of Open Source Applications—help you become a better developer by letting you look over these experts’ shoulders.

    500 Lines or Less, from the GitHub page:

    Every architect studies family homes, apartments, schools, and other common types of buildings during her training. Equally, every programmer ought to know how a compiler turns text into instructions, how a spreadsheet updates cells, and how a database efficiently persists data.

    Previous books in the AOSA series have done this by describing the high-level architecture of several mature open-source projects. While the lessons learned from those stories are valuable, they are sometimes difficult to absorb for programmers who have not yet had to build anything at that scale.

    “500 Lines or Less” focuses on the design decisions and tradeoffs that experienced programmers make when they are writing code:

    • Why divide the application into these particular modules with these particular interfaces?
    • Why use inheritance here and composition there?
    • How do we predict where our program might need to be extended, and how can we make that easy for other programmers?

    Each chapter consists of a walkthrough of a program that solves a canonical problem in software engineering in at most 500 source lines of code. We hope that the material in this book will help readers understand the varied approaches that engineers take when solving problems in different domains, and will serve as a basis for projects that extend or modify the contributions here.

    If you answered the question about learning from example with yes, add these works to your read and re-read list.

    BTW, for markup folks, check out Parsing XML at the Speed of Light by Arseny Kapoulkine.

    Many hours of reading and keyboard pleasure await anyone using these volumes.

    September 15, 2015

    Most Significant Barriers to Achieving a Strong Cybersecurity Posture

    Filed under: Cybersecurity,Software,Software Engineering — Patrick Durusau @ 6:15 pm

    Cyber-Security Stat of the Day, sponsored by Grid Cyber Sec, is a window into cyber-security practices/thinking.

    For September 14, 2015, we find Most Significant Barriers to Achieving a Strong Cybersecurity Posture:

    [image: cyber-stat-barriers]

    Does the omission of “more secure software” shock you? (You know the difference between “shock” and “surprise.” Yes?)

    If we keep layering buggy software on top of buggy software, then we are no smarter than most of the members of Congress who think legislation can determine behavior. It can influence behavior, mostly in unintended ways, but determine it?

    Buggy software + more buggy software = cyber insecurity.

    What’s so hard about that?

    BTW, do subscribe to Cyber-Security Stat of the Day. Sometimes funny, sometimes helpful, sometimes dismaying, but it’s never boring.

    June 22, 2015

    Mars Code

    Filed under: Cybersecurity,Programming,Security,Software,Software Engineering — Patrick Durusau @ 2:55 pm

    Mars Code by Gerard Holzmann, JPL Laboratory for Reliable Software.

    Abstract:

    On August 5 at 10:18 p.m. PDT, a large rover named Curiosity made a soft landing on the surface of Mars. Given the one-way light-time to Mars, the controllers on Earth learned about the successful touchdown 14 minutes later, at 10:32 p.m. PDT. As can be expected, all functions on the rover, and on the spacecraft that brought it to Mars, are controlled by software. In this talk we review the process that was followed to secure the reliability of this code.

    Gerard Holzmann is a senior research scientist and a fellow at NASA’s Jet Propulsion Laboratory, the lab responsible for the design of the Mars Science Laboratory Mission to Mars and its Curiosity Rover. He is best known for designing the Logic Model Checker Spin, a broadly used tool for the logic verification of multi-threaded software systems. Holzmann is a fellow of the ACM and a member of the National Academy of Engineering.

    Timemark 8:50 starts the discussion of software environments for testing.

    The first slide about software reads:

    3.8 million lines
    ~ 60,000 pages
    ~ 100 really large books

    120 Parallel Threads

    2 CPUs (1 spare, not parallel, hardware backup)

    5 years development time, with a team of 40 software engineers, < 10 lines of code per hour

    1 customer, 1 use: it must work the first time
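
    (That last figure follows from the others: 3.8 million lines over 40 engineers for 5 years, at roughly 2,000 working hours per engineer-year, is 3,800,000 / (40 × 5 × 2,000) ≈ 9.5 lines of code per hour.)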

    So how do you make sure you get it right?

    Steps they took to make the software right:

    1. adopted a risk-based Coding Standard with tool-based compliance checks (very few rules, and every rule traces to a mission that failed because the rule wasn’t followed)
    2. provided training & Certification for software developers
    3. conducted daily builds integrated with Static Source Code Analysis (with penalties for breaking the build)
    4. used a tool-based Code Review process
    5. thorough unit- and (daily) integration testing
    6. did Logic Verification of critical subsystems with a model checker

    He continues to examine each of these areas in detail. Be forewarned, the first level of conformance is compiling with all warnings on and having 0 warnings. The bare minimum.
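
    As a toy illustration of the first step, a tool-based compliance check can start out as small as a script that fails the build on banned constructs. This is a sketch of mine, nothing like JPL's actual tooling, and the rule shown is only an example in the spirit of "no dynamic memory allocation after initialization":

        # Toy compliance checker (not JPL's tooling): flag constructs that a
        # "no dynamic memory allocation after initialization" style rule forbids,
        # and exit non-zero so the daily build breaks instead of relying on review.
        # Usage: python check_rules.py path/to/source/tree
        import pathlib, re, sys

        BANNED = {
            r"\bmalloc\s*\(": "dynamic allocation",
            r"\balloca\s*\(": "variable-size stack allocation",
        }

        violations = 0
        for path in pathlib.Path(sys.argv[1]).rglob("*.c"):
            for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
                for pattern, why in BANNED.items():
                    if re.search(pattern, line):
                        print(f"{path}:{lineno}: {why}: {line.strip()}")
                        violations += 1

        sys.exit(1 if violations else 0)   # a non-zero exit breaks the build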

    BTW, there are a number of resources online at the JPL Laboratory for Reliable Software (LaRS).

    Share this post with anyone who claims it is too hard to write secure software. It may be, for them, but not for everyone.

    May 22, 2015

    Rosetta’s Way Back to the Source

    Filed under: Compilers,Computer Science,Programming,Software Engineering — Patrick Durusau @ 4:11 pm

    Rosetta’s Way Back to the Source – Towards Reverse Engineering of Complex Software by Herbert Bos.

    From the webpage:

    The Rosetta project, funded by the EU in the form of an ERC grant, aims to develop techniques to enable reverse engineering of complex software that is available only in binary form. To the best of our knowledge we are the first to start working on a comprehensive and realistic solution for recovering the data structures in binary programs (which is essential for reverse engineering), as well as techniques to recover the code. The main success criterion for the project will be our ability to reverse engineer a realistic, complex binary. Additionally, we will show the immediate usefulness of the information that we extract from the binary code (that is, even before full reverse engineering), by automatically hardening the software to make it resilient against memory corruption bugs (and attacks that exploit them).

    In the Rosetta project, we target common processors like the x86, and languages like C and C++ that are difficult to reverse engineer, and we aim for full reverse engineering rather than just decompilation (which typically leaves out data structures and semantics). However, we do not necessarily aim for fully automated reverse engineering (which may well be impossible in the general case). Rather, we aim for techniques that make the process straightforward. In short, we will push reverse engineering towards ever more complex programs.

    Our methodology revolves around recovering data structures, code and semantic information iteratively. Specifically, we will recover data structures not so much by statically looking at the instructions in the binary program (as others have done), but mainly by observing how the data is used

    Research question. The project addresses the question whether the compilation process that translates source code to binary code is irreversible for complex software. Irreversibility of compilation is an assumed property that underlies most of the commercial software today. Specifically, the project aims to demonstrate that the assumption is false.
    … (emphasis added)

    Herbert Bos gives a great thumbnail sketch of the difficulties and potential for this project.
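
    To make the “observing how the data is used” idea concrete, here is a tiny sketch of mine (nothing like the Rosetta tooling): given a runtime trace of (base pointer, offset, size) memory accesses, group the offsets seen per base to guess at a struct layout.

        # Toy sketch: infer likely struct fields from a dynamic access trace,
        # rather than from the binary's instructions. Trace entries are
        # (base_pointer, offset, access_size); the addresses are made up.
        from collections import defaultdict

        trace = [
            (0x1000, 0, 8), (0x1000, 8, 4), (0x1000, 12, 4),
            (0x2000, 0, 8), (0x2000, 8, 4), (0x2000, 12, 4),
        ]

        field_sizes = defaultdict(set)
        for base, offset, size in trace:
            field_sizes[offset].add(size)

        for offset in sorted(field_sizes):
            sizes = "/".join(str(s) for s in sorted(field_sizes[offset]))
            print(f"offset {offset:>2}: accessed with {sizes}-byte reads/writes")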

    Looking forward to news of a demonstration that “irreversibility of compilation” is false.

    One important use case: verifying that software which claims to use buffer overflow prevention techniques has in fact done so. Not the sort of thing I would entrust to statements in marketing materials.

    April 24, 2015

    jQAssistant 1.0.0 released

    Filed under: Neo4j,Programming,Software,Software Engineering — Patrick Durusau @ 2:25 pm

    jQAssistant 1.0.0 released by Dirk Mahler.

    From the webpage:

    We’re proud to announce the availability of jQAssistant 1.0.0 – lots of thanks go to all the people who made this possible with their ideas, criticism and code contributions!

    Feature Overview

    • Static code analysis tool using the graph database Neo4j
    • Scanning of software related structures, e.g. Java artifacts (JAR, WAR, EAR files), Maven descriptors, XML files, relational database schemas, etc.
    • Allows definition of rules and automated verification during a build process
    • Rules are expressed as Cypher queries or scripts (e.g. JavaScript, Groovy or JRuby)
    • Available as Maven plugin or CLI (command line interface)
    • Highly extensible by plugins for scanners, rules and reports
    • Integration with SonarQube
    • It’s free and Open Source

    Example Use Cases

    • Analysis of existing code structures and matching with proposed architecture and design concepts
    • Impact analysis, e.g. which test is affected by potential code changes
    • Visualization of architectural concepts, e.g. modules, layers and their dependencies
    • Continuous verification and reporting of constraint violations to provide fast feedback to developers
    • Individual gathering and filtering of metrics, e.g. complexity per component
    • Post-Processing of reports of other QA tools to enable refactorings in brown field projects
    • and much more…

    Get it!

    jQAssistant is available as a command line client from the downloadable distribution

    jqassistant.sh scan -f my-application.war
    jqassistant.sh analyze
    jqassistant.sh server
    

    or as Maven plugin:

    <dependency>
        <groupId>com.buschmais.jqassistant.scm</groupId>
        <artifactId>jqassistant-maven-plugin</artifactId>
        <version>1.0.0</version>
    </dependency>
    

    For a list of latest changes refer to the release notes, the documentation provides usage information.

    Those who are impatient should go for the Get Started page, which provides information about the first steps of scanning applications and running analyses.

    Your Feedback Matters

    Every kind of feedback helps to improve jQAssistant: feature requests, bug reports and even questions about how to solve specific problems. You can choose between several channels – just pick your preferred one: the discussion group, stackoverflow, a Gitter channel, the issue tracker, e-mail or just leave a comment below.

    Workshops

    You want to get started quickly for an inventory of an existing Java application architecture? Or you’re interested in setting up a continuous QA process that verifies your architectural concepts and provides graphical reports?
    The team of buschmais GbR offers individual workshops for you! For getting more information and setting up an agenda refer to http://jqassistant.de (German) or just contact us via e-mail!

    Short of widespread censorship, in order for security breaches to fade from the news spotlight, software quality/security must improve.

    jQAssistant 1.0.0 is one example of the type of tool required for software quality/security to improve.

    Of particular interest is its use of Neo4j, which enables named relationships between materials and your code.

    I don’t mean to foster “…everything is a graph…” any more than I would foster “…everything is a set of relational tables…” or “…everything is a key/value pair…,” etc. Yes, but the question is: “What is the best way, given my requirements and constraints, to achieve objective X?” Whether relationships are explicit (and if so, what can I say about them?) or implicit depends on my requirements, not those of a vendor.

    In the case of recording who wrote the most buffer overflows and where, plus other flaws, tracking named relationships and similar information should be part of your requirements and graphs are a good way to meet that requirement.

    March 17, 2015

    Can Spark Streaming survive Chaos Monkey?

    Filed under: Software,Software Engineering,Spark — Patrick Durusau @ 12:57 pm

    Can Spark Streaming survive Chaos Monkey? by Bharat Venkat, Prasanna Padmanabhan, Antony Arokiasamy, Raju Uppalap.

    From the post:

    Netflix is a data-driven organization that places emphasis on the quality of data collected and processed. In our previous blog post, we highlighted our use cases for real-time stream processing in the context of online recommendations and data monitoring. With Spark Streaming as our choice of stream processor, we set out to evaluate and share the resiliency story for Spark Streaming in the AWS cloud environment. A Chaos Monkey based approach, which randomly terminated instances or processes, was employed to simulate failures.

    Spark on Amazon Web Services (AWS) is relevant to us as Netflix delivers its service primarily out of the AWS cloud. Stream processing systems need to be operational 24/7 and be tolerant to failures. Instances on AWS are ephemeral, which makes it imperative to ensure Spark’s resiliency.

    If Spark were a commercial product, this is where you would see, in bold: not a vendor report, but a report from a customer.

    You need to see the post for the details but so you know what to expect:

    • Driver (Process): in Client Mode the entire application is killed; in Cluster Mode with supervise, the Driver is restarted on a different Worker node.
    • Master (Process): with a Single Master the entire application is killed; with Multi Master, a STANDBY master is elected ACTIVE.
    • Worker Process (Process): all child processes (executor or driver) are also terminated and a new worker process is launched.
    • Executor (Process): a new executor is launched by the Worker process.
    • Receiver (Thread(s)): same as Executor, as they are long-running tasks inside the Executor.
    • Worker Node (Node): Worker, Executor and Driver processes run on Worker nodes and the behavior is the same as killing them individually.

    (The post also records, for each failure mode, whether Spark Streaming proved resilient.)
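
    The experiment itself is easy to mimic in spirit. A toy sketch of mine (nothing like Netflix's actual harness): kill a random worker process, wait, and assert the service still answers.

        # Toy chaos test (not Netflix's Chaos Monkey), POSIX-only: randomly kill
        # one worker from a pool, give the cluster time to react, then check health.
        import os, random, signal, time

        def chaos_round(worker_pids, still_healthy, settle_seconds=5):
            victim = random.choice(worker_pids)
            os.kill(victim, signal.SIGKILL)    # simulate an instance failure
            time.sleep(settle_seconds)         # give the cluster time to recover
            assert still_healthy(), f"service did not survive losing pid {victim}"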

    I can think of few things more annoying than software that works, sometimes. If you want users to rely upon you, then your service will have to be reliable.

    A performance post by Netflix is rumored to be in the offing!

    Enjoy!

    March 9, 2015

    Programs and Proofs: Mechanizing Mathematics with Dependent Types

    Filed under: Coq,Functional Programming,Proof Theory,Software Engineering,Types — Patrick Durusau @ 3:49 pm

    Programs and Proofs: Mechanizing Mathematics with Dependent Types by Ilya Sergey.

    From the post:

    [image: coq-logo]

    The picture “Le coq mécanisé” is courtesy of Lilia Anisimova

    These lecture notes are the result of the author’s personal experience of learning how to structure formal reasoning using the Coq proof assistant and employ Coq in large-scale research projects. The present manuscript offers a brief and practically-oriented introduction to the basic concepts of mechanized reasoning and interactive theorem proving.

    The primary audience of the manuscript are the readers with expertise in software development and programming and knowledge of discrete mathematic disciplines on the level of an undergraduate university program. The high-level goal of the course is, therefore, to demonstrate how much the rigorous mathematical reasoning and development of robust and intellectually manageable programs have in common, and how understanding of common programming language concepts provides a solid background for building mathematical abstractions and proving theorems formally. The low-level goal of this course is to provide an overview of the Coq proof assistant, taken in its both incarnations: as an expressive functional programming language with dependent types and as a proof assistant providing support for mechanized interactive theorem proving.

    By aiming these two goals, this manuscript is, thus, intended to provide a demonstration how the concepts familiar from the mainstream programming languages and serving as parts of good programming practices can provide illuminating insights about the nature of reasoning in Coq’s logical foundations and make it possible to reduce the burden of mechanical theorem proving. These insights will eventually give the reader a freedom to focus solely on the essential part of the formal development instead of fighting with the proof assistant in futile attempts to encode the “obvious” mathematical intuition.

    One approach to changing the current “it works, let’s ship” software development model. Users prefer software that works, but in these security-conscious times, software that works and is to some degree secure is even better.

    Looking forward to software with a warranty as a major disruption of the software industry. Major vendors are organized around there being no warranty/liability for software failures. A startup, organized to account for warranty/liability, would be a powerful opponent.

    Proof techniques are one way to enable offering limited warranties for software products.

    I first saw this in a tweet by Comp Sci Fact.

    March 3, 2015

    Principles of Model Checking

    Filed under: Design,Modeling,Software,Software Engineering — Patrick Durusau @ 5:15 pm

    Principles of Model Checking by Christel Baier and Joost-Pieter Katoen. Foreword by Kim Guldstrand Larsen.

    From the webpage:

    Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, or request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.

    The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.

    The present IT structure has shown itself to be as secure as a sieve. Do you expect the “Internet of Things” to be any more secure?

    If you are interested in secure or at least less buggy software, more formal analysis is going to be a necessity. This title will give you an introduction to the field.

    It dates from 2008 so some updating will be required.
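
    To show the flavor of “systematically (and automatically) checks whether a model of a given system satisfies a desired property,” here is a minimal explicit-state checker of my own (a sketch, far simpler than anything in the book): it exhaustively explores a deliberately racy two-process lock and prints the interleaving that violates mutual exclusion.

        # Minimal explicit-state model checking sketch: breadth-first search over
        # every interleaving of a two-process protocol, looking for a state that
        # violates the safety property "never both processes in the critical section".
        from collections import deque

        # Per-process states: idle -> ok (observed the other is not in "crit")
        # -> crit -> idle. Observation and entry are separate steps on purpose,
        # so the protocol is racy.
        def successors(state):
            out = []
            for i in (0, 1):
                me, other = state[i], state[1 - i]
                if me == "idle" and other != "crit":
                    nxt = "ok"
                elif me == "ok":
                    nxt = "crit"            # acts on a stale observation
                elif me == "crit":
                    nxt = "idle"
                else:
                    continue
                s = list(state)
                s[i] = nxt
                out.append(tuple(s))
            return out

        def check(initial, bad):
            """Exhaustive BFS; returns a shortest path to a bad state, or None."""
            frontier, seen = deque([[initial]]), {initial}
            while frontier:
                path = frontier.popleft()
                if bad(path[-1]):
                    return path
                for s in successors(path[-1]):
                    if s not in seen:
                        seen.add(s)
                        frontier.append(path + [s])
            return None

        trace = check(("idle", "idle"), bad=lambda s: s == ("crit", "crit"))
        print(" -> ".join("/".join(s) for s in trace))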

    I first saw this in a tweet by Reid Draper.

    February 12, 2015

    Akin’s Laws of Spacecraft Design*

    Filed under: Design,Software Engineering — Patrick Durusau @ 8:16 pm

    Akin’s Laws of Spacecraft Design* by David Akin.

    I started to do some slight editing to make these laws of “software” design, but if you can’t make that transposition for yourself, my doing it isn’t going to help.

    From the site of origin (unchanged):

    1. Engineering is done with numbers. Analysis without numbers is only an opinion.

    2. To design a spacecraft right takes an infinite amount of effort. This is why it’s a good idea to design them to operate when some things are wrong.

    3. Design is an iterative process. The necessary number of iterations is one more than the number you have currently done. This is true at any point in time.

    4. Your best design efforts will inevitably wind up being useless in the final design. Learn to live with the disappointment.

    5. (Miller’s Law) Three points determine a curve.

    6. (Mar’s Law) Everything is linear if plotted log-log with a fat magic marker.

    7. At the start of any design effort, the person who most wants to be team leader is least likely to be capable of it.

    8. In nature, the optimum is almost always in the middle somewhere. Distrust assertions that the optimum is at an extreme point.

    9. Not having all the information you need is never a satisfactory excuse for not starting the analysis.

    10. When in doubt, estimate. In an emergency, guess. But be sure to go back and clean up the mess when the real numbers come along.

    11. Sometimes, the fastest way to get to the end is to throw everything out and start over.

    12. There is never a single right solution. There are always multiple wrong ones, though.

    13. Design is based on requirements. There’s no justification for designing something one bit "better" than the requirements dictate.

    14. (Edison’s Law) "Better" is the enemy of "good".

    15. (Shea’s Law) The ability to improve a design occurs primarily at the interfaces. This is also the prime location for screwing it up.

    16. The previous people who did a similar analysis did not have a direct pipeline to the wisdom of the ages. There is therefore no reason to believe their analysis over yours. There is especially no reason to present their analysis as yours.

    17. The fact that an analysis appears in print has no relationship to the likelihood of its being correct.

    18. Past experience is excellent for providing a reality check. Too much reality can doom an otherwise worthwhile design, though.

    19. The odds are greatly against you being immensely smarter than everyone else in the field. If your analysis says your terminal velocity is twice the speed of light, you may have invented warp drive, but the chances are a lot better that you’ve screwed up.

    20. A bad design with a good presentation is doomed eventually. A good design with a bad presentation is doomed immediately.

    21. (Larrabee’s Law) Half of everything you hear in a classroom is crap. Education is figuring out which half is which.

    22. When in doubt, document. (Documentation requirements will reach a maximum shortly after the termination of a program.)

    23. The schedule you develop will seem like a complete work of fiction up until the time your customer fires you for not meeting it.

    24. It’s called a "Work Breakdown Structure" because the Work remaining will grow until you have a Breakdown, unless you enforce some Structure on it.

    25. (Bowden’s Law) Following a testing failure, it’s always possible to refine the analysis to show that you really had negative margins all along.

    26. (Montemerlo’s Law) Don’t do nuthin’ dumb.

    27. (Varsi’s Law) Schedules only move in one direction.

    28. (Ranger’s Law) There ain’t no such thing as a free launch.

    29. (von Tiesenhausen’s Law of Program Management) To get an accurate estimate of final program requirements, multiply the initial time estimates by pi, and slide the decimal point on the cost estimates one place to the right.

    30. (von Tiesenhausen’s Law of Engineering Design) If you want to have a maximum effect on the design of a new engineering system, learn to draw. Engineers always wind up designing the vehicle to look like the initial artist’s concept.

    31. (Mo’s Law of Evolutionary Development) You can’t get to the moon by climbing successively taller trees.

    32. (Atkin’s Law of Demonstrations) When the hardware is working perfectly, the really important visitors don’t show up.

    33. (Patton’s Law of Program Planning) A good plan violently executed now is better than a perfect plan next week.

    34. (Roosevelt’s Law of Task Planning) Do what you can, where you are, with what you have.

    35. (de Saint-Exupery’s Law of Design) A designer knows that he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away.

    36. Any run-of-the-mill engineer can design something which is elegant. A good engineer designs systems to be efficient. A great engineer designs them to be effective.

    37. (Henshaw’s Law) One key to success in a mission is establishing clear lines of blame.

    38. Capabilities drive requirements, regardless of what the systems engineering textbooks say.

    39. Any exploration program which "just happens" to include a new launch vehicle is, de facto, a launch vehicle program.

    39. (alternate formulation) The three keys to keeping a new manned space program affordable and on schedule:
           1)  No new launch vehicles.
           2)  No new launch vehicles.
           3)  Whatever you do, don’t develop any new launch vehicles.

    40. (McBryan’s Law) You can’t make it better until you make it work.

    41. Space is a completely unforgiving environment. If you screw up the engineering, somebody dies (and there’s no partial credit because most of the analysis was right…)

    I left the original unchanged as promised, but for software projects I would re-cast #1 to read:

    1. Software Engineering is based on user feedback. Analysis without user feedback is fantasy (yours).

    Enjoy!

    I first saw this in a tweet by Neal Richter.

    January 13, 2015

    Adventures in Design

    Filed under: Design,Medical Informatics,Software Engineering — Patrick Durusau @ 2:36 pm

    Whether you remember the name or not, you have heard of the Therac-25, a radiation therapy machine responsible for giving massive radiation doses resulting in serious injury or death between 1985 and 1987. A classic case study in software engineering.

    The details are quite interesting but I wanted to point out that it doesn’t take complex or rare software failures to be dangerous.

    Case in point: I received a replacement insulin pump today that had the following header:

    [image: medtronic-models]

    The problem?

    [image: medtronic-roll-over]

    Interesting. You go down from “zero” to the maximum setting.

    FYI, the device in question measures insulin in 0.05 increments, so 10.0 units is quite a bit. Particularly if that isn’t what you intended to do.

    Medtronic has offered a free replacement for any pump with this “roll around feature.”

    I have been using Medtronic devices for years and have always found them to be extremely responsive to users so don’t take this as a negative comment on them or their products.

    It is, however, a good illustration that what may be a feature to one user may well not be a feature for another. Which makes me wonder, how do you design counters? Do they wrap at maximum/minimum values?
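
    For what it's worth, here is a toy sketch of the two behaviors in question (hypothetical code of mine, not Medtronic's firmware), with the value held as integer ticks of 0.05 units so there is no float drift:

        # Toy sketch: the same "one click below zero" under a wrapping counter
        # versus a clamping one. Hypothetical range: 0..200 ticks of 0.05 units,
        # so 200 ticks is the 10.0-unit maximum.
        MAX_TICKS = 200

        def wrap_decrement(ticks):
            return (ticks - 1) % (MAX_TICKS + 1)   # 0 rolls around to 200

        def clamp_decrement(ticks):
            return max(0, ticks - 1)               # 0 stays at 0

        print(wrap_decrement(0))    # 200 ticks, i.e. the 10.0-unit maximum
        print(clamp_decrement(0))   # 0 ticks, i.e. still zero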

    Design issues only come up when you recognize them as design issues. Otherwise they are traps for the unwary.

    December 27, 2014

    Software Foundations

    Filed under: Coq,Functional Programming,Programming,Proof Theory,Software Engineering,Types — Patrick Durusau @ 4:49 pm

    Software Foundations by Benjamin Pierce and others.

    From the preface:

    This electronic book is a course on Software Foundations, the mathematical underpinnings of reliable software. Topics include basic concepts of logic, computer-assisted theorem proving and the Coq proof assistant, functional programming, operational semantics, Hoare logic, and static type systems. The exposition is intended for a broad range of readers, from advanced undergraduates to PhD students and researchers. No specific background in logic or programming languages is assumed, though a degree of mathematical maturity will be helpful.

    One novelty of the course is that it is one hundred per cent formalized and machine-checked: the entire text is literally a script for Coq. It is intended to be read alongside an interactive session with Coq. All the details in the text are fully formalized in Coq, and the exercises are designed to be worked using Coq.

    The files are organized into a sequence of core chapters, covering about one semester’s worth of material and organized into a coherent linear narrative, plus a number of “appendices” covering additional topics. All the core chapters are suitable for both graduate and upper-level undergraduate students.

    This looks like a real treat!

    Imagine security in a world where buggy software (by error and design) wasn’t patched by more buggy software (by error and design) and protected by security software, which is also buggy (by error and design). Would that change the complexion of current security issues?

    I first saw this in a tweet by onepaperperday.

    PS: Sony got hacked, again. Rumor is that this latest Sony hack was an extra credit exercise for a 6th grade programming class.

    August 20, 2014

    Not just the government’s playbook

    Filed under: Programming,Project Management,Software Engineering — Patrick Durusau @ 3:59 pm

    Not just the government’s playbook by Mike Loukides.

    From the post:

    Whenever I hear someone say that “government should be run like a business,” my first reaction is “do you know how badly most businesses are run?” Seriously. I do not want my government to run like a business — whether it’s like the local restaurants that pop up and die like wildflowers, or megacorporations that sell broken products, whether financial, automotive, or otherwise.

    If you read some elements of the press, it’s easy to think that healthcare.gov is the first time that a website failed. And it’s easy to forget that a large non-government website was failing, in surprisingly similar ways, at roughly the same time. I’m talking about the Common App site, the site high school seniors use to apply to most colleges in the US. There were problems with pasting in essays, problems with accepting payments, problems with the app mysteriously hanging for hours, and more.

    I don’t mean to pick on Common App; you’ve no doubt had your own experience with woefully bad online services: insurance companies, Internet providers, even online shopping. I’ve seen my doctor swear at the Epic electronic medical records application when it crashed repeatedly during an appointment. So, yes, the government builds bad software. So does private enterprise. All the time. According to TechRepublic, 68% of all software projects fail. We can debate why, and we can even debate the numbers, but there’s clearly a lot of software #fail out there — in industry, in non-profits, and yes, in government.

    With that in mind, it’s worth looking at the U.S. CIO’s Digital Services Playbook. It’s not ideal, and in many respects, its flaws reveal its origins. But it’s pretty good, and should certainly serve as a model, not just for the government, but for any organization, small or large, that is building an online presence.

    See Mike’s post for the extracted thirteen (13) principles (plays in Obama-speak) for software projects.

    While everybody needs a reminder, what puzzles me is that none of the principles are new. That being the case, shouldn’t we be asking:

    Why haven’t projects been following these rules?

    Reasoning that if we (collectively) know what makes software projects succeed, what are the barriers to implementing those steps in all software projects?

    Re-stating rules that we already know to be true, without more, isn’t very helpful. Projects that start tomorrow will have a fresh warning in their ears and commit the same errors that doom 68% of all other projects.

    My favorite suggestion and the one I have seen violated most often is:

    Bring in experienced teams

    I am told, “…our staff don’t know how to do X, Y or Z….” That sounds to me like a personnel problem. In an IT recession, a problem that isn’t hard to fix. But no, the project has to succeed with IT staff known to lack the project management or technical skills to succeed. You can guess the outcome of such projects in advance.

    The restatement of project rules isn’t a bad thing to have but your real challenge is going to be following them. Suggestions for everyone’s benefit welcome!

    February 12, 2013

    Software Engineering and Knowledge Engineering (Archives)

    Filed under: Conferences,Knowledge Engineering,Software Engineering — Patrick Durusau @ 6:17 pm

    Proceedings of the International Conference on Software Engineering and Knowledge Engineering

    From the webpage:

    SEKE 2012 Proceedings July 1 – July 3, 2012 Hotel Sofitel, Redwood City, San Francisco Bay, USA
    SEKE 2011 Proceedings July 7 – July 9, 2011 Eden Roc Renaissance Miami Beach, USA
    SEKE 2010 Proceedings July 1 – July 3, 2010 Hotel Sofitel, Redwood City, San Francisco Bay, USA
    SEKE 2009 Proceedings July 1 – July 3, 2009 Hyatt Harborside at Logan Int’l Airport, Boston, USA
    SEKE 2008 Proceedings July 1 – July 3, 2008 Hotel Sofitel, Redwood City, San Francisco Bay, USA
    SEKE 2007 Proceedings July 9 – July 11, 2007 Hyatt Harborside at Logan Int’l Airport, Boston, USA

    Another treasure I discovered while hunting down topic map papers.

    For coverage, see the call for papers, SEKE 2013.

    SEKE 2013

    Filed under: Conferences,Knowledge Engineering,Software Engineering — Patrick Durusau @ 6:16 pm

    SEKE 2013: The 25th International Conference on Software Engineering and Knowledge Engineering

    Dates:

    Paper submission due: Midnight EST, March 1, 2013
    Notification of acceptance: April 20, 2013
    Early registration deadline: May 10, 2013
    Camera-ready copy: May 10, 2013
    Conference: June 27 – 29, 2013

    From the call for papers:

    The Twenty-Fifth International Conference on Software Engineering and Knowledge Engineering (SEKE 2013) will be held at Hyatt Harborside at Boston’s Logan International Airport, USA from June 27 to June 29, 2013.

    The conference aims at bringing together experts in software engineering and knowledge engineering to discuss on relevant results in either software engineering or knowledge engineering or both. Special emphasis will be put on the transference of methods between both domains. Submission of papers and demos are both welcome.

    TOPICS

    Agent architectures, ontologies, languages and protocols
    Multi-agent systems
    Agent-based learning and knowledge discovery
    Interface agents
    Agent-based auctions and marketplaces
    Artificial life and societies
    Secure mobile and multi-agent systems
    Mobile agents
    Mobile Commerce Technology and Application Systems
    Mobile Systems

    Autonomic computing
    Adaptive Systems
    Integrity, Security, and Fault Tolerance
    Reliability
    Enterprise Software, Middleware, and Tools
    Process and Workflow Management
    E-Commerce Solutions and Applications
    Industry System Experience and Report

    Service-centric software engineering
    Service oriented requirements engineering
    Service oriented architectures
    Middleware for service based systems
    Service discovery and composition
    Quality of services
    Service level agreements (drafting, negotiation, monitoring and management)
    Runtime service management
    Semantic web

    Requirements Engineering
    Agent-based software engineering
    Artificial Intelligence Approaches to Software Engineering
    Component-Based Software Engineering
    Automated Software Specification
    Automated Software Design and Synthesis
    Computer-Supported Cooperative Work
    Embedded and Ubiquitous Software Engineering
    Measurement and Empirical Software Engineering
    Reverse Engineering
    Programming Languages and Software Engineering
    Patterns and Frameworks
    Reflection and Metadata Approaches
    Program Understanding

    Knowledge Acquisition
    Knowledge-Based and Expert Systems
    Knowledge Representation and Retrieval
    Knowledge Engineering Tools and Techniques
    Time and Knowledge Management Tools
    Knowledge Visualization
    Data visualization
    Uncertainty Knowledge Management
    Ontologies and Methodologies
    Learning Software Organization
    Tutoring, Documentation Systems
    Human-Computer Interaction
    Multimedia Applications, Frameworks, and Systems
    Multimedia and Hypermedia Software Engineering

    Smart Spaces
    Pervasive Computing
    Swarm intelligence
    Soft Computing

    Software Architecture
    Software Assurance
    Software Domain Modeling and Meta-Modeling
    Software dependability
    Software economics
    Software Engineering Decision Support
    Software Engineering Tools and Environments
    Software Maintenance and Evolution
    Software Process Modeling
    Software product lines
    Software Quality
    Software Reuse
    Software Safety
    Software Security
    Software Engineering Case Study and Experience Reports

    Web and text mining
    Web-Based Tools, Applications and Environment
    Web-Based Knowledge Management
    Web-Based Tools, Systems, and Environments
    Web and Data Mining

    Given the range of topics, I am sure you can find one or two that interest you and involve issues where topic maps can make a significant contribution.

    Looking forward to seeing your paper in the SEKE Proceedings for 2013.
