Archive for the ‘Requirements’ Category

Introspection For Your iPhone (phone security)

Thursday, July 21st, 2016

Against the Law: Countering Lawful Abuses of Digital Surveillance by Andrew “bunnie” Huang and Edward Snowden.

From the post:

Front-line journalists are high-value targets, and their enemies will spare no expense to silence them. Unfortunately, journalists can be betrayed by their own tools. Their smartphones are also the perfect tracking device. Because of the precedent set by the US’s “third-party doctrine,” which holds that metadata on such signals enjoys no meaningful legal protection, governments and powerful political institutions are gaining access to comprehensive records of phone emissions unwittingly broadcast by device owners. This leaves journalists, activists, and rights workers in a position of vulnerability. This work aims to give journalists the tools to know when their smart phones are tracking or disclosing their location when the devices are supposed to be in airplane mode. We propose to accomplish this via direct introspection of signals controlling the phone’s radio hardware. The introspection engine will be an open source, user-inspectable and field-verifiable module attached to an existing smart phone that makes no assumptions about the trustability of the phone’s operating system.

If that sounds great, you have to love their requirements:

Our introspection engine is designed with the following goals in mind:

  1. Completely open source and user-inspectable (“You don’t have to trust us”)
  2. Introspection operations are performed by an execution domain completely separated from the phone’s CPU (“don’t rely on those with impaired judgment to fairly judge their state”)
  3. Proper operation of introspection system can be field-verified (guard against “evil maid” attacks and hardware failures)
  4. Difficult to trigger a false positive (users ignore or disable security alerts when there are too many positives)
  5. Difficult to induce a false negative, even with signed firmware updates (“don’t trust the system vendor” – state-level adversaries with full cooperation of system vendors should not be able to craft signed firmware updates that spoof or bypass the introspection engine)
  6. As much as possible, the introspection system should be passive and difficult to detect by the phone’s operating system (prevent black-listing/targeting of users based on introspection engine signatures)
  7. Simple, intuitive user interface requiring no specialized knowledge to interpret or operate (avoid user error leading to false negatives; “journalists shouldn’t have to be cryptographers to be safe”)
  8. Final solution should be usable on a daily basis, with minimal impact on workflow (avoid forcing field reporters into the choice between their personal security and being an effective journalist)

This work is not just an academic exercise; ultimately we must provide a field-ready introspection solution to protect reporters at work.
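As a toy illustration of requirements 2 and 4 above (independent judgment, resistance to false positives), here is a sketch of the kind of decision logic an introspection engine might apply. Everything here is hypothetical: the real engine monitors radio-control signals in hardware, not in Python, and the function and its inputs are mine, not the authors'.

```python
# Hypothetical sketch of requirements 2 and 4: an independent watchdog
# samples radio-control activity and alarms only on sustained activity
# while the phone claims to be in airplane mode, so a single noisy
# sample does not produce a false positive.

def detect_leak(samples, airplane_mode, threshold=3):
    """Return True if radio activity persists while airplane mode is on.

    samples       -- iterable of booleans, True = RF activity observed
    airplane_mode -- user-asserted state; the engine does not trust the OS
    threshold     -- consecutive active samples required to raise an alarm
    """
    if not airplane_mode:
        return False  # radio activity is expected; nothing to report
    consecutive = 0
    for active in samples:
        consecutive = consecutive + 1 if active else 0
        if consecutive >= threshold:
            return True  # sustained emissions despite airplane mode
    return False
```

Requiring several consecutive active samples before alarming is one simple way to honor requirement 4 (few false positives) without weakening requirement 5.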

Copy those eight requirements out to a file for editing. When anyone proposes a cybersecurity solution, reword them as appropriate and use them as your user requirements.

An artist’s conception of what protection for an iPhone might look like:


Interested in protecting reporters and personal privacy? Follow Andrew ‘bunnie’ Huang’s blog.

Requirements – Programming Exercise – @jessitron

Friday, March 4th, 2016

Jessica Kerr @jessitron posted to Twitter:

Programming exercise:
I give you some requirements
You write the code
A third person tries to guess the requirements based on the code.
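A toy instance of the exercise (my own example, not @jessitron’s): given only the code below, what requirement would a third person reconstruct?

```python
def price(quantity, unit_cost):
    """Given only this code, what was the requirement?"""
    total = quantity * unit_cost
    if quantity >= 100:
        total *= 0.9  # "10% off orders of 100 or more"? Or something else?
    return round(total, 2)
```

A reader guessing “orders of 100 or more get a 10% discount” may or may not match the original requirement (perhaps it was “discourage small orders” or “match a competitor’s bulk price”). That gap between code and intent is the point of the exercise.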

Care to try the same exercise on existing business/government processes?

Or return to code that you wrote a year or more ago?

If you aren’t following @jessitron you should be.

Army Changing How It Does Requirements [How Are Your Big Data Requirements Coming?]

Friday, February 20th, 2015

Army Changing How It Does Requirements: McMaster by Sydney J. Freedberg Jr.

From the post:

So there’s a difficult balance to strike between the three words that make up “mobile protected firepower.” The vehicle is still just a concept, not a funded program. But past projects like FCS began going wrong right from those first conceptual stages, when TRADOC Systems Managers (TSMs) wrote up the official requirements for performance with little reference to what tradeoffs would be required in terms of real-world engineering. So what is TRADOC doing differently this time?

“We just did an Initial Capability Document [ICD] for ‘mobile protected firepower,’” said McMaster. “When we wrote that document, we brought together 18th Airborne Corps and other [infantry] and Stryker brigade combat team leadership” — i.e. the units that would actually use the vehicle — “who had recent operational experience.”

So they’re getting help — lots and lots of help. In an organization as bureaucratic and tribal as the Army, voluntarily sharing power is a major breakthrough. It’s especially big for TRADOC, which tends to take on priestly airs as guardian of the service’s sacred doctrinal texts. What TRADOC has done is a bit like the Vatican asking the Bishop of Boise to help draft a papal bull.

But that’s hardly all. “We brought together, obviously, the acquisition community, so PEO Ground Combat Vehicle was in on the writing of the requirements. We brought in the Army lab, TARDEC,” McMaster told reporters at a Defense Writers’ Group breakfast this morning. “We brought in Army Materiel Command and the sustainment community to help write it. And then we brought in the Army G-3 [operations and plans] and the Army G-8 [resources]” from the service’s Pentagon staff.

Traditionally, all these organizations play separate and unequal roles in the process. This time, said McMaster, “we wrote the document together.” That’s the model for how TRADOC will write requirements in the future, he went on: “Do it together and collaborate from the beginning.”

It’s important to remember how huge a hole the Army has to climb out of. The 2011 Decker-Wagner report calculated that, since 1996, the Army had wasted from $1 billion to $3 billion annually on two dozen different cancelled programs. The report pointed out an institutional problem much bigger than just the Future Combat System. Indeed, since FCS went down in flames, the Army has cancelled yet another major program, its Ground Combat Vehicle.

As I ask in the headline: How Are Your Big Data Requirements Coming?

Have you gotten all the relevant parties together? Have they all collaborated on making the business case for your use of big data? Or are your requirements written by managers who are divorced from the people who will use the resulting application or data? (Think Virtual Case File.)

The Army appears to have gotten the message on requirements, temporarily at least. How about you?

Inside the world’s biggest agile software project disaster

Tuesday, September 10th, 2013

Inside the world’s biggest agile software project disaster by Lucy Carey.

From the post:

In theory, it was a good idea – using a smart new methodology to unravel a legacy of bureaucratic tangles. In reality, execution of the world’s largest agile software project has been less than impressive.

By developing its flagship Universal Credit (UC) digital project – an initiative designed to merge six separate benefits strands into one – using agile principles, the UK Department for Work and Pensions (DWP) hoped to decisively lay the ghosts of past DWP-backed digital projects to bed.

Unfortunately, a report by the National Audit Office (NAO) has demonstrated that the UK government’s IT gremlins remain in rude health, with £34 million of new IT assets to date written off by the DWP on this project alone. Moreover, the report states that the project has failed to deliver its rollout targets, and that the DWP is now unsure how much of its current IT will be viable for a national rollout – all pretty damning indictments for an initiative that was supposed to be demonstrating the merits of the Agile Framework for central UK government systems.

Perhaps one of the biggest errors in implementing an agile approach highlighted by the NAO is the failure of the DWP to define how it would monitor progress or document decisions, and the need to integrate the new systems with existing IT, procured and managed assuming the traditional ‘waterfall’ approach.

Don’t take this post wrong. It is equally easy to screw up with a “waterfall” approach to project management. Particularly with inadequate management, documentation and requirements.

However, this is too good an example of why everyone in a project should be pushed to write down, with some degree of precision, what they expect, how they will know when it arrives, and deadlines for meeting their expectations.

Without all of that in writing, shared with the entire team, project “success” will be a matter of face-saving and not accomplishment of the original goals, whatever they may have been.

Dashboard Requirement Gathering Satire

Monday, July 22nd, 2013

Dashboard Requirement Gathering Satire by Nick Barclay.

From the post:

A colleague of mine put together a hilarious PeepzMovie that was inspired by some frustrating projects we’re working on currently.

If you’re a BI pro, do yourself a favor and take a few mins to watch it.

Watch the video at Nick’s post.

Show of hands: Who has not had this experience when developing requirements?


NSA — Untangling the Web: A Guide to Internet Research

Wednesday, May 15th, 2013

NSA — Untangling the Web: A Guide to Internet Research

A Freedom of Information Act (FOIA) request caused the NSA to disgorge its guide to web research, which is some six years out of date.

From the post:

The National Security Agency just released “Untangling the Web,” an unclassified how-to guide to Internet search. It’s a sprawling document, clocking in at over 650 pages, and is the product of many years of research and updating by an NSA information specialist whose name is redacted on the official release, but who is identified as Robyn Winder of the Center for Digital Content on the Freedom of Information Act request that led to its release.

It’s a droll document on many levels. First and foremost, it’s funny to think of officials who control some of the most sophisticated supercomputers and satellites ever invented turning to a .pdf file for tricks on how to track down domain name system information on an enemy website. But “Untangling the Web” isn’t for code-breakers or wire-tappers. The target audience seems to be staffers looking for basic factual information, like the preferred spelling of Kazakhstan, or telephonic prefix information for East Timor.

I take it as guidance on how “good” your application or service needs to be to pitch to the government.

I keep thinking that to attract government attention, an application needs to fall just short of solving P = NP.

On the contrary, the government needs spell checkers, phone information and no doubt lots of other dull information, quickly.

Perhaps an app that signals fresh doughnuts from bakeries within X blocks would be just the thing. 😉

Requirements and Brown M&M’s Clauses

Wednesday, April 17th, 2013

Use a No Brown M&M’s Clause by Jim Harris.

From the post:

There is a popular story about David Lee Roth exemplifying the insane demands of a power-mad celebrity by insisting that Van Halen’s contracts with concert promoters contain a clause that a bowl of M&M’s has to be provided backstage with every single brown candy removed, upon pain of forfeiture of the show, with full compensation to the band.

At least once, Van Halen followed through, peremptorily canceling a show in Colorado when Roth found some brown M&M’s in his dressing room – a clear violation of the No Brown M&M’s Clause.

However, in his book The Checklist Manifesto: How to Get Things Right, Atul Gawande recounted the explanation that Roth provided in his memoir Crazy from the Heat. “Van Halen was the first band to take huge productions into tertiary, third-level markets. We’d pull up with nine eighteen-wheeler trucks, full of gear, where the standard was three trucks, max. And there were many, many technical errors – whether it was the girders couldn’t support the weight, or the flooring would sink in, or the doors weren’t big enough to move the gear through.”

Therefore, because there was so much equipment, requiring so much coordination to make their concerts function smoothly and safely, Van Halen’s contracts were massive. So, just as a little test to see if the contract had actually been read by the concert promoters, buried somewhere in the middle would be article 126: the infamous No Brown M&M’s Clause.

I would not use the same clause, since IT consultants will simply scan for an M&M’s clause and delegate someone to handle it.

But it would be a good idea to bury a similar requirement in large requirements documents, covering meetings, report binding, etc.

I don’t know where David Lee Roth got the idea but dictionary publishers do something similar.

Lists of words and their definitions cannot be copyrighted, for obvious reasons: we don’t want one dictionary to have a monopoly on the definition of “meter,” for example.

But dictionary publishers make up words and definitions for those words and include them in their dictionaries. Being original works, those entries are subject to copyright.

How much you need in terms of requirements will vary.

What won’t vary is your need to know the consultants have at least read your requirements.

Use a Brown M&M’s clause, you won’t regret it.
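A hedged sketch of what a “brown M&M’s clause” for a requirements document could look like in practice: bury an innocuous clause that requires bidders to echo a specific token in their response, then check for it mechanically. The clause text and the token below are invented for illustration, not taken from any real contract.

```python
# Hypothetical "canary clause" check. The requirements document buries a
# clause asking bidders to quote a specific token in their response;
# absence of the token suggests the document was not actually read.

CANARY_TOKEN = "ACK-REQ-126"  # invented token, nodding to Van Halen's article 126

def vendor_read_requirements(response_text):
    """Return True if the vendor's response echoes the buried canary token."""
    return CANARY_TOKEN in response_text
```

Like the dictionary publishers’ trap words, the token costs the honest reader almost nothing and exposes the skimmer immediately.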

Writing Effective Requirement Documents – An Overview

Friday, March 29th, 2013

Writing Effective Requirement Documents – An Overview

From the post:

In every UX Design project, the most important part is the requirements gathering process. This is an overview of some of the possible methods of requirements gathering.

Good design will take into consideration all business, user and functional requirements and even sometimes inform new functionality & generate new requirements, based on user comments and feedback. Without watertight requirements specification to work from, much of the design is left to assumptions and subjectivity. Requirements put a project on track & provide a basis for the design. A robust design always ties back to its requirements at every step of the design process.

Although there are many ways to translate project requirements, Use cases, User Stories and Scenarios are the most frequently used methods to capture them. Some elaborate projects may have a comprehensive Business Requirements Document (BRD), which forms the absolute basis for all deliverables for that project.

I will go a bit deeper into what each of these is and the context in which each one is used…

Requirements are useful for any project. Especially useful for software projects. But critical for a successful topic map project.

Topic maps can represent or omit any subject of conversation, any relationship between subjects or any other information about a subject.

Not a good practice to assume others will make the same assumptions as you about the subjects to include or what information to include about them.

They might and they might not.

For any topic maps project, insist on a requirements document.

A good requirements document results in accountability for both sides.

The client is accountable for specifying what is desired and for changes and their impacts. The topic map author is accountable for delivering on the terms and detail specified in the requirements document.

MongoDB + Fractal Tree Indexes = High Compression

Friday, March 1st, 2013

MongoDB + Fractal Tree Indexes = High Compression by Tim Callaghan.

You may have heard that MapR Technologies broke the MinuteSort record by sorting 15 billion 100-byte records in 60 seconds. MapR used 2,103 virtual instances in Google Compute Engine; each instance had four virtual cores and one virtual disk, totaling 8,412 virtual cores and 2,103 virtual disks. See Google Compute Engine, MapR Break MinuteSort Record.

So, the next time you have 8,412 virtual cores and 2,103 virtual disks, you know what is possible. 😉
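Back-of-envelope arithmetic on that record, using only the figures quoted above:

```python
records = 15_000_000_000   # 15 billion records sorted
record_bytes = 100         # 100 bytes each
seconds = 60               # one minute
disks = 2_103              # virtual disks in the cluster

total_bytes = records * record_bytes   # 1.5 TB sorted in a minute
aggregate_bps = total_bytes / seconds  # 25 GB/s aggregate throughput
per_disk_bps = aggregate_bps / disks   # ~11.9 MB/s per virtual disk

print(total_bytes, aggregate_bps, round(per_disk_bps / 1e6, 1))
```

The per-disk figure is modest; the record comes from massive parallelism, not heroic individual disks, which is exactly why you need cleverness when you have less firepower.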

But if you have less firepower than that, you will need to be clever:

One doesn’t have to look far to see that there is strong interest in MongoDB compression. MongoDB has an open ticket from 2009 titled “Option to Store Data Compressed” with Fix Version/s planned but not scheduled. The ticket has a lot of comments, mostly from MongoDB users explaining their use-cases for the feature. For example, Khalid Salomão notes that “Compression would be very good to reduce storage cost and improve IO performance” and Andy notes that “SSD is getting more and more common for servers. They are very fast. The problems are high costs and low capacity.” There are many more in the ticket.

In prior blogs we’ve written about significant performance advantages when using Fractal Tree Indexes with MongoDB. Compression has always been a key feature of Fractal Tree Indexes. We currently support the LZMA, quicklz, and zlib compression algorithms, and our architecture allows us to easily add more. Our large block size creates another advantage as these algorithms tend to compress large blocks better than small ones.

Given the interest in compression for MongoDB and our capabilities to address this functionality, we decided to do a benchmark to measure the compression achieved by MongoDB + Fractal Tree Indexes using each available compression type. The benchmark loads 51 million documents into a collection and measures the size of all files in the file system (--dbpath).
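Measuring “the size of all files in the file system (--dbpath)” is a short directory walk. A minimal sketch (the path in the comment is a placeholder, not a recommendation):

```python
import os

def dir_size_bytes(path):
    """Sum the sizes of all regular files under path (e.g. MongoDB's --dbpath)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

# Example: run once per compression setting and compare the totals.
# print(dir_size_bytes("/var/lib/mongodb"))  # placeholder path
```

Running this against the same dataset loaded under each compression algorithm gives the apples-to-apples comparison the benchmark describes.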

More benchmarks are to follow, and you should remember that all benchmarks are just that: benchmarks.

Benchmarks do not represent experience with your data, under your operating load and network conditions, etc.

Investigate software based on the first, purchase software based on the second.

A first failed attempt at Natural Language Processing

Sunday, November 25th, 2012

A first failed attempt at Natural Language Processing by Mark Needham.

From the post:

One of the things I find fascinating about dating websites is that the profiles of people are almost identical so I thought it would be an interesting exercise to grab some of the free text that people write about themselves and prove the similarity.

I’d been talking to Matt Biddulph about some Natural Language Processing (NLP) stuff he’d been working on and he wrote up a bunch of libraries, articles and books that he’d found useful.

I started out by plugging the text into one of the many NLP libraries that Matt listed with the vague idea that it would come back with something useful.

I’m not sure exactly what I was expecting the result to be but after 5/6 hours of playing around with different libraries I’d got nowhere and parked the problem not really knowing where I’d gone wrong.

Last week I came across a paper titled “That’s What She Said: Double Entendre Identification” whose authors wanted to work out when a sentence could legitimately be followed by the phrase “that’s what she said”.

While the subject matter is a bit risque I found that reading about the way the authors went about solving their problem was very interesting and it allowed me to see some mistakes I’d made.

Vague problem statement

Unfortunately I didn’t do a good job of working out exactly what problem I wanted to solve – my problem statement was too general.

Question: How do you teach people how to create useful problem statements?

Pointers, suggestions?

Collaborative Systems: Easy To Miss The Mark

Sunday, October 21st, 2012

Collaborative Systems: Easy To Miss The Mark by Jacob Morgan.

From the post:

Map out use cases defining who you want collaborating and what results you want them to achieve. Skip this step in the beginning, and you’ll regret it in the end.

One of the things that organizations really need to consider when evaluating collaborative solutions is their use cases. Not only that, but also understanding the outcomes of those use cases and how they can map to a desired feature requirement. Use cases really help put things into perspective for companies who are seeking to understand the “why” before they figure out the “how.”

That’s what a use case is: the distilled essence of a role within your organization, how it will interact with some system, and the expected or desired result. Developing use cases makes your plans, requirements, and specifications less abstract because it forces you to come up with specific examples.

This is why we created a framework (inspired by Gil Yehuda) to address this. It breaks down as follows:

  • Identify the overall business problem you are looking to solve (typically there are several).
  • Narrow down the problem into specific use cases; each problem has several use cases.
  • Describe the situation that needs to be present for that use case to be applicable.
  • Clarify the desired action.
  • State the desired result.
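The five-step framework above maps naturally onto a small record type. A sketch (the field names follow the list; the example values are mine, not Morgan’s):

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """One row of the framework: a business problem narrowed to a concrete case."""
    business_problem: str  # the overall problem being solved
    use_case: str          # the specific case within that problem
    situation: str         # conditions under which the case applies
    desired_action: str    # what the user or system should do
    desired_result: str    # the measurable outcome expected

# Hypothetical example of one filled-in row:
uc = UseCase(
    business_problem="Slow cross-team knowledge sharing",
    use_case="Engineer searches past incident reports",
    situation="During an outage, similar past incidents exist",
    desired_action="Surface matching reports inside the chat tool",
    desired_result="Median time-to-mitigation drops measurably",
)
```

Forcing every use case through the same five fields is what makes the plans “less abstract”: an empty desired_result field is immediately visible.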

For topic maps I would write:

Map out use cases defining what data you want to identify and/or integrate and what results you expect from that identification or integration. Skip this step in the beginning, and you’ll regret it in the end.

If you don’t have an expectation of a measurable result (in businesses a profitable one), your efforts at semantic integration are premature.

How will you know when you have reached the end of a particular effort?

Requirements Engineering (3rd ed.)

Monday, October 15th, 2012

Requirements Engineering (3rd ed.) by Elizabeth Hull, Ken Jackson, and Jeremy Dick. Springer, 3rd ed., 2011, xviii + 207 pp., 131 illus., ISBN 978-1-84996-404-3.

From the webpage:

Using the latest research and driven by practical experience from industry, the third edition of this popular book provides useful information to practitioners on how to write and structure requirements.

  • Explains the importance of Systems Engineering and the creation of effective solutions to problems
  • Describes the underlying representations used in system modelling and introduces the UML2
  • Considers the relationship between requirements and modelling
  • Covers a generic multi-layer requirements process
  • Discusses the key elements of effective requirements management
  • Explains the important concept of rich traceability

In this third edition the authors have updated the overview of DOORS to include the changes featured in version 9.2. An expanded description of Product Family Management and a more explicit definition of Requirements Engineering are also included. Requirements Engineering is written for those who want to develop their knowledge of requirements engineering, whether practitioners or students.

I saw a review of this work in the October 2012 issue of Computing Reviews, where Diego Merani remarks:

The philosopher Seneca once said: “There is no fair wind for one who knows not whither he is bound.” This sentence encapsulates the essence of the book: the most common reasons projects fail involve incomplete requirements, poor planning, and the incorrect estimation of resources, risks, and challenges.

Requirements, and the consequences of their absence, ring true across software and other projects, including the authoring of topic maps.

Requirements: Don’t leave home without them!

Broken Telephone Game of Defining Software and UI Requirements [And Semantics]

Sunday, October 7th, 2012

The Broken Telephone Game of Defining Software and UI Requirements by Martin Crisp.

Martin is writing in a UI context but the lesson he teaches is equally applicable to any part of software/project management. (Even U.S. federal government big data projects.)

His counsel is not one of despair; he outlines solutions that can lessen the impact of the broken telephone game.

But it is up to you to recognize the game that is afoot and to react accordingly.

From the post:

The broken telephone game is played all over the world. In it, according to Wikipedia, “one person whispers a message to another, which is passed through a line of people until the last player announces the message to the entire group. Errors typically accumulate in the retellings, so the statement announced by the last player differs significantly, and often amusingly, from the one uttered by the first.”

This game is also played inadvertently by a large number of organizations seeking to define software and UI requirements, using information passed from customers, to business analysts, to UI/UX designers, to developers and testers.

Here’s a typical example:

  • The BA or product owner elicits requirements from a customer and writes them down, often as a feature list and use cases.
  • The use cases are interpreted by the UI/UX team to develop UI mockups and storyboards.
  • Testing interprets the storyboards, mockups, and use cases to develop test cases.
  • Also, the developers will try to interpret the use cases, mockups, and storyboards to actually write the code.

As with broken telephone, at each handoff of information the original content is altered. The resulting approach includes a lot of re-work and escalating project costs due to combinations of the following:

  • Use cases don’t properly represent customer requirements.
  • UI/UX design is not consistent with the use cases.
  • Incorrect test cases create false bugs.
  • Missed test cases result in undiscovered bugs.
  • Developers build features that don’t meet customer needs.

The further down the broken telephone line the original requirements get, the more distorted they become. For this reason, UI storyboards, test cases, and code typically require a lot of reworking as requirements are misunderstood or improperly translated by the time they get to the UI and testing teams.

Moon Shots, Flying Ponies and Requirements

Tuesday, August 28th, 2012

At Bruce Eckel’s MindView site I read:

If somebody comes up to you and says something like, “How do I make this pony fly to the moon?”, the question you need to ask is, “What problem are you trying to solve?” You’ll find out that they really need to collect gray rocks. Why they thought they had to fly to the moon, and use a pony to do it, only they know. People do get confused like this. — Max Kanat-Alexander

Everyone has their own “true” version of that story that can be swapped over beers at a conference.

Or at a “Users say the darnest things,” session.

Is that the key question? “What problem are you trying to solve?”

Or would it be better to ask: “What end result do you want?”

Asking for the end result keeps it from being narrowly defined as a “problem”: it could be an opportunity, a new product, a service, etc.

And to avoid the solution being bound to include Lucene, Hadoop, MySQL, SQL Server, the Large Hadron Collider, etc.

Let’s find out what the goal is, then we can talk about solutions and what role technology will play.

Think of it this way, without an end result in mind, how will you know where to stop?

How Do You Define Failure?

Wednesday, June 6th, 2012

… business intelligence implementations are often called failures when they fail to meet the required objectives, lack user acceptance or are only implemented after numerous long delays.

Called failures? Sounds like failures to me. You?

News: The cause of such failures has been discovered:

…an improperly modeled repository not adhering to basic dimensional modeling principles


I would have said that not having a shared semantic, one shared by all the stakeholders in the project, would be the root cause for most project failures.

I’m not particular about how you achieve that shared semantic. You could use white boards, sticky notes or have people physically act out the system. The important thing being to avoid the assumption that other stakeholders “know what I mean by….” They probably don’t. And several months into building of data structures, interfaces, etc., is a bad time to find out you assumed incorrectly.

The lack of a shared semantic can result in an “…improperly modeled repository…” but that is much later in the process.

Quotes from: Oracle Expert Shares Implementation Key

Whose Requirements Are They Anyway?

Monday, March 5th, 2012

Over the last 4,000+ postings I have read an even larger number of presentations, papers, etc.

We all start discussions from what we know best so those presentations/papers/etc. started with a position, product or technology best known to the author.

No surprise there.

What happens next is no surprise either but it isn’t the best next step, at least for users/customers.

Your requirements, generally stated, can be best met by the author’s product or technology.

I am certainly not blameless in that regard but is it the best way to approach a user/customer’s requirements?

By “best way” I mean a solution that meets the user/customer’s requirements, whether or not that includes your product/technology.

Which means changing the decision making process from:

  1. Choose SQL, NoSQL, Semantic Web, Linked Data, Topic Maps, Graphs, Cloud, non-Cloud, Web, non-Web, etc.
  2. Create solution based on choice in #1

to:

  1. Define user/customer requirements
  2. Evaluate cost of meeting requirements against various technology options
  3. Decide on solution based on information from #2
  4. Create solution

I can’t give you the identity but I once consulted with a fairly old (100+) organization that had been sold a state of the art publishing system + installation. It was like a $500K dog that you had to step over going in the door. Great product, for its intended application space, utterly useless for the publishing work flow of the organization.

We all know stories like that one, both in the private sector and at various levels of government around the world. I know a real horror story about an open source application that required support (they all do) and regularly fell over on its side, requiring experts to be flown in from another country. Failing wasn’t one of the requirements for the application, but open source mania led to its installation.

I like open source projects and serve as the editor of the format (ODF) for several of them. But, choosing a technology based on ideology and not practical requirements is a bad choice. (full stop)

It’s unreasonable to expect vendors to urge users/customers to critically evaluate their requirements against a range of products.

Users are going to have to step up and either perform those comparisons themselves or hire non-competing consultants to assist them.

A vendor whose product is intended to meet your requirements (not their own requirement of making the sale) won’t object.

Perhaps that could be the first test of continuing discussions with a vendor?

BI Requirements Gathering: Leveraging What Exists

Friday, March 2nd, 2012

BI Requirements Gathering: Leveraging What Exists by Jonathan G. Geiger.

From the post:

Analysis of Existing Queries and Reports

Businesspeople who are looking for business intelligence capabilities typically are not starting from a clean slate. Over time, they have established a series of queries and reports that are executed on an ad hoc or regular basis. These reports contain data that they receive and purportedly use. Understanding these provides both advantages and disadvantages when gathering requirements. The major advantage is that using the existing deliverables helps to provide a basis for discussion. Commenting on something concrete is easier than generating new ideas. With the existing reports in hand, key questions to ask include:

This post includes references to Jonathan’s posts on interviewing and facilitation.

These posts are great guides to use in developing BI requirements. Your circumstances will vary so you will need to adapt these techniques to your particular circumstances. But they are a great starting place.

If your programmers object to requirements gathering because of their “methodology,” I suggest you point them to: Top Ten Reasons Systems Projects Fail by Dr. Paul Dorsey. Or you can search for “project failure rate” and pick any other collection about project failure.

You will not find a single study that points to adequate requirements as a reason for project failure. Inadequate requirements are mentioned quite often, but never the contrary. I suspect there is a lesson there. Can you guess what it is?