## Archive for the ‘Project Management’ Category

### Releasing Failed Code to Distract from Accountability

Sunday, December 10th, 2017

Dutch government publishes large project as Free Software by Carmen Bianca Bakker.

From the post:

The Dutch Ministry of the Interior and Kingdom Relations released the source code and documentation of Basisregistratie Personen (BRP), a 100M€ IT system that registers information about inhabitants within the Netherlands. This comes as a great success for Public Code, and the FSFE applauds the Dutch government’s shift to Free Software.

Operation BRP is an IT project by the Dutch government that has been in the works since 2004. It has cost Dutch taxpayers upwards of 100 million Euros and has endured three failed attempts at revival, without anything to show for it. From the outside, it was unclear what exactly was costing taxpayers so much money with very little information to go on. After the plug had been pulled from the project earlier this year in July, the former interior minister agreed to publish the source code under pressure of Parliament, to offer transparency about the failed project. Secretary of state Knops has now gone beyond that promise and released the source code as Free Software (a.k.a. Open Source Software) to the public.

In 2013, when the first smoke signals showed, the former interior minister initially wanted to address concerns about the project by providing limited parts of the source code to a limited amount of people under certain restrictive conditions. The ministry has since made a complete about-face, releasing a snapshot of the (allegedly) full source code and documentation under the terms of the GNU Affero General Public License, with the development history soon to follow.

As far as the “…complete about-face…” goes, the American expression is: “You’ve been had.”

By appearing to agonize over the release of the source code, the “former interior minister” has made it appear the public has won a great victory for transparency.

Actually not.

Does the “transparency” offered by the source code show who authorized the expenditure of each part of the 100M€ total and who was paid that 100M€? Does source code “transparency” disclose project management decisions and which government officials approved those decisions? For that matter, does source code “transparency” disclose discussions of project choices at all and who was present at those discussions?

It’s not hard to see that source code “transparency” is a deliberate failure on the part of the Dutch Ministry of the Interior and Kingdom Relations to be transparent. It has withheld, quite deliberately, any information that would enable Dutch citizens, programmers or otherwise, to have informed opinions about the failure of this project. Or to hold anyone accountable for its failure.

This may be:

…an unprecedented move of transparency by the Dutch government….

but only if the Dutch government is a black hole in terms of meaningful accountability for its software projects.

Which appears to be the case.

PS: Assuming Dutch citizens can pry project documentation out of the secretive Dutch Ministry of the Interior and Kingdom Relations, I know some Dutch topic mappers could assist with establishing transparency. If that’s what you want.

### You Are Not Google (Blasphemy I Know, But He Said It, Not Me)

Thursday, June 8th, 2017

You Are Not Google by Ozan Onay.

From the post:

Software engineers go crazy for the most ridiculous things. We like to think that we’re hyper-rational, but when we have to choose a technology, we end up in a kind of frenzy — bouncing from one person’s Hacker News comment to another’s blog post until, in a stupor, we float helplessly toward the brightest light and lay prone in front of it, oblivious to what we were looking for in the first place.

This is not how rational people make decisions, but it is how software engineers decide to use MapReduce.

Spoiler: Onay will also say you are not Amazon or LinkedIn.

Just so you know and can prepare for the ego shock.

Great read that invokes Polya’s First Principle:

Understand the Problem

This seems so obvious that it is often not even mentioned, yet students are often stymied in their efforts to solve problems simply because they don’t understand it fully, or even in part. Polya taught teachers to ask students questions such as:

• Do you understand all the words used in stating the problem?
• What are you asked to find or show?
• Can you restate the problem in your own words?
• Can you think of a picture or a diagram that might help you understand the problem?
• Is there enough information to enable you to find a solution?

Onay coins a mnemonic for you to apply and points to additional reading.

Enjoy!

PS: Caution: Understanding a problem can cast doubt on otherwise successful proposals for funding. Your call.

### Obama on Fixing Government with Technology (sigh)

Thursday, October 13th, 2016

Like any true technology cultist, President Obama mentions technology and inefficiency, but never the people who make up government as the source of government “problems.” Nor does he appear to realize that technology cannot fix the people who make up government.

Those outdated information systems he alludes to were built and are maintained under contract with vendors. They are used by people who are accustomed to them and will resist changing to others, and still other systems depend on those systems working exactly as they do now. At its very core, the problem of government isn’t technology.

It’s the twin requirement that it be composed of and supplied by people, all of whom have a vested interest in and comfort level with the technology they use. And don’t forget, government has to operate 24/7, 365 days a year.

There is no time to take down part of the government to develop new technology, train users in its use, and at the same time run all the current systems, which are, to some degree, meeting current requirements.

As an antidote to the technology cultism that infects President Obama and his administration, consider reading Geek Heresy, the description of which reads:

In 2004, Kentaro Toyama, an award-winning computer scientist, moved to India to start a new research group for Microsoft. Its mission: to explore novel technological solutions to the world’s persistent social problems. Together with his team, he invented electronic devices for under-resourced urban schools and developed digital platforms for remote agrarian communities. But after a decade of designing technologies for humanitarian causes, Toyama concluded that no technology, however dazzling, could cause social change on its own.

Technologists and policy-makers love to boast about modern innovation, and in their excitement, they exuberantly tout technology’s boon to society. But what have our gadgets actually accomplished? Over the last four decades, America saw an explosion of new technologies – from the Internet to the iPhone, from Google to Facebook – but in that same period, the rate of poverty stagnated at a stubborn 13%, only to rise in the recent recession. So, a golden age of innovation in the world’s most advanced country did nothing for our most prominent social ill.

Toyama’s warning resounds: Don’t believe the hype! Technology is never the main driver of social progress. Geek Heresy inoculates us against the glib rhetoric of tech utopians by revealing that technology is only an amplifier of human conditions. By telling the moving stories of extraordinary people like Patrick Awuah, a Microsoft millionaire who left his lucrative engineering job to open Ghana’s first liberal arts university, and Tara Sreenivasa, a graduate of a remarkable South Indian school that takes children from dollar-a-day families into the high-tech offices of Goldman Sachs and Mercedes-Benz, Toyama shows that even in a world steeped in technology, social challenges are best met with deeply social solutions.

Government is a social problem, and reaching for a technology fix first is a guarantee of yet another government failure.

### How-To Maintain Project Delivery Dates – Skip Critical Testing

Sunday, March 20th, 2016

David Willman documents a tried and true way to maintain a project schedule, skipping critical testing, in: Pentagon skips tests on key component of U.S.-based missile defense system.

How critical?

Here’s part of David’s description:

Against the advice of its own panel of outside experts, the U.S. Missile Defense Agency is forgoing tests meant to ensure that a critical component of the nation’s homeland missile defense system will work as intended.

The tests that are being skipped would evaluate the reliability of small motors designed to help keep rocket interceptors on course as they fly toward incoming warheads.

The components, called alternate divert thrusters, are vital to the high-precision guidance required to intercept and destroy an enemy warhead traveling at supersonic speed – a feat likened to hitting one speeding bullet with another.

The interceptors, deployed in underground silos at Vandenberg Air Force Base in Santa Barbara County and at Ft. Greely, Alaska, are the backbone of the Ground-based Midcourse Defense system (GMD) – the nation’s main defense against a sneak attack by North Korea or Iran.

Hmmm, hitting a supersonic target with a supersonic bullet and you don’t test the aiming mechanism that makes them collide?

How critical does that sound?

The consequences of failure, assuming the entire program isn’t welfare for the contractors and their employees, could be a nuke landing on the West Coast of the United States.

Does that make it sound more critical?

Or do we need to guess which city? Los Angeles? San Diego? Either would increase property values in San Jose, so there would be an offset to take into account.

Here’s my advice: Don’t ever skip critical testing or continue to participate in a project that skips critical testing. Walk away.

Not quietly, tell everyone you know of the skipped testing. NDAs be damned.

No one is well served by skipped testing.

A lack of testing has led to the broken Internet of Things.

Is that what you want?

### Institutional Dementia At Big Blue?

Sunday, January 24th, 2016

Why over two-thirds of the Internet of Things projects will fail by Sushil Pramanick (Associate Partner, Consultative Sales, IoT Leader, IBM Analytics).

From the post:

When did you first become interested in the Internet of Things (IoT)? If you’re like me, you’ve probably been following the news related to the IoT for years. As technology lovers, I’ll bet we have a lot in common. We are intensely curious. We are problem-solvers, inventors and perhaps more than anything else, we are relentlessly dedicated to finding better answers to our everyday challenges. The IoT represents a chance for us—the thinkers—to move far beyond the limiting technologies of the past and to unlock new value, new insights and new opportunities.

In mid-2005, Gartner stated that over 50 percent of data warehouse projects failed due to lack of adoption with data quality issues and implementation failures. In 2012, this metric was further scaled back to fewer than 30 percent. The parallelism here is that the Internet of Things hype is similar to data warehouse and business intelligence hype two decades ago when many companies embarked on decentralized reporting and/or basic analytics solutions. The problem was that some companies tried to build in-house, large enterprise data warehouse platforms that were disconnected and inherently had integration and data quality issues. A decade later, 50 percent of these projects failed. Another decade later, another over 20 percent failed. Similarly, companies are now trying to embark on Internet of Things initiatives using very narrow, point-focused solutions with very little enterprise IoT strategy in place, and in some cases, engaging or building unproven solution architectures.

Project failure rates are hardly news. But I mention this to illustrate the failure of institutional memory at IBM.

It wasn’t that many years ago (2008) that IBM published a forty-eight page white paper, Making Change Work, that covers the same ground as Sushil Pramanick.

Do you think “Consultative Sales, IBM Analytics” doesn’t talk to “IBM Global Business Services?”

Or is IBM’s institutional memory broken up by projects, departments, divisions, and communicated in part by formal documents but also by folklore, rumor and water fountain gossip?

A faulty institutional memory, with missed opportunities, duplicated projects, and a general failure to thrive, won’t threaten the existence of an IBM. At least not right away.

Can you say the same for your organization?

Interested?

### Flash Audit on OPM Infrastructure Update Plan

Wednesday, June 24th, 2015

Flash Audit Alert – U.S. Office of Personnel Management’s Infrastructure Improvement Project (Report No. 4A-CI-00-15-055)

Hot off the presses! Just posted online today!

From the report:

The U.S. Office of Personnel Management (OPM) Office of the Inspector General (OIG) is issuing this Flash Audit Alert to bring to your immediate attention serious concerns we have regarding the Office of the Chief Information Officer’s (OCIO) infrastructure improvement project (Project). This Project includes a full overhaul of the agency’s technical infrastructure by implementing additional information technology (IT) security controls and then migrating the entire infrastructure into a completely new environment (referred to as Shell).

Our primary concern is that the OCIO has not followed U.S. Office of Management and Budget (OMB) requirements and project management best practices. The OCIO has initiated this project without a complete understanding of the scope of OPM’s existing technical infrastructure or the scale and costs of the effort required to migrate it to the new environment.

In addition, we have concerns with the nontraditional Government procurement vehicle that was used to secure a sole-source contract with a vendor to manage the infrastructure overhaul. While we agree that the sole-source contract may have been appropriate for the initial phases of securing the existing technical environment, we do not agree that it is appropriate to use this vehicle for the long-term system migration efforts.

Several examples of critical processes that OPM has not completed for this project include:

• Project charter;
• Comprehensive list of project stakeholders;
• Feasibility study to address scope and timeline in concert with budgetary justification/cost estimates;
• Impact assessment for existing systems and stakeholders;
• Quality assurance plan and procedures for contractor oversight;
• Technological infrastructure acquisition plan;
• High-level test plan; and,
• Implementation plan to include resource planning, readiness assessment plan, success factors, conversion plan, and back-out plan.

The report isn’t that long, six (6) pages in total, but it is a snapshot of bad project management in its essence.

I helped torpedo a project once upon a time where management defended a one paragraph email description of a proposed CMS system as being “agile.” The word they were looking for was “juvenile,” but they were unwilling to admit to years of mistakes in allowing the “programmer” (used very loosely) to remain employed.

What do you think of inspectors general as an audience for topic maps? They investigate large and disorganized agencies, repeatedly over time, with lots of players and documents. Thoughts?

PS: I read about the flash audit report several days ago but didn’t want to post about it until I could share a source for it. Would make great example material for a course on project management.

### Where Big Data Projects Fail

Thursday, May 14th, 2015

Where Big Data Projects Fail by Bernard Marr.

From the post:

Over the past 6 months I have seen the number of big data projects go up significantly and most of the companies I work with are planning to increase their Big Data activities even further over the next 12 months. Many of these initiatives come with high expectations but big data projects are far from fool-proof. In fact, I predict that half of all big data projects will fail to deliver against their expectations.

Failure can happen for many reasons, however there are a few glaring dangers that will cause any big data project to crash and burn. Based on my experience working with companies and organizations of all shapes and sizes, I know these errors are all too frequent. One thing they have in common is they are all caused by a lack of adequate planning.

To whet your appetite for the examples Marr uses, here are the main problems he identifies:

• Not starting with clear business objectives
• Not making a good business case
• Management Failure
• Poor communication
• Not having the right skills for the job

Marr’s post should be mandatory reading at the start of every proposed big data project. And after reading it, the project team should prepare a detailed statement of the business objectives and the business case, along with how success against those objectives will be measured.

Or to put it differently, no big data project should start without the ability to judge its success or failure.

### Why developers hate being interrupted

Tuesday, January 6th, 2015

Why developers hate being interrupted by Derek Johnson.

From the post:

Interruptions are to developers what kryptonite is to Superman—they kill productivity and there’s a significant recovery period.

There are two types of interruption: the planned meeting and the one where someone walks over to your desk to talk to you (or if you’re unlucky enough to have a desk phone it’s when the phone rings). The random interruption is akin to walking up to someone building a lego tower, kicking it over and expecting them to continue from where they were the moment before you arrived. The planned meeting is a lot longer and kills productivity before, not just during and after. So, there are two types of problem that need to be addressed here.

Not a new problem but Derek does a powerful retelling of it. Along with suggestions to reduce interruptions.

I like the headphone poster most of all.

If you actually implement one or more of the suggestions in Derek’s post, you may want to read Peopleware: Productive Projects and Teams by Tom DeMarco and Timothy Lister. Book length treatment on productivity based on real world results.

Hint: If you want 10X developers, start with a 10X development environment. You’ll get closer that way than by any other known method.

Even if you have very good developer conditions, problems can still occur:

In my years at Bell Labs, we worked in two-person offices. They were spacious, quiet, and the phone could be diverted. I shared my office with Wendl Thomis who went on to build a small empire as an electronic toy maker. In those days, he was working on the ESS fault dictionary. The dictionary scheme relied upon the notion of n-space proximity, a concept that was hairy enough to challenge even Wendl’s powers of concentration. One afternoon, I was bent over a program listing while Wendl was staring into space, his feet propped up on the desk. Our boss came in and asked, “Wendl! What are you doing?” Wendl said, “I’m thinking.” And the boss said, “Can’t you do that at home?”

Yeah, really don’t want to have people “thinking” on the job. 😉

PS: Developers complain about interruptions in forums I frequent but be aware that the same principles apply to authors, designers, engineers, lawyers, mathematicians, etc. I mention that as the basis for forming alliances at work to support sane work conditions for all departments. Equality of privilege attracts more allies.

### The Project Paradox

Monday, September 22nd, 2014

Care to name projects and standards that suffered from the project paradox?

I first saw this in a tweet by Tobias Fors.

### Not just the government’s playbook

Wednesday, August 20th, 2014

Not just the government’s playbook by Mike Loukides.

From the post:

Whenever I hear someone say that “government should be run like a business,” my first reaction is “do you know how badly most businesses are run?” Seriously. I do not want my government to run like a business — whether it’s like the local restaurants that pop up and die like wildflowers, or megacorporations that sell broken products, whether financial, automotive, or otherwise.

If you read some elements of the press, it’s easy to think that healthcare.gov is the first time that a website failed. And it’s easy to forget that a large non-government website was failing, in surprisingly similar ways, at roughly the same time. I’m talking about the Common App site, the site high school seniors use to apply to most colleges in the US. There were problems with pasting in essays, problems with accepting payments, problems with the app mysteriously hanging for hours, and more.

I don’t mean to pick on Common App; you’ve no doubt had your own experience with woefully bad online services: insurance companies, Internet providers, even online shopping. I’ve seen my doctor swear at the Epic electronic medical records application when it crashed repeatedly during an appointment. So, yes, the government builds bad software. So does private enterprise. All the time. According to TechRepublic, 68% of all software projects fail. We can debate why, and we can even debate the numbers, but there’s clearly a lot of software #fail out there — in industry, in non-profits, and yes, in government.

With that in mind, it’s worth looking at the U.S. CIO’s Digital Services Playbook. It’s not ideal, and in many respects, its flaws reveal its origins. But it’s pretty good, and should certainly serve as a model, not just for the government, but for any organization, small or large, that is building an online presence.

See Mike’s post for the extracted thirteen (13) principles (plays in Obama-speak) for software projects.

While everybody needs a reminder, what puzzles me is that none of the principles are new. That being the case, shouldn’t we be asking:

Why haven’t projects been following these rules?

Reasoning that if we (collectively) know what makes software projects succeed, what are the barriers to implementing those steps in all software projects?

Re-stating rules that we already know to be true, without more, isn’t very helpful. Projects that start tomorrow will have a fresh warning in their ears and commit the same errors that doom 68% of all other projects.

My favorite suggestion and the one I have seen violated most often is:

Bring in experienced teams

I am told, “…our staff don’t know how to do X, Y or Z….” That sounds to me like a personnel problem. In an IT recession, a problem that isn’t hard to fix. But no, the project has to succeed with IT staff known to lack the project management or technical skills to succeed. You can guess the outcome of such projects in advance.

The restatement of project rules isn’t a bad thing to have but your real challenge is going to be following them. Suggestions for everyone’s benefit welcome!

### Why BI Projects Fail

Thursday, May 15th, 2014

Top reasons your Business Intelligence (BI) project will fail by Andrew Bourne.

Reasons 1) Data models are complex, 2) Dirty data, and 5) Decision making errors from misinterpretation of information, all have topic map like elements in them.

Andrew outlines the issues here and promises to take up each one separately and cover “…what to do about them.”

OK, I’m game.

There does seem to be a trend towards explanations for why “big data” projects are failing. As we saw in The Shrinking Big Data MarketPlace, a survey by VoltDB found that a full 72% of the respondents could not access or utilize the majority of their data.

I don’t view such reports as being “skeptical” about big data, but rather as being realistic: clear goals, hard work, and good management are as necessary for BI projects as for successful projects of any other kind.

I will be following Andrew’s posts and will report back on where he comes down on issues relevant to topic maps.

I first saw this in a tweet by Gregory Piatetsky.

### Introduction to Process Maturity

Tuesday, April 29th, 2014

Introduction to Process Maturity by Michael Edson.

From the description:

Museum Web and New Media software projects offer tantalizing rewards, but the road to success can be paved with uncertainty and risk. To small organizations these risks can be overwhelming, and even large organizations with seemingly limitless resources can flounder in ways that profoundly affect staff morale, public impact, the health and fitness of our partners in the vendor community, and our own bottom lines. Something seems to happen between the inception of projects, when optimism and beneficial outcomes seem clear and attainable, and somewhere down the road when schedules, budgets, and outcomes go off course. What is it? And what can we do to gain control?

This paper, created for the 2008 annual conference of the American Association of Museums, describes some common ways that technology projects get into trouble. It examines a proven project-process framework called the Capability Maturity Model and how that model can provide insight and guidance to museum leaders and project participants, and it tells how to improve real-world processes that contribute to project success. The paper includes three brief case studies and a call-to-action which argues that museum leaders should make technology stewardship an urgent priority.

The intended audience is people who are interested in understanding and improving how museum-technology gets done. The paper’s primary focus is Web and New Media software projects, but the core ideas are applicable to projects of all kinds.

In web time it may seem like process advice from 2008 must be dated.

Not really, consider the following description of the then current federal government’s inability to complete technology projects:

As systems become increasingly complex, successful software development becomes increasingly difficult. Most major system developments are fraught with cost, schedule, and performance shortfalls. We have repeatedly reported on costs rising by millions of dollars, schedule delays of not months but years, and multibillion-dollar systems that don’t perform as envisioned.

The problem wasn’t just that the government couldn’t complete software projects on time or on budget, or that it couldn’t predict which projects it was currently working on would succeed or fail—though these were both significant and severe problems—but most worrisome from my perspective is that it couldn’t figure out which new projects it was capable of doing in the future. If a business case or museum mission justifies an investment in technology that justification is based on the assumption that the technology can be competently implemented. If instead the assumption is that project execution is a crap shoot, the business case and benefit-to-mission arguments crumble and managers are stuck, unable to move forward (because of the risk of failure) and unable to not move forward because business and mission needs still call.

There is no shortage of process/project management advice but I think Edson captures the essence needed for process/project success:

• Honestly assess your current processes and capabilities
• Improve processes and capabilities one level at a time

### US Government Content Processing: A Case Study

Monday, March 24th, 2014

US Government Content Processing: A Case Study by Stephen E Arnold.

From the post:

I know that the article “Sinkhole of Bureaucracy” is an example of a single case example. Nevertheless, the write up tickled my funny bone. With fancy technology, USA.gov, and the hyper modern content processing systems used in many Federal agencies, reality is stranger than science fiction.

This passage snagged my attention:

inside the caverns of an old Pennsylvania limestone mine, there are 600 employees of the Office of Personnel Management. Their task is nothing top-secret. It is to process the retirement papers of the government’s own workers. But that system has a spectacular flaw. It still must be done entirely by hand, and almost entirely on paper.

One of President Obama’s advisors is quoted as describing the manual operation as “that crazy cave.”
….

Further in the post Stephen makes a good point when he suggests that in order to replace this operation you would first have to understand it.

But having said that, holding IT contractors accountable for failure would go a long way towards encouraging such understanding.

So far as I know, there have been no consequences for the IT contractors responsible for the healthcare.gov meltdown.

Perhaps that is the first sign of IT management incompetence: no consequences for IT failures.

Yes?

### Coconut Headphones: Why Agile Has Failed

Saturday, March 15th, 2014

From the post:

The 2001 agile manifesto was an attempt to replace rigid, process and management heavy, development methodologies with a more human and software-centric approach. They identified that the programmer is the central actor in the creation of software, and that the best software grows and evolves organically in contact with its users.

My first real contact with the ideas of agile software development came from reading Bob Martin’s book ‘Agile Software Development’. I still think it’s one of the best books about software I’ve read. It’s a tour-de-force survey of modern (at the time) techniques; a recipe book of how to create flexible but robust systems. What might surprise people familiar with how agile is currently understood, is that the majority of the book is about software engineering, not management practices.
….

Something to get your blood pumping on a weekend. 😉

We all have horror stories to tell about various programming paradigms. For “agile” programming, I remember a lead programmer saying a paragraph in an email was sufficient documentation for a plan to replace a content management system with a custom system written on top of subversion. Need I say he had management support?

Fortunately that project died but not through any competence of management. But in all fairness, that wasn’t “agile programming” in any meaningful sense of the phrase.

If you think about it, just about any programming paradigm will yield good results if you have good management and programmers. With incompetent management or programmers, the best programming paradigm in the world will not yield a good result.

Programming paradigms have the same drawback as religion: people are essential to both.

A possible explanation both for high project failure rates and for religions that are practiced in word but not deed.

Yes?

### A Gresham’s Law for Crowdsourcing and Scholarship?

Friday, February 28th, 2014

A Gresham’s Law for Crowdsourcing and Scholarship? by Ben W. Brumfield.

Ben examines the difficulties of involving both professionals and “amateurs” in crowd-sourced projects.

The point of controversy being whether or not professionals will decline to be identified with projects that include amateurs.

There isn’t any smoking gun evidence and I suspect the reaction of both professionals and amateurs varies from field to field.

Still, it is something you may run across if you use crowd-sourcing to build semantic annotations and/or data archives.

### Why the Feds (U.S.) Need Topic Maps

Monday, January 6th, 2014

Earlier today I saw this offer to “license” technology for commercial development:

ORNL’s Piranha & Raptor Text Mining Technology

From the post:

UT-Battelle, LLC, acting under its Prime Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy (DOE) for the management and operation of the Oak Ridge National Laboratory (ORNL), is seeking a commercialization partner for the Piranha/Raptor text mining technologies. The ORNL Technology Transfer Office will accept licensing applications through January 31, 2014.

ORNL’s Piranha and Raptor text mining technology solves the challenge most users face: finding a way to sift through large amounts of data that provide accurate and relevant information. This requires software that can quickly filter, relate, and show documents and relationships. Piranha is JavaScript search, analysis, storage, and retrieval software for uncertain, vague, or complex information retrieval from multiple sources such as the Internet. With the Piranha suite, researchers have pioneered an agent approach to text analysis that uses a large number of agents distributed over very large computer clusters. Piranha is faster than conventional software and provides the capability to cluster massive amounts of textual information relatively quickly due to the scalability of the agent architecture.

While computers can analyze massive amounts of data, the sheer volume of data makes the most promising approaches impractical. Piranha works on hundreds of raw data formats, and can process data extremely fast, on typical computers. The technology enables advanced textual analysis to be accomplished with unprecedented accuracy on very large and dynamic data. For data already acquired, this design allows discovery of new opportunities or new areas of concern. Piranha has been vetted in the scientific community as well as in a number of real-world applications.

The Raptor technology enables Piranha to run on SharePoint and MS SQL servers and can also operate as a filter for Piranha to make processing more efficient for larger volumes of text. The Raptor technology uses a set of documents as seed documents to recommend documents of interest from a large, target set of documents. The computer code provides results that show the recommended documents with the highest similarity to the seed documents.

Gee, that sounds so very hard. Using seed documents to recommend documents “…from a large, target set of documents.”?

There are many ways to do that. Just searching for “Latent Dirichlet Allocation” in “.gov” domains, my total is 14,000 “hits.”
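Seed-document recommendation of the kind Raptor is described as doing can be sketched in a few lines. A minimal illustration using bag-of-words cosine similarity over invented documents — this is not ORNL’s code, and a real system would add TF-IDF weighting, stemming, and serious scaling:

```python
# Score each target document by cosine similarity to a seed set,
# using plain bag-of-words term-frequency vectors (stdlib only).
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(seeds, targets, top_n=2):
    # Merge the seed vectors into one centroid, then rank targets
    # by similarity to it, highest first.
    centroid = Counter()
    for s in seeds:
        centroid.update(vectorize(s))
    ranked = sorted(targets, key=lambda t: cosine(centroid, vectorize(t)),
                    reverse=True)
    return ranked[:top_n]

seeds = ["text mining of large document collections",
         "clustering documents by topic"]
targets = ["document clustering and text mining at scale",
           "recipes for sourdough bread",
           "agent architectures for distributed computing"]
print(recommend(seeds, targets, top_n=1))
```

The point stands: the hard part is not the ranking arithmetic, it is knowing the technique already exists.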

If you were paying for search technology to be developed, how many times would you pay to develop the same technology?

Just curious.

In order to have a sensible technology development process, the government needs a topic map of its development efforts. Not only to track duplicate development but to prevent it.

Imagine if every web project had to develop its own httpd server, instead of the vast majority of them using Apache HTTPD.

With a common server base, a community has developed to maintain and extend that base product. That can’t happen where the same technology is contracted for over and over again.

Suggestions on what might be an incentive for the Feds to change their acquisition processes?

### On Self-Licking Ice Cream Cones

Thursday, December 5th, 2013

On Self-Licking Ice Cream Cones by Pete Worden, 1992.

Ben Brody in The definitive glossary of modern US military slang quotes the following definition for a Self-Licking Ice Cream Cone:

A military doctrine or political process that appears to exist in order to justify its own existence, often producing irrelevant indicators of its own success. For example, continually releasing figures on the amount of Taliban weapons seized, as if there were a finite supply of such weapons. While seizing the weapons, soldiers raid Afghan villages, enraging the residents and legitimizing the Taliban’s cause.

The Wikipedia entry (Self-licking ice cream cone) reports the phrase was first used by Pete Worden in “On Self-Licking Ice Cream Cones” in 1992 to describe the NASA bureaucracy.

The keywords for the document are: Ice Cream Cones; Pork; NASA; Mafia; Congress.

Birds of a feather I would say.

Worden isolates several problems:

Problems, National, The Budget Process

This unfortunate train of events has resulted in a NASA which, more than any other agency, believes it works only for the appropriations committees. The senior staff of those committees, who have little interest in science or space, effectively run NASA. NASA senior officials’ noses are usually found at waist level near those committee staffers.

Problems, Closer to Home, NASA

“The Self-Licking Ice Cream Cone”

Since NASA effectively works for the most porkish part of Congress, it is not surprising that their programs are designed to maximize and perpetuate jobs programs in key Congressional districts. The Space Shuttle-Space Station is an outrageous example. Almost two-thirds of NASA’s budget is tied up in this self-licking program. The Shuttle is an unbelievably costly way to get to space at $1 billion a pop. The Space Station is a silly design. Yet, this Station is designed so it can only be built by the Shuttle and the Shuttle is the only way to construct the Station….

“Inmates Running the Asylum”

NASA’s vaunted “peer review” process is not a positive factor, but an example of the “pork” mentality within the scientific community. It results in needlessly complex programs whose primary objective is not putting instruments in orbit, but maximizing the number of constituencies and investigators, thereby maximizing the political invulnerability of the program….

“Mafia Tactics”

…The EOS is a case in point. About a year ago, encouraged by criticism from some quarters of Congress and in the press, some scientists and satellite contractors began proposing small, cheap, near-term alternatives to the EOS “battlestars.” Senior NASA officials conducted, with impunity, an unbelievable campaign of threats against these critics. Members of the White House advisory committees were told they would not get NASA funding if they continued to probe the program….

“Shoot the Sick Horses, and their Trainers”

It is outrageous that the Hubble disaster resulted in no repercussions. All we hear is that some un-named technician, no longer working for the contractor, made a mistake in the early 1980s. Even in the Defense Department, current officials would lose their jobs over allowing such an untested and expensive system to be launched.

Compare Worden’s complaints to the security apparatus represented by the NSA and its kin.
Have you heard of any repercussions for any of the security failures and/or outrages? Is there any doubt that the security apparatus exists solely to perpetuate the security apparatus?

By definition the NSA is a Self-Licking Ice Cream Cone. Time to find a trash can.

Hubble: The Hubble Space Telescope Optical Systems Failure Report (pdf)

Long before all the dazzling images from Hubble, it was virtually orbiting space junk for several years.

### how to write a to-do list

Wednesday, September 11th, 2013

Important: how to write a to-do list by Divya Pahwa.

From the post:

I remember trying out my first hour-by-hour schedule to help me get things done when I was 10. Wasn’t really my thing. I’ve since retired the hourly schedule, but I still rely on a daily to-do list.

I went through the same motions every night in university. I wrote out, by hand, my to-do list for the next day, ranked by priority. Beside each task I wrote down the number of hours each task should take. This was and still is a habit and finding a system that works has been a struggle for me. I’ve tested out a variety of methods, bought a number of books on the subject, and experimented: colour-coded writing, post-it note reminders in the bathroom, apps, day-timers….you name it, I’ve tried it.

In my moment of retrospection I still wasn’t sure if my current system was spot on. So, I went on an adventure to figure out the most effective way to not only write my daily to-do list but to get more things done.

(…)

A friend was recently tasked with reading the latest “fad” management book. I can’t mention its name in case it appears in a search, etc. But it is one of those big print, wide margins, “…this has never been said this way before…,” type books.

Of course it has never been said that way before. Every rogue has a unique pitch for every fool they meet. I thought everyone knew that. Apparently not since rogues have to assure us they are unique in such publications.
I can’t help my friend but when I saw this short post on to-do lists, I thought it might help both you and me.

Oh, I keep to-do lists but too much stuff falls over to the next day, next day, etc. Some weeks I am better than others. Some weeks are worse.

Take it as a reminder of a best practice. A best practice that will make you more productive at very little expense. No tapes, audio book, paperback book, software, binders (spiral or otherwise), etc. Hell, you don’t even need a smart phone to do it. 😉

Read Divya’s post and more importantly, put it into practice for a week.

Did you get more done than the week before?

### Inside the world’s biggest agile software project disaster

Tuesday, September 10th, 2013

From the post:

In theory, it was a good idea – using a smart new methodology to unravel a legacy of bureaucratic tangles. In reality, execution of the world’s largest agile software project has been less than impressive.

By developing its flagship Universal Credit (UC) digital project – an initiative designed to merge six separate benefits strands into one – using agile principles, the UK Department for Work and Pensions (DWP) hoped to decisively lay the ghosts of past DWP-backed digital projects to rest.

Unfortunately, a report by the National Audit Office (NAO) has demonstrated that the UK government’s IT gremlins remain in rude health, with £34 million of new IT assets to date written off by the DWP on this project alone. Moreover, the report states that the project has failed to deliver its rollout targets, and that the DWP is now unsure how much of its current IT will be viable for a national rollout – all pretty damning indictments for an initiative that was supposed to be demonstrating the merits of the Agile Framework for central UK government systems.
Perhaps one of the biggest errors in implementing an agile approach highlighted by the NAO is the failure of the DWP to define how it would monitor progress or document decisions, and the need to integrate the new systems with existing IT, procured and managed assuming the traditional ‘waterfall’ approach.

(…)

Don’t take this post wrong. It is equally easy to screw up with a “waterfall” approach to project management. Particularly with inadequate management, documentation and requirements.

However, this is too good an example of why everyone in a project should be pushed to write down with some degree of precision what they expect, how to know when it arrives and deadlines for meeting their expectations.

Without all of that in writing, shared writing with the entire team, project “success” will be a matter of face saving and not accomplishment of the original goals, whatever they may have been.

### The Monstrous Cost of Work Failure

Monday, September 9th, 2013

I first saw this posted by Randy Krum.

Would you care to guess what accounts for 60% to 80% of project failures? According to the ASAPM (American Society for the Advancement of Project Management):

According to the Meta Group, 60% – 80% of project failures can be attributed directly to poor requirements gathering, analysis, and management. (emphasis added)

Requirements, what some programmers are too busy coding to collect and some managers fear because of accountability.

Topic maps can’t solve your human management problems. Topic maps can address:

• Miscommunication between business and IT – $30 Billion per year
• 58% of workers spending half of each workday, filing, deleting, sorting information

Reducing information shuffling is like adding more staff for the same bottom line.

Interested?

### Big Data Wisdom Courtesy of Monty Python

Thursday, February 28th, 2013

Big Data Wisdom Courtesy of Monty Python by Rik Tamm-Daniels.

From the post:

One of our favorite parts of the hilarious 1975 King Arthur parody, Monty Python and the Holy Grail, is the “Bridge of Death” scene: If a knight answered the bridge keeper’s three questions, he could safely cross the bridge; if not, he would be catapulted into… the Gorge of Eternal Peril!

Unfortunately, that’s exactly what happened to most of King Arthur’s knights, who were either stumped by a surprise trivia question like, “What is the capital of Assyria?” – or responded too indecisively when asked, “What is your favorite color?”

Fortunately when King Arthur was asked, “What is the airspeed velocity of an unladen swallow?” he wisely sought further details: “What do you mean – an African or European swallow?” The stunned bridge keeper said, “I don’t know… AAAGH!” Breaking his own rule, the bridge keeper was thrown over the edge, freeing King Arthur to continue his quest for the Holy Grail.

Many organizations are on “Big Data Holy Grail” quests of their own, looking to deliver game-changing business analytics, only to find themselves in a “boil-the-ocean” Big Data project that “after 24 months of building… has no real value.” Unfortunately, many CIOs and BI Directors have rushed into hasty Hadoop implementations, fueled by a need to ‘respond’ to Big Data and ‘not fall behind.’

That’s just one of the troublesome findings from a recent InformationWeek article by Doug Henschen, Vague Goals Seed Big Data Failures. Henschen’s article cited a recent Infochimps Big Data survey that revealed 55% of big data projects don’t get completed and that many others fall short of their objectives. The top reason for failed Big Data projects was “inaccurate scope”:

I don’t disagree with the need to define “success” and anticipated ROI before the project starts.

But if it makes you feel any better, a 45% rate of success isn’t all that bad, considering the average experience: Facts and Figures, a summary of project failure data.

A summary of nine (9) studies, 2005 until 2011.

One of the worst comments being:

A truly stunning 78% of respondents reported that the “Business is usually or always out of sync with project requirements”

Semantic technologies are not well served by projects that get funded but produce no tangible benefits.

Project officers may like that sort of thing but the average consumer and business leaders know better.

Promoting semantic technologies in general and topic maps in particular means successful results in the eyes of users, not ours.

### Collaborative Systems: Easy To Miss The Mark

Sunday, October 21st, 2012

Collaborative Systems: Easy To Miss The Mark by Jacob Morgan.

From the post:

Map out use cases defining who you want collaborating and what results you want them to achieve. Skip this step in the beginning, and you’ll regret it in the end.

One of the things that organizations really need to consider when evaluating collaborative solutions is their use cases. Not only that, but also understanding the outcomes of those use cases and how they can map to a desired feature requirement. Use cases really help put things into perspective for companies who are seeking to understand the “why” before they figure out the “how.”

That’s what a use case is: the distilled essence of a role within your organization, how it will interact with some system, and the expected or desired result. Developing use cases makes your plans, requirements, and specifications less abstract because it forces you to come up with specific examples.

This is why we created a framework (inspired by Gil Yehuda) to address this. It breaks down as follows:

• Identify the overall business problem you are looking to solve (typically there are several).
• Narrow down the problem into specific use cases; each problem has several use cases.
• Describe the situation that needs to be present for that use case to be applicable.
• Clarify the desired action.
• State the desired result.

For topic maps I would write:

Map out use cases defining what data you want to identify and/or integrate and what results you expect from that identification or integration. Skip this step in the beginning, and you’ll regret it in the end.

If you don’t have an expectation of a measurable result (in businesses a profitable one), your efforts at semantic integration are premature.

How will you know when you have reached the end of a particular effort?
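The five-step framework earlier in the post lends itself to a simple record structure, which also makes the “measurable result” requirement hard to skip. A minimal sketch — the field names and example content are mine, not from Morgan’s post:

```python
# Each use case stays tied to its business problem, situation,
# desired action, and desired (measurable) result.
from dataclasses import dataclass

@dataclass
class UseCase:
    business_problem: str  # step 1: overall problem being solved
    name: str              # step 2: the specific use case
    situation: str         # step 3: when this use case applies
    desired_action: str    # step 4: what the collaborator does
    desired_result: str    # step 5: the measurable outcome

uc = UseCase(
    business_problem="Duplicate answers to the same customer question",
    name="Share a support answer",
    situation="An agent resolves a question not covered in the knowledge base",
    desired_action="Agent posts the answer to the shared workspace",
    desired_result="Next occurrence is answered from the workspace, not re-solved",
)
print(uc.name)
```

If you cannot fill in the last field, the effort is premature, which is exactly the point above.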

### People and Process > Prescription and Technology

Monday, October 15th, 2012

Factors that affect software systems development project outcomes: A survey of research by Laurie McLeod and Stephen G. MacDonell. ACM Computing Surveys (CSUR) Surveys Volume 43 Issue 4, October 2011 Article No. 24, DOI: 10.1145/1978802.1978803.

Abstract:

Determining the factors that have an influence on software systems development and deployment project outcomes has been the focus of extensive and ongoing research for more than 30 years. We provide here a survey of the research literature that has addressed this topic in the period 1996–2006, with a particular focus on empirical analyses. On the basis of this survey we present a new classification framework that represents an abstracted and synthesized view of the types of factors that have been asserted as influencing project outcomes.

As with most survey work, particularly ones that summarize 177 papers, this is a long article, some fifty-six pages.

Let me try to tempt you into reading it by quoting from Angelica de Antonio’s review of it (in Computing Reviews, Oct. 2012):

An interesting discussion about the very concept of project outcome precedes the survey of factors, and an even more interesting discussion follows it. The authors stress the importance of institutional context in which the development project takes place (an aspect almost neglected in early research) and the increasing evidence that people and process have a greater effect on project outcomes than technology. A final reflection on what projects still continue to fail—even if we seem to know the factors that lead to success—raises a question on the utility of prescriptive factor-based research and leads to considerations that could inspire future research. (emphasis added)

Before you run off to the library or download a copy of the survey, two thoughts to keep in mind:

First, if “people and process” are more important than technology, where should we place the emphasis in projects involving semantics?

Second, if “prescription” can’t cure project failure, what are its chances with semantic diversity?

Thoughts?

### Requirements Engineering (3rd ed.)

Monday, October 15th, 2012

Requirements Engineering (3rd ed.) by Hull, Elizabeth, Jackson, Ken, Dick, Jeremy. Springer, 3rd ed., 2011, XVIII, 207 p. 131 illus., ISBN 978-1-84996-404-3.

From the webpage:

Using the latest research and driven by practical experience from industry, the third edition of this popular book provides useful information to practitioners on how to write and structure requirements.

• Explains the importance of Systems Engineering and the creation of effective solutions to problems
• Describes the underlying representations used in system modelling and introduces the UML2
• Considers the relationship between requirements and modelling
• Covers a generic multi-layer requirements process
• Discusses the key elements of effective requirements management
• Explains the important concept of rich traceability

In this third edition the authors have updated the overview of DOORS to include the changes featured in version 9.2. An expanded description of Product Family Management and a more explicit definition of Requirements Engineering are also included. Requirements Engineering is written for those who want to develop their knowledge of requirements engineering, whether practitioners or students.

I saw a review of this work in the October 2012 issue of Computing Reviews, where Diego Merani remarks:

The philosopher Seneca once said: “There is no fair wind for one who knows not whither he is bound.” This sentence encapsulates the essence of the book: the most common reasons projects fail involve incomplete requirements, poor planning, and the incorrect estimation of resources, risks, and challenges.

The point about requirements, and the consequences of their absence, rings true across software and other projects, including the authoring of topic maps.

Requirements: Don’t leave home without them!

### Broken Telephone Game of Defining Software and UI Requirements [And Semantics]

Sunday, October 7th, 2012

Martin is writing in a UI context but the lesson he teaches is equally applicable to any part of software/project management. (Even U.S. federal government big data projects.)

His counsel is not one of despair; he outlines solutions that can lessen the impact of the broken telephone game.

But it is up to you to recognize the game that is afoot and to react accordingly.

From the post:

The broken telephone game is played all over the world. In it, according to Wikipedia, “one person whispers a message to another, which is passed through a line of people until the last player announces the message to the entire group. Errors typically accumulate in the retellings, so the statement announced by the last player differs significantly, and often amusingly, from the one uttered by the first.”

This game is also played inadvertently by a large number of organizations seeking to define software and UI requirements, using information passed from customers, to business analysts, to UI/UX designers, to developers and testers.

Here’s a typical example:

• The BA or product owner elicits requirements from a customer and writes them down, often as a feature list and use cases.
• The use cases are interpreted by the UI/UX team to develop UI mockups and storyboards.
• Testing interprets the storyboards, mockups, and use cases to develop test cases,
• Also, the developers will try to interpret the use cases, mockups, and storyboards to actually write the code.

As with broken telephone, at each handoff of information the original content is altered. The resulting approach includes a lot of re-work and escalating project costs due to combinations of the following:

• Use cases don’t properly represent customer requirements.
• UI/UX design is not consistent with the use cases.
• Incorrect test cases create false bugs.
• Missed test cases result in undiscovered bugs.
• Developers build features that don’t meet customer needs.

The further down the broken telephone line the original requirements get, the more distorted they become. For this reason, UI storyboards, test cases, and code typically require a lot of reworking as requirements are misunderstood or improperly translated by the time they get to the UI and testing teams.
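The accumulation Martin describes can be put in arithmetic terms. A back-of-the-envelope sketch — the 90% per-handoff fidelity is a made-up illustrative number, not a figure from the post:

```python
# If each retelling preserves only a fixed fraction of the customer's
# intent, fidelity decays multiplicatively with each handoff.
handoffs = ["customer -> BA", "BA -> UI/UX", "UI/UX -> developers"]
fidelity_per_handoff = 0.9  # hypothetical: 90% of intent survives each retelling

fidelity = 1.0
for handoff in handoffs:
    fidelity *= fidelity_per_handoff
    print(f"after {handoff}: {fidelity:.0%} of original intent survives")
```

Even at an optimistic 90% per handoff, only about 73% of the original intent reaches the developers after three handoffs, which is why the fixes Martin proposes focus on shortening or tightening the chain.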

### Designing Open Projects

Wednesday, August 15th, 2012

Designing Open Projects: Lessons From Internet Pioneers (PDF) by David Witzel.

From the foreword:

A key insight underpinning Witzel’s tips is that this is not a precise methodology to be followed. Instead, an open project approach should be viewed as a mindset. Leaders have to discern whether the challenges they are facing can best be solved using a closed or open approach, defined as follows:

• A closed project has a defined staff, budget, and outcome; and uses hierarchy and logic models to direct activities. It is particularly appropriate for problems with known solutions and stable environments, such as the development of a major highway project.
• An open project is useful to address challenges where the end may not be clear, the environment is rapidly changing, and/or the coordinating entity doesn’t have the authority or resources to directly create needed change. In these open projects, new stakeholders can join at will, roles are often informal, resources are shared, and actions and decisions are distributed throughout the system.

Witzel’s report provides guideposts on how to use an open project approach on appropriate large-scale efforts. We hope this report serves as an inspiration and practical guide to federal managers as they address the increasingly complex challenges facing our country that reach across federal agency—and often state, local, nonprofit, and private sector—boundaries.

I can think of examples of semantic integration projects that would work better with either model.

What factors would you consider before putting your next semantic integration project into one category or the other?

I first saw this at: Four short links: 15 August 2012 by Nat Torkington

### FBI’s Sentinel Project: 5 Lessons Learned[?]

Saturday, August 4th, 2012

FBI’s Sentinel Project: 5 Lessons Learned [?] by John Foley.

John writes of lessons learned from the Sentinel Project, which replaced the $170 million disaster, the Virtual Case File system. Lessons you need to avoid applying to your information management projects, whether you use topic maps or not.

2. Agile development gets things done.

The next big shift in strategy was Fulgham’s decision in September 2010 to wrest control of the project from prime contractor Lockheed Martin and use agile development to accelerate software deliverables. The thinking was that a hands-on, incremental approach would be faster because functionality would be developed, and adjustments made, in two-week “sprints.” The FBI missed its target date for finishing that work–September 2011–but it credits the agile methodology with ultimately getting the job done.

Missing a completion target by ten (10) months does not count as a success for most projects.

Moreover, note how they define “success:”

this week’s announcement that Sentinel, as of July 1, became available to all FBI employees is a major achievement.

Available to all FBI employees? I would think use by all FBI employees would be the measure of success. Yes?

Can you think of a success measure other than use by employees?

3. Commercial software plays an important role.

Sentinel is based in part on commercial software, a fact that’s often overlooked because of all the custom coding and systems integration involved. Under the hood are EMC’s Documentum document management software, Oracle databases, IBM’s WebSphere middleware, Microsoft’s SharePoint, and Entrust’s PKI technology. Critics who say that Sentinel would have gone more smoothly if only it had been based on off-the-shelf software seem unaware that, in fact, it is.

Commercial software? Sounds like a software Frankenstein to me. I wonder if they simply bought software based on the political clout of the vendors and then wired it together? That is what it sounds like.

Do you have access to the system documentation?
That could prove to be an interesting read. I can imagine legacy systems wired together with these components but if you are building a clean system, why the cut-n-paste from different vendors?

4. Agile development is cheaper, too.

Sentinel came in under its $451 million budget. The caveat is that the FBI’s original cost estimate for Sentinel was $425 million, but that was before Fulgham and Johnson took over, and they stayed within the budget they were given. The Inspector General might quibble with how the FBI accounts for the total project cost, having pointed out in the past that its tally didn’t reflect the agency’s staff costs. But the FBI wasn’t forced to go to Congress with its hand out. Agile development wasn’t only faster, but also cheaper.

Right, let’s simply lie to the prospective client about the true cost of development for a project. Their staff, who already have full time duties, can just tough it out and give us the review/feedback that we need to build a working system. Right.

This is true for IT projects in general but topic map projects in particular. Clients will have to resource the project properly from the beginning, not just with your time but the time of its staff and subject matter experts.

A good topic map, read a useful topic map, is going to reflect contributions from the client’s staff. You need to make the case to decision makers that the staff contributions are just as important as their present day to day tasks.

BTW, if agile development were oh so useful, people would simply be using it. Like C, Java, C++.

Do you see marketing pieces for C, Java, C++?

Successful approaches/languages are used, not advertised.

### Who’s accountable for IT failure? (Parts 1 & 2)

Thursday, April 19th, 2012

Michael Krigsman has an excellent two part series IT failure:

Who’s accountable for IT failure? (Part One)

Who’s accountable for IT failure? (Part Two)

Michael goes through the horror stories and stats about IT failures (about 70%) in some detail.

But think about just the failure rate for a minute: 70%?

Would you drive a car with a 70% chance of failure?

Would you fly in a plane with a 70% chance of failure?