Accountability in a Computerized Society

Accountability in a Computerized Society by Helen Nissenbaum. (The ACM Digital Library reports a publication date of 1997; the paper itself carries no date.)


This essay warns of eroding accountability in computerized societies. It argues that assumptions about computing and features of situations in which computers are produced create barriers to accountability. Drawing on philosophical analyses of moral blame and responsibility, four barriers are identified: (1) the problem of many hands, (2) the problem of bugs, (3) blaming the computer, and (4) software ownership without liability. The paper concludes with ideas on how to reverse this trend.

If a builder has built a house for a man and has not made his work sound, and the house which he has built has fallen down and so caused the death of the householder, that builder shall be put to death.

If it destroys property, he shall replace anything that it has destroyed; and, because he has not made sound the house which he has built and it has fallen down, he shall rebuild the house which has fallen down from his own property.

If a builder has built a house for a man and does not make his work perfect and a wall bulges, that builder shall put that wall into sound condition at his own cost.
—Laws of Hammu-rabi [229, 232, 233], circa 2027 B.C.

The leaky bucket style of security detailed in Back to Basics: Beyond Network Hygiene is echoed in this paper from 1997.

Where I disagree with the author is on the need for strict liability in order to reverse the descent into universally insecure computing environments.

Strict liability is typically used when society wants every possible means employed to prevent damage from a product. Given the insecure habits and nature of software production, strict liability would grind the software industry to a standstill, which would be highly undesirable, considering all the buggy software presently in use.

One of the problems that Lindner and Gaycken uncover is a lack of financial incentive to prevent or fix bugs in software.

Some may protest that creating incentives for vendors to fix bugs they created is in some way immoral.

My response would be:

We know that a lack of incentives results in bugs continuing to be produced and to remain unfixed. If incentives result in fewer bugs and faster fixes for those that already exist, what is your objection?

What we lack is a model for such incentives. Debating who has the unpaid responsibility for bugs seems pointless. We should be discussing an incentive model to get bugs detected and fixed.

Software vendors will be interested because at present patches and bug fixes are cost centers in their budgets.

Users will be interested because they won’t face routine hammer strikes from script kiddies to mid-level hackers.

The CNO (Computer Network Offense) crowd will be interested because fewer opportunities for script kiddies means more demand for their exceptional exploits.

Like they say, something for everybody.

The one thing no one should want is legislative action on this front. No matter how many legislators you own, the result is going to be bad.

I first saw this in Pete Warden’s Five Short Links for February 21, 2014.
