The Costs and Profits of Poor Data Quality by Jim Harris.
From the post:
Continuing the theme of my two previous posts, which discussed when it’s okay to call data quality good enough and when perfect data quality is necessary, in this post I want to briefly discuss the costs — and profits — of poor data quality.
Loraine Lawson interviewed Ted Friedman of Gartner Research about How to Measure the Cost of Data Quality Problems, such as the costs associated with reduced productivity, redundancies, business processes breaking down because of data quality issues, regulatory compliance risks, and lost business opportunities. David Loshin blogged about the challenge of estimating the cost of poor data quality, noting that many estimates, upon close examination, seem to rely exclusively on anecdotal evidence.
As usual, Jim does a very good job of illustrating the costs and profits of poor data quality.
I have a slightly different question:
What would you need to know about data in order to spot that it is of poor quality?
It is one thing to find out after a spacecraft crashes that poor data quality was responsible, but it would be better to spot the error beforehand. As in before the launch.
The answer is probably data specific, but are there any general types of information that would help you spot poor quality data?
Before you are 1,000 meters off the lunar surface. 😉
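For what it’s worth, here is a minimal sketch of the kind of generic checks I have in mind: completeness, plausible ranges, and duplicate identifiers. The field names, valid ranges, and sample records are all hypothetical, purely for illustration.

```python
# Hypothetical sketch: generic data quality checks on telemetry-style records.
# Field names, valid ranges, and sample data are made up for illustration.

def check_quality(records, required_fields, valid_ranges):
    """Return a list of human-readable problems found in the records."""
    problems = []
    seen_ids = set()
    for i, rec in enumerate(records):
        # Completeness: every required field must be present and non-empty.
        for field in required_fields:
            if rec.get(field) in (None, ""):
                problems.append(f"record {i}: missing value for '{field}'")
        # Plausibility: numeric values must fall inside known-good ranges.
        for field, (lo, hi) in valid_ranges.items():
            value = rec.get(field)
            if isinstance(value, (int, float)) and not lo <= value <= hi:
                problems.append(f"record {i}: {field}={value} outside [{lo}, {hi}]")
        # Uniqueness: duplicate identifiers often signal merge or entry errors.
        rec_id = rec.get("id")
        if rec_id in seen_ids:
            problems.append(f"record {i}: duplicate id {rec_id}")
        seen_ids.add(rec_id)
    return problems


if __name__ == "__main__":
    sample = [
        {"id": 1, "altitude_m": 1500.0, "velocity_ms": -12.0},
        {"id": 2, "altitude_m": None, "velocity_ms": -15.0},    # missing altitude
        {"id": 2, "altitude_m": -300.0, "velocity_ms": -14.0},  # duplicate id, bad range
    ]
    for problem in check_quality(
        sample,
        required_fields=["id", "altitude_m", "velocity_ms"],
        valid_ranges={"altitude_m": (0.0, 400_000.0)},
    ):
        print(problem)
```

None of these checks are tied to any one dataset, but every one of them depends on knowing something about the data — which fields are required, what ranges are plausible, what counts as a duplicate — and that is as much a subject matter question as a technical one.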