Debugging with the Scientific Method by Stuart Halloway.
This webpage points to a video of Stuart’s keynote address of the same title at Clojure/conj 2015 and to other resources on debugging.
Stuart summarizes the scientific method for debugging in his closing as:
know where you are going
make well-founded choices
write stuff down
Programmers, using Clojure or not, will profit from Stuart’s advice on debugging program code.
A group that Stuart does not mention, those of us interested in creating search interfaces for users, will benefit as well.
We have all had a similar early library experience: facing (in my youth) what seemed like an endless rack of card files, wanting to find information on a subject.
Of course the first problem, from Stuart’s summary, is that we don’t know where we are going. At best we have an ill-defined topic on which we are supposed to produce a report. Let’s say “George Washington, father of our country” for example. (Yes, U.S. specific but I wasn’t in elementary school outside of the U.S. Feel free to post or adapt this with other examples.)
The first step, with help from a librarian, is to learn the basic author, subject, title organization of the card catalog, and things like the fact that looking for “George Washington” starting with “George” isn’t likely to produce a useful result. Passing over the other details a librarian would convey, you are somewhat equipped to move to step two.
Understanding the basic organization and mechanics of a library card catalog, you can develop a plan to search for information on George Washington. Such a plan would include, for example, excluding works above the reading level of the searcher.
The third step, of course, is to capture all the information found in the resources located through the library card catalog.
I mention that scenario not just out of nostalgia for card catalogs but to illustrate the difference between a card catalog and its electronic counterparts, which have an externally defined schema and search interfaces with no disclosed search semantics.
That is to say, if a user doesn’t find an expected result for their search, how do you debug that failure?
You could say the user should have used “term X” instead of “term Y”, but that isn’t solving the search problem; that is fixing the user.
Fixing users, as any 12-step program can attest, is a difficult process fraught with failure.
Fixing search semantics, debugging search semantics as it were, can fix the search results for a large number of users with little or no effort on their part.
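As one hedged sketch of what that can look like (the synonym map, index, and function names here are hypothetical, not any particular engine’s API), a query-time synonym map rewrites the terms users actually type into the terms the index actually contains:

```python
# Hypothetical synonym map maintained on the search side; each entry fixes
# results for every user who types the left-hand phrase, with no effort on
# their part.
SYNONYMS = {
    "george washington": "washington, george",
    "father of our country": "washington, george",
}

def normalize_query(query):
    """Rewrite a user's query into the form the (hypothetical) index uses."""
    q = query.strip().lower()
    return SYNONYMS.get(q, q)

# Toy index: catalog headings mapped to document identifiers.
index = {"washington, george": ["doc-12", "doc-47"]}

print(index.get(normalize_query("Father of our Country"), []))
# -> ['doc-12', 'doc-47']
```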
There are any number of examples of debugging or fixing search semantics, but the most prominent one that comes to mind is spelling correction by search engines that return results with the “correct” spelling and offer the user an opportunity to pursue their “incorrect” spelling.
At one time, search engines returned “no results” in the event of misspelled words.
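A minimal sketch of that kind of fix, assuming a tiny in-memory index and using only the Python standard library (the data and names are made up for illustration): fall back to the closest indexed spelling when the literal query finds nothing, while reporting which term was actually used.

```python
import difflib

# Toy index: terms mapped to document identifiers (hypothetical data).
INDEX = {"washington": ["doc-12", "doc-47"], "jefferson": ["doc-31"]}

def search_with_correction(query):
    """Search the toy index; if the literal term finds nothing, try the
    closest indexed spelling and report which term was actually used."""
    term = query.strip().lower()
    docs = INDEX.get(term, [])
    if docs:
        return term, docs
    close = difflib.get_close_matches(term, list(INDEX), n=1, cutoff=0.8)
    if close:
        return close[0], INDEX[close[0]]
    return term, []

print(search_with_correction("washingtin"))
# -> ('washington', ['doc-12', 'doc-47'])
```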
The reason I mention this is that you are likely to be debugging search semantics on a less-than-global scale, but the same principle applies, as does Stuart’s scientific method.
Treat complaints about search results as an opportunity to debug the search semantics of your application. Follow up with users and test your improved search semantics.
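One way to follow Stuart’s “write stuff down” advice here, sketched with hypothetical data and helper names: record the queries users complained about, along with the documents they expected to find, and re-run them as regression checks whenever you revise the search semantics.

```python
# Hypothetical regression cases gathered from user complaints: the query a
# user actually ran and a document they expected it to find.
REPORTED_FAILURES = [
    ("george washington", "doc-12"),
    ("washingtin", "doc-12"),   # a misspelling a user really typed
]

def still_failing(search_fn):
    """Re-run every reported failure against a (possibly revised) search
    function and return the queries that still miss the expected document."""
    return [query for query, expected in REPORTED_FAILURES
            if expected not in search_fn(query)]

# Example: a naive exact-match search still fails the misspelled query.
naive_index = {"george washington": ["doc-12"]}
print(still_failing(lambda q: naive_index.get(q, [])))
# -> ['washingtin']
```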
Recall that, in all events, some user signs your check, not your application.