Archive for the ‘Decision Making’ Category

From God to Google

Friday, December 4th, 2015


I point this out only partially in jest.

Legend has it that the Deity knows the future consequences of present actions.

Google, on the other hand, has the hubris to decide matters that will impact all of us without understanding the future consequences and without consulting those who will be affected.

Take for example Google’s Loon project.

In brief, the Loon project will release a series of balloons that float in the stratosphere to offer global cellphone service, enabling anyone in the world to connect to the Internet.

Ground-based astronomy is difficult enough with increasing light pollution.


That map is only a portion of the world map at Dark Site Finder

In addition to growing light pollution, astronomers will have to contend with random sky junk from Google’s Loon project. Balloons move at the whim of the wind, making it impossible for astronomers to plan their observations around Google’s sky junk.

Don’t imagine that the surveillance potential will be lost on nation states, which will quickly use Google balloons as cover for their own. How are you going to distinguish one balloon from another at the edge of space?

Moreover, despite protests to the contrary, Google’s motivation is fairly clear: the more people with access to the Internet, the more targets of Internet advertising Google can deliver to its clients.

That the current Internet has adopted an ad-based model is no reason to presume that alternative models, on a country-by-country basis, could not be different. Bell telephone survived, nay thrived, for decades without a single ad being delivered by your phone service. Think about that: not a single ad, and more reliable service over a wider area than any cellphone provider can boast of today.

Hidden deeper in Google’s agenda is the White West paternalism that has decided that access to the Internet, like being Christian once upon a time, is a necessity for everyone. The third world is still paying for missionary intervention in the 17th century and later. Must we really repeat that sad episode?

The goal isn’t to benefit people who don’t have access to the Internet but to remake them into more closely fitting our “ideal” of a technology-based society.

The common and infantile belief that technology will bring democracy and other assorted benefits was thoroughly rebutted in Why Technology Hasn’t Delivered More Democracy by Thomas Carothers. Democracy and other social benefits are far more complicated, people complicated, than any simple technological “fix” can address.

I think the word I am searching for is “hubris.”

Unfortunately, unlike an individual case of “hubris,” when spreading the Internet fails to produce the desired outcomes, it won’t impact only Google, but the rest of humanity as well.

I first saw the Google image in a tweet by Marko A. Rodriguez.

20 Cognitive Biases That Screw Up Your Decisions

Saturday, October 10th, 2015

Samantha Lee and Shana Lebowitz created an infographic (Business Insider) of common cognitive biases.

Entertaining, informative, but what key insight is missing from this infographic?


The original at Business Insider is easier to read.

What’s missing is the question: Where do I stand to see my own cognitive bias?

If I were already aware of it, I would avoid it in decision making. Yes?

So if I am not aware of it, how do I get outside of myself to spot such a bias?

One possible solution, with the emphasis on possible, is to consult with others who may not share your cognitive biases. They may have other ones, ones that are apparent to you but not to them.

No guarantees on that solution because most people don’t appreciate having their cognitive biases pointed out. Particularly if they are central to their sense of identity and self-worth.

Take the management at the Office of Personnel Management (OPM), who have repeatedly been shown to be incompetent not only in matters of cybersecurity but in management in general.

Among other biases, the Office of Personnel Management suffers from 7. Confirmation bias, 8. Conservatism bias, 10. Ostrich effect, 17. Selective perception, and 20. Zero-risk bias.

The current infestation of incompetents at the Office of Personnel Management is absolutely convinced, judging from its responses to Inspector General reports urging modern project management practices, that no change is necessary.

Personally, I would fire everyone from the elevator operator (I’m sure they probably still have one) to the top and terminate all retirement and health benefits. That would not cure the technology problems at OPM, but it would provide the opportunity for a fresh start at addressing them.

Cognitive biases, self-interest, and the mutual support of incompetents doom reform at the OPM. You may as well wish upon a star.

I first saw this in a tweet by Christophe Lalanne.

Project Paradox

Monday, September 22nd, 2014

project decisions

Care to name projects and standards that suffered from the project paradox?

I first saw this in a tweet by Tobias Fors.

Thinking, Fast and Slow (Review) [And Subject Identity]

Friday, November 15th, 2013

A statistical review of ‘Thinking, Fast and Slow’ by Daniel Kahneman by Patrick Burns.

From the post:

We are good intuitive grammarians — even quite small children intuit language rules. We can see that from mistakes. For example: “I maked it” rather than the irregular “I made it”.

In contrast those of us who have training and decades of experience in statistics often get statistical problems wrong initially.

Why should there be such a difference?

Our brains evolved for survival. We have a mind that is exquisitely tuned for finding things to eat and for avoiding being eaten. It is a horrible instrument for finding truth. If we want to get to the truth, we shouldn’t start from here.

A remarkable aspect of your mental life is that you are rarely stumped. … you often have answers to questions that you do not completely understand, relying on evidence that you can neither explain nor defend.

The review focuses mainly on statistical issues in “Thinking, Fast and Slow” but I think you will find it very entertaining.

I deeply appreciate Patrick’s quoting of:

A remarkable aspect of your mental life is that you are rarely stumped. … you often have answers to questions that you do not completely understand, relying on evidence that you can neither explain nor defend.

In particular:

…relying on evidence that you can neither explain nor defend.

which resonates with me on subject identification.

Think about how we search for subjects, which of necessity involves some notion of subject identity.

What if a colleague asks if they should consult the records of the Order of the Garter to find more information on “Lady Gaga?”

Not entirely unreasonable since “Lady” is conferred upon female recipients of the Order of the Garter.

No standard search technique would explain why your colleague should not start with the Order of the Garter records.

Although I think most of us would agree such a search would be far afield. 😉

Every search starts with a searcher relying upon what they “know,” suspect or guess to be facts about a “subject” to search on.

At the end of the search, the characteristics of the subject as found turn out to be the characteristics we were searching for all along.

I say all that to suggest that we need not bother users to say how in fact they identify the objects of their searches.

Rather the question should be:

What pointers or contexts are the most helpful to you when searching? (May or may not be properties of the search objective.)

Recalling that properties of the search objective are how we explain successful searches, not how we perform them.

Calling upon users to explain or make explicit what they themselves don’t understand seems like a poor strategy for adoption of topic maps.

Capturing what “works” for a user, without further explanation or difficulty, seems like the better choice.

PS: Should anyone ask about “Lady Gaga,” you can mention that Glamour magazine featured her on its cover, naming her Woman of the Year (December 2013 issue). I know that only because of a trip to the local drug store for a flu shot.

Promised I would be “in and out” in minutes. Literally true, I suppose: it only took 50 minutes, with four other people present when I arrived.

I have a different appreciation of “minutes” from the pharmacy staff. 😉

Crowdsourcing Multi-Label Classification for Taxonomy Creation

Monday, November 4th, 2013

Crowdsourcing Multi-Label Classification for Taxonomy Creation by Jonathan Bragg, Mausam and Daniel S. Weld.


Recent work has introduced CASCADE, an algorithm for creating a globally-consistent taxonomy by crowdsourcing microwork from many individuals, each of whom may see only a tiny fraction of the data (Chilton et al. 2013). While CASCADE needs only unskilled labor and produces taxonomies whose quality approaches that of human experts, it uses significantly more labor than experts. This paper presents DELUGE, an improved workflow that produces taxonomies with comparable quality using significantly less crowd labor. Specifically, our method for crowdsourcing multi-label classification optimizes CASCADE’s most costly step (categorization) using less than 10% of the labor required by the original approach. DELUGE’s savings come from the use of decision theory and machine learning, which allow it to pose microtasks that aim to maximize information gain.
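The information-gain idea from the abstract can be illustrated with a minimal sketch. To be clear, the Bernoulli belief model, the fixed worker-accuracy parameter, and the `expected_info_gain` helper below are my own simplifying assumptions for illustration, not DELUGE’s actual algorithm: given a current belief that a label applies to an item, estimate how much a single yes/no microtask would reduce uncertainty, and pose the question with the largest expected gain.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli belief p = P(label applies)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_info_gain(p, accuracy=0.8):
    """Expected entropy reduction from asking one worker a yes/no
    'does label L apply to item X?' microtask, assuming the worker
    answers correctly with probability `accuracy` (a noisy-vote model)."""
    h_now = entropy(p)
    # Probability the worker answers "yes" under the noisy-vote model.
    p_yes = accuracy * p + (1 - accuracy) * (1 - p)
    p_no = 1 - p_yes
    # Posterior beliefs after each possible answer (Bayes' rule).
    post_yes = accuracy * p / p_yes
    post_no = (1 - accuracy) * p / p_no
    h_after = p_yes * entropy(post_yes) + p_no * entropy(post_no)
    return h_now - h_after

# Hypothetical beliefs over (item, label) pairs: ask about the pair
# whose answer we expect to learn the most from.
beliefs = {("itemA", "fruit"): 0.5, ("itemB", "fruit"): 0.95}
best = max(beliefs, key=lambda k: expected_info_gain(beliefs[k]))
```

As expected, the maximally uncertain pair (belief 0.5) yields the highest expected gain, so it is the question worth spending crowd labor on; near-certain pairs (0.95) are barely worth asking about, which is where the labor savings come from.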

An extension of work reported at Cascade: Crowdsourcing Taxonomy Creation.

While the reduction in required labor is interesting, the ability to sustain more complex workflows looks like the more important result.

That will require developing workflows that can be optimized, at least for subject identification.

Or should I say validation of subject identification?

What workflow do you use for subject identification and/or validation of subject identification?

Topic Maps in Lake Wobegon

Wednesday, May 15th, 2013

Jim Harris writes in The Decision Wobegon Effect:

In his book The Most Human Human, Brian Christian discussed what Baba Shiv of the Stanford Graduate School of Business called the decision dilemma, “where there is no objectively best choice, where there are simply a number of subjective variables with trade-offs between them. The nature of the situation is such that additional information probably won’t even help. In these cases – consider the parable of the donkey that, halfway between two bales of hay and unable to decide which way to walk, starves to death – what we want, more than to be correct, is to be satisfied with our choice (and out of the dilemma).”


Jim describes the Wobegon effect, an effect that blinds decision makers to alternative bales of hay.

Topic maps are composed of a mass of decisions, both large and small.

Is the Wobegon effect affecting your topic map authoring?

Check Jim’s post and think about your topic map authoring practices.

Advances in Neural Information Processing Systems (NIPS)

Sunday, April 7th, 2013

Advances in Neural Information Processing Systems (NIPS)

From the homepage:

The Neural Information Processing Systems (NIPS) Foundation is a non-profit corporation whose purpose is to foster the exchange of research on neural information processing systems in their biological, technological, mathematical, and theoretical aspects. Neural information processing is a field which benefits from a combined view of biological, physical, mathematical, and computational sciences.

Links to videos from NIPS 2012 meetings are featured on the homepage. The topics are as wide ranging as the foundation’s description.

A tweet from Chris Diehl, wondering what to do with “old hardbound NIPS proceedings (NIPS 11)” led me to: Advances in Neural Information Processing Systems (NIPS) [Online Papers], which has the papers from 1987 to 2012 by volume and a search interface to the same.

Quite a remarkable collection just from a casual skim of some of the volumes.

Unless you need to fill bookshelf space, I suggest you bookmark the NIPS Online Papers.

Fast Data Gets A Jump On Big Data

Tuesday, March 12th, 2013

Fast Data Gets A Jump On Big Data by Hasan Rizvi.

The title reminded me of a post by Sam Hunting that asked: “How come we’ve got Big Data and not Good Data?”

Now “big data” is to give way to “fast data.”

From the post:

Today, both IT and business users alike are facing business scenarios where they need better information to differentiate, innovate, and radically transform their business.

In many cases, that transformation is being enabled by a move to “Big Data.” Organizations are increasingly collecting vast quantities of real-time data from a variety of sources, from online social media data to highly-granular transactional data to data from embedded sensors. Once collected, users or businesses are mining the data for meaningful patterns that can be used to drive business decisions or actions.

Big Data uses specialized technologies (like Hadoop and NoSQL) to process vast amounts of information in bulk. But most of the focus on Big Data so far has been on situations where the data being managed is basically fixed—it’s already been collected and stored in a Big Data database.

This is where Fast Data comes in. Fast Data is a complementary approach to Big Data for managing large quantities of “in-flight” data that helps organizations get a jump on those business-critical decisions. Fast Data is the continuous access and processing of events and data in real-time for the purposes of gaining instant awareness and instant action. Fast Data can leverage Big Data sources, but it also adds a real-time component of being able to take action on events and information before they even enter a Big Data system.

Sorry Sam, “good data” misses out again.

Data isn’t the deciding factor in human decision making, instant or otherwise; see Thinking, Fast and Slow by Daniel Kahneman.

Supplying decision makers with good data and sufficient time to consider it is the route to better decision making.

Of course, that leaves time to discover the poor quality of data provided by fast/big data delivery mechanisms.

The #NIPS2012 Videos are out

Monday, January 21st, 2013

The #NIPS2012 Videos are out by Igor Carron.

From the post:

Videolectures came through earlier than last year. woohoo! Presentations relevant to Nuit Blanche were featured earlier here. Videos for the presentations for the Posner Lectures, Invited Talks and Oral Sessions of the conference are here. Videos for the presentations for the different Workshops are here. Some videos are not available because the presenters have not given their permission to the good folks at Videolectures. If you know any of them, let them know the world is waiting.

Just in case Netflix is down. 😉

Documenting decisions separately from use cases

Thursday, March 15th, 2012

Documenting decisions separately from use cases by James Taylor.

From the post:

I do propose making decisions visible. By visible, I mean a separate and explicit step for each decision being made. These steps help the developer identify where possible alternate and exception paths may be placed. These decision points occur when an actor’s input drives the scenario down various paths.

I could not have put this better myself. I am a strong believer in this kind of separation, and of documenting how the decision is made independently of the use case so it can be reused. The only thing I would add is that these decisions need to be decomposed and analyzed, not simply documented. Many of these decisions are non-trivial and decomposing them to find the information, know-how and decisions on which they depend can be tremendously helpful.

James describes development and documentation of use cases and decisions in a context broader than software development. His point on decomposition of decisions is particularly important for systems designed to integrate information.

He describes decomposition of decisions as leading to discovery of “information, know-how and decisions on which they depend….”

Compare and contrast that with simple mapping decisions that map one column in a table to another. Can you say on what basis that mapping was made? Or with more complex systems, what “know-how” is required or on what other decisions that mapping may depend?

If your integration software/practice/system doesn’t encourage or allow such decomposition of decisions, you may need another system.

James also covers some other decision management materials that you may find useful in designing, authoring, and evaluating information systems. (I started to say “semantic information systems” but all information systems have semantics, so that would be prepending an unnecessary noise word.)


Tuesday, September 6th, 2011

JT on EDM – James Taylor on Everything Decision Management

From the about page:

James Taylor is a leading expert in Decision Management and an independent consultant specializing in helping companies automate and improve critical decisions. Previously James was a Vice President at Fair Isaac Corporation where he developed and refined the concept of enterprise decision management or EDM. Widely credited with the invention of the term and the best known proponent of the approach, James helped create the Decision Management market and is its most passionate advocate.

James has 20 years experience in all aspects of the design, development, marketing and use of advanced technology including CASE tools, project planning and methodology tools as well as platform development in PeopleSoft’s R&D team and consulting with Ernst and Young. He has consistently worked to develop approaches, tools and platforms that others can use to build more effective information systems.

Another mainstream IT/data site that you would do well to read.

Multiple Criteria Decision Aid Bibliography

Wednesday, July 6th, 2011

Multiple Criteria Decision Aid Bibliography

I stumbled over this site while looking for a free copy of Amos Tversky’s “Features of Similarity” paper to cite for my readers. (I never was able to find a copy that wasn’t behind a pay-per-view wall. Sorry.)

It is maintained by the LAMSADE laboratory as materials on decision making, a category into which identification of a subject certainly falls.

The LAMSADE laboratory was established in 1974 as a joint laboratory of the Université Paris-Dauphine and the CNRS. Its central research activity lies at the interface of two fundamental scientific areas: Computer Science and Decision Making (and, more generally, Operations Research).

LAMSADE’s research themes are both theoretical and applied and cover decision making, decision theory, social choice, operations research, combinatorial optimization, computational complexity, mathematical programming, interactions between decision and artificial intelligence, massive data computation, and information systems.

And yes, it is no mistake: the first entry in the bibliography is from 1736.