Archive for the ‘Psychology’ Category

Why Terrorism Sells

Saturday, May 20th, 2017

Daniel Gilbert, Edgar Pierce Professor of Psychology at Harvard University, explains the lack of a focused response on global warming, and incidentally explains the popularity of terrorism in one presentation.

When I say “popularity of terrorism,” I don’t mean terrorism is widespread, but fear of terrorism is, and funding to combat terrorism defies accounting.

Terrorism has four characteristics, all of which global warming lacks:

  • Intentional: We are hard-wired to judge the intent of others.
  • Immoral: Food/sex rules. Killing us falls under “immoral.”
  • Imminent: Clear and present danger. (As in maybe today.)
  • Instantaneous: Bombs and bullets, fast enough to register as dangers.

Gilbert’s focus was on climate change but his presentation has helped me understand why terrorism sells.

Here is an image of the human brain Gilbert uses in his presentation:

The part of most brains that is fighting terrorism?

That would be the big dark blue part.

The part capable of recognizing that death by terrorist and death by asteroid are about equally likely?

That would be the small red part.

Assuming, that is, that the small red part, which does planning, etc., isn’t overwhelmed by plotting routes to the bank for the money you have earned fighting terrorism.

Why my sales pitch on terrorism fails: I’m pushing against decisions made by the big dark blue part that benefit the small red part (career, success, profit).

Two lessons from Gilbert’s presentation:

First, look for issues/needs with these characteristics:

  • Intentional: We are hard-wired to judge the intent of others.
  • Immoral: Food/sex rules. Killing us falls under “immoral.”
  • Imminent: Clear and present danger. (As in maybe today.)
  • Instantaneous: Bombs and bullets, fast enough to register as dangers.

Second, craft sales pitches to the big dark blue part of the brain that benefit the small red part of the brain (career, success, profit).

If you or a company you know has a pitch man/woman who can handle the fear angle, I’m looking for work.

Just keep me away from your fearful clients. 😉

Developing Expert p-Hacking Skills

Saturday, July 2nd, 2016

Introducing the p-hacker app: Train your expert p-hacking skills by Ned Bicare.

Ned’s p-hacker app will be welcomed by everyone who publishes where p-values are accepted.

Publishers should require authors and reviewers to submit six p-hacker app results along with any draft that contains, or is a review of, p-values.

The p-hacker app results won’t improve a draft or a review, but comparing the draft against them will improve the publication in which it might otherwise have appeared.

From the post:

My dear fellow scientists!

“If you torture the data long enough, it will confess.”

This aphorism, attributed to Ronald Coase, sometimes has been used in a disrespective manner, as if it was wrong to do creative data analysis.

In fact, the art of creative data analysis has experienced despicable attacks over the last years. A small but annoyingly persistent group of second-stringers tries to denigrate our scientific achievements. They drag psychological science through the mire.

These people propagate stupid method repetitions; and what was once one of the supreme disciplines of scientific investigation – a creative data analysis of a data set – has been crippled to conducting an empty-headed step-by-step pre-registered analysis plan. (Come on: If I lay out the full analysis plan in a pre-registration, even an undergrad student can do the final analysis, right? Is that really the high-level scientific work we were trained for so hard?).

They broadcast in an annoying frequency that p-hacking leads to more significant results, and that researcher who use p-hacking have higher chances of getting things published.

What are the consequence of these findings? The answer is clear. Everybody should be equipped with these powerful tools of research enhancement!

The art of creative data analysis

Some researchers describe a performance-oriented data analysis as “data-dependent analysis”. We go one step further, and call this technique data-optimal analysis (DOA), as our goal is to produce the optimal, most significant outcome from a data set.

I developed an online app that allows to practice creative data analysis and how to polish your p-values. It’s primarily aimed at young researchers who do not have our level of expertise yet, but I guess even old hands might learn one or two new tricks! It’s called “The p-hacker” (please note that ‘hacker’ is meant in a very positive way here. You should think of the cool hackers who fight for world peace). You can use the app in teaching, or to practice p-hacking yourself.

Please test the app, and give me feedback! You can also send it to colleagues: http://shinyapps.org/apps/p-hacker.

Enjoy!
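Ned’s satire has a serious core that is easy to demonstrate. The sketch below is my own toy example, not part of the app, and simulates one classic p-hacking move, optional stopping: keep collecting data and re-testing until p < .05, then stop. Even though the null hypothesis is true by construction, the false positive rate climbs well above the nominal 5%.

```python
import math
import random

def two_sided_p(z):
    # Two-sided p-value for a z statistic, via the normal CDF (math.erf).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def z_test(sample):
    # One-sample z test against mean 0 with known sd 1 (true here by construction).
    n = len(sample)
    return two_sided_p(sum(sample) / n * math.sqrt(n))

def hacked_experiment(rng, start=20, step=10, max_n=100, alpha=0.05):
    # Optional stopping: peek after every batch and stop as soon as p < alpha.
    sample = [rng.gauss(0, 1) for _ in range(start)]
    while True:
        if z_test(sample) < alpha:
            return True            # "significant" -- a guaranteed false positive
        if len(sample) >= max_n:
            return False
        sample.extend(rng.gauss(0, 1) for _ in range(step))

rng = random.Random(42)
runs = 2000
rate = sum(hacked_experiment(rng) for _ in range(runs)) / runs
print(f"false positive rate with optional stopping: {rate:.3f}")  # well above 0.05
```

Each peek at the data is another chance for noise to cross the threshold, which is exactly why pre-registered analysis plans forbid it.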

Laypersons vs. Scientists – “…laypersons may be prone to biases…”

Saturday, March 12th, 2016

The “distinction” between laypersons and scientists is more a world view about some things than “all scientists are rational” or “all laypersons are irrational.” Scientists and laypersons can be just as rational and/or irrational, depending upon the topic at hand.

Having said that, The effects of social identity threat and social identity affirmation on laypersons’ perception of scientists by Peter Nauroth, et al., finds, unsurprisingly, that if a layperson’s social identity is threatened by research, they have a less favorable view of the scientists involved.

Abstract:

Public debates about socio-scientific issues (e.g. climate change or violent video games) are often accompanied by attacks on the reputation of the involved scientists. Drawing on the social identity approach, we report a minimal group experiment investigating the conditions under which scientists are perceived as non-prototypical, non-reputable, and incompetent. Results show that in-group affirming and threatening scientific findings (compared to a control condition) both alter laypersons’ evaluations of the study: in-group affirming findings lead to more positive and in-group threatening findings to more negative evaluations. However, only in-group threatening findings alter laypersons’ perceptions of the scientists who published the study: scientists were perceived as less prototypical, less reputable, and less competent when their research results imply a threat to participants’ social identity compared to a non-threat condition. Our findings add to the literature on science reception research and have implications for understanding the public engagement with science.

Perceived attacks on personal identity have negative consequences for the “reception” of science.

Implications for public engagement with science

Our findings have immediate implications for public engagement with science activities. When laypersons perceive scientists as less competent, less reputable, and not representative of the scientific community and the scientist’s opinion as deviating from the current scientific state-of-the-art, laypersons might be less willing to participate in constructive discussions (Schrodt et al., 2009). Furthermore, our mediation analysis suggests that these negative perceptions deepen the trench between scientists and laypersons concerning the current scientific state-of-the-art. We speculate that these biases might actually even lead to engagement activities to backfire: instead of developing a mutual understanding they might intensify laypersons’ misconceptions about the scientific state-of-the-art. Corroborating this hypothesis, Binder et al. (2011) demonstrated that discussions about controversial science topics may in fact polarize different groups around a priori positions. Additional preliminary support for this hypothesis can also be found in case studies about public engagement activities in controversial socio-scientific issues. Some of these reports (for two examples, see Lezaun and Soneryd, 2007) indicate problems to maintain a productive atmosphere between laypersons and experts in the discussion sessions.

Besides these practical implications, our results also add further evidence to the growing body of literature questioning the validity of the deficit model in science communication according to which people’s attitudes toward science are mainly determined by their knowledge about science (Sturgis and Allum, 2004). We demonstrated that social identity concerns profoundly influence laypersons’ perceptions and evaluations of scientific results regardless of laypersons’ knowledge. However, our results also question whether involving laypersons in policy decision processes based upon scientific evidence is reasonable in all socio-scientific issues. Particularly when the scientific evidence has potential negative consequences for social groups, our research suggests that laypersons may be prone to biases based upon their social affiliations. For example, if regular video game players were involved in decision-making processes concerning potential sales restrictions of violent video games, they would be likely to perceive scientific evidence demonstrating detrimental effects of violent video games as shoddy and the respective researchers as disreputable (Greitemeyer, 2014; Nauroth et al., 2014, 2015).(emphasis added)

The principal failure of this paper is that it does not study the scientific community itself and its reactions to research that attacks the personal identity of its participants.

I don’t think it is reading too much into the post: Academic, Not Industrial Secrecy, where one group said:

We want restrictions on who could do the analyses.

to say that attacks on personal identity lead to boorish behavior on the part of scientists.

Laypersons and scientists alike emit a never-ending stream of examples of prejudice, favoritism, sycophancy, and sloppy reasoning, to say nothing of careless and/or low-quality work.

Reception of science among laypersons might improve if the scientific community abandoned its facade of “it’s objective, it’s science.”

That facade was tiresome by WWII, and to keep repeating it now is a disservice to the scientific community.

All of our efforts, in any field, are human endeavors and thus subject to the vagaries and uncertainties of human interaction.

Live with it.

Statistical Reporting Errors in Psychology (1985–2013) [1 in 8]

Tuesday, October 27th, 2015

Do you remember your parents complaining about how far the latest psychology report departed from their reality?

Turns out there may be a scientific reason why those reports were as far off as your parents thought (or not).

The prevalence of statistical reporting errors in psychology (1985–2013) by Michèle B. Nuijten, Chris H. J. Hartgerink, Marcel A. L. M. van Assen, Sacha Epskamp, Jelte M. Wicherts, reports:

This study documents reporting errors in a sample of over 250,000 p-values reported in eight major psychology journals from 1985 until 2013, using the new R package “statcheck.” statcheck retrieved null-hypothesis significance testing (NHST) results from over half of the articles from this period. In line with earlier research, we found that half of all published psychology papers that use NHST contained at least one p-value that was inconsistent with its test statistic and degrees of freedom. One in eight papers contained a grossly inconsistent p-value that may have affected the statistical conclusion. In contrast to earlier findings, we found that the average prevalence of inconsistent p-values has been stable over the years or has declined. The prevalence of gross inconsistencies was higher in p-values reported as significant than in p-values reported as nonsignificant. This could indicate a systematic bias in favor of significant results. Possible solutions for the high prevalence of reporting inconsistencies could be to encourage sharing data, to let co-authors check results in a so-called “co-pilot model,” and to use statcheck to flag possible inconsistencies in one’s own manuscript or during the review process.

This is an open access article so dig in for all the details discovered by the authors.

The R package statcheck: Extract Statistics from Articles and Recompute P Values is quite amazing. The manual for statcheck should have you up and running in short order.
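If you want the flavor of what statcheck automates without installing R, here is a minimal, stdlib-only Python sketch of the same consistency check for the simplest case, a reported z statistic (statcheck itself also handles t, F, r, and χ² results extracted from article text; the reported values below are hypothetical):

```python
import math

def p_from_z(z):
    # Two-sided p-value recomputed from a reported z statistic.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def check(reported_z, reported_p, tol=0.0005):
    # Flag a reported (z, p) pair whose p-value does not match the recomputed one.
    recomputed = p_from_z(reported_z)
    return recomputed, abs(recomputed - reported_p) < tol

# A hypothetical reported result, "z = 1.96, p = .03": the p does not match.
recomputed, consistent = check(1.96, 0.03)
print(f"recomputed p = {recomputed:.3f}, consistent: {consistent}")
# -> recomputed p = 0.050, consistent: False
```

The whole trick is that test statistic, degrees of freedom, and p-value are redundant: given the first two, the third can be recomputed and compared against what the authors reported.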

I did puzzle over the proposed solutions:

Possible solutions for the high prevalence of reporting inconsistencies could be to encourage sharing data, to let co-authors check results in a so-called “co-pilot model,” and to use statcheck to flag possible inconsistencies in one’s own manuscript or during the review process.

All of those are good suggestions but we already have the much valued process of “peer review” and the value-add of both non-profit and commercial publishers. Surely those weighty contributions to the process of review and publication should be enough to quell this “…systematic bias in favor of significant results.”

Unless, of course, dependence on “peer review” and the value-add of publishers for article quality is entirely misplaced. Yes?

What area with “p-values reported as significant” will fall to statcheck next?

Replication in Psychology?

Friday, May 1st, 2015

First results from psychology’s largest reproducibility test by Monya Baker.

From the post:

An ambitious effort to replicate 100 research findings in psychology ended last week — and the data look worrying. Results posted online on 24 April, which have not yet been peer-reviewed, suggest that key findings from only 39 of the published studies could be reproduced.

But the situation is more nuanced than the top-line numbers suggest (See graphic, ‘Reliability test’). Of the 61 non-replicated studies, scientists classed 24 as producing findings at least “moderately similar” to those of the original experiments, even though they did not meet pre-established criteria, such as statistical significance, that would count as a successful replication.

The project, known as the “Reproducibility Project: Psychology”, is the largest of a wave of collaborative attempts to replicate previously published work, following reports of fraud and faulty statistical analysis as well as heated arguments about whether classic psychology studies were robust. One such effort, the ‘Many Labs’ project, successfully reproduced the findings of 10 of 13 well-known studies.

Replication is a “hot” issue and likely to get hotter if peer review shifts to be “open.”

Do you really want to be listed as a peer reviewer for a study that cannot be replicated?

Perhaps open peer review will lead to more accountability of peer reviewers.

Yes?

Banning p < .05 In Psychology [Null Hypothesis Significance Testing Procedure (NHSTP)]

Friday, February 27th, 2015

The recent banning of the Null Hypothesis Significance Testing Procedure (NHSTP) in psychology should be a warning to would-be data scientists that even “well established” statistical procedures may be deeply flawed.

Sorry, you may not have seen the news. In the Basic and Applied Social Psychology (BASP) editorial Banning Null Hypothesis Significance Testing Procedure (NHSTP) (2015), David Trafimow and Michael Marks write:

The Basic and Applied Social Psychology (BASP) 2014 Editorial emphasized that the null hypothesis significance testing procedure (NHSTP) is invalid, and thus authors would be not required to perform it (Trafimow, 2014). However, to allow authors a grace period, the Editorial stopped short of actually banning the NHSTP. The purpose of the present Editorial is to announce that the grace period is over. From now on, BASP is banning the NHSTP.

You may be more familiar with seeing p < .05 rather than Null Hypothesis Significance Testing Procedure (NHSTP).

In his 2014 editorial warning about NHSTP, David Trafimow cites his earlier work, Hypothesis Testing and Theory Evaluation at the Boundaries: Surprising Insights From Bayes’s Theorem (2003), as justifying the non-use and later ban of NHSTP.

His argument is summarized in the introduction:

Despite a variety of different criticisms, the standard null-hypothesis significance-testing procedure (NHSTP) has dominated psychology over the latter half of the past century. Although NHSTP has its defenders when used “properly” (e.g., Abelson, 1997; Chow, 1998; Hagen, 1997; Mulaik, Raju, & Harshman, 1997), it has also been subjected to virulent attacks (Bakan, 1966; Cohen, 1994; Rozeboom, 1960; Schmidt, 1996). For example, Schmidt and Hunter (1997) argue that NHSTP is “logically indefensible and retards the research enterprise by making it difficult to develop cumulative knowledge” (p. 38). According to Rozeboom (1997), “Null-hypothesis significance testing is surely the most bone-headedly misguided procedure ever institutionalized in the rote training of science students” (p. 336). The most important reason for these criticisms is that although one can calculate the probability of obtaining a finding given that the null hypothesis is true, this is not equivalent to calculating the probability that the null hypothesis is true given that one has obtained a finding. Thus, researchers are in the position of rejecting the null hypothesis even though they do not know its probability of being true (Cohen, 1994). One way around this problem is to use Bayes’s theorem to calculate the probability of the null hypothesis given that one has obtained a finding, but using Bayes’s theorem carries with it some problems of its own, including a lack of information necessary to make full use of the theorem. Nevertheless, by treating the unknown values as variables, it is possible to conduct some analyses that produce some interesting conclusions regarding NHSTP. These analyses clarify the relations between NHSTP and Bayesian theory and quantify exactly why the standard practice of rejecting the null hypothesis is, at times, a highly questionable procedure.
In addition, some surprising findings come out of the analyses that bear on issues pertaining not only to hypothesis testing but also to the amount of information gained from findings and theory evaluation. It is possible that the implications of the following analyses for information gain and theory evaluation are as important as the NHSTP debate.

The most important lines for someone who was trained with the null hypothesis as an undergraduate many years ago:

The most important reason for these criticisms is that although one can calculate the probability of obtaining a finding given that the null hypothesis is true, this is not equivalent to calculating the probability that the null hypothesis is true given that one has obtained a finding. Thus, researchers are in the position of rejecting the null hypothesis even though they do not know its probability of being true (Cohen, 1994).

If you don’t know the probability of the null hypothesis, any conclusion you draw is on very shaky grounds.
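The distinction is easy to make concrete with Bayes’s theorem. Using illustrative numbers of my own (not from Trafimow’s paper), assume a test with α = .05 and power .80; the probability that the null is true given a significant result then depends entirely on the prior probability of the null, which NHSTP never supplies:

```python
def prob_null_given_significant(prior_h0, alpha=0.05, power=0.80):
    # Bayes's theorem with P(sig | H0) = alpha and P(sig | H1) = power.
    p_sig = alpha * prior_h0 + power * (1 - prior_h0)
    return alpha * prior_h0 / p_sig

for prior in (0.5, 0.9, 0.99):
    posterior = prob_null_given_significant(prior)
    print(f"P(H0) = {prior:.2f} -> P(H0 | p < .05) = {posterior:.3f}")
```

With a prior P(H0) of 0.9, a “significant” result still leaves the null about 36% likely; at 0.99, the null remains the better bet. That is the gap between P(finding | H0) and P(H0 | finding), in numbers.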

Do you think any of the big data “shake-n-bake” mining/processing services are going to call that problem to your attention? True enough, such services may “empower” users but if “empowerment” means producing meaningless results, no thanks.

Trafimow cites Jacob Cohen’s The Earth is Round (p < .05) (1994) in his 2003 work. Cohen is angry and in full voice as only a senior scholar can afford to be.

Take the time to read both Trafimow and Cohen. Many errors are lurking outside your door, but reading them will help you recognize this one.

More Bad Data News – Psychology

Friday, February 20th, 2015

Statistical Reporting Errors and Collaboration on Statistical Analyses in Psychological Science by Coosje L. S. Veldkamp, et al. (PLOS ONE, published December 10, 2014. DOI: 10.1371/journal.pone.0114876)

Abstract:

Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this ‘co-piloting’ currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably but not completely standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.

If you are relying on statistical reports from psychology publications, you need to keep the last part of that abstract firmly in mind:

Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.

That is an impressive error rate. Imagine incorrect GPS locations 63% of the time and your car starting only 80% of the time. I would take that as a sign that something was seriously wrong.

Not an amazing result, considering reports of contamination in genome studies and bad HR data, not to mention that only 6% of landmark cancer research projects could be replicated.

At the root of the problem are people. People just like you and me.

People who did not follow (or in some cases record) a well-defined process that included independent verification of the results they obtained.

Independent verification is never free but then neither are the consequences of errors. Choose carefully.

Most HR Data Is Bad Data

Thursday, February 19th, 2015

Most HR Data Is Bad Data by Marcus Buckingham.

“Bad data” can come in any number of forms and Marcus Buckingham focuses on one of the most pernicious: Data that is flawed at its inception. Data that doesn’t measure what it purports to measure. Performance evaluation data.

From the post:

Over the last fifteen years a significant body of research has demonstrated that each of us is a disturbingly unreliable rater of other people’s performance. The effect that ruins our ability to rate others has a name: the Idiosyncratic Rater Effect, which tells us that my rating of you on a quality such as “potential” is driven not by who you are, but instead by my own idiosyncrasies—how I define “potential,” how much of it I think I have, how tough a rater I usually am. This effect is resilient — no amount of training seems able to lessen it. And it is large — on average, 61% of my rating of you is a reflection of me.

In other words, when I rate you, on anything, my rating reveals to the world far more about me than it does about you. In the world of psychometrics this effect has been well documented. The first large study was published in 1998 in Personnel Psychology; there was a second study published in the Journal of Applied Psychology in 2000; and a third confirmatory analysis appeared in 2010, again in Personnel Psychology. In each of the separate studies, the approach was the same: first ask peers, direct reports, and bosses to rate managers on a number of different performance competencies; and then examine the ratings (more than half a million of them across the three studies) to see what explained why the managers received the ratings they did. They found that more than half of the variation in a manager’s ratings could be explained by the unique rating patterns of the individual doing the rating— in the first study it was 71%, the second 58%, the third 55%.
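A toy simulation (my own illustration, with made-up variance components rather than the studies’ data) shows how a “more than half of the rating is the rater” figure can arise: give raters idiosyncratic biases whose variance exceeds the variance in the managers’ true quality, and the rater share of total rating variance follows directly.

```python
import random
from statistics import mean, pvariance

# Hypothetical variance components: rater idiosyncrasy (1.8) dominates
# true manager quality (0.7); measurement noise adds 0.5.
rng = random.Random(7)
manager_q  = [rng.gauss(0, 0.7 ** 0.5) for _ in range(200)]
rater_bias = [rng.gauss(0, 1.8 ** 0.5) for _ in range(200)]
noise_sd = 0.5 ** 0.5

# Every rater rates every manager: rating = quality + bias + noise.
ratings = [[m + b + rng.gauss(0, noise_sd) for m in manager_q] for b in rater_bias]

total_var = pvariance([x for row in ratings for x in row])
rater_var = pvariance([mean(row) for row in ratings])  # each rater's mean ~ their bias
share = rater_var / total_var
print(f"share of rating variance explained by raters: {share:.2f}")  # ~ 1.8 / 3.0
```

The point of the sketch is only that the decomposition is mechanical: once rater bias variance outweighs true-score variance, ratings necessarily say more about raters than about the people rated.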

You have to follow the Idiosyncratic Rater Effect link to find the references Buckingham cites so I have repeated them (with links and abstracts) below:

Trait, Rater and Level Effects in 360-Degree Performance Ratings by Michael K. Mount, et al., Personnel Psychology, 1998, 51, 557-576.

Abstract:

Method and trait effects in multitrait-multirater (MTMR) data were examined in a sample of 2,350 managers who participated in a developmental feedback program. Managers rated their own performance and were also rated by two subordinates, two peers, and two bosses. The primary purpose of the study was to determine whether method effects are associated with the level of the rater (boss, peer, subordinate, self) or with each individual rater, or both. Previous research which has tacitly assumed that method effects are associated with the level of the rater has included only one rater from each level; consequently, method effects due to the rater’s level may have been confounded with those due to the individual rater. Based on confirmatory factor analysis, the present results revealed that of the five models tested, the best fit was the 10-factor model which hypothesized 7 method factors (one for each individual rater) and 3 trait factors. These results suggest that method variance in MTMR data is more strongly associated with individual raters than with the rater’s level. Implications for research and practice pertaining to multirater feedback programs are discussed.

Understanding the Latent Structure of Job Performance Ratings, by Michael K. Mount, Steven E. Scullen, Maynard Goff, Journal of Applied Psychology, 2000, Vol. 85, No. 6, 956-970 (I looked but apparently the APA hasn’t gotten the word about access to abstracts online, etc.)

Rater Source Effects are Alive and Well After All by Brian Hoffman, et al., Personnel Psychology, 2010, 63, 119-151.

Abstract:

Recent research has questioned the importance of rater perspective effects on multisource performance ratings (MSPRs). Although making a valuable contribution, we hypothesize that this research has obscured evidence for systematic rater source effects as a result of misspecified models of the structure of multisource performance ratings and inappropriate analytic methods. Accordingly, this study provides a reexamination of the impact of rater source on multisource performance ratings by presenting a set of confirmatory factor analyses of two large samples of multisource performance rating data in which source effects are modeled in the form of second-order factors. Hierarchical confirmatory factor analysis of both samples revealed that the structure of multisource performance ratings can be characterized by general performance, dimensional performance, idiosyncratic rater, and source factors, and that source factors explain (much) more variance in multisource performance ratings whereas general performance explains (much) less variance than was previously believed. These results reinforce the value of collecting performance data from raters occupying different organizational levels and have important implications for research and practice.

For students: Can you think of other sources that validate the Idiosyncratic Rater Effect?

What about algorithms that make recommendations based on user ratings of movies? Isn’t the premise of recommendations that the ratings tell us more about the rater than about the movie? So we can make the “right” recommendation for a person very similar to the rater?

I don’t know that it means anything but a search with a popular search engine turns up only 258 “hits” for “Idiosyncratic Rater Effect.” On the other hand, “recommendation system” turns up 424,000 “hits” and that sounds low to me considering the literature on recommendation.

Bottom line on data quality is that widespread use of data is no guarantee of quality.

What ratings reflect is useful in one context (recommendation) and pernicious in another (employment ratings).

I first saw this in a tweet by Neil Saunders.

Rare Find: Honest General Speaks Publicly About IS (ISIL, ISIS)

Monday, December 29th, 2014

In Battle to Defang ISIS, U.S. Targets Its Psychology by Eric Schmitt.

From the post:

Maj. Gen. Michael K. Nagata, commander of American Special Operations forces in the Middle East, sought help this summer in solving an urgent problem for the American military: What makes the Islamic State so dangerous?

Trying to decipher this complex enemy — a hybrid terrorist organization and a conventional army — is such a conundrum that General Nagata assembled an unofficial brain trust outside the traditional realms of expertise within the Pentagon, State Department and intelligence agencies, in search of fresh ideas and inspiration. Business professors, for example, are examining the Islamic State’s marketing and branding strategies.

“We do not understand the movement, and until we do, we are not going to defeat it,” he said, according to the confidential minutes of a conference call he held with the experts. “We have not defeated the idea. We do not even understand the idea.” (emphasis added)

An honest member of any administration in Washington is so unusual that I wanted to draw your attention to Maj. Gen. Michael K. Nagata.

His problem, as you will quickly recognize, is one of a diversity of semantics. What is heard one way by a Western audience is heard completely differently by an audience with a different tradition.

The general may not think of it as “progress,” but getting Washington policy makers to acknowledge that there is a legitimate semantic gap between Western policy makers and IS is a huge first step. It can’t be grudging or half-hearted. Western policy makers have to acknowledge that there are honest views of the world that are different from their own. IS isn’t practicing dishonesty, deception, or a perverse refusal to acknowledge the truth of Western statements. Members of IS have an honest but different semantic view of the world.

If the good general can get policy makers to take that step, then and only then can the discussion begin of what that “other” semantic is and how to map it into terms comprehensible to Western policy makers. If that step isn’t taken, then the resources necessary to explore and map that “other” semantic are never going to be allocated. And even if allocated, the results will never figure into policy making with regard to IS.

Failing on any of those three points: failing to concede the legitimacy of the IS semantic, failing to allocate resources to explore and understand the IS semantic, failing to incorporate an understanding of the IS semantic into policy making, is going to result in a failure to “defeat” IS, if that remains a goal after understanding its semantic.

Need an example? Consider the Vietnam War, in which approximately 58,220 Americans died and millions of Vietnamese, Laotians and Cambodians died, not counting long-term injuries among all of the aforementioned. In case you have not heard, the United States lost the Vietnam War.

The reasons for that loss are wide and varied but let me suggest two semantic differences that may have played a role in that defeat. First, the Vietnamese have a long-term view of repelling foreign invaders. Consider that Vietnam was occupied by the Chinese from 111 BCE until 938 CE, a period of more than one thousand (1,000) years. American war planners had a war semantic of planning for the next presidential election, not a winning strategy against a foe whose time horizon was two hundred and fifty (250) times longer.

The other semantic difference (among many others) was the understanding of “democracy,” which is usually heralded by American policy makers as a grand prize resulting from American involvement. In Vietnam, however, the villages and hamlets had already had what some would consider democracy for centuries. (Beyond Hanoi: Local Government in Vietnam) A different semantic for “democracy,” to be sure, but one that was left unexplored in the haste to import a U.S. semantic of the concept.

Fighting a war where you don’t understand the semantics in play for the “other” side is risky business.

General Nagata has taken the first step towards such an understanding by admitting that he and his advisors don’t understand the semantics of IS. The next step should be to find someone who does. May I suggest talking to members of IS under informal meeting arrangements, such that diplomatic protocols and news reporting don’t interfere with honest conversations? I suspect IS members are as ignorant of U.S. semantics as U.S. planners are of IS semantics, so there would be some benefit for all concerned.

Such meetings would yield more accurate understandings than U.S.-born analysts who live in upper middle-class Western enclaves and attempt to project themselves into foreign cultures. The understanding derived from such meetings could well contradict current U.S. policy assessments and objectives. Whether any administration has the political will to act upon assessments that aren’t the product of a shared post-Enlightenment semantic remains to be seen. But such assessments must be obtained first to answer that question.

Would topic maps help in such an endeavor? Perhaps, perhaps not. The most critical aspect of such a project would be conceding, for all purposes, the legitimacy of the “other” semantic, where “other” depends on which side you are on. That is a topic map “state of mind,” as it were, where all semantics are treated equally and none is more legitimate than any other.
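That “state of mind” can be made concrete. The following is a minimal sketch of my own, not any particular topic map API: one subject characterized independently by two semantic frames, with neither frame privileged as the default. The frame labels and descriptions are illustrative assumptions, not claims about either side’s actual vocabulary.

```python
from dataclasses import dataclass, field


@dataclass
class Topic:
    """A subject with names and descriptions keyed by semantic frame."""
    names: dict = field(default_factory=dict)         # frame -> name
    descriptions: dict = field(default_factory=dict)  # frame -> description

    def add_frame(self, frame, name, description):
        """Record how one semantic frame characterizes this subject."""
        self.names[frame] = name
        self.descriptions[frame] = description

    def as_seen_by(self, frame):
        # Every frame's view is retrieved the same way -- there is no
        # "canonical" frame that the others are mapped onto.
        return self.names.get(frame), self.descriptions.get(frame)


subject = Topic()
subject.add_frame("western-policy", "insurgency",
                  "an armed movement to be degraded and defeated")
subject.add_frame("is-internal", "state-building",
                  "a project to be defended and expanded")

# Both views coexist on the same subject; neither overwrites the other.
print(subject.as_seen_by("western-policy"))
print(subject.as_seen_by("is-internal"))
```

The design choice matters more than the code: the two characterizations live side by side on one subject, so mapping between them becomes an explicit task rather than a silent overwrite of one semantic by the other.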


PS: A litmus test for Major General Michael K. Nagata to use in assembling a team to attempt to understand IS semantics: Have each applicant write their description of the 9/11 hijackers in thirty (30) words or less. Any applicant who uses any variant of coward, extremist, terrorist, fanatic, etc. should be wished well and sent on their way. That is not a judgement on their fitness for other tasks, but they are not going to be able to bridge the semantic gap between current U.S. thinking and that of IS.

The CIA has a report on some of the gaps but I don’t know if it will be easier for General Nagata to ask the CIA for a copy or to just find a copy on the Internet. It illustrates, for example, why the American strategy of killing IS leadership is non-productive if not counter-productive.

If you have the means, please forward this post to General Nagata’s attention. I wasn’t able to easily find a direct means of contacting him.
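The litmus test in the PS above is mechanical enough to sketch in code. The word list and the thirty-word limit come from the test as stated; the function name and the crude tokenization are my own assumptions:

```python
import re

# Stems whose presence (in any variant) fails the litmus test.
DISQUALIFYING_STEMS = ("coward", "extremist", "terrorist", "fanatic")


def passes_litmus_test(description: str, word_limit: int = 30) -> bool:
    """Return True if the applicant's description stays within the word
    limit and avoids the disqualifying vocabulary."""
    words = re.findall(r"[A-Za-z']+", description.lower())
    if len(words) > word_limit:
        return False
    # "cowardly", "terrorists", etc. all share a stem with the list above.
    return not any(word.startswith(stem)
                   for word in words
                   for stem in DISQUALIFYING_STEMS)
```

A stem match is deliberately blunt, matching the spirit of the test: the point is to screen out descriptions framed entirely in one side’s evaluative vocabulary, not to build a serious text classifier.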

How Language Shapes Thought:…

Wednesday, December 24th, 2014

How Language Shapes Thought: The languages we speak affect our perceptions of the world by Lera Boroditsky.

From the article:

I am standing next to a five-year old girl in Pormpuraaw, a small Aboriginal community on the western edge of Cape York in northern Australia. When I ask her to point north, she points precisely and without hesitation. My compass says she is right. Later, back in a lecture hall at Stanford University, I make the same request of an audience of distinguished scholars—winners of science medals and genius prizes. Some of them have come to this very room to hear lectures for more than 40 years. I ask them to close their eyes (so they don’t cheat) and point north. Many refuse; they do not know the answer. Those who do point take a while to think about it and then aim in all possible directions. I have repeated this exercise at Harvard and Princeton and in Moscow, London and Beijing, always with the same results.

A five-year-old in one culture can do something with ease that eminent scientists in other cultures struggle with. This is a big difference in cognitive ability. What could explain it? The surprising answer, it turns out, may be language.

Michael Nielsen mentioned this article in a tweet about a new book due out from Lera in the Fall of 2015.

Looking further I found: 7,000 Universes: How the Language We Speak Shapes the Way We Think [Kindle Edition] by Lera Boroditsky. (September, 2015, available for pre-order now)

As Michael says, looking forward to seeing this book! Sounds like a good title to forward to Steve Newcomb. Steve would argue, correctly I might add, that any natural language may contain an infinite number of possible universes of discourse.

I assume some of this issue will be caught by testing your topic map UIs with actual users in whatever subject domain and language you are offering information in. That is, rather than considering the influence of language in the abstract, you will be silently taking it into account via user feedback. You are testing your topic map deliverables with live users before delivery. Yes?

There are other papers by Lera available for your leisure reading.

Thinking, Fast and Slow (Review) [And Subject Identity]

Friday, November 15th, 2013

A statistical review of ‘Thinking, Fast and Slow’ by Daniel Kahneman by Patrick Burns.

From the post:

We are good intuitive grammarians — even quite small children intuit language rules. We can see that from mistakes. For example: “I maked it” rather than the irregular “I made it”.

In contrast those of us who have training and decades of experience in statistics often get statistical problems wrong initially.

Why should there be such a difference?

Our brains evolved for survival. We have a mind that is exquisitely tuned for finding things to eat and for avoiding being eaten. It is a horrible instrument for finding truth. If we want to get to the truth, we shouldn’t start from here.

A remarkable aspect of your mental life is that you are rarely stumped. … you often have answers to questions that you do not completely understand, relying on evidence that you can neither explain nor defend.

The review focuses mainly on statistical issues in “Thinking, Fast and Slow” but I think you will find it very entertaining.

I deeply appreciate Patrick’s quoting of:

A remarkable aspect of your mental life is that you are rarely stumped. … you often have answers to questions that you do not completely understand, relying on evidence that you can neither explain nor defend.

In particular:

…relying on evidence that you can neither explain nor defend.

which resonates with me on subject identification.

Think about how we search for subjects, which of necessity involves some notion of subject identity.

What if a colleague asks if they should consult the records of the Order of the Garter to find more information on “Lady Gaga?”

Not entirely unreasonable since “Lady” is conferred upon female recipients of the Order of the Garter.

No standard search technique would explain why your colleague should not start with the Order of the Garter records.

Although I think most of us would agree such a search would be far afield. 😉

Every search starts with a searcher relying upon what they “know,” suspect or guess to be facts about a “subject” to search on.

At the end of the search, the characteristics of the subject as found turn out to be the characteristics we were searching for all along.

I say all that to suggest that we need not bother users to say how, in fact, they identify the objects of their searches.

Rather the question should be:

What pointers or contexts are the most helpful to you when searching? (May or may not be properties of the search objective.)

Recalling that properties of the search objective are how we explain successful searches, not how we perform them.

Calling upon users to explain or make explicit what they themselves don’t understand seems like a poor strategy for adoption of topic maps.

Capturing what “works” for a user, without further explanation or difficulty, seems like the better choice.
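Capturing what “works” could be as simple as recording, per successful search, the terms the user actually typed against the subject they ultimately selected, with no explanation required of the user. A hypothetical sketch (all class and method names are mine, not any deployed system):

```python
from collections import Counter, defaultdict


class SearchCapture:
    """Accumulates, per subject, the query terms that led users to it."""

    def __init__(self):
        self.evidence = defaultdict(Counter)  # subject id -> term counts

    def record_success(self, query: str, chosen_subject: str):
        # Called when a user selects a result: the query terms become
        # implicit identifying evidence for that subject, even if they
        # are not properties of the subject itself.
        for term in query.lower().split():
            self.evidence[chosen_subject][term] += 1

    def helpful_terms(self, subject: str, n: int = 5):
        # The terms users actually search with, ranked by frequency.
        return [term for term, _ in self.evidence[subject].most_common(n)]


log = SearchCapture()
log.record_success("lady gaga glamour cover", "person:stefani-germanotta")
log.record_success("lady gaga woman of the year", "person:stefani-germanotta")
print(log.helpful_terms("person:stefani-germanotta"))
```

Note that nothing here asks the user how they identify the subject; the evidence is a by-product of searches that succeeded, which is exactly the distinction drawn above between how we explain successful searches and how we perform them.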


PS: Should anyone ask about “Lady Gaga,” you can mention that Glamour magazine featured her on its cover, naming her Woman of the Year (December 2013 issue). I know that only because of a trip to the local drug store for a flu shot.

Promised I would be “in and out” in minutes. Literally true, I suppose: it only took 50 minutes, with four other people present when I arrived.

I have a different appreciation of “minutes” from the pharmacy staff. 😉

Are You A Facebook Slacker? (Or, “Don’t “Like” Me, Support Me!”)

Sunday, November 10th, 2013

Their title reads: The Nature of Slacktivism: How the Social Observability of an Initial Act of Token Support Affects Subsequent Prosocial Action by Kirk Kristofferson, Katherine White, and John Peloza. (Journal of Consumer Research, 2013. DOI: 10.1086/674137)

Abstract:

Prior research offers competing predictions regarding whether an initial token display of support for a cause (such as wearing a ribbon, signing a petition, or joining a Facebook group) subsequently leads to increased and otherwise more meaningful contributions to the cause. The present research proposes a conceptual framework elucidating two primary motivations that underlie subsequent helping behavior: a desire to present a positive image to others and a desire to be consistent with one’s own values. Importantly, the socially observable nature (public vs. private) of initial token support is identified as a key moderator that influences when and why token support does or does not lead to meaningful support for the cause. Consumers exhibit greater helping on a subsequent, more meaningful task after providing an initial private (vs. public) display of token support for a cause. Finally, the authors demonstrate how value alignment and connection to the cause moderate the observed effects.

From the introduction:

We define slacktivism as a willingness to perform a relatively costless, token display of support for a social cause, with an accompanying lack of willingness to devote significant effort to enact meaningful change (Davis 2011; Morozov 2009a).

From the section: The Moderating Role of Social Observability: The Public versus Private Nature of Support:

…we anticipate that consumers who make an initial act of token support in public will be no more likely to provide meaningful support than those who engaged in no initial act of support.

Four (4) detailed studies and an extensive review of the literature are offered to support the authors’ conclusions.

The only source that I noticed missing was:

10 Two men went up into the temple to pray; the one a Pharisee, and the other a publican.

11 The Pharisee stood and prayed thus with himself, God, I thank thee, that I am not as other men are, extortioners, unjust, adulterers, or even as this publican.

12 I fast twice in the week, I give tithes of all that I possess.

13 And the publican, standing afar off, would not lift up so much as his eyes unto heaven, but smote upon his breast, saying, God be merciful to me a sinner.

14 I tell you, this man went down to his house justified rather than the other: for every one that exalteth himself shall be abased; and he that humbleth himself shall be exalted.

King James Version, Luke 18: 10-14.

The authors would reverse the roles of the Pharisee and the publican, finding that the Pharisee contributed “meaningful support,” and the publican did not.

We contrast token support with meaningful support, which we define as consumer contributions that require a significant cost, effort, or behavior change in ways that make tangible contributions to the cause. Examples of meaningful support include donating money and volunteering time and skills.

If you are trying to attract “meaningful support” for your cause or organization, i.e., avoid slackers, there is much to learn here.

If you are trying to move beyond the “cheap grace” (Bonhoeffer)* of “meaningful support” and towards “meaningful change,” there is much to be learned here as well.

Governments, corporations, ad agencies and even your competitors are manipulating the public understanding of “meaningful support” and “meaningful change,” and the acceptable means for both.

You can play on their terms and lose, or you can define your own terms and roll the dice.

Questions?


* I know the phrase “cheap grace” from Bonhoeffer but in running a reference to ground, I saw a statement in Wikipedia that Bonhoeffer learned that phrase from Adam Clayton Powell, Sr. Homiletics have never been a strong interest of mine but I will try to run down some sources on sermons by Adam Clayton Powell, Sr.

How to design better data visualisations

Tuesday, October 15th, 2013

How to design better data visualisations by Graham Odds.

From the post:

Over the last couple of centuries, data visualisation has developed to the point where it is in everyday use across all walks of life. Many recognise it as an effective tool for both storytelling and analysis, overcoming most language and educational barriers. But why is this? How are abstract shapes and colours often able to communicate large amounts of data more effectively than a table of numbers or paragraphs of text? An understanding of human perception will not only answer this question, but will also provide clear guidance and tools for improving the design of your own visualisations.

In order to understand how we are able to interpret data visualisations so effectively, we must start by examining the basics of how we perceive and process information, in particular visual information.

Graham pushes all of my buttons by covering:

A reading list from this post would take months to read and years to fully digest.

No time like the present!

Psychological Studies of Policy Reasoning

Monday, November 19th, 2012

Psychological Studies of Policy Reasoning by Adam Wyner.

From the post:

The New York Times had an article on the difficulties the public has in understanding complex policy proposals – I’m Right (For Some Reason). The points in the article relate directly to the research I’ve been doing at Liverpool on the IMPACT Project, for we decompose a policy proposal into its constituent parts for examination and improved understanding. See our tool live: Structured Consultation Tool

Policy proposals are often presented in an encapsulated form (a sound bite). And those receiving it presume that they understand it, the illusion of explanatory depth discussed in a recent article by Frank Keil (a psychology professor at Cornell when and where I was a Linguistics PhD student). This is the illusion where people believe they understand a complex phenomenon with greater precision, coherence, and depth than they actually do; they overestimate their understanding. To philosophers, this is hardly a new phenomenon, but showing it experimentally is a new result.

In research about public policy, the NY Times authors, Sloman and Fernbach, describe experiments where people state a position and then had to justify it. The results showed that participants softened their views, for their efforts to justify them highlighted the limits of their understanding. Rather than statements of policy proposals, they suggest:

An approach to get people to state how they would, or would not, distinguish two subjects?

Would it make a difference if the questions were oral or in writing?

Since a topic map is an effort to capture a domain expert’s knowledge, tools to elicit that knowledge are important.

Journal of Experimental Psychology: Applied

Friday, October 5th, 2012

Journal of Experimental Psychology: Applied

From the website:

The mission of the Journal of Experimental Psychology: Applied® is to publish original empirical investigations in experimental psychology that bridge practically oriented problems and psychological theory.

The journal also publishes research aimed at developing and testing of models of cognitive processing or behavior in applied situations, including laboratory and field settings. Occasionally, review articles are considered for publication if they contribute significantly to important topics within applied experimental psychology.

Areas of interest include applications of perception, attention, memory, decision making, reasoning, information processing, problem solving, learning, and skill acquisition. Settings may be industrial (such as human–computer interface design), academic (such as intelligent computer-aided instruction), forensic (such as eyewitness memory), or consumer oriented (such as product instructions).

I browsed several recent issues of the Journal of Experimental Psychology: Applied while researching the Todd Rogers post. Fascinating stuff and some of it will find its way into interfaces or other more “practical” aspects of computer science.

Something to temper the focus on computer-facing work.

No computer has ever originated a purchase order or contract. Might not hurt to know something about the entities that do.