Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

October 28, 2019

How-To Black Box Google’s Algorithm of Oppression

Filed under: Algorithms,Bias,Feminism,Search Algorithms,Search Data,Searching,sexism — Patrick Durusau @ 6:55 pm

Safiya Noble’s Algorithms of Oppression highlights the necessity of asking members of marginalized communities about their experiences with algorithms. I can read the terms that Noble uses in her Google searches and her analysis of the results. What I can’t do, as an older white male, is authentically originate the queries of a Black woman scholar or estimate her reaction to the search results.

That inability to assume a role in a marginalized community extends across all marginalized communities and in between them. To understand the impact of oppressive algorithms, such as Google’s search algorithms, we must:

  1. Empower everyone who can use a web browser with the ability to black box Google’s algorithm of oppression, and
  2. Listen to their reports of queries and experiences with results of queries.

Empowering everyone to participate in testing Google’s algorithms avoids relying on reports about the experiences of marginalized communities. We will be listening to members of those communities.

In its simplest form, your black boxing of Google starts with a Google search box, then:

your search terms site:website OR site:website

That search string starts with your search terms, followed by an OR list of the websites you want searched. The results are Google’s ranking of your search against the specified websites.

Here’s an example run while working on this post:

terrorism trump IS site:nytimes.com OR site:fox.com OR site:wsj.com

Without running the search yourself, what distribution of articles do you expect to see? (I also tested this using Tor to make sure my search history wasn’t creating an issue.)

By count of the results: nytimes.com 87, fox.com 0, wsj.com 18.

Surprised? I was. I wonder how the Washington Post stacks up against the New York Times? Same terms: nytimes 49, washingtonpost.com 52.

Do you think those differences are accidental? (I don’t.)

I’m not competent to create a list of Black websites for testing Google’s algorithm of oppression but the African American Literature Book Club has a list of the top 50 Black-Owned Websites. In addition, they offer a list of 300 Black-owned websites and host the search engine Huria Search, which only searches Black-owned websites.

To save you the extraction work, here are the top 50 Black-owned websites ready for testing against each other and other sites in the bowels of Google:

essence.com OR howard.edu OR blackenterprise.com OR thesource.com OR ebony.com OR blackplanet.com OR sohh.com OR blackamericaweb.com OR hellobeautiful.com OR allhiphop.com OR worldstarhiphop.com OR eurweb.com OR rollingout.com OR thegrio.com OR atlantablackstar.com OR bossip.com OR blackdoctor.org OR blackpast.org OR lipstickalley.com OR newsone.com OR madamenoire.com OR morehouse.edu OR diversityinc.com OR spelman.edu OR theybf.com OR hiphopwired.com OR aalbc.com OR stlamerican.com OR afro.com OR phillytrib.com OR finalcall.com OR mediatakeout.com OR lasentinel.net OR blacknews.com OR blavity.com OR cassiuslife.com OR jetmag.com OR blacklivesmatter.com OR amsterdamnews.com OR diverseeducation.com OR deltasigmatheta.org OR curlynikki.com OR atlantadailyworld.com OR apa1906.net OR theshaderoom.com OR notjustok.com OR travelnoire.com OR thecurvyfashionista.com OR dallasblack.com OR forharriet.com
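
If you would rather script the query construction than paste OR lists by hand, the short Python sketch below builds the same kind of site-restricted query. The function name and example terms are mine, not any Google API; Google offers no official per-site counts here, so you still read and tally the results pages yourself.

# Sketch: build a site-restricted Google search URL from terms and a site list.
# Helper name and example values are illustrative; swap in your own terms and sites.
from urllib.parse import quote_plus

def site_restricted_query(terms, sites):
    """Return a Google search URL limited to the given sites via OR'd site: filters."""
    site_filter = " OR ".join(f"site:{s}" for s in sites)
    return "https://www.google.com/search?q=" + quote_plus(f"{terms} {site_filter}")

print(site_restricted_query("terrorism trump", ["nytimes.com", "fox.com", "wsj.com"]))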

Please spread the word to “young Black girls,” to use Noble’s phrase, to Black women in general, and to all marginalized communities: they need not wait for experts with programming staffs to detect marginalization at Google. Experts have agendas; discover your own and tell the rest of us about it.

January 16, 2019

Finding Bias in Data Mining/Science: An Exercise

Filed under: Bias,Data Mining — Patrick Durusau @ 9:34 pm

I follow the MIT Technology Review on Twitter: @techreview and was amazed to see this AM:

Period of relative peace?

Really? Smells like someone has been cooking the numbers! (Another way to say bias in data mining/science.)

Unlike annual reports from corporations and foundations, you can read the MIT post in full, Data mining adds evidence that war is baked into the structure of society, and the original paper, Pattern Analysis of World Conflicts over the past 600 years by Gianluca Martelloni, Francesca Di Patti, and Ugo Bardi; you can also access the data used for the paper: Conflict by Dr. Peter Brecke.

From general news awareness, the claim of “…period of relative peace…” should trigger skepticism on your part. I take it as a generally accepted fact that the United States was bombing somewhere in the world every day of the Obama administration and that has continued under the current U.S. president. It’s relatively peaceful in New York, London, and Berlin, but other places in the world, not so much.

I skimmed the original article and encountered this remark: “…a relatively peaceful period during the 18th century is noticeable.” I don’t remember 18th-century history all that well, but that strikes me as inconsistent with what I do remember. I have to wonder who was peaceful and who was not in the data set. I’m not saying it is wrong from one view of the data, but what underlies that statement?

Take this article, along with its data set as an exercise in finding bias in data mining/science. Bias doesn’t mean the paper or its conclusions are necessarily wrong, but choices were made with regard to the data and that shaped their paper and its conclusions.
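
One way to start the exercise is sketched below. A loud caveat: the file name and column names (year, region, fatalities) are hypothetical stand-ins, not the actual layout of Brecke’s Conflict Catalog, so rename them to match whatever you download. The point is to ask which regions and periods are represented at all before accepting any “relative peace.”

# Sketch: probe the conflict data for coverage gaps that could masquerade as "peace."
# File name and column names below are hypothetical; match them to the real export.
import pandas as pd

df = pd.read_csv("conflict_catalog.csv")
df["century"] = (df["year"] // 100) * 100

# How many recorded conflicts, and how many fatalities, per century and region?
coverage = (df.groupby(["century", "region"])
              .agg(conflicts=("year", "size"), fatalities=("fatalities", "sum")))
print(coverage)

# Sparse cells may mean peace -- or may mean nobody recorded the wars there.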

PS: A cursory glance at the paper also finds the data used ends with the year 2000. Small comfort to the estimated 32 million Muslims who have died since 9/11, during this “period of relative peace.” You need to ask: peace for whom, and at what price?

November 11, 2018

Hiding Places for Bias in Deep Learning

Filed under: Bias,Deep Learning — Patrick Durusau @ 8:17 pm

Are Deep Policy Gradient Algorithms Truly Policy Gradient Algorithms? by Andrew Ilyas, et al.

Abstract:

We study how the behavior of deep policy gradient algorithms reflects the conceptual framework motivating their development. We propose a fine-grained analysis of state-of-the-art methods based on key aspects of this framework: gradient estimation, value prediction, optimization landscapes, and trust region enforcement. We find that from this perspective, the behavior of deep policy gradient algorithms often deviates from what their motivating framework would predict. Our analysis suggests first steps towards solidifying the foundations of these algorithms, and in particular indicates that we may need to move beyond the current benchmark-centric evaluation methodology.

Although written as an evaluation of the framework for deep policy gradient algorithms with suggestions for improvement, it isn’t hard to see how the same factors create hiding places for bias in deep learning algorithms. (A toy illustration of the first bullet follows the list.)

  • Gradient Estimation: we find that even while agents are improving in terms of reward, the gradient
    estimates used to update their parameters are often virtually uncorrelated with the true gradient.
  • Value Prediction: our experiments indicate that value networks successfully solve the supervised learning task they are trained on, but do not fit the true value function. Additionally, employing a value network as a baseline function only marginally decreases the variance of gradient estimates (but dramatically increases agent’s performance).
  • Optimization Landscapes: we also observe that the optimization landscape induced by modern policy gradient algorithms is often not reflective of the underlying true reward landscape, and that the latter is often poorly behaved in the relevant sample regime.
  • Trust Regions: our findings show that deep policy gradient algorithms sometimes violate theoretically motivated trust regions. In fact, in proximal policy optimization, these violations stem from a fundamental problem in the algorithm’s design.
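
As that toy illustration: the sketch below is my own simplification, not the authors’ RL experiments. It estimates the gradient of an ordinary least-squares loss from small samples and measures how well those estimates align with the true gradient over the full data set, showing how easily sample-based estimates can drift away from the quantity they are supposed to track.

# Toy sketch (not the paper's RL setup): how well do sample-based gradient
# estimates correlate with the true gradient of a simple least-squares loss?
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 50
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(scale=2.0, size=n)
w = np.zeros(d)                       # parameters at which we probe the gradient

def grad(Xb, yb, w):
    """Gradient of mean squared error at w for the batch (Xb, yb)."""
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

true_g = grad(X, y, w)                # gradient over the full data set

for batch in (5, 50, 500, 5000):
    idx = rng.choice(n, size=batch, replace=False)
    est = grad(X[idx], y[idx], w)
    cos = est @ true_g / (np.linalg.norm(est) * np.linalg.norm(true_g))
    print(f"batch={batch:5d}  cosine similarity with true gradient: {cos:.3f}")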

The key take-away is that if you can’t explain the behavior of an algorithm, then how do you detect or guard against bias in such an algorithm? Or as the authors put it:

Deep reinforcement learning (RL) algorithms are rooted in a well-grounded framework of classical RL, and have shown great promise in practice. However, as our investigations uncover, this framework fails to explain much of the behavior of these algorithms. This disconnect impedes our understanding of why these algorithms succeed (or fail). It also poses a major barrier to addressing key challenges facing deep RL, such as widespread brittleness and poor reproducibility (cf. Section 4 and [3, 4]).

Do you plan on offering ignorance about your algorithms as a defense for discrimination?

Interesting.

June 26, 2018

Reading While White

Filed under: Bias,News,Politics,Texts — Patrick Durusau @ 12:54 pm

40 Ways White People Say ‘White People’ Without Actually Saying ‘White People’ came up on my Facebook feed. I don’t think people of color need any guidance on when “white people” is being said without saying “white people.” They have a lifetime of experience detecting it.

On the other hand, “white people” have a lifetime of eliding over when someone says “white people” without using those precise terms.

What follows is a suggestion of a tool that may assist white readers in detecting when “white people” is being said, but not in explicit terms.

Download the sed script, reading-while-white.txt and Remarks by President Trump at Protecting American Workers Roundtable (save as HTML page) to test the script.

Remember to chmod +x the sed script, then:

./reading-while-white.sed remarks-president-trump-protecting-american-workers-roundtable > reading.while.white.roundtable.html

[Screenshots in the original post show the top of the transformed document and examples of the replacement text.]

I use “white people” to replace all implied references to white people and preserve the original text in parentheses following the replacement.

The script is hard-coded for the HTML format of White House pages, but just reform the <h1> line to apply it to other sites.
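
The sed script itself isn’t reproduced in this post, but the substitution idea is simple enough to sketch. Here is a rough Python equivalent; the phrase list is mine and purely illustrative, not the contents of reading-while-white.sed.

# Rough sketch of the substitution idea behind reading-while-white.sed.
# The phrases below are illustrative examples only; the actual script's list differs.
import re, sys

IMPLIED_WHITE_PEOPLE = [
    "American workers",
    "hardworking Americans",
    "everyday Americans",
]

def annotate(html: str) -> str:
    """Replace implied uses with 'white people', keeping the original text in parentheses."""
    for phrase in IMPLIED_WHITE_PEOPLE:
        html = re.sub(re.escape(phrase), f"white people ({phrase})", html)
    return html

if __name__ == "__main__":
    sys.stdout.write(annotate(sys.stdin.read()))

Run it as: python reading_while_white.py < remarks.html > annotated.html, mirroring the sed invocation above.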

Places to apply Reading While White:

  1. CNN
  2. Fox News
  3. The Guardian
  4. National Public Radio
  5. New York Times
  6. Wall Street Journal
  7. Washington Post

Save your results! Share them with your friends!

Educate white readers about implied “white people!”

I’m looking for an easier way to share such transformations in a browser.

Do you know of a browser project that does NOT require a stylesheet and the file it styles to originate from the same source? That would make a great viewer for such transformations. (Prohibited in most browsers as a “security” issue. Read “content provider in control” for “security” and you come closer to the mark.)

June 20, 2018

Intentional Ignorance For Data Science: Ignore All Females

Filed under: Bias,Bioinformatics,Biology,Biomedical,Data Science — Patrick Durusau @ 4:10 pm

When Research Excludes Female Rodents, Human Women Lose by Naseem Jamnia.

From the post:


Even when I was at the University of Pennsylvania, one of the best research institutes in the world, I talked to researchers who were reluctant to use female rodents in their studies, especially if they weren’t examining sex differences. For example, one of the labs I was interested in working in looked at social behavior in a mouse model of autism—but only in male mice, even though we need more studies of autism in girls/female models. PhD-level scientists told me that the estrous cycle (the rodent menstrual cycle) introduced too many complications. But consider that the ultimate goal of biomedical research is to understand the mechanisms of disease so that we can ultimately treat them in humans. By excluding female animals—not to mention intersex animals, which I’ll get to in a bit—modern scientists perpetuate the historical bias of a medical community that frequently dismisses, pathologizes, and actively harms non-male patients.

The scientific implications of not using female animals in scientific and biomedical research are astounding. How can we generalize a drug’s effect if we only look at part of the population? Given that sex hormones have a ton of roles outside of reproduction—for example, in brain development, cell growth and division, and gene regulation—are there interactions we don’t know about? We already know that certain diseases often present differently in men and women—for example, stroke and heart disease—so a lack of female animal studies means we can’t fully understand these differing mechanisms. On top of it all, a 2014 Nature paper showed that rodents behave differently depending on the researcher’s gender (it appears they react to the scent of secreted androgens) which puts decades of research into question.

Jamnia’s not describing medical research in the 19th century, nor at Tuskegee, Alabama or Nazi medical experiments.

Jamnia is describing the current practice of medical research, today, now.

This is beyond bias in data sampling; this is intentional ignorance of more than half of all the people on earth.

I hasten to add, this isn’t new, it has been known and maintained throughout the 20th century and thus far in the 21st.

The lack of newness should not diminish your rage against intentional ignorance of how drugs and treatments impact, ultimately, women.

If you won’t tolerate intentional ignorance of females in data science (you should not), then don’t tolerate intentional ignorance in medical research.

Ban funding of projects that exclude female test subjects.

So-called “researchers” can continue to exclude female test subjects, just not on your dime.

February 5, 2018

Unfairness By Algorithm

Filed under: Bias,Computer Science — Patrick Durusau @ 5:40 pm

Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making by Lauren Smith.

From the post:

Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also create valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns can be amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals’ eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments—including the benefits provided by automated decision-making frameworks, and the fallibility of human decision-making.

Recent discussions have highlighted legal and ethical issues raised by the use of sensitive data for hiring, policing, benefits determinations, marketing, and other purposes. These conversations can become mired in definitional challenges that make progress towards solutions difficult. There are few easy ways to navigate these issues, but if stakeholders hold frank discussions, we can do more to promote fairness, encourage responsible data use, and combat discrimination.

To facilitate these discussions, the Future of Privacy Forum (FPF) attempted to identify, articulate, and categorize the types of harm that may result from automated decision-making. To inform this effort, FPF reviewed leading books, articles, and advocacy pieces on the topic of algorithmic discrimination. We distilled both the harms and potential mitigation strategies identified in the literature into two charts. We hope you will suggest revisions, identify challenges, and help improve the document by contacting lsmith@fpf.org. In addition to presenting this document for consideration for the FTC Informational Injury workshop, we anticipate it will be useful in assessing fairness, transparency and accountability for artificial intelligence, as well as methodologies to assess impacts on rights and freedoms under the EU General Data Protection Regulation.

The primary attractions are two tables, Potential Harms from Automated Decision-Making and Potential Mitigation Sets.

Take the tables as a starting point for analysis.

Some “unfair” practices, such as increased auto insurance prices for night-shift workers (which result in differential access to insurance), are actuarial questions. Insurers are not public charities and can legally discriminate based on perceived risk.

December 11, 2017

Mathwashing:…

Filed under: Algorithms,Bias,Mathematics — Patrick Durusau @ 8:32 pm

Mathwashing: How Algorithms Can Hide Gender and Racial Biases by Kimberley Mok.

From the post:

Scholars have long pointed out that the way languages are structured and used can say a lot about the worldview of their speakers: what they believe, what they hold sacred, and what their biases are. We know humans have their biases, but in contrast, many of us might have the impression that machines are somehow inherently objective. But does that assumption apply to a new generation of intelligent, algorithmically driven machines that are learning our languages and training from human-generated datasets? By virtue of being designed by humans, and by learning natural human languages, might these artificially intelligent machines also pick up on some of those same human biases too?

It seems that machines can and do indeed assimilate human prejudices, whether they are based on race, gender, age or aesthetics. Experts are now finding more evidence that supports this phenomenon of algorithmic bias. As sets of instructions that help machines to learn, reason, recognize patterns and perform tasks on their own, algorithms increasingly pervade our lives. And in a world where algorithms already underlie many of those big decisions that can change lives forever, researchers are finding that many of these algorithms aren’t as objective as we assume them to be.

If you have ever suffered from the delusion that algorithms, any algorithm, are “objective,” this post is a must read. Or re-read, to remind yourself that “objectivity” is a claim used to put your position beyond question for self-interest. Nothing more.

For my part, I’m not sure what’s unclear about data collection, algorithm choice, and interpretation of results all being the products of bias.

There may be acceptable biases, or degrees of bias, but the goal of any measurement is a result, which automatically biases a measurer in favor of phenomena that can be measured by a convenient technique. Phenomena that cannot be easily measured, no matter how important, won’t be included.

By the same token, “bias-correction” is the introduction of an acceptable bias and/or the limiting of bias to what the person judging its presence considers an acceptable level.

Bias is omnipresent and while evaluating algorithms is important, always bear in mind you are choosing acceptable bias over unacceptable bias.

Or to mis-quote the Princess Bride: “Bias is everywhere. Anyone who says differently is selling something.”

June 7, 2017

Were the Greeks and Romans White Supremacists?

Filed under: Art,Bias,Diversity,History,Humanities — Patrick Durusau @ 3:02 pm

Why We Need to Start Seeing the Classical World in Color by Sarah E. Bond.

From the post:

Modern technology has revealed an irrefutable, if unpopular, truth: many of the statues, reliefs, and sarcophagi created in the ancient Western world were in fact painted. Marble was a precious material for Greco-Roman artisans, but it was considered a canvas, not the finished product for sculpture. It was carefully selected and then often painted in gold, red, green, black, white, and brown, among other colors.

A number of fantastic museum shows throughout Europe and the US in recent years have addressed the issue of ancient polychromy. The Gods in Color exhibit travelled the world between 2003–15, after its initial display at the Glyptothek in Munich. (Many of the photos in this essay come from that exhibit, including the famed Caligula bust and the Alexander Sarcophagus.) Digital humanists and archaeologists have played a large part in making those shows possible. In particular, the archaeologist Vinzenz Brinkmann, whose research informed Gods in Color, has done important work, applying various technologies and ultraviolet light to antique statues in order to analyze the minute vestiges of paint on them and then recreate polychrome versions.

Acceptance of polychromy by the public is another matter. A friend peering up at early-20th-century polychrome terra cottas of mythological figures at the Philadelphia Museum of Art once remarked to me: “There is no way the Greeks were that gauche.” How did color become gauche? Where does this aesthetic disgust come from? To many, the pristine whiteness of marble statues is the expectation and thus the classical ideal. But the equation of white marble with beauty is not an inherent truth of the universe. Where this standard came from and how it continues to influence white supremacist ideas today are often ignored.

Most museums and art history textbooks contain a predominantly neon white display of skin tone when it comes to classical statues and sarcophagi. This has an impact on the way we view the antique world. The assemblage of neon whiteness serves to create a false idea of homogeneity — everyone was very white! — across the Mediterranean region. The Romans, in fact, did not define people as “white”; where, then, did this notion of race come from?

A great post and a reminder that learning history (or current events) through a particular lens isn’t the same as having the only view of history (or current events).

I originally wrote “an accurate view of history….” but that’s not true. At best we have one or more views and when called upon to act, make decisions upon those views. “Accuracy” is something that lies beyond our human grasp.

The reminder I would add to this post is that recognition of a lens, in this case the absence of color in our learning of history, isn’t overcome by naming it and perhaps nodding in agreement, yes, that was a shortfall in our learning.

“Knowing” about the coloration of familiar art work doesn’t erase centuries of considering it without color. No amount of pretending will make it otherwise.

Humanists should learn about and promote the use of colorization so the youth of today learn different traditions than the ones we learned.

May 13, 2017

Bigoted Use of Stingray Technology vs. Other Ills

Filed under: #BLM,Bias,Ethics,Government — Patrick Durusau @ 8:33 pm

Racial Disparities in Police ‘Stingray’ Surveillance, Mapped by George Joseph.

From the post:

Louise Goldsberry, a Florida nurse, was washing dishes when she looked outside her window and saw a man pointing a gun at her face. Goldsberry screamed, dropped to the floor, and crawled to her bedroom to get her revolver. A standoff ensued with the gunman—who turned out to be an agent with the U.S. Marshals’ fugitive division.

Goldsberry, who had no connection to a suspect that police were looking for, eventually surrendered and was later released. Police claimed that they raided her apartment because they had a “tip” about the apartment complex. But, according to Slate, the reason the “tip” was so broad was because the police had obtained only the approximate location of the suspect’s phone—using a “Stingray” phone tracker, a little-understood surveillance device that has quietly spread from the world of national security into that of domestic law enforcement.

Goldsberry’s story illustrates a potential harm of Stingrays not often considered: increased police contact for people who get caught in the wide dragnets of these interceptions. To get a sense of the scope of this surveillance, CityLab mapped police data from three major cities across the U.S., and found that this burden is not shared equally.

How not equally?

Baltimore, Maryland.

The map at Joseph’s post is interactive, along with maps for Tallahassee, Florida and Milwaukee, Wisconsin.

I oppose government surveillance overall but am curious: is Stingray usage a concern only of technology/privacy advocates, or is there a broader base for opposing it?

Consider the following facts gathered by Bill Quigley:

Were you shocked at the disruption in Baltimore? What is more shocking is daily life in Baltimore, a city of 622,000 which is 63 percent African American. Here are ten numbers that tell some of the story.

One. Blacks in Baltimore are more than 5.6 times more likely to be arrested for possession of marijuana than whites even though marijuana use among the races is similar. In fact, Baltimore county has the fifth highest arrest rate for marijuana possessions in the USA.

Two. Over $5.7 million has been paid out by Baltimore since 2011 in over 100 police brutality lawsuits. Victims of severe police brutality were mostly people of color and included a pregnant woman, a 65 year old church deacon, children, and an 87 year old grandmother.

Three. White babies born in Baltimore have six more years of life expectancy than African American babies in the city.

Four. African Americans in Baltimore are eight times more likely to die from complications of HIV/AIDS than whites and twice as likely to die from diabetes related causes as whites.

Five. Unemployment is 8.4 percent city wide. Most estimates place the unemployment in the African American community at double that of the white community. The national rate of unemployment for whites is 4.7 percent, for blacks it is 10.1.

Six. African American babies in Baltimore are nine times more likely to die before age one than white infants in the city.

Seven. There is a twenty year difference in life expectancy between those who live in the most affluent neighborhood in Baltimore versus those who live six miles away in the most impoverished.

Eight. 148,000 people, or 23.8 percent of the people in Baltimore, live below the official poverty level.

Nine. 56.4 percent of Baltimore students graduate from high school. The national rate is about 80 percent.

Ten. 92 percent of marijuana possession arrests in Baltimore were of African Americans, one of the highest racial disparities in the USA.

(The “Shocking” Statistics of Racial Disparity in Baltimore)

Which of those facts would you change before tackling the problem of racially motivated use of Stingray technology?

I see several that I would rate much higher than the vagaries of Stingray surveillance.

You?

April 7, 2017

Fact Check now available in Google… [Whose “Facts?”]

Filed under: Bias,Journalism,News,Reporting — Patrick Durusau @ 8:15 pm

Fact Check now available in Google Search and News around the world by Justin Kosslyn and Cong Yu.

From the post:

Google was built to help people find useful information by surfacing the great content that publishers and sites create. This access to high quality information is what drives people to use the web and for contributors to continue to engage and invest in it.

However, with thousands of new articles published online every minute of every day, the amount of content confronting people online can be overwhelming. And unfortunately, not all of it is factual or true, making it hard for people to distinguish fact from fiction. That’s why last October, along with our partners at Jigsaw, we announced that in a few countries we would start enabling publishers to show a “Fact Check” tag in Google News for news stories. This label identifies articles that include information fact checked by news publishers and fact-checking organizations.

After assessing feedback from both users and publishers, we’re making the Fact Check label in Google News available everywhere, and expanding it into Search globally in all languages. For the first time, when you conduct a search on Google that returns an authoritative result containing fact checks for one or more public claims, you will see that information clearly on the search results page. The snippet will display information on the claim, who made the claim, and the fact check of that particular claim.

And the fact checking criteria?


For publishers to be included in this feature, they must be using the Schema.org ClaimReview markup on the specific pages where they fact check public statements (documentation here), or they can use the Share the Facts widget developed by the Duke University Reporters Lab and Jigsaw. Only publishers that are algorithmically determined to be an authoritative source of information will qualify for inclusion. Finally, the content must adhere to the general policies that apply to all structured data markup, the Google News Publisher criteria for fact checks, and the standards for accountability and transparency, readability or proper site representation as articulated in our Google News General Guidelines. If a publisher or fact check claim does not meet these standards or honor these policies, we may, at our discretion, ignore that site’s markup.
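
For the curious, ClaimReview markup is ordinary JSON-LD embedded in the page. The sketch below assembles a minimal example in Python; the field names follow the schema.org ClaimReview vocabulary, but the claim, organization, and URLs are invented, and Google’s exact acceptance criteria remain its own.

# Sketch: a minimal schema.org ClaimReview object, serialized as JSON-LD.
# Field names follow schema.org; claim, rating, names, and URLs are invented examples.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/checks/some-claim",
    "datePublished": "2017-04-07",
    "claimReviewed": "Example public claim being checked.",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Person", "name": "Example Politician"},
        "datePublished": "2017-04-01",
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
}

# Embed the output in the page inside <script type="application/ld+json"> ... </script>.
print(json.dumps(claim_review, indent=2))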

An impressive 115 separate organizations are approved fact checkers, but most of them, the New York Times for example, publish “facts” from the US State Department, US Department of Defense, members of the US Congress, the White House, and other dubious sources of information.

Not to mention how many times have you read the New York Times supporting:

  • Palestinian Martyrs
  • State destruction of Afro-American homes as retribution for crimes
  • Armed white encampments in traditionally Afro-American neighborhoods

No?

Do you think perhaps the New York Times has a “point of view?”

We all do you know. Have a point of view.

What I find troubling about “fact checking” by Google is that some points of view, such as that of the NYT, are going to be privileged as “facts,” whereas other points of view will not enjoy such a privilege.

Need I mention that not so long ago the entire Middle East was thrown into disarray, a disarray that continues to this day, because the “facts” as judged by the NYT and others said that Saddam Hussein possessed weapons of mass destruction?

I have no doubt that a fact-checking Google at the time would have said it was a fact that Saddam Hussein possessed weapons of mass destruction, at least until years after the claim had been proven false. Everybody who was anybody said it was a fact. Must be true.

As a super-Snopes, if I hear a rumor about Pete Rose and the Baseball Hall of Fame, Google fact checking may be useful.

For more subtle questions, consider whose “facts” in evaluating a Google fact check response.

March 27, 2017

How Do You Spell Media Bias? M-U-S-L-I-M

Filed under: Bias,Journalism,News,Reporting — Patrick Durusau @ 4:11 pm

Disclosure: I have contempt for news reports that hype acts of terrorism. Even more so when little more than criminal acts by Muslims are bemoaned as existential threats to Western society. Just so you know I’m not in a position to offer a balanced view of Ronald Bailey’s post.

Do Muslims Commit Most U.S. Terrorist Attacks?: Nope. Not even close. by Ronald Bailey.

From the post:

“It’s gotten to a point where it’s not even being reported. In many cases, the very, very dishonest press doesn’t want to report it,” asserted President Donald Trump a month ago. He was referring to a purported media reticence to report on terror attacks in Europe. “They have their reasons, and you understand that,” he added. The implication, I think, is that the politically correct press is concealing terrorists’ backgrounds.

To bolster the president’s claims, the White House then released a list of 78 terror attacks from around the globe that Trump’s minions think were underreported. All of the attackers on the list were Muslim—and all of the attacks had been reported by multiple news outlets.

Some researchers at Georgia State University have an alternate idea: Perhaps the media are overreporting some of the attacks. Political scientist Erin Kearns and her colleagues raise that possibility in a preliminary working paper called “Why Do Some Terrorist Attacks Receive More Media Attention Than Others?”

For those five years, the researchers found, Muslims carried out only 11 out of the 89 attacks, yet those attacks received 44 percent of the media coverage. (Meanwhile, 18 attacks actually targeted Muslims in America.) The Boston marathon bombing generated 474 news reports, amounting to 20 percent of the media terrorism coverage during the period analyzed. Overall, the authors report, “The average attack with a Muslim perpetrator is covered in 90.8 articles. Attacks with a Muslim, foreign-born perpetrator are covered in 192.8 articles on average. Compare this with other attacks, which received an average of 18.1 articles.”

While the authors rightly question the unequal coverage of terrorist attacks, which falsely creates a link between Muslims and terrorism in the United States, I question the appropriateness of a media focus on terrorism at all.

Aside from the obvious lure that fear sells and fear of Muslims sells very well in the United States, the human cost from domestic terrorist attacks, not just those by Muslims, hardly justifies crime blotter coverage.

Consider that in 2014, there were 33,559 deaths due to gun violence and 32 from terrorism.

But as I said, fear sells and fear of Muslims sells very well.

Terrorism, or more properly the fear of terrorism, has been exploited to distort government priorities and to reduce the rights of all citizens. Media participation/exploitation of that fear is a matter of record.

The question now is whether the media will knowingly continue its documented bigotry or choose another course?

The paper:

Kearns, Erin M. and Betus, Allison and Lemieux, Anthony, Why Do Some Terrorist Attacks Receive More Media Attention Than Others? (March 5, 2017). Available at SSRN: https://ssrn.com/abstract=2928138

February 6, 2017

Eight Days in March: [Bias by Omission]

Filed under: Bias,Journalism,News,Reporting — Patrick Durusau @ 9:19 pm

Eight Days in March: How the World Searched for Terror Attacks by Google Trends.

[Image: “Eight Days in March” Google Trends graphic]

Cities that searched for these attacks:

[Image: map of cities searching for these attacks]

See the original for full impact but do you notice a bias by omission?

What about the terrorist bombings by the United States and its allies in Syria and Iraq, that happened every day mentioned in this graphic?

Operation Inherent Resolve reports:

Between Aug. 8, 2014 and Jan. 30, 2017, U.S. and partner-nation aircraft have flown an estimated 136,069 sorties in support of operations in Iraq and Syria.

That’s 906 days or 150 sorties on average per day.

Or for eight days in March, 1200 acts of terrorism in Iraq and Syria.

Readers who are unaware of the crimes against the people of Iraq and Syria won’t notice the bias in this graphic.

Every biased graphic is an opportunity to broaden a reader’s awareness.

Take advantage of them.

January 27, 2017

You’re the fact-checker now [Wineburg/McGrew Trafficking In Myths]

Filed under: Bias,Critical Reading,Journalism,News — Patrick Durusau @ 11:08 am

You’re the fact-checker now

From the post:

No matter what media stream you depend on for news, you know that news has changed in the past few years. There’s a lot more of it, and it’s getting harder to tell what’s true, what’s biased, and what may be outright deceptive. While the bastions of journalism still employ editors and fact-checkers to screen information for you, if you’re getting your news and assessing information from less venerable sources, it’s up to you to determine what’s credible.

“We are talking about the basic duties of informed citizenship,” says Sam Wineburg, Margaret Jacks Professor of Education.

Wineburg and Sarah McGrew, a doctoral candidate in education, tested the ability of thousands of students ranging from middle school to college to evaluate the reliability of online news. What they found was discouraging: even social media-savvy students at elite universities were woefully unskilled at determining whether or not information came from reliable, unbiased sources.

Wineburg and McGrew arrived at the crisis of “biased” news decades, if not centuries, too late.

Manufacturing Consent: The Political Economy of the Mass Media by Edward S. Herman and Noam Chomsky, published in 2002, traces the willing complicity of the press in any number of fictions that served the interests of the government and others.

There is a documentary by Mark Achbar and Peter Wintonick about Noam Chomsky and Manufacturing Consent. Total run time is: 2 hours, 40 minutes and 24 seconds. I read the book, did not watch the video. But if you prefer video:

https://www.youtube.com/watch?v=YHa6NflkW3Y

Herman and Chomsky don’t report some of the earlier examples of biased news.

Egyptian accounts of the Battle of Kadesh claim a decisive victory in 1274 or 1273 BCE over the Hittites, accounts long accepted as the literal truth. More recent research treats the Egyptian claims as akin to US claims to winning the war on terrorism.

Winning wars makes good press but no intelligent person takes such claims uncritically.

For the exact details, consider:

The Road to Kadesh: A Historical Interpretation of the Battle Reliefs of King Sety I at Karnak

and, “The Battle of Kadesh: A Debate between the Egyptian and Hittite Perspectives:”

Or as another example of biased reporting, consider the text of You’re the fact-checker now.

From the post:

“Accurate information is an absolutely essential ingredient to civic health,” says Wineburg.

Ok, so what do you make of the lack of evidence for:

…it’s getting harder to tell what’s true, what’s biased, and what may be outright deceptive[?]

I grant there’s a common myth of a time when it was easier to tell “what’s true, what’s biased and what may be outright deceptive.” But the existence of a common myth doesn’t equate to factual truth.

An article exhorting readers to become fact-checkers that is premised on a myth, in Wineburg’s own words, has a “shaky foundation.”

Sources have always been biased and some calculated to deceive, from those that reported total Egyptian victory at Kadesh to more recent examples by Herman and Chomsky.

Careful readers treat all sources as suspect, especially those not considered suspect by others.


Semi-careful readers may object that I have cited no evidence for:

…it’s getting harder to tell what’s true, what’s biased, and what may be outright deceptive.

being a myth.

“Myth” in this context is a rhetorical flourish to describe the lack of evidence presented by Wineburg and McGrew for that proposition.

Establishing such a claim, the alleged decline in the ability of students (or others) to discern between trustworthy and untrustworthy sources, requires:

  1. A baseline of what is true, biased, deceptive for time period X.
  2. Test of students (or others) for discernment of truth/bias/deception in reports during period X.
  3. A baseline of what is true, biased, deceptive for time period Y.
  4. Proof the baselines for periods X and Y are in fact comparable.
  5. Proof the tests and their results are comparable for periods X and Y.
  6. Test of students (or others) for discernment of truth/bias/deception in reports during period Y.
  7. Evaluation of the difference (if any) between the results of tests for periods X and Y.

at a minimum. I have only captured the major steps that come to mind. No doubt readers can supply others that I have overlooked.

Absent such research, analysis, and proofs that can be replicated by others, Wineburg and McGrew are trafficking in common prejudice and nothing more.

Such trafficking is useful for funding purposes but it doesn’t advance the discussion of training readers in critical evaluation of sources.

December 14, 2016

Be Undemocratic – Think For Other People – Courtesy of Slate

Filed under: Bias,Censorship,Free Speech,Politics — Patrick Durusau @ 9:20 am

Feeling down? Left out of the “big boys” internet censor game by the likes of Facebook and Twitter?

Dry your eyes! Slate has ridden to your rescue!

Will Oremus writes in: Only You Can Stop the Spread of Fake News:


Slate has created a new tool for internet users to identify, debunk, and—most importantly—combat the proliferation of bogus stories. Conceived and built by Slate developers, with input and oversight from Slate editors, it’s a Chrome browser extension called This Is Fake, and you can download and install it for free either on its home page or in the Chrome web store. The point isn’t just to flag fake news; you probably already know it when you see it. It’s to remind you that, anytime you see fake news in your feed, you have an opportunity to interrupt its viral transmission, both within your network and beyond.

I’m glad Slate is taking the credit/blame for This is Fake.

Can you name a more undemocratic position than assuming your fellow voters are incapable of making intelligent choices about the news they consume?

Well, everybody but you and your friends. Right?

Thanks for your offer to help Slate, but no thanks.

December 11, 2016

Cognitive Bias Exercises

Filed under: Bias — Patrick Durusau @ 5:02 pm

I encountered Cognitive bias cheat sheet – Because thinking is hard by Buster Benson today along with the visualization contributed by John Manoogian III.

Benson divided Wikipedia’s list of cognitive biases into twenty groupings and summarizes those into four principles to use and four truths about our solutions.

That’s handy but how do I practice spotting those cognitive biases?

I started with Problem 1: Too much information., the first group:

We notice things that are already primed in memory or repeated often. (emphasis in original)

(You notice I have revealed one of my cognitive “biases.” I can’t stand to have lists in non-alphabetical order.)

Users are invited to write a one-sentence definition for each bias and then to supply examples of each one.

Scoring: 1 point for each example of a bias, 3 points if it’s in your own work.

Spotting the biases, as you see them, is one aspect of the exercise.

Group discussion of results will hone your cognitive bias spotting skills to a fine edge.

My first cut on problem 1, group 1.

Suggestions? Comments?

December 4, 2016

Pence, Stephanopoulos and False Statements

Filed under: Bias,Government,Journalism,News,Reporting — Patrick Durusau @ 8:23 pm

‘This Week’ Transcript: Vice President-Elect Mike Pence and Gen. David Petraeus, covers President-elect Donald Trump’s tweet:

In addition to winning the electoral college in a landslide, I won the popular vote if you deduct the millions of people who voted illegally.

That portion of the transcript reads as follows (apologies for the long quote but I think you will agree it’s all relevant):


STEPHANOPOULOS: As I said, President-Elect Trump has been quite active on Twitter, including this week at the beginning of this week, that tweet which I want to show right now, about the popular vote.

And he said, “In addition to winning the electoral college in a landslide, I won the popular vote if you deduct the millions of people who voted illegally.”

That claim is groundless. There’s no evidence to back it up.

Is it responsible for a president-elect to make false statements like that?

PENCE: Well, look, I think four years ago the Pew Research Center found that there were millions of inaccurate voter registrations.

STEPHANOPOULOS: Yes, but the author of this said he — he has said it is not any evidence about what happened in this election or any evidence of voter fraud.

PENCE: I think what, you know, what is — what is historic here is that our president-elect won 30 of 50 states, he won more counties than any candidate on our side since Ronald Reagan.

And the fact that some partisans, who are frustrated with the outcome of the election and disappointed with the outcome of the election, are pointing to the popular vote, I can assure you, if this had been about the popular vote, Donald Trump and I have been campaigning a whole lot more in Illinois and California and New York.

STEPHANOPOULOS: And no one is questioning your victory, certainly I’m not questioning your victory. I’m asking just about that tweet, which I want to say that he said he would have won the popular vote if you deduct the millions of people who voted illegally. That statement is false. Why is it responsible to make it?

PENCE: Well, I think the president-elect wants to call to attention the fact that there has been evidence over many years of…

STEPHANOPOULOS: That’s not what he said.

PENCE: …voter fraud. And expressing that reality Pew Research Center found evidence of that four years ago.

STEPHANPOULOS: That’s not the evidence…

PENCE: …that certainly his right.

But, you know…

STEPHANOPOULOS: It’s his right to make false statements?

PENCE: Well, it’s his right to express his opinion as president-elect of the United States.

I think one of the things that’s refreshing about our president-elect and one of the reasons why I think he made such an incredible connection with people all across this country is because he tells you what’s on his mind.

STEPHANOPOULOS: But why is it refreshing to make false statements?

PENCE: Look, I don’t know that that is a false statement, George, and neither do you. The simple fact is that…

STEPHANOPOULOS: I know there’s no evidence for it.

PENCE: There is evidence, historic evidence from the Pew Research Center of voter fraud that’s taken place. We’re in the process of investigating irregularities in the state of Indiana that were leading up to this election. The fact that voter fraud exists is…

STEPHANPOULOS: But can you provide any evidence — can you provide any evidence to back up that statement?

PENCE; Well, look, I think he’s expressed his opinion on that. And he’s entitled to express his opinion on that. And I think the American people — I think the American people find it very refreshing that they have a president who will tell them what’s on his mind. And I think the connection that he made in the course…

STEPHANOPOULOS: Whether it’s true or not?

PENCE: Well, they’re going to tell them — he’s going to say what he believes to be true and I know that he’s always going to speak in that way as president.
….

Just to be clear, I agree with Stephanopoulos and others who say there is no evidence of millions of illegal votes being cast in the 2016 presidential election.

After reading Stephanopoulos press Pence on this false statement by President-elect Trump, can you recall Stephanopoulos or any other major reporter pressing President Obama on his statements about terrorism, such as:


Tonight I want to talk with you about this tragedy, the broader threat of terrorism and how we can keep our country safe. The FBI is still gathering the facts about what happened in San Bernardino, but here’s what we know. The victims were brutally murdered and injured by one of their co-workers and his wife. So far, we have no evidence that the killers were directed by a terrorist organization overseas or that they were part of a broader conspiracy here at home. But it is clear that the two of them had gone down the dark path of radicalization, embracing a perverted interpretation of Islam that calls for war against America and the West. They had stockpiled assault weapons, ammunition, and pipe bombs.

So this was an act of terrorism designed to kill innocent people. Our nation has been at war with terrorists since Al Qaeda killed nearly 3,000 Americans on 9/11. In the process, we’ve hardened our defenses, from airports, to financial centers, to other critical infrastructure. Intelligence and law enforcement agencies have disrupted countless plots here and overseas and worked around the clock to keep us safe.

Our military and counterterrorism professionals have relentlessly pursued terrorist networks overseas, disrupting safe havens in several different countries, killing Osama Bin Laden, and decimating Al Qaeda’s leadership.

Over the last few years, however, the terrorist threat has evolved into a new phase. As we’ve become better at preventing complex multifaceted attacks like 9/11, terrorists turn to less complicated acts of violence like the mass shootings that are all too common in our society. It is this type of attack that we saw at Fort Hood in 2009, in Chattanooga earlier this year, and now in San Bernardino.

And as groups like ISIL grew stronger amidst the chaos of war in Iraq and then Syria, and as the Internet erases the distance between countries, we see growing efforts by terrorists to poison the minds of people like the Boston Marathon bombers and the San Bernardino killers.

For seven years, I’ve confronted this evolving threat each and every morning in my intelligence briefing, and since the day I took this office, I have authorized U.S. forces to take out terrorists abroad precisely because I know how real the danger is.
Here’s what Obama said in his Sunday night address: An annotated transcript

Really? “…because I know how real the danger is.”

Do you recall anyone pressing President Obama on his claims about the danger of terrorism?

If you ever get to pose such a question to President Obama, remind him that 685 Americans die every day from medical errors, 44,000 Americans die every 6 months due to excessive alcohol consumption, and that 430 Americans died between 2000 and 2013 due to falling furniture.

Can you think of a single instance when Obama’s flights of fancy about terrorism were challenged as Stephanopoulos did Trump’s delusion about illegal voters?

The media can and should challenge such flights of fancy.

At the same time, they should challenge those favored by other politicians, their editors, fellow journalists and advertisers.

PS: The medical error article: Medical error—the third leading cause of death in the US, BMJ 2016; 353 doi: http://dx.doi.org/10.1136/bmj.i2139 (Published 03 May 2016) Cite this as: BMJ 2016;353:i2139 (The Guardian article, my source, didn’t include a link to the original article.)

September 28, 2016

Election Prediction and STEM [Concealment of Bias]

Filed under: Bias,Government,Politics,Prediction — Patrick Durusau @ 8:21 pm

Election Prediction and STEM by Sheldon H. Jacobson.

From the post:

Every U.S. presidential election attracts the world’s attention, and this year’s election will be no exception. The decision between the two major party candidates, Hillary Clinton and Donald Trump, is challenging for a number of voters; this choice is resulting in third-party candidates like Gary Johnson and Jill Stein collectively drawing double-digit support in some polls. Given the plethora of news stories about both Clinton and Trump, November 8 cannot come soon enough for many.

In the Age of Analytics, numerous websites exist to interpret and analyze the stream of data that floods the airwaves and newswires. Seemingly contradictory data challenges even the most seasoned analysts and pundits. Many of these websites also employ political spin and engender subtle or not-so-subtle political biases that, in some cases, color the interpretation of data to the left or right.

Undergraduate computer science students at the University of Illinois at Urbana-Champaign manage Election Analytics, a nonpartisan, easy-to-use website for anyone seeking an unbiased interpretation of polling data. Launched in 2008, the site fills voids in the national election forecasting landscape.

Election Analytics lets people see the current state of the election, free of any partisan biases or political innuendos. The methodologies used by Election Analytics include Bayesian statistics, which estimate the posterior distributions of the true proportion of voters that will vote for each candidate in each state, given both the available polling data and the states’ previous election results. Each poll is weighted based on its age and its size, providing a highly dynamic forecasting mechanism as Election Day approaches. Because winning a state translates into winning all the Electoral College votes for that state (with Nebraska and Maine using Congressional districts to allocate their Electoral College votes), winning by one vote or 100,000 votes results in the same outcome in the Electoral College race. Dynamic programming then uses the posterior probabilities to compile a probability mass function for the Electoral College votes. By design, Election Analytics cuts through the media chatter and focuses purely on data.
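
The two ingredients named in that paragraph, per-state posteriors and a dynamic-programming pass over Electoral College votes, are easy to sketch. The toy below is my own illustration with invented states, poll counts, and electoral votes, not the Election Analytics code: each state’s vote share gets a Beta posterior from poll counts, and the resulting win probabilities are convolved into a probability mass function over electoral votes.

# Toy sketch of the described approach: Beta posteriors per state from poll counts,
# then a dynamic-programming convolution into an Electoral College PMF.
# States, poll numbers, and electoral votes are invented for illustration.
import numpy as np
from scipy.stats import beta

# state: (electoral votes, poll respondents for A, poll respondents for B)
states = {
    "State1": (29, 520, 480),
    "State2": (16, 410, 390),
    "State3": (38, 450, 550),
}

# P(candidate A carries the state) = P(true share > 0.5) under a Beta(1+a, 1+b) posterior.
win_prob = {s: 1 - beta.cdf(0.5, 1 + a, 1 + b) for s, (_, a, b) in states.items()}

# Dynamic programming: pmf[v] = probability candidate A wins exactly v electoral votes.
total_ev = sum(ev for ev, _, _ in states.values())
pmf = np.zeros(total_ev + 1)
pmf[0] = 1.0
for s, (ev, _, _) in states.items():
    p = win_prob[s]
    new = np.zeros_like(pmf)
    new[ev:] += pmf[:-ev] * p      # A carries the state and its electoral votes
    new += pmf * (1 - p)           # A loses the state
    pmf = new

majority = total_ev // 2 + 1
print({s: round(p, 3) for s, p in win_prob.items()})
print("P(A wins a majority of electoral votes):", round(pmf[majority:].sum(), 3))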

If you have ever taken a social science methodologies course then you know:

Election Analytics lets people see the current state of the election, free of any partisan biases or political innuendos.

is as false as anything uttered by any of the candidates seeking nomination and/or the office of the U.S. presidency since January 1, 2016.

It’s an annoying conceit when you realize that every poll is biased, however clean the subsequent crunching of the numbers may be.

Bias one step removed isn’t the absence of bias, but the concealment of bias.

September 10, 2016

Weapons of Math Destruction:… [Constructive Knowledge of Discriminatory Impact?]

Filed under: Bias,Mathematics,Modeling — Patrick Durusau @ 8:03 pm

Weapons of Math Destruction: invisible, ubiquitous algorithms are ruining millions of lives by Cory Doctorow.

From the post:

I’ve been writing about the work of Cathy “Mathbabe” O’Neil for years: she’s a radical data-scientist with a Harvard PhD in mathematics, who coined the term “Weapons of Math Destruction” to describe the ways that sloppy statistical modeling is punishing millions of people every day, and in more and more cases, destroying lives. Today, O’Neil brings her argument to print, with a fantastic, plainspoken, call to arms called (what else?) Weapons of Math Destruction.


I’ve followed Cathy’s posts long enough to recommend Weapons of Math Destruction sight unseen. (Publication date September 6, 2016.)

Warning: If you read Weapons of Math Destruction, unlike executives who choose models based on their “gut” or “instinct,” you may be charged with constructive knowledge of how your model discriminates against group X or Y.

If, like a typical Excel user, you can honestly say “I type in the numbers here and the output comes out there,” it’s going to be hard to prove any intent to discriminate.

You are no more responsible for a result than a pump handle is responsible for cholera.

Doctorow’s conclusion:


O’Neil’s book is a vital crash-course in the specialized kind of statistical knowledge we all need to interrogate the systems around us and demand better.

depends upon your definition of “better.”

“Better” depends on your goals or those of a client.

Yes?

PS: It is important to understand models/statistics/data so you can shape results to your definition of “better,” while acknowledging that all results are shaped. The critical question is “What shape do you want?”

June 24, 2016

…possibly biased? Try always biased.

Filed under: Artificial Intelligence,Bias,Machine Learning — Patrick Durusau @ 4:24 pm

Artificial Intelligence Has a ‘Sea of Dudes’ Problem by Jack Clark.

From the post:


Much has been made of the tech industry’s lack of women engineers and executives. But there’s a unique problem with homogeneity in AI. To teach computers about the world, researchers have to gather massive data sets of almost everything. To learn to identify flowers, you need to feed a computer tens of thousands of photos of flowers so that when it sees a photograph of a daffodil in poor light, it can draw on its experience and work out what it’s seeing.

If these data sets aren’t sufficiently broad, then companies can create AIs with biases. Speech recognition software with a data set that only contains people speaking in proper, stilted British English will have a hard time understanding the slang and diction of someone from an inner city in America. If everyone teaching computers to act like humans are men, then the machines will have a view of the world that’s narrow by default and, through the curation of data sets, possibly biased.

“I call it a sea of dudes,” said Margaret Mitchell, a researcher at Microsoft. Mitchell works on computer vision and language problems, and is a founding member—and only female researcher—of Microsoft’s “cognition” group. She estimates she’s worked with around 10 or so women over the past five years, and hundreds of men. “I do absolutely believe that gender has an effect on the types of questions that we ask,” she said. “You’re putting yourself in a position of myopia.”

Margaret Mitchell makes a pragmatic case for diversity in the workplace, at least if you want to avoid male-biased AI.

Not that a diverse workplace results in an “unbiased” AI; it results in a biased AI that isn’t solely male-biased.

It isn't possible to escape bias because some person or persons must score the “correct” answers for an AI. The scoring process imparts to the AI being trained the biases of its judge of correctness.

Unless someone wants to contend there are potential human judges without biases, I don’t see a way around imparting biases to AIs.

By being sensitive to evidence of biases, we can in some cases choose the biases we want an AI to possess, but an AI possessing no biases at all isn't possible.
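To make that concrete, here is a toy sketch, entirely synthetic and not drawn from Clark's article, of how a judge's scoring choices become the model's bias. Two hypothetical labelers score the same examples by different rules; training on their labels (scikit-learn's LogisticRegression is assumed to be available) produces two different models.

# Toy sketch: the same features, labeled by two "judges" with different
# notions of "correct," yield two different trained models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))        # hypothetical applicant features

score = X[:, 0] + 0.5 * X[:, 1]       # latent quality both judges consider

# Judge A scores on the latent quality alone; Judge B penalizes feature 1.
y_a = (score > 0.0).astype(int)
y_b = (score - 1.0 * X[:, 1] > 0.0).astype(int)

model_a = LogisticRegression().fit(X, y_a)
model_b = LogisticRegression().fit(X, y_b)

print("Judge A model coefficients:", model_a.coef_)
print("Judge B model coefficients:", model_b.coef_)
# The coefficients differ because the labels differ: the judge's bias,
# not the learning algorithm, decides what the model treats as "correct."

Same features, same algorithm, same optimizer; only the judge changed.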

AIs are, after all, our creations so it is only fair that they be made in our image, biases and all.

May 23, 2016

Bias? What Bias? We’re Scientific!

Filed under: Bias,Machine Learning,Prediction,Programming — Patrick Durusau @ 8:37 pm

This ProPublica story by Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner isn't short, but it is worth your time not only to read but also to download the data and test their analysis for yourself.

Especially if you have the misimpression that algorithms can avoid bias. Or that clients will apply your analysis with the caution that it deserves.
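If you want to follow their advice and re-run the numbers yourself, a minimal sketch looks like the following. I am assuming ProPublica's compas-scores-two-years.csv file and its published column names (race, decile_score, two_year_recid); adjust if your copy differs, and note that this reproduces only one slice of their analysis, not the whole study.

# Minimal sketch: among defendants who did NOT reoffend within two years,
# how often were they nevertheless scored as higher risk, by race?
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")   # assumed filename

# Treat decile scores of 5 or higher as "higher risk," roughly following
# ProPublica's low vs. medium/high split.
df["high_risk"] = df["decile_score"] >= 5

no_recid = df[df["two_year_recid"] == 0]
rates = no_recid.groupby("race")["high_risk"].mean()
print(rates.sort_values(ascending=False))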

Finding a bias in software, like finding a bug, is a good thing. But that's just one; there is no estimate of how many others may exist.

And as you will find, clients may not remember your careful explanation of the limits to your work. Or apply it in ways you don’t anticipate.

Machine Bias – There’s software used across the country to predict future criminals. And it’s biased against blacks.

Here’s the first story to try to lure you deeper into this study:

ON A SPRING AFTERNOON IN 2014, Brisha Borden was running late to pick up her god-sister from school when she spotted an unlocked kid’s blue Huffy bicycle and a silver Razor scooter. Borden and a friend grabbed the bike and scooter and tried to ride them down the street in the Fort Lauderdale suburb of Coral Springs.

Just as the 18-year-old girls were realizing they were too big for the tiny conveyances — which belonged to a 6-year-old boy — a woman came running after them saying, “That’s my kid’s stuff.” Borden and her friend immediately dropped the bike and scooter and walked away.

But it was too late — a neighbor who witnessed the heist had already called the police. Borden and her friend were arrested and charged with burglary and petty theft for the items, which were valued at a total of $80.

Compare their crime with a similar one: The previous summer, 41-year-old Vernon Prater was picked up for shoplifting $86.35 worth of tools from a nearby Home Depot store.

Prater was the more seasoned criminal. He had already been convicted of armed robbery and attempted armed robbery, for which he served five years in prison, in addition to another armed robbery charge. Borden had a record, too, but it was for misdemeanors committed when she was a juvenile.

Yet something odd happened when Borden and Prater were booked into jail: A computer program spat out a score predicting the likelihood of each committing a future crime. Borden — who is black — was rated a high risk. Prater — who is white — was rated a low risk.

Two years later, we know the computer algorithm got it exactly backward. Borden has not been charged with any new crimes. Prater is serving an eight-year prison term for subsequently breaking into a warehouse and stealing thousands of dollars’ worth of electronics.

This analysis demonstrates that malice isn't required for bias to damage lives. Whether the biases are in the software, in its application, or in the interpretation of its results, the end result is the same: damaged lives.

I don't think bias in software is avoidable, but here, no one was even looking.

What role do you think budget justification/profit making played in that blindness to bias?

March 29, 2016

Bias For Sale: How Much and What Direction Do You Want?

Filed under: Advertising,Bias,Government,Politics,Searching — Patrick Durusau @ 1:50 pm

Epstein and Robertson pitch it a little differently, but that is the bottom line of The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections.

Abstract:

Internet search rankings have a significant impact on consumer choices, mainly because users trust and choose higher-ranked results more than lower-ranked results. Given the apparent power of search rankings, we asked whether they could be manipulated to alter the preferences of undecided voters in democratic elections. Here we report the results of five relevant double-blind, randomized controlled experiments, using a total of 4,556 undecided voters representing diverse demographic characteristics of the voting populations of the United States and India. The fifth experiment is especially notable in that it was conducted with eligible voters throughout India in the midst of India’s 2014 Lok Sabha elections just before the final votes were cast. The results of these experiments demonstrate that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) search ranking bias can be masked so that people show no awareness of the manipulation. We call this type of influence, which might be applicable to a variety of attitudes and beliefs, the search engine manipulation effect. Given that many elections are won by small margins, our results suggest that a search engine company has the power to influence the results of a substantial number of elections with impunity. The impact of such manipulations would be especially large in countries dominated by a single search engine company.
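To put the “20% or more” figure in perspective, here is a back-of-the-envelope calculation with purely hypothetical numbers:

# Hypothetical electorate: how much can a SEME-sized shift among the
# undecided move the final result?
electorate = 1_000_000
undecided_share = 0.10    # assume 10% undecided near election day
shift = 0.20              # the lower bound reported by Epstein and Robertson

votes_moved = electorate * undecided_share * shift
print(f"{votes_moved:,.0f} votes moved, a {votes_moved / electorate:.1%} swing")

A two-point swing exceeds the margin of victory in plenty of real elections, which is the authors' point.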

I’m not surprised by SEME (search engine manipulation effect).

Although I would probably be more neutral and say: Search Engine Impact on Voting.

Whether you consider one result or another as the result of “manipulation” is a matter of perspective. No search engine strives to deliver “false” information to users.

Gary Anthes in Search Engine Agendas, Communications of the ACM, Vol. 59 No. 4, pages 19-21, writes:

In the novel 1984, George Orwell imagines a society in which powerful but hidden forces subtly shape peoples’ perceptions of the truth. By changing words, the emphases put on them, and their presentation, the state is able to alter citizens’ beliefs and behaviors in ways of which they are unaware.

Now imagine today’s Internet search engines did just that kind of thing—that subtle biases in search engine results, introduced deliberately or accidentally, could tip elections unfairly toward one candidate or another, all without the knowledge of voters.

That may seem an unlikely scenario, but recent research suggests it is quite possible. Robert Epstein and Ronald E. Robertson, researchers at the American Institute for Behavioral Research and Technology, conducted experiments that showed the sequence of results from politically oriented search queries can affect how users vote, especially among undecided voters, and biased rankings of search results usually go undetected by users. The outcomes of close elections could result from the deliberate tweaking of search algorithms by search engine companies, and such manipulation would be extremely difficult to detect, the experiments suggest.

Gary’s post is a good supplement to the original article, covering some of the volunteers who are ready to defend the rest of us from biased search results.

Or as I would put it, to inject their biases into search results as opposed to other biases they perceive as being present.

If you are more comfortable describing the search results you want presented as “fair and equitable,” etc., please do so but I prefer the honesty of naming biases as such.


Make your desired bias, direction, etc., a requirement and allow data scientists to get about the business of conveying it.

That is certainly what “ethical” data scientists are doing at Google as they conspire with the US government and others to overthrow governments, play censor to fight “terrorists,” and undertake other questionable activities.

I object to some of Google’s current biases because I would have them be biased in a different direction.

Let’s sell your bias/perspective to users with a close eye on the bright line of the law.

Game?

November 22, 2015

A Challenge to Data Scientists

Filed under: Bias,Data Science — Patrick Durusau @ 1:25 pm

A Challenge to Data Scientists by Renee Teate.

From the post:

As data scientists, we are aware that bias exists in the world. We read up on stories about how cognitive biases can affect decision-making. We know that, for instance, a resume with a white-sounding name will receive a different response than the same resume with a black-sounding name, and that writers of performance reviews use different language to describe contributions by women and men in the workplace. We read stories in the news about ageism in healthcare and racism in mortgage lending.

Data scientists are problem solvers at heart, and we love our data and our algorithms that sometimes seem to work like magic, so we may be inclined to try to solve these problems stemming from human bias by turning the decisions over to machines. Most people seem to believe that machines are less biased and more pure in their decision-making – that the data tells the truth, that the machines won’t discriminate.

Renee's post summarizes a lot of information about bias, inside and outside of data science, and issues this challenge:

Data scientists, I challenge you. I challenge you to figure out how to make the systems you design as fair as possible.

An admirable sentiment but one hard part is defining “…as fair as possible.”

Being professionally trained in a day-to-day “hermeneutic of suspicion,” as opposed to Paul Ricoeur's analysis of texts (Paul Ricoeur and the Hermeneutics of Suspicion: A Brief Overview and Critique by G.D. Robinson), I have yet to encounter a definition of “fair” that does not define winners and losers.

Data science relies on classification, which has as its avowed purpose the separation of items into different categories. Some categories will be treated differently than others. Otherwise there would be no reason to perform the classification.

Another hard part is that employers of data scientists are more likely to say:

Analyze data X for market segments responding to ad campaign Y.

As opposed to:

What do you think about our ads targeting tweens by the use of sexual content for our unhealthy product A?

Or change the questions to fit those asked of data scientists at any government intelligence agency.

The vast majority of data scientists are hired as data scientists, not amateur theologians.

Competence in data science has no demonstrable relationship to competence in ethics, fairness, morality, etc. Data scientists can have opinions about the same but shouldn’t presume to poach on other areas of expertise.

How would you feel if a competent user of spreadsheets decided to label themselves a “data scientist”?

Keep that in mind the next time someone starts to pontificate on “ethics” in data science.

PS: Renee is in the process of creating and assembling high quality resources for anyone interested in data science. Be sure to explore her blog and other links after reading her post.

October 10, 2015

20 Cognitive Biases That Screw Up Your Decisions

Filed under: Bias,Decision Making — Patrick Durusau @ 12:55 pm

Samantha Lee and Shana Lebowitz created an infographic (Business Insider) of common cognitive biases.

Entertaining, informative, but what key insight is missing from this infographic?

[Infographic: 20 cognitive biases that screw up your decisions]

The original at Business Insider is easier to read.

What's missing is the question: Where do I stand to see my own cognitive bias?

If I were already aware of it, I would avoid it in decision making. Yes?

So if I am not aware of it, how do I get outside of myself to spot such a bias?

One possible solution, with the emphasis on possible, is to consult with others who may not share your cognitive biases. They may have other ones, ones that are apparent to you but not to them.

No guarantees on that solution because most people don’t appreciate having their cognitive biases pointed out. Particularly if they are central to their sense of identity and self-worth.

Take the management at the Office of Personnel Management (OPM), who have repeatedly been shown to be incompetent not only in matters of cybersecurity but in management in general.

Among other biases, the Office of Personnel Management suffers from 7. Confirmation bias, 8. Conservatism bias, 10. Ostrich effect, 17. Selective perception, and 20. Zero-risk bias.

The current infestation of incompetents at the Office of Personnel Management is absolutely convinced, judging from their responses to Inspector General reports urging modern project management practices, that no change is necessary.

Personally, I would fire everyone from the elevator operator (I'm sure they probably still have one) to the top and terminate all retirement and health benefits. That would not cure the technology problems at OPM, but it would provide the opportunity for a fresh start at addressing them.

Cognitive biases, self-interest and support of other incompetents, doom reform at the OPM. You may as well wish upon a star.

I first saw this in a tweet by Christophe Lalanne.

August 19, 2015

Non-News: Algorithms Are Biased

Filed under: Algorithms,Bias — Patrick Durusau @ 10:48 am

Programming and prejudice

From the post:

Software may appear to operate without bias because it strictly uses computer code to reach conclusions. That’s why many companies use algorithms to help weed out job applicants when hiring for a new position.

But a team of computer scientists from the University of Utah, University of Arizona and Haverford College in Pennsylvania have discovered a way to find out if an algorithm used for hiring decisions, loan approvals and comparably weighty tasks could be biased like a human being.

The researchers, led by Suresh Venkatasubramanian, an associate professor in the University of Utah’s School of Computing, have discovered a technique to determine if such software programs discriminate unintentionally and violate the legal standards for fair access to employment, housing and other opportunities. The team also has determined a method to fix these potentially troubled algorithms.

Venkatasubramanian presented his findings Aug. 12 at the 21st Association for Computing Machinery’s Conference on Knowledge Discovery and Data Mining in Sydney, Australia.

“There’s a growing industry around doing resume filtering and resume scanning to look for job applicants, so there is definitely interest in this,” says Venkatasubramanian. “If there are structural aspects of the testing process that would discriminate against one community just because of the nature of that community, that is unfair.”

It’s a puff piece and therefore misses that all algorithms are biased, but some algorithms are biased in ways not permitted under current law.

The paper, which this piece avoids citing for some reason, is Certifying and removing disparate impact by Michael Feldman, Sorelle Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian.

The abstract for the paper does a much better job of setting the context for this research:

What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender, religious practice) and an explicit description of the process.

When the process is implemented using computers, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the algorithm, we propose making inferences based on the data the algorithm uses.

We make four contributions to this problem. First, we link the legal notion of disparate impact to a measure of classification accuracy that while known, has received relatively little attention. Second, we propose a test for disparate impact based on analyzing the information leakage of the protected class from the other data attributes. Third, we describe methods by which data might be made unbiased. Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and our approach for both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.
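The paper's own test rests on how well the protected class can be predicted from the other attributes; I won't try to reproduce that here. But the legal screen it starts from, the so-called four-fifths rule, is simple enough to show in a few lines with hypothetical hiring data:

# Four-fifths rule sketch (hypothetical data, not the paper's method):
# compare selection rates between a protected group and a majority group.
group_a = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # majority group, 1 = selected
group_b = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]   # protected group

rate_a = sum(group_a) / len(group_a)        # 0.7
rate_b = sum(group_b) / len(group_b)        # 0.3

ratio = rate_b / rate_a
print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths threshold: possible disparate impact.")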

If you are a bank, you want a loan algorithm to be biased against people with a poor history of paying their debts. The distinction is that payment history is a legitimate basis for discrimination among loan applicants.

The lesson here is that all algorithms are biased; the question is whether the bias is in your favor or not.

Suggestion: Only bet when using your own dice (algorithm).

August 9, 2015

Machine Learning and Human Bias: An Uneasy Pair

Filed under: Bias,Machine Learning — Patrick Durusau @ 10:45 am

Machine Learning and Human Bias: An Uneasy Pair by Jason Baldridge.

From the post:

“We’re watching you.” This was the warning that the Chicago Police Department gave to more than 400 people on its “Heat List.” The list, an attempt to identify the people most likely to commit violent crime in the city, was created with a predictive algorithm that focused on factors including, per the Chicago Tribune, “his or her acquaintances and their arrest histories – and whether any of those associates have been shot in the past.”

Algorithms like this obviously raise some uncomfortable questions. Who is on this list and why? Does it take race, gender, education and other personal factors into account? When the prison population of America is overwhelmingly Black and Latino males, would an algorithm based on relationships disproportionately target young men of color?

There are many reasons why such algorithms are of interest, but the rewards are inseparable from the risks. Humans are biased, and the biases we encode into machines are then scaled and automated. This is not inherently bad (or good), but it raises the question: how do we operate in a world increasingly consumed with “personal analytics” that can predict race, religion, gender, age, sexual orientation, health status and much more.

Jason's post is a refreshing step back from the usual “machine learning isn't biased like people are” sort of stance.

Of course machine learning is biased, always biased. The algorithms are biased themselves, to say nothing of the programmers who inexactly converted those algorithms into code. It would not be much of an algorithm if it could not vary its results based on its inputs. That’s discrimination no matter how you look at it.

The difference is that discrimination is acceptable in some cases and not in others. That only women are eligible for birth control pill prescriptions, for example, is a reasonable discrimination. Other bases for discrimination, not so much.

And machine learning is further biased by the data we choose to input to the already biased implementation of a biased algorithm.

That isn't a knock on machine learning but a caveat: when confronted with a machine learning result, look behind it to the data, the implementation of the algorithm, and the algorithm itself before taking serious action based on the result.

Of course, the first question I would ask is: “Why is this person showing me this result and what do they expect me to do based on it?”

That they are trying to help me on my path to becoming self-actualized isn’t my first reaction.

Yours?

March 16, 2015

Bias? What Bias?

Filed under: Bias,Facebook,Social Media,Social Sciences,Twitter — Patrick Durusau @ 6:09 pm

Scientists Warn About Bias In The Facebook And Twitter Data Used In Millions Of Studies by Brid-Aine Parnell.

From the post:

Social media like Facebook and Twitter are far too biased to be used blindly by social science researchers, two computer scientists have warned.

Writing in today’s issue of Science, Carnegie Mellon’s Juergen Pfeffer and McGill’s Derek Ruths have warned that scientists are treating the wealth of data gathered by social networks as a goldmine of what people are thinking – but frequently they aren’t correcting for inherent biases in the dataset.

If folks didn’t already know that scientists were turning to social media for easy access to the pat statistics on thousands of people, they found out about it when Facebook allowed researchers to adjust users’ news feeds to manipulate their emotions.

Both Facebook and Twitter are such rich sources for heart pounding headlines that I’m shocked, shocked that anyone would suggest there is bias in the data! 😉

Not surprisingly, people participate in social media for reasons entirely of their own and quite unrelated to the interests or needs of researchers. Particular types of social media attract different demographics than other types. I’m not sure how you could “correct” for those biases, unless you wanted to collect better data for yourself.
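The standard partial fix is post-stratification: reweight the sample toward known population shares. A minimal sketch with made-up numbers shows what it does and what it can't do; it corrects for demographic skew in who you happened to sample, not for why those people chose to post in the first place.

# Post-stratification sketch (made-up numbers).
sample = {            # group -> (share of the sample, observed support)
    "18-29": (0.50, 0.70),
    "30-49": (0.35, 0.55),
    "50+":   (0.15, 0.40),
}
population = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}   # e.g. census shares

raw = sum(share * support for share, support in sample.values())
weighted = sum(population[g] * support for g, (_, support) in sample.items())
print(f"raw estimate: {raw:.2f}, reweighted estimate: {weighted:.2f}")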

Not that there are any bias-free data sets, but some biases are so obvious that they hardly warrant mentioning. Except that institutions like the Brookings Institution bump and grind on Twitter data until they can prove the significance of terrorist social media. Brookings knows better, but terrorism is a popular topic.

Not to make data carry all the blame, the test most often applied to data is:

Will this data produce a result that merits more funding and/or will please my supervisor?

I first saw this in a tweet by Persontyle.

January 13, 2014

The myth of the aimless data explorer

Filed under: Bias,Data — Patrick Durusau @ 7:14 pm

The myth of the aimless data explorer by Enrico Bertini.

From the post:

There is a sentence I have heard or read multiple times in my journey into (academic) visualization: visualization is a tool people use when they don’t know what question to ask to their data.

I have always taken this sentence as a given and accepted it as it is. Good, I thought, we have a tool to help people come up with questions when they have no idea what to do with their data. Isn’t that great? It sounded right or at least cool.

But as soon as I started working on more applied projects, with real people, real problems, real data they care about, I discovered this all excitement for data exploration is just not there. People working with data are not excited about “playing” with data, they are excited about solving problems. Real problems. And real problems have questions attached, not just curiosity. There’s simply nothing like undirected data exploration in the real world.

I think Enrico misses the reason why people use/like the phrase: visualization is a tool people use when they don’t know what question to ask to their data.

Visualization privileges the “data” as the source of whatever result is displayed by the visualization.

It’s not me! That’s what the data says!

Hardly. Someone collected the data. Not at random, stuffing whatever bits came along in a bag. Someone cleaned the data with some notion of what “clean” meant. Someone chose the data that is now being called upon for a visualization. And those are clumsy steps that collapse many distinct steps into only three.

To put it another way, data never exists without choices being made. And it is the sum of those choices that influence the visualizations that are even possible from some data set.
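A toy example of how much those choices matter, using hypothetical survey incomes with a couple of missing values:

# Three defensible "cleaning" choices, three different numbers for the chart.
import pandas as pd

incomes = pd.Series([32_000, 41_000, None, 38_000, None, 250_000, 45_000])

drop_missing  = incomes.dropna().mean()
impute_median = incomes.fillna(incomes.median()).mean()
trim_outliers = incomes[incomes < 100_000].mean()

print(f"drop missing:   {drop_missing:,.0f}")
print(f"impute median:  {impute_median:,.0f}")
print(f"trim outliers:  {trim_outliers:,.0f}")

None of the three is dishonest; all three are choices, and the visualization downstream will faithfully present whichever one was made.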

The short term for what Enrico overlooks is bias.

I would recast his title to read: The myth of the objective data explorer.

Having said that, I don’t mean that all bias is bad.

If I were collecting data on Ancient Near Eastern (ANE) languages, I would of necessity be excluding the language traditions of the entire Western Hemisphere. It could even be that data from the native cultures of the Western Hemisphere will be lost while I am preserving data from the ANE.

So we have bias and a bad outcome, from someone’s point of view because of that bias. Was that a bad thing? I would argue not.

It isn't ever possible to collect all the potential data that could be collected. We all make value judgments about the data we choose to collect and what we choose to ignore.

Rather than pretending that we possess objectivity in any meaningful sense, we are better off to state our biases to the extent we know them. At least others will be forewarned that we are just like them.

November 4, 2011

Confidence Bias: Evidence from Crowdsourcing

Filed under: Bias,Confidence Bias,Crowd Sourcing,Interface Research/Design — Patrick Durusau @ 6:10 pm

Confidence Bias: Evidence from Crowdsourcing (Crowdflower)

From the post:

Evidence in experimental psychology suggests that most people overestimate their own ability to complete objective tasks accurately. This phenomenon, often called confidence bias, refers to “a systematic error of judgment made by individuals when they assess the correctness of their responses to questions related to intellectual or perceptual problems.” 1 But does this hold up in crowdsourcing?

We ran an experiment to test for a persistent difference between people’s perceptions of their own accuracy and their actual objective accuracy. We used a set of standardized questions, focusing on the Verbal and Math sections of a common standardized test. For the 829 individuals who answered more than 10 of these questions, we asked for the correct answer as well as an indication of how confident they were of the answer they supplied.

We didn’t use any Gold in this experiment. Instead, we incentivized performance by rewarding those finishing in the top 10%, based on objective accuracy.

I am not sure why crowdsourcing would make a difference on the question of overestimation of ability, but now the answer is in: no. Do read the post for the details; I think you will find it useful when doing user studies.
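If you collect the same two columns in your own studies, a self-rated confidence and an objective score, the overconfidence check is a one-liner. A sketch with made-up numbers:

# Compare mean self-reported confidence against mean objective accuracy.
import numpy as np

confidence = np.array([0.9, 0.8, 0.95, 0.7, 0.85, 0.9])   # self-reported (0-1)
correct    = np.array([1,   0,   1,    0,   0,    1])     # graded answers

gap = confidence.mean() - correct.mean()
print(f"mean confidence {confidence.mean():.2f}, "
      f"mean accuracy {correct.mean():.2f}, overconfidence {gap:+.2f}")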

For example, when you ask a user if some task is too complex as designed, are they likely to overestimate their ability to complete it, either to avoid being embarrassed in front of others or to avoid admitting that they really didn't follow your explanation?

My suspicion is yes, so in addition to simply asking users if they understand particular search or other functions in an interface, you also need to film them using the interface with no help from you (or others).

You will remember in Size Really Does Matter… that Blair and Maron reported that lawyers overestimated their accuracy in document retrieval by 55%. Of course, the question of retrieval is harder to evaluate than those in the Crowdflower experiment, but it is a bias you need to keep in mind.
