Archive for the ‘Ethics’ Category

Who Does Cyber Security Benefit?

Tuesday, October 3rd, 2017

Indoctrinating children to benefit the wealthy starts at a young age: ‘Hackathon’ challenges teens to improve cyber security.

Improving cyber security is taught as an ethical imperative, but without asking who that “imperative” benefits.

Oxfam wrote earlier this year:

Eight men own the same wealth as the 3.6 billion people who make up the poorest half of humanity, according to a new report published by Oxfam today to mark the annual meeting of political and business leaders in Davos.

Oxfam’s report, ‘An economy for the 99 percent’, shows that the gap between rich and poor is far greater than had been feared. It details how big business and the super-rich are fuelling the inequality crisis by dodging taxes, driving down wages and using their power to influence politics. It calls for a fundamental change in the way we manage our economies so that they work for all people, and not just a fortunate few.

New and better data on the distribution of global wealth – particularly in India and China – indicates that the poorest half of the world has less wealth than had been previously thought. Had this new data been available last year, it would have shown that nine billionaires owned the same wealth as the poorest half of the planet, and not 62, as Oxfam calculated at the time.
… From: Just 8 men own same wealth as half the world

It’s easy to see who the cyber security of SWIFT, a “secure financial messaging system,” benefits:

the “[e]ight men [who] own the same wealth as the 3.6 billion people who make up the poorest half of humanity”

more than “…the 3.6 billion people who make up the poorest half of humanity.”

Do you have any doubt about that claim in principle? The exact numbers of inequality don’t interest me as much as the understanding that information systems and their cyber security benefit some people more than others.

Once we establish the principle of differential benefits from cyber security, then we can ask: Who does cyber security X benefit?

To continue with the SWIFT example, I would not volunteer to walk across the street to improve its cyber security. It is an accessory to a predatory financial system that exploits billions. You could be paid to improve its cyber security but tech people at large have no moral obligation to help SWIFT.

If anyone says you have an obligation to improve cyber security, ask: who benefits?

Yes?

Air Gapping USB Sticks For Journalists (Or Not! For Others)

Friday, August 25th, 2017

CIRCLean – USB key sanitizer

Journalists are likely to get USB sticks from unknown and/or untrustworthy sources. CIRCLean copies potentially dangerous files from an untrustworthy USB stick, converts those files to a safe format, and saves them to your trusted USB stick. (Think of it as a way of never sticking a potentially infected USB stick into your computer.)
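
To make the conversion idea concrete, here is a rough Python sketch of the kind of per-file decision CIRCLean makes. This is my illustration, not CIRCLean’s actual code: the mount paths, extension lists and the convert_to_safe() placeholder are assumptions for the example only.

    # Illustrative sketch of the CIRCLean idea, NOT its implementation:
    # walk the untrusted stick, convert risky document formats to an inert
    # form, and mark anything executable or unrecognized as dangerous.
    from pathlib import Path
    import shutil

    SAFE_AS_IS = {".txt", ".jpg", ".png"}           # copied unchanged (illustrative)
    CONVERT    = {".doc", ".docx", ".pdf", ".xls"}  # sent to a safe-format converter
    UNTRUSTED  = Path("/mnt/untrusted")             # hypothetical mount points
    TRUSTED    = Path("/mnt/trusted")

    def convert_to_safe(src: Path, dst_dir: Path) -> None:
        """Placeholder for a real converter (e.g. rendering document pages to images)."""
        shutil.copy(src, dst_dir / (src.name + ".needs-conversion"))

    for src in UNTRUSTED.rglob("*"):
        if not src.is_file():
            continue
        dst_dir = TRUSTED / src.parent.relative_to(UNTRUSTED)
        dst_dir.mkdir(parents=True, exist_ok=True)
        ext = src.suffix.lower()
        if ext in SAFE_AS_IS:
            shutil.copy(src, dst_dir / src.name)
        elif ext in CONVERT:
            convert_to_safe(src, dst_dir)
        else:
            # executables, scripts, unknown types: copy under a warning name,
            # never run or open them directly
            shutil.copy(src, dst_dir / (src.name + ".DANGEROUS"))

The point of the real project is that this sorting happens on a throwaway Raspberry Pi, not on your own machine.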

Written instructions based on those for CIRCLean, without illustrations:

  1. Unplug the device.
  2. Plug the untrusted USB stick into the top USB slot.
  3. Plug your own, trusted USB stick into the bottom USB slot.
     Note: Make sure your trusted USB stick is bigger than the untrusted one; the extracted documents are sometimes bigger than the originals.
  4. Connect the power to the device.
  5. If your device has a diode, wait until the blinking stops.
  6. Otherwise, plug in a headset and listen to the music played during the conversion. When the music stops, the conversion is finished.
  7. Unplug the device and remove the USB sticks.

Label all untrusted USB sticks. “Untrusted” means it has an origin other than you. Unicode U+2620 “skull and crossbones” works, ☠. Or a bit larger:


(Image from http://graphemica.com/)

It’s really that easy!

On The Flip Side

Modifying the CIRCLean source to maintain its present capabilities while adding your malware to the “trusted” USB stick offers a number of exciting possibilities.

Security is all the rage in the banking industry, making a Raspberry Pi (with diode), an attractive case, and your USB malware great banking convention swag.

Listings of banking conferences are maintained by the American Bankers Association, the European Banking Association, and Asian Banking & Finance, to name just a few.

A low-cost alternative to a USB-cleaning/malware-installing Raspberry Pi would be to use infected USB sticks as swag. “Front Office Staff: After Hours” or some similar title. If that sounds sexist, it is, but traps use bait based on their target’s proclivities, not yours.

PS: Ethics/legality:

The ethics of spreading malware to infrastructures based on a “white, cisheteropatriarchal*” point of view, I leave for others to discuss.

The legality of spreading malware depends on who’s doing the spreading and who’s being harmed. Check with legal counsel.

* A phrase I stole from: Women’s Suffrage Leaders Left Out Black Women. A great read.

Foreign Intelligence Gathering Laws (and ethics)

Thursday, August 3rd, 2017

Foreign Intelligence Gathering Laws from the Law Library of the Library of Congress.

From the webpage:

This report offers a review of laws regulating the collection of intelligence in the European Union (EU) and Belgium, France, Germany, Netherlands, Portugal, Romania, Sweden, and the United Kingdom. This report updates a report on the same topic issued in 2014. Because issues of national security are under the jurisdiction of individual EU Member States and are regulated by domestic legislation, individual country surveys provide examples of how the European nations control activities of their intelligence agencies and what restrictions are imposed on information collection. All EU Member States follow EU legislation on personal data protection, which is a part of the common European Union responsibility.

If you are investigating or reporting on breaches of intelligence gathering laws in “the European Union (EU) and Belgium, France, Germany, Netherlands, Portugal, Romania, Sweden, and the United Kingdom,” this will be useful. Otherwise, for the other one hundred and eighty-eight (188), you are SOL.

Other than as a basis for outrage, it’s not clear how useful intelligence gathering laws are in fact. The secrecy of intelligence operations makes practical oversight impossible and, if leaks are to be credited, no known intelligence agency obeys such laws other than accidentally.

Moreover, as the U.S. Senate report on torture demonstrates, even war criminals are protected from prosecution in the name of intelligence gathering.

I take my cue from the CIA’s position, as captured by Bob Dylan (with the Traveling Wilburys) in Tweeter and the Monkey Man:

“It was you to me who taught
In Jersey anything’s legal as long as you don’t get caught.”

Disarming yourself with law or ethics in any encounter with an intelligence agency, which honors neither, means you will lose.

Choose your strategies accordingly.

Ethics, Data Scientists, Google, Wage Discrimination Against Women

Sunday, May 28th, 2017

Accused of underpaying women, Google says it’s too expensive to get wage data by Sam Levin.

From the post:

Google argued that it was too financially burdensome and logistically challenging to compile and hand over salary records that the government has requested, sparking a strong rebuke from the US Department of Labor (DoL), which has accused the Silicon Valley firm of underpaying women.

Google officials testified in federal court on Friday that it would have to spend up to 500 hours of work and $100,000 to comply with investigators’ ongoing demands for wage data that the DoL believes will help explain why the technology corporation appears to be systematically discriminating against women.

Noting Google’s nearly $28bn annual income as one of the most profitable companies in the US, DoL attorney Ian Eliasoph scoffed at the company’s defense, saying, “Google would be able to absorb the cost as easy as a dry kitchen sponge could absorb a single drop of water.”

Disclosure: I assume Google is resisting disclosure because it in fact has a history of engaging in discrimination against women. It may or may not be discriminating this month/year, but if known, the facts will support the government’s claim. The $100,000 alleged cost is chump change to prove such a charge groundless. Resistance signals the charge has merit.

Levin’s post gives me reason to doubt Google will prevail on this issue or on the merits in general. Read it in full.

My question is what of the ethical obligations of data scientists at Google?

Should data scientists inside Google come forward with the requested information?

Should data scientists inside Google stage a work slowdown to protest Google’s resistance?

Exactly what should ethical data scientists do when their employer is the 500 pound gorilla in their field?

Do you think Google executives need a memo from their data scientists cluing them in on the ethical issues here?

Possibly not; this is old-fashioned gender discrimination.

Google’s resistance signals to all of its mid-level managers that gender-based discrimination will be defended.

Does that really qualify as “Don’t be evil”?

Bigoted Use of Stingray Technology vs. Other Ills

Saturday, May 13th, 2017

Racial Disparities in Police ‘Stingray’ Surveillance, Mapped by George Joseph.

From the post:

Louise Goldsberry, a Florida nurse, was washing dishes when she looked outside her window and saw a man pointing a gun at her face. Goldsberry screamed, dropped to the floor, and crawled to her bedroom to get her revolver. A standoff ensued with the gunman—who turned out to be an agent with the U.S. Marshals’ fugitive division.

Goldsberry, who had no connection to a suspect that police were looking for, eventually surrendered and was later released. Police claimed that they raided her apartment because they had a “tip” about the apartment complex. But, according to Slate, the reason the “tip” was so broad was because the police had obtained only the approximate location of the suspect’s phone—using a “Stingray” phone tracker, a little-understood surveillance device that has quietly spread from the world of national security into that of domestic law enforcement.

Goldsberry’s story illustrates a potential harm of Stingrays not often considered: increased police contact for people who get caught in the wide dragnets of these interceptions. To get a sense of the scope of this surveillance, CityLab mapped police data from three major cities across the U.S., and found that this burden is not shared equally.

How not equally?

Baltimore, Maryland.

The map at Joseph’s post is interactive, along with maps for Tallahassee, Florida and Milwaukee, Wisconsin.

I oppose government surveillance overall, but am curious: is Stingray usage a concern only of technology/privacy advocates, or is there a broader base for opposing it?

Consider the following facts gathered by Bill Quigley:

Were you shocked at the disruption in Baltimore? What is more shocking is daily life in Baltimore, a city of 622,000 which is 63 percent African American. Here are ten numbers that tell some of the story.

One. Blacks in Baltimore are more than 5.6 times more likely to be arrested for possession of marijuana than whites even though marijuana use among the races is similar. In fact, Baltimore county has the fifth highest arrest rate for marijuana possessions in the USA.

Two. Over $5.7 million has been paid out by Baltimore since 2011 in over 100 police brutality lawsuits. Victims of severe police brutality were mostly people of color and included a pregnant woman, a 65 year old church deacon, children, and an 87 year old grandmother.

Three. White babies born in Baltimore have six more years of life expectancy than African American babies in the city.

Four. African Americans in Baltimore are eight times more likely to die from complications of HIV/AIDS than whites and twice as likely to die from diabetes related causes as whites.

Five. Unemployment is 8.4 percent city wide. Most estimates place the unemployment in the African American community at double that of the white community. The national rate of unemployment for whites is 4.7 percent, for blacks it is 10.1.

Six. African American babies in Baltimore are nine times more likely to die before age one than white infants in the city.

Seven. There is a twenty year difference in life expectancy between those who live in the most affluent neighborhood in Baltimore versus those who live six miles away in the most impoverished.

Eight. 148,000 people, or 23.8 percent of the people in Baltimore, live below the official poverty level.

Nine. 56.4 percent of Baltimore students graduate from high school. The national rate is about 80 percent.

Ten. 92 percent of marijuana possession arrests in Baltimore were of African Americans, one of the highest racial disparities in the USA.

(The “Shocking” Statistics of Racial Disparity in Baltimore)

Which of those facts would you change before tackling the problem of racially motivated use of Stingray technology?

I see several that I would rate much higher than the vagaries of Stingray surveillance.

You?

Addictive Technology (And the Problem Is?)

Thursday, May 4th, 2017

Tech Companies are Addicting People! But Should They Stop? by Nir Eyal.

From the post:

To understand technology addiction (or any addiction for that matter) you need to understand the Q-tip. Perhaps you’ve never noticed there’s a scary warning on every box of cotton swabs that reads, “CAUTION: Do not enter ear canal…Entering the ear canal could cause injury.” How is it that the one thing most people do with Q-tips is the thing manufacturers explicitly warn them not to do?

“A day doesn’t go by that I don’t see people come in with Q-tip-related injuries,” laments Jennifer Derebery, an inner ear specialist in Los Angeles and the past president of the American Academy of Otolaryngology. “I tell my husband we ought to buy stock in the Q-tips company; it supports my practice.” It’s not just that people do damage to their ears with Q-tips, it’s that they keep doing damage. Some even call it an addiction.

On one online forum, a user asks, “Anyone else addicted to cleaning their ears with Q-tips?…I swear to God if I go more than a week without sticking Q-tips in my ears, I go nuts. It’s just so damn addicting…” Elsewhere, another ear-canal enterer also associates ear swabbing with dependency: “How can I detox from my Q-tips addiction?” The phenomenon is so well known that MADtv based a classic sketch on a daughter having to hide Q-tip use from her parents like a junkie.

Q-tip addiction shares something in common with other, more prevalent addictions like gambling, heroin, and even Facebook use. Understanding what I call, the Q-tip Effect, raises important questions about products we use every day, and the responsibilities their makers have in relation to the welfare of their users.
… (emphasis in original)

It’s a great post on addiction (read the definition), technology, etc., but Nir loses me here:


However, there’s a difference between accepting the unavoidable edge cases among unknown users and knowingly promoting the Q-tip Effect. When it comes to companies that know exactly who’s using, how, and how much, much more can be done. To do the right thing by their customers, companies have an obligation to help when they know someone wants to stop, but can’t. Silicon Valley technology companies are particularly negligent by this ethical measure.

The only basis for this “…obligation to help when they know someone wants to stop, but can’t” appears to be Nir’s personal opinion.

That’s ok and he is certainly entitled to it, but Nir hasn’t offered to pay the cost of meeting his projected ethical obligation.

People enjoy projecting ethical obligations onto others: the anti-abortion, anti-birth-control, and anti-drug crowds, to name a few.

Imposing moral obligations that others pay for is more popular in the U.S. than adultery. I don’t have any hard numbers on that last point. Let’s say imposing moral obligations paid for by others is wildly popular and leave it at that.

If I had a highly addictive (in Nir’s sense) app, I would be using the profits to rent backhoes for anyone who needed one along the DAPL pipeline. No questions asked.

It’s an absolute necessity to raise ethical questions about technology and society in general.

But my first question is always: Who pays the cost of your ethical concern?

If it’s not you, that says a lot to me about your concern.

Non-Fox News journalists: Investigate Bill O’Reilly & Fox News Reporters

Tuesday, April 4th, 2017

Fox News journalists: Don’t stay silent amid Bill O’Reilly controversy by Kyle Pope.

From the post:

WHAT DOES IT TELL US WHEN advertisers get ahead of reporters in matters of newsroom ethics? It tells us something is seriously wrong at Fox News, and it’s time for the real journalists at the network (and beyond) to make themselves heard.

On Tuesday, more companies moved to distance themselves from the network and its host, Bill O’Reilly, in response to a April 1 piece in The New York Times detailing sexual harassment allegations against Fox’s top-rated host and cash cow. The alleged behavior ranges the gamut of smut, from unwanted advances to phone calls in which O’Reilly—he of an $18 million-a-year salary from Rupert Murdoch et al—sounds as if he is masturbating.
… (emphasis in original)

Pope’s call for legitimate journalists at Fox to step forward is understandable, but too little too late.

From campus rape at UT Austin to the conviction of former Penn State President Graham Spanier for failing to report alleged child abuse, it is always the case that somebody knew what was going on and remained silent.

What did the “legitimate journalists” at Fox News know, and when?

Will the journalism community toss O’Reilly to the wolves and give his colleagues a free pass?

That seems like an odd sense of ethics for journalists.

Yes?

Q&A Cathy O’Neil…

Wednesday, January 4th, 2017

Q&A Cathy O’Neil, author of ‘Weapons of Math Destruction,’ on the dark side of big data by Christine Zhang.

From the post:

Cathy O’Neil calls herself a data skeptic. A former hedge fund analyst with a PhD in mathematics from Harvard University, the Occupy Wall Street activist left finance after witnessing the damage wrought by faulty math in the wake of the housing crash.

In her latest book, “Weapons of Math Destruction,” O’Neil warns that the statistical models hailed by big data evangelists as the solution to today’s societal problems, like which teachers to fire or which criminals to give longer prison terms, can codify biases and exacerbate inequalities. “Models are opinions embedded in mathematics,” she writes.

Great interview that hits enough high points to leave you wanting to learn more about Cathy and her analysis.

On that score, try:

Read her mathbabe blog.

Follow @mathbabedotorg.

Read Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.

Try her new business: ORCAA [O’Neil Risk Consulting and Algorithmic Auditing].

From the ORCAA homepage:


ORCAA’s mission is two-fold. First, it is to help companies and organizations that rely on time and cost-saving algorithms to get ahead of this wave, to understand and plan for their litigation and reputation risk, and most importantly to use algorithms fairly.

The second half of ORCAA’s mission is this: to develop rigorous methodology and tools, and to set rigorous standards for the new field of algorithmic auditing.

There are bright-line cases (sentencing, housing, hiring discrimination) where “fair” has a binding legal meaning, and legal liability for not being “fair.”

Outside such areas, the search for “fairness” seems quixotic. Clients are entitled to their definitions of “fair” in those areas.

neveragain.tech [Or at least not any further]

Friday, December 16th, 2016

neveragain.tech [Or at least not any further]

Write a list of things you would never do. Because it is possible that in the next year, you will do them. —Sarah Kendzior [1]

We, the undersigned, are employees of tech organizations and companies based in the United States. We are engineers, designers, business executives, and others whose jobs include managing or processing data about people. We are choosing to stand in solidarity with Muslim Americans, immigrants, and all people whose lives and livelihoods are threatened by the incoming administration’s proposed data collection policies. We refuse to build a database of people based on their Constitutionally-protected religious beliefs. We refuse to facilitate mass deportations of people the government believes to be undesirable.

We have educated ourselves on the history of threats like these, and on the roles that technology and technologists played in carrying them out. We see how IBM collaborated to digitize and streamline the Holocaust, contributing to the deaths of six million Jews and millions of others. We recall the internment of Japanese Americans during the Second World War. We recognize that mass deportations precipitated the very atrocity the word genocide was created to describe: the murder of 1.5 million Armenians in Turkey. We acknowledge that genocides are not merely a relic of the distant past—among others, Tutsi Rwandans and Bosnian Muslims have been victims in our lifetimes.

Today we stand together to say: not on our watch, and never again.

I signed up, but FYI, the databases we are pledging not to build already exist.

The US Census Bureau collects information on race, religion and national origin.

The Statistical Abstract of the United States: 2012 (131st Edition) Section 1. Population confirms the Census Bureau has this data:

Population tables are grouped by category as follows:

  • Ancestry, Language Spoken At Home
  • Elderly, Racial And Hispanic Origin Population Profiles
  • Estimates And Projections By Age, Sex, Race/Ethnicity
  • Estimates And Projections–States, Metropolitan Areas, Cities
  • Households, Families, Group Quarters
  • Marital status And Living Arrangements
  • Migration
  • National Estimates And Projections
  • Native And Foreign-Born Populations
  • Religion

To be fair, the privacy principles of the Census Bureau state:

Respectful Treatment of Respondents: Are our efforts reasonable and did we treat you with respect?

  • We promise to ensure that any collection of sensitive information from children and other sensitive populations does not violate federal protections for research participants and is done only when it benefits the public good.

Disclosure: I like the US Census Bureau. Left to their own devices, I don’t have any reasonable fear of their misusing the data in question.

But that’s the question, isn’t it? Will the US Census Bureau be left to its own policies and traditions?

I view the various “proposed data collection policies” of the incoming administration as intentional distractions. While everyone is focused on Trump’s Theater of the Absurd, appointments and policies at the US Census Bureau may achieve the same ends.

Sign the pledge yes, but use FOIA requests, personal contacts with Census staff, etc., to keep track of the use of dangerous data at the Census Bureau and elsewhere.


Instructions for adding your name to the pledge are found at: https://github.com/neveragaindottech/neveragaindottech.github.io/.

Assume Census Bureau staff are committed to their privacy and appropriate use policies. A friendly approach will be far more productive than a confrontational or suspicious one. Let’s work with them to maintain their agency’s long history of data security.

Programming has Ethical Consequences?

Friday, November 25th, 2016

Has anyone tracked down the blinding flash that programming has ethical consequences?

Programmers are charged to point out ethical dimensions and issues not noticed by muggles.

This may come as a surprise but programmers in the broader sense have been aware of ethical dimensions to programming for decades.

Perhaps the best-known example of a road-to-Damascus-type event is the Trinity atomic bomb test in New Mexico, with Oppenheimer recalling a line from the Bhagavad Gita:

“Now I am become Death, the destroyer of worlds.”

To say nothing of the programmers who labored for years to guarantee worldwide delivery of nuclear warheads in 30 minutes or less.

But it isn’t necessary to invoke a nuclear Armageddon to find ethical issues that have faced programmers prior to the current ethics frenzy.

Any guesses as to how red line maps were created?

Do you think “red line” maps just sprang up on their own? Or was someone collecting, collating and analyzing the data, much as we would do now but more slowly?

Every act of collecting, collating and analyzing data, now with computers, can and probably does have ethical dimensions and issues.

Programmers can and should raise ethical issues, especially when they may be obscured or clouded by programming techniques or practices.

However, programmers announcing ethical issues to their less fortunate colleagues isn’t likely to lead to a fruitful discussion.

None/Some/All … Are Suicide Bombers & Probabilistic Programming Languages

Tuesday, November 8th, 2016

The Design and Implementation of Probabilistic Programming Languages by Noah D. Goodman and Andreas Stuhlmüller.

Abstract:

Probabilistic programming languages (PPLs) unify techniques for the formal description of computation and for the representation and use of uncertain knowledge. PPLs have seen recent interest from the artificial intelligence, programming languages, cognitive science, and natural languages communities. This book explains how to implement PPLs by lightweight embedding into a host language. We illustrate this by designing and implementing WebPPL, a small PPL embedded in Javascript. We show how to implement several algorithms for universal probabilistic inference, including priority-based enumeration with caching, particle filtering, and Markov chain Monte Carlo. We use program transformations to expose the information required by these algorithms, including continuations and stack addresses. We illustrate these ideas with examples drawn from semantic parsing, natural language pragmatics, and procedural graphics.

If you want to sharpen the discussion of probabilistic programming languages, substitute into the pragmatics example:

‘none/some/all of the children are suicide bombers’.

The substitution raises the issue of how “certainty” can/should vary depending upon the gravity of results.

“Who is a nice person?” has low stakes.

“Who is a suicide bomber?” has high stakes.
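
To make the stakes point concrete, here is a minimal decision-theoretic sketch in Python. It is my illustration, not an example from Goodman and Stuhlmüller’s book, and the cost numbers are invented for the example: the probability you should demand before acting rises with the cost of acting wrongly.

    # Act only if the probability of being right is high enough that acting
    # has lower expected cost than not acting. Costs are illustrative.
    def act_threshold(cost_false_positive, cost_false_negative):
        """Probability above which acting beats not acting in expected cost."""
        return cost_false_positive / (cost_false_positive + cost_false_negative)

    # Low stakes: "Who is a nice person?" -- being wrong is cheap either way.
    print(act_threshold(cost_false_positive=1, cost_false_negative=1))     # 0.5

    # High stakes: "Who is a suicide bomber?" -- a false accusation is costly.
    print(act_threshold(cost_false_positive=1000, cost_false_negative=1))  # ~0.999

A probabilistic program can report the same posterior probability in both cases; it is the stakes that should move the threshold for acting on it.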

Deep-Fried Data […money laundering for bias…]

Tuesday, October 4th, 2016

Deep-Fried Data by Maciej Ceglowski. (paper) (video of same presentation) Part of Collections as Data event at the Library of Congress.

If the “…money laundering for bias…” quote doesn’t capture your attention, try:


I find it helpful to think of algorithms as a dim-witted but extremely industrious graduate student, whom you don’t fully trust. You want a concordance made? An index? You want them to go through ten million photos and find every picture of a horse? Perfect.

You want them to draw conclusions on gender based on word use patterns? Or infer social relationships from census data? Now you need some adult supervision in the room.

Besides these issues of bias, there’s also an opportunity cost in committing to computational tools. What irks me about the love affair with algorithms is that they remove a lot of the potential for surprise and serendipity that you get by working with people.

If you go searching for patterns in the data, you’ll find patterns in the data. Whoop-de-doo. But anything fresh and distinctive in your digital collections will not make it through the deep frier.

We’ve seen entire fields disappear down the numerical rabbit hole before. Economics came first, sociology and political science are still trying to get out, bioinformatics is down there somewhere and hasn’t been heard from in a while.

A great read and equally enjoyable presentation.

Enjoy!

Moral Machine [Research Design Failure]

Tuesday, October 4th, 2016

Moral Machine

From the webpage:

Welcome to the Moral Machine! A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars.

We show you moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you judge which outcome you think is more acceptable. You can then see how your responses compare with those of other people.

If you’re feeling creative, you can also design your own scenarios, for you and others to browse, share, and discuss.

The first time I recall hearing this type of discussion was over thirty years ago, when a friend taking an ethics class related the following problem:

You are driving a troop transport with twenty soldiers in the back and are about to enter a one-lane bridge. You see a baby sitting in the middle of the bridge. Do you swerve, going down an embankment and killing all on board, or do you go straight?

A lively college classroom discussion erupted and continued for the entire class. Various theories and justifications were offered, etc. When the class bell rang, the professor announced that the child had perished 59 minutes and 59 seconds earlier.

As you may guess, not a single person in the class called out “Swerve” when the question was posed.

The exercise was to illustrate that many “moral” decisions are made at the limits of human reaction time, typically between 150 and 300 milliseconds. (Speedy Science: How Fast Can You React? is a great activity from Scientific American for testing your reaction time.)
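
If you want a rough sense of that window from the command line, here is a small Python sketch of a terminal reaction-time test. It is illustrative only; input buffering and keyboard latency add overhead, so treat the number as approximate.

    # Crude reaction-time test: press Enter the instant the GO! prompt appears.
    import random
    import time

    def reaction_test():
        print("Press Enter as soon as you see GO!")
        time.sleep(random.uniform(2, 5))          # unpredictable delay
        start = time.perf_counter()
        input("GO! ")
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"Your reaction time: {elapsed_ms:.0f} ms")

    if __name__ == "__main__":
        reaction_test()

Compare your result to the 150 to 300 millisecond window within which the “swerve or not” decision actually happens.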

The examples in MIT’s Moral Machine perpetuate the myth that moral decisions are the result of reflection and consideration of multiple factors.

Considered moral decisions do exist. Dietrich Bonhoeffer deciding to participate in a conspiracy to assassinate Adolf Hitler. Lyndon Johnson supporting civil rights in the South. But those are not the subject of the “Moral Machine.”

Nor is the “Moral Machine” even a useful simulation of what a driven and/or driverless car would confront. Visibility isn’t an issue as it often is; there are no distractions, no smart phones ringing, no conflicting input from passengers, etc.

In short, the “Moral Machine” creates a fictional choice, about which to solicit your “moral” advice, under conditions you will never experience.

Separating pedestrians from vehicles (once suggested by Buckminster Fuller, I think) is a far more useful exercise than college-level discussion questions.

Are You A Moral Manipulator?

Thursday, September 29th, 2016

I appreciated Nir’s reminder about the #1 rule for drug dealers.

If you don’t know it, the video is only a little over six minutes long.

Enjoy!

Ethics for Powerful Algorithms

Sunday, August 28th, 2016

Ethics for Powerful Algorithms by Abe Gong.

Abe’s four questions:

  1. Are the statistics solid?
  2. Who wins? Who loses?
  3. Are those changes to power structures healthy?
  4. How can we mitigate harms?

Reminds me of my favorite scene from Labyrinth:

Transcript:

Sarah: That’s not fair!
Jareth: You say that so often, I wonder what your basis for comparison is?

Isn’t the question of “fairness” one for your client?

DATNAV: …Navigate and Integrate Digital Data in Human Rights Research [Ethics]

Wednesday, August 24th, 2016

DATNAV: New Guide to Navigate and Integrate Digital Data in Human Rights Research by Zara Rahman.

From the introduction in the Guide:

From online videos of rights violations, to satellite images of environmental degradation, to eyewitness accounts disseminated on social media, we have access to more relevant data today than ever before. When used responsibly, this data can help human rights professionals in the courtroom, when working with governments and journalists, and in documenting historical record.

Acquiring, disseminating and storing digital data is also becoming increasingly affordable. As costs continue to decrease and new platforms are developed, opportunities for harnessing these data sources for human rights work increase.

But integrating data collection and management into the day to day work of human rights research and documentation can be challenging, even overwhelming, for individuals and organisations. This guide is designed to help you navigate and integrate new data forms into your human rights work.

It is the result of a collaboration between Amnesty International, Benetech, and The Engine Room that began in late 2015. We conducted a series of interviews, community consultations, and surveys to understand whether digital data was being integrated into human rights work. In the vast majority of cases, we found that it wasn’t. Why?

Mainly, human rights researchers appeared to be overwhelmed by the possibilities. In the face of limited resources, not knowing how to get started or whether it would be worthwhile, most people we spoke to refrained from even attempting to strengthen their work with digital data.

To support everyone in the human rights field in navigating this complex environment, we convened a group of 16 researchers and technical experts in a castle outside Berlin, Germany in May 2016 to draft this guide over four days of intense reflection and writing.

There are additional reading resources at: https://engn.it/datnav.

The issue of ethics comes up quickly in human rights research and here the authors write:

Seven things to consider before using digital data for human rights

  1. Would digital data genuinely help answer your research questions? What are the pros and cons of the particular source or medium? What might you learn from past uses of similar technology?
  2. What sources are likely to be collecting or capturing the kinds of information you need? What is the context in which it is being produced and used? Will the people or organisations on which your work is focused be receptive to these types of data?
  3. How easily will new forms of data integrate into your existing workflow? Do you realistically have the time and money to collect, store, analyze and especially to verify this data? Can anyone on your team comfortably support the technology?
  4. Who owns or controls the data you will be using? Companies, government, or adversaries? How difficult is it to get? Is it a fair or legal collection method? What is the internal stance on this? Do you have true informed consent from individuals?
  5. How will digital divides and differences in local access to online platforms, computers or phones, affect representation of different populations? Would conclusions based on the data reinforce inequalities, stereotypes or blind spots?
  6. Are organisational protocols for confidentiality and security in digital communication and data handling sufficiently robust to deal with risks to you, your partners and sources? Are security tools and processes updated frequently enough?
  7. Do you have safeguards in place to prevent and deal with any secondary trauma from viewing digital content that you or your partners may experience at personal and organisational levels?

(Page 15)

Before I reveal my #0 consideration, consider the following story as setting the background.

At a death penalty seminar (the death penalty being itself a human rights violation), a practitioner reported a case where the prosecuting attorney said a particular murder case was a question of “good versus evil.” In the course of preparing for that case, it was discovered that while teaching a course for paralegals, the prosecuting attorney had had a sexual affair with one of his students. Affidavits were obtained, etc., and a motion was filed in the pending criminal case entitled: Motion To Define Good and Evil.

There was a mix of opinions on whether blindsiding the prosecuting attorney with his personal failings, with the fallout for his family, was a legitimate approach.

My question was: Did they consider asking the prosecuting attorney to take the death penalty off the table, in exchange for not filing the Motion To Define Good and Evil? A question of effective use of the information and not about the legitimacy of using it.

For human rights violations, my #0 Question would be:

0. Can the information be used to stop and/or redress human rights violations without harming known human rights victims?

The other seven questions, like “…all deliberate speed…,” are a game played by non-victims.

The Ethics of Data Analytics

Sunday, August 21st, 2016

The Ethics of Data Analytics by Kaiser Fung.

Twenty-one slides on ethics by Kaiser Fung, author of: Junk Charts (data visualization blog), and Big Data, Plainly Spoken (comments on media use of statistics).

Fung challenges you to reach your own ethical decisions and acknowledges there are a number of guides to such decision making.

Unfortunately, Fung does not include professional responsibility requirements, such as the now outdated Canon 7 of the ABA Model Code of Professional Responsibility:

A Lawyer Should Represent a Client Zealously Within the Bounds of the Law

That canon has a much storied history, which is capably summarized in Whatever Happened To ‘Zealous Advocacy’? by Paul C. Sanders.

In what became known as Queen Caroline’s Case, the House of Lords sought to dissolve the marriage of King George IV to Queen Caroline on the grounds of her adultery, effectively removing her as queen of England.

Queen Caroline was represented by Lord Brougham, who had evidence of a secret prior marriage by King George IV to a Catholic (which was illegal), Mrs Fitzherbert.

Brougham’s speech is worth your reading in full but the portion most often cited for zealous defense reads as follows:


I once before took leave to remind your lordships — which was unnecessary, but there are many whom it may be needful to remind — that an advocate, by the sacred duty of his connection with his client, knows, in the discharge of that office, but one person in the world, that client and none other. To save that client by all expedient means — to protect that client at all hazards and costs to all others, and among others to himself — is the highest and most unquestioned of his duties; and he must not regard the alarm, the suffering, the torment, the destruction, which he may bring upon any other; nay, separating even the duties of a patriot from those of an advocate, he must go on reckless of the consequences, if his fate it should unhappily be, to involve his country in confusion for his client.

The name Mrs. Fitzherbert never slips Lord Brougham’s lips, but the House of Lords has been warned that might not remain the case, should it choose to proceed. The House of Lords did grant the divorce but didn’t enforce it. A saving fact, one supposes. Queen Caroline died less than a month after the coronation of George IV.

For data analysis, cybersecurity, or any of the other topics I touch on in this blog, I take the last line of Lord Brougham’s speech:

To save that client by all expedient means — to protect that client at all hazards and costs to all others, and among others to himself — is the highest and most unquestioned of his duties; and he must not regard the alarm, the suffering, the torment, the destruction, which he may bring upon any other; nay, separating even the duties of a patriot from those of an advocate, he must go on reckless of the consequences, if his fate it should unhappily be, to involve his country in confusion for his client.

as the height of professionalism.

Post-engagement, of course.

If ethics are your concern, have that discussion with your prospective client before you are hired.

Otherwise, clients have goals and the task of a professional is how to achieve them. Nothing more.

Who Decides On Data Access?

Sunday, July 31st, 2016

In a Twitter dust-up following The Privileged Cry: Boo, Hoo, Hoo Over Release of OnionScan Data the claim was made by [Λ•]ltSciFi@altscifi_ that:

@SarahJamieLewis You take an ethical stance. @patrickDurusau does not. Note his regression to a childish tone. Also: schneier.com/blog/archives/…

To which I responded:

@altscifi_ @SarahJamieLewis Interesting. Questioning genuflection to privilege is a “childish tone?” Is name calling the best you can do?

Which earned this response from [Λ•]ltSciFi@altscifi_:

@patrickDurusau @SarahJamieLewis Not interested in wasting time arguing with you. Your version of “genuflection” doesn’t merit the effort.

Anything beyond name calling is too much effort for [Λ•]ltSciFi@altscifi_. Rather than admit they haven’t thought about the issue of the ethics of data access beyond “me too!,” it saves face to say discussion is a waste of time.

I have never denied that access to data can raise ethical issues or that such issues merit discussion.

What I do object to is that in such discussions, it has been my experience (important qualifier) that those urging an ethics of data access have someone in mind to decide on data access. Almost invariably, themselves.

Take the recent “weaponized transparency” rhetoric of the Sunlight Foundation as an example. We can argue about the ethics of particular aspects of the DNC data leak, but the fact remains that the Sunlight Foundation considers itself, and not you, as the appropriate arbiter of access to an unfiltered version of that data.

I assume the Sunlight Foundation would include as appropriate arbiters many of the usual news organizations that accept leaked documents and reveal to the public only so much as they choose to reveal.

Not to pick on the Sunlight Foundation: there is an alphabet soup of U.S. government agencies that make similar claims about what should or should not be revealed to the public. I have no more sympathy for their claims of a right to limit data access than for those of more public-minded organizations.

Take the data dump of OnionScan data for example. Sarah Jamie Lewis may choose to help sites for victims of abuse (a good thing in my opinion) whereas others of us may choose to fingerprint and out government spy agencies (some may see that as a bad thing).

The point being that the OnionScan data dump enables more people to make those “ethical” choices and to not be preempted because data such as the OnionScan data should not be widely available.

BTW, in a later tweet Sarah Jamie Lewis says:

In which I am called privileged for creating an open source tool & expressing concerns about public deanonymization.

Missing the issue entirely, as she was quoted as expressing concerns over the OnionScan data dump. Public deanonymization is a legitimate concern, so long as we all get to decide those concerns for ourselves. Lewis is trying to swap her weak claim over the data dump for the stronger one over public deanonymization.

Unlike most of the discussants you will find, I don’t want to decide on what data you can or cannot see.

Why would I? I can’t foresee all uses and/or what data you might combine it with. Or with what intent?

If you consider the history of data censorship by governments, we haven’t done terribly well in our choices of censors or in the results of their censorship.

Let’s allow people to exercise their own sense of ethics. We could hardly do worse than we have so far.

Are Non-AI Decisions “Open to Inspection?”

Thursday, June 16th, 2016

Ethics in designing AI Algorithms — part 1 by Michael Greenwood.

From the post:

As our civilization becomes more and more reliant upon computers and other intelligent devices, there arises specific moral issue that designers and programmers will inevitably be forced to address. Among these concerns is trust. Can we trust that the AI we create will do what it was designed to without any bias? There’s also the issue of incorruptibility. Can the AI be fooled into doing something unethical? Can it be programmed to commit illegal or immoral acts? Transparency comes to mind as well. Will the motives of the programmer or the AI be clear? Or will there be ambiguity in the interactions between humans and AI? The list of questions could go on and on.

Imagine if the government uses a machine-learning algorithm to recommend applications for student loan approvals. A rejected student and or parent could file a lawsuit alleging that the algorithm was designed with racial bias against some student applicants. The defense could be that this couldn’t be possible since it was intentionally designed so that it wouldn’t have knowledge of the race of the person applying for the student loan. This could be the reason for making a system like this in the first place — to assure that ethnicity will not be a factor as it could be with a human approving the applications. But suppose some racial profiling was proven in this case.

If directed evolution produced the AI algorithm, then it may be impossible to understand why, or even how. Maybe the AI algorithm uses the physical address data of candidates as one of the criteria in making decisions. Maybe they were born in or at some time lived in poverty‐stricken regions, and that in fact, a majority of those applicants who fit these criteria happened to be minorities. We wouldn’t be able to find out any of this if we didn’t have some way to audit the systems we are designing. It will become critical for us to design AI algorithms that are not just robust and scalable, but also easily open to inspection.

While I can appreciate the desire to make AI algorithms that are “…easily open to inspection…,” I feel compelled to point out that human decision making has resisted such openness for thousands of years.

There are the tales we tell each other about “rational” decision making, but those aren’t how decisions are made; rather, they are how we justify, to ourselves and others, decisions already made. Not exactly the same thing.

Recall the parole-granting behavior of Israeli judges, which depended upon the proximity to their last meal. Certainly all of those judges would argue for their “rational” decisions, but meal time was a better predictor than any other. (Extraneous factors in judicial decisions)

My point being that if we struggle to even articulate the actual basis for non-AI decisions, where is our model for making AI decisions “open to inspection?” What would that look like?
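
For a sense of what even a minimal form of inspection might look like, here is a hedged Python sketch (my illustration, not anything from Greenwood’s post): compare a model’s approval rates across groups and flag large gaps, the familiar four-fifths rule of thumb. The records and the threshold are made up for the example; a real audit needs real decision logs and legal guidance, and it inspects outcomes, not the model’s reasons.

    # Minimal outcome audit: approval rate by group, flagged if a group's rate
    # falls below 80% of the best-treated group's rate. Data is illustrative.
    from collections import defaultdict

    decisions = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]

    counts = defaultdict(lambda: {"approved": 0, "total": 0})
    for d in decisions:
        counts[d["group"]]["total"] += 1
        counts[d["group"]]["approved"] += d["approved"]

    rates = {g: c["approved"] / c["total"] for g, c in counts.items()}
    best = max(rates.values())
    for group, rate in sorted(rates.items()):
        flag = "FLAG" if rate / best < 0.8 else "ok"
        print(f"group {group}: approval rate {rate:.2f} ({flag})")

Even this much tells you what the system did and to whom; it says nothing about why, which is exactly where the human record is no better.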

You could say, for example, no discrimination based on race. OK, but that’s not going to work if you want to purposely set up scholarships for minority students.

When you object, “…that’s not what I meant! You know what I mean!…,” well, I might, but try convincing an AI that has no social context of what you “meant.”

The openness of AI decisions to inspection is an important issue but the human record in that regard isn’t encouraging.

Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy

Thursday, April 7th, 2016

Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil.


From the description at Amazon:

We live in the age of the algorithm. Increasingly, the decisions that affect our lives—where we go to school, whether we get a car loan, how much we pay for health insurance—are being made not by humans, but by mathematical models. In theory, this should lead to greater fairness: Everyone is judged according to the same rules, and bias is eliminated. But as Cathy O’Neil reveals in this shocking book, the opposite is true. The models being used today are opaque, unregulated, and uncontestable, even when they’re wrong. Most troubling, they reinforce discrimination: If a poor student can’t get a loan because a lending model deems him too risky (by virtue of his race or neighborhood), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues. Models are propping up the lucky and punishing the downtrodden, creating a “toxic cocktail for democracy.” Welcome to the dark side of Big Data.

Tracing the arc of a person’s life, from college to retirement, O’Neil exposes the black box models that shape our future, both as individuals and as a society. Models that score teachers and students, sort resumes, grant (or deny) loans, evaluate workers, target voters, set parole, and monitor our health—all have pernicious feedback loops. They don’t simply describe reality, as proponents claim, they change reality, by expanding or limiting the opportunities people have. O’Neil calls on modelers to take more responsibility for how their algorithms are being used. But in the end, it’s up to us to become more savvy about the models that govern our lives. This important book empowers us to ask the tough questions, uncover the truth, and demand change.

Even if you have qualms about Cathy’s position, you have to admit that is a great book cover!

When I was in law school, I had F. Hodge O’Neal for corporation law. He is the O’Neal in O’Neal and Thompson’s Oppression of Minority Shareholders and LLC Members, Rev. 2d.

The publisher’s blurb is rather generous in saying:

Cited extensively, O’Neal and Thompson’s Oppression of Minority Shareholders and LLC Members shows how to take appropriate steps to protect minority shareholder interests using remedies, tactics, and maneuvers sanctioned by federal law. It clarifies the underlying cause of squeeze-outs and suggests proven arrangements for avoiding them.

You could read Oppression of Minority Shareholders and LLC Members that way but when corporate law is taught with war stories from the antics of the robber barons forward, you get the impression that isn’t why people read it.

Not that I doubt Cathy’s sincerity, on the contrary, I think she is very sincere about her warnings.

Where I disagree with Cathy is in thinking democracy is under greater attack now or that inequality is any greater problem than before.

If you read The Half Has Never Been Told: Slavery and the Making of American Capitalism by Edward E. Baptist carefully, you will leave it with deep uncertainty about the relationship of American government (federal, state and local) to any recognizable concept of democracy. Or for that matter to the “equality” of its citizens.

Unlike Cathy as well, I don’t expect that shaming people is going to result in “better” or more “honest” data analysis.

What you can do is arm yourself to do battle on behalf of your “side,” both in terms of exposing data manipulation by others and concealing your own.

Perhaps there is room in the marketplace for a book titled: Suppression of Unfavorable Data. More than hiding data, what data to not collect? How to explain non-collection/loss? How to collect data in the least useful ways?

You would have to write it as a guide to avoiding these very bad practices, but everyone would know what you meant. Could be the next business management best seller.

A social newsgathering ethics code from ONA

Saturday, April 2nd, 2016

Common ground: A social newsgathering ethics code from ONA by Eric Carvin.

From the post:

Today, we’re introducing the ONA Social Newsgathering Ethics Code, a set of best practices that cover everything from verification to rights issues to the health and safety of sources — and of journalists themselves.

We’re launching the code with support from a number of news organizations, including the BBC, CNN, AFP, Storyful and reported.ly. You can see the complete list of supporters at the end of the code.

We’re constantly reminded of the need for best practices such as these. The recent bombings in Brussels, Ankara, Lahore and Yemen, among others, provided yet another stark and tragic reminder of how information and imagery spread, in a matter of moments, from the scene of an unexpected news event to screens around the world.

Moments like these challenge us, as journalists, to tell a fast-moving story in a way that’s informative, detailed and accurate. These days, a big part of that job involves wading through a roiling sea of digital content and making sense out of what we surface.

There is one tenet of this ethics code that should be applied in all cases, not just user-generated content:

Being transparent with the audience about the verification status of UGC.

If you applied that principle to stories based on statements from the FBI, they would read:

Unconfirmed reports from the FBI say….

Yes?

How would you confirm a report from the FBI?

Ask another FBI person to repeat what was said by the first one?

Obtain the FBI sources and cross-check with those sources the report of the FBI?

If not the second one, why not?

Cost? Time? Convenience?

Which of those results in your parroting reports from the FBI most often?

Is that an ethical issue or is the truthfulness of the FBI assumed, all evidence to the contrary notwithstanding?

“Ethical” Botmakers Censor Offensive Content

Saturday, March 26th, 2016

There are almost 500,000 “hits” for “tay ai” in one popular search engine today.

Against that background, I ran into: How to Make a Bot That Isn’t Racist by Sarah Jeong.

From the post:

…I talked to some creators of Twitter bots about @TayandYou, and the consensus was that Microsoft had fallen far below the baseline of ethical botmaking.

“The makers of @TayandYou absolutely 10000 percent should have known better,” thricedotted, a veteran Twitter botmaker and natural language processing researcher, told me via email. “It seems like the makers of @TayandYou attempted to account for a few specific mishaps, but sorely underestimated the vast potential for people to be assholes on the internet.”

Thricedotted and others belong to an established community of botmakers on Twitter that have been creating and experimenting for years. There’s a Bot Summit. There’s a hashtag (#botALLY).

As I spoke to each botmaker, it became increasingly clear that the community at large was tied together by crisscrossing lines of influence. There is a well-known body of talks, essays, and blog posts that form a common ethical code. The botmakers have even created open source blacklists of slurs that have become Step 0 in keeping their bots in line.
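
As a rough sketch of what such a “Step 0” filter might look like (my illustration, not code from any of the botmakers Jeong interviewed), a bot can simply refuse to train on or emit text containing blacklisted terms. The placeholder terms and the whole-word matching are assumptions; real lists are far larger and handle obfuscated spellings and context.

    # Minimal "Step 0" content filter for a bot (illustrative only).
    import re

    BLACKLIST = {"badterm1", "badterm2"}   # placeholder terms, not a real list

    def is_blocked(text: str) -> bool:
        """True if the text contains any blacklisted term as a whole word."""
        words = re.findall(r"[\w']+", text.lower())
        return any(word in BLACKLIST for word in words)

    def learn_or_reply(text: str) -> None:
        if is_blocked(text):
            return        # drop it: never train on or echo blocked content
        # ... otherwise pass the text to the bot's model or reply queue ...

    print(is_blocked("this tweet contains badterm1"))   # True
    print(is_blocked("perfectly ordinary text"))        # False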

Not researching prior art is as bad as not Reading The Fine Manual (RTFM) before posting help queries to heavy traffic developer forums.

Thricedotted claims a prior obligation of TayandYou’s creators to block offensive content:

For thricedotted, TayandYou failed from the start. “You absolutely do NOT let an algorithm mindlessly devour a whole bunch of data that you haven’t vetted even a little bit,” they said. “It blows my mind, because surely they’ve been working on this for a while, surely they’ve been working with Twitter data, surely they knew this shit existed. And yet they put in absolutely no safeguards against it?!” (emphasis in original)

No doubt Microsoft wishes, in hindsight, that it had blocked offensive content, but I don’t see a general ethical obligation to block or censor offensive content.

For example:

  • A bot that follows the public and private accounts of elected officials and re-tweets only those posts that contain racial slurs, with @news-organization handles added to the tweets.
  • A bot that matches FEC (Federal Election Commission) donation records to Twitter accounts and re-tweets racist/offensive tweets along with the campaign donation identifiers and the candidate in question.
  • A bot that follows accounts known for racist/offensive tweets in order to build publicly accessible archives of those tweets, preventing the sanitizing of tweet archives in the future (as happened with TayandYou).

Any of those strike you as “unethical?”

I wish the Georgia legislature and the U.S. Congress would openly use racist and offensive language.

They act in racist and offensive ways so they should be openly racist and offensive. Makes it easier to whip up effective opposition against known racists, etc.

Which is, of course, why they self-censor to not use racist language.

The world is full of offensive people and we should make them own their statements.

Creating a false, sanitized view that doesn’t offend some n+1 sensitivities is just that, a false view of the world.

If you are looking for an ethical issue, creating views of the world that help conceal racism, sexism, etc., is a better starting place than offensive ephemera.

Wrestling With Inclusion at LambdaConf [Why Wrestle? Just Do It.]

Friday, March 25th, 2016

Wrestling With Inclusion at LambdaConf by John A De Goes.

From the post:

Last year, StrangeLoop rescinded an invitation to a speaker because of the controversy that erupted (nay, exploded) when his talk was announced.

The controversy had nothing to do with the talk, which by all accounts was a great fit for the eclectic topics served up every year by the conference. Rather, the controversy surrounded the speaker’s political views, which were penned under a pseudonym years prior.

I learned about all this quite recently, and for a very unexpected reason: the same speaker submitted a talk to LambdaConf.

The gender- and person-anonymized talk was endorsed by the review committee, and made it all the way onto the schedule board before a volunteer brought the issue to our attention.

My immediate reaction could be described as a combination of shock and horror. No conference organizer ever wants to face a controversial hailstorm like this!

Far, far too long to read, unless you are interested in an example of public justification taken to its extreme.

Not that I disagree with the decision to include the speaker.

I do disagree that any speaker should be singled out for the sort of vetting that is described in John’s post.

All speakers should be accorded the presumption that they will obey local laws, not attempt to physically harm other conference attendees, and follow any code of conduct for the conference.

Absent evidence to the contrary, that is: reports, confirmed by news accounts and/or police records, of attacks at prior conferences or violations of other conferences’ codes of conduct.

If a speaker endangers other attendees and/or violates conference rules of conduct, then don’t allow them to return. But don’t mimic the worst aspects of the developing police state in the United States and attempt to anticipate someone violating a norm of conduct.

Anticipatory regulation of possible future conduct is unfair to the person in question.

Not to mention being a distraction from advancing the subject of your conference.

As John’s post so ably demonstrates.

Imagine the useful articles, posts, code that could have been written with all that effort and strain.

Absent documented prior arrests for violence against other attendees, violations of rules of conduct, or declared intentions to do either, be inclusive.

What more need be said?

PS: Some people will disagree with that position, but they can occupy their own digital space and time with unanswered comments and diatribes. The right to speak does not imply an obligation to listen.

Hiring Ethics: Current Skills versus 10 Years Out of Date – Your Call

Sunday, March 20th, 2016

Cyber-security ethics: the ex-hacker employment conundrum by Davey Winder.

From the post:

Secure Trading, a payments and cyber-security group, has announced that it has appointed Mustafa Al Bassam as a security advisor on the company’s technology and services, including a new blockchain research project. Al Bassam, however, is perhaps better known as Tflow, a former core member of the LulzSec hacker group.

According to Wikipedia, Tflow played an integral part in the Anonymous operation that hacked the HBGaryFederal servers in 2011, and leaked more than 70,000 private emails.

As director of a team that includes ethical hackers, Trustwave’s Lawrence Munro says he would “never knowingly hire someone with a criminal record, especially if their record included breaches of the Computer Misuse Act.” Munro reckons such a thing would be a red flag for him, and while it “may seem draconian to omit individuals who are open about their past brushes with the law” it’s simply not worth the risk when there are white hats available.

The most common figure I remember is that the black hats are ahead by about a decade in the cybersecurity race.

There’s the ethical dilemma: you can hire white hats who are up to ten years out of date, or you can hire cutting-edge black hat talent.

Hired “yeses” about your security, or the security of your clients, don’t change anyone else’s ability to hack those systems.

Are you going to hire “yes” talent or the best talent?

Ethical Wednesdays:…Eyewitness Footage

Tuesday, March 8th, 2016

Ethical Wednesdays: Additional Resources on Ethics and Eyewitness Footage by Madeleine Bair.

From the post:

For the past three months, WITNESS has been sharing work from our new Ethical Guidelines for Using Eyewitness Videos for Human Rights and Advocacy. We wrap up our blog series by sharing a few of the resources that provided us with valuable expertise and perspectives in our work to develop guidelines (the full series can be found here).

Not all of the resources below are aimed at human rights documentation, and not all specifically address eyewitness footage. But the challenge ensuring that new forms of information gathering and data management are implemented safely and ethically affects many industries, and the following guidance from the fields of crisis response, journalism, and advocacy is relevant to our own work using eyewitness footage for human rights. (For a full list of the resources we referred to in our Ethical Guidelines, download the guide for a complete list in the appendix.)

ICRC’s Professional Standards for Protection Work Carried out by Humanitarian and human rights actors in armed conflict and other situations of violence – The 2nd Edition of the International Committee of the Red Cross’s manual includes new chapters developed to address the ethics of new technologies used to collect information and manage data. While not specific to video footage, its chapter on Managing Sensitive Protection Information provides a relevant discussion on assessing informed consent for information found online. “It is often very difficult or even impossible to identify the original source of the information found on the Internet and to ascertain whether the information obtained has been collected fairly/lawfully with the informed consent of the persons to whom this data relates. In other words, personal data accessible on the Internet is not always there as a result of a conscious choice of the individuals concerned to share information in the public domain.”

Quite a remarkable series of posts and additional resources.

There are a number of nuances to the ethics of eyewitness footage that caught me unawares.

My prior experience was shaped around having a client; other than my client, all else was acceptable collateral damage.

That isn’t the approach taken in these posts, so you will have to decide which approach, or some mixture of the two, works for you.

I agree it is unethical to cause needless suffering, but if you have a smoking gun, you should be prepared to use it to maximum effectiveness.

Fearing Cyber-Terrorism (Ethical Data Science Anyone?)

Wednesday, March 2nd, 2016

Discussions of the ethics of data science are replete with examples such as not discriminating against individuals based on race (a crime in some contexts), not violating privacy expectations, etc.

What I have not seen, perhaps from poor searching on my part, are discussions of the ethical obligation of data scientists to persuade would-be clients that their fears are baseless and/or to refuse to participate in projects based on fear mongering.

Here’s a recent example of the type of fear mongering I have in mind:

Cyberterrorism Is the Next ‘Big Threat,’ Says Former CIA Chief


The cyberwar could get much hotter soon, in the estimation of former CIA counter-intelligence director Barry Royden, a 40-year intel veteran, who told Business Insider the threat of cyberterrorism is pervasive, evasive, and so damned invasive that, sooner or later, someone will give into temptation, pull the trigger, and unleash chaos.

Ooooh, chaos. That sounds serious, except that it is the product of paranoid fantasy and a desire to game the appropriations process.

Consider that in 2004, Gabriel Weimann of the United States Institute of Peace debunked cyberterrorism in Cyberterrorism: How Real Is the Threat?

Fast forward eight years and you find Peter W. Singer (Brookings) writing in The Cyber Terror Bogeyman:

We have let our fears obscure how terrorists really use the Internet.

About 31,300. That is roughly the number of magazine and journal articles written so far that discuss the phenomenon of cyber terrorism.

Zero. That is the number of people who have been hurt or killed by cyber terrorism at the time this went to press.

In many ways, cyber terrorism is like the Discovery Channel’s “Shark Week,” when we obsess about shark attacks despite the fact that you are roughly 15,000 times more likely to be hurt or killed in an accident involving a toilet. But by looking at how terror groups actually use the Internet, rather than fixating on nightmare scenarios, we can properly prioritize and focus our efforts. (emphasis in original)

That’s a data point, isn’t it?

The quantity of zero. Yes?

In terms of a data science narrative, the claim that:

…we obsess about shark attacks despite the fact that you are roughly 15,000 times more likely to be hurt or killed in an accident involving a toilet.

is particularly impressive. Does anyone have a data set of the ways people have been injured or killed in accidents involving a toilet?

The ethical obligation of data scientists comes into focus when:

The Military Cyber Spending reserved by the Pentagon for cyber operations next year is $5 Billion, part of the comprehensive $496 billion fiscal 2015 budget

What are the ethics of taking $millions for work that you know is unnecessary and perhaps even useless?

Do you humor the client, and in the case of government, loot the public till?

Does it make a difference (ethically speaking) that someone else will take the money if you don’t?

Any examples of data scientists not taking on work based on the false threat of cyber-terrorism?

PS: Just in case anyone brings up the Islamic State, the bogeyman of the month of late, point them to: ISIS’s Cyber Caliphate hacks the wrong Google. The current cyber abilities of the Islamic State make them more of a danger to themselves than anyone else. (That’s a factual observation and not an attempt to provide “material support or resources” to the Islamic State.)

Manhandled

Friday, February 12th, 2016

Manhandled by Robert C. Martin.

From the post:

Warning: Possible sexual abuse triggers.

One of my regular bike-riding podcasts is Astronomy Cast, by Dr. Pamela Gay and Fraser Cain. Indeed, if you go to cleancodeproject.com you’ll see that Astronomy Cast is one of the charities on my list of favorites. Make a contribution and I will send you a green Clean Code wristband, or coffee cup, or sweatshirt. If you listen to Astronomy Cast you’ll also find that I am a sponsor.

This podcast is always about science; and the science content is quite good. It’s techie. It’s geeky. It’s right up my alley. I’ve listened to almost every one of the 399 episodes. If you like science — especially science about space and astronomy, this is a great resource.

But yesterday was different. Yesterday was episode 399; and it was not about science at all. It was entitled: Women in Science; and it was about — sexual harassment.

Not the big kind that gets reported. Not the notorious kind that gets people fired. Not that kind — though there’s enough of that to go around. No, this was about commonplace, everyday, normal sexual harassment.

Honestly, I didn’t know there was such a thing. I’ve always thought that sexual harassment was anomalous behavior perpetrated by a few disgusting, arrogant men in positions of power. It never occurred to me that sexual harassment was an everyday, commonplace, run-of-the-mill, what-else-is-new occurrence. But I listened, aghast, as I heard Dr. Gay recount tales of it. Tales of the kind of sexual harassment that women in Science regularly encounter; and have simply come to expect as a normal fact of life.

You need to read Bob’s post in full, but note in particular his concluding advice:

  • You never lay your hands on someone with sexual intent without their explicit permission. It does not matter how drunk you are. It does not matter how drunk they are. You never, ever manhandle someone without their very explicit consent. And if they work for you, or if you have power over them, then you must never make the advance, and must never accept the consent.
  • What’s more: if you see harassment in progress, or even something you suspect is harassment, you intervene! You stop it! Even if it means you’ll lose a friend, or your job, you stop it!

Bob makes those points as a matter of “professionalism” for programmers, but being considerate of others is part and parcel of being a decent human being.

The Ethical Data Scientist

Thursday, February 4th, 2016

The Ethical Data Scientist by Cathy O’Neil.

From the post:

….
After the financial crisis, there was a short-lived moment of opportunity to accept responsibility for mistakes with the financial community. One of the more promising pushes in this direction was when quant and writer Emanuel Derman and his colleague Paul Wilmott wrote the Modeler’s Hippocratic Oath, which nicely sums up the list of responsibilities any modeler should be aware of upon taking on the job title.

The ethical data scientist would strive to improve the world, not repeat it. That would mean deploying tools to explicitly construct fair processes. As long as our world is not perfect, and as long as data is being collected on that world, we will not be building models that are improvements on our past unless we specifically set out to do so.

At the very least it would require us to build an auditing system for algorithms. This would be not unlike the modern sociological experiment in which job applications sent to various workplaces differ only by the race of the applicant—are black job seekers unfairly turned away? That same kind of experiment can be done directly to algorithms; see the work of Latanya Sweeney, who ran experiments to look into possible racist Google ad results. It can even be done transparently and repeatedly, and in this way the algorithm itself can be tested.

The ethics around algorithms is a topic that lives only partly in a technical realm, of course. A data scientist doesn’t have to be an expert on the social impact of algorithms; instead, she should see herself as a facilitator of ethical conversations and a translator of the resulting ethical decisions into formal code. In other words, she wouldn’t make all the ethical choices herself, but rather raise the questions with a larger and hopefully receptive group.
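
The paired-input audit the post describes is simple enough to sketch. This is a toy illustration only, not Sweeney’s methodology or O’Neil’s proposal: the model, field names, and decisions are hypothetical stand-ins for whatever algorithm is being audited:

from collections import Counter

def audit_paired_inputs(model, applications, attribute="race", values=("black", "white")):
    """Score otherwise-identical applications that differ only in one protected
    attribute and count how often the model's decision flips between the pair."""
    flips = Counter()
    for app in applications:
        a = model({**app, attribute: values[0]})
        b = model({**app, attribute: values[1]})
        if a != b:
            flips[(a, b)] += 1
    return flips

# Hypothetical usage against whatever scoring model is under audit:
# flips = audit_paired_inputs(hiring_model, sampled_applications)
# print(flips)   # e.g. Counter({('reject', 'interview'): 37})

Because the audit needs only the model’s inputs and outputs, it can be run transparently and repeatedly, which is the point the post makes.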

First, the link for the Modeler’s Hippocratic Oath takes you to a splash page at Wiley for Derman’s book: My Life as a Quant: Reflections on Physics and Finance.

The Financial Modelers’ Manifesto (PDF) and The Financial Modelers’ Manifesto (HTML), are valid links as of today.

I commend the entire text of The Financial Modelers’ Manifesto to you for repeated reading but for present purposes, let’s look at the Modelers’ Hippocratic Oath:

~ I will remember that I didn’t make the world, and it doesn’t satisfy my equations.

~ Though I will use models boldly to estimate value, I will not be overly impressed by mathematics.

~ I will never sacrifice reality for elegance without explaining why I have done so.

~ Nor will I give the people who use my model false comfort about its accuracy. Instead, I will make explicit its assumptions and oversights.

~ I understand that my work may have enormous effects on society and the economy, many of them beyond my comprehension

It may just be me but I don’t see a charge being laid on data scientists to be the ethical voices in organizations using data science.

Do you see that charge?

To put it more positively, aren’t other members of the organization, accountants, engineers, lawyers, managers, etc., all equally responsible for spurring “ethical conversations”? Why is this a peculiar responsibility for data scientists?

I take a legal ethics view of the employer – employee/consultant relationship. The client is the ultimate arbiter of the goal and means of a project, once advised of their options.

Their choice may or may not be mine but I haven’t ever been hired to play the role of Jiminy Cricket.

It’s heady stuff to be responsible for bringing ethical insights to the clueless, but sometimes the clueless have ethical insights of their own, or not.

Data scientists can and should raise ethical concerns but no more or less than any other member of a project.

As you can tell from reading this blog, I have very strong opinions on a wide variety of subjects. That said, unless a client hires me to promote those opinions, the goals of the client, by any legal means, are my only concern.

PS: Before you ask, no, I would not work for Donald Trump. But that’s not an ethical decision. That’s simply being a good citizen of the world.

When back doors backfire [Uncorrected Tweet From Economist Hits 1.1K Retweets]

Sunday, January 3rd, 2016

When back doors backfire

From the post:

[Image: encryption-economist]

Push back against back doors

Calls for the mandatory inclusion of back doors should therefore be resisted. Their potential use by criminals weakens overall internet security, on which billions of people rely for banking and payments. Their existence also undermines confidence in technology companies and makes it hard for Western governments to criticise authoritarian regimes for interfering with the internet. And their imposition would be futile in any case: high-powered encryption software, with no back doors, is available free online to anyone who wants it.

Rather than weakening everyone’s encryption by exploiting back doors, spies should use other means. The attacks in Paris in November succeeded not because terrorists used computer wizardry, but because information about their activities was not shared. When necessary, the NSA and other agencies can usually worm their way into suspects’ computers or phones. That is harder and slower than using a universal back door—but it is safer for everyone else.
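
The last point, that strong encryption with no back doors is freely available to anyone who wants it, is easy to demonstrate. A minimal sketch using the open source Python cryptography package and its Fernet recipe for authenticated symmetric encryption (the message text is, of course, made up):

# pip install cryptography   (free, open source, no back door required)
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # 256 bits of key material, URL-safe base64 encoded
box = Fernet(key)

token = box.encrypt(b"meet at the usual place")   # authenticated ciphertext
print(token)

plaintext = box.decrypt(token)     # raises InvalidToken if the ciphertext was tampered with
print(plaintext)                   # b'meet at the usual place'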

By my count, on two (2) tweets from The Economist, they are running at 50% correspondence between their tweets and actual content.

You may remember my checking their tweet about immigrants yesterday, which got 304 retweets (and was wrong), in Fail at The Economist Gets 304 Retweets!

Today I saw the When back doors backfire tweet and I followed the link to the post to see if it corresponded to the tweet.

Has anyone else been checking on tweet/story correspondence at The Economist (zine)? The twitter account is: @TheEconomist.

I ask because no correcting tweet has appeared in @TheEconomist tweet feed. I know because I just looked at all of its tweets in chronological order.

Here is the uncorrected tweet:

[Image: econ-imm-tweet]

As of today, the uncorrected tweet on immigrants has 1.1K retweets and 707 likes.

From the Economist article on immigrants:

Refugee resettlement is the least likely route for potential terrorists, says Kathleen Newland at the Migration Policy Institute, a think-tank. Of the 745,000 refugees resettled since September 11th, only two Iraqis in Kentucky have been arrested on terrorist charges, for aiding al-Qaeda in Iraq.

Do retweets and likes matter more than factual accuracy, even as reported in the tweeted article?

Is this a journalism ethics question?

What’s the standard journalism position on retweet-bait tweets?

Neural Networks, Recognizing Friendlies, $Billions; Friendlies as Enemies, $Priceless

Thursday, December 24th, 2015

Elon Musk merits many kudos for the recent SpaceX success.

At the same time, Elon has been nominated for Luddite of the Year, along with Bill Gates and Stephen Hawking, for fanning fears of artificial intelligence.

One favorite target for such fears are autonomous weapons systems. Hannah Junkerman annotated a list of 18 posts, articles and books on such systems for Just Security.

While moralists are wringing their hands, military forces have not let grass grow under their feet with regard to autonomous weapon systems. As Michael Carl Haas reports in Autonomous Weapon Systems: The Military’s Smartest Toys?:

Military forces that rely on armed robots to select and destroy certain types of targets without human intervention are no longer the stuff of science fiction. In fact, swarming anti-ship missiles that acquire and attack targets based on pre-launch input, but without any direct human involvement—such as the Soviet Union’s P-700 Granit—have been in service for decades. Offensive weapons that have been described as acting autonomously—such as the UK’s Brimstone anti-tank missile and Norway’s Joint Strike Missile—are also being fielded by the armed forces of Western nations. And while governments deny that they are working on armed platforms that will apply force without direct human oversight, sophisticated strike systems that incorporate significant features of autonomy are, in fact, being developed in several countries.

In the United States, the X-47B unmanned combat air system (UCAS) has been a definite step in this direction, even though the Navy is dodging the issue of autonomous deep strike for the time being. The UK’s Taranis is now said to be “merely” semi-autonomous, while the nEUROn developed by France, Greece, Italy, Spain, Sweden and Switzerland is explicitly designed to demonstrate an autonomous air-to-ground capability, as appears to be case with Russia’s MiG Skat. While little is known about China’s Sharp Sword, it is unlikely to be far behind its competitors in conceptual terms.

The reasoning of military planners in favor of autonomous weapons systems isn’t hard to find, especially when one article describes air-to-air combat between tactically autonomous, machine-piloted aircraft and human-piloted aircraft this way:


This article claims that a tactically autonomous, machine-piloted aircraft whose design capitalizes on John Boyd’s observe, orient, decide, act (OODA) loop and energy-maneuverability constructs will bring new and unmatched lethality to air-to-air combat. It submits that the machine’s combined advantages applied to the nature of the tasks would make the idea of human-inhabited platforms that challenge it resemble the mismatch depicted in The Charge of the Light Brigade.

Here’s the author’s mock-up of the sixth-generation approach:

[Image: fighter-six-generation]

(Select the image to see an undistorted view of both aircraft.)

Given the strides being made on the use of neural networks, I would be surprised if they are not at the core of present and future autonomous weapons systems.

You can join the debate about the ethics of autonomous weapons but the more practical approach is to read How to trick a neural network into thinking a panda is a vulture by Julia Evans.
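
The core trick in Julia’s post is to nudge an input in the direction that most increases the model’s loss. Here is a minimal, NumPy-only sketch of a fast-gradient-sign style perturbation against a toy, hand-built logistic classifier; it is not the image-classifier attack from her post, and every weight and input below is made up for illustration:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy, already-trained linear "friend vs. foe" classifier: score = sigmoid(w.x + b)
w = np.array([1.5, -2.0, 0.7, 0.3])
b = -0.1

x = np.array([0.9, 0.1, 0.8, 0.5])    # an input the model scores as "friend"
print(sigmoid(w @ x + b))              # ~0.85, i.e. confidently "friend"

# For a linear model the gradient of the loss w.r.t. the input is proportional to w,
# so the fast-gradient-sign step for a "friend" example reduces to x - epsilon * sign(w).
epsilon = 0.4
x_adv = x - epsilon * np.sign(w)       # small, bounded nudge to every feature
print(sigmoid(w @ x_adv + b))          # ~0.49, i.e. now "foe", though x barely moved

Real attacks do the same thing against deep networks by backpropagating the loss to the input pixels.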

Autonomous weapon systems will be developed by a limited handful of major military powers, at least at first, which means counter-measures, such as turning those weapons against their masters, will command a premium price, far more than offensive development will. Not to mention that the market for counter-measures will be far larger.

Deception, one means of turning weapons against their users, has a long history, and by no means the earliest example is the tale of Jacob and Esau (Genesis, chapter 27):

11 And Jacob said to Rebekah his mother, Behold, Esau my brother is a hairy man, and I am a smooth man:

12 My father peradventure will feel me, and I shall seem to him as a deceiver; and I shall bring a curse upon me, and not a blessing.

13 And his mother said unto him, Upon me be thy curse, my son: only obey my voice, and go fetch me them.

14 And he went, and fetched, and brought them to his mother: and his mother made savoury meat, such as his father loved.

15 And Rebekah took goodly raiment of her eldest son Esau, which were with her in the house, and put them upon Jacob her younger son:

16 And she put the skins of the kids of the goats upon his hands, and upon the smooth of his neck:

17 And she gave the savoury meat and the bread, which she had prepared, into the hand of her son Jacob.

Julia’s post doesn’t cover the hard case of seeing Jacob as Esau up close, but in a battlefield environment the equivalent of mistaking a panda for a vulture may be good enough.

The primary distinction any autonomous weapons system must make is the friendly/enemy distinction. The term “friendly fire” was coined to cover cases where human-directed weapons systems fail to make that distinction correctly.

The historical rate of “friendly fire,” or fratricide, is 2%, but Mark Thompson reports in The Curse of Friendly Fire that the actual fratricide rate in the 1991 Gulf War was 24%.

#Juniper, just to name one recent federal government software failure, is evidence that robustness isn’t an enforced requirement for government software.

Apply that lack of requirements to neural networks in autonomous weapons platforms and you have the potential for both developing and defeating autonomous weapons systems.

Julia’s post leaves you a long way from defeating an autonomous weapons platform but it is a good starting place.

PS: Defeating military grade neural networks will be good training for defeating more sophisticated ones used by commercial entities.