Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

October 4, 2019

Avoided Ethics Guidelines

Filed under: Ethics,Facebook,Google+,Government — Patrick Durusau @ 10:46 am

Ethical guidelines issued by engineers’ organization fail to gain traction by Nicolas Kayser-Bril.

The world’s largest professional association of engineers released its ethical guidelines for automated systems last March. A review by AlgorithmWatch shows that Facebook and Google have yet to acknowledge them.

In early 2016, the Institute of Electrical and Electronics Engineers, a professional association known as IEEE, launched a “global initiative to advance ethics in technology.” After almost three years of work and multiple rounds of exchange with experts on the topic, it released last April the first edition of Ethically Aligned Design, a 300-page treatise on the ethics of automated systems.

If you want to intentionally ignore these guidelines as well, they are at: Ethics in Action.

Understanding that “ethics” are defined within, and are supportive of, a system, and considering the racist, misogynistic, homophobic, transphobic, capitalist exploitation economy of today, I find discussions of “ethics” quixotic.

Governments and corporations have no “ethics” even within the present system, and following ethics based on what the system should be only disarms you in the presence of implacable enemies. The non-responses by Google and Facebook are fair warning: be “ethical” in your relationships with them only with due regard for the police lurking nearby.

May I suggest you find a sharper stick than “you’re unethical” when taking on governments, corporations and systems. They shrug that sort of comment off like water off a duck’s back. Look around: new and sharper sticks are being invented every day.

November 15, 2018

Before You Make a Thing [Technology and Society]

Filed under: Computer Science,Ethics,Politics — Patrick Durusau @ 10:55 am

Before You Make a Thing: some tips for approaching technology and society by Jentery Sayers.

From the webpage:

This is a guide for Technology and Society 200 (Fall 2018; 60 undergraduate students) at the University of Victoria. It consists of three point-form lists. The first is a series of theories and concepts drawn from assigned readings, the second is a rundown of practices corresponding with projects we studied, and the third itemizes prototyping techniques conducted in the course. All are intended to distill material from the term and communicate its relevance to project design and development. Some contradiction is inevitable. Thank you for your patience.

An extraordinary summary of the Prototyping Pasts + Futures class, whose description reads:

An offering in the Technology and Society minor at UVic, this course is about the entanglement of Western technologies with society and culture. We’ll examine some histories of these entanglements, discuss their effects today, and also speculate about their trajectories. One important question will persist throughout the term: How can and should we intervene in technologies as practices? Rather than treating technologies as tools we use or objects we examine from the outside, we’ll prototype with and through them as modes of inquiry. You’ll turn patents into 3-D forms, compose and implement use scenarios, “datify” old tech, and imagine a device you want to see in the world. You’ll document your research and development process along the way, reflect on what you learned, present your prototypes and findings, and also build a vocabulary of keywords for technology and society. I will not assume that you’re familiar with fields such as science and technology studies, media studies, critical design, or experimental art, and the prototyping exercises will rely on low-tech approaches. Technical competency required: know how to send an email.

Deeply impressive summary of the “Theories and Concepts,” “Practices,” and “Prototyping Techniques” from Prototyping Pasts + Futures.

Whether you want your technology to have a benign impact or are looking to put a fine edge on it, this is the resource for you!

Not to mention learning a great deal that will help you better communicate to clients the probable outcomes of their requests.

Looking forward to spending some serious time with these materials.

Enjoy!

August 22, 2018

Politics of Code [If a question is not about power…, you didn’t understand the question.]

Filed under: Ethics,Programming,sexism — Patrick Durusau @ 9:04 pm

Politics of Code by Prof. Jacob Gaboury.

From the syllabus:

This course begins with the twin propositions that all technology is inherently political, and that digital technologies have come to define our contemporary media landscape. Software, hardware, and code shape the practices and discourses of our digital culture, such that in order to understand the present we must take seriously the politics of the digital. Beginning with an overview of cybernetics, information theory, systems theory, and distributed communications networks, the course will primarily focus on the politics and theory of the past twenty years, from the utopian discourses of the early web to the rise of immaterial labor economies and the quantification and management of subjects and populations. The course will be structured around close readings of specific technologies such as distributed networks, programming languages, and digital software platforms in an effort to ground critical theory with digital practice. Our ultimate goal will be to identify a political theory of the present age – one that takes seriously the role of computation and digitization.

If you don’t already have a reading program for the Fall of 2018, give this syllabus and its reading list serious consideration!

If time and interest permit, consider my suggestion: “If a question is not about power…, you didn’t understand the question.”

Uncovering who benefits from answers won’t get you any closer to a neutral decision making process but you can be more honest about the side you have chosen and why.

August 2, 2018

Archives for the Dark Web: A Field Guide for Study

Filed under: Archives,Dark Web,Ethics,Journalism,Tor — Patrick Durusau @ 4:48 pm

Archives for the Dark Web: A Field Guide for Study by Robert A. Gehl.

Abstract:

This chapter provides a field guide for other digital humanists who want to study the Dark Web. In order to focus the chapter, I emphasize my belief that, in order to study the cultures of Dark Web sites and users, the digital humanist must engage with these systems’ technical infrastructures. I will provide specific reasons why I believe that understanding the technical details of Freenet, Tor, and I2P will benefit any researchers who study these systems, even if they focus on end users, aesthetics, or Dark Web cultures. To this end, I offer a catalog of archives and resources researchers could draw on and a discussion of why researchers should build their own archives. I conclude with some remarks about ethics of Dark Web research.

A highly recommended read, but it falls short on practical archiving advice for beginning researchers and journalists.

Digital resources, Dark Web or no, can be ephemeral. Archiving produces the only reliable and persistent record of resources as you encountered them.

I am untroubled by Gehl’s concern for research ethics. Research ethics can disarm and distract scholars in the face of amoral enemies. Governments and their contractors, to name only two such enemies, exhibit no ethical code other than self-advantage.

Those who harm innocents rely on my non-contractual ethics at their own peril.

April 29, 2018

Processing “Non-Hot Mike” Data (Audio Processing for Data Scientists)

Filed under: Ethics,Politics,Privacy,Speech Recognition — Patrick Durusau @ 6:32 pm

A “hot mike” is one that is transmitting your comments, whether you know the mike is activated or not.

For example, a “hot mike” in 2017 caught this jewel:

Israeli Prime Minister Benjamin Netanyahu called the European Union “crazy” at a private meeting with the leaders of four Central European countries, unaware that a microphone was transmitting his comments to reporters outside.

“The EU is the only association of countries in the world that conditions the relations with Israel, that produces technology and every area, on political conditions. The only ones! Nobody does it. It’s crazy. It’s actually crazy. There is no logic here,” Netanyahu said Wednesday in widely reported remarks.

Netanyahu was meeting with the leaders of Hungary, Slovakia, Czech Republic and Poland, known as the Visegrad Group.

The microphone was switched off after about 15 minutes, according to reports.

A common aspect of “hot mike” comments is that the speaker knew the microphone was present but assumed it was turned off. In “hot mike” cases, the speaker is known and the relevance of their comments is usually obvious.

But what about “non-hot mike” comments? That is, comments made by a speaker with no microphone in sight?

Say casual conversation in a restaurant, at a party, in a taxi, at home or work, or anywhere in between?

Laws governing the interception of conversations are vast and complex, so before processing any conversation data you suspect to be intercepted, seek legal counsel. This post assumes you have been properly cautioned and have chosen to proceed with processing conversation data.

Royal Jain, in Intro to audio processing world for a Data scientist, begins a series of posts to help bridge the gap between NLP and speech/audio processing. Jain writes:

Coming from NLP background I had difficulties in understanding the concepts of speech/audio processing even though a lot of underlying science and concepts were the same. This blog series is an attempt to make the transition easier for people having similar difficulties. The First part of this series describes the feature space which is used by most machine learning/deep learning models.

Looking forward to more posts in this series!
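As a concrete taste of the feature space Jain describes, here is a minimal sketch (my own example, not from Jain’s post) that computes MFCC features from a synthetic signal. It assumes the numpy and librosa packages are installed:

```python
import numpy as np
import librosa  # assumed available; not part of Jain's post

# One second of a 440 Hz tone at a 16 kHz sampling rate stands in for real speech.
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 440.0 * t)

# Mel-frequency cepstral coefficients (MFCCs): one fixed-length feature vector
# per short-time frame, roughly the audio analogue of token embeddings in NLP.
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)

print(mfcc.shape)  # (13, n_frames): 13 coefficients for each analysis frame
```

Downstream models then consume the sequence of frame vectors much as NLP models consume sequences of word vectors, which is the bridge Jain’s series sets out to explain.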

Data science ethics advocates will quickly point out that privacy concerns surround the interception of private conversations.

They’re right!

But when the privacy in question belongs to those who plan, fund and execute regime-change wars, killing hundreds of thousands and making refugees out of millions more, generally increasing human misery on a global scale, I have an answer to the ethics question. My question is one of risk assessment.

You?

April 26, 2018

Ethics and Law in AI ML

Filed under: Artificial Intelligence,Ethics,Machine Learning — Patrick Durusau @ 3:50 pm

Ethics and Law in AI ML

Data Science Renee, creator of the Women in Data Science Twitter list (with over 1,400 members), has put together a Twitter list focused on ethics and law in AI/ML.

When discussions of ethics for data scientists come up, remember that many players (corporations, governments, military organizations, spy agencies) abide by no code of ethics or laws. Adjust your ethics expectations accordingly.

February 12, 2018

Improving Your Phishing Game

Filed under: Cybersecurity,Ethics,Phishing for Leaks,Security — Patrick Durusau @ 7:52 pm

Did you know that KnowBe4 publishes a quarterly phishing test analysis? It ranks the subject lines most likely to get the links in phishing emails followed.

The entire KnowBe4 site is a reference source if you don’t want to fall for phishing emails, or to look like a Nigerian spammer when you send them.

Their definition of phishing:

Phishing is the process of attempting to acquire sensitive information such as usernames, passwords and credit card details by masquerading as a trustworthy entity using bulk email which tries to evade spam filters.

Emails claiming to be from popular social web sites, banks, auction sites, or IT administrators are commonly used to lure the unsuspecting public. It’s a form of criminally fraudulent social engineering.

I think:

It’s a form of criminally fraudulent social engineering.

sounds a bit harsh and not nuanced at all.

For example, these aren’t criminally fraudulent cases of phishing:

  • CIA sends phishing emails to foreign diplomats
  • FBI sends phishing emails to anti-war and social reform groups
  • NSA sends phishing emails to government officials (ours, theirs, etc.)

Phishing is an amoral weapon, just like any other weapon.

If you use phishing to uncover child sex traffickers, is that a criminally fraudulent use of phishing? Not to me.

If you hear a different conclusion in a windy discussion of ethics, don’t bother to write. I’ll just treat it as spam.

Don’t let other people make broad ethical pronouncements on your behalf. They have an agenda and it’s not likely to be one in your interest.

Meanwhile, improve your phishing game!

February 1, 2018

NSA Exploits – Mining Malware – Ethics Question

Filed under: Cybersecurity,Ethics,Hacking,NSA,Security — Patrick Durusau @ 9:24 pm

New Monero mining malware infected 500K PCs by using 2 NSA exploits

From the post:

It looks like the craze of cryptocurrency mining is taking over the world by storm as every new day there is a new malware targeting unsuspecting users to use their computing power to mine cryptocurrency. Recently, the IT security researchers at Proofpoint have discovered a Monero mining malware that uses leaked NSA (National Security Agency) EternalBlue exploit to spread itself.

The post also mentions use of the NSA exploit, EsteemAudit.

A fair number of leads and worth your time to read in detail.

I suspect most of the data science ethics crowd will downvote the use of NSA exploits (EternalBlue, EsteemAudit) for cryptocurrency mining.

Here’s a somewhat harder data science ethics question:

Is it ethical to infect 500,000+ Windows computers belonging to a government for the purpose of obtaining internal documents?

Does your answer depend upon which government and what documents?

Governments don’t take your rights into consideration. Should you take their laws into consideration?

George “Machine Gun” Kelly (Bank Commissioner), DJ Patil (Data Science Ethics)

Filed under: Data Science,Ethics — Patrick Durusau @ 9:04 pm

A Code of Ethics for Data Science by DJ Patil. (Former U.S. Chief Data Scientist)

From the post:


With the old adage that with great power comes great responsibility, it’s time for the data science community to take a leadership role in defining right from wrong. Much like the Hippocratic Oath defines Do No Harm for the medical profession, the data science community must have a set of principles to guide and hold each other accountable as data science professionals. To collectively understand the difference between helpful and harmful. To guide and push each other in putting responsible behaviors into practice. And to help empower the masses rather than to disenfranchise them. Data is such an incredible lever arm for change, we need to make sure that the change that is coming, is the one we all want to see.

So how do we do it? First, there is no single voice that determines these choices. This MUST be community effort. Data Science is a team sport and we’ve got to decide what kind of team we want to be.

Consider the specifics of Patil’s regime (2015-2017), when government data scientists:

  • Mined information on U.S. citizens. (check)
  • Mined information on non-U.S. citizens. (check)
  • Hacked computer systems of both citizens and non-citizens. (check)
  • Spread disinformation both domestically and abroad. (check)

Unless you want to resurrect George “Machine Gun” Kelly to be your banking commissioner, Patil is a poor choice to lead a charge on ethics.

Despite violations of U.S. law during his tenure as U.S. Chief Data Scientist, Patil was responsible for NO prosecutions, investigations or even whistle-blowing on a single government data scientist.

Patil’s lemming traits come to the fore when he says:


And finally, our democratic systems have been under attack using our very own data to incite hate and sow discord.

Patil ignores two very critical aspects of that claim:

  1. There has been no, repeat no, forensic evidence released to support that claim. All that supports it are assertions by people who claim to have seen something but can’t say what.
  2. The United States (that would be us) has tried to overthrow governments seventy-two times during the Cold War. Sometimes the U.S. has succeeded. Posts on Twitter and Facebook pale by comparison.

Don’t mistake Patil’s use of the term “ethics” as meaning what you mean by “ethics.” Based on his prior record and his post, you can guess that Patil’s “ethics” gives a wide berth to abusive governments and corporations.

January 29, 2018

Have You Been Drafted by Data Science Ethics?

Filed under: Data Science,Ethics — Patrick Durusau @ 8:25 pm

I ask because Strava‘s recent heatmap release (Fitness tracking app Strava gives away location of secret US army bases) is being used as a platform to urge unpaid consideration of government and military interests by data scientists.

Consider Ray Crowell‘s Strava Heatmaps: Why Ethics in Design Matters, which presumes data scientists have an unpaid obligation to consider the interests of the military.

From the post:


These organizations have been warned for years (including by myself) of the information/operational security (specifically with pattern of life, that is, the data collected and analyzed establish an individual’s past behavior, determine their current behavior, and predict their future behavior) implications associated with social platforms and advanced analytical technology. I spent my career stabilizing this intersection between national security and progress — having a deep understanding of the protection of lives, billion-dollar weapon systems, and geopolitical assurances and on the other side, the power of many of these technological advancements in enabling access to health and wellness for all.

Getting at this balance requires us to not get enamored by the idea or implications of ethically sound solutions, but rather exposing our design practices to ethical scrutiny.

These tools are not only beneficial for the designer, but for the user as well. I mention these specifically for institutions like the Defense Department, impacted from the Strava heatmap and frankly many other technologies being employed both sanctioned and unsanctioned by military members and on military installations. These tools are beneficial the institution’s leadership to “reverse engineer” what technologies on the market can do by way of harm … in balance with the good. I learned a long time ago, from wiser mentors than myself, that you don’t know what you’re missing, if you’re not looking to begin with.

Crowell imposes an unpaid ethical obligation on any unsuspecting reader/data scientist to consider their impact on government or military organizations.

In that effort, Crowell is certainly not alone.

If you contract to work for a government or military group, you owe them an ethical obligation of your best efforts. Just as for any other client.

However, volunteering unpaid assistance to military or government organizations damages the market for data scientists.

Now that’s unethical!

PS: I agree there are ethical obligations to consider the impact of your work on disenfranchised, oppressed or abused populations. Governments and military organizations don’t qualify as any of those.

October 3, 2017

Who Does Cyber Security Benefit?

Filed under: Cybersecurity,Ethics,Malware,Security — Patrick Durusau @ 2:04 pm

Indoctrinating children to benefit the wealthy starts at a young age: ‘Hackathon’ challenges teens to improve cyber security.

Improving cyber security is taught as an ethical imperative, but without asking who that “imperative” benefits.

Oxfam wrote earlier this year:

Eight men own the same wealth as the 3.6 billion people who make up the poorest half of humanity, according to a new report published by Oxfam today to mark the annual meeting of political and business leaders in Davos.

Oxfam’s report, ‘An economy for the 99 percent’, shows that the gap between rich and poor is far greater than had been feared. It details how big business and the super-rich are fuelling the inequality crisis by dodging taxes, driving down wages and using their power to influence politics. It calls for a fundamental change in the way we manage our economies so that they work for all people, and not just a fortunate few.

New and better data on the distribution of global wealth – particularly in India and China – indicates that the poorest half of the world has less wealth than had been previously thought. Had this new data been available last year, it would have shown that nine billionaires owned the same wealth as the poorest half of the planet, and not 62, as Oxfam calculated at the time.
… From: Just 8 men own same wealth as half the world

It’s easy to see that the cyber security of SWIFT, a “secure financial messaging system,” benefits the “[e]ight men [who] own the same wealth as the 3.6 billion people who make up the poorest half of humanity” far more than it benefits “…the 3.6 billion people who make up the poorest half of humanity.”

Do you have any doubt about that claim in principle? The exact numbers of inequality don’t interest me as much as the understanding that information systems and their cyber security benefit some people more than others.

Once we establish the principle of differential in benefits from cyber security, then we can ask: Who does cyber security X benefit?

To continue with the SWIFT example, I would not volunteer to walk across the street to improve its cyber security. It is an accessory to a predatory financial system that exploits billions. You could be paid to improve its cyber security but tech people at large have no moral obligation to help SWIFT.

If anyone says you have an obligation to improve cyber security, ask who benefits?

Yes?

August 25, 2017

Air Gapping USB Sticks For Journalists (Or Not! For Others)

Filed under: Cybersecurity,Ethics,Malware,Security — Patrick Durusau @ 12:34 pm

CIRCLean – USB key sanitizer

Journalists are likely to get USB sticks from unknown and/or untrustworthy sources. CIRCLean copies potentially dangerous files from an untrustworthy USB stick, converts those files to a safe format and saves them to your trusted USB stick. (Think of it as a way to avoid sticking a potentially infected USB stick into your computer.)

The CIRCLean site provides visual instructions; the written instructions below are based on them, without illustrations:

  1. Unplug the device.
  2. Plug the untrusted USB stick into the top USB slot.
  3. Plug your own, trusted USB stick into the bottom USB slot.
  4. Note: make sure your trusted USB stick is bigger than the untrusted one; the extracted documents are sometimes bigger than the originals.
  5. Connect the power to the device.
  6. If your device has a diode, wait until the blinking stops.
  7. Otherwise, plug in a headset and listen to the music played during the conversion. When the music stops, the conversion is finished.
  8. Unplug the device and remove the USB sticks.
Label all untrusted USB sticks. “Untrusted” means any stick with an origin other than you. Unicode U+2620, “skull and crossbones” (☠), works well; larger versions of the glyph are available at http://graphemica.com/.

It’s really that easy!
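If you are curious what “converts those files to a safe format” means in practice, here is a toy sketch of the general idea. It is not CIRCLean’s actual pipeline; it assumes hypothetical mount points and a LibreOffice (soffice) install for the document conversion, and, unlike CIRCLean, it runs on your own machine rather than a disposable device:

```python
import shutil
import subprocess
from pathlib import Path

UNTRUSTED = Path("/mnt/untrusted")  # hypothetical mount point for the suspect stick
TRUSTED = Path("/mnt/trusted")      # hypothetical mount point for your own stick

RISKY = {".doc", ".docx", ".xls", ".xlsx", ".ppt", ".pptx", ".rtf"}

def sanitize(src: Path, dst_dir: Path) -> None:
    """Convert risky document formats to PDF; copy everything else with an inert suffix."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    if src.suffix.lower() in RISKY:
        # LibreOffice headless conversion writes a new PDF into dst_dir.
        subprocess.run(
            ["soffice", "--headless", "--convert-to", "pdf",
             "--outdir", str(dst_dir), str(src)],
            check=True,
        )
    else:
        # An extra .txt suffix prevents accidental double-click execution.
        shutil.copy2(src, dst_dir / (src.name + ".txt"))

for path in UNTRUSTED.rglob("*"):
    if path.is_file():
        sanitize(path, TRUSTED / path.parent.relative_to(UNTRUSTED))
```

The reason CIRCLean runs on a cheap, dedicated Raspberry Pi is that the conversion step itself has to touch hostile files; doing it on a throwaway device keeps that risk off your working machine.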

On The Flip Side

Modifying the CIRCLean source to keep its present capabilities, while adding your malware to the “trusted” USB stick, offers a number of exciting possibilities.

Security is all the rage in the banking industry, making a Raspberry Pi (with diode), an attractive case, and your USB malware great banking convention swag.

Listing of banking conferences are maintained by the American Bankers Association, the European Banking Association, and Asian Banking & Finance, to name just a few.

A low-cost alternative to a USB cleaning/malware-installing Raspberry Pi would be to use infected USB sticks as swag. “Front Office Staff: After Hours” or some similar title. If that sounds sexist, it is, but traps use bait based on their target’s proclivities, not yours.

PS: Ethics/legality:

The ethics of spreading malware to infrastructures based on a “white, cisheteropatriarchal*” point of view, I leave for others to discuss.

The legality of spreading malware depends on who’s doing the spreading and who’s being harmed. Check with legal counsel.

* A phrase I stole from: Women’s Suffrage Leaders Left Out Black Women. A great read.

August 3, 2017

Foreign Intelligence Gathering Laws (and ethics)

Filed under: Ethics,Government,Intelligence — Patrick Durusau @ 10:47 am

Foreign Intelligence Gathering Laws from the Law Library of the Library of Congress.

From the webpage:

This report offers a review of laws regulating the collection of intelligence in the European Union (EU) and Belgium, France, Germany, Netherlands, Portugal, Romania, Sweden, and the United Kingdom. This report updates a report on the same topic issued from 2014. Because issues of national security are under the jurisdiction of individual EU Member States and are regulated by domestic legislation, individual country surveys provide examples of how the European nations control activities of their intelligence agencies and what restrictions are imposed on information collection. All EU Member States follow EU legislation on personal data protection, which is a part of the common European Union responsibility.

If you are investigating or reporting on breaches of intelligence gathering laws in “the European Union (EU) and Belgium, France, Germany, Netherlands, Portugal, Romania, Sweden, and the United Kingdom,” this will be useful. Otherwise, for the other one hundred and eighty-eight (188) countries, you are SOL.

Other than as a basis for outrage, it’s not clear how useful intelligence gathering laws are in fact. The secrecy of intelligence operations makes practical oversight impossible and if leaks are to be credited, no known intelligence agency obeys such laws other than accidentally.

Moreover, as the U.S. Senate report on torture demonstrates, even war criminals are protected from prosecution in the name of intelligence gathering.

I take my cue from the CIA‘s position, as captured by Bob Dylan in Tweeter and the Monkey Man:

“It was you to me who taught
In Jersey anything’s legal as long as you don’t get caught.”

Disarming yourself with law or ethics in any encounter with an intelligence agency, which honors neither, means you will lose.

Choose your strategies accordingly.

May 28, 2017

Ethics, Data Scientists, Google, Wage Discrimination Against Women

Filed under: Data Science,Ethics — Patrick Durusau @ 4:50 pm

Accused of underpaying women, Google says it’s too expensive to get wage data by Sam Levin.

From the post:

Google argued that it was too financially burdensome and logistically challenging to compile and hand over salary records that the government has requested, sparking a strong rebuke from the US Department of Labor (DoL), which has accused the Silicon Valley firm of underpaying women.

Google officials testified in federal court on Friday that it would have to spend up to 500 hours of work and $100,000 to comply with investigators’ ongoing demands for wage data that the DoL believes will help explain why the technology corporation appears to be systematically discriminating against women.

Noting Google’s nearly $28bn annual income as one of the most profitable companies in the US, DoL attorney Ian Eliasoph scoffed at the company’s defense, saying, “Google would be able to absorb the cost as easy as a dry kitchen sponge could absorb a single drop of water.”

Disclosure: I assume Google is resisting disclosure because it in fact has a history of engaging in discrimination against women. It may or may not be discriminating this month/year, but if known, the facts will support the government’s claim. The $100,000 alleged cost is chump change to prove such a charge groundless. Resistance signals the charge has merit.

Levin’s post gives me reason to doubt Google will prevail on this issue or on the merits in general. Read it in full.

My question is what of the ethical obligations of data scientists at Google?

Should data scientists inside Google come forward with the requested information?

Should data scientists inside Google stage a work slowdown to protest Google’s resistance?

Exactly what should ethical data scientists do when their employer is the 500 pound gorilla in their field?

Do you think Google executives need a memo from their data scientists cluing them in on the ethical issues here?

Possibly not; this is old-fashioned gender discrimination.

Google’s resistance signals to all of its mid-level managers that gender based discrimination will be defended.

Does that really qualify as “Don’t be evil”?

May 13, 2017

Bigoted Use of Stingray Technology vs. Other Ills

Filed under: #BLM,Bias,Ethics,Government — Patrick Durusau @ 8:33 pm

Racial Disparities in Police ‘Stingray’ Surveillance, Mapped by George Joseph.

From the post:

Louise Goldsberry, a Florida nurse, was washing dishes when she looked outside her window and saw a man pointing a gun at her face. Goldsberry screamed, dropped to the floor, and crawled to her bedroom to get her revolver. A standoff ensued with the gunman—who turned out to be an agent with the U.S. Marshals’ fugitive division.

Goldsberry, who had no connection to a suspect that police were looking for, eventually surrendered and was later released. Police claimed that they raided her apartment because they had a “tip” about the apartment complex. But, according to Slate, the reason the “tip” was so broad was because the police had obtained only the approximate location of the suspect’s phone—using a “Stingray” phone tracker, a little-understood surveillance device that has quietly spread from the world of national security into that of domestic law enforcement.

Goldsberry’s story illustrates a potential harm of Stingrays not often considered: increased police contact for people who get caught in the wide dragnets of these interceptions. To get a sense of the scope of this surveillance, CityLab mapped police data from three major cities across the U.S., and found that this burden is not shared equally.

How not equally?

Baltimore, Maryland.

The map in Joseph’s post is interactive, along with maps for Tallahassee, Florida and Milwaukee, Wisconsin.

I oppose government surveillance overall but am curious, is Stingray usage a concern of technology/privacy advocates or is there a broader base for opposing it?

Consider the following facts gathered by Bill Quigley:

Were you shocked at the disruption in Baltimore? What is more shocking is daily life in Baltimore, a city of 622,000 which is 63 percent African American. Here are ten numbers that tell some of the story.

One. Blacks in Baltimore are more than 5.6 times more likely to be arrested for possession of marijuana than whites even though marijuana use among the races is similar. In fact, Baltimore county has the fifth highest arrest rate for marijuana possessions in the USA.

Two. Over $5.7 million has been paid out by Baltimore since 2011 in over 100 police brutality lawsuits. Victims of severe police brutality were mostly people of color and included a pregnant woman, a 65 year old church deacon, children, and an 87 year old grandmother.

Three. White babies born in Baltimore have six more years of life expectancy than African American babies in the city.

Four. African Americans in Baltimore are eight times more likely to die from complications of HIV/AIDS than whites and twice as likely to die from diabetes related causes as whites.

Five. Unemployment is 8.4 percent city wide. Most estimates place the unemployment in the African American community at double that of the white community. The national rate of unemployment for whites is 4.7 percent, for blacks it is 10.1.

Six. African American babies in Baltimore are nine times more likely to die before age one than white infants in the city.

Seven. There is a twenty year difference in life expectancy between those who live in the most affluent neighborhood in Baltimore versus those who live six miles away in the most impoverished.

Eight. 148,000 people, or 23.8 percent of the people in Baltimore, live below the official poverty level.

Nine. 56.4 percent of Baltimore students graduate from high school. The national rate is about 80 percent.

Ten. 92 percent of marijuana possession arrests in Baltimore were of African Americans, one of the highest racial disparities in the USA.

(The “Shocking” Statistics of Racial Disparity in Baltimore)

Which of those facts would you change before tackling the problem of racially motivated use of Stingray technology?

I see several that I would rate much higher than the vagaries of Stingray surveillance.

You?

May 4, 2017

Addictive Technology (And the Problem Is?)

Filed under: Advertising,Ethics,Marketing — Patrick Durusau @ 3:08 pm

Tech Companies are Addicting People! But Should They Stop? by Nir Eyal.

From the post:

To understand technology addiction (or any addiction for that matter) you need to understand the Q-tip. Perhaps you’ve never noticed there’s a scary warning on every box of cotton swabs that reads, “CAUTION: Do not enter ear canal…Entering the ear canal could cause injury.” How is it that the one thing most people do with Q-tips is the thing manufacturers explicitly warn them not to do?

“A day doesn’t go by that I don’t see people come in with Q-tip-related injuries,” laments Jennifer Derebery, an inner ear specialist in Los Angeles and the past president of the American Academy of Otolaryngology. “I tell my husband we ought to buy stock in the Q-tips company; it supports my practice.” It’s not just that people do damage to their ears with Q-tips, it’s that they keep doing damage. Some even call it an addiction.

On one online forum, a user asks, “Anyone else addicted to cleaning their ears with Q-tips?…I swear to God if I go more than a week without sticking Q-tips in my ears, I go nuts. It’s just so damn addicting…” Elsewhere, another ear-canal enterer also associates ear swabbing with dependency: “How can I detox from my Q-tips addiction?” The phenomenon is so well known that MADtv based a classic sketch on a daughter having to hide Q-tip use from her parents like a junkie.

Q-tip addiction shares something in common with other, more prevalent addictions like gambling, heroin, and even Facebook use. Understanding what I call, the Q-tip Effect, raises important questions about products we use every day, and the responsibilities their makers have in relation to the welfare of their users.
… (emphasis in original)

It’s a great post on addiction (read the definition), technology, etc., but Nir loses me here:


However, there’s a difference between accepting the unavoidable edge cases among unknown users and knowingly promoting the Q-tip Effect. When it comes to companies that know exactly who’s using, how, and how much, much more can be done. To do the right thing by their customers, companies have an obligation to help when they know someone wants to stop, but can’t. Silicon Valley technology companies are particularly negligent by this ethical measure.

The only basis for this “…obligation to help when they know someone wants to stop, but can’t” appears to be Nir’s personal opinion.

That’s ok and he is certainly entitled to it, but Nir hasn’t offered to pay the cost of meeting his projected ethical obligation.

People enjoy projecting ethical obligations onto others, from the anti-abortion and anti-birth control crowds to the anti-drug crusaders, and so on.

Imposing moral obligations that others pay for is more popular in the U.S. than adultery. I don’t have any hard numbers on that last point. Let’s say imposing moral obligations paid for by others is wildly popular and leave it at that.

If I had a highly addictive (in Nir’s sense) app, I would be using the profits to rent backhoes for anyone who needed one along the DAPL pipeline. No questions asked.

It’s an absolute necessity to raise ethical questions about technology and society in general.

But my first question is always: Who pays the cost of your ethical concern?

If it’s not you, that says a lot to me about your concern.

April 4, 2017

Non-Fox News journalists: Investigate Bill O’Reilly & Fox News Reporters

Filed under: Ethics,Journalism,News,Reporting — Patrick Durusau @ 5:57 pm

Fox News journalists: Don’t stay silent amid Bill O’Reilly controversy by Kyle Pope.

From the post:

WHAT DOES IT TELL US WHEN advertisers get ahead of reporters in matters of newsroom ethics? It tells us something is seriously wrong at Fox News, and it’s time for the real journalists at the network (and beyond) to make themselves heard.

On Tuesday, more companies moved to distance themselves from the network and its host, Bill O’Reilly, in response to a April 1 piece in The New York Times detailing sexual harassment allegations against Fox’s top-rated host and cash cow. The alleged behavior ranges the gamut of smut, from unwanted advances to phone calls in which O’Reilly—he of an $18 million-a-year salary from Rupert Murdoch et al—sounds as if he is masturbating.
… (emphasis in original)

Pope’s call for legitimate journalists at Fox to step forward is understandable, but too little too late.

From campus rape at UT Austin to the conviction of former Penn State President Graham Spanier for failing to report alleged child abuse, it is always the case that somebody knew what was going on and remained silent.

What did the “legitimate journalists” at Fox News know, and when?

Will the journalism community toss O’Reilly to the wolves and give his colleagues a free pass?

That seems like an odd sense of ethics for journalists.

Yes?

January 4, 2017

Q&A Cathy O’Neil…

Filed under: BigData,Ethics,Mathematics,Modeling,Statistics — Patrick Durusau @ 2:30 pm

Q&A Cathy O’Neil, author of ‘Weapons of Math Destruction,’ on the dark side of big data by Christine Zhang.

From the post:

Cathy O’Neil calls herself a data skeptic. A former hedge fund analyst with a PhD in mathematics from Harvard University, the Occupy Wall Street activist left finance after witnessing the damage wrought by faulty math in the wake of the housing crash.

In her latest book, “Weapons of Math Destruction,” O’Neil warns that the statistical models hailed by big data evangelists as the solution to today’s societal problems, like which teachers to fire or which criminals to give longer prison terms, can codify biases and exacerbate inequalities. “Models are opinions embedded in mathematics,” she writes.

Great interview that hits enough high points to leave you wanting to learn more about Cathy and her analysis.

On that score, try:

  • Read her mathbabe blog.
  • Follow @mathbabedotorg.
  • Read Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
  • Try her new business: ORCAA [O’Neil Risk Consulting and Algorithmic Auditing].

From the ORCAA homepage:


ORCAA’s mission is two-fold. First, it is to help companies and organizations that rely on time and cost-saving algorithms to get ahead of this wave, to understand and plan for their litigation and reputation risk, and most importantly to use algorithms fairly.

The second half of ORCAA’s mission is this: to develop rigorous methodology and tools, and to set rigorous standards for the new field of algorithmic auditing.

There are bright-line cases (sentencing, housing, hiring discrimination) where “fair” has a binding legal meaning, and legal liability attaches to not being “fair.”

Outside such areas, the search for “fairness” seems quixotic; there, clients are entitled to their own definitions of “fair.”

December 16, 2016

neveragain.tech [Or at least not any further]

Filed under: Data Science,Ethics,Government,Politics — Patrick Durusau @ 9:55 am

neveragain.tech [Or at least not any further]

Write a list of things you would never do. Because it is possible that in the next year, you will do them. —Sarah Kendzior [1]

We, the undersigned, are employees of tech organizations and companies based in the United States. We are engineers, designers, business executives, and others whose jobs include managing or processing data about people. We are choosing to stand in solidarity with Muslim Americans, immigrants, and all people whose lives and livelihoods are threatened by the incoming administration’s proposed data collection policies. We refuse to build a database of people based on their Constitutionally-protected religious beliefs. We refuse to facilitate mass deportations of people the government believes to be undesirable.

We have educated ourselves on the history of threats like these, and on the roles that technology and technologists played in carrying them out. We see how IBM collaborated to digitize and streamline the Holocaust, contributing to the deaths of six million Jews and millions of others. We recall the internment of Japanese Americans during the Second World War. We recognize that mass deportations precipitated the very atrocity the word genocide was created to describe: the murder of 1.5 million Armenians in Turkey. We acknowledge that genocides are not merely a relic of the distant past—among others, Tutsi Rwandans and Bosnian Muslims have been victims in our lifetimes.

Today we stand together to say: not on our watch, and never again.

I signed up, but FYI, the databases we are pledging not to build already exist.

The US Census Bureau collects information on race, religion and national origin.

The Statistical Abstract of the United States: 2012 (131st Edition) Section 1. Population confirms the Census Bureau has this data:

Population tables are grouped by category as follows:

  • Ancestry, Language Spoken At Home
  • Elderly, Racial And Hispanic Origin Population Profiles
  • Estimates And Projections By Age, Sex, Race/Ethnicity
  • Estimates And Projections–States, Metropolitan Areas, Cities
  • Households, Families, Group Quarters
  • Marital status And Living Arrangements
  • Migration
  • National Estimates And Projections
  • Native And Foreign-Born Populations
  • Religion

To be fair, the privacy principles of the Census Bureau state:

Respectful Treatment of Respondents: Are our efforts reasonable and did we treat you with respect?

  • We promise to ensure that any collection of sensitive information from children and other sensitive populations does not violate federal protections for research participants and is done only when it benefits the public good.

Disclosure: I like the US Census Bureau. Left to its own devices, it gives me no reasonable fear of it misusing the data in question.

But that’s the question, isn’t it? Will the US Census Bureau be left to its own policies and traditions?

I view the various “proposed data collection policies” of the incoming administration as intentional distractions. While everyone is focused on Trump’s Theater of the Absurd, appointments and policies at the US Census Bureau may achieve the same ends.

Sign the pledge yes, but use FOIA requests, personal contacts with Census staff, etc., to keep track of the use of dangerous data at the Census Bureau and elsewhere.


Instructions for adding your name to the pledge are found at: https://github.com/neveragaindottech/neveragaindottech.github.io/.

Assume Census Bureau staff are committed to their privacy and appropriate use policies. A friendly approach will be far more productive than a confrontational or suspicious one. Let’s work with them to maintain their agency’s long history of data security.

November 25, 2016

Programming has Ethical Consequences?

Filed under: Ethics,Programming — Patrick Durusau @ 8:38 pm

Has anyone tracked down the blinding flash of insight that revealed programming has ethical consequences?

Programmers are charged to point out ethical dimensions and issues not noticed by muggles.

This may come as a surprise, but programmers in the broader sense have been aware of ethical dimensions to programming for decades.

Perhaps the best-known example of a road-to-Damascus event is the Trinity atomic bomb test in New Mexico, with Oppenheimer recalling a line from the Bhagavad Gita:

“Now I am become Death, the destroyer of worlds.”

To say nothing of the programmers who labored for years to guarantee world wide delivery of nuclear warheads in 30 minutes or less.

But it isn’t necessary to invoke a nuclear Armageddon to find ethical issues that have faced programmers prior to the current ethics frenzy.

Any guesses as to how red line maps were created?

Do you think “red line” maps just sprang up on their own? Or was someone collecting, collating and analyzing the data, much as we would do now but more slowly?

Every act of collecting, collating and analyzing data, now with computers, can and probably does have ethical dimensions and issues.

Programmers can and should raise ethical issues, especially when they may be obscured or clouded by programming techniques or practices.

However, programmers announcing ethical issues to their less fortunate colleagues isn’t likely to lead to a fruitful discussion.

November 8, 2016

None/Some/All … Are Suicide Bombers & Probabilistic Programming Languages

The Design and Implementation of Probabilistic Programming Languages by Noah D. Goodman and Andreas Stuhlmüller.

Abstract:

Probabilistic programming languages (PPLs) unify techniques for the formal description of computation and for the representation and use of uncertain knowledge. PPLs have seen recent interest from the artificial intelligence, programming languages, cognitive science, and natural languages communities. This book explains how to implement PPLs by lightweight embedding into a host language. We illustrate this by designing and implementing WebPPL, a small PPL embedded in Javascript. We show how to implement several algorithms for universal probabilistic inference, including priority-based enumeration with caching, particle filtering, and Markov chain Monte Carlo. We use program transformations to expose the information required by these algorithms, including continuations and stack addresses. We illustrate these ideas with examples drawn from semantic parsing, natural language pragmatics, and procedural graphics.

If you want to sharpen the discussion of probabilistic programming languages, substitute in the pragmatics example:

‘none/some/all of the children are suicide bombers’,

The substitution raises the issue of how “certainty” can/should vary depending upon the gravity of results.

“Who is a nice person?” has low stakes.

“Who is a suicide bomber?” has high stakes.
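To make the enumeration algorithms mentioned in the abstract concrete, here is a minimal sketch of exhaustive-enumeration inference for the substituted example, written in Python rather than WebPPL and not taken from the book: a uniform prior over how many of three children are implicated, conditioned on the literal truth of the utterance “some”:

```python
from fractions import Fraction

N = 3  # number of children in the toy scenario

def literally_true(utterance: str, k: int) -> bool:
    """Literal semantics of the quantifiers for k guilty children out of N."""
    return {"none": k == 0, "some": k >= 1, "all": k == N}[utterance]

def enumerate_posterior(utterance: str) -> dict:
    """Exhaustive enumeration: uniform prior over k, condition on the utterance being true."""
    prior = {k: Fraction(1, N + 1) for k in range(N + 1)}
    unnormalized = {k: p for k, p in prior.items() if literally_true(utterance, k)}
    z = sum(unnormalized.values())
    return {k: p / z for k, p in unnormalized.items()}

print(enumerate_posterior("some"))  # {1: 1/3, 2: 1/3, 3: 1/3}; literal "some" still allows "all"
```

A pragmatic-listener model of the kind the book’s pragmatics examples build would go further, reasoning about why a speaker who could have said “all” chose “some” and shifting probability away from k = 3. The stakes question above is whether you are willing to act on that shift when the proposition is “suicide bomber” rather than “nice person.”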

October 4, 2016

Deep-Fried Data […money laundering for bias…]

Filed under: Ethics,Machine Learning — Patrick Durusau @ 6:45 pm

Deep-Fried Data by Maciej Ceglowski. (paper) (video of same presentation) Part of Collections as Data event at the Library of Congress.

If the “…money laundering for bias…” quote doesn’t capture your attention, try:


I find it helpful to think of algorithms as a dim-witted but extremely industrious graduate student, whom you don’t fully trust. You want a concordance made? An index? You want them to go through ten million photos and find every picture of a horse? Perfect.

You want them to draw conclusions on gender based on word use patterns? Or infer social relationships from census data? Now you need some adult supervision in the room.

Besides these issues of bias, there’s also an opportunity cost in committing to computational tools. What irks me about the love affair with algorithms is that they remove a lot of the potential for surprise and serendipity that you get by working with people.

If you go searching for patterns in the data, you’ll find patterns in the data. Whoop-de-doo. But anything fresh and distinctive in your digital collections will not make it through the deep frier.

We’ve seen entire fields disappear down the numerical rabbit hole before. Economics came first, sociology and political science are still trying to get out, bioinformatics is down there somewhere and hasn’t been heard from in a while.

A great read and equally enjoyable presentation.

Enjoy!

Moral Machine [Research Design Failure]

Filed under: Ethics,Research Methods — Patrick Durusau @ 3:45 pm

Moral Machine

From the webpage:

Welcome to the Moral Machine! A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars.

We show you moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you judge which outcome you think is more acceptable. You can then see how your responses compare with those of other people.

If you’re feeling creative, you can also design your own scenarios, for you and others to browse, share, and discuss.

The first time I recall hearing this type of discussion was over thirty years ago, when a friend taking an ethics class related the following problem:

You are driving a troop transport with twenty soldiers in the back and are about to enter a one-lane bridge. You see a baby sitting in the middle of the bridge. Do you swerve, going down an embankment and killing all on board, or do you go straight?

A lively college classroom discussion erupted and continued for the entire class. Various theories and justifications were offered, etc. When the class bell rang, the professor announced that the child had perished 59 minutes, 59 seconds earlier.

As you may guess, not a single person in the class called out “Swerve” when the question was posed.

The exercise was to illustrate that many “moral” decisions are made at the limits of human reaction time, typically between 150 and 300 milliseconds. (Speedy Science: How Fast Can You React? is a great activity from Scientific American to test your reaction time.)

The examples in MIT’s Moral Machine perpetuate the myth that moral decisions are the result of reflection and consideration of multiple factors.

Considered moral decisions do exist. Dietrich Bonhoeffer deciding to participate in a conspiracy to assassinate Adolf Hitler. Lyndon Johnson supporting civil rights in the South. But those are not the subject of the “Moral Machine.”

Nor is the “Moral Machine” even a useful simulation of what a driven and/or driverless car would confront. Visibility isn’t an issue as it often is, there are no distractions, no smart phones ringing, no conflicting input from passengers, etc.

In short, the “Moral Machine” creates a fictional choice, about which to solicit your “moral” advice, under conditions you will never experience.

Separating pedestrians from vehicles (once suggested by Buckminster Fuller, I think) is a far more useful exercise than college-level discussion questions.

September 29, 2016

Are You A Moral Manipulator?

Filed under: Ethics,Marketing,Persuasion — Patrick Durusau @ 6:58 pm

I appreciated Nir’s reminder about the #1 rule for drug dealers.

If you don’t know it, the video is only a little over six minutes long.

Enjoy!

August 28, 2016

Ethics for Powerful Algorithms

Filed under: Algorithms,Ethics — Patrick Durusau @ 8:47 pm

Ethics for Powerful Algorithms by Abe Gong.

Abe’s four questions:

  1. Are the statistics solid?
  2. Who wins? Who loses?
  3. Are those changes to power structures healthy?
  4. How can we mitigate harms?

They remind me of my favorite scene from Labyrinth:

Transcript:

Sarah: That’s not fair!
Jareth: You say that so often, I wonder what your basis for comparison is?

Isn’t the question of “fairness” one for your client?

August 24, 2016

DATNAV: …Navigate and Integrate Digital Data in Human Rights Research [Ethics]

Filed under: Ethics,Human Rights,Humanities — Patrick Durusau @ 2:54 pm

DATNAV: New Guide to Navigate and Integrate Digital Data in Human Rights Research by Zara Rahman.

From the introduction in the Guide:

From online videos of rights violations, to satellite images of environmental degradation, to eyewitness accounts disseminated on social media, we have access to more relevant data today than ever before. When used responsibly, this data can help human rights professionals in the courtroom, when working with governments and journalists, and in documenting historical record.

Acquiring, disseminating and storing digital data is also becoming increasingly affordable. As costs continue to decrease and new platforms are developed, opportunities for harnessing these data sources for human rights work increase.

But integrating data collection and management into the day to day work of human rights research and documentation can be challenging, even overwhelming, for individuals and organisations. This guide is designed to help you navigate and integrate new data forms into your human rights work.

It is the result of a collaboration between Amnesty International, Benetech, and The Engine Room that began in late 2015. We conducted a series of interviews, community consultations, and surveys to understand whether digital data was being integrated into human rights work. In the vast majority of cases, we found that it wasn’t. Why?

Mainly, human rights researchers appeared to be overwhelmed by the possibilities. In the face of limited resources, not knowing how to get started or whether it would be worthwhile, most people we spoke to refrained from even attempting to strengthen their work with digital data.

To support everyone in the human rights field in navigating this complex environment, we convened a group of 16 researchers and technical experts in a castle outside Berlin, Germany in May 2016 to draft this guide over four days of intense reflection and writing.

There are additional reading resources at: https://engn.it/datnav.

The issue of ethics comes up quickly in human rights research and here the authors write:

Seven things to consider before using digital data for human rights

  1. Would digital data genuinely help answer your research questions? What are the pros and cons of the particular source or medium? What might you learn from past uses of similar technology?
  2. What sources are likely to be collecting or capturing the kinds of information you need? What is the context in which it is being produced and used? Will the people or organisations on which your work is focused be receptive to these types of data?
  3. How easily will new forms of data integrate into your existing workflow? Do you realistically have the time and money to collect, store, analyze and especially to verify this data? Can anyone on your team comfortably support the technology?
  4. Who owns or controls the data you will be using? Companies, government, or adversaries? How difficult is it to get? Is it a fair or legal collection method? What is the internal stance on this? Do you have true informed consent from individuals?
  5. How will digital divides and differences in local access to online platforms, computers or phones, affect representation of different populations? Would conclusions based on the data reinforce inequalities, stereotypes or blind spots?
  6. Are organisational protocols for confidentiality and security in digital communication and data handling sufficiently robust to deal with risks to you, your partners and sources? Are security tools and processes updated frequently enough?
  7. Do you have safeguards in place to prevent and deal with any secondary trauma from viewing digital content that you or your partners may experience at personal and organisational levels?

(Page 15)

Before I reveal my #0 consideration, consider the following story as setting the background.

At a seminar on the death penalty (certainly a violation of human rights), a practitioner reported a case where the prosecuting attorney said a particular murder case was a question of “good versus evil.” In the course of preparing for that case, it was discovered that, while teaching a course for paralegals, the prosecuting attorney had a sexual affair with one of his students. Affidavits were obtained, etc., and a motion was filed in the pending criminal case entitled: Motion To Define Good and Evil.

There was a mix of opinions on whether blindsiding the prosecuting attorney with his personal failings, along with the fallout for his family, was a legitimate approach.

My question was: Did they consider asking the prosecuting attorney to take the death penalty off the table, in exchange for not filing the Motion To Define Good and Evil? A question of effective use of the information and not about the legitimacy of using it.

For human rights violations, my #0 Question would be:

0. Can the information be used to stop and/or redress human rights violations without harming known human rights victims?

The other seven questions, like “…all deliberate speed…,” are a game played by non-victims.

August 21, 2016

The Ethics of Data Analytics

Filed under: Data Analysis,Data Science,Ethics,Graphics,Statistics,Visualization — Patrick Durusau @ 4:00 pm

The Ethics of Data Analytics by Kaiser Fung.

Twenty-one slides on ethics by Kaiser Fung, author of Junk Charts (a data visualization blog) and Big Data, Plainly Spoken (comments on media use of statistics).

Fung challenges you to reach your own ethical decisions and acknowledges there are a number of guides to such decision making.

Unfortunately, Fung does not include professional responsibility requirements, such as the now-outdated Canon 7 of the ABA Model Code of Professional Responsibility:

A Lawyer Should Represent a Client Zealously Within the Bounds of the Law

That canon has a much storied history, which is capably summarized in Whatever Happened To ‘Zealous Advocacy’? by Paul C. Sanders.

In what became known as Queen Caroline’s Case, the House of Lords sought to dissolve the marriage of King George IV

[Portrait: George IV, 1821]

to Queen Caroline

[Portrait: Caroline of Brunswick, 1795]

on the grounds of her adultery, effectively removing her as Queen of England.

Queen Caroline was represented by Lord Brougham, who had evidence of a secret prior marriage by King George IV to Mrs Fitzherbert, a Catholic (which was illegal).

[Portrait: Mrs Maria Fitzherbert, wife of George IV]

Brougham’s speech is worth reading in full, but the portion most often cited for zealous defense reads as follows:


I once before took leave to remind your lordships — which was unnecessary, but there are many whom it may be needful to remind — that an advocate, by the sacred duty of his connection with his client, knows, in the discharge of that office, but one person in the world, that client and none other. To save that client by all expedient means — to protect that client at all hazards and costs to all others, and among others to himself — is the highest and most unquestioned of his duties; and he must not regard the alarm, the suffering, the torment, the destruction, which he may bring upon any other; nay, separating even the duties of a patriot from those of an advocate, he must go on reckless of the consequences, if his fate it should unhappily be, to involve his country in confusion for his client.

The name Mrs. Fitzherbert never passes Lord Brougham’s lips, but the House of Lords had been warned that this might not remain the case should it choose to proceed. The House of Lords did grant the divorce but never enforced it, a saving fact one supposes. Queen Caroline died less than a month after the coronation of George IV.

For data analysis, cybersecurity, or any of the other topics I touch on in this blog, I take the last line of Lord Brougham’s speech:

To save that client by all expedient means — to protect that client at all hazards and costs to all others, and among others to himself — is the highest and most unquestioned of his duties; and he must not regard the alarm, the suffering, the torment, the destruction, which he may bring upon any other; nay, separating even the duties of a patriot from those of an advocate, he must go on reckless of the consequences, if his fate it should unhappily be, to involve his country in confusion for his client.

as the height of professionalism.

Post-engagement of course.

If ethics are your concern, have that discussion with your prospective client before you are hired.

Otherwise, clients have goals and the task of a professional is how to achieve them. Nothing more.

July 31, 2016

Who Decides On Data Access?

Filed under: Ethics,Journalism,News,Reporting — Patrick Durusau @ 9:35 am

In a Twitter dust-up following The Privileged Cry: Boo, Hoo, Hoo Over Release of OnionScan Data, the claim was made by [Λ•]ltSciFi (@altscifi_) that:

@SarahJamieLewis You take an ethical stance. @patrickDurusau does not. Note his regression to a childish tone. Also: schneier.com/blog/archives/…

To which I responded:

@altscifi_ @SarahJamieLewis Interesting. Questioning genuflection to privilege is a “childish tone?” Is name calling the best you can do?

Which earned this response from [Λ•]ltSciFi (@altscifi_):

@patrickDurusau @SarahJamieLewis Not interested in wasting time arguing with you. Your version of “genuflection” doesn’t merit the effort.

Anything beyond name calling is too much effort for [Λ•]ltSciFi (@altscifi_). Rather than admit they haven’t thought about the ethics of data access beyond “me too!”, it saves face to say the discussion is a waste of time.

I have never denied that access to data can raise ethical issues or that such issues merit discussion.

What I do object to is that in such discussions, it has been my experience (important qualifier) that those urging an ethics of data access have someone in mind to decide on data access. Almost invariably, themselves.

Take the recent “weaponized transparency” rhetoric of the Sunlight Foundation as an example. We can argue about the ethics of particular aspects of the DNC data leak, but the fact remains that the Sunlight Foundation considers itself, and not you, the appropriate arbiter of access to an unfiltered version of that data.

I assume the Sunlight Foundation would include as appropriate arbiters many of the usual news organizations that accept leaked documents and reveal to the public only as much as they choose to reveal.

Not to pick only on the Sunlight Foundation: there is an alphabet soup of U.S. government agencies that make similar claims about what should or should not be revealed to the public. I have no more sympathy for their claims of a right to limit data access than for those of more public-minded organizations.

Take the OnionScan data dump, for example. Sarah Jamie Lewis may choose to help sites for victims of abuse (a good thing in my opinion), whereas others of us may choose to fingerprint and out government spy agencies (some may see that as a bad thing).

The point being that the OnionScan data dump enables more people to make those “ethical” choices for themselves, rather than being preempted by someone else’s judgment that data like the OnionScan dump should not be widely available.
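
To make “fingerprint” concrete, here is a minimal sketch, in Python, of one analysis such a dump enables: grouping onion addresses that report the same identifier, say an SSH host key fingerprint. The field names and the file name are hypothetical stand-ins, not the actual OnionScan output format.

import json
from collections import defaultdict

def group_by_shared_key(scan_records):
    """Group onion addresses that report the same SSH host key fingerprint."""
    groups = defaultdict(set)
    for record in scan_records:
        key = record.get("sshKeyFingerprint")   # hypothetical field name
        onion = record.get("hiddenService")     # hypothetical field name
        if key and onion:
            groups[key].add(onion)
    # Keep only fingerprints shared by more than one service.
    return {k: v for k, v in groups.items() if len(v) > 1}

if __name__ == "__main__":
    # Hypothetical file of scan results: a JSON list of per-service records.
    with open("onionscan_results.json") as f:
        records = json.load(f)
    for key, services in sorted(group_by_shared_key(records).items()):
        print(key, sorted(services))

Whether you point such an analysis at abuse-support sites or at suspected spy agency fronts is exactly the kind of choice the dump leaves to you.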

BTW, in a later tweet Sarah Jamie Lewis says:

In which I am called privileged for creating an open source tool & expressing concerns about public deanonymization.

Missing the issue entirely, as she was quoted expressing concerns over the OnionScan data dump. Public deanonymization is a legitimate concern, so long as we all get to weigh those concerns for ourselves. Lewis is trying to trade her weak claim over the data dump for the stronger one over public deanonymization.

Unlike most of the discussants you will find, I don’t want to decide on what data you can or cannot see.

Why would I? I can’t foresee all the uses, what data you might combine it with, or with what intent.

If you consider the history of data censorship by governments, we haven’t done terribly well in our choices of censors or in the results of their censorship.

Let’s allow people to exercise their own sense of ethics. We could hardly do worse than we have so far.

June 16, 2016

Are Non-AI Decisions “Open to Inspection?”

Filed under: Artificial Intelligence,Ethics,Machine Learning — Patrick Durusau @ 4:46 pm

Ethics in designing AI Algorithms — part 1 by Michael Greenwood.

From the post:

As our civilization becomes more and more reliant upon computers and other intelligent devices, there arises specific moral issue that designers and programmers will inevitably be forced to address. Among these concerns is trust. Can we trust that the AI we create will do what it was designed to without any bias? There’s also the issue of incorruptibility. Can the AI be fooled into doing something unethical? Can it be programmed to commit illegal or immoral acts? Transparency comes to mind as well. Will the motives of the programmer or the AI be clear? Or will there be ambiguity in the interactions between humans and AI? The list of questions could go on and on.

Imagine if the government uses a machine-learning algorithm to recommend applications for student loan approvals. A rejected student and or parent could file a lawsuit alleging that the algorithm was designed with racial bias against some student applicants. The defense could be that this couldn’t be possible since it was intentionally designed so that it wouldn’t have knowledge of the race of the person applying for the student loan. This could be the reason for making a system like this in the first place — to assure that ethnicity will not be a factor as it could be with a human approving the applications. But suppose some racial profiling was proven in this case.

If directed evolution produced the AI algorithm, then it may be impossible to understand why, or even how. Maybe the AI algorithm uses the physical address data of candidates as one of the criteria in making decisions. Maybe they were born in or at some time lived in poverty‐stricken regions, and that in fact, a majority of those applicants who fit these criteria happened to be minorities. We wouldn’t be able to find out any of this if we didn’t have some way to audit the systems we are designing. It will become critical for us to design AI algorithms that are not just robust and scalable, but also easily open to inspection.

While I can appreciate the desire to make AI algorithms that are “…easily open to inspection…,” I feel compelled to point out that human decision making has resisted such openness for thousands of years.

There are tales we tell each other about “rational” decision making, but that isn’t how decisions are made; rather, it is how we justify, to ourselves and others, decisions already made. Not exactly the same thing.

Recall the parole-granting behavior of Israeli judges, which depended on proximity to their last meal. Certainly all of those judges would argue their decisions were “rational,” but meal time was a better predictor than any other factor. (Extraneous factors in judicial decisions)

My point being that if we struggle to even articulate the actual basis for non-AI decisions, where is our model for making AI decisions “open to inspection?” What would that look like?

You could say, for example, no discrimination based on race. OK, but that’s not going to work if you want to purposely set up scholarships for minority students.

When you object, “…that’s not what I meant! You know what I mean!…,” well, I might, but try convincing an AI that has no social context of what you “meant.”

The openness of AI decisions to inspection is an important issue but the human record in that regard isn’t encouraging.
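
Not encouraging, but not hopeless. As a minimal sketch of what one narrow form of “inspection” might look like, consider a disparate impact check on a model’s decisions, plus a look at whether a facially neutral feature stands in for a protected attribute. The data, column names, and the informal 80% threshold below are illustrative assumptions, not anything drawn from Greenwood’s post.

import pandas as pd

def disparate_impact(df, decision_col, group_col):
    """Approval rate per group and the ratio of the lowest rate to the
    highest; the informal '80% rule' treats a ratio below 0.8 as a flag."""
    rates = df.groupby(group_col)[decision_col].mean()
    return rates, rates.min() / rates.max()

# Hypothetical decisions from a loan-approval model; in a real audit these
# would come from the system under inspection.
decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 0, 1, 0],
    "group":    ["A", "B", "A", "A", "B", "B", "A", "B"],
    "zip_code": ["10001", "60612", "10001", "10001",
                 "60612", "60612", "10001", "60612"],
})

rates, ratio = disparate_impact(decisions, "approved", "group")
print(rates)   # approval rate for each group
print(ratio)   # 0.0 here: group B was never approved

# Even if the model never saw "group", a facially neutral feature such as
# zip code can stand in for it. A crosstab makes the proxy visible.
print(pd.crosstab(decisions["zip_code"], decisions["group"]))

A check like this inspects outcomes rather than motives, which is about as much as we manage for human decision makers too.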

April 7, 2016

Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy

Filed under: BigData,Data Science,Ethics,History,Mathematics — Patrick Durusau @ 9:19 pm

Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil.

[Book cover: Weapons of Math Destruction]

From the description at Amazon:

We live in the age of the algorithm. Increasingly, the decisions that affect our lives—where we go to school, whether we get a car loan, how much we pay for health insurance—are being made not by humans, but by mathematical models. In theory, this should lead to greater fairness: Everyone is judged according to the same rules, and bias is eliminated. But as Cathy O’Neil reveals in this shocking book, the opposite is true. The models being used today are opaque, unregulated, and uncontestable, even when they’re wrong. Most troubling, they reinforce discrimination: If a poor student can’t get a loan because a lending model deems him too risky (by virtue of his race or neighborhood), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues. Models are propping up the lucky and punishing the downtrodden, creating a “toxic cocktail for democracy.” Welcome to the dark side of Big Data.

Tracing the arc of a person’s life, from college to retirement, O’Neil exposes the black box models that shape our future, both as individuals and as a society. Models that score teachers and students, sort resumes, grant (or deny) loans, evaluate workers, target voters, set parole, and monitor our health—all have pernicious feedback loops. They don’t simply describe reality, as proponents claim, they change reality, by expanding or limiting the opportunities people have. O’Neil calls on modelers to take more responsibility for how their algorithms are being used. But in the end, it’s up to us to become more savvy about the models that govern our lives. This important book empowers us to ask the tough questions, uncover the truth, and demand change.

Even if you have qualms about Cathy’s position, you have to admit that is a great book cover!

When I was in law school, I had F. Hodge O’Neal for corporation law. He is the O’Neal in O’Neal and Thompson’s Oppression of Minority Shareholders and LLC Members, Rev. 2d.

The publisher’s blurb is rather generous in saying:

Cited extensively, O’Neal and Thompson’s Oppression of Minority Shareholders and LLC Members shows how to take appropriate steps to protect minority shareholder interests using remedies, tactics, and maneuvers sanctioned by federal law. It clarifies the underlying cause of squeeze-outs and suggests proven arrangements for avoiding them.

You could read Oppression of Minority Shareholders and LLC Members that way, but when corporate law is taught with war stories from the antics of the robber barons forward, you get the impression that isn’t why people read it.

Not that I doubt Cathy’s sincerity, on the contrary, I think she is very sincere about her warnings.

Where I disagree with Cathy is in thinking that democracy is under greater attack now, or that inequality is a greater problem now, than before.

If you read The Half Has Never Been Told: Slavery and the Making of American Capitalism by Edward E. Baptist:

[Book cover: The Half Has Never Been Told]

carefully, you will leave it with deep uncertainty about the relationship of American government, federal, state and local, to any recognizable concept of democracy, or for that matter to the “equality” of its citizens.

Also unlike Cathy, I don’t expect that shaming people will result in “better” or more “honest” data analysis.

What you can do is arm yourself to do battle on behalf of your “side,” both in terms of exposing data manipulation by others and concealing your own.

Perhaps there is room in the marketplace for a book titled Suppression of Unfavorable Data. More than hiding data: what data not to collect? How to explain non-collection or loss? How to collect data in the least useful ways?

You would have to write it as a guide to avoiding these very bad practices, but everyone would know what you meant. It could be the next business management best seller.
