
October 4, 2016

Moral Machine [Research Design Failure]

Filed under: Ethics, Research Methods — Patrick Durusau @ 3:45 pm

Moral Machine

From the webpage:

Welcome to the Moral Machine! A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars.

We show you moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you judge which outcome you think is more acceptable. You can then see how your responses compare with those of other people.

If you’re feeling creative, you can also design your own scenarios, for you and others to browse, share, and discuss.

The first time I recall hearing this type of discussion was over thirty years ago, when a friend taking an ethics class related the following problem:

You are driving a troop transport with twenty soldiers in the back and are about to enter a one-lane bridge. You see a baby sitting in the middle of the bridge. Do you swerve, going down an embankment and killing everyone on board, or do you go straight?

A lively classroom discussion erupted and continued for the entire class, with various theories and justifications on offer. When the bell rang, the professor announced that the child had perished 59 minutes and 59 seconds earlier.

As you may guess, not a single person in the class called out “Swerve” when the question was posed.

The exercise was to illustrate that many “moral” decisions are made at the limits of human reaction time, typically between 150 and 300 milliseconds. (Speedy Science: How Fast Can You React? is a great activity from Scientific American for testing your reaction time.)
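
If you want a feel for how little time 150–300 milliseconds is, a rough measurement takes only a few lines of Python. This is a minimal sketch of my own, not the Scientific American activity; terminal input adds keyboard and display latency, so expect readings somewhat slower than the pure visual reaction-time figures.

```python
import random
import time

def reaction_trial() -> float:
    """Run one trial: wait a random interval, then time the response."""
    input("Press Enter to start the trial...")
    time.sleep(random.uniform(2, 5))  # random delay defeats anticipation
    start = time.perf_counter()
    input("GO! Press Enter as fast as you can: ")
    return (time.perf_counter() - start) * 1000.0  # elapsed time in ms

if __name__ == "__main__":
    times = [reaction_trial() for _ in range(5)]
    print("Trials (ms):", [round(t) for t in times])
    print(f"Average: {sum(times) / len(times):.0f} ms")
```

Even with the extra latency of a keyboard, most people will land in the 200–400 ms range, which is roughly the entire window a driver has to make the “moral” decision the classroom spent an hour debating.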

The examples in MIT’s Moral Machine perpetuate the myth that moral decisions are the result of reflection and consideration of multiple factors.

Considered moral decisions do exist: Dietrich Bonhoeffer deciding to participate in a conspiracy to assassinate Adolf Hitler; Lyndon Johnson supporting civil rights in the South. But those are not the subject of the “Moral Machine.”

Nor is the “Moral Machine” even a useful simulation of what a human-driven or driverless car would confront. Visibility isn’t an issue, as it so often is on real roads; there are no distractions, no smartphones ringing, no conflicting input from passengers, and so on.

In short, the “Moral Machine” creates a fictional choice, about which to solicit your “moral” advice, under conditions you will never experience.

Separating pedestrians from vehicles (an idea once suggested by Buckminster Fuller, I think) is a far more useful exercise than college-level discussion questions.
