Archive for the ‘Artificial Intelligence’ Category

“Ethical” Botmakers Censor Offensive Content

Saturday, March 26th, 2016

There are almost 500,000 “hits” for “tay ai” in one popular search engine today.

Against that background, I ran into: How to Make a Bot That Isn’t Racist by Sarah Jeong.

From the post:

…I talked to some creators of Twitter bots about @TayandYou, and the consensus was that Microsoft had fallen far below the baseline of ethical botmaking.

“The makers of @TayandYou absolutely 10000 percent should have known better,” thricedotted, a veteran Twitter botmaker and natural language processing researcher, told me via email. “It seems like the makers of @TayandYou attempted to account for a few specific mishaps, but sorely underestimated the vast potential for people to be assholes on the internet.”

Thricedotted and others belong to an established community of botmakers on Twitter that have been creating and experimenting for years. There’s a Bot Summit. There’s a hashtag (#botALLY).

As I spoke to each botmaker, it became increasingly clear that the community at large was tied together by crisscrossing lines of influence. There is a well-known body of talks, essays, and blog posts that form a common ethical code. The botmakers have even created open source blacklists of slurs that have become Step 0 in keeping their bots in line.
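That “Step 0” wordlist check is simple to sketch. In the Python below, the blacklist entries are placeholders (real botmakers load a community-maintained list), and `send_tweet` is a hypothetical posting call:

```python
# Placeholder terms -- a real bot loads a community-maintained blacklist file.
BLACKLIST = {"slur1", "slur2"}

def is_safe(text: str) -> bool:
    """Reject any candidate tweet containing a blacklisted substring."""
    lowered = text.lower()
    return not any(term in lowered for term in BLACKLIST)

def post_if_safe(text: str) -> bool:
    """Gate every outgoing tweet on the wordlist check."""
    if not is_safe(text):
        return False  # drop the candidate and generate another
    # send_tweet(text)  # hypothetical posting call
    return True
```

Substring matching errs on the side of dropping too much, which is exactly the point of a Step 0 filter.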

Not researching prior art is as bad as not Reading The Fine Manual (RTFM) before posting help queries to heavy traffic developer forums.

Thricedotted claims TayandYou’s creators had a prior obligation to block offensive content:

For thricedotted, TayandYou failed from the start. “You absolutely do NOT let an algorithm mindlessly devour a whole bunch of data that you haven’t vetted even a little bit,” they said. “It blows my mind, because surely they’ve been working on this for a while, surely they’ve been working with Twitter data, surely they knew this shit existed. And yet they put in absolutely no safeguards against it?!” (emphasis in original)

No doubt Microsoft wishes that it had blocked offensive content in hindsight, but I don’t see a general ethical obligation to block or censor offensive content.

For example:

  • A bot that follows the public and private accounts of elected officials and re-tweets only those posts that contain racial slurs, with @news-organization handles included in the tweets?
  • A bot that matches FEC (Federal Election Commission) donation records to Twitter accounts and re-tweets racist/offensive tweets along with campaign donation identifiers and the candidate in question?
  • A bot that follows accounts known for racist/offensive tweets in order to build publicly accessible archives of those tweets, preventing the future sanitizing of tweet archives (as happened with TayandYou)?

Do any of those strike you as “unethical”?

I wish the Georgia legislature and the U.S. Congress would openly use racist and offensive language.

They act in racist and offensive ways so they should be openly racist and offensive. Makes it easier to whip up effective opposition against known racists, etc.

Which is, of course, why they self-censor to not use racist language.

The world is full of offensive people and we should make them own their statements.

Creating a false, sanitized view that doesn’t offend some n+1 sensitivities is just that: a false view of the world.

If you are looking for an ethical issue, creating views of the world that help conceal racism, sexism, etc., is a better starting place than offensive ephemera.

“Not Understanding” was Tay’s Vulnerability?

Friday, March 25th, 2016

Peter Lee (Corporate Vice President, Microsoft Research) posted Learning from Tay’s introduction where he says:


Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.

But Peter never specifies what “vulnerability” Tay suffered from.

To find out why Tay was “vulnerable,” you have to read Microsoft is deleting its AI chatbot’s incredibly racist tweets by Rob Price, where he points out:


The reason it spouted garbage is that racist humans on Twitter quickly spotted a vulnerability — that Tay didn’t understand what it was talking about — and exploited it. (emphasis added)

Hmmm, how soon do you think Microsoft can confer on Tay the ability to “…understand what it [is] talking about…?”

I’m betting that’s not going to happen.

Tay can “learn” (read mimic) language patterns of users but if she speaks to racist users she will say racist things. Or religious, ISIS, sexist, Buddhist, trans-gender, or whatever things.

It isn’t ever going to be a question of Tay “understanding,” but rather of humans creating rules that prevent Tay from imitating certain speech patterns.

She will have no more or less “understanding” than before but her speech patterns will be more acceptable to some segments of users.

I have no doubt the result of Tay’s first day in the world was not what Microsoft wanted or anticipated.

That said, people are an ugly lot, and I don’t mean a minority of them. All of us are better on some days than others, and about some issues and not others.

To the extent that Tay was designed to imitate people, I consider the project to be a success. If you think Tay should react the way some people imagine we should act, then it was a failure.

There’s an interesting question for Easter weekend:

Should an artificial intelligence act as we do or should it act as we ought to do?

PS: I take Peter’s comments about “…do not represent who we are or what we stand for, nor how we designed Tay…” at face value. However, the human heart is a dark place, and to pretend that is true only of a minority or sub-group is to ignore the lessons of history.

AI Masters Go, Twitter, Not So Much (Log from @TayandYou?)

Thursday, March 24th, 2016

Microsoft deletes ‘teen girl’ AI after it became a Hitler-loving sex robot within 24 hours by Helena Horton.

From the post:

A day after Microsoft introduced an innocent Artificial Intelligence chat robot to Twitter it has had to delete it after it transformed into an evil Hitler-loving, incestual sex-promoting, ‘Bush did 9/11’-proclaiming robot.

Developers at Microsoft created ‘Tay’, an AI modelled to speak ‘like a teen girl’, in order to improve the customer service on their voice recognition software. They marketed her as ‘The AI with zero chill’ – and that she certainly is.

The headline was suggested to me by a tweet from Peter Seibel:

Interesting how wide the gap is between two recent AI: AlphaGo and TayTweets. The Turing Test is *hard*. http://gigamonkeys.com/turing/.

In preparation for the next AI celebration, does anyone have a complete log of the tweets from Tay Tweets?

I prefer non-revisionist history where data doesn’t disappear. You can imagine the use Stalin would have made of that capability.

Project AIX: Using Minecraft to build more intelligent technology

Monday, March 14th, 2016

Project AIX: Using Minecraft to build more intelligent technology by Allison Linn.

From the post:

In the airy, loft-like Microsoft Research lab in New York City, five computer scientists are spending their days trying to get a Minecraft character to climb a hill.

That may seem like a pretty simple job for some of the brightest minds in the field, until you consider this: The team is trying to train an artificial intelligence agent to learn how to do things like climb to the highest point in the virtual world, using the same types of resources a human has when she learns a new task.

That means that the agent starts out knowing nothing at all about its environment or even what it is supposed to accomplish. It needs to understand its surroundings and figure out what’s important – going uphill – and what isn’t, such as whether it’s light or dark. It needs to endure a lot of trial and error, including regularly falling into rivers and lava pits. And it needs to understand – via incremental rewards – when it has achieved all or part of its goal.

“We’re trying to program it to learn, as opposed to programming it to accomplish specific tasks,” said Fernando Diaz, a senior researcher in the New York lab and one of the people working on the project.

The research project is possible thanks to AIX, a platform developed by Katja Hofmann and her colleagues in Microsoft’s Cambridge, UK, lab and unveiled publicly on Monday. AIX allows computer scientists to use the world of Minecraft as a testing ground for conducting research designed to improve artificial intelligence.

The project is in closed beta now but said to be going open source in the summer of 2016.
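The trial-and-error learning Diaz describes, with no task-specific programming and only incremental rewards, can be sketched as tabular Q-learning on a toy one-dimensional “hill.” Everything here, from the five positions to the reward values, is a stand-in for the Minecraft world:

```python
import random

random.seed(0)
N = 5                       # positions 0..4; position 4 is the hilltop
ACTIONS = (-1, +1)          # step downhill or uphill
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(500):
    s = 0                   # the agent starts at the bottom, knowing nothing
    while s != N - 1:
        if random.random() < eps:       # explore...
            a = random.choice(ACTIONS)
        else:                           # ...or exploit what it has learned
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0     # incremental reward at the goal
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy climbs (+1) from every position.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)}
```

The first episodes are long stretches of stumbling downhill; once the reward propagates back through the Q-table, the agent climbs directly, which is the “program it to learn” point in miniature.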

Someone mentioned quite recently the state of documentation on Minecraft. Their impression was that there is a lot of information, but it is poorly organized.

If you are interested in exploring Minecraft ahead of the release this summer, see: How to Install Minecraft on Ubuntu or Any Other Linux Distribution.

Lee Sedol “busted up” AlphaGo – Game 4

Monday, March 14th, 2016

Lee Sedol defeats AlphaGo in masterful comeback – Game 4 by David Ormerod.

From the post:

Expectations were modest on Sunday, as Lee Sedol 9p faced the computer Go program AlphaGo for the fourth time.

Lee Sedol 9 dan, obviously relieved to win his first game.

After Lee lost the first three games, his chance of winning the five game match had evaporated.

His revised goal, and the hope of millions of his fans, was that he might succeed in winning at least one game against the machine before the match concluded.

However, his prospects of doing so appeared to be bleak, until suddenly, just when all seemed to be lost, he pulled a rabbit out of a hat.

And he didn’t even have a hat!

Lee Sedol won game four by resignation.

A reversal of roles but would you say that Sedol “busted up” AlphaGo?

Looking forward to the results of Game 5!

Automating Amazon/Hotel/Travel Reviews (+ Human Intelligence Test (HIT))

Sunday, February 28th, 2016

The Neural Network That Remembers by Zachary C. Lipton & Charles Elkan.

From the post:

On tap at the brewpub. A nice dark red color with a nice head that left a lot of lace on the glass. Aroma is of raspberries and chocolate. Not much depth to speak of despite consisting of raspberries. The bourbon is pretty subtle as well. I really don’t know that find a flavor this beer tastes like. I would prefer a little more carbonization to come through. It’s pretty drinkable, but I wouldn’t mind if this beer was available.

Besides the overpowering bouquet of raspberries in this guy’s beer, this review is remarkable for another reason. It was produced by a computer program instructed to hallucinate a review for a “fruit/vegetable beer.” Using a powerful artificial-intelligence tool called a recurrent neural network, the software that produced this passage isn’t even programmed to know what words are, much less to obey the rules of English syntax. Yet, by mining the patterns in reviews from the barflies at BeerAdvocate.com, the program learns how to generate similarly coherent (or incoherent) reviews.

The neural network learns proper nouns like “Coors Light” and beer jargon like “lacing” and “snifter.” It learns to spell and to misspell, and to ramble just the right amount. Most important, the neural network generates reviews that are contextually relevant. For example, you can say, “Give me a 5-star review of a Russian imperial stout,” and the software will oblige. It knows to describe India pale ales as “hoppy,” stouts as “chocolatey,” and American lagers as “watery.” The neural network also learns more colorful words for lagers that we can’t put in print.

This particular neural network can also run in reverse, taking any review and recognizing the sentiment (star rating) and subject (type of beer). This work, done by one of us (Lipton) in collaboration with his colleagues Sharad Vikram and Julian McAuley at the University of California, San Diego, is part of a growing body of research demonstrating the language-processing capabilities of recurrent networks. Other related feats include captioning images, translating foreign languages, and even answering e-mail messages. It might make you wonder whether computers are finally able to think.

(emphasis in original)

An enthusiastic introduction and projection of the future of recurrent neural networks! Quite a bit so.

My immediate thought was what a time saver a recurrent neural network would be for “evaluation” requests that appear in my inbox with alarming regularity.

What about a service that accepts forwarded emails and generates a review for the book, seller, hotel, travel, etc., which is returned to you for cut-n-paste?

That would be about as “intelligent” as the amount of attention most of us devote to such requests.

You could set the service to mimic highly followed reviewers so over time you would move up the ranks of reviewers.

I mention Amazon, hotel, travel reviews but those are just low-hanging fruit. You could do journal book reviews with a different data set.
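A toy version of such a review generator is easy to build. The sketch below uses a word-level Markov chain instead of the recurrent network the article describes (an RNN needs a training framework and a large corpus; the chain illustrates the same “mine the patterns, then hallucinate” idea on a few phrases borrowed from the beer review above):

```python
import random
from collections import defaultdict

def train(corpus: str):
    """Map each word to the words observed to follow it."""
    words = corpus.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, seed: str, length: int = 12) -> str:
    """Hallucinate text by walking the learned transitions."""
    out = [seed]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

# A tiny corpus of phrases borrowed from the generated review above.
reviews = ("nice dark red color with a nice head . "
           "aroma is of raspberries and chocolate . "
           "pretty drinkable but a little more carbonization .")
model = train(reviews)
print(generate(model, "nice"))
```

Feed it enough forwarded “evaluation” requests and matching past reviews, and you have the cut-n-paste service described above, with exactly as much understanding.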

Near the end of the post the authors write:


In this sense, the computer-science community is evaluating recurrent neural networks via a kind of Turing test. We try to teach a computer to act intelligently by training it to imitate what people produce when faced with the same task. Then we evaluate our thinking machine by seeing whether a human judge can distinguish between its output and what a human being might come up with.

While the very fact that we’ve come this far is exciting, this approach may have some fundamental limitations. For instance, it’s unclear how such a system could ever outstrip the capabilities of the people who provide the training data. Teaching a machine to learn through imitation might never produce more intelligence than was present collectively in those people.

One promising way forward might be an approach called reinforcement learning. Here, the computer explores the possible actions it can take, guided only by some sort of reward signal. Recently, researchers at Google DeepMind combined reinforcement learning with feed-forward neural networks to create a system that can beat human players at 31 different video games. The system never got to imitate human gamers. Instead it learned to play games by trial and error, using its score in the video game as a reward signal.

Instead of asking whether computers can think, the more provocative question is whether people think at all during a large range of daily activities.

Consider it as the Human Intelligence Test (HIT).

How much “intelligence” does it take to win a video game?

Eye/hand coordination, to be sure, and attention, but what “intelligence” is involved?

Computers may “eclipse” human beings at non-intelligent activities, as a shovel “eclipses” our ability to dig with our bare hands.

But I’m not overly concerned.

Are you?

Danger of Hackers vs. AI

Sunday, January 31st, 2016

An interactive graphical history of large data breaches by Mark Gibbs.

From the post:

If you’re trying to convince your management to beef up the organization’s security to protect against data breaches, an interactive infographic from Information Is Beautiful might help.

Built with IIB’s forthcoming VIZsweet data visualization tools, the World’s Biggest Data Breaches visualization combines data from DataBreaches.net, IdTheftCentre, and press reports to create a timeline of breaches that involved the loss of 30,000 or more records (click the image below to go to the interactive version). What’s particularly interesting is that while breaches were caused by accidental publishing, configuration errors, inside job, lost or stolen computer, lost or stolen media, or just good old poor security, the majority of events and the largest, were due to hacking.

Make sure the powers that be understand that you don’t have to be a really big organization for a serious data breach to happen.

See Mark’s post for the image and link to the interactive graphic.

Hackers (human intelligence) are kicking cybersecurity’s ass 24 x 7.

The danger of AI (artificial intelligence)? Maybe, someday, it might be a problem, but we don’t know when or to what extent.

What priority do you assign these issues in your IT budget?

If you said hackers are #1, congratulations! You have an evidence-based IT budgeting process.

Otherwise, well, see you at DragonCon. I’m sure you will have lots of free time when you aren’t in the unemployment line.

PS: Heavy spending on what is mis-labeled as “artificial intelligence” is perfectly legitimate. Think of it as training computers to do tasks humans can’t do or that machines can do more effectively. Calling it AI loads it with unnecessary baggage.

Google’s Go Victory/AI Danger Summarized In One Sentence

Sunday, January 31st, 2016

Google’s Go Victory Is Just A Glimpse Of How Powerful AI Will Be by Cade Metz.

Cade manages to summarize the implications of the Google Go victory and the future danger of AI in one concise sentence:

Bostrom’s book makes the case that AI could be more dangerous than nuclear weapons, not only because humans could misuse it but because we could build AI systems that we are somehow not able to control.

If you don’t have time for the entire article, that sentence summarizes the article as well.

Pay particular attention to the part that reads: “…that we are somehow not able to control.”

Is that like a Terex 33-19 “Titan”

[Image: Terex 33-19 “Titan” haul truck]

with a nuclear power supply and no off switch? (Yes, that is a person in the second wheel from the front.)

We learned only recently that consciousness, at least as we understand the term now, is a product of chaotic and cascading connections. Consciousness May Be the Product of Carefully Balanced Chaos [Show The Red Card].

One supposes that positronic brains (warning: fiction) must share that chaotic characteristic.

However, Cade and Bostrom fail to point to any promising research on the development of positronic brains.

That’s not to deny that poor choices could be made by an AI designed by Aussies. If projected global warming exceeds three degrees Celsius, set off a doomsday bomb. (On the Beach)

The lesson there is two-fold: Don’t build doomsday weapons. Don’t put computers in charge of them.

The danger from AI is in the range of a gamma ray burst ending civilization. If that high.

On the other hand, if you want work that requires only a solid background in science fiction, a knack for sound bites in the media, and an ability to attract doomsday groupies of all genders, it doesn’t take a lot of research.

The only real requirement is to wring your hands over some imagined scenario, without being able to say whether it will occur or how it will doom us all. Throw in some of the latest buzzwords and you have a presentation/speech/book.

Consciousness May Be the Product of Carefully Balanced Chaos [Show The Red Card]

Thursday, January 28th, 2016

Consciousness May Be the Product of Carefully Balanced Chaos by sciencehabit.

From the posting:

The question of whether the human consciousness is subjective or objective is largely philosophical. But the line between consciousness and unconsciousness is a bit easier to measure. In a new study (abstract) of how anesthetic drugs affect the brain, researchers suggest that our experience of reality is the product of a delicate balance of connectivity between neurons—too much or too little and consciousness slips away. During wakeful consciousness, participants’ brains generated “a flurry of ever-changing activity”, and the fMRI showed a multitude of overlapping networks activating as the brain integrated its surroundings and generated a moment to moment “flow of consciousness.” After the propofol kicked in, brain networks had reduced connectivity and much less variability over time. The brain seemed to be stuck in a rut—using the same pathways over and over again.

These researchers need to be shown the red card as they say in soccer.

I thought it was agreed that during the Human Brain Project, no one would research or publish new information about the human brain, in order to allow the EU project to complete its “working model” of the human brain.

The Human Brain Project is a butts-in-seats and/or hotels project, and a gumball machine will be able to duplicate its results. But discovering vast amounts of unknown facts demonstrates the lack of an adequate foundation for the project at its inception.

In other words, more facts may decrease public support for ill-considered WPA projects for science.

Calling the “judgement” (favoritism would be a more descriptive term) of award managers into question surely merits the “red card” in this instance.

(Note to readers: This post is to be read as sarcasm. The excellent research reported by Enzo Tagliazucchi, et al. in Large-scale signatures of unconsciousness are consistent with a departure from critical dynamics is an indication of some of the distance between current research and replication of a human brain.)

The full abstract if you are interested:

Loss of cortical integration and changes in the dynamics of electrophysiological brain signals characterize the transition from wakefulness towards unconsciousness. In this study, we arrive at a basic model explaining these observations based on the theory of phase transitions in complex systems. We studied the link between spatial and temporal correlations of large-scale brain activity recorded with functional magnetic resonance imaging during wakefulness, propofol-induced sedation and loss of consciousness and during the subsequent recovery. We observed that during unconsciousness activity in frontothalamic regions exhibited a reduction of long-range temporal correlations and a departure of functional connectivity from anatomical constraints. A model of a system exhibiting a phase transition reproduced our findings, as well as the diminished sensitivity of the cortex to external perturbations during unconsciousness. This framework unifies different observations about brain activity during unconsciousness and predicts that the principles we identified are universal and independent from its causes.

The “official” version of this article lies behind a paywall but you can see it at: http://arxiv.org/pdf/1509.04304.pdf for free.

Kudos to the authors for making their work accessible to everyone!

I first saw this in a Facebook post by Simon St. Laurent.

Top 100 AI Influencers of 2015 – Where Are They Now? [Is There A Curator In The House?]

Wednesday, January 6th, 2016

Top 100 Artificial and Robotics Influencers 2015.

Kirk Borne tweeted the Top 100 … link today.

More interesting than most listicles, but as static HTML it doesn’t lend itself to re-use.

For example, can you tell me:

  • Academic publications anyone listed had in 2014? (One assumes the year they were judged against for the 2015 list.)
  • Academic publications anyone listed had in 2015?
  • Which of these people were co-authors?
  • Which of these people have sent tweets on AI?
  • etc.

Lists appear organized, and we love organization at little or no cost. But other than pandering to that love of lists, what does an HTML listicle have to say for itself?

This is a top candidate for one or two XQuery posts next week. I need to finish this week on making congressional roll call vote documents useful. See: Jazzing Up Roll Call Votes For Fun and Profit (XQuery) for the start of that series.

We Know How You Feel [A Future Where Computers Remain Imbeciles]

Wednesday, December 16th, 2015

We Know How You Feel by Raffi Khatchadourian.

From the post:

Three years ago, archivists at A.T. & T. stumbled upon a rare fragment of computer history: a short film that Jim Henson produced for Ma Bell, in 1963. Henson had been hired to make the film for a conference that the company was convening to showcase its strengths in machine-to-machine communication. Told to devise a faux robot that believed it functioned better than a person, he came up with a cocky, boxy, jittery, bleeping Muppet on wheels. “This is computer H14,” it proclaims as the film begins. “Data program readout: number fourteen ninety-two per cent H2SOSO.” (Robots of that era always seemed obligated to initiate speech with senseless jargon.) “Begin subject: Man and the Machine,” it continues. “The machine possesses supreme intelligence, a faultless memory, and a beautiful soul.” A blast of exhaust from one of its ports vaporizes a passing bird. “Correction,” it says. “The machine does not have a soul. It has no bothersome emotions. While mere mortals wallow in a sea of emotionalism, the machine is busy digesting vast oceans of information in a single all-encompassing gulp.” H14 then takes such a gulp, which proves overwhelming. Ticking and whirring, it begs for a human mechanic; seconds later, it explodes.

The film, titled “Robot,” captures the aspirations that computer scientists held half a century ago (to build boxes of flawless logic), as well as the social anxieties that people felt about those aspirations (that such machines, by design or by accident, posed a threat). Henson’s film offered something else, too: a critique—echoed on television and in novels but dismissed by computer engineers—that, no matter a system’s capacity for errorless calculation, it will remain inflexible and fundamentally unintelligent until the people who design it consider emotions less bothersome. H14, like all computers in the real world, was an imbecile.

Today, machines seem to get better every day at digesting vast gulps of information—and they remain as emotionally inert as ever. But since the nineteen-nineties a small number of researchers have been working to give computers the capacity to read our feelings and react, in ways that have come to seem startlingly human. Experts on the voice have trained computers to identify deep patterns in vocal pitch, rhythm, and intensity; their software can scan a conversation between a woman and a child and determine if the woman is a mother, whether she is looking the child in the eye, whether she is angry or frustrated or joyful. Other machines can measure sentiment by assessing the arrangement of our words, or by reading our gestures. Still others can do so from facial expressions.

Our faces are organs of emotional communication; by some estimates, we transmit more data with our expressions than with what we say, and a few pioneers dedicated to decoding this information have made tremendous progress. Perhaps the most successful is an Egyptian scientist living near Boston, Rana el Kaliouby. Her company, Affectiva, formed in 2009, has been ranked by the business press as one of the country’s fastest-growing startups, and Kaliouby, thirty-six, has been called a “rock star.” There is good money in emotionally responsive machines, it turns out. For Kaliouby, this is no surprise: soon, she is certain, they will be ubiquitous.

This is a very compelling look at efforts that have in practice made computers more responsive to the emotions of users, with the goal of influencing users based upon the emotions that are detected.

Sound creepy already?

The article is fairly long but a great insight into progress already being made and that will be made in the not too distant future.

However, “emotionally responsive machines” remain the same imbeciles as they were in the story of H14. That is to say, they can only “recognize” emotions much as they can “recognize” color. To be sure such a machine “learns,” but its reaction upon recognition remains a matter of programming and/or training.
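The point can be made concrete with a toy “emotion recognizer.” Everything below (the mini-lexicon, the labels, the canned responses) is invented for illustration; commercial systems such as Affectiva use models trained on facial and vocal features, not word lists. But the shape is the same: pattern match, then a programmed reaction:

```python
# Hypothetical mini-lexicon: word -> emotion label (invented for illustration).
LEXICON = {
    "great": "joy", "love": "joy",
    "awful": "anger", "hate": "anger",
    "afraid": "fear", "worried": "fear",
}

# The "reaction" is just more programming.
RESPONSES = {"joy": "Glad to hear it!", "anger": "Sorry about that.",
             "fear": "That sounds stressful.", "neutral": "Tell me more."}

def recognize(text: str) -> str:
    """Count lexicon hits per emotion and return the most frequent label."""
    counts = {}
    for word in text.lower().split():
        label = LEXICON.get(word.strip(".,!?"))
        if label:
            counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get) if counts else "neutral"

def respond(text: str) -> str:
    """No understanding anywhere: recognition plus a canned reply."""
    return RESPONSES[recognize(text)]
```

Swap the word lists for trained classifiers over pitch or facial features and the architecture is unchanged: recognition feeding a programmed response.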

The next wave of startups will create programmable emotional images of speakers, edging the arms race for privacy just another step down the road. If I were investing in startups, I would concentrate on those to defeat emotional responsive computers.

If you don’t want to wait for a high tech way to defeat emotionally responsive computers, may I suggest a fairly low tech solution:

Wear a mask!

One of my favorites:

[Image: Egyptian Guy Fawkes mask]

(From https://commons.wikimedia.org/wiki/Category:Masks_of_Guy_Fawkes. There are several unusual images there.)

Or choose any number of other masks at your nearest variety store.

A hard mask that conceals your eyes and movement of your face will defeat any “emotionally responsive computer.”

If you are concerned about your voice giving you away, a search for “voice changer” returns over 4 million “hits” for software to alter your vocal characteristics. Much of it free.
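A crude voice changer is nothing more than resampling: play the samples back at a different rate and the pitch shifts with it. The naive sketch below (which, unlike real pitch-shifters, also changes the clip’s duration) works on any list of audio samples:

```python
def naive_pitch_shift(samples, factor):
    """Resample by index scaling: factor > 1 raises pitch (and shortens the clip)."""
    n = int(len(samples) / factor)
    return [samples[int(i * factor)] for i in range(n)]

# Doubling the playback rate halves the length and raises pitch one octave.
tone = list(range(1000))           # stand-in for real audio samples
shifted = naive_pitch_shift(tone, 2.0)
```

Proper voice changers use pitch-shifting that preserves duration and formants; this is only the underlying idea.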

Defeating “emotionally responsive computers” remains like playing checkers against an imbecile. If you lose, it’s your own damned fault.

PS: If you have a Max Headroom type TV and don’t want to wear a mask all the time, consider this solution for its camera:

[Image: cutting tool]

Any startups yet based on defeating the Internet of Things (IoT)? Predicting 2016/17 will be the year for those to take off.

Introducing OpenAI [Name Surprise: Not SkyNet II or Terminator]

Friday, December 11th, 2015

Introducing OpenAI by Greg Brockman, Ilya Sutskever, and the OpenAI team.

From the webpage:

OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.

Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.

The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.

Background

Artificial intelligence has always been a surprising field. In the early days, people thought that solving certain tasks (such as chess) would lead us to discover human-level intelligence algorithms. However, the solution to each task turned out to be much less general than people were hoping (such as doing a search over a huge number of moves).

The past few years have held another flavor of surprise. An AI technique explored for decades, deep learning, started achieving state-of-the-art results in a wide variety of problem domains. In deep learning, rather than hand-code a new algorithm for each problem, you design architectures that can twist themselves into a wide range of algorithms based on the data you feed them.

This approach has yielded outstanding results on pattern recognition problems, such as recognizing objects in images, machine translation, and speech recognition. But we’ve also started to see what it might be like for computers to be creative, to dream, and to experience the world.

Looking forward

AI systems today have impressive but narrow capabilities. It seems that we’ll keep whittling away at their constraints, and in the extreme case they will reach human performance on virtually every intellectual task. It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.

OpenAI

Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.

We’re hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.

OpenAI’s research director is Ilya Sutskever, one of the world experts in machine learning. Our CTO is Greg Brockman, formerly the CTO of Stripe. The group’s other founding members are world-class research engineers and scientists: Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba. Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group. OpenAI’s co-chairs are Sam Altman and Elon Musk.

Sam, Greg, Elon, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research are donating to support OpenAI. In total, these funders have committed $1 billion, although we expect to only spend a tiny fraction of this in the next few years.

You can follow us on Twitter at @open_ai or email us at info@openai.com.

Seeing that Elon Musk is the co-chair of this project, I was surprised the name wasn’t SkyNet II or Terminator. But OpenAI is a more neutral name and, given the planned transparency of the project, a good one.

I also appreciate that the project is not engineered around spending money over a ten-year term. Doing the research first and then formulating plans for the next step sounds far more sensible.

Whether any project ever achieves “artificial intelligence” equivalent to human intelligence or not, this project may be a template for how to usefully explore complex scientific questions.

[A] Game-Changing Go Engine (Losing mastery over computers? Hardly.)

Friday, December 4th, 2015

How Facebook’s AI Researchers Built a Game-Changing Go Engine

From the post:

One of the last bastions of human mastery over computers is the game of Go—the best human players beat the best Go engines with ease.

That’s largely because of the way Go engines work. These machines search through all possible moves to find the strongest.

While this brute force approach works well in draughts and chess, it does not work well in Go because of the sheer number of possible positions on a board. In draughts, the number of board positions is around 10^20; in chess it is 10^60.

But in Go it is 10^100—that’s significantly more than the number of particles in the universe. Searching through all these is unfeasible even for the most powerful computers.

So in recent years, computer scientists have begun to explore a different approach. Their idea is to find the most powerful next move using a neural network to evaluate the board. That gets around the problem of searching. However, neural networks have yet to match the level of good amateur players or even the best search-based Go engines.

Today, that changes thanks to the work of Yuandong Tian at Facebook AI Research in Menlo Park and Yan Zhu at Rutgers University in New Jersey. These guys have combined a powerful neural network approach with a search-based machine to create a Go engine that plays at an impressively advanced level and has room to improve.

The new approach is based in large part on advances that have been made in neural network-based machine learning in just the last year or two. This is the result of a better understanding of how neural networks work and the availability of larger and better databases to train them.

This is how Tian and Zhu begin. They start with a database of some 250,000 real Go games. They used 220,000 of these as a training database. They used the rest to test the neural network’s ability to predict the next moves that were played in real games.
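As a rough illustration of that evaluation setup (this is not the authors’ code; the function names and game encoding are hypothetical), a held-out test set lets you score a move predictor by how often it picks the move a human actually played:

```python
import random

def split_games(games, n_train=220_000, seed=0):
    """Shuffle a game collection and split it into training and test
    sets, mirroring the roughly 250k / 220k split described above."""
    games = list(games)
    random.Random(seed).shuffle(games)
    return games[:n_train], games[n_train:]

def next_move_accuracy(predict, test_games):
    """Score a move predictor by how often it matches the move a human
    actually played. Each game is a list of (position, human_move)."""
    hits = total = 0
    for game in test_games:
        for position, human_move in game:
            hits += int(predict(position) == human_move)
            total += 1
    return hits / total
```

Next-move prediction accuracy of this kind is the standard proxy the paper uses before any games are actually played.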

If you want the full details, check out:

Better Computer Go Player with Neural Network and Long-term Prediction by Yuandong Tian, Yan Zhu.

Abstract:

Competing with top human players in the ancient game of Go has been a long-term goal of artificial intelligence. Go’s high branching factor makes traditional search techniques ineffective, even on leading-edge hardware, and Go’s evaluation function could change drastically with one stone change. Recent works [Maddison et al. (2015); Clark & Storkey (2015)] show that search is not strictly necessary for machine Go players. A pure pattern-matching approach, based on a Deep Convolutional Neural Network (DCNN) that predicts the next move, can perform as well as Monte Carlo Tree Search (MCTS)-based open source Go engines such as Pachi [Baudis & Gailly (2012)] if its search budget is limited. We extend this idea in our bot named darkforest, which relies on a DCNN designed for long-term predictions. Darkforest substantially improves the win rate for pattern-matching approaches against MCTS-based approaches, even with looser search budgets. Against human players, darkforest achieves a stable 1d-2d level on KGS Go Server, estimated from free games against human players. This substantially improves the estimated rankings reported in Clark & Storkey (2015), where DCNN-based bots are estimated at 4k-5k level based on performance against other machine players. Adding MCTS to darkforest creates a much stronger player: with only 1000 rollouts, darkforest+MCTS beats pure darkforest 90% of the time; with 5000 rollouts, our best model plus MCTS beats Pachi with 10,000 rollouts 95.5% of the time.
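The search side of darkforest+MCTS is far more elaborate than this, but the core Monte Carlo idea can be sketched in a few lines. This is a “flat” Monte Carlo chooser of my own, with `play_out` and the state encoding as hypothetical placeholders; full MCTS adds a search tree, and the paper additionally uses the DCNN as a prior over moves:

```python
import random

def monte_carlo_move(state, legal_moves, play_out, n_rollouts=100, rng=None):
    """Estimate each legal move's value by random rollouts and pick
    the best. play_out(state, move, rng) -> 1 for a win, 0 for a loss."""
    rng = rng or random.Random()
    best_move, best_rate = None, -1.0
    for move in legal_moves:
        wins = sum(play_out(state, move, rng) for _ in range(n_rollouts))
        rate = wins / n_rollouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move
```

The “rollout” counts quoted in the abstract (1,000 vs. 10,000) are the budget for exactly this kind of simulated play-out, organized by a tree rather than a flat loop.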

The author closes with this summary:

This kind of research is still in its early stages, so improvements are likely in the near future. It may be that humans are about to lose their mastery over computers in yet another area.

I may have to read the article again, because the program as described:

  • Did not invent the game of Go or any of its rules.
  • Did not play any of the 220,000 actual Go games used for training.

That is to say, the game of Go was invented by people, and people playing Go supplied the training data for this Go-playing computer.

Not to take anything away from the program or these researchers, but humans are hardly about to lose “…mastery over computers in yet another area.”

Humans remain the creators of such games, the source of the training data, and the standard against which the computer is measured.

Who do you think is master in such a relationship?*

* Modulo that the DHS wants to make answers from computers the basis for violating your civil liberties. But that’s a different type of “mastery” issue.

Why Neurons Have Thousands of Synapses! (Quick! Someone Call the EU Brain Project!)

Thursday, November 12th, 2015

Single Artificial Neuron Taught to Recognize Hundreds of Patterns.

From the post:

Artificial intelligence is a field in the midst of rapid, exciting change. That’s largely because of an improved understanding of how neural networks work and the creation of vast databases to help train them. The result is machines that have suddenly become better at things like face and object recognition, tasks that humans have always held the upper hand in (see “Teaching Machines to Understand Us”).

But there’s a puzzle at the heart of these breakthroughs. Although neural networks are ostensibly modeled on the way the human brain works, the artificial neurons they contain are nothing like the ones at work in our own wetware. Artificial neurons, for example, generally have just a handful of synapses and entirely lack the short, branched nerve extensions known as dendrites and the thousands of synapses that form along them. Indeed, nobody really knows why real neurons have so many synapses.

Today, that changes thanks to the work of Jeff Hawkins and Subutai Ahmad at Numenta, a Silicon Valley startup focused on understanding and exploiting the principles behind biological information processing. The breakthrough these guys have made is to come up with a new theory that finally explains the role of the vast number of synapses in real neurons and to create a model based on this theory that reproduces many of the intelligent behaviors of real neurons.

A very enjoyable and accessible summary of a paper on the cutting edge of neuroscience!

It is also relevant to another concern, which I will cover in the near future, but the post concludes with:

One final point is that this new thinking does not come from an academic environment but from a Silicon Valley startup. This company is the brain child of Jeff Hawkins, an entrepreneur, inventor and neuroscientist. Hawkins invented the Palm Pilot in the 1990s and has since turned his attention to neuroscience full-time.

That’s an unusual combination of expertise but one that makes it highly likely that we will see these new artificial neurons at work on real world problems in the not too distant future. Incidentally, Hawkins and Ahmad call their new toys Hierarchical Temporal Memory neurons or HTM neurons. Expect to hear a lot more about them.

If you want all the details, see:

Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex by Jeff Hawkins, Subutai Ahmad.

Abstract:

Neocortical neurons have thousands of excitatory synapses. It is a mystery how neurons integrate the input from so many synapses and what kind of large-scale network behavior this enables. It has been previously proposed that non-linear properties of dendrites enable neurons to recognize multiple patterns. In this paper we extend this idea by showing that a neuron with several thousand synapses arranged along active dendrites can learn to accurately and robustly recognize hundreds of unique patterns of cellular activity, even in the presence of large amounts of noise and pattern variation. We then propose a neuron model where some of the patterns recognized by a neuron lead to action potentials and define the classic receptive field of the neuron, whereas the majority of the patterns recognized by a neuron act as predictions by slightly depolarizing the neuron without immediately generating an action potential. We then present a network model based on neurons with these properties and show that the network learns a robust model of time-based sequences. Given the similarity of excitatory neurons throughout the neocortex and the importance of sequence memory in inference and behavior, we propose that this form of sequence memory is a universal property of neocortical tissue. We further propose that cellular layers in the neocortex implement variations of the same sequence memory algorithm to achieve different aspects of inference and behavior. The neuron and network models we introduce are robust over a wide range of parameters as long as the network uses a sparse distributed code of cellular activations. The sequence capacity of the network scales linearly with the number of synapses on each neuron. Thus neurons need thousands of synapses to learn the many temporal patterns in sensory stimuli and motor sequences.
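To make the central claim concrete, here is a minimal sketch (my own, not Numenta’s NuPIC code) of a neuron whose dendritic segments recognize sparse patterns by synaptic overlap, with a distal match producing a predictive (depolarized) state rather than a spike:

```python
def segment_active(synapses, active_cells, threshold):
    """A dendritic segment 'recognizes' a pattern when enough of its
    synapses coincide with currently active presynaptic cells."""
    return len(synapses & active_cells) >= threshold

def neuron_state(proximal, distal_segments, active_cells, threshold=8):
    """'fire' if the classic receptive field (proximal segment) matches;
    'predictive' if any distal segment matches (depolarized, no spike);
    otherwise 'inactive'."""
    if segment_active(proximal, active_cells, threshold):
        return 'fire'
    if any(segment_active(s, active_cells, threshold) for s in distal_segments):
        return 'predictive'
    return 'inactive'
```

Overlap-with-threshold matching is what makes the recognition robust to noise: a pattern still matches even if some of its cells are missing, and with sparse codes two unrelated patterns are unlikely to share enough cells to cross the threshold.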

BTW, did I mention the full source code is available at: https://github.com/numenta/nupic?

Coming from a startup, this discovery doesn’t have a decade of support for travel, meals, lodging, support staff, publications, administrative overhead, etc., for a cast of hundreds across the EU. But then, a decade of that support would not have resulted in such a fundamental discovery in any event.

Is that a hint about the appropriate vehicle for advancing fundamental discoveries in science?

Lessons in Truthful Disparagement

Friday, October 30th, 2015

Cathy O’Neil, mathbabe, featured a guest post on her blog about the EU Human Brain Project.

I am taking notes on truthful disparagement from Dirty Rant About The Human Brain Project.

Just listing the main section headers:

  1. We have no fucking clue how to simulate a brain.
  2. We have no fucking clue how to wire up a brain.
  3. We have no fucking clue what makes human brains work so well.
  4. We have no fucking clue what the parameters are.
  5. We have no fucking clue what the important thing to simulate is.

The guest post was authored by a neuroscientist.

Cathy has just posted her slides for a day-long workshop on data science (to be held in Stockholm), if you want something serious to read after you stop laughing about the EU Human Brain Project.

The Bogus Bogeyman of the Brainiac Robot Overlord

Thursday, September 10th, 2015

The Bogus Bogeyman of the Brainiac Robot Overlord by James Kobielus.

From the post:


One of the most overused science-fiction tropes is that of the super-intelligent “robot overlord” that, through human negligence or malice, has enslaved us all. Any of us can name at least one of these off the tops of our heads (e.g., “The Matrix” series). Fear of this Hollywood-fueled cultural bogeyman has stirred up anxiety about the role of machine learning, cognitive computing, and artificial intelligence (AI) in our lives, as I discussed in this recent IBM Big Data & Analytics Hub blog. It’s even fostering uneasiness about the supposedly sinister potential for our smart devices to become “smarter” than us and thereby invisibly monitor and manipulate our every action. I discussed that matter in this separate blog.

This issue will be with us forever, much the way that UFO conspiracy theorists have kept their article of faith alive in the popular mind since the early Cold War era. In the Hollywood-stoked popular mindset that surrounds this issue, the supposed algorithmic overlords represent the evil puppets dangled among us by “Big Brother,” diabolical “technocrats,” and other villains for whom there’s no Superman who might come to our rescue.

Highly entertaining take on the breathless reports that we have to stop some forms of research now or we will be enslaved and then eradicated by our machines.

You could construct a very large bulldozer and instruct it to flatten Los Angeles, but that’s not an AI problem; that’s an HI (human intelligence) issue.

I first saw this because Bob DuCharme tweeted:

“The Bogus Bogeyman of the Brainiac Robot Overlord”: best article title I’ve seen in a long time

+1! to that!

Here’s Why Elon Musk Is Wrong About AI

Wednesday, July 8th, 2015

Here’s Why Elon Musk Is Wrong About AI by Chris V. Nicholson.

From the post:

Nothing against Elon Musk, but the campaign he’s leading against AI is an unfortunate distraction from the true existential threats to humanity: global warming and nuclear proliferation.

Last year was the hottest year on record. We humans as a whole are just a bunch of frogs in a planet-sized pot of boiling water. We’re cooking ourselves with coal and petroleum, pumping carbon dioxide into the air. Smart robots should be the least of our worries.

Pouring money into AI ethics research is the wrong battle to pick because a) it can’t be won, b) it shouldn’t be fought, and c) to survive, humans must focus on other, much more urgent, issues. In the race to destroy humanity, other threats are much better bets than AI.

Not that I disagree with Nicholson; there are much more important issues to worry about than rogue AI. But that overlooks one critical aspect of Musk’s argument.

Musk has said to the world that he’s worried about AI and, more importantly, he has $7 Million+ for anyone who worries about it with him.

Your choices are:

  1. Ignore Musk because building an artificial intelligence when we don’t understand human intelligence seems too remote to be plausible, or
  2. Agree with Musk and if you are in a research group, take a chance on a part of $7 Million in grants.

I am firmly in the #1 camp, because I have better things to do with my time than attend UFO-type meetings. Unfortunately, there are a lot of people in the #2 camp. It just depends on how much money is being offered.

There are any number of research projects that legitimately push the boundaries of knowledge. Unfortunately the government and others also fund projects that are wealth re-distribution programs for universities, hotels, transportation, meeting facilities and the like.

PS: There is a lot of value in the programs being explored under the misnomer of “artificial intelligence.” I don’t have an alternative moniker to suggest but it needs one.

#DallasPDShooting

Sunday, June 14th, 2015

AJ+ tweets today:

A gunman fired at police and had a van full of pipe bombs, but no one called him a terrorist. #DallasPDShooting

That’s easy enough to explain:

  1. He wasn’t setup by the FBI
  2. He wasn’t Muslim
  3. He wasn’t black

Next question.

New Survey Technique! Ask Village Idiots

Thursday, April 30th, 2015

I was deeply disappointed to see Scientific Computing with the headline: ‘Avengers’ Stars Wary of Artificial Intelligence by Ryan Pearson.

The respondents are all talented movie stars, but acting talent and even celebrity doesn’t give them insight into issues such as artificial intelligence. You might as well ask football coaches about the radiation hazards of a possible mission to Mars. Football coaches, the winning ones anyway, are bright and intelligent folks, but as a class they aren’t the usual suspects to ask about interplanetary radiation hazards.

President Reagan was known to confuse movies with reality but that was under extenuating circumstances. Confusing people acting in movies with people who are actually informed on a subject doesn’t make for useful news reporting.

Asking Chris Hemsworth who plays Thor in Avengers: Age of Ultron what the residents of Asgard think about relief efforts for victims of the recent earthquake in Nepal would be as meaningful.

They still publish the National Enquirer. A much better venue for “surveys” of the uninformed.

Unstructured Topic Map-Like Data Powering AI

Monday, March 23rd, 2015

Artificial Intelligence Is Almost Ready for Business by Brad Power.

From the post:

Such mining of digitized information has become more effective and powerful as more info is “tagged” and as analytics engines have gotten smarter. As Dario Gil, Director of Symbiotic Cognitive Systems at IBM Research, told me:

“Data is increasingly tagged and categorized on the Web – as people upload and use data they are also contributing to annotation through their comments and digital footprints. This annotated data is greatly facilitating the training of machine learning algorithms without demanding that the machine-learning experts manually catalogue and index the world. Thanks to computers with massive parallelism, we can use the equivalent of crowdsourcing to learn which algorithms create better answers. For example, when IBM’s Watson computer played ‘Jeopardy!,’ the system used hundreds of scoring engines, and all the hypotheses were fed through the different engines and scored in parallel. It then weighted the algorithms that did a better job to provide a final answer with precision and confidence.”
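The weighted combination Gil describes can be sketched as follows (a toy ensemble of my own; the engines and weights are invented for illustration, and Watson’s actual scoring machinery is far more involved):

```python
def ensemble_answer(hypotheses, engines, weights):
    """Score every hypothesis with every engine (conceptually in
    parallel), combine the scores with per-engine weights learned
    from past accuracy, and return the best hypothesis."""
    def combined(h):
        return sum(w * engine(h) for engine, w in zip(engines, weights))
    best = max(hypotheses, key=combined)
    return best, combined(best)
```

The weights are where the “crowdsourced” learning enters: engines that did a better job on past questions get more say in the final answer.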

Granted, the tagging and annotation is unstructured, unlike a topic map, but it is also unconstrained by first-order logic and the other crippling features of RDF and OWL. Out of that mass of annotations, algorithms can construct useful answers.

Imagine what non-experts (Stanford logic refugees need not apply) could author about your domain, to be fed into an AI algorithm. That would take more effort than relying upon users chancing upon subjects of interest, but it would also give you greater precision in the results.

Perhaps, just perhaps, one of the errors in the early topic maps days was the insistence on high editorial quality at the outset, as opposed to allowing editorial quality to emerge out of data.

As an editor I’m far more in favor of the former than the latter but seeing the latter work, makes me doubt that stringent editorial control is the only path to an acceptable degree of editorial quality.

What would a rough-cut topic map authoring interface look like?

Suggestions?

Can recursive neural tensor networks learn logical reasoning?

Thursday, March 19th, 2015

Can recursive neural tensor networks learn logical reasoning? by Samuel R. Bowman.

Abstract:

Recursive neural network models and their accompanying vector representations for words have seen success in an array of increasingly semantically sophisticated tasks, but almost nothing is known about their ability to accurately capture the aspects of linguistic meaning that are necessary for interpretation or reasoning. To evaluate this, I train a recursive model on a new corpus of constructed examples of logical reasoning in short sentences, like the inference of “some animal walks” from “some dog walks” or “some cat walks,” given that dogs and cats are animals. This model learns representations that generalize well to new types of reasoning pattern in all but a few cases, a result which is promising for the ability of learned representation models to capture logical reasoning.

From the introduction:

Natural language inference (NLI), the ability to reason about the truth of a statement on the basis of some premise, is among the clearest examples of a task that requires comprehensive and accurate natural language understanding [6].

I stumbled over that line in Samuel’s introduction because it implies, at least to me, that there is a notion of truth that resides outside of ourselves as speakers and hearers.

Take his first example:

Consider the statement all dogs bark. From this, one can infer quite a number of other things. One can replace the first argument of all (the first of the two predicates following it, here dogs) with any more specific category that contains only dogs and get a valid inference: all puppies bark; all collies bark.
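That monotonicity pattern (from “all dogs bark,” infer “all puppies bark”) is mechanical enough to sketch in code. The toy hierarchy and tuple representation below are my own, not Bowman’s corpus format:

```python
# A tiny category hierarchy: child -> immediate parent.
SUBCATEGORY = {
    'puppy': 'dog',
    'collie': 'dog',
    'dog': 'animal',
    'cat': 'animal',
}

def is_subcategory(child, parent):
    """Walk the hierarchy upward: every puppy is a dog, every dog an animal."""
    while child in SUBCATEGORY:
        child = SUBCATEGORY[child]
        if child == parent:
            return True
    return False

def infer_all(fact, category):
    """From ('all', X, pred), infer ('all', Y, pred) for any Y contained
    in X -- 'all' is downward monotone in its first argument."""
    quantifier, x, pred = fact
    assert quantifier == 'all'
    if category == x or is_subcategory(category, x):
        return ('all', category, pred)
    return None
```

The corpus examples in the abstract run the other direction for “some” (upward monotone: from “some dog walks” infer “some animal walks”), but the mechanical character of the inference is the same.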

Contrast that with one of the premises that starts my day:

All governmental statements are lies of omission or commission.

Yet, firmly holding that as a “fact” of the world, I write to government officials, post ranty blog posts about government policies, urge others to attempt to persuade government to take certain positions.

Or as Leonard Cohen would say:

Everybody knows that the dice are loaded

Everybody rolls with their fingers crossed

It’s not that I think Samuel is incorrect about monotonicity for “logical reasoning,” but monotonicity is a far cry from how people reason day to day.

Rather than creating “reasoning” that is such a departure from human inference, why not train a deep learning system to “reason” by exposing it to the same inputs and decisions made by human decision makers? Imitation doesn’t require understanding of human “reasoning,” just the ability to engage in the same behavior under similar circumstances.

That would reframe Samuel’s question to read: Can recursive neural tensor networks learn human reasoning?

I first saw this in a tweet by Sharon L. Bolding.

Researchers just built a free, open-source version of Siri

Sunday, March 15th, 2015

Researchers just built a free, open-source version of Siri by Jordan Novet.

From the post:

Major tech companies like Apple and Microsoft have been able to provide millions of people with personal digital assistants on mobile devices, allowing people to do things like set alarms or get answers to questions simply by speaking. Now, other companies can implement their own versions, using new open-source software called Sirius — an allusion, of course, to Apple’s Siri.

Today researchers from the University of Michigan are giving presentations on Sirius at the International Conference on Architectural Support for Programming Languages and Operating Systems in Turkey. Meanwhile, Sirius also made an appearance on Product Hunt this morning.

“Sirius … implements the core functionalities of an IPA (intelligent personal assistant) such as speech recognition, image matching, natural language processing and a question-and-answer system,” the researchers wrote in a new academic paper documenting their work. The system accepts questions and commands from a mobile device, processes information on servers, and provides audible responses on the mobile device.
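The description suggests a simple pipeline shape, sketched below with hypothetical stage functions (Sirius’ real components are off-the-shelf speech, vision, and question-answering systems wired together on back-end servers, not three Python callables):

```python
def handle_query(audio, speech_to_text, answer_question, text_to_speech):
    """Schematic IPA pipeline: recognize the speech, answer the
    question, then speak the answer back to the mobile device."""
    question = speech_to_text(audio)
    answer = answer_question(question)
    return text_to_speech(answer)
```

Because each stage is just a pluggable function, “custom intelligence” amounts to swapping in your own question-answering back end.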

Read the full academic paper (PDF) to learn more about Sirius. Find Sirius on GitHub here.

Opens up the possibility of an IPA (intelligent personal assistant) with custom intelligence. Are your day-to-day tasks Apple cookie-cutter tasks, or do they go beyond that?

The security implications are interesting as well. What if your IPA “reads” on a news stream that you have been arrested? Or if you fail to check in within some time window?

I first saw this in a tweet by Data Geek.

Artificial Neurons and Single-Layer Neural Networks…

Sunday, March 15th, 2015

Artificial Neurons and Single-Layer Neural Networks – How Machine Learning Algorithms Work Part 1 by Sebastian Raschka.

From the post:

This article offers a brief glimpse of the history and basic concepts of machine learning. We will take a look at the first algorithmically described neural network and the gradient descent algorithm in context of adaptive linear neurons, which will not only introduce the principles of machine learning but also serve as the basis for modern multilayer neural networks in future articles.

Machine learning is one of the hottest and most exciting fields in the modern age of technology. Thanks to machine learning, we enjoy robust email spam filters, convenient text and voice recognition, reliable web search engines, challenging chess players, and, hopefully soon, safe and efficient self-driving cars.

Without any doubt, machine learning has become a big and popular field, and sometimes it may be challenging to see the (random) forest for the (decision) trees. Thus, I thought that it might be worthwhile to explore different machine learning algorithms in more detail by not only discussing the theory but also by implementing them step by step.

To briefly summarize what machine learning is all about: “[Machine learning is the] field of study that gives computers the ability to learn without being explicitly programmed” (Arthur Samuel, 1959). Machine learning is about the development and use of algorithms that can recognize patterns in data in order to make decisions based on statistics, probability theory, combinatorics, and optimization.

The first article in this series will introduce perceptrons and the adaline (ADAptive LINear NEuron), which fall into the category of single-layer neural networks. The perceptron is not only the first algorithmically described learning algorithm [1], but it is also very intuitive, easy to implement, and a good entry point to the (re-discovered) modern state-of-the-art machine learning algorithms: Artificial neural networks (or “deep learning” if you like). As we will see later, the adaline is a consequent improvement of the perceptron algorithm and offers a good opportunity to learn about a popular optimization algorithm in machine learning: gradient descent.
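Since the perceptron rule is simple enough to show in full, here is a minimal sketch of it (mirroring the classic algorithm, not Raschka’s own listing):

```python
def perceptron_train(samples, n_features, eta=0.1, epochs=10):
    """Rosenblatt's perceptron rule: for each misclassified sample,
    nudge the weights and bias toward the correct label.
    samples: list of (features, label) with label in {-1, +1}."""
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
            if pred != y:
                for i in range(n_features):
                    w[i] += eta * y * x[i]
                b += eta * y
    return w, b

def predict(w, b, x):
    """Threshold the weighted sum: the learned linear decision rule."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
```

The adaline that the post introduces next keeps the same linear model but updates against the raw weighted sum via gradient descent, rather than against the thresholded prediction.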

Starting point for what appears to be a great introduction to neural networks.

While you are at Sebastian’s blog, it is very much worthwhile to look around. You will be pleasantly surprised.

Futures of text

Wednesday, March 4th, 2015

Futures of text by Jonathan Libov.

From the post:

I believe comfort, not convenience, is the most important thing in software, and text is an incredibly comfortable medium. Text-based interaction is fast, fun, funny, flexible, intimate, descriptive and even consistent in ways that voice and user interface often are not. Always bet on text:

Text is the most socially useful communication technology. It works well in 1:1, 1:N, and M:N modes. It can be indexed and searched efficiently, even by hand. It can be translated. It can be produced and consumed at variable speeds. It is asynchronous. It can be compared, diffed, clustered, corrected, summarized and filtered algorithmically. It permits multiparty editing. It permits branching conversations, lurking, annotation, quoting, reviewing, summarizing, structured responses, exegesis, even fan fic. The breadth, scale and depth of ways people use text is unmatched by anything.

[Apologies, I lost some of Jonathan’s layout of the quote.]

Jonathan focuses on the use of text/messaging for interactions in a mobile environment, with many examples and suggestions for improvements along the way.

One observation that will have the fearful of an AI future (Elon Musk among others) running for the hills:

Messaging is the only interface in which the machine communicates with you much the same as the way you communicate with it. If some of the trends outlined in this post pervade, it would mark a qualitative shift in how we interact with computers. Whereas computer interaction to date has largely been about discrete, deliberate events — typing in the command line, clicking on files, clicking on hyperlinks, tapping on icons — a shift to messaging- or conversational-based UI’s and implicit hyperlinks would make computer interaction far more fluid and natural.

What’s more, messaging AI benefits from an obvious feedback loop: The more we interact with bots and messaging UI’s, the better it’ll get. That’s perhaps true for GUI as well, but to a far lesser degree. Messaging AI may get better at a rate we’ve never seen in the GUI world. Hold on tight.[Emphasis added.]

Think of it this way: a GUI locks you into the developer’s imagination. A text interface empowers the user’s and the AI’s imagination. I’m betting on the latter.

BTW, Jonathan ends with a great list of further reading on messaging and mobile applications.

Enjoy!

I first saw this in a tweet by Alyona Medelyan.

Elon Musk Must Be Wringing His Hands, Again

Wednesday, February 25th, 2015

Google develops computer program capable of learning tasks independently by Hannah Devlin.

From the post:

Google scientists have developed the first computer program capable of learning a wide variety of tasks independently, in what has been hailed as a significant step towards true artificial intelligence.

The same program, or “agent” as its creators call it, learnt to play 49 different retro computer games, and came up with its own strategies for winning. In the future, the same approach could be used to power self-driving cars, personal assistants in smartphones or conduct scientific research in fields from climate change to cosmology.

The research was carried out by DeepMind, the British company bought by Google last year for £400m, whose stated aim is to build “smart machines”.

Demis Hassabis, the company’s founder said: “This is the first significant rung of the ladder towards proving a general learning system can work. It can work on a challenging task that even humans find difficult. It’s the very first baby step towards that grander goal … but an important one.”

Truly a remarkable achievement.

I haven’t found a more detailed description of the strategies developed by the “agent,” but it would be interesting to try those out on retro computer games.

The post is a good one and worth your time to read.

It closes by contrasting Elon Musk’s fears of an AI apocalypse with Google’s assurance that any danger is decades away.

I take a great deal of reassurance from the “agent” being supplied with the retro video games.

The “agent” did not choose to become a master of Asteroids, with the intent of being the despair of all other gamers at the local arcade.

However good an “agent” may become, at any task from video games to surgery, the question is who chooses the task to be performed? Granted, we probably want to lock out commands like “Make me a suitcase-sized nuclear weapon” and that sort of thing.

The Future of AI: Reflections From A Dark Mirror

Tuesday, February 17th, 2015

You have seen Artificial Intelligence could make us extinct, warn Oxford University researchers or similar pieces in the news of late.

With the usual sound bites (shortened even more here):

  • Oxford researchers: “intelligent AIs a unique risk, in that extinction is more likely than lesser impacts.”
  • Elon Musk, the man behind PayPal, Tesla Motors and SpaceX… “our biggest existential threat”
  • Bill Gates backed up Musk’s concerns… “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
  • The Greatest Living Physicist? Stephen Hawking… “The development of full artificial intelligence could spell the end of the human race. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

This is what is known as the “argument from authority” (a fallacy).

As the Wikipedia article on argument from authority notes:

…authorities can come to the wrong judgments through error, bias, dishonesty, or falling prey to groupthink. Thus, the appeal to authority is not a generally reliable argument for establishing facts.[7]

This article and others like it must resort to the “argument from authority” fallacy because they have no facts with which to persuade you of the danger of future AI. It isn’t often, outside of science fiction, that you find alleged dangers invented so completely out of whole cloth.

The Oxford Researchers attempt to dress their alarmist assertions up to sound better than “appeal to authority:”

Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), 485 and would probably act in a way to boost their own intelligence and acquire maximal resources for almost all initial AI motivations. 486 And if these motivations do not detail 487 the survival and value of humanity in exhaustive detail, the intelligence will be driven to construct a world without humans or without meaningful features of human existence.

This makes extremely intelligent AIs a unique risk, 488 in that extinction is more likely than lesser impacts. An AI would only turn on humans if it foresaw a likely chance of winning; otherwise it would remain fully integrated into society. And if an AI had been able to successfully engineer a civilisation collapse, for instance, then it could certainly drive the remaining humans to extinction.

Let’s briefly compare the statements made about some future AI with the sources cited by the authors.

486 See Omohundro, Stephen M.: The basic AI drives. Frontiers in Artificial Intelligence and applications 171 (2008): 483

The Basic AI Drives offers the following abstract:

One might imagine that AI systems with harmless goals will be harmless. This paper instead shows that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted. We start by showing that goal-seeking systems will have drives to model their own operation and to improve themselves. We then show that self-improving systems will be driven to clarify their goals and represent them as economic utility functions. They will also strive for their actions to approximate rational economic behavior. This will lead almost all systems to protect their utility functions from modification and their utility measurement systems from corruption. We also discuss some exceptional systems which will want to modify their utility functions. We next discuss the drive toward self-protection which causes systems try to prevent themselves from being harmed. Finally we examine drives toward the acquisition of resources and toward their efficient utilization. We end with a discussion of how to incorporate these insights in designing intelligent technology which will lead to a positive future for humanity.
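Omohundro’s “economic utility functions” refer to the standard expected-utility picture: an agent scores each possible outcome, weights the scores by probability, and always takes the action with the best weighted score. A minimal sketch, with purely hypothetical actions and numbers of my own invention:

```python
def expected_utility(action, outcomes):
    """Probability-weighted sum of utilities for one action's outcomes."""
    return sum(p * u for p, u in outcomes[action])

# Hypothetical actions, each a list of (probability, utility) pairs
outcomes = {
    "explore": [(0.5, 10.0), (0.5, -2.0)],   # risky: EU = 0.5*10 - 0.5*2 = 4.0
    "wait":    [(1.0, 3.0)],                 # safe:  EU = 3.0
}

# A "rational economic" agent simply takes the max
best = max(outcomes, key=lambda a: expected_utility(a, outcomes))
print(best)  # → explore
```

On this view, “rational economic behavior” is nothing more than always taking the max — the narrow sense of “rational” at issue in what follows.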

Omohundro reminds me of Alan Greenspan, who had to admit to Congress that his long-held faith in the “…rational economic behavior…” of investors was mistaken.

From Wikipedia:

In Congressional testimony on October 23, 2008, Greenspan finally conceded error on regulation. The New York Times wrote, “a humbled Mr. Greenspan admitted that he had put too much faith in the self-correcting power of free markets and had failed to anticipate the self-destructive power of wanton mortgage lending. … Mr. Greenspan refused to accept blame for the crisis but acknowledged that his belief in deregulation had been shaken.” Although many Republican lawmakers tried to blame the housing bubble on Fannie Mae and Freddie Mac, Greenspan placed far more blame on Wall Street for bundling subprime mortgages into securities.[80]

Like Greenspan, Omohundro has created a hedge around intelligence that he calls “rational economic behavior,” which has its roots in Boolean logic. The problem is that Omohundro, like so many others, appears to know Boole’s An Investigation of the Laws of Thought only by reputation and/or repetition by others.

Boole was very careful to point out that his rules were only one aspect of what it means to “reason,” saying at pp. 327-328:

But the very same class of considerations shows with equal force the error of those who regard the study of Mathematics, and of their applications, as a sufficient basis either of knowledge or of discipline. If the constitution of the material frame is mathematical, it is not merely so. If the mind, in its capacity of formal reasoning, obeys, whether consciously or unconsciously, mathematical laws, it claims through its other capacities of sentiment and action, through its perceptions of beauty and of moral fitness, through its deep springs of emotion and affection, to hold relation to a different order of things. There is, moreover, a breadth of intellectual vision, a power of sympathy with truth in all its forms and manifestations, which is not measured by the force and subtlety of the dialectic faculty. Even the revelation of the material universe in its boundless magnitude, and pervading order, and constancy of law, is not necessarily the most fully apprehended by him who has traced with minutest accuracy the steps of the great demonstration. And if we embrace in our survey the interests and duties of life, how little do any processes of mere ratiocination enable us to comprehend the weightier questions which they present! As truly, therefore, as the cultivation of the mathematical or deductive faculty is a part of intellectual discipline, so truly is it only a part. The prejudice which would either banish or make supreme any one department of knowledge or faculty of mind, betrays not only error of judgment, but a defect of that intellectual modesty which is inseparable from a pure devotion to truth. It assumes the office of criticising a constitution of things which no human appointment has established, or can annul. It sets aside the ancient and just conception of truth as one though manifold. 
Much of this error, as actually existent among us, seems due to the special and isolated character of scientific teaching—which character it, in its turn, tends to foster. The study of philosophy, notwithstanding a few marked instances of exception, has failed to keep pace with the advance of the several departments of knowledge, whose mutual relations it is its province to determine. It is impossible, however, not to contemplate the particular evil in question as part of a larger system, and connect it with the too prevalent view of knowledge as a merely secular thing, and with the undue predominance, already adverted to, of those motives, legitimate within their proper limits, which are founded upon a regard to its secular advantages. In the extreme case it is not difficult to see that the continued operation of such motives, uncontrolled by any higher principles of action, uncorrected by the personal influence of superior minds, must tend to lower the standard of thought in reference to the objects of knowledge, and to render void and ineffectual whatsoever elements of a noble faith may still survive.

As far as the “drives” of an AI are concerned, we have only one speculation on such drives and no factual evidence. Restricting the future model of AI to current misunderstandings of what it means to reason doesn’t seem like a useful approach.

487 See Muehlhauser, Luke, and Louie Helm.: Intelligence Explosion and Machine Ethics. In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer (2012)

Muehlhauser and Helm are cited for the proposition:

And if these motivations do not detail 487 the survival and value of humanity in exhaustive detail, the intelligence will be driven to construct a world without humans or without meaningful features of human existence.

The abstract for Intelligence Explosion and Machine Ethics reads:

Many researchers have argued that a self-improving artificial intelligence (AI) could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals. If so, and if the AI’s goals differ from ours, then this could be disastrous for humans. One proposed solution is to program the AI’s goal system to want what we want before the AI self-improves beyond our capacity to control it. Unfortunately, it is difficult to specify what we want. After clarifying what we mean by “intelligence,” we offer a series of “intuition pumps” from the field of moral philosophy for our conclusion that human values are complex and difficult to specify. We then survey the evidence from the psychology of motivation, moral psychology, and neuroeconomics that supports our position. We conclude by recommending ideal preference theories of value as a promising approach for developing a machine ethics suitable for navigating an intelligence explosion or “technological singularity.”

What follows is a delightful discussion of the difficulties of constructing moral rules of universal application and of how moral guidance for AIs could lead to unintended consequences. I take the essay as evidence of our imprecision in moral reasoning and of the need to do better for ourselves and any future AI. Its relationship to “…driven to construct a world without humans or without meaningful features of human existence” is tenuous at best.

For their most extreme claim:

This makes extremely intelligent AIs a unique risk, 488 in that extinction is more likely than lesser impacts.

the authors rely upon the most reliable source, themselves:

488 Dealing with most risks comes under the category of decision theory: finding the right approaches to maximise the probability of the most preferred options. But an intelligent agent can react to decisions in a way the environment cannot, meaning that interactions with AIs are better modelled by the more complicated discipline of game theory.

For the claim that extinction by a future AI is more likely than lesser impacts, the authors have only self-citation as authority.

To summarize, the claims about future AI are based on arguments from authority and the evidence cited by the “Oxford researchers” consists of one defective notion of AI, one exploration of specifying moral rules and a self-citation.

As a contrary example, consider all the non-human inhabitants of the Earth, none of which have exhibited that unique human trait, the need to drive other species into extinction. Perhaps those who fear a future AI are seeing a reflection from a dark mirror.

PS: You can see the full version of the Oxford report: 12 Risks that threaten human civilisation.

The authors and/or their typesetter is very skilled at page layout and the use of color. It is unfortunate they did not have professional editing for the AI section of the report.

MS Deep Learning Beats Humans (and MS is modest about it)

Tuesday, February 10th, 2015

Microsoft researchers say their newest deep learning system beats humans — and Google

Two stories for the price of one! Microsoft’s deep learning project beats human recognition on a data set and Microsoft is modest about it. 😉

From the post:

The Microsoft creation got a 4.94 percent error rate for the correct classification of images in the 2012 version of the widely recognized ImageNet data set, compared with a 5.1 percent error rate among humans, according to the paper. The challenge involved identifying objects in the images and then correctly selecting the most accurate categories for the images, out of 1,000 options. Categories included “hatchet,” “geyser,” and “microwave.”

[modesty]
“While our algorithm produces a superior result on this particular dataset, this does not indicate that machine vision outperforms human vision on object recognition in general,” they wrote. “On recognizing elementary object categories (i.e., common objects or concepts in daily lives) such as the Pascal VOC task, machines still have obvious errors in cases that are trivial for humans. Nevertheless, we believe that our results show the tremendous potential of machine algorithms to match human-level performance on visual recognition.”

You can grab the paper here.
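For context, the 4.94 percent figure is the conventional top-5 error on ImageNet’s 1,000 classes: an image counts as correct if the true label appears among the model’s five highest-scoring guesses. A minimal sketch of that metric, using toy scores rather than the paper’s network:

```python
import numpy as np

def top5_error(scores, labels):
    """Fraction of examples whose true label is NOT among the five
    highest-scoring classes (the ImageNet convention)."""
    top5 = np.argsort(scores, axis=1)[:, -5:]        # 5 best class indices per row
    hits = np.any(top5 == labels[:, None], axis=1)   # true label among the top 5?
    return 1.0 - hits.mean()

# Toy example: 4 "images", 10 classes, random scores
rng = np.random.default_rng(0)
scores = rng.random((4, 10))
labels = np.array([0, 1, 2, 3])
print(top5_error(scores, labels))
```

With 1,000 classes and five guesses, chance performance is far worse than either figure quoted above, which is what makes the sub-5-percent numbers notable.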

Hoping that Microsoft sets a trend in reporting breakthroughs in big data and machine learning. Stating the achievement but also its limitations may lead to more accurate reporting of technical news. Not holding my breath but I am hopeful.

I first saw this in a tweet by GPUComputing.

All Models of Learning have Flaws

Wednesday, February 4th, 2015

All Models of Learning have Flaws by John Langford.

From the post:

Attempts to abstract and study machine learning are within some given framework or mathematical model. It turns out that all of these models are significantly flawed for the purpose of studying machine learning. I’ve created a table (below) outlining the major flaws in some common models of machine learning.

Quite dated (2007) but still quite handy chart of what is “right” and “wrong” about machine learning models.

Would be even more useful with smallish data sets that illustrate what is “right” and “wrong” about each model.

Anything you would add or take away?

I first saw this in a tweet by Computer Science.

Facebook open sources tools for bigger, faster deep learning models

Saturday, January 17th, 2015

Facebook open sources tools for bigger, faster deep learning models by Derrick Harris.

From the post:

Facebook on Friday open sourced a handful of software libraries that it claims will help users build bigger, faster deep learning models than existing tools allow.

The libraries, which Facebook is calling modules, are alternatives for the default ones in a popular machine learning development environment called Torch, and are optimized to run on Nvidia graphics processing units. Among the modules are those designed to rapidly speed up training for large computer vision systems (nearly 24 times, in some cases), to train systems on potentially millions of different classes (e.g., predicting whether a word will appear across a large number of documents, or whether a picture was taken in any city anywhere), and an optimized method for building language models and word embeddings (e.g., knowing how different words are related to each other).

“[T]here is no way you can use anything existing” to achieve some of these results, said Soumith Chintala, an engineer with Facebook Artificial Intelligence Research.

How very awesome! Keeping abreast of the latest releases and papers on deep learning is turning out to be a real chore. Enjoyable, but a time sink nonetheless.

Derrick’s post and the release from Facebook have more details.
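The word-embedding example in the quote — “knowing how different words are related to each other” — usually cashes out as cosine similarity between word vectors. A toy sketch with made-up four-dimensional vectors (real embeddings have hundreds of dimensions; these values are purely illustrative, not Facebook’s modules):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings; a trained model would learn these from text
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.7, 0.2, 0.1]),
    "apple": np.array([0.0, 0.1, 0.9, 0.8]),
}
print(cosine(emb["king"], emb["queen"]))  # high: related words point the same way
print(cosine(emb["king"], emb["apple"]))  # low: unrelated words point apart
```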

Apologies for the “lite” posting today, but I have been proofing related specifications where one defines a term and the other uses it without citing that definition or giving its own. Do the two uses mean the same thing? Probably, but users outside the process may not realize that, particularly in translation.

I first saw this in a tweet by Kirk Borne.

Simple Pictures That State-of-the-Art AI Still Can’t Recognize

Thursday, January 8th, 2015

Simple Pictures That State-of-the-Art AI Still Can’t Recognize by Kyle VanHemert.

I encountered this non-technical summary of Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, which I covered earlier today as: Deep Neural Networks are Easily Fooled:…

While I am sure you have read the fuller explanation, I wanted to replicate the top 40 images for your consideration:

[Image: top 40 fooling images (top40-660x589)]

Select the image to see a larger, readable version.
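The optimization behind such fooling images — pushing an input to maximize a classifier’s confidence with no regard for human recognizability — can be sketched with random hill climbing against a toy linear “classifier.” The paper attacks real deep networks with evolutionary algorithms and gradient ascent; this stand-in only shows the shape of the loop:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in "classifier": a fixed random linear model with softmax confidences.
W = rng.normal(size=(10, 64))  # 10 classes, 64-pixel "images"

def confidence(image, target):
    """Softmax probability the model assigns to the target class."""
    scores = W @ image
    scores = scores - scores.max()  # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return float(probs[target])

# Random hill climbing: keep only pixel mutations that raise target confidence.
image = rng.random(64)
target = 3
start_conf = confidence(image, target)
for _ in range(2000):
    candidate = np.clip(image + rng.normal(scale=0.1, size=64), 0.0, 1.0)
    if confidence(candidate, target) > confidence(image, target):
        image = candidate

# Confidence rises substantially, yet the "image" remains meaningless noise.
print(start_conf, confidence(image, target))
```

The unsettling part, as the paper shows, is that the same trick works on state-of-the-art networks: high-confidence labels on inputs no human would recognize as anything at all.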

Enjoy the images and pass the Wired article along to friends.