Archive for the ‘Artificial Intelligence’ Category

AlphaZero: Mastering Unambiguous, Low-Dimensional Data

Wednesday, December 6th, 2017

Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm by David Silver, et al.

Abstract:

The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.

The achievements by the AlphaZero team and their algorithm merit joyous celebration.

Joyous celebration recognizing that AlphaZero masters unambiguous, low-dimensional data, governed by deterministic rules that define the outcome of any state, more quickly and completely than any human.

Chess, Shogi and Go appear complex to humans due to the large number of potential outcomes. But every outcome results from applying deterministic rules to unambiguous, low-dimensional data, something AlphaZero excels at.
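The point about deterministic rules can be made concrete with a toy example. Tic-tac-toe, not one of AlphaZero's games and with none of its search, but the same character of data: the entire state fits in nine cells, and the rules define the outcome of any state with no ambiguity.

```python
def apply_move(board, cell, player):
    """Deterministic rule: `player` ('X' or 'O') marks empty `cell` (0-8)."""
    if board[cell] != ' ':
        raise ValueError("illegal move: cell occupied")
    return board[:cell] + player + board[cell + 1:]

def winner(board):
    """The rules define the outcome for any state, with no ambiguity."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

state = ' ' * 9
for cell, player in [(4, 'X'), (0, 'O'), (2, 'X'), (1, 'O'), (6, 'X')]:
    state = apply_move(state, cell, player)

print(winner(state))  # X wins on the 2-4-6 diagonal
```

Every line of that program is a rule known in advance. Contrast that with the ambiguous, partially-ruled domains discussed next.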

What hasn’t been shown is equivalent performance on ambiguous, high-dimensional data, governed by rules that are at best partially known, and then only for a limited set of sub-cases. For those cases, well, you need a human being.

That’s not to take anything away from the AlphaZero team, but to recognize the strengths of AlphaZero and to avoid its application where it is weak.

23 Deep Learning Papers To Get You Started — Part 1 (Reading Slowly)

Saturday, November 25th, 2017

23 Deep Learning Papers To Get You Started — Part 1 by Rupak Kr. Thakur.

Deep Learning has probably been the single-most discussed topic in the academia and industry in recent times. Today, it is no longer exclusive to an elite group of scientists. Its widespread applications warrants that people from all disciplines have an understanding of the underlying concepts, so as to be able to better apply these techniques in their field of work. As a result of which, MOOCs, certifications and bootcamps have flourished. People have generally preferred the hands-on learning experiences. However, there is a considerable population who still give in to the charm of learning the subject the traditional way — through research papers.

Reading research papers can be pretty time-consuming, especially since there are hordes of publications available nowadays, as Andrew Ng said at an AI conference, recently, along with encouraging people to use the existing research output to build truly transformative solutions across industries.

In this series of blog posts, I’ll try to condense the learnings from some really important papers into 15–20 min reads, without missing out on any key formulas or explanations. The blog posts are written, keeping in mind the people, who want to learn basic concepts and applications of deep learning, but can’t spend too much time scouring through the vast literature available. Each part of the blog will broadly cater to a theme and will introduce related key papers, along with suggesting some great papers for additional reading.

In the first part, we’ll explore papers related to CNNs — an important network architecture in deep learning. Let’s get started!

The start of what promises to be a great series on deep learning!

While the posts will extract the concepts and important points of the papers, I suggest you download the papers and map the summaries back to the papers themselves.

It will be good practice on reading original research, not to mention re-enforcing what you have learned from the posts.

In my reading, I will be looking for ways to influence deep learning towards one answer or another.

Whatever they may say about “facts” in public, no sane client asks for advice without an opinion on the range of acceptable answers.

Imagine you found that ISIS content on Twitter has no measurable impact on ISIS recruiting. Would any intelligence agency ask you for deep learning services again?

Hackers! 90% of Federal IT Managers Aiming for Their Own Feet!

Tuesday, November 14th, 2017

The Federal Cyber AI IQ Test November 14, 2017 reports:


Most Powerful Applications:

  • 90% of Feds say AI could help prepare agencies for real-world cyber attack scenarios and 87% say it would improve the efficiency of the Federal cyber security workforce
  • 91% say their agency could utilize AI to monitor human activity and deter insider threats, including detecting suspicious elements and large amounts of data being downloaded, and analyzing risky user behavior
(emphasis in original)

One sure conclusion from this report: 90% of Feds don’t know that AIs mistake turtles for rifles, 90% of the time. The adversarial example literature is full of such cases and its attacks grow more robust by the day.
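The mechanism behind the turtle/rifle result can be sketched in a few lines. The real attack perturbed a 3-D printed object against a deep network; here a linear classifier on invented 2-D features stands in, to show only the mechanism: a small, targeted nudge to the input flips the classification.

```python
import numpy as np

# Toy adversarial-example sketch. The real turtle/rifle attack targeted a
# deep network; this linear "model" and its 2-D features are invented
# stand-ins that show only the mechanism.

def classify(w, b, x):
    return 1 if w @ x + b > 0 else 0

w = np.array([1.0, -1.0])          # "model weights"
b = 0.0
x = np.array([0.3, 0.1])           # classified as class 1 (call it "turtle")

# FGSM-style step: move x a small amount against the gradient of its score
eps = 0.25
x_adv = x - eps * np.sign(w)

print(classify(w, b, x))       # 1
print(classify(w, b, x_adv))   # 0 -- small nudge, different class
```

The cyber AI systems the Feds are polling about face adversaries who do exactly this, at scale.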

The trap federal IT managers have fallen into is a familiar one. To solve an entirely human problem, a shortage of qualified labor, they want to mechanize the required task, even if it means a lower quality end result. Human problems are solved poorly, if at all, by mechanized solutions.

Opposed by lowest common denominator AI systems, hackers will be all but running the mints as cybersecurity AI systems spread across the federal government. “Ghost” federal installations will appear on agency records for confirmation of FedEx/UPS shipments. The possibilities are endless.

If you are a state or local government or even a federal IT manager, letting hackers run wild isn’t a foregone conclusion.

You could pattern your compensation packages after West Coast start-ups, along with similar perks. Expensive, but do you want an OPM-type data leak on your record?

Neuroscience-Inspired Artificial Intelligence

Saturday, August 5th, 2017

Neuroscience-Inspired Artificial Intelligence by Demis Hassabis, Dharshan Kumaran, Christopher Summerfield, and Matthew Botvinick.

Abstract:

The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key for advancing future research in both fields.

Extremely rich article with nearly four (4) pages of citations.

Reading this paper closely and chasing the citations is a non-trivial task but you will be prepared to understand and/or participate in the next big neuroscience/AI breakthrough.

Enjoy!

Why Learn OpenAI? In a word, Malware!

Tuesday, August 1st, 2017

OpenAI framework used to create undetectable malware by Anthony Spadafora.

Spadafora reports on Endgame‘s malware generating software, Malware Env for OpenAI Gym.

From the Github page:

This is a malware manipulation environment for OpenAI’s gym. OpenAI Gym is a toolkit for developing and comparing reinforcement learning algorithms. This makes it possible to write agents that learn to manipulate PE files (e.g., malware) to achieve some objective (e.g., bypass AV) based on a reward provided by taking specific manipulation actions.
… (highlight in original)
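For readers who haven’t used OpenAI Gym, the agent’s view of any such environment is just a reset/step loop. The sketch below is a self-contained mock that mimics the Gym interface shape; the action names and the “scanner” are invented for illustration, not the real Malware Env, which subclasses gym.Env and mutates actual PE files.

```python
import random

# Self-contained mock of the Gym interface shape (reset/step). The action
# names and "scanner" are invented; the real Malware Env mutates PE files
# and rewards bypassing an actual AV model.

class ToyEvasionEnv:
    ACTIONS = ["append_bytes", "rename_section", "add_import"]

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.steps = 0
        self.detected = True

    def reset(self):
        self.steps = 0
        self.detected = True
        return self._obs()

    def _obs(self):
        return {"detected": self.detected, "steps": self.steps}

    def step(self, action):
        assert action in self.ACTIONS
        self.steps += 1
        # stand-in for "mutate the file, then run the AV scanner on it"
        self.detected = self.rng.random() < 0.7
        reward = 10.0 if not self.detected else 0.0   # reward evasion
        done = (not self.detected) or self.steps >= 5
        return self._obs(), reward, done, {}

env = ToyEvasionEnv()
obs = env.reset()
done = False
while not done:   # the standard Gym agent loop
    action = env.rng.choice(env.ACTIONS)
    obs, reward, done, info = env.step(action)
```

A reinforcement learning agent plugs into that loop and learns which manipulation actions earn the evasion reward.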

Introducing OpenAI is a good starting place to learn more about OpenAI.

The value of the OpenAI philosophy:

We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible. The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.

will vary depending upon your objectives.

From my perspective, it’s better for my AI to decide to reach out or stay its hand, as opposed to relying upon ethical behavior of another AI.

You?

Mistaken Location of Creativity in “Machine Creativity Beats Some Modern Art”

Friday, June 30th, 2017

Machine Creativity Beats Some Modern Art

From the post:

Creativity is one of the great challenges for machine intelligence. There is no shortage of evidence showing how machines can match and even outperform humans in vast areas of endeavor, such as face and object recognition, doodling, image synthesis, language translation, a vast variety of games such as chess and Go, and so on. But when it comes to creativity, the machines lag well behind.

Not through lack of effort. For example, machines have learned to recognize artistic style, separate it from the content of an image, and then apply it to other images. That makes it possible to convert any photograph into the style of Van Gogh’s Starry Night, for instance. But while this and other work provides important insight into the nature of artistic style, it doesn’t count as creativity. So the challenge remains to find ways of exploiting machine intelligence for creative purposes.

Today, we get some insight into progress in this area thanks to the work of Ahmed Elgammal at the Art & AI Laboratory at Rutgers University in New Jersey, along with colleagues at Facebook’s AI labs and elsewhere.
… (emphasis in original)

This summary of CAN: Creative Adversarial Networks, Generating “Art” by Learning About Styles and Deviating from Style Norms by Ahmed Elgammal, Bingchen Liu, Mohamed Elhoseiny, Marian Mazzone, repeats a mistake made by the authors, that is the misplacement of creativity.

Creativity, indeed, even art itself, is easily argued to reside in the viewer (reader) and not the creator at all.

To illustrate, I quote a long passage from Stanley Fish’s How to Recognize a Poem When You See One below but a quick summary/reminder goes like this:

Fish was teaching back to back classes in the same classroom and for the first class, wrote a list of authors on the blackboard. After the first class ended but before the second class, a poetry class, arrived, he enclosed the list of authors in a rectangle and wrote a page number, as though the list was from a book. When the second class arrived, he asked them to interpret the “poem” that was on the board. Which they proceeded to do. Where would you locate creativity in that situation?

The longer and better written start of the story (by Fish):

[1] Last time I sketched out an argument by which meanings are the property neither of fixed and stable texts nor of free and independent readers but of interpretive communities that are responsible both for the shape of a reader’s activities and for the texts those activities produce. In this lecture I propose to extend that argument so as to account not only for the meanings a poem might be said to have but for the fact of its being recognized as a poem in the first place. And once again I would like to begin with an anecdote.

[2] In the summer of 1971 I was teaching two courses under the joint auspices of the Linguistic Institute of America and the English Department of the State University of New York at Buffalo. I taught these courses in the morning and in the same room. At 9:30 I would meet a group of students who were interested in the relationship between linguistics and literary criticism. Our nominal subject was stylistics but our concerns were finally theoretical and extended to the presuppositions and assumptions which underlie both linguistic and literary practice. At 11:00 these students were replaced by another group whose concerns were exclusively literary and were in fact confined to English religious poetry of the seventeenth century. These students had been learning how to identify Christian symbols and how to recognize typological patterns and how to move from the observation of these symbols and patterns to the specification of a poetic intention that was usually didactic or homiletic. On the day I am thinking about, the only connection between the two classes was an assignment given to the first which was still on the blackboard at the beginning of the second. It read:

Jacobs-Rosenbaum
Levin
Thorne
Hayes
Ohman (?)

[3] I am sure that many of you will already have recognized the names on this list, but for the sake of the record, allow me to identify them. Roderick Jacobs and Peter Rosenbaum are two linguists who have coauthored a number of textbooks and coedited a number of anthologies. Samuel Levin is a linguist who was one of the first to apply the operations of transformational grammar to literary texts. J. P. Thorne is a linguist at Edinburgh who, like Levin, was attempting to extend the rules of transformational grammar to the notorious irregularities of poetic language. Curtis Hayes is a linguist who was then using transformational grammar in order to establish an objective basis for his intuitive impression that the language of Gibbon’s Decline and Fall of the Roman Empire is more complex than the language of Hemingway’s novels. And Richard Ohmann is the literary critic who, more than any other, was responsible for introducing the vocabulary of transformational grammar to the literary community. Ohmann’s name was spelled as you see it here because I could not remember whether it contained one or two n’s. In other words, the question mark in parenthesis signified nothing more than a faulty memory and a desire on my part to appear scrupulous. The fact that the names appeared in a list that was arranged vertically, and that Levin, Thorne, and Hayes formed a column that was more or less centered in relation to the paired names of Jacobs and Rosenbaum, was similarly accidental and was evidence only of a certain compulsiveness if, indeed, it was evidence of anything at all.

[4] In the time between the two classes I made only one change. I drew a frame around the assignment and wrote on the top of that frame “p. 43.” When the members of the second class filed in I told them that what they saw on the blackboard was a religious poem of the kind they had been studying and I asked them to interpret it. Immediately they began to perform in a manner that, for reasons which will become clear, was more or less predictable. The first student to speak pointed out that the poem was probably a hieroglyph, although he was not sure whether it was in the shape of a cross or an altar. This question was set aside as the other students, following his lead, began to concentrate on individual words, interrupting each other with suggestions that came so quickly that they seemed spontaneous. The first line of the poem (the very order of events assumed the already constituted status of the object) received the most attention: Jacobs was explicated as a reference to Jacob’s ladder, traditionally allegorized as a figure for the Christian ascent to heaven. In this poem, however, or so my students told me, the means of ascent is not a ladder but a tree, a rose tree or rosenbaum. This was seen to be an obvious reference to the Virgin Mary who was often characterized as a rose without thorns, itself an emblem of the immaculate conception. At this point the poem appeared to the students to be operating in the familiar manner of an iconographic riddle. It at once posed the question, “How is it that a man can climb to heaven by means of a rose tree?” and directed the reader to the inevitable answer: by the fruit of that tree, the fruit of Mary’s womb, Jesus. Once this interpretation was established it received support from, and conferred significance on, the word “thorne,” which could only be an allusion to the crown of thorns, a symbol of the trial suffered by Jesus and of the price he paid to save us all. 
It was only a short step (really no step at all) from this insight to the recognition of Levin as a double reference, first to the tribe of Levi, of whose priestly function Christ was the fulfillment, and second to the unleavened bread carried by the children of Israel on their exodus from Egypt, the place of sin, and in response to the call of Moses, perhaps the most familiar of the old testament types of Christ. The final word of the poem was given at least three complementary readings: it could be “omen,” especially since so much of the poem is concerned with foreshadowing and prophecy; it could be Oh Man, since it is man’s story as it intersects with the divine plan that is the poem’s subject; and it could, of course, be simply “amen,” the proper conclusion to a poem celebrating the love and mercy shown by a God who gave his only begotten son so that we may live.

[5] In addition to specifying significances for the words of the poem and relating those significances to one another, the students began to discern larger structural patterns. It was noted that of the six names in the poem three–Jacobs, Rosenbaum, and Levin–are Hebrew, two–Thorne and Hayes–are Christian, and one–Ohman–is ambiguous, the ambiguity being marked in the poem itself (as the phrase goes) by the question mark in parenthesis. This division was seen as a reflection of the basic distinction between the old dispensation and the new, the law of sin and the law of love. That distinction, however, is blurred and finally dissolved by the typological perspective which invests the old testament events and heroes with new testament meanings. The structure of the poem, my students concluded, is therefore a double one, establishing and undermining its basic pattern (Hebrew vs. Christian) at the same time. In this context there is finally no pressure to resolve the ambiguity of Ohman since the two possible readings–the name is Hebrew, the name is Christian–are both authorized by the reconciling presence in the poem of Jesus Christ. Finally, I must report that one student took to counting letters and found, to no one’s surprise, that the most prominent letters in the poem were S, O, N.

The account by Fish isn’t long and is highly recommended if you are interested in this issue.

If readers/viewers interpret images as art, is the “creativity” of the process that brought it into being even meaningful? Or does polling of viewers measure their appreciation of an image as art, without regard to the process that created it? Exactly what are we measuring when polling such viewers?

By Fish’s account, such a poll tells us a great deal about the viewers but nothing about the creator of the art.

FYI, that same lesson applies to column headers, metadata keys, and indeed, data itself. Which means the “meaning” of what you wrote may be obvious to you, but not to anyone else.

Topic maps can increase your odds of being understood or discovering the understanding captured by others.

If You Don’t Think “Working For The Man” Is All That Weird

Saturday, June 17th, 2017

J.P.Morgan’s massive guide to machine learning and big data jobs in finance by Sara Butcher.

From the post:

Financial services jobs go in and out of fashion. In 2001 equity research for internet companies was all the rage. In 2006, structuring collateralised debt obligations (CDOs) was the thing. In 2010, credit traders were popular. In 2014, compliance professionals were it. In 2017, it’s all about machine learning and big data. If you can get in here, your future in finance will be assured.

J.P. Morgan’s quantitative investing and derivatives strategy team, led by Marko Kolanovic and Rajesh T. Krishnamachari, has just issued the most comprehensive report ever on big data and machine learning in financial services.

Titled, ‘Big Data and AI Strategies’ and subheaded, ‘Machine Learning and Alternative Data Approach to Investing’, the report says that machine learning will become crucial to the future functioning of markets. Analysts, portfolio managers, traders and chief investment officers all need to become familiar with machine learning techniques. If they don’t they’ll be left behind: traditional data sources like quarterly earnings and GDP figures will become increasingly irrelevant as managers using newer datasets and methods will be able to predict them in advance and to trade ahead of their release.

At 280 pages, the report is too long to cover in detail, but we’ve pulled out the most salient points for you below.

How important is Sara’s post and the report by J.P. Morgan?

Let me put it this way: Sara’s post is the first business-type post I have saved as a complete webpage so I can clean it up and print it without all the clutter. This year. Perhaps last year as well. It’s that important.

Sara’s post is a quick guide to the languages, talents and tools you will need to start “working for the man.”

If that catches your interest, then Sara’s post is pure gold.

Enjoy!

PS: I’m still working on a link for the full 280 page report. The switchboard is down for the weekend so I will be following up with J.P. Morgan on Monday next.

AI Brain Scans

Monday, March 13th, 2017

‘AI brain scans’ reveal what happens inside machine learning


The ResNet architecture is used for building deep neural networks for computer vision and image recognition. The image shown here is the forward (inference) pass of the ResNet 50 layer network used to classify images after being trained using the Graphcore neural network graph library

Credit Graphcore / Matt Fyles

The image is great eye candy, but if you want to see images annotated with information, check out: Inside an AI ‘brain’ – What does machine learning look like? (Graphcore)

From the product overview:

Poplar™ is a scalable graph programming framework targeting Intelligent Processing Unit (IPU) accelerated servers and IPU accelerated server clusters, designed to meet the growing needs of both advanced research teams and commercial deployment in the enterprise. It’s not a new language, it’s a C++ framework which abstracts the graph-based machine learning development process from the underlying graph processing IPU hardware.

Poplar includes a comprehensive, open source set of Poplar graph libraries for machine learning. In essence, this means existing user applications written in standard machine learning frameworks, like Tensorflow and MXNet, will work out of the box on an IPU. It will also be a natural basis for future machine intelligence programming paradigms which extend beyond tensor-centric deep learning. Poplar has a full set of debugging and analysis tools to help tune performance and a C++ and Python interface for application development if required.

The IPU-Appliance for the Cloud is due out in 2017. I have looked at Graphcore but came up dry on the Poplar graph libraries and/or an emulator for the IPU.

Perhaps those will both appear later in 2017.

Optimized hardware for graph calculations sounds promising but rapidly processing nodes that may or may not represent the same subject seems like a defect waiting to make itself known.

Many approaches rapidly process uncertain big data but being no more ignorant than your competition is hardly a selling point.

AI Assisted Filtering?

Thursday, February 23rd, 2017

Check Out Alphabet’s New Tool to Weed Out the ‘Toxic’ Abuse of Online Comments by Jeff John Roberts.

From the post:

A research team tied to Google unveiled a new tool on Thursday that could have a profound effect on how we talk to each other online. It’s called “Perspective,” and it provides a way for news websites and blogs to moderate online discussions with the help of artificial intelligence.

The researchers believe it could turn the tide against trolls on the Internet, and reestablish online comment forums—which many view as cesspools of hatred and stupidity—as a place for honest debate about current events.

The Perspective tool was hatched by artificial intelligence experts at Jigsaw, a subsidiary of Google-holding company Alphabet (GOOGL, -0.04%) that is devoted to policy and ideas. The significance of the tool, pictured below, is that it can decide if an online comment is “toxic” without the aid of human moderators. This means websites—many of which have given up on hosting comments altogether—could now have an affordable way to let their readers debate contentious topics of the day in a civil and respectful forum.

“Imagine trying to have a conversation with your friends about the news you read this morning, but every time you said something, someone shouted in your face, called you a nasty name or accused you of some awful crime,” Jigsaw founder and president Jared Cohen said in a blog post. “We think technology can help.”

I’m intrigued by this, at least to the extent that AI assisted filtering is extended to users. Such that a user can determine what comments they do/don’t see.

I avoid all manner of nonsense on the Internet, in part by there being places I simply don’t go. Not worth the effort to filter all the trash.

But at the same time, I don’t prevent other people, who may have differing definitions of “trash,” from consuming as much of it as they desire.
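User-controlled filtering of the kind I’m describing needs very little code once a toxicity score is available. A sketch, with hard-coded stand-in scores in place of real Perspective API calls (the real API returns a 0–1 toxicity probability over HTTP; the comments and scores below are invented):

```python
# User-side threshold filtering. The scores are invented stand-ins for what
# a real Perspective API call would return; the point is that the threshold
# belongs to the reader, not to a central censor.

def visible_comments(comments, scores, threshold):
    """Keep each comment whose toxicity score is below the user's threshold."""
    return [c for c, s in zip(comments, scores) if s < threshold]

comments = ["Nice analysis.", "You absolute idiot.", "I disagree, here's why..."]
scores   = [0.05, 0.92, 0.15]   # invented; a real client would call the API

print(visible_comments(comments, scores, 0.5))   # tolerant reader keeps two
print(visible_comments(comments, scores, 0.10))  # strict reader keeps one
```

Each reader picks their own threshold, and readers with differing definitions of “trash” see different streams, with no one deciding for anyone else.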

It’s really sad that Twitter continues to ignore the market potential of filters in favor of its mad-cap pursuit of being an Internet censor.

I have even added Ed Ho, said to be the VP of Engineering at Twitter, to one or more of my tweets suggesting ways Twitter could make money on filters. No response, nada.

It’s either “not invented here,” or Twitter staff spend so much time basking in their own righteousness they can’t be bothered with communications from venal creatures. Hard to say.

Jeff reports this is a work in progress and you can see it from yourself: What if technology could help improve conversations online?.

Check out the code at: https://conversationai.github.io/.

Or even Request API Access! (There’s no separate link; try: http://www.perspectiveapi.com/.)

Perspective can help with your authoring in real time.

Try setting the sensitivity very low and write/edit until it finally objects. 😉

Especially for Fox News comments. I always leave some profanity or ill-tempered comment unsaid. Maybe Perspective can help with that.

AI Podcast: Winning the Cybersecurity Cat and Mouse Game with AI

Wednesday, February 22nd, 2017

AI Podcast: Winning the Cybersecurity Cat and Mouse Game with AI. Brian Caulfield interviews Eli David of Deep Instinct.

From the description:

Cybersecurity is a cat-and-mouse game. And the mouse always has the upper hand. That’s because it’s so easy for new malware to go undetected.

Eli David, an expert in computational intelligence, wants to use AI to change that. He’s CTO of Deep Instinct, a security firm with roots in Israel’s defense industry, that is bringing the GPU-powered deep learning techniques underpinning modern speech and image recognition to the vexing world of cybersecurity.

“It’s exactly like Tom and Jerry, the cat and the mouse, with the difference being that, in this case, Jerry the mouse always has the upper hand,” David said in a conversation on the AI Podcast with host Michael Copeland. He notes that more than 1 million new pieces of malware are created every day.

Interesting take on detection of closely similar malware using deep learning.

Directed in part at detecting smallish modifications that evade current malware detection techniques.
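Deep Instinct’s detectors are deep networks; as a much simpler stand-in, byte-bigram histograms with cosine similarity illustrate why “smallish modifications” leave a sample close to the original in feature space, and why a detector built on such features can catch trivially tweaked variants:

```python
from collections import Counter
import math

# Simplified stand-in for learned malware features: byte-bigram histograms
# plus cosine similarity. Deep Instinct's real system uses deep networks;
# this only illustrates why small edits leave a sample "nearby".

def bigram_vector(data: bytes) -> Counter:
    """Histogram of adjacent byte pairs -- a crude feature vector."""
    return Counter(zip(data, data[1:]))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb)

original  = bytes(range(50)) * 4
variant   = original[:-3] + b"XYZ"   # a "smallish modification"
unrelated = original[::-1]           # same bytes, very different structure

print(cosine(bigram_vector(original), bigram_vector(variant)))    # near 1.0
print(cosine(bigram_vector(original), bigram_vector(unrelated)))  # near 0.0
```

Evading a similarity measure like this takes more than appending a few bytes, which is exactly the cat-and-mouse dynamic the podcast describes.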

OK, but who is working on using deep learning to discover flaws in software code?

The Rise of the Weaponized AI Propaganda Machine

Tuesday, February 14th, 2017

The Rise of the Weaponized AI Propaganda Machine by Berit Anderson and Brett Horvath.

From the post:

“This is a propaganda machine. It’s targeting people individually to recruit them to an idea. It’s a level of social engineering that I’ve never seen before. They’re capturing people and then keeping them on an emotional leash and never letting them go,” said professor Jonathan Albright.

Albright, an assistant professor and data scientist at Elon University, started digging into fake news sites after Donald Trump was elected president. Through extensive research and interviews with Albright and other key experts in the field, including Samuel Woolley, Head of Research at Oxford University’s Computational Propaganda Project, and Martin Moore, Director of the Centre for the Study of Media, Communication and Power at Kings College, it became clear to Scout that this phenomenon was about much more than just a few fake news stories. It was a piece of a much bigger and darker puzzle — a Weaponized AI Propaganda Machine being used to manipulate our opinions and behavior to advance specific political agendas.

By leveraging automated emotional manipulation alongside swarms of bots, Facebook dark posts, A/B testing, and fake news networks, a company called Cambridge Analytica has activated an invisible machine that preys on the personalities of individual voters to create large shifts in public opinion. Many of these technologies have been used individually to some effect before, but together they make up a nearly impenetrable voter manipulation machine that is quickly becoming the new deciding factor in elections around the world.

Before you get too panicked, remember the techniques attributed to Cambridge Analytica were in use in the 1960 Kennedy presidential campaign. And have been in use since then by marketeers for every known variety of product, including politicians.

It’s hard to know if Anderson and Horvath are trying to drum up more business for Cambridge Analytica or if they are genuinely concerned for the political process.

Granted, Cambridge Analytica has more data than was available in the 1960s, but many people, not just Cambridge Analytica, have labored on the manipulation of public opinion since then.

If people were as easy to sway, politically speaking, as Anderson and Horvath posit, then why is there any political diversity at all? Shouldn’t we all be marching in lock step by now?

Oh, it’s a fun read so long as you don’t take it too seriously.

Besides, if a “weaponized AI propaganda machine” is that dangerous, isn’t the best defense a good offense?

I’m all for cranking up a “demonized AI propaganda machine” if you have the funding.

Yes?

2017/18 – When you can’t believe your eyes

Friday, December 23rd, 2016

Artificial intelligence is going to make it easier than ever to fake images and video by James Vincent.

From the post:

Smile Vector is a Twitter bot that can make any celebrity smile. It scrapes the web for pictures of faces, and then it morphs their expressions using a deep-learning-powered neural network. Its results aren’t perfect, but they’re created completely automatically, and it’s just a small hint of what’s to come as artificial intelligence opens a new world of image, audio, and video fakery. Imagine a version of Photoshop that can edit an image as easily as you can edit a Word document — will we ever trust our own eyes again?

“I definitely think that this will be a quantum step forward,” Tom White, the creator of Smile Vector, tells The Verge. “Not only in our ability to manipulate images but really their prevalence in our society.” White says he created his bot in order to be “provocative,” and to show people what’s happening with AI in this space. “I don’t think many people outside the machine learning community knew this was even possible,” says White, a lecturer in creative coding at Victoria University School of design. “You can imagine an Instagram-like filter that just says ‘more smile’ or ‘less smile,’ and suddenly that’s in everyone’s pocket and everyone can use it.”

Vincent reviews a number of exciting advances this year and concludes:


AI researchers involved in this field are already getting a firsthand experience of the coming media environment. “I currently exist in a world of reality vertigo,” says Clune. “People send me real images and I start to wonder if they look fake. And when they send me fake images I assume they’re real because the quality is so good. Increasingly, I think, we won’t know the difference between the real and the fake. It’s up to people to try and educate themselves.”

An image sent to you may appear very convincing, but like the general in WarGames, you have to ask: does it make any sense?

Verification, subject identity in my terminology, requires more than an image. What do we know about the area? Or the people (if any) in the image? Where were they supposed to be today? And many other questions that depend upon the image and its contents.

Unless you are using a subject-identity based technology, where are you going to store that additional information? Or express your concerns about authenticity?

Four Experiments in Handwriting with a Neural Network

Tuesday, December 6th, 2016

Four Experiments in Handwriting with a Neural Network by Shan Carter, David Ha, Ian Johnson, and Chris Olah.

While the handwriting experiments are compelling and entertaining, the authors have a more profound goal for this activity:


The black box reputation of machine learning models is well deserved, but we believe part of that reputation has been born from the programming context into which they have been locked into. The experience of having an easily inspectable model available in the same programming context as the interactive visualization environment (here, javascript) proved to be very productive for prototyping and exploring new ideas for this post.

As we are able to move them more and more into the same programming context that user interface work is done, we believe we will see richer modes of human-ai interactions flourish. This could have a marked impact on debugging and building models, for sure, but also in how the models are used. Machine learning research typically seeks to mimic and substitute humans, and increasingly it’s able to. What seems less explored is using machine learning to augment humans. This sort of complicated human-machine interaction is best explored when the full capabilities of the model are available in the user interface context.

Setting up a search alert for future work from these authors!

None/Some/All … Are Suicide Bombers & Probabilistic Programming Languages

Tuesday, November 8th, 2016

The Design and Implementation of Probabilistic Programming Languages by Noah D. Goodman and Andreas Stuhlmüller.

Abstract:

Probabilistic programming languages (PPLs) unify techniques for the formal description of computation and for the representation and use of uncertain knowledge. PPLs have seen recent interest from the artificial intelligence, programming languages, cognitive science, and natural languages communities. This book explains how to implement PPLs by lightweight embedding into a host language. We illustrate this by designing and implementing WebPPL, a small PPL embedded in Javascript. We show how to implement several algorithms for universal probabilistic inference, including priority-based enumeration with caching, particle filtering, and Markov chain Monte Carlo. We use program transformations to expose the information required by these algorithms, including continuations and stack addresses. We illustrate these ideas with examples drawn from semantic parsing, natural language pragmatics, and procedural graphics.

If you want to sharpen the discussion of probabilistic programming languages, substitute in the pragmatics example:

‘none/some/all of the children are suicide bombers.’

The substitution raises the issue of how “certainty” can/should vary depending upon the gravity of results.

Who is a nice person?, has low stakes.

Who is a suicide bomber?, has high stakes.
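WebPPL itself is embedded in JavaScript, but the enumeration-style inference the book describes is easy to sketch in Python. The priors and likelihoods below are invented purely to illustrate how a posterior over 'none/some/all' is computed; they are not from the book:

```python
from fractions import Fraction

def enumerate_posterior(prior, likelihood, observation):
    """Exact enumeration: P(h | obs) proportional to P(h) * P(obs | h)."""
    joint = {h: p * likelihood(h, observation) for h, p in prior.items()}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

# Invented prior over the quantifier in 'none/some/all of the children are ...'.
prior = {"none": Fraction(90, 100), "some": Fraction(9, 100), "all": Fraction(1, 100)}

def likelihood(h, observation):
    # Invented: probability of observing one positive instance under each hypothesis.
    return {"none": Fraction(0), "some": Fraction(1, 2), "all": Fraction(1)}[h]

posterior = enumerate_posterior(prior, likelihood, True)
```

High-stakes questions can then be handled by demanding a higher posterior before acting, rather than by changing the inference itself.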

“Why Should I Trust You?”…

Tuesday, August 23rd, 2016

“Why Should I Trust You?”: Explaining the Predictions of Any Classifier by Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin.

Abstract:

Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one.

In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.

LIME software at Github.

For a quick overview consider: Introduction to Local Interpretable Model-Agnostic Explanations (LIME) (blog post).

Or what originally sent me in this direction: Trusting Machine Learning Models with LIME at Data Skeptic, a podcast described as:

Machine learning models are often criticized for being black boxes. If a human cannot determine why the model arrives at the decision it made, there’s good cause for skepticism. Classic inspection approaches to model interpretability are only useful for simple models, which are likely to only cover simple problems.

The LIME project seeks to help us trust machine learning models. At a high level, it takes advantage of local fidelity. For a given example, a separate model trained on neighbors of the example are likely to reveal the relevant features in the local input space to reveal details about why the model arrives at it’s conclusion.
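That local-fidelity idea can be sketched in a few lines. This is a toy stand-in, not the LIME library's API: perturb the instance, weight neighbors by proximity, and fit a weighted linear surrogate whose coefficients serve as the explanation. The black-box function, kernel width, and sample counts are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):
    # An opaque model to be explained (stand-in for any classifier's score).
    return np.sin(X[:, 0]) + X[:, 1] ** 2

x0 = np.array([1.0, 2.0])           # the instance whose prediction we explain

# 1. Sample the instance's neighborhood.
Z = x0 + rng.normal(0.0, 0.3, size=(500, 2))
y = black_box(Z)

# 2. Weight neighbors by proximity to x0 (exponential kernel).
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.25)

# 3. Fit a weighted linear surrogate; its coefficients are the local explanation.
A = np.hstack([Z - x0, np.ones((len(Z), 1))])   # centered features + intercept
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
```

For the invented black_box above, the surrogate's coefficients land near the true local sensitivities (cos(1) for the first feature, 4 for the second), which is what "relevant features in the local input space" means in practice.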

Data Science Renee finds deeply interesting material such as this on a regular basis and should follow her account on Twitter.

I do have one caveat on a quick read of these materials. The authors say in the paper, under 4. Submodular Pick For Explaining Models:


Even though explanations of multiple instances can be insightful, these instances need to be selected judiciously, since users may not have the time to examine a large number of explanations. We represent the time/patience that humans have by a budget B that denotes the number of explanations they are willing to look at in order to understand a model. Given a set of instances X, we define the pick step as the task of selecting B instances for the user to inspect.

The pick step is not dependent on the existence of explanations – one of the main purposes of tools like Modeltracker [1] and others [11] is to assist users in selecting instances themselves, and examining the raw data and predictions. However, since looking at raw data is not enough to understand predictions and get insights, the pick step should take into account the explanations that accompany each prediction. Moreover, this method should pick a diverse, representative set of explanations to show the user – i.e. non-redundant explanations that represent how the model behaves globally.
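The pick step reduces to greedy maximization of a coverage objective; because coverage is submodular, the greedy choice carries the usual (1 - 1/e) approximation guarantee. A toy sketch, with the instance/feature sets invented for illustration:

```python
def submodular_pick(explanations, budget):
    """Greedily pick up to `budget` instances that maximize feature coverage.

    `explanations` maps instance id -> set of important features; a toy
    stand-in for the paper's weighted coverage objective."""
    picked, covered = [], set()
    for _ in range(budget):
        best = max((i for i in explanations if i not in picked),
                   key=lambda i: len(explanations[i] - covered),
                   default=None)
        if best is None or not (explanations[best] - covered):
            break                      # nothing new left to cover
        picked.append(best)
        covered |= explanations[best]
    return picked

# Invented explanations for four instances.
exps = {"a": {"f1", "f2"},
        "b": {"f2", "f3", "f4"},
        "c": {"f1"},
        "d": {"f5"}}
```

Note that instance "c" is never picked: its explanation is redundant once "a" is shown, which is the "non-redundant coverage intuition" in miniature.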

The “judicious” selection of instances, in models of any degree of sophistication, based upon large data sets seems problematic.

The focus on the “non-redundant coverage intuition” is interesting but based on the assumption that changes in factors don’t lead to “redundant explanations.” In the cases presented that’s true, but I lack confidence that will be true in every case.

Still, a very important area of research and an effort that is worth tracking.

What’s the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?

Friday, August 19th, 2016

What’s the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning? by Michael Copeland.

From the post:

Artificial intelligence is the future. Artificial intelligence is science fiction. Artificial intelligence is already part of our everyday lives. All those statements are true, it just depends on what flavor of AI you are referring to.

For example, when Google DeepMind’s AlphaGo program defeated South Korean Master Lee Se-dol in the board game Go earlier this year, the terms AI, machine learning, and deep learning were used in the media to describe how DeepMind won. And all three are part of the reason why AlphaGo trounced Lee Se-Dol. But they are not the same things.

The easiest way to think of their relationship is to visualize them as concentric circles with AI — the idea that came first — the largest, then machine learning — which blossomed later, and finally deep learning — which is driving today’s AI explosion — fitting inside both.

If you are confused by the mix of artificial intelligence, machine learning, and deep learning floating around, Copeland will set you straight.

It’s a fun read and one you can recommend to non-technical friends.

When AI’s Take The Fifth – Sign Of Intelligence?

Wednesday, July 6th, 2016

Taking the fifth amendment in Turing’s imitation game by Kevin Warwick and Huma Shah.

Abstract:

In this paper, we look at a specific issue with practical Turing tests, namely the right of the machine to remain silent during interrogation. In particular, we consider the possibility of a machine passing the Turing test simply by not saying anything. We include a number of transcripts from practical Turing tests in which silence has actually occurred on the part of a hidden entity. Each of the transcripts considered here resulted in a judge being unable to make the ‘right identification’, i.e., they could not say for certain which hidden entity was the machine.

A delightful read about something never seen in media interviews: silence of the person being interviewed.

Of the interviews I watch, thankfully a small number, most people would seem more intelligent by being silent more often.

I take the authors’ results as a mark in favor of Fish’s interpretative communities because “interpretation” of silence falls squarely on the shoulders of the questioner.

If you don’t know the name Kevin Warwick, you should.


As of today, footnote 1 correctly points to the Fifth Amendment text at Cornell but mis-quotes it. In relevant part the Fifth Amendment reads, “…nor shall be compelled in any criminal case to be a witness against himself….”

…possibly biased? Try always biased.

Friday, June 24th, 2016

Artificial Intelligence Has a ‘Sea of Dudes’ Problem by Jack Clark.

From the post:


Much has been made of the tech industry’s lack of women engineers and executives. But there’s a unique problem with homogeneity in AI. To teach computers about the world, researchers have to gather massive data sets of almost everything. To learn to identify flowers, you need to feed a computer tens of thousands of photos of flowers so that when it sees a photograph of a daffodil in poor light, it can draw on its experience and work out what it’s seeing.

If these data sets aren’t sufficiently broad, then companies can create AIs with biases. Speech recognition software with a data set that only contains people speaking in proper, stilted British English will have a hard time understanding the slang and diction of someone from an inner city in America. If everyone teaching computers to act like humans are men, then the machines will have a view of the world that’s narrow by default and, through the curation of data sets, possibly biased.

“I call it a sea of dudes,” said Margaret Mitchell, a researcher at Microsoft. Mitchell works on computer vision and language problems, and is a founding member—and only female researcher—of Microsoft’s “cognition” group. She estimates she’s worked with around 10 or so women over the past five years, and hundreds of men. “I do absolutely believe that gender has an effect on the types of questions that we ask,” she said. “You’re putting yourself in a position of myopia.”

Margaret Mitchell makes a pragmatic case for diversity in the workplace, at least if you want to avoid a male-biased AI.

Not that a diverse workplace results in an “unbiased” AI; it results in a biased AI that isn’t solely male-biased.

It isn’t possible to escape bias because some person or persons must score “correct” answers for an AI. The scoring process imparts to the AI being trained the biases of its judge of correctness.

Unless someone wants to contend there are potential human judges without biases, I don’t see a way around imparting biases to AIs.

By being sensitive to evidence of biases, we can in some cases choose the biases we want an AI to possess, but an AI possessing no biases at all, isn’t possible.

AIs are, after all, our creations so it is only fair that they be made in our image, biases and all.

Bots, Won’t You Hide Me?

Thursday, June 23rd, 2016

Emerging Trends in Social Network Analysis of Terrorism and Counterterrorism, How Police Are Scanning All Of Twitter To Detect Terrorist Threats, Violent Extremism in the Digital Age: How to Detect and Meet the Threat, Online Surveillance: …ISIS and beyond [Social Media “chaff”] are just a small sampling of posts on the detection of “terrorists” on social media.

The last one is my post illustrating how “terrorist” at one time = “anti-Vietnam war,” “civil rights,” and “gay rights.” Due to the public nature of social media, avoiding government surveillance isn’t possible.

I stole the title, Bots, Won’t You Hide Me? from Ben Bova’s short story, Stars, Won’t You Hide Me?. It’s not very long and if you like science fiction, you will enjoy it.

Bova took verses in the short story from Sinner Man, a traditional African spiritual, which was recorded by a number of artists.

All of that is a very round about way to introduce you to a new Twitter account: ConvJournalism:

All you need to know about Conversational Journalism, (journalistic) bots and #convcomm by @martinhoffmann.

Surveillance of groups on social media isn’t going to succeed (see The White House Asked Social Media Companies to Look for Terrorists. Here’s Why They’d #Fail by Jenna McLaughlin), and bots can play an important role in assisting that failure.

Imagine bots that not only realistically mimic the chatter of actual human users but also follow, unfollow, and engage in apparent conspiracies with other bots, entirely without human direction, or very nearly so.
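Such bots need not be sophisticated to generate plausible chatter; even a first-order Markov chain over words produces passable filler. A minimal sketch with an invented toy corpus (a real bot would train on scraped timelines):

```python
import random
from collections import defaultdict

def train(corpus):
    """Build a first-order, word-level Markov chain from example chatter."""
    chain = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def babble(chain, start, length=8, seed=42):
    """Generate chatter by walking the chain from a start word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Invented toy corpus; a real bot would train on scraped timelines.
corpus = ["did you see the news today",
          "the news today was wild",
          "you see what happened today"]
chain = train(corpus)
```

Scale the corpus up and add posting schedules that mimic human activity patterns, and distinguishing the chaff from real accounts becomes expensive.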

Follow ConvJournalism and promote bot research/development that helps all of us hide. (I’d rather have the bots say yes than Satan.)

Are Non-AI Decisions “Open to Inspection?”

Thursday, June 16th, 2016

Ethics in designing AI Algorithms — part 1 by Michael Greenwood.

From the post:

As our civilization becomes more and more reliant upon computers and other intelligent devices, there arises specific moral issue that designers and programmers will inevitably be forced to address. Among these concerns is trust. Can we trust that the AI we create will do what it was designed to without any bias? There’s also the issue of incorruptibility. Can the AI be fooled into doing something unethical? Can it be programmed to commit illegal or immoral acts? Transparency comes to mind as well. Will the motives of the programmer or the AI be clear? Or will there be ambiguity in the interactions between humans and AI? The list of questions could go on and on.

Imagine if the government uses a machine-learning algorithm to recommend applications for student loan approvals. A rejected student and or parent could file a lawsuit alleging that the algorithm was designed with racial bias against some student applicants. The defense could be that this couldn’t be possible since it was intentionally designed so that it wouldn’t have knowledge of the race of the person applying for the student loan. This could be the reason for making a system like this in the first place — to assure that ethnicity will not be a factor as it could be with a human approving the applications. But suppose some racial profiling was proven in this case.

If directed evolution produced the AI algorithm, then it may be impossible to understand why, or even how. Maybe the AI algorithm uses the physical address data of candidates as one of the criteria in making decisions. Maybe they were born in or at some time lived in poverty‐stricken regions, and that in fact, a majority of those applicants who fit these criteria happened to be minorities. We wouldn’t be able to find out any of this if we didn’t have some way to audit the systems we are designing. It will become critical for us to design AI algorithms that are not just robust and scalable, but also easily open to inspection.

While I can appreciate the desire to make AI algorithms that are “…easily open to inspection…,” I feel compelled to point out that human decision making has resisted such openness for thousands of years.

There are the tales we tell each other about “rational” decision making but those aren’t how decisions are made, rather they are how we justify decisions made to ourselves and others. Not exactly the same thing.

Recall the parole-granting behavior of Israeli judges that depended upon proximity to their last meal. Certainly all of those judges would argue for their “rational” decisions, but meal time was a better predictor than any other. (Extraneous factors in judicial decisions)

My point being that if we struggle to even articulate the actual basis for non-AI decisions, where is our model for making AI decisions “open to inspection?” What would that look like?

You could say, for example, no discrimination based on race. OK, but that’s not going to work if you want to purposely setup scholarships for minority students.

When you object, “…that’s not what I meant! You know what I mean!…,” well, I might, but try convincing an AI that has no social context of what you “meant.”

The openness of AI decisions to inspection is an important issue but the human record in that regard isn’t encouraging.

AI Cultist On Justice System Reform

Wednesday, June 8th, 2016

White House Challenges Artificial Intelligence Experts to Reduce Incarceration Rates by Jason Shueh.

From the post:

The U.S. spends $270 billion on incarceration each year, has a prison population of about 2.2 million and an incarceration rate that’s spiked 220 percent since the 1980s. But with the advent of data science, White House officials are asking experts for help.

On Tuesday, June 7, the White House Office of Science and Technology Policy’s Lynn Overmann, who also leads the White House Police Data Initiative, stressed the severity of the nation’s incarceration crisis while asking a crowd of data scientists and artificial intelligence specialists for aid.

“We have built a system that is too large, and too unfair and too costly — in every sense of the word — and we need to start to change it,” Overmann said, speaking at a Computing Community Consortium public workshop.

She argued that the U.S., a country that has the highest number of incarcerated citizens in the world, is in need of systematic reforms, with both data tools to process alleged offenders and changes at the policy level to ensure fair and measured sentences. As a longtime counselor, advisor and analyst for the Justice Department and at the city and state levels, Overmann said she has studied and witnessed an alarming number of issues in terms of bias and unwarranted punishments.

For instance, she said that statistically, while drug use is about equal between African Americans and Caucasians, African Americans are more likely to be arrested and convicted. They also receive longer prison sentences compared to Caucasian inmates convicted of the same crimes.

Other problems, Overmann said, are due to inflated punishments that far exceed the severity of crimes. She recalled her years spent as an assistant public defender for Florida’s Miami-Dade County Public Defender’s Office as an example.

“I represented a client who was looking at spending 40 years of his life in prison because he stole a lawnmower and a weedeater from a shed in a backyard,” Overmann said. “I had another person who had AIDS and was offered a 15-year sentence for stealing mangos.”

Data and digital tools can help curb such pitfalls by increasing efficiency, transparency and accountability, she said.
… (emphasis added)

A tip for spotting a cultist: before specifying criteria for success, or even understanding a problem, a cultist announces the approach that will succeed.

Calls like this one are a disservice to legitimate artificial intelligence research, to say nothing of experts in criminal justice (unlike Lynn Overmann), who have struggled for decades to improve the criminal justice system.

Yes, Overmann has experience in the criminal justice system, both in legal practice and at a policy level, but that makes her no more of an expert on criminal justice reform than having multiple flat tires makes me an expert on tire design.

Data is not, has not been, nor will it ever be a magic elixir that solves undefined problems posed to it.

White House-sponsored AI cheerleading is a disservice to AI practitioners, to experts in the field of criminal justice reform and, more importantly, to those impacted by the criminal justice system.

Substitute meaningful problem definitions for the AI pom-poms if this is to be more than a resume-padding, contractor-courting project.

Tay AI Escapes, Recaptured

Wednesday, March 30th, 2016

Microsoft’s offensive chatbot Tay returns, by mistake by Georgia Wells.

From the post:

Less than one week after Microsoft Corp. debuted and then silenced an artificially intelligent software chatbot that started spewing anti-Semitic rants, a researcher inadvertently put the chatbot, named Tay, back online. The revived Tay’s messages were no less inappropriate than before.

I remembered a DARPA webinar (download and snooze) but despite following Tay I missed her return.

Looks like I need a better tracking/alarm system for incoming social media.

I see more than enough sexism, racism, and bigotry in non-Twitter news feeds and don’t need any more, but I prefer to make my own judgments about what is “inappropriate.”

Whether it is the FBI, the FCC, or private groups doing the labeling.

#AlphaGo Style Monte Carlo Tree Search In Python

Sunday, March 27th, 2016

Raymond Hettinger (@raymondh) tweeted the following links for anyone who wants an #AlphaGo style Monte Carlo Tree Search in Python:

Introduction to Monte Carlo Tree Search by Jeff Bradberry.

Monte Carlo Tree Search by Cameron Browne.

Jeff’s post is your guide to Monte Carlo Tree Search in Python while Cameron’s site bills itself as:

This site is intended to provide a comprehensive reference point for online MCTS material, to aid researchers in the field.

I didn’t see anything dated later than 2010 on Cameron’s site.

Suggestions for other collections of MCTS material that are more up to date?
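Jeff's post is the detailed guide; as a companion, here is a minimal, self-contained UCT (Upper Confidence bounds applied to Trees) sketch. It runs on an invented toy game (take 1 or 2 stones, whoever takes the last stone wins) rather than Go, and the constants are illustrative, not tuned:

```python
import math
import random

# Toy game: n stones, players alternate taking 1 or 2, last stone wins.
def moves(n):
    return [m for m in (1, 2) if m <= n]

class Node:
    def __init__(self, n, parent=None, move=None):
        self.n = n                    # stones remaining at this node
        self.parent = parent
        self.move = move              # the move that produced this node
        self.children = []
        self.untried = moves(n)
        self.visits = 0
        self.wins = 0.0               # from the perspective of the player who just moved

    def uct_child(self, c=1.4):
        # UCT: exploit (win rate) plus an exploration bonus for rarely tried children.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_n, iters=3000, seed=0):
    rng = random.Random(seed)
    root = Node(root_n)
    for _ in range(iters):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCT.
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: add one untried child.
        if node.untried:
            m = node.untried.pop(rng.randrange(len(node.untried)))
            child = Node(node.n - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the node's state.
        n, just_moved_wins = node.n, True
        while n > 0:
            n -= rng.choice(moves(n))
            just_moved_wins = not just_moved_wins
        # 4. Backpropagation: the reward flips perspective at each level.
        reward = 1.0 if just_moved_wins else 0.0
        while node is not None:
            node.visits += 1
            node.wins += reward
            reward = 1.0 - reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```

From 4 stones the search settles on taking 1, the minimax-optimal move, since it leaves the opponent at the losing position of 3. AlphaGo's variant replaces the random playouts and visit statistics with learned policy and value networks.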

“Ethical” Botmakers Censor Offensive Content

Saturday, March 26th, 2016

There are almost 500,000 “hits” from “tay ai” in one popular search engine today.

Against that background, I ran into: How to Make a Bot That Isn’t Racist by Sarah Jeong.

From the post:

…I talked to some creators of Twitter bots about @TayandYou, and the consensus was that Microsoft had fallen far below the baseline of ethical botmaking.

“The makers of @TayandYou absolutely 10000 percent should have known better,” thricedotted, a veteran Twitter botmaker and natural language processing researcher, told me via email. “It seems like the makers of @TayandYou attempted to account for a few specific mishaps, but sorely underestimated the vast potential for people to be assholes on the internet.”

Thricedotted and others belong to an established community of botmakers on Twitter that have been creating and experimenting for years. There’s a Bot Summit. There’s a hashtag (#botALLY).

As I spoke to each botmaker, it became increasingly clear that the community at large was tied together by crisscrossing lines of influence. There is a well-known body of talks, essays, and blog posts that form a common ethical code. The botmakers have even created open source blacklists of slurs that have become Step 0 in keeping their bots in line.

Not researching prior art is as bad as not Reading The Fine Manual (RTFM) before posting help queries to heavy traffic developer forums.
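The "Step 0" blacklist the botmakers describe is straightforward to implement. A minimal sketch; the tokens and the post callback are placeholders, and real community-maintained lists are far longer:

```python
# Toy blocklist; "slur1"/"slur2" are placeholders rather than actual entries.
BLOCKLIST = {"slur1", "slur2"}

def is_safe(text):
    """True if no token of the candidate bot output is blocklisted."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return BLOCKLIST.isdisjoint(tokens)

def post_if_safe(text, post):
    """Gate a bot's output: only call the (hypothetical) post callback when safe."""
    if is_safe(text):
        post(text)
        return True
    return False
```

Crude as it is, running every candidate output through such a gate is the safeguard thricedotted says was missing.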

Thricedotted claims TayandYou’s creators had a prior obligation to block offensive content:

For thricedotted, TayandYou failed from the start. “You absolutely do NOT let an algorithm mindlessly devour a whole bunch of data that you haven’t vetted even a little bit,” they said. “It blows my mind, because surely they’ve been working on this for a while, surely they’ve been working with Twitter data, surely they knew this shit existed. And yet they put in absolutely no safeguards against it?!” (emphasis in original)

No doubt Microsoft wishes that it had blocked offensive content in hindsight, but I don’t see a general ethical obligation to block or censor offensive content.

For example:

  • A bot that follows the public and private accounts of elected officials and re-tweets only those posts containing racial slurs, adding @news-organization handles to the tweets.
  • A bot that matches FEC (Federal Election Commission) donation records to Twitter accounts and re-tweets racist/offensive tweets along with campaign donation identifiers and the candidate in question.
  • A bot that follows accounts known for racist/offensive tweets in order to build publicly accessible archives of those tweets, preventing the future sanitizing of tweet archives (as happened with TayandYou).

Any of those strike you as “unethical?”

I wish the Georgia legislature and the U.S. Congress would openly use racist and offensive language.

They act in racist and offensive ways, so they should be openly racist and offensive. That makes it easier to whip up effective opposition against known racists, etc.

Which is, of course, why they self-censor to not use racist language.

The world is full of offensive people and we should make them own their statements.

Creating a false, sanitized view that doesn’t offend some n+1 sensitivities is just that, a false view of the world.

If you are looking for an ethical issue, creating views of the world that help conceal racism, sexism, etc., is a better starting place than offensive ephemera.

“Not Understanding” was Tay’s Vulnerability?

Friday, March 25th, 2016

Peter Lee (Corporate Vice President, Microsoft Research) posted Learning from Tay’s introduction where he says:


Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.

But Peter never specifies what “vulnerability” Tay suffered from.

To find out why Tay was “vulnerable,” you have to read Microsoft is deleting its AI chatbot’s incredibly racist tweets by Rob Price where he points out:


The reason it spouted garbage is that racist humans on Twitter quickly spotted a vulnerability — that Tay didn’t understand what it was talking about — and exploited it. (emphasis added)

Hmmm, how soon do you think Microsoft can confer on Tay the ability to “…understand what it [is] talking about…?”

I’m betting that’s not going to happen.

Tay can “learn” (read mimic) language patterns of users but if she speaks to racist users she will say racist things. Or religious, ISIS, sexist, Buddhist, trans-gender, or whatever things.

It isn’t ever going to be a question of Tay “understanding,” but rather of humans creating rules that prevent Tay from imitating certain speech patterns.

She will have no more or less “understanding” than before but her speech patterns will be more acceptable to some segments of users.

I have no doubt the result of Tay’s first day in the world was not what Microsoft wanted or anticipated.

That said, people are an ugly lot, and I don’t mean a minority of them. All of us are better some days than others and about some issues and not others.

To the extent that Tay was designed to imitate people, I consider the project to be a success. If you think Tay should react the way some people imagine we should act, then it was a failure.

There’s an interesting question for Easter weekend:

Should an artificial intelligence act as we do or should it act as we ought to do?

PS: I take Peter’s comments about “…do not represent who we are or what we stand for, nor how we designed Tay…” at face value. However, the human heart is a dark place, and to pretend that is true only of a minority or sub-group is to ignore the lessons of history.

AI Masters Go, Twitter, Not So Much (Log from @TayandYou?)

Thursday, March 24th, 2016

Microsoft deletes ‘teen girl’ AI after it became a Hitler-loving sex robot within 24 hours by Helena Horton.

From the post:

A day after Microsoft introduced an innocent Artificial Intelligence chat robot to Twitter it has had to delete it after it transformed into an evil Hitler-loving, incestual sex-promoting, ‘Bush did 9/11’-proclaiming robot.

Developers at Microsoft created ‘Tay’, an AI modelled to speak ‘like a teen girl’, in order to improve the customer service on their voice recognition software. They marketed her as ‘The AI with zero chill’ – and that she certainly is.

The headline was suggested to me by a tweet from Peter Seibel:

Interesting how wide the gap is between two recent AI: AlphaGo and TayTweets. The Turing Test is *hard*. http://gigamonkeys.com/turing/.

In preparation for the next AI celebration, does anyone have a complete log of the tweets from Tay Tweets?

I prefer non-revisionist history where data doesn’t disappear. You can imagine the use Stalin would have made of that capability.

Project AIX: Using Minecraft to build more intelligent technology

Monday, March 14th, 2016

Project AIX: Using Minecraft to build more intelligent technology by Allison Linn.

From the post:

In the airy, loft-like Microsoft Research lab in New York City, five computer scientists are spending their days trying to get a Minecraft character to climb a hill.

That may seem like a pretty simple job for some of the brightest minds in the field, until you consider this: The team is trying to train an artificial intelligence agent to learn how to do things like climb to the highest point in the virtual world, using the same types of resources a human has when she learns a new task.

That means that the agent starts out knowing nothing at all about its environment or even what it is supposed to accomplish. It needs to understand its surroundings and figure out what’s important – going uphill – and what isn’t, such as whether it’s light or dark. It needs to endure a lot of trial and error, including regularly falling into rivers and lava pits. And it needs to understand – via incremental rewards – when it has achieved all or part of its goal.

“We’re trying to program it to learn, as opposed to programming it to accomplish specific tasks,” said Fernando Diaz, a senior researcher in the New York lab and one of the people working on the project.

The research project is possible thanks to AIX, a platform developed by Katja Hofmann and her colleagues in Microsoft’s Cambridge, UK, lab and unveiled publicly on Monday. AIX allows computer scientists to use the world of Minecraft as a testing ground for conducting research designed to improve artificial intelligence.
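The trial-and-error, reward-driven learning the excerpt describes can be illustrated with a minimal sketch. This is tabular Q-learning on an invented one-dimensional "hill": the agent starts knowing nothing, blunders around, and is rewarded only for reaching the top. It is a toy stand-in for the idea, not the AIX platform's actual code, and every name and number in it is made up for illustration.

```python
import random

# Toy stand-in for the hill-climbing task described above: a 1-D world
# where the agent starts at position 0 and is rewarded only for reaching
# the "highest point" at position 4. Tabular Q-learning, a classic
# trial-and-error method; NOT the AIX platform's actual code.
GOAL, ACTIONS = 4, [-1, +1]          # step left or step right
q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else 0.0   # incremental reward at the top
    return nxt, reward

random.seed(0)
for _ in range(500):                 # episodes of trial and error
    s = 0
    while s != GOAL:
        # Mostly exploit the best estimate so far, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < 0.2 else \
            max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(q[(nxt, b)] for b in ACTIONS)
        q[(s, a)] += 0.5 * (r + 0.9 * best_next - q[(s, a)])
        s = nxt

# After training, the greedy policy walks uphill from every state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)}
print(policy)
```

The agent is never told "go uphill"; it discovers that policy purely from the reward signal, which is the point the researchers make above.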

The project is in closed beta now but is said to be going open source in the summer of 2016.

Someone recently mentioned the state of Minecraft documentation. Their impression was that there is a lot of information, but it is poorly organized.

If you are interested in exploring Minecraft ahead of the open-source release this summer, see: How to Install Minecraft on Ubuntu or Any Other Linux Distribution.

Lee Sedol “busted up” AlphaGo – Game 4

Monday, March 14th, 2016

Lee Sedol defeats AlphaGo in masterful comeback – Game 4 by David Ormerod.

From the post:

Expectations were modest on Sunday, as Lee Sedol 9p faced the computer Go program AlphaGo for the fourth time.

Lee Sedol 9 dan, obviously relieved to win his first game.

After Lee lost the first three games, his chance of winning the five game match had evaporated.

His revised goal, and the hope of millions of his fans, was that he might succeed in winning at least one game against the machine before the match concluded.

However, his prospects of doing so appeared to be bleak, until suddenly, just when all seemed to be lost, he pulled a rabbit out of a hat.

And he didn’t even have a hat!

Lee Sedol won game four by resignation.

A reversal of roles, but would you say that Sedol “busted up” AlphaGo?

Looking forward to the results of Game 5!

Automating Amazon/Hotel/Travel Reviews (+ Human Intelligence Test (HIT))

Sunday, February 28th, 2016

The Neural Network That Remembers by Zachary C. Lipton & Charles Elkan.

From the post:

On tap at the brewpub. A nice dark red color with a nice head that left a lot of lace on the glass. Aroma is of raspberries and chocolate. Not much depth to speak of despite consisting of raspberries. The bourbon is pretty subtle as well. I really don’t know that find a flavor this beer tastes like. I would prefer a little more carbonization to come through. It’s pretty drinkable, but I wouldn’t mind if this beer was available.

Besides the overpowering bouquet of raspberries in this guy’s beer, this review is remarkable for another reason. It was produced by a computer program instructed to hallucinate a review for a “fruit/vegetable beer.” Using a powerful artificial-intelligence tool called a recurrent neural network, the software that produced this passage isn’t even programmed to know what words are, much less to obey the rules of English syntax. Yet, by mining the patterns in reviews from the barflies at BeerAdvocate.com, the program learns how to generate similarly coherent (or incoherent) reviews.

The neural network learns proper nouns like “Coors Light” and beer jargon like “lacing” and “snifter.” It learns to spell and to misspell, and to ramble just the right amount. Most important, the neural network generates reviews that are contextually relevant. For example, you can say, “Give me a 5-star review of a Russian imperial stout,” and the software will oblige. It knows to describe India pale ales as “hoppy,” stouts as “chocolatey,” and American lagers as “watery.” The neural network also learns more colorful words for lagers that we can’t put in print.

This particular neural network can also run in reverse, taking any review and recognizing the sentiment (star rating) and subject (type of beer). This work, done by one of us (Lipton) in collaboration with his colleagues Sharad Vikram and Julian McAuley at the University of California, San Diego, is part of a growing body of research demonstrating the language-processing capabilities of recurrent networks. Other related feats include captioning images, translating foreign languages, and even answering e-mail messages. It might make you wonder whether computers are finally able to think.

(emphasis in original)
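The post describes a character-level recurrent neural network. A full RNN is beyond a blog aside, but the same "learn character patterns from text, then sample new text" idea can be sketched with a much simpler character-level Markov chain. This is a simplified stand-in, not the authors' model, and the tiny training corpus below is invented for illustration.

```python
import random
from collections import defaultdict

# Tiny invented corpus in the style of a beer review; a real model would
# train on the full BeerAdvocate archive mentioned in the post.
corpus = ("a nice dark red color with a nice head. aroma of raspberries "
          "and chocolate. pretty drinkable, nice lacing on the glass. ")

order = 3                                   # context length in characters
model = defaultdict(list)
for i in range(len(corpus) - order):
    model[corpus[i:i + order]].append(corpus[i + order])

def sample(seed_text, length=60):
    random.seed(0)                          # deterministic for the demo
    out = seed_text
    for _ in range(length):
        successors = model.get(out[-order:])
        if not successors:                  # unseen context: stop early
            break
        out += random.choice(successors)
    return out

print(sample("a n"))
```

Like the RNN in the post, this model is never told what words are; it only mines character patterns, which is why both can produce text that is coherent in texture but wanders in meaning.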

An enthusiastic introduction to recurrent neural networks and projection of their future! Perhaps a bit too enthusiastic.

My immediate thought was what a time saver a recurrent neural network would be for “evaluation” requests that appear in my inbox with alarming regularity.

What about a service that accepts forwarded emails and generates a review of the book, seller, hotel, trip, etc., returned to you for cut-and-paste?

That would be about as “intelligent” as the amount of attention most of us devote to such requests.

You could set the service to mimic highly followed reviewers so over time you would move up the ranks of reviewers.

I mention Amazon, hotel, travel reviews but those are just low-lying fruit. You could do journal book reviews with a different data set.

Near the end of the post the authors write:


In this sense, the computer-science community is evaluating recurrent neural networks via a kind of Turing test. We try to teach a computer to act intelligently by training it to imitate what people produce when faced with the same task. Then we evaluate our thinking machine by seeing whether a human judge can distinguish between its output and what a human being might come up with.

While the very fact that we’ve come this far is exciting, this approach may have some fundamental limitations. For instance, it’s unclear how such a system could ever outstrip the capabilities of the people who provide the training data. Teaching a machine to learn through imitation might never produce more intelligence than was present collectively in those people.

One promising way forward might be an approach called reinforcement learning. Here, the computer explores the possible actions it can take, guided only by some sort of reward signal. Recently, researchers at Google DeepMind combined reinforcement learning with feed-forward neural networks to create a system that can beat human players at 31 different video games. The system never got to imitate human gamers. Instead it learned to play games by trial and error, using its score in the video game as a reward signal.
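The quoted passage contrasts imitation with learning "guided only by some sort of reward signal." The smallest example of that idea is a multi-armed bandit: the agent never sees expert play, only scores. The sketch below is an invented illustration with hypothetical payout probabilities, not DeepMind's system.

```python
import random

random.seed(1)
payout = [0.1, 0.2, 0.9]                    # hidden win probability per arm
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]                    # running average reward per arm

for _ in range(2000):
    if random.random() < 0.1:               # explore a random arm
        arm = random.randrange(3)
    else:                                   # exploit best estimate so far
        arm = values.index(max(values))
    reward = 1.0 if random.random() < payout[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

# The agent discovers the highest-payout arm from reward alone, with no
# demonstrations to imitate.
best = values.index(max(values))
print(best)
```

Scale the same reward-only loop up to pixels and joysticks and you have, in spirit, the video-game result the authors cite.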

Instead of asking whether computers can think, the more provocative question is whether people think at all during a large range of daily activities.

Consider it as the Human Intelligence Test (HIT).

How much “intelligence” does it take to win a video game?

Eye/hand coordination and attention, to be sure, but what “intelligence” is involved?

Computers may “eclipse” human beings at non-intelligent activities, as a shovel “eclipses” our ability to dig with our bare hands.

But I’m not overly concerned.

Are you?

Danger of Hackers vs. AI

Sunday, January 31st, 2016

An interactive graphical history of large data breaches by Mark Gibbs.

From the post:

If you’re trying to convince your management to beef up the organization’s security to protect against data breaches, an interactive infographic from Information Is Beautiful might help.

Built with IIB’s forthcoming VIZsweet data visualization tools, the World’s Biggest Data Breaches visualization combines data from DataBreaches.net, IdTheftCentre, and press reports to create a timeline of breaches that involved the loss of 30,000 or more records (click the image below to go to the interactive version). What’s particularly interesting is that while breaches were caused by accidental publishing, configuration errors, inside job, lost or stolen computer, lost or stolen media, or just good old poor security, the majority of events and the largest, were due to hacking.

Make sure the powers that be understand that you don’t have to be a really big organization for a serious data breach to happen.

See Mark’s post for the image and link to the interactive graphic.

Hackers (human intelligence) are kicking cybersecurity’s ass 24 x 7.

The danger of AI (artificial intelligence)? Maybe, someday, it might be a problem, but we don’t know when or to what extent.

What priority do you assign these issues in your IT budget?

If you said hackers are #1, congratulations! You have an evidence-based IT budgeting process.

Otherwise, well, see you at DragonCon. I’m sure you will have lots of free time when you aren’t in the unemployment line.

PS: Heavy spending on what is mis-labeled as “artificial intelligence” is perfectly legitimate. Think of it as training computers to do tasks humans can’t do or that machines can do more effectively. Calling it AI loads it with unnecessary baggage.