Archive for the ‘Machine Learning’ Category

Weka MOOCs – Self-Paced Courses

Friday, July 8th, 2016

All three Weka MOOCs available as self-paced courses

From the post:

All three MOOCs (“Data Mining with Weka”, “More Data Mining with Weka” and “Advanced Data Mining with Weka”) are now available on a self-paced basis. All the material, activities and assessments are available from now until 24th September 2016 at:

https://weka.waikato.ac.nz/

The Weka software and MOOCs are great introductions to machine learning!

…possibly biased? Try always biased.

Friday, June 24th, 2016

Artificial Intelligence Has a ‘Sea of Dudes’ Problem by Jack Clark.

From the post:


Much has been made of the tech industry’s lack of women engineers and executives. But there’s a unique problem with homogeneity in AI. To teach computers about the world, researchers have to gather massive data sets of almost everything. To learn to identify flowers, you need to feed a computer tens of thousands of photos of flowers so that when it sees a photograph of a daffodil in poor light, it can draw on its experience and work out what it’s seeing.

If these data sets aren’t sufficiently broad, then companies can create AIs with biases. Speech recognition software with a data set that only contains people speaking in proper, stilted British English will have a hard time understanding the slang and diction of someone from an inner city in America. If everyone teaching computers to act like humans are men, then the machines will have a view of the world that’s narrow by default and, through the curation of data sets, possibly biased.

“I call it a sea of dudes,” said Margaret Mitchell, a researcher at Microsoft. Mitchell works on computer vision and language problems, and is a founding member—and only female researcher—of Microsoft’s “cognition” group. She estimates she’s worked with around 10 or so women over the past five years, and hundreds of men. “I do absolutely believe that gender has an effect on the types of questions that we ask,” she said. “You’re putting yourself in a position of myopia.”

Margaret Mitchell makes a pragmatic case for diversity in the workplace, at least if you want to avoid male-biased AI.

Not that a diverse workplace results in an “unbiased” AI; it will produce a biased AI that isn’t solely male-biased.

It isn’t possible to escape bias because some person or persons have to score “correct” answers for an AI. The scoring process imparts to the AI being trained the biases of its judge of correctness.

Unless someone wants to contend there are potential human judges without biases, I don’t see a way around imparting biases to AIs.

By being sensitive to evidence of biases, we can in some cases choose the biases we want an AI to possess, but an AI possessing no biases at all, isn’t possible.

AIs are, after all, our creations so it is only fair that they be made in our image, biases and all.

Bots, Won’t You Hide Me?

Thursday, June 23rd, 2016

Emerging Trends in Social Network Analysis of Terrorism and Counterterrorism, How Police Are Scanning All Of Twitter To Detect Terrorist Threats, Violent Extremism in the Digital Age: How to Detect and Meet the Threat, Online Surveillance: …ISIS and beyond [Social Media “chaff”] are just a small sampling of posts on the detection of “terrorists” on social media.

The last one is my post illustrating how “terrorist” at one time = “anti-Vietnam war,” “civil rights,” and “gay rights.” Due to the public nature of social media, avoiding government surveillance isn’t possible.

I stole the title, Bots, Won’t You Hide Me? from Ben Bova’s short story, Stars, Won’t You Hide Me?. It’s not very long and if you like science fiction, you will enjoy it.

Bova took verses in the short story from Sinner Man, a traditional African-American spiritual, which was recorded by a number of artists.

All of that is a very roundabout way to introduce you to a new Twitter account: ConvJournalism:

All you need to know about Conversational Journalism, (journalistic) bots and #convcomm by @martinhoffmann.

Surveillance of groups on social media isn’t going to succeed (see The White House Asked Social Media Companies to Look for Terrorists. Here’s Why They’d #Fail by Jenna McLaughlin), and bots can play an important role in assisting in that failure.

Imagine bots that not only realistically mimic the chatter of actual human users but also follow, unfollow, etc., and engage in apparent conspiracies with other bots, entirely without human direction, or with very little.

Follow ConvJournalism and promote bot research/development that helps all of us hide. (I’d rather have the bots say yes than Satan.)

Machine Learning Yearning [New Book – Free Draft – Signup By Friday June 24th (2016)]

Monday, June 20th, 2016

Machine Learning Yearning by Andrew Ng.

About Andrew Ng:

Andrew Ng is Associate Professor of Computer Science at Stanford; Chief Scientist of Baidu; and Chairman and Co-founder of Coursera.

In 2011 he led the development of Stanford University’s main MOOC (Massive Open Online Courses) platform and also taught an online Machine Learning class to over 100,000 students, leading to the founding of Coursera. Ng’s goal is to give everyone in the world access to a great education, for free. Today, Coursera partners with some of the top universities in the world to offer high quality online courses, and is the largest MOOC platform in the world.

Ng also works on machine learning with an emphasis on deep learning. He founded and led the “Google Brain” project which developed massive-scale deep learning algorithms. This resulted in the famous “Google cat” result, in which a massive neural network with 1 billion parameters learned from unlabeled YouTube videos to detect cats. More recently, he continues to work on deep learning and its applications to computer vision and speech, including such applications as autonomous driving.

Haven’t you signed up yet?

OK, What You Will Learn:

The goal of this book is to teach you how to make the numerous decisions needed with organizing a machine learning project. You will learn:

  • How to establish your dev and test sets
  • Basic error analysis
  • How you can use Bias and Variance to decide what to do
  • Learning curves
  • Comparing learning algorithms to human-level performance
  • Debugging inference algorithms
  • When you should and should not use end-to-end deep learning
  • Error analysis by parts
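To make a couple of those topics concrete, here is a minimal sketch in scikit-learn of carving out a dev set and plotting a learning curve. The dataset and model are placeholders of my choosing, not examples from the book:

    import matplotlib.pyplot as plt
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import learning_curve, train_test_split

    X, y = load_digits(return_X_y=True)
    # Hold out a dev set for error analysis; a test set would be a
    # further split, held back until the very end.
    X_train, X_dev, y_train, y_dev = train_test_split(X, y, test_size=0.2)

    # A learning curve shows whether more data is likely to help
    # (high variance) or not (high bias).
    sizes, train_scores, val_scores = learning_curve(
        LogisticRegression(max_iter=1000), X_train, y_train, cv=5)

    plt.plot(sizes, train_scores.mean(axis=1), label="training accuracy")
    plt.plot(sizes, val_scores.mean(axis=1), label="validation accuracy")
    plt.xlabel("training examples")
    plt.ylabel("accuracy")
    plt.legend()
    plt.show()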

Free drafts of a new book on organizing machine learning projects, not just machine learning itself, by one of the world’s leading experts on machine learning.

Now are you signed up?

If you are interested in machine learning, following Andrew Ng on Twitter isn’t a bad place to start.

Be aware, however, that even machine learning experts can be mistaken. For example, Andrew tweeted, favorably, How to make a good teacher from the Economist.


Instilling these techniques is easier said than done. With teaching as with other complex skills, the route to mastery is not abstruse theory but intense, guided practice grounded in subject-matter knowledge and pedagogical methods. Trainees should spend more time in the classroom. The places where pupils do best, for example Finland, Singapore and Shanghai, put novice teachers through a demanding apprenticeship. In America high-performing charter schools teach trainees in the classroom and bring them on with coaching and feedback.

Teacher-training institutions need to be more rigorous—rather as a century ago medical schools raised the calibre of doctors by introducing systematic curriculums and providing clinical experience. It is essential that teacher-training colleges start to collect and publish data on how their graduates perform in the classroom. Courses that produce teachers who go on to do little or nothing to improve their pupils’ learning should not receive subsidies or see their graduates become teachers. They would then have to improve to survive.

The author conflates “demanding apprenticeship” with “teacher-training colleges start to collect and publish data on how their graduates perform in the classroom,” as though whatever data we collect has some meaningful relationship with teaching and/or the training of teachers.

A “demanding apprenticeship” no doubt weeds out people who are not well suited to be teachers, but there is no evidence that it can make a teacher out of someone who isn’t suited for the task.

The collection of data is one of the ongoing fallacies of American education. The fact that you can collect data is no indication that it is useful and/or has any relationship to what you are attempting to measure.

Follow Andrew for his work on machine learning, not so much for his opinions on education.

Are Non-AI Decisions “Open to Inspection?”

Thursday, June 16th, 2016

Ethics in designing AI Algorithms — part 1 by Michael Greenwood.

From the post:

As our civilization becomes more and more reliant upon computers and other intelligent devices, there arises specific moral issue that designers and programmers will inevitably be forced to address. Among these concerns is trust. Can we trust that the AI we create will do what it was designed to without any bias? There’s also the issue of incorruptibility. Can the AI be fooled into doing something unethical? Can it be programmed to commit illegal or immoral acts? Transparency comes to mind as well. Will the motives of the programmer or the AI be clear? Or will there be ambiguity in the interactions between humans and AI? The list of questions could go on and on.

Imagine if the government uses a machine-learning algorithm to recommend applications for student loan approvals. A rejected student and or parent could file a lawsuit alleging that the algorithm was designed with racial bias against some student applicants. The defense could be that this couldn’t be possible since it was intentionally designed so that it wouldn’t have knowledge of the race of the person applying for the student loan. This could be the reason for making a system like this in the first place — to assure that ethnicity will not be a factor as it could be with a human approving the applications. But suppose some racial profiling was proven in this case.

If directed evolution produced the AI algorithm, then it may be impossible to understand why, or even how. Maybe the AI algorithm uses the physical address data of candidates as one of the criteria in making decisions. Maybe they were born in or at some time lived in poverty‐stricken regions, and that in fact, a majority of those applicants who fit these criteria happened to be minorities. We wouldn’t be able to find out any of this if we didn’t have some way to audit the systems we are designing. It will become critical for us to design AI algorithms that are not just robust and scalable, but also easily open to inspection.

While I can appreciate the desire to make AI algorithms that are “…easily open to inspection…,” I feel compelled to point out that human decision making has resisted such openness for thousands of years.

There are tales we tell each other about “rational” decision making, but those aren’t how decisions are made; rather, they are how we justify decisions already made, to ourselves and others. Not exactly the same thing.

Recall the parole-granting behavior of Israeli judges, which depended upon proximity to their last meal. Certainly all of those judges would argue their decisions were “rational,” but meal time was a better predictor than any other factor. (Extraneous factors in judicial decisions)

My point being that if we struggle to even articulate the actual basis for non-AI decisions, where is our model for making AI decisions “open to inspection?” What would that look like?

You could say, for example, no discrimination based on race. OK, but that’s not going to work if you want to purposely set up scholarships for minority students.

When you object, “…that’s not what I meant! You know what I mean!…,” well, I might, but try convincing an AI that has no social context of what you “meant.”

The openness of AI decisions to inspection is an important issue but the human record in that regard isn’t encouraging.

AI Cultist On Justice System Reform

Wednesday, June 8th, 2016

White House Challenges Artificial Intelligence Experts to Reduce Incarceration Rates by Jason Shueh.

From the post:

The U.S. spends $270 billion on incarceration each year, has a prison population of about 2.2 million and an incarceration rate that’s spiked 220 percent since the 1980s. But with the advent of data science, White House officials are asking experts for help.

On Tuesday, June 7, the White House Office of Science and Technology Policy’s Lynn Overmann, who also leads the White House Police Data Initiative, stressed the severity of the nation’s incarceration crisis while asking a crowd of data scientists and artificial intelligence specialists for aid.

“We have built a system that is too large, and too unfair and too costly — in every sense of the word — and we need to start to change it,” Overmann said, speaking at a Computing Community Consortium public workshop.

She argued that the U.S., the country with the highest number of incarcerated citizens in the world, is in need of systematic reforms, with both data tools to process alleged offenders and at the policy level to ensure fair and measured sentences. As a longtime counselor, advisor and analyst for the Justice Department and at the city and state levels, Overmann said she has studied and witnessed an alarming number of issues in terms of bias and unwarranted punishments.

For instance, she said that statistically, while drug use is about equal between African Americans and Caucasians, African Americans are more likely to be arrested and convicted. They also receive longer prison sentences compared to Caucasian inmates convicted of the same crimes.

Other problems, Overmann said, are due to inflated punishments that far exceed the severity of crimes. She recalled her years spent as an assistant public defender for Florida’s Miami-Dade County Public Defender’s Office as an example.

“I represented a client who was looking at spending 40 years of his life in prison because he stole a lawnmower and a weedeater from a shed in a backyard,” Overmann said. “I had another person who had AIDS and was offered a 15-year sentence for stealing mangos.”

Data and digital tools can help curb such pitfalls by increasing efficiency, transparency and accountability, she said.
… (emphasis added)

A tip for spotting a cultist: before specifying criteria for success, or even understanding a problem, a cultist announces the approach that will succeed.

Calls like this one are a disservice to legitimate artificial intelligence research, to say nothing of experts in criminal justice (unlike Lynn Overmann), who have struggled for decades to improve the criminal justice system.

Yes, Overmann has experience in the criminal justice system, both in legal practice and at a policy level, but that makes her no more of an expert on criminal justice reform than having multiple flat tires makes me an expert on tire design.

Data is not, has not been, nor will it ever be a magic elixir that solves undefined problems posed to it.

White House sponsored AI cheer leading is a disservice to AI practitioners, experts in the field of criminal justice reform and more importantly, to those impacted by the criminal justice system.

Substitute meaningful problem definitions for the AI pom-poms if this is to be more than resume padding and currying favor with contractors.

The anatomy of online deception:… [Statistics Can Be Deceiving – Even In Academic Papers]

Wednesday, June 8th, 2016

The anatomy of online deception: what makes automated text convincing? by Richard M. Everett, Jason R. C. Nurse, Arnau Erola.

Abstract:

Technology is rapidly evolving, and with it comes increasingly sophisticated bots (i.e. software robots) which automatically produce content to inform, influence, and deceive genuine users. This is particularly a problem for social media networks where content tends to be extremely short, informally written, and full of inconsistencies. Motivated by the rise of bots on these networks, we investigate the ease with which a bot can deceive a human. In particular, we focus on deceiving a human into believing that an automatically generated sample of text was written by a human, as well as analysing which factors affect how convincing the text is. To accomplish this, we train a set of models to write text about several distinct topics, to simulate a bot’s behaviour, which are then evaluated by a panel of judges. We find that: (1) typical Internet users are twice as likely to be deceived by automated content than security researchers; (2) text that disagrees with the crowd’s opinion is more believably human; (3) light-hearted topics such as Entertainment are significantly easier to deceive with than factual topics such as Science; and (4) automated text on Adult content is the most deceptive regardless of a user’s background.

The statistics presented are impressive:


We found that automated text is twice as likely to deceive Internet users than security researchers. Also, text that disagrees with the Crowd’s opinion increases the likelihood of deception by up to 78%, while text on light-hearted Topics such as Entertainment increases the likelihood by up to 85%. Notably, we found that automated text on Adult content is the most deceptive for both typical Internet users and security researchers, increasing the likelihood of deception by at least 30% compared to other Topics on average. Together, this shows that it is feasible for a party with technical resources and knowledge to create an environment populated by bots that could successfully deceive users.
… (at page 1120)

To evaluate those statistics consider the judges panels that create the supporting data:


To evaluate this test dataset, a panel of judges is used where every judge receives the entire test set with no other accompanying data such as Topic and Crowd opinion. Then, each judge evaluates the comments based solely on their text and labels each as either human or bot, depending who they believe wrote it. To fill this panel, three judges were selected – in keeping with the average procedure of the work highlighted by Bailey et al. [2] – for two distinct groups:

  • Group 1: Three cyber security researchers who are actively involved in security work with an intimate knowledge of the Internet and its threats.
  • Group 2: Three typical Internet users who browse social media daily but are not experienced with technology or security, and therefore less aware of the threats.

… (pages 1117-1118)

The paper’s human vs. machine judgments of generated text rest on the evaluations of six (6) people.

I’m suddenly less impressed than I hoped to be from reading the abstract.

A more informative title would have been: 6 People Classify Machine/Human Generated Reddit Comments.

To their credit, the authors were explicit about the judging panels in their study.

I am forced to conclude that either peer review wasn’t used for SAC 2016 (the 31st ACM Symposium on Applied Computing) or its peer review left a great deal to be desired.

As a conference goer, would you be interested in human/machine judgments from six unknown panelists?

Deep Learning Trends @ ICLR 2016 (+ Shout-Out to arXiv)

Friday, June 3rd, 2016

Deep Learning Trends @ ICLR 2016 by Tomasz Malisiewicz.

From the post:

Started by the youngest members of the Deep Learning Mafia [1], namely Yann LeCun and Yoshua Bengio, the ICLR conference is quickly becoming a strong contender for the single most important venue in the Deep Learning space. More intimate than NIPS and less benchmark-driven than CVPR, the world of ICLR is arXiv-based and moves fast.

Today’s post is all about ICLR 2016. I’ll highlight new strategies for building deeper and more powerful neural networks, ideas for compressing big networks into smaller ones, as well as techniques for building “deep learning calculators.” A host of new artificial intelligence problems is being hit hard with the newest wave of deep learning techniques, and from a computer vision point of view, there’s no doubt that deep convolutional neural networks are today’s “master algorithm” for dealing with perceptual data.

An information-packed review of the conference and, if that weren’t enough, this shout-out to arXiv:


ICLR Publishing Model: arXiv or bust
At ICLR, papers get posted on arXiv directly. And if you had any doubts that arXiv is just about the single awesomest thing to hit the research publication model since the Gutenberg press, let the success of ICLR be one more data point towards enlightenment. ICLR has essentially bypassed the old-fashioned publishing model where some third party like Elsevier says “you can publish with us and we’ll put our logo on your papers and then charge regular people $30 for each paper they want to read.” Sorry Elsevier, research doesn’t work that way. Most research papers aren’t good enough to be worth $30 for a copy. It is the entire body of academic research that provides true value, for which a single paper just a mere door. You see, Elsevier, if you actually gave the world an exceptional research paper search engine, together with the ability to have 10-20 papers printed on decent quality paper for a $30/month subscription, then you would make a killing on researchers and I would endorse such a subscription. So ICLR, rightfully so, just said fuck it, we’ll use arXiv as the method for disseminating our ideas. All future research conferences should use arXiv to disseminate papers. Anybody can download the papers, see when newer versions with corrections are posted, and they can print their own physical copies. But be warned: Deep Learning moves so fast, that you’ve gotta be hitting refresh or arXiv on a weekly basis or you’ll be schooled by some grad students in Canada.

Is your publishing < arXiv?

Do you hit arXiv every week?

Bias? What Bias? We’re Scientific!

Monday, May 23rd, 2016

This ProPublica story by Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner isn’t short, but it is worth your time not only to read, but to download the data and test their analysis for yourself.

Especially if you have the misimpression that algorithms can avoid bias. Or that clients will apply your analysis with the caution that it deserves.

Finding a bias in software, like finding a bug, is a good thing. But that’s just one; there is no estimate of how many others may exist.

And as you will find, clients may not remember your careful explanation of the limits to your work. Or apply it in ways you don’t anticipate.

Machine Bias – There’s software used across the country to predict future criminals. And it’s biased against blacks.

Here’s the first story to try to lure you deeper into this study:

ON A SPRING AFTERNOON IN 2014, Brisha Borden was running late to pick up her god-sister from school when she spotted an unlocked kid’s blue Huffy bicycle and a silver Razor scooter. Borden and a friend grabbed the bike and scooter and tried to ride them down the street in the Fort Lauderdale suburb of Coral Springs.

Just as the 18-year-old girls were realizing they were too big for the tiny conveyances — which belonged to a 6-year-old boy — a woman came running after them saying, “That’s my kid’s stuff.” Borden and her friend immediately dropped the bike and scooter and walked away.

But it was too late — a neighbor who witnessed the heist had already called the police. Borden and her friend were arrested and charged with burglary and petty theft for the items, which were valued at a total of $80.

Compare their crime with a similar one: The previous summer, 41-year-old Vernon Prater was picked up for shoplifting $86.35 worth of tools from a nearby Home Depot store.

Prater was the more seasoned criminal. He had already been convicted of armed robbery and attempted armed robbery, for which he served five years in prison, in addition to another armed robbery charge. Borden had a record, too, but it was for misdemeanors committed when she was a juvenile.

Yet something odd happened when Borden and Prater were booked into jail: A computer program spat out a score predicting the likelihood of each committing a future crime. Borden — who is black — was rated a high risk. Prater — who is white — was rated a low risk.

Two years later, we know the computer algorithm got it exactly backward. Borden has not been charged with any new crimes. Prater is serving an eight-year prison term for subsequently breaking into a warehouse and stealing thousands of dollars’ worth of electronics.

This analysis demonstrates that malice isn’t required for bias to damage lives. Whether the biases are in software, in its application, in the interpretation of its results, the end result is the same, damaged lives.

I don’t think bias in software is avoidable, but here, no one was even looking.

What role do you think budget justification/profit making played in that blindness to bias?
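If you do download the data, here is a minimal first-look sketch. It assumes ProPublica’s published compas-scores-two-years.csv (from github.com/propublica/compas-analysis) and its column names; verify both against the files before trusting any numbers:

    import pandas as pd

    df = pd.read_csv("compas-scores-two-years.csv")

    for race in ["African-American", "Caucasian"]:
        # Among defendants who did NOT re-offend within two years, what
        # share were scored as higher risk (decile score above 4, the
        # cut ProPublica used)?
        nonrecid = df[(df.race == race) & (df.two_year_recid == 0)]
        print(race, round((nonrecid.decile_score > 4).mean(), 3))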

Deep Learning: Image Similarity and Beyond (Webinar, May 10, 2016)

Friday, May 6th, 2016

Deep Learning: Image Similarity and Beyond (Webinar, May 10, 2016)

From the registration page:

Deep Learning is a powerful machine learning method for image tagging, object recognition, speech recognition, and text analysis. In this demo, we’ll cover the basic concept of deep learning and walk you through the steps to build an application that finds similar images using an already-trained deep learning model.

Recommended for:

  • Data scientists and engineers
  • Developers and technical team managers
  • Technical product managers

What you’ll learn:

  • How to leverage existing deep learning models
  • How to extract deep features and use them using GraphLab Create
  • How to build and deploy an image similarity service using Dato Predictive Services

What we’ll cover:

  • Using an already-trained deep learning model
  • Extracting deep features
  • Building and deploying an image similarity service for pictures 
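I won’t guess at the GraphLab Create calls the demo will use, but the pipeline itself is easy to sketch with open source pieces: a pre-trained network supplies deep features, and a nearest-neighbor index does the similarity lookup. The image paths are placeholders:

    import numpy as np
    from keras.applications.vgg16 import VGG16, preprocess_input
    from keras.preprocessing import image
    from sklearn.neighbors import NearestNeighbors

    # Pre-trained network, minus the classifier head, as a feature extractor
    model = VGG16(weights="imagenet", include_top=False, pooling="avg")

    def deep_features(path):
        img = image.load_img(path, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        return model.predict(x)[0]

    paths = ["cat1.jpg", "cat2.jpg", "dog1.jpg"]  # your image collection
    index = NearestNeighbors(n_neighbors=2).fit([deep_features(p) for p in paths])

    dist, idx = index.kneighbors([deep_features("query.jpg")])
    print([paths[i] for i in idx[0]])  # most similar images first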

Deep learning has difficulty justifying its choices, just like human judges of similarity, but could it play a role in assisting topic map authors in constructing explicit decisions for merging?

Once trained, could deep learning suggest properties and/or values to consider for merging it has not yet experienced?

I haven’t seen any webinars recently so I am ready to gamble on this being an interesting one.

Enjoy!

Peda(bot)bically Speaking:…

Monday, April 25th, 2016

Peda(bot)bically Speaking: Teaching Computational and Data Journalism with Bots by Nicholas Diakopoulos.

From the post:

Bots can be useful little creatures for journalism. Not only because they help us automate tasks like alerting and filtering, but also because they encapsulate how data and computing can work together, in service of automated news. At the University of Maryland, where I’m a professor of journalism, my students are using the power of news bots to learn concepts and skills in computational journalism—including both editorial thinking and computational thinking.

Hmmm, a bot that filters all tweets that don’t contain a URL? (To filter cat pics and the like.) 😉

Or retweets tweets with #’s that trigger creation of topics/associations?

I don’t think there is a requirement that hashtags be meaningful to others. Yes?
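The first idea is nearly a one-liner. A sketch, assuming tweets arrive as Twitter API v1.1 JSON objects (the entities field is standard; how you fetch the tweets is up to you):

    def has_url(tweet):
        """True if a tweet carries at least one link in its entities."""
        return bool(tweet.get("entities", {}).get("urls"))

    def keep_linked(tweets):
        """Drop the cat pics: keep only tweets that contain a URL."""
        return [t for t in tweets if has_url(t)]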

Sounds like a great class!

Hello World – Machine Learning Recipes #1

Saturday, April 16th, 2016

Hello World – Machine Learning Recipes #1 by Josh Gordon.

From the description:

Six lines of Python is all it takes to write your first machine learning program! In this episode, we’ll briefly introduce what machine learning is and why it’s important. Then, we’ll follow a recipe for supervised learning (a technique to create a classifier from examples) and code it up.

The first in a promised series on machine learning using scikit-learn and TensorFlow.
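The episode’s six lines come down to roughly this (my feature values are illustrative stand-ins for the fruit example in the video):

    from sklearn import tree

    # Toy training data: [weight in grams, texture], where texture
    # 0 = bumpy, 1 = smooth; labels 0 = apple, 1 = orange.
    features = [[140, 1], [130, 1], [150, 0], [170, 0]]
    labels = [0, 0, 1, 1]

    clf = tree.DecisionTreeClassifier()
    clf = clf.fit(features, labels)
    print(clf.predict([[160, 0]]))  # heavy and bumpy -> [1], an orange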

The quality of video you wish were available for intermediate and advanced treatments.

Quite a treat! Pass onto anyone interested in machine learning.

Enjoy!

LSTMetallica:…

Tuesday, April 12th, 2016

LSTMetallica: Generation drum tracks by learning the drum tracks of 60 Metallica songs by Keunwoo Choi.

From the post:

Word-RNN (LSTM) on Keras with wordified text representations of Metallica’s drumming midi files, which came from midiatabase.com.

  • Midi files of Metallica track comes from midiatabase.com.
  • LSTM model comes from Keras.
  • Read Midi files with python-midi.
  • Convert them to a text file (corpus) by my rules, which are
    • (Temporal) Quantisation
    • Simplification/Omitting some notes
    • ‘Word’ with binary numbers
  • Learn an LSTM model with the corpus and generate by prediction of words.
  • Words in a text file → midi according to the rules I used above.
  • Listen!
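The model-building step is the least mysterious part. A minimal sketch in Keras, in the spirit of its standard text-generation example; the context length and vocabulary size here are assumptions, with each “word” standing for one quantised drum-hit pattern:

    from keras.models import Sequential
    from keras.layers import LSTM, Dense, Activation

    maxlen = 32          # words of drum-pattern context per training example
    vocab_size = 2 ** 9  # one "word" per binary hit pattern (illustrative)

    model = Sequential()
    model.add(LSTM(128, input_shape=(maxlen, vocab_size)))
    model.add(Dense(vocab_size))
    model.add(Activation("softmax"))
    model.compile(loss="categorical_crossentropy", optimizer="rmsprop")
    # Train on one-hot encoded word windows, then generate a track by
    # repeatedly predicting the next word and feeding it back in.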

I mention this in part to inject some variety into the machine learning resources I have mentioned.

The failures of machine learning for recommendations can be amusing. For the most part, when it works, the results are rather dull.

Learning from drum tracks has the potential to combine drum tracks from different groups, resulting in something new.

May be fit for listening, maybe not. You won’t know without trying it.

Enjoy!

Advanced Data Mining with Weka – Starts 25 April 2016

Wednesday, April 6th, 2016

Advanced Data Mining with Weka by Ian Witten.

From the webpage:

This course follows on from Data Mining with Weka and More Data Mining with Weka. It provides a deeper account of specialized data mining tools and techniques. Again the emphasis is on principles and practical data mining using Weka, rather than mathematical theory or advanced details of particular algorithms. Students will analyse time series data, mine data streams, use Weka to access other data mining packages including the popular R statistical computing language, script Weka in Python, and deploy it within a cluster computing framework. The course also includes case studies of applications such as classifying tweets, functional MRI data, image classification, and signal peptide prediction.

The syllabus: https://weka.waikato.ac.nz/advanceddataminingwithweka/assets/pdf/syllabus.pdf.

Advanced Data Mining with Weka is open for enrollment and starts 25 April 2016.

Five very intense weeks await!

Will you be there?

I first saw this in a tweet by Alyona Medelyan.

Spending Time Rolling Your Own or Using Google Tools in Anger?

Wednesday, March 30th, 2016

The question: Spending Time Rolling Your Own or Using Google Tools in Anger? is one faced by many people who have watched computer technology evolve.

You could write your own blogging software or you can use one of the standard distributions.

You could write your own compiler or you can use one of the standard distributions.

You can install and maintain your own machine learning, big data apps, or you can use the tools offered by Google Machine Learning.

Tinkering with your local system until it is “just so” is fun, but it eats into billable time and honestly is a distraction.

I’m not promising to immerse myself in the Google-verse, but an honest assessment of where to spend my time is in order.

Google takes Cloud Machine Learning service mainstream by Fausto Ibarra, Director, Product Management.

From the post:

Hundreds of different big data and analytics products and services fight for your attention as it’s one of the most fertile areas of innovation in our industry. And it’s no wonder; the most amazing consumer experiences are driven by insights derived from information. This is an area where Google Cloud Platform has invested almost two decades of engineering, and today at GCP NEXT we’re announcing some of the latest results of that work. This next round of innovation builds on our portfolio of data management and analytics capabilities by adding new products and services in multiple key areas:

Machine Learning:

We’re on a journey to create applications that can see, hear and understand the world around them. Today we’ve taken a major stride forward with the announcement of a new product family: Cloud Machine Learning. Cloud Machine Learning will take machine learning mainstream, giving data scientists and developers a way to build a new class of intelligent applications. It provides access to the same technologies that power Google Now, Google Photos and voice recognition in Google Search as easy to use REST APIs. It enables you to build powerful Machine Learning models on your data using the open-source TensorFlow machine learning library:

Big Data and Analytics:

Doing big data the cloud way means being more productive when building applications, with faster and better insights, without having to worry about the underlying infrastructure. To further this mission, we recently announced the general availability of Cloud Dataproc, our managed Apache Hadoop and Apache Spark service, and we’re adding new services and capabilities today:

Open Source:

Our Cloud Machine Learning offering leverages Google’s cutting edge machine learning and data processing technologies, some of which we’ve recently open sourced:
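For a sense of what “building models using TensorFlow” means at the code level, here is a minimal linear-regression sketch in the graph-and-session style of the 2016-era API (nothing Cloud-specific, just the open source library):

    import tensorflow as tf

    x = tf.placeholder(tf.float32)
    y = tf.placeholder(tf.float32)
    w = tf.Variable(0.0)
    b = tf.Variable(0.0)

    loss = tf.reduce_mean(tf.square(w * x + b - y))
    train = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(200):
            sess.run(train, {x: [1, 2, 3, 4], y: [2, 4, 6, 8]})
        print(sess.run([w, b]))  # w approaches 2.0, b approaches 0.0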

What, if anything, do you see as a serious omission in this version of the Google-verse?

Suggestions?

“Ethical” Botmakers Censor Offensive Content

Saturday, March 26th, 2016

There are almost 500,000 “hits” from “tay ai” in one popular search engine today.

Against that background, I ran into: How to Make a Bot That Isn’t Racist by Sarah Jeong.

From the post:

…I talked to some creators of Twitter bots about @TayandYou, and the consensus was that Microsoft had fallen far below the baseline of ethical botmaking.

“The makers of @TayandYou absolutely 10000 percent should have known better,” thricedotted, a veteran Twitter botmaker and natural language processing researcher, told me via email. “It seems like the makers of @TayandYou attempted to account for a few specific mishaps, but sorely underestimated the vast potential for people to be assholes on the internet.”

Thricedotted and others belong to an established community of botmakers on Twitter that have been creating and experimenting for years. There’s a Bot Summit. There’s a hashtag (#botALLY).

As I spoke to each botmaker, it became increasingly clear that the community at large was tied together by crisscrossing lines of influence. There is a well-known body of talks, essays, and blog posts that form a common ethical code. The botmakers have even created open source blacklists of slurs that have become Step 0 in keeping their bots in line.
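For the curious, “Step 0” is not exotic. A minimal sketch of such a filter; the BLACKLIST contents are stand-ins here, where a real bot would load one of the community-maintained open source lists the post mentions:

    BLACKLIST = {"slur1", "slur2"}  # stand-ins, not a real list

    def is_safe(text):
        """True if no blacklisted word appears in the text."""
        words = (w.strip(".,!?:;") for w in text.lower().split())
        return not any(w in BLACKLIST for w in words)

    def maybe_post(text, post):
        """Publish only text that clears the blacklist; post() is
        whatever function your bot uses to publish."""
        if is_safe(text):
            post(text)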

Not researching prior art is as bad as not Reading The Fine Manual (RTFM) before posting help queries to heavy traffic developer forums.

Thricedotted claims a prior obligation of TayandYou’s creators to block offensive content:

For thricedotted, TayandYou failed from the start. “You absolutely do NOT let an algorithm mindlessly devour a whole bunch of data that you haven’t vetted even a little bit,” they said. “It blows my mind, because surely they’ve been working on this for a while, surely they’ve been working with Twitter data, surely they knew this shit existed. And yet they put in absolutely no safeguards against it?!” (emphasis in original)

No doubt Microsoft wishes, in hindsight, that it had blocked offensive content, but I don’t see a general ethical obligation to block or censor offensive content.

For example:

  • A bot that follows the public and private accounts of elected officials and re-tweets only those posts that contain racial slurs, with @news-organization handles in the tweets.
  • A bot that matches FEC (Federal Election Commission) donation records to Twitter accounts and re-tweets racist/offensive tweets along with campaign donation identifiers and the candidate in question.
  • A bot that follows accounts known for racist/offensive tweets for the purpose of building publicly accessible archives of those tweets, to prevent the sanitizing of tweet archives in the future (as happened with TayandYou).

Any of those strike you as “unethical?”

I wish the Georgia legislature and the U.S. Congress would openly use racist and offensive language.

They act in racist and offensive ways so they should be openly racist and offensive. Makes it easier to whip up effective opposition against known racists, etc.

Which is, of course, why they self-censor to not use racist language.

The world is full of offensive people and we should make them own their statements.

Creating a false, sanitized view that doesn’t offend some n+1 sensitivities is just that: a false view of the world.

If you are looking for an ethical issue, creating views of the world that help conceal racism, sexism, etc., is a better starting place than offensive ephemera.

“Not Understanding” was Tay’s Vulnerability?

Friday, March 25th, 2016

Peter Lee (Corporate Vice President, Microsoft Research) posted Learning from Tay’s introduction where he says:


Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.

But Peter never specifies what “vulnerability” Tay suffered from.

To find out why Tay was “vulnerable,” you have to read Microsoft is deleting its AI chatbot’s incredibly racist tweets by Rob Price where he points out:


The reason it spouted garbage is that racist humans on Twitter quickly spotted a vulnerability — that Tay didn’t understand what it was talking about — and exploited it. (emphasis added)

Hmmm, how soon do you think Microsoft can confer on Tay the ability to “…understand what it [is] talking about…?”

I’m betting that’s not going to happen.

Tay can “learn” (read mimic) language patterns of users but if she speaks to racist users she will say racist things. Or religious, ISIS, sexist, Buddhist, trans-gender, or whatever things.

It isn’t ever going to be a question of Tay “understanding,” but rather of humans creating rules that prevent Tay from imitating certain speech patterns.

She will have no more or less “understanding” than before but her speech patterns will be more acceptable to some segments of users.

I have no doubt the result of Tay’s first day in the world was not what Microsoft wanted or anticipated.

That said, people are an ugly lot, and I don’t mean a minority of them. All of us are better some days than others, and about some issues and not others.

To the extent that Tay was designed to imitate people, I consider the project to be a success. If you think Tay should react the way some people imagine we should act, then it was a failure.

There’s an interesting question for Easter weekend:

Should an artificial intelligence act as we do or should it act as we ought to do?

PS: I take Peter’s comments about “…do not represent who we are or what we stand for, nor how we designed Tay…” at face value. However, the human heart is a dark place, and to pretend that is true only of a minority or sub-group is to ignore the lessons of history.

AI Masters Go, Twitter, Not So Much (Log from @TayandYou?)

Thursday, March 24th, 2016

Microsoft deletes ‘teen girl’ AI after it became a Hitler-loving sex robot within 24 hours by Helena Horton.

From the post:

A day after Microsoft introduced an innocent Artificial Intelligence chat robot to Twitter it has had to delete it after it transformed into an evil Hitler-loving, incestual sex-promoting, ‘Bush did 9/11’-proclaiming robot.

Developers at Microsoft created ‘Tay’, an AI modelled to speak ‘like a teen girl’, in order to improve the customer service on their voice recognition software. They marketed her as ‘The AI with zero chill’ – and that she certainly is.

The headline was suggested to me by a tweet from Peter Seibel:

Interesting how wide the gap is between two recent AI: AlphaGo and TayTweets. The Turing Test is *hard*. http://gigamonkeys.com/turing/.

In preparation for the next AI celebration, does anyone have a complete log of the tweets from Tay Tweets?

I prefer non-revisionist history where data doesn’t disappear. You can imagine the use Stalin would have made of that capability.

Project AIX: Using Minecraft to build more intelligent technology

Monday, March 14th, 2016

Project AIX: Using Minecraft to build more intelligent technology by Allison Linn.

From the post:

In the airy, loft-like Microsoft Research lab in New York City, five computer scientists are spending their days trying to get a Minecraft character to climb a hill.

That may seem like a pretty simple job for some of the brightest minds in the field, until you consider this: The team is trying to train an artificial intelligence agent to learn how to do things like climb to the highest point in the virtual world, using the same types of resources a human has when she learns a new task.

That means that the agent starts out knowing nothing at all about its environment or even what it is supposed to accomplish. It needs to understand its surroundings and figure out what’s important – going uphill – and what isn’t, such as whether it’s light or dark. It needs to endure a lot of trial and error, including regularly falling into rivers and lava pits. And it needs to understand – via incremental rewards – when it has achieved all or part of its goal.

“We’re trying to program it to learn, as opposed to programming it to accomplish specific tasks,” said Fernando Diaz, a senior researcher in the New York lab and one of the people working on the project.

The research project is possible thanks to AIX, a platform developed by Katja Hofmann and her colleagues in Microsoft’s Cambridge, UK, lab and unveiled publicly on Monday. AIX allows computer scientists to use the world of Minecraft as a testing ground for conducting research designed to improve artificial intelligence.

The project is in closed beta now but said to be going open source in the summer of 2016.

Someone mentioned quite recently the state of documentation on Minecraft. Their impression was that there is a lot of information, but it is poorly organized.

If you are interested in exploring Minecraft for the release this summer, see: How to Install Minecraft on Ubuntu or Any Other Linux Distribution.

PyGame, Pong and Tensorflow

Monday, March 14th, 2016

Daniel Slater has a couple of posts of interest to AI game followers:

How to run learning agents against PyGame

Deep-Q learning Pong with Tensorflow and PyGame

If you like low-end video games… 😉

Seriously, the principles here can be applied to more complex situations and video games.
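If you want the flavor without reading the posts first: at the heart of a Deep-Q agent is an epsilon-greedy policy, which trades off exploring random moves against exploiting the best known one. A minimal sketch:

    import random

    def epsilon_greedy(q_values, epsilon=0.1):
        """With probability epsilon take a random action (explore),
        otherwise take the action with the highest Q-value (exploit)."""
        if random.random() < epsilon:
            return random.randrange(len(q_values))
        return max(range(len(q_values)), key=lambda a: q_values[a])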

Enjoy!

Lee Sedol “busted up” AlphaGo – Game 4

Monday, March 14th, 2016

Lee Sedol defeats AlphaGo in masterful comeback – Game 4 by David Ormerod.

From the post:

Expectations were modest on Sunday, as Lee Sedol 9p faced the computer Go program AlphaGo for the fourth time.

[Photo caption: Lee Sedol 9 dan, obviously relieved to win his first game.]

After Lee lost the first three games, his chance of winning the five game match had evaporated.

His revised goal, and the hope of millions of his fans, was that he might succeed in winning at least one game against the machine before the match concluded.

However, his prospects of doing so appeared to be bleak, until suddenly, just when all seemed to be lost, he pulled a rabbit out of a hat.

And he didn’t even have a hat!

Lee Sedol won game four by resignation.

A reversal of roles but would you say that Sedol “busted up” AlphaGo?

Looking forward to the results of Game 5!

Chihuahua or Muffin?

Friday, March 11th, 2016

[Image: the chihuahua-muffin comparison grid]

Adversarial images for deep learning.

Too cute not to re-post.

I first saw it in a tweet by Yhat, Inc.

Automating Amazon/Hotel/Travel Reviews (+ Human Intelligence Test (HIT))

Sunday, February 28th, 2016

The Neural Network That Remembers by Zachary C. Lipton & Charles Elkan.

From the post:

On tap at the brewpub. A nice dark red color with a nice head that left a lot of lace on the glass. Aroma is of raspberries and chocolate. Not much depth to speak of despite consisting of raspberries. The bourbon is pretty subtle as well. I really don’t know that find a flavor this beer tastes like. I would prefer a little more carbonization to come through. It’s pretty drinkable, but I wouldn’t mind if this beer was available.

Besides the overpowering bouquet of raspberries in this guy’s beer, this review is remarkable for another reason. It was produced by a computer program instructed to hallucinate a review for a “fruit/vegetable beer.” Using a powerful artificial-intelligence tool called a recurrent neural network, the software that produced this passage isn’t even programmed to know what words are, much less to obey the rules of English syntax. Yet, by mining the patterns in reviews from the barflies at BeerAdvocate.com, the program learns how to generate similarly coherent (or incoherent) reviews.

The neural network learns proper nouns like “Coors Light” and beer jargon like “lacing” and “snifter.” It learns to spell and to misspell, and to ramble just the right amount. Most important, the neural network generates reviews that are contextually relevant. For example, you can say, “Give me a 5-star review of a Russian imperial stout,” and the software will oblige. It knows to describe India pale ales as “hoppy,” stouts as “chocolatey,” and American lagers as “watery.” The neural network also learns more colorful words for lagers that we can’t put in print.

This particular neural network can also run in reverse, taking any review and recognizing the sentiment (star rating) and subject (type of beer). This work, done by one of us (Lipton) in collaboration with his colleagues Sharad Vikram and Julian McAuley at the University of California, San Diego, is part of a growing body of research demonstrating the language-processing capabilities of recurrent networks. Other related feats include captioning images, translating foreign languages, and even answering e-mail messages. It might make you wonder whether computers are finally able to think.

(emphasis in original)

An enthusiastic introduction and projection of the future of recurrent neural networks! Quite a bit so.
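The generation step the authors describe usually hinges on a sampling “temperature.” A minimal sketch of that knob, assuming preds is the network’s softmax output over the vocabulary (low temperature yields safe, repetitive text; high temperature yields adventurous, less coherent text like the review above):

    import numpy as np

    def sample_next(preds, temperature=1.0):
        """Draw the index of the next word/character from the model's
        output distribution, reshaped by the temperature."""
        preds = np.log(np.asarray(preds, dtype="float64") + 1e-10) / temperature
        probs = np.exp(preds) / np.sum(np.exp(preds))
        return int(np.argmax(np.random.multinomial(1, probs)))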

My immediate thought was what a time saver a recurrent neural network would be for “evaluation” requests that appear in my inbox with alarming regularity.

What about a service that accepts forwarded emails and generates a review for the book, seller, hotel, travel, etc., which is returned to you for cut-n-paste?

That would be about as “intelligent” as the amount of attention most of us devote to such requests.

You could set the service to mimic highly followed reviewers so over time you would move up the ranks of reviewers.

I mention Amazon, hotel and travel reviews, but those are just low-hanging fruit. You could do journal book reviews with a different data set.

Near the end of the post the authors write:


In this sense, the computer-science community is evaluating recurrent neural networks via a kind of Turing test. We try to teach a computer to act intelligently by training it to imitate what people produce when faced with the same task. Then we evaluate our thinking machine by seeing whether a human judge can distinguish between its output and what a human being might come up with.

While the very fact that we’ve come this far is exciting, this approach may have some fundamental limitations. For instance, it’s unclear how such a system could ever outstrip the capabilities of the people who provide the training data. Teaching a machine to learn through imitation might never produce more intelligence than was present collectively in those people.

One promising way forward might be an approach called reinforcement learning. Here, the computer explores the possible actions it can take, guided only by some sort of reward signal. Recently, researchers at Google DeepMind combined reinforcement learning with feed-forward neural networks to create a system that can beat human players at 31 different video games. The system never got to imitate human gamers. Instead it learned to play games by trial and error, using its score in the video game as a reward signal.

Instead of asking whether computers can think, the more provocative question is whether people think at all during a large range of daily activities.

Consider it as the Human Intelligence Test (HIT).

How much “intelligence” does it take to win a video game?

Eye/hand coordination to be sure, attention, but what “intelligence” is involved?

Computers may “eclipse” human beings at non-intelligent activities, as a shovel “eclipses” our ability to dig with our bare hands.

But I’m not overly concerned.

Are you?

Superhuman Neural Network – Urban War Fighters Take Note

Wednesday, February 24th, 2016

Google Unveils Neural Network with “Superhuman” Ability to Determine the Location of Almost Any Image

From the post:

Here’s a tricky task. Pick a photograph from the Web at random. Now try to work out where it was taken using only the image itself. If the image shows a famous building or landmark, such as the Eiffel Tower or Niagara Falls, the task is straightforward. But the job becomes significantly harder when the image lacks specific location cues or is taken indoors or shows a pet or food or some other detail.

Nevertheless, humans are surprisingly good at this task. To help, they bring to bear all kinds of knowledge about the world such as the type and language of signs on display, the types of vegetation, architectural styles, the direction of traffic, and so on. Humans spend a lifetime picking up these kinds of geolocation cues.

So it’s easy to think that machines would struggle with this task. And indeed, they have.

Today, that changes thanks to the work of Tobias Weyand, a computer vision specialist at Google, and a couple of pals. These guys have trained a deep-learning machine to work out the location of almost any photo using only the pixels it contains.

Their new machine significantly outperforms humans and can even use a clever trick to determine the location of indoor images and pictures of specific things such as pets, food, and so on that have no location cues.

The full paper: PlaNet—Photo Geolocation with Convolutional Neural Networks.

Abstract:

Is it possible to build a system to determine the location where a photo was taken using just its pixels? In general, the problem seems exceptionally difficult: it is trivial to construct situations where no location can be inferred. Yet images often contain informative cues such as landmarks, weather patterns, vegetation, road markings, and architectural details, which in combination may allow one to determine an approximate location and occasionally an exact location. Websites such as GeoGuessr and View from your Window suggest that humans are relatively good at integrating these cues to geolocate images, especially en-masse. In computer vision, the photo geolocation problem is usually approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geotagged images. While previous approaches only recognize landmarks or perform approximate matching using global image descriptors, our model is able to use and integrate multiple visible cues. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman levels of accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, we demonstrate that this model achieves a 50% performance improvement over the single-image model.
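The key move in the abstract is recasting geolocation as classification over discrete cells. A toy sketch of that framing; PlaNet itself uses adaptive multi-scale S2 cells rather than the fixed grid assumed here:

    def cell_id(lat, lng, cells_per_side=64):
        """Map a latitude/longitude to a fixed-grid cell index, the
        class label a PlaNet-style classifier would be trained on."""
        row = min(int((lat + 90.0) / 180.0 * cells_per_side), cells_per_side - 1)
        col = min(int((lng + 180.0) / 360.0 * cells_per_side), cells_per_side - 1)
        return row * cells_per_side + col

    print(cell_id(48.8584, 2.2945))  # the Eiffel Tower's cell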

You might think that with GPS engaged, the location of images is a done deal.

Not really. You can be facing in any direction from a particular GPS location, and in a dynamic environment, analysts and others don’t have the time to sort out which images are relevant from those that are just noise.

Urban warfare does not occur on a global scale, bringing home the lesson that it isn’t the biggest data set but the most relevant and timely data set that is important.

Relevantly oriented images and feeds are a natural outgrowth of this work. Not to mention pairing those images with other relevant data.

PS: Before I forget, enjoy playing the game at: www.geoguessr.com.

bAbI – Facebook Datasets For Automatic Text Understanding And Reasoning

Sunday, February 21st, 2016

The bAbI project

Four papers and datasets on text understanding and reasoning from Facebook.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin and Tomas Mikolov. Towards AI Complete Question Answering: A Set of Prerequisite Toy Tasks. arXiv:1502.05698.

Felix Hill, Antoine Bordes, Sumit Chopra and Jason Weston. The Goldilocks Principle: Reading Children’s Books with Explicit Memory Representations. arXiv:1511.02301.

Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, Jason Weston. Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems. arXiv:1511.06931.

Antoine Bordes, Nicolas Usunier, Sumit Chopra and Jason Weston. Simple Question answering with Memory Networks. arXiv:1506.02075.

Enjoy!

More Bad News For EC Brain Project Wood Pigeons

Sunday, February 14th, 2016

I heard a different form of the story of how the magpie tried to instruct other birds, particularly the wood pigeon, in how to build nests, but the lesson was much the same.

The EC Brain project reminds me of the wood pigeon hearing “…take two sticks…” and running off to build its nest.

With no understanding of the human brain, the EC set out to build one, on a ten year deadline.

Byron Spice’s report, Project Aims to Reverse-engineer Brain Algorithms, Make Computers Learn Like Humans, casts further doubt upon that project:

Carnegie Mellon University is embarking on a five-year, $12 million research effort to reverse-engineer the brain, seeking to unlock the secrets of neural circuitry and the brain’s learning methods. Researchers will use these insights to make computers think more like humans.

The research project, led by Tai Sing Lee, professor in the Computer Science Department and the Center for the Neural Basis of Cognition (CNBC), is funded by the Intelligence Advanced Research Projects Activity (IARPA) through its Machine Intelligence from Cortical Networks (MICrONS) research program. MICrONS is advancing President Barack Obama’s BRAIN Initiative to revolutionize the understanding of the human brain.

“MICrONS is similar in design and scope to the Human Genome Project, which first sequenced and mapped all human genes,” Lee said. “Its impact will likely be long-lasting and promises to be a game changer in neuroscience and artificial intelligence.”

Artificial neural nets process information in one direction, from input nodes to output nodes. But the brain likely works in quite a different way. Neurons in the brain are highly interconnected, suggesting possible feedback loops at each processing step. What these connections are doing computationally is a mystery; solving that mystery could enable the design of more capable neural nets.

My goodness! Unknown loops in algorithms?

The Carnegie Mellon project is exploring potential algorithms, not trying to engineer the unknown.

If the EC had titled its project the Graduate Assistant and Hospitality Industry Support Project, one could object to the use of funds for travel junkets but it would otherwise be intellectually honest.

International Conference on Learning Representations – Accepted Papers

Monday, February 8th, 2016

International Conference on Learning Representations – Accepted Papers

From the conference overview:

It is well understood that the performance of machine learning methods is heavily dependent on the choice of data representation (or features) on which they are applied. The rapidly developing field of representation learning is concerned with questions surrounding how we can best learn meaningful and useful representations of data. We take a broad view of the field, and include in it topics such as deep learning and feature learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and issues regarding non-convex optimization.

Despite the importance of representation learning to machine learning and to application areas such as vision, speech, audio and NLP, there was no venue for researchers who share a common interest in this topic. The goal of ICLR has been to help fill this void.

That should give you an idea of the range of data representations/features that you will encounter in the eighty (80) papers accepted for the conference.
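
If you want a concrete feel for why representation matters, here is a minimal sketch, my own and assuming only numpy, of the classic XOR case: a linear model gets nowhere on the raw features but succeeds once the representation includes a product feature:

    import numpy as np

    # XOR: no linear model can separate it on the raw two features,
    # but one extra (learned or engineered) feature fixes that.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 1.0, 1.0, 0.0])

    def linear_accuracy(X, y, steps=5000, lr=0.5):
        """Logistic regression by gradient descent; returns training accuracy."""
        Xb = np.hstack([X, np.ones((len(X), 1))])   # bias column
        w = np.zeros(Xb.shape[1])
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-Xb @ w))
            w -= lr * Xb.T @ (p - y) / len(y)
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        return ((p > 0.5) == (y > 0.5)).mean()

    print("raw features:         ", linear_accuracy(X, y))      # stuck at 0.5
    X_rep = np.hstack([X, X[:, :1] * X[:, 1:]])                 # add x1*x2
    print("better representation:", linear_accuracy(X_rep, y))  # 1.0

Representation learning is about discovering features like that x1*x2 column automatically rather than engineering them by hand.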

ICLR 2016 will be held May 2–4, 2016 at the Caribe Hilton, San Juan, Puerto Rico.

Time to review How To Read A Paper!

Enjoy!

I first saw this in a tweet by Hugo Larochelle.

Google’s Go Victory/AI Danger Summarized In One Sentence

Sunday, January 31st, 2016

Google’s Go Victory Is Just A Glimpse Of How Powerful AI Will Be by Cade Metz.

Cade manages to summarize the implications of the Google Go victory and the future danger of AI in one concise sentence:

Bostrom’s book makes the case that AI could be more dangerous than nuclear weapons, not only because humans could misuse it but because we could build AI systems that we are somehow not able to control.

If you don’t have time for the entire article, that sentence summarizes the article as well.

Pay particular attention to the part that reads: “…that we are somehow not able to control.”

Is that like a Terex 33-19 “Titan”

[Image: a Terex 33-19 “Titan” haul truck]

with a nuclear power supply and no off switch? (Yes, that is a person in the second wheel from the front.)

We learned only recently that consciousness, at least as we understand the term now, is a product of chaotic and cascading connections. Consciousness May Be the Product of Carefully Balanced Chaos [Show The Red Card].

One supposes that positronic brains (warning: fiction) must share that chaotic characteristic.

However, Cade and Bostrom fail to point to any promising research on the development of positronic brains.

That’s not to deny that poor choices could be baked into an AI, say one designed by Aussies: if projected global warming exceeds three degrees Celsius, set off a doomsday bomb. (Shades of On the Beach.)

The lesson there is two-fold: Don’t build doomsday weapons. Don’t put computers in charge of them.

The danger from AI is in the range of a gamma-ray burst ending civilization, if that high.

On the other hand, if you want work that has a solid background in science fiction, is prone to sound bites in the media and attracts doomsday groupies of all genders, it doesn’t require a lot of research.

The only real requirement is to wring your hands over some imagined scenario that will doom us all, without being able to say whether or how it will occur. Throw in some of the latest buzzwords and you have a presentation/speech/book.

Consciousness May Be the Product of Carefully Balanced Chaos [Show The Red Card]

Thursday, January 28th, 2016

Consciousness May Be the Product of Carefully Balanced Chaos by sciencehabit.

From the posting:

The question of whether the human consciousness is subjective or objective is largely philosophical. But the line between consciousness and unconsciousness is a bit easier to measure. In a new study (abstract) of how anesthetic drugs affect the brain, researchers suggest that our experience of reality is the product of a delicate balance of connectivity between neurons—too much or too little and consciousness slips away. During wakeful consciousness, participants’ brains generated “a flurry of ever-changing activity”, and the fMRI showed a multitude of overlapping networks activating as the brain integrated its surroundings and generated a moment to moment “flow of consciousness.” After the propofol kicked in, brain networks had reduced connectivity and much less variability over time. The brain seemed to be stuck in a rut—using the same pathways over and over again.
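
If you want to see what “much less variability over time” means operationally, here is a toy numpy sketch, my own illustration and not the study’s pipeline, of sliding-window functional connectivity: a set of signals whose couplings keep changing shows high variability across windows, while signals stuck in one coupling pattern do not:

    import numpy as np

    rng = np.random.default_rng(1)
    T, R, win = 600, 5, 50                      # timepoints, regions, window

    def connectivity_variability(signals, win):
        """Mean over region pairs of the std. dev. of windowed correlations."""
        n_regions = signals.shape[1]
        snapshots = []
        for t in range(0, signals.shape[0] - win, win // 2):
            c = np.corrcoef(signals[t:t + win].T)
            snapshots.append(c[np.triu_indices(n_regions, k=1)])
        return np.std(np.array(snapshots), axis=0).mean()

    drive = np.sin(np.linspace(0, 40, T))[:, None] * rng.standard_normal(R)
    flip = np.where(np.arange(T) % 200 < 100, 1.0, -1.0)[:, None]

    awake = rng.standard_normal((T, R)) + flip * drive   # couplings keep changing
    sedated = rng.standard_normal((T, R)) + drive        # one stable pattern

    print("awake variability:  ", connectivity_variability(awake, win))
    print("sedated variability:", connectivity_variability(sedated, win))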

These researchers need to be shown the red card as they say in soccer.

I thought it was agreed that during the Human Brain Project, no one would research or publish new information about the human brain, in order to allow the EU project to complete its “working model” of the human brain.

The Human Brain Project is a butts-in-seats (and/or hotels) project, and a gumball machine will be able to duplicate its results. But discovering vast amounts of previously unknown facts demonstrates the lack of an adequate foundation for the project at its inception.

In other words, more facts may decrease public support for ill-considered WPA projects for science.

Calling into question the “judgement” of award managers (favoritism would be a more descriptive term) surely merits the “red card” in this instance.

(Note to readers: This post is to be read as sarcasm. The excellent research reported by Enzo Tagliazucchi, et al. in Large-scale signatures of unconsciousness are consistent with a departure from critical dynamics is an indication of some of the distance between current research and replication of a human brain.)

The full abstract if you are interested:

Loss of cortical integration and changes in the dynamics of electrophysiological brain signals characterize the transition from wakefulness towards unconsciousness. In this study, we arrive at a basic model explaining these observations based on the theory of phase transitions in complex systems. We studied the link between spatial and temporal correlations of large-scale brain activity recorded with functional magnetic resonance imaging during wakefulness, propofol-induced sedation and loss of consciousness and during the subsequent recovery. We observed that during unconsciousness activity in frontothalamic regions exhibited a reduction of long-range temporal correlations and a departure of functional connectivity from anatomical constraints. A model of a system exhibiting a phase transition reproduced our findings, as well as the diminished sensitivity of the cortex to external perturbations during unconsciousness. This framework unifies different observations about brain activity during unconsciousness and predicts that the principles we identified are universal and independent from its causes.
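
The “long-range temporal correlations” in the abstract are commonly quantified with detrended fluctuation analysis (DFA). Here is a minimal DFA sketch, my own and not the authors’ code, which recovers the textbook exponents for white noise (≈ 0.5) and a random walk (≈ 1.5):

    import numpy as np

    def dfa(signal, scales):
        """Detrended fluctuation analysis: fluctuation F(n) per window size n."""
        profile = np.cumsum(signal - np.mean(signal))
        flucts = []
        for n in scales:
            k = len(profile) // n
            windows = profile[:k * n].reshape(k, n)
            x = np.arange(n)
            resid = []
            for w in windows:
                trend = np.polyval(np.polyfit(x, w, 1), x)   # linear detrend
                resid.append(np.mean((w - trend) ** 2))
            flucts.append(np.sqrt(np.mean(resid)))
        return np.array(flucts)

    rng = np.random.default_rng(0)
    scales = np.array([16, 32, 64, 128, 256])
    for name, sig in [("white noise", rng.standard_normal(4096)),
                      ("random walk", np.cumsum(rng.standard_normal(4096)))]:
        # The DFA exponent is the slope of log F(n) versus log n.
        alpha = np.polyfit(np.log(scales), np.log(dfa(sig, scales)), 1)[0]
        print(f"{name}: DFA exponent ~ {alpha:.2f}")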

The “official” version of this article lies behind a paywall, but you can read it for free at http://arxiv.org/pdf/1509.04304.pdf.

Kudos to the authors for making their work accessible to everyone!

I first saw this in a Facebook post by Simon St. Laurent.

Stanford NLP Blog – First Post

Monday, January 25th, 2016

Sam Bowman posted The Stanford NLI Corpus Revisited today at the Stanford NLP blog.

From the post:

Last September at EMNLP 2015, we released the Stanford Natural Language Inference (SNLI) Corpus. We’re still excitedly working to build bigger and better machine learning models to use it to its full potential, and we sense that we’re not alone, so we’re using the launch of the lab’s new website to share a bit of what we’ve learned about the corpus over the last few months.

What is SNLI?

SNLI is a collection of about half a million natural language inference (NLI) problems. Each problem is a pair of sentences, a premise and a hypothesis, labeled (by hand) with one of three labels: entailment, contradiction, or neutral. An NLI model is a model that attempts to infer the correct label based on the two sentences.

[Figure: a high-level overview of the SNLI corpus]
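
If you want to poke at the corpus yourself, here is a sketch of what a record looks like and the majority-class floor any model has to beat. The field names reflect my reading of the distributed JSONL files, so verify them against the corpus documentation:

    import json
    from collections import Counter

    # One SNLI-style record: a premise, a hypothesis and a gold label.
    line = ('{"sentence1": "A soccer game with multiple males playing.", '
            '"sentence2": "Some men are playing a sport.", '
            '"gold_label": "entailment"}')
    record = json.loads(line)
    print(record["sentence1"], "->", record["gold_label"])

    def majority_baseline(train_labels, test_labels):
        """Always predict the most frequent training label."""
        guess = Counter(train_labels).most_common(1)[0][0]
        return sum(label == guess for label in test_labels) / len(test_labels)

    # With three roughly balanced labels the floor sits near 1/3 --
    # the number any real NLI model has to beat.
    train = ["entailment", "contradiction", "neutral", "entailment"]
    test = [record["gold_label"], "neutral", "contradiction"]
    print(majority_baseline(train, test))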

The news of Marvin Minsky’s death today must have arrived too late for inclusion in the post.