Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

September 10, 2015

The Bogus Bogeyman of the Brainiac Robot Overlord

Filed under: Artificial Intelligence — Patrick Durusau @ 10:55 am

The Bogus Bogeyman of the Brainiac Robot Overlord by James Kobielus.

From the post:


One of the most overused science-fiction tropes is that of the super-intelligent “robot overlord” that, through human negligence or malice, has enslaved us all. Any of us can name at least one of these off the tops of our heads (e.g., “The Matrix” series). Fear of this Hollywood-fueled cultural bogeyman has stirred up anxiety about the role of machine learning, cognitive computing, and artificial intelligence (AI) in our lives, as I discussed in this recent IBM Big Data & Analytics Hub blog. It’s even fostering uneasiness about the supposedly sinister potential for our smart devices to become “smarter” than us and thereby invisibly monitor and manipulate our every action. I discussed that matter in this separate blog.

This issue will be with us forever, much the way that UFO conspiracy theorists have kept their article of faith alive in the popular mind since the early Cold War era. In the Hollywood-stoked popular mindset that surrounds this issue, the supposed algorithmic overlords represent the evil puppets dangled among us by “Big Brother,” diabolical “technocrats,” and other villains for whom there’s no Superman who might come to our rescue.

Highly entertaining take on the breathless reports that we have to stop some forms of research now or we will be enslaved and then eradicated by our machines.

You could construct a very large bulldozer and instruct it to flatten Los Angeles, but that’s not an AI problem, that’s an HI (human intelligence) issue.

I first saw this because Bob DuCharme tweeted:

“The Bogus Bogeyman of the Brainiac Robot Overlord”: best article title I’ve seen in a long time

+1! to that!

July 8, 2015

Here’s Why Elon Musk Is Wrong About AI

Filed under: Artificial Intelligence,Machine Learning — Patrick Durusau @ 4:46 pm

Here’s Why Elon Musk Is Wrong About AI by Chris V. Nicholson.

From the post:

Nothing against Elon Musk, but the campaign he’s leading against AI is an unfortunate distraction from the true existential threats to humanity: global warming and nuclear proliferation.

Last year was the hottest year on record. We humans as a whole are just a bunch of frogs in a planet-sized pot of boiling water. We’re cooking ourselves with coal and petroleum, pumping carbon dioxide into the air. Smart robots should be the least of our worries.

Pouring money into AI ethics research is the wrong battle to pick because a) it can’t be won, b) it shouldn’t be fought, and c) to survive, humans must focus on other, much more urgent, issues. In the race to destroy humanity, other threats are much better bets than AI.

Not that I disagree with Nicholson; there are much more important issues to worry about than rogue AI. But that overlooks one critical aspect of Musk’s argument.

Musk has said to the world that he’s worried about AI and, more importantly, he has $7 Million+ for anyone who worries about it with him.

Your choices are:

  1. Ignore Musk because building an artificial intelligence when we don’t understand human intelligence seems too remote to be plausible, or
  2. Agree with Musk and if you are in a research group, take a chance on a part of $7 Million in grants.

I am firmly in the #1 camp because I have better things to do with my time than attending UFO-type meetings. Unfortunately, there are a lot of people in the #2 camp. It just depends on how much money is being offered.

There are any number of research projects that legitimately push the boundaries of knowledge. Unfortunately, the government and others also fund projects that are wealth re-distribution programs for universities, hotels, transportation, meeting facilities and the like.

PS: There is a lot of value in the programs being explored under the misnomer of “artificial intelligence.” I don’t have an alternative moniker to suggest but it needs one.

June 14, 2015

#DallasPDShooting

Filed under: Artificial Intelligence,News,Politics — Patrick Durusau @ 2:13 pm

AJ+ tweets today:

A gunman fired at police and had a van full of pipe bombs, but no one called him a terrorist. #DallasPDShooting

That’s easy enough to explain:

  1. He wasn’t set up by the FBI
  2. He wasn’t Muslim
  3. He wasn’t black

Next question.

April 30, 2015

New Survey Technique! Ask Village Idiots

Filed under: Artificial Intelligence,News,Survey — Patrick Durusau @ 1:38 pm

I was deeply disappointed to see Scientific Computing with the headline: ‘Avengers’ Stars Wary of Artificial Intelligence by Ryan Pearson.

The respondents are all talented movie stars but acting talent and even celebrity doesn’t give them insight into issues such as artificial intelligence. You might as well ask football coaches about the radiation hazards of a possible mission to Mars. Football coaches, the winning ones anyway, are bright and intelligent folks, but as a class, aren’t the usual suspects to ask about inter-planetary radiation hazards.

President Reagan was known to confuse movies with reality but that was under extenuating circumstances. Confusing people acting in movies with people who are actually informed on a subject doesn’t make for useful news reporting.

Asking Chris Hemsworth, who plays Thor in Avengers: Age of Ultron, what the residents of Asgard think about relief efforts for victims of the recent earthquake in Nepal would be as meaningful.

They still publish the National Enquirer. A much better venue for “surveys” of the uninformed.

March 23, 2015

Unstructured Topic Map-Like Data Powering AI

Filed under: Annotation,Artificial Intelligence,Authoring Topic Maps,Topic Maps — Patrick Durusau @ 2:55 pm

Artificial Intelligence Is Almost Ready for Business by Brad Power.

From the post:

Such mining of digitized information has become more effective and powerful as more info is “tagged” and as analytics engines have gotten smarter. As Dario Gil, Director of Symbiotic Cognitive Systems at IBM Research, told me:

“Data is increasingly tagged and categorized on the Web – as people upload and use data they are also contributing to annotation through their comments and digital footprints. This annotated data is greatly facilitating the training of machine learning algorithms without demanding that the machine-learning experts manually catalogue and index the world. Thanks to computers with massive parallelism, we can use the equivalent of crowdsourcing to learn which algorithms create better answers. For example, when IBM’s Watson computer played ‘Jeopardy!,’ the system used hundreds of scoring engines, and all the hypotheses were fed through the different engines and scored in parallel. It then weighted the algorithms that did a better job to provide a final answer with precision and confidence.”
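
The weighted-scoring idea Gil describes is easy to picture in miniature: several engines score every hypothesis, and engines with a better track record get larger weights. A rough Python sketch (the engines and their weights are invented for illustration, not IBM’s):

# Toy illustration of "many scoring engines, weighted combination":
# each engine scores every candidate answer, and engines that have
# historically done better carry larger weights in the final ranking.

def combine_scores(hypotheses, engines, weights):
    """Rank hypotheses by the weighted sum of engine scores."""
    ranked = []
    for h in hypotheses:
        total = sum(w * engine(h) for engine, w in zip(engines, weights))
        ranked.append((total, h))
    return sorted(ranked, reverse=True)

# Invented engines: one rewards brevity, one rewards the word "moon".
engines = [
    lambda h: 1.0 / (1 + len(h)),
    lambda h: 1.0 if "moon" in h.lower() else 0.0,
]
weights = [0.3, 0.7]   # imagined as learned from answers known to be right or wrong

candidates = ["The Moon", "Mars", "A very long but irrelevant answer"]
for score, answer in combine_scores(candidates, engines, weights):
    print(round(score, 3), answer)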

Granted, the tagging and annotation is unstructured, unlike a topic map, but it is also unconstrained by first-order logic and other crippling features of RDF and OWL. Out of that mass of annotations, algorithms can construct useful answers.

Imagine what non-experts (Stanford logic refugees need not apply) could author about your domain, to be fed into an AI algorithm. That would take more effort than relying upon users chancing upon subjects of interest but it would also give you greater precision in the results.

Perhaps, just perhaps, one of the errors in the early topic maps days was the insistence on high editorial quality at the outset, as opposed to allowing editorial quality to emerge out of data.

As an editor I’m far more in favor of the former than the latter, but seeing the latter work makes me doubt that stringent editorial control is the only path to an acceptable degree of editorial quality.

What would a rough-cut topic map authoring interface look like?

Suggestions?

March 19, 2015

Can recursive neural tensor networks learn logical reasoning?

Filed under: Artificial Intelligence,Inference,Logic,Reasoning — Patrick Durusau @ 12:35 pm

Can recursive neural tensor networks learn logical reasoning? by Samuel R. Bowman.

Abstract:

Recursive neural network models and their accompanying vector representations for words have seen success in an array of increasingly semantically sophisticated tasks, but almost nothing is known about their ability to accurately capture the aspects of linguistic meaning that are necessary for interpretation or reasoning. To evaluate this, I train a recursive model on a new corpus of constructed examples of logical reasoning in short sentences, like the inference of “some animal walks” from “some dog walks” or “some cat walks,” given that dogs and cats are animals. This model learns representations that generalize well to new types of reasoning pattern in all but a few cases, a result which is promising for the ability of learned representation models to capture logical reasoning.

From the introduction:

Natural language inference (NLI), the ability to reason about the truth of a statement on the basis of some premise, is among the clearest examples of a task that requires comprehensive and accurate natural language understanding [6].

I stumbled over that line in Samuel’s introduction because it implies, at least to me, that there is a notion of truth that resides outside of ourselves as speakers and hearers.

Take his first example:

Consider the statement all dogs bark. From this, one can infer quite a number of other things. One can replace the first argument of all (the first of the two predicates following it, here dogs) with any more specific category that contains only dogs and get a valid inference: all puppies bark; all collies bark.
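
Samuel’s monotonicity example is easy to make concrete. In the sketch below the hypernym table is invented for illustration; the point is just that “all X bark” licenses “all Y bark” whenever Y is a narrower category than X:

# A toy rendering (not Bowman's model) of the monotonicity step described
# above: "all X bark" licenses "all Y bark" whenever Y is a subcategory of X.

SUBCATEGORIES = {          # invented hypernym table for illustration
    "dogs": {"puppies", "collies"},
    "animals": {"dogs", "cats"},
}

def narrower_than(category):
    """All categories strictly contained in `category`, transitively."""
    found = set()
    for child in SUBCATEGORIES.get(category, set()):
        found.add(child)
        found |= narrower_than(child)
    return found

def infer_from_all(statement):
    """From 'all X <verb>' infer 'all Y <verb>' for every Y narrower than X."""
    _, category, verb = statement.split()
    return ["all {} {}".format(y, verb) for y in sorted(narrower_than(category))]

print(infer_from_all("all dogs bark"))
# ['all collies bark', 'all puppies bark']

Bowman’s point is that a neural model has to learn this kind of substitution from examples rather than from an explicit hypernym table.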

Contrast that with one of the premises that starts my day:

All governmental statements are lies of omission or commission.

Yet, firmly holding that as a “fact” of the world, I write to government officials, post ranty blog posts about government policies, urge others to attempt to persuade government to take certain positions.

Or as Leonard Cohen would say:

Everybody knows that the dice are loaded

Everybody rolls with their fingers crossed

It’s not that I think Samuel is incorrect about monotonicity for “logical reasoning,” but monotonicity is a far cry from how people reason day to day.

Rather than creating “reasoning” that is such a departure from human inference, why not train a deep learning system to “reason” by exposing it to the same inputs and decisions made by human decision makers? Imitation doesn’t require understanding of human “reasoning,” just the ability to engage in the same behavior under similar circumstances.

That would reframe Samuel’s question to read: Can recursive neural tensor networks learn human reasoning?

I first saw this in a tweet by Sharon L. Bolding.

March 15, 2015

Researchers just built a free, open-source version of Siri

Filed under: Artificial Intelligence,Computer Science,Machine Learning — Patrick Durusau @ 8:05 pm

Researchers just built a free, open-source version of Siri by Jordan Novet.

From the post:

Major tech companies like Apple and Microsoft have been able to provide millions of people with personal digital assistants on mobile devices, allowing people to do things like set alarms or get answers to questions simply by speaking. Now, other companies can implement their own versions, using new open-source software called Sirius — an allusion, of course, to Apple’s Siri.

Today researchers from the University of Michigan are giving presentations on Sirius at the International Conference on Architectural Support for Programming Languages and Operating Systems in Turkey. Meanwhile, Sirius also made an appearance on Product Hunt this morning.

“Sirius … implements the core functionalities of an IPA (intelligent personal assistant) such as speech recognition, image matching, natural language processing and a question-and-answer system,” the researchers wrote in a new academic paper documenting their work. The system accepts questions and commands from a mobile device, processes information on servers, and provides audible responses on the mobile device.
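
The pipeline the paper describes (speech in, language understanding and question answering in the middle, an audible answer out) is easy to picture as a chain of stages. Here is a bare-bones sketch; every stage is a stub of my own invention, not Sirius’s actual API:

# Skeleton of an intelligent-personal-assistant pipeline in the shape the
# Sirius paper describes; every stage here is a stub, not the real system.

def speech_to_text(audio_bytes):
    # Real systems run acoustic and language models; we pretend.
    return "what is the tallest mountain"

def answer_question(question):
    # Real systems search a document or knowledge store; toy lookup here.
    knowledge = {"what is the tallest mountain": "Mount Everest"}
    return knowledge.get(question, "I don't know")

def text_to_speech(text):
    # Real systems synthesize audio; we just return a tagged string.
    return "[spoken] " + text

def handle_request(audio_bytes):
    question = speech_to_text(audio_bytes)
    answer = answer_question(question)
    return text_to_speech(answer)

print(handle_request(b"\x00\x01"))   # [spoken] Mount Everest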

Read the full academic paper (PDF) to learn more about Sirius. Find Sirius on GitHub here.

Opens up the possibility of an IPA (intelligent personal assistant) that has custom intelligence. Are your day-to-day tasks Apple cookie-cutter tasks or do they go beyond that?

The security implications are interesting as well. What if your IPA “reads” on a news stream that you have been arrested? Or if you fail to check in within some time window?

I first saw this in a tweet by Data Geek.

Artificial Neurons and Single-Layer Neural Networks…

Artificial Neurons and Single-Layer Neural Networks – How Machine Learning Algorithms Work Part 1 by Sebastian Raschka.

From the post:

This article offers a brief glimpse of the history and basic concepts of machine learning. We will take a look at the first algorithmically described neural network and the gradient descent algorithm in context of adaptive linear neurons, which will not only introduce the principles of machine learning but also serve as the basis for modern multilayer neural networks in future articles.

Machine learning is one of the hottest and most exciting fields in the modern age of technology. Thanks to machine learning, we enjoy robust email spam filters, convenient text and voice recognition, reliable web search engines, challenging chess players, and, hopefully soon, safe and efficient self-driving cars.

Without any doubt, machine learning has become a big and popular field, and sometimes it may be challenging to see the (random) forest for the (decision) trees. Thus, I thought that it might be worthwhile to explore different machine learning algorithms in more detail by not only discussing the theory but also by implementing them step by step.

To briefly summarize what machine learning is all about: “[Machine learning is the] field of study that gives computers the ability to learn without being explicitly programmed” (Arthur Samuel, 1959). Machine learning is about the development and use of algorithms that can recognize patterns in data in order to make decisions based on statistics, probability theory, combinatorics, and optimization.

The first article in this series will introduce perceptrons and the adaline (ADAptive LINear NEuron), which fall into the category of single-layer neural networks. The perceptron is not only the first algorithmically described learning algorithm [1], but it is also very intuitive, easy to implement, and a good entry point to the (re-discovered) modern state-of-the-art machine learning algorithms: Artificial neural networks (or “deep learning” if you like). As we will see later, the adaline is a consequent improvement of the perceptron algorithm and offers a good opportunity to learn about a popular optimization algorithm in machine learning: gradient descent.

Starting point for what appears to be a great introduction to neural networks.
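
To make the perceptron learning rule concrete before you dive into Sebastian’s article, here is a minimal sketch (the toy data and learning rate are mine, not his): predict with a thresholded weighted sum, then nudge the weights toward any example that was misclassified.

# A minimal perceptron: predict with a thresholded weighted sum, then
# update the weights on every mistake. Data and learning rate are invented.

import numpy as np

X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])  # inputs
y = np.array([1, 1, -1, -1])                                        # labels

w = np.zeros(2)   # weights
b = 0.0           # bias
eta = 0.1         # learning rate

for _ in range(10):                       # a few passes over the data
    for xi, target in zip(X, y):
        prediction = 1 if np.dot(w, xi) + b >= 0 else -1
        if prediction != target:          # update only on mistakes
            w += eta * target * xi
            b += eta * target

print("weights:", w, "bias:", b)
print("predictions:", [1 if np.dot(w, xi) + b >= 0 else -1 for xi in X])

The adaline discussed in the article replaces the hard threshold with a continuous output, which is what lets gradient descent take over from the mistake-driven update above.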

While you are at Sebastian’s blog, it is very much worthwhile to look around. You will be pleasantly surprised.

March 4, 2015

Futures of text

Filed under: Artificial Intelligence,Interface Research/Design,Messaging,UX — Patrick Durusau @ 10:31 am

Futures of text by Jonathan Libov.

From the post:

I believe comfort, not convenience, is the most important thing in software, and text is an incredibly comfortable medium. Text-based interaction is fast, fun, funny, flexible, intimate, descriptive and even consistent in ways that voice and user interface often are not. Always bet on text:

Text is the most socially useful communication technology. It works well in 1:1, 1:N, and M:N modes. It can be indexed and searched efficiently, even by hand. It can be translated. It can be produced and consumed at variable speeds. It is asynchronous. It can be compared, diffed, clustered, corrected, summarized and filtered algorithmically. It permits multiparty editing. It permits branching conversations, lurking, annotation, quoting, reviewing, summarizing, structured responses, exegesis, even fan fic. The breadth, scale and depth of ways people use text is unmatched by anything.

[Apologies, I lost some of Jonathan’s layout of the quote.]

Jonathan focuses on the use of text/messaging for interactions in a mobile environment, with many examples and suggestions for improvements along the way.

One observation that will have those fearful of an AI future (Elon Musk among others) running for the hills:

Messaging is the only interface in which the machine communicates with you much the same as the way you communicate with it. If some of the trends outlined in this post pervade, it would mark a qualitative shift in how we interact with computers. Whereas computer interaction to date has largely been about discrete, deliberate events — typing in the command line, clicking on files, clicking on hyperlinks, tapping on icons — a shift to messaging- or conversational-based UI’s and implicit hyperlinks would make computer interaction far more fluid and natural.

What’s more, messaging AI benefits from an obvious feedback loop: The more we interact with bots and messaging UI’s, the better it’ll get. That’s perhaps true for GUI as well, but to a far lesser degree. Messaging AI may get better at a rate we’ve never seen in the GUI world. Hold on tight.[Emphasis added.]

Think of it this way: a GUI locks you into the developer’s imagination. A text interface empowers the user and the AI’s imagination. I’m betting on the latter.

BTW, Jonathan ends with a great list of further reading on messaging and mobile applications.

Enjoy!

I first saw this in a tweet by Alyona Medelyan.

February 25, 2015

Elon Musk Must Be Wringing His Hands, Again

Filed under: Artificial Intelligence — Patrick Durusau @ 7:29 pm

Google develops computer program capable of learning tasks independently by Hannah Devlin.

From the post:

Google scientists have developed the first computer program capable of learning a wide variety of tasks independently, in what has been hailed as a significant step towards true artificial intelligence.

The same program, or “agent” as its creators call it, learnt to play 49 different retro computer games, and came up with its own strategies for winning. In the future, the same approach could be used to power self-driving cars, personal assistants in smartphones or conduct scientific research in fields from climate change to cosmology.

The research was carried out by DeepMind, the British company bought by Google last year for £400m, whose stated aim is to build “smart machines”.

Demis Hassabis, the company’s founder said: “This is the first significant rung of the ladder towards proving a general learning system can work. It can work on a challenging task that even humans find difficult. It’s the very first baby step towards that grander goal … but an important one.”

Truly a remarkable achievement.

I haven’t found a more detailed description of the strategies developed by the “agent,” but it would be interesting to try those out on retro computer games.

The post is a good one and worth your time to read.

It closes by contrasting Elon Musk’s fears of an AI apocalypse with Google’s assurance that any danger is decades away.

I take a great deal of reassurance from the “agent” being supplied with the retro video games.

The “agent” did not choose to become a master of Asteroids, with the intent of being the despair of all other gamers at the local arcade.

However good an “agent” may become, at any task, from video games to surgery, the question is who chooses for the task to be performed. Granted, we probably want to lock out commands like “Make me a suitcase-sized nuclear weapon” and that sort of thing.

February 17, 2015

The Future of AI: Reflections From A Dark Mirror

Filed under: Artificial Intelligence — Patrick Durusau @ 4:29 pm

You have seen Artificial Intelligence could make us extinct, warn Oxford University researchers or similar pieces in the news of late.

With the usual sound bites (shortened even more here):

  • Oxford researchers: “intelligent AIs a unique risk, in that extinction is more likely than lesser impacts.”
  • Elon Musk, the man behind PayPal, Tesla Motors and SpaceX,… ‘our biggest existential threat’
  • Bill Gates backed up Musk’s concerns…”I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
  • The Greatest Living Physicist? Stephen Hawking… “The development of full artificial intelligence could spell the end of the human race. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

This is what is known as the “argument from authority” (a fallacy).

As the Wikipedia article on argument from authority notes:

…authorities can come to the wrong judgments through error, bias, dishonesty, or falling prey to groupthink. Thus, the appeal to authority is not a generally reliable argument for establishing facts.[7]

This article and others like it must use the “argument from authority” fallacy because they have no facts with which to persuade you of the danger of future AI. It isn’t often that you find others, outside of science fiction, who admit their alleged dangers are invented out of whole cloth.

The Oxford Researchers attempt to dress their alarmist assertions up to sound better than “appeal to authority:”

Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), 485 and would probably act in a way to boost their own intelligence and acquire maximal resources for almost all initial AI motivations. 486 And if these motivations do not detail 487 the survival and value of humanity in exhaustive detail, the intelligence will be driven to construct a world without humans or without meaningful features of human existence.

This makes extremely intelligent AIs a unique risk, 488 in that extinction is more likely than lesser impacts. An AI would only turn on humans if it foresaw a likely chance of winning; otherwise it would remain fully integrated into society. And if an AI had been able to successfully engineer a civilisation collapse, for instance, then it could certainly drive the remaining humans to extinction.

Let’s briefly compare the statements made about some future AI with the sources cited by the authors.

486 See Omohundro, Stephen M.: The basic AI drives. Frontiers in Artificial Intelligence and applications 171 (2008): 483

The Basic AI Drives offers the following abstract:

One might imagine that AI systems with harmless goals will be harmless. This paper instead shows that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted. We start by showing that goal-seeking systems will have drives to model their own operation and to improve themselves. We then show that self-improving systems will be driven to clarify their goals and represent them as economic utility functions. They will also strive for their actions to approximate rational economic behavior. This will lead almost all systems to protect their utility functions from modification and their utility measurement systems from corruption. We also discuss some exceptional systems which will want to modify their utility functions. We next discuss the drive toward self-protection which causes systems try to prevent themselves from being harmed. Finally we examine drives toward the acquisition of resources and toward their efficient utilization. We end with a discussion of how to incorporate these insights in designing intelligent technology which will lead to a positive future for humanity.

Omohundro reminds me of Alan Greenspan, who had to admit to Congress that his long-held faith in the “…rational economic behavior…” of investors was mistaken.

From Wikipedia:

In Congressional testimony on October 23, 2008, Greenspan finally conceded error on regulation. The New York Times wrote, “a humbled Mr. Greenspan admitted that he had put too much faith in the self-correcting power of free markets and had failed to anticipate the self-destructive power of wanton mortgage lending. … Mr. Greenspan refused to accept blame for the crisis but acknowledged that his belief in deregulation had been shaken.” Although many Republican lawmakers tried to blame the housing bubble on Fannie Mae and Freddie Mac, Greenspan placed far more blame on Wall Street for bundling subprime mortgages into securities.[80]

Like Greenspan, Omohundro has created a hedge around intelligence that he calls “rational economic behavior,” which has its roots in Boolean logic. The problem is that Omohundro, like so many others, appears to know Boole’s An Investigation of the Laws of Thought only by reputation and/or repetition by others.

Boole was very careful to point out that his rules were only one aspect of what it means to “reason,” saying at pp. 327-328:

But the very same class of considerations shows with equal force the error of those who regard the study of Mathematics, and of their applications, as a sufficient basis either of knowledge or of discipline. If the constitution of the material frame is mathematical, it is not merely so. If the mind, in its capacity of formal reasoning, obeys, whether consciously or unconsciously, mathematical laws, it claims through its other capacities of sentiment and action, through its perceptions of beauty and of moral fitness, through its deep springs of emotion and affection, to hold relation to a different order of things. There is, moreover, a breadth of intellectual vision, a power of sympathy with truth in all its forms and manifestations, which is not measured by the force and subtlety of the dialectic faculty. Even the revelation of the material universe in its boundless magnitude, and pervading order, and constancy of law, is not necessarily the most fully apprehended by him who has traced with minutest accuracy the steps of the great demonstration. And if we embrace in our survey the interests and duties of life, how little do any processes of mere ratiocination enable us to comprehend the weightier questions which they present! As truly, therefore, as the cultivation of the mathematical or deductive faculty is a part of intellectual discipline, so truly is it only a part. The prejudice which would either banish or make supreme any one department of knowledge or faculty of mind, betrays not only error of judgment, but a defect of that intellectual modesty which is inseparable from a pure devotion to truth. It assumes the office of criticising a constitution of things which no human appointment has established, or can annul. It sets aside the ancient and just conception of truth as one though manifold. Much of this error, as actually existent among us, seems due to the special and isolated character of scientific teaching—which character it, in its turn, tends to foster. The study of philosophy, notwithstanding a few marked instances of exception, has failed to keep pace with the advance of the several departments of knowledge, whose mutual relations it is its province to determine. It is impossible, however, not to contemplate the particular evil in question as part of a larger system, and connect it with the too prevalent view of knowledge as a merely secular thing, and with the undue predominance, already adverted to, of those motives, legitimate within their proper limits, which are founded upon a regard to its secular advantages. In the extreme case it is not difficult to see that the continued operation of such motives, uncontrolled by any higher principles of action, uncorrected by the personal influence of superior minds, must tend to lower the standard of thought in reference to the objects of knowledge, and to render void and ineffectual whatsoever elements of a noble faith may still survive.

As far as the “drives” of an AI go, we have only one speculation on such drives and no factual evidence. Restricting the future model of AI to current misunderstandings of what it means to reason doesn’t seem like a useful approach.

487 See Muehlhauser, Luke, and Louie Helm.: Intelligence Explosion and Machine Ethics. In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer (2012)

Muehlhauser and Helm are cited for the proposition:

And if these motivations do not detail 487 the survival and value of humanity in exhaustive detail, the intelligence will be driven to construct a world without humans or without meaningful features of human existence.

The abstract for Intelligence Explosion and Machine Ethics reads:

Many researchers have argued that a self-improving artificial intelligence (AI) could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals. If so, and if the AI’s goals differ from ours, then this could be disastrous for humans. One proposed solution is to program the AI’s goal system to want what we want before the AI self-improves beyond our capacity to control it. Unfortunately, it is difficult to specify what we want. After clarifying what we mean by “intelligence,” we offer a series of “intuition pumps” from the field of moral philosophy for our conclusion that human values are complex and difficult to specify. We then survey the evidence from the psychology of motivation, moral psychology, and neuroeconomics that supports our position. We conclude by recommending ideal preference theories of value as a promising approach for developing a machine ethics suitable for navigating an intelligence explosion or “technological singularity.”

What follows is a delightful discussion of the difficulties of constructing moral rules of universal application and how attempts at moral guidance for AIs could lead to unintended consequences. I take the essay as evidence of our imprecision in moral reasoning and the need to do better for ourselves and any future AI. Its relationship to “…driven to construct a world without humans or without meaningful features of human existence” is tenuous at best.

For their most extreme claim:

This makes extremely intelligent AIs a unique risk, 488 in that extinction is more likely than lesser impacts.

the authors rely upon the most reliable source, themselves:

488 Dealing with most risks comes under the category of decision theory: finding the right approaches to maximise the probability of the most preferred options. But an intelligent agent can react to decisions in a way the environment cannot, meaning that interactions with AIs are better modelled by the more complicated discipline of game theory.

For the claim that extinction by a future AI is more likely than lesser impacts, the authors have only self-citation as authority.

To summarize, the claims about future AI are based on arguments from authority and the evidence cited by the “Oxford researchers” consists of one defective notion of AI, one exploration of specifying moral rules and a self-citation.

As a contrary example, consider all the non-human inhabitants of the Earth, none of which have exhibited that unique human trait, the need to drive other species into extinction. Perhaps those who fear a future AI are seeing a reflection from a dark mirror.

PS: You can see the full version of the Oxford report: 12 Risks that threaten human civilisation.

The authors and/or their typesetter are very skilled at page layout and the use of color. It is unfortunate they did not have professional editing for the AI section of the report.

February 10, 2015

MS Deep Learning Beats Humans (and MS is modest about it)

Filed under: Artificial Intelligence,Deep Learning,Machine Learning — Patrick Durusau @ 7:51 pm

Microsoft researchers say their newest deep learning system beats humans — and Google

Two stories for the price of one! Microsoft’s deep learning project beats human recognition on a data set and Microsoft is modest about it. 😉

From the post:

The Microsoft creation got a 4.94 percent error rate for the correct classification of images in the 2012 version of the widely recognized ImageNet data set, compared with a 5.1 percent error rate among humans, according to the paper. The challenge involved identifying objects in the images and then correctly selecting the most accurate categories for the images, out of 1,000 options. Categories included “hatchet,” “geyser,” and “microwave.”

[modesty]
“While our algorithm produces a superior result on this particular dataset, this does not indicate that machine vision outperforms human vision on object recognition in general,” they wrote. “On recognizing elementary object categories (i.e., common objects or concepts in daily lives) such as the Pascal VOC task, machines still have obvious errors in cases that are trivial for humans. Nevertheless, we believe that our results show the tremendous potential of machine algorithms to match human-level performance on visual recognition.”
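
For context, the error rate being compared is the ImageNet top-5 metric: a prediction counts as correct if the true label is among the model’s five highest-scoring categories out of 1,000. A small sketch of how that number is computed, with random stand-in scores rather than Microsoft’s outputs:

# Computing ImageNet-style top-5 error: a prediction is wrong only if the
# true label is absent from the five highest-scoring of the 1,000 classes.
# Scores below are random stand-ins, not Microsoft's model outputs.

import numpy as np

rng = np.random.default_rng(0)
num_images, num_classes = 1000, 1000
scores = rng.random((num_images, num_classes))        # fake model scores
true_labels = rng.integers(0, num_classes, num_images)

top5 = np.argsort(scores, axis=1)[:, -5:]             # 5 best classes per image
hits = np.any(top5 == true_labels[:, None], axis=1)
top5_error = 1.0 - hits.mean()

print("top-5 error: {:.2%}".format(top5_error))       # ~99.5% for random scores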

You can grab the paper here.

Hoping that Microsoft sets a trend in reporting breakthroughs in big data and machine learning. Stating the achievement but also its limitations may lead to more accurate reporting of technical news. Not holding my breath but I am hopeful.

I first saw this in a tweet by GPUComputing.

February 4, 2015

All Models of Learning have Flaws

Filed under: Artificial Intelligence,Machine Learning — Patrick Durusau @ 5:55 pm

All Models of Learning have Flaws by John Langford.

From the post:

Attempts to abstract and study machine learning are within some given framework or mathematical model. It turns out that all of these models are significantly flawed for the purpose of studying machine learning. I’ve created a table (below) outlining the major flaws in some common models of machine learning.

Quite dated (2007) but still quite handy chart of what is “right” and “wrong” about machine learning models.

Would be even more useful with smallish data sets that illustrate what is “right” and “wrong” about each model.

Anything you would add or take away?

I first saw this in a tweet by Computer Science.

January 17, 2015

Facebook open sources tools for bigger, faster deep learning models

Filed under: Artificial Intelligence,Deep Learning,Facebook,Machine Learning — Patrick Durusau @ 6:55 pm

Facebook open sources tools for bigger, faster deep learning models by Derrick Harris.

From the post:

Facebook on Friday open sourced a handful of software libraries that it claims will help users build bigger, faster deep learning models than existing tools allow.

The libraries, which Facebook is calling modules, are alternatives for the default ones in a popular machine learning development environment called Torch, and are optimized to run on Nvidia graphics processing units. Among the modules are those designed to rapidly speed up training for large computer vision systems (nearly 24 times, in some cases), to train systems on potentially millions of different classes (e.g., predicting whether a word will appear across a large number of documents, or whether a picture was taken in any city anywhere), and an optimized method for building language models and word embeddings (e.g., knowing how different words are related to each other).

“[T]here is no way you can use anything existing” to achieve some of these results, said Soumith Chintala, an engineer with Facebook Artificial Intelligence Research.
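
The word-embedding module mentioned above comes down to representing words as vectors so that related words land close together, typically compared with cosine similarity. A toy sketch (the vectors are invented, not Facebook’s):

# Toy word embeddings: related words get similar vectors, and cosine
# similarity measures how related they are. Vectors here are invented.

import numpy as np

embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.85, 0.75, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print("king vs queen:", round(cosine(embeddings["king"], embeddings["queen"]), 3))
print("king vs apple:", round(cosine(embeddings["king"], embeddings["apple"]), 3))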

How very awesome! Keeping abreast of the latest releases and papers on deep learning is turning out to be a real chore. Enjoyable, but a time sink nonetheless.

Derrick’s post and the release from Facebook have more details.

Apologies for the “lite” posting today, but I have been proofing related specifications where one defines a term and the other uses the term but doesn’t cite the other specification’s definition or give its own. Do those mean the same thing? Probably, but users outside the process may or may not realize that, particularly in translation.

I first saw this in a tweet by Kirk Borne.

January 8, 2015

Simple Pictures That State-of-the-Art AI Still Can’t Recognize

Filed under: Artificial Intelligence,Deep Learning,Machine Learning,Neural Networks — Patrick Durusau @ 3:58 pm

Simple Pictures That State-of-the-Art AI Still Can’t Recognize by Kyle VanHemert.

I encountered this non-technical summary of Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, which I covered as: Deep Neural Networks are Easily Fooled:… earlier today.

While I am sure you have read the fuller explanation, I wanted to replicate the top 40 images for your consideration:

[Image: the top 40 images from the paper]

Select the image to see a larger, readable version.

Enjoy the images and pass the Wired article along to friends.

December 31, 2014

Google’s Secretive DeepMind Startup Unveils a “Neural Turing Machine”

Filed under: Artificial Intelligence,Semantic Web,Semantics — Patrick Durusau @ 2:58 pm

Google’s Secretive DeepMind Startup Unveils a “Neural Turing Machine”

From the post:

One of the great challenges of neuroscience is to understand the short-term working memory in the human brain. At the same time, computer scientists would dearly love to reproduce the same kind of memory in silico.

Today, Google’s secretive DeepMind startup, which it bought for $400 million earlier this year, unveils a prototype computer that attempts to mimic some of the properties of the human brain’s short-term working memory. The new computer is a type of neural network that has been adapted to work with an external memory. The result is a computer that learns as it stores memories and can later retrieve them to perform logical tasks beyond those it has been trained to do.

Of particular interest to topic mappers and folks looking for realistic semantic solutions for big data, especially the concept of “recoding,” which is how the human brain collapses multiple chunks of data into one chunk for easier access/processing.

It sounds close to referential transparency to me, but where the transparency is optional. That is, you don’t have to look unless you need the details.

The full article will fully repay the time to read it and then some:

Neural Turing Machines by Alex Graves, Greg Wayne, Ivo Danihelka.

Abstract:

We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.

The paper was revised on 10 December 2014, so if you read an earlier version, you may want to read it again. Whether Google cracks this aspect of the problem of intelligence or not, it sounds like an intriguing technique with applications in topic map/semantic processing.
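
The “external memory plus attentional read” idea in the abstract can be sketched in a few lines: the controller emits a key, and the read is a softmax-weighted blend of the memory rows most similar to that key. Because the blend is a smooth function of its inputs, it can be trained by gradient descent. A toy illustration (invented memory contents, not the paper’s implementation):

# Content-based addressing as in the Neural Turing Machine abstract:
# read the external memory as a softmax-weighted mix of rows similar to a
# query key. The blend is differentiable, hence trainable by gradients.
# The memory contents, key, and sharpness below are invented.

import numpy as np

memory = np.array([[1.0, 0.0, 0.0],     # external memory: one row per slot
                   [0.0, 1.0, 0.0],
                   [0.9, 0.1, 0.0]])
key = np.array([1.0, 0.0, 0.0])         # what the controller wants to find
beta = 5.0                               # sharpness of the attention

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

similarities = np.array([cosine(row, key) for row in memory])
weights = np.exp(beta * similarities)
weights /= weights.sum()                 # softmax attention over slots

read_vector = weights @ memory           # blended read, not a hard lookup
print("attention weights:", np.round(weights, 3))
print("read vector:", np.round(read_vector, 3))

A hard lookup would not be differentiable; the softmax blend is what lets the whole system train end to end.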

December 19, 2014

The top 10 Big data and analytics tutorials in 2014

Filed under: Analytics,Artificial Intelligence,BigData — Patrick Durusau @ 4:31 pm

The top 10 Big data and analytics tutorials in 2014 by Sarah Domina.

From the post:

At developerWorks, our Big data and analytics content helps you learn to leverage the tools and technologies to harness and analyze data. Let’s take a look back at the top 10 tutorials from 2014, in no particular order.

There are a couple of IBM product line specific tutorials but the majority of them you will enjoy whether you are an IBM shop or not.

Oddly enough, the post for the top ten (10) in 2014 was made on 26 September 2014.

Either Watson is far better than I have ever imagined or IBM has its own calendar.

In favor of an IBM calendar, I would point out that IBM has its own song.

A flag:

[Image: IBM flag]

IBM ranks ahead of Morocco in terms of GDP at $99.751 billion.

Does IBM have its own calendar? Hard to say for sure but I would not doubt it. 😉

December 12, 2014

RoboBrain: The World’s First Knowledge Engine For Robots

Filed under: Artificial Intelligence,Machine Learning — Patrick Durusau @ 8:01 pm

RoboBrain: The World’s First Knowledge Engine For Robots

From the post:

One of the most exciting changes influencing modern life is the ability to search and interact with information on a scale that has never been possible before. All this is thanks to a convergence of technologies that have resulted in services such as Google Now, Siri, Wikipedia and IBM’s Watson supercomputer.

This gives us answers to a wide range of questions on almost any topic simply by whispering a few words into a smart phone or typing a few characters into a laptop. Part of what makes this possible is that humans are good at coping with ambiguity. So the answer to a simple question such as “how to make cheese on toast” can result in very general instructions that an ordinary person can easily follow.

For robots, the challenge is quite different. These machines require detailed instructions even for the simplest task. For example, a robot asking a search engine “how to bring sweet tea from the kitchen” is unlikely to get the detail it needs to carry out the task since it requires all kinds of incidental knowledge such as the idea that cups can hold liquid (but not when held upside down), that water comes from taps and can be heated in a kettle or microwave, and so on.

The truth is that if robots are ever to get useful knowledge from search engines, these databases will have to contain a much more detailed description of every task that they might need to carry out.

Enter Ashutosh Saxena at Stanford University in Palo Alto and a number of pals, who have set themselves the task of building such knowledge engine for robots.

These guys have already begun creating a kind of Google for robots that can be freely accessed by any device wishing to carry out a task. At the same time, the database gathers new information about these tasks as robots perform them, thereby learning as it goes. They call their new knowledge engine RoboBrain.

[Image: RoboBrain]

An overview of: arxiv.org/abs/1412.0691 RoboBrain: Large-Scale Knowledge Engine for Robots.

See the website as well: RoboBrain.me

Not quite AI but something close.

If nothing else, the project should identify a large amount of tacit knowledge that is generally overlooked.

November 30, 2014

BRAIN WORKSHOP [Dec. 3-5, 2014]

Filed under: Artificial Intelligence,Neural Information Processing,Neuroinformatics — Patrick Durusau @ 10:18 am

BRAIN WORKSHOP: Workshop on the Research Interfaces between Brain Science and Computer Science

From the post:

Computer science and brain science share deep intellectual roots – after all, computer science sprang out of Alan Turing’s musings about the brain in the spring of 1936. Today, understanding the structure and function of the human brain is one of the greatest scientific challenges of our generation. Decades of study and continued progress in our knowledge of neural function and brain architecture have led to important advances in brain science, but a comprehensive understanding of the brain still lies well beyond the horizon. How might computer science and brain science benefit from one another? Computer science, in addition to staggering advances in its core mission, has been instrumental in scientific progress in physical and social sciences. Yet among all scientific objects of study, the brain seems by far the most blatantly computational in nature, and thus presumably most conducive to algorithmic insights, and more apt to inspire computational research. Models of the brain are naturally thought of as graphs and networks; machine learning seeks inspiration in human learning; neuromorphic computing models attempt to use biological insight to solve complex problems. Conversely, the study of the brain depends crucially on interpretation of data: imaging data that reveals structure, activity data that relates to the function of individual or groups of neurons, and behavioral data that embodies the complex interaction of all of these elements.

This two-day workshop, sponsored by the Computing Community Consortium (CCC) and National Science Foundation (NSF), brings together brain researchers and computer scientists for a scientific dialogue aimed at exposing new opportunities for joint research in the many exciting facets, established and new, of the interface between the two fields.   The workshop will be aimed at questions such as these:

  • What are the current barriers to mapping the architecture of the brain, and how can they be overcome?
  • What scale of data suffices for the discovery of “neural motifs,” and what might they look like?
  • What would be required to truly have a “neuron in-silico,” and how far are we from that?
  • How can we connect models across the various scales (biophysics – neural function – cortical functional units – cognition)?
  • Which computational principles of brain function can be employed to solve computational problems? What sort of platforms would support such work?
  • What advances are needed in hardware and software to enable true brain-computer interfaces? What is the right “neural language” for communicating with the brain?
  • How would one be able to test equivalence between a computational model and the modeled brain subsystem?
  • Suppose we could map the network of billions of nodes and trillions of connections that is the brain; how would we infer structure?
  • Can we create open-science platforms enabling computational science on enormous amounts of heterogeneous brain data (as it has happened in genomics)?
  • Is there a productive algorithmic theory of the brain, which can inform our search for answers to such questions?

Plenary addresses to be live-streamed at: http://www.cra.org/ccc/visioning/visioning-activities/brain

December 4, 2014 (EST):

8:40 AM Plenary: Jack Gallant, UC Berkeley, A Big Data Approach to Functional Characterization of the Mammalian Brain

2:00 PM Plenary: Aude Oliva, MIT Time, Space and Computation: Converging Human Neuroscience and Computer Science

7:30 PM Plenary: Leslie Valiant, Harvard, Can Models of Computation in Neuroscience be Experimentally Validated?

December 5, 2014 (EST)

10:05 AM Plenary: Terrence Sejnowski, Salk Institute, Theory, Computation, Modeling and Statistics: Connecting the Dots from the BRAIN Initiative

Mark your calendars today!

November 27, 2014

The Three Breakthroughs That Have Finally Unleashed AI on the World

Filed under: Artificial Intelligence — Patrick Durusau @ 8:51 pm

The Three Breakthroughs That Have Finally Unleashed AI on the World by Kevin Kelly.

I was attracted to this post by a tweet from Diana Zeaiter Joumblat which read:

How parallel computing, big data & deep learning algos have put an end to the #AI winter

It has been almost a decade now, but while riding to lunch with a doctoral student in computer science, they related how their department was known as “human-centered computing” because AI had gotten such a bad name. In their view, the AI winter was about to end.

I was quite surprised as I remembered the AI winter of the 1970’s. 😉

The purely factual observations by Kevin in this article are all true, but I would not fret too much about:

As it does, this cloud-based AI will become an increasingly ingrained part of our everyday life. But it will come at a price. Cloud computing obeys the law of increasing returns, sometimes called the network effect, which holds that the value of a network increases much faster as it grows bigger. The bigger the network, the more attractive it is to new users, which makes it even bigger, and thus more attractive, and so on. A cloud that serves AI will obey the same law. The more people who use an AI, the smarter it gets. The smarter it gets, the more people use it. The more people that use it, the smarter it gets. Once a company enters this virtuous cycle, it tends to grow so big, so fast, that it overwhelms any upstart competitors. As a result, our AI future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.

I am very doubtful of: “The more people who use an AI, the smarter it gets.”

As we have seen from the Michael Brown case, the more people who comment on a subject, the less is known about it. Or at least what is known gets lost in a tide of non-factual, but stated as factual, information.

The assumption the current AI boom will crash upon is that accurate knowledge can be obtained in all areas. In some, like chess, sure, that can happen. But do we know all the factors at play between the police and the communities they serve?

AIs can help with medicine, but considering what we don’t know about the human body and medicine, taking a statistical guess at the best treatment isn’t reasoning, it’s a better betting window.

I am all for pushing AIs where they are useful, while being ever mindful that an AI has no more than the mechanical operations of my father’s pocket calculator, which I remember from childhood. Impressive, but that’s not the equivalent of intelligence.

October 24, 2014

This Is Watson

Filed under: Artificial Intelligence — Patrick Durusau @ 7:08 pm

This is Watson (IBM Journal of Research and Development, Volume 56, Issue: 3.4, 2012)

The entire issue of IBM Journal of Research and Development, Volume 56, Issue: 3.4 as PDF files.

From the table of contents:

This Is Watson

In 2007, IBM Research took on the grand challenge of building a computer system that could compete with champions at the game of Jeopardy!. In 2011, the open-domain question-answering system dubbed Watson beat the two highest ranked players in a nationally televised two-game Jeopardy! match. This special issue provides a deep technical overview of the ideas and accomplishments that positioned our team to take on the Jeopardy! challenge, build Watson, and ultimately triumph. It describes the nature of the question-answering challenge represented by Jeopardy! and details our technical approach. The papers herein describe and provide experimental results for many of the algorithmic techniques developed as part of the Watson system, covering areas including computational linguistics, information retrieval, knowledge representation and reasoning, and machine learning. The papers offer component-level evaluations as well as their end-to-end contribution to Watson’s overall question-answering performance.

1 Introduction to “This is Watson”
D. A. Ferrucci

2 Question analysis: How Watson reads a clue
A. Lally, J. M. Prager, M. C. McCord, B. K. Boguraev, S. Patwardhan, J. Fan, P. Fodor, and J. Chu-Carroll

3 Deep parsing in Watson
M. C. McCord, J. W. Murdock, and B. K. Boguraev

4 Textual resource acquisition and engineering
J. Chu-Carroll, J. Fan, N. Schlaefer, and W. Zadrozny

5 Automatic knowledge extraction from documents
J. Fan, A. Kalyanpur, D. C. Gondek, and D. A. Ferrucci

6 Finding needles in the haystack: Search and candidate generation
J. Chu-Carroll, J. Fan, B. K. Boguraev, D. Carmel, D. Sheinwald, and C. Welty

7 Typing candidate answers using type coercion
J. W. Murdock, A. Kalyanpur, C. Welty, J. Fan, D. A. Ferrucci, D. C. Gondek, L. Zhang, and H. Kanayama

8 Textual evidence gathering and analysis
J. W. Murdock, J. Fan, A. Lally, H. Shima, and B. K. Boguraev

9 Relation extraction and scoring in DeepQA
C. Wang, A. Kalyanpur, J. Fan, B. K. Boguraev, and D. C. Gondek

10 Structured data and inference in DeepQA
A. Kalyanpur, B. K. Boguraev, S. Patwardhan, J. W. Murdock, A. Lally, C. Welty, J. M. Prager, B. Coppola, A. Fokoue-Nkoutche, L. Zhang, Y. Pan, and Z. M. Qiu

11 Special Questions and techniques
J. M. Prager, E. W. Brown, and J. Chu-Carroll

12 Identifying implicit relationships
J. Chu-Carroll, E. W. Brown, A. Lally, and J. W. Murdock

13 Fact-based question decomposition in DeepQA
A. Kalyanpur, S. Patwardhan, B. K. Boguraev, A. Lally, and J. Chu-Carroll

14 A framework for merging and ranking of answers in DeepQA
D. C. Gondek, A. Lally, A. Kalyanpur, J. W. Murdock, P. A. Duboue, L. Zhang, Y. Pan, Z. M. Qiu, and C. Welty

15 Making Watson fast
E. A. Epstein, M. I. Schor, B. S. Iyer, A. Lally, E. W. Brown, and J. Cwiklik

16 Simulation, learning, and optimization techniques in Watson’s game strategies
G. Tesauro, D. C. Gondek, J. Lenchner, J. Fan, and J. M. Prager

17 In the game: The interface between Watson and Jeopardy!
B. L. Lewis

Whatever your views on AI, Watson is truly impressive computer science.

Enjoy!

I first saw this in a tweet by Christopher Phipps.

October 16, 2014

IBM Watson: How it Works [This is a real hoot!]

Filed under: Artificial Intelligence — Patrick Durusau @ 4:03 pm

Dibs on why “artificial intelligence” has failed, is failing, and will fail! (At least if you think “artificial intelligence” means reasoning like a human being.)

IBM describes the decision making process in humans as four steps:

  1. Observe
  2. Interpret and draw hypotheses
  3. Evaluate which hypothesis is right or wrong
  4. Decide based on the evaluation

Most of us learned those four steps or variations on them as part of research paper writing or introductions to science. And we have heard them repeated in a variety of contexts.

However, we also know that model of human “reasoning” is a fantasy. Most if not all of us claim to follow it, but the truth is that the vast majority of decision making has little to do with those four steps.

That’s not just a “blog opinion” but one that has been substantiated by years of research. Look at any chapter in Thinking, Fast and Slow by Daniel Kahneman and tell me how Watson’s four step process is a better explanation than the one you will find there.

One of my favorite examples was the impact of meal times on parole decisions in Israel. Shai Danziger, Jonathan Levav, and Liora Avnaim-Pesso, “Extraneous Factors in Judicial Decisions,” PNAS 108 (2011): 6889-92.

Abstract from Danziger:

Are judicial rulings based solely on laws and facts? Legal formalism holds that judges apply legal reasons to the facts of a case in a rational, mechanical, and deliberative manner. In contrast, legal realists argue that the rational application of legal reasons does not sufficiently explain the decisions of judges and that psychological, political, and social factors influence judicial rulings. We test the common caricature of realism that justice is “what the judge ate for breakfast” in sequential parole decisions made by experienced judges. We record the judges’ two daily food breaks, which result in segmenting the deliberations of the day into three distinct “decision sessions.” We find that the percentage of favorable rulings drops gradually from ≈65% to nearly zero within each decision session and returns abruptly to ≈65% after a break. Our findings suggest that judicial rulings can be swayed by extraneous variables that should have no bearing on legal decisions.

If “yes” on parole applications starts at 65% right after breakfast or lunch and dwindles to zero, I know when I want my case heard.

That is just one example from hundreds in Kahneman.

Watson lacks the irrationality necessary to “reason like a human being.”

(Note that Watson is only given simple questions. No questions about policy choices in long simmering conflicts. We save those for human beings.)

October 5, 2014

The Barrier of Meaning

Filed under: Artificial Intelligence,Computer Science,Meaning — Patrick Durusau @ 6:40 pm

The Barrier of Meaning by Gian-Carlo Rota.

The author discusses the “AI-problem” with Stanislaw Ulam. Ulam makes reference to the history of the “AI-problem” and then continues:

Well, said Stan Ulam, let us play a game. Imagine that we write a dictionary of common words. We shall try to write definitions that are unmistakeably explicit, as if ready to be programmed. Let us take, for instance, nouns like key, book, passenger, and verbs like waiting, listening, arriving. Let us start with the word “key.” I now take this object out of my pocket and ask you to look at it. No amount of staring at this object will ever tell you that this is a key, unless you already have some previous familiarity with the way keys are used.

Now look at that man passing by in a car. How do you tell that it is not just a man you are seeing, but a passenger?

When you write down precise definitions for these words, you discover that what you are describing is not an object, but a function, a role that is inextricably tied to some context. Take away that context, and the meaning also disappears.

When you perceive intelligently, as you sometimes do, you always perceive a function, never an object in the set-theoretic or physical sense.

Your Cartesian idea of a device in the brain that does the registering is based upon a misleading analogy between vision and photography. Cameras always register objects, but human perception is always the perception of functional roles. The two processes could not be more different.

Your friends in AI are now beginning to trumpet the role of contexts, but they are not practicing their lesson. They still want to build machines that see by imitating cameras, perhaps with some feedback thrown in. Such an approach is bound to fail since it starts out with a logical misunderstanding….

Should someone mention this to the EC Brain project?
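
Ulam’s point translates almost directly into topic map terms: "passenger" is not a property of a person but a role the person plays in an association with some context. As a rough sketch of my own (Python, purely illustrative, nothing from Rota’s article):

  from dataclasses import dataclass, field

  @dataclass
  class Person:
      name: str

  @dataclass
  class Car:
      plate: str
      occupants: list = field(default_factory=list)  # associations, not attributes of the person

  def roles_of(person: Person, contexts: list) -> list:
      """Derive roles from contexts; the object alone carries no role."""
      roles = []
      for car in contexts:
          if person in car.occupants:
              roles.append(("passenger", car.plate))
      return roles

  alice = Person("Alice")
  car = Car(plate="XYZ-123", occupants=[alice])

  print(roles_of(alice, [car]))  # [('passenger', 'XYZ-123')]
  print(roles_of(alice, []))     # [] -- take away the context and the role disappears

Take away the car and Alice is still an object, but "passenger" no longer applies, which is exactly the context-dependence Ulam describes.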

BTW, you may be able to access this article at: Physica D: Nonlinear Phenomena, Volume 22, Issues 1–3, Pages 1-402 (October–November 1986), Proceedings of the Fifth Annual International Conference. For some unknown reason, the editorial board pages are $37.95, as are all the other articles, save for this one by Gian-Carlo Rota, which, as of today, is freely accessible.

The webpages say Physica D supports "open access." I find that rather doubtful when only three (3) pages out of four hundred and two (402) require no payment, and that for material published in 1986.

You?

October 4, 2014

You Don’t Have to Be Google to Build an Artificial Brain

Filed under: Artificial Intelligence,Deep Learning — Patrick Durusau @ 7:24 pm

You Don’t Have to Be Google to Build an Artificial Brain by Cade Metz.

From the post:

When Google used 16,000 machines to build a simulated brain that could correctly identify cats in YouTube videos, it signaled a turning point in the art of artificial intelligence.

Applying its massive cluster of computers to an emerging breed of AI algorithm known as “deep learning,” the so-called Google brain was twice as accurate as any previous system in recognizing objects pictured in digital images, and it was hailed as another triumph for the mega data centers erected by the kings of the web.

But in the middle of this revolution, a researcher named Alex Krizhevsky showed that you don’t need a massive computer cluster to benefit from this technology’s unique ability to “train itself” as it analyzes digital data. As described in a paper published later that same year, he outperformed Google’s 16,000-machine cluster with a single computer—at least on one particular image recognition test.

This was a rather expensive computer, equipped with large amounts of memory and two top-of-the-line cards packed with myriad GPUs, a specialized breed of computer chip that allows the machine to behave like many. But it was a single machine nonetheless, and it showed that you didn’t need a Google-like computing cluster to exploit the power of deep learning.

Cade’s article should encourage you to do two things:

  • Learn GPUs cold
  • Ditto on Deep Learning

Google and others will always have more raw processing power than any system you are likely to afford. However, while a steam shovel can shovel a lot of clay, it takes a real expert to make a vase. Particularly a very good one.

Do you want to pine for a steam shovel or work towards creating a fine vase?
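
If you choose the vase, the entry cost is low. Here is a minimal single-GPU training sketch of my own (assuming PyTorch and a CUDA-capable card; an illustration only, not Krizhevsky’s setup):

  import torch
  import torch.nn as nn

  device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

  # A tiny convolutional classifier standing in for a "deep" model.
  model = nn.Sequential(
      nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
      nn.Linear(16, 10),
  ).to(device)

  optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
  loss_fn = nn.CrossEntropyLoss()

  # Random stand-in data; a real run would stream labeled images from disk.
  images = torch.randn(64, 3, 32, 32, device=device)
  labels = torch.randint(0, 10, (64,), device=device)

  for step in range(10):
      optimizer.zero_grad()
      loss = loss_fn(model(images), labels)
      loss.backward()
      optimizer.step()
      print(f"step {step}: loss {loss.item():.3f}")

The point is not the model, which is deliberately trivial, but that a single machine with a decent GPU is enough to start learning both items on the list above.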

PS: Google isn’t building “an artificial brain,” not anywhere close. That’s why all their designers, programmers and engineers are wetware.

October 3, 2014

EC [WPA] Brain Project Update

Filed under: Artificial Intelligence,EU — Patrick Durusau @ 3:49 pm

Electronic Brain by 2023: E.U.’s Human Brain Project ramps up by R. Colin Johnson.

From the post:

The gist of the first year’s report is that all the pieces are assembled — all personnel are hired, laboratories throughout the region engaged, and the information and communications technology (ICT) is in place to allow the researchers and their more than 100 academic and corporate partners in more than 20 countries to effectively collaborate and share data. Already begun are projects that reconstruct the brain’s functioning at several different biological scales, the analysis of clinical data of diseases of the brain, and the development of computing systems inspired by the brain.

The agenda for the first two and a half years (the ramp-up phase) has also been set whereby the HBP will amass all known strategic data about brain functioning, develop theoretical frameworks that fit that data, and develop the necessary infrastructure for developing six ICT platforms during the following “operational” phase circa 2017.

“Getting ready” is a fair summary of HBP Achievements Year One.

The report fails to mention the concerns of scientists threatening to boycott the project, but given the response of the EC to that letter, which could be summarized as: “…we have decided to spend the money, get in line or get out of the way,” a further response was unlikely.

No, the EC Brain Project is more in line with the WPA projects of the Depression era in the United States. WPA projects were employment projects first, with the results of those projects strictly a secondary concern.

No doubt some new results will come from the EU Brain Project, simply because it isn’t possible to employ that many researchers and not have some publishable results. Particularly if self-published by the project itself.

One can only hope that the project will publish a bibliography of “all known strategic data about brain functioning” as part of its research results. Just so outsiders can gauge the development of “…theoretical frameworks that fit that data.”

One suspects that for less than the conference and travel costs built into this project, the EC could have purchased a site license for the entire EU to most if not all European scientific publishers. That would do more to advance scientific research in the EU than attempting to duplicate the unknown.

September 15, 2014

A Cambrian Explosion In AI Is Coming

Filed under: Artificial Intelligence,Topic Maps — Patrick Durusau @ 6:35 am

A Cambrian Explosion In AI Is Coming by Dag Kittlaus.

From the post:

However, done properly, this emerging conversational paradigm enables a new fluidity for achieving tasks in the digital realm. Such an interface requires no user manual, makes short work of complex tasks via simple conversational commands and, once it gets to know you, makes obsolete many of the most tedious aspects of using the apps, sites and services of today. What if you didn’t have to: register and form-fill; continuously express your preferences; navigate new interfaces with every new app; and the biggest one of them all, discover and navigate each single-purpose app or service at a time?

Let me repeat the last one.

When you can use AI as a conduit, as an orchestrating mechanism to the world of information and services, you find yourself in a place where services don’t need to be discovered by an app store or search engine. It’s a new space where users will no longer be required to navigate each individual application or service to find and do what they want. Rather they move effortlessly from one need to the next with thousands of services competing and cooperating to accomplish their desires and tasks simply by expressing their desires. Just by asking.

Need a babysitter tomorrow night in a jam? Just ask your assistant to find one and it will immediately present you with a near complete set of personalized options: it already knows where you live, knows how many kids you have and their ages, knows which of the babysitting services has the highest reputation and which ones cover your geographic area. You didn’t need to search and discover a babysitting app, download it, register for it, enter your location and dates you are requesting and so on.

Dag uses the time-worn acronym AI (artificial intelligence), which covers any number of intellectual sins. For the scenarios that Dag describes, I propose a new acronym, UsI (user intelligence).

Take the babysitter example to make UsI concrete. The assistant has captured your current (it could change over time) identification of "babysitter" and uses that identification to find information. Otherwise, searching for "babysitter" would return both useful and useless results, much like contemporary search engines.

It is the capturing of your subject identifications, to use topic map language, that enables an assistant to "understand" the world as you do. Perhaps the reverse of "personalization," where an application attempts to guess your preferences for marketing purposes, this is "individualization," where the assistant becomes more like you and knows the usually unspoken facts that underlie your requests.

If I say, “check utility bill,” my assistant will already “know” that I mean for Covington, Georgia, not any of the other places I have resided and implicitly I mean the current (unpaid) bill.
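
As a rough sketch of what capturing those identifications might look like (the structure and property values below are my own invention, not any existing assistant API):

  # Toy model of UsI: store the user's own identification of a subject
  # (its properties), then resolve later requests against it instead of
  # treating the request as a generic keyword search.
  user_identifications = {
      "utility bill": {
          "provider": "City of Covington, GA",
          "scope": "current unpaid bill",   # hypothetical property values
      },
      "babysitter": {
          "service-type": "child care",
          "area": "Covington, GA",
          "min-reputation": 4.5,
      },
  }

  def resolve(request: str):
      """Return the user's identification for a request, if one was captured."""
      for subject, properties in user_identifications.items():
          if subject in request.lower():
              return subject, properties
      return None, {}

  print(resolve("check utility bill"))
  print(resolve("find a babysitter for tomorrow night"))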

The easier and faster it is for an assistant to capture UsI, the faster and more seamless the experience will become for users.

Specifying and inspecting properties that underlie identifications will play an important role in fueling a useful Cambrian explosion in UsI.

Who wants a “babysitter” using your definition? Could have quite unexpected (to me) results. http://www.imdb.com/title/tt0796302/ (Be mindful of your corporate policies on what you can or can’t view at work.)

PS: Did I mention topic maps as collections of properties for identifications?

I first saw this in a tweet by Subject-centric.

September 13, 2014

Open AI Resources

Filed under: Artificial Intelligence,Open Source — Patrick Durusau @ 10:54 am

Open AI Resources

From the about page:

We all go further when we all work together. That’s the promise of Open AIR, an open source collaboration hub for AI researchers. With the decline of university- and government-sponsored research and the rise of large search and social media companies’ insistence on proprietary software, the field is quickly privatizing. Open AIR is the antidote: it’s important for leading scientists and researchers to keep our AI research out in the open, shareable, and extensible by the community. Join us in our goal to keep the field moving forward, together, openly.

An impressive collection of open source AI software and data.

The categories are:

A number of the major players in AI research are part of this project, which bodes well for it being maintained into the future.

If you create or encounter any open AI resources not listed at Open AI Resources, please Submit a Resource.

I first saw this in a tweet by Ana-Maria Popescu.

August 19, 2014

Seeing Things Art Historians Don’t

Filed under: Art,Artificial Intelligence,Machine Learning — Patrick Durusau @ 3:33 pm

When A Machine Learning Algorithm Studied Fine Art Paintings, It Saw Things Art Historians Had Never Noticed: Artificial intelligence reveals previously unrecognised influences between great artists

From the post:

The task of classifying pieces of fine art is hugely complex. When examining a painting, an art expert can usually determine its style, its genre, the artist and the period to which it belongs. Art historians often go further by looking for the influences and connections between artists, a task that is even trickier.

So the possibility that a computer might be able to classify paintings and find connections between them at first glance seems laughable. And yet, that is exactly what Babak Saleh and pals have done at Rutgers University in New Jersey.

These guys have used some of the latest image processing and classifying techniques to automate the process of discovering how great artists have influenced each other. They have even been able to uncover influences between artists that art historians have never recognised until now.

At first I thought the claim was that the computer saw something art historians did not. That’s not hard. The question is whether you can convince anyone else to see what you saw. 😉

I stumbled a bit on figure 1 both in the post and in the paper. The caption for figure 1 in the article says:

Figure 1: An example of an often cited comparison in the context of influence. Left: Diego Velázquez’s Portrait of Pope Innocent X (1650), and, Right: Francis Bacon’s Study After Velázquez’s Portrait of Pope Innocent X (1953). Similar composition, pose, and subject matter but a different view of the work.

Well, not exactly. Bacon never saw the original Portrait of Pope Innocent X but produced over forty-five variants of it. It wasn’t a question of “influence” but of subsequent interpretations of the portrait. Not really the same thing as influence. See: Study after Velázquez’s Portrait of Pope Innocent X

I feel certain this will be a useful technique for exploration, but naming objects in a painting would result in a large number of paintings of popes sitting in chairs, some of which may or may not have been "influences" on subsequent artists.

Or to put it another way, concluding influence based on when artists lived is a post hoc ergo propter hoc fallacy. A good technique for finding places to look, but not a definitive answer.
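
For a sense of what the similarity side of such a system can look like, here is a stripped-down sketch of my own (assuming a pretrained torchvision CNN as the feature extractor; it is not the authors’ code, and the file names are hypothetical):

  import torch
  import torch.nn.functional as F
  from PIL import Image
  from torchvision import models, transforms

  preprocess = transforms.Compose([
      transforms.Resize(256),
      transforms.CenterCrop(224),
      transforms.ToTensor(),
  ])

  # Pretrained ResNet with the classification head removed, used as an embedder.
  resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
  resnet.fc = torch.nn.Identity()
  resnet.eval()

  def embed(path: str) -> torch.Tensor:
      image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
      with torch.no_grad():
          return resnet(image).squeeze(0)

  a = embed("velazquez_innocent_x.jpg")
  b = embed("bacon_study_after_velazquez.jpg")
  # High cosine similarity suggests visual kinship, not historical influence.
  print(F.cosine_similarity(a, b, dim=0).item())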

The original post was based on: Toward Automated Discovery of Artistic Influence

Abstract:

Considering the huge amount of art pieces that exist, there is valuable information to be discovered. Examining a painting, an expert can determine its style, genre, and the time period that the painting belongs. One important task for art historians is to find influences and connections between artists. Is influence a task that a computer can measure? The contribution of this paper is in exploring the problem of computer-automated suggestion of influences between artists, a problem that was not addressed before in a general setting. We first present a comparative study of different classification methodologies for the task of fine-art style classification. A two-level comparative study is performed for this classification problem. The first level reviews the performance of discriminative vs. generative models, while the second level touches the features aspect of the paintings and compares semantic-level features vs. low-level and intermediate-level features present in the painting. Then, we investigate the question “Who influenced this artist?” by looking at his masterpieces and comparing them to others. We pose this interesting question as a knowledge discovery problem. For this purpose, we investigated several painting-similarity and artist-similarity measures. As a result, we provide a visualization of artists (Map of Artists) based on the similarity between their works

I first saw this in a tweet by yarapavan.

August 15, 2014

our new robo-reader overlords

Filed under: Artificial Intelligence,Machine Learning,Security — Patrick Durusau @ 6:18 pm

our new robo-reader overlords by Alan Jacobs.

After you read this post by Jacobs, be sure to spend time with Flunk the robo-graders by Les Perelman (quoted by Jacobs).

Both raise the question: what sort of writing can be taught by algorithms that have no understanding of writing?

In a very real sense, the outcome can only be writing that meets but does not exceed what has been programmed into an algorithm.

That is frightening enough for education, but if you are relying on AI or machine learning for intelligence analysis, your stakes may be far higher.

To be sure, software can recognize "send the atomic bomb triggers by Federal Express to this address…," or at least I hope that is within the range of current software. But what if the message is: "The destroyer of worlds will arrive next week." Alert? Yes/No? What if it were written in Sanskrit?
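
A toy illustration of the gap (entirely my own, not any real monitoring system): a watch-list matcher catches the literal phrasing and sails right past the paraphrase.

  import re

  # Naive watch-list matcher: flags literal phrases, understands nothing.
  WATCH_PATTERNS = [
      r"atomic bomb trigger",
      r"detonator",
  ]

  def flag(message: str) -> bool:
      """Return True if any watch-list pattern appears verbatim."""
      return any(re.search(p, message, re.IGNORECASE) for p in WATCH_PATTERNS)

  print(flag("Send the atomic bomb triggers by Federal Express to this address."))  # True
  print(flag("The destroyer of worlds will arrive next week."))                     # False
  # A message in Sanskrit, or in any unanticipated phrasing, fares no better.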

I think computers, along with AI and machine learning, can be valuable tools, but not if they are setting the standard for review. At least not if you don’t want to dumb down writing and national security intelligence to the level of an algorithm.

I first saw this in a tweet by James Schirmer.

August 8, 2014

ContentMine

Filed under: Artificial Intelligence,Data Mining,Machine Learning — Patrick Durusau @ 6:45 pm

ContentMine

From the webpage:

The ContentMine uses machines to liberate 100,000,000 facts from the scientific literature.

We believe that Content Mining has huge potential to make knowledge available to everyone (including machines). This can enable new and exciting research, technology developments such as in Artificial Intelligence, and opportunities for wealth creation.

Manual content-mining has been routine for 150 years, but many publishers feel threatened by machine-content-mining. It’s certainly disruptive technology but we argue that if embraced wholeheartedly it will take science forward massively and create completely new opportunities. Nevertheless many mainstream publishers have actively campaigned against it.

Although content mining can be done without breaking current laws, the borderline between legal and illegal is usually unclear. So we campaign for reform, and we work on the basis that anything that is legal for a human should also be legal for a machine.

* The right to read is the right to mine *

Well, when I went to see what facts had been discovered:

We don’t have any facts yet – there should be some here very soon!

Well, at least now you have the URL and the pitch. Curious when facts are going to start to appear?

I’m not entirely comfortable with the term "facts" because it is usually used to put some particular "fact" off-limits from discussion or debate. "It’s a fact that…" (you fill in the blank). To disagree with such a statement makes the questioner appear stupid, obstinate or even rude.

That is, of course, the purpose of any statement beginning "It’s a fact that…": it is intended to end debate on that "fact" and to exclude anyone who continues to disagree.

While we wait for "facts" to appear at ContentMine, research the history of claims about various "facts." You can start with some "facts" about beavers.
