## DeepView: Computational Tools for Chess Spectatorship [Knowledge Retention?]

October 19th, 2014

DeepView: Computational Tools for Chess Spectatorship by Greg Borenstein, Prof. Kevin Slavin, Grandmaster Maurice Ashley.

From the post:

DeepView is a suite of computational and statistical tools meant to help novice viewers understand the drama of a high-level chess match through storytelling. Good drama includes characters and situations. We worked with GM Ashley to identify the elements of individual players’ styles and the components of an ongoing match that computation could analyze to help bring chess to life. We gathered an archive of more than 750,000 games from chessgames.com including extensive collections of games played by each of the grandmasters in the tournament. We then used the Stockfish open source chess engine to analyze the details of each move within these games. We combined these results into a comprehensive statistical analysis that provided us with meaningful and compelling information to pass on to viewers and to provide to chess commentators to aid in their work.

In addition to making chess more accessible to novice viewers, we believe that providing access to these kinds of statistics will change how expert players play chess, allowing them to prepare differently for specific opponents and to detect limitations or quirks in their own play.

Further, we believe that the techniques used here could be applied to other sports and games as well. Specifically, we wonder why traditional sports broadcasting doesn’t use measures of significance to filter or interpret the statistics they show to their viewers. For example, is a batter’s RBI count actually informative without knowing whether it is typical or extraordinary compared to other players? And when it comes to eSports with their exploding viewer population, this approach points to rich possibilities for improving the spectator experience and translating complex gameplay so it is more legible for novice fans.
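The per-move engine analysis at the heart of that pipeline is easy to sketch. Below is a hedged example using the python-chess library to have a local Stockfish binary score each position in a game; the library choice, PGN path and engine path are my assumptions, since the post does not say how the DeepView team drove Stockfish.

```python
# A hedged sketch of engine-scoring every move of a game with Stockfish,
# using python-chess's engine API. File and binary paths are placeholders.
import chess.engine
import chess.pgn

engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")

with open("games.pgn") as pgn:
    game = chess.pgn.read_game(pgn)

board = game.board()
for move in game.mainline_moves():
    # evaluate the position before the move, then play the move
    info = engine.analyse(board, chess.engine.Limit(depth=15))
    print(board.fullmove_number, board.san(move), info["score"])
    board.push(move)

engine.quit()
```

Aggregating those evaluations across hundreds of thousands of games is where the statistical (and storytelling) work begins.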

A deeply intriguing notion of mining data to extract patterns that are fashioned into a narrative by an expert.

Participants in the games were not called upon to make explicit the tacit knowledge they unconsciously rely upon to make decisions. Instead, decisions (moves) were collated into patterns and an expert recognized those patterns to make the tacit knowledge explicit.

Outside of games would this be a viable tactic for knowledge retention? Not asking employees/experts but recording their decisions and mining those for later annotation?

## Another Greek update: Forty-six more manuscripts online!

October 18th, 2014

Another Greek update: Forty-six more manuscripts online! by Sarah J. Biggs.

From the post:

It’s time for a monthly progress report on our Greek Manuscripts Digitisation Project, generously funded by the Stavros Niarchos Foundation and many others, including the A. G. Leventis Foundation, Sam Fogg, the Sylvia Ioannou Foundation, the Thriplow Charitable Trust, and the Friends of the British Library. There are some very exciting items in this batch, most notably the famous Codex Crippsianus (Burney MS 95), the most important manuscript for the text of the Minor Attic Orators; Egerton MS 942, a very fine copy of Demosthenes; a 19th-century poem and prose narrative on the Greek Revolution (Add MS 35072); a number of collections of 16th- and 17th-century complimentary verses in Greek and Latin dedicated to members of the Royal Family; and an exciting array of classical and patristic texts.

Texts that helped to shape the world we experience today. As did others but Greek texts played a special role in European history.

You can find ways to support the Greek Digitization project here.

I prefer, ahem, other material and for that you can consult:

Which list 1111 (eleventy-one-one?) manuscripts. Quite impressive.

Do consider supporting the British Library in this project and others. Some profess interest in sharing our common heritage. The British Library is sharing our common heritage. Your choice.

## Tupleware: Redefining Modern Analytics

October 18th, 2014

Tupleware: Redefining Modern Analytics by Andrew Crotty and Alexander Galakatos.

From the post:

Up until a decade ago, most companies sufficed with simple statistics and offline reporting, relying on traditional database management systems (DBMSs) to meet their basic business intelligence needs. This model prevailed in a time when data was small and analysis was simple.

But data has gone from being scarce to superabundant, and now companies want to leverage this wealth of information in order to make smarter business decisions. This data explosion has given rise to a host of new analytics platforms aimed at flexible processing in the cloud. Well-known systems like Hadoop and Spark are built upon the MapReduce paradigm and fulfill a role beyond the capabilities of traditional DBMSs. However, these systems are engineered for deployment on hundreds or thousands of cheap commodity machines, but non-tech companies like banks or retailers rarely operate clusters larger than a few dozen nodes. Analytics platforms, then, should no longer be built specifically to accommodate the bottlenecks of large cloud deployments, focusing instead on small clusters with more reliable hardware.

Furthermore, computational complexity is rapidly increasing, as companies seek to incorporate advanced data mining and probabilistic models into their business intelligence repertoire. Users commonly express these types of tasks as a workflow of user-defined functions (UDFs), and they want the ability to compose jobs in their favorite programming language. Yet, existing analytics systems fail to adequately serve this new generation of highly complex, UDF-centric jobs, especially when companies have limited resources or require sub-second response times. So what is the next logical step?

It’s time for a new breed of systems. In particular, a platform geared toward modern analytics needs the ability to (1) concisely express complex workflows, (2) optimize specifically for UDFs, and (3) leverage the characteristics of the underlying hardware. To meet these requirements, the Database Group at Brown University is developing Tupleware, a parallel high-performance UDF processing system that considers the data, computations, and hardware together to produce results as efficiently as possible.

The article is the “lite” introduction to Tupleware. You may be more interested in:

Abstract:

There is a fundamental discrepancy between the targeted and actual users of current analytics frameworks. Most systems are designed for the data and infrastructure of the Googles and Facebooks of the world—petabytes of data distributed across large cloud deployments consisting of thousands of cheap commodity machines. Yet, the vast majority of users operate clusters ranging from a few to a few dozen nodes, analyze relatively small datasets of up to several terabytes, and perform primarily compute-intensive operations. Targeting these users fundamentally changes the way we should build analytics systems.

This paper describes the design of Tupleware, a new system specifically aimed at the challenges faced by the typical user. Tupleware’s architecture brings together ideas from the database, compiler, and programming languages communities to create a powerful end-to-end solution for data analysis. We propose novel techniques that consider the data, computations, and hardware together to achieve maximum performance on a case-by-case basis. Our experimental evaluation quantifies the impact of our novel techniques and shows orders of magnitude performance improvement over alternative systems.

Subject to the “in memory” limitation, speedups of 10 – 6,000x over other systems are nothing to dismiss without further consideration.

Interesting to see that “medium” data now reaches into the terabyte range.

Are “mini-clouds” in the offing that provide specialized processing models?

The Tupleware website.

I first saw this in a post by Danny Bickson, Tupleware.

## Data Sources for Cool Data Science Projects: Part 1

October 18th, 2014

Data Sources for Cool Data Science Projects: Part 1

From the post:

At The Data Incubator, we run a free six week data science fellowship to help our Fellows land industry jobs. Our hiring partners love considering Fellows who don’t mind getting their hands dirty with data. That’s why our Fellows work on cool capstone projects that showcase those skills. One of the biggest obstacles to successful projects has been getting access to interesting data. Here are a few cool public data sources you can use for your next project:

Nothing surprising or unfamiliar but at least you know what the folks at Data Incubator think is “cool” and/or important. Intell is never a waste.

Enjoy!

## Introducing Pyleus: An Open-source Framework for Building Storm Topologies in Pure Python

October 18th, 2014

From the post:

Yelp loves Python, and we use it at scale to power our websites and process the huge amount of data we produce.

Pyleus is a new open-source framework that aims to do for Storm what mrjob, another open-source Yelp project, does for Hadoop: let developers process large amounts of data in pure Python and iterate quickly, spending more time solving business-related problems and less time concerned with the underlying platform.

First, a brief introduction to Storm. From the project’s website, “Apache Storm is a free and open source distributed realtime computation system. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing.”

A Pyleus topology consists of, at minimum, a YAML file describing the structure of the topology, declaring each component and how tuples flow between them. The pyleus command-line tool builds a self-contained Storm JAR which can be submitted to any Storm cluster.
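To give a feel for the “pure Python” claim, here is a hedged sketch of a Pyleus bolt. The class and method names follow the style of the Pyleus README (SimpleBolt, process_tuple, run), but treat them as assumptions and check the project documentation for the exact API; the YAML file that wires spouts and bolts together is not shown.

```python
# A hedged sketch of a Pyleus bolt in pure Python. A separate YAML file
# (not shown) declares this component and how tuples flow into it; the
# `pyleus build` command then packages everything into a Storm JAR.
from pyleus.storm import SimpleBolt

class WordCountBolt(SimpleBolt):

    def initialize(self):
        # called once when the bolt starts up
        self.counts = {}

    def process_tuple(self, tup):
        # keep a running count per word and emit the updated pair downstream
        word = tup.values[0]
        self.counts[word] = self.counts.get(word, 0) + 1
        self.emit((word, self.counts[word]), anchors=[tup])

if __name__ == '__main__':
    WordCountBolt().run()
```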

Since the U.S. baseball league championships are over, something to occupy you over the weekend.

## Update with 162 new papers to Deeplearning.University Bibliography

October 17th, 2014

From the post:

Added 162 new Deep Learning papers to the Deeplearning.University Bibliography, if you want to see them separate from the previous papers in the bibliography the new ones are listed below. There are many highly interesting papers, a few examples are:

1. Deep neural network based load forecast – forecasts of electricity prediction
2. The relation of eye gaze and face pose: Potential impact on speech recognition – combining speech recognition with facial expression
3. Feature Learning from Incomplete EEG with Denoising Autoencoder – Deep Learning for Brain Computer Interfaces

Underneath are the 162 new papers, enjoy!

(Complete Bibliography – at Deeplearning.University Bibliography)

Disclaimer: we’re so far only covering (a subset of) 2014 deep learning papers, so still far from a complete bibliography, but our goal is to come close eventually

Best regards,

Amund Tveit (Memkite Team)

You could find all these papers by search, if you knew what search terms to use.

This bibliography is a reminder of the power of curated data. The categories, and the grouping of papers into them, are definitely a value-add. Search doesn’t have those, in case you haven’t noticed.

## DevCenter 1.2 delivers support for Cassandra 2.1 and query tracing

October 17th, 2014

From the post:

We’re very pleased to announce the availability of DataStax DevCenter 1.2, which you can download now. We’re excited to see how DevCenter has already become the de facto query and development tool for those of you working with Cassandra and DataStax Enterprise, and now with version 1.2, we’ve added additional support and options to make your development work even easier.

Version 1.2 of DevCenter delivers full support for the many new features in Apache Cassandra 2.1, including user defined types and tuples. DevCenter’s built-in validations, quick fix suggestions, the updated code assistance engine and the new snippets can greatly simplify your work with all the new features of Cassandra 2.1.

The download page offers the DataStax Sandbox if you are interested in a VM version.

Enjoy!

## BBC Genome Project

October 17th, 2014

BBC Genome Project

From the post:

This site contains the BBC listings information which the BBC printed in Radio Times between 1923 and 2009. You can search the site for BBC programmes, people, dates and Radio Times editions.

We hope it helps you find that long forgotten BBC programme, research a particular person or browse your own involvement with the BBC.

This is a historical record of both the planned output and the BBC services of any given time. It should be viewed in this context and with the understanding that it reflects the attitudes and standards of its time – not those of today.

Join in

You can join in and become part of the community that is improving this resource. As a result of the scanning process there are lots of spelling mistakes and punctuation errors and you can edit the entries to accurately reflect the magazine entry. You can also tell us when the schedule changed and we will hold on to that information for the next stage of this project.

What a delightful resource to find on a Friday!

True, no links to the original programs but perhaps someday?

Enjoy!

I first saw this in a tweet by Tom Loosemore.

Update: Genome: behind the scenes by Andy Armstrong.

From the post:

In October 2011 Helen Papadopoulos wrote about the Genome project – a mammoth effort to digitise an issue of the Radio Times from every week between 1923 and 2009 and make searchable programme listings available online.

Helen expected there to be between 3 and 3.5 million programme entries. Since then the number has grown to 4,423,653 programmes from 4,469 issues. You can now browse and search all of them at http://genome.ch.bbc.co.uk/

Back in 2011 the process of digitising the scanned magazines was well advanced and our thoughts were turning to how to present the archive online. It’s taken three years and a few prototypes to get us to our first public release.

Andy gives you the backend view of the BBC Genome Project.

I first saw this in a tweet by Jem Stone.

## Mobile encryption could lead to FREEDOM

October 17th, 2014

FBI Director: Mobile encryption could lead us to ‘very dark place’ by Charlie Osborne.

Oops! Looks like I mis-quoted the headline!

Charlie got the FBI Director’s phrase right but I wanted to emphasize the cost of the FBI’s position.

The choices really are that stark: You can have encryption + freedom or back doors + government surveillance.

Director Comey argues that mechanisms are in place to make sure the government obeys the law. I concede there are mechanisms with that purpose, but the reason we are having this national debate is that the government chose to not use those mechanisms.

The government has not followed its own rules for years, so why should we accept its word that it won’t do so again?

The time has come to “go dark,” not just on mobile devices but all digital communications. It won’t be easy at first but products will be created to satisfy the demand to “go dark.”

Any artists in the crowd? Will need buttons for “Going Dark,” “Go Dark,” and “Gone Dark.”

BTW, read Charlie’s post in full to get a sense of the arguments the FBI will be making against encryption.

PS: Charlie mentions that Google and Apple will be handing encryption keys over to customers. That means that the 5th Amendment protections against self-incrimination come into play. You can refuse to hand over the keys!

There is an essay on the 5th Amendment and encryption at: The Fifth Amendment, Encryption, and the Forgotten State Interest by Dan Terzian. 61 UCLA L. Rev. Disc. 298 (2014).

Abstract:

This Essay considers how the Fifth Amendment’s Self Incrimination Clause applies to encrypted data and computer passwords. In particular, it focuses on one aspect of the Fifth Amendment that has been largely ignored: its aim to achieve a fair balance between the state’s interest and the individual’s. This aim has often guided courts in defining the Self Incrimination Clause’s scope, and it should continue to do so here. With encryption, a fair balance requires permitting the compelled production of passwords or decrypted data in order to give state interests, like prosecution, an even chance. Courts should therefore interpret Fifth Amendment doctrine in a manner permitting this compulsion.

I hope Terzian’s position never prevails, but you do need to know the arguments that will be made in support of it.

## COLD 2014 Consuming Linked Data

October 16th, 2014

## Free Public Access to Federal Materials on Guide to Law Online [Browsing, No Search]

October 16th, 2014

From the post:

Through an agreement with the Library of Congress, the publisher William S. Hein & Co., Inc. has generously allowed the Law Library of Congress to offer free online access to historical U.S. legal materials from HeinOnline. These titles are available through the Library’s web portal, Guide to Law Online: U.S. Federal, and include:

I should be happy but then I read:

These collections are browseable. For example, to locate the 1982 version of the Bankruptcy code in Title 11 of the U.S. Code you could select the year (1982) and then Title number (11) to retrieve the material. (emphasis added)

Err, actually it should say: These collections are browseable only. No search within or across the collections.

Here is an example:

If you expand volume 542 you will see:

Look! There is Intel vs. AMD, let’s look at that one!

Did I just overlook a search box?

I checked the others and you can too.

I did find one that was small enough (less than 20 pages I suppose) to have a search function:

So, let’s search for something that ought to be in the CFR general provisions, like “department:”

The result?

Actually that is an abbreviation of the error message. Waste of space to show more.

To summarize, the Library of Congress has arranged for all of us to have browseable access but no search to:

• United States Code 1925-1988 (includes content up to 1993)
  • From Guide to Law Online: United States Law
• United States Reports v. 1-542 (1754-2004)
  • From Guide to Law Online: United States Judiciary
• Code of Federal Regulations (1938-1995)
  • From Guide to Law Online: Executive
• Federal Register v. 1-58 (1936-1993)
  • From Guide to Law Online: Executive

Hundreds of thousands of pages of some of the most complex documents in history and no searching.

If that’s helping us, I don’t think we can afford much more help from the Library of Congress. That’s a hard thing for me to say because in the vast majority of cases I really like and support the Library of Congress (aside from the robber baron refugees holed up in the Copyright Office).

Just so I don’t end on a negative note, I have a suggestion to correct this situation:

Give Thomson Reuters (I knew them as West Publishing Company) or LexisNexis a call. Either one is capable of a better solution than you have with William S. Hein & Co., Inc. Either one has “related” products it could tastefully suggest along with search results.

## Storyline Ontology

October 16th, 2014

Storyline Ontology

From the post:

The News Storyline Ontology is a generic model for describing and organising the stories news organisations tell. The ontology is intended to be flexible to support any given news or media publisher’s approach to handling news stories. At the heart of the ontology, is the concept of Storyline. As a nuance of the English language the word ‘story’ has multiple meanings. In news organisations, a story can be an individual piece of content, such as an article or news report. It can also be the editorial view on events occurring in the world.

The journalist pulls together information, facts, opinion, quotes, and data to explain the significance of world events and their context to create a narrative. The event is an award being received; the story is the triumph over adversity and personal tragedy of the victor leading up to receiving the reward (and the inevitable fall from grace due to drugs and sexual peccadillos). Or, the event is a bombing outside a building; the story is an escalating civil war or a gas mains fault due to cost cutting. To avoid this confusion, the term Storyline has been used to remove the ambiguity between the piece of creative work (the written article) and the editorial perspective on events.

I know, it’s RDF. Still, the ontology itself, aside from the RDF cruft, represents a thought-out and shared view of story development by major news producers. It is important for that reason if no other.

And you can use it as the basis for developing or integrating other story development ontologies.

Just as the post acknowledges:

As news stories are typically of a subjective nature (one news publisher’s interpretation of any given news story may be different from another’s), Storylines can be attributed to some agent to provide this provenance.

the same is true for ontologies. Ready to claim credit/blame for yours?

## IBM Watson: How it Works [This is a real hoot!]

October 16th, 2014

Dibs on why “artificial intelligence” has failed, is failing, and will fail! (At least if you think “artificial intelligence” means reasoning like a human being.)

IBM describes the decision making process in humans as four steps:

1. Observe
2. Interpret and draw hypotheses
3. Evaluate which hypotheses are right or wrong
4. Decide based on the evaluation

Most of us learned those four steps or variations on them as part of research paper writing or introductions to science. And we have heard them repeated in a variety of contexts.

However, we also know that model of human “reasoning” is a fantasy. Most if not all of us claim to follow it but the truth about the vast majority of decision making has little to do with those four steps.

That’s not just a “blog opinion” but one that has been substantiated by years of research. Look at any chapter in Thinking, Fast and Slow by Daniel Kahneman and tell me how Watson’s four step process is a better explanation than the one you will find there.

One of my favorite examples was the impact of meal times on parole decisions in Israel. Shai Danziger, Jonathan Levav, and Liora Avnaim-Pesso, “Extraneous Factors in Judicial Decisions,” PNAS 108 (2011): 6889-92.

Abstract from Danziger:

Are judicial rulings based solely on laws and facts? Legal formalism holds that judges apply legal reasons to the facts of a case in a rational, mechanical, and deliberative manner. In contrast, legal realists argue that the rational application of legal reasons does not sufficiently explain the decisions of judges and that psychological, political, and social factors influence judicial rulings. We test the common caricature of realism that justice is “what the judge ate for breakfast” in sequential parole decisions made by experienced judges. We record the judges’ two daily food breaks, which result in segmenting the deliberations of the day into three distinct “decision sessions.” We find that the percentage of favorable rulings drops gradually from ≈65% to nearly zero within each decision session and returns abruptly to ≈65% after a break. Our findings suggest that judicial rulings can be swayed by extraneous variables that should have no bearing on legal decisions.

If the rate of favorable parole rulings starts at 65% right after breakfast or lunch and dwindles to zero, I know when I want my case heard.

That is just one example from hundreds in Kahneman.

Watson lacks the irrationality necessary to “reason like a human being.”

(Note that Watson is only given simple questions. No questions about policy choices in long simmering conflicts. We save those for human beings.)

## GraphLab Create™ v1.0 Now Generally Available

October 16th, 2014

GraphLab Create™ v1.0 Now Generally Available by Johnnie Konstantas.

From the post:

It is with tremendous pride in this amazing team that I am posting on the general availability of version 1.0, our flagship product. This work represents a bar being set on usability, breadth of features and productivity possible with a machine learning platform.

What’s next, you ask? It’s easy to talk about all of our great plans for scale and administration but I want to give this watershed moment its due. Have a look at what’s new.

New features available in the GraphLab Create platform include:

• Predictive Services – Companies can build predictive applications quickly, easily, and at scale.  Predictive service deployments are scalable, fault-tolerant, and high performing, enabling easy integration with front-end applications. Trained models can be deployed on Amazon Elastic Compute Cloud (EC2) and monitored through Amazon CloudWatch. They can be queried in real-time via a RESTful API and the entire deployment pipeline is seen through a visual dashboard. The time from prototyping to production is dramatically reduced for GraphLab Create users.
• Deep Learning – These models are ideal for automatic learning of salient features, without human supervision, from data such as images. Combined with GraphLab Create image analysis tools, the Deep Learning package enables accurate and in-depth understanding of images and videos. The GraphLab Create image analysis package makes quick work of importing and preprocessing millions of images as well as numeric data. It is built on the latest architectures including Convolution Layer, Max, Sum, Average Pooling and Dropout. The available API allows for extensibility in building user custom neural networks. Applications include image classification, object detection and image similarity.
• Boosted Trees – With this feature, GraphLab adds support for this popular class of algorithms for robust and accurate regression and classification tasks.  With an out-of-core implementation, Boosted Trees in GraphLab Create can easily scale up to large datasets that do not fit into memory.

• Visualization – New dashboards allow users to visualize the status and health of offline jobs deployed in various environments including local, Hadoop Clusters and EC2.  Also part of GraphLab Canvas is the visualization of GraphLab SFrames and SGraphs, enabling users to explore tables, graphs, text and images, in a single interactive environment making feature engineering more efficient.

…(and more)

Rather than downloading the software, go to GraphLab Create™ Quick Start to generate a product key. Once the key is displayed on the webpage, GraphLab offers command-line code to set you up for installing GraphLab via pip. Quick and easy on Ubuntu 12.04.
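Once installed, the Boosted Trees feature mentioned above reduces to a few lines. This is a hedged sketch based on the GraphLab Create API as documented around the 1.0 release; the CSV file and the “churned” target column are invented placeholders.

```python
# A hedged sketch of training a boosted trees classifier in GraphLab Create.
# Data file and column names are placeholders, not a real dataset.
import graphlab as gl

data = gl.SFrame.read_csv("customers.csv")   # SFrame: out-of-core table
train, test = data.random_split(0.8)

model = gl.boosted_trees_classifier.create(train, target="churned")

print(model.evaluate(test))                  # accuracy, confusion matrix, ...
```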

Next stop: The Five-Line Recommender, Explained by Alice Zheng.

Enjoy!

## Bloom Filters

October 15th, 2014

Bloom Filters by Jason Davies.

From the post:

Everyone is always raving about bloom filters. But what exactly are they, and what are they useful for?

Very straightforward explanation along with an interactive demo. The applications section will immediately suggest how Bloom filters could be used when querying.

There are other complexities; see the Bloom Filter entry at Wikipedia. But as a first-blush explanation, you will be hard pressed to find one as good as Jason’s.
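If you want something to poke at alongside Jason’s demo, here is a minimal Bloom filter sketch in Python. The size and number of hash functions are illustrative; real implementations size both from the expected item count and the acceptable false-positive rate.

```python
# A minimal Bloom filter: k salted hashes set k bits per item. Lookups can
# return false positives but never false negatives.
import hashlib

class BloomFilter:
    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # derive k bit positions from salted SHA-1 digests of the item
        for i in range(self.num_hashes):
            digest = hashlib.sha1(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("apple")
print("apple" in bf)   # True
print("banana" in bf)  # almost certainly False (false positives are possible)
```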

I first saw this in a tweet by Allen Day.

## How To Build Linked Data APIs…

October 15th, 2014

This is the second high signal-to-noise presentation I have seen this week! I am sure that streak won’t last but I will enjoy it as long as it does.

Resources for after you see the presentation: Hydra: Hypermedia-Driven Web APIs, JSON for Linking Data, and, JSON-LD 1.0.

Near the end of the presentation, Marcus quotes Phil Archer, W3C Data Activity Lead:

Which is an odd statement considering that JSON-LD 1.0, Section 7 (Data Model), reads in part:

JSON-LD is a serialization format for Linked Data based on JSON. It is therefore important to distinguish between the syntax, which is defined by JSON in [RFC4627], and the data model which is an extension of the RDF data model [RDF11-CONCEPTS]. The precise details of how JSON-LD relates to the RDF data model are given in section 9. Relationship to RDF.

And section 9. Relationship to RDF reads in part:

JSON-LD is a concrete RDF syntax as described in [RDF11-CONCEPTS]. Hence, a JSON-LD document is both an RDF document and a JSON document and correspondingly represents an instance of an RDF data model. However, JSON-LD also extends the RDF data model to optionally allow JSON-LD to serialize Generalized RDF Datasets. The JSON-LD extensions to the RDF data model are:…

Is JSON-LD “…a concrete RDF syntax…” where you can ignore RDF?

Not that I was ever a fan of RDF but standards should be fish or fowl and not attempt to be something in between.
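To make the “is it RDF or not” question concrete, here is a tiny JSON-LD document written as a Python dict, with the RDF reading spelled out in the comments. The example data is invented; the optional expansion step uses the pyld library, which is one of several JSON-LD processors.

```python
# A tiny JSON-LD document as a Python dict. Read as plain JSON it is just a
# name; read as JSON-LD, the @context maps "name" to an IRI and the document
# corresponds to the single RDF triple:
#   <http://example.org/person/1> <http://schema.org/name> "Alice" .
doc = {
    "@context": {"name": "http://schema.org/name"},
    "@id": "http://example.org/person/1",
    "name": "Alice",
}

# If the pyld library is installed, expansion rewrites the shorthand keys
# into full IRIs, which is exactly where the "just JSON" reading runs out.
try:
    from pyld import jsonld
    print(jsonld.expand(doc))
except ImportError:
    print(doc)
```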

## 5 Machine Learning Areas You Should Be Cultivating

October 15th, 2014

5 Machine Learning Areas You Should Be Cultivating by Jason Brownlee.

From the post:

You want to learn machine learning to have more opportunities at work or to get a job. You may already be working as a data scientist or machine learning engineer and looking to improve your skills.

It is about as easy to pigeonhole machine learning skills as it is programming skills (you can’t).

There is a wide array of tasks that require some skill in data mining and machine learning in business from data analysis type work to full systems architecture and integration.

Nevertheless there are common tasks and common skills that you will want to develop, just like you could suggest for an aspiring software developer.

In this post we will look at 5 key areas where you might want to develop skills and the types of activities that you could take on to practice in those areas.

Jason has a number of useful suggestions for the five areas and you will profit from taking his advice.

At the same time, I would be keeping a notebook of assumptions or exploits that are possible with every technique or process that you learn. Results and data will be presented to you as though both are clean. It is your responsibility to test that presentation.

## Concatenative Clojure

October 15th, 2014

Concatenative Clojure by Brandon Bloom.

Summary:

Brandon Bloom introduces Factor and demonstrates Factjor – a concatenative DSL – and DomScript – a DOM library written in ClojureScript – in the context of concatenative programming.

Brandon compares and contrasts applicative and concatenative programming languages, concluding with this table:

Urges viewers to explore Factjor and to understand the differences between applicative and concatenative programming languages. It is a fast moving presentation that will require viewing more than once!

Watch for new developments at: https://github.com/brandonbloom

I first saw this in a tweet by William Byrd.

## Google details new “Poodle” bug…

October 15th, 2014

Google details new “Poodle” bug, making browsers susceptible to hacking by Jonathan Vanian.

From the post:

Google’s security team detailed today a new bug that takes advantage of a design flaw in SSL version 3.0, a security protocol created by Netscape in the mid 1990s. The researchers called it a Padding Oracle on Downgraded Legacy Encryption bug, or POODLE.

Although the protocol is old, Google said that “nearly all browsers support it” and it’s available for hackers to exploit. Even though many modern-day websites use the TLS security protocol (essentially, the next-generation SSL) as their means of encrypting data for a secure network connection between a browser and a website, things can run amok if the connection goes down for some reason.

See Jonathan’s post for more “Poodle” details.
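For what mitigation looks like in practice, here is a hedged sketch of a Python client refusing to speak SSL 3.0 at all, which removes the downgrade path POODLE depends on. Server operators would make the equivalent change in their TLS termination configuration.

```python
# Refuse SSLv2/SSLv3 from the client side; negotiate the best remaining
# protocol (TLS). The context can be passed to http.client.HTTPSConnection
# or urllib.request.urlopen via their `context` argument.
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
context.options |= ssl.OP_NO_SSLv2
context.options |= ssl.OP_NO_SSLv3   # closes the POODLE downgrade path
```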

Suggestions for a curated and relatively comprehensive collection of security bugs as they are discovered? I ask because I follow a couple of fairly active streams but I haven’t found one that I would call “curated,” in the sense that each bug is reported once and only once, with related material linked to it.

Is it just me or would others find that to be a useful resource?

## Inductive Graph Representations in Idris

October 15th, 2014

Inductive Graph Representations in Idris by Michael R. Bernstein.

An early exploration based on Inductive Graphs and Functional Graph Algorithms by Martin Erwig.

Abstract (of Erwig’s paper):

We propose a new style of writing graph algorithms in functional languages which is based on an alternative view of graphs as inductively defined data types. We show how this graph model can be implemented efficiently and then we demonstrate how graph algorithms can be succinctly given by recursive function definitions based on the inductive graph view. We also regard this as a contribution to the teaching of algorithms and data structures in functional languages since we can use the functional-graph algorithms instead of the imperative algorithms that are dominant today.
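For readers who have not met the inductive view before, here is a rough Python rendering of the idea; proper inductive types (as in Idris, or Haskell’s FGL) make it precise, so treat this only as an illustration of the recursion pattern.

```python
# A graph is either empty, or a "context" (a node plus its incident edges)
# joined onto a smaller graph. Functions then recurse by peeling contexts.
def match(node, graph):
    """Split an adjacency-set graph (dict of node -> set of neighbours)
    into the context of `node` and the remaining, smaller graph."""
    context = (node, set(graph.get(node, ())))
    rest = {n: {m for m in nbrs if m != node}
            for n, nbrs in graph.items() if n != node}
    return context, rest

def node_count(graph):
    """Inductive style: the empty graph has 0 nodes; otherwise peel off one
    context and recurse on the rest."""
    if not graph:
        return 0
    node = next(iter(graph))
    _, rest = match(node, graph)
    return 1 + node_count(rest)

g = {"a": {"b", "c"}, "b": {"a"}, "c": {"a"}}
print(node_count(g))  # 3
```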

You can follow Michael at: @mrb_bk or https://github.com/mrb or his blog: http://michaelrbernste.in/.

More details on Idris: A Language With Dependent Types.

## Cryptic genetic variation in software:…

October 14th, 2014

From the post:

In genetics, cryptic genetic variation means that a genome can contain mutations whose phenotypic effects are invisible because they are suppressed or buffered, but under rare conditions they become visible and subject to selection pressure.

In software code, engineers sometimes also face the nightmare of a bug in one routine that has no visible effect because of a compensatory bug elsewhere. You fix the other routine, and suddenly the first routine starts failing for an apparently unrelated reason. Epistasis sucks.

I’ve just found an example in our code, and traced the origin of the problem back 41 years to the algorithm’s description in a 1973 applied mathematics paper. The algorithm — for sampling from a Gaussian distribution — is used worldwide, because it’s implemented in the venerable RANLIB software library still used in lots of numerical codebases, including GNU Octave. It looks to me that the only reason code has been working is that a compensatory “mutation” has been selected for in everyone else’s code except mine.

…

A bug hunting story to read and forward! Sean just bagged a forty-one (41) year old bug. What’s the oldest bug you have ever found?

When you reach the crux of the problem, you will understand why ambiguous, vague, incomplete and poorly organized standards annoy me to no end.

No guarantees of unambiguous results but if you need extra eyes on IT standards you know where to find me.

I first saw this in a tweet by Neil Saunders.

## Classifying Shakespearean Drama with Sparse Feature Sets

October 14th, 2014

Classifying Shakespearean Drama with Sparse Feature Sets by Douglas Duhaime.

From the post:

In her fantastic series of lectures on early modern England, Emma Smith identifies an interesting feature that differentiates the tragedies and comedies of Elizabethan drama: “Tragedies tend to have more streamlined plots, or less plot—you know, fewer things happening. Comedies tend to enjoy a multiplication of characters, disguises, and trickeries. I mean, you could partly think about the way [tragedies tend to move] towards the isolation of a single figure on the stage, getting rid of other people, moving towards a kind of solitude, whereas comedies tend to end with a big scene at the end where everybody’s on stage” (6:02-6:37).

The distinction Smith draws between tragedies and comedies is fairly intuitive: tragedies isolate the poor player that struts and frets his hour upon the stage and then is heard no more. Comedies, on the other hand, aggregate characters in order to facilitate comedic trickery and tidy marriage plots. While this discrepancy seemed promising, I couldn’t help but wonder whether computational analysis would bear out the hypothesis. Inspired by the recent proliferation of computer-assisted genre classifications of Shakespeare’s plays—many of which are founded upon high dimensional data sets like those generated by DocuScope—I was curious to know if paying attention to the number of characters on stage in Shakespearean drama could help provide additional feature sets with which to carry out this task.

A quick reminder that not all text analysis is concerned with 140 character strings.
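If you want to try Douglas’s hypothesis for yourself, the shape of the experiment is simple. Here is a hedged scikit-learn sketch with invented per-play features (characters on stage) and labels; the real work, of course, is extracting those counts from the play texts.

```python
# A toy sketch of classifying plays from sparse stage-population features.
# All numbers and labels below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# one row per play: [mean characters on stage per scene, max on stage]
X = np.array([[4.2, 9], [3.1, 6], [5.8, 14], [6.3, 17], [2.9, 5], [5.1, 12]])
y = np.array([0, 0, 1, 1, 0, 1])  # 0 = tragedy, 1 = comedy

model = LogisticRegression().fit(X, y)
print(model.predict([[5.5, 13]]))  # label for a new, crowded play
```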

Do you prefer the post’s version, where every letter in “high dimensional” is a hyperlink with an unknown target, or a fuller listing:

Allison, Sarah, and Ryan Heuser, Matthew Jockers, Franco Moretti, Michael Witmore. Quantitative Formalism: An Experiment

Jockers, Matthew. Machine-Classifying Novels and Plays by Genre

Hope, Jonathan and Michael Witmore. “The Hundredth Psalm to the Tune of ‘Green Sleeves’”: Digital Approaches to Shakespeare’s Language of Genre

Hope, Jonathan. Shakespeare by the numbers: on the linguistic texture of the Late Plays

Hope, Jonathan and Michael Witmore. The Very Large Textual Object: A Prosthetic Reading of Shakespeare

Lenthe, Victor. Finding the Sherlock in Shakespeare: some ideas about prose genre and linguistic uniqueness

Stumpf, Mike. How Quickly Nature Falls Into Revolt: On Revisiting Shakespeare’s Genres

Stumpf, Mike. This Thing of Darkness (Part III)

Tootalian, Jacob A. Shakespeare, Without Measure: The Rhetorical Tendencies of Renaissance Dramatic Prose

Ullyot, Michael. Encoding Shakespeare

Witmore, Michael. A Genre Map of Shakespeare’s Plays from the First Folio (1623)

Witmore, Michael. Shakespeare Out of Place?

Witmore, Michael. Shakespeare Quarterly 61.3 Figures

Witmore, Michael. Visualizing English Print, 1530-1800, Genre Contents of the Corpus

Decompiling Shakespeare (Site is down. It was also down when the Wayback Machine tried to archive the site in July of 2014)

I prefer the longer listing.

If you are interested in Shakespeare, Folger Digital Texts has free XML and PDF versions of his work.

I first saw this in a tweet by Gregory Piatetsky.

## RNeo4j: Neo4j graph database combined with R statistical programming language

October 14th, 2014

From the description:

RNeo4j combines the power of a Neo4j graph database with the R statistical programming language to easily build predictive models based on connected data. From calculating the probability of friends of friends connections to plotting an adjacency heat map based on graph analytics, the RNeo4j package allows for easy interaction with a Neo4j graph database.

Nicole is the author of the RNeo4j R package. Don’t be dismayed by the “What is a Graph” and “What is R” in the presentation outline. Mercifully only three minutes followed by a rocking live coding demonstration of the package!

Beyond Neo4j and R, use this webinar as a standard for the useful content that should appear in a webinar!

## How designers prototype at GDS

October 14th, 2014

How designers prototype at GDS by Rebecca Cottrell.

From the post:

All of the designers at GDS can code or are learning to code. If you’re a designer who has used prototyping tools like Axure for a large part of your professional career, the idea of prototyping in code might be intimidating. Terrifying, even.

I’m a good example of that. When I joined GDS I felt intimidated by the idea of using Terminal and things like Git and GitHub, and just the perceived slowness of coding in HTML.

At first I felt my workflow had slowed down significantly, but the reason for that was the learning curve involved – I soon adapted and got much faster.

GDS has lots of tools (design patterns, code snippets, front-end toolkit) to speed things up. Sharing what I learned in the process felt like a good idea to help new designers get to grips with how we work.

Not a rigid set of prescriptions but experience at prototyping and pointers to other resources. Whether you have a current system of prototyping or not, you are very likely to gain a tip or two from this post.

I first saw this in a tweet by Ben Terrett.

## ADW (Align, Disambiguate and Walk) [Semantic Similarity]

October 14th, 2014

From the webpage:

This package provides a Java implementation of ADW, a state-of-the-art semantic similarity approach that enables the comparison of lexical items at different lexical levels: from senses to texts. For more details about the approach please refer to: http://wwwusers.di.uniroma1.it/~navigli/pubs/ACL_2013_Pilehvar_Jurgens_Navigli.pdf

The abstract for the paper reads:

Semantic similarity is an essential component of many Natural Language Processing applications. However, prior methods for computing semantic similarity often operate at different levels, e.g., single words or entire documents, which requires adapting the method for each data type. We present a unified approach to semantic similarity that operates at multiple levels, all the way from comparing word senses to comparing text documents. Our method leverages a common probabilistic representation over word senses in order to compare different types of linguistic data. This unified representation shows state-of-the-art performance on three tasks: semantic textual similarity, word similarity, and word sense coarsening.

The strength of this approach is the use of multiple levels of semantic similarity. It relies on WordNet but the authors promise to extend their approach to named entities and other tokens not appearing in WordNet (like your company or industry’s internal vocabulary).

The bibliography of the paper cites much of the recent work in this area so that will be an added bonus for perusing the paper.

I first saw this in a tweet by Gregory Piatetsky.

## The Dirty Little Secret of Cancer Research

October 13th, 2014

The Dirty Little Secret of Cancer Research by Jill Neimark.

From the post:

Across different fields of cancer research, up to a third of all cell lines have been identified as imposters. Yet this fact is widely ignored, and the lines continue to be used under their false identities. As recently as 2013, one of Ain’s contaminated lines was used in a paper on thyroid cancer published in the journal Oncogene.

“There are about 10,000 citations every year on false lines—new publications that refer to or rely on papers based on imposter (human cancer) cell lines,” says geneticist Christopher Korch, former director of the University of Colorado’s DNA Sequencing Analysis & Core Facility. “It’s like a huge pyramid of toothpicks precariously and deceptively held together.”

For all the worry about “big data,” where is the concern over “big bad data?”

Or is “big data” too big for correctness of the data to matter?

Once you discover that a paper is based on “imposter (human cancer) cell lines,” how do you pass that information along to anyone who attempts to cite the article?

In other words, where do you write down that data about the paper, where the paper is the subject in question?

And how do you propagate that data across a universe of citations?

The post ends on a high note of current improvements but it is far from settled how to prevent reliance on compromised research.

I first saw this in a tweet by Dan Graur.

## Scrape the Gibson: Python skills for data scrapers

October 13th, 2014

Scrape the Gibson: Python skills for data scrapers by Brian Abelson.

From the post:

Two years ago, I learned I had superpowers. Steve Romalewski was working on some fascinating analyses of CitiBike locations and needed some help scraping information from the city’s data portal. Cobbling together the little I knew about R, I wrote a simple scraper to fetch the json files for each bike share location and output it as a csv. When I opened the clean data in Excel, the feeling was tantamount to this scene from Hackers:

Ever since then I’ve spent a good portion of my life scraping data from websites. From movies, to bird sounds, to missed connections, and john boards (don’t ask, I promise it’s for good!), there’s not much I haven’t tried to scrape. In many cases, I don’t even analyze the data I’ve obtained, and the whole process amounts to a nerdy version of sport hunting, with my comma-delimited trophies mounted proudly on Amazon S3.

Important post for two reasons:

• Good introduction to the art of scraping data
• Sets the norm for sharing scraped data

The people who force scraping of data don’t want it shared, combined, merged or analyzed.

You can help in disappointing them!
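For anyone who wants to start sport hunting too, the core loop Brian describes (fetch JSON, flatten to CSV) is only a few lines. The URL and field names below are hypothetical stand-ins, not a real data portal.

```python
# A minimal scrape-to-CSV sketch: fetch a JSON endpoint and write a flat file.
import csv
import requests

url = "https://data.example.gov/api/stations.json"    # hypothetical endpoint
records = requests.get(url).json()["stations"]         # hypothetical key

fields = ["id", "name", "lat", "lon"]
with open("stations.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    for rec in records:
        writer.writerow({k: rec.get(k) for k in fields})
```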

## Making of: Introduction to A*

October 13th, 2014

Making of: Introduction to A* by Amit Patel.

From the post:

(Warning: these notes are rough – the main page is here and these are some notes I wrote for a few colleagues and then I kept adding to it until it became a longer page)

Several people have asked me how I make the diagrams on my tutorials.

I need to learn the algorithm and data structures I want to demonstrate. Sometimes I already know them. Sometimes I know nothing about them. It varies a lot. It can take 1 to 5 months to make a tutorial. It’s slow, but the more I make, the faster I am getting.

I need to figure out what I want to show. I start with what’s in the algorithm itself: inputs, outputs, internal variables. With A*, the input is (start, goal, graph), the output is (parent pointers, distances), and the internal variables are (open set, closed set, parent pointers, distances, current node, neighbors, child node). I’m looking for the main idea to visualize. With A* it’s the frontier, which is the open set. Sometimes the thing I want to visualize is one of the algorithm’s internal variables, but not always.

Pure gold on making diagrams for tutorials here. You may make different choices but it isn’t often that the process of making a choice is exposed.

Pass this along. We all benefit from better illustrations in tutorials!
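As a footnote for readers who want to connect the variables Amit lists (the frontier/open set, parent pointers, distances) to running code, here is a compact A* sketch in Python. The graph interface and heuristic are assumptions for illustration; any graph object with neighbors() and cost() methods would do.

```python
# A compact A* sketch: `frontier` is the open set, `came_from` holds parent
# pointers, `cost_so_far` holds the best known distance from the start.
import heapq

def a_star(graph, start, goal, heuristic):
    frontier = [(0, start)]              # priority queue of (priority, node)
    came_from = {start: None}
    cost_so_far = {start: 0}

    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            break
        for neighbor in graph.neighbors(current):
            new_cost = cost_so_far[current] + graph.cost(current, neighbor)
            if neighbor not in cost_so_far or new_cost < cost_so_far[neighbor]:
                cost_so_far[neighbor] = new_cost
                priority = new_cost + heuristic(neighbor, goal)
                heapq.heappush(frontier, (priority, neighbor))
                came_from[neighbor] = current

    return came_from, cost_so_far
```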

## The Big List of D3.js Examples (Approx. 2500 Examples)

October 13th, 2014

The Big List of D3.js Examples by Christophe Viau.

The interactive version has 2523 examples, whereas the numbered list has 1897 examples, as of 13 October 2014.

There is a rudimentary index of the examples. That’s an observation, not a complaint. Effective indexing of the examples would be a real challenge to the art of indexing.

The current index uses chart type, a rather open-ended category. The subject matter of the chart would be another way to index. Indexing by the D3 techniques used would be useful, as would indexing by the data being combined with other data.

Effective access to the techniques and data represented by this collection would be awesome!

Give it some thought.

I first saw this in a tweet by Michael McGuffin.

## Introduction to Graphing with D3.js

October 13th, 2014

Introduction to Graphing with D3.js by Jan Milosh.

From the post:

D3.js (d3js.org) stands for Data-Driven Documents, a JavaScript library for data visualization. It was created by Mike Bostock, based on his PhD studies in the Stanford University data visualization program. Mike now works at the New York Times who sponsors his open source work.

D3 was designed for more than just graphs and charts. It’s also capable of presenting maps, networks, and ordered lists. It was created for the efficient manipulation of documents based on data.

This demonstration will focus on creating a simple scatter plot.

If you are not already using D3 for graphics, Jan’s post is an easy introduction with additional references to take you further.

Enjoy!

I first saw this in a tweet by Christophe Viau.