How to kill a patent with Python: or using NLP and graph theory for great good! by Van Lindberg.
From the description:
Finding the right piece of “prior art” – technical documentation that described a patented piece of technology before the patent was filed – is like finding a needle in a very big haystack. This session will talk about how I am making that process faster and more accurate through the use of natural language processing, graph theory, machine learning, and lots of Python.
A fascinating presentation with practical suggestions on mining patents.
Key topic map statement: “People are inconsistent in the use of language.”
From the OSCON description of the same presentation:
When faced with a patent case, it is essential to find “prior art” – patents and publications that describe a technology before a certain date. The problem is that the indexing mechanisms for patents and publications are not as good as they could be, making good prior art searching more of an art than a science. We can apply some of our natural language processing and “big data” techniques to the US patent database, getting us better results more quickly.
- Part I: The USPTO as a data source. The full-text of each patent is available from the USPTO (and now from Google). What does this data look like? How can it be harvested and normalized to create data structures that we can work with? [A rough harvesting sketch follows this list.]
- Part II: Once the patents have been cleaned and normalized, they can be turned into data structures that we can use to evaluate their relationship to other documents. This is done in two ways – by modeling each patent as a document vector and a graph node.
- Part IIA: Patents as document vectors. Once we have a patent as a data structure, we can treat the patent as a vector in an n-dimensional space. In moving from a document into a vector space, we will touch on normalization, stemming, TF/IDF, Latent Semantic Indexing (LSI) and Latent Dirichlet Allocation (LDA). [A vector-space sketch follows this list.]
- Part IIB: Patents as technology graphs. This will show building graph structures using the connections between patents – both the built-in connections in the patents themselves as well as the connections discovered while working with the patents as vectors. We apply some social network analysis to partition the patent graph and find other documents in the same technology space. [A graph sketch follows this list.]
- Part III: What have we built? Now that we have done all this analysis, we can see some interesting things about the patent database as a whole. How does the patent database act as a map to the world of technology? And how has this helped with the original problem – finding better prior art?
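For Part I, here is a minimal sketch of what harvesting might look like, assuming one granted patent per XML file. The tag names follow the USPTO's full-text grant schema, which changes between releases, so treat them as assumptions to verify against the files you actually download.

```python
# Sketch of Part I: pulling fields out of a USPTO full-text grant XML file.
# Tag names follow the us-patent-grant schema but vary across releases; the
# bulk files also concatenate many documents, which is not handled here.
import re
import xml.etree.ElementTree as ET

def normalize(text):
    """Lower-case and collapse whitespace so later tokenizing is consistent."""
    return re.sub(r"\s+", " ", (text or "")).strip().lower()

def _text(elem):
    """All text under an element, or '' if the element is missing."""
    return "".join(elem.itertext()) if elem is not None else ""

def parse_grant(xml_path):
    """Return one patent as a plain dict: number, title, abstract, claims."""
    root = ET.parse(xml_path).getroot()
    return {
        "number": root.findtext(".//publication-reference/document-id/doc-number"),
        "title": normalize(root.findtext(".//invention-title")),
        "abstract": normalize(_text(root.find(".//abstract"))),
        "claims": normalize(" ".join(_text(c) for c in root.iter("claim"))),
    }
```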
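For Part IIA, a sketch of the vector-space step using gensim and NLTK (my library choices, not necessarily the presenter's): tokenize and stem the text, weight terms with TF/IDF, and fold the corpus into a low-dimensional LSI space that supports similarity queries. LDA (gensim's models.LdaModel) would be a drop-in alternative at the last step.

```python
# Sketch of Part IIA: patents as document vectors (gensim + NLTK assumed).
from gensim import corpora, models, similarities
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def tokens(text):
    """Whitespace-tokenize and stem; a real pipeline would also drop stopwords."""
    return [stemmer.stem(w) for w in text.split() if w.isalpha()]

def build_lsi_index(patent_texts, num_topics=200):
    """patent_texts: list of normalized claim/abstract strings, one per patent."""
    docs = [tokens(t) for t in patent_texts]
    dictionary = corpora.Dictionary(docs)
    bow = [dictionary.doc2bow(d) for d in docs]
    tfidf = models.TfidfModel(bow)                     # TF/IDF term weighting
    lsi = models.LsiModel(tfidf[bow], id2word=dictionary, num_topics=num_topics)
    index = similarities.MatrixSimilarity(lsi[tfidf[bow]])
    return dictionary, tfidf, lsi, index

def most_similar(query_text, dictionary, tfidf, lsi, index, top_n=10):
    """Rank the corpus against a query document, e.g. a prior-art candidate."""
    vec = lsi[tfidf[dictionary.doc2bow(tokens(query_text))]]
    return sorted(enumerate(index[vec]), key=lambda x: -x[1])[:top_n]
```

The returned pairs are (corpus position, cosine similarity), so mapping positions back to patent numbers is left to the caller.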
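For Part IIB, a sketch of the graph step using networkx (again my choice, not the talk's stated toolchain): nodes are patent numbers, edges come from the patents' own citations plus pairs that score highly in the vector space, and a community-detection pass partitions the graph into rough technology spaces. The 0.6 similarity cutoff is purely illustrative.

```python
# Sketch of Part IIB: patents as a technology graph (networkx assumed).
import networkx as nx
from networkx.algorithms import community

def build_patent_graph(citations, similar_pairs, cutoff=0.6):
    """citations: iterable of (citing, cited); similar_pairs: (a, b, score)."""
    G = nx.Graph()
    G.add_edges_from(citations, kind="citation")
    for a, b, score in similar_pairs:
        if score >= cutoff:                  # hypothetical similarity threshold
            G.add_edge(a, b, kind="similarity", weight=score)
    return G

def technology_clusters(G):
    """Partition the graph into communities, i.e. rough technology spaces."""
    return list(community.greedy_modularity_communities(G))

def neighborhood(patent_number, clusters):
    """Return the cluster containing a patent: candidate prior-art documents."""
    for c in clusters:
        if patent_number in c:
            return sorted(c)
    return []
```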
My suggestion was to use topic maps to capture the human analysis of the clusters at the end of the day, so that it can be merged with human analysis of other clusters.
Waiting to learn more about this project!