Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

December 5, 2017

Tabula: Extracting A Hit (sorry) Security List From PDF Report

Filed under: Cybersecurity,Extraction,Government,PDF,Security — Patrick Durusau @ 11:44 am

Benchmarking U.S. Government Websites by Daniel Castro, Galia Nurko, and Alan McQuinn, provides a quick assessment of 468 of the most popular federal websites for “…page-load speed, mobile friendliness, security, and accessibility.”

Unfortunately, it has an ugly table layout:

Double column listings with the same headers?

There are 476 results on Stackoverflow this morning for extracting tables from PDF.

However, I need a one-cup-of-coffee, maybe two-cups-of-coffee answer to extracting data from these tables.

Enter Tabula.

If you’ve ever tried to do anything with data provided to you in PDFs, you know how painful it is — there’s no easy way to copy-and-paste rows of data out of PDF files. Tabula allows you to extract that data into a CSV or Microsoft Excel spreadsheet using a simple, easy-to-use interface. Tabula works on Mac, Windows and Linux.

Tabula is easy to use: download, extract, start it, point your web browser to http://localhost:8080 (or http://127.0.0.1:8080), load your PDF file, select the table, and export the content.
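If you would rather script the extraction than click through the web interface, the tabula-py wrapper (a separate install on top of Java, and my own suggestion rather than anything used in this post) drives roughly the same extraction engine. A minimal sketch, with the PDF file name assumed:

```python
# A minimal sketch using tabula-py (pip install tabula-py; requires Java).
# The PDF file name is an assumption -- substitute the report you downloaded.
import tabula

# Pull every table Tabula can detect into pandas DataFrames for inspection.
tables = tabula.read_pdf("benchmarking-us-government-websites.pdf", pages="all")
print(f"Detected {len(tables)} tables")

# Or write everything straight to CSV for proofing in a spreadsheet.
tabula.convert_into(
    "benchmarking-us-government-websites.pdf",
    "tables-raw.csv",
    output_format="csv",
    pages="all",
)
```

Either way you get the same recognition quirks described below, so the proofing step does not go away.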

I tried selecting the columns separately (one page at a time) but then used table recognition and selected the entirety of Table 6 (security evaluation). I don’t think it made any difference to the errors I was seeing in the result (dropped first letters of site domains), but check with your data.

Warning: For some unknown reason, possibly a defect in the PDF and/or Tabula, the leading character of the second domain field was dropped on some entries. Not all, not consistently, but it was dropped. The last line of entries on a couple of pages also went missing. Proofing is required!

There were other recognition issues as well; the capture wasn’t perfect due to underlying differences in the PDF:

cancer.gov,100,901,fdic.gov,100,"3,284"
weather.gov,100,904,blm.gov,100,"3,307"
transportation.gov,,,100,,,"3,340",,,ecreation.gov,,,100,,,"9,012",
"regulations.gov1003,390data.gov1009,103",,,,,,,,,,,,,,,,
nga.gov,,,100,,,"3,462",,,irstgov.gov,,,100,,,"9,112",
"nrel.gov1003,623nationalservice.gov1009,127",,,,,,,,,,,,,,,,
hrsa.gov,,,100,,,"3,635",,,topbullying.gov,,,100,,,"9,285",
"consumerfinance.gov1004,144section508.gov1009,391",,,,,,,,,,,,,,,,

With proofing, we are well beyond two cups of coffee, but once proofed, I tossed it into Calc and produced a single-column CSV file: 2017-Benchmarking-US-Government-Websites-Security-Table-6.csv.
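If you would rather script that stacking than paste columns around in Calc, here is a sketch with pandas. It assumes the capture has already been proofed into six clean columns (the left and right halves of the table side by side); the column names are my own invention, not headers from the report:

```python
# Fold the report's double-column layout into a single list of sites.
# Assumes a proofed CSV with six columns per row; names are illustrative.
import pandas as pd

cols = ["site_l", "score_l", "rank_l", "site_r", "score_r", "rank_r"]
raw = pd.read_csv("table-6-proofed.csv", names=cols, thousands=",")

left = raw[["site_l", "score_l", "rank_l"]].set_axis(["site", "score", "rank"], axis=1)
right = raw[["site_r", "score_r", "rank_r"]].set_axis(["site", "score", "rank"], axis=1)

single = pd.concat([left, right], ignore_index=True).dropna(subset=["site"])
single.to_csv("table-6-single-column.csv", index=False)
```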

Enjoy!

PS: I discovered a LibreOffice Calc “gotcha” in this exercise. If you select a column from the top and attempt to paste it under an existing column (same or different spreadsheet), you get the error message: “There is not enough room on the sheet to insert here.”

When you select a column from the top, it copies all the blank cells in that column so there truly isn’t sufficient space to paste it under another column. Tip: Always copy columns in Calc from the bottom of the column up.

April 20, 2014

Annotating, Extracting, and Linking Legal Information

Filed under: Annotation,Extraction,Law,Law - Sources,Legal Informatics,Linked Data — Patrick Durusau @ 3:59 pm

Annotating, Extracting, and Linking Legal Information by Adam Wyner. (slides)

Great slides, provided you have enough background in the area to fill in the gaps.

I first saw this at: Wyner: Annotating, Extracting, and Linking Legal Information, which has collected up the links/resources mentioned in the slides.

Despite decades of electronic efforts and several centuries of manual effort before that, legal information retrieval remains an open challenge.

April 11, 2014

Definitions Extractions from the Code of Federal Regulations

Filed under: Extraction,Law,Law - Sources — Patrick Durusau @ 7:03 pm

Definitions Extractions from the Code of Federal Regulations by Mohamma M. AL Asswad, Deepthi Rajagopalan, and Neha Kulkarni. (poster)

From a description of the project:

Imagine you’re opening a new business that uses water in the production cycle. If you want to know what federal regulations apply to you, you might do a Google search that leads to the Code of Federal Regulations. But that’s where it gets complicated, because the law contains hundreds of regulations involving water that are difficult to narrow down. (The CFR alone contains 13,898 references to water.) For example, water may be defined one way when referring to a drinkable liquid and another when defined as an emission from a manufacturing facility. If the regulation says your water must maintain a certain level of purity, to which water are they referring? Definitions are the building blocks of the law, and yet poring through them to find what applies to you is frustrating to an average business owner. Computer automation might help, but how can a computer understand exactly what kind of water you’re looking for? We at the Legal Information Institute think this is a pretty important challenge, and apparently Google does too.
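The hard part the project tackles is deciding which definition of a term like “water” applies in context. For contrast, here is a deliberately naive sketch (my own illustration, not the project’s method) that only pattern-matches the “<term> means …” phrasing CFR definition sections commonly use:

```python
# A toy definition extractor: pulls '"term" means ...' sentences out of
# regulation text. Purely illustrative -- it has no notion of context, so
# every sense of "water" lands in the same bucket.
import re

DEFINITION = re.compile(
    r'(?:The term\s+)?"?(?P<term>[A-Za-z][A-Za-z \-]{0,60}?)"?'
    r'\s+means\s+(?P<definition>[^.]+)\.',
    re.IGNORECASE,
)

def extract_definitions(text):
    """Return (term, definition) pairs matched by the pattern."""
    return [(m.group("term").strip(), m.group("definition").strip())
            for m in DEFINITION.finditer(text)]

sample = ('The term "waters of the United States" means navigable waters. '
          'Process wastewater means any water which comes into contact '
          'with a raw material.')
for term, definition in extract_definitions(sample):
    print(term, "->", definition)
```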

Looking forward to learning more about this project!

BTW, this is the same Code of Federal Regulations that some members of Congress don’t think needs to be indexed.

Knowing what legal definitions apply is a big step towards making legal material more accessible.

October 21, 2013

7 command-line tools for data science

Filed under: Data Mining,Data Science,Extraction — Patrick Durusau @ 4:54 pm

7 command-line tools for data science by Jeroen Janssens.

From the post:

Data science is OSEMN (pronounced as awesome). That is, it involves Obtaining, Scrubbing, Exploring, Modeling, and iNterpreting data. As a data scientist, I spend quite a bit of time on the command-line, especially when there's data to be obtained, scrubbed, or explored. And I'm not alone in this. Recently, Greg Reda discussed how the classics (e.g., head, cut, grep, sed, and awk) can be used for data science. Prior to that, Seth Brown discussed how to perform basic exploratory data analysis in Unix.

I would like to continue this discussion by sharing seven command-line tools that I have found useful in my day-to-day work. The tools are: jq, json2csv, csvkit, scrape, xml2json, sample, and Rio. (The home-made tools scrape, sample, and Rio can be found in this data science toolbox.) Any suggestions, questions, comments, and even pull requests are more than welcome.

Jeroen covers:

  1. jq – sed for JSON
  2. json2csv – convert JSON to CSV
  3. csvkit – suite of utilities for converting to and working with CSV
  4. scrape – HTML extraction using XPath or CSS selectors
  5. xml2json – convert XML to JSON
  6. sample – when you’re in debug mode
  7. Rio – making R part of the pipeline
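Most of these have serviceable stand-ins when you cannot install anything. As one example, a rough standard-library equivalent of the json2csv step (my own sketch, not Jeroen’s tool, and the field names are made up):

```python
# A standard-library stand-in for json2csv: flatten JSON records
# (one object per line on stdin) into CSV on stdout.
# The field names are illustrative assumptions.
import csv
import json
import sys

FIELDS = ["name", "url", "score"]

writer = csv.DictWriter(sys.stdout, fieldnames=FIELDS, extrasaction="ignore")
writer.writeheader()
for line in sys.stdin:
    line = line.strip()
    if line:
        writer.writerow(json.loads(line))
```

Run it as something like python json2csv_lite.py < records.jsonl > records.csv; for anything nested, jq remains the better tool.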

There are fourteen (14) more suggested by readers at the bottom of the post.

Some definite additions to the tool belt here.

I first saw this in Pete Warden’s Five Short Links, October 19, 2013.

September 11, 2012

Web Data Extraction, Applications and Techniques: A Survey

Filed under: Data Mining,Extraction,Machine Learning,Text Extraction,Text Mining — Patrick Durusau @ 5:05 am

Web Data Extraction, Applications and Techniques: A Survey by Emilio Ferrara, Pasquale De Meo, Giacomo Fiumara, Robert Baumgartner.

Abstract:

Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of application domains. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc application domains. Other approaches, instead, heavily reuse techniques and algorithms developed in the field of Information Extraction.

This survey aims at providing a structured and comprehensive overview of the research efforts made in the field of Web Data Extraction. The fil rouge of our work is to provide a classification of existing approaches in terms of the applications for which they have been employed. This differentiates our work from other surveys devoted to classify existing approaches on the basis of the algorithms, techniques and tools they use.

We classified Web Data Extraction approaches into categories and, for each category, we illustrated the basic techniques along with their main variants.

We grouped existing applications in two main areas: applications at the Enterprise level and at the Social Web level. Such a classification relies on a twofold reason: on one hand, Web Data Extraction techniques emerged as a key tool to perform data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. On the other hand, Web Data Extraction techniques allow for gathering a large amount of structured data continuously generated and disseminated by Web 2.0, Social Media and Online Social Network users and this offers unprecedented opportunities of analyzing human behaviors on a large scale.

We also discussed the potential for cross-fertilization, i.e., the possibility of re-using Web Data Extraction techniques originally designed to work in a given domain in other domains.

A comprehensive (> 50 pages) survey of web data extraction. It supplements and updates existing work by classifying web data extraction approaches according to their field of use.

Very likely to lead to adaptation of techniques from one field to another.

November 28, 2011

Parsing Wikipedia Articles: Wikipedia Extractor and Cloud9

Filed under: Data Mining,Dataset,Extraction — Patrick Durusau @ 7:05 pm

Parsing Wikipedia Articles: Wikipedia Extractor and Cloud9 by Ryan Rosario.

From the post:

Lately I have been doing a lot of work with the Wikipedia XML dump as a corpus. Wikipedia provides a wealth of information to researchers in easy to access formats including XML, SQL and HTML dumps for all language properties. Some of the data freely available from the Wikimedia Foundation include

  • article content and template pages
  • article content with revision history (huge files)
  • article content including user pages and talk pages
  • redirect graph
  • page-to-page link lists: redirects, categories, image links, page links, interwiki etc.
  • image metadata
  • site statistics

The above resources are available not only for Wikipedia, but for other Wikimedia Foundation projects such as Wiktionary, Wikibooks and Wikiquotes.

All of that is available, but it lacks any consistent use of syntax. Ryan stumbles upon Wikipedia Extractor, which has pluses and minuses, an example of the latter being that it is really slow. Things look up for Ryan when he is reminded about Cloud9, which is designed for a MapReduce environment.
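Neither tool is hard to approximate for small experiments. Here is a minimal, standard-library sketch that streams page titles and raw wikitext out of the dump without loading it all into memory; the markup cleanup, which is the part Wikipedia Extractor and Cloud9 actually solve, is left out, and the dump file name is an assumption:

```python
# Stream <page> elements out of a Wikipedia XML dump incrementally.
# Yields (title, raw wikitext) pairs; wiki markup is left untouched.
import bz2
import xml.etree.ElementTree as ET

def iter_pages(path):
    """Yield (title, wikitext) from a .xml.bz2 Wikipedia dump."""
    with bz2.open(path, "rb") as f:
        for _event, elem in ET.iterparse(f, events=("end",)):
            if elem.tag.endswith("}page"):  # ignore the MediaWiki namespace prefix
                title = elem.findtext("{*}title", default="")
                text = elem.findtext("{*}revision/{*}text", default="")
                yield title, text
                elem.clear()  # keep memory use bounded

if __name__ == "__main__":
    # Peek at the first few pages of the (assumed) dump file.
    for i, (title, text) in enumerate(iter_pages("enwiki-latest-pages-articles.xml.bz2")):
        print(title, len(text))
        if i >= 4:
            break
```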

Read the post to see how things turned out for Ryan using Cloud9.

Depending on your needs, Wikipedia URLs are a start on subject identifiers, although you will probably need to create some for your particular domain.

January 28, 2011

Sofia-ML and Maui: Two Cool Machine Learning and Extraction libraries – Post

Filed under: Extraction,Machine Learning — Patrick Durusau @ 7:21 am

Sofia-ML and Maui: Two Cool Machine Learning and Extraction libraries

Jeff Dalton reports on two software packages for text analysis.

These are examples of just some of the tools that could be run on a corpus like the Afghan War Diaries.
