Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

October 13, 2014

Scrape the Gibson: Python skills for data scrapers

Filed under: Python,Web Scrapers — Patrick Durusau @ 7:32 pm

Scrape the Gibson: Python skills for data scrapers by Brian Abelson.

From the post:

Two years ago, I learned I had superpowers. Steve Romalewski was working on some fascinating analyses of CitiBike locations and needed some help scraping information from the city’s data portal. Cobbling together the little I knew about R, I wrote a simple scraper to fetch the json files for each bike share location and output it as a csv. When I opened the clean data in Excel, the feeling was tantamount to this scene from Hackers:

Ever since then I’ve spent a good portion of my life scraping data from websites. From movies, to bird sounds, to missed connections, and john boards (don’t ask, I promise it’s for good!), there’s not much I haven’t tried to scrape. In many cases, I don’t even analyze the data I’ve obtained, and the whole process amounts to a nerdy version of sport hunting, with my comma-delimited trophies mounted proudly on Amazon S3.
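The workflow Brian describes, fetching JSON from a portal and flattening it into a CSV, fits in a dozen lines of Python. A minimal sketch (the endpoint URL and field names are invented placeholders, not the actual CitiBike portal):

```python
import csv
import requests  # third-party HTTP library

# Hypothetical endpoint standing in for the city's data portal.
URL = "https://example.com/data/bike-share-locations.json"

response = requests.get(URL)
response.raise_for_status()
stations = response.json()  # assume a list of dicts, one per location

with open("stations.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "name", "lat", "lon"])
    writer.writeheader()
    for station in stations:
        # Keep only the columns we care about; ignore anything extra.
        writer.writerow({k: station.get(k) for k in ("id", "name", "lat", "lon")})
```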

Important post for two reasons:

  • Good introduction to the art of scraping data
  • Set the norm for sharing scraped data
    • The people who force scraping of data don’t want it shared, combined, merged or analyzed.

      You can help disappoint them! 😉

October 10, 2014

No Query Language Needed: Using Python with an Ordered Key-Value Store

Filed under: FoundationDB,Key-Value Stores,Python — Patrick Durusau @ 10:50 am

No Query Language Needed: Using Python with an Ordered Key-Value Store by Stephen Pimentel.

From the post:

FoundationDB is a complex and powerful database, designed to handle sharding, replication, network hiccups, and server failures gracefully and automatically. However, when we designed our Python API, we wanted most of that complexity to be hidden from the developer. By utilizing familiar features, such as generators, itertools, and comprehensions, we tried to make FoundationDB’s API as easy to use as a Python dictionary.

In the video below, I show how FoundationDB lets you query data directly using Python language features, rather than a separate query language.

Most applications have back-end data stores that developers need to query. This talk presents an approach to storing and querying data that directly employs Python language features. Using the Key-Value Store, we can make our data persistent with an interface similar to a Python dictionary. Python then gives us a number of tools “out of the box” that we can use to form queries:

  • generators for memory-efficient data retrieval;
  • itertools to filter and group data;
  • comprehensions to assemble the query results.

Taken together, these features give us a query capability using straight Python. The talk walks through a number of example queries using the Enron email dataset.
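To make the pattern concrete without installing FoundationDB, here is a sketch that exercises the same three tools, generators, itertools, and comprehensions, against a plain Python dictionary standing in for the ordered key-value store. The key scheme and data are invented for illustration; the real FoundationDB bindings expose a similar dictionary-like interface.

```python
from itertools import groupby

# A plain dict standing in for the ordered key-value store.
# Keys are ("email", sender, message_id) tuples, values are subjects.
store = {
    ("email", "alice", 1): "budget",
    ("email", "alice", 2): "lunch",
    ("email", "bob", 1): "forecast",
}

def emails_from(sender):
    """Generator: lazily yield (key, value) pairs for one sender."""
    for key in sorted(store):
        if key[0] == "email" and key[1] == sender:
            yield key, store[key]

# itertools.groupby to group messages by sender (input must be sorted).
by_sender = {
    sender: [subject for _, subject in group]
    for sender, group in groupby(
        sorted(store.items()), key=lambda kv: kv[0][1]
    )
}

# A comprehension to assemble the final query result.
alice_subjects = [subject for _, subject in emails_from("alice")]
print(alice_subjects)    # ['budget', 'lunch']
print(by_sender["bob"])  # ['forecast']
```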

See https://github.com/stephenpiment/object-store for the code and details.

More motivation to take a look at FoundationDB!

I do wonder about the “no query language needed.” Users, despite their poor results, appear to be committed to querying and query languages.

Whether it is the illusion of “empowerment” of users, the current inability to measure the cost of ineffectual searching, or acceptance of poor search results, search and search operators continue to be the preferred means of interaction. Plan accordingly.

I first saw this in a tweet by Hari Kishan.

October 1, 2014

Continuum Analytics Releases Anaconda 2.1

Filed under: Anaconda,BigData,Python — Patrick Durusau @ 4:18 pm

Continuum Analytics Releases Anaconda 2.1 by Corinna Bahr.

From the post:

Continuum Analytics, the premier provider of Python-based data analytics solutions and services, announced today the release of the latest version of Anaconda, its free, enterprise-ready collection of libraries for Python.

Anaconda enables big data management, analysis, and cross-platform visualization for business intelligence, scientific analysis, engineering, machine learning, and more. The latest release, version 2.1, adds a new version of the Anaconda Launcher and PyOpenSSL, as well as updates to NumPy, Blaze, Bokeh, Numba, and 50 other packages.

Available on Windows, Mac OS X and Linux, Anaconda includes more than 195 of the most popular numerical and scientific Python libraries used by scientists, engineers and data analysts, with a single integrated and flexible installer. It also allows for the mixing and matching of different versions of Python (2.6, 2.7, 3.3, 3.4), NumPy, SciPy, etc., and the ability to easily switch between these environments.

See the post for more details, check the change log, or, what the hell, download the most recent version of Anaconda.

Remember, it’s open source so you can see “…where it keeps its brain.” Be wary of results based on software that operates behind a curtain.

BTW, check out the commercial services and products from Continuum Analytics if you need even more firepower for your data processing.

September 30, 2014

IPython Cookbook released

Filed under: BigData,Programming,Python — Patrick Durusau @ 3:58 pm

IPython Cookbook released by Cyrille Rossant.

From the post:

My new book, IPython Interactive Computing and Visualization Cookbook, has just been released! A sequel to my previous beginner-level book on Python for data analysis, this new 500-page book is a complete advanced-level guide to Python for data science. The 100+ recipes cover not only interactive and high-performance computing topics, but also data science methods in statistics, data mining, machine learning, signal processing, image processing, network analysis, and mathematical modeling.

Here is a glimpse of the topics addressed in this book:

  • IPython notebook, interactive widgets in IPython 2+
  • Best practices in interactive computing: version control, workflows with IPython, testing, debugging, continuous integration…
  • Data analysis with pandas, NumPy/SciPy, and matplotlib
  • Advanced data visualization with seaborn, Bokeh, mpld3, d3.js, Vispy
  • Code profiling and optimization
  • High-performance computing with Numba, Cython, GPGPU with CUDA/OpenCL, MPI, HDF5, Julia
  • Statistical data analysis with SciPy, PyMC, R
  • Machine learning with scikit-learn
  • Signal processing with SciPy, image processing with scikit-image and OpenCV
  • Analysis of graphs and social networks with NetworkX
  • Geographic Information Systems in Python
  • Mathematical modeling: dynamical systems, symbolic mathematics with SymPy

All of the code is freely available as IPython notebooks on the book’s GitHub repository. This repository is also the place where you can signal errata or propose improvements to any part of the book.

It’s never too early to work on your “wish list” for the holidays! 😉

Or to be the person who tweaks the code (or data).

September 14, 2014

Astropy v0.4 Released

Filed under: Astroinformatics,Python — Patrick Durusau @ 3:58 pm

Astropy v0.4 Released by Erik Tollerud.

From the post:

This July, we performed the third major public release (v0.4) of the astropy package, a core Python package for Astronomy. Astropy is a community-driven package intended to contain much of the core functionality and common tools needed for performing astronomy and astrophysics with Python.

New and improved major functionality in this release includes:

  • A new astropy.vo.samp sub-package adapted from the previously standalone SAMPy package
  • A re-designed astropy.coordinates sub-package for celestial coordinates
  • A new ‘fitsheader’ command-line tool that can be used to quickly inspect FITS headers
  • A new HTML table reader/writer
  • Improved performance for Quantity objects
  • A re-designed configuration framework

Erik goes on to say that Astropy 1.0 should arrive by the end of the year!

Enjoy!

September 12, 2014

Bokeh 0.6 release

Filed under: Graphics,Python,Visualization — Patrick Durusau @ 10:30 am

Bokeh 0.6 release by Bryan Van de Ven.

From the post:

Bokeh is a Python library for visualizing large and realtime datasets on the web. Its goal is to provide developers (and domain experts) with capabilities to easily create novel and powerful visualizations that extract insight from local or remote (possibly large) data sets, and to easily publish those visualizations to the web for others to explore and interact with.

This release includes many bug fixes and improvements over our most recent 0.5.2 release:

  • Abstract Rendering recipes for large data sets: isocontour, heatmap
  • New charts in bokeh.charts: Time Series and Categorical Heatmap
  • Full Python 3 support for bokeh-server
  • Much expanded User and Dev Guides
  • Multiple axes and ranges capability
  • Plot object graph query interface
  • Hit-testing (hover tool support) for patch glyphs

See the CHANGELOG for full details.

I’d also like to announce a new Github Organization for Bokeh: https://github.com/bokeh. Currently it is home to Scala and Julia language bindings for Bokeh, but the Bokeh project itself will be moved there before the next 0.7 release. Any implementors of new language bindings who are interested in hosting your project under this organization are encouraged to contact us.

In upcoming releases, you should expect to see more new layout capabilities (colorbar axes, better grid plots and improved annotations), additional tools, even more widgets and more charts, R language bindings, Blaze integration and cloud hosting for Bokeh apps.
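If you have never touched Bokeh, the basic bokeh.plotting interface looks roughly like this. A minimal sketch; the names follow current Bokeh releases and may differ slightly from the 0.6-era API.

```python
from bokeh.plotting import figure, output_file, show

# Write the plot to a standalone HTML file.
output_file("lines.html")

p = figure(title="Minimal Bokeh example", x_axis_label="x", y_axis_label="y")
p.line([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], line_width=2)

show(p)  # saves lines.html and opens it in a browser
```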

Don’t forget to check out the full documentation, interactive gallery, and tutorial at

http://bokeh.pydata.org

as well as the Bokeh IPython notebook nbviewer index (including all the tutorials) at:

http://nbviewer.ipython.org/github/ContinuumIO/bokeh-notebooks/blob/master/index.ipynb

One of the examples from the gallery:

[plot graphic from the Bokeh gallery]

reminds me of U.S. foreign policy. The unseen attractors are defense contractors and other special interests.

September 7, 2014

NLTK 3.0 Is Out!

Filed under: NLTK,Python — Patrick Durusau @ 6:45 pm

NLTK 3.0

The online book has been updated: http://www.nltk.org/book/

Porting your code to NLTK 3.0

Enjoy!

August 1, 2014

COSMOS: Python library for massively parallel workflows

Filed under: Bioinformatics,Parallel Programming,Python,Workflow — Patrick Durusau @ 10:11 am

COSMOS: Python library for massively parallel workflows by Erik Gafni, et al. (Bioinformatics (2014) doi: 10.1093/bioinformatics/btu385 )

Abstract:

Summary: Efficient workflows to shepherd clinically generated genomic data through the multiple stages of a next-generation sequencing pipeline are of critical importance in translational biomedical science. Here we present COSMOS, a Python library for workflow management that allows formal description of pipelines and partitioning of jobs. In addition, it includes a user interface for tracking the progress of jobs, abstraction of the queuing system and fine-grained control over the workflow. Workflows can be created on traditional computing clusters as well as cloud-based services.

Availability and implementation: Source code is available for academic non-commercial research purposes. Links to code and documentation are provided at http://lpm.hms.harvard.edu and http://wall-lab.stanford.edu.

Contact: dpwall@stanford.edu or peter_tonellato@hms.harvard.edu.

Supplementary information: Supplementary data are available at Bioinformatics online.

A very good abstract but for pitching purposes, I would have chosen the first paragraph of the introduction:

The growing deluge of data from next-generation sequencers leads to analyses lasting hundreds or thousands of compute hours per specimen, requiring massive computing clusters or cloud infrastructure. Existing computational tools like Pegasus (Deelman et al., 2005) and more recent efforts like Galaxy (Goecks et al., 2010) and Bpipe (Sadedin et al., 2012) allow the creation and execution of complex workflows. However, few projects have succeeded in describing complicated workflows in a simple, but powerful, language that generalizes to thousands of input files; fewer still are able to deploy workflows onto distributed resource management systems (DRMs) such as Platform Load Sharing Facility (LSF) or Sun Grid Engine that stitch together clusters of thousands of compute cores. Here we describe COSMOS, a Python library developed to address these and other needs.

That paragraph highlights the bioinformatics aspects of COSMOS but also hints at a language that might be adapted to other “massively parallel workflows.” Workflows may differ in their details, but the need to define them efficiently and effectively is a common problem.

July 30, 2014

Graphs, Databases and Graphlab

Filed under: GraphLab,Graphs,IMDb,Python — Patrick Durusau @ 2:40 pm

Graphs, Databases and Graphlab by Bugra Akyildiz.

From the post:

I will talk about graphs, graph databases and mainly the paper that powers Graphlab. At the end of the post, I will go over briefly basic capabilities of Graphlab as well.

Background coverage of graphs and graph databases, followed by a discussion of GraphLab.

The high points of the post are the graphs generated from prior work by Bugra on the Internet Movie Database. (IMDB Top 100K Movies Analysis in Depth (Parts 1-4))

Enjoy!

Scrapy and Elasticsearch

Filed under: ElasticSearch,Python,Web Scrapers — Patrick Durusau @ 9:56 am

Scrapy and Elasticsearch by Florian Hopf.

From the post:

On 29.07.2014 I gave a talk at Search Meetup Karlsruhe on using Scrapy with Elasticsearch, the slides are here. This post evolved from the talk and introduces you to web scraping and search with Scrapy and Elasticsearch.

Web Crawling

You might think that web crawling and scraping is only for search engines like Google and Bing. But a lot of companies are using it for different purposes: price comparison, financial risk information and portals all need a way to get the data. And at least sometimes the way is to retrieve it through some public website. Besides these cases where the data is not in your hands, it can also make sense when the data is already aggregated. For intranet and portal search engines it can be easier to just scrape the frontend instead of building data import facilities for different, sometimes even old, systems.

The Example

In this post we are looking at a rather artificial example: Crawling the meetup.com page for recent meetups to make them available for search. Why artificial? Because meetup.com has an API that provides all the data in a more convenient way. But imagine there is no other way and we would like to build a custom search on this information, probably by adding other event sites as well. (emphasis in original)
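The skeleton of a Scrapy spider is small enough to show here. A hedged sketch, assuming a reasonably recent Scrapy; the start URL and CSS selectors are placeholders rather than the ones from Florian's talk, and a real crawl would add an item pipeline to push results into Elasticsearch.

```python
import scrapy

class MeetupSpider(scrapy.Spider):
    name = "meetups"
    # Placeholder start page; a real crawl would respect robots.txt and rate limits.
    start_urls = ["https://www.meetup.com/find/events/"]

    def parse(self, response):
        # Placeholder selectors; adjust to the actual page structure.
        for event in response.css("li.event-listing"):
            yield {
                "title": event.css("a.event-title::text").extract_first(),
                "url": event.css("a.event-title::attr(href)").extract_first(),
            }
```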

Not everything you need to know about Scrapy but enough to get you interested.

APIs for data are on the upswing but web scrapers will be relevant to data mining for decades to come.

July 23, 2014

NLTK 3.0 Beta!

Filed under: Natural Language Processing,NLTK,Python — Patrick Durusau @ 12:29 pm

NLTK 3.0 Beta!

The official name is nltk 3.0.0b1 but I thought 3.0 beta rolls off the tongue better. 😉

Interface changes.

Grab the latest, contribute bug reports, etc.

July 20, 2014

SciPy Videos – Title Sort Order

Filed under: Conferences,Python — Patrick Durusau @ 3:39 pm

You have probably seen that the SciPy 2014 videos are up! Good News! SciPy 2014.

You may also have noticed that the videos are in no discernible order. Not so good news.

However, I have created a list of the SciPy Videos in Title Sort Order.

Enjoy!

July 17, 2014

Scikit-learn 0.15 release

Filed under: Machine Learning,Python,Scikit-Learn — Patrick Durusau @ 6:16 pm

Scikit-learn 0.15 release by Gaël Varoquaux.

From the post:

Highlights:

Quality— Looking at the commit log, there has been a huge amount of work to fix minor annoying issues.

Speed— There has been a huge effort put in making many parts of scikit-learn faster. Little details all over the codebase. We do hope that you’ll find that your applications run faster. For instance, we find that the worst case speed of Ward clustering is 1.5 times faster in 0.15 than 0.14. K-means clustering is often 1.1 times faster. KNN, when used in brute-force mode, got faster by a factor of 2 or 3.

Random Forest and various tree methods— The random forest and various tree methods are much much faster, use parallel computing much better, and use less memory. For instance, the picture on the right shows the scikit-learn random forest running in parallel on a fat Amazon node, and nicely using all the CPUs with little RAM usage.

Hierarchical agglomerative clustering— Complete linkage and average linkage clustering have been added. The benefit of these approaches compared to the existing Ward clustering is that they can take an arbitrary distance matrix.

Robust linear models— Scikit-learn now includes RANSAC for robust linear regression.

HMMs are deprecated— We have been discussing removing HMMs for a long time; they do not fit the focus of scikit-learn on predictive modeling. We have created a separate hmmlearn repository for the HMM code. It is looking for maintainers.

And much more— plenty of “minor things”, such as better support for sparse data, better support for multi-label data…
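As a taste of the new robust linear models, here is a minimal sketch of RANSACRegressor on synthetic data with a few gross outliers mixed in (parameters left at their defaults).

```python
import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor

rng = np.random.RandomState(0)

# Synthetic line with a handful of gross outliers mixed in.
X = np.linspace(0, 10, 100).reshape(-1, 1)
y = 3.0 * X.ravel() + rng.normal(scale=0.5, size=100)
y[::10] += 30  # corrupt every tenth point

ransac = RANSACRegressor(LinearRegression())
ransac.fit(X, y)

print("slope:", ransac.estimator_.coef_[0])
print("inliers:", ransac.inlier_mask_.sum(), "of", len(y))
```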

Get thee to Scikit-learn!

July 12, 2014

Astropy Tutorials:…

Filed under: Astroinformatics,Python — Patrick Durusau @ 6:23 pm

Astropy Tutorials: Learn how to do common astro tasks with astropy and Python by Adrian Price-Whelan.

From the post:

Astropy is a community-developed Python package intended to provide much of the core functionality and common tools needed for astronomy and astrophysics research (cf. IRAF, idlastro). In order to provide demonstrations of the package and subpackage features and how they interact, we are announcing Astropy tutorials. These tutorials are aimed at being accessible to folks with little-to-no Python experience, and we hope they will be useful exercises for those just getting started with programming, Python, and/or the Astropy package. (The tutorials complement the Astropy documentation, which provides more detailed and complete information about the contents of the package along with short examples of code usage.)

The Astropy tutorials work through software tasks common in astronomical data manipulation and analysis. For example, the “Read and plot catalog information from a text file” tutorial demonstrates using astropy.io.ascii for reading and writing ASCII data, astropy.coordinates and astropy.units for converting RA (as a sexagesimal angle) to decimal degrees, and then uses matplotlib for making a color-magnitude diagram and an all-sky projection of the source positions.
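The coordinate-conversion step mentioned above is a two-liner with astropy.coordinates and astropy.units. A minimal sketch; the RA/Dec values are just an example (roughly M31):

```python
from astropy import units as u
from astropy.coordinates import SkyCoord

# A sexagesimal RA/Dec string, as it might appear in a text catalog.
c = SkyCoord(ra="00h42m44.3s", dec="+41d16m09s")

print(c.ra.degree, c.dec.degree)  # decimal degrees, e.g. 10.68 41.27
```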

The more data processing you do in any domain, the better your data processing skills overall.

If you already know Python, take this opportunity to learn some astronomy.

If you already like astronomy, take this opportunity to learn some Python and data processing.

Either way, you can’t lose!

Enjoy!

July 1, 2014

Introduction to Python for Econometrics, Statistics and Data Analysis

Filed under: Data Analysis,Python,Statistics — Patrick Durusau @ 7:04 pm

Introduction to Python for Econometrics, Statistics and Data Analysis by Kevin Sheppard.

From the introduction:

These notes are designed for someone new to statistical computing wishing to develop a set of skills necessary to perform original research using Python. They should also be useful for students, researchers or practitioners who require a versatile platform for econometrics, statistics or general numerical analysis (e.g. numeric solutions to economic models or model simulation).

Python is a popular general purpose programming language which is well suited to a wide range of problems. Recent developments have extended Python’s range of applicability to econometrics, statistics and general numerical analysis. Python – with the right set of add-ons – is comparable to domain-specific languages such as MATLAB and R. If you are wondering whether you should bother with Python (or another language), a very incomplete list of considerations includes:

One of the more even-handed introductions I have read in a long time.

Enough examples and exercises to build some keyboard memory into your fingers! 😉

Bookmark this text so you can forward the link to others.

I first saw this in a tweet by yhat.

June 26, 2014

Graphing 173 Million Taxi Rides

Filed under: Government,Government Data,GraphLab,Python — Patrick Durusau @ 6:43 pm

Interesting taxi rides dataset by Danny Bickson.

From the post:

I got the following from my collaborator Zach Nation: a NY taxi ride dataset that was not properly anonymized and was reverse engineered to find interesting insights in the data.

Danny mapped the data using GraphLab and asks some interesting questions of the data.

BTW, Danny is offering the iPython notebook to play with!

Cool!

This is the same data set I mentioned in: On Taxis and Rainbows

June 11, 2014

Exploring FBI Crime Statistics…

Filed under: Data Mining,FBI,Python,Statistics — Patrick Durusau @ 2:30 pm

Exploring FBI Crime Statistics with Glue and plotly by Chris Beaumont.

From the post:

Glue is a project I’ve been working on to interactively visualize multidimensional datasets in Python. The goal of Glue is to make it trivially easy to identify features and trends in data, to inform followup analysis.

This notebook shows an example of using Glue to explore crime statistics collected by the FBI (see this notebook for the scraping code). Because Glue is an interactive tool, I’ve included a screencast showing the analysis in action. All of the plots in this notebook were made with Glue, and then exported to plotly (see the bottom of this page for details).
….

FBI crime statistics are used for demonstration purposes but Glue should be generally useful for exploring multidimensional datasets.

It isn’t possible to tell how “clean” or “consistent” the FBI reported crime data may or may not be. And as the FBI itself points out, comparison between locales is fraught with peril.

June 4, 2014

Python for Data Science

Filed under: Data Science,Python — Patrick Durusau @ 6:37 pm

Python for Data Science by Joe McCarthy.

From the post:

This short primer on Python is designed to provide a rapid “on-ramp” to enable computer programmers who are already familiar with concepts and constructs in other programming languages to learn enough about Python to facilitate the effective use of open-source and proprietary Python-based machine learning and data science tools.

Uses an IPython Notebook for delivery.

This is a tutorial you will want to pass on to others! Or emulate if you want to cover another language or subject.

I first saw this in a tweet by Tom Brander.

May 30, 2014

Anaconda 2.0

Filed under: Anaconda,Python — Patrick Durusau @ 3:26 pm

Anaconda 2.0 by Corinna Bahr.

From the post:

We are pleased to announce Anaconda 2.0, the newest version of our enterprise-ready Python distribution. Available for free on Windows, Mac OS X and Linux, Anaconda includes almost 200 of the most popular numerical and scientific Python libraries used by scientists, engineers and data analysts, with an integrated and flexible installer.

From the Anaconda page:

Completely free enterprise-ready Python distribution for large-scale data processing, predictive analytics, and scientific computing

  • 195+ of the most popular Python packages for science, math, engineering, data analysis
  • Completely free – including for commercial use and even redistribution
  • Cross platform on Linux, Windows, Mac
  • Installs into a single directory and doesn’t affect other Python installations on your system. Doesn’t require root or local administrator privileges
  • Stay up-to-date by easily updating packages from our free, online repository
  • Easily switch between Python 2.6, 2.7, 3.3, 3.4, and experiment with multiple versions of libraries, using our conda package manager and its great support for virtual environments
  • Comes with tools to connect and integrate with Excel

Enjoy!

I first saw this in a tweet by Scientific Python.

May 17, 2014

Create Dataset of Users from the Twitter API

Filed under: Python,Tweets — Patrick Durusau @ 6:24 pm

Create Dataset of Users from the Twitter API by Ryan Swanson.

From the post:

This project provides an example of using python to pull user data from Twitter.

This project will create a dataset of the top 1000 twitter users for any given search topic.

As written, the project returns these values:

  1. handle – twitter username | string
  2. name – full name of the twitter user | string
  3. age – number of days the user has existed on twitter | number
  4. numOfTweets – number of tweets this user has created (includes retweets) | number
  5. hasProfile – 1 if the user has created a profile description, 0 otherwise | boolean
  6. hasPic – 1 if the user has setup a profile pic, 0 otherwise | boolean
  7. numFollowing – number of other twitter users, this user is following | number
  8. numOfFavorites – number of tweets the user has favorited | number
  9. numOfLists – number of public lists this user has been added to | number
  10. numOfFollowers – number of other users following this user | number

You need to read the Twitter documentation if you want to extend this project to capture other values, such as a list of followers or who someone is following (important for sketching communities, for example), or for tracing tweets/retweets across a community.
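To see what such a collection script might look like, here is a hedged sketch using the tweepy library; the credentials are placeholders and the field names mirror Twitter's user object, so Ryan's actual project may differ in detail.

```python
import csv
import datetime
import tweepy  # third-party Twitter API client

# Placeholder credentials; substitute your own application's keys.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

rows = []
for page in range(1, 6):  # a small sample; extend toward 1000 users as needed
    for user in api.search_users(q="data science", page=page):
        rows.append({
            "handle": user.screen_name,
            "name": user.name,
            "age": (datetime.datetime.utcnow() - user.created_at).days,
            "numOfTweets": user.statuses_count,
            "hasProfile": 1 if user.description else 0,
            "hasPic": 0 if user.default_profile_image else 1,
            "numFollowing": user.friends_count,
            "numOfFavorites": user.favourites_count,
            "numOfLists": user.listed_count,
            "numOfFollowers": user.followers_count,
        })

with open("twitter_users.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
```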

Enjoy!

May 12, 2014

Enough Machine Learning to…

Filed under: Machine Learning,Python,Scikit-Learn — Patrick Durusau @ 6:38 pm

Enough Machine Learning to Make Hacker News Readable Again by Ned Jackson Lovely.

From the description:

It’s inevitable that online communities will change, and that we’ll remember the community with a fondness that likely doesn’t accurately reflect the former reality. We’ll explore how we can take a set of articles from an online community and winnow out the stuff we feel is unworthy. We’ll explore some of the machine learning tools that are just a “pip install” away, such as scikit-learn and nltk.
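The core of such a filter is an ordinary text-classification pipeline. A minimal sketch with scikit-learn; the titles and labels are invented toy data, not Ned's training set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy training data: 1 = worth reading, 0 = not.
titles = [
    "Show HN: a tiny Lisp interpreter in 200 lines",
    "Why functional programming matters",
    "10 productivity hacks you won't believe",
    "Top celebrity startup rumors of the week",
]
labels = [1, 1, 0, 0]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression()),
])
clf.fit(titles, labels)

print(clf.predict(["A gentle introduction to type systems"]))
```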

Ned recommends you start with the map I cover at: Machine Learning Cheat Sheet (for scikit-learn).

Great practice with scikit-learn. Following this as a general outline will develop your machine learning skills!

April 28, 2014

Parsing English with 500 lines of Python

Filed under: Natural Language Processing,Python — Patrick Durusau @ 4:19 pm

Parsing English with 500 lines of Python by Matthew Honnibal.

From the post:

A syntactic parser describes a sentence’s grammatical structure, to help another application reason about it. Natural languages introduce many unexpected ambiguities, which our world-knowledge immediately filters out. A favourite example:

Definitely a post to savor if you have any interest in natural language processing.

I first saw this in a tweet by Jim Salmons.

April 27, 2014

Using NLTK for Named Entity Extraction

Filed under: Entity Extraction,Named Entity Mining,NLTK,Python — Patrick Durusau @ 7:33 pm

Using NLTK for Named Entity Extraction by Emily Daniels.

From the post:

Continuing on from the previous project, I was able to augment the functions that extract character names using NLTK’s named entity module and an example I found online, building my own custom stopwords list to run against the returned names to filter out frequently used words like “Come”, “Chapter”, and “Tell” which were caught by the named entity functions as potential characters but are in fact just terms in the story.
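If you want to try the same extraction yourself, a minimal sketch of NLTK's named entity chunker follows; the stopword list here is a tiny hypothetical stand-in for the custom one Emily describes.

```python
import nltk
# Requires the 'punkt', 'averaged_perceptron_tagger', 'maxent_ne_chunker'
# and 'words' data packages (install with nltk.download(...)).

text = "Oliver Twist was brought before Mr. Brownlow in London."

# Tiny hypothetical stopword list; the post builds a much larger custom one.
name_stopwords = {"Come", "Chapter", "Tell"}

tokens = nltk.word_tokenize(text)
tagged = nltk.pos_tag(tokens)
tree = nltk.ne_chunk(tagged)

names = set()
for subtree in tree.subtrees():
    if subtree.label() == "PERSON":
        candidate = " ".join(word for word, tag in subtree.leaves())
        if candidate not in name_stopwords:
            names.add(candidate)

print(names)  # e.g. {'Oliver Twist', 'Brownlow'}
```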

Whether you are trusting your software or using human proofing, named entity extraction is a key task in mining data.

Having extracted named entities, the harder task is uncovering relationships between them that may not be otherwise identified.

Challenging with the text of Oliver Twist but even more difficult when mining donation records and the Congressional record.

April 23, 2014

Diving into Statsmodels…

Filed under: Programming,Python — Patrick Durusau @ 1:12 pm

Diving into Statsmodels with an Intro to Python & Pydata by Skipper Seabold.

From the post:

Abhijit and Marck, the organizers of Statistical Programming DC, kindly invited me to give the talk for the April meetup on statsmodels. Statsmodels is a Python module for conducting data exploration and statistical analysis, modeling, and inference. You can find many common usage examples and a full list of features in the online documentation.

For those who were unable to make it, the entire talk is available as an IPython Notebook on github. If you aren’t familiar with the notebook, it is an incredibly useful and exciting tool. The Notebook is a web-based interactive document that allows you combine text, mathematics, graphics, and code (languages other than Python such as R, Julia, Matlab, and, even, C/C++ and Fortran are supported).

The talk introduced users to what is available in statsmodels. Then we looked at a typical statsmodels workflow, highlighting high-level features such as our integration with pandas and the use of formulas via patsy. We covered a few areas in a little more detail building off some of our example datasets. And finally we discussed some of the features we have in the pipeline for our upcoming release.
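The pandas-plus-formulas workflow Skipper mentions looks roughly like this. A minimal sketch on synthetic data; the column names are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.RandomState(42)
df = pd.DataFrame({"x": rng.normal(size=200)})
df["y"] = 2.0 + 1.5 * df["x"] + rng.normal(scale=0.3, size=200)

# patsy-style formula in, fitted OLS results out.
results = smf.ols("y ~ x", data=df).fit()
print(results.summary())
```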

I don’t know that this will help in public policy debates but it can’t hurt to have your own analysis of available data.

Of course the key to “your own analysis” is having the relevant data before meetings/discussions, etc. Request and document your request for relevant data long prior to public meetings. If you don’t get the data, be sure to get your prior request documented in the meeting record.

April 17, 2014

A Gentle Introduction to Scikit-Learn…

Filed under: Python,Scikit-Learn — Patrick Durusau @ 3:02 pm

A Gentle Introduction to Scikit-Learn: A Python Machine Learning Library by Jason Brownlee.

From the post:

If you are a Python programmer or you are looking for a robust library you can use to bring machine learning into a production system then a library that you will want to seriously consider is scikit-learn.

In this post you will get an overview of the scikit-learn library and useful references of where you can learn more.

Nothing new if you are already using Scikit-Learn but a very nice introduction with additional resources to pass on to others.

Save yourself some time in gathering materials to spread the use of Scikit-Learn. Bookmark and forward today!

April 13, 2014

Online Python Tutor (update)

Filed under: Programming,Python,Visualization — Patrick Durusau @ 1:27 pm

Online Python Tutor by Philip Guo.

From the webpage:

Online Python Tutor is a free educational tool created by Philip Guo that helps students overcome a fundamental barrier to learning programming: understanding what happens as the computer executes each line of a program’s source code. Using this tool, a teacher or student can write a Python program in the Web browser and visualize what the computer is doing step-by-step as it executes the program.

As of Dec 2013, over 500,000 people in over 165 countries have used Online Python Tutor to understand and debug their programs, often as a supplement to textbooks, lecture notes, and online programming tutorials. Over 6,000 pieces of Python code are executed and visualized every day.

Users include self-directed learners, students taking online courses from Coursera, edX, and Udacity, and professors in dozens of universities such as MIT, UC Berkeley, and the University of Washington.

If you believe in crowd wisdom, 500,000 users is a vote of confidence in the Online Python Tutor.

I first mentioned the Online Python Tutor in LEARN programming by visualizing code execution

Philip points to similar online tutors for Java, Ruby and Javascript.

Enjoy!

April 12, 2014

PyCon US 2014 – Videos (Tutorials)

Filed under: Conferences,Programming,Python — Patrick Durusau @ 2:04 pm

The tutorial videos from PyCon US 2014 are online! Talks to follow.

Tutorials arranged by author for your finding convenience:

  • Blomo, Jim mrjob: Snakes on a Hadoop
    This tutorial will take participants through basic usage of mrjob by writing analytics jobs over Yelp data. mrjob lets you easily write, run, and test distributed batch jobs in Python, on top of Hadoop. Hadoop is a MapReduce platform for processing big data but requires a fair amount of Java boilerplate. mrjob is an open source Python library written by Yelp used to process TBs of data every day.
  • Clifford, Williams, G. 0 to 00111100 with web2py
    This tutorial teaches basic web development for people who have some experience with HTML. No experience with CSS or JavaScript is required. We will build a basic web application using AJAX, web forms, and a local SQL database.
  • Grisel, Olivier; Jake, Vanderplas Exploring Machine Learning with Scikit-learn
    This tutorial will offer an introduction to the core concepts of machine learning, and how they can be easily applied in Python using Scikit-learn. We will use the scikit-learn API to introduce and explore the basic categories of machine learning problems, related topics such as feature selection and model validation, and the application of these tools to real-world data sets.
  • Love, Kenneth Getting Started with Django, a crash course

    Getting Started With Django is a well-established series of videos teaching best practices and common approaches for building web apps to people new to Django. This tutorial combines the first few lessons into a single lesson. Attendees will follow along as I start and build an entire simple web app and, network permitting, deploy it to Heroku.
  • Ma, Eric How to formulate a (science) problem and analyze it using Python code
    Are you interested in doing analysis but don’t know where to start? This tutorial is for you. Python packages & tools (IPython, scikit-learn, NetworkX) are powerful for performing data analysis. However, little is said about formulating the questions and tying these tools together to provide a holistic view of the data. This tutorial will provide you with an introduction on how this can be done.
  • Müller, Mike Descriptors and Metaclasses – Understanding and Using Python's More Advanced Features
    Descriptors and metaclasses are advanced Python features. While it is possible to write Python programs without active knowledge of them, knowing how they work provides a deeper understanding of the language. Using examples, you will learn how they work, when to use them, and when it is better not to use them. Use cases provide working code that can serve as a base for your own solutions.
  • Vanderplas, Jake; Olivier Grisel Exploring Machine Learning with Scikit-learn
    This tutorial will offer an introduction to the core concepts of machine learning, and how they can be easily applied in Python using Scikit-learn. We will use the scikit-learn API to introduce and explore the basic categories of machine learning problems, related topics such as feature selection and model validation, and the application of these tools to real-world data sets.

Tutorials or talks with multiple authors are listed under each author. (I don’t know which one you will remember.)

I am going to spin up the page for the talks so that when the videos appear, all I need to do is insert the video links.

Enjoy!

April 11, 2014

Transcribing Piano Rolls…

Filed under: Music,Python — Patrick Durusau @ 6:14 pm

Transcribing Piano Rolls, the Pythonic Way by Zulko.

From the post:

Piano rolls are these rolls of perforated paper that you put in the saloon’s mechanical piano. They were very popular until the 1950s, and the piano roll repertory counts thousands of arrangements (some by the greatest names of jazz) which have never been published in any other form.

NSA news isn’t going to subside anytime soon so I am including this post as one way to relax over the weekend. 😉

I’m not a musicologist but I think transcribing music from an image of a piano roll being played is quite fascinating.

I first saw this in a tweet from Lars Marius Garshol.

March 12, 2014

Building a tweet ranking web app using Neo4j

Filed under: Graphs,MongoDB,Neo4j,node-js,Python,Tweets — Patrick Durusau @ 7:28 pm

Building a tweet ranking web app using Neo4j by William Lyon.

From the post:


I spent this past weekend hunkered down in the basement of the local Elk’s club, working on a project for a hackathon. The project was a tweet ranking web application. The idea was to build a web app that would allow users to login with their Twitter account and view a modified version of their Twitter timeline that shows them tweets ranked by importance. Spending hours every day scrolling through your timeline to keep up with what’s happening in your Twitter network? No more, with Twizzard!

The project uses the following components:

  • Node.js web application (using Express framework)
  • MongoDB database for storing basic user data
  • Integration with Twitter API, allowing for Twitter authentication
  • Python script for fetching Twitter data from Twitter API
  • Neo4j graph database for storing Twitter network data
  • Neo4j unmanaged server extension, providing additional REST endpoint for querying / retrieving ranked timelines per user

Looks like a great project and good practice as well!

Curious what you think of the ranking of tweets:

How can we score Tweets to show users their most important Tweets? Users are more likely to be interested in tweets from users they are more similar to and from users they interact with the most. We can calculate metrics to represent these relationships between users, adding an inverse time decay function to ensure that the content at the top of their timeline stays fresh.

That’s one measure of “importance.” Being able to assign a rank would be useful as well, say for the British Library.

Do take notice of the Jaccard similarity index.
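For concreteness, Jaccard similarity and an inverse time decay are each only a couple of lines of Python. Here is a sketch; the way the two are combined into a score is an arbitrary choice of mine, not necessarily Twizzard's.

```python
import math

def jaccard(a, b):
    """Jaccard similarity of two sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / float(len(union)) if union else 0.0

def tweet_score(similarity, interactions, age_hours, half_life=24.0):
    """Affinity to the author, damped by an exponential time decay."""
    decay = math.exp(-math.log(2) * age_hours / half_life)
    return (similarity + math.log1p(interactions)) * decay

# How much do my and the author's follow lists overlap, and how does a
# six-hour-old tweet from that author rank?
my_follows = {"alice", "bob", "carol"}
author_follows = {"bob", "carol", "dave"}
sim = jaccard(my_follows, author_follows)
print(sim)                                         # 0.5
print(tweet_score(sim, interactions=3, age_hours=6))
```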

Would you say that possessing at least one identical string (id, subject identifier, subject indicator) is a form of similarity measure?

What other types of similarity measures do you think would be useful for topic maps?

I first saw this in a tweet by GraphemeDB.

Data Mining the Internet Archive Collection [Librarians Take Note]

Filed under: Archives,Data Mining,Librarian/Expert Searchers,MARC,MARCXML,Python — Patrick Durusau @ 4:48 pm

Data Mining the Internet Archive Collection by Caleb McDaniel.

From the “Lesson Goals:”

The collections of the Internet Archive (IA) include many digitized sources of interest to historians, including early JSTOR journal content, John Adams’s personal library, and the Haiti collection at the John Carter Brown Library. In short, to quote Programming Historian Ian Milligan, “The Internet Archive rocks.”

In this lesson, you’ll learn how to download files from such collections using a Python module specifically designed for the Internet Archive. You will also learn how to use another Python module designed for parsing MARC XML records, a widely used standard for formatting bibliographic metadata.

For demonstration purposes, this lesson will focus on working with the digitized version of the Anti-Slavery Collection at the Boston Public Library in Copley Square. We will first download a large collection of MARC records from this collection, and then use Python to retrieve and analyze bibliographic information about items in the collection. For example, by the end of this lesson, you will be able to create a list of every named place from which a letter in the antislavery collection was written, which you could then use for a mapping project or some other kind of analysis.
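A hedged sketch of the two modules the lesson leans on, internetarchive for fetching and pymarc for parsing MARC XML. The collection identifier, file-format name, and the use of MARC field 260 subfield a for the place are assumptions drawn from the lesson's description; treat paths and the sample size as placeholders.

```python
import internetarchive as ia  # pip install internetarchive
import pymarc                 # pip install pymarc

# Assumed collection identifier for the BPL Anti-Slavery Collection.
results = ia.search_items("collection:bplscas")
identifiers = [result["identifier"] for result in results]

places = []
for item_id in identifiers[:25]:  # small sample for illustration
    item = ia.get_item(item_id)
    # Grab any MARC XML file attached to the item (format name assumed).
    for f in item.get_files(formats=["MARC"]):
        item.download(files=[f.name])  # saves under ./<item_id>/<f.name>
        for record in pymarc.parse_xml_to_array("%s/%s" % (item_id, f.name)):
            field = record["260"]
            if field is not None and field["a"]:
                places.append(field["a"])

print(sorted(set(places)))
```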

This rocks!

In particular for librarians and library students who will already be familiar with MARC records.

Some 7,000 items from the Boston Public Library’s anti-slavery collection at Copley Square are the focus of this lesson.

That means historians have access to rich metadata, full images, and partial descriptions for thousands of antislavery letters, manuscripts, and publications.

Would original anti-slavery materials, written by actual participants, have interested you as a student? Do you think such materials would interest students now?

I first saw this in a tweet by Gregory Piatetsky.
