Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

November 8, 2014

The Concert Programmer

Filed under: Lisp,Music,Scheme — Patrick Durusau @ 4:50 pm

From the description:

From OSCON 2014: Is it possible to imagine a future where “concert programmers” are as common a fixture in the world’s auditoriums as concert pianists? In this presentation Andrew will be live-coding the generative algorithms that will be producing the music that the audience will be listening to. As Andrew is typing he will also attempt to narrate the journey, discussing the various computational and musical choices made along the way. A must see for anyone interested in creative computing.

This impressive demonstration is performed using Extempore.

From the GitHub page:

Extempore is a systems programming language designed to support the programming of real-time systems in real-time. Extempore promotes human orchestration as a meta model of real-time man-machine interaction in an increasingly distributed and environmentally aware computing context.

Extempore is designed to support a style of programming dubbed ‘cyberphysical’ programming. Cyberphysical programming supports the notion of a human programmer operating as an active agent in a real-time distributed network of environmentally aware systems. The programmer interacts with the distributed real-time system procedurally by modifying code on-the-fly. In order to achieve this level of on-the-fly interaction Extempore is designed from the ground up to support code hot-swapping across a distributed heterogeneous network, compiler as service, real-time task scheduling and a first class semantics for time.

Extempore is designed to mix the high-level expressiveness of Lisp with the low-level expressiveness of C. Extempore is a statically typed, type-inferencing language with strong temporal semantics and a flexible concurrency architecture in a completely hot-swappable runtime environment. Extempore makes extensive use of the LLVM project to provide back-end code generation across a variety of architectures.

For more detail on what the Extempore project is all about, see the Extempore philosophy.

This is for programmers only at this stage, but can you imagine the impact of “live searching,” where data structures and indexes arise from interaction with searchers? Definitely worth a long look!

I first saw this in a tweet by Alan Zucconi.

September 28, 2014

Scientists confess to sneaking Bob Dylan lyrics into their work for the past 17 years

Filed under: Humor,Music,Natural Language Processing — Patrick Durusau @ 11:03 am

Scientists confess to sneaking Bob Dylan lyrics into their work for the past 17 years by Rachel Feltman.

From the post:

While writing an article about intestinal gasses 17 years ago, Karolinska Institute researchers John Lundberg and Eddie Weitzberg couldn’t resist a punny title: “Nitric Oxide and inflammation: The answer is blowing in the wind”.

Thus began their descent down the slippery slope of Bob Dylan call-outs. While the two men never put lyrics into their peer-reviewed studies, The Local Sweden reports, they started a personal tradition of getting as many Dylan quotes as possible into everything else they wrote — articles about other people’s work, editorials, book introductions, and so on.

An amusing illustration of one difficulty in natural language processing, allusion.

The Wikipedia article on allusion summarizes one typology of allusion (R. F. Thomas, “Virgil’s Georgics and the art of reference” Harvard Studies in Classical Philology 90 (1986) pp 171–98) as:

  1. Casual Reference, “the use of language which recalls a specific antecedent, but only in a general sense” that is relatively unimportant to the new context;
  2. Single Reference, in which the hearer or reader is intended to “recall the context of the model and apply that context to the new situation”; such a specific single reference in Virgil, according to Thomas, is a means of “making connections or conveying ideas on a level of intense subtlety”;
  3. Self-Reference, where the locus is in the poet’s own work;
  4. Corrective Allusion, where the imitation is clearly in opposition to the original source’s intentions;
  5. Apparent Reference, “which seems clearly to recall a specific model but which on closer inspection frustrates that intention”; and
  6. Multiple Reference or Conflation, which refers in various ways simultaneously to several sources, fusing and transforming the cultural traditions.

(emphasis in original)

Allusion is a sub-part of the larger subject of intertextuality.

Think of the difficulties that allusions introduce into NLP. With “Dylan lyrics meaning” as a quoted search string, I get over 60,000 “hits” consisting of widely varying interpretations. Add to that the interpretation of a Dylan allusion in a different context and you have a truly worthy NLP problem.

Two questions:

The Dylan post is one example of allusion. Is there any literature or sense of how much allusion occurs in specific types of writing?

Any literature on NLP techniques for dealing with allusion in general?

I first saw this in a tweet by Carl Anderson.

August 24, 2014

Introducing Riffmuse

Filed under: Clojure,Music — Patrick Durusau @ 2:12 pm

Introducing Riffmuse by Dave Yarwood.

From the post:

I’ve written a simple command line app in Clojure that will take a musical scale as a command line argument and algorithmically generate a short musical idea or “riff” using the notes in that scale. I call it Riffmuse.

Here’s how Riffmuse works in a nutshell: it takes the command line argument(s), figures out what scale you want (For this, I used the awesome parser generator library Instaparse to create a parser that allows a flexible syntax in specifying scales. C major can be represented as, e.g., “C major,” “CMAJ” or even just “c”), determines what notes are in that scale, comes up with a rhythmic pattern for the notes (represented as 16 “slots” that can either have a note or no note), and then fills the slots with notes from the scale you’ve specified.

On the theory that you never know what will capture someone’s interest, this is a post that needs to be shared. It may spark an interest in some future Clojure or music rock star!
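
To make the 16-slot idea concrete, here is a toy sketch in plain Clojure. It is not Riffmuse’s actual code (Riffmuse uses Instaparse and a richer note model); the scale, the note names, and the rest probability are placeholders of my own.

  (def c-major ["C" "D" "E" "F" "G" "A" "B"])

  (defn riff
    "Return a 16-slot riff: each slot holds a random note from the scale or nil (a rest)."
    [scale]
    (vec (repeatedly 16 #(when (< (rand) 0.6) (rand-nth scale)))))

  (riff c-major)
  ;; => ["C" nil "G" "E" nil nil "A" "C" nil "D" "B" nil "E" nil "G" "C"]  (output varies)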

I first saw this in a tweet by coderpost.

July 29, 2014

MusicGraph

Filed under: Graphs,Music,Titan — Patrick Durusau @ 3:31 pm

Senzari Unveils MusicGraph.ai At The GraphLab Conference 2014

From the post:

Senzari introduced MusicGraph.ai, the first web-based graph analytics and intelligence engine for the music industry at the GraphLab Conference 2014, the annual gathering of leading data scientists and machine learning experts. MusicGraph.ai will serve as the primary dashboard for MusicGraph, where API clients will be able to view detailed reports on their API usage and manage their account. More importantly, through this dashboard, they will also be able to access a comprehensive library of algorithms to extract even more value from the world’s most extensive repository of music data.

“We believe MusicGraph.ai will forever change the music intelligence industry, as it allows scientists to execute powerful analytics and machine learning algorithms at scale on a huge data-set without the need to write a single line of code”

Free access to MusicGraph at: http://developer.musicgraph.com

I originally encountered MusicGraph because of its use of the Titan graph database. BTW, GraphLab and GraphX are also available for data analytics.

From the MusicGraph website:

MusicGraph is the world’s first “natural graph” for music, which represents the real-world structure of the musical universe. Information contained within it includes data related to the relationship between millions of artists, albums, and songs. Also included are detailed acoustical and lyrical features, as well as real-time statistics across artists and their music across many sources.

MusicGraph has over 600 million vertices and 1 billion edges, but more importantly it has over 7 billion properties, which allows for deep knowledge extraction through various machine learning approaches.

Sigh, why can’t people say: “…it represents a useful view of the musical universe…,” instead of “…which represents the real-world structure of the musical universe”? All representations are views of some observer. (full stop) If you think otherwise, please return your college and graduate degrees for a refund.

Yes, I know political leaders use “real world” all the time. But they are trying to deceive you into accepting their view as beyond question because it represents the “real world.” Don’t be deceived. Their views are no more “real world” based than yours are. Which is to say, not at all. Defend your view, but know that it is a view.

I first saw this in a tweet by Gregory Piatetsky.

July 23, 2014

Creating music with Clojure and Overtone

Filed under: Clojure,Music — Patrick Durusau @ 2:52 pm

Creating music with Clojure and Overtone by Chris Ford.

From the description:

Chris Ford will show how to make music with Clojure, starting with the basic building block of sound, the sine wave, and gradually accumulating abstractions culminating in a canon by Johann Sebastian Bach.

Very impressive! You will pick up some music theory, details on sound that you have forgotten since high school physics, and perhaps yet another reason to learn Clojure!
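
If you want to follow along at a REPL, the first step of the talk, a bare sine wave, looks roughly like this in Overtone. This is only a sketch built on the documented definst/sin-osc/env-gen calls; the instrument name and envelope settings are my own choices.

  (use 'overtone.live)   ; boots the SuperCollider server

  (definst beep [freq 440]
    ;; a sine wave shaped by a short percussive envelope, freed when done
    (* (env-gen (perc 0.01 0.4) :action FREE)
       (sin-osc freq)))

  (beep 440)   ; concert A
  (beep 660)   ; a fifth above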

Chris mentions: Music For Geeks And Nerds (Python) as a resource.

June 5, 2014

Leipzig from scratch

Filed under: Clojure,Music — Patrick Durusau @ 6:53 pm

Leipzig from scratch (GitHub) by Chris Ford.

From the description:

I show you how to make a simple track using Leipzig, Overtone and Clojure, from “lein new” onwards.

And you can type along with the video!

Enjoy!

May 25, 2014

Real Time Robot Dance Party

Filed under: Clojure,Functional Programming,Music — Patrick Durusau @ 2:43 pm

From the description:

From the 2014 Solid Conference: In this day and age, we usually consider robots to be utilitarian problem solvers. But there is another use for robots: artistic expression.

In this fun, energetic talk, we will explore controlling multiple robots in real time. Roombas sway to gentle computer generated music, while Sphero balls roll with flashing lights. This robot jam will culminate in a spectacular finale when the AR Drones fly in to join the dance.

Using Emacs, Overtone, and Clojure, a live robot dance by Carin Meier and Peter Shanley.

A very impressive demonstration but of what I am not exactly sure. Which is of course, a perfect demonstration!

Enjoy!

I first saw this in a tweet by Michael Klishin.

May 15, 2014

Digital Libraries For Musicology

Filed under: Digital Library,Music,Music Retrieval — Patrick Durusau @ 12:53 pm

The 1st International Digital Libraries for Musicology workshop (DLfM 2014)

12th September 2014 (full day), London, UK

in conjunction with the ACM/IEEE Digital Libraries conference 2014

From the call for papers:

BACKGROUND

Many Digital Libraries have long offered facilities to provide multimedia content, including music. However there is now an ever more urgent need to specifically support the distinct multiple forms of music, the links between them, and the surrounding scholarly context, as required by the transformed and extended methods being applied to musicology and the wider Digital Humanities.

The Digital Libraries for Musicology (DLfM) workshop presents a venue specifically for those working on, and with, Digital Library systems and content in the domain of music and musicology. This includes Music Digital Library systems, their application and use in musicology, technologies for enhanced access and organisation of musics in Digital Libraries, bibliographic and metadata for music, intersections with music Linked Data, and the challenges of working with the multiple representations of music across large-scale digital collections such as the Internet Archive and HathiTrust.

IMPORTANT DATES

Paper submission deadline: 27th June 2014 (23:59 UTC-11)
Notification of acceptance: 30th July 2014
Registration deadline for one author per paper: 11th August 2014 (14:00 UTC)
Camera ready submission deadline: 11th August 2014 (14:00 UTC)

If you want a feel for the complexity of music as a retrieval subject, consult the various proposals at: Music markup languages, which are only some of the possible music encoding languages.

It is hard to say which domains are more “complex” than others in terms of encoding and subject identity, but it is safe to say that music falls towards the complex end of the scale. (sorry)

I first saw this in a tweet by Misanderasaurus Rex.

May 5, 2014

All of Bach

Filed under: Music,Music Retrieval — Patrick Durusau @ 3:16 pm

All of Bach

From the webpage:

Every week, you will find a new recording here of one of Johann Sebastian Bach’s 1080 works, performed by The Netherlands Bach Society and many guest musicians.

Six (6) works posted, only another one thousand and seventy-four (1074) to go. 😉

Music is an area with well-known connections to many other domains: people, places, history, literature, religion, and more. Not that other domains lack such connections, but music seems particularly rich in them. That richness also includes performers, places of performance, reactions to performances, reviews of performances, to say nothing of the instruments and the music itself.

A consequence of this tapestry of connections is that annotating music can draw from almost all known forms of recorded knowledge from an unlimited number of domains and perspectives.

Rather than the clamor of arbitrary links one after the other about a performance or its music, a topic map can support multiple, coherent views of any particular work. Perhaps ranging from the most recent review to the oldest known review of a work. Or exploding one review into historical context. Or exploring the richness of the composition proper.

The advantage of a topic map being that you don’t have to favor one view to the exclusion of another.

April 22, 2014

MIDI notes and enharmonic equivalence

Filed under: Clojure,Music — Patrick Durusau @ 6:19 pm

MIDI notes and enharmonic equivalence – towards unequal temperaments in Clojure by Tim Regan.

From the post:

“Positiv Division, Manila Cathedral Pipe Organ” by Cealwyn on flickr

One current ‘when-I-get-spare-time-in-the-evening’ project is to explore how different keys sounded before the advent of equal temperament. Partly out of interest and partly because whenever I hear/read discussions of how keys got their distinctive characteristics (for example in answers to this question on the Musical Practise and Performance Stack Exchange) temperament is raised as an issue or explanation.

Having recently enjoyed Karsten Schmidt‘s Clojure workshop at Resonate 2014, Clojure and Overtone seem a good place to start. My first steps are with the easiest non-equal temperament to get my head around, the Pythagorean Temperament. My (albeit limited) understanding of temperaments has been helped enormously by the amazing chapters on the subject in David Benson’s book Music, a mathematical offering.

The pipes in the image caught my attention and reminded me of Jim Mason and his long association with pipe organs. Pipe organs are incredibly complex instruments, and Jim was working on a topic map of the relationships between a pipe organ’s many parts.

Well, that and enharmonic equivalence. 😉

Wikipedia avers (sans the hyperlinks):

In modern musical notation and tuning, an enharmonic equivalent is a note, interval, or key signature that is equivalent to some other note, interval, or key signature but “spelled”, or named differently.

Use that definition with caution as the Wikipedia article goes on to state that the meaning of enharmonic equivalent has changed several times in history and across tuning systems.

Tim’s post will give you a start towards exploring enharmonic equivalence for yourself.

Clojure is not a substitute for a musician but you can explore music while waiting for a musician to arrive.
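
As a starting point for that exploration, the arithmetic of Pythagorean tuning fits in a few lines of Clojure: stack perfect fifths (a 3:2 frequency ratio) and fold each result back into a single octave. This is a sketch of the arithmetic only, not Tim’s code.

  (defn fold-into-octave [r]
    (cond (>= r 2.0) (recur (/ r 2.0))
          (< r 1.0)  (recur (* r 2.0))
          :else r))

  ;; seven pitches generated from a chain of perfect fifths, folded into one octave
  (def pythagorean-ratios
    (sort (map #(fold-into-octave (Math/pow 1.5 %)) (range 7))))

  ;; frequencies starting from A 440
  (map #(* 440 %) pythagorean-ratios)

  ;; the Pythagorean comma: twelve fifths overshoot seven octaves by about 1.4%
  (/ (Math/pow 1.5 12) (Math/pow 2 7))   ; => 1.0136...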

April 11, 2014

Transcribing Piano Rolls…

Filed under: Music,Python — Patrick Durusau @ 6:14 pm

Transcribing Piano Rolls, the Pythonic Way by Zulko.

From the post:

Piano rolls are these rolls of perforated paper that you put in the saloon’s mechanical piano. They were very popular until the 1950s, and the piano roll repertory counts thousands of arrangements (some by the greatest names of jazz) which have never been published in any other form.

NSA news isn’t going to subside anytime soon, so I am including this post as one way to relax over the weekend. 😉

I’m not a musicologist, but I think transcribing music from an image of a piano roll being played is quite fascinating.
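
The core of the transcription problem is easy to sketch even without the image processing: treat the scanned roll as a grid of 0/1 pixels, one column per pitch, and read each vertical run of perforations as a note. A toy version in Clojure (nothing to do with Zulko’s actual Python code; the grid, pitch numbering, and thresholding are all assumed):

  (defn runs
    "Vertical runs of 1s in one column, as seqs of row indices."
    [column]
    (->> (map-indexed vector column)
         (partition-by (comp zero? second))
         (filter #(= 1 (second (first %))))
         (map #(map first %))))

  (defn transcribe
    "For each pitch column, return {:pitch col :onset row :duration rows}."
    [grid]
    (for [col (range (count (first grid)))
          run (runs (map #(nth % col) grid))]
      {:pitch col :onset (first run) :duration (count run)}))

  (transcribe [[0 1 0]
               [0 1 0]
               [1 0 0]
               [1 0 1]])
  ;; => ({:pitch 0, :onset 2, :duration 2} {:pitch 1, :onset 0, :duration 2}
  ;;     {:pitch 2, :onset 3, :duration 1})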

I first saw this in a tweet from Lars Marius Garshol.

March 6, 2014

The “Tube” as History of Music

Filed under: Maps,Music,Visualization — Patrick Durusau @ 9:14 pm

The history of music shown by the London Underground

I have serious difficulties with the selection of music to be mapped, but that should not diminish your enjoyment of this map if you find it more to your taste.

Great technique if somewhat lacking in content. 😉

It does illustrate the point that every map is from a point of view, even if it is an incorrect one (IMHO).

I first saw this in a tweet by The O.C.R.

February 7, 2014

Making Music with Clojure – An Introduction to MIDI

Filed under: Clojure,Music — Patrick Durusau @ 7:22 pm

Making Music with Clojure – An Introduction to MIDI by @taylodl.

From the post:

This post takes a break from Functional JavaScript and has a little fun making music. We’re going to be using Clojure, a Lisp language for the JVM, so that we can utilize the JVM’s MIDI implementation. No experience with music or MIDI is required though a familiarity with Clojure or any other Lisp is helpful.

I’m using Clojure for its functional similarities to JavaScript—the syntax of the languages is different but the underlying programming philosophies are similar. For this post I’m assuming you already have Clojure and Leiningen installed on your system. See Clojure Quick Start for everything you need to get Clojure and Leiningen installed and running on your system.

Once you have everything installed you can create a new midi-sequencer project by executing:

(…)

Accessibility, that is what I like about this post. Since I am innocent of any ability to play music, the history of music remains silent for me unless I can find a recording. Or program a computer to perform it.

MIDI production isn’t the same thing as a live or recorded performance by a real musician, but it is better than a silent page.
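
The JVM MIDI implementation the post builds on is only a few interop calls away. A minimal sketch (middle C for half a second; the note number and velocity are arbitrary):

  (import '(javax.sound.midi MidiSystem))

  (let [synth   (doto (MidiSystem/getSynthesizer) .open)
        channel (first (.getChannels synth))]
    (.noteOn channel 60 90)    ; middle C, velocity 90
    (Thread/sleep 500)
    (.noteOff channel 60)
    (.close synth))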

Enjoy!

PS: Not all extant music is recorded or performed. Some resources to explore:

Digital Image Archive of Medieval Music

Music Manuscripts (British Library)

Music Manuscripts Online (The Morgan Library & Museum, 42,000 pages)

Wikipedia list of Online Digital Musical Document Libraries

January 27, 2014

The Sonification Handbook

Filed under: BigData,Data Mining,Music,Sonification,Sound — Patrick Durusau @ 5:26 pm

The Sonification Handbook. Edited by Thomas Hermann, Andy Hunt, John G. Neuhoff. (Logos Publishing House, Berlin 2011, 586 pages, 1st edition (11/2011), ISBN 978-3-8325-2819-5)

Summary:

This book is a comprehensive introductory presentation of the key research areas in the interdisciplinary fields of sonification and auditory display. Chapters are written by leading experts, providing a wide-range coverage of the central issues, and can be read from start to finish, or dipped into as required (like a smorgasbord menu).

Sonification conveys information by using non-speech sounds. To listen to data as sound and noise can be a surprising new experience with diverse applications ranging from novel interfaces for visually impaired people to data analysis problems in many scientific fields.

This book gives a solid introduction to the field of auditory display, the techniques for sonification, suitable technologies for developing sonification algorithms, and the most promising application areas. The book is accompanied by the online repository of sound examples.

The text has this advice for readers:

The Sonification Handbook is intended to be a resource for lectures, a textbook, a reference, and an inspiring book. One important objective was to enable a highly vivid experience for the reader, by interleaving as many sound examples and interaction videos as possible. We strongly recommend making use of these media. A text on auditory display without listening to the sounds would resemble a book on visualization without any pictures. When reading the pdf on screen, the sound example names link directly to the corresponding website at http://sonification.de/handbook. The margin symbol is also an active link to the chapter’s main page with supplementary material. Readers of the printed book are asked to check this website manually.

Did I mention the entire text, all 586 pages, can be downloaded for free?

Here’s an interesting idea: what if you had several dozen workers listening to sonified versions of the same data stream, each listening along a different dimension for changes in pitch or tone? Each worker signals when a change is heard. When some N of the dimensions all change at the same time, the data set is pulled at that point for further investigation.

I will regret suggesting that idea. Someone from a leading patent holder will boilerplate an application together tomorrow and file it with the patent office. 😉

NASA’s Voyager Data Is Now a Musical

Filed under: Music,Sonification,Sound — Patrick Durusau @ 5:01 pm

NASA’s Voyager Data Is Now a Musical by Victoria Turk.

From the post:

You might think that big data would sound like so many binary beeps, but a project manager at Géant in the UK has turned 320,000 measurements from NASA Voyager equipment into a classically-inspired track. The company describes it as “an up-tempo string and piano orchestral piece.”

Domenico Vicinanza, who is a trained musician as well as a physicist, took measurements from the cosmic ray detectors on Voyager 1 and Voyager 2 at hour intervals, and converted it into two melodies. The result is a duet: the data sets from the two spacecraft play off each other throughout to create a rather charming harmony. …

Data sonification, the technique of representing data points with sound, makes it easier to spot trends, peaks, patterns, and anomalies in a huge data set without having to pore over the numbers.

Some data sonification resources:

audiolyzR: Data sonification with R

Georgia Tech Sonification Lab

Sonification Sandbox

Sonification.de

I suspect that sonification is a much better way to review monotonous data for any unusual entries.

My noticing an OMB calculation that multiplied a budget item by zero (0) and produced a larger number was just chance. Had math operations been set to music, I am sure that would have struck a discordant note!

Human eyesight is superior to computers for galaxy classification.

Human hearing as a superior way to explore massive datasets is a promising avenue of research.
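
Even a crude mapping makes the point. A minimal sketch in Clojure, scaling a data series onto a MIDI pitch range so that an outlier (say, that OMB entry) leaps out as a high note; the pitch range and the data are made up:

  (defn sonify
    "Linearly map each value in xs onto MIDI note numbers lo..hi."
    [xs lo hi]
    (let [mn   (apply min xs)
          mx   (apply max xs)
          span (double (max 1e-9 (- mx mn)))]
      (map #(int (+ lo (* (- hi lo) (/ (- % mn) span)))) xs)))

  (sonify [0 1 2 3 100 4 5] 48 84)
  ;; => (48 48 48 49 84 49 49)   ; the outlier is the only note that leaps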

January 14, 2014

Algorithmic Music Discovery at Spotify

Filed under: Algorithms,Machine Learning,Matrix,Music,Music Retrieval,Python — Patrick Durusau @ 3:19 pm

Algorithmic Music Discovery at Spotify by Chris Johnson.

From the description:

In this presentation I introduce various Machine Learning methods that we utilize for music recommendations and discovery at Spotify. Specifically, I focus on Implicit Matrix Factorization for Collaborative Filtering, how to implement a small scale version using python, numpy, and scipy, as well as how to scale up to 20 Million users and 24 Million songs using Hadoop and Spark.

Among a number of interesting points, Chris points out differences between movie and music data.

One difference is that songs are consumed over and over again. Another is that users rate movies but “vote” by their streaming behavior on songs.*

Which leads to Chris’ main point: implicit matrix factorization. Code. The source code page points to: Collaborative Filtering for Implicit Feedback Datasets by Yifan Hu, Yehuda Koren, and Chris Volinsky.
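
For reference, the objective from that paper (in its notation): p_{ui} is 1 if user u has streamed song i and 0 otherwise, r_{ui} is the play count, c_{ui} = 1 + \alpha r_{ui} is the confidence, and the factorization minimizes

  \sum_{u,i} c_{ui} (p_{ui} - x_u^T y_i)^2 + \lambda ( \sum_u \|x_u\|^2 + \sum_i \|y_i\|^2 )

Alternating least squares solves for the user vectors x_u with the item vectors y_i held fixed, then the reverse, which is also why it parallelizes well on Hadoop or Spark.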

Scaling that process is represented in blocks for Hadoop and Spark.

* I suspect that “behavior” is more reliable than “ratings” from the same user. My reasoning: ratings are more likely to be subject to social influences. I don’t have any research at my fingertips on that issue. Do you?

January 12, 2014

Musopen

Filed under: Data,Music — Patrick Durusau @ 8:53 pm

Musopen

From the webpage:

Musopen (www.musopen.org) is a 501(c)(3) non-profit focused on improving access and exposure to music by creating free resources and educational materials. We provide recordings, sheet music, and textbooks to the public for free, without copyright restrictions. Put simply, our mission is to set music free.

The New Grove Dictionary of Music and Musicians it is not, but losing our musical heritage did not happen overnight.

Nor will winning it back.

Contribute to and support Musopen.

December 22, 2013

Spectrograms with Overtone

Filed under: Clojure,Music — Patrick Durusau @ 8:55 pm

Spectrograms with Overtone by mikera7.

From the post:

spectrograms are fascinating: the ability to visualise sound in terms of its constituent frequencies. I’ve been playing with Overtone lately, so decided to create a mini-library to produce spectrograms from Overtone buffers.

[Image: spectrogram of part of a trumpet fanfare]

This particular image is a visualisation of part of a trumpet fanfare. I like it because you can clearly see the punctuation of the different notes, and the range of strong harmonics above the base note. Read on for some more details on how this works.

Spectrograms (Wikipedia), Reading Spectrograms, and Spek – Acoustic Spectrum Analyser are just a few of the online resources on spectrograms.

Here’s your chance to experiment with a widely used technique (spectrograms) and practice with Clojure as well.

A win-win situation!
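
If you want to see what a spectrogram actually computes before reaching for Overtone buffers, here is a from-scratch (and deliberately slow) sketch: slice the samples into frames and take the magnitude of a naive DFT of each frame. The frame size and test signal are arbitrary.

  (defn dft-magnitudes
    "Magnitudes of the first n/2 DFT bins of one frame of samples."
    [frame]
    (let [n (count frame)]
      (for [k (range (quot n 2))]
        (let [angles (map #(/ (* 2.0 Math/PI k %) n) (range n))
              re     (reduce + (map * frame (map #(Math/cos %) angles)))
              im     (reduce + (map * frame (map #(Math/sin %) angles)))]
          (Math/sqrt (+ (* re re) (* im im)))))))

  (defn spectrogram
    "One magnitude spectrum per frame-size slice of the samples."
    [samples frame-size]
    (map dft-magnitudes (partition frame-size samples)))

  ;; a 440 Hz sine at an 8 kHz sample rate: the energy lands near bin 14 (440 * 256 / 8000)
  (def sine (map #(Math/sin (/ (* 2.0 Math/PI 440 %) 8000)) (range 2048)))
  (first (spectrogram sine 256))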

December 4, 2013

MusicGraph

Filed under: Graphs,Marketing,Music,Titan — Patrick Durusau @ 4:30 pm

Senzari releases a searchable MusicGraph service for making musical connections by Josh Ong.

From the post:

Music data company Senzari has launched MusicGraph, a new service for discovering music by searching through a graph of over a billion music-related data points.

MusicGraph includes a consumer-facing version and an API that can be used for commercial purposes. Senzari built the graph while working on the recommendation engine for its own streaming service, which has been rebranded as Wahwah.

Interestingly, MusicGraph is launching first on Firefox OS before coming to iOS, Android and Windows Phone in “the coming weeks.”

You know how much I try to avoid “practical” applications but when I saw aureliusgraphs tweet this as using the Titan database, I just had to mention it. 😉

I think this announcement underlines something a comment said recently about promoting topic maps for what they do, not because they are topic maps.

Here, graphs are being promoted as the source of a great user experience, not because they are fun, powerful, etc. (all of which is also true).

November 29, 2013

Overtone 0.9.0

Filed under: Clojure,Functional Programming,Mathematics,Music,Music Retrieval — Patrick Durusau @ 9:24 pm

Overtone 0.9.0

From the webpage:

Overtone is an Open Source toolkit for designing synthesizers and collaborating with music. It provides:

  • A Clojure API to the SuperCollider synthesis engine
  • A growing library of musical functions (scales, chords, rhythms, arpeggiators, etc.)
  • Metronome and timing system to support live-programming and sequencing
  • Plug and play MIDI device I/O
  • A full Open Sound Control (OSC) client and server implementation.
  • Pre-cache – a system for locally caching external assets such as .wav files
  • An API for querying and fetching sounds from http://freesound.org
  • A global concurrent event stream

When I saw the announcement for Overtone 0.9.0 I was reminded it was almost a year ago that I posted: Functional Composition [Overtone/Clojure].

Hard to say if Overtone will be of more interest to musicians who want to learn functional programming, functional programmers who want a deeper understanding of music, or people for whom the usual baseball, book publishing, web pages, etc., examples just don’t cut it. 😉
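
The metronome and timing system in the feature list above is what makes live sequencing work. A minimal sketch using the standard Overtone pattern of a player function that schedules itself one beat ahead (metronome, at, and apply-by are documented Overtone calls; the instrument and tempo are my own choices):

  (use 'overtone.live)

  (definst tick [freq 880]
    (* (env-gen (perc 0.001 0.1) :action FREE)
       (sin-osc freq)))

  (def metro (metronome 120))   ; 120 beats per minute

  (defn player
    "Play a tick on every beat, scheduling the next call one beat ahead."
    [beat]
    (at (metro beat) (tick))
    (apply-by (metro (inc beat)) #'player [(inc beat)]))

  (player (metro))   ; start on the current beat
  ;; (stop)          ; silence everything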

While looking for holiday music for Overtone, I did stumble across:

Music: a Mathematical Offering by Dave Benson.

At over 500 pages, this living text is also for sale in hard copy by Cambridge University Press. Do us all a favor and if the electronic version proves useful to you, ask your library to order a hard copy. And/or recommend it to others. That will encourage presses to continue to allow electronic versions of hard copy materials to circulate freely.

If you are interested in the mathematics that underlie music or need to know more for use in music retrieval, this is a good place to start.

I struck out on finding Christmas music written with Overtone.

I did find this video:

I would deeply appreciate a pointer to Christmas music with or for Overtone.

Thanks!


Update: @Overtone tweeted this link for Christmas music: …/overtone/examples/compositions/bells.clj.

Others?

November 7, 2013

Musicbrainz in Neo4j – Part 1

Filed under: Cypher,Graphs,Music,Music Retrieval,Neo4j — Patrick Durusau @ 9:06 am

Musicbrainz in Neo4j – Part 1 by Paul Tremberth.

From the post:

What is MusicBrainz?

Quoting Wikipedia, MusicBrainz is an “open content music database [that] was founded in response to the restrictions placed on the CDDB.(…) MusicBrainz captures information about artists, their recorded works, and the relationships between them.”

Anyone can browse the database at http://musicbrainz.org/. If you create an account with them you can contribute new data or fix existing records’ details and track lengths, send in cover art scans of your favorite albums, etc. Edits are peer reviewed, and any member can vote up or down. There are a lot of similarities with Wikipedia.

With this first post, we want to show you how to import the Musicbrainz data into Neo4j for some further analysis with Cypher in the second post. See below for what we will end up with:

MusicBrainz data

MusicBrainz currently has around 1000 active users, nearly 800,000 artists, 75,000 record labels, around 1,200,000 releases, more than 12,000,000 tracks, and just under 2,000,000 URLs for these entities (Wikipedia pages, official homepages, YouTube channels, etc.). Daily fixes by the community make their data probably the freshest and most accurate on the web.
You can check the current numbers here and here.

This rocks!

Interesting data, walk through how to load the data into Neo4j and the promise of more interesting activities to follow.

However, I urge caution on showing this to family members. 😉

You may wind up scripting daily data updates and teaching Cypher to family members and no doubt their friends.

Up to you.

I first saw this in a tweet by Peter Neubauer.

August 29, 2013

Parsing arbitrary Text-based Guitar Tab…

Filed under: ElasticSearch,Music,Music Retrieval — Patrick Durusau @ 6:39 pm

RiffBank – Parsing arbitrary Text-based Guitar Tab into an Indexable and Queryable “RiffCode” for ElasticSearch
by Ryan Robitalle.

Guitar tab is a form of tablature, a form of music notation that records finger positions.
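
To get a feel for the parsing problem, one line of ASCII tab can be reduced to string/fret/position facts, the kind of tokens you could then index. This is a toy, not Ryan’s RiffCode:

  (defn parse-tab-line
    "Parse a line like \"e|--0--2--3--|\" into {:string .. :fret .. :col ..} maps.
     Single-digit frets only; frets like 12 need a smarter tokenizer."
    [line]
    (let [string-name (subs line 0 1)
          body        (subs line 2)]
      (keep-indexed
        (fn [col ch]
          (when (Character/isDigit ch)
            {:string string-name :fret (Character/digit ch 10) :col col}))
        body)))

  (parse-tab-line "e|--0--2--3--|")
  ;; => ({:string "e", :fret 0, :col 2} {:string "e", :fret 2, :col 5}
  ;;     {:string "e", :fret 3, :col 8})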

Surfing just briefly, there appears to be a lot of music available in “tab” format.

Deeply interesting post that will take some time to work through.

It is one of those odd things that may suddenly turn out to be very relevant (or not) in another domain.

Looking forward to spending some time with tablature data.

August 28, 2013

Computer Music Journal

Filed under: Music — Patrick Durusau @ 4:29 pm

Computer Music Journal

After seeing Chris Ford’s presentation, I went looking for other computer music related material.

The Computer Music Journal is a pay-per-view journal out of MIT.

The Computer Music Journal link at the top of this post is a companion site that has a computer music bibliography and computer music links, organized by subjects.

If you are interested in computer music, this could be a very rich resource.

Functional Composition [Coding and Music]

Filed under: Clojure,Functional Programming,Music — Patrick Durusau @ 4:12 pm

Functional Composition by Chris Ford.

From the summary:

Chris Ford shows how to make music starting with the basic building block of sound, the sine wave, and gradually accumulating abstractions culminating in a canon by Johann Sebastian Bach.

You can grab the source on GitHub.

Truly a performance presentation!

Literally.

Chris not only plays music with an instrument, he also writes code to alter music as it is being played on a loop.

Steady hands if nothing else in front of a live audience!

Perhaps a great way to interest people in functional programming.

Certainly a great way to encode historical music that is hard to find performed.

August 16, 2013

Semantic Computing of Moods…

Filed under: Music,Music Retrieval,Semantics,Tagging — Patrick Durusau @ 4:46 pm

Semantic Computing of Moods Based on Tags in Social Media of Music by Pasi Saari, Tuomas Eerola. (IEEE Transactions on Knowledge and Data Engineering, 2013; : 1 DOI: 10.1109/TKDE.2013.128)

Abstract:

Social tags inherent in online music services such as Last.fm provide a rich source of information on musical moods. The abundance of social tags makes this data highly beneficial for developing techniques to manage and retrieve mood information, and enables study of the relationships between music content and mood representations with data substantially larger than that available for conventional emotion research. However, no systematic assessment has been done on the accuracy of social tags and derived semantic models at capturing mood information in music. We propose a novel technique called Affective Circumplex Transformation (ACT) for representing the moods of music tracks in an interpretable and robust fashion based on semantic computing of social tags and research in emotion modeling. We validate the technique by predicting listener ratings of moods in music tracks, and compare the results to prediction with the Vector Space Model (VSM), Singular Value Decomposition (SVD), Nonnegative Matrix Factorization (NMF), and Probabilistic Latent Semantic Analysis (PLSA). The results show that ACT consistently outperforms the baseline techniques, and its performance is robust against a low number of track-level mood tags. The results give validity and analytical insights for harnessing millions of music tracks and associated mood data available through social tags in application development.

These results make me wonder whether the results of tagging represent the average semantic resolution that users want.

Obviously a musician or musicologist would want far finer and sharper distinctions, at least for music of interest to them. Or substitute the domain of your choice. Domain experts want precision, while the average user muddles along with coarser divisions.

We already know from Karen Drabenstott’s work (Subject Headings and the Semantic Web) that library classification systems are too complex for the average user and even most librarians.

On the other hand, we all have some sense of the wasted time and effort caused by the uncharted semantic sea where Google and others practice catch and release with semantic data.

Some of the unanswered questions that remain:

How much semantic detail is enough?

For which domains?

Who will pay for gathering it?

What economic model is best?

July 25, 2013

Classification accuracy is not enough

Filed under: Classification,Machine Learning,Music — Patrick Durusau @ 4:41 pm

Classification accuracy is not enough by Bob L. Sturm.

From the post:

Finally published is my article, Classification accuracy is not enough: On the evaluation of music genre recognition systems. I made it completely open access and free for anyone.

Some background: In my paper Two Systems for Automatic Music Genre Recognition: What Are They Really Recognizing?, I perform three different experiments to determine how well two state-of-the-art systems for music genre recognition are recognizing genre. In the first experiment, I find the two systems are consistently making extremely bad misclassifications. In the second experiment, I find the two systems can be fooled by such simple transformations that they cannot possibly be listening to the music. In the third experiment, I find their internal models of the genres do not match how humans think the genres sound. Hence, it appears that the systems are not recognizing genre in the least. However, this seems to contradict the fact that they achieve extremely good classification accuracies, and have been touted as superior solutions in the literature. Turns out, Classification accuracy is not enough!

(…)

I look closely at what kinds of mistakes the systems make, and find they all make very poor yet “confident” mistakes. I demonstrate the latter by looking at the decision statistics of the systems. There is little difference for a system between making a correct classification, and an incorrect one. To judge how poor the mistakes are, I test with humans whether the labels selected by the classifiers describe the music. Test subjects listen to a music excerpt and select between two labels which they think was given by a human. Not one of the systems fooled anyone. Hence, while all the systems had good classification accuracies, good precisions, recalls, and F-scores, and confusion matrices that appeared to make sense, a deeper evaluation shows that none of them are recognizing genre, and thus that none of them are even addressing the problem. (They are all horses, making decisions based on irrelevant but confounded factors.)

(…)

If you have ever wondered what a detailed review of classification efforts would look like, you need wonder no longer!

Bob’s Two Systems for Automatic Music Genre Recognition: What Are They Really Recognizing? is thirty-six (36) pages that examines efforts at music genre recognition (MGR) in detail.

I would highly recommend this paper as a demonstration of good research technique.

July 24, 2013

Crafting Linked Open Data for Cultural Heritage:…

Filed under: Linked Data,Music,Music Retrieval — Patrick Durusau @ 1:17 pm

Crafting Linked Open Data for Cultural Heritage: Mapping and Curation Tools for the Linked Jazz Project by M. Cristina Pattuelli, Matt Miller, Leanora Lange, Sean Fitzell, and Carolyn Li-Madeo.

Abstract:

This paper describes tools and methods developed as part of Linked Jazz, a project that uses Linked Open Data (LOD) to reveal personal and professional relationships among jazz musicians based on interviews from jazz archives. The overarching aim of Linked Jazz is to explore the possibilities offered by LOD to enhance the visibility of cultural heritage materials and enrich the semantics that describe them. While the full Linked Jazz dataset is still under development, this paper presents two applications that have laid the foundation for the creation of this dataset: the Mapping and Curator Tool, and the Transcript Analyzer. These applications have served primarily for data preparation, analysis, and curation and are representative of the types of tools and methods needed to craft linked data from digital content available on the web. This paper discusses these two domain-agnostic tools developed to create LOD from digital textual documents and offers insight into the process behind the creation of LOD in general.

The Linked Data Jazz Name Directory:

consists of 8,725 unique names of jazz musicians as N-Triples.

It’s a starting place if you want to create a topic map about Jazz.

Although, do be aware the Center for Arts and Cultural Policy Studies at Princeton University reports:

Although national estimates of the number of jazz musicians are unavailable, the Study of Jazz Artists 2001 estimated the number of jazz musicians in three metropolitan jazz hubs — New York, San Francisco, and New Orleans — at 33,003, 18,733, and 1,723, respectively. [A total of 53,459. How Many Jazz Musicians Are There?]

And that is only for one point in time. It does not include jazz musicians who perished before the estimate was made.

Much work remains to be done.

June 16, 2013

Music Information Research Based on Machine Learning

Filed under: Machine Learning,Music,Music Retrieval — Patrick Durusau @ 3:38 pm

Music Information Research Based on Machine Learning by Masataka Goto and Kazuyoshi Yoshii.

From the webpage:

Music information research is gaining a lot of attention after 2000 when the general public started listening to music on computers in daily life. It is widely known as an important research field, and new researchers are continually joining the field worldwide. Academically, one of the reasons many researchers are involved in this field is that the essential unresolved issue is the understanding of complex musical audio signals that convey content by forming a temporal structure while multiple sounds are interrelated. Additionally, there are still appealing unresolved issues that have not been touched yet, and the field is a treasure trove of research topics that could be tackled with state-of-the-art machine learning techniques.

This tutorial is intended for an audience interested in the application of machine learning techniques to such music domains. Audience members who are not familiar with music information research are welcome, and researchers working on music technologies are likely to find something new to study.

First, the tutorial serves as a showcase of music information research. The audience can enjoy and study many state-of-the-art demonstrations of music information research based on signal processing and machine learning. This tutorial highlights timely topics such as active music listening interfaces, singing information processing systems, web-related music technologies, crowdsourcing, and consumer-generated media (CGM).

Second, this tutorial explains the music technologies behind the demonstrations. The audience can learn how to analyze and understand musical audio signals, process singing voices, and model polyphonic sound mixtures. As a new approach to advanced music modeling, this tutorial introduces unsupervised music understanding based on nonparametric Bayesian models.

Third, this tutorial provides a practical guide to getting started in music information research. The audience can try available research tools such as music feature extraction, machine learning, and music editors. Music databases and corpora are then introduced. As a hint towards research topics, this tutorial also discusses open problems and grand challenges that the audience members are encouraged to tackle.

In the future, music technologies, together with image, video, and speech technologies, are expected to contribute toward all-around media content technologies based on machine learning.

Download tutorial slides.

Always nice to start the week with something different.

I first saw this in a tweet by Masataka Goto.

May 19, 2013

You Are Listening to The New York Times

Filed under: Interface Research/Design,Music,News — Patrick Durusau @ 4:05 pm

You Are Listening to The New York Times by Hugh Mandeville.

From the post:

When the San Francisco Giants won the 2010 World Series, the post-victory celebrations got out of control. Revelers smashed windows, got into fistfights and started fires. A Muni bus and the metaverse were both set alight.

To track the chaos, Eric Eberhardt, a techie from the Bay Area, tuned in to a San Francisco police scanner station on soma.fm — while also listening to music. Something about the combination of ambient music and live police chatter clicked for Eberhardt, and youarelistening.to was born.

Eberhardt’s site is a mash-up of three APIs: police scanner audio from RadioReference.com, ambient music from SoundCloud and images from Flickr. The outcome is like a real-time soundtrack to Michael Mann’s movie “Heat.” My colleague Chase Davis, interactive news assistant editor, describes it as “‘Hearts of Space’ meets ‘The Wire.’”

(…)

My explorations inspired me to create a page on youarelistening.to that takes New York Times headlines from the Times Newswire API and reads them aloud using TTS-API.com’s text-to-speech API. I also created a page that reads trending tweets, using Twitter’s Search API.

Definitely has potential to enrich a user experience.

Imagine studying early 21st century history and when George W. Bush or Dick Cheney show up on your ereader, War Pigs plays in the background.

Trivia: Did you know that War Pigs was one of 165 songs that Clear Channel suggested could be inappropriate to play after 9/11? 2001 Clear Channel Memorandum.

Cat Stevens with Peace Train also made the list.

Terrorism we can survive. Those trying to protect us, I’m not so sure.

May 18, 2013

A Trillion Triples in Perspective

Filed under: BigData,Music,RDF,Semantic Web — Patrick Durusau @ 10:11 am

Mozart Meets MapReduce by Isaac Lopez.

From the post:

Big data has been around since the beginning of time, says Thomas Paulmichl, founder and CEO of Sigmaspecto, who says that what has changed is how we process the information. In a talk during Big Data Week, Paulmichl encouraged people to open up their perspective on what big data is, and how it can be applied.

During the talk, he admonished people to take a human element into big data. Paulmichl demonstrated this by examining the work of musical prodigy, Mozart – who Paulmichl noted is appreciated greatly by both music scientists, as well as the common music listener.

“When Mozart makes choices on writing a piece of work, the number of choices that he has and the kind of neural algorithms that his brain goes through to choose things is infinitesimally higher than what we call big data – it’s really small data in comparison,” he said.

Taking Mozart’s The Magic Flute as an example, Paulmichl discussed the framework that Mozart used to make his choices by examining a music sheet outlining the number of bars, the time signature, the instrument and singer voicing.

“So from his perspective, he sits down, and starts to make what we as data scientists call quantitative choices,” explained Paulmichl. “Do I put a note here, down here, do I use a different instrument; do I use a parallel voicing for different violins – so these are all metrics that his brain has to decide.”

Exploring the mathematics of the music, Paulmichl concluded that in looking at The Magic Flute, Mozart had 4.72391E+21 creative variations (and then some) that he could have taken with the direction of it over the course of the piece. “We’re not talking about a trillion dataset; we’re talking about a sextillion or more,” he says adding that this is a very limited cut of the quantitative choice that his brain makes at every composition point.

“[A] sextillion or more…” puts the question of processing a trillion triples into perspective.
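
Just to put numbers on that perspective: 4.72391E+21 possible choices against 1E+12 triples is a ratio of roughly 4.7E+9, so the composer’s choice space is billions of times larger than a trillion-triple store.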

Another musical analogy?

Triples are the one finger version of Jingle Bells*:

*The gap is greater than the video represents but it is still amusing.

Does your analysis/data have one-finger subtlety?
