Archive for the ‘Sound’ Category

Sam Aaron – Cognicast Episode 069

Tuesday, December 23rd, 2014


From the webpage:

In this episode, we talk to Sam Aaron, programmer, educator and musician.

Our Guest, Sam Aaron


Sam is sharing original music he composed using Sonic Pi. To start the show, he chose “Time Machine”. To end the show, he chose “Goodbyes”.


Subscribing to The Cognicast

The show is available on iTunes! You can also subscribe to the podcast using our podcast feed.

A great perspective on getting people interested in coding, which should be transferable to topic maps. Yes?

Although, I must admit I almost raised my hand when Aaron asked “…who has had fun with sorting?” Well, some people have different interests. 😉

A very enjoyable podcast! I will have to look at prior episodes to see what else I have missed!

PS: What would it take to make the topic map equivalent of Sonic Pi? Take note of Aaron’s comments on “friction.”

The Sonification Handbook

Monday, January 27th, 2014

The Sonification Handbook. Edited by Thomas Hermann, Andy Hunt, John G. Neuhoff. (Logos Publishing House, Berlin 2011, 586 pages, 1st edition (11/2011), ISBN 978-3-8325-2819-5)


This book is a comprehensive introductory presentation of the key research areas in the interdisciplinary fields of sonification and auditory display. Chapters are written by leading experts, providing a wide-range coverage of the central issues, and can be read from start to finish, or dipped into as required (like a smorgasbord menu).

Sonification conveys information by using non-speech sounds. To listen to data as sound and noise can be a surprising new experience with diverse applications ranging from novel interfaces for visually impaired people to data analysis problems in many scientific fields.

This book gives a solid introduction to the field of auditory display, the techniques for sonification, suitable technologies for developing sonification algorithms, and the most promising application areas. The book is accompanied by the online repository of sound examples.

The text has this advice for readers:

The Sonification Handbook is intended to be a resource for lectures, a textbook, a reference, and an inspiring book. One important objective was to enable a highly vivid experience for the reader, by interleaving as many sound examples and interaction videos as possible. We strongly recommend making use of these media. A text on auditory display without listening to the sounds would resemble a book on visualization without any pictures. When reading the pdf on screen, the sound example names link directly to the corresponding website. The margin symbol is also an active link to the chapter’s main page with supplementary material. Readers of the printed book are asked to check this website manually.

Did I mention the entire text, all 586 pages, can be downloaded for free?

Here’s an interesting idea: what if several dozen workers listened to sonified versions of the same data stream, each attending to a different dimension for changes in pitch or tone? Each worker signals when they hear a change. When some N of the dimensions all change at the same time, the data set is pulled at that point for further investigation.
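A minimal sketch of that coincidence test, assuming each listener reports a (timestamp, dimension) change event; the dimension names and the one-second window here are invented for illustration:

```python
from collections import defaultdict

def coincident_changes(events, n_required, window=1.0):
    """Group change events into time buckets; return the times at which
    at least n_required distinct dimensions changed together.

    events: iterable of (timestamp, dimension) pairs reported by listeners.
    """
    buckets = defaultdict(set)
    for t, dim in events:
        buckets[round(t / window)].add(dim)
    return sorted(b * window for b, dims in buckets.items()
                  if len(dims) >= n_required)

# Example: three listeners flag changes; two coincide near t=10.
events = [(10.1, "pitch"), (10.3, "volume"), (42.0, "tempo")]
print(coincident_changes(events, n_required=2))  # → [10.0]
```

A production version would need overlapping windows so coincidences straddling a bucket boundary aren’t missed, but the idea is the same.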

I will regret suggesting that idea. Someone from a leading patent holder will boilerplate an application together tomorrow and file it with the patent office. 😉

NASA’s Voyager Data Is Now a Musical

Monday, January 27th, 2014

NASA’s Voyager Data Is Now a Musical by Victoria Turk.

From the post:

You might think that big data would sound like so many binary beeps, but a project manager at Géant in the UK has turned 320,000 measurements from NASA Voyager equipment into a classically-inspired track. The company describes it as “an up-tempo string and piano orchestral piece.”

Domenico Vicinanza, who is a trained musician as well as a physicist, took measurements from the cosmic ray detectors on Voyager 1 and Voyager 2 at hour intervals, and converted it into two melodies. The result is a duet: the data sets from the two spacecraft play off each other throughout to create a rather charming harmony. …

Data sonification, the technique of representing data points with sound, makes it easier to spot trends, peaks, patterns, and anomalies in a huge data set without having to pore over the numbers.
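As a toy illustration of the technique (not any particular tool’s method), a numeric series can be mapped linearly onto MIDI note numbers, so a spike in the data becomes an audible jump in pitch:

```python
def to_midi_notes(values, low=48, high=84):
    """Linearly map a numeric series onto MIDI note numbers
    (48 = C3, 84 = C6), so extremes in the data become extremes in pitch."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero on flat data
    return [round(low + (v - lo) / span * (high - low)) for v in values]

readings = [3.1, 3.2, 3.1, 9.8, 3.3]   # one anomalous spike
print(to_midi_notes(readings))          # the spike stands out as the top note
```

Feed the resulting notes to any MIDI synthesizer and the anomaly at the fourth reading is unmistakable, even with your eyes closed.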

Some data sonification resources:

audiolyzR: Data sonification with R

Georgia Tech Sonification Lab

Sonification Sandbox

I suspect that sonification is a much better way to review monotonous data for any unusual entries.

It was only chance that I noticed an OMB calculation that multiplied a budget item by zero (0) and yet produced a larger number. Had the math operations been set to music, I am sure that error would have struck a discordant note!

Human eyesight is superior to computers for galaxy classification.

Human hearing as a superior way to explore massive datasets is a promising avenue of research.

Distributed Multimedia Systems (Archives)

Tuesday, February 12th, 2013

Proceedings of the International Conference on Distributed Multimedia Systems

From the webpage:

DMS 2012 Proceedings August 9 to August 11, 2012 Eden Roc Renaissance Miami Beach, USA
DMS 2011 Proceedings August 18 to August 19, 2011 Convitto della Calza, Florence, Italy
DMS 2010 Proceedings October 14 to October 16, 2010 Hyatt Lodge at McDonald’s Campus, Oak Brook, Illinois, USA
DMS 2009 Proceedings September 10 to September 12, 2009 Hotel Sofitel, Redwood City, San Francisco Bay, USA
DMS 2008 Proceedings September 4 to September 6, 2008 Hyatt Harborside at Logan Int’l Airport, Boston, USA
DMS 2007 Proceedings September 6 to September 8, 2007 Hotel Sofitel, Redwood City, San Francisco Bay, USA

For coverage, see the Call for Papers, DMS 2013.

Another archive with topic map related papers!

DMS 2013

Tuesday, February 12th, 2013

DMS 2013: The 19th International Conference on Distributed Multimedia Systems


Paper submission due: April 29, 2013
Notification of acceptance: May 31, 2013
Camera-ready copy: June 15, 2013
Early conference registration due: June 15, 2013
Conference: August 8 – 10, 2013

From the call for papers:

With today’s proliferation of multimedia data (e.g., images, animations, video, and sound), comes the challenge of using such information to facilitate data analysis, modeling, presentation, interaction and programming, particularly for end-users who are domain experts, but not IT professionals. The main theme of the 19th International Conference on Distributed Multimedia Systems (DMS’2013) is multimedia inspired computing. The conference organizers seek contributions of high quality papers, panels or tutorials, addressing any novel aspect of computing (e.g., programming language or environment, data analysis, scientific visualization, etc.) that significantly benefits from the incorporation/integration of multimedia data (e.g., visual, audio, pen, voice, image, etc.), for presentation at the conference and publication in the proceedings. Both research and case study papers or demonstrations describing results in research area as well as industrial development cases and experiences are solicited. The use of prototypes and demonstration video for presentations is encouraged.


Topics of interest include, but are not limited to:

Distributed Multimedia Technology

  • media coding, acquisition and standards
  • QoS and Quality of Experience control
  • digital rights management and conditional access solutions
  • privacy and security issues
  • mobile devices and wireless networks
  • mobile intelligent applications
  • sensor networks, environment control and management

Distributed Multimedia Models and Systems

  • human-computer interaction
  • languages for distributed multimedia
  • multimedia software engineering issues
  • semantic computing and processing
  • media grid computing, cloud and virtualization
  • web services and multi-agent systems
  • multimedia databases and information systems
  • multimedia indexing and retrieval systems
  • multimedia and cross media authoring

Applications of Distributed Multimedia Systems

  • collaborative and social multimedia systems and solutions
  • humanities and cultural heritage applications, management and fruition
  • multimedia preservation
  • cultural heritage preservation, management and fruition
  • distance and lifelong learning
  • emergency and safety management
  • e-commerce and e-government applications
  • health care management and disability assistance
  • intelligent multimedia computing
  • internet multimedia computing
  • virtual, mixed and augmented reality
  • user profiling, reasoning and recommendations

The presence of information/data doesn’t mean topic maps will return a good ROI.

On the other hand, the presence of information/data does mean semantic impedance is present.

The question is what need you have to overcome that semantic impedance, and at what cost.

Musical Spheres Anyone?

Friday, June 15th, 2012

Making Music With Real Stars: Kepler Telescope Star Data Creates Musical Melody reports on the creation of music from astronomical data.

By itself an amusing curiosity, but in the larger context of data exploration, perhaps something more.

I would have trouble carrying a tune in a sack, but we shouldn’t evaluate data exploration techniques based solely on our personal capabilities. Any more than colors should be ignored in visualization because some researchers are color blind.

A starting place for conversations about sonification would be the Georgia Tech Sonification Lab.

Or you can download the Sonification Sandbox.

BTW, question for music librarians/researchers:

Is there an autocomplete feature for music searches? Where a user can type in the first few notes and is offered a list of continuations?
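Such a feature amounts to prefix search over note sequences. A minimal sketch, with invented melodies and absolute pitch names standing in for a real music encoding:

```python
def complete(prefix, melodies):
    """Return the melodies whose opening notes start with the typed prefix."""
    return [name for name, notes in melodies.items()
            if notes[:len(prefix)] == prefix]

melodies = {
    "Ode to Joy":    ["E", "E", "F", "G", "G", "F", "E", "D"],
    "Frere Jacques": ["C", "D", "E", "C"],
}
print(complete(["E", "E"], melodies))  # → ['Ode to Joy']
```

A real system would more likely match on intervals or melodic contour (e.g., Parsons code) rather than absolute pitches, so a query hummed in the wrong key still finds the tune.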

Indexing Sound: Musical Riffs to Gunshots

Thursday, November 10th, 2011

Sound, Digested: New Software Tool Provides Unprecedented Searches of Sound, from Musical Riffs to Gunshots

From the post:

Audio engineers have developed a novel artificial intelligence system for understanding and indexing sound, a unique tool for both finding and matching previously un-labeled audio files.

Having concluded beta testing with one of the world’s largest Hollywood sound studios and leading media streaming and hosting services, Imagine Research of San Francisco, Calif., is now releasing MediaMined™ for applications ranging from music composition to healthcare.


One of the key innovations of the new technology is the ability to perform sound-similarity searches. Now, when a musician wants a track with a matching feel to mix into a song, or an audio engineer wants a slightly different sound effect to work into a film, the process can be as simple as uploading an example file and browsing the detected matches.

“There are many tools to analyze and index sound, but the novel, machine-learning approach of MediaMined™ was one reason we felt the technology could prove important,” says Errol Arkilic, the NSF program director who helped oversee the Imagine Research grants. “The software enables users to go beyond finding unique objects, allowing similarity searches–free of the burden of keywords–that generate previously hidden connections and potentially present entirely new applications.”
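The press release doesn’t disclose MediaMined™’s internals, but sound-similarity search in general can be sketched as nearest-neighbor ranking over extracted feature vectors. A toy version, with invented three-number “audio features” (say, loudness, brightness, tempo):

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def most_similar(query, library):
    """Rank library entries by similarity to the query's features."""
    return sorted(library, key=lambda item: cosine(query, item[1]), reverse=True)

# Invented feature vectors for two indexed sounds:
library = [("gunshot", [0.9, 0.8, 0.1]), ("guitar riff", [0.4, 0.6, 0.7])]
print(most_similar([0.88, 0.75, 0.15], library)[0][0])  # → gunshot
```

The hard part, of course, is the feature extraction that turns raw audio into vectors where “sounds alike” means “lies nearby” — that is where the machine learning lives.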

Or from the Imagine Research Applications page:

Organize Sound

Automatically index the acoustic content of video, audio, and live streams across a company’s web services. Analyze web-crawled data, user-generated content, professional broadcast content, and live streaming events.


  • Millions of minutes of content are now searchable
  • Recommending related content increases audience and viewer consumption
  • Better content discovery, intelligent navigation within media files
  • Search audio content with ease and accuracy
  • Audio content-aware targeted ads – improves ad performance and revenue

Search Sound

Perform sound-similarity searches for sounds and music by using example sounds.
Search for production music that matches a given track.
Perform rhythmic similarity searches.


  • Recommending related content increases audience and viewer consumption
  • Music/Audio licensing portals provide a unique-selling point: find content based on an input seed track.
  • Improved monetization of existing content with similarity-search and recommendations

And if you have a topic map of music, producers, studios, albums, etc., this could supplement your topic map with similarity matches every time a new recording is released, uploaded to a website, or posted to YouTube. So you know whom to contact, for whatever reason.

A topic map of videos could serve up matches and links with thumbnails for videos with similar sound content based on a submitted sample, a variation as it were on “more of same.”

A topic map of live news feeds could detect repetition of news stories and with timing information could map the copying of content from one network to the next. Or provide indexing of news accounts without the necessity of actually sitting through the broadcasts. That is an advantage not mentioned above.
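That copying-detection idea could be sketched as follows, assuming an audio fingerprinting step (not shown) has already reduced each broadcast segment to a hashable fingerprint; the feed and story names here are invented:

```python
def repeated_segments(feeds, min_feeds=2):
    """Find fingerprints (hashed audio snippets) heard on multiple feeds,
    with the time each feed aired them -- a crude 'who copied whom' map.

    feeds: {feed_name: [(timestamp, fingerprint), ...]}
    """
    seen = {}
    for feed, events in feeds.items():
        for t, fp in events:
            seen.setdefault(fp, []).append((t, feed))
    return {fp: sorted(airings) for fp, airings in seen.items()
            if len({f for _, f in airings}) >= min_feeds}

feeds = {
    "network_a": [(100, "story-x"), (200, "story-y")],
    "network_b": [(150, "story-x")],
}
print(repeated_segments(feeds))  # story-x aired on A first, then on B
```

Sorting each shared fingerprint’s airings by time gives the propagation order, which is exactly the copying map suggested above.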

Sound recognition isn’t my area so if this is old news or there are alternatives to suggest to topic mappers, please sing out! (Sorry!)