The Sonification Handbook. Edited by Thomas Hermann, Andy Hunt, John G. Neuhoff. (Logos Publishing House, Berlin 2011, 586 pages, 1st edition (11/2011), ISBN 978-3-8325-2819-5)
This book is a comprehensive introductory presentation of the key research areas in the interdisciplinary fields of sonification and auditory display. Chapters are written by leading experts, providing wide-ranging coverage of the central issues, and can be read from start to finish or dipped into as required (like a smorgasbord menu).
Sonification conveys information by using non-speech sounds. Listening to data as sound and noise can be a surprisingly new experience, with diverse applications ranging from novel interfaces for visually impaired people to data analysis problems in many scientific fields.
This book gives a solid introduction to the field of auditory display, the techniques for sonification, suitable technologies for developing sonification algorithms, and the most promising application areas. The book is accompanied by an online repository of sound examples.
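To make the core idea concrete, here is a minimal parameter-mapping sketch of my own (not taken from the book): each data value is mapped to a pitch, and the resulting tone sequence is written out as a WAV file using only the Python standard library. Rising data becomes rising pitch.

```python
import math
import struct
import wave

def sonify(values, filename="sonified.wav", rate=44100, note_dur=0.2,
           f_lo=220.0, f_hi=880.0):
    """Map each data value linearly to a pitch between f_lo and f_hi
    and write the tone sequence as a mono 16-bit WAV file."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    frames = bytearray()
    for v in values:
        freq = f_lo + (v - lo) / span * (f_hi - f_lo)
        n = int(rate * note_dur)
        for i in range(n):
            # simple sine tone with a linear fade-out to avoid clicks
            env = 1.0 - i / n
            sample = int(32767 * 0.5 * env *
                         math.sin(2 * math.pi * freq * i / rate))
            frames += struct.pack("<h", sample)
    with wave.open(filename, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))

sonify([1, 3, 2, 5, 4])  # an up-down-up contour you can hear
```

This is the crudest possible mapping; the handbook covers far richer techniques (audification, model-based sonification, and so on), but even this toy version conveys the shape of a data series to the ear.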
The text has this advice for readers:
The Sonification Handbook is intended to be a resource for lectures, a textbook, a reference, and an inspiring book. One important objective was to enable a highly vivid experience for the reader, by interleaving as many sound examples and interaction videos as possible. We strongly recommend making use of these media. A text on auditory display without listening to the sounds would resemble a book on visualization without any pictures. When reading the pdf on screen, the sound example names link directly to the corresponding website at http://sonification.de/handbook. The margin symbol is also an active link to the chapter’s main page with supplementary material. Readers of the printed book are asked to check this website manually.
Did I mention the entire text, all 586 pages, can be downloaded for free?
Here’s an interesting idea: what if several dozen workers listened to sonified versions of the same data stream, each monitoring a different dimension for changes in pitch or tone? Each listener signals when they hear a change. When some N of the dimensions all change at the same time, the data set is pulled at that point for further investigation.
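The coincidence-detection step of that idea can be sketched in a few lines. Everything here is hypothetical illustration: the function name, the time-bucketing window, and the example dimensions are mine, not from any existing system. Each listener's flagged timestamps go into a per-dimension list; we bucket timestamps into windows and report any window where at least N distinct dimensions flagged a change.

```python
from collections import defaultdict

def coincident_changes(signals, n_required, window=1):
    """signals: dict mapping dimension name -> list of timestamps at
    which a listener flagged a change on that dimension.
    Returns the start times of windows in which at least n_required
    distinct dimensions reported a change."""
    buckets = defaultdict(set)
    for dim, times in signals.items():
        for t in times:
            buckets[t // window].add(dim)  # one vote per dimension per window
    return sorted(b * window for b, dims in buckets.items()
                  if len(dims) >= n_required)

# Three listeners, one per dimension; their flags coincide only at t=40.
flags = {
    "pitch":  [10, 40],
    "timbre": [25, 40],
    "rhythm": [40, 55],
}
print(coincident_changes(flags, n_required=3))  # → [40]
```

A real deployment would need a tolerance window wider than one tick (human reaction times vary by hundreds of milliseconds), which is exactly what the `window` parameter is for.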
I will regret suggesting that idea. Someone from a leading patent holder will boilerplate an application together tomorrow and file it with the patent office. 😉