How It Works – The “Musical Brain”
I found this by following the links in the Million Song Dataset post.
One aspect, among others, that I found interesting was the support for multiple ID spaces.
I am curious about the claim that it works by:
Analyzing every song on the web to extract key, tempo, rhythm and timbre and other attributes — understanding every song in the same way a musician would describe it
Leaving aside the ambitious claims about NLP processing made elsewhere on that page, I find it curious that there is a uniform method for describing music.
Or perhaps they mean that the “Musical Brain” uses only one description uniformly across the music it evaluates. I can buy that. And it could well be a useful exercise.
At least from the perspective of generating raw data that could then be mapped to other nomenclatures used by musicians.
I wonder if Rolling Stone uses the same nomenclature as the “Musical Brain.” I will have to check.
Suggestions for other music description languages? Mappings to the one used by the “Musical Brain?”
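To make the mapping idea concrete, here is a toy sketch in Python: a uniform numeric record (pitch-class key, mode, tempo in BPM) translated into terms a musician might actually use. The field names, the pitch-class convention, and the tempo cutoffs are all my own assumptions for illustration, not anything published by the “Musical Brain.”

```python
# Hypothetical sketch: a uniform per-track description mapped onto a
# musician-friendly nomenclature. Field names and thresholds are assumptions.

# Uniform record: key as pitch class 0-11, mode as major(1)/minor(0), tempo in BPM.
track = {"key": 9, "mode": 1, "tempo": 126.0}

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def to_musician_terms(rec: dict) -> str:
    """Translate the numeric record into the vocabulary a musician might use."""
    key = PITCH_CLASSES[rec["key"]]
    mode = "major" if rec["mode"] == 1 else "minor"
    bpm = rec["tempo"]
    # Rough tempo words borrowed from classical tempo markings (my own cutoffs).
    if bpm < 76:
        feel = "adagio"
    elif bpm < 120:
        feel = "moderato"
    else:
        feel = "allegro"
    return f"{key} {mode}, ~{bpm:.0f} BPM ({feel})"

print(to_musician_terms(track))  # -> "A major, ~126 BPM (allegro)"
```

The point of the exercise: if the raw attributes are consistent, writing translators into other vocabularies (classical tempo markings, DJ-style Camelot keys, whatever Rolling Stone reviewers use) becomes a small, mechanical job.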
BTW, before I forget, the “Musical Brain” offers a free API (for non-commercial use) for accessing its data.
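For anyone who wants to poke at it, a call would presumably look something like the sketch below. The host, endpoint path, parameter names, and response fields here are placeholders I made up; substitute whatever the actual API documentation and terms of use specify.

```python
# Minimal sketch of calling a JSON/REST track-analysis API.
# BASE_URL, the endpoint path, and the parameter/response names are hypothetical.
import requests

API_KEY = "YOUR_NON_COMMERCIAL_KEY"        # assumption: key-based authentication
BASE_URL = "https://api.example.com/v1"    # placeholder, not the real host

def get_track_analysis(artist: str, title: str) -> dict:
    """Look up a track and return its analysis attributes (key, tempo, etc.)."""
    resp = requests.get(
        f"{BASE_URL}/track/analysis",
        params={"artist": artist, "title": title, "api_key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    analysis = get_track_analysis("Radiohead", "Karma Police")
    print(analysis.get("key"), analysis.get("tempo"))
```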
Would appreciate hearing about your experiences with the API.