This is a follow-up to the recently published post that introduced a categorisation of musical characteristics.
Even the process of extracting and deriving features from audio signals is a rather complex one. The graphic above illustrates an abstraction of the main tasks that are necessary to derive, from given audio signals, the compact descriptions known as musical fingerprints. Ideally, these act as a robust and correct song identification mechanism and are applicable to very fast song recognition. That is why they have to be efficiently computable and as compact as possible (see Cano's thesis below). Fortunately, the whole audio signal analysis process can be outsourced: various music information services, e.g., Echo Nest or Canoris, provide functionality to support this task.
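To make the "efficiently computable and compact" requirement concrete, here is a minimal sketch (not the method of any particular service) of a well-known fingerprinting idea: per analysis frame, collapse the spectrum into a few coarse energy bands and keep only the signs of band-energy differences over time, yielding a small binary fingerprint. Frame size, hop size, and band count are illustrative assumptions.

```python
import numpy as np

def fingerprint(signal, frame_size=2048, hop=1024, n_bands=17):
    """Derive a compact binary fingerprint from a mono audio signal.

    Sketch of a sign-of-band-energy-difference scheme: per frame,
    compute coarse band energies, then keep only the signs of the
    differences between neighbouring bands and consecutive frames.
    """
    frames = []
    for start in range(0, len(signal) - frame_size, hop):
        frame = signal[start:start + frame_size] * np.hanning(frame_size)
        spectrum = np.abs(np.fft.rfft(frame))
        # Collapse the spectrum into a few coarse frequency bands
        bands = np.array_split(spectrum, n_bands)
        frames.append([np.sum(b ** 2) for b in bands])
    energies = np.array(frames)
    # Bit = sign of the band-energy difference between neighbouring
    # bands, differenced again across consecutive frames
    diff = (energies[1:, 1:] - energies[1:, :-1]) \
         - (energies[:-1, 1:] - energies[:-1, :-1])
    return (diff > 0).astype(np.uint8)  # shape: (frames - 1, n_bands - 1)
```

Because two recordings of the same song yield nearly identical bit patterns, such fingerprints can be compared very quickly, e.g., via the fraction of differing bits.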
A full-fledged workflow of musical characteristics analysis includes, besides an audio signal analysis task that retrieves music content data, a music context data extraction process. Here, a metadata enhancement task draws on further information services instead of relying only on the information that can be gained from a music document itself (see the graphic above). It is important to note that the processes of classification, categorisation and similarity calculation should be driven by user profiles to be adaptable. Finally, fuzzy and abstract feature descriptions are necessary to enable an intuitive handling of the music knowledge base.
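The idea of user-profile-driven similarity can be sketched as a weighted combination of per-feature similarities, where the weights come from the individual listener's profile. The feature names and the normalisation to [0, 1] are illustrative assumptions, not a prescribed schema.

```python
def profile_weighted_similarity(song_a, song_b, profile):
    """Similarity of two songs, adapted to a user profile.

    `song_a` and `song_b` map feature names to normalised values
    in [0, 1]; `profile` maps the same names to user-specific
    weights. The same pair of songs can thus score differently
    for different listeners.
    """
    total = sum(profile.values())
    score = sum(weight * (1.0 - abs(song_a[name] - song_b[name]))
                for name, weight in profile.items())
    return score / total
```

A listener who weights tempo heavily will see rhythmically similar songs ranked closer together than a listener whose profile emphasises, say, energy.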
Please have a look at my bachelor-like thesis to get a deeper insight into the background of music content and context data and the non-trivial extraction and derivation process of musical characteristics. This whole analysis process can also be outsourced (as needed) to external music information services, e.g., Echo Nest.
A PDF version of the audio signal feature extraction and derivation figure can be found here, and one of the musical characteristics analysis figure there. These graphics are freely usable and shareable under the Creative Commons Attribution 3.0 Unported license.
Cano, Pedro; “Content-based Audio Search: From Fingerprinting to Semantic Audio Retrieval”; PhD thesis; Technology Department of Pompeu Fabra University
 Ferris, Bob; “Musical and Music Related Metadata and Features for Administration of Private Music Collections”; smiy.org; 2010