BSC Research Methods - Musical Sonification

  • Musical sonifications can be especially helpful to researchers looking for patterns, periodicity, or trends that develop over time in multi-channel datastreams.

    Figure: Musical score for sonification of data from 4 MEG sensors

    Sonification

    Two-dimensional visual charts are adequate displays for most data. However, when the data arrives in 248 channels, as it does from the magnetoencephalograph (MEG) at the Brain Sciences Center, and changes fluidly over time, two dimensions are not enough. Raw datastreams generated during MEG experiments can contain 1017 samples/second over 248 channels. For a 45-second experiment, that’s 11,349,720 data points! One way to study the datastreams is by listening to audio representations, or sonifications.
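    The arithmetic behind that total, as a quick sketch (all three figures are taken from the sentence above):

    ```python
    # Data volume of a 45-second MEG recording at the rates quoted above.
    samples_per_second = 1017    # per channel
    channels = 248
    duration_seconds = 45

    data_points = samples_per_second * channels * duration_seconds
    print(f"{data_points:,} data points")    # -> 11,349,720
    ```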

    Listen to musical sonifications of brain data!

    Sound dimensions

    Sound is a unique medium for data representation. Not only does it occur over time, but sound exists in at least six other dynamic dimensions: frequency, amplitude, tone color, and three dimensions of physical location (up/down, front/back, and left/right). Evolution has equipped humans with the ability to perceive very small changes in sounds. Some changes are smooth and subtle (e.g. speech inflections), while others are abrupt and alarming (e.g. dishes dropped in a restaurant).

    Musical dimensions

    Music is a very sophisticated refinement of the parameters of sound. The six natural dimensions listed above are organized into new categories that serve to define the qualities of musical instruments, ensembles, and styles. In the Western musical tradition (two of these mappings are sketched in code after this list):

    • Frequency becomes 12 pitch classes (C, C#, D, D#, E, F, F#, G, G#, A, A#, B). Pitch classes are organized into scales of different sizes (chromatic, diatonic, pentatonic), including the familiar major and minor scales.

    • Amplitude becomes loudness (dynamics, ppp to fff). Smooth changes in loudness take the form of crescendos (gradually louder) and decrescendos (gradually softer).

    • Tone color becomes instrument timbre (strings, brass, woodwinds, percussion). Instruments are assigned parts to play based on their timbres and pitch ranges (e.g. the 88-note piano).

    • Location is translated in three dimensions: left/right becomes pan(orama), front/back becomes presence (reverberation), and elevation becomes height (used only in special circumstances). Instruments might be arranged to ‘level the playing field’, as in an orchestra where the violinists are seated in front of the louder trumpeters and drummers.
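    A minimal sketch of the pitch and loudness mappings in Python. The scale choice, function names, and dynamic breakpoints are illustrative assumptions, not the Center's actual settings:

    ```python
    # Quantize a MIDI note number to a scale and map velocity to a dynamic
    # mark -- two of the sound-to-music mappings described above.

    C_MAJOR = [0, 2, 4, 5, 7, 9, 11]    # diatonic pitch classes: C D E F G A B

    def snap_to_scale(note, scale=C_MAJOR):
        """Lower a MIDI note (0-127) to the nearest pitch class in `scale`."""
        octave, pitch_class = divmod(note, 12)
        nearest = max(pc for pc in scale if pc <= pitch_class)
        return octave * 12 + nearest

    def dynamic_mark(velocity):
        """Map a MIDI velocity (0-127) onto conventional dynamic marks."""
        marks = ["ppp", "pp", "p", "mp", "mf", "f", "ff", "fff"]
        return marks[min(max(velocity, 0), 127) * len(marks) // 128]

    print(snap_to_scale(61), dynamic_mark(100))    # 60 (C# snapped to C), 'ff'
    ```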

    Processing data in the Studio of the Mind

    Individual data points are converted to integers, then to Musical Instrument Digital Interface (MIDI) events/notes. A MIDI sequencer application plays these notes back through synthesizers in much the same way as a player-piano mechanism sends note information to a piano. Each datastream is treated as a separate ‘track’ by the sequencer, and because the data is in digital form (and not an audio recording), playback can be slowed without affecting pitch. Other changes are possible: a unique instrument sound can be assigned to each datastream, pitches can be organized into musical scales, and accents can be derived from the data values. The result is a piece of ‘music’ that can sound like anything from a symphonic orchestra to a solo piano arrangement.
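    A rough sketch of that pipeline in Python, using the mido library for MIDI output. The scaling constants, instrument choices, and toy data are assumptions for illustration, not the Center's actual sequencer setup:

    ```python
    # One datastream -> one MIDI track: pitch from the data value, one note
    # per sample. Playback tempo can then be slowed in any sequencer without
    # affecting pitch, since these are note events rather than audio.
    from mido import Message, MidiFile, MidiTrack

    def stream_to_track(samples, lo, hi, program=0, channel=0, ticks=60):
        track = MidiTrack()
        # Give this datastream its own instrument sound (General MIDI program).
        track.append(Message('program_change', program=program, channel=channel, time=0))
        for x in samples:
            # Rescale the raw value into the MIDI note range 36-96 (C2-C7).
            note = 36 + round((x - lo) / (hi - lo) * 60)
            track.append(Message('note_on', note=note, velocity=64, channel=channel, time=0))
            track.append(Message('note_off', note=note, velocity=64, channel=channel, time=ticks))
        return track

    mid = MidiFile()
    toy_data = {0: [0.1, 0.5, -0.2], 1: [0.0, -0.4, 0.3]}    # two tiny datastreams
    for ch, samples in toy_data.items():
        mid.tracks.append(stream_to_track(samples, lo=-1.0, hi=1.0,
                                          program=40 + ch, channel=ch))
    mid.save('sonification.mid')
    ```

    The note numbers could additionally be snapped to a musical scale (as in the earlier sketch), and the velocities derived from the data values to produce accents.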

Updated May 5, 2015