
Tuesday, January 22, 2013

Phenomenology of the Beat: Neuroscience of Music

Fascinating article by Virginia Hughes at National Geographic's Phenomena details the research gathered by Dartmouth student Beau Sievers for his Master's thesis in Electroacoustic Music regarding the curious fact that the same regions of the brain activate while perceiving motion and while perceiving music.

Sievers' thesis, which is available on his portfolio website, sourced its fundamental data from a program Sievers coded to procedurally generate either 16 seconds of musical tones or a 16-second video of a bouncing, shivering egg from the same set of adjustable parameters: the tempo of the tones or of the egg's motion (beats per minute); variance from that set tempo (jitter); dissonance of the scale or wrinkling of the egg's skin (consonance); the harmonic distance from the current tone to the next, or the height of the egg's bounces (big/small interval probability); and the probability that the next tone will be higher or lower than the current one, expressed by the egg as a lean forward or backward (up/down probability).
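To make that shared parameter space concrete, here's a minimal Python sketch of what such a generator might look like. This is my own illustrative reconstruction, not Sievers' actual code; the class name, parameter ranges, and interval sizes are all assumptions.

```python
import random
from dataclasses import dataclass

MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a consonant scale

@dataclass
class StimulusParams:
    """The five shared parameters; names and ranges are illustrative."""
    rate_bpm: float           # tempo of the tones / of the egg's bounces
    jitter: float             # variance around the set tempo (0 = perfectly steady)
    consonance: float         # 0 = dissonant scale / wrinkled skin, 1 = consonant / smooth
    big_interval_prob: float  # chance the next step is a large leap / high bounce
    up_prob: float            # chance the next tone is higher / the egg leans forward

def snap_to_scale(pitch: int) -> int:
    """Snap a MIDI pitch to the nearest major-scale degree."""
    base = pitch - pitch % 12
    return base + min(MAJOR_SCALE, key=lambda d: abs(d - pitch % 12))

def generate_tone_sequence(p: StimulusParams, duration_s: float = 16.0,
                           start_pitch: int = 60) -> list[tuple[float, int]]:
    """Return (onset time in seconds, MIDI pitch) pairs for one 16-second clip."""
    events, t, pitch = [], 0.0, start_pitch
    mean_ioi = 60.0 / p.rate_bpm  # nominal inter-onset interval from the tempo
    while t < duration_s:
        events.append((t, pitch))
        # jitter perturbs each interval around the nominal tempo
        t += max(0.05, random.gauss(mean_ioi, p.jitter * mean_ioi))
        # big/small interval probability picks the leap size; up/down picks direction
        step = 7 if random.random() < p.big_interval_prob else 2
        pitch += step if random.random() < p.up_prob else -step
        # high consonance keeps pitches on a consonant scale more often
        if random.random() < p.consonance:
            pitch = snap_to_scale(pitch)
    return events
```

The video generator would consume the exact same StimulusParams, mapping each field to motion instead of sound; that one-to-one mapping is the whole point of the design.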


The program is presented to the test subject in either its video or its audio form, with two interactive fields: a set of sliders for freely adjusting the algorithm's parameters, and a handful of target emotion categories. The subject repeatedly adjusts the sliders and plays the generated audio or video until satisfied that it accurately depicts one of the emotion categories; the program saves those slider settings and the subject moves on to the next category until all categories are satisfactorily defined. The subject can at any time revisit a category they've already finished and make changes.
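As a rough outline of that task loop (again a sketch: read_sliders, play_stimulus, and subject_is_satisfied are hypothetical stubs standing in for real UI code, and StimulusParams comes from the sketch above):

```python
import random

# The five target emotion categories used in the study.
EMOTIONS = ["angry", "happy", "peaceful", "sad", "scared"]

def read_sliders() -> StimulusParams:
    """Stub: the real program reads the on-screen slider positions."""
    return StimulusParams(rate_bpm=random.uniform(30, 240),
                          jitter=random.random(),
                          consonance=random.random(),
                          big_interval_prob=random.random(),
                          up_prob=random.random())

def play_stimulus(params: StimulusParams, modality: str) -> None:
    """Stub: render 16 s of tones ('audio') or of the bouncing egg ('video')."""
    print(f"playing {modality} clip with {params}")

def subject_is_satisfied(emotion: str) -> bool:
    """Stub: in the real task the subject decides; here we flip a weighted coin."""
    return random.random() < 0.3

def run_session(modality: str) -> dict[str, StimulusParams]:
    """Collect one saved slider setting per emotion category."""
    settings: dict[str, StimulusParams] = {}
    for emotion in EMOTIONS:
        while True:
            params = read_sliders()            # subject adjusts freely...
            play_stimulus(params, modality)    # ...and auditions the result
            if subject_is_satisfied(emotion):
                settings[emotion] = params     # save and move to the next category
                break
    # (the real program also lets the subject reopen and revise any category)
    return settings
```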
There isn't a particularly clear rationale behind how Sievers assigned each parameter of motion to a parameter of music; on page 33 of his thesis Sievers writes that "a combination of evidence from the literature and our intuition suggested the parameters..." In his conclusion, Sievers laments that modern research remains unfit for precise parametrization of phenomena like this and insists that his findings shouldn't be taken for more than statistically significant results. Luckily, even if the results don't map specific real-world stimuli to specific response patterns, they did show that subjects overwhelmingly chose nearly identical parameter settings for each emotion category whether they were given the audio or the video version of the program. The results strongly suggest that the brain processes and parses a subject in motion and a piece of music in progress in the same way: variance in spatial position or in position on the tonal scale over time, the sense of beat or of repeated movement over time, and physical or harmonic disunity appear to be predictable factors that produce specific emotional responses.
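That cross-modal comparison is easy to picture in code. Below is a toy version: average each group's saved slider settings per emotion, then measure how far the audio profile sits from the video profile. The function names are mine, not from the thesis, and a real analysis would also normalize each parameter first, since BPM lives on a much bigger scale than a 0-to-1 probability.

```python
import statistics

PARAM_FIELDS = ["rate_bpm", "jitter", "consonance", "big_interval_prob", "up_prob"]

def mean_profile(responses: list[StimulusParams]) -> list[float]:
    """Average each parameter across subjects for one emotion/modality group."""
    return [statistics.mean(getattr(r, f) for r in responses) for f in PARAM_FIELDS]

def profile_distance(audio: list[StimulusParams], video: list[StimulusParams]) -> float:
    """Euclidean distance between the mean audio and video profiles; a distance
    near zero is what "nearly identical settings" looks like numerically."""
    a, v = mean_profile(audio), mean_profile(video)
    return sum((x - y) ** 2 for x, y in zip(a, v)) ** 0.5

# e.g., for each emotion: profile_distance(audio_group[emotion], video_group[emotion])
```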

Sievers even tested his methods in the Cambodian village L'ak; the difference in parameter choices for each of the (carefully translated) emotional categories was statistically insignificant.

Virginia Hughes ponders how this connection might have developed:
It could be, for instance, that our ancestors first learned to interpret emotion from movement — something that would be useful, say, if you encountered an angry saber-tooth cat. Those same brain systems, finely tuned to detect changes in rhythm and speed, could have also evolved to pick up similar changes in sounds, and later, to intentionally exploit this perceptual system by making music.
It's a cool thought with a satisfying primal quality: because the mind is engineered to pick out even the subtlest motion patterns and jolt you with an emotion appropriate to what that motion might mean, it runs audio through the same pattern-detector and assigns an emotion to otherwise meaningless sound.

Check out the National Geographic article for sample video and audio from Sievers' program.
