
Durham Research Online

Decoding emotions in expressive music performances: a multi-lab replication and extension study.

Akkermans, Jessica and Schapiro, Renee and Müllensiefen, Daniel and Jakubowski, Kelly and Shanahan, Daniel and Baker, David and Busch, Veronika and Lothwesen, Kai and Elvers, Paul and Fischinger, Timo and Schlemmer, Kathrin and Frieler, Klaus (2019) 'Decoding emotions in expressive music performances: a multi-lab replication and extension study.', Cognition and Emotion, 33(6), pp. 1099-1118.


With over 560 citations reported on Google Scholar by April 2018, a publication by Juslin and Gabrielsson (1996) presented evidence supporting performers' abilities to communicate, with high accuracy, their intended emotional expressions in music to listeners. Though related studies have been published on this topic, there has yet to be a direct replication of this paper. A replication is warranted given the paper's influence in the field and the implications of its results. The present experiment joins the recent replication effort by producing a five-lab replication using the original methodology. Expressive performances of seven emotions (e.g. happy, sad, angry) by professional musicians were recorded using the same three melodies from the original study. Participants (N = 319) were presented with the recordings and rated how well each emotion matched the emotional quality of each recording on a 0–10 scale. The same instruments from the original study (i.e. violin, voice, and flute) were used, with the addition of piano. In an effort to increase the accessibility of the experiment and allow for a more ecologically valid environment, the recordings were presented using an internet-based survey platform. As an extension to the original study, this experiment investigated how musicality, emotional intelligence, and emotional contagion might explain individual differences in the decoding process. The results showed overall high decoding accuracy (57%) when using emotion ratings aggregated across the sample of participants, similar to the method of analysis from the original study. However, when decoding accuracy was scored for each participant individually, the average accuracy was much lower (31%). Unlike in the original study, the voice was found to be the most expressive instrument. Generalised Linear Mixed Effects Regression modelling revealed that musical training and emotional engagement with music positively influenced emotion decoding accuracy.

Item Type: Article
Full text: (AM) Accepted Manuscript
Publisher Web site:
Publisher statement: This is an Accepted Manuscript of an article published by Taylor & Francis in Cognition and Emotion on 8 November 2018, available online:
Date accepted: 07 October 2018
Date deposited: 10 January 2019
Date of first online publication: 08 November 2018
Date first made open access: 08 November 2019

