Decoding emotions in expressive music performances: A multi-lab replication and extension study

Akkermans, Jessica; Schapiro, Renee; Müllensiefen, Daniel; Jakubowski, Kelly; Shanahan, Daniel; Baker, David; Busch, Veronika; Lothwesen, Kai; Elvers, Paul; Fischinger, Timo; Schlemmer, Kathrin; Frieler, Klaus


Abstract

With over 560 citations reported on Google Scholar by April 2018, the publication by Gabrielsson and Juslin (1996) presented evidence that performers can communicate their intended emotional expressions in music to listeners with high accuracy. Although related studies have been published on this topic, there has been no direct replication of this paper. A replication is warranted given the paper's influence in the field and the implications of its results. The present experiment joins the recent replication effort with a five-lab replication using the original methodology. Expressive performances of seven emotions (e.g. happy, sad, angry) by professional musicians were recorded using the same three melodies as in the original study. Participants (N = 319) were presented with the recordings and rated, on a 0–10 scale, how well each emotion term matched the emotional quality of each performance. The same instruments as in the original study (violin, voice, and flute) were used, with the addition of piano. To increase the accessibility of the experiment and allow for a more ecologically valid environment, the recordings were presented via an internet-based survey platform. As an extension of the original study, this experiment investigated how musicality, emotional intelligence, and emotional contagion might explain individual differences in the decoding process. Results showed high overall decoding accuracy (57%) when emotion ratings were aggregated across the sample of participants, mirroring the method of analysis of the original study. However, when decoding accuracy was scored for each participant individually, the average accuracy was much lower (31%). Unlike in the original study, the voice was found to be the most expressive instrument. Generalised linear mixed-effects regression modelling revealed that musical training and emotional engagement with music positively influence emotion decoding accuracy.
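The gap between the aggregated (57%) and per-participant (31%) figures comes down to how a "correct" decoding is scored. The sketch below is a rough illustration only, not the published analysis: it assumes a long-format ratings table with hypothetical column names (participant, stimulus, intended_emotion, rated_emotion, rating) and a simple top-rated-emotion scoring rule.

import pandas as pd

def aggregated_accuracy(ratings: pd.DataFrame) -> float:
    # Average each emotion term's rating across all participants for every
    # stimulus, then count a stimulus as correctly decoded if the intended
    # emotion receives the highest mean rating.
    mean_ratings = (ratings
                    .groupby(["stimulus", "intended_emotion", "rated_emotion"])["rating"]
                    .mean()
                    .reset_index())
    winners = mean_ratings.loc[mean_ratings.groupby("stimulus")["rating"].idxmax()]
    return float((winners["rated_emotion"] == winners["intended_emotion"]).mean())

def per_participant_accuracy(ratings: pd.DataFrame) -> pd.Series:
    # Score each listener separately: for every (participant, stimulus) pair,
    # check whether that listener's top-rated emotion term matches the
    # performer's intended emotion, then average the hits per participant.
    winners = ratings.loc[ratings.groupby(["participant", "stimulus"])["rating"].idxmax()]
    hits = winners["rated_emotion"] == winners["intended_emotion"]
    return hits.groupby(winners["participant"]).mean()

Under this kind of scoring rule, averaging ratings across listeners before picking a winning emotion smooths over individual disagreement, which is why an aggregated score can sit well above the mean of the individually scored accuracies.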

Citation

Akkermans, J., Schapiro, R., Müllensiefen, D., Jakubowski, K., Shanahan, D., Baker, D., … Frieler, K. (2019). Decoding emotions in expressive music performances: A multi-lab replication and extension study. Cognition and Emotion, 33(6), 1099-1118. https://doi.org/10.1080/02699931.2018.1541312

Journal Article Type: Article
Acceptance Date: Oct 7, 2018
Online Publication Date: Nov 8, 2018
Publication Date: 2019
Deposit Date: Jan 9, 2019
Publicly Available Date: Mar 29, 2024
Journal: Cognition and Emotion
Print ISSN: 0269-9931
Electronic ISSN: 1464-0600
Publisher: Taylor and Francis Group
Peer Reviewed: Peer Reviewed
Volume: 33
Issue: 6
Pages: 1099-1118
DOI: https://doi.org/10.1080/02699931.2018.1541312
