WorldWideScience

Sample records for sad music emotion

  1. Sad music induces pleasant emotion.

    Science.gov (United States)

    Kawakami, Ai; Furukawa, Kiyoshi; Katahira, Kentaro; Okanoya, Kazuo

    2013-01-01

    In general, sad music is thought to cause us to experience sadness, which is considered an unpleasant emotion. As a result, the question arises as to why we listen to sad music if it evokes sadness. One possible answer to this question is that we may actually feel positive emotions when we listen to sad music. This suggestion may appear to be counterintuitive; however, in this study, by dividing musical emotion into perceived emotion and felt emotion, we investigated this potential emotional response to music. We hypothesized that felt and perceived emotion may not actually coincide in this respect: sad music would be perceived as sad, but the experience of listening to sad music would evoke positive emotions. A total of 44 participants listened to musical excerpts and provided data on perceived and felt emotions by rating 62 descriptive words or phrases related to emotions on a scale that ranged from 0 (not at all) to 4 (very much). The results revealed that the sad music was perceived to be more tragic, whereas the actual experiences of the participants listening to the sad music induced them to feel more romantic, more blithe, and less tragic emotions than they actually perceived with respect to the same music. Thus, the participants experienced ambivalent emotions when they listened to the sad music. After considering the possible reasons that listeners were induced to experience emotional ambivalence by the sad music, we concluded that the formulation of a new model would be essential for examining the emotions induced by music and that this new model must entertain the possibility that what we experience when listening to music is vicarious emotion.

  2. Sad music induces pleasant emotion

    Science.gov (United States)

    Kawakami, Ai; Furukawa, Kiyoshi; Katahira, Kentaro; Okanoya, Kazuo

    2013-01-01

    In general, sad music is thought to cause us to experience sadness, which is considered an unpleasant emotion. As a result, the question arises as to why we listen to sad music if it evokes sadness. One possible answer to this question is that we may actually feel positive emotions when we listen to sad music. This suggestion may appear to be counterintuitive; however, in this study, by dividing musical emotion into perceived emotion and felt emotion, we investigated this potential emotional response to music. We hypothesized that felt and perceived emotion may not actually coincide in this respect: sad music would be perceived as sad, but the experience of listening to sad music would evoke positive emotions. A total of 44 participants listened to musical excerpts and provided data on perceived and felt emotions by rating 62 descriptive words or phrases related to emotions on a scale that ranged from 0 (not at all) to 4 (very much). The results revealed that the sad music was perceived to be more tragic, whereas the actual experiences of the participants listening to the sad music induced them to feel more romantic, more blithe, and less tragic emotions than they actually perceived with respect to the same music. Thus, the participants experienced ambivalent emotions when they listened to the sad music. After considering the possible reasons that listeners were induced to experience emotional ambivalence by the sad music, we concluded that the formulation of a new model would be essential for examining the emotions induced by music and that this new model must entertain the possibility that what we experience when listening to music is vicarious emotion. PMID:23785342

  3. Congruence of happy and sad emotion in music and faces modifies cortical audiovisual activation.

    Science.gov (United States)

    Jeong, Jeong-Won; Diwadkar, Vaibhav A; Chugani, Carla D; Sinsoongsud, Piti; Muzik, Otto; Behen, Michael E; Chugani, Harry T; Chugani, Diane C

    2011-02-14

    The powerful emotion-inducing properties of music are well-known, yet music may convey differing emotional responses depending on environmental factors. We hypothesized that the neural mechanisms involved in listening to music may differ when the music is presented together with visual stimuli that convey the same emotion as the music, compared to visual stimuli with incongruent emotional content. We designed this study to determine the effect of auditory (happy and sad instrumental music) and visual stimuli (happy and sad faces), congruent or incongruent for emotional content, on audiovisual processing using fMRI blood oxygenation level-dependent (BOLD) signal contrast. The study used a conventional block design. A block consisted of three emotional ON periods: music alone (happy or sad music), faces alone (happy or sad faces), and music combined with faces, in which the music excerpt was played while presenting either congruent or incongruent emotional faces. We found activity in the superior temporal gyrus (STG) and fusiform gyrus (FG) to be differentially modulated by music and faces depending on the congruence of emotional content. There was a greater BOLD response in STG when the emotion signaled by the music and faces was congruent. Furthermore, the magnitude of these changes differed for happy congruence and sad congruence, i.e., the activation of STG when happy music was presented with happy faces was greater than the activation seen when sad music was presented with sad faces. In contrast, incongruent stimuli diminished the BOLD response in STG and elicited greater signal change in bilateral FG. Behavioral testing supplemented these findings by showing that subject ratings of emotion in faces were influenced by emotion in music. When presented with happy music, happy faces were rated as more happy (p=0.051) and sad faces were rated as less sad (p=0.030). When presented with sad music, happy faces were rated as less

  4. Influence of trait empathy on the emotion evoked by sad music and on the preference for it.

    Science.gov (United States)

    Kawakami, Ai; Katahira, Kenji

    2015-01-01

    Some people experience pleasant emotions when listening to sad music and can therefore enjoy listening to it. In the current study, we aimed to investigate such apparently paradoxical emotional mechanisms and focused on the influence of individuals' trait empathy, which has been reported to be associated with emotional responses to sad music and a preference for it. Eighty-four elementary school children (42 males and 42 females, mean age 11.9 years) listened to two kinds of sad music and rated their emotional state and their liking of each piece. In addition, trait empathy was assessed using the Interpersonal Reactivity Index scale, which comprises four sub-components: Empathic Concern, Personal Distress, Perspective Taking, and Fantasy (FS). We conducted a path analysis and tested our proposed model, which hypothesized that trait empathy and its sub-components would affect the preference for sad music directly or indirectly, mediated by the emotional response to the sad music. Our findings indicated that FS, a sub-component of trait empathy, was directly associated with liking sad music. Additionally, perspective taking ability, another sub-component of trait empathy, was correlated with the emotional response to sad music. Furthermore, the experience of pleasant emotions contributed to liking sad music.

  5. Influence of Trait Empathy on the Emotion Evoked by Sad Music and on the Preference for it

    Directory of Open Access Journals (Sweden)

    Ai Kawakami

    2015-10-01

    Some people experience pleasant emotions when listening to sad music and can therefore enjoy listening to it. In the current study, we aimed to investigate such apparently paradoxical emotional mechanisms and focused on the influence of individuals’ trait empathy, which has been reported to be associated with emotional responses to sad music and a preference for it. Eighty-four elementary school children (42 males and 42 females, mean age 11.9 years) listened to two kinds of sad music and rated their emotional state and their liking of each piece. In addition, trait empathy was assessed using the Interpersonal Reactivity Index (IRI) scale, which comprises four sub-components: Empathic Concern, Personal Distress, Perspective Taking, and Fantasy. We conducted a path analysis and tested our proposed model, which hypothesized that trait empathy and its sub-components would affect the preference for sad music directly or indirectly, mediated by the emotional response to the sad music. Our findings indicated that fantasy, a sub-component of trait empathy, was directly associated with liking sad music. Additionally, perspective taking ability, another sub-component of trait empathy, was correlated with the emotional response to sad music. Furthermore, the experience of pleasant emotions contributed to liking sad music.

  6. Anxiety, Sadness, and Emotion Specificity: The Role of Music in Consumer Emotion and Advertisement Evaluation

    Directory of Open Access Journals (Sweden)

    Felix Septianto

    2013-12-01

    Although music can influence consumer judgment and behavior in diverse ways, it is still unclear whether music can evoke discrete emotions in consumers and influence their evaluation of particular advertisements. This research proposes that music can evoke sadness and anxiety in consumers, and that consumers then regulate these negative emotions in accordance with their emotion orientations: consumers who feel sad would evaluate a happy-themed advertisement more favorably, while consumers who feel anxious would evaluate a calm-themed advertisement more favorably. The paper closes with a discussion of the theoretical and practical implications of the study.

  7. A Functional MRI Study of Happy and Sad Emotions in Music with and without Lyrics

    Science.gov (United States)

    Brattico, Elvira; Alluri, Vinoo; Bogert, Brigitte; Jacobsen, Thomas; Vartiainen, Nuutti; Nieminen, Sirke; Tervaniemi, Mari

    2011-01-01

    Musical emotions, such as happiness and sadness, have been investigated using instrumental music devoid of linguistic content. However, pop and rock, the most common musical genres, utilize lyrics for conveying emotions. Using participants’ self-selected musical excerpts, we studied their behavior and brain responses to elucidate how lyrics interact with musical emotion processing, as reflected by emotion recognition and activation of limbic areas involved in affective experience. We extracted samples from subjects’ selections of sad and happy pieces and sorted them according to the presence of lyrics. Acoustic feature analysis showed that music with lyrics differed from music without lyrics in spectral centroid, a feature related to perceptual brightness, whereas sad music with lyrics did not diverge from happy music without lyrics, indicating the role of other factors in emotion classification. Behavioral ratings revealed that happy music without lyrics induced stronger positive emotions than happy music with lyrics. We also acquired functional magnetic resonance imaging data while subjects performed affective tasks regarding the music. First, using ecological and acoustically variable stimuli, we broadened previous findings about the brain processing of musical emotions and of songs versus instrumental music. Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca’s area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics. These findings point to the role of acoustic cues for the experience of happiness in music and to the importance of lyrics for sad musical emotions.
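
    The spectral centroid reported above is a standard acoustic descriptor: the amplitude-weighted mean frequency of a spectrum, commonly used as a proxy for perceptual brightness. Below is a minimal sketch of how such a feature can be extracted, assuming the librosa library and a hypothetical file "excerpt.wav"; it is an illustration only, not the feature-extraction pipeline used in the study.

      # Sketch: spectral centroid of a music excerpt (assumes librosa; "excerpt.wav" is hypothetical).
      import numpy as np
      import librosa

      # Load the audio at its native sampling rate, mixed down to mono.
      y, sr = librosa.load("excerpt.wav", sr=None, mono=True)

      # Spectral centroid per analysis frame, in Hz: the amplitude-weighted mean
      # frequency of each short-time spectrum, a common correlate of "brightness".
      centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

      print(f"mean spectral centroid: {np.mean(centroid):.1f} Hz")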

  8. A Functional MRI Study of Happy and Sad Emotions in Music with and without Lyrics.

    Science.gov (United States)

    Brattico, Elvira; Alluri, Vinoo; Bogert, Brigitte; Jacobsen, Thomas; Vartiainen, Nuutti; Nieminen, Sirke; Tervaniemi, Mari

    2011-01-01

    Musical emotions, such as happiness and sadness, have been investigated using instrumental music devoid of linguistic content. However, pop and rock, the most common musical genres, utilize lyrics for conveying emotions. Using participants' self-selected musical excerpts, we studied their behavior and brain responses to elucidate how lyrics interact with musical emotion processing, as reflected by emotion recognition and activation of limbic areas involved in affective experience. We extracted samples from subjects' selections of sad and happy pieces and sorted them according to the presence of lyrics. Acoustic feature analysis showed that music with lyrics differed from music without lyrics in spectral centroid, a feature related to perceptual brightness, whereas sad music with lyrics did not diverge from happy music without lyrics, indicating the role of other factors in emotion classification. Behavioral ratings revealed that happy music without lyrics induced stronger positive emotions than happy music with lyrics. We also acquired functional magnetic resonance imaging data while subjects performed affective tasks regarding the music. First, using ecological and acoustically variable stimuli, we broadened previous findings about the brain processing of musical emotions and of songs versus instrumental music. Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca's area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics. These findings point to the role of acoustic cues for the experience of happiness in music and to the importance of lyrics for sad musical emotions.

  9. A functional MRI study of happy and sad emotions in music with and without lyrics

    Directory of Open Access Journals (Sweden)

    Elvira Brattico

    2011-12-01

    Musical emotions, such as happiness and sadness, have been investigated using instrumental music devoid of linguistic content. However, pop and rock, the most common musical genres, utilize lyrics for conveying emotions. Using participants’ self-selected musical excerpts, we studied their behavior and brain responses to elucidate how lyrics interact with musical emotion processing, as reflected by emotion recognition and activation of limbic areas involved in affective experience. We extracted samples from subjects’ selections of sad and happy pieces and sorted them according to the presence of lyrics. Acoustic feature analysis showed that music with lyrics differed from music without lyrics in spectral centroid, a feature related to perceptual brightness, whereas sad music with lyrics did not diverge from happy music without lyrics, indicating the role of other factors in emotion classification. Behavioral ratings revealed that happy music without lyrics induced stronger positive emotions than happy music with lyrics. We also acquired functional magnetic resonance imaging (fMRI) data while subjects performed affective tasks regarding the music. First, using ecological and acoustically variable stimuli, we broadened previous findings about the brain processing of musical emotions and of songs versus instrumental music. Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca’s area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics. These findings point to the role of acoustic cues for the experience of happiness in music and to the importance of lyrics for sad musical emotions.

  10. The Paradox of Music-Evoked Sadness: An Online Survey

    Science.gov (United States)

    Taruffi, Liila; Koelsch, Stefan

    2014-01-01

    This study explores listeners’ experience of music-evoked sadness. Sadness is typically assumed to be undesirable and is therefore usually avoided in everyday life. Yet the question remains: Why do people seek and appreciate sadness in music? We present findings from an online survey with both Western and Eastern participants (N = 772). The survey investigates the rewarding aspects of music-evoked sadness, as well as the relative contribution of listener characteristics and situational factors to the appreciation of sad music. The survey also examines the different principles through which sadness is evoked by music, and their interaction with personality traits. Results show 4 different rewards of music-evoked sadness: reward of imagination, emotion regulation, empathy, and no “real-life” implications. Moreover, appreciation of sad music follows a mood-congruent fashion and is greater among individuals with high empathy and low emotional stability. Surprisingly, nostalgia rather than sadness is the most frequent emotion evoked by sad music. Correspondingly, memory was rated as the most important principle through which sadness is evoked. Finally, the trait empathy contributes to the evocation of sadness via contagion, appraisal, and by engaging social functions. The present findings indicate that emotional responses to sad music are multifaceted, are modulated by empathy, and are linked with a multidimensional experience of pleasure. These results were corroborated by a follow-up survey on happy music, which indicated differences between the emotional experiences resulting from listening to sad versus happy music. This is the first comprehensive survey of music-evoked sadness, revealing that listening to sad music can lead to beneficial emotional effects such as regulation of negative emotion and mood as well as consolation. Such beneficial emotional effects constitute the prime motivations for engaging with sad music in everyday life. PMID:25330315

  11. The paradox of music-evoked sadness: an online survey.

    Directory of Open Access Journals (Sweden)

    Liila Taruffi

    This study explores listeners' experience of music-evoked sadness. Sadness is typically assumed to be undesirable and is therefore usually avoided in everyday life. Yet the question remains: Why do people seek and appreciate sadness in music? We present findings from an online survey with both Western and Eastern participants (N = 772). The survey investigates the rewarding aspects of music-evoked sadness, as well as the relative contribution of listener characteristics and situational factors to the appreciation of sad music. The survey also examines the different principles through which sadness is evoked by music, and their interaction with personality traits. Results show 4 different rewards of music-evoked sadness: reward of imagination, emotion regulation, empathy, and no "real-life" implications. Moreover, appreciation of sad music follows a mood-congruent fashion and is greater among individuals with high empathy and low emotional stability. Surprisingly, nostalgia rather than sadness is the most frequent emotion evoked by sad music. Correspondingly, memory was rated as the most important principle through which sadness is evoked. Finally, the trait empathy contributes to the evocation of sadness via contagion, appraisal, and by engaging social functions. The present findings indicate that emotional responses to sad music are multifaceted, are modulated by empathy, and are linked with a multidimensional experience of pleasure. These results were corroborated by a follow-up survey on happy music, which indicated differences between the emotional experiences resulting from listening to sad versus happy music. This is the first comprehensive survey of music-evoked sadness, revealing that listening to sad music can lead to beneficial emotional effects such as regulation of negative emotion and mood as well as consolation. Such beneficial emotional effects constitute the prime motivations for engaging with sad music in everyday life.

  12. The paradox of music-evoked sadness: an online survey.

    Science.gov (United States)

    Taruffi, Liila; Koelsch, Stefan

    2014-01-01

    This study explores listeners' experience of music-evoked sadness. Sadness is typically assumed to be undesirable and is therefore usually avoided in everyday life. Yet the question remains: Why do people seek and appreciate sadness in music? We present findings from an online survey with both Western and Eastern participants (N = 772). The survey investigates the rewarding aspects of music-evoked sadness, as well as the relative contribution of listener characteristics and situational factors to the appreciation of sad music. The survey also examines the different principles through which sadness is evoked by music, and their interaction with personality traits. Results show 4 different rewards of music-evoked sadness: reward of imagination, emotion regulation, empathy, and no "real-life" implications. Moreover, appreciation of sad music follows a mood-congruent fashion and is greater among individuals with high empathy and low emotional stability. Surprisingly, nostalgia rather than sadness is the most frequent emotion evoked by sad music. Correspondingly, memory was rated as the most important principle through which sadness is evoked. Finally, the trait empathy contributes to the evocation of sadness via contagion, appraisal, and by engaging social functions. The present findings indicate that emotional responses to sad music are multifaceted, are modulated by empathy, and are linked with a multidimensional experience of pleasure. These results were corroborated by a follow-up survey on happy music, which indicated differences between the emotional experiences resulting from listening to sad versus happy music. This is the first comprehensive survey of music-evoked sadness, revealing that listening to sad music can lead to beneficial emotional effects such as regulation of negative emotion and mood as well as consolation. Such beneficial emotional effects constitute the prime motivations for engaging with sad music in everyday life.

  13. Sad and happy emotion discrimination in music by children with cochlear implants.

    Science.gov (United States)

    Hopyan, Talar; Manno, Francis A M; Papsin, Blake C; Gordon, Karen A

    2016-01-01

    Children using cochlear implants (CIs) develop speech perception but have difficulty perceiving complex acoustic signals. Mode and tempo are the two components used to recognize emotion in music. Based on CI limitations, we hypothesized that children using CIs would have impaired perception of mode cues relative to their normal hearing peers and would rely more heavily on tempo cues to distinguish happy from sad music. Study participants were 16 children using CIs (13 right, 3 left; mean age = 12.7 years, SD = 2.6) and 16 normal hearing peers. Participants judged 96 brief piano excerpts from the classical genre as happy or sad in a forced-choice task. Music was presented in random order, either in original form or with alterations of mode, tempo, or both. When music was presented in original form, children using CIs discriminated between happy and sad music with accuracy well above chance levels (87.5%) but significantly below those with normal hearing (98%). The CI group primarily used tempo cues, whereas normal hearing children relied more on mode cues. Transposing both mode and tempo cues in the same musical excerpt obliterated cues to emotion for both groups. Children using CIs showed significantly slower response times across all conditions. Children using CIs use tempo cues to discriminate happy versus sad music, reflecting a very different hearing strategy from that of their normal hearing peers. Slower reaction times by children using CIs indicate that they found the task more difficult and support the possibility that they require different strategies to process emotion in music than their normal hearing peers do.
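
    The accuracy figures quoted above can be checked against chance with a simple binomial test: in a two-alternative forced-choice task, chance is 50%. The sketch below assumes SciPy (version 1.7 or later) and uses the 96 excerpts and 87.5% group accuracy from the abstract as illustrative inputs; the paper's own per-participant statistics may have been computed differently.

      # Sketch: one-sided binomial test of 2AFC accuracy against chance (p = 0.5).
      # Trial count and accuracy are taken from the abstract as illustrative inputs.
      from scipy.stats import binomtest

      n_trials = 96                          # piano excerpts judged happy vs. sad
      n_correct = round(0.875 * n_trials)    # 87.5% accuracy -> 84 correct

      result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
      print(f"{n_correct}/{n_trials} correct, one-sided p = {result.pvalue:.2e}")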

  14. It's Sad but I Like It: The Neural Dissociation Between Musical Emotions and Liking in Experts and Laypersons.

    Science.gov (United States)

    Brattico, Elvira; Bogert, Brigitte; Alluri, Vinoo; Tervaniemi, Mari; Eerola, Tuomas; Jacobsen, Thomas

    2015-01-01

    Emotion-related areas of the brain, such as the medial frontal cortices, amygdala, and striatum, are activated during listening to sad or happy music as well as during listening to pleasurable music. Indeed, in music, like in other arts, sad and happy emotions might co-exist and be distinct from emotions of pleasure or enjoyment. Here we aimed at discerning the neural correlates of sadness or happiness in music as opposed to those related to musical enjoyment. We further investigated whether musical expertise modulates the neural activity during affective listening of music. To these aims, 13 musicians and 16 non-musicians brought to the lab their most liked and disliked musical pieces with a happy and sad connotation. Based on a listening test, we selected the most representative 18 sec excerpts of the emotions of interest for each individual participant. Functional magnetic resonance imaging (fMRI) recordings were obtained while subjects listened to and rated the excerpts. The cortico-thalamo-striatal reward circuit and motor areas were more active during liked than disliked music, whereas only the auditory cortex and the right amygdala were more active for disliked over liked music. These results discern the brain structures responsible for the perception of sad and happy emotions in music from those related to musical enjoyment. We also obtained novel evidence for functional differences in the limbic system associated with musical expertise, by showing enhanced liking-related activity in fronto-insular and cingulate areas in musicians.

  15. It's Sad but I Like It: The Neural Dissociation Between Musical Emotions and Liking in Experts and Laypersons

    Science.gov (United States)

    Brattico, Elvira; Bogert, Brigitte; Alluri, Vinoo; Tervaniemi, Mari; Eerola, Tuomas; Jacobsen, Thomas

    2016-01-01

    Emotion-related areas of the brain, such as the medial frontal cortices, amygdala, and striatum, are activated during listening to sad or happy music as well as during listening to pleasurable music. Indeed, in music, like in other arts, sad and happy emotions might co-exist and be distinct from emotions of pleasure or enjoyment. Here we aimed at discerning the neural correlates of sadness or happiness in music as opposed to those related to musical enjoyment. We further investigated whether musical expertise modulates the neural activity during affective listening of music. To these aims, 13 musicians and 16 non-musicians brought to the lab their most liked and disliked musical pieces with a happy and sad connotation. Based on a listening test, we selected the most representative 18 sec excerpts of the emotions of interest for each individual participant. Functional magnetic resonance imaging (fMRI) recordings were obtained while subjects listened to and rated the excerpts. The cortico-thalamo-striatal reward circuit and motor areas were more active during liked than disliked music, whereas only the auditory cortex and the right amygdala were more active for disliked over liked music. These results discern the brain structures responsible for the perception of sad and happy emotions in music from those related to musical enjoyment. We also obtained novel evidence for functional differences in the limbic system associated with musical expertise, by showing enhanced liking-related activity in fronto-insular and cingulate areas in musicians. PMID:26778996

  16. Hidden sources of joy, fear, and sadness: Explicit versus implicit neural processing of musical emotions.

    Science.gov (United States)

    Bogert, Brigitte; Numminen-Kontti, Taru; Gold, Benjamin; Sams, Mikko; Numminen, Jussi; Burunat, Iballa; Lampinen, Jouko; Brattico, Elvira

    2016-08-01

    Music is often used to regulate emotions and mood. Typically, music conveys and induces emotions even when one does not attend to them. Studies on the neural substrates of musical emotions have, however, only examined brain activity when subjects have focused on the emotional content of the music. Here we address with functional magnetic resonance imaging (fMRI) the neural processing of happy, sad, and fearful music with a paradigm in which 56 subjects were instructed to either classify the emotions (explicit condition) or pay attention to the number of instruments playing (implicit condition) in 4-s music clips. In the implicit vs. explicit condition, stimuli activated bilaterally the inferior parietal lobule, premotor cortex, caudate, and ventromedial frontal areas. The cortical dorsomedial prefrontal and occipital areas activated during explicit processing were those previously shown to be associated with the cognitive processing of music and emotion recognition and regulation. Moreover, happiness in music was associated with activity in the bilateral auditory cortex, left parahippocampal gyrus, and supplementary motor area, whereas the negative emotions of sadness and fear corresponded with activation of the left anterior cingulate and middle frontal gyrus and down-regulation of the orbitofrontal cortex. Our study demonstrates for the first time in healthy subjects the neural underpinnings of the implicit processing of brief musical emotions, particularly in frontoparietal, dorsolateral prefrontal, and striatal areas of the brain. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Music evokes vicarious emotions in listeners.

    Science.gov (United States)

    Kawakami, Ai; Furukawa, Kiyoshi; Okanoya, Kazuo

    2014-01-01

    Why do we listen to sad music? We seek to answer this question using a psychological approach. It is possible to distinguish perceived emotions from those that are experienced. Therefore, we hypothesized that, although sad music is perceived as sad, listeners actually feel (experience) pleasant emotions concurrent with sadness. This hypothesis was supported, which led us to question whether sadness in the context of art is truly an unpleasant emotion. While experiencing sadness may be unpleasant, it may also be somewhat pleasant when experienced in the context of art, for example, when listening to sad music. We consider musically evoked emotion vicarious, as we are not threatened when we experience it, in the way that we can be during the course of experiencing emotion in daily life. When we listen to sad music, we experience vicarious sadness. In this review, we propose two sides to sadness by suggesting vicarious emotion.

  18. The minor third communicates sadness in speech, mirroring its use in music.

    Science.gov (United States)

    Curtis, Meagan E; Bharucha, Jamshed J

    2010-06-01

    There is a long history of attempts to explain why music is perceived as expressing emotion. The relationship between pitches serves as an important cue for conveying emotion in music. The musical interval referred to as the minor third is generally thought to convey sadness. We reveal that the minor third also occurs in the pitch contour of speech conveying sadness. Bisyllabic speech samples conveying four emotions were recorded by 9 actresses. Acoustic analyses revealed that the relationship between the 2 salient pitches of the sad speech samples tended to approximate a minor third. Participants rated the speech samples for perceived emotion, and the use of numerous acoustic parameters as cues for emotional identification was modeled using regression analysis. The minor third was the most reliable cue for identifying sadness. Additional participants rated musical intervals for emotion, and their ratings verified the historical association between the musical minor third and sadness. These findings support the theory that human vocal expressions and music share an acoustic code for communicating sadness.
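
    For reference, the minor third discussed above spans three semitones in equal temperament, which corresponds to a frequency ratio of 2^(3/12), roughly 1.19, between the two pitches. The short sketch below shows the arithmetic; the example pitch values are hypothetical and are not measurements from the study.

      # Sketch: interval arithmetic for the minor third (12-tone equal temperament).
      # An interval of s semitones corresponds to a frequency ratio of 2**(s / 12).
      import math

      def interval_in_semitones(f_low: float, f_high: float) -> float:
          """Size of the interval between two pitches, in semitones."""
          return 12.0 * math.log2(f_high / f_low)

      print(f"minor third ratio: {2 ** (3 / 12):.3f}")     # ~1.189

      # Two hypothetical salient pitches from an utterance (A3 and C4), in Hz.
      f1, f2 = 220.0, 261.63
      print(f"interval: {interval_in_semitones(f1, f2):.2f} semitones")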

  19. The pleasures of sad music: a systematic review.

    Science.gov (United States)

    Sachs, Matthew E; Damasio, Antonio; Habibi, Assal

    2015-01-01

    Sadness is generally seen as a negative emotion, a response to distressing and adverse situations. In an aesthetic context, however, sadness is often associated with some degree of pleasure, as suggested by the ubiquity and popularity, throughout history, of music, plays, films and paintings with a sad content. Here, we focus on the fact that music regarded as sad is often experienced as pleasurable. Compared to other art forms, music has an exceptional ability to evoke a wide range of feelings and is especially beguiling when it deals with grief and sorrow. Why is it, then, that while human survival depends on preventing painful experiences, mental pain often turns out to be explicitly sought through music? In this article we consider why and how sad music can become pleasurable. We offer a framework to account for how listening to sad music can lead to positive feelings, contending that this effect hinges on correcting an ongoing homeostatic imbalance. Sadness evoked by music is found pleasurable: (1) when it is perceived as non-threatening; (2) when it is aesthetically pleasing; and (3) when it produces psychological benefits such as mood regulation and empathic feelings, caused, for example, by recollection of and reflection on past events. We also review neuroimaging studies related to music and emotion and focus on those that deal with sadness. Further exploration of the neural mechanisms through which stimuli that usually produce sadness can induce a positive affective state could help the development of effective therapies for disorders such as depression, in which the ability to experience pleasure is attenuated.

  20. The Pleasures of Sad Music: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Matthew Sachs

    2015-07-01

    Sadness is generally seen as a negative emotion, a response to distressing and adverse situations. In an aesthetic context, however, sadness is often associated with some degree of pleasure, as suggested by the ubiquity and popularity, throughout history, of music, plays, films and paintings with a sad content. Here, we focus on the fact that music regarded as sad is often experienced as pleasurable. Compared to other art forms, music has an exceptional ability to evoke a wide range of feelings and is especially beguiling when it deals with grief and sorrow. Why is it, then, that while human survival depends on preventing painful experiences, mental pain often turns out to be explicitly sought through music? In this article we consider why and how sad music can become pleasurable. We offer a framework to account for how listening to sad music can lead to positive feelings, contending that this effect hinges on correcting an ongoing homeostatic imbalance. Sadness evoked by music is found pleasurable: (1) when it is perceived as non-threatening; (2) when it is aesthetically pleasing; and (3) when it produces psychological benefits such as mood regulation and empathic feelings, caused, for example, by recollection of and reflection on past events. We also review neuroimaging studies related to music and emotion and focus on those that deal with sadness. Further exploration of the neural mechanisms through which stimuli that usually produce sadness can induce a positive affective state could help the development of effective therapies for disorders such as depression, in which the ability to experience pleasure is attenuated.

  1. Being Moved by Unfamiliar Sad Music Is Associated with High Empathy

    Science.gov (United States)

    Eerola, Tuomas; Vuoskoski, Jonna K.; Kautiainen, Hannu

    2016-01-01

    The paradox of enjoying listening to music that evokes sadness is yet to be fully understood. Unlike prior studies that have explored potential explanations related to lyrics, memories, and mood regulation, we investigated the types of emotions induced by unfamiliar, instrumental sad music, and whether these responses are consistently associated with certain individual difference variables. One hundred and two participants were drawn from a representative sample to minimize self-selection bias. The results suggest that the emotional responses induced by unfamiliar sad music could be characterized in terms of three underlying factors: Relaxing sadness, Moving sadness, and Nervous sadness. Relaxing sadness was characterized by felt and perceived peacefulness and positive valence. Moving sadness captured an intense experience that involved feelings of sadness and being moved. Nervous sadness was associated with felt anxiety, perceived scariness and negative valence. These interpretations were supported by indirect measures of felt emotion. Experiences of Moving sadness were strongly associated with high trait empathy and emotional contagion, but not with other previously suggested traits such as absorption or nostalgia-proneness. Relaxing sadness and Nervous sadness were not significantly predicted by any of the individual difference variables. The findings are interpreted within a theoretical framework of embodied emotions. PMID:27695424

  2. Sharing experienced sadness: Negotiating meanings of self-defined sad music within a group interview session

    OpenAIRE

    Peltola, Henna-Riikka

    2017-01-01

    Sadness induced by music listening has been a popular research focus in music and emotion research. Despite the wide consensus in affective sciences that emotional experiences are social processes, previous studies have only concentrated on individuals. Thus, the intersubjective dimension of musical experience – how music and music-related emotions are experienced between individuals – has not been investigated. In order to tap into shared emotional experiences, group discussions about experi...

  3. Intact brain processing of musical emotions in autism spectrum disorder, but more cognitive load and arousal in happy versus sad music

    Directory of Open Access Journals (Sweden)

    Line Gebauer

    2014-07-01

    Music is a potent source for eliciting emotions, but not everybody experiences emotions in the same way. Individuals with autism spectrum disorder (ASD) show difficulties with social and emotional cognition. Impairments in emotion recognition are widely studied in ASD, and have been associated with atypical brain activation in response to emotional expressions in faces and speech. Whether these impairments and atypical brain responses generalize to other domains, such as emotional processing of music, is less clear. Using functional magnetic resonance imaging, we investigated neural correlates of emotion recognition in music in high-functioning adults with ASD and neurotypical adults. Both groups engaged similar neural networks during processing of emotional music, and individuals with ASD rated emotional music comparably to the neurotypical group. However, in the ASD group, increased activity in response to happy compared to sad music was observed in dorsolateral prefrontal regions and in the rolandic operculum/insula, and we propose that this reflects increased cognitive processing in response to emotional musical stimuli in this group.

  4. Effects of Sad and Happy Music on Mind-Wandering and the Default Mode Network.

    Science.gov (United States)

    Taruffi, Liila; Pehrs, Corinna; Skouras, Stavros; Koelsch, Stefan

    2017-10-31

    Music is a ubiquitous phenomenon in human cultures, mostly due to its power to evoke and regulate emotions. However, effects of music evoking different emotional experiences such as sadness and happiness on cognition, and in particular on self-generated thought, are unknown. Here we use probe-caught thought sampling and functional magnetic resonance imaging (fMRI) to investigate the influence of sad and happy music on mind-wandering and its underlying neuronal mechanisms. In three experiments we found that sad music, compared with happy music, is associated with stronger mind-wandering (Experiments 1A and 1B) and greater centrality of the nodes of the Default Mode Network (DMN) (Experiment 2). Thus, our results demonstrate that, when listening to sad vs. happy music, people withdraw their attention inwards and engage in spontaneous, self-referential cognitive processes. Importantly, our results also underscore that DMN activity can be modulated as a function of sad and happy music. These findings call for a systematic investigation of the relation between music and thought, having broad implications for the use of music in education and clinical settings.

  5. Older but not younger infants associate own-race faces with happy music and other-race faces with sad music.

    Science.gov (United States)

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2018-03-01

    We used a novel intermodal association task to examine whether infants associate own- and other-race faces with music of different emotional valences. Three- to 9-month-olds saw a series of neutral own- or other-race faces paired with happy or sad musical excerpts. Three- to 6-month-olds did not show any specific association between face race and music. At 9 months, however, infants looked longer at own-race faces paired with happy music than at own-race faces paired with sad music. Nine-month-olds also looked longer at other-race faces paired with sad music than at other-race faces paired with happy music. These results indicate that infants with nearly exclusive own-race face experience develop associations between face race and music emotional valence in the first year of life. The potential implications of such associations for developing racial biases in early childhood are discussed. © 2017 John Wiley & Sons Ltd.

  6. Music-evoked emotions in schizophrenia.

    Science.gov (United States)

    Abe, Daijyu; Arai, Makoto; Itokawa, Masanari

    2017-07-01

    Previous studies have reported that people with schizophrenia have impaired musical abilities. Here we developed a simple music-based assay to assess patients' ability to associate a minor chord with sadness. We further characterized correlations between impaired musical responses and psychiatric symptoms. We exposed participants sequentially to two sets of sound stimuli, first a C-major progression and chord, and second a C-minor progression and chord. Participants were asked which stimulus they associated with sadness: the first set, the second set, or neither. The severity of psychiatric symptoms was assessed using the Positive and Negative Syndrome Scale (PANSS). Study participants were 29 patients diagnosed with schizophrenia and 29 healthy volunteers matched in age, gender and musical background. 37.9% (95% confidence interval [CI]: 19.1-56.7) of patients with schizophrenia associated the minor chord set with sadness, compared with 97.9% (95% CI: 89.5-103.6) of controls. Four patients were diagnosed with treatment-resistant schizophrenia, and all four failed to associate the minor chord with sadness. Patients who did not recognize minor chords as sad had significantly higher scores on all PANSS subscales. A simple test allows music-evoked emotions to be assessed in patients with schizophrenia, and may reveal potential relationships between music-evoked emotions and psychiatric symptoms. Copyright © 2016. Published by Elsevier B.V.
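
    A 95% confidence interval for a proportion that extends above 100%, as in the control-group figure quoted above, is characteristic of the normal-approximation (Wald) interval, which is not constrained to the 0-100% range. The sketch below, assuming SciPy, shows that calculation using the reported percentages and n = 29 per group as illustrative inputs; it is not an exact reproduction of the paper's own computation.

      # Sketch: Wald (normal-approximation) 95% CI for a proportion. Near the
      # boundary this interval can exceed 100%, as in the abstract above.
      # Inputs are illustrative reconstructions, not the paper's exact figures.
      import math
      from scipy.stats import norm

      def wald_ci(p_hat: float, n: int, conf: float = 0.95) -> tuple[float, float]:
          """Wald confidence interval for a proportion (no boundary correction)."""
          z = norm.ppf(0.5 + conf / 2.0)
          half_width = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
          return p_hat - half_width, p_hat + half_width

      for label, p_hat in [("patients", 0.379), ("controls", 0.979)]:
          lo, hi = wald_ci(p_hat, n=29)
          print(f"{label}: {100 * p_hat:.1f}% (95% CI {100 * lo:.1f}-{100 * hi:.1f})")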

  7. Improvement of autobiographic memory recovery by means of sad music in Alzheimer's Disease type dementia.

    Science.gov (United States)

    Meilán García, Juan José; Iodice, Rosario; Carro, Juan; Sánchez, José Antonio; Palmero, Francisco; Mateos, Ana María

    2012-06-01

    Autobiographic memory undergoes progressive deterioration during the evolution of Alzheimer's disease (AD). The aim of this study was to analyze mechanisms that facilitate recovery of autobiographic memories. We used a commonly employed cue, music, with the addition of an emotional factor. Autobiographic memory provoked by a variety of sounds (happy music, sad music, emotionally neutral music, ambient coffee-bar noise, and no sound) was analyzed in a sample of 25 patients with AD. Emotional music, especially sad music for remote memories, was found to be the most effective kind for recall of autobiographic experiences. The factor evoking the memory is not the music itself, but rather the emotion associated with it, and is useful for semantic rather than episodic memory.

  8. The Pleasure Evoked by Sad Music Is Mediated by Feelings of Being Moved.

    Science.gov (United States)

    Vuoskoski, Jonna K; Eerola, Tuomas

    2017-01-01

    Why do we enjoy listening to music that makes us sad? This question has puzzled music psychologists for decades, but the paradox of "pleasurable sadness" remains to be solved. Recent findings from a study investigating the enjoyment of sad films suggest that the positive relationship between felt sadness and enjoyment might be explained by feelings of being moved (Hanich et al., 2014). The aim of the present study was to investigate whether feelings of being moved also mediated the enjoyment of sad music. In Experiment 1, 308 participants listened to five sad music excerpts and rated their liking and felt emotions. A multilevel mediation analysis revealed that the initial positive relationship between liking and felt sadness (r = 0.22) was fully mediated by feelings of being moved. Experiment 2 explored the interconnections of perceived sadness, beauty, and movingness in 27 short music excerpts that represented independently varying levels of sadness and beauty. Two multilevel mediation analyses were carried out to test competing hypotheses: (A) that movingness mediates the effect of perceived sadness on liking, or (B) that perceived beauty mediates the effect of sadness on liking. Stronger support was obtained for Hypothesis A. Our findings suggest that, similarly to the enjoyment of sad films, the aesthetic appreciation of sad music is mediated by being moved. We argue that felt sadness may contribute to the enjoyment of sad music by intensifying feelings of being moved.
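
    The mediation logic described above (liking regressed on felt sadness, with feelings of being moved as the mediator) can be illustrated with the classic product-of-coefficients approach. The sketch below assumes statsmodels and simulated single-level data; the study itself used multilevel mediation with ratings nested within listeners, which this simplified illustration does not implement.

      # Sketch: single-level mediation (sadness -> moved -> liking) on simulated data.
      # The study used *multilevel* mediation; this simplified version does not.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 300
      sadness = rng.normal(size=n)
      moved = 0.6 * sadness + rng.normal(scale=0.8, size=n)        # a-path
      liking = 0.5 * moved + 0.0 * sadness + rng.normal(size=n)    # b-path, no direct effect
      df = pd.DataFrame({"sadness": sadness, "moved": moved, "liking": liking})

      total = smf.ols("liking ~ sadness", data=df).fit()           # total effect (c)
      a_path = smf.ols("moved ~ sadness", data=df).fit()           # a
      b_path = smf.ols("liking ~ moved + sadness", data=df).fit()  # b and direct effect (c')

      indirect = a_path.params["sadness"] * b_path.params["moved"]
      print(f"total effect c      = {total.params['sadness']:.3f}")
      print(f"direct effect c'    = {b_path.params['sadness']:.3f}")
      print(f"indirect effect a*b = {indirect:.3f}")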

  9. The Pleasure Evoked by Sad Music Is Mediated by Feelings of Being Moved

    Science.gov (United States)

    Vuoskoski, Jonna K.; Eerola, Tuomas

    2017-01-01

    Why do we enjoy listening to music that makes us sad? This question has puzzled music psychologists for decades, but the paradox of “pleasurable sadness” remains to be solved. Recent findings from a study investigating the enjoyment of sad films suggest that the positive relationship between felt sadness and enjoyment might be explained by feelings of being moved (Hanich et al., 2014). The aim of the present study was to investigate whether feelings of being moved also mediated the enjoyment of sad music. In Experiment 1, 308 participants listened to five sad music excerpts and rated their liking and felt emotions. A multilevel mediation analysis revealed that the initial positive relationship between liking and felt sadness (r = 0.22) was fully mediated by feelings of being moved. Experiment 2 explored the interconnections of perceived sadness, beauty, and movingness in 27 short music excerpts that represented independently varying levels of sadness and beauty. Two multilevel mediation analyses were carried out to test competing hypotheses: (A) that movingness mediates the effect of perceived sadness on liking, or (B) that perceived beauty mediates the effect of sadness on liking. Stronger support was obtained for Hypothesis A. Our findings suggest that – similarly to the enjoyment of sad films – the aesthetic appreciation of sad music is mediated by being moved. We argue that felt sadness may contribute to the enjoyment of sad music by intensifying feelings of being moved. PMID:28377740

  10. A functional MRI study of happy and sad affective states induced by classical music.

    Science.gov (United States)

    Mitterschiffthaler, Martina T; Fu, Cynthia H Y; Dalton, Jeffrey A; Andrew, Christopher M; Williams, Steven C R

    2007-11-01

    The present study investigated the functional neuroanatomy of transient mood changes in response to Western classical music. In a pilot experiment, 53 healthy volunteers (mean age: 32.0; SD = 9.6) evaluated their emotional responses to 60 classical musical pieces using a visual analogue scale (VAS) ranging from 0 (sad) through 50 (neutral) to 100 (happy). Twenty pieces were found to accurately induce the intended emotional states with good reliability, consisting of 5 happy, 5 sad, and 10 emotionally unevocative, neutral musical pieces. In a subsequent functional magnetic resonance imaging (fMRI) study, the blood oxygenation level dependent (BOLD) signal contrast was measured in response to the mood state induced by each musical stimulus in a separate group of 16 healthy participants (mean age: 29.5; SD = 5.5). Mood state ratings during scanning were made by a VAS, which confirmed the emotional valence of the selected stimuli. Increased BOLD signal contrast during presentation of happy music was found in the ventral and dorsal striatum, anterior cingulate, parahippocampal gyrus, and auditory association areas. With sad music, increased BOLD signal responses were noted in the hippocampus/amygdala and auditory association areas. Presentation of neutral music was associated with increased BOLD signal responses in the insula and auditory association areas. Our findings suggest that an emotion processing network in response to music integrates the ventral and dorsal striatum, areas involved in reward experience and movement; the anterior cingulate, which is important for targeting attention; and medial temporal areas, traditionally found in the appraisal and processing of emotions. Copyright 2006 Wiley-Liss, Inc.

  11. Impaired emotion recognition in music in Parkinson's disease.

    Science.gov (United States)

    van Tricht, Mirjam J; Smeding, Harriet M M; Speelman, Johannes D; Schmand, Ben A

    2010-10-01

    Music has the potential to evoke strong emotions and plays a significant role in the lives of many people. Music might therefore be an ideal medium to assess emotion recognition. We investigated emotion recognition in music in 20 patients with idiopathic Parkinson's disease (PD) and 20 matched healthy volunteers. The role of cognitive dysfunction and other disease characteristics in emotion recognition was also evaluated. We used 32 musical excerpts that expressed happiness, sadness, fear or anger. PD patients were impaired in recognizing fear and anger in music. Fear recognition was associated with executive functions in PD patients and in healthy controls, but the emotion recognition impairments of PD patients persisted after adjusting for executive functioning. We found no differences in the recognition of happy or sad music. Emotion recognition was not related to depressive symptoms, disease duration or severity of motor symptoms. We conclude that PD patients are impaired in recognizing complex emotions in music. Although this impairment is related to executive dysfunction, our findings most likely reflect an additional primary deficit in emotional processing. 2010 Elsevier Inc. All rights reserved.

  12. The effect of music background on the emotional appraisal of film sequences

    Directory of Open Access Journals (Sweden)

    Pavlović Ivanka

    2011-01-01

    In this study the effects of musical background on the emotional appraisal of film sequences were investigated. Four pairs of polar emotions defined in Plutchik’s model were used as basic emotional qualities: joy-sadness, anticipation-surprise, fear-anger, and trust-disgust. In the preliminary study, eight film sequences and eight music themes were selected as the best representatives of all eight of Plutchik’s emotions. In the main experiment, participants judged the emotional qualities of film-music combinations on eight seven-point scales. Half of the combinations were congruent (e.g., joyful film with joyful music) and half were incongruent (e.g., joyful film with sad music). Results showed that visual information (film) had greater effects on the emotion appraisal than auditory information (music). The modulation effects of the musical background depended on the emotional qualities involved. In some incongruent combinations (joy-sadness), modulations in the expected direction were obtained (e.g., joyful music reduced the sadness of a sad film); in some cases (anger-fear), no modulation effects were obtained; and in some cases (trust-disgust, anticipation-surprise), the modulation effects were in an unexpected direction (e.g., trustful music increased the appraised disgust of a disgusting film). These results suggest that the conjoint appraisal of emotions depends on the medium (the film masks the music) and on the emotional quality (three types of modulation effects).

  13. The Influence of Music on Facial Emotion Recognition in Children with Autism Spectrum Disorder and Neurotypical Children.

    Science.gov (United States)

    Brown, Laura S

    2017-03-01

    Children with autism spectrum disorder (ASD) often struggle with social skills, including the ability to perceive emotions based on facial expressions. Research evidence suggests that many individuals with ASD can perceive emotion in music. Examining whether music can be used to enhance recognition of facial emotion by children with ASD would inform development of music therapy interventions. The purpose of this study was to investigate the influence of music with a strong emotional valence (happy; sad) on the ability of children with ASD to label emotions depicted in facial photographs, and on their response time. Thirty neurotypical children and 20 children with high-functioning ASD rated happy, neutral, and sad expressions in 30 photographs under two music listening conditions (sad music; happy music). During each music listening condition, participants rated the 30 images using a 7-point scale that ranged from very sad to very happy. Response time data were also collected across both conditions. A significant two-way interaction revealed that participants' ratings of happy and neutral faces were unaffected by music conditions, but sad faces were perceived to be sadder with sad music than with happy music. Across both conditions, neurotypical children rated the happy faces as happier and the sad faces as sadder than did participants with ASD. Response times of the neurotypical children were consistently shorter than response times of the children with ASD; both groups took longer to rate sad faces than happy faces. Response times of neurotypical children were generally unaffected by the valence of the music condition; however, children with ASD took longer to respond when listening to sad music. Music appears to affect perceptions of emotion in children with ASD, and perceptions of sad facial expressions seem to be more affected by emotionally congruent background music than are perceptions of happy or neutral faces. © the American Music Therapy Association 2016

  14. Explicit versus implicit neural processing of musical emotions

    OpenAIRE

    Bogert, Brigitte; Numminen-Kontti, Taru; Gold, Benjamin; Sams, Mikko; Numminen, Jussi; Burunat, Iballa; Lampinen, Jouko; Brattico, Elvira

    2016-01-01

    Music is often used to regulate emotions and mood. Typically, music conveys and induces emotions even when one does not attend to them. Studies on the neural substrates of musical emotions have, however, only examined brain activity when subjects have focused on the emotional content of the music. Here we address with functional magnetic resonance imaging (fMRI) the neural processing of happy, sad, and fearful music with a paradigm in which 56 subjects were instructed to either classify the e...

  15. Mapping aesthetic musical emotions in the brain.

    Science.gov (United States)

    Trost, Wiebke; Ethofer, Thomas; Zentner, Marcel; Vuilleumier, Patrik

    2012-12-01

    Music evokes complex emotions beyond pleasant/unpleasant or happy/sad dichotomies usually investigated in neuroscience. Here, we used functional neuroimaging with parametric analyses based on the intensity of felt emotions to explore a wider spectrum of affective responses reported during music listening. Positive emotions correlated with activation of left striatum and insula when high-arousing (Wonder, Joy) but right striatum and orbitofrontal cortex when low-arousing (Nostalgia, Tenderness). Irrespective of their positive/negative valence, high-arousal emotions (Tension, Power, and Joy) also correlated with activations in sensory and motor areas, whereas low-arousal categories (Peacefulness, Nostalgia, and Sadness) selectively engaged ventromedial prefrontal cortex and hippocampus. The right parahippocampal cortex activated in all but positive high-arousal conditions. Results also suggested some blends between activation patterns associated with different classes of emotions, particularly for feelings of Wonder or Transcendence. These data reveal a differentiated recruitment across emotions of networks involved in reward, memory, self-reflective, and sensorimotor processes, which may account for the unique richness of musical emotions.

  16. Music to my ears: Age-related decline in musical and facial emotion recognition.

    Science.gov (United States)

    Sutcliffe, Ryan; Rendell, Peter G; Henry, Julie D; Bailey, Phoebe E; Ruffman, Ted

    2017-12-01

    We investigated young-old differences in emotion recognition using music and face stimuli and tested explanatory hypotheses regarding older adults' typically worse emotion recognition. In Experiment 1, young and older adults labeled emotions in an established set of faces, and in classical piano stimuli that we pilot-tested on other young and older adults. Older adults were worse at detecting anger, sadness, fear, and happiness in music. Performance on the music and face emotion tasks was not correlated for either age group. Because musical expressions of fear were not equated for age groups in the pilot study of Experiment 1, we conducted a second experiment in which we created a novel set of music stimuli that included more accessible musical styles, and which we again pilot-tested on young and older adults. In this pilot study, all musical emotions were identified similarly by young and older adults. In Experiment 2, participants also made age estimations in another set of faces to examine whether potential relations between the face and music emotion tasks would be shared with the age estimation task. Older adults did worse in each of the tasks, and had specific difficulty recognizing happy, sad, peaceful, angry, and fearful music clips. Older adults' difficulties in each of the 3 tasks (music emotion, face emotion, and face age) were not correlated with each other. General cognitive decline did not appear to explain our results as increasing age predicted emotion performance even after fluid IQ was controlled for within the older adult group. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Effects of induced sad mood on facial emotion perception in young and older adults.

    Science.gov (United States)

    Lawrie, Louisa; Jackson, Margaret C; Phillips, Louise H

    2018-02-15

    Older adults perceive less intense negative emotion in facial expressions compared to younger counterparts. Prior research has also demonstrated that mood alters facial emotion perception. Nevertheless, there is little evidence evaluating the interactive effects of age and mood on emotion perception. This study investigated the effects of sad mood on younger and older adults' perception of emotional and neutral faces. Participants rated the intensity of stimuli while listening to sad music and in silence. Measures of mood were administered. Younger and older participants rated sad faces as displaying stronger sadness when they experienced sad mood. While younger participants showed no influence of sad mood on happiness ratings of happy faces, older adults rated happy faces as conveying less happiness when they experienced sad mood. This study demonstrates how emotion perception can change when a controlled mood induction procedure is applied to alter mood in young and older participants.

  18. Recognition of facial and musical emotions in Parkinson's disease.

    Science.gov (United States)

    Saenz, A; Doé de Maindreville, A; Henry, A; de Labbey, S; Bakchine, S; Ehrlé, N

    2013-03-01

    Patients with amygdala lesions were found to be impaired in recognizing the fear emotion both from faces and from music. In patients with Parkinson's disease (PD), impairment in recognition of emotions from facial expressions was reported for disgust, fear, sadness and anger, but no studies had yet investigated this population for the recognition of emotions from both face and music. The ability to recognize basic universal emotions (fear, happiness and sadness) from both face and music was investigated in 24 medicated patients with PD and 24 healthy controls. The patient group was tested for language (verbal fluency tasks), memory (digit and spatial span), executive functions (Similarities and Picture Completion subtests of the WAIS III, Brixton and Stroop tests), and visual attention (Bells test), and completed self-assessment tests for anxiety and depression. Results showed that the PD group was significantly impaired in recognition of both fear and sadness emotions from facial expressions, whereas their performance in recognition of emotions from musical excerpts was not different from that of the control group. The scores for fear and sadness recognition from faces were correlated neither with scores in tests of executive and cognitive functions nor with scores on the self-assessment scales. We attributed the observed dissociation to the modality (visual vs. auditory) of presentation and to the ecological value of the musical stimuli that we used. We discuss the relevance of our findings for the care of patients with PD. © 2012 The Author(s) European Journal of Neurology © 2012 EFNS.

  19. Mapping Aesthetic Musical Emotions in the Brain

    Science.gov (United States)

    Ethofer, Thomas; Zentner, Marcel; Vuilleumier, Patrik

    2012-01-01

    Music evokes complex emotions beyond pleasant/unpleasant or happy/sad dichotomies usually investigated in neuroscience. Here, we used functional neuroimaging with parametric analyses based on the intensity of felt emotions to explore a wider spectrum of affective responses reported during music listening. Positive emotions correlated with activation of left striatum and insula when high-arousing (Wonder, Joy) but right striatum and orbitofrontal cortex when low-arousing (Nostalgia, Tenderness). Irrespective of their positive/negative valence, high-arousal emotions (Tension, Power, and Joy) also correlated with activations in sensory and motor areas, whereas low-arousal categories (Peacefulness, Nostalgia, and Sadness) selectively engaged ventromedial prefrontal cortex and hippocampus. The right parahippocampal cortex activated in all but positive high-arousal conditions. Results also suggested some blends between activation patterns associated with different classes of emotions, particularly for feelings of Wonder or Transcendence. These data reveal a differentiated recruitment across emotions of networks involved in reward, memory, self-reflective, and sensorimotor processes, which may account for the unique richness of musical emotions. PMID:22178712

  20. Emotion rendering in music: range and characteristic values of seven musical variables.

    Science.gov (United States)

    Bresin, Roberto; Friberg, Anders

    2011-10-01

    Many studies on the synthesis of emotional expression in music performance have focused on the effect of individual performance variables on perceived emotional quality by varying those variables systematically. However, most of the studies have used a predetermined small number of levels for each variable, and the selection of these levels has often been done arbitrarily. The main aim of this research work is to improve upon existing methodologies by taking a synthesis approach. In a production experiment, 20 performers were asked to manipulate the values of 7 musical variables simultaneously (tempo, sound level, articulation, phrasing, register, timbre, and attack speed) to communicate 5 different emotional expressions (neutral, happy, scary, peaceful, sad) for each of 4 scores. The scores were compositions communicating four different emotions (happiness, sadness, fear, calmness). Emotional expressions and music scores were presented in combination and in random order for each performer, for a total of 5 × 4 stimuli. The experiment allowed for a systematic investigation of the interaction between the emotion of each score and the emotions the performers intended to express. A two-way repeated-measures analysis of variance (ANOVA) with the factors emotion and score was conducted on the participants' values separately for each of the seven musical variables. There are two main results. The first is that the musical variables were manipulated in the same direction as reported in previous research on emotionally expressive music performance. The second is the identification, for each of the five emotions, of the mean values and ranges of the five musical variables tempo, sound level, articulation, register, and instrument. These values turned out to be independent of the particular score and its emotion. The results presented in this study therefore allow for both the design and control of emotionally expressive computerized musical stimuli that are more ecologically valid than

  1. Sensitivity to musical emotions in congenital amusia.

    Science.gov (United States)

    Gosselin, Nathalie; Paquette, Sébastien; Peretz, Isabelle

    2015-10-01

    The emotional experience elicited by music is largely dependent on structural characteristics such as pitch, rhythm, and dynamics. We examine here to what extent amusic adults, who have experienced pitch perception difficulties all their lives, still maintain some ability to perceive emotions from music. Amusic and control participants judged the emotions expressed by unfamiliar musical clips intended to convey happiness, sadness, fear and peacefulness (Experiment 1A). Surprisingly, most amusic individuals showed normal recognition of the four emotions tested here. This preserved ability was not due to some peculiarities of the music, since the amusic individuals showed a typical deficit in perceiving pitch violations intentionally inserted in the same clips (Experiment 1B). In Experiment 2, we tested the use of two major structural determinants of musical emotions: tempo and mode. Neutralization of tempo had the same effect on both amusics' and controls' emotional ratings. In contrast, amusics did not respond to a change of mode as markedly as controls did. Moreover, unlike the control participants, amusics' judgments were not influenced by subtle differences in pitch, such as the number of semitones changed by the mode manipulation. Instead, amusics showed normal sensitivity to fluctuations in energy, to pulse clarity, and to timbre differences, such as roughness. Amusics even showed sensitivity to key clarity and to large mean pitch differences in distinguishing happy from sad music. Thus, the pitch perception deficit experienced by amusic adults had only mild consequences on emotional judgments. In sum, emotional responses to music may be possible in this condition. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Sadness is unique: Neural processing of emotions in speech prosody in musicians and non-musicians

    Directory of Open Access Journals (Sweden)

    Mona ePark

    2015-01-01

    Full Text Available Musical training has been shown to have positive effects on several aspects of speech processing; however, the effects of musical training on the neural processing of speech prosody conveying distinct emotions are yet to be fully understood. We used functional magnetic resonance imaging (fMRI) to investigate whether the neural responses to speech prosody conveying happiness, sadness, and fear differ between musicians and non-musicians. Differences in the processing of emotional speech prosody between the two groups were only observed when sadness was expressed. Musicians showed increased activation in the middle frontal gyrus, the anterior medial prefrontal cortex, the posterior cingulate cortex and the retrosplenial cortex. Our results suggest an increased sensitivity of emotional processing in musicians with respect to sadness expressed in speech, possibly reflecting empathic processes.

  3. Mapping aesthetic musical emotions in the brain

    OpenAIRE

    Trost, Johanna Wiebke; Ethofer, Thomas Stefan; Zentner, Marcel Robert; Vuilleumier, Patrik

    2012-01-01

    Music evokes complex emotions beyond pleasant/unpleasant or happy/sad dichotomies usually investigated in neuroscience. Here, we used functional neuroimaging with parametric analyses based on the intensity of felt emotions to explore a wider spectrum of affective responses reported during music listening. Positive emotions correlated with activation of left striatum and insula when high-arousing (Wonder, Joy) but right striatum and orbitofrontal cortex when low-arousing (Nostalgia, Tenderness...

  4. Emotional responses to music: towards scientific perspectives on music therapy.

    Science.gov (United States)

    Suda, Miyuki; Morimoto, Kanehisa; Obata, Akiko; Koizumi, Hideaki; Maki, Atsushi

    2008-01-08

    Neurocognitive research has the potential to identify the relevant effects of music therapy. In this study, we examined the effect of musical mode (major vs. minor) on stress reduction using optical topography and an endocrinological stress marker. Salivary cortisol levels indicated that stress arising from mental fatigue (thinking about and creating a response) was reduced more by major-mode music than by minor-mode music. We suggest that such music specifically induces an emotional response similar to a pleasant experience or happiness. Moreover, we demonstrated the typical asymmetrical pattern of stress responses in upper temporal cortex areas, and suggested that happiness/sadness emotional processing might be related to stress reduction by music.

  5. Emotions induced by operatic music: psychophysiological effects of music, plot, and acting: a scientist's tribute to Maria Callas.

    Science.gov (United States)

    Balteş, Felicia Rodica; Avram, Julia; Miclea, Mircea; Miu, Andrei C

    2011-06-01

    Operatic music involves both singing and acting (as well as a rich audiovisual background arising from the orchestra and elaborate scenery and costumes) that multiply the mechanisms by which emotions are induced in listeners. The present study investigated the effects of music, plot, and acting performance on emotions induced by opera. There were three experimental conditions: (1) participants listened to a musically complex and dramatically coherent excerpt from Tosca; (2) they read a summary of the plot and listened to the same musical excerpt again; and (3) they re-listened to the music while they watched the subtitled film of this acting performance. In addition, a control condition was included, in which an independent sample of participants successively listened three times to the same musical excerpt. We measured subjective changes using both dimensional and specific music-induced emotion questionnaires. Cardiovascular, electrodermal, and respiratory responses were also recorded, and the participants kept track of their musical chills. Music listening alone elicited positive emotion and autonomic arousal, seen in faster heart rate, but slower respiration rate and reduced skin conductance. Knowing the (sad) plot while listening to the music a second time reduced positive emotions (peacefulness, joyful activation), and increased negative ones (sadness), while high autonomic arousal was maintained. Watching the acting performance increased emotional arousal and changed its valence again (from less positive/sad to transcendent), in the context of continued high autonomic arousal. The repeated exposure to music did not by itself induce this pattern of modifications. These results indicate that the multiple musical and dramatic means involved in operatic performance specifically contribute to the genesis of music-induced emotions and their physiological correlates. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. The role of music in deaf culture: deaf students' perception of emotion in music.

    Science.gov (United States)

    Darrow, Alice-Ann

    2006-01-01

    Although emotional interpretation of music is an individual and variable experience, researchers have found that typical listeners are quite consistent in associating basic or primary emotions such as happiness, sadness, fear, and anger with musical compositions. It has been suggested that an individual with a sensorineural hearing loss, or any lesion in the auditory perceptors of the brain, may have trouble perceiving music emotionally. The purpose of the present study was to investigate whether students with a hearing loss who associate with the deaf culture assign the same emotions to music as students without a hearing loss. Sixty-two elementary and junior high students at a Midwestern state school for the deaf and students at neighboring elementary and junior high schools served as participants. Participants at the state school for the deaf had hearing losses ranging from moderate to severe. Twelve film score excerpts, composed to depict the primary emotions of happiness, sadness, and fear, were used as the musical stimuli. Participants were asked to assign an emotion to each excerpt. Results indicated a significant difference between the Deaf and typically hearing participants' responses, with hearing participants' responses more in agreement with the composers' intent. No significant differences were found for age or gender. Analyses of the Deaf participants' responses indicate that timbre, texture, and rhythm are perhaps the musical elements most influential in transmitting emotion to persons with a hearing loss. Adaptive strategies are suggested for assisting children who are deaf in accessing the elements of music intended to portray emotion.

  7. Role of tempo entrainment in psychophysiological differentiation of happy and sad music?

    Science.gov (United States)

    Khalfa, Stéphanie; Roy, Mathieu; Rainville, Pierre; Dalla Bella, Simone; Peretz, Isabelle

    2008-04-01

    Respiration rate allows happy and sad excerpts to be differentiated, which may be attributable to entrainment of respiration to the rhythm or the tempo rather than to the emotions themselves [Etzel, J.A., Johnsen, E.L., Dickerson, J., Tranel, D., Adolphs, R., 2006. Cardiovascular and respiratory responses during musical mood induction. Int. J. Psychophysiol. 61(1), 57-69]. To test this hypothesis, this study set out to verify whether fast and slow rhythm, and/or tempo alone, are sufficient to induce differential physiological effects. Psychophysiological responses (electrodermal responses, facial muscle activity, blood pressure, heart and respiration rate) were measured in fifty young adults listening to fast/happy and slow/sad music, and to two control versions of these excerpts created by removing pitch variations (rhythmic version) and both pitch and temporal variations (beat-alone version). The results indicate that happy and sad music are significantly differentiated (happy>sad) by diastolic blood pressure, electrodermal activity, and zygomatic activity, whereas the fast and slow rhythmic and tempo control versions did not elicit such differentiation. In contrast, respiration rate was faster with stimuli presented at fast tempi relative to slow stimuli in the beat-alone condition. It was thus demonstrated that the psychophysiological happy/sad distinction requires the tonal variations and cannot be explained solely by entrainment to tempo and rhythm. Tempo entrainment exists in the tempo-alone condition, but our results suggest this effect may disappear when the beat is embedded in music or rhythm.

  8. Music and emotions: from enchantment to entrainment.

    Science.gov (United States)

    Vuilleumier, Patrik; Trost, Wiebke

    2015-03-01

    Producing and perceiving music engage a wide range of sensorimotor, cognitive, and emotional processes. Emotions are a central feature of the enjoyment of music, with a large variety of affective states consistently reported by people while listening to music. However, besides joy or sadness, music often elicits feelings of wonder, nostalgia, or tenderness, which do not correspond to emotion categories typically studied in neuroscience and whose neural substrates remain largely unknown. Here we review the similarities and differences in the neural substrates underlying these "complex" music-evoked emotions relative to other more "basic" emotional experiences. We suggest that these emotions emerge through a combination of activation in emotional and motivational brain systems (e.g., including reward pathways) that confer its valence to music, with activation in several other areas outside emotional systems, including motor, attention, or memory-related regions. We then discuss the neural substrates underlying the entrainment of cognitive and motor processes by music and their relation to affective experience. These effects have important implications for the potential therapeutic use of music in neurological or psychiatric diseases, particularly those associated with motor, attention, or affective disturbances. © 2015 New York Academy of Sciences.

  9. The Role of Emotion in Musical Improvisation: An Analysis of Structural Features

    OpenAIRE

    McPherson, Malinda J.; Lopez-Gonzalez, Monica; Rankin, Summer K.; Limb, Charles J.

    2014-01-01

    One of the primary functions of music is to convey emotion, yet how music accomplishes this task remains unclear. For example, simple correlations between mode (major vs. minor) and emotion (happy vs. sad) do not adequately explain the enormous range, subtlety or complexity of musically induced emotions. In this study, we examined the structural features of unconstrained musical improvisations generated by jazz pianists in response to emotional cues. We hypothesized that musicians would not u...

  10. The Musical Emotional Bursts: A validated set of musical affect bursts to investigate auditory affective processing.

    Directory of Open Access Journals (Sweden)

    Sébastien ePaquette

    2013-08-01

    Full Text Available The Musical Emotional Bursts (MEB) consist of 80 brief musical executions expressing basic emotional states (happiness, sadness and fear) and neutrality. These musical bursts were designed to be the musical analogue of the Montreal Affective Voices (MAV) – a set of brief non-verbal affective vocalizations portraying different basic emotions. The MEB consist of short (mean duration: 1.6 sec) improvisations on a given emotion or of imitations of a given MAV stimulus, played on a violin (n: 40) or a clarinet (n: 40). The MEB arguably represent a primitive form of musical emotional expression, just as the MAV represent a primitive form of vocal, nonlinguistic emotional expression. To create the MEB, stimuli were recorded from 10 violinists and 10 clarinetists, and then evaluated by 60 participants. Participants evaluated 240 stimuli (30 stimuli x 4 [3 emotions + neutral] x 2 instruments) by performing either a forced-choice emotion categorization task, a valence rating task or an arousal rating task (20 subjects per task); 40 MAVs were also used in the same session with similar task instructions. Recognition accuracy of the emotional categories expressed by the MEB (n: 80) was lower than for the MAVs but still very high, with an average percent correct recognition score of 80.4%. The highest recognition accuracies were obtained for happy clarinet (92.0%) and fearful or sad violin (88.0% each) MEB stimuli. The MEB can be used to compare the cerebral processing of emotional expressions in music and vocal communication, or for testing affective perception in patients with communication problems.

  11. Extraction Of Audio Features For Emotion Recognition System Based On Music

    Directory of Open Access Journals (Sweden)

    Kee Moe Han

    2015-08-01

    Full Text Available Music is the combination of melody, linguistic information, and the vocalist's emotion. Since music is a work of art, analyzing emotion in music by computer is a difficult task. Many approaches have been developed to detect the emotions in music, but the results are not satisfactory because emotion is very complex. In this paper the evaluation of audio features extracted from music files is presented. The extracted features are used to classify the different emotion classes of the vocalists. Musical feature extraction is done using the Music Information Retrieval (MIR) toolbox. A database of 100 music clips is used to classify the emotions perceived in the clips. Music may convey many emotions according to the vocalist's mood, such as happy, sad, nervous, bored, peaceful, etc. In this paper the audio features related to the emotions of the vocalists are extracted for use in an emotion recognition system based on music.
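
    The pipeline this record describes (per-clip feature extraction followed by emotion classification) can be sketched compactly. The record uses the MATLAB MIR Toolbox; the outline below instead uses librosa and scikit-learn in Python as a rough stand-in, and the clip folder, file-naming scheme, and feature set are illustrative assumptions rather than the authors' actual materials.

```python
# Illustrative sketch only: extract a few common audio features per clip and
# train a simple classifier on perceived-emotion labels. The MATLAB MIR Toolbox
# used in the record is replaced here by librosa as an assumed stand-in.
import glob
import os

import numpy as np
import librosa
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def extract_features(path, sr=22050):
    """Return a fixed-length feature vector for one music clip."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)            # rhythmic cue
    rms = librosa.feature.rms(y=y)                            # dynamics
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # brightness / timbre
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbral envelope
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)          # tonal content
    return np.hstack([
        np.atleast_1d(tempo).mean(),
        rms.mean(), centroid.mean(),
        mfcc.mean(axis=1), chroma.mean(axis=1),
    ])


# Hypothetical clip paths and labels; assumed naming scheme "happy_001.wav" etc.
clips = sorted(glob.glob("clips/*.wav"))
labels = [os.path.basename(p).split("_")[0] for p in clips]

X = np.vstack([extract_features(p) for p in clips])
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("Cross-validated accuracy:", cross_val_score(model, X, labels, cv=5).mean())
```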

  12. Time flies with music whatever its emotional valence.

    Science.gov (United States)

    Droit-Volet, Sylvie; Bigand, Emmanuel; Ramos, Danilo; Bueno, José Lino Oliveira

    2010-10-01

    The present study used a temporal bisection task to investigate whether music affects time estimation differently from a matched auditory neutral stimulus, and whether the emotional valence of the musical stimuli (i.e., sad vs. happy music) modulates this effect. The results showed that, compared to sine wave control music, music presented in a major (happy) or a minor (sad) key shifted the bisection function toward the right, thus increasing the bisection point value (point of subjective equality). This indicates that the duration of a melody is judged shorter than that of a non-melodic control stimulus, thus confirming that "time flies" when we listen to music. Nevertheless, sensitivity to time was similar for all the auditory stimuli. Furthermore, the temporal bisection functions did not differ as a function of musical mode. Copyright © 2010 Elsevier B.V. All rights reserved.
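
    For readers unfamiliar with the temporal bisection measure, the bisection point (point of subjective equality) is the duration at which a fitted psychometric function crosses 50% "long" responses; a rightward shift of that function raises the bisection point, which corresponds to durations being judged shorter. The snippet below is a generic illustration of that computation on invented data, not the authors' analysis.

```python
# Generic sketch: estimate the bisection point (PSE) by fitting a logistic
# psychometric function to the proportion of "long" responses per duration.
# The durations and proportions below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

durations = np.array([400, 500, 600, 700, 800, 900, 1000])      # comparison durations (ms)
p_long = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.92, 0.97])   # proportion judged "long"


def logistic(d, pse, slope):
    """Probability of responding 'long' at duration d."""
    return 1.0 / (1.0 + np.exp(-(d - pse) / slope))


(pse, slope), _ = curve_fit(logistic, durations, p_long, p0=[700.0, 100.0])
print(f"Bisection point (PSE): {pse:.0f} ms, slope: {slope:.0f} ms")
# A larger PSE means the curve has shifted rightward, i.e. durations are judged
# shorter overall -- the "time flies" effect reported for melodic music.
```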

  13. Detrended Fluctuation Analysis of the Human EEG during Listening to Emotional Music

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A nonlinear method named detrended fluctuation analysis (DFA) was utilized to investigate the scaling behavior of the human electroencephalogram (EEG) in three emotional music conditions (fear, happiness, sadness) and a rest condition (eyes closed). The results showed that the EEG exhibited scaling behavior in two regions, with two scaling exponents β1 and β2 representing the complexity of higher- and lower-frequency activity besides the β band, respectively. As the emotional intensity decreased, the value of β1 increased and the value of β2 decreased. The change of β1 was weakly correlated with the 'approach-withdrawal' model of emotion, and both fear and sad music produced measurable differences compared with the eyes-closed rest condition. The study shows that music is a powerful elicitor of emotion and that nonlinear methods can potentially contribute to the investigation of emotion.
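
    DFA itself is a standard procedure: integrate the mean-centered signal, detrend it within windows of increasing size, and read scaling exponents off the slope of log fluctuation versus log window size. The sketch below is a generic first-order DFA on a synthetic signal; the EEG preprocessing and the exact crossover scale separating β1 and β2 in this record are not reproduced, and the crossover used here is an arbitrary assumption.

```python
# Generic DFA sketch: compute the fluctuation function F(n) of a signal and
# estimate scaling exponents from log-log slopes. Synthetic data for illustration.
import numpy as np


def dfa_fluctuations(signal, scales):
    """Return F(n) for each window size n, using first-order detrending."""
    profile = np.cumsum(signal - np.mean(signal))      # integrated, mean-centered signal
    fluct = []
    for n in scales:
        n_windows = len(profile) // n
        f2 = []
        for i in range(n_windows):
            seg = profile[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            f2.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    return np.array(fluct)


rng = np.random.default_rng(0)
eeg_like = rng.standard_normal(10_000)                 # stand-in for one EEG channel
scales = np.unique(np.logspace(2, 3.3, 20).astype(int))
F = dfa_fluctuations(eeg_like, scales)

# Two scaling regions split at an assumed crossover scale, echoing the record's
# beta1 (shorter scales) and beta2 (longer scales).
crossover = 400
short, long_ = scales <= crossover, scales > crossover
beta1 = np.polyfit(np.log(scales[short]), np.log(F[short]), 1)[0]
beta2 = np.polyfit(np.log(scales[long_]), np.log(F[long_]), 1)[0]
print(f"beta1 (short scales): {beta1:.2f}, beta2 (long scales): {beta2:.2f}")
```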

  14. Music, emotions and first impression perceptions of a healthcare institutions’ quality: An experimental investigation

    Directory of Open Access Journals (Sweden)

    Ivana First Komen

    2015-03-01

    Full Text Available One of the direct ways of influencing emotions and service quality perceptions is by music stimulation. The purpose of this research is to examine the impact of music with different musical elements (i.e. sad vs. happy music) on respondents' emotions and their first-impression perceptions of a healthcare institution's quality. The research was designed as an experimental simulation, i.e. data were collected in an online survey from respondents randomly assigned to evaluate a presentation consisting of multiple images of a healthcare institution in one of three experimental conditions (absence of, happy, or sad music stimulation). The results, in line with previous research, demonstrate a relationship between emotions and first-impression quality perceptions and between music and emotions, but no relationship between music and first-impression quality perceptions. The significant results once again emphasize the importance of inducing positive customer emotions, as these lead to positive first-impression service quality evaluations that subsequently provide appreciated returns. They also stress the importance of carefully choosing music when inducing emotions, as music with different musical elements results in different emotional states. One of the limitations of this research is the artificial experimental setting, which is to be overcome in future research.

  15. Expression of emotion in Eastern and Western music mirrors vocalization.

    Science.gov (United States)

    Bowling, Daniel Liu; Sundararajan, Janani; Han, Shui'er; Purves, Dale

    2012-01-01

    In Western music, the major mode is typically used to convey excited, happy, bright or martial emotions, whereas the minor mode typically conveys subdued, sad or dark emotions. Recent studies indicate that the differences between these modes parallel differences between the prosodic and spectral characteristics of voiced speech sounds uttered in corresponding emotional states. Here we ask whether tonality and emotion are similarly linked in an Eastern musical tradition. The results show that the tonal relationships used to express positive/excited and negative/subdued emotions in classical South Indian music are much the same as those used in Western music. Moreover, tonal variations in the prosody of English and Tamil speech uttered in different emotional states are parallel to the tonal trends in music. These results are consistent with the hypothesis that the association between musical tonality and emotion is based on universal vocal characteristics of different affective states.

  16. Basic, specific, mechanistic? Conceptualizing musical emotions in the brain.

    Science.gov (United States)

    Omigie, Diana

    2016-06-01

    The number of studies investigating music processing in the human brain continues to increase, with a large proportion of them focussing on the correlates of so-called musical emotions. The current Review highlights the recent development whereby such studies are no longer concerned only with basic emotions such as happiness and sadness but also with so-called music-specific or "aesthetic" ones such as nostalgia and wonder. It also highlights how mechanisms such as expectancy and empathy, which are seen as inducing musical emotions, are enjoying ever-increasing investigation and substantiation with physiological and neuroimaging methods. It is proposed that a combination of these approaches, namely, investigation of the precise mechanisms through which so-called music-specific or aesthetic emotions may arise, will provide the most important advances for our understanding of the unique nature of musical experience. © 2015 Wiley Periodicals, Inc.

  17. Re-exploring the influence of sad mood on music preference

    NARCIS (Netherlands)

    Friedman, R.S.; Gordis, E.; Förster, J.

    2012-01-01

    We conducted three experiments to rectify methodological limitations of prior studies on selective exposure to music and, thereby, clarify the nature of the impact of sad mood on music preference. In all studies, we experimentally manipulated mood (sad vs. neutral in Experiments 1 and 2; sad vs.

  18. Age-related differences in affective responses to and memory for emotions conveyed by music: a cross-sectional study.

    Science.gov (United States)

    Vieillard, Sandrine; Gilet, Anne-Laure

    2013-01-01

    There is mounting evidence that aging is associated with the maintenance of positive affect and the decrease of negative affect to ensure emotion regulation goals. Previous empirical studies have primarily focused on a visual or autobiographical form of emotion communication. To date, little investigation has been done on musical emotions. The few studies that have addressed aging and emotions in music were mainly interested in emotion recognition, thus leaving unexplored the question of how aging may influence emotional responses to and memory for emotions conveyed by music. In the present study, eighteen older (60-84 years) and eighteen younger (19-24 years) listeners were asked to evaluate the strength of their experienced emotion on happy, peaceful, sad, and scary musical excerpts (Vieillard et al., 2008) while facial muscle activity was recorded. Participants then performed an incidental recognition task followed by a task in which they judged to what extent they experienced happiness, peacefulness, sadness, and fear when listening to music. Compared to younger adults, older adults (a) reported a stronger emotional reactivity for happiness than other emotion categories, (b) showed an increased zygomatic activity for scary stimuli, (c) were more likely to falsely recognize happy music, and (d) showed a decrease in their responsiveness to sad and scary music. These results are in line with previous findings and extend them to emotion experience and memory recognition, corroborating the view of age-related changes in emotional responses to music in a positive direction away from negativity.

  19. Age-related differences in affective responses to and memory for emotions conveyed by music: a cross-sectional study

    Directory of Open Access Journals (Sweden)

    Sandrine eVieillard

    2013-10-01

    Full Text Available There is mounting evidence that aging is associated with the maintenance of positive affect and the decrease of negative affect to ensure emotion regulation goals. Previous empirical studies have primarily focused on a visual or autobiographical form of emotion communication. To date, little investigation has been done on musical emotions. The few studies that have addressed aging and emotions in music were mainly interested in emotion recognition, thus leaving unexplored the question of how aging may influence emotional responses to and memory for music. In the present study, eighteen older (60-84 years) and eighteen younger (19-24 years) listeners were asked to evaluate the strength of their experienced emotion on happy, peaceful, sad, and scary musical excerpts (Vieillard et al., 2008) while facial muscle activity was recorded. Participants then performed an incidental recognition task followed by a task in which they judged to what extent they experienced happiness, peacefulness, sadness, and fear when listening to music. Compared to younger adults, older adults (a) reported a stronger emotional reactivity for happiness than other emotion categories, (b) showed an increased zygomatic activity for scary stimuli, (c) were more likely to falsely recognize happy music, and (d) showed a decrease in their responsiveness to sad and scary music. These results are in line with previous findings and extend them to emotion experience and memory recognition, corroborating the view of age-related changes in emotional responses to music in a positive direction away from negativity.

  20. Sad music as a means for acceptance-based coping

    OpenAIRE

    Van den Tol, Annemieke, J. M.; Edwards, Jane; Heflick, N. A.

    2016-01-01

    Self-identified sad music (SISM) is often listened to when experiencing sad life situations. Research indicates that the most common reason people give for listening to SISM is “to be in touch with or express feelings of sadness”. But why might this be the case? We suggest that one reason people choose to listen to sad music when feeling sad is to accept aversive situations. We tested if SISM is associated with acceptance coping and consolation. We hypothesized that SISM relates to acceptance...

  1. Emotional expression in music: contribution, linearity, and additivity of primary musical cues.

    Science.gov (United States)

    Eerola, Tuomas; Friberg, Anders; Bresin, Roberto

    2013-01-01

    The aim of this study is to manipulate musical cues systematically to determine the aspects of music that contribute to emotional expression, and whether these cues operate in additive or interactive fashion, and whether the cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues and the ranked importance of these was established by multiple regression. The most important cue was mode followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result suggested that most cue levels contributed to the emotions in a linear fashion, explaining 77-89% of variance in ratings. Quadratic encoding of cues did lead to minor but significant increases of the models (0-8%). Finally, the interactions between the cues were non-existent suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music (Juslin and Lindström, 2010).
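
    The modelling logic reported in this record (additive main-effect cue models, quadratic terms to test non-linearity, interaction terms to test additivity) can be mimicked on simulated data. The sketch below is purely illustrative: the cue levels, ratings, and effect sizes are invented, and only three of the six cues are included.

```python
# Illustrative sketch of the modelling logic: compare a linear additive cue model
# against versions with quadratic and interaction terms. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "mode": rng.integers(0, 2, n),        # 0 = minor, 1 = major
    "tempo": rng.integers(1, 6, n),       # 5 ordinal tempo levels
    "register": rng.integers(1, 4, n),    # 3 register levels
})
# Simulated "happy" ratings: mostly additive, mildly non-linear in tempo.
df["happy"] = (2.0 * df["mode"] + 0.8 * df["tempo"] - 0.05 * df["tempo"] ** 2
               + 0.3 * df["register"] + rng.normal(0, 0.5, n))

linear = smf.ols("happy ~ mode + tempo + register", data=df).fit()
quadratic = smf.ols("happy ~ mode + tempo + I(tempo**2) + register", data=df).fit()
interactive = smf.ols("happy ~ mode * tempo + register", data=df).fit()

print(f"Linear additive model   R^2 = {linear.rsquared:.3f}")
print(f"+ quadratic tempo term  R^2 = {quadratic.rsquared:.3f}")
print(f"+ mode x tempo term     R^2 = {interactive.rsquared:.3f}")
# In the study itself, linear additive cue models already explained 77-89% of
# rating variance, quadratic terms added at most a few percent, and
# interactions contributed essentially nothing.
```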

  2. Expression of emotion in Eastern and Western music mirrors vocalization.

    Directory of Open Access Journals (Sweden)

    Daniel Liu Bowling

    Full Text Available In Western music, the major mode is typically used to convey excited, happy, bright or martial emotions, whereas the minor mode typically conveys subdued, sad or dark emotions. Recent studies indicate that the differences between these modes parallel differences between the prosodic and spectral characteristics of voiced speech sounds uttered in corresponding emotional states. Here we ask whether tonality and emotion are similarly linked in an Eastern musical tradition. The results show that the tonal relationships used to express positive/excited and negative/subdued emotions in classical South Indian music are much the same as those used in Western music. Moreover, tonal variations in the prosody of English and Tamil speech uttered in different emotional states are parallel to the tonal trends in music. These results are consistent with the hypothesis that the association between musical tonality and emotion is based on universal vocal characteristics of different affective states.

  3. Age-related differences in affective responses to and memory for emotions conveyed by music: a cross-sectional study

    Science.gov (United States)

    Vieillard, Sandrine; Gilet, Anne-Laure

    2013-01-01

    There is mounting evidence that aging is associated with the maintenance of positive affect and the decrease of negative affect to ensure emotion regulation goals. Previous empirical studies have primarily focused on a visual or autobiographical form of emotion communication. To date, little investigation has been done on musical emotions. The few studies that have addressed aging and emotions in music were mainly interested in emotion recognition, thus leaving unexplored the question of how aging may influence emotional responses to and memory for emotions conveyed by music. In the present study, eighteen older (60–84 years) and eighteen younger (19–24 years) listeners were asked to evaluate the strength of their experienced emotion on happy, peaceful, sad, and scary musical excerpts (Vieillard et al., 2008) while facial muscle activity was recorded. Participants then performed an incidental recognition task followed by a task in which they judged to what extent they experienced happiness, peacefulness, sadness, and fear when listening to music. Compared to younger adults, older adults (a) reported a stronger emotional reactivity for happiness than other emotion categories, (b) showed an increased zygomatic activity for scary stimuli, (c) were more likely to falsely recognize happy music, and (d) showed a decrease in their responsiveness to sad and scary music. These results are in line with previous findings and extend them to emotion experience and memory recognition, corroborating the view of age-related changes in emotional responses to music in a positive direction away from negativity. PMID:24137141

  4. The effects of emotion on memory for music and vocalisations.

    Science.gov (United States)

    Aubé, William; Peretz, Isabelle; Armony, Jorge L

    2013-01-01

    Music is a powerful tool for communicating emotions which can elicit memories through associative mechanisms. However, it is currently unknown whether emotion can modulate memory for music without reference to a context or personal event. We conducted three experiments to investigate the effect of basic emotions (fear, happiness, and sadness) on recognition memory for music, using short, novel stimuli explicitly created for research purposes, and compared them with nonlinguistic vocalisations. Results showed better memory accuracy for musical clips expressing fear and, to some extent, happiness. In the case of nonlinguistic vocalisations we confirmed a memory advantage for all emotions tested. A correlation between memory accuracy for music and vocalisations was also found, particularly in the case of fearful expressions. These results confirm that emotional expressions, particularly fearful ones, conveyed by music can influence memory as has been previously shown for other forms of expressions, such as faces and vocalisations.

  5. Recognizing Induced Emotions of Happiness and Sadness from Dance Movement

    Science.gov (United States)

    Van Dyck, Edith; Vansteenkiste, Pieter; Lenoir, Matthieu; Lesaffre, Micheline; Leman, Marc

    2014-01-01

    Recent research revealed that emotional content can be successfully decoded from human dance movement. Most previous studies made use of videos of actors or dancers portraying emotions through choreography. The current study applies emotion induction techniques and free movement in order to examine the recognition of emotional content from dance. Observers (N = 30) watched a set of silent videos showing depersonalized avatars of dancers moving to an emotionally neutral musical stimulus after emotions of either sadness or happiness had been induced. Each of the video clips consisted of two dance performances which were presented side-by-side and were played simultaneously; one of a dancer in the happy condition and one of the same individual in the sad condition. After every film clip, the observers were asked to make forced-choices concerning the emotional state of the dancer. Results revealed that observers were able to identify the emotional state of the dancers with a high degree of accuracy. Moreover, emotions were more often recognized for female dancers than for their male counterparts. In addition, the results of eye tracking measurements unveiled that observers primarily focus on movements of the chest when decoding emotional information from dance movement. The findings of our study show that not merely portrayed emotions, but also induced emotions can be successfully recognized from free dance movement. PMID:24587026

  6. [Emotional response to music by postlingually-deafened adult cochlear implant users].

    Science.gov (United States)

    Wang, Shuo; Dong, Ruijuan; Zhou, Yun; Li, Jing; Qi, Beier; Liu, Bo

    2012-10-01

    To assess the emotional response to music by postlingually-deafened adult cochlear implant users. The Munich Music Questionnaire (MUMU) was used to match music experience and motivation for music use between 12 normal-hearing and 12 cochlear implant subjects. The emotion rating test in the Musical Sounds in Cochlear Implants (MuSIC) test battery was used to assess emotion perception ability in both normal-hearing and cochlear implant subjects. A total of 15 musical phrases were used. Responses were given by selecting a rating on a scale from 1 to 10, where "1" represents a "very sad" feeling and "10" represents a "very happy" feeling. In comparison with normal-hearing subjects, the 12 cochlear implant subjects made less active use of music for emotional purposes. The emotion ratings of cochlear implant subjects were similar to those of normal-hearing subjects, but with large variability. Postlingually-deafened cochlear implant subjects on average performed similarly to normal-hearing subjects in the emotion rating task, but their active use of music for emotional purposes was clearly lower than that of normal-hearing subjects.

  7. Emotional Expression in Music: Contribution, Linearity, and Additivity of Primary Musical Cues

    Directory of Open Access Journals (Sweden)

    Tuomas eEerola

    2013-07-01

    Full Text Available The aim of this study is to manipulate musical cues systematically to determine the aspects of music that contribute to emotional expression, and whether these cues operate in additive or interactive fashion, and whether the cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues and the ranked importance of these was established by multiple regression. The most important cue was mode followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result suggested that most cue levels contributed to the emotions in a linear fashion, explaining 77–89% of variance in ratings. Quadratic encoding of cues did lead to minor but significant increases of the models (0–8%). Finally, the interactions between the cues were non-existent suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music (Juslin & Lindström, 2010).

  8. Emotional expression in music: contribution, linearity, and additivity of primary musical cues

    Science.gov (United States)

    Eerola, Tuomas; Friberg, Anders; Bresin, Roberto

    2013-01-01

    The aim of this study is to manipulate musical cues systematically to determine the aspects of music that contribute to emotional expression, and whether these cues operate in additive or interactive fashion, and whether the cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues and the ranked importance of these was established by multiple regression. The most important cue was mode followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result suggested that most cue levels contributed to the emotions in a linear fashion, explaining 77–89% of variance in ratings. Quadratic encoding of cues did lead to minor but significant increases of the models (0–8%). Finally, the interactions between the cues were non-existent suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music (Juslin and Lindström, 2010). PMID:23908642

  9. Empathy manipulation impacts music-induced emotions: a psychophysiological study on opera.

    Directory of Open Access Journals (Sweden)

    Andrei C Miu

    Full Text Available This study investigated the effects of voluntarily empathizing with a musical performer (i.e., cognitive empathy) on music-induced emotions and their underlying physiological activity. N = 56 participants watched video-clips of two operatic compositions performed in concerts, with low or high empathy instructions. Heart rate and heart rate variability, skin conductance level (SCL), and respiration rate (RR) were measured during music listening, and music-induced emotions were quantified using the Geneva Emotional Music Scale immediately after music listening. Listening to the aria with sad content in a high empathy condition facilitated the emotion of nostalgia and decreased SCL, in comparison to the low empathy condition. Listening to the song with happy content in a high empathy condition also facilitated the emotion of power and increased RR, in comparison to the low empathy condition. To our knowledge, this study offers the first experimental evidence that cognitive empathy influences emotion psychophysiology during music listening.

  10. Elucidating the relationship between work attention performance and emotions arising from listening to music.

    Science.gov (United States)

    Shih, Yi-Nuo; Chien, Wei-Hsien; Chiang, Han-Sun

    2016-10-17

    In addition to demonstrating that human emotions improve work attention performance, numerous studies have also established that music alters human emotions. Given the pervasiveness of background music in the workplace, exactly how work attention, emotions and music listening are related is a priority concern in human resource management. This preliminary study investigates the relationship between work attention performance and emotions arising from listening to music. Thirty-one males and 34 females, aged 20-24 years, participated in this study following written informed consent. A randomized controlled trial (RCT) consisting of six steps was performed, using a standard attention test and an emotion questionnaire. Background music with lyrics adversely impacted attention performance more than music without lyrics. Analysis results also indicate that listeners who reported feeling "loved" while the music played scored higher on work-attention performance, whereas music that made listeners feel sad was associated with lower work-attention scores. Results of this preliminary study suggest that background music in the workplace should focus mainly on creating an environment in which listeners feel loved or cared for, and should avoid music that causes individuals to feel stressed or sad. We recommend that future research increase the number of participants to enhance the applicability and replicability of these findings.

  11. Intact brain processing of musical emotions in autism spectrum disorder, but more cognitive load and arousal in happy versus sad music

    DEFF Research Database (Denmark)

    Gebauer, Line; Skewes, Joshua; Westphael, Gitte Gülche

    2014-01-01

    Music is a potent source for eliciting emotions, but not everybody experience emotions in the same way. Individuals with autism spectrum disorder (ASD) show difficulties with social and emotional cognition. Impairments in emotion recognition are widely studied in ASD, and have been associated...... of emotion recognition in music in high-functioning adults with ASD and neurotypical adults. Both groups engaged similar neural networks during processing of emotional music, and individuals with ASD rated emotional music comparable to the group of neurotypical individuals. However, in the ASD group...

  12. The Effect of "Sad" and "Happy" Background Music on the Interpretation of a Story in 5 to 6-Year-Old Children

    Science.gov (United States)

    Ziv, Naomi; Goshen, Maya

    2006-01-01

    Children hear music in the background of a large variety of situations and activities. Throughout development, they acquire knowledge both about the syntactical norms of tonal music, and about the relationship between musical form and emotion. Five to six-year-old children heard a story, with a background "happy", "sad" or no…

  13. Fatty acid-induced gut-brain signaling attenuates neural and behavioral effects of sad emotion in humans.

    Science.gov (United States)

    Van Oudenhove, Lukas; McKie, Shane; Lassman, Daniel; Uddin, Bilal; Paine, Peter; Coen, Steven; Gregory, Lloyd; Tack, Jan; Aziz, Qasim

    2011-08-01

    Although a relationship between emotional state and feeding behavior is known to exist, the interactions between signaling initiated by stimuli in the gut and exteroceptively generated emotions remain incompletely understood. Here, we investigated the interaction between nutrient-induced gut-brain signaling and sad emotion induced by musical and visual cues at the behavioral and neural level in healthy nonobese subjects undergoing functional magnetic resonance imaging. Subjects received an intragastric infusion of fatty acid solution or saline during neutral or sad emotion induction and rated sensations of hunger, fullness, and mood. We found an interaction between fatty acid infusion and emotion induction both in the behavioral readouts (hunger, mood) and at the level of neural activity in multiple pre-hypothesized regions of interest. Specifically, the behavioral and neural responses to sad emotion induction were attenuated by fatty acid infusion. These findings increase our understanding of the interplay among emotions, hunger, food intake, and meal-induced sensations in health, which may have important implications for a wide range of disorders, including obesity, eating disorders, and depression.

  14. Acoustic Constraints and Musical Consequences: Exploring Composers' Use of Cues for Musical Emotion.

    Science.gov (United States)

    Schutz, Michael

    2017-01-01

    Emotional communication in music is based in part on the use of pitch and timing, two cues effective in emotional speech. Corpus analyses of natural speech illustrate that happy utterances tend to be higher and faster than sad. Although manipulations altering melodies show that passages changed to be higher and faster sound happier, corpus analyses of unaltered music paralleling those of natural speech have proven challenging. This partly reflects the importance of modality (i.e., major/minor), a powerful musical cue whose use is decidedly imbalanced in Western music. This imbalance poses challenges for creating musical corpora analogous to existing speech corpora for purposes of analyzing emotion. However, a novel examination of music by Bach and Chopin balanced in modality illustrates that, consistent with predictions from speech, their major key (nominally "happy") pieces are approximately a major second higher and 29% faster than their minor key pieces (Poon and Schutz, 2015). Although this provides useful evidence for parallels in use of emotional cues between these domains, it raises questions about how composers "trade off" cue differentiation in music, suggesting interesting new potential research directions. This Focused Review places those results in a broader context, highlighting their connections with previous work on the natural use of cues for musical emotion. Together, these observational findings based on unaltered music-widely recognized for its artistic significance-complement previous experimental work systematically manipulating specific parameters. In doing so, they also provide a useful musical counterpart to fruitful studies of the acoustic cues for emotion found in natural speech.

  15. The role of emotion in musical improvisation: an analysis of structural features.

    Science.gov (United States)

    McPherson, Malinda J; Lopez-Gonzalez, Monica; Rankin, Summer K; Limb, Charles J

    2014-01-01

    One of the primary functions of music is to convey emotion, yet how music accomplishes this task remains unclear. For example, simple correlations between mode (major vs. minor) and emotion (happy vs. sad) do not adequately explain the enormous range, subtlety or complexity of musically induced emotions. In this study, we examined the structural features of unconstrained musical improvisations generated by jazz pianists in response to emotional cues. We hypothesized that musicians would not utilize any universal rules to convey emotions, but would instead combine heterogeneous musical elements together in order to depict positive and negative emotions. Our findings demonstrate a lack of simple correspondence between emotions and musical features of spontaneous musical improvisation. While improvisations in response to positive emotional cues were more likely to be in major keys, have faster tempos, faster key press velocities and more staccato notes when compared to negative improvisations, there was a wide distribution for each emotion with components that directly violated these primary associations. The finding that musicians often combine disparate features together in order to convey emotion during improvisation suggests that structural diversity may be an essential feature of the ability of music to express a wide range of emotion.

  16. An examination of cue redundancy theory in cross-cultural decoding of emotions in music.

    Science.gov (United States)

    Kwoun, Soo-Jin

    2009-01-01

    The present study investigated the effects of structural features of music (i.e., variations in tempo, loudness, articulation, etc.) and of cultural and learning factors on the assignment of emotional meaning in music. Four participant groups, young Koreans, young Americans, older Koreans, and older Americans, rated the emotional expressions of Korean folksongs on three adjective scales: happiness, sadness, and anger. The results of the study are in accordance with the Cue Redundancy model of emotional perception in music, indicating that expressive music embodies both universal auditory cues that communicate the emotional meaning of music across cultures and culture-specific cues that result from cultural convention.

  17. Acoustic Constraints and Musical Consequences: Exploring Composers' Use of Cues for Musical Emotion

    Science.gov (United States)

    Schutz, Michael

    2017-01-01

    Emotional communication in music is based in part on the use of pitch and timing, two cues effective in emotional speech. Corpus analyses of natural speech illustrate that happy utterances tend to be higher and faster than sad. Although manipulations altering melodies show that passages changed to be higher and faster sound happier, corpus analyses of unaltered music paralleling those of natural speech have proven challenging. This partly reflects the importance of modality (i.e., major/minor), a powerful musical cue whose use is decidedly imbalanced in Western music. This imbalance poses challenges for creating musical corpora analogous to existing speech corpora for purposes of analyzing emotion. However, a novel examination of music by Bach and Chopin balanced in modality illustrates that, consistent with predictions from speech, their major key (nominally “happy”) pieces are approximately a major second higher and 29% faster than their minor key pieces (Poon and Schutz, 2015). Although this provides useful evidence for parallels in use of emotional cues between these domains, it raises questions about how composers “trade off” cue differentiation in music, suggesting interesting new potential research directions. This Focused Review places those results in a broader context, highlighting their connections with previous work on the natural use of cues for musical emotion. Together, these observational findings based on unaltered music—widely recognized for its artistic significance—complement previous experimental work systematically manipulating specific parameters. In doing so, they also provide a useful musical counterpart to fruitful studies of the acoustic cues for emotion found in natural speech. PMID:29249997

  18. Acoustic Constraints and Musical Consequences: Exploring Composers' Use of Cues for Musical Emotion

    Directory of Open Access Journals (Sweden)

    Michael Schutz

    2017-11-01

    Full Text Available Emotional communication in music is based in part on the use of pitch and timing, two cues effective in emotional speech. Corpus analyses of natural speech illustrate that happy utterances tend to be higher and faster than sad. Although manipulations altering melodies show that passages changed to be higher and faster sound happier, corpus analyses of unaltered music paralleling those of natural speech have proven challenging. This partly reflects the importance of modality (i.e., major/minor, a powerful musical cue whose use is decidedly imbalanced in Western music. This imbalance poses challenges for creating musical corpora analogous to existing speech corpora for purposes of analyzing emotion. However, a novel examination of music by Bach and Chopin balanced in modality illustrates that, consistent with predictions from speech, their major key (nominally “happy” pieces are approximately a major second higher and 29% faster than their minor key pieces (Poon and Schutz, 2015. Although this provides useful evidence for parallels in use of emotional cues between these domains, it raises questions about how composers “trade off” cue differentiation in music, suggesting interesting new potential research directions. This Focused Review places those results in a broader context, highlighting their connections with previous work on the natural use of cues for musical emotion. Together, these observational findings based on unaltered music—widely recognized for its artistic significance—complement previous experimental work systematically manipulating specific parameters. In doing so, they also provide a useful musical counterpart to fruitful studies of the acoustic cues for emotion found in natural speech.

  19. Manipulating Greek musical modes and tempo affects perceived musical emotion in musicians and nonmusicians.

    Science.gov (United States)

    Ramos, D; Bueno, J L O; Bigand, E

    2011-02-01

    The combined influence of tempo and mode on emotional responses to music was studied by crossing 7 changes in mode with 3 changes in tempo. Twenty-four musicians aged 19 to 25 years (12 males and 12 females) and 24 nonmusicians aged 17 to 25 years (12 males and 12 females) were required to perform two tasks: 1) listening to different musical excerpts, and 2) associating an emotion to them such as happiness, serenity, fear, anger, or sadness. ANOVA showed that increasing the tempo strongly affected the arousal (F(2,116) = 268.62, mean square error (MSE) = 0.6676, P < 0.001) and, to a lesser extent, the valence of emotional responses (F(6,348) = 8.71, MSE = 0.6196, P < 0.001). Changes in modes modulated the affective valence of the perceived emotions (F(6,348) = 4.24, MSE = 0.6764, P < 0.001). Some interactive effects were found between tempo and mode (F(1,58) = 115.6, MSE = 0.6428, P < 0.001), but, in most cases, the two parameters had additive effects. This finding demonstrates that small changes in the pitch structures of modes modulate the emotions associated with the pieces, confirming the cognitive foundation of emotional responses to music.

  20. Influence of Tempo and Rhythmic Unit in Musical Emotion Regulation.

    Science.gov (United States)

    Fernández-Sotos, Alicia; Fernández-Caballero, Antonio; Latorre, José M

    2016-01-01

    This article is based on the assumption of musical power to change the listener's mood. The paper studies the outcome of two experiments on the regulation of emotional states in a series of participants who listen to different auditions. The present research focuses on note value, an important musical cue related to rhythm. The influence of two concepts linked to note value is analyzed separately and discussed together. The two musical cues under investigation are tempo and rhythmic unit. The participants are asked to label music fragments by using opposite meaningful words belonging to four semantic scales, namely "Tension" (ranging from Relaxing to Stressing), "Expressiveness" (Expressionless to Expressive), "Amusement" (Boring to Amusing) and "Attractiveness" (Pleasant to Unpleasant). The participants also have to indicate how much they feel certain basic emotions while listening to each music excerpt. The rated emotions are "Happiness," "Surprise," and "Sadness." This study makes it possible to draw some interesting conclusions about the associations between note value and emotions.

  1. The role of emotion in musical improvisation: an analysis of structural features.

    Directory of Open Access Journals (Sweden)

    Malinda J McPherson

    Full Text Available One of the primary functions of music is to convey emotion, yet how music accomplishes this task remains unclear. For example, simple correlations between mode (major vs. minor) and emotion (happy vs. sad) do not adequately explain the enormous range, subtlety or complexity of musically induced emotions. In this study, we examined the structural features of unconstrained musical improvisations generated by jazz pianists in response to emotional cues. We hypothesized that musicians would not utilize any universal rules to convey emotions, but would instead combine heterogeneous musical elements together in order to depict positive and negative emotions. Our findings demonstrate a lack of simple correspondence between emotions and musical features of spontaneous musical improvisation. While improvisations in response to positive emotional cues were more likely to be in major keys, have faster tempos, faster key press velocities and more staccato notes when compared to negative improvisations, there was a wide distribution for each emotion with components that directly violated these primary associations. The finding that musicians often combine disparate features together in order to convey emotion during improvisation suggests that structural diversity may be an essential feature of the ability of music to express a wide range of emotion.

  2. Changing the tune: listeners like music that expresses a contrasting emotion

    Directory of Open Access Journals (Sweden)

    E. Glenn Schellenberg

    2012-12-01

    Full Text Available Theories of aesthetic appreciation propose that (1) a stimulus is liked because it is expected or familiar, (2) a stimulus is liked most when it is neither too familiar nor too novel, or (3) a novel stimulus is liked because it elicits an intensified emotional response. We tested the third hypothesis by examining liking for music as a function of whether the emotion it expressed contrasted with the emotion expressed by music heard previously. Stimuli were 30-s happy- or sad-sounding excerpts from recordings of classical piano music. On each trial, listeners heard a different excerpt and made liking and emotion-intensity ratings. The emotional character of consecutive excerpts was repeated with varying frequencies, followed by an excerpt that expressed a contrasting emotion. As the number of presentations of the background emotion increased, liking and intensity ratings became lower compared to those for the contrasting emotion. Consequently, when the emotional character of the music was relatively novel, listeners’ responses intensified and their appreciation increased.

  3. Aesthetic Emotions Across Arts: A Comparison Between Painting and Music

    Science.gov (United States)

    Miu, Andrei C.; Pițur, Simina; Szentágotai-Tătar, Aurora

    2016-01-01

    Emotional responses to art have long been subject of debate, but only recently have they started to be investigated in affective science. The aim of this study was to compare perceptions regarding frequency of aesthetic emotions, contributing factors, and motivation which characterize the experiences of looking at painting and listening to music. Parallel surveys were filled in online by participants (N = 971) interested in music and painting. By comparing self-reported characteristics of these experiences, this study found that compared to listening to music, looking at painting was associated with increased frequency of wonder and decreased frequencies of joyful activation and power. In addition to increased vitality, as reflected by the latter two emotions, listening to music was also more frequently associated with emotions such as tenderness, nostalgia, peacefulness, and sadness. Compared to painting-related emotions, music-related emotions were perceived as more similar to emotions in other everyday life situations. Participants reported that stimulus features and previous knowledge made more important contributions to emotional responses to painting, whereas prior mood, physical context and the presence of other people were considered more important in relation to emotional responses to music. Self-education motivation was more frequently associated with looking at painting, whereas mood repair and keeping company motivations were reported more frequently in relation to listening to music. Participants with visual arts education reported increased vitality-related emotions in their experience of looking at painting. In contrast, no relation was found between music education and emotional responses to music. These findings offer a more general perspective on aesthetic emotions and encourage integrative research linking different types of aesthetic experience. PMID:26779072

  4. Aesthetic Emotions Across Arts: A Comparison Between Painting and Music.

    Science.gov (United States)

    Miu, Andrei C; Pițur, Simina; Szentágotai-Tătar, Aurora

    2015-01-01

    Emotional responses to art have long been subject of debate, but only recently have they started to be investigated in affective science. The aim of this study was to compare perceptions regarding frequency of aesthetic emotions, contributing factors, and motivation which characterize the experiences of looking at painting and listening to music. Parallel surveys were filled in online by participants (N = 971) interested in music and painting. By comparing self-reported characteristics of these experiences, this study found that compared to listening to music, looking at painting was associated with increased frequency of wonder and decreased frequencies of joyful activation and power. In addition to increased vitality, as reflected by the latter two emotions, listening to music was also more frequently associated with emotions such as tenderness, nostalgia, peacefulness, and sadness. Compared to painting-related emotions, music-related emotions were perceived as more similar to emotions in other everyday life situations. Participants reported that stimulus features and previous knowledge made more important contributions to emotional responses to painting, whereas prior mood, physical context and the presence of other people were considered more important in relation to emotional responses to music. Self-education motivation was more frequently associated with looking at painting, whereas mood repair and keeping company motivations were reported more frequently in relation to listening to music. Participants with visual arts education reported increased vitality-related emotions in their experience of looking at painting. In contrast, no relation was found between music education and emotional responses to music. These findings offer a more general perspective on aesthetic emotions and encourage integrative research linking different types of aesthetic experience.

  5. Aesthetic emotions across arts: A comparison between painting and music

    Directory of Open Access Journals (Sweden)

    Andrei C. Miu

    2016-01-01

    Full Text Available Emotional responses to art have long been subject of debate, but only recently have they started to be investigated in affective science. The aim of this study was to compare perceptions regarding frequency of aesthetic emotions, contributing factors and motivation which characterize the experiences of looking at painting and listening to music. Parallel surveys were filled in online by participants (N = 971) interested in music and painting. By comparing self-reported characteristics of these experiences, this study found that compared to listening to music, looking at painting was associated with increased frequency of wonder and decreased frequencies of joyful activation and power. In addition to increased vitality, as reflected by the latter two emotions, listening to music was also more frequently associated with emotions such as tenderness, nostalgia, peacefulness and sadness. Compared to painting-related emotions, music-related emotions were perceived as more similar to emotions in other everyday life situations. Participants reported that stimulus features and previous knowledge made more important contributions to emotional responses to painting, whereas prior mood, physical context and the presence of other people were considered more important in relation to emotional responses to music. Self-education motivation was more frequently associated with looking at painting, whereas mood repair and keeping company motivations were reported more frequently in relation to listening to music. Participants with visual arts education reported increased vitality-related emotions in their experience of looking at painting. In contrast, no relation was found between music education and emotional responses to music. These findings offer a more general perspective on aesthetic emotions and encourage integrative research linking different types of aesthetic experience.

  6. Impaired socio-emotional processing in a developmental music disorder

    Science.gov (United States)

    Lima, César F.; Brancatisano, Olivia; Fancourt, Amy; Müllensiefen, Daniel; Scott, Sophie K.; Warren, Jason D.; Stewart, Lauren

    2016-01-01

    Some individuals show a congenital deficit for music processing despite normal peripheral auditory processing, cognitive functioning, and music exposure. This condition, termed congenital amusia, is typically approached regarding its profile of musical and pitch difficulties. Here, we examine whether amusia also affects socio-emotional processing, probing auditory and visual domains. Thirteen adults with amusia and 11 controls completed two experiments. In Experiment 1, participants judged emotions in emotional speech prosody, nonverbal vocalizations (e.g., crying), and (silent) facial expressions. Target emotions were: amusement, anger, disgust, fear, pleasure, relief, and sadness. Compared to controls, amusics were impaired for all stimulus types, and the magnitude of their impairment was similar for auditory and visual emotions. In Experiment 2, participants listened to spontaneous and posed laughs, and either inferred the authenticity of the speaker’s state, or judged how much laughs were contagious. Amusics showed decreased sensitivity to laughter authenticity, but normal contagion responses. Across the experiments, mixed-effects models revealed that the acoustic features of vocal signals predicted socio-emotional evaluations in both groups, but the profile of predictive acoustic features was different in amusia. These findings suggest that a developmental music disorder can affect socio-emotional cognition in subtle ways, an impairment not restricted to auditory information. PMID:27725686

  7. Personality traits modulate neural responses to emotions expressed in music.

    Science.gov (United States)

    Park, Mona; Hennig-Fast, Kristina; Bao, Yan; Carl, Petra; Pöppel, Ernst; Welker, Lorenz; Reiser, Maximilian; Meindl, Thomas; Gutyrchik, Evgeny

    2013-07-26

    Music communicates and evokes emotions. The number of studies on the neural correlates of musical emotion processing is increasing but few have investigated the factors that modulate these neural activations. Previous research has shown that personality traits account for individual variability of neural responses. In this study, we used functional magnetic resonance imaging (fMRI) to investigate how the dimensions Extraversion and Neuroticism are related to differences in brain reactivity to musical stimuli expressing the emotions happiness, sadness and fear. 12 participants (7 female, M=20.33 years) completed the NEO-Five Factor Inventory (NEO-FFI) and were scanned while performing a passive listening task. Neurofunctional analyses revealed significant positive correlations between Neuroticism scores and activations in bilateral basal ganglia, insula and orbitofrontal cortex in response to music expressing happiness. Extraversion scores were marginally negatively correlated with activations in the right amygdala in response to music expressing fear. Our findings show that subjects' personality may have a predictive power in the neural correlates of musical emotion processing and should be considered in the context of experimental group homogeneity. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Are Stopped Strings Preferred in Sad Music?

    OpenAIRE

    David Huron; Caitlyn Trevor

    2017-01-01

    String instruments may be played either with open strings (where the string vibrates between the bridge and a hard wooden nut) or with stopped strings (where the string vibrates between the bridge and a performer's finger pressed against the fingerboard). Compared with open strings, stopped strings permit the use of vibrato and exhibit a darker timbre. Inspired by research on the timbre of sad speech, we test whether there is a tendency to use stopped strings in nominally sad music. Specifica...

  9. Preattentive processing of emotional musical tones: a multidimensional scaling and ERP study

    Directory of Open Access Journals (Sweden)

    Thomas F Münte

    2013-09-01

    Full Text Available Musical emotion can be conveyed by subtle variations in timbre. Here, we investigated whether the brain is capable of discriminating tones differing in emotional expression by recording event-related potentials (ERPs) in an oddball paradigm under preattentive listening conditions. First, using multidimensional Fechnerian scaling, pairs of violin tones played with a happy or sad intonation were rated same or different by a group of non-musicians. Three happy and three sad tones were selected for the ERP experiment. The Fechnerian distances between tones within an emotion were in the same range as the distances between tones of different emotions. In two conditions, either 3 happy and 1 sad or 3 sad and 1 happy tone were presented in pseudo-random order. A mismatch negativity for the emotional deviant was observed, indicating that in spite of considerable perceptual differences between the three equiprobable tones of the standard emotion, a template was formed based on timbral cues against which the emotional deviant was compared. Based on Juslin’s assumption of redundant code usage, we propose that tones were grouped together because they were identified as belonging to one emotional category based on different emotion-specific cues. These results indicate that the brain forms an emotional memory trace at a preattentive level and thus extend previous investigations in which emotional deviance was confounded with physical dissimilarity. Differences between sad and happy tones were observed which might be due to the fact that the happy emotion is mostly communicated by suprasegmental features.
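
    The scaling step described above starts from pairwise same/different judgments. As a rough illustration only, the sketch below embeds an invented dissimilarity matrix with classical metric multidimensional scaling from scikit-learn; this is a stand-in for, not an implementation of, the Fechnerian scaling procedure the authors used, and the data are hypothetical.

    ```python
    # Illustrative sketch only: metric MDS on a pairwise dissimilarity matrix, used as a
    # stand-in for the Fechnerian scaling reported in the abstract (a distinct procedure).
    import numpy as np
    from sklearn.manifold import MDS

    # Hypothetical dissimilarities among 6 violin tones (3 happy, 3 sad):
    # entry [i, j] = proportion of listeners who judged tones i and j as "different".
    labels = ["happy1", "happy2", "happy3", "sad1", "sad2", "sad3"]
    D = np.array([
        [0.00, 0.35, 0.40, 0.45, 0.50, 0.48],
        [0.35, 0.00, 0.38, 0.47, 0.44, 0.46],
        [0.40, 0.38, 0.00, 0.49, 0.45, 0.43],
        [0.45, 0.47, 0.49, 0.00, 0.36, 0.39],
        [0.50, 0.44, 0.45, 0.36, 0.00, 0.37],
        [0.48, 0.46, 0.43, 0.39, 0.37, 0.00],
    ])

    embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = embedding.fit_transform(D)        # 2-D coordinates for each tone
    for name, (x, y) in zip(labels, coords):
        print(f"{name}: ({x:.2f}, {y:.2f})")
    ```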

  10. Biased emotional recognition in depression: perception of emotions in music by depressed patients.

    Science.gov (United States)

    Punkanen, Marko; Eerola, Tuomas; Erkkilä, Jaakko

    2011-04-01

    Depression is a highly prevalent mood disorder that impairs a person's social skills and quality of life. Populations affected by depression also suffer from a higher mortality rate. Depression affects a person's ability to recognize emotions. We designed a novel experiment to test the hypothesis that depressed patients show a judgment bias towards negative emotions. To investigate how depressed patients differ in their perception of emotions conveyed by musical examples, both healthy (n=30) and depressed (n=79) participants were presented with a set of 30 musical excerpts, each representing one of five basic target emotions, and asked to rate each excerpt using five Likert scales that represented the amount of each one of those same emotions perceived in the example. Depressed patients showed moderate but consistent negative self-report biases both in the overall use of the scales and in their particular application to certain target emotions, when compared to healthy controls. Also, the severity of the clinical state (depression, anxiety and alexithymia) had an effect on the self-report biases for both positive and negative emotion ratings, particularly depression and alexithymia. Only musical stimuli were used, and they were all clear examples of one of the basic emotions of happiness, sadness, fear, anger and tenderness. No neutral or ambiguous excerpts were included. Depressed patients' negative emotional bias was demonstrated using musical stimuli. This suggests that the evaluation of emotional qualities in music could become a means to discriminate between depressed and non-depressed subjects. The practical implications of the present study relate both to diagnostic uses of such perceptual evaluations and to a better understanding of the emotional regulation strategies of the patients. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. Emotion perception in music in high-functioning adolescents with Autism Spectrum Disorders.

    Science.gov (United States)

    Quintin, Eve-Marie; Bhatara, Anjali; Poissant, Hélène; Fombonne, Eric; Levitin, Daniel J

    2011-09-01

    Individuals with Autism Spectrum Disorders (ASD) succeed at a range of musical tasks. The ability to recognize musical emotion as belonging to one of four categories (happy, sad, scared or peaceful) was assessed in high-functioning adolescents with ASD (N = 26) and adolescents with typical development (TD, N = 26) with comparable performance IQ, auditory working memory, and musical training and experience. When verbal IQ was controlled for, there was no significant effect of diagnostic group. Adolescents with ASD rated the intensity of the emotions similarly to adolescents with TD and reported greater confidence in their responses when they had correctly (vs. incorrectly) recognized the emotions. These findings are reviewed within the context of the amygdala theory of autism.

  12. Memory for facial expression is influenced by the background music playing during study.

    Science.gov (United States)

    Woloszyn, Michael R; Ewert, Laura

    2012-01-01

    The effect of the emotional quality of study-phase background music on subsequent recall for happy and sad facial expressions was investigated. Undergraduates (N = 48) viewed a series of line drawings depicting a happy or sad child in a variety of environments that were each accompanied by happy or sad music. Although memory for faces was very accurate, emotionally incongruent background music biased subsequent memory for facial expressions, increasing the likelihood that happy faces were recalled as sad when sad music was previously heard, and that sad faces were recalled as happy when happy music was previously heard. Overall, the results indicated that when recalling a scene, the emotional tone is set by an integration of stimulus features from several modalities.

  13. The structural neuroanatomy of music emotion recognition: evidence from frontotemporal lobar degeneration.

    Science.gov (United States)

    Omar, Rohani; Henley, Susie M D; Bartlett, Jonathan W; Hailstone, Julia C; Gordon, Elizabeth; Sauter, Disa A; Frost, Chris; Scott, Sophie K; Warren, Jason D

    2011-06-01

    Despite growing clinical and neurobiological interest in the brain mechanisms that process emotion in music, these mechanisms remain incompletely understood. Patients with frontotemporal lobar degeneration (FTLD) frequently exhibit clinical syndromes that illustrate the effects of breakdown in emotional and social functioning. Here we investigated the neuroanatomical substrate for recognition of musical emotion in a cohort of 26 patients with FTLD (16 with behavioural variant frontotemporal dementia, bvFTD, 10 with semantic dementia, SemD) using voxel-based morphometry. On neuropsychological evaluation, patients with FTLD showed deficient recognition of canonical emotions (happiness, sadness, anger and fear) from music as well as faces and voices compared with healthy control subjects. Impaired recognition of emotions from music was specifically associated with grey matter loss in a distributed cerebral network including insula, orbitofrontal cortex, anterior cingulate and medial prefrontal cortex, anterior temporal and more posterior temporal and parietal cortices, amygdala and the subcortical mesolimbic system. This network constitutes an essential brain substrate for recognition of musical emotion that overlaps with brain regions previously implicated in coding emotional value, behavioural context, conceptual knowledge and theory of mind. Musical emotion recognition may probe the interface of these processes, delineating a profile of brain damage that is essential for the abstraction of complex social emotions. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Manipulating Greek musical modes and tempo affects perceived musical emotion in musicians and nonmusicians

    Directory of Open Access Journals (Sweden)

    D. Ramos

    2011-02-01

    Full Text Available The combined influence of tempo and mode on emotional responses to music was studied by crossing 7 changes in mode with 3 changes in tempo. Twenty-four musicians aged 19 to 25 years (12 males and 12 females) and 24 nonmusicians aged 17 to 25 years (12 males and 12 females) were required to perform two tasks: 1) listening to different musical excerpts, and 2) associating an emotion to them such as happiness, serenity, fear, anger, or sadness. ANOVA showed that increasing the tempo strongly affected the arousal (F(2,116) = 268.62, mean square error (MSE) = 0.6676, P < 0.001) and, to a lesser extent, the valence of emotional responses (F(6,348) = 8.71, MSE = 0.6196, P < 0.001). Changes in modes modulated the affective valence of the perceived emotions (F(6,348) = 4.24, MSE = 0.6764, P < 0.001). Some interactive effects were found between tempo and mode (F(1,58) = 115.6, MSE = 0.6428, P < 0.001), but, in most cases, the two parameters had additive effects. This finding demonstrates that small changes in the pitch structures of modes modulate the emotions associated with the pieces, confirming the cognitive foundation of emotional responses to music.
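
    The design reported here crosses tempo (3 levels) with mode (7 levels) and tests main and interaction effects with ANOVA. The sketch below shows what such a factorial ANOVA looks like in Python on simulated ratings; it ignores the repeated-measures structure of the actual study, and the cell counts and effect sizes are invented.

    ```python
    # Minimal sketch (simulated data, not the study's repeated-measures analysis):
    # a 3 (tempo) x 7 (mode) factorial ANOVA on arousal ratings using statsmodels.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(0)
    tempi = ["slow", "medium", "fast"]
    modes = [f"mode{i}" for i in range(1, 8)]   # stand-ins for the 7 Greek modes

    rows = []
    for tempo_idx, tempo in enumerate(tempi):
        for mode in modes:
            for _ in range(20):                 # 20 simulated ratings per cell
                # Arousal rises with tempo in this simulation; mode has no built-in effect.
                arousal = 2.0 + 0.8 * tempo_idx + rng.normal(0, 0.7)
                rows.append({"tempo": tempo, "mode": mode, "arousal": arousal})
    data = pd.DataFrame(rows)

    model = ols("arousal ~ C(tempo) * C(mode)", data=data).fit()
    print(sm.stats.anova_lm(model, typ=2))      # F tests for tempo, mode, and interaction
    ```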

  15. An integrative review of the enjoyment of sadness associated with music.

    Science.gov (United States)

    Eerola, Tuomas; Vuoskoski, Jonna K; Peltola, Henna-Riikka; Putkinen, Vesa; Schäfer, Katharina

    2017-11-23

    The recent surge of interest towards the paradoxical pleasure produced by sad music has generated a handful of theories and an array of empirical explorations on the topic. However, none of these have attempted to weigh the existing evidence in a systematic fashion. The present work puts forward an integrative framework laid out over three levels of explanation - biological, psycho-social, and cultural - to compare and integrate the existing findings in a meaningful way. First, we review the evidence pertinent to experiences of pleasure associated with sad music from the fields of neuroscience, psychophysiology, and endocrinology. Then, the psychological and interpersonal mechanisms underlying the recognition and induction of sadness in the context of music are combined with putative explanations ranging from social surrogacy and nostalgia to feelings of being moved. Finally, we address the cultural aspects of the paradox - the extent to which it is embedded in the Western notion of music as an aesthetic, contemplative object - by synthesising findings from history, ethnography, and empirical studies. Furthermore, we complement these explanations by considering the particularly significant meanings that sadness portrayed in art can evoke in some perceivers. Our central claim is that one cannot attribute the enjoyment of sadness fully to any one of these levels, but to a chain of functionalities afforded by each level. Each explanatory level has several putative explanations and its own shift towards positive valence, but none of them deliver the full transformation from a highly negative experience to a fully enjoyable experience alone. The current evidence within this framework ranges from weak to non-existent at the biological level, moderate at the psychological level, and suggestive at the cultural level. We propose a series of focussed topics for future investigation that would allow to deconstruct the drivers and constraints of the processes leading to

  16. The Pleasure Evoked by Sad Music Is Mediated by Feelings of Being Moved

    OpenAIRE

    Vuoskoski, Jonna K.; Eerola, Tuomas

    2017-01-01

    Why do we enjoy listening to music that makes us sad? This question has puzzled music psychologists for decades, but the paradox of “pleasurable sadness” remains to be solved. Recent findings from a study investigating the enjoyment of sad films suggest that the positive relationship between felt sadness and enjoyment might be explained by feelings of being moved (Hanich et al., 2014). The aim of the present study was to investigate whether feelings of being moved also mediated the enjoyment ...

  17. Reduced sensitivity to emotional prosody in congenital amusia rekindles the musical protolanguage hypothesis.

    Science.gov (United States)

    Thompson, William Forde; Marin, Manuela M; Stewart, Lauren

    2012-11-13

    A number of evolutionary theories assume that music and language have a common origin as an emotional protolanguage that remains evident in overlapping functions and shared neural circuitry. The most basic prediction of this hypothesis is that sensitivity to emotion in speech prosody derives from the capacity to process music. We examined sensitivity to emotion in speech prosody in a sample of individuals with congenital amusia, a neurodevelopmental disorder characterized by deficits in processing acoustic and structural attributes of music. Twelve individuals with congenital amusia and 12 matched control participants judged the emotional expressions of 96 spoken phrases. Phrases were semantically neutral but prosodic cues (tone of voice) communicated each of six emotional states: happy, tender, afraid, irritated, sad, and no emotion. Congenitally amusic individuals were significantly worse than matched controls at decoding emotional prosody, with decoding rates for some emotions up to 20% lower than that of matched controls. They also reported difficulty understanding emotional prosody in their daily lives, suggesting some awareness of this deficit. The findings support speculations that music and language share mechanisms that trigger emotional responses to acoustic attributes, as predicted by theories that propose a common evolutionary link between these domains.

  18. The role of mood and personality in the perception of emotions represented by music.

    Science.gov (United States)

    Vuoskoski, Jonna K; Eerola, Tuomas

    2011-10-01

    Neuroimaging studies investigating the processing of emotions have traditionally considered variance between subjects as statistical noise. However, according to behavioural studies, individual differences in emotional processing appear to be an inherent part of the process itself. Temporary mood states as well as stable personality traits have been shown to influence the processing of emotions, causing trait- and mood-congruent biases. The primary aim of this study was to explore how listeners' personality and mood are reflected in their evaluations of discrete emotions represented by music. A related aim was to investigate the role of personality in music preferences. An experiment was carried out where 67 participants evaluated 50 music excerpts in terms of perceived emotions (anger, fear, happiness, sadness, and tenderness) and preference. Current mood was associated with mood-congruent biases in the evaluation of emotions represented by music, but extraversion moderated the degree of mood-congruence. Personality traits were strongly connected with preference ratings, and the correlations reflected the trait-congruent patterns obtained in prior studies investigating self-referential emotional processing. Implications for future behavioural and neuroimaging studies on music and emotions are raised. Copyright © 2011 Elsevier Srl. All rights reserved.

  19. Memorable Experiences with Sad Music—Reasons, Reactions and Mechanisms of Three Types of Experiences

    Science.gov (United States)

    Peltola, Henna-Riikka

    2016-01-01

    Reactions to memorable experiences of sad music were studied by means of a survey administered to a convenience (N = 1577), representative (N = 445), and quota sample (N = 414). The survey explored the reasons, mechanisms, and emotions of such experiences. Memorable experiences linked with sad music typically occurred in relation to extremely familiar music, caused intense and pleasurable experiences, which were accompanied by physiological reactions and positive mood changes in about a third of the participants. A consistent structure of reasons and emotions for these experiences was identified through exploratory and confirmatory factor analyses across the samples. Three types of sadness experiences were established, one that was genuinely negative (Grief-Stricken Sorrow) and two that were positive (Comforting Sorrow and Sweet Sorrow). Each type of emotion exhibited certain individual differences and had distinct profiles in terms of the underlying reasons, mechanisms, and elicited reactions. The prevalence of these broad types of emotional experiences suggested that positive experiences are the most frequent, but negative experiences were not uncommon in any of the samples. The findings have implications for measuring emotions induced by music and fiction in general, and call attention to the non-pleasurable aspects of these experiences. PMID:27300268
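
    The factor-analytic step mentioned above, identifying a consistent structure of reasons and emotions across samples, can be illustrated with a toy exploratory factor analysis. The sketch below uses simulated ratings and scikit-learn's FactorAnalysis; the items, loadings, and two-factor structure are invented, and the exploratory-plus-confirmatory procedure of the published study is not reproduced.

    ```python
    # Illustrative sketch: exploratory factor analysis on simulated survey ratings.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    n_respondents = 300
    # Two hypothetical latent dimensions (e.g., negative vs. positive sorrow) generating 6 items.
    latent = rng.normal(size=(n_respondents, 2))
    loadings = np.array([
        [0.9, 0.0], [0.8, 0.1], [0.7, 0.0],   # items loading on factor 1
        [0.0, 0.8], [0.1, 0.9], [0.0, 0.7],   # items loading on factor 2
    ])
    ratings = latent @ loadings.T + rng.normal(scale=0.4, size=(n_respondents, 6))

    fa = FactorAnalysis(n_components=2, random_state=0).fit(ratings)
    print(np.round(fa.components_.T, 2))       # estimated item loadings, one row per item
    ```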

  20. Not all sounds sound the same: Parkinson's disease affects differently emotion processing in music and in speech prosody.

    Science.gov (United States)

    Lima, César F; Garrett, Carolina; Castro, São Luís

    2013-01-01

    Does emotion processing in music and speech prosody recruit common neurocognitive mechanisms? To examine this question, we implemented a cross-domain comparative design in Parkinson's disease (PD). Twenty-four patients and 25 controls performed emotion recognition tasks for music and spoken sentences. In music, patients had impaired recognition of happiness and peacefulness, and intact recognition of sadness and fear; this pattern was independent of general cognitive and perceptual abilities. In speech, patients had a small global impairment, which was significantly mediated by executive dysfunction. Hence, PD affected differently musical and prosodic emotions. This dissociation indicates that the mechanisms underlying the two domains are partly independent.

  1. Effects of sad mood on facial emotion recognition in Chinese people.

    Science.gov (United States)

    Lee, Tatia M C; Ng, Emily H H; Tang, S W; Chan, Chetwyn C H

    2008-05-30

    This study examined the influence of sad mood on the judgment of ambiguous facial emotion expressions among 47 healthy volunteers who had been induced to feel sad (n=13), neutral (n=15), or happy (n=19) emotions by watching video clips. The findings suggest that when the targets were ambiguous, participants who were in a sad mood tended to classify them in the negative emotional categories rather than the positive emotional categories. Also, this observation indicates that emotion-specific negative bias in the judgment of facial expressions is associated with a sad mood. The finding argues against a general impairment in decoding facial expressions. Furthermore, the observed mood-congruent negative bias was best predicted by spatial perception. The findings of this study provide insights into the cognitive processes underlying the interpersonal difficulties experienced by people in a sad mood, which may be predisposing factors in the development of clinical depression.

  2. The effect of music on corticospinal excitability is related to the perceived emotion: a transcranial magnetic stimulation study.

    Science.gov (United States)

    Giovannelli, Fabio; Banfi, Chiara; Borgheresi, Alessandra; Fiori, Elisa; Innocenti, Iglis; Rossi, Simone; Zaccara, Gaetano; Viggiano, Maria Pia; Cincotta, Massimo

    2013-03-01

    Transcranial magnetic stimulation (TMS) and neuroimaging studies suggest a functional link between the emotion-related brain areas and the motor system. It is not well understood, however, whether the motor cortex activity is modulated by specific emotions experienced during music listening. In 23 healthy volunteers, we recorded the motor evoked potentials (MEP) following TMS to investigate the corticospinal excitability while subjects listened to music pieces evoking different emotions (happiness, sadness, fear, and displeasure), an emotionally neutral piece, and a control stimulus (musical scale). Quality and intensity of emotions were previously rated in an additional group of 30 healthy subjects. Fear-related music significantly increased the MEP size compared to the neutral piece and the control stimulus. This effect was not seen with music inducing other emotional experiences and was not related to changes in autonomic variables (respiration rate, heart rate). Current data indicate that also in a musical context, the excitability of the corticomotoneuronal system is related to the emotion expressed by the listened piece. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Daydreams and trait affect: The role of the listener's state of mind in the emotional response to music.

    Science.gov (United States)

    Martarelli, Corinna S; Mayer, Boris; Mast, Fred W

    2016-11-01

    Music creates room for the mind to wander, mental time travel, and departures into more fantastical worlds. We examined the mediating role of daydreams and the moderating function of personality differences for the emotional response to music by using a moderated mediation approach. The results showed that the valence of daydreams played a mediating role in the reaction to the musical experience: happy music was related to more positive daydreams, which were associated with greater relaxation with the happy music and to greater liking of the happy music. Furthermore, negative affect (trait) moderated the direct effect of sad vs. happy music on the liking of the music: individuals with high scores on negative affect preferred sad music. The results are discussed with regard to the interplay of general and personality-specific processes as it is relevant to better understand the effects music can have on the listeners. Copyright © 2016 Elsevier Inc. All rights reserved.
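
    A moderated mediation of the kind described here (daydream valence as mediator, trait negative affect as moderator) is often estimated with a pair of regressions. The sketch below shows that regression skeleton on simulated data; the variable names are invented, and the bootstrapped conditional indirect effects typically reported in published moderated-mediation analyses are omitted.

    ```python
    # Regression-based sketch of a simple moderated-mediation setup (simulated data;
    # not the authors' model or estimates).
    import numpy as np
    import pandas as pd
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(2)
    n = 200
    music_happy = rng.integers(0, 2, n)                 # 0 = sad excerpt, 1 = happy excerpt
    neg_affect = rng.normal(size=n)                     # trait negative affect (moderator)
    daydream_valence = 0.6 * music_happy + rng.normal(scale=0.8, size=n)   # mediator
    liking = (0.5 * daydream_valence + 0.3 * music_happy
              - 0.4 * music_happy * neg_affect + rng.normal(scale=0.8, size=n))

    df = pd.DataFrame({"music_happy": music_happy, "neg_affect": neg_affect,
                       "daydream_valence": daydream_valence, "liking": liking})

    path_a = ols("daydream_valence ~ music_happy", df).fit()                        # X -> M
    path_b = ols("liking ~ daydream_valence + music_happy * neg_affect", df).fit()  # M, X, X*W -> Y

    indirect = path_a.params["music_happy"] * path_b.params["daydream_valence"]
    print(f"Indirect (mediated) effect of happy vs. sad music on liking: {indirect:.2f}")
    print(f"Moderation of the direct effect by negative affect: "
          f"{path_b.params['music_happy:neg_affect']:.2f}")
    ```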

  4. The effect of background music and song texts on the emotional understanding of children with autism.

    Science.gov (United States)

    Katagiri, June

    2009-01-01

    The purpose of this study was to examine the effect of background music and song texts in teaching emotional understanding to children with autism. Participants were 12 students (mean age 11.5 years) with a primary diagnosis of autism who were attending schools in Japan. Each participant was taught four emotions to decode and encode: happiness, sadness, anger, and fear, in a counterbalanced treatment order. The treatment consisted of four conditions: (a) no contact control (NCC)--no purposeful teaching of the selected emotion, (b) contact control (CC)--teaching the selected emotion using verbal instructions alone, (c) background music (BM)--teaching the selected emotion by verbal instructions with background music representing the emotion, and (d) singing songs (SS)--teaching the selected emotion by singing specially composed songs about the emotion. Participants were given a pretest and a posttest and received 8 individual sessions between these tests. The results indicated that all participants improved significantly in their understanding of the four selected emotions. Background music was significantly more effective than the other three conditions in improving participants' emotional understanding. The findings suggest that background music can be an effective tool to increase emotional understanding in children with autism, which is crucial to their social interactions.

  5. Music, memory and emotion.

    Science.gov (United States)

    Jäncke, Lutz

    2008-08-08

    Because emotions enhance memory processes and music evokes strong emotions, music could be involved in forming memories, either about pieces of music or about episodes and information associated with particular music. A recent study in BMC Neuroscience has given new insights into the role of emotion in musical memory.

  6. Emotions from listening to music affect the subjective perception of time (Emoções de uma escuta musical afetam a percepção subjetiva de tempo)

    Directory of Open Access Journals (Sweden)

    Danilo Ramos

    2012-01-01

    Full Text Available This study examined whether emotions perceived during music listening influence time perception. Musicians and non-musicians performed listening tasks with 20-second musical excerpts from the Western classical repertoire, followed by tasks in which each excerpt was associated with standard durations ranging from 16 to 24 seconds. Each musical excerpt was representative of one of the emotional categories Happiness, Sadness, Serenity, or Fear/Anger. An analysis of variance showed that, while non-musicians showed temporal underestimations associated with at least one musical excerpt from each emotional category, musicians underestimated all sad musical excerpts, which were related to low arousal and negative affective valence.

  7. Music, memory and emotion

    Science.gov (United States)

    Jäncke, Lutz

    2008-01-01

    Because emotions enhance memory processes and music evokes strong emotions, music could be involved in forming memories, either about pieces of music or about episodes and information associated with particular music. A recent study in BMC Neuroscience has given new insights into the role of emotion in musical memory. PMID:18710596

  8. Sad man's nose: Emotion induction and olfactory perception.

    Science.gov (United States)

    Flohr, Elena L R; Erwin, Elena; Croy, Ilona; Hummel, Thomas

    2017-03-01

    Emotional and olfactory processing is frequently shown to be closely linked both anatomically and functionally. Depression, a disease closely related to the emotional state of sadness, has been shown to be associated with a decrease in olfactory sensitivity. The present study focuses on the state of sadness in n = 31 healthy subjects in order to investigate the specific contribution of this affective state in the modulation of olfactory processing. A sad or indifferent affective state was induced using 2 movies that were presented on 2 separate days. Afterward, chemosensory-evoked potentials were recorded after stimulation with an unpleasant (hydrogen sulfide: "rotten eggs") or a pleasant (phenyl ethyl alcohol: "rose") odorant. Latencies of N1 and P2 peaks were longer after induction of the sad affective state. Additionally, amplitudes were lower in a sad affective state when being stimulated with the unpleasant odorant. Processing of olfactory input has thus been reduced under conditions of the sad affective state. We argue that the affective state per se could at least partially account for the reduced olfactory sensitivity in depressed patients. To our knowledge, the present study is the first to show influence of affective state on chemosensory event-related potentials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. It's not what you play, it's how you play it: timbre affects perception of emotion in music.

    Science.gov (United States)

    Hailstone, Julia C; Omar, Rohani; Henley, Susie M D; Frost, Chris; Kenward, Michael G; Warren, Jason D

    2009-11-01

    Salient sensory experiences often have a strong emotional tone, but the neuropsychological relations between perceptual characteristics of sensory objects and the affective information they convey remain poorly defined. Here we addressed the relationship between sound identity and emotional information using music. In two experiments, we investigated whether perception of emotions is influenced by altering the musical instrument on which the music is played, independently of other musical features. In the first experiment, 40 novel melodies each representing one of four emotions (happiness, sadness, fear, or anger) were each recorded on four different instruments (an electronic synthesizer, a piano, a violin, and a trumpet), controlling for melody, tempo, and loudness between instruments. Healthy participants (23 young adults aged 18-30 years, 24 older adults aged 58-75 years) were asked to select which emotion they thought each musical stimulus represented in a four-alternative forced-choice task. Using a generalized linear mixed model we found a significant interaction between instrument and emotion judgement, with a similar pattern in young and older adults; this effect was not attributable to musical expertise. In the second experiment, using the same melodies and experimental design, the interaction between timbre and perceived emotion was replicated, indicating that timbre affects the perception of emotion in music after controlling for other acoustic, cognitive, and performance factors.
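
    The analysis described above models categorical emotion judgments with a generalized linear mixed model. As a simplified illustration, the sketch below fits a linear mixed model with a random intercept per listener to simulated accuracy scores; a binomial GLMM would be the closer match to the reported analysis, and the data, factor levels, and effect sizes here are invented.

    ```python
    # Simplified sketch (simulated data): mixed model with a random intercept per listener,
    # standing in for the generalized linear mixed model reported in the abstract.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    instruments = ["synth", "piano", "violin", "trumpet"]
    emotions = ["happy", "sad", "fear", "anger"]

    rows = []
    for subj in range(40):
        subj_skill = rng.normal(scale=0.05)             # random listener effect
        for instr in instruments:
            for emo in emotions:
                base = 0.75 + subj_skill
                if instr == "violin" and emo == "sad":
                    base += 0.10                        # toy instrument-by-emotion interaction
                accuracy = float(np.clip(base + rng.normal(scale=0.05), 0, 1))
                rows.append({"subject": subj, "instrument": instr,
                             "emotion": emo, "accuracy": accuracy})
    df = pd.DataFrame(rows)

    model = smf.mixedlm("accuracy ~ C(instrument) * C(emotion)", df, groups=df["subject"]).fit()
    print(model.summary())
    ```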

  10. Memory for facial expression is influenced by the background music playing during study

    OpenAIRE

    Woloszyn, Michael R.; Ewert, Laura

    2012-01-01

    The effect of the emotional quality of study-phase background music on subsequent recall for happy and sad facial expressions was investigated. Undergraduates (N = 48) viewed a series of line drawings depicting a happy or sad child in a variety of environments that were each accompanied by happy or sad music. Although memory for faces was very accurate, emotionally incongruent background music biased subsequent memory for facial expressions, increasing the likelihood that happy faces were rec...

  11. Verbal and facial-emotional Stroop tasks reveal specific attentional interferences in sad mood.

    Science.gov (United States)

    Isaac, Linda; Vrijsen, Janna N; Eling, Paul; van Oostrom, Iris; Speckens, Anne; Becker, Eni S

    2012-01-01

    Mood congruence refers to the tendency of individuals to attend to information more readily when it has the same emotional content as their current mood state. The aim of the present study was to ascertain whether attentional interference occurred for participants in sad mood states for emotionally relevant stimuli (mood-congruence), and to determine whether this interference occurred for both valenced words and valenced faces. A mood induction procedure was administered to 116 undergraduate females divided into two equal groups for the sad and happy mood condition. This study employed three versions of the Stroop task: color, verbal-emotional, and a facial-emotional Stroop. The two mood groups did not differ on the color Stroop. Significant group differences were found on the verbal-emotional Stroop for sad words with longer latencies for sad-induced participants. Main findings for the facial-emotional Stroop were that sad mood is associated with attentional interference for angry-threatening faces as well as longer latencies for neutral faces. Group differences were not found for positive stimuli. These findings confirm that sad mood is associated with attentional interference for mood-congruent stimuli in the verbal domain (sad words), but this mood-congruent effect does not necessarily apply to the visual domain (sad faces). Attentional interference for neutral faces suggests sad mood participants did not necessarily see valence-free faces. Attentional interference for threatening stimuli is often associated with anxiety; however, the current results show that threat is not an attentional interference observed exclusively in states of anxiety but also in sad mood.

  12. Independent component processes underlying emotions during natural music listening.

    Science.gov (United States)

    Rogenmoser, Lars; Zollinger, Nina; Elmer, Stefan; Jäncke, Lutz

    2016-09-01

    The aim of this study was to investigate the brain processes underlying emotions during natural music listening. To address this, we recorded high-density electroencephalography (EEG) from 22 subjects while presenting a set of individually matched whole musical excerpts varying in valence and arousal. Independent component analysis was applied to decompose the EEG data into functionally distinct brain processes. A k-means cluster analysis calculated on the basis of a combination of spatial (scalp topography and dipole location mapped onto the Montreal Neurological Institute brain template) and functional (spectra) characteristics revealed 10 clusters referring to brain areas typically involved in music and emotion processing, namely in the proximity of thalamic-limbic and orbitofrontal regions as well as at frontal, fronto-parietal, parietal, parieto-occipital, temporo-occipital and occipital areas. This analysis revealed that arousal was associated with a suppression of power in the alpha frequency range. On the other hand, valence was associated with an increase in theta frequency power in response to excerpts inducing happiness compared to sadness. These findings are partly compatible with the model proposed by Heller, arguing that the frontal lobe is involved in modulating valenced experiences (the left frontal hemisphere for positive emotions) whereas the right parieto-temporal region contributes to the emotional arousal. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
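
    The pipeline sketched in this abstract, an ICA decomposition followed by clustering of components on spatial and spectral features, can be illustrated in miniature. The code below applies FastICA to synthetic multi-channel signals and clusters the resulting components by their power spectra; the data are invented, and the scalp-topography and dipole-localization features used in the actual study are omitted.

    ```python
    # Illustrative sketch only (synthetic signals, not EEG): decompose multi-channel data
    # with FastICA, summarize each component by its power spectrum, and cluster the
    # components with k-means.
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    fs, n_seconds, n_channels = 250, 10, 16
    t = np.arange(fs * n_seconds) / fs

    # Synthetic sources: a 10 Hz ("alpha") and a 6 Hz ("theta") oscillation plus noise,
    # mixed into 16 pseudo-channels.
    sources = np.stack([np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 6 * t),
                        rng.normal(size=t.size)])
    mixing = rng.normal(size=(n_channels, sources.shape[0]))
    data = (mixing @ sources).T + 0.1 * rng.normal(size=(t.size, n_channels))

    ica = FastICA(n_components=3, random_state=0)
    components = ica.fit_transform(data)            # shape: (samples, components)

    # Feature per component: normalized power spectrum up to 40 Hz.
    freqs = np.fft.rfftfreq(t.size, d=1 / fs)
    band = freqs <= 40
    spectra = np.abs(np.fft.rfft(components, axis=0))[band].T
    spectra /= spectra.sum(axis=1, keepdims=True)

    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(spectra)
    print("Cluster assignment per independent component:", clusters)
    ```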

  13. From Motion to Emotion: Accelerometer Data Predict Subjective Experience of Music

    Science.gov (United States)

    Irrgang, Melanie

    2016-01-01

    Music is often described as emotional because it reflects expressive movements in audible form. Thus, a valid approach to measuring musical emotion could be to assess movement stimulated by music. In two experiments we evaluated the discriminative power of mobile-device-generated acceleration data, produced by free movement during music listening, for the prediction of ratings on the Geneva Emotion Music Scales (GEMS-9). The quality of prediction for the different GEMS dimensions varied between experiments for tenderness (R² = 0.50 in the first experiment, 0.39 in the second), nostalgia (R² = 0.42, 0.30), wonder (R² = 0.25, 0.34), sadness (R² = 0.24, 0.35), peacefulness (R² = 0.20, 0.35), joy (R² = 0.19, 0.33), and transcendence (R² = 0.14, 0.00). For others, such as power (R² = 0.42, 0.49) and tension (R² = 0.28, 0.27), the results were almost reproduced. Furthermore, we extracted two principal components from the GEMS ratings, one representing arousal and the other valence of the experienced feeling. Both qualities, arousal and valence, could be predicted from the acceleration data, indicating that it provides information on both the quantity and the quality of the experience. On the one hand, these findings show how music-evoked movement patterns relate to music-evoked feelings. On the other hand, they help to integrate findings from the field of embodied music cognition into music recommender systems. PMID:27415015

  14. From Motion to Emotion: Accelerometer Data Predict Subjective Experience of Music.

    Science.gov (United States)

    Irrgang, Melanie; Egermann, Hauke

    2016-01-01

    Music is often described as emotional because it reflects expressive movements in audible form. Thus, a valid approach to measuring musical emotion could be to assess movement stimulated by music. In two experiments we evaluated the discriminative power of mobile-device-generated acceleration data, produced by free movement during music listening, for the prediction of ratings on the Geneva Emotion Music Scales (GEMS-9). The quality of prediction for the different GEMS dimensions varied between experiments for tenderness (R² = 0.50 in the first experiment, 0.39 in the second), nostalgia (R² = 0.42, 0.30), wonder (R² = 0.25, 0.34), sadness (R² = 0.24, 0.35), peacefulness (R² = 0.20, 0.35), joy (R² = 0.19, 0.33), and transcendence (R² = 0.14, 0.00). For others, such as power (R² = 0.42, 0.49) and tension (R² = 0.28, 0.27), the results were almost reproduced. Furthermore, we extracted two principal components from the GEMS ratings, one representing arousal and the other valence of the experienced feeling. Both qualities, arousal and valence, could be predicted from the acceleration data, indicating that it provides information on both the quantity and the quality of the experience. On the one hand, these findings show how music-evoked movement patterns relate to music-evoked feelings. On the other hand, they help to integrate findings from the field of embodied music cognition into music recommender systems.
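
    Two analysis steps are described here: regressing GEMS ratings on movement features, and extracting two principal components from the GEMS ratings. The sketch below illustrates both on simulated data; the movement features, sample sizes, and effect sizes are invented and the models are not the authors'.

    ```python
    # Minimal sketch with simulated data: predict one GEMS rating from summary statistics
    # of accelerometer data, then extract two principal components from the full set of
    # GEMS ratings as stand-ins for arousal and valence.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    n_trials = 120

    # Hypothetical movement features per listening trial: mean acceleration magnitude,
    # its variance, and a dominant movement frequency.
    movement = rng.normal(size=(n_trials, 3))
    gems_power = 1.2 * movement[:, 0] + 0.6 * movement[:, 2] + rng.normal(scale=1.0, size=n_trials)

    r2 = cross_val_score(LinearRegression(), movement, gems_power, cv=5, scoring="r2")
    print(f"Cross-validated R^2 for the 'power' scale: {r2.mean():.2f}")

    # PCA over all nine GEMS scales (simulated here) to recover two broad components.
    gems_ratings = rng.normal(size=(n_trials, 9))
    pca = PCA(n_components=2).fit(gems_ratings)
    print("Variance explained by the two components:", np.round(pca.explained_variance_ratio_, 2))
    ```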

  15. Music, memory and emotion

    OpenAIRE

    Jäncke, Lutz

    2008-01-01

    Because emotions enhance memory processes and music evokes strong emotions, music could be involved in forming memories, either about pieces of music or about episodes and information associated with particular music. A recent study in BMC Neuroscience has given new insights into the role of emotion in musical memory. Music has a prominent role in the everyday life of many people. Whether it is for recreation, distraction or mood enhancement, a lot of people listen to music from early in t...

  16. Two types of peak emotional responses to music: The psychophysiology of chills and tears

    Science.gov (United States)

    Mori, Kazuma; Iwanaga, Makoto

    2017-01-01

    People sometimes experience a strong emotional response to artworks. Previous studies have demonstrated that the peak emotional experience of chills (goose bumps or shivers) when listening to music involves psychophysiological arousal and a rewarding effect. However, many aspects of peak emotion are still not understood. The current research takes a new perspective of peak emotional response of tears (weeping, lump in the throat). A psychophysiological experiment showed that self-reported chills increased electrodermal activity and subjective arousal whereas tears produced slow respiration during heartbeat acceleration, although both chills and tears induced pleasure and deep breathing. A song that induced chills was perceived as being both happy and sad whereas a song that induced tears was perceived as sad. A tear-eliciting song was perceived as calmer than a chill-eliciting song. These results show that tears involve pleasure from sadness and that they are psychophysiologically calming; thus, psychophysiological responses permit the distinction between chills and tears. Because tears may have a cathartic effect, the functional significance of chills and tears seems to be different. We believe that the distinction of two types of peak emotions is theoretically relevant and further study of tears would contribute to more understanding of human peak emotional response. PMID:28387335

  17. Two types of peak emotional responses to music: The psychophysiology of chills and tears.

    Science.gov (United States)

    Mori, Kazuma; Iwanaga, Makoto

    2017-04-07

    People sometimes experience a strong emotional response to artworks. Previous studies have demonstrated that the peak emotional experience of chills (goose bumps or shivers) when listening to music involves psychophysiological arousal and a rewarding effect. However, many aspects of peak emotion are still not understood. The current research takes a new perspective of peak emotional response of tears (weeping, lump in the throat). A psychophysiological experiment showed that self-reported chills increased electrodermal activity and subjective arousal whereas tears produced slow respiration during heartbeat acceleration, although both chills and tears induced pleasure and deep breathing. A song that induced chills was perceived as being both happy and sad whereas a song that induced tears was perceived as sad. A tear-eliciting song was perceived as calmer than a chill-eliciting song. These results show that tears involve pleasure from sadness and that they are psychophysiologically calming; thus, psychophysiological responses permit the distinction between chills and tears. Because tears may have a cathartic effect, the functional significance of chills and tears seems to be different. We believe that the distinction of two types of peak emotions is theoretically relevant and further study of tears would contribute to more understanding of human peak emotional response.

  18. The effect of sadness on global-local processing.

    Science.gov (United States)

    von Mühlenen, Adrian; Bellaera, Lauren; Singh, Amrendra; Srinivasan, Narayanan

    2018-05-04

    Gable and Harmon-Jones (Psychological Science, 21(2), 211-215, 2010) reported that sadness broadens attention in a global-local letter task. This finding provided the key test for their motivational intensity account, which states that the level of spatial processing is not determined by emotional valence, but by motivational intensity. However, their finding is at odds with several other studies, showing no effect, or even a narrowing effect of sadness on attention. This paper reports two attempts to replicate the broadening effect of sadness on attention. Both experiments used a global-local letter task, but differed in terms of emotion induction: Experiment 1 used the same pictures as Gable and Harmon-Jones, taken from the IAPS dataset; Experiment 2 used a sad video underlaid with sad music. Results showed a sadness-specific global advantage in the error rates, but not in the reaction times. The same null results were also found in a South-Asian sample in both experiments, showing that effects on global/local processing were not influenced by a culturally related processing bias.

  19. Emotional response to musical repetition.

    Science.gov (United States)

    Livingstone, Steven R; Palmer, Caroline; Schubert, Emery

    2012-06-01

    Two experiments examined the effects of repetition on listeners' emotional response to music. Listeners heard recordings of orchestral music that contained a large section repeated twice. The music had a symmetric phrase structure (same-length phrases) in Experiment 1 and an asymmetric phrase structure (different-length phrases) in Experiment 2, hypothesized to alter the predictability of sensitivity to musical repetition. Continuous measures of arousal and valence were compared across music that contained identical repetition, variation (related), or contrasting (unrelated) structure. Listeners' emotional arousal ratings differed most for contrasting music, moderately for variations, and least for repeating musical segments. A computational model for the detection of repeated musical segments was applied to the listeners' emotional responses. The model detected the locations of phrase boundaries from the emotional responses better than from performed tempo or physical intensity in both experiments. These findings indicate the importance of repetition in listeners' emotional response to music and in the perceptual segmentation of musical structure.

  20. Beyond Sadness: The Multi-Emotional Trajectory of Melodrama

    NARCIS (Netherlands)

    Hanich, Julian; Menninghaus, Winfried

    In this article we investigate the astonishing variety of emotions that a brief scene in a film melodrama can evoke. We thus take issue with the reductive view of melodrama that limits this genre’s emotional effects to sadness, pity, and tear-jerking potential. Through a close analysis of a

  1. Sadness increases distraction by auditory deviant stimuli.

    Science.gov (United States)

    Pacheco-Unguetti, Antonia P; Parmentier, Fabrice B R

    2014-02-01

    Research shows that attention is ineluctably captured away from a focal visual task by rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards). The fundamental cognitive mechanisms underlying this effect have been the object of an increasing number of studies but their sensitivity to mood and emotions remains relatively unexplored despite suggestion of greater distractibility in negative emotional contexts. In this study, we examined the effect of sadness, a widespread form of emotional distress and a symptom of many disorders, on distraction by deviant sounds. Participants received either a sadness induction or a neutral mood induction by means of a mixed procedure based on music and autobiographical recall prior to taking part in an auditory-visual oddball task in which they categorized visual digits while ignoring task-irrelevant sounds. The results showed that although all participants exhibited significantly longer response times in the visual categorization task following the presentation of rare and unexpected deviant sounds relative to that of the standard sound, this distraction effect was significantly greater in participants who had received the sadness induction (a twofold increase). The residual distraction on the subsequent trial (postdeviance distraction) was equivalent in both groups, suggesting that sadness interfered with the disengagement of attention from the deviant sound and back toward the target stimulus. We propose that this disengagement impairment reflected the monopolization of cognitive resources by sadness and/or associated ruminations. Our findings suggest that sadness can increase distraction even when distractors are emotionally neutral. PsycINFO Database Record (c) 2014 APA, all rights reserved.
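
    A small sketch of how the behavioural distraction measures described above could be computed from trial-level data, assuming a hypothetical table with group, trial-type (standard / deviant / post-deviant) and response-time columns.

    # Sketch: deviance distraction = RT(deviant) - RT(standard);
    # post-deviance distraction = RT(post-deviant) - RT(standard), per group.
    import pandas as pd

    trials = pd.DataFrame({                    # hypothetical trial-level data
        "group": ["sad", "sad", "sad", "neutral", "neutral", "neutral"],
        "trial_type": ["standard", "deviant", "post-deviant"] * 2,
        "rt_ms": [520, 610, 535, 515, 560, 528],
    })

    mean_rt = trials.groupby(["group", "trial_type"])["rt_ms"].mean().unstack()
    mean_rt["deviance_distraction"] = mean_rt["deviant"] - mean_rt["standard"]
    mean_rt["postdeviance_distraction"] = mean_rt["post-deviant"] - mean_rt["standard"]
    print(mean_rt)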

  2. Impact of civil war on emotion recognition: the denial of sadness in Sierra Leone.

    Science.gov (United States)

    Umiltà, Maria Alessandra; Wood, Rachel; Loffredo, Francesca; Ravera, Roberto; Gallese, Vittorio

    2013-01-01

    Studies of children with atypical emotional experience demonstrate that childhood exposure to high levels of hostility and threat biases emotion perception. This study investigates emotion processing in former child soldiers and non-combatant civilians. All participants had experienced prolonged violence exposure during childhood. The study, carried out in Sierra Leone, aimed to examine the effects of exposure to and forced participation in acts of extreme violence on the emotion processing of young adult war survivors. A total of 76 young, male adults (38 former child soldier survivors and 38 civilian survivors) were tested in order to assess participants' ability to identify four different facial emotion expressions from photographs and movies. Both groups were able to recognize facial expressions of emotion. However, despite their general ability to correctly identify facial emotions, participants showed a significant response bias in their recognition of sadness. Both former soldiers and civilians made more errors in identifying expressions of sadness than expressions of the other three emotions, and when mislabeling sadness participants most often described it as anger. Conversely, when making erroneous identifications of other emotions, participants were most likely to label the expressed emotion as sadness. In addition, while for three of the four emotions participants were better able to make a correct identification the greater the intensity of the expression, this pattern was not observed for sadness. During the presentation of movies, the recognition of sadness was significantly worse for the former soldiers. While both former child soldiers and civilians were found to be able to identify facial emotions, a significant response bias in their attribution of negative emotions was observed. Such bias was particularly pronounced in former child soldiers. These findings point to a pervasive, long-lasting effect of childhood exposure to violence on emotion processing in later life.

  3. Impact of civil war on emotion recognition: the denial of sadness in Sierra Leone.

    Directory of Open Access Journals (Sweden)

    Maria Alessandra Umiltà

    2013-09-01

    Full Text Available Studies of children with atypical emotional experience demonstrate that childhood exposure to high levels of hostility and threat biases emotion perception. This study investigates emotion processing in former child soldiers and non-combatant civilians. All participants had experienced prolonged violence exposure during childhood. The study, carried out in Sierra Leone, aimed to examine the effects of exposure to and forced participation in acts of extreme violence on the emotion processing of young adult war survivors. A total of 76 young, male adults (38 former child soldier survivors and 38 civilian survivors) were tested in order to assess participants’ ability to identify four different facial emotion expressions from photographs and movies. Both groups were able to recognize facial expressions of emotion. However, despite their general ability to correctly identify facial emotions, participants showed a significant response bias in their recognition of sadness. Both former soldiers and civilians made more errors in identifying expressions of sadness than expressions of the other three emotions, and when mislabeling sadness participants most often described it as anger. Conversely, when making erroneous identifications of other emotions, participants were most likely to label the expressed emotion as sadness. In addition, while for three of the four emotions participants were better able to make a correct identification the greater the intensity of the expression, this pattern was not observed for sadness. During the presentation of movies, the recognition of sadness was significantly worse for the former soldiers. While both former child soldiers and civilians were found to be able to identify facial emotions, a significant response bias in their attribution of negative emotions was observed. Such bias was particularly pronounced in former child soldiers. These findings point to a pervasive, long-lasting effect of childhood exposure to violence on emotion processing in later life.

  4. Neural correlates of cross-modal affective priming by music in Williams syndrome.

    Science.gov (United States)

    Lense, Miriam D; Gordon, Reyna L; Key, Alexandra P F; Dykens, Elisabeth M

    2014-04-01

    Emotional connection is the main reason people engage with music, and the emotional features of music can influence processing in other domains. Williams syndrome (WS) is a neurodevelopmental genetic disorder where musicality and sociability are prominent aspects of the phenotype. This study examined oscillatory brain activity during a musical affective priming paradigm. Participants with WS and age-matched typically developing controls heard brief emotional musical excerpts or emotionally neutral sounds and then reported the emotional valence (happy/sad) of subsequently presented faces. Participants with WS demonstrated greater evoked fronto-central alpha activity to the happy vs sad musical excerpts. The size of these alpha effects correlated with parent-reported emotional reactivity to music. Although participant groups did not differ in accuracy of identifying facial emotions, reaction time data revealed a music priming effect only in persons with WS, who responded faster when the face matched the emotional valence of the preceding musical excerpt vs when the valence differed. Matching emotional valence was also associated with greater evoked gamma activity thought to reflect cross-modal integration. This effect was not present in controls. The results suggest a specific connection between music and socioemotional processing and have implications for clinical and educational approaches for WS.

  5. Tragicomedy, Melodrama, and Genre in Early Sound Films: The Case of Two “Sad Clown” Musicals

    Directory of Open Access Journals (Sweden)

    Michael G. Garber

    2016-10-01

    Full Text Available This interdisciplinary study applies the theatrical theories of stage genres to examples of the early sound cinema, the 1930 Hollywood musicals Puttin’ on the Ritz (starring Harry Richman, and with songs by Irving Berlin) and Free and Easy (starring Buster Keaton). The discussion focuses on the phenomenon of the sad clown as a symbol of tragicomedy. Springing from Rick Altman’s delineation of the “sad clown” sub-subgenre of the show musical subgenre, outlined in The American Film Musical, this article shows that, in these seminal movie musicals, naïve melodrama and “gag” comedy coexist with the tonalities, structures, philosophy, and images of the sophisticated genre of tragicomedy, including by incorporating the grotesque into the mise-en-scène of their musical production numbers.

  6. Induction of depressed and elated mood by music influences the perception of facial emotional expressions in healthy subjects.

    Science.gov (United States)

    Bouhuys, A L; Bloem, G M; Groothuis, T G

    1995-04-04

    The judgement of healthy subjects rating the emotional expressions of a set of schematically drawn faces was validated (study 1) to examine the relationship between mood (depressed/elated) and the judgement of emotional expressions of these faces (study 2). Study 1: 30 healthy subjects judged 12 faces with respect to the emotions they express (fear, happiness, anger, sadness, disgust, surprise, rejection and invitation). It was found that a particular face could reflect various emotions. All eight emotions were reflected in the set of faces and the emotions were consensually judged. Moreover, gender differences in judgement could be established. Study 2: In a cross-over design, 24 healthy subjects judged the faces after listening to depressing or elating music. The faces were subdivided into six 'ambiguous' faces (i.e., expressing similar amounts of positive and negative emotions) and six 'clear' faces (i.e., faces showing a preponderance of positive or negative emotions). In addition, these two types of faces were distinguished with respect to the intensity of the emotions they express. Eleven subjects who showed substantial differences in experienced depression after listening to the music were selected for further analysis. It was found that, when feeling more depressed, the subjects perceived more rejection/sadness in ambiguous faces (displaying less intensive emotions) and less invitation/happiness in clear faces. In addition, subjects saw more fear in clear faces that express less intensive emotions. Hence, the results show a depression-related negative bias in the perception of facial displays.

  7. Music Communicates Affects, Not Basic Emotions - A Constructionist Account of Attribution of Emotional Meanings to Music.

    Science.gov (United States)

    Cespedes-Guevara, Julian; Eerola, Tuomas

    2018-01-01

    Basic Emotion theory has had a tremendous influence on the affective sciences, including music psychology, where most researchers have assumed that music expressivity is constrained to a limited set of basic emotions. Several scholars suggested that these constraints on musical expressivity are explained by the existence of a shared acoustic code to the expression of emotions in music and speech prosody. In this article we advocate for a shift from this focus on basic emotions to a constructionist account. This approach proposes that the phenomenon of perception of emotions in music arises from the interaction of music's ability to express core affects and the influence of top-down and contextual information in the listener's mind. We start by reviewing the problems with the concept of Basic Emotions, and the inconsistent evidence that supports it. We also demonstrate how decades of developmental and cross-cultural research on music and emotional speech have failed to produce convincing findings to conclude that music expressivity is built upon a set of biologically pre-determined basic emotions. We then examine the cue-emotion consistencies between music and speech, and show how they support a parsimonious explanation, where musical expressivity is grounded in two dimensions of core affect (arousal and valence). Next, we explain how the fact that listeners reliably identify basic emotions in music does not arise from the existence of categorical boundaries in the stimuli, but from processes that facilitate categorical perception, such as using stereotyped stimuli and close-ended response formats, psychological processes of construction of mental prototypes, and contextual information. Finally, we outline our proposal of a constructionist account of perception of emotions in music, and spell out the ways in which this approach is able to resolve past conflicting findings. We conclude by providing explicit pointers about the methodological choices that will be

  8. Cochlear implant users rely on tempo rather than on pitch information during perception of musical emotion.

    Science.gov (United States)

    Caldwell, Meredith; Rankin, Summer K; Jiradejvong, Patpong; Carver, Courtney; Limb, Charles J

    2015-09-01

    The purpose of this study was to investigate the extent to which cochlear implant (CI) users rely on tempo and mode in perception of musical emotion when compared with normal hearing (NH) individuals. A test battery of novel four-bar melodies was created and adapted to four permutations with alterations of tonality (major vs. minor) and tempo (presto vs. largo), resulting in non-ambiguous (major key/fast tempo and minor key/slow tempo) and ambiguous (major key/slow tempo, and minor key/fast tempo) musical stimuli. Both CI and NH participants listened to each clip and provided emotional ratings on a Likert scale of +5 (happy) to -5 (sad). A three-way ANOVA demonstrated an overall effect for tempo in both groups, and an overall effect for mode in the NH group. The CI group rated stimuli of the same tempo similarly, regardless of changes in mode, whereas the NH group did not. A subgroup analysis indicated the same effects in both musician and non-musician CI users and NH listeners. The results suggest that the CI group relied more heavily on tempo than mode in making musical emotion decisions. The subgroup analysis further suggests that level of musical training did not significantly impact this finding. CI users weigh temporal cues more heavily than pitch cues in inferring musical emotion. These findings highlight the significant disadvantage of CI users in comparison with NH listeners for music perception, particularly during recognition of musical emotion, a critically important feature of music.
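
    The sketch below illustrates the kind of factorial analysis described here (hearing group x tempo x mode on happy-sad ratings) using a plain between-subjects ANOVA in statsmodels; the data frame, its columns and cell sizes are hypothetical, and the study's actual design involved repeated measures.

    # Sketch: rating ~ group * tempo * mode, ordinary least squares ANOVA.
    # Ratings range from -5 (sad) to +5 (happy); all data here are synthetic.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(1)
    rows = []
    for g in ("CI", "NH"):
        for t in ("presto", "largo"):
            for m in ("major", "minor"):
                base = (2 if t == "presto" else -2) + (1 if m == "major" else -1)
                for _ in range(20):            # 20 hypothetical ratings per cell
                    rows.append({"group": g, "tempo": t, "mode": m,
                                 "rating": base + rng.normal(0, 1.5)})
    df = pd.DataFrame(rows)

    model = ols("rating ~ C(group) * C(tempo) * C(mode)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))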

  9. Musical anhedonia: selective loss of emotional experience in listening to music.

    Science.gov (United States)

    Satoh, Masayuki; Nakase, Taizen; Nagata, Ken; Tomimoto, Hidekazu

    2011-10-01

    Recent case studies have suggested that emotion perception and emotional experience of music involve independent cognitive processing. We report a patient who showed selective impairment of emotional experience only in listening to music, that is, musical anhedonia. A 71-year-old right-handed man developed an infarction in the right parietal lobe. He found himself unable to experience emotion in listening to music, even music to which he had listened with pleasure before the illness. In neuropsychological assessments, his intellectual, memory, and constructional abilities were normal. Speech audiometry and recognition of environmental sounds were within normal limits. Neuromusicological assessments revealed no abnormality in the perception of elementary components of music, or in the expression and emotion perception of music. Brain MRI identified the infarct lesion in the right inferior parietal lobule. These findings suggest that emotional experience of music can be selectively impaired without any disturbance of other musical or neuropsychological abilities. The right parietal lobe might participate in emotional experience in listening to music.

  10. Musical emotions: Functions, origins, evolution

    Science.gov (United States)

    Perlovsky, Leonid

    2010-03-01

    Theories of music origins and the role of musical emotions in the mind are reviewed. Most existing theories contradict each other, and cannot explain mechanisms or roles of musical emotions in workings of the mind, nor evolutionary reasons for music origins. Music seems to be an enigma. Nevertheless, a synthesis of cognitive science and mathematical models of the mind has been proposed describing a fundamental role of music in the functioning and evolution of the mind, consciousness, and cultures. The review considers ancient theories of music as well as contemporary theories advanced by leading authors in this field. It addresses one hypothesis that promises to unify the field and proposes a theory of musical origin based on a fundamental role of music in cognition and evolution of consciousness and culture. We consider a split in the vocalizations of proto-humans into two types: one less emotional and more concretely-semantic, evolving into language, and the other preserving emotional connections along with semantic ambiguity, evolving into music. The proposed hypothesis departs from other theories in considering specific mechanisms of the mind-brain, which required the evolution of music parallel with the evolution of cultures and languages. Arguments are reviewed that the evolution of language toward becoming the semantically powerful tool of today required emancipation from emotional encumbrances. The opposite, no less powerful mechanisms required a compensatory evolution of music toward more differentiated and refined emotionality. The need for refined music in the process of cultural evolution is grounded in fundamental mechanisms of the mind. This is why today's human mind and cultures cannot exist without today's music. The reviewed hypothesis gives a basis for future analysis of why different evolutionary paths of languages were paralleled by different evolutionary paths of music. Approaches toward experimental verification of this hypothesis in

  11. How music alters a kiss: superior temporal gyrus controls fusiform–amygdalar effective connectivity

    Science.gov (United States)

    Deserno, Lorenz; Bakels, Jan-Hendrik; Schlochtermeier, Lorna H.; Kappelhoff, Hermann; Jacobs, Arthur M.; Fritz, Thomas Hans; Koelsch, Stefan; Kuchinke, Lars

    2014-01-01

    While watching movies, the brain integrates the visual information and the musical soundtrack into a coherent percept. Multisensory integration can lead to emotion elicitation on which soundtrack valences may have a modulatory impact. Here, dynamic kissing scenes from romantic comedies were presented to 22 participants (13 females) during functional magnetic resonance imaging scanning. The kissing scenes were either accompanied by happy music, sad music or no music. Evidence from cross-modal studies motivated a predefined three-region network for multisensory integration of emotion, consisting of fusiform gyrus (FG), amygdala (AMY) and anterior superior temporal gyrus (aSTG). The interactions in this network were investigated using dynamic causal models of effective connectivity. This revealed bilinear modulations by happy and sad music with suppression effects on the connectivity from FG and AMY to aSTG. Non-linear dynamic causal modeling showed a suppressive gating effect of aSTG on fusiform–amygdalar connectivity. In conclusion, fusiform to amygdala coupling strength is modulated via feedback through aSTG as region for multisensory integration of emotional material. This mechanism was emotion-specific and more pronounced for sad music. Therefore, soundtrack valences may modulate emotion elicitation in movies by differentially changing preprocessed visual information to the amygdala. PMID:24298171

  12. Brain correlates of music-evoked emotions.

    Science.gov (United States)

    Koelsch, Stefan

    2014-03-01

    Music is a universal feature of human societies, partly owing to its power to evoke strong emotions and influence moods. During the past decade, the investigation of the neural correlates of music-evoked emotions has been invaluable for the understanding of human emotion. Functional neuroimaging studies on music and emotion show that music can modulate activity in brain structures that are known to be crucially involved in emotion, such as the amygdala, nucleus accumbens, hypothalamus, hippocampus, insula, cingulate cortex and orbitofrontal cortex. The potential of music to modulate activity in these structures has important implications for the use of music in the treatment of psychiatric and neurological disorders.

  13. Music-induced changes in functional cerebral asymmetries.

    Science.gov (United States)

    Hausmann, Markus; Hodgetts, Sophie; Eerola, Tuomas

    2016-04-01

    After decades of research, it remains unclear whether emotion lateralization occurs because one hemisphere is dominant for processing the emotional content of the stimuli, or whether emotional stimuli activate lateralised networks associated with the subjective emotional experience. By using emotion-induction procedures, we investigated the effect of listening to happy and sad music on three well-established lateralization tasks. In a prestudy, Mozart's piano sonata (K. 448) and Beethoven's Moonlight Sonata were rated as the most happy and sad excerpts, respectively. Participants listened to either one emotional excerpt, or sat in silence before completing an emotional chimeric faces task (Experiment 1), visual line bisection task (Experiment 2) and a dichotic listening task (Experiment 3 and 4). Listening to happy music resulted in a reduced right hemispheric bias in facial emotion recognition (Experiment 1) and visuospatial attention (Experiment 2) and increased left hemispheric bias in language lateralization (Experiments 3 and 4). Although Experiments 1-3 revealed an increased positive emotional state after listening to happy music, mediation analyses revealed that the effect on hemispheric asymmetries was not mediated by music-induced emotional changes. The direct effect of music listening on lateralization was investigated in Experiment 4 in which tempo of the happy excerpt was manipulated by controlling for other acoustic features. However, the results of Experiment 4 made it rather unlikely that tempo is the critical cue accounting for the effects. We conclude that listening to music can affect functional cerebral asymmetries in well-established emotional and cognitive laterality tasks, independent of music-induced changes in the emotion state. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Music Communicates Affects, Not Basic Emotions – A Constructionist Account of Attribution of Emotional Meanings to Music

    Directory of Open Access Journals (Sweden)

    Julian Cespedes-Guevara

    2018-02-01

    Full Text Available Basic Emotion theory has had a tremendous influence on the affective sciences, including music psychology, where most researchers have assumed that music expressivity is constrained to a limited set of basic emotions. Several scholars suggested that these constraints on musical expressivity are explained by the existence of a shared acoustic code to the expression of emotions in music and speech prosody. In this article we advocate for a shift from this focus on basic emotions to a constructionist account. This approach proposes that the phenomenon of perception of emotions in music arises from the interaction of music’s ability to express core affects and the influence of top-down and contextual information in the listener’s mind. We start by reviewing the problems with the concept of Basic Emotions, and the inconsistent evidence that supports it. We also demonstrate how decades of developmental and cross-cultural research on music and emotional speech have failed to produce convincing findings to conclude that music expressivity is built upon a set of biologically pre-determined basic emotions. We then examine the cue-emotion consistencies between music and speech, and show how they support a parsimonious explanation, where musical expressivity is grounded in two dimensions of core affect (arousal and valence). Next, we explain how the fact that listeners reliably identify basic emotions in music does not arise from the existence of categorical boundaries in the stimuli, but from processes that facilitate categorical perception, such as using stereotyped stimuli and close-ended response formats, psychological processes of construction of mental prototypes, and contextual information. Finally, we outline our proposal of a constructionist account of perception of emotions in music, and spell out the ways in which this approach is able to resolve past conflicting findings. We conclude by providing explicit pointers about the methodological

  15. Music Communicates Affects, Not Basic Emotions – A Constructionist Account of Attribution of Emotional Meanings to Music

    Science.gov (United States)

    Cespedes-Guevara, Julian; Eerola, Tuomas

    2018-01-01

    Basic Emotion theory has had a tremendous influence on the affective sciences, including music psychology, where most researchers have assumed that music expressivity is constrained to a limited set of basic emotions. Several scholars suggested that these constraints on musical expressivity are explained by the existence of a shared acoustic code to the expression of emotions in music and speech prosody. In this article we advocate for a shift from this focus on basic emotions to a constructionist account. This approach proposes that the phenomenon of perception of emotions in music arises from the interaction of music’s ability to express core affects and the influence of top-down and contextual information in the listener’s mind. We start by reviewing the problems with the concept of Basic Emotions, and the inconsistent evidence that supports it. We also demonstrate how decades of developmental and cross-cultural research on music and emotional speech have failed to produce convincing findings to conclude that music expressivity is built upon a set of biologically pre-determined basic emotions. We then examine the cue-emotion consistencies between music and speech, and show how they support a parsimonious explanation, where musical expressivity is grounded in two dimensions of core affect (arousal and valence). Next, we explain how the fact that listeners reliably identify basic emotions in music does not arise from the existence of categorical boundaries in the stimuli, but from processes that facilitate categorical perception, such as using stereotyped stimuli and close-ended response formats, psychological processes of construction of mental prototypes, and contextual information. Finally, we outline our proposal of a constructionist account of perception of emotions in music, and spell out the ways in which this approach is able to resolve past conflicting findings. We conclude by providing explicit pointers about the methodological choices that will be

  16. Are Stopped Strings Preferred in Sad Music?

    Directory of Open Access Journals (Sweden)

    David Huron

    2017-01-01

    Full Text Available String instruments may be played either with open strings (where the string vibrates between the bridge and a hard wooden nut) or with stopped strings (where the string vibrates between the bridge and a performer's finger pressed against the fingerboard). Compared with open strings, stopped strings permit the use of vibrato and exhibit a darker timbre. Inspired by research on the timbre of sad speech, we test whether there is a tendency to use stopped strings in nominally sad music. Specifically, we compare the proportion of potentially open-to-stopped strings in a sample of slow, minor-mode movements with matched major-mode movements. By way of illustration, a preliminary analysis of Samuel Barber's famous Adagio from his Opus 11 string quartet shows that the selected key (B-flat minor) provides the optimum key for minimizing open string tones. However, examination of a broader controlled sample of quartet movements by Haydn, Mozart and Beethoven failed to exhibit the conjectured relationship. Instead, major-mode movements were found to avoid possible open strings more than slow minor-mode movements.

  17. Cueing musical emotions: An empirical analysis of 24-piece sets by Bach and Chopin documents parallels with emotional speech

    Directory of Open Access Journals (Sweden)

    Matthew Poon

    2015-11-01

    Full Text Available Acoustic cues such as pitch height and timing are effective at communicating emotion in both music and speech. Numerous experiments altering musical passages have shown that higher and faster melodies generally sound happier than lower and slower melodies, findings consistent with corpus analyses of emotional speech. However, equivalent corpus analyses of complex time-varying cues in music are less common, due in part to the challenges of assembling an appropriate corpus. Here we describe a novel, score-based exploration of the use of pitch height and timing in a set of balanced major and minor key compositions. Our corpus contained all 24 Preludes and 24 Fugues from Bach’s Well-Tempered Clavier (book 1), as well as all 24 of Chopin’s Preludes for piano. These three sets are balanced with respect to both modality (major/minor) and key chroma (A, B, C, etc.). Consistent with predictions derived from speech, we found major-key (nominally happy) pieces to be two semitones higher in pitch height and 29% faster than minor-key (nominally sad) pieces. This demonstrates that our balanced corpus of major and minor key pieces uses low-level acoustic cues for emotion in a manner consistent with speech. A series of post-hoc analyses illustrate interesting trade-offs, with

  18. Colour Association with Music Is Mediated by Emotion: Evidence from an Experiment Using a CIE Lab Interface and Interviews.

    Science.gov (United States)

    Lindborg, PerMagnus; Friberg, Anders K

    2015-01-01

    Crossmodal associations may arise at neurological, perceptual, cognitive, or emotional levels of brain processing. Higher-level modal correspondences between musical timbre and visual colour have been previously investigated, though with limited sets of colour. We developed a novel response method that employs a tablet interface to navigate the CIE Lab colour space. The method was used in an experiment where 27 film music excerpts were presented to participants (n = 22) who continuously manipulated the colour and size of an on-screen patch to match the music. Analysis of the data replicated and extended earlier research, for example, that happy music was associated with yellow, music expressing anger with large red colour patches, and sad music with smaller patches towards dark blue. Correlation analysis suggested patterns of relationships between audio features and colour patch parameters. Using partial least squares regression, we tested models for predicting colour patch responses from audio features and ratings of perceived emotion in the music. Parsimonious models that included emotion robustly explained between 60% and 75% of the variation in each of the colour patch parameters, as measured by cross-validated R2. To illuminate the quantitative findings, we performed a content analysis of structured spoken interviews with the participants. This provided further evidence of a significant emotion mediation mechanism, whereby people tended to match colour association with the perceived emotion in the music. The mixed method approach of our study gives strong evidence that emotion can mediate crossmodal association between music and visual colour. The CIE Lab interface promises to be a useful tool in perceptual ratings of music and other sounds.
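
    A compact sketch of the prediction step described above, using partial least squares regression with cross-validated R² in scikit-learn; the synthetic predictor and response matrices stand in for the study's audio features, emotion ratings and CIE Lab colour-patch parameters, which are not reproduced here.

    # Sketch: predict colour-patch parameters (L*, a*, b*, size) from audio
    # features and perceived-emotion ratings with PLS regression, evaluated
    # by cross-validated R^2 per response dimension.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(2)
    n_excerpts = 27
    X = rng.normal(size=(n_excerpts, 8))     # placeholder features + ratings
    Y = rng.normal(size=(n_excerpts, 4))     # placeholder mean L*, a*, b*, size

    pls = PLSRegression(n_components=3)
    Y_hat = cross_val_predict(pls, X, Y, cv=5)
    for i, name in enumerate(["L*", "a*", "b*", "size"]):
        print(name, "cross-validated R^2 =", round(r2_score(Y[:, i], Y_hat[:, i]), 2))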

  19. Colour Association with Music Is Mediated by Emotion: Evidence from an Experiment Using a CIE Lab Interface and Interviews

    Science.gov (United States)

    Lindborg, PerMagnus; Friberg, Anders K.

    2015-01-01

    Crossmodal associations may arise at neurological, perceptual, cognitive, or emotional levels of brain processing. Higher-level modal correspondences between musical timbre and visual colour have been previously investigated, though with limited sets of colour. We developed a novel response method that employs a tablet interface to navigate the CIE Lab colour space. The method was used in an experiment where 27 film music excerpts were presented to participants (n = 22) who continuously manipulated the colour and size of an on-screen patch to match the music. Analysis of the data replicated and extended earlier research, for example, that happy music was associated with yellow, music expressing anger with large red colour patches, and sad music with smaller patches towards dark blue. Correlation analysis suggested patterns of relationships between audio features and colour patch parameters. Using partial least squares regression, we tested models for predicting colour patch responses from audio features and ratings of perceived emotion in the music. Parsimonious models that included emotion robustly explained between 60% and 75% of the variation in each of the colour patch parameters, as measured by cross-validated R 2. To illuminate the quantitative findings, we performed a content analysis of structured spoken interviews with the participants. This provided further evidence of a significant emotion mediation mechanism, whereby people tended to match colour association with the perceived emotion in the music. The mixed method approach of our study gives strong evidence that emotion can mediate crossmodal association between music and visual colour. The CIE Lab interface promises to be a useful tool in perceptual ratings of music and other sounds. PMID:26642050

  20. How music alters a kiss: superior temporal gyrus controls fusiform-amygdalar effective connectivity.

    Science.gov (United States)

    Pehrs, Corinna; Deserno, Lorenz; Bakels, Jan-Hendrik; Schlochtermeier, Lorna H; Kappelhoff, Hermann; Jacobs, Arthur M; Fritz, Thomas Hans; Koelsch, Stefan; Kuchinke, Lars

    2014-11-01

    While watching movies, the brain integrates the visual information and the musical soundtrack into a coherent percept. Multisensory integration can lead to emotion elicitation on which soundtrack valences may have a modulatory impact. Here, dynamic kissing scenes from romantic comedies were presented to 22 participants (13 females) during functional magnetic resonance imaging scanning. The kissing scenes were either accompanied by happy music, sad music or no music. Evidence from cross-modal studies motivated a predefined three-region network for multisensory integration of emotion, consisting of fusiform gyrus (FG), amygdala (AMY) and anterior superior temporal gyrus (aSTG). The interactions in this network were investigated using dynamic causal models of effective connectivity. This revealed bilinear modulations by happy and sad music with suppression effects on the connectivity from FG and AMY to aSTG. Non-linear dynamic causal modeling showed a suppressive gating effect of aSTG on fusiform-amygdalar connectivity. In conclusion, fusiform to amygdala coupling strength is modulated via feedback through aSTG as region for multisensory integration of emotional material. This mechanism was emotion-specific and more pronounced for sad music. Therefore, soundtrack valences may modulate emotion elicitation in movies by differentially changing preprocessed visual information to the amygdala. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  1. Evaluating Autonomic Parameters: The Role of Sleep Duration in Emotional Responses to Music

    Directory of Open Access Journals (Sweden)

    Atefeh Goshvarpour

    2016-02-01

    Full Text Available Objective: It has been recognized that sleep has an important effect on emotion processing. The aim of this study was to investigate the effect of the previous night's sleep duration on autonomic responses to musical stimuli in different emotional contexts. Method: A frequency-based measure of GSR, PR and ECG signals was examined in 35 healthy students in three groups: oversleeping, lack of sleep and normal sleep. Results: The results of this study revealed that, regardless of the emotional context of the musical stimuli (happy, relaxing, fearful, and sad), there was an increase in the maximum power of GSR, ECG and PR during the music period compared to the rest period in all three groups. In addition, the highest values of these measures were obtained while the participants listened to relaxing music. Statistical analysis of the extracted features between each pair of emotional states revealed that the most significant differences were attained for the ECG signals. These differences were more obvious in the participants with normal sleep (p < 10⁻¹⁸). Higher values of the indices were observed when comparing the long sleep duration with the normal one. Conclusion: There was a strong relation between emotion and sleep duration, and this association can be observed by means of the ECG signals.
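
    A minimal sketch of a frequency-domain comparison like the one described above, assuming one physiological segment (e.g. GSR) recorded at rest and one during music for each subject; the sampling rate, synthetic signals and the paired t-test on maximum Welch power are illustrative assumptions, not the authors' exact analysis.

    # Sketch: compare maximum spectral power of a physiological signal
    # between rest and music periods across subjects.
    import numpy as np
    from scipy.signal import welch
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(3)
    fs = 128                                    # hypothetical sampling rate (Hz)
    n_subjects, n_samples = 35, fs * 60         # one minute per condition

    def max_power(segment, fs):
        freqs, psd = welch(segment, fs=fs, nperseg=fs * 4)
        return psd.max()

    rest = rng.normal(size=(n_subjects, n_samples))
    music = rng.normal(size=(n_subjects, n_samples)) * 1.2   # slightly larger

    p_rest = np.array([max_power(s, fs) for s in rest])
    p_music = np.array([max_power(s, fs) for s in music])
    t, p = ttest_rel(p_music, p_rest)
    print("mean max power (rest, music):", p_rest.mean().round(3), p_music.mean().round(3))
    print("paired t-test: t = %.2f, p = %.4f" % (t, p))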

  2. Modeling listeners' emotional response to music.

    Science.gov (United States)

    Eerola, Tuomas

    2012-10-01

    An overview of the computational prediction of emotional responses to music is presented. Communication of emotions by music has received a great deal of attention during the last years and a large number of empirical studies have described the role of individual features (tempo, mode, articulation, timbre) in predicting the emotions suggested or invoked by the music. However, unlike the present work, relatively few studies have attempted to model continua of expressed emotions using a variety of musical features from audio-based representations in a correlation design. The construction of the computational model is divided into four separate phases, with a different focus for evaluation. These phases include the theoretical selection of relevant features, empirical assessment of feature validity, actual feature selection, and overall evaluation of the model. Existing research on music and emotions and extraction of musical features is reviewed in terms of these criteria. Examples drawn from recent studies of emotions within the context of film soundtracks are used to demonstrate each phase in the construction of the model. These models are able to explain the dominant part of the listeners' self-reports of the emotions expressed by music and the models show potential to generalize over different genres within Western music. Possible applications of the computational models of emotions are discussed. Copyright © 2012 Cognitive Science Society, Inc.
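
    To make the general modelling pipeline concrete, here is a hedged sketch of the approach (extract audio features per excerpt, fit a regression against listener ratings, evaluate by cross-validation) using librosa and scikit-learn; the feature choice, file names and rating values are assumptions for illustration, not the model described in the paper.

    # Sketch: extract a few audio features per excerpt and fit a
    # cross-validated regression against mean ratings of expressed valence.
    import numpy as np
    import librosa
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    def extract_features(path):
        y, sr = librosa.load(path, duration=30.0)
        tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
        rms = librosa.feature.rms(y=y).mean()
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=5).mean(axis=1)
        return np.concatenate([[float(tempo), centroid, rms], mfcc])

    excerpt_paths = [f"excerpt{i:02d}.wav" for i in range(1, 16)]   # hypothetical files
    valence_ratings = np.linspace(-1.0, 1.0, len(excerpt_paths))    # hypothetical targets

    X = np.vstack([extract_features(p) for p in excerpt_paths])
    scores = cross_val_score(Ridge(alpha=1.0), X, valence_ratings, cv=5, scoring="r2")
    print("mean cross-validated R^2:", round(scores.mean(), 2))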

  3. Unforgettable film music: The role of emotion in episodic long-term memory for music

    OpenAIRE

    Eschrich, Susann; Münte, Thomas F; Altenmüller, Eckart O

    2008-01-01

    Background: Specific pieces of music can elicit strong emotions in listeners and, possibly in connection with these emotions, can be remembered even years later. However, episodic memory for emotional music compared with less emotional music has not yet been examined. We investigated whether emotional music is remembered better than less emotional music. Also, we examined the influence of musical structure on memory performance. Results: Recognition of 40 musical excerpts was investiga...

  4. Group Rumination: Social Interactions Around Music in People with Depression

    Science.gov (United States)

    Garrido, Sandra; Eerola, Tuomas; McFerran, Katrina

    2017-01-01

    One of the most important roles that music serves in human society is the promotion of social relationships and group cohesion. In general, emotional experiences tend to be amplified in group settings through processes of social feedback. However, previous research has established that listening to sad music can intensify negative emotions in people with tendencies to rumination and depression. This study therefore investigated the phenomenon of ruminating with music, and the question of whether listening to sad music in group settings provides social benefits for emotionally vulnerable listeners, or whether it further exaggerates depressive tendencies. Participants recruited via online depression groups and mental health websites were surveyed as to music listening habits. Results revealed that people with depression were more likely to engage in “group rumination” using music, and that this behavior could be partially explained by a general tendency to ruminate using music. Both affective states and coping styles were found to be related to the affective outcomes of group interactions around music. These findings go some way toward clarifying the situations in which group interactions around music are able to provide important social benefits for those involved, and situations in which negative emotions can be amplified by the group context. PMID:28421014

  5. Cueing musical emotions: An empirical analysis of 24-piece sets by Bach and Chopin documents parallels with emotional speech.

    Science.gov (United States)

    Poon, Matthew; Schutz, Michael

    2015-01-01

    Acoustic cues such as pitch height and timing are effective at communicating emotion in both music and speech. Numerous experiments altering musical passages have shown that higher and faster melodies generally sound "happier" than lower and slower melodies, findings consistent with corpus analyses of emotional speech. However, equivalent corpus analyses of complex time-varying cues in music are less common, due in part to the challenges of assembling an appropriate corpus. Here, we describe a novel, score-based exploration of the use of pitch height and timing in a set of "balanced" major and minor key compositions. Our analysis included all 24 Preludes and 24 Fugues from Bach's Well-Tempered Clavier (book 1), as well as all 24 of Chopin's Preludes for piano. These three sets are balanced with respect to both modality (major/minor) and key chroma ("A," "B," "C," etc.). Consistent with predictions derived from speech, we found major-key (nominally "happy") pieces to be two semitones higher in pitch height and 29% faster than minor-key (nominally "sad") pieces. This demonstrates that our balanced corpus of major and minor key pieces uses low-level acoustic cues for emotion in a manner consistent with speech. A series of post hoc analyses illustrate interesting trade-offs, with sets featuring greater emphasis on timing distinctions between modalities exhibiting the least pitch distinction, and vice-versa. We discuss these findings in the broader context of speech-music research, as well as recent scholarship exploring the historical evolution of cue use in Western music.
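
    A rough sketch of how such a score-based comparison could be set up with music21, computing mean pitch height (in MIDI semitones) and a simple note-onset-density proxy for speed, grouped by mode; the file list and the density measure are assumptions rather than the authors' exact method.

    # Sketch: compare mean pitch height and note density of major- vs
    # minor-key scores. Paths are hypothetical MusicXML files of corpus pieces.
    from statistics import mean
    from music21 import converter

    paths = ["bach_wtc1_prelude01.xml", "chopin_prelude_op28_no4.xml"]  # hypothetical

    stats = {"major": [], "minor": []}
    for path in paths:
        score = converter.parse(path)
        mode = score.analyze("key").mode            # 'major' or 'minor'
        notes = list(score.flatten().notes)
        pitches = [p.midi for n in notes for p in n.pitches]   # chords expand
        quarter_length = float(score.duration.quarterLength)
        stats[mode].append((mean(pitches), len(notes) / quarter_length))

    for mode, values in stats.items():
        if values:
            heights, densities = zip(*values)
            print(mode, "mean pitch:", round(mean(heights), 1),
                  "onsets per quarter note:", round(mean(densities), 2))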

  6. Predicting the emotions expressed in music

    DEFF Research Database (Denmark)

    Madsen, Jens

    With the ever-growing popularity and availability of digital music through streaming services and digital downloads, making sense of the millions of songs is ever more pertinent. However, the traditional approach to creating music systems has treated songs like items in a store, like books... and movies. However, music is special, having origins in a number of evolutionary adaptations. The fundamental needs and goals behind a user's use of music were investigated to create the next generation of music systems. That people listen to music to regulate their mood and emotions was found to be the most important... fundamental reason. (Mis)matching people's mood with the emotions expressed in music was found to be an essential underlying mechanism that people use to regulate their emotions. This formed the basis and overall goal of the thesis: to investigate how to create a predictive model of emotions expressed in music...

  7. Unforgettable film music: the role of emotion in episodic long-term memory for music.

    Science.gov (United States)

    Eschrich, Susann; Münte, Thomas F; Altenmüller, Eckart O

    2008-05-28

    Specific pieces of music can elicit strong emotions in listeners and, possibly in connection with these emotions, can be remembered even years later. However, episodic memory for emotional music compared with less emotional music has not yet been examined. We investigated whether emotional music is remembered better than less emotional music. Also, we examined the influence of musical structure on memory performance. Recognition of 40 musical excerpts was investigated as a function of arousal, valence, and emotional intensity ratings of the music. In the first session the participants judged valence and arousal of the musical pieces. One week later, participants listened to the 40 old and 40 new musical excerpts randomly interspersed and were asked to make an old/new decision as well as to indicate arousal and valence of the pieces. Musical pieces that were rated as very positive were recognized significantly better. Musical excerpts rated as very positive are remembered better. Valence seems to be an important modulator of episodic long-term memory for music. Evidently, strong emotions related to the musical experience facilitate memory formation and retrieval.

  8. Enhancing emotional experiences to dance through music: the role of valence and arousal in the cross-modal bias

    Directory of Open Access Journals (Sweden)

    Julia F. Christensen

    2014-10-01

    Full Text Available It is well established that emotional responses to stimuli presented to one perceptive modality (e.g. visual) are modulated by the concurrent presentation of affective information to another modality (e.g. auditory) – an effect known as the cross-modal bias. However, the affective mechanisms mediating this effect are still not fully understood. It remains unclear what role different dimensions of stimulus valence and arousal play in mediating the effect, and to what extent cross-modal influences impact not only our perception and conscious affective experiences, but also our psychophysiological emotional response. We addressed these issues by measuring participants’ subjective emotion ratings and their Galvanic Skin Responses in a cross-modal affect perception paradigm employing videos of ballet dance movements and instrumental classical music as the stimuli. We chose these stimuli to explore the cross-modal bias in a context of stimuli (ballet dance movements) that most participants would have relatively little prior experience with. Results showed (i) that the cross-modal bias was more pronounced for sad than for happy movements, whereas it was equivalent when contrasting high vs. low arousal movements, and (ii) that movement valence did not modulate participants’ GSR, while movement arousal did, such that GSR was potentiated in the case of low arousal movements with sad music and when high arousal movements were paired with happy music. Results are discussed in the context of the cross-modal affect perception literature and with regards to implications for the art community.

  9. Enhancing emotional experiences to dance through music: the role of valence and arousal in the cross-modal bias.

    Science.gov (United States)

    Christensen, Julia F; Gaigg, Sebastian B; Gomila, Antoni; Oke, Peter; Calvo-Merino, Beatriz

    2014-01-01

    It is well established that emotional responses to stimuli presented to one perceptive modality (e.g., visual) are modulated by the concurrent presentation of affective information to another modality (e.g., auditory)-an effect known as the cross-modal bias. However, the affective mechanisms mediating this effect are still not fully understood. It remains unclear what role different dimensions of stimulus valence and arousal play in mediating the effect, and to what extent cross-modal influences impact not only our perception and conscious affective experiences, but also our psychophysiological emotional response. We addressed these issues by measuring participants' subjective emotion ratings and their Galvanic Skin Responses (GSR) in a cross-modal affect perception paradigm employing videos of ballet dance movements and instrumental classical music as the stimuli. We chose these stimuli to explore the cross-modal bias in a context of stimuli (ballet dance movements) that most participants would have relatively little prior experience with. Results showed (i) that the cross-modal bias was more pronounced for sad than for happy movements, whereas it was equivalent when contrasting high vs. low arousal movements; and (ii) that movement valence did not modulate participants' GSR, while movement arousal did, such that GSR was potentiated in the case of low arousal movements with sad music and when high arousal movements were paired with happy music. Results are discussed in the context of the affective dimension of neuroentrainment and with regards to implications for the art community.

  10. The music of your emotions: neural substrates involved in detection of emotional correspondence between auditory and visual music actions.

    Directory of Open Access Journals (Sweden)

    Karin Petrini

    Full Text Available In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound), as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.

  11. [Exploration on Electroencephalogram Mechanism Differences of Negative Emotions Induced by Disgusted and Sad Situation Images].

    Science.gov (United States)

    Wang, Xin; Jin, Jingna; Li, Song; Liu, Zhipeng; Yin, Tao

    2015-12-01

    Evolutionary psychology holds that negative situations may threaten survival, trigger avoidance motives, and impair bodily function and psychological well-being. Both disgusted and sad situations can induce negative emotions. However, differences between the two types of situation in attention capture and emotion cognition during emotion induction are still not well understood. In the present study, typical disgusted and sad situation images were used to induce the two negative emotions, and 15 young students (7 males and 8 females, aged 27 ± 3 years) were recruited for the experiments. A 32-lead electroencephalogram was recorded while the subjects viewed the situation images, and event-related potentials (ERP) were obtained from all leads for analysis. Paired-sample t tests were carried out on the ERP signals induced by disgusted and sad situation images to identify time windows with statistically significant differences between the two signals. Root-mean-square deviations between the two ERP signals within each time window were calculated, and brain topographic maps based on these deviations were drawn to display the spatial distribution of the differences. Results showed that differences between the ERP signals induced by disgusted and sad situation images were mainly manifested in an early window T1 (120-450 ms) and a later window T2 (800-1,000 ms). During T1, the occipital lobe, reflecting attention capture, was activated by both disgusted and sad situation images, but the prefrontal cortex, reflecting emotional appraisal, was activated only by disgusted situation images. During T2, the prefrontal cortex was activated by both disgusted and sad situation images, whereas the parietal lobe was activated only by disgusted situation images, indicating stronger emotional perception. These results help to deepen understanding of negative emotion and to explore the deeper cognitive neuroscience mechanisms of negative emotions.
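
    The windowed comparison described above can be illustrated with a short Python sketch. All arrays, sizes, and the sampling rate below are simulated placeholders rather than the study's data; the sketch only demonstrates the point-by-point paired t-test and the root-mean-square deviation within the two reported windows.

```python
# Sketch of a windowed ERP comparison; data and sampling rate are assumed.
import numpy as np
from scipy.stats import ttest_rel

fs = 500                                  # assumed sampling rate (Hz)
n_subjects, n_samples = 15, 512           # 512 samples ~ 1024 ms epoch
rng = np.random.default_rng(0)
erp_disgust = rng.normal(size=(n_subjects, n_samples))   # placeholder ERPs
erp_sad = rng.normal(size=(n_subjects, n_samples))

# Point-by-point paired t-test across subjects to locate time points
# where the two ERP signals differ significantly.
t_vals, p_vals = ttest_rel(erp_disgust, erp_sad, axis=0)
sig_mask = p_vals < 0.05

def window_rms_difference(a, b, t_start_ms, t_end_ms):
    """Root-mean-square deviation between two grand-average ERPs
    within a latency window given in milliseconds."""
    i0, i1 = int(t_start_ms / 1000 * fs), int(t_end_ms / 1000 * fs)
    diff = a.mean(axis=0)[i0:i1] - b.mean(axis=0)[i0:i1]
    return np.sqrt(np.mean(diff ** 2))

# The two windows reported in the abstract.
rms_t1 = window_rms_difference(erp_disgust, erp_sad, 120, 450)
rms_t2 = window_rms_difference(erp_disgust, erp_sad, 800, 1000)
print(rms_t1, rms_t2, int(sig_mask.sum()))
```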

  12. Music-color associations are mediated by emotion.

    Science.gov (United States)

    Palmer, Stephen E; Schloss, Karen B; Xu, Zoe; Prado-León, Lilia R

    2013-05-28

    Experimental evidence demonstrates robust cross-modal matches between music and colors that are mediated by emotional associations. US and Mexican participants chose colors that were most/least consistent with 18 selections of classical orchestral music by Bach, Mozart, and Brahms. In both cultures, faster music in the major mode produced color choices that were more saturated, lighter, and yellower whereas slower, minor music produced the opposite pattern (choices that were desaturated, darker, and bluer). There were strong correlations (0.89 or higher) between the emotional associations of the music and those of the colors chosen to go with the music, supporting an emotional mediation hypothesis in both cultures. Additional experiments showed similarly robust cross-modal matches from emotionally expressive faces to colors and from music to emotionally expressive faces. These results provide further support that music-to-color associations are mediated by common emotional associations.

  13. Music for a Brighter World: Brightness Judgment Bias by Musical Emotion.

    Science.gov (United States)

    Bhattacharya, Joydeep; Lindsen, Job P

    2016-01-01

    A prevalent conceptual metaphor is the association of the concepts of good and evil with brightness and darkness, respectively. Music cognition, like metaphor, is possibly embodied, yet no study has addressed the question whether musical emotion can modulate brightness judgment in a metaphor consistent fashion. In three separate experiments, participants judged the brightness of a grey square that was presented after a short excerpt of emotional music. The results of Experiment 1 showed that short musical excerpts are effective emotional primes that cross-modally influence brightness judgment of visual stimuli. Grey squares were consistently judged as brighter after listening to music with a positive valence, as compared to music with a negative valence. The results of Experiment 2 revealed that the bias in brightness judgment does not require an active evaluation of the emotional content of the music. By applying a different experimental procedure in Experiment 3, we showed that this brightness judgment bias is indeed a robust effect. Altogether, our findings demonstrate a powerful role of musical emotion in biasing brightness judgment and that this bias is aligned with the metaphor viewpoint.

  14. Unforgettable film music: The role of emotion in episodic long-term memory for music

    Science.gov (United States)

    Eschrich, Susann; Münte, Thomas F; Altenmüller, Eckart O

    2008-01-01

    Background Specific pieces of music can elicit strong emotions in listeners and, possibly in connection with these emotions, can be remembered even years later. However, episodic memory for emotional music compared with less emotional music has not yet been examined. We investigated whether emotional music is remembered better than less emotional music. Also, we examined the influence of musical structure on memory performance. Results Recognition of 40 musical excerpts was investigated as a function of arousal, valence, and emotional intensity ratings of the music. In the first session the participants judged valence and arousal of the musical pieces. One week later, participants listened to the 40 old and 40 new musical excerpts randomly interspersed and were asked to make an old/new decision as well as to indicate arousal and valence of the pieces. Musical pieces that were rated as very positive were recognized significantly better. Conclusion Musical excerpts rated as very positive are remembered better. Valence seems to be an important modulator of episodic long-term memory for music. Evidently, strong emotions related to the musical experience facilitate memory formation and retrieval. PMID:18505596

  15. Unforgettable film music: The role of emotion in episodic long-term memory for music

    Directory of Open Access Journals (Sweden)

    Altenmüller Eckart O

    2008-05-01

    Full Text Available Abstract Background Specific pieces of music can elicit strong emotions in listeners and, possibly in connection with these emotions, can be remembered even years later. However, episodic memory for emotional music compared with less emotional music has not yet been examined. We investigated whether emotional music is remembered better than less emotional music. Also, we examined the influence of musical structure on memory performance. Results Recognition of 40 musical excerpts was investigated as a function of arousal, valence, and emotional intensity ratings of the music. In the first session the participants judged valence and arousal of the musical pieces. One week later, participants listened to the 40 old and 40 new musical excerpts randomly interspersed and were asked to make an old/new decision as well as to indicate arousal and valence of the pieces. Musical pieces that were rated as very positive were recognized significantly better. Conclusion Musical excerpts rated as very positive are remembered better. Valence seems to be an important modulator of episodic long-term memory for music. Evidently, strong emotions related to the musical experience facilitate memory formation and retrieval.

  16. Evaluating music emotion recognition

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    A fundamental problem with nearly all work in music genre recognition (MGR) is that evaluation lacks validity with respect to the principal goals of MGR. This problem also occurs in the evaluation of music emotion recognition (MER). Standard approaches to evaluation, though easy to implement, do not reliably differentiate between recognizing genre or emotion from music, or by virtue of confounding factors in signals (e.g., equalization). We demonstrate such problems for evaluating an MER system, and conclude with recommendations.
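
    The validity problem described above can be illustrated with a toy simulation: if a nuisance property of the signals (e.g., an equalization or loudness offset) happens to covary with the emotion labels, a classifier can score well without using any genuinely emotional content. The sketch below is an illustrative assumption, not the authors' experiment.

```python
# Toy demonstration of a confound inflating apparent emotion-recognition accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 400
labels = rng.integers(0, 2, n)                 # e.g., "happy" vs "sad" excerpts
emotion_content = rng.normal(size=(n, 10))     # features carrying no label information
confound = labels * 1.5 + rng.normal(scale=0.5, size=n)  # e.g., production loudness

X_confounded = np.column_stack([emotion_content, confound])
clf = LogisticRegression(max_iter=1000)

# Accuracy is high with the confound present and near chance without it,
# even though the "emotional" features are pure noise.
acc_confounded = cross_val_score(clf, X_confounded, labels, cv=5).mean()
acc_clean = cross_val_score(clf, emotion_content, labels, cv=5).mean()
print(f"with confound: {acc_confounded:.2f}, without: {acc_clean:.2f}")
```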

  17. LSD enhances the emotional response to music.

    Science.gov (United States)

    Kaelen, M; Barrett, F S; Roseman, L; Lorenz, R; Family, N; Bolstridge, M; Curran, H V; Feilding, A; Nutt, D J; Carhart-Harris, R L

    2015-10-01

    There is renewed interest in the therapeutic potential of psychedelic drugs such as lysergic acid diethylamide (LSD). LSD was used extensively in the 1950s and 1960s as an adjunct in psychotherapy, reportedly enhancing emotionality. Music is an effective tool to evoke and study emotion and is considered an important element in psychedelic-assisted psychotherapy; however, the hypothesis that psychedelics enhance the emotional response to music has yet to be investigated in a modern placebo-controlled study. The present study sought to test the hypothesis that music-evoked emotions are enhanced under LSD. Ten healthy volunteers listened to five different tracks of instrumental music during each of two study days, a placebo day followed by an LSD day, separated by 5-7 days. Subjective ratings were completed after each music track and included a visual analogue scale (VAS) and the nine-item Geneva Emotional Music Scale (GEMS-9). Results demonstrated that the emotional response to music is enhanced by LSD, especially the emotions "wonder", "transcendence", "power" and "tenderness". These findings reinforce the long-held assumption that psychedelics enhance music-evoked emotion, and provide tentative and indirect support for the notion that this effect can be harnessed in the context of psychedelic-assisted psychotherapy. Further research is required to test this link directly.

  18. Sad benefit in face working memory: an emotional bias of melancholic depression.

    Science.gov (United States)

    Linden, Stefanie C; Jackson, Margaret C; Subramanian, Leena; Healy, David; Linden, David E J

    2011-12-01

    Emotion biases feature prominently in cognitive theories of depression and are a focus of psychological interventions. However, there is presently no stable neurocognitive marker of altered emotion-cognition interactions in depression. One reason may be the heterogeneity of major depressive disorder. Our aim in the present study was to find an emotional bias that differentiates patients with melancholic depression from controls, and patients with melancholic from those with non-melancholic depression. We used a working memory paradigm for emotional faces, where two faces with angry, happy, neutral, sad or fearful expression had to be retained over one second. Twenty patients with melancholic depression, 20 age-, education- and gender-matched control participants and 20 patients with non-melancholic depression participated in the study. We analysed performance on the working memory task using signal detection measures. We found an interaction between group and emotion on working memory performance that was driven by the higher performance for sad faces compared to other categories in the melancholic group. We computed a measure of "sad benefit", which distinguished melancholic and non-melancholic patients with good sensitivity and specificity. However, replication studies and formal discriminant analysis will be needed in order to assess whether emotion bias in working memory may become a useful diagnostic tool to distinguish these two syndromes. Copyright © 2011 Elsevier B.V. All rights reserved.
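
    A minimal sketch of the signal-detection scoring described above, assuming hypothetical hit/false-alarm counts per emotion category; the "sad benefit" index shown here (d' for sad faces minus the mean d' of the other expressions) is an illustrative reconstruction, not necessarily the authors' exact formula.

```python
# d-prime per emotion category and an illustrative "sad benefit" index.
from statistics import NormalDist

z = NormalDist().inv_cdf

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts (hits, misses, false alarms, correct rejections)
# for one participant, per facial-expression category.
counts = {
    "sad":     (28, 4, 3, 29),
    "happy":   (22, 10, 6, 26),
    "angry":   (21, 11, 8, 24),
    "fearful": (20, 12, 7, 25),
    "neutral": (23, 9, 5, 27),
}
d = {emo: d_prime(*c) for emo, c in counts.items()}
sad_benefit = d["sad"] - sum(v for k, v in d.items() if k != "sad") / (len(d) - 1)
print(d, round(sad_benefit, 2))
```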

  19. Music for a Brighter World: Brightness Judgment Bias by Musical Emotion.

    Directory of Open Access Journals (Sweden)

    Joydeep Bhattacharya

    Full Text Available A prevalent conceptual metaphor is the association of the concepts of good and evil with brightness and darkness, respectively. Music cognition, like metaphor, is possibly embodied, yet no study has addressed the question whether musical emotion can modulate brightness judgment in a metaphor consistent fashion. In three separate experiments, participants judged the brightness of a grey square that was presented after a short excerpt of emotional music. The results of Experiment 1 showed that short musical excerpts are effective emotional primes that cross-modally influence brightness judgment of visual stimuli. Grey squares were consistently judged as brighter after listening to music with a positive valence, as compared to music with a negative valence. The results of Experiment 2 revealed that the bias in brightness judgment does not require an active evaluation of the emotional content of the music. By applying a different experimental procedure in Experiment 3, we showed that this brightness judgment bias is indeed a robust effect. Altogether, our findings demonstrate a powerful role of musical emotion in biasing brightness judgment and that this bias is aligned with the metaphor viewpoint.

  20. Central auditory processing. Are the emotional perceptions of those listening to classical music inherent in the composition or acquired by the listeners?

    Science.gov (United States)

    Goycoolea, Marcos; Levy, Raquel; Ramírez, Carlos

    2013-04-01

    There is seemingly some inherent component in selected musical compositions that elicits specific emotional perceptions, feelings, and physical conduct. The purpose of the study was to determine whether the emotional perceptions of those listening to classical music are inherent in the composition or acquired by the listeners. Fifteen kindergarten students, aged 5 years, from three different sociocultural groups were evaluated. They were exposed to portions of five purposefully selected classical compositions and asked to describe their emotions when listening to these musical pieces. All were instrumental compositions without human voices or spoken language. In addition, the compositions were played to listeners of an age at which they were capable of describing their perceptions and who supposedly had no significant previous experience of classical music. Regardless of their sociocultural background, the children in the three groups consistently identified similar emotions (e.g. fear, happiness, sadness), feelings (e.g. love), and mental images (e.g. giants or dangerous animals walking) when listening to specific compositions. In addition, the musical compositions elicited physical responses that were reflected in the children's bodily expressions. Although the sensations were similar, the way of expressing them differed according to the children's backgrounds.

  1. Emotion regulation through listening to music in everyday situations.

    Science.gov (United States)

    Thoma, Myriam V; Ryf, Stefan; Mohiyeddini, Changiz; Ehlert, Ulrike; Nater, Urs M

    2012-01-01

    Music is a stimulus capable of triggering an array of basic and complex emotions. We investigated whether and how individuals employ music to induce specific emotional states in everyday situations for the purpose of emotion regulation. Furthermore, we wanted to examine whether specific emotion-regulation styles influence music selection in specific situations. Participants indicated how likely it would be that they would want to listen to various pieces of music (which are known to elicit specific emotions) in various emotional situations. Data analyses by means of non-metric multidimensional scaling revealed a clear preference for pieces of music that were emotionally congruent with an emotional situation. In addition, we found that specific emotion-regulation styles might influence the selection of pieces of music characterised by specific emotions. Our findings demonstrate emotion-congruent music selection and highlight the important role of specific emotion-regulation styles in the selection of music in everyday situations.
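
    As a rough illustration of the scaling step described above, the sketch below derives pairwise dissimilarities between music pieces from a simulated situations-by-pieces likelihood matrix and fits a non-metric MDS solution; all names and data are assumptions rather than the study's material.

```python
# Non-metric MDS on dissimilarities between music pieces' likelihood profiles.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
# 12 hypothetical emotional situations x 20 hypothetical music pieces,
# ratings of how likely one would be to listen to each piece (1-7).
ratings = rng.integers(1, 8, size=(12, 20)).astype(float)

# Treat each piece as a point described by its likelihood profile across
# situations, and derive pairwise dissimilarities between pieces.
dissim = squareform(pdist(ratings.T, metric="euclidean"))

mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
coords = mds.fit_transform(dissim)   # 2-D configuration of the 20 pieces
print(coords.shape, round(mds.stress_, 3))
```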

  2. Selective preservation of the beat in apperceptive music agnosia: a case study.

    Science.gov (United States)

    Baird, Amee D; Walker, David G; Biggs, Vivien; Robinson, Gail A

    2014-04-01

    Music perception involves processing of melodic, temporal and emotional dimensions that have been found to dissociate in healthy individuals and after brain injury. Two components of the temporal dimension have been distinguished, namely rhythm and metre. We describe an 18-year-old male musician 'JM' who showed apperceptive music agnosia with selectively preserved metre perception, and impaired recognition of sad and peaceful music relative to age- and music-experience-matched controls, after resection of a right temporoparietal tumour. Two months post-surgery JM underwent a comprehensive neuropsychological evaluation including assessment of his music perception abilities using the Montreal Battery for Evaluation of Amusia (MBEA; Peretz, Champod, & Hyde, 2003). He also completed several experimental tasks to explore his ability to recognise famous songs and melodies, emotions portrayed by music, and a broader range of environmental sounds. Five age-, gender-, education- and musical experience-matched controls were administered the same experimental tasks. JM showed selective preservation of metre perception, with impaired performance compared to controls, scoring below the 5% cut-off on all MBEA subtests except the metric condition. He could identify his favourite songs and environmental sounds. He showed impaired recognition of sad and peaceful emotions portrayed in music relative to controls but intact ability to identify happy and scary music. This case study contributes to the scarce literature documenting a dissociation between rhythmic and metric processing, and the rare observation of selectively preserved metric interpretation in the context of apperceptive music agnosia. It supports the notion that the anterior portion of the superior temporal gyrus (STG) plays a role in metric processing and provides the novel observation that selectively preserved metre is sufficient to identify happy and scary, but not sad or peaceful, emotions portrayed in music.

  3. Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias.

    Science.gov (United States)

    Invitto, Sara; Calcagnì, Antonio; Mignozzi, Arianna; Scardino, Rosanna; Piraino, Giulia; Turchi, Daniele; De Feudis, Irio; Brunetti, Antonio; Bevilacqua, Vitoantonio; de Tommaso, Marina

    2017-01-01

    Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition and how potential modulation of this processing induced by music could be influenced by the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 Event-Related Potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings of the study were that musicians' behavioral responses and N170 were more affected by the emotional value of the music administered in the emotional go/no-go task, and that this bias was also apparent in responses to the non-target emotional faces. This suggests that emotional information coming from multiple sensory channels activates a crossmodal integration process that depends upon the stimuli's emotional salience and the listener's appraisal.
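
    The mixed-model step described above can be sketched as follows, assuming a simulated trial-level data set with musicianship and rated emotional salience of the music as fixed effects and a random intercept per participant; the column names, sizes, and effect sizes are illustrative, not the authors' data.

```python
# Linear mixed-effects sketch for N170 amplitude (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_trials = 24, 40
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "musician": np.repeat(rng.integers(0, 2, n_subj), n_trials),
    "salience": rng.normal(size=n_subj * n_trials),  # rated emotional salience
})
# Simulated N170 amplitude (microvolts): more negative with higher
# salience, but only for musicians, mirroring the reported pattern.
df["n170"] = (-4 - 1.5 * df["salience"] * df["musician"]
              + rng.normal(scale=2, size=len(df)))

# Fixed effects: musicianship x salience; random intercept per participant.
model = smf.mixedlm("n170 ~ musician * salience", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```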

  4. Musical Empathy, Emotional Co-Constitution, and the “Musical Other”

    Directory of Open Access Journals (Sweden)

    Deniz Peters

    2015-09-01

    Full Text Available Musical experience can confront us with emotions that are not currently ours. We might remain unaffected by them, or be affected: retreat from them in avoidance, or embrace them and experience them as ours. This suggests that they are another's. Whose are they? Do we arrive at them through empathy, turning our interest to the music as we do to others in an interpersonal encounter? In addressing these questions, I differentiate between musical and social empathy, rejecting the idea that the emotions arise as a direct consequence of empathizing with composers or performers. I argue that musical perception is doubly active: bodily knowledge can extend auditory perception cross-modally, which, in turn, can orient a bodily hermeneutic. Musical passages thus acquire adverbial expressivity, an expressivity which, as I discuss, is co-constituted, and engenders a "musical other." This leads me to a reinterpretation of the musical persona and to consider a dialectic between social and musical empathy that I think plays a central role in the individuation of shared emotion in musical experience. Musical empathy, then, occurs via a combination of self-involvement and self-effacement—leading us first into, and then perhaps beyond, ourselves.

  5. Why is happy-sad more difficult? Focal emotional information impairs inhibitory control in children and adults.

    Science.gov (United States)

    Kramer, Hannah J; Lagattuta, Kristin Hansen; Sayfan, Liat

    2015-02-01

    This study compared the relative difficulty of the happy-sad inhibitory control task (say "happy" for the sad face and "sad" for the happy face) against other card tasks that varied by the presence and type (focal vs. peripheral; negative vs. positive) of emotional information in a sample of 4- to 11-year-olds and adults (N = 264). Participants also completed parallel "name games" (direct labeling). All age groups made more errors and took longer to respond to happy-sad compared to other versions, and the relative difficulty of happy-sad increased with age. The happy-sad name game even posed a greater challenge than some opposite games. These data provide insight into the impact of emotions on cognitive processing across a wide age range. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  6. Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias

    Directory of Open Access Journals (Sweden)

    Sara Invitto

    2017-08-01

    Full Text Available Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition and how potential modulation of this processing induced by music could be influenced by the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 Event-Related Potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings of the study were that musicians' behavioral responses and N170 were more affected by the emotional value of the music administered in the emotional go/no-go task, and that this bias was also apparent in responses to the non-target emotional faces. This suggests that emotional information coming from multiple sensory channels activates a crossmodal integration process that depends upon the stimuli's emotional salience and the listener's appraisal.

  7. Evaluating music emotion recognition:Lessons from music genre recognition?

    OpenAIRE

    Sturm, Bob L.

    2013-01-01

    A fundamental problem with nearly all work in music genre recognition (MGR) is that evaluation lacks validity with respect to the principal goals of MGR. This problem also occurs in the evaluation of music emotion recognition (MER). Standard approaches to evaluation, though easy to implement, do not reliably differentiate between recognizing genre or emotion from music, or by virtue of confounding factors in signals (e.g., equalization). We demonstrate such problems for evaluating an MER system, and conclude with recommendations.

  8. Relaxing music counters heightened consolidation of emotional memory.

    Science.gov (United States)

    Rickard, Nikki S; Wong, Wendy Wing; Velik, Lauren

    2012-02-01

    Emotional events tend to be retained more strongly than other everyday occurrences, a phenomenon partially regulated by the neuromodulatory effects of arousal. Two experiments demonstrated the use of relaxing music as a means of reducing arousal levels, thereby challenging heightened long-term recall of an emotional story. In Experiment 1, participants (N=84) viewed a slideshow, during which they listened to either an emotional or neutral narration, and were exposed to relaxing or no music. Retention was tested 1 week later via a forced choice recognition test. Retention for both the emotional content (Phase 2 of the story) and material presented immediately after the emotional content (Phase 3) was enhanced, when compared with retention for the neutral story. Relaxing music prevented the enhancement for material presented after the emotional content (Phase 3). Experiment 2 (N=159) provided further support to the neuromodulatory effect of music by post-event presentation of both relaxing music and non-relaxing auditory stimuli (arousing music/background sound). Free recall of the story was assessed immediately afterwards and 1 week later. Relaxing music significantly reduced recall of the emotional story (Phase 2). The findings provide further insight into the capacity of relaxing music to attenuate the strength of emotional memory, offering support for the therapeutic use of music for such purposes. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. Emotional Responses to Music: Experience, Expression, and Physiology

    Science.gov (United States)

    Lundqvist, Lars-Olov; Carlsson, Fredrik; Hilmersson, Per; Juslin, Patrik N.

    2009-01-01

    A crucial issue in research on music and emotion is whether music evokes genuine emotional responses in listeners (the emotivist position) or whether listeners merely perceive emotions expressed by the music (the cognitivist position). To investigate this issue, we measured self-reported emotion, facial muscle activity, and autonomic activity in…

  10. Towards a neural basis of music-evoked emotions.

    Science.gov (United States)

    Koelsch, Stefan

    2010-03-01

    Music is capable of evoking exceptionally strong emotions and of reliably affecting the mood of individuals. Functional neuroimaging and lesion studies show that music-evoked emotions can modulate activity in virtually all limbic and paralimbic brain structures. These structures are crucially involved in the initiation, generation, detection, maintenance, regulation and termination of emotions that have survival value for the individual and the species. Therefore, at least some music-evoked emotions involve the very core of evolutionarily adaptive neuroaffective mechanisms. Because dysfunctions in these structures are related to emotional disorders, a better understanding of music-evoked emotions and their neural correlates can lead to a more systematic and effective use of music in therapy. Copyright 2010 Elsevier Ltd. All rights reserved.

  11. Sad or fearful? The influence of body posture on adults' and children's perception of facial displays of emotion.

    Science.gov (United States)

    Mondloch, Catherine J

    2012-02-01

    The current research investigated the influence of body posture on adults' and children's perception of facial displays of emotion. In each of two experiments, participants categorized facial expressions that were presented on a body posture that was congruent (e.g., a sad face on a body posing sadness) or incongruent (e.g., a sad face on a body posing fear). Adults and 8-year-olds made more errors and had longer reaction times on incongruent trials than on congruent trials when judging sad versus fearful facial expressions, an effect that was larger in 8-year-olds. The congruency effect was reduced when faces and bodies were misaligned, providing some evidence for holistic processing. Neither adults nor 8-year-olds were affected by congruency when judging sad versus happy expressions. Evidence that congruency effects vary with age and with similarity of emotional expressions is consistent with dimensional theories and "emotional seed" models of emotion perception. 2011 Elsevier Inc. All rights reserved.

  12. Emotions evoked by the sound of music: characterization, classification, and measurement.

    Science.gov (United States)

    Zentner, Marcel; Grandjean, Didier; Scherer, Klaus R

    2008-08-01

    One reason for the universal appeal of music lies in the emotional rewards that music offers to its listeners. But what makes these rewards so special? The authors addressed this question by progressively characterizing music-induced emotions in 4 interrelated studies. Studies 1 and 2 (n=354) were conducted to compile a list of music-relevant emotion terms and to study the frequency of both felt and perceived emotions across 5 groups of listeners with distinct music preferences. Emotional responses varied greatly according to musical genre and type of response (felt vs. perceived). Study 3 (n=801)--a field study carried out during a music festival--examined the structure of music-induced emotions via confirmatory factor analysis of emotion ratings, resulting in a 9-factorial model of music-induced emotions. Study 4 (n=238) replicated this model and found that it accounted for music-elicited emotions better than the basic emotion and dimensional emotion models. A domain-specific device to measure musically induced emotions is introduced--the Geneva Emotional Music Scale.
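
    As a simplified stand-in for the factor-analytic step described above (the study used confirmatory factor analysis), the sketch below fits an exploratory nine-factor model to a simulated listeners-by-emotion-terms rating matrix; all sizes, names, and data are assumptions, not the GEMS data.

```python
# Exploratory nine-factor sketch (a stand-in for the study's CFA).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(6)
n_listeners, n_terms = 801, 40                        # illustrative sizes
latent = rng.normal(size=(n_listeners, 9))            # 9 hypothetical latent factors
loadings = rng.normal(scale=0.7, size=(9, n_terms))   # term loadings on those factors
ratings = latent @ loadings + rng.normal(scale=0.5, size=(n_listeners, n_terms))

fa = FactorAnalysis(n_components=9, rotation="varimax", random_state=0)
scores = fa.fit_transform(ratings)          # listener scores on the 9 factors
print(fa.components_.shape, scores.shape)   # (9, 40) loadings, (801, 9) scores
```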

  13. Audio-Visual Integration Modifies Emotional Judgment in Music

    Directory of Open Access Journals (Sweden)

    Shen-Yuan Su

    2011-10-01

    Full Text Available The conventional view that perceived emotion in music is derived mainly from auditory signals has led to neglect of the contribution of visual image. In this study, we manipulated mode (major vs. minor) and examined the influence of a video image on emotional judgment in music. Melodies in either major or minor mode were controlled for tempo and rhythm and played to the participants. We found that Taiwanese participants, like Westerners, judged major melodies as expressing positive, and minor melodies negative, emotions. The major or minor melodies were then paired with video images of the singers, which were either emotionally congruent or incongruent with their modes. Results showed that participants perceived stronger positive or negative emotions with congruent audio-visual stimuli. Compared to listening to music alone, stronger emotions were perceived when an emotionally congruent video image was added and weaker emotions were perceived when an incongruent image was added. We therefore demonstrate that mode is important to perceive the emotional valence in music and that treating musical art as a purely auditory event might lose the enhanced emotional strength perceived in music, since going to a concert may lead to stronger perceived emotion than listening to the CD at home.

  14. A developmental study of the affective value of tempo and mode in music.

    Science.gov (United States)

    Dalla Bella, S; Peretz, I; Rousseau, L; Gosselin, N

    2001-07-01

    Do children use the same properties as adults in determining whether music sounds happy or sad? We addressed this question with a set of 32 excerpts (16 happy and 16 sad) taken from pre-existing music. The tempo (i.e. the number of beats per minute) and the mode (i.e. the specific subset of pitches used to write a given musical excerpt) of these excerpts were modified independently and jointly in order to measure their effects on happy-sad judgments. Adults and children from 3 to 8 years old were required to judge whether the excerpts were happy or sad. The results show that, like adults, 6- to 8-year-old children are affected by mode and tempo manipulations. In contrast, 5-year-olds' responses are only affected by a change of tempo. The youngest children (3- to 4-year-olds) failed to distinguish the happy from the sad tone of the music above chance. The results indicate that tempo is mastered earlier than mode for inferring the emotional tone conveyed by music.
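
    The above-chance criterion used in this kind of study can be checked with a simple binomial test, since the happy-sad judgment is a two-choice task with a 50% chance level. The counts below are hypothetical and only illustrate the computation.

```python
# Binomial test of above-chance performance on a two-choice happy/sad task.
from scipy.stats import binomtest

n_excerpts = 32                                   # 16 happy + 16 sad excerpts
correct_by_age = {"3-4": 18, "5": 22, "6-8": 27}  # hypothetical counts

for age, correct in correct_by_age.items():
    result = binomtest(correct, n_excerpts, p=0.5, alternative="greater")
    print(f"{age} yrs: {correct}/{n_excerpts} correct, p = {result.pvalue:.3f}")
```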

  15. (A)musicality in Williams syndrome: examining relationships among auditory perception, musical skill, and emotional responsiveness to music.

    Science.gov (United States)

    Lense, Miriam D; Shivers, Carolyn M; Dykens, Elisabeth M

    2013-01-01

    Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia.

  16. Music and Its Inductive Power: A Psychobiological and Evolutionary Approach to Musical Emotions

    Science.gov (United States)

    Reybrouck, Mark; Eerola, Tuomas

    2017-01-01

    The aim of this contribution is to broaden the concept of musical meaning from an abstract and emotionally neutral cognitive representation to an emotion-integrating description that is related to the evolutionary approach to music. Starting from the dispositional machinery for dealing with music as a temporal and sounding phenomenon, musical emotions are considered as adaptive responses to be aroused in human beings as the product of neural structures that are specialized for their processing. A theoretical and empirical background is provided in order to bring together the findings of music and emotion studies and the evolutionary approach to musical meaning. The theoretical grounding elaborates on the transition from referential to affective semantics, the distinction between expression and induction of emotions, and the tension between discrete-digital and analog-continuous processing of the sounds. The empirical background provides evidence from several findings such as infant-directed speech, referential emotive vocalizations and separation calls in lower mammals, the distinction between the acoustic and vehicle mode of sound perception, and the bodily and physiological reactions to the sounds. It is argued, finally, that early affective processing reflects the way emotions make our bodies feel, which in turn reflects on the emotions expressed and decoded. As such there is a dynamic tension between nature and nurture, which is reflected in the nature-nurture-nature cycle of musical sense-making. PMID:28421015

  17. Impaired emotion recognition in music in Parkinson's disease

    NARCIS (Netherlands)

    van Tricht, Mirjam J.; Smeding, Harriet M. M.; Speelman, Johannes D.; Schmand, Ben A.

    2010-01-01

    Music has the potential to evoke strong emotions and plays a significant role in the lives of many people. Music might therefore be an ideal medium to assess emotion recognition. We investigated emotion recognition in music in 20 patients with idiopathic Parkinson's disease (PD) and 20 matched healthy volunteers.

  18. The effects of tempo and familiarity on children's affective interpretation of music.

    Science.gov (United States)

    Mote, Jasmine

    2011-06-01

    When and how does one learn to associate emotion with music? This study attempted to address this issue by examining whether preschool children use tempo as a cue in determining whether a song is happy or sad. Instrumental versions of children's songs were played at different tempos to adults and children ages 3 to 5 years. Familiar and unfamiliar songs were used to examine whether familiarity affected children's identification of emotion in music. The results indicated that adults, 4 year olds and 5 year olds rated fast songs as significantly happier than slow songs. However, 3 year olds failed to rate fast songs differently than slow songs at above-chance levels. Familiarity did not significantly affect children's identification of happiness and sadness in music.

  19. Music-to-Color Associations of Single-Line Piano Melodies in Non-synesthetes.

    Science.gov (United States)

    Palmer, Stephen E; Langlois, Thomas A; Schloss, Karen B

    2016-01-01

    Prior research has shown that non-synesthetes' color associations to classical orchestral music are strongly mediated by emotion. The present study examines similar cross-modal music-to-color associations for much better controlled musical stimuli: 64 single-line piano melodies that were generated from four basic melodies by Mozart, whose global musical parameters were manipulated in tempo (slow/fast), note-density (sparse/dense), mode (major/minor) and pitch-height (low/high). Participants first chose the three colors (from 37) that they judged to be most consistent with (and, later, the three that were most inconsistent with) the music they were hearing. They later rated each melody and each color for the strength of its association along four emotional dimensions: happy/sad, agitated/calm, angry/not-angry and strong/weak. The cross-modal choices showed that faster music in the major mode was associated with lighter, more saturated, yellower (warmer) colors than slower music in the minor mode. These results replicate and extend those of Palmer et al. (2013, Proc. Natl Acad. Sci. 110, 8836-8841) with more precisely controlled musical stimuli. Further results replicated strong evidence for emotional mediation of these cross-modal associations, in that the emotional ratings of the melodies were very highly correlated with the emotional associations of the colors chosen as going best/worst with the melodies (r = 0.92, 0.85, 0.82 and 0.70 for happy/sad, strong/weak, angry/not-angry and agitated/calm, respectively). The results are discussed in terms of common emotional associations forming a cross-modal bridge between highly disparate sensory inputs.
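
    The emotional-mediation correlations reported above amount to correlating, across melodies, the mean emotion rating of each melody with the mean emotion rating of the colors chosen for it. The sketch below reproduces that computation on simulated data; only the dimension names follow the abstract, and everything else is assumed.

```python
# Correlating music emotion ratings with the emotion ratings of chosen colors.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_melodies = 64
dimensions = ["happy_sad", "agitated_calm", "angry_notangry", "strong_weak"]

# Simulated mean ratings per melody (music) and per chosen-color set (colors),
# constructed to be positively related so the output mirrors the reported pattern.
music = {d: rng.normal(size=n_melodies) for d in dimensions}
colors = {d: music[d] * 0.9 + rng.normal(scale=0.4, size=n_melodies)
          for d in dimensions}

for d in dimensions:
    r, p = pearsonr(music[d], colors[d])
    print(f"{d}: r = {r:.2f}, p = {p:.3g}")
```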

  20. Regulating Anger under Stress via Cognitive Reappraisal and Sadness.

    Science.gov (United States)

    Zhan, Jun; Wu, Xiaofei; Fan, Jin; Guo, Jianyou; Zhou, Jianshe; Ren, Jun; Liu, Chang; Luo, Jing

    2017-01-01

    Previous studies have reported the failure of cognitive emotion regulation (CER), especially in regulating unpleasant emotions under stress. The underlying reason for this failure is that the application of CER depends heavily on the executive function of the prefrontal cortex (PFC), but this function can be impaired by stress-related neuroendocrine hormones. This observation highlights the necessity of developing self-regulatory strategies that require less top-down cognitive control. Based on traditional Chinese philosophy and medicine, which examine how different types of emotions promote or counteract one another, we have developed a novel emotion regulation strategy whereby one emotion is used to alter another. For example, our previous experiment showed that sadness induction (after watching a sad film) could reduce aggressive behavior associated with anger [i.e., "sadness counteracts anger" (SCA)] (Zhan et al., 2015). Relative to the CER strategy requiring someone to think about certain cognitive reappraisals to reinterpret the meaning of an unpleasant situation, watching a film or listening to music and experiencing the emotion contained therein seemingly requires less cognitive effort and control; therefore, this SCA strategy may be an alternative strategy that compensates for the limitations of cognitive regulation strategies, especially in stressful situations. The present study was designed to directly compare the effects of the CER and SCA strategy in regulating anger and anger-related aggression in stressful and non-stressful conditions. Participants' subjective feeling of anger, anger-related aggressive behavior, skin conductance, and salivary cortisol and alpha-amylase levels were measured. Our findings revealed that acute stress impaired one's ability to use CR to control angry responses provoked by others, whereas stress did not influence the efficiency of the SCA strategy. Compared with sadness or neutral emotion induction, CER induction was found to

  1. Regulating Anger under Stress via Cognitive Reappraisal and Sadness

    Directory of Open Access Journals (Sweden)

    Jun Zhan

    2017-08-01

    Full Text Available Previous studies have reported the failure of cognitive emotion regulation (CER), especially in regulating unpleasant emotions under stress. The underlying reason for this failure is that the application of CER depends heavily on the executive function of the prefrontal cortex (PFC), but this function can be impaired by stress-related neuroendocrine hormones. This observation highlights the necessity of developing self-regulatory strategies that require less top-down cognitive control. Based on traditional Chinese philosophy and medicine, which examine how different types of emotions promote or counteract one another, we have developed a novel emotion regulation strategy whereby one emotion is used to alter another. For example, our previous experiment showed that sadness induction (after watching a sad film) could reduce aggressive behavior associated with anger [i.e., “sadness counteracts anger” (SCA)] (Zhan et al., 2015). Relative to the CER strategy requiring someone to think about certain cognitive reappraisals to reinterpret the meaning of an unpleasant situation, watching a film or listening to music and experiencing the emotion contained therein seemingly requires less cognitive effort and control; therefore, this SCA strategy may be an alternative strategy that compensates for the limitations of cognitive regulation strategies, especially in stressful situations. The present study was designed to directly compare the effects of the CER and SCA strategy in regulating anger and anger-related aggression in stressful and non-stressful conditions. Participants’ subjective feeling of anger, anger-related aggressive behavior, skin conductance, and salivary cortisol and alpha-amylase levels were measured. Our findings revealed that acute stress impaired one’s ability to use CR to control angry responses provoked by others, whereas stress did not influence the efficiency of the SCA strategy. Compared with sadness or neutral emotion induction, CER

  2. Impaired emotion recognition in music in Parkinson's disease

    NARCIS (Netherlands)

    van Tricht, M.J.; Smeding, H.M.M.; Speelman, J.D.; Schmand, B.A.

    2010-01-01

    Music has the potential to evoke strong emotions and plays a significant role in the lives of many people. Music might therefore be an ideal medium to assess emotion recognition. We investigated emotion recognition in music in 20 patients with idiopathic Parkinson’s disease (PD) and 20 matched healthy volunteers.

  3. Looking at food in sad mood: do attention biases lead emotional eaters into overeating after a negative mood induction?

    Science.gov (United States)

    Werthmann, Jessica; Renner, Fritz; Roefs, Anne; Huibers, Marcus J H; Plumanns, Lana; Krott, Nora; Jansen, Anita

    2014-04-01

    Emotional eating is associated with overeating and the development of obesity. Yet, empirical evidence for individual (trait) differences in emotional eating and cognitive mechanisms that contribute to eating during sad mood remain equivocal. The aim of this study was to test if attention bias for food moderates the effect of self-reported emotional eating during sad mood (vs neutral mood) on actual food intake. It was expected that emotional eating is predictive of elevated attention for food and higher food intake after an experimentally induced sad mood and that attentional maintenance on food predicts food intake during a sad versus a neutral mood. Participants (N = 85) were randomly assigned to one of the two experimental mood induction conditions (sad/neutral). Attentional biases for high caloric foods were measured by eye tracking during a visual probe task with pictorial food and neutral stimuli. Self-reported emotional eating was assessed with the Dutch Eating Behavior Questionnaire (DEBQ) and ad libitum food intake was tested by a disguised food offer. Hierarchical multivariate regression modeling showed that self-reported emotional eating did not account for changes in attention allocation for food or food intake in either condition. Yet, attention maintenance on food cues was significantly related to increased intake specifically in the neutral condition, but not in the sad mood condition. The current findings show that self-reported emotional eating (based on the DEBQ) might not validly predict who overeats when sad, at least not in a laboratory setting with healthy women. Results further suggest that attention maintenance on food relates to eating motivation when in a neutral affective state, and might therefore be a cognitive mechanism contributing to increased food intake in general, but maybe not during sad mood. Copyright © 2014 Elsevier Ltd. All rights reserved.
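
    The moderation question tested above can be sketched as a regression with interaction terms: intake predicted by self-reported emotional eating, gaze dwell-time bias toward food, and mood condition. The variable names and simulated data below are assumptions, not the authors' dataset.

```python
# Moderated regression sketch: intake ~ emotional eating x attention bias x mood.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 85
df = pd.DataFrame({
    "mood": rng.choice(["sad", "neutral"], size=n),
    "emotional_eating": rng.normal(size=n),   # self-report score (standardized)
    "dwell_bias": rng.normal(size=n),         # gaze dwell-time bias toward food
})
# Simulated intake: dwell bias raises intake in the neutral condition only,
# mirroring the pattern reported in the abstract.
df["intake_kcal"] = (300
                     + 40 * df["dwell_bias"] * (df["mood"] == "neutral")
                     + rng.normal(scale=60, size=n))

model = smf.ols("intake_kcal ~ emotional_eating * dwell_bias * C(mood)",
                data=df).fit()
print(model.summary().tables[1])
```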

  4. Studying emotion induced by music through a crowdsourcing game

    NARCIS (Netherlands)

    Aljanaki, A.; Wiering, F.; Veltkamp, R.C.

    One of the major reasons why people find music so enjoyable is its emotional impact. Creating emotion-based playlists is a natural way of organizing music. The usability of online music streaming services could be greatly improved by developing emotion-based access methods, and automatic music

  5. Musical Manipulations and the Emotionally Extended Mind

    Directory of Open Access Journals (Sweden)

    Joel Krueger

    2015-05-01

    Full Text Available I respond to Kersten's criticism in his article "Music and Cognitive Extension" of my approach to the musically extended emotional mind in Krueger (2014). I specify how we manipulate—and in so doing, integrate with—music when, as active listeners, we become part of a musically extended cognitive system. I also indicate how Kersten's account might be enriched by paying closer attention to the way that music functions as an environmental artifact for emotion regulation.

  6. Music Education Intervention Improves Vocal Emotion Recognition

    Science.gov (United States)

    Mualem, Orit; Lavidor, Michal

    2015-01-01

    The current study is an interdisciplinary examination of the interplay among music, language, and emotions. It consisted of two experiments designed to investigate the relationship between musical abilities and vocal emotional recognition. In experiment 1 (N = 24), we compared the influence of two short-term intervention programs--music and…

  7. (A)musicality in Williams syndrome: examining relationships among auditory perception, musical skill, and emotional responsiveness to music

    Science.gov (United States)

    Lense, Miriam D.; Shivers, Carolyn M.; Dykens, Elisabeth M.

    2013-01-01

    Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia. PMID:23966965

  8. Impaired Emotion Recognition in Music in Parkinson's Disease

    Science.gov (United States)

    van Tricht, Mirjam J.; Smeding, Harriet M. M.; Speelman, Johannes D.; Schmand, Ben A.

    2010-01-01

    Music has the potential to evoke strong emotions and plays a significant role in the lives of many people. Music might therefore be an ideal medium to assess emotion recognition. We investigated emotion recognition in music in 20 patients with idiopathic Parkinson's disease (PD) and 20 matched healthy volunteers. The role of cognitive dysfunction…

  9. Comparison of emotion recognition from facial expression and music.

    Science.gov (United States)

    Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves the interpretation of different visual and auditory cues. The ability to recognize emotions is difficult to determine precisely because emotional displays are usually very brief (micro-expressions), and the recognition itself does not have to be a conscious process. We assumed that recognition of emotions from facial expressions would be favored over recognition of emotions communicated through music. To compare the success rate in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey that included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, and girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized when presented on human faces than in music, possibly because the understanding of facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition may have been selected for because of the necessity of communicating with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skill probably share some general cognitive abilities such as attention, memory and motivation. Music pieces are processed differently in the brain than facial expressions and are consequently probably evaluated differently as relevant emotional cues.

  10. Social and Emotional Function of Music Listening: Reasons for Listening to Music

    Science.gov (United States)

    Gurgen, Elif Tekin

    2016-01-01

    Problem Statement: The reasons that people listen to music have been investigated for many years. Research results over the past 50 years have showed that individual musical preference is influenced by multiple factors. Many studies have shown throughout that music has been used to induce emotional states, express, activate, control emotions,…

  11. Brain-Activity-Driven Real-Time Music Emotive Control

    OpenAIRE

    Giraldo, Sergio; Ramirez, Rafael

    2013-01-01

    Active music listening has emerged as a field of study that aims to enable listeners to interactively control music. Most active music listening systems aim to control aspects of music such as playback, equalization, browsing, and retrieval, but few of them aim to control expressive aspects of music to convey emotions. In this study, our aim is to enrich the music listening experience by allowing listeners to control expressive parameters in music performances using their perceived emotional state...

  12. Metaphor and music emotion: Ancient views and future directions.

    Science.gov (United States)

    Pannese, Alessia; Rappaz, Marc-André; Grandjean, Didier

    2016-08-01

    Music is often described in terms of emotion. This notion is supported by empirical evidence showing that engaging with music is associated with subjective feelings, and with objectively measurable responses at the behavioural, physiological, and neural level. Some accounts, however, reject the idea that music may directly induce emotions. For example, the 'paradox of negative emotion', whereby music described in negative terms is experienced as enjoyable, suggests that music might move the listener through indirect mechanisms in which the emotional experience elicited by music does not always coincide with the emotional label attributed to it. Here we discuss the role of metaphor as a potential mediator in these mechanisms. Drawing on musicological, philosophical, and neuroscientific literature, we suggest that metaphor acts at key stages along and between physical, biological, cognitive, and contextual processes, and propose a model of music experience in which metaphor mediates between language, emotion, and aesthetic response. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. An integrative review of the enjoyment of sadness associated with music.

    OpenAIRE

    Eerola, T.; Vuoskoski, J. K.; Peltola, H.-R.; Putkinen, V.; Schäfer, K.

    2017-01-01

    The recent surge of interest towards the paradoxical pleasure produced by sad music has generated a handful of theories and an array of empirical explorations on the topic. However, none of these have attempted to weigh the existing evidence in a systematic fashion. The present work puts forward an integrative framework laid out over three levels of explanation – biological, psycho-social, and cultural – to compare and integrate the existing findings in a meaningful way. First, we review the ...

  14. Emotional tears facilitate the recognition of sadness and the perceived need for social support.

    Science.gov (United States)

    Balsters, Martijn J H; Krahmer, Emiel J; Swerts, Marc G J; Vingerhoets, Ad J J M

    2013-02-12

    The tearing effect refers to the relevance of tears as an important visual cue adding meaning to human facial expression. However, little is known about how people process these visual cues and their mediating role in terms of emotion perception and person judgment. We therefore conducted two experiments in which we measured the influence of tears on the identification of sadness and the perceived need for social support at an early perceptional level. In two experiments (1 and 2), participants were exposed to sad and neutral faces. In both experiments, the face stimuli were presented for 50 milliseconds. In experiment 1, tears were digitally added to sad faces in one condition. Participants demonstrated significantly faster recognition of sad faces with tears compared to those without tears. In experiment 2, tears were added to neutral faces as well. Participants had to indicate to what extent the displayed individuals were in need of social support. Study participants reported a greater perceived need for social support in response to both sad and neutral faces with tears than to those without tears. This study thus demonstrated that emotional tears serve as important visual cues at an early (pre-attentive) level.

  15. A percepção de emoções em trechos de música ocidental erudita [The perception of emotions in excerpts of classical Western music]

    Directory of Open Access Journals (Sweden)

    Danilo Ramos

    2012-12-01

    Full Text Available The aim of this study was to evaluate emotional responses to musical excerpts from the classical Western repertoire. Musicians and nonmusicians listened to each musical excerpt and linked it to emotional categories (Joy, Sadness, Serenity, or Fear/Anger). The results indicated that, for both groups, most musical excerpts were not associated with more than one emotional category. In general, associations were similar between groups, although the musicians' responses were more consistent. These results suggest that cognitive processing of emotional responses to Western music is related to the cognitive structure of the event, to individual differences, and to musical expertise.

  16. Cognitive Function, Origin, and Evolution of Musical Emotions

    Directory of Open Access Journals (Sweden)

    Leonid Perlovsky

    2013-12-01

    Full Text Available The cognitive function of music, and its origin and evolution, remained a mystery until recently. Here we discuss a theory of a fundamental function of music in cognition and culture. Music evolved in parallel with language. The evolution of language toward a semantically powerful tool required freeing it from uncontrolled emotions. Knowledge evolved quickly along with language. This created cognitive dissonances, contradictions between knowledge and instincts, which differentiated consciousness. To sustain the evolution of language and culture, these contradictions had to be unified, and music was the mechanism of unification. Differentiated emotions are needed for resolving cognitive dissonances. As knowledge accumulated, contradictions multiplied and correspondingly more varied emotions had to evolve. While language differentiated the psyche, music unified it. Thus the need for refined musical emotions in the process of cultural evolution is grounded in fundamental mechanisms of cognition. This is why today's human mind and cultures cannot exist without today's music.

  17. Affinity for Music: A Study of the Role of Emotion in Musical Instrument Learning

    Science.gov (United States)

    StGeorge, Jennifer; Holbrook, Allyson; Cantwell, Robert

    2014-01-01

    For many people, the appeal of music lies in its connection to human emotions. A significant body of research has explored the emotions that are experienced through either the formal structure of music or through its symbolic messages. Yet in the instrumental music education field, this emotional connection is rarely examined. In this article, it…

  18. Emotional Responses to Music: Shifts in Frontal Brain Asymmetry Mark Periods of Musical Change.

    Science.gov (United States)

    Arjmand, Hussain-Abdulah; Hohagen, Jesper; Paton, Bryan; Rickard, Nikki S

    2017-01-01

    Recent studies have demonstrated increased activity in brain regions associated with emotion and reward when listening to pleasurable music. Unexpected change in musical features (intensity and tempo), and thereby enhanced tension and anticipation, is proposed to be one of the primary mechanisms by which music induces a strong emotional response in listeners. Whether such musical features coincide with central measures of emotional response has not, however, been extensively examined. In this study, subjective and physiological measures of experienced emotion were obtained continuously from 18 participants (12 females, 6 males; 18-38 years) who listened to four stimuli: pleasant music, unpleasant music (dissonant manipulations of their own music), neutral music, and no music, in a counter-balanced order. Each stimulus was presented twice: electroencephalograph (EEG) data were collected during the first, while participants continuously subjectively rated the stimuli during the second presentation. Frontal asymmetry (FA) indices from frontal and temporal sites were calculated, and peak periods of bias toward the left (indicating a shift toward positive affect) were identified across the sample. The music pieces were also examined to define the temporal onset of key musical features. Subjective reports of emotional experience averaged across the condition confirmed participants rated their music selection as very positive, the scrambled music as negative, and the neutral music and silence as neither positive nor negative. Significant effects in FA were observed in the frontal electrode pair FC3-FC4, and the greatest increase in left bias from baseline was observed in response to pleasurable music. These results are consistent with findings from previous research. Peak FA responses at this site were also found to co-occur with key musical events relating to change, for instance, the introduction of a new motif, an instrument change, or a change in low-level acoustic
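    The frontal asymmetry (FA) index referred to in this record is commonly computed as the difference in log-transformed alpha-band power between homologous right and left electrodes. The sketch below illustrates that common convention on synthetic signals; the electrode pair (FC3/FC4, as named in the record), the sampling rate, and the 8-13 Hz alpha band are assumptions, not details taken from the study.

```python
# Illustrative sketch (not the study's code): a frontal asymmetry (FA) index,
# using the common convention FA = ln(alpha power, right site) - ln(alpha power,
# left site). A positive shift is usually read as a left-hemisphere bias,
# i.e., more positive affect. The signals here are synthetic noise.
import numpy as np
from scipy.signal import welch

fs = 256                                     # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
eeg_fc3 = rng.standard_normal(fs * 10)       # 10 s of fake data, left site
eeg_fc4 = rng.standard_normal(fs * 10)       # 10 s of fake data, right site

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Mean power spectral density within the alpha band (Welch estimate)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

fa_index = np.log(alpha_power(eeg_fc4, fs)) - np.log(alpha_power(eeg_fc3, fs))
print(f"FA index (FC4 - FC3, log alpha power): {fa_index:.3f}")
```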

  19. Emotional Responses to Music: Shifts in Frontal Brain Asymmetry Mark Periods of Musical Change

    Directory of Open Access Journals (Sweden)

    Hussain-Abdulah Arjmand

    2017-12-01

    Full Text Available Recent studies have demonstrated increased activity in brain regions associated with emotion and reward when listening to pleasurable music. Unexpected change in musical features (intensity and tempo), and thereby enhanced tension and anticipation, is proposed to be one of the primary mechanisms by which music induces a strong emotional response in listeners. Whether such musical features coincide with central measures of emotional response has not, however, been extensively examined. In this study, subjective and physiological measures of experienced emotion were obtained continuously from 18 participants (12 females, 6 males; 18–38 years) who listened to four stimuli: pleasant music, unpleasant music (dissonant manipulations of their own music), neutral music, and no music, in a counter-balanced order. Each stimulus was presented twice: electroencephalograph (EEG) data were collected during the first, while participants continuously subjectively rated the stimuli during the second presentation. Frontal asymmetry (FA) indices from frontal and temporal sites were calculated, and peak periods of bias toward the left (indicating a shift toward positive affect) were identified across the sample. The music pieces were also examined to define the temporal onset of key musical features. Subjective reports of emotional experience averaged across the condition confirmed participants rated their music selection as very positive, the scrambled music as negative, and the neutral music and silence as neither positive nor negative. Significant effects in FA were observed in the frontal electrode pair FC3–FC4, and the greatest increase in left bias from baseline was observed in response to pleasurable music. These results are consistent with findings from previous research. Peak FA responses at this site were also found to co-occur with key musical events relating to change, for instance, the introduction of a new motif, or an instrument change, or a

  20. Repetition and Emotive Communication in Music Versus Speech

    Directory of Open Access Journals (Sweden)

    Elizabeth Hellmuth Margulis

    2013-04-01

    Full Text Available Music and speech are often placed alongside one another as comparative cases. Their relative overlaps and disassociations have been well explored (e.g. Patel, 2010). But one key attribute distinguishing these two domains has often been overlooked: the greater preponderance of repetition in music in comparison to speech. Recent fMRI studies have shown that familiarity – achieved through repetition – is a critical component of emotional engagement with music (Pereira et al., 2011). If repetition is fundamental to emotional responses to music, and repetition is a key distinguisher between the domains of music and speech, then close examination of the phenomenon of repetition might help clarify the ways that music elicits emotion differently than speech.

  1. Fusion of Electroencephalogram dynamics and musical contents for estimating emotional responses in music listening

    Directory of Open Access Journals (Sweden)

    Yuan-Pin Lin

    2014-05-01

    Full Text Available Electroencephalography (EEG)-based emotion classification during music listening has gained increasing attention nowadays due to its promise of potential applications such as musical affective brain-computer interface (ABCI), neuromarketing, music therapy, and implicit multimedia tagging and triggering. However, music is an ecologically valid and complex stimulus that conveys certain emotions to listeners through compositions of musical elements. Using solely EEG signals to distinguish emotions remained challenging. This study aimed to assess the applicability of a multimodal approach by leveraging the EEG dynamics and acoustic characteristics of musical contents for the classification of emotional valence and arousal. To this end, this study adopted machine-learning methods to systematically elucidate the roles of the EEG and music modalities in the emotion modeling. The empirical results suggested that when whole-head EEG signals were available, the inclusion of musical contents did not improve the classification performance. The obtained performance of 74~76% using solely the EEG modality was statistically comparable to that using the multimodal approach. However, if EEG dynamics were only available from a small set of electrodes (likely the case in real-life applications), the music modality would play a complementary role and augment the EEG results from around 61% to 67% in valence classification and from around 58% to 67% in arousal classification. The musical timbre appeared to replace less-discriminative EEG features and led to improvements in both valence and arousal classification, whereas musical loudness contributed specifically to the arousal classification. The present study not only provided principles for constructing an EEG-based multimodal approach, but also revealed fundamental insights into the interplay of brain activity and musical contents in emotion modeling.

  2. Fusion of electroencephalographic dynamics and musical contents for estimating emotional responses in music listening.

    Science.gov (United States)

    Lin, Yuan-Pin; Yang, Yi-Hsuan; Jung, Tzyy-Ping

    2014-01-01

    Electroencephalography (EEG)-based emotion classification during music listening has gained increasing attention nowadays due to its promise of potential applications such as musical affective brain-computer interface (ABCI), neuromarketing, music therapy, and implicit multimedia tagging and triggering. However, music is an ecologically valid and complex stimulus that conveys certain emotions to listeners through compositions of musical elements. Using solely EEG signals to distinguish emotions remained challenging. This study aimed to assess the applicability of a multimodal approach by leveraging the EEG dynamics and acoustic characteristics of musical contents for the classification of emotional valence and arousal. To this end, this study adopted machine-learning methods to systematically elucidate the roles of the EEG and music modalities in the emotion modeling. The empirical results suggested that when whole-head EEG signals were available, the inclusion of musical contents did not improve the classification performance. The obtained performance of 74~76% using solely the EEG modality was statistically comparable to that using the multimodal approach. However, if EEG dynamics were only available from a small set of electrodes (likely the case in real-life applications), the music modality would play a complementary role and augment the EEG results from around 61% to 67% in valence classification and from around 58% to 67% in arousal classification. The musical timbre appeared to replace less-discriminative EEG features and led to improvements in both valence and arousal classification, whereas musical loudness contributed specifically to the arousal classification. The present study not only provided principles for constructing an EEG-based multimodal approach, but also revealed fundamental insights into the interplay of brain activity and musical contents in emotion modeling.
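    As a rough illustration of the feature-level fusion idea described in this record (not the authors' actual pipeline), the sketch below concatenates EEG-derived features with acoustic features and compares an EEG-only classifier against the fused one using scikit-learn. The data, feature dimensions, classifier, and cross-validation setup are all assumptions.

```python
# Minimal sketch of feature-level fusion for valence classification.
# All data are synthetic; dimensions and the SVM/cross-validation choices
# are illustrative assumptions, not the study's settings.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_trials = 200
eeg_features = rng.standard_normal((n_trials, 32))    # e.g., band-power features
audio_features = rng.standard_normal((n_trials, 12))  # e.g., timbre/loudness features
valence = rng.integers(0, 2, n_trials)                # binary valence labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

acc_eeg = cross_val_score(clf, eeg_features, valence, cv=5).mean()
acc_fused = cross_val_score(clf, np.hstack([eeg_features, audio_features]),
                            valence, cv=5).mean()
print(f"EEG only: {acc_eeg:.2f}  |  EEG + audio: {acc_fused:.2f}")
```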

  3. Verbal and facial-emotional Stroop tasks reveal specific attentional interferences in sad mood

    NARCIS (Netherlands)

    Isaac, L.; Vrijsen, J.N.; Eling, P.A.T.M.; Oostrom, I.I.H. van; Speckens, A.E.M.; Becker, E.S.

    2012-01-01

    Mood congruence refers to the tendency of individuals to attend to information more readily when it has the same emotional content as their current mood state. The aim of the present study was to ascertain whether attentional interference occurred for participants in sad mood states for emotionally

  4. Modeling emotional content of music using system identification.

    Science.gov (United States)

    Korhonen, Mark D; Clausi, David A; Jernigan, M Ed

    2006-06-01

    Research was conducted to develop a methodology to model the emotional content of music as a function of time and musical features. Emotion is quantified using the dimensions valence and arousal, and system-identification techniques are used to create the models. Results demonstrate that system identification provides a means to generalize the emotional content for a genre of music. The average R2 statistic of a valid linear model structure is 21.9% for valence and 78.4% for arousal. The proposed method of constructing models of emotional content generalizes previous time-series models and removes ambiguity from classifiers of emotion.
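    The system-identification approach summarized above can be pictured as fitting a model in which an emotion dimension at each time step is predicted from current and lagged musical features. The toy sketch below fits such an ARX-style linear model on synthetic data and reports R^2; the feature names, lag order, and data are invented for illustration and are not the paper's model structures.

```python
# Toy illustration of system identification for time-varying emotion:
# predict "arousal" from current and lagged musical features with a linear model.
# Everything here is synthetic and chosen only to make the example runnable.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
T = 500
loudness = rng.standard_normal(T)
tempo = rng.standard_normal(T)
# Synthetic "arousal" that depends on current and lagged inputs plus noise.
arousal = (0.6 * loudness + 0.3 * np.roll(loudness, 1)
           + 0.4 * tempo + 0.1 * rng.standard_normal(T))

def lagged_design(signals, n_lags=2):
    """Stack current and lagged copies of each input signal into a design matrix."""
    cols = [np.roll(s, k) for s in signals for k in range(n_lags + 1)]
    X = np.column_stack(cols)
    return X[n_lags:]          # drop rows contaminated by wrap-around

n_lags = 2
X = lagged_design([loudness, tempo], n_lags)
y = arousal[n_lags:]

model = LinearRegression().fit(X, y)
print(f"R^2 of the arousal model: {model.score(X, y):.2f}")
```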

  5. Effects of music interventions on emotional States and running performance.

    Science.gov (United States)

    Lane, Andrew M; Davis, Paul A; Devonport, Tracey J

    2011-01-01

    The present study compared the effects of two different music interventions on changes in emotional states before and during running, and also explored effects of music interventions upon performance outcome. Volunteer participants (n = 65) who regularly listened to music when running registered online to participate in a three-stage study. Participants attempted to attain a personally important running goal to establish baseline performance. Thereafter, participants were randomly assigned to either a self-selected music group or an Audiofuel music group. Audiofuel produce pieces of music designed to assist synchronous running. The self-selected music group followed guidelines for selecting motivating playlists. In both experimental groups, participants used the Brunel Music Rating Inventory-2 (BMRI-2) to facilitate selection of motivational music. Participants again completed the BMRI-2 post-intervention to assess the motivational qualities of Audiofuel music or the music they selected for use during the study. Results revealed no significant differences between self-selected music and Audiofuel music on all variables analyzed. Participants in both music groups reported increased pleasant emotions and decreased unpleasant emotions following intervention. Significant performance improvements were demonstrated post-intervention with participants reporting a belief that emotional states related to performance. Further analysis indicated that enhanced performance was significantly greater among participants reporting music to be motivational as indicated by high scores on the BMRI-2. Findings suggest that both individual athletes and practitioners should consider using the BMRI-2 when selecting music for running. Key points: Listening to music with a high motivational quotient as indicated by scores on the BMRI-2 was associated with enhanced running performance and meta-emotional beliefs that emotions experienced during running helped performance. Beliefs on the

  6. Non-response to sad mood induction: implications for emotion research.

    Science.gov (United States)

    Rottenberg, Jonathan; Kovacs, Maria; Yaroslavsky, Ilya

    2018-05-01

    Experimental induction of sad mood states is a mainstay of laboratory research on affect and cognition, mood regulation, and mood disorders. Typically, the success of such mood manipulations is reported as a statistically significant pre- to post-induction change in the self-rated intensity of the target affect. The present commentary was motivated by an unexpected finding in one of our studies concerning the response rate to a well-validated sad mood induction. Using the customary statistical approach, we found a significant mean increase in self-rated sadness intensity with a moderate effect size, verifying the "success" of the mood induction. However, that "success" masked that, between one-fifth and about one-third of our samples (adolescents who had histories of childhood-onset major depressive disorder and healthy controls) reported absolutely no sadness in response to the mood induction procedure. We consider implications of our experience for emotion research by (1) commenting upon the typically overlooked phenomenon of nonresponse, (2) suggesting changes in reporting practices regarding mood induction success, and (3) outlining future directions to help scientists determine why some subjects do not respond to experimental mood induction.

  7. Music and Emotion: A Case for North Indian Classical Music.

    Science.gov (United States)

    Valla, Jeffrey M; Alappatt, Jacob A; Mathur, Avantika; Singh, Nandini C

    2017-01-01

    The ragas of North Indian Classical Music (NICM) have been historically known to elicit emotions. Recently, Mathur et al. (2015) provided empirical support for these historical assumptions, showing that distinct ragas elicit distinct emotional responses. In this review, we discuss the findings of Mathur et al. (2015) in the context of the structure of NICM. Using Mathur et al. (2015) as a demonstrative case in point, we argue that ragas of NICM can be viewed as uniquely designed stimulus tools for investigating the tonal and rhythmic influences on musical emotion.

  8. Recognition of the Emotional Content of Music Depending on the Characteristics of the Musical Material and Experience of Students

    Directory of Open Access Journals (Sweden)

    Knyazeva T.S.,

    2015-02-01

    Full Text Available We studied the factors affecting the recognition of the emotional content of music. We tested hypotheses about the influence of the valence of the music, ethnic style, and listening experience on the success of music recognition. The empirical study involved 26 Russian musicians (average age 25.7 years). For the study of musical perception we used a bipolar semantic differential. We found that the valence of the musical material affects the recognition of the emotional content of music, whereas ethnic style does not. It was also found that senior students recognize the emotional content of music more effectively. The results point to the universal nature of the emotional musical ear, which recognizes music of different ethnic styles equally successfully, and support the notion of the higher significance of negative valence of emotional content in the process of musical perception. A study of factors influencing the emotional understanding of music is important for the development of models of emotion recognition, theoretical constructs of emotional intelligence, and the theory and practice of music education.

  9. Music-evoked emotions: principles, brain correlates, and implications for therapy.

    Science.gov (United States)

    Koelsch, Stefan

    2015-03-01

    This paper describes principles underlying the evocation of emotion with music: evaluation, resonance, memory, expectancy/tension, imagination, understanding, and social functions. Each of these principles includes several subprinciples, and the framework on music-evoked emotions emerging from these principles and subprinciples is supposed to provide a starting point for a systematic, coherent, and comprehensive theory on music-evoked emotions that considers both reception and production of music, as well as the relevance of emotion-evoking principles for music therapy. © 2015 New York Academy of Sciences.

  10. Music and Emotion: the Dispositional or Arousal theory

    Directory of Open Access Journals (Sweden)

    Alessandra Buccella

    2012-05-01

    Full Text Available One of the ways of analysing the relationship between music and emotions is through musical expressiveness. As the theory I discuss in this paper puts it, expressiveness is a particular kind of secondary quality of music or, to use the term which gives the theory its name, a disposition of music to arouse a certain emotional response in listeners. The most accurate version of the dispositional theory is provided by Derek Matravers in his book Art and Emotion and in other papers: what I will try to do, then, is to illustrate Matravers' theory and claim that it is a good solution to many problems concerning music and its capacity to affect our inner states.

  11. Audio-based deep music emotion recognition

    Science.gov (United States)

    Liu, Tong; Han, Li; Ma, Liangkai; Guo, Dongwei

    2018-05-01

    As the rapid development of multimedia networking continues, more and more songs are issued through the Internet and stored in large digital music libraries. However, music information retrieval on these libraries can be really hard, and the recognition of musical emotion is especially challenging. In this paper, we report a strategy to recognize the emotion contained in songs by classifying their spectrograms, which contain both time and frequency information, with a convolutional neural network (CNN). The experiments conducted on the 1000-song dataset indicate that the proposed model outperforms traditional machine learning methods.
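    A minimal sketch of a CNN spectrogram classifier in the spirit of this record is shown below; it is not the authors' architecture. The input shape, number of emotion classes, and all hyperparameters are assumptions, and the "spectrogram" batch is random data.

```python
# Sketch of a small CNN that maps spectrogram-like inputs to emotion-class logits.
# Architecture and shapes are illustrative assumptions only.
import torch
import torch.nn as nn

class SpectrogramEmotionCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):                      # x: (batch, 1, freq_bins, time_frames)
        h = self.features(x)
        return self.classifier(h.flatten(1))   # logits over emotion classes

model = SpectrogramEmotionCNN(n_classes=4)
fake_batch = torch.randn(8, 1, 128, 256)       # 8 random "spectrograms"
logits = model(fake_batch)
print(logits.shape)                            # torch.Size([8, 4])
```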

  12. Sadness and ruminative thinking independently depress people's moods.

    Science.gov (United States)

    Jahanitabesh, Azra; Cardwell, Brittany A; Halberstadt, Jamin

    2017-11-02

    Depression and rumination often co-occur in clinical populations, but it is not clear which causes which, or if both are manifestations of an underlying pathology. Does rumination simply exacerbate whatever affect a person is experiencing, or is it a negative experience in and of itself? In two experiments we answer this question by independently manipulating emotion and rumination. Participants were allocated to sad or neutral (in Experiment 1), or sad, neutral or happy (Experiment 2) mood conditions, via a combination of emotionally evocative music and autobiographical recall. Afterwards, in both studies, participants either ruminated by thinking about self-relevant statements or, in a control group, thought about self-irrelevant statements. Taken together, our data show that, independent of participants' mood, ruminators reported more negative affect relative to controls. The findings are consistent with theories suggesting that self-focus is itself unpleasant, and illustrate that depressive rumination comprises both affective and ruminative components, which could be targeted independently in clinical samples. © 2017 International Union of Psychological Science.

  13. From everyday emotions to aesthetic emotions: Towards a unified theory of musical emotions

    Science.gov (United States)

    Juslin, Patrik N.

    2013-09-01

    The sound of music may arouse profound emotions in listeners. But such experiences seem to involve a ‘paradox’, namely that music - an abstract form of art, which appears removed from our concerns in everyday life - can arouse emotions - biologically evolved reactions related to human survival. How are these (seemingly) non-commensurable phenomena linked together? Key is to understand the processes through which sounds are imbued with meaning. It can be argued that the survival of our ancient ancestors depended on their ability to detect patterns in sounds, derive meaning from them, and adjust their behavior accordingly. Such an ecological perspective on sound and emotion forms the basis of a recent multi-level framework that aims to explain emotional responses to music in terms of a large set of psychological mechanisms. The goal of this review is to offer an updated and expanded version of the framework that can explain both ‘everyday emotions’ and ‘aesthetic emotions’. The revised framework - referred to as BRECVEMA - includes eight mechanisms: Brain Stem Reflex, Rhythmic Entrainment, Evaluative Conditioning, Contagion, Visual Imagery, Episodic Memory, Musical Expectancy, and Aesthetic Judgment. In this review, it is argued that all of the above mechanisms may be directed at information that occurs in a ‘musical event’ (i.e., a specific constellation of music, listener, and context). Of particular significance is the addition of a mechanism corresponding to aesthetic judgments of the music, to better account for typical ‘appreciation emotions’ such as admiration and awe. Relationships between aesthetic judgments and other mechanisms are reviewed based on the revised framework. It is suggested that the framework may contribute to a long-needed reconciliation between previous approaches that have conceptualized music listeners' responses in terms of either ‘everyday emotions’ or ‘aesthetic emotions’.

  14. Functional MRI of music emotion processing in frontotemporal dementia.

    Science.gov (United States)

    Agustus, Jennifer L; Mahoney, Colin J; Downey, Laura E; Omar, Rohani; Cohen, Miriam; White, Mark J; Scott, Sophie K; Mancini, Laura; Warren, Jason D

    2015-03-01

    Frontotemporal dementia is an important neurodegenerative disorder of younger life led by profound emotional and social dysfunction. Here we used fMRI to assess brain mechanisms of music emotion processing in a cohort of patients with frontotemporal dementia (n = 15) in relation to healthy age-matched individuals (n = 11). In a passive-listening paradigm, we manipulated levels of emotion processing in simple arpeggio chords (mode versus dissonance) and emotion modality (music versus human emotional vocalizations). A complex profile of disease-associated functional alterations was identified with separable signatures of musical mode, emotion level, and emotion modality within a common, distributed brain network, including posterior and anterior superior temporal and inferior frontal cortices and dorsal brainstem effector nuclei. Separable functional signatures were identified post-hoc in patients with and without abnormal craving for music (musicophilia): a model for specific abnormal emotional behaviors in frontotemporal dementia. Our findings indicate the potential of music to delineate neural mechanisms of altered emotion processing in dementias, with implications for future disease tracking and therapeutic strategies. © 2014 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals Inc. on behalf of The New York Academy of Sciences.

  15. Cognitive approaches to analysis of emotions in music listening

    DEFF Research Database (Denmark)

    Hansen, Niels Chr.

    2013-01-01

    In recent years research into music cognition and perception has increasingly gained territory. A fact which is not always realised by music theorists is that, from the perspective of cognitive psychology and empirical methodology, the representatives of the expanding field of cognitive music research frequently address questions and propose theoretical frameworks that ought to have implications for music theory of a more traditional kind. Yet, such cognitive theories and empirical findings have not had radical impact on general analytical practice and teaching of music theory. For theorists interested in musical meaning the emotional impact of music has always been a major concern. In this paper I will explore how multiple cognitive theories and empirical findings can be applied to account for emotional response to three subjectively chosen excerpts of strongly emotion-inducing music: Namely...

  16. What does music express? Basic emotions and beyond

    Directory of Open Access Journals (Sweden)

    Patrik N. Juslin

    2013-09-01

    Full Text Available Numerous studies have investigated whether music can reliably convey emotions to listeners, and - if so - what musical parameters might carry this information. Far less attention has been devoted to the actual contents of the communicative process. The goal of this article is thus to consider what types of emotional content are possible to convey in music. I will argue that the content is mainly constrained by the type of coding involved, and that distinct types of content are related to different types of coding. Based on these premises, I suggest a conceptualization in terms of ‘multiple layers’ of musical expression of emotions. The ‘core’ layer is constituted by iconically-coded basic emotions. I attempt to clarify the meaning of this concept, dispel the myths that surround it, and provide examples of how it can be heuristic in explaining findings in this domain. However, I also propose that this ‘core’ layer may be extended, qualified, and even modified by additional layers of expression that involve intrinsic and associative coding. These layers enable listeners to perceive more complex emotions - though the expressions are less cross-culturally invariant and more dependent on the social context and/or the individual listener. This multiple-layer conceptualization of expression in music can help to explain both similarities and differences between vocal and musical expression of emotions.

  17. What does music express? Basic emotions and beyond.

    Science.gov (United States)

    Juslin, Patrik N

    2013-01-01

    Numerous studies have investigated whether music can reliably convey emotions to listeners, and-if so-what musical parameters might carry this information. Far less attention has been devoted to the actual contents of the communicative process. The goal of this article is thus to consider what types of emotional content are possible to convey in music. I will argue that the content is mainly constrained by the type of coding involved, and that distinct types of content are related to different types of coding. Based on these premises, I suggest a conceptualization in terms of "multiple layers" of musical expression of emotions. The "core" layer is constituted by iconically-coded basic emotions. I attempt to clarify the meaning of this concept, dispel the myths that surround it, and provide examples of how it can be heuristic in explaining findings in this domain. However, I also propose that this "core" layer may be extended, qualified, and even modified by additional layers of expression that involve intrinsic and associative coding. These layers enable listeners to perceive more complex emotions-though the expressions are less cross-culturally invariant and more dependent on the social context and/or the individual listener. This multiple-layer conceptualization of expression in music can help to explain both similarities and differences between vocal and musical expression of emotions.

  18. Emotion detection model of Filipino music

    Science.gov (United States)

    Noblejas, Kathleen Alexis; Isidro, Daryl Arvin; Samonte, Mary Jane C.

    2017-02-01

    This research explored the creation of a model to detect emotion in Filipino songs. The emotion model used was based on Paul Ekman's six basic emotions. The songs were classified into the following genres: kundiman, novelty, pop, and rock. The songs were annotated by a group of music experts based on the emotion each song induces in the listener. Musical features of the songs were extracted using jAudio, while the lyric features were extracted using a Bag-of-Words feature representation. The audio and lyric features of the Filipino songs were extracted for classification by three chosen classifiers: Naïve Bayes, Support Vector Machines, and k-Nearest Neighbors. The goal of the research was to know which classifier would work best for Filipino music. Evaluation was done by 10-fold cross validation, and accuracy, precision, recall, and F-measure results were compared. Models were also tested with unknown test data to further determine the models' accuracy through the prediction results.
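    A hedged sketch of the classifier comparison described in this record (Naïve Bayes, SVM, and k-NN evaluated with 10-fold cross-validation) is given below, using random stand-ins for the extracted audio and lyric features; the feature sizes and labels are invented.

```python
# Compare three classifiers with 10-fold cross-validation, mirroring the setup
# named in the abstract. Features and labels are random placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)
X_audio = rng.standard_normal((300, 40))       # stand-in for jAudio-style features
X_lyrics = rng.standard_normal((300, 100))     # stand-in for bag-of-words counts
X = np.hstack([X_audio, X_lyrics])
y = rng.integers(0, 6, 300)                    # six basic-emotion labels

classifiers = {
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="linear"),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    print(f"{name}: mean 10-fold accuracy = {scores.mean():.2f}")
```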

  19. The Role of Emotional Skills in Music Education

    Science.gov (United States)

    Campayo-Muñoz, Emilia-Ángeles; Cabedo-Mas, Alberto

    2017-01-01

    Developing emotional skills is one of the challenges that concern teachers and researchers in education, since these skills promote well-being and enhance cognitive performance. Music is an excellent tool with which to express emotions and for this reason music education should play a role in individuals' emotional development. This paper reviews…

  20. Modeling Music Emotion Judgments Using Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Naresh N. Vempala

    2018-01-01

    Full Text Available Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments, inclusive of neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches that were used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.
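    One way to realize the committee-machine idea mentioned above is to average the predictions of several independently trained neural-network regressors. The sketch below does this with scikit-learn on synthetic data; it is not the study's models, and the feature vectors, ratings, and network sizes are arbitrary.

```python
# A small "committee" of MLP regressors whose predictions are averaged.
# Data and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.standard_normal((240, 20))                            # stand-in feature vectors
y = X[:, :3].sum(axis=1) + 0.2 * rng.standard_normal(240)     # e.g., arousal ratings

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

committee = [
    MLPRegressor(hidden_layer_sizes=(16,), random_state=seed, max_iter=2000).fit(
        X_train, y_train)
    for seed in range(5)
]
ensemble_pred = np.mean([net.predict(X_test) for net in committee], axis=0)
rmse = np.sqrt(np.mean((ensemble_pred - y_test) ** 2))
print(f"Committee RMSE on held-out data: {rmse:.3f}")
```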

  1. Music Mood Player Implementation Applied in Daycare Using Self Organizing Map Method

    OpenAIRE

    Dewi, Kadek Cahya; Putri, Luh Arida Ayu Rahning

    2011-01-01

    Music is an art, an entertainment, and a human activity that involves organized sounds. Music is closely related to human psychology. A piece of music is often associated with certain adjectives such as happy, sad, romantic, and many more. The link between music and a certain mood has been widely exploited by people on various occasions; therefore, music classification based on relevance to a particular emotion is important. Daycare is one example of an institution that uses music as therapy or...

  2. Emotional expressions in voice and music: same code, same effect?

    Science.gov (United States)

    Escoffier, Nicolas; Zhong, Jidan; Schirmer, Annett; Qiu, Anqi

    2013-08-01

    Scholars have documented similarities in the way voice and music convey emotions. By using functional magnetic resonance imaging (fMRI) we explored whether these similarities imply overlapping processing substrates. We asked participants to trace changes in either the emotion or pitch of vocalizations and music using a joystick. Compared to music, vocalizations more strongly activated superior and middle temporal cortex, cuneus, and precuneus. However, despite these differences, overlapping rather than differing regions emerged when comparing emotion with pitch tracing for music and vocalizations, respectively. Relative to pitch tracing, emotion tracing activated medial superior frontal and anterior cingulate cortex regardless of stimulus type. Additionally, we observed emotion specific effects in primary and secondary auditory cortex as well as in medial frontal cortex that were comparable for voice and music. Together these results indicate that similar mechanisms support emotional inferences from vocalizations and music and that these mechanisms tap on a general system involved in social cognition. Copyright © 2011 Wiley Periodicals, Inc.

  3. Processing of emotional faces in congenital amusia: An emotional music priming event-related potential study.

    Science.gov (United States)

    Zhishuai, Jin; Hong, Liu; Daxing, Wu; Pin, Zhang; Xuejing, Lu

    2017-01-01

    Congenital amusia is characterized by lifelong impairments in music perception and processing. It is unclear whether pitch detection deficits impact amusic individuals' perception of musical emotion. In the current work, 19 amusics and 21 healthy controls were subjected to electroencephalography (EEG) while being exposed to music excerpts and emotional faces. We assessed each individual's ability to discriminate positive- and negative-valenced emotional faces and analyzed electrophysiological indices, in the form of event-related potentials (ERPs) recorded at 32 sites, following exposure to emotionally positive or negative music excerpts. We observed smaller N2 amplitudes in response to facial expressions in the amusia group than in the control group, suggesting that amusics were less affected by the musical stimuli. The late-positive component (LPC) in amusics was similar to that in controls. Our results suggest that the neurocognitive deficit characteristic of congenital amusia is fundamentally an impairment in musical information processing rather than an impairment in emotional processing.

  4. Processing of emotional faces in congenital amusia: An emotional music priming event-related potential study

    Directory of Open Access Journals (Sweden)

    Jin Zhishuai

    2017-01-01

    Full Text Available Congenital amusia is characterized by lifelong impairments in music perception and processing. It is unclear whether pitch detection deficits impact amusic individuals' perception of musical emotion. In the current work, 19 amusics and 21 healthy controls were subjected to electroencephalography (EEG) while being exposed to music excerpts and emotional faces. We assessed each individual's ability to discriminate positive- and negative-valenced emotional faces and analyzed electrophysiological indices, in the form of event-related potentials (ERPs) recorded at 32 sites, following exposure to emotionally positive or negative music excerpts. We observed smaller N2 amplitudes in response to facial expressions in the amusia group than in the control group, suggesting that amusics were less affected by the musical stimuli. The late-positive component (LPC) in amusics was similar to that in controls. Our results suggest that the neurocognitive deficit characteristic of congenital amusia is fundamentally an impairment in musical information processing rather than an impairment in emotional processing.

  5. Regulating sadness and fear from outside and within: mothers' emotion socialization and adolescents' parasympathetic regulation predict the development of internalizing difficulties.

    Science.gov (United States)

    Hastings, Paul D; Klimes-Dougan, Bonnie; Kendziora, Kimberly T; Brand, Ann; Zahn-Waxler, Carolyn

    2014-11-01

    Multilevel models of developmental psychopathology implicate both characteristics of the individual and their rearing environment in the etiology of internalizing problems and disorders. Maladaptive regulation of fear and sadness, the core of anxiety and depression, arises from the conjoint influences of ineffective parasympathetic regulation of emotion and ineffective emotion socialization experiences. In 171 youths (84 female, M = 13.69 years, SD = 1.84), we measured changes of respiratory sinus arrhythmia (RSA) in response to sadness- and fear-inducing film clips and maternal supportive and punitive responses to youths' internalizing emotions. Youths and mothers reported on youths' internalizing problems and anxiety and depression symptoms concurrently and 2 years later at Time 2. Maternal supportive emotion socialization predicted fewer, and punitive socialization predicted more, mother-reported internalizing problems at Time 2 only for youths who showed RSA suppression to fear-inducing films. More RSA suppression to sadness-inducing films predicted more youth-reported internalizing problems at Time 2 in girls only. In addition, less supportive emotion socialization predicted more youth-reported depression symptoms at Time 2 only for girls who showed more RSA suppression to sadness. RSA suppression to sadness versus fear might reflect different patterns of atypical parasympathetic regulation of emotional arousal, both of which increase the risk for internalizing difficulties in youths, and especially girls, who lack maternal support for regulating emotions.

  6. Experience Changes How Emotion in Music Is Judged: Evidence from Children Listening with Bilateral Cochlear Implants, Bimodal Devices, and Normal Hearing.

    Directory of Open Access Journals (Sweden)

    Sara Giannantonio

    Full Text Available Children using unilateral cochlear implants abnormally rely on tempo rather than mode cues to distinguish whether a musical piece is happy or sad. This led us to question how this judgment is affected by the type of experience in early auditory development. We hypothesized that judgments of the emotional content of music would vary by the type and duration of access to sound in early life due to deafness, altered perception of musical cues through new ways of using auditory prostheses bilaterally, and formal music training during childhood. Seventy-five participants completed the Montreal Emotion Identification Test. Thirty-three had normal hearing (aged 6.6 to 40.0 years) and 42 children had hearing loss and used bilateral auditory prostheses (31 bilaterally implanted and 11 unilaterally implanted with contralateral hearing aid use). Reaction time and accuracy were measured. Accurate judgment of emotion in music was achieved across ages and musical experience. Musical training accentuated the reliance on mode cues which developed with age in the normal hearing group. Degrading pitch cues through cochlear implant-mediated hearing induced greater reliance on tempo cues, but mode cues grew in salience when at least partial acoustic information was available through some residual hearing in the contralateral ear. Finally, when pitch cues were experimentally distorted to represent cochlear implant hearing, individuals with normal hearing (including those with musical training) switched to an abnormal dependence on tempo cues. The data indicate that, in a western culture, access to acoustic hearing in early life promotes a preference for mode rather than tempo cues which is enhanced by musical training. The challenge to these preferred strategies during cochlear implant hearing (simulated and real), regardless of musical training, suggests that access to pitch cues for children with hearing loss must be improved by preservation of residual hearing and

  7. Experience Changes How Emotion in Music Is Judged: Evidence from Children Listening with Bilateral Cochlear Implants, Bimodal Devices, and Normal Hearing

    Science.gov (United States)

    Papsin, Blake C.; Paludetti, Gaetano; Gordon, Karen A.

    2015-01-01

    Children using unilateral cochlear implants abnormally rely on tempo rather than mode cues to distinguish whether a musical piece is happy or sad. This led us to question how this judgment is affected by the type of experience in early auditory development. We hypothesized that judgments of the emotional content of music would vary by the type and duration of access to sound in early life due to deafness, altered perception of musical cues through new ways of using auditory prostheses bilaterally, and formal music training during childhood. Seventy-five participants completed the Montreal Emotion Identification Test. Thirty-three had normal hearing (aged 6.6 to 40.0 years) and 42 children had hearing loss and used bilateral auditory prostheses (31 bilaterally implanted and 11 unilaterally implanted with contralateral hearing aid use). Reaction time and accuracy were measured. Accurate judgment of emotion in music was achieved across ages and musical experience. Musical training accentuated the reliance on mode cues which developed with age in the normal hearing group. Degrading pitch cues through cochlear implant-mediated hearing induced greater reliance on tempo cues, but mode cues grew in salience when at least partial acoustic information was available through some residual hearing in the contralateral ear. Finally, when pitch cues were experimentally distorted to represent cochlear implant hearing, individuals with normal hearing (including those with musical training) switched to an abnormal dependence on tempo cues. The data indicate that, in a western culture, access to acoustic hearing in early life promotes a preference for mode rather than tempo cues which is enhanced by musical training. The challenge to these preferred strategies during cochlear implant hearing (simulated and real), regardless of musical training, suggests that access to pitch cues for children with hearing loss must be improved by preservation of residual hearing and improvements in

  8. Music as Emotional Self-Regulation throughout Adulthood

    Science.gov (United States)

    Saarikallio, Suvi

    2011-01-01

    Emotional self-regulation is acknowledged as one of the most important reasons for musical engagement at all ages. Yet there is little knowledge on how this self-regulatory use of music develops across the life span. A qualitative study was conducted to initially explore central processes and strategies of the emotional self-regulation during…

  9. Multimodal Analysis of Piano Performances Portraying Different Emotions

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Frimodt-Møller, Søren

    2013-01-01

    changes, and 3 times where the music was intended to portray the emotions happy, sad and angry, respectively. Motion-capture data from all of the performances was recorded alongside the audio. We analyze differences in the data for the different emotions, both with respect to the size and shape...

  10. Emotional cues, emotional signals, and their contrasting effects on listener valence

    DEFF Research Database (Denmark)

    Christensen, Justin

    2015-01-01

    [...] and of benefit to both the sender and the receiver of the signal, otherwise they would cease to have the intended effect of communication. In contrast with signals, animal cues are much more commonly unimodal as they are unintentional by the sender. In my research, I investigate whether subjects exhibit [...] are more emotional cues (e.g. sadness or calmness). My hypothesis is that musical and sound stimuli that are mimetic of emotional signals should combine to elicit a stronger response when presented as a multimodal stimulus as opposed to as a unimodal stimulus, whereas musical or sound stimuli that are mimetic of emotional cues interact in less clear and less cohesive manners with their corresponding haptic signals. For my investigations, subjects listen to samples from the International Affective Digital Sounds Library[2] and selected musical works on speakers in combination with a tactile transducer [...]

  11. (A)musicality in Williams syndrome: Examining relationships among auditory perception, musical skill, and emotional responsiveness to music

    Directory of Open Access Journals (Sweden)

    Miriam Lense

    2013-08-01

    Full Text Available Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and typically developing individuals with and without amusia.

  12. What does music express? Basic emotions and beyond

    Science.gov (United States)

    Juslin, Patrik N.

    2013-01-01

    Numerous studies have investigated whether music can reliably convey emotions to listeners, and—if so—what musical parameters might carry this information. Far less attention has been devoted to the actual contents of the communicative process. The goal of this article is thus to consider what types of emotional content are possible to convey in music. I will argue that the content is mainly constrained by the type of coding involved, and that distinct types of content are related to different types of coding. Based on these premises, I suggest a conceptualization in terms of “multiple layers” of musical expression of emotions. The “core” layer is constituted by iconically-coded basic emotions. I attempt to clarify the meaning of this concept, dispel the myths that surround it, and provide examples of how it can be heuristic in explaining findings in this domain. However, I also propose that this “core” layer may be extended, qualified, and even modified by additional layers of expression that involve intrinsic and associative coding. These layers enable listeners to perceive more complex emotions—though the expressions are less cross-culturally invariant and more dependent on the social context and/or the individual listener. This multiple-layer conceptualization of expression in music can help to explain both similarities and differences between vocal and musical expression of emotions. PMID:24046758

  13. Emotional reactions to music: psychophysiological correlates and applications to affective disorders

    OpenAIRE

    Kalda, Tiina

    2013-01-01

    Music has been used to evoke emotions for centuries. The mechanisms underlying this effect have remained largely unclear. This thesis contributes to research on how music evokes emotions by investigating two mechanisms from the model of Juslin and Västfjäll (2008) - musical expectancy and emotional contagion. In the perception studies the focus is on how musical expectancy violations are detected by either musically trained or untrained individuals. In the music-making studies, we concentr...

  14. Learning Combinations of Multiple Feature Representations for Music Emotion Prediction

    DEFF Research Database (Denmark)

    Madsen, Jens; Jensen, Bjørn Sand; Larsen, Jan

    2015-01-01

    Music consists of several structures and patterns evolving through time which greatly influence the human decoding of higher-level cognitive aspects of music like the emotions expressed in music. For tasks such as genre, tag and emotion recognition, these structures have often been identified and used as individual and non-temporal features and representations. In this work, we address the hypothesis whether using multiple temporal and non-temporal representations of different features is beneficial for modeling music structure with the aim to predict the emotions expressed in music. We test...
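    The general idea of combining temporal and non-temporal representations can be illustrated by summarizing each frame-level feature track both by static statistics and by statistics of its frame-to-frame dynamics, then concatenating the two summaries. The sketch below does exactly that on placeholder data; it is not the paper's method, and the feature dimensions are assumptions.

```python
# Combine a non-temporal summary (means/stds of frames) with a temporal summary
# (means/stds of frame-to-frame differences) into one track-level representation.
# The frame-level features here are random placeholders.
import numpy as np

rng = np.random.default_rng(11)
frames = rng.standard_normal((430, 13))        # e.g., 430 frames x 13 MFCCs

def non_temporal(frames):
    """Static summary: per-feature mean and standard deviation."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

def temporal(frames):
    """Dynamic summary: mean and std of first-order frame differences (deltas)."""
    deltas = np.diff(frames, axis=0)
    return np.concatenate([deltas.mean(axis=0), deltas.std(axis=0)])

track_representation = np.concatenate([non_temporal(frames), temporal(frames)])
print(track_representation.shape)              # (52,) = 2 x (13 + 13)
```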

  15. Sad or Fearful? The Influence of Body Posture on Adults' and Children's Perception of Facial Displays of Emotion

    Science.gov (United States)

    Mondloch, Catherine J.

    2012-01-01

    The current research investigated the influence of body posture on adults' and children's perception of facial displays of emotion. In each of two experiments, participants categorized facial expressions that were presented on a body posture that was congruent (e.g., a sad face on a body posing sadness) or incongruent (e.g., a sad face on a body…

  16. The influence of self-generated emotions on physical performance: an investigation of happiness, anger, anxiety, and sadness.

    Science.gov (United States)

    Rathschlag, Marco; Memmert, Daniel

    2013-04-01

    The present study examined the relationship between self-generated emotions and physical performance. All participants took part in five emotion induction conditions (happiness, anger, anxiety, sadness, and an emotion-neutral state) and we investigated their influence on the force of the finger musculature (Experiment 1), the jump height of a counter-movement jump (Experiment 2), and the velocity of a thrown ball (Experiment 3). All experiments showed that participants could produce significantly better physical performances when recalling anger or happiness emotions in contrast to the emotion-neutral state. Experiments 1 and 2 also revealed that physical performance in the anger and the happiness conditions was significantly enhanced compared with the anxiety and the sadness conditions. Results are discussed in relation to the Lazarus (1991, 2000a) cognitive-motivational-relational (CMR) theory framework.

  17. Anticipated Coping with Interpersonal Stressors: Links with the Emotional Reactions of Sadness, Anger, and Fear

    Science.gov (United States)

    Zimmer-Gembeck, Melanie J.; Skinner, Ellen A.; Morris, Helen; Thomas, Rae

    2013-01-01

    The same stressor can evoke different emotions across individuals, and emotions can prompt certain coping responses. Responding to four videotaped interpersonal stressors, adolescents (N = 230, mean age = 10 years) reported their sadness, fear, and anger, and 12 coping strategies.…

  18. Emotion-based Music Retrieval on a Well-reduced Audio Feature Space

    DEFF Research Database (Denmark)

    Ruxanda, Maria Magdalena; Chua, Bee Yong; Nanopoulos, Alexandros

    2009-01-01

    Music expresses emotion. A number of audio extracted features have influence on the perceived emotional expression of music. These audio features generate a high-dimensional space, on which music similarity retrieval can be performed effectively with respect to human perception of the music emotion. However, real-time systems that retrieve music over large music databases can achieve an order of magnitude performance increase if multidimensional indexing is applied over a dimensionally reduced audio feature space. To meet this performance achievement, in this paper, extensive studies are conducted on a number of dimensionality reduction algorithms, including both classic and novel approaches. The paper clearly envisages which dimensionality reduction techniques on the considered audio feature space can preserve on average the accuracy of the emotion-based music retrieval.
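    A hedged sketch of the retrieval setup discussed in this record: reduce a high-dimensional audio feature space (here with PCA, one classic dimensionality-reduction technique) and answer similarity queries with nearest neighbours in the reduced space. The library size, feature dimensionality, and number of retained components are assumptions, and the data are random.

```python
# Dimensionality reduction followed by nearest-neighbour retrieval in the
# reduced audio feature space. All numbers are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(5)
library = rng.standard_normal((1000, 60))      # 1000 tracks x 60 audio features

pca = PCA(n_components=10).fit(library)        # dimensionality reduction step
library_reduced = pca.transform(library)

index = NearestNeighbors(n_neighbors=5).fit(library_reduced)

query = rng.standard_normal((1, 60))           # a query track's feature vector
_, neighbour_ids = index.kneighbors(pca.transform(query))
print("Most similar tracks in the reduced space:", neighbour_ids[0])
```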

  19. When the Wedding March becomes sad: Semantic memory impairment for music in the semantic variant of primary progressive aphasia.

    Science.gov (United States)

    Macoir, Joël; Berubé-Lalancette, Sarah; Wilson, Maximiliano A; Laforce, Robert; Hudon, Carol; Gravel, Pierre; Potvin, Olivier; Duchesne, Simon; Monetta, Laura

    2016-12-01

    Music can induce particular emotions and activate semantic knowledge. In the semantic variant of primary progressive aphasia (svPPA), semantic memory is impaired as a result of anterior temporal lobe (ATL) atrophy. Semantics is responsible for the encoding and retrieval of factual knowledge about music, including associative and emotional attributes. In the present study, we report the performance of two individuals with svPPA in three experiments: NG, with bilateral ATL atrophy, and ND, with atrophy largely restricted to the left ATL. Experiment 1 assessed the recognition of musical excerpts and both patients were unimpaired. Experiment 2 studied the emotions conveyed by music and only NG showed impaired performance. Experiment 3 tested the association of semantic concepts to musical excerpts and both patients were impaired. These results suggest that the right ATL seems essential for the recognition of emotions conveyed by music and that the left ATL is involved in binding music to semantics. They are in line with the notion that the ATLs are devoted to the binding of different modality-specific properties and suggest that they are also differentially involved in the processing of factual and emotional knowledge associated with music.

  20. EFFECTS OF MUSIC INTERVENTIONS ON EMOTIONAL STATES AND RUNNING PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Andrew M. Lane

    2011-06-01

    The present study compared the effects of two different music interventions on changes in emotional states before and during running, and also explored effects of music interventions upon performance outcome. Volunteer participants (n = 65) who regularly listened to music when running registered online to participate in a three-stage study. Participants attempted to attain a personally important running goal to establish baseline performance. Thereafter, participants were randomly assigned to either a self-selected music group or an Audiofuel music group. Audiofuel produce pieces of music designed to assist synchronous running. The self-selected music group followed guidelines for selecting motivating playlists. In both experimental groups, participants used the Brunel Music Rating Inventory-2 (BMRI-2) to facilitate selection of motivational music. Participants again completed the BMRI-2 post-intervention to assess the motivational qualities of Audiofuel music or the music they selected for use during the study. Results revealed no significant differences between self-selected music and Audiofuel music on all variables analyzed. Participants in both music groups reported increased pleasant emotions and decreased unpleasant emotions following intervention. Significant performance improvements were demonstrated post-intervention with participants reporting a belief that emotional states related to performance. Further analysis indicated that enhanced performance was significantly greater among participants reporting music to be motivational as indicated by high scores on the BMRI-2. Findings suggest that both individual athletes and practitioners should consider using the BMRI-2 when selecting music for running.

  1. Emotional memory for musical excerpts in young and older adults.

    OpenAIRE

    Irene Alonso; Delphine Dellacherie; Séverine Samson

    2015-01-01

    The emotions evoked by music can enhance recognition of excerpts. It has been suggested that memory is better for high than for low arousing music (Eschrich et al., 2005; Samson et al., 2009), but it remains unclear whether positively (Eschrich et al., 2008) or negatively valenced music (Aubé et al., 2013; Vieillard and Gilet, 2013) may be better recognized. Moreover, we still know very little about the influence of age on emotional memory for music. To address these issues, we tested emotion...

  2. Emotional memory for musical excerpts in young and older adults

    Science.gov (United States)

    Alonso, Irene; Dellacherie, Delphine; Samson, Séverine

    2015-01-01

    The emotions evoked by music can enhance recognition of excerpts. It has been suggested that memory is better for high than for low arousing music (Eschrich et al., 2005; Samson et al., 2009), but it remains unclear whether positively (Eschrich et al., 2008) or negatively valenced music (Aubé et al., 2013; Vieillard and Gilet, 2013) may be better recognized. Moreover, we still know very little about the influence of age on emotional memory for music. To address these issues, we tested emotional memory for music in young and older adults using musical excerpts varying in terms of arousal and valence. Participants completed immediate and 24 h delayed recognition tests. We predicted highly arousing excerpts to be better recognized by both groups in immediate recognition. We hypothesized that arousal may compensate consolidation deficits in aging, thus showing more prominent benefit of high over low arousing stimuli in older than younger adults on delayed recognition. We also hypothesized worst retention of negative excerpts for the older group, resulting in a recognition benefit for positive over negative excerpts specific to older adults. Our results suggest that although older adults had worse recognition than young adults overall, effects of emotion on memory do not seem to be modified by aging. Results on immediate recognition suggest that recognition of low arousing excerpts can be affected by valence, with better memory for positive relative to negative low arousing music. However, 24 h delayed recognition results demonstrate effects of emotion on memory consolidation regardless of age, with a recognition benefit for high arousal and for negatively valenced music. The present study highlights the role of emotion on memory consolidation. Findings are examined in light of the literature on emotional memory for music and for other stimuli. We finally discuss the implication of the present results for potential music interventions in aging and dementia.

  3. Emotional memory for musical excerpts in young and older adults.

    Science.gov (United States)

    Alonso, Irene; Dellacherie, Delphine; Samson, Séverine

    2015-01-01

    The emotions evoked by music can enhance recognition of excerpts. It has been suggested that memory is better for high than for low arousing music (Eschrich et al., 2005; Samson et al., 2009), but it remains unclear whether positively (Eschrich et al., 2008) or negatively valenced music (Aubé et al., 2013; Vieillard and Gilet, 2013) may be better recognized. Moreover, we still know very little about the influence of age on emotional memory for music. To address these issues, we tested emotional memory for music in young and older adults using musical excerpts varying in terms of arousal and valence. Participants completed immediate and 24 h delayed recognition tests. We predicted highly arousing excerpts to be better recognized by both groups in immediate recognition. We hypothesized that arousal may compensate consolidation deficits in aging, thus showing more prominent benefit of high over low arousing stimuli in older than younger adults on delayed recognition. We also hypothesized worst retention of negative excerpts for the older group, resulting in a recognition benefit for positive over negative excerpts specific to older adults. Our results suggest that although older adults had worse recognition than young adults overall, effects of emotion on memory do not seem to be modified by aging. Results on immediate recognition suggest that recognition of low arousing excerpts can be affected by valence, with better memory for positive relative to negative low arousing music. However, 24 h delayed recognition results demonstrate effects of emotion on memory consolidation regardless of age, with a recognition benefit for high arousal and for negatively valenced music. The present study highlights the role of emotion on memory consolidation. Findings are examined in light of the literature on emotional memory for music and for other stimuli. We finally discuss the implication of the present results for potential music interventions in aging and dementia.

  4. Emotional memory for musical excerpts in young and older adults.

    Directory of Open Access Journals (Sweden)

    Irene Alonso

    2015-03-01

    The emotions evoked by music can enhance recognition of excerpts. It has been suggested that memory is better for high than for low arousing music (Eschrich et al., 2005; Samson et al., 2009), but it remains unclear whether positively (Eschrich et al., 2008) or negatively valenced music (Aubé et al., 2013; Vieillard and Gilet, 2013) may be better recognized. Moreover, we still know very little about the influence of age on emotional memory for music. To address these issues, we tested emotional memory for music in young and older adults using musical excerpts varying in terms of arousal and valence. Participants completed immediate and 24 h delayed recognition tests. We predicted highly arousing excerpts to be better recognized by both groups in immediate recognition. We hypothesized that arousal may compensate consolidation deficits in aging, thus showing more prominent benefit of high over low arousing stimuli in older than younger adults on delayed recognition. We also hypothesized worst retention of negative excerpts for the older group, resulting in a recognition benefit for positive over negative excerpts specific to older adults. Our results suggest that although older adults had worse recognition than young adults overall, effects of emotion on memory do not seem to be modified by aging. Results on immediate recognition suggest that recognition of low arousing excerpts can be affected by valence, with better memory for positive relative to negative low arousing music. However, 24 h delayed recognition results demonstrate effects of emotion on memory consolidation regardless of age, with a recognition benefit for high arousal and for negatively valenced music. The present study highlights the role of emotion on memory consolidation. Findings are examined in light of the literature on emotional memory for music and for other stimuli. We finally discuss the implication of the present results for potential music interventions in aging and dementia.

  5. An Age-Related Mechanism of Emotion Regulation: Regulating Sadness Promotes Children's Learning by Broadening Information Processing

    Science.gov (United States)

    Davis, Elizabeth L.

    2016-01-01

    Emotion regulation predicts positive academic outcomes like learning, but little is known about "why". Effective emotion regulation likely promotes learning by broadening the scope of what may be attended to after an emotional event. One hundred twenty-six 6- to 13-year-olds' (54% boys) regulation of sadness was examined for changes in…

  6. Animal signals and emotion in music: Coordinating affect across groups

    Directory of Open Access Journals (Sweden)

    Gregory A. Bryant

    2013-12-01

    Researchers studying the emotional impact of music have not traditionally been concerned with the principled relationship between form and function in evolved animal signals. The acoustic structure of musical forms is related in important ways to emotion perception, and thus research on nonhuman animal vocalizations is relevant for understanding emotion in music. Musical behavior occurs in cultural contexts that include many other coordinated activities which mark group identity, and can allow people to communicate within and between social alliances. The emotional impact of music might be best understood as a proximate mechanism serving an ultimately social function. Here I describe recent work that reveals intimate connections between properties of certain animal signals and evocative aspects of human music, including (1) examinations of the role of nonlinearities (e.g., broadband noise) in nonhuman animal vocalizations, and the analogous production and perception of these features in human music, and (2) an analysis of group musical performances and possible relationships to nonhuman animal chorusing and emotional contagion effects. Communicative features in music are likely due primarily to evolutionary byproducts of phylogenetically older, but still intact communication systems. But in some cases, such as the coordinated rhythmic sounds produced by groups of musicians, our appreciation and emotional engagement might be due to the operation of an adaptive social signaling system. Future empirical work should examine human musical behavior through the comparative lens of behavioral ecology and an adaptationist cognitive science. By this view, particular coordinated sound combinations generated by musicians exploit evolved perceptual response biases—many shared across species—and proliferate through cultural evolutionary processes.

  7. Psychoacoustic cues to emotion in speech prosody and music.

    Science.gov (United States)

    Coutinho, Eduardo; Dibben, Nicola

    2013-01-01

    There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music close to the responses of human subjects. We show that a significant part of the listeners' second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
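
As a rough illustration of this kind of modelling (not the authors' code), a few of the listed features can be approximated with librosa and regressed onto continuous ratings; loudness is proxied by RMS energy and spectral flux by onset strength, while tempo/speech rate, melodic contour, sharpness and roughness are omitted because they need dedicated estimators. The training data below are synthetic placeholders.

```python
# Sketch under stated assumptions, not the authors' model: approximate three of
# the psychoacoustic predictors with librosa and regress continuous ratings on them.
import numpy as np
import librosa
from sklearn.linear_model import Ridge

def frame_features(path):
    y, sr = librosa.load(path, sr=22050, mono=True)
    rms = librosa.feature.rms(y=y)[0]                       # crude loudness proxy
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
    flux = librosa.onset.onset_strength(y=y, sr=sr)         # spectral-flux-like novelty curve
    n = min(len(rms), len(centroid), len(flux))
    return np.column_stack([rms[:n], centroid[:n], flux[:n]])

# In a real study, X would stack frame_features() over annotated excerpts and y
# would hold the time-aligned ratings; here both are synthetic placeholders.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = X @ np.array([0.8, 0.3, 0.5]) + 0.1 * rng.normal(size=500)
model = Ridge(alpha=1.0).fit(X, y)
print("in-sample R^2:", round(model.score(X, y), 3))
```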

  8. Studying induced musical emotion via a corpus of annotations collected through crowd‐sourcing

    NARCIS (Netherlands)

    Aljanaki, Anna; Wiering, Frans; Veltkamp, Remco

    2014-01-01

    One of the major reasons why music is so enjoyable is its emotional impact. For many people, music is an important everyday aid to emotional regulation. As such, music is used by music therapists and in the entertainment industry. Recently, mechanisms of emotional induction through music received a…

  9. Brain correlates of musical and facial emotion recognition: evidence from the dementias.

    Science.gov (United States)

    Hsieh, S; Hornberger, M; Piguet, O; Hodges, J R

    2012-07-01

    The recognition of facial expressions of emotion is impaired in semantic dementia (SD) and is associated with right-sided brain atrophy in areas known to be involved in emotion processing, notably the amygdala. Whether patients with SD also experience difficulty recognizing emotions conveyed by other media, such as music, is unclear. Prior studies have used excerpts of known music from classical or film repertoire but not unfamiliar melodies designed to convey distinct emotions. Patients with SD (n = 11), Alzheimer's disease (n = 12) and healthy control participants (n = 20) underwent tests of emotion recognition in two modalities: unfamiliar musical tunes and unknown faces as well as volumetric MRI. Patients with SD were most impaired with the recognition of facial and musical emotions, particularly for negative emotions. Voxel-based morphometry showed that the labelling of emotions, regardless of modality, correlated with the degree of atrophy in the right temporal pole, amygdala and insula. The recognition of musical (but not facial) emotions was also associated with atrophy of the left anterior and inferior temporal lobe, which overlapped with regions correlating with standardized measures of verbal semantic memory. These findings highlight the common neural substrates supporting the processing of emotions by facial and musical stimuli but also indicate that the recognition of emotions from music draws upon brain regions that are associated with semantics in language. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Neural activation associated with the cognitive emotion regulation of sadness in healthy children

    Directory of Open Access Journals (Sweden)

    Andy C. Belden

    2014-07-01

    When used effectively, cognitive reappraisal of distressing events is a highly adaptive cognitive emotion regulation (CER) strategy, with impairments in cognitive reappraisal associated with greater risk for psychopathology. Despite extensive literature examining the neural correlates of cognitive reappraisal in healthy and psychiatrically ill adults, there is a dearth of data to inform the neural bases of CER in children, a key gap in the literature necessary to map the developmental trajectory of cognitive reappraisal. In this fMRI study, psychiatrically healthy schoolchildren were instructed to use cognitive reappraisal to modulate their emotional reactions and responses of negative affect after viewing sad photos. Consistent with the adult literature, when actively engaged in reappraisal compared to passively viewing sad photos, children showed increased activation in the vlPFC, dlPFC, and dmPFC as well as in parietal and temporal lobe regions. When children used cognitive reappraisal to minimize their experience of negative affect after viewing sad stimuli they exhibited dampened amygdala responses. Results are discussed in relation to the importance of identifying and characterizing neural processes underlying adaptive CER strategies in typically developing children in order to understand how these systems go awry and relate to the risk and occurrence of affective disorders.

  11. Investigating emotional top down modulation of ambiguous faces by single pulse TMS on early visual cortices

    Directory of Open Access Journals (Sweden)

    Zachary Adam Yaple

    2016-06-01

    Top-down processing is a mechanism in which memory, context and expectation are used to perceive stimuli. For this study we investigated how emotion content, induced by music mood, influences perception of happy and sad emoticons. Using single pulse TMS we stimulated right occipital face area (rOFA), primary visual cortex (V1), and vertex while subjects performed a face-detection task and listened to happy and sad music. At baseline, incongruent audio-visual pairings decreased performance, demonstrating dependence of emotion while perceiving ambiguous faces. However, performance of face identification decreased during rOFA stimulation regardless of emotional content. No effects were found between Cz and V1 stimulation. These results suggest that while rOFA is important for processing faces regardless of emotion, V1 stimulation had no effect. Our findings suggest that early visual cortex activity may not integrate emotional auditory information with visual information during emotion top-down modulation of faces.

  12. Musical emotions: predicting second-by-second subjective feelings of emotion from low-level psychoacoustic features and physiological measurements.

    Science.gov (United States)

    Coutinho, Eduardo; Cangelosi, Angelo

    2011-08-01

    We sustain that the structure of affect elicited by music is largely dependent on dynamic temporal patterns in low-level music structural parameters. In support of this claim, we have previously provided evidence that spatiotemporal dynamics in psychoacoustic features resonate with two psychological dimensions of affect underlying judgments of subjective feelings: arousal and valence. In this article we extend our previous investigations in two aspects. First, we focus on the emotions experienced rather than perceived while listening to music. Second, we evaluate the extent to which peripheral feedback in music can account for the predicted emotional responses, that is, the role of physiological arousal in determining the intensity and valence of musical emotions. Akin to our previous findings, we will show that a significant part of the listeners' reported emotions can be predicted from a set of six psychoacoustic features--loudness, pitch level, pitch contour, tempo, texture, and sharpness. Furthermore, the accuracy of those predictions is improved with the inclusion of physiological cues--skin conductance and heart rate. The interdisciplinary work presented here provides a new methodology to the field of music and emotion research based on the combination of computational and experimental work, which aid the analysis of the emotional responses to music, while offering a platform for the abstract representation of those complex relationships. Future developments may aid specific areas, such as, psychology and music therapy, by providing coherent descriptions of the emotional effects of specific music stimuli. 2011 APA, all rights reserved
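
A minimal, synthetic-data sketch of the modelling idea described here: fit a regression from psychoacoustic predictors to a continuous emotion rating, then refit with skin conductance and heart rate added and compare the cross-validated fit. The feature names in the comments are those listed in the abstract; everything else is an assumption.

```python
# Minimal sketch (synthetic data): compare a model using psychoacoustic
# features alone with one that also includes peripheral physiological cues.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 600
acoustic = rng.normal(size=(n, 6))    # stand-ins for loudness, pitch level, pitch contour, tempo, texture, sharpness
physio = rng.normal(size=(n, 2))      # stand-ins for skin conductance and heart rate
arousal = acoustic @ rng.normal(size=6) + 0.6 * physio[:, 0] + 0.1 * rng.normal(size=n)

r2_acoustic = cross_val_score(LinearRegression(), acoustic, arousal, cv=5).mean()
r2_combined = cross_val_score(LinearRegression(), np.hstack([acoustic, physio]), arousal, cv=5).mean()
print(f"acoustic only: {r2_acoustic:.2f}  acoustic + physiology: {r2_combined:.2f}")
```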

  13. From Sound to Significance: Exploring the Mechanisms Underlying Emotional Reactions to Music.

    Science.gov (United States)

    Juslin, Patrik N; Barradas, Gonçalo; Eerola, Tuomas

    2015-01-01

    A common approach to studying emotional reactions to music is to attempt to obtain direct links between musical surface features such as tempo and a listener's responses. However, such an analysis ultimately fails to explain why emotions are aroused in the listener. In this article we explore an alternative approach, which aims to account for musical emotions in terms of a set of psychological mechanisms that are activated by different types of information in a musical event. This approach was tested in 4 experiments that manipulated 4 mechanisms (brain stem reflex, contagion, episodic memory, musical expectancy) by selecting existing musical pieces that featured information relevant for each mechanism. The excerpts were played to 60 listeners, who were asked to rate their felt emotions on 15 scales. Skin conductance levels and facial expressions were measured, and listeners reported subjective impressions of relevance to specific mechanisms. Results indicated that the target mechanism conditions evoked emotions largely as predicted by a multimechanism framework and that mostly similar effects occurred across the experiments that included different pieces of music. We conclude that a satisfactory account of musical emotions requires consideration of how musical features and responses are mediated by a range of underlying mechanisms.

  14. Emotional Readiness and Music Therapeutic Activities

    Science.gov (United States)

    Drossinou-Korea, Maria; Fragkouli, Aspasia

    2016-01-01

    The purpose of this study is to understand the children's expression with verbal and nonverbal communication in the Autistic spectrum. We study the emotional readiness and the music therapeutic activities which exploit the elements of music. The method followed focused on the research field of special needs education. Assumptions on the parameters…

  15. Fear, Sadness and Hope: Which Emotions Maximize Impact of Anti-Tobacco Mass Media Advertisements among Lower and Higher SES Groups?

    Science.gov (United States)

    Durkin, Sarah; Bayly, Megan; Brennan, Emily; Biener, Lois; Wakefield, Melanie

    2018-01-01

    Emotive anti-tobacco advertisements can increase quitting. Discrete emotion theories suggest evoking fear may be more effective than sadness; less research has focused on hope. A weekly cross-sectional survey of smokers and recent quitters (N = 7683) measured past-month quit attempts. The main predictor was level of exposure to four different types of anti-tobacco advertisements broadcast in the two months prior to quit attempts: advertisements predominantly evoking fear, sadness, hope, or evoking multiple negative emotions (i.e., fear, guilt, and/or sadness). Greater exposure to fear-evoking advertisements (OR = 2.16, p < .01) increased odds of making a quit attempt and showed similar effectiveness among those in lower and higher SES areas. Greater exposure to advertisements evoking multiple negative emotions increased quit attempts (OR = 1.70, p < .01), but interactions indicated this was driven by those in lower SES, but not higher SES areas. Greater exposure to hope-evoking advertisements enhanced effects of fear-evoking advertisements among those in higher SES, but not lower SES areas. Findings suggest that, to be maximally effective across the whole population, messages evoking sadness should be avoided and messages eliciting fear should be used. If the aim is to specifically motivate those living in lower SES areas where smoking rates are higher, multiple negative emotion messages, but not hope-evoking messages, may also be effective.

  16. Inferior Frontal Gyrus Activation Underlies the Perception of Emotions, While Precuneus Activation Underlies the Feeling of Emotions during Music Listening

    Science.gov (United States)

    Tabei, Ken-ichi

    2015-01-01

    While music triggers many physiological and psychological reactions, the underlying neural basis of perceived and experienced emotions during music listening remains poorly understood. Therefore, using functional magnetic resonance imaging (fMRI), I conducted a comparative study of the different brain areas involved in perceiving and feeling emotions during music listening. I measured fMRI signals while participants assessed the emotional expression of music (perceived emotion) and their emotional responses to music (felt emotion). I found that cortical areas including the prefrontal, auditory, cingulate, and posterior parietal cortices were consistently activated by the perceived and felt emotional tasks. Moreover, activity in the inferior frontal gyrus increased more during the perceived emotion task than during a passive listening task. In addition, the precuneus showed greater activity during the felt emotion task than during a passive listening task. The findings reveal that the bilateral inferior frontal gyri and the precuneus are important areas for the perception of the emotional content of music as well as for the emotional response evoked in the listener. Furthermore, I propose that the precuneus, a brain region associated with self-representation, might be involved in assessing emotional responses. PMID:26504353

  17. Inferior Frontal Gyrus Activation Underlies the Perception of Emotions, While Precuneus Activation Underlies the Feeling of Emotions during Music Listening.

    Science.gov (United States)

    Tabei, Ken-ichi

    2015-01-01

    While music triggers many physiological and psychological reactions, the underlying neural basis of perceived and experienced emotions during music listening remains poorly understood. Therefore, using functional magnetic resonance imaging (fMRI), I conducted a comparative study of the different brain areas involved in perceiving and feeling emotions during music listening. I measured fMRI signals while participants assessed the emotional expression of music (perceived emotion) and their emotional responses to music (felt emotion). I found that cortical areas including the prefrontal, auditory, cingulate, and posterior parietal cortices were consistently activated by the perceived and felt emotional tasks. Moreover, activity in the inferior frontal gyrus increased more during the perceived emotion task than during a passive listening task. In addition, the precuneus showed greater activity during the felt emotion task than during a passive listening task. The findings reveal that the bilateral inferior frontal gyri and the precuneus are important areas for the perception of the emotional content of music as well as for the emotional response evoked in the listener. Furthermore, I propose that the precuneus, a brain region associated with self-representation, might be involved in assessing emotional responses.

  18. Neural responses to nostalgia-evoking music modeled by elements of dynamic musical structure and individual differences in affective traits.

    Science.gov (United States)

    Barrett, Frederick S; Janata, Petr

    2016-10-01

    Nostalgia is an emotion that is most commonly associated with personally and socially relevant memories. It is primarily positive in valence and is readily evoked by music. It is also an idiosyncratic experience that varies between individuals based on affective traits. We identified frontal, limbic, paralimbic, and midbrain brain regions in which the strength of the relationship between ratings of nostalgia evoked by music and blood-oxygen-level-dependent (BOLD) signal was predicted by affective personality measures (nostalgia proneness and the sadness scale of the Affective Neuroscience Personality Scales) that are known to modulate the strength of nostalgic experiences. We also identified brain areas including the inferior frontal gyrus, substantia nigra, cerebellum, and insula in which time-varying BOLD activity correlated more strongly with the time-varying tonal structure of nostalgia-evoking music than with music that evoked no or little nostalgia. These findings illustrate one way in which the reward and emotion regulation networks of the brain are recruited during the experiencing of complex emotional experiences triggered by music. These findings also highlight the importance of considering individual differences when examining the neural responses to strong and idiosyncratic emotional experiences. Finally, these findings provide a further demonstration of the use of time-varying stimulus-specific information in the investigation of music-evoked experiences. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. The function of music in the development of empathy in children: the construction of the educational course “Music and well-being” and the evaluation of its effects

    Directory of Open Access Journals (Sweden)

    Giuseppe Sellari

    2011-12-01

    In the present research, the authors examined the contents and the methods of the educational course Music and well-being (which uses global musical activities based on listening and on vocal and instrumental production) in order to check its effectiveness in improving empathy in a group of four-year-old children. The results show that the training was effective in improving the empathic ability of children towards all emotions considered (joy, sadness, fear, anger) and above all towards emotions of negative hedonic tone.

  20. Neural correlates of sad feelings in healthy girls.

    Science.gov (United States)

    Lévesque, J; Joanette, Y; Mensour, B; Beaudoin, G; Leroux, J-M; Bourgouin, P; Beauregard, M

    2003-01-01

    Emotional development is indisputably one of the cornerstones of personality development during infancy. According to the differential emotions theory (DET), primary emotions are constituted of three distinct components: the neural-evaluative, the expressive, and the experiential. The DET further assumes that these three components are biologically based and functional nearly from birth. Such a view entails that the neural substrate of primary emotions must be similar in children and adults. Guided by this assumption of the DET, the present functional magnetic resonance imaging study was conducted to identify the neural correlates of sad feelings in healthy children. Fourteen healthy girls (aged 8-10) were scanned while they watched sad film excerpts aimed at externally inducing a transient state of sadness (activation task). Emotionally neutral film excerpts were also presented to the subjects (reference task). The subtraction of the brain activity measured during the viewing of the emotionally neutral film excerpts from that noted during the viewing of the sad film excerpts revealed that sad feelings were associated with significant bilateral activations of the midbrain, the medial prefrontal cortex (Brodmann area [BA] 10), and the anterior temporal pole (BA 21). A significant locus of activation was also noted in the right ventrolateral prefrontal cortex (BA 47). These results are compatible with those of previous functional neuroimaging studies of sadness in adults. They suggest that the neural substrate underlying the subjective experience of sadness is comparable in children and adults. Such a similitude provides empirical support to the DET assumption that the neural substrate of primary emotions is biologically based.

  1. Predictive Modeling of Expressed Emotions in Music Using Pairwise Comparisons

    DEFF Research Database (Denmark)

    Madsen, Jens; Jensen, Bjørn Sand; Larsen, Jan

    2013-01-01

    We introduce a two-alternative forced-choice (2AFC) experimental paradigm to quantify expressed emotions in music using the arousal and valence (AV) dimensions. A wide range of well-known audio features are investigated for predicting the expressed emotions in music using learning curves and essential baselines. We furthermore investigate the scalability issues of using 2AFC in quantifying emotions expressed in music on large-scale music databases. The possibility of dividing the annotation task between multiple individuals, while pooling individuals’ comparisons, is investigated by looking … comparisons at random by using learning curves. We show that a suitable predictive model of expressed valence in music can be achieved from only 15% of the total number of comparisons when using the Expected Value of Information (EVOI) active learning scheme. For the arousal dimension we require 9…
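
The abstract does not specify the predictive model, so the sketch below shows one common way to turn 2AFC judgments into per-excerpt scores: a Bradley-Terry-style model in which the probability of choosing excerpt a over b is a logistic function of their score difference, fit by gradient ascent on the log-likelihood. The EVOI active-learning scheme is not reproduced, and all data are toy data.

```python
# Illustrative Bradley-Terry-style model for 2AFC data (not the paper's exact model):
# each excerpt i gets a latent score v[i], and P(a chosen over b) = sigmoid(v[a] - v[b]).
import numpy as np

def fit_scores(pairs, wins, n_items, lr=0.05, epochs=500):
    """pairs: array of (a, b) index pairs; wins[k] = 1 if a was chosen, else 0."""
    v = np.zeros(n_items)
    for _ in range(epochs):
        diff = v[pairs[:, 0]] - v[pairs[:, 1]]
        p = 1.0 / (1.0 + np.exp(-diff))          # P(a preferred over b)
        grad = wins - p                          # gradient of the log-likelihood w.r.t. diff
        np.add.at(v, pairs[:, 0], lr * grad)
        np.add.at(v, pairs[:, 1], -lr * grad)
        v -= v.mean()                            # remove translation invariance
    return v

# Toy data: 4 excerpts with true ordering 3 > 2 > 1 > 0.
rng = np.random.default_rng(0)
pairs = rng.integers(0, 4, size=(400, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]
true_v = np.array([-1.5, -0.5, 0.5, 1.5])
wins = (rng.random(len(pairs)) < 1 / (1 + np.exp(-(true_v[pairs[:, 0]] - true_v[pairs[:, 1]])))).astype(float)
print(np.round(fit_scores(pairs, wins, 4), 2))
```

With real annotations, one such model per dimension (valence and arousal) would yield per-excerpt AV estimates of the kind the abstract describes.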

  2. Neural Processing of Emotional Musical and Nonmusical Stimuli in Depression.

    Directory of Open Access Journals (Sweden)

    Rebecca J Lepping

    Anterior cingulate cortex (ACC) and striatum are part of the emotional neural circuitry implicated in major depressive disorder (MDD). Music is often used for emotion regulation, and pleasurable music listening activates the dopaminergic system in the brain, including the ACC. The present study uses functional MRI (fMRI) and an emotional nonmusical and musical stimuli paradigm to examine how neural processing of emotionally provocative auditory stimuli is altered within the ACC and striatum in depression. Nineteen MDD and 20 never-depressed (ND) control participants listened to standardized positive and negative emotional musical and nonmusical stimuli during fMRI scanning and gave subjective ratings of valence and arousal following scanning. ND participants exhibited greater activation to positive versus negative stimuli in ventral ACC. When compared with ND participants, MDD participants showed a different pattern of activation in ACC. In the rostral part of the ACC, ND participants showed greater activation for positive information, while MDD participants showed greater activation to negative information. In dorsal ACC, the pattern of activation distinguished between the types of stimuli, with ND participants showing greater activation to music compared to nonmusical stimuli, while MDD participants showed greater activation to nonmusical stimuli, with the greatest response to negative nonmusical stimuli. No group differences were found in striatum. These results suggest that people with depression may process emotional auditory stimuli differently based on both the type of stimulation and the emotional content of that stimulation. This raises the possibility that music may be useful in retraining ACC function, potentially leading to more effective and targeted treatments.

  3. Play it again, Sam: brain correlates of emotional music recognition.

    Science.gov (United States)

    Altenmüller, Eckart; Siggel, Susann; Mohammadi, Bahram; Samii, Amir; Münte, Thomas F

    2014-01-01

    Music can elicit strong emotions and can be remembered in connection with these emotions even decades later. Yet, the brain correlates of episodic memory for highly emotional music compared with less emotional music have not been examined. We therefore used fMRI to investigate brain structures activated by emotional processing of short excerpts of film music successfully retrieved from episodic long-term memory. Eighteen non-musicians volunteers were exposed to 60 structurally similar pieces of film music of 10 s length with high arousal ratings and either less positive or very positive valence ratings. Two similar sets of 30 pieces were created. Each of these was presented to half of the participants during the encoding session outside of the scanner, while all stimuli were used during the second recognition session inside the MRI-scanner. During fMRI each stimulation period (10 s) was followed by a 20 s resting period during which participants pressed either the "old" or the "new" button to indicate whether they had heard the piece before. Musical stimuli vs. silence activated the bilateral superior temporal gyrus, right insula, right middle frontal gyrus, bilateral medial frontal gyrus and the left anterior cerebellum. Old pieces led to activation in the left medial dorsal thalamus and left midbrain compared to new pieces. For recognized vs. not recognized old pieces a focused activation in the right inferior frontal gyrus and the left cerebellum was found. Positive pieces activated the left medial frontal gyrus, the left precuneus, the right superior frontal gyrus, the left posterior cingulate, the bilateral middle temporal gyrus, and the left thalamus compared to less positive pieces. Specific brain networks related to memory retrieval and emotional processing of symphonic film music were identified. The results imply that the valence of a music piece is important for memory performance and is recognized very fast.

  4. Play it again Sam: Brain Correlates of Emotional Music Recognition

    Directory of Open Access Journals (Sweden)

    Eckart Altenmüller

    2014-02-01

    Background: Music can elicit strong emotions and can be remembered in connection with these emotions even decades later. Yet, the brain correlates of episodic memory for highly emotional music compared with less emotional music have not been examined. We therefore used fMRI to investigate brain structures activated by emotional processing of short excerpts of film music successfully retrieved from episodic long-term memory. Methods: 18 non-musician volunteers were exposed to 60 structurally similar pieces of film music of 10 s length with high arousal ratings and either less positive or very positive valence ratings. Two similar sets of 30 pieces were created. Each of these was presented to half of the participants during the encoding session outside of the scanner, while all stimuli were used during the second recognition session inside the MRI scanner. During fMRI each stimulation period (10 s) was followed by a 20 s resting period during which participants pressed either the "old" or the "new" button to indicate whether they had heard the piece before. Results: Musical stimuli vs. silence activated the bilateral superior temporal gyrus, right insula, right middle frontal gyrus, bilateral medial frontal gyrus and the left anterior cerebellum. Old pieces led to activation in the left medial dorsal thalamus and left midbrain compared to new pieces. For recognized vs. not recognized old pieces a focused activation in the right inferior frontal gyrus and the left cerebellum was found. Positive pieces activated the left medial frontal gyrus, the left precuneus, the right superior frontal gyrus, the left posterior cingulate, the bilateral middle temporal gyrus, and the left thalamus compared to less positive pieces. Conclusion: Specific brain networks related to memory retrieval and emotional processing of symphonic film music were identified. The results imply that the valence of a music piece is important for memory performance.

  5. Emotional memory for musical excerpts in young and older adults

    OpenAIRE

    Alonso, Irene; Dellacherie, Delphine; Samson, Séverine

    2015-01-01

    International audience; The emotions evoked by music can enhance recognition of excerpts. It has been suggested that memory is better for high than for low arousing music (Eschrich et al., 2005; Samson et al., 2009), but it remains unclear whether positively (Eschrich et al., 2008) or negatively valenced music (Aubé et al., 2013; Vieillard and Gilet, 2013) may be better recognized. Moreover, we still know very little about the influence of age on emotional memory for music. To address these i...

  6. Electroencephalographic dynamics of musical emotion perception revealed by independent spectral components.

    Science.gov (United States)

    Lin, Yuan-Pin; Duann, Jeng-Ren; Chen, Jyh-Horng; Jung, Tzyy-Ping

    2010-04-21

    This study explores the electroencephalographic (EEG) correlates of emotional experience during music listening. Independent component analysis and analysis of variance were used to separate statistically independent spectral changes of the EEG in response to music-induced emotional processes. An independent brain process with equivalent dipole located in the fronto-central region exhibited distinct δ-band and θ-band power changes associated with self-reported emotional states. Specifically, the emotional valence was associated with δ-power decreases and θ-power increases in the frontal-central area, whereas the emotional arousal was accompanied by increases in both δ and θ powers. The resultant emotion-related component activations that were less interfered by the activities from other brain processes complement previous EEG studies of emotion perception to music.
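
A toy sketch of the signal-processing steps the abstract refers to, not the authors' pipeline: unmix simulated multichannel EEG with FastICA and compute delta- and theta-band power of the components with Welch's method. The sampling rate, channel count, and band limits are assumptions; a real analysis would also involve epoching, artifact rejection, and dipole fitting.

```python
# Toy sketch: ICA decomposition of simulated EEG followed by band-power estimates.
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import FastICA

fs = 250.0
n_samples = int(fs * 60)                          # 60 s of data
t = np.arange(n_samples) / fs
rng = np.random.default_rng(0)
theta_source = np.sin(2 * np.pi * 6.0 * t)        # a hidden 6 Hz "theta" rhythm
eeg = np.outer(theta_source, rng.normal(size=32)) + rng.normal(size=(n_samples, 32))

sources = FastICA(n_components=5, random_state=0, max_iter=1000).fit_transform(eeg)

def band_power(x, fs, lo, hi):
    freqs, psd = welch(x, fs=fs, nperseg=int(4 * fs))
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

for i in range(sources.shape[1]):
    comp = sources[:, i]
    print(f"IC{i}: delta={band_power(comp, fs, 1, 4):.3g}  theta={band_power(comp, fs, 4, 8):.3g}")
```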

  7. Music and mirror neurons: from motion to 'e'motion.

    Science.gov (United States)

    Molnar-Szakacs, Istvan; Overy, Katie

    2006-12-01

    The ability to create and enjoy music is a universal human trait and plays an important role in the daily life of most cultures. Music has a unique ability to trigger memories, awaken emotions and to intensify our social experiences. We do not need to be trained in music performance or appreciation to be able to reap its benefits-already as infants, we relate to it spontaneously and effortlessly. There has been a recent surge in neuroimaging investigations of the neural basis of musical experience, but the way in which the abstract shapes and patterns of musical sound can have such profound meaning to us remains elusive. Here we review recent neuroimaging evidence and suggest that music, like language, involves an intimate coupling between the perception and production of hierarchically organized sequential information, the structure of which has the ability to communicate meaning and emotion. We propose that these aspects of musical experience may be mediated by the human mirror neuron system.

  8. The role of the medial temporal limbic system in processing emotions in voice and music.

    Science.gov (United States)

    Frühholz, Sascha; Trost, Wiebke; Grandjean, Didier

    2014-12-01

    Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Affective responses to music in depressed individuals : Aesthetic judgments, emotions, and the impact of music-evoked autobiographical memories

    OpenAIRE

    Sakka, Laura Stavroula

    2018-01-01

    Music’s powerful influence on our affective states is often utilized in everyday life for emotion regulation and in music-therapeutic interventions against depression. Given this ability of music to influence emotions and symptoms in depressed people, it appears imperative to understand how these individuals affectively respond to music. The primary aim of this thesis is to explore whether depressed individuals have distinct affective responses to music, in terms of aesthetic judgments, emoti...

  10. Differential alpha coherence hemispheric patterns in men and women during pleasant and unpleasant musical emotions.

    Science.gov (United States)

    Flores-Gutiérrez, Enrique O; Díaz, José-Luis; Barrios, Fernando A; Guevara, Miguel Angel; Del Río-Portilla, Yolanda; Corsi-Cabrera, María

    2009-01-01

    Potential sex differences in EEG coherent activity during pleasant and unpleasant musical emotions were investigated. Musical excerpts by Mahler, Bach, and Prodromidès were played to seven men and seven women and their subjective emotions were evaluated in relation to alpha band intracortical coherence. Different brain links in specific frequencies were associated to pleasant and unpleasant emotions. Pleasant emotions (Mahler, Bach) increased upper alpha couplings linking left anterior and posterior regions. Unpleasant emotions (Prodromidès) were sustained by posterior midline coherence exclusively in the right hemisphere in men and bilateral in women. Combined music induced bilateral oscillations among posterior sensory and predominantly left association areas in women. Consistent with their greater positive attributions to music, the coherent network is larger in women, both for musical emotion and for unspecific musical effects. Musical emotion entails specific coupling among cortical regions and involves coherent upper alpha activity between posterior association areas and frontal regions probably mediating emotional and perceptual integration. Linked regions by combined music suggest more working memory contribution in women and attention in men.
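
The coherence measure underlying such analyses can be illustrated in a few lines (synthetic signals; the sampling rate and the upper-alpha band limits are assumptions, and this is not the authors' code): magnitude-squared coherence between two channels, averaged over an upper-alpha band.

```python
# Minimal illustration of alpha-band coherence between two EEG channels.
import numpy as np
from scipy.signal import coherence

fs = 250.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 11 * t)                    # common 11 Hz alpha rhythm
anterior = shared + 0.8 * rng.normal(size=t.size)      # e.g., a left anterior channel
posterior = shared + 0.8 * rng.normal(size=t.size)     # e.g., a left posterior channel

freqs, coh = coherence(anterior, posterior, fs=fs, nperseg=int(2 * fs))
upper_alpha = (freqs >= 10) & (freqs <= 13)
print("upper-alpha coherence:", round(float(coh[upper_alpha].mean()), 2))
```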

  11. Music, emotion, and time perception: the influence of subjective emotional valence and arousal?

    Science.gov (United States)

    Droit-Volet, Sylvie; Ramos, Danilo; Bueno, José L. O.; Bigand, Emmanuel

    2013-01-01

    The present study used a temporal bisection task with short (< 2 s) and long (> 2 s) stimulus durations to investigate the effect on time estimation of several musical parameters associated with emotional changes in affective valence and arousal. In order to manipulate the positive and negative valence of music, Experiments 1 and 2 contrasted the effect of musical structure with pieces played normally and backwards, which were judged to be pleasant and unpleasant, respectively. This effect of valence was combined with a subjective arousal effect by changing the tempo of the musical pieces (fast vs. slow) (Experiment 1) or their instrumentation (orchestral vs. piano pieces). The musical pieces were indeed judged more arousing with a fast than with a slow tempo and with an orchestral than with a piano timbre. In Experiment 3, affective valence was also tested by contrasting the effect of tonal (pleasant) vs. atonal (unpleasant) versions of the same musical pieces. The results showed that the effect of tempo in music, associated with a subjective arousal effect, was the major factor that produced time distortions with time being judged longer for fast than for slow tempi. When the tempo was held constant, no significant effect of timbre on the time judgment was found although the orchestral music was judged to be more arousing than the piano music. Nevertheless, emotional valence did modulate the tempo effect on time perception, the pleasant music being judged shorter than the unpleasant music. PMID:23882233

  12. Music, Emotion and Time Perception: The influence of subjective emotional valence and arousal?

    Directory of Open Access Journals (Sweden)

    Sylvie Droit-Volet

    2013-07-01

    The present study used a temporal bisection task with short (< 2 s) and long (> 2 s) stimulus durations to investigate the effect on time estimation of several musical parameters associated with emotional changes in affective valence and arousal. In order to manipulate the positive and negative valence of music, Experiments 1 and 2 contrasted the effect of musical structure with pieces played normally and backwards, which were judged to be pleasant and unpleasant, respectively. This effect of valence was combined with a subjective arousal effect by changing the tempo of the musical pieces (fast vs. slow) (Experiment 1) or their instrumentation (orchestral vs. piano pieces). The musical pieces were indeed judged more arousing with a fast than with a slow tempo and with an orchestral than with a piano timbre. In Experiment 3, affective valence was also tested by contrasting the effect of tonal (pleasant) versus atonal (unpleasant) versions of the same musical pieces. The results showed that the effect of tempo in music, associated with a subjective arousal effect, was the major factor that produced time distortions with time being judged longer for fast than for slow tempi. When the tempo was held constant, no significant effect of timbre on the time judgment was found although the orchestral music was judged to be more arousing than the piano music. Nevertheless, emotional valence did modulate the tempo effect on time perception, the pleasant music being judged shorter than the unpleasant music.

  13. Music, emotion, and time perception: the influence of subjective emotional valence and arousal?

    Science.gov (United States)

    Droit-Volet, Sylvie; Ramos, Danilo; Bueno, José L O; Bigand, Emmanuel

    2013-01-01

    The present study used a temporal bisection task with short (< 2 s) and long (> 2 s) stimulus durations to investigate the effect on time estimation of several musical parameters associated with emotional changes in affective valence and arousal. In order to manipulate the positive and negative valence of music, Experiments 1 and 2 contrasted the effect of musical structure with pieces played normally and backwards, which were judged to be pleasant and unpleasant, respectively. This effect of valence was combined with a subjective arousal effect by changing the tempo of the musical pieces (fast vs. slow) (Experiment 1) or their instrumentation (orchestral vs. piano pieces). The musical pieces were indeed judged more arousing with a fast than with a slow tempo and with an orchestral than with a piano timbre. In Experiment 3, affective valence was also tested by contrasting the effect of tonal (pleasant) vs. atonal (unpleasant) versions of the same musical pieces. The results showed that the effect of tempo in music, associated with a subjective arousal effect, was the major factor that produced time distortions with time being judged longer for fast than for slow tempi. When the tempo was held constant, no significant effect of timbre on the time judgment was found although the orchestral music was judged to be more arousing than the piano music. Nevertheless, emotional valence did modulate the tempo effect on time perception, the pleasant music being judged shorter than the unpleasant music.
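
For readers unfamiliar with the temporal bisection method used in these records, the sketch below fits a logistic psychometric function to fabricated proportions of "long" responses and reads off the bisection point (the probe duration judged "long" half the time); a lower bisection point under fast-tempo music corresponds to durations being judged longer. All numbers are made up for illustration and are not the study's data.

```python
# Sketch of the standard temporal-bisection analysis (not the authors' code).
import numpy as np
from scipy.optimize import curve_fit

def psychometric(d, bisection, slope):
    # Probability of responding "long" to a probe of duration d.
    return 1.0 / (1.0 + np.exp(-(d - bisection) / slope))

durations = np.array([0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6])            # probe durations in seconds (fabricated)
p_long_fast = np.array([0.05, 0.15, 0.45, 0.70, 0.90, 0.97, 1.0])    # fabricated response proportions
p_long_slow = np.array([0.02, 0.08, 0.25, 0.50, 0.78, 0.92, 0.99])

(bp_fast, _), _ = curve_fit(psychometric, durations, p_long_fast, p0=[1.0, 0.1])
(bp_slow, _), _ = curve_fit(psychometric, durations, p_long_slow, p0=[1.0, 0.1])
print(f"bisection point fast tempo: {bp_fast:.2f} s, slow tempo: {bp_slow:.2f} s")
```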

  14. A systematic review on the neural effects of music on emotion regulation: implications for music therapy practice.

    Science.gov (United States)

    Moore, Kimberly Sena

    2013-01-01

    Emotion regulation (ER) is an internal process through which a person maintains a comfortable state of arousal by modulating one or more aspects of emotion. The neural correlates underlying ER suggest an interplay between cognitive control areas and areas involved in emotional reactivity. Although some studies have suggested that music may be a useful tool in ER, few studies have examined the links between music perception/production and the neural mechanisms that underlie ER and resulting implications for clinical music therapy treatment. Objectives of this systematic review were to explore and synthesize what is known about how music and music experiences impact neural structures implicated in ER, and to consider clinical implications of these findings for structuring music stimuli to facilitate ER. A comprehensive electronic database search resulted in 50 studies that met predetermined inclusion and exclusion criteria. Pertinent data related to the objective were extracted and study outcomes were analyzed and compared for trends and common findings. Results indicated there are certain music characteristics and experiences that produce desired and undesired neural activation patterns implicated in ER. Desired activation patterns occurred when listening to preferred and familiar music, when singing, and (in musicians) when improvising; undesired activation patterns arose when introducing complexity, dissonance, and unexpected musical events. Furthermore, the connection between music-influenced changes in attention and its link to ER was explored. Implications for music therapy practice are discussed and preliminary guidelines for how to use music to facilitate ER are shared.

  15. Relations of nostalgia with music to emotional response and recall of autobiographical memory

    OpenAIRE

    小林, 麻美; 岩永, 誠; 生和, 秀敏

    2002-01-01

    Previous researches suggest that musical mood and preferences affects on emotional response, and that context of music also affects on musical-dependent memory. We often feel 'nostalgia' when listening to old familiar tunes. Nostalgia is related to eliciting positive emotions, recall of autobiographical memory and positive evaluations for recall contents. The present study aimed to examine effects of musical mood, preference and nostalgia on emotional responses, the amounts of recall of autob...

  16. Sensorimotor adaptation is influenced by background music.

    Science.gov (United States)

    Bock, Otmar

    2010-06-01

    It is well established that listening to music can modify subjects' cognitive performance. The present study evaluates whether this so-called Mozart Effect extends beyond cognitive tasks and includes sensorimotor adaptation. Three subject groups listened to musical pieces that in the author's judgment were serene, neutral, or sad, respectively. This judgment was confirmed by the subjects' introspective reports. While listening to music, subjects engaged in a pointing task that required them to adapt to rotated visual feedback. All three groups adapted successfully, but the speed and magnitude of adaptive improvement was more pronounced with serene music than with the other two music types. In contrast, aftereffects upon restoration of normal feedback were independent of music type. These findings support the existence of a "Mozart effect" for strategic movement control, but not for adaptive recalibration. Possibly, listening to music modifies neural activity in an intertwined cognitive-emotional network.

  17. Music for the ageing brain: Cognitive, emotional, social, and neural benefits of musical leisure activities in stroke and dementia.

    Science.gov (United States)

    Särkämö, Teppo

    2017-01-01

    Music engages an extensive network of auditory, cognitive, motor, and emotional processing regions in the brain. Coupled with the fact that the emotional and cognitive impact of music is often well preserved in ageing and dementia, music is a powerful tool in the care and rehabilitation of many ageing-related neurological diseases. In addition to formal music therapy, there has been a growing interest in self- or caregiver-implemented musical leisure activities or hobbies as a widely applicable means to support psychological wellbeing in ageing and in neurological rehabilitation. This article reviews the currently existing evidence on the cognitive, emotional, and neural benefits of musical leisure activities in normal ageing as well as in the rehabilitation and care of two of the most common and ageing-related neurological diseases: stroke and dementia.

  18. Feeling sad makes us feel older: Effects of a sad-mood induction on subjective age.

    Science.gov (United States)

    Dutt, Anne J; Wahl, Hans-Werner

    2017-08-01

    A mood-induction paradigm was implemented in a sample of 144 adults covering midlife and old age (40-80 years) to investigate associations between mood and subjective age. Sad or neutral mood was induced by texts and music pieces. Subjective age was operationalized as felt age relative to chronological age. Participants receiving the sad-mood induction reported changes toward older felt ages from pre- to postinduction. Participants receiving the neutral-mood induction reported comparable levels of subjective age at pre- and postinduction. Effects were comparable across middle-aged and older participants. Results suggest that sad affective states can shift subjective age toward feeling older. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Autism, emotion recognition and the mirror neuron system: the case of music.

    Science.gov (United States)

    Molnar-Szakacs, Istvan; Wang, Martha J; Laugeson, Elizabeth A; Overy, Katie; Wu, Wai-Ling; Piggot, Judith

    2009-11-16

    Understanding emotions is fundamental to our ability to navigate and thrive in a complex world of human social interaction. Individuals with Autism Spectrum Disorders (ASD) are known to experience difficulties with the communication and understanding of emotion, such as the nonverbal expression of emotion and the interpretation of emotions of others from facial expressions and body language. These deficits often lead to loneliness and isolation from peers, and social withdrawal from the environment in general. In the case of music, however, there is evidence to suggest that individuals with ASD do not have difficulties recognizing simple emotions. In addition, individuals with ASD have been found to show normal and even superior abilities with specific aspects of music processing, and often show strong preferences towards music. It is possible that these varying abilities with different types of expressive communication may be related to a neural system referred to as the mirror neuron system (MNS), which has been proposed to be deficient in individuals with autism. Music's power to stimulate emotions and intensify our social experiences might activate the MNS in individuals with ASD, and thus provide a neural foundation for music as an effective therapeutic tool. In this review, we present literature on the ontogeny of emotion processing in typical development and in individuals with ASD, with a focus on the case of music.

  20. Music-Elicited Emotion Identification Using Optical Flow Analysis of Human Face

    Science.gov (United States)

    Kniaz, V. V.; Smirnova, Z. N.

    2015-05-01

    Automatic identification of human emotions from image sequences is in high demand. Possible applications range from the automatic smile-shutter function of consumer-grade digital cameras to Biofied Building technologies, which enable communication between a building space and its residents. The highly perceptual nature of human emotions makes their classification and identification complex; a central difficulty is the subjective quality of the emotional classification of the events that elicit the emotions. A variety of methods for the formal classification of emotions have been developed in music psychology. This work focuses on identifying human emotions evoked by musical pieces using human face tracking and optical flow analysis. A facial-feature tracking algorithm used to estimate facial-feature positions and speeds is presented. Facial features were extracted from each image sequence using face tracking with local binary pattern (LBP) features. Relative speeds of the facial features were then estimated using optical flow analysis, and the resulting relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique provides robust identification of human emotions elicited by musical pieces. The estimated models could be used for emotion identification from image sequences in applications such as emotion-based musical backgrounds or mood-dependent radio.
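
    The pipeline described above (face detection, feature tracking, optical-flow speed estimation) can be illustrated with standard computer-vision tools. The sketch below is a minimal, generic version of such a pipeline using OpenCV's Haar-cascade face detector, Shi-Tomasi corner features, and pyramidal Lucas-Kanade optical flow; the video path, parameter values, and the flattened position/speed "emotion vector" are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: per-frame facial-feature positions and speeds from a video,
# in the spirit of the pipeline described above (assumed parameters throughout).
import cv2
import numpy as np

cap = cv2.VideoCapture("performance.mp4")          # assumed input video
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect the face once and pick trackable corner points inside it.
x, y, w, h = face_cascade.detectMultiScale(prev_gray, 1.3, 5)[0]
prev_pts = cv2.goodFeaturesToTrack(prev_gray[y:y + h, x:x + w],
                                   maxCorners=30, qualityLevel=0.01,
                                   minDistance=5)
prev_pts = prev_pts + np.array([[x, y]], dtype=np.float32)  # back to frame coords

emotion_vectors = []                                # one feature vector per frame
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade optical flow tracks the facial feature points.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    good_new = next_pts[status.flatten() == 1]
    good_old = prev_pts[status.flatten() == 1]
    speeds = np.linalg.norm(good_new - good_old, axis=2).flatten() * fps  # px/s
    # Concatenate positions and speeds as a simple per-frame "emotion vector".
    emotion_vectors.append(np.concatenate([good_new.flatten(), speeds]))
    prev_gray, prev_pts = gray, good_new.reshape(-1, 1, 2)

cap.release()
print("frames described:", len(emotion_vectors))
```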

  1. Moved through Music: The Effect of Experienced Emotions on Performers' Movement Characteristics

    Science.gov (United States)

    Van Zijl, Anemone G. W.; Luck, Geoff

    2013-01-01

    Do performers who feel sad move differently compared to those who express sadness? Although performers' expressive movements have been widely studied, little is known about how performers' experienced emotions affect such movements. To investigate this, we made 72 motion-capture recordings of eight violinists playing a melodic phrase in response…

  2. Music-induced emotions can be predicted from a combination of brain activity and acoustic features.

    Science.gov (United States)

    Daly, Ian; Williams, Duncan; Hallowell, James; Hwang, Faustina; Kirke, Alexis; Malik, Asad; Weaver, James; Miranda, Eduardo; Nasuto, Slawomir J

    2015-12-01

    It is widely acknowledged that music can communicate and induce a wide range of emotions in the listener. However, music is a highly complex audio signal composed of a wide range of time- and frequency-varying components. Additionally, music-induced emotions are known to differ greatly between listeners. Therefore, it is not immediately clear what emotions will be induced in a given individual by a piece of music. We attempt to predict the music-induced emotional response in a listener by measuring activity in the listener's electroencephalogram (EEG). We combine these measures with acoustic descriptors of the music, an approach that allows us to consider music as a complex set of time-varying acoustic features, independently of any specific music theory. Regression models are found which allow us to predict the music-induced emotions of our participants, with a correlation between the actual and predicted responses of up to r = 0.234, indicating that listeners' music-induced emotions can be predicted from their neural activity and the properties of the music. Given the large amount of noise, non-stationarity, and non-linearity in both EEG and music, this is an encouraging result. Additionally, the combination of measures of brain activity and acoustic features describing the music played to our participants allows us to predict music-induced emotions with significantly higher accuracy than either feature type alone (p<0.01). Copyright © 2015 Elsevier Inc. All rights reserved.
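
    The study above regresses continuous emotion ratings onto combined EEG and acoustic features and scores the fit by the correlation between predicted and actual responses. The sketch below shows one generic way to set up such an analysis with scikit-learn; the placeholder feature matrices, the target ratings, and the choice of cross-validated ridge regression are assumptions made for illustration, not the authors' pipeline.

```python
# Illustrative sketch: predict emotion ratings from combined EEG + acoustic
# features and score the model by predicted-vs-actual correlation.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_trials = 120                                     # assumed: one row per excerpt/listen
eeg_feats = rng.normal(size=(n_trials, 32))        # e.g., band powers per channel
acoustic_feats = rng.normal(size=(n_trials, 10))   # e.g., tempo, RMS, spectral centroid
valence = rng.normal(size=n_trials)                # reported emotion (placeholder)

def cv_correlation(X, y, n_splits=5):
    """Cross-validated Pearson correlation between actual and predicted ratings."""
    preds = np.zeros_like(y)
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        model = Ridge(alpha=1.0).fit(X[train], y[train])
        preds[test] = model.predict(X[test])
    return pearsonr(y, preds)

# Either feature set alone vs. the combination (as compared in the study).
for name, X in [("EEG only", eeg_feats),
                ("acoustic only", acoustic_feats),
                ("combined", np.hstack([eeg_feats, acoustic_feats]))]:
    r, p = cv_correlation(X, valence)
    print(f"{name}: r = {r:.3f}, p = {p:.3g}")
```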

  3. Music and emotions in the brain: familiarity matters.

    Directory of Open Access Journals (Sweden)

    Carlos Silva Pereira

    The importance of music in our daily life has given rise to an increased number of studies addressing the brain regions involved in its appreciation. Some of these studies controlled only for the familiarity of the stimuli, while others relied on pleasantness ratings, and others still on musical preferences. With a listening test and a functional magnetic resonance imaging (fMRI) experiment, we wished to clarify the role of familiarity in the brain correlates of music appreciation by controlling, in the same study, for both familiarity and musical preferences. First, we conducted a listening test, in which participants rated the familiarity and liking of song excerpts from the pop/rock repertoire, allowing us to select a personalized set of stimuli per subject. Then, we used a passive listening paradigm in fMRI to study music appreciation in a naturalistic condition with increased ecological value. Brain activation data revealed that broad emotion-related limbic and paralimbic regions as well as the reward circuitry were significantly more active for familiar relative to unfamiliar music. Smaller regions in the cingulate cortex and frontal lobe, including the motor cortex and Broca's area, were found to be more active in response to liked music when compared to disliked one. Hence, familiarity seems to be a crucial factor in making the listeners emotionally engaged with music, as revealed by fMRI data.

  4. Music and emotions in the brain: familiarity matters.

    Science.gov (United States)

    Pereira, Carlos Silva; Teixeira, João; Figueiredo, Patrícia; Xavier, João; Castro, São Luís; Brattico, Elvira

    2011-01-01

    The importance of music in our daily life has given rise to an increased number of studies addressing the brain regions involved in its appreciation. Some of these studies controlled only for the familiarity of the stimuli, while others relied on pleasantness ratings, and others still on musical preferences. With a listening test and a functional magnetic resonance imaging (fMRI) experiment, we wished to clarify the role of familiarity in the brain correlates of music appreciation by controlling, in the same study, for both familiarity and musical preferences. First, we conducted a listening test, in which participants rated the familiarity and liking of song excerpts from the pop/rock repertoire, allowing us to select a personalized set of stimuli per subject. Then, we used a passive listening paradigm in fMRI to study music appreciation in a naturalistic condition with increased ecological value. Brain activation data revealed that broad emotion-related limbic and paralimbic regions as well as the reward circuitry were significantly more active for familiar relative to unfamiliar music. Smaller regions in the cingulate cortex and frontal lobe, including the motor cortex and Broca's area, were found to be more active in response to liked music when compared to disliked one. Hence, familiarity seems to be a crucial factor in making the listeners emotionally engaged with music, as revealed by fMRI data.

  5. Music and Emotions in the Brain: Familiarity Matters

    Science.gov (United States)

    Pereira, Carlos Silva; Teixeira, João; Figueiredo, Patrícia; Xavier, João; Castro, São Luís; Brattico, Elvira

    2011-01-01

    The importance of music in our daily life has given rise to an increased number of studies addressing the brain regions involved in its appreciation. Some of these studies controlled only for the familiarity of the stimuli, while others relied on pleasantness ratings, and others still on musical preferences. With a listening test and a functional magnetic resonance imaging (fMRI) experiment, we wished to clarify the role of familiarity in the brain correlates of music appreciation by controlling, in the same study, for both familiarity and musical preferences. First, we conducted a listening test, in which participants rated the familiarity and liking of song excerpts from the pop/rock repertoire, allowing us to select a personalized set of stimuli per subject. Then, we used a passive listening paradigm in fMRI to study music appreciation in a naturalistic condition with increased ecological value. Brain activation data revealed that broad emotion-related limbic and paralimbic regions as well as the reward circuitry were significantly more active for familiar relative to unfamiliar music. Smaller regions in the cingulate cortex and frontal lobe, including the motor cortex and Broca's area, were found to be more active in response to liked music when compared to disliked one. Hence, familiarity seems to be a crucial factor in making the listeners emotionally engaged with music, as revealed by fMRI data. PMID:22110619

  6. Fear across the senses: brain responses to music, vocalizations and facial expressions.

    Science.gov (United States)

    Aubé, William; Angulo-Perkins, Arafat; Peretz, Isabelle; Concha, Luis; Armony, Jorge L

    2015-03-01

    Intrinsic emotional expressions such as those communicated by faces and vocalizations have been shown to engage specific brain regions, such as the amygdala. Although music constitutes another powerful means to express emotions, the neural substrates involved in its processing remain poorly understood. In particular, it is unknown whether brain regions typically associated with processing 'biologically relevant' emotional expressions are also recruited by emotional music. To address this question, we conducted an event-related functional magnetic resonance imaging study in 47 healthy volunteers in which we directly compared responses to basic emotions (fear, sadness and happiness, as well as neutral) expressed through faces, non-linguistic vocalizations and short novel musical excerpts. Our results confirmed the importance of fear in emotional communication, as revealed by significant blood oxygen level-dependent signal increases in a cluster within the posterior amygdala and anterior hippocampus, as well as in the posterior insula, across all three domains. Moreover, subject-specific amygdala responses to fearful music and vocalizations were correlated, consistent with the proposal that the brain circuitry involved in the processing of musical emotions might be shared with the circuitry that evolved for vocalizations. Overall, our results show that processing of fear expressed through music engages some of the same brain areas known to be crucial for detecting and evaluating threat-related information. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  7. Sensitivity to musical emotion is influenced by tonal structure in congenital amusia

    OpenAIRE

    Jiang, Cunmei; Liu, Fang; Wong, Patrick C. M.

    2017-01-01

    Emotional communication in music depends on multiple attributes including psychoacoustic features and tonal system information, the latter of which is unique to music. The present study investigated whether congenital amusia, a lifelong disorder of musical processing, impacts sensitivity to musical emotion elicited by timbre and tonal system information. Twenty-six amusics and 26 matched controls made tension judgments on Western (familiar) and Indian (unfamiliar) melodies played on piano and...

  8. Theory-guided Therapeutic Function of Music to facilitate emotion regulation development in preschool-aged children

    Directory of Open Access Journals (Sweden)

    Kimberly eSena Moore

    2015-10-01

    Emotion regulation is an umbrella term to describe interactive, goal-dependent explicit and implicit processes that are intended to help an individual manage and shift an emotional experience. The primary window for appropriate emotion regulation development occurs during the infant, toddler, and preschool years. Atypical emotion regulation development is considered a risk factor for mental health problems and has been implicated as a primary mechanism underlying childhood pathologies. Current treatments are predominantly verbal- and behavioral-based and lack the opportunity to practice in-the-moment management of emotionally charged situations. There is also an absence of caregiver-child interaction in these treatment strategies. Based on behavioral and neural support for music as a therapeutic mechanism, the incorporation of intentional music experiences, facilitated by a music therapist, may be one way to address these limitations. Musical Contour Regulation Facilitation is an interactive therapist-child music-based intervention for emotion regulation development practice in preschoolers. The Musical Contour Regulation Facilitation intervention uses the deliberate contour and temporal structure of a music therapy session to mirror the changing flow of the caregiver-child interaction through the alternation of high arousal and low arousal music experiences. The purpose of this paper is to describe the Therapeutic Function of Music, a theory-based description of the structural characteristics for a music-based stimulus to musically facilitate developmentally appropriate high arousal and low arousal in-the-moment emotion regulation experiences. The Therapeutic Function of Music analysis is based on a review of the music theory, music neuroscience, and music development literature and provides a preliminary model of the structural characteristics of the music as a core component of the Musical Contour Regulation Facilitation intervention.

  9. Multimodal Detection of Music Performances for Intelligent Emotion Based Lighting

    DEFF Research Database (Denmark)

    Bonde, Esben Oxholm Skjødt; Hansen, Ellen Kathrine; Triantafyllidis, Georgios

    2016-01-01

    Playing music is about conveying emotions and the lighting at a concert can help do that. However, new and unknown bands that play at smaller venues and bands that don’t have the budget to hire a dedicated light technician have to miss out on lighting that will help them to convey the emotions...... of what they play. In this paper it is investigated whether it is possible or not to develop an intelligent system that through a multimodal input detects the intended emotions of the played music and in realtime adjusts the lighting accordingly. A concept for such an intelligent lighting system...... is developed and described. Through existing research on music and emotion, as well as on musicians’ body movements related to the emotion they want to convey, a row of cues is defined. This includes amount, speed, fluency and regularity for the visual and level, tempo, articulation and timbre for the auditory...

  10. Music therapy, emotions and the heart: a pilot study.

    Science.gov (United States)

    Raglio, Alfredo; Oasi, Osmano; Gianotti, Marta; Bellandi, Daniele; Manzoni, Veronica; Goulene, Karine; Imbriani, Chiara; Badiale, Marco Stramba

    2012-01-01

    The autonomic nervous system plays an important role in the control of cardiac function. It has been suggested that sound and music may have effects on the autonomic control of the heart by inducing emotions, concomitantly with the activation of specific brain areas, i.e. the limbic area, and may exert potential beneficial effects. This study is a prerequisite and defines a methodology to assess the relation between changes in cardiac physiological parameters such as heart rate, QT interval and their variability and the psychological responses to music therapy sessions. We assessed the cardiac physiological parameters and psychological responses to a music therapy session. ECG Holter recordings were performed before, during and after a music therapy session in 8 healthy individuals. The different behaviors of the music therapist and of the subjects were analyzed with a specific music therapy assessment (Music Therapy Checklist). After the session, mean heart rate decreased (p = 0.05), high frequency of heart rate variability tended to be higher and QTc variability tended to be lower. During the music therapy session, "affect attunements" were found in all subjects but one. A significant emotional activation was associated with a higher dynamicity and variations of sound-music interactions. Our results may represent the rational basis for larger studies in different clinical conditions.

  11. Why Does Music Therapy Help in Autism?

    Directory of Open Access Journals (Sweden)

    Neha Khetrapal

    2009-04-01

    Music therapy is shown to be an effective intervention for emotional recognition deficits in autism. However, researchers to date have yet to propose a model that accounts for the neurobiological and cognitive components that are responsible for such improvements. The current paper outlines a model whereby the encoding of tonal pitch is proposed as the underlying mechanism. Accurate tonal pitch perception is important for recognizing emotions like happiness and sadness in the auditory domain. Once acquired, the ability to perceive tonal pitch functions as a domain-specific module that proves beneficial for music cognition. There is biological preparedness for the development of such a module and it is hypothesized to be preserved in autism. The current paper reinforces the need to build intervention programs based on this preserved module in autism, and proposes that this module may form the basis for a range of benefits related to music therapy. Possible brain areas associated with this module are suggested.

  12. Emotion Recognition From Singing Voices Using Contemporary Commercial Music and Classical Styles.

    Science.gov (United States)

    Hakanpää, Tua; Waaramaa, Teija; Laukkanen, Anne-Maria

    2018-02-22

    This study examines the recognition of emotion in contemporary commercial music (CCM) and classical styles of singing. This information may be useful in improving the training of interpretation in singing. This is an experimental comparative study. Thirteen singers (11 female, 2 male) with a minimum of 3 years' professional-level singing studies (in CCM or classical technique or both) participated. They sang at three pitches (females: a, e1, a1, males: one octave lower) expressing anger, sadness, joy, tenderness, and a neutral state. Twenty-nine listeners listened to 312 short (0.63- to 4.8-second) voice samples, 135 of which were sung using a classical singing technique and 165 of which were sung in a CCM style. The listeners were asked which emotion they heard. Activity and valence were derived from the chosen emotions. The percentage of correct recognitions out of all the answers in the listening test (N = 9048) was 30.2%. The recognition percentage for the CCM-style singing technique was higher (34.5%) than for the classical-style technique (24.5%). Valence and activation were better perceived than the emotions themselves, and activity was better recognized than valence. A higher pitch was more likely to be perceived as joy or anger, and a lower pitch as sorrow. Both valence and activation were better recognized in the female CCM samples than in the other samples. There are statistically significant differences in the recognition of emotions between classical and CCM styles of singing. Furthermore, in the singing voice, pitch affects the perception of emotions, and valence and activity are more easily recognized than emotions. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  13. Modeling Temporal Structure in Music for Emotion Prediction using Pairwise Comparisons

    DEFF Research Database (Denmark)

    Madsen, Jens; Jensen, Bjørn Sand; Larsen, Jan

    2014-01-01

    such as emotions, genre, and similarity. This paper addresses the specific hypothesis whether temporal information is essential for predicting expressed emotions in music, as a prototypical example of a cognitive aspect of music. We propose to test this hypothesis using a novel processing pipeline: 1) Extracting...

  14. [Effects of Different Genres of Music on the Psycho-Physiological Responses of Undergraduates].

    Science.gov (United States)

    Lee, Hsin-Ping; Liu, Yu-Chen; Lin, Mei-Feng

    2016-12-01

    psycho-physiological responses. In the present study, participants with high-state anxiety registered elevated parasympathetic activity after listening to 10 minutes of tense and sad music. Simultaneous listening effects were detected only in joyful and peaceful music, which reduced subjective anxiety and depression. The results of the present study advocate that music interveners and clinical care providers select joyful, peaceful, and tense music to help alleviate the anxiety and negative emotions of their patients. Furthermore, the psycho-physiological changes of these patients should be assessed after listening to this music.

  15. Generalizations of the subject-independent feature set for music-induced emotion recognition.

    Science.gov (United States)

    Lin, Yuan-Pin; Chen, Jyh-Horng; Duann, Jeng-Ren; Lin, Chin-Teng; Jung, Tzyy-Ping

    2011-01-01

    Electroencephalogram (EEG)-based emotion recognition has been an intensely growing field. Yet, how to achieve acceptable accuracy in a practical system with as few electrodes as possible has received less attention. This study evaluates a set of subject-independent features, based on the differential power asymmetry of symmetric electrode pairs [1], with emphasis on its applicability to subject variability in the music-induced emotion classification problem. The results of this study validate the feasibility of using subject-independent EEG features to classify four emotional states with acceptable accuracy at second-scale temporal resolution. These features could be generalized across subjects to detect emotions induced by music excerpts not limited to the music database that was used to derive the emotion-specific features.
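
    The differential power asymmetry features referred to above are typically computed as the difference in spectral band power between symmetric left/right electrode pairs. The sketch below shows a generic version of that feature extraction using Welch power spectra; the electrode pairs, frequency bands, sampling rate, and the log-power difference are assumptions chosen for illustration rather than the exact feature set of the cited study.

```python
# Illustrative sketch: differential power asymmetry of symmetric electrode pairs.
import numpy as np
from scipy.signal import welch

FS = 256                                                        # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}   # assumed bands
PAIRS = [("F3", "F4"), ("C3", "C4"), ("P3", "P4")]              # assumed pairs

def band_power(signal, fs, lo, hi):
    """Average power of `signal` within [lo, hi) Hz, via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), fs))
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def asymmetry_features(epoch, channel_index):
    """Log-power differences (left minus right) for each pair and band.

    epoch: 2-D array of shape (channels, samples); channel_index maps names to rows.
    """
    feats = []
    for left, right in PAIRS:
        for lo, hi in BANDS.values():
            p_left = band_power(epoch[channel_index[left]], FS, lo, hi)
            p_right = band_power(epoch[channel_index[right]], FS, lo, hi)
            feats.append(np.log(p_left) - np.log(p_right))
    return np.array(feats)          # one short feature vector per EEG epoch

# Toy usage on a random 1-second epoch with 6 channels.
channel_index = {name: i for i, name in enumerate("F3 F4 C3 C4 P3 P4".split())}
epoch = np.random.default_rng(0).normal(size=(6, FS))
print(asymmetry_features(epoch, channel_index))
```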

  16. I-space: the effects of emotional valence and source of music on interpersonal distance.

    Directory of Open Access Journals (Sweden)

    Ana Tajadura-Jiménez

    BACKGROUND: The ubiquitous use of personal music players in over-crowded public transport alludes to the hypothesis that apart from making the journey more pleasant, listening to music through headphones may also affect representations of our personal space, that is, the emotionally-tinged zone around the human body that people feel is "their space". We evaluated the effects of emotional valence (positive versus negative) and source (external, i.e. loudspeakers, versus embedded, i.e. headphones) of music on the participant's interpersonal distance when interacting with others. METHODOLOGY/PRINCIPAL FINDINGS: Personal space was evaluated as the comfort interpersonal distance between participant and experimenter during both active and passive approach tasks. Our results show that, during passive approach tasks, listening to positive versus negative emotion-inducing music reduces the representation of personal space, allowing others to come closer to us. With respect to a no-music condition, an embedded source of positive emotion-inducing music reduced personal space, while an external source of negative emotion-inducing music expanded personal space. CONCLUSIONS/SIGNIFICANCE: The results provide the first empirical evidence of the relation between induced emotional state, as a result of listening to positive music through headphones, and personal space when interacting with others. This research might help to understand the benefit that people find in using personal music players in crowded situations, such as when using the public transport in urban settings.

  17. Pleasurable emotional response to music: a case of neurodegenerative generalized auditory agnosia.

    Science.gov (United States)

    Matthews, Brandy R; Chang, Chiung-Chih; De May, Mary; Engstrom, John; Miller, Bruce L

    2009-06-01

    Recent functional neuroimaging studies implicate the network of mesolimbic structures known to be active in reward processing as the neural substrate of pleasure associated with listening to music. Psychoacoustic and lesion studies suggest that there is a widely distributed cortical network involved in processing discrete musical variables. Here we present the case of a young man with auditory agnosia as the consequence of cortical neurodegeneration who continues to experience pleasure when exposed to music. In a series of musical tasks, the subject was unable to accurately identify any of the perceptual components of music beyond simple pitch discrimination, including musical variables known to impact the perception of affect. The subject subsequently misidentified the musical character of personally familiar tunes presented experimentally, but continued to report that the activity of 'listening' to specific musical genres was an emotionally rewarding experience. The implications of this case for the evolving understanding of music perception, music misperception, music memory, and music-associated emotion are discussed.

  18. How We Remember the Emotional Intensity of Past Musical Experiences

    Directory of Open Access Journals (Sweden)

    Thomas eSchäfer

    2014-08-01

    Listening to music usually elicits emotions that can vary considerably in their intensity over the course of listening. Yet, after listening to a piece of music, people are easily able to evaluate the music’s overall emotional intensity. There are two different hypotheses about how affective experiences are temporally processed and integrated: (1) all moments’ intensities are integrated, resulting in an averaged value; (2) the overall evaluation is built from specific single moments, such as the moments of highest emotional intensity (peaks), the end, or a combination of these. Here we investigated what listeners do when building an overall evaluation of a musical experience. Participants listened to unknown songs and provided moment-to-moment ratings of experienced intensity of emotions. Subsequently, they evaluated the overall emotional intensity of each song. Results indicate that participants’ evaluations were predominantly influenced by their average impression but that, in addition, the peak and end emotional intensities contributed substantially. These results indicate that both types of processes play a role: all moments are integrated into an averaged value, but single moments might be assigned a higher value in the calculation of this average.
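
    The two integration hypotheses above (a running average versus a weighting of salient single moments such as the peaks and the end) reduce to simple summary statistics of the moment-to-moment intensity profile. The sketch below computes those summaries for a set of songs and checks how well each predicts a retrospective overall rating; the placeholder data and the use of plain Pearson correlation are illustrative assumptions, not the study's analysis.

```python
# Illustrative sketch: compare "average" vs. "peak/end" summaries of
# moment-to-moment emotional-intensity ratings as predictors of the
# retrospective overall rating given after each song.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Assumed data: one intensity time series per song (random placeholders here)
# plus one overall post-listening rating per song.
n_songs = 30
profiles = [rng.uniform(0, 10, size=rng.integers(60, 180)) for _ in range(n_songs)]
overall = np.array([p.mean() * 0.7 + p.max() * 0.2 + p[-1] * 0.1 for p in profiles])

summaries = {
    "average":  np.array([p.mean() for p in profiles]),
    "peak":     np.array([p.max() for p in profiles]),
    "end":      np.array([p[-1] for p in profiles]),
    "peak-end": np.array([(p.max() + p[-1]) / 2 for p in profiles]),
}

for name, predictor in summaries.items():
    r, p_val = pearsonr(predictor, overall)
    print(f"{name:8s} r = {r:.2f} (p = {p_val:.3g})")
```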

  19. Benefits of Music Training for Perception of Emotional Speech Prosody in Deaf Children With Cochlear Implants.

    Science.gov (United States)

    Good, Arla; Gordon, Karen A; Papsin, Blake C; Nespoli, Gabe; Hopyan, Talar; Peretz, Isabelle; Russo, Frank A

    Children who use cochlear implants (CIs) have characteristic pitch processing deficits leading to impairments in music perception and in understanding emotional intention in spoken language. Music training for normal-hearing children has previously been shown to benefit perception of emotional prosody. The purpose of the present study was to assess whether deaf children who use CIs obtain similar benefits from music training. We hypothesized that music training would lead to gains in auditory processing and that these gains would transfer to emotional speech prosody perception. Study participants were 18 child CI users (ages 6 to 15). Participants received either 6 months of music training (i.e., individualized piano lessons) or 6 months of visual art training (i.e., individualized painting lessons). Measures of music perception and emotional speech prosody perception were obtained pre-, mid-, and post-training. The Montreal Battery for Evaluation of Musical Abilities was used to measure five different aspects of music perception (scale, contour, interval, rhythm, and incidental memory). The emotional speech prosody task required participants to identify the emotional intention of a semantically neutral sentence under audio-only and audiovisual conditions. Music training led to improved performance on tasks requiring the discrimination of melodic contour and rhythm, as well as incidental memory for melodies. These improvements were predominantly found from mid- to post-training. Critically, music training also improved emotional speech prosody perception. Music training was most advantageous in audio-only conditions. Art training did not lead to the same improvements. Music training can lead to improvements in perception of music and emotional speech prosody, and thus may be an effective supplementary technique for supporting auditory rehabilitation following cochlear implantation.

  20. Sensitivity to musical emotion is influenced by tonal structure in congenital amusia.

    Science.gov (United States)

    Jiang, Cunmei; Liu, Fang; Wong, Patrick C M

    2017-08-08

    Emotional communication in music depends on multiple attributes including psychoacoustic features and tonal system information, the latter of which is unique to music. The present study investigated whether congenital amusia, a lifelong disorder of musical processing, impacts sensitivity to musical emotion elicited by timbre and tonal system information. Twenty-six amusics and 26 matched controls made tension judgments on Western (familiar) and Indian (unfamiliar) melodies played on piano and sitar. Like controls, amusics used timbre cues to judge musical tension in Western and Indian melodies. While controls assigned significantly lower tension ratings to Western melodies compared to Indian melodies, thus showing a tonal familiarity effect on tension ratings, amusics provided comparable tension ratings for Western and Indian melodies on both timbres. Furthermore, amusics rated Western melodies as more tense compared to controls, as they relied less on tonality cues than controls in rating tension for Western melodies. The implications of these findings in terms of emotional responses to music are discussed.

  1. Eyes wide shut: amygdala mediates eyes-closed effect on emotional experience with music.

    Science.gov (United States)

    Lerner, Yulia; Papo, David; Zhdanov, Andrey; Belozersky, Libi; Hendler, Talma

    2009-07-15

    The perceived emotional value of stimuli, and as a consequence the subjective emotional experience of them, can be affected by context-dependent styles of processing. The investigation of the neural correlates of emotional experience therefore requires accounting for such a variable, which is an experimental challenge. Closing the eyes affects the style of attending to auditory stimuli by modifying the perceptual relationship with the environment without changing the stimulus itself. In the current study, we used fMRI to characterize the neural mediators of such a modification on the experience of emotionality in music. We assumed that the closed-eyes condition would reveal an interplay between different levels of neural processing of emotions. More specifically, we focused on the amygdala as a central node of the limbic system and on its co-activation with the locus ceruleus (LC) and ventral prefrontal cortex (VPFC), regions involved in processing, respectively, 'low' (visceral) and 'high' (cognitive) values of emotional stimuli. Fifteen healthy subjects listened to negative and neutral music excerpts with eyes closed or open. As expected, behavioral results showed that closing the eyes while listening to emotional music resulted in enhanced ratings of emotionality, specifically for negative music. Correspondingly, fMRI results showed greater activation in the amygdala when subjects listened to the emotional music with eyes closed relative to eyes open. Moreover, using voxel-based correlation and dynamic causal modeling analyses, we demonstrated that increased amygdala activation to negative music with eyes closed led to increased activations in the LC and VPFC. This finding supports a system-based model of perceived emotionality in which the amygdala has a central role in mediating the effect of context-based processing style by recruiting neural operations involved in both visceral (i.e. 'low') and cognitive (i.e. 'high') processing of emotions.

  2. Emotions Induced by Operatic Music: Psychophysiological Effects of Music, Plot, and Acting: A Scientist's Tribute to Maria Callas

    Science.gov (United States)

    Baltes, Felicia Rodica; Avram, Julia; Miclea, Mircea; Miu, Andrei C.

    2011-01-01

    Operatic music involves both singing and acting (as well as rich audiovisual background arising from the orchestra and elaborate scenery and costumes) that multiply the mechanisms by which emotions are induced in listeners. The present study investigated the effects of music, plot, and acting performance on emotions induced by opera. There were…

  3. The effect of background music on episodic memory and autonomic responses: listening to emotionally touching music enhances facial memory capacity

    Science.gov (United States)

    Proverbio, Alice Mado; Lozano Nasi, Valentina; Alessandra Arcari, Laura; De Benedetto, Francesco; Guardamagna, Matteo; Gazzola, Martina; Zani, Alberto

    2015-01-01

    The aim of this study was to investigate how background auditory processing can affect other perceptual and cognitive processes as a function of stimulus content, style and emotional nature. Previous studies have offered contrasting evidence, and it has been recently shown that listening to music negatively affected concurrent mental processing in the elderly but not in young adults. To further investigate this matter, the effect of listening to music vs. listening to the sound of rain or silence was examined by administering an old/new face memory task (involving 448 unknown faces) to a group of 54 non-musician university students. Heart rate and diastolic and systolic blood pressure were measured during an explicit face study session that was followed by a memory test. The results indicated that more efficient and faster recall of faces occurred under conditions of silence or when participants were listening to emotionally touching music. Whereas auditory background (e.g., rain or joyful music) interfered with memory encoding, listening to emotionally touching music improved memory and significantly increased heart rate. It is hypothesized that touching music is able to modify the visual perception of faces by binding facial properties with auditory and emotionally charged information (music), which may therefore result in deeper memory encoding. PMID:26469712

  4. The effect of background music on episodic memory and autonomic responses: listening to emotionally touching music enhances facial memory capacity.

    Science.gov (United States)

    Proverbio, Alice Mado; Lozano Nasi, Valentina; Alessandra Arcari, Laura; De Benedetto, Francesco; Guardamagna, Matteo; Gazzola, Martina; Zani, Alberto

    2015-10-15

    The aim of this study was to investigate how background auditory processing can affect other perceptual and cognitive processes as a function of stimulus content, style and emotional nature. Previous studies have offered contrasting evidence, and it has been recently shown that listening to music negatively affected concurrent mental processing in the elderly but not in young adults. To further investigate this matter, the effect of listening to music vs. listening to the sound of rain or silence was examined by administering an old/new face memory task (involving 448 unknown faces) to a group of 54 non-musician university students. Heart rate and diastolic and systolic blood pressure were measured during an explicit face study session that was followed by a memory test. The results indicated that more efficient and faster recall of faces occurred under conditions of silence or when participants were listening to emotionally touching music. Whereas auditory background (e.g., rain or joyful music) interfered with memory encoding, listening to emotionally touching music improved memory and significantly increased heart rate. It is hypothesized that touching music is able to modify the visual perception of faces by binding facial properties with auditory and emotionally charged information (music), which may therefore result in deeper memory encoding.

  5. The Experience of Anger and Sadness in Everyday Problems Impacts Age Differences in Emotion Regulation

    Science.gov (United States)

    Blanchard-Fields, Fredda; Coats, Abby Heckman

    2008-01-01

    The authors examined regulation of the discrete emotions anger and sadness in adolescents through older adults in the context of describing everyday problem situations. The results support previous work; in comparison to younger age groups, older adults reported that they experienced less anger and reported that they used more passive and fewer…

  6. Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias

    OpenAIRE

    Invitto, Sara; Calcagnì, Antonio; Mignozzi, Arianna; Scardino, Rosanna; Piraino, Giulia; Turchi, Daniele; De Feudis, Irio; Brunetti, Antonio; Bevilacqua, Vitoantonio; de Tommaso, Marina

    2017-01-01

    Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emo...

  7. Benefits of Music Training for Perception of Emotional Speech Prosody in Deaf Children With Cochlear Implants

    Science.gov (United States)

    Good, Arla; Gordon, Karen A.; Papsin, Blake C.; Nespoli, Gabe; Hopyan, Talar; Peretz, Isabelle; Russo, Frank A.

    2017-01-01

    Objectives: Children who use cochlear implants (CIs) have characteristic pitch processing deficits leading to impairments in music perception and in understanding emotional intention in spoken language. Music training for normal-hearing children has previously been shown to benefit perception of emotional prosody. The purpose of the present study was to assess whether deaf children who use CIs obtain similar benefits from music training. We hypothesized that music training would lead to gains in auditory processing and that these gains would transfer to emotional speech prosody perception. Design: Study participants were 18 child CI users (ages 6 to 15). Participants received either 6 months of music training (i.e., individualized piano lessons) or 6 months of visual art training (i.e., individualized painting lessons). Measures of music perception and emotional speech prosody perception were obtained pre-, mid-, and post-training. The Montreal Battery for Evaluation of Musical Abilities was used to measure five different aspects of music perception (scale, contour, interval, rhythm, and incidental memory). The emotional speech prosody task required participants to identify the emotional intention of a semantically neutral sentence under audio-only and audiovisual conditions. Results: Music training led to improved performance on tasks requiring the discrimination of melodic contour and rhythm, as well as incidental memory for melodies. These improvements were predominantly found from mid- to post-training. Critically, music training also improved emotional speech prosody perception. Music training was most advantageous in audio-only conditions. Art training did not lead to the same improvements. Conclusions: Music training can lead to improvements in perception of music and emotional speech prosody, and thus may be an effective supplementary technique for supporting auditory rehabilitation following cochlear implantation. PMID:28085739

  8. Collecting annotations for induced musical emotion via online game with a purpose emotify

    NARCIS (Netherlands)

    Aljanaki, Anna; Wiering, Frans; Veltkamp, Remco

    2014-01-01

    One of the major reasons why music is so enjoyable is its emotional impact. Indexing and searching by emotion would greatly increase the usability of online music collections. However, there is no consensus on the question which model of emotion would fit this task best. Such a model should be easy

  9. Alteration of complex negative emotions induced by music in euthymic patients with bipolar disorder.

    Science.gov (United States)

    Choppin, Sabine; Trost, Wiebke; Dondaine, Thibaut; Millet, Bruno; Drapier, Dominique; Vérin, Marc; Robert, Gabriel; Grandjean, Didier

    2016-02-01

    Research has shown bipolar disorder to be characterized by dysregulation of emotion processing, including biases in facial expression recognition that are most prevalent during depressive and manic states. Very few studies have examined induced emotions when patients are in a euthymic phase, and there has been no research on complex emotions. We therefore set out to test emotional hyperreactivity in response to musical excerpts inducing complex emotions in bipolar disorder during euthymia. We recruited 21 patients with bipolar disorder (BD) in a euthymic phase and 21 matched healthy controls. Participants first rated their emotional reactivity on two validated self-report scales (ERS and MAThyS). They then rated their music-induced emotions on nine continuous scales. The targeted emotions were wonder, power, melancholy and tension. We used a specific generalized linear mixed model to analyze the behavioral data. We found that participants in the euthymic bipolar group experienced more intense complex negative emotions than controls when the musical excerpts induced wonder. Moreover, patients exhibited greater emotional reactivity in daily life (ERS). Finally, a greater experience of tension while listening to positive music seemed to be mediated by greater emotional reactivity and a deficit in executive functions. The heterogeneity of the BD group in terms of clinical characteristics may have influenced the results. Euthymic patients with bipolar disorder exhibit more complex negative emotions than controls in response to positive music. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Pain, Sadness, Aggression and Joy

    DEFF Research Database (Denmark)

    Grodal, Torben Kragh

    2007-01-01

    Based on film examples and evolutionary psychology, the article discusses why viewers are fascinated not only with funny and pleasure-evoking films, but also with sad and disgust-evoking ones. The article argues that a series of adaptations modify simple pleasure-unpleasure mechanisms. Besides...... discussing how action-oriented films convert negative experiences into challenges, the article especially analyses how sad films are rituals of bonding (kin bonding, bonding to brothers in arms, tribal bonding, etc.), and the sadness is a way to express the importance of bonding in the negative. Keywords......: attachment, cognitive film theory, coping and emotions, evolutionary psychology, hedonic valence, melodrama, sadness, tragedy....

  11. Music and emotion-a composer's perspective.

    Science.gov (United States)

    Douek, Joel

    2013-01-01

    This article takes an experiential and anecdotal look at the daily lives and work of film composers as creators of music. It endeavors to work backwards from what practitioners of the art and craft of music do instinctively or unconsciously, and tries to shine a light on it as a conscious process. It examines the role of the film composer in the task of conveying an often complex set of emotions and communicating with an immediacy and universality that often sit outside common language. Through the experiences of the author, as well as interviews with composer colleagues, the article explores both concrete and abstract ways in which music can bring meaning and magic to words and images, and serve as an underscore to our daily lives.

  12. A grounded theory of young tennis players' use of music to manipulate emotional state.

    Science.gov (United States)

    Bishop, Daniel T; Karageorghis, Costas I; Loizou, Georgios

    2007-10-01

    The main objectives of this study were (a) to elucidate young tennis players' use of music to manipulate emotional states, and (b) to present a model grounded in present data to illustrate this phenomenon and to stimulate further research. Anecdotal evidence suggests that music listening is used regularly by elite athletes as a preperformance strategy, but only limited empirical evidence corroborates such use. Young tennis players (N = 14) were selected purposively for interview and diary data collection. Results indicated that participants consciously selected music to elicit various emotional states; frequently reported consequences of music listening included improved mood, increased arousal, and visual and auditory imagery. The choice of music tracks and the impact of music listening were mediated by a number of factors, including extramusical associations, inspirational lyrics, music properties, and desired emotional state. Implications for the future investigation of preperformance music are discussed.

  13. Enhancement of subjective pain experience and changes of brain function on sadness

    International Nuclear Information System (INIS)

    Yoshino, Atsuo; Takahashi, Terumichi; Okamoto, Yasumasa; Yoshimura, Shinpei; Kunisato, Yoshihiko; Okada, Go; Yamawaki, Shigeto; Onoda, Keiichi

    2012-01-01

    Pain is a multidimensional experience. Previous psychological studies have shown that a person's subjective pain threshold can change when certain emotions are recognized. We examined this association by using functional magnetic resonance imaging (fMRI) (15 healthy subjects) and magnetoencephalography (MEG) (19 healthy subjects). Subjects experienced pain stimuli in different emotional contexts induced by the presentation of sad, happy or neutral facial stimuli. They also rated their subjective pain intensity. We found that: (1) the intensity of subjective pain ratings increased in the sad emotional context; (2) pain-related activation in the anterior cingulate cortex (ACC) was more pronounced in the sad context; (3) amygdala-to-ACC connections were demonstrated during the experience of pain in the sad context; and (4) event-related desynchronization (ERD) of lower beta bands in the right hemisphere after pain stimuli was larger in the sad emotional condition. These results show that emotional stimuli can modulate neural responses to pain stimuli, and this may be relevant to understanding the broader relationship between somatic complaints and negative emotion. (author)

  14. The role of background music in the experience of watching YouTube videos about death and dying

    Directory of Open Access Journals (Sweden)

    Panagiotis Pentaris

    2015-12-01

    YouTube is currently the largest video-sharing site. It has been used to communicate a vast array of information and allows for user-generated content. This paper focuses on YouTube videos that communicate death, and in particular presents findings from a preliminary study undertaken by the authors on the role that background music plays in these videos. Specifically, this study explores the experiences of viewers of death-related YouTube videos with and without background music, and compares the impact that the music has on the viewers’ emotional experiences. We conclude that background music elicits emotions and enhances feelings of sadness and sympathy in relation to the visual content of the videos; recommendations for future research are also made.

  15. Familiarity mediates the relationship between emotional arousal and pleasure during music listening

    Science.gov (United States)

    van den Bosch, Iris; Salimpoor, Valorie N.; Zatorre, Robert J.

    2013-01-01

    Emotional arousal appears to be a major contributing factor to the pleasure that listeners experience in response to music. Accordingly, a strong positive correlation between self-reported pleasure and electrodermal activity (EDA), an objective indicator of emotional arousal, has been demonstrated when individuals listen to familiar music. However, it is not yet known to what extent familiarity contributes to this relationship. In particular, as listening to familiar music involves expectations and predictions over time based on veridical knowledge of the piece, it could be that such memory factors play a major role. Here, we tested such a contribution by using musical stimuli entirely unfamiliar to listeners. In a second experiment we repeated the novel music to experimentally establish a sense of familiarity. We aimed to determine whether (1) pleasure and emotional arousal would continue to correlate when listeners have no explicit knowledge of how the tones will unfold, and (2) this relationship could be enhanced by experimentally induced familiarity. In the first experiment, we presented 33 listeners with 70 unfamiliar musical excerpts in two sessions. There was no relationship between the degree of experienced pleasure and emotional arousal as measured by EDA. In the second experiment, 7 participants listened to 35 unfamiliar excerpts over two sessions separated by 30 min. Repeated exposure significantly increased EDA, even though individuals did not explicitly recall having heard all the pieces before. Furthermore, increases in self-reported familiarity significantly enhanced experienced pleasure and there was a general, though not significant, increase in EDA. These results suggest that some level of expectation and predictability mediated by prior exposure to a given piece of music plays an important role in the experience of emotional arousal in response to music. PMID:24046738
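
    The analyses described above come down to (a) correlating self-reported pleasure with electrodermal activity across excerpts and (b) comparing EDA between a first and a second exposure to the same novel pieces. The sketch below shows a generic form of those two tests with SciPy; the placeholder arrays, their units, and the specific tests (Pearson correlation, paired t-test) are assumptions for illustration rather than the study's own statistics.

```python
# Illustrative sketch: (a) pleasure-EDA correlation across excerpts,
# (b) first vs. second exposure comparison of EDA for the same excerpts.
import numpy as np
from scipy.stats import pearsonr, ttest_rel

rng = np.random.default_rng(2)
n_excerpts = 70                                    # as in the first experiment

# Placeholder data: mean pleasure rating and mean EDA per excerpt, per session.
pleasure_s1 = rng.uniform(1, 9, size=n_excerpts)
eda_s1 = rng.normal(loc=2.0, scale=0.5, size=n_excerpts)            # e.g., microsiemens
eda_s2 = eda_s1 + rng.normal(loc=0.1, scale=0.3, size=n_excerpts)   # re-exposure

# (a) Does experienced pleasure track emotional arousal (EDA)?
r, p = pearsonr(pleasure_s1, eda_s1)
print(f"pleasure vs. EDA: r = {r:.2f}, p = {p:.3g}")

# (b) Does repeated exposure (induced familiarity) change EDA?
t, p = ttest_rel(eda_s2, eda_s1)
print(f"second vs. first exposure EDA: t = {t:.2f}, p = {p:.3g}")
```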

  16. Not just fear and sadness: meta-analytic evidence of pervasive emotion recognition deficits for facial and vocal expressions in psychopathy.

    Science.gov (United States)

    Dawel, Amy; O'Kearney, Richard; McKone, Elinor; Palermo, Romina

    2012-11-01

    The present meta-analysis aimed to clarify whether deficits in emotion recognition in psychopathy are restricted to certain emotions and modalities or whether they are more pervasive. We also attempted to assess the influence of other important variables: age, and the affective factor of psychopathy. A systematic search of electronic databases and a subsequent manual search identified 26 studies that included 29 experiments (N = 1376) involving six emotion categories (anger, disgust, fear, happiness, sadness, surprise) across three modalities (facial, vocal, postural). Meta-analyses found evidence of pervasive impairments across modalities (facial and vocal) with significant deficits evident for several emotions (i.e., not only fear and sadness) in both adults and children/adolescents. These results are consistent with recent theorizing that the amygdala, which is believed to be dysfunctional in psychopathy, has a broad role in emotion processing. We discuss limitations of the available data that restrict the ability of meta-analysis to consider the influence of age and separate the sub-factors of psychopathy, highlighting important directions for future research. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Autonomic Effects of Music in Health and Crohn's Disease: The Impact of Isochronicity, Emotional Valence, and Tempo

    OpenAIRE

    Krabs, Roland Uwe; Enk, Ronny; Teich, Niels; Koelsch, Stefan

    2015-01-01

    Background: Music can evoke strong emotions and thus elicit significant autonomic nervous system (ANS) responses. However, previous studies investigating music-evoked ANS effects produced inconsistent results. In particular, it is not clear (a) whether simply a musical tactus (without common emotional components of music) is sufficient to elicit ANS effects; (b) whether changes in the tempo of a musical piece contribute to the ANS effects; (c) whether emotional valence of music influences ANS...

  18. Multidimensional scaling of emotional responses to music in patients with temporal lobe resection.

    Science.gov (United States)

    Dellacherie, D; Bigand, E; Molin, P; Baulac, M; Samson, S

    2011-10-01

    The present study investigated emotional responses to music by using multidimensional scaling (MDS) analysis in patients with right or left medial temporal lobe (MTL) lesions and matched normal controls (NC). Participants were required to evaluate emotional dissimilarities of nine musical excerpts that were selected to express graduated changes along the valence and arousal dimensions. For this purpose, they rated dissimilarity between pairs of stimuli on an eight-point scale and the resulting matrices were submitted to an MDS analysis. The results showed that patients did not differ from NC participants in evaluating emotional feelings induced by the musical excerpts, suggesting that all participants were able to distinguish refined emotions. We concluded that the ability to detect and use emotional valence and arousal when making dissimilarity judgments was not strongly impaired by a right or left MTL lesion. This finding has important clinical implications and is discussed in light of current neuropsychological studies on emotion. It suggests that emotional responses to music can be at least partially preserved at a non-verbal level in patients with unilateral temporal lobe damage including the amygdala. Copyright © 2011 Elsevier Srl. All rights reserved.
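
    The analysis described above takes pairwise dissimilarity ratings of the nine excerpts and recovers a low-dimensional emotion space with multidimensional scaling, which can then be inspected for valence and arousal dimensions. The sketch below is a generic version of that step using scikit-learn's metric MDS on a precomputed dissimilarity matrix; the random rating matrix and the two-dimensional solution are illustrative assumptions, not the study's data or software.

```python
# Illustrative sketch: 2-D MDS solution from averaged pairwise dissimilarity
# ratings of nine musical excerpts (values here are random placeholders).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(3)
n_excerpts = 9

# Average the 8-point dissimilarity ratings across participants and
# symmetrize so the matrix is a valid precomputed dissimilarity input.
ratings = rng.integers(1, 9, size=(n_excerpts, n_excerpts)).astype(float)
dissim = (ratings + ratings.T) / 2
np.fill_diagonal(dissim, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)        # one (x, y) point per excerpt

for i, (x, y) in enumerate(coords, start=1):
    print(f"excerpt {i}: dimension 1 = {x:+.2f}, dimension 2 = {y:+.2f}")
print("stress:", round(mds.stress_, 3))   # lower stress = better fit
```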

  19. Emotion felt by the listener and expressed by the music: literature review and theoretical perspectives.

    Science.gov (United States)

    Schubert, Emery

    2013-12-17

    In his seminal paper, Gabrielsson (2002) distinguishes between emotion felt by the listener, here: "internal locus of emotion" (IL), and the emotion the music is expressing, here: "external locus of emotion" (EL). This paper tabulates 16 comparisons of felt versus expressed emotions in music, drawn from 19 studies/experiments published in the decade 2003-2012, and provides some theoretical perspectives. The key findings were that (1) IL ratings were frequently statistically the same as or lower than the corresponding EL ratings (e.g., a lower felt-happiness rating compared to the apparent happiness of the music), and that (2) self-selected and preferred music had a smaller gap across the emotion loci than experimenter-selected and disliked music. These key findings were explained by an "inhibited" emotional contagion mechanism, where the otherwise matching felt emotion may have been attenuated by some other factor such as social context. Matching between EL and IL for loved and self-selected pieces was explained by the activation of "contagion" circuits. Physiological arousal, personality and age, as well as musical features (tempo, mode, putative emotions), also influenced perceived and felt emotion distinctions. A variety of data collection formats were identified, mostly using rating items. In conclusion, a more systematic use of terminology appears desirable. Two broad categories, namely matched and unmatched, are proposed as being sufficient to capture the relationships between EL and IL, instead of the four categories suggested by Gabrielsson.

  20. Music induces universal emotion-related psychophysiological responses: comparing Canadian listeners to Congolese Pygmies

    Science.gov (United States)

    Egermann, Hauke; Fernando, Nathalie; Chuen, Lorraine; McAdams, Stephen

    2015-01-01

    Subjective and psychophysiological emotional responses to music from two different cultures were compared within these two cultures. Two identical experiments were conducted: the first in the Congolese rainforest with an isolated population of Mebenzélé Pygmies without any exposure to Western music and culture, the second with a group of Western music listeners, with no experience with Congolese music. Forty Pygmies and 40 Canadians listened in pairs to 19 music excerpts of 29–99 s in duration in random order (eight from the Pygmy population and 11 Western instrumental excerpts). For both groups, emotion components were continuously measured: subjective feeling (using a two-dimensional valence and arousal rating interface), peripheral physiological activation, and facial expression. While Pygmy music was rated as positive and arousing by Pygmies, ratings of Western music by Westerners covered the range from arousing to calming and from positive to negative. Comparing psychophysiological responses to emotional qualities of Pygmy music across participant groups showed no similarities. However, Western stimuli, rated as high and low arousing by Canadians, created similar responses in both participant groups (with high arousal associated with increases in subjective and physiological activation). Several low-level acoustical features of the music presented (tempo, pitch, and timbre) were shown to affect subjective and physiological arousal similarly in both cultures. Results suggest that while the subjective dimension of emotional valence might be mediated by cultural learning, changes in arousal might involve a more basic, universal response to low-level acoustical characteristics of music. PMID:25620935

  1. Music Induces Universal Emotion-Related Psychophysiological Responses: Comparing Canadian Listeners To Congolese Pygmies

    Directory of Open Access Journals (Sweden)

    Hauke Egermann

    2015-01-01

    Full Text Available Subjective and psychophysiological emotional responses to music from two different cultures were compared within these two cultures. Two identical experiments were conducted: the first in the Congolese rainforest with an isolated population of Mbenzélé Pygmies without any exposure to Western music and culture, the second with a group of Western music listeners, with no experience with Congolese music. Forty Pygmies and 40 Canadians listened in pairs to 19 music excerpts of 29 to 99 seconds in duration in random order (8 from the Pygmy population and 11 Western instrumental excerpts). For both groups, emotion components were continuously measured: subjective feeling (using a two-dimensional valence and arousal rating interface), peripheral physiological activation, and facial expression. While Pygmy music was rated as positive and arousing by Pygmies, ratings of Western music by Westerners covered the range from arousing to calming and from positive to negative. Comparing psychophysiological responses to emotional qualities of Pygmy music across participant groups showed no similarities. However, Western stimuli, rated as high and low arousing by Canadians, created similar responses in both participant groups (with high arousal associated with increases in subjective and physiological activation). Several low-level acoustical features of the music presented (tempo, pitch, and timbre) were shown to affect subjective and physiological arousal similarly in both cultures. Results suggest that while the subjective dimension of emotional valence might be mediated by cultural learning, changes in arousal might involve a more basic, universal response to low-level acoustical characteristics of music.

  2. Play it again with feeling: computer feedback in musical communication of emotions.

    Science.gov (United States)

    Juslin, Patrik N; Karlsson, Jessika; Lindström, Erik; Friberg, Anders; Schoonderwaldt, Erwin

    2006-06-01

    Communication of emotions is of crucial importance in music performance. Yet research has suggested that this skill is neglected in music education. This article presents and evaluates a computer program that automatically analyzes music performances and provides feedback to musicians in order to enhance their communication of emotions. Thirty-six semi-professional jazz/rock guitar players were randomly assigned to one of three conditions: (1) feedback from the computer program, (2) feedback from music teachers, and (3) repetition without feedback. Performance measures revealed the greatest improvement in communication accuracy for the computer program, but usability measures indicated that certain aspects of the program could be improved. Implications for music education are discussed.
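    The abstract does not describe how the feedback is computed, but the general cue-profile idea can be illustrated with a short, hypothetical sketch: extract a few acoustic cues from a performance, compare them with a target profile for the intended emotion, and report mismatches as suggestions. The cue names, reference values, and profiles below are invented for illustration and are not the parameters of the program evaluated in the study.

```python
# Hypothetical sketch of cue-based feedback on emotional expression.
# Cue names, reference values, and target profiles are assumptions.

TARGET_PROFILES = {
    # cue -> desired direction relative to a neutral reference
    "sad": {"tempo_bpm": "low", "sound_level_db": "low", "legato_ratio": "high"},
    "happy": {"tempo_bpm": "high", "sound_level_db": "high", "legato_ratio": "low"},
}

NEUTRAL_REFERENCE = {"tempo_bpm": 110.0, "sound_level_db": -20.0, "legato_ratio": 0.5}


def feedback(cues: dict, intended_emotion: str) -> list[str]:
    """Compare extracted cue values with the target profile and return
    human-readable suggestions for the performer."""
    tips = []
    for cue, direction in TARGET_PROFILES[intended_emotion].items():
        value, reference = cues[cue], NEUTRAL_REFERENCE[cue]
        if direction == "low" and value >= reference:
            tips.append(f"Try a lower {cue} to sound more {intended_emotion}.")
        elif direction == "high" and value <= reference:
            tips.append(f"Try a higher {cue} to sound more {intended_emotion}.")
    return tips or ["The cue profile already matches the intended emotion."]


print(feedback({"tempo_bpm": 126.0, "sound_level_db": -12.0, "legato_ratio": 0.8}, "sad"))
```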

  3. Musical and emotional attunement - unique and essential in music therapy with children on the autism spectrum

    DEFF Research Database (Denmark)

    Holck, Ulla; Geretsegger, Monika

    2016-01-01

    Background: In improvisational music therapy for children with autism spectrum disorder (ASD), facilitating musical and emotional attunement has been found to be one of the unique and essential principles. Methods: Using videotaped sequences of therapy sessions from an international study (TIME...

  4. Emotional power of music in patients with memory disorders: clinical implications of cognitive neuroscience.

    Science.gov (United States)

    Samson, Séverine; Dellacherie, Delphine; Platel, Hervé

    2009-07-01

    By adapting methods of cognitive psychology to neuropsychology, we examined memory and familiarity abilities in music in relation to emotion. First we present data illustrating how the emotional content of stimuli influences memory for music. Second, we discuss recent findings obtained in patients with two different brain disorders (medically intractable epilepsy and Alzheimer's disease) that show relatively spared memory performance for music, despite severe verbal memory disorders. Studies on musical memory and its relation to emotion open up paths for new strategies in cognitive rehabilitation and reinstate the importance of examining interactions between cognitive and clinical neurosciences.

  5. Music therapy in the age of enlightenment.

    Science.gov (United States)

    Rorke, M A

    2001-01-01

    As music therapists continue to discover more about the therapeutic powers of music, it is interesting now and then to look to the past in order to seek the roots of our contemporary practices. In this regard, the writings of eighteenth-century physicians are pivotal in the development of music therapy, for it was these individuals who first began to depend greatly upon scientific experimentation and observation to formulate their procedures. Representative of this stage in the history of music therapy are the findings of the renowned London physician Richard Brocklesby, the only doctor to write a treatise on music therapy in eighteenth-century England. The subjects treated by Brocklesby in his Reflections on the Power of Music (1749) include his musical remedies for the excesses of various emotions, particularly fear, excessive joy, and excessive sadness. He also discusses his musical remedies for the diseases of the mind recognized in the eighteenth century: delirium, frenzy, melancholia, and maniacal cases. He further considers music an aid to the elderly and to pregnant women. In short, Brocklesby provides a lively account of the curative powers of music as viewed in the mid-eighteenth century by an excellent medical mind.

  6. Music and movement share a dynamic structure that supports universal expressions of emotion

    Science.gov (United States)

    Sievers, Beau; Polansky, Larry; Casey, Michael; Wheatley, Thalia

    2013-01-01

    Music moves us. Its kinetic power is the foundation of human behaviors as diverse as dance, romance, lullabies, and the military march. Despite its significance, the music-movement relationship is poorly understood. We present an empirical method for testing whether music and movement share a common structure that affords equivalent and universal emotional expressions. Our method uses a computer program that can generate matching examples of music and movement from a single set of features: rate, jitter (regularity of rate), direction, step size, and dissonance/visual spikiness. We applied our method in two experiments, one in the United States and another in an isolated tribal village in Cambodia. These experiments revealed three things: (i) each emotion was represented by a unique combination of features, (ii) each combination expressed the same emotion in both music and movement, and (iii) this common structure between music and movement was evident within and across cultures. PMID:23248314

  7. Are there age differences in attention to emotional images following a sad mood induction? Evidence from a free-viewing eye-tracking paradigm.

    Science.gov (United States)

    Speirs, Calandra; Belchev, Zorry; Fernandez, Amanda; Korol, Stephanie; Sears, Christopher

    2017-10-30

    Two experiments examined age differences in the effect of a sad mood induction (MI) on attention to emotional images. Younger and older adults viewed sets of four images while their eye gaze was tracked throughout an 8-s presentation. Images were viewed before and after a sad MI to assess the effect of a sad mood on attention to positive and negative scenes. Younger and older adults exhibited positively biased attention after the sad MI, significantly increasing their attention to positive images, with no evidence of an age difference in either experiment. A test of participants' recognition memory for the images indicated that the sad MI reduced memory accuracy for sad images for younger and older adults. The results suggest that heightened attention to positive images following a sad MI reflects an affect regulation strategy related to mood repair. The implications for theories of the positivity effect are discussed.

  8. Effects of musical expertise on oscillatory brain activity in response to emotional sounds.

    Science.gov (United States)

    Nolden, Sophie; Rigoulot, Simon; Jolicoeur, Pierre; Armony, Jorge L

    2017-08-01

    Emotions can be conveyed through a variety of channels in the auditory domain, be it via music, non-linguistic vocalizations, or speech prosody. Moreover, recent studies suggest that expertise in one sound category can impact the processing of emotional sounds in other sound categories, as musicians have been found to process emotional musical and vocal sounds more efficiently than non-musicians. However, the neural correlates of these modulations, especially their time course, are not very well understood. Consequently, we focused here on how the neural processing of emotional information varies as a function of sound category and expertise of participants. Electroencephalogram (EEG) of 20 non-musicians and 17 musicians was recorded while they listened to vocal (speech and vocalizations) and musical sounds. The amplitude of EEG-oscillatory activity in the theta, alpha, beta, and gamma bands was quantified, and Independent Component Analysis (ICA) was used to identify underlying components of brain activity in each band. Category differences were found in the theta and alpha bands, due to larger responses to music and speech than to vocalizations, and in posterior beta, mainly due to differential processing of speech. In addition, we observed greater activation in frontal theta and alpha for musicians than for non-musicians, as well as an interaction between expertise and emotional content of sounds in frontal alpha. The results reflect musicians' expertise in recognition of emotion-conveying music, which seems to also generalize to emotional expressions conveyed by the human voice, in line with previous accounts of effects of expertise on musical and vocal sound processing. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. The production and perception of emotionally expressive walking sounds: similarities between musical performance and everyday motor activity.

    Directory of Open Access Journals (Sweden)

    Bruno L Giordano

    Full Text Available Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between musical expression of emotions and expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo, and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor origin hypothesis for the musical expression of emotions.

  10. Developing a benchmark for emotional analysis of music.

    Science.gov (United States)

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has expanded rapidly over the last decade. Many new methods and audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted a total of 21 active teams. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent-neural-network-based approaches combined with large feature sets work best for dynamic MER.
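    As a rough illustration of the kind of approach the benchmark favoured, the sketch below maps per-frame audio features to time-continuous valence and arousal with a small recurrent network. The feature dimensionality, network size, and random training data are placeholder assumptions; this is not the DEAM baseline or any participating system.

```python
# Sketch of a recurrent model for dynamic music emotion recognition:
# per-frame audio features -> valence/arousal at 2 Hz. Placeholder data.
import torch
import torch.nn as nn


class DynamicMER(nn.Module):
    def __init__(self, n_features: int = 260, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # valence and arousal per frame

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.rnn(x)               # (batch, time, hidden)
        return torch.tanh(self.head(out))  # predictions in [-1, 1]


# Toy training step on random tensors standing in for feature sequences
# (8 excerpts, 90 frames = 45 s at 2 Hz, 260 features per frame).
model = DynamicMER()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
features = torch.randn(8, 90, 260)
annotations = torch.rand(8, 90, 2) * 2 - 1  # placeholder dynamic annotations

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(features), annotations)
loss.backward()
optimizer.step()
print(f"toy MSE after one step: {loss.item():.3f}")
```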

  11. Music-evoked nostalgia: affect, memory, and personality.

    Science.gov (United States)

    Barrett, Frederick S; Grimm, Kevin J; Robins, Richard W; Wildschut, Tim; Sedikides, Constantine; Janata, Petr

    2010-06-01

    Participants listened to randomly selected excerpts of popular music and rated how nostalgic each song made them feel. Nostalgia was stronger to the extent that a song was autobiographically salient, arousing, familiar, and elicited a greater number of positive, negative, and mixed emotions. These effects were moderated by individual differences (nostalgia proneness, mood state, dimensions of the Affective Neurosciences Personality Scale, and factors of the Big Five Inventory). Nostalgia proneness predicted stronger nostalgic experiences, even after controlling for other individual difference measures. Nostalgia proneness was predicted by the Sadness dimension of the Affective Neurosciences Personality Scale and Neuroticism of the Big Five Inventory. Nostalgia was associated with both joy and sadness, whereas nonnostalgic and nonautobiographical experiences were associated with irritation.

  12. The Analysis of the Strength, Distribution and Direction for the EEG Phase Synchronization by Musical Stimulus

    Science.gov (United States)

    Ogawa, Yutaro; Ikeda, Akira; Kotani, Kiyoshi; Jimbo, Yasuhiko

    In this study, we propose an EEG phase-synchronization analysis that includes not only the average strength of synchronization but also its distribution and direction, under conditions in which emotion is evoked by musical stimuli. The experiment was performed with two different musical stimuli, each lasting 150 seconds, that evoke happiness or sadness. We found that although the average strength of synchronization shows no difference between the right and left sides of the frontal lobe during the happy stimulus, the distribution and directions show significant differences. The proposed analysis is therefore useful for detecting emotional state, because it provides information that cannot be obtained from the average strength of synchronization alone.
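    The abstract does not name the synchronization index, but a common choice for this kind of analysis is the phase-locking value computed from Hilbert-transform phases, with the mean phase difference indicating direction. The sketch below applies that standard approach to two synthetic channels; the sampling rate, frequency band, and signals are assumptions for illustration only.

```python
# Sketch of a phase-synchronization analysis between two EEG channels:
# band-pass filter, Hilbert phases, then phase-locking value (strength)
# and mean phase difference (direction). All signals are synthetic.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                          # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

# Two noisy channels sharing a 10 Hz component with a fixed phase lag.
ch1 = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
ch2 = np.sin(2 * np.pi * 10 * t - 0.6) + 0.5 * rng.standard_normal(t.size)

# Band-pass around the alpha band (8-13 Hz) and extract instantaneous phases.
b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")
phase1 = np.angle(hilbert(filtfilt(b, a, ch1)))
phase2 = np.angle(hilbert(filtfilt(b, a, ch2)))

dphi = phase1 - phase2
plv = np.abs(np.mean(np.exp(1j * dphi)))          # strength of synchronization
direction = np.angle(np.mean(np.exp(1j * dphi)))  # mean phase lag (radians)

print(f"PLV = {plv:.2f}, mean phase difference = {direction:+.2f} rad")
```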

  13. Do you remember your sad face? The roles of negative cognitive style and sad mood.

    Science.gov (United States)

    Caudek, Corrado; Monni, Alessandra

    2013-01-01

    We studied the effects of negative cognitive style, sad mood, and facial affect on the self-face advantage in a sample of 66 healthy individuals (mean age 26.5 years, range 19-47 years). The sample was subdivided into four groups according to inferential style and responsivity to sad mood induction. Following a sad mood induction, we examined the effect on working memory of an incidental association between facial affect, facial identity, and head-pose orientation. Overall, head-pose recognition was more accurate for the self-face than for nonself face (self-face advantage, SFA). However, participants high in negative cognitive style who experienced higher levels of sadness displayed a stronger SFA for sad expressions than happy expressions. The remaining participants displayed an opposite bias (a stronger SFA for happy expressions than sad expressions), or no bias. These findings highlight the importance of trait-vulnerability status in the working memory biases related to emotional facial expressions.

  14. Emotion felt by the listener and expressed by the music: literature review and theoretical perspectives

    Science.gov (United States)

    Schubert, Emery

    2013-01-01

    In his seminal paper, Gabrielsson (2002) distinguishes between emotion felt by the listener, here: “internal locus of emotion” (IL), and the emotion the music is expressing, here: “external locus of emotion” (EL). This paper tabulates 16 comparisons of felt versus expressed emotions in music, drawn from 19 studies/experiments published in the decade 2003–2012, and provides some theoretical perspectives. The key findings were that (1) IL ratings were frequently statistically the same as or lower than the corresponding EL ratings (e.g., a lower felt-happiness rating compared to the apparent happiness of the music), and that (2) self-selected and preferred music had a smaller gap across the emotion loci than experimenter-selected and disliked music. These key findings were explained by an “inhibited” emotional contagion mechanism, where the otherwise matching felt emotion may have been attenuated by some other factor such as social context. Matching between EL and IL for loved and self-selected pieces was explained by the activation of “contagion” circuits. Physiological arousal, personality and age, as well as musical features (tempo, mode, putative emotions), also influenced perceived and felt emotion distinctions. A variety of data collection formats were identified, mostly using rating items. In conclusion, a more systematic use of terminology appears desirable. Two broad categories, namely matched and unmatched, are proposed as being sufficient to capture the relationships between EL and IL, instead of the four categories suggested by Gabrielsson. PMID:24381565

  15. Effects of emotion regulation strategies on music-elicited emotions: An experimental study explaining individual differences

    NARCIS (Netherlands)

    Karreman, A.; Laceulle, O.M.; Hanser, Waldie; Vingerhoets, Ad

    This experimental study examined if emotional experience can be manipulated by applying an emotion regulation strategy during music listening and if individual differences in effects of strategies can be explained by person characteristics. Adults (N = 466) completed questionnaires and rated ...

  16. Effects of emotion regulation strategies on music elicited emotions : An experimental study explaining individual differences

    NARCIS (Netherlands)

    Karreman, A.; Laceulle, O.M.; Hanser, W.E.; Vingerhoets, A.J.J.M.

    2017-01-01

    This experimental study examined if emotional experience can be manipulated by applying an emotion regulation strategy during music listening and if individual differences in effects of strategies can be explained by person characteristics. Adults (N = 466) completed questionnaires and rated ...

  17. Musical activity and emotional competence - a twin study.

    Science.gov (United States)

    Theorell, Töres P; Lennartsson, Anna-Karin; Mosing, Miriam A; Ullén, Fredrik

    2014-01-01

    The hypothesis was tested that musical activities may contribute to the prevention of alexithymia. We tested whether musical creative achievement and musical practice are associated with lower alexithymia. A total of 8,000 Swedish twins aged 27-54 were studied. Alexithymia was assessed using the Toronto Alexithymia Scale-20. Musical achievement was rated on a seven-point scale. Participants estimated the number of hours of music practice at different ages throughout life, from which the total number of accumulated practice hours was estimated. They were also asked about ensemble playing. In addition, twin modelling was used to explore the genetic architecture of the relation between musical practice and alexithymia. Alexithymia was negatively associated with (i) musical creative achievement, (ii) having played a musical instrument as compared to never having played, and, for the subsample of participants that had played an instrument, (iii) total hours of musical training (r = -0.12 in men and -0.10 in women). Ensemble playing added significant variance. Twin modelling showed that alexithymia had a moderate heritability of 36% and that the association with musical practice could be explained by shared genetic influences. Associations between musical training and alexithymia remained significant when controlling for education, depression, and intelligence. Musical achievement and musical practice are associated with lower levels of alexithymia in both men and women. Musical engagement thus appears to be associated with higher emotional competence, although effect sizes are small. The association between musical training and alexithymia appears to be entirely genetically mediated, suggesting genetic pleiotropy.

  18. Expressive Suppression and Enhancement During Music-Elicited Emotions in Younger and Older Adults

    Directory of Open Access Journals (Sweden)

    Sandrine Vieillard

    2015-02-01

    Full Text Available When presented with emotional visual scenes, older adults have been found to be as capable as younger adults of regulating emotion expression, corroborating the view that emotion regulation skills are maintained or even improved in later adulthood. However, the possibility that gaze direction might help achieve an emotion control goal has not been taken into account, raising the question of whether the effortful processing of expressive regulation is really spared from the general age-related decline. Since it does not allow perceptual attention to be redirected away from the emotional source, music provides a useful way to address this question. In the present study, affective, behavioral, and physiological consequences of free expression of emotion, expressive suppression, and expressive enhancement were measured in 31 younger and 30 older adults while they listened to positive and negative musical excerpts. The main results indicated that compared to younger adults, older adults reported experiencing less emotional intensity in response to negative music during the free expression of emotion condition. No age difference was found in the ability to amplify or reduce emotional expressions. However, an age-related decline in the ability to reduce the intensity of the emotional state and an age-related increase in physiological reactivity were found when participants were instructed to suppress negative expression. Taken together, the current data support previous findings suggesting an age-related change in response to music. They also corroborate the observation that older adults are as efficient as younger adults at controlling behavioral expression. But most importantly, they suggest that when faced with auditory sources of negative emotion, older age does not always confer a better ability to regulate emotions.

  19. Effects of Music on Emotion Regulation: A Systematic Literature Review

    NARCIS (Netherlands)

    Luck, Geoff; Brabant, Olivier; Uhlig, Sylka; Jaschke, Artur; Scherder, E.J.A.

    2013-01-01

    Music and its use for emotion regulation processes remains, to this day, an unresolved question. Multiple experimental layouts encompassing its daily-life use and clinical applications across different cultures and continents have preserved music as a self-regulative tool. Therefore it is seen as a ...

  20. Associations between Sadness and Anger Regulation Coping, Emotional Expression, and Physical and Relational Aggression among Urban Adolescents

    Science.gov (United States)

    Sullivan, Terri N.; Helms, Sarah W.; Kliewer, Wendy; Goodman, Kimberly L.

    2010-01-01

    This study examined associations between self-reports of sadness and anger regulation coping, reluctance to express emotion, and physical and relational aggression between two cohorts of predominantly African-American fifth (N = 191; 93 boys and 98 girls) and eighth (N = 167; 73 boys and 94 girls) graders. Multiple regression analyses indicated…

  1. Neural Activations of Guided Imagery and Music in Negative Emotional Processing: A Functional MRI Study.

    Science.gov (United States)

    Lee, Sang Eun; Han, Yeji; Park, HyunWook

    2016-01-01

    The Bonny Method of Guided Imagery and Music uses music and imagery to access and explore personal emotions associated with episodic memories. Understanding the neural mechanism of guided imagery and music (GIM) as combined stimuli for emotional processing informs clinical application. We performed functional magnetic resonance imaging (fMRI) to demonstrate neural mechanisms of GIM for negative emotional processing when personal episodic memory is recalled and re-experienced through GIM processes. Twenty-four healthy volunteers participated in the study, which used classical music and verbal instruction stimuli to evoke negative emotions. To analyze the neural mechanism, activated regions associated with negative emotional and episodic memory processing were extracted by conducting volume analyses for the contrast between GIM and guided imagery (GI) or music (M). The GIM stimuli showed increased activation over the M-only stimuli in five neural regions associated with negative emotional and episodic memory processing, including the left amygdala, left anterior cingulate gyrus, left insula, bilateral culmen, and left angular gyrus (AG). Compared with GI alone, GIM showed increased activation in three regions associated with episodic memory processing in the emotional context, including the right posterior cingulate gyrus, bilateral parahippocampal gyrus, and AG. No neural regions related to negative emotional and episodic memory processing showed more activation for M and GI than for GIM. As a combined multimodal stimulus, GIM may increase neural activations related to negative emotions and episodic memory processing. Findings suggest a neural basis for GIM with personal episodic memories affecting cortical and subcortical structures and functions. © the American Music Therapy Association 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. Dichotic Listening Can Improve Perceived Clarity of Music in Cochlear Implant Users.

    Science.gov (United States)

    Vannson, Nicolas; Innes-Brown, Hamish; Marozeau, Jeremy

    2015-08-26

    Musical enjoyment for cochlear implant (CI) recipients is often reported to be unsatisfactory. Our goal was to determine whether the musical experience of postlingually deafened adult CI recipients could be enriched by presenting the bass and treble clef parts of short polyphonic piano pieces separately to each ear (dichotic). Dichotic presentation should artificially enhance the lateralization cues of each part and help the listeners to better segregate them and thus provide greater clarity. We also hypothesized that perception of the intended emotion of the pieces and their overall enjoyment would be enhanced in the dichotic mode compared with the monophonic (both parts in the same ear) and the diotic mode (both parts in both ears). Twenty-eight piano pieces specifically composed to induce sad or happy emotions were selected. The tempo of the pieces, which ranged from lento to presto, covaried with the intended emotion (from sad to happy). Thirty participants (11 normal-hearing listeners, 11 bimodal CI and hearing-aid users, and 8 bilaterally implanted CI users) participated in this study. Participants were asked to rate the perceived clarity, the intended emotion, and their preference of each piece in different listening modes. Results indicated that dichotic presentation produced small significant improvements in subjective ratings based on perceived clarity and preference. We also found that preference and clarity ratings were significantly higher for pieces with fast tempi compared with slow tempi. However, no significant differences between diotic and dichotic presentation were found for the participants' preference ratings, or their judgments of intended emotion. © The Author(s) 2015.

  3. Dichotic Listening Can Improve Perceived Clarity of Music in Cochlear Implant Users

    Directory of Open Access Journals (Sweden)

    Nicolas Vannson

    2015-08-01

    Full Text Available Musical enjoyment for cochlear implant (CI) recipients is often reported to be unsatisfactory. Our goal was to determine whether the musical experience of postlingually deafened adult CI recipients could be enriched by presenting the bass and treble clef parts of short polyphonic piano pieces separately to each ear (dichotic). Dichotic presentation should artificially enhance the lateralization cues of each part and help the listeners to better segregate them and thus provide greater clarity. We also hypothesized that perception of the intended emotion of the pieces and their overall enjoyment would be enhanced in the dichotic mode compared with the monophonic (both parts in the same ear) and the diotic mode (both parts in both ears). Twenty-eight piano pieces specifically composed to induce sad or happy emotions were selected. The tempo of the pieces, which ranged from lento to presto, covaried with the intended emotion (from sad to happy). Thirty participants (11 normal-hearing listeners, 11 bimodal CI and hearing-aid users, and 8 bilaterally implanted CI users) participated in this study. Participants were asked to rate the perceived clarity, the intended emotion, and their preference of each piece in different listening modes. Results indicated that dichotic presentation produced small significant improvements in subjective ratings based on perceived clarity. We also found that preference and clarity ratings were significantly higher for pieces with fast tempi compared with slow tempi. However, no significant differences between diotic and dichotic presentation were found for the participants’ preference ratings, or their judgments of intended emotion.

  4. "I felt sad and did not enjoy life": Cultural context and the associations between anhedonia, depressed mood, and momentary emotions.

    Science.gov (United States)

    Chentsova-Dutton, Yulia E; Choi, Eunsoo; Ryder, Andrew G; Reyes, Jenny

    2015-10-01

    The meanings of "anhedonia" and "depressed mood," the cardinal emotional symptoms of major depression, may be shaped by cultural norms regarding pleasure and sadness. Thirty-two European Americans, 26 Hispanic Americans, 33 Asian Americans, and 20 Russian Americans provided reports of (a) depressive symptoms, (b) momentary emotions and pleasure, and (c) global subjective well-being. Momentary reports were collected over 10 days using handheld personal digital assistants. Reports of anhedonia were associated with heightened levels of momentary low arousal negative emotions (e.g., sadness), whereas reports of depressed mood were associated with dampened levels of momentary positive emotions (e.g., happiness). Symptoms of anhedonia and depressed mood interacted in their associations with momentary pleasure. In addition, the associations of anhedonia and depressed mood with positive emotions and life satisfaction differed across cultural groups. Specifically, these symptoms were associated with dampened positive emotions in the Asian American group only. Additionally, anhedonia was associated with dampened global life satisfaction in the European American group only. These results suggest that reports of anhedonia and depressed mood cannot be interpreted at face value as specific and culture-free indicators of emotional deficits. Instead, they appear to signal changes in the balance of positive and negative emotions, with the exact nature of these signals shaped at least in part by cultural context. This conclusion has important consequences for the clinical interpretation of depressive symptoms in multicultural societies. © The Author(s) 2015.

  5. Shared acoustic codes underlie emotional communication in music and speech-Evidence from deep transfer learning.

    Science.gov (United States)

    Coutinho, Eduardo; Schuller, Björn

    2017-01-01

    Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, in such a way that the communication of specific emotions is achieved, at least to a certain extent, by means of shared acoustic patterns. From an affective sciences point of view, determining the degree of overlap between both domains is fundamental to understanding the shared mechanisms underlying this phenomenon. From a machine learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities to enlarge the amount of data available to develop music and speech emotion recognition systems. In this article, we investigate time-continuous predictions of emotion (Arousal and Valence) in music and speech, and the Transfer Learning between these domains. We establish a comparative framework including intra-domain experiments (i.e., models trained and tested on the same modality, either music or speech) and cross-domain experiments (i.e., models trained in one modality and tested on the other). In the cross-domain context, we evaluated two strategies: the direct transfer between domains, and the contribution of Transfer Learning techniques (feature-representation transfer based on Denoising Auto Encoders) for reducing the gap in the feature space distributions. Our results demonstrate an excellent cross-domain generalisation performance with and without feature representation transfer in both directions. In the case of music, cross-domain approaches outperformed intra-domain models for Valence estimation, whereas for speech, intra-domain models achieved the best performance. This is the first demonstration of shared acoustic codes for emotional expression in music and speech in the time-continuous domain.
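    To make the feature-representation-transfer idea concrete, here is a minimal sketch of the general technique rather than the authors' implementation: a denoising autoencoder is pre-trained on acoustic features pooled from both domains, and its encoder then provides a shared representation on which an arousal/valence regressor trained in one domain can be applied to the other. Feature dimensionality, network sizes, and the random data are placeholder assumptions.

```python
# Sketch of feature-representation transfer with a denoising autoencoder
# (DAE) for cross-domain emotion regression. All data are placeholders.
import torch
import torch.nn as nn

n_feat = 65  # assumed size of a frame-level acoustic feature vector


class DAE(nn.Module):
    def __init__(self, n_in: int, n_hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        noisy = x + 0.1 * torch.randn_like(x)  # corrupt the input
        return self.decoder(self.encoder(noisy))


music = torch.randn(512, n_feat)   # placeholder music features
speech = torch.randn(512, n_feat)  # placeholder speech features
pooled = torch.cat([music, speech])

dae = DAE(n_feat)
optimizer = torch.optim.Adam(dae.parameters(), lr=1e-3)
for _ in range(50):  # brief unsupervised pre-training on both domains
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(dae(pooled), pooled)
    loss.backward()
    optimizer.step()

# Downstream: regress arousal/valence from the shared representation,
# e.g. training on music features and testing on speech features.
regressor = nn.Linear(32, 2)
shared_speech = dae.encoder(speech).detach()
print(regressor(shared_speech).shape)  # torch.Size([512, 2])
```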

  6. Specificity of Cognitive Vulnerability in Fear and Sad Affect: Anxiety Sensitivity, Looming Cognitive Style and Hopelessness in Emotion Reactivity and Recovery

    DEFF Research Database (Denmark)

    del Palacio Gonzalez, Adriana; Clark, David A.

    2015-01-01

    Cognitive models of the pathogenesis of anxiety and depressive disorders identify looming cognitive style (LCS) and anxiety sensitivity (AS) as vulnerability factors specifically related to anxiety disorders, whereas hopelessness (HS) is considered more applicable to depression. Given their dimensional perspective, most cognitive models of psychopathology may also be extended to explain normal emotional responses. To investigate the effect of cognitive vulnerability on the reactivity and regulation of negative emotion, 183 undergraduates were assigned to watch a sad or fearful movie clip. Then, all participants retrieved positive memories in an expressive writing task. Hierarchical regression analyses revealed that LCS was uniquely associated with greater fear reactivity. AS and HS were significant predictors of greater fear and sadness after a down-regulation task, respectively...

  7. Music and Emotion: a composer’s perspective

    Directory of Open Access Journals (Sweden)

    Joel Douek

    2013-11-01

    Full Text Available This article takes an experiential and anecdotal look at the daily lives and work of film composers as creators of music. It endeavours to work backwards from what practitioners of the art and craft of music do instinctively or unconsciously, and tries to shine a light on it as a conscious process. It examines the role of the film composer in the task of conveying an often complex set of emotions, and of communicating with an immediacy and universality that often sit outside common language. Through the experiences of the author, as well as interviews with composer colleagues, the article explores both concrete and abstract ways in which music can bring meaning and magic to words and images, and serve as an underscore to our daily lives.

  8. Shared acoustic codes underlie emotional communication in music and speech-Evidence from deep transfer learning.

    Directory of Open Access Journals (Sweden)

    Eduardo Coutinho

    Full Text Available Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, in such a way that the communication of specific emotions is achieved, at least to a certain extent, by means of shared acoustic patterns. From an affective sciences point of view, determining the degree of overlap between both domains is fundamental to understanding the shared mechanisms underlying this phenomenon. From a machine learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities to enlarge the amount of data available to develop music and speech emotion recognition systems. In this article, we investigate time-continuous predictions of emotion (Arousal and Valence) in music and speech, and the Transfer Learning between these domains. We establish a comparative framework including intra-domain experiments (i.e., models trained and tested on the same modality, either music or speech) and cross-domain experiments (i.e., models trained in one modality and tested on the other). In the cross-domain context, we evaluated two strategies: the direct transfer between domains, and the contribution of Transfer Learning techniques (feature-representation transfer based on Denoising Auto Encoders) for reducing the gap in the feature space distributions. Our results demonstrate an excellent cross-domain generalisation performance with and without feature representation transfer in both directions. In the case of music, cross-domain approaches outperformed intra-domain models for Valence estimation, whereas for speech, intra-domain models achieved the best performance. This is the first demonstration of shared acoustic codes for emotional expression in music and speech in the time-continuous domain.

  9. Shared acoustic codes underlie emotional communication in music and speech—Evidence from deep transfer learning

    Science.gov (United States)

    Schuller, Björn

    2017-01-01

    Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, in such a way that the communication of specific emotions is achieved, at least to a certain extent, by means of shared acoustic patterns. From an affective sciences point of view, determining the degree of overlap between both domains is fundamental to understanding the shared mechanisms underlying this phenomenon. From a machine learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities to enlarge the amount of data available to develop music and speech emotion recognition systems. In this article, we investigate time-continuous predictions of emotion (Arousal and Valence) in music and speech, and the Transfer Learning between these domains. We establish a comparative framework including intra-domain experiments (i.e., models trained and tested on the same modality, either music or speech) and cross-domain experiments (i.e., models trained in one modality and tested on the other). In the cross-domain context, we evaluated two strategies: the direct transfer between domains, and the contribution of Transfer Learning techniques (feature-representation transfer based on Denoising Auto Encoders) for reducing the gap in the feature space distributions. Our results demonstrate an excellent cross-domain generalisation performance with and without feature representation transfer in both directions. In the case of music, cross-domain approaches outperformed intra-domain models for Valence estimation, whereas for speech, intra-domain models achieved the best performance. This is the first demonstration of shared acoustic codes for emotional expression in music and speech in the time-continuous domain. PMID:28658285

  10. The effects of happiness and sadness on Children's snack consumption.

    Science.gov (United States)

    Tan, Cin Cin; Holub, Shayla C

    2018-04-01

    Children appear to engage in emotional eating (i.e., eating in response to negative and positive emotions), but existing research has predominantly relied on parent-report and child-report, which may not necessarily reflect children's actual emotional eating behaviors. This study examined the effects of happiness and sadness on children's observed snack consumption and examined whether child characteristics (i.e., weight, gender, and age) interact with mood to predict snack consumption. To elicit mood, children (N = 91; mean age = 6.8 years; 48 boys) were randomly assigned to one of three mood induction conditions (happy, sad, or neutral); children's snack consumption was observed and measured after mood induction. Findings showed that children in the sad condition consumed more energy from chocolate, followed by children in the happy condition, and then the neutral condition. However, the reverse pattern was observed for goldfish crackers: children in the neutral condition consumed more energy from this savory snack than children in the happy condition, followed by children in the sad condition. Child weight status and gender did not interact with mood to predict snack consumption. Child age did interact with mood: older children consumed more chocolate in the sad condition compared to younger children. Child age was not related to snack consumption in the happy and neutral conditions. This study suggests that emotional eating in response to positive and negative emotions is evident during early childhood, but that this behavior is still developing during this period. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. The effects of distraction and reappraisal on children's parasympathetic regulation of sadness and fear.

    Science.gov (United States)

    Davis, Elizabeth L; Quiñones-Camacho, Laura E; Buss, Kristin A

    2016-02-01

    Children commonly experience negative emotions like sadness and fear, and much recent empirical attention has been devoted to understanding the factors supporting and predicting effective emotion regulation. Respiratory sinus arrhythmia (RSA), a cardiac index of parasympathetic function, has emerged as a key physiological correlate of children's self-regulation. But little is known about how children's use of specific cognitive emotion regulation strategies corresponds to concurrent parasympathetic regulation (i.e., RSA reactivity while watching an emotion-eliciting video). The current study describes an experimental paradigm in which 101 5- and 6-year-olds were randomly assigned to one of three different emotion regulation conditions: Control, Distraction, or Reappraisal. All children watched a sad film and a scary film (order counterbalanced), and children in the Distraction and Reappraisal conditions received instructions to deploy the target strategy to manage sadness/fear while they watched. Consistent with predictions, children assigned to use either emotion regulation strategy showed greater RSA augmentation from baseline than children in the Control condition (all children showed an overall increase in RSA levels from baseline), suggesting enhanced parasympathetic calming when children used distraction or reappraisal to regulate sadness and fear. But this pattern was found only among children who viewed the sad film before the scary film. Among children who viewed the scary film first, reappraisal promoted marginally better parasympathetic regulation of fear (no condition differences emerged for parasympathetic regulation of sadness when the sad film was viewed second). Results are discussed in terms of their implications for our understanding of children's emotion regulation and affective physiology. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Approaches of a Secondary Music Teacher in Response to the Social and Emotional Lives of Students

    Science.gov (United States)

    Edgar, Scott

    2015-01-01

    Music teachers interact regularly with students experiencing social and emotional challenges and are often under-prepared to do so. The purpose of this study was to examine approaches of a secondary general music teacher in responding to the social and emotional challenges of eight students in a music classroom at an alternative high school. A…

  13. Locus of emotion: the effect of task order and age on emotion perceived and emotion felt in response to music.

    Science.gov (United States)

    Schubert, Emery

    2007-01-01

    The relationship between emotions perceived to be expressed (external locus, EL) versus emotions felt (internal locus, IL) in response to music was examined using 5 contrasting pieces of Romantic, Western art music. The main hypothesis tested was that emotion along the dimensions of emotional strength, valence, and arousal was lower in magnitude for IL than for EL. IL and EL judgments made together after one listening (Experiment 2, n = 18) produced less differentiated responses than when each task was performed after separate listenings (Experiment 1, n = 28). This merging of responses in the locus-task-together condition started to disappear as statistical power was increased. Statistical power was increased by recruiting an additional subject pool of elderly individuals (Experiment 3, n = 19, mean age 75 years). Their valence responses were more positive, and their emotional-strength ratings were generally lower, compared to those of their younger counterparts. Overall data analysis revealed that IL responses fluctuated slightly more than EL responses, meaning that the latter are more stable. An additional dimension of dominance-submissiveness was also examined; it was useful in differentiating between pieces but did not return a difference between IL and EL. Some therapy applications of these findings are discussed.

  14. Effects of mood induction via music on cardiovascular measures of negative emotion during simulated driving.

    Science.gov (United States)

    Fairclough, Stephen H; van der Zwaag, Marjolein; Spiridon, Elena; Westerink, Joyce

    2014-04-22

    A study was conducted to investigate the potential of mood induction via music to influence cardiovascular correlates of negative emotions experienced during driving. One hundred participants were randomly assigned to one of five groups, four of which were exposed to different categories of music: high activation/positive valence (HA/PV), high activation/negative valence (HA/NV), low activation/positive valence (LA/PV), and low activation/negative valence (LA/NV). Following exposure to their respective categories of music, participants were required to complete a simulated driving journey with a fixed time schedule. Negative emotion was induced via exposure to stationary traffic during the simulated route. Cardiovascular reactivity was measured via blood pressure, heart rate, and cardiovascular impedance. Subjective self-assessment of anger and mood was also recorded. Results indicated that low activation music, regardless of valence, reduced systolic reactivity during the simulated journey relative to HA/NV music and the control (no music) condition. Self-reported data indicated that participants were not consciously aware of any influence of music on their subjective mood. It is concluded that cardiovascular reactivity to negative mood may be mediated by the emotional properties of music. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Affinity for Poetry and Aesthetic Appreciation of Joyful and Sad Poems.

    Science.gov (United States)

    Kraxenberger, Maria; Menninghaus, Winfried

    2016-01-01

    Artworks with sad and affectively negative content have repeatedly been reported to elicit positive aesthetic appreciation. This topic has received much attention both in the history of poetics and aesthetics as well as in recent studies on sad films and sad music. However, poetry and aesthetic evaluations of joyful and sad poetry have received only little attention in empirical studies to date. We collected beauty and liking ratings for 24 sad and 24 joyful poems from 128 participants. Following previous studies, we computed an integrated measure for overall aesthetic appreciation based on the beauty and liking ratings to test for differences in appreciation between joyful and sad poems. Further, we tested whether readers' judgments are related to their affinity for poetry. Results show that sad poems are rated significantly higher for aesthetic appreciation than joyful poems, and that aesthetic appreciation is influenced by the participants' affinity for poetry.

  16. Skin reactions to histamine of healthy subjects after hypnotically induced emotions of sadness, anger, and happiness.

    Science.gov (United States)

    Zachariae, R; Jørgensen, M M; Egekvist, H; Bjerring, P

    2001-08-01

    The severity of symptoms in asthma and other hypersensitivity-related disorders has been associated with changes in mood but little is known about the mechanisms possibly mediating such a relationship. The purpose of this study was to examine the influence of mood on skin reactivity to histamine by comparing the effects of hypnotically induced emotions on flare and wheal reactions to cutaneous histamine prick tests. Fifteen highly hypnotically susceptible volunteers had their cutaneous reactivity to histamine measured before hypnosis at 1, 2, 3, 4, 5, 10, and 15 min after the histamine prick. These measurements were repeated under three hypnotically induced emotions of sadness, anger, and happiness presented in a counterbalanced order. Skin reactions were measured as change in histamine flare and wheal area in mm2 per minute. The increase in flare reaction in the time interval from 1 to 3 min during happiness and anger was significantly smaller than flare reactions during sadness (P<0.05). No effect of emotion was found for wheal reactions. Hypnotic susceptibility scores were associated with increased flare reactions at baseline (r=0.56; P<0.05) and during the condition of happiness (r=0.56; P<0.05). Our results agree with previous studies showing mood to be a predictor of cutaneous immediate-type hypersensitivity and histamine skin reactions. The results are also in concordance with earlier findings of an association between hypnotic susceptibility and increased reactivity to an allergen.

  17. Norms for 10,491 Spanish words for five discrete emotions: Happiness, disgust, anger, fear, and sadness.

    Science.gov (United States)

    Stadthagen-González, Hans; Ferré, Pilar; Pérez-Sánchez, Miguel A; Imbault, Constance; Hinojosa, José Antonio

    2017-09-18

    The discrete emotion theory proposes that affective experiences can be reduced to a limited set of universal "basic" emotions, most commonly identified as happiness, sadness, anger, fear, and disgust. Here we present norms for 10,491 Spanish words for those five discrete emotions collected from a total of 2,010 native speakers, making it the largest set of norms for discrete emotions in any language to date. When used in conjunction with the norms from Hinojosa, Martínez-García et al. (Behavior Research Methods, 48, 272-284, 2016) and Ferré, Guasch, Martínez-García, Fraga, & Hinojosa (Behavior Research Methods, 49, 1082-1094, 2017), researchers now have access to ratings of discrete emotions for 13,633 Spanish words. Our norms show a high degree of inter-rater reliability and correlate highly with those from Ferré et al. (2017). Our exploration of the relationship between the five discrete emotions and relevant lexical and emotional variables confirmed findings of previous studies conducted with smaller datasets. The availability of such large set of norms will greatly facilitate the study of emotion, language and related fields. The norms are available as supplementary materials to this article.

  18. EMuJoy: software for continuous measurement of perceived emotions in music.

    Science.gov (United States)

    Nagel, Frederik; Kopiez, Reinhard; Grewe, Oliver; Altenmüller, Eckart

    2007-05-01

    An adequate study of emotions in music and film should be based on the real-time measurement of self-reported data using a continuous-response method. The recording system discussed in this article reflects two important aspects of such research: First, for a better comparison of results, experimental and technical standards for continuous measurement should be taken into account, and second, the recording system should be open to the inclusion of multimodal stimuli. In light of these two considerations, our article addresses four basic principles of the continuous measurement of emotions: (1) the dimensionality of the emotion space, (2) data acquisition (e.g., the synchronization of media and the self-reported data), (3) interface construction for emotional responses, and (4) the use of multiple stimulus modalities. Researcher-developed software (EMuJoy) is presented as a freeware solution for the continuous measurement of responses to different media, along with empirical data from the self-reports of 38 subjects listening to emotional music and viewing affective pictures.
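    The core technical requirement described above, sampling a two-dimensional self-report stream at a fixed rate and time-stamping it against stimulus onset so it can later be synchronized with the media, can be sketched as follows. The sampling rate, CSV output, and stubbed input function are illustrative assumptions; EMuJoy itself is a separate application whose internals are not reproduced here.

```python
# Sketch of continuous two-dimensional emotion measurement: sample a
# response device at a fixed rate and log time-stamped valence/arousal.
# The input source is a stub standing in for the actual interface.
import csv
import math
import time

SAMPLE_RATE_HZ = 10.0  # assumed sampling rate of the self-report stream


def read_cursor(elapsed: float) -> tuple[float, float]:
    """Stand-in for reading the response interface; returns
    (valence, arousal) in the range [-1, 1]."""
    return math.sin(elapsed / 5.0), math.cos(elapsed / 7.0)


def record(duration_s: float, path: str = "self_report.csv") -> None:
    onset = time.monotonic()  # reference point shared with stimulus playback
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time_s", "valence", "arousal"])
        while (elapsed := time.monotonic() - onset) < duration_s:
            valence, arousal = read_cursor(elapsed)
            writer.writerow([f"{elapsed:.2f}", f"{valence:.3f}", f"{arousal:.3f}"])
            time.sleep(1.0 / SAMPLE_RATE_HZ)


record(duration_s=3.0)  # short demo run
```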

  19. The emotional impact of national music on young and older adults differing in posttraumatic stress disorder symptoms.

    Science.gov (United States)

    Bensimon, Moshe; Bodner, Ehud; Shrira, Amit

    2017-10-01

    In spite of previous evidence regarding the function of national songs as a contextual stimulus, their effect on the emotional state of older adults living with different levels of posttraumatic stress disorder (PTSD) symptoms has not been examined. Following the 2014 Israel-Gaza conflict, we examined the emotional effects of listening to happy national songs (songs of Independence Day) and sad national songs (Memorial Day songs) on young (N = 144, mean age = 29.4) and older adults (N = 132, mean age = 68.5). Respondents were exposed to happy or sad national songs, and completed measures of exposure to missile attacks, related PTSD symptoms, and positive and negative emotions. Sad national songs were related to higher negative affect among young adults who were lower on PTSD symptoms, but not among their older counterparts. In contrast, sad national songs were related to higher negative affect among older adults who were higher on PTSD symptoms, but not among their young counterparts. These findings support the strength and vulnerability model, as they demonstrate that relative to young adults, older adults are generally more capable of withstanding negative stimuli, yet are more sensitive to negative stimuli when they suffer from chronic vulnerability, as in the case of a higher level of PTSD symptoms.

  20. Mismatch negativity of sad syllables is absent in patients with major depressive disorder.

    Science.gov (United States)

    Pang, Xiaomei; Xu, Jing; Chang, Yi; Tang, Di; Zheng, Ya; Liu, Yanhua; Sun, Yiming

    2014-01-01

    Major depressive disorder (MDD) is an important and highly prevalent mental disorder characterized by anhedonia and a lack of interest in everyday activities. Additionally, patients with MDD appear to have deficits in various cognitive abilities. Although a number of studies investigating the central auditory processing of low-level sound features in patients with MDD have demonstrated that this population exhibits impairments in automatic processing, the influence of emotional voice processing has yet to be addressed. To explore the automatic processing of emotional prosodies in patients with MDD, we analyzed the ability to detect automatic changes using event-related potentials (ERPs). This study included 18 patients with MDD and 22 age- and sex-matched healthy controls. Subjects were instructed to watch a silent movie but to ignore the afferent acoustic emotional prosodies presented to both ears while continuous electroencephalographic activity was synchronously recorded. Prosodies included meaningless syllables, such as "dada" spoken with happy, angry, sad, or neutral tones. The mean amplitudes of the ERPs elicited by emotional stimuli and the peak latency of the emotional differential waveforms were analyzed. The sad MMN was absent in patients with MDD, whereas the happy and angry MMN components were similar across groups. The abnormal sad emotional MMN component was not significantly correlated with the HRSD-17 or HAMA scores. The data indicate that patients with MDD are impaired in their ability to automatically process sad prosody, whereas their ability to process happy and angry prosodies remains normal. The dysfunctional sad emotion-related MMN in patients with MDD was not correlated with depression symptoms. The blunted MMN of sad prosodies could be considered a trait of MDD.
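
    The MMN analysis described above rests on a simple construction: average the responses to emotional deviants and to neutral standards, subtract the two, and read off the amplitude and peak latency of the difference waveform. The sketch below shows that arithmetic on simulated data only; it omits everything a real ERP pipeline would add (filtering, artifact rejection, baseline correction) and is not the authors' code.

```python
# Illustrative computation of an MMN-like difference waveform: the averaged response
# to emotional deviants minus the response to neutral standards, plus the peak latency
# of the difference within a typical MMN window. All data are simulated.

import numpy as np

fs = 500                                  # sampling rate in Hz
t = np.arange(-0.1, 0.5, 1.0 / fs)        # epoch from -100 ms to 500 ms

rng = np.random.default_rng(0)
n_trials = 200
# Simulated single-trial epochs (trials x time): neutral standards vs. sad deviants.
standard = rng.normal(0.0, 1.0, (n_trials, t.size))
deviant = rng.normal(0.0, 1.0, (n_trials, t.size))
# Give the deviant an extra negativity around 200 ms to mimic an MMN.
deviant -= 1.5 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))

erp_standard = standard.mean(axis=0)      # averaged ERP to standards
erp_deviant = deviant.mean(axis=0)        # averaged ERP to deviants
mmn = erp_deviant - erp_standard          # difference waveform

# Peak (most negative) latency within a 100-300 ms window.
window = (t >= 0.1) & (t <= 0.3)
peak_latency_ms = t[window][np.argmin(mmn[window])] * 1000
print(f"MMN peak latency: {peak_latency_ms:.0f} ms, amplitude: {mmn[window].min():.2f} a.u.")
```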

  1. The Effect of Sad Facial Expressions on Weight Judgment

    Directory of Open Access Journals (Sweden)

    Trent D Weston

    2015-04-01

    Full Text Available Although the body weight evaluation (e.g., normal or overweight) of others relies on perceptual impressions, it also can be influenced by other psychosocial factors. In this study, we explored the effect of task-irrelevant emotional facial expressions on judgments of body weight and the relationship between emotion-induced weight judgment bias and other psychosocial variables, including attitudes towards obese persons. Forty-four participants were asked to quickly make binary body weight decisions for 960 randomized sad and neutral faces of varying weight levels presented on a computer screen. The results showed that sad facial expressions systematically decreased the decision threshold of overweight judgments for male faces. This perceptual decision bias by emotional expressions was positively correlated with the belief that being overweight is not under the control of obese persons. Our results provide experimental evidence that task-irrelevant emotional expressions can systematically change the decision threshold for weight judgments, demonstrating that sad expressions can make faces appear more overweight than they would otherwise be judged.
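
    One generic way to express a shift in "decision threshold" for binary judgments is to fit a psychometric (logistic) function to the proportion of "overweight" responses at each weight level and compare where the curves cross 50% for sad versus neutral faces. The sketch below illustrates that idea with invented response proportions; it is not the analysis reported in the study, whose threshold measure may have been derived differently.

```python
# Hedged illustration of a threshold shift: fit logistic psychometric functions to
# the proportion of "overweight" responses per weight level (invented data) and
# compare the 50% points for neutral vs. sad faces.

import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, threshold, slope):
    """Probability of an 'overweight' judgment as a function of stimulus weight level."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

weight_levels = np.array([1, 2, 3, 4, 5, 6, 7], dtype=float)
# Invented proportions of "overweight" responses at each weight level.
p_overweight_neutral = np.array([0.02, 0.08, 0.20, 0.45, 0.75, 0.92, 0.98])
p_overweight_sad     = np.array([0.05, 0.15, 0.35, 0.60, 0.85, 0.95, 0.99])

popt_neutral, _ = curve_fit(psychometric, weight_levels, p_overweight_neutral, p0=[4.0, 1.0])
popt_sad, _     = curve_fit(psychometric, weight_levels, p_overweight_sad, p0=[4.0, 1.0])

print(f"neutral threshold: {popt_neutral[0]:.2f} weight levels")
print(f"sad threshold:     {popt_sad[0]:.2f} weight levels")
print(f"shift (sad - neutral): {popt_sad[0] - popt_neutral[0]:.2f}")
```

    A lower threshold under sad expressions means that more faces cross the "overweight" criterion, which is the direction of bias the abstract describes.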

  2. Maladaptive and adaptive emotion regulation through music: a behavioral and neuroimaging study of males and females

    Science.gov (United States)

    Carlson, Emily; Saarikallio, Suvi; Toiviainen, Petri; Bogert, Brigitte; Kliuchko, Marina; Brattico, Elvira

    2015-01-01

    Music therapists use guided affect regulation in the treatment of mood disorders. However, self-directed uses of music in affect regulation are not fully understood. Some uses of music may have negative effects on mental health, as can non-music regulation strategies, such as rumination. Psychological testing and functional magnetic resonance imaging (fMRI) were used to explore music listening strategies in relation to mental health. Participants (n = 123) were assessed for depression, anxiety and Neuroticism, and uses of Music in Mood Regulation (MMR). Neural responses to music were measured in the medial prefrontal cortex (mPFC) in a subset of participants (n = 56). Discharge, using music to express negative emotions, related to increased anxiety and Neuroticism in all participants and particularly in males. Males high in Discharge showed decreased activity of mPFC during music listening compared with those using less Discharge. Females high in Diversion, using music to distract from negative emotions, showed more mPFC activity than females using less Diversion. These results suggest that the use of Discharge strategy can be associated with maladaptive patterns of emotional regulation, and may even have long-term negative effects on mental health. This finding has real-world applications in psychotherapy and particularly in clinical music therapy. PMID:26379529

  3. Maladaptive and adaptive emotion regulation through music: A behavioural and neuroimaging study of males and females

    Directory of Open Access Journals (Sweden)

    Emily eCarlson

    2015-08-01

    Full Text Available Music therapists use guided affect regulation in the treatment of mood disorders. However, self-directed uses of music in affect regulation are not fully understood. Some uses of music may have negative effects on mental health, as can non-music regulation strategies, such as rumination. Psychological testing and functional magnetic resonance imaging (fMRI) were used to explore music listening strategies in relation to mental health. Participants (n=123) were assessed for depression, anxiety and Neuroticism, and uses of Music in Mood Regulation (MMR). Neural responses to music were measured in the medial prefrontal cortex (mPFC) in a subset of participants (n=56). Discharge, using music to express negative emotions, related to increased anxiety and Neuroticism in all participants and particularly in males. Males high in Discharge showed decreased activity of mPFC during music listening compared with those using less Discharge. Females high in Diversion, using music to distract from negative emotions, showed more mPFC activity than females using less Diversion. These results suggest that the use of Discharge strategy can be associated with maladaptive patterns of emotional regulation, and may even have long-term negative effects on mental health. This finding has real-world applications in psychotherapy and particularly in clinical music therapy.

  4. Effect of regulating anger and sadness on decision-making.

    Science.gov (United States)

    Szasz, Paul Lucian; Hofmann, Stefan G; Heilman, Renata M; Curtiss, Joshua

    2016-11-01

    The aim of the current study was to investigate the effects of reappraisal, acceptance, and rumination for regulating anger and sadness on decision-making. Participants (N = 165) were asked to recall two autobiographical events in which they felt intense anger and sadness, respectively. Participants were then instructed to reappraise, accept, ruminate, or not use any strategies to regulate their feelings of anger and sadness. Following this manipulation, risk aversion and decision-making strategies were measured using a computer-based measure of risk-taking and a simulated real-life decision-making task. Participants who were instructed to reappraise their emotions showed the least anger and sadness and the most adaptive decision-making strategies, but the least risk aversion, as compared to the participants in the other conditions. These findings suggest that emotion regulation strategies for negative affective states have an immediate effect on decision-making and risk-taking behaviors.

  5. Musical activity and emotional competence – a twin study

    Directory of Open Access Journals (Sweden)

    Tores PG Theorell

    2014-07-01

    Full Text Available The hypothesis was tested that musical creative achievement and musical practice are associated with lower alexithymia. 8000 Swedish twins aged 27-54 were studied. Alexithymia was assessed using the Toronto Alexithymia Scale (TAS-20). Musical achievement was rated on a 7-graded scale. Participants estimated the number of hours of music practice during different ages throughout life, from which a total estimate of accumulated hours over the lifetime was made. They were also asked about ensemble playing. In addition, twin modelling was used to explore the genetic architecture of the relation between musical practice and alexithymia. Alexithymia was negatively associated with (i) musical creative achievement, (ii) having played a musical instrument as compared to never having played, and, for the subsample of participants that had played an instrument, (iii) total hours of musical training (r = -.12 in men and -.10 in women). Ensemble playing added significant variance. Twin modelling showed that alexithymia had a moderate heritability of 36% and that the association with musical practice could be explained by shared genetic influences. Associations between musical training and alexithymia remained significant when controlling for education, depression, and intelligence. Musical achievement and musical practice are associated with lower levels of alexithymia in both men and women. Musical engagement thus appears to be associated with higher emotional competence, although effect sizes are small. The association between musical training and alexithymia appears to be entirely genetically mediated, suggesting genetic pleiotropy.
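
    For readers unfamiliar with twin designs, the classical Falconer approximation shows how monozygotic and dizygotic twin correlations translate into rough variance components. The study itself fitted a full twin model rather than these formulas, and the correlations below are hypothetical values chosen only so that the heritability lands near the reported 36%.

```python
# Back-of-the-envelope Falconer approximation (NOT the twin modelling used in the study).
# The twin correlations are hypothetical, picked so h2 comes out near 0.36.

r_mz = 0.40   # hypothetical alexithymia correlation in monozygotic twins
r_dz = 0.22   # hypothetical correlation in dizygotic twins

h2 = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
c2 = 2 * r_dz - r_mz     # shared-environment variance
e2 = 1 - r_mz            # non-shared environment plus measurement error

print(f"h2 = {h2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")
```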

  6. Designing for group music improvisation: a case for jamming with your emotions

    NARCIS (Netherlands)

    Ostos Rios, G.A.; Funk, M.; Hengeveld, B.J.

    2016-01-01

    During improvisation, musicians express themselves through live music. This project looks at the relationship between musicians during music improvisation, the processes of expression and communication taking place during performance and possible ways to use musicians’ emotions, to influence a

  7. Emotion felt by the listener and expressed by the music: a literature review and theoretical investigation

    Directory of Open Access Journals (Sweden)

    Emery eSchubert

    2013-12-01

    Full Text Available In his seminal paper, Gabrielsson (2002) distinguishes between emotion felt by the listener, here: ‘internal locus of emotion’ (IL), and the emotion the music is expressing, here: ‘external locus of emotion’ (EL). This paper tabulates 16 such publications published in the decade 2003-2012, consisting of 19 studies/experiments, and provides some theoretical perspectives. The key findings were that (1) IL ratings were frequently rated statistically the same as or lower than the corresponding EL rating (e.g. a lower felt happiness rating compared to the apparent happiness of the music), and that (2) self-selected and preferred music had a smaller gap across the emotion loci than experimenter-selected and disliked music. These key findings were explained by an ‘inhibited’ emotional contagion mechanism, where the otherwise matching felt emotion may have been attenuated by some other factor such as social context. Matching between EL and IL for loved and self-selected pieces was explained by the activation of ‘contagion’ circuits. Physiological arousal, personality and age, as well as musical features (tempo, mode, putative emotions) were observed to influence perceived and felt emotion distinctions. A variety of data collection formats were identified, but mostly using continuous rating scales. In conclusion, a more systematic use of terminology appears desirable with respect to theory-building. Whether two broad categories, namely matched and unmatched, are sufficient to capture the relationships between EL and IL, instead of four categories as suggested by Gabrielsson, is subject to future research.

  8. Sad facial cues inhibit temporal attention: evidence from an event-related potential study.

    Science.gov (United States)

    Kong, Xianxian; Chen, Xiaoqiang; Tan, Bo; Zhao, Dandan; Jin, Zhenlan; Li, Ling

    2013-06-19

    We examined the influence of different emotional cues (happy or sad) on temporal attention (short or long interval) using behavioral as well as event-related potential recordings during a Stroop task. Emotional stimuli cued short and long time intervals, inducing 'sad-short', 'sad-long', 'happy-short', and 'happy-long' conditions. Following the intervals, participants performed a numeric Stroop task. Behavioral results showed the temporal attention effects in the sad-long, happy-long, and happy-short conditions, in which valid cues quickened the reaction times, but not in the sad-short condition. N2 event-related potential components showed sad cues to have decreased activity for short intervals compared with long intervals, whereas happy cues did not. Taken together, these findings provide evidence for different modulation of sad and happy facial cues on temporal attention. Furthermore, sad cues inhibit temporal attention, resulting in longer reaction time and decreased neural activity in the short interval by diverting more attentional resources.

  9. Happy faces are preferred regardless of familiarity--sad faces are preferred only when familiar.

    Science.gov (United States)

    Liao, Hsin-I; Shimojo, Shinsuke; Yeh, Su-Ling

    2013-06-01

    Familiarity leads to preference (e.g., the mere exposure effect), yet it remains unknown whether it is objective familiarity, that is, repetitive exposure, or subjective familiarity that contributes to preference. In addition, it is unexplored whether and how different emotions influence familiarity-related preference. The authors investigated whether happy or sad faces are preferred or perceived as more familiar and whether this subjective familiarity judgment correlates with preference for different emotional faces. An emotional face--happy or sad--was paired with a neutral face, and participants rated the relative preference and familiarity of each of the paired faces. For preference judgment, happy faces were preferred and sad faces were less preferred, compared with neutral faces. For familiarity judgment, happy faces did not show any bias, but sad faces were perceived as less familiar than neutral faces. Item-by-item correlational analyses show preference for sad faces--but not happy faces--positively correlate with familiarity. These results suggest a direct link between positive emotion and preference, and argue at least partly against a common cause for familiarity and preference. Instead, facial expression of different emotional valence modulates the link between familiarity and preference.

  10. Problem of Formation of Emotional Culture of Musical College Students

    Directory of Open Access Journals (Sweden)

    G N Kazantseva

    2012-06-01

    Full Text Available The article presents a structural-functional model of the emotional culture of the personality, together with characteristics of the three levels of its development. An empirical test of the model, carried out during the implementation of a program for developing the emotional culture of musical college students, is also described.

  11. Emotions over time: synchronicity and development of subjective, physiological, and facial affective reactions to music.

    Science.gov (United States)

    Grewe, Oliver; Nagel, Frederik; Kopiez, Reinhard; Altenmüller, Eckart

    2007-11-01

    Most people are able to identify basic emotions expressed in music and experience affective reactions to music. But does music generally induce emotion? Does it elicit subjective feelings, physiological arousal, and motor reactions reliably in different individuals? In this interdisciplinary study, measurement of skin conductance, facial muscle activity, and self-monitoring were synchronized with musical stimuli. A group of 38 participants listened to classical, rock, and pop music and reported their feelings in a two-dimensional emotion space during listening. The first entrance of a solo voice or choir and the beginning of new sections were found to elicit interindividual changes in subjective feelings and physiological arousal. Quincy Jones' "Bossa Nova" motivated movement and laughing in more than half of the participants. Bodily reactions such as "goose bumps" and "shivers" could be stimulated by the "Tuba Mirum" from Mozart's Requiem in 7 of 38 participants. In addition, the authors repeated the experiment seven times with one participant to examine intraindividual stability of effects. This exploratory combination of approaches throws a new light on the astonishing complexity of affective music listening.

  12. Predicting musically induced emotions from physiological inputs: linear and neural network models.

    Science.gov (United States)

    Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M

    2013-01-01

    Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants-heart rate (HR), respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.

  13. Predicting musically induced emotions from physiological inputs: Linear and neural network models

    Directory of Open Access Journals (Sweden)

    Frank A. Russo

    2013-08-01

    Full Text Available Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of 'felt' emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants – heart rate, respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
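
    The modelling strategy described in these two abstracts can be paraphrased as: extract per-excerpt physiological features, fit a multiple linear regression and a small neural network to the mean valence/arousal ratings of eight excerpts, and compare prediction error on the remaining four. The sketch below mirrors that train/test split with simulated features and ratings; it is not the authors' implementation, and the feature set and network size are placeholders.

```python
# Hedged sketch: predict mean valence/arousal ratings per excerpt from physiological
# features with (a) multiple linear regression and (b) a small neural network,
# training on 8 excerpts and testing on the other 4. Data are simulated.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
n_excerpts, n_features = 12, 5   # e.g. HR, respiration, GSR, corrugator, zygomaticus
X = rng.normal(size=(n_excerpts, n_features))            # physiological features per excerpt
true_weights = rng.normal(size=(n_features, 2))
y = np.tanh(X @ true_weights) + rng.normal(0.0, 0.1, size=(n_excerpts, 2))  # [valence, arousal]

train, test = np.arange(8), np.arange(8, 12)

linear = LinearRegression().fit(X[train], y[train])
network = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X[train], y[train])

def rmse(model):
    """Per-dimension root-mean-square error on the held-out excerpts."""
    pred = model.predict(X[test])
    return np.sqrt(np.mean((pred - y[test]) ** 2, axis=0))

print("linear  RMSE (valence, arousal):", rmse(linear))
print("network RMSE (valence, arousal):", rmse(network))
```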

  14. Tuned In emotion regulation program using music listening: Effectiveness for adolescents in educational settings

    Directory of Open Access Journals (Sweden)

    Genevieve Anita Dingle

    2016-06-01

    Full Text Available This paper presents an effectiveness study of Tuned In, a novel emotion regulation intervention that uses participant selected music to evoke emotions in session and teaches participants emotional awareness and regulation skills. The group program content is informed by a two dimensional model of emotion (arousal, valence), along with music psychology theories about how music evokes emotional responses. The program has been evaluated in two samples of adolescents: 41 at risk adolescents (76% males; Mage = 14.8 years) attending an educational re-engagement program and 216 students (100% females; Mage = 13.6 years) attending a mainstream secondary school. Results showed significant pre- to post-program improvements in measures of emotion awareness, identification, and regulation (p < .01 to p = .06 in the smaller at risk sample and all p < .001 in the mainstream school sample). Participant ratings of engagement and likelihood of using the strategies learned in the program were high. Tuned In shows promise as a brief emotion regulation intervention for adolescents, and these findings extend an earlier study with young adults. Tuned In is a-theoretical in regard to psychotherapeutic approach and could be integrated with other program components as required.

  15. Tuned In Emotion Regulation Program Using Music Listening: Effectiveness for Adolescents in Educational Settings.

    Science.gov (United States)

    Dingle, Genevieve A; Hodges, Joseph; Kunde, Ashleigh

    2016-01-01

    This paper presents an effectiveness study of Tuned In, a novel emotion regulation intervention that uses participant selected music to evoke emotions in session and teaches participants emotional awareness and regulation skills. The group program content is informed by a two dimensional model of emotion (arousal, valence), along with music psychology theories about how music evokes emotional responses. The program has been evaluated in two samples of adolescents: 41 "at risk" adolescents (76% males; M age = 14.8 years) attending an educational re-engagement program and 216 students (100% females; M age = 13.6 years) attending a mainstream secondary school. Results showed significant pre- to post-program improvements in measures of emotion awareness, identification, and regulation (p < 0.01 to p = 0.06 in the smaller "at risk" sample and all p < 0.001 in the mainstream school sample). Participant ratings of engagement and likelihood of using the strategies learned in the program were high. Tuned In shows promise as a brief emotion regulation intervention for adolescents, and these findings extend an earlier study with young adults. Tuned In is a-theoretical in regard to psychotherapeutic approach and could be integrated with other program components as required.

  16. Theory-guided Therapeutic Function of Music to facilitate emotion regulation development in preschool-aged children.

    Science.gov (United States)

    Sena Moore, Kimberly; Hanson-Abromeit, Deanna

    2015-01-01

    Emotion regulation (ER) is an umbrella term to describe interactive, goal-dependent explicit, and implicit processes that are intended to help an individual manage and shift an emotional experience. The primary window for appropriate ER development occurs during the infant, toddler, and preschool years. Atypical ER development is considered a risk factor for mental health problems and has been implicated as a primary mechanism underlying childhood pathologies. Current treatments are predominantly verbal- and behavioral-based and lack the opportunity to practice in-the-moment management of emotionally charged situations. There is also an absence of caregiver-child interaction in these treatment strategies. Based on behavioral and neural support for music as a therapeutic mechanism, the incorporation of intentional music experiences, facilitated by a music therapist, may be one way to address these limitations. Musical Contour Regulation Facilitation (MCRF) is an interactive therapist-child music-based intervention for ER development practice in preschoolers. The MCRF intervention uses the deliberate contour and temporal structure of a music therapy session to mirror the changing flow of the caregiver-child interaction through the alternation of high arousal and low arousal music experiences. The purpose of this paper is to describe the Therapeutic Function of Music (TFM), a theory-based description of the structural characteristics for a music-based stimulus to musically facilitate developmentally appropriate high arousal and low arousal in-the-moment ER experiences. The TFM analysis is based on a review of the music theory, music neuroscience, and music development literature and provides a preliminary model of the structural characteristics of the music as a core component of the MCRF intervention.

  17. Theory-guided Therapeutic Function of Music to facilitate emotion regulation development in preschool-aged children

    Science.gov (United States)

    Sena Moore, Kimberly; Hanson-Abromeit, Deanna

    2015-01-01

    Emotion regulation (ER) is an umbrella term to describe interactive, goal-dependent explicit, and implicit processes that are intended to help an individual manage and shift an emotional experience. The primary window for appropriate ER development occurs during the infant, toddler, and preschool years. Atypical ER development is considered a risk factor for mental health problems and has been implicated as a primary mechanism underlying childhood pathologies. Current treatments are predominantly verbal- and behavioral-based and lack the opportunity to practice in-the-moment management of emotionally charged situations. There is also an absence of caregiver–child interaction in these treatment strategies. Based on behavioral and neural support for music as a therapeutic mechanism, the incorporation of intentional music experiences, facilitated by a music therapist, may be one way to address these limitations. Musical Contour Regulation Facilitation (MCRF) is an interactive therapist-child music-based intervention for ER development practice in preschoolers. The MCRF intervention uses the deliberate contour and temporal structure of a music therapy session to mirror the changing flow of the caregiver–child interaction through the alternation of high arousal and low arousal music experiences. The purpose of this paper is to describe the Therapeutic Function of Music (TFM), a theory-based description of the structural characteristics for a music-based stimulus to musically facilitate developmentally appropriate high arousal and low arousal in-the-moment ER experiences. The TFM analysis is based on a review of the music theory, music neuroscience, and music development literature and provides a preliminary model of the structural characteristics of the music as a core component of the MCRF intervention. PMID:26528171

  18. Autonomic effects of music in health and Crohn's disease: the impact of isochronicity, emotional valence, and tempo.

    Directory of Open Access Journals (Sweden)

    Roland Uwe Krabs

    Full Text Available Music can evoke strong emotions and thus elicit significant autonomic nervous system (ANS) responses. However, previous studies investigating music-evoked ANS effects produced inconsistent results. In particular, it is not clear (a) whether simply a musical tactus (without common emotional components of music) is sufficient to elicit ANS effects; (b) whether changes in the tempo of a musical piece contribute to the ANS effects; (c) whether emotional valence of music influences ANS effects; and (d) whether music-elicited ANS effects are comparable in healthy subjects and patients with Crohn's disease (CD, an inflammatory bowel disease suspected to be associated with autonomic dysfunction). To address these issues, three experiments were conducted, with a total of n = 138 healthy subjects and n = 19 CD patients. Heart rate (HR), heart rate variability (HRV), and electrodermal activity (EDA) were recorded while participants listened to joyful pleasant music, isochronous tones, and unpleasant control stimuli. Compared to silence, both pleasant music and unpleasant control stimuli elicited an increase in HR and a decrease in a variety of HRV parameters. Surprisingly, similar ANS effects were elicited by isochronous tones (i.e., simply by a tactus). ANS effects did not differ between pleasant and unpleasant stimuli, and different tempi of the music did not entrain ANS activity. Finally, music-evoked ANS effects did not differ between healthy individuals and CD patients. The isochronous pulse of music (i.e., the tactus) is a major factor of music-evoked ANS effects. These ANS effects are characterized by increased sympathetic activity. The emotional valence of a musical piece contributes surprisingly little to the ANS activity changes evoked by that piece.

  19. Autonomic effects of music in health and Crohn's disease: the impact of isochronicity, emotional valence, and tempo.

    Science.gov (United States)

    Krabs, Roland Uwe; Enk, Ronny; Teich, Niels; Koelsch, Stefan

    2015-01-01

    Music can evoke strong emotions and thus elicit significant autonomic nervous system (ANS) responses. However, previous studies investigating music-evoked ANS effects produced inconsistent results. In particular, it is not clear (a) whether simply a musical tactus (without common emotional components of music) is sufficient to elicit ANS effects; (b) whether changes in the tempo of a musical piece contribute to the ANS effects; (c) whether emotional valence of music influences ANS effects; and (d) whether music-elicited ANS effects are comparable in healthy subjects and patients with Crohn's disease (CD, an inflammatory bowel disease suspected to be associated with autonomic dysfunction). To address these issues, three experiments were conducted, with a total of n = 138 healthy subjects and n = 19 CD patients. Heart rate (HR), heart rate variability (HRV), and electrodermal activity (EDA) were recorded while participants listened to joyful pleasant music, isochronous tones, and unpleasant control stimuli. Compared to silence, both pleasant music and unpleasant control stimuli elicited an increase in HR and a decrease in a variety of HRV parameters. Surprisingly, similar ANS effects were elicited by isochronous tones (i.e., simply by a tactus). ANS effects did not differ between pleasant and unpleasant stimuli, and different tempi of the music did not entrain ANS activity. Finally, music-evoked ANS effects did not differ between healthy individuals and CD patients. The isochronous pulse of music (i.e., the tactus) is a major factor of music-evoked ANS effects. These ANS effects are characterized by increased sympathetic activity. The emotional valence of a musical piece contributes surprisingly little to the ANS activity changes evoked by that piece.

  20. Autonomic Effects of Music in Health and Crohn's Disease: The Impact of Isochronicity, Emotional Valence, and Tempo

    Science.gov (United States)

    Krabs, Roland Uwe; Enk, Ronny; Teich, Niels; Koelsch, Stefan

    2015-01-01

    Background: Music can evoke strong emotions and thus elicit significant autonomic nervous system (ANS) responses. However, previous studies investigating music-evoked ANS effects produced inconsistent results. In particular, it is not clear (a) whether simply a musical tactus (without common emotional components of music) is sufficient to elicit ANS effects; (b) whether changes in the tempo of a musical piece contribute to the ANS effects; (c) whether emotional valence of music influences ANS effects; and (d) whether music-elicited ANS effects are comparable in healthy subjects and patients with Crohn's disease (CD, an inflammatory bowel disease suspected to be associated with autonomic dysfunction). Methods: To address these issues, three experiments were conducted, with a total of n = 138 healthy subjects and n = 19 CD patients. Heart rate (HR), heart rate variability (HRV), and electrodermal activity (EDA) were recorded while participants listened to joyful pleasant music, isochronous tones, and unpleasant control stimuli. Results: Compared to silence, both pleasant music and unpleasant control stimuli elicited an increase in HR and a decrease in a variety of HRV parameters. Surprisingly, similar ANS effects were elicited by isochronous tones (i.e., simply by a tactus). ANS effects did not differ between pleasant and unpleasant stimuli, and different tempi of the music did not entrain ANS activity. Finally, music-evoked ANS effects did not differ between healthy individuals and CD patients. Conclusions: The isochronous pulse of music (i.e., the tactus) is a major factor of music-evoked ANS effects. These ANS effects are characterized by increased sympathetic activity. The emotional valence of a musical piece contributes surprisingly little to the ANS activity changes evoked by that piece. PMID:25955253
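
    The HR and HRV outcomes mentioned in these abstracts are typically derived from the series of inter-beat (RR) intervals. The sketch below computes mean heart rate and two standard time-domain HRV parameters (SDNN and RMSSD) from simulated RR series for a 'silence' and a 'music' condition; it illustrates the measures only and is not the study's processing pipeline.

```python
# Illustrative computation of heart rate and two common time-domain HRV parameters
# from RR intervals. The RR series are simulated, not study data.

import numpy as np

def hr_and_hrv(rr_ms: np.ndarray) -> dict:
    """Return mean heart rate (bpm), SDNN (ms) and RMSSD (ms) for RR intervals in ms."""
    hr = 60_000.0 / rr_ms.mean()                     # beats per minute
    sdnn = rr_ms.std(ddof=1)                         # overall RR variability
    rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))    # beat-to-beat variability
    return {"HR_bpm": hr, "SDNN_ms": sdnn, "RMSSD_ms": rmssd}

rng = np.random.default_rng(1)
rr_silence = rng.normal(850, 50, 300)   # slower heart rate, more variability
rr_music   = rng.normal(800, 35, 300)   # faster heart rate, reduced variability

print("silence:", {k: round(v, 1) for k, v in hr_and_hrv(rr_silence).items()})
print("music:  ", {k: round(v, 1) for k, v in hr_and_hrv(rr_music).items()})
```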

  1. Functional cerebral distance and the effect of emotional music on spatial rotation scores in undergraduate women and men.

    Science.gov (United States)

    Bertsch, Sharon; Knee, H Donald; Webb, Jeffrey L

    2011-02-01

    The influence of listening to music on subsequent spatial rotation scores has a controversial history. The effect is unreliable, seeming to depend on several as yet unexplored factors. Using a large sample (167 women, 160 men; M age = 18.9 yr.), two related variables were investigated: participants' sex and the emotion conveyed by the music. Participants listened to 90 sec. of music that portrayed emotions of approach (happiness), or withdrawal (anger), or heard no music at all. They then performed a two-dimensional spatial rotation task. No significant difference was found in spatial rotation scores between groups exposed to music and those who were not. However, a significant interaction was found based on the sex of the participants and the emotion portrayed in the music they heard. Women's scores increased (relative to a no-music condition) only after hearing withdrawal-based music, while men's scores increased only after listening to the approach-based music. These changes were explained using the theory of functional cerebral distance.

  2. Emotional Temperament in Food-Related Metaphors: A Cross-Cultural Account of the Conceptualizations of SADNESS

    Directory of Open Access Journals (Sweden)

    Zahra Khajeh

    2013-11-01

    Full Text Available What people in a society and culture eat, or the way they consume their food, may become a source domain for emotional temperament and therefore an implication for the portrayal of their specific cultural models. Adopting the basic assumptions of the Lakoffian School on ‘experiential realism’ and ‘universal embodiment’, this study is an attempt to delve into the conceptual system of Persian in order to explore its specific socio-cultural motivations for the construction and semantic changes in the use of metaphorical concepts of sadness. The metaphorical uses of food-related concepts in Persian show that, in spite of some correspondences to those in English, the metaphorical concepts of sadness are distinctive in Persian. The conceptual metaphor variations reveal many vestiges of Hippocratic notions of humoral doctrine and Avicennian Traditional Medicine, suggesting that the cultural models of humoralism and dietetics have left their traces deeply in the Persians’ belief systems. These effects, therefore, have been extended into Persian metaphoric language.

  3. Coping with preoperative anxiety in cesarean section: physiological, cognitive, and emotional effects of listening to favorite music.

    Science.gov (United States)

    Kushnir, Jonathan; Friedman, Ahuva; Ehrenfeld, Mally; Kushnir, Talma

    2012-06-01

    Listening to music has a stress-reducing effect in surgical procedures. The effects of listening to music immediately before a cesarean section have not been studied. The objective of this study was to assess the effects of listening to selected music while waiting for a cesarean section on emotional reactions, on cognitive appraisal of the threat of surgery, and on stress-related physiological reactions. A total of 60 healthy women waiting alone to undergo an elective cesarean section for medical reasons only were randomly assigned either to an experimental or a control group. An hour before surgery they reported mood, and threat perception. Vital signs were assessed by a nurse. The experimental group listened to preselected favorite music for 40 minutes, and the control group waited for the operation without music. At the end of this period, all participants responded to a questionnaire assessing mood and threat perception, and the nurse measured vital signs. Women who listened to music before a cesarean section had a significant increase in positive emotions and a significant decline in negative emotions and perceived threat of the situation when compared with women in the control group, who exhibited a decline in positive emotions, an increase in the perceived threat of the situation, and had no change in negative emotions. Women who listened to music also exhibited a significant reduction in systolic blood pressure compared with a significant increase in diastolic blood pressure and respiratory rate in the control group. Listening to favorite music immediately before a cesarean section may be a cost-effective, emotion-focused coping strategy. (BIRTH 39:2 June 2012). © 2012, Copyright the Authors Journal compilation © 2012, Wiley Periodicals, Inc.

  4. Music in mind, a randomized controlled trial of music therapy for young people with behavioural and emotional problems: study protocol.

    Science.gov (United States)

    Porter, Sam; Holmes, Valerie; McLaughlin, Katrina; Lynn, Fiona; Cardwell, Chris; Braiden, Hannah-Jane; Doran, Jackie; Rogan, Sheelagh

    2012-10-01

    This article is a report of a trial protocol to determine if improvizational music therapy leads to clinically significant improvement in communication and interaction skills for young people experiencing social, emotional or behavioural problems. Music therapy is often considered an effective intervention for young people experiencing social, emotional or behavioural difficulties. However, this assumption lacks empirical evidence. Music in mind is a multi-centred single-blind randomized controlled trial involving 200 young people (aged 8-16 years) and their parents. Eligible participants will have a working diagnosis within the ambit of international classification of disease 10 mental and behavioural disorders and will be recruited over 15 months from six centres within the Child and Adolescent Mental Health Services of a large health and social care trust in Northern Ireland. Participants will be randomly allocated in a 1:1 ratio to receive standard care alone or standard care plus 12 weekly music therapy sessions delivered by the Northern Ireland Music Therapy Trust. Baseline data will be collected from young people and their parents using standardized outcome measures for communicative and interaction skills (primary endpoint), self-esteem, social functioning, depression and family functioning. Follow-up data will be collected 1 and 13 weeks after the final music therapy session. A cost-effectiveness analysis will also be carried out. This study will be the largest trial to date examining the effect of music therapy on young people experiencing social, emotional or behavioural difficulties and will provide empirical evidence for the use of music therapy among this population. Trial registration. This study is registered in the ISRCTN Register, ISRCTN96352204. Ethical approval was gained in October 2010. © 2012 Blackwell Publishing Ltd.

  5. Modeling Expressed Emotions in Music using Pairwise Comparisons

    DEFF Research Database (Denmark)

    Madsen, Jens; Nielsen, Jens Brehm; Jensen, Bjørn Sand

    2012-01-01

    We introduce a two-alternative forced-choice experimental paradigm to quantify expressed emotions in music using the two wellknown arousal and valence (AV) dimensions. In order to produce AV scores from the pairwise comparisons and to visualize the locations of excerpts in the AV space, we...
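
    One way to turn such two-alternative forced-choice judgments into per-excerpt scale values is a Bradley-Terry model over the win counts for each pair; this is only one of several scaling approaches and is not necessarily the model used in the paper. The toy sketch below scores four hypothetical excerpts on a single dimension (e.g. arousal) from a made-up win matrix.

```python
# Hedged sketch: score excerpts from pairwise comparisons with a Bradley-Terry model.
# wins[i, j] counts how often excerpt i was chosen over excerpt j (toy data).

import numpy as np

def bradley_terry(wins: np.ndarray, n_iter: int = 200) -> np.ndarray:
    """Estimate Bradley-Terry strengths with the standard MM update; return log-strengths."""
    n = wins.shape[0]
    p = np.ones(n)
    total = wins + wins.T                       # comparisons per pair
    for _ in range(n_iter):
        for i in range(n):
            num = wins[i].sum()
            den = sum(total[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            p[i] = num / den if den > 0 else p[i]
        p /= p.sum()                            # fix the overall scale
    return np.log(p)

wins = np.array([[0, 8, 9, 10],
                 [2, 0, 7,  8],
                 [1, 3, 0,  6],
                 [0, 2, 4,  0]], dtype=float)
print("arousal scale values:", np.round(bradley_terry(wins), 2))
```

    Repeating the same scoring on valence comparisons would give each excerpt a coordinate in the AV space.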

  6. On the interaction between sad mood and cognitive control: the effect of induced sadness on electrophysiological modulations underlying Stroop conflict processing.

    Science.gov (United States)

    Nixon, Elena; Liddle, Peter F; Nixon, Neil L; Liotti, Mario

    2013-03-01

    The present study employed high-density ERPs to examine the effect of induced sad mood on the spatiotemporal correlates of conflict monitoring and resolution in a colour-word Stroop interference task. Neuroimaging evidence and dipole modelling implicate the involvement of the anterior cingulate cortex (ACC) and medial prefrontal cortex (mPFC) regions in conflict-laden interference control. On the basis that these structures have been found to mediate emotion-cognition interactions in negative mood states, it was predicted that Stroop-related cognitive control, which relies heavily on anterior neural sources, would be affected by effective sad mood provocation. Healthy participants (N=14) were induced into transient sadness via use of autobiographical sad scripts, a well-validated mood induction technique (Liotti et al., 2000a, 2002). In accord with previous research, interference effects were shown at both baseline and sad states, while Stroop conflict was associated with early (N450) and late (Late Positive Component; LPC) electrophysiological modulations at both states. Sad mood induction attenuated the N450 effect, in line with our expectation that it would be susceptible to modulation by mood, given its purported anterior limbic source. The LPC effect was displayed at the typical posterior lateral sites but, as predicted, was not affected by sad mood. However, frontocentral LPC activity, presumably generated from an additional anterior limbic source, was affected in the sad state, hinting at a role in conflict monitoring. Although the neurophysiological underpinnings of interference control are yet to be clarified, this study provided further insight into emotion-cognition interactions as indexed by Stroop conflict-laden processing. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Beyond simple pessimism: effects of sadness and anger on social perception.

    Science.gov (United States)

    Keltner, D; Ellsworth, P C; Edwards, K

    1993-05-01

    In keeping with cognitive appraisal models of emotion, it was hypothesized that sadness and anger would exert different influences on causal judgments. Two experiments provided initial support for this hypothesis. Sad Ss perceived situationally caused events as more likely (Experiment 1) and situational forces more responsible for an ambiguous event (Experiment 2) than angry Ss, who, in contrast, perceived events caused by humans as more likely and other people as more responsible. Experiments 3, 4, and 5 showed that the experience of these emotions, rather than their cognitive constituents, mediates these effects. The nonemotional exposure to situational or human agency information did not influence causal judgments (Experiment 3), whereas the induction of sadness and anger without explicit agency information did (Experiments 4 and 5). Discussion is focused on the influence of emotion on social judgment.

  8. Starting a Community Musical Theatre Orchestra

    Science.gov (United States)

    Sorenson, Burke

    2007-01-01

    Musical theatre is one of the great genres of music, yet very few community theatres use live music to accompany their productions. Sadly, many community theatres that formerly employed pit orchestras are replacing them with electronic music. Some producers would welcome live music, but they worry about the potential cost. There are so many…

  9. Anger, Sadness and Fear in Response to Breaking Crime and Accident News Stories: How Emotions Influence Support for Alcohol-Control Public Policies via Concern about Risks

    Science.gov (United States)

    Solloway, Tyler; Slater, Michael D.; Chung, Adrienne; Goodall, Catherine

    2015-01-01

    Prior research shows that discrete emotions, notably anger and fear, can explain effects of news articles on health and alcohol-control policy support. This study advances prior work by coding expressed emotional responses to messages (as opposed to directly manipulated emotions or forced responses), incorporating and controlling for central thoughts, including sadness (a particularly relevant response to tragic stories), and examining concern’s mediating role between emotion and policy support. An experiment with a national online adult panel had participants read one of 60 violent crime or accident news stories, each manipulated to mention or withhold alcohol’s causal contribution. Multi-group structural equation models suggest that stories not mentioning alcohol had a direct effect on policy support via fear and central thoughts, unmediated by concern. When alcohol was mentioned, sadness and anger affected alcohol-control support through concern. Findings help confirm that emotional responses are key in determining news story effects on public support of health policies. PMID:26491487

  10. Preparing Empirical Methodologies to Examine Enactive Subjects Experiencing Musical Emotions

    DEFF Research Database (Denmark)

    Christensen, Justin

    2016-01-01

    in listeners. Many of these theories search for universal emotional essences and cause-and-effect relationships that often result in erasing the body from these experiences. Still, after reducing these emotional responses to discrete categories or localized brain functions, these theories have not been very...... successful in finding universal emotional essence in response to music. In this paper, I argue that we need to bring the body back into this research, to allow for listener variability, and include multiple levels of focus to help find meaningful relationships of emotional responses. I also appeal...

  11. Intelligence and musical mode preference

    DEFF Research Database (Denmark)

    Bonetti, Leonardo; Costa, Marco

    2016-01-01

    The relationship between fluid intelligence and preference for major–minor musical mode was investigated in a sample of 80 university students. Intelligence was assessed by the Raven’s Advanced Progressive Matrices. Musical mode preference was assessed by presenting 14 pairs of musical stimuli...... differences at the cognitive and personality level related to the enjoyment of sad music....

  12. A music therapy tool for assessing parent-child interaction in cases of emotional neglect

    DEFF Research Database (Denmark)

    Jacobsen, Stine Lindahl; H. McKinney, Cathy

    2015-01-01

    Using a music therapy approach to assess emotional communication and parent–child interaction is new to the field of child protection. However, musical improvisations in music therapy have long been known as an analogue to affect attunement and early non-verbal communication between parent and inf...

  13. Together we cry: Social motives and preferences for group-based sadness.

    Science.gov (United States)

    Porat, Roni; Halperin, Eran; Mannheim, Ittay; Tamir, Maya

    2016-01-01

    Group-based emotions play an important role in helping people feel that they belong to their group. People are motivated to belong, but does this mean that they actively try to experience group-based emotions to increase their sense of belonging? In this investigation, we propose that people may be motivated to experience even group-based emotions that are typically considered unpleasant to satisfy their need to belong. To test this hypothesis, we examined people's preferences for group-based sadness in the context of the Israeli National Memorial Day. In two correlational (Studies 1a and 1b) and two experimental (Studies 2 and 3) studies, we demonstrate that people with a stronger need to belong have a stronger preference to experience group-based sadness. This effect was mediated by the expectation that experiencing sadness would be socially beneficial (Studies 1 and 2). We discuss the implications of our findings for understanding motivated emotion regulation and intergroup relations.

  14. Getting into the musical zone: Trait emotional intelligence and amount of practice predict flow in pianists

    Directory of Open Access Journals (Sweden)

    Manuela Maria Marin

    2013-11-01

    Full Text Available Being ‘in flow’ or ‘in the zone’ is defined as an extremely focused state of consciousness which occurs during intense engagement in an activity. In general, flow has been linked to peak performances (high achievement) and feelings of intense pleasure and happiness. However, empirical research on flow in music performance is scarce, although it may offer novel insights into the question of why musicians engage in musical activities for extensive periods of time. Here, we focused on individual differences in a group of 76 piano performance students and assessed their flow experience in piano performance as well as their trait emotional intelligence. Multiple regression analysis revealed that flow was predicted by the amount of daily practice and trait emotional intelligence. Other background variables (gender, age, duration of piano training and age of first piano training) were not predictive. To predict high achievement in piano performance (i.e., winning a prize in a piano competition), a seven-predictor logistic regression model was fitted to the data, and we found that the odds of winning a prize in a piano competition were predicted by the amount of daily practice and the age at which piano training began. Interestingly, a positive relationship between flow and high achievement was not supported. Further, we explored the role of musical emotions and musical styles in the induction of flow by a self-developed questionnaire. Results suggest that besides individual differences among pianists, specific structural and compositional features of musical pieces and related emotional expressions may facilitate flow experiences. Altogether, these findings highlight the role of emotion in the experience of flow during music performance, and call for further experiments addressing emotion in relation to the performer and the music alike.

  15. Getting into the musical zone: trait emotional intelligence and amount of practice predict flow in pianists

    Science.gov (United States)

    Marin, Manuela M.; Bhattacharya, Joydeep

    2013-01-01

    Being “in flow” or “in the zone” is defined as an extremely focused state of consciousness which occurs during intense engagement in an activity. In general, flow has been linked to peak performances (high achievement) and feelings of intense pleasure and happiness. However, empirical research on flow in music performance is scarce, although it may offer novel insights into the question of why musicians engage in musical activities for extensive periods of time. Here, we focused on individual differences in a group of 76 piano performance students and assessed their flow experience in piano performance as well as their trait emotional intelligence. Multiple regression analysis revealed that flow was predicted by the amount of daily practice and trait emotional intelligence. Other background variables (gender, age, duration of piano training and age of first piano training) were not predictive. To predict high achievement in piano performance (i.e., winning a prize in a piano competition), a seven-predictor logistic regression model was fitted to the data, and we found that the odds of winning a prize in a piano competition were predicted by the amount of daily practice and the age at which piano training began. Interestingly, a positive relationship between flow and high achievement was not supported. Further, we explored the role of musical emotions and musical styles in the induction of flow by a self-developed questionnaire. Results suggest that besides individual differences among pianists, specific structural and compositional features of musical pieces and related emotional expressions may facilitate flow experiences. Altogether, these findings highlight the role of emotion in the experience of flow during music performance and call for further experiments addressing emotion in relation to the performer and the music alike. PMID:24319434
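
    As a rough illustration of the reported analysis strategy, the sketch below fits a logistic regression predicting prize-winning from a few of the background variables named in the abstract. All data are simulated, the coefficient structure is invented, and the predictor set is only a subset of the seven used in the study, so the output says nothing about the actual findings.

```python
# Hedged sketch of a logistic regression predicting prize-winning from background
# variables. Data and effect sizes are simulated for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 76
daily_practice_h = rng.gamma(shape=3.0, scale=1.2, size=n)     # hours of practice per day
age_started = rng.integers(4, 12, size=n).astype(float)        # age piano training began
trait_ei = rng.normal(100, 15, size=n)                         # trait emotional intelligence
flow = rng.normal(3.5, 0.6, size=n)                            # self-reported flow

# Simulate prize-winning so that more practice and an earlier start raise the odds.
logit = -2.0 + 0.6 * daily_practice_h - 0.3 * age_started
won_prize = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([daily_practice_h, age_started, trait_ei, flow])
model = LogisticRegression(max_iter=1000).fit(X, won_prize)

names = ["daily practice", "age started", "trait EI", "flow"]
for name, coef in zip(names, model.coef_[0]):
    print(f"{name:>15}: odds ratio per unit = {np.exp(coef):.2f}")
```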

  16. Listening to music and physiological and psychological functioning: the mediating role of emotion regulation and stress reactivity.

    Science.gov (United States)

    Thoma, M V; Scholz, U; Ehlert, U; Nater, U M

    2012-01-01

    Music listening has been suggested to have short-term beneficial effects. The aim of this study was to investigate the association and potential mediating mechanisms between various aspects of habitual music-listening behaviour and physiological and psychological functioning. An internet-based survey was conducted in university students, measuring habitual music-listening behaviour, emotion regulation, stress reactivity, as well as physiological and psychological functioning. A total of 1230 individuals (mean = 24.89 ± 5.34 years, 55.3% women) completed the questionnaire. Quantitative aspects of habitual music-listening behaviour, i.e. average duration of music listening and subjective relevance of music, were not associated with physiological and psychological functioning. In contrast, qualitative aspects, i.e. reasons for listening (especially 'reducing loneliness and aggression', and 'arousing or intensifying specific emotions') were significantly related to physiological and psychological functioning (all p = 0.001). These direct effects were mediated by distress-augmenting emotion regulation and individual stress reactivity. The habitual music-listening behaviour appears to be a multifaceted behaviour that is further influenced by dispositions that are usually not related to music listening. Consequently, habitual music-listening behaviour is not obviously linked to physiological and psychological functioning.
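
    The mediation claim above (reasons for listening → distress-augmenting emotion regulation or stress reactivity → functioning) can be tested generically with a product-of-coefficients indirect effect and a bootstrap confidence interval. The sketch below shows that generic procedure on simulated data with a single mediator; the study's actual models were more elaborate, and the variable names here are placeholders.

```python
# Hedged single-mediator sketch: indirect effect a*b with a percentile bootstrap CI.
# All data are simulated; X, M and Y are placeholder variables.

import numpy as np

rng = np.random.default_rng(3)
n = 1230
x = rng.normal(size=n)                      # e.g. listening "to intensify emotions"
m = 0.4 * x + rng.normal(size=n)            # e.g. distress-augmenting regulation
y = 0.5 * m + 0.1 * x + rng.normal(size=n)  # e.g. poorer psychological functioning

def slope(pred, out):
    """OLS slope of out on pred (with intercept)."""
    design = np.column_stack([np.ones_like(pred), pred])
    return np.linalg.lstsq(design, out, rcond=None)[0][1]

def indirect(idx):
    a = slope(x[idx], m[idx])                                 # path X -> M
    design = np.column_stack([np.ones(len(idx)), m[idx], x[idx]])
    b = np.linalg.lstsq(design, y[idx], rcond=None)[0][1]     # path M -> Y, controlling X
    return a * b

boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b: {indirect(np.arange(n)):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```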

  17. Emotional, motivational and interpersonal responsiveness of children with autism in improvisational music therapy

    DEFF Research Database (Denmark)

    Kim, Jinah; Wigram, Tony; Gold, Christian

    2009-01-01

    Through behavioural analysis, this study investigated the social-motivational aspects of musical interaction between the child and the therapist in improvisational music therapy by measuring emotional, motivational and interpersonal responsiveness in children with autism during joint attention ep...

  18. Disgust, Sadness, and Appraisal: Disgusted Consumers Dislike Food More Than Sad Ones

    Science.gov (United States)

    Motoki, Kosuke; Sugiura, Motoaki

    2018-01-01

    According to the affect-as-information framework, consumers base judgments on their feelings. Disgust is associated with two kinds of appraisal: one in which the consumer avoids and distances him/herself immediately from the object concerned, and another in which the consumer is disgusted due to contamination and impurities within the environment. The first instance indicates that disgust can decrease a consumer’s preference for a product, regardless of its category. In contrast, the second case suggests that a product’s degree of depreciation is greater in products vulnerable to contamination, such as foods. However, it remains largely unknown how incidental disgust affects product preferences in accordance with the two appraisal-related goals. The present research investigates how incidental disgust (as opposed to sadness, an equally valenced but distinct emotion of appraisal) influences consumer preferences for products with or without a risk of contamination. Twenty-four participants repeatedly judged foods or household products after seeing an emotional image (conveying disgust, sadness, or neutrality). Foods and household products are the two representative product categories in grocery stores, but only foods are associated with a risk of contamination. The results showed that incidental disgust led to negative evaluations of both types of products; however, compared to sadness, incidental disgust demonstrated a stronger negative effect on preference for foods than household products. These findings elucidate that disgust and the appraisal of contamination specifically devalue foods, and broaden the application of the appraisal-information framework in consumer settings. PMID:29467697

  19. Disgust, Sadness, and Appraisal: Disgusted Consumers Dislike Food More Than Sad Ones

    Directory of Open Access Journals (Sweden)

    Kosuke Motoki

    2018-02-01

    According to the affect-as-information framework, consumers base judgments on their feelings. Disgust is associated with two kinds of appraisal: one in which the consumer avoids and distances him/herself immediately from the object concerned, and another in which the consumer is disgusted due to contamination and impurities within the environment. The first instance indicates that disgust can decrease a consumer’s preference for a product, regardless of its category. In contrast, the second case suggests that a product’s degree of depreciation is greater in products vulnerable to contamination, such as foods. However, it remains largely unknown how incidental disgust affects product preferences in accordance with the two appraisal-related goals. The present research investigates how incidental disgust (as opposed to sadness, an equally valenced but distinct emotion of appraisal) influences consumer preferences for products with or without a risk of contamination. Twenty-four participants repeatedly judged foods or household products after seeing an emotional image (conveying disgust, sadness, or neutrality). Foods and household products are the two representative product categories in grocery stores, but only foods are associated with a risk of contamination. The results showed that incidental disgust led to negative evaluations of both types of products; however, compared to sadness, incidental disgust demonstrated a stronger negative effect on preference for foods than household products. These findings elucidate that disgust and the appraisal of contamination specifically devalue foods, and broaden the application of the appraisal-information framework in consumer settings.

  1. The Feeling of the Story: Narrating to Regulate Anger and Sadness

    Science.gov (United States)

    Pasupathi, Monisha; Wainryb, Cecilia; Mansfield, Cade D.; Bourne, Stacia

    2017-01-01

    Admonitions to tell one’s story in order to feel better reflect the belief that narrative is an effective emotion regulation tool. The present studies evaluate the effectiveness of narrative for regulating sadness and anger, and provide quantitative comparisons of narrative with distraction, reappraisal, and reexposure. The results for sadness (n = 93) and anger (n = 89) reveal that narrative is effective at down-regulating negative emotions, particularly when narratives place events in the past tense and include positive emotions. The results suggest that if people tell the “right” kind of story about their experiences, narrative reduces emotional distress linked to those experiences. PMID:26745208

  2. Audiovisual integration of emotional signals from music improvisation does not depend on temporal correspondence.

    Science.gov (United States)

    Petrini, Karin; McAleer, Phil; Pollick, Frank

    2010-04-06

    In the present study we applied a paradigm often used in face-voice affect perception to solo music improvisation to examine how the emotional valence of sound and gesture are integrated when perceiving an emotion. Three brief excerpts expressing emotion produced by a drummer and three by a saxophonist were selected. From these bimodal congruent displays the audio-only, visual-only, and audiovisually incongruent conditions (obtained by combining the two signals both within and between instruments) were derived. In Experiment 1 twenty musical novices judged the perceived emotion and rated the strength of each emotion. The results indicate that sound dominated the visual signal in the perception of affective expression, though this was more evident for the saxophone. In Experiment 2 a further sixteen musical novices were asked to either pay attention to the musicians' movements or to the sound when judging the perceived emotions. The results showed no effect of visual information when judging the sound. On the contrary, when judging the emotional content of the visual information, a worsening in performance was obtained for the incongruent condition that combined different emotional auditory and visual information for the same instrument. The effect of emotionally discordant information thus became evident only when the auditory and visual signals belonged to the same categorical event despite their temporal mismatch. This suggests that the integration of emotional information may be reinforced by its semantic attributes but might be independent from temporal features. Copyright 2010 Elsevier B.V. All rights reserved.

  3. Exploring Musical Activities and Their Relationship to Emotional Well-Being in Elderly People across Europe: A Study Protocol.

    Science.gov (United States)

    Grau-Sánchez, Jennifer; Foley, Meabh; Hlavová, Renata; Muukkonen, Ilkka; Ojinaga-Alfageme, Olatz; Radukic, Andrijana; Spindler, Melanie; Hundevad, Bodil

    2017-01-01

    Music is a powerful, pleasurable stimulus that can induce positive feelings and can therefore be used for emotional self-regulation. Musical activities such as listening to music, playing an instrument, singing or dancing are also an important source for social contact, promoting interaction and the sense of belonging with others. Recent evidence has suggested that after retirement, other functions of music, such as self-conceptual processing related to autobiographical memories, become more salient. However, few studies have addressed the meaningfulness of music in the elderly. This study aims to investigate elderly people's habits and preferences related to music, study the role music plays in their everyday life, and explore the relationship between musical activities and emotional well-being across different countries of Europe. A survey will be administered to elderly people over the age of 65 from five different European countries (Bosnia and Herzegovina, Czechia, Germany, Ireland, and UK) and to a control group. Participants in both groups will be asked about basic sociodemographic information, habits and preferences in their participation in musical activities and emotional well-being. Overall, the aim of this study is to gain a deeper understanding of the role of music in the elderly from a psychological perspective. This advanced knowledge could help to develop therapeutic applications, such as musical recreational programs for healthy older people or elderly in residential care, which are better able to meet their emotional and social needs.

  4. The neural basis of attaining conscious awareness of sad mood.

    Science.gov (United States)

    Smith, Ryan; Braden, B Blair; Chen, Kewei; Ponce, Francisco A; Lane, Richard D; Baxter, Leslie C

    2015-09-01

    The neural processes associated with becoming aware of sad mood are not fully understood. We examined the dynamic process of becoming aware of sad mood and recovery from sad mood. Sixteen healthy subjects underwent fMRI while participating in a sadness induction task designed to allow for variable mood induction times. Individualized regressors linearly modeled the time periods during the attainment of self-reported sad and baseline "neutral" mood states, and the validity of the linearity assumption was further tested using independent component analysis. During sadness induction the dorsomedial and ventrolateral prefrontal cortices, and anterior insula exhibited a linear increase in the blood oxygen level-dependent (BOLD) signal until subjects became aware of a sad mood and then a subsequent linear decrease as subjects transitioned from sadness back to the non-sadness baseline condition. These findings extend understanding of the neural basis of conscious emotional experience.

  5. Perinatal sadness among Shuar women: support for an evolutionary theory of psychic pain.

    Science.gov (United States)

    Hagen, Edward H; Barrett, H Clark

    2007-03-01

    Psychiatry faces an internal contradiction in that it regards mild sadness and low mood as normal emotions, yet when these emotions are directed toward a new infant, it regards them as abnormal. We apply parental investment theory, a widely used framework from evolutionary biology, to maternal perinatal emotions, arguing that negative emotions directed toward a new infant could serve an important evolved function. If so, then under some definitions of psychiatric disorder, these emotions are not disorders. We investigate the applicability of parental investment theory to maternal postpartum emotions among Shuar mothers. Shuar mothers' conceptions of perinatal sadness closely match predictions of parental investment theory.

  6. Age-related differences in affective responses to and memory for emotions conveyed by music: a cross-sectional study

    OpenAIRE

    Vieillard, Sandrine; Gilet, Anne-Laure

    2013-01-01

    There is mounting evidence that aging is associated with the maintenance of positive affect and the decrease of negative affect to ensure emotion regulation goals. Previous empirical studies have primarily focused on a visual or autobiographical form of emotion communication. To date, little investigation has been done on musical emotions. The few studies that have addressed aging and emotions in music were mainly interested in emotion recognition, thus leaving unexplo...

  7. Lower body weight is associated with less negative emotions in sad autobiographical memories of patients with anorexia nervosa.

    Science.gov (United States)

    Brockmeyer, Timo; Grosse Holtforth, Martin; Bents, Hinrich; Herzog, Wolfgang; Friederich, Hans-Christoph

    2013-12-15

    Food restriction and weight-loss have been proposed to represent pathogenic mechanisms of emotion regulation in anorexia nervosa (AN). However, there is a lack of studies empirically examining this hypothesis. Therefore, the present study compared 25 women with AN and 25 healthy control women (HC) regarding spontaneous emotional processing of autobiographic memories. Participants' idiographic memories of sad autobiographic events were analyzed using computerized, quantitative text analysis as an unobtrusive approach of nonreactive assessment. Compared to HC, AN patients retrieved more negative but a comparable number of positive emotions. Moreover, the lower the body weight in AN patients, the fewer negative emotions they retrieved, irrespective of current levels of depressive symptoms and duration of illness. No such association was found in HC. These preliminary findings are in line with models of AN proposing that food restriction and weight-loss may be negatively reinforced by the alleviation of aversive emotional responses. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  8. Display rules for anger, sadness, and pain: it depends on who is watching.

    Science.gov (United States)

    Zeman, J; Garber, J

    1996-06-01

    This study examined factors that may influence children's decisions to control or express their emotions including type of emotion (anger, sadness, physical pain), type of audience (mother, father, peer, alone), age, and sex. Children's reported use of display rules, reasons for their decisions, and reported method of expression were examined. Subjects were 32 boys and 32 girls in each of the first (M = 7.25 years old), third (M = 9.33 years old), and fifth grades (M = 11.75 years old). Regardless of the type of emotion experienced, children reported controlling their expression of emotion significantly more in the presence of peers than when they were with either their mother or father or when they were alone. Younger children reported expressing sadness and anger significantly more often than did older children, and girls were more likely than boys to report expressing sadness and pain. Children's primary reason for controlling their emotional expressions was the expectation of a negative interpersonal interaction following disclosure.

  9. Emotional Expressivity and Emotion Regulation: Relation to Academic Functioning among Elementary School Children

    Science.gov (United States)

    Kwon, Kyongboon; Hanrahan, Amanda R.; Kupzyk, Kevin A.

    2017-01-01

    We examined emotional expressivity (i.e., happiness, sadness, and anger) and emotion regulation (regulation of exuberance, sadness, and anger) as they relate to academic functioning (motivation, engagement, and achievement). Also, we tested the premise that emotional expressivity and emotion regulation are indirectly associated with achievement…

  10. Emotion Index of Cover Song Music Video Clips based on Facial Expression Recognition

    DEFF Research Database (Denmark)

    Kavallakis, George; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2017-01-01

    This paper presents a scheme for creating an emotion index of cover song music video clips by recognizing and classifying facial expressions of the artist in the video. More specifically, it fuses effective and robust algorithms which are employed for expression recognition, along with the use ... of a neural network system using the features extracted by the SIFT algorithm. Also, we support the need for this fusion of different expression recognition algorithms, because of the way that emotions are linked to facial expressions in music video clips...
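
    A rough sketch of the general pipeline implied by this record is given below: pool SIFT descriptors per frame, train a small neural network on labelled expressions, and summarise a video by its distribution of predicted labels. It is an assumed reconstruction for illustration only, not the paper's implementation; the face detection/cropping step is omitted and the labelled training data are assumed to exist.

        # Rough sketch of the general idea (not the paper's implementation): per-frame
        # SIFT descriptors from a grayscale face crop are mean-pooled and fed to a small
        # neural network that outputs an expression label; the video's "emotion index"
        # is the distribution of predicted labels over its frames.
        # Requires opencv-python (>= 4.4) and scikit-learn; input images are assumed.
        import cv2
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        sift = cv2.SIFT_create()

        def frame_descriptor(gray_face, dim=128):
            """Mean-pool SIFT descriptors of a grayscale face crop into one vector."""
            _, desc = sift.detectAndCompute(gray_face, None)
            return desc.mean(axis=0) if desc is not None else np.zeros(dim)

        def train_expression_model(train_faces, train_labels):
            """train_faces: list of uint8 grayscale images; train_labels: expression names."""
            X = np.vstack([frame_descriptor(f) for f in train_faces])
            return MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, train_labels)

        def emotion_index(model, video_frames):
            """Fraction of frames assigned to each predicted expression class."""
            X = np.vstack([frame_descriptor(f) for f in video_frames])
            labels, counts = np.unique(model.predict(X), return_counts=True)
            return dict(zip(labels, counts / counts.sum()))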

  11. Emotional, Motivational and Interpersonal Responsiveness of Children with Autism in Improvisational Music Therapy

    Science.gov (United States)

    Kim, Jinah; Wigram, Tony; Gold, Christian

    2009-01-01

    Through behavioural analysis, this study investigated the social-motivational aspects of musical interaction between the child and the therapist in improvisational music therapy by measuring emotional, motivational and interpersonal responsiveness in children with autism during joint engagement episodes. The randomized controlled study (n = 10)…

  12. Assessing the Role of Emotional Associations in Mediating Crossmodal Correspondences between Classical Music and Red Wine

    Directory of Open Access Journals (Sweden)

    Qian (Janice) Wang

    2017-01-01

    Several recent studies have demonstrated that people intuitively make consistent matches between classical music and specific wines. It is not clear, however, what governs such crossmodal mappings. Here, we assess the role of emotion—specifically different dimensional aspects of valence, arousal, and dominance—in mediating such mappings. Participants matched three different red wines to three different pieces of classical music. Subsequently, they made emotion ratings separately for each wine and each musical selection. The results revealed that certain wine–music pairings were rated as being significantly better matches than others. More importantly, there was evidence that the participants’ dominance and arousal ratings for the wines and the music predicted their matching rating for each wine–music pairing. These results therefore support the view that wine–music associations are not arbitrary but can be explained, at least in part, by common emotional associations.

  13. Towards Predicting Expressed Emotion in Music from Pairwise Comparisons

    DEFF Research Database (Denmark)

    Madsen, Jens; Jensen, Bjørn Sand; Larsen, Jan

    2012-01-01

    We introduce five regression models for the modeling of expressed emotion in music using data obtained in a two-alternative forced-choice listening experiment. The predictive performance of the proposed models is compared using learning curves, showing that all models converge to produce a similar...
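
    One common way to model such two-alternative forced-choice data, shown below purely as an illustration (the paper's own five models may differ), is a logistic model on the difference of the two excerpts' feature vectors, which yields a latent emotion scale per excerpt. Features and choices here are simulated.

        # Illustrative pairwise-comparison sketch (not necessarily the paper's models):
        # P(excerpt A judged more intense than B) = sigmoid(w . (x_A - x_B)), fitted as a
        # logistic regression on feature differences. All data are simulated.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_excerpts, n_features, n_pairs = 20, 8, 300

        features = rng.normal(size=(n_excerpts, n_features))   # e.g., audio descriptors
        true_w = rng.normal(size=n_features)                    # latent "expressed emotion" weights

        a_idx = rng.integers(0, n_excerpts, n_pairs)
        b_idx = rng.integers(0, n_excerpts, n_pairs)
        diff = features[a_idx] - features[b_idx]
        p_choose_a = 1.0 / (1.0 + np.exp(-diff @ true_w))
        choice = rng.binomial(1, p_choose_a)                     # 1 = excerpt A chosen

        model = LogisticRegression(fit_intercept=False).fit(diff, choice)
        scores = features @ model.coef_.ravel()                  # latent scale per excerpt
        print("recovered ranking (top 5):", np.argsort(-scores)[:5])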

  14. He throws like a girl (but only when he's sad): emotion affects sex-decoding of biological motion displays.

    Science.gov (United States)

    Johnson, Kerri L; McKay, Lawrie S; Pollick, Frank E

    2011-05-01

    Gender stereotypes have been implicated in sex-typed perceptions of facial emotion. Such interpretations were recently called into question because facial cues of emotion are confounded with sexually dimorphic facial cues. Here we examine the role of visual cues and gender stereotypes in perceptions of biological motion displays, thus overcoming the morphological confounding inherent in facial displays. In four studies, participants' judgments revealed gender stereotyping. Observers accurately perceived emotion from biological motion displays (Study 1), and this affected sex categorizations. Angry displays were overwhelmingly judged to be men; sad displays were judged to be women (Studies 2-4). Moreover, this pattern remained strong when stimuli were equated for velocity (Study 3). We argue that these results were obtained because perceivers applied gender stereotypes of emotion to infer sex category (Study 4). Implications for both vision sciences and social psychology are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Unconsciously Triggered Emotional Conflict by Emotional Facial Expressions

    Science.gov (United States)

    Chen, Antao; Cui, Qian; Zhang, Qinglin

    2013-01-01

    The present study investigated whether emotional conflict and emotional conflict adaptation could be triggered by unconscious emotional information as assessed in a backward-masked affective priming task. Participants were instructed to identify the valence of a face (e.g., happy or sad) preceded by a masked happy or sad face. The results of two experiments revealed the emotional conflict effect but no emotional conflict adaptation effect. This demonstrates that emotional conflict can be triggered by unconsciously presented emotional information, but participants may not adjust their subsequent performance trial-by-trial to reduce this conflict. PMID:23409084

  16. Facial and prosodic emotion recognition in social anxiety disorder.

    Science.gov (United States)

    Tseng, Huai-Hsuan; Huang, Yu-Lien; Chen, Jian-Ting; Liang, Kuei-Yu; Lin, Chao-Cheng; Chen, Sue-Huei

    2017-07-01

    Patients with social anxiety disorder (SAD) have a cognitive preference to negatively evaluate emotional information. In particular, the preferential biases in prosodic emotion recognition in SAD have been much less explored. The present study aims to investigate whether SAD patients retain negative evaluation biases across visual and auditory modalities when given sufficient response time to recognise emotions. Thirty-one SAD patients and 31 age- and gender-matched healthy participants completed a culturally suitable non-verbal emotion recognition task and received clinical assessments for social anxiety and depressive symptoms. A repeated measures analysis of variance was conducted to examine group differences in emotion recognition. Compared to healthy participants, SAD patients were significantly less accurate at recognising facial and prosodic emotions, and spent more time on emotion recognition. The differences were mainly driven by the lower accuracy and longer reaction times for recognising fearful emotions in SAD patients. Within the SAD patients, lower accuracy of sad face recognition was associated with higher severity of depressive and social anxiety symptoms, particularly with avoidance symptoms. These findings may represent a cross-modality pattern of avoidance in the later stage of identifying negative emotions in SAD. This pattern may be linked to clinical symptom severity.

  17. Superior Analgesic Effect of an Active Distraction versus Pleasant Unfamiliar Sounds and Music: The Influence of Emotion and Cognitive Style

    Science.gov (United States)

    Garza Villarreal, Eduardo A.; Brattico, Elvira; Vase, Lene; Østergaard, Leif; Vuust, Peter

    2012-01-01

    Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound to reduce pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception. PMID:22242169

  18. Superior analgesic effect of an active distraction versus pleasant unfamiliar sounds and music: the influence of emotion and cognitive style.

    Directory of Open Access Journals (Sweden)

    Eduardo A Garza Villarreal

    Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound to reduce pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception.

  20. Music to My Eyes: Cross-Modal Interactions in the Perception of Emotions in Musical Performance

    Science.gov (United States)

    Vines, Bradley W.; Krumhansl, Carol L.; Wanderley, Marcelo M.; Dalca, Ioana M.; Levitin, Daniel J.

    2011-01-01

    We investigate non-verbal communication through expressive body movement and musical sound, to reveal higher cognitive processes involved in the integration of emotion from multiple sensory modalities. Participants heard, saw, or both heard and saw recordings of a Stravinsky solo clarinet piece, performed with three distinct expressive styles:…

  1. Emotional and Motivational Uses of Music in Sports and Exercise: A Questionnaire Study among Athletes

    Science.gov (United States)

    Laukka, Petri; Quick, Lina

    2013-01-01

    Music is present in many sport and exercise situations, but empirical investigations on the motives for listening to music in sports remain scarce. In this study, Swedish elite athletes (N = 252) answered a questionnaire that focused on the emotional and motivational uses of music in sports and exercise. The questionnaire contained both…

  2. Music listening in families and peer groups: benefits for young people's social cohesion and emotional well-being across four cultures.

    Science.gov (United States)

    Boer, Diana; Abubakar, Amina

    2014-01-01

    Families are central to the social and emotional development of youth, and most families engage in musical activities together, such as listening to music or talking about their favorite songs. However, empirical evidence of the positive effects of musical family rituals on social cohesion and emotional well-being is scarce. Furthermore, the role of culture in the shaping of musical family rituals and their psychological benefits has been neglected entirely. This paper investigates musical rituals in families and in peer groups (as an important secondary socialization context) in two traditional/collectivistic and two secular/individualistic cultures, and across two developmental stages (adolescence vs. young adulthood). Based on cross-sectional data from 760 young people in Kenya, the Philippines, New Zealand, and Germany, our study revealed that across cultures music listening in families and in peer groups contributes to family and peer cohesion, respectively. Furthermore, the direct contribution of music in peer groups on well-being appears across cultural contexts, whereas musical family rituals affect emotional well-being in more traditional/collectivistic contexts. Developmental analyses show that musical family rituals are consistently and strongly related to family cohesion across developmental stages, whereas musical rituals in peer groups appear more dependent on the developmental stage (in interaction with culture). Contributing to developmental as well as cross-cultural psychology, this research elucidated musical rituals and their positive effects on the emotional and social development of young people across cultures. The implications for future research and family interventions are discussed.

  3. The influence of music-elicited emotions and relative pitch on absolute pitch memory for familiar melodies.

    Science.gov (United States)

    Jakubowski, Kelly; Müllensiefen, Daniel

    2013-01-01

    Levitin's findings that nonmusicians could produce from memory the absolute pitches of self-selected pop songs have been widely cited in the music psychology literature. These findings suggest that latent absolute pitch (AP) memory may be a more widespread trait within the population than traditional AP labelling ability. However, it has been left unclear what factors may facilitate absolute pitch retention for familiar pieces of music. The aim of the present paper was to investigate factors that may contribute to latent AP memory using Levitin's sung production paradigm for AP memory and comparing results to the outcomes of a pitch labelling task, a relative pitch memory test, measures of music-induced emotions, and various measures of participants' musical backgrounds. Our results suggest that relative pitch memory and the quality and degree of music-elicited emotions impact on latent AP memory.
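
    For readers unfamiliar with the sung-production paradigm, accuracy is usually scored as the deviation of the produced fundamental frequency from the recorded original, in semitones or cents. The snippet below is a small worked example of that conversion only, not the authors' scoring procedure.

        # Worked example (illustrative): deviation of a sung pitch from the recorded
        # target, in semitones (1 semitone = 100 cents).
        import math

        def semitone_error(produced_hz: float, target_hz: float) -> float:
            """Signed deviation in semitones; positive means sung sharp of the target."""
            return 12.0 * math.log2(produced_hz / target_hz)

        # e.g., a participant sings 452 Hz for a song whose recorded pitch is A4 = 440 Hz
        print(round(semitone_error(452.0, 440.0), 2))   # ~0.47 semitones sharp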

  4. Emotions and Understanding in Music. A Transcendental and Empirical Approach

    Czech Academy of Sciences Publication Activity Database

    Kolman, Vojtěch

    2014-01-01

    Roč. 44, č. 1 (2014), s. 83-100 ISSN 0046-8541 R&D Projects: GA ČR(CZ) GA13-20785S Institutional support: RVO:67985955 Keywords: emotions * philosophy of music * idealism * Hegel * Wittgenstein * expectations * Brandom Subject RIV: AA - Philosophy; Religion

  5. The use of music in facilitating emotional expression in the terminally ill.

    Science.gov (United States)

    Clements-Cortes, Amy

    2004-01-01

    The expression and discussion of feelings of loss and grief can be very difficult for terminally ill patients. Expressing their emotions can help these patients experience a more relaxed and comfortable state. This paper discusses the role of music therapy in palliative care and the function music plays in accessing emotion. It also describes techniques used in assisting clients to express their thoughts and feelings. Case examples of three in-patient palliative care clients at Baycrest Centre for Geriatric Care are presented. The goals set for these patients were to decrease depressive symptoms and social isolation, increase communication and self-expression, stimulate reminiscence and life review, and enhance relaxation. The clients were all successful in reaching their individual goals.

  6. Cognitive, emotional, and neural benefits of musical leisure activities in aging and neurological rehabilitation: A critical review.

    Science.gov (United States)

    Särkämö, Teppo

    2017-04-28

    Music has the capacity to engage auditory, cognitive, motor, and emotional functions across cortical and subcortical brain regions and is relatively preserved in aging and dementia. Thus, music is a promising tool in the rehabilitation of aging-related neurological illnesses, such as stroke and Alzheimer disease. As the population ages and the incidence and prevalence of these illnesses rapidly increases, music-based interventions that are enjoyable and effective in the everyday care of the patients are needed. In addition to formal music therapy, musical leisure activities, such as music listening and singing, which patients can do on their own or with a caregiver, are a promising way to support psychological well-being during aging and in neurological rehabilitation. This review article provides an overview of current evidence on the cognitive, emotional, and neural effects of musical leisure activities both during normal aging and in the rehabilitation and care of stroke patients and people with dementia. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  7. Involuntary and voluntary recall of musical memories: A comparison of temporal accuracy and emotional responses.

    Science.gov (United States)

    Jakubowski, Kelly; Bashir, Zaariyah; Farrugia, Nicolas; Stewart, Lauren

    2018-01-29

    Comparisons between involuntarily and voluntarily retrieved autobiographical memories have revealed similarities in encoding and maintenance, with differences in terms of specificity and emotional responses. Our study extended this research area into the domain of musical memory, which afforded a unique opportunity to compare the same memory as accessed both involuntarily and voluntarily. Specifically, we compared instances of involuntary musical imagery (INMI, or "earworms"), the spontaneous mental recall and repetition of a tune, to deliberate recall of the same tune as voluntary musical imagery (VMI) in terms of recall accuracy and emotional responses. Twenty participants completed two 3-day tasks. In an INMI task, participants recorded information about INMI episodes as they occurred; in a VMI task, participants were prompted via text message to deliberately imagine each tune they had previously experienced as INMI. In both tasks, tempi of the imagined tunes were recorded by tapping to the musical beat while wearing an accelerometer and additional information (e.g., tune name, emotion ratings) was logged in a diary. Overall, INMI and VMI tempo measurements for the same tune were strongly correlated. Tempo recall for tunes that have definitive, recorded versions was relatively accurate, and tunes that were retrieved deliberately (VMI) were not recalled more accurately in terms of tempo than spontaneous and involuntary instances of imagined music (INMI). Some evidence that INMI elicited stronger emotional responses than VMI was also revealed. These results demonstrate several parallels to previous literature on involuntary memories and add new insights on the phenomenology of INMI.
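
    The tempo measure described above reduces to a simple computation: convert the median inter-tap interval into beats per minute and correlate the per-tune estimates across the two conditions. The sketch below illustrates this with simulated tap timestamps; it is not the authors' processing pipeline.

        # Illustrative sketch: tap-based tempo estimates for INMI vs. VMI episodes of the
        # same tunes, and their correlation. Tap timestamps are simulated.
        import numpy as np
        from scipy.stats import pearsonr

        def tempo_bpm(tap_times_s):
            """Median inter-tap interval (seconds) converted to beats per minute."""
            return 60.0 / np.median(np.diff(np.asarray(tap_times_s)))

        rng = np.random.default_rng(3)
        true_bpm = rng.uniform(70, 140, size=15)                 # one underlying tempo per tune

        def simulate_taps(bpm, jitter):
            """Cumulative sum of noisy inter-tap intervals around the true beat period."""
            return np.cumsum(rng.normal(60.0 / bpm, jitter, size=20))

        inmi_bpm = [tempo_bpm(simulate_taps(b, 0.03)) for b in true_bpm]   # involuntary episodes
        vmi_bpm = [tempo_bpm(simulate_taps(b, 0.02)) for b in true_bpm]    # deliberate recall
        r, p = pearsonr(inmi_bpm, vmi_bpm)
        print(f"INMI-VMI tempo correlation: r = {r:.2f}, p = {p:.4f}")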

  8. Emotional Variability and Clarity in Depression and Social Anxiety

    Science.gov (United States)

    Thompson, Renee J.; Boden, Matthew Tyler; Gotlib, Ian H.

    2016-01-01

    Recent research has underscored the importance of elucidating specific patterns of emotion that characterize mental disorders. We examined two emotion traits, emotional variability and emotional clarity, in relation to both categorical (diagnostic interview) and dimensional (self-report) measures of Major Depressive Disorder (MDD) and Social Anxiety Disorder (SAD) in women diagnosed with MDD only (n=35), SAD only (n=31), MDD and SAD (n=26), or no psychiatric disorder (n=38). Results of the categorical analyses suggest that elevated emotional variability and diminished emotional clarity are transdiagnostic of MDD and SAD. More specifically, emotional variability was elevated for MDD and SAD diagnoses compared to no diagnosis, showing an additive effect for co-occurring MDD and SAD. Similarly diminished levels of emotional clarity characterized all three clinical groups compared to the healthy control group. Dimensional findings suggest that whereas emotional variability is associated more consistently with depression than with social anxiety, emotional clarity is associated more consistently with social anxiety than with depression. Results are interpreted using a threshold- and dose-response framework. PMID:26371579

  9. How sadness and happiness influence ethnic stereotyping

    Directory of Open Access Journals (Sweden)

    Žeželj Iris

    2012-01-01

    Incidental affective states tend to influence stereotyping in a counterintuitive way: experimentally induced happiness leads to more stereotyping while experimentally induced sadness leads to less stereotyping. It was therefore predicted that happy subjects would (a) make more stereotype-consistent errors in a memory task; (b) attribute more stereotypical features to a specific ethnic group; and (c) be less sensitive to ethnic discrimination in comparison to sad subjects. In a sample of 90 high school students from Belgrade, Serbia, differently valenced affects were successfully induced using an 'autobiographic recollection' procedure. Experiment 1 showed that happy and sad subjects did not differ in the number of stereotype-consistent errors in the memory task. In experiment 2, however, happy subjects in comparison to sad subjects attributed more stereotypic traits to a non-stereotypical exemplar of a national category and expected him to behave more stereotypically in the future. Additionally, in a thought-listing task, happy subjects recorded more irrelevant and less story-focused thoughts in comparison to sad subjects. Finally, in Experiment 3 (N = 66), sad subjects demonstrated more sensitivity to ethnic discrimination in comparison to happy subjects. These findings are discussed in terms of the impact of emotional experience on social information-processing strategies.

  10. Music listening in families and peer groups: Benefits for young people's social cohesion and emotional well-being across four cultures

    Directory of Open Access Journals (Sweden)

    Diana Boer

    2014-05-01

    Families are central to the social and emotional development of youth, and most families engage in musical activities together, such as listening to music or talking about their favorite songs. However, empirical evidence of the positive effects of musical family rituals on social cohesion and emotional well-being is scarce. Furthermore, the role of culture in the shaping of musical family rituals and their psychological benefits has been neglected entirely. This paper investigates musical rituals in families and in peer groups (as an important secondary socialization context) in two traditional/collectivistic and two secular/individualistic cultures, and across two developmental stages (adolescence vs. young adulthood). Based on cross-sectional data from 760 young people in Kenya, the Philippines, New Zealand, and Germany, our study revealed that across cultures music listening in families and in peer groups contributes to family and peer cohesion, respectively. Furthermore, the direct contribution of music in peer groups on well-being appears across cultural contexts, whereas musical family rituals affect emotional well-being in more traditional/collectivistic contexts. Developmental analyses show that musical family rituals are consistently and strongly related to family cohesion across developmental stages, whereas musical rituals in peer groups appear more dependent on the developmental stage (in interaction with culture). Contributing to developmental as well as cross-cultural psychology, this research elucidated musical rituals and their positive effects on the emotional and social development of young people across cultures. The implications for future research and family interventions are discussed.

  12. Musical Preferences are Linked to Cognitive Styles.

    Directory of Open Access Journals (Sweden)

    David M Greenberg

    Why do we like the music we do? Research has shown that musical preferences and personality are linked, yet little is known about other influences on preferences such as cognitive styles. To address this gap, we investigated how individual differences in musical preferences are explained by the empathizing-systemizing (E-S) theory. Study 1 examined the links between empathy and musical preferences across four samples. By reporting their preferential reactions to musical stimuli, samples 1 and 2 (Ns = 2,178 and 891) indicated their preferences for music from 26 different genres, and samples 3 and 4 (Ns = 747 and 320) indicated their preferences for music from only a single genre (rock or jazz). Results across samples showed that empathy levels are linked to preferences even within genres and account for significant proportions of variance in preferences over and above personality traits for various music-preference dimensions. Study 2 (N = 353) replicated and extended these findings by investigating how musical preferences are differentiated by E-S cognitive styles (i.e., 'brain types'). Those who are type E (bias towards empathizing) preferred music on the Mellow dimension (R&B/soul, adult contemporary, soft rock genres) compared to type S (bias towards systemizing) who preferred music on the Intense dimension (punk, heavy metal, and hard rock). Analyses of fine-grained psychological and sonic attributes in the music revealed that type E individuals preferred music that featured low arousal (gentle, warm, and sensual attributes), negative valence (depressing and sad), and emotional depth (poetic, relaxing, and thoughtful), while type S preferred music that featured high arousal (strong, tense, and thrilling), and aspects of positive valence (animated) and cerebral depth (complexity). The application of these findings for clinicians, interventions, and those on the autism spectrum (largely type S or extreme type S) are discussed.

  13. Musical Preferences are Linked to Cognitive Styles

    Science.gov (United States)

    Greenberg, David M.; Baron-Cohen, Simon; Stillwell, David J.; Kosinski, Michal; Rentfrow, Peter J.

    2015-01-01

    Why do we like the music we do? Research has shown that musical preferences and personality are linked, yet little is known about other influences on preferences such as cognitive styles. To address this gap, we investigated how individual differences in musical preferences are explained by the empathizing-systemizing (E-S) theory. Study 1 examined the links between empathy and musical preferences across four samples. By reporting their preferential reactions to musical stimuli, samples 1 and 2 (Ns = 2,178 and 891) indicated their preferences for music from 26 different genres, and samples 3 and 4 (Ns = 747 and 320) indicated their preferences for music from only a single genre (rock or jazz). Results across samples showed that empathy levels are linked to preferences even within genres and account for significant proportions of variance in preferences over and above personality traits for various music-preference dimensions. Study 2 (N = 353) replicated and extended these findings by investigating how musical preferences are differentiated by E-S cognitive styles (i.e., ‘brain types’). Those who are type E (bias towards empathizing) preferred music on the Mellow dimension (R&B/soul, adult contemporary, soft rock genres) compared to type S (bias towards systemizing) who preferred music on the Intense dimension (punk, heavy metal, and hard rock). Analyses of fine-grained psychological and sonic attributes in the music revealed that type E individuals preferred music that featured low arousal (gentle, warm, and sensual attributes), negative valence (depressing and sad), and emotional depth (poetic, relaxing, and thoughtful), while type S preferred music that featured high arousal (strong, tense, and thrilling), and aspects of positive valence (animated) and cerebral depth (complexity). The application of these findings for clinicians, interventions, and those on the autism spectrum (largely type S or extreme type S) are discussed. PMID:26200656
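
    The phrase "over and above personality traits" refers to a hierarchical (nested-model) regression in which the change in explained variance is examined after adding empathy. The sketch below illustrates that logic on simulated data with hypothetical trait names; it is not the authors' analysis.

        # Illustrative sketch (not the authors' analysis): R-squared change between two
        # nested OLS models, testing empathy's incremental contribution. Data simulated.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        n = 891
        df = pd.DataFrame(rng.normal(size=(n, 5)),
                          columns=["openness", "extraversion", "agreeableness",
                                   "neuroticism", "empathy"])
        df["mellow_preference"] = (0.2 * df["agreeableness"] + 0.3 * df["empathy"]
                                   + rng.normal(size=n))

        base = smf.ols("mellow_preference ~ openness + extraversion + agreeableness"
                       " + neuroticism", df).fit()
        full = smf.ols("mellow_preference ~ openness + extraversion + agreeableness"
                       " + neuroticism + empathy", df).fit()
        print(f"R2 change when adding empathy: {full.rsquared - base.rsquared:.3f}")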

  15. Emotional competence and extrinsic emotion regulation directed toward an ostracized person.

    Science.gov (United States)

    Nozaki, Yuki

    2015-12-01

    Positive interpersonal relationships hinge on individuals' competence in regulating others' emotions as well as their own. Nevertheless, little is known about the relationship between emotional competence and specific interpersonal behaviors. In particular, it is unclear which situations require emotional competence for extrinsic emotion regulation and whether emotionally competent individuals actually attempt to regulate others' emotions. To clarify these issues, the current investigation examined the relationship between emotional competence and extrinsic emotion regulation directed toward an ostracized person. The results of Study 1 (N = 39) indicated that interpersonal emotional competence (competence related to others' emotions) was positively associated with participants' efforts to relieve the ostracized person's sadness. In Study 2 (N = 120), this relationship was moderated by the ostracized person's emotional expression. In particular, participants with high interpersonal emotional competence were more likely to attempt to regulate the sadness of ostracized individuals who expressed neutral affect. In contrast, when the ostracized person expressed sadness, there were no significant relationships between high or low interpersonal emotional competence and extrinsic emotion regulation behavior. These results offer novel insight into how emotionally competent individuals use their competence to benefit others. (c) 2015 APA, all rights reserved.

  16. Emotion, Embodied Mind and the Therapeutic Aspects of Musical Experience in Everyday Life

    Directory of Open Access Journals (Sweden)

    Dylan van der Schyff

    2013-07-01

    The capacity for music to function as a force for bio-cognitive organisation is considered in clinical and everyday contexts. Given the deeply embodied nature of such therapeutic responses to music, it is argued that cognitivist approaches may be insufficient to fully explain music’s affective power. Following this, an embodied approach is considered, where the emotional-affective response to music is discussed in terms of primary bodily systems and the innate cross-modal perceptive capacities of the embodied human mind. It is suggested that such an approach may extend the largely cognitivist view taken by much of contemporary music psychology and philosophy of music by pointing the way towards a conception of musical meaning that begins with our most primordial interactions with the world.

  17. Emotional recognition from dynamic facial, vocal and musical expressions following traumatic brain injury.

    Science.gov (United States)

    Drapeau, Joanie; Gosselin, Nathalie; Peretz, Isabelle; McKerral, Michelle

    2017-01-01

    The aim was to assess emotion recognition from dynamic facial, vocal and musical expressions in sub-groups of adults with traumatic brain injuries (TBI) of different severities, and to identify possible common underlying mechanisms across domains. Forty-one adults participated in this study: 10 with moderate-severe TBI, nine with complicated mild TBI, 11 with uncomplicated mild TBI and 11 healthy controls, who were administered experimental (emotional recognition, valence-arousal) and control tasks (emotional and structural discrimination) for each domain. Recognition of fearful faces was significantly impaired in moderate-severe and in complicated mild TBI sub-groups, as compared to those with uncomplicated mild TBI and controls. Effect sizes were medium-large. Participants with lower GCS scores performed more poorly when recognizing fearful dynamic facial expressions. Emotion recognition from auditory domains was preserved following TBI, irrespective of severity. All groups performed equally on control tasks, indicating no perceptual disorders. Although emotional recognition from vocal and musical expressions was preserved, no correlation was found across auditory domains. This preliminary study may contribute to improving comprehension of emotional recognition following TBI. Future studies of larger samples could usefully include measures of functional impacts of recognition deficits for fearful facial expressions. These could help refine interventions for emotional recognition following a brain injury.

  18. Affective Music Information Retrieval

    OpenAIRE

    Wang, Ju-Chiang; Yang, Yi-Hsuan; Wang, Hsin-Min

    2015-01-01

    Much of the appeal of music lies in its power to convey emotions/moods and to evoke them in listeners. In consequence, the past decade witnessed a growing interest in modeling emotions from musical signals in the music information retrieval (MIR) community. In this article, we present a novel generative approach to music emotion modeling, with a specific focus on the valence-arousal (VA) dimension model of emotion. The presented generative model, called acoustic emotion Gaussians (AEG)...
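
    Without access to the AEG model itself, the general idea of generative valence-arousal (VA) prediction can be illustrated very roughly as follows: cluster audio feature frames with a Gaussian mixture, attach a mean VA value to each component from annotated clips, and predict a new clip's VA as a responsibility-weighted average. The sketch below is that simplified stand-in on simulated data, not the AEG algorithm described in the article.

        # Drastically simplified illustration of generative VA modeling (NOT the AEG
        # algorithm): Gaussian mixture over audio frame features, per-component mean VA
        # learned from annotated clips, responsibility-weighted prediction. Data simulated.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(4)
        train_feats = rng.normal(size=(500, 12))        # e.g., MFCC-like frame features
        train_va = rng.uniform(-1, 1, size=(500, 2))    # annotated (valence, arousal) per frame

        gmm = GaussianMixture(n_components=8, random_state=0).fit(train_feats)
        resp = gmm.predict_proba(train_feats)                              # (500, 8) responsibilities
        component_va = (resp.T @ train_va) / resp.sum(axis=0)[:, None]     # per-component mean VA

        def predict_va(clip_frames):
            """Responsibility-weighted VA estimate for a clip's frame features."""
            w = gmm.predict_proba(clip_frames).mean(axis=0)                # average responsibilities
            return w @ component_va                                        # (valence, arousal)

        print(predict_va(rng.normal(size=(30, 12))))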

  19. Sad mood induction has an opposite effect on amygdala response to emotional stimuli in euthymic patients with bipolar disorder and healthy controls.

    Science.gov (United States)

    Horacek, Jiri; Mikolas, Pavol; Tintera, Jaroslav; Novak, Tomas; Palenicek, Tomas; Brunovsky, Martin; Höschl, Cyril; Alda, Martin

    2015-03-01

    Aberrant amygdala reactivity to affective stimuli represents a candidate factor predisposing patients with bipolar disorder (BD) to relapse, but it is unclear to what extent amygdala reactivity is state-dependent. We evaluated the modulatory influence of mood on amygdala reactivity and functional connectivity in patients with remitted BD and healthy controls. Amygdala response to sad versus neutral faces was investigated using fMRI during periods of normal and sad mood induced by autobiographical scripts. We assessed the functional connectivity of the amygdala to characterize the influence of mood state on the network responsible for the amygdala response. We included 20 patients with remitted BD and 20 controls in our study. The sad and normal mood exerted opposite effects on the amygdala response to emotional faces in patients compared with controls (F(1,38) = 5.85, p = 0.020). Sad mood amplified the amygdala response to sad facial stimuli in controls but attenuated the amygdala response in patients. The groups differed in functional connectivity between the amygdala and the inferior prefrontal gyrus (p ≤ 0.05, family-wise error-corrected) of ventrolateral prefrontal cortex (vlPFC) corresponding to Brodmann area 47. The sad mood challenge increased connectivity during the period of processing sad faces in patients but decreased connectivity in controls. Limitations to our study included long-term medication use in the patient group and the fact that we mapped only depressive (not manic) reactivity. Our results support the role of the amygdala-vlPFC as the system of dysfunctional contextual affective processing in patients with BD. Opposite amygdala reactivity unmasked by the mood challenge paradigm could represent a trait marker of altered mood regulation in patients with BD.

  20. The influence of caregiver singing and background music on vocally expressed emotions and moods in dementia care: a qualitative analysis.

    Science.gov (United States)

    Götell, Eva; Brown, Steven; Ekman, Sirkka-Liisa

    2009-04-01

    Music and singing are considered to have a strong impact on human emotions. Such an effect has been demonstrated in caregiving contexts with dementia patients. The aim of the study was to illuminate vocally expressed emotions and moods in the communication between caregivers and persons with severe dementia during morning care sessions. Three types of caring sessions were compared: the "usual" way, with no music; with background music playing; and with the caregiver singing to and/or with the patient. Nine persons with severe dementia living in a nursing home in Sweden and five professional caregivers participated in this study. Qualitative content analysis was used to examine videotaped recordings of morning care sessions, with a focus on vocally expressed emotions and moods during verbal communication. Compared to no music, the presence of background music and caregiver singing improved the mutuality of the communication between caregiver and patient, creating a joint sense of vitality. Positive emotions were enhanced, and aggressiveness was diminished. Whereas background music increased the sense of playfulness, caregiver singing enhanced the sense of sincerity and intimacy in the interaction. Caregiver singing and background music can help the caregiver improve the patient's ability to express positive emotions and moods, and to elicit a sense of vitality on the part of the person with severe dementia. The results further support the value of caregiver singing as a method to improve the quality of dementia care.

  1. Looking at the (mis) fortunes of others while listening to music

    NARCIS (Netherlands)

    Arriaga, P.; Esteves, F.; Feddes, A.R.

    2014-01-01

    The present study examined whether eye gaze behaviour regarding pictures of other people in fortunate (positive) and unfortunate (negative) circumstances is influenced by background music. Sixty-three participants were randomly assigned to three background music conditions (happy music, sad music, or

  2. Laughter exaggerates happy and sad faces depending on visual context.

    Science.gov (United States)

    Sherman, Aleksandra; Sweeny, Timothy D; Grabowecky, Marcia; Suzuki, Satoru

    2012-04-01

    Laughter is an auditory stimulus that powerfully conveys positive emotion. We investigated how laughter influenced the visual perception of facial expressions. We presented a sound clip of laughter simultaneously with a happy, a neutral, or a sad schematic face. The emotional face was briefly presented either alone or among a crowd of neutral faces. We used a matching method to determine how laughter influenced the perceived intensity of the happy, neutral, and sad expressions. For a single face, laughter increased the perceived intensity of a happy expression. Surprisingly, for a crowd of faces, laughter produced an opposite effect, increasing the perceived intensity of a sad expression in a crowd. A follow-up experiment revealed that this contrast effect may have occurred because laughter made the neutral distractor faces appear slightly happy, thereby making the deviant sad expression stand out in contrast. A control experiment ruled out semantic mediation of the laughter effects. Our demonstration of the strong context dependence of laughter effects on facial expression perception encourages a reexamination of the previously demonstrated effects of prosody, speech content, and mood on face perception, as they may be similarly context dependent.

  3. Emotional clarity and attention to emotions in cognitive behavioral group therapy and mindfulness-based stress reduction for social anxiety disorder.

    Science.gov (United States)

    Butler, Rachel M; Boden, Matthew T; Olino, Thomas M; Morrison, Amanda S; Goldin, Philippe R; Gross, James J; Heimberg, Richard G

    2018-04-01

    We examined (1) differences between controls and patients with social anxiety disorder (SAD) in emotional clarity and attention to emotions; (2) changes in emotional clarity and attention to emotions associated with cognitive-behavioral group therapy (CBGT), mindfulness-based stress reduction (MBSR), or a waitlist (WL) condition; and (3) whether emotional clarity and attention to emotions moderated changes in social anxiety across treatment. Participants were healthy controls (n = 37) and patients with SAD (n = 108) who were assigned to CBGT, MBSR, or WL in a randomized controlled trial. At pretreatment, posttreatment, and 12-month follow-up, patients with SAD completed measures of social anxiety, emotional clarity, and attention to emotions. Controls completed measures at baseline only. At pretreatment, patients with SAD had lower levels of emotional clarity than controls. Emotional clarity increased significantly among patients receiving CBGT, and changes were maintained at 12-month follow-up. Emotional clarity at posttreatment did not differ between CBGT and MBSR or between MBSR and WL. Changes in emotional clarity predicted changes in social anxiety, but emotional clarity did not moderate treatment outcome. Analyses of attention to emotions were not significant. Implications for the role of emotional clarity in the treatment of SAD are discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Neural Substrates for Processing Task-Irrelevant Sad Images in Adolescents

    Science.gov (United States)

    Wang, Lihong; Huettel, Scott; De Bellis, Michael D.

    2008-01-01

    Neural systems related to cognitive and emotional processing were examined in adolescents using event-related functional magnetic resonance imaging (fMRI). Ten healthy adolescents performed an emotional oddball task. Subjects detected infrequent circles (targets) within a continual stream of phase-scrambled images (standards). Sad and neutral…

  5. The effect of musical experience on emotional self-reports and psychophysiological responses to dissonance.

    Science.gov (United States)

    Dellacherie, Delphine; Roy, Mathieu; Hugueville, Laurent; Peretz, Isabelle; Samson, Séverine

    2011-03-01

    To study the influence of musical education on emotional reactions to dissonance, we examined self-reports and physiological responses to dissonant and consonant musical excerpts in listeners with low (LE: n=15) and high (HE: n=13) musical experience. The results show that dissonance induces more unpleasant feelings and stronger physiological responses in HE than in LE participants, suggesting that musical education reinforces aversion to dissonance. Skin conductance (SCR) and electromyographic (EMG) signals were analyzed according to a defense cascade model, which takes into account two successive time windows corresponding to orienting and defense responses. These analyses suggest that musical experience can influence the defense response to dissonance and demonstrate a powerful role of musical experience not only in autonomic but also in expressive responses to music. Copyright © 2010 Society for Psychophysiological Research.

  6. On the validity of the autobiographical emotional memory task for emotion induction.

    Directory of Open Access Journals (Sweden)

    Caitlin Mills

    Full Text Available The Autobiographical Emotional Memory Task (AEMT), which involves recalling and writing about intense emotional experiences, is a widely used method to experimentally induce emotions. The validity of this method depends upon the extent to which it can induce specific desired emotions (intended emotions), while not inducing any other (incidental) emotions at different levels across one (or more) conditions. A review of recent studies that used this method indicated that most studies exclusively monitor post-writing ratings of the intended emotions, without assessing the possibility that the method may have differentially induced other incidental emotions as well. We investigated the extent of this issue by collecting both pre- and post-writing ratings of incidental emotions in addition to the intended emotions. Using methods largely adapted from previous studies, participants were assigned to write about a profound experience of anger or fear (Experiment 1) or happiness or sadness (Experiment 2). In line with previous research, results indicated that intended emotions (anger and fear) were successfully induced in the respective conditions in Experiment 1. However, disgust and sadness were also induced while writing about an angry experience compared to a fearful experience. Similarly, although happiness and sadness were induced in the appropriate conditions, Experiment 2 indicated that writing about a sad experience also induced disgust, fear, and anger, compared to writing about a happy experience. Possible resolutions to avoid the limitations of the AEMT to induce specific discrete emotions are discussed.

  7. On the Validity of the Autobiographical Emotional Memory Task for Emotion Induction

    Science.gov (United States)

    Mills, Caitlin; D'Mello, Sidney

    2014-01-01

    The Autobiographical Emotional Memory Task (AEMT), which involves recalling and writing about intense emotional experiences, is a widely used method to experimentally induce emotions. The validity of this method depends upon the extent to which it can induce specific desired emotions (intended emotions), while not inducing any other (incidental) emotions at different levels across one (or more) conditions. A review of recent studies that used this method indicated that most studies exclusively monitor post-writing ratings of the intended emotions, without assessing the possibility that the method may have differentially induced other incidental emotions as well. We investigated the extent of this issue by collecting both pre- and post-writing ratings of incidental emotions in addition to the intended emotions. Using methods largely adapted from previous studies, participants were assigned to write about a profound experience of anger or fear (Experiment 1) or happiness or sadness (Experiment 2). In line with previous research, results indicated that intended emotions (anger and fear) were successfully induced in the respective conditions in Experiment 1. However, disgust and sadness were also induced while writing about an angry experience compared to a fearful experience. Similarly, although happiness and sadness were induced in the appropriate conditions, Experiment 2 indicated that writing about a sad experience also induced disgust, fear, and anger, compared to writing about a happy experience. Possible resolutions to avoid the limitations of the AEMT to induce specific discrete emotions are discussed. PMID:24776697

  8. Glad to be sad, and other examples of benign masochism

    Directory of Open Access Journals (Sweden)

    Paul Rozin

    2013-07-01

    Full Text Available We provide systematic evidence for the range and importance of hedonic reversals as a major source of pleasure, and incorporate these findings into the theory of benign masochism. Twenty-nine different initially aversive activities are shown to produce pleasure (hedonic reversals) in substantial numbers of individuals from both college student and Mechanical Turk samples. Hedonic reversals group, by factor analysis, into sadness, oral irritation, fear, physical activity/exhaustion, pain, strong alcohol-related tastes, bitter tastes, and disgust. Liking for sad experiences (music, novels, movies, paintings) forms a coherent entity, and is related to enjoyment of crying in response to sad movies. For fear and oral irritation, individuals also enjoy the body's defensive reactions. Enjoyment of sadness is higher in females across domains. We explain these findings in terms of benign masochism, enjoyment of negative bodily reactions and feelings in the context of feeling safe, or pleasure at "mind over body". In accordance with benign masochism, for many people, the favored level of initially negative experiences is just below the level that cannot be tolerated.

  9. A sad mood increases attention to unhealthy food images in women with food addiction.

    Science.gov (United States)

    Frayn, Mallory; Sears, Christopher R; von Ranson, Kristin M

    2016-05-01

    Food addiction and emotional eating both influence eating and weight, but little is known of how negative mood affects the attentional processes that may contribute to food addiction. The purpose of this study was to compare attention to food images in adult women (N = 66) with versus without food addiction, before and after a sad mood induction (MI). Participants' eye fixations were tracked and recorded throughout 8-s presentations of displays with healthy food, unhealthy food, and non-food images. Food addiction was self-reported using the Yale Food Addiction Scale. The sad MI involved watching an 8-min video about a young child who passed away from cancer. It was predicted that: (1) participants in the food addiction group would attend to unhealthy food significantly more than participants in the control group, and (2) participants in the food addiction group would increase their attention to unhealthy food images following the sad MI, due to increased emotional reactivity and poorer emotional regulation. As predicted, the sad MI had a different effect for those with versus without food addiction: for participants with food addiction, attention to unhealthy images increased following the sad MI and attention to healthy images decreased, whereas for participants without food addiction the sad MI did not alter attention to food. These findings contribute to researchers' understanding of the cognitive factors underlying food addiction. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. [The expression of sadness in a working class bairro in Salvador, Bahia, Brazil].

    Science.gov (United States)

    Costa, L A; Pereira, A M

    1995-01-01

    This paper examines the peculiarities of the expression of emotion in a poor neighborhood in Northeastern Brazil, the bairro of Nordeste de Amaralina, in Salvador, Bahia. Focusing on the expression of sadness, we built a scheme for understanding how the informants perceive, identify, and deal with this emotion in the course of their daily lives. We attempted to reach an understanding of the ways people in the bairro interpret sadness. In order to accomplish this goal, we built a semantic network which revealed three main clusters of emotional expression: the inner set, the bodily set, and the interactional set. We came to realize the various superpositions between the universe of emotional expression and the local concept of person.

  11. Musical rhythm and affect. Comment on "The quartet theory of human emotions: An integrative and neurofunctional model" by S. Koelsch et al.

    Science.gov (United States)

    Witek, Maria A. G.; Kringelbach, Morten L.; Vuust, Peter

    2015-06-01

    The Quartet Theory of Human Emotion (QT) proposed by Koelsch et al. [1] adds to existing affective models, e.g. by directing more attention to emotional contagion, attachment-related and non-goal-directed emotions. Such an approach seems particularly appropriate to modelling musical emotions, and music is indeed a recurring example in the text, used to illustrate the distinct characteristics of the affect systems that are at the centre of the theory. Yet, it would seem important for any theory of emotion to account for basic functions such as prediction and anticipation, which are only briefly mentioned. Here we propose that QT, specifically its focus on emotional contagion, attachment-related and non-goal directed emotions, might help generate new ideas about a largely neglected source of emotion - rhythm - a musical property that relies fundamentally on the mechanism of prediction.

  12. Using Music to Teach about the Great Depression

    Science.gov (United States)

    Stevens, Robert L.; Fogel, Jared A.

    2007-01-01

    The Great Depression is typically taught through history textbooks, but the music of this time allows students to learn about this era through different perspectives. The Great Depression witnessed many musical styles--from the lightheartedness of popular music, to the sadness of the blues, to gospel, which offered inspiration, to the tension between…

  13. Music alters visual perception.

    Directory of Open Access Journals (Sweden)

    Jacob Jolij

    Full Text Available BACKGROUND: Visual perception is not a passive process: in order to efficiently process visual input, the brain actively uses previous knowledge (e.g., memory) and expectations about what the world should look like. However, perception is not only influenced by previous knowledge. Especially the perception of emotional stimuli is influenced by the emotional state of the observer. In other words, how we perceive the world does not only depend on what we know of the world, but also on how we feel. In this study, we further investigated the relation between mood and perception. METHODS AND FINDINGS: We let observers do a difficult stimulus detection task, in which they had to detect schematic happy and sad faces embedded in noise. Mood was manipulated by means of music. We found that observers were more accurate in detecting faces congruent with their mood, corroborating earlier research. However, in trials in which no actual face was presented, observers made a significant number of false alarms. The content of these false alarms, or illusory percepts, was strongly influenced by the observers' mood. CONCLUSIONS: As illusory percepts are believed to reflect the content of internal representations that are employed by the brain during top-down processing of visual input, we conclude that top-down modulation of visual processing is not purely predictive in nature: mood, in this case manipulated by music, may also directly alter the way we perceive the world.

  14. Observer's Mood Manipulates Level of Visual Processing: Evidence from Face and Nonface Stimuli

    Directory of Open Access Journals (Sweden)

    Setareh Mokhtari

    2011-05-01

    Full Text Available For investigating the effect of observer's mood on the level of processing of visual stimuli, a happy or sad mood was induced in two groups of participants by asking them to deliberate on one of their sad or happy memories while listening to a congruent piece of music. This was followed by a computer-based task that required counting some features (arcs or lines) of emotional schematic faces (with either sad or happy expressions) for group 1, and counting the same features of meaningless combined shapes for group 2. Reaction time analysis indicated a significant difference in RTs after listening to the sad music compared with the happy music for group 1; participants with sad moods were significantly slower when they worked on local levels of schematic faces with sad expressions. Happy moods did not show any specific effect on the reaction time of participants who were working on local details of emotionally expressive faces. Sad moods or happy moods had no significant effect on the reaction time of working on parts of meaningless shapes. It seems that sad moods, as a contextual factor, elevate the ability of sad expressions to grab attention and block fast access to the local parts of holistic meaningful shapes.

  15. Jenefer Robinson. Deeper than Reason, Emotion and its Role in Literature, Music and Art

    OpenAIRE

    PHELAN, Richard

    2011-01-01

    This book sets out to examine the role of emotion in both the construction and reception of art. It begins with a survey of recent theories of emotion and then applies them to the action of emotion in the fields of literature, music and, to a lesser extent, painting. Part One thus addresses the question of what emotions are and how they operate. It considers in particular the theory of emotions as judgements, as argued by philosophers Robert Gordon, Gabriele Taylor, Robert Solomon, and William L...

  16. Expressive-Emotional Sides of the Development of The Preschool Child Speech by Means Onto Psychological Music Therapy

    OpenAIRE

    Volzhentseva Iryna

    2017-01-01

    In this article, the development of the expressive-emotional aspects of preschool children's speech is considered by means of ontopsychological music therapy. A theoretical analysis of psychophysiological theories methodologically substantiates the development of the emotional and expressive aspects of children's speech through active music therapy and through the interaction of speech and music as related, mutually influencing sign and semiotic kinds of activ...

  17. Emotion socialization in the context of risk and psychopathology: Mother and father socialization of anger and sadness in adolescents with depressive disorder.

    Science.gov (United States)

    Shortt, Joann Wu; Katz, Lynn Fainsilber; Allen, Nicholas; Leve, Craig; Davis, Betsy; Sheeber, Lisa

    2016-02-01

    This study examined parental emotion socialization processes associated with adolescent unipolar depressive disorder. Adolescent participants (N=107; 42 boys) were selected either to meet criteria for current unipolar depressive disorder or to be psychologically healthy as defined by no lifetime history of psychopathology or mental health treatment and low levels of current depressive symptomatology. A multisource/method measurement strategy was used to assess mothers' and fathers' responses to adolescent sad and angry emotion. Each parent and the adolescents completed questionnaire measures of parental emotion socialization behavior, and participated in meta-emotion interviews and parent-adolescent interactions. As hypothesized, parents of adolescents with depressive disorder engaged in fewer supportive responses and more unsupportive responses overall relative to parents of nondepressed adolescents. Between group differences were more pronounced for families of boys, and for fathers relative to mothers. The findings indicate that parent emotion socialization is associated with adolescent depression and highlight the importance of including fathers in studies of emotion socialization, especially as it relates to depression.

  18. Clinical and Demographic Factors Associated with the Cognitive and Emotional Efficacy of Regular Musical Activities in Dementia.

    Science.gov (United States)

    Särkämö, Teppo; Laitinen, Sari; Numminen, Ava; Kurki, Merja; Johnson, Julene K; Rantanen, Pekka

    2016-01-01

    Recent evidence suggests that music-based interventions can be beneficial in maintaining cognitive, emotional, and social functioning in persons with dementia (PWDs). Our aim was to determine how clinical, demographic, and musical background factors influence the cognitive and emotional efficacy of caregiver-implemented musical activities in PWDs. In a randomized controlled trial, 89 PWD-caregiver dyads received a 10-week music coaching intervention involving either singing or music listening or standard care. Extensive neuropsychological testing and mood and quality of life (QoL) measures were performed before and after the intervention (n = 84) and six months later (n = 74). The potential effects of six key background variables (dementia etiology and severity, age, care situation, singing/instrument playing background) on the outcome of the intervention were assessed. Singing was beneficial especially in improving working memory in PWDs with mild dementia and in maintaining executive function and orientation in younger PWDs. Music listening was beneficial in supporting general cognition, working memory, and QoL especially in PWDs with moderate dementia not caused by Alzheimer's disease (AD) who were in institutional care. Both music interventions alleviated depression especially in PWDs with mild dementia and AD. The musical background of the PWD did not influence the efficacy of the music interventions. Our findings suggest that clinical and demographic factors can influence the cognitive and emotional efficacy of caregiver-implemented musical activities and are, therefore, recommended to take into account when applying and developing the intervention to achieve the greatest benefit.

  19. Music and Emotion. Review of "Music and Emotion", monographic issue of the journal «Music Analysis», edited by Michael Spitzer, n. 29/1-2-3 (2010)

    Directory of Open Access Journals (Sweden)

    Mario Baroni

    2014-03-01

    Full Text Available The last fifteen years have seen the development of a lively interest, on an international scale, in the topic of emotion in music, documented by numerous noteworthy publications: the two volumes edited by Juslin and Sloboda [2001; 2010], the monographic issues of the journal «Musicae Scientiae» published in 2001 and 2011, and many other texts published in various parts of the world. The topic is not without repercussions also for those involved in musical analysis; and it is no mere chance that the English journal «Music Analysis» chose to dedicate the volume we are now reviewing to the subject. The contributions contained in the volume derive from an international meeting held in Durham in September 2009 and offer a fairly lively cross-section of a wide series of questions. Our only regret is that, with the exception of three Nordic names (Juslin, Lindström and Eerola), whose place is now solidly established in English-language publications, all the authors come from the now almost self-referential context of British or American research. But of course this is nothing new.

  20. General emotion processing in social anxiety disorder: neural issues of cognitive control.

    Science.gov (United States)

    Brühl, Annette Beatrix; Herwig, Uwe; Delsignore, Aba; Jäncke, Lutz; Rufer, Michael

    2013-05-30

    Anxiety disorders are characterized by deficient emotion regulation prior to and in anxiety-evoking situations. Patients with social anxiety disorder (SAD) have increased brain activation also during the anticipation and perception of non-specific emotional stimuli, pointing to biased general emotion processing. In the current study we addressed the neural correlates of emotion regulation by cognitive control during the anticipation and perception of non-specific emotional stimuli in patients with SAD. Thirty-two patients with SAD underwent functional magnetic resonance imaging during the announced anticipation and perception of emotional stimuli. Half of them were trained and instructed to apply reality-checking as a control strategy, the others anticipated and perceived the stimuli. Reality checking significantly modulated activation in emotion-processing structures during the anticipation and perception of negative emotional stimuli. The medial prefrontal cortex was comparably active in both groups (p > 0.50). The results suggest that cognitive control in patients with SAD influences emotion processing structures, supporting the usefulness of emotion regulation training in the psychotherapy of SAD. In contrast to studies in healthy subjects, cognitive control was not associated with increased activation of prefrontal regions in SAD. This points to possibly disturbed general emotion regulating circuits in SAD. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  1. Music Therapy: A Career in Music Therapy

    Science.gov (United States)

    Music therapy is a healthcare profession that uses music to help individuals of all ages improve physical, cognitive, emotional, and social functioning. Music therapists work with children and adults with developmental ...

  2. The effect of background music on episodic memory and autonomic responses: listening to emotionally touching music enhances facial memory capacity

    OpenAIRE

    Mado Proverbio, C.A. Alice; Lozano Nasi, Valentina; Alessandra Arcari, Laura; De Benedetto, Francesco; Guardamagna, Matteo; Gazzola, Martina; Zani, Alberto

    2015-01-01

    The aim of this study was to investigate how background auditory processing can affect other perceptual and cognitive processes as a function of stimulus content, style and emotional nature. Previous studies have offered contrasting evidence, and it has been recently shown that listening to music negatively affected concurrent mental processing in the elderly but not in young adults. To further investigate this matter, the effect of listening to music vs. listening to the sound of rain or sil...

  3. Quantification of vascular function changes under different emotion states: A pilot study.

    Science.gov (United States)

    Xia, Yirong; Yang, Licai; Mao, Xueqin; Zheng, Dingchang; Liu, Chengyu

    2017-01-01

    Recent studies have indicated that physiological parameters change with different emotion states. This study aimed to quantify the changes of vascular function at different emotion and sub-emotion states. Twenty young subjects were studied with their finger photoplethysmographic (PPG) pulses recorded at three distinct emotion states: natural (1 minute), happiness and sadness (10 minutes each). Within the period of the happiness and sadness emotion states, two sub-emotion states (calmness and outburst) were identified with the synchronously recorded videos. Reflection index (RI) and stiffness index (SI), two widely used indices of vascular function, were derived from the PPG pulses to quantify their differences between the three emotion states, as well as between the two sub-emotion states. The results showed that, when compared with the natural emotion, RI and SI decreased in both the happiness and sadness emotions, and the decreases in RI were statistically significant for both. There was also a significant difference in RI between the emotions. Compared with the calmness sub-emotion, the outburst sub-emotion showed significant differences for both the happiness and sadness emotions, with a further difference found only in the sadness emotion. This pilot study confirmed that vascular function changes with different emotion states can be quantified by simple PPG measurement.
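
    The abstract above derives the reflection index (RI) and stiffness index (SI) from finger PPG pulses. The sketch below uses the commonly cited definitions (diastolic-to-systolic amplitude ratio for RI; subject height divided by the systolic-to-diastolic peak delay for SI) and is not the authors' analysis pipeline; the synthetic pulse and the simple peak-picking rule are illustrative assumptions.

```python
# Hedged sketch (standard definitions, not the study's exact pipeline):
# RI = diastolic peak amplitude / systolic peak amplitude (dimensionless)
# SI = subject height / time between systolic and diastolic peaks (m/s)
import numpy as np

def ri_si_from_pulse(pulse, fs_hz, height_m):
    """pulse: 1-D array containing a single baseline-removed PPG pulse."""
    systolic_idx = int(np.argmax(pulse))
    # Diastolic (reflected) peak: largest local maximum after the systolic peak.
    tail = pulse[systolic_idx + 1:]
    local_max = np.where((tail[1:-1] > tail[:-2]) & (tail[1:-1] > tail[2:]))[0] + 1
    if len(local_max) == 0:
        raise ValueError("no diastolic peak found")
    diastolic_idx = systolic_idx + 1 + local_max[np.argmax(tail[local_max])]
    ri = pulse[diastolic_idx] / pulse[systolic_idx]
    delta_t = (diastolic_idx - systolic_idx) / fs_hz
    si = height_m / delta_t
    return ri, si

# Usage with a synthetic two-peak pulse sampled at 100 Hz (illustration only).
t = np.linspace(0, 1, 101)
pulse = np.exp(-((t - 0.2) / 0.05) ** 2) + 0.5 * np.exp(-((t - 0.5) / 0.08) ** 2)
print(ri_si_from_pulse(pulse, fs_hz=100, height_m=1.75))
```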

  4. Misery loves company: when sadness increases the desire for social connectedness.

    Science.gov (United States)

    Gray, Heather M; Ishii, Keiko; Ambady, Nalini

    2011-11-01

    In three experiments, the authors investigated the effects of sadness on the desire for social connectedness. They hypothesized that sadness serves an adaptive function by motivating people to reach out to others and preferentially attend to information related to one's current level of social connectedness, but only when it is instigated by social loss. Consistent with this hypothesis, the authors observed that sadness induced by an emotional depiction of social loss enhanced (a) attention to nonverbal cues, an important source of information concerning an individual's current level of social connectedness (Experiment 1), and (b) the desire to engage in social behaviors (Experiment 2). In Experiment 3 the authors found that sadness that results from imagined social loss uniquely produced this pattern of effects. Sadness that resulted from imagined failure had different effects on motivation and no effect on sensitivity to nonverbal cues. These results support and refine functional explanations for the universality of sadness.

  5. Parental Personality and Its Relationship to Socialization of Sadness in Children.

    Science.gov (United States)

    Race, Eleanor; Brand, Ann E.

    The relationship between parental personality traits and how parents socialize their children's emotions is largely unexplored. This study examined the association of personality traits such as Neuroticism and Agreeableness, and emotion traits such as Anxiety and Trait Depression to the strategies parents use to socialize their children's sadness,…

  6. Evolutionary considerations on complex emotions and music-induced emotions. Comment on "The quartet theory of human emotions: An integrative and neurofunctional model" by S. Koelsch et al.

    Science.gov (United States)

    Gingras, Bruno; Marin, Manuela M.

    2015-06-01

    Recent efforts to uncover the neural underpinnings of emotional experiences have provided a foundation for novel neurophysiological theories of emotions, adding to the existing body of psychophysiological, motivational, and evolutionary theories. Besides explicitly modeling human-specific emotions and considering the interactions between emotions and language, Koelsch et al.'s original contribution to this challenging endeavor is to identify four brain areas as distinct "affect systems" which differ in terms of emotional qualia and evolutionary pathways [1]. Here, we comment on some features of this promising Quartet Theory of Emotions, focusing particularly on evolutionary and biological aspects related to the four affect systems and their relation to prevailing emotion theories, as well as on the role of music-induced emotions.

  7. Dissociation of sad facial expressions and autonomic nervous system responding in boys with disruptive behavior disorders

    OpenAIRE

    Marsh, Penny; Beauchaine, Theodore P.; Williams, Bailey

    2007-01-01

    Although deficiencies in emotional responding have been linked to externalizing behaviors in children, little is known about how discrete response systems (e.g., expressive, physiological) are coordinated during emotional challenge among these youth. We examined time-linked correspondence of sad facial expressions and autonomic reactivity during an empathy-eliciting task among boys with disruptive behavior disorders (n = 31) and controls (n = 23). For controls, sad facial expressions were ass...

  8. "Cycling around an emotional core of sadness": emotion regulation in a couple after the loss of a child.

    Science.gov (United States)

    Hooghe, An; Neimeyer, Robert A; Rober, Peter

    2012-09-01

    In contrast to the traditional view of working through grief by confronting it, recent theories have emphasized an oscillating process of confronting and avoiding the pain of loss. In this qualitative study, we sought a better understanding of this process by conducting a detailed case study of a bereaved couple after the loss of their infant daughter. We employed multiple data collection methods (using interviews and written feedback) and an intensive auditing process in our thematic analysis, with special attention to a recurrent metaphor used by this bereaved couple in describing their personal and relational experience. The findings suggest the presence of a dialectic tension between the need to be close to the deceased child and the need for distance from the pain of the loss, which was evidenced on both individual and relational levels. For this couple, the image of "cycling around an emotional core of sadness" captured their dynamic way of dealing with this dialectic of closeness and distance.

  9. Mood Dependent Music Generator

    DEFF Research Database (Denmark)

    Scirea, Marco

    2013-01-01

    Music is one of the most expressive media to show and manipulate emotions, but there have been few studies on how to generate music connected to emotions. Such studies have often been dismissed by musicians, who affirm that a machine cannot create expressive music, as it is the composer's and player's experiences and emotions that get poured into the piece. At the same time, another problem is that music is highly complicated (and subjective) and finding out which elements transmit certain emotions is not an easy task. This demo wants to show how the manipulation of a set of features can actually change the mood the music transmits, hopefully awakening an interest in this area of research...

  10. Is music a memory booster in normal aging? The influence of emotion.

    Science.gov (United States)

    Ratovohery, Stéphie; Baudouin, Alexia; Gachet, Aude; Palisson, Juliette; Narme, Pauline

    2018-05-17

    Age-related differences in episodic memory have been explained by a decrement in strategic encoding implementation. It has been shown in clinical populations that music can be used during the encoding stage as a mnemonic strategy to learn verbal information. The effectiveness of this strategy remains equivocal in older adults (OA). Furthermore, the impact of the emotional valence of the music used has never been investigated in this context. Thirty OA and 24 young adults (YA) learned texts that were either set to music that was positively or negatively valenced, or spoken only. Immediate and delayed recalls were measured. Results showed that: (i) OA perform worse than YA in immediate and delayed recall; (ii) sung lyrics are better remembered than spoken ones in OA, but only when the associated music is positively valenced; (iii) this pattern is observed regardless of the retention delay. These findings support the benefit of musical encoding on verbal learning in healthy OA and are consistent with the positivity effect classically reported in normal aging. Added to the potential applications in daily life, the results are discussed with respect to the theoretical hypotheses of the mechanisms underlying the advantage of musical encoding.

  11. Study Protocol RapMusicTherapy for emotion regulation in a school setting

    NARCIS (Netherlands)

    Uhlig, S.; Jansen, E.; Scherder, E.J.A.

    2015-01-01

    The growing risk of the development of problem behaviors in adolescents (ages 10-15) requires effective methods for prevention, supporting self-regulative capacities. Music listening as an effective self-regulative tool for emotions and behavioral adaptation for adolescents and youth is widely

  12. EEG-Based Analysis of the Emotional Effect of Music Therapy on Palliative Care Cancer Patients

    Directory of Open Access Journals (Sweden)

    Rafael Ramirez

    2018-03-01

    Full Text Available Music is known to have the power to induce strong emotions. The present study assessed, based on Electroencephalography (EEG) data, the emotional response of terminally ill cancer patients to a music therapy intervention in a randomized controlled trial. A sample of 40 participants from the palliative care unit in the Hospital del Mar in Barcelona was randomly assigned to two groups of 20. The first group [experimental group (EG)] participated in a session of music therapy (MT), and the second group [control group (CG)] was provided with company. Based on our previous work on EEG-based emotion detection, instantaneous emotional indicators in the form of a coordinate in the arousal-valence plane were extracted from the participants’ EEG data. The emotional indicators were analyzed in order to quantify (1) the overall emotional effect of MT on the patients compared to controls, and (2) the relative effect of the different MT techniques applied during each session. During each MT session, five conditions were considered: I (initial patient’s state before MT starts), C1 (passive listening), C2 (active listening), R (relaxation), and F (final patient’s state). EEG data analysis showed a significant increase in valence (p = 0.0004) and arousal (p = 0.003) between I and F in the EG. No significant changes were found in the CG. These results can be interpreted as a positive emotional effect of MT in advanced cancer patients. In addition, according to pre- and post-intervention questionnaire responses, participants in the EG also showed a significant decrease in tiredness, anxiety and breathing difficulties, as well as an increase in levels of well-being. No equivalent changes were observed in the CG.

  13. EEG-Based Analysis of the Emotional Effect of Music Therapy on Palliative Care Cancer Patients

    Science.gov (United States)

    Ramirez, Rafael; Planas, Josep; Escude, Nuria; Mercade, Jordi; Farriols, Cristina

    2018-01-01

    Music is known to have the power to induce strong emotions. The present study assessed, based on Electroencephalography (EEG) data, the emotional response of terminally ill cancer patients to a music therapy intervention in a randomized controlled trial. A sample of 40 participants from the palliative care unit in the Hospital del Mar in Barcelona was randomly assigned to two groups of 20. The first group [experimental group (EG)] participated in a session of music therapy (MT), and the second group [control group (CG)] was provided with company. Based on our previous work on EEG-based emotion detection, instantaneous emotional indicators in the form of a coordinate in the arousal-valence plane were extracted from the participants’ EEG data. The emotional indicators were analyzed in order to quantify (1) the overall emotional effect of MT on the patients compared to controls, and (2) the relative effect of the different MT techniques applied during each session. During each MT session, five conditions were considered: I (initial patient’s state before MT starts), C1 (passive listening), C2 (active listening), R (relaxation), and F (final patient’s state). EEG data analysis showed a significant increase in valence (p = 0.0004) and arousal (p = 0.003) between I and F in the EG. No significant changes were found in the CG. These results can be interpreted as a positive emotional effect of MT in advanced cancer patients. In addition, according to pre- and post-intervention questionnaire responses, participants in the EG also showed a significant decrease in tiredness, anxiety and breathing difficulties, as well as an increase in levels of well-being. No equivalent changes were observed in the CG. PMID:29551984
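
    The two records above extract instantaneous arousal-valence indicators from EEG. The sketch below shows one commonly used approach, frontal alpha asymmetry for valence and a beta/alpha power ratio for arousal; it is an assumption-laden illustration rather than the authors' exact method, and the electrode names, sampling rate and band limits are placeholders.

```python
# Hedged sketch of a common EEG arousal-valence estimate (assumed, not the
# study's exact algorithm): valence from F4-F3 alpha asymmetry, arousal from
# the beta/alpha power ratio of one short EEG window.
import numpy as np

def band_power(signal, fs, low, high):
    """Average power of `signal` in the [low, high] Hz band via the periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def arousal_valence(f3, f4, fs=128):
    """f3, f4: one window of EEG samples from electrodes F3 and F4."""
    alpha_f3 = band_power(f3, fs, 8, 12)
    alpha_f4 = band_power(f4, fs, 8, 12)
    beta_f3 = band_power(f3, fs, 12, 28)
    beta_f4 = band_power(f4, fs, 12, 28)
    valence = np.log(alpha_f4) - np.log(alpha_f3)          # right-left alpha asymmetry
    arousal = (beta_f3 + beta_f4) / (alpha_f3 + alpha_f4)  # beta/alpha ratio
    return arousal, valence

# Illustration with random data standing in for a 1-second EEG window.
rng = np.random.default_rng(0)
print(arousal_valence(rng.standard_normal(128), rng.standard_normal(128)))
```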

  14. Effect of music therapy with emotional-approach coping on preprocedural anxiety in cardiac catheterization: a randomized controlled trial.

    Science.gov (United States)

    Ghetti, Claire M

    2013-01-01

    Individuals undergoing cardiac catheterization are likely to experience elevated anxiety periprocedurally, with highest anxiety levels occurring immediately prior to the procedure. Elevated anxiety has the potential to negatively impact these individuals psychologically and physiologically in ways that may influence the subsequent procedure. This study evaluated the use of music therapy, with a specific emphasis on emotional-approach coping, immediately prior to cardiac catheterization to impact periprocedural outcomes. The randomized, pretest/posttest control group design consisted of two experimental groups--the Music Therapy with Emotional-Approach Coping group [MT/EAC] (n = 13), and a talk-based Emotional-Approach Coping group (n = 14), compared with a standard care Control group (n = 10). MT/EAC led to improved positive affective states in adults awaiting elective cardiac catheterization, whereas a talk-based emphasis on emotional-approach coping or standard care did not. All groups demonstrated a significant overall decrease in negative affect. The MT/EAC group demonstrated a statistically significant, but not clinically significant, increase in systolic blood pressure most likely due to active engagement in music making. The MT/EAC group trended toward shortest procedure length and least amount of anxiolytic required during the procedure, while the EAC group trended toward least amount of analgesic required during the procedure, but these differences were not statistically significant. Actively engaging in a session of music therapy with an emphasis on emotional-approach coping can improve the well-being of adults awaiting cardiac catheterization procedures.

  15. The Sad, the Mad and the Bad: Co-Existing Discourses of Girlhood

    Science.gov (United States)

    Brown, Marion

    2011-01-01

    Three significant, prevailing and overlapping narratives of teenage girls have dominated North American popular consciousness since the early 1990s: the sad girl, victimized by male privilege and misogyny of adolescence and beyond; the mad grrrls who rejected this vulnerability through music and media; and the bad girls of much current popular…

  16. Musician effect on perception of spectro-temporally degraded speech, vocal emotion, and music in young adolescents.

    NARCIS (Netherlands)

    Başkent, Deniz; Fuller, Christina; Galvin, John; Schepel, Like; Gaudrain, Etienne; Free, Rolien

    2018-01-01

    In adult normal-hearing musicians, perception of music, vocal emotion, and speech in noise has been previously shown to be better than non-musicians, sometimes even with spectro-temporally degraded stimuli. In this study, melodic contour identification, vocal emotion identification, and speech

  17. A joint behavioral and emotive analysis of synchrony in music therapy of children with autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Paola Venuti

    2016-12-01

    Full Text Available Background Synchrony is an essential component of interactive exchanges. In mother-infant interaction, synchrony underlies reciprocity and emotive regulation. A severe lack of synchrony is indeed a core issue within the communication and interaction deficit that characterizes autism spectrum disorders (ASD), in accordance with the DSM-5 classification. Based on emerging evidence that music therapy can improve the communication and regulation ability in children with ASD, we aim to verify quantitatively whether: (1) children with ASD improve synchrony with their therapist during music therapy sessions, and (2) this ability persists in different structured contexts. Participants and procedure Twenty-five children, aged from 4 to 6 years (M = 57.80, SD = 16.70), with an autistic disorder diagnosis based on DSM IV-TR and the Autism Diagnostic Observation Schedule (ADOS), participated in the study. An observational tool for coding behaviors and emotive states of synchrony (Child Behavioral and Emotional status Code [CBEC] and Adult Behavioral and Emotional status Code [ABEC]) was applied in video-recorded sessions of improvisational music therapy (IMT) for the subject-therapist pair. For each subject, we considered the 20 central minutes of the first, tenth and twentieth session of IMT. To verify the persistence of the effect in a different context with a different adult, we administered and coded the interactive ADOS section (anticipation of a routine with objects) applied after session 20 of therapy. Results During the IMT cycle, the amount of synchronic activity increases, with a significant difference from Session 1 to Session 20 in behavioral synchrony and emotional attunement. Also, the increase in synchrony is confirmed at the end of the therapy cycle as measured by an interactive ADOS section. Conclusions Synchrony is an effective indicator of efficacy for music therapy in children with ASD, in particular to evaluate the expansion of positive emotive

  18. Comparison of Sadness, Anger, and Fear Facial Expressions When Toddlers Look at Their Mothers

    Science.gov (United States)

    Buss, Kristin A.; Kiel, Elizabeth J.

    2004-01-01

    Research suggests that sadness expressions may be more beneficial to children than other emotions when eliciting support from caregivers. It is unclear, however, when children develop the ability to regulate their displays of distress. The current study addressed this question. Distress facial expressions (e.g., fear, anger, and sadness) were…

  19. Music, emotion, and autobiographical memory: they're playing your song.

    Science.gov (United States)

    Schulkind, M D; Hennis, L K; Rubin, D C

    1999-11-01

    Very long-term memory for popular music was investigated. Older and younger adults listened to 20-sec excerpts of popular songs drawn from across the 20th century. The subjects gave emotionality and preference ratings and tried to name the title, artist, and year of popularity for each excerpt. They also performed a cued memory test for the lyrics. The older adults' emotionality ratings were highest for songs from their youth; they remembered more about these songs, as well. However, the stimuli failed to cue many autobiographical memories of specific events. Further analyses revealed that the older adults were less likely than the younger adults to retrieve multiple attributes of a song together (i.e., title and artist) and that there was a significant positive correlation between emotion and memory, especially for the older adults. These results have implications for research on long-term memory, as well as on the relationship between emotion and memory.

  20. REM-Enriched Naps Are Associated with Memory Consolidation for Sad Stories and Enhance Mood-Related Reactivity.

    Science.gov (United States)

    Gilson, Médhi; Deliens, Gaétane; Leproult, Rachel; Bodart, Alice; Nonclercq, Antoine; Ercek, Rudy; Peigneux, Philippe

    2015-12-29

    Emerging evidence suggests that emotion and affect modulate the relation between sleep and cognition. In the present study, we investigated the role of rapid-eye movement (REM) sleep in mood regulation and memory consolidation for sad stories. In a counterbalanced design, participants (n = 24) listened to either a neutral or a sad story during two sessions, spaced one week apart. After listening to the story, half of the participants had a short (45 min) morning nap. The other half had a long (90 min) morning nap, richer in REM and N2 sleep. Story recall, mood evolution and changes in emotional response to the re-exposure to the story were assessed after the nap. Although recall performance was similar for sad and neutral stories irrespective of nap duration, sleep measures were correlated with recall performance in the sad story condition only. After the long nap, REM sleep density positively correlated with retrieval performance, while re-exposure to the sad story led to diminished mood and increased skin conductance levels. Our results suggest that REM sleep may not only be associated with the consolidation of intrinsically sad material, but also enhances mood reactivity, at least on the short term.

  1. [Non pharmacological treatment for Alzheimer's disease: comparison between musical and non-musical interventions].

    Science.gov (United States)

    Narme, Pauline; Tonini, Audrey; Khatir, Fatiha; Schiaratura, Loris; Clément, Sylvain; Samson, Séverine

    2012-06-01

    On account of the limited effectiveness of pharmacological treatments in Alzheimer's disease (AD), there is a growing interest in nonpharmacological treatments, including musical intervention. Despite the large number of studies showing the multiple benefits of music on behavioral, emotional and cognitive disorders of patients with AD, only a few of them used a rigorous method. Finally, the specificity of musical interventions as compared to other pleasant, non-musical interventions has rarely been addressed. To investigate this issue, two randomized controlled trials were conducted contrasting the effects of musical versus painting (Study 1) or cooking (Study 2) interventions on the emotional state of 33 patients with AD. The patients' emotional state was assessed by analyzing professional caregivers' judgments of the patient's mood, then facial expressions and the valence of the discourse from short filmed interviews. In the first study (n=22), each intervention lasted 3 weeks (two sessions per week) and the patients' emotional state was assessed before, during and after the intervention periods. After the interventions, the results showed that facial expression, discourse content and mood assessment improved (more positive than negative expressions) as compared to the pre-intervention assessment. However, the musical intervention was more effective and had longer-lasting effects than painting. In the second study (n=11), we further examined the long-lasting effects of music as compared to cooking by adding evaluations of the patients' emotional state 2 and 4 weeks after the last intervention. Again, music was more effective in improving the emotional state. Music had positive effects that remained significant up to 4 weeks after the intervention, while cooking only produced a short-term effect on mood. In both studies, benefits were significant in more than 80% of patients. Taken together, these findings show that music intervention has specific effects on patients' emotional well-being, offering promising

  2. The joy of heartfelt music: An examination of emotional and physiological responses.

    Science.gov (United States)

    Lynar, Emily; Cvejic, Erin; Schubert, Emery; Vollmer-Conna, Ute

    2017-10-01

    Music-listening can be a powerful therapeutic tool for mood rehabilitation, yet quality evidence for its validity as a singular treatment is scarce. Specifically, the relationship between music-induced mood improvement and meaningful physiological change, as well as the influence of music- and person-related covariates on these outcomes are yet to be comprehensively explored. Ninety-four healthy participants completed questionnaires probing demographics, personal information, and musical background. Participants listened to two prescribed musical pieces (one classical, one jazz), an "uplifting" piece of their own choice, and an acoustic control stimulus (white noise) in randomised order. Physiological responses (heart rate, respiration, galvanic skin response) were recorded throughout. After each piece, participants rated their subjective responses on a series of Likert scales. Subjectively, the self-selected pieces induced the most joy, and the classical piece was perceived as most relaxing, consistent with the arousal ratings proposed by a music selection panel. These two stimuli led to the greatest overall improvement in composite emotional state from baseline. Psycho-physiologically, self-selected pieces often elicited a "eustress" response ("positive arousal"), whereas classical music was associated with the highest heart rate variability. Very few person-related covariates appeared to affect responses, and music-related covariates (besides self-selection) appeared arbitrary. These data provide strong evidence that optimal music for therapy varies between individuals. Our findings additionally suggest that the self-selected music was most effective for inducing a joyous state; while low arousal classical music was most likely to shift the participant into a state of relaxation. Therapy should attempt to find the most effective and "heartfelt" music for each listener, according to therapeutic goals. Copyright © 2017 Elsevier B.V. All rights reserved.
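
    The study above lists heart rate variability among its physiological measures. As a small illustration only (the abstract does not state the authors' exact metric), the sketch below computes RMSSD, a standard time-domain heart rate variability index, from a short series of inter-beat intervals.

```python
# Hedged sketch (standard definition, not the study's analysis pipeline):
# RMSSD = root mean square of successive differences between R-R intervals.
import numpy as np

def rmssd(rr_intervals_ms):
    """rr_intervals_ms: consecutive inter-beat intervals in milliseconds."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Example with illustrative values; higher RMSSD -> greater variability.
print(rmssd([820, 840, 810, 860, 835, 845]))
```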

  3. Proactive and reactive control depends on emotional valence: a Stroop study with emotional expressions and words.

    Science.gov (United States)

    Kar, Bhoomika Rastogi; Srinivasan, Narayanan; Nehabala, Yagyima; Nigam, Richa

    2018-03-01

    We examined proactive and reactive control effects in the context of task-relevant happy, sad, and angry facial expressions on a face-word Stroop task. Participants identified the emotion expressed by a face that contained a congruent or incongruent emotional word (happy/sad/angry). Proactive control effects were measured in terms of the reduction in Stroop interference (difference between incongruent and congruent trials) as a function of previous trial emotion and previous trial congruence. Reactive control effects were measured in terms of the reduction in Stroop interference as a function of current trial emotion and previous trial congruence. Previous trial negative emotions exert greater influence on proactive control than the positive emotion. Sad faces in the previous trial resulted in greater reduction in the Stroop interference for happy faces in the current trial. However, current trial angry faces showed stronger adaptation effects compared to happy faces. Thus, both proactive and reactive control mechanisms are dependent on emotional valence of task-relevant stimuli.

  4. Involuntary and voluntary recall of musical memories: a comparison of temporal accuracy and emotional responses.

    OpenAIRE

    Jakubowski, Kelly; Bashir, Zaariyah; Farrugia, Nicolas; Stewart, Lauren

    2018-01-01

    Comparisons between involuntarily and voluntarily retrieved autobiographical memories have revealed similarities in encoding and maintenance, with differences in terms of specificity and emotional responses. Our study extended this research area into the domain of musical memory, which afforded a unique opportunity to compare the same memory as accessed both involuntarily and voluntarily. Specifically, we compared instances of involuntary musical imagery (INMI, or “earworms”)—the spontaneous ...

  5. Distinctive mood induction effects of fear or sadness on anger and aggressive behavior

    Directory of Open Access Journals (Sweden)

    Jun eZhan

    2015-06-01

    Full Text Available A recent study reported that the successful implementation of cognitive emotion regulation depends on higher-level cognitive functions, such as top-down control, which may be impaired in stressful situations. This calls for cognition-free self-regulatory strategies that do not require top-down control. In contrast to cognitive emotion regulation, which emphasizes the role of cognition, traditional Chinese philosophy and medicine view different types of emotions as promoting or counteracting each other without the involvement of cognition, which provides an insightful perspective for developing cognition-free regulatory strategies. In this study, we examined two hypotheses regarding the modulation of anger and aggressive behavior: sadness counteracts anger and aggressive behavior, whereas fear promotes them. Participants were first provoked by reading extremely negative feedback on their viewpoints (Study 1) or by watching anger-inducing movie clips (Study 2); these angry participants were then assigned to three equivalent groups and viewed sad, fearful, or neutral materials, respectively, to evoke the corresponding emotions. Participants showed a lower level of aggressive behavior when sadness was subsequently induced and a higher level of anger when fear was subsequently induced. These results provide evidence for mutual promotion and counteraction relationships among these types of emotion and suggest a cognition-free approach for regulating anger and aggressive behavior.

  6. Emotional Aging: A Discrete Emotions Perspective

    Directory of Open Access Journals (Sweden)

    Ute eKunzmann

    2014-05-01

    Full Text Available Perhaps the most important single finding in the field of emotional aging has been that the overall quality of affective experience steadily improves during adulthood and can be maintained into old age. Recent lifespan developmental theories have provided motivation- and experience-based explanations for this phenomenon. These theories suggest that, as individuals grow older, they become increasingly motivated and able to regulate their emotions, which could result in reduced negativity and enhanced positivity. The objective of this paper is to expand existing theories and empirical research on emotional aging by presenting a discrete emotions perspective. To illustrate the usefulness of this approach, we focus on a discussion of the literature examining age differences in anger and sadness. These two negative emotions have been subsumed under the singular concept of negative affect. From a discrete emotions perspective, however, they are highly distinct. Sadness is elicited by an irreversible loss and associated with low situational control, high goal adjustment tendencies, and the motivation to search for social support. The experience of anger, by contrast, is typically triggered by other individuals who intentio…

  7. The emotional body and time perception.

    Science.gov (United States)

    Droit-Volet, Sylvie; Gil, Sandrine

    2016-01-01

    We examined the effects of emotional bodily expressions on the perception of time. Participants were shown bodily expressions of fear, happiness and sadness in a temporal bisection task featuring different stimulus duration ranges. Stimulus durations were judged to be longer for bodily expressions of fear than for those of sadness, whereas no significant difference was observed between sad and happy postures. In addition, the magnitude of the lengthening effect of fearful versus sad postures increased with duration range. These results suggest that the perception of fearful bodily expressions increases the level of arousal which, in turn, speeds up the internal clock system underlying the representation of time. The effect of bodily expressions on time perception is thus consistent with findings for other highly arousing emotional stimuli, such as emotional facial expressions.
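
    The "internal clock" account cited in the abstract above is usually formalized as a pacemaker-accumulator model: higher arousal speeds the pacemaker, so more pulses accumulate for the same physical duration and the stimulus feels longer. The toy simulation below is a generic scalar-timing illustration with arbitrary rate values, not the authors' model; it also shows why the lengthening effect should grow with the duration range, since a multiplicative change in clock speed produces a pulse-count difference that scales with the interval.

    ```python
    # Toy pacemaker-accumulator illustration of the "arousal speeds up the
    # internal clock" account. Rates and noise level are arbitrary choices.
    import random

    def accumulated_pulses(duration_s: float, rate_hz: float, noise: float = 0.05) -> float:
        """Pulse count accumulated for a stimulus of `duration_s`, with small Gaussian noise."""
        return duration_s * rate_hz * (1 + random.gauss(0, noise))

    random.seed(0)
    baseline_rate = 10.0   # pulses per second in a calm state (arbitrary)
    aroused_rate = 11.5    # slightly faster clock under high arousal (arbitrary)

    for duration in (0.4, 0.8, 1.6):   # physical durations in seconds
        calm = accumulated_pulses(duration, baseline_rate)
        aroused = accumulated_pulses(duration, aroused_rate)
        print(f"{duration:.1f} s stimulus -> calm: {calm:5.1f} pulses, "
              f"aroused: {aroused:5.1f} pulses")
    # The aroused-minus-calm difference grows with stimulus duration, mirroring
    # the duration-dependent lengthening effect described above.
    ```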

  8. The effects of music listening after a stressful task on immune functions, neuroendocrine responses, and emotional states in college students.

    Science.gov (United States)

    Hirokawa, Eri; Ohira, Hideki

    2003-01-01

    The purpose of this study was to examine the effects of listening to high-uplifting or low-uplifting music after a stressful task on (a) immune functions, (b) neuroendocrine responses, and (c) emotional states in college students. Musical selections rated as high-uplifting or low-uplifting by Japanese college students were used as stimuli. Eighteen Japanese subjects performed stressful tasks before each of three experimental conditions: (a) high-uplifting music, (b) low-uplifting music, and (c) silence. Subjects' emotional states, secretory IgA (S-IgA) levels, active natural killer (NK) cell levels, the numbers of CD4+, CD8+, and CD16+ T lymphocytes, and dopamine, norepinephrine, and epinephrine levels were measured before and after each condition. Low-uplifting music showed a trend toward increasing a sense of well-being, whereas high-uplifting music showed trends toward increasing norepinephrine levels and liveliness and decreasing depression. Active NK cells decreased after 20 min of silence. Although the results were inconclusive, high-uplifting and low-uplifting music had different effects on immune, neuroendocrine, and psychological responses. The classification of music is therefore important for research examining its effects on these responses. Recommendations for future research are discussed.

  9. Music Influences Ratings of the Affect of Visual Stimuli

    Directory of Open Access Journals (Sweden)

    Waldie E Hanser

    2013-09-01

    Full Text Available This review provides an overview of recent studies that have examined how music influences the judgment of emotional stimuli, including affective pictures and film clips. The relevant findings are incorporated within a broader theory of music and emotion, and suggestions for future research are offered. Music is important in our daily lives, and one of its primary uses is the active regulation of one's mood. Despite this widespread use as a mood regulator and its general pervasiveness in our society, the number of studies investigating whether, and how, music affects mood and emotional behaviour is limited. Experiments investigating the effects of music have generally focused on how the emotional valence of background music affects the evaluation of affective pictures and/or film clips. These studies have demonstrated strong effects of music on the emotional judgment of such stimuli. Most studies have reported that concurrent background music enhances the emotional valence of pictures when music and pictures are emotionally congruent; when music and pictures are emotionally incongruent, ratings of the pictures' affect increase or decrease depending on the emotional valence of the background music. These results appear to be consistent across studies investigating the effects of (background) music.

  10. Listening and Musical Engagement: An Exploration of the Effects of Different Listening Strategies on Attention, Emotion, and Peak Affective Experiences

    Science.gov (United States)

    Diaz, Frank M.

    2015-01-01

    Music educators often use guided listening strategies as a means of enhancing engagement during music listening activities. Although previous research suggests that these strategies are indeed helpful in facilitating some form of cognitive and emotional engagement, little is known about how these strategies might function for music of differing…

  11. Selective attention to emotional cues and emotion recognition in healthy subjects: the role of mineralocorticoid receptor stimulation.

    Science.gov (United States)

    Schultebraucks, Katharina; Deuter, Christian E; Duesenberg, Moritz; Schulze, Lars; Hellmann-Regen, Julian; Domke, Antonia; Lockenvitz, Lisa; Kuehl, Linn K; Otte, Christian; Wingenfeld, Katja

    2016-09-01

    Selective attention toward emotional cues and emotion recognition of facial expressions are important aspects of social cognition. Stress modulates social cognition through cortisol, which acts on glucocorticoid (GR) and mineralocorticoid receptors (MR) in the brain. We examined the role of MR activation in attentional bias toward emotional cues and in emotion recognition. We included 40 healthy young women and 40 healthy young men (mean age 23.9 ± 3.3 years), who received either 0.4 mg of the MR agonist fludrocortisone or placebo. A dot-probe paradigm was used to test for attentional biases toward emotional cues (happy and sad faces). Moreover, we used a facial emotion recognition task to investigate the ability to recognize emotional valence (anger and sadness) from facial expressions at four graded levels of emotional intensity (20, 30, 40, and 80%). In the emotional dot-probe task, we found a main effect of treatment and a treatment × valence interaction. Post hoc analyses revealed an attentional bias away from sad faces after placebo intake and a shift in selective attention toward sad faces after fludrocortisone compared to placebo. We found no attentional bias toward happy faces after fludrocortisone or placebo intake. In the facial emotion recognition task, there was no main effect of treatment. MR stimulation thus seems to be important in modulating quick, automatic emotional processing, i.e., a shift in selective attention toward negative emotional cues. Our results confirm and extend previous findings on MR function. However, we did not find an effect of MR stimulation on emotion recognition.
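
    For readers unfamiliar with dot-probe scoring, the sketch below computes a conventional attentional-bias index: mean reaction time when the probe replaces the neutral face minus mean reaction time when it replaces the emotional face, so positive values indicate attention drawn toward the emotional cue. This is the generic index commonly used with this paradigm, not necessarily the exact scoring of the study above, and the reaction times are invented.

    ```python
    # Conventional dot-probe attentional-bias score (ms). Positive values =
    # vigilance toward the emotional face; negative values = avoidance.
    from statistics import mean

    def bias_score(rt_probe_at_neutral, rt_probe_at_emotional):
        """Bias toward the emotional face, in milliseconds."""
        return mean(rt_probe_at_neutral) - mean(rt_probe_at_emotional)

    # Hypothetical reaction times (ms) for sad-neutral face pairs.
    placebo_neutral, placebo_sad = [512, 498, 530, 505], [520, 515, 540, 522]
    drug_neutral, drug_sad = [525, 531, 540, 528], [505, 498, 512, 500]

    print("Placebo bias toward sad faces (ms):", round(bias_score(placebo_neutral, placebo_sad), 1))
    print("Fludrocortisone bias toward sad faces (ms):", round(bias_score(drug_neutral, drug_sad), 1))
    # With these toy numbers the score is negative under placebo (attention away
    # from sad faces) and positive under the MR agonist (attention shifted toward
    # sad faces), mirroring the direction of the reported effect.
    ```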

  12. Music close to one's heart: heart rate variability with music, diagnostic with e-bra and smartphone

    Science.gov (United States)

    Hegde, Shantala; Kumar, Prashanth S.; Rai, Pratyush; Mathur, Gyanesh N.; Varadan, Vijay K.

    2012-04-01

    Music is a powerful elicitor of emotions. Emotions evoked by music have been shown, through their autonomic correlates, to cause significant modulation of parameters such as heart rate and blood pressure. Consequently, heart rate variability (HRV) analysis can be a powerful tool for exploring evidence-based therapeutic functions of music and conducting empirical studies on the effect of musical emotion on heart function. However, current studies have limitations: HRV analysis has produced variable results for different emotions evoked via music, owing to variability in methodology and in the nature of the music chosen. Therefore, a pragmatic understanding of the HRV correlates of musical emotion in individuals listening to specifically chosen music while carrying out day-to-day activities is needed. In the present study, we aim to examine HRV as a single case study, using an e-bra with nano-sensors to record heart rate in real time. The e-bra, developed previously, has several salient features that make it well suited to this study: a fully integrated garment, dry electrodes for ease of use, and unrestricted mobility. The study considers two experimental conditions: first, HRV recorded with no music in the background; and second, HRV recorded while music chosen by the researcher and by the subject plays in the background.
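
    As background for the abstract above, the sketch below computes the standard time-domain HRV metrics (SDNN, RMSSD, pNN50) from a series of RR (inter-beat) intervals. The interval values are made up for illustration; in the study they would come from the e-bra's sensors after artifact correction, and the exact metrics reported there may differ.

    ```python
    # Common time-domain HRV metrics from RR intervals in milliseconds.
    import numpy as np

    def hrv_time_domain(rr_ms):
        rr = np.asarray(rr_ms, dtype=float)
        diffs = np.diff(rr)                        # successive differences between beats
        return {
            "mean_hr_bpm": 60000.0 / rr.mean(),    # average heart rate
            "sdnn_ms": rr.std(ddof=1),             # overall variability
            "rmssd_ms": np.sqrt(np.mean(diffs ** 2)),            # short-term (vagally mediated) variability
            "pnn50_pct": 100.0 * np.mean(np.abs(diffs) > 50.0),  # % of successive diffs > 50 ms
        }

    rr_baseline = [812, 790, 845, 830, 805, 860, 795, 820]   # no music (hypothetical)
    rr_music    = [780, 840, 760, 870, 750, 880, 765, 855]   # self/researcher-selected music (hypothetical)

    for label, rr in (("baseline", rr_baseline), ("music", rr_music)):
        metrics = hrv_time_domain(rr)
        print(label, {k: round(v, 1) for k, v in metrics.items()})
    ```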

  13. The Effect of Mozart Music on Child Facial Expression Recognition

    Institute of Scientific and Technical Information of China (English)

    王玲; 赵蕾; 卢英俊

    2012-01-01

    We studied 3-5-year-old children's recognition of facial expressions (happy, sad, and neutral) while they listened to Mozart's music and to music of different arousal levels and emotional types. The results showed that, compared with other high-arousal, positive-emotion music, Mozart's music, with its highly structured and periodic features, actually interfered with the children's facial expression recognition, whereas listening to low-arousal, negative-emotion music helped the children's brains reach an appropriate arousal level and emotional state, thereby facilitating expression recognition.

  14. REM-Enriched Naps Are Associated with Memory Consolidation for Sad Stories and Enhance Mood-Related Reactivity

    Directory of Open Access Journals (Sweden)

    Médhi Gilson

    2015-12-01

    Full Text Available Emerging evidence suggests that emotion and affect modulate the relation between sleep and cognition. In the present study, we investigated the role of rapid eye movement (REM) sleep in mood regulation and memory consolidation for sad stories. In a counterbalanced design, participants (n = 24) listened to either a neutral or a sad story during two sessions, spaced one week apart. After listening to the story, half of the participants had a short (45 min) morning nap; the other half had a long (90 min) morning nap, richer in REM and N2 sleep. Story recall, mood evolution, and changes in emotional response on re-exposure to the story were assessed after the nap. Although recall performance was similar for sad and neutral stories irrespective of nap duration, sleep measures correlated with recall performance in the sad story condition only. After the long nap, REM sleep density correlated positively with retrieval performance, while re-exposure to the sad story led to lowered mood and increased skin conductance levels. Our results suggest that REM sleep may not only be associated with the consolidation of intrinsically sad material but may also enhance mood reactivity, at least in the short term.

  15. Children's understanding of facial expression of emotion: II. Drawing of emotion-faces.

    Science.gov (United States)

    Missaghi-Lakshman, M; Whissell, C

    1991-06-01

    67 children from Grades 2, 4, and 7 drew faces representing the emotional expressions of fear, anger, surprise, disgust, happiness, and sadness. The children themselves and 29 adults later decoded the drawings in an emotion-recognition task. Children were the more accurate decoders, and their accuracy and the accuracy of adults increased significantly for judgments of 7th-grade drawings. The emotions happy and sad were most accurately decoded. There were no significant differences associated with sex. In their drawings, children utilized a symbol system that seems to be based on a highlighting or exaggeration of features of the innately governed facial expression of emotion.

  16. Empathy for positive and negative emotions in social anxiety disorder.

    Science.gov (United States)

    Morrison, Amanda S; Mateen, Maria A; Brozovich, Faith A; Zaki, Jamil; Goldin, Philippe R; Heimberg, Richard G; Gross, James J

    2016-12-01

    Social anxiety disorder (SAD) is associated with elevated negative and diminished positive affective experience. However, little is known about the way in which individuals with SAD perceive and respond emotionally to the naturally-unfolding negative and positive emotions of others, that is, cognitive empathy and affective empathy, respectively. In the present study, participants with generalized SAD (n = 32) and demographically-matched healthy controls (HCs; n = 32) completed a behavioral empathy task. Cognitive empathy was indexed by the correlation between targets' and participants' continuous ratings of targets' emotions, whereas affective empathy was indexed by the correlation between targets' and participants' continuous self-ratings of emotion. Individuals with SAD differed from HCs only in positive affective empathy: they were less able to vicariously share others' positive emotions. Mediation analyses revealed that poor emotional clarity and negative interpersonal perceptions among those with SAD might account for this finding. Future research using experimental methodology is needed to examine whether this finding represents an inability or unwillingness to share positive affect. Copyright © 2016 Elsevier Ltd. All rights reserved.
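
    The abstract above defines both empathy indices as correlations between time-aligned continuous rating streams. The sketch below shows that computation on invented data; the variable names, rating series, and the closing note about Fisher z-transformation are illustrative assumptions, not details taken from the study.

    ```python
    # Correlation-based empathy indices: cognitive empathy = correlation between
    # the target's self-rated emotion and the participant's rating of the target's
    # emotion; affective empathy = correlation between the target's and the
    # participant's own continuous emotion ratings.
    import numpy as np

    def empathy_index(target_series, participant_series) -> float:
        """Pearson correlation between two time-aligned continuous rating streams."""
        return float(np.corrcoef(target_series, participant_series)[0, 1])

    # Hypothetical, time-aligned rating streams (60 samples).
    t = np.linspace(0, 2 * np.pi, 60)
    target_emotion        = np.sin(t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
    participant_inference = np.sin(t - 0.2) + 0.2 * np.random.default_rng(2).normal(size=t.size)
    participant_feeling   = 0.6 * np.sin(t - 0.4) + 0.3 * np.random.default_rng(3).normal(size=t.size)

    print("cognitive empathy:", round(empathy_index(target_emotion, participant_inference), 2))
    print("affective empathy:", round(empathy_index(target_emotion, participant_feeling), 2))
    # In a full analysis, per-trial correlations would typically be Fisher
    # z-transformed before averaging across targets and comparing groups.
    ```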

  17. ERP evidence for own-age effects on late stages of processing sad faces.

    Science.gov (United States)

    Fölster, Mara; Werheid, Katja

    2016-08-01

    Faces convey important information on interaction partners, such as their emotional state and age. Faces of the same age are, according to recent research, preferentially processed. The aim of the present study was to investigate whether the neural processes underlying this own-age effect are influenced by the emotional expression of the face, and to explore possible explanations such as the frequency or quality of contact to own-age versus other-age groups. Event-related potentials were recorded while 19 younger (18-30 years) and 19 older (64-86 years) observers watched younger and older sad and happy faces. Sad but not happy faces elicited higher late positive potential amplitudes for own-age than for other-age faces. This own-age effect was significant for older, but not for younger, observers, and correlated with the quality of contact with the own-age versus the other-age group. This pattern suggests that sad own-age faces are motivationally more relevant.

  18. Music therapy improvisation

    Directory of Open Access Journals (Sweden)

    Mira Kuzma

    2001-09-01

    Full Text Available In this article, a music therapy technique, music therapy improvisation, is introduced. In this form of music therapy the improvising partners share meaning through the improvisation: the improvisation is not an end in itself; it portrays meaning that is personal and complex and can be shared with the partner. The therapeutic work, then, consists of meeting and matching the client's music in order to give the client an experience of "being known", of being responded to through sounds, and of being able to express things and communicate meaningfully. Rather than the client simply playing music, the therapy is about developing engagement through sustained, joint improvisations. In music therapy, music and emotion share fundamental features: one may represent the other, i.e., we hear the music not merely as music but as dynamic emotional states. The concept of dynamic structure explains why music makes therapeutic sense.

  19. Sadness and Depression

    Science.gov (United States)


  20. Social functioning and autonomic nervous system sensitivity across vocal and musical emotion in Williams syndrome and autism spectrum disorder.

    Science.gov (United States)

    Järvinen, Anna; Ng, Rowena; Crivelli, Davide; Neumann, Dirk; Arnold, Andrew J; Woo-VonHoogenstyn, Nicholas; Lai, Philip; Trauner, Doris; Bellugi, Ursula

    2016-01-01

    Both Williams syndrome (WS) and autism spectrum disorders (ASD) are associated with unusual auditory phenotypes with respect to processing vocal and musical stimuli, which may be shaped by the atypical social profiles that characterize the syndromes. Autonomic nervous system (ANS) reactivity to vocal and musical emotional stimuli was examined in 12 children with WS, 17 children with ASD, and 20 typically developing (TD) children, and related to their level of social functioning. In this small-scale study, after controlling for between-group differences in cognitive ability, all groups showed similar emotion identification performance across conditions. Additionally, lower autonomic reactivity to the human voice in ASD, and to musical emotion in TD, was related to more normal social functioning. Compared to the TD group, both clinical groups showed increased arousal to vocalizations. A further result highlighted uniquely increased arousal to music in WS, contrasted with decreased arousal in ASD and TD. The ASD and WS groups exhibited arousal patterns suggestive of diminished habituation to the auditory stimuli. The results are discussed in the context of the clinical presentation of WS and ASD. © 2015 Wiley Periodicals, Inc.