WorldWideScience

Sample records for emotion-based music retrieval

  1. Emotion-based Music Retrieval on a Well-reduced Audio Feature Space

    DEFF Research Database (Denmark)

    Ruxanda, Maria Magdalena; Chua, Bee Yong; Nanopoulos, Alexandros

    2009-01-01

    Music expresses emotion. A number of extracted audio features influence the perceived emotional expression of music. These audio features generate a high-dimensional space, on which music similarity retrieval can be performed effectively with respect to human perception of the music emotion. However, real-time systems that retrieve music over large music databases can achieve an order-of-magnitude performance increase by applying multidimensional indexing over a dimensionally reduced audio feature space. To meet this performance goal, this paper conducts extensive studies on a number of dimensionality reduction algorithms, including both classic and novel approaches. The paper clearly identifies which dimensionality reduction techniques on the considered audio feature space can, on average, preserve the accuracy of emotion-based music retrieval.
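
    The record above describes speeding up emotion-based similarity retrieval by reducing the dimensionality of the audio feature space before indexing it. The sketch below is only a generic illustration of that idea, not the authors' method: it uses scikit-learn's PCA and a k-d tree nearest-neighbour index over a synthetic feature matrix, and the corpus size, feature count, and number of retained components are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

# Hypothetical corpus: 10,000 tracks, each described by 70 audio features
# (e.g., MFCC statistics, spectral and rhythmic descriptors).
rng = np.random.default_rng(0)
features = rng.normal(size=(10_000, 70))

# Reduce the feature space so multidimensional indexing stays efficient,
# ideally preserving the emotion-relevant structure of the original space.
pca = PCA(n_components=10)
reduced = pca.fit_transform(features)

# Index the reduced space and retrieve the tracks most similar to a query.
index = NearestNeighbors(n_neighbors=5, algorithm="kd_tree").fit(reduced)
query = reduced[:1]  # pretend the first track is the query
distances, neighbours = index.kneighbors(query)
print("closest tracks:", neighbours[0])
print("variance kept:", round(pca.explained_variance_ratio_.sum(), 2))
```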

  2. Extraction Of Audio Features For Emotion Recognition System Based On Music

    Directory of Open Access Journals (Sweden)

    Kee Moe Han

    2015-08-01

    Full Text Available Music is the combination of melody, linguistic information and the vocalist's emotion. Since music is a work of art, analyzing emotion in music by computer is a difficult task. Many approaches have been developed to detect the emotions conveyed in music, but the results are not satisfactory because emotion is very complex. In this paper, evaluations of audio features extracted from music files are presented. The extracted features are used to classify the different emotion classes of the vocalists. Musical feature extraction is done using the Music Information Retrieval (MIR) toolbox. A database of 100 music clips is used to classify the emotions perceived in the clips. Music may contain many emotions according to the vocalist's mood, such as happy, sad, nervous, bored, peaceful, etc. In this paper, the audio features related to the emotions of the vocalists are extracted for use in an emotion recognition system based on music.
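
    As a rough sketch of the kind of pipeline this record describes (audio feature extraction followed by emotion classification), the snippet below uses librosa and scikit-learn in place of the MATLAB MIR toolbox mentioned in the abstract; the clip paths, label set, and classifier choice are placeholders, not the authors' setup.

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def extract_features(path):
    """Summarise one clip with a few emotion-relevant audio descriptors."""
    y, sr = librosa.load(path, duration=30.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    return np.hstack([mfcc.mean(axis=1), mfcc.std(axis=1),
                      centroid.mean(), tempo])

# Placeholder 100-clip database with annotated emotion labels (hypothetical paths).
paths = [f"clips/{i:03d}.wav" for i in range(100)]
labels = ["happy", "sad", "nervous", "bored", "peaceful"] * 20

X = np.vstack([extract_features(p) for p in paths])
scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=10)
print("10-fold accuracy: %.2f" % scores.mean())
```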

  3. Audio-based deep music emotion recognition

    Science.gov (United States)

    Liu, Tong; Han, Li; Ma, Liangkai; Guo, Dongwei

    2018-05-01

    With the rapid development of multimedia networking, more and more songs are issued through the Internet and stored in large digital music libraries. However, music information retrieval over these libraries can be hard, and the recognition of musical emotion is especially challenging. In this paper, we report a strategy to recognize the emotion contained in songs by classifying their spectrograms, which contain both time and frequency information, with a convolutional neural network (CNN). The experiments conducted on the 1000-song dataset indicate that the proposed model outperforms traditional machine learning methods.
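
    A minimal sketch of the general approach described above: a small convolutional network that classifies spectrogram "images" into emotion classes. The architecture, input shape, and number of classes are assumptions for illustration and are not the network reported in the paper.

```python
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    """Small CNN mapping a mel-spectrogram 'image' to emotion-class logits."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.fc = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):  # x: (batch, 1, n_mels, n_frames)
        return self.fc(self.conv(x).flatten(1))

# One dummy training step on a batch of 8 spectrograms (128 mel bands x 256 frames).
model = SpectrogramCNN(n_classes=4)
spectrograms = torch.randn(8, 1, 128, 256)
targets = torch.randint(0, 4, (8,))
loss = nn.CrossEntropyLoss()(model(spectrograms), targets)
loss.backward()
print("loss:", float(loss))
```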

  4. Music, memory and emotion.

    Science.gov (United States)

    Jäncke, Lutz

    2008-08-08

    Because emotions enhance memory processes and music evokes strong emotions, music could be involved in forming memories, either about pieces of music or about episodes and information associated with particular music. A recent study in BMC Neuroscience has given new insights into the role of emotion in musical memory.

  5. Music, memory and emotion

    Science.gov (United States)

    Jäncke, Lutz

    2008-01-01

    Because emotions enhance memory processes and music evokes strong emotions, music could be involved in forming memories, either about pieces of music or about episodes and information associated with particular music. A recent study in BMC Neuroscience has given new insights into the role of emotion in musical memory. PMID:18710596

  6. Musical emotions: Functions, origins, evolution

    Science.gov (United States)

    Perlovsky, Leonid

    2010-03-01

    Theories of music origins and the role of musical emotions in the mind are reviewed. Most existing theories contradict each other, and cannot explain mechanisms or roles of musical emotions in workings of the mind, nor evolutionary reasons for music origins. Music seems to be an enigma. Nevertheless, a synthesis of cognitive science and mathematical models of the mind has been proposed describing a fundamental role of music in the functioning and evolution of the mind, consciousness, and cultures. The review considers ancient theories of music as well as contemporary theories advanced by leading authors in this field. It addresses one hypothesis that promises to unify the field and proposes a theory of musical origin based on a fundamental role of music in cognition and evolution of consciousness and culture. We consider a split in the vocalizations of proto-humans into two types: one less emotional and more concretely-semantic, evolving into language, and the other preserving emotional connections along with semantic ambiguity, evolving into music. The proposed hypothesis departs from other theories in considering specific mechanisms of the mind-brain, which required the evolution of music parallel with the evolution of cultures and languages. Arguments are reviewed that the evolution of language toward becoming the semantically powerful tool of today required emancipation from emotional encumbrances. The opposite, no less powerful mechanisms required a compensatory evolution of music toward more differentiated and refined emotionality. The need for refined music in the process of cultural evolution is grounded in fundamental mechanisms of the mind. This is why today's human mind and cultures cannot exist without today's music. The reviewed hypothesis gives a basis for future analysis of why different evolutionary paths of languages were paralleled by different evolutionary paths of music. Approaches toward experimental verification of this hypothesis in

  7. Sad music induces pleasant emotion.

    Science.gov (United States)

    Kawakami, Ai; Furukawa, Kiyoshi; Katahira, Kentaro; Okanoya, Kazuo

    2013-01-01

    In general, sad music is thought to cause us to experience sadness, which is considered an unpleasant emotion. As a result, the question arises as to why we listen to sad music if it evokes sadness. One possible answer to this question is that we may actually feel positive emotions when we listen to sad music. This suggestion may appear to be counterintuitive; however, in this study, by dividing musical emotion into perceived emotion and felt emotion, we investigated this potential emotional response to music. We hypothesized that felt and perceived emotion may not actually coincide in this respect: sad music would be perceived as sad, but the experience of listening to sad music would evoke positive emotions. A total of 44 participants listened to musical excerpts and provided data on perceived and felt emotions by rating 62 descriptive words or phrases related to emotions on a scale that ranged from 0 (not at all) to 4 (very much). The results revealed that the sad music was perceived to be more tragic, whereas the actual experiences of the participants listening to the sad music induced them to feel more romantic, more blithe, and less tragic emotions than they actually perceived with respect to the same music. Thus, the participants experienced ambivalent emotions when they listened to the sad music. After considering the possible reasons that listeners were induced to experience emotional ambivalence by the sad music, we concluded that the formulation of a new model would be essential for examining the emotions induced by music and that this new model must entertain the possibility that what we experience when listening to music is vicarious emotion.

  8. Sad music induces pleasant emotion

    Science.gov (United States)

    Kawakami, Ai; Furukawa, Kiyoshi; Katahira, Kentaro; Okanoya, Kazuo

    2013-01-01

    In general, sad music is thought to cause us to experience sadness, which is considered an unpleasant emotion. As a result, the question arises as to why we listen to sad music if it evokes sadness. One possible answer to this question is that we may actually feel positive emotions when we listen to sad music. This suggestion may appear to be counterintuitive; however, in this study, by dividing musical emotion into perceived emotion and felt emotion, we investigated this potential emotional response to music. We hypothesized that felt and perceived emotion may not actually coincide in this respect: sad music would be perceived as sad, but the experience of listening to sad music would evoke positive emotions. A total of 44 participants listened to musical excerpts and provided data on perceived and felt emotions by rating 62 descriptive words or phrases related to emotions on a scale that ranged from 0 (not at all) to 4 (very much). The results revealed that the sad music was perceived to be more tragic, whereas the actual experiences of the participants listening to the sad music induced them to feel more romantic, more blithe, and less tragic emotions than they actually perceived with respect to the same music. Thus, the participants experienced ambivalent emotions when they listened to the sad music. After considering the possible reasons that listeners were induced to experience emotional ambivalence by the sad music, we concluded that the formulation of a new model would be essential for examining the emotions induced by music and that this new model must entertain the possibility that what we experience when listening to music is vicarious emotion. PMID:23785342

  9. Studying emotion induced by music through a crowdsourcing game

    NARCIS (Netherlands)

    Aljanaki, A.; Wiering, F.; Veltkamp, R.C.

    One of the major reasons why people find music so enjoyable is its emotional impact. Creating emotion-based playlists is a natural way of organizing music. The usability of online music streaming services could be greatly improved by developing emotion-based access methods, and automatic music

  10. Music, memory and emotion

    OpenAIRE

    Jäncke, Lutz

    2008-01-01

    Because emotions enhance memory processes and music evokes strong emotions, music could be involved in forming memories, either about pieces of music or about episodes and information associated with particular music. A recent study in BMC Neuroscience has given new insights into the role of emotion in musical memory. Music has a prominent role in the everyday life of many people. Whether it is for recreation, distraction or mood enhancement, a lot of people listen to music from early in t...

  11. Modeling listeners' emotional response to music.

    Science.gov (United States)

    Eerola, Tuomas

    2012-10-01

    An overview of the computational prediction of emotional responses to music is presented. Communication of emotions by music has received a great deal of attention during the last years and a large number of empirical studies have described the role of individual features (tempo, mode, articulation, timbre) in predicting the emotions suggested or invoked by the music. However, unlike the present work, relatively few studies have attempted to model continua of expressed emotions using a variety of musical features from audio-based representations in a correlation design. The construction of the computational model is divided into four separate phases, with a different focus for evaluation. These phases include the theoretical selection of relevant features, empirical assessment of feature validity, actual feature selection, and overall evaluation of the model. Existing research on music and emotions and extraction of musical features is reviewed in terms of these criteria. Examples drawn from recent studies of emotions within the context of film soundtracks are used to demonstrate each phase in the construction of the model. These models are able to explain the dominant part of the listeners' self-reports of the emotions expressed by music and the models show potential to generalize over different genres within Western music. Possible applications of the computational models of emotions are discussed. Copyright © 2012 Cognitive Science Society, Inc.
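
    The model-building pipeline described above (select candidate features, validate them, then fit and evaluate a model of continuous emotion ratings) can be illustrated with a simple regression sketch. The snippet below is a generic stand-in, not Eerola's model: the feature names, data, and regressor are assumed.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 110 soundtrack excerpts described by 6 candidate features
# (assumed names: tempo, mode, articulation, brightness, energy, roughness).
rng = np.random.default_rng(1)
X = rng.normal(size=(110, 6))
true_weights = np.array([0.2, 0.6, 0.1, 0.3, 0.0, -0.4])
valence = X @ true_weights + rng.normal(0, 0.5, size=110)  # simulated ratings

# Final phases of the pipeline: fit the model and evaluate how much of the
# listeners' ratings it explains.
r2 = cross_val_score(Ridge(alpha=1.0), X, valence, cv=10, scoring="r2")
print("mean cross-validated R^2: %.2f" % r2.mean())
```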

  12. Emotional response to musical repetition.

    Science.gov (United States)

    Livingstone, Steven R; Palmer, Caroline; Schubert, Emery

    2012-06-01

    Two experiments examined the effects of repetition on listeners' emotional response to music. Listeners heard recordings of orchestral music that contained a large section repeated twice. The music had a symmetric phrase structure (same-length phrases) in Experiment 1 and an asymmetric phrase structure (different-length phrases) in Experiment 2, hypothesized to alter the predictability of sensitivity to musical repetition. Continuous measures of arousal and valence were compared across music that contained identical repetition, variation (related), or contrasting (unrelated) structure. Listeners' emotional arousal ratings differed most for contrasting music, moderately for variations, and least for repeating musical segments. A computational model for the detection of repeated musical segments was applied to the listeners' emotional responses. The model detected the locations of phrase boundaries from the emotional responses better than from performed tempo or physical intensity in both experiments. These findings indicate the importance of repetition in listeners' emotional response to music and in the perceptual segmentation of musical structure.
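
    The abstract mentions a computational model that detects repeated musical segments from listeners' continuous emotional responses. A generic way to expose repetition in a time series, not the authors' model, is a windowed self-similarity analysis, sketched below on synthetic "arousal" data.

```python
import numpy as np

def self_similarity(series, window=20):
    """Correlation between every pair of fixed-length windows of a time series;
    high off-diagonal values suggest repeated segments."""
    windows = np.array([series[i:i + window]
                        for i in range(len(series) - window)])
    windows = windows - windows.mean(axis=1, keepdims=True)
    windows /= np.linalg.norm(windows, axis=1, keepdims=True) + 1e-9
    return windows @ windows.T

# Toy continuous response containing one exact repetition of a 100-sample motif.
rng = np.random.default_rng(2)
motif = rng.normal(size=100)
response = np.concatenate([motif, rng.normal(size=50), motif])

S = self_similarity(response)
# Mask near-diagonal entries: overlapping windows trivially match themselves.
n = len(S)
ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
i, j = np.unravel_index(np.argmax(np.where(abs(ii - jj) < 20, -np.inf, S)), S.shape)
print(f"most similar windows start at samples {i} and {j}")  # 150-sample lag
```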

  13. Acoustic Constraints and Musical Consequences: Exploring Composers' Use of Cues for Musical Emotion.

    Science.gov (United States)

    Schutz, Michael

    2017-01-01

    Emotional communication in music is based in part on the use of pitch and timing, two cues effective in emotional speech. Corpus analyses of natural speech illustrate that happy utterances tend to be higher and faster than sad. Although manipulations altering melodies show that passages changed to be higher and faster sound happier, corpus analyses of unaltered music paralleling those of natural speech have proven challenging. This partly reflects the importance of modality (i.e., major/minor), a powerful musical cue whose use is decidedly imbalanced in Western music. This imbalance poses challenges for creating musical corpora analogous to existing speech corpora for purposes of analyzing emotion. However, a novel examination of music by Bach and Chopin balanced in modality illustrates that, consistent with predictions from speech, their major key (nominally "happy") pieces are approximately a major second higher and 29% faster than their minor key pieces (Poon and Schutz, 2015). Although this provides useful evidence for parallels in use of emotional cues between these domains, it raises questions about how composers "trade off" cue differentiation in music, suggesting interesting new potential research directions. This Focused Review places those results in a broader context, highlighting their connections with previous work on the natural use of cues for musical emotion. Together, these observational findings based on unaltered music-widely recognized for its artistic significance-complement previous experimental work systematically manipulating specific parameters. In doing so, they also provide a useful musical counterpart to fruitful studies of the acoustic cues for emotion found in natural speech.
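
    The corpus comparison described above (major-key pieces higher in pitch and faster than minor-key pieces) amounts to grouping pieces by mode and comparing means. The toy table below illustrates that computation only; the numbers are invented and are not Poon and Schutz's data.

```python
import pandas as pd

# Toy corpus table in the spirit of the analysis described above; the values
# are invented, not the Bach/Chopin measurements.
corpus = pd.DataFrame({
    "mode": ["major", "minor", "major", "minor", "major", "minor"],
    "mean_pitch_midi": [67.0, 64.8, 69.1, 67.2, 66.5, 64.9],
    "tempo_bpm": [112, 84, 126, 96, 104, 88],
})

summary = corpus.groupby("mode").mean()
pitch_diff = (summary.loc["major", "mean_pitch_midi"]
              - summary.loc["minor", "mean_pitch_midi"])
tempo_ratio = summary.loc["major", "tempo_bpm"] / summary.loc["minor", "tempo_bpm"]
print(summary)
print(f"major pieces: {pitch_diff:.1f} semitones higher, "
      f"{100 * (tempo_ratio - 1):.0f}% faster than minor pieces")
```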

  14. Music Communicates Affects, Not Basic Emotions - A Constructionist Account of Attribution of Emotional Meanings to Music.

    Science.gov (United States)

    Cespedes-Guevara, Julian; Eerola, Tuomas

    2018-01-01

    Basic Emotion theory has had a tremendous influence on the affective sciences, including music psychology, where most researchers have assumed that music expressivity is constrained to a limited set of basic emotions. Several scholars suggested that these constraints on musical expressivity are explained by the existence of a shared acoustic code for the expression of emotions in music and speech prosody. In this article we advocate for a shift from this focus on basic emotions to a constructionist account. This approach proposes that the phenomenon of perception of emotions in music arises from the interaction of music's ability to express core affects and the influence of top-down and contextual information in the listener's mind. We start by reviewing the problems with the concept of Basic Emotions, and the inconsistent evidence that supports it. We also demonstrate how decades of developmental and cross-cultural research on music and emotional speech have failed to produce convincing findings to conclude that music expressivity is built upon a set of biologically pre-determined basic emotions. We then examine the cue-emotion consistencies between music and speech, and show how they support a parsimonious explanation, where musical expressivity is grounded on two dimensions of core affect (arousal and valence). Next, we explain how the fact that listeners reliably identify basic emotions in music does not arise from the existence of categorical boundaries in the stimuli, but from processes that facilitate categorical perception, such as using stereotyped stimuli and close-ended response formats, psychological processes of construction of mental prototypes, and contextual information. Finally, we outline our proposal of a constructionist account of perception of emotions in music, and spell out the ways in which this approach is able to resolve past conflicting findings. We conclude by providing explicit pointers about the methodological choices that will be

  15. Acoustic Constraints and Musical Consequences: Exploring Composers' Use of Cues for Musical Emotion

    Science.gov (United States)

    Schutz, Michael

    2017-01-01

    Emotional communication in music is based in part on the use of pitch and timing, two cues effective in emotional speech. Corpus analyses of natural speech illustrate that happy utterances tend to be higher and faster than sad. Although manipulations altering melodies show that passages changed to be higher and faster sound happier, corpus analyses of unaltered music paralleling those of natural speech have proven challenging. This partly reflects the importance of modality (i.e., major/minor), a powerful musical cue whose use is decidedly imbalanced in Western music. This imbalance poses challenges for creating musical corpora analogous to existing speech corpora for purposes of analyzing emotion. However, a novel examination of music by Bach and Chopin balanced in modality illustrates that, consistent with predictions from speech, their major key (nominally “happy”) pieces are approximately a major second higher and 29% faster than their minor key pieces (Poon and Schutz, 2015). Although this provides useful evidence for parallels in use of emotional cues between these domains, it raises questions about how composers “trade off” cue differentiation in music, suggesting interesting new potential research directions. This Focused Review places those results in a broader context, highlighting their connections with previous work on the natural use of cues for musical emotion. Together, these observational findings based on unaltered music—widely recognized for its artistic significance—complement previous experimental work systematically manipulating specific parameters. In doing so, they also provide a useful musical counterpart to fruitful studies of the acoustic cues for emotion found in natural speech. PMID:29249997

  16. Acoustic Constraints and Musical Consequences: Exploring Composers' Use of Cues for Musical Emotion

    Directory of Open Access Journals (Sweden)

    Michael Schutz

    2017-11-01

    Full Text Available Emotional communication in music is based in part on the use of pitch and timing, two cues effective in emotional speech. Corpus analyses of natural speech illustrate that happy utterances tend to be higher and faster than sad. Although manipulations altering melodies show that passages changed to be higher and faster sound happier, corpus analyses of unaltered music paralleling those of natural speech have proven challenging. This partly reflects the importance of modality (i.e., major/minor), a powerful musical cue whose use is decidedly imbalanced in Western music. This imbalance poses challenges for creating musical corpora analogous to existing speech corpora for purposes of analyzing emotion. However, a novel examination of music by Bach and Chopin balanced in modality illustrates that, consistent with predictions from speech, their major key (nominally “happy”) pieces are approximately a major second higher and 29% faster than their minor key pieces (Poon and Schutz, 2015). Although this provides useful evidence for parallels in use of emotional cues between these domains, it raises questions about how composers “trade off” cue differentiation in music, suggesting interesting new potential research directions. This Focused Review places those results in a broader context, highlighting their connections with previous work on the natural use of cues for musical emotion. Together, these observational findings based on unaltered music—widely recognized for its artistic significance—complement previous experimental work systematically manipulating specific parameters. In doing so, they also provide a useful musical counterpart to fruitful studies of the acoustic cues for emotion found in natural speech.

  17. Musical anhedonia: selective loss of emotional experience in listening to music.

    Science.gov (United States)

    Satoh, Masayuki; Nakase, Taizen; Nagata, Ken; Tomimoto, Hidekazu

    2011-10-01

    Recent case studies have suggested that emotion perception and emotional experience of music have independent cognitive processing. We report a patient who showed selective impairment of emotional experience only in listening to music, that is, musical anhedonia. A 71-year-old right-handed man developed an infarction in the right parietal lobe. He found himself unable to experience emotion in listening to music, even to music to which he had listened with pleasure before the illness. In neuropsychological assessments, his intellectual, memory, and constructional abilities were normal. Speech audiometry and recognition of environmental sounds were within normal limits. Neuromusicological assessments revealed no abnormality in the perception of elementary components of music, or in expression and emotion perception of music. Brain MRI identified the infarct lesion in the right inferior parietal lobule. These findings suggest that emotional experience of music could be selectively impaired without any disturbance of other musical or neuropsychological abilities. The right parietal lobe might participate in emotional experience in listening to music.

  18. Fusion of Electroencephalogram dynamics and musical contents for estimating emotional responses in music listening

    Directory of Open Access Journals (Sweden)

    Yuan-Pin eLin

    2014-05-01

    Full Text Available Electroencephalography (EEG)-based emotion classification during music listening has gained increasing attention nowadays due to its promise of potential applications such as musical affective brain-computer interface (ABCI), neuromarketing, music therapy, and implicit multimedia tagging and triggering. However, music is an ecologically valid and complex stimulus that conveys certain emotions to listeners through compositions of musical elements. Using solely EEG signals to distinguish emotions remained challenging. This study aimed to assess the applicability of a multimodal approach by leveraging the EEG dynamics and acoustic characteristics of musical contents for the classification of emotional valence and arousal. To this end, this study adopted machine-learning methods to systematically elucidate the roles of the EEG and music modalities in the emotion modeling. The empirical results suggested that when whole-head EEG signals were available, the inclusion of musical contents did not improve the classification performance. The obtained performance of 74~76% using solely the EEG modality was statistically comparable to that using the multimodal approach. However, if EEG dynamics were only available from a small set of electrodes (likely the case in real-life applications), the music modality would play a complementary role and augment the EEG results from around 61% to 67% in valence classification and from around 58% to 67% in arousal classification. The musical timbre appeared to replace less-discriminative EEG features and led to improvements in both valence and arousal classification, whereas musical loudness contributed specifically to the arousal classification. The present study not only provided principles for constructing an EEG-based multimodal approach, but also revealed fundamental insights into the interplay of brain activity and musical contents in emotion modeling.

  19. Fusion of electroencephalographic dynamics and musical contents for estimating emotional responses in music listening.

    Science.gov (United States)

    Lin, Yuan-Pin; Yang, Yi-Hsuan; Jung, Tzyy-Ping

    2014-01-01

    Electroencephalography (EEG)-based emotion classification during music listening has gained increasing attention nowadays due to its promise of potential applications such as musical affective brain-computer interface (ABCI), neuromarketing, music therapy, and implicit multimedia tagging and triggering. However, music is an ecologically valid and complex stimulus that conveys certain emotions to listeners through compositions of musical elements. Using solely EEG signals to distinguish emotions remained challenging. This study aimed to assess the applicability of a multimodal approach by leveraging the EEG dynamics and acoustic characteristics of musical contents for the classification of emotional valence and arousal. To this end, this study adopted machine-learning methods to systematically elucidate the roles of the EEG and music modalities in the emotion modeling. The empirical results suggested that when whole-head EEG signals were available, the inclusion of musical contents did not improve the classification performance. The obtained performance of 74~76% using solely the EEG modality was statistically comparable to that using the multimodal approach. However, if EEG dynamics were only available from a small set of electrodes (likely the case in real-life applications), the music modality would play a complementary role and augment the EEG results from around 61% to 67% in valence classification and from around 58% to 67% in arousal classification. The musical timbre appeared to replace less-discriminative EEG features and led to improvements in both valence and arousal classification, whereas musical loudness contributed specifically to the arousal classification. The present study not only provided principles for constructing an EEG-based multimodal approach, but also revealed fundamental insights into the interplay of brain activity and musical contents in emotion modeling.
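
    A bare-bones sketch of the multimodal idea in this record: concatenating music features with EEG features and comparing classification accuracy against EEG alone. The data are synthetic and the feature dimensions, classifier, and effect sizes are assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins: per-trial EEG features (e.g., band power from a few
# electrodes) and per-excerpt music features (e.g., timbre, loudness statistics).
rng = np.random.default_rng(3)
n_trials = 200
valence = rng.integers(0, 2, n_trials)                     # binary valence labels
eeg = rng.normal(size=(n_trials, 8)) + 0.3 * valence[:, None]
music = rng.normal(size=(n_trials, 5)) + 0.5 * valence[:, None]

def accuracy(X, y):
    return cross_val_score(SVC(kernel="rbf"), X, y, cv=10).mean()

print("EEG only   : %.2f" % accuracy(eeg, valence))
print("EEG + music: %.2f" % accuracy(np.hstack([eeg, music]), valence))
```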

  20. Brain correlates of music-evoked emotions.

    Science.gov (United States)

    Koelsch, Stefan

    2014-03-01

    Music is a universal feature of human societies, partly owing to its power to evoke strong emotions and influence moods. During the past decade, the investigation of the neural correlates of music-evoked emotions has been invaluable for the understanding of human emotion. Functional neuroimaging studies on music and emotion show that music can modulate activity in brain structures that are known to be crucially involved in emotion, such as the amygdala, nucleus accumbens, hypothalamus, hippocampus, insula, cingulate cortex and orbitofrontal cortex. The potential of music to modulate activity in these structures has important implications for the use of music in the treatment of psychiatric and neurological disorders.

  1. Emotion detection model of Filipino music

    Science.gov (United States)

    Noblejas, Kathleen Alexis; Isidro, Daryl Arvin; Samonte, Mary Jane C.

    2017-02-01

    This research explored the creation of a model to detect emotion from Filipino songs. The emotion model used was based on Paul Ekman's six basic emotions. The songs were classified into the following genres: kundiman, novelty, pop, and rock. The songs were annotated by a group of music experts based on the emotion the song induces in the listener. Musical features of the songs were extracted using jAudio, while the lyric features were extracted with a Bag-of-Words feature representation. The audio and lyric features of the Filipino songs were extracted for classification by the three chosen classifiers: Naïve Bayes, Support Vector Machines, and k-Nearest Neighbors. The goal of the research was to determine which classifier would work best for Filipino music. Evaluation was done by 10-fold cross validation, and accuracy, precision, recall, and F-measure results were compared. Models were also tested with unknown test data to further determine the models' accuracy through the prediction results.
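
    The evaluation described above (three classifiers compared with 10-fold cross-validation on combined audio and lyric features) can be sketched with scikit-learn as below; the feature matrix and labels are synthetic stand-ins, not the Filipino song dataset.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for combined audio (jAudio-style) and bag-of-words lyric
# features, with six emotion labels in the spirit of Ekman's basic emotions.
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 40))
y = rng.integers(0, 6, 300)
X[np.arange(300), y] += 2.0  # inject a weak class-dependent signal

classifiers = {
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="linear"),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name:12s} 10-fold accuracy: {acc:.2f}")
```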

  2. Music Communicates Affects, Not Basic Emotions – A Constructionist Account of Attribution of Emotional Meanings to Music

    Directory of Open Access Journals (Sweden)

    Julian Cespedes-Guevara

    2018-02-01

    Full Text Available Basic Emotion theory has had a tremendous influence on the affective sciences, including music psychology, where most researchers have assumed that music expressivity is constrained to a limited set of basic emotions. Several scholars suggested that these constraints on musical expressivity are explained by the existence of a shared acoustic code for the expression of emotions in music and speech prosody. In this article we advocate for a shift from this focus on basic emotions to a constructionist account. This approach proposes that the phenomenon of perception of emotions in music arises from the interaction of music’s ability to express core affects and the influence of top-down and contextual information in the listener’s mind. We start by reviewing the problems with the concept of Basic Emotions, and the inconsistent evidence that supports it. We also demonstrate how decades of developmental and cross-cultural research on music and emotional speech have failed to produce convincing findings to conclude that music expressivity is built upon a set of biologically pre-determined basic emotions. We then examine the cue-emotion consistencies between music and speech, and show how they support a parsimonious explanation, where musical expressivity is grounded on two dimensions of core affect (arousal and valence). Next, we explain how the fact that listeners reliably identify basic emotions in music does not arise from the existence of categorical boundaries in the stimuli, but from processes that facilitate categorical perception, such as using stereotyped stimuli and close-ended response formats, psychological processes of construction of mental prototypes, and contextual information. Finally, we outline our proposal of a constructionist account of perception of emotions in music, and spell out the ways in which this approach is able to resolve past conflicting findings. We conclude by providing explicit pointers about the methodological

  3. Music Communicates Affects, Not Basic Emotions – A Constructionist Account of Attribution of Emotional Meanings to Music

    Science.gov (United States)

    Cespedes-Guevara, Julian; Eerola, Tuomas

    2018-01-01

    Basic Emotion theory has had a tremendous influence on the affective sciences, including music psychology, where most researchers have assumed that music expressivity is constrained to a limited set of basic emotions. Several scholars suggested that these constraints on musical expressivity are explained by the existence of a shared acoustic code for the expression of emotions in music and speech prosody. In this article we advocate for a shift from this focus on basic emotions to a constructionist account. This approach proposes that the phenomenon of perception of emotions in music arises from the interaction of music’s ability to express core affects and the influence of top-down and contextual information in the listener’s mind. We start by reviewing the problems with the concept of Basic Emotions, and the inconsistent evidence that supports it. We also demonstrate how decades of developmental and cross-cultural research on music and emotional speech have failed to produce convincing findings to conclude that music expressivity is built upon a set of biologically pre-determined basic emotions. We then examine the cue-emotion consistencies between music and speech, and show how they support a parsimonious explanation, where musical expressivity is grounded on two dimensions of core affect (arousal and valence). Next, we explain how the fact that listeners reliably identify basic emotions in music does not arise from the existence of categorical boundaries in the stimuli, but from processes that facilitate categorical perception, such as using stereotyped stimuli and close-ended response formats, psychological processes of construction of mental prototypes, and contextual information. Finally, we outline our proposal of a constructionist account of perception of emotions in music, and spell out the ways in which this approach is able to resolve past conflicting findings. We conclude by providing explicit pointers about the methodological choices that will be

  4. Mapping aesthetic musical emotions in the brain

    OpenAIRE

    Trost, Johanna Wiebke; Ethofer, Thomas Stefan; Zentner, Marcel Robert; Vuilleumier, Patrik

    2012-01-01

    Music evokes complex emotions beyond pleasant/unpleasant or happy/sad dichotomies usually investigated in neuroscience. Here, we used functional neuroimaging with parametric analyses based on the intensity of felt emotions to explore a wider spectrum of affective responses reported during music listening. Positive emotions correlated with activation of left striatum and insula when high-arousing (Wonder, Joy) but right striatum and orbitofrontal cortex when low-arousing (Nostalgia, Tenderness...

  5. Multimodal Detection of Music Performances for Intelligent Emotion Based Lighting

    DEFF Research Database (Denmark)

    Bonde, Esben Oxholm Skjødt; Hansen, Ellen Kathrine; Triantafyllidis, Georgios

    2016-01-01

    Playing music is about conveying emotions, and the lighting at a concert can help do that. However, new and unknown bands that play at smaller venues, and bands that don't have the budget to hire a dedicated light technician, have to miss out on lighting that will help them convey the emotions of what they play. In this paper it is investigated whether it is possible to develop an intelligent system that, through multimodal input, detects the intended emotions of the played music and in real time adjusts the lighting accordingly. A concept for such an intelligent lighting system is developed and described. Through existing research on music and emotion, as well as on musicians' body movements related to the emotion they want to convey, a set of cues is defined. This includes amount, speed, fluency and regularity for the visual, and level, tempo, articulation and timbre for the auditory...

  6. Expression of emotion in Eastern and Western music mirrors vocalization.

    Science.gov (United States)

    Bowling, Daniel Liu; Sundararajan, Janani; Han, Shui'er; Purves, Dale

    2012-01-01

    In Western music, the major mode is typically used to convey excited, happy, bright or martial emotions, whereas the minor mode typically conveys subdued, sad or dark emotions. Recent studies indicate that the differences between these modes parallel differences between the prosodic and spectral characteristics of voiced speech sounds uttered in corresponding emotional states. Here we ask whether tonality and emotion are similarly linked in an Eastern musical tradition. The results show that the tonal relationships used to express positive/excited and negative/subdued emotions in classical South Indian music are much the same as those used in Western music. Moreover, tonal variations in the prosody of English and Tamil speech uttered in different emotional states are parallel to the tonal trends in music. These results are consistent with the hypothesis that the association between musical tonality and emotion is based on universal vocal characteristics of different affective states.

  7. Unforgettable film music: The role of emotion in episodic long-term memory for music

    OpenAIRE

    Eschrich, Susann; Münte, Thomas F; Altenmüller, Eckart O

    2008-01-01

    Abstract Background Specific pieces of music can elicit strong emotions in listeners and, possibly in connection with these emotions, can be remembered even years later. However, episodic memory for emotional music compared with less emotional music has not yet been examined. We investigated whether emotional music is remembered better than less emotional music. Also, we examined the influence of musical structure on memory performance. Results Recognition of 40 musical excerpts was investiga...

  8. Mapping aesthetic musical emotions in the brain.

    Science.gov (United States)

    Trost, Wiebke; Ethofer, Thomas; Zentner, Marcel; Vuilleumier, Patrik

    2012-12-01

    Music evokes complex emotions beyond pleasant/unpleasant or happy/sad dichotomies usually investigated in neuroscience. Here, we used functional neuroimaging with parametric analyses based on the intensity of felt emotions to explore a wider spectrum of affective responses reported during music listening. Positive emotions correlated with activation of left striatum and insula when high-arousing (Wonder, Joy) but right striatum and orbitofrontal cortex when low-arousing (Nostalgia, Tenderness). Irrespective of their positive/negative valence, high-arousal emotions (Tension, Power, and Joy) also correlated with activations in sensory and motor areas, whereas low-arousal categories (Peacefulness, Nostalgia, and Sadness) selectively engaged ventromedial prefrontal cortex and hippocampus. The right parahippocampal cortex activated in all but positive high-arousal conditions. Results also suggested some blends between activation patterns associated with different classes of emotions, particularly for feelings of Wonder or Transcendence. These data reveal a differentiated recruitment across emotions of networks involved in reward, memory, self-reflective, and sensorimotor processes, which may account for the unique richness of musical emotions.

  9. Predicting the emotions expressed in music

    DEFF Research Database (Denmark)

    Madsen, Jens

    With the ever-growing popularity and availability of digital music through streaming services and digital download, making sense of the millions of songs is ever more pertinent. However, the traditional approach to creating music systems has treated songs like items in a store, like books and movies. Music is special, having origins in a number of evolutionary adaptations. The fundamental needs and goals of a user's use of music were investigated to create the next generation of music systems. Regulating mood and emotions was found to be the most important fundamental reason people listen to music. (Mis)matching people's mood with the emotions expressed in music was found to be an essential underlying mechanism that people use to regulate their emotions. This formed the basis and overall goal of the thesis: to investigate how to create a predictive model of emotions expressed in music...

  10. Unforgettable film music: the role of emotion in episodic long-term memory for music.

    Science.gov (United States)

    Eschrich, Susann; Münte, Thomas F; Altenmüller, Eckart O

    2008-05-28

    Specific pieces of music can elicit strong emotions in listeners and, possibly in connection with these emotions, can be remembered even years later. However, episodic memory for emotional music compared with less emotional music has not yet been examined. We investigated whether emotional music is remembered better than less emotional music. Also, we examined the influence of musical structure on memory performance. Recognition of 40 musical excerpts was investigated as a function of arousal, valence, and emotional intensity ratings of the music. In the first session the participants judged valence and arousal of the musical pieces. One week later, participants listened to the 40 old and 40 new musical excerpts randomly interspersed and were asked to make an old/new decision as well as to indicate arousal and valence of the pieces. Musical pieces that were rated as very positive were recognized significantly better. Musical excerpts rated as very positive are remembered better. Valence seems to be an important modulator of episodic long-term memory for music. Evidently, strong emotions related to the musical experience facilitate memory formation and retrieval.

  11. Expression of emotion in Eastern and Western music mirrors vocalization.

    Directory of Open Access Journals (Sweden)

    Daniel Liu Bowling

    Full Text Available In Western music, the major mode is typically used to convey excited, happy, bright or martial emotions, whereas the minor mode typically conveys subdued, sad or dark emotions. Recent studies indicate that the differences between these modes parallel differences between the prosodic and spectral characteristics of voiced speech sounds uttered in corresponding emotional states. Here we ask whether tonality and emotion are similarly linked in an Eastern musical tradition. The results show that the tonal relationships used to express positive/excited and negative/subdued emotions in classical South Indian music are much the same as those used in Western music. Moreover, tonal variations in the prosody of English and Tamil speech uttered in different emotional states are parallel to the tonal trends in music. These results are consistent with the hypothesis that the association between musical tonality and emotion is based on universal vocal characteristics of different affective states.

  12. The music of your emotions: neural substrates involved in detection of emotional correspondence between auditory and visual music actions.

    Directory of Open Access Journals (Sweden)

    Karin Petrini

    Full Text Available In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound) as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.

  13. Music and emotions: from enchantment to entrainment.

    Science.gov (United States)

    Vuilleumier, Patrik; Trost, Wiebke

    2015-03-01

    Producing and perceiving music engage a wide range of sensorimotor, cognitive, and emotional processes. Emotions are a central feature of the enjoyment of music, with a large variety of affective states consistently reported by people while listening to music. However, besides joy or sadness, music often elicits feelings of wonder, nostalgia, or tenderness, which do not correspond to emotion categories typically studied in neuroscience and whose neural substrates remain largely unknown. Here we review the similarities and differences in the neural substrates underlying these "complex" music-evoked emotions relative to other more "basic" emotional experiences. We suggest that these emotions emerge through a combination of activation in emotional and motivational brain systems (e.g., including reward pathways) that confer its valence to music, with activation in several other areas outside emotional systems, including motor, attention, or memory-related regions. We then discuss the neural substrates underlying the entrainment of cognitive and motor processes by music and their relation to affective experience. These effects have important implications for the potential therapeutic use of music in neurological or psychiatric diseases, particularly those associated with motor, attention, or affective disturbances. © 2015 New York Academy of Sciences.

  14. Music-color associations are mediated by emotion.

    Science.gov (United States)

    Palmer, Stephen E; Schloss, Karen B; Xu, Zoe; Prado-León, Lilia R

    2013-05-28

    Experimental evidence demonstrates robust cross-modal matches between music and colors that are mediated by emotional associations. US and Mexican participants chose colors that were most/least consistent with 18 selections of classical orchestral music by Bach, Mozart, and Brahms. In both cultures, faster music in the major mode produced color choices that were more saturated, lighter, and yellower, whereas slower, minor music produced the opposite pattern (choices that were desaturated, darker, and bluer). There were strong correlations (0.89 or higher) between the emotional associations of the music and those of the colors chosen to go with the music, supporting an emotional mediation hypothesis in both cultures. Additional experiments showed similarly robust cross-modal matches from emotionally expressive faces to colors and from music to emotionally expressive faces. These results provide further support that music-to-color associations are mediated by common emotional associations.

  15. Music evokes vicarious emotions in listeners.

    Science.gov (United States)

    Kawakami, Ai; Furukawa, Kiyoshi; Okanoya, Kazuo

    2014-01-01

    Why do we listen to sad music? We seek to answer this question using a psychological approach. It is possible to distinguish perceived emotions from those that are experienced. Therefore, we hypothesized that, although sad music is perceived as sad, listeners actually feel (experience) pleasant emotions concurrent with sadness. This hypothesis was supported, which led us to question whether sadness in the context of art is truly an unpleasant emotion. While experiencing sadness may be unpleasant, it may also be somewhat pleasant when experienced in the context of art, for example, when listening to sad music. We consider musically evoked emotion vicarious, as we are not threatened when we experience it, in the way that we can be during the course of experiencing emotion in daily life. When we listen to sad music, we experience vicarious sadness. In this review, we propose two sides to sadness by suggesting vicarious emotion.

  16. Music for a Brighter World: Brightness Judgment Bias by Musical Emotion.

    Science.gov (United States)

    Bhattacharya, Joydeep; Lindsen, Job P

    2016-01-01

    A prevalent conceptual metaphor is the association of the concepts of good and evil with brightness and darkness, respectively. Music cognition, like metaphor, is possibly embodied, yet no study has addressed the question whether musical emotion can modulate brightness judgment in a metaphor consistent fashion. In three separate experiments, participants judged the brightness of a grey square that was presented after a short excerpt of emotional music. The results of Experiment 1 showed that short musical excerpts are effective emotional primes that cross-modally influence brightness judgment of visual stimuli. Grey squares were consistently judged as brighter after listening to music with a positive valence, as compared to music with a negative valence. The results of Experiment 2 revealed that the bias in brightness judgment does not require an active evaluation of the emotional content of the music. By applying a different experimental procedure in Experiment 3, we showed that this brightness judgment bias is indeed a robust effect. Altogether, our findings demonstrate a powerful role of musical emotion in biasing brightness judgment and that this bias is aligned with the metaphor viewpoint.

  17. Unforgettable film music: The role of emotion in episodic long-term memory for music

    Science.gov (United States)

    Eschrich, Susann; Münte, Thomas F; Altenmüller, Eckart O

    2008-01-01

    Background Specific pieces of music can elicit strong emotions in listeners and, possibly in connection with these emotions, can be remembered even years later. However, episodic memory for emotional music compared with less emotional music has not yet been examined. We investigated whether emotional music is remembered better than less emotional music. Also, we examined the influence of musical structure on memory performance. Results Recognition of 40 musical excerpts was investigated as a function of arousal, valence, and emotional intensity ratings of the music. In the first session the participants judged valence and arousal of the musical pieces. One week later, participants listened to the 40 old and 40 new musical excerpts randomly interspersed and were asked to make an old/new decision as well as to indicate arousal and valence of the pieces. Musical pieces that were rated as very positive were recognized significantly better. Conclusion Musical excerpts rated as very positive are remembered better. Valence seems to be an important modulator of episodic long-term memory for music. Evidently, strong emotions related to the musical experience facilitate memory formation and retrieval. PMID:18505596

  18. Unforgettable film music: The role of emotion in episodic long-term memory for music

    Directory of Open Access Journals (Sweden)

    Altenmüller Eckart O

    2008-05-01

    Full Text Available Abstract Background Specific pieces of music can elicit strong emotions in listeners and, possibly in connection with these emotions, can be remembered even years later. However, episodic memory for emotional music compared with less emotional music has not yet been examined. We investigated whether emotional music is remembered better than less emotional music. Also, we examined the influence of musical structure on memory performance. Results Recognition of 40 musical excerpts was investigated as a function of arousal, valence, and emotional intensity ratings of the music. In the first session the participants judged valence and arousal of the musical pieces. One week later, participants listened to the 40 old and 40 new musical excerpts randomly interspersed and were asked to make an old/new decision as well as to indicate arousal and valence of the pieces. Musical pieces that were rated as very positive were recognized significantly better. Conclusion Musical excerpts rated as very positive are remembered better. Valence seems to be an important modulator of episodic long-term memory for music. Evidently, strong emotions related to the musical experience facilitate memory formation and retrieval.

  19. Evaluating music emotion recognition

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    A fundamental problem with nearly all work in music genre recognition (MGR) is that evaluation lacks validity with respect to the principal goals of MGR. This problem also occurs in the evaluation of music emotion recognition (MER). Standard approaches to evaluation, though easy to implement, do not reliably differentiate between recognizing genre or emotion from music, or by virtue of confounding factors in signals (e.g., equalization). We demonstrate such problems for evaluating an MER system, and conclude with recommendations.

  20. Mapping Aesthetic Musical Emotions in the Brain

    Science.gov (United States)

    Trost, Wiebke; Ethofer, Thomas; Zentner, Marcel; Vuilleumier, Patrik

    2012-01-01

    Music evokes complex emotions beyond pleasant/unpleasant or happy/sad dichotomies usually investigated in neuroscience. Here, we used functional neuroimaging with parametric analyses based on the intensity of felt emotions to explore a wider spectrum of affective responses reported during music listening. Positive emotions correlated with activation of left striatum and insula when high-arousing (Wonder, Joy) but right striatum and orbitofrontal cortex when low-arousing (Nostalgia, Tenderness). Irrespective of their positive/negative valence, high-arousal emotions (Tension, Power, and Joy) also correlated with activations in sensory and motor areas, whereas low-arousal categories (Peacefulness, Nostalgia, and Sadness) selectively engaged ventromedial prefrontal cortex and hippocampus. The right parahippocampal cortex activated in all but positive high-arousal conditions. Results also suggested some blends between activation patterns associated with different classes of emotions, particularly for feelings of Wonder or Transcendence. These data reveal a differentiated recruitment across emotions of networks involved in reward, memory, self-reflective, and sensorimotor processes, which may account for the unique richness of musical emotions. PMID:22178712

  1. LSD enhances the emotional response to music.

    Science.gov (United States)

    Kaelen, M; Barrett, F S; Roseman, L; Lorenz, R; Family, N; Bolstridge, M; Curran, H V; Feilding, A; Nutt, D J; Carhart-Harris, R L

    2015-10-01

    There is renewed interest in the therapeutic potential of psychedelic drugs such as lysergic acid diethylamide (LSD). LSD was used extensively in the 1950s and 1960s as an adjunct in psychotherapy, reportedly enhancing emotionality. Music is an effective tool to evoke and study emotion and is considered an important element in psychedelic-assisted psychotherapy; however, the hypothesis that psychedelics enhance the emotional response to music has yet to be investigated in a modern placebo-controlled study. The present study sought to test the hypothesis that music-evoked emotions are enhanced under LSD. Ten healthy volunteers listened to five different tracks of instrumental music during each of two study days, a placebo day followed by an LSD day, separated by 5-7 days. Subjective ratings were completed after each music track and included a visual analogue scale (VAS) and the nine-item Geneva Emotional Music Scale (GEMS-9). Results demonstrated that the emotional response to music is enhanced by LSD, especially the emotions "wonder", "transcendence", "power" and "tenderness". These findings reinforce the long-held assumption that psychedelics enhance music-evoked emotion, and provide tentative and indirect support for the notion that this effect can be harnessed in the context of psychedelic-assisted psychotherapy. Further research is required to test this link directly.

  2. What does music express? Basic emotions and beyond

    Directory of Open Access Journals (Sweden)

    Patrik N. Juslin

    2013-09-01

    Full Text Available Numerous studies have investigated whether music can reliably convey emotions to listeners, and - if so - what musical parameters might carry this information. Far less attention has been devoted to the actual contents of the communicative process. The goal of this article is thus to consider what types of emotional content are possible to convey in music. I will argue that the content is mainly constrained by the type of coding involved, and that distinct types of content are related to different types of coding. Based on these premises, I suggest a conceptualization in terms of ‘multiple layers’ of musical expression of emotions. The ‘core’ layer is constituted by iconically-coded basic emotions. I attempt to clarify the meaning of this concept, dispel the myths that surround it, and provide examples of how it can be heuristic in explaining findings in this domain. However, I also propose that this ‘core’ layer may be extended, qualified, and even modified by additional layers of expression that involve intrinsic and associative coding. These layers enable listeners to perceive more complex emotions - though the expressions are less cross-culturally invariant and more dependent on the social context and/or the individual listener. This multiple-layer conceptualization of expression in music can help to explain both similarities and differences between vocal and musical expression of emotions.

  3. What does music express? Basic emotions and beyond.

    Science.gov (United States)

    Juslin, Patrik N

    2013-01-01

    Numerous studies have investigated whether music can reliably convey emotions to listeners, and-if so-what musical parameters might carry this information. Far less attention has been devoted to the actual contents of the communicative process. The goal of this article is thus to consider what types of emotional content are possible to convey in music. I will argue that the content is mainly constrained by the type of coding involved, and that distinct types of content are related to different types of coding. Based on these premises, I suggest a conceptualization in terms of "multiple layers" of musical expression of emotions. The "core" layer is constituted by iconically-coded basic emotions. I attempt to clarify the meaning of this concept, dispel the myths that surround it, and provide examples of how it can be heuristic in explaining findings in this domain. However, I also propose that this "core" layer may be extended, qualified, and even modified by additional layers of expression that involve intrinsic and associative coding. These layers enable listeners to perceive more complex emotions-though the expressions are less cross-culturally invariant and more dependent on the social context and/or the individual listener. This multiple-layer conceptualization of expression in music can help to explain both similarities and differences between vocal and musical expression of emotions.

  4. Music for a Brighter World: Brightness Judgment Bias by Musical Emotion.

    Directory of Open Access Journals (Sweden)

    Joydeep Bhattacharya

    Full Text Available A prevalent conceptual metaphor is the association of the concepts of good and evil with brightness and darkness, respectively. Music cognition, like metaphor, is possibly embodied, yet no study has addressed the question whether musical emotion can modulate brightness judgment in a metaphor consistent fashion. In three separate experiments, participants judged the brightness of a grey square that was presented after a short excerpt of emotional music. The results of Experiment 1 showed that short musical excerpts are effective emotional primes that cross-modally influence brightness judgment of visual stimuli. Grey squares were consistently judged as brighter after listening to music with a positive valence, as compared to music with a negative valence. The results of Experiment 2 revealed that the bias in brightness judgment does not require an active evaluation of the emotional content of the music. By applying a different experimental procedure in Experiment 3, we showed that this brightness judgment bias is indeed a robust effect. Altogether, our findings demonstrate a powerful role of musical emotion in biasing brightness judgment and that this bias is aligned with the metaphor viewpoint.

  5. From everyday emotions to aesthetic emotions: Towards a unified theory of musical emotions

    Science.gov (United States)

    Juslin, Patrik N.

    2013-09-01

    The sound of music may arouse profound emotions in listeners. But such experiences seem to involve a ‘paradox’, namely that music - an abstract form of art, which appears removed from our concerns in everyday life - can arouse emotions - biologically evolved reactions related to human survival. How are these (seemingly) non-commensurable phenomena linked together? Key is to understand the processes through which sounds are imbued with meaning. It can be argued that the survival of our ancient ancestors depended on their ability to detect patterns in sounds, derive meaning from them, and adjust their behavior accordingly. Such an ecological perspective on sound and emotion forms the basis of a recent multi-level framework that aims to explain emotional responses to music in terms of a large set of psychological mechanisms. The goal of this review is to offer an updated and expanded version of the framework that can explain both ‘everyday emotions’ and ‘aesthetic emotions’. The revised framework - referred to as BRECVEMA - includes eight mechanisms: Brain Stem Reflex, Rhythmic Entrainment, Evaluative Conditioning, Contagion, Visual Imagery, Episodic Memory, Musical Expectancy, and Aesthetic Judgment. In this review, it is argued that all of the above mechanisms may be directed at information that occurs in a ‘musical event’ (i.e., a specific constellation of music, listener, and context). Of particular significance is the addition of a mechanism corresponding to aesthetic judgments of the music, to better account for typical ‘appreciation emotions’ such as admiration and awe. Relationships between aesthetic judgments and other mechanisms are reviewed based on the revised framework. It is suggested that the framework may contribute to a long-needed reconciliation between previous approaches that have conceptualized music listeners' responses in terms of either ‘everyday emotions’ or ‘aesthetic emotions’.

  6. Emotion regulation through listening to music in everyday situations.

    Science.gov (United States)

    Thoma, Myriam V; Ryf, Stefan; Mohiyeddini, Changiz; Ehlert, Ulrike; Nater, Urs M

    2012-01-01

    Music is a stimulus capable of triggering an array of basic and complex emotions. We investigated whether and how individuals employ music to induce specific emotional states in everyday situations for the purpose of emotion regulation. Furthermore, we wanted to examine whether specific emotion-regulation styles influence music selection in specific situations. Participants indicated how likely it would be that they would want to listen to various pieces of music (which are known to elicit specific emotions) in various emotional situations. Data analyses by means of non-metric multidimensional scaling revealed a clear preference for pieces of music that were emotionally congruent with an emotional situation. In addition, we found that specific emotion-regulation styles might influence the selection of pieces of music characterised by specific emotions. Our findings demonstrate emotion-congruent music selection and highlight the important role of specific emotion-regulation styles in the selection of music in everyday situations.

  7. Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias.

    Science.gov (United States)

    Invitto, Sara; Calcagnì, Antonio; Mignozzi, Arianna; Scardino, Rosanna; Piraino, Giulia; Turchi, Daniele; De Feudis, Irio; Brunetti, Antonio; Bevilacqua, Vitoantonio; de Tommaso, Marina

    2017-01-01

    Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition and how potential modulation of this processing induced by music could be influenced by the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 Event-Related Potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings of the study were that musicians' behavioral responses and N170 were more affected by the emotional value of the music administered in the emotional go/no-go task, and this bias was also apparent in responses to the non-target emotional face. This suggests that emotional information, coming from multiple sensory channels, activates a crossmodal integration process that depends upon the stimuli's emotional salience and the listener's appraisal.

  8. Emotion rendering in music: range and characteristic values of seven musical variables.

    Science.gov (United States)

    Bresin, Roberto; Friberg, Anders

    2011-10-01

    Many studies on the synthesis of emotional expression in music performance have focused on the effect of individual performance variables on perceived emotional quality by systematically varying those variables. However, most of the studies have used a predetermined small number of levels for each variable, and the selection of these levels has often been done arbitrarily. The main aim of this research work is to improve upon existing methodologies by taking a synthesis approach. In a production experiment, 20 performers were asked to manipulate values of 7 musical variables simultaneously (tempo, sound level, articulation, phrasing, register, timbre, and attack speed) for communicating 5 different emotional expressions (neutral, happy, scary, peaceful, sad) for each of 4 scores. The scores were compositions communicating four different emotions (happiness, sadness, fear, calmness). Emotional expressions and music scores were presented in combination and in random order for each performer for a total of 5 × 4 stimuli. The experiment allowed for a systematic investigation of the interaction between the emotion of each score and the emotions the performers intended to express. A two-way repeated-measures analysis of variance (ANOVA) with factors emotion and score was conducted on the participants' values separately for each of the seven musical factors. There are two main results. The first is that the musical variables were manipulated in the same direction as reported in previous research on emotionally expressive music performance. The second is the identification, for each of the five emotions, of the mean values and ranges of the five musical variables tempo, sound level, articulation, register, and instrument. These values turned out to be independent of the particular score and its emotion. The results presented in this study therefore allow for both the design and control of emotionally expressive computerized musical stimuli that are more ecologically valid than
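
    By way of illustration, the repeated-measures analysis described in this record can be sketched in a few lines of Python. The file name, column names, and long-format layout below are assumptions made for the example, not the authors' materials or code.

      # Hypothetical sketch: two-way repeated-measures ANOVA (factors: emotion, score)
      # on the values performers chose for one musical variable, here tempo.
      import pandas as pd
      from statsmodels.stats.anova import AnovaRM

      # Assumed long format: one row per performer x intended emotion x score.
      df = pd.read_csv("tempo_settings.csv")  # columns: performer, emotion, score, tempo

      anova = AnovaRM(
          data=df,
          depvar="tempo",               # the manipulated musical variable
          subject="performer",          # 20 performers as the repeated-measures unit
          within=["emotion", "score"],  # 5 intended emotions x 4 scores
      ).fit()
      print(anova)

    Running the same model once per musical variable mirrors the per-factor analyses reported above.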

  9. Musical Empathy, Emotional Co-Constitution, and the “Musical Other”

    Directory of Open Access Journals (Sweden)

    Deniz Peters

    2015-09-01

    Full Text Available Musical experience can confront us with emotions that are not currently ours. We might remain unaffected by them, or be affected: retreat from them in avoidance, or embrace them and experience them as ours. This suggests that they are another's. Whose are they? Do we arrive at them through empathy, turning our interest to the music as we do to others in an interpersonal encounter? In addressing these questions, I differentiate between musical and social empathy, rejecting the idea that the emotions arise as a direct consequence of empathizing with composers or performers. I argue that musical perception is doubly active: bodily knowledge can extend auditory perception cross-modally, which, in turn, can orient a bodily hermeneutic. Musical passages thus acquire adverbial expressivity, an expressivity which, as I discuss, is co-constituted, and engenders a "musical other." This leads me to a reinterpretation of the musical persona and to consider a dialectic between social and musical empathy that I think plays a central role in the individuation of shared emotion in musical experience. Musical empathy, then, occurs via a combination of self-involvement and self-effacement—leading us first into, and then perhaps beyond, ourselves.

  10. Music to my ears: Age-related decline in musical and facial emotion recognition.

    Science.gov (United States)

    Sutcliffe, Ryan; Rendell, Peter G; Henry, Julie D; Bailey, Phoebe E; Ruffman, Ted

    2017-12-01

    We investigated young-old differences in emotion recognition using music and face stimuli and tested explanatory hypotheses regarding older adults' typically worse emotion recognition. In Experiment 1, young and older adults labeled emotions in an established set of faces, and in classical piano stimuli that we pilot-tested on other young and older adults. Older adults were worse at detecting anger, sadness, fear, and happiness in music. Performance on the music and face emotion tasks was not correlated for either age group. Because musical expressions of fear were not equated for age groups in the pilot study of Experiment 1, we conducted a second experiment in which we created a novel set of music stimuli that included more accessible musical styles, and which we again pilot-tested on young and older adults. In this pilot study, all musical emotions were identified similarly by young and older adults. In Experiment 2, participants also made age estimations in another set of faces to examine whether potential relations between the face and music emotion tasks would be shared with the age estimation task. Older adults did worse in each of the tasks, and had specific difficulty recognizing happy, sad, peaceful, angry, and fearful music clips. Older adults' difficulties in each of the 3 tasks-music emotion, face emotion, and face age-were not correlated with each other. General cognitive decline did not appear to explain our results as increasing age predicted emotion performance even after fluid IQ was controlled for within the older adult group. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. What does music express? Basic emotions and beyond

    Science.gov (United States)

    Juslin, Patrik N.

    2013-01-01

    Numerous studies have investigated whether music can reliably convey emotions to listeners, and—if so—what musical parameters might carry this information. Far less attention has been devoted to the actual contents of the communicative process. The goal of this article is thus to consider what types of emotional content are possible to convey in music. I will argue that the content is mainly constrained by the type of coding involved, and that distinct types of content are related to different types of coding. Based on these premises, I suggest a conceptualization in terms of “multiple layers” of musical expression of emotions. The “core” layer is constituted by iconically-coded basic emotions. I attempt to clarify the meaning of this concept, dispel the myths that surround it, and provide examples of how it can be heuristic in explaining findings in this domain. However, I also propose that this “core” layer may be extended, qualified, and even modified by additional layers of expression that involve intrinsic and associative coding. These layers enable listeners to perceive more complex emotions—though the expressions are less cross-culturally invariant and more dependent on the social context and/or the individual listener. This multiple-layer conceptualization of expression in music can help to explain both similarities and differences between vocal and musical expression of emotions. PMID:24046758

  12. Impaired emotion recognition in music in Parkinson's disease.

    Science.gov (United States)

    van Tricht, Mirjam J; Smeding, Harriet M M; Speelman, Johannes D; Schmand, Ben A

    2010-10-01

    Music has the potential to evoke strong emotions and plays a significant role in the lives of many people. Music might therefore be an ideal medium to assess emotion recognition. We investigated emotion recognition in music in 20 patients with idiopathic Parkinson's disease (PD) and 20 matched healthy volunteers. The role of cognitive dysfunction and other disease characteristics in emotion recognition was also evaluated. We used 32 musical excerpts that expressed happiness, sadness, fear or anger. PD patients were impaired in recognizing fear and anger in music. Fear recognition was associated with executive functions in PD patients and in healthy controls, but the emotion recognition impairments of PD patients persisted after adjusting for executive functioning. We found no differences in the recognition of happy or sad music. Emotion recognition was not related to depressive symptoms, disease duration or severity of motor symptoms. We conclude that PD patients are impaired in recognizing complex emotions in music. Although this impairment is related to executive dysfunction, our findings most likely reflect an additional primary deficit in emotional processing. 2010 Elsevier Inc. All rights reserved.

  13. Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias

    Directory of Open Access Journals (Sweden)

    Sara Invitto

    2017-08-01

    Full Text Available Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition and how potential modulation of this processing induced by music could be influenced by the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 Event-Related Potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings of the study were that musicians' behavioral responses and N170 were more affected by the emotional value of the music administered in the emotional go/no-go task, and this bias was also apparent in responses to the non-target emotional face. This suggests that emotional information, coming from multiple sensory channels, activates a crossmodal integration process that depends upon the stimuli's emotional salience and the listener's appraisal.

  14. Explicit versus implicit neural processing of musical emotions

    OpenAIRE

    Bogert, Brigitte; Numminen-Kontti, Taru; Gold, Benjamin; Sams, Mikko; Numminen, Jussi; Burunat, Iballa; Lampinen, Jouko; Brattico, Elvira

    2016-01-01

    Music is often used to regulate emotions and mood. Typically, music conveys and induces emotions even when one does not attend to them. Studies on the neural substrates of musical emotions have, however, only examined brain activity when subjects have focused on the emotional content of the music. Here we address with functional magnetic resonance imaging (fMRI) the neural processing of happy, sad, and fearful music with a paradigm in which 56 subjects were instructed to either classify the e...

  15. Evaluating music emotion recognition: Lessons from music genre recognition?

    OpenAIRE

    Sturm, Bob L.

    2013-01-01

    A fundamental problem with nearly all work in music genre recognition (MGR) is that evaluation lacks validity with respect to the principal goals of MGR. This problem also occurs in the evaluation of music emotion recognition (MER). Standard approaches to evaluation, though easy to implement, do not reliably differentiate between recognizing genre or emotion from music, or by virtue of confounding factors in signals (e.g., equalization). We demonstrate such problems for evaluating an MER syste...

  16. Theory-guided Therapeutic Function of Music to facilitate emotion regulation development in preschool-aged children

    Directory of Open Access Journals (Sweden)

    Kimberly eSena Moore

    2015-10-01

    Full Text Available Emotion regulation is an umbrella term to describe interactive, goal-dependent explicit and implicit processes that are intended to help an individual manage and shift an emotional experience. The primary window for appropriate emotion regulation development occurs during the infant, toddler, and preschool years. Atypical emotion regulation development is considered a risk factor for mental health problems and has been implicated as a primary mechanism underlying childhood pathologies. Current treatments are predominantly verbal- and behavioral-based and lack the opportunity to practice in-the-moment management of emotionally charged situations. There is also an absence of caregiver-child interaction in these treatment strategies. Based on behavioral and neural support for music as a therapeutic mechanism, the incorporation of intentional music experiences, facilitated by a music therapist, may be one way to address these limitations. Musical Contour Regulation Facilitation is an interactive therapist-child music-based intervention for emotion regulation development practice in preschoolers. The Musical Contour Regulation Facilitation intervention uses the deliberate contour and temporal structure of a music therapy session to mirror the changing flow of the caregiver-child interaction through the alternation of high arousal and low arousal music experiences. The purpose of this paper is to describe the Therapeutic Function of Music, a theory-based description of the structural characteristics for a music-based stimulus to musically facilitate developmentally appropriate high arousal and low arousal in-the-moment emotion regulation experiences. The Therapeutic Function of Music analysis is based on a review of the music theory, music neuroscience, and music development literature and provides a preliminary model of the structural characteristics of the music as a core component of the Musical Contour Regulation Facilitation intervention.

  17. Relaxing music counters heightened consolidation of emotional memory.

    Science.gov (United States)

    Rickard, Nikki S; Wong, Wendy Wing; Velik, Lauren

    2012-02-01

    Emotional events tend to be retained more strongly than other everyday occurrences, a phenomenon partially regulated by the neuromodulatory effects of arousal. Two experiments demonstrated the use of relaxing music as a means of reducing arousal levels, thereby challenging heightened long-term recall of an emotional story. In Experiment 1, participants (N=84) viewed a slideshow, during which they listened to either an emotional or neutral narration, and were exposed to relaxing or no music. Retention was tested 1 week later via a forced choice recognition test. Retention for both the emotional content (Phase 2 of the story) and material presented immediately after the emotional content (Phase 3) was enhanced, when compared with retention for the neutral story. Relaxing music prevented the enhancement for material presented after the emotional content (Phase 3). Experiment 2 (N=159) provided further support to the neuromodulatory effect of music by post-event presentation of both relaxing music and non-relaxing auditory stimuli (arousing music/background sound). Free recall of the story was assessed immediately afterwards and 1 week later. Relaxing music significantly reduced recall of the emotional story (Phase 2). The findings provide further insight into the capacity of relaxing music to attenuate the strength of emotional memory, offering support for the therapeutic use of music for such purposes. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. Emotional Responses to Music: Experience, Expression, and Physiology

    Science.gov (United States)

    Lundqvist, Lars-Olov; Carlsson, Fredrik; Hilmersson, Per; Juslin, Patrik N.

    2009-01-01

    A crucial issue in research on music and emotion is whether music evokes genuine emotional responses in listeners (the emotivist position) or whether listeners merely perceive emotions expressed by the music (the cognitivist position). To investigate this issue, we measured self-reported emotion, facial muscle activity, and autonomic activity in…

  19. Music-evoked emotions in schizophrenia.

    Science.gov (United States)

    Abe, Daijyu; Arai, Makoto; Itokawa, Masanari

    2017-07-01

    Previous studies have reported that people with schizophrenia have impaired musical abilities. Here we developed a simple music-based assay to assess patients' ability to associate a minor chord with sadness. We further characterize correlations between impaired musical responses and psychiatric symptoms. We exposed participants sequentially to two sets of sound stimuli, first a C-major progression and chord, and second a C-minor progression and chord. Participants were asked which stimulus they associated with sadness, the first set, the second set, or neither. The severity of psychiatric symptoms was assessed using the Positive and Negative Syndrome Scale (PANSS). Study participants were 29 patients diagnosed with schizophrenia and 29 healthy volunteers matched in age, gender and musical background. 37.9% (95% confidence interval [CI]:19.1-56.7) of patients with schizophrenia identified the minor chord set as sad, compared with 97.9% (95%CI: 89.5-103.6) of controls. Four patients were diagnosed with treatment-resistant schizophrenia, and all four failed to associate the minor chord with sadness. Patients who did not recognize minor chords as sad had significantly higher scores on all PANSS subscales. A simple test allows music-evoked emotions to be assessed in schizophrenia patients, and may reveal potential relationships between music-evoked emotions and psychiatric symptoms. Copyright © 2016. Published by Elsevier B.V.

  20. Towards a neural basis of music-evoked emotions.

    Science.gov (United States)

    Koelsch, Stefan

    2010-03-01

    Music is capable of evoking exceptionally strong emotions and of reliably affecting the mood of individuals. Functional neuroimaging and lesion studies show that music-evoked emotions can modulate activity in virtually all limbic and paralimbic brain structures. These structures are crucially involved in the initiation, generation, detection, maintenance, regulation and termination of emotions that have survival value for the individual and the species. Therefore, at least some music-evoked emotions involve the very core of evolutionarily adaptive neuroaffective mechanisms. Because dysfunctions in these structures are related to emotional disorders, a better understanding of music-evoked emotions and their neural correlates can lead to a more systematic and effective use of music in therapy. Copyright 2010 Elsevier Ltd. All rights reserved.

  1. Emotional responses to music: towards scientific perspectives on music therapy.

    Science.gov (United States)

    Suda, Miyuki; Morimoto, Kanehisa; Obata, Akiko; Koizumi, Hideaki; Maki, Atsushi

    2008-01-08

    Neurocognitive research has the potential to identify the relevant effects of music therapy. In this study, we examined the effect of music mode (major vs. minor) on stress reduction using optical topography and an endocrinological stress marker. In salivary cortisol levels, we observed that the stress induced by conditions such as mental fatigue (thinking and creating a response) was reduced more by major-mode music than by minor-mode music. We suggest that music specifically induces an emotional response similar to a pleasant experience or happiness. Moreover, we demonstrated the typical asymmetrical pattern of stress responses in upper temporal cortex areas, and suggested that happiness/sadness emotional processing might be related to stress reduction by music.

  2. Emotions evoked by the sound of music: characterization, classification, and measurement.

    Science.gov (United States)

    Zentner, Marcel; Grandjean, Didier; Scherer, Klaus R

    2008-08-01

    One reason for the universal appeal of music lies in the emotional rewards that music offers to its listeners. But what makes these rewards so special? The authors addressed this question by progressively characterizing music-induced emotions in 4 interrelated studies. Studies 1 and 2 (n=354) were conducted to compile a list of music-relevant emotion terms and to study the frequency of both felt and perceived emotions across 5 groups of listeners with distinct music preferences. Emotional responses varied greatly according to musical genre and type of response (felt vs. perceived). Study 3 (n=801)--a field study carried out during a music festival--examined the structure of music-induced emotions via confirmatory factor analysis of emotion ratings, resulting in a 9-factorial model of music-induced emotions. Study 4 (n=238) replicated this model and found that it accounted for music-elicited emotions better than the basic emotion and dimensional emotion models. A domain-specific device to measure musically induced emotions is introduced--the Geneva Emotional Music Scale.

  3. Audio-Visual Integration Modifies Emotional Judgment in Music

    Directory of Open Access Journals (Sweden)

    Shen-Yuan Su

    2011-10-01

    Full Text Available The conventional view that perceived emotion in music is derived mainly from auditory signals has led to neglect of the contribution of visual image. In this study, we manipulated mode (major vs. minor) and examined the influence of a video image on emotional judgment in music. Melodies in either major or minor mode were controlled for tempo and rhythm and played to the participants. We found that Taiwanese participants, like Westerners, judged major melodies as expressing positive, and minor melodies negative, emotions. The major or minor melodies were then paired with video images of the singers, which were either emotionally congruent or incongruent with their modes. Results showed that participants perceived stronger positive or negative emotions with congruent audio-visual stimuli. Compared to listening to music alone, stronger emotions were perceived when an emotionally congruent video image was added and weaker emotions were perceived when an incongruent image was added. We therefore demonstrate that mode is important to perceive the emotional valence in music and that treating musical art as a purely auditory event might lose the enhanced emotional strength perceived in music, since going to a concert may lead to stronger perceived emotion than listening to the CD at home.

  4. (A)musicality in Williams syndrome: examining relationships among auditory perception, musical skill, and emotional responsiveness to music.

    Science.gov (United States)

    Lense, Miriam D; Shivers, Carolyn M; Dykens, Elisabeth M

    2013-01-01

    Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia.

  5. Sensitivity to musical emotions in congenital amusia.

    Science.gov (United States)

    Gosselin, Nathalie; Paquette, Sébastien; Peretz, Isabelle

    2015-10-01

    The emotional experience elicited by music is largely dependent on structural characteristics such as pitch, rhythm, and dynamics. We examine here to what extent amusic adults, who have experienced pitch perception difficulties all their lives, still maintain some ability to perceive emotions from music. Amusic and control participants judged the emotions expressed by unfamiliar musical clips intended to convey happiness, sadness, fear and peacefulness (Experiment 1A). Surprisingly, most amusic individuals showed normal recognition of the four emotions tested here. This preserved ability was not due to some peculiarities of the music, since the amusic individuals showed a typical deficit in perceiving pitch violations intentionally inserted in the same clips (Experiment 1B). In Experiment 2, we tested the use of two major structural determinants of musical emotions: tempo and mode. Neutralization of tempo had the same effect on both amusics' and controls' emotional ratings. In contrast, amusics did not respond to a change of mode as markedly as controls did. Moreover, unlike the control participants, amusics' judgments were not influenced by subtle differences in pitch, such as the number of semitones changed by the mode manipulation. Instead, amusics showed normal sensitivity to fluctuations in energy, to pulse clarity, and to timbre differences, such as roughness. Amusics even showed sensitivity to key clarity and to large mean pitch differences in distinguishing happy from sad music. Thus, the pitch perception deficit experienced by amusic adults had only mild consequences on emotional judgments. In sum, emotional responses to music may be possible in this condition. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Music and Its Inductive Power: A Psychobiological and Evolutionary Approach to Musical Emotions

    Science.gov (United States)

    Reybrouck, Mark; Eerola, Tuomas

    2017-01-01

    The aim of this contribution is to broaden the concept of musical meaning from an abstract and emotionally neutral cognitive representation to an emotion-integrating description that is related to the evolutionary approach to music. Starting from the dispositional machinery for dealing with music as a temporal and sounding phenomenon, musical emotions are considered as adaptive responses to be aroused in human beings as the product of neural structures that are specialized for their processing. A theoretical and empirical background is provided in order to bring together the findings of music and emotion studies and the evolutionary approach to musical meaning. The theoretical grounding elaborates on the transition from referential to affective semantics, the distinction between expression and induction of emotions, and the tension between discrete-digital and analog-continuous processing of the sounds. The empirical background provides evidence from several findings such as infant-directed speech, referential emotive vocalizations and separation calls in lower mammals, the distinction between the acoustic and vehicle mode of sound perception, and the bodily and physiological reactions to the sounds. It is argued, finally, that early affective processing reflects the way emotions make our bodies feel, which in turn reflects on the emotions expressed and decoded. As such there is a dynamic tension between nature and nurture, which is reflected in the nature-nurture-nature cycle of musical sense-making. PMID:28421015

  7. Impaired emotion recognition in music in Parkinson's disease

    NARCIS (Netherlands)

    van Tricht, Mirjam J.; Smeding, Harriet M. M.; Speelman, Johannes D.; Schmand, Ben A.

    2010-01-01

    Music has the potential to evoke strong emotions and plays a significant role in the lives of many people. Music might therefore be an ideal medium to assess emotion recognition. We investigated emotion recognition in music in 20 patients with idiopathic Parkinson's disease (PD) and 20 matched

  8. Impaired emotion recognition in music in Parkinson's disease

    NARCIS (Netherlands)

    van Tricht, M.J.; Smeding, H.M.M.; Speelman, J.D.; Schmand, B.A.

    2010-01-01

    Music has the potential to evoke strong emotions and plays a significant role in the lives of many people. Music might therefore be an ideal medium to assess emotion recognition. We investigated emotion recognition in music in 20 patients with idiopathic Parkinson’s disease (PD) and 20 matched

  9. Brain correlates of musical and facial emotion recognition: evidence from the dementias.

    Science.gov (United States)

    Hsieh, S; Hornberger, M; Piguet, O; Hodges, J R

    2012-07-01

    The recognition of facial expressions of emotion is impaired in semantic dementia (SD) and is associated with right-sided brain atrophy in areas known to be involved in emotion processing, notably the amygdala. Whether patients with SD also experience difficulty recognizing emotions conveyed by other media, such as music, is unclear. Prior studies have used excerpts of known music from classical or film repertoire but not unfamiliar melodies designed to convey distinct emotions. Patients with SD (n = 11), Alzheimer's disease (n = 12) and healthy control participants (n = 20) underwent tests of emotion recognition in two modalities: unfamiliar musical tunes and unknown faces as well as volumetric MRI. Patients with SD were most impaired with the recognition of facial and musical emotions, particularly for negative emotions. Voxel-based morphometry showed that the labelling of emotions, regardless of modality, correlated with the degree of atrophy in the right temporal pole, amygdala and insula. The recognition of musical (but not facial) emotions was also associated with atrophy of the left anterior and inferior temporal lobe, which overlapped with regions correlating with standardized measures of verbal semantic memory. These findings highlight the common neural substrates supporting the processing of emotions by facial and musical stimuli but also indicate that the recognition of emotions from music draws upon brain regions that are associated with semantics in language. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Influence of Tempo and Rhythmic Unit in Musical Emotion Regulation.

    Science.gov (United States)

    Fernández-Sotos, Alicia; Fernández-Caballero, Antonio; Latorre, José M

    2016-01-01

    This article is based on the assumption that music has the power to change the listener's mood. The paper studies the outcome of two experiments on the regulation of emotional states in participants who listened to different musical excerpts. The present research focuses on note value, an important musical cue related to rhythm. The influence of two concepts linked to note value is analyzed separately and discussed together. The two musical cues under investigation are tempo and rhythmic unit. The participants are asked to label music fragments by using opposite meaningful words belonging to four semantic scales, namely "Tension" (ranging from Relaxing to Stressing), "Expressiveness" (Expressionless to Expressive), "Amusement" (Boring to Amusing) and "Attractiveness" (Pleasant to Unpleasant). The participants also have to indicate how much they feel certain basic emotions while listening to each music excerpt. The rated emotions are "Happiness," "Surprise," and "Sadness." This study makes it possible to draw some interesting conclusions about the associations between note value and emotions.

  11. Musical Manipulations and the Emotionally Extended Mind

    Directory of Open Access Journals (Sweden)

    Joel Krueger

    2015-05-01

    Full Text Available I respond to Kersten's criticism in his article "Music and Cognitive Extension" of my approach to the musically extended emotional mind in Krueger (2014). I specify how we manipulate—and in so doing, integrate with—music when, as active listeners, we become part of a musically extended cognitive system. I also indicate how Kersten's account might be enriched by paying closer attention to the way that music functions as an environmental artifact for emotion regulation.

  12. Music Education Intervention Improves Vocal Emotion Recognition

    Science.gov (United States)

    Mualem, Orit; Lavidor, Michal

    2015-01-01

    The current study is an interdisciplinary examination of the interplay among music, language, and emotions. It consisted of two experiments designed to investigate the relationship between musical abilities and vocal emotional recognition. In experiment 1 (N = 24), we compared the influence of two short-term intervention programs--music and…

  13. (A)musicality in Williams syndrome: examining relationships among auditory perception, musical skill, and emotional responsiveness to music

    Science.gov (United States)

    Lense, Miriam D.; Shivers, Carolyn M.; Dykens, Elisabeth M.

    2013-01-01

    Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia. PMID:23966965

  14. Emotional expression in music: contribution, linearity, and additivity of primary musical cues.

    Science.gov (United States)

    Eerola, Tuomas; Friberg, Anders; Bresin, Roberto

    2013-01-01

    The aim of this study is to manipulate musical cues systematically to determine the aspects of music that contribute to emotional expression, and whether these cues operate in additive or interactive fashion, and whether the cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues and the ranked importance of these was established by multiple regression. The most important cue was mode followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result suggested that most cue levels contributed to the emotions in a linear fashion, explaining 77-89% of variance in ratings. Quadratic encoding of cues did lead to minor but significant increases of the models (0-8%). Finally, the interactions between the cues were non-existent suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music (Juslin and Lindström, 2010).
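
    A minimal sketch of this regression logic, assuming an invented data file and numeric cue coding (they are not the study's materials): fit the ratings on the six cue levels, then add squared terms to see how much a quadratic encoding improves the fit.

      # Sketch only: linear vs. quadratic encoding of six musical cues as predictors
      # of a perceived-emotion rating (here 'happy').
      import pandas as pd
      import statsmodels.api as sm

      cues = ["mode", "tempo", "dynamics", "articulation", "timbre", "register"]
      df = pd.read_csv("cue_ratings.csv")  # assumed columns: the six cues (numeric levels) + 'happy'

      X_lin = sm.add_constant(df[cues])
      linear = sm.OLS(df["happy"], X_lin).fit()

      # Quadratic encoding: add squared cue levels (meaningful only for cues with 3+ levels).
      X_quad = X_lin.copy()
      for c in cues:
          if df[c].nunique() > 2:
              X_quad[c + "_sq"] = df[c] ** 2
      quadratic = sm.OLS(df["happy"], X_quad).fit()

      print(f"linear R^2 = {linear.rsquared:.2f}, quadratic R^2 = {quadratic.rsquared:.2f}")
      # Ranking the standardized coefficients of the linear model gives the relative
      # importance of the cues, analogous to the multiple-regression ranking above.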

  15. Impaired Emotion Recognition in Music in Parkinson's Disease

    Science.gov (United States)

    van Tricht, Mirjam J.; Smeding, Harriet M. M.; Speelman, Johannes D.; Schmand, Ben A.

    2010-01-01

    Music has the potential to evoke strong emotions and plays a significant role in the lives of many people. Music might therefore be an ideal medium to assess emotion recognition. We investigated emotion recognition in music in 20 patients with idiopathic Parkinson's disease (PD) and 20 matched healthy volunteers. The role of cognitive dysfunction…

  16. Comparison of emotion recognition from facial expression and music.

    Science.gov (United States)

    Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves interpretation of different visual and auditory clues. The ability to recognize emotions is not clearly determined as their presentation is usually very short (micro expressions), whereas the recognition itself does not have to be a conscious process. We assumed that the recognition from facial expressions is selected over the recognition of emotions communicated through music. In order to compare the success rate in recognizing emotions presented as facial expressions or in classical music works we conducted a survey which included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music works with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, whereas girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized if presented on human faces than in music, possibly because the understanding of facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition was selected for because of the need to communicate with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share some general cognitive skills like attention, memory and motivation. Music pieces were processed differently in the brain than facial expressions and, consequently, were probably evaluated differently as relevant emotional clues.

  17. Social and Emotional Function of Music Listening: Reasons for Listening to Music

    Science.gov (United States)

    Gurgen, Elif Tekin

    2016-01-01

    Problem Statement: The reasons that people listen to music have been investigated for many years. Research results over the past 50 years have showed that individual musical preference is influenced by multiple factors. Many studies have shown throughout that music has been used to induce emotional states, express, activate, control emotions,…

  18. Brain-Activity-Driven Real-Time Music Emotive Control

    OpenAIRE

    Giraldo, Sergio; Ramirez, Rafael

    2013-01-01

    Active music listening has emerged as a study field that aims to enable listeners to interactively control music. Most active music listening systems aim to control music aspects such as playback, equalization, browsing, and retrieval, but few of them aim to control expressive aspects of music to convey emotions. In this study our aim is to enrich the music listening experience by allowing listeners to control expressive parameters in music performances using their perceived emotional stat...

  19. Musical emotions: predicting second-by-second subjective feelings of emotion from low-level psychoacoustic features and physiological measurements.

    Science.gov (United States)

    Coutinho, Eduardo; Cangelosi, Angelo

    2011-08-01

    We maintain that the structure of affect elicited by music is largely dependent on dynamic temporal patterns in low-level music structural parameters. In support of this claim, we have previously provided evidence that spatiotemporal dynamics in psychoacoustic features resonate with two psychological dimensions of affect underlying judgments of subjective feelings: arousal and valence. In this article we extend our previous investigations in two aspects. First, we focus on the emotions experienced rather than perceived while listening to music. Second, we evaluate the extent to which peripheral feedback in music can account for the predicted emotional responses, that is, the role of physiological arousal in determining the intensity and valence of musical emotions. Akin to our previous findings, we will show that a significant part of the listeners' reported emotions can be predicted from a set of six psychoacoustic features--loudness, pitch level, pitch contour, tempo, texture, and sharpness. Furthermore, the accuracy of those predictions is improved with the inclusion of physiological cues--skin conductance and heart rate. The interdisciplinary work presented here provides a new methodology to the field of music and emotion research based on the combination of computational and experimental work, which aids the analysis of the emotional responses to music, while offering a platform for the abstract representation of those complex relationships. Future developments may aid specific areas, such as psychology and music therapy, by providing coherent descriptions of the emotional effects of specific music stimuli. 2011 APA, all rights reserved
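
    As a rough illustration of the kind of prediction described here (not the authors' actual model), one can regress continuous arousal ratings on per-second psychoacoustic features and then check whether adding the physiological cues improves the fit; the file and column names below are assumptions.

      # Sketch: second-by-second arousal predicted from six psychoacoustic features,
      # with and without physiological cues, using ordinary linear regression.
      import pandas as pd
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import r2_score

      frames = pd.read_csv("second_by_second.csv")  # assumed: one row per second of music

      acoustic = ["loudness", "pitch_level", "pitch_contour", "tempo", "texture", "sharpness"]
      physio = ["skin_conductance", "heart_rate"]
      y = frames["arousal"]

      model_a = LinearRegression().fit(frames[acoustic], y)
      model_b = LinearRegression().fit(frames[acoustic + physio], y)

      print("acoustic only     R^2:", r2_score(y, model_a.predict(frames[acoustic])))
      print("acoustic + physio R^2:", r2_score(y, model_b.predict(frames[acoustic + physio])))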

  20. Metaphor and music emotion: Ancient views and future directions.

    Science.gov (United States)

    Pannese, Alessia; Rappaz, Marc-André; Grandjean, Didier

    2016-08-01

    Music is often described in terms of emotion. This notion is supported by empirical evidence showing that engaging with music is associated with subjective feelings, and with objectively measurable responses at the behavioural, physiological, and neural level. Some accounts, however, reject the idea that music may directly induce emotions. For example, the 'paradox of negative emotion', whereby music described in negative terms is experienced as enjoyable, suggests that music might move the listener through indirect mechanisms in which the emotional experience elicited by music does not always coincide with the emotional label attributed to it. Here we discuss the role of metaphor as a potential mediator in these mechanisms. Drawing on musicological, philosophical, and neuroscientific literature, we suggest that metaphor acts at key stages along and between physical, biological, cognitive, and contextual processes, and propose a model of music experience in which metaphor mediates between language, emotion, and aesthetic response. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Generalizations of the subject-independent feature set for music-induced emotion recognition.

    Science.gov (United States)

    Lin, Yuan-Pin; Chen, Jyh-Horng; Duann, Jeng-Ren; Lin, Chin-Teng; Jung, Tzyy-Ping

    2011-01-01

    Electroencephalogram (EEG)-based emotion recognition has been an intensely growing field. Yet how to achieve acceptable accuracy in a practical system with as few electrodes as possible has received less attention. This study evaluates a set of subject-independent features, based on differential power asymmetry of symmetric electrode pairs [1], with emphasis on its applicability to subject variability in the music-induced emotion classification problem. The results of this study clearly validate the feasibility of using subject-independent EEG features to classify four emotional states with acceptable accuracy at second-scale temporal resolution. These features could be generalized across subjects to detect emotions induced by music excerpts not limited to the music database that was used to derive the emotion-specific features.
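
    The feature set described in this record lends itself to a compact sketch: band power is computed for each member of a symmetric electrode pair and the (log) difference is used as a subject-independent feature for a classifier. The channel names, frequency band, window handling, and classifier below are assumptions, not the authors' exact pipeline.

      # Hedged sketch of differential power asymmetry features for music-induced
      # emotion classification.
      import numpy as np
      from scipy.signal import welch
      from sklearn.svm import SVC

      FS = 256                                            # assumed sampling rate (Hz)
      PAIRS = [("F3", "F4"), ("F7", "F8"), ("T7", "T8")]  # example symmetric pairs
      BAND = (8.0, 13.0)                                  # alpha band, one of several options

      def band_power(signal, fs=FS, band=BAND):
          """Mean power of one channel within a frequency band (Welch PSD)."""
          freqs, psd = welch(signal, fs=fs, nperseg=fs)
          mask = (freqs >= band[0]) & (freqs <= band[1])
          return psd[mask].mean()

      def asymmetry_features(epoch, channel_index):
          """Log band-power difference (left minus right) for each symmetric pair."""
          feats = []
          for left, right in PAIRS:
              p_left = band_power(epoch[channel_index[left]])
              p_right = band_power(epoch[channel_index[right]])
              feats.append(np.log(p_left) - np.log(p_right))
          return np.array(feats)

      # Usage sketch with placeholder data: epochs is (n_trials, n_channels, n_samples)
      # and labels holds the four music-induced emotional states.
      # X = np.vstack([asymmetry_features(e, channel_index) for e in epochs])
      # clf = SVC(kernel="rbf").fit(X, labels)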

  2. Cognitive Function, Origin, and Evolution of Musical Emotions

    Directory of Open Access Journals (Sweden)

    Leonid Perlovsky

    2013-12-01

    Full Text Available Cognitive function of music, its origin, and evolution has been a mystery until recently. Here we discuss a theory of a fundamental function of music in cognition and culture. Music evolved in parallel with language. The evolution of language toward a semantically powerful tool required freeing from uncontrolled emotions. Knowledge evolved fast along with language. This created cognitive dissonances, contradictions among knowledge and instincts, which differentiated consciousness. To sustain evolution of language and culture, these contradictions had to be unified. Music was the mechanism of unification. Differentiated emotions are needed for resolving cognitive dissonances. As knowledge has been accumulated, contradictions multiplied and correspondingly more varied emotions had to evolve. While language differentiated psyche, music unified it. Thus the need for refined musical emotions in the process of cultural evolution is grounded in fundamental mechanisms of cognition. This is why today's human mind and cultures cannot exist without today's music.

  3. Emotional Expression in Music: Contribution, Linearity, and Additivity of Primary Musical Cues

    Directory of Open Access Journals (Sweden)

    Tuomas eEerola

    2013-07-01

    Full Text Available The aim of this study is to manipulate musical cues systematically to determine the aspects of music that contribute to emotional expression, and whether these cues operate in additive or interactive fashion, and whether the cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues and the ranked importance of these was established by multiple regression. The most important cue was mode followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result suggested that most cue levels contributed to the emotions in a linear fashion, explaining 77–89% of variance in ratings. Quadratic encoding of cues did lead to minor but significant increases of the models (0–8%). Finally, the interactions between the cues were non-existent suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music (Juslin & Lindström, 2010).

  4. Emotional expression in music: contribution, linearity, and additivity of primary musical cues

    Science.gov (United States)

    Eerola, Tuomas; Friberg, Anders; Bresin, Roberto

    2013-01-01

    The aim of this study is to manipulate musical cues systematically to determine the aspects of music that contribute to emotional expression, and whether these cues operate in additive or interactive fashion, and whether the cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues and the ranked importance of these was established by multiple regression. The most important cue was mode followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result suggested that most cue levels contributed to the emotions in a linear fashion, explaining 77–89% of variance in ratings. Quadratic encoding of cues did lead to minor but significant increases of the models (0–8%). Finally, the interactions between the cues were non-existent suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music (Juslin and Lindström, 2010). PMID:23908642

  5. Affinity for Music: A Study of the Role of Emotion in Musical Instrument Learning

    Science.gov (United States)

    StGeorge, Jennifer; Holbrook, Allyson; Cantwell, Robert

    2014-01-01

    For many people, the appeal of music lies in its connection to human emotions. A significant body of research has explored the emotions that are experienced through either the formal structure of music or through its symbolic messages. Yet in the instrumental music education field, this emotional connection is rarely examined. In this article, it…

  6. Emotional Responses to Music: Shifts in Frontal Brain Asymmetry Mark Periods of Musical Change.

    Science.gov (United States)

    Arjmand, Hussain-Abdulah; Hohagen, Jesper; Paton, Bryan; Rickard, Nikki S

    2017-01-01

    Recent studies have demonstrated increased activity in brain regions associated with emotion and reward when listening to pleasurable music. Unexpected change in musical features (intensity and tempo) - and thereby enhanced tension and anticipation - is proposed to be one of the primary mechanisms by which music induces a strong emotional response in listeners. Whether such musical features coincide with central measures of emotional response has not, however, been extensively examined. In this study, subjective and physiological measures of experienced emotion were obtained continuously from 18 participants (12 females, 6 males; 18-38 years) who listened to four stimuli - pleasant music, unpleasant music (dissonant manipulations of their own music), neutral music, and no music, in a counter-balanced order. Each stimulus was presented twice: electroencephalograph (EEG) data were collected during the first, while participants continuously subjectively rated the stimuli during the second presentation. Frontal asymmetry (FA) indices from frontal and temporal sites were calculated, and peak periods of bias toward the left (indicating a shift toward positive affect) were identified across the sample. The music pieces were also examined to define the temporal onset of key musical features. Subjective reports of emotional experience averaged across the condition confirmed participants rated their music selection as very positive, the scrambled music as negative, and the neutral music and silence as neither positive nor negative. Significant effects in FA were observed in the frontal electrode pair FC3-FC4, and the greatest increase in left bias from baseline was observed in response to pleasurable music. These results are consistent with findings from previous research. Peak FA responses at this site were also found to co-occur with key musical events relating to change, for instance, the introduction of a new motif, or an instrument change, or a change in low level acoustic

  7. Emotional Responses to Music: Shifts in Frontal Brain Asymmetry Mark Periods of Musical Change

    Directory of Open Access Journals (Sweden)

    Hussain-Abdulah Arjmand

    2017-12-01

    Full Text Available Recent studies have demonstrated increased activity in brain regions associated with emotion and reward when listening to pleasurable music. Unexpected change in musical features (intensity and tempo) – and thereby enhanced tension and anticipation – is proposed to be one of the primary mechanisms by which music induces a strong emotional response in listeners. Whether such musical features coincide with central measures of emotional response has not, however, been extensively examined. In this study, subjective and physiological measures of experienced emotion were obtained continuously from 18 participants (12 females, 6 males; 18–38 years) who listened to four stimuli—pleasant music, unpleasant music (dissonant manipulations of their own music), neutral music, and no music, in a counter-balanced order. Each stimulus was presented twice: electroencephalograph (EEG) data were collected during the first, while participants continuously subjectively rated the stimuli during the second presentation. Frontal asymmetry (FA) indices from frontal and temporal sites were calculated, and peak periods of bias toward the left (indicating a shift toward positive affect) were identified across the sample. The music pieces were also examined to define the temporal onset of key musical features. Subjective reports of emotional experience averaged across the condition confirmed participants rated their music selection as very positive, the scrambled music as negative, and the neutral music and silence as neither positive nor negative. Significant effects in FA were observed in the frontal electrode pair FC3–FC4, and the greatest increase in left bias from baseline was observed in response to pleasurable music. These results are consistent with findings from previous research. Peak FA responses at this site were also found to co-occur with key musical events relating to change, for instance, the introduction of a new motif, or an instrument change, or a
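
    The two records above derive frontal asymmetry (FA) indices from EEG at sites such as FC3-FC4. A common formulation, assumed here purely for illustration rather than taken from the authors' pipeline, is the difference in log alpha-band power between the right and left electrode; positive values indicate relatively greater left-frontal activation, conventionally read as a bias toward positive affect.

```python
# Minimal sketch of a frontal asymmetry (FA) index of the kind referred to
# above, using the common ln(right alpha power) - ln(left alpha power)
# formulation over an FC3/FC4 pair. The formula, sampling rate and random
# placeholder signals are assumptions, not the authors' exact pipeline.
import numpy as np
from scipy.signal import welch

fs = 256  # assumed sampling rate in Hz
rng = np.random.default_rng(1)
fc3 = rng.normal(size=fs * 60)  # placeholder for 60 s of EEG at FC3 (left)
fc4 = rng.normal(size=fs * 60)  # placeholder for 60 s of EEG at FC4 (right)

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Mean power spectral density in the alpha band (Welch estimate)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Positive FA = relatively lower left-frontal alpha (greater left activation),
# conventionally read as a bias toward positive affect.
fa_index = np.log(alpha_power(fc4, fs)) - np.log(alpha_power(fc3, fs))
print(f"FA index (FC4 vs FC3): {fa_index:.3f}")
```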

  8. Repetition and Emotive Communication in Music Versus Speech

    Directory of Open Access Journals (Sweden)

    Elizabeth Hellmuth Margulis

    2013-04-01

    Full Text Available Music and speech are often placed alongside one another as comparative cases. Their relative overlaps and disassociations have been well explored (e.g. Patel, 2010). But one key attribute distinguishing these two domains has often been overlooked: the greater preponderance of repetition in music in comparison to speech. Recent fMRI studies have shown that familiarity – achieved through repetition – is a critical component of emotional engagement with music (Pereira et al., 2011). If repetition is fundamental to emotional responses to music, and repetition is a key distinguisher between the domains of music and speech, then close examination of the phenomenon of repetition might help clarify the ways that music elicits emotion differently than speech.

  9. EEG-Based Analysis of the Emotional Effect of Music Therapy on Palliative Care Cancer Patients

    Directory of Open Access Journals (Sweden)

    Rafael Ramirez

    2018-03-01

    Full Text Available Music is known to have the power to induce strong emotions. The present study assessed, based on Electroencephalography (EEG) data, the emotional response of terminally ill cancer patients to a music therapy intervention in a randomized controlled trial. A sample of 40 participants from the palliative care unit in the Hospital del Mar in Barcelona was randomly assigned to two groups of 20. The first group [experimental group (EG)] participated in a session of music therapy (MT), and the second group [control group (CG)] was provided with company. Based on our previous work on EEG-based emotion detection, instantaneous emotional indicators in the form of a coordinate in the arousal-valence plane were extracted from the participants’ EEG data. The emotional indicators were analyzed in order to quantify (1) the overall emotional effect of MT on the patients compared to controls, and (2) the relative effect of the different MT techniques applied during each session. During each MT session, five conditions were considered: I (initial patient’s state before MT starts), C1 (passive listening), C2 (active listening), R (relaxation), and F (final patient’s state). EEG data analysis showed a significant increase in valence (p = 0.0004) and arousal (p = 0.003) between I and F in the EG. No significant changes were found in the CG. These results can be interpreted as a positive emotional effect of MT in advanced cancer patients. In addition, according to pre- and post-intervention questionnaire responses, participants in the EG also showed a significant decrease in tiredness, anxiety and breathing difficulties, as well as an increase in levels of well-being. No equivalent changes were observed in the CG.

  10. EEG-Based Analysis of the Emotional Effect of Music Therapy on Palliative Care Cancer Patients

    Science.gov (United States)

    Ramirez, Rafael; Planas, Josep; Escude, Nuria; Mercade, Jordi; Farriols, Cristina

    2018-01-01

    Music is known to have the power to induce strong emotions. The present study assessed, based on Electroencephalography (EEG) data, the emotional response of terminally ill cancer patients to a music therapy intervention in a randomized controlled trial. A sample of 40 participants from the palliative care unit in the Hospital del Mar in Barcelona was randomly assigned to two groups of 20. The first group [experimental group (EG)] participated in a session of music therapy (MT), and the second group [control group (CG)] was provided with company. Based on our previous work on EEG-based emotion detection, instantaneous emotional indicators in the form of a coordinate in the arousal-valence plane were extracted from the participants’ EEG data. The emotional indicators were analyzed in order to quantify (1) the overall emotional effect of MT on the patients compared to controls, and (2) the relative effect of the different MT techniques applied during each session. During each MT session, five conditions were considered: I (initial patient’s state before MT starts), C1 (passive listening), C2 (active listening), R (relaxation), and F (final patient’s state). EEG data analysis showed a significant increase in valence (p = 0.0004) and arousal (p = 0.003) between I and F in the EG. No significant changes were found in the CG. These results can be interpreted as a positive emotional effect of MT in advanced cancer patients. In addition, according to pre- and post-intervention questionnaire responses, participants in the EG also showed a significant decrease in tiredness, anxiety and breathing difficulties, as well as an increase in levels of well-being. No equivalent changes were observed in the CG. PMID:29551984
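
    The study above summarizes emotional state as an instantaneous coordinate in the arousal-valence plane derived from EEG. A frequently used approximation, assumed here for illustration and not necessarily the authors' exact method, estimates arousal from the beta/alpha power ratio at frontal electrodes and valence from hemispheric alpha asymmetry, as in the hedged sketch below.

```python
# Hedged sketch of turning frontal EEG into an instantaneous (arousal, valence)
# coordinate, in the spirit of the indicators mentioned above. The formulas
# (arousal ~ beta/alpha power ratio, valence ~ hemispheric alpha asymmetry),
# the electrode pair and the placeholder signals are common conventions
# assumed for illustration, not necessarily the authors' exact method.
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Mean Welch power in the [lo, hi] Hz band."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def arousal_valence(left, right, fs=256):
    """left/right: 1-D EEG arrays from left/right frontal electrodes (e.g. AF3/AF4)."""
    alpha_l, alpha_r = band_power(left, fs, 8, 13), band_power(right, fs, 8, 13)
    beta_l, beta_r = band_power(left, fs, 13, 30), band_power(right, fs, 13, 30)
    arousal = (beta_l + beta_r) / (alpha_l + alpha_r)  # more beta relative to alpha = higher arousal
    valence = np.log(alpha_r) - np.log(alpha_l)        # lower left alpha (left activation) = positive valence
    return arousal, valence

rng = np.random.default_rng(2)
a, v = arousal_valence(rng.normal(size=256 * 10), rng.normal(size=256 * 10))
print(f"arousal = {a:.2f}, valence = {v:.2f}")
```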

  11. Neural Processing of Emotional Musical and Nonmusical Stimuli in Depression.

    Directory of Open Access Journals (Sweden)

    Rebecca J Lepping

    Full Text Available Anterior cingulate cortex (ACC) and striatum are part of the emotional neural circuitry implicated in major depressive disorder (MDD). Music is often used for emotion regulation, and pleasurable music listening activates the dopaminergic system in the brain, including the ACC. The present study uses functional MRI (fMRI) and an emotional nonmusical and musical stimuli paradigm to examine how neural processing of emotionally provocative auditory stimuli is altered within the ACC and striatum in depression. Nineteen MDD and 20 never-depressed (ND) control participants listened to standardized positive and negative emotional musical and nonmusical stimuli during fMRI scanning and gave subjective ratings of valence and arousal following scanning. ND participants exhibited greater activation to positive versus negative stimuli in ventral ACC. When compared with ND participants, MDD participants showed a different pattern of activation in ACC. In the rostral part of the ACC, ND participants showed greater activation for positive information, while MDD participants showed greater activation to negative information. In dorsal ACC, the pattern of activation distinguished between the types of stimuli, with ND participants showing greater activation to music compared to nonmusical stimuli, while MDD participants showed greater activation to nonmusical stimuli, with the greatest response to negative nonmusical stimuli. No group differences were found in striatum. These results suggest that people with depression may process emotional auditory stimuli differently based on both the type of stimulation and the emotional content of that stimulation. This raises the possibility that music may be useful in retraining ACC function, potentially leading to more effective and targeted treatments.

  12. Modeling emotional content of music using system identification.

    Science.gov (United States)

    Korhonen, Mark D; Clausi, David A; Jernigan, M Ed

    2006-06-01

    Research was conducted to develop a methodology to model the emotional content of music as a function of time and musical features. Emotion is quantified using the dimensions valence and arousal, and system-identification techniques are used to create the models. Results demonstrate that system identification provides a means to generalize the emotional content for a genre of music. The average R2 statistic of a valid linear model structure is 21.9% for valence and 78.4% for arousal. The proposed method of constructing models of emotional content generalizes previous time-series models and removes ambiguity from classifiers of emotion.
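
    System identification of the kind described above typically means fitting a parametric dynamic model, for example an ARX structure, that predicts an emotion dimension at each time step from lagged musical features and its own past values. The sketch below fits a generic ARX model by least squares on synthetic data; the model orders, features and signals are illustrative assumptions, not the model structures validated in the study.

```python
# Generic sketch of the system-identification idea above: fit an ARX-style
# linear model that predicts a time-varying emotion dimension (here valence)
# from lagged audio features and its own past values via least squares.
import numpy as np

rng = np.random.default_rng(3)
T, n_feat = 500, 4                        # time frames and number of audio features
features = rng.normal(size=(T, n_feat))   # placeholder loudness/tempo/etc. trajectories
valence = np.zeros(T)
for t in range(1, T):                     # synthetic target with temporal dependence
    valence[t] = 0.7 * valence[t - 1] + 0.3 * features[t, 0] + rng.normal(0, 0.1)

na, nb = 2, 2                             # autoregressive and exogenous orders (assumed)
rows, targets = [], []
for t in range(max(na, nb), T):
    past_y = valence[t - na:t][::-1]              # y(t-1) ... y(t-na)
    past_u = features[t - nb:t][::-1].ravel()     # u(t-1) ... u(t-nb), all features
    rows.append(np.concatenate([past_y, past_u]))
    targets.append(valence[t])
X, y = np.asarray(rows), np.asarray(targets)

theta, *_ = np.linalg.lstsq(X, y, rcond=None)     # least-squares ARX parameters
pred = X @ theta
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"one-step-ahead R^2: {r2:.2f}")
```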

  13. The structural neuroanatomy of music emotion recognition: evidence from frontotemporal lobar degeneration.

    Science.gov (United States)

    Omar, Rohani; Henley, Susie M D; Bartlett, Jonathan W; Hailstone, Julia C; Gordon, Elizabeth; Sauter, Disa A; Frost, Chris; Scott, Sophie K; Warren, Jason D

    2011-06-01

    Despite growing clinical and neurobiological interest in the brain mechanisms that process emotion in music, these mechanisms remain incompletely understood. Patients with frontotemporal lobar degeneration (FTLD) frequently exhibit clinical syndromes that illustrate the effects of breakdown in emotional and social functioning. Here we investigated the neuroanatomical substrate for recognition of musical emotion in a cohort of 26 patients with FTLD (16 with behavioural variant frontotemporal dementia, bvFTD, 10 with semantic dementia, SemD) using voxel-based morphometry. On neuropsychological evaluation, patients with FTLD showed deficient recognition of canonical emotions (happiness, sadness, anger and fear) from music as well as faces and voices compared with healthy control subjects. Impaired recognition of emotions from music was specifically associated with grey matter loss in a distributed cerebral network including insula, orbitofrontal cortex, anterior cingulate and medial prefrontal cortex, anterior temporal and more posterior temporal and parietal cortices, amygdala and the subcortical mesolimbic system. This network constitutes an essential brain substrate for recognition of musical emotion that overlaps with brain regions previously implicated in coding emotional value, behavioural context, conceptual knowledge and theory of mind. Musical emotion recognition may probe the interface of these processes, delineating a profile of brain damage that is essential for the abstraction of complex social emotions. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Eyes wide shut: amygdala mediates eyes-closed effect on emotional experience with music.

    Science.gov (United States)

    Lerner, Yulia; Papo, David; Zhdanov, Andrey; Belozersky, Libi; Hendler, Talma

    2009-07-15

    The perceived emotional value of stimuli, and as a consequence the subjective emotional experience of them, can be affected by context-dependent styles of processing. The investigation of the neural correlates of emotional experience therefore requires accounting for such a variable, which is an experimental challenge. Closing the eyes affects the style of attending to auditory stimuli by modifying the perceptual relationship with the environment without changing the stimulus itself. In the current study, we used fMRI to characterize the neural mediators of such modification on the experience of emotionality in music. We assumed that the closed-eyes position would reveal an interplay between different levels of neural processing of emotions. More specifically, we focused on the amygdala as a central node of the limbic system and on its co-activation with the Locus Ceruleus (LC) and Ventral Prefrontal Cortex (VPFC), regions involved in processing, respectively, 'low' (visceral) and 'high' (cognitive-related) values of emotional stimuli. Fifteen healthy subjects listened to negative and neutral music excerpts with eyes closed or open. As expected, behavioral results showed that closing the eyes while listening to emotional music resulted in enhanced ratings of emotionality, specifically for negative music. Correspondingly, fMRI results showed greater activation in the amygdala when subjects listened to the emotional music with eyes closed relative to eyes open. Moreover, using voxel-based correlation and dynamic causal modeling analyses, we demonstrated that increased amygdala activation to negative music with eyes closed led to increased activations in the LC and VPFC. This finding supports a system-based model of perceived emotionality in which the amygdala plays a central role in mediating the effect of context-based processing style by recruiting neural operations involved in both visceral ('low') and cognitive ('high') processing of emotions.

  15. Effects of music interventions on emotional states and running performance.

    Science.gov (United States)

    Lane, Andrew M; Davis, Paul A; Devonport, Tracey J

    2011-01-01

    The present study compared the effects of two different music interventions on changes in emotional states before and during running, and also explored effects of music interventions upon performance outcome. Volunteer participants (n = 65) who regularly listened to music when running registered online to participate in a three-stage study. Participants attempted to attain a personally important running goal to establish baseline performance. Thereafter, participants were randomly assigned to either a self-selected music group or an Audiofuel music group. Audiofuel produce pieces of music designed to assist synchronous running. The self-selected music group followed guidelines for selecting motivating playlists. In both experimental groups, participants used the Brunel Music Rating Inventory-2 (BMRI-2) to facilitate selection of motivational music. Participants again completed the BMRI-2 post-intervention to assess the motivational qualities of Audiofuel music or the music they selected for use during the study. Results revealed no significant differences between self-selected music and Audiofuel music on all variables analyzed. Participants in both music groups reported increased pleasant emotions and decreased unpleasant emotions following intervention. Significant performance improvements were demonstrated post-intervention with participants reporting a belief that emotional states related to performance. Further analysis indicated that enhanced performance was significantly greater among participants reporting music to be motivational as indicated by high scores on the BMRI-2. Findings suggest that both individual athletes and practitioners should consider using the BMRI-2 when selecting music for running. Key points: Listening to music with a high motivational quotient as indicated by scores on the BMRI-2 was associated with enhanced running performance and meta-emotional beliefs that emotions experienced during running helped performance. Beliefs on the

  16. Music and Emotion - A Case for North Indian Classical Music.

    Science.gov (United States)

    Valla, Jeffrey M; Alappatt, Jacob A; Mathur, Avantika; Singh, Nandini C

    2017-01-01

    The ragas of North Indian Classical Music (NICM) have been historically known to elicit emotions. Recently, Mathur et al. (2015) provided empirical support for these historical assumptions, that distinct ragas elicit distinct emotional responses. In this review, we discuss the findings of Mathur et al. (2015) in the context of the structure of NICM. Using Mathur et al. (2015) as a demonstrative case-in-point, we argue that ragas of NICM can be viewed as uniquely designed stimulus tools for investigating the tonal and rhythmic influences on musical emotion.

  17. Recognition of the Emotional Content of Music Depending on the Characteristics of the Musical Material and Experience of Students

    Directory of Open Access Journals (Sweden)

    Knyazeva T.S.

    2015-02-01

    Full Text Available We studied the factors affecting the recognition of the emotional content of music. We tested hypotheses about the influence of the valence of the music, its ethnic style, and the listener's experience on the success of recognition. The empirical study involved 26 Russian musicians (average age of 25.7 years). For the study of musical perception we used a bipolar semantic differential. We found that the valence of the musical material affects the recognition of its emotional content, whereas ethnic style does not. It was also found that senior students recognize the emotional content of the music more effectively. The results point to the universal nature of the emotional and musical ear, which recognizes music of different ethnic styles equally successfully, and support the notion that negative valence of emotional content carries higher significance in musical perception. A study of the factors influencing the emotional understanding of music is important for the development of models of emotion recognition, for theoretical constructs of emotional intelligence, and for the theory and practice of music education.

  18. The role of music in deaf culture: deaf students' perception of emotion in music.

    Science.gov (United States)

    Darrow, Alice-Ann

    2006-01-01

    Although emotional interpretation of music is an individual and variable experience, researchers have found that typical listeners are quite consistent in associating basic or primary emotions such as happiness, sadness, fear, and anger to musical compositions. It has been suggested that an individual with a sensorineural hearing loss, or any lesion in auditory perceptors in the brain, may have trouble perceiving music emotionally. The purpose of the present study was to investigate whether students with a hearing loss who associate with the deaf culture assign the same emotions to music as students without a hearing loss. Sixty-two elementary and junior high students at a Midwestern state school for the deaf and students at neighboring elementary and junior high schools served as participants. Participants at the state school for the deaf had hearing losses ranging from moderate to severe. Twelve film score excerpts, composed to depict the primary emotions (happiness, sadness, and fear), were used as the musical stimuli. Participants were asked to assign an emotion to each excerpt. Results indicated a significant difference between the Deaf and typical hearing participants' responses, with hearing participants' responses more in agreement with the composers' intent. No significant differences were found for age or gender. Analyses of the Deaf participants' responses indicate that timbre, texture, and rhythm are perhaps the musical elements most influential in transmitting emotion to persons with a hearing loss. Adaptive strategies are suggested for assisting children who are deaf in accessing the elements of music intended to portray emotion.

  19. Music-evoked emotions: principles, brain correlates, and implications for therapy.

    Science.gov (United States)

    Koelsch, Stefan

    2015-03-01

    This paper describes principles underlying the evocation of emotion with music: evaluation, resonance, memory, expectancy/tension, imagination, understanding, and social functions. Each of these principles includes several subprinciples, and the framework on music-evoked emotions emerging from these principles and subprinciples is supposed to provide a starting point for a systematic, coherent, and comprehensive theory on music-evoked emotions that considers both reception and production of music, as well as the relevance of emotion-evoking principles for music therapy. © 2015 New York Academy of Sciences.

  20. Music and Emotion: the Dispositional or Arousal theory

    Directory of Open Access Journals (Sweden)

    Alessandra Buccella

    2012-05-01

    Full Text Available One of the ways of analysing the relationship between music and emotions is through musical expressiveness. As the theory I discuss in this paper puts it, expressiveness is a particular kind of music's secondary quality or, to use the term which gives the theory its name, a disposition of music to arouse a certain emotional response in listeners. The most accurate version of the dispositional theory is provided by Derek Matravers in his book Art and Emotion and in other papers: what I will try to do, then, is to illustrate Matravers' theory and claim that it is a good solution to many problems concerning music and its capacity to affect our inner states.

  1. Aesthetic Emotions Across Arts: A Comparison Between Painting and Music

    Science.gov (United States)

    Miu, Andrei C.; Pițur, Simina; Szentágotai-Tătar, Aurora

    2016-01-01

    Emotional responses to art have long been subject of debate, but only recently have they started to be investigated in affective science. The aim of this study was to compare perceptions regarding frequency of aesthetic emotions, contributing factors, and motivation which characterize the experiences of looking at painting and listening to music. Parallel surveys were filled in online by participants (N = 971) interested in music and painting. By comparing self-reported characteristics of these experiences, this study found that compared to listening to music, looking at painting was associated with increased frequency of wonder and decreased frequencies of joyful activation and power. In addition to increased vitality, as reflected by the latter two emotions, listening to music was also more frequently associated with emotions such as tenderness, nostalgia, peacefulness, and sadness. Compared to painting-related emotions, music-related emotions were perceived as more similar to emotions in other everyday life situations. Participants reported that stimulus features and previous knowledge made more important contributions to emotional responses to painting, whereas prior mood, physical context and the presence of other people were considered more important in relation to emotional responses to music. Self-education motivation was more frequently associated with looking at painting, whereas mood repair and keeping company motivations were reported more frequently in relation to listening to music. Participants with visual arts education reported increased vitality-related emotions in their experience of looking at painting. In contrast, no relation was found between music education and emotional responses to music. These findings offer a more general perspective on aesthetic emotions and encourage integrative research linking different types of aesthetic experience. PMID:26779072

  2. Aesthetic Emotions Across Arts: A Comparison Between Painting and Music.

    Science.gov (United States)

    Miu, Andrei C; Pițur, Simina; Szentágotai-Tătar, Aurora

    2015-01-01

    Emotional responses to art have long been subject of debate, but only recently have they started to be investigated in affective science. The aim of this study was to compare perceptions regarding frequency of aesthetic emotions, contributing factors, and motivation which characterize the experiences of looking at painting and listening to music. Parallel surveys were filled in online by participants (N = 971) interested in music and painting. By comparing self-reported characteristics of these experiences, this study found that compared to listening to music, looking at painting was associated with increased frequency of wonder and decreased frequencies of joyful activation and power. In addition to increased vitality, as reflected by the latter two emotions, listening to music was also more frequently associated with emotions such as tenderness, nostalgia, peacefulness, and sadness. Compared to painting-related emotions, music-related emotions were perceived as more similar to emotions in other everyday life situations. Participants reported that stimulus features and previous knowledge made more important contributions to emotional responses to painting, whereas prior mood, physical context and the presence of other people were considered more important in relation to emotional responses to music. Self-education motivation was more frequently associated with looking at painting, whereas mood repair and keeping company motivations were reported more frequently in relation to listening to music. Participants with visual arts education reported increased vitality-related emotions in their experience of looking at painting. In contrast, no relation was found between music education and emotional responses to music. These findings offer a more general perspective on aesthetic emotions and encourage integrative research linking different types of aesthetic experience.

  3. Aesthetic emotions across arts: A comparison between painting and music

    Directory of Open Access Journals (Sweden)

    Andrei C. Miu

    2016-01-01

    Full Text Available Emotional responses to art have long been subject of debate, but only recently have they started to be investigated in affective science. The aim of this study was to compare perceptions regarding frequency of aesthetic emotions, contributing factors and motivation which characterize the experiences of looking at painting and listening to music. Parallel surveys were filled in online by participants (N = 971) interested in music and painting. By comparing self-reported characteristics of these experiences, this study found that compared to listening to music, looking at painting was associated with increased frequency of wonder and decreased frequencies of joyful activation and power. In addition to increased vitality, as reflected by the latter two emotions, listening to music was also more frequently associated with emotions such as tenderness, nostalgia, peacefulness and sadness. Compared to painting-related emotions, music-related emotions were perceived as more similar to emotions in other everyday life situations. Participants reported that stimulus features and previous knowledge made more important contributions to emotional responses to painting, whereas prior mood, physical context and the presence of other people were considered more important in relation to emotional responses to music. Self-education motivation was more frequently associated with looking at painting, whereas mood repair and keeping company motivations were reported more frequently in relation to listening to music. Participants with visual arts education reported increased vitality-related emotions in their experience of looking at painting. In contrast, no relation was found between music education and emotional responses to music. These findings offer a more general perspective on aesthetic emotions and encourage integrative research linking different types of aesthetic experience.

  4. Functional MRI of music emotion processing in frontotemporal dementia.

    Science.gov (United States)

    Agustus, Jennifer L; Mahoney, Colin J; Downey, Laura E; Omar, Rohani; Cohen, Miriam; White, Mark J; Scott, Sophie K; Mancini, Laura; Warren, Jason D

    2015-03-01

    Frontotemporal dementia is an important neurodegenerative disorder of younger life led by profound emotional and social dysfunction. Here we used fMRI to assess brain mechanisms of music emotion processing in a cohort of patients with frontotemporal dementia (n = 15) in relation to healthy age-matched individuals (n = 11). In a passive-listening paradigm, we manipulated levels of emotion processing in simple arpeggio chords (mode versus dissonance) and emotion modality (music versus human emotional vocalizations). A complex profile of disease-associated functional alterations was identified with separable signatures of musical mode, emotion level, and emotion modality within a common, distributed brain network, including posterior and anterior superior temporal and inferior frontal cortices and dorsal brainstem effector nuclei. Separable functional signatures were identified post-hoc in patients with and without abnormal craving for music (musicophilia): a model for specific abnormal emotional behaviors in frontotemporal dementia. Our findings indicate the potential of music to delineate neural mechanisms of altered emotion processing in dementias, with implications for future disease tracking and therapeutic strategies. © 2014 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals Inc. on behalf of The New York Academy of Sciences.

  5. The role of the medial temporal limbic system in processing emotions in voice and music.

    Science.gov (United States)

    Frühholz, Sascha; Trost, Wiebke; Grandjean, Didier

    2014-12-01

    Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Cognitive approaches to analysis of emotions in music listening

    DEFF Research Database (Denmark)

    Hansen, Niels Chr.

    2013-01-01

    In recent years research into music cognition and perception has increasingly gained territory. A fact which is not always realised by music theorists is that, from the perspective of cognitive psychology and empirical methodology, the representatives of the expanding field of cognitive music...... research frequently address questions and propose theoretical frameworks that ought to have implications for music theory of a more traditional kind. Yet, such cognitive theories and empirical findings have not had radical impact on general analytical practice and teaching of music theory. For theorists...... interested in musical meaning the emotional impact of music has always been a major concern. In this paper I will explore how multiple cognitive theories and empirical findings can be applied to account for emotional response to three subjectively chosen excerpts of strongly emotion-inducing music: Namely...

  7. [Emotional response to music by postlingually-deafened adult cochlear implant users].

    Science.gov (United States)

    Wang, Shuo; Dong, Ruijuan; Zhou, Yun; Li, Jing; Qi, Beier; Liu, Bo

    2012-10-01

    To assess the emotional response to music by postlingually-deafened adult cochlear implant users. The Munich music questionnaire (MUMU) was used to match music experience and motivation for music use between 12 normal-hearing and 12 cochlear implant subjects. The emotion rating test in the Musical Sounds in Cochlear Implants (MuSIC) test battery was used to assess emotion perception ability in both normal-hearing and cochlear implant subjects. A total of 15 music phrases were used. Responses were given by selecting a rating on a scale from 1 to 10, where "1" represents a "very sad" feeling and "10" represents a "very happy" feeling. In comparison with normal-hearing subjects, the 12 cochlear implant subjects made less active use of music for emotional purposes. The emotion ratings of cochlear implant subjects were similar to those of normal-hearing subjects, but with large variability. Postlingually-deafened cochlear implant subjects on average performed similarly to normal-hearing subjects in the emotion rating tasks, but their active use of music for emotional purposes was clearly less.

  8. Basic, specific, mechanistic? Conceptualizing musical emotions in the brain.

    Science.gov (United States)

    Omigie, Diana

    2016-06-01

    The number of studies investigating music processing in the human brain continues to increase, with a large proportion of them focussing on the correlates of so-called musical emotions. The current Review highlights the recent development whereby such studies are no longer concerned only with basic emotions such as happiness and sadness but also with so-called music-specific or "aesthetic" ones such as nostalgia and wonder. It also highlights how mechanisms such as expectancy and empathy, which are seen as inducing musical emotions, are enjoying ever-increasing investigation and substantiation with physiological and neuroimaging methods. It is proposed that a combination of these approaches, namely, investigation of the precise mechanisms through which so-called music-specific or aesthetic emotions may arise, will provide the most important advances for our understanding of the unique nature of musical experience. © 2015 Wiley Periodicals, Inc.

  9. Biological bases of human musicality.

    Science.gov (United States)

    Perrone-Capano, Carla; Volpicelli, Floriana; di Porzio, Umberto

    2017-04-01

    Music is a universal language, present in all human societies. It pervades the lives of most human beings and can recall memories and feelings of the past, can exert positive effects on our mood, can be strongly evocative and ignite intense emotions, and can establish or strengthen social bonds. In this review, we summarize the research and recent progress on the origins and neural substrates of human musicality as well as the changes in brain plasticity elicited by listening or performing music. Indeed, music improves performance in a number of cognitive tasks and may have beneficial effects on diseased brains. The emerging picture begins to unravel how and why particular brain circuits are affected by music. Numerous studies show that music affects emotions and mood, as it is strongly associated with the brain's reward system. We can therefore assume that an in-depth study of the relationship between music and the brain may help to shed light on how the mind works and how the emotions arise and may improve the methods of music-based rehabilitation for people with neurological disorders. However, many facets of the mind-music connection still remain to be explored and enlightened.

  10. Music-Elicited Emotion Identification Using Optical Flow Analysis of Human Face

    Science.gov (United States)

    Kniaz, V. V.; Smirnova, Z. N.

    2015-05-01

    Human emotion identification from image sequences is in high demand. Possible applications range from the automatic smile-shutter function of consumer-grade digital cameras to Biofied Building technologies, which enable communication between a building space and its residents. The highly perceptual nature of human emotions makes their classification and identification complex, and a central difficulty is the subjective quality of the emotional classification of events that elicit human emotions. A variety of methods for the formal classification of emotions have been developed in music psychology. This work focuses on the identification of human emotions evoked by musical pieces using human face tracking and optical flow analysis. A facial feature tracking algorithm used for estimating facial feature speed and position is presented. Facial features were extracted from each image sequence using human face tracking with local binary pattern (LBP) features. Accurate relative speeds of facial features were estimated using optical flow analysis. The obtained relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique provides robust identification of human emotions elicited by musical pieces. The estimated models could be used for human emotion identification from image sequences in fields such as emotion-based musical background selection or mood-dependent radio.
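
    As an illustration of the optical-flow step described above, the hedged sketch below detects a face with a stock OpenCV Haar cascade, tracks corner points inside the face region with pyramidal Lucas-Kanade optical flow, and summarizes their per-frame speeds. The video filename is hypothetical, and this generic tracker is a stand-in rather than the authors' LBP-based implementation.

```python
# Hedged sketch: face detection + sparse optical flow to estimate facial
# feature speeds, one per frame. Not the authors' exact method.
import cv2
import numpy as np

cap = cv2.VideoCapture("face_clip.mp4")  # hypothetical input video
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(prev_gray, 1.3, 5)
assert len(faces) > 0, "no face detected in the first frame"
x, y, w, h = faces[0]

# Restrict corner detection to the detected face region.
mask = np.zeros_like(prev_gray)
mask[y:y + h, x:x + w] = 255
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01,
                              minDistance=5, mask=mask)

speeds = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_new, good_old = new_pts[status == 1], pts[status == 1]
    # Mean displacement of tracked facial points for this frame (pixels/frame).
    speeds.append(np.linalg.norm(good_new - good_old, axis=1).mean())
    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)

print("mean facial-feature speed per frame:", np.round(speeds, 2))
```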

  11. The Role of Emotional Skills in Music Education

    Science.gov (United States)

    Campayo-Muñoz, Emilia-Ángeles; Cabedo-Mas, Alberto

    2017-01-01

    Developing emotional skills is one of the challenges that concern teachers and researchers in education, since these skills promote well-being and enhance cognitive performance. Music is an excellent tool with which to express emotions and for this reason music education should play a role in individuals' emotional development. This paper reviews…

  12. Modeling Music Emotion Judgments Using Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Naresh N. Vempala

    2018-01-01

    Full Text Available Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments, including neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches that were used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.
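
    A minimal version of the model comparison described above, with audio features as input to models of perceived emotion, might look like the following scikit-learn sketch. The feature matrix and ratings are random placeholders standing in for the study's data, and the hyperparameters are arbitrary.

```python
# Illustrative comparison of the three model families named above (linear
# regression, random forests, neural networks) for predicting an emotion
# dimension from audio features, using placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 20))                        # e.g. 60 excerpts x 20 audio features
y = X[:, :3].sum(axis=1) + rng.normal(0, 0.5, 60)    # synthetic valence judgments

models = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "neural_net": MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {scores.mean():.2f}")
```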

  13. Emotional expressions in voice and music: same code, same effect?

    Science.gov (United States)

    Escoffier, Nicolas; Zhong, Jidan; Schirmer, Annett; Qiu, Anqi

    2013-08-01

    Scholars have documented similarities in the way voice and music convey emotions. By using functional magnetic resonance imaging (fMRI) we explored whether these similarities imply overlapping processing substrates. We asked participants to trace changes in either the emotion or pitch of vocalizations and music using a joystick. Compared to music, vocalizations more strongly activated superior and middle temporal cortex, cuneus, and precuneus. However, despite these differences, overlapping rather than differing regions emerged when comparing emotion with pitch tracing for music and vocalizations, respectively. Relative to pitch tracing, emotion tracing activated medial superior frontal and anterior cingulate cortex regardless of stimulus type. Additionally, we observed emotion specific effects in primary and secondary auditory cortex as well as in medial frontal cortex that were comparable for voice and music. Together these results indicate that similar mechanisms support emotional inferences from vocalizations and music and that these mechanisms tap on a general system involved in social cognition. Copyright © 2011 Wiley Periodicals, Inc.

  14. Processing of emotional faces in congenital amusia: An emotional music priming event-related potential study.

    Science.gov (United States)

    Zhishuai, Jin; Hong, Liu; Daxing, Wu; Pin, Zhang; Xuejing, Lu

    2017-01-01

    Congenital amusia is characterized by lifelong impairments in music perception and processing. It is unclear whether pitch detection deficits impact amusic individuals' perception of musical emotion. In the current work, 19 amusics and 21 healthy controls were subjected to electroencephalography (EEG) while being exposed to music excerpts and emotional faces. We assessed each individual's ability to discriminate positive- and negative-valenced emotional faces and analyzed electrophysiological indices, in the form of event-related potentials (ERPs) recorded at 32 sites, following exposure to emotionally positive or negative music excerpts. We observed smaller N2 amplitudes in response to facial expressions in the amusia group than in the control group, suggesting that amusics were less affected by the musical stimuli. The late-positive component (LPC) in amusics was similar to that in controls. Our results suggest that the neurocognitive deficit characteristic of congenital amusia is fundamentally an impairment in musical information processing rather than an impairment in emotional processing.

  15. Processing of emotional faces in congenital amusia: An emotional music priming event-related potential study

    Directory of Open Access Journals (Sweden)

    Jin Zhishuai

    2017-01-01

    Full Text Available Congenital amusia is characterized by lifelong impairments in music perception and processing. It is unclear whether pitch detection deficits impact amusic individuals' perception of musical emotion. In the current work, 19 amusics and 21 healthy controls were subjected to electroencephalography (EEG) while being exposed to music excerpts and emotional faces. We assessed each individual's ability to discriminate positive- and negative-valenced emotional faces and analyzed electrophysiological indices, in the form of event-related potentials (ERPs) recorded at 32 sites, following exposure to emotionally positive or negative music excerpts. We observed smaller N2 amplitudes in response to facial expressions in the amusia group than in the control group, suggesting that amusics were less affected by the musical stimuli. The late-positive component (LPC) in amusics was similar to that in controls. Our results suggest that the neurocognitive deficit characteristic of congenital amusia is fundamentally an impairment in musical information processing rather than an impairment in emotional processing.

  16. Music as Emotional Self-Regulation throughout Adulthood

    Science.gov (United States)

    Saarikallio, Suvi

    2011-01-01

    Emotional self-regulation is acknowledged as one of the most important reasons for musical engagement at all ages. Yet there is little knowledge on how this self-regulatory use of music develops across the life span. A qualitative study was conducted to initially explore central processes and strategies of the emotional self-regulation during…

  17. The role of emotion in musical improvisation: an analysis of structural features.

    Science.gov (United States)

    McPherson, Malinda J; Lopez-Gonzalez, Monica; Rankin, Summer K; Limb, Charles J

    2014-01-01

    One of the primary functions of music is to convey emotion, yet how music accomplishes this task remains unclear. For example, simple correlations between mode (major vs. minor) and emotion (happy vs. sad) do not adequately explain the enormous range, subtlety or complexity of musically induced emotions. In this study, we examined the structural features of unconstrained musical improvisations generated by jazz pianists in response to emotional cues. We hypothesized that musicians would not utilize any universal rules to convey emotions, but would instead combine heterogeneous musical elements together in order to depict positive and negative emotions. Our findings demonstrate a lack of simple correspondence between emotions and musical features of spontaneous musical improvisation. While improvisations in response to positive emotional cues were more likely to be in major keys, have faster tempos, faster key press velocities and more staccato notes when compared to negative improvisations, there was a wide distribution for each emotion with components that directly violated these primary associations. The finding that musicians often combine disparate features together in order to convey emotion during improvisation suggests that structural diversity may be an essential feature of the ability of music to express a wide range of emotion.

  18. The effects of emotion on memory for music and vocalisations.

    Science.gov (United States)

    Aubé, William; Peretz, Isabelle; Armony, Jorge L

    2013-01-01

    Music is a powerful tool for communicating emotions which can elicit memories through associative mechanisms. However, it is currently unknown whether emotion can modulate memory for music without reference to a context or personal event. We conducted three experiments to investigate the effect of basic emotions (fear, happiness, and sadness) on recognition memory for music, using short, novel stimuli explicitly created for research purposes, and compared them with nonlinguistic vocalisations. Results showed better memory accuracy for musical clips expressing fear and, to some extent, happiness. In the case of nonlinguistic vocalisations we confirmed a memory advantage for all emotions tested. A correlation between memory accuracy for music and vocalisations was also found, particularly in the case of fearful expressions. These results confirm that emotional expressions, particularly fearful ones, conveyed by music can influence memory as has been previously shown for other forms of expressions, such as faces and vocalisations.

  19. (A)musicality in Williams syndrome: Examining relationships among auditory perception, musical skill, and emotional responsiveness to music

    Directory of Open Access Journals (Sweden)

    Miriam Lense

    2013-08-01

    Full Text Available Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and typically developing individuals with and without amusia.

  20. Congruence of happy and sad emotion in music and faces modifies cortical audiovisual activation.

    Science.gov (United States)

    Jeong, Jeong-Won; Diwadkar, Vaibhav A; Chugani, Carla D; Sinsoongsud, Piti; Muzik, Otto; Behen, Michael E; Chugani, Harry T; Chugani, Diane C

    2011-02-14

    happy (p=0.008) and sad faces were rated as sadder (p=0.002). Happy-sad congruence across modalities may enhance activity in auditory regions while incongruence appears to impact the perception of visual affect, leading to increased activation in face processing regions such as the FG. We suggest that greater understanding of the neural bases of happy-sad congruence across modalities can shed light on basic mechanisms of affective perception and experience and may lead to novel insights in the study of emotion regulation and therapeutic use of music. Copyright © 2010 Elsevier Inc. All rights reserved.

  1. Emotional reactions to music: psychophysiological correlates and applications to affective disorders

    OpenAIRE

    Kalda, Tiina

    2013-01-01

    Music has been used to evoke emotions for centuries. The mechanisms underlying this effect have remained largely unclear. This thesis contributes to research on how music evokes emotions by investigating two mechanisms from the model of Juslin and Västfjäll (2008) - musical expectancy and emotional contagion. In the perception studies the focus is on how musical expectancy violations are detected by either musically trained or untrained individuals. In the music-making studies, we concentr...

  2. Empathy manipulation impacts music-induced emotions: a psychophysiological study on opera.

    Directory of Open Access Journals (Sweden)

    Andrei C Miu

    Full Text Available This study investigated the effects of voluntarily empathizing with a musical performer (i.e., cognitive empathy) on music-induced emotions and their underlying physiological activity. N = 56 participants watched video-clips of two operatic compositions performed in concerts, with low or high empathy instructions. Heart rate and heart rate variability, skin conductance level (SCL), and respiration rate (RR) were measured during music listening, and music-induced emotions were quantified using the Geneva Emotional Music Scale immediately after music listening. Listening to the aria with sad content in a high empathy condition facilitated the emotion of nostalgia and decreased SCL, in comparison to the low empathy condition. Listening to the song with happy content in a high empathy condition also facilitated the emotion of power and increased RR, in comparison to the low empathy condition. To our knowledge, this study offers the first experimental evidence that cognitive empathy influences emotion psychophysiology during music listening.

  3. Learning Combinations of Multiple Feature Representations for Music Emotion Prediction

    DEFF Research Database (Denmark)

    Madsen, Jens; Jensen, Bjørn Sand; Larsen, Jan

    2015-01-01

    Music consists of several structures and patterns evolving through time which greatly influence the human decoding of higher-level cognitive aspects of music, like the emotions expressed in music. For tasks such as genre, tag and emotion recognition, these structures have often been identified
...... and used as individual and non-temporal features and representations. In this work, we address the hypothesis of whether using multiple temporal and non-temporal representations of different features is beneficial for modeling music structure with the aim to predict the emotions expressed in music. We test...
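
    As a rough illustration of combining a non-temporal representation with a temporal one for emotion prediction, the sketch below summarizes MFCCs both as static statistics and as statistics of their frame-to-frame deltas, then concatenates the two views for a simple regressor (assuming the librosa library for feature extraction). The audio, annotations and regressor are placeholder assumptions, not the representations or models used in the work above.

```python
# Rough illustration: non-temporal (static MFCC statistics) plus temporal
# (delta-MFCC statistics) representations concatenated for a regressor.
import numpy as np
import librosa
from sklearn.linear_model import Ridge

sr = 22050
rng = np.random.default_rng(5)
excerpts = [rng.normal(size=sr * 5) for _ in range(10)]   # noise standing in for 5 s music clips
valence = rng.uniform(-1, 1, size=10)                     # hypothetical emotion annotations

def feature_vector(y):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # frame-level spectral features
    delta = librosa.feature.delta(mfcc)                   # temporal dynamics of the same features
    non_temporal = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
    temporal = np.concatenate([delta.mean(axis=1), delta.std(axis=1)])
    return np.concatenate([non_temporal, temporal])

X = np.vstack([feature_vector(y) for y in excerpts])
model = Ridge(alpha=1.0).fit(X, valence)
print("in-sample R^2 with combined representations:", round(model.score(X, valence), 2))
```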

  4. Biased emotional recognition in depression: perception of emotions in music by depressed patients.

    Science.gov (United States)

    Punkanen, Marko; Eerola, Tuomas; Erkkilä, Jaakko

    2011-04-01

    Depression is a highly prevalent mood disorder that impairs a person's social skills and also their quality of life. Populations affected with depression also suffer from a higher mortality rate. Depression affects a person's ability to recognize emotions. We designed a novel experiment to test the hypothesis that depressed patients show a judgment bias towards negative emotions. To investigate how depressed patients differ in their perception of emotions conveyed by musical examples, both healthy (n=30) and depressed (n=79) participants were presented with a set of 30 musical excerpts, representing one of five basic target emotions, and asked to rate each excerpt using five Likert scales that represented the amount of each one of those same emotions perceived in the example. Depressed patients showed moderate but consistent negative self-report biases both in the overall use of the scales and their particular application to certain target emotions, when compared to healthy controls. Also, the severity of the clinical state (depression, anxiety and alexithymia) had an effect on the self-report biases for both positive and negative emotion ratings, particularly depression and alexithymia. Only musical stimuli were used, and they were all clear examples of one of the basic emotions of happiness, sadness, fear, anger and tenderness. No neutral or ambiguous excerpts were included. Depressed patients' negative emotional bias was demonstrated using musical stimuli. This suggests that the evaluation of emotional qualities in music could become a means to discriminate between depressed and non-depressed subjects. The practical implications of the present study relate both to diagnostic uses of such perceptual evaluations, as well as a better understanding of the emotional regulation strategies of the patients. Copyright © 2010 Elsevier B.V. All rights reserved.

  5. The role of emotion in musical improvisation: an analysis of structural features.

    Directory of Open Access Journals (Sweden)

    Malinda J McPherson

    Full Text Available One of the primary functions of music is to convey emotion, yet how music accomplishes this task remains unclear. For example, simple correlations between mode (major vs. minor) and emotion (happy vs. sad) do not adequately explain the enormous range, subtlety or complexity of musically induced emotions. In this study, we examined the structural features of unconstrained musical improvisations generated by jazz pianists in response to emotional cues. We hypothesized that musicians would not utilize any universal rules to convey emotions, but would instead combine heterogeneous musical elements together in order to depict positive and negative emotions. Our findings demonstrate a lack of simple correspondence between emotions and musical features of spontaneous musical improvisation. While improvisations in response to positive emotional cues were more likely to be in major keys, have faster tempos, faster key press velocities and more staccato notes when compared to negative improvisations, there was a wide distribution for each emotion with components that directly violated these primary associations. The finding that musicians often combine disparate features together in order to convey emotion during improvisation suggests that structural diversity may be an essential feature of the ability of music to express a wide range of emotion.

  6. The effect of music background on the emotional appraisal of film sequences

    Directory of Open Access Journals (Sweden)

    Pavlović Ivanka

    2011-01-01

    Full Text Available In this study the effects of musical background on the emotional appraisal of film sequences were investigated. Four pairs of polar emotions defined in Plutchik’s model were used as basic emotional qualities: joy-sadness, anticipation-surprise, fear-anger, and trust-disgust. In the preliminary study eight film sequences and eight music themes were selected as the best representatives of all eight Plutchik’s emotions. In the main experiment the participants judged the emotional qualities of film-music combinations on eight seven-point scales. Half of the combinations were congruent (e.g. joyful film - joyful music), and half were incongruent (e.g. joyful film - sad music). Results have shown that visual information (film) had greater effects on the emotional appraisal than auditory information (music). The modulation effects of the music background depend on emotional qualities. In some incongruent combinations (joy-sadness) the modulations in the expected directions were obtained (e.g. joyful music reduces the sadness of a sad film), in some cases (anger-fear) no modulation effects were obtained, and in some cases (trust-disgust, anticipation-surprise) the modulation effects were in an unexpected direction (e.g. trustful music increased the appraisal of disgust of a disgusting film). These results suggest that the appraisals of conjoint effects of emotions depend on the medium (film masks the music) and the emotional quality (three types of modulation effects).

  7. EFFECTS OF MUSIC INTERVENTIONS ON EMOTIONAL STATES AND RUNNING PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Andrew M. Lane

    2011-06-01

    Full Text Available The present study compared the effects of two different music interventions on changes in emotional states before and during running, and also explored the effects of the music interventions upon performance outcome. Volunteer participants (n = 65) who regularly listened to music when running registered online to participate in a three-stage study. Participants attempted to attain a personally important running goal to establish baseline performance. Thereafter, participants were randomly assigned to either a self-selected music group or an Audiofuel music group. Audiofuel produces pieces of music designed to assist synchronous running. The self-selected music group followed guidelines for selecting motivating playlists. In both experimental groups, participants used the Brunel Music Rating Inventory-2 (BMRI-2) to facilitate selection of motivational music. Participants again completed the BMRI-2 post-intervention to assess the motivational qualities of the Audiofuel music or the music they selected for use during the study. Results revealed no significant differences between self-selected music and Audiofuel music on all variables analyzed. Participants in both music groups reported increased pleasant emotions and decreased unpleasant emotions following the intervention. Significant performance improvements were demonstrated post-intervention, with participants reporting a belief that emotional states related to performance. Further analysis indicated that enhanced performance was significantly greater among participants reporting music to be motivational, as indicated by high scores on the BMRI-2. Findings suggest that both individual athletes and practitioners should consider using the BMRI-2 when selecting music for running.

  8. Emotion Index of Cover Song Music Video Clips based on Facial Expression Recognition

    DEFF Research Database (Denmark)

    Kavallakis, George; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2017-01-01

    This paper presents a scheme for creating an emotion index of cover song music video clips by recognizing and classifying facial expressions of the artist in the video. More specifically, it fuses effective and robust algorithms which are employed for expression recognition, along with the use...... of a neural network system using the features extracted by the SIFT algorithm. We also support the need for this fusion of different expression recognition algorithms, because of the way that emotions are linked to facial expressions in music video clips....
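
    One branch of such a pipeline can be sketched roughly as follows, assuming OpenCV for face detection and SIFT and scikit-learn for the classifier; the fusion of several expression-recognition algorithms described in the record is not reproduced, and the frame files and emotion labels are hypothetical placeholders.

      # Rough sketch: SIFT descriptors pooled from a detected face region,
      # classified with a small neural network. Inputs and labels are placeholders.
      import cv2
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      face_cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
      sift = cv2.SIFT_create()

      def face_descriptor(frame_bgr):
          gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
          faces = face_cascade.detectMultiScale(gray, 1.3, 5)
          if len(faces) == 0:
              return None
          x, y, w, h = faces[0]
          _, desc = sift.detectAndCompute(gray[y:y + h, x:x + w], None)
          return None if desc is None else desc.mean(axis=0)   # crude 128-d pooling

      labeled = [("frame_001.png", 1), ("frame_002.png", 2)]   # hypothetical frames and labels
      X, y = [], []
      for path, label in labeled:
          frame = cv2.imread(path)
          desc = None if frame is None else face_descriptor(frame)
          if desc is not None:
              X.append(desc)
              y.append(label)
      clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(np.vstack(X), np.array(y))
      print(clf.predict(np.vstack(X)))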

  9. Emotional memory for musical excerpts in young and older adults.

    OpenAIRE

    Irene Alonso; Delphine Dellacherie; Séverine Samson

    2015-01-01

    The emotions evoked by music can enhance recognition of excerpts. It has been suggested that memory is better for high than for low arousing music (Eschrich et al., 2005; Samson et al., 2009), but it remains unclear whether positively (Eschrich et al., 2008) or negatively valenced music (Aubé et al., 2013; Vieillard and Gilet, 2013) may be better recognized. Moreover, we still know very little about the influence of age on emotional memory for music. To address these issues, we tested emotion...

  10. The Musical Emotional Bursts: A validated set of musical affect bursts to investigate auditory affective processing.

    Directory of Open Access Journals (Sweden)

    Sébastien Paquette

    2013-08-01

    Full Text Available The Musical Emotional Bursts (MEB) consist of 80 brief musical executions expressing basic emotional states (happiness, sadness and fear) and neutrality. These musical bursts were designed to be the musical analogue of the Montreal Affective Voices (MAV) – a set of brief non-verbal affective vocalizations portraying different basic emotions. The MEB consist of short (mean duration: 1.6 sec) improvisations on a given emotion or of imitations of a given MAV stimulus, played on a violin (n:40) or a clarinet (n:40). The MEB arguably represent a primitive form of musical emotional expression, just like the MAV represent a primitive form of vocal, nonlinguistic emotional expression. To create the MEB, stimuli were recorded from 10 violinists and 10 clarinetists, and then evaluated by 60 participants. Participants evaluated 240 stimuli (30 stimuli x 4 [3 emotions + neutral] x 2 instruments) by performing either a forced-choice emotion categorization task, a valence rating task or an arousal rating task (20 subjects per task); 40 MAVs were also used in the same session with similar task instructions. Recognition accuracy of emotional categories expressed by the MEB (n:80) was lower than for the MAVs but still very high, with an average percent correct recognition score of 80.4%. The highest recognition accuracies were obtained for happy clarinet (92.0%) and fearful or sad violin (88.0% each) MEB stimuli. The MEB can be used to compare the cerebral processing of emotional expressions in music and vocal communication, or for testing affective perception in patients with communication problems.

  11. Emotional memory for musical excerpts in young and older adults

    Science.gov (United States)

    Alonso, Irene; Dellacherie, Delphine; Samson, Séverine

    2015-01-01

    The emotions evoked by music can enhance recognition of excerpts. It has been suggested that memory is better for high than for low arousing music (Eschrich et al., 2005; Samson et al., 2009), but it remains unclear whether positively (Eschrich et al., 2008) or negatively valenced music (Aubé et al., 2013; Vieillard and Gilet, 2013) may be better recognized. Moreover, we still know very little about the influence of age on emotional memory for music. To address these issues, we tested emotional memory for music in young and older adults using musical excerpts varying in terms of arousal and valence. Participants completed immediate and 24 h delayed recognition tests. We predicted highly arousing excerpts to be better recognized by both groups in immediate recognition. We hypothesized that arousal may compensate for consolidation deficits in aging, thus showing a more prominent benefit of high over low arousing stimuli in older than younger adults on delayed recognition. We also hypothesized worse retention of negative excerpts for the older group, resulting in a recognition benefit for positive over negative excerpts specific to older adults. Our results suggest that although older adults had worse recognition than young adults overall, effects of emotion on memory do not seem to be modified by aging. Results on immediate recognition suggest that recognition of low arousing excerpts can be affected by valence, with better memory for positive relative to negative low arousing music. However, 24 h delayed recognition results demonstrate effects of emotion on memory consolidation regardless of age, with a recognition benefit for high arousal and for negatively valenced music. The present study highlights the role of emotion on memory consolidation. Findings are examined in light of the literature on emotional memory for music and for other stimuli. We finally discuss the implications of the present results for potential music interventions in aging and dementia.

  12. Emotional memory for musical excerpts in young and older adults.

    Science.gov (United States)

    Alonso, Irene; Dellacherie, Delphine; Samson, Séverine

    2015-01-01

    The emotions evoked by music can enhance recognition of excerpts. It has been suggested that memory is better for high than for low arousing music (Eschrich et al., 2005; Samson et al., 2009), but it remains unclear whether positively (Eschrich et al., 2008) or negatively valenced music (Aubé et al., 2013; Vieillard and Gilet, 2013) may be better recognized. Moreover, we still know very little about the influence of age on emotional memory for music. To address these issues, we tested emotional memory for music in young and older adults using musical excerpts varying in terms of arousal and valence. Participants completed immediate and 24 h delayed recognition tests. We predicted highly arousing excerpts to be better recognized by both groups in immediate recognition. We hypothesized that arousal may compensate for consolidation deficits in aging, thus showing a more prominent benefit of high over low arousing stimuli in older than younger adults on delayed recognition. We also hypothesized worse retention of negative excerpts for the older group, resulting in a recognition benefit for positive over negative excerpts specific to older adults. Our results suggest that although older adults had worse recognition than young adults overall, effects of emotion on memory do not seem to be modified by aging. Results on immediate recognition suggest that recognition of low arousing excerpts can be affected by valence, with better memory for positive relative to negative low arousing music. However, 24 h delayed recognition results demonstrate effects of emotion on memory consolidation regardless of age, with a recognition benefit for high arousal and for negatively valenced music. The present study highlights the role of emotion on memory consolidation. Findings are examined in light of the literature on emotional memory for music and for other stimuli. We finally discuss the implications of the present results for potential music interventions in aging and dementia.

  13. Emotional memory for musical excerpts in young and older adults.

    Directory of Open Access Journals (Sweden)

    Irene Alonso

    2015-03-01

    Full Text Available The emotions evoked by music can enhance recognition of excerpts. It has been suggested that memory is better for high than for low arousing music (Eschrich et al., 2005; Samson et al., 2009), but it remains unclear whether positively (Eschrich et al., 2008) or negatively valenced music (Aubé et al., 2013; Vieillard and Gilet, 2013) may be better recognized. Moreover, we still know very little about the influence of age on emotional memory for music. To address these issues, we tested emotional memory for music in young and older adults using musical excerpts varying in terms of arousal and valence. Participants completed immediate and 24 h delayed recognition tests. We predicted highly arousing excerpts to be better recognized by both groups in immediate recognition. We hypothesized that arousal may compensate for consolidation deficits in aging, thus showing a more prominent benefit of high over low arousing stimuli in older than younger adults on delayed recognition. We also hypothesized worse retention of negative excerpts for the older group, resulting in a recognition benefit for positive over negative excerpts specific to older adults. Our results suggest that although older adults had worse recognition than young adults overall, effects of emotion on memory do not seem to be modified by aging. Results on immediate recognition suggest that recognition of low arousing excerpts can be affected by valence, with better memory for positive relative to negative low arousing music. However, 24 h delayed recognition results demonstrate effects of emotion on memory consolidation regardless of age, with a recognition benefit for high arousal and for negatively valenced music. The present study highlights the role of emotion on memory consolidation. Findings are examined in light of the literature on emotional memory for music and for other stimuli. We finally discuss the implications of the present results for potential music interventions in aging and dementia.

  14. Theory-guided Therapeutic Function of Music to facilitate emotion regulation development in preschool-aged children.

    Science.gov (United States)

    Sena Moore, Kimberly; Hanson-Abromeit, Deanna

    2015-01-01

    Emotion regulation (ER) is an umbrella term to describe interactive, goal-dependent explicit, and implicit processes that are intended to help an individual manage and shift an emotional experience. The primary window for appropriate ER development occurs during the infant, toddler, and preschool years. Atypical ER development is considered a risk factor for mental health problems and has been implicated as a primary mechanism underlying childhood pathologies. Current treatments are predominantly verbal- and behavioral-based and lack the opportunity to practice in-the-moment management of emotionally charged situations. There is also an absence of caregiver-child interaction in these treatment strategies. Based on behavioral and neural support for music as a therapeutic mechanism, the incorporation of intentional music experiences, facilitated by a music therapist, may be one way to address these limitations. Musical Contour Regulation Facilitation (MCRF) is an interactive therapist-child music-based intervention for ER development practice in preschoolers. The MCRF intervention uses the deliberate contour and temporal structure of a music therapy session to mirror the changing flow of the caregiver-child interaction through the alternation of high arousal and low arousal music experiences. The purpose of this paper is to describe the Therapeutic Function of Music (TFM), a theory-based description of the structural characteristics for a music-based stimulus to musically facilitate developmentally appropriate high arousal and low arousal in-the-moment ER experiences. The TFM analysis is based on a review of the music theory, music neuroscience, and music development literature and provides a preliminary model of the structural characteristics of the music as a core component of the MCRF intervention.

  15. Theory-guided Therapeutic Function of Music to facilitate emotion regulation development in preschool-aged children

    Science.gov (United States)

    Sena Moore, Kimberly; Hanson-Abromeit, Deanna

    2015-01-01

    Emotion regulation (ER) is an umbrella term to describe interactive, goal-dependent explicit, and implicit processes that are intended to help an individual manage and shift an emotional experience. The primary window for appropriate ER development occurs during the infant, toddler, and preschool years. Atypical ER development is considered a risk factor for mental health problems and has been implicated as a primary mechanism underlying childhood pathologies. Current treatments are predominantly verbal- and behavioral-based and lack the opportunity to practice in-the-moment management of emotionally charged situations. There is also an absence of caregiver–child interaction in these treatment strategies. Based on behavioral and neural support for music as a therapeutic mechanism, the incorporation of intentional music experiences, facilitated by a music therapist, may be one way to address these limitations. Musical Contour Regulation Facilitation (MCRF) is an interactive therapist-child music-based intervention for ER development practice in preschoolers. The MCRF intervention uses the deliberate contour and temporal structure of a music therapy session to mirror the changing flow of the caregiver–child interaction through the alternation of high arousal and low arousal music experiences. The purpose of this paper is to describe the Therapeutic Function of Music (TFM), a theory-based description of the structural characteristics for a music-based stimulus to musically facilitate developmentally appropriate high arousal and low arousal in-the-moment ER experiences. The TFM analysis is based on a review of the music theory, music neuroscience, and music development literature and provides a preliminary model of the structural characteristics of the music as a core component of the MCRF intervention. PMID:26528171

  16. Emotions induced by operatic music: psychophysiological effects of music, plot, and acting: a scientist's tribute to Maria Callas.

    Science.gov (United States)

    Balteş, Felicia Rodica; Avram, Julia; Miclea, Mircea; Miu, Andrei C

    2011-06-01

    Operatic music involves both singing and acting (as well as a rich audiovisual background arising from the orchestra and elaborate scenery and costumes) that multiply the mechanisms by which emotions are induced in listeners. The present study investigated the effects of music, plot, and acting performance on emotions induced by opera. There were three experimental conditions: (1) participants listened to a musically complex and dramatically coherent excerpt from Tosca; (2) they read a summary of the plot and listened to the same musical excerpt again; and (3) they re-listened to the music while they watched the subtitled film of this acting performance. In addition, a control condition was included, in which an independent sample of participants successively listened three times to the same musical excerpt. We measured subjective changes using both dimensional and specific music-induced emotion questionnaires. Cardiovascular, electrodermal, and respiratory responses were also recorded, and the participants kept track of their musical chills. Music listening alone elicited positive emotion and autonomic arousal, seen in faster heart rate, but slower respiration rate and reduced skin conductance. Knowing the (sad) plot while listening to the music a second time reduced positive emotions (peacefulness, joyful activation) and increased negative ones (sadness), while high autonomic arousal was maintained. Watching the acting performance increased emotional arousal and changed its valence again (from less positive/sad to transcendent), in the context of continued high autonomic arousal. The repeated exposure to music did not by itself induce this pattern of modifications. These results indicate that the multiple musical and dramatic means involved in operatic performance specifically contribute to the genesis of music-induced emotions and their physiological correlates. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Animal signals and emotion in music: Coordinating affect across groups

    Directory of Open Access Journals (Sweden)

    Gregory A. Bryant

    2013-12-01

    Full Text Available Researchers studying the emotional impact of music have not traditionally been concerned with the principled relationship between form and function in evolved animal signals. The acoustic structure of musical forms is related in important ways to emotion perception, and thus research on nonhuman animal vocalizations is relevant for understanding emotion in music. Musical behavior occurs in cultural contexts that include many other coordinated activities which mark group identity, and can allow people to communicate within and between social alliances. The emotional impact of music might be best understood as a proximate mechanism serving an ultimately social function. Here I describe recent work that reveals intimate connections between properties of certain animal signals and evocative aspects of human music, including (1) examinations of the role of nonlinearities (e.g., broadband noise) in nonhuman animal vocalizations, and the analogous production and perception of these features in human music, and (2) an analysis of group musical performances and possible relationships to nonhuman animal chorusing and emotional contagion effects. Communicative features in music are likely due primarily to evolutionary byproducts of phylogenetically older, but still intact communication systems. But in some cases, such as the coordinated rhythmic sounds produced by groups of musicians, our appreciation and emotional engagement might be due to the operation of an adaptive social signaling system. Future empirical work should examine human musical behavior through the comparative lens of behavioral ecology and an adaptationist cognitive science. By this view, particular coordinated sound combinations generated by musicians exploit evolved perceptual response biases—many shared across species—and proliferate through cultural evolutionary processes.

  18. Psychoacoustic cues to emotion in speech prosody and music.

    Science.gov (United States)

    Coutinho, Eduardo; Dibben, Nicola

    2013-01-01

    There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music close to the responses of human subjects. We show that a significant part of the listeners' second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
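
    Several of these features can be approximated with standard audio tooling. The sketch below assumes librosa (not named in the record): RMS energy stands in for loudness, onset strength for spectral flux, and a YIN pitch track for the melody/prosody contour; sharpness and roughness need dedicated psychoacoustic models and are omitted, and the file name and parameters are illustrative.

      # Approximate extraction of some of the seven psychoacoustic cues listed above.
      import numpy as np
      import librosa

      y, sr = librosa.load("excerpt.wav", sr=22050, mono=True)   # placeholder file

      rms = librosa.feature.rms(y=y)[0]                          # rough proxy for loudness
      tempo, _ = librosa.beat.beat_track(y=y, sr=sr)             # tempo / speech-rate analogue
      centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
      flux = librosa.onset.onset_strength(y=y, sr=sr)            # closely related to spectral flux
      f0 = librosa.yin(y, fmin=65, fmax=1000, sr=sr)             # melody / prosody contour

      features = {
          "loudness_mean": float(rms.mean()),
          "tempo_bpm": float(np.atleast_1d(tempo)[0]),
          "spectral_centroid_mean": float(centroid.mean()),
          "spectral_flux_mean": float(flux.mean()),
          "f0_contour_std": float(np.nanstd(f0)),
      }
      print(features)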

  19. Studying induced musical emotion via a corpus of annotations collected through crowd‐sourcing

    NARCIS (Netherlands)

    Aljanaki, Anna; Wiering, Frans; Veltkamp, Remco

    2014-01-01

    One of the major reasons why music is so enjoyable is its emotional impact. For many people, music is an important everyday aid to emotional regulation. As such, music is used by music therapists and in the entertainment industry. Recently, mechanisms of emotional induction through music received a

  20. Anxiety, Sadness, and Emotion Specificity: The Role of Music in Consumer Emotion and Advertisement Evaluation

    Directory of Open Access Journals (Sweden)

    Felix Septianto

    2013-12-01

    Full Text Available Although music can influence consumer judgment processes and behavior in diverse ways, it is still unclear whether music can evoke discrete emotions in consumers and influence consumer evaluation of certain advertisements. This research proposes that music can evoke sad and anxious emotions in consumers; subsequently, consumers would regulate their negative emotions in accordance with their emotion orientations: consumers who feel sad would show higher evaluation of a happy-themed advertisement, while consumers who feel anxious would show higher evaluation of a calm-themed advertisement. This paper concludes with a discussion of the theoretical and practical implications of this study.

  1. The Role of Emotion in Musical Improvisation: An Analysis of Structural Features

    OpenAIRE

    McPherson, Malinda J.; Lopez-Gonzalez, Monica; Rankin, Summer K.; Limb, Charles J.

    2014-01-01

    One of the primary functions of music is to convey emotion, yet how music accomplishes this task remains unclear. For example, simple correlations between mode (major vs. minor) and emotion (happy vs. sad) do not adequately explain the enormous range, subtlety or complexity of musically induced emotions. In this study, we examined the structural features of unconstrained musical improvisations generated by jazz pianists in response to emotional cues. We hypothesized that musicians would not u...

  2. Intact brain processing of musical emotions in autism spectrum disorder, but more cognitive load and arousal in happy versus sad music

    Directory of Open Access Journals (Sweden)

    Line Gebauer

    2014-07-01

    Full Text Available Music is a potent source for eliciting emotions, but not everybody experiences emotions in the same way. Individuals with autism spectrum disorder (ASD) show difficulties with social and emotional cognition. Impairments in emotion recognition are widely studied in ASD, and have been associated with atypical brain activation in response to emotional expressions in faces and speech. Whether these impairments and atypical brain responses generalize to other domains, such as emotional processing of music, is less clear. Using functional magnetic resonance imaging, we investigated neural correlates of emotion recognition in music in high-functioning adults with ASD and neurotypical adults. Both groups engaged similar neural networks during processing of emotional music, and individuals with ASD rated emotional music comparably to the group of neurotypical individuals. However, in the ASD group, increased activity in response to happy compared to sad music was observed in dorsolateral prefrontal regions and in the rolandic operculum/insula, and we propose that this reflects increased cognitive processing in response to emotional musical stimuli in this group.

  3. Functional cerebral distance and the effect of emotional music on spatial rotation scores in undergraduate women and men.

    Science.gov (United States)

    Bertsch, Sharon; Knee, H Donald; Webb, Jeffrey L

    2011-02-01

    The influence of listening to music on subsequent spatial rotation scores has a controversial history. The effect is unreliable, seeming to depend on several as yet unexplored factors. Using a large sample (167 women, 160 men; M age = 18.9 yr.), two related variables were investigated: participants' sex and the emotion conveyed by the music. Participants listened to 90 sec. of music that portrayed emotions of approach (happiness), or withdrawal (anger), or heard no music at all. They then performed a two-dimensional spatial rotation task. No significant difference was found in spatial rotation scores between groups exposed to music and those who were not. However, a significant interaction was found based on the sex of the participants and the emotion portrayed in the music they heard. Women's scores increased (relative to a no-music condition) only after hearing withdrawal-based music, while men's scores increased only after listening to the approach-based music. These changes were explained using the theory of functional cerebral distance.

  4. Intact brain processing of musical emotions in autism spectrum disorder, but more cognitive load and arousal in happy versus sad music

    DEFF Research Database (Denmark)

    Gebauer, Line; Skewes, Joshua; Westphael, Gitte Gülche

    2014-01-01

    Music is a potent source for eliciting emotions, but not everybody experiences emotions in the same way. Individuals with autism spectrum disorder (ASD) show difficulties with social and emotional cognition. Impairments in emotion recognition are widely studied in ASD, and have been associated...... of emotion recognition in music in high-functioning adults with ASD and neurotypical adults. Both groups engaged similar neural networks during processing of emotional music, and individuals with ASD rated emotional music comparably to the group of neurotypical individuals. However, in the ASD group...

  5. Impaired socio-emotional processing in a developmental music disorder

    Science.gov (United States)

    Lima, César F.; Brancatisano, Olivia; Fancourt, Amy; Müllensiefen, Daniel; Scott, Sophie K.; Warren, Jason D.; Stewart, Lauren

    2016-01-01

    Some individuals show a congenital deficit for music processing despite normal peripheral auditory processing, cognitive functioning, and music exposure. This condition, termed congenital amusia, is typically studied in terms of its profile of musical and pitch difficulties. Here, we examine whether amusia also affects socio-emotional processing, probing auditory and visual domains. Thirteen adults with amusia and 11 controls completed two experiments. In Experiment 1, participants judged emotions in emotional speech prosody, nonverbal vocalizations (e.g., crying), and (silent) facial expressions. Target emotions were: amusement, anger, disgust, fear, pleasure, relief, and sadness. Compared to controls, amusics were impaired for all stimulus types, and the magnitude of their impairment was similar for auditory and visual emotions. In Experiment 2, participants listened to spontaneous and posed laughs, and either inferred the authenticity of the speaker’s state, or judged how contagious the laughs were. Amusics showed decreased sensitivity to laughter authenticity, but normal contagion responses. Across the experiments, mixed-effects models revealed that the acoustic features of vocal signals predicted socio-emotional evaluations in both groups, but the profile of predictive acoustic features was different in amusia. These findings suggest that a developmental music disorder can affect socio-emotional cognition in subtle ways, an impairment not restricted to auditory information. PMID:27725686

  6. From Sound to Significance: Exploring the Mechanisms Underlying Emotional Reactions to Music.

    Science.gov (United States)

    Juslin, Patrik N; Barradas, Gonçalo; Eerola, Tuomas

    2015-01-01

    A common approach to studying emotional reactions to music is to attempt to obtain direct links between musical surface features such as tempo and a listener's responses. However, such an analysis ultimately fails to explain why emotions are aroused in the listener. In this article we explore an alternative approach, which aims to account for musical emotions in terms of a set of psychological mechanisms that are activated by different types of information in a musical event. This approach was tested in 4 experiments that manipulated 4 mechanisms (brain stem reflex, contagion, episodic memory, musical expectancy) by selecting existing musical pieces that featured information relevant for each mechanism. The excerpts were played to 60 listeners, who were asked to rate their felt emotions on 15 scales. Skin conductance levels and facial expressions were measured, and listeners reported subjective impressions of relevance to specific mechanisms. Results indicated that the target mechanism conditions evoked emotions largely as predicted by a multimechanism framework and that mostly similar effects occurred across the experiments that included different pieces of music. We conclude that a satisfactory account of musical emotions requires consideration of how musical features and responses are mediated by a range of underlying mechanisms.

  7. Familiarity mediates the relationship between emotional arousal and pleasure during music listening

    Science.gov (United States)

    van den Bosch, Iris; Salimpoor, Valorie N.; Zatorre, Robert J.

    2013-01-01

    Emotional arousal appears to be a major contributing factor to the pleasure that listeners experience in response to music. Accordingly, a strong positive correlation between self-reported pleasure and electrodermal activity (EDA), an objective indicator of emotional arousal, has been demonstrated when individuals listen to familiar music. However, it is not yet known to what extent familiarity contributes to this relationship. In particular, as listening to familiar music involves expectations and predictions over time based on veridical knowledge of the piece, it could be that such memory factors play a major role. Here, we tested such a contribution by using musical stimuli entirely unfamiliar to listeners. In a second experiment we repeated the novel music to experimentally establish a sense of familiarity. We aimed to determine whether (1) pleasure and emotional arousal would continue to correlate when listeners have no explicit knowledge of how the tones will unfold, and (2) whether this could be enhanced by experimentally induced familiarity. In the first experiment, we presented 33 listeners with 70 unfamiliar musical excerpts in two sessions. There was no relationship between the degree of experienced pleasure and emotional arousal as measured by EDA. In the second experiment, 7 participants listened to 35 unfamiliar excerpts over two sessions separated by 30 min. Repeated exposure significantly increased EDA, even though individuals did not explicitly recall having heard all the pieces before. Furthermore, increases in self-reported familiarity significantly enhanced experienced pleasure, and there was a general, though not significant, increase in EDA. These results suggest that some level of expectation and predictability mediated by prior exposure to a given piece of music plays an important role in the experience of emotional arousal in response to music. PMID:24046738

  8. Emotional Readiness and Music Therapeutic Activities

    Science.gov (United States)

    Drossinou-Korea, Maria; Fragkouli, Aspasia

    2016-01-01

    The purpose of this study is to understand the expression of children on the autism spectrum through verbal and nonverbal communication. We study emotional readiness and music therapeutic activities that exploit the elements of music. The method followed focused on the research field of special needs education. Assumptions on the parameters…

  9. Developing a benchmark for emotional analysis of music.

    Science.gov (United States)

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has expanded rapidly in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of these new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network based approaches combined with large feature sets work best for dynamic MER.
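
    The benchmark's conclusion that recurrent models work well for dynamic MER can be illustrated with a minimal sketch, assuming PyTorch: a GRU maps a sequence of frame-level audio features to per-frame valence and arousal at the 2 Hz annotation rate. The feature dimension, clip length, and hyperparameters below are arbitrary placeholders, not the winning configurations from the challenge.

      # Minimal recurrent model for time-continuous valence/arousal prediction.
      import torch
      import torch.nn as nn

      class DynamicMER(nn.Module):
          def __init__(self, n_features=260, hidden=64):
              super().__init__()
              self.rnn = nn.GRU(n_features, hidden, batch_first=True)
              self.head = nn.Linear(hidden, 2)       # valence and arousal per frame

          def forward(self, x):                      # x: (batch, time, n_features)
              out, _ = self.rnn(x)
              return self.head(out)                  # (batch, time, 2)

      model = DynamicMER()
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.MSELoss()

      # Toy batch standing in for 2 Hz frame features and annotations: 8 clips of 45 s.
      x = torch.randn(8, 90, 260)
      y = torch.rand(8, 90, 2) * 2 - 1               # valence/arousal scaled to [-1, 1]
      optimizer.zero_grad()
      loss = loss_fn(model(x), y)
      loss.backward()
      optimizer.step()
      print(float(loss))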

  10. Inferior Frontal Gyrus Activation Underlies the Perception of Emotions, While Precuneus Activation Underlies the Feeling of Emotions during Music Listening

    Science.gov (United States)

    Tabei, Ken-ichi

    2015-01-01

    While music triggers many physiological and psychological reactions, the underlying neural basis of perceived and experienced emotions during music listening remains poorly understood. Therefore, using functional magnetic resonance imaging (fMRI), I conducted a comparative study of the different brain areas involved in perceiving and feeling emotions during music listening. I measured fMRI signals while participants assessed the emotional expression of music (perceived emotion) and their emotional responses to music (felt emotion). I found that cortical areas including the prefrontal, auditory, cingulate, and posterior parietal cortices were consistently activated by the perceived and felt emotional tasks. Moreover, activity in the inferior frontal gyrus increased more during the perceived emotion task than during a passive listening task. In addition, the precuneus showed greater activity during the felt emotion task than during a passive listening task. The findings reveal that the bilateral inferior frontal gyri and the precuneus are important areas for the perception of the emotional content of music as well as for the emotional response evoked in the listener. Furthermore, I propose that the precuneus, a brain region associated with self-representation, might be involved in assessing emotional responses. PMID:26504353

  11. Inferior Frontal Gyrus Activation Underlies the Perception of Emotions, While Precuneus Activation Underlies the Feeling of Emotions during Music Listening.

    Science.gov (United States)

    Tabei, Ken-ichi

    2015-01-01

    While music triggers many physiological and psychological reactions, the underlying neural basis of perceived and experienced emotions during music listening remains poorly understood. Therefore, using functional magnetic resonance imaging (fMRI), I conducted a comparative study of the different brain areas involved in perceiving and feeling emotions during music listening. I measured fMRI signals while participants assessed the emotional expression of music (perceived emotion) and their emotional responses to music (felt emotion). I found that cortical areas including the prefrontal, auditory, cingulate, and posterior parietal cortices were consistently activated by the perceived and felt emotional tasks. Moreover, activity in the inferior frontal gyrus increased more during the perceived emotion task than during a passive listening task. In addition, the precuneus showed greater activity during the felt emotion task than during a passive listening task. The findings reveal that the bilateral inferior frontal gyri and the precuneus are important areas for the perception of the emotional content of music as well as for the emotional response evoked in the listener. Furthermore, I propose that the precuneus, a brain region associated with self-representation, might be involved in assessing emotional responses.

  12. Changing the tune: listeners like music that expresses a contrasting emotion

    Directory of Open Access Journals (Sweden)

    E. Glenn Schellenberg

    2012-12-01

    Full Text Available Theories of aesthetic appreciation propose that (1) a stimulus is liked because it is expected or familiar, (2) a stimulus is liked most when it is neither too familiar nor too novel, or (3) a novel stimulus is liked because it elicits an intensified emotional response. We tested the third hypothesis by examining liking for music as a function of whether the emotion it expressed contrasted with the emotion expressed by music heard previously. Stimuli were 30-s happy- or sad-sounding excerpts from recordings of classical piano music. On each trial, listeners heard a different excerpt and made liking and emotion-intensity ratings. The emotional character of consecutive excerpts was repeated with varying frequencies, followed by an excerpt that expressed a contrasting emotion. As the number of presentations of the background emotion increased, liking and intensity ratings became lower compared to those for the contrasting emotion. Consequently, when the emotional character of the music was relatively novel, listeners’ responses intensified and their appreciation increased.

  13. Predictive Modeling of Expressed Emotions in Music Using Pairwise Comparisons

    DEFF Research Database (Denmark)

    Madsen, Jens; Jensen, Bjørn Sand; Larsen, Jan

    2013-01-01

    We introduce a two-alternative forced-choice (2AFC) experimental paradigm to quantify expressed emotions in music using the arousal and valence (AV) dimensions. A wide range of well-known audio features are investigated for predicting the expressed emotions in music using learning curves...... and essential baselines. We furthermore investigate the scalability issues of using 2AFC in quantifying emotions expressed in music on large-scale music databases. The possibility of dividing the annotation task between multiple individuals, while pooling individuals’ comparisons is investigated by looking...... comparisons at random by using learning curves. We show that a suitable predictive model of expressed valence in music can be achieved from only 15% of the total number of comparisons when using the Expected Value of Information (EVOI) active learning scheme. For the arousal dimension we require 9...
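
    As a simplified stand-in for this kind of pairwise modeling (the record does not specify the underlying model), the sketch below fits a Bradley-Terry-style logistic model: a linear function of audio features assigns each excerpt a latent valence score, and the probability that one excerpt is chosen over another in a 2AFC trial is the logistic of the score difference. Features and comparisons are randomly generated placeholders.

      # Logistic (Bradley-Terry-style) learning from pairwise 2AFC comparisons.
      import numpy as np

      rng = np.random.default_rng(0)
      n_excerpts, n_features = 20, 10
      X = rng.normal(size=(n_excerpts, n_features))        # audio features per excerpt
      true_w = rng.normal(size=n_features)                 # hidden ground-truth weights
      pairs = rng.integers(0, n_excerpts, size=(200, 2))   # (A, B) indices per comparison
      prefs = (X[pairs[:, 0]] @ true_w > X[pairs[:, 1]] @ true_w).astype(float)

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      w = np.zeros(n_features)
      for _ in range(500):                                 # gradient ascent on the log-likelihood
          diff = X[pairs[:, 0]] - X[pairs[:, 1]]           # feature difference per comparison
          p = sigmoid(diff @ w)                            # P(A preferred over B)
          w += 0.1 * diff.T @ (prefs - p) / len(pairs)

      scores = X @ w                                       # recovered latent valence scale
      print(np.corrcoef(scores, X @ true_w)[0, 1])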

  14. Play it again, Sam: brain correlates of emotional music recognition.

    Science.gov (United States)

    Altenmüller, Eckart; Siggel, Susann; Mohammadi, Bahram; Samii, Amir; Münte, Thomas F

    2014-01-01

    Music can elicit strong emotions and can be remembered in connection with these emotions even decades later. Yet, the brain correlates of episodic memory for highly emotional music compared with less emotional music have not been examined. We therefore used fMRI to investigate brain structures activated by emotional processing of short excerpts of film music successfully retrieved from episodic long-term memory. Eighteen non-musician volunteers were exposed to 60 structurally similar pieces of film music of 10 s length with high arousal ratings and either less positive or very positive valence ratings. Two similar sets of 30 pieces were created. Each of these was presented to half of the participants during the encoding session outside of the scanner, while all stimuli were used during the second recognition session inside the MRI-scanner. During fMRI each stimulation period (10 s) was followed by a 20 s resting period during which participants pressed either the "old" or the "new" button to indicate whether they had heard the piece before. Musical stimuli vs. silence activated the bilateral superior temporal gyrus, right insula, right middle frontal gyrus, bilateral medial frontal gyrus and the left anterior cerebellum. Old pieces led to activation in the left medial dorsal thalamus and left midbrain compared to new pieces. For recognized vs. not recognized old pieces a focused activation in the right inferior frontal gyrus and the left cerebellum was found. Positive pieces activated the left medial frontal gyrus, the left precuneus, the right superior frontal gyrus, the left posterior cingulate, the bilateral middle temporal gyrus, and the left thalamus compared to less positive pieces. Specific brain networks related to memory retrieval and emotional processing of symphonic film music were identified. The results imply that the valence of a music piece is important for memory performance and is recognized very fast.

  15. Play it again Sam: Brain Correlates of Emotional Music Recognition

    Directory of Open Access Journals (Sweden)

    Eckart Altenmüller

    2014-02-01

    Full Text Available Abstract Background: Music can elicit strong emotions and can be remembered in connection with these emotions even decades later. Yet, the brain correlates of episodic memory for highly emotional music compared with less emotional music have not been examined. We therefore used fMRI to investigate brain structures activated by emotional processing of short excerpts of film music successfully retrieved from episodic long-term memory. Methods: Eighteen non-musician volunteers were exposed to 60 structurally similar pieces of film music of 10-second length with high arousal ratings and either less positive or very positive valence ratings. Two similar sets of 30 pieces were created. Each of these was presented to half of the participants during the encoding session outside of the scanner, while all stimuli were used during the second recognition session inside the MRI scanner. During fMRI each stimulation period (10 sec) was followed by a 20 sec resting period during which participants pressed either the "old" or the "new" button to indicate whether they had heard the piece before. Results: Musical stimuli vs. silence activated the bilateral superior temporal gyrus, right insula, right middle frontal gyrus, bilateral medial frontal gyrus and the left anterior cerebellum. Old pieces led to activation in the left medial dorsal thalamus and left midbrain compared to new pieces. For recognized vs. not recognized old pieces a focused activation in the right inferior frontal gyrus and the left cerebellum was found. Positive pieces activated the left medial frontal gyrus, the left precuneus, the right superior frontal gyrus, the left posterior cingulate, the bilateral middle temporal gyrus, and the left thalamus compared to less positive pieces. Conclusion: Specific brain networks related to memory retrieval and emotional processing of symphonic film music were identified. The results imply that the valence of a music piece is important for memory performance.

  16. Manipulating Greek musical modes and tempo affects perceived musical emotion in musicians and nonmusicians.

    Science.gov (United States)

    Ramos, D; Bueno, J L O; Bigand, E

    2011-02-01

    The combined influence of tempo and mode on emotional responses to music was studied by crossing 7 changes in mode with 3 changes in tempo. Twenty-four musicians aged 19 to 25 years (12 males and 12 females) and 24 nonmusicians aged 17 to 25 years (12 males and 12 females) were required to perform two tasks: 1) listening to different musical excerpts, and 2) associating an emotion with them, such as happiness, serenity, fear, anger, or sadness. ANOVA showed that increasing the tempo strongly affected arousal (F(2,116) = 268.62, mean square error (MSE) = 0.6676), and interaction effects were found between tempo and mode (F(1,58) = 115.6, MSE = 0.6428). This finding demonstrates that small changes in the pitch structures of modes modulate the emotions associated with the pieces, confirming the cognitive foundation of emotional responses to music.

  17. Emotional memory for musical excerpts in young and older adults

    OpenAIRE

    Alonso, Irene; Dellacherie, Delphine; Samson, Séverine

    2015-01-01

    International audience; The emotions evoked by music can enhance recognition of excerpts. It has been suggested that memory is better for high than for low arousing music (Eschrich et al., 2005; Samson et al., 2009), but it remains unclear whether positively (Eschrich et al., 2008) or negatively valenced music (Aubé et al., 2013; Vieillard and Gilet, 2013) may be better recognized. Moreover, we still know very little about the influence of age on emotional memory for music. To address these i...

  18. Personality traits modulate neural responses to emotions expressed in music.

    Science.gov (United States)

    Park, Mona; Hennig-Fast, Kristina; Bao, Yan; Carl, Petra; Pöppel, Ernst; Welker, Lorenz; Reiser, Maximilian; Meindl, Thomas; Gutyrchik, Evgeny

    2013-07-26

    Music communicates and evokes emotions. The number of studies on the neural correlates of musical emotion processing is increasing, but few have investigated the factors that modulate these neural activations. Previous research has shown that personality traits account for individual variability of neural responses. In this study, we used functional magnetic resonance imaging (fMRI) to investigate how the dimensions Extraversion and Neuroticism are related to differences in brain reactivity to musical stimuli expressing the emotions happiness, sadness and fear. 12 participants (7 female, M=20.33 years) completed the NEO-Five Factor Inventory (NEO-FFI) and were scanned while performing a passive listening task. Neurofunctional analyses revealed significant positive correlations between Neuroticism scores and activations in bilateral basal ganglia, insula and orbitofrontal cortex in response to music expressing happiness. Extraversion scores were marginally negatively correlated with activations in the right amygdala in response to music expressing fear. Our findings show that subjects' personality may have predictive power for the neural correlates of musical emotion processing and should be considered in the context of experimental group homogeneity. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Electroencephalographic dynamics of musical emotion perception revealed by independent spectral components.

    Science.gov (United States)

    Lin, Yuan-Pin; Duann, Jeng-Ren; Chen, Jyh-Horng; Jung, Tzyy-Ping

    2010-04-21

    This study explores the electroencephalographic (EEG) correlates of emotional experience during music listening. Independent component analysis and analysis of variance were used to separate statistically independent spectral changes of the EEG in response to music-induced emotional processes. An independent brain process with an equivalent dipole located in the fronto-central region exhibited distinct δ-band and θ-band power changes associated with self-reported emotional states. Specifically, emotional valence was associated with δ-power decreases and θ-power increases in the fronto-central area, whereas emotional arousal was accompanied by increases in both δ and θ power. The resultant emotion-related component activations, which were less contaminated by activity from other brain processes, complement previous EEG studies of emotion perception in music.
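
    The analysis idea (unmixing multichannel EEG into independent components, then tracking δ- and θ-band power of a component) can be sketched as follows, assuming scikit-learn's FastICA and SciPy's Welch estimator rather than the exact pipeline used in the study; the synthetic data, channel count, and band edges are placeholders.

      # Sketch: ICA unmixing of EEG followed by delta/theta band-power estimation.
      import numpy as np
      from scipy.signal import welch
      from sklearn.decomposition import FastICA

      fs = 256                                        # sampling rate in Hz (illustrative)
      n_channels, n_samples = 32, fs * 60             # one minute of 32-channel "EEG"
      eeg = np.random.randn(n_samples, n_channels)    # placeholder for real recordings

      ica = FastICA(n_components=16, random_state=0)
      sources = ica.fit_transform(eeg)                # (n_samples, n_components)

      def band_power(x, fs, lo, hi):
          freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
          mask = (freqs >= lo) & (freqs < hi)
          return np.trapz(psd[mask], freqs[mask])

      component = sources[:, 0]                       # one independent component
      delta = band_power(component, fs, 1.0, 4.0)
      theta = band_power(component, fs, 4.0, 8.0)
      print(delta, theta)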

  20. Recognition of facial and musical emotions in Parkinson's disease.

    Science.gov (United States)

    Saenz, A; Doé de Maindreville, A; Henry, A; de Labbey, S; Bakchine, S; Ehrlé, N

    2013-03-01

    Patients with amygdala lesions were found to be impaired in recognizing the emotion of fear both from faces and from music. In patients with Parkinson's disease (PD), impairment in recognition of emotions from facial expressions was reported for disgust, fear, sadness and anger, but no studies had yet investigated this population for the recognition of emotions from both face and music. The ability to recognize basic universal emotions (fear, happiness and sadness) from both face and music was investigated in 24 medicated patients with PD and 24 healthy controls. The patient group was tested for language (verbal fluency tasks), memory (digit and spatial span), executive functions (Similarities and Picture Completion subtests of the WAIS III, Brixton and Stroop tests), and visual attention (Bells test), and completed self-assessment scales for anxiety and depression. Results showed that the PD group was significantly impaired in the recognition of both fear and sadness from facial expressions, whereas their performance in the recognition of emotions from musical excerpts was not different from that of the control group. The scores for fear and sadness recognition from faces were correlated neither with scores in tests of executive and cognitive functions, nor with scores on the self-assessment scales. We attributed the observed dissociation to the modality (visual vs. auditory) of presentation and to the ecological value of the musical stimuli that we used. We discuss the relevance of our findings for the care of patients with PD. © 2012 The Author(s) European Journal of Neurology © 2012 EFNS.

  1. Music, emotions and first impression perceptions of a healthcare institutions’ quality: An experimental investigation

    Directory of Open Access Journals (Sweden)

    Ivana First Komen

    2015-03-01

    Full Text Available One of the direct ways of influencing emotions and service quality perceptions is through music stimulation. The purpose of this research is to examine the impact of music with different musical elements (i.e. sad vs. happy music) on respondents' emotions and their first impression perceptions of a healthcare institution's quality. The research was designed as an experimental simulation, i.e. data were collected in an online survey from respondents randomly assigned to evaluate a presentation consisting of multiple images of a healthcare institution in one of three experimental conditions (absence of music, happy music, and sad music stimulation). The results, in line with previous research, demonstrate a relationship between emotions and first impression quality perceptions and between music and emotions, but no relationship between music and first impression quality perception. The significant results obtained yet again emphasize the importance of inducing positive customer emotions, as they lead to positive first impression service quality evaluations that subsequently provide appreciated returns. They also stress the importance of carefully choosing music when inducing emotions, as music with different musical elements results in different emotional states. One of the limitations of this research is the non-real-life experimental setting, which is to be overcome in future research.

  2. Music and mirror neurons: from motion to 'e'motion.

    Science.gov (United States)

    Molnar-Szakacs, Istvan; Overy, Katie

    2006-12-01

    The ability to create and enjoy music is a universal human trait and plays an important role in the daily life of most cultures. Music has a unique ability to trigger memories, awaken emotions and intensify our social experiences. We do not need to be trained in music performance or appreciation to be able to reap its benefits; already as infants, we relate to it spontaneously and effortlessly. There has been a recent surge in neuroimaging investigations of the neural basis of musical experience, but the way in which the abstract shapes and patterns of musical sound can have such profound meaning to us remains elusive. Here we review recent neuroimaging evidence and suggest that music, like language, involves an intimate coupling between the perception and production of hierarchically organized sequential information, the structure of which has the ability to communicate meaning and emotion. We propose that these aspects of musical experience may be mediated by the human mirror neuron system.

  3. The Influence of Music on Facial Emotion Recognition in Children with Autism Spectrum Disorder and Neurotypical Children.

    Science.gov (United States)

    Brown, Laura S

    2017-03-01

    Children with autism spectrum disorder (ASD) often struggle with social skills, including the ability to perceive emotions based on facial expressions. Research evidence suggests that many individuals with ASD can perceive emotion in music. Examining whether music can be used to enhance recognition of facial emotion by children with ASD would inform development of music therapy interventions. The purpose of this study was to investigate the influence of music with a strong emotional valence (happy; sad) on the ability of children with ASD to label emotions depicted in facial photographs, and on their response time. Thirty neurotypical children and 20 children with high-functioning ASD rated expressions of happy, neutral, and sad in 30 photographs under two music listening conditions (sad music; happy music). During each music listening condition, participants rated the 30 images using a 7-point scale that ranged from very sad to very happy. Response time data were also collected across both conditions. A significant two-way interaction revealed that participants' ratings of happy and neutral faces were unaffected by music conditions, but sad faces were perceived to be sadder with sad music than with happy music. Across both conditions, neurotypical children rated the happy faces as happier and the sad faces as sadder than did participants with ASD. Response times of the neurotypical children were consistently shorter than response times of the children with ASD; both groups took longer to rate sad faces than happy faces. Response times of neurotypical children were generally unaffected by the valence of the music condition; however, children with ASD took longer to respond when listening to sad music. Music appears to affect perceptions of emotion in children with ASD, and perceptions of sad facial expressions seem to be more affected by emotionally congruent background music than are perceptions of happy or neutral faces.

  4. Affective responses to music in depressed individuals : Aesthetic judgments, emotions, and the impact of music-evoked autobiographical memories

    OpenAIRE

    Sakka, Laura Stavroula

    2018-01-01

    Music’s powerful influence on our affective states is often utilized in everyday life for emotion regulation and in music-therapeutic interventions against depression. Given this ability of music to influence emotions and symptoms in depressed people, it appears imperative to understand how these individuals affectively respond to music. The primary aim of this thesis is to explore whether depressed individuals have distinct affective responses to music, in terms of aesthetic judgments, emoti...

  5. A Functional MRI Study of Happy and Sad Emotions in Music with and without Lyrics

    Science.gov (United States)

    Brattico, Elvira; Alluri, Vinoo; Bogert, Brigitte; Jacobsen, Thomas; Vartiainen, Nuutti; Nieminen, Sirke; Tervaniemi, Mari

    2011-01-01

    Musical emotions, such as happiness and sadness, have been investigated using instrumental music devoid of linguistic content. However, pop and rock, the most common musical genres, utilize lyrics for conveying emotions. Using participants’ self-selected musical excerpts, we studied their behavior and brain responses to elucidate how lyrics interact with musical emotion processing, as reflected by emotion recognition and activation of limbic areas involved in affective experience. We extracted samples from subjects’ selections of sad and happy pieces and sorted them according to the presence of lyrics. Acoustic feature analysis showed that music with lyrics differed from music without lyrics in spectral centroid, a feature related to perceptual brightness, whereas sad music with lyrics did not diverge from happy music without lyrics, indicating the role of other factors in emotion classification. Behavioral ratings revealed that happy music without lyrics induced stronger positive emotions than happy music with lyrics. We also acquired functional magnetic resonance imaging data while subjects performed affective tasks regarding the music. First, using ecological and acoustically variable stimuli, we broadened previous findings about the brain processing of musical emotions and of songs versus instrumental music. Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca’s area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics. These findings point to the role of acoustic cues for the experience of happiness in music and to the importance of lyrics for sad musical emotions

  7. A functional MRI study of happy and sad emotions in music with and without lyrics

    Directory of Open Access Journals (Sweden)

    Elvira eBrattico

    2011-12-01

    Full Text Available Musical emotions, such as happiness and sadness, have been investigated using instrumental music devoid of linguistic content. However, pop and rock, the most common musical genres, utilize lyrics for conveying emotions. Using participants’ self-selected musical excerpts, we studied their behavior and brain responses to elucidate how lyrics interact with musical emotion processing, as reflected by emotion recognition and activation of limbic areas involved in affective experience. We extracted samples from subjects’ selections of sad and happy pieces and sorted them according to the presence of lyrics. Acoustic feature analysis showed that music with lyrics differed from music without lyrics in spectral centroid, a feature related to perceptual brightness, whereas sad music with lyrics did not diverge from happy music without lyrics, indicating the role of other factors in emotion classification. Behavioral ratings revealed that happy music without lyrics induced stronger positive emotions than happy music with lyrics. We also acquired functional magnetic resonance imaging (fMRI) data while subjects performed affective tasks regarding the music. First, using ecological and acoustically variable stimuli, we broadened previous findings about the brain processing of musical emotions and of songs versus instrumental music. Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca’s area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics. These findings point to the role of acoustic cues for the experience of happiness in music and to the importance of lyrics for sad musical emotions.

  8. Differential alpha coherence hemispheric patterns in men and women during pleasant and unpleasant musical emotions.

    Science.gov (United States)

    Flores-Gutiérrez, Enrique O; Díaz, José-Luis; Barrios, Fernando A; Guevara, Miguel Angel; Del Río-Portilla, Yolanda; Corsi-Cabrera, María; Del Flores-Gutiérrez, Enrique O

    2009-01-01

    Potential sex differences in EEG coherent activity during pleasant and unpleasant musical emotions were investigated. Musical excerpts by Mahler, Bach, and Prodromidès were played to seven men and seven women and their subjective emotions were evaluated in relation to alpha band intracortical coherence. Different brain links in specific frequencies were associated with pleasant and unpleasant emotions. Pleasant emotions (Mahler, Bach) increased upper alpha couplings linking left anterior and posterior regions. Unpleasant emotions (Prodromidès) were sustained by posterior midline coherence exclusively in the right hemisphere in men and bilaterally in women. Combined music induced bilateral oscillations among posterior sensory and predominantly left association areas in women. Consistent with their greater positive attributions to music, the coherent network was larger in women, both for musical emotion and for unspecific musical effects. Musical emotion entails specific coupling among cortical regions and involves coherent upper alpha activity between posterior association areas and frontal regions, probably mediating emotional and perceptual integration. The regions linked by combined music suggest a greater working-memory contribution in women and a greater attentional contribution in men.
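
    The record reports alpha-band intracortical coherence but not its computation; a minimal sketch of estimating alpha-band coherence between two EEG channels is given below. The surrogate signals, sampling rate, and 8-12 Hz band limits are assumptions for illustration, not the authors' pipeline.

        import numpy as np
        from scipy.signal import coherence

        def alpha_coherence(x, y, fs=256.0, band=(8.0, 12.0)):
            """Mean magnitude-squared coherence between two channels in the alpha band."""
            f, cxy = coherence(x, y, fs=fs, nperseg=int(2 * fs))
            mask = (f >= band[0]) & (f <= band[1])
            return float(np.mean(cxy[mask]))

        # Toy usage with surrogate signals sharing a 10 Hz component (not real EEG)
        rng = np.random.default_rng(0)
        t = np.arange(0, 60, 1.0 / 256.0)
        shared = np.sin(2 * np.pi * 10 * t)
        ch1 = shared + 0.5 * rng.standard_normal(t.size)
        ch2 = shared + 0.5 * rng.standard_normal(t.size)
        print(alpha_coherence(ch1, ch2))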

  9. EMuJoy: software for continuous measurement of perceived emotions in music.

    Science.gov (United States)

    Nagel, Frederik; Kopiez, Reinhard; Grewe, Oliver; Altenmüller, Eckart

    2007-05-01

    An adequate study of emotions in music and film should be based on the real-time measurement of self-reported data using a continuous-response method. The recording system discussed in this article reflects two important aspects of such research: First, for a better comparison of results, experimental and technical standards for continuous measurement should be taken into account, and second, the recording system should be open to the inclusion of multimodal stimuli. In light of these two considerations, our article addresses four basic principles of the continuous measurement of emotions: (1) the dimensionality of the emotion space, (2) data acquisition (e.g., the synchronization of media and the self-reported data), (3) interface construction for emotional responses, and (4) the use of multiple stimulus modalities. Researcher-developed software (EMuJoy) is presented as a freeware solution for the continuous measurement of responses to different media, along with empirical data from the self-reports of 38 subjects listening to emotional music and viewing affective pictures.
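
    The abstract describes design principles for continuous self-report rather than an implementation; the core idea of recording timestamped valence/arousal samples aligned to the media timeline might be sketched as follows. This is a minimal illustration only; the class name, sampling scheme, and CSV format are assumptions, not part of EMuJoy.

        import csv
        import time

        class ContinuousEmotionLogger:
            """Records timestamped (valence, arousal) samples relative to stimulus onset,
            so self-report data can later be synchronized with the media."""

            def __init__(self):
                self.samples = []   # (seconds since onset, valence, arousal)
                self.t0 = None

            def start(self):
                self.t0 = time.monotonic()   # call at stimulus onset

            def report(self, valence, arousal):
                if self.t0 is None:
                    raise RuntimeError("call start() at stimulus onset first")
                self.samples.append((time.monotonic() - self.t0, valence, arousal))

            def save(self, path):
                with open(path, "w", newline="") as fh:
                    writer = csv.writer(fh)
                    writer.writerow(["time_s", "valence", "arousal"])
                    writer.writerows(self.samples)

        # Example: three simulated cursor positions mapped to a [-1, 1] emotion space
        logger = ContinuousEmotionLogger()
        logger.start()
        for v, a in [(0.2, -0.1), (0.5, 0.3), (0.1, 0.6)]:
            logger.report(v, a)
        logger.save("emotion_trace.csv")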

  10. Music, emotion, and time perception: the influence of subjective emotional valence and arousal?

    Science.gov (United States)

    Droit-Volet, Sylvie; Ramos, Danilo; Bueno, José L. O.; Bigand, Emmanuel

    2013-01-01

    The present study used a temporal bisection task with short (2 s) stimulus durations to investigate the effect on time estimation of several musical parameters associated with emotional changes in affective valence and arousal. In order to manipulate the positive and negative valence of music, Experiments 1 and 2 contrasted the effect of musical structure with pieces played normally and backwards, which were judged to be pleasant and unpleasant, respectively. This effect of valence was combined with a subjective arousal effect by changing the tempo of the musical pieces (fast vs. slow) (Experiment 1) or their instrumentation (orchestral vs. piano pieces). The musical pieces were indeed judged more arousing with a fast than with a slow tempo and with an orchestral than with a piano timbre. In Experiment 3, affective valence was also tested by contrasting the effect of tonal (pleasant) vs. atonal (unpleasant) versions of the same musical pieces. The results showed that the effect of tempo in music, associated with a subjective arousal effect, was the major factor that produced time distortions with time being judged longer for fast than for slow tempi. When the tempo was held constant, no significant effect of timbre on the time judgment was found although the orchestral music was judged to be more arousing than the piano music. Nevertheless, emotional valence did modulate the tempo effect on time perception, the pleasant music being judged shorter than the unpleasant music. PMID:23882233

  11. Music, Emotion and Time Perception: The influence of subjective emotional valence and arousal?

    Directory of Open Access Journals (Sweden)

    SYLVIE eDROIT-VOLET

    2013-07-01

    Full Text Available The present study used a temporal bisection task with short (< 2 s) and long (> 2 s) stimulus durations to investigate the effect on time estimation of several musical parameters associated with emotional changes in affective valence and arousal. In order to manipulate the positive and negative valence of music, Experiments 1 and 2 contrasted the effect of musical structure with pieces played normally and backwards, which were judged to be pleasant and unpleasant, respectively. This effect of valence was combined with a subjective arousal effect by changing the tempo of the musical pieces (fast vs. slow) (Experiment 1) or their instrumentation (orchestral vs. piano pieces). The musical pieces were indeed judged more arousing with a fast than with a slow tempo and with an orchestral than with a piano timbre. In Experiment 3, affective valence was also tested by contrasting the effect of tonal (pleasant) versus atonal (unpleasant) versions of the same musical pieces. The results showed that the effect of tempo in music, associated with a subjective arousal effect, was the major factor that produced time distortions with time being judged longer for fast than for slow tempi. When the tempo was held constant, no significant effect of timbre on the time judgment was found although the orchestral music was judged to be more arousing than the piano music. Nevertheless, emotional valence did modulate the tempo effect on time perception, the pleasant music being judged shorter than the unpleasant music.

  13. A systematic review on the neural effects of music on emotion regulation: implications for music therapy practice.

    Science.gov (United States)

    Moore, Kimberly Sena

    2013-01-01

    Emotion regulation (ER) is an internal process through which a person maintains a comfortable state of arousal by modulating one or more aspects of emotion. The neural correlates underlying ER suggest an interplay between cognitive control areas and areas involved in emotional reactivity. Although some studies have suggested that music may be a useful tool in ER, few studies have examined the links between music perception/production and the neural mechanisms that underlie ER and resulting implications for clinical music therapy treatment. Objectives of this systematic review were to explore and synthesize what is known about how music and music experiences impact neural structures implicated in ER, and to consider clinical implications of these findings for structuring music stimuli to facilitate ER. A comprehensive electronic database search resulted in 50 studies that met predetermined inclusion and exclusion criteria. Pertinent data related to the objective were extracted and study outcomes were analyzed and compared for trends and common findings. Results indicated there are certain music characteristics and experiences that produce desired and undesired neural activation patterns implicated in ER. Desired activation patterns occurred when listening to preferred and familiar music, when singing, and (in musicians) when improvising; undesired activation patterns arose when introducing complexity, dissonance, and unexpected musical events. Furthermore, the connection between music-influenced changes in attention and its link to ER was explored. Implications for music therapy practice are discussed and preliminary guidelines for how to use music to facilitate ER are shared.

  14. Shared acoustic codes underlie emotional communication in music and speech-Evidence from deep transfer learning.

    Science.gov (United States)

    Coutinho, Eduardo; Schuller, Björn

    2017-01-01

    Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, in such a way that the communication of specific emotions is achieved, at least to a certain extent, by means of shared acoustic patterns. From an affective sciences point of view, determining the degree of overlap between both domains is fundamental to understanding the shared mechanisms underlying such a phenomenon. From a machine learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities to enlarge the amount of data available to develop music and speech emotion recognition systems. In this article, we investigate time-continuous predictions of emotion (Arousal and Valence) in music and speech, and the Transfer Learning between these domains. We establish a comparative framework including intra-domain (i.e., models trained and tested on the same modality, either music or speech) and cross-domain experiments (i.e., models trained in one modality and tested on the other). In the cross-domain context, we evaluated two strategies: the direct transfer between domains, and the contribution of Transfer Learning techniques (feature-representation-transfer based on Denoising Auto Encoders) for reducing the gap in the feature space distributions. Our results demonstrate an excellent cross-domain generalisation performance with and without feature representation transfer in both directions. In the case of music, cross-domain approaches outperformed intra-domain models for Valence estimation, whereas for speech, intra-domain models achieved the best performance. This is the first demonstration of shared acoustic codes for emotional expression in music and speech in the time-continuous domain.
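
    The feature-representation-transfer strategy named in this record can be illustrated with a small denoising auto-encoder trained on one domain's acoustic features, whose encoder is then reused on the other domain. The sketch below is an assumption-laden toy (random placeholder data, invented feature dimensionality, PyTorch rather than the authors' toolchain), not the study's model.

        import torch
        import torch.nn as nn

        class DenoisingAutoEncoder(nn.Module):
            """Small denoising auto-encoder over acoustic feature vectors."""
            def __init__(self, n_features=64, n_hidden=32):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Tanh())
                self.decoder = nn.Linear(n_hidden, n_features)

            def forward(self, x, noise_std=0.1):
                noisy = x + noise_std * torch.randn_like(x)   # corrupt the input
                return self.decoder(self.encoder(noisy))      # reconstruct the clean input

        # Placeholder feature matrices standing in for per-frame acoustic descriptors
        music = torch.randn(500, 64)
        speech = torch.randn(300, 64)

        dae = DenoisingAutoEncoder()
        opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()
        for _ in range(200):                     # train on the music domain only
            opt.zero_grad()
            loss = loss_fn(dae(music), music)
            loss.backward()
            opt.step()

        with torch.no_grad():                    # transfer: encode the speech features
            shared_speech_codes = dae.encoder(speech)
        print(shared_speech_codes.shape)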

  15. Manipulating Greek musical modes and tempo affects perceived musical emotion in musicians and nonmusicians

    Directory of Open Access Journals (Sweden)

    D. Ramos

    2011-02-01

    Full Text Available The combined influence of tempo and mode on emotional responses to music was studied by crossing 7 changes in mode with 3 changes in tempo. Twenty-four musicians aged 19 to 25 years (12 males and 12 females) and 24 nonmusicians aged 17 to 25 years (12 males and 12 females) were required to perform two tasks: 1) listening to different musical excerpts, and 2) associating an emotion to them such as happiness, serenity, fear, anger, or sadness. ANOVA showed that increasing the tempo strongly affected the arousal (F(2,116) = 268.62, mean square error (MSE) = 0.6676, P < 0.001) and, to a lesser extent, the valence of emotional responses (F(6,348) = 8.71, MSE = 0.6196, P < 0.001). Changes in modes modulated the affective valence of the perceived emotions (F(6,348) = 4.24, MSE = 0.6764, P < 0.001). Some interactive effects were found between tempo and mode (F(1,58) = 115.6, MSE = 0.6428, P < 0.001), but, in most cases, the two parameters had additive effects. This finding demonstrates that small changes in the pitch structures of modes modulate the emotions associated with the pieces, confirming the cognitive foundation of emotional responses to music.
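
    For readers unfamiliar with this kind of analysis, a two-way repeated-measures ANOVA of arousal ratings over tempo and mode can be run roughly as follows; the synthetic data, column names, and effect sizes are invented for illustration and do not reproduce the study's design or results.

        import numpy as np
        import pandas as pd
        from statsmodels.stats.anova import AnovaRM

        # Synthetic within-subject design: every listener rates every tempo x mode cell once
        rng = np.random.default_rng(1)
        rows = []
        for subject in range(20):
            for tempo in ["slow", "medium", "fast"]:
                for mode in ["ionian", "dorian", "phrygian", "lydian",
                             "mixolydian", "aeolian", "locrian"]:
                    arousal = {"slow": 3.0, "medium": 4.0, "fast": 5.0}[tempo]
                    rows.append({"subject": subject, "tempo": tempo, "mode": mode,
                                 "arousal": arousal + rng.normal(scale=0.5)})
        frame = pd.DataFrame(rows)

        # Two-way repeated-measures ANOVA on the arousal ratings
        result = AnovaRM(frame, depvar="arousal", subject="subject",
                         within=["tempo", "mode"]).fit()
        print(result.anova_table)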

  16. Relations of nostalgia with music to emotional response and recall of autobiographical memory

    OpenAIRE

    小林, 麻美; 岩永, 誠; 生和, 秀敏

    2002-01-01

    Previous research suggests that musical mood and preference affect emotional responses, and that the context of music also affects music-dependent memory. We often feel 'nostalgia' when listening to old familiar tunes. Nostalgia is related to the elicitation of positive emotions, the recall of autobiographical memories, and positive evaluations of the recalled content. The present study aimed to examine the effects of musical mood, preference and nostalgia on emotional responses, the amount of recall of autob...

  17. Sad and happy emotion discrimination in music by children with cochlear implants.

    Science.gov (United States)

    Hopyan, Talar; Manno, Francis A M; Papsin, Blake C; Gordon, Karen A

    2016-01-01

    Children using cochlear implants (CIs) develop speech perception but have difficulty perceiving complex acoustic signals. Mode and tempo are the two components used to recognize emotion in music. Based on CI limitations, we hypothesized that children using CIs would have impaired perception of mode cues relative to their normal-hearing peers and would rely more heavily on tempo cues to distinguish happy from sad music. Study participants were 16 children using CIs (13 right, 3 left; M = 12.7, SD = 2.6 years) and 16 normal-hearing peers. Participants judged 96 brief piano excerpts from the classical genre as happy or sad in a forced-choice task. Music was randomly presented with alterations of transposed mode, tempo, or both. When music was presented in original form, children using CIs discriminated between happy and sad music with accuracy well above chance levels (87.5%) but significantly below those with normal hearing (98%). The CI group primarily used tempo cues, whereas normal-hearing children relied more on mode cues. Transposing both mode and tempo cues in the same musical excerpt obliterated cues to emotion for both groups. Children using CIs showed significantly slower response times across all conditions. Children using CIs use tempo cues to discriminate happy versus sad music, reflecting a very different hearing strategy than that of their normal-hearing peers. Slower reaction times by children using CIs indicate that they found the task more difficult and support the possibility that they require different strategies than their normal-hearing peers to process emotion in music.

  18. Music for the ageing brain: Cognitive, emotional, social, and neural benefits of musical leisure activities in stroke and dementia.

    Science.gov (United States)

    Särkämö, Teppo

    2017-01-01

    Music engages an extensive network of auditory, cognitive, motor, and emotional processing regions in the brain. Coupled with the fact that the emotional and cognitive impact of music is often well preserved in ageing and dementia, music is a powerful tool in the care and rehabilitation of many ageing-related neurological diseases. In addition to formal music therapy, there has been a growing interest in self- or caregiver-implemented musical leisure activities or hobbies as a widely applicable means to support psychological wellbeing in ageing and in neurological rehabilitation. This article reviews the currently existing evidence on the cognitive, emotional, and neural benefits of musical leisure activities in normal ageing as well as in the rehabilitation and care of two of the most common and ageing-related neurological diseases: stroke and dementia.

  19. Autism, emotion recognition and the mirror neuron system: the case of music.

    Science.gov (United States)

    Molnar-Szakacs, Istvan; Wang, Martha J; Laugeson, Elizabeth A; Overy, Katie; Wu, Wai-Ling; Piggot, Judith

    2009-11-16

    Understanding emotions is fundamental to our ability to navigate and thrive in a complex world of human social interaction. Individuals with Autism Spectrum Disorders (ASD) are known to experience difficulties with the communication and understanding of emotion, such as the nonverbal expression of emotion and the interpretation of emotions of others from facial expressions and body language. These deficits often lead to loneliness and isolation from peers, and social withdrawal from the environment in general. In the case of music however, there is evidence to suggest that individuals with ASD do not have difficulties recognizing simple emotions. In addition, individuals with ASD have been found to show normal and even superior abilities with specific aspects of music processing, and often show strong preferences towards music. It is possible these varying abilities with different types of expressive communication may be related to a neural system referred to as the mirror neuron system (MNS), which has been proposed as deficient in individuals with autism. Music's power to stimulate emotions and intensify our social experiences might activate the MNS in individuals with ASD, and thus provide a neural foundation for music as an effective therapeutic tool. In this review, we present literature on the ontogeny of emotion processing in typical development and in individuals with ASD, with a focus on the case of music.

  20. It's Sad but I Like It: The Neural Dissociation Between Musical Emotions and Liking in Experts and Laypersons.

    Science.gov (United States)

    Brattico, Elvira; Bogert, Brigitte; Alluri, Vinoo; Tervaniemi, Mari; Eerola, Tuomas; Jacobsen, Thomas

    2015-01-01

    Emotion-related areas of the brain, such as the medial frontal cortices, amygdala, and striatum, are activated during listening to sad or happy music as well as during listening to pleasurable music. Indeed, in music, like in other arts, sad and happy emotions might co-exist and be distinct from emotions of pleasure or enjoyment. Here we aimed at discerning the neural correlates of sadness or happiness in music as opposed to those related to musical enjoyment. We further investigated whether musical expertise modulates the neural activity during affective listening of music. To these aims, 13 musicians and 16 non-musicians brought to the lab their most liked and disliked musical pieces with a happy and sad connotation. Based on a listening test, we selected the most representative 18 sec excerpts of the emotions of interest for each individual participant. Functional magnetic resonance imaging (fMRI) recordings were obtained while subjects listened to and rated the excerpts. The cortico-thalamo-striatal reward circuit and motor areas were more active during liked than disliked music, whereas only the auditory cortex and the right amygdala were more active for disliked over liked music. These results discern the brain structures responsible for the perception of sad and happy emotions in music from those related to musical enjoyment. We also obtained novel evidence for functional differences in the limbic system associated with musical expertise, by showing enhanced liking-related activity in fronto-insular and cingulate areas in musicians.

  2. Music-induced emotions can be predicted from a combination of brain activity and acoustic features.

    Science.gov (United States)

    Daly, Ian; Williams, Duncan; Hallowell, James; Hwang, Faustina; Kirke, Alexis; Malik, Asad; Weaver, James; Miranda, Eduardo; Nasuto, Slawomir J

    2015-12-01

    It is widely acknowledged that music can communicate and induce a wide range of emotions in the listener. However, music is a highly complex audio signal composed of a wide range of time- and frequency-varying components. Additionally, music-induced emotions are known to differ greatly between listeners. Therefore, it is not immediately clear what emotions will be induced in a given individual by a piece of music. We attempt to predict the music-induced emotional response in a listener by measuring the activity in the listener's electroencephalogram (EEG). We combine these measures with acoustic descriptors of the music, an approach that allows us to consider music as a complex set of time-varying acoustic features, independently of any specific music theory. Regression models are found which allow us to predict the music-induced emotions of our participants with a correlation between the actual and predicted responses of up to r = 0.234, suggesting that participants' music-induced emotions can be predicted from their neural activity and the properties of the music. Given the large amount of noise, non-stationarity, and non-linearity in both EEG and music, this is an encouraging result. Additionally, the combination of measures of brain activity and acoustic features describing the music played to our participants allows us to predict music-induced emotions with significantly higher accuracies than either feature type alone (p<0.01).
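
    The record does not specify the regression pipeline; a minimal sketch of the general idea (fitting a linear model on EEG-derived and acoustic feature matrices and scoring it by the correlation between actual and predicted ratings) is given below. All data, dimensions, and the train/test split are synthetic placeholders, not the study's materials.

        import numpy as np

        rng = np.random.default_rng(42)
        n_trials = 120
        eeg_feats = rng.standard_normal((n_trials, 10))       # e.g., band-power features
        acoustic_feats = rng.standard_normal((n_trials, 6))   # e.g., tempo, brightness, ...
        valence = eeg_feats[:, 0] + acoustic_feats[:, 0] + rng.standard_normal(n_trials)

        def predicted_vs_actual_r(features, target, n_train=80):
            """Fit ordinary least squares on a training split and return Pearson's r
            between predicted and actual ratings on the held-out trials."""
            X = np.column_stack([np.ones(len(target)), features])   # add an intercept
            coef, *_ = np.linalg.lstsq(X[:n_train], target[:n_train], rcond=None)
            pred = X[n_train:] @ coef
            return np.corrcoef(pred, target[n_train:])[0, 1]

        combined = np.column_stack([eeg_feats, acoustic_feats])
        for name, feats in [("EEG only", eeg_feats),
                            ("acoustic only", acoustic_feats),
                            ("EEG + acoustic", combined)]:
            print(name, round(predicted_vs_actual_r(feats, valence), 3))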

  3. Music-based interventions in neurological rehabilitation.

    Science.gov (United States)

    Sihvonen, Aleksi J; Särkämö, Teppo; Leo, Vera; Tervaniemi, Mari; Altenmüller, Eckart; Soinila, Seppo

    2017-08-01

    During the past ten years, an increasing number of controlled studies have assessed the potential rehabilitative effects of music-based interventions, such as music listening, singing, or playing an instrument, in several neurological diseases. Although the number of studies and extent of available evidence is greatest in stroke and dementia, there is also evidence for the effects of music-based interventions on supporting cognition, motor function, or emotional wellbeing in people with Parkinson's disease, epilepsy, or multiple sclerosis. Music-based interventions can affect divergent functions such as motor performance, speech, or cognition in these patient groups. However, the psychological effects and neurobiological mechanisms underlying the effects of music interventions are likely to share common neural systems for reward, arousal, affect regulation, learning, and activity-driven plasticity. Although further controlled studies are needed to establish the efficacy of music in neurological recovery, music-based interventions are emerging as promising rehabilitation strategies.

  4. [Music-based intervention in children].

    Science.gov (United States)

    Kiese-Himmel, Christiane

    2012-01-01

    Music-based interventions with children are an effective method in health care, in the treatment of illness, and in education. Engagement with music enables positive transfer effects to extra-musical developmental domains. Music therapy has been applied primarily as a practice-oriented scientific discipline, both as one component within multi-modal therapy approaches and, more specifically, for children with emotional disorders within somatic treatment concepts and in rehabilitation. The following narrative overview presents music therapy's working basis, treatment goals, and selected outcome research in children from 2005-2010. Empirical research on the application of music therapy to children remains substantially lacking, which presents an opportunity to initiate a broad range of future studies. Current challenges and opportunities for scientific, music-based intervention in the paediatric population lie in specifying differential indications (both intervention approach and duration), replicable comparative therapy (alternated-treatment designs), the use of music-therapeutic placebo controls, and the verification and analysis of specific music-therapeutic mechanisms.

  5. Music and emotions in the brain: familiarity matters.

    Directory of Open Access Journals (Sweden)

    Carlos Silva Pereira

    Full Text Available The importance of music in our daily life has given rise to an increased number of studies addressing the brain regions involved in its appreciation. Some of these studies controlled only for the familiarity of the stimuli, while others relied on pleasantness ratings, and others still on musical preferences. With a listening test and a functional magnetic resonance imaging (fMRI) experiment, we wished to clarify the role of familiarity in the brain correlates of music appreciation by controlling, in the same study, for both familiarity and musical preferences. First, we conducted a listening test, in which participants rated the familiarity and liking of song excerpts from the pop/rock repertoire, allowing us to select a personalized set of stimuli per subject. Then, we used a passive listening paradigm in fMRI to study music appreciation in a naturalistic condition with increased ecological value. Brain activation data revealed that broad emotion-related limbic and paralimbic regions as well as the reward circuitry were significantly more active for familiar relative to unfamiliar music. Smaller regions in the cingulate cortex and frontal lobe, including the motor cortex and Broca's area, were found to be more active in response to liked music when compared to disliked one. Hence, familiarity seems to be a crucial factor in making the listeners emotionally engaged with music, as revealed by fMRI data.

  6. Music and emotions in the brain: familiarity matters.

    Science.gov (United States)

    Pereira, Carlos Silva; Teixeira, João; Figueiredo, Patrícia; Xavier, João; Castro, São Luís; Brattico, Elvira

    2011-01-01

    The importance of music in our daily life has given rise to an increased number of studies addressing the brain regions involved in its appreciation. Some of these studies controlled only for the familiarity of the stimuli, while others relied on pleasantness ratings, and others still on musical preferences. With a listening test and a functional magnetic resonance imaging (fMRI) experiment, we wished to clarify the role of familiarity in the brain correlates of music appreciation by controlling, in the same study, for both familiarity and musical preferences. First, we conducted a listening test, in which participants rated the familiarity and liking of song excerpts from the pop/rock repertoire, allowing us to select a personalized set of stimuli per subject. Then, we used a passive listening paradigm in fMRI to study music appreciation in a naturalistic condition with increased ecological value. Brain activation data revealed that broad emotion-related limbic and paralimbic regions as well as the reward circuitry were significantly more active for familiar relative to unfamiliar music. Smaller regions in the cingulate cortex and frontal lobe, including the motor cortex and Broca's area, were found to be more active in response to liked music when compared to disliked one. Hence, familiarity seems to be a crucial factor in making the listeners emotionally engaged with music, as revealed by fMRI data.

  8. Sensitivity to musical emotion is influenced by tonal structure in congenital amusia

    OpenAIRE

    Jiang, Cunmei; Liu, Fang; Wong, Patrick C. M.

    2017-01-01

    Emotional communication in music depends on multiple attributes including psychoacoustic features and tonal system information, the latter of which is unique to music. The present study investigated whether congenital amusia, a lifelong disorder of musical processing, impacts sensitivity to musical emotion elicited by timbre and tonal system information. Twenty-six amusics and 26 matched controls made tension judgments on Western (familiar) and Indian (unfamiliar) melodies played on piano and...

  9. Shared acoustic codes underlie emotional communication in music and speech-Evidence from deep transfer learning.

    Directory of Open Access Journals (Sweden)

    Eduardo Coutinho

    Full Text Available Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, in such a way that the communication of specific emotions is achieved, at least to a certain extent, by means of shared acoustic patterns. From an affective sciences point of view, determining the degree of overlap between both domains is fundamental to understanding the shared mechanisms underlying such a phenomenon. From a machine learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities to enlarge the amount of data available to develop music and speech emotion recognition systems. In this article, we investigate time-continuous predictions of emotion (Arousal and Valence) in music and speech, and the Transfer Learning between these domains. We establish a comparative framework including intra-domain (i.e., models trained and tested on the same modality, either music or speech) and cross-domain experiments (i.e., models trained in one modality and tested on the other). In the cross-domain context, we evaluated two strategies: the direct transfer between domains, and the contribution of Transfer Learning techniques (feature-representation-transfer based on Denoising Auto Encoders) for reducing the gap in the feature space distributions. Our results demonstrate an excellent cross-domain generalisation performance with and without feature representation transfer in both directions. In the case of music, cross-domain approaches outperformed intra-domain models for Valence estimation, whereas for speech, intra-domain models achieved the best performance. This is the first demonstration of shared acoustic codes for emotional expression in music and speech in the time-continuous domain.

  11. Music therapy, emotions and the heart: a pilot study.

    Science.gov (United States)

    Raglio, Alfredo; Oasi, Osmano; Gianotti, Marta; Bellandi, Daniele; Manzoni, Veronica; Goulene, Karine; Imbriani, Chiara; Badiale, Marco Stramba

    2012-01-01

    The autonomic nervous system plays an important role in the control of cardiac function. It has been suggested that sound and music may have effects on the autonomic control of the heart by inducing emotions, concomitantly with the activation of specific brain areas, i.e. the limbic area, and that they may exert potential beneficial effects. This study is a prerequisite and defines a methodology to assess the relation between changes in cardiac physiological parameters, such as heart rate, QT interval and their variability, and the psychological responses to music therapy sessions. We assessed the cardiac physiological parameters and psychological responses to a music therapy session. ECG Holter recordings were performed before, during and after a music therapy session in 8 healthy individuals. The different behaviors of the music therapist and of the subjects were analyzed with a specific music therapy assessment (Music Therapy Checklist). After the session, mean heart rate decreased (p = 0.05), high-frequency heart rate variability tended to be higher, and QTc variability tended to be lower. During the music therapy session, "affect attunements" were found in all subjects but one. A significant emotional activation was associated with higher dynamicity and variation of the sound-music interactions. Our results may represent the rational basis for larger studies in different clinical conditions.

  12. An examination of cue redundancy theory in cross-cultural decoding of emotions in music.

    Science.gov (United States)

    Kwoun, Soo-Jin

    2009-01-01

    The present study investigated the effects of structural features of music (i.e., variations in tempo, loudness, articulation, etc.) and of cultural and learning factors on the assignment of emotional meaning in music. Four participant groups, young Koreans, young Americans, older Koreans, and older Americans, rated the emotional expressions of Korean folksongs on three adjective scales: happiness, sadness and anger. The results of the study are in accordance with the Cue Redundancy model of emotional perception in music, indicating that expressive music embodies both universal auditory cues that communicate the emotional meanings of music across cultures and culture-specific cues that result from cultural convention.

  13. Modeling Temporal Structure in Music for Emotion Prediction using Pairwise Comparisons

    DEFF Research Database (Denmark)

    Madsen, Jens; Jensen, Bjørn Sand; Larsen, Jan

    2014-01-01

    such as emotions, genre, and similarity. This paper addresses the specific hypothesis whether temporal information is essential for predicting expressed emotions in music, as a prototypical example of a cognitive aspect of music. We propose to test this hypothesis using a novel processing pipeline: 1) Extracting...
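
    Although the abstract is cut off in this record, the pairwise-comparison approach it names can be illustrated with a Bradley-Terry-style model: each excerpt gets a latent emotion score that is a linear function of its audio features, and the probability that excerpt i is judged above excerpt j grows with the score difference. Everything below (features, number of excerpts, simulated judgments) is invented for illustration and is not the authors' pipeline.

        import numpy as np

        rng = np.random.default_rng(7)
        n_excerpts, n_features = 40, 8
        X = rng.standard_normal((n_excerpts, n_features))   # audio features per excerpt
        true_w = rng.standard_normal(n_features)
        score = X @ true_w                                   # latent "expressed emotion" score

        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        # Simulated judgments: excerpt i rated above j with probability sigmoid(s_i - s_j)
        i_idx, j_idx = np.triu_indices(n_excerpts, k=1)
        D = X[i_idx] - X[j_idx]                              # feature differences per pair
        y = (rng.random(len(D)) < sigmoid(score[i_idx] - score[j_idx])).astype(float)

        # Fit the weights by gradient ascent on the logistic (Bradley-Terry) likelihood
        w = np.zeros(n_features)
        for _ in range(300):
            w += 0.5 * D.T @ (y - sigmoid(D @ w)) / len(D)

        print("correlation with true scores:", round(np.corrcoef(X @ w, score)[0, 1], 3))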

  14. Hidden sources of joy, fear, and sadness: Explicit versus implicit neural processing of musical emotions.

    Science.gov (United States)

    Bogert, Brigitte; Numminen-Kontti, Taru; Gold, Benjamin; Sams, Mikko; Numminen, Jussi; Burunat, Iballa; Lampinen, Jouko; Brattico, Elvira

    2016-08-01

    Music is often used to regulate emotions and mood. Typically, music conveys and induces emotions even when one does not attend to them. Studies on the neural substrates of musical emotions have, however, only examined brain activity when subjects have focused on the emotional content of the music. Here we address with functional magnetic resonance imaging (fMRI) the neural processing of happy, sad, and fearful music with a paradigm in which 56 subjects were instructed to either classify the emotions (explicit condition) or pay attention to the number of instruments playing (implicit condition) in 4-s music clips. In the implicit vs. explicit condition, stimuli activated bilaterally the inferior parietal lobule, premotor cortex, caudate, and ventromedial frontal areas. The cortical dorsomedial prefrontal and occipital areas activated during explicit processing were those previously shown to be associated with the cognitive processing of music and emotion recognition and regulation. Moreover, happiness in music was associated with activity in the bilateral auditory cortex, left parahippocampal gyrus, and supplementary motor area, whereas the negative emotions of sadness and fear corresponded with activation of the left anterior cingulate and middle frontal gyrus and down-regulation of the orbitofrontal cortex. Our study demonstrates for the first time in healthy subjects the neural underpinnings of the implicit processing of brief musical emotions, particularly in frontoparietal, dorsolateral prefrontal, and striatal areas of the brain.

  15. Reduced sensitivity to emotional prosody in congenital amusia rekindles the musical protolanguage hypothesis.

    Science.gov (United States)

    Thompson, William Forde; Marin, Manuela M; Stewart, Lauren

    2012-11-13

    A number of evolutionary theories assume that music and language have a common origin as an emotional protolanguage that remains evident in overlapping functions and shared neural circuitry. The most basic prediction of this hypothesis is that sensitivity to emotion in speech prosody derives from the capacity to process music. We examined sensitivity to emotion in speech prosody in a sample of individuals with congenital amusia, a neurodevelopmental disorder characterized by deficits in processing acoustic and structural attributes of music. Twelve individuals with congenital amusia and 12 matched control participants judged the emotional expressions of 96 spoken phrases. Phrases were semantically neutral but prosodic cues (tone of voice) communicated each of six emotional states: happy, tender, afraid, irritated, sad, and no emotion. Congenitally amusic individuals were significantly worse than matched controls at decoding emotional prosody, with decoding rates for some emotions up to 20% lower than that of matched controls. They also reported difficulty understanding emotional prosody in their daily lives, suggesting some awareness of this deficit. The findings support speculations that music and language share mechanisms that trigger emotional responses to acoustic attributes, as predicted by theories that propose a common evolutionary link between these domains.

  16. Listening to music and physiological and psychological functioning: the mediating role of emotion regulation and stress reactivity.

    Science.gov (United States)

    Thoma, M V; Scholz, U; Ehlert, U; Nater, U M

    2012-01-01

    Music listening has been suggested to have short-term beneficial effects. The aim of this study was to investigate the association and potential mediating mechanisms between various aspects of habitual music-listening behaviour and physiological and psychological functioning. An internet-based survey was conducted in university students, measuring habitual music-listening behaviour, emotion regulation, stress reactivity, as well as physiological and psychological functioning. A total of 1230 individuals (mean = 24.89 ± 5.34 years, 55.3% women) completed the questionnaire. Quantitative aspects of habitual music-listening behaviour, i.e. average duration of music listening and subjective relevance of music, were not associated with physiological and psychological functioning. In contrast, qualitative aspects, i.e. reasons for listening (especially 'reducing loneliness and aggression', and 'arousing or intensifying specific emotions') were significantly related to physiological and psychological functioning (all p = 0.001). These direct effects were mediated by distress-augmenting emotion regulation and individual stress reactivity. The habitual music-listening behaviour appears to be a multifaceted behaviour that is further influenced by dispositions that are usually not related to music listening. Consequently, habitual music-listening behaviour is not obviously linked to physiological and psychological functioning.

  17. I-space: the effects of emotional valence and source of music on interpersonal distance.

    Directory of Open Access Journals (Sweden)

    Ana Tajadura-Jiménez

    Full Text Available BACKGROUND: The ubiquitous use of personal music players in over-crowded public transport alludes to the hypothesis that apart from making the journey more pleasant, listening to music through headphones may also affect representations of our personal space, that is, the emotionally-tinged zone around the human body that people feel is "their space". We evaluated the effects of emotional valence (positive versus negative) and source (external, i.e. loudspeakers, versus embedded, i.e. headphones) of music on the participant's interpersonal distance when interacting with others. METHODOLOGY/PRINCIPAL FINDINGS: Personal space was evaluated as the comfort interpersonal distance between participant and experimenter during both active and passive approach tasks. Our results show that, during passive approach tasks, listening to positive versus negative emotion-inducing music reduces the representation of personal space, allowing others to come closer to us. With respect to a no-music condition, an embedded source of positive emotion-inducing music reduced personal space, while an external source of negative emotion-inducing music expanded personal space. CONCLUSIONS/SIGNIFICANCE: The results provide the first empirical evidence of the relation between induced emotional state, as a result of listening to positive music through headphones, and personal space when interacting with others. This research might help to understand the benefit that people find in using personal music players in crowded situations, such as when using the public transport in urban settings.

  18. Pleasurable emotional response to music: a case of neurodegenerative generalized auditory agnosia.

    Science.gov (United States)

    Matthews, Brandy R; Chang, Chiung-Chih; De May, Mary; Engstrom, John; Miller, Bruce L

    2009-06-01

    Recent functional neuroimaging studies implicate the network of mesolimbic structures known to be active in reward processing as the neural substrate of pleasure associated with listening to music. Psychoacoustic and lesion studies suggest that there is a widely distributed cortical network involved in processing discrete musical variables. Here we present the case of a young man with auditory agnosia as the consequence of cortical neurodegeneration who continues to experience pleasure when exposed to music. In a series of musical tasks, the subject was unable to accurately identify any of the perceptual components of music beyond simple pitch discrimination, including musical variables known to impact the perception of affect. The subject subsequently misidentified the musical character of personally familiar tunes presented experimentally, but continued to report that the activity of 'listening' to specific musical genres was an emotionally rewarding experience. The implications of this case for the evolving understanding of music perception, music misperception, music memory, and music-associated emotion are discussed.

  19. How We Remember the Emotional Intensity of Past Musical Experiences

    Directory of Open Access Journals (Sweden)

    Thomas eSchäfer

    2014-08-01

    Full Text Available Listening to music usually elicits emotions that can vary considerably in their intensity over the course of listening. Yet, after listening to a piece of music, people are easily able to evaluate the music’s overall emotional intensity. There are two different hypotheses about how affective experiences are temporally processed and integrated: (1) all moments’ intensities are integrated, resulting in an averaged value; (2) the overall evaluation is built from specific single moments, such as the moments of highest emotional intensity (peaks), the end, or a combination of these. Here we investigated what listeners do when building an overall evaluation of a musical experience. Participants listened to unknown songs and provided moment-to-moment ratings of experienced intensity of emotions. Subsequently, they evaluated the overall emotional intensity of each song. Results indicate that participants’ evaluations were predominantly influenced by their average impression but that, in addition, the peaks and end emotional intensities contributed substantially. These results indicate that both types of processes play a role: All moments are integrated into an averaged value but single moments might be assigned a higher value in the calculation of this average.
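
    The two integration hypotheses contrasted in this abstract lend themselves to a simple worked example. The sketch below, with an invented moment-to-moment intensity trace, computes the candidate summary values (overall average, peak, end, and the classic peak-end combination) that a retrospective overall rating could then be compared against; it is an illustrative assumption, not the study's analysis code.

    import numpy as np

    # Hypothetical moment-to-moment emotional-intensity ratings for one song,
    # sampled once per second on a 1-7 scale (values invented for illustration).
    trace = np.array([2, 3, 3, 4, 6, 7, 5, 4, 4, 5])

    mean_intensity = trace.mean()        # hypothesis 1: all moments are averaged
    peak_intensity = trace.max()         # hypothesis 2: specific single moments dominate
    end_intensity = trace[-1]
    peak_end = (peak_intensity + end_intensity) / 2   # classic "peak-end" summary

    # Across many songs and listeners, the retrospective overall rating can be
    # regressed on these candidate predictors to see which carries the most weight.
    print(mean_intensity, peak_intensity, end_intensity, peak_end)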

  20. Benefits of Music Training for Perception of Emotional Speech Prosody in Deaf Children With Cochlear Implants.

    Science.gov (United States)

    Good, Arla; Gordon, Karen A; Papsin, Blake C; Nespoli, Gabe; Hopyan, Talar; Peretz, Isabelle; Russo, Frank A

    Children who use cochlear implants (CIs) have characteristic pitch processing deficits leading to impairments in music perception and in understanding emotional intention in spoken language. Music training for normal-hearing children has previously been shown to benefit perception of emotional prosody. The purpose of the present study was to assess whether deaf children who use CIs obtain similar benefits from music training. We hypothesized that music training would lead to gains in auditory processing and that these gains would transfer to emotional speech prosody perception. Study participants were 18 child CI users (ages 6 to 15). Participants received either 6 months of music training (i.e., individualized piano lessons) or 6 months of visual art training (i.e., individualized painting lessons). Measures of music perception and emotional speech prosody perception were obtained pre-, mid-, and post-training. The Montreal Battery for Evaluation of Musical Abilities was used to measure five different aspects of music perception (scale, contour, interval, rhythm, and incidental memory). The emotional speech prosody task required participants to identify the emotional intention of a semantically neutral sentence under audio-only and audiovisual conditions. Music training led to improved performance on tasks requiring the discrimination of melodic contour and rhythm, as well as incidental memory for melodies. These improvements were predominantly found from mid- to post-training. Critically, music training also improved emotional speech prosody perception. Music training was most advantageous in audio-only conditions. Art training did not lead to the same improvements. Music training can lead to improvements in perception of music and emotional speech prosody, and thus may be an effective supplementary technique for supporting auditory rehabilitation following cochlear implantation.

  1. Sensitivity to musical emotion is influenced by tonal structure in congenital amusia.

    Science.gov (United States)

    Jiang, Cunmei; Liu, Fang; Wong, Patrick C M

    2017-08-08

    Emotional communication in music depends on multiple attributes including psychoacoustic features and tonal system information, the latter of which is unique to music. The present study investigated whether congenital amusia, a lifelong disorder of musical processing, impacts sensitivity to musical emotion elicited by timbre and tonal system information. Twenty-six amusics and 26 matched controls made tension judgments on Western (familiar) and Indian (unfamiliar) melodies played on piano and sitar. Like controls, amusics used timbre cues to judge musical tension in Western and Indian melodies. While controls assigned significantly lower tension ratings to Western melodies compared to Indian melodies, thus showing a tonal familiarity effect on tension ratings, amusics provided comparable tension ratings for Western and Indian melodies on both timbres. Furthermore, amusics rated Western melodies as more tense compared to controls, as they relied less on tonality cues than controls in rating tension for Western melodies. The implications of these findings in terms of emotional responses to music are discussed.

  2. Emotions Induced by Operatic Music: Psychophysiological Effects of Music, Plot, and Acting: A Scientist's Tribute to Maria Callas

    Science.gov (United States)

    Baltes, Felicia Rodica; Avram, Julia; Miclea, Mircea; Miu, Andrei C.

    2011-01-01

    Operatic music involves both singing and acting (as well as rich audiovisual background arising from the orchestra and elaborate scenery and costumes) that multiply the mechanisms by which emotions are induced in listeners. The present study investigated the effects of music, plot, and acting performance on emotions induced by opera. There were…

  3. Where Words Fail, Music Speaks: A Mixed Method Study of an Evidence-Based Music Protocol.

    Science.gov (United States)

    Daniels, Ruby A; Torres, David; Reeser, Cathy

    2016-01-01

    Despite numerous studies documenting the benefits of music, hospice social workers are often unfamiliar with evidence-based music practices that may improve end of life care. This mixed method study tested an intervention to teach hospice social workers and chaplains (N = 10) an evidence-based music protocol. Participants used the evidence-based practice (EBP) for 30 days, recording 226 journal entries that described observations of 84 patients and their families. There was a significant increase in EBP knowledge (35%). Prompting behavioral and emotional responses, music was described frequently as a catalyst that facilitated deeper dialogue between patients, families, social workers, and chaplains.

  4. The role of mood and personality in the perception of emotions represented by music.

    Science.gov (United States)

    Vuoskoski, Jonna K; Eerola, Tuomas

    2011-10-01

    Neuroimaging studies investigating the processing of emotions have traditionally considered variance between subjects as statistical noise. However, according to behavioural studies, individual differences in emotional processing appear to be an inherent part of the process itself. Temporary mood states as well as stable personality traits have been shown to influence the processing of emotions, causing trait- and mood-congruent biases. The primary aim of this study was to explore how listeners' personality and mood are reflected in their evaluations of discrete emotions represented by music. A related aim was to investigate the role of personality in music preferences. An experiment was carried out where 67 participants evaluated 50 music excerpts in terms of perceived emotions (anger, fear, happiness, sadness, and tenderness) and preference. Current mood was associated with mood-congruent biases in the evaluation of emotions represented by music, but extraversion moderated the degree of mood-congruence. Personality traits were strongly connected with preference ratings, and the correlations reflected the trait-congruent patterns obtained in prior studies investigating self-referential emotional processing. Implications for future behavioural and neuroimaging studies on music and emotions are raised. Copyright © 2011 Elsevier Srl. All rights reserved.

  5. The effect of background music on episodic memory and autonomic responses: listening to emotionally touching music enhances facial memory capacity

    Science.gov (United States)

    Mado Proverbio, C.A. Alice; Lozano Nasi, Valentina; Alessandra Arcari, Laura; De Benedetto, Francesco; Guardamagna, Matteo; Gazzola, Martina; Zani, Alberto

    2015-01-01

    The aim of this study was to investigate how background auditory processing can affect other perceptual and cognitive processes as a function of stimulus content, style and emotional nature. Previous studies have offered contrasting evidence, and it has been recently shown that listening to music negatively affected concurrent mental processing in the elderly but not in young adults. To further investigate this matter, the effect of listening to music vs. listening to the sound of rain or silence was examined by administering an old/new face memory task (involving 448 unknown faces) to a group of 54 non-musician university students. Heart rate and diastolic and systolic blood pressure were measured during an explicit face study session that was followed by a memory test. The results indicated that more efficient and faster recall of faces occurred under conditions of silence or when participants were listening to emotionally touching music. Whereas auditory background (e.g., rain or joyful music) interfered with memory encoding, listening to emotionally touching music improved memory and significantly increased heart rate. It is hypothesized that touching music is able to modify the visual perception of faces by binding facial properties with auditory and emotionally charged information (music), which may therefore result in deeper memory encoding. PMID:26469712

  6. The effect of background music on episodic memory and autonomic responses: listening to emotionally touching music enhances facial memory capacity.

    Science.gov (United States)

    Proverbio, Alice Mado; Mado Proverbio, C A Alice; Lozano Nasi, Valentina; Alessandra Arcari, Laura; De Benedetto, Francesco; Guardamagna, Matteo; Gazzola, Martina; Zani, Alberto

    2015-10-15

    The aim of this study was to investigate how background auditory processing can affect other perceptual and cognitive processes as a function of stimulus content, style and emotional nature. Previous studies have offered contrasting evidence, and it has been recently shown that listening to music negatively affected concurrent mental processing in the elderly but not in young adults. To further investigate this matter, the effect of listening to music vs. listening to the sound of rain or silence was examined by administering an old/new face memory task (involving 448 unknown faces) to a group of 54 non-musician university students. Heart rate and diastolic and systolic blood pressure were measured during an explicit face study session that was followed by a memory test. The results indicated that more efficient and faster recall of faces occurred under conditions of silence or when participants were listening to emotionally touching music. Whereas auditory background (e.g., rain or joyful music) interfered with memory encoding, listening to emotionally touching music improved memory and significantly increased heart rate. It is hypothesized that touching music is able to modify the visual perception of faces by binding facial properties with auditory and emotionally charged information (music), which may therefore result in deeper memory encoding.

  7. Elucidating the relationship between work attention performance and emotions arising from listening to music.

    Science.gov (United States)

    Shih, Yi-Nuo; Chien, Wei-Hsien; Chiang, Han-Sun

    2016-10-17

    In addition to demonstrating that human emotions improve work attention performance, numerous studies have also established that music alters human emotions. Given the pervasiveness of background music in the workplace, exactly how work attention, emotions and music listening are related is a priority concern in human resource management. This preliminary study investigates the relationship between work attention performance and emotions arising from listening to music. Thirty-one males and 34 females, aged 20-24 years, participated in this study after giving written informed consent. A randomized controlled trial (RCT) was performed, consisting of six steps and the use of a standard attention test and an emotion questionnaire. Background music with lyrics impaired attention performance more than music without lyrics. The analysis also indicated that music that made listeners report feeling "loved" was associated with higher work-attention scores, whereas music that tended to make listeners feel sad was associated with lower work-attention scores. The results of this preliminary study suggest that background music in the workplace should aim to create an environment in which listeners feel loved or cared for, while avoiding music that causes individuals to feel stressed or sad. We recommend that future research increase the number of participants to enhance the applicability and replicability of these findings.

  8. Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias

    OpenAIRE

    Invitto, Sara; Calcagnì, Antonio; Mignozzi, Arianna; Scardino, Rosanna; Piraino, Giulia; Turchi, Daniele; De Feudis, Irio; Brunetti, Antonio; Bevilacqua, Vitoantonio; de Tommaso, Marina

    2017-01-01

    Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emo...

  9. Benefits of Music Training for Perception of Emotional Speech Prosody in Deaf Children With Cochlear Implants

    Science.gov (United States)

    Gordon, Karen A.; Papsin, Blake C.; Nespoli, Gabe; Hopyan, Talar; Peretz, Isabelle; Russo, Frank A.

    2017-01-01

    Objectives: Children who use cochlear implants (CIs) have characteristic pitch processing deficits leading to impairments in music perception and in understanding emotional intention in spoken language. Music training for normal-hearing children has previously been shown to benefit perception of emotional prosody. The purpose of the present study was to assess whether deaf children who use CIs obtain similar benefits from music training. We hypothesized that music training would lead to gains in auditory processing and that these gains would transfer to emotional speech prosody perception. Design: Study participants were 18 child CI users (ages 6 to 15). Participants received either 6 months of music training (i.e., individualized piano lessons) or 6 months of visual art training (i.e., individualized painting lessons). Measures of music perception and emotional speech prosody perception were obtained pre-, mid-, and post-training. The Montreal Battery for Evaluation of Musical Abilities was used to measure five different aspects of music perception (scale, contour, interval, rhythm, and incidental memory). The emotional speech prosody task required participants to identify the emotional intention of a semantically neutral sentence under audio-only and audiovisual conditions. Results: Music training led to improved performance on tasks requiring the discrimination of melodic contour and rhythm, as well as incidental memory for melodies. These improvements were predominantly found from mid- to post-training. Critically, music training also improved emotional speech prosody perception. Music training was most advantageous in audio-only conditions. Art training did not lead to the same improvements. Conclusions: Music training can lead to improvements in perception of music and emotional speech prosody, and thus may be an effective supplementary technique for supporting auditory rehabilitation following cochlear implantation. PMID:28085739

  10. Collecting annotations for induced musical emotion via online game with a purpose emotify

    NARCIS (Netherlands)

    Aljanaki, Anna; Wiering, Frans; Veltkamp, Remco

    2014-01-01

    One of the major reasons why music is so enjoyable is its emotional impact. Indexing and searching by emotion would greatly increase the usability of online music collections. However, there is no consensus on the question which model of emotion would fit this task best. Such a model should be easy

  11. Time flies with music whatever its emotional valence.

    Science.gov (United States)

    Droit-Volet, Sylvie; Bigand, Emmanuel; Ramos, Danilo; Bueno, José Lino Oliveira

    2010-10-01

    The present study used a temporal bisection task to investigate whether music affects time estimation differently from a matched auditory neutral stimulus, and whether the emotional valence of the musical stimuli (i.e., sad vs. happy music) modulates this effect. The results showed that, compared to sine wave control music, music presented in a major (happy) or a minor (sad) key shifted the bisection function toward the right, thus increasing the bisection point value (point of subjective equality). This indicates that the duration of a melody is judged shorter than that of a non-melodic control stimulus, thus confirming that "time flies" when we listen to music. Nevertheless, sensitivity to time was similar for all the auditory stimuli. Furthermore, the temporal bisection functions did not differ as a function of musical mode. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. Alteration of complex negative emotions induced by music in euthymic patients with bipolar disorder.

    Science.gov (United States)

    Choppin, Sabine; Trost, Wiebke; Dondaine, Thibaut; Millet, Bruno; Drapier, Dominique; Vérin, Marc; Robert, Gabriel; Grandjean, Didier

    2016-02-01

    Research has shown bipolar disorder to be characterized by dysregulation of emotion processing, including biases in facial expression recognition that is most prevalent during depressive and manic states. Very few studies have examined induced emotions when patients are in a euthymic phase, and there has been no research on complex emotions. We therefore set out to test emotional hyperreactivity in response to musical excerpts inducing complex emotions in bipolar disorder during euthymia. We recruited 21 patients with bipolar disorder (BD) in a euthymic phase and 21 matched healthy controls. Participants first rated their emotional reactivity on two validated self-report scales (ERS and MAThyS). They then rated their music-induced emotions on nine continuous scales. The targeted emotions were wonder, power, melancholy and tension. We used a specific generalized linear mixed model to analyze the behavioral data. We found that participants in the euthymic bipolar group experienced more intense complex negative emotions than controls when the musical excerpts induced wonder. Moreover, patients exhibited greater emotional reactivity in daily life (ERS). Finally, a greater experience of tension while listening to positive music seemed to be mediated by greater emotional reactivity and a deficit in executive functions. The heterogeneity of the BD group in terms of clinical characteristics may have influenced the results. Euthymic patients with bipolar disorder exhibit more complex negative emotions than controls in response to positive music. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Music and emotion-a composer's perspective.

    Science.gov (United States)

    Douek, Joel

    2013-01-01

    This article takes an experiential and anecdotal look at the daily lives and work of film composers as creators of music. It endeavors to work backwards from what practitioners of the art and craft of music do instinctively or unconsciously, and to shine a light on it as a conscious process. It examines the role of the film composer in his task of conveying an often complex set of emotions and communicating with an immediacy and universality that often sit outside common language. Through the experiences of the author, as well as interviews with composer colleagues, the article explores both concrete and abstract ways in which music can bring meaning and magic to words and images, and serve as an underscore to our daily lives.

  14. A grounded theory of young tennis players use of music to manipulate emotional state.

    Science.gov (United States)

    Bishop, Daniel T; Karageorghis, Costas I; Loizou, Georgios

    2007-10-01

    The main objectives of this study were (a) to elucidate young tennis players' use of music to manipulate emotional states, and (b) to present a model grounded in present data to illustrate this phenomenon and to stimulate further research. Anecdotal evidence suggests that music listening is used regularly by elite athletes as a preperformance strategy, but only limited empirical evidence corroborates such use. Young tennis players (N = 14) were selected purposively for interview and diary data collection. Results indicated that participants consciously selected music to elicit various emotional states; frequently reported consequences of music listening included improved mood, increased arousal, and visual and auditory imagery. The choice of music tracks and the impact of music listening were mediated by a number of factors, including extramusical associations, inspirational lyrics, music properties, and desired emotional state. Implications for the future investigation of preperformance music are discussed.

  15. Detrended Fluctuation Analysis of the Human EEG during Listening to Emotional Music

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A nonlinear method, detrended fluctuation analysis (DFA), was used to investigate the scaling behavior of the human electroencephalogram (EEG) in three emotional music conditions (fear, happiness, sadness) and a rest condition (eyes closed). The results showed that the EEG exhibited scaling behavior in two regions, with two scaling exponents, β1 and β2, representing the complexity of higher- and lower-frequency activity (outside the β band), respectively. As the emotional intensity decreased, the value of β1 increased and the value of β2 decreased. The change in β1 was weakly correlated with the 'approach-withdrawal' model of emotion, and both fear and sad music produced differences compared with the eyes-closed rest condition. The study shows that music is a powerful elicitor of emotion and that nonlinear methods can contribute to the investigation of emotion.
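
    For readers unfamiliar with DFA, the sketch below is a minimal NumPy implementation of the method's core steps: integrate the mean-removed signal, detrend it within windows of increasing size, and read a scaling exponent off the log-log fluctuation plot. The toy white-noise input and the single fitted exponent are illustrative assumptions; the study fitted two exponents, β1 and β2, over separate short- and long-scale regions of real EEG.

    import numpy as np

    def dfa(signal, scales):
        """Detrended fluctuation analysis: return F(n) for each window size n."""
        x = np.asarray(signal, dtype=float)
        profile = np.cumsum(x - x.mean())             # integrated, mean-removed series
        fluctuations = []
        for n in scales:
            n_windows = len(profile) // n
            segments = profile[:n_windows * n].reshape(n_windows, n)
            t = np.arange(n)
            rms = []
            for seg in segments:
                coeffs = np.polyfit(t, seg, 1)        # linear detrend within each window
                rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
            fluctuations.append(np.mean(rms))
        return np.array(fluctuations)

    # Toy example on white noise; a real analysis would use EEG samples.
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal(4096)
    scales = np.unique(np.logspace(np.log10(16), np.log10(512), 12).astype(int))
    F = dfa(eeg, scales)

    # The scaling exponent is the slope of log F(n) vs log n; two exponents can be
    # obtained by fitting the short-scale and long-scale regions separately.
    exponent = np.polyfit(np.log(scales), np.log(F), 1)[0]
    print(round(exponent, 2))   # ~0.5 for uncorrelated noise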

  16. Autonomic Effects of Music in Health and Crohn's Disease: The Impact of Isochronicity, Emotional Valence, and Tempo

    OpenAIRE

    Krabs, Roland Uwe; Enk, Ronny; Teich, Niels; Koelsch, Stefan

    2015-01-01

    Background: Music can evoke strong emotions and thus elicit significant autonomic nervous system (ANS) responses. However, previous studies investigating music-evoked ANS effects produced inconsistent results. In particular, it is not clear (a) whether simply a musical tactus (without common emotional components of music) is sufficient to elicit ANS effects; (b) whether changes in the tempo of a musical piece contribute to the ANS effects; (c) whether emotional valence of music influences ANS...

  17. Multidimensional scaling of emotional responses to music in patients with temporal lobe resection.

    Science.gov (United States)

    Dellacherie, D; Bigand, E; Molin, P; Baulac, M; Samson, S

    2011-10-01

    The present study investigated emotional responses to music by using multidimensional scaling (MDS) analysis in patients with right or left medial temporal lobe (MTL) lesions and matched normal controls (NC). Participants were required to evaluate emotional dissimilarities of nine musical excerpts that were selected to express graduated changes along the valence and arousal dimensions. For this purpose, they rated dissimilarity between pairs of stimuli on an eight-point scale and the resulting matrices were submitted to an MDS analysis. The results showed that patients did not differ from NC participants in evaluating emotional feelings induced by the musical excerpts, suggesting that all participants were able to distinguish refined emotions. We concluded that the ability to detect and use emotional valence and arousal when making dissimilarity judgments was not strongly impaired by a right or left MTL lesion. This finding has important clinical implications and is discussed in light of current neuropsychological studies on emotion. It suggests that emotional responses to music can be at least partially preserved at a non-verbal level in patients with unilateral temporal lobe damage including the amygdala. Copyright © 2011 Elsevier Srl. All rights reserved.
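
    The analysis described above can be approximated with off-the-shelf tooling: non-metric multidimensional scaling applied to a precomputed dissimilarity matrix. The sketch below uses scikit-learn and a randomly generated 9 × 9 matrix purely as a stand-in for the averaged pairwise ratings; the matrix values and the two-dimensional target space are assumptions for illustration, not the study's data or code.

    import numpy as np
    from sklearn.manifold import MDS

    # Stand-in for a 9 x 9 matrix of averaged pairwise dissimilarity ratings
    # (1-8 scale) for nine musical excerpts; real data would come from judgments.
    rng = np.random.default_rng(1)
    upper = np.triu(rng.uniform(1, 8, size=(9, 9)), 1)
    dissim = upper + upper.T                          # symmetric, zero diagonal

    # Non-metric MDS recovers a 2-D configuration whose axes can then be
    # interpreted (e.g., as valence and arousal) and compared across groups.
    mds = MDS(n_components=2, dissimilarity="precomputed", metric=False,
              random_state=0)
    coords = mds.fit_transform(dissim)
    print(coords.shape)                               # (9, 2)
    print(round(mds.stress_, 3))                      # badness-of-fit of the solution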

  18. From Motion to Emotion: Accelerometer Data Predict Subjective Experience of Music

    Science.gov (United States)

    Irrgang, Melanie

    2016-01-01

    Music is often discussed as being emotional because it reflects expressive movements in audible form. Thus, a valid approach to measuring musical emotion could be to assess movement stimulated by music. In two experiments we evaluated the discriminative power of mobile-device-generated acceleration data, produced by free movement during music listening, for predicting ratings on the Geneva Emotion Music Scales (GEMS-9). The quality of prediction for the different GEMS dimensions varied between experiments: tenderness (R² = 0.50 in the first experiment, 0.39 in the second), nostalgia (R² = 0.42, 0.30), wonder (R² = 0.25, 0.34), sadness (R² = 0.24, 0.35), peacefulness (R² = 0.20, 0.35), joy (R² = 0.19, 0.33) and transcendence (R² = 0.14, 0.00). For others, such as power (R² = 0.42, 0.49) and tension (R² = 0.28, 0.27), results were almost reproduced across experiments. Furthermore, we extracted two principal components from the GEMS ratings, one representing the arousal and the other the valence of the experienced feeling. Both qualities, arousal and valence, could be predicted from the acceleration data, indicating that it provides information on both the quantity and the quality of the experience. On the one hand, these findings show how music-evoked movement patterns relate to music-evoked feelings. On the other hand, they help integrate findings from the field of embodied music cognition into music recommender systems. PMID:27415015

  19. From Motion to Emotion: Accelerometer Data Predict Subjective Experience of Music.

    Science.gov (United States)

    Irrgang, Melanie; Egermann, Hauke

    2016-01-01

    Music is often discussed as being emotional because it reflects expressive movements in audible form. Thus, a valid approach to measuring musical emotion could be to assess movement stimulated by music. In two experiments we evaluated the discriminative power of mobile-device-generated acceleration data, produced by free movement during music listening, for predicting ratings on the Geneva Emotion Music Scales (GEMS-9). The quality of prediction for the different GEMS dimensions varied between experiments: tenderness (R² = 0.50 in the first experiment, 0.39 in the second), nostalgia (R² = 0.42, 0.30), wonder (R² = 0.25, 0.34), sadness (R² = 0.24, 0.35), peacefulness (R² = 0.20, 0.35), joy (R² = 0.19, 0.33) and transcendence (R² = 0.14, 0.00). For others, such as power (R² = 0.42, 0.49) and tension (R² = 0.28, 0.27), results were almost reproduced across experiments. Furthermore, we extracted two principal components from the GEMS ratings, one representing the arousal and the other the valence of the experienced feeling. Both qualities, arousal and valence, could be predicted from the acceleration data, indicating that it provides information on both the quantity and the quality of the experience. On the one hand, these findings show how music-evoked movement patterns relate to music-evoked feelings. On the other hand, they help integrate findings from the field of embodied music cognition into music recommender systems.
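
    The two analysis steps named in this abstract (principal components of the GEMS ratings, and regression of those components on movement features) can be sketched with scikit-learn as below. The simulated feature matrix, the choice of plain linear regression, and the cross-validation setup are illustrative assumptions rather than the published pipeline; with random inputs the cross-validated R² will sit near zero rather than near the reported values.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical per-trial movement features derived from accelerometer signals
    # (e.g., mean magnitude, jerk, periodicity) and GEMS-9 ratings; all random here.
    rng = np.random.default_rng(2)
    n_trials = 120
    accel_features = rng.standard_normal((n_trials, 6))
    gems_ratings = rng.uniform(1, 5, size=(n_trials, 9))   # 9 GEMS dimensions

    # Two principal components of the GEMS ratings, interpretable as arousal and
    # valence in the study summarised above.
    arousal_valence = PCA(n_components=2).fit_transform(gems_ratings)

    # Predict each component from the movement features; cross-validated R²
    # parallels the R² values reported per GEMS dimension.
    for name, target in zip(("component 1", "component 2"), arousal_valence.T):
        r2 = cross_val_score(LinearRegression(), accel_features, target,
                             cv=5, scoring="r2").mean()
        print(name, round(r2, 2))   # near zero here, since the data are random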

  20. Emotion felt by the listener and expressed by the music: literature review and theoretical perspectives.

    Science.gov (United States)

    Schubert, Emery

    2013-12-17

    In his seminal paper, Gabrielsson (2002) distinguishes between emotion felt by the listener, here: "internal locus of emotion" (IL), and the emotion the music is expressing, here: "external locus of emotion" (EL). This paper tabulates 16 comparisons of felt versus expressed emotions in music published in the decade 2003-2012, consisting of 19 studies/experiments, and provides some theoretical perspectives. The key findings were that (1) the IL rating was frequently statistically the same as or lower than the corresponding EL rating (e.g., lower felt happiness rating compared to the apparent happiness of the music), and that (2) self-selected and preferred music had a smaller gap across the emotion loci than experimenter-selected and disliked music. These key findings were explained by an "inhibited" emotional contagion mechanism, where the otherwise matching felt emotion may have been attenuated by some other factor such as social context. Matching between EL and IL for loved and self-selected pieces was explained by the activation of "contagion" circuits. Physiological arousal, personality and age, as well as musical features (tempo, mode, putative emotions) also influenced perceived and felt emotion distinctions. A variety of data collection formats were identified, but mostly using rating items. In conclusion, a more systematic use of terminology appears desirable. Two broad categories, namely matched and unmatched, are proposed as being sufficient to capture the relationships between EL and IL, instead of four categories as suggested by Gabrielsson.

  1. Music induces universal emotion-related psychophysiological responses: comparing Canadian listeners to Congolese Pygmies

    Science.gov (United States)

    Egermann, Hauke; Fernando, Nathalie; Chuen, Lorraine; McAdams, Stephen

    2015-01-01

    Subjective and psychophysiological emotional responses to music from two different cultures were compared within these two cultures. Two identical experiments were conducted: the first in the Congolese rainforest with an isolated population of Mebenzélé Pygmies without any exposure to Western music and culture, the second with a group of Western music listeners, with no experience with Congolese music. Forty Pygmies and 40 Canadians listened in pairs to 19 music excerpts of 29–99 s in duration in random order (eight from the Pygmy population and 11 Western instrumental excerpts). For both groups, emotion components were continuously measured: subjective feeling (using a two-dimensional valence and arousal rating interface), peripheral physiological activation, and facial expression. While Pygmy music was rated as positive and arousing by Pygmies, ratings of Western music by Westerners covered the range from arousing to calming and from positive to negative. Comparing psychophysiological responses to emotional qualities of Pygmy music across participant groups showed no similarities. However, Western stimuli, rated as high and low arousing by Canadians, created similar responses in both participant groups (with high arousal associated with increases in subjective and physiological activation). Several low-level acoustical features of the music presented (tempo, pitch, and timbre) were shown to affect subjective and physiological arousal similarly in both cultures. Results suggest that while the subjective dimension of emotional valence might be mediated by cultural learning, changes in arousal might involve a more basic, universal response to low-level acoustical characteristics of music. PMID:25620935

  2. Music Induces Universal Emotion-Related Psychophysiological Responses: Comparing Canadian Listeners To Congolese Pygmies

    Directory of Open Access Journals (Sweden)

    Hauke eEgermann

    2015-01-01

    Full Text Available Subjective and psychophysiological emotional responses to music from two different cultures were compared within these two cultures. Two identical experiments were conducted: the first in the Congolese rainforest with an isolated population of Mbenzélé Pygmies without any exposure to Western music and culture, the second with a group of Western music listeners, with no experience with Congolese music. Forty Pygmies and 40 Canadians listened in pairs to 19 music excerpts of 29 to 99 seconds in duration in random order (8 from the Pygmy population and 11 Western instrumental excerpts). For both groups, emotion components were continuously measured: subjective feeling (using a two-dimensional valence and arousal rating interface), peripheral physiological activation, and facial expression. While Pygmy music was rated as positive and arousing by Pygmies, ratings of Western music by Westerners covered the range from arousing to calming and from positive to negative. Comparing psychophysiological responses to emotional qualities of Pygmy music across participant groups showed no similarities. However, Western stimuli, rated as high and low arousing by Canadians, created similar responses in both participant groups (with high arousal associated with increases in subjective and physiological activation). Several low-level acoustical features of the music presented (tempo, pitch, and timbre) were shown to affect subjective and physiological arousal similarly in both cultures. Results suggest that while the subjective dimension of emotional valence might be mediated by cultural learning, changes in arousal might involve a more basic, universal response to low-level acoustical characteristics of music.

  3. Play it again with feeling: computer feedback in musical communication of emotions.

    Science.gov (United States)

    Juslin, Patrik N; Karlsson, Jessika; Lindström, Erik; Friberg, Anders; Schoonderwaldt, Erwin

    2006-06-01

    Communication of emotions is of crucial importance in music performance. Yet research has suggested that this skill is neglected in music education. This article presents and evaluates a computer program that automatically analyzes music performances and provides feedback to musicians in order to enhance their communication of emotions. Thirty-six semi-professional jazz/rock guitar players were randomly assigned to one of 3 conditions: (1) feedback from the computer program, (2) feedback from music teachers, and (3) repetition without feedback. Performance measures revealed the greatest improvement in communication accuracy for the computer program, but usability measures indicated that certain aspects of the program could be improved. Implications for music education are discussed.

  4. Musical and emotional attunement - unique and essential in music therapy with children on the autism spectrum

    DEFF Research Database (Denmark)

    Holck, Ulla; Geretsegger, Monika

    2016-01-01

    Background: In improvisational music therapy for children with autism spectrum disorder (ASD), facilitating musical and emotional attunement has been found to be one of the unique and essential principles. Methods: Using videotaped sequences of therapy sessions from an international study (TIME...

  5. Emotional power of music in patients with memory disorders: clinical implications of cognitive neuroscience.

    Science.gov (United States)

    Samson, Séverine; Dellacherie, Delphine; Platel, Hervé

    2009-07-01

    By adapting methods of cognitive psychology to neuropsychology, we examined memory and familiarity abilities in music in relation to emotion. First we present data illustrating how the emotional content of stimuli influences memory for music. Second, we discuss recent findings obtained in patients with two different brain disorders (medically intractable epilepsy and Alzheimer's disease) that show relatively spared memory performance for music, despite severe verbal memory disorders. Studies on musical memory and its relation to emotion open up paths for new strategies in cognitive rehabilitation and reinstate the importance of examining interactions between cognitive and clinical neurosciences.

  6. Influence of trait empathy on the emotion evoked by sad music and on the preference for it.

    Science.gov (United States)

    Kawakami, Ai; Katahira, Kenji

    2015-01-01

    Some people experience pleasant emotion when listening to sad music. Therefore, they can enjoy listening to it. In the current study, we aimed to investigate such apparently paradoxical emotional mechanisms and focused on the influence of individuals' trait empathy, which has been reported to associate with emotional responses to sad music and a preference for it. Eighty-four elementary school children (42 males and 42 females, mean age 11.9 years) listened to two kinds of sad music and rated their emotional state and liking toward them. In addition, trait empathy was assessed using the Interpersonal Reactivity Index scale, which comprises four sub-components: Empathic Concern, Personal Distress, Perspective Taking, and Fantasy (FS). We conducted a path analysis and tested our proposed model that hypothesized that trait empathy and its sub-components would affect the preference for sad music directly or indirectly, mediated by the emotional response to the sad music. Our findings indicated that FS, a sub-component of trait empathy, was directly associated with liking sad music. Additionally, perspective taking ability, another sub-component of trait empathy, was correlated with the emotional response to sad music. Furthermore, the experience of pleasant emotions contributed to liking sad music.

  7. Influence of Trait Empathy on the Emotion Evoked by Sad Music and on the Preference for it

    Directory of Open Access Journals (Sweden)

    Ai eKawakami

    2015-10-01

    Full Text Available Some people experience pleasant emotion when listening to sad music. Therefore, they can enjoy listening to it. In the current study, we aimed to investigate such apparently paradoxical emotional mechanisms and focused on the influence of individuals’ trait empathy, which has been reported to associate with emotional responses to sad music and a preference for it. Eighty-four elementary school children (42 males and 42 females, mean age 11.9 years) listened to two kinds of sad music and rated their emotional state and liking towards them. In addition, trait empathy was assessed using the IRI scale, which comprises four sub-components: Empathic Concern, Personal Distress, Perspective Taking, and Fantasy. We conducted a path analysis and tested our proposed model that hypothesized that trait empathy and its sub-components would affect the preference for sad music directly or indirectly, mediated by the emotional response to the sad music. Our findings indicated that fantasy, a sub-component of trait empathy, was directly associated with liking sad music. Additionally, perspective taking ability, another sub-component of trait empathy, was correlated with the emotional response to sad music. Furthermore, the experience of pleasant emotions contributed to liking sad music.

  8. Music and movement share a dynamic structure that supports universal expressions of emotion

    Science.gov (United States)

    Sievers, Beau; Polansky, Larry; Casey, Michael; Wheatley, Thalia

    2013-01-01

    Music moves us. Its kinetic power is the foundation of human behaviors as diverse as dance, romance, lullabies, and the military march. Despite its significance, the music-movement relationship is poorly understood. We present an empirical method for testing whether music and movement share a common structure that affords equivalent and universal emotional expressions. Our method uses a computer program that can generate matching examples of music and movement from a single set of features: rate, jitter (regularity of rate), direction, step size, and dissonance/visual spikiness. We applied our method in two experiments, one in the United States and another in an isolated tribal village in Cambodia. These experiments revealed three things: (i) each emotion was represented by a unique combination of features, (ii) each combination expressed the same emotion in both music and movement, and (iii) this common structure between music and movement was evident within and across cultures. PMID:23248314

  9. Effects of musical expertise on oscillatory brain activity in response to emotional sounds.

    Science.gov (United States)

    Nolden, Sophie; Rigoulot, Simon; Jolicoeur, Pierre; Armony, Jorge L

    2017-08-01

    Emotions can be conveyed through a variety of channels in the auditory domain, be it via music, non-linguistic vocalizations, or speech prosody. Moreover, recent studies suggest that expertise in one sound category can impact the processing of emotional sounds in other sound categories, as they found that musicians process emotional musical and vocal sounds more efficiently than non-musicians do. However, the neural correlates of these modulations, especially their time course, are not very well understood. Consequently, we focused here on how the neural processing of emotional information varies as a function of sound category and expertise of participants. The electroencephalogram (EEG) of 20 non-musicians and 17 musicians was recorded while they listened to vocal (speech and vocalizations) and musical sounds. The amplitude of EEG-oscillatory activity in the theta, alpha, beta, and gamma bands was quantified, and Independent Component Analysis (ICA) was used to identify underlying components of brain activity in each band. Category differences were found in theta and alpha bands, due to larger responses to music and speech than to vocalizations, and in posterior beta, mainly due to differential processing of speech. In addition, we observed greater activation in frontal theta and alpha for musicians than for non-musicians, as well as an interaction between expertise and emotional content of sounds in frontal alpha. The results reflect musicians' expertise in recognition of emotion-conveying music, which seems to also generalize to emotional expressions conveyed by the human voice, in line with previous accounts of effects of expertise on musical and vocal sounds processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
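
    As a simplified stand-in for the band-wise quantification described above (and leaving aside the ICA decomposition), the sketch below estimates average spectral power in the theta, alpha, beta, and gamma bands from a single channel using Welch's method. The sampling rate, band edges, and toy signal are assumptions for illustration, not the study's recording parameters or code.

    import numpy as np
    from scipy.signal import welch

    def band_amplitude(eeg, fs, bands):
        """Average spectral power of one EEG channel within canonical frequency bands."""
        freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
        return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
                for name, (lo, hi) in bands.items()}

    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

    # Toy signal standing in for one EEG channel during a sound presentation.
    fs = 250                                   # assumed sampling rate in Hz
    rng = np.random.default_rng(3)
    eeg = rng.standard_normal(10 * fs)         # 10 s of simulated data
    print(band_amplitude(eeg, fs, bands))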

  10. The production and perception of emotionally expressive walking sounds: similarities between musical performance and everyday motor activity.

    Directory of Open Access Journals (Sweden)

    Bruno L Giordano

    Full Text Available Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between musical expression of emotions and expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor origin hypothesis for the musical expression of emotions.

  11. Cueing musical emotions: An empirical analysis of 24-piece sets by Bach and Chopin documents parallels with emotional speech.

    Science.gov (United States)

    Poon, Matthew; Schutz, Michael

    2015-01-01

    Acoustic cues such as pitch height and timing are effective at communicating emotion in both music and speech. Numerous experiments altering musical passages have shown that higher and faster melodies generally sound "happier" than lower and slower melodies, findings consistent with corpus analyses of emotional speech. However, equivalent corpus analyses of complex time-varying cues in music are less common, due in part to the challenges of assembling an appropriate corpus. Here, we describe a novel, score-based exploration of the use of pitch height and timing in a set of "balanced" major and minor key compositions. Our analysis included all 24 Preludes and 24 Fugues from Bach's Well-Tempered Clavier (book 1), as well as all 24 of Chopin's Preludes for piano. These three sets are balanced with respect to both modality (major/minor) and key chroma ("A," "B," "C," etc.). Consistent with predictions derived from speech, we found major-key (nominally "happy") pieces to be two semitones higher in pitch height and 29% faster than minor-key (nominally "sad") pieces. This demonstrates that our balanced corpus of major and minor key pieces uses low-level acoustic cues for emotion in a manner consistent with speech. A series of post hoc analyses illustrate interesting trade-offs, with sets featuring greater emphasis on timing distinctions between modalities exhibiting the least pitch distinction, and vice-versa. We discuss these findings in the broader context of speech-music research, as well as recent scholarship exploring the historical evolution of cue use in Western music.
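
    The kind of score-based cue comparison reported here reduces to a small group-by computation once each piece is annotated with its modality, a pitch-height summary, and a tempo value. The table below is invented for illustration (it is not the Bach/Chopin data); the point is only to show how the pitch gap in semitones and the tempo ratio between major- and minor-key sets would be derived.

    import pandas as pd

    # Hypothetical corpus table: one row per piece, with modality, median pitch
    # (MIDI note number) and notated tempo derived from the scores (invented values).
    pieces = pd.DataFrame({
        "modality": ["major", "minor", "major", "minor"],
        "median_pitch": [67, 64, 69, 68],
        "tempo_bpm": [112, 84, 96, 80],
    })

    summary = pieces.groupby("modality")[["median_pitch", "tempo_bpm"]].mean()
    pitch_gap_semitones = summary.loc["major", "median_pitch"] - summary.loc["minor", "median_pitch"]
    tempo_ratio = summary.loc["major", "tempo_bpm"] / summary.loc["minor", "tempo_bpm"]
    print(summary)
    print(pitch_gap_semitones, round(tempo_ratio, 2))   # gap in semitones, major/minor tempo ratio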

  12. Cognitive, emotional, and neural benefits of musical leisure activities in aging and neurological rehabilitation: A critical review.

    Science.gov (United States)

    Särkämö, Teppo

    2017-04-28

    Music has the capacity to engage auditory, cognitive, motor, and emotional functions across cortical and subcortical brain regions and is relatively preserved in aging and dementia. Thus, music is a promising tool in the rehabilitation of aging-related neurological illnesses, such as stroke and Alzheimer disease. As the population ages and the incidence and prevalence of these illnesses rapidly increases, music-based interventions that are enjoyable and effective in the everyday care of the patients are needed. In addition to formal music therapy, musical leisure activities, such as music listening and singing, which patients can do on their own or with a caregiver, are a promising way to support psychological well-being during aging and in neurological rehabilitation. This review article provides an overview of current evidence on the cognitive, emotional, and neural effects of musical leisure activities both during normal aging and in the rehabilitation and care of stroke patients and people with dementia. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  13. Age-related differences in affective responses to and memory for emotions conveyed by music: a cross-sectional study.

    Science.gov (United States)

    Vieillard, Sandrine; Gilet, Anne-Laure

    2013-01-01

    There is mounting evidence that aging is associated with the maintenance of positive affect and the decrease of negative affect to ensure emotion regulation goals. Previous empirical studies have primarily focused on a visual or autobiographical form of emotion communication. To date, little investigation has been done on musical emotions. The few studies that have addressed aging and emotions in music were mainly interested in emotion recognition, thus leaving unexplored the question of how aging may influence emotional responses to and memory for emotions conveyed by music. In the present study, eighteen older (60-84 years) and eighteen younger (19-24 years) listeners were asked to evaluate the strength of their experienced emotion on happy, peaceful, sad, and scary musical excerpts (Vieillard et al., 2008) while facial muscle activity was recorded. Participants then performed an incidental recognition task followed by a task in which they judged to what extent they experienced happiness, peacefulness, sadness, and fear when listening to music. Compared to younger adults, older adults (a) reported a stronger emotional reactivity for happiness than other emotion categories, (b) showed an increased zygomatic activity for scary stimuli, (c) were more likely to falsely recognize happy music, and (d) showed a decrease in their responsiveness to sad and scary music. These results are in line with previous findings and extend them to emotion experience and memory recognition, corroborating the view of age-related changes in emotional responses to music in a positive direction away from negativity.

  14. The effect of background music and song texts on the emotional understanding of children with autism.

    Science.gov (United States)

    Katagiri, June

    2009-01-01

    The purpose of this study was to examine the effect of background music and song texts in teaching emotional understanding to children with autism. Participants were 12 students (mean age 11.5 years) with a primary diagnosis of autism who were attending schools in Japan. Each participant was taught to decode and encode four emotions: happiness, sadness, anger, and fear, in a counterbalanced treatment order. The treatment consisted of four conditions: (a) no contact control (NCC)--no purposeful teaching of the selected emotion, (b) contact control (CC)--teaching the selected emotion using verbal instructions alone, (c) background music (BM)--teaching the selected emotion by verbal instructions with background music representing the emotion, and (d) singing songs (SS)--teaching the selected emotion by singing specially composed songs about the emotion. Participants were given a pretest and a posttest and received 8 individual sessions between these tests. The results indicated that all participants improved significantly in their understanding of the four selected emotions. Background music was significantly more effective than the other three conditions in improving participants' emotional understanding. The findings suggest that background music can be an effective tool to increase emotional understanding in children with autism, which is crucial to their social interactions.

  15. Emotion felt by the listener and expressed by the music: literature review and theoretical perspectives

    Science.gov (United States)

    Schubert, Emery

    2013-01-01

    In his seminal paper, Gabrielsson (2002) distinguishes between emotion felt by the listener, here: “internal locus of emotion” (IL), and the emotion the music is expressing, here: “external locus of emotion” (EL). This paper tabulates 16 comparisons of felt versus expressed emotions in music published in the decade 2003–2012, consisting of 19 studies/experiments, and provides some theoretical perspectives. The key findings were that (1) the IL rating was frequently statistically the same as or lower than the corresponding EL rating (e.g., lower felt happiness rating compared to the apparent happiness of the music), and that (2) self-selected and preferred music had a smaller gap across the emotion loci than experimenter-selected and disliked music. These key findings were explained by an “inhibited” emotional contagion mechanism, where the otherwise matching felt emotion may have been attenuated by some other factor such as social context. Matching between EL and IL for loved and self-selected pieces was explained by the activation of “contagion” circuits. Physiological arousal, personality and age, as well as musical features (tempo, mode, putative emotions) also influenced perceived and felt emotion distinctions. A variety of data collection formats were identified, but mostly using rating items. In conclusion, a more systematic use of terminology appears desirable. Two broad categories, namely matched and unmatched, are proposed as being sufficient to capture the relationships between EL and IL, instead of four categories as suggested by Gabrielsson. PMID:24381565

  16. Effects of emotion regulation strategies on music-elicited emotions: An experimental study explaining individual differences

    NARCIS (Netherlands)

    Karreman, A.; Laceulle, O.M.; Hanser, Waldie; Vingerhoets, Ad

    This experimental study examined if emotional experience can be manipulated by applying an emotion regulation strategy during music listening and if individual differences in effects of strategies can be explained by person characteristics. Adults (N = 466) completed questionnaires and rated

  17. Effects of emotion regulation strategies on music elicited emotions : An experimental study explaining individual differences

    NARCIS (Netherlands)

    Karreman, A.; Laceulle, O.M.; Hanser, W.E.; Vingerhoets, A.J.J.M.

    2017-01-01

    This experimental study examined if emotional experience can be manipulated by applying an emotion regulation strategy during music listening and if individual differences in effects of strategies can be explained by person characteristics. Adults (N = 466) completed questionnaires and rated

  18. Age-related differences in affective responses to and memory for emotions conveyed by music: a cross-sectional study

    Directory of Open Access Journals (Sweden)

    Sandrine eVieillard

    2013-10-01

    Full Text Available There is mounting evidence that aging is associated with the maintenance of positive affect and the decrease of negative affect to ensure emotion regulation goals. Previous empirical studies have primarily focused on a visual or autobiographical form of emotion communication. To date, little investigation has been done on musical emotions. The few studies that have addressed aging and emotions in music were mainly interested in emotion recognition, thus leaving unexplored the question of how aging may influence emotional responses to and memory for music. In the present study, eighteen older (60-84 years) and eighteen younger (19-24 years) listeners were asked to evaluate the strength of their experienced emotion on happy, peaceful, sad, and scary musical excerpts (Vieillard et al., 2008) while facial muscle activity was recorded. Participants then performed an incidental recognition task followed by a task in which they judged to what extent they experienced happiness, peacefulness, sadness, and fear when listening to music. Compared to younger adults, older adults (a) reported a stronger emotional reactivity for happiness than other emotion categories, (b) showed an increased zygomatic activity for scary stimuli, (c) were more likely to falsely recognize happy music, and (d) showed a decrease in their responsiveness to sad and scary music. These results are in line with previous findings and extend them to emotion experience and memory recognition, corroborating the view of age-related changes in emotional responses to music in a positive direction away from negativity.

  19. Age-related differences in affective responses to and memory for emotions conveyed by music: a cross-sectional study

    Science.gov (United States)

    Vieillard, Sandrine; Gilet, Anne-Laure

    2013-01-01

    There is mounting evidence that aging is associated with the maintenance of positive affect and the decrease of negative affect to ensure emotion regulation goals. Previous empirical studies have primarily focused on a visual or autobiographical form of emotion communication. To date, little investigation has been done on musical emotions. The few studies that have addressed aging and emotions in music were mainly interested in emotion recognition, thus leaving unexplored the question of how aging may influence emotional responses to and memory for emotions conveyed by music. In the present study, eighteen older (60–84 years) and eighteen younger (19–24 years) listeners were asked to evaluate the strength of their experienced emotion on happy, peaceful, sad, and scary musical excerpts (Vieillard et al., 2008) while facial muscle activity was recorded. Participants then performed an incidental recognition task followed by a task in which they judged to what extent they experienced happiness, peacefulness, sadness, and fear when listening to music. Compared to younger adults, older adults (a) reported a stronger emotional reactivity for happiness than other emotion categories, (b) showed an increased zygomatic activity for scary stimuli, (c) were more likely to falsely recognize happy music, and (d) showed a decrease in their responsiveness to sad and scary music. These results are in line with previous findings and extend them to emotion experience and memory recognition, corroborating the view of age-related changes in emotional responses to music in a positive direction away from negativity. PMID:24137141

  20. Music listening in families and peer groups: benefits for young people's social cohesion and emotional well-being across four cultures.

    Science.gov (United States)

    Boer, Diana; Abubakar, Amina

    2014-01-01

    Families are central to the social and emotional development of youth, and most families engage in musical activities together, such as listening to music or talking about their favorite songs. However, empirical evidence of the positive effects of musical family rituals on social cohesion and emotional well-being is scarce. Furthermore, the role of culture in the shaping of musical family rituals and their psychological benefits has been neglected entirely. This paper investigates musical rituals in families and in peer groups (as an important secondary socialization context) in two traditional/collectivistic and two secular/individualistic cultures, and across two developmental stages (adolescence vs. young adulthood). Based on cross-sectional data from 760 young people in Kenya, the Philippines, New Zealand, and Germany, our study revealed that across cultures music listening in families and in peer groups contributes to family and peer cohesion, respectively. Furthermore, the direct contribution of music in peer groups on well-being appears across cultural contexts, whereas musical family rituals affect emotional well-being in more traditional/collectivistic contexts. Developmental analyses show that musical family rituals are consistently and strongly related to family cohesion across developmental stages, whereas musical rituals in peer groups appear more dependent on the developmental stage (in interaction with culture). Contributing to developmental as well as cross-cultural psychology, this research elucidated musical rituals and their positive effects on the emotional and social development of young people across cultures. The implications for future research and family interventions are discussed.

  1. Musical activity and emotional competence - a twin study.

    Science.gov (United States)

    Theorell, Töres P; Lennartsson, Anna-Karin; Mosing, Miriam A; Ullén, Fredrik

    2014-01-01

    The hypothesis was tested that musical activities may contribute to the prevention of alexithymia. We tested whether musical creative achievement and musical practice are associated with lower alexithymia. 8000 Swedish twins aged 27-54 were studied. Alexithymia was assessed using the Toronto Alexithymia Scale-20. Musical achievement was rated on a 7-graded scale. Participants estimated number of hours of music practice during different ages throughout life. A total life estimation of number of accumulated hours was made. They were also asked about ensemble playing. In addition, twin modelling was used to explore the genetic architecture of the relation between musical practice and alexithymia. Alexithymia was negatively associated with (i) musical creative achievement, (ii) having played a musical instrument as compared to never having played, and - for the subsample of participants that had played an instrument - (iii) total hours of musical training (r = -0.12 in men and -0.10 in women). Ensemble playing added significant variance. Twin modelling showed that alexithymia had a moderate heritability of 36% and that the association with musical practice could be explained by shared genetic influences. Associations between musical training and alexithymia remained significant when controlling for education, depression, and intelligence. Musical achievement and musical practice are associated with lower levels of alexithymia in both men and women. Musical engagement thus appears to be associated with higher emotional competence, although effect sizes are small. The association between musical training and alexithymia appears to be entirely genetically mediated, suggesting genetic pleiotropy.

  2. Emotion perception in music in high-functioning adolescents with Autism Spectrum Disorders.

    Science.gov (United States)

    Quintin, Eve-Marie; Bhatara, Anjali; Poissant, Hélène; Fombonne, Eric; Levitin, Daniel J

    2011-09-01

    Individuals with Autism Spectrum Disorders (ASD) succeed at a range of musical tasks. The ability to recognize musical emotion as belonging to one of four categories (happy, sad, scared or peaceful) was assessed in high-functioning adolescents with ASD (N = 26) and adolescents with typical development (TD, N = 26) with comparable performance IQ, auditory working memory, and musical training and experience. When verbal IQ was controlled for, there was no significant effect of diagnostic group. Adolescents with ASD rated the intensity of the emotions similarly to adolescents with TD and reported greater confidence in their responses when they had correctly (vs. incorrectly) recognized the emotions. These findings are reviewed within the context of the amygdala theory of autism.

  3. Expressive Suppression and Enhancement During Music-Elicited Emotions in Younger and Older Adults

    Directory of Open Access Journals (Sweden)

    Sandrine Vieillard

    2015-02-01

    Full Text Available When presented with emotional visual scenes, older adults have been found to be equally capable of regulating emotion expression as younger adults, corroborating the view that emotion regulation skills are maintained or even improved in later adulthood. However, the possibility that gaze direction might help achieve an emotion control goal has not been taken into account, raising the question of whether the effortful processing of expressive regulation is really spared from the general age-related decline. Since it does not allow perceptual attention to be redirected away from the emotional source, music provides a useful way to address this question. In the present study, affective, behavioral and physiological consequences of free expression of emotion, expressive suppression and expressive enhancement were measured in 31 younger and 30 older adults while they listened to positive and negative musical excerpts. The main results indicated that compared to younger adults, older adults reported experiencing less emotional intensity in response to negative music during the free expression of emotion condition. No age difference was found in the ability to amplify or reduce emotional expressions. However, an age-related decline in the ability to reduce the intensity of emotional state and an age-related increase in physiological reactivity were found when participants were instructed to suppress negative expression. Taken together, the current data support previous findings suggesting an age-related change in response to music. They also corroborate the observation that older adults are as efficient as younger adults at controlling behavioral expression. But most importantly, they suggest that when faced with auditory sources of negative emotion, older age does not always confer a better ability to regulate emotions.

  4. Independent component processes underlying emotions during natural music listening.

    Science.gov (United States)

    Rogenmoser, Lars; Zollinger, Nina; Elmer, Stefan; Jäncke, Lutz

    2016-09-01

    The aim of this study was to investigate the brain processes underlying emotions during natural music listening. To address this, we recorded high-density electroencephalography (EEG) from 22 subjects while presenting a set of individually matched whole musical excerpts varying in valence and arousal. Independent component analysis was applied to decompose the EEG data into functionally distinct brain processes. A k-means cluster analysis calculated on the basis of a combination of spatial (scalp topography and dipole location mapped onto the Montreal Neurological Institute brain template) and functional (spectra) characteristics revealed 10 clusters referring to brain areas typically involved in music and emotion processing, namely in the proximity of thalamic-limbic and orbitofrontal regions as well as at frontal, fronto-parietal, parietal, parieto-occipital, temporo-occipital and occipital areas. This analysis revealed that arousal was associated with a suppression of power in the alpha frequency range. On the other hand, valence was associated with an increase in theta frequency power in response to excerpts inducing happiness compared to sadness. These findings are partly compatible with the model proposed by Heller, arguing that the frontal lobe is involved in modulating valenced experiences (the left frontal hemisphere for positive emotions) whereas the right parieto-temporal region contributes to the emotional arousal. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
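
    As a rough illustration of the analysis pipeline described in this record (not the authors' actual code), the sketch below decomposes a multichannel EEG array into independent components and clusters them on spectral features alone. The array `eeg`, the sampling rate, the use of FastICA, and the choice of 10 clusters are placeholder assumptions, and the spatial features used in the study (scalp topographies, dipole locations) are omitted for brevity.

```python
# Minimal sketch (not the authors' pipeline): decompose multichannel EEG into
# independent components and cluster them on spectral features alone.
# Assumes `eeg` is a NumPy array of shape (n_channels, n_samples) at `fs` Hz.
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

fs = 250                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
eeg = rng.standard_normal((64, 60 * fs))   # placeholder data; replace with real EEG

# 1) Unmix the channel data into independent component (IC) time courses.
ica = FastICA(n_components=20, random_state=0)
sources = ica.fit_transform(eeg.T).T       # shape: (n_components, n_samples)

# 2) Describe each IC by its power spectrum (the study also used topographies
#    and dipole locations; those are omitted here).
freqs, psd = welch(sources, fs=fs, nperseg=2 * fs, axis=1)
band = (freqs >= 1) & (freqs <= 45)
features = np.log(psd[:, band])            # log power, 1-45 Hz

# 3) Group ICs into clusters of functionally similar brain processes.
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(features)
print(labels)
```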

  5. Effects of Music on Emotion Regulation: A Systematic Literature Review

    NARCIS (Netherlands)

    Luck, Geoff; Brabant, Olivier; Uhlig, Sylka; Jaschke, Artur; Scherder, E.J.A.

    2013-01-01

    Music and its use for emotion regulation processes remains, to this day, an unresolved question. Multiple experimental designs encompassing its daily-life use and clinical applications across different cultures and continents have established music as a self-regulative tool. Therefore it is seen as a

  6. It's not what you play, it's how you play it: timbre affects perception of emotion in music.

    Science.gov (United States)

    Hailstone, Julia C; Omar, Rohani; Henley, Susie M D; Frost, Chris; Kenward, Michael G; Warren, Jason D

    2009-11-01

    Salient sensory experiences often have a strong emotional tone, but the neuropsychological relations between perceptual characteristics of sensory objects and the affective information they convey remain poorly defined. Here we addressed the relationship between sound identity and emotional information using music. In two experiments, we investigated whether perception of emotions is influenced by altering the musical instrument on which the music is played, independently of other musical features. In the first experiment, 40 novel melodies each representing one of four emotions (happiness, sadness, fear, or anger) were each recorded on four different instruments (an electronic synthesizer, a piano, a violin, and a trumpet), controlling for melody, tempo, and loudness between instruments. Healthy participants (23 young adults aged 18-30 years, 24 older adults aged 58-75 years) were asked to select which emotion they thought each musical stimulus represented in a four-alternative forced-choice task. Using a generalized linear mixed model, we found a significant interaction between instrument and emotion judgement, with a similar pattern in young and older adults; the effect was not attributable to musical expertise. In the second experiment, using the same melodies and experimental design, the interaction between timbre and perceived emotion was replicated, indicating that timbre influences the perception of emotion in music after controlling for other acoustic, cognitive, and performance factors.

  7. Evaluating Autonomic Parameters: The Role of Sleep Duration in Emotional Responses to Music

    Directory of Open Access Journals (Sweden)

    Atefeh Goshvarpour

    2016-02-01

    Full Text Available Objective: It has been recognized that sleep has an important effect on emotion processing. The aim of this study was to investigate the effect of the previous night's sleep duration on autonomic responses to musical stimuli in different emotional contexts. Method: Frequency-based measures of GSR, PR and ECG signals were examined in 35 healthy students in three groups: oversleeping, lack of sleep, and normal sleep. Results: The results of this study revealed that regardless of the emotional context of the musical stimuli (happy, relax, fear, and sadness), there was an increase in the maximum power of GSR, ECG and PR during the music period compared to the rest period in all three groups. In addition, the highest values of these measures were obtained while the participants listened to relaxing music. Statistical analysis of the extracted features between each pair of emotional states revealed that the most significant differences were attained for the ECG signals. These differences were more pronounced in the participants with normal sleep (p < 10^-18). Higher values of the indices were observed when comparing long sleep duration with normal sleep duration. Conclusion: There was a strong relation between emotion and sleep duration, and this association can be observed by means of the ECG signals.
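
    To make the "maximum power" measure concrete, here is a minimal sketch that compares the peak of a Welch power-spectral-density estimate between a rest segment and a music segment. The signal names, sampling rate, and segment lengths are invented placeholders, not values from the study.

```python
# Illustrative sketch of one frequency-based measure: the maximum spectral
# power of a physiological signal, compared between rest and music listening.
import numpy as np
from scipy.signal import welch

def max_power(signal, fs):
    """Return the peak of the Welch power spectral density estimate."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 4 * fs))
    return psd.max()

fs = 100                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
rest_ecg = rng.standard_normal(60 * fs)    # placeholder rest-period signal
music_ecg = rng.standard_normal(120 * fs)  # placeholder music-period signal

print("rest max power :", max_power(rest_ecg, fs))
print("music max power:", max_power(music_ecg, fs))
```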

  8. Neural Activations of Guided Imagery and Music in Negative Emotional Processing: A Functional MRI Study.

    Science.gov (United States)

    Lee, Sang Eun; Han, Yeji; Park, HyunWook

    2016-01-01

    The Bonny Method of Guided Imagery and Music uses music and imagery to access and explore personal emotions associated with episodic memories. Understanding the neural mechanism of guided imagery and music (GIM) as combined stimuli for emotional processing informs clinical application. We performed functional magnetic resonance imaging (fMRI) to demonstrate neural mechanisms of GIM for negative emotional processing when personal episodic memory is recalled and re-experienced through GIM processes. Twenty-four healthy volunteers participated in the study, which used classical music and verbal instruction stimuli to evoke negative emotions. To analyze the neural mechanism, activated regions associated with negative emotional and episodic memory processing were extracted by conducting volume analyses for the contrast between GIM and guided imagery (GI) or music (M). The GIM stimuli showed increased activation over the M-only stimuli in five neural regions associated with negative emotional and episodic memory processing, including the left amygdala, left anterior cingulate gyrus, left insula, bilateral culmen, and left angular gyrus (AG). Compared with GI alone, GIM showed increased activation in three regions associated with episodic memory processing in the emotional context, including the right posterior cingulate gyrus, bilateral parahippocampal gyrus, and AG. No neural regions related to negative emotional and episodic memory processing showed more activation for M and GI than for GIM. As a combined multimodal stimulus, GIM may increase neural activations related to negative emotions and episodic memory processing. Findings suggest a neural basis for GIM with personal episodic memories affecting cortical and subcortical structures and functions. © the American Music Therapy Association 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Cueing musical emotions: An empirical analysis of 24-piece sets by Bach and Chopin documents parallels with emotional speech

    Directory of Open Access Journals (Sweden)

    Matthew Poon

    2015-11-01

    Full Text Available Acoustic cues such as pitch height and timing are effective at communicating emotion in both music and speech. Numerous experiments altering musical passages have shown that higher and faster melodies generally sound happier than lower and slower melodies, findings consistent with corpus analyses of emotional speech. However, equivalent corpus analyses of complex time-varying cues in music are less common, due in part to the challenges of assembling an appropriate corpus. Here we describe a novel, score-based exploration of the use of pitch height and timing in a set of balanced major and minor key compositions. Our corpus contained all 24 Preludes and 24 Fugues from Bach’s Well-Tempered Clavier (book 1), as well as all 24 of Chopin’s Preludes for piano. These three sets are balanced with respect to both modality (major/minor) and key chroma (A, B, C, etc.). Consistent with predictions derived from speech, we found major-key (nominally happy) pieces to be two semitones higher in pitch height and 29% faster than minor-key (nominally sad) pieces. This demonstrates that our balanced corpus of major and minor key pieces uses low-level acoustic cues for emotion in a manner consistent with speech. A series of post-hoc analyses illustrate interesting trade-offs, with
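
    The comparison reported above (major-key pieces roughly two semitones higher and 29% faster than minor-key pieces) can be illustrated with a toy table of per-piece summaries. The rows, column names, and values below are invented placeholders rather than measurements from the Bach/Chopin corpus.

```python
# Toy version of a score-based mode comparison: mean pitch height and a simple
# timing proxy, averaged over major- vs minor-key pieces.
import pandas as pd

corpus = pd.DataFrame({
    "piece":  ["Prelude in C", "Prelude in c", "Fugue in G", "Fugue in g"],
    "mode":   ["major", "minor", "major", "minor"],
    "mean_pitch_midi":    [65.0, 62.5, 66.0, 64.5],   # mean notated pitch height
    "attacks_per_second": [4.2, 3.1, 5.0, 3.9],       # simple timing proxy
})

summary = corpus.groupby("mode")[["mean_pitch_midi", "attacks_per_second"]].mean()
pitch_gap = summary.loc["major", "mean_pitch_midi"] - summary.loc["minor", "mean_pitch_midi"]
tempo_ratio = summary.loc["major", "attacks_per_second"] / summary.loc["minor", "attacks_per_second"]
print(summary)
print(f"major pieces are {pitch_gap:.1f} semitones higher "
      f"and {100 * (tempo_ratio - 1):.0f}% faster (toy data)")
```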

  10. The effect of music on corticospinal excitability is related to the perceived emotion: a transcranial magnetic stimulation study.

    Science.gov (United States)

    Giovannelli, Fabio; Banfi, Chiara; Borgheresi, Alessandra; Fiori, Elisa; Innocenti, Iglis; Rossi, Simone; Zaccara, Gaetano; Viggiano, Maria Pia; Cincotta, Massimo

    2013-03-01

    Transcranial magnetic stimulation (TMS) and neuroimaging studies suggest a functional link between the emotion-related brain areas and the motor system. It is not well understood, however, whether the motor cortex activity is modulated by specific emotions experienced during music listening. In 23 healthy volunteers, we recorded the motor evoked potentials (MEP) following TMS to investigate the corticospinal excitability while subjects listened to music pieces evoking different emotions (happiness, sadness, fear, and displeasure), an emotionally neutral piece, and a control stimulus (musical scale). Quality and intensity of emotions were previously rated in an additional group of 30 healthy subjects. Fear-related music significantly increased the MEP size compared to the neutral piece and the control stimulus. This effect was not seen with music inducing other emotional experiences and was not related to changes in autonomic variables (respiration rate, heart rate). Current data indicate that also in a musical context, the excitability of the corticomotoneuronal system is related to the emotion expressed by the listened piece. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Music listening in families and peer groups: Benefits for young people's social cohesion and emotional well-being across four cultures

    Directory of Open Access Journals (Sweden)

    Diana Boer

    2014-05-01

    Full Text Available Families are central to the social and emotional development of youth, and most families engage in musical activities together, such as listening to music or talking about their favorite songs. However, empirical evidence of the positive effects of musical family rituals on social cohesion and emotional well-being is scarce. Furthermore, the role of culture in the shaping of musical family rituals and their psychological benefits has been neglected entirely. This paper investigates musical rituals in families and in peer groups (as an important secondary socialization context) in two traditional/collectivistic and two secular/individualistic cultures, and across two developmental stages (adolescence vs. young adulthood). Based on cross-sectional data from 760 young people in Kenya, the Philippines, New Zealand and Germany, our study revealed that across cultures music listening in families and in peer groups contributes to family and peer cohesion, respectively. Furthermore, the direct contribution of music in peer groups on well-being appears across cultural contexts, whereas musical family rituals affect emotional well-being in more traditional/collectivistic contexts. Developmental analyses show that musical family rituals are consistently and strongly related to family cohesion across developmental stages, whereas musical rituals in peer groups appear more dependent on the developmental stage (in interaction with culture). Contributing to developmental as well as cross-cultural psychology, this research elucidated musical rituals and their positive effects on the emotional and social development of young people across cultures. The implications for future research and family interventions are discussed.

  12. Music listening in families and peer groups: benefits for young people's social cohesion and emotional well-being across four cultures

    Science.gov (United States)

    Boer, Diana; Abubakar, Amina

    2014-01-01

    Families are central to the social and emotional development of youth, and most families engage in musical activities together, such as listening to music or talking about their favorite songs. However, empirical evidence of the positive effects of musical family rituals on social cohesion and emotional well-being is scarce. Furthermore, the role of culture in the shaping of musical family rituals and their psychological benefits has been neglected entirely. This paper investigates musical rituals in families and in peer groups (as an important secondary socialization context) in two traditional/collectivistic and two secular/individualistic cultures, and across two developmental stages (adolescence vs. young adulthood). Based on cross-sectional data from 760 young people in Kenya, the Philippines, New Zealand, and Germany, our study revealed that across cultures music listening in families and in peer groups contributes to family and peer cohesion, respectively. Furthermore, the direct contribution of music in peer groups on well-being appears across cultural contexts, whereas musical family rituals affect emotional well-being in more traditional/collectivistic contexts. Developmental analyses show that musical family rituals are consistently and strongly related to family cohesion across developmental stages, whereas musical rituals in peer groups appear more dependent on the developmental stage (in interaction with culture). Contributing to developmental as well as cross-cultural psychology, this research elucidated musical rituals and their positive effects on the emotional and social development of young people across cultures. The implications for future research and family interventions are discussed. PMID:24847296

  13. Music and Emotion: a composer’s perspective

    Directory of Open Access Journals (Sweden)

    Joel Douek

    2013-11-01

    Full Text Available This article takes an experiential and anecdotal look at the daily lives and work of film composers as creators of music. It endeavours to work backwards from what practitioners of the art and craft of music do instinctively or unconsciously, and tries to shine a light on it as a conscious process. It examines the role of the film composer, whose task is to convey an often complex set of emotions and to communicate with an immediacy and universality that often sit outside of common language. Through the experiences of the author, as well as interviews with composer colleagues, the article explores both concrete and abstract ways in which music can bring meaning and magic to words and images, and serve as an underscore to our daily lives.

  14. Effect of music therapy with emotional-approach coping on preprocedural anxiety in cardiac catheterization: a randomized controlled trial.

    Science.gov (United States)

    Ghetti, Claire M

    2013-01-01

    Individuals undergoing cardiac catheterization are likely to experience elevated anxiety periprocedurally, with highest anxiety levels occurring immediately prior to the procedure. Elevated anxiety has the potential to negatively impact these individuals psychologically and physiologically in ways that may influence the subsequent procedure. This study evaluated the use of music therapy, with a specific emphasis on emotional-approach coping, immediately prior to cardiac catheterization to impact periprocedural outcomes. The randomized, pretest/posttest control group design consisted of two experimental groups--the Music Therapy with Emotional-Approach Coping group [MT/EAC] (n = 13), and a talk-based Emotional-Approach Coping group (n = 14), compared with a standard care Control group (n = 10). MT/EAC led to improved positive affective states in adults awaiting elective cardiac catheterization, whereas a talk-based emphasis on emotional-approach coping or standard care did not. All groups demonstrated a significant overall decrease in negative affect. The MT/EAC group demonstrated a statistically significant, but not clinically significant, increase in systolic blood pressure most likely due to active engagement in music making. The MT/EAC group trended toward shortest procedure length and least amount of anxiolytic required during the procedure, while the EAC group trended toward least amount of analgesic required during the procedure, but these differences were not statistically significant. Actively engaging in a session of music therapy with an emphasis on emotional-approach coping can improve the well-being of adults awaiting cardiac catheterization procedures.

  15. Approaches of a Secondary Music Teacher in Response to the Social and Emotional Lives of Students

    Science.gov (United States)

    Edgar, Scott

    2015-01-01

    Music teachers interact regularly with students experiencing social and emotional challenges and are often under-prepared to do so. The purpose of this study was to examine approaches of a secondary general music teacher in responding to the social and emotional challenges of eight students in a music classroom at an alternative high school. A…

  16. Locus of emotion: the effect of task order and age on emotion perceived and emotion felt in response to music.

    Science.gov (United States)

    Schubert, Emery

    2007-01-01

    The relationship between emotions perceived to be expressed (external locus, EL) and emotions felt (internal locus, IL) in response to music was examined using 5 contrasting pieces of Romantic, Western art music. The main hypothesis tested was that ratings along the dimensions of emotional strength, valence, and arousal would be lower in magnitude for IL than for EL. IL and EL judgments made together after one listening (Experiment 2, n = 18) produced less differentiated responses than when each task was performed after separate listenings (Experiment 1, n = 28). This merging of responses in the locus-task-together condition started to disappear as statistical power was increased. Statistical power was increased by recruiting an additional subject pool of elderly individuals (Experiment 3, n = 19, mean age 75 years). Their valence responses were more positive, and their emotional-strength ratings were generally lower, compared to their younger counterparts. Overall data analysis revealed that IL responses fluctuated slightly more than EL emotions, meaning that the latter are more stable. An additional dimension of dominance-submissiveness was also examined, and was useful in differentiating between pieces, but did not return a difference between IL and EL. Some therapy applications of these findings are discussed.

  17. Effects of mood induction via music on cardiovascular measures of negative emotion during simulated driving.

    Science.gov (United States)

    Fairclough, Stephen H; van der Zwaag, Marjolein; Spiridon, Elena; Westerink, Joyce

    2014-04-22

    A study was conducted to investigate the potential of mood induction via music to influence cardiovascular correlates of negative emotions experienced during driving behaviour. One hundred participants were randomly assigned to one of five groups, four of which experienced different categories of music: High activation/positive valence (HA/PV), high activation/negative valence (HA/NV), low activation/positive valence (LA/PV) and low activation/negative valence (LA/NV). Following exposure to their respective categories of music, participants were required to complete a simulated driving journey with a fixed time schedule. Negative emotion was induced via exposure to stationary traffic during the simulated route. Cardiovascular reactivity was measured via blood pressure, heart rate and cardiovascular impedance. Subjective self-assessment of anger and mood was also recorded. Results indicated that low activation music, regardless of valence, reduced systolic reactivity during the simulated journey relative to HA/NV music and the control (no music) condition. Self-reported data indicated that participants were not consciously aware of any influence of music on their subjective mood. It is concluded that cardiovascular reactivity to negative mood may be mediated by the emotional properties of music. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. A joint behavioral and emotive analysis of synchrony in music therapy of children with autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Paola Venuti

    2016-12-01

    Full Text Available Background: Synchrony is an essential component of interactive exchanges. In mother-infant interaction, synchrony underlies reciprocity and emotive regulation. A severe lack of synchrony is indeed a core issue within the communication and interaction deficit that characterizes autism spectrum disorders (ASD) in accordance with the DSM-5 classification. Based on emerging evidence that music therapy can improve the communication and regulation ability in children with ASD, we aim to verify quantitatively whether: (1) children with ASD improve synchrony with their therapist during music therapy sessions, and (2) this ability persists in different structured contexts. Participants and procedure: Twenty-five children, aged from 4 to 6 years (M = 57.80, SD = 16.70), with an autistic disorder diagnosis based on DSM-IV-TR and the Autism Diagnostic Observation Schedule (ADOS), participated in the study. An observational tool for coding behaviors and emotive states of synchrony (Child Behavioral and Emotional status Code [CBEC] and Adult Behavioral and Emotional status Code [ABEC]) was applied in video-recorded sessions of improvisational music therapy (IMT) for the subject-therapist pair. For each subject, we considered the 20 central minutes of the first, tenth and twentieth session of IMT. To verify the persistence of effect in a different context with a different adult, we administered and coded the interactive ADOS section (anticipation of a routine with objects) applied after session 20 of therapy. Results: During the IMT cycle, the amount of synchronic activity increases, with a significant difference from Session 1 to Session 20 in behavioral synchrony and emotional attunement. Also, the increase of synchrony is confirmed at the end of the therapy cycle as measured by an interactive ADOS section. Conclusions: Synchrony is an effective indicator of efficacy for music therapy in children with ASD, in particular to evaluate the expansion of positive emotive

  19. Maladaptive and adaptive emotion regulation through music: a behavioral and neuroimaging study of males and females

    Science.gov (United States)

    Carlson, Emily; Saarikallio, Suvi; Toiviainen, Petri; Bogert, Brigitte; Kliuchko, Marina; Brattico, Elvira

    2015-01-01

    Music therapists use guided affect regulation in the treatment of mood disorders. However, self-directed uses of music in affect regulation are not fully understood. Some uses of music may have negative effects on mental health, as can non-music regulation strategies, such as rumination. Psychological testing and functional magnetic resonance imaging (fMRI) were used to explore music listening strategies in relation to mental health. Participants (n = 123) were assessed for depression, anxiety and Neuroticism, and uses of Music in Mood Regulation (MMR). Neural responses to music were measured in the medial prefrontal cortex (mPFC) in a subset of participants (n = 56). Discharge, using music to express negative emotions, related to increased anxiety and Neuroticism in all participants and particularly in males. Males high in Discharge showed decreased activity of mPFC during music listening compared with those using less Discharge. Females high in Diversion, using music to distract from negative emotions, showed more mPFC activity than females using less Diversion. These results suggest that the use of Discharge strategy can be associated with maladaptive patterns of emotional regulation, and may even have long-term negative effects on mental health. This finding has real-world applications in psychotherapy and particularly in clinical music therapy. PMID:26379529

  20. Maladaptive and adaptive emotion regulation through music: A behavioural and neuroimaging study of males and females

    Directory of Open Access Journals (Sweden)

    Emily Carlson

    2015-08-01

    Full Text Available Music therapists use guided affect regulation in the treatment of mood disorders. However, self-directed uses of music in affect regulation are not fully understood. Some uses of music may have negative effects on mental health, as can non-music regulation strategies, such as rumination. Psychological testing and functional magnetic resonance imaging (fMRI) were used to explore music listening strategies in relation to mental health. Participants (n=123) were assessed for depression, anxiety and Neuroticism, and uses of Music in Mood Regulation (MMR). Neural responses to music were measured in the medial prefrontal cortex (mPFC) in a subset of participants (n=56). Discharge, using music to express negative emotions, related to increased anxiety and Neuroticism in all participants and particularly in males. Males high in Discharge showed decreased activity of mPFC during music listening compared with those using less Discharge. Females high in Diversion, using music to distract from negative emotions, showed more mPFC activity than females using less Diversion. These results suggest that the use of Discharge strategy can be associated with maladaptive patterns of emotional regulation, and may even have long-term negative effects on mental health. This finding has real-world applications in psychotherapy and particularly in clinical music therapy.

  1. Musical activity and emotional competence – a twin study

    Directory of Open Access Journals (Sweden)

    Tores PG Theorell

    2014-07-01

    Full Text Available The hypothesis was tested that musical creative achievement and musical practice are associated with lower alexithymia. 8000 Swedish twins aged 27-54 were studied. Alexithymia was assessed using the Toronto Alexithymia Scale (TAS-20). Musical achievement was rated on a 7-graded scale. Participants estimated number of hours of music practice during different ages throughout life. A total life estimation of number of accumulated hours was made. They were also asked about ensemble playing. In addition, twin modelling was used to explore the genetic architecture of the relation between musical practice and alexithymia. Alexithymia was negatively associated with (i) musical creative achievement, (ii) having played a musical instrument as compared to never having played, and – for the subsample of participants that had played an instrument – (iii) total hours of musical training (r = -.12 in men and -.10 in women). Ensemble playing added significant variance. Twin modelling showed that alexithymia had a moderate heritability of 36% and that the association with musical practice could be explained by shared genetic influences. Associations between musical training and alexithymia remained significant when controlling for education, depression, and intelligence. Musical achievement and musical practice are associated with lower levels of alexithymia in both men and women. Musical engagement thus appears to be associated with higher emotional competence, although effect sizes are small. The association between musical training and alexithymia appears to be entirely genetically mediated, suggesting genetic pleiotropy.

  2. Designing for group music improvisation: a case for jamming with your emotions

    NARCIS (Netherlands)

    Ostos Rios, G.A.; Funk, M.; Hengeveld, B.J.

    2016-01-01

    During improvisation, musicians express themselves through live music. This project looks at the relationships between musicians during music improvisation, the processes of expression and communication taking place during performance, and possible ways to use musicians’ emotions to influence a

  3. Cochlear implant users rely on tempo rather than on pitch information during perception of musical emotion.

    Science.gov (United States)

    Caldwell, Meredith; Rankin, Summer K; Jiradejvong, Patpong; Carver, Courtney; Limb, Charles J

    2015-09-01

    The purpose of this study was to investigate the extent to which cochlear implant (CI) users rely on tempo and mode in perception of musical emotion when compared with normal hearing (NH) individuals. A test battery of novel four-bar melodies was created and adapted to four permutations with alterations of tonality (major vs. minor) and tempo (presto vs. largo), resulting in non-ambiguous (major key/fast tempo and minor key/slow tempo) and ambiguous (major key/slow tempo, and minor key/fast tempo) musical stimuli. Both CI and NH participants listened to each clip and provided emotional ratings on a Likert scale of +5 (happy) to -5 (sad). A three-way ANOVA demonstrated an overall effect for tempo in both groups, and an overall effect for mode in the NH group. The CI group rated stimuli of the same tempo similarly, regardless of changes in mode, whereas the NH group did not. A subgroup analysis indicated the same effects in both musician and non-musician CI users and NH listeners. The results suggest that the CI group relied more heavily on tempo than mode in making musical emotion decisions. The subgroup analysis further suggests that level of musical training did not significantly impact this finding. CI users weigh temporal cues more heavily than pitch cues in inferring musical emotion. These findings highlight the significant disadvantage of CI users in comparison with NH listeners for music perception, particularly during recognition of musical emotion, a critically important feature of music.
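
    The factorial analysis reported above (emotion ratings as a function of listener group, tempo, and mode) can be sketched as follows with synthetic data. The column names, effect sizes, and data frame are placeholders rather than the study's materials, and a full analysis would also model the repeated measures within each listener, which this sketch omits.

```python
# Sketch of a factorial ANOVA on Likert-style happiness ratings:
# group (CI vs NH) x tempo (fast vs slow) x mode (major vs minor).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
groups = np.repeat(["CI", "NH"], 200)
tempo = np.tile(np.repeat(["fast", "slow"], 100), 2)
mode = np.tile(["major", "minor"], 200)
# Synthetic ratings: both groups track tempo; only NH listeners track mode.
rating = (np.where(tempo == "fast", 2.0, -2.0)
          + np.where((mode == "major") & (groups == "NH"), 1.5, 0.0)
          + rng.normal(0, 1.0, 400))

df = pd.DataFrame({"group": groups, "tempo": tempo, "mode": mode, "rating": rating})
model = smf.ols("rating ~ C(group) * C(tempo) * C(mode)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```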

  4. Emotion felt by the listener and expressed by the music: a literature review and theoretical investigation

    Directory of Open Access Journals (Sweden)

    Emery Schubert

    2013-12-01

    Full Text Available In his seminal paper, Gabrielsson (2002) distinguishes between the emotion felt by the listener, here ‘internal locus of emotion’ (IL), and the emotion the music is expressing, here ‘external locus of emotion’ (EL). This paper tabulates 16 such publications published in the decade 2003-2012, comprising 19 studies/experiments, and provides some theoretical perspectives. The key findings were that (1) IL ratings were frequently rated statistically the same as or lower than the corresponding EL rating (e.g. a lower felt happiness rating compared to the apparent happiness of the music), and that (2) self-selected and preferred music had a smaller gap across the emotion loci than experimenter-selected and disliked music. These key findings were explained by an ‘inhibited’ emotional contagion mechanism, where the otherwise matching felt emotion may have been attenuated by some other factor such as social context. Matching between EL and IL for loved and self-selected pieces was explained by the activation of ‘contagion’ circuits. Physiological arousal, personality and age, as well as musical features (tempo, mode, putative emotions), were observed to influence perceived and felt emotion distinctions. A variety of data collection formats were identified, but mostly using continuous rating scales. In conclusion, a more systematic use of terminology appears desirable with respect to theory-building. Whether two broad categories, namely matched and unmatched, are sufficient to capture the relationships between EL and IL, instead of four categories as suggested by Gabrielsson, is subject to future research.

  5. Problem of Formation of Emotional Culture of Musical College Students

    Directory of Open Access Journals (Sweden)

    G N Kazantseva

    2012-06-01

    Full Text Available The article presents a structural-functional model of the emotional culture of the personality and characterizes three levels of its development. An empirical test of the model, conducted in the course of implementing a program for the formation of emotional culture in musical college students, is described.

  6. Emotions over time: synchronicity and development of subjective, physiological, and facial affective reactions to music.

    Science.gov (United States)

    Grewe, Oliver; Nagel, Frederik; Kopiez, Reinhard; Altenmüller, Eckart

    2007-11-01

    Most people are able to identify basic emotions expressed in music and experience affective reactions to music. But does music generally induce emotion? Does it elicit subjective feelings, physiological arousal, and motor reactions reliably in different individuals? In this interdisciplinary study, measurement of skin conductance, facial muscle activity, and self-monitoring were synchronized with musical stimuli. A group of 38 participants listened to classical, rock, and pop music and reported their feelings in a two-dimensional emotion space during listening. The first entrance of a solo voice or choir and the beginning of new sections were found to elicit interindividual changes in subjective feelings and physiological arousal. Quincy Jones' "Bossa Nova" motivated movement and laughing in more than half of the participants. Bodily reactions such as "goose bumps" and "shivers" could be stimulated by the "Tuba Mirum" from Mozart's Requiem in 7 of 38 participants. In addition, the authors repeated the experiment seven times with one participant to examine intraindividual stability of effects. This exploratory combination of approaches throws a new light on the astonishing complexity of affective music listening.

  7. Predicting musically induced emotions from physiological inputs: linear and neural network models.

    Science.gov (United States)

    Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M

    2013-01-01

    Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants-heart rate (HR), respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
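
    The modelling strategy described here, a linear model versus a small neural network mapping physiological features to valence and arousal ratings, can be sketched as below. The five feature names, the synthetic data, and the network size are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: linear regression vs a small neural network predicting
# felt arousal and valence from physiological features (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n = 240                                    # e.g., participants x excerpts
X = rng.standard_normal((n, 5))            # HR, respiration, GSR, corrugator, zygomaticus
arousal = X @ np.array([0.6, 0.3, 0.5, -0.1, 0.2]) + rng.normal(0, 0.5, n)
valence = np.tanh(X[:, 4] - X[:, 3]) + rng.normal(0, 0.5, n)   # non-linear on purpose

X_tr, X_te, y_tr, y_te = train_test_split(
    X, np.column_stack([arousal, valence]), test_size=0.25, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

for name, model in [("linear", linear), ("mlp", mlp)]:
    pred = model.predict(X_te)
    print(name, "R^2 arousal:", round(r2_score(y_te[:, 0], pred[:, 0]), 2),
          "R^2 valence:", round(r2_score(y_te[:, 1], pred[:, 1]), 2))
```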

  8. Predicting musically induced emotions from physiological inputs: Linear and neural network models

    Directory of Open Access Journals (Sweden)

    Frank A. Russo

    2013-08-01

    Full Text Available Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of 'felt' emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants – heart rate, respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.

  9. Tuned In emotion regulation program using music listening: Effectiveness for adolescents in educational settings

    Directory of Open Access Journals (Sweden)

    Genevieve Anita Dingle

    2016-06-01

    Full Text Available This paper presents an effectiveness study of Tuned In, a novel emotion regulation intervention that uses participant selected music to evoke emotions in session and teaches participants emotional awareness and regulation skills. The group program content is informed by a two dimensional model of emotion (arousal, valence), along with music psychology theories about how music evokes emotional responses. The program has been evaluated in two samples of adolescents: 41 at risk adolescents (76% males; Mage = 14.8 years) attending an educational re-engagement program and 216 students (100% females; Mage = 13.6 years) attending a mainstream secondary school. Results showed significant pre- to post-program improvements in measures of emotion awareness, identification, and regulation (p < .01 to p = .06 in the smaller at risk sample and all p < .001 in the mainstream school sample). Participant ratings of engagement and likelihood of using the strategies learned in the program were high. Tuned In shows promise as a brief emotion regulation intervention for adolescents, and these findings extend an earlier study with young adults. Tuned In is a-theoretical in regard to psychotherapeutic approach and could be integrated with other program components as required.
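
    The pre- to post-program comparison reported above is essentially a paired test on repeated scores; a minimal illustration with synthetic emotion-awareness scores (placeholder values, not study data) is shown below.

```python
# Paired pre/post comparison on synthetic emotion-regulation scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
pre = rng.normal(3.0, 0.6, 41)             # e.g., emotion-awareness scores before
post = pre + rng.normal(0.3, 0.5, 41)      # modest average improvement

t, p = stats.ttest_rel(post, pre)
print(f"mean change = {np.mean(post - pre):.2f}, t = {t:.2f}, p = {p:.4f}")
```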

  10. Tuned In Emotion Regulation Program Using Music Listening: Effectiveness for Adolescents in Educational Settings.

    Science.gov (United States)

    Dingle, Genevieve A; Hodges, Joseph; Kunde, Ashleigh

    2016-01-01

    This paper presents an effectiveness study of Tuned In, a novel emotion regulation intervention that uses participant selected music to evoke emotions in session and teaches participants emotional awareness and regulation skills. The group program content is informed by a two dimensional model of emotion (arousal, valence), along with music psychology theories about how music evokes emotional responses. The program has been evaluated in two samples of adolescents: 41 "at risk" adolescents (76% males; M age = 14.8 years) attending an educational re-engagement program and 216 students (100% females; M age = 13.6 years) attending a mainstream secondary school. Results showed significant pre- to post-program improvements in measures of emotion awareness, identification, and regulation (p < 0.01 to p = 0.06 in the smaller "at risk" sample and all p < 0.001 in the mainstream school sample). Participant ratings of engagement and likelihood of using the strategies learned in the program were high. Tuned In shows promise as a brief emotion regulation intervention for adolescents, and these findings extend an earlier study with young adults. Tuned In is a-theoretical in regard to psychotherapeutic approach and could be integrated with other program components as required.

  11. Autonomic effects of music in health and Crohn's disease: the impact of isochronicity, emotional valence, and tempo.

    Directory of Open Access Journals (Sweden)

    Roland Uwe Krabs

    Full Text Available Music can evoke strong emotions and thus elicit significant autonomic nervous system (ANS) responses. However, previous studies investigating music-evoked ANS effects produced inconsistent results. In particular, it is not clear (a) whether simply a musical tactus (without common emotional components of music) is sufficient to elicit ANS effects; (b) whether changes in the tempo of a musical piece contribute to the ANS effects; (c) whether emotional valence of music influences ANS effects; and (d) whether music-elicited ANS effects are comparable in healthy subjects and patients with Crohn's disease (CD, an inflammatory bowel disease suspected to be associated with autonomic dysfunction). To address these issues, three experiments were conducted, with a total of n = 138 healthy subjects and n = 19 CD patients. Heart rate (HR), heart rate variability (HRV), and electrodermal activity (EDA) were recorded while participants listened to joyful pleasant music, isochronous tones, and unpleasant control stimuli. Compared to silence, both pleasant music and unpleasant control stimuli elicited an increase in HR and a decrease in a variety of HRV parameters. Surprisingly, similar ANS effects were elicited by isochronous tones (i.e., simply by a tactus). ANS effects did not differ between pleasant and unpleasant stimuli, and different tempi of the music did not entrain ANS activity. Finally, music-evoked ANS effects did not differ between healthy individuals and CD patients. The isochronous pulse of music (i.e., the tactus) is a major factor of music-evoked ANS effects. These ANS effects are characterized by increased sympathetic activity. The emotional valence of a musical piece contributes surprisingly little to the ANS activity changes evoked by that piece.

  12. Autonomic effects of music in health and Crohn's disease: the impact of isochronicity, emotional valence, and tempo.

    Science.gov (United States)

    Krabs, Roland Uwe; Enk, Ronny; Teich, Niels; Koelsch, Stefan

    2015-01-01

    Music can evoke strong emotions and thus elicit significant autonomic nervous system (ANS) responses. However, previous studies investigating music-evoked ANS effects produced inconsistent results. In particular, it is not clear (a) whether simply a musical tactus (without common emotional components of music) is sufficient to elicit ANS effects; (b) whether changes in the tempo of a musical piece contribute to the ANS effects; (c) whether emotional valence of music influences ANS effects; and (d) whether music-elicited ANS effects are comparable in healthy subjects and patients with Crohn´s disease (CD, an inflammatory bowel disease suspected to be associated with autonomic dysfunction). To address these issues, three experiments were conducted, with a total of n = 138 healthy subjects and n = 19 CD patients. Heart rate (HR), heart rate variability (HRV), and electrodermal activity (EDA) were recorded while participants listened to joyful pleasant music, isochronous tones, and unpleasant control stimuli. Compared to silence, both pleasant music and unpleasant control stimuli elicited an increase in HR and a decrease in a variety of HRV parameters. Surprisingly, similar ANS effects were elicited by isochronous tones (i.e., simply by a tactus). ANS effects did not differ between pleasant and unpleasant stimuli, and different tempi of the music did not entrain ANS activity. Finally, music-evoked ANS effects did not differ between healthy individuals and CD patients. The isochronous pulse of music (i.e., the tactus) is a major factor of music-evoked ANS effects. These ANS effects are characterized by increased sympathetic activity. The emotional valence of a musical piece contributes surprisingly little to the ANS activity changes evoked by that piece.
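
    For readers unfamiliar with the HRV parameters mentioned here, the sketch below computes two standard time-domain measures (SDNN and RMSSD) from inter-beat intervals; the simulated RR series and their parameters are placeholders, not data from this study.

```python
# Time-domain HRV metrics (SDNN, RMSSD) from RR intervals, compared between
# a "silence" segment and a "music" segment (simulated data).
import numpy as np

def hrv_time_domain(rr_ms):
    """Return mean HR (bpm), SDNN (ms) and RMSSD (ms) from RR intervals in ms."""
    rr_ms = np.asarray(rr_ms, dtype=float)
    mean_hr = 60000.0 / rr_ms.mean()
    sdnn = rr_ms.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))
    return mean_hr, sdnn, rmssd

rng = np.random.default_rng(5)
rr_silence = rng.normal(850, 50, 300)      # placeholder: slower, more variable beats
rr_music = rng.normal(800, 35, 300)        # placeholder: faster, less variable beats

for label, rr in [("silence", rr_silence), ("music", rr_music)]:
    hr, sdnn, rmssd = hrv_time_domain(rr)
    print(f"{label:8s} HR={hr:5.1f} bpm  SDNN={sdnn:5.1f} ms  RMSSD={rmssd:5.1f} ms")
```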

  13. Autonomic Effects of Music in Health and Crohn's Disease: The Impact of Isochronicity, Emotional Valence, and Tempo

    Science.gov (United States)

    Krabs, Roland Uwe; Enk, Ronny; Teich, Niels; Koelsch, Stefan

    2015-01-01

    Background Music can evoke strong emotions and thus elicit significant autonomic nervous system (ANS) responses. However, previous studies investigating music-evoked ANS effects produced inconsistent results. In particular, it is not clear (a) whether simply a musical tactus (without common emotional components of music) is sufficient to elicit ANS effects; (b) whether changes in the tempo of a musical piece contribute to the ANS effects; (c) whether emotional valence of music influences ANS effects; and (d) whether music-elicited ANS effects are comparable in healthy subjects and patients with Crohn´s disease (CD, an inflammatory bowel disease suspected to be associated with autonomic dysfunction). Methods To address these issues, three experiments were conducted, with a total of n = 138 healthy subjects and n = 19 CD patients. Heart rate (HR), heart rate variability (HRV), and electrodermal activity (EDA) were recorded while participants listened to joyful pleasant music, isochronous tones, and unpleasant control stimuli. Results Compared to silence, both pleasant music and unpleasant control stimuli elicited an increase in HR and a decrease in a variety of HRV parameters. Surprisingly, similar ANS effects were elicited by isochronous tones (i.e., simply by a tactus). ANS effects did not differ between pleasant and unpleasant stimuli, and different tempi of the music did not entrain ANS activity. Finally, music-evoked ANS effects did not differ between healthy individuals and CD patients. Conclusions The isochronous pulse of music (i.e., the tactus) is a major factor of music-evoked ANS effects. These ANS effects are characterized by increased sympathetic activity. The emotional valence of a musical piece contributes surprisingly little to the ANS activity changes evoked by that piece. PMID:25955253

  14. Clinical and Demographic Factors Associated with the Cognitive and Emotional Efficacy of Regular Musical Activities in Dementia.

    Science.gov (United States)

    Särkämö, Teppo; Laitinen, Sari; Numminen, Ava; Kurki, Merja; Johnson, Julene K; Rantanen, Pekka

    2016-01-01

    Recent evidence suggests that music-based interventions can be beneficial in maintaining cognitive, emotional, and social functioning in persons with dementia (PWDs). Our aim was to determine how clinical, demographic, and musical background factors influence the cognitive and emotional efficacy of caregiver-implemented musical activities in PWDs. In a randomized controlled trial, 89 PWD-caregiver dyads received a 10-week music coaching intervention involving either singing or music listening or standard care. Extensive neuropsychological testing and mood and quality of life (QoL) measures were performed before and after the intervention (n = 84) and six months later (n = 74). The potential effects of six key background variables (dementia etiology and severity, age, care situation, singing/instrument playing background) on the outcome of the intervention were assessed. Singing was beneficial especially in improving working memory in PWDs with mild dementia and in maintaining executive function and orientation in younger PWDs. Music listening was beneficial in supporting general cognition, working memory, and QoL especially in PWDs with moderate dementia not caused by Alzheimer's disease (AD) who were in institutional care. Both music interventions alleviated depression especially in PWDs with mild dementia and AD. The musical background of the PWD did not influence the efficacy of the music interventions. Our findings suggest that clinical and demographic factors can influence the cognitive and emotional efficacy of caregiver-implemented musical activities and are, therefore, recommended to take into account when applying and developing the intervention to achieve the greatest benefit.

  15. Coping with preoperative anxiety in cesarean section: physiological, cognitive, and emotional effects of listening to favorite music.

    Science.gov (United States)

    Kushnir, Jonathan; Friedman, Ahuva; Ehrenfeld, Mally; Kushnir, Talma

    2012-06-01

    Listening to music has a stress-reducing effect in surgical procedures. The effects of listening to music immediately before a cesarean section have not been studied. The objective of this study was to assess the effects of listening to selected music while waiting for a cesarean section on emotional reactions, on cognitive appraisal of the threat of surgery, and on stress-related physiological reactions. A total of 60 healthy women waiting alone to undergo an elective cesarean section for medical reasons only were randomly assigned either to an experimental or a control group. An hour before surgery they reported mood, and threat perception. Vital signs were assessed by a nurse. The experimental group listened to preselected favorite music for 40 minutes, and the control group waited for the operation without music. At the end of this period, all participants responded to a questionnaire assessing mood and threat perception, and the nurse measured vital signs. Women who listened to music before a cesarean section had a significant increase in positive emotions and a significant decline in negative emotions and perceived threat of the situation when compared with women in the control group, who exhibited a decline in positive emotions, an increase in the perceived threat of the situation, and had no change in negative emotions. Women who listened to music also exhibited a significant reduction in systolic blood pressure compared with a significant increase in diastolic blood pressure and respiratory rate in the control group. Listening to favorite music immediately before a cesarean section may be a cost-effective, emotion-focused coping strategy. (BIRTH 39:2 June 2012). © 2012, Copyright the Authors Journal compilation © 2012, Wiley Periodicals, Inc.

  16. Music in mind, a randomized controlled trial of music therapy for young people with behavioural and emotional problems: study protocol.

    Science.gov (United States)

    Porter, Sam; Holmes, Valerie; McLaughlin, Katrina; Lynn, Fiona; Cardwell, Chris; Braiden, Hannah-Jane; Doran, Jackie; Rogan, Sheelagh

    2012-10-01

    This article is a report of a trial protocol to determine if improvisational music therapy leads to clinically significant improvement in communication and interaction skills for young people experiencing social, emotional or behavioural problems. Music therapy is often considered an effective intervention for young people experiencing social, emotional or behavioural difficulties. However, this assumption lacks empirical evidence. Music in mind is a multi-centred single-blind randomized controlled trial involving 200 young people (aged 8-16 years) and their parents. Eligible participants will have a working diagnosis within the ambit of International Classification of Diseases 10 (ICD-10) mental and behavioural disorders and will be recruited over 15 months from six centres within the Child and Adolescent Mental Health Services of a large health and social care trust in Northern Ireland. Participants will be randomly allocated in a 1:1 ratio to receive standard care alone or standard care plus 12 weekly music therapy sessions delivered by the Northern Ireland Music Therapy Trust. Baseline data will be collected from young people and their parents using standardized outcome measures for communicative and interaction skills (primary endpoint), self-esteem, social functioning, depression and family functioning. Follow-up data will be collected 1 and 13 weeks after the final music therapy session. A cost-effectiveness analysis will also be carried out. This study will be the largest trial to date examining the effect of music therapy on young people experiencing social, emotional or behavioural difficulties and will provide empirical evidence for the use of music therapy among this population. Trial registration. This study is registered in the ISRCTN Register, ISRCTN96352204. Ethical approval was gained in October 2010. © 2012 Blackwell Publishing Ltd.

  17. Modeling Expressed Emotions in Music using Pairwise Comparisons

    DEFF Research Database (Denmark)

    Madsen, Jens; Nielsen, Jens Brehm; Jensen, Bjørn Sand

    2012-01-01

    We introduce a two-alternative forced-choice experimental paradigm to quantify expressed emotions in music using the two well-known arousal and valence (AV) dimensions. In order to produce AV scores from the pairwise comparisons and to visualize the locations of excerpts in the AV space, we...
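
    As an illustration of the scoring step this record describes (and not the authors' own model), pairwise two-alternative forced-choice judgements can be turned into a scalar score per excerpt with a Bradley-Terry-style fit; the excerpt labels and comparison outcomes below are invented. A minimal Python sketch:

      # Hypothetical Bradley-Terry fit: converts pairwise "which excerpt is more aroused?"
      # judgements into one strength score per excerpt (illustrative data, not from the paper).
      from collections import defaultdict
      from itertools import chain

      comparisons = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B"), ("A", "C")]

      items = sorted(set(chain.from_iterable(comparisons)))
      wins = defaultdict(int)           # total wins per excerpt
      pair_counts = defaultdict(int)    # number of comparisons per unordered pair
      for winner, loser in comparisons:
          wins[winner] += 1
          pair_counts[frozenset((winner, loser))] += 1

      # Minorisation-maximisation updates for the Bradley-Terry strengths
      strength = {i: 1.0 for i in items}
      for _ in range(100):
          new = {}
          for i in items:
              denom = sum(
                  pair_counts[frozenset((i, j))] / (strength[i] + strength[j])
                  for j in items
                  if j != i and frozenset((i, j)) in pair_counts
              )
              new[i] = wins[i] / denom if denom > 0 else strength[i]
          total = sum(new.values())
          strength = {i: v / total for i, v in new.items()}  # normalise for identifiability

      print(strength)  # larger value = judged "more aroused" more often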

  18. Non-verbal Full Body Emotional and Social Interaction: A Case Study on Multimedia Systems for Active Music Listening

    Science.gov (United States)

    Camurri, Antonio

    Research on HCI and multimedia systems for art and entertainment based on non-verbal, full-body, emotional and social interaction is the main topic of this paper. A short review of previous research projects in this area at our centre is presented, to introduce the main issues discussed in the paper. In particular, a case study based on novel paradigms of social active music listening is presented. Active music listening experience enables users to dynamically mould expressive performance of music and of audiovisual content. This research is partially supported by the FP7 EU-ICT Project SAME (Sound and Music for Everyone, Everyday, Everywhere, Every Way, www.sameproject.eu).

  19. Preparing Empirical Methodologies to Examine Enactive Subjects Experiencing Musical Emotions

    DEFF Research Database (Denmark)

    Christensen, Justin

    2016-01-01

    in listeners. Many of these theories search for universal emotional essences and cause-and-effect relationships that often result in erasing the body from these experiences. Still, after reducing these emotional responses to discrete categories or localized brain functions, these theories have not been very...... successful in finding universal emotional essence in response to music. In this paper, I argue that we need to bring the body back into this research, to allow for listener variability, and include multiple levels of focus to help find meaningful relationships of emotional responses. I also appeal...

  20. A music therapy tool for assessing parent-child interaction in cases of emotional neglect

    DEFF Research Database (Denmark)

    Jacobsen, Stine Lindahl; H. McKinney, Cathy

    2015-01-01

    Using a music therapy approach to assess emotional communication and parent–child interaction is new to the field of child protection. However, musical improvisations in music therapy have long been known as an analogue to affect attunement and early non-verbal communication between parent and inf...

  1. Getting into the musical zone: Trait emotional intelligence and amount of practice predict flow in pianists

    Directory of Open Access Journals (Sweden)

    Manuela Maria Marin

    2013-11-01

    Full Text Available Being ‘in flow’ or ‘in the zone’ is defined as an extremely focused state of consciousness which occurs during intense engagement in an activity. In general, flow has been linked to peak performances (high achievement) and feelings of intense pleasure and happiness. However, empirical research on flow in music performance is scarce, although it may offer novel insights into the question of why musicians engage in musical activities for extensive periods of time. Here, we focused on individual differences in a group of 76 piano performance students and assessed their flow experience in piano performance as well as their trait emotional intelligence. Multiple regression analysis revealed that flow was predicted by the amount of daily practice and trait emotional intelligence. Other background variables (gender, age, duration of piano training and age of first piano training) were not predictive. To predict high achievement in piano performance (i.e., winning a prize in a piano competition), a seven-predictor logistic regression model was fitted to the data, and we found that the odds of winning a prize in a piano competition were predicted by the amount of daily practice and the age at which piano training began. Interestingly, a positive relationship between flow and high achievement was not supported. Further, we explored the role of musical emotions and musical styles in the induction of flow by a self-developed questionnaire. Results suggest that besides individual differences among pianists, specific structural and compositional features of musical pieces and related emotional expressions may facilitate flow experiences. Altogether, these findings highlight the role of emotion in the experience of flow during music performance, and call for further experiments addressing emotion in relation to the performer and the music alike.

  2. Getting into the musical zone: trait emotional intelligence and amount of practice predict flow in pianists

    Science.gov (United States)

    Marin, Manuela M.; Bhattacharya, Joydeep

    2013-01-01

    Being “in flow” or “in the zone” is defined as an extremely focused state of consciousness which occurs during intense engagement in an activity. In general, flow has been linked to peak performances (high achievement) and feelings of intense pleasure and happiness. However, empirical research on flow in music performance is scarce, although it may offer novel insights into the question of why musicians engage in musical activities for extensive periods of time. Here, we focused on individual differences in a group of 76 piano performance students and assessed their flow experience in piano performance as well as their trait emotional intelligence. Multiple regression analysis revealed that flow was predicted by the amount of daily practice and trait emotional intelligence. Other background variables (gender, age, duration of piano training and age of first piano training) were not predictive. To predict high achievement in piano performance (i.e., winning a prize in a piano competition), a seven-predictor logistic regression model was fitted to the data, and we found that the odds of winning a prize in a piano competition were predicted by the amount of daily practice and the age at which piano training began. Interestingly, a positive relationship between flow and high achievement was not supported. Further, we explored the role of musical emotions and musical styles in the induction of flow by a self-developed questionnaire. Results suggest that besides individual differences among pianists, specific structural and compositional features of musical pieces and related emotional expressions may facilitate flow experiences. Altogether, these findings highlight the role of emotion in the experience of flow during music performance and call for further experiments addressing emotion in relation to the performer and the music alike. PMID:24319434

  3. Emotional, motivational and interpersonal responsiveness of children with autism in improvisational music therapy

    DEFF Research Database (Denmark)

    Kim, Jinah; Wigram, Tony; Gold, Christian

    2009-01-01

    Through behavioural analysis, this study investigated the social-motivational aspects of musical interaction between the child and the therapist in improvisational music therapy by measuring emotional, motivational and interpersonal responsiveness in children with autism during joint attention ep...

  4. Audiovisual integration of emotional signals from music improvisation does not depend on temporal correspondence.

    Science.gov (United States)

    Petrini, Karin; McAleer, Phil; Pollick, Frank

    2010-04-06

    In the present study we applied a paradigm often used in face-voice affect perception to solo music improvisation to examine how the emotional valence of sound and gesture are integrated when perceiving an emotion. Three brief excerpts expressing emotion produced by a drummer and three by a saxophonist were selected. From these bimodal congruent displays the audio-only, visual-only, and audiovisually incongruent conditions (obtained by combining the two signals both within and between instruments) were derived. In Experiment 1 twenty musical novices judged the perceived emotion and rated the strength of each emotion. The results indicate that sound dominated the visual signal in the perception of affective expression, though this was more evident for the saxophone. In Experiment 2 a further sixteen musical novices were asked to either pay attention to the musicians' movements or to the sound when judging the perceived emotions. The results showed no effect of visual information when judging the sound. On the contrary, when judging the emotional content of the visual information, a worsening in performance was obtained for the incongruent condition that combined different emotional auditory and visual information for the same instrument. The effect of emotionally discordant information thus became evident only when the auditory and visual signals belonged to the same categorical event despite their temporal mismatch. This suggests that the integration of emotional information may be reinforced by its semantic attributes but might be independent from temporal features. Copyright 2010 Elsevier B.V. All rights reserved.

  5. Exploring Musical Activities and Their Relationship to Emotional Well-Being in Elderly People across Europe: A Study Protocol.

    Science.gov (United States)

    Grau-Sánchez, Jennifer; Foley, Meabh; Hlavová, Renata; Muukkonen, Ilkka; Ojinaga-Alfageme, Olatz; Radukic, Andrijana; Spindler, Melanie; Hundevad, Bodil

    2017-01-01

    Music is a powerful, pleasurable stimulus that can induce positive feelings and can therefore be used for emotional self-regulation. Musical activities such as listening to music, playing an instrument, singing or dancing are also an important source for social contact, promoting interaction and the sense of belonging with others. Recent evidence has suggested that after retirement, other functions of music, such as self-conceptual processing related to autobiographical memories, become more salient. However, few studies have addressed the meaningfulness of music in the elderly. This study aims to investigate elderly people's habits and preferences related to music, study the role music plays in their everyday life, and explore the relationship between musical activities and emotional well-being across different countries of Europe. A survey will be administered to elderly people over the age of 65 from five different European countries (Bosnia and Herzegovina, Czechia, Germany, Ireland, and UK) and to a control group. Participants in both groups will be asked about basic sociodemographic information, habits and preferences in their participation in musical activities and emotional well-being. Overall, the aim of this study is to gain a deeper understanding of the role of music in the elderly from a psychological perspective. This advanced knowledge could help to develop therapeutic applications, such as musical recreational programs for healthy older people or elderly in residential care, which are better able to meet their emotional and social needs.

  6. Colour Association with Music Is Mediated by Emotion: Evidence from an Experiment Using a CIE Lab Interface and Interviews.

    Science.gov (United States)

    Lindborg, PerMagnus; Friberg, Anders K

    2015-01-01

    Crossmodal associations may arise at neurological, perceptual, cognitive, or emotional levels of brain processing. Higher-level modal correspondences between musical timbre and visual colour have been previously investigated, though with limited sets of colour. We developed a novel response method that employs a tablet interface to navigate the CIE Lab colour space. The method was used in an experiment where 27 film music excerpts were presented to participants (n = 22) who continuously manipulated the colour and size of an on-screen patch to match the music. Analysis of the data replicated and extended earlier research, for example, that happy music was associated with yellow, music expressing anger with large red colour patches, and sad music with smaller patches towards dark blue. Correlation analysis suggested patterns of relationships between audio features and colour patch parameters. Using partial least squares regression, we tested models for predicting colour patch responses from audio features and ratings of perceived emotion in the music. Parsimonious models that included emotion robustly explained between 60% and 75% of the variation in each of the colour patch parameters, as measured by cross-validated R2. To illuminate the quantitative findings, we performed a content analysis of structured spoken interviews with the participants. This provided further evidence of a significant emotion mediation mechanism, whereby people tended to match colour association with the perceived emotion in the music. The mixed method approach of our study gives strong evidence that emotion can mediate crossmodal association between music and visual colour. The CIE Lab interface promises to be a useful tool in perceptual ratings of music and other sounds.
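
    The analysis summarised above uses partial least squares regression, evaluated with cross-validated R2, to predict colour patch parameters from audio features and emotion ratings. A minimal scikit-learn sketch of that kind of model, with random placeholder data standing in for the study's features and CIE Lab patch parameters, might look as follows:

      # Sketch only: PLS regression with cross-validated R^2, in the spirit of the analysis above.
      # X and y are random placeholders, not the study's data.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import KFold, cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(27, 10))  # 27 excerpts x 10 predictors (audio features + emotion ratings)
      y = rng.normal(size=(27, 3))   # colour patch parameters (e.g. L*, a*, b*), placeholder values

      model = PLSRegression(n_components=2)
      cv = KFold(n_splits=5, shuffle=True, random_state=0)
      scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
      print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")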

  7. Colour Association with Music Is Mediated by Emotion: Evidence from an Experiment Using a CIE Lab Interface and Interviews

    Science.gov (United States)

    Lindborg, PerMagnus; Friberg, Anders K.

    2015-01-01

    Crossmodal associations may arise at neurological, perceptual, cognitive, or emotional levels of brain processing. Higher-level modal correspondences between musical timbre and visual colour have been previously investigated, though with limited sets of colour. We developed a novel response method that employs a tablet interface to navigate the CIE Lab colour space. The method was used in an experiment where 27 film music excerpts were presented to participants (n = 22) who continuously manipulated the colour and size of an on-screen patch to match the music. Analysis of the data replicated and extended earlier research, for example, that happy music was associated with yellow, music expressing anger with large red colour patches, and sad music with smaller patches towards dark blue. Correlation analysis suggested patterns of relationships between audio features and colour patch parameters. Using partial least squares regression, we tested models for predicting colour patch responses from audio features and ratings of perceived emotion in the music. Parsimonious models that included emotion robustly explained between 60% and 75% of the variation in each of the colour patch parameters, as measured by cross-validated R 2. To illuminate the quantitative findings, we performed a content analysis of structured spoken interviews with the participants. This provided further evidence of a significant emotion mediation mechanism, whereby people tended to match colour association with the perceived emotion in the music. The mixed method approach of our study gives strong evidence that emotion can mediate crossmodal association between music and visual colour. The CIE Lab interface promises to be a useful tool in perceptual ratings of music and other sounds. PMID:26642050

  8. Not all sounds sound the same: Parkinson's disease affects differently emotion processing in music and in speech prosody.

    Science.gov (United States)

    Lima, César F; Garrett, Carolina; Castro, São Luís

    2013-01-01

    Does emotion processing in music and speech prosody recruit common neurocognitive mechanisms? To examine this question, we implemented a cross-domain comparative design in Parkinson's disease (PD). Twenty-four patients and 25 controls performed emotion recognition tasks for music and spoken sentences. In music, patients had impaired recognition of happiness and peacefulness, and intact recognition of sadness and fear; this pattern was independent of general cognitive and perceptual abilities. In speech, patients had a small global impairment, which was significantly mediated by executive dysfunction. Hence, PD affected differently musical and prosodic emotions. This dissociation indicates that the mechanisms underlying the two domains are partly independent.

  9. Age-related differences in affective responses to and memory for emotions conveyed by music: a cross-sectional study

    OpenAIRE

    Vieillard, Sandrine; Gilet, Anne-Laure

    2013-01-01

    There is mounting evidence that aging is associated with the maintenance of positive affect and the decrease of negative affect to ensure emotion regulation goals. Previous empirical studies have primarily focused on a visual or autobiographical form of emotion communication. To date, little investigation has been done on musical emotions. The few studies that have addressed aging and emotions in music were mainly interested in emotion recognition, thus leaving unexplo...

  10. Preattentive processing of emotional musical tones: a multidimensional scaling and ERP study

    Directory of Open Access Journals (Sweden)

    Thomas F Münte

    2013-09-01

    Full Text Available Musical emotion can be conveyed by subtle variations in timbre. Here, we investigated whether the brain is capable of discriminating tones differing in emotional expression by recording event-related potentials (ERPs) in an oddball paradigm under preattentive listening conditions. First, using multidimensional Fechnerian scaling, pairs of violin tones played with a happy or sad intonation were rated same or different by a group of non-musicians. Three happy and three sad tones were selected for the ERP experiment. The Fechnerian distances between tones within an emotion were in the same range as the distances between tones of different emotions. In two conditions, either 3 happy and 1 sad or 3 sad and 1 happy tone were presented in pseudo-random order. A mismatch negativity for the emotional deviant was observed, indicating that in spite of considerable perceptual differences between the three equiprobable tones of the standard emotion, a template was formed based on timbral cues against which the emotional deviant was compared. Based on Juslin’s assumption of redundant code usage, we propose that tones were grouped together because they were identified as belonging to one emotional category based on different emotion-specific cues. These results indicate that the brain forms an emotional memory trace at a preattentive level and thus extend previous investigations in which emotional deviance was confounded with physical dissimilarity. Differences between sad and happy tones were observed, which might be due to the fact that the happy emotion is mostly communicated by suprasegmental features.

  11. Emotional, Motivational and Interpersonal Responsiveness of Children with Autism in Improvisational Music Therapy

    Science.gov (United States)

    Kim, Jinah; Wigram, Tony; Gold, Christian

    2009-01-01

    Through behavioural analysis, this study investigated the social-motivational aspects of musical interaction between the child and the therapist in improvisational music therapy by measuring emotional, motivational and interpersonal responsiveness in children with autism during joint engagement episodes. The randomized controlled study (n = 10)…

  12. Assessing the Role of Emotional Associations in Mediating Crossmodal Correspondences between Classical Music and Red Wine

    Directory of Open Access Journals (Sweden)

    Qian (Janice) Wang

    2017-01-01

    Full Text Available Several recent studies have demonstrated that people intuitively make consistent matches between classical music and specific wines. It is not clear, however, what governs such crossmodal mappings. Here, we assess the role of emotion—specifically different dimensional aspects of valence, arousal, and dominance—in mediating such mappings. Participants matched three different red wines to three different pieces of classical music. Subsequently, they made emotion ratings separately for each wine and each musical selection. The results revealed that certain wine–music pairings were rated as being significantly better matches than others. More importantly, there was evidence that the participants’ dominance and arousal ratings for the wines and the music predicted their matching rating for each wine–music pairing. These results therefore support the view that wine–music associations are not arbitrary but can be explained, at least in part, by common emotional associations.

  13. Towards Predicting Expressed Emotion in Music from Pairwise Comparisons

    DEFF Research Database (Denmark)

    Madsen, Jens; Jensen, Bjørn Sand; Larsen, Jan

    2012-01-01

    We introduce five regression models for the modeling of expressed emotion in music using data obtained in a two alternative forced choice listening experiment. The predictive performance of the proposed models is compared using learning curves, showing that all models converge to produce a similar...
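
    The record above compares regression models for expressed emotion using learning curves. A generic sketch of such a comparison with scikit-learn is shown below; the features, targets and model choices are placeholders rather than the five models from the paper.

      # Sketch: compare two regressors with learning curves (mean cross-validated R^2 as a
      # function of training-set size). Data and models are generic stand-ins.
      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import learning_curve
      from sklearn.svm import SVR

      rng = np.random.default_rng(3)
      X = rng.normal(size=(120, 20))                        # stand-in audio features
      y = 0.8 * X[:, 0] + rng.normal(scale=0.3, size=120)   # stand-in emotion scores

      for name, model in [("ridge", Ridge()), ("svr", SVR())]:
          sizes, train_scores, test_scores = learning_curve(
              model, X, y, train_sizes=np.linspace(0.2, 1.0, 5), cv=5, scoring="r2"
          )
          print(name, np.round(test_scores.mean(axis=1), 2))  # mean CV R^2 per training size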

  14. Music close to one's heart: heart rate variability with music, diagnostic with e-bra and smartphone

    Science.gov (United States)

    Hegde, Shantala; Kumar, Prashanth S.; Rai, Pratyush; Mathur, Gyanesh N.; Varadan, Vijay K.

    2012-04-01

    Music is a powerful elicitor of emotions. Emotions evoked by music, through autonomic correlates, have been shown to cause significant modulation of parameters like heart rate and blood pressure. Consequently, Heart Rate Variability (HRV) analysis can be a powerful tool to explore evidence-based therapeutic functions of music and conduct empirical studies on the effect of musical emotion on heart function. However, there are limitations with current studies. HRV analysis has produced variable results for different emotions evoked via music, owing to variability in the methodology and the nature of music chosen. Therefore, a pragmatic understanding of HRV correlates of musical emotion in individuals listening to specifically chosen music whilst carrying out day-to-day routine activities is needed. In the present study, we aim to study HRV as a single case study, using an e-bra with nano-sensors to record heart rate in real time. The e-bra, developed previously, has several salient features that make it conducive for this study: a fully integrated garment, dry electrodes for easy use, and unrestricted mobility. The study considers two experimental conditions: first, HRV will be recorded when there is no music in the background, and second, when music chosen by the researcher and by the subject is playing in the background.
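
    For readers unfamiliar with HRV analysis, the sketch below computes two standard time-domain HRV measures (SDNN and RMSSD) from a list of RR intervals; the interval values are synthetic and the function is an illustration, not part of the e-bra system described above.

      # Minimal time-domain HRV computation from RR (inter-beat) intervals in milliseconds.
      # The sample intervals are synthetic; real data would come from ECG beat detection.
      import numpy as np

      def hrv_time_domain(rr_ms):
          rr = np.asarray(rr_ms, dtype=float)
          diffs = np.diff(rr)
          return {
              "mean_hr_bpm": 60000.0 / rr.mean(),        # average heart rate
              "sdnn_ms": rr.std(ddof=1),                 # overall variability
              "rmssd_ms": np.sqrt(np.mean(diffs ** 2)),  # beat-to-beat (short-term) variability
          }

      rr_intervals = [812, 795, 830, 788, 805, 821, 799, 810]  # made-up values
      print(hrv_time_domain(rr_intervals))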

  15. A Model-Based Approach to Constructing Music Similarity Functions

    Directory of Open Access Journals (Sweden)

    Lamere Paul

    2007-01-01

    Full Text Available Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.
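
    This record (and its duplicate below) takes as its starting point a simple distance baseline: a Euclidean distance between spectral features, optionally augmented with temporal features such as zero-crossing rates. A minimal sketch of that baseline, assuming the librosa library is available, is shown below; it is the baseline the authors build on, not their model-based metric, and the file paths are placeholders.

      # Baseline audio-similarity sketch: Euclidean distance between summary statistics of
      # MFCCs augmented with zero-crossing rate (assumes librosa; paths are placeholders).
      import numpy as np
      import librosa

      def feature_vector(path):
          y, sr = librosa.load(path, mono=True)
          mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # timbre-related spectral features
          zcr = librosa.feature.zero_crossing_rate(y)         # simple temporal feature
          feats = np.vstack([mfcc, zcr])
          return np.concatenate([feats.mean(axis=1), feats.std(axis=1)])  # summary statistics

      def spectral_distance(path_a, path_b):
          a, b = feature_vector(path_a), feature_vector(path_b)
          return float(np.linalg.norm(a - b))  # smaller distance = more similar

      # print(spectral_distance("track_a.wav", "track_b.wav"))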

  16. A Model-Based Approach to Constructing Music Similarity Functions

    Science.gov (United States)

    West, Kris; Lamere, Paul

    2006-12-01

    Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.

  17. Music based cognitive remediation therapy for patients with traumatic brain injury

    Directory of Open Access Journals (Sweden)

    Shantala Hegde

    2014-03-01

    Full Text Available Traumatic brain injury (TBI) is one of the common causes of disability in physical, psychological and social domains of functioning leading to poor quality of life. TBI leads to impairment in sensory, motor, language and emotional processing, and also in cognitive functions such as attention, information processing, executive functions and memory. Cognitive impairment plays a central role in functional recovery in TBI. Innovative methods such as music therapy to alleviate cognitive impairments have been investigated recently. The role of music in cognitive rehabilitation is evolving, based on newer findings emerging from the fields of neuromusicology and music cognition. Research findings from these fields have contributed significantly to our understanding of music perception and cognition, and its neural underpinnings. From a neuroscientific perspective, indulging in music is considered as one of the best cognitive exercises. With ‘plasticity’ as its veritable nature, brain engages in producing music indulging an array of cognitive functions and the product, the music, in turn permits restoration and alters brain functions. With scientific findings as its basis, ‘Neurologic Music Therapy’ (NMT) has been developed as a systematic treatment method to improve sensorimotor, language and cognitive domains of functioning via music. A preliminary study examining the effect of NMT in cognitive rehabilitation has reported promising results in improving executive functions along with improvement in emotional adjustment and decreasing depression and anxiety following TBI. The potential usage of music-based cognitive rehabilitation therapy in various clinical conditions including TBI is yet to be fully explored. There is a need for systematic research studies to bridge the gap between increasing theoretical understanding of usage of music in cognitive rehabilitation and application of the same in a heterogeneous condition such as TBI.

  18. Music-based cognitive remediation therapy for patients with traumatic brain injury.

    Science.gov (United States)

    Hegde, Shantala

    2014-01-01

    Traumatic brain injury (TBI) is one of the common causes of disability in physical, psychological, and social domains of functioning leading to poor quality of life. TBI leads to impairment in sensory, motor, language, and emotional processing, and also in cognitive functions such as attention, information processing, executive functions, and memory. Cognitive impairment plays a central role in functional recovery in TBI. Innovative methods such as music therapy to alleviate cognitive impairments have been investigated recently. The role of music in cognitive rehabilitation is evolving, based on newer findings emerging from the fields of neuromusicology and music cognition. Research findings from these fields have contributed significantly to our understanding of music perception and cognition, and its neural underpinnings. From a neuroscientific perspective, indulging in music is considered as one of the best cognitive exercises. With "plasticity" as its veritable nature, brain engages in producing music indulging an array of cognitive functions and the product, the music, in turn permits restoration and alters brain functions. With scientific findings as its basis, "neurologic music therapy" (NMT) has been developed as a systematic treatment method to improve sensorimotor, language, and cognitive domains of functioning via music. A preliminary study examining the effect of NMT in cognitive rehabilitation has reported promising results in improving executive functions along with improvement in emotional adjustment and decreasing depression and anxiety following TBI. The potential usage of music-based cognitive rehabilitation therapy in various clinical conditions including TBI is yet to be fully explored. There is a need for systematic research studies to bridge the gap between increasing theoretical understanding of usage of music in cognitive rehabilitation and application of the same in a heterogeneous condition such as TBI.

  19. Superior Analgesic Effect of an Active Distraction versus Pleasant Unfamiliar Sounds and Music: The Influence of Emotion and Cognitive Style

    Science.gov (United States)

    Garza Villarreal, Eduardo A.; Brattico, Elvira; Vase, Lene; Østergaard, Leif; Vuust, Peter

    2012-01-01

    Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound to reduce pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception. PMID:22242169

  20. Superior analgesic effect of an active distraction versus pleasant unfamiliar sounds and music: the influence of emotion and cognitive style.

    Directory of Open Access Journals (Sweden)

    Eduardo A Garza Villarreal

    Full Text Available Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound to reduce pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception.

  1. Superior analgesic effect of an active distraction versus pleasant unfamiliar sounds and music: the influence of emotion and cognitive style.

    Science.gov (United States)

    Villarreal, Eduardo A Garza; Brattico, Elvira; Vase, Lene; Østergaard, Leif; Vuust, Peter

    2012-01-01

    Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound to reduce pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception.

  2. Music to My Eyes: Cross-Modal Interactions in the Perception of Emotions in Musical Performance

    Science.gov (United States)

    Vines, Bradley W.; Krumhansl, Carol L.; Wanderley, Marcelo M.; Dalca, Ioana M.; Levitin, Daniel J.

    2011-01-01

    We investigate non-verbal communication through expressive body movement and musical sound, to reveal higher cognitive processes involved in the integration of emotion from multiple sensory modalities. Participants heard, saw, or both heard and saw recordings of a Stravinsky solo clarinet piece, performed with three distinct expressive styles:…

  3. Emotional and Motivational Uses of Music in Sports and Exercise: A Questionnaire Study among Athletes

    Science.gov (United States)

    Laukka, Petri; Quick, Lina

    2013-01-01

    Music is present in many sport and exercise situations, but empirical investigations on the motives for listening to music in sports remain scarce. In this study, Swedish elite athletes (N = 252) answered a questionnaire that focused on the emotional and motivational uses of music in sports and exercise. The questionnaire contained both…

  4. The influence of music-elicited emotions and relative pitch on absolute pitch memory for familiar melodies.

    Science.gov (United States)

    Jakubowski, Kelly; Müllensiefen, Daniel

    2013-01-01

    Levitin's findings that nonmusicians could produce from memory the absolute pitches of self-selected pop songs have been widely cited in the music psychology literature. These findings suggest that latent absolute pitch (AP) memory may be a more widespread trait within the population than traditional AP labelling ability. However, it has been left unclear what factors may facilitate absolute pitch retention for familiar pieces of music. The aim of the present paper was to investigate factors that may contribute to latent AP memory using Levitin's sung production paradigm for AP memory and comparing results to the outcomes of a pitch labelling task, a relative pitch memory test, measures of music-induced emotions, and various measures of participants' musical backgrounds. Our results suggest that relative pitch memory and the quality and degree of music-elicited emotions impact on latent AP memory.

  5. Emotions and Understanding in Music. A Transcendental and Empirical Approach

    Czech Academy of Sciences Publication Activity Database

    Kolman, Vojtěch

    2014-01-01

    Vol. 44, No. 1 (2014), pp. 83-100. ISSN 0046-8541. R&D Projects: GA ČR(CZ) GA13-20785S. Institutional support: RVO:67985955. Keywords: emotions; philosophy of music; idealism; Hegel; Wittgenstein; expectations; Brandom. Subject RIV: AA - Philosophy; Religion

  6. The use of music in facilitating emotional expression in the terminally ill.

    Science.gov (United States)

    Clements-Cortes, Amy

    2004-01-01

    The expression and discussion of feelings of loss and grief can be very difficult for terminally ill patients. Expressing their emotions can help these patients experience a more relaxed and comfortable state. This paper discusses the role of music therapy in palliative care and the function music plays in accessing emotion. It also describes techniques used in assisting clients to express their thoughts and feelings. Case examples of three in-patient palliative care clients at Baycrest Centre for Geriatric Care are presented. The goals set for these patients were to decrease depressive symptoms and social isolation, increase communication and self-expression, stimulate reminiscence and life review, and enhance relaxation. The clients were all successful in reaching their individual goals.

  7. Example-Based Automatic Music-Driven Conventional Dance Motion Synthesis

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Songhua [ORNL]; Fan, Rukun [University of North Carolina, Chapel Hill]; Geng, Weidong [Zhejiang University]

    2011-04-21

    We introduce a novel method for synthesizing dance motions that follow the emotions and contents of a piece of music. Our method employs a learning-based approach to model the music to motion mapping relationship embodied in example dance motions along with those motions' accompanying background music. A key step in our method is to train a music to motion matching quality rating function through learning the music to motion mapping relationship exhibited in synchronized music and dance motion data, which were captured from professional human dance performance. To generate an optimal sequence of dance motion segments to match with a piece of music, we introduce a constraint-based dynamic programming procedure. This procedure considers both music to motion matching quality and visual smoothness of a resultant dance motion sequence. We also introduce a two-way evaluation strategy, coupled with a GPU-based implementation, through which we can execute the dynamic programming process in parallel, resulting in significant speedup. To evaluate the effectiveness of our method, we quantitatively compare the dance motions synthesized by our method with motion synthesis results by several peer methods using the motions captured from professional human dancers' performance as the gold standard. We also conducted several medium-scale user studies to explore how perceptually our dance motion synthesis method can outperform existing methods in synthesizing dance motions to match with a piece of music. These user studies produced very positive results on our music-driven dance motion synthesis experiments for several Asian dance genres, confirming the advantages of our method.

  8. Example-based automatic music-driven conventional dance motion synthesis.

    Science.gov (United States)

    Fan, Rukun; Xu, Songhua; Geng, Weidong

    2012-03-01

    We introduce a novel method for synthesizing dance motions that follow the emotions and contents of a piece of music. Our method employs a learning-based approach to model the music to motion mapping relationship embodied in example dance motions along with those motions' accompanying background music. A key step in our method is to train a music to motion matching quality rating function through learning the music to motion mapping relationship exhibited in synchronized music and dance motion data, which were captured from professional human dance performance. To generate an optimal sequence of dance motion segments to match with a piece of music, we introduce a constraint-based dynamic programming procedure. This procedure considers both music to motion matching quality and visual smoothness of a resultant dance motion sequence. We also introduce a two-way evaluation strategy, coupled with a GPU-based implementation, through which we can execute the dynamic programming process in parallel, resulting in significant speedup. To evaluate the effectiveness of our method, we quantitatively compare the dance motions synthesized by our method with motion synthesis results by several peer methods using the motions captured from professional human dancers' performance as the gold standard. We also conducted several medium-scale user studies to explore how perceptually our dance motion synthesis method can outperform existing methods in synthesizing dance motions to match with a piece of music. These user studies produced very positive results on our music-driven dance motion synthesis experiments for several Asian dance genres, confirming the advantages of our method.
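
    The constraint-based dynamic programming step described in the two records above selects one motion segment per music unit by trading off music-to-motion matching quality against visual smoothness between consecutive segments. The toy Viterbi-style sketch below illustrates that kind of trade-off; the score matrices are random stand-ins for the learned rating function and smoothness measure.

      # Toy dynamic programming: pick one motion segment per music unit, maximising
      # matching quality minus a transition (smoothness) cost. Scores are random placeholders.
      import numpy as np

      rng = np.random.default_rng(1)
      n_units, n_segments = 8, 5
      match = rng.random((n_units, n_segments))           # music-to-motion matching quality
      smooth_cost = rng.random((n_segments, n_segments))  # visual discontinuity between segments

      best = match[0].copy()                              # best score ending in each segment
      back = np.zeros((n_units, n_segments), dtype=int)
      for t in range(1, n_units):
          arrival = best[:, None] - smooth_cost + match[t][None, :]  # [previous, current]
          back[t] = arrival.argmax(axis=0)
          best = arrival.max(axis=0)

      path = [int(best.argmax())]                         # backtrack the optimal sequence
      for t in range(n_units - 1, 0, -1):
          path.append(int(back[t, path[-1]]))
      path.reverse()
      print(path)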

  9. Involuntary and voluntary recall of musical memories: A comparison of temporal accuracy and emotional responses.

    Science.gov (United States)

    Jakubowski, Kelly; Bashir, Zaariyah; Farrugia, Nicolas; Stewart, Lauren

    2018-01-29

    Comparisons between involuntarily and voluntarily retrieved autobiographical memories have revealed similarities in encoding and maintenance, with differences in terms of specificity and emotional responses. Our study extended this research area into the domain of musical memory, which afforded a unique opportunity to compare the same memory as accessed both involuntarily and voluntarily. Specifically, we compared instances of involuntary musical imagery (INMI, or "earworms")-the spontaneous mental recall and repetition of a tune-to deliberate recall of the same tune as voluntary musical imagery (VMI) in terms of recall accuracy and emotional responses. Twenty participants completed two 3-day tasks. In an INMI task, participants recorded information about INMI episodes as they occurred; in a VMI task, participants were prompted via text message to deliberately imagine each tune they had previously experienced as INMI. In both tasks, tempi of the imagined tunes were recorded by tapping to the musical beat while wearing an accelerometer and additional information (e.g., tune name, emotion ratings) was logged in a diary. Overall, INMI and VMI tempo measurements for the same tune were strongly correlated. Tempo recall for tunes that have definitive, recorded versions was relatively accurate, and tunes that were retrieved deliberately (VMI) were not recalled more accurately in terms of tempo than spontaneous and involuntary instances of imagined music (INMI). Some evidence that INMI elicited stronger emotional responses than VMI was also revealed. These results demonstrate several parallels to previous literature on involuntary memories and add new insights on the phenomenology of INMI.

  10. Emotion, Embodied Mind and the Therapeutic Aspects of Musical Experience in Everyday Life

    Directory of Open Access Journals (Sweden)

    Dylan van der Schyff

    2013-07-01

    Full Text Available The capacity for music to function as a force for bio-cognitive organisation is considered in clinical and everyday contexts. Given the deeply embodied nature of such therapeutic responses to music, it is argued that cognitivist approaches may be insufficient to fully explain music’s affective power. Following this, an embodied approach is considered, where the emotional-affective response to music is discussed in terms of primary bodily systems and the innate cross-modal perceptive capacities of the embodied human mind. It is suggested that such an approach may extend the largely cognitivist view taken by much of contemporary music psychology and philosophy of music by pointing the way towards a conception of musical meaning that begins with our most primordial interactions with the world.

  11. Emotional recognition from dynamic facial, vocal and musical expressions following traumatic brain injury.

    Science.gov (United States)

    Drapeau, Joanie; Gosselin, Nathalie; Peretz, Isabelle; McKerral, Michelle

    2017-01-01

    To assess emotion recognition from dynamic facial, vocal and musical expressions in sub-groups of adults with traumatic brain injuries (TBI) of different severities and identify possible common underlying mechanisms across domains. Forty-one adults participated in this study: 10 with moderate-severe TBI, nine with complicated mild TBI, 11 with uncomplicated mild TBI and 11 healthy controls, who were administered experimental (emotional recognition, valence-arousal) and control tasks (emotional and structural discrimination) for each domain. Recognition of fearful faces was significantly impaired in moderate-severe and in complicated mild TBI sub-groups, as compared to those with uncomplicated mild TBI and controls. Effect sizes were medium-large. Participants with lower GCS scores performed more poorly when recognizing fearful dynamic facial expressions. Emotion recognition from auditory domains was preserved following TBI, irrespective of severity. All groups performed equally on control tasks, indicating no perceptual disorders. Although emotional recognition from vocal and musical expressions was preserved, no correlation was found across auditory domains. This preliminary study may contribute to improving comprehension of emotional recognition following TBI. Future studies of larger samples could usefully include measures of functional impacts of recognition deficits for fearful facial expressions. These could help refine interventions for emotional recognition following a brain injury.

  12. Affective Music Information Retrieval

    OpenAIRE

    Wang, Ju-Chiang; Yang, Yi-Hsuan; Wang, Hsin-Min

    2015-01-01

    Much of the appeal of music lies in its power to convey emotions/moods and to evoke them in listeners. In consequence, the past decade witnessed a growing interest in modeling emotions from musical signals in the music information retrieval (MIR) community. In this article, we present a novel generative approach to music emotion modeling, with a specific focus on the valence-arousal (VA) dimension model of emotion. The presented generative model, called acoustic emotion Gaussians (AEG)...
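
    The (truncated) abstract above describes a generative, Gaussian-based model over the valence-arousal plane. The sketch below is not the authors' acoustic emotion Gaussians model; it only illustrates the general idea of fitting a Gaussian mixture to VA annotations and scoring new points by likelihood, using simulated data.

      # Illustration only: fit a Gaussian mixture to simulated valence-arousal (VA) annotations
      # and score new VA points by log-likelihood. This is not the AEG model from the paper.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(2)
      va = np.vstack([
          rng.normal(loc=[0.6, 0.5], scale=0.15, size=(100, 2)),    # "happy"-like cluster
          rng.normal(loc=[-0.5, -0.4], scale=0.15, size=(100, 2)),  # "sad"-like cluster
      ])

      gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(va)
      test_points = np.array([[0.55, 0.45], [-0.45, -0.5], [0.0, 0.0]])
      print(gmm.score_samples(test_points))  # per-point log-likelihood under the mixture
      print(gmm.predict(test_points))        # most likely component for each point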

  13. The influence of caregiver singing and background music on vocally expressed emotions and moods in dementia care: a qualitative analysis.

    Science.gov (United States)

    Götell, Eva; Brown, Steven; Ekman, Sirkka-Liisa

    2009-04-01

    Music and singing are considered to have a strong impact on human emotions. Such an effect has been demonstrated in caregiving contexts with dementia patients. The aim of the study was to illuminate vocally expressed emotions and moods in the communication between caregivers and persons with severe dementia during morning care sessions. Three types of caring sessions were compared: the "usual" way, with no music; with background music playing; and with the caregiver singing to and/or with the patient. Nine persons with severe dementia living in a nursing home in Sweden and five professional caregivers participated in this study. Qualitative content analysis was used to examine videotaped recordings of morning care sessions, with a focus on vocally expressed emotions and moods during verbal communication. Compared to no music, the presence of background music and caregiver singing improved the mutuality of the communication between caregiver and patient, creating a joint sense of vitality. Positive emotions were enhanced, and aggressiveness was diminished. Whereas background music increased the sense of playfulness, caregiver singing enhanced the sense of sincerity and intimacy in the interaction. Caregiver singing and background music can help the caregiver improve the patient's ability to express positive emotions and moods, and to elicit a sense of vitality on the part of the person with severe dementia. The results further support the value of caregiver singing as a method to improve the quality of dementia care.

  14. The effect of musical experience on emotional self-reports and psychophysiological responses to dissonance.

    Science.gov (United States)

    Dellacherie, Delphine; Roy, Mathieu; Hugueville, Laurent; Peretz, Isabelle; Samson, Séverine

    2011-03-01

    To study the influence of musical education on emotional reactions to dissonance, we examined self-reports and physiological responses to dissonant and consonant musical excerpts in listeners with low (LE: n=15) and high (HE: n=13) musical experience. The results show that dissonance induces more unpleasant feelings and stronger physiological responses in HE than in LE participants, suggesting that musical education reinforces aversion to dissonance. Skin conductance (SCR) and electromyographic (EMG) signals were analyzed according to a defense cascade model, which takes into account two successive time windows corresponding to orienting and defense responses. These analyses suggest that musical experience can influence the defense response to dissonance and demonstrate a powerful role of musical experience not only in autonomic but also in expressive responses to music. Copyright © 2010 Society for Psychophysiological Research.

  15. Slow motion in films and video clips: Music influences perceived duration and emotion, autonomic physiological activation and pupillary responses.

    Science.gov (United States)

    Wöllner, Clemens; Hammerschmidt, David; Albrecht, Henning

    2018-01-01

    Slow motion scenes are ubiquitous in screen-based audiovisual media and are typically accompanied by emotional music. The strong effects of slow motion on observers are hypothetically related to heightened emotional states in which time seems to pass more slowly. These states are simulated in films and video clips, and seem to resemble such experiences in daily life. The current study investigated time perception and emotional response to media clips containing decelerated human motion, with or without music using psychometric and psychophysiological testing methods. Participants were presented with slow-motion scenes taken from commercial films, ballet and sports footage, as well as the same scenes converted to real-time. Results reveal that slow-motion scenes, compared to adapted real-time scenes, led to systematic underestimations of duration, lower perceived arousal but higher valence, lower respiration rates and smaller pupillary diameters. The presence of music compared to visual-only presentations strongly affected results in terms of higher accuracy in duration estimates, higher perceived arousal and valence, higher physiological activation and larger pupillary diameters, indicating higher arousal. Video genre affected responses in addition. These findings suggest that perceiving slow motion is not related to states of high arousal, but rather affects cognitive dimensions of perceived time and valence. Music influences these experiences profoundly, thus strengthening the impact of stretched time in audiovisual media.

  16. Emoções de uma escuta musical afetam a percepção subjetiva de tempo / Emotions from listening to music affect the subjective perception of time

    Directory of Open Access Journals (Sweden)

    Danilo Ramos

    2012-01-01

    Full Text Available This study examined whether perceived emotions during music listening tasks influence time perception. Musicians and non-musicians were submitted to tasks of listening to 20-second musical excerpts from the Western classical repertoire and tasks of temporally associating each excerpt with standard durations ranging from 16 to 24 seconds. Musical excerpts were representative of one of the following emotional categories: Happiness, Sadness, Threat and Peacefulness. An analysis of variance showed that, while non-musicians showed temporal underestimations associated with at least one piece of music from each emotional category, musicians underestimated all sad musical excerpts, which are related to low arousal and negative affective valence.

  17. Musical rhythm and affect. Comment on "The quartet theory of human emotions: An integrative and neurofunctional model" by S. Koelsch et al.

    Science.gov (United States)

    Witek, Maria A. G.; Kringelbach, Morten L.; Vuust, Peter

    2015-06-01

    The Quartet Theory of Human Emotion (QT) proposed by Koelsch et al. [1] adds to existing affective models, e.g. by directing more attention to emotional contagion, attachment-related and non-goal-directed emotions. Such an approach seems particularly appropriate to modelling musical emotions, and music is indeed a recurring example in the text, used to illustrate the distinct characteristics of the affect systems that are at the centre of the theory. Yet, it would seem important for any theory of emotion to account for basic functions such as prediction and anticipation, which are only briefly mentioned. Here we propose that QT, specifically its focus on emotional contagion, attachment-related and non-goal directed emotions, might help generate new ideas about a largely neglected source of emotion - rhythm - a musical property that relies fundamentally on the mechanism of prediction.

  18. Jenefer Robinson. Deeper than Reason, Emotion and its Role in Literature, Music and Art

    OpenAIRE

    PHELAN, Richard

    2011-01-01

    This book sets out to examine the role of emotion in both the construction and reception of art. It begins with a survey of recent theories of emotion and then applies them to the workings of emotion in the fields of literature, music and, to a lesser extent, painting. Part One thus addresses the question of what emotions are and how they operate. It considers in particular the theory of emotions as judgements, as argued by philosophers Robert Gordon, Gabriele Taylor, Robert Solomon, and William L...

  19. Expressive-Emotional Sides of the Development of the Preschool Child's Speech by Means of Ontopsychological Music Therapy

    OpenAIRE

    Volzhentseva Iryna

    2017-01-01

    ABSTRACT This article considers the development of the expressive and emotional sides of preschool children's speech by means of ontomusic therapy. The theoretical analysis draws on psychophysiological theories that methodologically substantiate the development of the emotional and expressive sides of children's speech through active music therapy and the interaction of speech and music as related, mutually influencing sign-semiotic kinds of activ...

  20. Music and Emotion. Review of "Music and Emotion", monographic issue of the journal «Music Analysis», edited by Michael Spitzer, n. 29/1-2-3 (2010)

    Directory of Open Access Journals (Sweden)

    Mario Baroni

    2014-03-01

    Full Text Available The last fifteen years have seen the development of a lively interest, on an international scale, in the topic of emotion in music, documented by numerous noteworthy publications: the two volumes edited by Juslin and Sloboda [2001; 2010], the monographic issues of the journal «Musicae Scientiae» published in 2001 and 2011, and many other texts published in various parts of the world. The topic is not without repercussions also for those involved in musical analysis; and it is no mere chance that the English journal «Music Analysis» chose to dedicate the volume we are now reviewing to the subject. The contributions contained in the volume derive from an international meeting held in Durham in September 2009 and offer a fairly lively cross-section of a wide series of questions. Our only regret is that, with the exception of three Nordic names (Juslin, Lindström and Eerola), whose place is now solidly established in English-language publications, all the authors come from the now almost self-referential context of British or American research. But of course this is nothing new.

  1. Music Therapy: A Career in Music Therapy

    Science.gov (United States)

    About Music Therapy & Music Therapy Training: Music therapy is a healthcare profession that uses music to help individuals of all ages improve physical, cognitive, emotional, and social functioning. Music therapists work with children and adults with developmental ...

  2. The effect of background music on episodic memory and autonomic responses: listening to emotionally touching music enhances facial memory capacity

    OpenAIRE

    Mado Proverbio, C.A. Alice; Lozano Nasi, Valentina; Alessandra Arcari, Laura; De Benedetto, Francesco; Guardamagna, Matteo; Gazzola, Martina; Zani, Alberto

    2015-01-01

    The aim of this study was to investigate how background auditory processing can affect other perceptual and cognitive processes as a function of stimulus content, style and emotional nature. Previous studies have offered contrasting evidence, and it has been recently shown that listening to music negatively affected concurrent mental processing in the elderly but not in young adults. To further investigate this matter, the effect of listening to music vs. listening to the sound of rain or sil...

  3. Social and cognitive functions of music based on the example of Tuvan throat singing

    Directory of Open Access Journals (Sweden)

    Shen-Mou Hsu

    2017-06-01

    Full Text Available Music is pervasive across human cultures and throughout history. In particular, music plays an important role in social functions. Like many cultures' use of music, throat singing or khöömei, the most distinctive aspect of Tuva's music, contributes significantly to social communication, emotional expression, social bonding and religious rituals. Acknowledging and considering current social-cognitive findings on music may thus provide better insight into the nature of throat singing. To date, evidence has indicated that, similar to language, music is a fundamental channel of communication, and these two constructs may have common origins in a single communicative system. Moreover, music may modulate neural activity in the brain structures associated with emotions and alter our autonomic responses. In addition to information sharing, music thus has the capacity to convey emotions. This ability may further render music a powerful mechanism to facilitate social bonding and ritual practice, as individuals' internal states during these social events become synchronized through musical engagement. In conclusion, I suggest that these social-cognitive perspectives may point toward new directions for a continuing discourse on our understanding of throat singing.

  4. The Role of Trait and State Absorption in the Enjoyment of Music

    Science.gov (United States)

    2016-01-01

    Little is known about the role of state versus trait characteristics on our enjoyment of music. The aim of this study was to investigate the influence of state and trait absorption upon preference for music, particularly preference for music that evokes negative emotions. The sample consisted of 128 participants who were asked to listen to two pieces of self-selected music and rate the music on variables including preference and felt and expressed emotions. Participants completed a brief measure of state absorption after listening to each piece, and a trait absorption inventory. State absorption was strongly positively correlated with music preference, whereas trait absorption was not. Trait absorption was related to preference for negative emotions in music, with chi-square analyses demonstrating greater enjoyment of negative emotions in music among individuals with high trait absorption. This is the first study to show that state and trait absorption have separable and distinct effects on a listener’s music experience, with state characteristics impacting music enjoyment in the moment, and trait characteristics influencing music preference based on its emotional content. PMID:27828970

  5. The Role of Trait and State Absorption in the Enjoyment of Music.

    Science.gov (United States)

    Hall, Sarah E; Schubert, Emery; Wilson, Sarah J

    2016-01-01

    Little is known about the role of state versus trait characteristics on our enjoyment of music. The aim of this study was to investigate the influence of state and trait absorption upon preference for music, particularly preference for music that evokes negative emotions. The sample consisted of 128 participants who were asked to listen to two pieces of self-selected music and rate the music on variables including preference and felt and expressed emotions. Participants completed a brief measure of state absorption after listening to each piece, and a trait absorption inventory. State absorption was strongly positively correlated with music preference, whereas trait absorption was not. Trait absorption was related to preference for negative emotions in music, with chi-square analyses demonstrating greater enjoyment of negative emotions in music among individuals with high trait absorption. This is the first study to show that state and trait absorption have separable and distinct effects on a listener's music experience, with state characteristics impacting music enjoyment in the moment, and trait characteristics influencing music preference based on its emotional content.

  6. Using listener-based perceptual features as intermediate representations in music information retrieval.

    Science.gov (United States)

    Friberg, Anders; Schoonderwaldt, Erwin; Hedblad, Anton; Fabiani, Marco; Elowsson, Anders

    2014-10-01

    The notion of perceptual features is introduced for describing general music properties based on human perception. This is an attempt at rethinking the concept of features, aiming to approach the underlying human perception mechanisms. Instead of using concepts from music theory such as tones, pitches, and chords, a set of nine features describing overall properties of the music was selected. They were chosen from qualitative measures used in psychology studies and motivated from an ecological approach. The perceptual features were rated in two listening experiments using two different data sets. They were modeled both from symbolic and audio data using different sets of computational features. Ratings of emotional expression were predicted using the perceptual features. The results indicate that (1) at least some of the perceptual features are reliable estimates; (2) emotion ratings could be predicted by a small combination of perceptual features with an explained variance from 75% to 93% for the emotional dimensions activity and valence; (3) the perceptual features could only to a limited extent be modeled using existing audio features. Results clearly indicated that a small number of dedicated features were superior to a "brute force" model using a large number of general audio features.
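
    The regression step described above (predicting emotion ratings from a handful of perceptual features) can be illustrated with a minimal sketch. The feature names and data below are hypothetical placeholders, not the study's listener ratings; the paper itself reports explained variance of 75% to 93% for activity and valence on its own data.

```python
# Minimal sketch: predicting an emotion rating (e.g., valence) from a small set of
# listener-rated perceptual features, in the spirit of the study above.
# Feature names and data are hypothetical placeholders, not the study's data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical perceptual feature ratings for 100 excerpts (scale 1-9):
# e.g., speed, rhythmic clarity, articulation, dynamics, modality.
X = rng.uniform(1, 9, size=(100, 5))

# Hypothetical valence ratings, loosely driven by two of the features plus noise.
valence = 0.6 * X[:, 0] + 0.8 * X[:, 4] + rng.normal(0, 1.0, size=100)

model = LinearRegression().fit(X, valence)
print("Explained variance (R^2):", round(r2_score(valence, model.predict(X)), 2))
```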

  7. Evolutionary considerations on complex emotions and music-induced emotions. Comment on "The quartet theory of human emotions: An integrative and neurofunctional model" by S. Koelsch et al.

    Science.gov (United States)

    Gingras, Bruno; Marin, Manuela M.

    2015-06-01

    Recent efforts to uncover the neural underpinnings of emotional experiences have provided a foundation for novel neurophysiological theories of emotions, adding to the existing body of psychophysiological, motivational, and evolutionary theories. Besides explicitly modeling human-specific emotions and considering the interactions between emotions and language, Koelsch et al.'s original contribution to this challenging endeavor is to identify four brain areas as distinct "affect systems" which differ in terms of emotional qualia and evolutionary pathways [1]. Here, we comment on some features of this promising Quartet Theory of Emotions, focusing particularly on evolutionary and biological aspects related to the four affect systems and their relation to prevailing emotion theories, as well as on the role of music-induced emotions.

  8. Mood Dependent Music Generator

    DEFF Research Database (Denmark)

    Scirea, Marco

    2013-01-01

    Music is one of the most expressive media for showing and manipulating emotions, but there have been few studies on how to generate music connected to emotions. Such studies have often been dismissed by musicians, who maintain that a machine cannot create expressive music, as it is the composer's and player's experiences and emotions that get poured into the piece. At the same time, another problem is that music is highly complicated (and subjective), and finding out which elements transmit certain emotions is not an easy task. This demo wants to show how the manipulation of a set of features can actually change the mood the music transmits, hopefully awakening an interest in this area of research.

  9. Is music a memory booster in normal aging? The influence of emotion.

    Science.gov (United States)

    Ratovohery, Stéphie; Baudouin, Alexia; Gachet, Aude; Palisson, Juliette; Narme, Pauline

    2018-05-17

    Age-related differences in episodic memory have been explained by a decrement in strategic encoding implementation. It has been shown in clinical populations that music can be used during the encoding stage as a mnemonic strategy to learn verbal information. The effectiveness of this strategy remains equivocal in older adults (OA). Furthermore, the impact of the emotional valence of the music used has never been investigated in this context. Thirty OA and 24 young adults (YA) learned texts that were either set to music that was positively or negatively valenced, or spoken only. Immediate and delayed recall were measured. Results showed that: (i) OA perform worse than YA in immediate and delayed recall; (ii) sung lyrics are better remembered than spoken ones in OA, but only when the associated music is positively valenced; (iii) this pattern is observed regardless of the retention delay. These findings support the benefit of musical encoding for verbal learning in healthy OA and are consistent with the positivity effect classically reported in normal aging. Beyond their potential applications in daily life, the results are discussed with respect to the theoretical hypotheses of the mechanisms underlying the advantage of musical encoding.

  10. Music emotion classification and evaluation model based on BP neural network

    Institute of Scientific and Technical Information of China (English)

    赵伟

    2015-01-01

    The melody track of a MIDI file carries much of the information that characterizes a piece of music and is the basis for recognizing its emotional character. For multi-track MIDI files, a method for identifying the main melody is proposed; five feature vectors characterizing the melody (pitch, duration, timbre, tempo, and dynamics) are extracted, and an emotion classification model based on a BP neural network is constructed. The model is trained and validated on 200 songs with different emotional characteristics, and the experimental results show good performance.
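
    A backpropagation (BP) network of the kind described can be sketched as a small multilayer perceptron over the five melody features. This is a hedged illustration with synthetic data and hypothetical emotion labels, not the authors' implementation or their 200-song corpus.

```python
# Sketch of a BP (backpropagation) neural network classifier over five melody
# features (pitch, duration, timbre, tempo, dynamics). Synthetic data and labels
# stand in for the 200-song training set described in the abstract.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

X = rng.uniform(0, 1, size=(200, 5))   # 200 songs x 5 normalized melody features
# Hypothetical labels loosely tied to tempo and dynamics (columns 3 and 4),
# standing in for human-annotated emotion classes (e.g., 0=calm ... 3=excited).
y = np.digitize(X[:, 3] + X[:, 4], [0.5, 1.0, 1.5])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

bp_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
bp_net.fit(X_train, y_train)            # weights are learned by backpropagation
print("Held-out accuracy:", round(bp_net.score(X_test, y_test), 2))
```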

  11. Study Protocol RapMusicTherapy for emotion regulation in a school setting

    NARCIS (Netherlands)

    Uhlig, S.; Jansen, E.; Scherder, E.J.A.

    2015-01-01

    The growing risk of the development of problem behaviors in adolescents (ages 10-15) requires effective methods for prevention, supporting self-regulative capacities. Music listening as an effective self-regulative tool for emotions and behavioral adaptation for adolescents and youth is widely

  12. Musician effect on perception of spectro-temporally degraded speech, vocal emotion, and music in young adolescents.

    NARCIS (Netherlands)

    Başkent, Deniz; Fuller, Christina; Galvin, John; Schepel, Like; Gaudrain, Etienne; Free, Rolien

    2018-01-01

    In adult normal-hearing musicians, perception of music, vocal emotion, and speech in noise has been previously shown to be better than non-musicians, sometimes even with spectro-temporally degraded stimuli. In this study, melodic contour identification, vocal emotion identification, and speech

  13. Music, emotion, and autobiographical memory: they're playing your song.

    Science.gov (United States)

    Schulkind, M D; Hennis, L K; Rubin, D C

    1999-11-01

    Very long-term memory for popular music was investigated. Older and younger adults listened to 20-sec excerpts of popular songs drawn from across the 20th century. The subjects gave emotionality and preference ratings and tried to name the title, artist, and year of popularity for each excerpt. They also performed a cued memory test for the lyrics. The older adults' emotionality ratings were highest for songs from their youth; they remembered more about these songs, as well. However, the stimuli failed to cue many autobiographical memories of specific events. Further analyses revealed that the older adults were less likely than the younger adults to retrieve multiple attributes of a song together (i.e., title and artist) and that there was a significant positive correlation between emotion and memory, especially for the older adults. These results have implications for research on long-term memory, as well as on the relationship between emotion and memory.

  14. [Non pharmacological treatment for Alzheimer's disease: comparison between musical and non-musical interventions].

    Science.gov (United States)

    Narme, Pauline; Tonini, Audrey; Khatir, Fatiha; Schiaratura, Loris; Clément, Sylvain; Samson, Séverine

    2012-06-01

    On account of the limited effectiveness of pharmacological treatments in Alzheimer's disease (AD), there is a growing interest in nonpharmacological treatments, including musical intervention. Despite the large number of studies showing the multiple benefits of music on behavioral, emotional and cognitive disorders of patients with AD, only a few of them used a rigorous method. Finally, the specificity of musical interventions as compared to non-musical but equally pleasant interventions has rarely been addressed. To investigate this issue, two randomized controlled trials were conducted contrasting the effects of musical interventions with painting (Study 1) or cooking (Study 2) interventions on the emotional state of 33 patients with AD. The patients' emotional state was assessed by analyzing professional caregivers' judgments of the patient's mood, then facial expressions and the valence of discourse from short filmed interviews. In the first study (n=22), each intervention lasted 3 weeks (two sessions per week) and the patients' emotional state was assessed before, during and after the intervention periods. After the interventions, the results showed that facial expression, discourse content and mood assessment improved (more positive than negative expressions) as compared to the pre-intervention assessment. However, musical intervention was more effective and had longer-lasting effects than painting. In the second study (n=11), we further examined the long-lasting effects of music as compared to cooking by adding evaluations of the patients' emotional state 2 and 4 weeks after the last intervention. Again, music was more effective at improving the emotional state. Music had positive effects that remained significant up to 4 weeks after the intervention, while cooking only produced a short-term effect on mood. In both studies, benefits were significant in more than 80% of patients. Taken together, these findings show that music intervention has specific effects on patients' emotional well being, offering promising

  15. The joy of heartfelt music: An examination of emotional and physiological responses.

    Science.gov (United States)

    Lynar, Emily; Cvejic, Erin; Schubert, Emery; Vollmer-Conna, Ute

    2017-10-01

    Music-listening can be a powerful therapeutic tool for mood rehabilitation, yet quality evidence for its validity as a singular treatment is scarce. Specifically, the relationship between music-induced mood improvement and meaningful physiological change, as well as the influence of music- and person-related covariates on these outcomes are yet to be comprehensively explored. Ninety-four healthy participants completed questionnaires probing demographics, personal information, and musical background. Participants listened to two prescribed musical pieces (one classical, one jazz), an "uplifting" piece of their own choice, and an acoustic control stimulus (white noise) in randomised order. Physiological responses (heart rate, respiration, galvanic skin response) were recorded throughout. After each piece, participants rated their subjective responses on a series of Likert scales. Subjectively, the self-selected pieces induced the most joy, and the classical piece was perceived as most relaxing, consistent with the arousal ratings proposed by a music selection panel. These two stimuli led to the greatest overall improvement in composite emotional state from baseline. Psycho-physiologically, self-selected pieces often elicited a "eustress" response ("positive arousal"), whereas classical music was associated with the highest heart rate variability. Very few person-related covariates appeared to affect responses, and music-related covariates (besides self-selection) appeared arbitrary. These data provide strong evidence that optimal music for therapy varies between individuals. Our findings additionally suggest that the self-selected music was most effective for inducing a joyous state; while low arousal classical music was most likely to shift the participant into a state of relaxation. Therapy should attempt to find the most effective and "heartfelt" music for each listener, according to therapeutic goals. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Central auditory processing. Are the emotional perceptions of those listening to classical music inherent in the composition or acquired by the listeners?

    Science.gov (United States)

    Goycoolea, Marcos; Levy, Raquel; Ramírez, Carlos

    2013-04-01

    There is seemingly some inherent component in selected musical compositions that elicits specific emotional perceptions, feelings, and physical conduct. The purpose of the study was to determine if the emotional perceptions of those listening to classical music are inherent in the composition or acquired by the listeners. Fifteen kindergarten students, aged 5 years, from three different sociocultural groups, were evaluated. They were exposed to portions of five purposefully selected classical compositions and asked to describe their emotions when listening to these musical pieces. All were instrumental compositions without human voices or spoken language. In addition, they were played to an audience of an age at which they were capable of describing their perceptions and supposedly had no significant previous experience of classical music. Regardless of their sociocultural background, the children in the three groups consistently identified similar emotions (e.g. fear, happiness, sadness), feelings (e.g. love), and mental images (e.g. giants or dangerous animals walking) when listening to specific compositions. In addition, the musical compositions generated physical conducts that were reflected by the children's corporal expressions. Although the sensations were similar, the way of expressing them differed according to their background.

  17. A Fragment on the Emotion, “Mathesis” and Time Dimension of the Purely Musical. Marginalia with Prelude to the Afternoon of a Faun by Claude Debussy

    Directory of Open Access Journals (Sweden)

    Tijana Popović Mladjenović

    2007-12-01

    Full Text Available In the dialogue What Is Music? between Carl Dahlhaus and Hans Heinrich Eggebrecht, music is defined as a "mathematized emotion" or an "emotionalized 'mathesis'". As emphasized by Marija Bergamo, this is the way of underlining its equal and unavoidable constitution, based on emotion and rational organization in the time dimension. So, Marija Bergamo is continuously searching for those music determinants in a musical work as an "autonomous aesthetic fact", whose base and real essence lie "within the nature and essence of music itself". In other words, the starting point of the author's concern with (art) music is her reflection on that which is "purely musical", that is, on "the very nature of the musical". The attempts to determine what the purely musical is and to understand the nature of the sense and inevitability of man's musical dimension have been made since the beginnings of music and musical thinking. In that context, more recent knowledge and thinking about the phenomenon of music, which are derived from various disciplines, correspond closely to Marija Bergamo's views. In a narrower sense, the notion of purely musical is closely related to aesthetic autonomy, that is, autonomous music or musical autonomy. From such a viewpoint – and in conformity with Marija Bergamo's view – I would say that the purely musical in an art music work exists independently of non/autonomy (that is, independently of any function except an aesthetic one), as well as independently of the origin of its content (musical or extra-musical), and that it always, whenever "one thinks in the sense of music and is seized by it" (in terms of emotion, mathesis and time), creates, brings and possesses its specific (non-conceptual) perceptive musical-semantic stratum. This is shown, at least partly, on a characteristic and (in many respects) paradigmatic example – the music of Prelude to the Afternoon of a Faun by Claude Debussy.

  18. Involuntary and voluntary recall of musical memories: a comparison of temporal accuracy and emotional responses.

    OpenAIRE

    Jakubowski, Kelly; Bashir, Zaariyah; Farrugia, Nicolas; Stewart, Lauren

    2018-01-01

    Comparisons between involuntarily and voluntarily retrieved autobiographical memories have revealed similarities in encoding and maintenance, with differences in terms of specificity and emotional responses. Our study extended this research area into the domain of musical memory, which afforded a unique opportunity to compare the same memory as accessed both involuntarily and voluntarily. Specifically, we compared instances of involuntary musical imagery (INMI, or “earworms”)—the spontaneous ...

  19. The effects of music listening after a stressful task on immune functions, neuroendocrine responses, and emotional states in college students.

    Science.gov (United States)

    Hirokawa, Eri; Ohira, Hideki

    2003-01-01

    The purpose of this study was to examine the effects of listening to high-uplifting or low-uplifting music after a stressful task on (a) immune functions, (b) neuroendocrine responses, and (c) emotional states in college students. Musical selections that were evaluated as high-uplifting or low-uplifting by Japanese college students were used as musical stimuli. Eighteen Japanese subjects performed stressful tasks before they experienced each of these experimental conditions: (a) high-uplifting music, (b) low-uplifting music, and (c) silence. Subjects' emotional states, Secretory IgA (S-IgA) levels, active natural killer (NK) cell levels, the numbers of CD4+, CD8+, and CD16+ T lymphocytes, and dopamine, norepinephrine, and epinephrine levels were measured before and after each experimental condition. Results indicated that low-uplifting music showed a trend toward increasing a sense of well-being. High-uplifting music showed trends toward increasing norepinephrine levels and liveliness and decreasing depression. Active NK cells were decreased after 20 min of silence. Results of the study were inconclusive, but high-uplifting and low-uplifting music had different effects on immune, neuroendocrine, and psychological responses. Classification of music is important to research that examines the effects of music on these responses. Recommendations for future research are discussed.

  20. Effects of music therapy and music-based interventions in the treatment of substance use disorders: A systematic review

    Science.gov (United States)

    Hohmann, Louisa; Bradt, Joke; Stegemann, Thomas; Koelsch, Stefan

    2017-01-01

    Music therapy (MT) and music-based interventions (MBIs) are increasingly used for the treatment of substance use disorders (SUD). Previous reviews on the efficacy of MT emphasized the dearth of research evidence for this topic, although various positive effects were identified. Therefore, we conducted a systematic search on published articles examining effects of music, MT and MBIs and found 34 quantitative and six qualitative studies. There was a clear increase in the number of randomized controlled trials (RCTs) during the past few years. We had planned for a meta-analysis, but due to the diversity of the quantitative studies, effect sizes were not computed. Beneficial effects of MT/MBI on emotional and motivational outcomes, participation, locus of control, and perceived helpfulness were reported, but results were inconsistent across studies. Furthermore, many RCTs focused on effects of single sessions. No published longitudinal trials could be found. The analysis of the qualitative studies revealed four themes: emotional expression, group interaction, development of skills, and improvement of quality of life. Considering these issues for quantitative research, there is a need to examine social and health variables in future studies. In conclusion, due to the heterogeneity of the studies, the efficacy of MT/MBI in SUD treatment still remains unclear. PMID:29141012

  1. Music Influences Ratings of the Affect of Visual Stimuli

    Directory of Open Access Journals (Sweden)

    Waldie E Hanser

    2013-09-01

    Full Text Available This review provides an overview of recent studies that have examined how music influences the judgment of emotional stimuli, including affective pictures and film clips. The relevant findings are incorporated within a broader theory of music and emotion, and suggestions for future research are offered. Music is important in our daily lives, and one of its primary uses by listeners is the active regulation of one's mood. Despite this widespread use as a regulator of mood and its general pervasiveness in our society, the number of studies investigating whether, and how, music affects mood and emotional behaviour is limited, however. Experiments investigating the effects of music have generally focused on how the emotional valence of background music impacts how affective pictures and/or film clips are evaluated. These studies have demonstrated strong effects of music on the emotional judgment of such stimuli. Most studies have reported that concurrent background music enhances the emotional valence of the pictures when music and pictures are emotionally congruent. On the other hand, when music and pictures are emotionally incongruent, the ratings of the affect of the pictures will increase or decrease depending on the emotional valence of the background music. These results appear to be consistent in studies investigating the effects of (background) music.

  2. Listening and Musical Engagement: An Exploration of the Effects of Different Listening Strategies on Attention, Emotion, and Peak Affective Experiences

    Science.gov (United States)

    Diaz, Frank M.

    2015-01-01

    Music educators often use guided listening strategies as a means of enhancing engagement during music listening activities. Although previous research suggests that these strategies are indeed helpful in facilitating some form of cognitive and emotional engagement, little is known about how these strategies might function for music of differing…

  3. Cognitive, emotional, and social benefits of regular musical activities in early dementia: randomized controlled study.

    Science.gov (United States)

    Särkämö, Teppo; Tervaniemi, Mari; Laitinen, Sari; Numminen, Ava; Kurki, Merja; Johnson, Julene K; Rantanen, Pekka

    2014-08-01

    During aging, musical activities can help maintain physical and mental health and cognitive abilities, but their rehabilitative use has not been systematically explored in persons with dementia (PWDs). Our aim was to determine the efficacy of a novel music intervention based on coaching the caregivers of PWDs to use either singing or music listening regularly as a part of everyday care. Eighty-nine PWD-caregiver dyads were randomized to a 10-week singing coaching group (n = 30), a 10-week music listening coaching group (n = 29), or a usual care control group (n = 30). The coaching sessions consisted primarily of singing/listening familiar songs coupled occasionally with vocal exercises and rhythmic movements (singing group) and reminiscence and discussions (music listening group). In addition, the intervention included regular musical exercises at home. All PWDs underwent an extensive neuropsychological assessment, which included cognitive tests, as well as mood and quality of life (QOL) scales, before and after the intervention period and 6 months later. In addition, the psychological well-being of family members was repeatedly assessed with questionnaires. Compared with usual care, both singing and music listening improved mood, orientation, and remote episodic memory and to a lesser extent, also attention and executive function and general cognition. Singing also enhanced short-term and working memory and caregiver well-being, whereas music listening had a positive effect on QOL. Regular musical leisure activities can have long-term cognitive, emotional, and social benefits in mild/moderate dementia and could therefore be utilized in dementia care and rehabilitation. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. Music therapy improvisation

    Directory of Open Access Journals (Sweden)

    Mira Kuzma

    2001-09-01

    Full Text Available In this article, a music therapy technique, music therapy improvisation, is introduced. In this form of music therapy, the improvising partners share meaning through the improvisation: the improvisation is not an end in itself; it portrays meaning that is personal, complex, and can be shared with the partner. The therapeutic work, then, is meeting and matching the client's music in order to give the client an experience of "being known", of being responded to through sounds, and of being able to express things and communicate meaningfully. Rather than simply having the client play music, the therapy is about developing engagement through sustained, joint improvisations. In music therapy, music and emotion share fundamental features: one may represent the other, i.e., we hear the music not as music but as dynamic emotional states. The concept of dynamic structure explains why music makes therapeutic sense.

  5. Social functioning and autonomic nervous system sensitivity across vocal and musical emotion in Williams syndrome and autism spectrum disorder.

    Science.gov (United States)

    Järvinen, Anna; Ng, Rowena; Crivelli, Davide; Neumann, Dirk; Arnold, Andrew J; Woo-VonHoogenstyn, Nicholas; Lai, Philip; Trauner, Doris; Bellugi, Ursula

    2016-01-01

    Both Williams syndrome (WS) and autism spectrum disorders (ASD) are associated with unusual auditory phenotypes with respect to processing vocal and musical stimuli, which may be shaped by the atypical social profiles that characterize the syndromes. Autonomic nervous system (ANS) reactivity to vocal and musical emotional stimuli was examined in 12 children with WS, 17 children with ASD, and 20 typically developing (TD) children, and related to their level of social functioning. The results of this small-scale study showed that after controlling for between-group differences in cognitive ability, all groups showed similar emotion identification performance across conditions. Additionally, in ASD, lower autonomic reactivity to human voice, and in TD, to musical emotion, was related to more normal social functioning. Compared to TD, both clinical groups showed increased arousal to vocalizations. A further result highlighted uniquely increased arousal to music in WS, contrasted with a decrease in arousal in ASD and TD. The ASD and WS groups exhibited arousal patterns suggestive of diminished habituation to the auditory stimuli. The results are discussed in the context of the clinical presentation of WS and ASD. © 2015 Wiley Periodicals, Inc.

  6. Background Music and Background Feelings

    DEFF Research Database (Denmark)

    Have, Iben

    2008-01-01

    With a focus on underscore music in film and television, this report discusses the relations between music and emotions. The report presents and discusses an interdisciplinary theoretical framework connecting the experience of musical structures with emotional structures. Subsequently it discusses how music, in its attachment to the audiovisual context, contributes to the generation of different kinds of emotional experiences. The Danish television documentary Ballets droning ("The Queen of the Ball"), portraying the leader of the Danish right-wing party The Danish Peoples' Party...

  7. Music Composition from the Brain Signal: Representing the Mental State by Music

    Directory of Open Access Journals (Sweden)

    Dan Wu

    2010-01-01

    Full Text Available This paper proposes a method to translate human EEG into music, so as to represent mental state by music. The arousal levels of the brain's mental state and of the music's emotion are implicitly used as the bridge between the mind and the music. The arousal level of the brain is based on EEG features extracted mainly by wavelet analysis, and the music arousal level is related to musical parameters such as pitch, tempo, rhythm, and tonality. While composing, some music principles (harmonics and structure) were taken into consideration. With EEGs during various sleep stages as an example, the music generated from them had different patterns of pitch, rhythm, and tonality. Thirty-five volunteers listened to the music pieces, and a significant difference in music arousal levels was found. This implies that different mental states may be identified by the corresponding music, and so music generated from EEG may be a potential tool for EEG monitoring, biofeedback therapy, and so forth.
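
    The pipeline sketched in this abstract (EEG features, then an arousal level, then musical parameters) can be illustrated roughly as below. This is a simplified, hypothetical mapping: it uses a basic spectral band-power ratio in place of the paper's wavelet-based features, and an arbitrary arousal-to-tempo/pitch rule.

```python
# Rough sketch of an EEG-to-music mapping: estimate an arousal index from EEG
# band power, then map it to musical parameters (tempo, pitch register).
# The band-ratio arousal index and the mapping rules are simplified assumptions,
# not the wavelet-based method used in the study.
import numpy as np

def arousal_index(eeg, fs=256.0):
    """Crude arousal estimate: beta power relative to theta+alpha power."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    band = lambda lo, hi: spectrum[(freqs >= lo) & (freqs < hi)].sum()
    return band(13, 30) / (band(4, 8) + band(8, 13) + 1e-9)

def music_parameters(arousal):
    """Map arousal to hypothetical musical parameters."""
    tempo_bpm = 60 + 80 * min(arousal, 1.0)              # calmer EEG -> slower tempo
    base_midi_pitch = 48 + int(24 * min(arousal, 1.0))   # higher arousal -> higher register
    return {"tempo_bpm": round(tempo_bpm), "base_midi_pitch": base_midi_pitch}

# Example with a synthetic 4-second EEG segment (alpha plus some beta activity)
fs = 256.0
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)
print(music_parameters(arousal_index(eeg, fs)))
```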

  8. Music-Evoked Emotions—Current Studies

    Science.gov (United States)

    Schaefer, Hans-Eckhardt

    2017-01-01

    The present study is focused on a review of the current state of investigating music-evoked emotions experimentally, theoretically and with respect to their therapeutic potentials. After a concise historical overview and a schematic of the hearing mechanisms, experimental studies on music listeners and on music performers are discussed, starting with the presentation of characteristic musical stimuli and the basic features of tomographic imaging of emotional activation in the brain, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), which offer high spatial resolution in the millimeter range. The progress in correlating activation imaging in the brain to the psychological understanding of music-evoked emotion is demonstrated and some prospects for future research are outlined. Research in psychoneuroendocrinology and molecular markers is reviewed in the context of music-evoked emotions and the results indicate that the research in this area should be intensified. An assessment of studies involving measuring techniques with high temporal resolution down to the 10 ms range, as, e.g., electroencephalography (EEG), event-related brain potentials (ERP), magnetoencephalography (MEG), skin conductance response (SCR), finger temperature, and goose bump development (piloerection) can yield information on the dynamics and kinetics of emotion. Genetic investigations reviewed suggest the heredity transmission of a predilection for music. Theoretical approaches to musical emotion are directed to a unified model for experimental neurological evidence and aesthetic judgment. Finally, the reports on musical therapy are briefly outlined. The study concludes with an outlook on emerging technologies and future research fields. PMID:29225563

  9. Music playlist recommendation based on user heartbeat and music preference

    NARCIS (Netherlands)

    Liu, H.; Hu, J.; Rauterberg, G.W.M.

    2009-01-01

    In this paper, we present a new heartbeat- and preference-aware music recommendation system. The system not only recommends a music playlist based on the user's music preference, but also generates the playlist based on the user's heartbeat. If the user's heartbeat is higher than the
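
    A minimal sketch of a heartbeat- and preference-aware playlist selector is given below; the tempo-matching rule and the track metadata are illustrative assumptions, not the system described in this (truncated) record.

```python
# Sketch: pick tracks whose tempo roughly matches the listener's heart rate,
# restricted to preferred genres. Track data and the matching rule are
# illustrative assumptions, not the published system.
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    bpm: int
    genre: str

def recommend(tracks, heart_rate_bpm, preferred_genres, tolerance=15):
    """Return preferred-genre tracks whose tempo is within `tolerance` BPM of the heart rate."""
    candidates = [t for t in tracks if t.genre in preferred_genres]
    return sorted(
        (t for t in candidates if abs(t.bpm - heart_rate_bpm) <= tolerance),
        key=lambda t: abs(t.bpm - heart_rate_bpm),
    )

library = [
    Track("Calm Piece", 70, "classical"),
    Track("Upbeat Tune", 128, "pop"),
    Track("Mid-tempo Jazz", 95, "jazz"),
]
print(recommend(library, heart_rate_bpm=72, preferred_genres={"classical", "jazz"}))
```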

  10. Wellbeing in the Classroom: How an Evolutionary Perspective on Human Musicality Can Inform Music Education

    Science.gov (United States)

    Maury, Susan; Rickard, Nikki

    2016-01-01

    Group singing is a common feature of classroom-based music education, and has often been proposed to have benefits that extend beyond acquisition of music skills, primarily in academic achievement. However, potential social and emotional well-being benefits have been under-represented in these discussions. This article proposes that an…

  11. Emotions in Concert : Performers' Experienced Emotions on Stage

    OpenAIRE

    Van Zijl, Anemone G. W.; Sloboda, John A.

    2013-01-01

    Music is often said to be expressive of emotions. Surprisingly, not much is known about the role of performers’ emotions while performing. Do musicians feel the musical emotions when expressing them? Or has expressive playing nothing to do with the emotional experiences of the performer? To investigate performers’ perspectives on the role of emotions in performance, we conducted qualitative in-depth interviews with nineteen musicians teaching or studying at a European conservatoire. In the in...

  12. Experience Changes How Emotion in Music Is Judged: Evidence from Children Listening with Bilateral Cochlear Implants, Bimodal Devices, and Normal Hearing.

    Directory of Open Access Journals (Sweden)

    Sara Giannantonio

    Full Text Available Children using unilateral cochlear implants abnormally rely on tempo rather than mode cues to distinguish whether a musical piece is happy or sad. This led us to question how this judgment is affected by the type of experience in early auditory development. We hypothesized that judgments of the emotional content of music would vary by the type and duration of access to sound in early life due to deafness, altered perception of musical cues through new ways of using auditory prostheses bilaterally, and formal music training during childhood. Seventy-five participants completed the Montreal Emotion Identification Test. Thirty-three had normal hearing (aged 6.6 to 40.0 years) and 42 children had hearing loss and used bilateral auditory prostheses (31 bilaterally implanted and 11 unilaterally implanted with contralateral hearing aid use). Reaction time and accuracy were measured. Accurate judgment of emotion in music was achieved across ages and musical experience. Musical training accentuated the reliance on mode cues which developed with age in the normal hearing group. Degrading pitch cues through cochlear implant-mediated hearing induced greater reliance on tempo cues, but mode cues grew in salience when at least partial acoustic information was available through some residual hearing in the contralateral ear. Finally, when pitch cues were experimentally distorted to represent cochlear implant hearing, individuals with normal hearing (including those with musical training) switched to an abnormal dependence on tempo cues. The data indicate that, in a western culture, access to acoustic hearing in early life promotes a preference for mode rather than tempo cues which is enhanced by musical training. The challenge to these preferred strategies during cochlear implant hearing (simulated and real), regardless of musical training, suggests that access to pitch cues for children with hearing loss must be improved by preservation of residual hearing and

  13. Experience Changes How Emotion in Music Is Judged: Evidence from Children Listening with Bilateral Cochlear Implants, Bimodal Devices, and Normal Hearing

    Science.gov (United States)

    Papsin, Blake C.; Paludetti, Gaetano; Gordon, Karen A.

    2015-01-01

    Children using unilateral cochlear implants abnormally rely on tempo rather than mode cues to distinguish whether a musical piece is happy or sad. This led us to question how this judgment is affected by the type of experience in early auditory development. We hypothesized that judgments of the emotional content of music would vary by the type and duration of access to sound in early life due to deafness, altered perception of musical cues through new ways of using auditory prostheses bilaterally, and formal music training during childhood. Seventy-five participants completed the Montreal Emotion Identification Test. Thirty-three had normal hearing (aged 6.6 to 40.0 years) and 42 children had hearing loss and used bilateral auditory prostheses (31 bilaterally implanted and 11 unilaterally implanted with contralateral hearing aid use). Reaction time and accuracy were measured. Accurate judgment of emotion in music was achieved across ages and musical experience. Musical training accentuated the reliance on mode cues which developed with age in the normal hearing group. Degrading pitch cues through cochlear implant-mediated hearing induced greater reliance on tempo cues, but mode cues grew in salience when at least partial acoustic information was available through some residual hearing in the contralateral ear. Finally, when pitch cues were experimentally distorted to represent cochlear implant hearing, individuals with normal hearing (including those with musical training) switched to an abnormal dependence on tempo cues. The data indicate that, in a western culture, access to acoustic hearing in early life promotes a preference for mode rather than tempo cues which is enhanced by musical training. The challenge to these preferred strategies during cochlear implant hearing (simulated and real), regardless of musical training, suggests that access to pitch cues for children with hearing loss must be improved by preservation of residual hearing and improvements in

  14. Study of emotion-based neurocardiology through wearable systems

    Science.gov (United States)

    Ramasamy, Mouli; Varadan, Vijay

    2016-04-01

    Neurocardiology explores the neurophysiological, neurological and neuroanatomical facets of the interaction between the nervous system and the heart. The effects of emotions on the heart and brain can be studied because of the interaction between the central and peripheral nervous systems. This is an investigative attempt to study emotion-based neurocardiology and the factors that influence these phenomena. The factors include the interaction between sleep EEG (electroencephalogram) and ECG (electrocardiogram), the relationship between emotion and music, psychophysiological coherence between the heart and brain, emotion recognition techniques, and biofeedback mechanisms. Emotions contribute vitally to everyday life and are essential to numerous biological and everyday functional modalities of a human being. Emotions are best represented through EEG signals and, to a certain extent, can be observed through ECG and body temperature. The confluence of medical and engineering science has enabled the monitoring and discrimination of emotional states such as happiness, anxiety, distress, and excitement, which influence thinking patterns and the electrical activity of the brain. Similarly, HRV (heart rate variability) is widely investigated for what it reveals about autonomic activity and its relation to EEG in neurocardiology.
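
    Since the abstract emphasizes HRV as a window on emotion-related autonomic activity, a minimal sketch of two standard time-domain HRV measures (SDNN and RMSSD) computed from RR intervals is shown below; the RR series is synthetic and the code is not tied to any particular wearable system.

```python
# Sketch: standard time-domain HRV metrics from a series of RR intervals (ms).
# The RR interval values are synthetic; real data would come from an ECG/PPG wearable.
import numpy as np

def hrv_time_domain(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                    # overall variability of RR intervals
    diffs = np.diff(rr)
    rmssd = np.sqrt(np.mean(diffs ** 2))     # short-term (beat-to-beat) variability
    mean_hr = 60000.0 / rr.mean()            # mean heart rate in beats per minute
    return {"SDNN_ms": sdnn, "RMSSD_ms": rmssd, "mean_HR_bpm": mean_hr}

rr_intervals = [812, 790, 845, 830, 805, 798, 860, 815]  # synthetic RR intervals in ms
print({k: round(v, 1) for k, v in hrv_time_domain(rr_intervals).items()})
```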

  15. Effects of a dyadic music therapy intervention on parent-child interaction, parent stress, and parent-child relationship in families with emotionally neglected children: a randomized controlled trial.

    Science.gov (United States)

    Jacobsen, Stine L; McKinney, Cathy H; Holck, Ulla

    2014-01-01

    Work with families and families at risk within the field of music therapy has been developing for the last decade. To diminish risk for unhealthy child development, families with emotionally neglected children need help to improve their emotional communication and develop healthy parent-child interactions. While some researchers have investigated the effect of music therapy on either the parent or the child, no study has investigated the effect of music therapy on the observed interaction between the parent and child within the field of child protection. The purpose of this study was to investigate the effect of a dyadic music therapy intervention on observed parent-child interaction (mutual attunement, nonverbal communication, emotional parental response), self-reported parenting stress, and self-reported parent-child relationship in families at risk and families with emotionally neglected children, ages 5-12 years. This was a randomized controlled trial study conducted at a family care center in Denmark. Eighteen parent-child dyads were randomly assigned to receive 10 weekly music therapy sessions with a credentialed music therapist (n = 9) or treatment as usual (n = 9). Observational measures of parent-child interaction and self-reported measures of parenting stress and the parent-child relationship were completed at baseline and 4 months after baseline. Results of the study showed that dyads who received the music therapy intervention significantly improved their nonverbal communication and mutual attunement. Similarly, parents who participated in dyadic music therapy reported themselves to be significantly less stressed by the mood of the child and to significantly improve their parent-child relationship in terms of being better at talking to and understanding their children than parents who did not receive music therapy. Both groups significantly improved in terms of increased positive and decreased negative emotional parental response, parenting stress and

  16. Theoretical Framework of A Computational Model of Auditory Memory for Music Emotion Recognition

    NARCIS (Netherlands)

    Caetano, Marcelo; Wiering, Frans

    2014-01-01

    The bag of frames (BOF) approach commonly used in music emotion recognition (MER) has several limitations. The semantic gap is believed to be responsible for the glass ceiling on the performance of BOF MER systems. However, there are hardly any alternative proposals to address it. In this article,
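
    For readers unfamiliar with the bag-of-frames (BOF) approach criticized here, the sketch below shows its typical shape: frame-level audio features (MFCCs in this example) are pooled into clip-level statistics and fed to a generic classifier. The feature choice and data are illustrative assumptions, and librosa is assumed to be available; this is not the authors' model.

```python
# Sketch of the bag-of-frames (BOF) pipeline discussed above: frame-level MFCCs
# are pooled into clip-level statistics that a standard emotion classifier can consume.
# The audio here is a synthetic tone; in practice y would come from a music clip.
import numpy as np
import librosa  # assumed available

sr = 22050
t = np.linspace(0, 2.0, int(2.0 * sr), endpoint=False)
y = (0.5 * np.sin(2 * np.pi * 440 * t)).astype(np.float32)  # stand-in for a real clip

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)           # shape: (13, n_frames)

# Clip-level "bag of frames": frame order is discarded, only summary statistics remain.
bof_vector = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(bof_vector.shape)   # (26,) -> input vector for an emotion classifier
```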

  17. The perception of emotions in excerpts of classical Western music

    Directory of Open Access Journals (Sweden)

    Danilo Ramos

    2012-12-01

    Full Text Available The aim of this study was to evaluate emotional responses to musical excerpts from the Western classical repertoire. Musicians and non-musicians listened to each musical excerpt and linked it to one of the emotional categories Joy, Sadness, Serenity, or Fear/Anger. The results indicated that, for both groups, most musical excerpts were not associated with more than one emotional category. In general, the associations were similar between groups, although the responses of the musicians were more consistent. These results suggest that the cognitive processing of emotional responses to Western music is related to the cognitive structure of the event, to individual differences, and to musical expertise.

  18. Neural responses to nostalgia-evoking music modeled by elements of dynamic musical structure and individual differences in affective traits.

    Science.gov (United States)

    Barrett, Frederick S; Janata, Petr

    2016-10-01

    Nostalgia is an emotion that is most commonly associated with personally and socially relevant memories. It is primarily positive in valence and is readily evoked by music. It is also an idiosyncratic experience that varies between individuals based on affective traits. We identified frontal, limbic, paralimbic, and midbrain brain regions in which the strength of the relationship between ratings of nostalgia evoked by music and blood-oxygen-level-dependent (BOLD) signal was predicted by affective personality measures (nostalgia proneness and the sadness scale of the Affective Neuroscience Personality Scales) that are known to modulate the strength of nostalgic experiences. We also identified brain areas including the inferior frontal gyrus, substantia nigra, cerebellum, and insula in which time-varying BOLD activity correlated more strongly with the time-varying tonal structure of nostalgia-evoking music than with music that evoked no or little nostalgia. These findings illustrate one way in which the reward and emotion regulation networks of the brain are recruited during the experiencing of complex emotional experiences triggered by music. These findings also highlight the importance of considering individual differences when examining the neural responses to strong and idiosyncratic emotional experiences. Finally, these findings provide a further demonstration of the use of time-varying stimulus-specific information in the investigation of music-evoked experiences. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Modulation of the startle reflex by pleasant and unpleasant music.

    Science.gov (United States)

    Roy, Mathieu; Mailhot, Jean-Philippe; Gosselin, Nathalie; Paquette, Sébastien; Peretz, Isabelle

    2009-01-01

    The issue of emotional feelings to music is the object of a classic debate in music psychology. Emotivists argue that emotions are really felt in response to music, whereas cognitivists believe that music is only representative of emotions. Psychophysiological recordings of emotional feelings to music might help to resolve the debate, but past studies have failed to show clear and consistent differences between musical excerpts of different emotional valence. Here, we compared the effects of pleasant and unpleasant musical excerpts on the startle eye blink reflex and associated body markers (such as the corrugator and zygomatic activity, skin conductance level and heart rate). The startle eye blink amplitude was larger and its latency was shorter during unpleasant compared with pleasant music, suggesting that the defensive emotional system was indeed modulated by music. Corrugator activity was also enhanced during unpleasant music, whereas skin conductance level was higher for pleasant excerpts. The startle reflex was the response that contributed the most in distinguishing pleasant and unpleasant music. Taken together, these results provide strong evidence that emotions were felt in response to music, supporting the emotivist stance.

  20. Neurologic music therapy improves executive function and emotional adjustment in traumatic brain injury rehabilitation.

    Science.gov (United States)

    Thaut, Michael H; Gardiner, James C; Holmberg, Dawn; Horwitz, Javan; Kent, Luanne; Andrews, Garrett; Donelan, Beth; McIntosh, Gerald R

    2009-07-01

    This study examined the immediate effects of neurologic music therapy (NMT) on cognitive functioning and emotional adjustment with brain-injured persons. Four treatment sessions were held, during which participants were given a pre-test, participated in 30 min of NMT that focused on one aspect of rehabilitation (attention, memory, executive function, or emotional adjustment), which was followed by post-testing. Control participants engaged in a pre-test, 30 min of rest, and then a post-test. Treatment participants showed improvement in executive function and overall emotional adjustment, and lessening of depression, sensation seeking, and anxiety. Control participants improved in emotional adjustment and lessening of hostility, but showed decreases in measures of memory, positive affect, and sensation seeking.

  1. Daydreams and trait affect: The role of the listener's state of mind in the emotional response to music.

    Science.gov (United States)

    Martarelli, Corinna S; Mayer, Boris; Mast, Fred W

    2016-11-01

    Music creates room for the mind to wander, for mental time travel, and for departures into more fantastical worlds. We examined the mediating role of daydreams and the moderating function of personality differences in the emotional response to music by using a moderated mediation approach. The results showed that the valence of daydreams played a mediating role in the reaction to the musical experience: happy music was related to more positive daydreams, which were associated with greater relaxation while listening to the happy music and with greater liking of it. Furthermore, negative affect (trait) moderated the direct effect of sad vs. happy music on the liking of the music: individuals with high scores on negative affect preferred sad music. The results are discussed with regard to the interplay of general and personality-specific processes as it is relevant to better understanding the effects music can have on listeners. Copyright © 2016 Elsevier Inc. All rights reserved.
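
    The mediation logic referred to in this abstract (music condition, then daydream valence, then liking) can be sketched with two ordinary regressions and the product-of-coefficients indirect effect. The data are simulated and the variable names are placeholders, not the study's measures; the published analysis used a moderated mediation model.

```python
# Sketch of a simple mediation analysis: the indirect effect of condition (happy vs.
# sad music) on liking via daydream valence, estimated as a*b (product of coefficients).
# All data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(2)
n = 200
condition = rng.integers(0, 2, n)                             # 0 = sad music, 1 = happy music
daydream_valence = 0.8 * condition + rng.normal(0, 1, n)      # path a
liking = 0.5 * daydream_valence + 0.1 * condition + rng.normal(0, 1, n)  # paths b and c'

def ols_slope(x, y):
    """Slope of y on x (with intercept) via least squares."""
    X = np.column_stack([np.ones_like(x, dtype=float), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = ols_slope(condition, daydream_valence)        # condition -> mediator
# b: mediator -> outcome, controlling for condition
Xb = np.column_stack([np.ones(n), daydream_valence, condition])
b = np.linalg.lstsq(Xb, liking, rcond=None)[0][1]

print("indirect effect a*b:", round(a * b, 2))
```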

  2. Measuring Supportive Music and Imagery Interventions: The Development of the Music Therapy Self-Rating Scale.

    Science.gov (United States)

    Meadows, Anthony; Burns, Debra S; Perkins, Susan M

    2015-01-01

    Previous research has demonstrated modest benefits from music-based interventions, specifically music and imagery interventions, during cancer care. However, little attention has been paid to measuring the benefits of music-based interventions using measurement instruments specifically designed to account for the multidimensional nature of music-imagery experiences. The purpose of this study was to describe the development of, and psychometrically evaluate, the Music Therapy Self-Rating Scale (MTSRS) as a measure for cancer patients engaged in supportive music and imagery interventions. An exploratory factor analysis was conducted using baseline data from 76 patients who consented to participate in a music-based intervention study during chemotherapy. Factor analysis of 14 items revealed four domains: Awareness of Body, Emotionally Focused, Personal Resources, and Treatment Specific. Internal reliability was excellent (Cronbach alphas ranging from 0.75 to 0.88), and construct and divergent-discriminant validity were supported. The MTSRS is a psychometrically sound, brief instrument that captures essential elements of patient experience during music and imagery interventions. © the American Music Therapy Association 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
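
    The internal-consistency values reported above are Cronbach alphas. As a rough, hedged illustration of how such a coefficient is computed from item-level ratings, the Python sketch below applies the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score) to hypothetical data; it does not use the MTSRS items or dataset.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of ratings."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 10 respondents answering a 4-item subscale on a 1-5 scale.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(10, 1))
ratings = np.clip(base + rng.integers(-1, 2, size=(10, 4)), 1, 5).astype(float)
print(round(cronbach_alpha(ratings), 2))
```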

  3. A neuroscientific perspective on music therapy.

    Science.gov (United States)

    Koelsch, Stefan

    2009-07-01

    In recent years, a number of studies have demonstrated that music listening (and even more so music production) activates a multitude of brain structures involved in cognitive, sensorimotor, and emotional processing. For example, music engages sensory processes, attention, memory-related processes, perception-action mediation ("mirror neuron system" activity), multisensory integration, activity changes in core areas of emotional processing, processing of musical syntax and musical meaning, and social cognition. It is likely that the engagement of these processes by music can have beneficial effects on the psychological and physiological health of individuals, although the mechanisms underlying such effects are currently not well understood. This article gives a brief overview of factors contributing to the effects of music-therapeutic work. Then, neuroscientific studies using music to investigate emotion, perception-action mediation ("mirror function"), and social cognition are reviewed, including illustrations of the relevance of these domains for music therapy.

  4. Music, Mathematics and Bach

    Indian Academy of Sciences (India)

    Long interested in music of various kinds, ... other art form, it is impossible to adequately explain the appeal of Bach's music ... composer, does exhibit a full range of emotions such as joy, ... seem to be cerebral rather than emotional. Moreover ...

  5. Music-induced changes in functional cerebral asymmetries.

    Science.gov (United States)

    Hausmann, Markus; Hodgetts, Sophie; Eerola, Tuomas

    2016-04-01

    After decades of research, it remains unclear whether emotion lateralization occurs because one hemisphere is dominant for processing the emotional content of the stimuli, or whether emotional stimuli activate lateralised networks associated with the subjective emotional experience. By using emotion-induction procedures, we investigated the effect of listening to happy and sad music on three well-established lateralization tasks. In a prestudy, Mozart's piano sonata (K. 448) and Beethoven's Moonlight Sonata were rated as the most happy and sad excerpts, respectively. Participants either listened to one of the emotional excerpts or sat in silence before completing an emotional chimeric faces task (Experiment 1), a visual line bisection task (Experiment 2) and a dichotic listening task (Experiments 3 and 4). Listening to happy music resulted in a reduced right hemispheric bias in facial emotion recognition (Experiment 1) and visuospatial attention (Experiment 2) and an increased left hemispheric bias in language lateralization (Experiments 3 and 4). Although Experiments 1-3 revealed an increased positive emotional state after listening to happy music, mediation analyses revealed that the effect on hemispheric asymmetries was not mediated by music-induced emotional changes. The direct effect of music listening on lateralization was investigated in Experiment 4, in which the tempo of the happy excerpt was manipulated while controlling for other acoustic features. However, the results of Experiment 4 made it rather unlikely that tempo is the critical cue accounting for the effects. We conclude that listening to music can affect functional cerebral asymmetries in well-established emotional and cognitive laterality tasks, independent of music-induced changes in emotional state. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. How Parents' and Teachers' Emotional Skills Foster Academic Performance in School Music Students

    Science.gov (United States)

    Campayo-Muñoz, Emilia; Cabedo-Mas, Alberto

    2016-01-01

    This paper explores the importance and effects of parents' and teachers' attitudes on students' academic performance in music. To this end, the research literature on the effects of parental and teacher behaviour on the behaviour of their children and students is reviewed, focusing on parents' and teachers' emotional skills. The review looks at…

  7. Emotion Chat: A Web Chatroom with Emotion Regulation for E-Learners

    Science.gov (United States)

    Zheng, Deli; Tian, Feng; Liu, Jun; Zheng, Qinghua; Qin, Jiwei

    To compensate for the lack of emotional communication between teachers and students in e-learning systems, we have designed and implemented EmotionChat -- a web chatroom with emotion regulation. EmotionChat perceives e-learners' emotional states based on their interactive text and recommends resources such as music, cartoons, and mottos to an e-learner when it detects negative emotional states. Meanwhile, it recommends emotion regulation cases to the e-learner's listeners and teachers. The result of our initial experiment shows that EmotionChat can recommend valuable emotion regulation policies for e-learners.
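
    The record does not include EmotionChat's implementation details. Purely as an illustrative sketch of the general idea described above -- detecting a negative emotional state from chat text and then recommending regulation resources -- the following Python snippet uses a naive keyword approach; the keyword list, resource names, and functions are all hypothetical and not part of the EmotionChat system.

```python
# Hypothetical sketch only: keyword-based negative-state detection with a
# simple resource recommendation; not the actual EmotionChat implementation.
NEGATIVE_WORDS = {"frustrated", "confused", "bored", "anxious", "sad", "stuck"}

RESOURCES = {
    "music": "relaxing_playlist.m3u",
    "cartoon": "cheerful_cartoon.gif",
    "motto": "Every expert was once a beginner.",
}

def detect_negative_state(message: str) -> bool:
    """Return True if the chat message contains any negative-emotion keyword."""
    tokens = {token.strip(".,!?").lower() for token in message.split()}
    return bool(tokens & NEGATIVE_WORDS)

def recommend(message: str):
    """Recommend regulation resources only when a negative state is detected."""
    return RESOURCES if detect_negative_state(message) else None

print(recommend("I am so confused by this exercise!"))  # -> resource suggestions
print(recommend("This lesson is great"))                # -> None
```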

  8. Induction of depressed and elated mood by music influences the perception of facial emotional expressions in healthy subjects.

    Science.gov (United States)

    Bouhuys, A L; Bloem, G M; Groothuis, T G

    1995-04-04

    The judgements of healthy subjects rating the emotional expressions of a set of schematically drawn faces were validated (study 1) to examine the relationship between mood (depressed/elated) and the judgement of the emotional expressions of these faces (study 2). Study 1: 30 healthy subjects judged 12 faces with respect to the emotions they express (fear, happiness, anger, sadness, disgust, surprise, rejection and invitation). It was found that a particular face could reflect various emotions. All eight emotions were reflected in the set of faces and the emotions were consensually judged. Moreover, gender differences in judgement could be established. Study 2: In a cross-over design, 24 healthy subjects judged the faces after listening to depressing or elating music. The faces were subdivided into six 'ambiguous' faces (i.e., expressing similar amounts of positive and negative emotions) and six 'clear' faces (i.e., faces showing a preponderance of positive or negative emotions). In addition, these two types of faces were distinguished with respect to the intensity of the emotions they express. Eleven subjects who showed substantial differences in experienced depression after listening to the music were selected for further analysis. It was found that, when feeling more depressed, the subjects perceived more rejection/sadness in ambiguous faces (displaying less intensive emotions) and less invitation/happiness in clear faces. In addition, subjects saw more fear in clear faces that express less intensive emotions. Hence, the results show a depression-related negative bias in the perception of facial displays.

  9. Aesthetic responses to music

    DEFF Research Database (Denmark)

    Istok, Eva; Brattico, Elvira; Jacobsen, Thomas

    2009-01-01

    We explored the content and structure of the cognitive, knowledge-based concept underlying aesthetic responses to music. To this aim, we asked 290 Finnish students to verbally associate the aesthetic value of music and to write down a list of appropriate adjectives within a given time limit....... No music was presented during the task. In addition, information about participants' musical background was collected. A variety of analysis techniques was used to determine the key results of our study. The adjective "beautiful" proved to be the core item of the concept under question. Interestingly......, the adjective "touching" was often listed together with "beautiful". In addition, we found music-specific vocabulary as well as adjectives related to emotions and mood states indicating that affective processes are an essential part of aesthetic responses to music. Differences between music experts and laymen...

  10. Moving Forward: A Feminist Analysis of Mobile Music Streaming

    Directory of Open Access Journals (Sweden)

    Ann Werner

    2015-06-01

    The importance of understanding gender, space and mobility as co-constructed in public space has been emphasized by feminist researchers (Massey 2005, Hanson 2010). Within feminist theory, materiality, affect and emotions have been described as central to experienced subjectivity (Ahmed 2012). Music listening while moving through public space has previously been studied as a way of creating a private auditory bubble for the individual (Bull 2000, Cahir and Werner 2013), and in this article feminist theory on emotion (Ahmed 2010) and space (Massey 2005) is employed in order to understand mobile music streaming. More specifically, it discusses what can happen when mobile media technology is used to listen to music in public space and investigates the interconnectedness of bodies, music, technology and space. The article is based on autoethnographic material of mobile music streaming in public and concludes that a forward movement shaped by happiness is a desired result of mobile music streaming. The valuing of "forward" is critically examined from the point of view of feminist theory, and failed music listening moments are also discussed in terms of emotion and space.

  11. Computer Music

    Science.gov (United States)

    Cook, Perry R.

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).

  12. The Paradox of Music-Evoked Sadness: An Online Survey

    Science.gov (United States)

    Taruffi, Liila; Koelsch, Stefan

    2014-01-01

    This study explores listeners’ experience of music-evoked sadness. Sadness is typically assumed to be undesirable and is therefore usually avoided in everyday life. Yet the question remains: Why do people seek and appreciate sadness in music? We present findings from an online survey with both Western and Eastern participants (N = 772). The survey investigates the rewarding aspects of music-evoked sadness, as well as the relative contribution of listener characteristics and situational factors to the appreciation of sad music. The survey also examines the different principles through which sadness is evoked by music, and their interaction with personality traits. Results show 4 different rewards of music-evoked sadness: reward of imagination, emotion regulation, empathy, and no “real-life” implications. Moreover, appreciation of sad music follows a mood-congruent fashion and is greater among individuals with high empathy and low emotional stability. Surprisingly, nostalgia rather than sadness is the most frequent emotion evoked by sad music. Correspondingly, memory was rated as the most important principle through which sadness is evoked. Finally, the trait empathy contributes to the evocation of sadness via contagion, appraisal, and by engaging social functions. The present findings indicate that emotional responses to sad music are multifaceted, are modulated by empathy, and are linked with a multidimensional experience of pleasure. These results were corroborated by a follow-up survey on happy music, which indicated differences between the emotional experiences resulting from listening to sad versus happy music. This is the first comprehensive survey of music-evoked sadness, revealing that listening to sad music can lead to beneficial emotional effects such as regulation of negative emotion and mood as well as consolation. Such beneficial emotional effects constitute the prime motivations for engaging with sad music in everyday life. PMID:25330315

  13. The paradox of music-evoked sadness: an online survey.

    Directory of Open Access Journals (Sweden)

    Liila Taruffi

    This study explores listeners' experience of music-evoked sadness. Sadness is typically assumed to be undesirable and is therefore usually avoided in everyday life. Yet the question remains: Why do people seek and appreciate sadness in music? We present findings from an online survey with both Western and Eastern participants (N = 772). The survey investigates the rewarding aspects of music-evoked sadness, as well as the relative contribution of listener characteristics and situational factors to the appreciation of sad music. The survey also examines the different principles through which sadness is evoked by music, and their interaction with personality traits. Results show 4 different rewards of music-evoked sadness: reward of imagination, emotion regulation, empathy, and no "real-life" implications. Moreover, appreciation of sad music follows a mood-congruent fashion and is greater among individuals with high empathy and low emotional stability. Surprisingly, nostalgia rather than sadness is the most frequent emotion evoked by sad music. Correspondingly, memory was rated as the most important principle through which sadness is evoked. Finally, the trait empathy contributes to the evocation of sadness via contagion, appraisal, and by engaging social functions. The present findings indicate that emotional responses to sad music are multifaceted, are modulated by empathy, and are linked with a multidimensional experience of pleasure. These results were corroborated by a follow-up survey on happy music, which indicated differences between the emotional experiences resulting from listening to sad versus happy music. This is the first comprehensive survey of music-evoked sadness, revealing that listening to sad music can lead to beneficial emotional effects such as regulation of negative emotion and mood as well as consolation. Such beneficial emotional effects constitute the prime motivations for engaging with sad music in everyday life.

  14. The paradox of music-evoked sadness: an online survey.

    Science.gov (United States)

    Taruffi, Liila; Koelsch, Stefan

    2014-01-01

    This study explores listeners' experience of music-evoked sadness. Sadness is typically assumed to be undesirable and is therefore usually avoided in everyday life. Yet the question remains: Why do people seek and appreciate sadness in music? We present findings from an online survey with both Western and Eastern participants (N = 772). The survey investigates the rewarding aspects of music-evoked sadness, as well as the relative contribution of listener characteristics and situational factors to the appreciation of sad music. The survey also examines the different principles through which sadness is evoked by music, and their interaction with personality traits. Results show 4 different rewards of music-evoked sadness: reward of imagination, emotion regulation, empathy, and no "real-life" implications. Moreover, appreciation of sad music follows a mood-congruent fashion and is greater among individuals with high empathy and low emotional stability. Surprisingly, nostalgia rather than sadness is the most frequent emotion evoked by sad music. Correspondingly, memory was rated as the most important principle through which sadness is evoked. Finally, the trait empathy contributes to the evocation of sadness via contagion, appraisal, and by engaging social functions. The present findings indicate that emotional responses to sad music are multifaceted, are modulated by empathy, and are linked with a multidimensional experience of pleasure. These results were corroborated by a follow-up survey on happy music, which indicated differences between the emotional experiences resulting from listening to sad versus happy music. This is the first comprehensive survey of music-evoked sadness, revealing that listening to sad music can lead to beneficial emotional effects such as regulation of negative emotion and mood as well as consolation. Such beneficial emotional effects constitute the prime motivations for engaging with sad music in everyday life.

  15. Interactive effects of video, priming, and music on emotions and the needs underlying intrinsic motivation

    OpenAIRE

    Loizou, G; Karageorghis, CI; Bishop, D

    2014-01-01

    Objectives: Emotions can enhance motivation towards a particular goal (Brehm, 1999), while activation of human motivation does not necessarily involve conscious processes (Bargh, 1990). The main purpose of the present study was to explore the impact of video, priming, and music on a range of emotion- and motivation-related variables, while the secondary purpose was to conduct a cross-cultural comparison. Design: A randomized controlled design was employed to address the interactive effects of...

  16. Music and Music Intervention for Therapeutic Purposes in Patients with Ventilator Support; Gamelan Music Perspective

    Directory of Open Access Journals (Sweden)

    Suhartini Suhartini

    2011-01-01

    Background: Gamelan music is a form of folk music of the Javanese people. Several research studies testing the effects of music have been conducted in Western countries, and music studies for therapeutic purposes have commonly used classical music; some researchers in Indonesia may also use such music for therapeutic purposes. This article explains the perspective of music and music intervention for therapeutic purposes, viewed through Javanese classical music. Objectives: To explore the evidence on music and music intervention for therapeutic purposes and to describe the perspective of gamelan music used in nursing intervention. Methods: Five bibliographic databases (MEDLINE, CINAHL, Science Direct, Interscience, and Proquest) were searched from 1999-2010 for original clinical reports or reviews that evaluated the use of complementary therapy for therapeutic intervention in patients with ventilator support. The terms complementary therapy, anxiety, and pain were used in a comprehensive search of the electronic databases. Articles were screened and excluded based on the title and abstract information. Results: Music brings about helpful changes in the emotional and physical health of patients, and has the ability to provide an altered state of physical arousal and subsequent mood improvement by processing a progression of musical notes of varying tone, rhythm, and instrumentation for a pleasing effect. Conclusion: Music can be used for therapeutic purposes, for instance to reduce anxiety, to decrease pain sensation, and to address some psychological effects. In addition, gamelan music can be offered to Javanese patients in Indonesia. Key words: Music, music intervention, therapeutic purposes

  17. Music as language.

    Science.gov (United States)

    Ready, Trisha

    2010-02-01

    This article is an inquiry into the potential role of music in helping to address and to articulate complex emotional states such as the feelings patients might experience during the process of an illness or while undergoing bereavement. The article is centered on the role music played in structuring and articulating the cancer treatment experience of my infant nephew. What is woven around that central core is a synthesis and analysis of various philosophical perspectives, autobiographical vignettes, and empirical research. The writer postulates that music has an essential, inherent capacity to scaffold and contain emotions. Music is also considered a means to help facilitate the expression of difficult emotions such as lamentation, longing, and fear of the unknown that are often otherwise isolating, ineffable, or unbearable for patients. A major point of inquiry in this article is whether music can serve as a nurturing love object, or as a transitional object, for a patient during times of intense distress. What is also woven throughout this article is a subexploration of various philosophical perspectives on the cultural meanings and metaphors of illness.

  18. Affordances and the musically extended mind.

    Science.gov (United States)

    Krueger, Joel

    2014-01-06

    I defend a model of the musically extended mind. I consider how acts of "musicking" grant access to novel emotional experiences otherwise inaccessible. First, I discuss the idea of "musical affordances" and specify both what musical affordances are and how they invite different forms of entrainment. Next, I argue that musical affordances - via soliciting different forms of entrainment - enhance the functionality of various endogenous, emotion-granting regulative processes, drawing novel experiences out of us with an expanded complexity and phenomenal character. I argue that music therefore ought to be thought of as part of the vehicle needed to realize these emotional experiences. I appeal to different sources of empirical work to develop this idea.

  19. Affordances and the musically extended mind

    Directory of Open Access Journals (Sweden)

    Joel eKrueger

    2014-01-01

    I defend a model of the musically extended mind. I consider how acts of musicking grant access to novel emotional experiences otherwise inaccessible. First, I discuss the idea of musical affordances and specify both what musical affordances are and how they invite different forms of entrainment. Next, I argue that musical affordances—via soliciting different forms of entrainment—enhance the functionality of various endogenous, emotion-granting regulative processes, drawing novel experiences out of us with an expanded complexity and phenomenal character. I suggest that music therefore ought to be thought of as part of the vehicle needed to realize these emotional experiences. I appeal to different sources of empirical work to develop this idea.

  20. Music Learning Based on Computer Software

    Directory of Open Access Journals (Sweden)

    Baihui Yan

    2017-12-01

    In order to better develop and improve students' music learning, the authors proposed a method of music learning based on computer software. Using computer music software to assist teaching is still a new field. We therefore conducted an in-depth analysis of computer-enabled music learning and the status of music learning in secondary schools, obtaining specific analytical data. The survey data show that students have many cognitive problems in the current music classroom, and teachers have not yet found reasonable countermeasures to them. Against this background, the introduction of computer music software to music learning is a new approach that can not only cultivate students' initiative in music learning, but also enhance their ability to learn music. It is therefore concluded that computer software based music learning is of great significance for improving current music learning modes and means.

  1. Effects of mood induction via music on cardiovascular measures of negative emotion during simulated driving

    NARCIS (Netherlands)

    Fairclough, S.H.; Zwaag, van der M.D.; Spiridon, E.; Westerink, J.H.D.M.

    2014-01-01

    A study was conducted to investigate the potential of mood induction via music to influence cardiovascular correlates of negative emotions experienced during driving behaviour. One hundred participants were randomly assigned to one of five groups, four of whom experienced different categories of

  2. Remember Bach: an investigation in episodic memory for music.

    Science.gov (United States)

    Eschrich, Susann; Münte, Thomas F; Altenmüller, Eckart O

    2005-12-01

    Emotional events are remembered better than nonemotional ones, especially after a long period of time. In this study, we investigated whether emotional music is retained better in episodic long-term memory than less emotional music, and to what extent musical structure is important.

  3. Music–color associations are mediated by emotion

    Science.gov (United States)

    Palmer, Stephen E.; Schloss, Karen B.; Xu, Zoe; Prado-León, Lilia R.

    2013-01-01

    Experimental evidence demonstrates robust cross-modal matches between music and colors that are mediated by emotional associations. US and Mexican participants chose colors that were most/least consistent with 18 selections of classical orchestral music by Bach, Mozart, and Brahms. In both cultures, faster music in the major mode produced color choices that were more saturated, lighter, and yellower, whereas slower, minor music produced the opposite pattern (choices that were desaturated, darker, and bluer). There were strong correlations (0.89 < r < 0.99) between the emotional associations of the music and those of the colors chosen to go with the music, supporting an emotional mediation hypothesis in both cultures. Additional experiments showed similarly robust cross-modal matches from emotionally expressive faces to colors and from music to emotionally expressive faces. These results provide further support that music-to-color associations are mediated by common emotional associations. PMID:23671106
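
    The correlation analysis described above relates emotional-association ratings of the music to those of the chosen colors. The short Python sketch below shows the kind of Pearson correlation involved, using simulated ratings; the numbers are illustrative placeholders, not the study's data.

```python
import numpy as np

# Simulated emotional-association scores (e.g., happy/sad ratings) for 18
# musical selections and for the colors chosen to accompany them.
rng = np.random.default_rng(1)
music_scores = rng.normal(size=18)
color_scores = 0.9 * music_scores + rng.normal(scale=0.2, size=18)

r = np.corrcoef(music_scores, color_scores)[0, 1]  # Pearson correlation
print(f"music-color emotional association: r = {r:.2f}")
```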

  4. Music and Emotion—A Case for North Indian Classical Music

    Directory of Open Access Journals (Sweden)

    Jeffrey M. Valla

    2017-12-01

    The ragas of North Indian Classical Music (NICM) have been historically known to elicit emotions. Recently, Mathur et al. (2015) provided empirical support for these historical assumptions, showing that distinct ragas elicit distinct emotional responses. In this review, we discuss the findings of Mathur et al. (2015) in the context of the structure of NICM. Using Mathur et al. (2015) as a demonstrative case in point, we argue that the ragas of NICM can be viewed as uniquely designed stimulus tools for investigating the tonal and rhythmic influences on musical emotion.

  5. Unraveling the mystery of music: music as an evolved group process.

    Science.gov (United States)

    Loersch, Chris; Arbuckle, Nathan L

    2013-11-01

    As prominently highlighted by Charles Darwin, music is one of the most mysterious aspects of human nature. Despite its ubiquitous presence across cultures and throughout recorded history, the reason humans respond emotionally to music remains unknown. Although many scientists and philosophers have offered hypotheses, there is little direct empirical evidence for any perspective. Here we address this issue, providing data which support the idea that music evolved in service of group living. Using 7 studies, we demonstrate that people's emotional responses to music are intricately tied to the other core social phenomena that bind us together into groups. In sum, this work establishes human musicality as a special form of social cognition and provides the first direct support for the hypothesis that music evolved as a tool of social living. In addition, the findings provide a reason for the intense psychological pull of music in modern life, suggesting that the pleasure we derive from listening to music results from its innate connection to the basic social drives that create our interconnected world. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  6. The Study on the Music Development and Rythmique for Infants: Through the Music Analysis and Case Study

    OpenAIRE

    Imai, Kan'ichi; Yoshimura, Yuri; Horiuchi, Utako; Kyoto Bunkyo University, Faculty of Clinical Psychology, Department of Clinical Psychology; Kyoto Prefectural Comprehensive Center for Mental Health and Welfare

    2011-01-01

    The purpose of this study is to reveal how an infant's music experience affects his or her development and personality growth. This study addresses the issues of "cognition" and the "feeling organ" in music development, and mainly discusses emotion and posture, in order to identify the music experience from the aspect of representation and self-awareness. Moreover, based on Jung's theory, we describe the developmental meaning of infants expressing their own images. The effectiveness of eurhythmics...

  7. Music therapy with sexually abused children.

    Science.gov (United States)

    Robarts, Jacqueline

    2006-04-01

    Music is part of everyday life, and is generally regarded as therapeutic. There is increasing interdisciplinary interest in innate human musicality and the link between music and the emotions. Innate musicality is evident in the dynamic forms of emotional expression that both regulate and cultivate the foundations of meaning in human communication (intersubjectivity). This article discusses music therapy, drawing from interdisciplinary perspectives, and illustrated by case material of individual music therapy with a sexually abused child. Where the growth of mind and meaning is devastated at its core by early relational trauma, music, when used with clinical perception, may reach and work constructively with damaged children in an evolving, musically mediated therapeutic relationship.

  8. The Understanding and Emotion Caused by an Architect-Built Space Using Music

    Directory of Open Access Journals (Sweden)

    Davod Baradaran Tavakoli

    2017-02-01

    Music and architecture are two components of human life that are especially important. However, the link between these two components and its impact on the understanding of the audience are issues that have received little attention. This bond arises from a shared space: a space that can be created either by a work of architecture or by a piece of music. In contrast to previous studies that focused on investigating the different aspects of these two forms of art that link them together, this paper aims to discover and understand the perceptual-emotional relationship between music and architecture beyond the preliminary principles that link them. It also aims to answer the question of how a relationship can be established between the various linking aspects of these two arts that would be understandable for their audiences. The present study is an analytical-descriptive research that relies on library studies and uses logical argument in order to analyze, interpret and compare the relationship between music and architecture and its impact on the understanding of the audience. Accordingly, after reviewing the research literature and stating the concept of space in architecture and music, the linking aspects of these two forms of art have been comparatively analyzed. The results suggest that, taking into consideration the variety of frequencies and the geometrical orders generated at each frequency, the more intense the music is, the more complex its spatial impact on the feeling and understanding of the audience will be.

  9. Neural correlates of cross-modal affective priming by music in Williams syndrome.

    Science.gov (United States)

    Lense, Miriam D; Gordon, Reyna L; Key, Alexandra P F; Dykens, Elisabeth M

    2014-04-01

    Emotional connection is the main reason people engage with music, and the emotional features of music can influence processing in other domains. Williams syndrome (WS) is a neurodevelopmental genetic disorder where musicality and sociability are prominent aspects of the phenotype. This study examined oscillatory brain activity during a musical affective priming paradigm. Participants with WS and age-matched typically developing controls heard brief emotional musical excerpts or emotionally neutral sounds and then reported the emotional valence (happy/sad) of subsequently presented faces. Participants with WS demonstrated greater evoked fronto-central alpha activity to the happy vs sad musical excerpts. The size of these alpha effects correlated with parent-reported emotional reactivity to music. Although participant groups did not differ in accuracy of identifying facial emotions, reaction time data revealed a music priming effect only in persons with WS, who responded faster when the face matched the emotional valence of the preceding musical excerpt vs when the valence differed. Matching emotional valence was also associated with greater evoked gamma activity thought to reflect cross-modal integration. This effect was not present in controls. The results suggest a specific connection between music and socioemotional processing and have implications for clinical and educational approaches for WS.

  10. Affordances and the musically extended mind

    OpenAIRE

    Joel eKrueger

    2014-01-01

    I defend a model of the musically extended mind. I consider how acts of “musicking” grant access to novel emotional experiences otherwise inaccessible. First, I discuss the idea of “musical affordances” and specify both what musical affordances are and how they invite different forms of entrainment. Next, I argue that musical affordances – via soliciting different forms of entrainment – enhance the functionality of various endogenous, emotion-granting regulative processes, drawing novel exper...

  11. Comparison of Two Music Training Approaches on Music and Speech Perception in Cochlear Implant Users.

    Science.gov (United States)

    Fuller, Christina D; Galvin, John J; Maat, Bert; Başkent, Deniz; Free, Rolien H

    2018-01-01

    In normal-hearing (NH) adults, long-term music training may benefit music and speech perception, even when listening to spectro-temporally degraded signals as experienced by cochlear implant (CI) users. In this study, we compared two different music training approaches in CI users and their effects on speech and music perception, as it remains unclear which approach to music training might be best. The approaches differed in terms of music exercises and social interaction. For the pitch/timbre group, melodic contour identification (MCI) training was performed using computer software. For the music therapy group, training involved face-to-face group exercises (rhythm perception, musical speech perception, music perception, singing, vocal emotion identification, and music improvisation). For the control group, training involved group nonmusic activities (e.g., writing, cooking, and woodworking). Training consisted of weekly 2-hr sessions over a 6-week period. Speech intelligibility in quiet and noise, vocal emotion identification, MCI, and quality of life (QoL) were measured before and after training. The different training approaches appeared to offer different benefits for music and speech perception. Training effects were observed within-domain (better MCI performance for the pitch/timbre group), with little cross-domain transfer of music training (emotion identification significantly improved for the music therapy group). While training had no significant effect on QoL, the music therapy group reported better perceptual skills across training sessions. These results suggest that more extensive and intensive training approaches that combine pitch training with the social aspects of music therapy may further benefit CI users.

  12. Predictive coding links perception, action, and learning to emotions in music. Comment on "The quartet theory of human emotions: An integrative and neurofunctional model" by S. Koelsch et al.

    Science.gov (United States)

    Gebauer, L.; Kringelbach, M. L.; Vuust, P.

    2015-06-01

    The review by Koelsch and colleagues [1] offers a timely, comprehensive, and anatomically detailed framework for understanding the neural correlates of human emotions. The authors describe emotion in a framework of four affect systems, which are linked to effector systems and higher order cognitive functions. This is elegantly demonstrated through the example of music: a realm for exploring emotions in a domain that can be independent of language but still highly relevant for understanding human emotions [2].

  13. Style in Music

    Science.gov (United States)

    Dannenberg, Roger B.

    Because music is not objectively descriptive or representational, the subjective qualities of music seem to be most important. Style is one of the most salient qualities of music, and in fact most descriptions of music refer to some aspect of musical style. Style in music can refer to historical periods, composers, performers, sonic texture, emotion, and genre. In recent years, many aspects of music style have been studied from the standpoint of automation: How can musical style be recognized and synthesized? An introduction to musical style describes ways in which style is characterized by composers and music theorists. Examples are then given where musical style is the focal point for computer models of music analysis and music generation.

  14. [In search of the musical brain].

    Science.gov (United States)

    Samson, S

    2011-01-01

    The emotional power of music opens novel prospects in the field of affective neuroscience. To clarify the neurobiological substrate of the emotions evoked by music, we adopted an integrative approach which combines neuropsychology, brain imaging and electrophysiology (intracranial depth electrode recordings). The results of a series of studies carried out in patients with focal brain lesions allow us to describe the involvement of different temporal lobe regions and, in particular, of the amygdala in these emotional judgements, before discussing the therapeutic benefits of music in the management of Alzheimer's disease.

  15. Developing the Emotional Intelligence of Undergraduate Music Education Majors: An Exploratory Study Using Bradberry and Greaves' (2009) "Emotional Intelligence 2.0"

    Science.gov (United States)

    McGinnis, Emily J.

    2018-01-01

    Research focused on the relationship of emotional intelligence (EI) to academic and professional success in education, and whether and how it might be taught and learned, is inconclusive. The purpose of this study was to examine the degree to which undergraduate music education majors experienced a change in EI after implementing strategies from…

  16. Music perception and cognition: development, neural basis, and rehabilitative use of music.

    Science.gov (United States)

    Särkämö, Teppo; Tervaniemi, Mari; Huotilainen, Minna

    2013-07-01

    Music is a highly versatile form of art and communication that has been an essential part of human society since its early days. Neuroimaging studies indicate that music is a powerful stimulus also for the human brain, engaging not just the auditory cortex but also a vast, bilateral network of temporal, frontal, parietal, cerebellar, and limbic brain areas that govern auditory perception, syntactic and semantic processing, attention and memory, emotion and mood control, and motor skills. Studies of amusia, a severe form of musical impairment, highlight the right temporal and frontal cortices as the core neural substrates for adequate perception and production of music. Many of the basic auditory and musical skills, such as pitch and timbre perception, start developing already in utero, and babies are born with a natural preference for music and singing. Music has many important roles and functions throughout life, ranging from emotional self-regulation, mood enhancement, and identity formation to promoting the development of verbal, motor, cognitive, and social skills and maintaining their healthy functioning in old age. Music is also used clinically as a part of treatment in many illnesses, which involve affective, attention, memory, communication, or motor deficits. Although more research is still needed, current evidence suggests that music-based rehabilitation can be effective in many developmental, psychiatric, and neurological disorders, such as autism, depression, schizophrenia, and stroke, as well as in many chronic somatic illnesses that cause pain and anxiety. WIREs Cogn Sci 2013, 4:441-451. doi: 10.1002/wcs.1237 The authors have declared no conflicts of interest for this article. For further resources related to this article, please visit the WIREs website. Copyright © 2013 John Wiley & Sons, Ltd.

  17. Stimulating cognitive, emotional and social development with the help of music: Case study of the pupil with special needs

    Directory of Open Access Journals (Sweden)

    Mira Kuzma

    2004-05-01

    The goal of the study was to improve some cognitive and social abilities and skills of a nine-year-old boy with special needs by using different methods and techniques of help with music. In the action research (a qualitative case study), the boy's teacher, his mother and the boy himself were actively and equally involved alongside me as school psychologist. After a nine-month period of intervention with the help of music (especially individual and group remedial music making), all members of the group actively involved in the research perceived effects of the interventions: improved concentration (and, connected with it, knowledge of the multiplication table), learning habits, general learning results, and emotional and social maturity. The results of the research confirmed all the working hypotheses, namely that the use of different methods and techniques of help with music would have a positive effect on (i) the boy's concentration (and, connected with it, his knowledge of the multiplication table), (ii) the boy's learning habits and his general learning results, and (iii) the boy's emotional and social maturity.

  18. Music in the family: music making and music therapy with young children and their families.

    Science.gov (United States)

    Wetherick, Donald

    2009-01-01

    Songs and singing games are a healthy part of young children's social, emotional and cognitive development. Such shared music making can facilitate and strengthen relationships between parents and children. Family health workers can encourage carers' informal uses of music with their children. In cases of developmental delay, disability, severe illness or family stress, music can continue to have a significant role in supporting children and parents. In some cases referral to specialist music therapy services may be appropriate for assessment and/or treatment.

  19. Toward a neural chronometry for the aesthetic experience of music.

    Science.gov (United States)

    Brattico, Elvira; Bogert, Brigitte; Jacobsen, Thomas

    2013-01-01

    Music is often studied as a cognitive domain alongside language. The emotional aspects of music have also been shown to be important, but views on their nature diverge. For instance, the specific emotions that music induces and how they relate to emotional expression are still under debate. Here we propose a mental and neural chronometry of the aesthetic experience of music initiated and mediated by external and internal contexts such as intentionality, background mood, attention, and expertise. The initial stages necessary for an aesthetic experience of music are feature analysis, integration across modalities, and cognitive processing on the basis of long-term knowledge. These stages are common to individuals belonging to the same musical culture. The initial emotional reactions to music include the startle reflex, core "liking," and arousal. Subsequently, discrete emotions are perceived and induced. Presumably somatomotor processes synchronizing the body with the music also come into play here. The subsequent stages, in which cognitive, affective, and decisional processes intermingle, require controlled cross-modal neural processes to result in aesthetic emotions, aesthetic judgments, and conscious liking. These latter aesthetic stages often require attention, intentionality, and expertise for their full actualization.

  20. Toward a Neural Chronometry for the Aesthetic Experience of Music

    Science.gov (United States)

    Brattico, Elvira; Bogert, Brigitte; Jacobsen, Thomas

    2013-01-01

    Music is often studied as a cognitive domain alongside language. The emotional aspects of music have also been shown to be important, but views on their nature diverge. For instance, the specific emotions that music induces and how they relate to emotional expression are still under debate. Here we propose a mental and neural chronometry of the aesthetic experience of music initiated and mediated by external and internal contexts such as intentionality, background mood, attention, and expertise. The initial stages necessary for an aesthetic experience of music are feature analysis, integration across modalities, and cognitive processing on the basis of long-term knowledge. These stages are common to individuals belonging to the same musical culture. The initial emotional reactions to music include the startle reflex, core “liking,” and arousal. Subsequently, discrete emotions are perceived and induced. Presumably somatomotor processes synchronizing the body with the music also come into play here. The subsequent stages, in which cognitive, affective, and decisional processes intermingle, require controlled cross-modal neural processes to result in aesthetic emotions, aesthetic judgments, and conscious liking. These latter aesthetic stages often require attention, intentionality, and expertise for their full actualization. PMID:23641223

  1. Kivy and Langer on expressiveness in music

    Directory of Open Access Journals (Sweden)

    van der Schoot Albert

    2013-01-01

    From 1980 onwards, Peter Kivy has put forward that music does not so much express emotions but rather is expressive of emotions. The character of the music does not represent the character or mood of the composer, but reflects his knowledge of emotional life. Unfortunately, Kivy fails to give credit to Susanne Langer, who brought these views to the fore as early as 1942, claiming that the vitality of music lies in expressiveness, not in expression.

  2. Emotional Attributes of the Musical Sound and Mood Reactions of the Listeners

    Institute of Scientific and Technical Information of China (English)

    杨倩; 孟子厚

    2013-01-01

    The musical sound was labelled and classified according to the different emotions it conveys, with the aim of observing the emotional influence that different types of music have on listeners and analysing the correlation between the mood components of the listeners and the emotional attributes of the musical sound.

  3. Music as Water: The Functions of Music from a Utilitarian Perspective

    Directory of Open Access Journals (Sweden)

    Liam Maloney

    2017-11-01

    The rapid increase of technologically enhanced listening platforms gives listeners access to music with ever-increasing ease and ubiquity, giving rise to the suggestion that we should now conceptualize music as a resource similar to water; something that is utilized to achieve everyday goals. This paper proposes that music is a utilitarian resource employed by listeners to augment cognitive, emotional, behavioral, and physiological aspects of the self. To better explore these notions, this paper examines the potential role of the "functions of music," first espoused by Alan P. Merriam in 1964. Merriam suggested music has a situational use and an underlying function (music's ability to alter the self through listening). The research presented here asserts that listeners interact with specific musical materials to achieve or orientate themselves towards contextually-rooted goals. Reinforcing Tia DeNora's suggestion that music is a "technology of the self," this research presents the results of a meta-analysis of 41 publications exploring the possible functions of music. The resultant Aggregate Thematic Functions Framework (ATF framework) identifies 45 possible utilitarian functions of music, spread across five domains of action. The framework also proposes a meta-domain and an emotional sub-domain.

  4. Between Emotion and Intellect. On the Musical Language of Andrzej Panufnik (1914–1991

    Directory of Open Access Journals (Sweden)

    Bolesławska-Lewandowska Beata

    2015-12-01

    Andrzej Panufnik's (1914-1991) key objective as a composer was to achieve a balance between emotion and intellect. The composer very often emphasised the role of the relation between these two elements in his works. This topic is the leitmotiv of the texts about his own music left behind by the composer. From those texts, it is clearly evident that symmetry (and in later years also geometry) played a central role in the composer's formal concepts. The impulse for studying the possibility of using geometric shapes for the construction of musical forms came from his 1972 composition for BBC television entitled Triangles, for three flutes and three cellos.

  5. Classical Music Clustering Based on Acoustic Features

    OpenAIRE

    Wang, Xindi; Haque, Syed Arefinul

    2017-01-01

    In this paper we cluster 330 classical music pieces collected from the MusicNet database based on their musical note sequences. We use shingling and chord trajectory matrices to create a signature for each music piece and perform spectral clustering to find the clusters. At different resolutions, the output clusters distinctly indicate compositions from different classical music eras and the different composing styles of the musicians.
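
    The abstract names shingling, chord-trajectory signatures, and spectral clustering but does not publish code. As a rough sketch of the clustering step only, the snippet below runs scikit-learn's SpectralClustering on placeholder signature vectors; the random features, dimensionality, cluster count, and cosine affinity are assumptions, since the real signatures would be derived from MusicNet note sequences.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Placeholder stand-ins for per-piece signatures (e.g., flattened chord-trajectory
# counts); in the study these would be built from MusicNet note sequences.
rng = np.random.default_rng(42)
signatures = rng.random((330, 64))  # 330 pieces, 64-dimensional signatures

# Cluster the pieces; cosine affinity is a common choice for count-like features,
# though the paper does not state which similarity measure was used.
model = SpectralClustering(
    n_clusters=5, affinity="cosine", assign_labels="kmeans", random_state=0
)
labels = model.fit_predict(signatures)
print(np.bincount(labels))  # number of pieces per cluster
```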

  6. [Contributions to the psychodynamic approach to understanding the phenomenon of music].

    Science.gov (United States)

    Giglio, J S; Giglio, Z G

    1980-12-01

    Based on different writings from different periods, this article presents some psychodynamic views of the musical phenomenon. It is divided into three parts. 1. The first part concerns general aspects of the musical phenomenon. Music has a high power of symbolization, due to its ambiguity and simultaneity of elements. It can bring about a change in the ordinary state of consciousness. Music can also influence the Ego's cognitive style, and it has a relationship with both the Ego's dissociative function and the Ego's integrative function. 2. The second part refers to the affective nature of music. Music releases libidinal energy mainly through rhythm; for example, we see the link between music and dance in primitive societies. It also gives expression to emotional life, and arouses emotions too. 3. The third part refers to structural and functional aspects of music. This last part discusses the role of music for the listener and for the creative musician; it presents an analogy between the structure of music and the structure of dreams. The joy of music involves the whole personality, and music may be used to expand the Self.

  7. Music and the Mind: Music's Healing Powers

    Directory of Open Access Journals (Sweden)

    Caroilyn S. Ticker

    2017-03-01

    Music makes you smarter: or at least that is what the "experts" are saying. CDs of Mozart's sonatas are sold for babies, and parents are urged to give their children music lessons in the belief that music does something to our brains which in turn makes us more intelligent. But is this really true? Does music really affect the brain in the powerful way that scientists are suggesting, or is it hearsay? In this paper I investigate the effects of music on our brain's plasticity and cognition by looking at several different experimental studies. Specifically, I will address how music affects brain plasticity, emotion, physical health and linguistic processing, and how these effects in turn make music a beneficial tool for therapy, particularly in patients with Traumatic Brain Injury (TBI) and Autism Spectrum Disorder.

  8. The functions of music and their relationship to music preference in India and Germany.

    Science.gov (United States)

    Schäfer, Thomas; Tipandjan, Arun; Sedlmeier, Peter

    2012-01-01

    Is the use of music in everyday life a culturally universal phenomenon? And do the functions served by music contribute to the development of music preferences regardless of the listener's cultural background? The present study explored similarities and dissimilarities in the functions of music listening and their relationship to music preferences in two countries with different cultural backgrounds: India as an example of a collectivistic society and Germany as an example of an individualistic society. Respondents were asked to what degree their favorite music serves several functions in their life. The functions were summarized in seven main groups: background entertainment, prompt for memories, diversion, emotion regulation, self-regulation, self-reflection, and social bonding. Results indicate a strong similarity of the functions of people's favorite music for Indian and German listeners. Among the Indians, all of the seven functions were rated as meaningful; among the Germans, this was the case for all functions except emotion regulation. However, a pronounced dissimilarity was found in the predictive power of the functions of music for the strength of music preference, which was much stronger for Germans than for Indians. In India, the functions of music most predictive for music preference were diversion, self-reflection, and social bonding. In Germany, the most predictive functions were emotion regulation, diversion, self-reflection, prompt for memories, and social bonding. It is concluded that potential cultural differences hardly apply to the functional use of music in everyday life, but they do so with respect to the impact of the functions on the development of music preference. The present results are consistent with the assumption that members of a collectivistic society tend to set a higher value on their social and societal integration and their connectedness to each other than do members of individualistic societies.

  9. The cognitive organization of music knowledge: a clinical analysis.

    Science.gov (United States)

    Omar, Rohani; Hailstone, Julia C; Warren, Jane E; Crutch, Sebastian J; Warren, Jason D

    2010-04-01

    Despite much recent interest in the clinical neuroscience of music processing, the cognitive organization of music as a domain of non-verbal knowledge has been little studied. Here we addressed this issue systematically in two expert musicians with clinical diagnoses of semantic dementia and Alzheimer's disease, in comparison with a control group of healthy expert musicians. In a series of neuropsychological experiments, we investigated associative knowledge of musical compositions (musical objects), musical emotions, musical instruments (musical sources) and music notation (musical symbols). These aspects of music knowledge were assessed in relation to musical perceptual abilities and extra-musical neuropsychological functions. The patient with semantic dementia showed relatively preserved recognition of musical compositions and musical symbols despite severely impaired recognition of musical emotions and musical instruments from sound. In contrast, the patient with Alzheimer's disease showed impaired recognition of compositions, with somewhat better recognition of composer and musical era, and impaired comprehension of musical symbols, but normal recognition of musical emotions and musical instruments from sound. The findings suggest that music knowledge is fractionated, and superordinate musical knowledge is relatively more robust than knowledge of particular music. We propose that music constitutes a distinct domain of non-verbal knowledge but shares certain cognitive organizational features with other brain knowledge systems. Within the domain of music knowledge, dissociable cognitive mechanisms process knowledge derived from physical sources and the knowledge of abstract musical entities.

  10. Music Learning Based on Computer Software

    OpenAIRE

    Baihui Yan; Qiao Zhou

    2017-01-01

    To better develop and improve students’ music learning, the authors propose a method of music learning based on computer software. Using computer music software to assist teaching is still a new field. We therefore conducted an in-depth analysis of computer-enabled music learning and of the current state of music learning in secondary schools, obtaining specific analytical data. The survey data show that students have many cognitive problems in the current music classroom, and yet teach...

  11. Towards a neural chronometric framework for the aesthetic experience of music

    Directory of Open Access Journals (Sweden)

    Elvira Brattico

    2013-05-01

    Full Text Available Music is often studied as a cognitive domain alongside language. The emotional aspects of music have also been shown to be important, but views on their nature diverge. For instance, the specific emotions that music induces and how they relate to emotional expression are still under debate. Here we propose a mental and neural chronometry of the aesthetic experience of music initiated and mediated by external and internal contexts such as intentionality, background mood, attention, and expertise. The initial stages necessary for an aesthetic experience of music are feature analysis, integration across modalities, and cognitive processing on the basis of long-term knowledge. These stages are common to individuals belonging to the same musical culture. The initial emotional reactions to music include the startle reflex, core ‘liking’, and arousal. Subsequently, discrete emotions are perceived and induced. Presumably somatomotor processes synchronizing the body with the music also come into play here. The subsequent stages, in which cognitive, affective, and decisional processes intermingle, require controlled cross-modal neural processes to result in aesthetic emotions, aesthetic judgments, and conscious liking. These latter aesthetic stages often require attention, intentionality, and expertise for their full actualization.

  12. Perception of Music and Speech in Adolescents with Cochlear Implants – A Pilot Study on Effects of Intensive Musical Ear Training

    DEFF Research Database (Denmark)

    Petersen, Bjørn; Sørensen, Stine Derdau; Pedersen, Ellen Raben

    measures of rehabilitation are important throughout adolescence. Music training may provide a beneficial method of strengthening not only music perception, but also linguistic skills, particularly prosody. The purpose of this study was to examine perception of music and speech and music engagement...... of adolescent CI users and the potential effects of an intensive musical ear training program. METHODS Eleven adolescent CI users participated in a short intensive training program involving music making activities and computer based listening exercises. Ten NH agemates formed a reference group, who followed...... their standard school schedule and received no music training. Before and after the intervention period, both groups completed a set of tests for perception of music, speech and emotional prosody. In addition, the participants filled out a questionnaire which examined music listening habits and enjoyment...

  13. Toward a Neural Chronometry for the Aesthetic Experience of Music

    OpenAIRE

    Brattico, Elvira; Bogert, Brigitte; Jacobsen, Thomas

    2013-01-01

    Music is often studied as a cognitive domain alongside language. The emotional aspects of music have also been shown to be important, but views on their nature diverge. For instance, the specific emotions that music induces and how they relate to emotional expression are still under debate. Here we propose a mental and neural chronometry of the aesthetic experience of music initiated and mediated by external and internal contexts such as intentionality, background mood, attention, and experti...

  14. [Acquired amusia and musical anhedonia].

    Science.gov (United States)

    Hirel, C; Lévêque, Y; Deiana, G; Richard, N; Cho, T-H; Mechtouff, L; Derex, L; Tillmann, B; Caclin, A; Nighoghossian, N

    2014-01-01

    Amusia is defined as an auditory agnosia, specifically related to music, resulting from a cerebral lesion or being of congenital origin. Amusia is rarely associated with musical anhedonia. We report the case of a 43-year-old patient who suffered in January 2012 from a right ischemic lesion affecting the superior temporal cortex, in particular the lateral Heschl Gyrus and the posterior part of the Superior Temporal Gyrus (Brodmann areas 21 and 22). Neuropsychological tests revealed amusia combined with musical anhedonia. The specificity of this case lies in the combination of both syndromes, highlighting the relation between the neural networks involved in processing musical information in both its perceptual and emotional components. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  15. Music Mood Player Implementation Applied in Daycare Using Self Organizing Map Method

    OpenAIRE

    Dewi, Kadek Cahya; Putri, Luh Arida Ayu Rahning

    2011-01-01

    Music is an art, a form of entertainment and a human activity that involves organized sound. Music is closely related to human psychology. A piece of music is often associated with certain adjectives such as happy, sad, romantic and many more. The linkage between music and a particular mood has been widely exploited on various occasions, and music classification based on relevance to a particular emotion is therefore important. Daycare is one example of an institution that used music as therapy or...
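
    The record above names a Self-Organizing Map (SOM) as the classification method but gives no implementation detail. As a rough, hedged illustration (not the authors' system), the sketch below trains a tiny SOM in plain numpy on hand-made track feature vectors; the feature names (tempo, energy, mode), grid size, and training parameters are assumptions made purely for the example.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    """Train a tiny Self-Organizing Map on row-wise feature vectors."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))          # random initial codebook
    coords = np.dstack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)         # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)   # shrinking neighbourhood
        for x in rng.permutation(data):
            # best-matching unit = node whose weight vector is closest to x
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # pull the BMU and its grid neighbours towards the sample
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
            influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
    return weights

def map_to_node(weights, x):
    """Return the grid coordinates of the node that best matches sample x."""
    dists = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(dists), dists.shape)

# Hypothetical normalized features per track: [tempo, energy, mode]
tracks = np.array([
    [0.9, 0.8, 1.0],   # fast, energetic, major  -> likely a "happy" region
    [0.2, 0.2, 0.0],   # slow, quiet, minor      -> likely a "sad" region
    [0.8, 0.9, 0.0],   # fast, loud, minor       -> likely a "tense" region
])
som = train_som(tracks)
print([map_to_node(som, t) for t in tracks])   # tracks with similar mood map to nearby nodes
```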

  16. Enhancing emotional experiences to dance through music: the role of valence and arousal in the cross-modal bias

    Directory of Open Access Journals (Sweden)

    Julia F. Christensen

    2014-10-01

    Full Text Available It is well established that emotional responses to stimuli presented to one perceptive modality (e.g. visual) are modulated by the concurrent presentation of affective information to another modality (e.g. auditory) – an effect known as the cross-modal bias. However, the affective mechanisms mediating this effect are still not fully understood. It remains unclear what role different dimensions of stimulus valence and arousal play in mediating the effect, and to what extent cross-modal influences impact not only our perception and conscious affective experiences, but also our psychophysiological emotional response. We addressed these issues by measuring participants’ subjective emotion ratings and their Galvanic Skin Responses (GSR) in a cross-modal affect perception paradigm employing videos of ballet dance movements and instrumental classical music as the stimuli. We chose these stimuli to explore the cross-modal bias in a context of stimuli (ballet dance movements) that most participants would have relatively little prior experience with. Results showed (i) that the cross-modal bias was more pronounced for sad than for happy movements, whereas it was equivalent when contrasting high vs. low arousal movements, and (ii) that movement valence did not modulate participants’ GSR, while movement arousal did, such that GSR was potentiated in the case of low arousal movements with sad music and when high arousal movements were paired with happy music. Results are discussed in the context of the cross-modal affect perception literature and with regards to implications for the art community.

  17. Enhancing emotional experiences to dance through music: the role of valence and arousal in the cross-modal bias.

    Science.gov (United States)

    Christensen, Julia F; Gaigg, Sebastian B; Gomila, Antoni; Oke, Peter; Calvo-Merino, Beatriz

    2014-01-01

    It is well established that emotional responses to stimuli presented to one perceptive modality (e.g., visual) are modulated by the concurrent presentation of affective information to another modality (e.g., auditory)-an effect known as the cross-modal bias. However, the affective mechanisms mediating this effect are still not fully understood. It remains unclear what role different dimensions of stimulus valence and arousal play in mediating the effect, and to what extent cross-modal influences impact not only our perception and conscious affective experiences, but also our psychophysiological emotional response. We addressed these issues by measuring participants' subjective emotion ratings and their Galvanic Skin Responses (GSR) in a cross-modal affect perception paradigm employing videos of ballet dance movements and instrumental classical music as the stimuli. We chose these stimuli to explore the cross-modal bias in a context of stimuli (ballet dance movements) that most participants would have relatively little prior experience with. Results showed (i) that the cross-modal bias was more pronounced for sad than for happy movements, whereas it was equivalent when contrasting high vs. low arousal movements; and (ii) that movement valence did not modulate participants' GSR, while movement arousal did, such that GSR was potentiated in the case of low arousal movements with sad music and when high arousal movements were paired with happy music. Results are discussed in the context of the affective dimension of neuroentrainment and with regards to implications for the art community.

  18. Characterization of music-evoked autobiographical memories.

    Science.gov (United States)

    Janata, Petr; Tomic, Stefan T; Rakowski, Sonja K

    2007-11-01

    Despite music's prominence in Western society and its importance to individuals in their daily lives, very little is known about the memories and emotions that are often evoked when hearing a piece of music from one's past. We examined the content of music-evoked autobiographical memories (MEAMs) using a novel approach for selecting stimuli from a large corpus of popular music, in both laboratory and online settings. A set of questionnaires probed the cognitive and affective properties of the evoked memories. On average, 30% of the song presentations evoked autobiographical memories, and the majority of songs also evoked various emotions, primarily positive, that were felt strongly. The third most common emotion was nostalgia. Analyses of written memory reports found both general and specific levels of autobiographical knowledge to be represented, and several social and situational contexts for memory formation were common across many memories. The findings indicate that excerpts of popular music serve as potent stimuli for studying the structure of autobiographical memories.

  19. Music: Its Expressive Power and Moral Significance

    Directory of Open Access Journals (Sweden)

    Sarah Whitfield

    2010-05-01

    Full Text Available The creation and practice of music is tightly wound with human emotion, character, and experience. Music arouses sentiment and cannot be underestimated as a powerful shaper of human virtue, character, and emotion. As vehicles of musical expression, musicians possess the ability to profoundly influence an audience for good or for evil. Thus, the nature of music and the manner in which musicians utilize it creates innumerable ramifications that cannot be ignored. The pervasiveness of this notion is largely attributed to the Greek theorists, who ascribed various emotions and moral implications to particular modes. The prominent Greek philosophers Plato and Aristotle affirmed that music contained an intrinsic element that was conducive to the promotion of moral or spiritual harmony and order in the soul. Plato and his contemporaries attributed specific character-forming qualities to each of the individual harmonia, or musical modes, believing that each could shape human character in a distinct way. These ideas inevitably persisted and continue to endure. Theorists throughout history have agreed that music profoundly influences human character and shapes morality.

  20. The prenatal roots of music

    Directory of Open Access Journals (Sweden)

    David Ernest Teie

    2016-08-01

    Full Text Available Although the idea that pulse in music may be related to human pulse is ancient and has recently been promoted by researchers (Parncutt, 2006; Snowdon & Teie, 2010), there has been no ordered delineation of the characteristics of music that are based on the sounds of the womb. I describe features of music that are based on sounds that are present in the womb: tempo of pulse (pulse is understood as the regular, underlying beat that defines the meter), amplitude contour of pulse, meter, musical notes, melodic frequency range, continuity, syllabic contour, melodic rhythm, melodic accents, phrase length, and phrase contour. There are a number of features of prenatal development that allow for the formation of long-term memories of the sounds of the womb in the areas of the brain that are responsible for emotions. Taken together, these features and the similarities between the sounds of the womb and the elemental building blocks of music allow for a postulation that the fetal acoustic environment may provide the bases for the fundamental musical elements that are found in the music of all cultures. This hypothesis is supported by a one-to-one matching of the universal features of music with the sounds of the womb: 1) all of the regularly heard sounds that are present in the fetal environment are represented in the music of every culture, and 2) all of the features of music that are present in the music of all cultures can be traced to the fetal environment.

  1. Appeal of love themes in popular music.

    Science.gov (United States)

    Knobloch, Silvia; Zillmann, Dolf

    2003-12-01

    The relationship between romantic satisfaction versus discontent and a preference for music celebrating versus lamenting love is explored. The satisfaction/discontent was ascertained in 60 college undergraduate women and men who later freely listened to music from a sampling of selections. The duration of their self-determined exposure to love-celebrating versus love-lamenting music was unobtrusively recorded by computer software. Romantically satisfied women and men showed a preference for love-celebrating music, whereas discontented women and men preferred love-lamenting music. Romantically discontent women and men preferred love-lamenting music presented by performers of their own sex. The findings indicate young adults' inclination to match emotions expressed in music about love with the emotions experienced in their own romantic situation.

  2. Effects of music on cardiovascular responses in men with essential hypertension compared with healthy men based on introversion and extraversion.

    Science.gov (United States)

    Namdar, Hossein; Taban Sadeghi, Mohammadreza; Sabourimoghaddam, Hassan; Sadeghi, Babak; Ezzati, Davoud

    2014-01-01

    The present research investigated the effects of two different types of music on cardiovascular responses in essential hypertensive men in comparison with healthy men based on introversion and extraversion. One hundred and thirteen hypertensive men referred to Madani Heart Hospital in Tabriz completed the NEO-FFI Questionnaire and after obtaining acceptable scores were classified in four groups: introvert patients, extravert patients, introvert healthy subjects, and extravert healthy subjects (each group with 25 samples with age range 31-50). Baseline blood pressure and heart rate of each subject were recorded without any stimulus. Then subjects were exposed to slow-beat music and blood pressure and heart rate were recorded. After a 15-minute break, and a brief cognitive task for distraction, subjects were exposed to fast-beat music and blood pressure and heart rate were recorded again. Multivariate analysis of covariance (MANCOVA) showed that extravert patients obtained a greater reduction in systolic blood pressure and heart rate after presentation of slow-beat music compared with introvert patients (P= 0.035 and P= 0.033, respectively). Extravert healthy subjects also obtained a greater reduction in heart rate after presentation of slow-beat music compared with introvert healthy subjects (P= 0.036). However, there were no significant differences between introvert and extravert groups in systolic and diastolic blood pressure and heart rate after presentation of fast-beat music. Based on our results, introvert subjects experience negative emotions more than extravert subjects, and negative emotions cause less change in blood pressure in these subjects compared with extravert subjects.

  3. Effects of Music on Cardiovascular Responses in Men with Essential Hypertension Compared with Healthy Men Based on Introversion and Extraversion

    Directory of Open Access Journals (Sweden)

    Hossein Namdar

    2014-10-01

    Full Text Available Introduction: The present research investigated the effects of two different types of music on cardiovascular responses in essential hypertensive men in comparison with healthy men based on introversion and extraversion. Methods: One hundred and thirteen hypertensive men referred to Madani Heart Hospital in Tabriz completed the NEO-FFI Questionnaire and after obtaining acceptable scores were classified in four groups: introvert patients, extravert patients, introvert healthy subjects, and extravert healthy subjects (each group with 25 samples with age range 31-50). Baseline blood pressure and heart rate of each subject were recorded without any stimulus. Then subjects were exposed to slow-beat music and blood pressure and heart rate were recorded. After a 15-minute break, and a brief cognitive task for distraction, subjects were exposed to fast-beat music and blood pressure and heart rate were recorded again. Results: Multivariate analysis of covariance (MANCOVA) showed that extravert patients obtained a greater reduction in systolic blood pressure and heart rate after presentation of slow-beat music compared with introvert patients (P= 0.035 and P= 0.033, respectively). Extravert healthy subjects also obtained a greater reduction in heart rate after presentation of slow-beat music compared with introvert healthy subjects (P= 0.036). However, there were no significant differences between introvert and extravert groups in systolic and diastolic blood pressure and heart rate after presentation of fast-beat music. Conclusion: Based on our results, introvert subjects experience negative emotions more than extravert subjects, and negative emotions cause less change in blood pressure in these subjects compared with extravert subjects.

  4. Evidence-based music therapy practice: an integral understanding.

    Science.gov (United States)

    Abrams, Brian

    2010-01-01

    The American Music Therapy Association has recently put into action a plan called its Research Strategic Priority, with one of its central purposes to advance the music therapy field through research promoting Evidence-Based Practice of music therapy. The extant literature on music therapy practice, theory, and research conveys a range of very different perspectives on what may count as the "evidence" upon which practice is based. There is therefore a need to conceptualize evidence-based music therapy practice in a multifaceted, yet coherent and balanced way. The purpose of this paper is to illustrate a framework based upon four distinct epistemological perspectives on evidence-based music therapy practice that together represent an integral understanding.

  5. Effects of a Dyadic Music Therapy Intervention on Parent-Child Interaction, Parent Stress, and Parent-Child Relationship in Families with Emotionally Neglected Children

    DEFF Research Database (Denmark)

    Jacobsen, Stine Lindahl; H. McKinney, Cathy; Holck, Ulla

    2014-01-01

    of this study was to investigate the effect of a dyadic music therapy intervention on observed parent-child interaction (mutual attunement, nonverbal communication, emotional parental response), self-reported parenting stress, and self-reported parent-child relationship in families at risk and families...... significantly improved their nonverbal communication and mutual attunement. Similarly, parents who participated in dyadic music therapy reported themselves to be significantly less stressed by the mood of the child and to significantly improve their parent-child relationship in terms of being better at talking......-perceived autonomy, attachment, and parental competence. Conclusions: The dyadic music therapy intervention examined in this study improved emotional communication between parent and child and interaction after 6 to 10 sessions and can be considered as a viable treatment alternative or supplement for families...

  6. Study on Brain Dynamics by Non Linear Analysis of Music Induced EEG Signals

    Science.gov (United States)

    Banerjee, Archi; Sanyal, Shankha; Patranabis, Anirban; Banerjee, Kaushik; Guhathakurta, Tarit; Sengupta, Ranjan; Ghosh, Dipak; Ghose, Partha

    2016-02-01

    Music has been proven to be a valuable tool for the understanding of human cognition, human emotion, and their underlying brain mechanisms. The objective of this study is to analyze the effect of Hindustani music on brain activity during normal relaxing conditions using electroencephalography (EEG). Ten healthy male subjects without special musical education participated in the study. EEG signals were acquired at the frontal (F3/F4) lobes of the brain while listening to music under three experimental conditions (rest, with music and without music). Frequency analysis was done for the alpha, theta and gamma brain rhythms. The findings show that arousal-based activities were enhanced while listening to Hindustani music of contrasting emotions (romantic/sorrow) for all subjects in the alpha frequency band, while no significant changes were observed in the gamma and theta frequency ranges. It has been observed that when the music stimulus is removed, arousal activities as evident from alpha brain rhythms remain for some time, showing residual arousal. This is analogous to the conventional ‘Hysteresis’ loop, where the system retains some ‘memory’ of the former state. This is corroborated by the nonlinear analysis (Detrended Fluctuation Analysis) of the alpha rhythms as manifested in the values of fractal dimension. After an input of music conveying contrasting emotions, withdrawal of the music shows greater retention, as evidenced by the values of fractal dimension.
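
    The record above reports Detrended Fluctuation Analysis (DFA) of the alpha rhythms but does not spell the method out. The following is a minimal, self-contained sketch of DFA for a one-dimensional signal, not the authors' pipeline; the window sizes and the synthetic white-noise input are assumptions chosen only to make the example runnable.

```python
import numpy as np

def dfa_exponent(signal, window_sizes):
    """Return the DFA scaling exponent (slope of log F(n) vs. log n)."""
    signal = np.asarray(signal, dtype=float)
    profile = np.cumsum(signal - signal.mean())    # integrated, mean-centred series
    fluctuations = []
    for n in window_sizes:
        n_windows = len(profile) // n
        rms = []
        for i in range(n_windows):
            segment = profile[i * n:(i + 1) * n]
            t = np.arange(n)
            coeffs = np.polyfit(t, segment, 1)     # local linear trend
            detrended = segment - np.polyval(coeffs, t)
            rms.append(np.sqrt(np.mean(detrended ** 2)))
        fluctuations.append(np.mean(rms))
    # slope of the log-log relation is the scaling exponent alpha
    slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return slope

# Example with white noise: the exponent should be close to 0.5
rng = np.random.default_rng(0)
alpha = dfa_exponent(rng.standard_normal(4096), window_sizes=[16, 32, 64, 128, 256])
print(round(alpha, 2))
```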

  7. The change of music preferences following the onset of a mental disorder

    Directory of Open Access Journals (Sweden)

    Stefan Gebhardt

    2015-06-01

    Full Text Available A psychiatric population (n=123) was examined on changed music preferences after the onset of a mental disorder. Most patients did not change their previous music preference; they considered music helpful for their mental state, showed more attractivity and enforcement as personality traits and used music more for emotion modulation. Patients who had undergone a preference shift reported that music had impaired them during the time of illness; these patients showed less ego-strength, less confidence and less enforcement and used music less for arousal modulation. A third subgroup stopped listening to music completely after the onset of the mental disorder; these patients attached less importance to music and also reported that music had impaired their mental state. They showed more ego-strength and used music less for emotion modulation. The results suggest that the use of music in everyday life can be helpful as an emotion modulation strategy. However, some patients might need instructions on how to use music in a functional, and not dysfunctional, way. Psychiatrists and psychotherapists as well as music therapists should be aware of emotion modulation strategies, subjective valence of music and personality traits of their patients.

  8. Healthy Behaviours in Music and Non-Music Performance Students

    Science.gov (United States)

    Ginsborg, Jane; Kreutz, Gunter; Thomas, Mike; Williamon, Aaron

    2009-01-01

    Purpose: The purpose of this paper is to compare the self-reported health-promoting behaviours of music and non-music performance students in higher education. It also seeks to determine the extent to which perceived health and self-reported symptoms are associated with lifestyle, emotional affect state, self-regulation and self-efficacy.…

  9. Sharing experienced sadness: Negotiating meanings of self-defined sad music within a group interview session

    OpenAIRE

    Peltola, Henna-Riikka

    2017-01-01

    Sadness induced by music listening has been a popular research focus in music and emotion research. Despite the wide consensus in affective sciences that emotional experiences are social processes, previous studies have only concentrated on individuals. Thus, the intersubjective dimension of musical experience – how music and music-related emotions are experienced between individuals – has not been investigated. In order to tap into shared emotional experiences, group discussions about experi...

  10. Psychiatry and music

    OpenAIRE

    Nizamie, Shamsul Haque; Tikka, Sai Krishna

    2014-01-01

    Vocal and/or instrumental sounds combined in such a way as to produce beauty of form, harmony and expression of emotion is music. Brain, mind and music are remarkably related to each other and music has got a strong impact on psychiatry. With the advent of music therapy, as an efficient form of alternative therapy in treating major psychiatric conditions, this impact has been further strengthened. In this review, we deliberate upon the historical aspects of the relationship between psychiatry...

  11. Proposal of an Algorithm to Synthesize Music Suitable for Dance

    Science.gov (United States)

    Morioka, Hirofumi; Nakatani, Mie; Nishida, Shogo

    This paper proposes an algorithm for synthesizing music suited to the emotions in moving pictures. Our goal is to support multimedia content creation: web page design, animation films and so on. Here we adopt human dance as the moving picture to examine the feasibility of our method, because we consider dance images to have a high affinity with music. The algorithm is composed of three modules: the first computes emotions from an input dance image, the second computes emotions from music in the database, and the last selects music suitable for the input dance by matching the two in terms of emotion.
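
    Of the three modules described above, only the final matching step lends itself to a compact illustration. The sketch below is an assumption-laden stand-in rather than the authors' emotion interface: it treats each emotion estimate as a point in a valence-arousal plane and picks the database track nearest to the dance clip's estimate. All emotion values and track names are hypothetical.

```python
import math

# Hypothetical (valence, arousal) estimates produced by the first two modules
dance_emotion = (0.7, 0.9)                     # e.g., an energetic, joyful dance clip
music_db = {
    "track_a": (0.8, 0.85),                    # bright, fast piece
    "track_b": (-0.6, 0.3),                    # melancholic piece
    "track_c": (0.1, 0.2),                     # calm, neutral piece
}

def select_music(target, database):
    """Pick the track whose emotion coordinates are closest to the target."""
    return min(database, key=lambda name: math.dist(target, database[name]))

print(select_music(dance_emotion, music_db))   # -> "track_a"
```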

  12. Background music genre can modulate flavor pleasantness and overall impression of food stimuli.

    Science.gov (United States)

    Fiegel, Alexandra; Meullenet, Jean-François; Harrington, Robert J; Humble, Rachel; Seo, Han-Seok

    2014-05-01

    This study aimed to determine not only whether background music genre can alter food perception and acceptance, but also how the effect of background music varies as a function of the type of food (emotional versus non-emotional foods) and the source of the music performer (single versus multiple performers). The music piece was edited into four genres: classical, jazz, hip-hop, and rock, by either a single or multiple performers. Following consumption of emotional (milk chocolate) or non-emotional food (bell peppers) with the four musical stimuli, participants were asked to rate sensory perception and impression of food stimuli. Participants liked food stimuli significantly more while listening to the jazz stimulus than the hip-hop stimulus. Further, the influence of background music on overall impression was present in the emotional food, but not in the non-emotional food. In addition, flavor pleasantness and overall impression of food stimuli differed between music genres arranged by a single performer, but not between those by multiple performers. In conclusion, our findings demonstrate that music genre can alter flavor pleasantness and overall impression of food stimuli. Furthermore, the influence of music genre on food acceptance varies as a function of the type of served food and the source of music performer. Published by Elsevier Ltd.

  13. Commentary on "Why Does Music Therapy Help in Autism?" by N. Khetrapal

    Directory of Open Access Journals (Sweden)

    Anjali K. Bhatara

    2009-04-01

    Full Text Available Khetrapal reviews the literature on music and autism and stresses the need for a greater focus on the cognitive and neural mechanisms underlying both autism and music perception. I build upon this review and discuss the strong connections between speech prosody and emotion in music. These connections imply that emotion recognition training in one domain can influence emotion recognition in the other. Understanding of emotional speech is frequently impaired in individuals with ASD, so music therapy should be explored further as a possible treatment.

  14. Towards a neural chronometric framework for the aesthetic experience of music

    OpenAIRE

    Elvira Brattico; Brigitte Bogert; Thomas Jacobsen

    2013-01-01

    Music is often studied as a cognitive domain alongside language. The emotional aspects of music have also been shown to be important, but views on their nature diverge. For instance, the specific emotions that music induces and how they relate to emotional expression are still under debate. Here we propose a mental and neural chronometry of the aesthetic experience of music initiated and mediated by external and internal contexts such as intentionality, background mood, attention, and experti...

  15. Investigation of musicality in birdsong.

    Science.gov (United States)

    Rothenberg, David; Roeske, Tina C; Voss, Henning U; Naguib, Marc; Tchernichovski, Ofer

    2014-02-01

    Songbirds spend much of their time learning, producing, and listening to complex vocal sequences we call songs. Songs are learned via cultural transmission, and singing, usually by males, has a strong impact on the behavioral state of the listeners, often promoting affiliation, pair bonding, or aggression. What is it in the acoustic structure of birdsong that makes it such a potent stimulus? We suggest that birdsong potency might be driven by principles similar to those that make music so effective in inducing emotional responses in humans: a combination of rhythms and pitches-and the transitions between acoustic states-affecting emotions through creating expectations, anticipations, tension, tension release, or surprise. Here we propose a framework for investigating how birdsong, like human music, employs the above "musical" features to affect the emotions of avian listeners. First we analyze songs of thrush nightingales (Luscinia luscinia) by examining their trajectories in terms of transitions in rhythm and pitch. These transitions show gradual escalations and graceful modifications, which are comparable to some aspects of human musicality. We then explore the feasibility of stripping such putative musical features from the songs and testing how this might affect patterns of auditory responses, focusing on fMRI data in songbirds that demonstrate the feasibility of such approaches. Finally, we explore ideas for investigating whether musical features of birdsong activate avian brains and affect avian behavior in manners comparable to music's effects on humans. In conclusion, we suggest that birdsong research would benefit from current advances in music theory by attempting to identify structures that are designed to elicit listeners' emotions and then testing for such effects experimentally. Birdsong research that takes into account the striking complexity of song structure in light of its more immediate function - to affect behavioral state in listeners - could

  16. Two types of peak emotional responses to music: The psychophysiology of chills and tears

    Science.gov (United States)

    Mori, Kazuma; Iwanaga, Makoto

    2017-01-01

    People sometimes experience a strong emotional response to artworks. Previous studies have demonstrated that the peak emotional experience of chills (goose bumps or shivers) when listening to music involves psychophysiological arousal and a rewarding effect. However, many aspects of peak emotion are still not understood. The current research takes a new perspective of peak emotional response of tears (weeping, lump in the throat). A psychophysiological experiment showed that self-reported chills increased electrodermal activity and subjective arousal whereas tears produced slow respiration during heartbeat acceleration, although both chills and tears induced pleasure and deep breathing. A song that induced chills was perceived as being both happy and sad whereas a song that induced tears was perceived as sad. A tear-eliciting song was perceived as calmer than a chill-eliciting song. These results show that tears involve pleasure from sadness and that they are psychophysiologically calming; thus, psychophysiological responses permit the distinction between chills and tears. Because tears may have a cathartic effect, the functional significance of chills and tears seems to be different. We believe that the distinction of two types of peak emotions is theoretically relevant and further study of tears would contribute to more understanding of human peak emotional response. PMID:28387335

  17. Two types of peak emotional responses to music: The psychophysiology of chills and tears.

    Science.gov (United States)

    Mori, Kazuma; Iwanaga, Makoto

    2017-04-07

    People sometimes experience a strong emotional response to artworks. Previous studies have demonstrated that the peak emotional experience of chills (goose bumps or shivers) when listening to music involves psychophysiological arousal and a rewarding effect. However, many aspects of peak emotion are still not understood. The current research takes a new perspective of peak emotional response of tears (weeping, lump in the throat). A psychophysiological experiment showed that self-reported chills increased electrodermal activity and subjective arousal whereas tears produced slow respiration during heartbeat acceleration, although both chills and tears induced pleasure and deep breathing. A song that induced chills was perceived as being both happy and sad whereas a song that induced tears was perceived as sad. A tear-eliciting song was perceived as calmer than a chill-eliciting song. These results show that tears involve pleasure from sadness and that they are psychophysiologically calming; thus, psychophysiological responses permit the distinction between chills and tears. Because tears may have a cathartic effect, the functional significance of chills and tears seems to be different. We believe that the distinction of two types of peak emotions is theoretically relevant and further study of tears would contribute to more understanding of human peak emotional response.

  18. Dancing to "groovy" music enhances the experience of flow.

    Science.gov (United States)

    Bernardi, Nicolò F; Bellemare-Pepin, Antoine; Peretz, Isabelle

    2018-05-06

    We investigated whether dancing influences the emotional response to music, compared to when music is listened to in the absence of movement. Forty participants without previous dance training listened to "groovy" and "nongroovy" music excerpts while either dancing or refraining from movement. Participants were also tested while imitating their own dance movements, but in the absence of music as a control condition. Emotion ratings and ratings of flow were collected following each condition. Dance movements were recorded using motion capture. We found that the state of flow was increased specifically during spontaneous dance to groovy excerpts, compared with both still listening and motor imitation. Emotions in the realms of vitality (such as joy and power) and sublimity (such as wonder and nostalgia) were evoked by music in general, whether participants moved or not. Significant correlations were found between the emotional and flow responses to music and whole-body acceleration profiles. Thus, the results highlight a distinct state of flow when dancing, which may be of use to promote well-being and to address certain clinical conditions. © 2018 New York Academy of Sciences.

  19. Music Therapy by Proxy: Using Humanised Images in Song

    Directory of Open Access Journals (Sweden)

    Carol Chambers

    2013-07-01

    Full Text Available Developing awareness, exploration and expression of emotionally sensitive issues can be difficult for some clients in music therapy. They may find it hard to express emotion through improvised music and may turn instead to the perceived security of the repetition of known songs. This paper presents the results from a completed research PhD, a qualitative case study based on naturalistic clinical practice, which examined the song choices of one woman in a medium-secure forensic unit over the three-year course of her music therapy. A descriptive narrative account was subjected to analysis according to a modified form of therapeutic narrative analysis (Aldridge and Aldridge 2002), resulting in the abstraction of a series of generative metaphoric images, framed within a chronological series of events. Crucially, these images were found to be humanised figures, yet they were also emotionally decentred or depersonalised. When approached from the philosophical and methodological perspective of behaviourism, which views these as conditioned responses associating music with life experiences as part of a process of developing self-identity, such images can be seen to provide an unspoken voice for the client’s feelings to be expressed in a manner that is personally revealing, socially acceptable, culturally accessible and therapeutically constructive. I assert that using these third-person characters as a form of proxy facilitates mutual reference and experimentation, and places music firmly at the heart of a socially constructed process of music therapy.

  20. The Change of Music Preferences Following the Onset of a Mental Disorder.

    Science.gov (United States)

    Gebhardt, Stefan; von Georgi, Richard

    2015-02-24

    A psychiatric population (n=123) was examined on how music preferences had changed after the onset of a mental disorder. Most patients did not change their previous music preference; this group of patients considered music helpful for their mental state, showed more attractivity and enforcement as personality traits and used music more for emotion modulation. Patients who experienced a preference shift reported that music had impaired them during the time of illness; these patients showed less ego-strength, less confidence and less enforcement and used music less for arousal modulation. A third subgroup stopped listening to music completely after the onset of the mental disorder; these patients attributed less importance to music and also reported that music had impaired their mental state. They showed more ego-strength and used music less for emotion modulation. The results suggest that the use of music in everyday life can be helpful as an emotion modulation strategy. However, some patients might need instructions on how to use music in a functional way and not a dysfunctional one. Psychiatrists and psychotherapists as well as music therapists should be aware of emotion modulation strategies, subjective valence of music and personality traits of their patients. Due to the ubiquity of music, psychoeducative instructions on how to use music in everyday life play an increasing role in the treatment of mental illness.

  1. The Change of Music Preferences Following the Onset of a Mental Disorder

    Science.gov (United States)

    Gebhardt, Stefan; von Georgi, Richard

    2015-01-01

    A psychiatric population (n=123) was examined on how music preferences had changed after the onset of a mental disorder. Most patients did not change their previous music preference; this group of patients considered music helpful for their mental state, showed more attractivity and enforcement as personality traits and used music more for emotion modulation. Patients who experienced a preference shift reported that music had impaired them during the time of illness; these patients showed less ego-strength, less confidence and less enforcement and used music less for arousal modulation. A third subgroup stopped listening to music completely after the onset of the mental disorder; these patients attributed less importance to music and also reported that music had impaired their mental state. They showed more ego-strength and used music less for emotion modulation. The results suggest that the use of music in everyday life can be helpful as an emotion modulation strategy. However, some patients might need instructions on how to use music in a functional way and not a dysfunctional one. Psychiatrists and psychotherapists as well as music therapists should be aware of emotion modulation strategies, subjective valence of music and personality traits of their patients. Due to the ubiquity of music, psychoeducative instructions on how to use music in everyday life play an increasing role in the treatment of mental illness. PMID:26266024

  2. Musicing, Materiality, and the Emotional Niche

    Science.gov (United States)

    Krueger, Joel

    2015-01-01

    Building on Elliot and Silverman's (2015) embodied and enactive approach to musicing, I argue for an extended approach: namely, the idea that music can function as an environmental scaffolding supporting the development of various experiences and embodied practices that would otherwise remain inaccessible. I focus especially on the materiality of…

  3. Music interventions for dental anxiety.

    Science.gov (United States)

    Bradt, J; Teague, A

    2018-04-01

    Anxiety is a significant issue in the dental care of adults and children. Dental anxiety often leads to avoidance of dental care which may result in significant deterioration of oral and dental health. Non-pharmacological anxiety management interventions such as music listening are increasingly used in dental care. Although efficacy for music's anxiolytic effects has been established for pre-operative anxiety, findings regarding the use of music listening for dental anxiety are inconclusive, especially for children. The use of music for passive distraction may not be adequate for children and highly anxious adults. Instead, interventions offered by a trained music therapist may be needed to optimize music's anxiolytic impact. Music therapy interventions are individualized to the patient's presenting needs and geared at enhancing patients' active engagement in the management of their anxiety. Interventions may include (i) active refocusing of attention, (ii) music-guided deep breathing, (iii) music-assisted relaxation, and (iv) music-guided imagery. In addition, music therapists can teach patients music-based anxiety management skills prior to dental treatments, offer them the opportunity to express emotions related to the upcoming procedure, and help them gain a sense of control and safety. Clinical guidelines for the use of music listening by dental practitioners are offered. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. Group Rumination: Social Interactions Around Music in People with Depression

    Science.gov (United States)

    Garrido, Sandra; Eerola, Tuomas; McFerran, Katrina

    2017-01-01

    One of the most important roles that music serves in human society is the promotion of social relationships and group cohesion. In general, emotional experiences tend to be amplified in group settings through processes of social feedback. However, previous research has established that listening to sad music can intensify negative emotions in people with tendencies to rumination and depression. This study therefore investigated the phenomenon of ruminating with music, and the question of whether listening to sad music in group settings provides social benefits for emotionally vulnerable listeners, or whether it further exaggerates depressive tendencies. Participants recruited via online depression groups and mental health websites were surveyed as to music listening habits. Results revealed that people with depression were more likely to engage in “group rumination” using music, and that this behavior could be partially explained by a general tendency to ruminate using music. Both affective states and coping styles were found to be related to the affective outcomes of group interactions around music. These findings go some way toward clarifying the situations in which group interactions around music are able to provide important social benefits for those involved, and situations in which negative emotions can be amplified by the group context. PMID:28421014

  5. Body Movement Music Score – Introduction of a newly developed model for the analysis and description of body qualities, movement and music in music therapy

    Directory of Open Access Journals (Sweden)

    Hanna Agnieszka Skrzypek

    2017-01-01

    Full Text Available Background In music therapy, there is a range of music therapy concepts that, in addition to music, describe and analyse the body and movement. A model that equally examines the body, movement and music has not been developed. The Body Movement Music Score (BMMS) is a newly developed and evaluated music therapy model for analysing body qualities, movement, playing style of musical instruments and music, and for describing body behaviour and body expression, movement behaviour and movement expression, playing behaviour and musical expression in music therapy treatment. The basis for the development of the Body Movement Music Score was the evaluation of the analytical movement model Emotorics-Emotive Body Movement Mind Paradigm (Emotorics-EBMMP) by Yona Shahar Levy for the analysis and description of the emotive-motor behaviour and movement expression of schizophrenic patients in music therapy treatment. Participants and procedure The application of the Body Movement Music Score is presented in a videotaped example from the music therapy treatment of one schizophrenic patient. Results The results of applying the Body Movement Music Score are presented in the form of Body Qualities I Analysis, Body Qualities II Analysis, Movement Analysis, Playing Style Analysis and Music Analysis Profiles. Conclusions The Body Movement Music Score has been developed and evaluated for the music therapy treatment of schizophrenic patients. For the development of the model, a proof of reliability is necessary to verify the reliability and limitations of the model in practice and show that the Body Movement Music Score could be used for both practical and clinical work, for documentation purposes and to impact research in music therapy.

  6. The minor third communicates sadness in speech, mirroring its use in music.

    Science.gov (United States)

    Curtis, Meagan E; Bharucha, Jamshed J

    2010-06-01

    There is a long history of attempts to explain why music is perceived as expressing emotion. The relationship between pitches serves as an important cue for conveying emotion in music. The musical interval referred to as the minor third is generally thought to convey sadness. We reveal that the minor third also occurs in the pitch contour of speech conveying sadness. Bisyllabic speech samples conveying four emotions were recorded by 9 actresses. Acoustic analyses revealed that the relationship between the 2 salient pitches of the sad speech samples tended to approximate a minor third. Participants rated the speech samples for perceived emotion, and the use of numerous acoustic parameters as cues for emotional identification was modeled using regression analysis. The minor third was the most reliable cue for identifying sadness. Additional participants rated musical intervals for emotion, and their ratings verified the historical association between the musical minor third and sadness. These findings support the theory that human vocal expressions and music share an acoustic code for communicating sadness.
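
    The key measurement in this study is the size of the interval between the two salient pitches, expressed relative to the minor third (three equal-tempered semitones). A minimal sketch of that calculation is shown below; the example frequencies and the half-semitone tolerance are illustrative assumptions, not values taken from the paper.

```python
import math

def interval_in_semitones(f1_hz, f2_hz):
    """Size of the interval between two pitches, in equal-tempered semitones."""
    return 12 * math.log2(f2_hz / f1_hz)

def is_minor_third(f1_hz, f2_hz, tolerance=0.5):
    """True if the interval is within `tolerance` semitones of a minor third (3 st)."""
    return abs(abs(interval_in_semitones(f1_hz, f2_hz)) - 3) <= tolerance

# Two hypothetical salient pitches from a sad utterance: 220 Hz falling to ~185 Hz
print(round(interval_in_semitones(185.0, 220.0), 2))   # ~= 3 semitones
print(is_minor_third(220.0, 185.0))                    # True
```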

  7. Perception of Music and Speech in Adolescents with Cochlear Implants – A Pilot Study on Effects of Intensive Musical Ear Training

    DEFF Research Database (Denmark)

    Petersen, Bjørn; Sørensen, Stine Derdau; Pedersen, Ellen Raben

    their standard school schedule and received no music training. Before and after the intervention period, both groups completed a set of tests for perception of music, speech and emotional prosody. In addition, the participants filled out a questionnaire which examined music listening habits and enjoyment....... RESULTS CI users significantly improved their overall music perception and discrimination of melodic contour and rhythm in particular. No effect of the music training was found on discrimination of emotional prosody or speech. The CI users described levels of music engagement and enjoyment that were...... combined with their positive feedback suggests that music training could form part of future rehabilitation programs as a strong, motivational and beneficial method of improving auditory skills in adolescent CI users....

  8. Music Therapy for children with special needs - clinical practice and assessment in the light of developmental psychology and communicative musicality

    DEFF Research Database (Denmark)

    Holck, Ulla

    continues with practices on basic improvisational techniques related to time, form and emotions: synchronization, turn-taking, theme-with-variations, matching/attunement, vitality forms, simple musical playing rules, etc. The techniques are connected to macro- and micro-regulation of arousal and emotions......). Turn-taking in music therapy with children with communication disorders. British Journal of Music Therapy, 18(2), 45-53. Malloch, S. & Trevarthen, C. (Eds) (2009). Communicative Musicality. Exploring the basis of human companionship. Oxford: Oxford University Press. Stern, D. N. (2010). Forms...... of Vitality. Oxford: Oxford University Press. Wigram, T. (2004). Improvisation. Methods and Techniques for Music Therapy Clinicians, Educators and Students. London: Jessica Kingsley Publishers....

  9. Connectivity patterns during music listening: Evidence for action-based processing in musicians.

    Science.gov (United States)

    Alluri, Vinoo; Toiviainen, Petri; Burunat, Iballa; Kliuchko, Marina; Vuust, Peter; Brattico, Elvira

    2017-06-01

    Musical expertise is visible both in the morphology and functionality of the brain. Recent research indicates that functional integration between multi-sensory, somato-motor, default-mode (DMN), and salience (SN) networks of the brain differentiates musicians from non-musicians during resting state. Here, we aimed at determining whether brain networks differentially exchange information in musicians as opposed to non-musicians during naturalistic music listening. Whole-brain graph-theory analyses were performed on participants' fMRI responses. Group-level differences revealed that musicians' primary hubs comprised cerebral and cerebellar sensorimotor regions whereas non-musicians' dominant hubs encompassed DMN-related regions. Community structure analyses of the key hubs revealed greater integration of motor and somatosensory homunculi representing the upper limbs and torso in musicians. Furthermore, musicians who started training at an earlier age exhibited greater centrality in the auditory cortex, and areas related to top-down processes, attention, emotion, somatosensory processing, and non-verbal processing of speech. We here reveal how brain networks organize themselves in a naturalistic music listening situation wherein musicians automatically engage neural networks that are action-based while non-musicians use those that are perception-based to process an incoming auditory stream. Hum Brain Mapp 38:2955-2970, 2017. © 2017 Wiley Periodicals, Inc.
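
    The graph-theoretic notion of a hub used above can be illustrated with a very reduced sketch: threshold the correlation matrix of ROI time series into a binary graph and rank nodes by degree centrality. This is a generic toy version under stated assumptions (random data, an arbitrary threshold, degree as the centrality measure), not the whole-brain analysis reported in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rois, n_timepoints = 8, 200
roi_timeseries = rng.standard_normal((n_rois, n_timepoints))  # stand-in for fMRI ROI signals

# Functional connectivity: pairwise Pearson correlations between ROI time series
connectivity = np.corrcoef(roi_timeseries)

# Binarize: keep edges whose absolute correlation exceeds an (arbitrary) threshold
threshold = 0.1
adjacency = (np.abs(connectivity) > threshold).astype(int)
np.fill_diagonal(adjacency, 0)                 # no self-connections

# Degree centrality: number of suprathreshold connections per ROI, normalized
degree = adjacency.sum(axis=1)
centrality = degree / (n_rois - 1)

# "Hubs" = the most central nodes
hub_order = np.argsort(centrality)[::-1]
print("ROIs ranked by degree centrality:", hub_order.tolist())
```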

  10. Leading Together, Learning Together: Music Education and Music Therapy Students' Perceptions of a Shared Practicum

    Science.gov (United States)

    Ballantyne, Julie; Baker, Felicity A.

    2013-01-01

    The health benefits of musical engagement extend across the lifespan, with research documenting developmental and quality of life outcomes in senior adulthood. Whilst the psychological functions of music include three broad domains: cognitive, emotional and social, the social factors of music consumption have been, for the most part, ignored. This…

  11. Affective evolutionary music composition with MetaCompose

    DEFF Research Database (Denmark)

    Scirea, Marco; Togelius, Julian; Eklund, Peter

    2017-01-01

    This paper describes the MetaCompose music generator, a compositional, extensible framework for affective music composition. In this context ‘affective’ refers to the music generator’s ability to express emotional information. The main purpose of MetaCompose is to create music in real-time that can...

  12. The Brain on Music

    Indian Academy of Sciences (India)

    effects of music training on auditory .... dala but is distributed over a network of regions that also include the ... In addition to the emotional impact of music on the brain, these ... social cognition, contact, copathy, and social cohesion in a group.

  13. Content and user-based music visual analysis

    Science.gov (United States)

    Guo, Xiaochun; Tang, Lei

    2015-12-01

    In recent years, people's ability to collect music has grown greatly. Many people who prefer listening to music offline have stored thousands of songs on local storage or portable devices. However, their ability to manage this music information has not improved accordingly, which results in two problems. One is how to find favourite songs in a large music dataset in a way that satisfies different individuals. The other is how to compose a playlist quickly. To solve these problems, the authors propose a content and user-based music visual analysis approach. We first developed a new recommendation algorithm based on the content of music and the user's behaviour, which caters to individual preferences. Then, we use visualization and interaction tools to illustrate the relationships between songs and help people compose a suitable playlist. At the end of this paper, a survey shows that our system is feasible and effective.
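
    The abstract outlines a recommender that blends music content with the user's listening behaviour but gives no formula. As a hedged sketch of one plausible reading (not the authors' algorithm), the code below mixes cosine similarity over hand-made audio feature vectors with normalized play counts; the feature names, the weighting parameter alpha, and all data are assumptions.

```python
import numpy as np

# Hypothetical per-track audio features: [tempo, energy, valence]
features = {
    "song_a": np.array([0.9, 0.8, 0.7]),
    "song_b": np.array([0.2, 0.1, 0.9]),
    "song_c": np.array([0.85, 0.75, 0.6]),
}
play_counts = {"song_a": 25, "song_b": 2, "song_c": 0}   # user's listening history

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def recommend(seed_track, features, play_counts, alpha=0.7):
    """Score other tracks by blending content similarity with listening behaviour."""
    max_plays = max(play_counts.values()) or 1
    scores = {}
    for track, vec in features.items():
        if track == seed_track:
            continue
        content_score = cosine(features[seed_track], vec)          # content-based part
        behaviour_score = play_counts.get(track, 0) / max_plays    # user-behaviour part
        scores[track] = alpha * content_score + (1 - alpha) * behaviour_score
    return sorted(scores, key=scores.get, reverse=True)

# Recommend tracks similar to the user's favourite, boosted by prior listening
print(recommend("song_a", features, play_counts))   # "song_c" ranks first on content similarity
```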

  14. Why Does Music Therapy Help in Autism?

    Directory of Open Access Journals (Sweden)

    Neha Khetrapal

    2009-04-01

    Full Text Available Music therapy is shown to be an effective intervention for emotional recognition deficits in autism. However, researchers to date have yet to propose a model that accounts for the neurobiological and cognitive components that are responsible for such improvements. The current paper outlines a model whereby the encoding of tonal pitch is proposed as the underlying mechanism. Accurate tonal pitch perception is important for recognizing emotions like happiness and sadness in the auditory domain. Once acquired, the ability to perceive tonal pitch functions as a domain-specific module that proves beneficial for music cognition. There is biological preparedness for the development of such a module and it is hypothesized to be preserved in autism. The current paper reinforces the need to build intervention programs based on this preserved module in autism, and proposes that this module may form the basis for a range of benefits related to music therapy. Possible brain areas associated with this module are suggested.

  15. Older but not younger infants associate own-race faces with happy music and other-race faces with sad music.

    Science.gov (United States)

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2018-03-01

    We used a novel intermodal association task to examine whether infants associate own- and other-race faces with music of different emotional valences. Three- to 9-month-olds saw a series of neutral own- or other-race faces paired with happy or sad musical excerpts. Three- to 6-month-olds did not show any specific association between face race and music. At 9 months, however, infants looked longer at own-race faces paired with happy music than at own-race faces paired with sad music. Nine-month-olds also looked longer at other-race faces paired with sad music than at other-race faces paired with happy music. These results indicate that infants with nearly exclusive own-race face experience develop associations between face race and music emotional valence in the first year of life. The potential implications of such associations for developing racial biases in early childhood are discussed. © 2017 John Wiley & Sons Ltd.

  16. Emotion Recognition From Singing Voices Using Contemporary Commercial Music and Classical Styles.

    Science.gov (United States)

    Hakanpää, Tua; Waaramaa, Teija; Laukkanen, Anne-Maria

    2018-02-22

    This study examines the recognition of emotion in contemporary commercial music (CCM) and classical styles of singing. This information may be useful in improving the training of interpretation in singing. This is an experimental comparative study. Thirteen singers (11 female, 2 male) with a minimum of 3 years' professional-level singing studies (in CCM or classical technique or both) participated. They sang at three pitches (females: a, e1, a1, males: one octave lower) expressing anger, sadness, joy, tenderness, and a neutral state. Twenty-nine listeners listened to 312 short (0.63- to 4.8-second) voice samples, 135 of which were sung using a classical singing technique and 165 of which were sung in a CCM style. The listeners were asked which emotion they heard. Activity and valence were derived from the chosen emotions. The percentage of correct recognitions out of all the answers in the listening test (N = 9048) was 30.2%. The recognition percentage for the CCM-style singing technique was higher (34.5%) than for the classical-style technique (24.5%). Valence and activation were better perceived than the emotions themselves, and activity was better recognized than valence. A higher pitch was more likely to be perceived as joy or anger, and a lower pitch as sorrow. Both valence and activation were better recognized in the female CCM samples than in the other samples. There are statistically significant differences in the recognition of emotions between classical and CCM styles of singing. Furthermore, in the singing voice, pitch affects the perception of emotions, and valence and activity are more easily recognized than emotions. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  17. Mozart, music and medicine.

    Science.gov (United States)

    Pauwels, Ernest K J; Volterrani, Duccio; Mariani, Giuliano; Kostkiewics, Magdalena

    2014-01-01

    According to the first publication on the topic, by Rauscher et al. in 1993 [Nature 1993;365:611], the Mozart effect refers to the enhancement of reasoning skills for solving spatial problems in normal subjects after listening to Mozart's piano sonata K 448. Further evaluation of this effect has raised the question of whether there is a link between music-generated emotions and a higher level of cognitive abilities through mere listening. Positron emission tomography and functional magnetic resonance imaging have revealed that listening to pleasurable music activates cortical and subcortical cerebral areas where emotions are processed. These neurobiological effects of music suggest that auditory stimulation evokes emotions linked to heightened arousal and results in temporarily enhanced performance in many cognitive domains. Music therapy applies this arousal in a clinical setting, as it may offer benefits to patients by diverting their attention from unpleasant experiences and future interventions. It has been applied in the context of various important clinical conditions such as cardiovascular disorders, cancer pain, epilepsy, depression and dementia. Furthermore, music may modulate the immune response, as evidenced, among other things, by increased activity of natural killer cells, lymphocytes and interferon-γ; this is an interesting feature, as many diseases are related to an imbalanced immune system. Many of these clinical studies, however, suffer from methodological inadequacies. Nevertheless, at present, there is moderate but not altogether convincing evidence that listening to known and liked music helps to decrease the burden of a disease and enhances the immune system by modifying stress. © 2014 S. Karger AG, Basel.

  18. Music in the recovering brain

    OpenAIRE

    Särkämö, Teppo

    2011-01-01

    Listening to music involves a widely distributed bilateral network of brain regions that controls many auditory perceptual, cognitive, emotional, and motor functions. Exposure to music can also temporarily improve mood, reduce stress, and enhance cognitive performance as well as promote neural plasticity. However, very little is currently known about the relationship between music perception and auditory and cognitive processes or about the potential therapeutic effects of listening to music ...

  19. Perceived Properties of Parameterised Music for Interactive Applications

    Directory of Open Access Journals (Sweden)

    Jan Berg

    2006-04-01

    Traditional implementations of sound and music in interactive contexts have their limitations. One way to overcome these and to expand the possibilities of music is to handle the music in a parameterised form. To better understand the properties of the musical parameters resulting from parameterisation, two experiments were carried out. The first experiment investigated selected parameters' capability to change the music; the second experiment examined how the parameters can contribute to expressing emotions. From these experiments, it is concluded that users without musical training perform differently from musicians on some of the parameters. There is also a clear association between the parameters and the expressed basic emotions. The paper concludes with observations on how parameterisation might be used in interactive applications.

  20. Music's relevance for adolescents and young adults with cancer: a constructivist research approach.

    Science.gov (United States)

    O'Callaghan, Clare; Barry, Philippa; Thompson, Kate

    2012-04-01

    Music is one of the most widely used activities amongst young people, significant in personal and group identity, motivation, physical release, and emotional support. Adolescents and young adults with cancer (AYA) require specialized care because of intensified challenges related to developmental vulnerability, treatment toxicity effects, and slower improvements in survival rates compared to other age groups. To advance effective supportive care for AYA, understanding their thoughts about music is necessary. This study examines AYAs' perspectives about music's role in their lives. A constructivist research approach with grounded theory design was applied. Twelve people, 15 to 25 years old, known to onTrac@PeterMac Victorian Adolescent & Young Adult Cancer Service, participated. Respondents completed a brief music demographic questionnaire and participated in a semi-structured interview. Qualitative inter-rater reliability was integrated. Participants mostly reported music's calming, supportive, and relaxing effects, which alleviated hardship associated with their cancer diagnoses. Themes encompassed: music backgrounds, changed "musicking", endurance and adjustment, time with music therapists, and wisdom. Music provided supportive messages, enabled personal and shared understandings about cancer's effects, and elicited helpful physical, emotional, and imagery states. Music therapy could also promote normalized and supportive connections with others. A musician, however, struggled to get music "back" post-treatment. Supportive music-based strategies were recommended for other AYA and their health care providers. Music can signify and creatively enable AYAs' hope, endurance, identity development, and adjustment through cancer treatment and post-treatment phases. Health professionals are encouraged to support AYAs' music-based self-care and "normalized" activities.

  1. Emotional ornamentation in performances of a Handel sonata

    NARCIS (Netherlands)

    Timmers, R.; Ashley, R.

    2007-01-01

    Ornamentation is one aspect of music associated with emotional affect in Baroque music. In an empirical study, the relationship between ornamentation and emotion was investigated by asking a violinist and flutist to ornament three melodies in different ways to express four emotions: happiness,

  2. Music listening after stroke: beneficial effects and potential neural mechanisms.

    Science.gov (United States)

    Särkämö, Teppo; Soto, David

    2012-04-01

    Music is an enjoyable leisure activity that also engages many emotional, cognitive, and motor processes in the brain. Here, we will first review previous literature on the emotional and cognitive effects of music listening in healthy persons and various clinical groups. Then we will present findings about the short- and long-term effects of music listening on the recovery of cognitive function in stroke patients and the underlying neural mechanisms of these music effects. First, our results indicate that listening to pleasant music can have a short-term facilitating effect on visual awareness in patients with visual neglect, which is associated with functional coupling between emotional and attentional brain regions. Second, daily music listening can improve auditory and verbal memory, focused attention, and mood as well as induce structural gray matter changes in the early poststroke stage. The psychological and neural mechanisms potentially underlying the rehabilitating effect of music after stroke are discussed. © 2012 New York Academy of Sciences.

  3. Musical Nurture in the Early Years of Children.

    Science.gov (United States)

    Miller, Samuel D.

    Children are naturally musical and should be musically educated. Music provides a unique way for children to grow intellectually, emotionally and socially. Music fulfills an inner drive to express feelings and experiences in a symbolic, abstract, creative, and acceptable manner which is positive and valued. Musical nurture should begin within the…

  4. Anhedonia to music and mu-opioids: Evidence from the administration of naltrexone

    Science.gov (United States)

    Mallik, Adiel; Chanda, Mona Lisa; Levitin, Daniel J.

    2017-01-01

    Music’s universality and its ability to deeply affect emotions suggest an evolutionary origin. Previous investigators have found that naltrexone (NTX), a μ-opioid antagonist, may induce reversible anhedonia, attenuating both positive and negative emotions. The neurochemical basis of musical experience is not well-understood, and the NTX-induced anhedonia hypothesis has not been tested with music. Accordingly, we administered NTX or placebo on two different days in a double-blind crossover study, and assessed participants’ responses to music using both psychophysiological (objective) and behavioral (subjective) measures. We found that both positive and negative emotions were attenuated. We conclude that endogenous opioids are critical to experiencing both positive and negative emotions in music, and that music uses the same reward pathways as food, drug and sexual pleasure. Our findings add to the growing body of evidence for the evolutionary biological substrates of music. PMID:28176798

  5. Emotion effects on implicit and explicit musical memory in normal aging.

    Science.gov (United States)

    Narme, Pauline; Peretz, Isabelle; Strub, Marie-Laure; Ergis, Anne-Marie

    2016-12-01

    Normal aging affects explicit memory while leaving implicit memory relatively spared. Normal aging also modifies how emotions are processed and experienced, with increasing evidence that older adults (OAs) focus more on positive information than younger adults (YAs). The aim of the present study was to investigate how age-related changes in emotion processing influence explicit and implicit memory. We used emotional melodies that differed in terms of valence (positive or negative) and arousal (high or low). Implicit memory was assessed with a preference task exploiting exposure effects, and explicit memory with a recognition task. Results indicated that effects of valence and arousal interacted to modulate both implicit and explicit memory in YAs. In OAs, recognition was poorer than in YAs; however, recognition of positive and high-arousal (happy) studied melodies was comparable. Insofar as socioemotional selectivity theory (SST) predicts a preservation of the recognition of positive information, our findings are not fully consistent with the extension of this theory to positive melodies since recognition of low-arousal (peaceful) studied melodies was poorer in OAs. In the preference task, YAs showed stronger exposure effects than OAs, suggesting an age-related decline of implicit memory. This impairment is smaller than the one observed for explicit memory (recognition), extending to the musical domain the dissociation between explicit memory decline and implicit memory relative preservation in aging. Finally, the disproportionate preference for positive material seen in OAs did not translate into stronger exposure effects for positive material suggesting no age-related emotional bias in implicit memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Music therapy for depression

    NARCIS (Netherlands)

    Aalbers, Sonja; Fusar-Poli, Laura; Freeman, Ruth E.; Spreen, Marinus; Ket, Johannes C.F.; Vink, Annemiek C.; Maratos, Anna; Crawford, Mike; Chen, Xi Jing; Gold, Christian

    2017-01-01

    Background: Depression is a highly prevalent mood disorder that is characterised by persistent low mood, diminished interest, and loss of pleasure. Music therapy may be helpful in modulating moods and emotions. An update of the 2008 Cochrane review was needed to improve knowledge on effects of music

  7. Content-based Music Search and Recommendation System

    Science.gov (United States)

    Takegawa, Kazuki; Hijikata, Yoshinori; Nishida, Shogo

    Recently, the volume of music data on the Internet has increased rapidly. This has raised the cost for users of finding music that suits their preferences in such a large data set. We propose a content-based music search and recommendation system. The system has an interface for searching and finding music data and an interface for editing the user profile that is necessary for music recommendation. By visualizing the feature space of the music and the user profile, the user can search for music and edit the profile; one common way to obtain such a two-dimensional view of the feature space is sketched below. Furthermore, by exploiting the information that each visualized object provides in a mutually complementary manner, we make it easier for the user to search for music and edit the profile. Concretely, the system presents information obtained from the user profile when the user is searching for music, and information obtained from the feature space of the music when the user is editing the profile.
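
    The interface described above depends on visualising a high-dimensional audio feature space so that the user can browse tracks and edit a profile. One common way to obtain such a two-dimensional view (not necessarily the method used by the authors) is a linear projection such as PCA; in the sketch below the track names and feature values are invented, and scikit-learn's PCA is used purely for illustration.

    # Hypothetical sketch: project audio feature vectors to 2-D for a browsable music map.
    import numpy as np
    from sklearn.decomposition import PCA

    tracks = ["song1", "song2", "song3", "song4"]
    features = np.array([
        [120.0, 0.8, 0.3, 0.6],  # e.g., tempo, energy, valence, brightness (invented values)
        [ 70.0, 0.2, 0.7, 0.4],
        [118.0, 0.7, 0.4, 0.5],
        [ 90.0, 0.4, 0.6, 0.3],
    ])

    # standardise each feature, then keep the two directions of largest variance
    features = (features - features.mean(axis=0)) / features.std(axis=0)
    coords = PCA(n_components=2).fit_transform(features)

    for name, (x, y) in zip(tracks, coords):
        print(f"{name}: ({x:+.2f}, {y:+.2f})")  # position at which to draw the track on the 2-D map

    A user profile could be drawn on the same map, for example as the centroid of the tracks the user has marked as favourites.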

  8. Music information retrieval based on tonal harmony

    NARCIS (Netherlands)

    de Haas, W.B.|info:eu-repo/dai/nl/304841250

    2012-01-01

    With the emergence of large scale digitalisation of music, content-based methods to maintain, structure, and provide access to digital music repositories have become increasingly important. This doctoral dissertation covers a wide range of methods that aim to aid in the organisation of music

  9. The Musical Self-Concept of Chinese Music Students

    Directory of Open Access Journals (Sweden)

    Suse ePetersen

    2016-05-01

    The relationship between self-concept and societal settings has been widely investigated in several Western and Asian countries, with respect to the academic self-concept in an educational environment. Although the musical self-concept is highly relevant to musical development and performance, there is a lack of research exploring how the musical self-concept evolves in different cultural settings and societies. In particular, there have been no enquiries yet in the Chinese music education environment. This study’s goal was the characterization of musical self-concept types among music students at a university in Beijing, China. The Musical Self-Concept Inquiry (MUSCI), including ability, emotional, physical, cognitive, and social facets, was used to assess the students’ musical self-concepts (N=97). The data analysis led to three significantly distinct clusters and corresponding musical self-concept types. The types were especially distinct in the students’ perception of their musical ambitions and abilities; their movement, rhythm and dancing affinity; and the spiritual and social aspects of music. The professional aims and perspectives, and the aspects of the students’ sociodemographic background also differed between the clusters. This study is one of the first research endeavors addressing musical self-concepts in China. The empirical identification of the self-concept types offers a basis for future research on the connections between education, the development of musical achievement, and the musical self-concept in societal settings with differing understandings of the self.

  10. The Musical Self-Concept of Chinese Music Students.

    Science.gov (United States)

    Petersen, Suse; Camp, Marc-Antoine

    2016-01-01

    The relationship between self-concept and societal settings has been widely investigated in several Western and Asian countries, with respect to the academic self-concept in an educational environment. Although the musical self-concept is highly relevant to musical development and performance, there is a lack of research exploring how the musical self-concept evolves in different cultural settings and societies. In particular, there have been no enquiries yet in the Chinese music education environment. This study's goal was the characterization of musical self-concept types among music students at a University in Beijing, China. The Musical Self-Concept Inquiry-including ability, emotional, physical, cognitive, and social facets-was used to assess the students' musical self-concepts (N = 97). The data analysis led to three significantly distinct clusters and corresponding musical self-concept types. The types were especially distinct, in the students' perception of their musical ambitions and abilities; their movement, rhythm and dancing affinity; and the spiritual and social aspects of music. The professional aims and perspectives, and the aspects of the students' sociodemographic background also differed between the clusters. This study is one of the first research endeavors addressing musical self-concepts in China. The empirical identification of the self-concept types offers a basis for future research on the connections between education, the development of musical achievement, and the musical self-concept in societal settings with differing understandings of the self.

  11. The Musical Self-Concept of Chinese Music Students

    Science.gov (United States)

    Petersen, Suse; Camp, Marc-Antoine

    2016-01-01

    The relationship between self-concept and societal settings has been widely investigated in several Western and Asian countries, with respect to the academic self-concept in an educational environment. Although the musical self-concept is highly relevant to musical development and performance, there is a lack of research exploring how the musical self-concept evolves in different cultural settings and societies. In particular, there have been no enquiries yet in the Chinese music education environment. This study’s goal was the characterization of musical self-concept types among music students at a University in Beijing, China. The Musical Self-Concept Inquiry—including ability, emotional, physical, cognitive, and social facets—was used to assess the students’ musical self-concepts (N = 97). The data analysis led to three significantly distinct clusters and corresponding musical self-concept types. The types were especially distinct, in the students’ perception of their musical ambitions and abilities; their movement, rhythm and dancing affinity; and the spiritual and social aspects of music. The professional aims and perspectives, and the aspects of the students’ sociodemographic background also differed between the clusters. This study is one of the first research endeavors addressing musical self-concepts in China. The empirical identification of the self-concept types offers a basis for future research on the connections between education, the development of musical achievement, and the musical self-concept in societal settings with differing understandings of the self. PMID:27303337

  12. Treating the emotional and motivational inhibition of highly gifted underachievers with music psychotherapy: Meta-analysis of an evaluation study based on a sequential design.

    Science.gov (United States)

    Schiltz, L

    The psychological and neuropsychological characteristics of gifted children and adolescents are analysed, as well as the emotional and behavioural risks linked to this condition. A prospective follow-up study of N=93 highly gifted students suffering from school failure at the beginning of adolescence was implemented. They were treated with an integrated form of music psychotherapy and verbal psychotherapy in 5 separate groups. The treatment methodology combined active musical improvisation with the writing of stories or the production of drawings under musical induction, followed by verbal elaboration in the cognitive-psychodynamic psychotherapeutic tradition. The evaluation was based on a mixed-methods design, combining psychometric scales, projective tests and expressive tests. Comparative pretest-posttest, correlational and multidimensional analyses were computed, using non-parametric statistical procedures adapted to small samples and to data belonging to a mixed level of measurement. We present a meta-analysis of the confirmatory results in the 5 subgroups. There was a significant increase in the capacity for concentration, the capacity for imaginary and symbolic elaboration, pictorial and literary creativity, self-esteem and the quality of coping strategies, as well as in school marks. There was a significant decrease in defensive functioning and in embitterment and resignation. The latent dimensions extracted with Optimal Scaling procedures from the correlational matrices of the Delta values of the TAT and TSD-Z were meaningful in the light of the state of the art. The results of the study confirm a prior theoretical model developed during the preparatory stage of the research project. They are interpreted in the light of recent findings in the developmental and clinical psychology of adolescence and open many avenues for future research.

  13. Emotional and psychophysiological responses to tempo, mode, and percussiveness

    NARCIS (Netherlands)

    van der Zwaag, Marjolein; Westerink, Joyce H.D.M.; van den Broek, Egon

    People often listen to music to influence their emotional state. However, the specific musical characteristics which cause this process are not yet fully understood. We have investigated the influence of the musical characteristics of tempo, mode and percussiveness on our emotions. In a quest

  14. Musical taste, employment, education, and global region.

    Science.gov (United States)

    North, Adrian C; Davidson, Jane W

    2013-10-01

    Sociologists have argued that musical taste should vary between social groups, but have not considered whether the effect extends beyond taste into uses of music and also emotional reactions to music. Moreover, previous research has ignored the culture in which participants are located. The present research employed a large sample from five post-industrial global regions and showed that musical taste differed between regions but not according to education and employment; and that there were three-way interactions between education, employment, and region in the uses to which participants put music and also their typical emotional reactions. In addition to providing partial support for existing sociological theory, the findings highlight the potential of culture as a variable in future quantitative research on taste. © 2013 The Scandinavian Psychological Associations.

  15. Music influences ratings of the affect of visual stimuli

    NARCIS (Netherlands)

    Hanser, W.E.; Mark, R.E.

    2013-01-01

    This review provides an overview of recent studies that have examined how music influences the judgment of emotional stimuli, including affective pictures and film clips. The relevant findings are incorporated within a broader theory of music and emotion, and suggestions for future research are

  16. A hypothesis on the biological origins and social evolution of music and dance

    Directory of Open Access Journals (Sweden)

    Tianyan eWang

    2015-02-01

    The origins of music and musical emotions are still an enigma; here I propose a comprehensive hypothesis on the origins and evolution of music, dance and speech from a biological and sociological perspective. I suggest that every pitch interval between neighboring notes in music represents a corresponding movement pattern through interpreting the Doppler effect of sound, which not only provides a possible explanation for the transposition invariance of music, but also integrates music and dance into a common form—rhythmic movements. Accordingly, investigating the origins of music poses the question: why do humans appreciate rhythmic movements? I suggest that human appreciation of rhythmic movements and rhythmic events developed from the natural selection of organisms adapting to the internal and external rhythmic environments. The perception and production of, as well as synchronization with, external and internal rhythms are so vital for an organism’s survival and reproduction that animals have a rhythm-related reward and emotion (RRRE) system. The RRRE system enables the appreciation of rhythmic movements and events, and is integral to the origination of music, dance and speech. The first type of rewards and emotions (rhythm-related rewards and emotions, RRREs) are evoked by music and dance, and have biological and social functions, which, in turn, promote the evolution of music, dance and speech. These functions also evoke a second type of rewards and emotions, which I name society-related rewards and emotions (SRREs). The neural circuits of RRREs and SRREs develop in species formation and personal growth, with congenital and acquired characteristics, respectively; that is, music is the combination of nature and culture. This hypothesis provides probable selection pressures and outlines the evolution of music, dance and speech. The links between the Doppler effect and the RRREs and SRREs can be empirically tested, making the current hypothesis

  17. A hypothesis on the biological origins and social evolution of music and dance.

    Science.gov (United States)

    Wang, Tianyan

    2015-01-01

    The origins of music and musical emotions are still an enigma; here I propose a comprehensive hypothesis on the origins and evolution of music, dance, and speech from a biological and sociological perspective. I suggest that every pitch interval between neighboring notes in music represents a corresponding movement pattern through interpreting the Doppler effect of sound, which not only provides a possible explanation for the transposition invariance of music, but also integrates music and dance into a common form: rhythmic movements. Accordingly, investigating the origins of music poses the question: why do humans appreciate rhythmic movements? I suggest that human appreciation of rhythmic movements and rhythmic events developed from the natural selection of organisms adapting to the internal and external rhythmic environments. The perception and production of, as well as synchronization with, external and internal rhythms are so vital for an organism's survival and reproduction that animals have a rhythm-related reward and emotion (RRRE) system. The RRRE system enables the appreciation of rhythmic movements and events, and is integral to the origination of music, dance and speech. The first type of rewards and emotions (rhythm-related rewards and emotions, RRREs) are evoked by music and dance, and have biological and social functions, which, in turn, promote the evolution of music, dance and speech. These functions also evoke a second type of rewards and emotions, which I name society-related rewards and emotions (SRREs). The neural circuits of RRREs and SRREs develop in species formation and personal growth, with congenital and acquired characteristics, respectively; that is, music is the combination of nature and culture. This hypothesis provides probable selection pressures and outlines the evolution of music, dance, and speech. The links between the Doppler effect and the RRREs and SRREs can be empirically tested, making the current hypothesis scientifically

  18. Music cognition: a developmental perspective.

    Science.gov (United States)

    Stalinski, Stephanie M; Schellenberg, E Glenn

    2012-10-01

    Although music is universal, there is a great deal of cultural variability in music structures. Nevertheless, some aspects of music processing generalize across cultures, whereas others rely heavily on the listening environment. Here, we discuss the development of musical knowledge, focusing on four themes: (a) capabilities that are present early in development; (b) culture-general and culture-specific aspects of pitch and rhythm processing; (c) age-related changes in pitch perception; and (d) developmental changes in how listeners perceive emotion in music. Copyright © 2012 Cognitive Science Society, Inc.

  19. Musical training as an alternative and effective method for neuro-education and neuro-rehabilitation.

    Science.gov (United States)

    François, Clément; Grau-Sánchez, Jennifer; Duarte, Esther; Rodriguez-Fornells, Antoni

    2015-01-01

    In the last decade, important advances in the fields of cognitive science, psychology, and neuroscience have contributed greatly to improving our knowledge of brain functioning. More recently, a line of research has been developed that aims at using musical training and practice as alternative tools for boosting specific perceptual, motor, cognitive, and emotional skills, both in the healthy population and in neurologic patients. These findings hold great hope for a better treatment of language-based learning disorders or motor impairment in chronic non-communicable diseases. In the first part of this review, we highlight several studies showing that learning to play a musical instrument can induce substantial neuroplastic changes in cortical and subcortical regions of the motor, auditory, and speech processing networks in the healthy population. In the second part, we provide an overview of the evidence showing that musical training can be an alternative, low-cost, and effective method for the treatment of populations with language-based learning impairments. We then report results of the few studies showing that training with musical instruments can have positive effects on motor, emotional, and cognitive deficits observed in patients with non-communicable diseases such as stroke or Parkinson's disease. Despite inherent differences between musical training in educational and rehabilitation contexts, these results favor the idea that the structural, multimodal, and emotional properties of musical training can play an important role in developing new, creative, and cost-effective intervention programs for education and rehabilitation in the near future.

  20. Musical training as an alternative and effective method for neuro-education and neuro-rehabilitation

    Directory of Open Access Journals (Sweden)

    Clément eFrançois

    2015-04-01

    In the last decade, important advances in the fields of cognitive science, psychology and neuroscience have contributed greatly to improving our knowledge of brain functioning. More recently, a line of research has been developed that aims at using musical training and practice as alternative tools for boosting specific perceptual, motor, cognitive and emotional skills, both in the healthy population and in neurologic patients. These findings hold great hope for a better treatment of language-based learning disorders or motor impairment in chronic non-communicable diseases. In the first part of this review, we highlight several studies showing that learning to play a musical instrument can induce substantial neuroplastic changes in cortical and subcortical regions of the motor, auditory and speech processing networks in the healthy population. In the second part, we provide an overview of the evidence showing that musical training can be an alternative, low-cost and effective method for the treatment of populations with language-based learning impairments. We then report results of the few studies showing that training with musical instruments can have positive effects on motor, emotional and cognitive deficits observed in patients with non-communicable diseases such as stroke or Parkinson's disease. Despite inherent differences between musical training in educational and rehabilitation contexts, these results favour the idea that the structural, multimodal and emotional properties of musical training can play an important role in developing new, creative and cost-effective intervention programs for education and rehabilitation in the near future.

  1. The function of music in the development of empathy in children: the construction of the educational course “Music and well-being” and the evaluation of its effects

    Directory of Open Access Journals (Sweden)

    Giuseppe Sellari

    2011-12-01

    In the present research, the authors examined the contents and methods of the educational course Music and Well-Being (which uses global musical activities based on listening and on vocal and instrumental production) in order to check its efficiency in improving empathy in a group of four-year-old children. The results show that the training was effective in improving the empathic ability of the children towards all the emotions considered (joy, sadness, fear, anger), and above all towards emotions of negative hedonic tone.

  2. Islands and Islandness in Rock Music Lyrics

    Directory of Open Access Journals (Sweden)

    Daniele Mezzana

    2012-05-01

    This paper presents a first, qualitative exploration, based on a review of 412 songs produced between 1960 and 2009, of islands in rock music as both social products and social tools that potentially contribute to shaping ideas, emotions, will, and desires. An initial taxonomy of 24 themes clustered under five meta-themes of space, lifestyle, emotions, symbolism, and social-political relations is provided, together with some proposals for further research.

  3. [The phenomenon of pain in the history of music – observations of neurobiological mechanisms of pain and its expressions in western music].

    Science.gov (United States)

    Gasenzer, E R; Neugebauer, E A M

    2014-12-01

    The purpose of this essay is to provide a historical overview of how music has dealt with the emotion and sensation of pain, as well as an overview of the more recent medical research into the relationship between music and pain. Since the beginnings of western music, humans have put their emotions into musical sounds. During the baroque era, composers developed musical styles that expressed human emotions and our experiences of nature. In some compositions, such as operas, we find musical representations of pain. During Romanticism, artists began to intrude into the soul of their audience. New expressive harmonies and styles touch the soul and the consciousness of the listener. With the inception of atonality, dissonant sounds were experienced as physical pain. The physiology of deep brain structures (such as the thalamus, hypothalamus or limbic system) and the physiology of the acoustic pathway process consonant and dissonant sound and musical perceptions in ways that are similar to the perception of pain. In the thalamus and in the limbic system, music and pain meet. The relationship between music and pain is a wide-open research field, with such interesting questions as the role of dopamine in the perception of consonant or dissonant music, or the processing of pain during music listening. Musicology has not yet embarked on a general investigation of how musical compositions express pain and how that has developed or changed over the centuries. Music therapy, neuro-musicology and the performing arts medicine are scientific fields that offer a lot of ideas for medical and musical research projects. © Georg Thieme Verlag KG Stuttgart · New York.

  4. Listening to music can influence hedonic and sensory perceptions of gelati.

    Science.gov (United States)

    Kantono, Kevin; Hamid, Nazimah; Shepherd, Daniel; Yoo, Michelle J Y; Grazioli, Gianpaolo; Carr, B Thomas

    2016-05-01

    The dominant taste sensations of three different types of chocolate gelati (milk chocolate, dark chocolate, and bittersweet chocolate) were determined using forty-five trained panellists exposed to a silent reference condition and three music samples differing in hedonic ratings. The temporal dominance of sensations (TDS) method was used to measure temporal taste perceptions. The emotional states of panellists were measured after each gelati-music pairing using a scale specifically developed for this study. The TDS difference curves showed significant differences between gelati samples and music conditions (p < 0.05): sweetness was more dominant when liked music was played, while bitterness was more dominant for disliked music. A joint Canonical Variate Analysis (CVA) further explained the variability in sensory and emotion data. The first and second dimensions explained 78% of the variance, with the first dimension separating liked and disliked music and the second dimension separating liked music and silence. Gelati samples consumed while listening to liked and neutral music had positive scores, and were separated from those consumed under the disliked music condition along the first dimension. Liked music and disliked music were further correlated with positive and negative emotions respectively. Findings indicate that listening to music influenced the hedonic and sensory impressions of the gelati. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Social representation of "music" in young adults: a cross-cultural study.

    Science.gov (United States)

    Manchaiah, Vinaya; Zhao, Fei; Widén, Stephen; Auzenne, Jasmin; Beukes, Eldré W; Ahmadi, Tayebeh; Tomé, David; Mahadeva, Deepthi; Krishna, Rajalakshmi; Germundsson, Per

    2017-01-01

    This study aimed to explore perceptions of and reactions to music in young adults (18-25 years) using the theory of social representations (TSR). The study used a cross-sectional survey design and included participants from India, Iran, Portugal, the USA and the UK. Data were analysed using various qualitative and quantitative methods. The study sample included 534 young adults. The Chi-square analysis showed significant differences between the countries regarding the informants' perception of music. The most positive connotations about music were found in the responses obtained from Iranian participants (82.2%), followed by Portuguese participants (80.6%), while the most negative connotations about music were found in the responses obtained from Indian participants (18.2%), followed by Iranian participants (7.3%). The participants' responses fell into 19 main categories based on their meaning; however, not all categories were found in all five countries. The co-occurrence analysis results generally indicate that the category "positive emotions or actions" was the most frequent category occurring in all five countries. The results indicate that music is generally considered to bring positive emotions to people within these societies, although a small percentage of responses indicate some negative consequences of music.

  6. Can the Use of Background Music Improve the Behaviour and Academic Performance of Children with Emotional and Behavioural Difficulties?

    Science.gov (United States)

    Hallam, Susan; Price, John

    1998-01-01

    This study examined effects of providing "mood calming" background music in a special class for children with emotional and behavioral difficulties. Findings indicated a significant improvement in behavior and mathematics performance for all 10 of the children, with effects most noticeable for children with problems related to constant stimulus…

  7. Understanding the Score: Film Music Communicating to and Influencing the Audience

    Science.gov (United States)

    Green, Jessica

    2010-01-01

    As evidenced by the strong use of musical scores in modern film, film music has come a long way since the initial silent film's piano or organ accompaniment that simply marked general emotions or moods. Though film music has retained its basic functions of reflecting emotions and moods in the images, the film score has progressed into actually…

  8. Music-to-Color Associations of Single-Line Piano Melodies in Non-synesthetes.

    Science.gov (United States)

    Palmer, Stephen E; Langlois, Thomas A; Schloss, Karen B

    2016-01-01

    Prior research has shown that non-synesthetes' color associations to classical orchestral music are strongly mediated by emotion. The present study examines similar cross-modal music-to-color associations for much better controlled musical stimuli: 64 single-line piano melodies that were generated from four basic melodies by Mozart, whose global musical parameters were manipulated in tempo (slow/fast), note-density (sparse/dense), mode (major/minor) and pitch-height (low/high). Participants first chose the three colors (from 37) that they judged to be most consistent with (and, later, the three that were most inconsistent with) the music they were hearing. They later rated each melody and each color for the strength of its association along four emotional dimensions: happy/sad, agitated/calm, angry/not-angry and strong/weak. The cross-modal choices showed that faster music in the major mode was associated with lighter, more saturated, yellower (warmer) colors than slower music in the minor mode. These results replicate and extend those of Palmer et al. (2013, Proc. Natl Acad. Sci. 110, 8836-8841) with more precisely controlled musical stimuli. Further results replicated strong evidence for emotional mediation of these cross-modal associations, in that the emotional ratings of the melodies were very highly correlated with the emotional associations of the colors chosen as going best/worst with the melodies (r = 0.92, 0.85, 0.82 and 0.70 for happy/sad, strong/weak, angry/not-angry and agitated/calm, respectively). The results are discussed in terms of common emotional associations forming a cross-modal bridge between highly disparate sensory inputs.
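
    The emotional-mediation evidence in the record above is a set of correlations between the emotion ratings of the melodies and the emotion ratings of the colours chosen for them. The sketch below illustrates that kind of analysis on invented ratings for a single dimension; it is not the study's data or code.

    # Hypothetical sketch: correlate emotion ratings of melodies with the emotion ratings
    # of the colours participants chose for them (all values invented for illustration).
    import numpy as np

    # happy/sad ratings for eight melodies (higher = happier)
    melody_happy_sad = np.array([2.1, -1.3, 0.5, 1.8, -2.0, 0.2, 1.1, -0.7])
    # mean happy/sad rating of the colours picked as going best with each melody
    colour_happy_sad = np.array([1.8, -1.0, 0.7, 1.5, -1.7, 0.0, 0.9, -0.4])

    r = np.corrcoef(melody_happy_sad, colour_happy_sad)[0, 1]
    print(f"Pearson r on the happy/sad dimension: {r:.2f}")

    A high correlation of this kind, computed separately for each emotional dimension, is what the r values reported above summarise.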

  9. Effects of Voice on Emotional Arousal

    Directory of Open Access Journals (Sweden)

    Psyche eLoui

    2013-10-01

    Music is a powerful medium capable of eliciting a broad range of emotions. Although the relationship between language and music is well documented, relatively little is known about the effects of lyrics and the voice on the emotional processing of music and on listeners’ preferences. In the present study, we investigated the effects of vocals in music on participants’ perceived valence and arousal in songs. Participants (N = 50) made valence and arousal ratings for familiar songs that were presented with and without the voice. We observed robust effects of vocal content on perceived arousal. Furthermore, we found that the effect of the voice on enhancing arousal ratings is independent of familiarity of the song and differs across genders and age: females were more influenced by vocals than males; furthermore, these gender effects were enhanced among older adults. Results highlight the effects of gender and aging in emotion perception and are discussed in terms of the social roles of music.

  10. Apollo's gift: new aspects of neurologic music therapy.

    Science.gov (United States)

    Altenmüller, Eckart; Schlaug, Gottfried

    2015-01-01

    Music listening and music making activities are powerful tools to engage multisensory and motor networks, induce changes within these networks, and foster links between distant, but functionally related brain regions with continued and life-long musical practice. These multimodal effects of music together with music's ability to tap into the emotion and reward system in the brain can be used to facilitate and enhance therapeutic approaches geared toward rehabilitating and restoring neurological dysfunctions and impairments of an acquired or congenital brain disorder. In this article, we review plastic changes in functional networks and structural components of the brain in response to short- and long-term music listening and music making activities. The specific influence of music on the developing brain is emphasized and possible transfer effects on emotional and cognitive processes are discussed. Furthermore, we present data on the potential of using musical tools and activities to support and facilitate neurorehabilitation. We will focus on interventions such as melodic intonation therapy and music-supported motor rehabilitation to showcase the effects of neurologic music therapies and discuss their underlying neural mechanisms. © 2015 Elsevier B.V. All rights reserved.

  11. Emotional cues, emotional signals, and their contrasting effects on listener valence

    DEFF Research Database (Denmark)

    Christensen, Justin

    2015-01-01

    ... and of benefit to both the sender and the receiver of the signal, otherwise they would cease to have the intended effect of communication. In contrast with signals, animal cues are much more commonly unimodal as they are unintentional by the sender. In my research, I investigate whether subjects exhibit ... are more emotional cues (e.g. sadness or calmness). My hypothesis is that musical and sound stimuli that are mimetic of emotional signals should combine to elicit a stronger response when presented as a multimodal stimulus as opposed to as a unimodal stimulus, whereas musical or sound stimuli that are mimetic of emotional cues interact in less clear and less cohesive manners with their corresponding haptic signals. For my investigations, subjects listen to samples from the International Affective Digital Sounds Library[2] and selected musical works on speakers in combination with a tactile transducer ...

  12. Challenges facing theories of music and language co-evolution ...

    African Journals Online (AJOL)

    Some of the issues raised include the definition of the term 'music', the status of music as some sort of communicative medium for the expression of emotion, musical meaning, musical universals and grammars, and the issue of empirical evidence from other disciplines. Journal of the Musical Arts in Africa Volume 6 2009, ...

  13. Parents and Young Children with Disabilities: The Effects of a Home-Based Music Therapy Program on Parent-Child Interactions.

    Science.gov (United States)

    Yang, Yen-Hsuan

    2016-01-01

    Responsive parenting style and synchronous parent-child interactions have a positive impact on children in terms of language, cognitive, and social-emotional development. Despite widely documented benefits of music therapy on parent-child interactions, empirical evidence for the effects of music therapy on parent-child synchrony is lacking. To examine effects of parent-child dyads' participation in a six-week home-based music therapy program on parent response, child initiation, and parent-child synchrony, as well as parents' daily use of musical activities with their child. Twenty-six parent-child dyads participated in this pretest-posttest within-subject single-group design study. Participating dyads included parents and their child with disabilities or developmental delays (ages 1-3 years inclusive). Parent-child dyads participated in a home-based music therapy program that included six weekly 40-minute sessions, and incorporated five responsive teaching strategies (i.e., affect, match, reciprocity, shared control, and contingency). Observational data were recorded for parent-child interactions and parent-child synchrony. Parents' positive physical and verbal responses, as well as children's positive verbal initiations, increased significantly pre- to post-intervention; however, children's positive physical initiations did not increase significantly. Parent-child synchrony also improved significantly pre- to post-intervention. Findings support the use of home-based music therapy programs to facilitate parent-child interactions in the areas of parental responsiveness and child-initiated communication, as well as parent-child synchrony. © the American Music Therapy Association 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. Negative mood state enhances the susceptibility to unpleasant events: neural correlates from a music-primed emotion classification task.

    Directory of Open Access Journals (Sweden)

    Jiajin Yuan

    BACKGROUND: Various affective disorders are linked with enhanced processing of unpleasant stimuli. However, this link is likely a result of the dominant negative mood derived from the disorder, rather than a result of the disorder itself. Additionally, little is currently known about the influence of mood on the susceptibility to emotional events in healthy populations. METHOD: Event-Related Potentials (ERPs) were recorded for pleasant, neutral and unpleasant pictures while subjects performed an emotional/neutral picture classification task during positive, neutral, or negative mood induced by instrumental Chinese music. RESULTS: Late Positive Potential (LPP) amplitudes were positively related to the affective arousal of pictures. The emotional responding to unpleasant pictures, indicated by the unpleasant-neutral differences in LPPs, was enhanced during negative compared to neutral and positive moods in the entire LPP time window (600-1000 ms). The magnitude of this enhancement was larger with increasing self-reported negative mood. In contrast, this responding was reduced during positive compared to neutral mood in the 800-1000 ms interval. Additionally, LPP reactions to pleasant stimuli were similar across positive, neutral and negative moods except those in the 800-900 ms interval. IMPLICATIONS: Negative mood intensifies humans' susceptibility to unpleasant events in healthy individuals. In contrast, music-induced happy mood is effective in reducing the susceptibility to these events. Practical implications of these findings were discussed.
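
    The ERP measure in the record above is the mean Late Positive Potential (LPP) amplitude within fixed time windows (e.g., 600-1000 ms), with emotional responding indexed by the unpleasant-minus-neutral difference. The sketch below shows that computation on simulated condition averages; the sampling rate, epoch length and amplitudes are assumptions made only for the example.

    # Hypothetical sketch: mean LPP amplitude in a time window and an unpleasant-neutral difference.
    # The ERP waveforms are simulated; a real analysis would use averaged EEG epochs per condition.
    import numpy as np

    fs = 250                          # assumed sampling rate in Hz
    t = np.arange(-0.2, 1.2, 1 / fs)  # epoch from -200 ms to 1200 ms

    rng = np.random.default_rng(0)
    erp_unpleasant = 5.0 * np.exp(-((t - 0.7) ** 2) / 0.05) + rng.normal(0, 0.3, t.size)
    erp_neutral = 3.0 * np.exp(-((t - 0.7) ** 2) / 0.05) + rng.normal(0, 0.3, t.size)

    def mean_amplitude(erp, t, start, end):
        # average amplitude within [start, end) seconds
        window = (t >= start) & (t < end)
        return float(erp[window].mean())

    lpp_unpleasant = mean_amplitude(erp_unpleasant, t, 0.6, 1.0)
    lpp_neutral = mean_amplitude(erp_neutral, t, 0.6, 1.0)
    print(f"LPP 600-1000 ms: unpleasant={lpp_unpleasant:.2f} uV, neutral={lpp_neutral:.2f} uV, "
          f"difference={lpp_unpleasant - lpp_neutral:.2f} uV")

    Comparing this difference across mood-induction conditions is, in outline, how the modulation of emotional responding by mood is quantified.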

  15. The efficacy of musical emotions provoked by Mozart's music for the reconciliation of cognitive dissonance.

    Science.gov (United States)

    Masataka, Nobuo; Perlovsky, Leonid

    2012-01-01

    Debates on the origin and function of music have a long history. While some scientists argue that music itself plays no adaptive role in human evolution, others suggest that music clearly has an evolutionary role, and point to music's universality. A recent hypothesis suggested that a fundamental function of music has been to help mitigate cognitive dissonance, which is a discomfort caused by holding conflicting cognitions simultaneously. It usually leads to devaluation of conflicting knowledge. Here we provide experimental confirmation of this hypothesis using a classical paradigm known to create cognitive dissonance. Results of our experiment reveal that exposure to Mozart's music exerted a strongly positive influence upon the performance of young children and served as a basis by which they were able to reconcile the cognitive dissonance.

  16. Music training improves speech-in-noise perception: Longitudinal evidence from a community-based music program.

    Science.gov (United States)

    Slater, Jessica; Skoe, Erika; Strait, Dana L; O'Connell, Samantha; Thompson, Elaine; Kraus, Nina

    2015-09-15

    Music training may strengthen auditory skills that help children not only in musical performance but in everyday communication. Comparisons of musicians and non-musicians across the lifespan have provided some evidence for a "musician advantage" in understanding speech in noise, although reports have been mixed. Controlled longitudinal studies are essential to disentangle effects of training from pre-existing differences, and to determine how much music training is necessary to confer benefits. We followed a cohort of elementary school children for 2 years, assessing their ability to perceive speech in noise before and after musical training. After the initial assessment, participants were randomly assigned to one of two groups: one group began music training right away and completed 2 years of training, while the second group waited a year and then received 1 year of music training. Outcomes provide the first longitudinal evidence that speech-in-noise perception improves after 2 years of group music training. The children were enrolled in an established and successful community-based music program and followed the standard curriculum, therefore these findings provide an important link between laboratory-based research and real-world assessment of the impact of music training on everyday communication skills. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Music Therapy and Avatars: Reflections on Virtual Learning Environments for Music Therapy Students

    DEFF Research Database (Denmark)

    Story, Maya

    2014-01-01

    Music therapy students have expressed concerns regarding their general preparedness for practicum and working with new populations. Simulations in the immersive virtual world, Second Life, may provide a platform to assist in training music therapy students and enhance preparedness. This project examined the feasibility of utilizing Second Life to assist in training music therapists. Music therapy practicum students enrolled in a music therapy equivalency program participated in weekly one-hour virtual class meetings in Second Life, which included 5 sessions of music therapy simulations. At the end of the semester, students were interviewed in relation to their experiences, and interviews were analyzed qualitatively. Common themes among students were limitations of the Second Life software, students’ knowledge of the software, emotional reactions (both positive and negative), and distance learning.

  18. Music Therapy Clinical Practice in Hospice: Differences Between Home and Nursing Home Delivery.

    Science.gov (United States)

    Liu, Xiaodi; Burns, Debra S; Hilliard, Russell E; Stump, Timothy E; Unroe, Kathleen T

    2015-01-01

    Hospice music therapy is delivered in both homes and nursing homes (NH). No studies to date have explored differences in music therapy delivery between home and NH hospice patients. To compare music therapy referral reasons and delivery for hospice patients living in NH versus home. A retrospective, electronic medical record review was conducted from a large U.S. hospice of patients receiving music therapy between January 1, 2006, and December 31, 2010. Among the 4,804 patients, 2,930 lived in an NH and 1,847 patients lived at home. Compared to home, NH hospice patients were more likely to be female, older, unmarried, and Caucasian. For home hospice patients, the top referral reasons were patient/family emotional and spiritual support, quality of life, and isolation. The most frequent referral reasons for NH hospice patients were isolation, quality of life, and patient/family emotional and spiritual support. Differences in music therapy delivery depended mainly on patients' primary diagnosis and location of care. Results suggest differences in referral reasons and delivery based on an interaction between location of care and patient characteristics. Delivery differences are likely a result of individualized assessment and care plans developed by the music therapist and other interdisciplinary team members to address the unique needs of the patient. Thus, it is important to have professionally trained music therapists assess and provide tailored music-based interventions for patients with different referral reasons and personal characteristics. This study also supports staffing decisions based on patient need rather than average daily census. © the American Music Therapy Association 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. Fear across the senses: brain responses to music, vocalizations and facial expressions.

    Science.gov (United States)

    Aubé, William; Angulo-Perkins, Arafat; Peretz, Isabelle; Concha, Luis; Armony, Jorge L

    2015-03-01

    Intrinsic emotional expressions such as those communicated by faces and vocalizations have been shown to engage specific brain regions, such as the amygdala. Although music constitutes another powerful means to express emotions, the neural substrates involved in its processing remain poorly understood. In particular, it is unknown whether brain regions typically associated with processing 'biologically relevant' emotional expressions are also recruited by emotional music. To address this question, we conducted an event-related functional magnetic resonance imaging study in 47 healthy volunteers in which we directly compared responses to basic emotions (fear, sadness and happiness, as well as neutral) expressed through faces, non-linguistic vocalizations and short novel musical excerpts. Our results confirmed the importance of fear in emotional communication, as revealed by significant blood oxygen level-dependent signal increases in a cluster within the posterior amygdala and anterior hippocampus, as well as in the posterior insula across all three domains. Moreover, subject-specific amygdala responses to fearful music and vocalizations were correlated, consistent with the proposal that the brain circuitry involved in the processing of musical emotions might be shared with the one that has evolved for vocalizations. Overall, our results show that processing of fear expressed through music engages some of the same brain areas known to be crucial for detecting and evaluating threat-related information. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  20. Coping with stress: the effectiveness of different types of music.

    Science.gov (United States)

    Labbé, Elise; Schmidt, Nicholas; Babin, Jonathan; Pharr, Martha

    2007-12-01

    It was hypothesized that listening to classical or self-selected relaxing music after exposure to a stressor would result in significant reductions in anxiety, anger, and sympathetic nervous system arousal, and in increased relaxation, compared to sitting in silence or listening to heavy metal music. Fifty-six college students, 15 males and 41 females, were exposed to different music genres after experiencing a stressful test. Several 4 x 2 mixed design analyses of variance were conducted to determine the effects of music and silence conditions (heavy metal, classical, or self-selected music and silence) and time (pre-post music) on emotional state and physiological arousal. Results indicate that listening to self-selected or classical music, after exposure to a stressor, significantly reduces negative emotional states and physiological arousal compared to listening to heavy metal music or sitting in silence.