WorldWideScience

Sample records for vocal music sounds

  1. Time course of the influence of musical expertise on the processing of vocal and musical sounds.

    Science.gov (United States)

    Rigoulot, S; Pell, M D; Armony, J L

    2015-04-02

    Previous functional magnetic resonance imaging (fMRI) studies have suggested that different cerebral regions preferentially process human voice and music. Yet, little is known about the temporal course of the brain processes that decode the category of sounds and how expertise in one sound category can affect these processes. To address this question, we recorded the electroencephalogram (EEG) of 15 musicians and 18 non-musicians while they were listening to short musical excerpts (piano and violin) and vocal stimuli (speech and non-linguistic vocalizations). The participants' task was to detect noise targets embedded within the stream of sounds. Event-related potentials revealed an early differentiation of sound category, within the first 100 ms after the onset of the sound, with mostly increased responses to musical sounds. Importantly, this effect was modulated by the musical background of participants, as musicians were more responsive to music sounds than non-musicians, consistent with the notion that musical training increases sensitivity to music. In late temporal windows, brain responses were enhanced in response to vocal stimuli, but musicians were still more responsive to music. These results shed new light on the temporal course of the neural dynamics of auditory processing and reveal how it is shaped by stimulus category and by participants' expertise. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
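
    The event-related potential (ERP) measure described above amounts to epoching the continuous EEG around stimulus onsets, baseline-correcting each epoch against its pre-stimulus interval, and averaging across trials. The sketch below illustrates only that generic step on synthetic data; the epoch window, sampling rate, and array shapes are assumptions for illustration, not the authors' pipeline.

    ```python
    import numpy as np

    def erp_average(eeg, onsets, sfreq, tmin=-0.1, tmax=0.5):
        """Average EEG epochs around stimulus onsets.

        eeg    : array of shape (n_channels, n_samples), continuous recording
        onsets : stimulus onset times in seconds
        sfreq  : sampling rate in Hz
        Returns the mean epoch (n_channels, n_epoch_samples), baseline-corrected
        against the pre-stimulus interval.
        """
        start, stop = int(tmin * sfreq), int(tmax * sfreq)
        epochs = []
        for t in onsets:
            i = int(round(t * sfreq))
            if i + start >= 0 and i + stop <= eeg.shape[1]:
                epochs.append(eeg[:, i + start:i + stop])
        epochs = np.stack(epochs)                      # (n_trials, n_channels, n_times)
        baseline = epochs[:, :, :-start].mean(axis=2, keepdims=True)
        return (epochs - baseline).mean(axis=0)

    # Synthetic demo: 32 channels, 10 s at 250 Hz, one stimulus per second.
    rng = np.random.default_rng(0)
    eeg = rng.normal(size=(32, 2500))
    erp = erp_average(eeg, onsets=np.arange(1.0, 9.0), sfreq=250)
    print(erp.shape)   # (32, 150): a 600 ms epoch sampled at 250 Hz
    ```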

  2. University Vocal Training and Vocal Health of Music Educators and Music Therapists

    Science.gov (United States)

    Baker, Vicki D.; Cohen, Nicki

    2017-01-01

    The purpose of this study was to describe the university vocal training and vocal health of music educators and music therapists. The participants (N = 426), music educators (n = 351) and music therapists (n = 75), completed a survey addressing demographics, vocal training, voice usage, and vocal health. Both groups reported singing at least 50%…

  3. Vocal Imitations of Non-Vocal Sounds

    Science.gov (United States)

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes as no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long

  4. Effects of musical expertise on oscillatory brain activity in response to emotional sounds.

    Science.gov (United States)

    Nolden, Sophie; Rigoulot, Simon; Jolicoeur, Pierre; Armony, Jorge L

    2017-08-01

    Emotions can be conveyed through a variety of channels in the auditory domain, be it via music, non-linguistic vocalizations, or speech prosody. Moreover, recent studies suggest that expertise in one sound category can impact the processing of emotional sounds in other sound categories, as these studies found that musicians process emotional musical and vocal sounds more efficiently than non-musicians. However, the neural correlates of these modulations, especially their time course, are not very well understood. Consequently, we focused here on how the neural processing of emotional information varies as a function of sound category and expertise of participants. The electroencephalogram (EEG) of 20 non-musicians and 17 musicians was recorded while they listened to vocal (speech and vocalizations) and musical sounds. The amplitude of EEG-oscillatory activity in the theta, alpha, beta, and gamma bands was quantified, and Independent Component Analysis (ICA) was used to identify underlying components of brain activity in each band. Category differences were found in theta and alpha bands, due to larger responses to music and speech than to vocalizations, and in posterior beta, mainly due to differential processing of speech. In addition, we observed greater activation in frontal theta and alpha for musicians than for non-musicians, as well as an interaction between expertise and emotional content of sounds in frontal alpha. The results reflect musicians' expertise in recognition of emotion-conveying music, which seems to also generalize to emotional expressions conveyed by the human voice, in line with previous accounts of effects of expertise on the processing of musical and vocal sounds. Copyright © 2017 Elsevier Ltd. All rights reserved.
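
    As a rough illustration of the band-limited amplitude measure referred to above, one common way to quantify oscillatory activity is to band-pass the signal and take its analytic (Hilbert) amplitude. The band edges, filter order, and the Hilbert-envelope approach below are assumptions for illustration, not the authors' exact method (which additionally used ICA on band-specific activity).

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

    def band_amplitude(signal, sfreq, band):
        """Mean amplitude envelope of `signal` within one frequency band (Hz)."""
        low, high = BANDS[band]
        b, a = butter(4, [low / (sfreq / 2), high / (sfreq / 2)], btype="band")
        filtered = filtfilt(b, a, signal)           # zero-phase band-pass filtering
        envelope = np.abs(hilbert(filtered))        # instantaneous (analytic) amplitude
        return envelope.mean()

    # Synthetic single-channel check: a 10 Hz oscillation in noise shows up in alpha.
    sfreq = 250
    t = np.arange(0, 5, 1 / sfreq)
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(1).normal(size=t.size)
    print({band: round(band_amplitude(x, sfreq, band), 3) for band in BANDS})
    ```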

  5. Vocal health fitness to different music styles

    Directory of Open Access Journals (Sweden)

    Maria Cláudia Mendes Caminha Muniz

    2010-09-01

    Full Text Available Objective: To present the genres and styles currently found on the Western music scene, focusing on the practice of the singing voice. Methods: An observational, documentary study in which sound sources representing musical genres and styles familiar to the researchers were selected and analyzed with regard to their origins, formative elements, and vocal features. Alongside this, we carried out a literature review grounded in database searches and a free review of websites and classic texts in the area. Results: The selected styles (Rock and Roll, Heavy Metal, Thrash Metal, Grunge, Gothic Metal, Rap, Funk, Blues, R&B – Rhythm and Blues, Soul, Gospel, MPB, Samba, Forro, Sertanejo, Bossa Nova, Opera, and Chamber Music) were described, pointing out why the speech therapist should be informed about them and about aspects of the singing voice. This guidance may minimize the vocal damage that each style can cause, since each carries its own patterns to which the interpreter must submit. Conclusions: Singers will use a specific vocal pattern that resembles the musical style they intend to sing, regardless of any harm it may or may not cause to vocal health. When choosing a musical style, it is important that singers know and understand whether the way they use the vocal apparatus will injure the voice. They should also be aware that singing technique is necessary for vocal longevity.

  6. Perception of acoustic scale and size in musical instrument sounds.

    Science.gov (United States)

    van Dinther, Ralph; Patterson, Roy D

    2006-10-01

    There is size information in natural sounds. For example, as humans grow in height, their vocal tracts increase in length, producing a predictable decrease in the formant frequencies of speech sounds. Recent studies have shown that listeners can make fine discriminations about which of two speakers has the longer vocal tract, supporting the view that the auditory system discriminates changes on the acoustic-scale dimension. Listeners can also recognize vowels scaled well beyond the range of vocal tracts normally experienced, indicating that perception is robust to changes in acoustic scale. This paper reports two perceptual experiments designed to extend research on acoustic scale and size perception to the domain of musical sounds: The first study shows that listeners can discriminate the scale of musical instrument sounds reliably, although not quite as well as for voices. The second experiment shows that listeners can recognize the family of an instrument sound which has been modified in pitch and scale beyond the range of normal experience. We conclude that processing of acoustic scale in music perception is very similar to processing of acoustic scale in speech perception.
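
    The scaling relation described above (longer vocal tracts produce proportionally lower formant frequencies) can be illustrated with the textbook uniform-tube approximation, in which resonances fall at odd multiples of c/(4L). This is a deliberate simplification, not the paper's model, and the tract lengths and speed of sound below are assumed values for illustration only.

    ```python
    # Resonances of a uniform tube closed at the glottis and open at the lips fall at
    # odd multiples of c / (4 * L); lengthening the tract lowers every resonance by
    # the same proportion, which is the acoustic-scale cue discussed above.

    C = 35000.0  # approximate speed of sound in warm, humid air, in cm/s

    def tube_formants(length_cm, n=3):
        return [(2 * k - 1) * C / (4 * length_cm) for k in range(1, n + 1)]

    for length in (12.0, 15.0, 17.5):   # roughly child-like to adult-male-like tract lengths
        f1, f2, f3 = tube_formants(length)
        print(f"L = {length:4.1f} cm -> F1 {f1:6.0f} Hz, F2 {f2:6.0f} Hz, F3 {f3:6.0f} Hz")
    ```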

  7. Vocal Qualities in Music Theater Voice: Perceptions of Expert Pedagogues.

    Science.gov (United States)

    Bourne, Tracy; Kenny, Dianna

    2016-01-01

    To gather qualitative descriptions of music theater vocal qualities including belt, legit, and mix from expert pedagogues to better define this voice type. This is a prospective, semistructured interview. Twelve expert teachers from United States, United Kingdom, Asia, and Australia were interviewed by Skype and asked to identify characteristics of music theater vocal qualities including vocal production, physiology, esthetics, pitch range, and pedagogical techniques. Responses were compared with published studies on music theater voice. Belt and legit were generally described as distinct sounds with differing physiological and technical requirements. Teachers were concerned that belt should be taught "safely" to minimize vocal health risks. There was consensus between teachers and published research on the physiology of the glottis and vocal tract; however, teachers were not in agreement about breathing techniques. Neither were teachers in agreement about the meaning of "mix." Most participants described belt as heavily weighted, thick folds, thyroarytenoid-dominant, or chest register; however, there was no consensus on an appropriate term. Belt substyles were named and generally categorized by weightedness or tone color. Descriptions of male belt were less clear than for female belt. This survey provides an overview of expert pedagogical perspectives on the characteristics of belt, legit, and mix qualities in the music theater voice. Although teacher responses are generally in agreement with published research, there are still many controversial issues and gaps in knowledge and understanding of this vocal technique. Breathing techniques, vocal range, mix, male belt, and vocal registers require continuing investigation so that we can learn more about efficient and healthy vocal function in music theater singing. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  8. The Vocal Tract Organ: A New Musical Instrument Using 3-D Printed Vocal Tracts.

    Science.gov (United States)

    Howard, David M

    2017-10-27

    The advent and now increasingly widespread availability of 3-D printers is transforming our understanding of the natural world by enabling observations to be made in a tangible manner. This paper describes the use of 3-D printed models of the vocal tract for different vowels that are used to create an acoustic output when stimulated with an appropriate sound source in a new musical instrument: the Vocal Tract Organ. The shape of each printed vocal tract is recovered from magnetic resonance imaging. It sits atop a loudspeaker to which is provided an acoustic L-F model larynx input signal that is controlled by the notes played on a musical instrument digital interface device such as a keyboard. The larynx input is subject to vibrato with extent and frequency adjustable as desired within the ranges usually found for human singing. Polyphonic inputs for choral singing textures can be applied via a single loudspeaker and vocal tract, invoking the approximation of linearity in the voice production system, thereby making multiple vowel stops a possibility while keeping the complexity of the instrument in reasonable check. The Vocal Tract Organ offers a much more human and natural sounding result than the traditional Vox Humana stops found in larger pipe organs, offering the possibility of enhancing pipe organs of the future as well as becoming the basis for a "multi-vowel" chamber organ in its own right. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
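
    A minimal sketch of the kind of source signal described above: a periodic, harmonically rich waveform whose fundamental is frequency-modulated by a vibrato of adjustable rate and extent. The sawtooth stand-in below is an assumption for illustration only; the instrument itself drives the printed tracts with an L-F model larynx waveform, which is not reproduced here, and the rate and extent values are arbitrary.

    ```python
    import numpy as np

    def vibrato_source(f0=220.0, vib_rate=5.5, vib_extent_cents=50.0,
                       duration=2.0, sfreq=44100):
        """Sawtooth-like source whose fundamental carries a sinusoidal vibrato.

        vib_extent_cents is the peak deviation from f0, expressed in cents.
        """
        t = np.arange(int(duration * sfreq)) / sfreq
        # Instantaneous frequency: f0 scaled by a sinusoidal excursion in cents.
        cents = vib_extent_cents * np.sin(2 * np.pi * vib_rate * t)
        inst_freq = f0 * 2.0 ** (cents / 1200.0)
        phase = 2 * np.pi * np.cumsum(inst_freq) / sfreq
        # A sawtooth derived from the phase: harmonically rich, a crude stand-in
        # for a glottal pulse train feeding the printed vocal tract.
        return 2.0 * ((phase / (2 * np.pi)) % 1.0) - 1.0

    source = vibrato_source()
    print(source.shape, round(float(source.min()), 2), round(float(source.max()), 2))
    ```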

  9. Voice amplification as a means of reducing vocal load for elementary music teachers.

    Science.gov (United States)

    Morrow, Sharon L; Connor, Nadine P

    2011-07-01

    Music teachers are over four times more likely than classroom teachers to develop voice disorders and greater than eight times more likely to have voice-related problems than the general public. Research has shown that individual voice-use parameters of phonation time, fundamental frequency and vocal intensity, as well as vocal load as calculated by cycle dose and distance dose are significantly higher for music teachers than their classroom teacher counterparts. Finding effective and inexpensive prophylactic measures to decrease vocal load for music teachers is an important aspect for voice preservation for this group of professional voice users. The purpose of this study was to determine the effects of voice amplification on vocal intensity and vocal load in the workplace as measured using a KayPENTAX Ambulatory Phonation Monitor (APM) (KayPENTAX, Lincoln Park, NJ). Seven music teachers were monitored for 1 workweek using an APM to determine average vocal intensity (dB sound pressure level [SPL]) and vocal load as calculated by cycle dose and distance dose. Participants were monitored a second week while using a voice amplification unit (Asyst ChatterVox; Asyst Communications Company, Inc., Indian Creek, IL). Significant decreases in mean vocal intensity of 7.00-dB SPL (Pmusic teachers in the classroom. Copyright © 2011 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  10. Hierarchical temporal structure in music, speech and animal vocalizations: jazz is like a conversation, humpbacks sing like hermit thrushes.

    Science.gov (United States)

    Kello, Christopher T; Bella, Simone Dalla; Médé, Butovens; Balasubramaniam, Ramesh

    2017-10-01

    Humans talk, sing and play music. Some species of birds and whales sing long and complex songs. All these behaviours and sounds exhibit hierarchical structure-syllables and notes are positioned within words and musical phrases, words and motives in sentences and musical phrases, and so on. We developed a new method to measure and compare hierarchical temporal structures in speech, song and music. The method identifies temporal events as peaks in the sound amplitude envelope, and quantifies event clustering across a range of timescales using Allan factor (AF) variance. AF variances were analysed and compared for over 200 different recordings from more than 16 different categories of signals, including recordings of speech in different contexts and languages, musical compositions and performances from different genres. Non-human vocalizations from two bird species and two types of marine mammals were also analysed for comparison. The resulting patterns of AF variance across timescales were distinct to each of four natural categories of complex sound: speech, popular music, classical music and complex animal vocalizations. Comparisons within and across categories indicated that nested clustering in longer timescales was more prominent when prosodic variation was greater, and when sounds came from interactions among individuals, including interactions between speakers, musicians, and even killer whales. Nested clustering also was more prominent for music compared with speech, and reflected beat structure for popular music and self-similarity across timescales for classical music. In summary, hierarchical temporal structures reflect the behavioural and social processes underlying complex vocalizations and musical performances. © 2017 The Author(s).
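
    For orientation, the Allan factor used above compares event counts in adjacent windows of a given length: values near 1 indicate Poisson-like (unclustered) event timing, and larger values indicate clustering at that timescale. The sketch below applies the definition to synthetic event times; the window handling, and the peak-picking that would produce real event times from an amplitude envelope, are assumptions rather than the authors' exact procedure.

    ```python
    import numpy as np

    def allan_factor(event_times, timescale):
        """Allan factor of an event sequence at one timescale (window length in seconds).

        AF(T) = E[(N_{k+1} - N_k)^2] / (2 * E[N_k]),
        where N_k is the number of events in the k-th contiguous window of length T.
        """
        event_times = np.asarray(event_times, dtype=float)
        edges = np.arange(0.0, event_times.max() + timescale, timescale)
        counts, _ = np.histogram(event_times, bins=edges)
        if counts.size < 2 or counts.mean() == 0:
            return float("nan")
        diffs = np.diff(counts)
        return float((diffs ** 2).mean() / (2.0 * counts.mean()))

    # Clustered (bursty) events versus a Poisson process of roughly the same rate.
    rng = np.random.default_rng(2)
    poisson = np.cumsum(rng.exponential(0.1, 2000))                       # ~10 events/s
    bursts = (rng.uniform(0, 200, 200)[:, None]
              + rng.uniform(0, 0.2, (200, 10))).ravel()                   # 200 tight bursts
    for T in (0.5, 2.0, 8.0):
        print(T, round(allan_factor(poisson, T), 2), round(allan_factor(bursts, T), 2))
    ```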

  11. Vocal health fitness to different music styles - doi:10.5020/18061230.2010.p278

    Directory of Open Access Journals (Sweden)

    Maria Claudia Mendes Caminha Muniz

    2012-01-01

    Full Text Available Objective: To present the genres and styles currently found on the Western music scene, focusing on the practice of the singing voice. Methods: An observational, documentary study in which sound sources representing musical genres and styles familiar to the researchers were selected and analyzed with regard to their origins, formative elements, and vocal features. Alongside this, we carried out a literature review grounded in database searches and a free review of websites and classic texts in the area. Results: The selected styles (Rock and Roll, Heavy Metal, Thrash Metal, Grunge, Gothic Metal, Rap, Funk, Blues, R&B – Rhythm and Blues, Soul, Gospel, MPB, Samba, Forro, Sertanejo, Bossa Nova, Opera, and Chamber Music) were described, pointing out why the speech therapist should be informed about them and about aspects of the singing voice. This guidance may minimize the vocal damage that each style can cause, since each carries its own patterns to which the interpreter must submit. Conclusions: Singers will use a specific vocal pattern that resembles the musical style they intend to sing, regardless of any harm it may or may not cause to vocal health. When choosing a musical style, it is important that singers know and understand whether the way they use the vocal apparatus will injure the voice. They should also be aware that singing technique is necessary for vocal longevity.

  12. North Indian Classical Vocal Music for the Classroom

    Science.gov (United States)

    Arya, Divya D.

    2015-01-01

    This article offers information that will allow music educators to incorporate North Indian classical vocal music into a multicultural music education curriculum. Obstacles to teaching North Indian classical vocal music are acknowledged, including lack of familiarity with the cultural/structural elements and challenges in teaching ear training and…

  13. Expression of emotion in Eastern and Western music mirrors vocalization.

    Science.gov (United States)

    Bowling, Daniel Liu; Sundararajan, Janani; Han, Shui'er; Purves, Dale

    2012-01-01

    In Western music, the major mode is typically used to convey excited, happy, bright or martial emotions, whereas the minor mode typically conveys subdued, sad or dark emotions. Recent studies indicate that the differences between these modes parallel differences between the prosodic and spectral characteristics of voiced speech sounds uttered in corresponding emotional states. Here we ask whether tonality and emotion are similarly linked in an Eastern musical tradition. The results show that the tonal relationships used to express positive/excited and negative/subdued emotions in classical South Indian music are much the same as those used in Western music. Moreover, tonal variations in the prosody of English and Tamil speech uttered in different emotional states are parallel to the tonal trends in music. These results are consistent with the hypothesis that the association between musical tonality and emotion is based on universal vocal characteristics of different affective states.

  14. UPDATING THE BASIC PRINCIPLES OF PROJECT EDUCATION TECHNOLOGY IN FUTURE MUSIC TEACHERS’ VOCAL AND CHORAL TRAINING

    Directory of Open Access Journals (Sweden)

    Liang Haiye

    2017-04-01

    Full Text Available The article characterizes the implementation of project technology in future music teachers’ vocal and choral training. An analysis of scientific papers in philosophy, psychology, and art education that deal with current directions in the use of project technology highlights its role in the art education process. The methodological basis rests on contemporary research, in particular the theory and methodology of musical studies concerning the formation of students’ independence in solving educational problems by means of project technology, principles for optimizing students’ professional training on the basis of project activity, and the innovative development of future music teachers’ professional training, which lends the presented material novelty. Treating future music teachers’ vocal and choral training as a constructive process aimed at improving the sound quality of an educational vocal and choral ensemble, the author sets out the basic principles of implementing project technology in this training. Special attention is paid to the specific features and content of project technology in the vocal and choral training of future leaders of children's art groups. Emphasis is placed on the following factors that influence the development of students’ creative individuality: constructing projects of their own development; setting the aims, tasks, strategies, and means of vocal and choral work; orientation toward results; independent creative activity; and presentation, reflection on, and correction of a project. On the basis of the obtained data, the following principles of project technology are put forward for future music teachers’ vocal and choral training: the principle of independence; the principle of

  15. Expression of emotion in Eastern and Western music mirrors vocalization.

    Directory of Open Access Journals (Sweden)

    Daniel Liu Bowling

    Full Text Available In Western music, the major mode is typically used to convey excited, happy, bright or martial emotions, whereas the minor mode typically conveys subdued, sad or dark emotions. Recent studies indicate that the differences between these modes parallel differences between the prosodic and spectral characteristics of voiced speech sounds uttered in corresponding emotional states. Here we ask whether tonality and emotion are similarly linked in an Eastern musical tradition. The results show that the tonal relationships used to express positive/excited and negative/subdued emotions in classical South Indian music are much the same as those used in Western music. Moreover, tonal variations in the prosody of English and Tamil speech uttered in different emotional states are parallel to the tonal trends in music. These results are consistent with the hypothesis that the association between musical tonality and emotion is based on universal vocal characteristics of different affective states.

  16. The songbird as a percussionist: syntactic rules for non-vocal sound and song production in Java sparrows.

    Directory of Open Access Journals (Sweden)

    Masayo Soma

    Full Text Available Music and dance are two remarkable human characteristics that are closely related. Communication through integrated vocal and motional signals is also common in the courtship displays of birds. The contribution of songbird studies to our understanding of vocal learning has already shed some light on the cognitive underpinnings of musical ability. Moreover, recent pioneering research has begun to show how animals can synchronize their behaviors with external stimuli, like metronome beats. However, few studies have applied such perspectives to unraveling how animals can integrate multimodal communicative signals that have natural functions. Additionally, studies have rarely asked how well these behaviors are learned. With this in mind, here we cast a spotlight on an unusual animal behavior: non-vocal sound production associated with singing in the Java sparrow (Lonchura oryzivora), a songbird. We show that male Java sparrows coordinate their bill-click sounds with the syntax of their song-note sequences, similar to percussionists. Analysis showed that they produced clicks frequently toward the beginning of songs and before/after specific song notes. We also show that bill-clicking patterns are similar between social fathers and their sons, suggesting that these behaviors might be learned from models or linked to learning-based vocalizations. Individuals untutored by conspecifics also exhibited stereotypical bill-clicking patterns in relation to song-note sequence, indicating that while the production of bill clicking itself is intrinsic, its syncopation appears to develop with songs. This paints an intriguing picture in which non-vocal sounds are integrated with vocal courtship signals in a songbird, a model that we expect will contribute to the further understanding of multimodal communication.

  17. The sound of arousal in music is context-dependent.

    Science.gov (United States)

    Blumstein, Daniel T; Bryant, Gregory A; Kaye, Peter

    2012-10-23

    Humans, and many non-human animals, produce and respond to harsh, unpredictable, nonlinear sounds when alarmed, possibly because these are produced when acoustic production systems (vocal cords and syrinxes) are overblown in stressful, dangerous situations. Humans can simulate nonlinearities in music and soundtracks through the use of technological manipulations. Recent work found that film soundtracks from different genres differentially contain such sounds. We designed two experiments to determine specifically how simulated nonlinearities in soundtracks influence perceptions of arousal and valence. Subjects were presented with emotionally neutral musical exemplars that had neither noise nor abrupt frequency transitions, or versions of these musical exemplars that had noise or abrupt frequency upshifts or downshifts experimentally added. In a second experiment, these acoustic exemplars were paired with benign videos. Judgements of both arousal and valence were altered by the addition of these simulated nonlinearities in the first, music-only, experiment. In the second, multi-modal, experiment, valence (but not arousal) decreased with the addition of noise or frequency downshifts. Thus, the presence of a video image suppressed the ability of simulated nonlinearities to modify arousal. This is the first study examining how nonlinear simulations in music affect emotional judgements. These results demonstrate that the perception of potentially fearful or arousing sounds is influenced by the perceptual context and that the addition of a visual modality can antagonistically suppress the response to an acoustic stimulus.

  18. VOCAL SEGMENT CLASSIFICATION IN POPULAR MUSIC

    DEFF Research Database (Denmark)

    Feng, Ling; Nielsen, Andreas Brinch; Hansen, Lars Kai

    2008-01-01

    This paper explores the vocal and non-vocal music classification problem within popular songs. A newly built labeled database covering 147 popular songs is announced. It is designed for classifying signals from 1sec time windows. Features are selected for this particular task, in order to capture...

  19. The effect of music on repetitive disruptive vocalizations of persons with dementia.

    Science.gov (United States)

    Casby, J A; Holm, M B

    1994-10-01

    This study examined the effect of classical music and favorite music on the repetitive disruptive vocalizations of long-term-care facility (LTCF) residents with dementia of the Alzheimer's type (DAT). Three subjects diagnosed with DAT who had a history of repetitive disruptive vocalizations were selected for the study. Three single-subject withdrawal designs (ABA, ACA, and ABCA) were used to assess subjects' repetitive disruptive vocalizations during each phase: no intervention (A); relaxing, classical music (B); and favorite music (C). Classical music and favorite music significantly decreased the number of vocalizations in two of the three subjects (p < .05). These findings support a method that was effective in decreasing the disruptive vocalization pattern common in those with DAT in the least restrictive manner, as mandated by the Omnibus Budget Reconciliation Act of 1987.

  20. Music Education Intervention Improves Vocal Emotion Recognition

    Science.gov (United States)

    Mualem, Orit; Lavidor, Michal

    2015-01-01

    The current study is an interdisciplinary examination of the interplay among music, language, and emotions. It consisted of two experiments designed to investigate the relationship between musical abilities and vocal emotional recognition. In experiment 1 (N = 24), we compared the influence of two short-term intervention programs--music and…

  1. Analysis, Synthesis, and Perception of Musical Sounds The Sound of Music

    CERN Document Server

    Beauchamp, James W

    2007-01-01

    Analysis, Synthesis, and Perception of Musical Sounds contains a detailed treatment of basic methods for analysis and synthesis of musical sounds, including the phase vocoder method, the McAulay-Quatieri frequency-tracking method, the constant-Q transform, and methods for pitch tracking, with several examples shown. Various aspects of musical sound spectra such as spectral envelope, spectral centroid, spectral flux, and spectral irregularity are defined and discussed. One chapter is devoted to the control and synthesis of spectral envelopes. Two advanced methods of analysis/synthesis, "Sines Plus Transients Plus Noise" and "Spectrotemporal Reassignment," are covered, and methods for timbre morphing are given. The last two chapters discuss the perception of musical sounds based on discrimination and multidimensional scaling timbre models.
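
    Two of the spectral descriptors named above can be computed directly from an STFT magnitude: the spectral centroid is the magnitude-weighted mean frequency of a frame, and spectral flux is the frame-to-frame change in the magnitude spectrum. The following sketch uses arbitrary frame and hop sizes and a synthetic glide as input; it illustrates the standard definitions and is not code from the book.

    ```python
    import numpy as np
    from scipy.signal import stft

    def spectral_centroid_and_flux(x, sfreq, nperseg=2048, hop=512):
        """Per-frame spectral centroid (Hz) and spectral flux from an STFT magnitude."""
        freqs, _, Z = stft(x, fs=sfreq, nperseg=nperseg, noverlap=nperseg - hop)
        mag = np.abs(Z)                                        # (n_freqs, n_frames)
        centroid = (freqs[:, None] * mag).sum(0) / (mag.sum(0) + 1e-12)
        flux = np.sqrt((np.diff(mag, axis=1) ** 2).sum(0))     # change between frames
        return centroid, flux

    # A tone gliding upward should show a rising spectral centroid.
    sfreq = 16000
    t = np.arange(0, 2, 1 / sfreq)
    x = np.sin(2 * np.pi * (200 + 400 * t) * t)                # upward glide
    centroid, flux = spectral_centroid_and_flux(x, sfreq)
    print(centroid[:3].round(1), centroid[-3:].round(1))
    ```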

  2. Scene and character: interdisciplinary analysis of musical and sound symbols for higher education

    Directory of Open Access Journals (Sweden)

    Josep Gustems Carnicer

    2017-01-01

    Full Text Available The aim of this paper is to analyze, from an interdisciplinary and educational perspective, how literary characters are depicted in the world of music (opera, ballet, musical theater, program music, audiovisual media, etc.) through a wide range of resources and creative processes involving sound. To that end, a multidisciplinary literature and documentary review is carried out, drawing on the most relevant texts and principal authors on dynamic and stable personality models, on the analysis of vocal features on stage and in audiovisual media, on the leitmotif as a symbol and sonic representation of the character, and on the conflicts the characters face, how they can overcome them, and how those transitions can be translated into music. The paper also addresses myths brought to the musical stage, character stereotypes, and the sound symbols that may characterize such scenic and literary content. Notably, there is broad consensus on the use of sound resources to characterize different characters throughout the history of Western music in its various styles and genres. Finally, indications for their use are given, together with suggested activities for higher education.

  3. The Effect of Teaching Experience and Specialty (Vocal or Instrumental) on Vocal Health Ratings of Music Teachers

    Science.gov (United States)

    Hackworth, Rhonda S.

    2010-01-01

    The current study sought to determine the relationship among music teachers' length of teaching experience, specialty (vocal or instrumental), and ratings of behaviors and teaching activities related to vocal health. Participants (N = 379) were experienced (n = 208) and preservice (n = 171) music teachers, further categorized by specialty, either…

  4. Musical Sound, Instruments, and Equipment

    Science.gov (United States)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.
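
    As a concrete instance of the frequency relations mentioned above, equal temperament places adjacent semitones at a frequency ratio of 2^(1/12) and the octave at a ratio of 2, with A4 = 440 Hz as the customary reference. The short worked example below simply evaluates that formula for an A-major scale and lists the first few harmonics of a 440 Hz tone; it is illustrative arithmetic, not material from the book.

    ```python
    # Equal-tempered pitch: a note n semitones above A4 (440 Hz) has frequency
    # 440 * 2**(n / 12), so 12 semitones give exactly one octave (a doubling).
    A4 = 440.0

    def note_frequency(semitones_from_a4):
        return A4 * 2.0 ** (semitones_from_a4 / 12.0)

    names = ["A4", "B4", "C#5", "D5", "E5", "F#5", "G#5", "A5"]   # A-major scale
    steps = [0, 2, 4, 5, 7, 9, 11, 12]
    for name, n in zip(names, steps):
        print(f"{name:>3}: {note_frequency(n):7.2f} Hz")

    # Harmonics of a 440 Hz string fall at integer multiples of the fundamental.
    print([440 * k for k in range(1, 5)])                         # 440, 880, 1320, 1760 Hz
    ```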

  5. The Influence of Distracting Familiar Vocal Music on Cognitive Performance of Introverts and Extraverts

    Science.gov (United States)

    Avila, Christina; Furnham, Adrian; McClelland, Alastair

    2012-01-01

    This study investigates the effect of familiar musical distractors on the cognitive performance of introverts and extraverts. Participants completed a verbal, numerical and logic test in three music conditions: vocal music, instrumental music and silence. It was predicted that introverts would perform worse with vocal music, better with…

  6. Disturbance effect of music on processing of verbal and spatial memories.

    Science.gov (United States)

    Iwanaga, Makoto; Ito, Takako

    2002-06-01

    The purpose of the present study was to examine the disturbance effect of music on performances of memory tasks. Subjects performed a verbal memory task and a spatial memory task in 4 sound conditions, including the presence of vocal music, instrumental music, a natural sound (murmurings of a stream), and no music. 47 undergraduate volunteers were randomly assigned to perform tasks under each condition. Perceived disturbance was highest under the vocal music condition regardless of the type of task. A disturbance in performance by music was observed only with the verbal memory task under the vocal and the instrumental music conditions. These findings were discussed from the perspectives of the working memory hypothesis and the changing state model.

  7. The Effect of Vocal Hygiene and Behavior Modification Instruction on the Self-Reported Vocal Health Habits of Public School Music Teachers

    Science.gov (United States)

    Hackworth, Rhonda S.

    2007-01-01

    This study examined the effects of vocal hygiene and behavior modification instruction on self-reported behaviors of music teachers. Subjects (N = 76) reported daily behaviors for eight weeks: water consumption, warm-up, talking over music/noise, vocal rest, nonverbal commands, and vocal problems. Subjects were in experimental group 1 or 2, or the…

  8. The Individual vocal expression in future music teacher's personal competence development

    OpenAIRE

    Jucevičiūtė-Bartkevičienė, Vaiva

    2011-01-01

    In music education, individual vocal expression is a significant factor contributing to future teachers’ emotional, spiritual and intellectual perfection. This article examines aspects of the individual vocal expression of future music teachers in the context of education related to becoming a competent member of the music teacher’s profession. Using the methods of analysis of the education documents and quantitative analysis, the results of the research, which was conducted in Lithuania, are p...

  9. The Effects of Vocal Register Use and Age on the Perceived Vocal Health of Male Elementary Music Teachers

    Science.gov (United States)

    Fisher, Ryan A.; Scott, Julie K.

    2014-01-01

    The purpose of this study was to examine the effects of vocal register use and age on the perceived vocal health of male elementary music teachers. Participants (N = 160) consisted of male elementary music teachers from two neighboring states in the south-central region of the United States. Participants responded to various demographic questions…

  10. Vocal Noise Cancellation From Respiratory Sounds

    National Research Council Canada - National Science Library

    Moussavi, Zahra

    2001-01-01

    Although background noise cancellation for speech or electrocardiographic recording is well established, when the background noise contains vocal noises and the main signal is a breath sound...
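
    The abstract is cut off, but the classical signal-processing approach to this kind of problem is adaptive noise cancellation: a reference channel that picks up mainly the interfering vocal noise drives an adaptive filter whose output is subtracted from the contaminated breath-sound channel. The plain LMS sketch below, on a synthetic toy signal, is offered only as background on that general technique; it is an assumption for illustration, not the method of the cited report.

    ```python
    import numpy as np

    def lms_cancel(primary, reference, n_taps=32, mu=0.002):
        """Least-mean-squares (LMS) adaptive noise cancellation.

        primary   : breath-sound channel contaminated by vocal noise
        reference : channel dominated by the vocal noise itself
        Returns the error signal, i.e. the primary channel with the filter's
        running estimate of the noise contribution subtracted.
        """
        w = np.zeros(n_taps)
        out = np.zeros_like(primary)
        for i in range(n_taps, len(primary)):
            x = reference[i - n_taps + 1:i + 1][::-1]   # current and past reference samples
            y = w @ x                                   # noise estimate for this sample
            e = primary[i] - y                          # cleaned sample
            w += 2 * mu * e * x                         # LMS weight update
            out[i] = e
        return out

    # Toy demo: a slow "breath" tone plus FIR-coloured noise; the reference sees the raw noise.
    rng = np.random.default_rng(3)
    n = 5000
    breath = np.sin(2 * np.pi * 2 * np.arange(n) / 1000.0)
    noise = rng.normal(size=n)
    contamination = np.convolve(noise, [0.6, 0.3, 0.1])[:n]     # causal colouring filter
    cleaned = lms_cancel(breath + contamination, noise)
    print(round(float(np.var(contamination)), 3),
          round(float(np.var(cleaned[2000:] - breath[2000:])), 3))
    ```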

  11. Musical Sounds, Motor Resonance, and Detectable Agency

    Directory of Open Access Journals (Sweden)

    Jacques Launay

    2015-09-01

    Full Text Available This paper discusses the paradox that while human music making evolved and spread in an environment where it could only occur in groups, it is now often apparently an enjoyable asocial phenomenon. Here I argue that music is, by definition, sound that we believe has been in some way organized by a human agent, meaning that listening to any musical sounds can be a social experience. There are a number of distinct mechanisms by which we might associate musical sound with agency. While some of these mechanisms involve learning motor associations with that sound, it is also possible to have a more direct relationship from musical sound to agency, and the relative importance of these potentially independent mechanisms should be further explored. Overall, I conclude that the apparent paradox of solipsistic musical engagement is in fact unproblematic, because the way that we perceive and experience musical sounds is inherently social.

  12. On Sound: Reconstructing a Zhuangzian Perspective of Music

    Directory of Open Access Journals (Sweden)

    So Jeong Park

    2015-12-01

    Full Text Available A devotion to music in Chinese classical texts is worth noticing. Early Chinese thinkers saw music as a significant part of human experience and a core practice for philosophy. While Confucian endorsement of ritual and music has been discussed in the field, Daoist understanding of music was hardly explored. This paper will make a careful reading of the Xiánchí 咸池 music story in the Zhuangzi, one of the most interesting, but least noticed texts, and reconstruct a Zhuangzian perspective from it. While sounds had been regarded as mere building blocks of music and thus depreciated in the hierarchical understanding of music in the mainstream discourse of early China, sound is the alpha and omega of music in the Zhuangzian perspective. All kinds of sounds, both human and natural, are invited into musical discourse. Sound is regarded as the real source of our being moved by music, and therefore, musical consummation is depicted as embodiment through sound.

  13. Fear across the senses: brain responses to music, vocalizations and facial expressions.

    Science.gov (United States)

    Aubé, William; Angulo-Perkins, Arafat; Peretz, Isabelle; Concha, Luis; Armony, Jorge L

    2015-03-01

    Intrinsic emotional expressions such as those communicated by faces and vocalizations have been shown to engage specific brain regions, such as the amygdala. Although music constitutes another powerful means to express emotions, the neural substrates involved in its processing remain poorly understood. In particular, it is unknown whether brain regions typically associated with processing 'biologically relevant' emotional expressions are also recruited by emotional music. To address this question, we conducted an event-related functional magnetic resonance imaging study in 47 healthy volunteers in which we directly compared responses to basic emotions (fear, sadness and happiness, as well as neutral) expressed through faces, non-linguistic vocalizations and short novel musical excerpts. Our results confirmed the importance of fear in emotional communication, as revealed by significant blood oxygen level-dependent signal increases in a cluster within the posterior amygdala and anterior hippocampus, as well as in the posterior insula, across all three domains. Moreover, subject-specific amygdala responses to fearful music and vocalizations were correlated, consistent with the proposal that the brain circuitry involved in the processing of musical emotions might be shared with the one that has evolved for vocalizations. Overall, our results show that processing of fear expressed through music engages some of the same brain areas known to be crucial for detecting and evaluating threat-related information. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  14. Hemispheric processing of vocal emblem sounds.

    Science.gov (United States)

    Neumann-Werth, Yael; Levy, Erika S; Obler, Loraine K

    2013-01-01

    Vocal emblems, such as shh and brr, are speech sounds that have linguistic and nonlinguistic features; thus, it is unclear how they are processed in the brain. Five adult dextral individuals with left-brain damage and moderate-severe Wernicke's aphasia, five adult dextral individuals with right-brain damage, and five Controls participated in two tasks: (1) matching vocal emblems to photographs ('picture task') and (2) matching vocal emblems to verbal translations ('phrase task'). Cross-group statistical analyses on items on which the Controls performed at ceiling revealed lower accuracy by the group with left-brain damage (than by Controls) on both tasks, and lower accuracy by the group with right-brain damage (than by Controls) on the picture task. Additionally, the group with left-brain damage performed significantly less accurately than the group with right-brain damage on the phrase task only. Findings suggest that comprehension of vocal emblems recruits more left- than right-hemisphere processing.

  15. Thinking Sound and Body-Motion Shapes in Music: Public Peer Review of “Gesture and the Sonic Event in Karnatak Music” by Lara Pearson

    Directory of Open Access Journals (Sweden)

    Rolfe Inge Godøy

    2013-12-01

    Full Text Available It seems that the majority of research on music-related body motion has so far been focused on Western music, so this paper by Lara Pearson on music-related body motion in Indian vocal music is a most welcome contribution to this field. But research on music-related body motion does present us with a number of challenges, ranging from issues of method to fundamental issues of perception and multi-modal integration in music. In such research, thinking of perceptually salient features in different modalities (sound, motion, touch, etc.) as shapes seems to go well with our cognitive apparatus, and also be quite practical in representing the features in question. The research reported in this paper gives us an insight into how tracing shapes by hand motion is an integral part of teaching Indian vocal music, and the approach of this paper also holds promise for fruitful future research.

  16. BASIC APPROACHES TO THE ORGANIZATION OF PEDAGOGICAL INTERACTION IN SCHOOL VOCAL CHOIRS

    Directory of Open Access Journals (Sweden)

    Anatolii Kuzmenko

    2016-04-01

    Full Text Available The article sets out the basic approaches to a school music teacher's creative and educational interaction with a vocal and choral group. The significance of singing in the national artistic culture of Ukraine is highlighted. Students' practical work on singing is generalized into a system of vocal and choral music education that preserves the natural child voice, ensures a euphonious cantilena sound, and makes this kind of activity accessible to master. The author distinguishes several components: an orientation component (goals, content, tasks, and the volume of learning information); a competence component (a musical and educational complex comprising a high level of knowledge, skills in sound production, and the psychological features of teaching children of different ages to sing); a communication component (the process of teacher-pupil interaction using various means of communication, sharing learning information and discussing it with different technical tools); and an executive component (the level of students' performance skills and their creative interpretation of vocal and choral works). The main attention is paid to proper planning and practice in school music lessons, rehearsals, vocal choirs, ensembles, presentations, and concerts. It is argued that effective methods for giving substance to the conductor's creative and educational interaction with a school choral group depend on defining the principles that shape the conductor's experience: sound mastery of choirmaster skills; accurate diagnosis and analysis of material that sounds jarring; and attention to the imperfections that should be corrected in each case to achieve positive results.

  17. Vocal tract shapes in different singing functions used in musical theater singing-a pilot study.

    Science.gov (United States)

    Echternach, Matthias; Popeil, Lisa; Traser, Louisa; Wienhausen, Sascha; Richter, Bernhard

    2014-09-01

    Singing styles in Musical Theater singing might differ in many ways from Western Classical singing. However, vocal tract adjustments are not understood in detail. Vocal tract shapes of a single professional Music Theater female subject were analyzed concerning different aspects of singing styles using dynamic real-time magnetic resonance imaging technology with a frame rate of 8 fps. The different tasks include register differences, belting, and vibrato strategies. Articulatory differences were found between head register, modal register, and belting. Also, some vibrato strategies ("jazzy" vibrato) do involve vocal tract adjustments, whereas others (classical vibrato) do not. Vocal tract shaping might contribute to the establishment of different singing functions in Musical Theater singing. Copyright © 2014 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  18. Verbal learning in the context of background music: no influence of vocals and instrumentals on verbal learning.

    Science.gov (United States)

    Jäncke, Lutz; Brügger, Eliane; Brummer, Moritz; Scherrer, Stephanie; Alahmadi, Nsreen

    2014-03-26

    Whether listening to background music enhances verbal learning performance is still a matter of dispute. In this study we investigated the influence of vocal and instrumental background music on verbal learning. 226 subjects were randomly assigned to one of five groups (one control group and 4 experimental groups). All participants were exposed to a verbal learning task. One group served as the control group, while the other 4 served as experimental groups. The control group learned without background music while the 4 experimental groups were exposed to vocal or instrumental musical pieces during learning with different subjective intensity and valence. Thus, we employed 4 music listening conditions (vocal music with high intensity: VOC_HIGH, vocal music with low intensity: VOC_LOW, instrumental music with high intensity: INST_HIGH, instrumental music with low intensity: INST_LOW) and one control condition (CONT) during which the subjects learned the word lists. Since it turned out that the high and low intensity groups did not differ in terms of the rated intensity during the main experiment, these groups were lumped together. Thus, we worked with 3 groups: one control group and two groups, which were exposed to background music (vocal and instrumental) during verbal learning. As dependent variable, the number of learned words was used. Here we measured immediate recall during five learning sessions (recall 1 - recall 5) and delayed recall 15 minutes (recall 6) and 14 days (recall 7) after the last learning session. Verbal learning improved during the first 5 recall sessions without any strong difference between the control and experimental groups. Also the delayed recalls were similar for the three groups. There was only a trend for attenuated verbal learning for the group that passively listened to vocal music. This learning attenuation diminished during the following learning sessions. The exposure to vocal or instrumental background music during encoding did not

  19. Prevalence of Vocal Problems: Speech-Language Pathologists' Evaluation of Music and Non-Music Teacher Recordings

    Science.gov (United States)

    Hackworth, Rhonda S.

    2013-01-01

    The current study, a preliminary examination of whether music teachers are more susceptible to vocal problems than teachers of other subjects, asked for expert evaluation of audio recordings from licensed speech-language pathologists. Participants (N = 41) taught music (n = 23) or another subject (n = 18) in either elementary (n = 21), middle (n =…

  20. Preservice Music Teacher Voice Use, Vocal Health, and Voice Function before and during Student Teaching

    Science.gov (United States)

    Brunkan, Melissa C.

    2018-01-01

    Preservice music teachers often use their voices differently during the semesters leading up to student teaching as compared to during the semester itself. Vocal demands often increase and change as students move from a student role to full-time teacher role. Consequently, music student teachers frequently experience vocal distress symptoms that…

  1. Automatic Transcription of Polyphonic Vocal Music

    Directory of Open Access Journals (Sweden)

    Andrew McLeod

    2017-12-01

    Full Text Available This paper presents a method for automatic music transcription applied to audio recordings of a cappella performances with multiple singers. We propose a system for multi-pitch detection and voice assignment that integrates an acoustic and a music language model. The acoustic model performs spectrogram decomposition, extending probabilistic latent component analysis (PLCA) using a six-dimensional dictionary with pre-extracted log-spectral templates. The music language model performs voice separation and assignment using hidden Markov models that apply musicological assumptions. By integrating the two models, the system is able to detect multiple concurrent pitches in polyphonic vocal music and assign each detected pitch to a specific voice type such as soprano, alto, tenor or bass (SATB). We compare our system against multiple baselines, achieving state-of-the-art results for both multi-pitch detection and voice assignment on a dataset of Bach chorales and another of barbershop quartets. We also present an additional evaluation of our system using varied pitch tolerance levels to investigate its performance at 20-cent pitch resolution.
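
    The 20-cent evaluation mentioned above relies on the cent scale, in which the distance between frequencies f and f_ref is 1200·log2(f/f_ref) and 100 cents correspond to one equal-tempered semitone. The small sketch below shows that matching criterion with illustrative frequencies and thresholds; it is background on the metric, not part of the transcription system itself.

    ```python
    import math

    def cents(f, f_ref):
        """Signed distance from f_ref to f in cents (100 cents = 1 semitone)."""
        return 1200.0 * math.log2(f / f_ref)

    def pitch_match(detected_hz, reference_hz, tolerance_cents=20.0):
        """True if a detected pitch lies within the given tolerance of the reference."""
        return abs(cents(detected_hz, reference_hz)) <= tolerance_cents

    # 442 Hz against an A4 reference of 440 Hz is well inside 20 cents,
    # whereas a quarter-tone error (about 50 cents) is not.
    print(round(cents(442.0, 440.0), 2), pitch_match(442.0, 440.0))
    print(round(cents(452.9, 440.0), 2), pitch_match(452.9, 440.0))
    ```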

  2. Sound Stories for General Music

    Science.gov (United States)

    Cardany, Audrey Berger

    2013-01-01

    Language and music literacy share a similar process of understanding that progresses from sensory experience to symbolic representation. The author identifies Bruner’s modes of understanding as they relate to using narrative in the music classroom to enhance music reading at iconic and symbolic levels. Two sound stories are included for…

  3. MOTIVATIONAL AND ADAPTATIVE ASPECT OF PROSPECTIVE MUSIC TEACHERS’ VOCAL TRAINING

    Directory of Open Access Journals (Sweden)

    Lin Ye

    2016-04-01

    Full Text Available The article deals with the motivational and adaptive dimension of vocal training for Art Faculty students at the Pedagogical University. The motivational and adaptive phase consisted in identifying the actual state of prospective music teachers’ readiness to work with educational vocal choirs. The criterion for the formation of this motivational and adaptive component is defined as personal motivation to acquire high-quality vocal and choral training. The author developed an experimental technique involving a number of empirical research methods: special, long-term monitoring of the content and progress of the educational process; analysis, control, and objectivity of teaching methods; testing; creative tasks; test activities; and conversations and interviews conducted among students, faculty, and supervisors of professional disciplines during teaching practice. The criterion implies that Chinese students maintain a sustained professional focus on improving their own vocal and choral training and an awareness of the importance and prospects of this profession for their practical work in the educational context of China. Motivation in vocal and choral learning gives Chinese students a so-called "immunity" to the difficulties of the new learning environment in Ukrainian universities, strengthens the desire to intensify and optimize conducting and choral training, and fosters awareness of the need to develop new knowledge, skills, and experience and to carry them into the practice of national music and teacher education.

  4. Vocal pedagogy and contemporary commercial music : reflections on higher education non-classical vocal pedagogy in the United States and Finland

    OpenAIRE

    Keskinen, Anu Katri

    2013-01-01

    This study is focused on the discipline of higher education contemporary commercial music (CCM) vocal pedagogy through the experiences of two vocal pedagogy teachers, one in the USA and the other in Finland. The aim of this study has been to find out how the discipline presently looks from a vocal pedagogy teacher's viewpoint, what the process of building higher education CCM vocal pedagogy courses has been like, and where the field is headed. The discussion on CCM pedagogy, also kn...

  5. Psychiatry and music

    OpenAIRE

    Nizamie, Shamsul Haque; Tikka, Sai Krishna

    2014-01-01

    Vocal and/or instrumental sounds combined in such a way as to produce beauty of form, harmony and expression of emotion is music. Brain, mind and music are remarkably related to each other and music has got a strong impact on psychiatry. With the advent of music therapy, as an efficient form of alternative therapy in treating major psychiatric conditions, this impact has been further strengthened. In this review, we deliberate upon the historical aspects of the relationship between psychiatry...

  6. Comparison of Effects Produced by Physiological Versus Traditional Vocal Warm-up in Contemporary Commercial Music Singers.

    Science.gov (United States)

    Portillo, María Priscilla; Rojas, Sandra; Guzman, Marco; Quezada, Camilo

    2018-03-01

    The present study aimed to observe whether physiological warm-up and traditional singing warm-up differently affect aerodynamic, electroglottographic, acoustic, and self-perceived parameters of voice in Contemporary Commercial Music singers. Thirty subjects were asked to perform a 15-minute session of vocal warm-up. They were randomly assigned to one of two types of vocal warm-up: physiological (based on semi-occluded exercises) or traditional (singing warm-up based on open vowel [a:]). Aerodynamic, electroglottographic, acoustic, and self-perceived voice quality assessments were carried out before (pre) and after (post) warm-up. No significant differences were found when comparing both types of vocal warm-up methods, either in subjective or in objective measures. Furthermore, the main positive effect observed in both groups when comparing pre and post conditions was a better self-reported quality of voice. Additionally, significant differences were observed for sound pressure level (decrease), glottal airflow (increase), and aerodynamic efficiency (decrease) in the traditional warm-up group. Both traditional and physiological warm-ups produce favorable voice sensations. Moreover, there are no evident differences in aerodynamic and electroglottographic variables when comparing both types of vocal warm-ups. Some changes after traditional warm-up (decreased intensity, increased airflow, and decreased aerodynamic efficiency) could imply an early stage of vocal fatigue. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  7. Non-musical sound branding – a conceptualization and research overview

    DEFF Research Database (Denmark)

    Graakjær, Nicolai J.; Bonde, Anders

    2018-01-01

    Purpose The purpose of this paper is to advance the understanding of sound branding by developing a new conceptual framework and providing an overview of the research literature on non-musical sound. Design/methodology/approach Using four mutually exclusive and collectively exhaustive types of non-musical sound, the paper assesses and synthesizes 99 significant studies across various scholarly fields. Findings The overview reveals two areas in which more research may be warranted, that is, non-musical atmospherics and non-musical sonic logos. Moreover, future sound-branding research should examine in further detail the potentials of developed versus annexed object sounds, and mediated versus unmediated brand sounds. Research limitations/implications The paper provides important insights into critical issues that suggest directions for further research on non-musical sound branding. Practical...

  8. Comparison of voice-use profiles between elementary classroom and music teachers.

    Science.gov (United States)

    Morrow, Sharon L; Connor, Nadine P

    2011-05-01

    Among teachers, music teachers are roughly four times more likely than classroom teachers to develop voice-related problems. Although it has been established that music teachers use their voices at high intensities and durations in the course of their workday, voice-use profiles concerning the amount and intensity of vocal use have not been quantified, nor has vocal load for music teachers been compared with that of classroom teachers using the same voice-use parameters. In this study, total phonation time, fundamental frequency (F₀), and vocal intensity (dB SPL [sound pressure level]) were measured or estimated directly using a KayPENTAX Ambulatory Phonation Monitor (KayPENTAX, Lincoln Park, NJ). Vocal load was calculated as cycle and distance dose, as defined by Švec et al (2003), which integrate total phonation time, F₀, and vocal intensity. Twelve participants (n = 7 elementary music teachers and n = 5 elementary classroom teachers) were monitored during five full teaching days of one workweek to determine average vocal load for these two groups of teachers. Statistically significant differences in all measures were found between the two groups, indicating that vocal loads for music teachers are substantially higher than those experienced by classroom teachers. Reducing vocal load may therefore have immediate clinical and educational benefits for the vocal health of music teachers. Copyright © 2011 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
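
    The cycle and distance doses mentioned above can be illustrated with a short sketch (assumptions: frame-wise F₀, SPL, and voicing decisions sampled every dt seconds are already available from the dosimeter; the amplitude-from-SPL scaling below is a simplified placeholder, not the published regression of Švec et al.):

    # Hedged sketch: vocal dose measures in the spirit of Svec et al. (2003).
    import numpy as np

    def vocal_doses(f0_hz, spl_db, voiced, dt=0.05, a_ref_m=0.001, spl_ref_db=80.0):
        f0 = np.asarray(f0_hz, dtype=float)
        spl = np.asarray(spl_db, dtype=float)
        v = np.asarray(voiced, dtype=bool)
        time_dose_s = v.sum() * dt                       # total phonation time
        cycle_dose = np.sum(f0[v]) * dt                  # number of vocal-fold oscillation cycles
        # Placeholder amplitude model: vibration amplitude grows with SPL (assumption).
        amplitude_m = a_ref_m * 10 ** ((spl - spl_ref_db) / 40.0)
        # Distance dose: path travelled by the folds, roughly 4 * amplitude per cycle.
        distance_dose_m = np.sum(4.0 * amplitude_m[v] * f0[v]) * dt
        return time_dose_s, cycle_dose, distance_dose_m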

  9. The electronic cry: Voice and gender in electroacoustic music

    NARCIS (Netherlands)

    Bosma, H.M.

    2013-01-01

    The voice provides an entrance to discuss gender and related fundamental issues in electroacoustic music that are relevant as well in other musical genres and outside of music per se: the role of the female voice; the use of language versus non-verbal vocal sounds; the relation of voice, embodiment

  10. Let the music play! A short-term but no long-term detrimental effect of vocal background music with familiar language lyrics on foreign language vocabulary learning

    NARCIS (Netherlands)

    de Groot, A.M.B.; Smedinga, H.E.

    2014-01-01

    Participants learned foreign vocabulary by means of the paired-associates learning procedure in three conditions: (a) in silence, (b) with vocal music with lyrics in a familiar language playing in the background, or (c) with vocal music with lyrics in an unfamiliar language playing in the

  11. Analysis and Synthesis of Musical Instrument Sounds

    Science.gov (United States)

    Beauchamp, James W.

    For synthesizing a wide variety of musical sounds, it is important to understand which acoustic properties of musical instrument sounds are related to specific perceptual features. Some properties are obvious: Amplitude and fundamental frequency easily control loudness and pitch. Other perceptual features are related to sound spectra and how they vary with time. For example, tonal "brightness" is strongly connected to the centroid or tilt of a spectrum. "Attack impact" (sometimes called "bite" or "attack sharpness") is strongly connected to spectral features during the first 20-100 ms of sound, as well as the rise time of the sound. Tonal "warmth" is connected to spectral features such as "incoherence" or "inharmonicity."
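
    As a concrete illustration of the centroid measure mentioned above (a sketch, not taken from the chapter; frame length, windowing, and the example tone are arbitrary choices), the spectral centroid of a single analysis frame is the amplitude-weighted mean frequency:

    # Hedged sketch: spectral centroid of one audio frame, a common correlate of "brightness".
    import numpy as np

    def spectral_centroid(frame, sample_rate):
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
        if spectrum.sum() == 0:
            return 0.0
        return float(np.sum(freqs * spectrum) / np.sum(spectrum))

    # Example: a 1 kHz sine should yield a centroid near 1000 Hz.
    # fs = 44100; t = np.arange(2048) / fs
    # print(spectral_centroid(np.sin(2 * np.pi * 1000 * t), fs))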

  12. The Contribution of Sound Intensity in Vocal Emotion Perception: Behavioral and Electrophysiological Evidence

    Science.gov (United States)

    Chen, Xuhai; Yang, Jianfeng; Gan, Shuzhen; Yang, Yufang

    2012-01-01

    Although its role is frequently stressed in the acoustic profile of vocal emotion, sound intensity is frequently regarded as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the angry level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded the electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgments in Experiment 2. It was found that sound intensity modification had a significant effect on the rating of angry level for angry prosodies, but not for neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced an enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in the N2/P3 complex and smaller theta band synchronization. These findings suggest that although it cannot categorically affect the emotionality conveyed in emotional prosodies, sound intensity contributes quantitatively to emotional significance, implying that sound intensity should not simply be taken as a control parameter and that its unique role needs to be specified in vocal emotion studies. PMID:22291928

  13. The contribution of sound intensity in vocal emotion perception: behavioral and electrophysiological evidence.

    Directory of Open Access Journals (Sweden)

    Xuhai Chen

    Full Text Available Although its role is frequently stressed in the acoustic profile of vocal emotion, sound intensity is frequently regarded as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the angry level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded the electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgments in Experiment 2. It was found that sound intensity modification had a significant effect on the rating of angry level for angry prosodies, but not for neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced an enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in the N2/P3 complex and smaller theta band synchronization. These findings suggest that although it cannot categorically affect the emotionality conveyed in emotional prosodies, sound intensity contributes quantitatively to emotional significance, implying that sound intensity should not simply be taken as a control parameter and that its unique role needs to be specified in vocal emotion studies.

  14. In Search of the Golden Age Hip-Hop Sound (1986–1996)

    Directory of Open Access Journals (Sweden)

    Ben Duinker

    2017-09-01

    Full Text Available The notion of a musical repertoire's "sound" is frequently evoked in journalism and scholarship, but what parameters comprise such a sound? This question is addressed through a statistically-driven corpus analysis of hip-hop music released during the genre's Golden Age era. The first part of the paper presents a methodology for developing, transcribing, and analyzing a corpus of 100 hip-hop tracks released during the Golden Age. Eight categories of aurally salient musical and production parameters are analyzed: tempo, orchestration and texture, harmony, form, vocal and lyric profiles, global and local production effects, vocal doubling and backing, and loudness and compression. The second part of the paper organizes the analysis data into three trend categories: trends of change (parameters that change over time), trends of prevalence (parameters that remain generally constant across the corpus), and trends of similarity (parameters that are similar from song to song). These trends form a generalized model of the Golden Age hip-hop sound which considers both global (the whole corpus) and local (unique songs within the corpus) contexts. By operationalizing "sound" as the sum of musical and production parameters, aspects of popular music that are resistant to traditional music-analytical methods can be considered.
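
    A minimal sketch of how one of the trend categories above ("trends of change") could be screened for in such a corpus; the variable names and the use of a simple linear fit plus Pearson correlation are illustrative assumptions, not the author's actual procedure:

    # Hedged sketch: test whether a parameter (e.g., tempo) changes systematically across the era.
    import numpy as np

    def trend_of_change(release_years, parameter_values):
        years = np.asarray(release_years, dtype=float)
        values = np.asarray(parameter_values, dtype=float)
        r = np.corrcoef(years, values)[0, 1]          # strong |r|: candidate trend of change
        slope = np.polyfit(years, values, 1)[0]       # average change per year
        return r, slope

    # A weak correlation with little year-to-year variation would instead suggest a trend of prevalence.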

  15. Sound Exposure During Outdoor Music Festivals

    Science.gov (United States)

    Tronstad, Tron V.; Gelderblom, Femke B.

    2016-01-01

    Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival's duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing-loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals where only one was regulated by the Norwegian guideline for concert and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization's recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization's recommendations. The results also show that front-of-house measurements reliably predict participant exposure. PMID:27569410

  16. Sound exposure during outdoor music festivals

    Directory of Open Access Journals (Sweden)

    Tron V Tronstad

    2016-01-01

    Full Text Available Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival’s duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing-loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals where only one was regulated by the Norwegian guideline for concert and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization’s recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization’s recommendations. The results also show that front-of-house measurements reliably predict participant exposure.
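
    For context on how dose-meter recordings of this kind are typically summarized before comparison with a guideline, the sketch below computes an equivalent continuous level and a noise dose (the 85 dBA / 8 h criterion with a 3 dB exchange rate is a common occupational convention used here for illustration; it is not necessarily the Norwegian or WHO limit applied in the study):

    # Hedged sketch: equivalent continuous level (LAeq) and noise dose from A-weighted level samples.
    import numpy as np

    def laeq(levels_dba):
        # Energy average of equally spaced A-weighted level samples, in dBA.
        levels = np.asarray(levels_dba, dtype=float)
        return 10.0 * np.log10(np.mean(10.0 ** (levels / 10.0)))

    def noise_dose_percent(levels_dba, sample_seconds, criterion_db=85.0, criterion_hours=8.0):
        # Dose relative to an 85 dBA / 8 h criterion with a 3 dB exchange rate (assumed convention).
        duration_h = len(levels_dba) * sample_seconds / 3600.0
        return 100.0 * (duration_h / criterion_hours) * 10.0 ** ((laeq(levels_dba) - criterion_db) / 10.0)

    # Example: five hours of concerts averaging around 100 dBA give a dose far above 100%.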

  17. School Music and Society: A Content Analysis of the Midwestern Conference on School Vocal and Instrumental Music, 1946-1996

    Science.gov (United States)

    West, Chad

    2013-01-01

    This article provides an analysis of the session content presented in the first fifty years (1946-1996) of the (Michigan) state music education conference, "The Midwestern Conference on School Vocal and Instrumental Music." The purpose of this study was to examine instructional techniques, technology, social/societal, and multicultural…

  18. Hearing speech in music.

    Science.gov (United States)

    Ekström, Seth-Reino; Borg, Erik

    2011-01-01

    The masking effect of a piano composition, played at different speeds and in different octaves, on speech-perception thresholds was investigated in 15 normal-hearing and 14 moderately-hearing-impaired subjects. Running speech (just follow conversation, JFC) testing and use of hearing aids increased the everyday validity of the findings. A comparison was made with standard audiometric noises [International Collegium of Rehabilitative Audiology (ICRA) noise and speech spectrum-filtered noise (SPN)]. All masking sounds, music or noise, were presented at the same equivalent sound level (50 dBA). The results showed a significant effect of piano performance speed and octave (P<.01). Low octave and fast tempo had the largest effect; and high octave and slow tempo, the smallest. Music had a lower masking effect than did ICRA noise with two or six speakers at normal vocal effort (P<.01) and SPN (P<.05). Subjects with hearing loss had higher masked thresholds than the normal-hearing subjects (P<.01), but there were smaller differences between masking conditions (P<.01). It is pointed out that music offers an interesting opportunity for studying masking under realistic conditions, where spectral and temporal features can be varied independently. The results have implications for composing music with vocal parts, designing acoustic environments and creating a balance between speech perception and privacy in social settings.

  19. The Effectiveness of Using Vocal Music as the Content Area of English Immersion Classes for Japanese Children

    Science.gov (United States)

    Morgan, Steven Gene

    2012-01-01

    This study set out to determine if English can be taught effectively to Japanese children through a content-based instruction program that uses vocal music as the content area. A total of 240 children participated in the study. The treatment group at a private elementary school in Tokyo received weekly vocal music lessons taught in English for one…

  20. Let the Music Play!--A Short-Term but No Long-Term Detrimental Effect of Vocal Background Music with Familiar Language Lyrics on Foreign Language Vocabulary Learning

    Science.gov (United States)

    de Groot, Annette M. B.; Smedinga, Hilde E.

    2014-01-01

    Participants learned foreign vocabulary by means of the paired-associates learning procedure in three conditions: (a) in silence, (b) with vocal music with lyrics in a familiar language playing in the background, or (c) with vocal music with lyrics in an unfamiliar language playing in the background. The vocabulary to learn varied in concreteness…

  1. Contemporary Commercial Music Singing Students-Voice Quality and Vocal Function at the Beginning of Singing Training.

    Science.gov (United States)

    Sielska-Badurek, Ewelina M; Sobol, Maria; Olszowska, Katarzyna; Niemczyk, Kazimierz

    2017-10-03

    The purpose of this study was to assess the voice quality and the vocal tract function in popular singing students at the beginning of their singing training at the High School of Music. This is a retrospective cross-sectional study. The study included 45 popular singing students (35 females and 10 males, mean age: 19.9 ± 2.8 years). They were assessed in the first 2 months of their 4-year singing training at the High School of Music, between 2013 and 2016. Voice quality and vocal tract function were evaluated using videolaryngostroboscopy, palpation of the vocal tract structures, perceptual speaking and singing voice assessment, acoustic analysis, maximal phonation time, the Voice Handicap Index, and the Singing Voice Handicap Index (SVHI). Twenty-two percent of the Contemporary Commercial Music singing students began their education at the High School with vocal nodules. Palpation of the vocal tract structures showed correct motions and tension in 50% of students in speaking and in 39.3% in singing. Perceptual voice assessment showed proper speaking voice quality in 80% and proper singing voice quality in 82.4%. The mean vocal fundamental frequency while speaking was 214 Hz in females and 116 Hz in males. The Dysphonia Severity Index was at the level of 2, and maximum phonation time was 17.7 seconds. The Voice Handicap Index and the SVHI remained within the normal range: 7.5 and 19, respectively. Perceptual singing voice assessment correlated with the SVHI (P = 0.006). Twenty-two percent of the Contemporary Commercial Music singing students began their education at the High School with organic vocal fold lesions. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  2. The effect of vocal fold vertical stiffness gradient on sound production

    Science.gov (United States)

    Geng, Biao; Xue, Qian; Zheng, Xudong

    2015-11-01

    It is observed in some experimental studies on canine vocal folds (VFs) that the inferior aspect of the vocal fold (VF) is much stiffer than the superior aspect under relatively large strain. Such a vertical difference is thought to promote the convergent-divergent shape during VF vibration and consequently to facilitate the production of sound. In this study, we investigate the effect of vertical variation of VF stiffness on sound production using a numerical model. The vertical variation of stiffness is produced by linearly increasing the Young's modulus and shear modulus from the superior to the inferior aspect in the cover layer, and its effect on phonation is examined in terms of aerodynamic and acoustic quantities such as flow rate, open quotient, skewness of the flow waveform, sound intensity, and vocal efficiency. The flow-induced vibration of the VF is solved with a finite element solver coupled with a 1D Bernoulli equation, which is further coupled with a digital waveguide model. This study is designed to find out whether it is beneficial to artificially induce a vertical stiffness gradient with certain implant materials in VF restoration surgery, and if it is beneficial, what gradient is the most favorable.
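
    A minimal sketch of the kind of quasi-steady 1D Bernoulli flow description commonly coupled to such finite-element vocal fold models (the authors' exact formulation, flow-separation criterion, and constants are not given in the abstract, so this is an assumed textbook form):

    # Hedged sketch: quasi-steady 1D Bernoulli estimate of glottal flow and intraglottal pressure.
    import numpy as np

    def bernoulli_glottis(areas_m2, p_sub_pa, p_supra_pa=0.0, rho=1.2):
        # areas_m2: glottal cross-sectional areas from inferior to superior (m^2).
        areas = np.asarray(areas_m2, dtype=float)
        a_min = areas.min()
        # Volume flow rate set by the pressure drop down to the minimum (separation) area.
        flow = a_min * np.sqrt(2.0 * max(p_sub_pa - p_supra_pa, 0.0) / rho)
        pressures = p_sub_pa - 0.5 * rho * flow ** 2 / areas ** 2
        # Downstream of the separation point (taken at the minimum area), pressure is assumed
        # to recover to the supraglottal value in this simplified picture.
        sep = int(np.argmin(areas))
        pressures[sep + 1:] = p_supra_pa
        return flow, pressures

    # Example: a convergent glottis of 0.4 -> 0.1 cm^2 driven by 800 Pa subglottal pressure.
    # flow, p = bernoulli_glottis(np.array([0.4, 0.3, 0.2, 0.1]) * 1e-4, 800.0)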

  3. Sound and Music Interventions in Psychiatry at Aalborg University Hospital

    DEFF Research Database (Denmark)

    Lund, Helle Nystrup; Bertelsen, Lars Rye; Bonde, Lars Ole

    2016-01-01

    This article reports on the ongoing project development and research study called “A New Sound and Music Milieu at Aalborg University Hospital”. Based on a number of pilot studies in AUH Psychiatry, investigating how special playlists and sound equipment (“sound pillows” and portable players) can be used by hospital patients and administered by hospital staff supervised by music therapists, the new project aims to prepare the ground for a systematic application of sound and music in the hospital environment. A number of playlists have been developed, based on theoretical and empirical research ... to their needs here-and-now. In the study, we focus on how self-selected music may lead to decrease of anxiety and pain or improved relaxation/sleep. The article describes and discusses the theory-driven development of the sound/music milieu, relevant empirical studies, the novel method of data collection ...

  4. FeelSound: interactive acoustic music making

    NARCIS (Netherlands)

    Fikkert, F.W.; Hakvoort, Michiel; Hakvoort, M.C.; van der Vet, P.E.; Nijholt, Antinus

    2009-01-01

    FeelSound is a multi-user, multi-touch application that aims to collaboratively compose, in an entertaining way, acoustic music. Simultaneous input by each of up to four users enables collaborative composing. This process as well as the resulting music are entertaining. Sensor-packed intelligent

  5. Sound Levels and Risk Perceptions of Music Students During Classes.

    Science.gov (United States)

    Rodrigues, Matilde A; Amorim, Marta; Silva, Manuela V; Neves, Paula; Sousa, Aida; Inácio, Octávio

    2015-01-01

    It is well recognized that professional musicians are at risk of hearing damage due to the exposure to high sound pressure levels during music playing. However, it is important to recognize that the musicians' exposure may start early in the course of their training as students in the classroom and at home. Studies regarding sound exposure of music students and their hearing disorders are scarce and do not take into account important influencing variables. Therefore, this study aimed to describe sound level exposures of music students at different music styles, classes, and according to the instrument played. Further, this investigation attempted to analyze the perceptions of students in relation to exposure to loud music and consequent health risks, as well as to characterize preventive behaviors. The results showed that music students are exposed to high sound levels in the course of their academic activity. This exposure is potentiated by practice outside the school and other external activities. Differences were found between music style, instruments, and classes. Tinnitus, hyperacusis, diplacusis, and sound distortion were reported by the students. However, students were not entirely aware of the health risks related to exposure to high sound pressure levels. These findings reflect the importance of starting intervention in relation to noise risk reduction at an early stage, when musicians are commencing their activity as students.

  6. Physics and music the science of musical sound

    CERN Document Server

    White, Harvey E

    2014-01-01

    Comprehensive and accessible, this foundational text surveys general principles of sound, musical scales, characteristics of instruments, mechanical and electronic recording devices, and many other topics. More than 300 illustrations plus questions, problems, and projects.

  7. Effects of musical training on sound pattern processing in high-school students.

    Science.gov (United States)

    Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse

    2009-05-01

    Recognizing melody in music involves detection of both the pitch intervals and the silence between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited under different stimulus onset asynchrony (SOA) conditions in musicians and non-musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than their age-matched peers. Musical training facilitates detection of auditory patterns, allowing automatic recognition of sequential sound patterns over longer time periods than in non-musical counterparts.

  8. The Traditional/Acoustic Music Project: a study of vocal demands and vocal health.

    Science.gov (United States)

    Erickson, Molly L

    2012-09-01

    The Traditional/Acoustic Music Project seeks to identify the musical and performance characteristics of traditional/acoustic musicians and determine the vocal demands they face with the goals of (1) providing information and outreach to this important group of singers and (2) providing information to physicians, speech-language pathologists, and singing teachers who will enable them to provide appropriate services. Descriptive cross-sectional study. Data have been collected through administration of a 53-item questionnaire. The questionnaire was administered to artists performing at local venues in Knoxville, Tennessee and also to musicians attending the 2008 Folk Alliance Festival in Memphis, Tennessee. Approximately 41% of the respondents have had no vocal training, whereas approximately 34% of the respondents have had some form of formal vocal training (private lessons or group instruction). About 41% of the participants had experienced a tired voice, whereas about 30% of the participants had experienced either a loss of the top range of the voice or a total loss of voice at least once in their careers. Approximately 31% of the respondents had no health insurance. Approximately 69% of the respondents reported that they get their information about healthy singing practices solely from fellow musicians or that they do not get any information at all. Traditional/acoustic musicians are a poorly studied population at risk for the development of voice disorders. Continued research is necessary with the goal of a large sample that can be analyzed for associations, identification of subpopulations, and formulation of specific hypotheses that lend themselves to experimental research. Appropriate models of information and service delivery tailored for the singer-instrumentalist are needed. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  9. Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors.

    Science.gov (United States)

    Chen, Zhaocong; Wong, Francis C K; Jones, Jeffery A; Li, Weifeng; Liu, Peng; Chen, Xi; Liu, Hanjun

    2015-08-17

    Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. As compared to the pre-training session, the magnitude of vocal compensation significantly decreased for the control group, but remained consistent for the trained group at the post-training session. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can produce transfer effects that facilitate the neural mechanisms underlying the online monitoring of auditory feedback during vocal production.

  10. Music and Its Inductive Power: A Psychobiological and Evolutionary Approach to Musical Emotions

    Science.gov (United States)

    Reybrouck, Mark; Eerola, Tuomas

    2017-01-01

    The aim of this contribution is to broaden the concept of musical meaning from an abstract and emotionally neutral cognitive representation to an emotion-integrating description that is related to the evolutionary approach to music. Starting from the dispositional machinery for dealing with music as a temporal and sounding phenomenon, musical emotions are considered as adaptive responses to be aroused in human beings as the product of neural structures that are specialized for their processing. A theoretical and empirical background is provided in order to bring together the findings of music and emotion studies and the evolutionary approach to musical meaning. The theoretical grounding elaborates on the transition from referential to affective semantics, the distinction between expression and induction of emotions, and the tension between discrete-digital and analog-continuous processing of the sounds. The empirical background provides evidence from several findings such as infant-directed speech, referential emotive vocalizations and separation calls in lower mammals, the distinction between the acoustic and vehicle mode of sound perception, and the bodily and physiological reactions to the sounds. It is argued, finally, that early affective processing reflects the way emotions make our bodies feel, which in turn reflects on the emotions expressed and decoded. As such there is a dynamic tension between nature and nurture, which is reflected in the nature-nurture-nature cycle of musical sense-making. PMID:28421015

  11. Complex coevolution of wing, tail, and vocal sounds of courting male bee hummingbirds.

    Science.gov (United States)

    Clark, Christopher J; McGuire, Jimmy A; Bonaccorso, Elisa; Berv, Jacob S; Prum, Richard O

    2018-03-01

    Phenotypic characters with a complex physical basis may have a correspondingly complex evolutionary history. Males in the "bee" hummingbird clade court females with sound from tail-feathers, which flutter during display dives. On a phylogeny of 35 species, flutter sound frequency evolves as a gradual, continuous character on most branches. But on at least six internal branches fall two types of major, saltational changes: mode of flutter changes, or the feather that is the sound source changes, causing frequency to jump from one discrete value to another. In addition to their tail "instruments," males also court females with sound from their syrinx and wing feathers, and may transfer or switch instruments over evolutionary time. In support of this, we found a negative phylogenetic correlation between presence of wing trills and singing. We hypothesize this transference occurs because wing trills and vocal songs serve similar functions and are thus redundant. There are also three independent origins of self-convergence of multiple signals, in which the same species produces both a vocal (sung) frequency sweep, and a highly similar nonvocal sound. Moreover, production of vocal, learned song has been lost repeatedly. Male bee hummingbirds court females with a diverse, coevolving array of acoustic traits. © 2018 The Author(s). Evolution © 2018 The Society for the Study of Evolution.

  12. Musician effect on perception of spectro-temporally degraded speech, vocal emotion, and music in young adolescents.

    NARCIS (Netherlands)

    Başkent, Deniz; Fuller, Christina; Galvin, John; Schepel, Like; Gaudrain, Etienne; Free, Rolien

    2018-01-01

    In adult normal-hearing musicians, perception of music, vocal emotion, and speech in noise has previously been shown to be better than in non-musicians, sometimes even with spectro-temporally degraded stimuli. In this study, melodic contour identification, vocal emotion identification, and speech

  13. Music Structure Analysis from Acoustic Signals

    Science.gov (United States)

    Dannenberg, Roger B.; Goto, Masataka

    Music is full of structure, including sections, sequences of distinct musical textures, and the repetition of phrases or entire sections. The analysis of music audio relies upon feature vectors that convey information about music texture or pitch content. Texture generally refers to the average spectral shape and statistical fluctuation, often reflecting the set of sounding instruments, e.g., strings, vocal, or drums. Pitch content reflects melody and harmony, which is often independent of texture. Structure is found in several ways. Segment boundaries can be detected by observing marked changes in locally averaged texture.
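
    The boundary-detection idea described above can be sketched as follows (assumptions: frame-wise feature vectors such as MFCCs or chroma are already available; the window length and Euclidean distance are arbitrary choices, not the authors' specific method):

    # Hedged sketch: candidate segment boundaries as peaks in a texture-change (novelty) curve.
    import numpy as np

    def novelty_curve(features, half_window=20):
        # features: (n_frames, n_dims) array; compares locally averaged texture before/after each frame.
        n = len(features)
        novelty = np.zeros(n)
        for t in range(half_window, n - half_window):
            left = features[t - half_window:t].mean(axis=0)
            right = features[t:t + half_window].mean(axis=0)
            novelty[t] = np.linalg.norm(right - left)     # large change -> likely boundary
        return novelty

    def pick_boundaries(novelty, threshold=None):
        threshold = novelty.mean() + novelty.std() if threshold is None else threshold
        return [t for t in range(1, len(novelty) - 1)
                if novelty[t] > threshold and novelty[t] >= novelty[t - 1] and novelty[t] >= novelty[t + 1]]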

  14. A Joyful Noise: The Vocal Health of Worship Leaders and Contemporary Christian Singers.

    Science.gov (United States)

    Neto, Leon; Meyer, David

    2017-03-01

    Contemporary commercial music (CCM) is a term that encompasses many styles of music. A growing subset of CCM is contemporary Christian music, a genre that has outpaced other popular styles such as Latin, jazz, and classical music. Contemporary Christian singers (CCSs) and worship leaders (WLs) are a subset of CCM musicians that face unique vocal demands and risks. They typically lack professional training and often perform in acoustically disadvantageous venues with substandard sound reinforcement systems. The vocal needs and risks of these singers are not well understood, and because of this, their training and care may be suboptimal. The aim of the present study was to investigate the vocal health of this growing population and their awareness of standard vocal hygiene principles. An online questionnaire was designed and administered to participants in the Americas, Europe, Australia, and Asia. A total of 614 participants responded to the questionnaire, which is made available in English, Portuguese, and Spanish. Many participants reported vocal symptoms such as vocal fatigue (n = 213; 34.7%), tickling or choking sensation (n = 149; 24.3%), loss of upper range (n = 172; 28%), and complete loss of voice (n = 25; 4.1%). One third of the participants (n = 210; 34%) indicated that they do not warm up their voices before performances and over half of the participants (n = 319; 52%) have no formal vocal training. Results suggest that this population demonstrates low awareness of vocal hygiene principles, frequently experience difficulty with their voices, and may face elevated risk of vocal pathology. Future studies of this population may confirm the vocal risks that our preliminary findings suggest. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  15. Ecoacoustic Music for Geoscience: Sonic Physiographies and Sound Casting

    Science.gov (United States)

    Burtner, M.

    2017-12-01

    The author describes specific ecoacoustic applications in his original compositions, Sonic Physiography of a Time-Stretched Glacier (2015), Catalog of Roughness (2017), Sound Cast of Matanuska Glacier (2016) and Ecoacoustic Concerto (Eagle Rock) (2014). Ecoacoustic music uses technology to map systems from nature into music through techniques such as sonification, material amplification, and field recording. The author aspires for this music to be descriptive of the data (as one would expect from a visualization) and also to function as engaging and expressive music/sound art on its own. In this way, ecoacoustic music might provide a fitting accompaniment to a scientific presentation (such as music for a science video) while also offering an exemplary concert hall presentation for a dedicated listening public. The music can at once support the communication of scientific research, and help science make inroads into culture. The author discusses how music created using the data, sounds and methods derived from earth science can recast this research into a sonic art modality. Such music can amplify the communication and dissemination of scientific knowledge by broadening the diversity of methods and formats we use to bring excellent scientific research to the public. Music can also open the public's imagination to science, inspiring curiosity and emotional resonance. Hearing geoscience as music may help a non-scientist access scientific knowledge in new ways, and it can greatly expand the types of venues in which this work can appear. Anywhere music is played - concert halls, festivals, galleries, radio, etc - become a venue for scientific discovery.

  16. The sound of music: Differentiating musicians using a fast, musical multi-feature mismatch negativity paradigm

    DEFF Research Database (Denmark)

    Vuust, Peter; Brattico, Elvira; Seppänen, Miia

    2012-01-01

    Musicians' skills in auditory processing depend highly on instrument, performance practice, and on level of expertise. Yet, it is not known whether the style/genre of music might shape auditory processing in the brains of musicians. Here, we aimed at tackling the role of musical style/genre on modulating neural and behavioral responses to changes in musical features. Using a novel, fast and musical sounding multi-feature paradigm, we measured the mismatch negativity (MMN), a pre-attentive brain response, to six types of musical feature change in musicians playing three distinct styles of music (classical, jazz, rock/pop) and in non-musicians. Among other results, we observed a more frontal MMN to pitch and location compared to the other deviants in jazz musicians and left lateralization of the MMN to timbre in classical musicians. These findings indicate that the characteristics of the style/genre of music played by musicians influence their perceptual skills and the brain processing of sound features embedded in a musical context.

  17. A Comparison of the Basic Song Repertoire of Vocal/Choral and Instrumental Music Education Majors.

    Science.gov (United States)

    Prickett, Carol A.; Bridges, Madeline S.

    2000-01-01

    Explores whether the basic song repertoire of vocal/choral music education majors is significantly better than that of instrumental music education majors. Participants attempted to identify 25 standard songs. Reveals no significant difference between the two groups, indicating that neither had developed a strong repertoire of songs. (CMK)

  18. [Music therapy in adults with cochlear implants : Effects on music perception and subjective sound quality].

    Science.gov (United States)

    Hutter, E; Grapp, M; Argstatter, H

    2016-12-01

    People with severe hearing impairments and deafness can achieve good speech comprehension using a cochlear implant (CI), although music perception often remains impaired. A novel concept of music therapy for adults with CI was developed and evaluated in this study. This study included 30 adults with a unilateral CI following postlingual deafness. The subjective sound quality of the CI was rated using the hearing implant sound quality index (HISQUI), and musical tests for pitch discrimination, melody recognition, and timbre identification were applied. As a control, 55 normally hearing persons also completed the musical tests. In comparison to normally hearing subjects, CI users showed deficits in the perception of pitch, melody, and timbre. Specific effects of therapy were observed in the subjective sound quality of the CI, in pitch discrimination across high and low pitch ranges, and in timbre identification, while general learning effects were found in melody recognition. Music perception shows deficits in CI users compared to normally hearing persons. After individual music therapy in the rehabilitation process, improvements in this delicate area could be achieved.

  19. Hearing speech in music

    Directory of Open Access Journals (Sweden)

    Seth-Reino Ekström

    2011-01-01

    Full Text Available The masking effect of a piano composition, played at different speeds and in different octaves, on speech-perception thresholds was investigated in 15 normal-hearing and 14 moderately-hearing-impaired subjects. Running speech (just follow conversation, JFC) testing and use of hearing aids increased the everyday validity of the findings. A comparison was made with standard audiometric noises [International Collegium of Rehabilitative Audiology (ICRA) noise and speech spectrum-filtered noise (SPN)]. All masking sounds, music or noise, were presented at the same equivalent sound level (50 dBA). The results showed a significant effect of piano performance speed and octave (P<.01). Low octave and fast tempo had the largest effect; and high octave and slow tempo, the smallest. Music had a lower masking effect than did ICRA noise with two or six speakers at normal vocal effort (P<.01) and SPN (P<.05). Subjects with hearing loss had higher masked thresholds than the normal-hearing subjects (P<.01), but there were smaller differences between masking conditions (P<.01). It is pointed out that music offers an interesting opportunity for studying masking under realistic conditions, where spectral and temporal features can be varied independently. The results have implications for composing music with vocal parts, designing acoustic environments and creating a balance between speech perception and privacy in social settings.

  20. The use of music on Barney & Friends: implications for music therapy practice and research.

    Science.gov (United States)

    McGuire, K M

    2001-01-01

    This descriptive study examined the music content of 88 episodes from the PBS television show Barney & Friends, which aired from September 1992 to September 1998, in an attempt to quantify musical examples and presentations that may be considered introductory music experiences for preschoolers. Using many of the procedures identified by Wolfe and Stambaugh (1993) in their study on the music of Sesame Street, 25% of Barney & Friends' 88 episodes were analyzed by using the computer observation program SCRIBE in determining: (a) the temporal use of music; (b) performance medium; and (c) intention of music use. Furthermore, each structural prompt presentation (n = 749) from all 88 episodes was examined for: (a) tempo; (b) vocal range; (c) music style; (d) word clarity; (e) repetition; (f) vocal modeling; and (g) movement. Results revealed that the show contained more music (92.2%) than nonmusic (7.8%), with the majority of this music containing instrumental sounds (61%). The function of this music was distributed equally between structural prompt music (48%) and background music (48%). The majority of the structural prompt music contained newly composed material (52%), while 33% consisted of previously composed material. Fifteen percent contained a combination of newly composed and previously composed material. The most common tempo range for presentations on the show was 80-100 bpm, while vocal ranges of a 9th, 8th, 6th, and 7th were predominant and most often sung by children's voices. The adult male voice was also common, with 84% of all adult vocals being male. The tessitura category with the greatest number of appearances was middle C to C above (n = 133), with the majority of the presentations (n = 435, 73%) extending singers' voices over the register lift of B above middle C. Children's music and music of the American heritage were the most common style categories observed, and these two categories combined on 260 (35%) presentations. The use of choreographed

  1. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    Science.gov (United States)

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  2. A MISCELLANY ON INDIAN TRADITIONAL MUSIC

    Directory of Open Access Journals (Sweden)

    Rauf Kerimov

    2013-06-01

    Full Text Available Indian music has a very long, unbroken tradition and is an accumulated heritage of centuries. Music in India was popular among all sections of society and intertwined with life and culture from birth to death. Indian music was formed through the evolution of ancient religious and secular music, and Indian culture absorbed the best of what other nations brought in the course of historical development. Indian music is quite diverse: there are classical instrumental and vocal works, traditional singing of sacred hymns, folk songs, and the music of different peoples. In contrast to European music scholarship, whose typical image is one of regularity, discipline, and harmony, the beauty of traditional Indian music lies in the free improvisation employed by the performer. Listening carefully to this music, the listener enters a new world of different sounds and discovers a different idea of music. The aim of Indian music, unlike that of European musical culture, is to define, explore, create, and move the depths of people's moods. The Indian instruments are a marvel that reflects these philosophical and aesthetic views. Along with the vocal art, this musical tradition has a rich variety of melodic and rhythmic instruments.

  3. Vocalisation Repertoire of Female Bluefin Gurnard (Chelidonichthys kumu) in Captivity: Sound Structure, Context and Vocal Activity.

    Directory of Open Access Journals (Sweden)

    Craig A Radford

    Full Text Available Fish vocalisation is often a major component of underwater soundscapes. Therefore, interpretation of these soundscapes requires an understanding of the vocalisation characteristics of common soniferous fish species. This study of captive female bluefin gurnard, Chelidonichthys kumu, aims to formally characterise their vocalisation sounds and daily pattern of sound production. Four types of sound were produced and characterised, twice as many as previously reported in this species. These sounds fit two aural categories, grunt and growl, the mean peak frequencies for which ranged between 129 and 215 Hz. This species vocalised throughout the 24-hour period at an average rate of 18.5 ± 2.0 sounds fish⁻¹ h⁻¹, with an increase in vocalisation rate at dawn and dusk. Competitive feeding did not elevate vocalisation as has been found in other gurnard species. Bluefin gurnard are common in coastal waters of New Zealand, Australia and Japan and, given their vocalisation rate, are likely to be significant contributors to the ambient underwater soundscape in these areas.

  4. Assessment of sound quality perception in cochlear implant users during music listening.

    Science.gov (United States)

    Roy, Alexis T; Jiradejvong, Patpong; Carver, Courtney; Limb, Charles J

    2012-04-01

    Although cochlear implant (CI) users frequently report deterioration of sound quality when listening to music, few methods exist to quantify these subjective claims. 1) To design a novel research method for quantifying sound quality perception in CI users during music listening; 2) To validate this method by assessing one attribute of music perception, bass frequency perception, which is hypothesized to be relevant to overall musical sound quality perception. Limitations in bass frequency perception contribute to CI-mediated sound quality deteriorations. The proposed method will quantify this deterioration by measuring CI users' impaired ability to make sound quality discriminations among musical stimuli with variable amounts of bass frequency removal. A method commonly used in the audio industry (multiple stimulus with hidden reference and anchor [MUSHRA]) was adapted for CI users, referred to as CI-MUSHRA. CI users and normal hearing controls were presented with 7 sound quality versions of a musical segment: 5 high pass filter cutoff versions (200-, 400-, 600-, 800-, 1000-Hz) with decreasing amounts of bass information, an unaltered version ("hidden reference"), and a highly altered version (1,000-1,200 Hz band pass filter; "anchor"). Participants provided sound quality ratings between 0 (very poor) and 100 (excellent) for each version; ratings reflected differences in perceived sound quality among stimuli. CI users had greater difficulty making overall sound quality discriminations as a function of bass frequency loss than normal hearing controls, as demonstrated by a significantly weaker correlation between bass frequency content and sound quality ratings. In particular, CI users could not perceive sound quality difference among stimuli missing up to 400 Hz of bass frequency information. Bass frequency impairments contribute to sound quality deteriorations during music listening for CI users. CI-MUSHRA provided a systematic and quantitative assessment of this
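
    The stimulus preparation described above can be sketched as follows (the high-pass cutoffs and the 1,000-1,200 Hz band-pass anchor follow the abstract; the filter order, zero-phase filtering, and the input file name are assumptions, not the authors' exact processing):

    # Hedged sketch: CI-MUSHRA-style stimulus versions of one musical excerpt.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, sosfiltfilt

    def make_versions(path="excerpt.wav"):                 # hypothetical input file
        fs, x = wavfile.read(path)
        x = x.astype(float)
        versions = {"hidden_reference": x}                  # unaltered version
        for fc in (200, 400, 600, 800, 1000):               # progressively less bass information
            sos = butter(4, fc, btype="highpass", fs=fs, output="sos")
            versions[f"hp_{fc}Hz"] = sosfiltfilt(sos, x, axis=0)
        sos = butter(4, (1000, 1200), btype="bandpass", fs=fs, output="sos")
        versions["anchor"] = sosfiltfilt(sos, x, axis=0)    # highly degraded anchor
        return fs, versions

    # Listeners would then rate each version from 0 (very poor) to 100 (excellent); a weak
    # correlation between cutoff frequency and rating indicates insensitivity to bass-frequency loss.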

  5. Augmenting the Sound Experience at Music Festivals using Mobile Phones

    DEFF Research Database (Denmark)

    Larsen, Jakob Eg; Stopczynski, Arkadiusz; Larsen, Jan

    2011-01-01

    In this paper we describe experiments carried out at the Nibe music festival in Denmark involving the use of mobile phones to augment the participants' sound experience at the concerts. The experiments involved N=19 test participants that used a mobile phone with a headset playing back sound ... The paper reports on these “in-the-wild” experiments augmenting the sound experience at two concerts at this music festival.

  6. FeelSound : Collaborative Composing of Acoustic Music

    NARCIS (Netherlands)

    Fikkert, Wim; Hakvoort, Michiel; van der Vet, Paul; Nijholt, Anton

    2009-01-01

    FeelSound is a multi-user application for collaboratively composing music in an entertaining way. Up to four composers can jointly create acoustic music on a top-projection multitouch sensitive table. The notes of an acoustic instrument are represented on a harmonic table and, by drawing shapes on

  7. Singing and Vocal Interventions in Palliative and Cancer Care: Music Therapists' Perceptions of Usage.

    Science.gov (United States)

    Clements-Cortés, Amy

    2017-11-01

    Music therapists in palliative and cancer care settings often use singing and vocal interventions. Although benefits for these interventions are emerging, more information is needed on what types of singing interventions are being used by credentialed music therapists, and what goal areas are being addressed. To assess music therapists' perceptions on how they use singing and vocal interventions in palliative and cancer care environments. Eighty credentialed music therapists from Canada and the United States participated in this two-part convergent mixed-methods study that began with an online survey, followed by individual interviews with 50% (n = 40) of the survey participants. In both palliative and cancer care, singing client-preferred music and singing for relaxation were the most frequently used interventions. In palliative care, the most commonly addressed goals were to increase self-expression, improve mood, and create a feeling of togetherness between individuals receiving palliative care and their family. In cancer care, the most commonly addressed goals were to support breathing, improve mood, and support reminiscence. Seven themes emerged from therapist interviews: containing the space, connection, soothing, identity, freeing the voice within, letting go, and honoring. Music therapists use singing to address the physical, emotional, social, and spiritual goals of patients, and described singing interventions as accessible and effective. Further research is recommended to examine intervention efficacy and identify the factors that contribute to clinical benefit. © the American Music Therapy Association 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  8. Melodic multi-feature paradigm reveals auditory profiles in music-sound encoding

    Directory of Open Access Journals (Sweden)

    Mari eTervaniemi

    2014-07-01

    Full Text Available Musical expertise modulates preattentive neural sound discrimination. However, this evidence to a great extent originates from paradigms using very simple stimulation. Here we use a novel melody paradigm (revealing the auditory profile for six sound parameters in parallel) to compare memory-related MMN and attention-related P3a responses recorded from non-musicians and Finnish Folk musicians. MMN emerged in both groups of participants for all sound changes (except for rhythmic changes in non-musicians). In Folk musicians, the MMN was enlarged for mistuned sounds when compared with non-musicians. This is taken to reflect their familiarity with pitch information, which is in a key position in Finnish folk music when compared with, e.g., rhythmic information. The MMN was followed by P3a after timbre changes, rhythm changes, and melody transposition. The MMN and P3a topographies differentiated the groups for all sound changes. Thus, the melody paradigm offers a fast and cost-effective means for determining the auditory profile for music-sound encoding and also, importantly, for probing the effects of musical expertise on it.

  9. Melodic multi-feature paradigm reveals auditory profiles in music-sound encoding.

    Science.gov (United States)

    Tervaniemi, Mari; Huotilainen, Minna; Brattico, Elvira

    2014-01-01

    Musical expertise modulates preattentive neural sound discrimination. However, this evidence to a great extent originates from paradigms using very simple stimulation. Here we use a novel melody paradigm (revealing the auditory profile for six sound parameters in parallel) to compare memory-related mismatch negativity (MMN) and attention-related P3a responses recorded from non-musicians and Finnish Folk musicians. MMN emerged in both groups of participants for all sound changes (except for rhythmic changes in non-musicians). In Folk musicians, the MMN was enlarged for mistuned sounds when compared with non-musicians. This is taken to reflect their familiarity with pitch information, which is in a key position in Finnish folk music when compared with, e.g., rhythmic information. The MMN was followed by P3a after timbre changes, rhythm changes, and melody transposition. The MMN and P3a topographies differentiated the groups for all sound changes. Thus, the melody paradigm offers a fast and cost-effective means for determining the auditory profile for music-sound encoding and also, importantly, for probing the effects of musical expertise on it.

  10. The sound of music: differentiating musicians using a fast, musical multi-feature mismatch negativity paradigm.

    Science.gov (United States)

    Vuust, Peter; Brattico, Elvira; Seppänen, Miia; Näätänen, Risto; Tervaniemi, Mari

    2012-06-01

    Musicians' skills in auditory processing depend highly on instrument, performance practice, and on level of expertise. Yet, it is not known whether the style/genre of music might shape auditory processing in the brains of musicians. Here, we aimed at tackling the role of musical style/genre on modulating neural and behavioral responses to changes in musical features. Using a novel, fast and musical sounding multi-feature paradigm, we measured the mismatch negativity (MMN), a pre-attentive brain response, to six types of musical feature change in musicians playing three distinct styles of music (classical, jazz, rock/pop) and in non-musicians. Jazz and classical musicians scored higher in the musical aptitude test than band musicians and non-musicians, especially with regards to tonal abilities. These results were extended by the MMN findings: jazz musicians had larger MMN-amplitude than all other experimental groups across the six different sound features, indicating a greater overall sensitivity to auditory outliers. In particular, we found enhanced processing of pitch and of slides up to pitches in jazz musicians only. Furthermore, we observed a more frontal MMN to pitch and location compared to the other deviants in jazz musicians and left lateralization of the MMN to timbre in classical musicians. These findings indicate that the characteristics of the style/genre of music played by musicians influence their perceptual skills and the brain processing of sound features embedded in a musical context. Musicians' brains are hence shaped by the type of training, musical style/genre, and listening experiences. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Sounds of Modified Flight Feathers Reliably Signal Danger in a Pigeon.

    Science.gov (United States)

    Murray, Trevor G; Zeil, Jochen; Magrath, Robert D

    2017-11-20

    In his book on sexual selection, Darwin [1] devoted equal space to non-vocal and vocal communication in birds. Since then, vocal communication has become a model for studies of neurobiology, learning, communication, evolution, and conservation [2, 3]. In contrast, non-vocal "instrumental music," as Darwin called it, has only recently become subject to sustained inquiry [4, 5]. In particular, outstanding work reveals how feathers, often highly modified, produce distinctive sounds [6-9], and suggests that these sounds have evolved at least 70 times, in many orders [10]. It remains to be shown, however, that such sounds are signals used in communication. Here we show that crested pigeons (Ocyphaps lophotes) signal alarm with specially modified wing feathers. We used video and feather-removal experiments to demonstrate that the highly modified 8th primary wing feather (P8) produces a distinct note during each downstroke. The sound changes with wingbeat frequency, so that birds fleeing danger produce wing sounds with a higher tempo. Critically, a playback experiment revealed that only if P8 is present does the sound of escape flight signal danger. Our results therefore indicate, nearly 150 years after Darwin's book, that modified feathers can be used for non-vocal communication, and they reveal an intrinsically reliable alarm signal. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Influence of Pitch Height on the Perception of Submissiveness and Threat in Musical Passages

    Directory of Open Access Journals (Sweden)

    David Huron

    2006-09-01

    Bolinger, Ohala, Morton and others have established that vocal pitch height is perceived to be associated with social signals of dominance and submissiveness: higher vocal pitch is associated with submissiveness, whereas lower vocal pitch is associated with social dominance. An experiment was carried out to test this relationship in the perception of non-vocal melodies. Results show a parallel situation in music: higher-pitched melodies sound more submissive (less threatening) than lower-pitched melodies.

  13. Vocal Music Therapy for Chronic Pain Management in Inner-City African Americans: A Mixed Methods Feasibility Study.

    Science.gov (United States)

    Bradt, Joke; Norris, Marisol; Shim, Minjung; Gracely, Edward J; Gerrity, Patricia

    2016-01-01

    To date, research on music for pain management has focused primarily on listening to prerecorded music for acute pain. Research is needed on the impact of active music therapy interventions on chronic pain management. The aim of this mixed methods research study was to determine feasibility and estimates of effect of vocal music therapy for chronic pain management. Fifty-five inner-city adults, predominantly African Americans, with chronic pain were randomized to an 8-week vocal music therapy treatment group or waitlist control group. Consent and attrition rates, treatment compliance, and instrument appropriateness/burden were tracked. Physical functioning (pain interference and general activities), self-efficacy, emotional functioning, pain intensity, pain coping, and participant perception of change were measured at baseline, 4, 8, and 12 weeks. Focus groups were conducted at the 12-week follow-up. The consent rate was 77%. The attrition rate was 27% at follow-up. We established acceptability of the intervention. Large effect sizes were obtained for self-efficacy at weeks 8 and 12; a moderate effect size was found for pain interference at week 8; no improvements were found for general activities and emotional functioning. Moderate effect sizes were obtained for pain intensity and small effect sizes for coping, albeit not statistically significant. Qualitative findings suggested that the treatment resulted in enhanced self-management, motivation, empowerment, a sense of belonging, and reduced isolation. This study suggests that vocal music therapy may be effective in building essential stepping-stones for effective chronic pain management, namely enhanced self-efficacy, motivation, empowerment, and social engagement. © the American Music Therapy Association 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. Music and Sound in Time Processing of Children with ADHD.

    Science.gov (United States)

    Carrer, Luiz Rogério Jorgensen

    2015-01-01

    ADHD involves cognitive and behavioral aspects with impairments in many environments of children's and their families' lives. Music, with its playful, spontaneous, affective, motivational, temporal, and rhythmic dimensions, can be of great help for studying aspects of time processing in ADHD. In this article, we studied time processing with simple sounds and music in children with ADHD, with the hypothesis that children with ADHD perform differently from typically developing children in tasks of time estimation and production. The main objective was to develop sound and musical tasks to evaluate and compare the performance of children with ADHD, with and without methylphenidate, against a control group with typical development. The study involved 36 participants aged 6-14 years, recruited at NANI-UNIFESP/SP, subdivided into three groups of 12 children each. Data were collected with a musical keyboard and Logic Audio Software 9.0, which recorded each participant's performance in the tasks. Tasks were divided into sections: spontaneous time production, time estimation with simple sounds, and time estimation with music. The main results were: (1) the performance of the ADHD groups in temporal estimation of simple sounds at short time intervals (30 ms) was statistically lower than that of the control group (p < 0.05); (2) in the task comparing musical excerpts of the same duration (7 s), the ADHD groups judged the tracks as longer when the musical notes had longer durations, whereas in the control group the judged duration was related to the density of musical notes in the track. The positive average performance observed in the three groups in most tasks suggests that music may, in some way, positively modulate the symptoms of inattention in ADHD.

  15. The effect of vocal and instrumental music on cardiorespiratory variables, energy expenditure and exertion levels during submaximal treadmill exercise.

    Science.gov (United States)

    Savitha, D; Sejil, T V; Rao, Shwetha; Roshan, C J; Roshan, C J

    2013-01-01

    The purpose of the study was to investigate the effect of vocal and instrumental music on various physiological parameters during submaximal exercise. Each subject underwent three sessions of the exercise protocol: without music, with vocal music, and with an instrumental version of the same piece of music. The protocol consisted of 10 min of treadmill exercise at 70% HR(max) and 20 min of recovery. Minute-by-minute heart rate, breath-by-breath respiratory parameters, rate of energy expenditure, and perceived exertion levels were measured. Music, irrespective of the presence or absence of lyrics, enabled the subjects to exercise at a significantly lower heart rate and oxygen consumption, and reduced the metabolic cost and perceived exertion levels of exercise. Music, having a relaxant effect, could probably have increased parasympathetic activation, leading to these effects.

  16. The sounds of safety: stress and danger in music perception.

    Science.gov (United States)

    Schäfer, Thomas; Huron, David; Shanahan, Daniel; Sedlmeier, Peter

    2015-01-01

    As with any sensory input, music might be expected to incorporate the processing of information about the safety of the environment. Little research has been done on how such processing has evolved and how different kinds of sounds may affect the experience of certain environments. In this article, we investigate whether music, as a form of auditory information, can trigger the experience of safety. We hypothesized that (1) there should be an optimal, subjectively preferred degree of information density of musical sounds, at which safety-related information can be processed optimally; (2) any deviation from the optimum, that is, both higher and lower levels of information density, should elicit experiences of higher stress and danger; and (3) in general, sonic scenarios with music should reduce experiences of stress and danger more than other scenarios. In Experiment 1, the information density of short music-like rhythmic stimuli was manipulated via their tempo. In an initial session, listeners adjusted the tempo of the stimuli to what they deemed an appropriate tempo. In an ensuing session, the same listeners judged their experienced stress and danger in response to the same stimuli, as well as to tempo variants of them. Results are consistent with the existence of an optimum information density for a given rhythm; the preferred tempo decreased for increasingly complex rhythms. The hypothesis that any deviation from the optimum would lead to experiences of higher stress and danger was only partly supported by the data. In Experiment 2, listeners indicated their experience of stress and danger in response to different sonic scenarios: music, natural sounds, and silence. As expected, the music scenarios were associated with the lowest stress and danger, whereas both natural sounds and silence resulted in higher stress and danger. Overall, the results largely fit the hypothesis that music seemingly carries safety-related information about the environment.

  17. Social functioning and autonomic nervous system sensitivity across vocal and musical emotion in Williams syndrome and autism spectrum disorder.

    Science.gov (United States)

    Järvinen, Anna; Ng, Rowena; Crivelli, Davide; Neumann, Dirk; Arnold, Andrew J; Woo-VonHoogenstyn, Nicholas; Lai, Philip; Trauner, Doris; Bellugi, Ursula

    2016-01-01

    Both Williams syndrome (WS) and autism spectrum disorders (ASD) are associated with unusual auditory phenotypes with respect to processing vocal and musical stimuli, which may be shaped by the atypical social profiles that characterize the syndromes. Autonomic nervous system (ANS) reactivity to vocal and musical emotional stimuli was examined in 12 children with WS, 17 children with ASD, and 20 typically developing (TD) children, and related to their level of social functioning. The results of this small-scale study showed that after controlling for between-group differences in cognitive ability, all groups showed similar emotion identification performance across conditions. Additionally, in ASD, lower autonomic reactivity to human voice, and in TD, to musical emotion, was related to more normal social functioning. Compared to TD, both clinical groups showed increased arousal to vocalizations. A further result highlighted uniquely increased arousal to music in WS, contrasted with a decrease in arousal in ASD and TD. The ASD and WS groups exhibited arousal patterns suggestive of diminished habituation to the auditory stimuli. The results are discussed in the context of the clinical presentation of WS and ASD. © 2015 Wiley Periodicals, Inc.

  18. Sound or Expression: Dilemmas in the Phenomenological Aesthetics of 20th Century Music

    Directory of Open Access Journals (Sweden)

    Martina Stratilková

    2016-12-01

    Phenomenology, as a philosophy of the twentieth century, is often confronted with music of the same period, which, in contrast with the classical-romantic repertoire, recedes from previously codified means for organising musical structure (namely tonality) and holds up the actual matter of the music – sound – for admiration. Musical experience thus dwells more on the sound and its direct appearance rather than rushing toward the musical meanings intended through its sensuous moments. In this respect, music in the first decades of the twentieth century paralleled the other arts, which were undergoing a similar development. Romantic art was replaced by artistic creativity relying on the objectivity of the musical material rather than on emotional quality. The paper considers the circumstances under which some phenomenological approaches take a positive view of the music of the twentieth century (those which stress the immediacy of perceptive presence) and others tend to reject it (those which apply the requirement of expressive intentionality).

  19. Music and language expertise influence the categorization of speech and musical sounds: behavioral and electrophysiological measurements.

    Science.gov (United States)

    Elmer, Stefan; Klein, Carina; Kühnis, Jürg; Liem, Franziskus; Meyer, Martin; Jäncke, Lutz

    2014-10-01

    In this study, we used high-density EEG to evaluate whether speech and music expertise has an influence on the categorization of expertise-related and unrelated sounds. With this purpose in mind, we compared the categorization of speech, music, and neutral sounds between professional musicians, simultaneous interpreters (SIs), and controls in response to morphed speech-noise, music-noise, and speech-music continua. Our hypothesis was that music and language expertise would strengthen the memory representations of prototypical sounds, which act as a perceptual magnet for morphed variants; that is, the prototype would "attract" variants. This so-called magnet effect should be manifested by an increased assignment of morphed items to the trained category, by a reduced maximal slope of the psychometric function, as well as by differential event-related brain responses reflecting memory comparison processes (i.e., N400 and P600 responses). As a main result, we provide the first evidence for a domain-specific behavioral bias of musicians and SIs toward the trained categories, namely music and speech. In addition, SIs showed a bias toward musical items, indicating that interpreting training has a generic influence on the cognitive representation of spectrotemporal signals with acoustic properties similar to speech sounds. Notably, EEG measurements revealed clearly distinct N400 and P600 responses to both prototypical and ambiguous items across the three groups at anterior, central, and posterior scalp sites. These differential N400 and P600 responses represent synchronous activity occurring across widely distributed brain networks, and indicate a dynamic recruitment of memory processes that varies as a function of training and expertise.
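
    The "reduced maximal slope of the psychometric function" mentioned above can be made concrete by fitting a logistic function to categorization proportions along a morph continuum. The sketch below, with invented group data, is only an illustration of that analysis step, not the authors' procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: proportion of 'music' responses along a morph continuum."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical group data: morph step (0 = pure speech, 1 = pure music) and the
# proportion of trials categorized as 'music' at each step.
morph = np.linspace(0.0, 1.0, 9)
p_music = np.array([0.02, 0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97, 0.99])

# Fit the category boundary (x0) and slope parameter (k).
(x0, k), _ = curve_fit(logistic, morph, p_music, p0=[0.5, 10.0])

# The maximal slope of a logistic function is k/4, reached at the boundary x0.
print(f"category boundary = {x0:.2f}, maximal slope = {k / 4:.2f}")
```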

  20. Discrimination of musical instrument sounds resynthesized with simplified spectrotemporal parameters.

    Science.gov (United States)

    McAdams, S; Beauchamp, J W; Meneguzzi, S

    1999-02-01

    The perceptual salience of several outstanding features of quasiharmonic, time-variant spectra was investigated in musical instrument sounds. Spectral analyses of sounds from seven musical instruments (clarinet, flute, oboe, trumpet, violin, harpsichord, and marimba) produced time-varying harmonic amplitude and frequency data. Six basic data simplifications and five combinations of them were applied to the reference tones: amplitude-variation smoothing, coherent variation of amplitudes over time, spectral-envelope smoothing, forced harmonic-frequency variation, frequency-variation smoothing, and harmonic-frequency flattening. Listeners were asked to discriminate sounds resynthesized with simplified data from reference sounds resynthesized with the full data. Averaged over the seven instruments, the discrimination was very good for spectral envelope smoothing and amplitude envelope coherence, but was moderate to poor in decreasing order for forced harmonic frequency variation, frequency variation smoothing, frequency flattening, and amplitude variation smoothing. Discrimination of combinations of simplifications was equivalent to that of the most potent constituent simplification. Objective measurements were made on the spectral data for harmonic amplitude, harmonic frequency, and spectral centroid changes resulting from simplifications. These measures were found to correlate well with discrimination results, indicating that listeners have access to a relatively fine-grained sensory representation of musical instrument sounds.
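
    To picture one of the simplifications above, the sketch below applies a crude spectral-envelope smoothing (a moving average across harmonic amplitudes at each analysis frame) and then resynthesizes the tone additively. The smoothing width, frame rate, and harmonic data are assumptions; the authors' exact analysis-resynthesis system is not reproduced here.

```python
import numpy as np

def smooth_spectral_envelope(harm_amps, width=5):
    """Moving-average smoothing across harmonic amplitudes at each analysis frame.

    harm_amps : (n_frames, n_harmonics) array of time-varying harmonic amplitudes.
    width     : number of neighbouring harmonics averaged together.
    """
    kernel = np.ones(width) / width
    return np.array([np.convolve(frame, kernel, mode="same") for frame in harm_amps])

def additive_resynthesis(harm_amps, f0, sr=44100, frame_rate=100):
    """Resynthesize a quasi-harmonic tone from frame-wise harmonic amplitudes."""
    n_frames, n_harm = harm_amps.shape
    n_samples = int(sr * n_frames / frame_rate)
    t = np.arange(n_samples) / sr
    frame_times = np.arange(n_frames) / frame_rate
    out = np.zeros(n_samples)
    for h in range(n_harm):
        amp = np.interp(t, frame_times, harm_amps[:, h])  # upsample the envelope
        out += amp * np.sin(2 * np.pi * f0 * (h + 1) * t)
    return out

# Hypothetical analysis data: 200 frames x 20 harmonics of a 440 Hz tone with a 1/n rolloff.
rng = np.random.default_rng(1)
amps = np.abs(rng.normal(1.0 / np.arange(1, 21), 0.1, (200, 20)))
tone = additive_resynthesis(smooth_spectral_envelope(amps), f0=440.0)
```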

  1. Blah, Blah, Blah: Making Sense of Nonsense in Irish Vocal Music

    Directory of Open Access Journals (Sweden)

    Catherine E. Mullins

    2014-11-01

    This paper seeks to provide a foundation for understanding lilting, a traditional type of vocal music found in Ireland that involves improvising non-lexical vocables to dance tunes, in order to help preserve this genre in its traditional form as well as encourage its transformation and incorporation into modern music. Through a case study, this research paper demonstrates certain features and patterns that may characterize traditional lilting. A recording of Seamus Fay's performance of the traditional folk jig, "Humours of Ballyloughlin," has been transcribed for analysis and examined for possible relationships of vocables or vowels to the music and of vocables to other vocables. Characteristics suggested by the transcription include the importance of [d], the extent of the vocable vocabulary used throughout the piece, the typical arrangement of vocables in relation to one another, and the connection of vocables to metric accents and of vowels to agogic accents.

  2. What Constitutes a Phrase in Sound-Based Music? A Mixed-Methods Investigation of Perception and Acoustics.

    Science.gov (United States)

    Olsen, Kirk N; Dean, Roger T; Leung, Yvonne

    2016-01-01

    Phrasing facilitates the organization of auditory information and is central to speech and music. Not surprisingly, aspects of changing intensity, rhythm, and pitch are key determinants of musical phrases and their boundaries in instrumental note-based music. Different kinds of speech (such as tone- vs. stress-languages) share these features in different proportions and form an instructive comparison. However, little is known about whether or how musical phrasing is perceived in sound-based music, where the basic musical unit from which a piece is created is commonly non-instrumental continuous sounds, rather than instrumental discontinuous notes. This issue forms the target of the present paper. Twenty participants (17 untrained in music) were presented with six stimuli derived from sound-based music, note-based music, and environmental sound. Their task was to indicate each occurrence of a perceived phrase and qualitatively describe key characteristics of the stimulus associated with each phrase response. It was hypothesized that sound-based music does elicit phrase perception, and that this is primarily associated with temporal changes in intensity and timbre, rather than rhythm and pitch. Results supported this hypothesis. Qualitative analysis of participant descriptions showed that for sound-based music, the majority of perceived phrases were associated with intensity or timbral change. For the note-based piano piece, rhythm was the main theme associated with perceived musical phrasing. We modeled the occurrence in time of perceived musical phrases with recurrent event 'hazard' analyses using time-series data representing acoustic predictors associated with intensity, spectral flatness, and rhythmic density. Acoustic intensity and timbre (represented here by spectral flatness) were strong predictors of perceived musical phrasing in sound-based music, and rhythm was only predictive for the piano piece. A further analysis including five additional spectral
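
    Spectral flatness, the timbral predictor used in the hazard analyses above, is conventionally the ratio of the geometric to the arithmetic mean of the power spectrum. A minimal frame-wise computation is sketched below; the frame length and hop size are assumptions.

```python
import numpy as np

def spectral_flatness(signal, frame_len=2048, hop=1024, eps=1e-12):
    """Frame-wise spectral flatness: geometric mean / arithmetic mean of the power
    spectrum (near 0 for tonal frames, higher for noise-like frames)."""
    window = np.hanning(frame_len)
    values = []
    for start in range(0, len(signal) - frame_len, hop):
        power = np.abs(np.fft.rfft(signal[start:start + frame_len] * window)) ** 2 + eps
        values.append(np.exp(np.mean(np.log(power))) / np.mean(power))
    return np.array(values)

# Example: broadband noise is much flatter than a pure tone.
sr = 44100
t = np.arange(sr) / sr
print(spectral_flatness(np.random.default_rng(2).normal(size=sr)).mean())  # relatively high
print(spectral_flatness(np.sin(2 * np.pi * 440 * t)).mean())               # close to 0
```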

  3. Voice Use Among Music Theory Teachers: A Voice Dosimetry and Self-Assessment Study.

    Science.gov (United States)

    Schiller, Isabel S; Morsomme, Dominique; Remacle, Angélique

    2017-07-25

    This study aimed (1) to investigate music theory teachers' professional and extra-professional vocal loading and background noise exposure, (2) to determine the correlation between vocal loading and background noise, and (3) to determine the correlation between vocal loading and self-evaluation data. Using voice dosimetry, 13 music theory teachers were monitored for one workweek. The parameters analyzed were voice sound pressure level (SPL), fundamental frequency (F0), phonation time, vocal loading index (VLI), and noise SPL. Spearman correlation was used to correlate vocal loading parameters (voice SPL, F0, and phonation time) and noise SPL. Each day, the subjects self-assessed their voice using visual analog scales. VLI and self-evaluation data were correlated using Spearman correlation. Vocal loading parameters and noise SPL were significantly higher in the professional than in the extra-professional environment. Voice SPL, phonation time, and female subjects' F0 correlated positively with noise SPL. VLI correlated with self-assessed voice quality, vocal fatigue, and amount of singing and speaking voice produced. Teaching music theory is a profession with high vocal demands. More background noise is associated with increased vocal loading and may indirectly increase the risk for voice disorders. Correlations between VLI and self-assessments suggest that these teachers are well aware of their vocal demands and feel their effect on voice quality and vocal fatigue. Visual analog scales seem to represent a useful tool for subjective vocal loading assessment and associated symptoms in these professional voice users. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
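
    The Spearman correlations reported above between vocal loading parameters and noise SPL can be reproduced in outline as follows; the simulated dosimetry summaries and effect sizes are invented for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

# Simulated dosimetry summaries: one row per measurement block.
rng = np.random.default_rng(3)
noise_spl = rng.normal(70, 8, 120)                              # background noise, dB(A)
voice_spl = 60 + 0.4 * noise_spl + rng.normal(0, 3, 120)        # voice level rises with noise
phonation_time = 10 + 0.2 * noise_spl + rng.normal(0, 2, 120)   # % of block spent phonating

for name, values in [("voice SPL", voice_spl), ("phonation time", phonation_time)]:
    rho, p = spearmanr(values, noise_spl)
    print(f"{name} vs noise SPL: rho = {rho:.2f}, p = {p:.3g}")
```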

  4. Exposure to excessive sounds and hearing status in academic classical music students

    Directory of Open Access Journals (Sweden)

    Małgorzata Pawlaczyk-Łuszczyńska

    2017-02-01

    Objectives: The aim of this study was to assess the hearing of music students in relation to their exposure to excessive sounds. Material and Methods: Standard pure-tone audiometry (PTA) was performed in 168 music students, aged 22.5±2.5 years. The control group included 67 subjects, non-music students and non-musicians, aged 22.8±3.3 years. Data on the study subjects' musical experience, instruments in use, time of weekly practice and additional risk factors for noise-induced hearing loss (NIHL) were identified by means of a questionnaire survey. Sound pressure levels produced by various groups of instruments during solo and group playing were also measured and analyzed. The music students' audiometric hearing threshold levels (HTLs) were compared with the theoretical predictions calculated according to the International Organization for Standardization standard ISO 1999:2013. Results: It was estimated that the music students were exposed for 27.1±14.3 h/week to sounds at the A-weighted equivalent-continuous sound pressure level of 89.9±6.0 dB. There were no significant differences in HTLs between the music students and the control group in the frequency range of 4000–8000 Hz. Furthermore, in each group HTLs in the frequency range 1000–8000 Hz did not exceed 20 dB HL in 83% of the examined ears. Nevertheless, high-frequency notched audiograms typical of noise-induced hearing loss were found in 13.4% and 9% of the musicians and non-musicians, respectively. The odds ratio (OR) of notching in the music students increased significantly along with higher sound pressure levels (OR = 1.07, 95% confidence interval (CI): 1.014–1.13, p < 0.05). The students' HTLs were worse (higher) than those of a highly screened non-noise-exposed population. Moreover, their hearing loss was less severe than that expected from sound exposure at 3000 Hz and 4000 Hz, and more severe at 6000 Hz. Conclusions: The results confirm the need for further studies and for the development of a hearing conservation program for

  5. Exposure to excessive sounds and hearing status in academic classical music students.

    Science.gov (United States)

    Pawlaczyk-Łuszczyńska, Małgorzata; Zamojska-Daniszewska, Małgorzata; Dudarewicz, Adam; Zaborowski, Kamil

    2017-02-21

    The aim of this study was to assess the hearing of music students in relation to their exposure to excessive sounds. Standard pure-tone audiometry (PTA) was performed in 168 music students, aged 22.5±2.5 years. The control group included 67 subjects, non-music students and non-musicians, aged 22.8±3.3 years. Data on the study subjects' musical experience, instruments in use, time of weekly practice and additional risk factors for noise-induced hearing loss (NIHL) were identified by means of a questionnaire survey. Sound pressure levels produced by various groups of instruments during solo and group playing were also measured and analyzed. The music students' audiometric hearing threshold levels (HTLs) were compared with the theoretical predictions calculated according to the International Organization for Standardization standard ISO 1999:2013. It was estimated that the music students were exposed for 27.1±14.3 h/week to sounds at the A-weighted equivalent-continuous sound pressure level of 89.9±6.0 dB. There were no significant differences in HTLs between the music students and the control group in the frequency range of 4000-8000 Hz. Furthermore, in each group HTLs in the frequency range 1000-8000 Hz did not exceed 20 dB HL in 83% of the examined ears. Nevertheless, high-frequency notched audiograms typical of noise-induced hearing loss were found in 13.4% and 9% of the musicians and non-musicians, respectively. The odds ratio (OR) of notching in the music students increased significantly along with higher sound pressure levels (OR = 1.07, 95% confidence interval (CI): 1.014-1.13, p < 0.05). The students' HTLs were worse (higher) than those of a highly screened non-noise-exposed population. Moreover, their hearing loss was less severe than that expected from sound exposure at 3000 Hz and 4000 Hz, and more severe at 6000 Hz. The results confirm the need for further studies and development of a hearing conservation program for
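
    The reported odds ratio of 1.07 per dB can be read as the exponentiated slope of a logistic regression of notch presence on exposure level. The sketch below fits such a model to simulated data with statsmodels; it is an illustration of the statistic, not the authors' analysis.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: per-student A-weighted exposure level (dB) and whether a
# high-frequency audiometric notch was observed (1) or not (0).
rng = np.random.default_rng(4)
level_db = rng.normal(90, 6, 168)
true_logit = -8.2 + 0.07 * level_db            # slope 0.07 -> OR = exp(0.07) ~ 1.07 per dB
notch = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Logistic regression of notch presence on exposure level; the exponentiated
# slope is the odds ratio per 1 dB increase.
fit = sm.Logit(notch, sm.add_constant(level_db)).fit(disp=0)
or_per_db = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"OR per dB = {or_per_db:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```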

  6. What Does Music Sound Like for a Cochlear Implant User?

    Science.gov (United States)

    Jiam, Nicole T; Caldwell, Meredith T; Limb, Charles J

    2017-09-01

    Cochlear implant research and product development over the past 40 years have been heavily focused on speech comprehension, with little emphasis on music listening and enjoyment. The limited understanding of how music sounds to a cochlear implant user stands in stark contrast to the importance the public places on music and quality of life. The purpose of this article is to describe what music sounds like to cochlear implant users, using a combination of existing research studies and listener descriptions. We examined the published literature on music perception in cochlear implant users, particularly postlingual cochlear implant users, with an emphasis on the primary elements of music and recorded music. Additionally, we administered an informal survey to cochlear implant users to gather first-hand descriptions of music listening experience and satisfaction from the cochlear implant population. Limitations in cochlear implant technology lead to a music listening experience that is significantly distorted compared with that of normal-hearing listeners. On the basis of many studies and sources, we describe how music is frequently perceived as out-of-tune, dissonant, indistinct, emotionless, and weak in bass frequencies, especially for postlingual cochlear implant users, which may in part explain why music enjoyment and participation levels are lower after implantation. Additionally, cochlear implant users report difficulty in specific musical contexts based on factors including but not limited to genre, presence of lyrics, timbres (woodwinds, brass, instrument families), and complexity of the perceived music. Future research and cochlear implant development should target these areas as parameters for improvement in cochlear implant-mediated music perception.

  7. Exploring the effect of sound and music on health in hospital settings: A narrative review.

    Science.gov (United States)

    Iyendo, Timothy Onosahwo

    2016-11-01

    Sound in hospital space has traditionally been considered in negative terms, as both intrusive and unwanted, and based mainly on sound levels. However, sound level is only one aspect of the soundscape. There is strong evidence that exploring the positive aspects of sound in a hospital context can evoke positive feelings in both patients and nurses. Music psychology studies have also shown that music interventions in health care can have a positive effect on patients' emotions and recuperative processes. In this way, hospital spaces have the potential to reduce anxiety and stress, and to make patients feel comfortable and secure. This paper describes a review of the literature exploring sound perception and its effect on health care. This review sorted the literature and main issues into themes concerning sound in health care spaces; sound, stress and health; positive soundscapes; psychological perspectives on music and emotion; music as a complementary medicine for improving health care; contradicting arguments concerning the use of music in health care; and implications for clinical practice. Using Web of Science, PubMed, Scopus, ProQuest Central, MEDLINE, and Google, a literature search on sound levels, sound sources and the impression of a soundscape was conducted. The review focused on the role and use of music in health care in clinical environments. In addition, other pertinent related materials shaping the understanding of the field were retrieved, scanned and added to this review. The results indicated that not all noises give a negative impression within healthcare soundscapes. Listening to soothing music was shown to reduce stress, blood pressure and post-operative trauma when compared to silence. Much of the sound conveys meaningful information that is positive for both patients and nurses, such as soft wind, bird twitter, and ocean sounds. Music perception was demonstrated to bring about positive change in patient-reported outcomes such as eliciting

  8. Animal signals and emotion in music: Coordinating affect across groups

    Directory of Open Access Journals (Sweden)

    Gregory A. Bryant

    2013-12-01

    Researchers studying the emotional impact of music have not traditionally been concerned with the principled relationship between form and function in evolved animal signals. The acoustic structure of musical forms is related in important ways to emotion perception, and thus research on nonhuman animal vocalizations is relevant for understanding emotion in music. Musical behavior occurs in cultural contexts that include many other coordinated activities which mark group identity, and can allow people to communicate within and between social alliances. The emotional impact of music might be best understood as a proximate mechanism serving an ultimately social function. Here I describe recent work that reveals intimate connections between properties of certain animal signals and evocative aspects of human music, including (1) examinations of the role of nonlinearities (e.g., broadband noise) in nonhuman animal vocalizations, and the analogous production and perception of these features in human music, and (2) an analysis of group musical performances and possible relationships to nonhuman animal chorusing and emotional contagion effects. Communicative features in music are likely due primarily to evolutionary byproducts of phylogenetically older, but still intact, communication systems. But in some cases, such as the coordinated rhythmic sounds produced by groups of musicians, our appreciation and emotional engagement might be due to the operation of an adaptive social signaling system. Future empirical work should examine human musical behavior through the comparative lens of behavioral ecology and an adaptationist cognitive science. By this view, particular coordinated sound combinations generated by musicians exploit evolved perceptual response biases, many shared across species, and proliferate through cultural evolutionary processes.

  9. The effects of noncontingent music and response interruption and redirection on vocal stereotypy.

    Science.gov (United States)

    Gibbs, Ashley R; Tullis, Christopher A; Thomas, Raven; Elkins, Brittany

    2018-06-17

    Vocal stereotypy is a commonly occurring challenging behavior in children with autism spectrum disorder (ASD) that is frequently maintained by automatic reinforcement and often interferes with skill acquisition. Matched stimulation (MS), and response interruption and redirection (RIRD) are two interventions that have been demonstrated to be effective in reducing the occurrence of vocal stereotypy with participants with ASD. The current study sought to determine if the combination of MS (noncontingent music) and RIRD was more effective at reducing vocal stereotypy than RIRD alone and if the parents of children with ASD found the combination of MS and RIRD more socially valid than RIRD alone. The results suggested that the combined intervention resulted in greater suppression of vocal stereotypy and increased occurrences of on-task behavior in both participants. Additionally, RIRD required fewer implementations and had a shorter duration when combined with MS. Results suggest that the combination of MS and RIRD may be an effective intervention outside of highly controlled settings. © 2018 Society for the Experimental Analysis of Behavior.

  10. On the Music of Sounds and the Music of Things (EMS2017, Nagoya, Japan)

    OpenAIRE

    Richards, John; Landy, Leigh

    2017-01-01

    After a century of great upheaval in music, the twenty-first century is demonstrating that it will provide electroacoustic (or sound-based) music with continued radical developments although they may very well be of a different sort. Technological developments certainly dictated most of the twentieth century changes in music and this influence is in no way decreasing. The key change is less in terms of radical change regarding content; instead, our thesis is that production and distribution w...

  11. Superior analgesic effect of an active distraction versus pleasant unfamiliar sounds and music

    DEFF Research Database (Denmark)

    Garza Villarreal, Eduardo A.; Brattico, Elvira; Vase, Lene

    2012-01-01

    Previous studies have shown a superior analgesic effect of favorite music over other passive or active distractive tasks. However, it is unclear what mediates this effect. In this study we investigated to what extent distraction, emotional valence and cognitive styles may explain part… questionnaires concerning cognitive styles (Baron-Cohen and self-report). Active distraction with PASAT led to significantly less pain intensity and unpleasantness as compared to music and sound. In turn, both music and sound relieved pain significantly more than noise. When music and sound had the same level… of valence they relieved pain to a similar degree. The emotional ratings of the conditions were correlated with the amount of pain relief, and cognitive styles seemed to influence the analgesic effect. These findings suggest that the pain-relieving effect previously seen in relation to music may be at least…

  12. Effects of melody and technique on acoustical and musical features of western operatic singing voices.

    Science.gov (United States)

    Larrouy-Maestri, Pauline; Magis, David; Morsomme, Dominique

    2014-05-01

    The operatic singing technique is frequently used in classical music. Several acoustical parameters of this specific technique have been studied but how these parameters combine remains unclear. This study aims to further characterize the Western operatic singing technique by observing the effects of melody and technique on acoustical and musical parameters of the singing voice. Fifty professional singers performed two contrasting melodies (popular song and romantic melody) with two vocal techniques (with and without operatic singing technique). The common quality parameters (energy distribution, vibrato rate, and extent), perturbation parameters (standard deviation of the fundamental frequency, signal-to-noise ratio, jitter, and shimmer), and musical features (fundamental frequency of the starting note, average tempo, and sound pressure level) of the 200 sung performances were analyzed. The results regarding the effect of melody and technique on the acoustical and musical parameters show that the choice of melody had a limited impact on the parameters observed, whereas a particular vocal profile appeared depending on the vocal technique used. This study confirms that vocal technique affects most of the parameters examined. In addition, the observation of quality, perturbation, and musical parameters contributes to a better understanding of the Western operatic singing technique. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
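
    Two of the perturbation parameters above, jitter and shimmer, are commonly computed as the mean absolute difference between consecutive glottal periods (or cycle peak amplitudes) relative to their mean. The sketch below assumes that cycle-by-cycle period and amplitude sequences have already been extracted from the sung vowel; the simulated values are illustrative only.

```python
import numpy as np

def local_jitter(periods):
    """Local jitter (%): mean absolute difference of consecutive glottal periods,
    divided by the mean period."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amplitudes):
    """Local shimmer (%): mean absolute difference of consecutive cycle peak
    amplitudes, divided by the mean amplitude."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# Hypothetical cycle-by-cycle measurements for a sustained sung vowel near 220 Hz.
rng = np.random.default_rng(5)
periods = 1 / 220 + rng.normal(0, 2e-5, 200)   # seconds per glottal cycle
amplitudes = 0.8 + rng.normal(0, 0.01, 200)    # peak amplitude per cycle (arbitrary units)

print(f"jitter  = {local_jitter(periods):.2f} %")
print(f"shimmer = {local_shimmer(amplitudes):.2f} %")
```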

  13. P1-18: The Effect of Background Music on Working Memory

    Directory of Open Access Journals (Sweden)

    Ding-Hao Liu

    2012-10-01

    Many studies have examined visual working memory under various sound conditions (Alley & Greene, 2008, Current Psychology, 27, 277–289; Iwanaga & Ito, 2002, Perceptual and Motor Skills, 94, 1251–1258; Pring & Walker, 1994, Current Psychology, 13, 165–171). In order to understand more about background music, we modified previous studies to examine how the performance of working memory is affected by four different music conditions. Participants were randomly assigned to two groups and listened to two different pop songs, to test whether the songs have a similar effect on working memory performance. They were required to do six trials of digit span tasks under each music condition (silence, classical music, non-vocal music, vocal music). After being shown ten digits, each for 800 ms, participants were asked to recall and write down the digits in the correct order within 20 s. The results showed that there was no significant difference between the two pop songs. Therefore, data were pooled for further analysis and indicated that there are significant differences and correlations in working memory among the four music conditions. The effect of non-vocal music on working memory found here is greater than that reported in the Western studies (Alley & Greene, 2008; Pring & Walker, 1994), which is consistent with a previous study in Japan (Iwanaga & Ito, 2002). The application of this study will be discussed in detail.

  14. Introducing the Oxford Vocal (OxVoc) Sounds Database: A validated set of non-acted affective sounds from human infants, adults and domestic animals

    Directory of Open Access Journals (Sweden)

    Christine eParsons

    2014-06-01

    Sound moves us. Nowhere is this more apparent than in our responses to genuine emotional vocalisations, be they heartfelt distress cries or raucous laughter. Here, we present perceptual ratings and a description of a freely available, large database of natural affective vocal sounds from human infants, adults and domestic animals, the Oxford Vocal (OxVoc) Sounds database. This database consists of 173 non-verbal sounds expressing a range of happy, sad and neutral emotional states. Ratings are presented for the sounds on a range of dimensions from a number of independent participant samples. Perceptions related to valence, including distress, vocaliser mood, and listener mood, are presented in Study 1. Perceptions of the arousal of the sound, listener motivation to respond, and valence (positive, negative) are presented in Study 2. Perceptions of the emotional content of the stimuli in both Study 1 and Study 2 were consistent with the predefined categories (e.g., laugh stimuli perceived as positive). While the adult vocalisations received more extreme valence ratings, rated motivation to respond to the sounds was highest for the infant sounds. The major advantages of this database are the inclusion of vocalisations from naturalistic situations, which represent genuine expressions of emotion, and the inclusion of vocalisations from animals and infants, providing comparison stimuli for use in cross-species and developmental studies. The associated website provides a detailed description of the physical properties of each sound stimulus along with cross-category descriptions.

  15. A neurally inspired musical instrument classification system based upon the sound onset.

    Science.gov (United States)

    Newton, Michael J; Smith, Leslie S

    2012-06-01

    Physiological evidence suggests that sound onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset across five instrument categories. A neurally inspired tone descriptor is created using a model of the auditory system's response to sound onset. A gammatone filterbank and spiking onset detectors, built from dynamic synapses and leaky integrate-and-fire neurons, create parallel spike trains that emphasize the sound onset. These are coded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies, based upon mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide context to the method. Classification success rates for the neurally-inspired method are around 75%. The cepstral methods perform between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally inspired method is considerably more robust when tested with data from an unrelated dataset.
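
    As a toy illustration of the onset-emphasizing front end described above, the sketch below runs a signal through one crude auditory band (a Butterworth band-pass standing in for a gammatone channel), rectifies and smooths it, and feeds the positive rate of change of the envelope into a leaky integrate-and-fire unit so that spikes cluster at the sound onset. All parameters are assumptions and this is not the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, lfilter

def band_envelope(signal, sr, lo, hi, smooth_hz=20.0):
    """One crude auditory channel: band-pass (a Butterworth stand-in for a gammatone
    filter), half-wave rectify, then low-pass to obtain a smooth band envelope."""
    b, a = butter(2, [lo / (sr / 2), hi / (sr / 2)], btype="band")
    rectified = np.maximum(lfilter(b, a, signal), 0.0)
    bl, al = butter(2, smooth_hz / (sr / 2), btype="low")
    return lfilter(bl, al, rectified)

def lif_onset_spikes(envelope, sr, tau=0.01, threshold=0.05):
    """Leaky integrate-and-fire unit driven by the positive rate of change of the
    band envelope, so it fires preferentially at sound onsets."""
    drive = np.maximum(np.diff(envelope, prepend=envelope[0]), 0.0)
    leak = np.exp(-1.0 / (tau * sr))
    v, spikes = 0.0, []
    for n, dx in enumerate(drive):
        v = v * leak + dx                 # leaky integration of envelope increments
        if v >= threshold:                # threshold crossing -> spike and reset
            spikes.append(n / sr)
            v = 0.0
    return np.array(spikes)

# Toy stimulus: a 440 Hz tone switching on abruptly at 0.2 s.
sr = 16000
t = np.arange(int(0.5 * sr)) / sr
tone = np.where(t >= 0.2, np.sin(2 * np.pi * 440 * t), 0.0)

env = band_envelope(tone, sr, 300, 600)
print(lif_onset_spikes(env, sr))          # spike times cluster just after 0.2 s
```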

  16. INNOVATIVE TENDENCIES OF FUTURE MUSIC TEACHERS’ SINGING TRAINING IN THE PROCESS OF PROFESSIONAL TRAINING

    Directory of Open Access Journals (Sweden)

    Si Daofen

    2017-04-01

    The article presents innovative tendencies in the singing training of students of art institutes at pedagogical universities. The issue is relevant because the modernization of higher art and pedagogical education requires implementing new scientific approaches and innovative technologies in future music teachers' training, to ensure the comprehensive development of the young generation in modern conditions. The aim of the article is therefore to describe the main features of implementing innovative technologies in future music teachers' training. The analysis of pedagogical and psychological literature shows that the main features of methodological preparation for work with schoolchildren are the following: mastering professional knowledge while taking into account the characteristics of adults' and children's phonation; considering the aesthetic and value qualities of vocal sound according to modern standards of singers' training; comprehensive development of vocal, melodic and harmonic hearing; the ability to elicit correct vocal sound from schoolchildren; and developing skills of methodological analysis of the singing process. Based on an analysis of works by V. Antoniuk, N. Hrebeniuk and V. Morozov, it is argued that the efficiency of students' and singers' performance depends on their readiness to make independent decisions in the practical creative and performing process, which is a general tendency in singing training. One of the main objectives of preparing future music teachers for performing activities during their years of study is therefore thought to be the development of singers' independence. Among the most effective innovative technologies for future music teachers' singing training, the author proposes the technologies of vocal and choral performance by V. Yemelianova, V. Morozova and H. Struve. It is argued that none of the innovative concepts discussed in the article can be mechanically implemented in current national conditions

  17. Neuroplasticity beyond Sounds: Neural Adaptations Following Long-Term Musical Aesthetic Experiences

    Directory of Open Access Journals (Sweden)

    Mark Reybrouck

    2015-03-01

    Capitalizing on neuroscience knowledge about how individuals are affected by the sound environment, we propose to adopt a cybernetic and ecological point of view on the musical aesthetic experience, which includes subprocesses such as feature extraction and integration, early affective reactions and motor actions, style mastering and conceptualization, emotion and proprioception, and evaluation and preference. In this perspective, the role of the listener/composer/performer is seen as that of an active "agent" coping in highly individual ways with the sounds. The findings concerning the neural adaptations in musicians, following long-term exposure to music, are then reviewed while keeping in mind the distinct subprocesses of a musical aesthetic experience. We conclude that these neural adaptations can be conceived of as the product of immediate and lifelong interactions with multisensory stimuli (having a predominant auditory component), which result in lasting changes to the internal state of the "agent". In a continuous loop, these changes affect, in turn, the subprocesses involved in a musical aesthetic experience, towards the final goal of achieving better perceptual, motor and proprioceptive responses to the immediate demands of the sounding environment. The resulting neural adaptations in musicians closely depend on the duration of the interactions, the starting age, the involvement of attention, the amount of motor practice and the musical genre played.

  18. Benefits of music training are widespread and lifelong: a bibliographic review of their non-musical effects.

    Science.gov (United States)

    Dawson, William J

    2014-06-01

    Recent publications indicate that musical training has effects on non-musical activities, some of which are lifelong. This study reviews recent publications collected from the Performing Arts Medicine Association bibliography. Music training, whether instrumental or vocal, produces beneficial and long-lasting changes in brain anatomy and function. Anatomic changes occur in brain areas devoted to hearing, speech, hand movements, and coordination between both sides of the brain. Functional benefits include improved sound processing and motor skills, especially in the upper extremities. Training benefits extend beyond music skills, resulting in higher IQs and school grades, greater specialized sensory and auditory memory/recall, better language memory and processing, heightened bilateral hand motor functioning, and improved integration and synchronization of sensory and motor functions. These changes last long after music training ends and can minimize or prevent age-related loss of brain cells and some mental functions. Early institution of music training and prolonged duration of training both appear to contribute to these positive changes.

  19. Cognitive flexibility modulates maturation and music-training-related changes in neural sound discrimination.

    Science.gov (United States)

    Saarikivi, Katri; Putkinen, Vesa; Tervaniemi, Mari; Huotilainen, Minna

    2016-07-01

    Previous research has demonstrated that musicians show superior neural sound discrimination when compared to non-musicians, and that these changes emerge with accumulation of training. Our aim was to investigate whether individual differences in executive functions predict training-related changes in neural sound discrimination. We measured event-related potentials induced by sound changes coupled with tests for executive functions in musically trained and non-trained children aged 9-11 years and 13-15 years. High performance in a set-shifting task, indexing cognitive flexibility, was linked to enhanced maturation of neural sound discrimination in both musically trained and non-trained children. Specifically, well-performing musically trained children already showed large mismatch negativity (MMN) responses at a young age as well as at an older age, indicating accurate sound discrimination. In contrast, the musically trained low-performing children still showed an increase in MMN amplitude with age, suggesting that they were behind their high-performing peers in the development of sound discrimination. In the non-trained group, in turn, only the high-performing children showed evidence of an age-related increase in MMN amplitude, and the low-performing children showed a small MMN with no age-related change. These latter results suggest an advantage in MMN development also for high-performing non-trained individuals. For the P3a amplitude, there was an age-related increase only in the children who performed well in the set-shifting task, irrespective of music training, indicating enhanced attention-related processes in these children. Thus, the current study provides the first evidence that, in children, cognitive flexibility may influence age-related and training-related plasticity of neural sound discrimination. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  20. Communicative Musicality

    DEFF Research Database (Denmark)

    Holck, Ulla

    2010-01-01

    … university, Stephen Malloch listened to tapes of mothers and their babies 'chatting' with each other, recorded by Trevarthen in the 70s. One of the first tapes was the vocal interaction of Laura and her mother. "As I listened, intrigued by the fluid give and take of the communication, and the lilting speech of the mother as she chatted with her baby, I began to tap my foot. I am, by training, a musician, so I was very used to automatically feeling the beat as I listened to musical sounds. … I replaced the tape, and again, I could sense a distinct rhythmicity and melodious give and take to the gentle prompting" … therapy as purely protomusic. But with Malloch & Trevarthen's focus on musicality as the innate human abilities that make music production and appreciation possible, this discussion can easily move on. These and many other essential discussions await us – thanks to this comprehensive – and demanding…

  1. Musical Narratives: A Study of a Young Child's Identity Work in and through Music-Making

    Science.gov (United States)

    Barrett, Margaret S.

    2011-01-01

    The investigation of infants' and young children's early musical engagement as singers, song-makers, and music-makers has provided some insight into children's early vocal and musical development. Recent research has highlighted the vital role of interactive vocalization or "communicative musicality" in infants' general development, including…

  2. Harmonic Frequency Lowering: Effects on the Perception of Music Detail and Sound Quality.

    Science.gov (United States)

    Kirchberger, Martin; Russo, Frank A

    2016-02-01

    A novel algorithm for frequency lowering in music was developed and experimentally tested in hearing-impaired listeners. Harmonic frequency lowering (HFL) combines frequency transposition and frequency compression to preserve the harmonic content of music stimuli. Listeners were asked to make judgments regarding detail and sound quality in music stimuli. Stimuli were presented under different signal processing conditions: original, low-pass filtered, HFL, and nonlinear frequency compressed. Results showed that participants reported perceiving the most detail in the HFL condition. In addition, there was no difference in sound quality across conditions. © The Author(s) 2016.
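
    The published HFL algorithm is not reproduced here, but the comparison condition it was tested against, nonlinear frequency compression, can be sketched as a piecewise frequency map: components below a cutoff are left alone and those above are compressed toward it. The example also shows why plain compression breaks integer harmonic ratios, which is the distortion a harmonicity-preserving scheme aims to avoid; the cutoff and compression ratio are assumptions.

```python
import numpy as np

def nonlinear_frequency_compression(freqs, cutoff=1500.0, ratio=2.0):
    """Generic nonlinear frequency compression: components below `cutoff` are left
    unchanged; components above it are compressed toward the cutoff by `ratio` on a
    logarithmic frequency scale. Illustrative only, not the HFL algorithm itself."""
    freqs = np.asarray(freqs, dtype=float)
    compressed = cutoff * (freqs / cutoff) ** (1.0 / ratio)
    return np.where(freqs > cutoff, compressed, freqs)

# Harmonics of a 440 Hz tone before and after compression: the remapped upper
# partials are no longer integer multiples of 440 Hz, which is the distortion a
# harmonicity-preserving scheme is designed to avoid.
harmonics = 440.0 * np.arange(1, 11)
for f_in, f_out in zip(harmonics, nonlinear_frequency_compression(harmonics)):
    print(f"{f_in:7.1f} Hz -> {f_out:7.1f} Hz  (ratio to F0: {f_out / 440.0:.2f})")
```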

  3. Melodic multi-feature paradigm reveals auditory profiles in music-sound encoding

    DEFF Research Database (Denmark)

    Tervaniemi, Mari; Huotilainen, Minna; Brattico, Elvira

    2014-01-01

    Musical expertise modulates preattentive neural sound discrimination. However, this evidence largely originates from paradigms using very simple stimulation. Here we use a novel melody paradigm (revealing the auditory profile for six sound parameters in parallel) to compare memory-related mismatch negativity (MMN) and attention-related P3a responses recorded from non-musicians and Finnish folk musicians. MMN emerged in both groups of participants for all sound changes (except for rhythmic changes in non-musicians). In folk musicians, the MMN was enlarged for mistuned sounds when compared with non-musicians. This is taken to reflect their familiarity with pitch information, which holds a key position in Finnish folk music relative to, for example, rhythmic information. The MMN was followed by P3a after timbre changes, rhythm changes, and melody transposition. The MMN and P3a topographies…

  4. Verbal learning in the context of background music: no influence of vocals and instrumentals on verbal learning.

    OpenAIRE

    Jancke L; Brugger E; Brummer M; Scherrer S; Alahmadi N

    2014-01-01

    BACKGROUND: Whether listening to background music enhances verbal learning performance is still a matter of dispute. In this study we investigated the influence of vocal and instrumental background music on verbal learning. METHODS: 226 subjects were randomly assigned to one of five groups (one control group and 4 experimental groups). All participants were exposed to a verbal learning task. One group served as control group while the 4 further groups served as experimental groups. The con...

  5. The production and perception of emotionally expressive walking sounds: similarities between musical performance and everyday motor activity.

    Directory of Open Access Journals (Sweden)

    Bruno L Giordano

    Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between musical expression of emotions and expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor origin hypothesis for the musical expression of emotions.
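
    Tempo and tempo regularity, two of the features found above to be used similarly in walking sounds and music performance, can be estimated from detected step or note onsets as in the sketch below; the onset times and emotion labels are invented for illustration.

```python
import numpy as np

def tempo_and_regularity(onset_times):
    """Tempo (events per minute) and tempo regularity from onset times; regularity
    is the coefficient of variation of inter-onset intervals (lower = more regular)."""
    ioi = np.diff(np.sort(np.asarray(onset_times, dtype=float)))
    return 60.0 / np.mean(ioi), np.std(ioi) / np.mean(ioi)

# Hypothetical step onsets (in seconds) for an 'agitated' and a 'sad' walker.
rng = np.random.default_rng(6)
agitated = np.cumsum(0.45 + rng.normal(0, 0.02, 20))
sad = np.cumsum(0.80 + rng.normal(0, 0.06, 20))

for label, onsets in [("agitated", agitated), ("sad", sad)]:
    tempo, cv = tempo_and_regularity(onsets)
    print(f"{label}: tempo = {tempo:.0f} steps/min, IOI CV = {cv:.2f}")
```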

  6. Diversity analysis of national vocal music teaching

    Institute of Scientific and Technical Information of China (English)

    胡水静

    2015-01-01

    National music is an important component of national culture. It not only reflects a nation's character but also symbolizes national cohesion. China has 56 ethnic groups, each with its own distinctive culture, which gives Chinese national music its diversity and variety; however, the single traditional teaching model still used for national vocal music cannot meet current teaching needs. This paper describes the types of national vocal music, analyzes the diversity of national vocal music, and, with regard to that diversity, puts forward several suggestions for reference.

  7. Emotional recognition from dynamic facial, vocal and musical expressions following traumatic brain injury.

    Science.gov (United States)

    Drapeau, Joanie; Gosselin, Nathalie; Peretz, Isabelle; McKerral, Michelle

    2017-01-01

    To assess emotion recognition from dynamic facial, vocal and musical expressions in sub-groups of adults with traumatic brain injuries (TBI) of different severities and identify possible common underlying mechanisms across domains. Forty-one adults participated in this study: 10 with moderate-severe TBI, nine with complicated mild TBI, 11 with uncomplicated mild TBI and 11 healthy controls, who were administered experimental (emotional recognition, valence-arousal) and control tasks (emotional and structural discrimination) for each domain. Recognition of fearful faces was significantly impaired in moderate-severe and in complicated mild TBI sub-groups, as compared to those with uncomplicated mild TBI and controls. Effect sizes were medium-large. Participants with lower GCS scores performed more poorly when recognizing fearful dynamic facial expressions. Emotion recognition from auditory domains was preserved following TBI, irrespective of severity. All groups performed equally on control tasks, indicating no perceptual disorders. Although emotional recognition from vocal and musical expressions was preserved, no correlation was found across auditory domains. This preliminary study may contribute to improving comprehension of emotional recognition following TBI. Future studies of larger samples could usefully include measures of functional impacts of recognition deficits for fearful facial expressions. These could help refine interventions for emotional recognition following a brain injury.

  8. The influence of caregiver singing and background music on vocally expressed emotions and moods in dementia care: a qualitative analysis.

    Science.gov (United States)

    Götell, Eva; Brown, Steven; Ekman, Sirkka-Liisa

    2009-04-01

    Music and singing are considered to have a strong impact on human emotions. Such an effect has been demonstrated in caregiving contexts with dementia patients. The aim of the study was to illuminate vocally expressed emotions and moods in the communication between caregivers and persons with severe dementia during morning care sessions. Three types of caring sessions were compared: the "usual" way, with no music; with background music playing; and with the caregiver singing to and/or with the patient. Nine persons with severe dementia living in a nursing home in Sweden and five professional caregivers participated in this study. Qualitative content analysis was used to examine videotaped recordings of morning care sessions, with a focus on vocally expressed emotions and moods during verbal communication. Compared to no music, the presence of background music and caregiver singing improved the mutuality of the communication between caregiver and patient, creating a joint sense of vitality. Positive emotions were enhanced, and aggressiveness was diminished. Whereas background music increased the sense of playfulness, caregiver singing enhanced the sense of sincerity and intimacy in the interaction. Caregiver singing and background music can help the caregiver improve the patient's ability to express positive emotions and moods, and to elicit a sense of vitality on the part of the person with severe dementia. The results further support the value of caregiver singing as a method to improve the quality of dementia care.

  9. "Sounds of Intent in the Early Years": A Proposed Framework of Young Children's Musical Development

    Science.gov (United States)

    Voyajolu, Angela; Ockelford, Adam

    2016-01-01

    "Sounds of Intent in the Early Years" explores the musical development of children from birth to five years of age. Observational evidence has been utilised together with key literature on musical development and core concepts of zygonic theory (Ockelford, 2013) to investigate the applicability of the original "Sounds of…

  10. Modeling vocalization with ECoG cortical activity recorded during vocal production in the macaque monkey.

    Science.gov (United States)

    Fukushima, Makoto; Saunders, Richard C; Fujii, Naotaka; Averbeck, Bruno B; Mishkin, Mortimer

    2014-01-01

    Vocal production is an example of controlled motor behavior with high temporal precision. Previous studies have decoded auditory evoked cortical activity while monkeys listened to vocalization sounds. On the other hand, there have been few attempts at decoding motor cortical activity during vocal production. Here we recorded cortical activity during vocal production in the macaque with a chronically implanted electrocorticographic (ECoG) electrode array. The array detected robust activity in motor cortex during vocal production. We used a nonlinear dynamical model of the vocal organ to reduce the dimensionality of 'Coo' calls produced by the monkey. We then used linear regression to evaluate the information in motor cortical activity for this reduced representation of calls. This simple linear model accounted for approximately 65% of the variance in the reduced sound representations, supporting the feasibility of using the dynamical model of the vocal organ for decoding motor cortical activity during vocal production.
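
    As a rough illustration of the decoding analysis summarized above, the sketch below fits a cross-validated linear regression from per-call ECoG features to a low-dimensional call representation and reports the variance explained. It is not the authors' code: the array shapes, the synthetic data, and the use of per-channel band power as the feature are assumptions for illustration only.

```python
# Hypothetical sketch (not the authors' code): decoding a low-dimensional
# representation of 'Coo' calls from motor-cortical ECoG features with a
# linear model, scored by cross-validated variance explained.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_calls, n_channels, n_dims = 120, 32, 3      # assumed: calls, ECoG channels, reduced call dims

X = rng.standard_normal((n_calls, n_channels))            # e.g., per-call band power per channel
W = rng.standard_normal((n_channels, n_dims))
Y = X @ W + 0.7 * rng.standard_normal((n_calls, n_dims))  # synthetic "reduced call parameters"

# Cross-validated linear decoding, as a stand-in for the regression analysis
Y_hat = cross_val_predict(LinearRegression(), X, Y, cv=5)

# Fraction of variance accounted for, per reduced dimension
ss_res = np.sum((Y - Y_hat) ** 2, axis=0)
ss_tot = np.sum((Y - Y.mean(axis=0)) ** 2, axis=0)
print("variance explained per dimension:", 1.0 - ss_res / ss_tot)
```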

  11. Tool-use-associated sound in the evolution of language.

    Science.gov (United States)

    Larsson, Matz

    2015-09-01

    Proponents of the motor theory of language evolution have primarily focused on the visual domain and communication through observation of movements. In the present paper, it is hypothesized that the production and perception of sound, particularly of incidental sound of locomotion (ISOL) and tool-use sound (TUS), also contributed. Human bipedalism resulted in rhythmic and more predictable ISOL. It has been proposed that this stimulated the evolution of musical abilities, auditory working memory, and abilities to produce complex vocalizations and to mimic natural sounds. Since the human brain proficiently extracts information about objects and events from the sounds they produce, TUS, and mimicry of TUS, might have achieved an iconic function. The prevalence of sound symbolism in many extant languages supports this idea. Self-produced TUS activates multimodal brain processing (motor neurons, hearing, proprioception, touch, vision), and TUS stimulates primate audiovisual mirror neurons, which is likely to stimulate the development of association chains. Tool use and auditory gestures involve motor processing of the forelimbs, which is associated with the evolution of vertebrate vocal communication. The production, perception, and mimicry of TUS may have resulted in a limited number of vocalizations or protowords that were associated with tool use. A new way to communicate about tools, especially when out of sight, would have had selective advantage. A gradual change in acoustic properties and/or meaning could have resulted in arbitrariness and an expanded repertoire of words. Humans have been increasingly exposed to TUS over millions of years, coinciding with the period during which spoken language evolved. ISOL and tool-use-related sound are worth further exploration.

  12. Sensing Emotion in Voices: Negativity Bias and Gender Differences in a Validation Study of the Oxford Vocal ('OxVoc') Sounds Database

    OpenAIRE

    Young, Katherine S.; Parsons, Christine E.; LeBeau, Richard T.; Tabak, Benjamin A.; Sewart, Amy R.; Stein, Alan; Kringelbach, Morten L.; Craske, Michelle G.

    2016-01-01

    Emotional expressions are an essential element of human interactions. Recent work has increasingly recognized that emotional vocalizations can color and shape interactions between individuals. Here we present data on the psychometric properties of a recently developed database of authentic nonlinguistic emotional vocalizations from human adults and infants (the Oxford Vocal 'OxVoc' Sounds Database; Parsons, Young, Craske, Stein, & Kringelbach, 2014). In a large sample (n = 562), we demonstrat...

  13. Instrument Identification in Polyphonic Music: Feature Weighting to Minimize Influence of Sound Overlaps

    Directory of Open Access Journals (Sweden)

    Goto Masataka

    2007-01-01

    Full Text Available We provide a new solution to the problem of feature variations caused by the overlapping of sounds in instrument identification in polyphonic music. When multiple instruments play simultaneously, partials (harmonic components) of their sounds overlap and interfere, which makes the acoustic features different from those of monophonic sounds. To cope with this, we weight features based on how much they are affected by overlapping. First, we quantitatively evaluate the influence of overlapping on each feature as the ratio of the within-class variance to the between-class variance in the distribution of training data obtained from polyphonic sounds. Then, we generate feature axes using a weighted mixture that minimizes the influence via linear discriminant analysis. In addition, we improve instrument identification using musical context. Experimental results showed that the recognition rates using both feature weighting and musical context were 84.1% for duo, 77.6% for trio, and 72.3% for quartet; those without either were 53.4%, 49.6%, and 46.5%, respectively.
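
    The feature-weighting step described above can be sketched in a few lines: for each feature, estimate how strongly it is affected by overlapping as the within-class/between-class variance ratio on polyphonic training data, then down-weight the most affected features before building discriminant axes (e.g., with linear discriminant analysis). This is an assumption-filled illustration of the idea, not the authors' implementation.

```python
# Minimal, hypothetical sketch of the feature-weighting idea: features that vary a lot
# within an instrument class relative to the variation between classes are down-weighted.
import numpy as np

def influence_of_overlap(X, y):
    """Per-feature ratio of within-class to between-class variance.
    X: (n_samples, n_features) features from polyphonic training sounds
    y: (n_samples,) instrument labels
    """
    classes = np.unique(y)
    grand_mean = X.mean(axis=0)
    within = np.zeros(X.shape[1])
    between = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
        between += len(Xc) * (Xc.mean(axis=0) - grand_mean) ** 2
    return within / np.maximum(between, 1e-12)

def feature_weights(X, y):
    # Features strongly affected by overlapping get small weights.
    ratio = influence_of_overlap(X, y)
    w = 1.0 / (1.0 + ratio)
    return w / w.sum()

# Toy usage with random data standing in for polyphonic training features.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = rng.integers(0, 4, size=200)
print(feature_weights(X, y))
```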

  14. Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback.

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    Full Text Available BACKGROUND: We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal motor sensation and voice sounds under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific delay times of DAF (0, 66, 133 ms) for three minutes to induce 'Lag Adaptation'. They then judged the simultaneity between the motor sensation and the vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by the average of the delay times in the adaptation phase. CONCLUSIONS: These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  15. Musical ability and non-native speech-sound processing are linked through sensitivity to pitch and spectral information.

    Science.gov (United States)

    Kempe, Vera; Bublitz, Dennis; Brooks, Patricia J

    2015-05-01

    Is the observed link between musical ability and non-native speech-sound processing due to enhanced sensitivity to acoustic features underlying both musical and linguistic processing? To address this question, native English speakers (N = 118) discriminated Norwegian tonal contrasts and Norwegian vowels. Short tones differing in temporal, pitch, and spectral characteristics were used to measure sensitivity to the various acoustic features implicated in musical and speech processing. Musical ability was measured using Gordon's Advanced Measures of Musical Audiation. Results showed that sensitivity to specific acoustic features played a role in non-native speech-sound processing: Controlling for non-verbal intelligence, prior foreign language-learning experience, and sex, sensitivity to pitch and spectral information partially mediated the link between musical ability and discrimination of non-native vowels and lexical tones. The findings suggest that while sensitivity to certain acoustic features partially mediates the relationship between musical ability and non-native speech-sound processing, complex tests of musical ability also tap into other shared mechanisms. © 2014 The British Psychological Society.

  16. Sound experiences: the vision of experimental musicians on folk music in modern society

    Directory of Open Access Journals (Sweden)

    Rieko Tanaka

    2016-11-01

    Full Text Available This work begins by describing how folk music has long influenced classical composers, with special mention of the Hungarian musicians Bela Bartok and Zoltan Kodaly. These musicians are considered here the most immediate ancestors of the North American experimental musicians, because both groups were driven by their passion for folk music. We select John Cage, Lou Harrison and Carl Ruggles as the principal exponents of American experimental music. Their works are considered and analyzed in this text as sounds and as experiences: these composers treat sound as experience, as feeling, as emotion, as time and as origin, traits they share with folk music. The final considerations address the relationship between the musician, creation, society and art.

  17. Auditory profiles of classical, jazz, and rock musicians: Genre-specific sensitivity to musical sound features

    Directory of Open Access Journals (Sweden)

    Mari Tervaniemi

    2016-01-01

    Full Text Available When compared with individuals without explicit training in music, adult musicians have facilitated neural functions in several modalities. They also display structural changes in various brain areas, these changes corresponding to the intensity and duration of their musical training. Previous studies have focused on investigating musicians with training in Western classical music. However, musicians involved in different musical genres may display highly differentiated auditory profiles according to the demands set by their genre, i.e. varying importance of different musical sound features. This hypothesis was tested in a novel melody paradigm including deviants in tuning, timbre, rhythm, melody transpositions, and melody contour. Using this paradigm while the participants were watching a silent video and instructed to ignore the sounds, we compared classical, jazz, and rock musicians’ and non-musicians’ accuracy of neural encoding of the melody. In all groups of participants, all deviants elicited an MMN response, which is a cortical index of deviance discrimination. The strength of the MMN and the subsequent attentional P3a responses reflected the importance of various sound features in each music genre: these automatic brain responses were selectively enhanced to deviants in tuning (classical musicians), timing (classical and jazz musicians), transposition (jazz musicians), and melody contour (jazz and rock musicians). Taken together, these results indicate that musicians with different training history have highly specialized cortical reactivity to sounds which violate the neural template for melody content.

  18. Auditory Profiles of Classical, Jazz, and Rock Musicians: Genre-Specific Sensitivity to Musical Sound Features.

    Science.gov (United States)

    Tervaniemi, Mari; Janhunen, Lauri; Kruck, Stefanie; Putkinen, Vesa; Huotilainen, Minna

    2015-01-01

    When compared with individuals without explicit training in music, adult musicians have facilitated neural functions in several modalities. They also display structural changes in various brain areas, these changes corresponding to the intensity and duration of their musical training. Previous studies have focused on investigating musicians with training in Western classical music. However, musicians involved in different musical genres may display highly differentiated auditory profiles according to the demands set by their genre, i.e., varying importance of different musical sound features. This hypothesis was tested in a novel melody paradigm including deviants in tuning, timbre, rhythm, melody transpositions, and melody contour. Using this paradigm while the participants were watching a silent video and instructed to ignore the sounds, we compared classical, jazz, and rock musicians' and non-musicians' accuracy of neural encoding of the melody. In all groups of participants, all deviants elicited an MMN response, which is a cortical index of deviance discrimination. The strength of the MMN and the subsequent attentional P3a responses reflected the importance of various sound features in each music genre: these automatic brain responses were selectively enhanced to deviants in tuning (classical musicians), timing (classical and jazz musicians), transposition (jazz musicians), and melody contour (jazz and rock musicians). Taken together, these results indicate that musicians with different training history have highly specialized cortical reactivity to sounds which violate the neural template for melody content.

  19. Superior Analgesic Effect of an Active Distraction versus Pleasant Unfamiliar Sounds and Music: The Influence of Emotion and Cognitive Style

    Science.gov (United States)

    Garza Villarreal, Eduardo A.; Brattico, Elvira; Vase, Lene; Østergaard, Leif; Vuust, Peter

    2012-01-01

    Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound to reduce pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception. PMID:22242169

  20. Superior analgesic effect of an active distraction versus pleasant unfamiliar sounds and music: the influence of emotion and cognitive style.

    Directory of Open Access Journals (Sweden)

    Eduardo A Garza Villarreal

    Full Text Available Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound to reduce pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception.

  1. Superior analgesic effect of an active distraction versus pleasant unfamiliar sounds and music: the influence of emotion and cognitive style.

    Science.gov (United States)

    Villarreal, Eduardo A Garza; Brattico, Elvira; Vase, Lene; Østergaard, Leif; Vuust, Peter

    2012-01-01

    Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound to reduce pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception.

  2. Affective priming effects of musical sounds on the processing of word meaning.

    Science.gov (United States)

    Steinbeis, Nikolaus; Koelsch, Stefan

    2011-03-01

    Recent studies have shown that music is capable of conveying semantically meaningful concepts. Several questions have subsequently arisen particularly with regard to the precise mechanisms underlying the communication of musical meaning as well as the role of specific musical features. The present article reports three studies investigating the role of affect expressed by various musical features in priming subsequent word processing at the semantic level. By means of an affective priming paradigm, it was shown that both musically trained and untrained participants evaluated emotional words congruous to the affect expressed by a preceding chord faster than words incongruous to the preceding chord. This behavioral effect was accompanied by an N400, an ERP typically linked with semantic processing, which was specifically modulated by the (mis)match between the prime and the target. This finding was shown for the musical parameter of consonance/dissonance (Experiment 1) and then extended to mode (major/minor) (Experiment 2) and timbre (Experiment 3). Seeing that the N400 is taken to reflect the processing of meaning, the present findings suggest that the emotional expression of single musical features is understood by listeners as such and is probably processed on a level akin to other affective communications (i.e., prosody or vocalizations) because it interferes with subsequent semantic processing. There were no group differences, suggesting that musical expertise does not have an influence on the processing of emotional expression in music and its semantic connotations.

  3. Correlation of vocals and lyrics with left temporal musicogenic epilepsy.

    Science.gov (United States)

    Tseng, Wei-En J; Lim, Siew-Na; Chen, Lu-An; Jou, Shuo-Bin; Hsieh, Hsiang-Yao; Cheng, Mei-Yun; Chang, Chun-Wei; Li, Han-Tao; Chiang, Hsing-I; Wu, Tony

    2018-03-15

    Whether the cognitive processing of music and speech relies on shared or distinct neuronal mechanisms remains unclear. Music and language processing in the brain are right and left temporal functions, respectively. We studied patients with musicogenic epilepsy (ME) that was specifically triggered by popular songs to analyze brain hyperexcitability triggered by specific stimuli. The study included two men and one woman (all right-handed, aged 35-55 years). The patients had sound-triggered left temporal ME in response to popular songs with vocals, but not to instrumental, classical, or nonvocal piano solo versions of the same song. Sentimental lyrics, high-pitched singing, specificity/familiarity, and singing in the native language were the most significant triggering factors. We found that recognition of the human voice and analysis of lyrics are important causal factors in left temporal ME and provide observational evidence that sounds with speech structure are predominantly processed in the left temporal lobe. A literature review indicated that language-associated stimuli triggered ME in the left temporal epileptogenic zone at a nearly twofold higher rate compared with the right temporal region. Further research on ME may enhance understanding of the cognitive neuroscience of music. © 2018 New York Academy of Sciences.

  4. Female listeners’ autonomic responses to dramatic shifts between loud and soft music/sound passages: a study of heavy metal songs

    Directory of Open Access Journals (Sweden)

    Tzu-Han Cheng

    2016-02-01

    Full Text Available Although music and the emotion it conveys unfold over time, little is known about how listeners respond to shifts in musical emotions. A special technique in heavy metal music utilizes dramatic shifts between loud and soft passages. Loud passages are penetrated by distorted sounds conveying aggression, whereas soft passages are often characterized by a clean, calm singing voice and light accompaniment. The present study used heavy metal songs and soft sea sounds to examine how female listeners’ respiration rates and heart rates responded to the arousal changes associated with auditory stimuli. The high-frequency power of heart rate variability (HF-HRV) was used to assess cardiac parasympathetic activity. The results showed that the soft passages of heavy metal songs and soft sea sounds expressed lower arousal and induced significantly higher HF-HRVs than the loud passages of heavy metal songs. Listeners’ respiration rate was determined by the arousal level of the present music passage, whereas the heart rate was dependent on both the present and preceding passages. Compared with soft sea sounds, the loud music passage led to greater deceleration of the heart rate at the beginning of the following soft music passage. The sea sounds delayed the heart rate acceleration evoked by the following loud music passage. The data provide evidence that sound-induced parasympathetic activity affects listener’s heart rate in response to the following music passage. These findings have potential implications for future research of the temporal dynamics of musical emotions.

  5. New "Field" of Vocal Music Teaching and Research: Research on the Construction of a Novel Interaction Mode

    Science.gov (United States)

    Li, Donglan

    2015-01-01

    This paper, as an attempt to find a solution to the problem of "Identity Crisis" brought about by the traditional spoon-feeding Education Mode, explores to construct a new mode of vocal music teaching characterized by an interaction on an equal and democratic footing between learners and the teacher in light of Habermas' Communicative…

  6. What Vowels Can Tell Us about the Evolution of Music

    Directory of Open Access Journals (Sweden)

    Gertraud Fenk-Oczlon

    2017-09-01

    Full Text Available Whether music and language evolved independently of each other or whether both evolved from a common precursor remains a hotly debated topic. We here emphasize the role of vowels in the language-music relationship, arguing for a shared heritage of music and speech. Vowels play a decisive role in generating the sound or sonority of syllables, the main vehicles for transporting prosodic information in speech and singing. Timbre is, beyond question, the primary parameter that allows us to discriminate between different vowels, but vowels also have intrinsic pitch, intensity, and duration. There are striking correspondences between the number of vowels and the number of pitches in musical scales across cultures: an upper limit of roughly 12 elements, a lower limit of 2, and a frequency peak at 5–7 elements. Moreover, there is evidence for correspondences between vowels and scales even in specific cultures, e.g., cultures with three vowels tend to have tritonic scales. We report a match between vowel pitch and musical pitch in meaningless syllables of Alpine yodelers, and highlight the relevance of vocal timbre in the music of many non-Western cultures, in which vocal timbre/vowel timbre and musical melody are often intertwined. Studies showing the pivotal role of vowels and their musical qualities in the ontogeny of language and in infant directed speech, will be used as further arguments supporting the hypothesis that music and speech evolved from a common prosodic precursor, where the vowels exhibited both pitch and timbre variations.

  7. Real estate ads in Emei music frog vocalizations: female preference for calls emanating from burrows.

    Science.gov (United States)

    Cui, Jianguo; Tang, Yezhong; Narins, Peter M

    2012-06-23

    During female mate choice, both the male's phenotype and resources (e.g. his nest) contribute to the chooser's fitness. Animals other than humans are not known to advertise resource characteristics to potential mates through vocal communication; although in some species of anurans and birds, females do evaluate male qualities through vocal communication. Here, we demonstrate that calls of the male Emei music frog (Babina dauchina), vocalizing from male-built nests, reflect nest structure information that can be recognized by females. Inside-nest calls consisted of notes with energy concentrated at lower frequency ranges and longer note durations when compared with outside-nest calls. Centre frequencies and note durations of the inside calls positively correlate with the area of the burrow entrance and the depth of the burrow, respectively. When given a choice between outside and inside calls played back alternately, more than 70 per cent of the females (33/47) chose inside calls. These results demonstrate that males of this species faithfully advertise whether or not they possess a nest to potential mates by vocal communication, which probably facilitates optimal mate selection by females. These results revealed a novel function of advertisement calls, which is consistent with the wide variation in both call complexity and social behaviour within amphibians.

  8. DISCO: An object-oriented system for music composition and sound design

    Energy Technology Data Exchange (ETDEWEB)

    Kaper, H. G.; Tipei, S.; Wright, J. M.

    2000-09-05

    This paper describes an object-oriented approach to music composition and sound design. The approach unifies the processes of music making and instrument building by using similar logic, objects, and procedures. The composition modules use an abstract representation of musical data, which can be easily mapped onto different synthesis languages or a traditionally notated score. An abstract base class is used to derive classes on different time scales. Objects can be related to act across time scales, as well as across an entire piece, and relationships between similar objects can replicate traditional music operations or introduce new ones. The DISCO (Digital Instrument for Sonification and Composition) system is an open-ended work in progress.
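
    A minimal sketch of the kind of class hierarchy the abstract describes, an abstract time-scale base class with derived objects that render abstract musical data, is given below. The class and attribute names (TimeScaleObject, Event, Gesture) are hypothetical and do not reflect DISCO's actual API; the sketch only illustrates the design idea of reusing the same logic across time scales.

```python
# Hypothetical sketch of an abstract-base-class design across musical time scales.
from abc import ABC, abstractmethod

class TimeScaleObject(ABC):
    def __init__(self, onset, duration):
        self.onset = onset          # seconds (or beats) relative to the parent object
        self.duration = duration

    @abstractmethod
    def render(self):
        """Map the abstract musical data onto a target representation (e.g., a note list)."""

class Event(TimeScaleObject):       # shortest time scale: a single sound
    def __init__(self, onset, duration, pitch, amplitude):
        super().__init__(onset, duration)
        self.pitch, self.amplitude = pitch, amplitude

    def render(self):
        return [(self.onset, self.duration, self.pitch, self.amplitude)]

class Gesture(TimeScaleObject):     # larger time scale: a group of events
    def __init__(self, onset, events):
        super().__init__(onset, sum(e.duration for e in events))
        self.events = events

    def render(self):
        notes = []
        for e in self.events:
            onset, duration, pitch, amplitude = e.render()[0]
            notes.append((self.onset + onset, duration, pitch, amplitude))
        return notes

print(Gesture(0.0, [Event(0.0, 0.5, 440.0, 0.8), Event(0.5, 0.5, 660.0, 0.6)]).render())
```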

  9. Full-Band Quasi-Harmonic Analysis and Synthesis of Musical Instrument Sounds with Adaptive Sinusoids

    Directory of Open Access Journals (Sweden)

    Marcelo Caetano

    2016-05-01

    Full Text Available Sinusoids are widely used to represent the oscillatory modes of musical instrument sounds in both analysis and synthesis. However, musical instrument sounds feature transients and instrumental noise that are poorly modeled with quasi-stationary sinusoids, requiring spectral decomposition and further dedicated modeling. In this work, we propose a full-band representation that fits sinusoids across the entire spectrum. We use the extended adaptive Quasi-Harmonic Model (eaQHM) to iteratively estimate amplitude- and frequency-modulated (AM–FM) sinusoids able to capture challenging features such as sharp attacks, transients, and instrumental noise. We use the signal-to-reconstruction-error ratio (SRER) as the objective measure for the analysis and synthesis of 89 musical instrument sounds from different instrumental families. We compare against quasi-stationary sinusoids and exponentially damped sinusoids. First, we show that the SRER increases with adaptation in eaQHM. Then, we show that full-band modeling with eaQHM captures partials at the higher frequency end of the spectrum that are neglected by spectral decomposition. Finally, we demonstrate that a frame size equal to three periods of the fundamental frequency results in the highest SRER with AM–FM sinusoids from eaQHM. A listening test confirmed that the musical instrument sounds resynthesized from full-band analysis with eaQHM are virtually perceptually indistinguishable from the original recordings.
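
    The signal-to-reconstruction-error ratio used as the objective measure above is commonly computed as the energy of the original signal over the energy of the residual, expressed in decibels. The sketch below assumes that definition and uses synthetic placeholder signals; it is not taken from the eaQHM implementation.

```python
# Hedged sketch: SRER in dB for an original signal and its resynthesis.
import numpy as np

def srer_db(original, resynthesis):
    """SRER in dB: energy of the original over energy of the residual (original - resynthesis)."""
    residual = original - resynthesis
    return 10.0 * np.log10(np.sum(original ** 2) / np.sum(residual ** 2))

# Synthetic placeholder: a two-partial tone and a slightly noisy "resynthesis".
fs = 44100
t = np.arange(fs) / fs
original = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)
resynthesis = original + 0.01 * np.random.default_rng(0).standard_normal(original.size)
print(f"SRER = {srer_db(original, resynthesis):.1f} dB")
```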

  10. Music algorithm for imaging of a sound-hard arc in limited-view inverse scattering problem

    Science.gov (United States)

    Park, Won-Kwang

    2017-07-01

    A MUltiple SIgnal Classification (MUSIC) algorithm for non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. In order to uncover the mathematical structure of MUSIC, we derive a relationship between the MUSIC imaging functional and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.
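
    For orientation, the generic MUSIC imaging functional projects a test (steering) vector onto the noise subspace of the multistatic response matrix and peaks where that projection vanishes. The sketch below illustrates only this generic idea on synthetic point-like scatterers with a simple monopole test vector over a limited arc of directions; the paper's sound-hard formulation (which involves normal derivatives) and its Bessel-series structure are not reproduced here.

```python
# Illustrative sketch of the generic MUSIC imaging functional (not the paper's exact formulation).
import numpy as np

def music_indicator(K, test_vec, n_signal):
    """K: (N, N) multistatic response matrix; test_vec(z) -> (N,) steering vector."""
    U, s, _ = np.linalg.svd(K)
    noise_basis = U[:, n_signal:]                     # noise subspace
    def indicator(z):
        a = test_vec(z)
        a = a / np.linalg.norm(a)
        # Large where the projection onto the noise subspace is (numerically) zero.
        return 1.0 / (np.linalg.norm(noise_basis.conj().T @ a) + 1e-12)
    return indicator

# Toy usage: N incident/observation directions on a limited arc, two point-like scatterers.
N, k = 24, 2 * np.pi                                   # k: wavenumber (wavelength 1, assumed)
angles = np.linspace(-np.pi / 3, np.pi / 3, N)         # limited view
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
scatterers = np.array([[0.3, 0.1], [-0.2, 0.4]])
A = np.exp(1j * k * dirs @ scatterers.T)               # monopole illumination vectors
K = A @ A.T                                            # rank-2 synthetic response matrix
indicator = music_indicator(K, lambda z: np.exp(1j * k * dirs @ z), n_signal=2)
print(indicator(np.array([0.3, 0.1])), indicator(np.array([0.0, 0.0])))  # peak vs. background
```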

  11. Auditory learning through active engagement with sound: Biological impact of community music lessons in at-risk children

    Directory of Open Access Journals (Sweden)

    Nina Kraus

    2014-11-01

    Full Text Available The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements in the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1,000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for one year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to an instrumental training class. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. These findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity during training, and may inform the development of strategies for auditory learning.

  12. Auditory learning through active engagement with sound: biological impact of community music lessons in at-risk children.

    Science.gov (United States)

    Kraus, Nina; Slater, Jessica; Thompson, Elaine C; Hornickel, Jane; Strait, Dana L; Nicol, Trent; White-Schwoch, Travis

    2014-01-01

    The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements of the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for 1 year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to the instrumental training. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally-trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. Despite intrinsic constraints on our study imposed by a community setting, these findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity and may inform the

  13. The sound of friction: Real-time models, playability and musical applications

    Science.gov (United States)

    Serafin, Stefania

    Friction, the tangential force between objects in contact, in most engineering applications needs to be removed as a source of noise and instabilities. In musical applications, friction is a desirable component, being the sound production mechanism of different musical instruments such as bowed strings, musical saws, rubbed bowls and any other sonority produced by interactions between rubbed dry surfaces. The goal of the dissertation is to simulate different instruments whose main excitation mechanism is friction. An efficient yet accurate model of a bowed string instrument, which combines the latest results in violin acoustics with the efficient digital waveguide approach, is provided. In particular, the bowed string physical model proposed uses a thermodynamic friction model in which the finite width of the bow is taken into account; this solution is compared to the recently developed elasto-plastic friction models used in haptics and robotics. Different solutions are also proposed to model the body of the instrument. Other less common instruments driven by friction are also proposed, and the elasto-plastic model is used to provide audio-visual simulations of everyday friction sounds such as squeaking doors and rubbed wine glasses. Finally, playability evaluations and musical applications in which the models have been used are discussed.
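
    As a much-simplified point of reference, many bowed-string simulations drive the string with a friction characteristic in which the friction coefficient falls off with relative bow-string velocity. The sketch below implements that textbook-style curve with made-up parameter values; it is a stand-in for illustration, not the thermodynamic or elasto-plastic models discussed in the dissertation.

```python
# Simplified "static" friction curve often used to excite bowed-string models.
import numpy as np

def friction_coefficient(v_rel, mu_d=0.3, mu_s=0.8, v0=0.2):
    """Friction coefficient vs. relative bow-string velocity (m/s); parameters are illustrative."""
    return mu_d + (mu_s - mu_d) * v0 / (v0 + np.abs(v_rel))

def friction_force(v_rel, normal_force=1.0):
    # Kinetic branch only: the sign opposes relative sliding. True sticking (v_rel == 0)
    # requires an extra constraint in a full bowed-string model.
    return -np.sign(v_rel) * friction_coefficient(v_rel) * normal_force

v = np.linspace(-1.0, 1.0, 9)
print(np.round(friction_force(v), 3))
```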

  14. [Biofeedback in young singer vocal training].

    Science.gov (United States)

    Ciochină, Paula; Ciochină, Al D; Burlui, Ada; Zaharia, D

    2007-01-01

    Biofeedback therapy is a learning process based on "operant conditioning" techniques. The aim was to estimate the significance of biofeedback for achieving accurate and faster control of singing voice emission. Significant findings were obtained for professional singers active in both the classical and music theatre repertoire with regard to the visual-kinesthetic effect of melodic contour in musical notation as it affects vocal timbre. The results of the study also indicate that the development of new technology for young singers' vocal training may be useful to these singers.

  15. Visual Representation in GENESIS as a tool for Physical Modeling, Sound Synthesis and Musical Composition

    OpenAIRE

    Villeneuve, Jérôme; Cadoz, Claude; Castagné, Nicolas

    2015-01-01

    The motivation of this paper is to highlight the importance of visual representations for artists when modeling and simulating mass-interaction physical networks in the context of sound synthesis and musical composition. GENESIS is a musician-oriented software environment for sound synthesis and musical composition. However, despite this orientation, a substantial amount of effort has been put into building a rich variety of tools based on static or dynamic visual representations of models an...

  16. A Computerized Tomography Study of Vocal Tract Setting in Hyperfunctional Dysphonia and in Belting.

    Science.gov (United States)

    Saldias, Marcelo; Guzman, Marco; Miranda, Gonzalo; Laukkanen, Anne-Maria

    2018-04-03

    Vocal tract setting in hyperfunctional patients is characterized by a high larynx and narrowing of the epilaryngeal and pharyngeal region. Similar observations have been made for various singing styles, e.g., belting. The voice quality in belting has been described as loud, speech-like, and high-pitched. It is also often described as sounding "pressed" or "tense". These observations have led to the hypothesis that belting may be strenuous to the vocal folds. However, singers and teachers of belting do not regard belting as particularly strenuous. This study investigates possible similarities and differences between hyperfunctional voice production and belting, focusing on vocal tract setting. Four male patients with hyperfunctional dysphonia and one male contemporary commercial music singer were imaged with computerized tomography while phonating on [a:] at their habitual speaking pitch. Additionally, the singer used the pitch G4 in belting. The scans were analyzed in sagittal and transversal dimensions by measuring lengths, widths, and areas. Various similarities were found between belting and hyperfunction: high vertical larynx position, small hypopharyngeal width, and small epilaryngeal outlet. On the other hand, belting differed from dysphonia (in addition to higher pitch) by a wider lip and jaw opening, and larger volumes of the oral cavity. Belting takes advantage of a "megaphone shape" of the vocal tract. Future studies should focus on modeling and simulation to address sound energy transfer. Also, they should consider aerodynamic variables and vocal fold vibration to evaluate the "price of decibels" in these phonation types. Copyright © 2018. Published by Elsevier Inc.

  17. The power of musical sound and its implications for primary education in South Africa: An experiential discussion

    Directory of Open Access Journals (Sweden)

    Christina Auerbach

    2014-11-01

    Full Text Available In this article, the power of musical sound and its transformative effects on human beings are explored, as perceived since ancient times and discussed in recent literature. An evolving research project is then reviewed, with a group of primary school children from disadvantaged backgrounds with no prior formal musical training. In essence, the aim of the study in progress is to determine how musical sound can be used to facilitate mindfulness, develop wholeness and support the holistic growth of young South African learners, especially those from deprived backgrounds. Initial findings suggest that when musical sound experiences are included in the everyday education of young learners, there are moments of joy, spontaneity, a sense of unity and well-being. The listening capacity of the children in the group has become more refined, and their performance levels at school have improved.

  18. Musical functioning, speech lateralization and the amusias.

    Science.gov (United States)

    Berman, I W

    1981-01-17

    Amusia is a condition in which musical capacity is impaired by organic brain disease. Music is in a sense a language and closely resembles speech, both executively and receptively. For musical functioning, rhythmic sense and a sense of sounds are essential. Musical ability resides largely in the right (non-dominant) hemisphere. Tests have been devised for the assessment of musical capabilities by Dorgeuille, Grison and Wertheim. Classification of amusia includes vocal amusia, instrumental amusia, musical agraphia, musical amnesia, disorders of rhythm, and receptive amusia. Amusia, like aphasia, has clinical significance, and the two show remarkable similarities and often co-exist. Usually executive amusia occurs with executive aphasia and receptive amusia with receptive aphasia, but amusia can exist without aphasia. Severe executive aphasics can sometimes sing with text (words), and this ability is used in the treatment of aphasia. As with aphasia, there is a correlation between the type of amusia and the site of the lesion. Thus in executive amusia, the lesion generally occurs in the frontal lobe. In receptive amusia, the lesion is mainly in the temporal lobe. If aphasia is also present the lesion will be in the left (dominant) hemisphere.

  19. Music and Sound Elements in Time Estimation and Production of Children with Attention Deficit/Hyperactivity Disorder (ADHD

    Directory of Open Access Journals (Sweden)

    Luiz Rogerio Jorgensen Carrer

    2015-09-01

    Full Text Available ADHD involves cognitive and behavioral aspects, with impairments in many environments of children's and their families' lives. Music, with its playful, spontaneous, affective, motivational, temporal and rhythmic dimensions, can be of great help for studying the aspects of time processing in ADHD. In this article we studied time processing with simple sounds and music in children with ADHD, with the hypothesis that children with ADHD perform differently from children with typical development in tasks of time estimation and production. The main objective was to develop sound and musical tasks to evaluate and correlate the performance of children with ADHD, with and without methylphenidate, compared to a control group with typical development. The study involved 36 participants aged 6 to 14 years, recruited at NANI-Unifesp/SP, subdivided into three groups of 12 children each. Data were collected through a musical keyboard using Logic Audio Software 9.0 on a computer that recorded the participant's performance in the tasks. Tasks were divided into sections: spontaneous time production, time estimation with simple sounds and time estimation with music. Results: 1. The performance of the ADHD groups in temporal estimation of simple sounds at short time intervals (30 ms) was statistically lower than that of the control group (p<0.05); 2. In the task comparing musical excerpts of the same duration (7 s), the ADHD groups considered the tracks longer when the musical notes had longer durations, while in the control group, the perceived duration was related to the density of musical notes in the track. The positive average performance observed in the three groups in most tasks perhaps indicates the possibility that music can, in some way, positively modulate the symptoms of inattention in ADHD.

  20. The effect of musical practice on gesture/sound pairing

    Directory of Open Access Journals (Sweden)

    Alice Mado Proverbio

    2015-04-01

    Full Text Available Learning to play a musical instrument is a demanding process requiring years of intense practice. Dramatic changes in brain connectivity, volume and functionality have been shown in skilled musicians. It is thought that music learning involves the formation of novel audio-visuomotor associations, but not much is known about the gradual acquisition of this ability. In the present study, we investigated whether formal music training enhances audiovisual multisensory processing. To this end, pupils at different stages of education were examined based on the hypothesis that the strength of audio-visuomotor associations would be augmented as a function of the number of years of conservatory study (expertise). The study participants were violin and clarinet students of pre-academic and academic levels and of different chronological ages, ages of acquisition and academic levels. A violinist and a clarinetist each played the same score, and each participant viewed the video corresponding to his or her instrument. Pitch, intensity, rhythm and sound duration were matched across instruments. In half of the trials, the soundtrack did not match (in pitch) the corresponding musical gestures. Data analysis indicated a correlation between the number of years of formal training (expertise) and the ability to detect an audiomotor incongruence in music performance (relative to the musical instrument practiced), thus suggesting a direct correlation between knowing how to play and perceptual sensitivity.

  1. Interdisciplinary Lessons in Musical Acoustics: The Science-Math-Music Connection

    Science.gov (United States)

    Rogers, George L.

    2004-01-01

    The National Standards for Arts Education encourages teachers to help students make connections between music and other disciplines. Many state curriculum guides likewise encourage educators to integrate curricula and find common ground between different subjects. Music--particularly vocal music--offers ample opportunities to find relationships…

  2. Speech, Sound and Music Processing: Embracing Research in India

    DEFF Research Database (Denmark)

    The Computer Music Modeling and Retrieval (CMMR) 2011 conference was the 8th event of this international series, and the first that took place outside Europe. Since its beginnings in 2003, this conference has been co-organized by the Laboratoire de Mécanique et d'Acoustique in Marseille, France, and the Department of Architecture, Design and Media Technology (ad:mt), University of Aalborg, Esbjerg, Denmark, and has taken place in France, Italy, Spain, and Denmark. Historically, CMMR offers a cross-disciplinary overview of current music information retrieval and sound modeling activities and related topics… classical music and its impact in cognitive science are the focus of discussion. Eminent scientists from the USA, Japan, Sweden, France, Poland, Taiwan, India and other European and Asian countries have delivered state-of-the-art lectures in these areas every year at different places, providing an opportunity…

  3. Music listening engages specific cortical regions within the temporal lobes: differences between musicians and non-musicians.

    Science.gov (United States)

    Angulo-Perkins, Arafat; Aubé, William; Peretz, Isabelle; Barrios, Fernando A; Armony, Jorge L; Concha, Luis

    2014-10-01

    Music and speech are two of the most relevant and common sounds in the human environment. Perceiving and processing these two complex acoustical signals rely on a hierarchical functional network distributed throughout several brain regions within and beyond the auditory cortices. Given their similarities, the neural bases for processing these two complex sounds overlap to a certain degree, but particular brain regions may show selectivity for one or the other acoustic category, which we aimed to identify. We examined 53 subjects (28 of them professional musicians) by functional magnetic resonance imaging (fMRI), using a paradigm designed to identify regions showing increased activity in response to different types of musical stimuli, compared to different types of complex sounds, such as speech and non-linguistic vocalizations. We found a region in the anterior portion of the superior temporal gyrus (aSTG) (planum polare) that showed preferential activity in response to musical stimuli and was present in all our subjects, regardless of musical training, and invariant across different musical instruments (violin, piano or synthetic piano). Our data show that this cortical region is preferentially involved in processing musical, as compared to other complex sounds, suggesting a functional role as a second-order relay, possibly integrating acoustic characteristics intrinsic to music (e.g., melody extraction). Moreover, we assessed whether musical experience modulates the response of cortical regions involved in music processing and found evidence of functional differences between musicians and non-musicians during music listening. In particular, bilateral activation of the planum polare was more prevalent, but not exclusive, in musicians than non-musicians, and activation of the right posterior portion of the superior temporal gyrus (planum temporale) differed between groups. Our results provide evidence of functional specialization for music processing in specific

  4. Belarusian Vocal Music Research in China

    Institute of Scientific and Technical Information of China (English)

    赖越歌

    2014-01-01

    In 2013 the heads of state of China and Belarus jointly elevated bilateral relations to a comprehensive strategic partnership, opening a new era in China-Belarus relations. Cultural exchange between the two countries has become more firmly established, and Belarus's deep tradition of professional music education has trained a number of outstanding singers and vocal music teachers for China. Chinese research on Belarusian vocal music education is based mainly on these doctoral and master's degree holders who returned home after completing their studies; during their time abroad they benefited from advantages of language and access to materials, and their research focused on the people and events around them, which has played an important role in promoting China-Belarus musical and cultural exchange.

  5. When music is salty: The crossmodal associations between sound and taste.

    Science.gov (United States)

    Guetta, Rachel; Loui, Psyche

    2017-01-01

    Here we investigate associations between complex auditory and complex taste stimuli. A novel piece of music was composed and recorded in four different styles of musical articulation to reflect the four basic tastes groups (sweet, sour, salty, bitter). In Experiment 1, participants performed above chance at pairing the music clips with corresponding taste words. Experiment 2 uses multidimensional scaling to interpret how participants categorize these musical stimuli, and to show that auditory categories can be organized in a similar manner as taste categories. Experiment 3 introduces four different flavors of custom-made chocolate ganache and shows that participants can match music clips with the corresponding taste stimuli with above-chance accuracy. Experiment 4 demonstrates the partial role of pleasantness in crossmodal mappings between sound and taste. The present findings confirm that individuals are able to make crossmodal associations between complex auditory and gustatory stimuli, and that valence may mediate multisensory integration in the general population.

  6. Effects of two types and two genres of music on social behavior in captive chimpanzees (Pan troglodytes).

    Science.gov (United States)

    Videan, Elaine N; Fritz, Jo; Howell, Sue; Murphy, James

    2007-01-01

    Is music just noise, and thus potentially harmful to laboratory animals, or can it have a beneficial effect? Research addressing this question has generated mixed results, perhaps because of the different types and styles of music used across various studies. The purpose of this study was to test the effects of 2 different types (vocal versus instrumental) and 2 genres (classical vocal versus 'easy-listening' vocal) of music on social behavior in 31 female and 26 male chimpanzees (Pan troglodytes). Results indicated that instrumental music was more effective at increasing affiliative behavior in both male and female chimpanzees, whereas vocal music was more effective at decreasing agonistic behavior. A comparison of the 2 genres of vocal music indicated that easy-listening (slower tempo) vocal music was more effective at decreasing agonistic behavior in male chimpanzees than classical (faster tempo) vocal music. Agonistic behavior in females remained low regardless of the type or genre of music. These results indicate that, like humans, captive chimpanzees react differently to various types and genres of music. The reactions varied depending on both the sex of the subject and the type of social behavior examined. Management programs should consider both type and genre when implementing a musical enrichment program for nonhuman primates.

  7. Nonlinear frequency compression: effects on sound quality ratings of speech and music.

    Science.gov (United States)

    Parsa, Vijay; Scollie, Susan; Glista, Danielle; Seelisch, Andreas

    2013-03-01

    Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an Audiologist in clinical NFC hearing aid fittings for achieving a balance between high frequency audibility and sound quality.
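
    For readers unfamiliar with NFC, one common textbook-style formulation leaves frequencies below the cutoff unchanged and compresses the log-frequency distance above the cutoff by the compression ratio. The sketch below implements that mapping; actual hearing-aid implementations are proprietary and may differ, and the parameter values shown are illustrative only.

```python
# Hedged sketch of a nonlinear frequency compression (NFC) input-output frequency map.
import numpy as np

def nfc_map(f_in, cutoff=2000.0, cr=2.0):
    """Map input frequency (Hz) to output frequency under NFC (illustrative formulation)."""
    f_in = np.asarray(f_in, dtype=float)
    compressed = cutoff * (f_in / cutoff) ** (1.0 / cr)   # log-domain compression above cutoff
    return np.where(f_in <= cutoff, f_in, compressed)

for f in (1000, 2000, 4000, 8000):
    print(f"{f:5d} Hz -> {float(nfc_map(f)):7.1f} Hz")
```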

  8. Sound, music and gender in mobile games

    DEFF Research Database (Denmark)

    Machin, David; Van Leeuwen, T.

    2016-01-01

    resource, they can communicate very specific meanings and carry ideologies. In this paper, using multimodal critical discourse analysis, we analyse the sounds and music in two proto-games that are played on mobile devices: Genie Palace Divine and Dragon Island Race. While visually the two games are highly...... and impersonal and specific kinds of social relations which, we show, is highly gendered. It can also signal priorities, ideas and values, which in both cases, we show, relate to a world where there is simply no time to stop and think. © 2016, equinox publishing....

  9. Adaptation to Delayed Speech Feedback Induces Temporal Recalibration between Vocal Sensory and Auditory Modalities

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    2011-10-01

    Full Text Available We ordinarily perceive our voice as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can easily be disrupted by delayed auditory feedback (DAF). DAF causes typically fluent speakers to have difficulty speaking fluently but helps people who stutter to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism that integrates vocal motor sensation and voice sounds under DAF, using an adaptation technique. Participants read sentences under DAF with specific delay times (0, 30, 75, or 120 ms) for three minutes to induce ‘Lag Adaptation’. After the adaptation, they judged the simultaneity between the motor sensation and the fed-back vocal sound while producing a simple vocalization rather than speech. We found that speech production under lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. These findings suggest that vocalization is finely tuned by a temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.
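
    A sketch of one way such a shift can be quantified (not the authors' analysis, which the abstract does not detail): fit a Gaussian-shaped simultaneity window to the proportion of "simultaneous" responses at each feedback delay and compare its peak, the point of subjective simultaneity (PSS), before and after adaptation. The data below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def simultaneity_window(delay_ms, amplitude, pss_ms, width_ms):
    """Gaussian model of the probability of judging voice feedback as
    simultaneous with vocal production, peaking at the PSS."""
    return amplitude * np.exp(-((delay_ms - pss_ms) ** 2) / (2 * width_ms ** 2))

# Hypothetical group data: proportion of "simultaneous" judgments at each
# feedback delay, before and after adapting to a 120-ms delay.
delays = np.array([0, 30, 75, 120, 180, 240], dtype=float)
p_before = np.array([0.95, 0.90, 0.60, 0.30, 0.10, 0.05])
p_after  = np.array([0.80, 0.88, 0.85, 0.60, 0.25, 0.10])

for label, p in [("before", p_before), ("after", p_after)]:
    params, _ = curve_fit(simultaneity_window, delays, p, p0=[1.0, 50.0, 80.0])
    print(f"{label}: PSS about {params[1]:.1f} ms")
```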

  10. A new multifeature mismatch negativity (MMN) paradigm for the study of music perception with more real-sounding stimuli

    DEFF Research Database (Denmark)

    Quiroga Martinez, David Ricardo; Hansen, Niels Christian; Højlund, Andreas

    The MMN is a brain response elicited by deviants in a series of repetitive sounds that has been valuable for the study of music perception. However, most MMN experimental designs use simple tone patterns as stimuli, failing to represent the complexity of everyday music. Our goal was to develop...... a new paradigm using more real-sounding stimuli. Concretely, we wanted to assess the perception of nonrepetitive melodies when presented alone and when embedded in two-part music. An Alberti bass used previously served both as a comparison and as the second voice in the two-part stimuli. We used MEG...... Interestingly, this reduction did not hold for mistunings and slide in the melody, probably due to interval mistuning and the high voice superiority effect. Our results indicate that it is possible to use the MMN for the study of more real-sounding music and that stimulus complexity plays a crucial role......

  11. Effects of Listening to Music versus Environmental Sounds in Passive and Active Situations on Levels of Pain and Fatigue in Fibromyalgia.

    Science.gov (United States)

    Mercadíe, Lolita; Mick, Gérard; Guétin, Stéphane; Bigand, Emmanuel

    2015-10-01

    In fibromyalgia, pain symptoms such as hyperalgesia and allodynia are associated with fatigue. Mechanisms underlying such symptoms can be modulated by listening to pleasant music. We expected that listening to music, because of its emotional impact, would have a greater modulating effect on the perception of pain and fatigue in patients with fibromyalgia than listening to nonmusical sounds. To investigate this hypothesis, we carried out a 4-week study in which patients with fibromyalgia listened to either preselected musical pieces or environmental sounds when they experienced pain in active (while carrying out a physical activity) or passive (at rest) situations. Concomitant changes of pain and fatigue levels were evaluated. When patients listened to music or environmental sounds at rest, pain and fatigue levels were significantly reduced after 20 minutes of listening, with no difference of effect magnitude between the two stimuli. This improvement persisted 10 minutes after the end of the listening session. In active situations, pain did not increase in presence of the two stimuli. Contrary to our expectations, music and environmental sounds produced a similar relieving effect on pain and fatigue, with no benefit gained by listening to pleasant music over environmental sounds. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.

  12. Prenatal complex rhythmic music sound stimulation facilitates postnatal spatial learning but transiently impairs memory in the domestic chick.

    Science.gov (United States)

    Kauser, H; Roy, S; Pal, A; Sreenivas, V; Mathur, R; Wadhwa, S; Jain, S

    2011-01-01

    Early experience has a profound influence on brain development, and the modulation of prenatal perceptual learning by external environmental stimuli has been shown in birds, rodents and mammals. In the present study, the effect of prenatal complex rhythmic music sound stimulation on postnatal spatial learning, memory and isolation stress was observed. Auditory stimulation with either music or species-specific sounds or no stimulation (control) was provided to separate sets of fertilized eggs from day 10 of incubation. Following hatching, the chicks at age 24, 72 and 120 h were tested on a T-maze for spatial learning and the memory of the learnt task was assessed 24 h after training. In the posthatch chicks at all ages, the plasma corticosterone levels were estimated following 10 min of isolation. The chicks of all ages in the three groups took less time to complete the T-maze task over training; however, when memory of the learnt task was assessed 24 h after training, only the music-stimulated chicks at posthatch age 24 h took significantly longer to complete it. These results suggest that prenatal stimulation with complex rhythmic music sounds facilitates spatial learning, though the music stimulation transiently impairs postnatal memory. 2011 S. Karger AG, Basel.

  13. Sound and vision: visualization of music with a soap film

    Science.gov (United States)

    Gaulon, C.; Derec, C.; Combriat, T.; Marmottant, P.; Elias, F.

    2017-07-01

    A vertical soap film, freely suspended at the end of a tube, is vibrated by a sound wave that propagates in the tube. If the sound wave is a piece of music, the soap film ‘comes alive’: colours, due to iridescences in the soap film, swirl, split and merge in time with the music (see the snapshots in figure 1 below). In this article, we analyse the rich physics behind these fascinating dynamical patterns: it combines the acoustic propagation in a tube, the light interferences, and the static and dynamic properties of soap films. The interaction between the acoustic wave and the liquid membrane results in capillary waves on the soap film, as well as non-linear effects leading to a non-oscillatory flow of liquid in the plane of the film, which induces several spectacular effects: generation of vortices, diphasic dynamical patterns inside the film, and swelling of the soap film under certain conditions. Each of these effects is associated with a characteristic time scale, which interacts with the characteristic time of the music play. This article shows the richness of those characteristic times that lead to dynamical patterns. Through its artistic interest, the experiments presented in this article provide a tool for popularizing and demonstrating science in the classroom or to a broader audience.
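
    As general background (this relation is not quoted from the article), the transverse waves excited on a thin soap film of thickness h, liquid density ρ and surface tension γ travel, when the inertia of the surrounding air is neglected, at roughly

```latex
v \simeq \sqrt{\frac{2\gamma}{\rho h}}
```

    so thinner regions of the film carry faster waves, which is one reason the acoustic forcing couples so visibly to the film's thickness (and hence colour) patterns.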

  14. Music preference in degus (Octodon degus): Analysis with Chilean folk music.

    OpenAIRE

    Shigeru Watanabe; Katharina Braun; Maria Mensch; Henning Scheich

    2018-01-01

    Most nonhuman animals do not show selective preference for types of music, but researchers have typically employed only Western classical music in such studies. Thus, there has been bias in music choice. Degus (Octodon degus), originally from the mountain areas of Chile, have highly developed vocal communication. Here, we examined music preference of degus using not only Western classical music (music composed by Bach and Stravinsky), but also South American folk music (Chilean and Peruvian)....

  15. Indifference to dissonance in native Amazonians reveals cultural variation in music perception.

    Science.gov (United States)

    McDermott, Josh H; Schultz, Alan F; Undurraga, Eduardo A; Godoy, Ricardo A

    2016-07-28

    The extent to which music perception is shaped by biology remains debated. One widely discussed phenomenon is that some combinations of notes are perceived by Westerners as pleasant, or consonant, whereas others are perceived as unpleasant, or dissonant. The contrast between consonance and dissonance is central to Western music and its origins have fascinated scholars since the ancient Greeks. Aesthetic responses to consonance are commonly assumed by scientists to have biological roots, and thus to be universally present in humans. Ethnomusicologists and composers, in contrast, have argued that consonance is a creation of Western musical culture. The issue has remained unresolved, partly because little is known about the extent of cross-cultural variation in consonance preferences. Here we report experiments with the Tsimane'--a native Amazonian society with minimal exposure to Western culture--and comparison populations in Bolivia and the United States that varied in exposure to Western music. Participants rated the pleasantness of sounds. Despite exhibiting Western-like discrimination abilities and Western-like aesthetic responses to familiar sounds and acoustic roughness, the Tsimane' rated consonant and dissonant chords and vocal harmonies as equally pleasant. By contrast, Bolivian city- and town-dwellers exhibited significant preferences for consonance, albeit to a lesser degree than US residents. The results indicate that consonance preferences can be absent in cultures sufficiently isolated from Western music, and are thus unlikely to reflect innate biases or exposure to harmonic natural sounds. The observed variation in preferences is presumably determined by exposure to musical harmony, suggesting that culture has a dominant role in shaping aesthetic responses to music.

  16. Mobile phone conversations, listening to music and quiet (electric) cars: Are traffic sounds important for safe cycling?

    Science.gov (United States)

    Stelling-Konczak, A; van Wee, G P; Commandeur, J J F; Hagenzieker, M

    2017-09-01

    Listening to music or talking on the phone while cycling as well as the growing number of quiet (electric) cars on the road can make the use of auditory cues challenging for cyclists. The present study examined to what extent and in which traffic situations traffic sounds are important for safe cycling. Furthermore, the study investigated the potential safety implications of limited auditory information caused by quiet (electric) cars and by cyclists listening to music or talking on the phone. An Internet survey among 2249 cyclists in three age groups (16-18, 30-40 and 65-70 years old) was carried out to collect information on the following aspects: 1) the auditory perception of traffic sounds, including the sounds of quiet (electric) cars; 2) the possible compensatory behaviours of cyclists who listen to music or talk on their mobile phones; 3) the possible contribution of listening to music and talking on the phone to cycling crashes and incidents. Age differences with respect to those three aspects were analysed. Results show that listening to music and talking on the phone negatively affects perception of sounds crucial for safe cycling. However, taking into account the influence of confounding variables, no relationship was found between the frequency of listening to music or talking on the phone and the frequency of incidents among teenage cyclists. This may be due to cyclists' compensating for the use of portable devices. Listening to music or talking on the phone whilst cycling may still pose a risk in the absence of compensatory behaviour or in a traffic environment with less extensive and less safe cycling infrastructure than the Dutch setting. With the increasing number of quiet (electric) cars on the road, cyclists in the future may also need to compensate for the limited auditory input of these cars. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Teaching Children Musical Perception with MUSIC-AR

    Directory of Open Access Journals (Sweden)

    Valéria Farinazzo Martins

    2015-03-01

    Full Text Available In Brazil, music education in schools is not compulsory, which contributes to a loss of sound/musical perception among Brazilian children. This fact, associated with the lack of software for the teaching of musical perception, inspired the creation of Music-AR, a set of software applications that uses Augmented Reality technology for the teaching of sound properties, such as timbre, pitch and sound intensity. Four small applications were developed: the first allows the child to manipulate virtual objects linked to sounds, loosening and stretching them and relating the changes to sound pitch (bass and treble); the second focuses on the concept of sound intensity, associating it with virtual animals being far from or near to the child; the third is related to the duration of the sound (short or long); and the last is about timbre, the personality of the sound. Tests were applied and the results are presented in this work.

  18. A comparison of ambient casino sound and music: effects on dissociation and on perceptions of elapsed time while playing slot machines.

    Science.gov (United States)

    Noseworthy, Theodore J; Finlay, Karen

    2009-09-01

    This research examined the effects of a casino's auditory character on estimates of elapsed time while gambling. More specifically, this study varied whether the sound heard while gambling was ambient casino sound alone or ambient casino sound accompanied by music. The tempo and volume of both the music and ambient sound were varied to manipulate temporal engagement and introspection. One hundred and sixty (males = 91) individuals played slot machines in groups of 5-8, after which they provided estimates of elapsed time. The findings showed that the typical ambient casino auditive environment, which characterizes the majority of gaming venues, promotes understated estimates of elapsed duration of play. In contrast, when music is introduced into the ambient casino environment, it appears to provide a cue of interval from which players can more accurately reconstruct elapsed duration of play. This is particularly the case when the tempo of the music is slow and the volume is high. Moreover, the confidence with which time estimates are held (as reflected by latency of response) is higher in an auditive environment with music than in an environment that is comprised of ambient casino sounds alone. Implications for casino management are discussed.

  19. Emotional expressions in voice and music: same code, same effect?

    Science.gov (United States)

    Escoffier, Nicolas; Zhong, Jidan; Schirmer, Annett; Qiu, Anqi

    2013-08-01

    Scholars have documented similarities in the way voice and music convey emotions. By using functional magnetic resonance imaging (fMRI) we explored whether these similarities imply overlapping processing substrates. We asked participants to trace changes in either the emotion or pitch of vocalizations and music using a joystick. Compared to music, vocalizations more strongly activated superior and middle temporal cortex, cuneus, and precuneus. However, despite these differences, overlapping rather than differing regions emerged when comparing emotion with pitch tracing for music and vocalizations, respectively. Relative to pitch tracing, emotion tracing activated medial superior frontal and anterior cingulate cortex regardless of stimulus type. Additionally, we observed emotion specific effects in primary and secondary auditory cortex as well as in medial frontal cortex that were comparable for voice and music. Together these results indicate that similar mechanisms support emotional inferences from vocalizations and music and that these mechanisms tap on a general system involved in social cognition. Copyright © 2011 Wiley Periodicals, Inc.

  20. The Sound of 1-bit: Technical constraint and musical creativity on the 48k Sinclair ZX Spectrum

    Directory of Open Access Journals (Sweden)

    Kenneth B. McAlpine

    2017-12-01

    Full Text Available This article explores constraint as a driver of creativity and innovation in early video game soundtracks. Using what was, perhaps, the most constrained platform of all, the 48k Sinclair ZX Spectrum, as a prism through which to examine the development of an early branch of video game music, the paper explores the creative approaches adopted by programmers to circumvent the Spectrum’s technical limitations so as to coax the hardware into performing feats of musicality that it had never been designed to achieve. These solutions were not without computational or aural cost, however, and their application often imparted a unique characteristic to the sound, which over time came to define the aesthetic of the 8-bit computer soundtrack, a sound which has been developed since as part of the emerging chiptune scene. By discussing pivotal moments in the development of ZX Spectrum music, this article will show how the application of binary impulse trains, granular synthesis, and pulse-width modulation came to shape the sound of 1-bit music.
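
    As a hedged illustration of the pulse-width-modulation idea mentioned above (not the actual ZX Spectrum routines, which ran on the Z80-driven beeper), a 1-bit output can approximate a multi-level waveform by comparing it against a fast carrier ramp. The sample rates, frequencies and file name below are assumptions for the sketch.

```python
import numpy as np
import wave

SR = 44100          # output sample rate
CARRIER_HZ = 11025  # PWM carrier; must sit well above the musical pitches

t = np.arange(0, 2.0, 1.0 / SR)
# Two-voice "music": 220 Hz and 330 Hz tones mixed into the range [0, 1].
mix = 0.5 + 0.25 * np.sin(2 * np.pi * 220 * t) + 0.25 * np.sin(2 * np.pi * 330 * t)

# 1-bit PWM: the output is high whenever the signal exceeds a sawtooth carrier.
carrier = (t * CARRIER_HZ) % 1.0
one_bit = (mix > carrier).astype(np.uint8)

# Write the 1-bit stream as an 8-bit WAV (samples of 0 or 255) for listening.
with wave.open("one_bit_pwm.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(1)
    f.setframerate(SR)
    f.writeframes((one_bit * 255).astype(np.uint8).tobytes())
```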

  1. A sonoridade vocal e a prática coral no Barroco: subsídios para a performance barroca nos dias atuais The vocal sonority and the choral practice in the Baroque period: guidelines for today's Baroque music performance

    Directory of Open Access Journals (Sweden)

    Angelo José Fernandes

    2008-12-01

    Full Text Available This paper is a small part of a larger research project on the practice and sonority of several styles of choral music. Through a bibliographical investigation of works written by authors from the Baroque period to the present, our goals are: the description of vocal and choral sonority throughout the Baroque period; the presentation of the vocal types of the time; the analysis of some vocal techniques; the description of important aspects of choral practice in the period; and, finally, the presentation of technical and stylistic suggestions for the practice of Baroque choral music today.

  2. Technology for the Sound of Music

    Science.gov (United States)

    1994-01-01

    In the early 1960s during an industry recession, Kaman Aircraft lost several defense contracts. Forced to diversify, the helicopter manufacturer began to manufacture acoustic guitars. Kaman's engineers used special vibration analysis equipment based on aerospace technology. While a helicopter's rotor system is highly susceptible to vibration, which must be reduced or "dampened," vibration enhances a guitar's sound. After two years of vibration analysis, Kaman produced an instrument that proved very successful. The Ovation guitar is made of fiberglass. It is stronger than the traditional rosewood and manufactured with adapted aircraft techniques such as jigs and fixtures, reducing labor and assuring quality and cost control. Kaman Music Corporation now has annual sales of $100 million.

  3. Affective responses in tamarins elicited by species-specific music

    OpenAIRE

    Snowdon, Charles T.; Teie, David

    2009-01-01

    Theories of music evolution agree that human music has an affective influence on listeners. Tests of non-humans provided little evidence of preferences for human music. However, prosodic features of speech (‘motherese’) influence affective behaviour of non-verbal infants as well as domestic animals, suggesting that features of music can influence the behaviour of non-human species. We incorporated acoustical characteristics of tamarin affiliation vocalizations and tamarin threat vocalizations...

  4. How does Architecture Sound for Different Musical Instrument Performances?

    DEFF Research Database (Denmark)

    Saher, Konca; Rindel, Jens Holger

    2006-01-01

    This paper discusses how consideration of sound, in particular of a specific musical instrument, impacts the design of a room. Properly designed architectural acoustics is fundamental to improving the listening experience of an instrument in rooms in a conservatory. Six discrete instruments (violin, c...... different instruments and the choir experience that could fit into the same category of room. For all calculations and the auralizations, a computational model is used: ODEON 7.0.

  5. The Effects of Three Physical and Vocal Warm-Up Procedures on Acoustic and Perceptual Measures of Choral Sound.

    Science.gov (United States)

    Cook-Cunningham, Sheri L; Grady, Melissa L

    2018-03-01

    The purpose of this investigation was to assess the effects of three warm-up procedures (vocal-only, physical-only, physical/vocal combination) on acoustic and perceptual measures of choir sound. The researchers tested three videotaped, 5-minute, choral warm-up procedures on three university choirs. After participating in a warm-up procedure, each choir was recorded singing a folk song for long-term average spectra and pitch analysis. Singer participants responded to a questionnaire about preferences after each warm-up procedure. Warm-up procedures and recording sessions occurred during each choir's regular rehearsal time and in each choir's regular rehearsal space during three consecutive rehearsals. Long-term average spectra results demonstrated more resonant singing after the physical/vocal warm-up for two of the three choirs. Pitch analysis results indicate that all three choirs sang "in-tune" or with the least pitch deviation after participating in the physical/vocal warm-up. Singer questionnaire responses showed general preference for the physical/vocal combination warm-up, and singer ranking of the three procedures indicated the physical/vocal warm-up as the most favored for readiness to sing. In the context of this study with these three university choir participants, it seems that a combination choral warm-up that includes physical and vocal aspects is preferred by singers, enables more resonant singing, and more in-tune singing. Findings from this study could provide teachers and choral directors with important information as they structure and experiment with their choral warm-up procedures. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
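
    The abstract does not spell out the pitch-analysis procedure; a common way of expressing "in-tune" singing, and a plausible basis for such an analysis, is the deviation in cents between a measured fundamental frequency and the target pitch, as in this small sketch.

```python
import math

def cents_deviation(measured_hz, target_hz):
    """Signed deviation of a measured fundamental frequency from a target
    pitch, in cents (100 cents = 1 equal-tempered semitone)."""
    return 1200.0 * math.log2(measured_hz / target_hz)

# Example: a choir sustaining A4 (440 Hz) but measured at 435 Hz
# is about 20 cents flat.
print(round(cents_deviation(435.0, 440.0), 1))  # -19.8
```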

  6. Quantifying Shapes: Mathematical Techniques for Analysing Visual Representations of Sound and Music

    Directory of Open Access Journals (Sweden)

    Genevieve L. Noyce

    2013-12-01

    Full Text Available Research on auditory-visual correspondences has a long tradition but innovative experimental paradigms and analytic tools are sparse. In this study, we explore different ways of analysing real-time visual representations of sound and music drawn by both musically-trained and untrained individuals. To that end, participants' drawing responses captured by an electronic graphics tablet were analysed using various regression, clustering, and classification techniques. Results revealed that a Gaussian process (GP regression model with a linear plus squared-exponential covariance function was able to model the data sufficiently, whereas a simpler GP was not a good fit. Spectral clustering analysis was the best of a variety of clustering techniques, though no strong groupings are apparent in these data. This was confirmed by variational Bayes analysis, which only fitted one Gaussian over the dataset. Slight trends in the optimised hyperparameters between musically-trained and untrained individuals allowed for the building of a successful GP classifier that differentiated between these two groups. In conclusion, this set of techniques provides useful mathematical tools for analysing real-time visualisations of sound and can be applied to similar datasets as well.
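
    A minimal sketch, on assumed synthetic data, of the model family the authors describe: a Gaussian process with a linear plus squared-exponential covariance (DotProduct + RBF in scikit-learn terms) and a spectral clustering pass over per-drawing features. The data and feature choices here are placeholders, not the study's tablet recordings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, RBF, WhiteKernel
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)

# Hypothetical tablet data: time (s) vs. vertical stylus position while a
# participant "draws" a rising, undulating sound.
t = np.linspace(0.0, 10.0, 200)[:, None]
y = 0.3 * t.ravel() + np.sin(2 * np.pi * 0.4 * t.ravel()) + 0.1 * rng.standard_normal(200)

# Linear + squared-exponential covariance, plus a noise term.
kernel = DotProduct() + RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)
print("optimised kernel:", gp.kernel_)

# Spectral clustering of per-drawing feature vectors (random stand-ins here
# for slope/curvature summaries of many participants' drawings).
features = rng.standard_normal((40, 3))
labels = SpectralClustering(n_clusters=2, random_state=0).fit_predict(features)
print("cluster sizes:", np.bincount(labels))
```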

  7. Residual Neural Processing of Musical Sound Features in Adult Cochlear Implant Users

    Science.gov (United States)

    Timm, Lydia; Vuust, Peter; Brattico, Elvira; Agrawal, Deepashri; Debener, Stefan; Büchner, Andreas; Dengler, Reinhard; Wittfoth, Matthias

    2014-01-01

    Auditory processing in general and music perception in particular are hampered in adult cochlear implant (CI) users. To examine the residual music perception skills and their underlying neural correlates in CI users implanted in adolescence or adulthood, we conducted an electrophysiological and behavioral study comparing adult CI users with normal-hearing age-matched controls (NH controls). We used a newly developed musical multi-feature paradigm, which makes it possible to test automatic auditory discrimination of six different types of sound feature changes inserted within a musically enriched setting lasting only 20 min. The presentation of stimuli did not require the participants' attention, allowing the study of the early automatic stage of feature processing in the auditory cortex. For the CI users, we obtained mismatch negativity (MMN) brain responses to five feature changes but not to changes of rhythm, whereas we obtained MMNs for all the feature changes in the NH controls. Furthermore, the MMNs of CI users were reduced in amplitude and delayed relative to those of NH controls for changes of pitch and guitar timbre. No other group differences in MMN parameters were found for changes in intensity and saxophone timbre. Furthermore, the MMNs in CI users reflected the behavioral scores from a respective discrimination task and were correlated with patients' age and speech intelligibility. Our results suggest that even though CI users are not performing at the same level as NH controls in neural discrimination of pitch-based features, they do possess potential neural abilities for music processing. However, CI users showed a disrupted ability to automatically discriminate rhythmic changes compared with controls. The current behavioral and MMN findings highlight the residual neural skills for music processing even in CI users who have been implanted in adolescence or adulthood.

  8. Communicating Earth Science Through Music: The Use of Environmental Sound in Science Outreach

    Science.gov (United States)

    Brenner, C.

    2017-12-01

    The need for increased public understanding and appreciation of Earth science has taken on growing importance over the last several decades. Human society faces critical environmental challenges, both near-term and future, in areas such as climate change, resource allocation, geohazard threat and the environmental degradation of ecosystems. Science outreach is an essential component to engaging both policymakers and the public in the importance of managing these challenges. However, despite considerable efforts on the part of scientists and outreach experts, many citizens feel that scientific research and methods are both difficult to understand and remote from their everyday experience. As perhaps the most accessible of all art forms, music can provide a pathway through which the public can connect to Earth processes. The Earth is not silent: environmental sound can be sampled and folded into musical compositions, either with or without the additional sounds of conventional or electronic instruments. These compositions can be used in conjunction with other forms of outreach (e.g., as soundtracks for documentary videos or museum installations), or simply stand alone as testament to the beauty of geology and nature. As proof of concept, this presentation will consist of a musical composition that includes sounds from various field recordings of wind, swamps, ice and water (including recordings from the inside of glaciers).

  9. Vocal handicap index in popular and erudite professional singers.

    Science.gov (United States)

    Loiola-Barreiro, Camila Miranda; Silva, Marta Assumpção de Andrada E

    To compare the voice handicap index of popular and erudite professional singers according to gender, age, professional experience time, and presence or absence of self-reported vocal complaints. One hundred thirty-two professional singers, 74 popular and 58 erudite, responded to a questionnaire with regard to identification, age, gender, professional experience time in singing, musical genres (for popular singers), vocal classification (for erudite singers), presence of self-reported vocal complaints, and the specific protocols for popular (Modern Singing Handicap Index - MSHI) and erudite (Classical Singing Handicap Index - CSHI) singing. A higher proportion of women and a higher incidence of vocal complaints were observed in the popular singers compared with the erudite singers. Most of the popular singers belonged to the genre of Brazilian Popular Music. Regarding the classification of erudite singers, there was greater participation of sopranos and tenors. No statistical differences were observed with respect to age and professional experience time between the groups. Comparison of the MSHI and CSHI scores showed no statistically significant relationship between these scores and gender or age in either group of singers. Professional experience time was related to the total score and the subscales disability and impairment in the MSHI, only for popular singers with vocal complaints. There was no correlation between these variables and the CSHI for erudite singers. The impact of vocal difficulty/problem interferes differently in these two musical genres when related to vocal complaint and professional experience time. The MSHI and CSHI protocols proved to be important tools not only for the identification of problems, but also for the understanding of how these individuals relate their voices with this occupational activity.

  10. Spectral envelope sensitivity of musical instrument sounds.

    Science.gov (United States)

    Gunawan, David; Sen, D

    2008-01-01

    It is well known that the spectral envelope is a perceptually salient attribute in musical instrument timbre perception. While a number of studies have explored discrimination thresholds for changes to the spectral envelope, the question of how sensitivity varies as a function of center frequency and bandwidth for musical instruments has yet to be addressed. In this paper a two-alternative forced-choice experiment was conducted to observe perceptual sensitivity to modifications made on trumpet, clarinet and viola sounds. The experiment involved attenuating 14 frequency bands for each instrument in order to determine discrimination thresholds as a function of center frequency and bandwidth. The results indicate that perceptual sensitivity is governed by the first few harmonics and sensitivity does not improve when extending the bandwidth any higher. However, sensitivity was found to decrease if changes were made only to the higher frequencies and continued to decrease as the distorted bandwidth was widened. The results are analyzed and discussed with respect to two other spectral envelope discrimination studies in the literature as well as what is predicted from a psychoacoustic model.
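
    A hedged sketch of the kind of stimulus manipulation described, attenuating one frequency band of a harmonic, instrument-like tone (synthetic here, not the study's recorded trumpet, clarinet or viola samples). Band edges and attenuation are assumed values.

```python
import numpy as np

SR = 44100
F0 = 262.0  # roughly C4

t = np.arange(0, 1.0, 1.0 / SR)
# Harmonic complex with a gently decaying spectral envelope.
tone = sum((1.0 / k) * np.sin(2 * np.pi * k * F0 * t) for k in range(1, 21))

def attenuate_band(signal, sr, lo_hz, hi_hz, atten_db):
    """Attenuate all spectral components between lo_hz and hi_hz by atten_db,
    using a simple zero-phase FFT mask."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    gain = 10.0 ** (-atten_db / 20.0)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    spectrum[band] *= gain
    return np.fft.irfft(spectrum, n=len(signal))

# e.g. a 6 dB cut over the band spanning harmonics 2-4.
modified = attenuate_band(tone, SR, 2 * F0 - 50, 4 * F0 + 50, atten_db=6.0)
```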

  11. Connected up to Sound. Approach to the Musical Consumption of Young People´s Daily Life in Santiago de Cuba

    Directory of Open Access Journals (Sweden)

    MSc. Ligia Lavielle-Pullés

    2015-11-01

    Full Text Available Among the different cultural consumptions of young people, musical consumption is one of the most significant. That is the topic of this paper, whose main premise is the constant presence of musical sounds in young people's daily life. Leisure, relaxation, a sense of company and moods are all conveyed by music. In addition, other cultural productions, artistic or not, such as audiovisual materials, are blended with musical products and rank high among people's preferences. This research is based on the theoretical frameworks of consumption and youth. The close combination of quantitative and qualitative methodologies made it possible to articulate the study and to develop interpretative criteria about the process of musical consumption, in which the singularity of sound and its influence in the media were important considerations. Keywords: musical consumption, youth, musical preferences.

  12. A Geometrical Method for Sound-Hole Size and Location Enhancement in Lute Family Musical Instruments: The Golden Method

    Directory of Open Access Journals (Sweden)

    Soheil Jafari

    2017-11-01

    Full Text Available This paper presents a new analytical approach, the Golden Method, to enhance sound-hole size and location in musical instruments of the lute family in order to obtain better sound damping characteristics, based on the concept of the golden ratio and the instrument geometry. The main objective of the paper is to increase the capability of lute family musical instruments to sustain a note at a certain level for a certain time, thereby enhancing the instruments' orchestral characteristics. For this purpose, the geometry-based analytical method is first described in detail in an itemized fashion. A new musical instrument is then developed and tested to confirm the ability of the Golden Method to optimize the acoustical characteristics of musical instruments from a damping point of view by designing a modified sound-hole. Finally, the newly developed instrument is tested, and the obtained results are compared with those of two well-known instruments to confirm the effectiveness of the proposed method. The experimental results show that the suggested method is able to increase the sound damping time by at least 2.4% without affecting the frequency response function and other acoustic characteristics of the instrument. This methodology could be used as the first step in future studies on the design, optimization and evaluation of musical instruments of the lute family (e.g., lute, oud, barbat, mandolin, and setar).

  13. Temporal-Spectral Characterization and Classification of Marine Mammal Vocalizations and Diesel-Electric Ships Radiated Sound over Continental Shelf Scale Regions with Coherent Hydrophone Array Measurements

    Science.gov (United States)

    Huang, Wei

    The passive ocean acoustic waveguide remote sensing (POAWRS) technology is capable of monitoring a large variety of underwater sound sources over instantaneous wide areas spanning continental-shelf scale regions. POAWRS uses a large-aperture densely-sampled coherent hydrophone array to significantly enhance the signal-to-noise ratio via beamforming, enabling detection of sound sources roughly two orders of magnitude more distant in range than is possible with a single hydrophone. The sound sources detected by POAWRS include ocean biology, geophysical processes, and man-made activities. POAWRS provides detection, bearing-time estimation, localization, and classification of underwater sound sources. The volume of underwater sounds detected by POAWRS is immense, typically exceeding a million unique signal detections per day, in the 10-4000 Hz frequency range, making it a tremendously challenging task to distinguish and categorize the various sound sources present in a given region. Here we develop various approaches for characterizing and clustering the signal detections for various subsets of data acquired using the POAWRS technology. The approaches include pitch tracking of the dominant signal detections, time-frequency feature extraction, clustering and categorization methods. These approaches are essential for automatic processing and enhancing the efficiency and accuracy of POAWRS data analysis. The results of the signal detection, clustering and classification analysis are required for further POAWRS processing, including localization and tracking of a large number of oceanic sound sources. Here the POAWRS detection, localization and clustering approaches are applied to analyze and elucidate the vocalization behavior of humpback, sperm and fin whales in the New England continental shelf and slope, including the Gulf of Maine, from data acquired using coherent hydrophone arrays. The POAWRS technology can also be applied for monitoring ocean vehicles.
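
    Beamforming on the densely sampled array is the step that buys the reported gain in signal-to-noise ratio. Below is a minimal time-domain delay-and-sum sketch for a uniform line array; the array geometry, sound speed, sample rate and steering angle are illustrative assumptions, not the POAWRS system's actual parameters.

```python
import numpy as np

C = 1500.0       # nominal sound speed in seawater, m/s
N_HYDRO = 32     # number of hydrophones
SPACING = 0.75   # element spacing, m
FS = 8000.0      # sample rate, Hz

def delay_and_sum(x, steer_deg):
    """Steer a uniform line array toward steer_deg (degrees from broadside).
    x has shape (channels, samples). Each channel is advanced by its
    geometric delay (applied as a frequency-domain phase shift) so that a
    plane wave from steer_deg adds coherently across channels."""
    n_ch, n_smp = x.shape
    delays = SPACING * np.arange(n_ch) * np.sin(np.radians(steer_deg)) / C
    freqs = np.fft.rfftfreq(n_smp, 1.0 / FS)
    out = np.zeros(n_smp)
    for ch in range(n_ch):
        spectrum = np.fft.rfft(x[ch]) * np.exp(2j * np.pi * freqs * delays[ch])
        out += np.fft.irfft(spectrum, n=n_smp)
    return out / n_ch

# Example: white noise plus a weak 400 Hz plane wave arriving from 20 degrees.
rng = np.random.default_rng(1)
t = np.arange(0, 1.0, 1.0 / FS)
signals = np.zeros((N_HYDRO, t.size))
for ch in range(N_HYDRO):
    tau = SPACING * ch * np.sin(np.radians(20.0)) / C   # per-channel arrival delay
    signals[ch] = 0.1 * np.sin(2 * np.pi * 400.0 * (t - tau)) + rng.standard_normal(t.size)

beam = delay_and_sum(signals, steer_deg=20.0)   # coherent gain on the 400 Hz tone
```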

  14. Glottal volume velocity waveform characteristics in subjects with and without vocal training, related to gender, sound intensity, fundamental frequency, and age

    NARCIS (Netherlands)

    Sulter, AM; Wit, HP

    1996-01-01

    Glottal volume velocity waveform characteristics of 224 subjects, categorized in four groups according to gender and vocal training, were determined, and their relations to sound-pressure level, fundamental frequency, intra-oral pressure, and age were analyzed. Subjects phonated at three intensity

  16. A socio-musical analysis of Ayo Bankole's choral music: Fun Mi N ...

    African Journals Online (AJOL)

    ... diverse African elements and major features of African music are present in the melody, rhythm and harmony of the vocal work. The paper concluded that Fun Mi N'Ibeji contained both elements of traditional Nigerian music and Western classical music which are employed in expressing the traditional beliefs of the Yoruba ...

  17. Sound pressure levels generated at risk volume steps of portable listening devices: types of smartphone and genres of music.

    Science.gov (United States)

    Kim, Gibbeum; Han, Woojae

    2018-05-01

    The present study estimated the sound pressure levels of various music genres at the volume steps that contemporary smartphones deliver, because these levels put the listener at potential risk for hearing loss. Using six different smartphones (Galaxy S6, Galaxy Note 3, iPhone 5S, iPhone 6, LG G2, and LG G3), the sound pressure levels for three genres of K-pop music (dance-pop, hip-hop, and pop-ballad) and a Billboard pop chart of assorted genres were measured through an earbud, using a sound level meter and an artificial mastoid, at the first 'risk' volume step flagged by each smartphone as well as at consecutively higher volumes. The first risk volume step of the Galaxy S6 and the LG G2, among the six smartphones, had the significantly lowest (84.1 dBA) and highest output levels (92.4 dBA), respectively. As the volume step increased, so did the sound pressure levels. The iPhone 6 was loudest (113.1 dBA) at the maximum volume step. Of the music genres, dance-pop showed the highest output level (91.1 dBA) across all smartphones. Within the frequency range of 20-20,000 Hz, the sound pressure level peaked at 2000 Hz for all the smartphones. The results showed that the sound pressure levels of either the first volume step or the maximum volume step were not the same for the different smartphone models and genres of music, which means that the risk volume sign and its output levels should be unified across devices for their users. In addition, the risk volume steps proposed by the latest smartphone models are high enough to cause noise-induced hearing loss if their users habitually listen to music at those levels.

  18. Acoustic characteristics of modern Greek Orthodox Church music.

    Science.gov (United States)

    Delviniotis, Dimitrios S

    2013-09-01

    Some acoustic characteristics of the two types of vocal music of the Greek Orthodox Church Music, the Byzantine chant (BC) and ecclesiastical speech (ES), are studied in relation to the common Greek speech and the Western opera. Vocal samples were obtained, and their acoustic parameters of sound pressure level (SPL), fundamental frequency (F0), and the long-time average spectrum (LTAS) characteristics were analyzed. Twenty chanters, including two chanters-singers of opera, sang (BC) and read (ES) the same hymn of Byzantine music (BM), the two opera singers sang the same aria of opera, and common speech samples were obtained, and all audio were analyzed. The distribution of SPL values showed that the BC and ES have higher SPL by 9 and 12 dB, respectively, than common speech. The average F0 in ES tends to be lower than the common speech, and the smallest standard deviation (SD) of F0 values characterizes its monotonicity. The tone-scale intervals of BC are close enough to the currently accepted theory with SD equal to 0.24 semitones. The rate and extent of vibrato, which is rare in BC, equals 4.1 Hz and 0.6 semitones, respectively. The average LTAS slope is greatest in BC (+4.5 dB) but smaller than in opera (+5.7 dB). In both BC and ES, instead of a singer's formant appearing in an opera voice, a speaker's formant (SPF) was observed around 3300 Hz, with relative levels of +6.3 and +4.6 dB, respectively. The two vocal types of BM, BC, and ES differ both to each other and common Greek speech and opera style regarding SPL, the mean and SD of F0, the LTAS slope, and the relative level of SPF. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
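
    The long-time average spectrum analysis described above can be approximated with a Welch-averaged power spectrum. The sketch below uses an assumed file name and analysis settings; the study's exact settings are not given in the abstract.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

# Hypothetical recording of a chanted hymn; the file name is an assumption.
sr, audio = wavfile.read("byzantine_chant.wav")
if audio.ndim > 1:                      # mix down to mono if stereo
    audio = audio.mean(axis=1)
audio = audio / np.max(np.abs(audio))   # normalise

# Long-time average spectrum via Welch averaging of many short frames.
freqs, psd = welch(audio, fs=sr, nperseg=4096, noverlap=2048)
ltas_db = 10.0 * np.log10(psd + 1e-12)

# Relative level of a putative speaker's formant region around 3300 Hz.
region = (freqs > 2800) & (freqs < 3800)
print("peak level near 3.3 kHz: %.1f dB (re. overall max)"
      % (ltas_db[region].max() - ltas_db.max()))
```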

  19. Effectiveness of radio spokesperson's gender, vocal pitch and accent and the use of music in radio advertising

    Directory of Open Access Journals (Sweden)

    Josefa D. Martín-Santana

    2015-07-01

    Full Text Available The aim of this study is to analyze how certain voice features of radio spokespersons and background music influence the advertising effectiveness of a radio spot from the cognitive, affective and conative perspectives. We used a 2 × 2 × 2 × 2 experimental design in 16 different radio programs in which an ad hoc radio spot was inserted during an advertising block. This ad changed according to combinations of the spokesperson's gender (male–female), vocal pitch (low–high) and accent (local–standard). In addition to these independent factors, the effect of background music in advertisements was also tested and compared with ads that contained only words. The sample comprised 987 regular radio listeners who were exposed to the radio program we created. Based on the differences in the levels of effectiveness across the tested voice features, our results suggest that the choice of voice in radio advertising is one of the most important decisions an advertiser faces. Furthermore, the findings show that the inclusion of music does not always imply greater effectiveness.

  20. Autoshaping Infant Vocalizations

    OpenAIRE

    Myers, Alexander McNaughton

    1981-01-01

    A series of five experiments was conducted to determine whether operant or respondent factors controlled the emission of a particular vocalization ("Q") by human infants 16 to 18 months old. Experiment 1 consisted of a pilot investigation of the effects of an autoshaping procedure on three infants' vocal behavior. All three subjects demonstrated increased emission of the target sound during the CR period. Experiments 2 through 4 attempted to replicate the findings of Experiment 1 under cont...

  1. Extensão vocal de idosos coralistas e não coralistas Vocal range in aged choristers and non-choristers

    Directory of Open Access Journals (Sweden)

    Tatiana Fernandes Rocha

    2007-06-01

    Full Text Available PURPOSE: to compare the vocal range of elderly choristers and non-choristers and to analyze the influence of amateur choral singing on their vocal range. METHODS: the vocal range, in semitones, was measured with a musical keyboard, and the number of semitones was compared between 40 elderly choristers and 40 non-choristers. RESULTS: the number of semitones reached by the choristers was significantly greater than that reached by the non-choristers. The vocal range of the elderly choristers spanned 27 to 39 semitones, the maximum corresponding to 3 octaves, 1 tone and 1 semitone; the vocal range of the elderly non-choristers spanned 18 to 35 semitones, the maximum corresponding to 2 octaves, 5 tones and 1 semitone. CONCLUSION: the practice of amateur choral singing increases the vocal range of elderly choristers.

  2. Adapted to roar: functional morphology of tiger and lion vocal folds.

    Directory of Open Access Journals (Sweden)

    Sarah A Klemuk

    Full Text Available Vocal production requires active control of the respiratory system, larynx and vocal tract. Vocal sounds in mammals are produced by flow-induced vocal fold oscillation, which requires vocal fold tissue that can sustain the mechanical stress during phonation. Our understanding of the relationship between morphology and vocal function of vocal folds is very limited. Here we tested the hypothesis that vocal fold morphology and viscoelastic properties allow a prediction of fundamental frequency range of sounds that can be produced, and minimal lung pressure necessary to initiate phonation. We tested the hypothesis in lions and tigers who are well-known for producing low frequency and very loud roaring sounds that expose vocal folds to large stresses. In histological sections, we found that the Panthera vocal fold lamina propria consists of a lateral region with adipocytes embedded in a network of collagen and elastin fibers and hyaluronan. There is also a medial region that contains only fibrous proteins and hyaluronan but no fat cells. Young's moduli range between 10 and 2000 kPa for strains up to 60%. Shear moduli ranged between 0.1 and 2 kPa and differed between layers. Biomechanical and morphological data were used to make predictions of fundamental frequency and subglottal pressure ranges. Such predictions agreed well with measurements from natural phonation and phonation of excised larynges, respectively. We assume that fat shapes Panthera vocal folds into an advantageous geometry for phonation and it protects vocal folds. Its primary function is probably not to increase vocal fold mass as suggested previously. The large square-shaped Panthera vocal fold eases phonation onset and thereby extends the dynamic range of the voice.
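
    The kind of prediction described, fundamental frequency from vocal fold geometry and tissue stress, is commonly made with the ideal string model (stated here as general background; the abstract does not give the authors' exact formulation), where L is the vibrating vocal fold length, σ the tissue stress and ρ the tissue density:

```latex
F_0 = \frac{1}{2L}\sqrt{\frac{\sigma}{\rho}}
```

    The long Panthera folds (large L) push F0 down, consistent with the low-pitched roars discussed in the abstract.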

  3. Music publishing

    OpenAIRE

    Simões, Alberto; Almeida, J. J.

    2003-01-01

    Current music publishing in the Internet is mainly concerned with sound publishing. We claim that music publishing is not only to make sound available but also to define relations between a set of music objects like music scores, guitar chords, lyrics and their meta-data. We want an easy way to publish music in the Internet, to make high quality paper booklets and even to create Audio CD's. In this document we present a workbench for music publishing based on open formats, using open-source t...

  4. Primate auditory recognition memory performance varies with sound type.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-10-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g., social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition and/or memory. The present study employs a delayed matching-to-sample task with auditory stimuli to examine auditory memory performance of rhesus macaques (Macaca mulatta), wherein two sounds are determined to be the same or different. Rhesus macaques seem to have relatively poor short-term memory with auditory stimuli, and we examine if particular sound types are more favorable for memory performance. Experiment 1 suggests memory performance with vocalization sound types (particularly monkey), are significantly better than when using non-vocalization sound types, and male monkeys outperform female monkeys overall. Experiment 2, controlling for number of sound exemplars and presentation pairings across types, replicates Experiment 1, demonstrating better performance or decreased response latencies, depending on trial type, to species-specific monkey vocalizations. The findings cannot be explained by acoustic differences between monkey vocalizations and the other sound types, suggesting the biological, and/or ethological meaning of these sounds are more effective for auditory memory. 2009 Elsevier B.V.

  5. Voicing the Technological Body. Some Musicological Reflections on Combinations of Voice and Technology in Popular Music

    Directory of Open Access Journals (Sweden)

    Florian Heesch

    2016-05-01

    Full Text Available The article deals with interrelations of voice, body and technology in popular music from a musicological perspective. It is an attempt to outline a systematic approach to the history of music technology with regard to aesthetic aspects, taking the identity of the singing subject as a main point of departure for a hermeneutic reading of popular song. Although the argumentation is based largely on musicological research, it is also inspired by the notion of presentness as developed by theologian and media scholar Walter Ong. The variety of the relationships between voice, body, and technology with regard to musical representations of identity, in particular gender and race, is systematized along the following categories: (1) the “absence of the body,” which starts with the establishment of phonography; (2) “amplified presence,” as a signifier for uses of the microphone to enhance low sounds in certain manners; and (3) “hybridity,” including vocal identities that blend human body sounds and technological processing, whereby special focus is laid on uses of the vocoder and similar technologies.

  6. Humans mimicking animals: A cortical hierarchy for human vocal communication sounds

    Science.gov (United States)

    Talkington, William J.; Rapuano, Kristina M.; Hitt, Laura; Frum, Chris A.; Lewis, James W.

    2012-01-01

    Numerous species possess cortical regions that are most sensitive to vocalizations produced by their own kind (conspecifics). In humans, the superior temporal sulci (STS) putatively represent homologous voice-sensitive areas of cortex. However, STS regions have recently been reported to represent auditory experience or “expertise” in general rather than showing exclusive sensitivity to human vocalizations per se. Using functional magnetic resonance imaging and a unique non-stereotypical category of complex human non-verbal vocalizations – human-mimicked versions of animal vocalizations – we found a cortical hierarchy in humans optimized for processing meaningful conspecific utterances. This left-lateralized hierarchy originated near primary auditory cortices and progressed into traditional speech-sensitive areas. These results suggest that the cortical regions supporting vocalization perception are initially organized by sensitivity to the human vocal tract in stages prior to the STS. Additionally, these findings have implications for the developmental time course of conspecific vocalization processing in humans as well as its evolutionary origins. PMID:22674283

  7. Hoarse with No Name: Chronic Voice Problems, Policy and Music Teacher Marginalisation

    Science.gov (United States)

    Schmidt, Patrick; Morrow, Sharon L.

    2016-01-01

    The voice is arguably one of the most important tools of the trade for music teachers. However, vocal health for music teachers is often relegated to the margins of policy discussion. This article investigates the social and political environs where vocal health resides, arguing that music teachers must be the first advocates for the enforcement…

  8. Determination of sound types and source levels of airborne vocalizations by California sea lions, Zalophus californianus, in rehabilitation at the Marine Mammal Center in Sausalito, California

    Science.gov (United States)

    Schwalm, Afton Leigh

    California sea lions (Zalophus californianus) are a highly popular and easily recognized marine mammal in zoos, aquariums, and circuses, and are often seen by ocean visitors. They are highly vocal and gregarious on land. Surprisingly, little research has been performed on the vocalization types, source levels, acoustic properties, and functions of airborne sounds used by California sea lions. This research on airborne vocalizations of California sea lions will advance the understanding of this aspect of California sea lion communication, as well as examine the relationship between health condition and acoustic behavior. Using a Phillips™ digital recorder with attached microphone and a calibrated RadioShack™ sound pressure level meter, acoustical data were recorded opportunistically on California sea lions during rehabilitation at The Marine Mammal Center in Sausalito, CA. Vocalizations were analyzed using frequency, time, and amplitude variables with Raven Pro: Interactive Sound Analysis Software Version 1.4 (The Cornell Lab of Ornithology, Ithaca, NY). Five frequency, three time, and four amplitude variables were analyzed for each vocalization. Differences in frequency, time, and amplitude variables were not significant by sex. The older California sea lion group produced vocalizations that were significantly lower in four frequency variables, significantly longer in two time variables, significantly higher in calibrated maximum and minimum amplitude variables, and significantly lower in frequency at maximum and minimum amplitude compared with pups. Six call types were identified: bark, goat, growl/grumble, bark/grumble, bark/growl, and grumble/moan. The growl/grumble call was higher in dominant beginning, ending, and minimum frequency, as well as in the frequency at maximum amplitude, compared with the bark, goat, and bark/grumble calls in the first versus last vocalization sample. The goat call was significantly higher in first harmonic interval than any other call type

  9. InfoSound

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.; Gopinath, B.; Haberman, Gary O.

    1990-01-01

    The authors explore ways to enhance users' comprehension of complex applications using music and sound effects to present application-program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and sound effects with...

  10. Sound Surfing Network (SSN): Mobile Phone-based Sound Spatialization with Audience Collaboration

    OpenAIRE

    Park, Saebyul; Ban, Seonghoon; Hong, Dae Ryong; Yeo, Woon Seung

    2013-01-01

    SSN (Sound Surfing Network) is a performance system that provides a new musical experience by incorporating mobile phone-based spatial sound control into collaborative music performance. SSN enables both the performer and the audience to manipulate the spatial distribution of sound using the smartphones of the audience as a distributed speaker system. Proposing a new perspective on the social aspect of music appreciation, SSN will provide a new possibility for mobile music performances in the context of in...

  11. Microsound and Macrocosm : Gérard Grisey’s Explorations of Musical Sound and Space

    NARCIS (Netherlands)

    Kursell, J.; Schäfer, A.; Kaduri, Y.

    2016-01-01

    This chapter investigates concepts of space in French composer Gérard Grisey’s music. From the 1970s onward, he used sound spectrograms, introducing the compositional technique of “spectralism,” which can be traced back to Arnold Schoenberg’s concept of Klangfarbe. The cycle Les Espaces acoustiques

  12. Sound Art and Spatial Practices: Situating Sound Installation Art Since 1958

    OpenAIRE

    Ouzounian, Gascia

    2008-01-01

    This dissertation examines the emergence and development of sound installation art, an under-recognized tradition that has developed between music, architecture, and media art practices since the late 1950s. Unlike many musical works, which are concerned with organizing sounds in time, sound installations organize sounds in space; they thus necessitate new theoretical and analytical models that take into consideration the spatial situated-ness of sound. Existing discourses on “spatial sound” privile...

  13. Music for the birds: effects of auditory enrichment on captive bird species.

    Science.gov (United States)

    Robbins, Lindsey; Margulis, Susan W

    2016-01-01

    With the increase of mixed species exhibits in zoos, targeting enrichment for individual species may be problematic. Often, mammals may be the primary targets of enrichment, yet other species that share their environment (such as birds) will unavoidably be exposed to the enrichment as well. The purpose of this study was to determine (1) whether auditory stimuli designed for enrichment of primates influenced the behavior of captive birds in the zoo setting, and (2) whether the specific type of auditory enrichment impacted bird behavior. Three different African bird species were observed at the Buffalo Zoo during exposure to natural sounds, classical music, and rock music. The results revealed that the average frequency of flying in all three bird species increased with naturalistic sounds and decreased with rock music (F = 7.63, df = 3,6, P = 0.018); vocalizations for two of the three species (Superb Starlings and Mousebirds) increased (F = 18.61, df = 2,6, P = 0.0027) in response to all auditory stimuli; however, one species (Lady Ross's Turacos) increased its frequency of duetting only in response to rock music (χ² = 18.5, df = 2, P < 0.0001). Auditory enrichment implemented for large mammals may influence behavior in non-target species as well, in this case leading to increased activity by birds. © 2016 Wiley Periodicals, Inc.

  14. Adapting Music for the Ninth Grade Mixed Chorus.

    Science.gov (United States)

    McIntosh, Kathleen

    1980-01-01

    The author discusses how the ninth grader's vocal development, personality development and musical preferences create unique problems in selecting music for ninth grade choirs. Suggestions are made for adapting published choral music. A list of sacred, secular and Christmas music is included. (KC)

  15. Perception and Modeling of Affective Qualities of Musical Instrument Sounds across Pitch Registers.

    Science.gov (United States)

    McAdams, Stephen; Douglas, Chelsea; Vempala, Naresh N

    2017-01-01

    Composers often pick specific instruments to convey a given emotional tone in their music, partly due to their expressive possibilities, but also due to their timbres in specific registers and at given dynamic markings. Of interest to both music psychology and music informatics from a computational point of view is the relation between the acoustic properties that give rise to the timbre at a given pitch and the perceived emotional quality of the tone. Musician and nonmusician listeners were presented with 137 tones produced at a fixed dynamic marking (forte) at pitch class D# across each instrument's entire pitch range and with different playing techniques for standard orchestral instruments drawn from the brass, woodwind, string, and pitched percussion families. They rated each tone on six analogical-categorical scales in terms of emotional valence (positive/negative and pleasant/unpleasant), energy arousal (awake/tired), tension arousal (excited/calm), preference (like/dislike), and familiarity. Linear mixed models revealed interactive effects of musical training, instrument family, and pitch register, with non-linear relations between pitch register and several dependent variables. Twenty-three audio descriptors from the Timbre Toolbox were computed for each sound and analyzed in two ways: linear partial least squares regression (PLSR) and nonlinear artificial neural net modeling. These two analyses converged in terms of the importance of various spectral, temporal, and spectrotemporal audio descriptors in explaining the emotion ratings, but some differences also emerged. Different combinations of audio descriptors make major contributions to the three emotion dimensions, suggesting that they are carried by distinct acoustic properties. Valence is more positive with lower spectral slopes, a greater emergence of strong partials, and an amplitude envelope with a sharper attack and earlier decay. Higher tension arousal is carried by brighter sounds
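
    As a rough illustration of the first of the two modeling approaches mentioned above, the sketch below fits a partial least squares regression from a matrix of audio descriptors to a rating dimension with scikit-learn. The descriptor matrix and ratings are random placeholders standing in for the Timbre Toolbox output and the listener data; only the modeling step itself mirrors the description.

```python
# Hedged sketch: PLS regression from audio descriptors to emotion ratings.
# X stands in for 23 Timbre Toolbox descriptors over 137 tones; y for mean
# valence ratings. Both are random placeholders, not the study's data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_tones, n_descriptors = 137, 23
X = rng.normal(size=(n_tones, n_descriptors))   # spectral/temporal/spectrotemporal descriptors
y = rng.normal(size=n_tones)                    # e.g., mean valence rating per tone

pls = PLSRegression(n_components=3)             # number of latent components is a free choice
r2_cv = cross_val_score(pls, X, y, cv=5, scoring="r2")
pls.fit(X, y)

# Descriptors with large absolute weights on the first component are the
# ones contributing most to the predicted emotion dimension.
print("cross-validated R^2:", r2_cv.mean())
print("top descriptor indices:", np.argsort(np.abs(pls.x_weights_[:, 0]))[::-1][:5])
```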

  16. MUSICAL INTERPRETATION OF THE POETIC SOURCE IN THE RAP POEM FROM THE VOCAL-SYMPHONIC CYCLE FOR BARITONE AND ORCHESTRA APRÈS UNE LECTURE BY GHENADIE CIOBANU

    Directory of Open Access Journals (Sweden)

    CIOBANU-SUHOMLIN IRINA

    2017-06-01

    Full Text Available The author considers the peculiarities of the interpretation of the poetic text in the music of one of the poems of the vocal and symphonic cycle for baritone and orchestra „Après une Lecture” by Ghenadie Ciobanu. The purpose of the article is to reveal the specific features of the musical and poetic synthesis achieved by the composer in the „Rap Poem”: in the description of the genre basis of the poetic and musical works, their figurative content, the correlation of their syntax, the specific musical intonation of the poetic text, the role of the orchestra. The composer’s method of working with a word is evaluated from the novelty standpoint, and the final artistic result is determined from the position of its stylistic affiliation as a musical example of the „third direction” in the music of the 20th century.

  17. Vocal Hygiene Habits and Vocal Handicap Among Conservatory Students of Classical Singing.

    Science.gov (United States)

    Achey, Meredith A; He, Mike Z; Akst, Lee M

    2016-03-01

    This study sought to assess classical singing students' compliance with vocal hygiene practices identified in the literature and to explore the relationship between self-reported vocal hygiene practice and self-reported singing voice handicap in this population. The primary hypothesis was that increased attention to commonly recommended vocal hygiene practices would correlate with reduced singing voice handicap. This is a cross-sectional, survey-based study. An anonymous survey assessing demographics, attention to 11 common vocal hygiene recommendations in both performance and nonperformance periods, and the Singing Voice Handicap Index 10 (SVHI-10) was distributed to classical singing teachers to be administered to their students at two major schools of music. Of the 215 surveys distributed, 108 were returned (50.2%), of which 4 were incomplete and discarded from analysis. Conservatory students of classical singing reported a moderate degree of vocal handicap (mean SVHI-10, 12; range, 0-29). Singers reported considering all 11 vocal hygiene factors more frequently when preparing for performances than when not preparing for performances. Of these, significant correlations with increased handicap were identified for consideration of stress reduction in nonperformance (P = 0.01) and performance periods (P = 0.02) and with decreased handicap for consideration of singing voice use in performance periods alone (P = 0.02). Conservatory students of classical singing report more assiduous attention to vocal hygiene practices when preparing for performances and report moderate degrees of vocal handicap overall. These students may have elevated risk for dysphonia and voice disorders which is not effectively addressed through common vocal hygiene recommendations alone. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  18. Vocal Health Education and Medical Resources for Graduate-Level Vocal Performance Students.

    Science.gov (United States)

    Latham, Katherine; Messing, Barbara; Bidlack, Melissa; Merritt, Samantha; Zhou, Xian; Akst, Lee M

    2017-03-01

    Most agree that education about vocal health and physiology can help singers avoid the development of vocal disorders. However, little is known about how this kind of education is provided to singers as part of their formal training. This study describes the amount of instruction in these topics provided through graduate-level curricula, who provides this instruction, and the kinds of affiliations such graduate singing programs have with medical professionals. This is an online survey of music schools with graduate singing programs. Survey questions addressed demographics of the programs, general attitudes about vocal health instruction for singers, the amount of vocal health instruction provided and by whom it was taught, perceived barriers to including more vocal health instruction, and any affiliations the voice program might have with medical personnel. Eighty-one survey responses were received. Instruction on vocal health was provided in 95% of the schools. In 55% of the schools, none of this instruction was given by a medical professional. Limited time in the curriculum, lack of financial support, and lack of availability of medical professionals were the most frequently reported barriers to providing more instruction. When programs offered more hours of instruction, they were more likely to have some of that instruction given by a medical professional (P = 0.008) and to assess the amount of instruction provided positively (P = 0.001). There are several perceived barriers to incorporating vocal health education into graduate singing programs. Opportunity exists for more collaboration between vocal pedagogues and medical professionals in the education of singers about vocal health. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  19. Principles of structure building in music, language and animal song

    Science.gov (United States)

    Rohrmeier, Martin; Zuidema, Willem; Wiggins, Geraint A.; Scharff, Constance

    2015-01-01

    Human language, music and a variety of animal vocalizations constitute ways of sonic communication that exhibit remarkable structural complexity. While the complexities of language and possible parallels in animal communication have been discussed intensively, reflections on the complexity of music and animal song, and their comparisons, are underrepresented. In some ways, music and animal songs are more comparable to each other than to language as propositional semantics cannot be used as indicator of communicative success or wellformedness, and notions of grammaticality are less easily defined. This review brings together accounts of the principles of structure building in music and animal song. It relates them to corresponding models in formal language theory, the extended Chomsky hierarchy (CH), and their probabilistic counterparts. We further discuss common misunderstandings and shortcomings concerning the CH and suggest ways to move beyond. We discuss language, music and animal song in the context of their function and motivation and further integrate problems and issues that are less commonly addressed in the context of language, including continuous event spaces, features of sound and timbre, representation of temporality and interactions of multiple parallel feature streams. We discuss these aspects in the light of recent theoretical, cognitive, neuroscientific and modelling research in the domains of music, language and animal song. PMID:25646520

  20. 6. The Interdisciplinary Dimension of the Vocalchoral Culture of the Pupil in the Music School

    Directory of Open Access Journals (Sweden)

    Glebov Ana

    2018-03-01

    Full Text Available The article examines the interdisciplinary dimension of the educational process in the context of the vocal-choral culture of the students of the music school. The concept of interdisciplinarity is considered as a symbiosis of two or more academic disciplines in the formation of the vocal-choral culture of pupils in the musical-artistic field. In this case, interdisciplinarity arises from integrating the knowledge, capabilities and aptitudes formed, taking into account significant specific factors. Thus, the interdisciplinary approach is carried out through disciplines such as solfeggio, history of music, instrument and ensemble, but also through the integration into vocal-choral work of the method of interiorizing music through its philosophical, musical and psycho-pedagogical aspects.

  1. Innovation In Music

    OpenAIRE

    2014-01-01

    The music industry is a fast-moving field with new technology and methodological advances combining to catalyse innovations all the time. 'Innovation in Music 2013' was an international conference exploring this topic, held in December 2013 in York, UK. The event covered specific and cross-disciplinary aspects of the music industry including music creation, technology, production and business, sound engineering, mastering, post production and sound design, games music and cross-disciplinary t...

  2. A Joint Prosodic Origin of Language and Music

    Directory of Open Access Journals (Sweden)

    Steven Brown

    2017-10-01

    Full Text Available Vocal theories of the origin of language rarely make a case for the precursor functions that underlay the evolution of speech. The vocal expression of emotion is unquestionably the best candidate for such a precursor, although most evolutionary models of both language and speech ignore emotion and prosody altogether. I present here a model for a joint prosodic precursor of language and music in which ritualized group-level vocalizations served as the ancestral state. This precursor combined not only affective and intonational aspects of prosody, but also holistic and combinatorial mechanisms of phrase generation. From this common stage, there was a bifurcation to form language and music as separate, though homologous, specializations. This separation of language and music was accompanied by their (re)unification in songs with words.

  3. Music and natural sounds in an auditory steady-state response based brain-computer interface to increase user acceptance.

    Science.gov (United States)

    Heo, Jeong; Baek, Hyun Jae; Hong, Seunghyeok; Chang, Min Hye; Lee, Jeong Su; Park, Kwang Suk

    2017-05-01

    Patients with total locked-in syndrome are conscious; however, they cannot express themselves because most of their voluntary muscles are paralyzed, and many of these patients have lost their eyesight. To improve the quality of life of these patients, there is an increasing need for communication-supporting technologies that leverage the remaining senses of the patient along with physiological signals. The auditory steady-state response (ASSR) is an electro-physiologic response to auditory stimulation that is amplitude-modulated by a specific frequency. By leveraging the phenomenon whereby ASSR is modulated by mind concentration, a brain-computer interface paradigm was proposed to classify the selective attention of the patient. In this paper, we propose an auditory stimulation method to minimize auditory stress by replacing the monotone carrier with familiar music and natural sounds for an ergonomic system. Piano and violin instrumentals were employed in the music sessions; the sounds of water streaming and cicadas singing were used in the natural sound sessions. Six healthy subjects participated in the experiment. Electroencephalograms were recorded using four electrodes (Cz, Oz, T7 and T8). Seven sessions were performed using different stimuli. The spectral power at 38 and 42 Hz and their ratio for each electrode were extracted as features. Linear discriminant analysis was utilized to classify the selections for each subject. In offline analysis, the average classification accuracies with a modulation index of 1.0 were 89.67% and 87.67% using music and natural sounds, respectively. In online experiments, the average classification accuracies were 88.3% and 80.0% using music and natural sounds, respectively. Using the proposed method, we obtained significantly higher user-acceptance scores, while maintaining a high average classification accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.
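
    The feature-and-classifier pipeline described here can be sketched in a few lines: estimate narrow-band power at the two modulation frequencies for each electrode, form their ratio, and feed the resulting feature vector to linear discriminant analysis. The sketch below uses synthetic EEG epochs as placeholders; the sampling rate, epoch length, and power estimator are assumptions rather than details taken from the paper.

```python
# Hedged sketch of an ASSR selective-attention classifier: band power at the
# 38 Hz and 42 Hz modulation frequencies (plus their ratio) per electrode,
# classified with LDA. Epochs and labels here are synthetic placeholders.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 256                      # assumed sampling rate (Hz)
CHANNELS = 4                  # e.g., Cz, Oz, T7, T8

def band_power(sig, fs, f0, bw=1.0):
    """Mean PSD within f0 +/- bw Hz for one channel."""
    freqs, psd = welch(sig, fs=fs, nperseg=fs * 2)
    band = (freqs >= f0 - bw) & (freqs <= f0 + bw)
    return psd[band].mean()

def features(epoch):
    """Per channel: power at 38 Hz, power at 42 Hz, and their ratio."""
    feats = []
    for ch in range(epoch.shape[0]):
        p38 = band_power(epoch[ch], FS, 38.0)
        p42 = band_power(epoch[ch], FS, 42.0)
        feats += [p38, p42, p38 / p42]
    return feats

# Synthetic stand-ins: 60 epochs of 4-channel EEG, 6 s each, with labels
# 0/1 for which of the two amplitude-modulated streams was attended.
rng = np.random.default_rng(1)
epochs = rng.normal(size=(60, CHANNELS, FS * 6))
labels = rng.integers(0, 2, size=60)

X = np.array([features(ep) for ep in epochs])
acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5)
print("cross-validated accuracy:", acc.mean())
```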

  4. Elaborate Mimetic Vocal Displays by Female Superb Lyrebirds

    Directory of Open Access Journals (Sweden)

    Anastasia H Dalziell

    2016-04-01

    Full Text Available Some of the most striking vocalizations in birds are made by males that incorporate vocal mimicry in their sexual displays. Mimetic vocalization in females is largely undescribed, but it is unclear whether this is because of a lack of selection for vocal mimicry in females, or whether the phenomenon has simply been overlooked. These issues are thrown into sharp relief in the superb lyrebird, Menura novaehollandiae, a basal oscine passerine with a lek-like mating system and female uniparental care. The spectacular mimetic song display produced by courting male lyrebirds is a textbook example of a sexually selected trait, but the vocalizations of female lyrebirds are largely unknown. Here, we provide the first analysis of the structure and context of the vocalizations of female lyrebirds. Female lyrebirds were completely silent during courtship; however, females regularly produced sophisticated vocal displays incorporating both lyrebird-specific vocalizations and imitations of sounds within their environment. The structure of female vocalizations varied significantly with context. While foraging, females mostly produced a complex lyrebird-specific song, whereas they gave lyrebird-specific alarm calls most often during nest defense. Within their vocal displays females also included a variety of mimetic vocalizations, including imitations of the calls of dangerous predators, and of alarm calls and song of harmless heterospecifics. Females gave more mimetic vocalizations during nest defense than while foraging, and the types of sounds they imitated varied between these contexts, suggesting that mimetic vocalizations have more than one function. These results are inconsistent with previous portrayals of vocalizations by female lyrebirds as rare, functionless by-products of sexual selection on males. Instead, our results support the hypotheses that complex female vocalizations play a role in nest defense and mediate female-female competition for

  5. Hearing of note: an electrophysiologic and psychoacoustic comparison of pitch discrimination between vocal and instrumental musicians.

    Science.gov (United States)

    Nikjeh, Dee A; Lister, Jennifer J; Frisch, Stefan A

    2008-11-01

    Cortical auditory evoked potentials of instrumental musicians suggest that music expertise modifies pitch processing, yet less is known about vocal musicians. Mismatch negativity (MMN) to pitch deviances and the difference limen for frequency (DLF) were examined among 61 young adult women, including 20 vocalists, 21 instrumentalists, and 20 nonmusicians. Stimuli were harmonic tone complexes from the mid-female vocal range (C4-G4). MMN was elicited by a multideviant paradigm. DLF was obtained by an adaptive psychophysical paradigm. Musicians detected pitch changes earlier than nonmusicians, and their DLFs were 50% smaller. Both vocal and instrumental musicians possess superior sensory-memory representations for acoustic parameters. Vocal musicians with instrumental training appear to have an auditory neural advantage over instrumental-only or vocal-only musicians. An incidental finding reveals P3a as a sensitive index of music expertise.
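
    The 'adaptive psychophysical paradigm' used to estimate the difference limen for frequency is typically a staircase that shrinks the frequency difference after correct responses and enlarges it after errors. The sketch below implements a generic 2-down/1-up staircase with a simulated listener; the step rule, starting values, and listener model are illustrative assumptions, not the procedure reported in this study.

```python
# Hedged sketch of a 2-down/1-up adaptive staircase for estimating a
# difference limen for frequency (DLF). The simulated listener and all
# parameters are illustrative, not the study's actual procedure.
import numpy as np

rng = np.random.default_rng(2)
REF_HZ = 330.0          # reference tone, roughly E4 in the mid-female vocal range
TRUE_DLF = 2.0          # the simulated listener's true threshold in Hz

def listener_correct(delta_hz):
    """Probability of a correct response rises with the frequency difference."""
    p = 1.0 / (1.0 + np.exp(-(delta_hz - TRUE_DLF)))
    return rng.random() < p

delta = 20.0            # starting frequency difference (Hz)
step = 2.0
reversals, correct_run = [], 0
direction = -1          # -1 = currently making the task harder

while len(reversals) < 8:
    if listener_correct(delta):
        correct_run += 1
        if correct_run == 2:                 # 2-down: two correct -> smaller delta
            correct_run = 0
            if direction == +1:
                reversals.append(delta)      # direction change = reversal
            direction = -1
            delta = max(delta - step, 0.1)
    else:                                    # 1-up: one error -> larger delta
        correct_run = 0
        if direction == -1:
            reversals.append(delta)
        direction = +1
        delta += step

# Threshold estimate: mean of the last few reversal points.
print(f"estimated DLF near {REF_HZ} Hz: {np.mean(reversals[-4:]):.2f} Hz")
```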

  6. The role of the medial temporal limbic system in processing emotions in voice and music.

    Science.gov (United States)

    Frühholz, Sascha; Trost, Wiebke; Grandjean, Didier

    2014-12-01

    Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Tragicomedy, Melodrama, and Genre in Early Sound Films: The Case of Two “Sad Clown” Musicals

    Directory of Open Access Journals (Sweden)

    Michael G. Garber

    2016-10-01

    Full Text Available This interdisciplinary study applies the theatrical theories of stage genres to examples of the early sound cinema, the 1930 Hollywood musicals Puttin’ on the Ritz (starring Harry Richman, with songs by Irving Berlin) and Free and Easy (starring Buster Keaton). The discussion focuses on the phenomenon of the sad clown as a symbol of tragicomedy. Springing from Rick Altman’s delineation of the “sad clown” sub-subgenre of the show musical subgenre, outlined in The American Film Musical, this article shows that, in these seminal movie musicals, naïve melodrama and “gag” comedy coexist with the tonalities, structures, philosophy, and images of the sophisticated genre of tragicomedy, including by incorporating the grotesque into the mise en scène of their musical production numbers.

  8. Sacred byzantine music and its influence on old East Slavic Orthodox music

    Directory of Open Access Journals (Sweden)

    Włodzimierz Wołosiuk

    2014-11-01

    Full Text Available Sacred Byzantine music originates from three sources: “the liturgy of heaven”, synagogue music, and the old Greek theory of music, and it lies at the foundation of the liturgical chant of the East Slavs. The tonal basis of Byzantine music was formed by tetrachords, from which the so-called Diatonic mode took shape. It was the simplest and most popular sound arrangement stemming from Greek music, and the Christian Church considered it to be in accordance with its spirit and needs. From the tetrachords mentioned above other tones were created, namely the Dorian, Lydian, Phrygian and Mixolydian tones, and together with all their derivatives they gave rise to the Oktoechos tradition. Byzantine music flourished in monasteries and in town areas, and many different forms were elaborated, such as troparions, kontakions, stichera, canons, etc. Among its composers, certain names cannot be omitted: St. Anatolius (Patriarch of Constantinople), St. Andrew of Crete, St. Romanos the Melodist, St. Sophronius of Jerusalem and, above all, St. John of Damascus, who collected and systematized the liturgical chants, creating the aforementioned Oktoechos. The acceptance of the Greek form of Christianity by Rus’ led to the cultivation of sacred Greek vocal art on its territory, which manifested itself in the form of the so-called Znamenny chant. This type of chant was at first similar to the Greek model but later moved away from it. The musical notation of Old East Slavic singing was based on neumes, whose names changed somewhat in Old East Slavic and of which only a few survived. Furthermore, the liturgical books, together with their genre and musical content, were taken over from Byzantium. Byzantine patterns, especially visible in Old East Slavic monody, also pervaded the later polyphony, which shows that they remained current. Moreover, this allows the claim that Rus’ became the real successor of the Greek Orthodox traditions in new circumstances of sacral

  9. Film Music. Factfile No. 8.

    Science.gov (United States)

    Elsas, Diana, Ed.; And Others

    Organizations listed here with descriptive information include film music clubs and music guilds and associations. These are followed by a representative list of schools offering film music and/or film sound courses. Sources are listed for soundtrack recordings, sound effects/production music, films on film music, and oral history programs. The…

  10. Hearing Things: Music and Sounds the Traveller Heard and Didn’t Hear on the Grand Tour

    Directory of Open Access Journals (Sweden)

    Vanessa Agnew

    2012-11-01

    Full Text Available For Charles Burney, as for other Enlightenment scholars engaged in historicising music, the problem was not only how to reconstruct a history of something as ephemeral as music, but the more intractable one of cultural boundaries. Non-European music could be excluded from a general history on the grounds that it was so much noise and no music. The music of Egypt and classical antiquity, on the other hand, were likely ancestors of European music and clearly had to be accorded a place within the general history. But before that place could be determined, Burney and his contemporaries were faced with a stunning silence. What was Egyptian music? What were its instruments? What its sound? The paper examines the work of scholars like Burney and James Bruce and their efforts to reconstruct past music by traveling to exotic places. Travel and a form of historical reenactment emerge as central not only to eighteenth-century historical method, but central, too, to the reconstruction of past sonic worlds. This essay argues that this method remains available to contemporary scholars as well.

  11. Musical preferences and learning outcome of medical students in cadaver dissection laboratory: A Nigerian survey.

    Science.gov (United States)

    Anyanwu, G E; Nto, J N; Agu, A U; Ekezie, J; Esom, E A

    2016-11-01

    Background music has been reported to enhance learning in the cadaver dissection laboratory. This study was designed to determine the impact of various forms of musical genre and some of their characteristics on students' learning outcome in the dissection laboratory. Selected musical genres, in vocal and non-vocal forms and at different tempi and volumes, were played as background music (BM) to 253 Medical and Dental students during various sessions of cadaver dissection. Psychological stress assessment was done using the Psychological Stress Measure-9. Participants' love for music, preferred musical genre and other musical characteristics were assessed. The impact of the various musical genres and their characteristics on learning was assessed via written examination on the region dissected during each musical session. A positive relationship was noted between students' preference for musical genre during leisure and their preference for BM during private study time, as was an effect of musical genre on some selected learning factors. Country and Classical music gave the highest positive impact on the various learning factors in the CDL, followed by R&B. No significant difference was noted between the cognitive values of vocal and non-vocal music. Classical music most effectively reduced the stress induced by dissection in the CDL, while Reggae and Highlife musical genres created a more stressful environment than regular background noise. Learning outcomes thus differed with musical genre and its various characteristics. The inability to isolate the particular musical genre with these desired properties could account for the controversies in the reports of the role of music in the academic environment. Copyright © 2016 Elsevier GmbH. All rights reserved.

  12. Análise perceptivo-auditiva de parâmetros vocais em cantores da noite do estilo musical brega da cidade do Recife Perceptual vocal pattern analysis of singers from kitschy musical style in Recife

    Directory of Open Access Journals (Sweden)

    Elthon Gomes Fernandes da Silva

    2009-09-01

    Full Text Available PURPOSE: to perform an auditory-perceptual evaluation of the voices of the night-scene singers of the Brega (kitschy) musical style in the city of Recife. METHODS: the research was carried out at the clinic-school of the Speech, Language and Hearing Sciences course of the Federal University of Pernambuco and at the TV station Rede Estação, channel 14, both located in the city of Recife. This is an observational, cross-sectional and descriptive study. With the consent of 13 singers over 18 years of age, the speaking voice was recorded during sustained vowel emission and during the song "Happy Birthday to You"; for the singing voice, an excerpt of a song from each singer's repertoire was recorded. RESULTS: phonation times were reduced; pitch and loudness changed between speaking and singing voice, both going from adequate to, respectively, high and loud; resonance, which was laryngopharyngeal, became balanced with nasal compensation. The abrupt vocal attack was maintained; the mixed modal register of the habitual voice changed to head register in the professional voice; a clear vocal quality predominated in the speaking voice, and adequate patterns of modulation, projection and articulation were found in the singing voice. CONCLUSION: the night-scene singers of the Brega musical style in Recife presented reduced phonation times and showed, from speaking to singing voice, changes in pitch, loudness and resonance, with maintenance of the vocal characteristics of attack and register. A clear vocal quality predominated in the speaking voice, and adequate modulation, good projection and precise articulation were among the most frequent vocal patterns in the singing voice.

  13. Comparison of Two Music Training Approaches on Music and Speech Perception in Cochlear Implant Users.

    Science.gov (United States)

    Fuller, Christina D; Galvin, John J; Maat, Bert; Başkent, Deniz; Free, Rolien H

    2018-01-01

    In normal-hearing (NH) adults, long-term music training may benefit music and speech perception, even when listening to spectro-temporally degraded signals as experienced by cochlear implant (CI) users. In this study, we compared two different music training approaches in CI users and their effects on speech and music perception, as it remains unclear which approach to music training might be best. The approaches differed in terms of music exercises and social interaction. For the pitch/timbre group, melodic contour identification (MCI) training was performed using computer software. For the music therapy group, training involved face-to-face group exercises (rhythm perception, musical speech perception, music perception, singing, vocal emotion identification, and music improvisation). For the control group, training involved group nonmusic activities (e.g., writing, cooking, and woodworking). Training consisted of weekly 2-hr sessions over a 6-week period. Speech intelligibility in quiet and noise, vocal emotion identification, MCI, and quality of life (QoL) were measured before and after training. The different training approaches appeared to offer different benefits for music and speech perception. Training effects were observed within-domain (better MCI performance for the pitch/timbre group), with little cross-domain transfer of music training (emotion identification significantly improved for the music therapy group). While training had no significant effect on QoL, the music therapy group reported better perceptual skills across training sessions. These results suggest that more extensive and intensive training approaches that combine pitch training with the social aspects of music therapy may further benefit CI users.
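
    Melodic contour identification, the computer-based exercise used by the pitch/timbre group, typically asks listeners to label short note sequences as rising, falling, flat, and so on. The sketch below generates simple five-note contour stimuli as sine tones; the contour shapes, note spacing, and durations are illustrative assumptions and not the actual training software.

```python
# Hedged sketch of melodic contour identification (MCI) stimuli: five-note
# sine-tone sequences with different contour shapes. All parameters are
# illustrative; this is not the training software used in the study.
import numpy as np
from scipy.io import wavfile

RATE = 44100
BASE_MIDI = 69                      # A4 = 440 Hz
CONTOURS = {
    "rising":         [0, 2, 4, 6, 8],
    "falling":        [8, 6, 4, 2, 0],
    "flat":           [4, 4, 4, 4, 4],
    "rising-falling": [0, 4, 8, 4, 0],
}

def midi_to_hz(m):
    return 440.0 * 2 ** ((m - 69) / 12)

def render(contour, note_dur=0.3, semitone_step=1):
    """Render a contour as a sequence of windowed sine tones."""
    t = np.linspace(0, note_dur, int(RATE * note_dur), endpoint=False)
    notes = []
    for step in contour:
        f = midi_to_hz(BASE_MIDI + step * semitone_step)
        notes.append(0.3 * np.sin(2 * np.pi * f * t) * np.hanning(t.size))
    return np.concatenate(notes)

for name, contour in CONTOURS.items():
    audio = render(contour)
    wavfile.write(f"mci_{name}.wav", RATE, (audio * 32767).astype(np.int16))
    print("wrote", f"mci_{name}.wav")
```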

  14. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults: science teachers, parents wanting to help with homework, and home-schoolers seeking the necessary scientific background to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! Series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: one of the coolest activities is whacking a spinning metal rod...

  15. Musicians and music making as a model for the study of brain plasticity.

    Science.gov (United States)

    Schlaug, Gottfried

    2015-01-01

    Playing a musical instrument is an intense, multisensory, and motor experience that usually commences at an early age and requires the acquisition and maintenance of a range of sensory and motor skills over the course of a musician's lifetime. Thus, musicians offer an excellent human model for studying behavioral-cognitive as well as brain effects of acquiring, practicing, and maintaining these specialized skills. Research has shown that repeatedly practicing the association of motor actions with specific sound and visual patterns (musical notation), while receiving continuous multisensory feedback will strengthen connections between auditory and motor regions (e.g., arcuate fasciculus) as well as multimodal integration regions. Plasticity in this network may explain some of the sensorimotor and cognitive enhancements that have been associated with music training. Furthermore, the plasticity of this system as a result of long term and intense interventions suggest the potential for music making activities (e.g., forms of singing) as an intervention for neurological and developmental disorders to learn and relearn associations between auditory and motor functions such as vocal motor functions. © 2015 Elsevier B.V. All rights reserved.

  16. It sounds good!

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Both the atmosphere and we ourselves are hit by hundreds of particles every second and yet nobody has ever heard a sound coming from these processes. Like cosmic rays, particles interacting inside the detectors at the LHC do not make any noise…unless you've decided to use the ‘sonification’ technique, in which case you might even hear the Higgs boson sound like music. [Image: screenshot of the first page of the "LHC sound" site.] A group of particle physicists, composers, software developers and artists recently got involved in the ‘LHC sound’ project to make the particles at the LHC produce music. Yes…music! The ‘sonification’ technique converts data into sound. “In this way, if you implement the right software you can get really nice music out of the particle tracks”, says Lily Asquith, a member of the ATLAS collaboration and one of the initiators of the project. The ‘LHC...
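
    As a toy illustration of the sonification idea, and not the LHC sound project's actual code, the sketch below maps a stream of numbers standing in for particle-track parameters onto pitches and writes the result to a short WAV file.

```python
# Toy sonification sketch: map a data series onto sine-tone pitches and
# write a WAV file. This only illustrates the general idea of turning data
# into sound; it is unrelated to the LHC sound project's actual software.
import numpy as np
from scipy.io import wavfile

RATE = 44100
data = np.random.default_rng(3).random(16)       # stand-in for track parameters

# Map each value linearly onto a pitch between 220 Hz and 880 Hz.
freqs = 220.0 + data * (880.0 - 220.0)

tones = []
t = np.linspace(0, 0.25, int(RATE * 0.25), endpoint=False)   # 250 ms per value
for f in freqs:
    tone = 0.3 * np.sin(2 * np.pi * f * t)
    tone *= np.hanning(len(t))                   # fade in/out to avoid clicks
    tones.append(tone)

signal = np.concatenate(tones)
wavfile.write("sonification_demo.wav", RATE, (signal * 32767).astype(np.int16))
print("wrote", len(signal) / RATE, "seconds of audio")
```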

  17. Computer Music Synthesis and Composition

    Science.gov (United States)

    Ayers, Lydia

    What is computer music composition? Composers are using the computer for everything from MIDI instruments communicating with computer sequencers, to pitch trackers analyzing the sounds of acoustic instruments and converting them to pitch information, live performers with recorded music, performers with interactive computer programs, computer music produced by dancers using sensors, and automatic composition in which computer programs compose the music. Other approaches include composing with sounds or parts of sounds rather than notes, structuring the use of time, composing with timbres (the colors of sounds) and timbre morphing, such as a gong morphing to a voice, composing with textures and texture morphing, such as fluttertonguing morphing to pitch, granular synthesis, trills, and convolution.
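
    Of the techniques listed above, granular synthesis is particularly easy to illustrate: a source sound is chopped into short, windowed grains that are scattered and overlapped in time. The sketch below builds a granular texture from a synthetic source signal; the grain size, density, and scatter are arbitrary illustrative choices.

```python
# Minimal granular-synthesis sketch: overlap short, windowed grains drawn
# from random positions in a source signal. Parameters are illustrative.
import numpy as np
from scipy.io import wavfile

RATE = 44100
rng = np.random.default_rng(4)

# Synthetic source: a two-second gong-like decaying chord.
t = np.linspace(0, 2.0, RATE * 2, endpoint=False)
source = sum(np.sin(2 * np.pi * f * t) for f in (196.0, 247.0, 311.0)) * np.exp(-1.5 * t)

GRAIN = int(0.05 * RATE)          # 50 ms grains
window = np.hanning(GRAIN)
out = np.zeros(RATE * 4)          # 4 s output texture

for _ in range(2000):             # grain density: 2000 grains over 4 s
    src_pos = rng.integers(0, len(source) - GRAIN)
    dst_pos = rng.integers(0, len(out) - GRAIN)
    out[dst_pos:dst_pos + GRAIN] += window * source[src_pos:src_pos + GRAIN]

out /= np.max(np.abs(out))        # normalize before writing
wavfile.write("granular_demo.wav", RATE, (0.8 * out * 32767).astype(np.int16))
```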

  18. The prenatal roots of music

    Directory of Open Access Journals (Sweden)

    David Ernest Teie

    2016-08-01

    Full Text Available Although the idea that pulse in music may be related to human pulse is ancient and has recently been promoted by researchers (Parncutt, 2006; Snowdon & Teie, 2010), there has been no ordered delineation of the characteristics of music that are based on the sounds of the womb. I describe features of music that are based on sounds that are present in the womb: tempo of pulse (pulse is understood as the regular, underlying beat that defines the meter), amplitude contour of pulse, meter, musical notes, melodic frequency range, continuity, syllabic contour, melodic rhythm, melodic accents, phrase length, and phrase contour. There are a number of features of prenatal development that allow for the formation of long-term memories of the sounds of the womb in the areas of the brain that are responsible for emotions. Taken together, these features and the similarities between the sounds of the womb and the elemental building blocks of music allow for a postulation that the fetal acoustic environment may provide the bases for the fundamental musical elements that are found in the music of all cultures. This hypothesis is supported by a one-to-one matching of the universal features of music with the sounds of the womb: (1) all of the regularly heard sounds that are present in the fetal environment are represented in the music of every culture, and (2) all of the features of music that are present in the music of all cultures can be traced to the fetal environment.

  19. The remarkable vocal anatomy of the koala (Phascolarctos cinereus): insights into low-frequency sound production in a marsupial species.

    Science.gov (United States)

    Frey, Roland; Reby, David; Fritsch, Guido; Charlton, Benjamin D

    2018-04-01

    Koalas are characterised by a highly unusual vocal anatomy, with a descended larynx and velar vocal folds, allowing them to produce calls at disproportionately low frequencies. Here we use advanced imaging techniques, histological data, classical macroscopic dissection and behavioural observations to provide the first detailed description and interpretation of male and female koala vocal anatomy. We show that both males and females have an elongated pharynx and soft palate, resulting in a permanently descended larynx. In addition, the hyoid apparatus has a human-like configuration in which paired dorsal, resilient ligaments suspend the hyoid apparatus from the skull, while the ventral parts tightly connect to the descended larynx. We also show that koalas can retract the larynx down into the thoracic inlet, facilitated by a dramatic evolutionary transformation of the ventral neck muscles. First, the usual retractors of the larynx and the hyoid have their origins deep in the thorax. Secondly, three hyoid muscles have lost their connection to the hyoid skeleton. Thirdly, the genioglossus and geniohyoid muscles are greatly increased in length. Finally, the digastric, omohyoid and sternohyoid muscles, connected by a common tendinous intersection, form a guiding channel for the dynamic down-and-up movements of the ventral hyoid parts and the larynx. We suggest that these features evolved to accommodate the low resting position of the larynx and assist in its retraction during call production. We also confirm that the edges of the intra-pharyngeal ostium have specialised to form the novel, extra-laryngeal velar vocal folds, which are much larger than the true intra-laryngeal vocal folds in both sexes, but more developed and specialised for low frequency sound production in males than in females. Our findings illustrate that strong selection pressures on acoustic signalling not only lead to the specialisation of existing vocal organs but can also result in the evolution

  20. Off the beaten track: Freud, sound and music. Statement of a problem and some historico-critical notes.

    Science.gov (United States)

    Barale, Francesco; Minazzi, Vera

    2008-10-01

    The authors note that the element of sound and music has no place in the model of mental functioning bequeathed to us by Freud, which is dominated by the visual and the representational. They consider the reasons for this exclusion and its consequences, and ask whether the simple biographical explanation offered by Freud himself is acceptable. This contribution reconstructs the historical and cultural background to that exclusion, cites some relevant emblematic passages, and discusses Freud's position on music and on the aesthetic experience in general. Particular attention is devoted to the relationship between Freud and Lipps, which is important both for the originality of Lipps's thinking in the turn-of-the-century debate and for his ideas on the musical aspects of the foundations of psychic life, at which Freud 'stopped', as he himself wrote. Moreover, the shade of Lipps accompanied Freud throughout his scientific career from 1898 to 1938. Like all foundations, that of psychoanalysis was shaped by a system of inclusions and exclusions. The exclusion of the element of sound and music is understandable in view of the cultural background to the development of the concepts of the representational unconscious and infantile sexuality. While the consequences have been far reaching, the knowledge accumulated since that exclusion enables us to resume, albeit on a different basis, the composition of the 'unfinished symphony' of the relationship between psychoanalysis and music.

  1. Precursors of Dancing and Singing to Music in Three- to Four-Months-Old Infants

    Science.gov (United States)

    Fujii, Shinya; Watanabe, Hama; Oohashi, Hiroki; Hirashima, Masaya; Nozaki, Daichi; Taga, Gentaro

    2014-01-01

    Dancing and singing to music involve auditory-motor coordination and have been essential to our human culture since ancient times. Although scholars have been trying to understand the evolutionary and developmental origin of music, early human developmental manifestations of auditory-motor interactions in music have not been fully investigated. Here we report limb movements and vocalizations in three- to four-months-old infants while they listened to music and were in silence. In the group analysis, we found no significant increase in the amount of movement or in the relative power spectrum density around the musical tempo in the music condition compared to the silent condition. Intriguingly, however, there were two infants who demonstrated striking increases in the rhythmic movements via kicking or arm-waving around the musical tempo during listening to music. Monte-Carlo statistics with phase-randomized surrogate data revealed that the limb movements of these individuals were significantly synchronized to the musical beat. Moreover, we found a clear increase in the formant variability of vocalizations in the group during music perception. These results suggest that infants at this age are already primed with their bodies to interact with music via limb movements and vocalizations. PMID:24837135
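
    The Monte-Carlo test mentioned here can be sketched generically: a synchronization statistic computed from the real movement signal is compared against the same statistic computed from many phase-randomized surrogates, which preserve the power spectrum but destroy any phase locking to the beat. The code below is a schematic version of that logic using a synthetic movement trace; it is not the authors' analysis pipeline.

```python
# Schematic phase-randomized surrogate test for beat synchronization.
# The "movement" trace is synthetic; the statistic is the magnitude of the
# signal's projection onto a beat-frequency oscillation.
import numpy as np

rng = np.random.default_rng(5)
FS, BEAT_HZ, DUR = 50.0, 2.0, 60.0                  # 50 Hz motion capture, 120 bpm, 60 s
t = np.arange(0, DUR, 1.0 / FS)

# Synthetic limb-movement signal: weak beat-locked component plus noise.
movement = 0.2 * np.sin(2 * np.pi * BEAT_HZ * t) + rng.normal(size=t.size)

def beat_locking(x):
    """Magnitude of the signal's projection onto the beat frequency."""
    return np.abs(np.mean(x * np.exp(-2j * np.pi * BEAT_HZ * t)))

def phase_randomize(x):
    """Surrogate with the same amplitude spectrum but randomized phases."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, size=spec.size)
    phases[0] = 0.0                                  # keep DC component real
    phases[-1] = 0.0                                 # keep Nyquist bin real (even length)
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=x.size)

observed = beat_locking(movement)
surrogates = np.array([beat_locking(phase_randomize(movement)) for _ in range(1000)])
p_value = (np.sum(surrogates >= observed) + 1) / (len(surrogates) + 1)
print(f"observed locking {observed:.3f}, Monte-Carlo p = {p_value:.3f}")
```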

  2. Spectral analysis of musical sounds with emphasis on the piano

    CERN Document Server

    Koenig, David M

    2014-01-01

    There are three parts to this book which addresses the analysis of musical sounds from the viewpoint of someone at the intersection between physicists, engineers, piano technicians, and musicians. The reader is introduced to a variety of waves and a variety of ways of presenting, visualizing, and analyzing them in the first part. A tutorial on the tools used throughout the book accompanies this introduction. The mathematics behind the tools is left to the appendices. Part 2 is a graphical survey of the classical areas of acoustics that pertain to musical instruments: vibrating strings, bars, membranes, and plates. Part 3 is devoted almost exclusively to the piano. Several two- and three-dimensional graphical tools are introduced to study the following characteristics of pianos: individual notes and interactions among them, the missing fundamental, inharmonicity, tuning visualization, the different distribution of harmonic power for the various zones of the piano keyboard, and potential uses for quality contro...

  3. The Musical Emotional Bursts: A validated set of musical affect bursts to investigate auditory affective processing.

    Directory of Open Access Journals (Sweden)

    Sébastien ePaquette

    2013-08-01

    Full Text Available The Musical Emotional Bursts (MEB) consist of 80 brief musical executions expressing basic emotional states (happiness, sadness and fear) and neutrality. These musical bursts were designed to be the musical analogue of the Montreal Affective Voices (MAV), a set of brief non-verbal affective vocalizations portraying different basic emotions. The MEB consist of short (mean duration: 1.6 s) improvisations on a given emotion or of imitations of a given MAV stimulus, played on a violin (n = 40) or a clarinet (n = 40). The MEB arguably represent a primitive form of musical emotional expression, just as the MAV represent a primitive form of vocal, nonlinguistic emotional expression. To create the MEB, stimuli were recorded from 10 violinists and 10 clarinetists, and then evaluated by 60 participants. Participants evaluated 240 stimuli (30 stimuli x 4 [3 emotions + neutral] x 2 instruments) by performing either a forced-choice emotion categorization task, a valence rating task or an arousal rating task (20 subjects per task); 40 MAVs were also used in the same session with similar task instructions. Recognition accuracy of emotional categories expressed by the MEB (n = 80) was lower than for the MAVs but still very high, with an average percent correct recognition score of 80.4%. The highest recognition accuracies were obtained for happy clarinet (92.0%) and fearful or sad violin (88.0% each) MEB stimuli. The MEB can be used to compare the cerebral processing of emotional expressions in music and vocal communication, or for testing affective perception in patients with communication problems.

  4. The Importance of Music in Early Childhood.

    Science.gov (United States)

    Levinowitz, Lili M.

    1998-01-01

    Surveys some of the research in music education that validates the inclusion of music for its own sake in models for early childhood learning. Focuses on topics that include, but are not limited to, child and vocal development, the importance of movement for children, and adult involvement in music education. (CMK)

  5. Vocal Sight-Reading Assessment: Technological Advances, Student Perceptions, and Instructional Implications

    Science.gov (United States)

    Henry, Michele

    2015-01-01

    This study investigated choral singers' comfort level using computer technology for vocal sight-reading assessment. High school choral singers (N = 138) attending a summer music camp completed a computer-based sight-reading assessment and accompanying pre- and posttest surveys on their musical backgrounds and perceptions about technology. A large…

  6. Principles of structure building in music, language and animal song.

    Science.gov (United States)

    Rohrmeier, Martin; Zuidema, Willem; Wiggins, Geraint A; Scharff, Constance

    2015-03-19

    Human language, music and a variety of animal vocalizations constitute ways of sonic communication that exhibit remarkable structural complexity. While the complexities of language and possible parallels in animal communication have been discussed intensively, reflections on the complexity of music and animal song, and their comparisons, are underrepresented. In some ways, music and animal songs are more comparable to each other than to language as propositional semantics cannot be used as indicator of communicative success or wellformedness, and notions of grammaticality are less easily defined. This review brings together accounts of the principles of structure building in music and animal song. It relates them to corresponding models in formal language theory, the extended Chomsky hierarchy (CH), and their probabilistic counterparts. We further discuss common misunderstandings and shortcomings concerning the CH and suggest ways to move beyond. We discuss language, music and animal song in the context of their function and motivation and further integrate problems and issues that are less commonly addressed in the context of language, including continuous event spaces, features of sound and timbre, representation of temporality and interactions of multiple parallel feature streams. We discuss these aspects in the light of recent theoretical, cognitive, neuroscientific and modelling research in the domains of music, language and animal song. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  7. Audacity in Vocal Improvisation: Motivating Elementary School Students through Technology

    Science.gov (United States)

    Sichivitsa, Veronica

    2007-01-01

    Every day, music teachers face the challenge of motivating less-confident student singers in general music classes. Teaching vocal improvisation can be a difficult task, because students are often self-conscious about their voices and too intimidated to sing in front of their peers. Technology can be an excellent motivational tool in the classroom…

  8. Phase-Specific Vocalizations of Male Mice at the Initial Encounter during the Courtship Sequence.

    Directory of Open Access Journals (Sweden)

    Yui K Matsumoto

    Full Text Available Mice produce ultrasonic vocalizations featuring a variety of syllables. Vocalizations are observed during social interactions. In particular, males produce numerous syllables during courtship. Previous studies have shown that vocalizations change according to sexual behavior, suggesting that males vary their vocalizations depending on the phase of the courtship sequence. To examine this process, we recorded large sets of mouse vocalizations during male-female interactions and acoustically categorized these sounds into 12 vocal types. We found that males emitted predominantly short syllables during the first minute of interaction, more long syllables in the later phases, and mainly harmonic sounds during mounting. These context- and time-dependent changes in vocalization indicate that vocal communication during courtship in mice consists of at least three stages and imply that each vocalization type has a specific role in a phase of the courtship sequence. Our findings suggest that recording for a sufficiently long time and taking the phase of courtship into consideration could provide more insights into the role of vocalization in mouse courtship behavior in future study.

  9. An Anthropologist of Sound

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2015-01-01

    PROFESSOR PORTRAIT: Sanne Krogh Groth met Holger Schulze, newly appointed professor in Musicology at the Department for Arts and Cultural Studies, University of Copenhagen, to a talk about anthropology of sound, sound studies, musical canons and ideology.

  10. Avian vocal mimicry: a unified conceptual framework.

    Science.gov (United States)

    Dalziell, Anastasia H; Welbergen, Justin A; Igic, Branislav; Magrath, Robert D

    2015-05-01

    Mimicry is a classical example of adaptive signal design. Here, we review the current state of research into vocal mimicry in birds. Avian vocal mimicry is a conspicuous and often spectacular form of animal communication, occurring in many distantly related species. However, the proximate and ultimate causes of vocal mimicry are poorly understood. In the first part of this review, we argue that progress has been impeded by conceptual confusion over what constitutes vocal mimicry. We propose a modified version of Vane-Wright's (1980) widely used definition of mimicry. According to our definition, a vocalisation is mimetic if the behaviour of the receiver changes after perceiving the acoustic resemblance between the mimic and the model, and the behavioural change confers a selective advantage on the mimic. Mimicry is therefore specifically a functional concept where the resemblance between heterospecific sounds is a target of selection. It is distinct from other forms of vocal resemblance including those that are the result of chance or common ancestry, and those that have emerged as a by-product of other processes such as ecological convergence and selection for large song-type repertoires. Thus, our definition provides a general and functionally coherent framework for determining what constitutes vocal mimicry, and takes account of the diversity of vocalisations that incorporate heterospecific sounds. In the second part we assess and revise hypotheses for the evolution of avian vocal mimicry in the light of our new definition. Most of the current evidence is anecdotal, but the diverse contexts and acoustic structures of putative vocal mimicry suggest that mimicry has multiple functions across and within species. There is strong experimental evidence that vocal mimicry can be deceptive, and can facilitate parasitic interactions. There is also increasing support for the use of vocal mimicry in predator defence, although the mechanisms are unclear. Less progress has

  11. Bupropion XL-induced motor and vocal tics.

    Science.gov (United States)

    Kayhan, Fatih; Uguz, Faruk; Kayhan, Ayşegül; Toktaş, Fikriye Ilay

    2014-01-01

    Tics are stereotypical, repetitive, involuntary movements (motor tics) or sounds (vocal tics). Although the emergence of tics has been reported in a few cases with the use of selective serotonin reuptake inhibitors, there has been no reported case with bupropion extended-release (bupropion XL). The current case report presents a male patient who developed motor and vocal tics with the use of bupropion XL.

  12. Cortical representations of communication sounds.

    Science.gov (United States)

    Heiser, Marc A; Cheung, Steven W

    2008-10-01

    This review summarizes recent research into cortical processing of vocalizations in animals and humans. There has been a resurgent interest in this topic accompanied by an increased number of studies using animal models with complex vocalizations and new methods in human brain imaging. Recent results from such studies are discussed. Experiments have begun to reveal the bilateral cortical fields involved in communication sound processing and the transformations of neural representations that occur among those fields. Advances have also been made in understanding the neuronal basis of interaction between developmental exposures and behavioral experiences with vocalization perception. Exposure to sounds during the developmental period produces large effects on brain responses, as do a variety of specific trained tasks in adults. Studies have also uncovered a neural link between the motor production of vocalizations and the representation of vocalizations in cortex. Parallel experiments in humans and animals are answering important questions about vocalization processing in the central nervous system. This dual approach promises to reveal microscopic, mesoscopic, and macroscopic principles of large-scale dynamic interactions between brain regions that underlie the complex phenomenon of vocalization perception. Such advances will yield a greater understanding of the causes, consequences, and treatment of disorders related to speech processing.

  13. Sound: a very short introduction

    CERN Document Server

    Goldsmith, Mike

    2015-01-01

    Sound is integral to how we experience the world, in the form of noise as well as music. But what is sound? What is the physical basis of pitch and harmony? And how are sound waves exploited in musical instruments? Sound: A Very Short Introduction looks at the science of sound and the behaviour of sound waves with their different frequencies. It also explores sound in different contexts, covering the audible and inaudible, sound underground and underwater, acoustic and electronic sound, and hearing in humans and animals. It concludes with the problem of sound out of place—noise and its reduction.

  14. THE BODY IN MUSIC, THE MUSIC IN BODY: A COMMUNITY INTEGRATION PROJECT

    Directory of Open Access Journals (Sweden)

    Fábio Pra da Silva de Souza

    2009-07-01

    Full Text Available This article is the result of the Artistic and Culture Department extension project "The Body in Music, The Music in Body: a community integration project", which took music to the community at Universidade Federal de Santa Catarina (UFSC) in two different ways: through a choir singing group and through music therapy activities with a Parkinson's disease patients group. The objectives of this project were to integrate choir singing with dance, develop music perception and body expression, promote choir singing, and improve facial expression and body movements. To reach these objectives, body expression workshops, music and vocal technique rehearsals, and rhythm exercises were carried out, along with auditions at UFSC and other public places in Florianópolis.

  15. Learning while Babbling: Prelinguistic Object-Directed Vocalizations Indicate a Readiness to Learn

    Science.gov (United States)

    Goldstein, Michael H.; Schwade, Jennifer; Briesch, Jacquelyn; Syal, Supriya

    2010-01-01

    Two studies illustrate the functional significance of a new category of prelinguistic vocalizing--object-directed vocalizations (ODVs)--and show that these sounds are connected to learning about words and objects. Experiment 1 tested 12-month-old infants' perceptual learning of objects that elicited ODVs. Fourteen infants' vocalizations were…

  16. Intuitive Music

    DEFF Research Database (Denmark)

    Bergstrøm-Nielsen, Carl

    2009-01-01

    Handbook for people who wish to play or teach freely improvised music and improvisation pieces. With sections on how to start with different types of groups, training of musical awareness, parameters of the musical sound, the history of improvised music and some improvisational pieces....

  17. Alternative measures to observe and record vocal fold vibrations

    NARCIS (Netherlands)

    Schutte, HK; McCafferty, G; Coman, W; Carroll, R

    1996-01-01

    Vocal fold vibration patterns form the basis for the production of vocal sound. Over the years, much effort has been spent on optimizing ways to visualize and describe these patterns. Before video became available, describing the patterns was very time-consuming.

  18. Benign Lesions of The Vocal Fold

    Directory of Open Access Journals (Sweden)

    Ozgur Surmelioglu

    2013-02-01

    Full Text Available Benign lesions of the vocal folds are common disorders. Fifty percent of patients with voice complaints are found to have these lesions after endoscopic and stroboscopic examination. Benign vocal fold diseases are primarily caused by vibratory trauma, but they may also occur as a result of viral infections and congenital causes. These lesions often present with complaints of dysphonia. [Archives Medical Review Journal 2013; 22(1): 86-95]

  19. Music Conductor Gesture Recognized Interactive Music Generation System

    OpenAIRE

    CHEN, Shuai; MAEDA, Yoichiro; TAKAHASHI, Yasutake

    2012-01-01

    In research on interactive music generation, we propose a music generation method in which the computer generates music automatically and then arranges it according to the human music conductor's gestures before output. In this research, the generated music is derived from chaotic sound, which is generated in real time by a network of chaotic elements. The music conductor's hand motions are detected by a Microsoft Kinect in this system. Music theories are embedded ...

  20. Name that tune: decoding music from the listening brain.

    NARCIS (Netherlands)

    Schaefer, R.S.; Farquhar, J.D.R.; Blokland, Y.M.; Sadakata, M.; Desain, P.W.M.

    2011-01-01

    In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven

  2. SOME METHODIC ASPECTS OF VOCAL RESPIRATION WITHIN ACADEMIC SINGING TEACHING

    Directory of Open Access Journals (Sweden)

    AGA LUDMILA

    2015-12-01

    Full Text Available This article presents the author's reflections on the methodical problems of vocal respiration, treated by Ludmila Aga as one of the essential elements of vocal technique. Based on her own rich experience as an opera soloist and vocal teacher, the author reviews some theoretical principles concerning this problem. In addition, L. Aga proposes some helpful exercises for developing vocal respiration abilities. The article combines data from physiology, history, and the theory of the performing arts with methods of singing. Having an applied character, this work may be helpful for singing teachers at colleges and higher institutions with a music profile, as well as for students of the Academic Singing Department.

  3. Morphometric Study of Vocal Folds in Indian Cadavers

    Directory of Open Access Journals (Sweden)

    Rawal J.D.

    2015-06-01

    Full Text Available Introduction: The larynx is an air passage and a sphincteric device used in respiration and phonation. From inside outwards, the larynx has a framework of mucosa surrounded by a fibro-elastic membrane, which in turn is surrounded by cartilages and then a layer of muscles. The vocal folds are intrinsic ligaments of the larynx covered by mucosal folds. The larynx generates sound through rhythmic opening and closing of the vocal folds, and the perceived pitch of the human voice depends mainly on the fundamental frequency of the sound generated by the larynx. Aim: To measure various dimensions of the vocal folds in Indian cadavers. Material & Methods: Fifty larynges were obtained from embalmed cadavers, of which 10 were female. The vocal cords were dissected from the larynges and morphometric analysis was done. Results and Conclusions: The average total length of the vocal folds was 16.11 ± 2.62 mm in male and 14.10 ± 1.54 mm in female cadavers. The average width of the vocal folds was 4.38 ± 0.74 mm in male and 3.60 ± 0.64 mm in female cadavers. The average length of the membranous part of the vocal folds was 11.90 ± 1.86 mm in male and 10.45 ± 1.81 mm in female cadavers. The average ratio of the lengths of the membranous and cartilaginous parts of the vocal folds was 3.10 ± 0.96 in male and 2.85 ± 0.73 in female cadavers.
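
    As a quick editorial sanity check on these figures (not part of the original study), the cartilaginous length can be derived by subtracting the membranous length from the total length, and the ratio recomputed from the group means; note that this ratio of means need not equal the reported mean of per-specimen ratios.

        # Illustrative arithmetic only; the per-specimen data are not available here.
        male = {"total_mm": 16.11, "membranous_mm": 11.90, "reported_ratio": 3.10}
        female = {"total_mm": 14.10, "membranous_mm": 10.45, "reported_ratio": 2.85}

        for sex, d in (("male", male), ("female", female)):
            cartilaginous = d["total_mm"] - d["membranous_mm"]
            ratio_of_means = d["membranous_mm"] / cartilaginous
            print(f"{sex}: cartilaginous ~ {cartilaginous:.2f} mm, "
                  f"membranous/cartilaginous (ratio of means) ~ {ratio_of_means:.2f}, "
                  f"reported mean ratio = {d['reported_ratio']:.2f}")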

  4. Expressiveness in musical performance: Pedagogic aspect

    Directory of Open Access Journals (Sweden)

    Jović Natalija R.

    2016-01-01

    Full Text Available The subject of our research relates to pedagogic aspects of expressive vocal-instrumental musical performance. We intended to examine: (1) how undergraduate students conceptualize and evaluate expressiveness in musical performance; (2) whether and how they were trained in the skill of expressive musical performance during their musical training; (3) whether and in which way they rehearse the expressive component of musical performance and interpretation; and (4) whether there are any differences regarding gender, age, instrument, department, year of study, and years of instrument playing in relation to the group of dependent variables related to expressiveness, tuition, and practice. The sample included 82 students of the instrumental and theory departments at the Faculty of Music in Belgrade. Psychological and pedagogical aspects of musical expressiveness during vocal-instrumental performance were analyzed. The results show that students value expressiveness highly but give it a secondary place compared to mastering technical and tonal requirements. Statistically significant differences were found regarding gender, age, and department. It can be concluded that there is potential for developing and enhancing students' expressiveness if we abandon the traditional view that expressiveness is linked exclusively to talent. The findings indicate that pedagogical work should be directed towards finding purposeful strategies for training individual expressiveness.

  5. Singers' and Nonsingers' Perception of Vocal Vibrato.

    Science.gov (United States)

    Reddy, A Anita; Subramanian, Uma

    2015-09-01

    Vibrato, a small but important component of the singing voice, is known to enrich overall singing voice quality; in the perception of overall performance, however, it is often neglected. Singing performance is typically appreciated by a mixed audience of people who love music but do not necessarily sing, and of other singers who may or may not be teachers of singing. The objectives of the present study were to investigate singers' and nonsingers' perception of vocal vibrato and its effect on ratings of a singer's overall performance. Prerecorded audio samples of the chorus of a hymn (How Great Thou Art), as sung by 10 singers (both men and women), were played through a speaker to two groups of judges, consisting of three experienced singers and three experienced nonsingers. The singer judges (SJs) were vocal instructors in Western classical, music theatre, pop, and contemporary styles. Seven vibrato-related parameters (presence, rate, extent, conspicuousness, quality, periodicity, and type) were evaluated by auditory perception on a rating scale developed specifically for the study, and one parameter evaluated the singer's overall performance. Cohen's kappa was used for inter-rater reliability within groups. Nonsinger judges (NSJs) showed varied ratings within their group, as did SJs, although SJs had higher agreement than NSJs. Chi-square analysis was used across groups; the two groups were distinct from each other in their perception of vibrato. Ratings of the singer's overall performance were not affected for NSJs but were for SJs. It could not be concluded that ratings of a singer's overall performance were affected by vibrato, since vibrato is often overridden by the singer's voice; a rare occasion can nevertheless arise where a vibrato does not sound pleasant and affects the listener's perception of the singer's performance. Often, feedback from listeners would help monitor

  6. Natural variations of vocal effort and comfort in simulated acoustic environments

    DEFF Research Database (Denmark)

    Pelegrin Garcia, David; Brunskog, Jonas

    2010-01-01

    Many teachers suffer from voice problems related to the use of their voices in the working environment. The noise generated by students and by external sound sources (like traffic noise or neighboring classrooms) is a major problem, as it leads to an increased vocal effort. In the absence of high levels of background noise, the room also has an effect on the talker's voice. In order to quantify the relative importance of the acoustic environment for the vocal demands on teachers, a laboratory investigation was carried out. Thirteen teachers had to read a text aloud under ten different room acoustic conditions, artificially generated by electroacoustic means. The vocal intensity decreased with the objective parameter support, which quantifies the amount of sound reflections provided by the room at the talker's ears, relative to the direct sound, at a rate of -0.21 dB/dB. The reading pace ...

  7. Do high sound pressure levels of crowing in roosters necessitate passive mechanisms for protection against self-vocalization?

    Science.gov (United States)

    Claes, Raf; Muyshondt, Pieter G G; Dirckx, Joris J J; Aerts, Peter

    2018-02-01

    High sound pressure levels (>120 dB) cause damage or death of the hair cells of the inner ear, hence causing hearing loss. Vocalization differences are present between hens and roosters. Crowing in roosters is reported to produce sound pressure levels of 100 dB measured at a distance of 1 m. In this study we measured the sound pressure levels that exist at the entrance of the outer ear canal. We hypothesize that roosters may benefit from a passive protective mechanism, while hens do not require such a mechanism. Audio recordings at the entrance of the outer ear canal of crowing roosters, made in this study, indeed show that a protective mechanism is needed, as sound pressure levels can reach amplitudes of 142.3 dB. Audio recordings made at varying distances from the crowing rooster show that at a distance of 0.5 m sound pressure levels already drop to 102 dB. Micro-CT scans of a rooster and a chicken head show that in roosters the auditory canal closes when the beak is opened. In hens the diameter of the auditory canal only narrows but does not close completely. A morphological difference between the sexes in the shape of a bursa-like slit in the outer ear canal causes the outer ear canal to close in roosters but not in hens. Copyright © 2017 Elsevier GmbH. All rights reserved.
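
    For orientation only (this is an editorial illustration, not part of the study), the level measured at some distance can be compared against the simplest free-field model, spherical spreading, in which the level falls by 20·log10(d2/d1) dB when the distance increases from d1 to d2; the crowing source sits essentially at the rooster's own ear, so near-field effects and the head itself make the real drop from 142.3 dB much larger than this idealized geometry alone would predict.

        import math

        def spl_at_distance(spl_ref_db, d_ref_m, d_m):
            """Free-field spherical-spreading estimate (idealized; ignores near-field
            effects, source directivity, and the bird's head)."""
            return spl_ref_db - 20.0 * math.log10(d_m / d_ref_m)

        # Using the 102 dB reported at 0.5 m as the reference point:
        for d in (0.5, 1.0, 2.0, 5.0):
            print(f"{d:>4} m: ~{spl_at_distance(102.0, 0.5, d):.1f} dB SPL")
        # ~102, ~96, ~90, ~82 dB -- of the same order as the ~100 dB at 1 m cited
        # from earlier literature, allowing for measurement variability.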

  8. MUSIC RADIO-JOURNALISM

    Directory of Open Access Journals (Sweden)

    Dubovtceva Ludmila I.

    2014-04-01

    Full Text Available Based on years of practical experience, the author highlights the main radio genres in which the music correspondent, music reviewer, music commentator, music presenter, and disc jockey work. The theoretical principles of their creative activities are analyzed in common journalistic genres, such as the interview, reportage, talk show, live broadcast, and radio film, as well as in specialized genres like the concert on demand and the music competition. The journalist's speech is seen as a logical element that enters into artistic-structural relationships with the music; it does not, however, become the predominant sound layer, aiming instead at harmonious correlation with, or local penetration into, the musical work. In addition, the auxiliary "off-screen" editorial work and the keeping of the original sound archive are identified as important links in music journalism. The author cites a number of examples from her own work on the air.

  9. Visual classification of feral cat Felis silvestris catus vocalizations.

    Science.gov (United States)

    Owens, Jessica L; Olsen, Mariana; Fontaine, Amy; Kloth, Christopher; Kershenbaum, Arik; Waller, Sara

    2017-06-01

    Cat vocal behavior, in particular the vocal and social behavior of feral cats, is poorly understood, as are the differences between feral and fully domestic cats. The relationship between feral cat social and vocal behavior is important because of the markedly different ecology of feral and domestic cats, and enhanced comprehension of the repertoire and potential information content of feral cat calls can provide both a better understanding of the domestication and socialization process and improved welfare for feral cats undergoing adoption. Previous studies have used conflicting classification schemes for cat vocalizations, often relying on onomatopoeic or popular descriptions of call types (e.g., "miow"). We studied the vocalizations of 13 unaltered domestic cats that complied with the behavioral definition we used to distinguish feral cats from domestic ones. A total of 71 acoustic units were extracted and visually analyzed for the construction of a hierarchical classification of vocal sounds based on acoustic properties. We identified 3 major categories (tonal, pulse, and broadband) that further break down into 8 subcategories, and show a high degree of reliability when sounds are classified blindly by independent observers (Fleiss' kappa K = 0.863). Due to the limited behavioral contexts in this study, additional subcategories of cat vocalizations may be identified in the future, but our hierarchical classification system allows for the addition of new categories and new subcategories as they are described. This study shows that cat vocalizations are diverse and complex, and provides an objective and reliable classification system that can be used in future studies.
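
    As a rough illustration of the kind of agreement statistic quoted above (not the authors' code or data), Fleiss' kappa for several observers assigning sounds to categories can be computed from a sounds-by-categories count table, for example with statsmodels; the toy ratings below are invented.

        import numpy as np
        from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

        # Invented example: 6 sounds rated by 3 observers,
        # categories coded 0 = tonal, 1 = pulse, 2 = broadband.
        ratings = np.array([
            [0, 0, 0],
            [1, 1, 1],
            [2, 2, 1],
            [0, 0, 1],
            [2, 2, 2],
            [1, 1, 1],
        ])

        table, _ = aggregate_raters(ratings)   # sounds x categories count matrix
        print(f"Fleiss' kappa = {fleiss_kappa(table):.3f}")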

  10. Music and the Three Appeals of Classical Rhetoric

    Science.gov (United States)

    LeCoat, Gerard G.

    1976-01-01

    Contends that rhetorical theory of the sixteenth through the eighteenth centuries influenced the theory of the composition of music and offers examples of vocal music which was adapted to the rhetorical appeals of logos, ethos, and pathos. (MH)

  11. Voice Savers for Music Teachers

    Science.gov (United States)

    Cookman, Starr

    2012-01-01

    Music teachers are in a class all their own when it comes to voice use. These elite vocal athletes require stamina, strength, and flexibility from their voices day in, day out for hours at a time. Voice rehabilitation clinics and research show that music education ranks high among the professionals most commonly affected by voice problems.…

  12. Hudba v reklamních spotech

    OpenAIRE

    ŘEŘÁBEK, Lukáš

    2016-01-01

    The topic of this thesis is music in advertising spots. The theoretical part describes the history of advertising, its theory, strategies, rules, and the language used. This is followed by the purpose of TV commercials and the use of instrumental or vocal music as background. The practical part describes the implementation of music activities into specific lessons for small children at primary school, focusing on vocal, rhythmical, and intonation training, improvisation, and learning new songs through six T...

  13. Jukebox-Musical: The State and Prospects

    Directory of Open Access Journals (Sweden)

    Olga-Lisa Monde

    2012-08-01

    Full Text Available This article analyzes the concept of the 'jukebox musical', offers a classification of this kind of musical theatre production, and discusses the features characteristic of the period in which these shows were created. Over the last five decades a whole separate area has formed in musical theatre – the jukebox musical – whose species include the musical essay, the musical concert, the musical drama, and the musical anthology. These productions are important for the history of music: they not only perpetuate the memory of famous composers, singers, musicians, librettists, and lyricists, but also carefully preserve musical and vocal styles in relation to a particular historical period.

  14. Proximal mechanisms for sound production in male Pacific walruses

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Reichmuth, Colleen

    2012-01-01

    The songs of male walruses during the breeding season have been noted to have some of the most unusual characteristics observed among mammalian sounds. In contrast to the more guttural vocalizations of most other carnivores, their acoustic displays have impulsive and metallic features more similar to those found in industrial workplaces than in nature. The patterned knocks and bells that comprise male songs are not thought to be true vocalizations, but rather sounds produced with structures other than the vocal tract and larynx. To determine how male walruses produce and emit these sounds, we examined the anatomical origins of knocking and bell sounds and gained a mechanistic understanding of how these sounds are generated within the body and transmitted to the environment. These pathways are illustrated with acoustic and video data and considered with respect to the unique biology of this species.

  15. The Nigerian Art Music Composer, His Training, Vocal Compositions ...

    African Journals Online (AJOL)

    The music arena in Nigeria has undergone changes within the past decades, partly due to Nigerians' contact with the music of other world cultures and due to intercultural borrowings within Nigeria. This trend has been a masterminding force in the shaping of the musical arena in Nigeria, with the art music composer ...

  16. Music preference in degus (Octodon degus): Analysis with Chilean folk music.

    Directory of Open Access Journals (Sweden)

    Shigeru Watanabe

    2018-05-01

    Full Text Available Most nonhuman animals do not show selective preference for types of music, but researchers have typically employed only Western classical music in such studies. Thus, there has been bias in music choice. Degus (Octodon degus), originally from the mountain areas of Chile, have highly developed vocal communication. Here, we examined music preference of degus using not only Western classical music (music composed by Bach and Stravinsky), but also South American folk music (Chilean and Peruvian). The degus preferred the South American music to the Western classical music but did not show selective preference between the two Western classical music choices. Furthermore, the degus preferred the Chilean to the Peruvian music to some extent. In the second experiment, we examined preference for music vs. silence. Degus overall showed a preference for Chilean music over silence, but preferred silence over Western music. The present results indicate that the previous negative data for musical preference in nonhuman animals may be due to biased music selection (Krause, 2012). Our results suggest the possibility that the soundscape of an environment influences folk music created by native peoples living there and the auditory preference of other resident animals there.

  17. Enhanced Processing of Vocal Melodies in Childhood

    Science.gov (United States)

    Weiss, Michael W.; Schellenberg, E. Glenn; Trehub, Sandra E.; Dawber, Emily J.

    2015-01-01

    Music cognition is typically studied with instrumental stimuli. Adults remember melodies better, however, when they are presented in a biologically significant timbre (i.e., the human voice) than in various instrumental timbres (Weiss, Trehub, & Schellenberg, 2012). We examined the impact of vocal timbre on children's processing of melodies.…

  18. The First Call Note Plays a Crucial Role in Frog Vocal Communication.

    Science.gov (United States)

    Yue, Xizi; Fan, Yanzhu; Xue, Fei; Brauth, Steven E; Tang, Yezhong; Fang, Guangzhan

    2017-08-31

    Vocal communication plays a crucial role in survival and reproductive success in most amphibian species. Although amphibian communication sounds are often complex, consisting of many temporal features, we know little about the biological significance of each temporal component. The present study examined the biological significance of the notes of the male advertisement calls of the Emei music frog (Babina daunchina) using the optimized electroencephalogram (EEG) paradigm of mismatch negativity (MMN). Music frog calls generally contain four to six notes separated by intervals of approximately 150 milliseconds. A standard stimulus (white noise) and five deviant stimuli (five notes from one advertisement call) were played back to each subject while multi-channel EEG signals were recorded simultaneously. The results showed that the MMN amplitude for the first call note was significantly larger than that for the others. Moreover, the MMN amplitudes evoked from the left forebrain and midbrain were typically larger than those from their right counterparts. These results are consistent with the ideas that the first call note conveys more information than the others for auditory recognition and that there is left-hemisphere dominance for processing information derived from conspecific calls in frogs.

  19. Self-masking: Listening during vocalization. Normal hearing.

    Science.gov (United States)

    Borg, Erik; Bergkvist, Christina; Gustafsson, Dan

    2009-06-01

    What underlying mechanisms are involved in the ability to talk and listen simultaneously and what role does self-masking play under conditions of hearing impairment? The purpose of the present series of studies is to describe a technique for assessment of masked thresholds during vocalization, to describe normative data for males and females, and to focus on hearing impairment. The masking effect of vocalized [a:] on narrow-band noise pulses (250-8000 Hz) was studied using the maximum vocalization method. An amplitude-modulated series of sound pulses, which sounded like a steam engine, was masked until the criterion of halving the perceived pulse rate was reached. For masking of continuous reading, a just-follow-conversation criterion was applied. Intra-session test-retest reproducibility and inter-session variability were calculated. The results showed that female voices were more efficient in masking high frequency noise bursts than male voices and more efficient in masking both a male and a female test reading. The male had to vocalize 4 dBA louder than the female to produce the same masking effect on the test reading. It is concluded that the method is relatively simple to apply and has small intra-session and fair inter-session variability. Interesting gender differences were observed.

  20. Studies in musical acoustics and psychoacoustics

    CERN Document Server

    2017-01-01

    This book comprises twelve articles which cover a range of topics from musical instrument acoustics to issues in psychoacoustics and sound perception, as well as neuromusicology. In addition to experimental methods and data acquisition, modeling (such as FEM or wave field synthesis) and numerical simulation play a central role in studies addressing sound production in musical instruments as well as the interaction of radiated sound with the environment. Some of the studies focus on psychoacoustic aspects of virtual pitch and timbre as well as apparent source width (for techniques such as stereo or ambisonics) in music production. Since musical acoustics implies subjects playing instruments or singing in order to produce sound according to musical structures, this area is also covered, including a study that presents an artificial intelligence agent capable of interacting with a real ('analog') player in musical genres such as traditional and free jazz.

  1. Condicionamento vocal individualizado para profissionais da voz cantada - relato de casos

    Directory of Open Access Journals (Sweden)

    Mara Behlau

    2014-10-01

    Full Text Available This study addresses the development of individualized vocal conditioning programs for the specific demands of three singing voice professionals. It is a case report of three such professionals: a Brazilian musical theatre singer and actress, a sertanejo singer, and a rock singer. All three underwent speech-language pathology assessment, presenting complaints of tiredness and fatigue after voice use and/or seeking vocal improvement, and had undergone prior otorhinolaryngological evaluation. All three patients were in speech-language therapy, one stage of which was the development of an individualized vocal conditioning program according to the demands, needs, and availability of each subject. The musical theatre singer and actress readily adhered to the personalized vocal conditioning proposal, as she felt the benefit of carrying out a physiological warm-up before moving on to the artistic technique she already practised. The sertanejo singer adhered to the individualized vocal conditioning program without any difficulty and reported a marked improvement in vocal comfort and performance during singing with the selected exercises. The rock singer showed greater vocal tract flexibility, more stable phonation during singing, an expanded vocal range, greater articulatory precision, and a reduction of excessive global constriction after starting the individualized vocal conditioning program. Individualized vocal conditioning shows positive effects, particularly for voice professionals, because the individuals work on exactly the demands of their professional voice use, with specific exercises targeted to their needs.

  2. Voicework in Music Therapy : Research and Practice

    NARCIS (Netherlands)

    Baker, Felicity; Uhlig, S.

    2011-01-01

    ‘Baker and Uhlig’s new book gives many salient examples of innovative vocal techniques and methods that can be used with different populations. This much needed and timely new book will add to the literature base of vocal music therapy as well as making a valuable contribution to our field by

  3. Acoustic analysis of trill sounds.

    Science.gov (United States)

    Dhananjaya, N; Yegnanarayana, B; Bhaskararao, Peri

    2012-04-01

    In this paper, the acoustic-phonetic characteristics of steady apical trills--trill sounds produced by the periodic vibration of the apex of the tongue--are studied. Signal processing methods, namely, zero-frequency filtering and zero-time liftering of speech signals, are used to analyze the excitation source and the resonance characteristics of the vocal tract system, respectively. Although it is natural to expect the effect of trilling on the resonances of the vocal tract system, it is interesting to note that trilling influences the glottal source of excitation as well. The excitation characteristics derived using zero-frequency filtering of speech signals are glottal epochs, strength of impulses at the glottal epochs, and instantaneous fundamental frequency of the glottal vibration. Analysis based on zero-time liftering of speech signals is used to study the dynamic resonance characteristics of vocal tract system during the production of trill sounds. Qualitative analysis of trill sounds in different vowel contexts, and the acoustic cues that may help spotting trills in continuous speech are discussed.
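
    To make the first of the two signal-processing steps named above concrete, the following is a rough sketch of zero-frequency filtering for epoch (glottal closure instant) extraction, loosely after Murty and Yegnanarayana's formulation; the resonator cascade, window length, and epoch criterion are common textbook choices rather than the exact settings of this study, and zero-time liftering is not shown.

        import numpy as np
        from scipy.signal import lfilter

        def zero_frequency_filtered(signal, fs, mean_window_ms=10.0):
            """Sketch of zero-frequency filtering: positive-going zero crossings of
            the returned signal approximate glottal epochs (closure instants)."""
            s = np.asarray(signal, dtype=float)
            x = np.diff(s, prepend=s[0])                    # remove slowly varying DC
            # cascade of two zero-frequency resonators: H(z) = 1 / (1 - 2 z^-1 + z^-2)
            y = lfilter([1.0], [1.0, -2.0, 1.0], x)
            y = lfilter([1.0], [1.0, -2.0, 1.0], y)
            # remove the growing polynomial trend by repeated local-mean subtraction
            half = max(1, int(mean_window_ms * 1e-3 * fs / 2))
            kernel = np.ones(2 * half + 1) / (2 * half + 1)
            for _ in range(3):
                y = y - np.convolve(y, kernel, mode="same")
            return y

        def glottal_epochs(zff, fs):
            """Positive-going zero crossings of the ZFF signal, in seconds."""
            idx = np.where((zff[:-1] < 0) & (zff[1:] >= 0))[0]
            return idx / fs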

  4. Signal-to-background ratio preferences of normal-hearing listeners as a function of music

    Science.gov (United States)

    Barrett, Jillian Gallant

    The purpose of this study was to identify listeners' signal-to-background-ratio (SBR) preference levels for vocal music and to investigate whether or not SBR differences existed for different music genres. The "signal" was the singer's voice, and the "background" was the accompanying music. Three songs were each produced in two different genres (six genres represented in total). Each song was performed by three male and three female singers. Analyses addressed the influences of musical genre, singing style, and singer timbre on listeners' SBR choices. Fifty-three normal-hearing California State University, Northridge students ranging in age from 20 to 52 years participated as subjects. Subjects adjusted the overall music loudness to a comfortable listening level and manipulated a second gain control which affected only the singer's voice. Subjects listened to 72 stimuli and adjusted the singer's voice to the level they felt sounded appropriate in comparison to the background music. Singer and genre were the two primary contributors to significant differences in subjects' SBR preferences, although the results clearly indicate that genre, style, and singer interact in different combinations under different conditions. SBR differences for each song, each singer, and each subject did not occur in a predictable manner, and support the hypothesis that SBR preferences are neither fixed nor dependent merely upon music application or setting. Further investigations regarding the psychoacoustical bases responsible for differences in SBR preferences are warranted.
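
    If the vocal and accompaniment stems are available as separate tracks (an assumption; the study above worked with gain controls at playback rather than with stems), the signal-to-background ratio can be expressed simply as the level difference in dB between the two, for example:

        import numpy as np

        def sbr_db(voice, accompaniment, eps=1e-12):
            """Signal-to-background ratio in dB: RMS level of the voice track
            relative to the RMS level of the accompanying music."""
            rms_v = np.sqrt(np.mean(np.square(voice)) + eps)
            rms_b = np.sqrt(np.mean(np.square(accompaniment)) + eps)
            return 20.0 * np.log10(rms_v / rms_b)

        # Example with synthetic signals: voice at half the amplitude of the backing.
        t = np.linspace(0, 1, 44100, endpoint=False)
        voice = 0.25 * np.sin(2 * np.pi * 220 * t)
        backing = 0.5 * np.sin(2 * np.pi * 110 * t)
        print(f"SBR = {sbr_db(voice, backing):.1f} dB")   # about -6.0 dB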

  5. A Stereo Music Preprocessing Scheme for Cochlear Implant Users.

    Science.gov (United States)

    Buyens, Wim; van Dijk, Bas; Wouters, Jan; Moonen, Marc

    2015-10-01

    Listening to music is still one of the more challenging aspects of using a cochlear implant (CI) for most users. Simple musical structures, a clear rhythm/beat, and lyrics that are easy to follow are among the top factors contributing to music appreciation for CI users. Modifying the audio mix of complex music potentially improves music enjoyment in CI users. A stereo music preprocessing scheme is described in which vocals, drums, and bass are emphasized based on the representation of the harmonic and the percussive components in the input spectrogram, combined with the spatial allocation of instruments in typical stereo recordings. The scheme is assessed with postlingually deafened CI subjects (N = 7) using pop/rock music excerpts with different complexity levels. The scheme is capable of modifying relative instrument level settings, with the aim of improving music appreciation in CI users, and allows individual preference adjustments. The assessment with CI subjects confirms the preference for more emphasis on vocals, drums, and bass as offered by the preprocessing scheme, especially for songs with higher complexity. The stereo music preprocessing scheme has the potential to improve music enjoyment in CI users by modifying the audio mix in widespread (stereo) music recordings. Since music enjoyment in CI users is generally poor, this scheme can assist the music listening experience of CI users as a training or rehabilitation tool.
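
    The abstract does not publish the algorithm itself; the sketch below only illustrates the two ingredients it names, harmonic/percussive separation and the typically centre-panned placement of vocals and bass in stereo recordings, using librosa's HPSS with made-up gain parameters, and is not the authors' preprocessing scheme.

        import numpy as np
        import librosa
        import soundfile as sf

        def emphasize_vocals_drums_bass(path_in, path_out, centre_gain=2.0, perc_gain=1.5):
            # Assumes a stereo input file; gain values are arbitrary illustration choices.
            y, sr = librosa.load(path_in, sr=None, mono=False)
            left, right = y[0], y[1]
            mid = 0.5 * (left + right)      # vocals and bass are usually centre-panned
            side = 0.5 * (left - right)

            # harmonic/percussive separation of the centre channel (drums -> percussive)
            D = librosa.stft(mid)
            H, P = librosa.decompose.hpss(D)
            mid_boosted = (librosa.istft(H, length=len(mid))
                           + perc_gain * librosa.istft(P, length=len(mid)))

            out_l = centre_gain * mid_boosted + side
            out_r = centre_gain * mid_boosted - side
            out = np.stack([out_l, out_r], axis=1)
            out = out / max(1.0, float(np.max(np.abs(out))))   # simple peak normalization
            sf.write(path_out, out, sr)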

  6. METHODOLOGICAL ANALYSIS OF STUDYING THE PROBLEM OF PERCEPTION IN FUTURE MUSIC TEACHERS’ PROFESSIONAL TRAINING

    Directory of Open Access Journals (Sweden)

    Zhang Bo

    2017-04-01

    Full Text Available The article presents a methodological analysis of the problem of perception in future music teachers' professional training. The author analyses works of outstanding scholars in philosophy, psychology, and art education, and reveals a hierarchical system of options for musical perception. The methodological foundation is supported by modern research in the theory and methodology of music study, which gives proper shape and detail to the presented material. Drawing on vocal and choral research on forming future music teachers' value-based perception of musical art, the author aims to present a methodological analysis of the problem of perception in their professional training. Applying a systems approach to forming this value-based perception of musical art while future music teachers are trained for vocal and choral work with senior pupils extends their artistic awareness and helps them distinguish art works and phenomena, see their properties, and orient themselves in the informative content of musical works. Special attention is paid to revealing the methodological principles of researching the category of perception with respect to the value-based understanding of images in musical works. Analysing scientific sources on voice production, the author finds that perception is closely related to the transformation of external information and underlies the formation of images and the operation of attention, memory, thinking, and emotions. The features of perceiving vocal and choral studies, and the students' extrapolation of them to future professional activity with senior pupils, are analysed in terms of the perception and transformation of musical and intonational information, analysis, object perception, and interpretation in accordance with future

  7. The Perception of Sounds in Phonographic Space

    DEFF Research Database (Denmark)

    Walther-Hansen, Mads

    This thesis is about the perception of space in recorded music, with particular reference to stereo recordings of popular music. It explores how sound engineers create imaginary musical environments in which sounds appear to listeners in different ways. It also investigates some of the conditions ... The third chapter examines how listeners understand and make sense of phonographic space. In the form of a critique of Pierre Schaeffer and Roger Scruton's notion of the acousmatic situation, I argue that our experience of recorded music has a twofold focus: the sound-in-itself and the sound's causality ... the use of metaphors and image schemas in the experience and conceptualisation of phonographic space. With reference to descriptions of recordings by sound engineers, I argue that metaphors are central to our understanding of recorded music. This work is grounded in the tradition of cognitive linguistics ...

  8. A cervid vocal fold model suggests greater glottal efficiency in calling at high frequencies.

    Directory of Open Access Journals (Sweden)

    Ingo R Titze

    2010-08-01

    Full Text Available Male Rocky Mountain elk (Cervus elaphus nelsoni) produce loud, high-fundamental-frequency bugles during the mating season, in contrast to male European red deer (Cervus elaphus scoticus), which produce loud, low-fundamental-frequency roaring calls. A critical step in understanding vocal communication is to relate sound complexity to anatomy and physiology in a causal manner. Experimentation at the sound source, often difficult in vivo in mammals, is simulated here by a finite element model of the larynx and a wave propagation model of the vocal tract, both based on the morphology and biomechanics of the elk. The model can produce a wide range of fundamental frequencies. Low fundamental frequencies require low vocal fold strain, but large lung pressure and large glottal flow if the sound intensity level is to exceed 70 dB at 10 m distance. A high-frequency bugle requires both large muscular effort (to strain the vocal ligament) and high lung pressure (to overcome phonation threshold pressure), but at least 10 dB more intensity level can be achieved. Glottal efficiency, the ratio of radiated sound power to aerodynamic power at the glottis, is higher in elk, suggesting an advantage of high-pitched signaling. This advantage is based on two aspects: first, the lower airflow required for aerodynamic power and, second, an acoustic radiation advantage at higher frequencies. Both signal types are used by the respective males during the mating season and probably serve as honest signals. The two signal types relate differently to physical qualities of the sender. The low-frequency sound (red deer call) relates to overall body size via a strong relationship between acoustic parameters and the size of the vocal organs and the body. The high-frequency bugle may signal muscular strength and endurance, via a 'vocalizing at the edge' mechanism, for which efficiency is critical.
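
    For readers unfamiliar with the quantity, glottal efficiency is commonly written as the ratio of radiated acoustic power to the aerodynamic power supplied at the glottis, with the latter approximated as the product of mean subglottal pressure and mean glottal airflow; this is the standard textbook form, not necessarily the exact expression evaluated in the model above.

        \eta_{\mathrm{glottal}} \;=\; \frac{P_{\mathrm{radiated}}}{P_{\mathrm{aero}}},
        \qquad
        P_{\mathrm{aero}} \;\approx\; \bar{p}_{\mathrm{sub}} \, \bar{U}_{g}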

  9. CHAMBER VOCAL CREATIONS BY SNEJANA PÎSLARI: GENERAL CHARACTERISTICS, GENRE AND STYLE FEATURES

    Directory of Open Access Journals (Sweden)

    COADĂ TATIANA

    2016-06-01

    Full Text Available The author suggests a general characterization of the chamber vocal creations by Snejana Pîslari. The present work represents a detailed analysis of the romances, written by the composer on lyrics by M. Eminescu and N. Labiş. The author reveals the genre and style features of the chamber vocal works composed by S. Pîslari and the eccentricity of the musical language. Another landmark of the present work is S. Pîslari’s individual composition style which is distinguishable by the use of experimental ideas with elements of folklore, as well as by the use of new means of musical expressivity.

  10. Inner Sound: Altered States of Consciousness in Electronic Music and Audio-Visual Media

    DEFF Research Database (Denmark)

    Weinel, Jonathan

    Over the last century, developments in electronic music and art have enabled new possibilities for creating audio and audio-visual artworks. With this new potential has come the possibility of representing subjective internal conscious states, such as the experience of hallucinations, using ... the creative influence of ASCs (altered states of consciousness), from Amazonian chicha festivals to the synaesthetic assaults of neon raves, and from an immersive outdoor electroacoustic performance on an Athenian hilltop to a mushroom trip on a tropical island in virtual reality. Beginning with a discussion of consciousness, the book ... explores how our subjective realities may change during states of dream, psychedelic experience, meditation, and trance. Taking a broad view across a wide range of genres, Inner Sound draws connections between shamanic art and music, and the modern technoshamanism of psychedelic rock, electronic dance ...

  11. A prática musical religiosa no Brasil e em Portugal na segunda metade do século XVIII: paralelo e fundamentação para a interpretação vocal da música de José Joaquim Emerico Lobo de Mesquita Religious musical practice in Brazil and Portugal in the second half of eighteenth century: parallels and basis for the vocal interpretation of the music of José Joaquim Emerico Lobo de Mesquita

    Directory of Open Access Journals (Sweden)

    Katya Beatriz de Oliveira

    2011-12-01

    Full Text Available In order to achieve a historically and stylistically based vocal interpretation of the sacred music of Minas Gerais (Brazil) in the second half of the eighteenth century and the early nineteenth century, it is necessary to compare it with Portuguese music written during the same period, thus tracing the Portuguese influence in the Brazilian works. In this essay we investigate the stylistic similarities between the music of the Minas Gerais composer José Joaquim Emerico Lobo de Mesquita, particularly the soprano solo of his Mass in E flat, and the soprano solo of a Mass for five voices by David Perez and the first movement of the motet Care Deus si respiro, for soprano solo and strings, by Niccolò Jommelli, Italian composers of the Neapolitan school who worked for the Portuguese court in the second half of the 18th century. We thereby attempt to justify the use of historical European singing treatises as a basis for the vocal performance of this music from Minas Gerais.

  12. Practiced musical style shapes auditory skills.

    Science.gov (United States)

    Vuust, Peter; Brattico, Elvira; Seppänen, Miia; Näätänen, Risto; Tervaniemi, Mari

    2012-04-01

    Musicians' processing of sounds depends highly on instrument, performance practice, and level of expertise. Here, we measured the mismatch negativity (MMN), a preattentive brain response, to six types of musical feature change in musicians playing three distinct styles of music (classical, jazz, and rock/pop) and in nonmusicians using a novel, fast, and musical sounding multifeature MMN paradigm. We found MMN to all six deviants, showing that MMN paradigms can be adapted to resemble a musical context. Furthermore, we found that jazz musicians had larger MMN amplitude than all other experimental groups across all sound features, indicating greater overall sensitivity to auditory outliers. Furthermore, we observed a tendency toward shorter latency of the MMN to all feature changes in jazz musicians compared to band musicians. These findings indicate that the characteristics of the style of music played by musicians influence their perceptual skills and the brain processing of sound features embedded in music. © 2012 New York Academy of Sciences.

  13. Experiments in Area of Musical Sound in the Chamber and Instrumental Works by Rodion Shchedrin

    Directory of Open Access Journals (Sweden)

    Zaytseva Marina

    2016-10-01

    Full Text Available The article substantiates the peculiarities of Rodion Shchedrin's musical thinking. Through analysis of such works by Rodion Shchedrin as "Imitating Albéniz" and "Humoresque", arranged for violin and piano by D. M. Tsyganov, and "Balalaika" for solo violin without a bow, the composer's innovations in the field of violin sound are identified. It is argued that the search for new expressive, violin-coloristic resources was driven by the composer's desire to discover new worlds of sound and to create original works that masterfully realize the most complex creative tasks.

  14. The musical brain: brain waves reveal the neurophysiological basis of musicality in human subjects.

    Science.gov (United States)

    Tervaniemi, M; Ilvonen, T; Karma, K; Alho, K; Näätänen, R

    1997-04-18

    To reveal neurophysiological prerequisites of musicality, auditory event-related potentials (ERPs) were recorded from musical and non-musical subjects, musicality being here defined as the ability to temporally structure auditory information. Instructed to read a book and to ignore sounds, subjects were presented with a repetitive sound pattern with occasional changes in its temporal structure. The mismatch negativity (MMN) component of ERPs, indexing the cortical preattentive detection of change in these stimulus patterns, was larger in amplitude in musical than non-musical subjects. This amplitude enhancement, indicating more accurate sensory memory function in musical subjects, suggests that even the cognitive component of musicality, traditionally regarded as depending on attention-related brain processes, in fact, is based on neural mechanisms present already at the preattentive level.

  15. Acoustics for Music Majors-- A Laboratory Course

    Science.gov (United States)

    McDonald, Perry F.

    1972-01-01

    Brief descriptions of several of the laboratory experiments which have been incorporated into an acoustics course for music majors. Includes vibratory motion and sound generation, nature, speed, and pitch of sound, spectrum analysis and electronic synthesis of musical sound and some conventional sound experiments. (Author/TS)

  16. Ostracized Sounds: Notes on Busking Music

    Directory of Open Access Journals (Sweden)

    Ignazio Macchiarella

    2015-07-01

    Full Text Available Today, buskers' music holds a special fascination, above all in urban contexts, as it is one of the rare occasions to hear live performances. Many people nowadays appreciate street performance and stop to listen to it: unlike canned music coming out of loudspeakers, live music is immediately perceived as a living aspect of public life, whatever its technical or aesthetic qualities. Nonetheless, buskers are sometimes banned and prosecuted under city police regulations. The paper introduces this specific range of music making on the basis of a field research experience.

  17. Vocal Fry Use in Adult Female Speakers Exposed to Two Languages.

    Science.gov (United States)

    Gibson, Todd A; Summers, Connie; Walls, Sydney

    2017-07-01

    Several studies have identified the widespread use of vocal fry among American women. Popular explanations for this phenomenon appeal to sociolinguistic purposes that likely take significant time for second-language users to learn. The objective of this study was to determine whether mere exposure to this vocal register, as opposed to nuanced sociolinguistic motivations, might explain its widespread use. This study used a multigroup within- and between-subjects design. Fifty-eight women from one of three language background groups (functionally monolingual in English, functionally monolingual in Spanish, and Spanish-English bilingual) living in El Paso, Texas, repeated a list of nonwords conforming to the sound rules of English and another list of nonwords conforming to the sound rules of Spanish. Perceptual analysis identified each episode of vocal fry. There were no statistically significant differences between groups in their frequency of vocal fry use despite large differences in their amount of English-language exposure. All groups produced more vocal fry when repeating English than when repeating Spanish nonwords. Because the human perceptual system encodes vocal qualities even after minimal language experience, the widespread use of vocal fry among female residents of the United States is likely owing to mere exposure to English rather than to nuanced sociolinguistic motivations. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  18. The vocal load of Reform Jewish cantors in the USA.

    Science.gov (United States)

    Hapner, Edie; Gilman, Marina

    2012-03-01

    Jewish cantors comprise a subset of vocal professionals that is not well understood by vocal health professionals. This study aimed to document the vocal demands, vocal training, reported incidence of voice problems, and treatment-seeking behavior of Reform Jewish cantors. The study used a prospective observational design to anonymously query Reform Jewish cantors using a 35-item multiple-choice survey distributed online. Demographic information, medical history, vocal music training, cantorial duties, history of voice problems, and treatment-seeking behavior were addressed. Results indicated that many of the commonly associated risk factors for developing voice disorders were present in this population, including high vocal demands, reduced vocal downtime, allergies, and acid reflux. Greater than 65% of the respondents reported having had a voice problem that interfered with their ability to perform their duties at some time during their careers. Reform Jewish cantors are a population of occupational voice users who may be currently unidentified and underserved by vocal health professionals. The results of the survey suggest that Reform Jewish cantors are occupational voice users and are at high risk for developing voice disorders. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  19. Sound attenuation and preferred music in the treatment of problem behavior maintained by escape from noise.

    Science.gov (United States)

    Kettering, Tracy L; Fisher, Wayne W; Kelley, Michael E; LaRue, Robert H

    2018-06-06

    We examined the extent to which different sounds functioned as motivating operations (MO) that evoked problem behavior during a functional analysis for two participants. Results suggested that escape from loud noises reinforced the problem behavior for one participant and escape from arguing reinforced problem behavior for the other participant. Noncontingent delivery of preferred music through sound-attenuating headphones decreased problem behavior without the use of extinction for both participants. We discuss the results in terms of the abolishing effects of the intervention. © 2018 Society for the Experimental Analysis of Behavior.

  20. Ultrahromatizm as a Sound Meditation

    Directory of Open Access Journals (Sweden)

    Zaytseva Marina

    2016-08-01

    Full Text Available The article substantiates insights into the theory and practice of using microchromatics in modern musical art and defines the compositional and expressive possibilities of microtonal systems in the works of composers of the XXI century. It justifies the author's interpretation of the concept of "ultrahromatizm" as a principle of musical thinking connected with the conception of sound space as a space-time continuum. The paper identifies the correlation between the notions of "microchromatism" and "ultrahromatizm": if microchromatism is understood, first and foremost, as the technique of dividing the sound into microparticles, ultrahromatizm is interpreted as a principle of musical and artistic consciousness, as the focus of musical consciousness on forming a specific model of sound meditation and understanding of the world.

  1. Computer Music

    Science.gov (United States)

    Cook, Perry R.

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).

  2. Vocal mechanisms in birds and bats: a comparative view

    Directory of Open Access Journals (Sweden)

    Suthers Roderick A.

    2004-01-01

    Full Text Available Vocal signals play a very important role in the lives of both birds and echolocating bats, but these two unrelated groups of flying vertebrates have very different vocal systems. They nevertheless must solve many of the same problems in producing sound. This brief review examines avian and microchiropteran motor mechanisms for: (1) coordinating the timing of phonation with the vocal motor pattern that controls its acoustic properties, and (2) achieving respiratory strategies that provide adequate ventilation for pulmonary gas exchange while also facilitating longer-duration songs or trains of sonar pulses.

  3. Effects of vocal training in a musicophile with congenital amusia.

    Science.gov (United States)

    Wilbiks, Jonathan M P; Vuvan, Dominique T; Girard, Pier-Yves; Peretz, Isabelle; Russo, Frank A

    2016-12-01

    Congenital amusia is a condition in which an individual suffers from a deficit of musical pitch perception and production. Individuals suffering from congenital amusia generally tend to abstain from musical activities. Here, we present the unique case of Tim Falconer, a self-described musicophile who also suffers from congenital amusia. We describe and assess Tim's attempts to train himself out of amusia through a self-imposed 18-month program of formal vocal training and practice. We tested Tim with respect to music perception and vocal production across seven sessions including pre- and post-training assessments. We also obtained diffusion-weighted images of his brain to assess connectivity between auditory and motor planning areas via the arcuate fasciculus (AF). Tim's behavioral and brain data were compared to that of normal and amusic controls. While Tim showed temporary gains in his singing ability, he did not reach normal levels, and these gains faded when he was not engaged in regular lessons and practice. Tim did show some sustained gains with respect to the perception of musical rhythm and meter. We propose that Tim's lack of improvement in pitch perception and production tasks is due to long-standing and likely irreversible reduction in connectivity along the AF fiber tract.

  4. Singing in Primary Schools: Case Studies of Good Practice in Whole Class Vocal Tuition

    Science.gov (United States)

    Lamont, Alexandra; Daubney, Alison; Spruce, Gary

    2012-01-01

    Within the context of British initiatives in music education such as the Wider Opportunities programme in England and the recommendations of the Music Manifesto emphasising the importance of singing in primary schools, the current paper explores examples of good practice in whole-class vocal tuition. The research included seven different primary…

  5. Auditory responses in the amygdala to social vocalizations

    Science.gov (United States)

    Gadziola, Marie A.

    The underlying goal of this dissertation is to understand how the amygdala, a brain region involved in establishing the emotional significance of sensory input, contributes to the processing of complex sounds. The general hypothesis is that communication calls of big brown bats (Eptesicus fuscus) transmit relevant information about social context that is reflected in the activity of amygdalar neurons. The first specific aim analyzed social vocalizations emitted under a variety of behavioral contexts, and related vocalizations to an objective measure of internal physiological state by monitoring the heart rate of vocalizing bats. These experiments revealed a complex acoustic communication system among big brown bats in which acoustic cues and call structure signal the emotional state of a sender. The second specific aim characterized the responsiveness of single neurons in the basolateral amygdala to a range of social syllables. Neurons typically respond to the majority of tested syllables, but effectively discriminate among vocalizations by varying the response duration. This novel coding strategy underscores the importance of persistent firing in the general functioning of the amygdala. The third specific aim examined the influence of acoustic context by characterizing both the behavioral and neurophysiological responses to natural vocal sequences. Vocal sequences differentially modify the internal affective state of a listening bat, with lower aggression vocalizations evoking the greatest change in heart rate. Amygdalar neurons employ two different coding strategies: low background neurons respond selectively to very few stimuli, whereas high background neurons respond broadly to stimuli but demonstrate variation in response magnitude and timing. Neurons appear to discriminate the valence of stimuli, with aggression sequences evoking robust population-level responses across all sound levels. Further, vocal sequences show improved discrimination among stimuli

  6. Adopting a music-to-heart rate alignment strategy to measure the impact of music and its tempo on human heart rate

    OpenAIRE

    Van Dyck, Edith; Six, Joren; Soyer, Esin Nisa; Denys, Marlies; Bardijn, Ilka; Leman, Marc

    2017-01-01

    Music is frequently used as a means of relaxation. Conversely, it is used as a means of arousal in sports and exercise contexts. Previous research suggests that tempo is one of the most significant determinants of music-related arousal and relaxation effects. Here we investigate the specific effect of music tempo, but also more generally, the influence of music on human heart rate. We took the pulses of 32 participants in silence, and then we played them non-vocal, ambient music at a tempo co...

  7. Music and Careers for the Junior High Student.

    Science.gov (United States)

    Carlson, Bruce

    The curriculum guide describes an exemplary project designed to provide junior high school students with an opportunity to explore careers related to the world of music. The units present objectives, activities, and resources related to the following occupations: pop music artist, professional musician (union), instrumental and vocal music…

  8. By the sound of it. An ERP investigation of human action sound processing in 7-month-old infants

    Directory of Open Access Journals (Sweden)

    Elena Geangu

    2015-04-01

    Full Text Available Recent evidence suggests that human adults perceive human action sounds as a distinct category from human vocalizations, environmental, and mechanical sounds, activating different neural networks (Engel et al., 2009; Lewis et al., 2011). Yet, little is known about the development of such specialization. Using event-related potentials (ERPs), this study investigated neural correlates of 7-month-olds’ processing of human action (HA) sounds in comparison to human vocalizations (HV), environmental (ENV), and mechanical (MEC) sounds. Relative to the other categories, HA sounds led to increased positive amplitudes between 470 and 570 ms post-stimulus onset at left anterior temporal locations, while HV led to increased negative amplitudes at the more posterior temporal locations in both hemispheres. Collectively, human produced sounds (HA + HV) led to significantly different response profiles compared to non-living sound sources (ENV + MEC) at parietal and frontal locations in both hemispheres. Overall, by 7 months of age human action sounds are being differentially processed in the brain, consistent with a dichotomy for processing living versus non-living things. This provides novel evidence regarding the typical categorical processing of socially relevant sounds.

  9. Music and Culture Areas of Native California

    OpenAIRE

    Keeling, Richard

    1992-01-01

    This paper sketches the principal music and culture areas of native California and identifies general characteristics that distinguish the region in the overall sphere of Native American music. Rather than provide notations or detailed analyses I describe the music according to a set of general parameters that I have found useful in previous comparative research. The following elements are considered: (1) vocal quality or timbre; (2) presence of words or vocables, text-setting, and repetition...

  10. Universal mechanisms of sound production and control in birds and mammals

    DEFF Research Database (Denmark)

    Elemans, Coen; Rasmussen, Jeppe Have; Herbst, Christian T.

    2015-01-01

    As animals vocalize, their vocal organ transforms motor commands into vocalizations for social communication. In birds, the physical mechanisms by which vocalizations are produced and controlled remain unresolved because of the extreme difficulty in obtaining in vivo measurements. Here, we...... learning and is common to MEAD sound production across birds and mammals, including humans....

  11. Sound for Health

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    From astronomy to biomedical sciences: music and sound as tools for scientific investigation Music and science are probably two of the most intrinsically linked disciplines in the spectrum of human knowledge. Science and technology have revolutionised the way artists work, interact, and create. The impact of innovative materials, new communication media, more powerful computers, and faster networks on the creative process is evident: we all can become artists in the digital era. What is less known is that the arts, and music in particular, are having a profound impact on the way scientists operate and think. From the early experiments by Kepler to the modern data sonification applications in medicine – sound and music are playing an increasingly crucial role in supporting science and driving innovation. In this talk, Dr. Domenico Vicinanza will highlight the complementarity and the natural synergy between music and science, with specific reference to biomedical sciences. Dr. Vicinanza will take t...

  12. Preferred sound levels of portable music players and listening habits among adults: a field study.

    Science.gov (United States)

    Kähäri, Kim R; Aslund, T; Olsson, J

    2011-01-01

    The main purpose of this descriptive field study was to explore music listening habits and preferred listening levels with portable music players (PMPs). We were also interested in seeing whether any exposure differences could be observed between the sexes. Data were collected during 12 hours at Stockholm Central Station, where people passing by were invited to measure their preferred PMP listening level by using a KEMAR manikin. People were also asked to answer a questionnaire about their listening habits. In all, 60 persons (41 men and 19 women) took part in the questionnaire study and 61 had their preferred PMP levels measured. Forty-one of these sound level measurements met the conditions for acceptable measurement and are reported here. The women (31 years) and the men (33 years) had started to use PMPs on a regular basis in their early 20s. Ear canal headphones/ear buds were the preferred headphone types. Fifty-seven percent of the whole study population used their PMP on a daily basis. The measured LAeq60 sec levels, corrected for free field, ranged between 73 and 102 dB, with a mean value of 83 dB. Sound levels for different types of headphones are also presented. The results of this study indicate that there are two groups of listeners: people who listen less frequently and at lower, safer sound levels, and people with excessive listening habits that may in time damage their hearing.

  13. Preferred sound levels of portable music players and listening habits among adults: A field study

    Directory of Open Access Journals (Sweden)

    Kim R Kahari

    2011-01-01

    Full Text Available The main purpose of this descriptive field study was to explore music listening habits and preferred listening levels with portable music players (PMPs). We were also interested in seeing whether any exposure differences could be observed between the sexes. Data were collected during 12 hours at Stockholm Central Station, where people passing by were invited to measure their preferred PMP listening level by using a KEMAR manikin. People were also asked to answer a questionnaire about their listening habits. In all, 60 persons (41 men and 19 women) took part in the questionnaire study and 61 had their preferred PMP levels measured. Forty-one of these sound level measurements met the conditions for acceptable measurement and are reported here. The women (31 years) and the men (33 years) had started to use PMPs on a regular basis in their early 20s. Ear canal headphones/ear buds were the preferred headphone types. Fifty-seven percent of the whole study population used their PMP on a daily basis. The measured LAeq60 sec levels, corrected for free field, ranged between 73 and 102 dB, with a mean value of 83 dB. Sound levels for different types of headphones are also presented. The results of this study indicate that there are two groups of listeners: people who listen less frequently and at lower, safer sound levels, and people with excessive listening habits that may in time damage their hearing.

  14. Rhesus monkeys (Macaca mulatta) detect rhythmic groups in music, but not the beat.

    Science.gov (United States)

    Honing, Henkjan; Merchant, Hugo; Háden, Gábor P; Prado, Luis; Bartolo, Ramón

    2012-01-01

    It was recently shown that rhythmic entrainment, long considered a human-specific mechanism, can be demonstrated in a selected group of bird species, and, somewhat surprisingly, not in more closely related species such as nonhuman primates. This observation supports the vocal learning hypothesis that suggests rhythmic entrainment to be a by-product of the vocal learning mechanisms that are shared by several bird and mammal species, including humans, but that are only weakly developed, or missing entirely, in nonhuman primates. To test this hypothesis we measured auditory event-related potentials (ERPs) in two rhesus monkeys (Macaca mulatta), probing a well-documented component in humans, the mismatch negativity (MMN) to study rhythmic expectation. We demonstrate for the first time in rhesus monkeys that, in response to infrequent deviants in pitch that were presented in a continuous sound stream using an oddball paradigm, a comparable ERP component can be detected with negative deflections in early latencies (Experiment 1). Subsequently we tested whether rhesus monkeys can detect gaps (omissions at random positions in the sound stream; Experiment 2) and, using more complex stimuli, also the beat (omissions at the first position of a musical unit, i.e. the 'downbeat'; Experiment 3). In contrast to what has been shown in human adults and newborns (using identical stimuli and experimental paradigm), the results suggest that rhesus monkeys are not able to detect the beat in music. These findings are in support of the hypothesis that beat induction (the cognitive mechanism that supports the perception of a regular pulse from a varying rhythm) is species-specific and absent in nonhuman primates. In addition, the findings support the auditory timing dissociation hypothesis, with rhesus monkeys being sensitive to rhythmic grouping (detecting the start of a rhythmic group), but not to the induced beat (detecting a regularity from a varying rhythm).

  15. Rhesus monkeys (Macaca mulatta) detect rhythmic groups in music, but not the beat.

    Directory of Open Access Journals (Sweden)

    Henkjan Honing

    Full Text Available It was recently shown that rhythmic entrainment, long considered a human-specific mechanism, can be demonstrated in a selected group of bird species, and, somewhat surprisingly, not in more closely related species such as nonhuman primates. This observation supports the vocal learning hypothesis that suggests rhythmic entrainment to be a by-product of the vocal learning mechanisms that are shared by several bird and mammal species, including humans, but that are only weakly developed, or missing entirely, in nonhuman primates. To test this hypothesis we measured auditory event-related potentials (ERPs) in two rhesus monkeys (Macaca mulatta), probing a well-documented component in humans, the mismatch negativity (MMN), to study rhythmic expectation. We demonstrate for the first time in rhesus monkeys that, in response to infrequent deviants in pitch that were presented in a continuous sound stream using an oddball paradigm, a comparable ERP component can be detected with negative deflections in early latencies (Experiment 1). Subsequently we tested whether rhesus monkeys can detect gaps (omissions at random positions in the sound stream; Experiment 2) and, using more complex stimuli, also the beat (omissions at the first position of a musical unit, i.e. the 'downbeat'; Experiment 3). In contrast to what has been shown in human adults and newborns (using identical stimuli and experimental paradigm), the results suggest that rhesus monkeys are not able to detect the beat in music. These findings are in support of the hypothesis that beat induction (the cognitive mechanism that supports the perception of a regular pulse from a varying rhythm) is species-specific and absent in nonhuman primates. In addition, the findings support the auditory timing dissociation hypothesis, with rhesus monkeys being sensitive to rhythmic grouping (detecting the start of a rhythmic group), but not to the induced beat (detecting a regularity from a varying rhythm).
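    To make the oddball logic of Experiments 1-3 above concrete, the following sketch builds a simple isochronous tone stream in which a small fraction of events are either pitch deviants or silent omissions. All frequencies, probabilities, and timings are illustrative assumptions rather than the stimulus parameters used in the study.

```python
import numpy as np

def oddball_stream(n_events=200, sr=44100, ioi=0.25, tone_dur=0.06,
                   f_standard=440.0, f_deviant=494.0,
                   p_deviant=0.1, p_omission=0.05, seed=0):
    """Isochronous stream of tones with occasional pitch deviants and omissions."""
    rng = np.random.default_rng(seed)
    stream = np.zeros(int(n_events * ioi * sr))
    t = np.arange(int(tone_dur * sr)) / sr
    for k in range(n_events):
        r = rng.random()
        if r < p_omission:
            continue                       # omitted event (silent gap)
        freq = f_deviant if r < p_omission + p_deviant else f_standard
        tone = np.sin(2 * np.pi * freq * t) * np.hanning(t.size)  # smooth on/offset
        start = int(k * ioi * sr)
        stream[start:start + tone.size] += tone
    return stream

stimulus = oddball_stream()
```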

  16. The speech choir in central European theatres and literary-musical works in the first third of the 20th century

    Directory of Open Access Journals (Sweden)

    Meyer-Kalkus Reinhart

    2015-01-01

    Full Text Available Speech choirs emerged as an offshoot of the choral gatherings of a wider youth musical and singing movement in the first half of the 20th century. The occasionally expressed opinion that choral speaking was cultivated primarily by the Hitler Youth and pressed into service on behalf of Nazi nationalist and racist propaganda is, historically, only partially accurate. The primary forces of choral speaking in Germany were, from 1919, the Social Democratic workers’ and cultural movement and the Catholic youth groups, in addition to elementary and secondary schools. The popularity of speech choirs around 1930 was also echoed in the music of the time. Compositions for musical speech choirs were produced by composers like Heinz Thiessen, Arnold Schönberg, Ernst Toch, Carl Orff, Vladimir Vogel, Luigi Nono, Helmut Lachenmann and Wolfgang Rihm. Moving forward from the Schönberg School, the post-1945 new music thereby opens up the spectrum of vocal expressions of sound beyond that of the singing voice. It does so not only for solo voices but for the choir as well.

  17. Categorization of common sounds by cochlear implanted and normal hearing adults.

    Science.gov (United States)

    Collett, E; Marx, M; Gaillard, P; Roby, B; Fraysse, B; Deguine, O; Barone, P

    2016-05-01

    Auditory categorization involves grouping of acoustic events along one or more shared perceptual dimensions which can relate to both semantic and physical attributes. This process involves both high level cognitive processes (categorization) and low-level perceptual encoding of the acoustic signal, both of which are affected by the use of a cochlear implant (CI) device. The goal of this study was twofold: I) compare the categorization strategies of CI users and normal hearing listeners (NHL) II) investigate if any characteristics of the raw acoustic signal could explain the results. 16 experienced CI users and 20 NHL were tested using a Free-Sorting Task of 16 common sounds divided into 3 predefined categories of environmental, musical and vocal sounds. Multiple Correspondence Analysis (MCA) and Hierarchical Clustering based on Principal Components (HCPC) show that CI users followed a similar categorization strategy to that of NHL and were able to discriminate between the three different types of sounds. However results for CI users were more varied and showed less inter-participant agreement. Acoustic analysis also highlighted the average pitch salience and average autocorrelation peak as being important for the perception and categorization of the sounds. The results therefore show that on a broad level of categorization CI users may not have as many difficulties as previously thought in discriminating certain kinds of sound; however the perception of individual sounds remains challenging. Copyright © 2016 Elsevier B.V. All rights reserved.
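    The acoustic analysis above singles out average pitch salience and the average autocorrelation peak. As a rough illustration of the latter, the sketch below computes the height of the autocorrelation peak of a signal frame within a plausible pitch range, a common proxy for pitch salience; the frame length and search range are assumptions, not the paper's settings.

```python
import numpy as np

def autocorr_peak(frame, sr=16000, fmin=60.0, fmax=500.0):
    """Height of the normalized autocorrelation peak within a plausible pitch range.
    Values near 1 indicate strongly periodic (pitch-salient) frames."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[frame.size - 1:]
    if ac[0] <= 0:
        return 0.0
    ac = ac / ac[0]                              # normalize so lag 0 equals 1
    lo, hi = int(sr / fmax), int(sr / fmin)      # lag range for 60-500 Hz
    return float(ac[lo:hi].max())

# Example: a 440 Hz tone yields a peak close to 1, white noise a much lower value.
sr = 16000
t = np.arange(int(0.04 * sr)) / sr
print(autocorr_peak(np.sin(2 * np.pi * 440 * t), sr))
print(autocorr_peak(np.random.randn(t.size), sr))
```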

  18. [The Bell Labs contributions to (singing) voice enginee­ring].

    Science.gov (United States)

    Vincent, C

    While in «art» and «traditional» music, the nimbleness of the voice and the mastering of the vocal tone are put into pers­pective, in «popular» music, sound engineering takes the lead, and relegates the vocal virtuosity of the interpreter to second place. We propose to study here three technologies with contri­butions to music. All are developed and patented by the Bell Labs: The artificial larynx (and its derivatives, Sonovox and TalkBox), the vocoder and the speech synthesis. After a presen­tation of the source-filter theory, vital to these innovations, the principle of these three technologies is explained. A brief historical is outlined and is complemented by examples of films and musical selections depicting these processes. In light of these elements, we conclude: Sound engineering, and in parti­cular the modification of voice sonority, has become an indis­pensable component in the process of «pop» artistic musical creation.

  19. Mobile phone conversations, listening to music and quiet (electric) cars : are traffic sounds important for safe cycling?

    NARCIS (Netherlands)

    Stelling-Konczak, A.; van Wee, G.P.; Commandeur, J.J.F.; Hagenzieker, M.P.

    2017-01-01

    Listening to music or talking on the phone while cycling as well as the growing number of quiet (electric) cars on the road can make the use of auditory cues challenging for cyclists. The present study examined to what extent and in which traffic situations traffic sounds are important for safe cycling.

  20. Mobile phone conversations, listening to music and quiet (electric) cars : Are traffic sounds important for safe cycling?

    NARCIS (Netherlands)

    Stelling-Konczak, A.; van Wee, G. P.; Commandeur, J. J.F.; Hagenzieker, M.

    2017-01-01

    Listening to music or talking on the phone while cycling as well as the growing number of quiet (electric) cars on the road can make the use of auditory cues challenging for cyclists. The present study examined to what extent and in which traffic situations traffic sounds are important for safe cycling.

  1. Validating a perceptual distraction model in a personal two-zone sound system

    DEFF Research Database (Denmark)

    Rämö, Jussi; Christensen, Lasse; Bech, Søren

    2017-01-01

    This paper focuses on validating a perceptual distraction model, which aims to predict user’s perceived distraction caused by audio-on-audio interference, e.g., two competing audio sources within the same listening space. Originally, the distraction model was trained with music-on-music stimuli...... using a simple loudspeaker setup, consisting of only two loudspeakers, one for the target sound source and the other for the interfering sound source. Recently, the model was successfully validated in a complex personal sound-zone system with speech-on-music stimuli. Second round of validations were...... conducted by physically altering the sound-zone system and running a set of new listening experiments utilizing two sound zones within the sound-zone system. Thus, validating the model using a different sound-zone system with both speech-on-music and music-on-speech stimuli sets. Preliminary results show...

  2. Extended nonnegative tensor factorisation models for musical sound source separation.

    Science.gov (United States)

    FitzGerald, Derry; Cranitch, Matt; Coyle, Eugene

    2008-01-01

    Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.

  3. Extended Nonnegative Tensor Factorisation Models for Musical Sound Source Separation

    Directory of Open Access Journals (Sweden)

    Derry FitzGerald

    2008-01-01

    Full Text Available Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.
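    The two records above extend shift-invariant tensor factorisation with harmonicity and source-filter constraints; the sketch below shows only the much simpler unconstrained NMF baseline that such models build on, factorising a magnitude spectrogram into spectral basis functions and activations with multiplicative updates. Matrix sizes and iteration counts are illustrative assumptions.

```python
import numpy as np

def nmf(V, n_components=8, n_iter=200, eps=1e-9, seed=0):
    """Basic NMF with multiplicative updates minimising Euclidean distance:
    V (freq x time magnitude spectrogram) ~= W (spectral bases) @ H (activations)."""
    rng = np.random.default_rng(seed)
    n_freq, n_time = V.shape
    W = rng.random((n_freq, n_components)) + eps
    H = rng.random((n_components, n_time)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy usage on a random non-negative matrix standing in for a magnitude spectrogram
V = np.abs(np.random.default_rng(1).standard_normal((257, 400)))
W, H = nmf(V)
approx = W @ H   # each column of W is one learned spectral basis (one separated component)
```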

  4. The Importance of Vocal Parameters Correlation

    Directory of Open Access Journals (Sweden)

    Valentin Ghisa

    2016-06-01

    Full Text Available To analyze communication, we need to study the main parameters that describe vocal sounds from the point of view of the efficiency of information content transfer. In this paper we analyze the physical quality of the “on air” information transfer, according to the audio streaming parameters and to the particular phonetic nature of the human factor. Applying this statistical analysis, we aim to identify and record the level of correlation between the acoustical and the vocal parameters, and the impact that this cross-correlation can have on improving communication structures.

  5. MUSIC-CONTENT-ADAPTIVE ROBUST PRINCIPAL COMPONENT ANALYSIS FOR A SEMANTICALLY CONSISTENT SEPARATION OF FOREGROUND AND BACKGROUND IN MUSIC AUDIO SIGNALS

    OpenAIRE

    Papadopoulos , Hélène; Ellis , Daniel P.W.

    2014-01-01

    International audience; Robust Principal Component Analysis (RPCA) is a technique to decompose signals into sparse and low rank components, and has recently drawn the attention of the MIR field for the problem of separating leading vocals from accompaniment, with appealing re-sults obtained on small excerpts of music. However, the perfor-mance of the method drops when processing entire music tracks. We present an adaptive formulation of RPCA that incorporates music content information to guid...
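    The record above adapts RPCA to music content information; the sketch below is only the generic, unadapted RPCA (principal component pursuit) it starts from, solved with a basic inexact augmented-Lagrangian iteration that splits a magnitude spectrogram into a low-rank part (repetitive accompaniment) and a sparse part (often the leading vocals). The parameter choices follow common defaults and are assumptions, not the paper's adaptive formulation.

```python
import numpy as np

def shrink(X, tau):
    """Element-wise soft thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular-value soft thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(M, max_iter=200, tol=1e-6):
    """Principal component pursuit: M ~= L (low rank) + S (sparse)."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    norm_M = np.linalg.norm(M, "fro")
    mu = m * n / (4.0 * np.abs(M).sum() + 1e-12)
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(max_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        residual = M - L - S
        Y = Y + mu * residual
        if np.linalg.norm(residual, "fro") / (norm_M + 1e-12) < tol:
            break
    return L, S

# Toy usage on a random non-negative matrix standing in for a magnitude spectrogram
M = np.abs(np.random.default_rng(0).standard_normal((128, 300)))
L_lowrank, S_sparse = rpca(M)
```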

  6. The Development and Validation of a Rubric to Enhance Performer Feedback for Undergraduate Vocal Solo Performance

    Science.gov (United States)

    Herrell, Katherine A.

    2014-01-01

    This is a study of the development and validation of a rubric to enhance performer feedback for undergraduate vocal solo performance. In the literature, assessment of vocal performance is under-represented, and the value of feedback from the assessment of musical performances, from the point of view of the performer, is nonexistent. The research…

  7. Music Listening Is Creative

    Science.gov (United States)

    Kratus, John

    2017-01-01

    Active music listening is a creative activity in that the listener constructs a uniquely personal musical experience. Most approaches to teaching music listening emphasize a conceptual approach in which students learn to identify various characteristics of musical sound. Unfortunately, this type of listening is rarely done outside of schools. This…

  8. Characteristics of phonation onset in a two-layer vocal fold model.

    Science.gov (United States)

    Zhang, Zhaoyan

    2009-02-01

    Characteristics of phonation onset were investigated in a two-layer body-cover continuum model of the vocal folds as a function of the biomechanical and geometric properties of the vocal folds. The analysis showed that an increase in either the body or cover stiffness generally increased the phonation threshold pressure and phonation onset frequency, although the effectiveness of varying body or cover stiffness as a pitch control mechanism varied depending on the body-cover stiffness ratio. Increasing body-cover stiffness ratio reduced the vibration amplitude of the body layer, and the vocal fold motion was gradually restricted to the medial surface, resulting in more effective flow modulation and higher sound production efficiency. The fluid-structure interaction induced synchronization of more than one group of eigenmodes so that two or more eigenmodes may be simultaneously destabilized toward phonation onset. At certain conditions, a slight change in vocal fold stiffness or geometry may cause phonation onset to occur as eigenmode synchronization due to a different pair of eigenmodes, leading to sudden changes in phonation onset frequency, vocal fold vibration pattern, and sound production efficiency. Although observed in a linear stability analysis, a similar mechanism may also play a role in register changes at finite-amplitude oscillations.

  9. Predicting Achievable Fundamental Frequency Ranges in Vocalization Across Species.

    Directory of Open Access Journals (Sweden)

    Ingo Titze

    2016-06-01

    Full Text Available Vocal folds are used as sound sources in various species, but it is unknown how vocal fold morphologies are optimized for different acoustic objectives. Here we identify two main variables affecting the range of vocal fold vibration frequency, namely vocal fold elongation and tissue fiber stress. A simple vibrating string model is used to predict fundamental frequency ranges across species of different vocal fold sizes. While average fundamental frequency is predominantly determined by vocal fold length (larynx size), the range of fundamental frequency is facilitated by (1) laryngeal muscles that control elongation and by (2) nonlinearity in tissue fiber tension. One adaptation that would increase fundamental frequency range is greater freedom in joint rotation or gliding of two cartilages (thyroid and cricoid), so that vocal fold length change is maximized. Alternatively, tissue layers can develop to bear a disproportionate fiber tension (i.e., a ligament with high-density collagen fibers), increasing the fundamental frequency range and thereby vocal versatility. The range of fundamental frequency across species is thus not simply one-dimensional, but can be conceptualized as the dependent variable in a multi-dimensional morphospace. In humans, this could allow for variations that could be clinically important for voice therapy and vocal fold repair. Alternative solutions could also have importance in vocal training for singing and other highly-skilled vocalizations.
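    The string model in the record above relates fundamental frequency to vocal fold length and tissue fiber stress. A minimal sketch of that relation, assuming the ideal-string formula F0 = (1/2L) * sqrt(sigma/rho) and a nominal tissue density of about 1040 kg/m3, is shown below; the example lengths and stresses are illustrative, not values from the paper.

```python
import numpy as np

def string_f0(length_m, fiber_stress_pa, density=1040.0):
    """Ideal-string estimate of fundamental frequency:
    F0 = (1 / 2L) * sqrt(sigma / rho), with sigma the fiber stress and rho the tissue density."""
    return np.sqrt(fiber_stress_pa / density) / (2.0 * length_m)

# Illustrative values: a 16 mm vocal fold at increasing fiber stress
for stress in (1e3, 1e4, 1e5):   # Pa
    print(f"stress {stress:>8.0f} Pa -> F0 ~ {string_f0(0.016, stress):6.1f} Hz")
```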

  10. Vocal warm-up practices and perceptions in vocalists: a pilot survey.

    Science.gov (United States)

    Gish, Allison; Kunduk, Melda; Sims, Loraine; McWhorter, Andrew J

    2012-01-01

    This pilot study investigated, using a survey, the type, duration, and frequency of vocal warm-up regimens in the singing community. One hundred seventeen participants completed an online survey. Participants included voice students from undergraduate, masters, and doctoral music programs and professional singers. Fifty-four percent of participants reported always using vocal warm-up before singing. Twenty-two percent of the participants used vocal cool-down. The most preferred warm-up duration was 5-10 minutes. Despite using vocal warm-up, 26% of the participants reported experiencing voice problems. Females tended to use vocal warm-up more frequently than males. Females also tended to use longer warm-up sessions than males. Education of the participants did not appear to have any noticeable effect on vocal warm-up practices. The most commonly used singing warm-up exercises were ascending/descending five-note scales, ascending/descending octave scales, legato arpeggios, and glissandi. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  11. Human Computer Music Performance

    OpenAIRE

    Dannenberg, Roger B.

    2012-01-01

    Human Computer Music Performance (HCMP) is the study of music performance by live human performers and real-time computer-based performers. One goal of HCMP is to create a highly autonomous artificial performer that can fill the role of a human, especially in a popular music setting. This will require advances in automated music listening and understanding, new representations for music, techniques for music synchronization, real-time human-computer communication, music generation, sound synt...

  12. Oral and vocal fold diadochokinesis in dysphonic women.

    Science.gov (United States)

    Louzada, Talita; Beraldinelle, Roberta; Berretin-Felix, Giédre; Brasolotto, Alcione Ghedini

    2011-01-01

    The evaluation of oral and vocal fold diadochokinesis (DDK) in individuals with voice disorders may contribute to the understanding of factors that affect balanced vocal production. Scientific studies that make use of this assessment tool support the advance of knowledge in this area, reflecting the development of more appropriate therapeutic planning. The aim was to compare the results of oral and vocal fold DDK in dysphonic women and in women without vocal disorders. For this study, 28 voice recordings of women from 19 to 54 years old, diagnosed with dysphonia and submitted to a voice assessment by a speech pathologist and an otorhinolaryngologist, were used. The control group included 30 nondysphonic women evaluated in prior research on normal adults. Analysis parameters such as the number and duration of emissions, as well as the regularity of repetition of the syllables "pa", "ta", and "ka" and the vowels "a" and "i", were provided by the Advanced Motor Speech Profile (MSP) program, Model 5141, version 2.5.2 (KayPentax). The DDK sequence "pataka" was analyzed quantitatively through the Sound Forge 7.0 program, as well as manually with the audio-visual help of sound waves. Average values of oral and vocal fold DDK in dysphonic and nondysphonic women were compared using Student's t test and were considered significant when p < 0.05. Values were higher in dysphonic women (CvP=10.42%, 12.79%, 12.05%; JittP=2.05%, 6.05%, 3.63%) compared to the control group (CvP=8.86%, 10.95%, 11.20%; JittP=1.82%, 2.98%, 3.15%). Although the results do not indicate any difficulties in oral and laryngeal motor control in the dysphonic group, the greater instability of vocal fold DDK in the experimental group should be considered, and studies of this ability in individuals with communication disorders must be intensified.

  13. Experimenting with Brass Musical Instruments.

    Science.gov (United States)

    LoPresto, Michael C.

    2003-01-01

    Describes experiments to address the properties of brass musical instruments that can be used to demonstrate sound in any level physics course. The experiments demonstrate in a quantitative fashion the effects of the mouthpiece and bell on the frequencies of sound waves and thus the musical pitches produced. (Author/NB)

  14. Finite element modelling of vocal tract changes after voice therapy

    Directory of Open Access Journals (Sweden)

    Vampola T.

    2011-06-01

    Full Text Available Two 3D finite element (FE) models were constructed, based on CT measurements of a subject phonating on [a:] before and after phonation into a tube. Acoustic analysis was performed by exciting the models with acoustic flow velocity at the vocal folds. The generated acoustic pressure of the response was computed in front of the mouth and inside the vocal tract for both FE models. Average amplitudes of the pressure oscillations inside the vocal tract and in front of the mouth were compared to display the cost-efficiency of sound energy transfer at different formant frequencies. The formants F1–F3 correspond to classical vibration modes also solvable by a 1D vocal tract model. However, for higher formants there occur more complicated transversal modes which require 3D modelling. Special attention is given to the higher frequency range (above 3.5 kHz), where transversal modes exist between the piriform sinuses and valleculae. Comparison of the pressure oscillation inside and outside the vocal tract showed that formants differ in their efficiency, F4 (at about 3.5 kHz, i.e. in the speaker’s or singer’s formant region) being the most effective. The higher formants created a clear formant cluster around 4 kHz after the vocal exercise with the tube. Since the human ear is most sensitive to frequencies between 2 and 4 kHz, concentration of sound energy in this frequency region (F4–F5) is effective for communication. The results suggest that exercising using phonation into tubes helps in improving vocal economy.
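    For the lower formants that the record above notes are also solvable with a 1D vocal tract model, a textbook quarter-wavelength tube approximation (closed at the glottis, open at the lips) gives F_n = (2n - 1) * c / (4L). The sketch below evaluates it for a uniform 17.5 cm tract; these are generic assumptions, not the study's CT-based geometry.

```python
def tube_formants(length_m=0.175, c=350.0, n_formants=5):
    """Quarter-wave resonances of a uniform tube closed at one end:
    F_n = (2n - 1) * c / (4 * L)."""
    return [(2 * n - 1) * c / (4.0 * length_m) for n in range(1, n_formants + 1)]

print(tube_formants())   # roughly [500, 1500, 2500, 3500, 4500] Hz
```

    Note how the fourth resonance of this crude model already falls near 3.5 kHz, the singer's formant region highlighted in the abstract.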

  15. Stereotypic Laryngeal and Respiratory Motor Patterns Generate Different Call Types in Rat Ultrasound Vocalization

    Science.gov (United States)

    RIEDE, TOBIAS

    2014-01-01

    Rodents produce highly variable ultrasound whistles as communication signals unlike many other mammals, who employ flow-induced vocal fold oscillations to produce sound. The role of larynx muscles in controlling sound features across different call types in ultrasound vocalization (USV) was investigated using laryngeal muscle electromyographic (EMG) activity, subglottal pressure measurements and vocal sound output in awake and spontaneously behaving Sprague–Dawley rats. Results support the hypothesis that glottal shape determines fundamental frequency. EMG activities of thyroarytenoid and cricothyroid muscles were aligned with call duration. EMG intensity increased with fundamental frequency. Phasic activities of both muscles were aligned with fast changing fundamental frequency contours, for example in trills. Activities of the sternothyroid and sternohyoid muscles, two muscles involved in vocal production in other mammals, are not critical for the production of rat USV. To test how stereotypic laryngeal and respiratory activity are across call types and individuals, sets of ten EMG and subglottal pressure parameters were measured in six different call types from six rats. Using discriminant function analysis, on average 80% of parameter sets were correctly assigned to their respective call type. This was significantly higher than the chance level. Since fundamental frequency features of USV are tightly associated with stereotypic activity of intrinsic laryngeal muscles and muscles contributing to build-up of subglottal pressure, USV provide insight into the neurophysiological control of peripheral vocal motor patterns. PMID:23423862

  16. An analysis of rhythm in Japanese and English popular music

    NARCIS (Netherlands)

    Sadakata, M.; Desain, P.W.M.; Honing, H.J.; Patel, A.D.; Iversen, J.R.

    2003-01-01

    Recently, there has been evidence that the rhythms of English and French non-vocal musical themes differ significantly in the contrastiveness of successive durations, in the same manner as those of the spoken languages, suggesting that a composer's native language exerts an influence on the music

  17. Phonetic characteristics of vocalizations during pain

    DEFF Research Database (Denmark)

    Niebuhr, Oliver; Lautenbacher, Stefan; Salinas-Ranneberg, Melissa

    2017-01-01

    Introduction and Objectives: There have as yet been only few attempts to phonetically characterize the vocalizations of pain, although there is wide agreement that moaning, groaning, or other nonverbal utterances can be indicative of pain. We studied the production of the vowels “u,” “a,” “i”, and “schwa” (central vowel, sounding like a darker “e” as in hesitations like “ehm”) as experimental approximations to natural vocalizations. Methods: In 50 students vowel production and self-report ratings were assessed during painful and nonpainful heat stimulation (hot water immersion) as well as during baseline... pain. Furthermore, changes from nonpainful to painful stimulations in these parameters also significantly predicted concurrent changes in pain ratings. Conclusion: Vocalization characteristics of pain seem to be best described by an increase in pitch and in loudness. Future studies using more specific...
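    The conclusion above characterizes pain vocalizations mainly by increases in pitch and loudness. A minimal sketch of extracting those two quantities from a recorded vowel is shown below; it assumes the third-party librosa library is available and that a file named vowel.wav exists, neither of which comes from the study.

```python
import numpy as np
import librosa

# Load a mono recording of a sustained vowel (hypothetical file name)
y, sr = librosa.load("vowel.wav", sr=None, mono=True)

# Fundamental frequency (pitch) track via the YIN estimator
f0 = librosa.yin(y, fmin=75, fmax=500, sr=sr)

# Frame-wise RMS energy converted to dB as a simple loudness proxy
rms = librosa.feature.rms(y=y)[0]
level_db = 20 * np.log10(rms + 1e-12)

print(f"median F0: {np.median(f0):.1f} Hz, median level: {np.median(level_db):.1f} dBFS")
```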

  18. Babies in traffic: infant vocalizations and listener sex modulate auditory motion perception.

    Science.gov (United States)

    Neuhoff, John G; Hamilton, Grace R; Gittleson, Amanda L; Mejia, Adolfo

    2014-04-01

    Infant vocalizations and "looming sounds" are classes of environmental stimuli that are critically important to survival but can have dramatically different emotional valences. Here, we simultaneously presented listeners with a stationary infant vocalization and a 3D virtual looming tone for which listeners made auditory time-to-arrival judgments. Negatively valenced infant cries produced more cautious (anticipatory) estimates of auditory arrival time of the tone over a no-vocalization control. Positively valenced laughs had the opposite effect, and across all conditions, men showed smaller anticipatory biases than women. In Experiment 2, vocalization-matched vocoded noise stimuli did not influence concurrent auditory time-to-arrival estimates compared with a control condition. In Experiment 3, listeners estimated the egocentric distance of a looming tone that stopped before arriving. For distant stopping points, women estimated the stopping point as closer when the tone was presented with an infant cry than when it was presented with a laugh. For near stopping points, women showed no differential effect of vocalization type. Men did not show differential effects of vocalization type at either distance. Our results support the idea that both the sex of the listener and the emotional valence of infant vocalizations can influence auditory motion perception and can modulate motor responses to other behaviorally relevant environmental sounds. We also find support for previous work that shows sex differences in emotion processing are diminished under conditions of higher stress.

  19. Music holographic physiotherapy by laser

    Science.gov (United States)

    Liao, Changhuan

    1996-09-01

    Based on the relationship between music and nature, the paper compares laser and light with musical sound on the principles of synergetics, describes music physically and objectively, and proposes a music holographic therapy by laser. It may have certain implications for the study of mechanisms and for the clinical practice of music therapy.

  20. Investigation of the Sound Pressure Level (SPL) of earphones during music listening with the use of physical ear canal models

    Science.gov (United States)

    Aying, K. P.; Otadoy, R. E.; Violanda, R.

    2015-06-01

    This study investigates the sound pressure level (SPL) of insert-type earphones that are commonly used for music listening by the general populace. The SPL produced by the earphones of different respondents was measured by plugging each earphone into a physical ear canal model. Durations of earphone use for music listening were also gathered through short interviews. Results show that 21% of the respondents exceeded the loudness/duration limits recommended by the World Health Organization (WHO).
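    Both PMP records above relate measured listening levels to recommended exposure limits. The sketch below applies the usual equal-energy rule of thumb, under which the permissible daily duration halves for every 3 dB above an 8-hour limit at 85 dBA; the criterion level and exchange rate are assumptions about which guideline is applied, not figures from either study.

```python
def permissible_hours(level_dba, criterion_db=85.0, criterion_hours=8.0, exchange_rate_db=3.0):
    """Equal-energy rule: allowed daily exposure halves per `exchange_rate_db` above the criterion."""
    return criterion_hours / (2.0 ** ((level_dba - criterion_db) / exchange_rate_db))

# Levels spanning the 73-102 dB range measured in the field study above
for level in (73, 83, 94, 102):
    print(f"{level} dBA -> about {permissible_hours(level):.2f} h of listening per day")
```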

  1. Hearing the Music in the Spectrum of Hydrogen

    Science.gov (United States)

    LoPresto, Michael C.

    2016-01-01

    Throughout a general education course on sound and light aimed at music and art students, analogies between subjective perceptions of objective properties of sound and light waves are a recurring theme. Demonstrating that the pitch and loudness of musical sounds are related to the frequency and intensity of a sound wave is simple and students are…

  2. RIJOQ: VOCAL MUSIC OF DAYAK BENUAQ FROM KUTAI, EAST KALIMANTAN

    Directory of Open Access Journals (Sweden)

    Ester

    2014-06-01

    Full Text Available Rijoq, a Dayak Benuaq vocal music, has been passed down from generation to generation through oral tradition for hundreds of years. When and how it was founded, developed, and preserved in the Dayak community remains questionable, but according to some research done by scholars, Rijoq has its origin in Dayak Bawo, a tribe living in the borderlands between Central, South, and East Kalimantan. Rijoq is normally performed during festivities such as initiation, reconciliation, menugal (rice planting), and potong kerbau (buffalo slaughtering). Rijoq’s texts carry very deep messages which are considered still relevant to today’s life context: on the one hand, they speak about the horizontal relationship between human beings and their fellows, and between human beings and nature; on the other hand, the vertical relationship between human beings and their Creator. The primary concern of this research is to preserve Rijoq as written and recorded documents. So far, the research has been successful in notating and recording five kinds of Rijoq, namely Peket Muat Bolupm (working together to build lives), Rijoq Patuk Ajer (advice), Rijoq Natal Tautn Bayuq (Christmas and New Year), Rijoq Isiq Asekng Sookng Bawe (the expression of the feelings of a man who is falling in love with a woman), and Rijoq Lati Tana Orekng Tepa (forests and lands are disappearing and gone). This paper is not intended to discuss all five kinds of Rijoq; Isiq Asekng Sookng Bawe is chosen as it is the oldest and the most difficult Rijoq among them.

  3. Functional results after external vocal fold medialization thyroplasty with the titanium vocal fold medialization implant.

    Science.gov (United States)

    Schneider, Berit; Denk, Doris-Maria; Bigenzahn, Wolfgang

    2003-04-01

    A persistent insufficiency of glottal closure is mostly a consequence of a unilateral vocal fold movement impairment. It can also be caused by vocal fold atrophy or scarring processes with regular bilateral respiratory vocal fold function. Because of consequential voice, breathing, and swallowing impairments, a functional surgical treatment is required. The goal of the study was to outline the functional results after medialization thyroplasty with the titanium vocal fold medialization implant according to Friedrich. In the period of 1999 to 2001, an external vocal fold medialization using the titanium implant was performed on 28 patients (12 women and 16 men). The patients were in the age range of 19 to 84 years. Twenty-two patients had a paralysis of the left-side vocal fold, and six patients, of the right-side vocal fold. Detailed functional examinations were executed on all patients before and after the surgery: perceptive voice sound analysis according to the "roughness, breathiness, and hoarseness" method, judgment of the s/z ratio and voice dysfunction index, voice range profile measurements, videostroboscopy, and pulmonary function tests. In case of dysphagia/aspiration, videofluoroscopy of swallowing was also performed. The respective data were statistically analyzed (paired t test, Wilcoxon-test). All patients reported on improvement of voice, swallowing, and breathing functions postoperatively. Videostroboscopy revealed an almost complete glottal closure after surgery in all of the patients. All voice-related parameters showed a significant improvement. An increase of the laryngeal resistance by the medialization procedure could be excluded by analysis of the pulmonary function test. The results confirm the external medialization of the vocal folds as an adequate method in the therapy of voice, swallowing, and breathing impairment attributable to an insufficient glottal closure. The titanium implant offers, apart from good tissue tolerability, the

  4. Effective music therapy techniques in the treatment of nonfluent aphasia.

    Science.gov (United States)

    Tomaino, Concetta M

    2012-04-01

    In music therapy for nonfluent aphasia patients who have difficulty producing meaningful words, phrases, and sentences, various benefits of singing have been identified: strengthened breathing and vocal ability, improved articulation and prosody of speech, and increased verbal and nonverbal communicative behaviors. This paper will introduce these various techniques used in clinical music therapy, and summarize findings based on our recent study to illustrate the strength of different techniques emphasizing rhythm, pitch, memory, and vocal/oral motor components dealing with different symptoms. The efficacy of each component is enhanced or diminished by the choice of music and the way it is interactively delivered. This indicates that neural mechanisms underlying speech improvement vary greatly with available acoustic and social cues in aphasic brain. © 2012 New York Academy of Sciences.

  5. Beacons of Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2018-01-01

    The chapter discusses expectations and imaginations vis-à-vis the concert hall of the twenty-first century. It outlines some of the central historical implications of western culture’s haven for sounding music. Based on the author’s study of the Icelandic concert-house Harpa, the chapter considers...... how these implications, together with the prime mover’s visions, have been transformed as private investors and politicians took over. The chapter furthermore investigates the objectives regarding musical sound and the far-reaching demands concerning acoustics that modern concert halls are required...

  6. On music Therapy : Music and Healing

    OpenAIRE

    栗林, 文雄

    1996-01-01

    The theory of sound as energy is based on the relationship between music and positive human feelings. It is discussed that music therapy is effective in the care and cure of the elderly with behavioral disorders such as senile dementia, of patients in palliative medicine wards with cancer, and of patients with various kinds of mental disorders such as schizophrenia, alcohol and drug addiction, and so on.

  7. Musical and linguistic expertise influence pre-attentive and attentive processing of non-speech sounds.

    Science.gov (United States)

    Marie, Céline; Kujala, Teija; Besson, Mireille

    2012-04-01

    The aim of this experiment was two-fold. Our first goal was to determine whether linguistic expertise influences the pre-attentive [as reflected by the Mismatch Negativity - (MMN)] and the attentive processing (as reflected by behavioural discrimination accuracy) of non-speech, harmonic sounds. The second was to directly compare the effects of linguistic and musical expertise. To this end, we compared non-musician native speakers of a quantity language, Finnish, in which duration is a phonemically contrastive cue, with French musicians and French non-musicians. Results revealed that pre-attentive and attentive processing of duration deviants was enhanced in Finn non-musicians and French musicians compared to French non-musicians. By contrast, MMN in French musicians was larger than in both Finns and French non-musicians for frequency deviants, whereas no between-group differences were found for intensity deviants. By showing similar effects of linguistic and musical expertise, these results argue in favor of common processing of duration in music and speech. Copyright © 2010 Elsevier Srl. All rights reserved.

  8. Emotional Attributes of the Musical Sound and Mood Reactions of the Listeners

    Institute of Scientific and Technical Information of China (English)

    杨倩; 孟子厚

    2013-01-01

    Musical sounds were labelled and classified according to their emotional content, in order to observe the influence that different emotional categories of music have on listeners' moods and to analyse the correlation between the listeners' mood components and the emotional attributes of the musical sound.

  9. Further Evaluation of Methods to Identify Matched Stimulation

    OpenAIRE

    Rapp, John T

    2007-01-01

    The effects of preferred stimulation on the vocal stereotypy of 2 individuals were evaluated in two experiments. The results of Experiment 1 showed that (a) the vocal stereotypy of both participants persisted in the absence of social consequences, (b) 1 participant manipulated toys that did and did not produce auditory stimulation, but only sound-producing toys decreased his vocal stereotypy, and (c) only noncontingent music decreased vocal stereotypy for the other participant, but stereotypy ...

  10. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged...... by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down....

  11. In situ vocal fold properties and pitch prediction by dynamic actuation of the songbird syrinx

    DEFF Research Database (Denmark)

    Düring, Daniel N; Knörlein, Benjamin J; Elemans, Coen P H

    2017-01-01

    The biomechanics of sound production forms an integral part of the neuromechanical control loop of avian vocal motor control. However, we critically lack quantification of basic biomechanical parameters describing the vocal organ, the syrinx, such as material properties of syringeal elements, forces and torques exerted on, and motion of the syringeal skeleton during song. Here, we present a novel marker-based 3D stereoscopic imaging technique to reconstruct 3D motion of servo-controlled actuation of syringeal muscle insertion sites in vitro and focus on two muscles controlling sound pitch ... motion and forces, acoustic effects of muscle recruitment, and calibration of computational birdsong models, enabling experimental access to the entire neuromechanical control loop of vocal motor control.

  12. Assessing Vocal Performances Using Analytical Assessment: A Case Study

    Science.gov (United States)

    Gynnild, Vidar

    2016-01-01

    This study investigated ways to improve the appraisal of vocal performances within a national academy of music. Since a criterion-based assessment framework had already been adopted, the conceptual foundation of an assessment rubric was used as a guide in an action research project. The group of teachers involved wanted to explore thinking…

  13. Improving left spatial neglect through music scale playing.

    Science.gov (United States)

    Bernardi, Nicolò Francesco; Cioffi, Maria Cristina; Ronchi, Roberta; Maravita, Angelo; Bricolo, Emanuela; Zigiotto, Luca; Perucca, Laura; Vallar, Giuseppe

    2017-03-01

    The study assessed whether the auditory reference provided by a music scale could improve spatial exploration of a standard musical instrument keyboard in right-brain-damaged patients with left spatial neglect. As performing music scales involves the production of predictable successive pitches, the expectation of the subsequent note may encourage patients to explore a larger extent of space on the affected left side during the production of music scales from right to left. Eleven right-brain-damaged stroke patients with left spatial neglect, 12 patients without neglect, and 12 age-matched healthy participants played descending scales on a music keyboard. In a counterbalanced design, the participants' exploratory performance was assessed while producing scales in three feedback conditions: with congruent sound, no sound, or random sound feedback provided by the keyboard. The number of keys played and the timing of key presses were recorded. Spatial exploration by patients with left neglect was superior with congruent sound feedback, compared to both the silence and random sound conditions. Both the congruent and incongruent sound conditions were associated with a greater deceleration in all groups. The frame provided by the music scale improves exploration of the left side of space, contralateral to the right hemisphere, which is damaged in patients with left neglect. Performing a scale with congruent sounds may trigger, to some extent, preserved auditory and spatial multisensory representations of successive sounds, thus influencing the time course of space scanning and ultimately resulting in more extensive spatial exploration. These findings offer new perspectives also for the rehabilitation of the disorder. © 2015 The British Psychological Society.

  14. A feasibility study of predictable and unpredictable surf-like sounds for tinnitus therapy using personal music players.

    Science.gov (United States)

    Durai, Mithila; Kobayashi, Kei; Searchfield, Grant D

    2018-05-28

    To evaluate the feasibility of predictable or unpredictable amplitude-modulated sounds for tinnitus therapy, the study consisted of two parts: (1) an adaptation experiment, in which loudness level matches and rating scales (10-point) for loudness and distress were obtained at a silent baseline and at the end of three counterbalanced 30-min exposures (silence, predictable, and unpredictable); and (2) a qualitative 2-week sound therapy feasibility trial, in which participants took home a personal music player (PMP). Part 1 included 23 individuals with chronic tinnitus and Part 2 seven individuals randomly selected from Part 1. Self-reported tinnitus loudness and annoyance were significantly lower than baseline ratings after acute unpredictable sound exposure. Tinnitus annoyance ratings were also significantly lower than baseline, but the effect was small. The feasibility trial identified that participant preferences for sounds varied. Three participants did not obtain any benefit from either sound. Three participants preferred unpredictable over predictable sounds. Some participants had difficulty using the PMP, and the average self-reported hours of use were low (less than 1 h/day). Unpredictable surf-like sounds played using a PMP are a feasible tinnitus treatment. Further work is required to improve the acceptance of the sound and the ease of PMP use.
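    To illustrate what "predictable" versus "unpredictable" amplitude-modulated surf-like sound could mean in practice, the sketch below modulates white noise either with a fixed-rate sinusoidal envelope or with an envelope whose rate wanders randomly over time. The modulation rates and depths are illustrative assumptions, not the stimuli used in the trial.

```python
import numpy as np

def surf_noise(duration=10.0, sr=44100, rate_hz=0.25, predictable=True, seed=0):
    """Amplitude-modulated noise as a rough 'surf-like' sound.
    Predictable: fixed modulation rate. Unpredictable: rate wanders randomly over time."""
    rng = np.random.default_rng(seed)
    n = int(duration * sr)
    noise = rng.standard_normal(n)
    t = np.arange(n) / sr
    if predictable:
        phase = 2 * np.pi * rate_hz * t
    else:
        # integrate a randomly wandering (but always positive) instantaneous rate
        rates = np.clip(rate_hz * (1.0 + 0.8 * rng.standard_normal(int(duration) + 1)), 0.05, None)
        inst_rate = np.interp(t, np.linspace(0, duration, rates.size), rates)
        phase = 2 * np.pi * np.cumsum(inst_rate) / sr
    envelope = 0.5 * (1.0 + np.sin(phase))        # 100% modulation depth
    return noise * envelope

predictable_stimulus = surf_noise(predictable=True)
unpredictable_stimulus = surf_noise(predictable=False)
```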

  15. The Decline and Revival of Music Education in New South Wales Schools, 1920-1956

    Science.gov (United States)

    Chaseling, Marilyn; Boyd, William E.

    2014-01-01

    This paper overviews the decline and revival of music education in New South Wales schools from 1920 to 1956. Commencing with a focus on vocal music during the period up to 1932, a time of decline in music teaching, the paper examines initiatives introduced in 1933 to address shortcomings in music education, and the subsequent changes in…

  16. Two organizing principles of vocal production: Implications for nonhuman and human primates.

    Science.gov (United States)

    Owren, Michael J; Amoss, R Toby; Rendall, Drew

    2011-06-01

    Vocal communication in nonhuman primates receives considerable research attention, with many investigators arguing for similarities between this calling and speech in humans. Data from development and neural organization show a central role of affect in monkey and ape sounds, however, suggesting that their calls are homologous to spontaneous human emotional vocalizations while having little relation to spoken language. Based on this evidence, we propose two principles that can be useful in evaluating the many and disparate empirical findings that bear on the nature of vocal production in nonhuman and human primates. One principle distinguishes production-first from reception-first vocal development, referring to the markedly different role of auditory-motor experience in each case. The second highlights a phenomenon dubbed dual neural pathways, specifically that when a species with an existing vocal system evolves a new functionally distinct vocalization capability, it occurs through emergence of a second parallel neural pathway rather than through expansion of the extant circuitry. With these principles as a backdrop, we review evidence of acoustic modification of calling associated with background noise, conditioning effects, audience composition, and vocal convergence and divergence in nonhuman primates. Although each kind of evidence has been interpreted to show flexible cognitively mediated control over vocal production, we suggest that most are more consistent with affectively grounded mechanisms. The lone exception is production of simple, novel sounds in great apes, which is argued to reveal at least some degree of volitional vocal control. If also present in early hominins, the cortically based circuitry surmised to be associated with these rudimentary capabilities likely also provided the substrate for later emergence of the neural pathway allowing volitional production in modern humans. © 2010 Wiley-Liss, Inc.

  17. Principles of musical acoustics

    CERN Document Server

    Hartmann, William M

    2013-01-01

    Principles of Musical Acoustics focuses on the basic principles in the science and technology of music. Musical examples and specific musical instruments demonstrate the principles. The book begins with a study of vibrations and waves, in that order. These topics constitute the basic physical properties of sound, one of two pillars supporting the science of musical acoustics. The second pillar is the human element, the physiological and psychological aspects of acoustical science. The perceptual topics include loudness, pitch, tone color, and localization of sound. With these two pillars in place, it is possible to go in a variety of directions. The book treats, in turn, the topics of room acoustics, audio (both analog and digital), broadcasting, and speech. It ends with chapters on the traditional musical instruments, organized by family. The mathematical level of this book assumes that the reader is familiar with elementary algebra. Trigonometric functions, logarithms and powers also appear in the book, but co...

  18. An abstract approach to music.

    Energy Technology Data Exchange (ETDEWEB)

    Kaper, H. G.; Tipei, S.

    1999-04-19

    In this article we have outlined a formal framework for an abstract approach to music and music composition. The model is formulated in terms of objects that have attributes, obey relationships, and are subject to certain well-defined operations. The motivation for this approach uses traditional terms and concepts of music theory, but the approach itself is formal and uses the language of mathematics. The universal object is an audio wave; partials, sounds, and compositions are special objects, which are placed in a hierarchical order based on time scales. The objects have both static and dynamic attributes. When we realize a composition, we assign values to each of its attributes: a (scalar) value to a static attribute, an envelope and a size to a dynamic attribute. A composition is then a trajectory in the space of aural events, and the complex audio wave is its formal representation. Sounds are fibers in the space of aural events, from which the composer weaves the trajectory of a composition. Each sound object in turn is made up of partials, which are the elementary building blocks of any music composition. The partials evolve on the fastest time scale in the hierarchy of partials, sounds, and compositions. The ideas outlined in this article are being implemented in a digital instrument for additive sound synthesis and in software for music composition. A demonstration of some preliminary results has been submitted by the authors for presentation at the conference.
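
    The article describes its framework in prose and mathematics only; as a hedged illustration of how the partial/sound/composition hierarchy with static and dynamic attributes might look in code, the sketch below realizes a composition by additive synthesis. The class and attribute names are the editor's assumptions, not the authors' implementation.

```python
import math
from dataclasses import dataclass, field
from typing import Callable, List

# A static attribute is a scalar value; a dynamic attribute is an
# envelope (a function of time) plus a size (its duration).
Envelope = Callable[[float], float]

@dataclass
class DynamicAttribute:
    envelope: Envelope
    size: float  # duration in seconds

@dataclass
class Partial:
    """Elementary building block, evolving on the fastest time scale."""
    frequency: float             # static attribute (Hz)
    amplitude: DynamicAttribute  # dynamic attribute

@dataclass
class Sound:
    """A 'fiber' in the space of aural events: a bundle of partials."""
    onset: float
    partials: List[Partial] = field(default_factory=list)

@dataclass
class Composition:
    """A trajectory woven from sounds; its realization is an audio wave."""
    sounds: List[Sound] = field(default_factory=list)

    def realize(self, t: float) -> float:
        """Sample the complex audio wave at time t (additive synthesis)."""
        value = 0.0
        for s in self.sounds:
            local_t = t - s.onset
            for p in s.partials:
                if 0.0 <= local_t < p.amplitude.size:
                    value += p.amplitude.envelope(local_t) * math.sin(
                        2 * math.pi * p.frequency * t)
        return value
```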

  19. Music and the auditory brain: where is the connection?

    Directory of Open Access Journals (Sweden)

    Israel eNelken

    2011-09-01

    Full Text Available Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand how some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system with music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably occur high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, which is represented first in the auditory brain.

  20. New and Old User Interface Metaphors in Music Production

    DEFF Research Database (Denmark)

    Walther-Hansen, Mads

    2017-01-01

    This paper outlines a theoretical framework for interaction with sound in music mixing. Using cognitive linguistic theory and studies exploring the spatiality of recorded music, it is argued that the logic of music mixing builds on three master metaphors—the signal flow metaphor, the sound stage...... metaphor and the container metaphor. I show how the metaphorical basis for interacting with sound in music mixing has changed with the development of recording technology, new aesthetic ideals and changing terminology. These changes are studied as expressions of underlying thought patterns that govern how...... music producers and engineers make sense of their actions. In conclusion, this leads to suggestions for a theoretical framework through which more intuitive music mixing interfaces may be developed in the future....

  1. Validating a perceptual distraction model using a personal two-zone sound system

    DEFF Research Database (Denmark)

    Rämö, Jussi; Christensen, Lasse; Bech, Søren

    2017-01-01

    This paper focuses on validating a perceptual distraction model, which aims to predict a user's perceived distraction caused by audio-on-audio interference. Originally, the distraction model was trained with music targets and interferers using a simple loudspeaker setup, consisting of only two...... sound zones within the sound-zone system, thus validating the model using a different sound-zone system with both speech-on-music and music-on-speech stimuli sets. The results show that the model performance is equally good in both zones, i.e., with both speech-on-music and music-on-speech stimuli...

  2. Sound and Music in A Mixed Martial Arts Gym: Exploring the Functions and Effects of Organized Noise as an Aid to Training and Fighting

    Directory of Open Access Journals (Sweden)

    John Paul

    2014-05-01

    Full Text Available This paper has two distinct yet interrelated parts. First, it is a study in the sociology of sound and music—an exploration of how the phenomenon of noise organizes and structures human behavior. Second, it is an auditory ethnographic excursion into the world of mixed martial arts (MMA) fighting. Using a general qualitative approach grounded in the soundscape, participant observation and semi-structured interviews, we query MMA fighters' experiences with sound and music, noting how these "sonic things" become key aids in bonding, training, and fighting. Lastly, we describe how participants use music to achieve various motivational and psychophysical outcomes.

  3. From Leisure to Work: Amateur Musicians Taking up Instrumental or Vocal Teaching as a Second Career

    Science.gov (United States)

    Taylor, Angela; Hallam, Susan

    2011-01-01

    This article aims to increase our understanding of how amateur musicians become teachers as a change of career, how they use their musical and life skills in their teaching, and how their teaching impacts on their musical identity. The questionnaire responses of 67 career-change instrumental and vocal teachers showed evidence of their strong…

  4. Attitudes of college music students towards noise in youth culture.

    Science.gov (United States)

    Chesky, Kris; Pair, Marla; Lanford, Scott; Yoshimura, Eri

    2009-01-01

    The effectiveness of a hearing loss prevention program within a college may depend on the attitudes of students majoring in music. The purpose of this study was to assess the attitudes of music majors toward noise and to compare them to those of students not majoring in music. Participants (N = 467) filled out a questionnaire designed to assess attitudes toward noise in youth culture and attitudes toward influencing their sound environment. Results showed that students majoring in music have a healthier attitude toward sound than students not majoring in music. Findings also showed that music majors are more aware of and attentive to noise in general, more likely to perceive sound that may be risky to hearing as something negative, and more likely to carry out behaviors to decrease personal exposure to loud sounds. Because of these differences, music majors may be more likely than other students to respond to and benefit from a hearing loss prevention program.

  5. Difference between the vocalizations of two sister species of pigeons explained in dynamical terms.

    Science.gov (United States)

    Alonso, R Gogui; Kopuchian, Cecilia; Amador, Ana; Suarez, Maria de Los Angeles; Tubaro, Pablo L; Mindlin, Gabriel B

    2016-05-01

    Vocal communication is a unique example where the nonlinear nature of the periphery can give rise to complex sounds even when driven by simple neural instructions. In this work we studied the case of two closely related bird species, Patagioenas maculosa and Patagioenas picazuro, whose vocalizations differ only in timbre. The temporal modulation of the fundamental frequency is similar in both cases, the difference being the existence of sidebands around the fundamental frequency in P. maculosa. We tested the hypothesis that the qualitative difference between these vocalizations lies in the nonlinear nature of the syrinx. In particular, we propose that the roughness of maculosa's vocalizations is due to an asymmetry between the right and left vibratory membranes, whose nonlinear dynamics generate the sound. To test the hypothesis, we built a biomechanical model of vocal production with an asymmetry parameter Q that controls the level of asymmetry between these membranes. Using this model we generated synthetic vocalizations with the principal acoustic features of both species. In addition, we confirmed the anatomical predictions by post-mortem inspection of the syrinxes, showing that the species with the tonal song (picazuro) has a more symmetrical pair of membranes than maculosa.
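
    The sketch below is not the authors' biomechanical model; it is a much cruder stand-in that only illustrates the acoustic consequence described in the abstract: when two membrane-like oscillators are detuned by an asymmetry parameter Q and coupled through a nonlinearity, intermodulation sidebands appear around the fundamental and the sound becomes rough. All values are illustrative assumptions.

```python
import numpy as np

def syrinx_toy(f0=500.0, q=1.0, duration=1.0, fs=44100):
    """Toy illustration (not the published model): two membrane-like
    oscillators at f0 and q*f0 are summed and passed through a saturating
    nonlinearity. For q = 1 (symmetric membranes) the output is nearly
    tonal; for q != 1 intermodulation products appear as sidebands around
    the fundamental, heard as roughness."""
    t = np.arange(int(duration * fs)) / fs
    left = np.sin(2 * np.pi * f0 * t)
    right = np.sin(2 * np.pi * q * f0 * t)
    return np.tanh(2.0 * (left + right))   # nonlinear coupling/saturation

tonal = syrinx_toy(q=1.00)    # picazuro-like, more symmetric membranes
rough = syrinx_toy(q=1.04)    # maculosa-like, asymmetric membranes
```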

  6. Evaluation of a Stereo Music Preprocessing Scheme for Cochlear Implant Users.

    Science.gov (United States)

    Buyens, Wim; van Dijk, Bas; Moonen, Marc; Wouters, Jan

    2018-01-01

    Although for most cochlear implant (CI) users good speech understanding is reached (at least in quiet environments), the perception and the appraisal of music are generally unsatisfactory. The improvement in music appraisal was evaluated in CI participants by using a stereo music preprocessing scheme implemented on a take-home device, in a comfortable listening environment. The preprocessing allowed adjusting the balance among vocals/bass/drums and other instruments, and was evaluated for different genres of music. The correlation between the preferred settings and the participants' speech and pitch detection performance was investigated. During the initial visit preceding the take-home test, the participants' speech-in-noise perception and pitch detection performance were measured, and a questionnaire about their music involvement was completed. The take-home device was provided, including the stereo music preprocessing scheme and seven playlists with six songs each. The participants were asked to adjust the balance by means of a turning wheel to make the music sound most enjoyable, and to repeat this three times for all songs. Twelve postlingually deafened CI users participated in the study. The data were collected by means of a take-home device, which preserved all the preferred settings for the different songs. Statistical analysis was done with a Friedman test (with post hoc Wilcoxon signed-rank test) to check the effect of "Genre." The correlations were investigated with Pearson's and Spearman's correlation coefficients. All participants preferred a balance significantly different from the original balance. Differences across participants were observed which could not be explained by perceptual abilities. An effect of "Genre" was found, showing significantly smaller preferred deviation from the original balance for Golden Oldies compared to the other genres. The stereo music preprocessing scheme showed an improvement in music appraisal with complex music and

  7. Efeito imediato de técnicas vocais em mulheres sem queixa vocal Immediate effect of vocal techniques in women without vocal complaint

    Directory of Open Access Journals (Sweden)

    Eliane Cristina Pereira

    2011-10-01

    Full Text Available PURPOSE: to verify the immediate effect of the vocal techniques of vibration, nasal sound and overarticulation on the voice and larynx of women without vocal complaints. METHOD: 32 female subjects took part in the study, aged 20 to 45 years, without vocal complaints and with vocal quality rated between normal and mildly deviant. Subjects underwent auditory-perceptual analysis (visual analogue scale) of the sustained vowel /ε/ and of spontaneous speech, acoustic analysis, and laryngostroboscopy before and after performing the techniques. RESULTS: the auditory-perceptual analysis revealed significant improvement in the parameters overall voice impression, hoarseness and stability for the vowel /ε/, and in articulation for spontaneous speech. The acoustic analysis showed significant improvement in jitter and shimmer. Laryngostroboscopy showed significant improvement in glottic closure and in the mucosal wave movement of the vocal folds. CONCLUSION: the vocal techniques studied can provide significant immediate improvement in vocal quality and laryngeal configuration.

  8. Vocal effort modulates the motor planning of short speech structures

    Science.gov (United States)

    Taitz, Alan; Shalom, Diego E.; Trevisan, Marcos A.

    2018-05-01

    Speech requires programming the sequence of vocal gestures that produce the sounds of words. Here we explored the timing of this program by asking participants to pronounce, as quickly as possible, a sequence of consonant-consonant-vowel (CCV) structures appearing on screen. We measured the delay between visual presentation and voice onset. In the case of plosive consonants, produced by sharp and well-defined movements of the vocal tract, we found that delays are positively correlated with the duration of the transition between consonants. We then used a battery of statistical tests and mathematical vocal models to show that delays reflect the motor planning of CCVs and that transitions are proxy indicators of the vocal effort needed to produce them. These results support the idea that the effort required to produce the sequence of movements of a vocal gesture modulates the onset of the motor plan.

  9. Speech versus singing: Infants choose happier sounds

    Directory of Open Access Journals (Sweden)

    Marieve eCorbeil

    2013-06-01

    Full Text Available Infants prefer speech to non-vocal sounds and to non-human vocalizations, and they prefer happy-sounding speech to neutral speech. They also exhibit an interest in singing, but there is little knowledge of their relative interest in speech and singing. The present study explored infants' attention to unfamiliar audio samples of speech and singing. In Experiment 1, infants 4-13 months of age were exposed to happy-sounding infant-directed speech versus hummed lullabies by the same woman. They listened significantly longer to the speech, which had considerably greater acoustic variability and expressiveness, than to the lullabies. In Experiment 2, infants of comparable age who heard the lyrics of a Turkish children's song spoken versus sung in a joyful/happy manner did not exhibit differential listening. Infants in Experiment 3 heard the happily sung lyrics of the Turkish children's song versus a version that was spoken in an adult-directed or affectively neutral manner. They listened significantly longer to the sung version. Overall, happy voice quality rather than vocal mode (speech or singing) was the principal contributor to infant attention, regardless of age.

  10. Sound Health: Music Gets You Moving and More

    Science.gov (United States)

    [Fragmentary web-page excerpt; only the section headings "Your Brain on Music" and "Music Therapy" are recoverable.]

  11. Vocal behaviour of Orange River Francolin Scleroptila ...

    African Journals Online (AJOL)

    Fieldwork to study the vocal behaviour of the Orange River Francolin Scleroptila levaillantoides was conducted on a farm in the Heidelberg district, Gauteng province, South Africa, from August 2009 to March 2011. Orange River Francolins possess a basic repertoire of seven calls and one mechanical sound. From 83 ...

  12. Music Videos: The Look of the Sound

    Science.gov (United States)

    Aufderheide, Pat

    1986-01-01

    Asserts that music videos, rooted in mass marketing culture, are reshaping the language of advertising and affecting the flow of information. Raises questions about the society that creates and receives music videos. (MS)

  13. When speech sounds like music.

    Science.gov (United States)

    Falk, Simone; Rathcke, Tamara; Dalla Bella, Simone

    2014-08-01

    Repetition can boost memory and perception. However, repeating the same stimulus several times in immediate succession also induces intriguing perceptual transformations and illusions. Here, we investigate the Speech to Song Transformation (S2ST), a massed repetition effect in the auditory modality, which crosses the boundaries between language and music. In the S2ST, a phrase repeated several times shifts to being heard as sung. To better understand this unique cross-domain transformation, we examined the perceptual determinants of the S2ST, in particular the role of acoustics. In two experiments, the effects of two pitch properties and three rhythmic properties on the probability and speed of occurrence of the transformation were examined. Results showed that both pitch and rhythmic properties are key features fostering the transformation. However, some properties proved to be more conducive to the S2ST than others. Stable tonal targets that allowed for the perception of a musical melody led more often and quickly to the S2ST than scalar intervals. Recurring durational contrasts arising from segmental grouping favoring a metrical interpretation of the stimulus also facilitated the S2ST. This was, however, not the case for a regular beat structure within and across repetitions. In addition, individual perceptual abilities predicted the likelihood of the S2ST. Overall, the study demonstrated that repetition enables listeners to reinterpret specific prosodic features of spoken utterances in terms of musical structures. The findings underline a tight link between language and music, but they also reveal important differences in the communicative functions of prosodic structure in the two domains.

  14. Tuning Features of Chinese Folk Song Singing: A Case Study of Hua'er Music.

    Science.gov (United States)

    Yang, Yang; Welch, Graham; Sundberg, Johan; Himonides, Evangelos

    2015-07-01

    The learning and teaching of different singing styles, such as operatic and Chinese folk singing, is often very challenging in professional music education because of the complexity of the musical properties and vocalizations involved. By studying the acoustical and musical parameters of the singing voice, this study identified distinctive tuning characteristics of a particular folk music in China (Hua'er music) to inform folk singing practices that have been hampered by the neglect of inherent tuning issues in the music. Thirteen unaccompanied folk song examples from four folk singers were digitally audio recorded in a sound studio. Using an analysis toolkit consisting of Praat, PeakFit, and MS Excel, the fundamental frequencies (F0) of these song examples were extracted into sets of the most-used "anchor pitches", which were further divided into 253 F0 clusters. The interval structures of the anchor pitches within each song were analyzed and then compared across the 13 examples, providing parameters that indicate the tuning preference of this particular singing style. The data analyses demonstrated that all singers used a tuning pattern consisting of five major anchor pitches, suggesting a non-equal-tempered bias in singing. This partly verified the pentatonic scale proposed in previous empirical research but also suggested a potential misunderstanding of the studied folk music scale arising from the failure to take intrinsic tuning issues into consideration. This study suggests that, in professional music training, any tuning strategy should be considered in terms of the reference pitch and the likely tuning systems. Any accompanying instruments would need to be tuned to match the underlying tuning bias. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
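
    The published analysis was carried out with Praat, PeakFit, and MS Excel; the sketch below only illustrates the general logic of such an analysis: converting an extracted F0 track to cents, histogramming it to find the most-used "anchor pitches", and measuring the intervals between them for comparison with equal temperament. Function names, bin widths, and thresholds are assumptions.

```python
import numpy as np

def hz_to_cents(f0_hz, ref_hz=440.0):
    """Convert F0 values (Hz) to cents relative to a reference pitch."""
    return 1200.0 * np.log2(np.asarray(f0_hz, dtype=float) / ref_hz)

def anchor_pitches(f0_hz, bin_width_cents=25.0, min_share=0.05):
    """Histogram the F0 track in cents and return the centers of the
    most-used bins ('anchor pitches') covering at least min_share of frames."""
    cents = hz_to_cents(f0_hz)
    edges = np.arange(cents.min(), cents.max() + bin_width_cents,
                      bin_width_cents)
    counts, edges = np.histogram(cents, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[counts >= min_share * len(cents)]

def interval_structure(anchors_cents):
    """Intervals (in cents) between successive anchor pitches; compare
    these against equal-tempered steps (multiples of 100 cents)."""
    return np.diff(np.sort(anchors_cents))
```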

  15. The 'folk-choral concept' of the music of Okechukwu Ndubuisi ...

    African Journals Online (AJOL)

    Okechukwu Ndubuisi is a distinguished composer and arranger of vocal music in Nigeria. Even though he was an accomplished pianist, his major contribution in the field of Nigerian art music was in the choral medium. This work therefore examines the composer and his works in order to establish the contributions he has ...

  16. Three-month-old human infants use vocal cues of body size.

    Science.gov (United States)

    Pietraszewski, David; Wertz, Annie E; Bryant, Gregory A; Wynn, Karen

    2017-06-14

    Differences in vocal fundamental (F0) and average formant (Fn) frequencies covary with body size in most terrestrial mammals, such that larger organisms tend to produce lower-frequency sounds than smaller organisms, both between species and across different sex and life-stage morphs within species. Here we examined whether three-month-old human infants are sensitive to the relationship between body size and sound frequencies. Using a violation-of-expectation paradigm, we found that infants looked longer at stimuli inconsistent with the relationship (that is, a smaller organism producing lower-frequency sounds, and a larger organism producing higher-frequency sounds) than at stimuli that were consistent with it. This effect was stronger for fundamental frequency than for average formant frequency. These results suggest that by three months of age, human infants are already sensitive to the biologically relevant covariation between vocalization frequencies and visual cues to body size. This ability may be a consequence of developmental adaptations for building a phenotype capable of identifying and representing an organism's size, sex and life-stage. © 2017 The Author(s).

  17. Music in our ears: the biological bases of musical timbre perception.

    Directory of Open Access Journals (Sweden)

    Kailash Patil

    Full Text Available Timbre is the attribute of sound that allows humans and other animals to distinguish among different sound sources. Studies based on psychophysical judgments of musical timbre, ecological analyses of the physical characteristics of sounds, and machine learning approaches have all suggested that timbre is a multifaceted attribute that invokes both spectral and temporal sound features. Here, we explored the neural underpinnings of musical timbre. We used a neuro-computational framework based on spectro-temporal receptive fields, recorded from over a thousand neurons in the mammalian primary auditory cortex as well as from simulated cortical neurons, augmented with a nonlinear classifier. The model was able to perform robust instrument classification irrespective of pitch and playing style, with an accuracy of 98.7%. Using the same front end, the model was also able to reproduce perceptual distance judgments between timbres as perceived by human listeners. The study demonstrates that joint spectro-temporal features, such as those observed in the mammalian primary auditory cortex, are critical for providing the sufficiently rich representation necessary to account for perceptual judgments of timbre by human listeners, as well as for the recognition of musical instruments.
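
    The authors' front end was built from cortical spectro-temporal receptive fields; as a far simpler, hedged analogue of "joint spectro-temporal features plus a nonlinear classifier", the sketch below summarizes a mel spectrogram and its temporal differences and feeds them to an RBF-kernel SVM. The file names and labels are hypothetical placeholders.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def spectro_temporal_features(path, n_mels=64):
    """Very rough stand-in for cortical spectro-temporal features:
    a mel spectrogram plus its frame-to-frame differences, summarized
    by their means and standard deviations over time."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mel = librosa.power_to_db(
        librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels))
    delta = np.diff(mel, axis=1)            # crude temporal-modulation proxy
    return np.concatenate([mel.mean(axis=1), mel.std(axis=1),
                           delta.mean(axis=1), delta.std(axis=1)])

# Hypothetical placeholders standing in for a larger collection of
# single-instrument recordings and their instrument labels.
files = ["violin_01.wav", "piano_01.wav", "flute_01.wav"]
labels = ["violin", "piano", "flute"]
X = np.vstack([spectro_temporal_features(f) for f in files])
clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # the nonlinear stage
clf.fit(X, labels)
print(clf.predict(X))   # sanity check only; real evaluation needs held-out data
```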

  18. Vocal production complexity correlates with neural instructions in the oyster toadfish (Opsanus tau)

    DEFF Research Database (Denmark)

    Elemans, C. P. H.; Mensinger, A. F.; Rome, L. C.

    2014-01-01

    frequencies are determined directly by the firing rate of a vocal-acoustic neural network that drives the contraction frequency of superfast swimbladder muscles. The oyster toadfish boatwhistle call starts with an irregular sound waveform that could be an emergent property of the peripheral nonlinear sound...

  19. FEATURES OF MINIMALIST MUSIC FUNCTIONING IN FILMS

    Directory of Open Access Journals (Sweden)

    Mikheeva Julia V.

    2015-01-01

    Full Text Available The article examines the role of musical minimalism in the aesthetic perception and theoretical interpretation of cinematographic works. The film music of Philip Glass, Michael Nyman, Alfred Schnittke, Arvo Pärt and Alexei Aigui is analysed. The author grounds the analysis of the principles of musical minimalism in films in two basic phenomena. The first is the transcending of artistic space through the self-worth of a single sound (sound pattern). The second is a change in the meaning of film time through the repetitive music technique.

  20. Oral and vocal fold diadochokinesis in dysphonic women

    Directory of Open Access Journals (Sweden)

    Talita Louzada

    2011-12-01

    Full Text Available The evaluation of oral and vocal fold diadochokinesis (DDK) in individuals with voice disorders may contribute to the understanding of factors that affect balanced vocal production. Scientific studies that make use of this assessment tool support the advance of knowledge in this area, leading to more appropriate therapeutic planning. Objective: To compare the results of oral and vocal fold DDK in dysphonic women and in women without vocal disorders. Material and methods: For this study, 28 voice recordings of women from 19 to 54 years old, diagnosed with dysphonia and submitted to voice assessment by a speech pathologist and an otorhinolaryngologist, were used. The control group included 30 nondysphonic women evaluated in prior research on normal adults. The analysis parameters, such as the number and duration of emissions and the regularity of the repetition of the syllables "pa", "ta", "ka" and the vowels "a" and "i", were provided by the Advanced Motor Speech Profile program (MSP Model 5141, version 2.5.2; KayPentax). The DDK sequence "pataka" was analyzed quantitatively with the Sound Forge 7.0 program, as well as manually with the audio-visual help of sound waves. Average values of oral and vocal fold DDK in dysphonic and nondysphonic women were compared using Student's t test and were considered significant when p<0.05. Results: The findings showed no significant differences between the populations; however, the average coefficient of variation of period (CvP) and jitter of period (JittP) for the "ka," "a" and "i" emissions, respectively, were higher in dysphonic women (CvP=10.42%, 12.79%, 12.05%; JittP=2.05%, 6.05%, 3.63%) than in the control group (CvP=8.86%, 10.95%, 11.20%; JittP=1.82%, 2.98%, 3.15%). Conclusion: Although the results do not indicate any difficulty in oral and laryngeal motor control in the dysphonic group, the greater instability in vocal fold DDK in the experimental group should be considered, and
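
    The CvP and JittP values above are reported directly by the MSP software; for readers unfamiliar with these measures, the sketch below computes a coefficient of variation of period and a period-to-period jitter from a list of repetition periods, using common textbook definitions (the exact formulas implemented in MSP may differ).

```python
import numpy as np

def ddk_variability(periods_s):
    """Variability measures for diadochokinesis from successive repetition
    periods (seconds), e.g. intervals between consecutive /ka/ onsets.

    CvP  : coefficient of variation of the period, in %.
    JittP: mean absolute period-to-period difference relative to the mean
           period, in % (a jitter-like measure).
    These follow common definitions and may differ in detail from MSP."""
    p = np.asarray(periods_s, dtype=float)
    cvp = 100.0 * p.std(ddof=1) / p.mean()
    jittp = 100.0 * np.mean(np.abs(np.diff(p))) / p.mean()
    return cvp, jittp

# Example: ten /ka/ repetitions at roughly 6.5 syllables per second.
cvp, jittp = ddk_variability([0.154, 0.150, 0.158, 0.149, 0.162,
                              0.151, 0.157, 0.148, 0.160, 0.153])
print(f"CvP = {cvp:.2f}%, JittP = {jittp:.2f}%")
```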

  1. Processing of vocalizations in humans and monkeys: A comparative fMRI study

    International Nuclear Information System (INIS)

    Joly, Olivier; Orban, Guy A.; Pallier, Christophe; Ramus, Franck; Pressnitzer, Daniel; Vanduffel, Wim

    2012-01-01

    Humans and many other animals use acoustical signals to mediate social interactions with conspecifics. The evolution of sound-based communication is still poorly understood and its neural correlates have only recently begun to be investigated. In the present study, we applied functional MRI to humans and macaque monkeys listening to identical stimuli in order to compare the cortical networks involved in the processing of vocalizations. At the first stages of auditory processing, both species showed similar fMRI activity maps within and around the lateral sulcus (the Sylvian fissure in humans). Monkeys showed remarkably similar responses to monkey calls and to human vocal sounds (speech or otherwise), mainly in the lateral sulcus and the adjacent superior temporal gyrus (STG). In contrast, a preference for human vocalizations, and especially for speech, was observed in the human STG and superior temporal sulcus (STS). The STS and Broca's region were especially responsive to intelligible utterances. The evolution of the language faculty in humans appears to have recruited most of the STS. It may be that in monkeys, a much simpler repertoire of vocalizations requires less involvement of this temporal territory. (authors)

  2. New sounds, new stories : narrativity in contemporary music

    NARCIS (Netherlands)

    Meelberg, Vincent

    2006-01-01

    In this dissertation, I study the relation between narrativity and contemporary composed music. The purpose of this study is twofold. Firstly, in so doing, I am able to articulate what musical narrativity is in a precise manner. Since many contemporary musical works question or problematize the

  3. A study on the mutual relationship between Sa’di’s ghazals and Iranian vocal music

    Directory of Open Access Journals (Sweden)

    Zolfaghar Alami

    2016-06-01

    feature in which he moves from the parts to the whole; as a result, this tendency gives an artistic dynamism to the development of thought across the ghazal couplets. Position of phonemes: Language features also play an essential part in why musicians welcome the great Sheikh's ghazals. In the course of speech, each sentence, word and syllable involves some degree of high- and low-pitched voice. In poetry, phonemes, in addition to the role they play in the musical beauty of the poem, sometimes induce meaning in harmony with other parts. It is in such a situation that the poet, using words with certain letters, creates with those letters the image he wants to present. Impressiveness of ghazal rhythm: The beauty of a word results from its suitable choice and placement in the poem. In Sa'di's ghazals, the poet's command of words and his awareness of their different forms are so vast that they make his poetry uniform, so that the sequence of its parts can be anticipated. Sa'di's ghazals have a uniform and rhythmic tone and, regardless of the way the poet relates the event, they follow a particular order in musical expression as well. Exchange of couplets and musical melodies: The main purpose of the present paper is to investigate the mutual relationship between the couplets of Sa'di's ghazals, the Naqarat, and the melodies of musical tones. This type of ghazal, owing to its magical tone and its suitability for the radif of music, is the most famous and has become popular among people. The tone of the lover's call, request, submissiveness and helplessness, the yearning and pain of love, the lover's unconditional compliance and the beloved's undeniable dominion and … in Sa'di's ghazals are presented in such a way that they are in complete agreement with the elegances of the Persian vocal "Gooshes" and can be conveyed to the listener in the best way. It is all these features that have made Sa'di's ghazals preferable in connection with Persian

  4. Vocal performance reflects individual quality in a nonpasserine

    NARCIS (Netherlands)

    Janicke, T.; Hahn, S.M.; Ritz, M.S.; Peter, H.-U.

    2008-01-01

    Recent studies on mate-quality recognition in passerines showed that females use subtle differences in sound production to assess males. We analysed long calls of brown skuas, Catharacta antarctica lonnbergi, to test whether vocal performance could serve as an indicator of individual quality in a

  5. PROCEEDINGS OF THE 2008 Computers in Music Modeling and Retrieval and Network for Cross-Disciplinary Studies of Music and Meaning Conference

    DEFF Research Database (Denmark)

    The field of computer music is interdisciplinary by nature and closely related to a number of areas in computer science and engineering. The first CMMR gatherings mainly focused on information retrieval, programming, digital libraries, hypermedia, artificial intelligence, acoustics and signal processing. In 2005 CMMR started moving towards a more interdisciplinary view of the field by putting increased emphasis on the investigation of the role of human interaction at all levels of musical practice. CMMR 2007 focused on the Sense of Sounds from the synthesis and retrieval point of view and encouraged studies that linked sound modeling by analysis-synthesis to perception and cognition. CMMR 2008 seeks to enlarge upon the Sense of Sounds concept by taking into account the musical structure as a whole. More precisely, the workshop will have as its theme Genesis of Meaning in Sound and Music...

  6. Effects of Style, Tempo, and Performing Medium on Children's Music Preference.

    Science.gov (United States)

    LeBlanc, Albert

    1981-01-01

    Fifth-graders listened to a tape incorporating fast and slow vocal and instrumental excerpts within the generic styles of rock/pop, country, older jazz, newer jazz, art music, and band music. A preference hierarchy emerged favoring the popular styles. Across pooled styles, faster tempos and instrumentals were slightly preferred. (Author/SJL)

  7. What is Sound?

    OpenAIRE

    Nelson, Peter

    2014-01-01

    What is sound? This question is posed in contradiction to the everyday understanding that sound is a phenomenon apart from us, to be heard, made, shaped and organised. Thinking through the history of computer music, and considering the current configuration of digital communications, sound is reconfigured as a type of network. This network is envisaged as non-hierarchical, in keeping with currents of thought that refuse to prioritise the human in the world. The relationship of sound to musi...

  8. Modeling the Biomechanical Influence of Epilaryngeal Stricture on the Vocal Folds: A Low-Dimensional Model of Vocal-Ventricular Fold Coupling

    Science.gov (United States)

    Moisik, Scott R.; Esling, John H.

    2014-01-01

    Purpose: Physiological and phonetic studies suggest that, at moderate levels of epilaryngeal stricture, the ventricular folds impinge upon the vocal folds and influence their dynamical behavior, which is thought to be responsible for constricted laryngeal sounds. In this work, the authors examine this hypothesis through biomechanical modeling.…

  9. ''1/f noise'' in music: Music from 1/f noise

    Energy Technology Data Exchange (ETDEWEB)

    Voss, R.F.; Clarke, J.

    1978-01-01

    The spectral density of fluctuations in the audio power of many musical selections and of English speech varies approximately as 1/f (f is the frequency) down to a frequency of 5 x 10^-4 Hz. This result implies that the audio-power fluctuations are correlated over all times in the same manner as "1/f noise" in electronic components. The frequency fluctuations of music also have a 1/f spectral density at frequencies down to the inverse of the length of the piece of music. The frequency fluctuations of English speech have a quite different behavior, with a single characteristic time of about 0.1 s, the average length of a syllable. The observations on music suggest that 1/f noise is a good choice for stochastic composition. Compositions in which the frequency and duration of each note were determined by 1/f noise sources sounded pleasing. Those generated by white-noise sources sounded too random, while those generated by 1/f^2 noise sounded too correlated.
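
    The paper describes its stochastic compositions only in general terms; the sketch below shows one common way such an idea can be realized, using the Voss-McCartney scheme to approximate a 1/f noise source and mapping successive samples to note pitches and durations. The mapping choices (MIDI-like pitch range, four duration values) are assumptions for illustration.

```python
import random

def voss_pink_noise(n_samples, n_sources=8, seed=1):
    """Approximate 1/f ('pink') noise with the Voss-McCartney scheme:
    several random sources updated at octave-spaced rates are summed."""
    rng = random.Random(seed)
    sources = [rng.random() for _ in range(n_sources)]
    out = []
    for i in range(n_samples):
        for k in range(n_sources):
            if i % (2 ** k) == 0:          # source k updates every 2^k steps
                sources[k] = rng.random()
        out.append(sum(sources) / n_sources)
    return out

def one_over_f_melody(n_notes=32, low=60, high=84, seed=1):
    """Map two independent 1/f-like sequences to MIDI-style pitches and to
    note durations (in beats), in the spirit of the stochastic compositions
    described in the paper."""
    pitches = voss_pink_noise(n_notes, seed=seed)
    durations = voss_pink_noise(n_notes, seed=seed + 1)
    melody = []
    for p, d in zip(pitches, durations):
        pitch = low + round(p * (high - low))
        duration = [0.25, 0.5, 1.0, 2.0][min(3, int(d * 4))]
        melody.append((pitch, duration))
    return melody

print(one_over_f_melody(8))
```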

  10. Timbre as an Elusive Component of Imagery for Music

    Directory of Open Access Journals (Sweden)

    Freya Bailes

    2007-01-01

    Full Text Available Evidence of the ability to imagine timbre is either anecdotal, or applies to isolated instrument tones rather than timbre in real music. Experiments were conducted to infer the vividness of timbre in imagery for music. Music students were asked to judge whether the timbre of a sounded target note was the same or different from the original following a heard, imagined, or control musical context. A pilot experiment manipulated instrumentation, while the main experiment manipulated sound filters. The hypothesis that participants are able to internalise timbral aspects of music was supported by an ability to perform the timbre discrimination task, and by facilitated response when imaging the timbre context compared with non-imaging. However, while participants were able to mentally represent timbre, this was not always reported as being a conscious dimension of their musical image. This finding is discussed in relation to previous research suggesting that timbre may be a sound characteristic that is optionally present in imagery for music.

  11. Music Aid

    DEFF Research Database (Denmark)

    Søderberg, Ene Alicia; Odgaard, Rasmus Emil; Bitsch, Sarah

    2016-01-01

    This paper explores the possibility of breaking the barrier between deaf and hearing people when it comes to the subject of making music. Suggestions on how deaf and hearing people can collaborate in creating music together, are presented. The conducted research will focus on deaf people...... with a general interest in music as well as hearing musicians as target groups. Through reviewing different related research areas, it is found that visualization of sound along with a haptic feedback can help deaf people interpret and interact with music. With this in mind, three variations of a collaborative...

  12. Functional MRI of the vocalization-processing network in the macaque brain

    Directory of Open Access Journals (Sweden)

    Michael eOrtiz-Rios

    2015-04-01

    Full Text Available Using functional magnetic resonance imaging in awake behaving monkeys, we investigated how species-specific vocalizations are represented in auditory and auditory-related regions of the macaque brain. We found clusters of active voxels along the ascending auditory pathway that responded to various types of complex sounds: inferior colliculus (IC), medial geniculate nucleus (MGN), auditory core, belt, and parabelt cortex, and other parts of the superior temporal gyrus (STG) and sulcus (STS). Regions sensitive to monkey calls were most prevalent in the anterior STG, but some clusters were also found in frontal and parietal cortex on the basis of comparisons between responses to calls and environmental sounds. Surprisingly, we found that spectrotemporal control sounds derived from the monkey calls (scrambled calls) also activated the parietal and frontal regions. Taken together, our results demonstrate that species-specific vocalizations in rhesus monkeys activate preferentially the auditory ventral stream, and in particular areas of the antero-lateral belt and parabelt.

  13. Paralinguistic mechanisms of production in human "beatboxing": a real-time magnetic resonance imaging study.

    Science.gov (United States)

    Proctor, Michael; Bresch, Erik; Byrd, Dani; Nayak, Krishna; Narayanan, Shrikanth

    2013-02-01

    Real-time magnetic resonance imaging (rtMRI) was used to examine mechanisms of sound production by an American male beatbox artist. rtMRI was found to be a useful modality with which to study this form of sound production, providing a global dynamic view of the midsagittal vocal tract at frame rates sufficient to observe the movement and coordination of critical articulators. The subject's repertoire included percussion elements generated using a wide range of articulatory and airstream mechanisms. Many of the same mechanisms observed in human speech production were exploited for musical effect, including patterns of articulation that do not occur in the phonologies of the artist's native languages: ejectives and clicks. The data offer insights into the paralinguistic use of phonetic primitives and the ways in which they are coordinated in this style of musical performance. A unified formalism for describing both musical and phonetic dimensions of human vocal percussion performance is proposed. Audio and video data illustrating production and orchestration of beatboxing sound effects are provided in a companion annotated corpus.

  14. Evolutionary Sound Synthesis Controlled by Gestural Data

    Directory of Open Access Journals (Sweden)

    Jose Fornari

    2011-05-01

    Full Text Available This article focuses on the interdisciplinary research involving Computer Music and Generative Visual Art. We describe the implementation of two interactive artistic systems based on principles of Gestural Data (WILSON, 2002) retrieval and self-organization (MORONI, 2003), to control an Evolutionary Sound Synthesis method (ESSynth). The first implementation uses, as gestural data, image mapping of handmade drawings. The second one uses gestural data from dynamic body movements of dance. The resulting computer output is generated by an interactive system implemented in Pure Data (PD). This system uses principles of Evolutionary Computation (EC), which yields the generation of a synthetic adaptive population of sound objects. Considering that music could be seen as "organized sound", the contribution of our study is to develop a system that aims to generate "self-organized sound" – a method that uses evolutionary computation to bridge between gesture, sound and music.
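
    The published system was implemented in Pure Data; the sketch below is only a language-agnostic illustration of the underlying evolutionary idea, evolving a small population of sound-object parameter vectors toward a target vector derived from gestural data. The fitness function, mutation scheme, and parameter meanings are simplified assumptions, not the ESSynth method itself.

```python
import random

def evolve_sound_objects(gesture_target, pop_size=16, generations=50,
                         mutation=0.1, seed=0):
    """Minimal evolutionary loop: each individual is a parameter vector
    (e.g. pitch, amplitude, grain rate, pan), fitness is closeness to a
    target vector extracted from gestural data, and each generation keeps
    the best half and refills the population with mutated copies."""
    rng = random.Random(seed)
    dim = len(gesture_target)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]

    def fitness(ind):
        return -sum((a - b) ** 2 for a, b in zip(ind, gesture_target))

    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [[max(0.0, min(1.0, g + rng.gauss(0.0, mutation)))
                     for g in rng.choice(parents)] for _ in parents]
        pop = parents + children
    return max(pop, key=fitness)

# Hypothetical gestural target: normalized [pitch, amplitude, rate, pan].
best = evolve_sound_objects([0.7, 0.4, 0.9, 0.2])
print(best)
```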

  15. Music Activities for "Little Wolf's Song"

    Science.gov (United States)

    Cardany, Audrey Berger

    2015-01-01

    Drawn from Britta Teckentrup's children's book "Little Wolf's Song", the author shares music activities appropriate for preschool and primary-grade children. Children will enjoy Teckentrup's tender family story while exploring vocal and instrumental timbres, as well as reading, writing, and creating with melodic contour.

  16. Dynamic compression and sound quality of music

    NARCIS (Netherlands)

    Lieshout, van R.A.J.M.; Wagenaars, W.M.; Houtsma, A.J.M.; Stikvoort, E.F.

    1984-01-01

    Amplitude compression is often used to match the dynamic range of music to a particular playback situation in order to ensure, e.g., continuous audibility in a noisy environment or unobtrusiveness if the music is intended as a quiet background. Since amplitude compression is a nonlinear process,

  17. Music as design

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2012-01-01

    The incorporation of the sounds of the surrounding world in music is today a familiar phenomenon on the electronic music and audio art scenes, and to some extent also in contemporary music. It is rarer for a contemporary audio or visual artist to use music as the form-giving element for a semi-realistic event or narrative. In a way the phenomenon can be compared to Puccini's operas, or to the ground-breaking dance performances for which the choreographer Pina Bausch became famous, where musicalization produced stylizations of everyday events. Familiar, readable events were reinforced and relocated...

  18. Is laughter a better vocal change detector than a growl?

    Science.gov (United States)

    Pinheiro, Ana P; Barros, Carla; Vasconcelos, Margarida; Obermeier, Christian; Kotz, Sonja A

    2017-07-01

    The capacity to predict what should happen next and to minimize any discrepancy between an expected and an actual sensory input (prediction error) is a central aspect of perception. Particularly in vocal communication, the effective prediction of an auditory input that informs the listener about the emotionality of a speaker is critical. What is currently unknown is how the perceived valence of an emotional vocalization affects the capacity to predict and detect a change in the auditory input. This question was probed in a combined event-related potential (ERP) and time-frequency analysis approach. Specifically, we examined the brain response to standards (Repetition Positivity) and to deviants (Mismatch Negativity - MMN), as well as the anticipatory response to the vocal sounds (pre-stimulus beta oscillatory power). Short neutral, happy (laughter), and angry (growls) vocalizations were presented both as standard and deviant stimuli in a passive oddball listening task while participants watched a silent movie and were instructed to ignore the vocalizations. MMN amplitude was increased for happy compared to neutral and angry vocalizations. The Repetition Positivity was enhanced for happy standard vocalizations. Induced pre-stimulus upper beta power was increased for happy vocalizations, and predicted the modulation of the standard Repetition Positivity. These findings indicate enhanced sensory prediction for positive vocalizations such as laughter. Together, the results suggest that positive vocalizations are more effective predictors in social communication than angry and neutral ones, possibly due to their high social significance. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Od ne-hudby k hudbě. Hudební využití hluku ve 20. století // From non-music to music. Musical use of noise in the 20th century

    Directory of Open Access Journals (Sweden)

    Matěj Kratochvíl

    2016-12-01

    Full Text Available The range of sounds available to musicians has grown considerably during the 20th century. The invention of sound recording has made it possible to document any sonic event and to use it in a musical way. This text summarizes the history of this development and tries to show how the use of non-musical sounds can be explained as the next stage in the history of western classical music, and how it can be compared with the so-called emancipation of dissonance at the end of the 20th century. It also analyzes various esthetic trends concerning the use of non-musical sounds and the ways in which the broadening of sonic possibilities has influenced our understanding of what music is and how it functions as a form of human communication.

  20. Experimenting with musical intervals

    Science.gov (United States)

    Lo Presto, Michael C.

    2003-07-01

    When two tuning forks of different frequency are sounded simultaneously, the result is a complex wave with a repetition frequency that is the fundamental of the harmonic series to which both frequencies belong. The ear perceives this 'musical interval' as a single musical pitch with a sound quality produced by the harmonic spectrum responsible for the waveform. This waveform can be captured and displayed with data-collection hardware and software. The fundamental frequency can then be calculated and compared with what would be expected from the frequencies of the tuning forks. Also, graphing software can be used to determine equations for the waveforms and predict their shapes. This experiment could be used in an introductory physics or musical acoustics course as a practical lesson in the superposition of waves, basic Fourier series and the relationship between some of the ear's subjective perceptions of sound and the physical properties of the waves that cause them.
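
    As a worked example of the calculation the article describes (not code from the article): tuning forks at 330 Hz and 440 Hz are both harmonics of 110 Hz, so the superposed waveform repeats at 110 Hz. The sketch below obtains the repetition frequency from the greatest common divisor of the two (integer) frequencies and builds the combined waveform.

```python
import math
import numpy as np

def interval_fundamental(f1_hz, f2_hz):
    """Repetition (fundamental) frequency of two superposed tones whose
    frequencies are whole numbers of Hz: their greatest common divisor."""
    return math.gcd(int(f1_hz), int(f2_hz))

def superpose(f1_hz, f2_hz, duration=0.05, fs=44100):
    """Sum of two equal-amplitude sinusoids, as produced by two forks."""
    t = np.arange(int(duration * fs)) / fs
    return np.sin(2 * np.pi * f1_hz * t) + np.sin(2 * np.pi * f2_hz * t)

# A perfect fourth: 330 Hz and 440 Hz share the fundamental 110 Hz,
# so the complex wave repeats every 1/110 s (about 9.1 ms).
print(interval_fundamental(330, 440))   # -> 110
wave = superpose(330, 440)
```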

  1. How do auditory cortex neurons represent communication sounds?

    Science.gov (United States)

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc

    2013-11-01

    A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalizations envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Singing Voice Analysis, Synthesis, and Modeling

    Science.gov (United States)

    Kim, Youngmoo E.

    The singing voice is the oldest musical instrument, but its versatility and emotional power are unmatched. Through the combination of music, lyrics, and expression, the voice is able to affect us in ways that no other instrument can. The fact that vocal music is prevalent in almost all cultures is indicative of its innate appeal to the human aesthetic. Singing also permeates most genres of music, attesting to the wide range of sounds the human voice is capable of producing. As listeners we are naturally drawn to the sound of the human voice, and, when present, it immediately becomes the focus of our attention.

  3. The effectiveness of music on pain among preterm infants in the neonatal intensive care unit: a systematic review.

    Science.gov (United States)

    Pölkki, Tarja; Korhonen, Anne

    BA), and music as an auditory stimulus should not exceed 75 dB in the NICU. If earphones or other devices are used, sound sources should be kept at a reasonable distance from the infant's ear, played for brief periods and at levels below 55 dB. Music listening can be initiated with or without the involvement of a music therapist. In this review, music could be implemented for premature infants by a music therapist or any health care provider, and included both recorded and live music. Regardless of the type of music, several studies have investigated the short-term effects of music on preterm infants, including improvements in physiological outcomes (e.g. oxygen saturation, heart rate, respiratory rate, and blood pressure), as well as in behavioural state (e.g. crying, facial expression, body movements) and pain scores. For example, Chou et al. showed that premature infants receiving recorded music (a combination of womb sounds and the mother singing) during endotracheal suctioning had significantly higher oxygen saturation than when they did not receive music. Butt & Kisilewsky compared recorded music, involving both the vocal and instrumental versions of Brahms' lullaby, versus no music, and found that infants older than 31 weeks demonstrated significant reductions in heart rate, behavioural state and pain. In the study of Arnon et al., the infants receiving live music, compared with infants receiving recorded music or no music, had significantly reduced heart rate and behavioural scores during the post-intervention period. The live music comprised a lullaby sung by a female voice with a frame drum and an accompanying harp; the same music was played on a tape recorder for the recorded condition. Live music showed significant benefits, whereas no statistically significant changes were found for the recorded music and control groups. Teckenberg-Jansson et al. indicated that music therapy combined with kangaroo care decreased the pulse, slowed down the respiration and increased the transcutaneous oxygen

  4. Musical expertise and foreign speech perception.

    Science.gov (United States)

    Martínez-Montes, Eduardo; Hernández-Pérez, Heivet; Chobert, Julie; Morgado-Rodríguez, Lisbet; Suárez-Murias, Carlos; Valdés-Sosa, Pedro A; Besson, Mireille

    2013-01-01

    The aim of this experiment was to investigate the influence of musical expertise on the automatic perception of foreign syllables and harmonic sounds. Participants were Cuban students with a high level of expertise in music or in visual arts and with the same level of general education and socio-economic background. We used a multi-feature Mismatch Negativity (MMN) design with sequences of either syllables in Mandarin Chinese or harmonic sounds, both comprising deviants in pitch contour, duration and Voice Onset Time (VOT) or equivalent that were either far from (Large deviants) or close to (Small deviants) the standard. For both Mandarin syllables and harmonic sounds, results were clear-cut in showing larger MMNs to pitch contour deviants in musicians than in visual artists. Results were less clear for duration and VOT deviants, possibly because of the specific characteristics of the stimuli. Results are interpreted as reflecting similar processing of pitch contour in speech and non-speech sounds. The implications of these results for understanding the influence of intense musical training from childhood to adulthood and of genetic predispositions for music on foreign language perception are discussed.

  5. Musical expertise and foreign speech perception

    Directory of Open Access Journals (Sweden)

    Eduardo eMartínez-Montes

    2013-11-01

    Full Text Available The aim of this experiment was to investigate the influence of musical expertise on the automatic perception of foreign syllables and harmonic sounds. Participants were Cuban students with a high level of expertise in music or in visual arts and with the same level of general education and socio-economic background. We used a multi-feature Mismatch Negativity (MMN) design with sequences of either syllables in Mandarin Chinese or harmonic sounds, both comprising deviants in pitch contour, duration and Voice Onset Time (VOT) or equivalent that were either far from (Large deviants) or close to (Small deviants) the standard. For both Mandarin syllables and harmonic sounds, results were clear-cut in showing larger MMNs to pitch contour deviants in musicians than in visual artists. Results were less clear for duration and VOT deviants, possibly because of the specific characteristics of the stimuli. Results are interpreted as reflecting similar processing of pitch contour in speech and non-speech sounds. The implications of these results for understanding the influence of intense musical training from childhood to adulthood and of genetic predispositions for music on foreign language perception are discussed.

  6. Foetal response to music and voice.

    Science.gov (United States)

    Al-Qahtani, Noura H

    2005-10-01

    To examine whether prenatal exposure to music and voice alters foetal behaviour and whether the foetal response to music differs from that to the human voice. A prospective observational study was conducted in 20 normal term pregnant mothers. Ten foetuses were exposed to music and voice for 15 s at different sound pressure levels to find the optimal setting for the auditory stimulation. Music, voice and sham stimuli were played to another 10 foetuses via a headphone on the maternal abdomen. The sound pressure level was 105 dB for music and 94 dB for voice. Computerised assessments of foetal heart rate and activity were recorded, yielding 90 actocardiograms for the whole group. One-way ANOVA followed by post hoc analysis (Student-Newman-Keuls method) was used to determine whether foetal responses to music and voice differed significantly from sham. Foetuses responded with heart rate acceleration and a motor response to both music and voice; this was statistically significant compared to sham. There was no significant difference between the foetal heart rate acceleration to music and to voice. Prenatal exposure to music and voice alters foetal behaviour. No difference was detected between foetal responses to music and voice.

  7. A Comparative Analysis of the Universal Elements of Music and the Fetal Environment

    Science.gov (United States)

    Teie, David

    2016-01-01

    Although the idea that pulse in music may be related to human pulse is ancient and has recently been promoted by researchers (Parncutt, 2006; Snowdon and Teie, 2010), there has been no ordered delineation of the characteristics of music that are based on the sounds of the womb. I describe features of music that are based on sounds that are present in the womb: tempo of pulse (pulse is understood as the regular, underlying beat that defines the meter), amplitude contour of pulse, meter, musical notes, melodic frequency range, continuity, syllabic contour, melodic rhythm, melodic accents, phrase length, and phrase contour. There are a number of features of prenatal development that allow for the formation of long-term memories of the sounds of the womb in the areas of the brain that are responsible for emotions. Taken together, these features and the similarities between the sounds of the womb and the elemental building blocks of music allow for a postulation that the fetal acoustic environment may provide the bases for the fundamental musical elements that are found in the music of all cultures. This hypothesis is supported by a one-to-one matching of the universal features of music with the sounds of the womb: (1) all of the regularly heard sounds that are present in the fetal environment are represented in the music of every culture, and (2) all of the features of music that are present in the music of all cultures can be traced to the fetal environment. PMID:27555828

  8. Introducing contemporary music into music education: teachers' and students' opinions

    OpenAIRE

    Zdešar, Neža

    2016-01-01

    Music is a cultural phenomenon and a human need, which has existed throughout time and in every culture. Modern times offer a broad palette of music, resulting in an oversaturated environment of sound; it is therefore important to be able to choose critically the appropriate type of music for particular needs and situations. During primary education, emphasis is placed on the objectives of developing aesthetic sensitivity towards artistic values and the ecology of the en...

  9. Comparison of different methods for eliciting exercise-to-music for clients with Alzheimer's disease.

    Science.gov (United States)

    Cevasco, Andrea M; Grant, Roy E

    2003-01-01

    Many of the problems associated with Alzheimer's disease (AD) can sometimes be delayed, retarded, or even reversed with proper exercise and interaction with the environment. A substantial body of research has shown that music activity brings about the greatest degree of responsiveness, including exercise, in clients with AD; yet the specific techniques that elicit the greatest amount of physical response during music activities remain unidentified. The purpose of this study was two-fold: comparing two methods of intervention, and comparing responses to vocal versus instrumental music during exercise and exercise with instruments. In Experiment 1 the authors compared 2 treatment conditions to facilitate exercise during music activities: (a) verbalizing the movement for each task once, one beat before commencing, followed by visual cueing for the remainder of the task; (b) verbal and visual cueing for each revolution or change in rhythm for the duration of the task. Data collection over 38 sessions consisted of recording the participation of each client at 30-second intervals for the duration of each treatment condition, indicating at each interval whether the client was participating in the designated movement (difficult), participating in exercise approximating the designated movement (easy), or not participating. Results indicated that the continuous verbal cueing/easy treatment elicited significantly greater participation than the one verbal cue/difficult treatment. In Experiment 2, the type of music (vocal versus instrumental) and the type of activity (exercise with and without instruments) were examined. Data were collected over 26 sessions (52 activities) in the same 2 assisted living facilities as in Experiment 1, but one year later. Results indicated that both the type of activity and the type of music had some effect on participation. Also, data indicated participation in exercise to instrumental music was significantly greater than exercise with instruments

  10. Promoting Vocal Health in the Choral Rehearsal: When Planning for and Conducting Choral Rehearsals, Guide Your Students in Healthful Singing

    Science.gov (United States)

    Webb, Jeffrey L.

    2007-01-01

    Choral conductors can positively affect the voices in their choirs through their instruction. It is their job to teach the choir not only the music, but also the healthy ways of singing it. Promoting vocal health benefits both singers and conductors. For singers, it helps remove the risk factors for vocal fatigue. For the choral conductor,…

  11. High-precision spatial localization of mouse vocalizations during social interaction.

    Science.gov (United States)

    Heckman, Jesse J; Proville, Rémi; Heckman, Gert J; Azarfar, Alireza; Celikel, Tansu; Englitz, Bernhard

    2017-06-07

    Mice display a wide repertoire of vocalizations that varies with age, sex, and context. Especially during courtship, mice emit ultrasonic vocalizations (USVs) of high complexity whose detailed structure is poorly understood. As animals of both sexes vocalize, the study of social vocalizations requires attributing single USVs to individuals. The state of the art in sound localization for USVs allows spatial localization at centimeter resolution; however, animals interact at closer ranges, involving tactile, snout-to-snout exploration. Hence, improved algorithms are required to reliably assign USVs. We develop multiple solutions to USV localization and derive an analytical solution for arbitrary vertical microphone positions. The algorithms are compared on wideband acoustic noise and single-mouse vocalizations, and applied to social interactions with optically tracked mouse positions. A novel, (frequency) envelope-weighted generalised cross-correlation outperforms classical cross-correlation techniques, achieving a median error of ~1.4 mm for noise and ~4-8.5 mm for vocalizations. Using this algorithm in combination with a level criterion, we can improve the assignment for interacting mice. We report significant differences in mean USV properties between CBA mice of different sexes during social interaction. Hence, the improved attribution of USVs to individuals lays the basis for a deeper understanding of social vocalizations, in particular sequences of USVs.
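
    The core of such time-difference-of-arrival (TDOA) localization is a cross-correlation between microphone pairs. The sketch below illustrates the generalized cross-correlation (GCC) family with a PHAT-style spectral weighting; it is a minimal, assumed illustration rather than the authors' envelope-weighted variant, and the signal parameters in the example are invented.

    ```python
    # Minimal sketch: time-difference-of-arrival (TDOA) estimation between two
    # microphones via generalized cross-correlation with a PHAT-style weighting.
    # Illustrates the family of methods discussed above; it is NOT the authors'
    # exact envelope-weighted formulation.
    import numpy as np

    def gcc_tdoa(x1, x2, fs, weighting="phat"):
        """Return the estimated delay (s): positive if x2 arrives later than x1."""
        n = len(x1) + len(x2)                      # zero-pad to avoid circular wrap
        X1 = np.fft.rfft(x1, n)
        X2 = np.fft.rfft(x2, n)
        cross = np.conj(X1) * X2
        if weighting == "phat":                    # phase transform: whiten magnitudes
            cross = cross / (np.abs(cross) + 1e-12)
        cc = np.fft.irfft(cross, n)
        # reorder circular lags into [-(len(x1)-1), ..., len(x2)-1]
        cc = np.concatenate((cc[-(len(x1) - 1):], cc[:len(x2)]))
        lags = np.arange(-(len(x1) - 1), len(x2))
        return lags[np.argmax(np.abs(cc))] / fs

    # Example: a 60 kHz tone arriving 0.2 ms later at the second microphone.
    fs = 250_000
    t = np.arange(0, 0.01, 1 / fs)
    sig = np.sin(2 * np.pi * 60_000 * t)
    shift = int(0.0002 * fs)
    mic1 = np.concatenate((sig, np.zeros(shift)))
    mic2 = np.concatenate((np.zeros(shift), sig))
    print(gcc_tdoa(mic1, mic2, fs))                # approximately +2.0e-4 s
    ```

    With three or more microphones, pairwise delays estimated this way can be combined geometrically to recover a source position, which is the general principle behind assigning a USV to one of the interacting animals.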

  12. Music appreciation and music listening in prelingual and postlingually deaf adult cochlear implant recipients.

    Science.gov (United States)

    Moran, Michelle; Rousset, Alexandra; Looi, Valerie

    2016-01-01

    To explore the music appreciation of prelingually deaf adults using cochlear implants (CIs). Cohort study. Adult CI recipients were recruited based on hearing history and asked to complete the University of Canterbury Music Listening Questionnaire (UCMLQ) to assess each individual's music listening and appreciation. Results were compared to previous responses to the UCMLQ from a large cohort of postlingually deaf CI recipients. Fifteen prelingually deaf and 15 postlingually deaf adult cochlear implant recipients. No significant differences were found between the prelingual and postlingual participants for amount of music listening or music listening enjoyment with their CI. Sound quality of common instruments was favourable for both groups, with no significant difference in the pleasantness/naturalness of instrument sounds between the groups. Prelingually deaf CI recipients rated themselves as significantly less able to follow a melody line and identify instrument styles compared to their postlingual peers. The results suggest that the pre- and postlingually deaf CI recipients demonstrate equivalent levels of music appreciation. This finding is of clinical importance, as CI clinicians should be actively encouraging all of their recipients to explore music listening as a part of their rehabilitation.

  13. The role of musical aptitude and language skills in preattentive duration processing in school-aged children.

    Science.gov (United States)

    Milovanov, Riia; Huotilainen, Minna; Esquef, Paulo A A; Alku, Paavo; Välimäki, Vesa; Tervaniemi, Mari

    2009-08-28

    We examined 10- to 12-year-old elementary school children's ability to preattentively process sound durations in music and speech stimuli. In total, 40 children had either advanced foreign-language production skills and higher musical aptitude, or less advanced results in both musicality and linguistic tests. Event-related potential (ERP) recordings of the mismatch negativity (MMN) show that duration changes in musical sounds are processed more prominently and accurately than changes in speech sounds. Moreover, children with advanced pronunciation and musicality skills displayed enhanced MMNs to duration changes in both speech and musical sounds. Thus, our study provides further evidence for the claim that musical aptitude and linguistic skills are interconnected and that the musical features of the stimuli may play a preponderant role in preattentive duration processing.

  14. Dynamic interactions between musical, cardiovascular, and cerebral rhythms in humans.

    Science.gov (United States)

    Bernardi, Luciano; Porta, Cesare; Casucci, Gaia; Balsamo, Rossella; Bernardi, Nicolò F; Fogari, Roberto; Sleight, Peter

    2009-06-30

    Reactions to music are considered subjective, but previous studies suggested that cardiorespiratory variables increase with faster tempo independent of individual preference. We tested whether compositions characterized by variable emphasis could produce parallel instantaneous cardiovascular/respiratory responses and whether these changes mirrored music profiles. Twenty-four young healthy subjects, 12 musicians (choristers) and 12 nonmusician control subjects, listened (in random order) to music with vocal (Puccini's "Turandot") or orchestral (Beethoven's 9th Symphony adagio) progressive crescendos, more uniform emphasis (Bach cantata), 10-second period (i.e., similar to Mayer waves) rhythmic phrases (Giuseppe Verdi's arias "Va pensiero" and "Libiam nei lieti calici"), or silence, while heart rate, respiration, blood pressures, middle cerebral artery flow velocity, and skin vasomotion were recorded. Common responses were recognized by averaging instantaneous cardiorespiratory responses regressed against changes in music profiles and by coherence analysis during rhythmic phrases. Vocal and orchestral crescendos produced significant (P=0.05 or better) correlations between cardiovascular or respiratory signals and the music profile, particularly skin vasoconstriction and blood pressures, proportional to crescendo, in contrast to uniform emphasis, which induced skin vasodilation and reduction in blood pressures. Correlations were significant both in individual and group-averaged signals. Phrases at 10-second periods by Verdi entrained the cardiovascular autonomic variables. No qualitative differences in recorded measurements were seen between musicians and nonmusicians. Music emphasis and rhythmic phrases are tracked consistently by physiological variables. Autonomic responses are synchronized with music, which might therefore convey emotions through autonomic arousal during crescendos or rhythmic phrases.
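
    The analysis described above regresses physiological signals against music profiles and applies coherence analysis during the rhythmic phrases. As a hedged illustration of the coherence idea only, the sketch below computes magnitude-squared coherence between a synthetic 10-second-period "music profile" and a noisy physiological signal; the data, sampling rate, and parameters are invented, not the study's.

    ```python
    # Minimal sketch (synthetic data): how strongly a slowly varying music
    # "profile" (e.g. a loudness envelope) is tracked by a physiological signal,
    # measured with magnitude-squared coherence. The 0.1 Hz component stands in
    # for the ~10 s rhythmic phrases mentioned above.
    import numpy as np
    from scipy.signal import coherence

    fs = 4.0                                    # samples per second (resampled signals)
    t = np.arange(0, 600, 1 / fs)               # ten minutes
    rng = np.random.default_rng(0)

    music_profile = np.sin(2 * np.pi * 0.1 * t)                  # 10 s period emphasis
    physio = 0.6 * np.sin(2 * np.pi * 0.1 * t + 0.8) \
             + rng.normal(scale=1.0, size=t.size)                # entrained component + noise

    f, Cxy = coherence(music_profile, physio, fs=fs, nperseg=256)
    peak = f[np.argmax(Cxy)]
    print(f"peak coherence {Cxy.max():.2f} at {peak:.2f} Hz")    # expect a peak near 0.1 Hz
    ```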

  15. Replacing the Orchestra? - The Discernibility of Sample Library and Live Orchestra Sounds.

    Directory of Open Access Journals (Sweden)

    Reinhard Kopiez

    Full Text Available Recently, musical sounds from pre-recorded orchestra sample libraries (OSL) have become indispensable in music production for the stage or popular charts. Surprisingly, it is unknown whether human listeners can identify sounds as stemming from real orchestras or OSLs. Thus, an internet-based experiment was conducted to investigate whether a classic orchestral work, produced with sounds from a state-of-the-art OSL, could be reliably discerned from a live orchestra recording of the piece. It could be shown that the entire sample of listeners (N = 602) on average identified the correct sound source at 72.5%. This rate slightly exceeded Alan Turing's well-known upper threshold of 70% for a convincing, simulated performance. However, while sound experts tended to correctly identify the sound source, participants with lower listening expertise, who resembled the majority of music consumers, only achieved 68.6%. As non-expert listeners in the experiment were virtually unable to tell the real-life and OSL sounds apart, it is assumed that OSLs will become more common in music production for economic reasons.
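
    As a rough, illustrative check of how tight an estimate based on 602 listeners is, one can compute a normal-approximation confidence interval for the group hit rate. This treats the group-average rate as if it were a single proportion over independent judgments, which is a simplification and not the published analysis.

    ```python
    # Quick arithmetic check (illustrative only): with 602 listeners and an
    # average hit rate of 72.5%, how far above the 70% threshold is the estimate?
    import math

    n, p_hat, threshold = 602, 0.725, 0.70
    se = math.sqrt(p_hat * (1 - p_hat) / n)          # standard error of a proportion
    lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se    # ~95% confidence interval
    print(f"95% CI: [{lo:.3f}, {hi:.3f}]")           # roughly [0.689, 0.761]
    print("lower bound above 70%?", lo > threshold)  # False: the margin is small
    ```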

  16. Tuvan music and World Music

    Directory of Open Access Journals (Sweden)

    Maxim V. Chaposhnikov

    2017-06-01

    the world. Tuva now has its own rock, avant-garde and women's bands, among others. Throat-singing has become a popular form of vocal art. Tuvan music attracts hundreds of thousands of fans around the world. The circulation of media featuring Tuvan music has allowed musicians of the Sayan-Altai region and Central Asia to rediscover many of their own genres. Thus, Tuvan music burst into the world like a Mongol invasion in the early 1990s. The global success of Tuvan music on the world music market is obvious, but a normalization of how the outside world perceives Tuvan music is inevitable. This article has audio files attached (please see "Supplementary files" in "Article tools").

  17. Distress vocalization sequences broadcasted by bats carry redundant information.

    Science.gov (United States)

    Hechavarría, Julio C; Beetz, M Jerome; Macias, Silvio; Kössl, Manfred

    2016-07-01

    Distress vocalizations (also known as alarm calls or screams) are an important component of the vocal repertoire of a number of animal species, including bats, humans, monkeys and birds, among others. Although the behavioral relevance of distress vocalizations is undeniable, little is currently known about the rules that govern vocalization production in threatening situations. In this article, we show that when distressed, bats of the species Carollia perspicillata produce repetitive vocalization sequences in which consecutive syllables are likely to be similar to one another in their physical attributes. The uttered distress syllables are broadband (12-73 kHz), with most of their energy focused at 23 kHz. Distress syllables are short (~4 ms), their average sound pressure level is close to 70 dB SPL, and they are produced at high repetition rates (every 14 ms). We discuss that, because of their physical attributes, bat distress vocalizations could serve a dual purpose: (1) advertising threatening situations to conspecifics, and (2) informing the threatener that the bats are ready to defend themselves. We also discuss possible advantages of advertising danger or discomfort using repetitive utterances, a calling strategy that appears to be ubiquitous across the animal kingdom.

  18. Music Engineering as a Novel Strategy for Enhancing Music Enjoyment in the Cochlear Implant Recipient.

    Science.gov (United States)

    Kohlberg, Gavriel D; Mancuso, Dean M; Chari, Divya A; Lalwani, Anil K

    2015-01-01

    Enjoyment of music remains an elusive goal following cochlear implantation. We test the hypothesis that reengineering music to reduce its complexity can enhance the listening experience for the cochlear implant (CI) listener. Normal hearing (NH) adults (N = 16) and CI listeners (N = 9) evaluated a piece of country music on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version along with 20 modified, less complex, versions created by including subsets of the musical instruments from the original song. NH participants listened to the segments both with and without CI simulation processing. Compared to the original song, modified versions containing only 1-3 instruments were less enjoyable to the NH listeners but more enjoyable to the CI listeners and the NH listeners with CI simulation. Excluding vocals and including rhythmic instruments improved enjoyment for NH listeners with CI simulation but made no difference for CI listeners. Reengineering a piece of music to reduce its complexity has the potential to enhance music enjoyment for the cochlear implantee. Thus, in addition to improvements in software and hardware, engineering music specifically for the CI listener may be an alternative means to enhance their listening experience.
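
    The reduction strategy described above can be sketched as mixing small subsets of pre-separated instrument stems. The file names, stem set and subset sizes below are assumptions for illustration; they are not the study's materials or procedure, and the sketch assumes stems of equal length and sample rate.

    ```python
    # Minimal sketch: render simplified versions of a song by mixing subsets of
    # separate instrument stems (hypothetical files), then peak-normalizing.
    from itertools import combinations
    import numpy as np
    import soundfile as sf        # third-party: pip install soundfile

    stem_files = {                 # hypothetical pre-separated tracks
        "vocals": "vocals.wav",
        "guitar": "guitar.wav",
        "bass": "bass.wav",
        "drums": "drums.wav",
    }

    stems, sr = {}, None
    for name, path in stem_files.items():
        data, sr = sf.read(path)   # assumes all stems share length and sample rate
        stems[name] = data

    for k in (1, 2, 3):                                   # 1- to 3-instrument versions
        for subset in combinations(stems, k):
            mix = sum(stems[name] for name in subset)
            mix = mix / (np.max(np.abs(mix)) + 1e-9)      # peak-normalize to avoid clipping
            sf.write("mix_" + "_".join(subset) + ".wav", mix, sr)
    ```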

  19. From Vocal Replication to Shared Combinatorial Speech Codes: A Small Step for Evolution, A Big Step for Language

    Science.gov (United States)

    Oudeyer, Pierre-Yves

    Humans use spoken vocalizations, or their signed equivalent, as a physical support to carry language. This support is highly organized: vocalizations are built with the re-use of a small number of articulatory units, which are themselves discrete elements carved up by each linguistic community in the articulatory continuum. Moreover, the repertoires of these elementary units (the gestures, the phonemes, the morphemes) have a number of structural regularities: for example, while our vocal tract physically allows the production of hundreds of vowels, each language most often uses 5, and never more than 20 of them. Also, certain vowels are very frequent, like /a,e,i,o,u/, and some others are very rare, like /en/. All the speakers of a given linguistic community categorize the speech sounds in the same manner and share the same repertoire of vocalizations. Speakers of different communities may have very different ways of categorizing sounds (for example, Chinese uses tones to distinguish sounds) and different repertoires of vocalizations. Such an organized physical support of language is crucial for the existence of language, and thus asking how it may have appeared in the biological and/or cultural history of humans is a fundamental question. In particular, one can wonder how much the evolution of human speech codes relied on specific evolutionary innovations, and thus how difficult (or not) it was for speech to appear.

  20. Transformations: Technology and the Music Industry.

    Science.gov (United States)

    Peters, G. David

    2001-01-01

    Focuses on the companies and organizations of the Music Industry Conference (MIC). Addresses topics such as: changes in companies due to technology, audio compact discs, the music instrument digital interface (MIDI), digital sound recording, and the MIC on-line music instruction programs offered. (CMK)

  1. Musicians: Cultural Workers or Spreaders of Musical Sounds

    Directory of Open Access Journals (Sweden)

    Esperanza Londoño La Rotta

    2013-08-01

    Full Text Available Taking into account the object of study and the type of musical and pedagogical practices found in the monographs written by undergraduate students of the professionalization program Creative Colombia (first cohort), it can be argued that their musical and pedagogical knowledge is not centred egocentrically on themselves; on the contrary, they try to build bridges and interconnections with other arts and areas of knowledge, thus showing that musical art is an essential part of their daily work and constitutes a lived experience of cultural transformation. Several processes of social change were proposed within the micro-universe of the school, the classroom, the town's Cultural House, its municipal band, and so on. In these various processes, the music teacher was seen as a political person capable of using his or her musical knowledge to mediate in social realities, as a musician-mediator. It thus became clear that the real task was not to form mere musical trainers, but cultural mediators who understand their artistic practice and teaching as an opportunity to acquire a deeper understanding of culture, as well as a means to transform it. Throughout the process, the students drew on a constructivist perspective.

  2. Issues of academic study and practical acquisition of Tuvan music (a case study of Tuvan instrumental music

    Directory of Open Access Journals (Sweden)

    Valentina Yu. Suzukey

    2017-06-01

    Full Text Available In the 20th century, Tuvan music culture underwent dramatic upheaval and a number of transformations. Today we face an acute need to rethink the achievements and losses incurred over that period. The objective of this article is to reconsider some basic parameters of Tuvan music culture that are responsible for preserving the integrity of its sound structure. The relevance of the topic is due to a current conceptual rift between musical practices and their scholarly interpretations. In the Soviet period, culture throughout the entire USSR was driven solely by the European model of musical development, with no reliance on the practices typical of ethnic cultures. We are currently witnessing a decline in the numbers of those representing the oral and aural traditional culture, while the numbers of graduates of music colleges, conservatoires, universities and academies of culture and arts, who come as bearers of values lying outside the tradition, are growing. Tuvan musical practice is experiencing an invasion of academic vocabulary and non-relevant appraisal criteria. However, Tuvan musical culture, having always been primarily oral, has developed its own acoustic structure, as well as mechanisms and methods for the non-written transfer of knowledge. These vernacular methods are still insufficiently explored. The author postulates that the system of organization of Tuvan instrumental music is unique and underlies the unconventional sound of the musical instruments and of xöömei (throat singing). The distinctive timbre and inimitable flair of the sound is achieved by an original system of bourdon-overtone sound coordination. The music is created for aural enjoyment, but musicologists (mainly in Russia) are still analyzing the notations they keep making of performed folk instrumental pieces and xöömei. Such an approach drastically narrows the entire panorama of traditional instrumental music. A positive factor is that contemporary Tuvan

  3. Music therapy improvisation

    Directory of Open Access Journals (Sweden)

    Mira Kuzma

    2001-09-01

    Full Text Available In this article, a music therapy technique – music therapy improvisation – is introduced. In this form of music therapy the improvising partners share meaning through the improvisation: the improvisation is not an end in itself; it portrays meaning that is personal, complex and can be shared with the partner. The therapeutic work, then, is meeting and matching the client's music in order to give the client an experience of "being known", of being responded to through sounds, and of being able to express things and communicate meaningfully. Rather than the client simply playing music, the therapy is about developing engagement through sustained, joint improvisations. In music therapy, music and emotion share fundamental features: one may represent the other, i.e., we hear the music not as music but as dynamic emotional states. The concept of dynamic structure explains why music makes therapeutic sense.

  4. Observational Learning in the Music Masterclass

    Science.gov (United States)

    Haddon, Elizabeth

    2014-01-01

    This article contributes to research on music masterclasses through examining learning through observation. It investigates how students are learning as observers in this context; whether and how they will transfer their masterclass learning to their own instrumental/vocal development, and whether they have discussed learning through observation.…

  5. Influence of water depth on the sound generated by air-bubble vibration in the water musical instrument

    Science.gov (United States)

    Ohuchi, Yoshito; Nakazono, Yoichi

    2014-06-01

    We have developed a water musical instrument that generates sound by the falling of water drops within resonance tubes. The instrument can give listeners the healing effect inherent in the sound of water. The sound produced by falling water drops arises from air-bubble vibrations. To investigate the impact of water depth on the air-bubble vibrations, we conducted experiments at varying water pressures and nozzle shapes. We found that the air-bubble vibration frequency does not change at water depths of 50 mm or greater. Between 35 and 40 mm, however, the frequency decreases, and at water depths of 30 mm or below it increases. In our tests, we varied the nozzle diameter from 2 to 4 mm. In addition, we found that the time taken for air-bubble vibration to start after the water drops begin falling is constant at water depths of 40 mm or greater, but longer at depths below 40 mm.
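
    As a point of reference that is not part of the study itself, the natural frequency of a pulsating air bubble in water is often estimated with the Minnaert formula, in which the ambient pressure, and hence the static depth, enters under the square root:

    ```latex
    % Minnaert resonance of an air bubble (reference formula, not from the study above):
    f_0 \;=\; \frac{1}{2\pi R_0}\,\sqrt{\frac{3\,\gamma\,p_0}{\rho}},
    \qquad p_0 \;=\; p_{\mathrm{atm}} + \rho\, g\, h .
    ```

    Here R_0 is the equilibrium bubble radius, γ ≈ 1.4 the ratio of specific heats of air, ρ the water density and h the static depth. Very close to a free surface or to the tube walls this idealization no longer holds, so depth-dependent deviations at shallow depths, such as those reported above, are to be expected.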

  6. Perceptual processing of a complex musical context

    DEFF Research Database (Denmark)

    Quiroga Martinez, David Ricardo; Hansen, Niels Christian; Højlund, Andreas

    play a fundamental role in music perception. The mismatch negativity (MMN) is a brain response that offers a unique insight into these processes. The MMN is elicited by deviants in a series of repetitive sounds and reflects the perception of change in physical and abstract sound regularities. Therefore, it is regarded as a prediction error signal and a neural correlate of the updating of predictive perceptual models. In music, the MMN has been particularly valuable for the assessment of musical expectations, learning and expertise. However, the MMN paradigm has an important limitation: its ecological validity... To this aim we will develop a new paradigm using more real-sounding stimuli. Our stimuli will be two-part music excerpts made by adding a melody to a previous design based on the Alberti bass (Vuust et al., 2011). Our second goal is to determine how the complexity of this context affects the predictive...

  7. The natural horn as an efficient sound radiating system ...

    African Journals Online (AJOL)

    Results obtained showed that the locally made horns are efficient sound-radiating systems and are therefore excellent for sound production in local musical renditions. These findings, in addition to the portability and low cost of the horns, qualify them to be highly recommended for use in music making and for other purposes ...

  8. The Sounds of Metal

    DEFF Research Database (Denmark)

    Grund, Cynthia M.

    2015-01-01

    Two, I propose that this framework allows for at least a theoretical distinction between the way in which extreme metal – e.g. black metal, doom metal, funeral doom metal, death metal – relates to its sound as music and the way in which much other music may be conceived of as being constituted...

  9. Encountering Complexity: Native Musics in the Curriculum.

    Science.gov (United States)

    Boyea, Andrea

    1999-01-01

    Describes Native American musics, focusing on issues such as music and the experience of time, metaphor and metaphorical aspects, and spirituality and sounds from nature. Discusses Native American metaphysics and its reflection in the musics. States that an effective curriculum would provide a new receptivity to Native American musics. (CMK)

  10. Linking prenatal experience to the emerging musical mind.

    Science.gov (United States)

    Ullal-Gupta, Sangeeta; Vanden Bosch der Nederlanden, Christina M; Tichko, Parker; Lahav, Amir; Hannon, Erin E

    2013-09-03

    The musical brain is built over time through experience with a multitude of sounds in the auditory environment. However, learning the melodies, timbres, and rhythms unique to the music and language of one's culture begins already within the mother's womb during the third trimester of human development. We review evidence that the intrauterine auditory environment plays a key role in shaping later auditory development and musical preferences. We describe evidence that externally and internally generated sounds influence the developing fetus, and argue that such prenatal auditory experience may set the trajectory for the development of the musical mind.

  11. Voice classification and vocal tract of singers: a study of x-ray images and morphology.

    Science.gov (United States)

    Roers, Friederike; Mürbe, Dirk; Sundberg, Johan

    2009-01-01

    This investigation compares vocal tract dimensions with the classification of singers' voices by examining x-ray material assembled between 1959 and 1991 from students admitted to the solo singing programme at the University of Music, Dresden, Germany. A total of 132 images were available for analysis. The lengths of the total vocal tract, the pharynx and the mouth cavities, the relative position of the larynx, the height of the palatal arch, and the estimated vocal fold length were analyzed statistically across voice classifications, and some significant differences were found. The length of the pharynx cavity seemed particularly influential on the total vocal tract length, which varied systematically with classification. The relationships between voice classification and body height, weight, and body mass index were also studied. The data support the hypothesis that there are consistent morphological vocal tract differences between singers of different voice classifications.

  12. Contemporary commercial music (CCM) survey: who's teaching what in nonclassical music.

    Science.gov (United States)

    LoVetri, Jeannette L; Weekly, Edrie Means

    2003-06-01

    Currently, there is an increasing interest in and demand for training in a wide variety of nonclassical music--contemporary commercial music (CCM)--and most particularly for music theater. A survey of singing teachers was completed to elucidate their training, education, and experience with and methods of teaching CCM. Teachers were at colleges, universities, and conservatories as well as in private studios, both nationally and in several foreign countries. A substantial percentage of those teaching CCM had neither formal education in teaching it nor professional experience. Many of the respondents indicated conflict between classical and CCM styles. Respondents were generally familiar with voice science and voice medicine as well as certain CCM terminology. Teachers expressed an interest in obtaining more information, with an emphasis on healthy vocal production. These results are discussed, as well as implications for the singing teacher who desires specific training to teach CCM.

  13. Influence of Music on the Behaviors of Crowd in Urban Open Public Spaces.

    Science.gov (United States)

    Meng, Qi; Zhao, Tingting; Kang, Jian

    2018-01-01

    Sound environment plays an important role in urban open spaces, yet studies on the effects of perception of the sound environment on crowd behaviors have been limited. The aim of this study, therefore, is to explore how music, an important soundscape element, affects crowd behaviors in urban open spaces. On-site observations were performed at a 100 m × 70 m urban leisure square in Harbin, China. Typical music was used to study the effects of perception of the sound environment on crowd behaviors; these behaviors were classified into movement behaviors (passing by and walking around) and non-movement behaviors (sitting). The results show that the paths of those passing by the urban leisure square with music were more centralized than without music: without music, 8.3% of people passing by walked near the edge of the square, whereas with music this percentage was zero. In terms of the speed of passing-by behavior, no significant difference was observed with or without background music. Regarding walking-around behavior in the square, the mean area and perimeter covered when background music was played were smaller than without background music, and the mean walking speed with background music was 0.296 m/s slower than when no background music was played. For sitting behavior, crowd density showed no variation with distance from the sound source when background music was absent; when music was present, the crowd density of those sitting decreased as the distance from the sound source increased.

  14. The Linked Dual Representation model of vocal perception and production

    Directory of Open Access Journals (Sweden)

    Sean eHutchins

    2013-11-01

    Full Text Available The voice is one of the most important media for communication, yet there is a wide range of abilities in both the perception and production of the voice. In this article, we review this range of abilities, focusing on pitch accuracy as a particularly informative case, and look at the factors underlying these abilities. Several classes of models have been posited describing the relationship between vocal perception and production, and we review the evidence for and against each class of model. We look at how the voice is different from other musical instruments and review evidence about both the association and the dissociation between vocal perception and production abilities. Finally, we introduce the Linked Dual Representation model, a new approach which can account for the broad patterns in prior findings, including trends in the data which might seem to be countervailing. We discuss how this model interacts with higher-order cognition and examine its predictions about several aspects of vocal perception and production.

  15. An experience of vocal improvisation at the early childhood education level.

    OpenAIRE

    Delgado Castanedo, Carolina

    2016-01-01

    ABSTRACT: This document gathers and discusses information on two fundamental elements of music education: on the one hand, musical creativity, addressing its definition, characteristics and methodologies; and on the other, musical improvisation, the core of this work, covering aspects such as its definition, objectives and foundations, its relationship with the educational field and the curriculum, and highlighting the importance of the voice and of vocal improvisation. Through an experience...

  16. Music Distribution in the Consumer Society: The Creation of Cultural Identities Through Sound

    Directory of Open Access Journals (Sweden)

    Jaime Hormigos Ruiz

    2010-03-01

    rituals of humankind. No one knows exactly how and why humans started to make music, but music has been a means of perceiving the world, a powerful instrument of knowledge. Traditionally, the creation and distribution of music has been tied to the need to communicate feelings and experiences that cannot be expressed through common language. This paper describes how our society has generated a multitude of sounds that are distributed freely through the new technologies. This set of sounds is creating cultural identities that are unable to manage their current music and to understand its communicative discourse. To this end, the paper examines the profound changes that music is undergoing in a consumer society. These changes make it necessary to establish a new analytical paradigm that allows the diversity of sounds to be structured and their creation, distribution and consumption to be analyzed. Finally, the paper states that permanent contact with music changes the way we perceive sounds. In contemporary society, music has gone from being a vital need to being an object of consumption, which has led to significant changes in its functions, significance and social use.

  17. Genetics and genomics of musical abilities

    OpenAIRE

    Oikkonen, Jaana

    2016-01-01

    Most people have the capacity for music perception and production, but the degree of music competency varies between individuals. In this thesis, I studied abilities to identify pitch, tone duration and sound patterns with Karma's test for auditory structuring (KMT), and Seashore's tests for time (ST) and pitch (SP). These abilities can be considered as basic components of musicality. Additionally, I studied self-reported musical activities, especially composing and arranging. Musical ability...

  18. Human-based percussion and self-similarity detection in electroacoustic music

    Science.gov (United States)

    Mills, John Anderson, III

    Electroacoustic music is music that uses electronic technology for the compositional manipulation of sound, and is a unique genre of music for many reasons. Analyzing electroacoustic music requires special measures, some of which are integrated into the design of a preliminary percussion analysis tool set for electroacoustic music. This tool set is designed to incorporate the human processing of music and sound. Models of the human auditory periphery are used as a front end to the analysis algorithms. The audio properties of percussivity and self-similarity are chosen as the focus because these properties are computable and informative. A collection of human judgments about percussion was undertaken to acquire clearly specified, sound-event dimensions that humans use as a percussive cue. A total of 29 participants was asked to make judgments about the percussivity of 360 pairs of synthesized snare-drum sounds. The grouped results indicate that of the dimensions tested rise time is the strongest cue for percussivity. String resonance also has a strong effect, but because of the complex nature of string resonance, it is not a fundamental dimension of a sound event. Gross spectral filtering also has an effect on the judgment of percussivity but the effect is weaker than for rise time and string resonance. Gross spectral filtering also has less effect when the stronger cue of rise time is modified simultaneously. A percussivity-profile algorithm (PPA) is designed to identify those instants in pieces of music that humans also would identify as percussive. The PPA is implemented using a time-domain, channel-based approach and psychoacoustic models. The input parameters are tuned to maximize performance at matching participants' choices in the percussion-judgment collection. After the PPA is tuned, the PPA then is used to analyze pieces of electroacoustic music. Real electroacoustic music introduces new challenges for the PPA, though those same challenges might affect
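
    A minimal version of a rise-time cue can be computed directly on the waveform envelope, as sketched below. This is a simplified, assumed implementation: the actual percussivity-profile algorithm described above uses an auditory-periphery front end and parameters tuned to the listener judgments, none of which is reproduced here.

    ```python
    # Minimal sketch of a rise-time-based percussivity cue (simplified stand-in
    # for the channel-based, psychoacoustically tuned PPA described above).
    import numpy as np

    def rise_time(signal, fs, lo=0.1, hi=0.9):
        """Time (s) for the amplitude envelope to climb from lo to hi of its peak."""
        env = np.abs(signal)
        win = max(1, int(0.005 * fs))                     # crude 5 ms smoothing window
        env = np.convolve(env, np.ones(win) / win, mode="same")
        peak = env.max()
        t_lo = np.argmax(env >= lo * peak)                # first crossing of lo * peak
        t_hi = np.argmax(env >= hi * peak)                # first crossing of hi * peak
        return (t_hi - t_lo) / fs

    def percussivity(signal, fs, fast=0.005, slow=0.05):
        """Map rise time to a 0-1 score: shorter attacks score closer to 1."""
        score = (slow - rise_time(signal, fs)) / (slow - fast)
        return float(np.clip(score, 0.0, 1.0))

    # Example: a sharp click-like burst vs. a slowly swelling tone.
    fs = 44100
    t = np.arange(0, 0.5, 1 / fs)
    click = np.exp(-t / 0.01) * np.random.default_rng(1).normal(size=t.size)
    swell = np.sin(2 * np.pi * 220 * t) * np.minimum(t / 0.2, 1.0)
    print(percussivity(click, fs), percussivity(swell, fs))   # high vs. low
    ```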

  19. Impaired socio-emotional processing in a developmental music disorder

    Science.gov (United States)

    Lima, César F.; Brancatisano, Olivia; Fancourt, Amy; Müllensiefen, Daniel; Scott, Sophie K.; Warren, Jason D.; Stewart, Lauren

    2016-01-01

    Some individuals show a congenital deficit for music processing despite normal peripheral auditory processing, cognitive functioning, and music exposure. This condition, termed congenital amusia, is typically approached regarding its profile of musical and pitch difficulties. Here, we examine whether amusia also affects socio-emotional processing, probing auditory and visual domains. Thirteen adults with amusia and 11 controls completed two experiments. In Experiment 1, participants judged emotions in emotional speech prosody, nonverbal vocalizations (e.g., crying), and (silent) facial expressions. Target emotions were: amusement, anger, disgust, fear, pleasure, relief, and sadness. Compared to controls, amusics were impaired for all stimulus types, and the magnitude of their impairment was similar for auditory and visual emotions. In Experiment 2, participants listened to spontaneous and posed laughs, and either inferred the authenticity of the speaker’s state, or judged how much laughs were contagious. Amusics showed decreased sensitivity to laughter authenticity, but normal contagion responses. Across the experiments, mixed-effects models revealed that the acoustic features of vocal signals predicted socio-emotional evaluations in both groups, but the profile of predictive acoustic features was different in amusia. These findings suggest that a developmental music disorder can affect socio-emotional cognition in subtle ways, an impairment not restricted to auditory information. PMID:27725686

  20. Initial experiments with Multiple Musical Gestures

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Graugaard, Lars

    2005-01-01

    The classic orchestra has a diminishing role in society, while hard-disc recorded music plays a predominant role today. A simple-to-use 2D pointer interface for producing music is presented as a means of playing in a social situation. The sounds are produced by a low-level... synthesizer, and the music is produced by simple gestures that are repeated easily. The gestures include left-to-right and right-to-left motion shapes for the spectral envelope and temporal envelope of the sounds, with optional backwards motion for the addition of noise; downward motion for note onset; and several... other manipulation gestures. The initial position controls which parameter is being affected, the note's intensity is controlled by the downward gesture speed, and a sequence is finalized instantly with one upward gesture. The synthesis employs a novel interface structure, the multiple musical gesture...

  1. The Physics and Psychophysics of Music An Introduction

    CERN Document Server

    Roederer, Juan G

    2009-01-01

    This book, a classic in its field, deals with the physical systems and physiological processes that intervene in music. It analyzes what objective, physical properties of sound are associated with what subjective psychological sensations of music, and it describes how these sound patterns are actually generated in musical instruments, how they propagate through the environment, and how they are detected by the ear and interpreted in the brain. Using the precise language of science, but without complicated mathematics, the author weaves a close mesh of the physics, psychophysics and neurobiology relevant to music. A prior knowledge of physics, mathematics, neurobiology or psychology is not required to understand most of the book; it is, however, assumed that the reader is familiar with music - in particular, with musical notation, musical scales and intervals, and some of the basics of musical instruments. This new edition presents substantially updated coverage of psychoacoustics, including: • New results f...

  2. Reading the Music and Understanding the Therapeutic Process: Documentation, Analysis and Interpretation of Improvisational Music Therapy

    Directory of Open Access Journals (Sweden)

    Deborah Parker

    2011-01-01

    Full Text Available This article is concerned primarily with the challenges of presenting clinical material from improvisational music therapy. My aim is to propose a model for the transcription of music therapy material, or “musicotherapeutic objects” (comparable to Bion’s “psychoanalytic objects”), which preserves the integrated “gestalt” of the musical experience as far as possible, whilst also supporting detailed analysis and interpretation. Unwilling to resort to use of visual documentation, but aware that many important indicators in music therapy are non-sounding, I propose a richly annotated score, where traditional music notation is integrated with graphic and verbal additions, in order to document non-sounding events. This model is illustrated within the context of a clinical case with a high-functioning autistic woman. The four transcriptions, together with the original audio tracks, present significant moments during the course of music therapy, attesting to the development of the dyadic relationship, with reference to John Bowlby’s concept of a “secure base” as the most appropriate dynamic environment for therapy.

  3. Glottal aerodynamics in compliant, life-sized vocal fold models

    Science.gov (United States)

    McPhail, Michael; Dowell, Grant; Krane, Michael

    2013-11-01

    This talk presents high-speed PIV measurements in compliant, life-sized models of the vocal folds. A clearer understanding of the fluid-structure interaction of voiced speech, how it produces sound, and how it varies with pathology is required to improve clinical diagnosis and treatment of vocal disorders. Physical models of the vocal folds can answer questions regarding the fundamental physics of speech, as well as the ability of clinical measures to detect the presence and extent of disorder. Flow fields were recorded in the supraglottal region of the models to estimate terms in the equations of fluid motion, and their relative importance. Experiments were conducted over a range of driving pressures with flow rates, given by a ball flowmeter, and subglottal pressures, given by a micro-manometer, reported for each case. Imaging of vocal fold motion, vector fields showing glottal jet behavior, and terms estimated by control volume analysis will be presented. The use of these results for a comparison with clinical measures, and for the estimation of aeroacoustic source strengths will be discussed. Acknowledge support from NIH R01 DC005642.

  4. Long-term memory of heterospecific vocalizations by African lions

    Science.gov (United States)

    Grinnell, Jon; van Dyk, Gus; Slotow, Rob

    2005-09-01

    Animals that use and evaluate long-distance signals have the potential to glean valuable information about others in their environment via eavesdropping. In those areas where they coexist, African lions (Panthera leo) are a significant eavesdropper on spotted hyenas (Crocuta crocuta), often using hyena vocalizations to locate and scavenge from hyena kills. This relationship was used to test African lions' long-term memory of the vocalizations of spotted hyenas via playback experiments. Hyena whoops and a control sound (Canis lupus howls) were played to three populations of lions in South Africa: (1) lions with past experience of spotted hyenas; (2) lions with current experience; and (3) lions with no experience. The results strongly suggest that lions have the cognitive ability to remember the vocalizations of spotted hyenas even after 10 years with no contact of any kind with them. Such long-term memory of heterospecific vocalizations may be widespread in species that gain fitness benefits from eavesdropping on others, but where such species are sympatric and often interact it may pass unrecognized as short-term memory instead.

  5. Exploring vocal recovery after cranial nerve injury in Bengalese finches.

    Science.gov (United States)

    Urbano, Catherine M; Peterson, Jennifer R; Cooper, Brenton G

    2013-02-08

    Songbirds and humans use auditory feedback to acquire and maintain their vocalizations. The Bengalese finch (Lonchura striata domestica) is a songbird species that rapidly modifies its vocal output to adhere to an internal song memory. In this species, the left side of the bipartite vocal organ is specialized for producing louder, higher frequencies (≥2.2kHz) and denervation of the left vocal muscles eliminates these notes. Thus, the return of higher frequency notes after cranial nerve injury can be used as a measure of vocal recovery. Either the left or right side of the syrinx was denervated by resection of the tracheosyringeal portion of the hypoglossal nerve. Histologic analyses of syringeal muscle tissue showed significant muscle atrophy in the denervated side. After left nerve resection, songs were mainly composed of lower frequency syllables, but three out of five birds recovered higher frequency syllables. Right nerve resection minimally affected phonology, but it did change song syntax; syllable sequence became abnormally stereotyped after right nerve resection. Therefore, damage to the neuromuscular control of sound production resulted in reduced motor variability, and Bengalese finches are a potential model for functional vocal recovery following cranial nerve injury. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. Modular and Adaptive Control of Sound Processing

    Science.gov (United States)

    van Nort, Douglas

    This dissertation presents research into the creation of systems for the control of sound synthesis and processing. The focus differs from much of the work related to digital musical instrument design, which has rightly concentrated on the physicality of the instrument and interface: sensor design, choice of controller, feedback to performer and so on. Often times a particular choice of sound processing is made, and the resultant parameters from the physical interface are conditioned and mapped to the available sound parameters in an exploratory fashion. The main goal of the work presented here is to demonstrate the importance of the space that lies between physical interface design and the choice of sound manipulation algorithm, and to present a new framework for instrument design that strongly considers this essential part of the design process. In particular, this research takes the viewpoint that instrument designs should be considered in a musical control context, and that both control and sound dynamics must be considered in tandem. In order to achieve this holistic approach, the work presented in this dissertation assumes complementary points of view. Instrument design is first seen as a function of musical context, focusing on electroacoustic music and leading to a view on gesture that relates perceived musical intent to the dynamics of an instrumental system. The important design concept of mapping is then discussed from a theoretical and conceptual point of view, relating perceptual, systems and mathematically-oriented ways of examining the subject. This theoretical framework gives rise to a mapping design space, functional analysis of pertinent existing literature, implementations of mapping tools, instrumental control designs and several perceptual studies that explore the influence of mapping structure. Each of these reflect a high-level approach in which control structures are imposed on top of a high-dimensional space of control and sound synthesis
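
    To make the idea of mapping as a first-class design object concrete, the sketch below shows one possible explicit mapping layer from a few control parameters to a few synthesis parameters, with smoothing so that gestures translate into musically usable parameter trajectories. The parameter names, the linear many-to-many map, and the smoothing scheme are all assumptions for illustration, not the dissertation's framework.

    ```python
    # Minimal sketch of an explicit mapping layer between control and sound
    # parameters; the specific parameters and weights are illustrative only.
    import numpy as np

    # Control space: 2-D gesture position plus pressure, normalized to [0, 1].
    control_names = ["x", "y", "pressure"]
    # Sound space: parameters a granular processor might expose (assumed names).
    sound_names = ["grain_size", "density", "pitch_shift", "wet_dry"]

    # Many-to-many linear map: each sound parameter blends several controls.
    W = np.array([
        [0.8, 0.0, 0.2],   # grain_size   <- mostly x
        [0.1, 0.7, 0.2],   # density      <- mostly y
        [0.5, 0.5, 0.0],   # pitch_shift  <- x and y jointly
        [0.0, 0.2, 0.8],   # wet_dry      <- mostly pressure
    ])

    def map_gesture(control_vec, smooth_state, alpha=0.1):
        """One-pole smoothed mapping from control values to sound parameters."""
        target = W @ np.clip(control_vec, 0.0, 1.0)
        smooth_state += alpha * (target - smooth_state)   # avoids audible jumps
        return smooth_state

    state = np.zeros(len(sound_names))
    for frame in ([0.2, 0.9, 0.1], [0.8, 0.3, 0.7]):      # two incoming control frames
        state = map_gesture(np.array(frame), state)
        print(dict(zip(sound_names, np.round(state, 3))))
    ```

    Keeping the mapping in a separate, inspectable layer like this is what allows the control structure to be redesigned independently of both the physical interface and the synthesis algorithm.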

  7. Music training alters the course of adolescent auditory development

    Science.gov (United States)

    Tierney, Adam T.; Krizman, Jennifer; Kraus, Nina

    2015-01-01

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes. PMID:26195739

  8. [The phenomenon of pain in the history of music – observations of neurobiological mechanisms of pain and its expressions in western music].

    Science.gov (United States)

    Gasenzer, E R; Neugebauer, E A M

    2014-12-01

    The purpose of this essay is to provide a historical overview of how music has dealt with the emotion and sensation of pain, as well as an overview of more recent medical research into the relationship between music and pain. Since the beginnings of western music, humans have put their emotions into musical sounds. During the baroque era, composers developed musical styles that expressed human emotions and our experiences of nature. In some compositions, such as operas, we find musical representations of pain. During Romanticism, artists began to reach into the soul of their audience; new expressive harmonies and styles touch the soul and the consciousness of the listener. With the inception of atonality, dissonant sounds were experienced as physical pain. The physiology of deep brain structures (such as the thalamus, hypothalamus and limbic system) and of the acoustic pathway processes consonant and dissonant sound and musical perceptions in ways that are similar to the perception of pain; in the thalamus and in the limbic system, music and pain meet. The relationship between music and pain is a wide-open research field, with such interesting questions as the role of dopamine in the perception of consonant or dissonant music, or the processing of pain during music listening. Musicology has not yet embarked on a general investigation of how musical compositions express pain and how that has developed or changed over the centuries. Music therapy, neuro-musicology and performing arts medicine are scientific fields that offer many ideas for medical and musical research projects. © Georg Thieme Verlag KG Stuttgart · New York.

  9. From a Music Industry to Sound Industries

    OpenAIRE

    Thor Magnusson

    2013-01-01

    Commodification has been an inherent aspect of music for many centuries. The aggregation of the diverse commodification practices could be described as an "industry," but this is an industry that has always been in a state of transition. New technologies, media formats, and practices appear regularly, requiring swift responses by the incumbent music industry. Although periods of relative stability have existed, where economic structures become established, the field has always been ch...

  10. Music Engineering as a Novel Strategy for Enhancing Music Enjoyment in the Cochlear Implant Recipient

    Directory of Open Access Journals (Sweden)

    Gavriel D. Kohlberg

    2015-01-01

    Full Text Available Objective. Enjoyment of music remains an elusive goal following cochlear implantation. We test the hypothesis that reengineering music to reduce its complexity can enhance the listening experience for the cochlear implant (CI) listener. Methods. Normal hearing (NH) adults (N=16) and CI listeners (N=9) evaluated a piece of country music on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version along with 20 modified, less complex versions created by including subsets of the musical instruments from the original song. NH participants listened to the segments both with and without CI simulation processing. Results. Compared to the original song, modified versions containing only 1–3 instruments were less enjoyable to the NH listeners but more enjoyable to the CI listeners and the NH listeners with CI simulation. Excluding vocals and including rhythmic instruments improved enjoyment for NH listeners with CI simulation but made no difference for CI listeners. Conclusions. Reengineering a piece of music to reduce its complexity has the potential to enhance music enjoyment for the cochlear implantee. Thus, in addition to improvements in software and hardware, engineering music specifically for the CI listener may be an alternative means to enhance their listening experience.

  11. 3D Room Visualization on Android Based Mobile Device (with Philips™’ Surround Sound Music Player

    Directory of Open Access Journals (Sweden)

    Durio Etgar

    2012-12-01

    Full Text Available This project is specifically intended as a demo application, so that anyone can experience a surround-audio room without having to be physically present in one. The main idea is to generate a 3D surround-sound room scene coupled with surround sound in a handier package, namely a "Virtual Listen Room". The Virtual Listen Room lays the foundation for an innovative visualization that will later be developed and released as a form of portable advertisement. The application was built in the Android environment. Android devices were chosen as the implementation target because they leave ample room for development and generally contain the essential components needed for this project, including a graphics processing unit (GPU). Graphics manipulation can be done using an embedded programming interface called OpenGL ES, which is present in practically all Android devices. Further, Android has an accelerometer sensor that is coupled with the scene to produce dynamic camera movement. The surround-sound effect is achieved with a decoder from Philips called the MPEG Surround Sound Decoder. In summary, the result is an application with a sensor-driven 3D room visualization coupled with Philips' Surround Sound Music Player. Several room properties can be manipulated: subwoofer location, room lighting, and the number of speakers inside the room. The application itself works well, despite facing several performance problems that were later solved. [Keywords: Android; Visualization; OpenGL ES; 3D; Surround Sensor]

  13. Improvisation: An Essential Element of Musical Proficiency.

    Science.gov (United States)

    Dobbins, Bill

    1980-01-01

    The author discusses the importance of improvisation, suggesting that improvisation be introduced in the earliest stages of education and be taught through an approach that integrates ear training, sight-reading, instrumental and vocal techniques and theory into a unified and complete understanding of music as a language. (Author/KC)

  14. Musical anhedonia: selective loss of emotional experience in listening to music.

    Science.gov (United States)

    Satoh, Masayuki; Nakase, Taizen; Nagata, Ken; Tomimoto, Hidekazu

    2011-10-01

    Recent case studies have suggested that emotion perception and emotional experience of music involve independent cognitive processing. We report a patient who showed selective impairment of emotional experience only in listening to music, that is, musical anhedonia. A 71-year-old right-handed man developed an infarction in the right parietal lobe. He found himself unable to experience emotion when listening to music, even music to which he had listened with pleasure before the illness. In neuropsychological assessments, his intellectual, memory, and constructional abilities were normal. Speech audiometry and recognition of environmental sounds were within normal limits. Neuromusicological assessments revealed no abnormality in the perception of elementary components of music, or in the expression and perception of musical emotion. Brain MRI identified the infarct lesion in the right inferior parietal lobule. These findings suggest that emotional experience of music can be selectively impaired without any disturbance of other musical or neuropsychological abilities. The right parietal lobe might participate in emotional experience when listening to music.

  15. LONGITUDINAL STUDIES OF MUSICALLY GIFTED SCHOOLGIRLS

    Directory of Open Access Journals (Sweden)

    Svetlana N. Loseva

    2016-01-01

    Full Text Available The aim of the article is to consider the empirical aspects of the development of musically gifted schoolgirls in vocal and choral activities. Methods. Scientific research methods (observation, questionnaire, interview, formative experiment, longitudinal study, and testing) are used. Data are analyzed using a complex of psychodiagnostic techniques: the culture-free intelligence test by R. Cattell; the modified creativity test by F. Williams; and the multifactorial personality questionnaire by R. Cattell. The reliability of the results and the validity of the findings are ensured by the use of methods and techniques that are reliable and proven in domestic and foreign psychology, and by the use of different statistical methods of data processing, including parametric and non-parametric statistical tests (Student's t-test, Spearman's rank correlation, the Mann–Whitney U test, the Wilcoxon T test, and Page's L trend test). Results and scientific novelty. Results of longitudinal research on the development of musical aptitude are presented. Experimental work with schoolgirls aged 8–15 was carried out over three years in creative choral collectives in Irkutsk. Developmental features of pupils engaged in singing are revealed. It is established that, as pupils progress through a specially developed program (comprehension of the emotional and semantic aspects of the perception of a piece of music, acquisition of the ability to distinguish musical timbres and the general coloring of the sound, etc.), analytical and intonational hearing improves with age. Regular, sustained singing training promotes the formation of cognitive operations: active development of divergent, rational, and logical thinking and of intelligence in general, as well as the acquisition of self-assessment skills. In addition, such activities dispose children and teenagers toward emotional responsiveness and spiritual self-improvement. Practical significance. The research

  16. Two Shared Rapid Turn Taking Sound Interfaces for Novices

    DEFF Research Database (Denmark)

    Hansen, Anne-Marie; Andersen, Hans Jørgen; Raudaskoski, Pirkko Liisa

    2012-01-01

    This paper presents the results of user interaction with two explorative music environments (sound systems A and B) that were inspired by the Banda Linda music tradition in two different ways. The sound systems adapted to how a team of two players improvised and made a melody together in an interleaved fashion: systems A and B used a fuzzy logic algorithm and pattern recognition to respond with modifications of a background rhythm. In an experiment with a pen tablet interface as the music instrument, users aged 10–13 were to tap tones and continue each other's melody. The sound systems rewarded users sonically if they managed to add tones to their mutual melody in a rapid turn-taking manner with rhythmical patterns. Videos of experiment sessions show that user teams contributed to a melody in ways that resemble conversation. Interaction data show that each sound system made player teams play...

  17. The cognitive organization of music knowledge: a clinical analysis.

    Science.gov (United States)

    Omar, Rohani; Hailstone, Julia C; Warren, Jane E; Crutch, Sebastian J; Warren, Jason D

    2010-04-01

    Despite much recent interest in the clinical neuroscience of music processing, the cognitive organization of music as a domain of non-verbal knowledge has been little studied. Here we addressed this issue systematically in two expert musicians with clinical diagnoses of semantic dementia and Alzheimer's disease, in comparison with a control group of healthy expert musicians. In a series of neuropsychological experiments, we investigated associative knowledge of musical compositions (musical objects), musical emotions, musical instruments (musical sources) and music notation (musical symbols). These aspects of music knowledge were assessed in relation to musical perceptual abilities and extra-musical neuropsychological functions. The patient with semantic dementia showed relatively preserved recognition of musical compositions and musical symbols despite severely impaired recognition of musical emotions and musical instruments from sound. In contrast, the patient with Alzheimer's disease showed impaired recognition of compositions, with somewhat better recognition of composer and musical era, and impaired comprehension of musical symbols, but normal recognition of musical emotions and musical instruments from sound. The findings suggest that music knowledge is fractionated, and superordinate musical knowledge is relatively more robust than knowledge of particular music. We propose that music constitutes a distinct domain of non-verbal knowledge but shares certain cognitive organizational features with other brain knowledge systems. Within the domain of music knowledge, dissociable cognitive mechanisms process knowledge derived from physical sources and the knowledge of abstract musical entities.

  18. Emotions evoked by the sound of music: characterization, classification, and measurement.

    Science.gov (United States)

    Zentner, Marcel; Grandjean, Didier; Scherer, Klaus R

    2008-08-01

    One reason for the universal appeal of music lies in the emotional rewards that music offers to its listeners. But what makes these rewards so special? The authors addressed this question by progressively characterizing music-induced emotions in 4 interrelated studies. Studies 1 and 2 (n=354) were conducted to compile a list of music-relevant emotion terms and to study the frequency of both felt and perceived emotions across 5 groups of listeners with distinct music preferences. Emotional responses varied greatly according to musical genre and type of response (felt vs. perceived). Study 3 (n=801)--a field study carried out during a music festival--examined the structure of music-induced emotions via confirmatory factor analysis of emotion ratings, resulting in a 9-factorial model of music-induced emotions. Study 4 (n=238) replicated this model and found that it accounted for music-elicited emotions better than the basic emotion and dimensional emotion models. A domain-specific device to measure musically induced emotions is introduced--the Geneva Emotional Music Scale.

  19. The New Sound of Music.

    Science.gov (United States)

    Medved, Michael

    1992-01-01

    The contrast between private contentment with family life and public pessimism about the state of the U.S. family mirrors the frightening view of human relations presented by popular culture, particularly popular music. Many African-American leaders deplore any association between African-American culture and current, often obscene, pop music…

  20. Aeroacoustics of Musical Instruments

    NARCIS (Netherlands)

    Fabre, B.; Gilbert, J.; Hirschberg, Abraham; Pelorson, X.

    2012-01-01

    We are interested in the quality of sound produced by musical instruments and their playability. In wind instruments, a hydrodynamic source of sound is coupled to an acoustic resonator. Linear acoustics can predict the pitch of an instrument. This can significantly reduce the trial-and-error process
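    To make the "linear acoustics can predict the pitch" remark concrete, here is a small worked example (illustrative only, ignoring end corrections and temperature effects) computing the fundamental of an idealized cylindrical pipe:

```python
# Worked example: fundamental frequencies of an idealized cylindrical air column.
# End corrections and temperature dependence are ignored; c = 343 m/s is assumed.
c = 343.0          # speed of sound in air, m/s
L = 0.60           # pipe length in metres (hypothetical flute-like tube)

f_open_open   = c / (2 * L)   # both ends open:   f1 = c / (2L)
f_closed_open = c / (4 * L)   # one end stopped:  f1 = c / (4L)
print(f"open-open: {f_open_open:.0f} Hz, closed-open: {f_closed_open:.0f} Hz")
```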

  1. Acoustics of a Music Venue/Bar—A Case Study

    Directory of Open Access Journals (Sweden)

    Ramani Ramakrishnan

    2016-03-01

    Full Text Available A vacant unit, once used by a Portuguese deli, was converted to a bar/music room in Toronto. The unit was divided into two spaces along its north-south axis. The western portion was designed as a music room that would provide a performance space for anything from a solo artist to a jazz combo to a small rock band. The eastern part was designed as a regular bar/dining area. The plan also called for a microbrewery unit at the back of the unit. The bar music can be loud, while the music room can range from pianissimo to forte depending on the type of performance. The acoustical design aspects are critical for the music room. In addition, the acoustical separation between the two spaces is equally important. The music room/bar is currently in use. The design results are compared to actual field measurements. The results showed that the music venue performed satisfactorily. The acoustical separation between the music venue and the bar/restaurant was better than expected, apart from an installation deficiency in the south-side sound-lock doors. The background sound along the northern portion was NC-35 or less. However, the southern portion's background sound exceeded NC-35 due to the hissing of the return air grille. The acoustical design and the performance results of the music venue-bar/restaurant are presented in this paper.

  2. Understanding Vocalization Might Help to Assess Stressful Conditions in Piglets

    Directory of Open Access Journals (Sweden)

    Diego Pereira Neves

    2013-09-01

    Full Text Available Assessing pigs' welfare is one of the most challenging subjects in intensive pig farming. Animal vocalization analysis is a noninvasive procedure and may be used as a tool for assessing animal welfare status. The objective of this research was to identify stress conditions in piglets reared in farrowing pens through their vocalization. Vocal signals were collected from 40 animals under the following situations: normal (baseline), feeling cold, in pain, and feeling hungry. A unidirectional microphone positioned about 15 cm from the animals' mouths was used for recording the acoustic signals. The microphone was connected to a digital recorder, where the signals were digitized at a sampling frequency of 44,100 Hz. The collected sounds were edited and analyzed. The J48 decision tree algorithm available in the Weka® data mining software was used for stress classification. It was possible to categorize diverse conditions (pain, cold, and hunger) from the piglets' vocalization during the farrowing phase, with an accuracy rate of 81.12%. Results indicated that vocalization might be an effective welfare indicator, and it could be applied for assessing distress from pain, cold, and hunger in farrowing piglets.
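    As a rough illustration of the classification step described above (a hedged sketch, not the authors' Weka/J48 pipeline), the following example trains a decision tree on pre-extracted acoustic features; scikit-learn's DecisionTreeClassifier stands in for J48, and the file name and feature columns are hypothetical.

```python
# Illustrative sketch only: classify piglet calls from pre-extracted acoustic
# features with a decision tree. DecisionTreeClassifier stands in for Weka's J48;
# the CSV name and column layout are hypothetical.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

calls = pd.read_csv("piglet_calls.csv")      # one row per call: features + condition
X = calls.drop(columns=["condition"])        # acoustic features (duration, peak freq, ...)
y = calls["condition"]                       # "baseline", "cold", "pain", "hunger"

tree = DecisionTreeClassifier(criterion="entropy", min_samples_leaf=5, random_state=0)
scores = cross_val_score(tree, X, y, cv=10)  # 10-fold cross-validated accuracy
print(f"Mean accuracy: {scores.mean():.1%}")
```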

  3. Direct effects of music in non-auditory cells in culture

    Directory of Open Access Journals (Sweden)

    Nathalia dos Reis Lestard

    2013-01-01

    Full Text Available The biological effects of electromagnetic waves are widely studied, especially due to their harmful effects, such as radiation-induced cancer, and to their application in diagnosis and therapy. However, the biological effects of sound, another physical agent to which we are frequently exposed, have been largely disregarded by the scientific community. Although a number of studies suggest that emotions evoked by music may be useful in medical care, alleviating stress and nociception in patients undergoing surgical procedures as well as in cancer and burn patients, little is known about the mechanisms by which these effects occur. It is generally accepted that the mechanosensory hair cells in the ear transduce the sound-induced mechanical vibrations into neural impulses, which are interpreted by the brain and evoke the emotional effects. In the last decade, however, several studies have suggested that the response to music is even more complex. Moreover, recent evidence indicates that cell types other than auditory hair cells may respond to audible sound. What is actually sensed by the hair cells, and possibly by other cells in our organism, are physical differences in fluid pressure induced by the sound waves. Therefore, there is no reasonable impediment for any cell type of our body to respond to a pure sound or to music. Hence, the aim of the present study was to evaluate the response of a human breast cancer cell line, MCF7, to music. The results obtained suggest that music can alter cellular morpho-functional parameters, such as cell size and granularity, in cultured cells. Moreover, our results suggest for the first time that music can directly interfere with hormone binding to its targets, suggesting that music or audible sounds could modulate physiological and pathophysiological processes.

  4. How music affects soundscape: Musical preferences in Skadarlija

    Directory of Open Access Journals (Sweden)

    Dumnić Marija

    2017-01-01

    Full Text Available In this article I analyze musical preferences in the context of tavern performances in Skadarlija, a popular tourist quarter in Belgrade, Serbia, on the basis of ethnographic data collection. I argue that this specific musicscape relies on communicative and affective aspects of particular performances. I pay special attention to the repertoires performed and the way in which they interweave. The aim of this article is to demonstrate how musical preferences influence sound environment, especially in the context of the tourism industry. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. 177004: Serbian Musical Identities Within Local and Global Frameworks: Traditions, Changes, Challenges

  5. Sound [signal] noise

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2012-01-01

    The article discusses the intricate relationship between sound and signification through notions of noise. The emergence of new fields of sonic artistic practices has generated several questions of how to approach sound as aesthetic form and material. During the past decade, increased attention has been paid to, for instance, a category such as 'sound art', together with an equally strengthened interest in phenomena and concepts that fall outside the accepted aesthetic procedures and constructions of what we traditionally would term musical sound – a recurring example being 'noise'.

  6. Application of a Musical Whistling Certificate Examination System as a Group Examination

    Science.gov (United States)

    Mori, Mikio; Ogihara, Mitsuhiro; Sugahara, Shin-Ichi; Taniguchi, Shuji; Kato, Shozo; Araki, Chikahiro

    Recently, some professional whistlers have set up music schools to teach musical whistling. So far, however, there has been no licensed examination for musical whistling. In this paper, we propose an examination system for evaluating musical whistling. The system conducts the examination on a personal computer (PC) and can be used to award four grades, from the second to the fifth. These grades are designed according to the standards adopted by the musical whistling school established by the Japanese professional whistler Moku-San. Group examinations are expected to be held in examination centers where other general certification examinations take place, so the influence of whistle sounds on the normally used PC microphone must be considered. For this purpose, we examined the feasibility of using a bone-conduction microphone in a musical whistling certificate examination system. This paper shows that the proposed system, which relies on bone-transmitted sounds, performs well in a noisy environment, as demonstrated in a group examination of musical whistling using bone-transmitted sounds. One remaining issue is that a candidate's whistling timing tends not to match because the applause sound output from the PC is inaudible to persons older than 60 years.

  7. THE 'FOLK-CHORAL CONCEPT' OF THE MUSIC OF OKECHUKWU ...

    African Journals Online (AJOL)

    the contributions he has made towards the development of art music in Nigeria ... child. It is also sad to note that so many children grow within this process ... vocal arrangement with the robust movements of the Soprano and Altos in ...

  8. A validated battery of vocal emotional expressions

    Directory of Open Access Journals (Sweden)

    Pierre Maurage

    2007-11-01

    Full Text Available For a long time, the exploration of emotions focused on facial expression, and vocal expression of emotion has only recently received interest. However, no validated battery of emotional vocal expressions has been published and made available to the research community. This paper aims at validating and proposing such material. Twenty actors (10 men) recorded sounds (words and interjections) expressing six basic emotions (anger, disgust, fear, happiness, neutral, and sadness). These stimuli were then submitted to a double validation phase: (1) preselection by experts; (2) quantitative and qualitative validation by 70 participants. 195 stimuli were selected for the final battery, each one depicting a precise emotion. The ratings provide a complete measure of intensity and specificity for each stimulus. This paper provides, to our knowledge, the first validated, freely available, and highly standardized battery of emotional vocal expressions (words and intonations). This battery could constitute an interesting tool for the exploration of prosody processing among normal and pathological populations, in neuropsychology as well as psychiatry. Further work is nevertheless needed to complement the present material.

  9. How Music Technology Can Make Sound and Music Worlds Accessible to Student Composers in Further Education Colleges

    Science.gov (United States)

    Kardos, Leah

    2012-01-01

    I am a composer, producer, pianist and part-time music lecturer at a Further Education college where I teach composing on Music Technology courses at levels 3 (equivalent to A-level) and 4 (Undergraduate/Foundation Degree). A "Music Technology" course, distinct from a "Music" course, often attracts applicants from diverse musical backgrounds; it…

  10. The Sound of Leadership: Transformational Leadership in Music

    Science.gov (United States)

    Hall, John L.

    2008-01-01

    Leadership and music are two topics that are rarely mentioned together. However, their universal, intriguing, and complex nature allows a unique framework for helping individuals learn leadership concepts. In this paper several songs have been selected from various music genres. Each demonstrates elements of leadership. Aspects of popular culture…

  11. Effects of Tempo and Performing Medium on Children's Music Preference.

    Science.gov (United States)

    LeBlanc, Albert; Cote, Richard

    1983-01-01

    This study measured the effect of three levels of tempo and two levels of performing medium, vocal and instrumental, on the expressed preference of fifth- and sixth-grade students for traditional jazz music listening examples. (Author/SR)

  12. Temporal Resolution and Active Auditory Discrimination Skill in Vocal Musicians

    Directory of Open Access Journals (Sweden)

    Kumar, Prawin

    2015-12-01

    Full Text Available Introduction Enhanced auditory perception in musicians is likely to result from auditory perceptual learning during several years of training and practice. Many studies have focused on biological processing of auditory stimuli among musicians. However, there is a lack of literature on temporal resolution and active auditory discrimination skills in vocal musicians. Objective The aim of the present study is to assess temporal resolution and active auditory discrimination skill in vocal musicians. Method The experimental group comprised 15 vocal musicians aged 20 to 30 years with a minimum of 5 years of professional music experience; 15 age-matched non-musicians served as the control group. We used pure-tone duration discrimination, pulse-train duration discrimination, and gap detection threshold tasks to assess temporal processing skills in both groups. Similarly, we assessed active auditory discrimination skill in both groups using the Differential Limen of Frequency (DLF). All tasks were administered with MATLAB software installed on a personal computer, at 40 dB SL, using a maximum likelihood procedure. The collected data were analyzed using SPSS (version 17.0). Result Descriptive statistics showed better thresholds for vocal musicians compared with non-musicians on all tasks. Further, an independent t-test showed that vocal musicians performed significantly better than non-musicians on pure-tone duration discrimination, pulse-train duration discrimination, gap detection threshold, and differential limen of frequency. Conclusion The present study showed enhanced temporal resolution ability and better (lower) active discrimination thresholds in vocal musicians in comparison to non-musicians.
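    For illustration, an adaptive threshold track of the kind used in such tasks can be sketched as follows; this is a simplified 2-down/1-up staircase, not the maximum-likelihood procedure the study implemented in MATLAB, and `present_trial` is a hypothetical function that plays a gap stimulus and returns whether the listener detected it.

```python
# Simplified stand-in for an adaptive gap-detection track (the study used a
# maximum-likelihood procedure; this is a basic 2-down/1-up staircase, which
# converges near 70.7% correct). `present_trial(gap_ms)` is hypothetical.
def run_staircase(present_trial, start_gap_ms=20.0, step=0.8, reversals_needed=8):
    gap, correct_streak, direction, reversals = start_gap_ms, 0, -1, []
    while len(reversals) < reversals_needed:
        if present_trial(gap):                 # gap detected
            correct_streak += 1
            if correct_streak == 2:            # two correct in a row -> harder
                correct_streak = 0
                if direction == +1:            # direction change = reversal
                    reversals.append(gap)
                direction = -1
                gap *= step
        else:                                  # miss -> easier
            correct_streak = 0
            if direction == -1:
                reversals.append(gap)
            direction = +1
            gap /= step
    return sum(reversals[-6:]) / 6             # threshold = mean of last reversals
```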

  13. Sound is Multi-Dimensional

    DEFF Research Database (Denmark)

    Bergstrøm-Nielsen, Carl

    2006-01-01

    The first part of this work examines the concept of musical parameter theory and discusses its methodical use. The second part is an annotated catalogue of 33 different students' compositions, presented in their totality with English translations, created between 1985 and 2006 as part of the subject Intuitive Music at Music Therapy, AAU. Twenty of these have sound files as well. The work thus serves as an anthology of this form of composition. All the compositions are systematically presented according to parameters: pitch, duration, dynamics, timbre, density, pulse-no pulse, tempo, stylistic...

  14. Replication of elite music performance enhancement following alpha/theta neurofeedback and application to novice performance and improvisation with SMR benefits.

    Science.gov (United States)

    Gruzelier, J H; Holmes, P; Hirst, L; Bulpin, K; Rahman, S; van Run, C; Leach, J

    2014-01-01

    Alpha/theta (A/T) and sensory-motor rhythm (SMR) neurofeedback were compared in university instrumentalists who were novice singers with regard to prepared and improvised instrumental and vocal performance in three music domains: creativity/musicality, technique and communication/presentation. Only A/T training enhanced advanced playing seen in all three domains by expert assessors and validated by correlations with learning indices, strongest with Creativity/Musicality as shown by Egner and Gruzelier (2003). Here A/T gains extended to novice performance - prepared vocal, improvised vocal and instrumental - and were recognised by a lay audience who judged the prepared folk songs. SMR learning correlated positively with Technical Competence and Communication in novice performance, in keeping with SMR neurofeedback's known impact on lower-order processes such as attention, working memory and psychomotor skills. The importance of validation through learning indices was emphasised in the interpretation of neurofeedback outcome. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. MUSIC AND SOCIETY

    OpenAIRE

    Dr. Abhay Dubey

    2017-01-01

    In India, music is believed to be as eternal as God. Before the creation of the world —it existed as the all-pervading sound of "Om" —ringing through space. Brahma, the Creator, revealed the four Vedas, the last of which was the Sama Veda —dealing with music. Vedic hymns were ritualistic chants of invocation to different nature gods. It is not strange therefore to find the beginnings of Hindu music associated with Gods and Goddesses. The mythological heaven of Indra, God of Rain, was inhabite...

  16. ICE on the road to auditory sensitivity reduction and sound localization in the frog.

    Science.gov (United States)

    Narins, Peter M

    2016-10-01

    Frogs and toads are capable of producing calls at potentially damaging levels that exceed 110 dB SPL at 50 cm. Most frog species have internally coupled ears (ICE) in which the tympanic membranes (TyMs) communicate directly via the large, permanently open Eustachian tubes, resulting in an inherently directional asymmetrical pressure-difference receiver. One active mechanism for auditory sensitivity reduction involves the pressure increase during vocalization that distends the TyM, reducing its low-frequency airborne sound sensitivity. Moreover, if sounds generated by the vocal folds arrive at both surfaces of the TyM with nearly equal amplitudes and phases, the net motion of the eardrum would be greatly attenuated. Both of these processes appear to reduce the motion of the frog's TyM during vocalizations. The implications of ICE in amphibians with respect to sound localizations are discussed, and the particularly interesting case of frogs that use ultrasound for communication yet exhibit exquisitely small localization jump errors is brought to light.

  17. "Nerozlišuji v tvorbě věci drobné a velké"

    Czech Academy of Sciences Publication Activity Database

    Kordík, Pavel

    2016-01-01

    Vol. 13, No. 26 (2016), pp. 26-53, ISSN 1214-7915. Institutional support: RVO:68378076. Keywords: Blue Sky * František Hrubín * play * vocal sound * musical morphology * song. Subject RIV: AL - Art, Architecture, Cultural Heritage

  18. The association of noise sensitivity with music listening, training, and aptitude

    Directory of Open Access Journals (Sweden)

    Marina Kliuchko

    2015-01-01

    Full Text Available After intensive, long-term musical training, the auditory system of a musician is specifically tuned to perceive musical sounds. We wished to find out whether a musician's auditory system also develops increased sensitivity to any sound of everyday life, experiencing them as noise. For this purpose, an online survey, including questionnaires on noise sensitivity, musical background, and listening tests for assessing musical aptitude, was administered to 197 participants in Finland and Italy. Subjective noise sensitivity (assessed with the Weinstein's Noise Sensitivity Scale) was analyzed for associations with musicianship, musical aptitude, weekly time spent listening to music, and the importance of music in each person's life (or music importance). Subjects were divided into three groups according to their musical expertise: Nonmusicians (N = 103), amateur musicians (N = 44), and professional musicians (N = 50). The results showed that noise sensitivity did not depend on musical expertise or performance on musicality tests or the amount of active (attentive) listening to music. In contrast, it was associated with daily passive listening to music, so that individuals with higher noise sensitivity spent less time in passive (background) listening to music than those with lower sensitivity to noise. Furthermore, noise-sensitive respondents rated music as less important in their life than did individuals with lower sensitivity to noise. The results demonstrate that the special sensitivity of the auditory system derived from musical training does not lead to increased irritability from unwanted sounds. However, the disposition to tolerate contingent musical backgrounds in everyday life depends on the individual's noise sensitivity.

  19. The association of noise sensitivity with music listening, training, and aptitude.

    Science.gov (United States)

    Kliuchko, Marina; Heinonen-Guzejev, Marja; Monacis, Lucia; Gold, Benjamin P; Heikkilä, Kauko V; Spinosa, Vittoria; Tervaniemi, Mari; Brattico, Elvira

    2015-01-01

    After intensive, long-term musical training, the auditory system of a musician is specifically tuned to perceive musical sounds. We wished to find out whether a musician's auditory system also develops increased sensitivity to any sound of everyday life, experiencing them as noise. For this purpose, an online survey, including questionnaires on noise sensitivity, musical background, and listening tests for assessing musical aptitude, was administered to 197 participants in Finland and Italy. Subjective noise sensitivity (assessed with the Weinstein's Noise Sensitivity Scale) was analyzed for associations with musicianship, musical aptitude, weekly time spent listening to music, and the importance of music in each person's life (or music importance). Subjects were divided into three groups according to their musical expertise: Nonmusicians (N = 103), amateur musicians (N = 44), and professional musicians (N = 50). The results showed that noise sensitivity did not depend on musical expertise or performance on musicality tests or the amount of active (attentive) listening to music. In contrast, it was associated with daily passive listening to music, so that individuals with higher noise sensitivity spent less time in passive (background) listening to music than those with lower sensitivity to noise. Furthermore, noise-sensitive respondents rated music as less important in their life than did individuals with lower sensitivity to noise. The results demonstrate that the special sensitivity of the auditory system derived from musical training does not lead to increased irritability from unwanted sounds. However, the disposition to tolerate contingent musical backgrounds in everyday life depends on the individual's noise sensitivity.

  20. Ratio-scaling of listener preference of multichannel reproduced sound

    DEFF Research Database (Denmark)

    Choisel, Sylvain; Wickelmaier, Florian

    2005-01-01

    ...a non-trivial assumption in the case of complex spatial sounds. In the present study the Bradley-Terry-Luce (BTL) model was employed to investigate the unidimensionality of preference judgments made by 40 listeners on multichannel reproduced sound. Short musical excerpts were played back in eight reproduction modes (mono ... music). As a main result, the BTL model was found to predict the choice frequencies well. This implies that listeners were able to integrate the complex nature of the sounds into a unidimensional preference judgment. It further implies the existence of a preference scale on which the reproduction modes...
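    For readers unfamiliar with the model, the following sketch (illustrative only; the choice-count matrix is made up and this is not the authors' analysis code) shows how Bradley-Terry-Luce scale values can be estimated from paired-comparison counts with the standard MM iteration.

```python
# Minimal sketch: fit Bradley-Terry-Luce preference scale values from a matrix of
# paired-comparison counts using the classic MM (Zermelo) iteration.
# wins[i][j] = number of times option i was preferred over option j (made-up data).
import numpy as np

def fit_btl(wins, iters=1000, tol=1e-10):
    wins = np.asarray(wins, dtype=float)
    n = wins.shape[0]
    p = np.ones(n) / n                           # initial scale values
    total = wins + wins.T                        # comparisons per pair
    for _ in range(iters):
        new_p = np.empty(n)
        for i in range(n):
            denom = sum(total[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            new_p[i] = wins[i].sum() / denom     # MM update
        new_p /= new_p.sum()                     # normalize (ratio scale, sums to 1)
        if np.max(np.abs(new_p - p)) < tol:
            break
        p = new_p
    return p

wins = [[0, 12, 30], [28, 0, 25], [10, 15, 0]]   # hypothetical choice frequencies
print(fit_btl(wins))
```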

  1. Evaluating signal-to-noise ratios, loudness, and related measures as indicators of airborne sound insulation.

    Science.gov (United States)

    Park, H K; Bradley, J S

    2009-09-01

    Subjective ratings of the audibility, annoyance, and loudness of music and speech sounds transmitted through 20 different simulated walls were used to identify better single-number ratings of airborne sound insulation. The first part of this research considered standard measures such as the sound transmission class, the weighted sound reduction index (R(w)), and variations of these measures [H. K. Park and J. S. Bradley, J. Acoust. Soc. Am. 126, 208-219 (2009)]. This paper considers a number of other measures, including signal-to-noise ratios related to the intelligibility of speech and measures related to the loudness of sounds. An exploration of the importance of the included frequencies showed that the optimum ranges of included frequencies were different for speech and music sounds. Measures related to speech intelligibility were useful indicators of responses to speech sounds but were not as successful for music sounds. A-weighted level differences, signal-to-noise ratios, and an A-weighted sound transmission loss measure were good predictors of responses when the included frequencies were optimized for each type of sound. The addition of new spectrum adaptation terms to R(w) values was found to be the most practical approach for achieving more accurate predictions of subjective ratings of transmitted speech and music sounds.
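    As a concrete illustration of what an A-weighted level difference involves (a hedged sketch with made-up octave-band source levels and transmission losses, not the study's data or exact procedure):

```python
# Illustration with made-up numbers: the A-weighted level difference for a music
# spectrum transmitted through a partition, i.e. the A-weighted source level minus
# the A-weighted received level, where each received band level is the source band
# level minus the wall's transmission loss in that band.
import math

bands    = [125, 250, 500, 1000, 2000, 4000]       # octave-band centre frequencies, Hz
a_weight = {125: -16.1, 250: -8.6, 500: -3.2, 1000: 0.0, 2000: 1.2, 4000: 1.0}
source   = {125: 80, 250: 82, 500: 78, 1000: 75, 2000: 70, 4000: 65}   # dB (hypothetical)
tl       = {125: 25, 250: 33, 500: 40, 1000: 46, 2000: 51, 4000: 55}   # dB (hypothetical)

def a_level(levels):
    """Energetic sum of octave-band levels after applying A-weighting."""
    return 10 * math.log10(sum(10 ** ((levels[f] + a_weight[f]) / 10) for f in bands))

received = {f: source[f] - tl[f] for f in bands}
print(f"A-weighted level difference: {a_level(source) - a_level(received):.1f} dB")
```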

  2. Polyphony in Iranian Music

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Massoudieh

    2017-08-01

    Full Text Available Although Iranian regional music, like Iranian traditional[*] music, is basically monophonic and follows the rules of heterophony, we occasionally run across polyphonic pieces, although most have been unwittingly formed as such. This study shows that these polyphonic pieces can be found in the following forms: 1. The meeting of two vocal parts, where the second singer starts singing before the melody is completed by the first. 2. Imitations, as a result of singing the same melody by a few singers who consecutively start singing with some delay between their parts. 3. Simultaneous playing of variants of the same melody by two players (variant heterophony). 4. Alternation between the soloist and the chorus (in the responsorial form) or between one chorus and another (in antiphony[**]), where a chorus begins the next part of the lyrics before the soloist or the other chorus has finished their own part. 5. Polyphony resulting from the singing of a melody by a few singers, where each singer sings the melody in their own voice register depending on their physiological features. 6. Accompanying the first singer by alternating changes to the drone note, or by following the up-going or down-going movement of the melody, in playing the tamira (in Lorestan), the dotār (in Khorasan), and the tamderā (among the Turkmen), which leads to a conscious parallelism of two voices. The radif of traditional music and Iranian regional music, like those of other Middle Eastern countries, are monophonic and follow the forms of heterophony; that is, the same melody is played and changed by two or more players. The change of a specific melody by two players, or a player and a singer, sometimes leads to the simultaneous playing of two different notes. Such an interference or combination of two sounds is a matter of heterophony, and by no means of harmony or accord. Interference of notes or combinations of notes in heterophony is not predictable. Since the melody is played extempore

  3. ISEE : An Intuitive Sound Editing Environment

    NARCIS (Netherlands)

    Vertegaal, R.P.H.; Bonis, E.

    1994-01-01

    This article presents ISEE, an intuitive sound editing environment, as a general sound synthesis model based on expert auditory perception and cognition of musical instruments. It discusses the backgrounds of current synthesizer user interface design and related timbre space research. Of the three

  4. Paradoxical vocal changes in a trained singer by focally cooling the right superior temporal gyrus.

    Science.gov (United States)

    Katlowitz, Kalman A; Oya, Hiroyuki; Howard, Matthew A; Greenlee, Jeremy D W; Long, Michael A

    2017-04-01

    The production and perception of music is preferentially mediated by cortical areas within the right hemisphere, but little is known about how these brain regions individually contribute to this process. In an experienced singer undergoing awake craniotomy, we demonstrated that direct electrical stimulation to a portion of the right posterior superior temporal gyrus (pSTG) selectively interrupted singing but not speaking. We then focally cooled this region to modulate its activity during vocalization. In contrast to similar manipulations in left hemisphere speech production regions, pSTG cooling did not elicit any changes in vocal timing or quality. However, this manipulation led to an increase in the pitch of speaking with no such change in singing. Further analysis revealed that all vocalizations exhibited a cooling-induced increase in the frequency of the first formant, raising the possibility that potential pitch offsets may have been actively avoided during singing. Our results suggest that the right pSTG plays a key role in vocal sensorimotor processing whose impact is dependent on the type of vocalization produced. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Evaluating musical instruments

    International Nuclear Information System (INIS)

    Campbell, D. Murray

    2014-01-01

    Scientific measurements of sound generation and radiation by musical instruments are surprisingly hard to correlate with the subtle and complex judgments of instrumental quality made by expert musicians

  6. Differential coding of conspecific vocalizations in the ventral auditory cortical stream.

    Science.gov (United States)

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2014-03-26

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher order auditory cortex along the supratemporal plane in three animals chronically using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway.
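    A minimal sketch of the kind of decoding analysis described (stimulus identity predicted from high-dimensional evoked potentials with a regularized multivariate classifier); the arrays below are random placeholders, and scikit-learn's L2-penalized logistic regression merely stands in for the classifier the authors used.

```python
# Sketch: decode which vocalization was presented from evoked potentials using a
# regularized multivariate classifier. Data shapes are hypothetical placeholders
# (trials x channels x timepoints); random data stand in for recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_times, n_stimuli = 200, 64, 50, 10
X = rng.normal(size=(n_trials, n_channels, n_times))    # evoked potentials (placeholder)
y = rng.integers(0, n_stimuli, size=n_trials)           # stimulus identity per trial

clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l2", C=0.1, max_iter=2000),  # regularized classifier
)
scores = cross_val_score(clf, X.reshape(n_trials, -1), y, cv=5)
print(f"Cross-validated decoding accuracy: {scores.mean():.2f}")
```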

  7. Social Context Predicts Vocalization Use in the Courtship Behaviors of Weddell Seals (Leptonychotes weddellii): A Case Study

    Directory of Open Access Journals (Sweden)

    Ludivine R. Russell

    2016-05-01

    Full Text Available Despite previous research, no study has convincingly demonstrated what role if any vocalizations might play in the reproductive behavior of Weddell seals (Leptonychotes weddellii). To better understand that role, we created an artificial territory for an adult, male Weddell seal under the shore-fast ice in McMurdo Sound, Antarctica, and recorded its in situ vocalizations and non-vocal behaviors with an underwater video camera and hydrophone while alone, with another male, and with one or more females. Additionally, we simultaneously recorded the vocalizations and non-vocal behaviors from a female interacting with the male. Analysis of 86 hr of video and audio recordings showed: (1) the male vocalized more than the female, (2) the male's vocal repertoire was larger than the females' repertoire, (3) vocalizations changed quantitatively and qualitatively with social context, and (4) patterns of vocalizations and non-vocal behaviors were detected with Theme, pattern recognition software from Noldus Information Technology. These results provided strong evidence that vocalizations played an important role during courtship, and together with the significant behavioral sequences, vocal and non-vocal, they provided insight into the function of their vocalizations including chirps, growls, jaw claps, knocks, mews, trills, and trills + knocks.

  8. The Micronium-A Musical MEMS instrument

    NARCIS (Netherlands)

    Engelen, Johannes Bernardus Charles; de Boer, Hans L.; de Boer, Hylco; Beekman, Jethro G.; Fortgens, Laurens C.; de Graaf, Derk B.; Vocke, Sander; Abelmann, Leon

    The Micronium is a musical instrument fabricated from silicon using microelectromechanical system (MEMS) technology. It is—to the best of our knowledge—the first musical micro-instrument fabricated using MEMS technology, where the actual sound is generated by mechanical microstructures. The

  9. Learning about the Dynamic Sun through Sounds

    Science.gov (United States)

    Quinn, M.; Peticolas, L. M.; Luhmann, J.; MacCallum, J.

    2008-06-01

    Can we hear the Sun or its solar wind? Not in the sense that they make sound. But we can take the particle, magnetic field, electric field, and image data and turn it into sound to demonstrate what the data tells us. We present work on turning data from the two-satellite NASA mission called STEREO (Solar TErrestrial RElations Observatory) into sounds and music (sonification). STEREO has two satellites orbiting the Sun near Earth's orbit to study the coronal mass ejections (CMEs) from the Corona. One sonification project aims to inspire musicians, museum patrons, and the public to learn more about CMEs by downloading STEREO data and using it to make music. We demonstrate the software and discuss the way in which it was developed. A second project aims to produce a museum exhibit using STEREO imagery and sounds from STEREO data. We demonstrate a "walk across the Sun" created for this exhibit so people can hear the features on solar images. We show how pixel intensity translates into pitches from selectable scales with selectable musical scale size and octave locations. We also share our successes and lessons learned.
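    The pixel-to-pitch mapping can be illustrated with a toy sketch (this is not the exhibit's actual software; the scale, octave range, and sample values are arbitrary):

```python
# Toy illustration of sonification by pitch mapping: pixel intensities along a
# horizontal "walk" across an image are quantized onto a selectable musical scale
# spanning a chosen number of octaves, then converted to frequencies in Hz.
import numpy as np

MAJOR = [0, 2, 4, 5, 7, 9, 11]            # semitone offsets of a major scale

def intensities_to_frequencies(pixels, scale=MAJOR, octaves=2, base_midi=48):
    pixels = np.asarray(pixels, dtype=float)
    norm = (pixels - pixels.min()) / max(np.ptp(pixels), 1e-9)     # map to 0..1
    degrees = np.round(norm * (len(scale) * octaves - 1)).astype(int)
    midi = [base_midi + 12 * (d // len(scale)) + scale[d % len(scale)] for d in degrees]
    return [440.0 * 2 ** ((m - 69) / 12) for m in midi]            # MIDI note -> Hz

row = [10, 40, 90, 200, 255, 180, 60]      # hypothetical pixel intensities along one row
print([round(f, 1) for f in intensities_to_frequencies(row)])
```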

  10. Assessment of vocal intensity in lecturers depending on acoustic properties of lecture rooms

    Directory of Open Access Journals (Sweden)

    Witold Mikulski

    2015-08-01

    Full Text Available Background: The Lombard effect increases vocal intensity level in environments where noise occurs. This article presents the results of the author's own study of vocal intensity level and the A-weighted sound pressure level of background noise during normal lectures. The aim of the study was to define whether the above-mentioned parameters depend on the acoustic properties of rooms (classrooms or lecture rooms) and to determine how many lecturers speak with a raised voice. Material and Methods: The study was performed in a group of 50 teachers and lecturers in 10 classrooms with volumes of 160–430 m3 and reverberation times of 0.37–1.3 s (group A consisted of 3 rooms which fulfilled, group B of 3 rooms which almost fulfilled, and group C of 4 rooms which did not fulfill the criteria based on reverberation time; the maximum permissible value is 0.6–0.8 s according to PN-B-02151-4:2015). The criterion for a raised voice was based on vocal intensity level (maximum value: 65 dB according to EN ISO 9921:2003). The values of the above-mentioned parameters were determined from the modes of the A-weighted sound pressure level distributions during lectures. Results: Great differentiation of vocal intensity level between lecturers was found. In classrooms of group A no lecturers used a raised voice; in group B, 21%, and in group C, 60% of lecturers used a raised voice. Conclusions: It was observed that the acoustic properties of classrooms (defined by reverberation time) affect the lecturer's vocal intensity level (i.e., raising the voice), which may contribute to an increased risk of vocal tract illnesses. The occurrence of the Lombard effect in groups of teachers and lecturers conducting lectures in rooms was evidenced. Med Pr 2015;66(4):487–496
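    The core of the analysis (estimating the mode of the A-weighted level distribution and checking it against the 65 dB criterion) can be sketched as follows; the level series below is simulated and this is not the study's measurement chain.

```python
# Minimal sketch: estimate a lecturer's vocal intensity level as the mode of the
# distribution of A-weighted levels sampled during a lecture (1-dB bins) and flag a
# raised voice above the 65 dB criterion cited from EN ISO 9921. Levels are simulated.
import numpy as np

def modal_level(levels_db, bin_width=1.0):
    levels_db = np.asarray(levels_db)
    edges = np.arange(levels_db.min(), levels_db.max() + bin_width, bin_width)
    counts, edges = np.histogram(levels_db, bins=edges)
    i = counts.argmax()
    return (edges[i] + edges[i + 1]) / 2      # centre of the most populated bin

lecture_levels = np.random.default_rng(1).normal(67, 3, size=5000)  # simulated dB(A)
mode = modal_level(lecture_levels)
print(f"Modal level: {mode:.1f} dB(A); raised voice: {mode > 65}")
```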

  11. Satire in Music

    Directory of Open Access Journals (Sweden)

    Leon Stefanija

    2011-12-01

    Full Text Available The article surveys the scope of satire and suggests its range. It is divided into six sections. The introductory comment (The semantics of music) briefly outlines the fact that music has always been a part of communicative endeavors. The historical background of the semantic issues in music is described (Historical surmises), which is necessary to define satire in music as a specific genre combining features from different musical forms. The third section discusses six areas as the most common contexts of musical satire: (1) satirical music theater works, such as the opera Il Girello by Jacopo Melani, the famous Coffee Cantata (Schweigt stille, plaudert nicht, BWV 211) by Johann Sebastian Bach, Der Schulmeister by Georg Philipp Telemann, The Beggar's Opera by John Gay, and so on; (2) musical genres associated with satire, either (a) within vocal-instrumental music, for instance opera buffa, Singspiel, operetta, cabaret, vaudeville, and so on, or (b) in instrumental pieces, such as capriccios, grotesques, scherzos, burlesques, and so on; (3) individual features or compositional parts related to satire; for example, in (a) vocal music, the Satiro in Orfeo by Luigi Rossi, the range of the Orlando character in eighteenth-century opera, who "may be satire, a fool or hero, but never all together" (Harris, 1986, 106), the satirical antihero Matěj Brouček in Leoš Janáček's work, and also Lady Macbeth, and in (b) instrumental music, such as the sermon of St. Anthony in Gustav Mahler's Second Symphony, his marches, and "low-brow tunes," a number of episodes in Dmitri Shostakovich's works, and so on; (4) works variously related to criticism, such as the work of Erik Satie, Kurt Weill, Luigi Nono, Mauricio Kagel, and Vinko Globokar, as well as Fran Milčinski (a.k.a. Ježek), Laibach, or Bob Dylan; (5) music journalism, from Johann Beer and Louis-Abel Beffroy de Reigny and his popular pièces de circonstance, to nineteenth-century music journalism, George

  12. What do monkeys' music choices mean?

    Science.gov (United States)

    Lamont, Alexandra M

    2005-08-01

    McDermott and Hauser have recently shown that although monkeys show some types of preferences for sound, preferences for music are found only in humans. This suggests that music might be a relatively recent adaptation in human evolution. Here, I focus on the research methods used by McDermott and Hauser, and consider the findings in relation to infancy research and music psychology.

  13. Artabilitation ICMC Panel paper Denmark 2007: Non-Formal Rehabilitation via Immersive Interactive Music Environments

    DEFF Research Database (Denmark)

    Brooks, Tony; Petersson, Eva; Eaglestone, Barry

    2007-01-01

    This paper brings together perspectives of the ICMC 2007 ArtAbilitation Panel on non-formal rehabilitation via immersive interactive music environments. Issues covered are sound therapy, musical topologies, brainwave control and research methodology.

  14. Music in film and animation: experimental semiotics applied to visual, sound and musical structures

    Science.gov (United States)

    Kendall, Roger A.

    2010-02-01

    The relationship of music to film has only recently received the attention of experimental psychologists and quantificational musicologists. This paper outlines theory, semiotical analysis, and experimental results using relations among variables of temporally organized visuals and music. 1. A comparison and contrast is developed among the ideas in semiotics and experimental research, including historical and recent developments. 2. Musicological Exploration: The resulting multidimensional structures of associative meanings, iconic meanings, and embodied meanings are applied to the analysis and interpretation of a range of film with music. 3. Experimental Verification: A series of experiments testing the perceptual fit of musical and visual patterns layered together in animations determined goodness of fit between all pattern combinations, results of which confirmed aspects of the theory. However, exceptions were found when the complexity of the stratified stimuli resulted in cognitive overload.

  15. Music and language perception: expectations, structural integration, and cognitive sequencing.

    Science.gov (United States)

    Tillmann, Barbara

    2012-10-01

    Music can be described as sequences of events that are structured in pitch and time. Studying music processing provides insight into how complex event sequences are learned, perceived, and represented by the brain. Given the temporal nature of sound, expectations, structural integration, and cognitive sequencing are central in music perception (i.e., which sounds are most likely to come next and at what moment should they occur?). This paper focuses on similarities in music and language cognition research, showing that music cognition research provides insight into the understanding of not only music processing but also language processing and the processing of other structured stimuli. The hypothesis of shared resources between music and language processing and of domain-general dynamic attention has motivated the development of research to test music as a means to stimulate sensory, cognitive, and motor processes. Copyright © 2012 Cognitive Science Society, Inc.

  16. The sound of cooperation: Musical influences on cooperative behavior.

    Science.gov (United States)

    Kniffin, Kevin M; Yan, Jubo; Wansink, Brian; Schulze, William D

    2017-03-01

    Music as an environmental aspect of professional workplaces has been closely studied with respect to consumer behavior while sparse attention has been given to its relevance for employee behavior. In this article, we focus on the influence of music upon cooperative behavior within decision-making groups. Based on results from two extended 20-round public goods experiments, we find that happy music significantly and positively influences cooperative behavior. We also find a significant positive association between mood and cooperative behavior. Consequently, while our studies provide partial support for the relevance of affect in relation to cooperation within groups, we also show an independently important function of happy music that fits with a theory of synchronous and rhythmic activity as a social lubricant. More generally, our findings indicate that music and perhaps other atmospheric variables that are designed to prime consumer behavior might have comparably important effects for employees and consequently warrant closer investigation. Copyright © 2016 The Authors Journal of Organizational Behavior Published by John Wiley & Sons Ltd.

  17. Speech dysprosody but no music 'dysprosody' in Parkinson's disease

    NARCIS (Netherlands)

    Harris, Robert; Leenders, Klaus L.; de Jong, Bauke M.

    2016-01-01

    Parkinson's disease is characterized not only by bradykinesia, rigidity, and tremor, but also by impairments of expressive and receptive linguistic prosody. The facilitating effect of music with a salient beat on patients' gait suggests that it might have a similar effect on vocal behavior, however

  18. Common Vocal Effects and Partial Glottal Vibration in Professional Nonclassical Singers.

    Science.gov (United States)

    Caffier, Philipp P; Ibrahim Nasr, Ahmed; Ropero Rendon, Maria Del Mar; Wienhausen, Sascha; Forbes, Eleanor; Seidner, Wolfram; Nawka, Tadeus

    2018-05-01

    To multidimensionally investigate common vocal effects in experienced professional nonclassical singers, to examine their mechanism of production and reproducibility, to demonstrate the existence of partial glottal vibration, and to assess the potential for damage to the voice from nonclassical singing. Individual cohort study. Ten male singers aged between 25 and 46 years (34 ± 7 years [mean ± SD]) with different stylistic backgrounds were recruited (five pop/rock/metal, five musical theater). Participants repeatedly presented the usual nonclassical vocal effects and techniques in their repertoire. All performances were documented and analyzed using established instruments (e.g., auditory-perceptual assessment, videolaryngostroboscopy, electroglottography, voice function diagnostics). The vocal apparatus of all singers was healthy and capable of high performance. Typical nonclassical vocal effects were breathy voice, creaky voice, vocal fry, grunting, distortion, rattle, belt, and twang. All effects could be easily differentiated from each other. They were intraindividually consistently repeatable and also interindividually produced in a similar manner. A special feature in one singer was the first evidence of partial glottal vibration when belting in the high register. The unintended transition to this reduced voice quality was accompanied by physical fatigue and inflexible respiratory support. The long-lasting use of the investigated nonclassical vocal effects had no negative impact on trained singers. The possibility of long-term damage depends on the individual constitution, specific use, duration, and extent of the hyperfunction. The incidence of partial glottal vibration and its consequences require continuing research to learn more about efficient and healthy vocal function in nonclassical singing. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  19. Software for objective comparison of vocal acoustic features over weeks of audio recording: KLFromRecordingDays

    Science.gov (United States)

    Soderstrom, Ken; Alalawi, Ali

    KLFromRecordingDays allows measurement of Kullback-Leibler (KL) distances between 2D probability distributions of vocal acoustic features. Greater KL distance measures reflect increased phonological divergence across the vocalizations compared. The software has been used to compare *.wav file recordings made by Sound Analysis Recorder 2011 of songbird vocalizations pre- and post-drug and surgical manipulations. Recordings from individual animals in *.wav format are first organized into subdirectories by recording day; Sound Analysis Pro 2011 (SAP) then segments them into individual syllables and measures the acoustic features of each syllable. KLFromRecordingDays uses syllable acoustic feature data output by SAP to a MySQL table to generate and compare "template" (typically pre-treatment) and "target" (typically post-treatment) probability distributions. These distributions are a series of virtual 2D plots of the duration of each syllable (x-axis) against each of 13 other acoustic features measured by SAP for that syllable (y-axis). Differences between "template" and "target" probability distributions for each acoustic feature are determined by calculating the KL distance, a measure of the divergence of the target 2D distribution pattern from that of the template. KL distances and the mean KL distance across all acoustic features are calculated for each recording day and output to an Excel spreadsheet. Resulting data for individual subjects may then be pooled across treatment groups, graphically summarized, and used for statistical comparisons. Because SAP-generated MySQL files are accessed directly, data limits associated with spreadsheet output are avoided, and the totality of vocal output over weeks may be objectively analyzed all at once. The software has been useful for measuring drug effects on songbird vocalizations and assessing recovery from damage to regions of vocal motor cortex. It may be useful in studies employing other species, and as part of speech
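
    For reference, the KL distance described above is, for binned 2D distributions, the standard discrete Kullback-Leibler divergence of the target distribution P (post-treatment) from the template distribution Q (pre-treatment), evaluated over a shared grid of duration and feature bins; the bin indices i, j below are notation introduced here for clarity, not taken from the record:

        \[
          D_{\mathrm{KL}}(P \parallel Q) \;=\; \sum_{i,j} P(i,j)\,\log\frac{P(i,j)}{Q(i,j)}
        \]

    Averaging this quantity over the 13 duration-feature planes yields the per-recording-day mean KL distance that the record refers to.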

  20. Software for objective comparison of vocal acoustic features over weeks of audio recording: KLFromRecordingDays

    Directory of Open Access Journals (Sweden)

    Ken Soderstrom

    2017-01-01

    KLFromRecordingDays allows measurement of Kullback–Leibler (KL) distances between 2D probability distributions of vocal acoustic features. Greater KL distance measures reflect increased phonological divergence across the vocalizations compared. The software has been used to compare *.wav file recordings made by Sound Analysis Recorder 2011 of songbird vocalizations pre- and post-drug and surgical manipulations. Recordings from individual animals in *.wav format are first organized into subdirectories by recording day; Sound Analysis Pro 2011 (SAP) then segments them into individual syllables and measures the acoustic features of each syllable. KLFromRecordingDays uses syllable acoustic feature data output by SAP to a MySQL table to generate and compare “template” (typically pre-treatment) and “target” (typically post-treatment) probability distributions. These distributions are a series of virtual 2D plots of the duration of each syllable (x-axis) against each of 13 other acoustic features measured by SAP for that syllable (y-axis). Differences between “template” and “target” probability distributions for each acoustic feature are determined by calculating the KL distance, a measure of the divergence of the target 2D distribution pattern from that of the template. KL distances and the mean KL distance across all acoustic features are calculated for each recording day and output to an Excel spreadsheet. Resulting data for individual subjects may then be pooled across treatment groups, graphically summarized, and used for statistical comparisons. Because SAP-generated MySQL files are accessed directly, data limits associated with spreadsheet output are avoided, and the totality of vocal output over weeks may be objectively analyzed all at once. The software has been useful for measuring drug effects on songbird vocalizations and assessing recovery from damage to regions of vocal motor cortex. It may be useful in studies employing other
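
    As an illustration only (not the published KLFromRecordingDays code), the following minimal Python sketch computes per-feature KL distances from shared-grid 2D histograms of (duration, feature) pairs and averages them, mirroring the workflow described above; the array layout, bin count, and smoothing constant are assumptions made for the example:

        # Hypothetical sketch inspired by the record above; not the published software.
        # Each recording day is represented as an (n_syllables, n_features) array with
        # syllable duration in column 0 and other SAP-style acoustic features after it.
        import numpy as np

        def kl_distance_2d(template_xy, target_xy, bins=32, eps=1e-10):
            """KL divergence D(target || template) between binned 2D distributions
            of (duration, feature) pairs, computed over a shared bin grid."""
            # Shared bin edges so both histograms cover the same support.
            all_x = np.concatenate([template_xy[:, 0], target_xy[:, 0]])
            all_y = np.concatenate([template_xy[:, 1], target_xy[:, 1]])
            x_edges = np.linspace(all_x.min(), all_x.max(), bins + 1)
            y_edges = np.linspace(all_y.min(), all_y.max(), bins + 1)

            p, _, _ = np.histogram2d(target_xy[:, 0], target_xy[:, 1], bins=[x_edges, y_edges])
            q, _, _ = np.histogram2d(template_xy[:, 0], template_xy[:, 1], bins=[x_edges, y_edges])

            # Normalize to probabilities; eps smoothing avoids log(0) and division by zero.
            p = p / p.sum() + eps
            q = q / q.sum() + eps
            p /= p.sum()
            q /= q.sum()
            return float(np.sum(p * np.log(p / q)))

        def mean_kl_across_features(template, target, bins=32):
            """Mean KL distance across all (duration, feature_i) planes."""
            kls = [kl_distance_2d(template[:, [0, j]], target[:, [0, j]], bins=bins)
                   for j in range(1, template.shape[1])]
            return float(np.mean(kls)), kls

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            pre = rng.normal(size=(500, 5))             # "template" day (pre-treatment)
            post = rng.normal(loc=0.5, size=(500, 5))   # "target" day, shifted features
            mean_kl, per_feature = mean_kl_across_features(pre, post)
            print(f"mean KL distance across features: {mean_kl:.3f}")

    In this sketch the shared bin grid is essential: both histograms must be defined over the same support for the KL distance to be well defined, and the small smoothing constant keeps sparsely populated bins from producing undefined terms.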