WorldWideScience

Sample records for non-native speech perception

  1. Native Speakers' Perception of Non-Native English Speech

    Science.gov (United States)

    Jaber, Maysa; Hussein, Riyad F.

    2011-01-01

    This study is aimed at investigating the rating and intelligibility of different non-native varieties of English, namely French English, Japanese English and Jordanian English by native English speakers and their attitudes towards these foreign accents. To achieve the goals of this study, the researchers used a web-based questionnaire which…

  2. Decoding speech perception by native and non-native speakers using single-trial electrophysiological data.

    Directory of Open Access Journals (Sweden)

    Alex Brandmeyer

    Brain-computer interfaces (BCIs) are systems that use real-time analysis of neuroimaging data to determine the mental state of their user for purposes such as providing neurofeedback. Here, we investigate the feasibility of a BCI based on speech perception. Multivariate pattern classification methods were applied to single-trial EEG data collected during speech perception by native and non-native speakers. Two principal questions were asked: (1) Can differences in the perceived categories of pairs of phonemes be decoded at the single-trial level? (2) Can these same categorical differences be decoded across participants, within or between native-language groups? Results indicated that classification performance progressively increased with respect to the categorical status (within, boundary, or across) of the stimulus contrast, and was also influenced by the native language of individual participants. Classifier performance showed strong relationships with traditional event-related potential measures and behavioral responses. The results of the cross-participant analysis indicated an overall increase in average classifier performance when trained on data from all participants (native and non-native). A second cross-participant classifier trained only on data from native speakers led to an overall improvement in performance for native speakers, but a reduction in performance for non-native speakers. We also found that the native language of a given participant could be decoded on the basis of EEG data with accuracy above 80%. These results indicate that electrophysiological responses underlying speech perception can be decoded at the single-trial level, and that decoding performance systematically reflects graded changes in the responses related to the phonological status of the stimuli. This approach could be used in extensions of the BCI paradigm to support perceptual learning during second language acquisition.

  3. Non-native Speech Learning in Older Adults.

    Science.gov (United States)

    Ingvalson, Erin M; Nowicki, Casandra; Zong, Audrey; Wong, Patrick C M

    2017-01-01

    Though there is an extensive literature investigating the ability of younger adults to learn non-native phonology, including investigations into individual differences in younger adults' lexical tone learning, very little is known about older adults' ability to learn non-native phonology, including lexical tone. There are several reasons to suspect that older adults would use different learning mechanisms when learning lexical tone than younger adults, including poorer perception of dynamic pitch, greater reliance on working memory capacity in second language learning, and poorer category learning in older adulthood. The present study examined the relationships among older adults' baseline sensitivity for pitch patterns, working memory capacity, and declarative memory capacity with their ability to learn to associate tone with lexical meaning. In older adults, baseline pitch pattern sensitivity was not associated with generalization performance. Rather, older adults' learning performance was best predicted by declarative memory capacity. These data suggest that training paradigms will need to be modified to optimize older adults' non-native speech sound learning success.

  4. Semantic and phonetic enhancements for speech-in-noise recognition by native and non-native listeners.

    Science.gov (United States)

    Bradlow, Ann R; Alexander, Jennifer A

    2007-04-01

    Previous research has shown that speech recognition differences between native and proficient non-native listeners emerge under suboptimal conditions. Current evidence has suggested that the key deficit that underlies this disproportionate effect of unfavorable listening conditions for non-native listeners is their less effective use of compensatory information at higher levels of processing to recover from information loss at the phoneme identification level. The present study investigated whether this non-native disadvantage could be overcome if enhancements at various levels of processing were presented in combination. Native and non-native listeners were presented with English sentences in which the final word varied in predictability and which were produced in either plain or clear speech. Results showed that, relative to the low-predictability-plain-speech baseline condition, non-native listener final word recognition improved only when both semantic and acoustic enhancements were available (high-predictability-clear-speech). In contrast, the native listeners benefited from each source of enhancement separately and in combination. These results suggest that native and non-native listeners apply similar strategies for speech-in-noise perception: The crucial difference is in the signal clarity required for contextual information to be effective, rather than in an inability of non-native listeners to take advantage of this contextual information per se.

  5. Using the Speech Transmission Index for predicting non-native speech intelligibility

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Houtgast, T.; Steeneken, H.J.M.

    2004-01-01

    While the Speech Transmission Index (STI) is widely applied for prediction of speech intelligibility in room acoustics and telecommunication engineering, it is unclear how to interpret STI values when non-native talkers or listeners are involved. Based on subjectively measured psychometric functions…
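    For readers unfamiliar with the STI, the core computation can be sketched as follows. This is a simplified illustration of the standard mapping from modulation transfer values to a 0-1 index: the modulation values are invented, uniform band weights replace the standard's band-specific weights, and the masking and redundancy corrections of IEC 60268-16 are omitted.

```python
# Simplified STI sketch: modulation transfer values m per octave band are
# converted to an apparent SNR, clipped to +/-15 dB, mapped to a 0-1
# transmission index, and averaged over bands. Uniform band weights are an
# assumption here; IEC 60268-16 uses band-specific weights and corrections.
import math

def transmission_index(m):
    """Map one modulation transfer value m (0 < m < 1) to a TI in [0, 1]."""
    snr = 10 * math.log10(m / (1 - m))  # apparent signal-to-noise ratio, dB
    snr = max(-15.0, min(15.0, snr))    # clip to the +/-15 dB range
    return (snr + 15.0) / 30.0

# Invented modulation transfer values for the 125 Hz ... 8 kHz octave bands,
# already averaged over modulation frequencies.
band_m = [0.70, 0.75, 0.80, 0.85, 0.85, 0.80, 0.75]
weights = [1.0 / len(band_m)] * len(band_m)  # uniform, for illustration

sti = sum(w * transmission_index(m) for w, m in zip(weights, band_m))
print(f"STI = {sti:.2f}")  # 0 = unintelligible channel, 1 = perfect
```

    The point of the paper's question then becomes clear: the mapping above is purely channel-based, so nothing in it accounts for the talker's or listener's language background.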

  6. Non-Native University Students' Perception of Plagiarism

    Science.gov (United States)

    Ahmad, Ummul Khair; Mansourizadeh, Kobra; Ai, Grace Koh Ming

    2012-01-01

    Plagiarism is a complex issue especially among non-native students and it has received a lot of attention from researchers and scholars of academic writing. Some scholars attribute this problem to cultural perceptions and different attitudes toward texts. This study evaluates student perception of different aspects of plagiarism. A small group of…

  7. How much does language proficiency by non-native listeners influence speech audiometric tests in noise?

    Science.gov (United States)

    Warzybok, Anna; Brand, Thomas; Wagener, Kirsten C; Kollmeier, Birger

    2015-01-01

    The current study investigates the extent to which the linguistic complexity of three commonly employed speech recognition tests and second language proficiency influence speech recognition thresholds (SRTs) in noise in non-native listeners. SRTs were measured for non-natives and natives using three German speech recognition tests: the digit triplet test (DTT), the Oldenburg sentence test (OLSA), and the Göttingen sentence test (GÖSA). Sixty-four non-native and eight native listeners participated. Non-natives can show native-like SRTs in noise only for the linguistically easy speech material (DTT). Furthermore, the limitation of phonemic-acoustical cues in digit triplets affects speech recognition to the same extent in non-natives and natives. For more complex and less familiar speech materials, non-natives, ranging from basic to advanced proficiency in German, require on average 3-dB better signal-to-noise ratio for the OLSA and 6-dB for the GÖSA to obtain 50% speech recognition compared to native listeners. In clinical audiology, SRT measurements with a closed-set speech test (i.e. DTT for screening or OLSA test for clinical purposes) should be used with non-native listeners rather than open-set speech tests (such as the GÖSA or HINT), especially if a closed-set version in the patient's own native language is available.
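    The SRT reported in such tests is the SNR at which a listener reaches 50% recognition, read off a psychometric function. The sketch below fits nothing; it simply inverts an assumed logistic psychometric function by bisection, with illustrative midpoints chosen to mirror the roughly 6-dB native/non-native gap described above.

```python
# Sketch of reading an SRT off a psychometric function: intelligibility is
# modeled as a logistic function of SNR, and the SRT is the SNR giving 50%
# recognition. Midpoints are illustrative, chosen to mimic a 6-dB gap.
import math

def intelligibility(snr_db, srt_db, slope=0.15):
    """Proportion of words recognized at a given SNR (logistic function)."""
    return 1.0 / (1.0 + math.exp(-4 * slope * (snr_db - srt_db)))

def srt(psychometric, lo=-30.0, hi=30.0, target=0.5):
    """Invert a monotone psychometric function by bisection."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if psychometric(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

native = lambda s: intelligibility(s, srt_db=-7.0)
non_native = lambda s: intelligibility(s, srt_db=-1.0)  # assumed 6 dB worse

gap = srt(non_native) - srt(native)
print(f"native SRT: {srt(native):.1f} dB SNR, "
      f"non-native SRT: {srt(non_native):.1f} dB SNR, gap: {gap:.1f} dB")
```

    In practice the SRT is estimated adaptively during the test rather than by inverting a known curve, but the 50%-point definition is the same.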

  8. Learning foreign sounds in an alien world: videogame training improves non-native speech categorization.

    Science.gov (United States)

    Lim, Sung-joo; Holt, Lori L

    2011-01-01

    Although speech categories are defined by multiple acoustic dimensions, some are perceptually weighted more than others and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: Increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5 h across 5 days exhibited improvements in /r/-/l/ perception on par with 2-4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights.

  9. Emergence of category-level sensitivities in non-native speech sound learning

    Directory of Open Access Journals (Sweden)

    Emily Myers

    2014-08-01

    Over the course of development, speech sounds that are contrastive in one’s native language tend to become perceived categorically: that is, listeners are unaware of variation within phonetic categories while showing excellent sensitivity to speech sounds that span linguistically meaningful phonetic category boundaries. The end stage of this developmental process is that the perceptual systems that handle acoustic-phonetic information show special tuning to native language contrasts, and as such, category-level information appears to be present at even fairly low levels of the neural processing stream. Research on adults acquiring non-native speech categories offers an avenue for investigating the interplay of category-level information and perceptual sensitivities to these sounds as speech categories emerge. In particular, one can observe the neural changes that unfold as listeners learn not only to perceive acoustic distinctions that mark non-native speech sound contrasts, but also to map these distinctions onto category-level representations. An emergent literature on the neural basis of novel and non-native speech sound learning offers new insight into this question. In this review, I will examine this literature in order to answer two key questions. First, where in the neural pathway does sensitivity to category-level phonetic information first emerge over the trajectory of speech sound learning? Second, how do frontal and temporal brain areas work in concert over the course of non-native speech sound learning? Finally, in the context of this literature I will describe a model of speech sound learning in which rapidly-adapting access to categorical information in the frontal lobes modulates the sensitivity of stable, slowly-adapting responses in the temporal lobes.

  10. Non-native Listeners’ Recognition of High-Variability Speech Using PRESTO

    Science.gov (United States)

    Tamati, Terrin N.; Pisoni, David B.

    2015-01-01

    Background Natural variability in speech is a significant challenge to robust successful spoken word recognition. In everyday listening environments, listeners must quickly adapt and adjust to multiple sources of variability in both the signal and listening environments. High-variability speech may be particularly difficult to understand for non-native listeners, who have less experience with the second language (L2) phonological system and less detailed knowledge of sociolinguistic variation of the L2. Purpose The purpose of this study was to investigate the effects of high-variability sentences on non-native speech recognition and to explore the underlying sources of individual differences in speech recognition abilities of non-native listeners. Research Design Participants completed two sentence recognition tasks involving high-variability and low-variability sentences. They also completed a battery of behavioral tasks and self-report questionnaires designed to assess their indexical processing skills, vocabulary knowledge, and several core neurocognitive abilities. Study Sample Native speakers of Mandarin (n = 25) living in the United States recruited from the Indiana University community participated in the current study. A native comparison group consisted of scores obtained from native speakers of English (n = 21) in the Indiana University community taken from an earlier study. Data Collection and Analysis Speech recognition in high-variability listening conditions was assessed with a sentence recognition task using sentences from PRESTO (Perceptually Robust English Sentence Test Open-Set) mixed in 6-talker multitalker babble. Speech recognition in low-variability listening conditions was assessed using sentences from HINT (Hearing In Noise Test) mixed in 6-talker multitalker babble. Indexical processing skills were measured using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. 
Vocabulary…
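    A common step in studies like the one above is mixing target sentences with multitalker babble at a fixed SNR. The sketch below shows the standard RMS-based scaling; the synthetic signals stand in for real speech and 6-talker babble recordings.

```python
# Sketch of preparing speech-in-noise stimuli: scale a babble masker so the
# mixture has a chosen SNR relative to the target sentence. The synthetic
# signals below stand in for real speech and multitalker babble recordings.
import math
import random

random.seed(1)

def rms(x):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def mix_at_snr(target, masker, snr_db):
    """Return target + scaled masker with the requested SNR in dB."""
    gain = rms(target) / (rms(masker) * 10 ** (snr_db / 20))
    return [t + gain * m for t, m in zip(target, masker)]

# One second of a 220 Hz tone ("speech") and Gaussian noise ("babble").
speech = [math.sin(2 * math.pi * 220 * n / 16000) for n in range(16000)]
babble = [random.gauss(0.0, 0.3) for _ in range(16000)]

mixture = mix_at_snr(speech, babble, snr_db=0.0)
noise_part = [y - s for y, s in zip(mixture, speech)]
achieved = 20 * math.log10(rms(speech) / rms(noise_part))
print(f"achieved SNR: {achieved:.2f} dB")
```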

  11. Designing acoustics for linguistically diverse classrooms: Effects of background noise, reverberation and talker foreign accent on speech comprehension by native and non-native English-speaking listeners

    Science.gov (United States)

    Peng, Zhao Ellen

    The current classroom acoustics standard (ANSI S12.60-2010) recommends that core learning spaces not exceed a background noise level (BNL) of 35 dBA and a reverberation time (RT) of 0.6 seconds, based on speech intelligibility performance mainly by the native English-speaking population. Existing literature has not correlated these recommended values well with student learning outcomes. With a growing population of non-native English speakers in American classrooms, the special needs for perceiving degraded speech among non-native listeners, either due to realistic room acoustics or talker foreign accent, have not been addressed in the current standard. This research seeks to investigate the effects of BNL and RT on the comprehension of English speech from native English and native Mandarin Chinese talkers as perceived by native and non-native English listeners, and to provide acoustic design guidelines to supplement the existing standard. This dissertation presents two studies on the effects of RT and BNL on more realistic classroom learning experiences. How do native and non-native English-speaking listeners perform on speech comprehension tasks under adverse acoustic conditions, if the English speech is produced by talkers of native English (Study 1) versus native Mandarin Chinese (Study 2)? Speech comprehension materials were played back in a listening chamber to individual listeners: native and non-native English-speaking in Study 1; native English, native Mandarin Chinese, and other non-native English-speaking in Study 2. Each listener was screened for baseline English proficiency level, and completed dual tasks simultaneously involving speech comprehension and adaptive dot-tracing under 15 acoustic conditions, comprising three BNL conditions (RC-30, 40, and 50) and five RT scenarios (0.4 to 1.2 seconds). The results show that BNL and RT negatively affect both objective performance and subjective perception of speech comprehension, more severely for non-native…

  12. A Hybrid Acoustic and Pronunciation Model Adaptation Approach for Non-native Speech Recognition

    Science.gov (United States)

    Oh, Yoo Rhee; Kim, Hong Kook

    In this paper, we propose a hybrid model adaptation approach in which pronunciation and acoustic models are adapted by incorporating the pronunciation and acoustic variabilities of non-native speech in order to improve the performance of non-native automatic speech recognition (ASR). Specifically, the proposed hybrid model adaptation can be performed at either the state-tying or triphone-modeling level, depending on the level at which acoustic model adaptation is performed. In both methods, we first analyze the pronunciation variant rules of non-native speakers and then classify each rule as either a pronunciation variant or an acoustic variant. The state-tying level hybrid method then adapts pronunciation models and acoustic models by accommodating the pronunciation variants in the pronunciation dictionary and by clustering the states of triphone acoustic models using the acoustic variants, respectively. On the other hand, the triphone-modeling level hybrid method initially adapts pronunciation models in the same way as in the state-tying level hybrid method; however, for the acoustic model adaptation, the triphone acoustic models are then re-estimated based on the adapted pronunciation models and the states of the re-estimated triphone acoustic models are clustered using the acoustic variants. Korean-spoken English speech recognition experiments show that ASR systems employing the state-tying and triphone-modeling level adaptation methods reduce the average word error rate (WER) for non-native speech by a relative 17.1% and 22.1%, respectively, compared to a baseline ASR system.
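    The relative WER reductions quoted above follow the usual convention: WER is the word-level edit distance divided by the number of reference words, and a relative reduction compares adapted and baseline error rates. A minimal sketch, with made-up transcripts and error rates:

```python
# Sketch of word error rate (WER) and relative reduction. WER is the word-
# level edit distance (substitutions + deletions + insertions) divided by
# the number of reference words; example transcripts/rates are made up.
def wer(ref, hyp):
    r, h = ref.split(), hyp.split()
    # Standard Levenshtein edit distance over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(r)][len(h)] / len(r)

def relative_reduction(baseline, adapted):
    """E.g. a drop from 30% to 24.87% WER is a ~17.1% relative reduction."""
    return (baseline - adapted) / baseline

example = wer("the cat sat on the mat", "the cat sat on mat")  # one deletion
print(f"example WER: {example:.3f}")
print(f"relative reduction: {relative_reduction(0.30, 0.2487):.3f}")
```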

  13. Linguistic contributions to speech-on-speech masking for native and non-native listeners: Language familiarity and semantic content

    Science.gov (United States)

    Brouwer, Susanne; Van Engen, Kristin J.; Calandruccio, Lauren; Bradlow, Ann R.

    2012-01-01

    This study examined whether speech-on-speech masking is sensitive to variation in the degree of similarity between the target and the masker speech. Three experiments investigated whether speech-in-speech recognition varies across different background speech languages (English vs Dutch) for both English and Dutch targets, as well as across variation in the semantic content of the background speech (meaningful vs semantically anomalous sentences), and across variation in listener status vis-à-vis the target and masker languages (native, non-native, or unfamiliar). The results showed that the more similar the target speech is to the masker speech (e.g., same vs different language, same vs different levels of semantic content), the greater the interference on speech recognition accuracy. Moreover, the listener’s knowledge of the target and the background language modulates the size of the release from masking. These factors had an especially strong effect on masking effectiveness in highly unfavorable listening conditions. Overall this research provided evidence that the degree of target-masker similarity plays a significant role in speech-in-speech recognition. The results also give insight into how listeners assign their resources differently depending on whether they are listening to their first or second language. PMID:22352516

  14. Exploring public perception of non-native species from a visions of nature perspective.

    Science.gov (United States)

    Verbrugge, Laura N H; Van den Born, Riyan J G; Lenders, H J Rob

    2013-12-01

    Not much is known about lay public perceptions of non-native species and their underlying values. Public awareness and engagement, however, are important aspects in invasive species management. In this study, we examined the relations between the lay public's visions of nature, their knowledge about non-native species, and their perceptions of non-native species and invasive species management with a survey administered in the Netherlands. Within this framework, we identified three measures for perception of non-native species: perceived risk, control and engagement. In general, respondents scored moderate values for perceived risk and personal engagement. However, in case of potential ecological or human health risks, control measures were supported. Respondents' images of the human-nature relationship proved to be relevant in engagement in problems caused by invasive species and in recognizing the need for control, while images of nature appeared to be most important in perceiving risks to the environment. We also found that eradication of non-native species was predominantly opposed for species with a high cuddliness factor such as mammals and bird species. We conclude that lay public perceptions of non-native species have to be put in a wider context of visions of nature, and we discuss the implications for public support for invasive species management.

  15. Cross-modal Association between Auditory and Visuospatial Information in Mandarin Tone Perception in Noise by Native and Non-native Perceivers

    Directory of Open Access Journals (Sweden)

    Beverly Hannah

    2017-12-01

    Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception. Native English as well as Mandarin perceivers identified tones embedded in noise with either congruent or incongruent Auditory-Facial (AF) and Auditory-Facial-Gestural (AFG) inputs. Native Mandarin results showed the expected ceiling-level performance in the congruent AF and AFG conditions. In the incongruent conditions, while AF identification was primarily auditory-based, AFG identification was partially based on gestures, demonstrating the use of gestures as valid cues in tone identification. The English perceivers’ performance was poor in the congruent AF condition, but improved significantly in AFG. While the incongruent AF identification showed some reliance on facial information, incongruent AFG identification relied more on gestural than auditory-facial information. These results indicate positive effects of facial and especially gestural input on non-native tone perception, suggesting that cross-modal (visuospatial) resources can be recruited to aid auditory perception when phonetic demands are high. The current findings may inform patterns of tone acquisition and development, suggesting how multi-modal speech enhancement principles may be applied to facilitate speech learning.

  16. Optimizing Automatic Speech Recognition for Low-Proficient Non-Native Speakers

    Directory of Open Access Journals (Sweden)

    Catia Cucchiarini

    2010-01-01

    Computer-Assisted Language Learning (CALL) applications for improving the oral skills of low-proficient learners have to cope with non-native speech that is particularly challenging. Since unconstrained non-native ASR is still problematic, a possible solution is to elicit constrained responses from the learners. In this paper, we describe experiments aimed at selecting utterances from lists of responses. The first experiment on utterance selection indicates that the decoding process can be improved by optimizing the language model and the acoustic models, thus reducing the utterance error rate from 29–26% to 10–8%. Since giving feedback on incorrectly recognized utterances is confusing, we verify the correctness of the utterance before providing feedback. The results of the second experiment on utterance verification indicate that combining duration-related features with a likelihood ratio (LR) yields an equal error rate (EER) of 10.3%, which is significantly better than the EER for the other measures in isolation.
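    The EER used for utterance verification is the operating point at which false accepts and false rejects are equally frequent. The sketch below sweeps a decision threshold over two illustrative score lists (standing in for likelihood-ratio scores of correctly and incorrectly recognized utterances); the numbers are invented, not from the study.

```python
# Sketch of the equal error rate (EER) for utterance verification: sweep a
# decision threshold over verification scores (e.g. likelihood ratios) for
# correctly and incorrectly recognized utterances, and find the point where
# false-accept and false-reject rates are closest. Scores are invented.
def eer(correct_scores, incorrect_scores):
    """Approximate EER using the observed score values as thresholds."""
    best_far, best_frr = 1.0, 0.0
    for t in sorted(set(correct_scores + incorrect_scores)):
        frr = sum(s < t for s in correct_scores) / len(correct_scores)
        far = sum(s >= t for s in incorrect_scores) / len(incorrect_scores)
        if abs(far - frr) < abs(best_far - best_frr):
            best_far, best_frr = far, frr
    return (best_far + best_frr) / 2

accept_scores = [2.1, 1.8, 1.5, 1.2, 0.9, 0.7, 0.4, 1.6, 1.1, 0.8]
reject_scores = [0.5, 0.3, 0.2, 0.6, 0.1, 0.0, 0.45, 0.35, 0.25, 1.0]

print(f"EER ~= {eer(accept_scores, reject_scores):.2f}")
```

    Real systems compute the EER over thousands of utterances and interpolate between thresholds, but the definition is the same as in this toy version.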

  17. The Attitudes and Perceptions of Non-Native English Speaking ...

    African Journals Online (AJOL)

    native English speaking adults toward explicit grammar instruction (EGI). The factors influencing those attitudes and perceptions are also explored. The data collected in this study indicate that adult English as a second language (ESL) students ...

  18. Dissociating Cortical Activity during Processing of Native and Non-Native Audiovisual Speech from Early to Late Infancy

    Directory of Open Access Journals (Sweden)

    Eswen Fava

    2014-08-01

    Initially, infants are capable of discriminating phonetic contrasts across the world’s languages. Starting between seven and ten months of age, they gradually lose this ability through a process of perceptual narrowing. Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech). Thus far, tracking the developmental trajectory of this tuning process has been focused primarily on auditory speech alone, and generally using isolated sounds. But infants learn from speech produced by people talking to them, meaning they learn from a complex audiovisual signal. Here, we use near-infrared spectroscopy to measure blood concentration changes in the bilateral temporal cortices of infants in three different age groups: 3-to-6 months, 7-to-10 months, and 11-to-14 months. Critically, all three groups of infants were tested with continuous audiovisual speech in both their native and another, unfamiliar language. We found that at each age range, infants showed different patterns of cortical activity in response to the native and non-native stimuli. Infants in the youngest group showed bilateral cortical activity that was greater overall in response to non-native relative to native speech; the oldest group showed left lateralized activity in response to native relative to non-native speech. These results highlight perceptual tuning as a dynamic process that happens across modalities and at different levels of stimulus complexity.

  19. Musical ability and non-native speech-sound processing are linked through sensitivity to pitch and spectral information.

    Science.gov (United States)

    Kempe, Vera; Bublitz, Dennis; Brooks, Patricia J

    2015-05-01

    Is the observed link between musical ability and non-native speech-sound processing due to enhanced sensitivity to acoustic features underlying both musical and linguistic processing? To address this question, native English speakers (N = 118) discriminated Norwegian tonal contrasts and Norwegian vowels. Short tones differing in temporal, pitch, and spectral characteristics were used to measure sensitivity to the various acoustic features implicated in musical and speech processing. Musical ability was measured using Gordon's Advanced Measures of Musical Audiation. Results showed that sensitivity to specific acoustic features played a role in non-native speech-sound processing: Controlling for non-verbal intelligence, prior foreign language-learning experience, and sex, sensitivity to pitch and spectral information partially mediated the link between musical ability and discrimination of non-native vowels and lexical tones. The findings suggest that while sensitivity to certain acoustic features partially mediates the relationship between musical ability and non-native speech-sound processing, complex tests of musical ability also tap into other shared mechanisms.

  20. Atypical lateralization of ERP response to native and non-native speech in infants at risk for autism spectrum disorder.

    Science.gov (United States)

    Seery, Anne M; Vogel-Farley, Vanessa; Tager-Flusberg, Helen; Nelson, Charles A

    2013-07-01

    Language impairment is common in autism spectrum disorders (ASD) and is often accompanied by atypical neural lateralization. However, it is unclear when in development language impairment or atypical lateralization first emerges. To address these questions, we recorded event-related-potentials (ERPs) to native and non-native speech contrasts longitudinally in infants at risk for ASD (HRA) over the first year of life to determine whether atypical lateralization is present as an endophenotype early in development and whether these infants show delay in a very basic precursor of language acquisition: phonemic perceptual narrowing. ERP response for the HRA group to a non-native speech contrast revealed a trajectory of perceptual narrowing similar to a group of low-risk controls (LRC), suggesting that phonemic perceptual narrowing does not appear to be delayed in these high-risk infants. In contrast there were significant group differences in the development of lateralized ERP response to speech: between 6 and 12 months the LRC group displayed a lateralized response to the speech sounds, while the HRA group failed to display this pattern. We suggest the possibility that atypical lateralization to speech may be an ASD endophenotype over the first year of life.

  1. Non-Native Japanese Listeners' Perception of Vowel Length Contrasts in Japanese and Modern Standard Arabic (MSA)

    Science.gov (United States)

    Tsukada, Kimiko

    2012-01-01

    This study aimed to compare the perception of short vs. long vowel contrasts in Japanese and Modern Standard Arabic (MSA) by four groups of listeners differing in their linguistic backgrounds: native Arabic (NA), native Japanese (NJ), non-native Japanese (NNJ) and Australian English (OZ) speakers. The NNJ and OZ groups shared the first language…

  2. Unpacking Race, Culture, and Class in Rural Alaska: Native and Non-Native Multidisciplinary Professionals' Perceptions of Child Sexual Abuse

    Science.gov (United States)

    Bubar, Roe; Bundy-Fazioli, Kimberly

    2011-01-01

    The purpose of this study was to unpack notions of class, culture, and race as they relate to multidisciplinary team (MDT) professionals and their perceptions of prevalence in child sexual abuse cases in Native and non-Native rural Alaska communities. Power and privilege within professional settings is significant for all social work professionals…

  3. Non-Native Speakers of the Language of Instruction: Self-Perceptions of Teaching Ability

    Science.gov (United States)

    Samuel, Carolyn

    2017-01-01

    Given the linguistically diverse instructor and student populations at Canadian universities, mutually comprehensible oral language may not be a given. Indeed, both instructors who are non-native speakers of the language of instruction (NNSLIs) and students have acknowledged oral communication challenges. Little is known, though, about how the…

  4. The organization and reorganization of audiovisual speech perception in the first year of life.

    Science.gov (United States)

    Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F

    2017-04-01

    The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine (1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and (2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six and nine months, but not 11 months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.

  5. Musician advantage for speech-on-speech perception

    NARCIS (Netherlands)

    Başkent, Deniz; Gaudrain, Etienne

    Evidence for transfer of musical training to better perception of speech in noise has been mixed. Unlike speech-in-noise, speech-on-speech perception utilizes many of the skills that musical training improves, such as better pitch perception and stream segregation, as well as use of higher-level

  6. Native and Non-native English Teachers' Perceptions of their Professional Identity: Convergent or Divergent?

    Directory of Open Access Journals (Sweden)

    Zia Tajeddin

    2016-10-01

    Full Text Available There is still a preference for native speaker teachers in the language teaching profession, which is supposed to influence the self-perceptions of native and nonnative teachers. However, the status of English as a globalized language is changing the legitimacy of the native/nonnative teacher dichotomy. This study sought to investigate native and nonnative English-speaking teachers’ perceptions of native and nonnative teachers’ status and the advantages and disadvantages of being a native or nonnative teacher. Data were collected by means of a questionnaire and a semi-structured interview. A total of 200 native and nonnative teachers of English from the UK and the US, i.e. the inner circle, and Turkey and Iran, the expanding circle, participated in this study. A significant majority of nonnative teachers believed that native speaker teachers have better speaking proficiency, better pronunciation, and greater self-confidence. The findings also showed nonnative teachers’ lack of self-confidence and awareness of their role and status compared with native-speaker teachers, which could be the result of existing inequities between native and nonnative English-speaking teachers in ELT. The findings also revealed that native teachers disagreed more strongly with the concept of native teachers’ superiority over nonnative teachers. Native teachers argued that nonnative teachers have a good understanding of teaching methodology, whereas native teachers are more competent in using the language correctly. It can be concluded that teacher education programs in expanding-circle countries should include materials that raise teachers’ awareness of their own professional status and role and that dispel the misconception of the native speaker fallacy.

  7. THE REFLECTION OF BILINGUALISM IN THE SPEECH OF PRESCHOOL CHILDREN SPEAKING NATIVE (ERZYA AND NON-NATIVE (RUSSIAN LANGUAGE

    Directory of Open Access Journals (Sweden)

    Mosina, N.M.

    2016-03-01

    Full Text Available This article considers the specific features of the Mordovian speech of 16 bilingual children, aged 3 to 7 years, living in Mordovia and speaking both the Erzya and Russian languages. Their language is studied through short stories told from pictures, and the study attempts to identify the influence of the Russian language on Erzya and to detect occurrences of interference at the lexical and grammatical levels.

  8. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

    Science.gov (United States)

    Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa

    2015-02-01

    To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Speech Perception as a Multimodal Phenomenon

    OpenAIRE

    Rosenblum, Lawrence D.

    2008-01-01

    Speech perception is inherently multimodal. Visual speech (lip-reading) information is used by all perceivers and readily integrates with auditory speech. Imaging research suggests that the brain treats auditory and visual speech similarly. These findings have led some researchers to consider that speech perception works by extracting amodal information that takes the same form across modalities. From this perspective, speech integration is a property of the input information itself. Amodal s...

  10. Student perceptions of native and non-native speaker language instructors: A comparison of ESL and Spanish

    Directory of Open Access Journals (Sweden)

    Laura Callahan

    2006-12-01

    Full Text Available The question of the native vs. non-native speaker status of second and foreign language instructors has been investigated chiefly from the perspective of the teacher. Anecdotal evidence suggests that students have strong opinions on the relative qualities of instruction by native and non-native speakers. Most research focuses on students of English as a foreign or second language. This paper reports on data gathered through a questionnaire administered to 55 university students: 31 students of Spanish as a foreign language and 24 students of English as a second language. Qualitative results show what strengths students believe each type of instructor has, and quantitative results confirm that any gap students may perceive between the abilities of native and non-native instructors is not so wide as one might expect based on popular notions of the issue. ESL students showed a stronger preference for native-speaker instructors overall, and were at variance with the SFL students' ratings of native-speaker instructors' performance on a number of aspects. There was a significant correlation in both groups between having a family member who is a native speaker of the target language and student preference for and self-identification with a native speaker as instructor. (English text)

  11. How may the basal ganglia contribute to auditory categorization and speech perception?

    Directory of Open Access Journals (Sweden)

    Sung-Joo eLim

    2014-08-01

    Full Text Available Listeners must accomplish two complementary perceptual feats in extracting a message from speech. They must discriminate linguistically-relevant acoustic variability and generalize across irrelevant variability. Said another way, they must categorize speech. Since the mapping of acoustic variability is language-specific, these categories must be learned from experience. Thus, understanding how, in general, the auditory system acquires and represents categories can inform us about the toolbox of mechanisms available to speech perception. This perspective invites consideration of findings from cognitive neuroscience literatures outside of the speech domain as a means of constraining models of speech perception. Although neurobiological models of speech perception have mainly focused on cerebral cortex, research outside the speech domain is consistent with the possibility of significant subcortical contributions in category learning. Here, we review the functional role of one such structure, the basal ganglia. We examine research from animal electrophysiology, human neuroimaging, and behavior to consider characteristics of basal ganglia processing that may be advantageous for speech category learning. We also present emerging evidence for a direct role for basal ganglia in learning auditory categories in a complex, naturalistic task intended to model the incidental manner in which speech categories are acquired. To conclude, we highlight new research questions that arise in incorporating the broader neuroscience research literature in modeling speech perception, and suggest how understanding contributions of the basal ganglia can inform attempts to optimize training protocols for learning non-native speech categories in adulthood.

  12. Word Durations in Non-Native English

    Science.gov (United States)

    Baker, Rachel E.; Baese-Berk, Melissa; Bonnasse-Gahot, Laurent; Kim, Midam; Van Engen, Kristin J.; Bradlow, Ann R.

    2010-01-01

    In this study, we compare the effects of English lexical features on word duration for native and non-native English speakers and for non-native speakers with different L1s and a range of L2 experience. We also examine whether non-native word durations lead to judgments of a stronger foreign accent. We measured word durations in English paragraphs read by 12 American English (AE), 20 Korean, and 20 Chinese speakers. We also had AE listeners rate the "accentedness" of these non-native speakers. AE speech had shorter durations, greater within-speaker word duration variance, greater reduction of function words, and less between-speaker variance than non-native speech. However, both AE and non-native speakers showed sensitivity to lexical predictability by reducing second mentions and high frequency words. Non-native speakers with more native-like word durations, greater within-speaker word duration variance, and greater function word reduction were perceived as less accented. Overall, these findings identify word duration as an important and complex feature of foreign-accented English. PMID:21516172

  13. Music and Speech Perception in Children Using Sung Speech.

    Science.gov (United States)

    Nie, Yingjiu; Galvin, John J; Morikawa, Michael; André, Victoria; Wheeler, Harley; Fu, Qian-Jie

    2018-01-01

    This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training, participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and speech in noise but not for speech in quiet. MCI performance was significantly poorer with the mixed timbre stimuli. Speech performance in noise was significantly poorer with the fixed or mixed pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet were significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners.

  14. Perception of the Multisensory Coherence of Fluent Audiovisual Speech in Infancy: Its Emergence & the Role of Experience

    Science.gov (United States)

    Lewkowicz, David J.; Minar, Nicholas J.; Tift, Amy H.; Brandon, Melissa

    2014-01-01

    To investigate the developmental emergence of the ability to perceive the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8–10, and 12–14 month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor the 8–10 month-old infants exhibited audio-visual matching in that neither group exhibited greater looking at the matching monologue. In contrast, the 12–14 month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, they perceived the multisensory coherence of native-language monologues earlier in the test trials than of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12–14 month olds did not depend on audio-visual synchrony whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audio-visual synchrony cues are more important in the perception of the multisensory coherence of non-native than native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. PMID:25462038

  15. Sensorimotor influences on speech perception in infancy.

    Science.gov (United States)

    Bruderer, Alison G; Danielson, D Kyle; Kandhadai, Padmapriya; Werker, Janet F

    2015-11-03

    The influence of speech production on speech perception is well established in adults. However, because adults have a long history of both perceiving and producing speech, the extent to which the perception-production linkage is due to experience is unknown. We addressed this issue by asking whether articulatory configurations can influence infants' speech perception performance. To eliminate influences from specific linguistic experience, we studied preverbal, 6-mo-old infants and tested the discrimination of a nonnative, and hence never-before-experienced, speech sound distinction. In three experimental studies, we used teething toys to control the position and movement of the tongue tip while the infants listened to the speech sounds. Using ultrasound imaging technology, we verified that the teething toys consistently and effectively constrained the movement and positioning of infants' tongues. With a looking-time procedure, we found that temporarily restraining infants' articulators impeded their discrimination of a nonnative consonant contrast but only when the relevant articulator was selectively restrained to prevent the movements associated with producing those sounds. Our results provide striking evidence that even before infants speak their first words and without specific listening experience, sensorimotor information from the articulators influences speech perception. These results transform theories of speech perception by suggesting that even at the initial stages of development, oral-motor movements influence speech sound discrimination. Moreover, an experimentally induced "impairment" in articulator movement can compromise speech perception performance, raising the question of whether long-term oral-motor impairments may impact perceptual development.

  16. The motor theory of speech perception revisited.

    Science.gov (United States)

    Massaro, Dominic W; Chen, Trevor H

    2008-04-01

    Galantucci, Fowler, and Turvey (2006) have claimed that perceiving speech is perceiving gestures and that the motor system is recruited for perceiving speech. We make the counterargument that perceiving speech is not perceiving gestures, that the motor system is not recruited for perceiving speech, and that speech perception can be adequately described by a prototypical pattern recognition model, the fuzzy logical model of perception (FLMP). Empirical evidence taken as support for gesture and motor theory is reconsidered in more detail and in the framework of the FLMP. Additional theoretical and logical arguments are made to challenge gesture and motor theory.

  17. Neural pathways for visual speech perception

    Directory of Open Access Journals (Sweden)

    Lynne E Bernstein

    2014-12-01

    Full Text Available This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread, diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

  18. Individual differences in degraded speech perception

    Science.gov (United States)

    Carbonell, Kathy M.

    One of the lasting concerns in audiology is the unexplained individual differences in speech perception performance even for individuals with similar audiograms. One proposal is that there are cognitive/perceptual individual differences underlying this vulnerability and that these differences are present in normal hearing (NH) individuals but do not reveal themselves in studies that use clear speech produced in quiet (because of a ceiling effect). However, previous studies have failed to uncover cognitive/perceptual variables that explain much of the variance in NH performance on more challenging degraded speech tasks. This lack of strong correlations may be due either to examining the wrong measures (e.g., working memory capacity) or to there being no reliable differences in degraded speech performance in NH listeners (i.e., variability in performance is due to measurement noise). The proposed project has three aims: the first is to establish whether there are reliable individual differences in degraded speech performance for NH listeners that are sustained both across degradation types (speech in noise, compressed speech, noise-vocoded speech) and across multiple testing sessions. The second aim is to establish whether there are reliable differences in NH listeners' ability to adapt their phonetic categories based on short-term statistics, both across tasks and across sessions; and finally, to determine whether performance on degraded speech perception tasks is correlated with performance on phonetic adaptability tasks, thus establishing a possible explanatory variable for individual differences in speech perception for NH and hearing impaired listeners.

  19. Native and Non-native Teachers’ Pragmatic Criteria for Rating Request Speech Act: The Case of American and Iranian EFL Teachers

    Directory of Open Access Journals (Sweden)

    Minoo Alemi

    2017-04-01

    Full Text Available Abstract: Over the last few decades, several aspects of pragmatic knowledge and its effects on teaching and learning a second language (L2) have been explored in many studies. However, among these studies, the area of interlanguage pragmatic (ILP) assessment is a quite novel issue and many of its features have remained unnoticed. As ILP assessment has received more attention recently, the need to investigate EFL teachers' criteria for rating various speech acts has become important. In this respect, the present study aimed to investigate native and non-native EFL teachers' rating scores and criteria regarding the speech act of request. To this end, 50 American ESL teachers and 50 Iranian EFL teachers participated to rate EFL learners' responses to video-prompted Discourse Completion Tests (DCTs) regarding the speech act of request. Raters were asked to rate the EFL learners' responses and state their criteria for assessment. The content analysis of the raters' comments revealed nine criteria that they considered in their assessment. Moreover, t-test and chi-square analyses of the raters' scores and criteria showed that there are significant differences between native and non-native EFL teachers' rating patterns. The results of this study also shed light on the importance of sociopragmatic and pragmalinguistic features in native and non-native teachers' pragmatic rating, which can have several implications for L2 teachers, learners, and material developers.

  20. Speech perception as an active cognitive process

    Directory of Open Access Journals (Sweden)

    Shannon eHeald

    2014-03-01

    Full Text Available One view of speech perception is that acoustic signals are transformed into representations for pattern matching to determine linguistic structure. This process can be taken as a statistical pattern-matching problem, assuming relatively stable linguistic categories are characterized by neural representations related to auditory properties of speech that can be compared to speech input. This kind of pattern matching can be termed a passive process, which implies rigidity of processing with few demands on cognitive processing. An alternative view is that speech recognition, even in early stages, is an active process in which speech analysis is attentionally guided. Note that this does not mean consciously guided, but that information-contingent changes in early auditory encoding can occur as a function of context and experience. Active processing assumes that attention, plasticity, and listening goals are important in considering how listeners cope with adverse circumstances that impair hearing, such as masking noise in the environment or hearing loss. Although theories of speech perception have begun to incorporate some active processing, they seldom treat early speech encoding as plastic and attentionally guided. Recent research has suggested that speech perception is the product of both feedforward and feedback interactions between a number of brain regions that include descending projections perhaps as far downstream as the cochlea. It is important to understand how the ambiguity of the speech signal and constraints of context dynamically determine the cognitive resources recruited during perception, including focused attention, learning, and working memory. Theories of speech perception need to go beyond the current corticocentric approach in order to account for the intrinsic dynamics of the auditory encoding of speech. In doing so, this may provide new insights into ways in which hearing disorders and loss may be treated either through augmentation or

  1. The Beginnings of Danish Speech Perception

    DEFF Research Database (Denmark)

    Østerbye, Torkil

    Little is known about the perception of speech sounds by native Danish listeners. However, the Danish sound system differs in several interesting ways from the sound systems of other languages. For instance, Danish is characterized, among other features, by a rich vowel inventory and by different reductions of speech sounds evident in the pronunciation of the language. This book (originally a PhD thesis) consists of three studies based on the results of two experiments. The experiments were designed to provide knowledge of the perception of Danish speech sounds by Danish adults and infants, in the light of the rich and complex Danish sound system. The first two studies report on native adults’ perception of Danish speech sounds in quiet and noise. The third study examined the development of language-specific perception in native Danish infants at 6, 9 and 12 months of age. The book points…

  2. Non-Native & Native English Teachers

    Directory of Open Access Journals (Sweden)

    İrfan Tosuncuoglu

    2017-12-01

    Full Text Available In many countries the primary (mother tongue) language is not English, but there is a great demand for English language teachers all over the world. The demand in this field is filled largely by non-native English-speaking teachers who have learned English in their own country or abroad, or from other non-native English-speaking teachers. In some countries, particularly those where speaking English is a sign of status, students prefer to learn English from a native English speaker. The perception is that a non-native English-speaking teacher is a less authentic teacher than a native English speaker and that their instruction is unsatisfactory in some ways. This paper examines the literature to explore whether there is a difference in instructional effectiveness between NNESTs and native English teachers.

  3. Ordinal models of audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2011-01-01

    Audiovisual information is integrated in speech perception. One manifestation of this is the McGurk illusion in which watching the articulating face alters the auditory phonetic percept. Understanding this phenomenon fully requires a computational model with predictive power. Here, we describe...

  4. Prediction and constraint in audiovisual speech perception.

    Science.gov (United States)

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration

  5. Prediction and constraint in audiovisual speech perception

    Science.gov (United States)

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. 
Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported

  6. Assessing the Performance of Automatic Speech Recognition Systems When Used by Native and Non-Native Speakers of Three Major Languages in Dictation Workflows

    DEFF Research Database (Denmark)

    Zapata, Julián; Kirkedal, Andreas Søeborg

    2015-01-01

    In this paper, we report on a two-part experiment aiming to assess and compare the performance of two types of automatic speech recognition (ASR) systems on two different computational platforms when used to augment dictation workflows. The experiment was performed with a sample of speakers...

  7. Reflections on mirror neurons and speech perception

    Science.gov (United States)

    Lotto, Andrew J.; Hickok, Gregory S.; Holt, Lori L.

    2010-01-01

    The discovery of mirror neurons, a class of neurons that respond when a monkey performs an action and also when the monkey observes others producing the same action, has promoted a renaissance for the Motor Theory (MT) of speech perception. This is because mirror neurons seem to accomplish the same kind of one-to-one mapping between perception and action that MT theorizes to be the basis of human speech communication. However, this seeming correspondence is superficial, and there are theoretical and empirical reasons to temper enthusiasm about the explanatory role mirror neurons might have for speech perception. In fact, rather than providing support for MT, mirror neurons are actually inconsistent with the central tenets of MT. PMID:19223222

  8. The interaction between acoustic salience and language experience in developmental speech perception: evidence from nasal place discrimination.

    Science.gov (United States)

    Narayan, Chandan R; Werker, Janet F; Beddor, Patrice Speeter

    2010-05-01

    Previous research suggests that infant speech perception reorganizes in the first year: young infants discriminate both native and non-native phonetic contrasts, but by 10-12 months difficult non-native contrasts are less discriminable whereas performance improves on native contrasts. In the current study, four experiments tested the hypothesis that, in addition to the influence of native language experience, acoustic salience also affects the perceptual reorganization that takes place in infancy. Using a visual habituation paradigm, two nasal place distinctions that differ in relative acoustic salience, acoustically robust labial-alveolar [ma]-[na] and acoustically less salient alveolar-velar [na]-[ŋa], were presented to infants in a cross-language design. English-learning infants at 6-8 and 10-12 months showed discrimination of the native and acoustically robust [ma]-[na] (Experiment 1), but not the non-native (in initial position) and acoustically less salient [na]-[ŋa] (Experiment 2). Very young (4-5-month-old) English-learning infants tested on the same native and non-native contrasts also showed discrimination of only the [ma]-[na] distinction (Experiment 3). Filipino-learning infants, whose ambient language includes the syllable-initial alveolar (/n/)-velar (/ŋ/) contrast, showed discrimination of native [na]-[ŋa] at 10-12 months, but not at 6-8 months (Experiment 4). These results support the hypothesis that acoustic salience affects speech perception in infancy, with native language experience facilitating discrimination of an acoustically similar phonetic distinction [na]-[ŋa]. We discuss the implications of this developmental profile for a comprehensive theory of speech perception in infancy.

  9. Poor Speech Perception Is Not a Core Deficit of Childhood Apraxia of Speech: Preliminary Findings

    Science.gov (United States)

    Zuk, Jennifer; Iuzzini-Seigel, Jenya; Cabbage, Kathryn; Green, Jordan R.; Hogan, Tiffany P.

    2018-01-01

    Purpose: Childhood apraxia of speech (CAS) is hypothesized to arise from deficits in speech motor planning and programming, but the influence of abnormal speech perception in CAS on these processes is debated. This study examined speech perception abilities among children with CAS with and without language impairment compared to those with…

  10. Perception and the temporal properties of speech

    Science.gov (United States)

    Gordon, Peter C.

    1991-11-01

    Four experiments addressing the role of attention in phonetic perception are reported. The first experiment shows that the relative importance of two cues to the voicing distinction changes when subjects must perform an arithmetic distractor task at the same time as identifying a speech stimulus. The voice onset time cue loses phonetic significance when subjects are distracted, while the F0 onset frequency cue does not. The second experiment shows a similar pattern for two cues to the distinction between the vowels /i/ (as in 'beat') and /I/ (as in 'bit'). Together these experiments indicate that careful attention to speech perception is necessary for strong acoustic cues to achieve their full phonetic impact, while weaker acoustic cues achieve their full phonetic impact without close attention. Experiment 3 shows that this pattern is obtained when the distractor task places little demand on verbal short term memory. Experiment 4 provides a large data set for testing formal models of the role of attention in speech perception. Attention is shown to influence the signal to noise ratio in phonetic encoding. This principle is instantiated in a network model in which the role of attention is to reduce noise in the phonetic encoding of acoustic cues. Implications of this work for understanding speech perception and general theories of the role of attention in perception are discussed.
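The principle behind Experiment 4, that attention improves the signal-to-noise ratio of phonetic encoding, can be illustrated with a toy simulation. The Gaussian-noise form, the 1/attention noise scaling, and all parameter values below are illustrative assumptions, not the paper's actual network model:

```python
import random

def encode_cue(cue_strength, attention, trials=10000, seed=0):
    """Toy phonetic-encoding model: a cue value is corrupted by internal
    noise whose standard deviation shrinks as attention increases.
    Returns the proportion of trials falling on the correct side of the
    category boundary (placed at 0)."""
    rng = random.Random(seed)
    noise_sd = 1.0 / attention  # assumption: attention scales down encoding noise
    hits = sum(1 for _ in range(trials)
               if cue_strength + rng.gauss(0.0, noise_sd) > 0.0)
    return hits / trials

# Full attention yields a higher effective signal-to-noise ratio, and
# therefore more reliable phonetic categorization, than divided attention.
print(encode_cue(1.0, attention=2.0))  # high accuracy
print(encode_cue(1.0, attention=0.5))  # noticeably lower
```

The sketch captures only the qualitative claim: reducing attention is equivalent to raising internal noise, which degrades categorization of the same acoustic cue.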

  11. QUANTITATIVE REDUCTION OF VOWEL GRAPHS “A” AND “O” POSITIONED AFTER THE HARD CONSONANTS IN THE SPEECH OF NATIVE AND NON-NATIVE RUSSIAN SPEAKERS IN LITHUANIA

    Directory of Open Access Journals (Sweden)

    Danutė Balšaitytė

    2015-04-01

    Full Text Available This article analyses the absolute duration (ms) of stressed Russian vowels /a/ and /o/ (graphs “a”, “o”) and their unstressed allophones after hard consonants in the pronunciation of native and non-native Russian speakers in Lithuania. The spectral analysis reveals specific patterns of quantitative reduction in the speech of Russian speakers in Lithuania and of Lithuanian learners of Russian, shaped by the interaction of the two phonetic systems. In realising “a” and “o”, speakers of both languages violate the duration relations among unstressed vowels characteristic of contemporary Russian: post-stressed vowels in closed syllables are shorter than pre-stressed vowels, and the first pre-stressed syllable has a longer vowel duration than the second pre-stressed and post-stressed syllables. Both Russians and Lithuanians pronounce vowels longer in post-stressed syllables than in pre-stressed syllables, which corresponds to the quantitative reduction of the Lithuanian vowels /a:/ and /o:/. There are also differences in the quantitative reduction of “a” and “o” between native and non-native Russian speakers in Lithuania. Russian speakers in Lithuania pronounce the second pre-stressed vowel longer than the first pre-stressed vowel, which corresponds to the degrees of reduction of pre-stressed “a” and “o” in standardised Russian; in the Lithuanian speakers' pronunciation these degrees of quantitative reduction hold only for Russian “a”. In terms of duration ratios, the unstressed allophones of Russian “a” and “o” are closer to unstressed Lithuanian /a:/ and /o:/ in the pronunciation of Russian-Lithuanian bilinguals than in that of Lithuanian speakers.

  12. A causal test of the motor theory of speech perception: a case of impaired speech production and spared speech perception.

    Science.gov (United States)

    Stasenko, Alena; Bonn, Cory; Teghipco, Alex; Garcea, Frank E; Sweet, Catherine; Dombovy, Mary; McDonough, Joyce; Mahon, Bradford Z

    2015-01-01

    The debate about the causal role of the motor system in speech perception has been reignited by demonstrations that motor processes are engaged during the processing of speech sounds. Here, we evaluate which aspects of auditory speech processing are affected, and which are not, in a stroke patient with dysfunction of the speech motor system. We found that the patient showed a normal phonemic categorical boundary when discriminating two non-words that differ by a minimal pair (e.g., ADA-AGA). However, using the same stimuli, the patient was unable to identify or label the non-word stimuli (using a button-press response). A control task showed that he could identify speech sounds by speaker gender, ruling out a general labelling impairment. These data suggest that while the motor system is not causally involved in perception of the speech signal, it may be used when other cues (e.g., meaning, context) are not available.

  13. Speech Perception and Short-Term Memory Deficits in Persistent Developmental Speech Disorder

    Science.gov (United States)

    Kenney, Mary Kay; Barac-Cikoja, Dragana; Finnegan, Kimberly; Jeffries, Neal; Ludlow, Christy L.

    2006-01-01

    Children with developmental speech disorders may have additional deficits in speech perception and/or short-term memory. To determine whether these are only transient developmental delays that can accompany the disorder in childhood or persist as part of the speech disorder, adults with a persistent familial speech disorder were tested on speech…

  14. Speech-in-speech perception and executive function involvement.

    Directory of Open Access Journals (Sweden)

    Marcela Perrone-Bertolotti

    Full Text Available The present study investigated the link between speech-in-speech perception capacities and four executive function components: response suppression, inhibitory control, switching and working memory. We constructed a cross-modal semantic priming paradigm using a written target word and a spoken prime word embedded in one of two concurrent auditory sentences (cocktail party situation). The prime and target were semantically related or unrelated. Participants had to perform a lexical decision task on visual target words while listening to only one of the two spoken sentences. The attention of the participant was manipulated: the prime was either in the attended sentence or in the ignored one. In addition, we evaluated participants' executive function abilities (switching cost, inhibitory-control cost and response-suppression cost) and their working memory span. Correlation analyses were performed between the executive and priming measurements. Our results showed a significant interaction between attention and semantic priming: there was a significant priming effect in the attended but not in the ignored condition. Only priming effects obtained in the ignored condition correlated significantly with some of the executive measures, and no correlation between priming effects and working memory capacity was found. Overall, these results confirm, first, the role of attention in the semantic priming effect and, second, the involvement of executive functions in speech-in-noise understanding capacities.

  15. Lip movements affect infants' audiovisual speech perception.

    Science.gov (United States)

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  16. Audiovisual integration in speech perception: a multi-stage process

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    investigate whether the integration of auditory and visual speech observed in these two audiovisual integration effects is a specific trait of speech perception. We further ask whether audiovisual integration is undertaken in a single processing stage or multiple processing stages....

  17. Neurophysiological Influence of Musical Training on Speech Perception

    OpenAIRE

    Shahin, Antoine J.

    2011-01-01

    Does musical training affect our perception of speech? For example, does learning to play a musical instrument modify the neural circuitry for auditory processing in a way that improves one’s ability to perceive speech more clearly in noisy environments? If so, can speech perception in individuals with hearing loss, who struggle in noisy situations, benefit from musical training? While music and speech exhibit some specialization in neural processing, there is evidence suggesting that skill...

  18. Talker Variability in Audiovisual Speech Perception

    Directory of Open Access Journals (Sweden)

    Shannon Heald

    2014-07-01

    Full Text Available A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories, and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker-variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker's face, speech recognition is improved under adverse listening conditions (e.g., noise or distortion) that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target-word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition than in the audio-only condition. These results suggest that seeing a talker's face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred.

  19. Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special

    Science.gov (United States)

    Vroomen, Jean; Stekelenburg, Jeroen J.

    2011-01-01

    Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…

  20. The Relationship between Speech Production and Speech Perception Deficits in Parkinson's Disease

    Science.gov (United States)

    De Keyser, Kim; Santens, Patrick; Bockstael, Annelies; Botteldooren, Dick; Talsma, Durk; De Vos, Stefanie; Van Cauwenberghe, Mieke; Verheugen, Femke; Corthals, Paul; De Letter, Miet

    2016-01-01

    Purpose: This study investigated the possible relationship between hypokinetic speech production and speech intensity perception in patients with Parkinson's disease (PD). Method: Participants included 14 patients with idiopathic PD and 14 matched healthy controls (HCs) with normal hearing and cognition. First, speech production was objectified…

  1. Exploring Australian speech-language pathologists' use and perceptions of non-speech oral motor exercises.

    Science.gov (United States)

    Rumbach, Anna F; Rose, Tanya A; Cheah, Mynn

    2018-01-29

    To explore Australian speech-language pathologists' use of non-speech oral motor exercises, and rationales for using/not using non-speech oral motor exercises in clinical practice. A total of 124 speech-language pathologists practising in Australia, working with paediatric and/or adult clients with speech sound difficulties, completed an online survey. The majority of speech-language pathologists reported that they did not use non-speech oral motor exercises when working with paediatric or adult clients with speech sound difficulties. However, more than half of the speech-language pathologists working with adult clients who have dysarthria reported using non-speech oral motor exercises with this population. The most frequently reported rationale for using non-speech oral motor exercises in speech sound difficulty management was to improve awareness/placement of articulators. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound difficulties. This study provides an overview of Australian speech-language pathologists' reported use and perceptions of non-speech oral motor exercises' applicability and efficacy in treating paediatric and adult clients who have speech sound difficulties. The research findings provide speech-language pathologists with insight into how and why non-speech oral motor exercises are currently used, and adds to the knowledge base regarding Australian speech-language pathology practice of non-speech oral motor exercises in the treatment of speech sound difficulties. Implications for Rehabilitation Non-speech oral motor exercises refer to oral motor activities which do not involve speech, but involve the manipulation or stimulation of oral structures including the lips, tongue, jaw, and soft palate. Non-speech oral motor exercises are intended to improve the function (e.g., movement, strength) of oral structures. 

  2. Specialization in audiovisual speech perception: a replication study

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by bimodal integration in the McGurk effect. This integration effect may be specific to speech or be applied to all stimuli in general. To investigate this, Tuomainen et al. (2005) used sine-wave speech, which naïve observers may perceive as non-speech but hear as speech once informed of the linguistic origin of the signal. Combinations of sine-wave speech and incongruent video of the talker elicited a McGurk effect only for informed observers. This indicates that the audiovisual integration effect is specific to speech perception. However, observers... that observers did look near the mouth. We conclude that eye movements did not influence the results of Tuomainen et al. and that their results thus can be taken as evidence of a speech-specific mode of audiovisual integration underlying the McGurk illusion.

  3. Lexical and sublexical units in speech perception.

    Science.gov (United States)

    Giroux, Ibrahima; Rey, Arnaud

    2009-03-01

    Saffran, Newport, and Aslin (1996a) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. Two sorts of segmentation strategies have been proposed to account for this early word-segmentation ability: bracketing strategies, in which infants are assumed to insert boundaries into continuous speech, and clustering strategies, in which infants are assumed to group certain speech sequences together into units (Swingley, 2005). In the present study, we test the predictions of two computational models instantiating each of these strategies (i.e., Simple Recurrent Networks: Elman, 1990; and PARSER: Perruchet & Vinter, 1998) in an experiment where we compare the lexical and sublexical recognition performance of adults after hearing 2 or 10 min of an artificial spoken language. The results are consistent with PARSER's predictions and the clustering approach, showing that performance on words is better than performance on part-words only after 10 min. This result suggests that word segmentation abilities are not merely due to stronger associations between sublexical units but to the emergence of stronger lexical representations during the development of speech perception processes. Copyright © 2009, Cognitive Science Society, Inc.
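The statistical regularity at issue can be sketched with transitional probabilities over a Saffran-style syllable stream. The three "words" and the stream length below are invented for illustration, not the study's materials, and the transitional-probability dip is only the bracketing-style cue; PARSER-style clustering instead builds chunk representations of recurring sequences:

```python
import random
from collections import Counter

def transitional_probs(stream):
    """P(next syllable | current syllable) for each adjacent pair in the stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Three artificial trisyllabic "words" concatenated in random order.
words = [("tu", "pi", "ro"), ("go", "la", "bu"), ("bi", "da", "ku")]
rng = random.Random(1)
stream = [syl for _ in range(300) for syl in rng.choice(words)]

tps = transitional_probs(stream)
within = tps[("tu", "pi")]                                  # within-word transition
across = max(p for (a, _), p in tps.items() if a == "ro")   # "ro" is word-final

# Within-word transitions are perfectly predictable, while transitions
# across word boundaries are not; dips in TP mark candidate boundaries.
print(within)         # 1.0
print(across < 1.0)   # True
```

Because "tu" occurs only word-initially and is always followed by "pi", its transitional probability is exactly 1.0, whereas every transition out of word-final "ro" crosses a boundary and is split across the three possible following words.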

  4. The role of visual spatial attention in audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias; Tiippana, K.; Laarni, J.

    2009-01-01

    Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive... from each of the faces and from the voice on the auditory speech percept. We found that directing visual spatial attention towards a face increased the influence of that face on auditory perception. However, the influence of the voice on auditory perception did not change, suggesting that audiovisual integration did not change. Visual spatial attention was also able to select between the faces when lip reading. This suggests that visual spatial attention acts at the level of visual speech perception prior to audiovisual integration and that the effect propagates through audiovisual integration...

  5. Speech perception in noise in unilateral hearing loss

    OpenAIRE

    Mondelli, Maria Fernanda Capoani Garcia; dos Santos, Marina de Marchi; José, Maria Renata

    2016-01-01

    ABSTRACT INTRODUCTION: Unilateral hearing loss is characterized by a decrease of hearing in one ear only. In the presence of ambient noise, individuals with unilateral hearing loss are faced with greater difficulties understanding speech than normal listeners. OBJECTIVE: To evaluate the speech perception of individuals with unilateral hearing loss in speech perception with and without competitive noise, before and after the hearing aid fitting process. METHODS: The study included 30 adu...

  6. Audiovisual speech perception development at varying levels of perceptual processing

    OpenAIRE

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-01-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the le...

  7. Musical expertise and foreign speech perception.

    Science.gov (United States)

    Martínez-Montes, Eduardo; Hernández-Pérez, Heivet; Chobert, Julie; Morgado-Rodríguez, Lisbet; Suárez-Murias, Carlos; Valdés-Sosa, Pedro A; Besson, Mireille

    2013-01-01

    The aim of this experiment was to investigate the influence of musical expertise on the automatic perception of foreign syllables and harmonic sounds. Participants were Cuban students with high level of expertise in music or in visual arts and with the same level of general education and socio-economic background. We used a multi-feature Mismatch Negativity (MMN) design with sequences of either syllables in Mandarin Chinese or harmonic sounds, both comprising deviants in pitch contour, duration and Voice Onset Time (VOT) or equivalent that were either far from (Large deviants) or close to (Small deviants) the standard. For both Mandarin syllables and harmonic sounds, results were clear-cut in showing larger MMNs to pitch contour deviants in musicians than in visual artists. Results were less clear for duration and VOT deviants, possibly because of the specific characteristics of the stimuli. Results are interpreted as reflecting similar processing of pitch contour in speech and non-speech sounds. The implications of these results for understanding the influence of intense musical training from childhood to adulthood and of genetic predispositions for music on foreign language perception are discussed.

  8. Musical expertise and foreign speech perception

    Directory of Open Access Journals (Sweden)

    Eduardo Martínez-Montes

    2013-11-01

    Full Text Available The aim of this experiment was to investigate the influence of musical expertise on the automatic perception of foreign syllables and harmonic sounds. Participants were Cuban students with a high level of expertise in music or in visual arts and with the same level of general education and socio-economic background. We used a multi-feature Mismatch Negativity (MMN) design with sequences of either syllables in Mandarin Chinese or harmonic sounds, both comprising deviants in pitch contour, duration and Voice Onset Time (VOT) or equivalent that were either far from (Large deviants) or close to (Small deviants) the standard. For both Mandarin syllables and harmonic sounds, results were clear-cut in showing larger MMNs to pitch contour deviants in musicians than in visual artists. Results were less clear for duration and VOT deviants, possibly because of the specific characteristics of the stimuli. Results are interpreted as reflecting similar processing of pitch contour in speech and non-speech sounds. The implications of these results for understanding the influence of intense musical training from childhood to adulthood and of genetic predispositions for music on foreign language perception are discussed.

  9. Perception of synthetic speech produced automatically by rule: Intelligibility of eight text-to-speech systems.

    Science.gov (United States)

    Greene, Beth G; Logan, John S; Pisoni, David B

    1986-03-01

    We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered.
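Because the Modified Rhyme Test is a closed-set task with six response alternatives, raw percent correct overstates intelligibility near chance. A generic correction-for-guessing formula for closed-set tests (not necessarily the scoring used in these studies) can be sketched as:

```python
def mrt_score(correct, total, alternatives=6):
    """Guessing-corrected proportion correct for a closed-set test:
    p_adj = (c - w / (k - 1)) / n, where w = n - c is the number of
    wrong responses and k is the number of response alternatives."""
    wrong = total - correct
    adjusted = (correct - wrong / (alternatives - 1)) / total
    return max(adjusted, 0.0)  # floor at 0 for below-chance responding

print(mrt_score(250, 300))  # 0.8
print(mrt_score(50, 300))   # 0.0 (exactly chance-level on a 6-alternative test)
```

The correction subtracts the expected number of lucky guesses, so purely random responding (1 in 6 correct) scores zero rather than about 17%.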

  10. Perception of synthetic speech produced automatically by rule: Intelligibility of eight text-to-speech systems

    Science.gov (United States)

    GREENE, BETH G.; LOGAN, JOHN S.; PISONI, DAVID B.

    2012-01-01

    We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered. PMID:23225916

  11. Voice and Speech Quality Perception Assessment and Evaluation

    CERN Document Server

    Jekosch, Ute

    2005-01-01

    Foundations of Voice and Speech Quality Perception starts out with the fundamental question of: "How do listeners perceive voice and speech quality and how can these processes be modeled?" Any quantitative answers require measurements. This is natural for physical quantities but harder to imagine for perceptual measurands. This book approaches the problem by actually identifying major perceptual dimensions of voice and speech quality perception, defining units wherever possible and offering paradigms to position these dimensions into a structural skeleton of perceptual speech and voice quality. The emphasis is placed on voice and speech quality assessment of systems in artificial scenarios. Many scientific fields are involved. This book bridges the gap between two quite diverse fields, engineering and humanities, and establishes the new research area of Voice and Speech Quality Perception.

  12. Neurophysiological influence of musical training on speech perception.

    Science.gov (United States)

    Shahin, Antoine J

    2011-01-01

    Does musical training affect our perception of speech? For example, does learning to play a musical instrument modify the neural circuitry for auditory processing in a way that improves one's ability to perceive speech more clearly in noisy environments? If so, can speech perception in individuals with hearing loss (HL), who struggle in noisy situations, benefit from musical training? While music and speech exhibit some specialization in neural processing, there is evidence suggesting that skills acquired through musical training for specific acoustical processes may transfer to, and thereby improve, speech perception. The neurophysiological mechanisms underlying the influence of musical training on speech processing and the extent of this influence remains a rich area to be explored. A prerequisite for such transfer is the facilitation of greater neurophysiological overlap between speech and music processing following musical training. This review first establishes a neurophysiological link between musical training and speech perception, and subsequently provides further hypotheses on the neurophysiological implications of musical training on speech perception in adverse acoustical environments and in individuals with HL.

  13. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by the McGurk effect in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or be applied to all stimuli in general. To investigate... of audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect. When observers were naïve, they had little motivation to look at the face. When informed, they knew that the face was relevant for the task and this could increase... visual detection task. In our first experiment, observers presented with congruent and incongruent audiovisual sine-wave speech stimuli showed a McGurk effect only when informed of the speech nature of the stimulus. Performance on the secondary visual task was very good, thus supporting the finding...

  14. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    Science.gov (United States)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.
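A minimal way to summarise the direct measure in Experiment 1, the proportion of "simultaneous" responses at each audiovisual lag, is a centroid estimate of the point of subjective simultaneity (PSS). The lags and response proportions below are invented for illustration and are not the study's data:

```python
def pss(lags_ms, p_simultaneous):
    """Point of subjective simultaneity (PSS) as the mean lag weighted by
    the proportion of 'simultaneous' responses at each lag."""
    total = sum(p_simultaneous)
    return sum(lag * p for lag, p in zip(lags_ms, p_simultaneous)) / total

lags = [-200, -100, 0, 100, 200]  # negative = audio leads video (ms)
baseline = pss(lags, [0.1, 0.5, 0.9, 0.5, 0.1])
adapted = pss(lags, [0.3, 0.8, 0.9, 0.4, 0.1])  # after audio-lead exposure

# Temporal recalibration shifts the simultaneity window toward the exposed lag.
print(baseline)            # 0.0
print(adapted < baseline)  # True
```

A symmetric response distribution yields a PSS of 0 ms; after exposure to an audio-lead lag, the hypothetical distribution shifts so that audio-lead stimuli are more often judged simultaneous, moving the PSS toward negative lags.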

  15. [Speech perception development in children with dyslexia].

    Science.gov (United States)

    Ortiz, Rosario; Jiménez, Juan E; Muñetón, Mercedes; Rojas, Estefanía; Estévez, Adelina; Guzmán, Remedios; Rodríguez, Cristina; Naranjo, Francisco

    2008-11-01

Several studies have indicated that dyslexics show a deficit in speech perception (SP). The main purpose of this research was to trace the development of SP in dyslexics and normal readers matched by grade (2nd to 6th grade of primary school) and to determine whether the phonetic contrasts relevant for SP change during development, taking individual differences into account. The two groups were compared on three phonetic tasks: voicing contrast, place of articulation contrast, and manner of articulation contrast. The results showed that the dyslexics performed more poorly than the normal readers in SP. For place of articulation, the developmental pattern was similar in both groups, but not for voicing or manner of articulation. Manner of articulation had the strongest influence on SP, and its development exceeded that of the other contrast tasks in both groups.

  16. Electrophysiological assessment of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Dau, Torsten

Speech perception integrates signals from ear and eye. This is witnessed by a wide range of audiovisual integration effects, such as ventriloquism and the McGurk illusion. Some behavioral evidence suggests that audiovisual integration of specific aspects is special for speech perception. However, our...... knowledge of such bimodal integration would be strengthened if the phenomena could be investigated by objective, neurally based methods. One key question of the present work is if perceptual processing of audiovisual speech can be gauged with a specific signature of neurophysiological activity...... on the auditory speech percept? In two experiments, which both combine behavioral and neurophysiological measures, an uncovering of the relation between perception of faces and of audiovisual integration is attempted. Behavioral findings suggest a strong effect of face perception, whereas the MMN results are less...

  17. Sound frequency affects speech emotion perception: Results from congenital amusia

    Directory of Open Access Journals (Sweden)

Sydney Lolli

    2015-09-01

Full Text Available Congenital amusics, or tone-deaf individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying band-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody (MBEP) were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task, and an emotion identification task under band-pass and unfiltered speech conditions. Results showed a significant correlation between pitch discrimination threshold and emotion identification accuracy for band-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold > 16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between band-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation.

  18. The speech perception skills of children with and without speech sound disorder.

    Science.gov (United States)

    Hearnshaw, Stephanie; Baker, Elise; Munro, Natalie

To investigate whether Australian-English speaking children with and without speech sound disorder (SSD) differ in their overall speech perception accuracy. Additionally, to investigate differences in the perception of specific phonemes and the association between speech perception and speech production skills. Twenty-five Australian-English speaking children aged 48-60 months participated in this study. The SSD group included 12 children and the typically developing (TD) group included 13 children. Children completed routine speech and language assessments in addition to an experimental Australian-English lexical and phonetic judgement task based on Rvachew's Speech Assessment and Interactive Learning System (SAILS) program (Rvachew, 2009). This task included eight words across four word-initial phonemes: /k, ɹ, ʃ, s/. Children with SSD showed significantly poorer perceptual accuracy on the lexical and phonetic judgement task compared with TD peers. The phonemes /ɹ/ and /s/ were most frequently perceived in error across both groups. Additionally, the phoneme /ɹ/ was most commonly produced in error. There was also a positive correlation between overall speech perception and speech production scores. Children with SSD perceived speech less accurately than their typically developing peers. The findings suggest that an Australian-English variation of a lexical and phonetic judgement task similar to the SAILS program is promising and worthy of a larger-scale study. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. How does cognitive load influence speech perception? An encoding hypothesis.

    Science.gov (United States)

    Mitterer, Holger; Mattys, Sven L

    2017-01-01

    Two experiments investigated the conditions under which cognitive load exerts an effect on the acuity of speech perception. These experiments extend earlier research by using a different speech perception task (four-interval oddity task) and by implementing cognitive load through a task often thought to be modular, namely, face processing. In the cognitive-load conditions, participants were required to remember two faces presented before the speech stimuli. In Experiment 1, performance in the speech-perception task under cognitive load was not impaired in comparison to a no-load baseline condition. In Experiment 2, we modified the load condition minimally such that it required encoding of the two faces simultaneously with the speech stimuli. As a reference condition, we also used a visual search task that in earlier experiments had led to poorer speech perception. Both concurrent tasks led to decrements in the speech task. The results suggest that speech perception is affected even by loads thought to be processed modularly, and that, critically, encoding in working memory might be the locus of interference.

  20. Ecological impacts of non-native species

    Science.gov (United States)

    Wilkinson, John W.

    2012-01-01

Non-native species are considered one of the greatest threats to freshwater biodiversity worldwide (Drake et al. 1989; Allen and Flecker 1993; Dudgeon et al. 2005). Some of the first hypotheses proposed to explain global patterns of amphibian declines included the effects of non-native species (Barinaga 1990; Blaustein and Wake 1990; Wake and Morowitz 1991). Evidence for the impact of non-native species on amphibians stems (1) from correlative research that relates the distribution or abundance of a species to that of a putative non-native species, and (2) from experimental tests of the effects of a non-native species on survival, growth, development or behaviour of a target species (Kats and Ferrer 2003). Over the past two decades, research on the effects of non-native species on amphibians has mostly focused on introduced aquatic predators, particularly fish. Recent research has shifted to more complex ecological relationships such as influences of sub-lethal stressors (e.g. contaminants) on the effects of non-native species (Linder et al. 2003; Sih et al. 2004), non-native species as vectors of disease (Daszak et al. 2004; Garner et al. 2006), hybridization between non-natives and native congeners (Riley et al. 2003; Storfer et al. 2004), and the alteration of food-webs by non-native species (Nystrom et al. 2001). Other research has examined the interaction of non-native species in terms of facilitation (i.e. one non-native enabling another to become established or spread) or the synergistic effects of multiple non-native species on native amphibians, the so-called invasional meltdown hypothesis (Simberloff and Von Holle 1999). Although there is evidence that some non-native species may interact (Ricciardi 2001), there has yet to be convincing evidence that such interactions have led to an accelerated increase in the number of non-native species and cumulative impacts are still uncertain (Simberloff 2006). Applied research on the control, eradication, and

  1. Sound frequency affects speech emotion perception: results from congenital amusia.

    Science.gov (United States)

    Lolli, Sydney L; Lewenstein, Ari D; Basurto, Julian; Winnik, Sean; Loui, Psyche

    2015-01-01

    Congenital amusics, or "tone-deaf" individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying low-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task, and an emotion identification task under low-pass and unfiltered speech conditions. Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between low-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation. To assess this potential compensation, Experiment 2 was conducted using high-pass filtered speech samples intended to isolate non-pitch cues. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low frequency information in identifying emotional content of speech.
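The filtering manipulation this abstract describes can be sketched in a few lines. The example below is a minimal illustration using SciPy, not the study's actual pipeline; the 500 Hz cutoff and the toy two-tone "speech" signal are assumptions chosen only to make the effect visible.

```python
# Hedged sketch of low-pass filtering a speech-like signal, as in the
# emotion-perception manipulation above. Cutoff and signal are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def low_pass_speech(signal, fs, cutoff_hz=500.0, order=4):
    """Zero-phase low-pass filter: keeps the F0/pitch region, removes
    higher-frequency cues (e.g., much of the formant detail)."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
# toy stand-in for speech: a 200 Hz "fundamental" plus a 3 kHz component
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
y = low_pass_speech(x, fs)  # 200 Hz survives, 3 kHz is strongly attenuated
```

A high-pass variant (as in the paper's Experiment 2) would use `btype="high"` to isolate the non-pitch cues instead.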

  2. Audio-Visual Speech Perception: A Developmental ERP Investigation

    Science.gov (United States)

    Knowland, Victoria C. P.; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael S. C.

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language…

  3. Plasticity in the Human Speech Motor System Drives Changes in Speech Perception

    Science.gov (United States)

    Lametti, Daniel R.; Rochet-Capellan, Amélie; Neufeld, Emily; Shiller, Douglas M.

    2014-01-01

Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback and they were subsequently tested for perceptual change. In two separate experiments, involving two different auditory perceptual continua, we show that changes in the speech motor system that accompany learning drive changes in auditory speech perception. Specifically, we obtained changes in speech perception when adaptation to altered auditory feedback led to speech production that fell into the phonetic range of the speech perceptual tests. However, a similar change in perception was not observed when the auditory feedback that subjects received during learning fell into the phonetic range of the perceptual tests. This indicates that the central motor outflow associated with vocal sensorimotor adaptation drives changes to the perceptual classification of speech sounds. PMID:25080594

  4. Exploring the role of brain oscillations in speech perception in noise: Intelligibility of isochronously retimed speech

    Directory of Open Access Journals (Sweden)

    Vincent Aubanel

    2016-08-01

Full Text Available A growing body of evidence shows that brain oscillations track speech. This mechanism is thought to maximise processing efficiency by allocating resources to important speech information, effectively parsing speech into units of appropriate granularity for further decoding. However, some aspects of this mechanism remain unclear. First, while periodicity is an intrinsic property of this physiological mechanism, speech is only quasi-periodic, so it is not clear whether periodicity would present an advantage in processing. Second, it is still a matter of debate which aspect of speech triggers or maintains cortical entrainment, from bottom-up cues such as fluctuations of the amplitude envelope of speech to higher level linguistic cues such as syntactic structure. We present data from a behavioural experiment assessing the effect of isochronous retiming of speech on speech perception in noise. Two types of anchor points were defined for retiming speech, namely syllable onsets and amplitude envelope peaks. For each anchor point type, retiming was implemented at two hierarchical levels, a slow time scale around 2.5 Hz and a fast time scale around 4 Hz. Results show that while any temporal distortion resulted in reduced speech intelligibility, isochronous speech anchored to P-centers (approximated by stressed syllable vowel onsets) was significantly more intelligible than a matched anisochronous retiming, suggesting a facilitative role of periodicity defined on linguistically motivated units in processing speech in noise.
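The core of the retiming manipulation is mapping quasi-periodic anchor points onto a uniform grid. The sketch below only computes the isochronous target times for a set of anchors (the actual study additionally time-warped the waveform between anchors); the onset values are made up to yield a roughly 4 Hz rate.

```python
# Hypothetical sketch of isochronous retiming targets: respace observed
# anchor times (e.g., syllable onsets) at their mean inter-anchor interval.
import numpy as np

def isochronous_targets(anchor_times):
    """Return anchor times respaced onto a uniform grid that preserves
    the first anchor and the original mean rate."""
    a = np.asarray(anchor_times, dtype=float)
    mean_interval = np.diff(a).mean()
    return a[0] + mean_interval * np.arange(len(a))

onsets = [0.10, 0.32, 0.61, 0.80, 1.10]   # quasi-periodic onsets (seconds)
targets = isochronous_targets(onsets)      # uniform grid at ~4 Hz
```

An anisochronous control (the study's comparison condition) would jitter these target times while matching the overall amount of temporal distortion.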

  5. Audiovisual speech perception development at varying levels of perceptual processing.

    Science.gov (United States)

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-04-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children.

  6. Individual differences in speech-in-noise perception parallel neural speech processing and attention in preschoolers

    Science.gov (United States)

    Thompson, Elaine C.; Carr, Kali Woodruff; White-Schwoch, Travis; Otto-Meyer, Sebastian; Kraus, Nina

    2016-01-01

    From bustling classrooms to unruly lunchrooms, school settings are noisy. To learn effectively in the unwelcome company of numerous distractions, children must clearly perceive speech in noise. In older children and adults, speech-in-noise perception is supported by sensory and cognitive processes, but the correlates underlying this critical listening skill in young children (3–5 year olds) remain undetermined. Employing a longitudinal design (two evaluations separated by ~12 months), we followed a cohort of 59 preschoolers, ages 3.0–4.9, assessing word-in-noise perception, cognitive abilities (intelligence, short-term memory, attention), and neural responses to speech. Results reveal changes in word-in-noise perception parallel changes in processing of the fundamental frequency (F0), an acoustic cue known for playing a role central to speaker identification and auditory scene analysis. Four unique developmental trajectories (speech-in-noise perception groups) confirm this relationship, in that improvements and declines in word-in-noise perception couple with enhancements and diminishments of F0 encoding, respectively. Improvements in word-in-noise perception also pair with gains in attention. Word-in-noise perception does not relate to strength of neural harmonic representation or short-term memory. These findings reinforce previously-reported roles of F0 and attention in hearing speech in noise in older children and adults, and extend this relationship to preschool children. PMID:27864051

  7. The role of stress and accent in the perception of speech rhythm

    NARCIS (Netherlands)

    Grover, C.N.; Terken, J.M.B.

    1995-01-01

Modelling rhythmic characteristics of speech is expected to contribute to the acceptability of synthetic speech. However, before rules for the control of speech rhythm in synthetic speech can be developed, we need to know which properties of speech give rise to the perception of speech rhythm. An…

  8. Neural correlates of quality perception for complex speech signals

    CERN Document Server

    Antons, Jan-Niklas

    2015-01-01

    This book interconnects two essential disciplines to study the perception of speech: Neuroscience and Quality of Experience, which to date have rarely been used together for the purposes of research on speech quality perception. In five key experiments, the book demonstrates the application of standard clinical methods in neurophysiology on the one hand, and of methods used in fields of research concerned with speech quality perception on the other. Using this combination, the book shows that speech stimuli with different lengths and different quality impairments are accompanied by physiological reactions related to quality variations, e.g., a positive peak in an event-related potential. Furthermore, it demonstrates that – in most cases – quality impairment intensity has an impact on the intensity of physiological reactions.

  9. Perception of words and pitch patterns in song and speech

    Directory of Open Access Journals (Sweden)

Julia Merrill

    2012-03-01

Full Text Available This fMRI study examines shared and distinct cortical areas involved in the auditory perception of song and speech at the level of their underlying constituents: words, pitch and rhythm. Univariate and multivariate analyses were performed on the brain activity patterns of six conditions, arranged in a subtractive hierarchy: sung sentences including words, pitch and rhythm; hummed speech prosody and song melody containing only pitch patterns and rhythm; as well as the pure musical or speech rhythm. Systematic contrasts between these balanced conditions following their hierarchical organization showed a great overlap between song and speech at all levels in the bilateral temporal lobe, but suggested a differential role of the inferior frontal gyrus (IFG) and intraparietal sulcus (IPS) in processing song and speech. The left IFG was involved in word- and pitch-related processing in speech, the right IFG in processing pitch in song. Furthermore, the IPS showed sensitivity to discrete pitch relations in song as opposed to the gliding pitch in speech. Finally, the superior temporal gyrus and premotor cortex coded for general differences between words and pitch patterns, irrespective of whether they were sung or spoken. Thus, song and speech share many features which are reflected in a fundamental similarity of brain areas involved in their perception. However, fine-grained acoustic differences on word and pitch level are reflected in the activity of IFG and IPS.

  10. Speech perception at the interface of neurobiology and linguistics.

    Science.gov (United States)

    Poeppel, David; Idsardi, William J; van Wassenhove, Virginie

    2008-03-12

Speech perception consists of a set of computations that take continuously varying acoustic waveforms as input and generate discrete representations that make contact with the lexical representations stored in long-term memory as output. Because the perceptual objects that are recognized by the speech perception system enter into subsequent linguistic computation, the format that is used for lexical representation and processing fundamentally constrains the speech perceptual processes. Consequently, theories of speech perception must, at some level, be tightly linked to theories of lexical representation. Minimally, speech perception must yield representations that smoothly and rapidly interface with stored lexical items. Adopting the perspective of Marr, we argue and provide neurobiological and psychophysical evidence for the following research programme. First, at the implementational level, speech perception is a multi-time resolution process, with perceptual analyses occurring concurrently on at least two time scales (approx. 20-80 ms, approx. 150-300 ms), commensurate with (sub)segmental and syllabic analyses, respectively. Second, at the algorithmic level, we suggest that perception proceeds on the basis of internal forward models, or uses an 'analysis-by-synthesis' approach. Third, at the computational level (in the sense of Marr), the theory of lexical representation that we adopt is principally informed by phonological research and assumes that words are represented in the mental lexicon in terms of sequences of discrete segments composed of distinctive features. One important goal of the research programme is to develop linking hypotheses between putative neurobiological primitives (e.g. temporal primitives) and those primitives derived from linguistic inquiry, to arrive ultimately at a biologically sensible and theoretically satisfying model of representation and computation in speech.

  11. Influences of selective adaptation on perception of audiovisual speech

    Science.gov (United States)

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  12. One Way or Another: Evidence for Perceptual Asymmetry in Pre-attentive Learning of Non-native Contrasts

    Directory of Open Access Journals (Sweden)

    Liquan Liu

    2018-03-01

Full Text Available Research investigating listeners’ neural sensitivity to speech sounds has largely focused on segmental features. We examined Australian English listeners’ perception and learning of a supra-segmental feature, pitch direction in a non-native tonal contrast, using a passive oddball paradigm and electroencephalography. The stimuli were two contours generated from naturally produced high-level and high-falling tones in Mandarin Chinese, differing only in pitch direction (Liu and Kager, 2014). While both contours had similar pitch onsets, the pitch offset of the falling contour was lower than that of the level one. The contrast was presented in two orientations (standard and deviant reversed) and tested in two blocks with the order of block presentation counterbalanced. Mismatch negativity (MMN) responses showed that listeners discriminated the non-native tonal contrast only in the second block, reflecting indications of learning through exposure during the first block. In addition, listeners showed a later MMN peak for their second block of test relative to listeners who did the same block first, suggesting linguistic (as opposed to acoustic) processing or a misapplication of perceptual strategies from the first to the second block. The results also showed a perceptual asymmetry for change in pitch direction: listeners who encountered a falling tone deviant in the first block had larger frontal MMN amplitudes than listeners who encountered a level tone deviant in the first block. The implications of our findings for second language speech and the developmental trajectory for tone perception are discussed.
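A passive oddball paradigm like the one above presents a frequent "standard" stimulus interleaved with a rare "deviant". The sketch below generates such a trial sequence; the 15% deviant probability and the no-adjacent-deviants constraint are common conventions, not the study's reported parameters.

```python
# Hypothetical oddball sequence generator: rare deviants among frequent
# standards, with no two deviants in a row. Proportions are illustrative.
import random

def oddball_sequence(n_trials=400, p_deviant=0.15, seed=1):
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == "deviant":
            seq.append("standard")        # enforce no adjacent deviants
        elif rng.random() < p_deviant:
            seq.append("deviant")
        else:
            seq.append("standard")
    return seq

seq = oddball_sequence()
```

The MMN is then computed as the difference between ERP responses averaged over deviant trials and over standard trials.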

  13. Recognizing speech in a novel accent: the motor theory of speech perception reframed.

    Science.gov (United States)

    Moulin-Frier, Clément; Arbib, Michael A

    2013-08-01

The motor theory of speech perception holds that we perceive the speech of another in terms of a motor representation of that speech. However, when we have learned to recognize a foreign accent, it seems plausible that recognition of a word rarely involves reconstruction of the speech gestures of the speaker rather than the listener. To better assess the motor theory and this observation, we proceed in three stages. Part 1 places the motor theory of speech perception in a larger framework based on our earlier models of the adaptive formation of mirror neurons for grasping, and for viewing extensions of that mirror system as part of a larger system for neuro-linguistic processing, augmented by the present consideration of recognizing speech in a novel accent. Part 2 then offers a novel computational model of how a listener comes to understand the speech of someone speaking the listener's native language with a foreign accent. The core tenet of the model is that the listener uses hypotheses about the word the speaker is currently uttering to update probabilities linking the sound produced by the speaker to phonemes in the native language repertoire of the listener. This, on average, improves the recognition of later words. This model is neutral regarding the nature of the representations it uses (motor vs. auditory). It serves as a reference point for the discussion in Part 3, which proposes a dual-stream neuro-linguistic architecture to revisit claims for and against the motor theory of speech perception and the relevance of mirror neurons, and extracts some implications for the reframing of the motor theory.

  14. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    Science.gov (United States)

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
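The abstract's core idea, learning categories from the joint distribution of auditory and visual cues, can be illustrated with a toy unsupervised fit. The example below uses scikit-learn's `GaussianMixture` on synthetic two-cue data (a VOT-like auditory cue and a lip-aperture-like visual cue); the data and cue values are invented for illustration and are not the paper's simulations.

```python
# Toy GMM sketch: learn two phonological categories from unlabeled joint
# auditory/visual cue data. Synthetic data, not the paper's model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n = 500
# category A: short VOT, small lip aperture; category B: long VOT, large aperture
cat_a = rng.normal([10.0, 0.2], [4.0, 0.05], size=(n, 2))
cat_b = rng.normal([60.0, 0.8], [8.0, 0.05], size=(n, 2))
X = np.vstack([cat_a, cat_b])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
means = gmm.means_[np.argsort(gmm.means_[:, 0])]
# the unsupervised fit recovers the two joint cue distributions
```

Because the fit is over the joint cue space, each learned component implicitly carries cue weights (via its covariance), which is the sense in which distributional statistics alone can yield audiovisual correspondences.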

  15. Longitudinal Study of Speech Perception, Speech, and Language for Children with Hearing Loss in an Auditory-Verbal Therapy Program

    Science.gov (United States)

    Dornan, Dimity; Hickson, Louise; Murdoch, Bruce; Houston, Todd

    2009-01-01

    This study examined the speech perception, speech, and language developmental progress of 25 children with hearing loss (mean Pure-Tone Average [PTA] 79.37 dB HL) in an auditory verbal therapy program. Children were tested initially and then 21 months later on a battery of assessments. The speech and language results over time were compared with…

  16. Listeners' Perceptions of Speech and Language Disorders

    Science.gov (United States)

    Allard, Emily R.; Williams, Dale F.

    2008-01-01

    Using semantic differential scales with nine trait pairs, 445 adults rated five audio-taped speech samples, one depicting an individual without a disorder and four portraying communication disorders. Statistical analyses indicated that the no disorder sample was rated higher with respect to the trait of employability than were the articulation,…

  17. Cortical Mechanisms of Speech Perception in Noise

    Science.gov (United States)

    Wong, Patrick C. M.; Uppunda, Ajith K.; Parrish, Todd B.; Dhar, Sumitrajit

    2008-01-01

    Purpose: The present study examines the brain basis of listening to spoken words in noise, which is a ubiquitous characteristic of communication, with the focus on the dorsal auditory pathway. Method: English-speaking young adults identified single words in 3 listening conditions while their hemodynamic response was measured using fMRI: speech in…

  18. The Role of the Listener's State in Speech Perception

    Science.gov (United States)

    Viswanathan, Navin

    2009-01-01

    Accounts of speech perception disagree on whether listeners perceive the acoustic signal (Diehl, Lotto, & Holt, 2004) or the vocal tract gestures that produce the signal (e.g., Fowler, 1986). In this dissertation, I outline a research program using a phenomenon called "perceptual compensation for coarticulation" (Mann, 1980) to examine this…

  19. Perceptions of University Instructors When Listening to International Student Speech

    Science.gov (United States)

    Sheppard, Beth; Elliott, Nancy; Baese-Berk, Melissa

    2017-01-01

    Intensive English Program (IEP) Instructors and content faculty both listen to international students at the university. For these two groups of instructors, this study compared perceptions of international student speech by collecting comprehensibility ratings and transcription samples for intelligibility scores. No significant differences were…

  20. Vocabulary Facilitates Speech Perception in Children With Hearing Aids.

    Science.gov (United States)

    Klein, Kelsey E; Walker, Elizabeth A; Kirby, Benjamin; McCreery, Ryan W

    2017-08-16

    We examined the effects of vocabulary, lexical characteristics (age of acquisition and phonotactic probability), and auditory access (aided audibility and daily hearing aid [HA] use) on speech perception skills in children with HAs. Participants included 24 children with HAs and 25 children with normal hearing (NH), ages 5-12 years. Groups were matched on age, expressive and receptive vocabulary, articulation, and nonverbal working memory. Participants repeated monosyllabic words and nonwords in noise. Stimuli varied on age of acquisition, lexical frequency, and phonotactic probability. Performance in each condition was measured by the signal-to-noise ratio at which the child could accurately repeat 50% of the stimuli. Children from both groups with larger vocabularies showed better performance than children with smaller vocabularies on nonwords and late-acquired words but not early-acquired words. Overall, children with HAs showed poorer performance than children with NH. Auditory access was not associated with speech perception for the children with HAs. Children with HAs show deficits in sensitivity to phonological structure but appear to take advantage of vocabulary skills to support speech perception in the same way as children with NH. Further investigation is needed to understand the causes of the gap that exists between the overall speech perception abilities of children with HAs and children with NH.
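The 50%-correct SNR measure described above is the point that a simple 1-up/1-down adaptive track converges to: the SNR is made harder after each correct repetition and easier after each error. The sketch below is a generic illustration, not the study's procedure; the simulated listener's logistic threshold (-2 dB), slope, and step size are assumed values.

```python
# Generic 1-up/1-down adaptive staircase converging on the SNR at which
# a (simulated) listener repeats ~50% of stimuli correctly.
import math
import random

def simulated_listener(snr_db, threshold_db=-2.0, slope=1.0):
    """Logistic psychometric function: returns True for a correct trial."""
    p = 1.0 / (1.0 + math.exp(-slope * (snr_db - threshold_db)))
    return random.random() < p

def staircase(trials=200, start_snr_db=10.0, step_db=2.0, seed=1):
    """1-up/1-down track: step down (harder) after a hit, up after a miss."""
    random.seed(seed)
    snr, reversals, last = start_snr_db, [], None
    for _ in range(trials):
        correct = simulated_listener(snr)
        if last is not None and correct != last:
            reversals.append(snr)          # track direction reversed here
        snr += -step_db if correct else step_db
        last = correct
    # The mean SNR over the late reversals estimates the 50%-correct point.
    return sum(reversals[-10:]) / len(reversals[-10:])

print(round(staircase(), 1))
```

Because a 1-up/1-down rule equalizes the probabilities of stepping up and down, the track oscillates around the 50% point of the psychometric function.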

  1. Multisensory speech perception without the left superior temporal sulcus.

    Science.gov (United States)

    Baum, Sarah H; Martin, Randi C; Hamilton, A Cris; Beauchamp, Michael S

    2012-09-01

    Converging evidence suggests that the left superior temporal sulcus (STS) is a critical site for multisensory integration of auditory and visual information during speech perception. We report a patient, SJ, who suffered a stroke that damaged the left temporo-parietal area, resulting in mild anomic aphasia. Structural MRI showed complete destruction of the left middle and posterior STS, as well as damage to adjacent areas in the temporal and parietal lobes. Surprisingly, SJ demonstrated preserved multisensory integration measured with two independent tests. First, she perceived the McGurk effect, an illusion that requires integration of auditory and visual speech. Second, her perception of morphed audiovisual speech with ambiguous auditory or visual information was significantly influenced by the opposing modality. To understand the neural basis for this preserved multisensory integration, blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) was used to examine brain responses to audiovisual speech in SJ and 23 healthy age-matched controls. In controls, bilateral STS activity was observed. In SJ, no activity was observed in the damaged left STS, but more cortex was active in her right STS than in any of the normal controls. Further, the amplitude of the BOLD response in the right STS to McGurk stimuli was significantly greater in SJ than in controls. The simplest explanation of these results is a reorganization of SJ's cortical language networks such that the right STS now subserves multisensory integration of speech. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or could have added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Feedback in online course for non-native English-speaking students

    CERN Document Server

    Olesova, Larisa

    2013-01-01

    Feedback in Online Course for Non-Native English-Speaking Students is an investigation of the effectiveness of audio and text feedback provided in English in an online course for non-native English-speaking students. The study presents results showing how audio and text feedback can affect non-native English-speaking students' higher-order learning as they participate in an asynchronous online course. It also discusses how students perceive each type of feedback provided. In addition, the study examines how the impact and perceptions differ when the instructor giving the

  4. A music perception disorder (congenital amusia) influences speech comprehension.

    Science.gov (United States)

    Liu, Fang; Jiang, Cunmei; Wang, Bei; Xu, Yi; Patel, Aniruddh D

    2015-01-01

    This study investigated the underlying link between speech and music by examining whether and to what extent congenital amusia, a musical disorder characterized by degraded pitch processing, would impact spoken sentence comprehension for speakers of Mandarin, a tone language. Sixteen Mandarin-speaking amusics and 16 matched controls were tested on the intelligibility of news-like Mandarin sentences with natural and flat fundamental frequency (F0) contours (created via speech resynthesis) under four signal-to-noise ratio (SNR) conditions (no noise, +5, 0, and -5 dB SNR). While speech intelligibility in quiet and extremely noisy conditions (SNR = -5 dB) was not significantly compromised by flattened F0, both amusic and control groups achieved better performance with natural-F0 sentences than flat-F0 sentences under moderately noisy conditions (SNR = +5 and 0 dB). Relative to normal listeners, amusics demonstrated reduced speech intelligibility in both quiet and noise, regardless of whether the F0 contours of the sentences were natural or flattened. This deficit in speech intelligibility was not associated with impaired pitch perception in amusia. These findings provide evidence for impaired speech comprehension in congenital amusia, suggesting that the deficit of amusics extends beyond pitch processing and includes segmental processing. Copyright © 2014 Elsevier Ltd. All rights reserved.
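Fixed-SNR noise conditions like the ones above (+5, 0, and -5 dB) are typically constructed by scaling a noise masker so that the speech-to-noise power ratio hits the target level before mixing. A minimal sketch, with synthetic stand-ins for the speech and noise signals (nothing below comes from the study's materials):

```python
# Mix a "speech" signal with noise at a target SNR by rescaling the noise
# so that 10*log10(P_speech / P_noise) equals the requested level.
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Return speech + noise, with noise scaled to the target SNR in dB."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_p_noise / p_noise)

rng = np.random.default_rng(0)
fs = 16000
speech = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)  # placeholder "speech"
noise = rng.standard_normal(fs)                        # white-noise masker

mixed = mix_at_snr(speech, noise, snr_db=-5.0)         # hardest condition
```

The same function covers every condition in a design like this one simply by varying `snr_db`; the "no noise" condition is the unmixed speech itself.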

  5. Introduction. The perception of speech: from sound to meaning.

    Science.gov (United States)

    Moore, Brian C J; Tyler, Lorraine K; Marslen-Wilson, William

    2008-03-12

    Spoken language communication is arguably the most important activity that distinguishes humans from non-human species. This paper provides an overview of the review papers that make up this theme issue on the processes underlying speech communication. The volume includes contributions from researchers who specialize in a wide range of topics within the general area of speech perception and language processing. It also includes contributions from key researchers in neuroanatomy and functional neuro-imaging, in an effort to cut across traditional disciplinary boundaries and foster cross-disciplinary interactions in this important and rapidly developing area of the biological and cognitive sciences.

  6. Cross-language and second language speech perception

    DEFF Research Database (Denmark)

    Bohn, Ocke-Schwen

    2017-01-01

    in cross-language and second language speech perception research: The mapping issue (the perceptual relationship of sounds of the native and the nonnative language in the mind of the native listener and the L2 learner), the perceptual and learning difficulty/ease issue (how this relationship may or may not cause perceptual and learning difficulty), and the plasticity issue (whether and how experience with the nonnative language affects the perceptual organization of speech sounds in the mind of L2 learners). One important general conclusion from this research is that perceptual learning is possible at all...

  7. NATIVE VS NON-NATIVE ENGLISH TEACHERS

    Directory of Open Access Journals (Sweden)

    Masrizal Masrizal

    2013-02-01

    Although the majority of English language teachers worldwide are non-native English speakers (NNS), no research was conducted on these teachers until recently. After Peter Medgyes's pioneering research in 1994, it took quite a long time for other researchers to take an interest in this issue. There is a widespread stereotype that a native speaker (NS) is by nature the best person to teach his/her foreign language. Under this assumption, there is very limited room and opportunity for a non-native teacher to teach a language that is not his/hers. The aim of this article is to analyze the differences between these teachers in order to show that non-native teachers have advantages of their own that should be taken into account. The writer expects that this short article can be a valuable input to the area of teaching English as a foreign language in Indonesia.

  8. Rhythm Perception and Its Role in Perception and Learning of Dysrhythmic Speech.

    Science.gov (United States)

    Borrie, Stephanie A; Lansford, Kaitlin L; Barrett, Tyson S

    2017-03-01

    The perception of rhythm cues plays an important role in recognizing spoken language, especially in adverse listening conditions. Indeed, this has been shown to hold true even when the rhythm cues themselves are dysrhythmic. This study investigates whether expertise in rhythm perception provides a processing advantage for perception (initial intelligibility) and learning (intelligibility improvement) of naturally dysrhythmic speech, dysarthria. Fifty young adults with typical hearing participated in 3 key tests, including a rhythm perception test, a receptive vocabulary test, and a speech perception and learning test, with standard pretest, familiarization, and posttest phases. Initial intelligibility scores were calculated as the proportion of correct pretest words, while intelligibility improvement scores were calculated by subtracting this proportion from the proportion of correct posttest words. Rhythm perception scores predicted intelligibility improvement scores but not initial intelligibility. On the other hand, receptive vocabulary scores predicted initial intelligibility scores but not intelligibility improvement. Expertise in rhythm perception appears to provide an advantage for processing dysrhythmic speech, but a familiarization experience is required for the advantage to be realized. Findings are discussed in relation to the role of rhythm in speech processing and shed light on processing models that consider the consequence of rhythm abnormalities in dysarthria.

  9. Why Not Non-Native Varieties of English as Listening Comprehension Test Input?

    Science.gov (United States)

    Abeywickrama, Priyanvada

    2013-01-01

    The existence of different varieties of English in target language use (TLU) domains calls into question the usefulness of listening comprehension tests whose input is limited only to a native speaker variety. This study investigated the impact of non-native varieties or accented English speech on test takers from three different English use…

  10. Non-natives: 141 scientists object

    NARCIS (Netherlands)

    Simberloff, D.; Van der Putten, W.H.

    2011-01-01

    Supplementary information to: Non-natives: 141 scientists object Full list of co-signatories to a Correspondence published in Nature 475, 36 (2011); doi: 10.1038/475036a. Daniel Simberloff University of Tennessee, Knoxville, Tennessee, USA. dsimberloff@utk.edu Jake Alexander Institute of Integrative

  11. Comprehending non-native speakers: theory and evidence for adjustment in manner of processing.

    Science.gov (United States)

    Lev-Ari, Shiri

    2014-01-01

    Non-native speakers have lower linguistic competence than native speakers, which renders their language less reliable in conveying their intentions. We suggest that expectations of lower competence lead listeners to adapt their manner of processing when they listen to non-native speakers. We propose that listeners use cognitive resources to adjust by increasing their reliance on top-down processes and extracting less information from the language of the non-native speaker. An eye-tracking study supports our proposal by showing that when following instructions by a non-native speaker, listeners make more contextually-induced interpretations. Those with relatively high working memory also increase their reliance on context to anticipate the speaker's upcoming reference, and are less likely to notice lexical errors in the non-native speech, indicating that they take less information from the speaker's language. These results contribute to our understanding of the flexibility in language processing and have implications for interactions between native and non-native speakers.

  12. Listening to a non-native speaker: Adaptation and generalization

    Science.gov (United States)

    Clarke, Constance M.

    2004-05-01

    Non-native speech can cause perceptual difficulty for the native listener, but experience can moderate this difficulty. This study explored the perceptual benefit of a brief (approximately 1 min) exposure to foreign-accented speech using a cross-modal word matching paradigm. Processing speed was tracked by recording reaction times (RTs) to visual probe words following English sentences produced by a Spanish-accented speaker. In Experiment 1, RTs decreased significantly over 16 accented utterances and by the end were equal to RTs to a native voice. In Experiment 2, adaptation to one Spanish-accented voice improved perceptual efficiency for a new Spanish-accented voice, indicating that abstract properties of accented speech are learned during adaptation. The control group in Experiment 2 also adapted to the accented voice during the test block, suggesting adaptation can occur within two to four sentences. The results emphasize the flexibility of the human speech processing system and the need for a mechanism to explain this adaptation in models of spoken word recognition. [Research supported by an NSF Graduate Research Fellowship and the University of Arizona Cognitive Science Program. The author is currently at SUNY at Buffalo, Dept. of Psych., Park Hall, Buffalo, NY 14260, cclarke2@buffalo.edu.]

  13. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    Directory of Open Access Journals (Sweden)

    Antje eHeinrich

    2015-06-01

    Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild SNHL were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise), to high (sentence perception in modulated noise); cognitive tests of attention, memory, and nonverbal IQ; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that auditory environments pose on

  14. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    Science.gov (United States)

    Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A.

    2015-01-01

    Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild sensorineural hearing loss were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise) to high (sentence perception in modulated noise); cognitive tests of attention, memory, and non-verbal intelligence quotient; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. 
The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that

  15. Spectrotemporal Modulation Detection and Speech Perception by Cochlear Implant Users.

    Science.gov (United States)

    Won, Jong Ho; Moon, Il Joon; Jin, Sunhwa; Park, Heesung; Woo, Jihwan; Cho, Yang-Sun; Chung, Won-Ho; Hong, Sung Hwa

    2015-01-01

    Spectrotemporal modulation (STM) detection performance was examined for cochlear implant (CI) users. The test involved discriminating between an unmodulated steady noise and a modulated stimulus. The modulated stimulus presents frequency modulation patterns that change in frequency over time. In order to examine STM detection performance for different modulation conditions, two different temporal modulation rates (5 and 10 Hz) and three different spectral modulation densities (0.5, 1.0, and 2.0 cycles/octave) were employed, producing a total of 6 different STM stimulus conditions. In order to explore how electric hearing constrains STM sensitivity for CI users differently from acoustic hearing, normal-hearing (NH) and hearing-impaired (HI) listeners were also tested on the same tasks. STM detection performance was best in NH subjects, followed by HI subjects. On average, CI subjects showed the poorest performance, but some CI subjects showed high levels of STM detection performance comparable to acoustic hearing. Significant correlations were found between STM detection performance and speech identification performance in quiet and in noise. In order to understand the relative contribution of spectral and temporal modulation cues to speech perception abilities for CI users, spectral and temporal modulation detection was performed separately and related to STM detection and speech perception performance. The results suggest that slow spectral modulation rather than slow temporal modulation may be important for determining speech perception capabilities for CI users. Lastly, test-retest reliability for STM detection was good, with no learning effect. The present study demonstrates that STM detection may be a useful tool to evaluate the ability of CI sound processing strategies to deliver clinically pertinent acoustic modulation information.
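A moving-ripple stimulus of the kind used in STM tests (a spectral sinusoid drifting across log frequency over time) can be synthesized as a bank of log-spaced tones whose amplitude envelopes follow the ripple. The construction below is generic, not the study's stimulus code; the frequency range, tone count, and modulation depth are assumed values.

```python
# Synthesize a moving spectrotemporal ripple: log-spaced tones whose
# amplitudes follow a spectral sinusoid drifting over time.
import numpy as np

def moving_ripple(rate_hz=5.0, density_cyc_per_oct=1.0, depth=0.9,
                  f_lo=350.0, f_hi=5600.0, n_tones=200, fs=22050, dur=1.0):
    t = np.arange(int(fs * dur)) / fs
    # Log-spaced carrier frequencies and their positions in octaves.
    freqs = f_lo * (f_hi / f_lo) ** (np.arange(n_tones) / (n_tones - 1))
    octaves = np.log2(freqs / f_lo)
    phases = 2 * np.pi * np.random.default_rng(0).random(n_tones)
    sig = np.zeros_like(t)
    for f, x, ph in zip(freqs, octaves, phases):
        # Sinusoidal envelope moving jointly across time (rate_hz, in Hz)
        # and log frequency (density_cyc_per_oct, in cycles/octave).
        env = 1.0 + depth * np.sin(2 * np.pi * (rate_hz * t
                                                + density_cyc_per_oct * x))
        sig += env * np.sin(2 * np.pi * f * t + ph)
    return sig / np.max(np.abs(sig))

stim = moving_ripple()  # e.g. the 5 Hz, 1.0 cyc/oct condition
```

Sweeping `rate_hz` over {5, 10} and `density_cyc_per_oct` over {0.5, 1.0, 2.0} reproduces a 2 x 3 condition grid like the one described; setting `depth=0` yields the unmodulated reference noise.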

  16. The selective role of premotor cortex in speech perception: a contribution to phoneme judgements but not speech comprehension.

    Science.gov (United States)

    Krieger-Redwood, Katya; Gaskell, M Gareth; Lindsay, Shane; Jefferies, Elizabeth

    2013-12-01

    Several accounts of speech perception propose that the areas involved in producing language are also involved in perceiving it. In line with this view, neuroimaging studies show activation of premotor cortex (PMC) during phoneme judgment tasks; however, there is debate about whether speech perception necessarily involves motor processes across all task contexts, or whether the contribution of PMC is restricted to tasks requiring explicit phoneme awareness. Some aspects of speech processing, such as mapping sounds onto meaning, may proceed without the involvement of motor speech areas if PMC specifically contributes to the manipulation and categorical perception of phonemes. We applied TMS to three sites (PMC, posterior superior temporal gyrus, and occipital pole) and, for the first time within the TMS literature, directly contrasted two speech perception tasks that required explicit phoneme decisions and mapping of speech sounds onto semantic categories, respectively. TMS to PMC disrupted explicit phonological judgments but not access to meaning for the same speech stimuli. TMS to the two further sites confirmed that this pattern was site specific and did not reflect a generic difference in the susceptibility of our experimental tasks to TMS: stimulation of pSTG, a site involved in auditory processing, disrupted performance in both language tasks, whereas stimulation of the occipital pole had no effect on performance in either task. These findings demonstrate that, although PMC is important for explicit phonological judgments, crucially, PMC is not necessary for mapping speech onto meanings.

  17. Infants' preference for native audiovisual speech dissociated from congruency preference.

    Directory of Open Access Journals (Sweden)

    Kathleen Shaw

    Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., the speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native-language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

  18. Non-natives: 141 scientists object

    OpenAIRE

    Simberloff, Daniel; Vilà, Montserrat

    2011-01-01

    Supplementary information to: Non-natives: 141 scientists object Full list of co-signatories to a Correspondence published in Nature 475, 36 (2011); doi: 10.1038/475036a. Daniel Simberloff University of Tennessee, Knoxville, Tennessee, USA. Jake Alexander Institute of Integrative Biology, Zurich, Switzerland. Fred Allendorf University of Montana, Missoula, Montana, USA. James Aronson CEFE/CNRS, Montpellier, France. Pedro M. Antunes Algoma University, Sault Ste. Marie, Onta...

  19. Internet video telephony allows speech reading by deaf individuals and improves speech perception by cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Georgios Mantokoudis

    OBJECTIVE: To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. METHODS: Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. RESULTS: Higher frame rate (>7 fps), higher camera resolution (>640 × 480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception gain +11.8%pts, p = 0.032). CONCLUSION: Webcameras have the potential to improve telecommunication of hearing-impaired individuals.

  20. Speech perception with mono- and quadrupolar electrode configurations: a crossover study.

    NARCIS (Netherlands)

    Mens, L.H.M.; Berenstein, C.K.

    2005-01-01

    OBJECTIVE: To study the effect of two multipolar electrode configurations on speech perception, pitch perception, and the intracochlear electrical field. STUDY DESIGN: Crossover design; within subject. SETTING: Tertiary referral center. PATIENTS: Eight experienced adult cochlear implant users.

  1. Only Behavioral But Not Self-Report Measures of Speech Perception Correlate with Cognitive Abilities.

    Science.gov (United States)

    Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A

    2016-01-01

    Good speech perception and communication skills in everyday life are crucial for participation and well-being, and are therefore an overarching aim of auditory rehabilitation. Both behavioral and self-report measures can be used to assess these skills. However, correlations between behavioral and self-report speech perception measures are often low. One possible explanation is that there is a mismatch between the specific situations used in the assessment of these skills in each method, and a more careful matching across situations might improve consistency of results. The role that cognition plays in specific speech situations may also be important for understanding communication, as speech perception tests vary in their cognitive demands. In this study, the role of executive function, working memory (WM) and attention in behavioral and self-report measures of speech perception was investigated. Thirty existing hearing aid users with mild-to-moderate hearing loss aged between 50 and 74 years completed a behavioral test battery with speech perception tests ranging from phoneme discrimination in modulated noise (easy) to words in multi-talker babble (medium) and keyword perception in a carrier sentence against a distractor voice (difficult). In addition, a self-report measure of aided communication, residual disability from the Glasgow Hearing Aid Benefit Profile, was obtained. Correlations between speech perception tests and self-report measures were higher when specific speech situations across both were matched. Cognition correlated with behavioral speech perception test results but not with self-report. Only the most difficult speech perception test, keyword perception in a carrier sentence with a competing distractor voice, engaged executive functions in addition to WM. In conclusion, any relationship between behavioral and self-report speech perception is not mediated by a shared correlation with cognition.

  2. On the Perception of Speech Sounds as Biologically Significant Signals

    Science.gov (United States)

    Pisoni, David B.

    2012-01-01

    This paper reviews some of the major evidence and arguments currently available to support the view that human speech perception may require the use of specialized neural mechanisms for perceptual analysis. Experiments using synthetically produced speech signals with adults are briefly summarized, and extensions of these results to infants and other organisms are reviewed, with an emphasis on detailing those aspects of speech perception that may require specialized species-specific processors. Finally, some comments on the role of early experience in perceptual development are provided in an attempt to identify promising areas of new research in speech perception. PMID:399200

  3. Cross-Cultural Variation of Politeness Orientation & Speech Act Perception

    Directory of Open Access Journals (Sweden)

    Nisreen Naji Al-Khawaldeh

    2013-05-01

    This paper presents the findings of an empirical study comparing Jordanian and English native speakers’ perceptions of the speech act of thanking. The forty interviews conducted revealed some similarities but also remarkable cross-cultural differences relating to the significance of thanking, the variables affecting it, and the appropriate linguistic and paralinguistic choices, as well as their impact on the interpretation of thanking behaviour. The most important theoretical finding is that the data, while consistent with many views found in the existing literature, do not support Brown and Levinson’s (1987) claim that thanking is a speech act which intrinsically threatens the speaker’s negative face because it involves overt acceptance of an imposition on the speaker. Rather, thanking should be viewed as a means of establishing and sustaining social relationships. The study findings suggest that cultural variation in thanking is due to the high degree of sensitivity of this speech act to the complex interplay of a range of social and contextual variables, and point to some promising directions for further research.

  4. The Development of the Mealings, Demuth, Dillon, and Buchholz Classroom Speech Perception Test

    Science.gov (United States)

    Mealings, Kiri T.; Demuth, Katherine; Buchholz, Jörg; Dillon, Harvey

    2015-01-01

    Purpose: Open-plan classroom styles are increasingly being adopted in Australia despite evidence that their high intrusive noise levels adversely affect learning. The aim of this study was to develop a new Australian speech perception task (the Mealings, Demuth, Dillon, and Buchholz Classroom Speech Perception Test) and use it in an open-plan…

  5. The Role of Broca's Area in Speech Perception: Evidence from Aphasia Revisited

    Science.gov (United States)

    Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele

    2011-01-01

    Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that…

  6. Audiovisual Speech Perception and Eye Gaze Behavior of Adults with Asperger Syndrome

    Science.gov (United States)

    Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-01-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face…

  7. Face configuration affects speech perception: Evidence from a McGurk mismatch negativity study

    DEFF Research Database (Denmark)

    Eskelund, Kasper; MacDonald, Ewen; Andersen, Tobias

    2015-01-01

    We perceive identity, expression and speech from faces. While perception of identity and expression depends crucially on the configuration of facial features it is less clear whether this holds for visual speech perception. Facial configuration is poorly perceived for upside-down faces as demonst...

  8. Perception of foreign-accented clear speech by younger and older English listeners

    OpenAIRE

    Li, Chi-Nin

    2009-01-01

    Naturally produced English clear speech has been shown to be more intelligible than English conversational speech. However, little is known about the extent of the clear speech effects in the production of nonnative English, and perception of foreign-accented English by younger and older listeners. The present study examined whether Cantonese speakers would employ the same strategies as those used by native English speakers in producing clear speech in their second language. Also, the clear s...

  9. Music training and speech perception: a gene-environment interaction.

    Science.gov (United States)

    Schellenberg, E Glenn

    2015-03-01

    Claims of beneficial side effects of music training are made for many different abilities, including verbal and visuospatial abilities, executive functions, working memory, IQ, and speech perception in particular. Such claims assume that music training causes the associations even though children who take music lessons are likely to differ from other children in music aptitude, which is associated with many aspects of speech perception. Music training in childhood is also associated with cognitive, personality, and demographic variables, and it is well established that IQ and personality are determined largely by genetics. Recent evidence also indicates that the role of genetics in music aptitude and music achievement is much larger than previously thought. In short, music training is an ideal model for the study of gene-environment interactions but far less appropriate as a model for the study of plasticity. Children seek out environments, including those with music lessons, that are consistent with their predispositions; such environments exaggerate preexisting individual differences. © 2015 New York Academy of Sciences.

  10. Effects of sounds of locomotion on speech perception

    Directory of Open Access Journals (Sweden)

    Matz Larsson

    2015-01-01

    Human locomotion typically creates noise, a possible consequence of which is the masking of sound signals originating in the surroundings. When walking side by side, people often subconsciously synchronize their steps. The neurophysiological and evolutionary background of this behavior is unclear. The present study investigated the potential of sound created by walking to mask perception of speech and compared the masking produced by walking in step with that produced by unsynchronized walking. The masking sound (footsteps on gravel) and the target sound (speech) were presented through the same speaker to 15 normal-hearing subjects. The original recorded walking sound was modified to mimic the sound of two individuals walking in pace or walking out of synchrony. The participants were instructed to adjust the sound level of the target sound until they could just comprehend the speech signal (the "just follow conversation" or JFC level) when presented simultaneously with synchronized or unsynchronized walking sound at 40 dBA, 50 dBA, 60 dBA, or 70 dBA. Synchronized walking sounds produced slightly less masking of speech than did unsynchronized sound. The median JFC threshold in the synchronized condition was 38.5 dBA, while the corresponding value for the unsynchronized condition was 41.2 dBA. Combined results at all sound pressure levels showed an improvement in the signal-to-noise ratio (SNR) for synchronized footsteps; the median difference was 2.7 dB and the mean difference was 1.2 dB [P < 0.001, repeated-measures analysis of variance (RM-ANOVA)]. The difference was significant for masker levels of 50 dBA and 60 dBA, but not for 40 dBA or 70 dBA. This study provides evidence that synchronized walking may reduce the masking potential of footsteps.
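
    The SNR arithmetic behind these numbers is simple: at a fixed masker level, the SNR at the JFC threshold is the speech level minus the masker level, so a lower JFC threshold corresponds to following speech at a more adverse (lower) SNR. A minimal sketch of this calculation (function and variable names are illustrative, not from the study):

```python
# Illustrative sketch of the JFC/SNR arithmetic reported above.
# The fixed 60 dBA masker is one of the four levels used in the study;
# all names are illustrative assumptions, not from the original paper.

def snr_at_jfc(jfc_level_dba: float, masker_level_dba: float) -> float:
    """SNR (in dB) at the 'just follow conversation' threshold:
    speech level minus masker level."""
    return jfc_level_dba - masker_level_dba

masker = 60.0                            # dBA
snr_sync = snr_at_jfc(38.5, masker)      # median JFC, synchronized steps
snr_unsync = snr_at_jfc(41.2, masker)    # median JFC, unsynchronized steps

# Synchronized walking masks less, so speech is followed at a lower level;
# the median SNR improvement matches the reported 2.7 dB.
improvement = round(snr_unsync - snr_sync, 1)
print(improvement)
```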

  11. Relations between psychophysical data and speech perception for hearing-impaired subjects. II

    NARCIS (Netherlands)

    Dreschler, W. A.; Plomp, R.

    1985-01-01

    Twenty-one sensorineurally hearing-impaired adolescents were studied with an extensive battery of tone-perception, phoneme-perception, and speech-perception tests. Tests on loudness perception, frequency selectivity, and temporal resolution at the test frequencies of 500, 1000, and 2000 Hz were…

  12. Comparison of Speech Perception in Background Noise with Acceptance of Background Noise in Aided and Unaided Conditions.

    Science.gov (United States)

    Nabelek, Anna K.; Tampas, Joanna W.; Burchfield, Samuel B.

    2004-01-01

    Background noise is a significant factor influencing hearing-aid satisfaction and is a major reason for rejection of hearing aids. Attempts have been made by previous researchers to relate the use of hearing aids to speech perception in noise (SPIN), with an expectation of improved speech perception followed by an…

  13. The Role of Categorical Speech Perception and Phonological Processing in Familial Risk Children with and without Dyslexia

    Science.gov (United States)

    Hakvoort, Britt; de Bree, Elise; van der Leij, Aryan; Maassen, Ben; van Setten, Ellie; Maurits, Natasha; van Zuijen, Titia L.

    2016-01-01

    Purpose: This study assessed whether a categorical speech perception (CP) deficit is associated with dyslexia or familial risk for dyslexia, by exploring a possible cascading relation from speech perception to phonology to reading and by identifying whether speech perception distinguishes familial risk (FR) children with dyslexia (FRD) from those…

  14. Refining Stimulus Parameters in Assessing Infant Speech Perception Using Visual Reinforcement Infant Speech Discrimination: Sensation Level.

    Science.gov (United States)

    Uhler, Kristin M; Baca, Rosalinda; Dudas, Emily; Fredrickson, Tammy

    2015-01-01

    Speech perception measures have long been considered an integral piece of the audiological assessment battery. Currently, a prelinguistic, standardized measure of speech perception is missing in the clinical assessment battery for infants and young toddlers. Such a measure would allow systematic assessment of speech perception abilities of infants as well as the potential to investigate the impact early identification of hearing loss and early fitting of amplification have on the auditory pathways. To investigate the impact of sensation level (SL) on the ability of infants with normal hearing (NH) to discriminate /a-i/ and /ba-da/ and to determine if performance on the two contrasts are significantly different in predicting the discrimination criterion. The design was based on a survival analysis model for event occurrence and a repeated measures logistic model for binary outcomes. The outcome for survival analysis was the minimum SL for criterion and the outcome for the logistic regression model was the presence/absence of achieving the criterion. Criterion achievement was designated when an infant's proportion correct score was >0.75 on the discrimination performance task. Twenty-two infants with NH sensitivity participated in this study. There were 9 males and 13 females, aged 6-14 mo. Testing took place over two to three sessions. The first session consisted of a hearing test, threshold assessment of the two speech sounds (/a/ and /i/), and if time and attention allowed, visual reinforcement infant speech discrimination (VRISD). The second session consisted of VRISD assessment for the two test contrasts (/a-i/ and /ba-da/). The presentation level started at 50 dBA. If the infant was unable to successfully achieve criterion (>0.75) at 50 dBA, the presentation level was increased to 70 dBA followed by 60 dBA. Data examination included an event analysis, which provided the probability of criterion distribution across SL. The second stage of the analysis was a
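
    The discrimination criterion described above (proportion correct > 0.75) reduces to a one-line check. A minimal sketch; the function name and the example trial counts are illustrative assumptions, not from the study:

```python
def meets_criterion(correct_trials: int, total_trials: int,
                    threshold: float = 0.75) -> bool:
    """An infant meets the VRISD discrimination criterion when the
    proportion of correct responses exceeds the threshold (> 0.75)."""
    if total_trials == 0:
        return False
    return correct_trials / total_trials > threshold

# e.g. 8 of 10 correct meets criterion; 7 of 10 does not
print(meets_criterion(8, 10))   # True
print(meets_criterion(7, 10))   # False
```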

  15. Crossmodal and incremental perception of audiovisual cues to emotional speech.

    Science.gov (United States)

    Barkhuysen, Pashiera; Krahmer, Emiel; Swerts, Marc

    2010-01-01

    In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues to emotion from a speaker's face relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments reported below are based on tests with video clips of emotional utterances collected via a variant of the well-known Velten method. More specifically, we recorded speakers who displayed positive or negative emotions, which were congruent or incongruent with the (emotional) lexical content of the uttered sentence. In order to test this, we conducted two experiments. The first experiment is a perception experiment in which Czech participants, who do not speak Dutch, rate the perceived emotional state of Dutch speakers in a bimodal (audiovisual) or a unimodal (audio- or vision-only) condition. It was found that incongruent emotional speech leads to significantly more extreme perceived emotion scores than congruent emotional speech, where the difference between congruent and incongruent emotional speech is larger for the negative than for the positive conditions. Interestingly, the largest overall differences between congruent and incongruent emotions were found for the audio-only condition, which suggests that posing an incongruent emotion has a particularly strong effect on the spoken realization of emotions. The second experiment uses a gating paradigm to test the recognition speed for various emotional expressions from a speaker's face. In this experiment participants were presented with the same clips as experiment 1, but this time presented vision-only. The clips were shown in successive segments (gates) of increasing duration. Results show that participants are surprisingly accurate in their recognition of the various emotions, as they already reach high recognition scores in the first gate (after only 160 ms). Interestingly, the recognition scores…

  16. The Knowledge Base of Non-Native English-Speaking Teachers: Perspectives of Teachers and Administrators

    Science.gov (United States)

    Zhang, Fengjuan; Zhan, Ju

    2014-01-01

    This study explores the knowledge base of non-native English-speaking teachers (NNESTs) working in the Canadian English as a second language (ESL) context. By examining NNESTs' experiences in seeking employment and teaching ESL in Canada, and investigating ESL program administrators' perceptions and hiring practices in relation to NNESTs, it…

  17. Ecological impacts of non-native species: Chapter 2

    Science.gov (United States)

    Pilliod, David S.; Griffiths, R.A.; Kuzmin, S.L.; Heatwole, Harold; Wilkinson, John W.

    2012-01-01

    Non-native species are considered one of the greatest threats to freshwater biodiversity worldwide (Drake et al. 1989; Allen and Flecker 1993; Dudgeon et al. 2005). Some of the first hypotheses proposed to explain global patterns of amphibian declines included the effects of non-native species (Barinaga 1990; Blaustein and Wake 1990; Wake and Morowitz 1991). Evidence for the impact of non-native species on amphibians stems (1) from correlative research that relates the distribution or abundance of a species to that of a putative non-native species, and (2) from experimental tests of the effects of a non-native species on survival, growth, development or behaviour of a target species (Kats and Ferrer 2003). Over the past two decades, research on the effects of non-native species on amphibians has mostly focused on introduced aquatic predators, particularly fish. Recent research has shifted to more complex ecological relationships such as influences of sub-lethal stressors (e.g. contaminants) on the effects of non-native species (Linder et al. 2003; Sih et al. 2004), non-native species as vectors of disease (Daszak et al. 2004; Garner et al. 2006), hybridization between non-natives and native congeners (Riley et al. 2003; Storfer et al. 2004), and the alteration of food-webs by non-native species (Nystrom et al. 2001). Other research has examined the interaction of non-native species in terms of facilitation (i.e. one non-native enabling another to become established or spread) or the synergistic effects of multiple non-native species on native amphibians, the so-called invasional meltdown hypothesis (Simberloff and Von Holle 1999). Although there is evidence that some non-native species may interact (Ricciardi 2001), there has yet to be convincing evidence that such interactions have led to an accelerated increase in the number of non-native species, and cumulative impacts are still uncertain (Simberloff 2006). Applied research on the control, eradication, and…

  18. Adaptation to delayed auditory feedback induces the temporal recalibration effect in both speech perception and production.

    Science.gov (United States)

    Yamamoto, Kosuke; Kawabata, Hideaki

    2014-12-01

    We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.

  19. Beyond production: Brain responses during speech perception in adults who stutter

    Directory of Open Access Journals (Sweden)

    Tali Halag-Milo

    2016-01-01

    Developmental stuttering is a speech disorder that disrupts the ability to produce speech fluently. While stuttering is typically diagnosed based on one's behavior during speech production, some models suggest that it involves more central representations of language, and thus may affect language perception as well. Here we tested the hypothesis that developmental stuttering implicates neural systems involved in language perception, in a task that manipulates comprehensibility without an overt speech production component. We used functional magnetic resonance imaging to measure blood oxygenation level dependent (BOLD) signals in adults who do and do not stutter, while they were engaged in an incidental speech perception task. We found that speech perception evokes stronger activation in adults who stutter (AWS) compared to controls, specifically in the right inferior frontal gyrus (RIFG) and in left Heschl's gyrus (LHG). Significant differences were additionally found in the lateralization of response in the inferior frontal cortex: AWS showed bilateral inferior frontal activity, while controls showed a left-lateralized pattern of activation. These findings suggest that developmental stuttering is associated with an imbalanced neural network for speech processing, which is not limited to speech production, but also affects cortical responses during speech perception.

  20. Exploring Dyslexics' Phonological Deficit III: Foreign Speech Perception and Production

    Science.gov (United States)

    Soroli, Efstathia; Szenkovits, Gayaneh; Ramus, Franck

    2010-01-01

    This study investigates French dyslexic and control adult participants' ability to perceive and produce two different non-native contrasts (one segmental and one prosodic), across several conditions varying short-term memory load. For this purpose, we selected Korean plosive voicing (whose categories conflict with French ones) as the segmental…

  1. Internet Video Telephony Allows Speech Reading by Deaf Individuals and Improves Speech Perception by Cochlear Implant Users

    Science.gov (United States)

    Mantokoudis, Georgios; Dähler, Claudia; Dubach, Patrick; Kompis, Martin; Caversaccio, Marco D.; Senn, Pascal

    2013-01-01

    Objective To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. Methods Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280×720, 640×480, 320×240, 160×120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0–500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. Results Higher frame rate (>7 fps), higher camera resolution (>640×480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI-users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). Conclusion Webcameras have the potential to improve telecommunication of hearing-impaired individuals. PMID:23359119

  2. The relationship of phonological ability, speech perception and auditory perception in adults with dyslexia.

    Directory of Open Access Journals (Sweden)

    Jeremy eLaw

    2014-07-01

    This study investigated whether auditory, speech perception and phonological skills are tightly interrelated or contribute independently to reading. We assessed each of these three skills in 36 adults with a past diagnosis of dyslexia and 54 matched normal-reading adults. Phonological skills were tested by the typical threefold tasks, i.e. rapid automatic naming, verbal short-term memory and phonological awareness. Dynamic auditory processing skills were assessed by means of a frequency modulation (FM) and an amplitude rise time (RT) task; an intensity discrimination (ID) task was included as a non-dynamic control task. Speech perception was assessed by means of sentences and words in noise tasks. Group analysis revealed significant group differences in auditory tasks (i.e. RT and ID) and in phonological processing measures, yet no differences were found for speech perception. In addition, performance on RT discrimination correlated with reading, but this relation was mediated by phonological processing and not by speech in noise. Finally, inspection of the individual scores revealed that the dyslexic readers showed an increased proportion of deviant subjects on the slow-dynamic auditory and phonological tasks, yet each individual dyslexic reader does not display a clear pattern of deficiencies across the levels of processing skills. Although our results support phonological and slow-rate dynamic auditory deficits which relate to literacy, they suggest that at the individual level, problems in reading and writing cannot be explained by the cascading auditory theory. Instead, dyslexic adults seem to vary considerably in the extent to which each of the auditory and phonological factors is expressed and interacts with environmental and higher-order cognitive influences.

  3. Relative Contributions of the Dorsal vs. Ventral Speech Streams to Speech Perception are Context Dependent: a lesion study

    Directory of Open Access Journals (Sweden)

    Corianne Rogalsky

    2014-04-01

    The neural basis of speech perception has been debated for over a century. While it is generally agreed that the superior temporal lobes are critical for the perceptual analysis of speech, a major current topic is whether the motor system contributes to speech perception, with several conflicting findings attested. In a dorsal-ventral speech stream framework (Hickok & Poeppel, 2007), this debate is essentially about the roles of the dorsal versus ventral speech processing streams. A major roadblock in characterizing the neuroanatomy of speech perception is task-specific effects. For example, much of the evidence for dorsal stream involvement comes from syllable discrimination type tasks, which have been found to behaviorally doubly dissociate from auditory comprehension tasks (Baker et al., 1981). Discrimination task deficits could be a result of difficulty perceiving the sounds themselves, which is the typical assumption, or they could result from failures in temporary maintenance of the sensory traces, or in the comparison and/or decision process. Similar complications arise in perceiving sentences: the extent of inferior frontal (i.e. dorsal stream) activation during listening to sentences increases as a function of increased task demands (Love et al., 2006). Another complication is the stimulus: much evidence for dorsal stream involvement uses speech samples lacking semantic context (CVs, non-words). The present study addresses these issues in a large-scale lesion-symptom mapping study. 158 patients with focal cerebral lesions from the Multi-site Aphasia Research Consortium underwent a structural MRI or CT scan, as well as an extensive psycholinguistic battery. Voxel-based lesion-symptom mapping was used to compare the neuroanatomy involved in the following speech perception tasks with varying phonological, semantic, and task loads: (i) two discrimination tasks of syllables (non-words and words, respectively), (ii) two auditory comprehension tasks…
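
    Voxel-based lesion-symptom mapping can be caricatured as a per-voxel group comparison: at every voxel, patients whose lesions include that voxel are compared with the remaining patients on a behavioral score. A minimal sketch with invented scores (names and data are illustrative assumptions, not from the study; real VLSM also corrects for multiple comparisons and covariates such as lesion volume):

```python
import math

def t_statistic(group_a, group_b):
    """Welch's t-statistic comparing behavioral scores of patients
    with vs. without a lesion at a given voxel."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Toy data: syllable-discrimination scores, split by lesion status at one voxel
lesioned = [0.55, 0.60, 0.58, 0.62]   # patients with a lesion at this voxel
spared = [0.90, 0.85, 0.88, 0.92]     # patients without

# A strongly negative t suggests the voxel matters for the task;
# repeating this over all voxels yields the lesion-symptom map.
print(t_statistic(lesioned, spared))
```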

  4. Understanding the threats posed by non-native species: public vs. conservation managers.

    Directory of Open Access Journals (Sweden)

    Rodolphe E Gozlan

    Public perception is a key factor influencing current conservation policy. Therefore, it is important to determine the influence of the public, end-users and scientists on the prioritisation of conservation issues and the direct implications for policy makers. Here, we assessed public attitudes and the perceptions of conservation managers towards five non-native species in the UK, supplemented by those of one group of ecosystem users, freshwater anglers. We found that threat perception was not influenced by the volume of scientific research or by the actual threats posed by the specific non-native species. Media interest also reflected public perception and vice versa. Anglers were most concerned with perceived threats to their recreational activities, but their concerns did not correspond to the greatest demonstrated ecological threat. The perception of conservation managers was an amalgamation of public and angler opinions but was mismatched to the quantified ecological risks of the species. As this suggests that invasive species management in the UK is vulnerable to a knowledge gap, researchers must consider the intrinsic characteristics of their study species to determine whether raising public perception will be effective. The case study of the topmouth gudgeon Pseudorasbora parva reveals that media pressure and political debate have greater capacity to ignite policy changes and impact studies on non-native species than scientific evidence alone.

  5. Students Writing Emails to Faculty: An Examination of E-Politeness among Native and Non-Native Speakers of English

    Science.gov (United States)

    Biesenbach-Lucas, Sigrun

    2007-01-01

    This study combines interlanguage pragmatics and speech act research with computer-mediated communication and examines how native and non-native speakers of English formulate low- and high-imposition requests to faculty. While some research claims that email, due to absence of non-verbal cues, encourages informal language, other research has…

  6. Prosody and Semantics Are Separate but Not Separable Channels in the Perception of Emotional Speech: Test for Rating of Emotions in Speech

    Science.gov (United States)

    Ben-David, Boaz M.; Multani, Namita; Shakuf, Vered; Rudzicz, Frank; van Lieshout, Pascal H. H. M.

    2016-01-01

    Purpose: Our aim is to explore the complex interplay of prosody (tone of speech) and semantics (verbal content) in the perception of discrete emotions in speech. Method: We implement a novel tool, the Test for Rating of Emotions in Speech. Eighty native English speakers were presented with spoken sentences made of different combinations of 5…

  7. Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception.

    Science.gov (United States)

    Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki

    2016-10-13

    Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs' response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs' early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception.

  8. Compensation for Coarticulation: Disentangling Auditory and Gestural Theories of Perception of Coarticulatory Effects in Speech

    Science.gov (United States)

    Viswanathan, Navin; Magnuson, James S.; Fowler, Carol A.

    2010-01-01

    According to one approach to speech perception, listeners perceive speech by applying general pattern matching mechanisms to the acoustic signal (e.g., Diehl, Lotto, & Holt, 2004). An alternative is that listeners perceive the phonetic gestures that structured the acoustic signal (e.g., Fowler, 1986). The two accounts have offered different…

  9. Hearing Aid-Induced Plasticity in the Auditory System of Older Adults: Evidence from Speech Perception

    Science.gov (United States)

    Lavie, Limor; Banai, Karen; Karni, Avi; Attias, Joseph

    2015-01-01

    Purpose: We tested whether using hearing aids can improve unaided performance in speech perception tasks in older adults with hearing impairment. Method: Unaided performance was evaluated in dichotic listening and speech-in-noise tests in 47 older adults with hearing impairment; 36 participants in 3 study groups were tested before hearing aid…

  10. Hearing loss and speech perception in noise difficulties in Fanconi anemia.

    Science.gov (United States)

    Verheij, Emmy; Oomen, Karin P Q; Smetsers, Stephanie E; van Zanten, Gijsbert A; Speleman, Lucienne

    2017-10-01

    Fanconi anemia is a hereditary chromosomal instability disorder. Hearing loss and ear abnormalities are among the many manifestations reported in this disorder. In addition, Fanconi anemia patients often complain about hearing difficulties in situations with background noise (speech perception in noise difficulties). Our study aimed to describe the prevalence of hearing loss and speech perception in noise difficulties in Dutch Fanconi anemia patients. Retrospective chart review. A retrospective chart review was conducted at a Dutch tertiary care center. All patients with Fanconi anemia at clinical follow-up in our hospital were included. Medical files were reviewed to collect data on hearing loss and speech perception in noise difficulties. In total, 49 Fanconi anemia patients were included. Audiograms were available in 29 patients and showed hearing loss in 16 patients (55%). Conductive hearing loss was present in 24.1%, sensorineural in 20.7%, and mixed in 10.3%. A speech in noise test was performed in 17 patients; speech perception in noise was subnormal in nine patients (52.9%) and abnormal in two patients (11.7%). Hearing loss and speech perception in noise abnormalities are common in Fanconi anemia. Therefore, pure tone audiograms and speech in noise tests should be performed, preferably already at a young age, because hearing aids or assistive listening devices could be very valuable in developing language and communication skills. Level of evidence: 4. Laryngoscope, 127:2358-2361, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  11. Audiovisual Speech Perception in Infancy: The Influence of Vowel Identity and Infants' Productive Abilities on Sensitivity to (Mis)Matches between Auditory and Visual Speech Cues

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias

    2016-01-01

    Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds…

  12. Factors contributing to speech perception scores in long-term pediatric cochlear implant users.

    Science.gov (United States)

    Davidson, Lisa S; Geers, Ann E; Blamey, Peter J; Tobey, Emily A; Brenner, Christine A

    2011-02-01

    The objectives of this report are to (1) describe the speech perception abilities of long-term pediatric cochlear implant (CI) recipients by comparing scores obtained at elementary school (CI-E, 8 to 9 yrs) with scores obtained at high school (CI-HS, 15 to 18 yrs); (2) evaluate speech perception abilities in demanding listening conditions (i.e., noise and lower intensity levels) at adolescence; and (3) examine the relation of speech perception scores to speech and language development over this longitudinal timeframe. All 112 teenagers were part of a previous nationwide study of 8- and 9-yr-olds (N = 181) who received a CI between 2 and 5 yrs of age. The test battery included (1) the Lexical Neighborhood Test (LNT; hard and easy word lists); (2) the Bamford Kowal Bench sentence test; (3) the Children's Auditory-Visual Enhancement Test; (4) the Test of Auditory Comprehension of Language at CI-E; (5) the Peabody Picture Vocabulary Test at CI-HS; and (6) the McGarr sentences (consonants correct) at CI-E and CI-HS. CI-HS speech perception was measured in both optimal and demanding listening conditions (i.e., background noise and low-intensity level). Speech perception scores were compared based on age at test, lexical difficulty of stimuli, listening environment (optimal and demanding), input mode (visual and auditory-visual), and language age. All group mean scores significantly increased with age across the two test sessions. Scores of adolescents significantly decreased in demanding listening conditions. The effect of lexical difficulty on the LNT scores, as evidenced by the difference in performance between easy versus hard lists, increased with age and decreased for adolescents in challenging listening conditions. Calculated curves for percent correct speech perception scores (LNT and Bamford Kowal Bench) and consonants correct on the McGarr sentences plotted against age-equivalent language scores on the Test of Auditory Comprehension of Language and Peabody

  13. Timing in audiovisual speech perception: A mini review and new psychophysical data.

    Science.gov (United States)

    Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory

    2016-02-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
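The random-mask classification procedure described in this abstract can be illustrated with a small simulation. Everything below is hypothetical (random binary frame masks, a toy observer whose /apa/ reports depend on a few "critical" frames, invented trial and frame counts); it is a sketch of the reverse-correlation logic, not the authors' analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_frames = 2000, 30  # hypothetical trial count and video frames

# Random transparency masks: 1 = frame visible, 0 = frame obscured
masks = rng.integers(0, 2, size=(n_trials, n_frames)).astype(float)

# Toy observer: seeing the early visual-lead frames (here frames 5-9)
# pushes the percept from auditory /apa/ toward the fused /ata/
drive = masks[:, 5:10].mean(axis=1)
responded_apa = drive < 0.5  # /apa/ reported when those frames were mostly hidden

# Classification image: frame-wise visibility difference between response classes
ci = masks[responded_apa].mean(axis=0) - masks[~responded_apa].mean(axis=0)
peak_frame = int(np.argmax(np.abs(ci)))
print(peak_frame)  # one of the frames that drove the percept stands out
```

In the real experiment the masks varied smoothly in space and time and the responses came from human observers, but the estimator is the same: compare average mask visibility between response classes, frame by frame.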

  14. Timing in Audiovisual Speech Perception: A Mini Review and New Psychophysical Data

    Science.gov (United States)

    Venezia, Jonathan H.; Thurman, Steven M.; Matchin, William; George, Sahara E.; Hickok, Gregory

    2015-01-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually-relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (∼35% identification of /apa/ compared to ∼5% in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually-relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (∼130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content. PMID:26669309

  15. The Non-Native English Speaker Teachers in TESOL Movement

    Science.gov (United States)

    Kamhi-Stein, Lía D.

    2016-01-01

    It has been almost 20 years since what is known as the non-native English-speaking (NNES) professionals' movement--designed to increase the status of NNES professionals--started within the US-based TESOL International Association. However, still missing from the literature is an understanding of what a movement is, and why non-native English…

  16. Speech Perception Outcomes after Cochlear Implantation in Children with GJB2/DFNB1 associated Deafness

    Directory of Open Access Journals (Sweden)

    Marina Davcheva-Chakar

    2014-03-01

    Background: Cochlear implants (CI) for the rehabilitation of patients with profound or total bilateral sensorineural hypoacusis represent the initial use of electrical fields to provide audibility in cases where the use of sound amplifiers does not provide satisfactory results. Aims: To compare speech perception performance after cochlear implantation in children with connexin 26-associated deafness with that of a control group of children with deafness of unknown etiology. Study Design: Retrospective comparative study. Methods: During the period from 2006 to , cochlear implantation was performed on 26 children. Eighteen of these children had undergone genetic tests for mutation of the Gap Junction Protein Beta 2 (GJB2) gene. Bi-allelic GJB2 mutations were confirmed in 7 out of 18 examined children. In order to confirm whether genetic factors influence speech perception after cochlear implantation, we compared the post-implantation speech performance of seven children with mutations of the GJB2 (connexin 26) gene with seven other children who had the wild-type version of this particular gene. The latter were carefully matched according to age at cochlear implantation. Speech perception performance was measured before cochlear implantation, and one and two years after implantation. All the patients were assigned to the appropriate speech perception category (SPC). Non-parametric tests, Friedman ANOVA and Mann-Whitney’s U test, were used for statistical analysis. Results: Both groups showed similar improvements in speech perception scores after cochlear implantation. Statistical analysis did not confirm significant differences between the groups 12 and 24 months after cochlear implantation. Conclusion: The results obtained in this study showed an absence of apparent distinctions in the scores of speech perception between the two examined groups and therefore might have significant implications in selecting prognostic indicators

  17. An algorithm of improving speech emotional perception for hearing aid

    Science.gov (United States)

    Xi, Ji; Liang, Ruiyu; Fei, Xianju

    2017-07-01

    In this paper, a speech emotion recognition (SER) algorithm was proposed to improve the emotional perception of hearing-impaired people. The algorithm uses multiple kernel technology to overcome a drawback of the SVM: slow training speed. Firstly, in order to improve the adaptive performance of the Gaussian Radial Basis Function (RBF) kernel, the parameter determining the nonlinear mapping was optimized on the basis of kernel-target alignment. Then, the obtained kernel function was used as the basis kernel of Multiple Kernel Learning (MKL) with a slack variable that could solve the over-fitting problem. However, the slack variable also introduces error into the result. Therefore, a soft-margin MKL was proposed to balance the margin against the error. Moreover, an iterative algorithm was used to solve for the combination coefficients and hyper-plane equations. Experimental results show that the proposed algorithm can achieve an accuracy of 90% for five kinds of emotions: happiness, sadness, anger, fear and neutral. Compared with KPCA+CCA and PIM-FSVM, the proposed algorithm has the highest accuracy.
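The multiple-kernel construction, a weighted sum of RBF base kernels fed to an SVM, can be sketched with scikit-learn's precomputed-kernel interface. This is a minimal illustration under loud assumptions: the features are synthetic stand-ins for acoustic emotion features, and the kernel weights are fixed by hand, whereas the paper's soft-margin MKL learns them (e.g. via kernel-target alignment):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic 5-class data standing in for acoustic emotion features
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fixed illustrative weights over several RBF base kernels; a real MKL
# solver would optimize these jointly with the SVM
gammas, weights = [0.01, 0.1, 1.0], [0.5, 0.3, 0.2]

def combined_kernel(A, B):
    """Weighted sum of RBF kernels evaluated between row sets A and B."""
    return sum(w * rbf_kernel(A, B, gamma=g) for g, w in zip(gammas, weights))

clf = SVC(kernel='precomputed', C=1.0).fit(combined_kernel(X_tr, X_tr), y_tr)
acc = clf.score(combined_kernel(X_te, X_tr), y_te)
print(round(acc, 2))
```

Because any convex combination of valid kernels is itself a valid kernel, the SVM machinery is unchanged; only the Gram matrix construction differs.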

  18. The Neural Basis of Speech Perception through Lipreading and Manual Cues: Evidence from Deaf Native Users of Cued Speech

    Science.gov (United States)

    Aparicio, Mario; Peigneux, Philippe; Charlier, Brigitte; Balériaux, Danielle; Kavec, Martin; Leybaert, Jacqueline

    2017-01-01

    We present here the first neuroimaging data for perception of Cued Speech (CS) by deaf adults who are native users of CS. CS is a visual mode of communicating a spoken language through a set of manual cues which accompany lipreading and disambiguate it. With CS, sublexical units of the oral language are conveyed clearly and completely through the visual modality without requiring hearing. The comparison of neural processing of CS in deaf individuals with processing of audiovisual (AV) speech in normally hearing individuals represents a unique opportunity to explore the similarities and differences in neural processing of an oral language delivered in a visuo-manual vs. an AV modality. The study included deaf adult participants who were early CS users and native hearing users of French who process speech audiovisually. Words were presented in an event-related fMRI design. Three conditions were presented to each group of participants. The deaf participants saw CS words (manual + lipread), words presented as manual cues alone, and words presented to be lipread without manual cues. The hearing group saw AV spoken words, audio-alone and lipread-alone. Three findings are highlighted. First, the middle and superior temporal gyrus (excluding Heschl’s gyrus) and left inferior frontal gyrus pars triangularis constituted a common, amodal neural basis for AV and CS perception. Second, integration was inferred in posterior parts of superior temporal sulcus for audio and lipread information in AV speech, but in the occipito-temporal junction, including MT/V5, for the manual cues and lipreading in CS. Third, the perception of manual cues showed a much greater overlap with the regions activated by CS (manual + lipreading) than lipreading alone did. This supports the notion that manual cues play a larger role than lipreading for CS processing. 
The present study contributes to a better understanding of the role of manual cues as support of visual speech perception in the framework

  19. Working memory training to improve speech perception in noise across languages.

    Science.gov (United States)

    Ingvalson, Erin M; Dhar, Sumitrajit; Wong, Patrick C M; Liu, Hanjun

    2015-06-01

    Working memory capacity has been linked to performance on many higher cognitive tasks, including the ability to perceive speech in noise. Current efforts to train working memory have demonstrated that working memory performance can be improved, suggesting that working memory training may lead to improved speech perception in noise. A further advantage of working memory training to improve speech perception in noise is that working memory training materials are often simple, such as letters or digits, making them easily translatable across languages. The current effort tested the hypothesis that working memory training would be associated with improved speech perception in noise and that materials would easily translate across languages. Native Mandarin Chinese and native English speakers completed ten days of reversed digit span training. Reading span and speech perception in noise both significantly improved following training, whereas untrained controls showed no gains. These data suggest that working memory training may be used to improve listeners' speech perception in noise and that the materials may be quickly adapted to a wide variety of listeners.
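A reversed digit span task of the kind used for training is simple to sketch. The function below is hypothetical (the protocol's actual parameters are not given in the abstract): span length grows after a correct reversed recall and shrinks after an error, the usual adaptive rule:

```python
import random

random.seed(0)

def run_reverse_digit_span(answer_fn, n_trials=20, start_len=3):
    """Adaptive reversed digit span: length grows by 1 after a correct trial
    and shrinks by 1 after an error (floor at 2). Returns the final length."""
    length = start_len
    for _ in range(n_trials):
        digits = [random.randint(0, 9) for _ in range(length)]
        correct = answer_fn(digits) == digits[::-1]  # recall in reverse order
        length = length + 1 if correct else max(2, length - 1)
    return length

# A perfect simulated participant climbs one digit per trial
print(run_reverse_digit_span(lambda d: d[::-1]))  # prints 23
```

Because the stimuli are just digits, the same task transfers directly across languages, which is the translatability point the abstract makes.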

  20. Brain networks engaged in audiovisual integration during speech perception revealed by persistent homology-based network filtration.

    Science.gov (United States)

    Kim, Heejung; Hahm, Jarang; Lee, Hyekyoung; Kang, Eunjoo; Kang, Hyejin; Lee, Dong Soo

    2015-05-01

    The human brain naturally integrates audiovisual information to improve speech perception. However, in noisy environments, understanding speech is difficult and may require much effort. Although the brain network is supposed to be engaged in speech perception, it is unclear how speech-related brain regions are connected during natural bimodal audiovisual or unimodal speech perception with counterpart irrelevant noise. To investigate the topological changes of speech-related brain networks at all possible thresholds, we used a persistent homological framework through hierarchical clustering, such as single linkage distance, to analyze the connected components of the functional network during speech perception using functional magnetic resonance imaging. For speech perception, bimodal (audio-visual speech cue) or unimodal speech cues with counterpart irrelevant noise (auditory white noise or visual gum-chewing) were delivered to 15 subjects. In terms of positive relationships, similar connected components were observed in the bimodal and unimodal speech conditions during filtration. However, during speech perception of congruent audiovisual stimuli, tighter couplings of the left anterior temporal gyrus-anterior insula component and of right premotor-visual components were observed than in the auditory or visual speech cue conditions, respectively. Interestingly, under white noise, visual speech is perceived through tight negative coupling among the left inferior frontal region, right anterior cingulate, left anterior insula, and bilateral visual regions, including the right middle temporal gyrus and right fusiform components. In conclusion, the speech brain network is tightly positively or negatively connected, and can reflect efficient or effortful processes during natural audiovisual integration or lip-reading, respectively, in speech perception.
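The core of the network filtration, tracking connected components of the correlation network as the threshold sweeps from tight to loose, can be sketched with SciPy's single-linkage clustering. The correlation matrix below is synthetic (two artificial "modules"), not fMRI data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical correlation matrix over 10 regions: two strongly
# correlated blocks of 5, with weak cross-talk between them
corr = np.full((10, 10), 0.1)
corr[:5, :5] = corr[5:, 5:] = 0.8
np.fill_diagonal(corr, 1.0)

# Convert correlation to distance and build the single-linkage dendrogram;
# cutting it at every height is exactly the filtration of the network
dist = 1.0 - corr
Z = linkage(squareform(dist, checks=False), method='single')

for thr in (0.1, 0.5, 1.0):
    n_comp = len(set(fcluster(Z, t=thr, criterion='distance')))
    print(thr, n_comp)  # 10, 2 and 1 components, respectively
```

Persistent homology then records at which thresholds components are born and merge; the merge heights (here 0.2 within blocks, 0.9 between) summarize how tightly subnetworks are coupled.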

  1. Is There a Relationship between Speech Identification in Noise and Categorical Perception in Children with Dyslexia?

    Science.gov (United States)

    Calcus, Axelle; Lorenzi, Christian; Collet, Gregory; Colin, Cécile; Kolinsky, Régine

    2016-01-01

    Purpose: Children with dyslexia have been suggested to experience deficits in both categorical perception (CP) and speech identification in noise (SIN) perception. However, results regarding both abilities are inconsistent, and the relationship between them is still unclear. Therefore, this study aimed to investigate the relationship between CP…

  2. Speech-driven environmental control systems--a qualitative analysis of users' perceptions.

    Science.gov (United States)

    Judge, Simon; Robertson, Zoë; Hawley, Mark; Enderby, Pam

    2009-05-01

    To explore users' experiences and perceptions of speech-driven environmental control systems (SPECS) as part of a larger project aiming to develop a new SPECS. The motivation for this part of the project was to add to the evidence base for the use of SPECS and to determine the key design specifications for a new speech-driven system from a user's perspective. Semi-structured interviews were conducted with 12 users of SPECS from around the United Kingdom. These interviews were transcribed and analysed using a qualitative method based on framework analysis. Reliability is the main influence on the use of SPECS. All the participants gave examples of occasions when their speech-driven system was unreliable; in some instances, this unreliability was reported as not being a problem (e.g., for changing television channels); however, it was perceived as a problem for more safety critical functions (e.g., opening a door). Reliability was cited by participants as the reason for using a switch-operated system as back up. Benefits of speech-driven systems focused on speech operation enabling access when other methods were not possible; quicker operation and better aesthetic considerations. Overall, there was a perception of increased independence from the use of speech-driven environmental control. In general, speech was considered a useful method of operating environmental controls by the participants interviewed; however, their perceptions regarding reliability often influenced their decision to have backup or alternative systems for certain functions.

  3. Development and preliminary evaluation of a pediatric Spanish-English speech perception task.

    Science.gov (United States)

    Calandruccio, Lauren; Gomez, Bianca; Buss, Emily; Leibold, Lori J

    2014-06-01

    The purpose of this study was to develop a task to evaluate children's English and Spanish speech perception abilities in either noise or competing speech maskers. Eight bilingual Spanish-English and 8 age-matched monolingual English children (ages 4.9-16.4 years) were tested. A forced-choice, picture-pointing paradigm was selected for adaptively estimating masked speech reception thresholds. Speech stimuli were spoken by simultaneous bilingual Spanish-English talkers. The target stimuli were 30 disyllabic English and Spanish words, familiar to 5-year-olds and easily illustrated. Competing stimuli included either 2-talker English or 2-talker Spanish speech (corresponding to target language) and spectrally matched noise. For both groups of children, regardless of test language, performance was significantly worse for the 2-talker than for the noise masker condition. No difference in performance was found between bilingual and monolingual children. Bilingual children performed significantly better in English than in Spanish in competing speech. For all listening conditions, performance improved with increasing age. Results indicated that the stimuli and task were appropriate for speech recognition testing in both languages, providing a more conventional measure of speech-in-noise perception as well as a measure of complex listening. Further research is needed to determine performance for Spanish-dominant listeners and to evaluate the feasibility of implementation into routine clinical use.
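The adaptive estimation of a masked speech reception threshold can be sketched with a simple staircase. The listener model and all parameters below are invented for illustration; the study's actual adaptive rule and step sizes are not specified here:

```python
import numpy as np

rng = np.random.default_rng(0)

def listener(snr_db, srt_db=-6.0, slope=1.0):
    """Toy psychometric function: P(correct) rises with SNR around the SRT."""
    return rng.random() < 1.0 / (1.0 + np.exp(-slope * (snr_db - srt_db)))

# Simple 1-down/1-up adaptive track, which converges on ~50% correct
snr, step, reversals = 10.0, 2.0, []
direction = -1
for _ in range(60):
    new_direction = -1 if listener(snr) else +1
    if new_direction != direction:  # the track changed direction
        reversals.append(snr)
    direction = new_direction
    snr += direction * step

srt_estimate = float(np.mean(reversals[-6:]))  # average of late reversals
print(round(srt_estimate, 1))  # hovers near the toy listener's SRT of -6 dB
```

A forced-choice picture-pointing version only changes the response collection; the threshold logic (adjust SNR on each trial, average late reversals) is the same.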

  4. The functional anatomy of speech perception: Dorsal and ventral processing pathways

    Science.gov (United States)

    Hickok, Gregory

    2003-04-01

    Drawing on recent developments in the cortical organization of vision, and on data from a variety of sources, Hickok and Poeppel (2000) have proposed a new model of the functional anatomy of speech perception. The model posits that early cortical stages of speech perception involve auditory fields in the superior temporal gyrus bilaterally (although asymmetrically). This cortical processing system then diverges into two broad processing streams: a ventral stream, involved in mapping sound onto meaning, and a dorsal stream, involved in mapping sound onto articulatory-based representations. The ventral stream projects ventrolaterally toward inferior posterior temporal cortex, which serves as an interface between sound and meaning. The dorsal stream projects dorsoposteriorly toward the parietal lobe and ultimately to frontal regions. This network provides a mechanism for the development and maintenance of "parity" between auditory and motor representations of speech. Although the dorsal stream represents a tight connection between speech perception and speech production, it is not a critical component of the speech perception process under ecologically natural listening conditions. Some degree of bi-directionality in both the dorsal and ventral pathways is also proposed. A variety of recent empirical tests of this model have provided further support for the proposal.

  5. Mandarin speech perception in combined electric and acoustic stimulation.

    Directory of Open Access Journals (Sweden)

    Yongxin Li

    For deaf individuals with residual low-frequency acoustic hearing, combined use of a cochlear implant (CI) and hearing aid (HA) typically provides better speech understanding than with either device alone. Because of coarse spectral resolution, CIs do not provide fundamental frequency (F0) information that contributes to understanding of tonal languages such as Mandarin Chinese. The HA can provide good representation of F0 and, depending on the range of aided acoustic hearing, first and second formant (F1 and F2) information. In this study, Mandarin tone, vowel, and consonant recognition in quiet and noise was measured in 12 adult Mandarin-speaking bimodal listeners with the CI-only and with the CI+HA. Tone recognition was significantly better with the CI+HA in noise, but not in quiet. Vowel recognition was significantly better with the CI+HA in quiet, but not in noise. There was no significant difference in consonant recognition between the CI-only and the CI+HA in quiet or in noise. There was a wide range in bimodal benefit, with improvements often greater than 20 percentage points in some tests and conditions. The bimodal benefit was compared to CI subjects' HA-aided pure-tone average (PTA) thresholds between 250 and 2000 Hz; subjects were divided into two groups relative to a 50 dB HL criterion. The bimodal benefit differed significantly between groups only for consonant recognition. The bimodal benefit for tone recognition in quiet was significantly correlated with CI experience, suggesting that bimodal CI users learn to better combine low-frequency spectro-temporal information from acoustic hearing with temporal envelope information from electric hearing. Given the small number of subjects in this study (n = 12), further research with Chinese bimodal listeners may provide more information regarding the contribution of acoustic and electric hearing to tonal language perception.

  6. Audio-Visual Speech in Noise Perception in Dyslexia

    Science.gov (United States)

    van Laarhoven, Thijs; Keetels, Mirjam; Schakel, Lemmy; Vroomen, Jean

    2018-01-01

    Individuals with developmental dyslexia (DD) may experience, besides reading problems, other speech-related processing deficits. Here, we examined the influence of visual articulatory information (lip-read speech) at various levels of background noise on auditory word recognition in children and adults with DD. We found that children with a…

  7. The effects of bilingualism on children's perception of speech sounds

    NARCIS (Netherlands)

    Brasileiro, I.

    2009-01-01

    The general topic addressed by this dissertation is that of bilingualism, and more specifically, the topic of bilingual acquisition of speech sounds. The central question in this study is the following: does bilingualism affect children’s perceptual development of speech sounds? The term bilingual

  8. Influence of Telecommunication Modality, Internet Transmission Quality, and Accessories on Speech Perception in Cochlear Implant Users

    Science.gov (United States)

    Koller, Roger; Guignard, Jérémie; Caversaccio, Marco; Kompis, Martin; Senn, Pascal

    2017-01-01

    Background: Telecommunication is limited or even impossible for more than one-third of all cochlear implant (CI) users. Objective: We sought therefore to study the impact of voice quality on speech perception with voice over Internet protocol (VoIP) under real and adverse network conditions. Methods: Telephone speech perception was assessed in 19 CI users (15-69 years, average 42 years), using the German HSM (Hochmair-Schulz-Moser) sentence test comparing Skype and conventional telephone (public switched telephone network, PSTN) transmission using a personal computer (PC) and a digital enhanced cordless telecommunications (DECT) telephone dual device. Five different Internet transmission quality modes and four accessories (PC speakers, headphones, 3.5 mm jack audio cable, and induction loop) were compared. As a secondary outcome, the subjective perceived voice quality was assessed using the mean opinion score (MOS). Results: Speech telephone perception was significantly better with Skype than with conventional telephony (median 91.6%), although degraded Internet transmission modes (packet loss above 15%) were not superior to conventional telephony. In addition, there were no significant differences between the tested accessories (P>.05) using a PC. Coupling a Skype DECT phone device with an audio cable to the CI, however, resulted in higher speech perception (median 65%) and subjective MOS scores (3.2) than using PSTN (median 7.5%, P<.001). Conclusions: Skype calls significantly improve speech perception for CI users compared with conventional telephony under real network conditions. Listening accessories do not further improve the listening experience. Current Skype DECT telephone devices do not fully offer technical advantages in voice quality. PMID:28438727

  9. Speech perception under adverse conditions: Insights from behavioral, computational and neuroscience research

    Directory of Open Access Journals (Sweden)

    Sara eGuediche

    2014-01-01

    Adult speech perception reflects the long-term regularities of the native language, but it is also flexible such that it accommodates and adapts to adverse listening conditions and short-term deviations from native-language norms. The purpose of this review article is to examine how the broader neuroscience literature can inform and advance research efforts in understanding the neural basis of flexibility and adaptive plasticity in speech perception. In particular, we consider several domains of neuroscience research that offer insight into how perception can be adaptively tuned to short-term deviations while maintaining the long-term learned regularities for mapping sensory input. We review several literatures to highlight the potential role of learning algorithms that rely on prediction error signals and discuss specific neural structures that are likely to contribute to such learning. Already, a few studies have alluded to a potential role of these mechanisms in adaptive plasticity in speech perception. A better understanding of the applications and limitations of these algorithms for the challenges of flexible speech perception under adverse conditions promises to inform theoretical models of speech.
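The prediction-error-driven learning the review points to can be illustrated with a two-timescale delta rule: a fast term adapts to a short-term deviation (e.g. an accented talker) while a slow term preserves long-term native-language norms. This is a conceptual sketch, not a model from the review:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fast (adaptive) and slow (native-language) components of a perceptual estimate
boundary, baseline = 0.0, 0.0
fast_lr, slow_lr = 0.2, 0.01

# Deviant input: a talker whose productions sit at 1.0 on some acoustic axis,
# while the listener's long-term norm starts at 0.0
shifted_tokens = rng.normal(loc=1.0, scale=0.2, size=200)
for x in shifted_tokens:
    error = x - (baseline + boundary)  # prediction error signal
    boundary += fast_lr * error        # rapid short-term adaptation
    baseline += slow_lr * error        # slow drift of the long-term norm

print(round(baseline + boundary, 2))   # combined estimate tracks the talker
```

The separation of timescales is the point: removing the deviant input lets the fast term decay back while the long-term norm remains largely intact, which is one way to reconcile adaptive plasticity with stable native-language regularities.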

  10. Sensorimotor Representation of Speech Perception. Cross-Decoding of Place of Articulation Features during Selective Attention to Syllables in 7T fMRI

    NARCIS (Netherlands)

    Archila-Meléndez, Mario E.; Valente, Giancarlo; Correia, Joao M.; Rouhl, Rob P. W.; van Kranen-Mastenbroek, Vivianne H.; Jansma, Bernadette M.

    2018-01-01

    Sensorimotor integration, the translation between acoustic signals and motoric programs, may constitute a crucial mechanism for speech. During speech perception, the acoustic-motoric translations include the recruitment of cortical areas for the representation of speech articulatory features, such

  11. Visual Temporal Acuity Is Related to Auditory Speech Perception Abilities in Cochlear Implant Users.

    Science.gov (United States)

    Jahn, Kelly N; Stevenson, Ryan A; Wallace, Mark T

    Despite significant improvements in speech perception abilities following cochlear implantation, many prelingually deafened cochlear implant (CI) recipients continue to rely heavily on visual information to develop speech and language. Increased reliance on visual cues for understanding spoken language could lead to the development of unique audiovisual integration and visual-only processing abilities in these individuals. Brain imaging studies have demonstrated that good CI performers, as indexed by auditory-only speech perception abilities, have different patterns of visual cortex activation in response to visual and auditory stimuli as compared with poor CI performers. However, no studies have examined whether speech perception performance is related to any type of visual processing abilities following cochlear implantation. The purpose of the present study was to provide a preliminary examination of the relationship between clinical auditory-only speech perception tests and visual temporal acuity in prelingually deafened adult CI users. It was hypothesized that prelingually deafened CI users who exhibit better (i.e., more acute) visual temporal processing abilities would demonstrate better auditory-only speech perception performance than those with poorer visual temporal acuity. Ten prelingually deafened adult CI users were recruited for this study. Participants completed a visual temporal order judgment task to quantify visual temporal acuity. To assess auditory-only speech perception abilities, participants completed the consonant-nucleus-consonant word recognition test and the AzBio sentence recognition test. Results were analyzed using two-tailed partial Pearson correlations, Spearman's rho correlations, and independent samples t tests. Visual temporal acuity was significantly correlated with auditory-only word and sentence recognition abilities. In addition, proficient CI users, as assessed via auditory-only speech perception performance, demonstrated
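
The correlational analysis described above can be sketched in miniature. The data below are hypothetical (the study's raw scores are not reproduced here); the sketch assumes that better visual temporal acuity corresponds to a lower temporal order judgment threshold in milliseconds, so a negative Pearson r against word recognition scores reflects the reported relationship.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: visual TOJ thresholds (ms; lower = more acute)
# and CNC word recognition scores (% correct) for five CI users.
toj_ms = [20, 30, 40, 50, 60]
cnc_pct = [90, 80, 70, 60, 50]

# Perfectly linear here by construction, so r = -1.0.
r = pearson_r(toj_ms, cnc_pct)
```

A real analysis would of course use a statistics package (e.g., partial correlations controlling for covariates, as the abstract states), but the sign convention shown here is the relevant interpretive point.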

  12. Multisensory speech perception in autism spectrum disorder: From phoneme to whole-word perception.

    Science.gov (United States)

    Stevenson, Ryan A; Baum, Sarah H; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Wallace, Mark T

    2017-07-01

    Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically-developing (TD) children to integrate auditory and visual speech stimuli in various signal-to-noise ratios (SNR). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration with whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in auditory, visual, and audiovisual modalities. However, and in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate to typical multisensory benefit of whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition. Autism Res 2017, 10: 1280-1290. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
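
The multisensory benefit contrast in this abstract can be expressed as a gain score. One common formulation (one of several used in the multisensory literature) is audiovisual accuracy minus the better of the two unisensory accuracies; the numbers below are hypothetical, chosen only to illustrate the inverse-effectiveness pattern the study reports (larger gain at low SNR).

```python
def multisensory_gain(av, a, v):
    """Audiovisual benefit beyond the best single modality (percentage points)."""
    return av - max(a, v)

# Hypothetical word recognition accuracies (% correct) at two SNRs.
low_snr = {"a": 30.0, "v": 20.0, "av": 55.0}
high_snr = {"a": 85.0, "v": 40.0, "av": 92.0}

gain_low = multisensory_gain(low_snr["av"], low_snr["a"], low_snr["v"])     # 25.0
gain_high = multisensory_gain(high_snr["av"], high_snr["a"], high_snr["v"])  # 7.0
```

Inverse effectiveness predicts `gain_low > gain_high`, which is the pattern both groups showed for phoneme recognition.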

  13. Non-native plant invasions of United States National parks

    Science.gov (United States)

    Allen, J.A.; Brown, C.S.; Stohlgren, T.J.

    2009-01-01

    The United States National Park Service was created to protect and make accessible to the public the nation's most precious natural resources and cultural features for present and future generations. However, this heritage is threatened by the invasion of non-native plants, animals, and pathogens. To evaluate the scope of invasions, the USNPS has inventoried non-native plant species in the 216 parks that have significant natural resources, documenting the identity of non-native species. We investigated relationships among non-native plant species richness, the number of threatened and endangered plant species, native species richness, latitude, elevation, park area and park corridors and vectors. Parks with many threatened and endangered plants and high native plant species richness also had high non-native plant species richness. Non-native plant species richness was correlated with number of visitors and kilometers of backcountry trails and rivers. In addition, this work reveals patterns that can be further explored empirically to understand the underlying mechanisms. © Springer Science+Business Media B.V. 2008.

  14. Non-native educators in English language teaching

    CERN Document Server

    Braine, George

    2013-01-01

    The place of native and non-native speakers in the role of English teachers has probably been an issue ever since English was taught internationally. Although the ESL and EFL literature is awash with, in fact dependent upon, the scrutiny of non-native learners, interest in non-native academics and teachers is fairly new. Until recently, the voices of non-native speakers articulating their own concerns have been even rarer. This book is a response to this notable vacuum in the ELT literature, providing a forum for language educators from diverse geographical origins and language backgrounds. In addition to presenting autobiographical narratives, these authors address sociopolitical issues and discuss implications for teacher education, all relating to the theme of non-native educators in ELT. All of the authors are non-native speakers of English. Some are long-established professionals, whereas others are more recent initiates to the field. All but one received part of their higher education in North America, and all excep...

  15. Music Training Can Improve Music and Speech Perception in Pediatric Mandarin-Speaking Cochlear Implant Users.

    Science.gov (United States)

    Cheng, Xiaoting; Liu, Yangwenyi; Shu, Yilai; Tao, Duo-Duo; Wang, Bing; Yuan, Yasheng; Galvin, John J; Fu, Qian-Jie; Chen, Bing

    2018-01-01

    Due to limited spectral resolution, cochlear implants (CIs) do not convey pitch information very well. Pitch cues are important for perception of music and tonal language; it is possible that music training may improve performance in both listening tasks. In this study, we investigated music training outcomes in terms of perception of music, lexical tones, and sentences in 22 young (4.8 to 9.3 years old), prelingually deaf Mandarin-speaking CI users. Music perception was measured using a melodic contour identification (MCI) task. Speech perception was measured for lexical tones and sentences presented in quiet. Subjects received 8 weeks of MCI training using pitch ranges not used for testing. Music and speech perception were measured at 2, 4, and 8 weeks after training was begun; follow-up measures were made 4 weeks after training was stopped. Mean baseline performance was 33.2%, 76.9%, and 45.8% correct for MCI, lexical tone recognition, and sentence recognition, respectively. After 8 weeks of MCI training, mean performance significantly improved by 22.9, 14.4, and 14.5 percentage points for MCI, lexical tone recognition, and sentence recognition, respectively; improvements were statistically significant for both music and speech performance. The results suggest that music training can significantly improve pediatric Mandarin-speaking CI users' music and speech perception.

  16. Speech perception in autism spectrum disorder: An activation likelihood estimation meta-analysis.

    Science.gov (United States)

    Tryfon, Ana; Foster, Nicholas E V; Sharda, Megha; Hyde, Krista L

    2018-02-15

    Autism spectrum disorder (ASD) is often characterized by atypical language profiles and auditory and speech processing. These can contribute to aberrant language and social communication skills in ASD. The study of the neural basis of speech perception in ASD can serve as a potential neurobiological marker of ASD early on, but mixed results across studies renders it difficult to find a reliable neural characterization of speech processing in ASD. To this aim, the present study examined the functional neural basis of speech perception in ASD versus typical development (TD) using an activation likelihood estimation (ALE) meta-analysis of 18 qualifying studies. The present study included separate analyses for TD and ASD, which allowed us to examine patterns of within-group brain activation as well as both common and distinct patterns of brain activation across the ASD and TD groups. Overall, ASD and TD showed mostly common brain activation of speech processing in bilateral superior temporal gyrus (STG) and left inferior frontal gyrus (IFG). However, the results revealed trends for some distinct activation in the TD group showing additional activation in higher-order brain areas including left superior frontal gyrus (SFG), left medial frontal gyrus (MFG), and right IFG. These results provide a more reliable neural characterization of speech processing in ASD relative to previous single neuroimaging studies and motivate future work to investigate how these brain signatures relate to behavioral measures of speech processing in ASD. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Benefits of Music Training for Perception of Emotional Speech Prosody in Deaf Children With Cochlear Implants.

    Science.gov (United States)

    Good, Arla; Gordon, Karen A; Papsin, Blake C; Nespoli, Gabe; Hopyan, Talar; Peretz, Isabelle; Russo, Frank A

    Children who use cochlear implants (CIs) have characteristic pitch processing deficits leading to impairments in music perception and in understanding emotional intention in spoken language. Music training for normal-hearing children has previously been shown to benefit perception of emotional prosody. The purpose of the present study was to assess whether deaf children who use CIs obtain similar benefits from music training. We hypothesized that music training would lead to gains in auditory processing and that these gains would transfer to emotional speech prosody perception. Study participants were 18 child CI users (ages 6 to 15). Participants received either 6 months of music training (i.e., individualized piano lessons) or 6 months of visual art training (i.e., individualized painting lessons). Measures of music perception and emotional speech prosody perception were obtained pre-, mid-, and post-training. The Montreal Battery for Evaluation of Musical Abilities was used to measure five different aspects of music perception (scale, contour, interval, rhythm, and incidental memory). The emotional speech prosody task required participants to identify the emotional intention of a semantically neutral sentence under audio-only and audiovisual conditions. Music training led to improved performance on tasks requiring the discrimination of melodic contour and rhythm, as well as incidental memory for melodies. These improvements were predominantly found from mid- to post-training. Critically, music training also improved emotional speech prosody perception. Music training was most advantageous in audio-only conditions. Art training did not lead to the same improvements. Music training can lead to improvements in perception of music and emotional speech prosody, and thus may be an effective supplementary technique for supporting auditory rehabilitation following cochlear implantation.

  18. Benefits of Music Training for Perception of Emotional Speech Prosody in Deaf Children With Cochlear Implants

    Science.gov (United States)

    Gordon, Karen A.; Papsin, Blake C.; Nespoli, Gabe; Hopyan, Talar; Peretz, Isabelle; Russo, Frank A.

    2017-01-01

    Objectives: Children who use cochlear implants (CIs) have characteristic pitch processing deficits leading to impairments in music perception and in understanding emotional intention in spoken language. Music training for normal-hearing children has previously been shown to benefit perception of emotional prosody. The purpose of the present study was to assess whether deaf children who use CIs obtain similar benefits from music training. We hypothesized that music training would lead to gains in auditory processing and that these gains would transfer to emotional speech prosody perception. Design: Study participants were 18 child CI users (ages 6 to 15). Participants received either 6 months of music training (i.e., individualized piano lessons) or 6 months of visual art training (i.e., individualized painting lessons). Measures of music perception and emotional speech prosody perception were obtained pre-, mid-, and post-training. The Montreal Battery for Evaluation of Musical Abilities was used to measure five different aspects of music perception (scale, contour, interval, rhythm, and incidental memory). The emotional speech prosody task required participants to identify the emotional intention of a semantically neutral sentence under audio-only and audiovisual conditions. Results: Music training led to improved performance on tasks requiring the discrimination of melodic contour and rhythm, as well as incidental memory for melodies. These improvements were predominantly found from mid- to post-training. Critically, music training also improved emotional speech prosody perception. Music training was most advantageous in audio-only conditions. Art training did not lead to the same improvements. Conclusions: Music training can lead to improvements in perception of music and emotional speech prosody, and thus may be an effective supplementary technique for supporting auditory rehabilitation following cochlear implantation. PMID:28085739

  19. Result on speech perception after conversion from Spectra® to Freedom®.

    Science.gov (United States)

    Magalhães, Ana Tereza de Matos; Goffi-Gomez, Maria Valéria Schmidt; Hoshino, Ana Cristina; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; Brito, Rubens

    2012-04-01

    New technology in the Freedom® speech processor for cochlear implants was developed to improve how incoming acoustic sound is processed; this applies not only to new users, but also to previous generations of cochlear implants. The aim was to identify the contribution of this technology to speech perception tests in silence and in noise, and to audiometric thresholds, in users of the Nucleus 22®. A cross-sectional cohort study was undertaken. Seventeen patients were selected. The last map based on the Spectra® was revised and optimized before starting the tests. Troubleshooting was used to identify malfunction. To identify the contribution of the Freedom® technology for the Nucleus 22®, auditory thresholds and speech perception tests were performed in free field in sound-proof booths. Recorded monosyllables and sentences in silence and in noise (SNR = 0 dB) were presented at 60 dB SPL. The nonparametric Wilcoxon test for paired data was used to compare groups. The Freedom® processor used with the Nucleus 22® showed a statistically significant difference in all speech perception tests and audiometric thresholds. The Freedom® technology improved speech perception performance and audiometric thresholds in patients with the Nucleus 22®.
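
The nonparametric Wilcoxon test for paired data mentioned above can be illustrated with a minimal signed-rank computation. The scores below are hypothetical, and the implementation is a bare-bones sketch (no tie correction, no p-value; a real analysis would use a statistics package such as SciPy's `scipy.stats.wilcoxon`).

```python
def wilcoxon_statistic(before, after):
    """Wilcoxon signed-rank statistic W = min(W+, W-) for paired scores.

    Minimal sketch: assumes no tied absolute differences, discards zero
    differences, and returns only the test statistic, not a p-value.
    """
    diffs = [b - a for a, b in zip(before, after) if b != a]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    w_pos = sum(rank + 1 for rank, i in enumerate(order) if diffs[i] > 0)
    w_neg = sum(rank + 1 for rank, i in enumerate(order) if diffs[i] < 0)
    return min(w_pos, w_neg)

# Hypothetical sentence scores (% correct) for 5 users: Spectra vs. Freedom.
spectra = [40, 55, 60, 35, 50]
freedom = [52, 61, 74, 50, 58]

# Every user improved, so all signed ranks are positive and W = 0,
# the most extreme value the statistic can take.
w = wilcoxon_statistic(spectra, freedom)
```

A small W (relative to the critical value for n pairs) is what licenses the "statistically significant difference" reported in the abstract.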

  20. Speech-in-Noise Perception Deficit in Adults with Dyslexia: Effects of Background Type and Listening Configuration

    Science.gov (United States)

    Dole, Marjorie; Hoen, Michel; Meunier, Fanny

    2012-01-01

    Developmental dyslexia is associated with impaired speech-in-noise perception. The goal of the present research was to further characterize this deficit in dyslexic adults. In order to specify the mechanisms and processing strategies used by adults with dyslexia during speech-in-noise perception, we explored the influence of background type,…

  1. Are mirror neurons the basis of speech perception? Evidence from five cases with damage to the purported human mirror system

    Science.gov (United States)

    Rogalsky, Corianne; Love, Tracy; Driscoll, David; Anderson, Steven W.; Hickok, Gregory

    2013-01-01

    The discovery of mirror neurons in macaque has led to a resurrection of motor theories of speech perception. Although the majority of lesion and functional imaging studies have associated perception with the temporal lobes, it has also been proposed that the ‘human mirror system’, which prominently includes Broca’s area, is the neurophysiological substrate of speech perception. Although numerous studies have demonstrated a tight link between sensory and motor speech processes, few have directly assessed the critical prediction of mirror neuron theories of speech perception, namely that damage to the human mirror system should cause severe deficits in speech perception. The present study measured speech perception abilities of patients with lesions involving motor regions in the left posterior frontal lobe and/or inferior parietal lobule (i.e., the proposed human ‘mirror system’). Performance was at or near ceiling in patients with fronto-parietal lesions. It is only when the lesion encroaches on auditory regions in the temporal lobe that perceptual deficits are evident. This suggests that ‘mirror system’ damage does not disrupt speech perception, but rather that auditory systems are the primary substrate for speech perception. PMID:21207313

  2. Reading Fluency and Speech Perception Speed of Beginning Readers with Persistent Reading Problems: The Perception of Initial Stop Consonants and Consonant Clusters

    Science.gov (United States)

    Snellings, Patrick; van der Leij, Aryan; Blok, Henk; de Jong, Peter F.

    2010-01-01

    This study investigated the role of speech perception accuracy and speed in fluent word decoding of reading disabled (RD) children. A same-different phoneme discrimination task with natural speech tested the perception of single consonants and consonant clusters by young but persistent RD children. RD children were slower than chronological age…

  3. Speech Perception Benefits of Internet Versus Conventional Telephony for Hearing-Impaired Individuals

    Science.gov (United States)

    Dubach, Patrick; Pfiffner, Flurin; Kompis, Martin; Caversaccio, Marco

    2012-01-01

    Background: Telephone communication is a challenge for many hearing-impaired individuals. One important technical reason for this difficulty is the restricted frequency range (0.3–3.4 kHz) of conventional landline telephones. Internet telephony (voice over Internet protocol [VoIP]) is transmitted with a larger frequency range (0.1–8 kHz) and therefore includes more frequencies relevant to speech perception. According to a recently published, laboratory-based study, the theoretical advantage of ideal VoIP conditions over conventional telephone quality has translated into improved speech perception by hearing-impaired individuals. However, the speech perception benefits of nonideal VoIP network conditions, which may occur in daily life, have not been explored. VoIP use cannot be recommended to hearing-impaired individuals before its potential under more realistic conditions has been examined. Objective: To compare realistic VoIP network conditions, under which digital data packets may be lost, with ideal conventional telephone quality with respect to their impact on speech perception by hearing-impaired individuals. Methods: We assessed speech perception using standardized test material presented under simulated VoIP conditions with increasing digital data packet loss (from 0% to 20%) and compared with simulated ideal conventional telephone quality. We monaurally tested 10 adult users of cochlear implants, 10 adult users of hearing aids, and 10 normal-hearing adults in the free sound field, both in quiet and with background noise. Results: Across all participant groups, mean speech perception scores using VoIP with 0%, 5%, and 10% packet loss were 15.2% (range 0%–53%), 10.6% (4%–46%), and 8.8% (7%–33%) higher, respectively, than with ideal conventional telephone quality. Speech perception did not differ between VoIP with 20% packet loss and conventional telephone quality. The maximum benefits were observed under ideal VoIP conditions without packet loss and

  4. Speech perception benefits of internet versus conventional telephony for hearing-impaired individuals.

    Science.gov (United States)

    Mantokoudis, Georgios; Dubach, Patrick; Pfiffner, Flurin; Kompis, Martin; Caversaccio, Marco; Senn, Pascal

    2012-07-16

    Telephone communication is a challenge for many hearing-impaired individuals. One important technical reason for this difficulty is the restricted frequency range (0.3-3.4 kHz) of conventional landline telephones. Internet telephony (voice over Internet protocol [VoIP]) is transmitted with a larger frequency range (0.1-8 kHz) and therefore includes more frequencies relevant to speech perception. According to a recently published, laboratory-based study, the theoretical advantage of ideal VoIP conditions over conventional telephone quality has translated into improved speech perception by hearing-impaired individuals. However, the speech perception benefits of nonideal VoIP network conditions, which may occur in daily life, have not been explored. VoIP use cannot be recommended to hearing-impaired individuals before its potential under more realistic conditions has been examined. To compare realistic VoIP network conditions, under which digital data packets may be lost, with ideal conventional telephone quality with respect to their impact on speech perception by hearing-impaired individuals. We assessed speech perception using standardized test material presented under simulated VoIP conditions with increasing digital data packet loss (from 0% to 20%) and compared with simulated ideal conventional telephone quality. We monaurally tested 10 adult users of cochlear implants, 10 adult users of hearing aids, and 10 normal-hearing adults in the free sound field, both in quiet and with background noise. Across all participant groups, mean speech perception scores using VoIP with 0%, 5%, and 10% packet loss were 15.2% (range 0%-53%), 10.6% (4%-46%), and 8.8% (7%-33%) higher, respectively, than with ideal conventional telephone quality. Speech perception did not differ between VoIP with 20% packet loss and conventional telephone quality. 
The maximum benefits were observed under ideal VoIP conditions without packet loss and were 36% (P = .001) for cochlear implant users, 18
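
The simulated packet-loss conditions described in these two records can be approximated by dropping fixed-size frames of an audio signal at random. The sketch below is a deliberate simplification (real VoIP loss is bursty and codecs apply concealment rather than inserting silence); the 20 ms frame size and 16 kHz sample rate are illustrative assumptions, not parameters from the study.

```python
import random

def simulate_packet_loss(samples, loss_rate, frame_ms=20, sr=16000, seed=0):
    """Zero out randomly selected frames to mimic VoIP packet loss.

    Simplification: losses are independent per frame and lost frames are
    replaced with silence; real networks show bursty loss and real codecs
    use packet-loss concealment.
    """
    rng = random.Random(seed)  # seeded for reproducible test conditions
    frame_len = int(sr * frame_ms / 1000)
    out = list(samples)
    for start in range(0, len(out), frame_len):
        if rng.random() < loss_rate:
            n = len(out[start:start + frame_len])
            out[start:start + n] = [0.0] * n
    return out

signal = [0.5] * 16000  # one second of a dummy signal
intact = simulate_packet_loss(signal, 0.0)   # 0% loss: unchanged
lossy = simulate_packet_loss(signal, 0.2)    # 20% expected frame loss
```

Sweeping `loss_rate` from 0.0 to 0.2 over recorded test material would reproduce the degradation continuum the study presented to listeners.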

  5. Auditory Verbal Working Memory as a Predictor of Speech Perception in Modulated Maskers in Listeners With Normal Hearing

    OpenAIRE

    Millman, Rebecca E.; Mattys, Sven L.

    2017-01-01

    Purpose: Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the relationship between speech perception in modulated maskers and components of auditory verbal working memory (AVWM) over a range of signal-to-noise rati...

  6. Neurophysiological Evidence That Musical Training Influences the Recruitment of Right Hemispheric Homologues for Speech Perception

    Directory of Open Access Journals (Sweden)

    McNeel Gordon Jantzen

    2014-03-01

    Full Text Available Musicians have a more accurate temporal and tonal representation of auditory stimuli than their non-musician counterparts (Kraus & Chandrasekaran, 2010; Parbery-Clark, Skoe, & Kraus, 2009; Zendel & Alain, 2008; Musacchia, Sams, Skoe, & Kraus, 2007). Musicians who are adept at the production and perception of music are also more sensitive to key acoustic features of speech such as voice onset timing and pitch. Together, these data suggest that musical training may enhance the processing of acoustic information for speech sounds. In the current study, we sought to provide neural evidence that musicians process speech and music in a similar way. We hypothesized that for musicians, right hemisphere areas traditionally associated with music are also engaged for the processing of speech sounds. In contrast, we predicted that in non-musicians processing of speech sounds would be localized to traditional left hemisphere language areas. Speech stimuli differing in voice onset time were presented using a dichotic listening paradigm. Subjects either indicated aural location for a specified speech sound or identified a specific speech sound from a directed aural location. Musical training effects and organization of acoustic features were reflected by activity in source generators of the P50. This included greater activation of right middle temporal gyrus (MTG) and superior temporal gyrus (STG) in musicians. The findings demonstrate recruitment of the right hemisphere in musicians for discriminating speech sounds and a putative broadening of their language network. Musicians appear to have an increased sensitivity to acoustic features and enhanced selective attention to temporal features of speech that is facilitated by musical training and supported, in part, by right hemisphere homologues of established speech processing regions of the brain.

  7. Improving speech perception in noise with current focusing in cochlear implant users.

    Science.gov (United States)

    Srinivasan, Arthi G; Padilla, Monica; Shannon, Robert V; Landsberger, David M

    2013-05-01

    Cochlear implant (CI) users typically have excellent speech recognition in quiet but struggle with understanding speech in noise. It is thought that broad current spread from stimulating electrodes causes adjacent electrodes to activate overlapping populations of neurons which results in interactions across adjacent channels. Current focusing has been studied as a way to reduce spread of excitation, and therefore, reduce channel interactions. In particular, partial tripolar stimulation has been shown to reduce spread of excitation relative to monopolar stimulation. However, the crucial question is whether this benefit translates to improvements in speech perception. In this study, we compared speech perception in noise with experimental monopolar and partial tripolar speech processing strategies. The two strategies were matched in terms of number of active electrodes, microphone, filterbanks, stimulation rate and loudness (although both strategies used a lower stimulation rate than typical clinical strategies). The results of this study showed a significant improvement in speech perception in noise with partial tripolar stimulation. All subjects benefited from the current focused speech processing strategy. There was a mean improvement in speech recognition threshold of 2.7 dB in a digits in noise task and a mean improvement of 3 dB in a sentences in noise task with partial tripolar stimulation relative to monopolar stimulation. Although the experimental monopolar strategy was worse than the clinical, presumably due to different microphones, frequency allocations and stimulation rates, the experimental partial-tripolar strategy, which had the same changes, showed no acute deficit relative to the clinical. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. The development of visual speech perception in Mandarin Chinese-speaking children.

    Science.gov (United States)

    Chen, Liang; Lei, Jianghua

    2017-01-01

    The present study aimed to investigate the development of visual speech perception in Chinese-speaking children. Children aged 7, 13 and 16 were asked to visually identify both consonant and vowel sounds in Chinese as quickly and accurately as possible. Results revealed (1) an increase in accuracy of visual speech perception between ages 7 and 13 after which the accuracy rate either stagnates or drops; and (2) a U-shaped development pattern in speed of perception with peak performance in 13-year olds. Results also showed that across all age groups, the overall levels of accuracy rose, whereas the response times fell for simplex finals, complex finals and initials. These findings suggest that (1) visual speech perception in Chinese is a developmental process that is acquired over time and is still fine-tuned well into late adolescence; (2) factors other than cross-linguistic differences in phonological complexity and degrees of reliance on visual information are involved in development of visual speech perception.

  9. Effects of Musicality on the Perception of Rhythmic Structure in Speech

    Directory of Open Access Journals (Sweden)

    Natalie Boll-Avetisyan

    2017-04-01

    Full Text Available Language and music share many rhythmic properties, such as variations in intensity and duration leading to repeating patterns. Perception of rhythmic properties may rely on cognitive networks that are shared between the two domains. If so, then variability in speech rhythm perception may relate to individual differences in musicality. To examine this possibility, the present study focuses on rhythmic grouping, which is assumed to be guided by a domain-general principle, the Iambic/Trochaic law, stating that sounds alternating in intensity are grouped as strong-weak, and sounds alternating in duration are grouped as weak-strong. German listeners completed a grouping task: They heard streams of syllables alternating in intensity, duration, or neither, and had to indicate whether they perceived a strong-weak or weak-strong pattern. Moreover, their music perception abilities were measured, and they filled out a questionnaire reporting their productive musical experience. Results showed that better musical rhythm perception ability was associated with more consistent rhythmic grouping of speech, while melody perception ability and productive musical experience were not. This suggests shared cognitive procedures in the perception of rhythm in music and speech. Also, the results highlight the relevance of considering individual differences in musicality when aiming to explain variability in prosody perception.
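
The intensity-alternating syllable streams used in the grouping task above can be mimicked with a simple synthesized sequence. This is a hypothetical stand-in (pure tones rather than syllables; the frequency, durations, and amplitude values are illustrative): per the Iambic/Trochaic law, a loud-soft alternation like this one should be grouped as strong-weak.

```python
import math

def tone(freq_hz, dur_s, amp, sr=16000):
    """One sine-tone segment at the given peak amplitude."""
    return [amp * math.sin(2 * math.pi * freq_hz * i / sr)
            for i in range(int(dur_s * sr))]

def alternating_intensity_stream(n_tones=6, loud=0.9, soft=0.45,
                                 dur_s=0.2, sr=16000):
    """Stream of equal-duration tones alternating loud/soft in intensity."""
    stream = []
    for k in range(n_tones):
        stream.extend(tone(440.0, dur_s, loud if k % 2 == 0 else soft, sr))
    return stream

stream = alternating_intensity_stream()
seg = int(0.2 * 16000)  # samples per tone segment
```

A duration-alternating stream (the weak-strong, iambic case) would instead hold amplitude constant and alternate `dur_s` between a short and a long value.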

  10. Hemispheric asymmetries in speech perception: sense, nonsense and modulations.

    Directory of Open Access Journals (Sweden)

    Stuart Rosen

    Full Text Available The well-established left hemisphere specialisation for language processing has long been claimed to be based on a low-level auditory specialisation for specific acoustic features in speech, particularly regarding 'rapid temporal processing'. A novel analysis/synthesis technique was used to construct a variety of sounds based on simple sentences which could be manipulated in spectro-temporal complexity, and in whether they were intelligible or not. All sounds consisted of two noise-excited spectral prominences (based on the lower two formants in the original speech), which could be static or varying in frequency and/or amplitude independently. Dynamically varying both acoustic features based on the same sentence led to intelligible speech, but when either or both acoustic features were static, the stimuli were not intelligible. Using the frequency dynamics from one sentence with the amplitude dynamics of another led to unintelligible sounds of comparable spectro-temporal complexity to the intelligible ones. Positron emission tomography (PET) was used to compare which brain regions were active when participants listened to the different sounds. Neural activity to spectral and amplitude modulations sufficient to support speech intelligibility (without actually being intelligible) was seen bilaterally, with a right temporal lobe dominance. A left-dominant response was seen only to intelligible sounds. It thus appears that the left hemisphere specialisation for speech is based on the linguistic properties of utterances, not on particular acoustic features.

  11. Vulnerability of freshwater native biodiversity to non-native ...

    Science.gov (United States)

    Background/Question/Methods Non-native species pose one of the greatest threats to native biodiversity. The literature provides plentiful empirical and anecdotal evidence of this phenomenon; however, such evidence is limited to local or regional scales. Employing geospatial analyses, we investigate the potential threat of non-native species to threatened and endangered aquatic animal taxa inhabiting unprotected areas across the continental US. We compiled distribution information from existing publicly available databases at the watershed scale (12-digit hydrologic unit code). We mapped non-native aquatic plant and animal species richness, and an index of cumulative invasion pressure, which weights non-native richness by the time since invasion of each species. These distributions were compared to the distributions of native aquatic taxa (fish, amphibians, mollusks, and decapods) from the International Union for the Conservation of Nature (IUCN) database. We mapped the proportion of species listed by IUCN as threatened and endangered, and a species rarity index per watershed. An overlay analysis identified watersheds experiencing high pressure from non-native species and also containing high proportions of threatened and endangered species or exhibiting high species rarity. Conservation priorities were identified by generating priority indices from these overlays and mapping them relative to the distribution of protected areas across the US. Results/Conclusion
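The overlay analysis described above can be sketched as follows; the weighting scheme, field names, and values are illustrative assumptions, since the abstract does not give the exact index formulas:

```python
# Hypothetical watershed records (toy values, not the study's data):
# non-native species richness, mean years since invasion, and the
# proportion of native taxa listed as threatened or endangered.
watersheds = [
    {"huc12": "W1", "richness": 12, "years": 40, "threatened": 0.20},
    {"huc12": "W2", "richness": 3,  "years": 5,  "threatened": 0.10},
    {"huc12": "W3", "richness": 20, "years": 25, "threatened": 0.45},
]

def pct_rank(values):
    """Fractional rank in (0, 1]: share of values at or below each value."""
    return [sum(v <= x for v in values) / len(values) for x in values]

# Cumulative invasion pressure: richness weighted by time since invasion.
pressure = [w["richness"] * w["years"] for w in watersheds]
threat = [w["threatened"] for w in watersheds]

# Overlay: combine rank-normalized pressure and threat into a priority index.
for w, p, t in zip(watersheds, pct_rank(pressure), pct_rank(threat)):
    w["priority"] = p * t

ranked = sorted(watersheds, key=lambda w: w["priority"], reverse=True)
print([w["huc12"] for w in ranked])  # highest conservation priority first
```

Watersheds scoring high on both layers surface at the top, which is the logic of prioritizing unprotected areas under joint invasion pressure and native-taxa rarity.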

  12. Effect of attentional load on audiovisual speech perception: Evidence from ERPs

    Directory of Open Access Journals (Sweden)

    Agnès eAlsius

    2014-07-01

    Full Text Available Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e. a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  13. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    Science.gov (United States)

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  14. Comparison of Two Music Training Approaches on Music and Speech Perception in Cochlear Implant Users.

    Science.gov (United States)

    Fuller, Christina D; Galvin, John J; Maat, Bert; Başkent, Deniz; Free, Rolien H

    2018-01-01

    In normal-hearing (NH) adults, long-term music training may benefit music and speech perception, even when listening to spectro-temporally degraded signals as experienced by cochlear implant (CI) users. In this study, we compared two different music training approaches in CI users and their effects on speech and music perception, as it remains unclear which approach to music training might be best. The approaches differed in terms of music exercises and social interaction. For the pitch/timbre group, melodic contour identification (MCI) training was performed using computer software. For the music therapy group, training involved face-to-face group exercises (rhythm perception, musical speech perception, music perception, singing, vocal emotion identification, and music improvisation). For the control group, training involved group nonmusic activities (e.g., writing, cooking, and woodworking). Training consisted of weekly 2-hr sessions over a 6-week period. Speech intelligibility in quiet and noise, vocal emotion identification, MCI, and quality of life (QoL) were measured before and after training. The different training approaches appeared to offer different benefits for music and speech perception. Training effects were observed within-domain (better MCI performance for the pitch/timbre group), with little cross-domain transfer of music training (emotion identification significantly improved for the music therapy group). While training had no significant effect on QoL, the music therapy group reported better perceptual skills across training sessions. These results suggest that more extensive and intensive training approaches that combine pitch training with the social aspects of music therapy may further benefit CI users.

  15. Modeling auditory processing and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve

    A better understanding of how the human auditory system represents and analyzes sounds and how hearing impairment affects such processing is of great interest for researchers in the fields of auditory neuroscience, audiology, and speech communication as well as for applications in hearing-instrument and speech technology. In this thesis, the primary focus was on the development and evaluation of a computational model of human auditory signal-processing and perception. The model was initially designed to simulate the normal-hearing auditory system with particular focus on the nonlinear processing ... in a diagnostic rhyme test. The framework was constructed such that discrimination errors originating from the front-end and the back-end were separated. The front-end was fitted to individual listeners with cochlear hearing loss according to non-speech data, and speech data were obtained in the same listeners...

  16. The influence of phonetic dimensions on aphasic speech perception

    NARCIS (Netherlands)

    de Kok, D.A.; Jonkers, R.; Bastiaanse, Y.R.M.

    2010-01-01

    Individuals with aphasia have more problems detecting small differences between speech sounds than larger ones. This paper reports how phonemic processing is impaired and how this is influenced by speechreading. A non-word discrimination task was carried out with 'audiovisual', 'auditory only' and

  17. Dissociating speech perception and comprehension at reduced levels of awareness

    Science.gov (United States)

    Davis, Matthew H.; Coleman, Martin R.; Absalom, Anthony R.; Rodd, Jennifer M.; Johnsrude, Ingrid S.; Matta, Basil F.; Owen, Adrian M.; Menon, David K.

    2007-01-01

    We used functional MRI and the anesthetic agent propofol to assess the relationship among neural responses to speech, successful comprehension, and conscious awareness. Volunteers were scanned while listening to sentences containing ambiguous words, matched sentences without ambiguous words, and signal-correlated noise (SCN). During three scanning sessions, participants were nonsedated (awake), lightly sedated (a slowed response to conversation), and deeply sedated (no conversational response, rousable by loud command). Bilateral temporal-lobe responses for sentences compared with signal-correlated noise were observed at all three levels of sedation, although prefrontal and premotor responses to speech were absent at the deepest level of sedation. Additional inferior frontal and posterior temporal responses to ambiguous sentences provide a neural correlate of semantic processes critical for comprehending sentences containing ambiguous words. However, this additional response was absent during light sedation, suggesting a marked impairment of sentence comprehension. A significant decline in postscan recognition memory for sentences also suggests that sedation impaired encoding of sentences into memory, with left inferior frontal and temporal lobe responses during light sedation predicting subsequent recognition memory. These findings suggest a graded degradation of cognitive function in response to sedation such that “higher-level” semantic and mnemonic processes can be impaired at relatively low levels of sedation, whereas perceptual processing of speech remains resilient even during deep sedation. These results have important implications for understanding the relationship between speech comprehension and awareness in the healthy brain, in patients receiving sedation, and in patients with disorders of consciousness. PMID:17938125

  18. Influence of musical training on perception of L2 speech

    NARCIS (Netherlands)

    Sadakata, M.; Zanden, L.D.T. van der; Sekiyama, K.

    2010-01-01

    The current study reports specific cases in which a positive transfer of perceptual ability from the music domain to the language domain occurs. We tested whether musical training enhances discrimination and identification performance of L2 speech sounds (timing features, nasal consonants and

  19. Effects of Removing Low-Frequency Electric Information on Speech Perception with Bimodal Hearing

    Science.gov (United States)

    Fowler, Jennifer R.; Eggleston, Jessica L.; Reavis, Kelly M.; McMillan, Garnett P.; Reiss, Lina A. J.

    2016-01-01

    Purpose: The objective was to determine whether speech perception could be improved for bimodal listeners (those using a cochlear implant [CI] in one ear and hearing aid in the contralateral ear) by removing low-frequency information provided by the CI, thereby reducing acoustic-electric overlap. Method: Subjects were adult CI subjects with at…

  20. Speech perception after cochlear implantation in 53 patients with otosclerosis: multicentre results.

    NARCIS (Netherlands)

    Rotteveel, L.J.C.; Snik, A.F.M.; Cooper, H.; Mawman, D.J.; Olphen, A.F. van; Mylanus, E.A.M.

    2010-01-01

    OBJECTIVES: To analyse the speech perception performance of 53 cochlear implant recipients with otosclerosis and to evaluate which factors influenced patient performance in this group. The factors included disease-related data such as demographics, pre-operative audiological characteristics, the

  1. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    Science.gov (United States)

    Wilson, Amanda H.; Alsius, Agnès; Parè, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  2. Speech-Language Pathologists' and Teachers' Perceptions of Classroom-Based Interventions.

    Science.gov (United States)

    Beck, Ann R.; Dennis, Marcia

    1997-01-01

    Speech-language pathologists (N=21) and teachers (N=54) were surveyed regarding their perceptions of classroom-based interventions. The two groups agreed about the primary advantages and disadvantages of most interventions, the primary areas of difference being classroom management and ease of data collection. Other findings indicated few…

  3. Comparison of Two Music Training Approaches on Music and Speech Perception in Cochlear Implant Users

    NARCIS (Netherlands)

    Fuller, Christina D; Galvin, John J; Maat, Bert; Başkent, Deniz; Free, Rolien H

    2018-01-01

    In normal-hearing (NH) adults, long-term music training may benefit music and speech perception, even when listening to spectro-temporally degraded signals as experienced by cochlear implant (CI) users. In this study, we compared two different music training approaches in CI users and their effects

  4. Robust speech perception: Recognize the familiar, generalize to the similar, and adapt to the novel

    Science.gov (United States)

    Kleinschmidt, Dave F.; Jaeger, T. Florian

    2016-01-01

    Successful speech perception requires that listeners map the acoustic signal to linguistic categories. These mappings are not only probabilistic, but change depending on the situation. For example, one talker’s /p/ might be physically indistinguishable from another talker’s /b/ (cf. lack of invariance). We characterize the computational problem posed by such a subjectively non-stationary world and propose that the speech perception system overcomes this challenge by (1) recognizing previously encountered situations, (2) generalizing to other situations based on previous similar experience, and (3) adapting to novel situations. We formalize this proposal in the ideal adapter framework: (1) to (3) can be understood as inference under uncertainty about the appropriate generative model for the current talker, thereby facilitating robust speech perception despite the lack of invariance. We focus on two critical aspects of the ideal adapter. First, in situations that clearly deviate from previous experience, listeners need to adapt. We develop a distributional (belief-updating) learning model of incremental adaptation. The model provides a good fit against known and novel phonetic adaptation data, including perceptual recalibration and selective adaptation. Second, robust speech recognition requires that listeners learn to represent the structured component of cross-situation variability in the speech signal. We discuss how these two aspects of the ideal adapter provide a unifying explanation for adaptation, talker-specificity, and generalization across talkers and groups of talkers (e.g., accents and dialects). The ideal adapter provides a guiding framework for future investigations into speech perception and adaptation, and more broadly language comprehension. PMID:25844873
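The distributional belief-updating model is not spelled out in the abstract; one minimal way to sketch incremental adaptation of a talker-specific phonetic category is normal-normal conjugate updating, where each incoming token shifts the believed category mean by a gain that shrinks as uncertainty decreases (all parameter values below are illustrative, not from the paper):

```python
class CategoryBelief:
    """Belief about a talker-specific category mean (e.g., the voice onset
    time of /p/ in ms), updated incrementally via normal-normal conjugacy.
    All parameter values here are illustrative, not fitted to data."""

    def __init__(self, prior_mean, prior_var, noise_var):
        self.mean = prior_mean      # current estimate of the category mean
        self.var = prior_var        # uncertainty about that mean
        self.noise_var = noise_var  # assumed token-to-token variability

    def update(self, observed):
        # Gain is high when the listener is uncertain, low once confident.
        k = self.var / (self.var + self.noise_var)
        self.mean += k * (observed - self.mean)  # shift toward the token
        self.var *= 1 - k                        # uncertainty shrinks

# A listener expecting /p/ near 60 ms VOT hears a talker producing tokens
# near 45 ms; the belief shifts toward the talker (perceptual recalibration).
belief = CategoryBelief(prior_mean=60.0, prior_var=100.0, noise_var=25.0)
for token in [44.0, 47.0, 43.0, 46.0]:
    belief.update(token)
print(round(belief.mean, 1))
```

Because the gain decays with each update, early tokens from a novel talker move the category quickly, while later tokens fine-tune it, matching the intuition of rapid adaptation followed by stabilization.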

  5. Reliability of Interaural Time Difference-Based Localization Training in Elderly Individuals with Speech-in-Noise Perception Disorder.

    Science.gov (United States)

    Delphi, Maryam; Lotfi, M-Yones; Moossavi, Abdollah; Bakhshi, Enayatollah; Banimostafa, Maryam

    2017-09-01

    Previous studies have shown that interaural-time-difference (ITD) training can improve localization ability. Surprisingly little is, however, known about localization training vis-à-vis speech perception in noise based on interaural time difference in the envelope (ITD ENV). We sought to investigate the reliability of an ITD ENV-based training program in speech-in-noise perception among elderly individuals with normal hearing and speech-in-noise disorder. The present interventional study was performed during 2016. Sixteen elderly men between 55 and 65 years of age with the clinical diagnosis of normal hearing up to 2000 Hz and speech-in-noise perception disorder participated in this study. The localization training program was based on changes in ITD ENV. In order to evaluate the reliability of the training program, we performed speech-in-noise tests before the training program, immediately afterward, and then at 2 months' follow-up. The reliability of the training program was analyzed using the Friedman test and the SPSS software. Statistically significant differences were found in the mean scores of speech-in-noise perception between the 3 time points (P=0.001). The results also indicated no difference in the mean scores of speech-in-noise perception between the 2 time points of immediately after the training program and 2 months' follow-up (P=0.212). The present study showed the reliability of an ITD ENV-based localization training program in elderly individuals with speech-in-noise perception disorder.
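The Friedman test used here compares the three repeated speech-in-noise measurements per listener without assuming normality. A minimal sketch of the test statistic, using made-up scores rather than the study's data (and ignoring tied ranks for simplicity):

```python
# Friedman test statistic for k repeated measures on n subjects.
# Scores below are made-up speech-in-noise scores for 8 listeners at
# three time points (pre, post, follow-up); they are not the study's data.
scores = [
    [40, 55, 54], [38, 50, 52], [45, 60, 58], [42, 57, 55],
    [36, 49, 50], [41, 56, 54], [39, 52, 53], [44, 59, 57],
]

def friedman_statistic(rows):
    n, k = len(rows), len(rows[0])
    # Rank each subject's scores across conditions (1 = lowest).
    # Tie correction is omitted in this sketch.
    rank_sums = [0.0] * k
    for row in rows:
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    # Statistic is chi-square distributed with k-1 degrees of freedom.
    return 12 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)

print(friedman_statistic(scores))
```

With k=3 conditions the statistic has 2 degrees of freedom; values above about 5.99 reject equality of the three sessions at the 0.05 level. In practice one would use a library routine such as `scipy.stats.friedmanchisquare`, which also handles ties.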

  6. On the context-dependent nature of the contribution of the ventral premotor cortex to speech perception

    Science.gov (United States)

    Tremblay, Pascale; Small, Steven L.

    2011-01-01

    What is the nature of the interface between speech perception and production, where auditory and motor representations converge? One set of explanations suggests that during perception, the motor circuits involved in producing a perceived action are in some way enacting the action without actually causing movement (covert simulation) or sending along the motor information to be used to predict its sensory consequences (i.e., efference copy). Other accounts either reject entirely the involvement of motor representations in perception, or explain their role as being more supportive than integral, and not employing the identical circuits used in production. Using fMRI, we investigated whether there are brain regions that are conjointly active for both speech perception and production, and whether these regions are sensitive to articulatory (syllabic) complexity during both processes, which is predicted by a covert simulation account. A group of healthy young adults (1) observed a female speaker produce a set of familiar words (perception), and (2) observed and then repeated the words (production). There were two types of words, varying in articulatory complexity, as measured by the presence or absence of consonant clusters. The simple words contained no consonant cluster (e.g. “palace”), while the complex words contained one to three consonant clusters (e.g. “planet”). Results indicate that the left ventral premotor cortex (PMv) was significantly active during speech perception and speech production but that activation in this region was scaled to articulatory complexity only during speech production, revealing an incompletely specified efferent motor signal during speech perception. The right planum temporale (PT) was also active during speech perception and speech production, and activation in this region was scaled to articulatory complexity during both production and perception. These findings are discussed in the context of current theories of…

  7. Intensive foreign language learning reveals effects on categorical perception of sibilant voicing after only 3 weeks

    DEFF Research Database (Denmark)

    Nielsen, Andreas Højlund; Horn, Nynne Thorup; Derdau Sørensen, Stine

    2015-01-01

    Models of speech learning suggest that adaptations to foreign language sound categories take place within 6-12 months of exposure to a foreign language. Results from laboratory language training show effects of very targeted training on non-native speech contrasts within only one to three weeks of training. Results from immersion studies are inconclusive, but some suggest continued effects on non-native speech perception after 6-8 years of experience. We investigated this apparent discrepancy in the timing of adaptation to foreign speech sounds in a longitudinal study of foreign language learning. We examined two groups of Danish language officer cadets learning either Arabic (MSA and Egyptian Arabic) or Dari (Afghan Farsi) through intensive multi-faceted language training. We conducted two experiments (identification and discrimination) with the cadets, who were tested four times: at the start...

  8. Dissociating speech perception and comprehension at reduced levels of awareness

    OpenAIRE

    Davis, Matthew H.; Coleman, Martin R.; Absalom, Anthony R.; Rodd, Jennifer M.; Johnsrude, Ingrid S.; Matta, Basil F.; Owen, Adrian M.; Menon, David K.

    2007-01-01

    We used functional MRI and the anesthetic agent propofol to assess the relationship among neural responses to speech, successful comprehension, and conscious awareness. Volunteers were scanned while listening to sentences containing ambiguous words, matched sentences without ambiguous words, and signal-correlated noise (SCN). During three scanning sessions, participants were nonsedated (awake), lightly sedated (a slowed response to conversation), and deeply sedated (no conversational response...

  9. STUDENTS WRITING EMAILS TO FACULTY: AN EXAMINATION OF E-POLITENESS AMONG NATIVE AND NON-NATIVE SPEAKERS OF ENGLISH

    Directory of Open Access Journals (Sweden)

    Sigrun Biesenbach-Lucas

    2007-02-01

    Full Text Available This study combines interlanguage pragmatics and speech act research with computer-mediated communication and examines how native and non-native speakers of English formulate low- and high-imposition requests to faculty. While some research claims that email, due to absence of non-verbal cues, encourages informal language, other research has claimed the opposite. However, email technology also allows writers to plan and revise messages before sending them, thus affording the opportunity to edit not only for grammar and mechanics, but also for pragmatic clarity and politeness. The study examines email requests sent by native and non-native English speaking graduate students to faculty at a major American university over a period of several semesters and applies Blum-Kulka, House, and Kasper’s (1989) speech act analysis framework – quantitatively to distinguish levels of directness, i.e. pragmatic clarity; and qualitatively to compare syntactic and lexical politeness devices, the request perspectives, and the specific linguistic request realization patterns preferred by native and non-native speakers. Results show that far more requests are realized through direct strategies as well as hints than conventionally indirect strategies typically found in comparative speech act studies. Politeness conventions in email, a text-only medium with little guidance in the academic institutional hierarchy, appear to be a work in progress, and native speakers demonstrate greater resources in creating e-polite messages to their professors than non-native speakers. A possible avenue for pedagogical intervention with regard to instruction in and acquisition of politeness routines in hierarchically upward email communication is presented.

  10. Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study.

    Science.gov (United States)

    Kumar, G Vinodh; Halder, Tamesh; Jaiswal, Amit K; Mukherjee, Abhishek; Roy, Dipanjan; Banerjee, Arpan

    2016-01-01

    Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (McGurk-effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal, parietal, and the integrative brain sites in the vicinity of the superior temporal sulcus (STS) for multisensory speech perception. However, if and how the network across the whole brain participates during multisensory perception processing remains an open question. We posit that a large-scale functional connectivity among the neural population situated in distributed brain sites may provide valuable insights involved in processing and fusing of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent audio-visual (AV) speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs was computed using the time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha and beta-band coherence underlying cross-modal (illusory) perception compared to unisensory perception around a temporal window of 300-600 ms following onset of stimuli. During asynchronous speech stimuli, a global broadband coherence was observed during cross-modal perception at earlier times along with pre-stimulus decreases of lower frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags. Thus, our
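The abstract defines time-frequency global coherence as an aggregate of pairwise coherence changes over time. As a hedged illustration, the sketch below collapses hypothetical pairwise coherence series into a single global time course using a simple mean over sensor pairs (the sensor names and values are made up, and the mean stands in for the paper's vector-sum aggregate):

```python
# Aggregating pairwise sensor coherences into a global coherence time course.
# Each series would, in practice, come from a time-frequency (e.g., wavelet
# or multitaper) coherence estimate within a given frequency band.
coherence = {
    ("Fz", "Cz"): [0.2, 0.6, 0.7],
    ("Fz", "Pz"): [0.1, 0.5, 0.6],
    ("Fz", "Oz"): [0.2, 0.4, 0.5],
    ("Cz", "Pz"): [0.3, 0.7, 0.8],
    ("Cz", "Oz"): [0.2, 0.5, 0.6],
    ("Pz", "Oz"): [0.1, 0.4, 0.6],
}

def global_coherence(pairwise, n_bins):
    """Mean coherence over all sensor pairs at each time bin; a rise
    signals network-wide synchronization, e.g., the gamma-band increase
    reported around 300-600 ms for cross-modal perception."""
    series = list(pairwise.values())
    return [sum(s[t] for s in series) / len(series) for t in range(n_bins)]

gc = global_coherence(coherence, 3)
print(gc)  # rising global coherence across the three time bins
```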

  11. Restoring speech perception with cochlear implants by spanning defective electrode contacts.

    Science.gov (United States)

    Frijns, Johan H M; Snel-Bongers, Jorien; Vellinga, Dirk; Schrage, Erik; Vanpoucke, Filiep J; Briaire, Jeroen J

    2013-04-01

    Even with six defective contacts, spanning can largely restore speech perception with the HiRes 120 speech processing strategy to the level supported by an intact electrode array. Moreover, the sound quality is not degraded. Previous studies have demonstrated reduced speech perception scores (SPS) with defective contacts in HiRes 120. This study investigated whether replacing defective contacts by spanning, i.e. current steering on non-adjacent contacts, is able to restore speech recognition to the level supported by an intact electrode array. Ten adult cochlear implant recipients (HiRes90K, HiFocus1J) with experience with HiRes 120 participated in this study. Three different defective electrode arrays were simulated (six separate defective contacts, three pairs or two triplets). The participants received three take-home strategies and were asked to evaluate the sound quality in five predefined listening conditions. After 3 weeks, SPS were evaluated with monosyllabic words in quiet and in speech-shaped background noise. The participants rated the sound quality equal for all take-home strategies. SPS with background noise were equal for all conditions tested. However, SPS in quiet (85% phonemes correct on average with the full array) decreased significantly with increasing spanning distance, with a 3% decrease for each spanned contact.

  12. The influence of environmental sound training on the perception of spectrally degraded speech and environmental sounds.

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N

    2012-06-01

    Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients.
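The four-channel vocoder used to degrade the stimuli is mentioned only briefly; as a rough illustration of the general technique (assumed frame-based processing and equal-width band edges, not the study's actual filterbank or envelope smoothing), a frame of signal can be resynthesized as noise that preserves per-band energy:

```python
import cmath
import math
import random

# Sketch of a four-channel noise vocoder on one short frame: measure the
# energy in each frequency band, then resynthesize the frame as noise
# whose per-band energies match. A naive O(n^2) DFT keeps it stdlib-only.

def naive_dft(x, inverse=False):
    n, sign = len(x), (1 if inverse else -1)
    out = [sum(x[t] * cmath.exp(sign * 2j * math.pi * k * t / n)
               for t in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def vocode_frame(frame, n_channels=4):
    rng = random.Random(0)  # fixed seed for a reproducible sketch
    n = len(frame)
    spec = naive_dft(frame)
    half = n // 2
    edges = [half * c // n_channels for c in range(n_channels + 1)]
    noise_spec = [0j] * n
    for lo, hi in zip(edges, edges[1:]):
        band = range(max(lo, 1), hi)  # skip the DC bin
        energy = math.sqrt(sum(abs(spec[k]) ** 2 for k in band) / max(len(band), 1))
        for k in band:
            # Random phase, band-matched magnitude: noise carrier with the
            # original band envelope for this frame.
            noise_spec[k] = energy * cmath.exp(1j * rng.uniform(0.0, 2.0 * math.pi))
            noise_spec[n - k] = noise_spec[k].conjugate()  # keep output real
    return [v.real for v in naive_dft(noise_spec, inverse=True)]

tone = [math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]  # toy input
out = vocode_frame(tone)
print(len(out))
```

Applied frame by frame with overlap-add, this kind of processing discards spectral fine structure while retaining coarse spectro-temporal envelopes, which is what makes vocoded speech and environmental sounds hard but learnable.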

  13. Credibility of native and non-native speakers of English revisited: Do non-native listeners feel the same?

    OpenAIRE

    Hanzlíková, Dagmar; Skarnitzl, Radek

    2017-01-01

    This study reports on research stimulated by Lev-Ari and Keysar (2010) who showed that native listeners find statements delivered by foreign-accented speakers to be less true than those read by native speakers. Our objective was to replicate the study with non-native listeners to see whether this effect is also relevant in international communication contexts. The same set of statements from the original study was recorded by 6 native and 6 non-native speakers of English. 121 non-native listen...

  14. Musical background not associated with self-perceived hearing performance or speech perception in postlingual cochlear-implant users.

    Science.gov (United States)

    Fuller, Christina; Free, Rolien; Maat, Bert; Başkent, Deniz

    2012-08-01

    In normal-hearing listeners, musical background has been observed to change the sound representation in the auditory system and produce enhanced performance in some speech perception tests. Based on these observations, it has been hypothesized that musical background can influence sound and speech perception, and as an extension also the quality of life, in cochlear-implant users. To test this hypothesis, this study explored musical background [using the Dutch Musical Background Questionnaire (DMBQ)], and self-perceived sound and speech perception and quality of life [using the Nijmegen Cochlear Implant Questionnaire (NCIQ) and the Speech Spatial and Qualities of Hearing Scale (SSQ)] in 98 postlingually deafened adult cochlear-implant recipients. In addition to self-perceived measures, speech perception scores (percentage of phonemes recognized in words presented in quiet) were obtained from patient records. The self-perceived hearing performance was associated with the objective speech perception. Forty-one respondents (44% of 94 respondents) indicated some form of formal musical training. Fifteen respondents (18% of 83 respondents) judged themselves as having musical training, experience, and knowledge. No association was observed between musical background (quantified by DMBQ), and self-perceived hearing-related performance or quality of life (quantified by NCIQ and SSQ), or speech perception in quiet.

  15. The discrepancy in the perception of the public-political speech in Croatia.

    Science.gov (United States)

    Tanta, Ivan; Lesinger, Gordana

    2014-03-01

    This paper centres on the study of political speech in the Republic of Croatia and its impact on voters: which keywords in the political speeches and public appearances of Croatian politicians does their electorate want to hear? We therefore frame the research topic as a question - is there a discrepancy in the perception of public-political speech in Croatia, and which keywords are specific to the two main regions of Croatia and resonate with the inhabitants of those regions? Marcus Tullius Cicero, the most important Roman orator, used a specific associative mnemonic technique known as the "room technique" (method of loci). He would take the keywords and conceptual terms he needed for a given topic and attach them, in the desired order and in a highly creative and distinctive way, to the rooms of a house or palace he knew well. Then, while delivering the speech, he would mentally walk through the rooms of the house or palace, and the keywords and concepts would come to mind, again in the desired order. Given that this kind of research on political speech is relatively recent in Croatia, it should be noted that this form of political communication is still insufficiently explored - particularly the impact and use of keywords specific to the Republic of Croatia in everyday public and political communication. The paper analyzes the political campaign speeches and promises of several winning candidates, now Croatian MEPs, for keywords related to economics, culture, science, education and health. The analysis is based on a comparison of survey results on the representation of keywords in the politicians' speeches with a qualitative analysis of how the politicians used those keywords during the election campaign.

  16. A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech.

    Directory of Open Access Journals (Sweden)

    John F Magnotti

    2017-02-01

    Full Text Available Audiovisual speech integration combines information from auditory speech (talker's voice) and visual speech (talker's mouth movements) to improve perceptual accuracy. However, if the auditory and visual speech emanate from different talkers, integration decreases accuracy. Therefore, a key step in audiovisual speech perception is deciding whether auditory and visual speech have the same source, a process known as causal inference. A well-known illusion, the McGurk Effect, consists of incongruent audiovisual syllables, such as auditory "ba" + visual "ga" (AbaVga), that are integrated to produce a fused percept ("da"). This illusion raises two fundamental questions: first, given the incongruence between the auditory and visual syllables in the McGurk stimulus, why are they integrated; and second, why does the McGurk effect not occur for other, very similar syllables (e.g., AgaVba)? We describe a simplified model of causal inference in multisensory speech perception (CIMS) that predicts the perception of arbitrary combinations of auditory and visual speech. We applied this model to behavioral data collected from 60 subjects perceiving both McGurk and non-McGurk incongruent speech stimuli. The CIMS model successfully predicted both the audiovisual integration observed for McGurk stimuli and the lack of integration observed for non-McGurk stimuli. An identical model without causal inference failed to accurately predict perception for either form of incongruent speech. The CIMS model uses causal inference to provide a computational framework for studying how the brain performs one of its most important tasks, integrating auditory and visual speech cues to allow us to communicate with others.
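
    The causal-inference computation that models of this family build on can be sketched as follows. This is a generic one-dimensional Bayesian causal-inference sketch (Gaussian cue noise, zero-mean prior, model averaging); the parameter values and readout are illustrative assumptions, not the CIMS model's fitted components.

```python
import numpy as np

def normpdf(x, mu, var):
    """Gaussian density with mean mu and variance var."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def causal_inference(xa, xv, sa2=1.0, sv2=1.0, sp2=100.0, p_common=0.5):
    """One audiovisual trial on a hypothetical 1-D internal phonetic axis.

    xa, xv   : noisy auditory and visual cues
    sa2, sv2 : cue noise variances
    sp2      : variance of the zero-mean prior over the latent phoneme
    Returns (posterior prob. of a common cause, final estimate).
    """
    # Likelihood of the cue pair under one common cause (C = 1),
    # with the shared source integrated out analytically.
    denom = sa2 * sv2 + sa2 * sp2 + sv2 * sp2
    like_c1 = (np.exp(-0.5 * ((xa - xv) ** 2 * sp2
                              + xa ** 2 * sv2 + xv ** 2 * sa2) / denom)
               / (2 * np.pi * np.sqrt(denom)))
    # Likelihood under independent causes (C = 2): cues are unrelated.
    like_c2 = normpdf(xa, 0.0, sa2 + sp2) * normpdf(xv, 0.0, sv2 + sp2)
    # Posterior probability that auditory and visual speech share a source.
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Reliability-weighted fused estimate (appropriate when C = 1).
    fused = (xa / sa2 + xv / sv2) / (1 / sa2 + 1 / sv2 + 1 / sp2)
    # Auditory-only estimate (appropriate when C = 2).
    aud = (xa / sa2) / (1 / sa2 + 1 / sp2)
    # Model averaging: weight the two estimates by the causal posterior.
    return post_c1, post_c1 * fused + (1 - post_c1) * aud
```

    Congruent cues (small |xa − xv|) yield a high common-cause posterior and near-complete fusion; highly discrepant cues are largely segregated, which is the qualitative pattern that separates McGurk-type stimuli from non-integrating incongruent stimuli.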

  17. Speech perception and communication ability over the telephone by Mandarin-speaking children with cochlear implants.

    Science.gov (United States)

    Wu, Che-Ming; Liu, Tien-Chen; Wang, Nan-Mai; Chao, Wei-Chieh

    2013-08-01

    (1) To understand speech perception and communication ability through real telephone calls by Mandarin-speaking children with cochlear implants and compare them to live-voice perception, (2) to report the general condition of telephone use in this population, and (3) to investigate the factors that correlate with telephone speech perception performance. Fifty-six children with over 4 years of implant use (aged 6.8-13.6 years, mean duration 8.0 years) took three speech perception tests administered over the telephone and by live voice to examine sentence, monosyllabic-word and Mandarin tone perception. The children also filled out a questionnaire survey investigating everyday telephone use. The Wilcoxon signed-rank test was used to compare the scores between live-voice and telephone tests, and correlation tests to examine the relationships between them. The mean scores were 86.4%, 69.8% and 70.5% respectively for sentence, word and tone recognition over the telephone. The corresponding live-voice mean scores were 94.3%, 84.0% and 70.8%. The Wilcoxon signed-rank test showed the sentence and word scores were significantly different between the telephone and live-voice tests, while the tone recognition scores were not, indicating that tone perception was less degraded by telephone transmission than word and sentence perception. Spearman's test showed that chronological age and duration of implant use were weakly correlated with the perception test scores. The questionnaire survey showed 78% of the children could initiate phone calls and 59% could use the telephone 2 years after implantation. Implanted children are potentially capable of using the telephone 2 years after implantation, and communication ability over the telephone becomes satisfactory 4 years after implantation. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
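
    The statistical approach described above (a paired non-parametric comparison of the two presentation modes, plus rank correlation with age) can be sketched with SciPy. The scores below are invented placeholders for illustration only, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon, spearmanr

rng = np.random.default_rng(42)

# Placeholder scores for 20 hypothetical implanted children (not the study's data):
live_voice = rng.uniform(80, 100, size=20)             # live-voice sentence scores (%)
telephone = live_voice - rng.uniform(2, 15, size=20)   # telephone scores, somewhat lower

# Paired, non-parametric comparison: the same child took both tests,
# and percentage scores need not be normally distributed.
stat, p_paired = wilcoxon(live_voice, telephone)

# Rank correlation between chronological age and telephone performance.
age = rng.uniform(6.8, 13.6, size=20)                  # ages in years
rho, p_rho = spearmanr(age, telephone)
```

    The Wilcoxon signed-rank test is the natural choice here because the telephone and live-voice scores form matched pairs per child; an unpaired test would discard that structure.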

  18. Hearing aid processing of loud speech and noise signals: Consequences for loudness perception and listening comfort

    DEFF Research Database (Denmark)

    Schmidt, Erik

    2007-01-01

    Sound processing in hearing aids is determined by the fitting rule. The fitting rule describes how the hearing aid should amplify speech and sounds in the surroundings, such that they become audible again for the hearing-impaired person. The general goal is to place all sounds within the hearing aid user's audible range, such that speech intelligibility and listening comfort become as good as possible. Amplification strategies in hearing aids are in many cases based on empirical… Research on loud sounds has found that both normal-hearing and hearing-impaired listeners prefer loud sounds to be closer to the most comfortable loudness level than suggested by common non-linear fitting rules. During this project, two listening experiments were carried out. In the first experiment, hearing aid users…

  19. Musician effect on perception of spectro-temporally degraded speech, vocal emotion, and music in young adolescents.

    NARCIS (Netherlands)

    Başkent, Deniz; Fuller, Christina; Galvin, John; Schepel, Like; Gaudrain, Etienne; Free, Rolien

    2018-01-01

    In adult normal-hearing musicians, perception of music, vocal emotion, and speech in noise has been previously shown to be better than non-musicians, sometimes even with spectro-temporally degraded stimuli. In this study, melodic contour identification, vocal emotion identification, and speech

  20. Auditory Verbal Working Memory as a Predictor of Speech Perception in Modulated Maskers in Listeners with Normal Hearing

    Science.gov (United States)

    Millman, Rebecca E.; Mattys, Sven L.

    2017-01-01

    Purpose: Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the…

  1. Musical background not associated with self-perceived hearing performance or speech perception in postlingual cochlear-implant users

    NARCIS (Netherlands)

    Fuller, Christina; Free, Rolien; Maat, Bert; Baskent, Deniz

    In normal-hearing listeners, musical background has been observed to change the sound representation in the auditory system and produce enhanced performance in some speech perception tests. Based on these observations, it has been hypothesized that musical background can influence sound and speech

  2. Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment.

    Science.gov (United States)

    Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J

    2013-06-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception, but it is not known whether this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the speech-processing difficulties of children with SLI also involve difficulties in integrating the auditory and visual aspects of speech perception.

  3. Executives' speech expressiveness: analysis of perceptive and acoustic aspects of vocal dynamics.

    Science.gov (United States)

    Marquezin, Daniela Maria Santos Serrano; Viola, Izabel; Ghirardi, Ana Carolina de Assis Moura; Madureira, Sandra; Ferreira, Léslie Piccolotto

    2015-01-01

    To analyze speech expressiveness in a group of executives based on perceptive and acoustic aspects of vocal dynamics. Four male subjects participated in the study (S1, S2, S3, and S4). The assessments included the Kingdomality test to obtain the keywords of communicative attitudes; a perceptive-auditory assessment to characterize vocal quality and dynamics, performed by three judges who are speech-language pathologists; a perceptive-auditory assessment to judge the chosen keywords; acoustic analysis of prosodic elements (Praat software); and a statistical analysis. According to the perceptive-auditory analysis of vocal dynamics, S1, S2, S3, and S4 did not show vocal alterations, and all were judged to have a lowered habitual pitch. S1: judged insecure, nonobjective, nonempathetic, and unconvincing, with inappropriate use of pauses consisting mainly of hesitations, and inadequate separation of prosodic groups that broke up syntagmatic constituents. S2: regular use of pauses for respiratory reload, sentence organization, and emphasis; considered secure, not very objective, empathetic, and convincing. S3: judged secure, objective, empathetic, and convincing, with regular use of pauses for respiratory reload and sentence organization, and hesitations. S4: the most secure, objective, empathetic, and convincing, with proper use of pauses for respiratory reload, planning, and emphasis; prosodic groups agreed with the statement, without separating the syntagmatic constituents. The speech characteristics and communicative attitudes stood out in two subjects in different ways: slow speech rate and breaks in the prosodic groups conveyed insecurity, little objectivity, and a lack of persuasion.

  4. Hierarchical Organization of Auditory and Motor Representations in Speech Perception: Evidence from Searchlight Similarity Analysis.

    Science.gov (United States)

    Evans, Samuel; Davis, Matthew H

    2015-12-01

    How humans extract the identity of speech sounds from highly variable acoustic signals remains unclear. Here, we use searchlight representational similarity analysis (RSA) to localize and characterize neural representations of syllables at different levels of the hierarchically organized temporo-frontal pathways for speech perception. We asked participants to listen to spoken syllables that differed considerably in their surface acoustic form by changing speaker and degrading surface acoustics using noise-vocoding and sine wave synthesis while we recorded neural responses with functional magnetic resonance imaging. We found evidence for a graded hierarchy of abstraction across the brain. At the peak of the hierarchy, neural representations in somatomotor cortex encoded syllable identity but not surface acoustic form; at the base of the hierarchy, primary auditory cortex showed the reverse. In contrast, bilateral temporal cortex exhibited an intermediate response, encoding both syllable identity and the surface acoustic form of speech. Regions of somatomotor cortex associated with encoding syllable identity in perception were also engaged when producing the same syllables in a separate session. These findings are consistent with a hierarchical account of how variable acoustic signals are transformed into abstract representations of the identity of speech sounds. © The Author 2015. Published by Oxford University Press.

  5. The effect of varying talker identity and listening conditions on gaze behavior during audiovisual speech perception.

    Science.gov (United States)

    Buchan, Julie N; Paré, Martin; Munhall, Kevin G

    2008-11-25

    During face-to-face conversation the face provides auditory and visual linguistic information, and also conveys information about the identity of the speaker. This study investigated behavioral strategies involved in gathering visual information while watching talking faces. The effects of varying talker identity and varying the intelligibility of speech (by adding acoustic noise) on gaze behavior were measured with an eyetracker. Varying the intelligibility of the speech by adding noise had a noticeable effect on the location and duration of fixations. When noise was present, subjects adopted a vantage point that was more centralized on the face by reducing the frequency of the fixations on the eyes and mouth and lengthening the duration of their gaze fixations on the nose and mouth. Varying talker identity resulted in a more modest change in gaze behavior that was modulated by the intelligibility of the speech. Although subjects generally used similar strategies to extract visual information in both talker variability conditions, when noise was absent, there were more fixations on the mouth when viewing a different talker every trial as opposed to the same talker every trial. These findings provide a useful baseline for studies examining gaze behavior during audiovisual speech perception and perception of dynamic faces.

  6. Adaptive plasticity in speech perception: Effects of external information and internal predictions.

    Science.gov (United States)

    Guediche, Sara; Fiez, Julie A; Holt, Lori L

    2016-07-01

    When listeners encounter speech under adverse listening conditions, adaptive adjustments in perception can improve comprehension over time. In some cases, these adaptive changes require the presence of external information that disambiguates the distorted speech signals, whereas in other cases mere exposure is sufficient. Both external (e.g., written feedback) and internal (e.g., prior word knowledge) sources of information can be used to generate predictions about the correct mapping of a distorted speech signal. We hypothesize that these predictions provide a basis for determining the discrepancy between the expected and actual speech signal that can be used to guide adaptive changes in perception. This study provides the first empirical investigation that manipulates external and internal factors through (a) the availability of explicit external disambiguating information via the presence or absence of postresponse orthographic information paired with a repetition of the degraded stimulus, and (b) the accuracy of internally generated predictions; an acoustic distortion is introduced either abruptly or incrementally. The results demonstrate that the impact of external information on adaptive plasticity is contingent upon whether the intelligibility of the stimuli permits accurate internally generated predictions during exposure. External information sources enhance adaptive plasticity only when input signals are severely degraded and cannot reliably access internal predictions. This is consistent with a computational framework for adaptive plasticity in which error-driven supervised learning relies on the ability to compute sensory prediction error signals from both internal and external sources of information. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. The early maximum likelihood estimation model of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2015-01-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk−MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely… The study applies the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets, taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures…
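
    The maximum-likelihood view of cue integration that the early MLE model extends can be illustrated with the standard reliability-weighted fusion rule. This is a generic textbook sketch of MLE cue combination, not Andersen's model itself:

```python
def mle_fuse(xa, sigma_a, xv, sigma_v):
    """Fuse auditory cue xa and visual cue xv by inverse-variance weighting.

    Under MLE cue combination, each cue is weighted by its reliability
    (1 / variance); the fused estimate's standard deviation is never
    larger than that of the more reliable single cue.
    """
    wa, wv = 1.0 / sigma_a ** 2, 1.0 / sigma_v ** 2
    s_hat = (wa * xa + wv * xv) / (wa + wv)      # reliability-weighted mean
    sigma_av = (1.0 / (wa + wv)) ** 0.5          # fused standard deviation
    return s_hat, sigma_av
```

    In the early-MLE variant, this fusion operates on a continuous internal representation, and categorization (e.g., into /ba/, /da/, /ga/ responses) is applied only afterwards; late models instead combine already-categorized evidence.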

  8. How musical expertise shapes speech perception: evidence from auditory classification images.

    Science.gov (United States)

    Varnet, Léo; Wang, Tianyun; Peter, Chloe; Meunier, Fanny; Hoen, Michel

    2015-09-24

    It is now well established that extensive musical training percolates to higher levels of cognition, such as speech processing. However, the lack of a precise technique to investigate the specific listening strategy involved in speech comprehension has made it difficult to determine how musicians' higher performance in non-speech tasks contributes to their enhanced speech comprehension. The recently developed Auditory Classification Image approach reveals the precise time-frequency regions used by participants when performing phonemic categorizations in noise. Here we used this technique on 19 non-musicians and 19 professional musicians. We found that both groups used very similar listening strategies, but the musicians relied more heavily on the two main acoustic cues: the first formant onset and the onsets of the second and third formants. Additionally, they responded more consistently to stimuli. These observations provide a direct visualization of auditory plasticity resulting from extensive musical training and shed light on the level of functional transfer between auditory processing and speech perception.

  9. The development of multisensory speech perception continues into the late childhood years.

    Science.gov (United States)

    Ross, Lars A; Molholm, Sophie; Blanco, Daniella; Gomez-Ramirez, Manuel; Saint-Amour, Dave; Foxe, John J

    2011-06-01

    Observing a speaker's articulations substantially improves the intelligibility of spoken speech, especially under noisy listening conditions. This multisensory integration of speech inputs is crucial to effective communication. Appropriate development of this ability has major implications for children in classroom and social settings, and deficits in it have been linked to a number of neurodevelopmental disorders, especially autism. It is clear from structural imaging studies that there is a prolonged maturational course within regions of the perisylvian cortex that persists into late childhood, and these regions have been firmly established as being crucial to speech and language functions. Given this protracted maturational timeframe, we reasoned that multisensory speech processing might well show a similarly protracted developmental course. Previous work in adults has shown that audiovisual enhancement in word recognition is most apparent within a restricted range of signal-to-noise ratios (SNRs). Here, we investigated when these properties emerge during childhood by testing multisensory speech recognition abilities in typically developing children aged between 5 and 14 years, and comparing them with those of adults. By parametrically varying SNRs, we found that children benefited significantly less from observing visual articulations, displaying considerably less audiovisual enhancement. The findings suggest that improvement in the ability to recognize speech-in-noise and in audiovisual integration during speech perception continues quite late into the childhood years. The implication is that a considerable amount of multisensory learning remains to be achieved during the later schooling years, and that explicit efforts to accommodate this learning may well be warranted. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd. No claim to original US government works.

  10. Impact of the linguistic environment on speech perception : comparing bilingual and monolingual populations

    OpenAIRE

    Roessler, Abeba, 1981-

    2012-01-01

    The present dissertation set out to investigate how the linguistic environment affects speech perception. Three sets of studies have explored effects of bilingualism on word recognition in adults and infants and the impact of first-language linguistic knowledge on rule learning in adults. In the present work, we have found evidence in three auditory priming studies that bilingual adults, in contrast to monolinguals, have developed mechanisms to effectively overcome interference from irrelevant...

  11. Mapping a lateralisation gradient within the ventral stream for auditory speech perception

    OpenAIRE

    Karsten Specht

    2013-01-01

    Recent models on speech perception propose a dual stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend towards the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus...

  12. Mapping a lateralization gradient within the ventral stream for auditory speech perception

    OpenAIRE

    Specht, Karsten

    2013-01-01

    Recent models on speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend toward the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus....

  13. Influence of anesthesia techniques of caesarean section on memory, perception and speech

    Directory of Open Access Journals (Sweden)

    Volkov O.O.

    2014-06-01

    Full Text Available In obstetrics, postoperative cognitive dysfunction may occur after caesarean section and vaginal delivery, with poor outcomes for both mother and child. The goal was to study the influence of anesthesia techniques for caesarean section on memory, perception and speech. Following local ethics committee approval and informed consent, pregnant women were divided into 2 groups depending on the anesthesia method: the 1st group (n=31) had spinal anesthesia, the 2nd group (n=34) total intravenous anesthesia (TIVA). Spinal anesthesia: 1.8-2.2 mL of hyperbaric 0.5% bupivacaine. TIVA: thiopental sodium (4 mg kg-1), succinylcholine (1-1.5 mg kg-1). Fentanyl (10-5-3 µg kg-1 per hour) and diazepam (10 mg) were used after newborn extraction. We used Luria's test to assess memory; perception was studied with the "recognition of time" test, and speech with the "naming of fingers" test. Control points: 1 - before surgery, 2 - 24 h after the caesarean section, 3 - day 3 after surgery, 4 - at discharge from hospital (5-7th day). The study showed that the initially decreased memory level in expectant mothers regressed over the time after caesarean section. Memory was restored by 3 days after surgery regardless of anesthesia technique. With spinal anesthesia, the memory level on the 5-7th postoperative day exceeded that with total intravenous anesthesia. Perception and speech did not depend on the stage of the postoperative period. Anesthesia technique does not influence perception and speech restoration after caesarean section.

  14. Causal inference and temporal predictions in audiovisual perception of speech and music.

    Science.gov (United States)

    Noppeney, Uta; Lee, Hwee Ling

    2018-03-31

    To form a coherent percept of the environment, the brain must integrate sensory signals emanating from a common source but segregate those from different sources. Temporal regularities are prominent cues for multisensory integration, particularly for speech and music perception. In line with models of predictive coding, we suggest that the brain adapts an internal model to the statistical regularities in its environment. This internal model enables cross-sensory and sensorimotor temporal predictions as a mechanism to arbitrate between integration and segregation of signals from different senses. © 2018 New York Academy of Sciences.

  15. Electrophysiological measures of attention during speech perception predict metalinguistic skills in children

    Directory of Open Access Journals (Sweden)

    Lori Astheimer

    2014-01-01

    Full Text Available Event-related potential (ERP) evidence demonstrates that preschool-aged children selectively attend to informative moments such as word onsets during speech perception. Although this observation indicates a role for attention in language processing, it is unclear whether this type of attention is part of basic speech perception mechanisms, higher-level language skills, or general cognitive abilities. The current study examined these possibilities by measuring ERPs from 5-year-old children listening to a narrative containing attention probes presented before, during, and after word onsets as well as at random control times. Children also completed behavioral tests assessing verbal and nonverbal skills. Probes presented after word onsets elicited a more negative ERP response beginning around 100 ms after probe onset than control probes, indicating increased attention to word-initial segments. Crucially, the magnitude of this difference was correlated with performance on verbal tasks, but showed no relationship to nonverbal measures. More specifically, ERP attention effects were most strongly correlated with performance on a complex metalinguistic task involving grammaticality judgments. These results demonstrate that effective allocation of attention during speech perception supports higher-level, controlled language processing in children by allowing them to focus on relevant information at individual word and complex sentence levels.

  16. Degradation of labial information modifies audiovisual speech perception in cochlear-implanted children.

    Science.gov (United States)

    Huyse, Aurélie; Berthommier, Frédéric; Leybaert, Jacqueline

    2013-01-01

    The aim of the present study was to examine audiovisual speech integration in cochlear-implanted children and in normally hearing children exposed to degraded auditory stimuli. Previous studies have shown that speech perception in cochlear-implanted users is biased toward the visual modality when audition and vision provide conflicting information. Our main question was whether an experimentally designed degradation of the visual speech cue would increase the importance of audition in the response pattern. The impact of auditory proficiency was also investigated. A group of 31 children with cochlear implants and a group of 31 normally hearing children matched for chronological age were recruited. All children with cochlear implants had profound congenital deafness and had used their implants for at least 2 years. Participants had to perform an /aCa/ consonant-identification task in which stimuli were presented randomly in three conditions: auditory only, visual only, and audiovisual (congruent and incongruent McGurk stimuli). In half of the experiment, the visual speech cue was normal; in the other half (visual reduction) a degraded visual signal was presented, aimed at preventing lipreading of good quality. The normally hearing children received a spectrally reduced speech signal (simulating the input delivered by the cochlear implant). First, performance in visual-only and in congruent audiovisual modalities were decreased, showing that the visual reduction technique used here was efficient at degrading lipreading. Second, in the incongruent audiovisual trials, visual reduction led to a major increase in the number of auditory based responses in both groups. Differences between proficient and nonproficient children were found in both groups, with nonproficient children's responses being more visual and less auditory than those of proficient children. Further analysis revealed that differences between visually clear and visually reduced conditions and between

  17. New tests of the distal speech rate effect: Examining cross-linguistic generalization

    Directory of Open Access Journals (Sweden)

    Laura Dilley

    2013-12-01

    Full Text Available Recent findings [Dilley and Pitt, 2010. Psych. Science. 21, 1664-1670] have shown that manipulating context speech rate in English can cause entire syllables to disappear or appear perceptually. The current studies tested two rate-based explanations of this phenomenon while attempting to replicate and extend these findings to another language, Russian. In Experiment 1, native Russian speakers listened to Russian sentences which had been subjected to rate manipulations and performed a lexical report task. Experiment 2 investigated speech rate effects in cross-language speech perception; non-native speakers of Russian of both high and low proficiency were tested on the same Russian sentences as in Experiment 1. They decided between two lexical interpretations of a critical portion of the sentence, where one choice contained more phonological material than the other (e.g., /stərʌ'na/ "side" vs. /strʌ'na/ "country"). In both experiments, with native and non-native speakers of Russian, context speech rate and the relative duration of the critical sentence portion were found to influence the amount of phonological material perceived. The results support the generalized rate normalization hypothesis, according to which the content perceived in a spectrally ambiguous stretch of speech depends on the duration of that content relative to the surrounding speech, while showing that the findings of Dilley and Pitt (2010) extend to a variety of morphosyntactic contexts and a new language, Russian. Findings indicate that relative timing cues across an utterance can be critical to accurate lexical perception by both native and non-native speakers.

  18. Kalispel Non-Native Fish Suppression Project 2007 Annual Report.

    Energy Technology Data Exchange (ETDEWEB)

    Wingert, Michele; Andersen, Todd [Kalispel Natural Resource Department

    2008-11-18

    Non-native salmonids are impacting native salmonid populations throughout the Pend Oreille Subbasin. Competition, hybridization, and predation by non-native fish have been identified as primary factors in the decline of some native bull trout (Salvelinus confluentus) and westslope cutthroat trout (Oncorhynchus clarki lewisi) populations. In 2007, the Kalispel Natural Resource Department (KNRD) initiated the Kalispel Non-native Fish Suppression Project. The goal of this project is to implement actions to suppress or eradicate non-native fish in areas where native populations are declining or have been extirpated. These projects have previously been identified as critical to recovering native bull trout and westslope cutthroat trout (WCT). Lower Graham Creek was invaded by non-native rainbow trout (Oncorhynchus mykiss) and brook trout (Salvelinus fontinalis) after a small dam failed in 1991. By 2003, no genetically pure WCT remained in the lower 700 m of Graham Creek. Further invasion upstream is currently precluded by a relatively short section of steep, cascade-pool stepped channel that will likely be breached in the near future. In 2008, a fish management structure (barrier) was constructed at the mouth of Graham Creek to preclude further invasion of non-native fish into Graham Creek. The construction of the barrier was preceded by intensive electrofishing in the lower 700 m to remove and relocate all captured fish. Westslope cutthroat trout have recently been extirpated in Cee Cee Ah Creek due to displacement by brook trout. We propose treating Cee Cee Ah Creek with a piscicide to eradicate brook trout. Once eradication is complete, cutthroat trout will be translocated from nearby watersheds. In 2004, the Washington Department of Fish and Wildlife (WDFW) proposed an antimycin treatment within the subbasin; the project encountered significant public opposition and was eventually abandoned. However, over the course of planning this 2004 project, little public

  19. Perceptions of The Seriousness of Mispronunciations of English Speech Sounds

    Directory of Open Access Journals (Sweden)

    Moedjito Moedjito

    2006-01-01

    Full Text Available The present study attempts to investigate Indonesian EFL teachers’ and native English speakers’ perceptions of mispronunciations of English sounds by Indonesian EFL learners. For this purpose, a paper-form questionnaire consisting of 32 target mispronunciations was distributed to Indonesian secondary school teachers of English and also to native English speakers. An analysis of the respondents’ perceptions revealed that 14 of the 32 target mispronunciations are pedagogically significant in pronunciation instruction. A further analysis of the reasons for these major mispronunciations reconfirmed the prevalence of interference from learners’ native language in their English pronunciation as a major cause of mispronunciations. It also showed Indonesian EFL teachers’ tendency to overestimate the seriousness of their learners’ mispronunciations. Based on these findings, the study makes suggestions for better English pronunciation teaching in Indonesia and other EFL countries.

  20. Auditory Verbal Working Memory as a Predictor of Speech Perception in Modulated Maskers in Listeners With Normal Hearing.

    Science.gov (United States)

    Millman, Rebecca E; Mattys, Sven L

    2017-05-24

    Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the relationship between speech perception in modulated maskers and components of auditory verbal working memory (AVWM) over a range of signal-to-noise ratios. Speech perception in noise and AVWM were measured in 30 listeners (age range 31-67 years) with normal hearing. AVWM was estimated using forward digit recall, backward digit recall, and nonword repetition. After controlling for the effects of age and average pure-tone hearing threshold, speech perception in modulated maskers was related to individual differences in the phonological component of working memory (as assessed by nonword repetition) but only in the least favorable signal-to-noise ratio. The executive component of working memory (as assessed by backward digit recall) was not predictive of speech perception in any condition. AVWM is predictive of the ability to benefit from temporal dips in modulated maskers: Listeners with greater phonological WMC are better able to correctly identify sentences in modulated noise backgrounds.

  1. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training.

    Science.gov (United States)

    Bernstein, Lynne E; Auer, Edward T; Eberhardt, Silvio P; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.
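    The vocoding used to degrade the speech in this study has a simple signal-processing core: discard the fine spectral structure of the speech and keep only its slow amplitude envelope, which then modulates a carrier. A minimal one-channel sketch is below (illustrative only; real noise vocoders first band-pass the input into several channels and process each separately, and this is not the study's actual stimulus-generation code):

    ```python
    import math, random

    # One channel of a noise vocoder: rectify, smooth to get the slow
    # amplitude envelope, then impose that envelope on a noise carrier,
    # discarding the original temporal fine structure.

    def envelope(signal, win=64):
        rectified = [abs(s) for s in signal]
        return [sum(rectified[max(0, i - win):i + 1]) / (i + 1 - max(0, i - win))
                for i in range(len(signal))]

    def vocode_channel(signal, win=64, seed=0):
        rng = random.Random(seed)
        return [e * rng.uniform(-1, 1) for e in envelope(signal, win)]

    fs = 8000  # hypothetical sampling rate
    # Amplitude-modulated test tone: 1 kHz carrier, 4 Hz envelope.
    x = [math.sin(2 * math.pi * 4 * t / fs) * math.sin(2 * math.pi * 1000 * t / fs)
         for t in range(2000)]
    y = vocode_channel(x)
    # y keeps the slow 4 Hz envelope of x but not the 1 kHz fine structure.
    ```

    The moving-average window (64 samples, i.e. 8 ms at 8 kHz) spans several carrier cycles, so it smooths away the fine structure while tracking the slow modulation, which is the information cochlear-implant processors transmit.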

  2. Reliability of Interaural Time Difference-Based Localization Training in Elderly Individuals with Speech-in-Noise Perception Disorder

    Directory of Open Access Journals (Sweden)

    Maryam Delphi

    2017-09-01

    Full Text Available Background: Previous studies have shown that interaural-time-difference (ITD) training can improve localization ability. Surprisingly little is, however, known about localization training vis-à-vis speech perception in noise based on interaural time difference in the envelope (ITD ENV). We sought to investigate the reliability of an ITD ENV-based training program in speech-in-noise perception among elderly individuals with normal hearing and speech-in-noise disorder. Methods: The present interventional study was performed during 2016. Sixteen elderly men between 55 and 65 years of age with the clinical diagnosis of normal hearing up to 2000 Hz and speech-in-noise perception disorder participated in this study. The localization training program was based on changes in ITD ENV. In order to evaluate the reliability of the training program, we performed speech-in-noise tests before the training program, immediately afterward, and then at 2 months’ follow-up. The reliability of the training program was analyzed using the Friedman test and the SPSS software. Results: Significant statistical differences were shown in the mean scores of speech-in-noise perception between the 3 time points (P=0.001). The results also indicated no difference in the mean scores of speech-in-noise perception between the 2 time points of immediately after the training program and 2 months’ follow-up (P=0.212). Conclusion: The present study showed the reliability of an ITD ENV-based localization training program in elderly individuals with speech-in-noise perception disorder.
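    The ITD cue on which this training relies can be estimated from a binaural pair of signals as the lag that maximizes the interaural cross-correlation. A brute-force sketch (synthetic tone and a hypothetical sampling rate; this is not the study's training software):

    ```python
    import math

    def itd_from_crosscorr(left, right, fs):
        """Estimate the interaural time difference (seconds) as the lag that
        maximizes the cross-correlation between left- and right-ear signals."""
        n = len(left)
        best_lag, best_val = 0, -math.inf
        for lag in range(-n // 2, n // 2 + 1):
            acc = 0.0
            for i in range(n):
                j = i + lag
                if 0 <= j < n:
                    acc += left[i] * right[j]
            if acc > best_val:
                best_val, best_lag = acc, lag
        return best_lag / fs

    fs = 8000                                               # hypothetical rate
    tone = [math.sin(math.pi * t / 20) for t in range(200)] # 200 Hz tone
    delay = 4                                               # right ear lags 4 samples
    right = [0.0] * delay + tone[:-delay]
    print(itd_from_crosscorr(tone, right, fs))              # 4/8000 = 0.0005 s
    ```

    An ITD of 0.5 ms corresponds to a source well off to one side; training programs of this kind present systematically varied lags (here applied to the envelope rather than the fine structure) and ask the listener to judge direction.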

  3. NIS occurrence - Non-native species impacts on threatened and endangered salmonids

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The objectives of this project: a) Identify the distribution of non-natives in the Columbia River Basin b) Highlight the impacts of non-natives on salmonids c)...

  4. The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study

    Directory of Open Access Journals (Sweden)

    Arianna eLaCroix

    2015-08-01

    Full Text Available The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel’s Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch’s neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks and memory-related tasks. We then compared activation likelihood estimates within each category for music versus speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music.

  5. The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study

    Science.gov (United States)

    LaCroix, Arianna N.; Diaz, Alvaro F.; Rogalsky, Corianne

    2015-01-01

    The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music. PMID:26321976
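    The activation likelihood estimate (ALE) method used in these two records has a compact core: each study's reported peak coordinates are smoothed into Gaussian "blobs" of association probabilities, and the ALE value at each position is the probability that at least one study activates there. A one-dimensional toy sketch follows (hypothetical coordinates, kernel width, and peak probability; real ALE operates on 3-D brain maps with empirically derived kernels and permutation-based thresholding):

    ```python
    import math

    def modeled_activation(focus, grid, sigma=2.0, peak=0.9):
        """One study's smoothed peak: a Gaussian 'blob' of association
        probabilities (peak < 1, since ALE maps are probabilities)."""
        return [peak * math.exp(-((g - focus) ** 2) / (2 * sigma ** 2))
                for g in grid]

    def ale_map(foci, grid):
        """ALE at each position: the probability that at least one study's
        blob is active there, i.e. 1 - prod_i(1 - MA_i)."""
        maps = [modeled_activation(f, grid) for f in foci]
        return [1 - math.prod(1 - m[i] for m in maps) for i in range(len(grid))]

    grid = list(range(40))             # 1-D stand-in for voxel coordinates
    ale = ale_map([10, 12, 30], grid)  # peak coordinates from three "studies"
    print(ale.index(max(ale)))         # 11: between the two converging peaks
    ```

    Because the combination rule is probabilistic union, two nearby peaks reinforce each other: the ALE maximum lands between the converging foci at 10 and 12 and exceeds the value over the isolated focus at 30, which is exactly the notion of cross-study convergence the meta-analysis relies on.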

  6. A positron emission tomography study of the neural basis of informational and energetic masking effects in speech perception

    Science.gov (United States)

    Scott, Sophie K.; Rosen, Stuart; Wickham, Lindsay; Wise, Richard J. S.

    2004-02-01

    Positron emission tomography (PET) was used to investigate the neural basis of the comprehension of speech in unmodulated noise ("energetic" masking, dominated by effects at the auditory periphery), and when presented with another speaker ("informational" masking, dominated by more central effects). Each type of signal was presented at four different signal-to-noise ratios (SNRs) (+3, 0, -3, -6 dB for the speech-in-speech, +6, +3, 0, -3 dB for the speech-in-noise), with listeners instructed to listen for meaning to the target speaker. Consistent with behavioral studies, there was SNR-dependent activation associated with the comprehension of speech in noise, with no SNR-dependent activity for the comprehension of speech-in-speech (at low or negative SNRs). There was, in addition, activation in bilateral superior temporal gyri which was associated with the informational masking condition. The extent to which this activation of classical "speech" areas of the temporal lobes might delineate the neural basis of the informational masking is considered, as is the relationship of these findings to the interfering effects of unattended speech and sound on more explicit working memory tasks. This study is a novel demonstration of candidate neural systems involved in the perception of speech in noisy environments, and of the processing of multiple speakers in the dorso-lateral temporal lobes.
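    SNR conditions like those listed above are typically constructed by scaling the masker so that the target-to-masker RMS ratio, in dB, equals the nominal SNR. A minimal sketch (illustrative sinusoidal stand-ins, not the study's stimuli or procedure):

    ```python
    import math

    def rms(x):
        return math.sqrt(sum(v * v for v in x) / len(x))

    def mix_at_snr(target, masker, snr_db):
        """Scale the masker so that 20*log10(rms(target)/rms(scaled masker))
        equals snr_db, then sum the two signals."""
        gain = rms(target) / (rms(masker) * 10 ** (snr_db / 20))
        return [t + gain * m for t, m in zip(target, masker)]

    # Illustrative stand-ins for a speech target and a noise masker.
    speech = [math.sin(0.1 * i) for i in range(1000)]
    noise = [math.sin(0.37 * i + 1.0) for i in range(1000)]
    mixed = mix_at_snr(speech, noise, -3)     # target 3 dB below the masker

    # Recover the scaled masker and verify the realized SNR:
    scaled = [mixed[i] - speech[i] for i in range(1000)]
    print(round(20 * math.log10(rms(speech) / rms(scaled)), 1))  # -3.0
    ```

    The same helper covers both masker types in the study: for energetic masking the masker is noise, for informational masking it is another talker's speech; only the signal passed as `masker` changes.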

  7. The role of continuous low-frequency harmonicity cues for interrupted speech perception in bimodal hearing.

    Science.gov (United States)

    Oh, Soo Hee; Donaldson, Gail S; Kong, Ying-Yee

    2016-04-01

    Low-frequency acoustic cues have been shown to enhance speech perception by cochlear-implant users, particularly when target speech occurs in a competing background. The present study examined the extent to which a continuous representation of low-frequency harmonicity cues contributes to bimodal benefit in simulated bimodal listeners. Experiment 1 examined the benefit of restoring a continuous temporal envelope to the low-frequency ear while the vocoder ear received a temporally interrupted stimulus. Experiment 2 examined the effect of providing continuous harmonicity cues in the low-frequency ear as compared to restoring a continuous temporal envelope in the vocoder ear. Findings indicate that bimodal benefit for temporally interrupted speech increases when continuity is restored to either or both ears. The primary benefit appears to stem from the continuous temporal envelope in the low-frequency region providing additional phonetic cues related to manner and F1 frequency; a secondary contribution is provided by low-frequency harmonicity cues when a continuous representation of the temporal envelope is present in the low-frequency, or both ears. The continuous temporal envelope and harmonicity cues of low-frequency speech are thought to support bimodal benefit by facilitating identification of word and syllable boundaries, and by restoring partial phonetic cues that occur during gaps in the temporally interrupted stimulus.

  8. Articulatory mediation of speech perception: a causal analysis of multi-modal imaging data.

    Science.gov (United States)

    Gow, David W; Segawa, Jennifer A

    2009-02-01

    The inherent confound between the organization of articulation and the acoustic-phonetic structure of the speech signal makes it exceptionally difficult to evaluate the competing claims of motor and acoustic-phonetic accounts of how listeners recognize coarticulated speech. Here we use Granger causality analyses of high spatiotemporal resolution neural activation data derived from the integration of magnetic resonance imaging, magnetoencephalography and electroencephalography, to examine the role of lexical and articulatory mediation in listeners' ability to use phonetic context to compensate for place assimilation. Listeners heard two-word phrases such as pen pad and then saw two pictures, from which they had to select the one that depicted the phrase. Assimilation, lexical competitor environment and the phonological validity of assimilation context were all manipulated. Behavioral data showed an effect of context on the interpretation of assimilated segments. Analysis of 40 Hz gamma phase locking patterns identified a large distributed neural network including 16 distinct regions of interest (ROIs) spanning portions of both hemispheres in the first 200 ms of post-assimilation context. Granger analyses of individual conditions showed differing patterns of causal interaction between ROIs during this interval, with hypothesized lexical and articulatory structures and pathways driving phonetic activation in the posterior superior temporal gyrus in assimilation conditions, but not in phonetically unambiguous conditions. These results lend strong support to the motor theory of speech perception, and clarify the role of lexical mediation in the phonetic processing of assimilated speech.
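    Granger causality, the tool this study applies to neural time series, reduces to a model comparison: signal X "Granger-causes" signal Y if Y's past together with X's past predicts Y better than Y's past alone. A self-contained toy sketch on synthetic data (one lag, ordinary least squares; real analyses use more lags, F-tests, and the study's multimodal source estimates):

    ```python
    import random

    def residual_var(y, preds):
        """OLS of y on the given predictor series (plus an intercept);
        returns the residual variance of the fit."""
        n = len(y)
        X = [[1.0] + [p[t] for p in preds] for t in range(n)]
        k = len(X[0])
        # Solve the normal equations (X'X) b = X'y by Gauss-Jordan elimination.
        A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(k)]
             + [sum(X[t][i] * y[t] for t in range(n))] for i in range(k)]
        for c in range(k):
            piv = max(range(c, k), key=lambda r: abs(A[r][c]))
            A[c], A[piv] = A[piv], A[c]
            for r in range(k):
                if r != c:
                    f = A[r][c] / A[c][c]
                    A[r] = [a - f * b for a, b in zip(A[r], A[c])]
        b = [A[i][k] / A[i][i] for i in range(k)]
        resid = [y[t] - sum(b[j] * X[t][j] for j in range(k)) for t in range(n)]
        return sum(e * e for e in resid) / n

    random.seed(1)
    x = [random.gauss(0, 1) for _ in range(500)]
    # y is driven by the previous value of x: x "Granger-causes" y.
    y = [0.0] + [0.8 * x[t - 1] + 0.1 * random.gauss(0, 1) for t in range(1, 500)]

    yt, ylag, xlag = y[1:], y[:-1], x[:-1]
    v_restricted = residual_var(yt, [ylag])        # past of y only
    v_full = residual_var(yt, [ylag, xlag])        # past of y and of x
    print(v_full < v_restricted)  # True: x's past improves prediction of y
    ```

    The same comparison, run between pairs of regions of interest, is what lets the study argue that articulatory areas drive phonetic activation in superior temporal cortex rather than merely co-activating with it.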

  9. Social performance deficits in social anxiety disorder: reality during conversation and biased perception during speech.

    Science.gov (United States)

    Voncken, Marisol J; Bögels, Susan M

    2008-12-01

    Cognitive models emphasize that patients with social anxiety disorder (SAD) are mainly characterized by biased perception of their social performance. In addition, there is a growing body of evidence showing that SAD patients suffer from actual deficits in social interaction. To unravel what characterizes SAD patients the most, underestimation of social performance (defined as the discrepancy between self-perceived and observer-perceived social performance), or actual (observer-perceived) social performance, 48 patients with SAD and 27 normal control participants were observed during a speech and conversation. Consistent with the cognitive model of SAD, patients with SAD underestimated their social performance relative to control participants during the two interactions, but primarily during the speech. Actual social performance deficits were clearly apparent in the conversation but not in the speech. In conclusion, interactions that pull for more interpersonal skills, like a conversation, elicit more actual social performance deficits whereas, situations with a performance character, like a speech, bring about more cognitive distortions in patients with SAD.

  10. The socially weighted encoding of spoken words: a dual-route approach to speech perception.

    Science.gov (United States)

    Sumner, Meghan; Kim, Seung Kyung; King, Ed; McGowan, Kevin B

    2013-01-01

    Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox in the literature that results, we argue, from the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially weighted, resulting in sparse, but high-resolution clusters of socially idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  11. Talker-specific learning in amnesia: Insight into mechanisms of adaptive speech perception.

    Science.gov (United States)

    Trude, Alison M; Duff, Melissa C; Brown-Schmidt, Sarah

    2014-05-01

    A hallmark of human speech perception is the ability to comprehend speech quickly and effortlessly despite enormous variability across talkers. However, current theories of speech perception do not make specific claims about the memory mechanisms involved in this process. To examine whether declarative memory is necessary for talker-specific learning, we tested the ability of amnesic patients with severe declarative memory deficits to learn and distinguish the accents of two unfamiliar talkers by monitoring their eye-gaze as they followed spoken instructions. Analyses of the time-course of eye fixations showed that amnesic patients rapidly learned to distinguish these accents and tailored perceptual processes to the voice of each talker. These results demonstrate that declarative memory is not necessary for this ability and point to the involvement of non-declarative memory mechanisms. These results are consistent with findings that other social and accommodative behaviors are preserved in amnesia and contribute to our understanding of the interactions of multiple memory systems in the use and understanding of spoken language.

  12. Hearing Loss in Children With Otitis Media With Effusion: Actual and Simulated Effects on Speech Perception.

    Science.gov (United States)

    Cai, Ting; McPherson, Bradley; Li, Caiwei; Yang, Feng

    2017-11-14

    Conductive hearing loss simulations have attempted to estimate the speech-understanding difficulties of children with otitis media with effusion (OME). However, the validity of this approach has not been evaluated. The research aim of the present study was to investigate whether a simple, frequency-specific, attenuation-based simulation of OME-related hearing loss was able to reflect the actual effects of conductive hearing loss on speech perception. Forty-one school-age children with OME-related hearing loss were recruited. Each child with OME was matched with a same-sex and same-age counterpart with normal hearing to make a participant pair. Pure-tone threshold differences at octave frequencies from 125 to 8000 Hz for every participant pair were used as the simulation attenuation levels for the normal-hearing children. Another group of 41 school-age otologically normal children were recruited as a control group without actual or simulated hearing loss. The Mandarin Hearing in Noise Test was utilized, and sentence recall accuracy at four signal-to-noise ratios (SNRs) considered representative of classroom-listening conditions was derived, as well as reception thresholds for sentences (RTS) in quiet and in noise using adaptive protocols. The speech perception in quiet and in noise of children with simulated OME-related hearing loss was significantly poorer than that of otologically normal children. Analysis showed that RTS in quiet of children with OME-related hearing loss and of children with simulated OME-related hearing loss was significantly correlated and comparable. A repeated-measures analysis suggested that sentence recall accuracy obtained at 5-dB SNR, 0-dB SNR, and -5-dB SNR was similar between children with actual and simulated OME-related hearing loss. However, RTS in noise in children with OME was significantly better than that for children with simulated OME-related hearing loss. 
The present frequency-specific, attenuation-based simulation method reflected

  13. Visual feedback of tongue movement for novel speech sound learning

    Directory of Open Access Journals (Sweden)

    William F Katz

    2015-11-01

    Full Text Available Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one’s own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker’s learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ̠/, a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers’ productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing.

  14. Magnified Neural Envelope Coding Predicts Deficits in Speech Perception in Noise.

    Science.gov (United States)

    Millman, Rebecca E; Mattys, Sven L; Gouws, André D; Prendergast, Garreth

    2017-08-09

    Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction in fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations in intensity in amplitude-modulated sounds. SNHL-induced changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of SNHL listeners to understand speech in the presence of modulated background noise. To date, direct evidence for a link between magnified envelope coding and deficits in speech identification in modulated noise has been absent. Here, magnetoencephalography was used to quantify the effects of SNHL on phase locking to the temporal envelope of modulated noise (envelope coding) in human auditory cortex. Our results show that SNHL enhances the amplitude of envelope coding in posteromedial auditory cortex, whereas it enhances the fidelity of envelope coding in posteromedial and posterolateral auditory cortex. This dissociation was more evident in the right hemisphere, demonstrating functional lateralization in enhanced envelope coding in SNHL listeners. However, enhanced envelope coding was not perceptually beneficial. Our results also show that both hearing thresholds and, to a lesser extent, magnified cortical envelope coding in left posteromedial auditory cortex predict speech identification in modulated background noise. We propose a framework in which magnified envelope coding in posteromedial auditory cortex disrupts the segregation of speech from background noise, leading to deficits in speech perception in modulated background noise. SIGNIFICANCE STATEMENT People with hearing loss struggle to follow conversations in noisy environments. Background noise that fluctuates in intensity over time poses a particular challenge. Using magnetoencephalography, we demonstrate

  15. Age-Related Differences in Speech Rate Perception Do Not Necessarily Entail Age-Related Differences in Speech Rate Use

    Science.gov (United States)

    Heffner, Christopher C.; Newman, Rochelle S.; Dilley, Laura C.; Idsardi, William J.

    2015-01-01

    Purpose: A new literature has suggested that speech rate can influence the parsing of words quite strongly in speech. The purpose of this study was to investigate differences between younger adults and older adults in the use of context speech rate in word segmentation, given that older adults perceive timing information differently from younger…

  16. Conflict monitoring in speech processing : An fMRI study of error detection in speech production and perception

    NARCIS (Netherlands)

    Gauvin, Hanna; De Baene, W.; Brass, Marcel; Hartsuiker, Robert

    2016-01-01

    To minimize the number of errors in speech, and thereby facilitate communication, speech is monitored before articulation. It is, however, unclear at which level during speech production monitoring takes place, and what mechanisms are used to detect and correct errors. The present study investigated

  17. The effect of instantaneous input dynamic range setting on the speech perception of children with the nucleus 24 implant.

    Science.gov (United States)

    Davidson, Lisa S; Skinner, Margaret W; Holstad, Beth A; Fears, Beverly T; Richter, Marie K; Matusofsky, Margaret; Brenner, Christine; Holden, Timothy; Birath, Amy; Kettel, Jerrica L; Scollie, Susan

    2009-06-01

    The purpose of this study was to examine the effects of a wider instantaneous input dynamic range (IIDR) setting on speech perception and comfort in quiet and noise for children wearing the Nucleus 24 implant system and the Freedom speech processor. In addition, children's ability to understand soft and conversational-level speech in relation to aided sound-field thresholds was examined. Thirty children (age, 7 to 17 years) with the Nucleus 24 cochlear implant system and the Freedom speech processor with two different IIDR settings (30 versus 40 dB) were tested on the Consonant Nucleus Consonant (CNC) word test at 50 and 60 dB SPL, the Bamford-Kowal-Bench Speech in Noise Test, and a loudness rating task for four-talker speech noise. Aided thresholds for frequency-modulated tones, narrowband noise, and recorded Ling sounds were obtained with the two IIDRs and examined in relation to CNC scores at 50 dB SPL. Speech Intelligibility Indices were calculated using the long-term average speech spectrum of the CNC words at 50 dB SPL measured at each test site and aided thresholds. Group mean CNC scores at 50 dB SPL were significantly higher with the 40 dB IIDR than with the 30 dB IIDR, whereas scores on the Bamford-Kowal-Bench Speech in Noise Test were not significantly different for the two IIDRs. Significantly improved aided thresholds at 250 to 6000 Hz, as well as higher Speech Intelligibility Indices, afforded improved audibility for speech presented at soft levels (50 dB SPL). These results indicate that an increased IIDR provides improved word recognition for soft levels of speech without compromising comfort of higher levels of speech sounds or sentence recognition in noise.
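    The Speech Intelligibility Index computed in this study is, at its core, an importance-weighted sum of band audibilities: in each frequency band, the fraction of the roughly 30 dB speech dynamic range that sits above the listener's aided threshold, weighted by that band's contribution to intelligibility. A simplified sketch follows (hypothetical band levels, thresholds, and importance weights; the actual ANSI S3.5 procedure adds masking and level-distortion terms):

    ```python
    # Simplified Speech Intelligibility Index: per band, clip the audible
    # fraction of the 30 dB speech range to [0, 1], then weight and sum.

    def sii(speech_levels_db, thresholds_db, importances):
        assert abs(sum(importances) - 1.0) < 1e-9
        total = 0.0
        for spl, thr, imp in zip(speech_levels_db, thresholds_db, importances):
            audible = (spl + 15 - thr) / 30    # speech peaks ~15 dB above mean
            total += imp * min(1.0, max(0.0, audible))
        return total

    bands_hz = [250, 500, 1000, 2000, 4000]    # hypothetical octave bands
    ltass = [52, 54, 49, 43, 38]               # long-term speech levels (dB)
    importances = [0.10, 0.20, 0.25, 0.25, 0.20]

    print(round(sii(ltass, [20, 20, 25, 30, 35], importances), 2))  # good thresholds
    print(round(sii(ltass, [40, 45, 50, 55, 60], importances), 2))  # poorer thresholds
    ```

    With the better aided thresholds most of the speech range is audible in every band, so the index approaches 1; raising the thresholds pushes band audibilities toward 0 and the index falls, which is why improved aided thresholds translated into higher indices and better soft-speech recognition in the study.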

  18. Relative Weighting of Semantic and Syntactic Cues in Native and Non-Native Listeners' Recognition of English Sentences.

    Science.gov (United States)

    Shi, Lu-Feng; Koenig, Laura L

    2016-01-01

    Non-native listeners do not recognize English sentences as effectively as native listeners, especially in noise. It is not entirely clear to what extent such group differences arise from differences in relative weight of semantic versus syntactic cues. This study quantified the use and weighting of these contextual cues via Boothroyd and Nittrouer's j and k factors. The j represents the probability of recognizing sentences with or without context, whereas the k represents the degree to which context improves recognition performance. Four groups of 13 normal-hearing young adult listeners participated. One group consisted of native English monolingual (EMN) listeners, whereas the other three consisted of non-native listeners contrasting in their language dominance and first language: English-dominant Russian-English, Russian-dominant Russian-English, and Spanish-dominant Spanish-English bilinguals. All listeners were presented three sets of four-word sentences: high-predictability sentences included both semantic and syntactic cues, low-predictability sentences included syntactic cues only, and zero-predictability sentences included neither semantic nor syntactic cues. Sentences were presented at 65 dB SPL binaurally in the presence of speech-spectrum noise at +3 dB SNR. Listeners orally repeated each sentence and recognition was calculated for individual words as well as the sentence as a whole. Comparable j values across groups for high-predictability, low-predictability, and zero-predictability sentences suggested that all listeners, native and non-native, utilized contextual cues to recognize English sentences. Analysis of the k factor indicated that non-native listeners took advantage of syntax as effectively as EMN listeners. However, only English-dominant bilinguals utilized semantics to the same extent as EMN listeners; semantics did not provide a significant benefit for the two non-English-dominant groups. 
When combined, semantics and syntax benefitted EMN
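    The j and k factors referenced in this abstract come from Boothroyd and Nittrouer's (1988) probabilistic framework: whole-unit recognition is modeled as p_whole = p_part ** j, and the benefit of context as (1 - p_context) = (1 - p_no_context) ** k. A short sketch with illustrative probabilities (not the study's data):

```python
import math

def j_factor(p_whole, p_part):
    """j in p_whole = p_part ** j: the effective number of independent
    parts a listener must get right to recognize the whole."""
    return math.log(p_whole) / math.log(p_part)

def k_factor(p_context, p_no_context):
    """k in (1 - p_context) = (1 - p_no_context) ** k: the factor by
    which context multiplies the effective amount of information; k > 1
    means context improves recognition."""
    return math.log(1.0 - p_context) / math.log(1.0 - p_no_context)

# Illustrative recognition probabilities:
p_word_zp = 0.60   # word score, zero-predictability sentences
p_sent_zp = 0.13   # whole-sentence score, zero-predictability
p_word_hp = 0.80   # word score, high-predictability sentences

print(j_factor(p_sent_zp, p_word_zp))   # near 4: four independent words
print(k_factor(p_word_hp, p_word_zp))   # greater than 1: context helps
```

    For four-word sentences with no contextual glue, j comes out near 4 (every word is an independent hurdle); semantic and syntactic cues lower j and raise k.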

  19. Speech Perception With Combined Electric-Acoustic Stimulation: A Simulation and Model Comparison.

    Science.gov (United States)

    Rader, Tobias; Adel, Youssef; Fastl, Hugo; Baumann, Uwe

    2015-01-01

    The aim of this study is to simulate speech perception with combined electric-acoustic stimulation (EAS), verify the advantage of combined stimulation in normal-hearing (NH) subjects, and then compare it with cochlear implant (CI) and EAS user results from the authors' previous study. Furthermore, an automatic speech recognition (ASR) system was built to examine the impact of low-frequency information and is proposed as an applied model to study different hypotheses of the combined-stimulation advantage. Signal-detection-theory (SDT) models were applied to assess predictions of subject performance without the need to assume any synergistic effects. Speech perception was tested using a closed-set matrix test (Oldenburg sentence test), and its speech material was processed to simulate CI and EAS hearing. A total of 43 NH subjects and a customized ASR system were tested. CI hearing was simulated by an aurally adequate signal spectrum analysis and representation, the part-tone-time-pattern, which was vocoded at 12 center frequencies according to the MED-EL DUET speech processor. Residual acoustic hearing was simulated by low-pass (LP)-filtered speech with cutoff frequencies 200 and 500 Hz for NH subjects and in the range from 100 to 500 Hz for the ASR system. Speech reception thresholds were determined in amplitude-modulated noise and in pseudocontinuous noise. Previously proposed SDT models were lastly applied to predict NH subject performance with EAS simulations. NH subjects tested with EAS simulations demonstrated the combined-stimulation advantage. Increasing the LP cutoff frequency from 200 to 500 Hz significantly improved speech reception thresholds in both noise conditions. In continuous noise, CI and EAS users showed generally better performance than NH subjects tested with simulations. In modulated noise, performance was comparable except for the EAS at cutoff frequency 500 Hz where NH subject performance was superior. 
The ASR system showed similar behavior
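    CI simulations of the kind described above are commonly implemented as envelope vocoders; the study used a MED-EL-specific part-tone-time-pattern analysis, but a generic noise vocoder plus a low-pass "acoustic" branch conveys the combined-stimulation idea. A numpy-only sketch, in which the channel edges, cutoff, and toy input signal are illustrative assumptions:

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (numpy-only Hilbert transform)."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1
    h[1:(len(x) + 1) // 2] = 2
    if len(x) % 2 == 0:
        h[len(x) // 2] = 1
    return np.fft.ifft(X * h)

def bandpass(x, fs, lo, hi):
    """Ideal (FFT brick-wall) band-pass filter."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f >= hi)] = 0
    return np.fft.irfft(X, len(x))

def noise_vocoder(x, fs, edges, rng):
    """Per-band envelope modulates band-limited noise ('electric' hearing)."""
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = np.abs(analytic(bandpass(x, fs, lo, hi)))
        env = bandpass(env, fs, 0, 50)                 # smooth envelope (<50 Hz)
        carrier = bandpass(rng.standard_normal(len(x)), fs, lo, hi)
        out += np.maximum(env, 0) * carrier
    return out

def eas_simulation(x, fs, edges, lp_cutoff, rng):
    """EAS: vocoded 'electric' part plus low-pass 'acoustic' residual."""
    return noise_vocoder(x, fs, edges, rng) + bandpass(x, fs, 0, lp_cutoff)

fs = 16000
t = np.arange(fs) / fs
toy = np.sin(2 * np.pi * 150 * t) * (1 + np.sin(2 * np.pi * 4 * t))  # toy input
edges = np.geomspace(300, 7000, 13)                   # 12 channels, log-spaced
y = eas_simulation(toy, fs, edges, lp_cutoff=500, rng=np.random.default_rng(0))
print(y.shape)
```

    Raising `lp_cutoff` from 200 to 500 Hz passes more voicing and F0 information through the acoustic branch, which is the manipulation behind the study's improved speech reception thresholds.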

  20. Communication in a noisy environment: Perception of one's own voice and speech enhancement

    Science.gov (United States)

    Le Cocq, Cecile

Workers in noisy industrial environments are often confronted with communication problems. Many workers complain about not being able to communicate easily with their coworkers when they wear hearing protectors. As a consequence, they tend to remove their protectors, which exposes them to the risk of hearing loss. This communication problem is in fact a double one: first, hearing protectors modify the perception of one's own voice; second, they interfere with understanding speech from others. This double problem is examined in this thesis. When wearing hearing protectors, the modification of one's own voice perception is partly due to the occlusion effect produced when an earplug is inserted in the ear canal. This occlusion effect has two main consequences: first, low-frequency physiological noises are better perceived; second, the perception of one's own voice is modified. In order to better understand this phenomenon, results from the literature are analyzed systematically, and a new method to quantify the occlusion effect is developed. Instead of stimulating the skull with a bone vibrator or asking the subject to speak, as is usually done in the literature, the buccal cavity is excited with an acoustic wave. The experiment is designed in such a way that the acoustic wave exciting the buccal cavity does not directly excite the external ear or the rest of the body. Measurement of the hearing threshold with the ear open and occluded is used to quantify the subjective occlusion effect for an acoustic wave in the buccal cavity. These experimental results, together with those reported in the literature, have led to a better understanding of the occlusion effect and an evaluation of the role of each internal path from the acoustic source to the internal ear. The speech intelligibility from others is altered by both the high sound levels of noisy industrial environments and the speech signal attenuation due to hearing

  1. Self-organizing maps for measuring similarity of audiovisual speech percepts

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich

    The goal of this work is to find a way to measure similarity of audiovisual speech percepts. Phoneme-related self-organizing maps (SOM) with a rectangular basis are trained with data material from a (labeled) video film. For the training, a combination of auditory speech features and corresponding....... Dependent on the training data, these other units may also be contextually immediate neighboring units. The poster demonstrates the idea with text material spoken by one individual subject using a set of simple audio-visual features. The data material for the training process consists of 44 labeled...... sentences in German with a balanced phoneme repertoire. As a result it can be stated that (i) the SOM can be trained to map auditory and visual features in a topology-preserving way and (ii) they show strain due to the influence of other audio-visual units. The SOM can be used to measure similarity amongst...
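    A rectangular SOM of the kind described here can be sketched compactly: each training sample pulls its best-matching unit and a shrinking Gaussian neighborhood of units toward itself, yielding a topology-preserving map. A minimal numpy sketch in which the grid size, learning schedule, and toy stand-in "audio-visual features" are all illustrative assumptions:

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal rectangular self-organizing map (numpy only)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.standard_normal((h, w, data.shape[1]))
    # Grid coordinates for the neighborhood function.
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = lr0 * (1 - frac)                 # decaying learning rate
            sigma = sigma0 * (1 - frac) + 0.5     # shrinking neighborhood
            # Best-matching unit (BMU).
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood pulls nearby units toward the sample.
            g = np.exp(-((yy - by) ** 2 + (xx - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * g[:, :, None] * (x - weights)
            step += 1
    return weights

# Toy 2-D "features": three clusters standing in for phoneme classes.
rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
data = np.vstack([c + 0.3 * rng.standard_normal((40, 2)) for c in centers])
som = train_som(data)
print(som.shape)
```

    After training, distances between a sample's BMU and neighboring units can serve as the kind of similarity measure between audio-visual percepts that the abstract proposes.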

  2. Speech perception and reading: two parallel modes of understanding language and implications for acquiring literacy naturally.

    Science.gov (United States)

    Massaro, Dominic W

    2012-01-01

    I review 2 seminal research reports published in this journal during its second decade more than a century ago. Given psychology's subdisciplines, they would not normally be reviewed together because one involves reading and the other speech perception. The small amount of interaction between these domains might have limited research and theoretical progress. In fact, the 2 early research reports revealed common processes involved in these 2 forms of language processing. Their illustration of the role of Wundt's apperceptive process in reading and speech perception anticipated descriptions of contemporary theories of pattern recognition, such as the fuzzy logical model of perception. Based on the commonalities between reading and listening, one can question why they have been viewed so differently. It is commonly believed that learning to read requires formal instruction and schooling, whereas spoken language is acquired from birth onward through natural interactions with people who talk. Most researchers and educators believe that spoken language is acquired naturally from birth onward and even prenatally. Learning to read, on the other hand, is not possible until the child has acquired spoken language, reaches school age, and receives formal instruction. If an appropriate form of written text is made available early in a child's life, however, the current hypothesis is that reading will also be learned inductively and emerge naturally, with no significant negative consequences. If this proposal is true, it should soon be possible to create an interactive system, Technology Assisted Reading Acquisition, to allow children to acquire literacy naturally.

  3. Speech monitoring and phonologically-mediated eye gaze in language perception and production: a comparison using printed word eye-tracking

    Science.gov (United States)

    Gauvin, Hanna S.; Hartsuiker, Robert J.; Huettig, Falk

    2013-01-01

    The Perceptual Loop Theory of speech monitoring assumes that speakers routinely inspect their inner speech. In contrast, Huettig and Hartsuiker (2010) observed that listening to one's own speech during language production drives eye-movements to phonologically related printed words with a similar time-course as listening to someone else's speech does in speech perception experiments. This suggests that speakers use their speech perception system to listen to their own overt speech, but not to their inner speech. However, a direct comparison between production and perception with the same stimuli and participants is lacking so far. The current printed word eye-tracking experiment therefore used a within-subjects design, combining production and perception. Displays showed four words, of which one, the target, either had to be named or was presented auditorily. Accompanying words were phonologically related, semantically related, or unrelated to the target. There were small increases in looks to phonological competitors with a similar time-course in both production and perception. Phonological effects in perception however lasted longer and had a much larger magnitude. We conjecture that this difference is related to a difference in predictability of one's own and someone else's speech, which in turn has consequences for lexical competition in other-perception and possibly suppression of activation in self-perception. PMID:24339809

  4. Aquatic macroinvertebrate responses to native and non-native predators

    Directory of Open Access Journals (Sweden)

    Haddaway N. R.

    2014-01-01

Full Text Available Non-native species can profoundly affect native ecosystems through trophic interactions with native species. Native prey may respond differently to non-native versus native predators since they lack prior experience. Here we investigate antipredator responses of two common freshwater macroinvertebrates, Gammarus pulex and Potamopyrgus jenkinsi, to olfactory cues from three predators: sympatric native fish (Gasterosteus aculeatus), sympatric native crayfish (Austropotamobius pallipes), and novel invasive crayfish (Pacifastacus leniusculus). G. pulex responded differently to fish and crayfish, showing enhanced locomotion in response to fish but a preference for the dark over the light in response to the crayfish. P. jenkinsi showed increased vertical migration in response to all three predator cues relative to controls. These different responses to fish and crayfish are hypothesised to reflect the predators' differing predation types: benthic for crayfish and pelagic for fish. However, we found no difference in response to native versus invasive crayfish, indicating that prey naiveté is unlikely to drive the impacts of invasive crayfish. The Predator Recognition Continuum Hypothesis proposes that the benefits of generalisable predator recognition outweigh the costs when predators are diverse. Generalised responses of prey as observed here will be adaptive in the presence of an invader, and may reduce novel predators' potential impacts.

  5. Relationships Among Peripheral and Central Electrophysiological Measures of Spatial and Spectral Selectivity and Speech Perception in Cochlear Implant Users.

    Science.gov (United States)

    Scheperle, Rachel A; Abbas, Paul J

    2015-01-01

    The ability to perceive speech is related to the listener's ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. Speech-perception measures included vowel discrimination and the Bamford-Kowal-Bench Speech

  6. Deficits in audiovisual speech perception in normal aging emerge at the level of whole-word recognition.

    Science.gov (United States)

    Stevenson, Ryan A; Nelms, Caitlin E; Baum, Sarah H; Zurkovsky, Lilia; Barense, Morgan D; Newhouse, Paul A; Wallace, Mark T

    2015-01-01

    Over the next 2 decades, a dramatic shift in the demographics of society will take place, with a rapid growth in the population of older adults. One of the most common complaints with healthy aging is a decreased ability to successfully perceive speech, particularly in noisy environments. In such noisy environments, the presence of visual speech cues (i.e., lip movements) provide striking benefits for speech perception and comprehension, but previous research suggests that older adults gain less from such audiovisual integration than their younger peers. To determine at what processing level these behavioral differences arise in healthy-aging populations, we administered a speech-in-noise task to younger and older adults. We compared the perceptual benefits of having speech information available in both the auditory and visual modalities and examined both phoneme and whole-word recognition across varying levels of signal-to-noise ratio. For whole-word recognition, older adults relative to younger adults showed greater multisensory gains at intermediate SNRs but reduced benefit at low SNRs. By contrast, at the phoneme level both younger and older adults showed approximately equivalent increases in multisensory gain as signal-to-noise ratio decreased. Collectively, the results provide important insights into both the similarities and differences in how older and younger adults integrate auditory and visual speech cues in noisy environments and help explain some of the conflicting findings in previous studies of multisensory speech perception in healthy aging. These novel findings suggest that audiovisual processing is intact at more elementary levels of speech perception in healthy-aging populations and that deficits begin to emerge only at the more complex word-recognition level of speech signals. Copyright © 2015 Elsevier Inc. All rights reserved.
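    The "multisensory gain" compared across signal-to-noise ratios in studies like this one is often quantified, in the Sumby-Pollack tradition, as visual enhancement normalized by the room left for improvement; whether this exact formulation was used here is an assumption. A minimal sketch with illustrative scores:

```python
def visual_gain(av, a):
    """Visual enhancement normalized by room for improvement:
    (AV - A) / (1 - A), a Sumby-Pollack style measure in 0..1."""
    if a >= 1.0:
        return 0.0  # nothing left to gain at ceiling
    return (av - a) / (1.0 - a)

# Illustrative proportions correct (not the study's data):
# intermediate SNR: audio-only 0.50 -> audiovisual 0.80
# low SNR:          audio-only 0.10 -> audiovisual 0.30
print(visual_gain(0.80, 0.50))
print(visual_gain(0.30, 0.10))
```

    Normalizing by (1 - A) matters when comparing groups across SNRs: the same raw AV-minus-A difference represents a larger gain when auditory-only performance is already high.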

  7. Rhythm Perception and Its Role in Perception and Learning of Dysrhythmic Speech

    Science.gov (United States)

    Borrie, Stephanie A.; Lansford, Kaitlin L.; Barrett, Tyson S.

    2017-01-01

    Purpose: The perception of rhythm cues plays an important role in recognizing spoken language, especially in adverse listening conditions. Indeed, this has been shown to hold true even when the rhythm cues themselves are dysrhythmic. This study investigates whether expertise in rhythm perception provides a processing advantage for perception…

  8. The effect of combined sensory and semantic components on audio-visual speech perception in older adults

    Directory of Open Access Journals (Sweden)

    Corrina eMaguinness

    2011-12-01

Previous studies have found that perception in older people benefits from multisensory over uni-sensory information. As normal speech recognition is affected by both the auditory input and the visual lip movements of the speaker, we investigated the efficiency of audio-visual integration in an older population by manipulating the relative reliability of the auditory and visual information in speech. We also investigated the role of the semantic context of the sentence to assess whether audio-visual integration is affected by top-down semantic processing. We presented participants with audio-visual sentences in which the visual component was either blurred or not blurred. We found that there was a greater cost in recall performance for semantically meaningless speech in the audio-visual blurred condition compared to the audio-visual unblurred condition, and this effect was specific to the older group. Our findings have implications for understanding how aging affects efficient multisensory integration for the perception of speech and suggest that multisensory inputs may benefit speech perception in older adults when the semantic content of the speech is unpredictable.

  9. Speech-in-noise perception deficit in adults with dyslexia: effects of background type and listening configuration.

    Science.gov (United States)

    Dole, Marjorie; Hoen, Michel; Meunier, Fanny

    2012-06-01

Developmental dyslexia is associated with impaired speech-in-noise perception. The goal of the present research was to further characterize this deficit in dyslexic adults. In order to specify the mechanisms and processing strategies used by adults with dyslexia during speech-in-noise perception, we explored the influence of background type, presenting single target words against backgrounds made of cocktail-party sounds, modulated speech-derived noise, or stationary noise. We also evaluated the effect of three listening configurations differing in the amount of spatial processing required. In a monaural condition, signal and noise were presented to the same ear; in a dichotic condition, target and concurrent sound were presented to two different ears; finally, in a spatialised configuration, target and competing signals were presented as if they originated from slightly differing positions in the auditory scene. Our results confirm the presence of a speech-in-noise perception deficit in dyslexic adults, in particular when the competing signal is also speech and when both signals are presented to the same ear, an observation potentially relating to phonological accounts of dyslexia. However, adult dyslexics demonstrated better spatial release from masking than normal-reading controls when the background was speech, suggesting that they are well able to rely on denoising strategies based on spatial auditory scene analysis. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Evaluating proposed dorsal and ventral route functions in speech perception and phonological short-term memory: Evidence from aphasia

    Directory of Open Access Journals (Sweden)

    Heather Raye Dial

    2015-04-01

When the lexical and sublexical stimuli were matched in discriminability, scores were highly correlated and no individual demonstrated substantially better performance on lexical than sublexical perception (Figures 1a-c). However, when the word discriminations were easier (as in prior studies; e.g., Miceli et al., 1980), patients with impaired syllable discrimination were within the control range on word discrimination (Figure 1d). Finally, digit matching showed no significant relation to perception tasks (e.g., Figure 1e). Moreover, there was a wide range of digit matching spans for patients performing well on speech perception tasks (e.g., > 1.5 on syllable discrimination, with digit matching spans ranging from 3.6 to 6.0). These data fail to support dual-route claims, suggesting that lexical processing depends on sublexical perception and that phonological STM depends on a buffer separate from speech perception mechanisms.

  11. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    Directory of Open Access Journals (Sweden)

    Magnus eAlm

    2015-07-01

Gender and age have been found to affect adults’ audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20-30 years) and middle-aged (50-60 years) adults, with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. Contrastingly, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females’ general AV perceptual strategy. Although young females’ speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood recurrent confirmation of the contribution of visual cues, induced by speech-reading proficiency, may gradually shift females’ AV perceptual strategy towards more visually dominated responses.

  12. Portuguese Lexical Clusters and CVC Sequences in Speech Perception and Production.

    Science.gov (United States)

    Cunha, Conceição

    2015-01-01

    This paper investigates similarities between lexical consonant clusters and CVC sequences differing in the presence or absence of a lexical vowel in speech perception and production in two Portuguese varieties. The frequent high vowel deletion in the European variety (EP) and the realization of intervening vocalic elements between lexical clusters in Brazilian Portuguese (BP) may minimize the contrast between lexical clusters and CVC sequences in the two Portuguese varieties. In order to test this hypothesis we present a perception experiment with 72 participants and a physiological analysis of 3-dimensional movement data from 5 EP and 4 BP speakers. The perceptual results confirmed a gradual confusion of lexical clusters and CVC sequences in EP, which corresponded roughly to the gradient consonantal overlap found in production. © 2015 S. Karger AG, Basel.

  13. Mapping a lateralisation gradient within the ventral stream for auditory speech perception

    Directory of Open Access Journals (Sweden)

    Karsten eSpecht

    2013-10-01

Recent models of speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend towards the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus. This article describes and reviews the results from a series of complementary functional magnetic resonance imaging (fMRI) studies that aimed to trace the hierarchical processing network for speech comprehension within the left and right hemisphere, with a particular focus on the temporal lobe and the ventral stream. As hypothesised, the results demonstrate a bilateral involvement of the temporal lobes in the processing of speech signals. However, an increasing leftward asymmetry was detected from auditory-phonetic to lexico-semantic processing and along the posterior-anterior axis, thus forming a lateralisation gradient. This increasing leftward lateralisation was particularly evident for the left superior temporal sulcus (STS) and more anterior parts of the temporal lobe.

  14. Mapping a lateralization gradient within the ventral stream for auditory speech perception.

    Science.gov (United States)

    Specht, Karsten

    2013-01-01

    Recent models on speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend toward the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus. This article describes and reviews the results from a series of complementary functional magnetic resonance imaging studies that aimed to trace the hierarchical processing network for speech comprehension within the left and right hemisphere with a particular focus on the temporal lobe and the ventral stream. As hypothesized, the results demonstrate a bilateral involvement of the temporal lobes in the processing of speech signals. However, an increasing leftward asymmetry was detected from auditory-phonetic to lexico-semantic processing and along the posterior-anterior axis, thus forming a "lateralization" gradient. This increasing leftward lateralization was particularly evident for the left superior temporal sulcus and more anterior parts of the temporal lobe.

  15. Malaysian University Students’ Attitudes towards Six Varieties of Accented Speech in English

    Directory of Open Access Journals (Sweden)

    Zainab Thamer Ahmed

    2014-10-01

Previous language attitude studies have indicated that in many countries all over the world, English language learners perceive native accents, either American or British, more positively than non-native accents such as Japanese, Korean, and Austrian accents. However, in Malaysia it is still unclear which accent Malaysian learners of English tend to perceive more positively (Pillai, 2009). The verbal-guise technique and an accent recognition item were adopted as indirect and direct instruments for gathering data to address this question. The sample included 120 Malaysian university students, who were immersed in several speech accent situations to elicit feedback on their perceptions. Essentially two research questions are addressed: (1) What are Malaysian university students’ attitudes toward native and non-native English accents? (2) How familiar are students with these accents? The results indicated that the students had a bias towards the in-group accent, meaning that they evaluated non-native lecturers’ accents more positively. These results support the ‘social identity theory’, consistent with many previous language attitude studies of this nature. The Malaysian students were able to distinguish between native and non-native accents, although there was much confusion between British and American accents.

  16. The Effects of Phonological Short-Term Memory and Speech Perception on Spoken Sentence Comprehension in Children: Simulating Deficits in an Experimental Design

    Science.gov (United States)

    Higgins, Meaghan C.; Penney, Sarah B.; Robertson, Erin K.

    2017-01-01

    The roles of phonological short-term memory (pSTM) and speech perception in spoken sentence comprehension were examined in an experimental design. Deficits in pSTM and speech perception were simulated through task demands while typically-developing children (N = 71) completed a sentence-picture matching task. Children performed the control,…

  17. Auditory-Visual Speech Perception in Three- and Four-Year-Olds and Its Relationship to Perceptual Attunement and Receptive Vocabulary

    Science.gov (United States)

    Erdener, Dogu; Burnham, Denis

    2018-01-01

    Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception--lip-reading and visual…

  18. Listening to Yourself Is like Listening to Others: External, but Not Internal, Verbal Self-Monitoring Is Based on Speech Perception

    Science.gov (United States)

    Huettig, Falk; Hartsuiker, Robert J.

    2010-01-01

    Theories of verbal self-monitoring generally assume an internal (pre-articulatory) monitoring channel, but there is debate about whether this channel relies on speech perception or on production-internal mechanisms. Perception-based theories predict that listening to one's own inner speech has similar behavioural consequences as listening to…

  19. Audiovisual speech perception at various presentation levels in Mandarin-speaking adults with cochlear implants.

    Directory of Open Access Journals (Sweden)

    Shu-Yu Liu

(1) To evaluate the recognition of words, phonemes and lexical tones in audiovisual (AV) and auditory-only (AO) modes in Mandarin-speaking adults with cochlear implants (CIs); (2) to understand the effect of presentation levels on AV speech perception; (3) to learn the effect of hearing experience on AV speech perception. Thirteen deaf adults (age = 29.1±13.5 years; 8 male, 5 female) who had used CIs for >6 months and 10 normal-hearing (NH) adults participated in this study. Seven of them were prelingually deaf, and 6 postlingually deaf. The Mandarin Monosyllabic Word Recognition Test was used to assess recognition of words, phonemes and lexical tones in AV and AO conditions at 3 presentation levels: speech detection threshold (SDT), speech recognition threshold (SRT) and 10 dB SL (re: SRT). The prelingual group had better phoneme recognition in the AV mode than in the AO mode at SDT and SRT (both p = 0.016), and so did the NH group at SDT (p = 0.004). No mode difference was noted in the postlingual group. None of the groups had significantly different tone recognition in the 2 modes. The prelingual and postlingual groups had significantly better phoneme and tone recognition than the NH group at SDT in the AO mode (p = 0.016 and p = 0.002 for phonemes; p = 0.001 and p < 0.001 for tones) but were outperformed by the NH group at 10 dB SL (re: SRT) in both modes (both p < 0.001 for phonemes; p < 0.001 and p = 0.002 for tones). The recognition scores had a significant correlation with group, with age and sex controlled (p < 0.001). Visual input may help prelingually deaf implantees to recognize phonemes but may not augment Mandarin tone recognition. The effect of presentation level seems minimal on CI users' AV perception. This indicates special considerations in developing audiological assessment protocols and rehabilitation strategies for implantees who speak tonal languages.

  20. Speech perception and production in children with inner ear malformations after cochlear implantation.

    Science.gov (United States)

    Rachovitsas, Dimitrios; Psillas, George; Chatzigiannakidou, Vasiliki; Triaridis, Stefanos; Constantinidis, Jiannis; Vital, Victor

    2012-09-01

    The aim of this study was to assess the speech perception and speech intelligibility outcomes after cochlear implantation in children with malformed inner ears and to compare them with a group of congenitally deaf implanted children without inner ear malformation. Six deaf children (five boys and one girl) with inner ear malformations who were implanted and followed in our clinic were included. These children were matched with six implanted children with normal cochleae for age at implantation and duration of cochlear implant use. All subjects were tested with the internationally used battery of the listening progress profile (LiP), capacity of auditory performance (CAP), and speech intelligibility rating (SIR) tests. A closed- and open-set word perception test adapted to the Modern Greek language was also used. In the dysplastic group, two children suffered from CHARGE syndrome, another two from mental retardation, and two children grew up in bilingual homes. At least two years after switch-on, the dysplastic group scored mean LiP 62%, CAP 3.8, SIR 2.1, closed-set 61%, and open-set 49%. The children without inner ear dysplasia achieved significantly better scores, except for CAP, for which the difference was only marginally significant (p=0.009 for LiP, p=0.080 for CAP, p=0.041 for SIR, p=0.011 for closed-set, and p=0.006 for open-set tests). All of the implanted children with malformed inner ears showed benefits in auditory perception and speech production. However, the children with inner ear malformation performed less well than the children without inner ear dysplasia, possibly owing to the high proportion of additional disabilities in the dysplastic group, such as CHARGE syndrome and mental retardation. Bilingualism could also be considered a factor that possibly affects the outcome of implanted children. Therefore, children with malformed inner ears should be preoperatively evaluated for cognitive and developmental delay. In this case

  1. Developmental changes in brain activation involved in the production of novel speech sounds in children.

    Science.gov (United States)

    Hashizume, Hiroshi; Taki, Yasuyuki; Sassa, Yuko; Thyreau, Benjamin; Asano, Michiko; Asano, Kohei; Takeuchi, Hikaru; Nouchi, Rui; Kotozaki, Yuka; Jeong, Hyeonjeong; Sugiura, Motoaki; Kawashima, Ryuta

    2014-08-01

    Older children are more successful at producing unfamiliar, non-native speech sounds than younger children during the initial stages of learning. To reveal the neuronal underpinning of the age-related increase in the accuracy of non-native speech production, we examined the developmental changes in activation involved in the production of novel speech sounds using functional magnetic resonance imaging. Healthy right-handed children (aged 6-18 years) were scanned while performing an overt repetition task and a perceptual task involving aurally presented non-native and native syllables. Productions of non-native speech sounds were recorded and evaluated by native speakers. The mouth regions in the bilateral primary sensorimotor areas showed significantly greater activation during the repetition task than during the perceptual task. The hemodynamic response in the left inferior frontal gyrus pars opercularis (IFG pOp) specific to non-native speech sound production (defined by prior hypothesis) increased with age. Additionally, the accuracy of non-native speech sound production increased with age. These results provide the first evidence of developmental changes in the neural processes underlying the production of novel speech sounds. Our data further suggest that the recruitment of the left IFG pOp during the production of novel speech sounds was possibly enhanced due to the maturation of the neuronal circuits needed for speech motor planning. This, in turn, would lead to improvement in the ability to immediately imitate non-native speech. Copyright © 2014 Wiley Periodicals, Inc.

  2. Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers.

    Science.gov (United States)

    Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu

    2016-10-01

    The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners, specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese, and it contributed significantly to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody. © The Author(s) 2016.
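The "unique variance" figures above are what a hierarchical regression reports: the drop in R² when one predictor is removed from the full model. A sketch of that computation on simulated data (variable names and effect sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # sample size matching the study's 100 preschoolers

# Simulated standardized predictors and outcome (effect sizes are invented).
tone = rng.normal(size=n)         # lexical tone sensitivity
intonation = rng.normal(size=n)   # intonation production
vocab = 0.6 * tone + 0.25 * intonation + rng.normal(scale=0.7, size=n)

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_full = r_squared([tone, intonation], vocab)
r2_reduced = r_squared([intonation], vocab)
delta_r2 = r2_full - r2_reduced  # unique variance due to tone sensitivity
```

Because the models are nested, R² can only drop when a predictor is removed; the difference is the predictor's unique contribution.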

  3. EMPOWERING NON-NATIVE ENGLISH SPEAKING TEACHERS THROUGH CRITICAL PEDAGOGY

    Directory of Open Access Journals (Sweden)

    Nur Hayati

    2010-02-01

    Full Text Available Critical pedagogy is a teaching approach that aims to develop students’ critical thinking, political and social awareness, and self-esteem through dialogic learning and reflection. In the teaching of EFL, this pedagogy holds the potential to empower non-native English speaking teachers (NNESTs) when incorporated into English teacher education programs. It can help aspiring NNESTs develop awareness of the political and sociocultural implications of EFL teaching, foster critical thinking about concepts and ideas regarding their profession, and, more importantly, recognize their strengths as NNESTs. Despite this potential, the role of critical pedagogy in improving EFL teacher education programs in Indonesia has not been sufficiently discussed. This article attempts to contribute to the discussion by looking at a number of ways critical pedagogy can be incorporated into such programs, the rationale for doing so, and the challenges that might arise along the way.

  4. Non-native fishes of the central Indian River Lagoon

    Science.gov (United States)

    Schofield, Pamela J.; Loftus, William F.; Reaver, Kristen M.

    2018-01-01

    We provide a comprehensive review of the status of non-native fishes in the central Indian River Lagoon (from Cape Canaveral to Grant-Valkaria, east of I-95) through literature review and field surveys. Historical records exist for 17 taxa (15 species, one hybrid, one species complex). We found historical records for one additional species, and collected one species in our field survey that had never been recorded in the region before (and which we eradicated). Thus, we evaluate 19 total taxa herein. Of these, we documented range expansion of four salt-tolerant cichlid species, extirpation of six species that were previously recorded from the area and eradication of three species. There was no noticeable change in geographic range for one widespread species and the records for one species are doubtful and may be erroneous. Currently, there is not enough information to evaluate geographic ranges for four species although at least one of those is established.

  5. Reducing Channel Interaction Through Cochlear Implant Programming May Improve Speech Perception

    Directory of Open Access Journals (Sweden)

    Julie A. Bierer

    2016-06-01

    Full Text Available Speech perception among cochlear implant (CI) listeners is highly variable. High degrees of channel interaction are associated with poorer speech understanding. Two methods for reducing channel interaction, focusing electrical fields and deactivating subsets of channels, were assessed by the change in vowel and consonant identification scores under different program settings. The main hypotheses were that (a) focused stimulation will improve phoneme recognition and (b) speech perception will improve when channels with high thresholds are deactivated. To select high-threshold channels for deactivation, subjects’ threshold profiles were processed to enhance the peaks and troughs, and then an exclusion or inclusion criterion based on the mean and standard deviation was used. Low-threshold channels were selected manually and matched in number and apex-to-base distribution. Nine ears in eight adult CI listeners with Advanced Bionics HiRes90k devices were tested with six experimental programs: two all-channel programs, (a) a 14-channel partial tripolar (pTP) and (b) a 14-channel monopolar (MP) program, and four variable-channel programs derived from these two base programs, (c) pTP with high- and (d) low-threshold channels deactivated, and (e) MP with high- and (f) low-threshold channels deactivated. Across subjects, performance was similar with the pTP and MP programs. However, poorer performing subjects (scoring  2. These same subjects showed slightly more benefit with the reduced-channel MP programs (5 and 6). Subjective ratings were consistent with performance. These findings suggest that reducing channel interaction may benefit poorer performing CI listeners.
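One plausible reading of the mean-and-standard-deviation inclusion criterion described above, sketched on a hypothetical 16-channel threshold profile (the 0.5-SD cutoff and all threshold values are assumptions, not taken from the study):

```python
import numpy as np

# Hypothetical per-channel detection thresholds (dB) for a 16-channel array.
thresholds = np.array([42, 44, 43, 55, 58, 45, 44, 46,
                       60, 62, 45, 43, 44, 57, 46, 45], dtype=float)

# Flag channels whose threshold exceeds the across-channel mean by more
# than 0.5 standard deviations; these become candidates for deactivation.
mean, sd = thresholds.mean(), thresholds.std()
high = np.where(thresholds > mean + 0.5 * sd)[0]
print(high)  # indices of the high-threshold channels
```

In practice the criterion would be applied to the peak-enhanced profile the authors describe; the thresholding step itself is the same.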

  6. Children's Speech Perception in Noise: Evidence for Dissociation From Language and Working Memory.

    Science.gov (United States)

    Magimairaj, Beula M; Nagaraj, Naveen K; Benafield, Natalie J

    2018-05-17

    We examined the association between speech perception in noise (SPIN), language abilities, and working memory (WM) capacity in school-age children. Existing studies supporting the Ease of Language Understanding (ELU) model suggest that WM capacity plays a significant role in adverse listening situations. Eighty-three children between the ages of 7 and 11 years participated. The sample represented a continuum of individual differences in attention, memory, and language abilities. All children had normal-range hearing and normal-range nonverbal IQ. Children completed the Bamford-Kowal-Bench Speech-in-Noise Test (BKB-SIN; Etymotic Research, 2005), a selective auditory attention task, and multiple measures of language and WM. Partial correlations (controlling for age) showed significant positive associations among attention, memory, and language measures. However, BKB-SIN did not correlate significantly with any of the other measures. Principal component analysis revealed a distinct WM factor and a distinct language factor. BKB-SIN loaded robustly as a distinct 3rd factor with minimal secondary loading from sentence recall and short-term memory. Nonverbal IQ loaded as a 4th factor. Results did not support an association between SPIN and WM capacity in children. However, in this study, a single SPIN measure was used. Future studies using multiple SPIN measures are warranted. Evidence from the current study supports the use of BKB-SIN as a clinical measure of speech perception ability because it was not influenced by variation in children's language and memory abilities. More large-scale studies in school-age children are needed to replicate the proposed role played by WM in adverse listening situations.
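Partial correlations controlling for age, as used above, can be computed by regressing age out of both variables and correlating the residuals. A minimal numpy sketch on simulated scores (the sample size matches the study's 83 children; everything else is invented):

```python
import numpy as np

def partial_corr(x, y, covar):
    """Correlation between x and y after regressing covar out of both."""
    def residual(v, c):
        C = np.column_stack([np.ones(len(c)), c])
        beta, *_ = np.linalg.lstsq(C, v, rcond=None)
        return v - C @ beta
    rx, ry = residual(x, covar), residual(y, covar)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
age = rng.uniform(7, 11, size=83)           # 83 children, as in the study
language = 2.0 * age + rng.normal(size=83)  # age-dependent language score
memory = 1.5 * age + rng.normal(size=83)    # age-dependent WM score

raw = np.corrcoef(language, memory)[0, 1]
partial = partial_corr(language, memory, age)  # association with age removed
```

Here the raw correlation is inflated by the shared age trend, while the partial correlation hovers near zero, the same logic behind the age-controlled associations reported above.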

  7. The socially-weighted encoding of spoken words: A dual-route approach to speech perception

    Directory of Open Access Journals (Sweden)

    Meghan eSumner

    2014-01-01

    Full Text Available Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: Words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially-idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially-weighted, resulting in sparse, but high-resolution clusters of socially-idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  8. Within-subjects comparison of the HiRes and Fidelity120 speech processing strategies: speech perception and its relation to place-pitch sensitivity.

    Science.gov (United States)

    Donaldson, Gail S; Dawson, Patricia K; Borden, Lamar Z

    2011-01-01

    Previous studies have confirmed that current steering can increase the number of discriminable pitches available to many cochlear implant (CI) users; however, the ability to perceive additional pitches has not been linked to improved speech perception. The primary goals of this study were to determine (1) whether adult CI users can achieve higher levels of spectral cue transmission with a speech processing strategy that implements current steering (Fidelity120) than with a predecessor strategy (HiRes) and, if so, (2) whether the magnitude of improvement can be predicted from individual differences in place-pitch sensitivity. A secondary goal was to determine whether Fidelity120 supports higher levels of speech recognition in noise than HiRes. A within-subjects repeated measures design evaluated speech perception performance with Fidelity120 relative to HiRes in 10 adult CI users. Subjects used the novel strategy (either HiRes or Fidelity120) for 8 wks during the main study; a subset of five subjects used Fidelity120 for three additional months after the main study. Speech perception was assessed for the spectral cues related to vowel F1 frequency, vowel F2 frequency, and consonant place of articulation; overall transmitted information for vowels and consonants; and sentence recognition in noise. Place-pitch sensitivity was measured for electrode pairs in the apical, middle, and basal regions of the implanted array using a psychophysical pitch-ranking task. With one exception, there was no effect of strategy (HiRes versus Fidelity120) on the speech measures tested, either during the main study (N = 10) or after extended use of Fidelity120 (N = 5). The exception was a small but significant advantage for HiRes over Fidelity120 for consonant perception during the main study. Examination of individual subjects' data revealed that 3 of 10 subjects demonstrated improved perception of one or more spectral cues with Fidelity120 relative to HiRes after 8 wks or longer

  9. Effects of noise and reverberation on speech perception and listening comprehension of children and adults in a classroom-like setting.

    Science.gov (United States)

    Klatte, Maria; Lachmann, Thomas; Meis, Markus

    2010-01-01

    The effects of classroom noise and background speech on speech perception, measured by word-to-picture matching, and listening comprehension, measured by execution of oral instructions, were assessed in first- and third-grade children and adults in a classroom-like setting. For speech perception, in addition to noise, reverberation time (RT) was varied by conducting the experiment in two virtual classrooms with mean RT = 0.47 versus RT = 1.1 s. Children were more impaired than adults by background sounds in both speech perception and listening comprehension. Classroom noise evoked a reliable disruption in children's speech perception even under conditions of short reverberation. RT had no effect on speech perception in silence, but evoked a severe increase in the impairments due to background sounds in all age groups. For listening comprehension, impairments due to background sounds were found in the children, stronger for first- than for third-graders, whereas adults were unaffected. Compared to classroom noise, background speech had a smaller effect on speech perception, but a stronger effect on listening comprehension, remaining significant when speech perception was controlled. This indicates that background speech affects higher-order cognitive processes involved in children's comprehension. Children's ratings of the sound-induced disturbance were low overall and uncorrelated to the actual disruption, indicating that the children did not consciously realize the detrimental effects. The present results confirm earlier findings on the substantial impact of noise and reverberation on children's speech perception, and extend these to classroom-like environmental settings and listening demands closely resembling those faced by children at school.

  10. Unenthusiastic Europeans or Affected English: the Impact of Intonation on the Overall Make-up of Speech

    Directory of Open Access Journals (Sweden)

    Smiljana Komar

    2005-06-01

    Full Text Available Attitudes and emotions are expressed by linguistic as well as extra-linguistic features. The linguistic features comprise the lexis, the word-order and the intonation of the utterance. The purpose of this article is to examine the impact of intonation on our perception of speech. I will attempt to show that our expression, as well as our perception and understanding of attitudes and emotions are realized in accordance with the intonation patterns typical of the mother tongue. When listening to non-native speakers using our mother tongue we expect and tolerate errors in pronunciation, grammar and lexis but are quite ignorant and intolerant of non-native intonation patterns. Foreigners often sound unenthusiastic to native English ears. On the basis of the results obtained from an analysis of speech produced by 21 non-native speakers of English, including Slovenes, I will show that the reasons for such an impression of being unenthusiastic stem from different tonality and tonicity rules, as well as from the lack of the fall-rise tone and a very narrow pitch range with no or very few pitch jumps or slumps.

  11. Precursors to language in preterm infants: speech perception abilities in the first year of life.

    Science.gov (United States)

    Bosch, Laura

    2011-01-01

    Language development in infants born very preterm is often compromised. Poor language skills have been described in preschoolers and differences between preterms and full terms, relative to early vocabulary size and morphosyntactical complexity, have also been identified. However, very few data are available concerning early speech perception abilities and their predictive value for later language outcomes. An overview of the results obtained in a prospective study exploring the link between early speech perception abilities and lexical development in the second year of life in a population of very preterm infants (≤32 gestation weeks) is presented. Specifically, behavioral measures relative to (a) native-language recognition and discrimination from a rhythmically distant and a rhythmically close nonfamiliar language, and (b) monosyllabic word-form segmentation, were obtained and compared to data from full-term infants. Expressive vocabulary at two test ages (12 and 18 months, corrected age for gestation) was measured using the MacArthur Communicative Development Inventory. Behavioral results indicated that differences between preterm and control groups were present, but only evident when task demands were high in terms of language processing, selective attention to relevant information and memory load. When responses could be based on acquired knowledge from accumulated linguistic experience, between-group differences were no longer observed. Critically, while preterm infants responded satisfactorily to the native-language recognition and discrimination tasks, they clearly differed from full-term infants in the more challenging activity of extracting and retaining word-form units from fluent speech, a fundamental ability for starting to build a lexicon. Correlations between results from the language discrimination tasks and expressive vocabulary measures could not be systematically established. However, attention time to novel words in the word segmentation

  12. Non-native earthworms promote plant invasion by ingesting seeds and modifying soil properties

    OpenAIRE

    Clause, J.; Forey, E.; Lortie, C. J.; Lambert, A. M.; Barot, Sébastien

    2015-01-01

    Earthworms can have strong direct effects on plant communities through consumption and digestion of seeds; however, it is unclear how earthworms may influence the relative abundance and composition of plant communities invaded by non-native species. In this study, earthworms, seed banks, and the standing vegetation were sampled in a grassland of central California. Our objectives were i) to examine whether the abundances of non-native, invasive earthworm species and non-native grassland plant ...

  13. Temporal Fine-Structure Coding and Lateralized Speech Perception in Normal-Hearing and Hearing-Impaired Listeners

    DEFF Research Database (Denmark)

    Locsei, Gusztav; Pedersen, Julie Hefting; Laugesen, Søren

    2016-01-01

    This study investigated the relationship between speech perception performance in spatially complex, lateralized listening scenarios and temporal fine-structure (TFS) coding at low frequencies. Young normal-hearing (NH) and two groups of elderly hearing-impaired (HI) listeners with mild or moderate...... hearing loss above 1.5 kHz participated in the study. Speech reception thresholds (SRTs) were estimated in the presence of either speech-shaped noise, two-, four-, or eight-talker babble played reversed, or a nonreversed two-talker masker. Target audibility was ensured by applying individualized linear...... threshold nor the interaural phase difference threshold tasks showed a correlation with the SRTs or with the amount of masking release due to binaural unmasking, respectively. The results suggest that, although HI listeners with normal hearing thresholds below 1.5 kHz experienced difficulties with speech...

  14. Lateralized speech perception with small interaural time differences in normal-hearing and hearing-impaired listeners

    DEFF Research Database (Denmark)

    Locsei, Gusztav; Santurette, Sébastien; Dau, Torsten

    2017-01-01

    and two-talker babble in terms of SRTs, HI listeners could utilize ITDs to a similar degree as NH listeners to facilitate the binaural unmasking of speech. A slight difference was observed between the group means when target and maskers were separated from each other by large ITDs, but not when separated...... SRMs are elicited by small ITDs. Speech reception thresholds (SRTs) and SRM due to ITDs were measured over headphones for 10 young NH and 10 older HI listeners, who had normal or close-to-normal hearing below 1.5 kHz. Diotic target sentences were presented in diotic or dichotic speech-shaped noise...... or two-talker babble maskers. In the dichotic conditions, maskers were lateralized by delaying the masker waveforms in the left headphone channel. Multiple magnitudes of masker ITDs were tested in both noise conditions. Although deficits were observed in speech perception abilities in speechshaped noise...

  15. Physiological activation of the human cerebral cortex during auditory perception and speech revealed by regional increases in cerebral blood flow

    DEFF Research Database (Denmark)

    Lassen, N A; Friberg, L

    1988-01-01

    by measuring regional cerebral blood flow (CBF) after intracarotid Xenon-133 injection are reviewed, with emphasis on tests involving auditory perception and speech, an approach allowing visualization of Wernicke's and Broca's areas and their contralateral homologues in vivo. The completely atraumatic tomographic CBF

  16. The Neurobiology of Speech Perception and Production-Can Functional Imaging Tell Us Anything We Did Not Already Know?

    Science.gov (United States)

    Scott, Sophie K.

    2012-01-01

    Our understanding of the neurobiological basis for human speech production and perception has benefited from insights from psychology, neuropsychology and neurology. In this overview, I outline some of the ways that functional imaging has added to this knowledge and argue that, as a neuroanatomical tool, functional imaging has led to some…

  17. Cognitive Control of Speech Perception across the Lifespan: A Large-Scale Cross-Sectional Dichotic Listening Study

    Science.gov (United States)

    Westerhausen, René; Bless, Josef J.; Passow, Susanne; Kompus, Kristiina; Hugdahl, Kenneth

    2015-01-01

    The ability to use cognitive-control functions to regulate speech perception is thought to be crucial in mastering developmental challenges, such as language acquisition during childhood or compensation for sensory decline in older age, enabling interpersonal communication and meaningful social interactions throughout the entire life span.…

  18. Auditory, Visual, and Auditory-Visual Speech Perception by Individuals with Cochlear Implants versus Individuals with Hearing Aids

    Science.gov (United States)

    Most, Tova; Rothem, Hilla; Luntz, Michal

    2009-01-01

    The researchers evaluated the contribution of cochlear implants (CIs) to speech perception by a sample of prelingually deaf individuals implanted after age 8 years. This group was compared with a group with profound hearing impairment (HA-P), and with a group with severe hearing impairment (HA-S), both of which used hearing aids. Words and…

  19. The Effect of Frequency Transposition on Speech Perception in Adolescents and Young Adults with Profound Hearing Loss

    Science.gov (United States)

    Gou, J.; Smith, J.; Valero, J.; Rubio, I.

    2011-01-01

    This paper reports on a clinical trial evaluating outcomes of a frequency-lowering technique for adolescents and young adults with severe to profound hearing impairment. Outcomes were defined by changes in aided thresholds, speech perception, and acceptance. The participants comprised seven young people aged between 13 and 25 years. They were…

  20. Basic to Applied Research: The Benefits of Audio-Visual Speech Perception Research in Teaching Foreign Languages

    Science.gov (United States)

    Erdener, Dogu

    2016-01-01

    Traditionally, second language (L2) instruction has emphasised auditory-based instruction methods. However, this approach is restrictive in the sense that speech perception by humans is not just an auditory phenomenon but a multimodal one, and specifically, a visual one as well. In the past decade, experimental studies have shown that the…

  1. Neighbour tolerance, not suppression, provides competitive advantage to non-native plants.

    Science.gov (United States)

    Golivets, Marina; Wallin, Kimberly F

    2018-05-01

    High competitive ability has often been invoked as a key determinant of invasion success and ecological impacts of non-native plants. Yet our understanding of the strategies that non-natives use to gain competitive dominance remains limited. Particularly, it remains unknown whether the two non-mutually exclusive competitive strategies, neighbour suppression and neighbour tolerance, are equally important for the competitive advantage of non-native plants. Here, we analyse data from 192 peer-reviewed studies on pairwise plant competition within a Bayesian multilevel meta-analytic framework and show that non-native plants outperform their native counterparts due to high tolerance of competition, as opposed to strong suppressive ability. The competitive tolerance of non-native plants was driven by the neighbour's origin and was expressed in response to a heterospecific native but not a heterospecific non-native neighbour. In contrast to natives, non-native species were not more suppressed by hetero- vs. conspecific neighbours, which was partially due to higher intensity of intraspecific competition among non-natives. Heterogeneity in the data was primarily associated with methodological differences among studies and not with phylogenetic relatedness among species. Altogether, our synthesis demonstrates that non-native plants are competitively distinct from native plants and challenges the common notion that neighbour suppression is the primary strategy for plant invasion success. © 2018 John Wiley & Sons Ltd/CNRS.
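The study pools 192 competition experiments in a Bayesian multilevel framework; a full sketch of that is beyond an abstract, but the core idea of weighting studies by within- plus between-study variance can be illustrated with a classical DerSimonian-Laird random-effects pool over hypothetical effect sizes:

```python
import numpy as np

# Hypothetical per-study effect sizes (e.g., log response ratios of
# non-native vs. native performance under competition) and their variances.
effects = np.array([0.30, 0.12, 0.45, -0.05, 0.22, 0.18])
variances = np.array([0.02, 0.05, 0.04, 0.03, 0.06, 0.02])

# DerSimonian-Laird estimate of between-study variance (tau^2).
w = 1.0 / variances
fixed = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - fixed) ** 2)
df = len(effects) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled estimate and 95% confidence interval.
w_re = 1.0 / (variances + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
```

A Bayesian multilevel model replaces the point estimate of tau² with a full posterior and can add moderators (e.g., neighbour origin) as group-level predictors, but the variance-weighting logic is the same.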

  2. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Science.gov (United States)

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  3. Gopherus agassizii (Desert Tortoise). Non-native seed dispersal

    Science.gov (United States)

    Ennen, J.R.; Loughran, Caleb L.; Lovich, Jeffrey E.

    2011-01-01

    Sahara Mustard (Brassica tournefortii) is a non-native, highly invasive weed species of southwestern U.S. deserts. Sahara Mustard is a hardy species, which flourishes under many conditions including drought and in both disturbed and undisturbed habitats (West and Nabhan 2002. In B. Tellman [ed.], Invasive Plants: Their Occurrence and Possible Impact on the Central Gulf Coast of Sonora and the Midriff Islands in the Sea of Cortes, pp. 91–111. University of Arizona Press, Tucson). Because of this species’ ability to thrive in these habitats, B. tournefortii has been able to propagate throughout the southwestern United States establishing itself in the Mojave and Sonoran Deserts in Arizona, California, Nevada, and Utah. Unfortunately, naturally disturbed areas created by native species, such as the Desert Tortoise (Gopherus agassizii), within these deserts could have facilitated the propagation of B. tournefortii. (Lovich 1998. In R. G. Westbrooks [ed.], Invasive Plants, Changing the Landscape of America: Fact Book, p. 77. Federal Interagency Committee for the Management of Noxious and Exotic Weeds [FICMNEW], Washington, DC). However, Desert Tortoises have never been directly observed dispersing Sahara Mustard seeds. Here we present observations of two Desert Tortoises dispersing Sahara Mustard seeds at the interface between the Mojave and Sonoran deserts in California.

  4. Relationship between pure-tone audiogram findings and speech perception among older Japanese persons.

    Science.gov (United States)

    Maeda, Yukihide; Takao, Soshi; Sugaya, Akiko; Kataoka, Yuko; Kariya, Shin; Tanaka, Satomi; Nagayasu, Rie; Nakagawa, Atsuko; Nishizaki, Kazunori

    2018-02-01

    To clarify how the pure-tone threshold (PTT) on the pure-tone audiogram (PTA) predicts speech perception (SP) in elderly Japanese persons. Data on PTT and SP were cross-sectionally analyzed in Japanese persons (656 ears in 353 patients, aged ≥65 years). Correlations of SP and average PTT in all tested frequencies were evaluated by Pearson's correlation coefficient and simple linear regression. After adjusting for sex, laterality of ears, and age, the relationship of average and frequency-specific PTT with impaired SP ≤50% was estimated by logistic regression models. SP correlated well (r = -0.699) with the average PTT of all tested frequencies. On the other hand, the correlation between patient age and SP was weak, especially among ≤85-year-old persons (r = -0.092). Linear regression showed that the average PTT corresponding to SP of 50% was 76.4 dB nHL. Odds ratios for impaired SP were highest for PTT at 2000 Hz. Odds ratios were higher for middle (500, 1000, 2000 Hz) and high frequencies (4000, 8000 Hz) than low frequencies (125, 250 Hz). The PTT on the PTA is a good predictor of SP by speech audiometry among older persons, which could provide clinically important information for hearing aid fitting and cochlear implantation.
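The statistical steps described in this record (a Pearson correlation between threshold and speech score, then a simple linear fit inverted to find the PTT at 50% speech perception) can be sketched as follows. The data below are synthetic and purely illustrative, not the study's measurements; variable names and parameter values are assumptions:

```python
import numpy as np

# Synthetic example: average pure-tone threshold (PTT, dB) vs.
# speech perception score (SP, %). All values are invented.
rng = np.random.default_rng(0)
ptt = rng.uniform(30, 100, size=200)                       # average PTT per ear
sp = np.clip(120 - 0.9 * ptt + rng.normal(0, 8, 200), 0, 100)

# Pearson correlation coefficient (the study reports r = -0.699
# on its real dataset; this synthetic r will differ)
r = np.corrcoef(ptt, sp)[0, 1]

# Simple linear regression SP = a*PTT + b, then invert the fitted
# line to estimate the PTT corresponding to SP = 50%
a, b = np.polyfit(ptt, sp, 1)
ptt_at_50 = (50 - b) / a
print(round(r, 2), round(ptt_at_50, 1))
```

Logistic regression on an impaired/not-impaired indicator (SP ≤50%) would follow the same pattern with a classifier in place of the linear fit.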

  5. Greater BOLD variability in older compared with younger adults during audiovisual speech perception.

    Directory of Open Access Journals (Sweden)

    Sarah H Baum

    Full Text Available Older adults exhibit decreased performance and increased trial-to-trial variability on a range of cognitive tasks, including speech perception. We used blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) to search for neural correlates of these behavioral phenomena. We compared brain responses to simple speech stimuli (audiovisual syllables) in 24 healthy older adults (53 to 70 years old) and 14 younger adults (23 to 39 years old) using two independent analysis strategies: region-of-interest (ROI) and voxel-wise whole-brain analysis. While mean response amplitudes were moderately greater in younger adults, older adults had much greater within-subject variability. The greatly increased variability in older adults was observed for both individual voxels in the whole-brain analysis and for ROIs in the left superior temporal sulcus, the left auditory cortex, and the left visual cortex. Increased variability in older adults could not be attributed to differences in head movements between the groups. Increased neural variability may be related to the performance declines and increased behavioral variability that occur with aging.
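The core comparison in this record, group differences in mean BOLD amplitude versus within-subject trial-to-trial variability, can be illustrated with a minimal sketch. Group sizes follow the abstract, but all amplitude values are synthetic inventions, not the study's data:

```python
import numpy as np

# Per-trial response amplitudes for one ROI: subjects x trials.
# Younger group: slightly higher mean, tighter trial-to-trial spread.
rng = np.random.default_rng(1)
young = rng.normal(1.0, 0.2, size=(14, 40))   # 14 younger adults, 40 trials
older = rng.normal(0.9, 0.5, size=(24, 40))   # 24 older adults, 40 trials

# Mean response amplitude per group
mean_young, mean_older = young.mean(), older.mean()

# Within-subject variability: SD across trials for each subject,
# then averaged over subjects within the group
var_young = young.std(axis=1).mean()
var_older = older.std(axis=1).mean()
print(mean_young > mean_older, var_older > var_young)
```

The key point of the design is that the two measures dissociate: a group can have a lower mean response yet a much larger trial-to-trial spread.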

  6. Comparing spatial tuning curves, spectral ripple resolution, and speech perception in cochlear implant users.

    Science.gov (United States)

    Anderson, Elizabeth S; Nelson, David A; Kreft, Heather; Nelson, Peggy B; Oxenham, Andrew J

    2011-07-01

    Spectral ripple discrimination thresholds were measured in 15 cochlear-implant users with broadband (350-5600 Hz) and octave-band noise stimuli. The results were compared with spatial tuning curve (STC) bandwidths previously obtained from the same subjects. Spatial tuning curve bandwidths did not correlate significantly with broadband spectral ripple discrimination thresholds but did correlate significantly with ripple discrimination thresholds when the rippled noise was confined to an octave-wide passband, centered on the STC's probe electrode frequency allocation. Ripple discrimination thresholds were also measured for octave-band stimuli in four contiguous octaves, with center frequencies from 500 Hz to 4000 Hz. Substantial variations in thresholds with center frequency were found in individuals, but no general trends of increasing or decreasing resolution from apex to base were observed in the pooled data. Neither ripple nor STC measures correlated consistently with speech measures in noise and quiet in the sample of subjects in this study. Overall, the results suggest that spectral ripple discrimination measures provide a reasonable measure of spectral resolution that correlates well with more direct, but more time-consuming, measures of spectral resolution, but that such measures do not always provide a clear and robust predictor of performance in speech perception tasks. © 2011 Acoustical Society of America

  7. Reading your own lips: common-coding theory and visual speech perception.

    Science.gov (United States)

    Tye-Murray, Nancy; Spehar, Brent P; Myerson, Joel; Hale, Sandra; Sommers, Mitchell S

    2013-02-01

    Common-coding theory posits that (1) perceiving an action activates the same representations of motor plans that are activated by actually performing that action, and (2) because of individual differences in the ways that actions are performed, observing recordings of one's own previous behavior activates motor plans to an even greater degree than does observing someone else's behavior. We hypothesized that if observing oneself activates motor plans to a greater degree than does observing others, and if these activated plans contribute to perception, then people should be able to lipread silent video clips of their own previous utterances more accurately than they can lipread video clips of other talkers. As predicted, two groups of participants were able to lipread video clips of themselves, recorded more than two weeks earlier, significantly more accurately than video clips of others. These results suggest that visual input activates speech motor activity that links to word representations in the mental lexicon.

  8. The Effects of Phonological Short-Term Memory and Speech Perception on Spoken Sentence Comprehension in Children: Simulating Deficits in an Experimental Design.

    Science.gov (United States)

    Higgins, Meaghan C; Penney, Sarah B; Robertson, Erin K

    2017-10-01

    The roles of phonological short-term memory (pSTM) and speech perception in spoken sentence comprehension were examined in an experimental design. Deficits in pSTM and speech perception were simulated through task demands while typically-developing children (N [Formula: see text] 71) completed a sentence-picture matching task. Children performed the control, simulated pSTM deficit, simulated speech perception deficit, or simulated double deficit condition. On long sentences, the double deficit group had lower scores than the control and speech perception deficit groups, and the pSTM deficit group had lower scores than the control group and marginally lower scores than the speech perception deficit group. The pSTM and speech perception groups performed similarly to groups with real deficits in these areas, who completed the control condition. Overall, scores were lowest on noncanonical long sentences. Results show pSTM has a greater effect than speech perception on sentence comprehension, at least in the tasks employed here.

  9. Temporal dynamics of sensorimotor integration in speech perception and production: Independent component analysis of EEG data

    Directory of Open Access Journals (Sweden)

    David eJenson

    2014-07-01

    Full Text Available Activity in premotor and sensorimotor cortices is found in speech production and some perception tasks. Yet, how sensorimotor integration supports these functions is unclear due to a lack of data examining the timing of activity from these regions. Beta (~20 Hz) and alpha (~10 Hz) spectral power within the EEG µ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required covert and overt syllable productions and overt word production. Independent component analysis was performed on EEG data obtained during these conditions to (1) identify clusters of µ components common to all conditions and (2) examine real-time event-related spectral perturbations (ERSP) within alpha and beta bands. Seventeen and 15 out of 20 participants produced left and right µ-components, respectively, localized to precentral gyri. Discrimination conditions were characterized by significant (pFDR < .05) early alpha event-related synchronization (ERS) prior to and during stimulus presentation and later alpha event-related desynchronization (ERD) following stimulus offset. Beta ERD began early and gained strength across time. Differences were found between quiet and noisy discrimination conditions. Both overt syllable and word productions yielded similar alpha/beta ERD that began prior to production and was strongest during muscle activity. Findings during covert production were weaker than during overt production. One explanation for these findings is that µ-beta ERD indexes early predictive coding (e.g., internal modeling) and/or overt and covert attentional/motor processes. µ-alpha ERS may index inhibitory input to the premotor cortex from sensory regions prior to and during discrimination, while µ-alpha ERD may index re-afferent sensory feedback during speech rehearsal and production.
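A minimal sketch of the analysis idea, ICA unmixing of multichannel EEG followed by band-power estimation in the alpha and beta ranges, is shown below using scikit-learn and SciPy on synthetic signals. The study's actual pipeline (component clustering and time-resolved ERSP) is more involved; everything here, including channel counts and source frequencies, is an illustrative assumption:

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import FastICA

fs = 250                                     # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)

# Two latent sources: a mu-like ~10 Hz rhythm and a ~21 Hz beta rhythm
sources = np.c_[np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 21 * t)]
mixing = rng.normal(size=(2, 8))             # project onto 8 "electrodes"
eeg = sources @ mixing + 0.1 * rng.normal(size=(len(t), 8))

# Unmix with ICA to recover component activations
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(eeg)          # shape: (samples, components)

# Band power of the first component via Welch's method (1 Hz resolution)
f, psd = welch(components[:, 0], fs=fs, nperseg=fs)
alpha_power = psd[(f >= 8) & (f <= 13)].sum()
beta_power = psd[(f >= 15) & (f <= 25)].sum()
```

ERSP analysis would repeat the spectral step in short sliding windows time-locked to stimulus or production onset, expressing power as change relative to a baseline.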

  10. Automatically identifying characteristic features of non-native English accents

    NARCIS (Netherlands)

    Bloem, Jelke; Wieling, Martijn; Nerbonne, John; Côté, Marie-Hélène; Knooihuizen, Remco; Nerbonne, John

    2016-01-01

    In this work, we demonstrate the application of statistical measures from dialectometry to the study of accented English speech. This new methodology enables a more quantitative approach to the study of accents. Studies on spoken dialect data have shown that a combination of representativeness (the

  11. Parents and Speech Therapist Perception of Parental Involvement in Kailila Therapy Center, Jakarta, Indonesia

    Science.gov (United States)

    Jane, Griselda; Tunjungsari, Harini

    2015-01-01

    Parental involvement in a speech therapy has not been prioritized in most therapy centers in Indonesia. One of the therapy centers that has recognized the importance of parental involvement is Kailila Speech Therapy Center. In Kailila speech therapy center, parental involvement in children's speech therapy is an obligation that has been…

  12. Sources of Variability in Consonant Perception and Implications for Speech Perception Modeling

    DEFF Research Database (Denmark)

    Zaar, Johannes; Dau, Torsten

    2016-01-01

    The present study investigated the influence of various sources of response variability in consonant perception. A distinction was made between source-induced variability and receiver-related variability. The former refers to perceptual differences induced by differences in the ...

  13. Audio-visual speech perception in infants and toddlers with Down syndrome, fragile X syndrome, and Williams syndrome.

    Science.gov (United States)

    D'Souza, Dean; D'Souza, Hana; Johnson, Mark H; Karmiloff-Smith, Annette

    2016-08-01

    Typically-developing (TD) infants can construct unified cross-modal percepts, such as a speaking face, by integrating auditory-visual (AV) information. This skill is a key building block upon which higher-level skills, such as word learning, are built. Because word learning is seriously delayed in most children with neurodevelopmental disorders, we assessed the hypothesis that this delay partly results from a deficit in integrating AV speech cues. AV speech integration has rarely been investigated in neurodevelopmental disorders, and never previously in infants. We probed for the McGurk effect, which occurs when the auditory component of one sound (/ba/) is paired with the visual component of another sound (/ga/), leading to the perception of an illusory third sound (/da/ or /tha/). We measured AV integration in 95 infants/toddlers with Down, fragile X, or Williams syndrome, whom we matched on Chronological and Mental Age to 25 TD infants. We also assessed a more basic AV perceptual ability: sensitivity to matching vs. mismatching AV speech stimuli. Infants with Williams syndrome failed to demonstrate a McGurk effect, indicating poor AV speech integration. Moreover, while the TD children discriminated between matching and mismatching AV stimuli, none of the other groups did, hinting at a basic deficit or delay in AV speech processing, which is likely to constrain subsequent language development. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Single-Sided Deafness: Impact of Cochlear Implantation on Speech Perception in Complex Noise and on Auditory Localization Accuracy.

    Science.gov (United States)

    Döge, Julia; Baumann, Uwe; Weissgerber, Tobias; Rader, Tobias

    2017-12-01

    To assess auditory localization accuracy and speech reception threshold (SRT) in complex noise conditions in adult patients with acquired single-sided deafness, after intervention with a cochlear implant (CI) in the deaf ear. Nonrandomized, open, prospective patient series. Tertiary referral university hospital. Eleven patients with late-onset single-sided deafness (SSD) and normal hearing in the unaffected ear, who received a CI. All patients were experienced CI users. Unilateral cochlear implantation. Speech perception was tested in a complex multitalker equivalent noise field consisting of multiple sound sources. Speech reception thresholds in noise were determined in aided (with CI) and unaided conditions. Localization accuracy was assessed in complete darkness. Acoustic stimuli were radiated by multiple loudspeakers distributed in the frontal horizontal plane between -60 and +60 degrees. In the aided condition, results show slightly improved speech reception scores compared with the unaided condition in most of the patients. For 8 of the 11 subjects, SRT was improved between 0.37 and 1.70 dB. Three of the 11 subjects showed deteriorations between 1.22 and 3.24 dB SRT. Median localization error decreased significantly by 12.9 degrees compared with the unaided condition. CI in single-sided deafness is an effective treatment to improve the auditory localization accuracy. Speech reception in complex noise conditions is improved to a lesser extent in 73% of the participating CI SSD patients. However, the absence of true binaural interaction effects (summation, squelch) impedes further improvements. The development of speech processing strategies that respect binaural interaction seems to be mandatory to advance speech perception in demanding listening situations in SSD patients.
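The localization-accuracy metric described here, the median absolute difference between loudspeaker azimuth and the listener's response, is straightforward to compute. The angles below span the frontal plane from -60 to +60 degrees as in the study, but the response values are invented for illustration:

```python
import numpy as np

# Loudspeaker azimuths (degrees) and hypothetical listener responses
targets = np.array([-60, -40, -20, 0, 20, 40, 60], dtype=float)
responses_unaided = np.array([-10, -5, -15, 5, 5, 10, 20], dtype=float)
responses_aided = np.array([-50, -35, -10, 5, 15, 30, 50], dtype=float)

# Median absolute localization error per condition
err_unaided = np.median(np.abs(responses_unaided - targets))
err_aided = np.median(np.abs(responses_aided - targets))
improvement = err_unaided - err_aided   # positive = aided is better
print(err_unaided, err_aided, improvement)
```

The median, rather than the mean, keeps occasional large front-back or hemifield confusions from dominating the summary.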

  15. Periphyton density is similar on native and non-native plant species

    NARCIS (Netherlands)

    Grutters, B.M.C.; Gross, Elisabeth M.; van Donk, E.; Bakker, E.S.

    2017-01-01

    Non-native plants increasingly dominate the vegetation in aquatic ecosystems and thrive in eutrophic conditions. In eutrophic conditions, submerged plants risk being overgrown by epiphytic algae; however, if non-native plants are less susceptible to periphyton than natives, this would contribute to

  16. Determinants of Success in Native and Non-Native Listening Comprehension: An Individual Differences Approach

    Science.gov (United States)

    Andringa, Sible; Olsthoorn, Nomi; van Beuningen, Catherine; Schoonen, Rob; Hulstijn, Jan

    2012-01-01

    The goal of this study was to explain individual differences in both native and non-native listening comprehension; 121 native and 113 non-native speakers of Dutch were tested on various linguistic and nonlinguistic cognitive skills thought to underlie listening comprehension. Structural equation modeling was used to identify the predictors of…

  17. Determinants of success in native and non-native listening comprehension: an individual differences approach

    NARCIS (Netherlands)

    Andringa, S.; Olsthoorn, N.; van Beuningen, C.; Schoonen, R.; Hulstijn, J.

    2012-01-01

    The goal of this study was to explain individual differences in both native and non-native listening comprehension; 121 native and 113 non-native speakers of Dutch were tested on various linguistic and nonlinguistic cognitive skills thought to underlie listening comprehension. Structural equation

  18. The Impact of Non-Native English Teachers' Linguistic Insecurity on Learners' Productive Skills

    Science.gov (United States)

    Daftari, Giti Ehtesham; Tavil, Zekiye Müge

    2017-01-01

    The discrimination between native and non-native English speaking teachers is reported in favor of native speakers in literature. The present study examines the linguistic insecurity of non-native English speaking teachers (NNESTs) and investigates its influence on learners' productive skills by using SPSS software. The eighteen teachers…

  19. Economic Impacts of Non-Native Forest Insects in the Continental United States

    Science.gov (United States)

    Juliann E. Aukema; Brian Leung; Kent Kovacs; Corey Chivers; Jeffrey Englin; Susan J. Frankel; Robert G. Haight; Thomas P. Holmes; Andrew M. Liebhold; Deborah G. McCullough; Betsy Von Holle

    2011-01-01

    Reliable estimates of the impacts and costs of biological invasions are critical to developing credible management, trade and regulatory policies. Worldwide, forests and urban trees provide important ecosystem services as well as economic and social benefits, but are threatened by non-native insects. More than 450 non-native forest insects are established in the United...

  20. Growth strategy, phylogeny and stoichiometry determine the allelopathic potential of native and non-native plants

    NARCIS (Netherlands)

    Grutters, Bart M.C.; Saccomanno, Benedetta; Gross, Elisabeth M.; Van de Waal, Dedmer B.; van Donk, Ellen; Bakker, Elisabeth S.

    2017-01-01

    Secondary compounds can contribute to the success of non-native plant species if they reduce damage by native herbivores or inhibit the growth of native plant competitors. However, there is opposing evidence on whether the secondary compounds of non-native plant species are stronger than those of

  1. Promoting Communities of Practice among Non-Native Speakers of English in Online Discussions

    Science.gov (United States)

    Kim, Hoe Kyeung

    2011-01-01

    An online discussion involving text-based computer-mediated communication has great potential for promoting equal participation among non-native speakers of English. Several studies claimed that online discussions could enhance the academic participation of non-native speakers of English. However, there is little research around participation…

  2. Chinese College Students' Views on Native English and Non-Native English in EFL Classrooms

    Science.gov (United States)

    Qian, Yang; Jingxia, Liu

    2016-01-01

    With the development of globalization, English is clearly spoken by many more non-native than native speakers, which raises the discussion of English varieties and the debate regarding the conformity to Standard English. Although a large number of studies have shown scholars' attitudes towards native English and non-native English, little research…

  3. DNA metabarcoding of fish larvae for detection of non-native fishes

    Science.gov (United States)

    Our objective was to evaluate the use of fish larvae for early detection of non-native fishes, comparing traditional and molecular taxonomy approaches to investigate potential efficiencies. Fish larvae present an interesting opportunity for non-native fish early detection because...

  4. Factors influencing non-native tree species distribution in urban landscapes

    Science.gov (United States)

    Wayne C. Zipperer

    2010-01-01

    Non-native species are presumed to be pervasive across the urban landscape. Yet we know very little about their actual distribution. For this study, vegetation plot data from Syracuse, NY and Baltimore, MD were used to examine non-native tree species distribution in urban landscapes. Data were collected from remnant and emergent forest patches on upland sites...

  5. Effects of irrelevant speech and traffic noise on speech perception and cognitive performance in elementary school children.

    Science.gov (United States)

    Klatte, Maria; Meis, Markus; Sukowski, Helga; Schick, August

    2007-01-01

    The effects of background noise of moderate intensity on short-term storage and processing of verbal information were analyzed in 6- to 8-year-old children. In line with adult studies on the "irrelevant sound effect" (ISE), serial recall of visually presented digits was severely disrupted by background speech that the children did not understand. Train noises of equal intensity, however, had no effect. Similar results were demonstrated with tasks requiring storage and processing of heard information. Memory for nonwords, execution of oral instructions and categorizing speech sounds were significantly disrupted by irrelevant speech. The affected functions play a fundamental role in the acquisition of spoken and written language. Implications concerning current models of the ISE and the acoustic conditions in schools and kindergartens are discussed.

  6. Show me the numbers: What data currently exist for non-native species in the USA?

    Science.gov (United States)

    Crall, Alycia W.; Meyerson, Laura A.; Stohlgren, Thomas J.; Jarnevich, Catherine S.; Newman, Gregory J.; Graham, James

    2006-01-01

    Non-native species continue to be introduced to the United States from other countries via trade and transportation, creating a growing need for early detection and rapid response to new invaders. It is therefore increasingly important to synthesize existing data on non-native species abundance and distributions. However, no comprehensive analysis of existing data has been undertaken for non-native species, and there have been few efforts to improve collaboration. We therefore conducted a survey to determine what datasets currently exist for non-native species in the US from county, state, multi-state region, national, and global scales. We identified 319 datasets and collected metadata for 79% of these. Through this study, we provide a better understanding of extant non-native species datasets and identify data gaps (i.e., taxonomic, spatial, and temporal) to help guide future survey, research, and predictive modeling efforts.

  7. Contrasting xylem vessel constraints on hydraulic conductivity between native and non-native woody understory species

    Directory of Open Access Journals (Sweden)

    Maria S Smith

    2013-11-01

    Full Text Available We examined the hydraulic properties of 82 native and non-native woody species common to forests of Eastern North America, including several congeneric groups, representing a range of anatomical wood types. We observed smaller conduit diameters with greater frequency in non-native species, corresponding to a lower calculated potential vulnerability to cavitation index. Non-native species exhibited higher vessel grouping in metaxylem compared with native species; however, solitary vessels were more prevalent in secondary xylem. A higher frequency of solitary vessels in secondary xylem was related to a lower potential vulnerability index. We found no relationship between anatomical characteristics of xylem, origin of species, and hydraulic conductivity, indicating that non-native species did not exhibit advantageous hydraulic efficiency over native species. Our results suggest anatomical advantages for non-native species under the potential for cavitation due to freezing, perhaps permitting extended growing seasons.

  8. Use of speech generating devices can improve perception of qualifications for skilled, verbal, and interactive jobs.

    Science.gov (United States)

    Stern, Steven E; Chobany, Chelsea M; Beam, Alexander A; Hoover, Brittany N; Hull, Thomas T; Linsenbigler, Melissa; Makdad-Light, Courtney; Rubright, Courtney N

    2017-01-01

    We have previously demonstrated that when speech generating devices (SGDs) are used as assistive technologies, they are preferred over the users' natural voices. We sought to examine whether using SGDs would affect listeners' perceptions of the hirability of people with complex communication needs. In a series of three experiments, participants rated videotaped actors, one using an SGD and the other using their natural, mildly dysarthric voice, on (a) a measurement of perceptions of speaker credibility, strength, and informedness and (b) measurements of hirability for jobs coded in terms of skill, verbal ability, and interactivity. Experiment 1 examined hirability for jobs varying in terms of skill and verbal ability. Experiment 2 was a replication that examined hirability for jobs varying in terms of interactivity. Experiment 3 examined jobs in terms of skill and specific mode of interaction (face-to-face, telephone, computer-mediated). Actors were rated more favorably when using an SGD than when using their own voices. Actors using an SGD were also rated more favorably for highly skilled and highly verbal jobs. This preference for SGDs over the mildly dysarthric voice was also found for jobs entailing computer-mediated communication, particularly skillful jobs.

  9. Neural Correlates of Early Sound Encoding and their Relationship to Speech-in-Noise Perception

    Directory of Open Access Journals (Sweden)

    Emily B. J. Coffey

    2017-08-01

    Full Text Available Speech-in-noise (SIN) perception is a complex cognitive skill that affects social, vocational, and educational activities. Poor SIN ability particularly affects young and elderly populations, yet varies considerably even among healthy young adults with normal hearing. Although SIN skills are known to be influenced by top-down processes that can selectively enhance lower-level sound representations, the complementary role of feed-forward mechanisms and their relationship to musical training is poorly understood. Using a paradigm that minimizes the main top-down factors that have been implicated in SIN performance, such as working memory, we aimed to better understand how robust encoding of periodicity in the auditory system (as measured by the frequency-following response; FFR) contributes to SIN perception. Using magnetoencephalography, we found that the strength of encoding at the fundamental frequency in the brainstem, thalamus, and cortex is correlated with SIN accuracy. The amplitude of the slower cortical P2 wave was previously also shown to be related to SIN accuracy and FFR strength; we use MEG source localization to show that the P2 wave originates in a temporal region anterior to that of the cortical FFR. We also confirm that the observed enhancements were related to the extent and timing of musicianship. These results are consistent with the hypothesis that basic feed-forward sound encoding affects SIN perception by providing better information to later processing stages, and that modifying this process may be one mechanism through which musical training might enhance the auditory networks that subserve both musical and language functions.
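One common way to quantify "strength of encoding at the fundamental frequency" is the spectral amplitude of the averaged neural response at the stimulus F0, compared against a neighboring noise band. The sketch below applies this to a synthetic signal; the study's actual MEG source-level measure is more involved, and the F0, noise level, and band edges here are assumptions:

```python
import numpy as np

fs = 1000                                  # sampling rate in Hz
t = np.arange(0, 1, 1 / fs)
f0 = 100                                   # assumed stimulus F0 in Hz
rng = np.random.default_rng(3)

# Synthetic "averaged response": phase-locked F0 component plus noise
response = 0.8 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.5, len(t))

# Amplitude spectrum, then pick the bin nearest the stimulus F0
spectrum = np.abs(np.fft.rfft(response)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
ffr_strength = spectrum[np.argmin(np.abs(freqs - f0))]

# Compare against the mean amplitude of a neighboring noise band
noise_floor = spectrum[(freqs > 150) & (freqs < 250)].mean()
snr = ffr_strength / noise_floor
```

Averaging over many stimulus repetitions before the FFT is what makes the phase-locked F0 component stand out above the non-phase-locked noise floor.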

  10. Speech perception enhancement in elderly hearing aid users using an auditory training program for mobile devices.

    Science.gov (United States)

    Yu, Jyaehyoung; Jeon, Hanjae; Song, Changgeun; Han, Woojae

    2017-01-01

    The goal of the present study was to develop an auditory training program using a mobile device and to test its efficacy by applying it to older adults suffering from moderate-to-severe sensorineural hearing loss. Among the 20 elderly hearing-impaired listeners who participated, 10 were randomly assigned to a training group (TG) and 10 were assigned to a non-training group (NTG) as a control. As a baseline, all participants were measured by vowel, consonant and sentence tests. In the experiment, the TG had been trained for 4 weeks using a mobile program, which had four levels and consisted of 10 Korean nonsense syllables, with each level completed in 1 week. In contrast, traditional auditory training had been provided for the NTG during the same period. To evaluate whether a training effect was achieved, the two groups also carried out the same tests as the baseline after completing the experiment. The results showed that performance on the consonant and sentence tests in the TG was significantly increased compared with that of the NTG. Also, improved scores of speech perception were retained at 2 weeks after the training was completed. However, vowel scores were not changed after the 4-week training in both the TG and the NTG. This result pattern suggests that a moderate amount of auditory training using the mobile device with cost-effective and minimal supervision is useful when it is used to improve the speech understanding of older adults with hearing loss. Geriatr Gerontol Int 2017; 17: 61-68. © 2015 Japan Geriatrics Society.

  11. The effects of noise exposure and musical training on suprathreshold auditory processing and speech perception in noise.

    Science.gov (United States)

    Yeend, Ingrid; Beach, Elizabeth Francis; Sharma, Mridula; Dillon, Harvey

    2017-09-01

    Recent animal research has shown that exposure to single episodes of intense noise causes cochlear synaptopathy without affecting hearing thresholds. It has been suggested that the same may occur in humans. If so, it is hypothesized that this would result in impaired encoding of sound and lead to difficulties hearing at suprathreshold levels, particularly in challenging listening environments. The primary aim of this study was to investigate the effect of noise exposure on auditory processing, including the perception of speech in noise, in adult humans. A secondary aim was to explore whether musical training might improve some aspects of auditory processing and thus counteract or ameliorate any negative impacts of noise exposure. In a sample of 122 participants (63 female) aged 30-57 years with normal or near-normal hearing thresholds, we conducted audiometric tests, including tympanometry, audiometry, acoustic reflexes, otoacoustic emissions and medial olivocochlear responses. We also assessed temporal and spectral processing, by determining thresholds for detection of amplitude modulation and temporal fine structure. We assessed speech-in-noise perception, and conducted tests of attention, memory and sentence closure. We also calculated participants' accumulated lifetime noise exposure and administered questionnaires to assess self-reported listening difficulty and musical training. The results showed no clear link between participants' lifetime noise exposure and performance on any of the auditory processing or speech-in-noise tasks. Musical training was associated with better performance on the auditory processing tasks, but not on the speech-in-noise perception tasks. The results indicate that sentence closure skills, working memory, attention, extended high frequency hearing thresholds and medial olivocochlear suppression strength are important factors that are related to the ability to process speech in noise. Crown Copyright © 2017.

  12. Speech perception benefits of FM and infrared devices to children with hearing aids in a typical classroom.

    Science.gov (United States)

    Anderson, Karen L; Goldstein, Howard

    2004-04-01

    Children typically learn in classroom environments that have background noise and reverberation that interfere with accurate speech perception. Amplification technology can enhance the speech perception of students who are hard of hearing. This study used a single-subject alternating treatments design to compare the speech recognition abilities of children who are hard of hearing when they were using hearing aids with each of three frequency modulated (FM) or infrared devices. Eight 9-12-year-olds with mild to severe hearing loss repeated Hearing in Noise Test (HINT) sentence lists under controlled conditions in a typical kindergarten classroom with a background noise level of +10 dB signal-to-noise (S/N) ratio and 1.1 s reverberation time. Participants listened to HINT lists using hearing aids alone and hearing aids in combination with three types of S/N-enhancing devices that are currently used in mainstream classrooms: (a) FM systems linked to personal hearing aids, (b) infrared sound field systems with speakers placed throughout the classroom, and (c) desktop personal sound field FM systems. The infrared ceiling sound field system did not provide benefit beyond that provided by hearing aids alone. Desktop and personal FM systems in combination with personal hearing aids provided substantial improvements in speech recognition. This information can assist in making S/N-enhancing device decisions for students using hearing aids. In a reverberant and noisy classroom setting, classroom sound field devices are not beneficial to speech perception for students with hearing aids, whereas either personal FM or desktop sound field systems provide listening benefits.

  13. Audio-visual speech perception in prelingually deafened Japanese children following sequential bilateral cochlear implantation.

    Science.gov (United States)

    Yamamoto, Ryosuke; Naito, Yasushi; Tona, Risa; Moroto, Saburo; Tamaya, Rinko; Fujiwara, Keizo; Shinohara, Shogo; Takebayashi, Shinji; Kikuchi, Masahiro; Michida, Tetsuhiko

    2017-11-01

    An effect of audio-visual (AV) integration is observed when the auditory and visual stimuli are incongruent (the McGurk effect). In general, AV integration is helpful, especially for subjects wearing hearing aids or cochlear implants (CIs). However, the influence of AV integration on spoken word recognition in individuals with bilateral CIs (Bi-CIs) has not been fully investigated. In this study, we investigated AV integration in children with Bi-CIs. The study sample included thirty-one prelingually deafened children who underwent sequential bilateral cochlear implantation. We assessed their responses to congruent and incongruent AV stimuli with three CI-listening modes: only the 1st CI, only the 2nd CI, and Bi-CIs. The responses were assessed in the whole group as well as in two sub-groups: a proficient group (syllable intelligibility ≥80% with the 1st CI) and a non-proficient group (syllable intelligibility <80% with the 1st CI), examining the McGurk effect in each of the three CI-listening modes. AV integration responses were observed in a subset of incongruent AV stimuli, and the patterns observed with the 1st CI and with Bi-CIs were similar. In the proficient group, the responses with the 2nd CI were not significantly different from those with the 1st CI, whereas in the non-proficient group the responses with the 2nd CI were driven by visual stimuli more than those with the 1st CI. Our results suggested that prelingually deafened Japanese children who underwent sequential bilateral cochlear implantation exhibit AV integration abilities, in monaural as well as binaural listening. We also observed a greater influence of visual stimuli on speech perception with the 2nd CI in the non-proficient group, suggesting that Bi-CI listeners with poorer speech recognition rely more on visual information than proficient subjects to compensate for poorer auditory input. Nevertheless, poorer-quality auditory input with the 2nd CI did not interfere with AV integration in binaural listening.

  14. Learning to Recognize Speakers of a Non-Native Language: Implications for the Functional Organization of Human Auditory Cortex

    Science.gov (United States)

    Perrachione, Tyler K.; Wong, Patrick C. M.

    2007-01-01

    Brain imaging studies of voice perception often contrast activation from vocal and verbal tasks to identify regions uniquely involved in processing voice. However, such a strategy precludes detection of the functional relationship between speech and voice perception. In a pair of experiments involving identifying voices from native and foreign…

  15. The Effect of Age and Type of Noise on Speech Perception under Conditions of Changing Context and Noise Levels.

    Science.gov (United States)

    Taitelbaum-Swead, Riki; Fostick, Leah

    2016-01-01

    Everyday life includes fluctuating noise levels, resulting in continuously changing speech intelligibility. The study aims were: (1) to quantify the age-related decrease in speech perception as a result of increasing noise level, and (2) to test the effect of age on context usage at the word level (where fewer contextual cues are available). A total of 24 young adults (age 20-30 years) and 20 older adults (age 60-75 years) were tested. Meaningful and nonsense one-syllable consonant-vowel-consonant words were presented against background noise of three types: speech noise (SpN), babble noise (BN), and white noise (WN), at signal-to-noise ratios (SNRs) of 0 and -5 dB. Older adults had lower accuracy at SNR = 0, with WN being the most difficult condition for all participants. Measuring the change in speech perception when SNR decreased showed a reduction of 18.6-61.5% in intelligibility, with an age effect only for BN. Both young and older adults used less phonemic context with WN, as compared to other conditions. Older adults are more affected by an increasing level of fluctuating informational noise than by steady-state noise. They also use fewer contextual cues when perceiving monosyllabic words. Further studies should take into consideration that when the stimulus is presented differently (changed noise level, fewer contextual cues), other perceptual and cognitive processes are involved. © 2016 S. Karger AG, Basel.
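
The SNR conditions used in studies like this one (0 and -5 dB above, -8 dB in the R-SPACE study below) are produced by scaling the masker's power relative to the speech signal. A minimal sketch of that scaling step; the function and signal names are illustrative, not taken from any of these studies:

```python
import numpy as np

def scale_noise_for_snr(speech, noise, snr_db):
    """Scale `noise` so that mixing it with `speech` yields the target SNR,
    where SNR (dB) = 10 * log10(P_speech / P_noise) and P is mean power."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    return noise * np.sqrt(target_p_noise / p_noise)

# Stand-ins for a speech recording and a masker (1 s at 16 kHz)
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
noise = rng.standard_normal(16000)

scaled = scale_noise_for_snr(speech, noise, -5.0)  # the -5 dB condition
achieved = 10 * np.log10(np.mean(speech ** 2) / np.mean(scaled ** 2))
```

Because the definition is in terms of mean power, the same routine covers any of the SNR levels reported in these abstracts; the mixture itself is simply `speech + scaled`.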

  16. Speech perception for adult cochlear implant recipients in a realistic background noise: effectiveness of preprocessing strategies and external options for improving speech recognition in noise.

    Science.gov (United States)

    Gifford, René H; Revit, Lawrence J

    2010-01-01

    Although cochlear implant patients are achieving increasingly higher levels of performance, speech perception in noise continues to be problematic. The newest generations of implant speech processors are equipped with preprocessing and/or external accessories that are purported to improve listening in noise. Most speech perception measures in the clinical setting, however, do not provide a close approximation to real-world listening environments. To assess speech perception for adult cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE) array in order to determine whether commercially available preprocessing strategies and/or external accessories yield improved sentence recognition in noise. Single-subject, repeated-measures design with two groups of participants: Advanced Bionics and Cochlear Corporation recipients. Thirty-four subjects, ranging in age from 18 to 90 yr (mean 54.5 yr), participated in this prospective study. Fourteen subjects were Advanced Bionics recipients, and 20 subjects were Cochlear Corporation recipients. Speech reception thresholds (SRTs) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the subjects' preferred listening programs as well as with the addition of either Beam preprocessing (Cochlear Corporation) or the T-Mic accessory option (Advanced Bionics). In Experiment 1, adaptive SRTs with the Hearing in Noise Test sentences were obtained for all 34 subjects. For Cochlear Corporation recipients, SRTs were obtained with their preferred everyday listening program as well as with the addition of Focus preprocessing. For Advanced Bionics recipients, SRTs were obtained with the integrated behind-the-ear (BTE) mic as well as with the T-Mic. Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the preprocessing strategy or external accessory in reducing the SRT in noise. 

  17. Non-native earthworms promote plant invasion by ingesting seeds and modifying soil properties

    Science.gov (United States)

    Clause, Julia; Forey, Estelle; Lortie, Christopher J.; Lambert, Adam M.; Barot, Sébastien

    2015-04-01

    Earthworms can have strong direct effects on plant communities through consumption and digestion of seeds; however, it is unclear how earthworms may influence the relative abundance and composition of plant communities invaded by non-native species. In this study, earthworms, seed banks, and the standing vegetation were sampled in a grassland of central California. Our objectives were (i) to examine whether the abundances of non-native, invasive earthworm species and non-native grassland plant species are correlated, and (ii) to test whether seed ingestion by these worms alters the soil seed bank, by evaluating the composition of seeds in casts relative to uningested soil. Sampling locations were selected based on historical land-use practices, including presence or absence of tilling and revegetation by seeding with Phalaris aquatica. Only non-native earthworm species were found, dominated by the invasive European species Aporrectodea trapezoides. Earthworm abundance was significantly higher in the grassland blocks dominated by non-native plant species, and these sites had higher carbon and moisture contents. Earthworm abundance was also positively related to increased emergence of non-native seedlings, but had no effect on that of native seedlings. Plant species richness and total seedling emergence were higher in casts than in uningested soils. This study suggests a potential role of non-native earthworms in promoting non-native, and likely invasive, plant species within grasslands, due to seed-plant-earthworm interactions via soil modification or to seed ingestion by earthworms and subsequent cast effects on grassland dynamics. This study supports a growing body of literature on earthworms as ecosystem engineers, and highlights the importance of considering non-native-native interactions with the associated plant community.

  18. On the matching of top-down knowledge with sensory input in the perception of ambiguous speech

    Directory of Open Access Journals (Sweden)

    Hannemann R

    2010-06-01

    Background: How does the brain repair obliterated speech and cope with acoustically ambivalent situations? A widely discussed possibility is to use top-down information for solving the ambiguity problem. In the case of speech, this may lead to a match of bottom-up sensory input with lexical expectations, resulting in resonant states which are reflected in the induced gamma-band activity (GBA). Methods: In the present EEG study, we compared subjects' pre-attentive GBA responses to obliterated speech segments presented after a series of correct words. The words were a minimal pair in German and differed with respect to the degree of specificity of segmental phonological information. Results: The induced GBA was larger when the expected lexical information was phonologically fully specified compared to the underspecified condition. Thus, the degree of specificity of phonological information in the mental lexicon correlates with the intensity of the matching process of bottom-up sensory input with lexical information. Conclusions: These results, together with those of a behavioural control experiment, support the notion of multi-level mechanisms involved in the repair of deficient speech. The delineated alignment of pre-existing knowledge with sensory input is in accordance with recent ideas about the role of internal forward models in speech perception.

  19. Perception of co-speech gestures in aphasic patients: a visual exploration study during the observation of dyadic conversations.

    Science.gov (United States)

    Preisig, Basil C; Eggenberger, Noëmi; Zito, Giuseppe; Vanbellingen, Tim; Schumacher, Rahel; Hopfner, Simone; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Müri, René M

    2015-03-01

    Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues, they prompt the cooperative process of turn-taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, a significant gaze direction × ROI × group interaction revealed that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Autonomic nervous system responses during perception of masked speech may reflect constructs other than subjective listening effort

    Directory of Open Access Journals (Sweden)

    Alexander L. Francis

    2016-03-01

    Typically, understanding speech seems effortless and automatic. However, a variety of factors may, independently or interactively, make listening more effortful. Physiological measures may help to distinguish between the application of different cognitive mechanisms whose operation is perceived as effortful. In the present study, physiological and behavioral measures associated with task demand were collected along with behavioral measures of performance while participants listened to and repeated sentences. The goal was to measure psychophysiological reactivity associated with three degraded listening conditions, each of which differed in terms of the source of the difficulty (distortion, energetic masking, and informational masking) and was therefore expected to engage different cognitive mechanisms. These conditions were chosen to be matched for overall performance (keywords correct) and were compared to listening to unmasked speech produced by a natural voice. The three degraded conditions were: (1) unmasked speech produced by a computer speech synthesizer, (2) speech produced by a natural voice and masked by speech-shaped noise, and (3) speech produced by a natural voice and masked by two-talker babble. Masked conditions were both presented at a -8 dB signal-to-noise ratio (SNR), a level shown in previous research to result in comparable levels of performance for these stimuli and maskers. Performance was measured in terms of the proportion of key words identified correctly, and task demand or effort was quantified subjectively by self-report. Measures of psychophysiological reactivity included electrodermal (skin conductance) response frequency and amplitude, blood pulse amplitude, and pulse rate. Results suggest that the two masked conditions evoked stronger psychophysiological reactivity than did the two unmasked conditions, even when behavioral measures of listening performance and listeners' subjective perception of task demand were comparable.

  1. Some Neurocognitive Correlates of Noise-Vocoded Speech Perception in Children With Normal Hearing: A Replication and Extension of ).

    Science.gov (United States)

    Roman, Adrienne S; Pisoni, David B; Kronenberger, William G; Faulkner, Kathleen F

    Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition, without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by ) who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention (AA) and response set, talker discrimination, and verbal and nonverbal short-term working memory. Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (Peabody Picture Vocabulary Test-4th Edition and Expressive Vocabulary Test-2nd Edition) and measures of AA (NEPSY AA and response set and a talker discrimination task) and short-term memory (visual digit and symbol spans). Consistent with the findings reported in the original ) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the Peabody Picture Vocabulary Test-4th Edition using language quotients to control for age effects. However, children who scored higher on the Expressive Vocabulary Test-2nd Edition
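
Noise vocoding, as used in this study, divides the signal into a small number of frequency bands, extracts each band's amplitude envelope, and re-imposes those envelopes on band-limited noise, preserving temporal cues while degrading spectral detail. A minimal four-channel sketch using ideal FFT filters; the band edges, envelope cutoff, and function names are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

def band_filter(x, fs, lo, hi):
    """Ideal band-pass filter via the real FFT (illustrative, not production-grade)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    spec[(freqs < lo) | (freqs >= hi)] = 0  # zero out bins outside [lo, hi)
    return np.fft.irfft(spec, n=len(x))

def noise_vocode(x, fs, n_channels=4, f_lo=100.0, f_hi=6000.0):
    """Replace spectral detail in each log-spaced band with envelope-modulated noise."""
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    noise = np.random.default_rng(0).standard_normal(len(x))
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = band_filter(x, fs, lo, hi)
        # Amplitude envelope: rectify, then smooth with a ~30 Hz low-pass
        env = np.clip(band_filter(np.abs(band), fs, 0.0, 30.0), 0.0, None)
        carrier = band_filter(noise, fs, lo, hi)  # band-limited noise carrier
        out += env * carrier
    return out

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)  # stand-in for a speech signal
y = noise_vocode(x, fs)
```

Fewer channels means coarser spectral resolution; the four-channel setting mirrors the condition described above, and raising `n_channels` makes the output progressively more intelligible.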

  2. Discriminating native from non-native speech using fusion of visual cues

    NARCIS (Netherlands)

    Georgakis, Christos; Petridis, Stavros; Pantic, Maja

    2014-01-01

    The task of classifying accent, as belonging to a native language speaker or a foreign language speaker, has been so far addressed by means of the audio modality only. However, features extracted from the visual modality have been successfully used to extend or substitute audio-only approaches

  3. Discrimination Between Native and Non-Native Speech Using Visual Features Only

    NARCIS (Netherlands)

    Georgakis, Christos; Petridis, Stavros; Pantic, Maja

    2016-01-01

    Accent is a soft biometric trait that can be inferred from pronunciation and articulation patterns characterizing the speaking style of an individual. Past research has addressed the task of classifying accent, as belonging to a native language speaker or a foreign language speaker, by means of the

  4. Native fruit traits may mediate dispersal competition between native and non-native plants

    Directory of Open Access Journals (Sweden)

    Clare Aslan

    2012-02-01

    Seed disperser preferences may mediate the impact of invasive, non-native plant species on their new ecological communities. Significant seed disperser preference for invasives over native species could facilitate the spread of the invasives while impeding native plant dispersal. Such competition for dispersers could negatively impact the fitness of some native plants. Here, we review published literature to identify circumstances under which preference for non-native fruits occurs. The importance of fruit attraction is underscored by several studies demonstrating that invasive, fleshy-fruited plant species are particularly attractive to regional frugivores. A small set of studies directly compare frugivore preference for native vs. invasive species, and we find that different designs and goals within such studies frequently yield contrasting results. When similar native and non-native plant species have been compared, frugivores have tended to show preference for the non-natives. This preference appears to stem from enhanced feeding efficiency or accessibility associated with the non-native fruits. On the other hand, studies examining preference within existing suites of co-occurring species, with no attempt to maximize fruit similarity, show mixed results, with frugivores in most cases acting opportunistically or preferring native species. A simple, exploratory meta-analysis finds significant preference for native species when these studies are examined as a group. We illustrate the contrasting findings typical of these two approaches with results from two small-scale aviary experiments we conducted to determine preference by frugivorous bird species in northern California. In these case studies, native birds preferred the native fruit species as long as it was dissimilar from non-native fruits, while non-native European starlings preferred non-native fruit. 
However, native birds showed slight, non-significant preference for non-native fruit

  5. Familiarity breeds support: speech-language pathologists' perceptions of bullying of students with autism spectrum disorders.

    Science.gov (United States)

    Blood, Gordon W; Blood, Ingrid M; Coniglio, Amy D; Finke, Erinn H; Boyle, Michael P

    2013-01-01

    Children with autism spectrum disorders (ASD) are primary targets for bullies and victimization. Research shows school personnel may be uneducated about bullying and ways to intervene. Speech-language pathologists (SLPs) in schools often work with children with ASD and may have victims of bullying on their caseloads. These victims may feel most comfortable turning to SLPs for help during one-to-one treatment sessions to discuss these types of experiences. A nationwide survey mailed to 1000 school-based SLPs, using a vignette design technique, determined perceptions about intervention for bullying and use of specific strategies. Results revealed that a majority (89%) of SLPs' responses were in the "likely" or "very likely" to intervene categories for all types of bullying (physical, verbal, relational and cyber), regardless of whether the episode was observed or not. A factor analysis was conducted on a 14-item strategy scale for dealing with bullying of children with ASD. Three factors emerged, labeled "Report/Consult", "Educate the Victim", and "Reassure the Victim". SLPs providing no services to children with ASD on their caseloads demonstrated significantly lower mean scores for the likelihood of intervention and use of select strategies. SLPs may play an important role in reducing and/or eliminating bullying episodes in children with ASD. Readers will be able to (a) explain four different types of bullying, (b) describe the important role of school personnel in reducing and eliminating bullying, (c) describe the perceptions and strategies selected by SLPs to deal with bullying episodes for students with ASD, and (d) outline the potential role of SLPs in assisting students with ASD who are victimized. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Motor skills, haptic perception and social abilities in children with mild speech disorders.

    Science.gov (United States)

    Müürsepp, Iti; Aibast, Herje; Gapeyeva, Helena; Pääsuke, Mati

    2012-02-01

    The aim of the study was to evaluate motor skills, haptic object recognition and social interaction in 5-year-old children with mild specific expressive language impairment (expressive-SLI) and articulation disorder (AD), in comparison with age- and gender-matched healthy children. Twenty-nine children (23 boys and 6 girls) with expressive-SLI, 27 children (20 boys and 7 girls) with AD and 30 children (23 boys and 7 girls) with typically developing language as controls participated in our study. The children were examined for manual dexterity, ball skills, and static and dynamic balance with the M-ABC test, for haptic object recognition, and for social interaction with a questionnaire completed by teachers. Children with mild expressive-SLI demonstrated significantly poorer results in all subtests of motor skills (p < 0.05) and in social interaction (p < 0.05) compared with controls. There were no significant differences (p > 0.05) in measured parameters between children with AD and controls. Children with expressive-SLI performed considerably poorer than the AD group in the balance subtest (p < 0.05). In children with expressive-SLI, motor skills, haptic perception and social interaction are considerably more affected than in children with AD. Although motor difficulties in speech production are prevalent in AD, the impairment is localised and does not involve children's general motor skills, haptic perception or social interaction. Copyright © 2011 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  7. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    Science.gov (United States)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  8. New graduates’ perceptions of preparedness to provide speech-language therapy services in general and dysphagia services in particular

    Science.gov (United States)

    Booth, Alannah; Choto, Fadziso; Gotlieb, Jessica; Robertson, Rebecca; Morris, Gabriella; Stockley, Nicola; Mauff, Katya

    2015-01-01

    Background Upon graduation, newly qualified speech-language therapists are expected to provide services independently. This study describes new graduates’ perceptions of their preparedness to provide services across the scope of the profession and explores associations between perceptions of dysphagia theory and clinical learning curricula with preparedness for adult and paediatric dysphagia service delivery. Methods New graduates of six South African universities were recruited to participate in a survey by completing an electronic questionnaire exploring their perceptions of the dysphagia curricula and their preparedness to practise across the scope of the profession of speech-language therapy. Results Eighty graduates participated in the study yielding a response rate of 63.49%. Participants perceived themselves to be well prepared in some areas (e.g. child language: 100%; articulation and phonology: 97.26%), but less prepared in other areas (e.g. adult dysphagia: 50.70%; paediatric dysarthria: 46.58%; paediatric dysphagia: 38.36%) and most unprepared to provide services requiring sign language (23.61%) and African languages (20.55%). There was a significant relationship between perceptions of adequate theory and clinical learning opportunities with assessment and management of dysphagia and perceptions of preparedness to provide dysphagia services. Conclusion There is a need for review of existing curricula and consideration of developing a standard speech-language therapy curriculum across universities, particularly in service provision to a multilingual population, and in both the theory and clinical learning of the assessment and management of adult and paediatric dysphagia, to better equip graduates for practice. PMID:26304217

  9. Atypical audio-visual speech perception and McGurk effects in children with specific language impairment.

    Science.gov (United States)

    Leybaert, Jacqueline; Macchi, Lucie; Huyse, Aurélie; Champoux, François; Bayard, Clémence; Colin, Cécile; Berthommier, Frédéric

    2014-01-01

    Audiovisual speech perception of children with specific language impairment (SLI) and children with typical language development (TLD) was compared in two experiments using /aCa/ syllables presented in the context of a masking release paradigm. Children had to repeat syllables presented in auditory-alone, visual-alone (speechreading), and audiovisual congruent and incongruent (McGurk) conditions. Stimuli were masked by either stationary (ST) or amplitude-modulated (AM) noise. Although children with SLI were less accurate in auditory and audiovisual speech perception, they showed an auditory masking release effect similar to that of children with TLD. Children with SLI also had fewer correct responses in speechreading than children with TLD, indicating impairment in phonemic processing of visual speech information. In response to McGurk stimuli, children with TLD showed more fusions in AM noise than in ST noise, a consequence of the auditory masking release effect and of the influence of visual information. Children with SLI did not show this effect systematically, suggesting that they were less influenced by visual speech. However, when the visual cues were easily identified, the profile of responses to McGurk stimuli was similar in both groups, suggesting that children with SLI do not suffer from an impairment of audiovisual integration. An analysis of percent of information transmitted revealed a deficit in the children with SLI, particularly for the place-of-articulation feature. Taken together, the data support the hypothesis of intact peripheral processing of auditory speech information, coupled with a supramodal deficit of phonemic categorization in children with SLI. Clinical implications are discussed.

  10. Recreational freshwater fishing drives non-native aquatic species richness patterns at a continental scale

    Data.gov (United States)

    U.S. Environmental Protection Agency — Aim. Mapping the geographic distribution of non-native aquatic species is a critically important precursor to understanding the anthropogenic and environmental...

  11. Non-Native (Exotic) Snake Envenomations in the U.S., 2005–2011

    OpenAIRE

    Warrick, Brandon J.; Boyer, Leslie V.; Seifert, Steven A.

    2014-01-01

    Non-native (exotic) snakes are a problematic source of envenomation worldwide. This manuscript describes the current demographics, outcomes and challenges of non-native snakebites in the United States (U.S.). We performed a retrospective case series of the National Poison Data System (NPDS) database between 2005 and 2011. There were 258 human exposures involving at least 61 unique exotic venomous species (average = 37 per year; range = 33–40). Males comprised 79% and females 21%. The averag...

  12. Managing conflicts arising from fisheries enhancements based on non-native fishes in southern Africa.

    Science.gov (United States)

    Ellender, B R; Woodford, D J; Weyl, O L F; Cowx, I G

    2014-12-01

    Southern Africa has a long history of non-native fish introductions for the enhancement of recreational and commercial fisheries, due to a perceived lack of suitable native species. This has resulted in some important inland fisheries being based on non-native fishes. Regionally, these introductions are predominantly not benign, and non-native fishes are considered one of the main threats to aquatic biodiversity because they affect native biota through predation, competition, habitat alteration, disease transfer and hybridization. To achieve national policy objectives of economic development, food security and poverty eradication, countries are increasingly looking towards inland fisheries as vehicles for development. As a result, conflicts have developed between economic and conservation objectives. In South Africa, as is the case for other invasive biota, the control and management of non-native fishes is included in the National Environmental Management: Biodiversity Act. Implementation measures include import and movement controls and, more recently, non-native fish eradication in conservation priority areas. Management actions are, however, complicated because many non-native fishes are important components in recreational and subsistence fisheries that contribute towards regional economies and food security. In other southern African countries, little attention has focussed on issues and management of non-native fishes, and this is cause for concern. This paper provides an overview of introductions, impacts and fisheries in southern Africa with emphasis on existing and evolving legislation, conflicts, implementation strategies and the sometimes innovative approaches that have been used to prioritize conservation areas and manage non-native fishes. © 2014 The Fisheries Society of the British Isles.

  13. Non-native fishes in Florida freshwaters: a literature review and synthesis

    Science.gov (United States)

    Schofield, Pamela J.; Loftus, William F.

    2015-01-01

    Non-native fishes have been known from freshwater ecosystems of Florida since the 1950s, and dozens of species have established self-sustaining populations. Nonetheless, no synthesis of data collected on those species in Florida has been published until now. We searched the literature for peer-reviewed publications reporting original data for 42 species of non-native fishes in Florida that are currently established, were established in the past, or are sustained by human intervention. Since the 1950s, the number of non-native fish species increased steadily at a rate of roughly six new species per decade. Studies documented (in decreasing abundance): geographic location/range expansion, life- and natural-history characteristics (e.g., diet, habitat use), ecophysiology, community composition, population structure, behaviour, aquatic-plant management, and fisheries/aquaculture. Although there is a great deal of taxonomic uncertainty and confusion associated with many taxa, very few studies focused on clarifying taxonomic ambiguities of non-native fishes in the State. Most studies were descriptive; only 15 % were manipulative. Risk assessments, population-control studies and evaluations of effects of non-native fishes were rare topics for research, although they are highly valued by natural-resource managers. Though some authors equated lack of data with lack of effects, research is needed to confirm or deny conclusions. Much more is known regarding the effects of lionfish (Pterois spp.) on native fauna, despite its much shorter establishment time. Natural-resource managers need biological and ecological information to make policy decisions regarding non-native fishes. Given the near-absence of empirical data on effects of Florida non-native fishes, and the lengthy time-frames usually needed to collect such information, we provide suggestions for data collection in a manner that may be useful in the evaluation and prediction of non-native fish effects.

  14. Non-native vascular plants from Canary Islands (Spain): nomenclatural and taxonomical adjustments

    OpenAIRE

    Verloove, F.

    2013-01-01

    Corrections and other nomenclatural and taxonomic adjustments are proposed for 88 non-native taxa from the checklist of vascular plants of the Canary Islands (Spain).

  15. Setting Priorities for Monitoring and Managing Non-native Plants: Toward a Practical Approach.

    Science.gov (United States)

    Koch, Christiane; Jeschke, Jonathan M; Overbeck, Gerhard E; Kollmann, Johannes

    2016-09-01

    Land managers face the challenge of setting priorities in monitoring and managing non-native plant species, as resources are limited and not all non-natives become invasive. Existing frameworks that have been proposed to rank non-native species require extensive information on their distribution, abundance, and impact. This information is difficult to obtain and often not available for many species and regions. National watch or priority lists are helpful, but it is questionable whether they provide sufficient information for environmental management on a regional scale. We therefore propose a decision tree that ranks species based on simpler albeit robust information, but still provides reliable management recommendations. To test the decision tree, we collected and evaluated distribution data on non-native plants in highland grasslands of Southern Brazil. We compared the results with a national list from the Brazilian Invasive Species Database for the state to discuss advantages and disadvantages of the different approaches at a regional scale. Out of 38 non-native species found, only four were also present on the national list. If management relied solely on this list, many species that the decision tree identified as spreading would go unnoticed. With the suggested scheme, it is possible to assign species to active management, monitoring, or further evaluation. While national lists are certainly important, management on a regional scale should employ additional tools that adequately consider the actual risk of non-natives becoming invasive.

  16. Social and Cognitive Impressions of Adults Who Do and Do Not Stutter Based on Listeners' Perceptions of Read-Speech Samples

    Directory of Open Access Journals (Sweden)

    Lauren J. Amick

    2017-07-01

    Stuttering is a neurodevelopmental disorder characterized by frequent and involuntary disruptions during speech production. Adults who stutter are often subject to negative perceptions. The present study examined whether negative social and cognitive impressions are formed when listening to speech, even without any knowledge about the speaker. Two experiments were conducted in which naïve participants were asked to listen to and rate samples of read speech produced by adults who stutter and typically-speaking adults, without knowledge about the individuals who produced the speech. In both experiments, listeners rated speaker cognitive ability, likeability, and anxiety, as well as a number of speech characteristics that included fluency, naturalness, intelligibility, the likelihood that the speaker had a speech-and-language disorder (Experiment 1 only), and rate and volume (both Experiments 1 and 2). The speech of adults who stutter was perceived to be less fluent, natural, and intelligible, and to be slower and louder than the speech of typical adults. Adults who stutter were also perceived to have lower cognitive ability, to be less likeable, and to be more anxious than the typical adult speakers. Relations between speech characteristics and social and cognitive impressions were found independent of whether or not the speaker stuttered (i.e., they were found for both adults who stutter and typically-speaking adults) and did not depend on being cued that some of the speakers might have had a speech-language impairment.

  17. Speech perception in older listeners with normal hearing: conditions of time alteration, selective word stress, and length of sentences.

    Science.gov (United States)

    Cho, Soojin; Yu, Jyaehyoung; Chun, Hyungi; Seo, Hyekyung; Han, Woojae

    2014-04-01

    Deficits of the aging auditory system negatively affect older listeners in terms of speech communication, resulting in limitations to their social lives. To improve their perceptual skills, the goal of this study was to investigate the effects of time alteration, selective word stress, and varying sentence lengths on the speech perception of older listeners. Seventeen older people with normal hearing were tested under seven conditions of time-altered sentences (i.e., ±60%, ±40%, ±20%, 0%), two conditions of selective word stress (i.e., no stress and stress), and three sentence lengths (i.e., short, medium, and long), at each individual's most comfortable level in quiet. As time compression increased, sentence perception scores decreased significantly. Compared to the natural (no-stress) condition, selectively stressed words significantly improved the perceptual scores of these older listeners. Long sentences yielded the worst scores under all time-altered conditions. Interestingly, there was a noticeable positive effect of selective word stress at 20% time compression. This pattern of results suggests that a combination of time compression and selective word stress is more effective for understanding speech in older listeners than using the time-expanded condition alone.

  18. Sonority's Effect as a Surface Cue on Lexical Speech Perception of Children With Cochlear Implants.

    Science.gov (United States)

    Hamza, Yasmeen; Okalidou, Areti; Kyriafinis, George; van Wieringen, Astrid

    2018-03-06

    Sonority is the relative perceptual prominence/loudness of speech sounds of the same length, stress, and pitch. Children with cochlear implants (CIs), with restored audibility and relatively intact temporal processing, are expected to benefit from the perceptual prominence cues of highly sonorous sounds. Sonority also influences lexical access through the sonority-sequencing principle (SSP), a grammatical phonotactic rule, which facilitates the recognition and segmentation of syllables within speech. The more nonsonorous the onset of a syllable is, the larger is the degree of sonority rise to the nucleus, and the more optimal the SSP. Children with CIs may experience hindered or delayed development of the language-learning rule SSP, as a result of their deprived/degraded auditory experience. The purpose of the study was to explore sonority's role in speech perception and lexical access of prelingually deafened children with CIs. A case-control study with 15 children with CIs, 25 normal-hearing children (NHC), and 50 normal-hearing adults was conducted, using a lexical identification task of novel, nonreal CV-CV words taught via fast mapping. The CV-CV words were constructed according to four sonority conditions, entailing syllables with sonorous onsets/less optimal SSP (SS) and nonsonorous onsets/optimal SSP (NS) in all combinations, that is, SS-SS, SS-NS, NS-SS, and NS-NS. Outcome measures were accuracy and reaction times (RTs). A subgroup analysis of 12 children with CIs pair matched to 12 NHC on hearing age aimed to study the effect of oral-language exposure period on the sonority-related performance. The children groups showed similar accuracy performance, overall and across all the sonority conditions. However, within-group comparisons showed that the children with CIs scored more accurately on the SS-SS condition relative to the NS-NS and NS-SS conditions, while the NHC performed equally well across all conditions. Additionally, adult-comparable accuracy

  19. An Ecosystem-Service Approach to Evaluate the Role of Non-Native Species in Urbanized Wetlands

    Directory of Open Access Journals (Sweden)

    Rita S. W. Yam

    2015-04-01

    Natural wetlands have been increasingly transformed into urbanized ecosystems commonly colonized by stress-tolerant non-native species. Although non-native species present numerous threats to natural ecosystems, some could provide important benefits to urbanized ecosystems. This study investigated the extent of colonization by non-native fish and bird species of three urbanized wetlands in subtropical Taiwan. Using literature data, the role of each non-native species in the urbanized wetland was evaluated by its effects (benefits/damages) on ecosystem services (ES), based on its ecological traits. Our sites were seriously colonized by non-native fishes (39%–100%), but <3% by non-native birds. Although most non-native species could damage ES regulation (disease control and wastewater purification), some could be beneficial to urbanized wetland ES. Our results indicated the importance of non-native fishes in supporting ES by serving as a food source for fish-eating waterbirds (native and migratory species) due to their high abundance, particularly Oreochromis spp. However, all non-native birds are regarded as "harmful" species causing important ecosystem disservices, and thus eradication of these bird invaders from urban wetlands would be needed. This simple framework for role evaluation of non-native species represents a holistic and transferable approach to facilitate decision making on management priority of non-native species in urbanized wetlands.

  20. Combined Audience and Video Feedback With Cognitive Review Improves State Anxiety and Self-Perceptions During Speech Tasks in Socially Anxious Individuals.

    Science.gov (United States)

    Chen, Junwen; McLean, Jordan E; Kemps, Eva

    2018-03-01

    This study investigated the effects of combined audience feedback with video feedback plus cognitive preparation, and cognitive review (enabling deeper processing of feedback) on state anxiety and self-perceptions including perception of performance and perceived probability of negative evaluation in socially anxious individuals during a speech performance. One hundred and forty socially anxious students were randomly assigned to four conditions: Cognitive Preparation + Video Feedback + Audience Feedback + Cognitive Review (CP+VF+AF+CR), Cognitive Preparation + Video Feedback + Cognitive Review (CP+VF+CR), Cognitive Preparation + Video Feedback only (CP+VF), and Control. They were asked to deliver two impromptu speeches that were evaluated by confederates. Participants' levels of anxiety and self-perceptions pertaining to the speech task were assessed before and after feedback, and after the second speech. Compared to participants in the other conditions, participants in the CP+VF+AF+CR condition reported a significant decrease in their state anxiety and perceived probability of negative evaluation scores, and a significant increase in their positive perception of speech performance from before to after the feedback. These effects generalized to the second speech. Our results suggest that adding audience feedback to video feedback plus cognitive preparation and cognitive review may improve the effects of existing video feedback procedures in reducing anxiety symptoms and distorted self-representations in socially anxious individuals. Copyright © 2017. Published by Elsevier Ltd.

  1. Speech Perception Engages a General Timer: Evidence from a Divided Attention Word Identification Task

    Science.gov (United States)

    Casini, Laurence; Burle, Boris; Nguyen, Noel

    2009-01-01

    Time is essential to speech. The duration of speech segments plays a critical role in the perceptual identification of these segments, and therefore in that of spoken words. Here, using a French word identification task, we show that vowels are perceived as shorter when attention is divided between two tasks, as compared to a single task control…

  2. The Perception of "Sine-Wave Speech" by Adults with Developmental Dyslexia.

    Science.gov (United States)

    Rosner, Burton S.; Talcott, Joel B.; Witton, Caroline; Hogg, James D.; Richardson, Alexandra J.; Hansen, Peter C.; Stein, John F.

    2003-01-01

    "Sine-wave speech" sentences contain only four frequency-modulated sine waves, lacking many acoustic cues present in natural speech. Adults with (n=19) and without (n=14) dyslexia were asked to reproduce orally sine-wave utterances in successive trials. Results suggest comprehension of sine-wave sentences is impaired in some adults with…

  3. Perceptions of Speech and Language Therapy Amongst UK School and College Students: Implications for Recruitment

    Science.gov (United States)

    Greenwood, Nan; Wright, Jannet A.; Bithell, Christine

    2006-01-01

    Background: Communication disorders affect both sexes and people from all ethnic groups, but members of minority ethnic groups and males in the UK are underrepresented in the speech and language therapy profession. Research in the area of recruitment is limited, but a possible explanation is poor awareness and understanding of speech and language…

  4. Speech Rate Normalization and Phonemic Boundary Perception in Cochlear-Implant Users

    Science.gov (United States)

    Jaekel, Brittany N.; Newman, Rochelle S.; Goupell, Matthew J.

    2017-01-01

    Purpose: Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown if adults who use auditory prostheses called cochlear implants (CI) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate…

  5. Examining speech perception in noise and cognitive functions in the elderly.

    Science.gov (United States)

    Meister, Hartmut; Schreitmüller, Stefan; Grugel, Linda; Beutner, Dirk; Walger, Martin; Meister, Ingo

    2013-12-01

    The purpose of this study was to investigate the relationship between cognitive functions (i.e., working memory [WM]) and speech recognition against different background maskers in older individuals. Speech reception thresholds (SRTs) were determined using a matrix-sentence test. Unmodulated noise, modulated noise (International Collegium for Rehabilitative Audiology [ICRA] noise 5-250), and speech fragments (International Speech Test Signal [ISTS]) were used as background maskers. Verbal WM was assessed using the Verbal Learning and Memory Test (VLMT; Helmstaedter & Durwen, 1990). Measurements were conducted with 14 normal-hearing older individuals and a control group of 12 normal-hearing young listeners. Despite the older individuals' normal hearing ability, the young listeners outperformed them with all background maskers. These differences were largest for the modulated maskers. SRTs were significantly correlated with the scores of the VLMT. A linear regression model also included WM as the only significant predictor variable. The results support the assumption that WM plays an important role in speech understanding and that it might have an impact on results obtained using speech audiometry. Thus, an individual's WM capacity should be considered in aural diagnosis and rehabilitation. The VLMT proved to be a clinically applicable test for WM. Further cognitive functions important for speech understanding are currently being investigated within the SAKoLA (Sprachaudiometrie und kognitive Leistungen im Alter [Speech Audiometry and Cognitive Functions in the Elderly]) project.
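    The reported analysis, a correlation between SRTs and VLMT scores plus a linear regression with WM as the sole predictor, can be illustrated with a minimal sketch. The data points below are invented for illustration and are not the study's measurements:

```python
import numpy as np

# Invented data: VLMT verbal working-memory scores and speech reception
# thresholds (SRT, dB SNR; lower = better) for six listeners.
wm = np.array([45.0, 50.0, 55.0, 60.0, 65.0, 70.0])
srt = np.array([-2.0, -3.1, -3.9, -5.2, -6.0, -7.1])

# Ordinary least squares fit: SRT = slope * WM + intercept.
A = np.column_stack([wm, np.ones_like(wm)])
(slope, intercept), *_ = np.linalg.lstsq(A, srt, rcond=None)

r = np.corrcoef(wm, srt)[0, 1]  # Pearson correlation
print(f"slope = {slope:.3f} dB per VLMT point, r = {r:.2f}")
# Negative slope: higher WM capacity predicts lower (better) SRTs,
# the direction of the relationship the study reports.
```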

  6. Ecological disequilibrium drives insect pest and pathogen accumulation in non-native trees.

    Science.gov (United States)

    Crous, Casparus J; Burgess, Treena I; Le Roux, Johannes J; Richardson, David M; Slippers, Bernard; Wingfield, Michael J

    2016-12-23

    Non-native trees have become dominant components of many landscapes, including urban ecosystems, commercial forestry plantations, fruit orchards, and as invasives in natural ecosystems. Often, these trees have been separated from their natural enemies (i.e. insects and pathogens) leading to ecological disequilibrium, that is, the immediate breakdown of historically co-evolved interactions once introduced into novel environments. Long-established, non-native tree plantations provide useful experiments to explore the dimensions of such ecological disequilibria. We quantify the status quo of non-native insect pests and pathogens catching up with their tree hosts (planted Acacia, Eucalyptus and Pinus species) in South Africa, and examine which native South African enemy species utilise these trees as hosts. Interestingly, pines, with no confamilial relatives in South Africa and the longest residence time (almost two centuries), have acquired only one highly polyphagous native pathogen. This is in contrast to acacias and eucalypts, both with many native and confamilial relatives in South Africa that have acquired more native pathogens. These patterns support the known role of phylogenetic relatedness of non-native and native floras in influencing the likelihood of pathogen shifts between them. This relationship, however, does not seem to hold for native insects. Native insects appear far more likely to expand their feeding habits onto non-native tree hosts than are native pathogens, although they are generally less damaging. The ecological disequilibrium conditions of non-native trees are deeply rooted in the eco-evolutionary experience of the host plant, co-evolved natural enemies, and native organisms from the introduced range. We should expect considerable spatial and temporal variation in ecological disequilibrium conditions among non-native taxa, which can be significantly influenced by biosecurity and management practices. 

  7. Non-native species in the vascular flora of highlands and mountains of Iceland

    Directory of Open Access Journals (Sweden)

    Pawel Wasowicz

    2016-01-01

    The highlands and mountains of Iceland are one of the largest remaining wilderness areas in Europe. This study aimed to provide comprehensive and up-to-date data on non-native plant species in these areas and to answer the following questions: (1) How many non-native vascular plant species inhabit highland and mountainous environments in Iceland? (2) Do temporal trends in the immigration of alien species to Iceland differ between highland and lowland areas? (3) Does the incidence of alien species in the disturbed and undisturbed areas within the Icelandic highlands differ? (4) Does the spread of non-native species in Iceland proceed from lowlands to highlands? (5) Can we detect hot spots in the distribution of non-native taxa within the highlands? Overall, 16 non-native vascular plant species were detected, including 11 casuals and 5 naturalized taxa (1 invasive). Results showed that temporal trends in alien-species immigration to highland and lowland areas are similar, but it is clear that the colonization of highland areas is still in its initial phase. Non-native plants tended to occur close to man-made infrastructure and buildings, including huts, shelters, and roads. Analysis of spatio-temporal patterns showed that spread within highland areas is a second step in non-native plant colonization of Iceland. Several statistically significant hot spots of alien plant occurrences were identified using the Getis-Ord Gi* statistic, and these were linked to human disturbance. This research suggests that human-mediated dispersal is the main driving force increasing the risk of invasion in Iceland's highlands and mountain areas.
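    The Getis-Ord Gi* hot-spot statistic used here compares the weighted sum of values around each site against the global mean, yielding a z-score per site. A minimal sketch, with a toy transect standing in for the survey grid (the counts and weights are invented, and production analyses would typically use a library such as PySAL):

```python
import numpy as np

def getis_ord_gi_star(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Getis-Ord Gi* z-scores for site values x under a spatial weights
    matrix w that includes each site as its own neighbour (Gi*, not Gi)."""
    n = len(x)
    xbar = x.mean()
    s = np.sqrt((x ** 2).mean() - xbar ** 2)   # population std deviation
    w_sum = w.sum(axis=1)                      # W_i, total weight per site
    num = w @ x - xbar * w_sum                 # local sum vs. expectation
    den = s * np.sqrt((n * (w ** 2).sum(axis=1) - w_sum ** 2) / (n - 1))
    return num / den

# Toy example: six sites along a transect; alien-plant records cluster
# at one end (e.g. near a hut), none at the other.
counts = np.array([10.0, 9.0, 8.0, 0.0, 0.0, 0.0])
w = np.eye(6)                                  # self-inclusion (the "star")
for i in range(5):                             # adjacency along the line
    w[i, i + 1] = w[i + 1, i] = 1.0
print(getis_ord_gi_star(counts, w))            # positive z at the hot end
```

    Sites with large positive z-scores are hot spots of occurrences; large negative scores mark cold spots, which is how disturbance-linked clusters can be flagged.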

  8. Interdependence of linguistic and indexical speech perception skills in school-age children with early cochlear implantation.

    Science.gov (United States)

    Geers, Ann E; Davidson, Lisa S; Uchanski, Rosalie M; Nicholas, Johanna G

    2013-09-01

    This study documented the ability of experienced pediatric cochlear implant (CI) users to perceive linguistic properties (what is said) and indexical attributes (emotional intent and talker identity) of speech, and examined the extent to which linguistic (LSP) and indexical (ISP) perception skills are related. Preimplant-aided hearing, age at implantation, speech processor technology, CI-aided thresholds, sequential bilateral cochlear implantation, and academic integration with hearing age-mates were examined for their possible relationships to both LSP and ISP skills. Sixty 9- to 12-year olds, first implanted at an early age (12 to 38 months), participated in a comprehensive test battery that included the following LSP skills: (1) recognition of monosyllabic words at loud and soft levels, (2) repetition of phonemes and suprasegmental features from nonwords, and (3) recognition of key words from sentences presented within a noise background, and the following ISP skills: (1) discrimination of across-gender and within-gender (female) talkers and (2) identification and discrimination of emotional content from spoken sentences. A group of 30 age-matched children without hearing loss completed the nonword repetition, and talker- and emotion-perception tasks for comparison. Word-recognition scores decreased with signal level from a mean of 77% correct at 70 dB SPL to 52% at 50 dB SPL. On average, CI users recognized 50% of key words presented in sentences that were 9.8 dB above background noise. Phonetic properties were repeated from nonword stimuli at about the same level of accuracy as suprasegmental attributes (70 and 75%, respectively). The majority of CI users identified emotional content and differentiated talkers significantly above chance levels. Scores on LSP and ISP measures were combined into separate principal component scores and these components were highly correlated (r = 0.76). Both LSP and ISP component scores were higher for children who received a CI

  9. Revisiting Neil Armstrong's Moon-Landing Quote: Implications for Speech Perception, Function Word Reduction, and Acoustic Ambiguity.

    Directory of Open Access Journals (Sweden)

    Melissa M Baese-Berk

    Neil Armstrong insisted that his quote upon landing on the moon was misheard, and that he had said "one small step for a man" instead of "one small step for man." What he said is unclear in part because function words like "a" can be reduced and spectrally indistinguishable from the preceding context. Therefore, their presence can be ambiguous, and they may disappear perceptually depending on the rate of surrounding speech. Two experiments are presented examining production and perception of reduced tokens of "for" and "for a" in spontaneous speech. Experiment 1 investigates the distributions of several acoustic features of "for" and "for a". The results suggest that the distributions of "for" and "for a" overlap substantially, in both temporal and spectral characteristics. Experiment 2 examines perception of these same tokens when the context speaking rate differs. The perceptibility of the function word "a" varies as a function of this context speaking rate. These results demonstrate that substantial ambiguity exists in the original quote from Armstrong, and that this ambiguity may be understood through context speaking rate.

  10. Revisiting Neil Armstrong's Moon-Landing Quote: Implications for Speech Perception, Function Word Reduction, and Acoustic Ambiguity.

    Science.gov (United States)

    Baese-Berk, Melissa M; Dilley, Laura C; Schmidt, Stephanie; Morrill, Tuuli H; Pitt, Mark A

    2016-01-01

    Neil Armstrong insisted that his quote upon landing on the moon was misheard, and that he had said "one small step for a man" instead of "one small step for man." What he said is unclear in part because function words like "a" can be reduced and spectrally indistinguishable from the preceding context. Therefore, their presence can be ambiguous, and they may disappear perceptually depending on the rate of surrounding speech. Two experiments are presented examining production and perception of reduced tokens of "for" and "for a" in spontaneous speech. Experiment 1 investigates the distributions of several acoustic features of "for" and "for a". The results suggest that the distributions of "for" and "for a" overlap substantially, in both temporal and spectral characteristics. Experiment 2 examines perception of these same tokens when the context speaking rate differs. The perceptibility of the function word "a" varies as a function of this context speaking rate. These results demonstrate that substantial ambiguity exists in the original quote from Armstrong, and that this ambiguity may be understood through context speaking rate.

  11. Auditory-visual speech integration by prelinguistic infants: perception of an emergent consonant in the McGurk effect.

    Science.gov (United States)

    Burnham, Denis; Dodd, Barbara

    2004-12-01

    The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4½-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba], visual [ga]), and the control group to a matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials: [ba], [da], and [ða] (as in "then"). Visual-fixation durations in test trials showed that the experimental group treated the emergent percepts of the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control-group infants, [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. Copyright 2004 Wiley Periodicals, Inc.

  12. The impact of cochlear implantation on speech understanding, subjective hearing performance, and tinnitus perception in patients with unilateral severe to profound hearing loss.

    Science.gov (United States)

    Távora-Vieira, Dayse; Marino, Roberta; Acharya, Aanand; Rajan, Gunesh P

    2015-03-01

    This study aimed to determine the impact of cochlear implantation on speech understanding in noise, subjective perception of hearing, and tinnitus perception of adult patients with unilateral severe to profound hearing loss and to investigate whether duration of deafness and age at implantation would influence the outcomes. In addition, this article describes the auditory training protocol used for unilaterally deaf patients. This is a prospective study of subjects undergoing cochlear implantation for unilateral deafness with or without associated tinnitus. Speech perception in noise was tested using the Bamford-Kowal-Bench speech-in-noise test presented at 65 dB SPL. The Speech, Spatial, and Qualities of Hearing Scale and the Abbreviated Profile of Hearing Aid Benefit were used to evaluate the subjective perception of hearing with a cochlear implant and quality of life. Tinnitus disturbance was measured using the Tinnitus Reaction Questionnaire. Data were collected before cochlear implantation and 3, 6, 12, and 24 months after implantation. Twenty-eight postlingual unilaterally deaf adults with or without tinnitus were implanted. There was a significant improvement in speech perception in noise across time in all spatial configurations. There was an overall significant improvement on the subjective perception of hearing and quality of life. Tinnitus disturbance reduced significantly across time. Age at implantation and duration of deafness did not influence the outcomes significantly. Cochlear implantation provided significant improvement in speech understanding in challenging situations, subjective perception of hearing performance, and quality of life. Cochlear implantation also resulted in reduced tinnitus disturbance. Age at implantation and duration of deafness did not seem to influence the outcomes.

  13. Hemispheric asymmetry of emotion words in a non-native mind: a divided visual field study.

    Science.gov (United States)

    Jończyk, Rafał

    2015-05-01

    This study investigates hemispheric specialization for emotional words among proficient non-native speakers of English by means of the divided visual field paradigm. The motivation behind the study is to extend the monolingual hemifield research to the non-native context and see how emotion words are processed in a non-native mind. Sixty-eight females participated in the study, all highly proficient in English. The stimuli comprised 12 positive nouns, 12 negative nouns, 12 non-emotional nouns, and 36 pseudo-words. To examine the lateralization of emotion, stimuli were presented unilaterally in a random fashion for 180 ms in a go/no-go lexical decision task. The perceptual data showed a right-hemispheric advantage for processing speed of negative words and a complementary role of the two hemispheres in the recognition accuracy of experimental stimuli. The data indicate that processing of emotion words in a non-native language may require greater interhemispheric communication, but at the same time demonstrate a specific role of the right hemisphere in the processing of negative relative to positive valence. The results of the study are discussed in light of the methodological inconsistencies in the hemifield research as well as the non-native context in which the study was conducted.

  14. Positive and Negative Impacts of Non-Native Bee Species around the World.

    Science.gov (United States)

    Russo, Laura

    2016-11-28

    Though they are relatively understudied, non-native bees are ubiquitous and have enormous potential economic and environmental impacts. These impacts may be positive or negative, and are often unquantified. In this manuscript, I review literature on the known distribution and environmental and economic impacts of 80 species of introduced bees. The potential negative impacts of non-native bees include competition with native bees for nesting sites or floral resources, pollination of invasive weeds, co-invasion with pathogens and parasites, genetic introgression, damage to buildings, affecting the pollination of native plant species, and changing the structure of native pollination networks. The potential positive impacts of non-native bees include agricultural pollination, availability for scientific research, rescue of native species, and resilience to human-mediated disturbance and climate change. Most non-native bee species are accidentally introduced and nest in stems, twigs, and cavities in wood. In terms of number of species, the best represented families are Megachilidae and Apidae, and the best represented genus is Megachile. The best studied genera are Apis and Bombus, and most of the species in these genera were deliberately introduced for agricultural pollination. Thus, we know little about the majority of non-native bees, accidentally introduced or spreading beyond their native ranges.

  15. Positive and Negative Impacts of Non-Native Bee Species around the World

    Directory of Open Access Journals (Sweden)

    Laura Russo

    2016-11-01

    Full Text Available Though they are relatively understudied, non-native bees are ubiquitous and have enormous potential economic and environmental impacts. These impacts may be positive or negative, and are often unquantified. In this manuscript, I review literature on the known distribution and environmental and economic impacts of 80 species of introduced bees. The potential negative impacts of non-native bees include competition with native bees for nesting sites or floral resources, pollination of invasive weeds, co-invasion with pathogens and parasites, genetic introgression, damage to buildings, affecting the pollination of native plant species, and changing the structure of native pollination networks. The potential positive impacts of non-native bees include agricultural pollination, availability for scientific research, rescue of native species, and resilience to human-mediated disturbance and climate change. Most non-native bee species are accidentally introduced and nest in stems, twigs, and cavities in wood. In terms of number of species, the best represented families are Megachilidae and Apidae, and the best represented genus is Megachile. The best studied genera are Apis and Bombus, and most of the species in these genera were deliberately introduced for agricultural pollination. Thus, we know little about the majority of non-native bees, accidentally introduced or spreading beyond their native ranges.

  16. Unique structural modulation of a non-native substrate by cochaperone DnaJ.

    Science.gov (United States)

    Tiwari, Satyam; Kumar, Vignesh; Jayaraj, Gopal Gunanathan; Maiti, Souvik; Mapa, Koyeli

    2013-02-12

    The role of bacterial DnaJ protein as a cochaperone of DnaK is strongly appreciated. Although DnaJ unaccompanied by DnaK can bind unfolded as well as native substrate proteins, its role as an individual chaperone remains elusive. In this study, we demonstrate that DnaJ binds a model non-native substrate with a low nanomolar dissociation constant and, more importantly, modulates the structure of its non-native state. The structural modulation achieved by DnaJ is different compared to that achieved by the DnaK-DnaJ complex. The nature of structural modulation exerted by DnaJ is suggestive of a unique unfolding activity on the non-native substrate by the chaperone. Furthermore, we demonstrate that the zinc binding motif along with the C-terminal substrate binding domain of DnaJ is necessary and sufficient for binding and the subsequent binding-induced structural alterations of the non-native substrate. We hypothesize that this hitherto unknown structural alteration of non-native states by DnaJ might be important for its chaperoning activity by removing kinetic traps of the folding intermediates.

  17. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    Science.gov (United States)

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

    The aim of the present study was to characterize the effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. Second-language (L2) learners of American Sign Language (ASL) performed this task in the fMRI scanner. Results indicated that the L2 ASL learners' behavioral classification of the speech sounds improved over time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests a concomitant increase in phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing, and possibly lipreading, during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Epistemologies in the Text of Children's Books: Native- and non-Native-authored books

    Science.gov (United States)

    Dehghani, Morteza; Bang, Megan; Medin, Douglas; Marin, Ananda; Leddon, Erin; Waxman, Sandra

    2013-09-01

    An examination of artifacts provides insights into the goals, practices, and orientations of the persons and cultures who created them. Here, we analyze storybook texts, artifacts that are a part of many children's lives. We examine the stories in books targeted for 4-8-year-old children, contrasting the texts generated by Native American authors versus popular non-Native authors. We focus specifically on the implicit and explicit 'epistemological orientations' associated with relations between human beings and the rest of nature. Native authors were significantly more likely than non-Native authors to describe humans and the rest of nature as psychologically close and embedded in relationships. This pattern converges well with evidence from a behavioral task in which we probed Native (from urban inter-tribal and rural communities) and non-Native children's and adults' attention to ecological relations. We discuss the implications of these differences for environmental cognition and science learning.

  19. Do native brown trout and non-native brook trout interact reproductively?

    Science.gov (United States)

    Cucherousset, J.; Aymes, J. C.; Poulet, N.; Santoul, F.; Céréghino, R.

    2008-07-01

    Reproductive interactions between native and non-native species of fish have received little attention compared to other types of interactions, such as predation or competition for food and habitat. We studied the reproductive interactions between non-native brook trout (Salvelinus fontinalis) and native brown trout (Salmo trutta) in a Pyrenees Mountain stream (SW France). We found evidence of significant interspecific interactions owing to consistent spatial and temporal overlap in redd locations and spawning periods. We observed mixed spawning groups composed of the two species, interspecific subordinate males, and the presence of natural hybrids (tiger trout). These reproductive interactions could be detrimental to the reproductive success of both species. Our study shows that non-native species might have detrimental effects on native species via subtle hybridization behavior.

  20. Non-native Chinese Foreign Language (CFL) Teachers: Identity and Discourse

    DEFF Research Database (Denmark)

    Zhang, Chun

    2014-01-01

    Abstract: Native Chinese foreign language (CFL) teacher identity is an emerging subject of research interest in teacher education. Yet, limited research has been done on the identity construction of non-native CFL teachers in their home culture. Guided by a concept of teacher identity-in-discourse, the paper … teachers face tensions and challenges in constructing their identities as CFL teachers, and the tensions and challenges that arose from Danish teaching culture could influence the non-native CFL teachers' contributions to CFL teaching in their home cultures. The findings further show that in order to cope…

  1. Speech Research

    Science.gov (United States)

    Several articles addressing topics in speech research are presented. The topics include: exploring the functional significance of physiological tremor: a biospectroscopic approach; differences between experienced and inexperienced listeners to deaf speech; a language-oriented view of reading and its disabilities; phonetic factors in letter detection; categorical perception; short-term recall by deaf signers of American Sign Language; a common basis for auditory sensory storage in perception and immediate memory; phonological awareness and verbal short-term memory; initiation versus execution time during manual and oral counting by stutterers; trading relations in the perception of speech by five-year-old children; the role of the strap muscles in pitch lowering; phonetic validation of distinctive features; consonants and syllable boundaries; and vowel information in postvocalic frictions.

  2. The Public and Professionals Reason Similarly about the Management of Non-Native Invasive Species: A Quantitative Investigation of the Relationship between Beliefs and Attitudes

    Science.gov (United States)

    Fischer, Anke; Selge, Sebastian; van der Wal, René; Larson, Brendon M. H.

    2014-01-01

    Despite continued critique of the idea of clear boundaries between scientific and lay knowledge, the ‘deficit-model’ of public understanding of ecological issues still seems prevalent in discourses of biodiversity management. Prominent invasion biologists, for example, still argue that citizens need to be educated so that they accept scientists’ views on the management of non-native invasive species. We conducted a questionnaire-based survey with members of the public and professionals in invasive species management (n = 732) in Canada and the UK to investigate commonalities and differences in their perceptions of species and, more importantly, how these perceptions were connected to attitudes towards species management. Both native and non-native mammal and tree species were included. Professionals tended to have more extreme views than the public, especially in relation to nativeness and abundance of a species. In both groups, species that were perceived to be more abundant, non-native, unattractive or harmful to nature and the economy were more likely to be regarded as in need of management. While perceptions of species and attitudes towards management thus often differed between public and professionals, these perceptions were linked to attitudes in very similar ways across the two groups. This suggests that ways of reasoning about invasive species employed by professionals and the public might be more compatible with each other than commonly thought. We recommend that managers and local people engage in open discussion about each other’s beliefs and attitudes prior to an invasive species control programme. This could ultimately reduce conflict over invasive species control. PMID:25170957

  3. A psychophysical imaging method evidencing auditory cue extraction during speech perception: a group analysis of auditory classification images.

    Science.gov (United States)

    Varnet, Léo; Knoblauch, Kenneth; Serniclaes, Willy; Meunier, Fanny; Hoen, Michel

    2015-01-01

    Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remain undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique that allows experimenters to estimate the relative importance of time-frequency regions in categorizing natural speech utterances in noise. Importantly, this technique enables the testing of hypotheses on the listening strategies of participants at the group level. We exemplify this approach by identifying the acoustic cues involved in da/ga categorization with two phonetic contexts, Al- or Ar-. The application of Auditory Classification Images to our group of 16 participants revealed significant critical regions on the second and third formant onsets, as predicted by the literature, as well as an unexpected temporal cue on the first formant. Finally, through a cluster-based nonparametric test, we demonstrate that this method is sufficiently sensitive to detect fine modifications of the classification strategies between different utterances of the same phoneme.
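    The core idea behind a classification image can be sketched with a toy reverse-correlation simulation: if a simulated observer's categorization depends on the noise energy in particular time-frequency bins, those bins emerge when the noise fields are averaged separately by response. This is a minimal sketch of the general technique, not the authors' penalized-regression pipeline; the observer model, bin count, and trial count are invented for illustration.

```python
import numpy as np

# Toy reverse-correlation simulation of a classification image. Minimal
# sketch of the general idea only; all dimensions and the simulated
# observer are invented.
rng = np.random.default_rng(0)
n_trials, n_bins = 5000, 64                # trials x flattened time-frequency bins
noise = rng.normal(size=(n_trials, n_bins))

template = np.zeros(n_bins)
template[10] = 1.0                         # the one "critical cue" bin the observer uses

# Simulated observer: responds "da" (True) when cue energy plus internal noise > 0
resp = (noise @ template + 0.5 * rng.normal(size=n_trials)) > 0

# Classification image: mean noise field on "da" trials minus "ga" trials;
# bins the observer relies on stand out from the near-zero background
aci = noise[resp].mean(axis=0) - noise[~resp].mean(axis=0)
```

    Here the difference-of-means image recovers the critical bin; regression-based estimators generalize this to correlated noise and add smoothness priors.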

  4. A psychophysical imaging method evidencing auditory cue extraction during speech perception: a group analysis of auditory classification images.

    Directory of Open Access Journals (Sweden)

    Léo Varnet

    Full Text Available Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remain undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique that allows experimenters to estimate the relative importance of time-frequency regions in categorizing natural speech utterances in noise. Importantly, this technique enables the testing of hypotheses on the listening strategies of participants at the group level. We exemplify this approach by identifying the acoustic cues involved in da/ga categorization with two phonetic contexts, Al- or Ar-. The application of Auditory Classification Images to our group of 16 participants revealed significant critical regions on the second and third formant onsets, as predicted by the literature, as well as an unexpected temporal cue on the first formant. Finally, through a cluster-based nonparametric test, we demonstrate that this method is sufficiently sensitive to detect fine modifications of the classification strategies between different utterances of the same phoneme.

  5. Music training improves speech-in-noise perception: Longitudinal evidence from a community-based music program.

    Science.gov (United States)

    Slater, Jessica; Skoe, Erika; Strait, Dana L; O'Connell, Samantha; Thompson, Elaine; Kraus, Nina

    2015-09-15

    Music training may strengthen auditory skills that help children not only in musical performance but in everyday communication. Comparisons of musicians and non-musicians across the lifespan have provided some evidence for a "musician advantage" in understanding speech in noise, although reports have been mixed. Controlled longitudinal studies are essential to disentangle effects of training from pre-existing differences, and to determine how much music training is necessary to confer benefits. We followed a cohort of elementary school children for 2 years, assessing their ability to perceive speech in noise before and after musical training. After the initial assessment, participants were randomly assigned to one of two groups: one group began music training right away and completed 2 years of training, while the second group waited a year and then received 1 year of music training. Outcomes provide the first longitudinal evidence that speech-in-noise perception improves after 2 years of group music training. The children were enrolled in an established and successful community-based music program and followed the standard curriculum, therefore these findings provide an important link between laboratory-based research and real-world assessment of the impact of music training on everyday communication skills. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Relations Between Self-reported Executive Functioning and Speech Perception Skills in Adult Cochlear Implant Users.

    Science.gov (United States)

    Moberly, Aaron C; Patel, Tirth R; Castellanos, Irina

    2018-02-01

    We hypothesized that, as a result of their hearing loss, adults with cochlear implants (CIs) would self-report poorer executive functioning (EF) skills than normal-hearing (NH) peers, and that these EF skills would be associated with performance on speech recognition tasks. EF refers to a group of higher-order neurocognitive skills responsible for behavioral and emotional regulation during goal-directed activity, and EF has been found to be poorer in children with CIs than in their NH age-matched peers. Moreover, there is increasing evidence that neurocognitive skills, including some EF skills, contribute to the ability to recognize speech through a CI. Thirty postlingually deafened adults with CIs and 42 age-matched NH adults were enrolled. Participants and their spouses or significant others (informants) completed well-validated self-reports or informant-reports of EF, the Behavior Rating Inventory of Executive Function - Adult (BRIEF-A). CI users' speech recognition skills were assessed in quiet using several measures of sentence recognition. NH peers were tested on recognition of noise-vocoded versions of the same speech stimuli. CI users self-reported difficulty on EF tasks of shifting and task monitoring. In CI users, measures of speech recognition correlated with several self-reported EF skills. The present findings provide further evidence that neurocognitive factors, including specific EF skills, may decline in association with hearing loss, and that some of these EF skills contribute to speech processing under degraded listening conditions.

  7. Invasive non-native species' provision of refugia for endangered native species.

    Science.gov (United States)

    Chiba, Satoshi

    2010-08-01

    The influence of non-native species on native ecosystems is not predicted easily when interspecific interactions are complex. Species removal can result in unexpected and undesired changes to other ecosystem components. I examined whether invasive non-native species may both harm and provide refugia for endangered native species. The invasive non-native plant Casuarina stricta has damaged the native flora and caused decline of the snail fauna on the Ogasawara Islands, Japan. On Anijima in 2006 and 2009, I examined endemic land snails in the genus Ogasawarana. I compared the density of live specimens and frequency of predation scars (from black rats [Rattus rattus]) on empty shells in native vegetation and Casuarina forests. The density of land snails was greater in native vegetation than in Casuarina forests in 2006. Nevertheless, radical declines in the density of land snails occurred in native vegetation since 2006 in association with increasing predation by black rats. In contrast, abundance of Ogasawarana did not decline in the Casuarina forest, where shells with predation scars from rats were rare. As a result, the density of snails was greater in the Casuarina forest than in native vegetation. Removal of Casuarina was associated with an increased proportion of shells with predation scars from rats and a decrease in the density of Ogasawarana. The thick and dense litter of Casuarina appears to provide refugia for native land snails by protecting them from predation by rats; thus, eradication of rats should precede eradication of Casuarina. Adaptive strategies, particularly those that consider the removal order of non-native species, are crucial to minimizing the unintended effects of eradication on native species. In addition, my results suggested that in some cases a given non-native species can be used to mitigate the impacts of other non-native species on native species.

  8. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation.

    Directory of Open Access Journals (Sweden)

    Georg Berding

    Full Text Available Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.

  9. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation.

    Science.gov (United States)

    Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M; Lenarz, Thomas; Lim, Hubert H

    2015-01-01

    Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.

  10. Sequencing at the syllabic and supra-syllabic levels during speech perception: an fMRI study.

    Science.gov (United States)

    Deschamps, Isabelle; Tremblay, Pascale

    2014-01-01

    The processing of fluent speech involves complex computational steps that begin with the segmentation of the continuous flow of speech sounds into syllables and words. One question that naturally arises pertains to the type of syllabic information that speech processes act upon. Here, we used functional magnetic resonance imaging to profile regions, using a combination of whole-brain and exploratory anatomical region-of-interest (ROI) approaches, that were sensitive to syllabic information during speech perception by parametrically manipulating syllabic complexity along two dimensions: (1) individual syllable complexity, and (2) sequence (supra-syllabic) complexity. We manipulated the complexity of the syllable by using the simplest syllable template, a consonant and vowel (CV), and inserting an additional consonant to create a complex onset (CCV). The supra-syllabic complexity was manipulated by creating sequences composed of the same syllable repeated six times (e.g., /pa-pa-pa-pa-pa-pa/) and sequences of three different syllables each repeated twice (e.g., /pa-ta-ka-pa-ta-ka/). This parametrical design allowed us to identify brain regions sensitive to (1) syllabic complexity independent of supra-syllabic complexity, (2) supra-syllabic complexity independent of syllabic complexity, and (3) both syllabic and supra-syllabic complexity. High-resolution scans were acquired for 15 healthy adults. An exploratory anatomical ROI analysis of the supratemporal plane (STP) identified bilateral regions within the anterior two-thirds of the planum temporale, the primary auditory cortices, as well as the anterior two-thirds of the superior temporal gyrus that showed different patterns of sensitivity to syllabic and supra-syllabic information. These findings demonstrate that during passive listening to syllable sequences, sublexical information is processed automatically, and sensitivity to syllabic and supra-syllabic information is localized almost exclusively within the STP.

  11. The Influence of Visual and Auditory Information on the Perception of Speech and Non-Speech Oral Movements in Patients with Left Hemisphere Lesions

    Science.gov (United States)

    Schmid, Gabriele; Thielmann, Anke; Ziegler, Wolfram

    2009-01-01

    Patients with lesions of the left hemisphere often suffer from oral-facial apraxia, apraxia of speech, and aphasia. In these patients, visual features often play a critical role in speech and language therapy, when pictured lip shapes or the therapist's visible mouth movements are used to facilitate speech production and articulation. This demands…

  12. Cognitive load during speech perception in noise: the influence of age, hearing loss, and cognition on the pupil response.

    Science.gov (United States)

    Zekveld, Adriana A; Kramer, Sophia E; Festen, Joost M

    2011-01-01

    … Correlation coefficients indicated that cognitive load was larger in listeners with better TRT performance, as reflected by a longer peak latency (normal-hearing participants, SRT50% condition) and a larger peak amplitude and longer response duration (hearing-impaired participants, SRT50% and SRT84% conditions). Also, a larger word vocabulary was related to a longer response duration in the SRT84% condition for the participants with normal hearing. The pupil response systematically increased with decreasing speech intelligibility. Ageing and hearing loss were related to less release from effort when the intelligibility of speech in noise increased. In difficult listening conditions, these factors may induce cognitive overload relatively early, or they may be associated with relatively shallow speech processing. More research is needed to elucidate the underlying mechanisms explaining these results. Better TRTs and a larger word vocabulary were related to higher mental processing load across speech intelligibility levels. This indicates that utilizing linguistic ability to improve speech perception is associated with increased listening load.
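    The peak-based pupil measures reported here (peak amplitude, peak latency) can be illustrated on a synthetic baseline-corrected trace. The following is a generic sketch, not the study's processing pipeline; the sampling rate, analysis window, and Gaussian-shaped response are assumptions.

```python
import numpy as np

# Generic illustration with synthetic data: reading peak amplitude and
# peak latency off a baseline-corrected pupil trace. The 60 Hz rate,
# 4 s window, and response shape are assumptions for the example.
fs = 60.0                                      # sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)                    # time relative to speech onset (s)
trace = 0.3 * np.exp(-((t - 1.2) ** 2) / 0.5)  # synthetic pupil diameter change (mm)

baseline = trace[t < 0.2].mean()               # early pre-response baseline
corrected = trace - baseline

peak_idx = int(np.argmax(corrected))
peak_amplitude = float(corrected[peak_idx])    # mm above baseline
peak_latency = float(t[peak_idx])              # s after onset
```

    Response duration, the third measure mentioned above, is typically derived from how long the corrected trace stays above some fraction of this peak.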

  13. An Ecosystem-Service Approach to Evaluate the Role of Non-Native Species in Urbanized Wetlands

    Science.gov (United States)

    Yam, Rita S. W.; Huang, Ko-Pu; Hsieh, Hwey-Lian; Lin, Hsing-Juh; Huang, Shou-Chung

    2015-01-01

    Natural wetlands have been increasingly transformed into urbanized ecosystems commonly colonized by stress-tolerant non-native species. Although non-native species present numerous threats to natural ecosystems, some could provide important benefits to urbanized ecosystems. This study investigated the extent of colonization by non-native fish and bird species of three urbanized wetlands in subtropical Taiwan. Using literature data, the role of each non-native species in the urbanized wetlands was evaluated by its effects (benefits or damages) on ecosystem services (ES), based on its ecological traits. Our sites were seriously colonized by non-native fishes (39%–100%), but … wetland ES. Our results indicated the importance of non-native fishes in supporting ES by serving as a food source for fish-eating waterbirds (native and migratory species) due to their high abundance, particularly Oreochromis spp. However, all non-native birds are regarded as “harmful” species causing important ecosystem disservices, and thus eradication of these bird invaders from urban wetlands would be needed. This simple framework for role evaluation of non-native species represents a holistic and transferable approach to facilitate decision making on management priority of non-native species in urbanized wetlands. PMID:25860870

  14. Effects of Hearing Loss and Fast-Acting Compression on Amplitude Modulation Perception and Speech Intelligibility

    DEFF Research Database (Denmark)

    Wiinberg, Alan; Jepsen, Morten Løve; Epp, Bastian

    2018-01-01

    Objective: The purpose was to investigate the effects of hearing loss and fast-acting compression on speech intelligibility and two measures of temporal modulation sensitivity. Design: Twelve adults with normal hearing (NH) and 16 adults with mild to moderately severe sensorineural hearing loss … the MDD thresholds were higher for the group with hearing loss than for the group with NH. Fast-acting compression increased the modulation detection thresholds, while no effect of compression on the MDD thresholds was observed. The speech reception thresholds obtained in stationary noise were slightly … of the modulation detection thresholds, compression does not seem to provide a benefit for speech intelligibility. Furthermore, fast-acting compression may not be able to restore MDD thresholds to the values observed for listeners with NH, suggesting that the two measures of amplitude modulation sensitivity …

  15. Fine-structure processing, frequency selectivity and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Strelcyk, Olaf; Dau, Torsten

    2008-01-01

    Hearing-impaired people often experience great difficulty with speech communication when background noise is present, even if reduced audibility has been compensated for. Other impairment factors must be involved. In order to minimize confounding effects, the subjects participating in this study...... consisted of groups with homogeneous, symmetric audiograms. The perceptual listening experiments assessed the intelligibility of full-spectrum as well as low-pass filtered speech in the presence of stationary and fluctuating interferers, the individual's frequency selectivity and the integrity of temporal...... modulation were obtained. In addition, these binaural and monaural thresholds were measured in a stationary background noise in order to assess the persistence of the fine-structure processing to interfering noise. Apart from elevated speech reception thresholds, the hearing impaired listeners showed poorer...

  16. Predation by crustaceans on native and non-native Baltic clams

    NARCIS (Netherlands)

    Ejdung, G.; Flach, E.; Byrén, L.; Hummel, H.

    2009-01-01

    We studied the effect of crustacean predators on native/non-native Macoma balthica bivalves in aquarium experiments. North Sea M. balthica (NS Macoma) were recently observed in the southern Baltic Sea. They differ genetically and in terms of morphology, behaviour and evolutionary history from Baltic

  17. Are native songbird populations affected by non-native plant invasion?

    Science.gov (United States)

    Amanda M. Conover; Christopher K. Williams; Vincent D'Amico

    2011-01-01

    Development into forested areas is occurring rapidly across the United States, and many of the remnant forests within suburban landscapes are being fragmented into smaller patches, impacting the quality of this habitat for avian species. An ecological effect linked to forest fragmentation is the invasion of non-native plants into the ecosystem.

  18. Non-Native English Speakers and Nonstandard English: An In-Depth Investigation

    Science.gov (United States)

    Polat, Brittany

    2012-01-01

    Given the rising prominence of nonstandard varieties of English around the world (Jenkins 2007), learners of English as a second language are increasingly called on to communicate with speakers of both native and non-native nonstandard English varieties. In many classrooms around the world, however, learners continue to be exposed only to…

  19. Which English? Whose English? An Investigation of "Non-Native" Teachers' Beliefs about Target Varieties

    Science.gov (United States)

    Young, Tony Johnstone; Walsh, Steve

    2010-01-01

    This study explored the beliefs of "non-native English speaking" teachers about the usefulness and appropriacy of varieties such as English as an International Language (EIL) and English as a Lingua Franca (ELF), compared with native speaker varieties. The study therefore addresses the current theoretical debate concerning "appropriate" target…

  20. User requirement analysis of social conventions learning applications for Non-natives and low-literates

    NARCIS (Netherlands)

    Schouten, D.; Smets, N.; Driessen, M.; Hanekamp, M.; Cremers, A.H.M.; Neerincx, M.A.

    2013-01-01

    Learning and acting on social conventions is problematic for low-literates and non-natives, causing problems with societal participation and citizenship. Using the Situated Cognitive Engineering method, requirements for the design of social conventions learning software are derived from demographic

  1. Within-category variance and lexical tone discrimination in native and non-native speakers

    NARCIS (Netherlands)

    Hoffmann, C.W.G.; Sadakata, M.; Chen, A.; Desain, P.W.M.; McQueen, J.M.; Gussenhoven, C.; Chen, Y.; Dediu, D.

    2014-01-01

    In this paper, we show how acoustic variance within lexical tones in disyllabic Mandarin Chinese pseudowords affects discrimination abilities in both native and non-native speakers of Mandarin Chinese. Within-category acoustic variance did not hinder native speakers in discriminating between lexical

  2. Invasions by two non-native insects alter regional forest species composition and successional trajectories

    Science.gov (United States)

    Randall S. Morin; Andrew M. Liebhold

    2015-01-01

    While invasions of individual non-native phytophagous insect species are known to affect growth and mortality of host trees, little is known about how multiple invasions combine to alter forest dynamics over large regions. In this study we integrate geographical data describing historical invasion spread of the hemlock woolly adelgid, Adelges tsugae...

  3. When the Native Is Also a Non-Native: "Retrodicting" the Complexity of Language Teacher Cognition

    Science.gov (United States)

    Aslan, Erhan

    2015-01-01

    The impact of native (NS) and non-native speaker (NNS) identities on second or foreign language teachers' cognition and practices in the classroom has mainly been investigated in ESL/EFL contexts. Using complexity theory as a framework, this case study attempts to fill the gap in the literature by presenting a foreign language teacher in the…

  4. Reanalysis and semantic persistence in native and non-native garden-path recovery.

    Science.gov (United States)

    Jacob, Gunnar; Felser, Claudia

    2016-01-01

    We report the results from an eye-movement monitoring study investigating how native and non-native speakers of English process temporarily ambiguous sentences such as While the gentleman was eating the burgers were still being reheated in the microwave, in which an initially plausible direct-object analysis is first ruled out by a syntactic disambiguation (were) and also later on by semantic information (being reheated). Both participant groups showed garden-path effects at the syntactic disambiguation, with native speakers showing significantly stronger effects of ambiguity than non-native speakers in later eye-movement measures but equally strong effects in first-pass reading times. Ambiguity effects at the semantic disambiguation and in participants' end-of-trial responses revealed that for both participant groups, the incorrect direct-object analysis was frequently maintained beyond the syntactic disambiguation. The non-native group showed weaker reanalysis effects at the syntactic disambiguation and was more likely to misinterpret the experimental sentences than the native group. Our results suggest that native language (L1) and non-native language (L2) parsing are similar with regard to sensitivity to syntactic and semantic error signals, but different with regard to processes of reanalysis.

  5. Professional Development in Japanese Non-Native English Speaking Teachers' Identity and Efficacy

    Science.gov (United States)

    Takayama, Hiromi

    2015-01-01

    This mixed methods study investigates how Japanese non-native English speaking teachers' (NNESTs) efficacy and identity are developed and differentiated from those of native English speaking teachers (NESTs). To explore NNESTs' efficacy, this study focuses on the contributing factors, such as student engagement, classroom management, instructional…

  6. Germination responses of an invasive species in native and non-native ranges

    Science.gov (United States)

    Jose L. Hierro; Ozkan Eren; Liana Khetsuriani; Alecu Diaconu; Katalin Torok; Daniel Montesinos; Krikor Andonian; David Kikodze; Levan Janoian; Diego Villarreal; Maria Estanga-Mollica; Ragan M. Callaway

    2009-01-01

    Studying germination in the native and non-native range of a species can provide unique insights into processes of range expansion and adaptation; however, traits related to germination have rarely been compared between native and nonnative populations. In a series of common garden experiments, we explored whether differences in the seasonality of precipitation,...

  7. Computer Vision Syndrome for Non-Native Speaking Students: What Are the Problems with Online Reading?

    Science.gov (United States)

    Tseng, Min-chen

    2014-01-01

    This study investigated the online reading performances and the level of visual fatigue from the perspectives of non-native speaking students (NNSs). Reading on a computer screen is visually more demanding than reading printed text. Online reading requires frequent saccadic eye movements and imposes continuous focusing and alignment demand.…

  8. The online application of binding condition B in native and non-native pronoun resolution

    Directory of Open Access Journals (Sweden)

    Clare ePatterson

    2014-02-01

    Full Text Available Previous research has shown that anaphor resolution in a non-native language may be more vulnerable to interference from structurally inappropriate antecedents compared to native anaphor resolution. To test whether previous findings on reflexive anaphors generalise to non-reflexive pronouns, we carried out an eye-movement monitoring study investigating the application of binding condition B during native and non-native sentence processing. In two online reading experiments we examined when during processing local and/or non-local antecedents for pronouns were considered in different types of syntactic environment. Our results demonstrate that both native English speakers and native German-speaking learners of English showed online sensitivity to binding condition B in that they did not consider syntactically inappropriate antecedents. For pronouns thought to be exempt from condition B (so-called 'short-distance pronouns'), the native readers showed a weak preference for the local antecedent during processing. The non-native readers, on the other hand, showed a preference for the matrix subject even where local coreference was permitted, and despite demonstrating awareness of short-distance pronouns' referential ambiguity in a complementary offline task. This indicates that non-native comprehenders are less sensitive during processing to structural cues that render pronouns exempt from condition B, and prefer to link a pronoun to a salient subject antecedent instead.

  9. An invasion risk map for non-native aquatic macrophytes of the Iberian Peninsula

    Directory of Open Access Journals (Sweden)

    Argantonio Rodríguez-Merino

    2017-05-01

    Full Text Available Freshwater systems are particularly susceptible to non-native organisms, owing to their high sensitivity to the impacts that are caused by these organisms. Species distribution models, which are based on both environmental and socio-economic variables, facilitate the identification of the most vulnerable areas for the spread of non-native species. We used MaxEnt to predict the potential distribution of 20 non-native aquatic macrophytes in the Iberian Peninsula. Some selected variables, such as the temperature seasonality and the precipitation in the driest quarter, highlight the importance of the climate on their distribution. Notably, the human influence in the territory appears as a key variable in the distribution of studied species. The model discriminated between favorable and unfavorable areas with high accuracy. We used the model to build an invasion risk map of aquatic macrophytes for the Iberian Peninsula that combined results from 20 individual models. It showed that the most vulnerable areas are located near the sea, the major river basins, and the high population density areas. These facts suggest the importance of human impact on the colonization and distribution of non-native aquatic macrophytes in the Iberian Peninsula, and more precisely of the agricultural development during the Green Revolution at the end of the 1970s. Our work also emphasizes the utility of species distribution models for the prevention and management of biological invasions.
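    The aggregation step described in this record (combining 20 individual suitability models into a single invasion risk map) can be sketched as follows. This is an illustration only: the grid size, threshold, and random suitability values are assumptions, not data from the study, and the study's actual MaxEnt outputs and aggregation rule may differ.

    ```python
    import numpy as np

    # One hypothetical aggregation: count, per grid cell, how many of the
    # 20 species' suitability maps (values in [0, 1], as MaxEnt produces)
    # exceed a suitability threshold, then normalise to a 0-1 risk score.
    rng = np.random.default_rng(0)
    n_species, height, width = 20, 50, 80
    suitability = rng.random((n_species, height, width))  # one map per species

    threshold = 0.7  # assumed cutoff for "favorable" habitat
    risk = (suitability > threshold).sum(axis=0) / n_species

    assert risk.shape == (height, width)
    assert risk.min() >= 0.0 and risk.max() <= 1.0
    ```

    Cells where many species find favorable conditions score close to 1, matching the paper's observation that coastal, riverine, and densely populated areas accumulate the highest risk.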

  10. An assessment of a proposal to eradicate non-native fish from ...

    African Journals Online (AJOL)

    African Journal of Aquatic Science ... A pilot project to evaluate the use of the piscicide rotenone to eradicate non-native fish from selected reaches in four rivers has been proposed by CapeNature, the conservation ... It is expected that the project will be successful while having minimal impact on other aquatic fauna.

  11. Non-Native English Teachers' Beliefs on Grammar Instruction

    Science.gov (United States)

    Önalan, Okan

    2018-01-01

    Research on teacher cognition, which mainly focuses on identifying what teachers think, know and believe, is essential to understanding teachers' cognitive framework as it relates to the instructional choices they make. The aim of this study is to find out the beliefs of non-native speaker teachers of English on grammar instruction and to explain…

  12. Minimal effectiveness of native and non-native seeding following three high-severity wildfires

    Science.gov (United States)

    Ken A. Stella; Carolyn H. Sieg; Pete Z. Fule

    2010-01-01

    The rationale for seeding following high-severity wildfires is to enhance plant cover and reduce bare ground, thus decreasing the potential for soil erosion and non-native plant invasion. However, experimental tests of the effectiveness of seeding in meeting these objectives in forests are lacking. We conducted three experimental studies of the effectiveness of seeding...

  13. The influence of ungulates on non-native plant invasions in forests and rangelands: a review.

    Science.gov (United States)

    Catherine G. Parks; Michael J. Wisdom; John G. Kie

    2005-01-01

    Herbivory by wild and domestic ungulates can strongly influence vegetation composition and productivity in forest and range ecosystems. However, the role of ungulates as contributors to the establishment and spread of non-native invasive plants is not well known. Ungulates spread seeds through endozoochory (passing through an animal's digestive tract) or...

  14. Non-native gobies facilitate the transmission of Bucephalus polymorphus (Trematoda)

    Czech Academy of Sciences Publication Activity Database

    Ondračková, Markéta; Hudcová, Iveta; Dávidová, Martina; Adámek, Zdeněk; Kašný, M.; Jurajda, Pavel

    2015-01-01

    Roč. 8, č. 1 (2015), s. 382 ISSN 1756-3305 R&D Projects: GA ČR(CZ) GAP505/12/2569 Institutional support: RVO:68081766 Keywords : Bucephalus polymorphus * Complex life cycle * Goby * Infectivity * Intermediate host * Non-native species * Trematode Subject RIV: EH - Ecology, Behaviour Impact factor: 3.234, year: 2015

  15. Ethical Considerations in Conducting Research with Non-Native Speakers of English

    Science.gov (United States)

    Koulouriotis, Joanna

    2011-01-01

    The ethical considerations of three education researchers working with non-native English-speaking participants were examined from a critical theory stand-point in the light of the literature on research ethics in various disciplines. Qualitative inquiry and data analysis were used to identify key themes, which centered around honor and respect…

  16. Differences in the Metacognitive Awareness of Reading Strategies among Native and Non-Native Readers.

    Science.gov (United States)

    Sheorey, R.; Mokhtari, K.

    2001-01-01

    Examines the differences in the reported use of reading strategies of native and non-native English speakers when reading academic materials. Participants were native English speaking and English-as-a-Second-Language college students who completed a survey of reading strategies aimed at discerning the strategies readers report using when coping…

  17. Long-distance dispersal of non-native pine bark beetles from host resources

    Science.gov (United States)

    Kevin Chase; Dave Kelly; Andrew M. Liebhold; Martin K.-F. Bader; Eckehard G. Brockerhoff

    2017-01-01

    Dispersal and host detection are behaviours promoting the spread of invading populations in a landscape matrix. In fragmented landscapes, the spatial arrangement of habitat structure affects the dispersal success of organisms. The aim of the present study was to determine the long distance dispersal capabilities of two non-native pine bark beetles (Hylurgus...

  18. Recreational freshwater fishing drives non-native aquatic species richness patterns at a continental scale.

    Science.gov (United States)

    Mapping the geographic distribution of non-native aquatic species is a critically important precursor to understanding the anthropogenic and environmental factors that drive freshwater biological invasions. Such efforts are often limited to local scales and/or to single species, ...

  19. Vulnerability of freshwater native biodiversity to non-native species invasions across the continental United States

    Science.gov (United States)

    Background/Question/Methods Non-native species pose one of the greatest threats to native biodiversity. The literature provides plentiful empirical and anecdotal evidence of this phenomenon; however, such evidence is limited to local or regional scales. Employing geospatial analy...

  20. Non-native Species in Floodplain Secondary Forests in Peninsular Malaysia

    Directory of Open Access Journals (Sweden)

    Nor Rasidah Hashim

    2010-01-01

    Full Text Available There is an increasing concern about alien species invading our tropical ecosystems because anthropogenic land use can create conditions in which non-native species thrive. This study is an assessment of bioinvasion using a quantitative survey of non-native plant species in floodplain secondary forests in Peninsular Malaysia. The study area is known to have a long cultivation and settlement history that provides ample time for non-native species introduction. The survey results showed that introduced species constituted 23% of all the identified species, with seven species unique to riparian forest strips, eleven species unique to abandoned paddy fields, and the remaining five species being shared between the two secondary forest types. There existed some habitat preferences amongst the species, implying that both secondary forests were potentially susceptible to bioinvasion. Fourteen species are also invasive elsewhere (PIER invasives), whereas fifteen species have acquired local uses such as for traditional medicine and food products. The presence of these non-native species could alter the native plant succession trajectory, and eventually lead to native species impoverishment if the exotics manage to outcompete the native species. As such, the findings of this study have a far-reaching application for national biodiversity conservation efforts because they provide the required information on bioinvasion.
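    The counts reported in this record can be checked with simple arithmetic: species unique to riparian strips, species unique to abandoned paddy fields, and species shared between the two forest types together make up the non-native pool.

    ```python
    # Consistency check of the survey counts reported above.
    riparian_only, paddy_only, shared = 7, 11, 5
    total_non_native = riparian_only + paddy_only + shared
    assert total_non_native == 23  # 7 + 11 + 5 = 23 introduced species
    ```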

  1. Predicting establishment of non-native fishes in Greece: identifying key features

    Directory of Open Access Journals (Sweden)

    Christos Gkenas

    2015-11-01

    Full Text Available Non-native fishes are known to cause economic damage to human society and are considered a major threat to biodiversity in freshwater ecosystems. The growing concern about these impacts has driven an investigation of the biological traits that facilitate the establishment of non-native fish. However, an inappropriate choice of statistical model can lead researchers to ambiguous conclusions. Here, we present a comprehensive comparison of traditional and alternative statistical methods for predicting fish invasions, using logistic regression, classification trees, multiple correspondence analysis and random forest analysis to determine characteristics of successful and failed establishment of non-native fishes in the Hellenic Peninsula. We defined fifteen categorical predictor variables with biological relevance and measures of human interest. Our study showed that accuracy differed according to the model and the number of factors considered. Among all the models tested, random forest and logistic regression performed best, although all approaches predicted non-native fish establishment with moderate to excellent results. Detailed evaluation among the models corresponded with differences in variable importance, with three biological variables (parental care, distance from nearest native source and maximum size) and two variables of human interest (prior invasion success and propagule pressure) being important in predicting establishment. The statistical methods presented have high predictive power and can be used as a risk assessment tool to prevent future freshwater fish invasions in this region with an imperiled fish fauna.
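    The logistic-regression step described in this record can be sketched with a minimal pure-numpy example. Everything here is invented for illustration: the trait encodings, the simulated outcomes, and the coefficients are assumptions, not the study's fifteen predictors or fitted model.

    ```python
    import numpy as np

    # Hypothetical predictors echoing those named above: parental care,
    # prior invasion success, and propagule pressure (all categorical).
    rng = np.random.default_rng(1)
    n = 200
    parental_care = rng.integers(0, 2, n)   # 0 = absent, 1 = present
    prior_success = rng.integers(0, 2, n)   # invaded elsewhere before?
    propagule = rng.integers(0, 3, n)       # low / medium / high pressure

    X = np.column_stack([np.ones(n), parental_care, prior_success, propagule])
    # Simulated outcome: establishment is more likely with care + propagules.
    logits = -1.5 + 1.2 * parental_care + 0.8 * prior_success + 0.6 * propagule
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

    # Fit by plain gradient ascent on the logistic log-likelihood.
    w = np.zeros(X.shape[1])
    for _ in range(2000):
        p = 1 / (1 + np.exp(-X @ w))
        w += 0.01 * X.T @ (y - p) / n

    accuracy = ((1 / (1 + np.exp(-X @ w)) > 0.5) == y).mean()
    assert accuracy > 0.5  # recovers the planted signal better than chance
    ```

    A random-forest comparison (the study's other top performer) would follow the same pattern with an ensemble of classification trees in place of the single linear model.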

  2. Non-native fish introductions and the reversibility of amphibian declines in the Sierra Nevada

    Science.gov (United States)

    Roland A. Knapp

    2004-01-01

    Amphibians are declining worldwide for a variety of reasons, including habitat alteration, introduction of non-native species, disease, climate change, and environmental contaminants. Amphibians often play important roles in structuring ecosystems, and, as a result, amphibian population declines or extinctions are likely to affect other trophic levels (Matthews and...

  3. Topic Continuity in Informal Conversations between Native and Non-Native Speakers of English

    Science.gov (United States)

    Morris-Adams, Muna

    2013-01-01

    Topic management by non-native speakers (NNSs) during informal conversations has received comparatively little attention from researchers, and receives surprisingly little attention in second language learning and teaching. This article reports on one of the topic management strategies employed by international students during informal, social…

  4. Speech perception, production and intelligibility in French-speaking children with profound hearing loss and early cochlear implantation after congenital cytomegalovirus infection.

    Science.gov (United States)

    Laccourreye, L; Ettienne, V; Prang, I; Couloigner, V; Garabedian, E-N; Loundon, N

    2015-12-01

    To analyze speech in children with profound hearing loss following congenital cytomegalovirus (cCMV) infection with cochlear implantation (CI) before the age of 3 years. In a cohort of 15 children with profound hearing loss, speech perception, production and intelligibility were assessed before and 3 years after CI; variables impacting results were explored. Post-CI, median word recognition was 74% on closed-list and 48% on open-list testing; 80% of children acquired speech production; and 60% were intelligible for all listeners or listeners attentive to lip-reading and/or aware of the child's hearing loss. Univariate analysis identified 3 variables (mean post-CI hearing threshold, bilateral vestibular areflexia, and brain abnormality on MRI) with significant negative impact on the development of speech perception, production and intelligibility. CI showed positive impact on hearing and speech in children with post-cCMV profound hearing loss. Our study demonstrated the key role of maximizing post-CI hearing gain. A few children had insufficient progress, especially in case of bilateral vestibular areflexia and/or brain abnormality on MRI. This led us to suggest that balance rehabilitation and speech therapy should be intensified in such cases. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  5. Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception.

    Science.gov (United States)

    Baart, Martijn; Lindborg, Alma; Andersen, Tobias S

    2017-11-01

    Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure of audiovisual integration) for fusions was similar to suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is solved differently for both types of stimuli. © 2017 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
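    The P2-suppression measure described in this record can be illustrated with a minimal simulation. This is not the authors' pipeline: the sampling rate, epoch length, P2 window (here assumed 150-250 ms), peak amplitudes, and noise level are all invented for the sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    fs = 500                       # assumed sampling rate in Hz
    t = np.arange(0, 0.5, 1 / fs)  # 0-500 ms epoch

    def simulate_erp(p2_amp, n_trials=60):
        """Simulate trials with a Gaussian P2 peak at ~200 ms plus noise."""
        p2 = p2_amp * np.exp(-((t - 0.2) ** 2) / (2 * 0.02 ** 2))
        return p2 + rng.normal(0, 1.0, (n_trials, t.size))

    # Mean amplitude in the assumed P2 window, per condition.
    window = (t >= 0.15) & (t <= 0.25)
    p2_auditory = simulate_erp(5.0).mean(axis=0)[window].mean()
    p2_audiovisual = simulate_erp(3.0).mean(axis=0)[window].mean()

    # Visual speech-induced suppression: smaller AV P2 => positive value.
    suppression = p2_auditory - p2_audiovisual
    assert suppression > 0
    ```

    In the study's logic, a larger suppression value (as found for combinations relative to fusions) is taken as a stronger audiovisual-integration signature.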

  6. Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception

    DEFF Research Database (Denmark)

    Baart, Martijn; Lindborg, Alma Cornelia; Andersen, Tobias S

    2017-01-01

    Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure...... of audiovisual integration) for fusions was comparable to suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is solved differently for both types of stimuli. This article is protected...

  7. Non-native species impacts on pond occupancy by an anuran

    Science.gov (United States)

    Adams, Michael J.; Pearl, Christopher A.; Galvan, Stephanie; McCreary, Brome

    2011-01-01

    Non-native fish and bullfrogs (Lithobates catesbeianus; Rana catesbeiana) are frequently cited as factors contributing to the decline of ranid frogs in the western United States (Bradford 2005). This hypothesis is supported by studies showing competition with or predation by these introduced species (Kupferberg 1997, Kiesecker and Blaustein 1998, Lawler et al. 1999, Knapp et al. 2001) and studies suggesting a deficit of native frogs at sites occupied by bullfrogs or game fish (Hammerson 1982, Schwalbe and Rosen 1988, Fisher and Shaffer 1996, Adams 1999). Conversely, other studies failed to find a negative association between native ranids and bullfrogs and point out that presence of non-native species correlates with habitat alterations that could also contribute to declines of native species (Hayes and Jennings 1986; Adams 1999, 2000; Pearl et al. 2005). A criticism of these studies is that they may not detect an effect of non-native species if the process of displacement is at an early stage. We are not aware of any studies that have monitored a set of native frog populations to determine if non-native species predict population losses. Our objective was to study site occupancy trends in relation to non-native species for northern red-legged frogs (Rana aurora) on federal lands in the southern Willamette Valley, Oregon. We conducted a 5-yr monitoring study to answer the following questions about the status and trends of the northern red-legged frog: 1) What is the rate of local extinction (how often is a site that is occupied in year t unoccupied in year t+1) and what factors predict variation in local extinction? and 2) What is the rate of colonization (how often is a site that is unoccupied in year t occupied in year t+1) and what factors predict variation in colonization? The factors we hypothesized for local extinction were: 1) bullfrog presence, 2) bullfrogs mediated by wetland vegetation, 3) non-native fish (Centrarchidae), 4) non-native fish mediated by
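    The two rates defined at the end of this record (local extinction: occupied in year t, unoccupied in year t+1; colonization: the reverse) can be estimated directly from a sites-by-years presence/absence matrix. The matrix below is invented for illustration; it is not the Willamette Valley data.

    ```python
    import numpy as np

    # Hypothetical detection histories: 4 sites monitored for 5 years.
    occ = np.array([
        [1, 1, 0, 0, 1],
        [1, 0, 0, 1, 1],
        [0, 0, 1, 1, 1],
        [1, 1, 1, 1, 0],
    ])

    prev, curr = occ[:, :-1], occ[:, 1:]
    # Local extinction: fraction of occupied site-years that go unoccupied.
    extinction = ((prev == 1) & (curr == 0)).sum() / (prev == 1).sum()
    # Colonization: fraction of unoccupied site-years that become occupied.
    colonization = ((prev == 0) & (curr == 1)).sum() / (prev == 0).sum()
    # Here: 3 of 10 occupied site-years go extinct (0.3),
    # and 3 of 6 unoccupied site-years are colonized (0.5).
    ```

    The study's occupancy models additionally correct for imperfect detection and regress these rates on covariates (bullfrog presence, non-native fish, vegetation); this sketch shows only the naive transition rates.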

  8. A non-native prey mediates the effects of a shared predator on an ecosystem service.

    Directory of Open Access Journals (Sweden)

    James E Byers

    Full Text Available Non-native species can alter ecosystem functions performed by native species often by displacing influential native species. However, little is known about how ecosystem functions may be modified by trait-mediated indirect effects of non-native species. Oysters and other reef-associated filter feeders enhance water quality by controlling nutrients and contaminants in many estuarine environments. However, this ecosystem service may be mitigated by predation, competition, or other species interactions, especially when such interactions involve non-native species that share little evolutionary history. We assessed trophic and other interference effects on the critical ecosystem service of water filtration in mesocosm experiments. In single-species trials, typical field densities of oysters (Crassostrea virginica reduced water-column chlorophyll a more strongly than clams (Mercenaria mercenaria. The non-native filter-feeding reef crab Petrolisthes armatus did not draw down chlorophyll a. In multi-species treatments, oysters and clams combined additively to influence chlorophyll a drawdown. Petrolisthes did not affect net filtration when added to the bivalve-only treatments. Addition of the predatory mud crab Panopeus herbstii did not influence oyster feeding rates, but it did stop chlorophyll a drawdown by clams. However, when Petrolisthes was also added in with the clams, the clams filtered at their previously unadulterated rates, possibly because Petrolisthes drew the focus of predators or habituated the clams to crab stimuli. In sum, oysters were the most influential filter feeder, and neither predators nor competitors interfered with their net effect on water-column chlorophyll. In contrast, clams filtered less, but were more sensitive to predators as well as a facilitative buffering effect of Petrolisthes, illustrating that non-native species can indirectly affect an ecosystem service by aiding the performance of a native species.

  9. The Acquisition of English Focus Marking by Non-Native Speakers

    Science.gov (United States)

    Baker, Rachel Elizabeth

    This dissertation examines Mandarin and Korean speakers' acquisition of English focus marking, which is realized by accenting particular words within a focused constituent. It is important for non-native speakers to learn how accent placement relates to focus in English because appropriate accent placement and realization make a learner's English more native-like and easier to understand. Such knowledge may also improve their English comprehension skills. In this study, 20 native English speakers, 20 native Mandarin speakers, and 20 native Korean speakers participated in four experiments: (1) a production experiment, in which they were recorded reading the answers to questions, (2) a perception experiment, in which they were asked to determine which word in a recording was the last prominent word, (3) an understanding experiment, in which they were asked whether the answers in recorded question-answer pairs had context-appropriate prosody, and (4) an accent placement experiment, in which they were asked which word they would make prominent in a particular context. Finally, a new group of native English speakers listened to utterances produced in the production experiment, and determined whether the prosody of each utterance was appropriate for its context. The results of the five experiments support a novel predictive model for second language prosodic focus marking acquisition. This model holds that both transfer of linguistic features from a learner's native language (L1) and features of their second language (L2) affect learners' acquisition of prosodic focus marking. As a result, the model includes two complementary components: the Transfer Component and the L2 Challenge Component. The Transfer Component predicts that prosodic structures in the L2 will be more easily acquired by language learners that have similar structures in their L1 than those who do not, even if there are differences between the L1 and L2 in how the structures are realized. The L2

  10. Using personal response systems to assess speech perception within the classroom: an approach to determine the efficacy of sound field amplification in primary school classrooms.

    Science.gov (United States)

    Vickers, Deborah A; Backus, Bradford C; Macdonald, Nora K; Rostamzadeh, Niloofar K; Mason, Nisha K; Pandya, Roshni; Marriage, Josephine E; Mahon, Merle H

    2013-01-01

    The assessment of the combined effect of classroom acoustics and sound field amplification (SFA) on children's speech perception within the "live" classroom poses a challenge to researchers. The goals of this study were to determine: (1) Whether personal response system (PRS) hand-held voting cards, together with a closed-set speech perception test (Chear Auditory Perception Test [CAPT]), provide an appropriate method for evaluating speech perception in the classroom; (2) Whether SFA provides children with better access to the teacher's speech than no SFA, taking into account vocabulary age, middle ear dysfunction or ear-canal wax, and home language. Forty-four children from two school-year groups, year 2 (aged 6 years 11 months to 7 years 10 months) and year 3 (aged 7 years 11 months to 8 years 10 months) were tested in two classrooms, using a shortened version of the four-alternative consonant discrimination section of the CAPT. All children used a PRS to register their chosen response, which they selected from four options displayed on the interactive whiteboard. The classrooms were located in a 19th-century school in central London, United Kingdom. Each child sat at their usual position in the room while target speech stimuli were presented either in quiet or in noise. The target speech was presented from the front of the classroom at 65 dBA (calibrated at 1 m) and the presented noise level was 46 dBA measured at the center of the classroom. The older children had an additional noise condition with a noise level of 52 dBA. All conditions were presented twice, once with SFA and once without SFA, and the order of testing was randomized. White noise from the teacher's right-hand side of the classroom and International Speech Test Signal from the teacher's left-hand side were used, and the noises were matched at the center point of the classroom (10 s averaging, A-weighted). Each child's expressive vocabulary age and middle ear status were measured
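    The presentation levels above imply fixed positive SNRs for each condition. A minimal sketch of the decibel arithmetic involved; the 43 dBA per-source figure below is a hypothetical illustration of how two uncorrelated noise sources combine, not a value from the study:

```python
import math

def snr_db(speech_dba, noise_dba):
    """SNR as the simple level difference between speech and noise, in dB."""
    return speech_dba - noise_dba

def combine_levels_db(*levels_db):
    """Total level of uncorrelated sources: 10*log10(sum of 10^(L/10))."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

print(snr_db(65, 46))  # -> 19 dB SNR (standard noise condition)
print(snr_db(65, 52))  # -> 13 dB SNR (older children's additional condition)

# Two uncorrelated sources of equal level sum to +3 dB: e.g. two noises at a
# hypothetical 43 dBA each combine to ~46 dBA at the point where they match.
print(round(combine_levels_db(43, 43), 1))  # -> 46.0
```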

  11. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan Luo

    2012-05-01

    Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to lie in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of the proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta-band phase tracking shows rightward lateralization while gamma-band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
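    Phase tracking of this kind is typically quantified with inter-trial phase coherence (ITC): the magnitude of the mean unit phasor across trials at each time point. A minimal numpy/scipy sketch, assuming single-trial responses already band-pass filtered to the band of interest (the exact MEG analysis pipeline of the study is not specified here):

```python
import numpy as np
from scipy.signal import hilbert

def inter_trial_coherence(trials):
    """Inter-trial phase coherence across trials.

    trials: array of shape (n_trials, n_samples), each row a single-trial
    response band-pass filtered to the band of interest (e.g. theta or
    low gamma). Returns ITC per time sample: 1 = perfectly phase-locked,
    values near 0 = random phase across trials.
    """
    phases = np.angle(hilbert(trials, axis=1))  # instantaneous phase per trial
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Toy check: phase-locked trials give ITC ~1; random phases give small ITC.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500, endpoint=False)
aligned = np.tile(np.sin(2 * np.pi * 5 * t), (40, 1))               # phase-locked
random_ph = np.sin(2 * np.pi * 5 * t + rng.uniform(0, 2 * np.pi, (40, 1)))
print(inter_trial_coherence(aligned).mean())    # ~1.0
print(inter_trial_coherence(random_ph).mean())  # small (no phase locking)
```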

  12. Perception of Speech Modulation Cues by 6-Month-Old Infants

    Science.gov (United States)

    Cabrera, Laurianne; Bertoncini, Josiane; Lorenzi, Christian

    2013-01-01

    Purpose: The capacity of 6-month-old infants to discriminate a voicing contrast (/aba/--/apa/) on the basis of "amplitude modulation (AM) cues" and "frequency modulation (FM) cues" was evaluated. Method: Several vocoded speech conditions were designed to either degrade FM cues in 4 or 32 bands or degrade AM in 32 bands. Infants…

  13. Perception of Foreign Accent Syndrome Speech and Its Relation to Segmental Characteristics

    Science.gov (United States)

    Dankovicova, Jana; Hunt, Claire

    2011-01-01

    Foreign accent syndrome (FAS) is an acquired neurogenic disorder characterized by altered speech that sounds foreign-accented. This study presents a British subject perceived to speak with an Italian (or Greek) accent after a brainstem (pontine) stroke. Native English listeners rated the strength of foreign accent and impairment they perceived in…

  14. Brief Report: Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders

    Science.gov (United States)

    Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.

    2014-01-01

    Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their…

  15. Interrupted-speech perception : Top-down restoration in cochlear implant users

    NARCIS (Netherlands)

    Bhargava, Pranesh

    2016-01-01

    In difficult listening situations, e.g. a cocktail party scenario, the speech signal that a listener is interested in decoding is often masked by unwanted noise, disrupting bottom-up signalling. A normal-hearing (NH) listener is able to withstand a reasonable amount of such disruption and can still

  16. Perceptions of Staff on Embedding Speech and Language Therapy within a Youth Offending Team

    Science.gov (United States)

    Bryan, Karen; Gregory, Juliette

    2013-01-01

    The purpose of this research was to ascertain the views of staff and managers within a youth offending team on their experiences of working with a speech and language therapist (SLT). The model of therapy provision was similar to the whole-systems approach used in schools. The impact of the service on language outcomes is reported elsewhere…

  17. Effective Connectivity Hierarchically Links Temporoparietal and Frontal Areas of the Auditory Dorsal Stream with the Motor Cortex Lip Area during Speech Perception

    Science.gov (United States)

    Murakami, Takenobu; Restle, Julia; Ziemann, Ulf

    2012-01-01

    A left-hemispheric cortico-cortical network involving areas of the temporoparietal junction (Tpj) and the posterior inferior frontal gyrus (pIFG) is thought to support sensorimotor integration of speech perception into articulatory motor activation, but how this network links with the lip area of the primary motor cortex (M1) during speech…

  18. Comparison of the HiFocus Mid-Scala and HiFocus 1J Electrode Array: Angular Insertion Depths and Speech Perception Outcomes.

    Science.gov (United States)

    van der Jagt, M Annerie; Briaire, Jeroen J; Verbist, Berit M; Frijns, Johan H M

    2016-01-01

    The HiFocus Mid-Scala (MS) electrode array has recently been introduced onto the market. This precurved design with a targeted mid-scalar intracochlear position pursues an atraumatic insertion and optimal distance for neural stimulation. In this study we prospectively examined the angular insertion depth achieved and speech perception outcomes resulting from the HiFocus MS electrode array for 6 months after implantation, and retrospectively compared these with the HiFocus 1J lateral wall electrode array. The mean angular insertion depth within the MS population (n = 96) was found at 470°. This was 50° shallower but more consistent than the 1J electrode array (n = 110). Audiological evaluation within a subgroup, including only postlingual, unilaterally implanted, adult cochlear implant recipients who were matched on preoperative speech perception scores and the duration of deafness (MS = 32, 1J = 32), showed no difference in speech perception outcomes between the MS and 1J groups. Furthermore, speech perception outcome was not affected by the angular insertion depth or frequency mismatch. © 2016 S. Karger AG, Basel.

  19. Speech-Language Pathologists' Perceptions of the Importance and Ability to Use Assistive Technology in the Kingdom of Saudi Arabia

    Science.gov (United States)

    Al-Dawaideh, Ahmad Mousa

    2013-01-01

    Speech-language pathologists (SLPs) frequently work with people with severe communication disorders who require assistive technology (AT) for communication. The purpose of this study was to investigate SLPs' perceptions of the importance of, and the ability level required for, using AT, and the relationship of AT with gender, level of education,…

  20. Speech Perception and Phonological Short-Term Memory Capacity in Language Impairment: Preliminary Evidence from Adolescents with Specific Language Impairment (SLI) and Autism Spectrum Disorders (ASD)

    Science.gov (United States)

    Loucas, Tom; Riches, Nick Greatorex; Charman, Tony; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Baird, Gillian

    2010-01-01

    Background: The cognitive bases of language impairment in specific language impairment (SLI) and autism spectrum disorders (ASD) were investigated in a novel non-word comparison task which manipulated phonological short-term memory (PSTM) and speech perception, both implicated in poor non-word repetition. Aims: This study aimed to investigate the…

  1. The connection of hemispheric activity in the field of audioverbal perception and the progressive lateralization of speech and motor processes.

    Directory of Open Access Journals (Sweden)

    Kovyazina, M.S.

    2015-07-01

    This article discusses the connection between hemispheric control of audioverbal perception processes and such individual features as the “leading hand” (right-handedness and left-handedness). We present a literature review and a description of our research to provide evidence of the complexity and ambiguity of this connection. The method of dichotic listening was used for diagnosing audioverbal perception lateralization. This method allows estimation of the right-ear coefficient (REC), the efficiency coefficient (EC), and the effectiveness ratio (ER) of different aspects of audioverbal perception. Our research involved 47 persons with a leading right hand (mean age, 29.04±9.97 years) and 32 persons with a leading left hand (mean age, 29.41±10.34 years). Different hypotheses about the mechanisms of hemispheric control over audioverbal and motor processes were assessed. The research showed that both the left- and right-handers’ audioverbal perception characteristics depended mainly on right-hemisphere activity. The most dynamic and sensitive index of the functioning of the two hemispheres during dichotic listening was the efficiency coefficient of stimuli reproduction through the left ear (EC of the left ear). This index turns out to depend on the coincidence or noncoincidence of the leading hemispheres for speech and motor processes. The highest efficiency of audioverbal perception revealed itself in the left-handers with a leading left ear (hemispheric-control coincidence), and the lowest efficiency in the left-handers with a leading right ear (hemispheric-control divergence). The right-handers were characterized by less variation in values, although the influence of the coincidence or noncoincidence of the leading hemispheres for speech and motor processes also revealed itself as a tendency. This consistent pattern points out the necessity for further research on asymmetries of the different modalities that takes into account their probable
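    The right-ear coefficient in dichotic listening is commonly computed as a laterality index over correct reports from each ear. A minimal sketch of one common formulation; the exact coefficient used in the study above is not given, so treat the formula as an illustrative assumption:

```python
def right_ear_coefficient(right_correct, left_correct):
    """Laterality index for dichotic listening, as a percentage.

    Positive values indicate a right-ear (typically left-hemisphere)
    advantage, negative values a left-ear advantage. This is one common
    formulation, not necessarily the one used in the study above.
    """
    total = right_correct + left_correct
    if total == 0:
        return 0.0
    return 100.0 * (right_correct - left_correct) / total

print(right_ear_coefficient(28, 20))  # -> ~16.7, right-ear advantage
print(right_ear_coefficient(15, 25))  # -> -25.0, left-ear advantage
```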

  2. Neural correlates of audiovisual speech processing in a second language.

    Science.gov (United States)

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Study on differences between perceptions of Japanese and Chinese emotional speech by Japanese and Chinese listeners

    OpenAIRE

    Zhang, Chenyi; Akagi, Masato

    2018-01-01

    Even without understanding a language, human listeners can still judge the emotional content of a voice. However, differences in emotion perception are reported among listeners with different mother tongues. Investigating why these differences occur may provide a systematic basis for discussions of emotion perception in a cross-language scenario. Therefore, this study discusses commonalities and differences between the emotion perception of Japanese and Chinese listeners fo...

  4. New graduates’ perceptions of preparedness to provide speech-language therapy services in general and dysphagia services in particular

    Directory of Open Access Journals (Sweden)

    Shajila Singh

    2015-06-01

    Methods: New graduates of six South African universities were recruited to participate in a survey by completing an electronic questionnaire exploring their perceptions of the dysphagia curricula and their preparedness to practise across the scope of the profession of speech-language therapy. Results: Eighty graduates participated in the study, yielding a response rate of 63.49%. Participants perceived themselves to be well prepared in some areas (e.g. child language: 100%; articulation and phonology: 97.26%), but less prepared in other areas (e.g. adult dysphagia: 50.70%; paediatric dysarthria: 46.58%; paediatric dysphagia: 38.36%), and most unprepared to provide services requiring sign language (23.61%) and African languages (20.55%). There was a significant relationship between perceptions of adequate theory and clinical learning opportunities with assessment and management of dysphagia and perceptions of preparedness to provide dysphagia services. Conclusion: There is a need for review of existing curricula and consideration of developing a standard speech-language therapy curriculum across universities, particularly in service provision to a multilingual population, and in both the theory and clinical learning of the assessment and management of adult and paediatric dysphagia, to better equip graduates for practice.

  5. Top-Down Modulation on the Perception and Categorization of Identical Pitch Contours in Speech and Music

    Directory of Open Access Journals (Sweden)

    Joey L. Weidema

    2016-06-01

    Whether pitch in language and music is governed by domain-specific or domain-general cognitive mechanisms is contentiously debated. The aim of the present study was to investigate whether mechanisms governing pitch contour perception operate differently when pitch information is interpreted as either speech or music. By modulating listening mode, this study aspired to demonstrate that pitch contour perception relies on domain-specific cognitive mechanisms, which are regulated by top-down influences from language and music. Three groups of participants (Mandarin speakers, Dutch-speaking non-musicians, and Dutch musicians) were exposed to identical pitch contours, and tested on their ability to identify these contours in a language and a musical context. Stimuli consisted of disyllabic words spoken in Mandarin, and melodic tonal analogues, embedded in a linguistic and a melodic carrier phrase, respectively. Participants classified identical pitch contours as significantly different depending on listening mode. Top-down influences from language appeared to alter the perception of pitch contour in speakers of Mandarin. This was not the case for non-musician speakers of Dutch. Moreover, this effect was lacking in Dutch-speaking musicians. The classification patterns of pitch contours in language and music seem to suggest that domain-specific categorization is modulated by top-down influences from language and music.

  6. Exploring the role of wood waste landfills in early detection of non-native alien wood-boring beetles

    Science.gov (United States)

    Davide Rassati; Massimo Faccoli; Lorenzo Marini; Robert A. Haack; Andrea Battisti; Edoardo. Petrucco Toffolo

    2015-01-01

    Non-native wood-boring beetles (Coleoptera) represent one of the most commonly intercepted groups of insects at ports worldwide. The development of early detection methods is a crucial step when implementing rapid response programs so that non-native wood-boring beetles can be quickly detected and a timely action plan can be produced. However, due to the limited...

  7. Higher dropout rate in non-native patients than in native patients in rehabilitation in The Netherlands

    NARCIS (Netherlands)

    Sloots, Maurits; Scheppers, Emmanuel F.; van de Weg, Frans B.; Bartels, Edien A.; Geertzen, Jan H.; Dekker, Joost; Dekker, Jaap

    Dropout from a rehabilitation programme often occurs in patients of non-native origin with chronic nonspecific low back pain. However, the exact dropout rate is not known. The objective of this study was to determine the difference in dropout rate between native and non-native patients with chronic

  8. Non-native grass removal and shade increase soil moisture and seedling performance during Hawaiian dry forest restoration

    Science.gov (United States)

    Jared M. Thaxton; Susan Cordell; Robert J. Cabin; Darren R. Sandquist

    2012-01-01

    Invasive non-native species can create especially problematic restoration barriers in subtropical and tropical dry forests. Native dry forests in Hawaii presently cover less than 10% of their original area. Many sites that historically supported dry forest are now completely dominated by non-native species, particularly grasses. Within a grass-dominated site in leeward...

  9. On the use of the distortion-sensitivity approach in examining the role of linguistic abilities in speech understanding in noise.

    Science.gov (United States)

    Goverts, S Theo; Huysmans, Elke; Kramer, Sophia E; de Groot, Annette M B; Houtgast, Tammo

    2011-12-01

    Researchers have used the distortion-sensitivity approach in the psychoacoustical domain to investigate the role of auditory processing abilities in speech perception in noise (van Schijndel, Houtgast, & Festen, 2001; Goverts & Houtgast, 2010). In this study, the authors examined the potential applicability of the distortion-sensitivity approach for investigating the role of linguistic abilities in speech understanding in noise. The authors applied the distortion-sensitivity approach by measuring the processing of visually presented masked text in conditions with manipulated syntactic, lexical, and semantic cues, using the Text Reception Threshold method (George et al., 2007; Kramer, Zekveld, & Houtgast, 2009; Zekveld, George, Kramer, Goverts, & Houtgast, 2007). Two groups that differed in linguistic abilities were studied: 13 native and 10 non-native speakers of Dutch, all typically hearing university students. As expected, the non-native subjects showed substantially reduced performance. The distortion-sensitivity approach yielded differentiated results on the use of specific linguistic cues in the two groups. The results show the potential value of the distortion-sensitivity approach in studying the role of linguistic abilities in the speech understanding in noise of individuals with hearing impairment.

  10. Sensory deprivation due to otitis media episodes in early childhood and its effect at later age: A psychoacoustic and speech perception measure.

    Science.gov (United States)

    Shetty, Hemanth Narayan; Koonoor, Vishal

    2016-11-01

    Past research has reported that repeated occurrences of otitis media (OM) at an early age have a negative impact on children's speech perception at a later age, which motivates documenting temporal and spectral processing and speech perception in noise in normal and atypical groups. The present study evaluated the relation between speech perception in noise and temporal and spectral processing abilities in children from normal and atypical groups. The study included two experiments. In the first experiment, temporal resolution and frequency discrimination were evaluated in the normal group and in three atypical subgroups with a history of OM: (a) fewer than four episodes, (b) four to nine episodes, and (c) more than nine episodes between the chronological ages of 6 months and 2 years, using measures of the temporal modulation transfer function and a frequency discrimination test. In the second experiment, SNR 50 was evaluated for each group of study participants. All participants had normal hearing and normal middle ear status during the course of testing. The results demonstrated that children in the atypical groups had significantly poorer modulation detection thresholds, peak sensitivity, and bandwidth, and poorer frequency discrimination at each F0, than normal-hearing listeners. Furthermore, there was a significant correlation between measures of temporal resolution, frequency discrimination, and speech perception in noise. This implies that the atypical groups have significant impairment in extracting envelope as well as fine-structure cues from the signal. The results supported the idea that episodes of OM before 2 years of age can produce periods of sensory deprivation that alter temporal and spectral skills, which in turn has negative consequences for speech perception in noise. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Mexican immigrant mothers' perceptions of their children's communication disabilities, emergent literacy development, and speech-language therapy program.

    Science.gov (United States)

    Kummerer, Sharon E; Lopez-Reyna, Norma A; Hughes, Marie Tejero

    2007-08-01

    This qualitative study explored mothers' perceptions of their children's communication disabilities, emergent literacy development, and speech-language therapy programs. Participants were 14 Mexican immigrant mothers and their children (age 17-47 months) who were receiving center-based services from an early childhood intervention program located in a large urban city in the Midwestern United States. Mother interviews constituted the primary source of data. A secondary source of data comprised children's therapy files and log notes. Following the analysis of interviews through the constant comparative method, grounded theory was generated. The majority of mothers perceived their children as exhibiting a communication delay. Causal attributions were diverse and generally medical in nature (i.e., ear infections, seizures) or due to familial factors (i.e., family history and heredity, lack of extended family). Overall, mothers seemed more focused on their children's speech intelligibility and/or expressive language than on emergent literacy abilities. To promote culturally responsive intervention, mothers recommended that professionals speak Spanish, provide information about the therapy process, and use existing techniques with Mexican immigrant families.

  12. Brain Plasticity in Speech Training in Native English Speakers Learning Mandarin Tones

    Science.gov (United States)

    Heinzen, Christina Carolyn

    for the across-category lexical tone contrast. Overall, the results support the use of IDS characteristics in training non-native speech contrasts and provide impetus for further research.

  13. Speech perception at positive signal-to-noise ratios using adaptive adjustment of time compression.

    Science.gov (United States)

    Schlueter, Anne; Brand, Thomas; Lemke, Ulrike; Nitzschner, Stefan; Kollmeier, Birger; Holube, Inga

    2015-11-01

    Positive signal-to-noise ratios (SNRs) characterize the listening situations most relevant for hearing-impaired listeners in daily life and should therefore be considered when evaluating hearing aid algorithms. For this, a speech-in-noise test was developed and evaluated in which the background noise is presented at fixed positive SNRs and the speech rate (i.e., the time compression of the speech material) is adaptively adjusted. In total, 29 younger and 12 older normal-hearing listeners, as well as 24 older hearing-impaired listeners, took part in repeated measurements. Younger normal-hearing and older hearing-impaired listeners completed one of two adaptive methods, which differed in adaptive procedure and step size. Analysis of the measurements with regard to list length and threshold estimation strategy resulted in a practical method for measuring the time compression yielding 50% recognition. This method uses time-compression adjustment and step sizes according to Versfeld and Dreschler [(2002). J. Acoust. Soc. Am. 111, 401-408], with sentence scoring, lists of 30 sentences, and a maximum likelihood method for threshold estimation. Evaluation of the procedure showed that older participants obtained higher test-retest reliability than younger participants. Depending on the group of listeners, one or two lists are required for training prior to data collection.
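    A maximum-likelihood threshold estimate of the kind mentioned above fits a psychometric function to the trial-by-trial data and picks the 50% point that best explains the responses. A generic sketch, assuming a logistic function with a fixed slope and a simple grid search (the study's exact procedure and parameters are not reproduced here):

```python
import numpy as np

def ml_threshold(levels, correct, slope=4.0):
    """Maximum-likelihood estimate of the level giving 50% recognition.

    Assumes a logistic psychometric function with fixed slope, in which
    performance falls as the level (e.g. time-compression factor) rises.
    levels:  stimulus level per trial; correct: 1 if correct, else 0.
    """
    levels = np.asarray(levels, dtype=float)
    correct = np.asarray(correct, dtype=float)
    candidates = np.linspace(levels.min(), levels.max(), 501)
    best_thr, best_ll = candidates[0], -np.inf
    for thr in candidates:
        p = 1.0 / (1.0 + np.exp(slope * (levels - thr)))  # P(correct) per trial
        p = np.clip(p, 1e-9, 1.0 - 1e-9)
        ll = np.sum(correct * np.log(p) + (1.0 - correct) * np.log(1.0 - p))
        if ll > best_ll:
            best_thr, best_ll = thr, ll
    return best_thr

# Simulate 400 trials around a hypothetical 50% point of 2.0x compression.
rng = np.random.default_rng(1)
true_thr = 2.0
lv = rng.uniform(1.0, 3.0, 400)
resp = (rng.uniform(size=400) < 1.0 / (1.0 + np.exp(4.0 * (lv - true_thr)))).astype(int)
print(ml_threshold(lv, resp))  # recovers a value close to true_thr
```

In practice the adaptive procedure places most trials near the running threshold estimate rather than sampling levels uniformly, which makes the estimate converge with far fewer trials.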

  14. Speech perception performance of subjects with type I diabetes mellitus in noise

    Directory of Open Access Journals (Sweden)

    Bárbara Cristiane Sordi Silva

    Introduction: Diabetes mellitus (DM) is a chronic metabolic disorder of various origins that occurs when the pancreas fails to produce insulin in sufficient quantities or when the organism fails to respond to this hormone in an efficient manner. Objective: To evaluate speech recognition in subjects with type I diabetes mellitus (DMI) in quiet and in competitive noise. Methods: This was a descriptive, observational, and cross-sectional study. We included 40 participants of both genders aged 18-30 years, divided into a control group (CG) of 20 healthy subjects with no complaints or auditory changes, matched for age and gender with the study group, which consisted of 20 subjects with a diagnosis of DMI. First, we applied basic audiological evaluations (pure tone audiometry, speech audiometry, and immittance audiometry) to all subjects; after these evaluations, we applied the Sentence Recognition Threshold in Quiet (SRTQ) and Sentence Recognition Threshold in Noise (SRTN) in free field, using the List of Sentences in Portuguese test. Results: All subjects showed normal bilateral pure tone thresholds, compatible speech audiometry, and a type A tympanometry curve. Group comparison revealed a statistically significant difference for SRTQ (p = 0.0001), SRTN (p < 0.0001), and the signal-to-noise ratio (p < 0.0001). Conclusion: The performance of DMI subjects in SRTQ and SRTN was worse than that of subjects without diabetes.

  15. Perception of Music and Speech in Adolescents with Cochlear Implants – A Pilot Study on Effects of Intensive Musical Ear Training

    DEFF Research Database (Denmark)

    Petersen, Bjørn; Sørensen, Stine Derdau; Pedersen, Ellen Raben

    measures of rehabilitation are important throughout adolescence. Music training may provide a beneficial method of strengthening not only music perception, but also linguistic skills, particularly prosody. The purpose of this study was to examine perception of music and speech and music engagement...... of adolescent CI users and the potential effects of an intensive musical ear training program. METHODS Eleven adolescent CI users participated in a short intensive training program involving music making activities and computer based listening exercises. Ten NH agemates formed a reference group, who followed...... their standard school schedule and received no music training. Before and after the intervention period, both groups completed a set of tests for perception of music, speech and emotional prosody. In addition, the participants filled out a questionnaire which examined music listening habits and enjoyment...

  16. The effect of different cochlear implant microphones on acoustic hearing individuals’ binaural benefits for speech perception in noise

    Science.gov (United States)

    Aronoff, Justin M.; Freed, Daniel J.; Fisher, Laurel M.; Pal, Ivan; Soli, Sigfrid D.

    2011-01-01

    Objectives: Cochlear implant microphones differ in placement, frequency response, and other characteristics such as whether they are directional. Although normal hearing individuals are often used as controls in studies examining cochlear implant users’ binaural benefits, the considerable differences across cochlear implant microphones make such comparisons potentially misleading. The goal of this study was to examine binaural benefits for speech perception in noise for normal hearing individuals using stimuli processed by head-related transfer functions (HRTFs) based on the different cochlear implant microphones. Design: HRTFs were created for different cochlear implant microphones and used to test participants on the Hearing in Noise Test. Experiment 1 tested cochlear implant users and normal hearing individuals with HRTF-processed stimuli and with sound field testing to determine whether the HRTFs adequately simulated sound field testing. Experiment 2 determined the measurement error and performance-intensity function for the Hearing in Noise Test with normal hearing individuals listening to stimuli processed with the various HRTFs. Experiment 3 compared normal hearing listeners’ performance across HRTFs to determine how the HRTFs affected performance. Experiment 4 evaluated binaural benefits for normal hearing listeners using the various HRTFs, including ones that were modified to investigate the contributions of interaural time and level cues. Results: The results indicated that the HRTFs adequately simulated sound field testing for the Hearing in Noise Test. They also demonstrated that the test-retest reliability and performance-intensity function were consistent across HRTFs, and that the measurement error for the test was 1.3 dB, with a change in signal-to-noise ratio of 1 dB reflecting a 10% change in intelligibility. There were significant differences in performance when using the various HRTFs, with particularly good thresholds for the HRTF based on the
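    HRTF-based presentation amounts to convolving the mono test stimulus with a left-ear and a right-ear head-related impulse response (HRIR). A minimal sketch with toy HRIRs; real studies use responses measured through the actual microphones, and the interaural time and level differences below are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_hrir(mono, hrir_left, hrir_right):
    """Render a mono stimulus binaurally by convolving it with per-ear HRIRs."""
    return fftconvolve(mono, hrir_left), fftconvolve(mono, hrir_right)

fs = 16000
t = np.arange(fs) / fs
mono = np.sin(2 * np.pi * 440 * t)  # stand-in for a speech stimulus

# Toy HRIRs for a source on the listener's right: the far (left) ear gets the
# signal ~0.5 ms later and ~6 dB quieter -- illustrative values only.
itd = int(0.0005 * fs)                     # 8 samples at 16 kHz
hrir_right = np.zeros(64); hrir_right[0] = 1.0
hrir_left = np.zeros(64); hrir_left[itd] = 0.5

left, right = apply_hrir(mono, hrir_left, hrir_right)
# `right` leads `left` by `itd` samples and is ~6 dB more intense,
# providing the interaural time and level cues the study manipulated.
```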

  17. Non-native tree species in urban areas of the city of Nitra

    International Nuclear Information System (INIS)

    Galis, M

    2014-01-01

    Non-native plant species are part of our environment. Their introduction is strongly conditioned by anthropogenic activities, which characterize the urban environment. During field surveys of selected areas of the town of Nitra (Chrenova, Mikova Ves, Zobor), we studied the frequency of non-native tree species in the contact zone. Overall, we found 10 alien species in this area. Our results show a dominant presence of the species Rhus typhina, followed by Robinia pseudoacacia and Ailanthus altissima. Individual plants were largely tied to the surroundings of built-up areas, often growing directly in front of houses or as part of the urban greenery. (author)

  18. Catalytic mechanism of phenylacetone monooxygenases for non-native linear substrates.

    Science.gov (United States)

    Carvalho, Alexandra T P; Dourado, Daniel F A R; Skvortsov, Timofey; de Abreu, Miguel; Ferguson, Lyndsey J; Quinn, Derek J; Moody, Thomas S; Huang, Meilan

    2017-10-11

    Phenylacetone monooxygenase (PAMO) is the most stable and thermo-tolerant member of the Baeyer-Villiger monooxygenase family, and therefore an ideal candidate for the synthesis of industrially relevant compounds. However, its narrow substrate scope has largely restricted its industrial applications. In the present work, we provide, for the first time, the catalytic mechanism of PAMO for the native substrate phenylacetone as well as for a linear non-native substrate, 2-octanone, using molecular dynamics simulations, quantum mechanics and quantum mechanics/molecular mechanics calculations. We provide a theoretical basis for the enzyme's preference for the native aromatic substrate over non-native linear substrates. Our study provides fundamental atomic-level insights that can be employed in the rational engineering of PAMO for wide application in industrial biocatalysis, in particular the biotransformation of long-chain aliphatic oils into potential biodiesels.

  19. Benefits to Speech Perception in Noise From the Binaural Integration of Electric and Acoustic Signals in Simulated Unilateral Deafness.

    Science.gov (United States)

    Ma, Ning; Morris, Saffron; Kitterick, Pádraig Thomas

    2016-01-01

    performance (50%), binaural integration advantages were found regardless of whether a mismatch was simulated or not. When the CI-simulation ear supported a superior level of monaural performance (71%), evidence of binaural integration was absent when a mismatch was simulated using both the Realistic and the Ideal processing strategies. This absence of integration could not be accounted for by ceiling effects or by changes in SNR. If generalizable to unilaterally deaf CI users, the results of the current simulation study would suggest that benefits to speech perception in noise can be obtained by integrating information from an implanted ear and an NH ear. A mismatch in the delivery of spectral information between the ears due to a misalignment in the mapping of frequency to place may disrupt binaural integration in situations where both ears cannot support a similar level of monaural speech understanding. Previous studies that have measured the speech perception of unilaterally deaf individuals after CI but with nonindividualized frequency-to-electrode allocations may therefore have underestimated the potential benefits of providing binaural hearing. However, it remains unclear whether the size and nature of the potential incremental benefits from individualized allocations are sufficient to justify the time and resources required to derive them based on cochlear imaging or pitch-matching tasks.

  20. Comparison of bimodal and bilateral cochlear implant users on speech recognition with competing talker, music perception, affective prosody discrimination, and talker identification.

    Science.gov (United States)

    Cullington, Helen E; Zeng, Fan-Gang

    2011-02-01

    Despite excellent performance in speech recognition in quiet, most cochlear implant users have great difficulty with speech recognition in noise, music perception, identifying tone of voice, and discriminating different talkers. This may be partly due to the pitch coding in cochlear implant speech processing. Most current speech processing strategies use only the envelope information; the temporal fine structure is discarded. One way to improve electric pitch perception is to use residual acoustic hearing via a hearing aid on the nonimplanted ear (bimodal hearing). This study aimed to test the hypothesis that bimodal users would perform better than bilateral cochlear implant users on tasks requiring good pitch perception. Four pitch-related tasks were used. 1. Hearing in Noise Test (HINT) sentences spoken by a male talker with a competing female, male, or child talker. 2. Montreal Battery of Evaluation of Amusia. This is a music test with six subtests examining pitch, rhythm and timing perception, and musical memory. 3. Aprosodia Battery. This has five subtests evaluating aspects of affective prosody and recognition of sarcasm. 4. Talker identification using vowels spoken by 10 different talkers (three men, three women, two boys, and two girls). Bilateral cochlear implant users were chosen as the comparison group. Thirteen bimodal and 13 bilateral adult cochlear implant users were recruited; all had good speech perception in quiet. There were no significant differences between the mean scores of the bimodal and bilateral groups on any of the tests, although the bimodal group did perform better than the bilateral group on almost all tests. Performance on the different pitch-related tasks was not correlated, meaning that if a subject performed one task well they would not necessarily perform well on another. The correlation between the bimodal users' hearing threshold levels in the aided ear and their performance on these tasks was weak. 
Although the bimodal cochlear

  1. Reflecting on the dichotomy native-non native speakers in an EFL context

    OpenAIRE

    Mariño, Claudia

    2011-01-01

    This article provides a discussion based on constructs about the dichotomy between native and non-native speakers. Several models and examples are displayed about the spreading of the English language with the intention of understanding its development in the whole world and in Colombia, specifically. Then, some possible definitions are given to the term “native speaker” and its conceptualization is described as both reality and myth. One of the main reasons for writing this article is grounded on...

  2. Non-native salmonids affect amphibian occupancy at multiple spatial scales

    Science.gov (United States)

    Pilliod, David S.; Hossack, Blake R.; Bahls, Peter F.; Bull, Evelyn L.; Corn, Paul Stephen; Hokit, Grant; Maxell, Bryce A.; Munger, James C.; Wyrick, Aimee

    2010-01-01

    Aim The introduction of non-native species into aquatic environments has been linked with local extinctions and altered distributions of native species. We investigated the effect of non-native salmonids on the occupancy of two native amphibians, the long-toed salamander (Ambystoma macrodactylum) and Columbia spotted frog (Rana luteiventris), across three spatial scales: water bodies, small catchments and large catchments. Location Mountain lakes at ≥ 1500 m elevation were surveyed across the northern Rocky Mountains, USA. Methods We surveyed 2267 water bodies for amphibian occupancy (based on evidence of reproduction) and fish presence between 1986 and 2002 and modelled the probability of amphibian occupancy at each spatial scale in relation to habitat availability and quality and fish presence. Results After accounting for habitat features, we estimated that A. macrodactylum was 2.3 times more likely to breed in fishless water bodies than in water bodies with fish. Ambystoma macrodactylum also was more likely to occupy small catchments where none of the water bodies contained fish than in catchments where at least one water body contained fish. However, the probability of salamander occupancy in small catchments was also influenced by habitat availability (i.e. the number of water bodies within a catchment) and suitability of remaining fishless water bodies. We found no relationship between fish presence and salamander occupancy at the large-catchment scale, probably because of increased habitat availability. In contrast to A. macrodactylum, we found no relationship between fish presence and R. luteiventris occupancy at any scale. Main conclusions Our results suggest that the negative effects of non-native salmonids can extend beyond the boundaries of individual water bodies and increase A. macrodactylum extinction risk at landscape scales. We suspect that niche overlap between non-native fish and A. macrodactylum at higher elevations in the northern Rocky

  3. Non-Native Japanese Learners' Perception of Consonant Length in Japanese and Italian

    Science.gov (United States)

    Tsukada, Kimiko; Cox, Felicity; Hajek, John; Hirata, Yukari

    2018-01-01

    Learners of a foreign language (FL) typically have to learn to process sounds that do not exist in their first language (L1). As this is known to be difficult for adults, in particular, it is important for FL pedagogy to be informed by phonetic research. This study examined the role of FL learners' previous linguistic experience in the processing…

  4. When perception reflects reality: Non-native grass invasion alters small mammal risk landscapes and survival

    Science.gov (United States)

    Ceradini, Joseph P.; Chalfoun, Anna D.

    2017-01-01

    Modification of habitat structure due to invasive plants can alter the risk landscape for wildlife by, for example, changing the quality or availability of refuge habitat. Whether perceived risk corresponds with actual fitness outcomes, however, remains an important open question. We simultaneously measured how habitat changes due to a common invasive grass (cheatgrass, Bromus tectorum) affected the perceived risk, habitat selection, and apparent survival of a small mammal, enabling us to assess how well perceived risk influenced important behaviors and reflected actual risk. We measured perceived risk by nocturnal rodents using a giving-up density foraging experiment with paired shrub (safe) and open (risky) foraging trays in cheatgrass and native habitats. We also evaluated microhabitat selection across a cheatgrass gradient as an additional assay of perceived risk and behavioral responses for deer mice (Peromyscus maniculatus) at two spatial scales of habitat availability. Finally, we used mark-recapture analysis to quantify deer mouse apparent survival across a cheatgrass gradient while accounting for detection probability and other habitat features. In the foraging experiment, shrubs were more important as protective cover in cheatgrass-dominated habitats, suggesting that cheatgrass increased perceived predation risk. Additionally, deer mice avoided cheatgrass and selected shrubs, and marginally avoided native grass, at two spatial scales. Deer mouse apparent survival varied with a cheatgrass–shrub interaction, corresponding with our foraging experiment results, and providing a rare example of a native plant mediating the effects of an invasive plant on wildlife. By synthesizing the results of three individual lines of evidence (foraging behavior, habitat selection, and apparent survival), we provide a rare example of linkage between behavioral responses of animals indicative of perceived predation risk and actual fitness outcomes. 
Moreover, our results suggest that exotic grass invasions can influence wildlife populations by altering risk landscapes and survival.

  5. Fitness benefits of the fruit fly Rhagoletis alternata on a non-native rose host.

    Science.gov (United States)

    Meijer, Kim; Smit, Christian; Schilthuizen, Menno; Beukeboom, Leo W

    2016-05-01

    Many species have been introduced worldwide into areas outside their natural range. Often these non-native species are introduced without their natural enemies, which sometimes leads to uncontrolled population growth. It is rarely reported that an introduced species provides a new resource for a native species. The rose hips of the Japanese rose, Rosa rugosa, which has been introduced in large parts of Europe, are infested by the native monophagous tephritid fruit fly Rhagoletis alternata. We studied differences in fitness benefits between R. alternata larvae using R. rugosa as well as native Rosa species in the Netherlands. R. alternata pupae were larger and heavier when the larvae fed on rose hips of R. rugosa. Larvae feeding on R. rugosa were parasitized less frequently by parasitic wasps than were larvae feeding on native roses. The differences in parasitization are probably due to morphological differences between the native and non-native rose hips: the hypanthium of a R. rugosa hip is thicker and provides the larvae with the possibility to feed deeper into the hip, meaning that the parasitoids cannot reach them with their ovipositor and the larvae escape parasitization. Our study shows that native species switching to a novel non-native host can experience fitness benefits compared to the original native host.

  6. Growth rate differences between resident native brook trout and non-native brown trout

    Science.gov (United States)

    Carlson, S.M.; Hendry, A.P.; Letcher, B.H.

    2007-01-01

    Between species and across season variation in growth was examined by tagging and recapturing individual brook trout Salvelinus fontinalis and brown trout Salmo trutta across seasons in a small stream (West Brook, Massachusetts, U.S.A.). Detailed information on body size and growth are presented to (1) test whether the two species differed in growth within seasons and (2) characterize the seasonal growth patterns for two age classes of each species. Growth differed between species in nearly half of the season- and age-specific comparisons. When growth differed, non-native brown trout grew faster than native brook trout in all but one comparison. Moreover, species differences were most pronounced when overall growth was high during the spring and early summer. These growth differences resulted in size asymmetries that were sustained over the duration of the study. A literature survey also indicated that non-native salmonids typically grow faster than native salmonids when the two occur in sympatry. Taken together, these results suggest that differences in growth are not uncommon for coexisting native and non-native salmonids. ?? 2007 The Authors.

  7. Evolution under changing climates: climatic niche stasis despite rapid evolution in a non-native plant.

    Science.gov (United States)

    Alexander, Jake M

    2013-09-22

    A topic of great current interest is the capacity of populations to adapt genetically to rapidly changing climates, for example by evolving the timing of life-history events, but this is challenging to address experimentally. I use a plant invasion as a model system to tackle this question by combining molecular markers, a common garden experiment and climatic niche modelling. This approach reveals that non-native Lactuca serriola originates primarily from Europe, a climatic subset of its native range, with low rates of admixture from Asia. It has rapidly refilled its climatic niche in the new range, associated with the evolution of flowering phenology to produce clines along climate gradients that mirror those across the native range. Consequently, some non-native plants have evolved development times and grow under climates more extreme than those found in Europe, but not among populations from the native range as a whole. This suggests that many plant populations can adapt rapidly to changed climatic conditions that are already within the climatic niche space occupied by the species elsewhere in its range, but that evolution to conditions outside of this range is more difficult. These findings can also help to explain the prevalence of niche conservatism among non-native species.

  8. Economic impacts of non-native forest insects in the continental United States.

    Directory of Open Access Journals (Sweden)

    Juliann E Aukema

    Full Text Available Reliable estimates of the impacts and costs of biological invasions are critical to developing credible management, trade and regulatory policies. Worldwide, forests and urban trees provide important ecosystem services as well as economic and social benefits, but are threatened by non-native insects. More than 450 non-native forest insects are established in the United States but estimates of broad-scale economic impacts associated with these species are largely unavailable. We developed a novel modeling approach that maximizes the use of available data, accounts for multiple sources of uncertainty, and provides cost estimates for three major feeding guilds of non-native forest insects. For each guild, we calculated the economic damages for five cost categories and we estimated the probability of future introductions of damaging pests. We found that costs are largely borne by homeowners and municipal governments. Wood- and phloem-boring insects are anticipated to cause the largest economic impacts by annually inducing nearly $1.7 billion in local government expenditures and approximately $830 million in lost residential property values. Given observations of new species, there is a 32% chance that another highly destructive borer species will invade the U.S. in the next 10 years. Our damage estimates provide a crucial but previously missing component of cost-benefit analyses to evaluate policies and management options intended to reduce species introductions. The modeling approach we developed is highly flexible and could be similarly employed to estimate damages in other countries or natural resource sectors.
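The reported 32% chance of another highly destructive borer invading within 10 years implies an annual arrival rate if one assumes, as a simplification not necessarily matching the paper's exact model, that damaging introductions follow a Poisson process:

```python
import math

# Under a Poisson process, P(at least one arrival in t years) = 1 - exp(-rate * t).
# Invert this to recover the annual rate implied by the reported probability.
p_10yr = 0.32    # reported chance of a new destructive borer within 10 years
t_years = 10
annual_rate = -math.log(1 - p_10yr) / t_years   # ~0.039 introductions per year

# The same rate extrapolated to a 25-year horizon
p_25yr = 1 - math.exp(-annual_rate * 25)        # ~0.62
```

The extrapolation is purely illustrative; the study's probability estimate is based on the observed history of establishments, not on this closed form.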

  9. Invasion of non-native grasses causes a drop in soil carbon storage in California grasslands

    Energy Technology Data Exchange (ETDEWEB)

    Koteen, Laura E; Harte, John [Energy and Resources Group, 310 Barrows Hall, University of California, Berkeley, CA 94720 (United States); Baldocchi, Dennis D, E-mail: lkoteen@berkeley.edu [Department of Environmental Science, Policy and Management, 137 Mulford Hall, University of California, Berkeley, CA 94720 (United States)

    2011-10-15

    Vegetation change can affect the magnitude and direction of global climate change via its effect on carbon cycling among plants, the soil and the atmosphere. The invasion of non-native plants is a major cause of land cover change, of biodiversity loss, and of other changes in ecosystem structure and function. In California, annual grasses from Mediterranean Europe have nearly displaced native perennial grasses across the coastal hillsides and terraces of the state. Our study examines the impact of this invasion on carbon cycling and storage at two sites in northern coastal California. The results suggest that annual grass invasion has caused an average drop in soil carbon storage of 40 Mg/ha in the top half meter of soil, although additional mechanisms may also contribute to soil carbon losses. We attribute the reduction in soil carbon storage to low rates of net primary production in non-native annuals relative to perennial grasses, a shift in rooting depth and water use to primarily shallow sources, and soil respiratory losses in non-native grass soils that exceed production rates. These results indicate that even seemingly subtle land cover changes can significantly impact ecosystem functions in general, and carbon storage in particular.

  10. Mental health status in pregnancy among native and non-native Swedish-speaking women

    DEFF Research Database (Denmark)

    Wangel, Anne-Marie; Schei, Berit; Ryding, Elsa Lena

    2012-01-01

    OBJECTIVES: To describe mental health status in native and non-native Swedish-speaking pregnant women and explore risk factors of depression and posttraumatic stress (PTS) symptoms. DESIGN AND SETTING: A cross-sectional questionnaire study was conducted at midwife-based antenatal clinics in Southern Sweden. SAMPLE: A non-selected group of women in mid-pregnancy. METHODS: Participants completed a questionnaire covering background characteristics, social support, life events, mental health variables and the short Edinburgh Depression Scale. MAIN OUTCOME MEASURES: Depressive symptoms during the past week and PTS symptoms during the past year. RESULTS: Out of 1003 women, 21.4% reported another language than Swedish as their mother tongue and were defined as non-native. These women were more likely to be younger, have fewer years of education, potential financial problems, and lack of social...

  11. Invasion of non-native grasses causes a drop in soil carbon storage in California grasslands

    Science.gov (United States)

    Koteen, Laura E.; Baldocchi, Dennis D.; Harte, John

    2011-10-01

    Vegetation change can affect the magnitude and direction of global climate change via its effect on carbon cycling among plants, the soil and the atmosphere. The invasion of non-native plants is a major cause of land cover change, of biodiversity loss, and of other changes in ecosystem structure and function. In California, annual grasses from Mediterranean Europe have nearly displaced native perennial grasses across the coastal hillsides and terraces of the state. Our study examines the impact of this invasion on carbon cycling and storage at two sites in northern coastal California. The results suggest that annual grass invasion has caused an average drop in soil carbon storage of 40 Mg/ha in the top half meter of soil, although additional mechanisms may also contribute to soil carbon losses. We attribute the reduction in soil carbon storage to low rates of net primary production in non-native annuals relative to perennial grasses, a shift in rooting depth and water use to primarily shallow sources, and soil respiratory losses in non-native grass soils that exceed production rates. These results indicate that even seemingly subtle land cover changes can significantly impact ecosystem functions in general, and carbon storage in particular.

  12. Invasion of non-native grasses causes a drop in soil carbon storage in California grasslands

    International Nuclear Information System (INIS)

    Koteen, Laura E; Harte, John; Baldocchi, Dennis D

    2011-01-01

    Vegetation change can affect the magnitude and direction of global climate change via its effect on carbon cycling among plants, the soil and the atmosphere. The invasion of non-native plants is a major cause of land cover change, of biodiversity loss, and of other changes in ecosystem structure and function. In California, annual grasses from Mediterranean Europe have nearly displaced native perennial grasses across the coastal hillsides and terraces of the state. Our study examines the impact of this invasion on carbon cycling and storage at two sites in northern coastal California. The results suggest that annual grass invasion has caused an average drop in soil carbon storage of 40 Mg/ha in the top half meter of soil, although additional mechanisms may also contribute to soil carbon losses. We attribute the reduction in soil carbon storage to low rates of net primary production in non-native annuals relative to perennial grasses, a shift in rooting depth and water use to primarily shallow sources, and soil respiratory losses in non-native grass soils that exceed production rates. These results indicate that even seemingly subtle land cover changes can significantly impact ecosystem functions in general, and carbon storage in particular.

  13. Evaluating ecosystem services provided by non-native species: an experimental test in California grasslands.

    Science.gov (United States)

    Stein, Claudia; Hallett, Lauren M; Harpole, W Stanley; Suding, Katharine N

    2014-01-01

    The concept of ecosystem services--the benefits that nature provides to human society--has gained increasing attention over the past decade. Increasing global abiotic and biotic change, including species invasions, is threatening the secure delivery of these ecosystem services. Efficient methods of evaluating ecosystem services are urgently needed to improve our ability to determine management strategies and restoration goals in the face of these new emerging ecosystems. Considering a range of multiple ecosystem functions may be a useful way to determine such strategies. We tested this framework experimentally in California grasslands, where large shifts in species composition have occurred since the late 1700s. We compared a suite of ecosystem functions within one historic native and two non-native species assemblages under different grazing intensities to address how different species assemblages vary in provisioning, regulatory and supporting ecosystem services. Forage production was reduced in one non-native assemblage (medusahead). Cultural ecosystem services, such as native species diversity, were inherently lower in both non-native assemblages, whereas most other services were maintained across grazing intensities. All systems provided similar ecosystem services under the highest grazing intensity treatment, which simulated unsustainable grazing. We suggest that applying a more comprehensive framework that considers multiple ecosystem services to evaluate new emerging ecosystems is a valuable tool for determining management goals and how to intervene in a changing ecosystem.

  14. Community-level plant-soil feedbacks explain landscape distribution of native and non-native plants.

    Science.gov (United States)

    Kulmatiski, Andrew

    2018-02-01

    Plant-soil feedbacks (PSFs) have gained attention for their potential role in explaining plant growth and invasion. While promising, most PSF research has measured plant monoculture growth on different soils in short-term, greenhouse experiments. Here, five soil types were conditioned by growing one native species, three non-native species, or a mixed plant community in different plots in a common-garden experiment. After 4 years, plants were removed and one native and one non-native plant community were planted into replicate plots of each soil type. After three additional years, the percentage cover of each of the three target species in each community was measured. These data were used to parameterize a plant community growth model. Model predictions were compared to native and non-native abundance on the landscape. Native community cover was lowest on soil conditioned by the dominant non-native, Centaurea diffusa, and non-native community cover was lowest on soil cultivated by the dominant native, Pseudoroegneria spicata. Consistent with plant growth on the landscape, the plant growth model predicted that the positive PSFs observed in the common-garden experiment would result in two distinct communities on the landscape: a native plant community on native soils and a non-native plant community on non-native soils. In contrast, when PSF effects were removed, the model predicted that non-native plants would dominate all soils, which was not consistent with plant growth on the landscape. Results provide an example where PSF effects were large enough to change the rank-order abundance of native and non-native plant communities and to explain plant distributions on the landscape. The positive PSFs that contributed to this effect reflected the ability of the two dominant plant species to suppress each other's growth. Results suggest that plant dominance, at least in this system, reflects the ability of a species to suppress the growth of dominant competitors

  15. Presence and abundance of non-native plant species associated with recent energy development in the Williston Basin

    Science.gov (United States)

    Preston, Todd M.

    2015-01-01

    The Williston Basin, located in the Northern Great Plains, is experiencing rapid energy development with North Dakota and Montana being the epicenter of current and projected development in the USA. The average single-bore well pad is 5 acres with an estimated 58,485 wells in North Dakota alone. This landscape-level disturbance may provide a pathway for the establishment of non-native plants. To evaluate potential influences of energy development on the presence and abundance of non-native species, vegetation surveys were conducted at 30 oil well sites (14 ten-year-old and 16 five-year-old wells) and 14 control sites in native prairie environments across the Williston Basin. Non-native species richness and cover were recorded in four quadrats, located at equal distances, along four transects for a total of 16 quadrats per site. Non-natives were recorded at all 44 sites and ranged from 5 to 13 species, 7 to 15 species, and 2 to 8 species at the 10-year, 5-year, and control sites, respectively. Respective non-native cover ranged from 1 to 69, 16 to 76, and 2 to 82 %. Total, forb, and graminoid non-native species richness and non-native forb cover were significantly greater at oil well sites compared to control sites. At oil well sites, non-native species richness and forb cover were significantly greater adjacent to the well pads and decreased with distance to values similar to control sites. Finally, non-native species whose presence and/or abundance were significantly greater at oil well sites relative to control sites were identified to aid management efforts.

  16. Non-Native Plant Invasion along Elevation and Canopy Closure Gradients in a Middle Rocky Mountain Ecosystem.

    Directory of Open Access Journals (Sweden)

    Joshua P Averett

    Full Text Available Mountain environments are currently among the ecosystems least invaded by non-native species; however, mountains are increasingly under threat of non-native plant invasion. The slow pace of exotic plant invasions in mountain ecosystems is likely due to a combination of low anthropogenic disturbances, low propagule supply, and extreme/steep environmental gradients. The importance of any one of these factors is debated and likely ecosystem dependent. We evaluated the importance of various correlates of plant invasions in the Wallowa Mountain Range of northeastern Oregon and explored whether non-native species distributions differed from native species along an elevation gradient. Vascular plant communities were sampled in summer 2012 along three mountain roads. Transects (n = 20) were evenly stratified by elevation (~70 m intervals) along each road. Vascular plant species abundances and environmental parameters were measured. We used indicator species analysis to identify habitat affinities for non-native species. Plots were ordinated in species space, joint plots and non-parametric multiplicative regression were used to relate species and community variation to environmental variables. Non-native species richness decreased continuously with increasing elevation. In contrast, native species richness displayed a unimodal distribution with maximum richness occurring at mid-elevations. Species composition was strongly related to elevation and canopy openness. Overlays of trait and environmental factors onto non-metric multidimensional ordinations identified the montane-subalpine community transition and over-story canopy closure exceeding 60% as potential barriers to non-native species establishment. Unlike native species, non-native species showed little evidence for high-elevation or closed-canopy specialization. These data suggest that non-native plants currently found in the Wallowa Mountains are dependent on open canopies and disturbance for

  17. Non-Native Plant Invasion along Elevation and Canopy Closure Gradients in a Middle Rocky Mountain Ecosystem.

    Science.gov (United States)

    Averett, Joshua P; McCune, Bruce; Parks, Catherine G; Naylor, Bridgett J; DelCurto, Tim; Mata-González, Ricardo

    2016-01-01

    Mountain environments are currently among the ecosystems least invaded by non-native species; however, mountains are increasingly under threat of non-native plant invasion. The slow pace of exotic plant invasions in mountain ecosystems is likely due to a combination of low anthropogenic disturbances, low propagule supply, and extreme/steep environmental gradients. The importance of any one of these factors is debated and likely ecosystem dependent. We evaluated the importance of various correlates of plant invasions in the Wallowa Mountain Range of northeastern Oregon and explored whether non-native species distributions differed from native species along an elevation gradient. Vascular plant communities were sampled in summer 2012 along three mountain roads. Transects (n = 20) were evenly stratified by elevation (~70 m intervals) along each road. Vascular plant species abundances and environmental parameters were measured. We used indicator species analysis to identify habitat affinities for non-native species. Plots were ordinated in species space, joint plots and non-parametric multiplicative regression were used to relate species and community variation to environmental variables. Non-native species richness decreased continuously with increasing elevation. In contrast, native species richness displayed a unimodal distribution with maximum richness occurring at mid-elevations. Species composition was strongly related to elevation and canopy openness. Overlays of trait and environmental factors onto non-metric multidimensional ordinations identified the montane-subalpine community transition and over-story canopy closure exceeding 60% as potential barriers to non-native species establishment. Unlike native species, non-native species showed little evidence for high-elevation or closed-canopy specialization. These data suggest that non-native plants currently found in the Wallowa Mountains are dependent on open canopies and disturbance for establishment in low

  18. Multilevel Analysis in Analyzing Speech Data

    Science.gov (United States)

    Guddattu, Vasudeva; Krishna, Y.

    2011-01-01

    The speech produced by the human vocal tract is a complex acoustic signal, with diverse applications in phonetics, speech synthesis, automatic speech recognition, speaker identification, communication aids, speech pathology, speech perception, machine translation, hearing research, rehabilitation and assessment of communication disorders and many…

  19. College Students' Perceptions of the C-Print Speech-to-Text Transcription System.

    Science.gov (United States)

    Elliot, L B; Stinson, M S; McKee, B G; Everhart, V S; Francis, P J

    2001-01-01

    C-Print is a real-time speech-to-text transcription system used as a support service with deaf students in mainstreamed classes. Questionnaires were administered to 36 college students in 32 courses in which the C-Print system was used in addition to interpreting and note taking. Twenty-two of these students were also interviewed. Questionnaire items included student ratings of lecture comprehension. Student ratings indicated good comprehension with C-Print, and the mean rating was significantly higher than that for understanding of the interpreter. Students also rated the hard copy printout provided by C-Print as helpful, and they reported that they used these notes more frequently than the handwritten notes from a paid student note taker. Interview results were consistent with those for the questionnaire. Questionnaire and interview responses regarding use of C-Print as the only support service indicated that this arrangement would be acceptable to many students, but not to others. Communication characteristics were related to responses to the questionnaire. Students who were relatively proficient in reading and writing English, and in speech-reading, responded more favorably to C-Print.

  20. Piano training enhances the neural processing of pitch and improves speech perception in Mandarin-speaking children.

    Science.gov (United States)

    Nan, Yun; Liu, Li; Geiser, Eveline; Shu, Hua; Gong, Chen Chen; Dong, Qi; Gabrieli, John D E; Desimone, Robert

    2018-06-25

    Musical training confers advantages in speech-sound processing, which could play an important role in early childhood education. To understand the mechanisms of this effect, we used event-related potential and behavioral measures in a longitudinal design. Seventy-four Mandarin-speaking children aged 4-5 y old were pseudorandomly assigned to piano training, reading training, or a no-contact control group. Six months of piano training improved behavioral auditory word discrimination in general as well as word discrimination based on vowels compared with the controls. The reading group yielded similar trends. However, the piano group demonstrated unique advantages over the reading and control groups in consonant-based word discrimination and in enhanced positive mismatch responses (pMMRs) to lexical tone and musical pitch changes. The improved word discrimination based on consonants correlated with the enhancements in musical pitch pMMRs among the children in the piano group. In contrast, all three groups improved equally on general cognitive measures, including tests of IQ, working memory, and attention. The results suggest strengthened common sound processing across domains as an important mechanism underlying the benefits of musical training on language processing. In addition, although we failed to find far-transfer effects of musical training to general cognition, the near-transfer effects to speech perception establish the potential for musical training to help children improve their language skills. Piano training was not inferior to reading training on direct tests of language function, and it even seemed superior to reading training in enhancing consonant discrimination.

  1. The Effects of Hearing Protectors on Speech Communication and the Perception of Warning Signals

    Science.gov (United States)

    1989-06-01

    situations, and cite Talamo (1982) as showing this problem with tractor drivers. Coleman et al. raise questions as to the practical significance of this...conditions: Toward a new theory of localization. J. Aud. Res., 16, 143-150, 1976. Talamo, J.D.C. Hearing in tractor cabs: Perception and directional effects

  2. Measurement of pitch in speech : an implementation of Goldstein's theory of pitch perception

    NARCIS (Netherlands)

    Duifhuis, H.; Willems, L.F.; Sluyter, R.J.

    1982-01-01

    Recent developments in hearing theory have resulted in the rather general acceptance of the idea that the perception of pitch of complex sounds is the result of a psychological pattern recognition process. The pitch is supposedly mediated by the fundamental of the harmonic spectrum which fits the
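
    The pattern-recognition account can be illustrated with a toy harmonic-matching pitch estimator: given the frequencies of resolved spectral components, search for the fundamental whose harmonic series best fits them. This is a hedged sketch in the spirit of Goldstein's theory, not the paper's implementation; the search range, grid step, and least-squares error measure are illustrative assumptions.

```python
def estimate_pitch(components, f0_min=50.0, f0_max=500.0, step=0.5):
    """Return the candidate F0 (Hz) whose harmonic series best fits
    the measured component frequencies (least-squares mismatch)."""
    best_f0, best_err = None, float("inf")
    n_steps = int((f0_max - f0_min) / step)
    for i in range(n_steps + 1):
        f0 = f0_max - i * step  # scan high to low so subharmonics lose ties
        err = sum((f - max(1, round(f / f0)) * f0) ** 2 for f in components)
        if err < best_err:
            best_f0, best_err = f0, err
    return best_f0

# A "missing fundamental" stimulus: harmonics 3, 4, and 5 of 200 Hz.
print(estimate_pitch([600.0, 800.0, 1000.0]))  # → 200.0
```

    The components at 600, 800, and 1000 Hz contain no energy at 200 Hz, yet the best-fitting harmonic pattern, and the pitch listeners report, is the absent fundamental.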

  3. Pragmatic Difficulties in the Production of the Speech Act of Apology by Iraqi EFL Learners

    Science.gov (United States)

    Al-Ghazalli, Mehdi Falih; Al-Shammary, Mohanad A. Amert

    2014-01-01

    The purpose of this paper is to investigate the pragmatic difficulties encountered by Iraqi EFL university students in producing the speech act of apology. Although the act of apology is easy to recognize or use by native speakers of English, non-native speakers generally encounter difficulties in discriminating one speech act from another. The…

  4. Speech perception in older hearing impaired listeners: benefits of perceptual training.

    Directory of Open Access Journals (Sweden)

    David L Woods

    Hearing aids (HAs) only partially restore the ability of older hearing impaired (OHI) listeners to understand speech in noise, due in large part to persistent deficits in consonant identification. Here, we investigated whether adaptive perceptual training would improve consonant identification in noise in sixteen aided OHI listeners who underwent 40 hours of computer-based training in their homes. Listeners identified 20 onset and 20 coda consonants in 9,600 consonant-vowel-consonant (CVC) syllables containing different vowels (/ɑ/, /i/, or /u/) and spoken by four different talkers. Consonants were presented at three consonant-specific signal-to-noise ratios (SNRs) spanning a 12 dB range. Noise levels were adjusted over training sessions based on d' measures. Listeners were tested before and after training to measure (1) changes in consonant-identification thresholds using syllables spoken by familiar and unfamiliar talkers, and (2) sentence reception thresholds (SeRTs) using two different sentence tests. Consonant-identification thresholds improved gradually during training. Laboratory tests of d' thresholds showed an average improvement of 9.1 dB, with 94% of listeners showing statistically significant training benefit. Training normalized consonant confusions and improved the thresholds of some consonants into the normal range. Benefits were equivalent for onset and coda consonants, syllables containing different vowels, and syllables presented at different SNRs. Greater training benefits were found for hard-to-identify consonants and for consonants spoken by familiar than unfamiliar talkers. SeRTs, tested with simple sentences, showed less elevation than consonant-identification thresholds prior to training and failed to show significant training benefit, although SeRT improvements did correlate with improvements in consonant thresholds. We argue that the lack of SeRT improvement reflects the dominant role of top-down semantic processing in
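
    Consonant identification in the training task was tracked with d' (d-prime), the standard signal-detection index. As a minimal sketch (the study's exact scoring procedure is not specified here), d' is the difference between the normal-inverse-transformed hit and false-alarm rates:

```python
from statistics import NormalDist  # Python 3.8+

def d_prime(hit_rate, fa_rate):
    """d' = z(hit rate) - z(false-alarm rate), where z is the
    inverse CDF of the standard normal distribution."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# 84% hits vs. 16% false alarms: signal and noise distributions
# separated by roughly two standard deviations.
print(round(d_prime(0.84, 0.16), 2))  # → 1.99
```

    Rates of exactly 0 or 1 make the inverse CDF diverge, so in practice they are nudged away from the extremes (e.g. by 1/(2N)) before computing d'.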

  5. Perception of Music and Speech in Adolescents with Cochlear Implants – A Pilot Study on Effects of Intensive Musical Ear Training

    DEFF Research Database (Denmark)

    Petersen, Bjørn; Sørensen, Stine Derdau; Pedersen, Ellen Raben

    their standard school schedule and received no music training. Before and after the intervention period, both groups completed a set of tests for perception of music, speech and emotional prosody. In addition, the participants filled out a questionnaire which examined music listening habits and enjoyment....... RESULTS CI users significantly improved their overall music perception and discrimination of melodic contour and rhythm in particular. No effect of the music training was found on discrimination of emotional prosody or speech. The CI users described levels of music engagement and enjoyment that were...... combined with their positive feedback suggests that music training could form part of future rehabilitation programs as a strong, motivational and beneficial method of improving auditory skills in adolescent CI users....

  6. Some Neurocognitive Correlates of Noise-Vocoded Speech Perception in Children with Normal Hearing: A Replication and Extension of Eisenberg et al., 2002

    Science.gov (United States)

    Roman, Adrienne S.; Pisoni, David B.; Kronenberger, William G.; Faulkner, Kathleen F.

    2016-01-01

    Objectives Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral-degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by Eisenberg et al. (2002) who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention and response set, talker discrimination and verbal and nonverbal short-term working memory. Design Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (PPVT-4 and EVT-2) and measures of auditory attention (NEPSY Auditory Attention (AA) and Response Set (RS) and a talker discrimination task (TD)) and short-term memory (visual digit and symbol spans). Results Consistent with the findings reported in the original Eisenberg et al. (2002) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the PPVT-4 using language quotients to control for age effects. However, children who scored higher on the EVT-2 recognized lexically easy words
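
    A noise vocoder of the kind used here divides the speech spectrum into a small number of bands, extracts each band's amplitude envelope, and uses each envelope to modulate band-limited noise. One standard ingredient is the set of band edges; the sketch below computes logarithmically spaced cutoffs for a 4-channel vocoder. The 200-7000 Hz analysis span is an illustrative assumption, not the study's reported parameter.

```python
def band_edges(n_channels, f_lo, f_hi):
    """Return n_channels + 1 logarithmically spaced cutoffs (Hz),
    so that every band spans the same frequency ratio."""
    ratio = (f_hi / f_lo) ** (1.0 / n_channels)
    return [f_lo * ratio ** i for i in range(n_channels + 1)]

print([round(f) for f in band_edges(4, 200.0, 7000.0)])
# → [200, 486, 1183, 2878, 7000]
```

    Log spacing is a rough stand-in for cochlear frequency spacing; published vocoder studies often use Greenwood-map spacing instead, but the band-edge idea is the same.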

  7. Long-term trends of native and non-native fish faunas in the American Southwest

    Directory of Open Access Journals (Sweden)

    Olden, J. D.

    2005-06-01

    Environmental degradation and the proliferation of non-native fish species threaten the endemic and highly unique fish faunas of the American Southwest. The present study examines long-term trends (> 160 years) of fish species distributions in the Lower Colorado River Basin and identifies those native species (n = 28) exhibiting the greatest rates of decline and those non-native species (n = 48) exhibiting the highest rates of spread. Among the fastest expanding invaders in the basin are red shiner (Cyprinella lutrensis), fathead minnow (Pimephales promelas), green sunfish (Lepomis cyanellus), largemouth bass (Micropterus salmoides), western mosquitofish (Gambusia affinis) and channel catfish (Ictalurus punctatus); species considered to be the most invasive in terms of their negative impacts on native fish communities. Interestingly, non-native species that have been recently introduced (1950+) have generally spread at substantially lower rates as compared to species introduced prior to this time (especially from 1920 to 1950), likely reflecting reductions in human-aided spread of species. We found general agreement between patterns of species decline and extant distribution sizes and official listing status under the U.S. Endangered Species Act. ‘Endangered’ species have generally experienced greater declines and have smaller present-day distributions compared to ‘threatened’ species, which in turn have shown greater declines and smaller distributions than those species not currently listed. A number of notable exceptions did exist, however, and these may provide critical information to help guide the future listing of species (i.e., identification of candidates and the upgrading or downgrading of current listed species) that are endemic to the Lower Colorado River Basin. The strong correlation between probability estimates of local extirpation and patterns of native species decline and present-day distributions suggests a possible proactive

  8. Environmental niche separation between native and non-native benthic invertebrate species: Case study of the northern Baltic Sea.

    Science.gov (United States)

    Jänes, Holger; Herkül, Kristjan; Kotta, Jonne

    2017-10-01

    Knowledge and understanding of geographic distributions of species is crucial for many aspects in ecology, conservation, policy making and management. In order to reach such an understanding, it is important to know abiotic variables that impact and drive distributions of native and non-native species. We used an existing long-term macrobenthos database for species presence-absence information and biomass estimates at different environmental gradients in the northern Baltic Sea. Region specific abiotic variables (e.g. salinity, depth) were derived from previously constructed bathymetric and hydrodynamic models. Multidimensional ordination techniques were then applied to investigate potential niche space separation between all native and non-native invertebrates in the northern Baltic Sea. Such an approach allowed us to obtain data-rich and robust estimates of the current native and non-native species distributions and outline important abiotic parameters influencing the observed pattern. The results showed clear niche space separation between native and non-native species. Non-native species were situated in an environmental space characterized by reduced salinity, high temperatures, high proportion of soft seabed and decreased depth and wave exposure whereas native species displayed an opposite pattern. Different placement of native and non-native species along the studied environmental niche space is likely to be explained by the differences in their evolutionary history, human mediated activities and geological youth of the Baltic Sea. The results of this study can provide early warnings and effectively outline coastal areas in the northern Baltic Sea that are prone to further range expansion of non-native species as climate change is expected to significantly reduce salinity and increase temperature in wide coastal areas, both supporting the disappearance of native and appearance of non-native species. Copyright © 2017 Elsevier Ltd. All rights reserved.
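
    Ordinations of the kind described start from a matrix of pairwise community dissimilarities. The abstract does not name the measure; Bray-Curtis dissimilarity is a common choice for abundance data in benthic studies, and is simple to compute:

```python
def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors:
    0 means identical communities, 1 means no shared taxa."""
    shared = sum(min(a, b) for a, b in zip(x, y))
    return 1.0 - 2.0 * shared / (sum(x) + sum(y))

# Hypothetical counts of three taxa at two sampling stations.
print(bray_curtis([6, 7, 4], [10, 0, 6]))  # 13/33 ≈ 0.394
```

    A full ordination would assemble these pairwise values into a station-by-station matrix and pass it to a non-metric multidimensional scaling routine.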

  9. Status and management of non-native plant invasion in three of the largest national parks in the United States

    Directory of Open Access Journals (Sweden)

    Scott Abella

    2015-06-01

    Globally, invasion by non-native plants threatens resources that nature reserves are designated to protect. We assessed the status of non-native plant invasion on 1,662 0.1-ha plots in Death Valley National Park, Mojave National Preserve, and Lake Mead National Recreation Area. These parks comprise 2.5 million ha, 23% of the national park land in the contiguous USA. At least one non-native species inhabited 82% of plots. Thirty-one percent of plots contained one non-native species, 30% two, 17% three, and 4% four to ten non-native species. Red brome (Bromus rubens), an ‘ecosystem engineer’ that alters fire regimes, was most widespread, infesting 60% of plots. By identifying frequency of species through this assessment, early detection and treatment can target infrequent species or minimally invaded sites, while containment strategies could focus on established invaders. We further compared two existing systems for prioritizing species for management and found that a third of species on plots had no rankings available. Moreover, rankings did not always agree between ranking systems for species that were ranked. Presence of multiple non-native species complicates treatment, and while we found that 40% of plots contained both forb and grass invaders, exploiting accelerated phenology of non-natives (compared to native annuals) might help manage multi-species invasions. Large sizes of these parks and scale of invasion are formidable challenges for management. Yet, precisely because of their size, these reserves represent opportunities to conserve large landscapes of native species by managing non-native plant invasions.

  10. Small mammal use of native warm-season and non-native cool-season grass forage fields

    Science.gov (United States)

    Ryan L Klimstra,; Christopher E Moorman,; Converse, Sarah J.; Royle, J. Andrew; Craig A Harper,

    2015-01-01

    Recent emphasis has been put on establishing native warm-season grasses for forage production because it is thought native warm-season grasses provide higher quality wildlife habitat than do non-native cool-season grasses. However, it is not clear whether native warm-season grass fields provide better resources for small mammals than currently are available in non-native cool-season grass forage production fields. We developed a hierarchical spatially explicit capture-recapture model to compare abundance of hispid cotton rats (Sigmodon hispidus), white-footed mice (Peromyscus leucopus), and house mice (Mus musculus) among 4 hayed non-native cool-season grass fields, 4 hayed native warm-season grass fields, and 4 native warm-season grass-forb ("wildlife") fields managed for wildlife during 2 summer trapping periods in 2009 and 2010 of the western piedmont of North Carolina, USA. Cotton rat abundance estimates were greater in wildlife fields than in native warm-season grass and non-native cool-season grass fields and greater in native warm-season grass fields than in non-native cool-season grass fields. Abundances of white-footed mouse and house mouse populations were lower in wildlife fields than in native warm-season grass and non-native cool-season grass fields, but the abundances were not different between the native warm-season grass and non-native cool-season grass fields. Lack of cover following haying in non-native cool-season grass and native warm-season grass fields likely was the key factor limiting small mammal abundance, especially cotton rats, in forage fields. Retention of vegetation structure in managed forage production systems, either by alternately resting cool-season and warm-season grass forage fields or by leaving unharvested field borders, should provide refugia for small mammals during haying events.

  11. Impact of non-native terrestrial mammals on the structure of the terrestrial mammal food web of Newfoundland, Canada.

    Directory of Open Access Journals (Sweden)

    Justin S Strong

    The island of Newfoundland is unique because it has as many non-native terrestrial mammals as native ones. The impacts of non-native species on native flora and fauna can be profound and invasive species have been identified as one of the primary drivers of species extinction. Few studies, however, have investigated the effects of a non-native species assemblage on community and ecosystem properties. We reviewed the literature to build the first terrestrial mammal food web for the island of Newfoundland and then used network analyses to investigate how the timing of introductions and trophic position of non-native species has affected the structure of the terrestrial mammal food web in Newfoundland. The first non-native mammals (house mouse and brown rat) became established in Newfoundland with human settlement in the late 15th and early 16th centuries. Coyotes and southern red-backed voles are the most recent mammals to establish themselves on the island in 1985 and 1998, respectively. The fraction of intermediate species increased with the addition of non-native mammals over time whereas the fraction of basal and top species declined over time. This increase in intermediate species mediated by non-native species arrivals led to an overall increase in the terrestrial mammal food web connectance and generality (i.e., mean number of prey per predator). This diverse prey base and sources of carrion may have facilitated the natural establishment of coyotes on the island. Also, there is some evidence that the introduction of non-native prey species such as the southern red-backed vole has contributed to the recovery of the threatened American marten. Long-term monitoring of the food web is required to understand and predict the impacts of the diverse novel interactions that are developing in the terrestrial mammal food web of Newfoundland.
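
    The two network metrics named in the abstract have simple definitions on a list of predator-to-prey links: connectance is the realized fraction of the S² possible directed links, and generality is the mean number of prey per predator. A sketch on a toy web (species and links invented for illustration, not the Newfoundland data):

```python
def connectance(species, links):
    """Realized links divided by the S^2 possible directed links."""
    return len(links) / len(species) ** 2

def generality(links):
    """Mean number of prey per predator."""
    prey_counts = {}
    for predator, _prey in links:
        prey_counts[predator] = prey_counts.get(predator, 0) + 1
    return sum(prey_counts.values()) / len(prey_counts)

species = ["plant", "vole", "marten", "coyote"]
links = [("vole", "plant"), ("marten", "vole"),
         ("coyote", "vole"), ("coyote", "marten")]
print(connectance(species, links))  # → 0.25 (4 of 16 possible links)
print(generality(links))            # 4 links / 3 predators ≈ 1.33
```

    Adding an intermediate species that both eats and is eaten adds links faster than it adds species, which is why the observed rise in intermediate species raised both metrics.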

  12. Psychoacoustic Performance and Music and Speech Perception in Prelingually Deafened Children with Cochlear Implants

    OpenAIRE

    Jung, Kyu Hwan; Won, Jong Ho; Drennan, Ward R.; Jameyson, Elyse; Miyasaki, Gary; Norton, Susan J.; Rubinstein, Jay T.

    2012-01-01

    The number of pediatric cochlear implant (CI) recipients has increased substantially over the past 10 years, and it has become more important to understand the underlying mechanisms of the variable outcomes in this population. In this study, psychoacoustic measures of spectral-ripple and Schroeder-phase discrimination, the Clinical Assessment of Music Perception, and consonant-nucleus-consonant (CNC) word recognition in quiet and spondee reception threshold (SRT) in noise tests have been pres...

  13. Monolingual and Bilingual Infants’ Ability to Use Non-native Tone for Word Learning Deteriorates by the Second Year After Birth

    Directory of Open Access Journals (Sweden)

    Liquan Liu

    2018-03-01

    Previous studies reported a non-native word learning advantage for bilingual infants at around 18 months. We investigated developmental changes in infant interpretation of sounds that aid in object mapping. Dutch monolingual and bilingual (exposed to Dutch and a second non-tone language) infants’ word learning ability was examined on two novel label–object pairings using syllables differing in Mandarin tones as labels (flat vs. falling). Infants aged 14–15 months, regardless of language backgrounds, were sensitive to violations in the label–object pairings when lexical tones were switched compared to when they were the same as habituated. Conversely at 17–18 months, neither monolingual nor bilingual infants demonstrated learning. Linking with existing literature, infants’ ability to associate non-native tones with meanings may be related to tonal acoustic properties and/or perceptual assimilation to native prosodic categories. These findings provide new insights into the relation between infant tone perception, learning, and interpretative narrowing from a developmental perspective.

  14. A global organism detection and monitoring system for non-native species

    Science.gov (United States)

    Graham, J.; Newman, G.; Jarnevich, C.; Shory, R.; Stohlgren, T.J.

    2007-01-01

    Harmful invasive non-native species are a significant threat to native species and ecosystems, and the costs associated with non-native species in the United States are estimated at over $120 billion/year. While some local or regional databases exist for some taxonomic groups, there are no effective geographic databases designed to detect and monitor all species of non-native plants, animals, and pathogens. We developed a web-based solution called the Global Organism Detection and Monitoring (GODM) system to provide real-time data from a broad spectrum of users on the distribution and abundance of non-native species, including attributes of their habitats for predictive spatial modeling of current and potential distributions. The four major subsystems of GODM provide dynamic links between the organism data, web pages, spatial data, and modeling capabilities. The core survey database tables for recording invasive species survey data are organized into three categories: "Where, Who & When, and What." Organisms are identified with Taxonomic Serial Numbers from the Integrated Taxonomic Information System. To allow users to immediately see a map of their data combined with other users' data, a custom geographic information system (GIS) Internet solution was required. The GIS solution provides an unprecedented level of flexibility in database access, allowing users to display maps of invasive species distributions or abundances based on various criteria including taxonomic classification (i.e., phylum or division, order, class, family, genus, species, subspecies, and variety), a specific project, a range of dates, and a range of attributes (percent cover, age, height, sex, weight). This is a significant paradigm shift from "map servers" to true Internet-based GIS solutions. The remainder of the system was created with a mix of commercial products, open source software, and custom software. Custom GIS libraries were created where required for processing large datasets
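
    The "Where, Who & When, What" grouping described above maps naturally onto a structured survey record keyed by an ITIS Taxonomic Serial Number (TSN). The field names below are a hypothetical illustration of that grouping, not GODM's actual schema, and the example values are invented:

```python
from dataclasses import dataclass

@dataclass
class SurveyRecord:
    # Where
    latitude: float
    longitude: float
    # Who & When
    observer: str
    date: str            # ISO 8601 date of the observation
    project: str
    # What
    tsn: int             # ITIS Taxonomic Serial Number of the organism
    percent_cover: float

# A hypothetical detection report (all values invented for illustration).
rec = SurveyRecord(latitude=45.3, longitude=-117.2, observer="j.smith",
                   date="2007-06-15", project="demo", tsn=36262,
                   percent_cover=12.5)
print(rec.tsn)  # → 36262
```

    Keying on the TSN rather than a free-text name is what lets records from many users roll up consistently across the taxonomic hierarchy.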

  15. Seed rain under native and non-native tree species in the Cabo Rojo National Wildlife Refuge, Puerto Rico.

    Science.gov (United States)

    Arias Garcia, Andrea; Chinea, J Danilo

    2014-09-01

    Seed dispersal is a fundamental process in plant ecology and is of critical importance for the restoration of tropical communities. The lands of the Cabo Rojo National Wildlife Refuge (CRNWR), formerly under agriculture, were abandoned in the 1970s and colonized mainly by non-native tree species of degraded pastures. Here we described the seed rain under the most common native and non-native trees in the refuge in an attempt to determine if focal tree geographic origin (native versus non-native) influences seed dispersal. For this, seed rain was sampled for one year under the canopies of four native and four non-native tree species common in this refuge using 40 seed traps. No significant differences were found for the abundance of seeds, or their diversity, dispersing under native versus non-native focal tree species, nor under the different tree species. A significantly different seed species composition was observed reaching native versus non-native focal species. However, this last result could be more easily explained as a function of distance of the closest adults of the two most abundantly dispersed plant species to the seed traps than as a function of the geographic origin of the focal species. We suggest continuing the practice of planting native tree species, not only as a way to restore the community to a condition similar to the original one, but also to reduce the distances needed for effective dispersal.

  16. General contrast effects in speech perception: effect of preceding liquid on stop consonant identification.

    Science.gov (United States)

    Lotto, A J; Kluender, K R

    1998-05-01

    When members of a series of synthesized stop consonants varying acoustically in F3 characteristics and varying perceptually from /da/ to /ga/ are preceded by /al/, subjects report hearing more /ga/ syllables relative to when each member is preceded by /ar/ (Mann, 1980). It has been suggested that this result demonstrates the existence of a mechanism that compensates for coarticulation via tacit knowledge of articulatory dynamics and constraints, or through perceptual recovery of vocal-tract dynamics. The present study was designed to assess the degree to which these perceptual effects are specific to qualities of human articulatory sources. In three experiments, series of consonant-vowel (CV) stimuli varying in F3-onset frequency (/da/-/ga/) were preceded by speech versions or nonspeech analogues of /al/ and /ar/. The effect of liquid identity on stop consonant labeling remained when the preceding VC was produced by a female speaker and the CV syllable was modeled after a male speaker's productions. Labeling boundaries also shifted when the CV was preceded by a sine wave glide modeled after F3 characteristics of /al/ and /ar/. Identifications shifted even when the preceding sine wave was of constant frequency equal to the offset frequency of F3 from a natural production. These results suggest an explanation in terms of general auditory processes as opposed to recovery of or knowledge of specific articulatory dynamics.
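
    The critical nonspeech precursors here are sine waves following F3: /al/ ends with a high F3 and /ar/ with a low F3 (roughly 2700 vs. 1600 Hz in adult male speech). Below is a minimal sketch of synthesizing such a glide by phase accumulation; the frequencies, duration, and sample rate are illustrative assumptions, not the study's stimulus values.

```python
import math

def sine_glide(f_start, f_end, dur, sr=16000):
    """Synthesize a linear frequency glide as a list of samples in [-1, 1]."""
    n = int(dur * sr)
    samples, phase = [], 0.0
    for i in range(n):
        f = f_start + (f_end - f_start) * i / n  # instantaneous frequency
        phase += 2.0 * math.pi * f / sr          # accumulate phase
        samples.append(math.sin(phase))
    return samples

# A 50-ms falling glide, analogous to an F3 transition toward /r/;
# passing f_start == f_end yields the constant-frequency control tone.
glide = sine_glide(2700.0, 1600.0, 0.05)
```

    Accumulating phase (rather than computing sin(2*pi*f(t)*t) directly) keeps the waveform continuous as the frequency changes, which is the standard way to avoid clicks in glide stimuli.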

  17. Gender differences in the activation of inferior frontal cortex during emotional speech perception.

    Science.gov (United States)

    Schirmer, Annett; Zysset, Stefan; Kotz, Sonja A; Yves von Cramon, D

    2004-03-01

    We investigated the brain regions that mediate the processing of emotional speech in men and women by presenting positive and negative words that were spoken with happy or angry prosody. Hence, emotional prosody and word valence were either congruous or incongruous. We assumed that an fMRI contrast between congruous and incongruous presentations would reveal the structures that mediate the interaction of emotional prosody and word valence. The left inferior frontal gyrus (IFG) was more strongly activated in incongruous as compared to congruous trials. This difference in IFG activity was significantly larger in women than in men. Moreover, the congruence effect was significant in women whereas it only appeared as a tendency in men. As the left IFG has been repeatedly implicated in semantic processing, these findings are taken as evidence that semantic processing in women is more susceptible to influences from emotional prosody than is semantic processing in men. Moreover, the present data suggest that the left IFG mediates increased semantic processing demands imposed by an incongruence between emotional prosody and word valence.

  18. Individual differences in language ability are related to variation in word recognition, not speech perception: evidence from eye movements.

    Science.gov (United States)

    McMurray, Bob; Munson, Cheyenne; Tomblin, J Bruce

    2014-08-01

    The authors examined speech