WorldWideScience

Sample records for non-native speech perception

  1. The role of abstraction in non-native speech perception.

    Science.gov (United States)

    Pajak, Bozena; Levy, Roger

    2014-09-01

    The end result of perceptual reorganization in infancy is currently viewed as a reconfigured perceptual space, "warped" around native-language phonetic categories, which then acts as a direct perceptual filter on any non-native sounds: naïve-listener discrimination of non-native sounds is determined by their mapping onto native-language phonetic categories that are acoustically/articulatorily most similar. We report results that suggest another factor in non-native speech perception: some perceptual sensitivities cannot be attributed to listeners' warped perceptual space alone, but rather to enhanced general sensitivity along phonetic dimensions that the listeners' native language employs to distinguish between categories. Specifically, we show that knowledge of a language with short and long vowel categories leads to enhanced discrimination of non-native consonant length contrasts. We argue that these results support a view of perceptual reorganization as the consequence of learners' hierarchical inductive inferences about the structure of the language's sound system: infants not only acquire the specific phonetic category inventory, but also draw higher-order generalizations over the set of those categories, such as the overall informativity of phonetic dimensions for sound categorization. Non-native sound perception is then also determined by sensitivities that emerge from these generalizations, rather than only by mappings of non-native sounds onto native-language phonetic categories.

  2. Non-native speech perception in adverse conditions: A review

    NARCIS (Netherlands)

    Garcia Lecumberri, M.L.; Cooke, M.P.; Cutler, A.

    2010-01-01

    If listening in adverse conditions is hard, then listening in a foreign language is doubly so: non-native listeners have to cope with both imperfect signals and imperfect knowledge. Comparison of native and non-native listener performance in speech-in-noise tasks helps to clarify the role of prior l

  3. The influence of non-native language proficiency on speech perception performance

    Directory of Open Access Journals (Sweden)

    Lisa eKilman

    2014-07-01

    The present study examined to what extent proficiency in a non-native language influences speech perception in noise. We explored how English proficiency affected native (Swedish) and non-native (English) speech perception in four speech reception threshold (SRT) conditions, including two energetic (stationary, fluctuating noise) and two informational (two-talker babble Swedish, two-talker babble English) maskers. Twenty-three normal-hearing native Swedish listeners participated, aged between 28 and 64 years. The participants also performed standardized tests of English proficiency, non-verbal reasoning and working memory capacity. Our approach, with its focus on proficiency and the assessment of external as well as internal, listener-related factors, allowed us to examine which variables explained intra- and interindividual differences in native and non-native speech perception performance. The main result was that for the non-native target, the level of English proficiency is a decisive factor for speech intelligibility in noise. High English proficiency improved performance in all four conditions when the target language was English. The informational maskers interfered more with perception than the energetic maskers, specifically in the non-native language. The study also confirmed that SRTs were better when the target language was native rather than non-native.

  4. Native Speakers' Perception of Non-Native English Speech

    Science.gov (United States)

    Jaber, Maysa; Hussein, Riyad F.

    2011-01-01

    This study is aimed at investigating the rating and intelligibility of different non-native varieties of English, namely French English, Japanese English and Jordanian English by native English speakers and their attitudes towards these foreign accents. To achieve the goals of this study, the researchers used a web-based questionnaire which…

  5. Decoding speech perception by native and non-native speakers using single-trial electrophysiological data.

    Directory of Open Access Journals (Sweden)

    Alex Brandmeyer

    Brain-computer interfaces (BCIs) are systems that use real-time analysis of neuroimaging data to determine the mental state of their user for purposes such as providing neurofeedback. Here, we investigate the feasibility of a BCI based on speech perception. Multivariate pattern classification methods were applied to single-trial EEG data collected during speech perception by native and non-native speakers. Two principal questions were asked: (1) Can differences in the perceived categories of pairs of phonemes be decoded at the single-trial level? (2) Can these same categorical differences be decoded across participants, within or between native-language groups? Results indicated that classification performance progressively increased with respect to the categorical status (within, boundary, or across) of the stimulus contrast, and was also influenced by the native language of individual participants. Classifier performance showed strong relationships with traditional event-related potential measures and behavioral responses. The results of the cross-participant analysis indicated an overall increase in average classifier performance when trained on data from all participants (native and non-native). A second cross-participant classifier trained only on data from native speakers led to an overall improvement in performance for native speakers, but a reduction in performance for non-native speakers. We also found that the native language of a given participant could be decoded on the basis of EEG data with accuracy above 80%. These results indicate that electrophysiological responses underlying speech perception can be decoded at the single-trial level, and that decoding performance systematically reflects graded changes in the responses related to the phonological status of the stimuli. This approach could be used in extensions of the BCI paradigm to support perceptual learning during second language acquisition.
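
    The abstract leaves the classifier pipeline unspecified; below is a rough, hypothetical illustration of single-trial multivariate pattern classification on EEG epochs (synthetic data; the LDA choice, channel count, and epoch length are assumptions, not the authors' setup):

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 32 * 50))  # 200 trials, 32 channels x 50 samples, flattened
        y = rng.integers(0, 2, size=200)     # perceived phoneme category per trial

        clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
        scores = cross_val_score(clf, X, y, cv=5)  # chance is ~0.5 for two classes
        print(f"single-trial decoding accuracy: {scores.mean():.2f}")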

  6. A Multidimensional Scaling Study of Native and Non-Native Listeners' Perception of Second Language Speech.

    Science.gov (United States)

    Foote, Jennifer A; Trofimovich, Pavel

    2016-04-01

    Second language speech learning is predicated on learners' ability to notice differences between their own language output and that of their interlocutors. Because many learners interact primarily with other second language users, it is crucial to understand which dimensions underlie the perception of second language speech by learners, compared to native speakers. For this study, 15 non-native and 10 native English speakers rated 30-s language audio-recordings from controlled reading and interview tasks for dissimilarity, using all pairwise combinations of recordings. PROXSCAL multidimensional scaling analyses revealed fluency and aspects of speakers' pronunciation as components underlying listener judgments but showed little agreement across listeners. Results contribute to an understanding of why second language speech learning is difficult and provide implications for language training.
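
    As an illustration of the analysis type (with scikit-learn's metric MDS standing in for PROXSCAL, and an invented 4-recording dissimilarity matrix):

        import numpy as np
        from sklearn.manifold import MDS

        # Hypothetical symmetric dissimilarity ratings for 4 recordings (0 = identical)
        D = np.array([[0.0, 2.1, 3.4, 1.2],
                      [2.1, 0.0, 1.8, 2.9],
                      [3.4, 1.8, 0.0, 3.0],
                      [1.2, 2.9, 3.0, 0.0]])

        mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
        coords = mds.fit_transform(D)  # one 2-D point per recording
        print(coords)                  # axes are interpreted post hoc (e.g., fluency)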

  7. Age of Acquisition and Proficiency in a Second Language Independently Influence the Perception of Non-Native Speech

    Science.gov (United States)

    Archila-Suerte, Pilar; Zevin, Jason; Bunta, Ferenc; Hernandez, Arturo E.

    2012-01-01

    Sensorimotor processing in children and higher-cognitive processing in adults could determine how non-native phonemes are acquired. This study investigates how age-of-acquisition (AOA) and proficiency-level (PL) predict native-like perception of statistically dissociated L2 categories, i.e., within-category and between-category. In a similarity…

  8. Using the Speech Transmission Index to predict the intelligibility of non-native speech

    Science.gov (United States)

    van Wijngaarden, Sander J.; Steeneken, Herman J. M.; Houtgast, Tammo; Bronkhorst, Adelbert W.

    2002-05-01

    The calibration of the Speech Transmission Index (STI) is based on native speech, presented to native listeners. This means that the STI predicts speech intelligibility under the implicit assumption of fully native communication. In order to assess effects of both non-native production and non-native perception of speech, the intelligibility of short sentences was measured in various non-native scenarios, as a function of speech-to-noise ratio. Since each speech-to-noise ratio is associated with a unique STI value, this establishes the relation between sentence intelligibility and STI. The difference between native and non-native intelligibility as a function of STI was used to calculate a correction function for the STI for each separate non-native scenario. This correction function was applied to the STI ranges corresponding to certain intelligibility categories (bad-excellent). Depending on the proficiency of non-native talkers and listeners, the category boundaries were found to differ from the standard (native) boundaries by STI values up to 0.30 (on the standard 0-1 scale). The corrections needed for non-native listeners are greater than for non-native talkers with a similar level of proficiency. For some categories of non-native communicators, the qualification excellent requires an STI higher than 1.00, and therefore cannot be reached.
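
    A toy sketch of how such corrected category boundaries could be applied, assuming the standard STI qualification bands and an illustrative correction of 0.15 (the actual corrections are scenario-specific and reach 0.30):

        # Standard (native) STI qualification band boundaries
        BANDS = [(0.30, "bad"), (0.45, "poor"), (0.60, "fair"),
                 (0.75, "good"), (1.00, "excellent")]

        def qualify(sti, correction=0.0):
            """Label an STI value; `correction` shifts all boundaries upward
            for a given non-native scenario (illustrative, not from the paper)."""
            for upper, label in BANDS:
                if sti <= upper + correction:
                    return label
            return "unreachable"  # with a 0.30 shift, 'excellent' needs STI > 1.05

        print(qualify(0.70))                   # native: 'good'
        print(qualify(0.70, correction=0.15))  # corrected scenario: 'fair'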

  9. Speech intelligibility of native and non-native speech

    NARCIS (Netherlands)

    Wijngaarden, S.J. van

    1999-01-01

    The intelligibility of speech is known to be lower if the talker is non-native instead of native for the given language. This study is aimed at quantifying the overall degradation due to acoustic-phonetic limitations of non-native talkers of Dutch, specifically of Dutch-speaking Americans who have l

  10. Phonetic training and non-native speech perception--New memory traces evolve in just three days as indexed by the mismatch negativity (MMN) and behavioural measures.

    Science.gov (United States)

    Tamminen, Henna; Peltola, Maija S; Kujala, Teija; Näätänen, Risto

    2015-07-01

    Language-specific, automatically responding memory traces form the basis for speech sound perception, and new neural representations can also evolve for non-native speech categories. The aim of this study was to find out how a three-day phonetic listen-and-repeat training affects speech perception, and whether it generates new memory traces. We used behavioural identification, goodness rating, discrimination, and reaction time tasks together with mismatch negativity (MMN) brain response registrations to determine the training effects on native Finnish speakers. We trained the subjects on the voicing contrast in fricative sounds. Fricatives are not differentiated by voicing in Finnish, i.e., voiced fricatives do not belong to the Finnish phonological system. Therefore, they are extremely hard for Finns to learn. However, after only three days of training, the native Finnish subjects had learned to perceive the distinction. The results show striking changes in the MMN response; it was significantly larger on the second day, after two training sessions. Also, the majority of the behavioural indicators showed improvement during training. Identification changed after four sessions of training, while discrimination and reaction times improved throughout training. These results suggest remarkable language-learning effects at both the perceptual and the pre-attentive neural level as a result of brief listen-and-repeat training in adult participants.

  11. Intelligibility of native and non-native Dutch Speech

    NARCIS (Netherlands)

    Wijngaarden, S.J. van

    2001-01-01

    The intelligibility of speech is known to be lower if the speaker is non-native instead of native for the given language. This study is aimed at quantifying the overall degradation due to limitations of non-native speakers of Dutch, specifically of Dutch-speaking Americans who have lived in the Neth

  12. The intelligibility of Lombard speech for non-native listeners.

    Science.gov (United States)

    Cooke, Martin; Lecumberri, Maria Luisa García

    2012-08-01

    Speech produced in the presence of noise--Lombard speech--is more intelligible in noise than speech produced in quiet, but the origin of this advantage is poorly understood. Some of the benefit appears to arise from auditory factors such as energetic masking release, but a role for linguistic enhancements similar to those exhibited in clear speech is possible. The current study examined the effect of Lombard speech in noise and in quiet for Spanish learners of English. Non-native listeners showed a substantial benefit of Lombard speech in noise, although not quite as large as that displayed by native listeners tested on the same task in an earlier study [Lu and Cooke (2008), J. Acoust. Soc. Am. 124, 3261-3275]. The difference between the two groups is unlikely to be due to energetic masking. However, Lombard speech was less intelligible in quiet for non-native listeners than normal speech. The relatively small difference in Lombard benefit in noise for native and non-native listeners, along with the absence of Lombard benefit in quiet, suggests that any contribution of linguistic enhancements in the Lombard benefit for natives is small.

  13. Effects of training on learning non-native speech contrasts

    Science.gov (United States)

    Sinnott, Joan M.

    2002-05-01

    An animal psychoacoustic procedure was used to train human listeners to categorize two non-native phonemic distinctions. In Exp 1, Japanese perception of the English liquid contrast /r-l/ was examined. In Exp 2, American-English perception of the Hindi dental-retroflex contrast /d-D/ was examined. The training methods were identical in the two studies. The stimuli consisted of 64 CVs produced by four different native talkers (two male, two female) using four different vowels. The procedure involved manually moving a lever to make either a "go-left" or "go-right" response to categorize the stimuli. Feedback was given for correct and incorrect responses after each trial. After 32 training sessions, lasting about 8 weeks, performance was analyzed using both percent correct and response time as measures. Results showed that the Japanese listeners, as a group, were statistically similar to a group of native listeners in categorizing the liquid contrast. In contrast, the American-English listeners were not nativelike in categorizing the dental-retroflex contrast. Hypotheses for the different results in the two experiments are discussed, including possible subject-related variables. In addition, the use of an animal model is proposed to objectively "calibrate" the psychoacoustic salience of various phoneme contrasts used in human speech.

  14. The Attitudes and Perceptions of Non-Native English Speaking ...

    African Journals Online (AJOL)

    The Attitudes and Perceptions of Non-Native English Speaking Adults toward Explicit Grammar Instruction. ... to excel in their academic careers, obtain good jobs, and interact well with those who speak English. ...

  15. Speech Recognition of Non-Native Speech Using Native and Non-Native Acoustic Models

    Science.gov (United States)

    2000-08-01

    van Leeuwen, David A.; Orr, Rosemary (TNO Human Factors Research Institute).

  16. Non-Native University Students' Perception of Plagiarism

    Science.gov (United States)

    Ahmad, Ummul Khair; Mansourizadeh, Kobra; Ai, Grace Koh Ming

    2012-01-01

    Plagiarism is a complex issue especially among non-native students and it has received a lot of attention from researchers and scholars of academic writing. Some scholars attribute this problem to cultural perceptions and different attitudes toward texts. This study evaluates student perception of different aspects of plagiarism. A small group of…

  17. Using the Speech Transmission Index for predicting non-native speech intelligibility

    Science.gov (United States)

    van Wijngaarden, Sander J.; Bronkhorst, Adelbert W.; Houtgast, Tammo; Steeneken, Herman J. M.

    2004-03-01

    While the Speech Transmission Index (STI) is widely applied for prediction of speech intelligibility in room acoustics and telecommunication engineering, it is unclear how to interpret STI values when non-native talkers or listeners are involved. Based on subjectively measured psychometric functions for sentence intelligibility in noise, for populations of native and non-native communicators, a correction function for the interpretation of the STI is derived. This function is applied to determine the appropriate STI ranges with qualification labels ("bad"-"excellent"), for specific populations of non-natives. The correction function is derived by relating the non-native psychometric function to the native psychometric function by a single parameter (ν). For listeners, the ν parameter is found to be highly correlated with linguistic entropy. It is shown that the proposed correction function is also valid for conditions featuring bandwidth limiting and reverberation.
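
    The abstract does not give the functional form of the correction; one speculative reading, in which ν rescales the effective STI fed into a logistic native psychometric function (curve shape and all numbers are assumptions):

        import numpy as np

        def native_psychometric(sti, midpoint=0.45, slope=12.0):
            """Hypothetical native sentence-intelligibility curve vs. STI."""
            return 1.0 / (1.0 + np.exp(-slope * (sti - midpoint)))

        def non_native_psychometric(sti, nu=0.7):
            """Single-parameter correction: the non-native curve as the native
            curve evaluated at an effectively reduced STI (assumed form)."""
            return native_psychometric(nu * sti)

        for sti in (0.3, 0.5, 0.7):
            print(sti, round(native_psychometric(sti), 2),
                  round(non_native_psychometric(sti), 2))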

  18. Using the Speech Transmission Index for predicting non-native speech intelligibility

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Houtgast, T.; Steeneken, H.J.M.

    2004-01-01

    While the Speech Transmission Index (STI) is widely applied for prediction of speech intelligibility in room acoustics and telecommunication engineering, it is unclear how to interpret STI values when non-native talkers or listeners are involved. Based on subjectively measured psychometric functions

  20. How much does language proficiency by non-native listeners influence speech audiometric tests in noise?

    Science.gov (United States)

    Warzybok, Anna; Brand, Thomas; Wagener, Kirsten C; Kollmeier, Birger

    2015-01-01

    The current study investigates the extent to which the linguistic complexity of three commonly employed speech recognition tests and second language proficiency influence speech recognition thresholds (SRTs) in noise in non-native listeners. SRTs were measured for non-natives and natives using three German speech recognition tests: the digit triplet test (DTT), the Oldenburg sentence test (OLSA), and the Göttingen sentence test (GÖSA). Sixty-four non-native and eight native listeners participated. Non-natives can show native-like SRTs in noise only for the linguistically easy speech material (DTT). Furthermore, the limitation of phonemic-acoustical cues in digit triplets affects speech recognition to the same extent in non-natives and natives. For more complex and less familiar speech materials, non-natives, ranging from basic to advanced proficiency in German, require on average 3-dB better signal-to-noise ratio for the OLSA and 6-dB for the GÖSA to obtain 50% speech recognition compared to native listeners. In clinical audiology, SRT measurements with a closed-set speech test (i.e. DTT for screening or OLSA test for clinical purposes) should be used with non-native listeners rather than open-set speech tests (such as the GÖSA or HINT), especially if a closed-set version in the patient's own native language is available.
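
    SRTs of this kind are typically measured adaptively; a minimal 1-up/1-down staircase sketch that converges near 50% intelligibility (the simulated listener and step size are invented, not the DTT/OLSA/GÖSA procedures themselves):

        import random

        def simulated_listener(snr, srt_true=-6.0):
            """Returns True if the sentence is repeated correctly at this SNR."""
            p = 1.0 / (1.0 + 10 ** (-(snr - srt_true) / 2.0))
            return random.random() < p

        snr, step, track = 0.0, 2.0, []
        for _ in range(30):                    # 30 sentences per track
            correct = simulated_listener(snr)
            snr += -step if correct else step  # harder after a hit, easier after a miss
            track.append(snr)

        print("estimated SRT:", sum(track[10:]) / len(track[10:]))  # discard run-in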

  1. Learning foreign sounds in an alien world: videogame training improves non-native speech categorization.

    Science.gov (United States)

    Lim, Sung-joo; Holt, Lori L

    2011-01-01

    Although speech categories are defined by multiple acoustic dimensions, some are perceptually weighted more than others, and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distributional characteristics of experienced sounds influence perceptual cue weights: increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5 h across 5 days exhibited improvements in /r/-/l/ perception on par with those gained from 2-4 weeks of explicit categorization training in previous research, and shifted toward more native-like perceptual cue weights.
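
    Perceptual cue weights of the sort discussed here are often estimated as coefficients of a categorization model fitted over acoustic dimensions; a hypothetical sketch for two /r/-/l/ cues (dimension names and data are invented):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        f3 = rng.normal(size=300)  # primary English /r/-/l/ cue (third formant)
        f2 = rng.normal(size=300)  # secondary cue
        labels = (0.9 * f3 + 0.3 * f2 + rng.normal(0, 0.5, 300)) > 0

        model = LogisticRegression().fit(np.column_stack([f3, f2]), labels)
        print("relative cue weights (F3, F2):", model.coef_[0])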

  2. Influence of native and non-native multitalker babble on speech recognition in noise

    Directory of Open Access Journals (Sweden)

    Chandni Jain

    2014-03-01

    The aim of the study was to assess speech recognition in noise using multitalker babble in a native and a non-native language at two different signal-to-noise ratios. Speech recognition in noise was assessed in 60 participants (18 to 30 years) with normal hearing sensitivity, having Malayalam or Kannada as their native language. For this purpose, 6- and 10-talker babble was generated in Kannada and in Malayalam. Speech recognition was assessed for native listeners of both languages in the presence of native and non-native multitalker babble. Results showed that speech recognition in noise was significantly higher at 0 dB signal-to-noise ratio (SNR) compared to -3 dB SNR for both languages. Performance of Kannada listeners was significantly higher in the presence of native (Kannada) babble compared to non-native (Malayalam) babble. However, this was not the same for the Malayalam listeners, who performed equally well with native (Malayalam) as well as non-native (Kannada) babble. The results of the present study highlight the importance of using native multitalker babble for Kannada listeners instead of non-native babble, and of considering the importance of each SNR for estimating speech recognition in noise scores. Further research is needed to assess speech recognition in Malayalam listeners in the presence of other non-native backgrounds of various types.

  3. Quantifying the intelligibility of speech in noise for non-native listeners

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Steeneken, H.J.M.; Houtgast, T.

    2002-01-01

    When listening to languages learned at a later age, speech intelligibility is generally lower than when listening to one's native language. The main purpose of this study is to quantify speech intelligibility in noise for specific populations of non-native listeners, only broadly addressing the unde

  4. Perceptual learning of non-native speech contrast and functioning of the olivocochlear bundle.

    Science.gov (United States)

    Kumar, Ajith U; Hegde, Medha; Mayaleela

    2010-07-01

    The purpose of this study was to investigate the relationship between perceptual learning of non-native speech sounds and the strength of feedback in the medial olivocochlear bundle (MOCB). Discrimination abilities for non-native speech sounds (Malayalam) versus their native counterparts (Hindi) were monitored during 12 days of training. Contralateral inhibition of otoacoustic emissions was measured on the first and twelfth day of training. Results suggested that training significantly improved reaction time and accuracy of identification of non-native speech sounds. There was a significant positive correlation between the slope (linear) of identification scores and the change in distortion product otoacoustic emission inhibition at 3000 Hz. Findings suggest that during perceptual learning, feedback from the MOCB may fine-tune the brain stem and/or cochlea. However, such a change, isolated to a narrow frequency region, represents a limited effect and needs further exploration to confirm and/or extend any generalization of findings.

  5. Sleep and native language interference affect non-native speech sound learning.

    Science.gov (United States)

    Earle, F Sayako; Myers, Emily B

    2015-12-01

    Adults learning a new language are faced with a significant challenge: non-native speech sounds that are perceptually similar to sounds in one's native language can be very difficult to acquire. Sleep and native language interference, two factors that may help to explain this difficulty in acquisition, are addressed in three studies. Results of Experiment 1 showed that participants trained on a non-native contrast at night improved in discrimination 24 hr after training, while those trained in the morning showed no such improvement. Experiments 2 and 3 addressed the possibility that incidental exposure to perceptually similar native language speech sounds during the day interfered with maintenance in the morning group. Taken together, the results show that the ultimate success of non-native speech sound learning depends not only on the similarity of learned sounds to the native language repertoire, but also on interference from native language sounds before sleep.

  6. Speech Recognition by Goats, Wolves, Sheep and Non-Natives

    Science.gov (United States)

    2000-08-01

    ... will lead to vowel insertions, and diphthongs are likely to be replaced by a single vowel.

  7. Combined Acoustic and Pronunciation Modelling for Non-Native Speech Recognition

    CERN Document Server

    Bouselmi, Ghazi; Illina, Irina

    2007-01-01

    In this paper, we present several adaptation methods for non-native speech recognition. We have tested pronunciation modelling, MLLR and MAP non-native pronunciation adaptation, and HMM model retraining on the HIWIRE foreign-accented English speech database. The "phonetic confusion" scheme we have developed consists of associating with each spoken phone several sequences of confused phones. In our experiments, we used different combinations of acoustic models representing the canonical and the foreign pronunciations: spoken and native models, and models adapted to the non-native accent with MAP and MLLR. The joint use of pronunciation modelling and acoustic adaptation led to further improvements in recognition accuracy. The best combination of the above-mentioned techniques resulted in a relative word error reduction ranging from 46% to 71%.
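
    A toy rendering of the described "phonetic confusion" scheme, in which each canonical phone expands into several confused phone sequences (the mappings below are invented, not the confusions learned in the paper):

        from itertools import product

        # Hypothetical confusion sets: canonical L2 phone -> L1-influenced variants
        confusion = {
            "th": [["th"], ["s"], ["t"]],  # e.g. /θ/ realized as /s/ or /t/
            "ih": [["ih"], ["iy"]],
        }

        def variants(canonical):
            """Expand a canonical phone sequence into all confused alternatives."""
            options = [confusion.get(p, [[p]]) for p in canonical]
            return [sum(seq, []) for seq in product(*options)]

        for v in variants(["th", "ih", "ng"]):  # a word like 'thing'
            print(" ".join(v))                  # 6 alternative pronunciations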

  8. Emergence of category-level sensitivities in non-native speech sound learning

    Directory of Open Access Journals (Sweden)

    Emily eMyers

    2014-08-01

    Over the course of development, speech sounds that are contrastive in one's native language tend to become perceived categorically: that is, listeners are unaware of variation within phonetic categories while showing excellent sensitivity to speech sounds that span linguistically meaningful phonetic category boundaries. The end stage of this developmental process is that the perceptual systems that handle acoustic-phonetic information show special tuning to native language contrasts, and as such, category-level information appears to be present at even fairly low levels of the neural processing stream. Research on adults acquiring non-native speech categories offers an avenue for investigating the interplay of category-level information and perceptual sensitivities to these sounds as speech categories emerge. In particular, one can observe the neural changes that unfold as listeners learn not only to perceive acoustic distinctions that mark non-native speech sound contrasts, but also to map these distinctions onto category-level representations. An emergent literature on the neural basis of novel and non-native speech sound learning offers new insight into this question. In this review, I will examine this literature in order to answer two key questions. First, where in the neural pathway does sensitivity to category-level phonetic information first emerge over the trajectory of speech sound learning? Second, how do frontal and temporal brain areas work in concert over the course of non-native speech sound learning? Finally, in the context of this literature I will describe a model of speech sound learning in which rapidly-adapting access to categorical information in the frontal lobes modulates the sensitivity of stable, slowly-adapting responses in the temporal lobes.

  9. How noise and language proficiency influence speech recognition by individual non-native listeners.

    Science.gov (United States)

    Zhang, Jin; Xie, Lingli; Li, Yongjun; Chatterjee, Monita; Ding, Nai

    2014-01-01

    This study investigated how speech recognition in noise is affected by language proficiency for individual non-native speakers. The recognition of English and Chinese sentences was measured as a function of the signal-to-noise ratio (SNR) in sixty native Chinese speakers who had never lived in an English-speaking environment. The recognition score for speech in quiet (which varied from 15% to 92%) was found to be uncorrelated with the speech recognition threshold SRT(Q/2), i.e. the SNR at which the recognition score drops to 50% of the recognition score in quiet. This result demonstrates separable contributions of language proficiency and auditory processing to speech recognition in noise.
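
    The SRT(Q/2) definition translates directly into a computation: interpolate the SNR at which the measured score falls to half the score in quiet (data points below are invented):

        import numpy as np

        snr   = np.array([-9, -6, -3, 0, 3, 6])    # dB SNR, hypothetical
        score = np.array([5, 15, 38, 62, 78, 84])  # % correct at each SNR
        quiet = 88.0                               # % correct in quiet

        target = quiet / 2.0                    # 44% here
        srt_q2 = np.interp(target, score, snr)  # scores must be increasing
        print(f"SRT(Q/2) = {srt_q2:.1f} dB SNR")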

  10. Designing acoustics for linguistically diverse classrooms: Effects of background noise, reverberation and talker foreign accent on speech comprehension by native and non-native English-speaking listeners

    Science.gov (United States)

    Peng, Zhao Ellen

    The current classroom acoustics standard (ANSI S12.60-2010) recommends that core learning spaces not exceed a background noise level (BNL) of 35 dBA and a reverberation time (RT) of 0.6 seconds, based on speech intelligibility performance mainly by the native English-speaking population. The existing literature has not correlated these recommended values well with student learning outcomes. With a growing population of non-native English speakers in American classrooms, the special needs for perceiving degraded speech among non-native listeners, whether due to realistic room acoustics or talker foreign accent, have not been addressed in the current standard. This research seeks to investigate the effects of BNL and RT on the comprehension of English speech from native English and native Mandarin Chinese talkers as perceived by native and non-native English listeners, and to provide acoustic design guidelines to supplement the existing standard. This dissertation presents two studies on the effects of RT and BNL on more realistic classroom learning experiences. How do native and non-native English-speaking listeners perform on speech comprehension tasks under adverse acoustic conditions, if the English speech is produced by talkers of native English (Study 1) versus native Mandarin Chinese (Study 2)? Speech comprehension materials were played back in a listening chamber to individual listeners: native and non-native English-speaking in Study 1; native English, native Mandarin Chinese, and other non-native English-speaking in Study 2. Each listener was screened for baseline English proficiency level, and completed dual tasks simultaneously involving speech comprehension and adaptive dot-tracing under 15 acoustic conditions, comprised of three BNL conditions (RC-30, 40, and 50) and five RT scenarios (0.4 to 1.2 seconds). The results show that BNL and RT negatively affect both objective performance and subjective perception of speech comprehension, more severely for non-native

  11. Investigating Applications of Speech-to-Text Recognition Technology for a Face-to-Face Seminar to Assist Learning of Non-Native English-Speaking Participants

    Science.gov (United States)

    Shadiev, Rustam; Hwang, Wu-Yuin; Huang, Yueh-Min; Liu, Chia-Ju

    2016-01-01

    This study applied speech-to-text recognition (STR) technology to assist non-native English-speaking participants to learn at a seminar given in English. How participants used transcripts generated by the STR technology for learning and their perceptions toward the STR were explored. Three main findings are presented in this study. Most…

  12. Fully Automated Non-Native Speech Recognition Using Confusion-Based Acoustic Model Integration

    OpenAIRE

    Bouselmi, Ghazi; Fohr, Dominique; Illina, Irina; Haton, Jean-Paul

    2005-01-01

    This paper presents a fully automated approach for the recognition of non-native speech based on acoustic model modification. For a native language (L1) and a spoken language (L2), pronunciation variants of the phones of L2 are automatically extracted from an existing non-native database as a confusion matrix with sequences of phones of L1. This is done using L1's and L2's ASR systems. This confusion concept deals with the problem of the non-existence of a match between some L2 and L1 phones. The c...

  13. Quantifying the intelligibility of speech in noise for non-native talkers

    Science.gov (United States)

    van Wijngaarden, Sander J.; Steeneken, Herman J. M.; Houtgast, Tammo

    2002-12-01

    The intelligibility of speech pronounced by non-native talkers is generally lower than speech pronounced by native talkers, especially under adverse conditions, such as high levels of background noise. The effect of foreign accent on speech intelligibility was investigated quantitatively through a series of experiments involving voices of 15 talkers, differing in language background, age of second-language (L2) acquisition and experience with the target language (Dutch). Overall speech intelligibility of L2 talkers in noise is predicted with a reasonable accuracy from accent ratings by native listeners, as well as from the self-ratings for proficiency of L2 talkers. For non-native speech, unlike native speech, the intelligibility of short messages (sentences) cannot be fully predicted by phoneme-based intelligibility tests. Although incorrect recognition of specific phonemes certainly occurs as a result of foreign accent, the effect of reduced phoneme recognition on the intelligibility of sentences may range from severe to virtually absent, depending on (for instance) the speech-to-noise ratio. Objective acoustic-phonetic analyses of accented speech were also carried out, but satisfactory overall predictions of speech intelligibility could not be obtained with relatively simple acoustic-phonetic measures.

  15. The Effect of L1 Orthography on Non-Native Vowel Perception

    Science.gov (United States)

    Escudero, Paola; Wanrooij, Karin

    2010-01-01

    Previous research has shown that orthography influences the learning and processing of spoken non-native words. In this paper, we examine the effect of L1 orthography on non-native sound perception. In Experiment 1, 204 Spanish learners of Dutch and a control group of 20 native speakers of Dutch were asked to classify Dutch vowel tokens by…

  16. Phonetic processing of non-native speech in semantic vs non-semantic tasks.

    Science.gov (United States)

    Gustafson, Erin; Engstler, Caroline; Goldrick, Matthew

    2013-12-01

    Research with speakers with acquired production difficulties has suggested phonetic processing is more difficult in tasks that require semantic processing. The current research examined whether similar effects are found in bilingual phonetic processing. English-French bilinguals' productions in picture naming (which requires semantic processing) were compared to those elicited by repetition (which does not require semantic processing). Picture naming elicited slower, more accented speech than repetition. These results provide additional support for theories integrating cognitive and phonetic processes in speech production and suggest that bilingual speech research must take cognitive factors into account when assessing the structure of non-native sound systems.

  17. A Hybrid Acoustic and Pronunciation Model Adaptation Approach for Non-native Speech Recognition

    Science.gov (United States)

    Oh, Yoo Rhee; Kim, Hong Kook

    In this paper, we propose a hybrid model adaptation approach in which pronunciation and acoustic models are adapted by incorporating the pronunciation and acoustic variabilities of non-native speech in order to improve the performance of non-native automatic speech recognition (ASR). Specifically, the proposed hybrid model adaptation can be performed at either the state-tying or triphone-modeling level, depending on the level at which acoustic model adaptation is performed. In both methods, we first analyze the pronunciation variant rules of non-native speakers and then classify each rule as either a pronunciation variant or an acoustic variant. The state-tying level hybrid method then adapts pronunciation models and acoustic models by accommodating the pronunciation variants in the pronunciation dictionary and by clustering the states of triphone acoustic models using the acoustic variants, respectively. On the other hand, the triphone-modeling level hybrid method initially adapts pronunciation models in the same way as the state-tying level hybrid method; for the acoustic model adaptation, however, the triphone acoustic models are re-estimated based on the adapted pronunciation models, and the states of the re-estimated triphone acoustic models are clustered using the acoustic variants. Korean-spoken English speech recognition experiments show that ASR systems employing the state-tying and triphone-modeling level adaptation methods achieve relative reductions in average word error rate (WER) of 17.1% and 22.1% for non-native speech, respectively, compared to a baseline ASR system.
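
    For reference, a relative word error rate reduction is computed against the baseline WER; the absolute numbers below are invented, and only the ratio mirrors the reported 22.1%:

        def relative_wer_reduction(wer_baseline, wer_adapted):
            return 100.0 * (wer_baseline - wer_adapted) / wer_baseline

        # Hypothetical WERs: 30.0% baseline vs. 23.37% after hybrid adaptation
        print(f"{relative_wer_reduction(30.0, 23.37):.1f}% relative reduction")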

  18. Exploring Public Perception of Non-native Species from a Visions of Nature Perspective

    Science.gov (United States)

    Verbrugge, Laura N. H.; Van den Born, Riyan J. G.; Lenders, H. J. Rob

    2013-12-01

    Not much is known about lay public perceptions of non-native species and their underlying values. Public awareness and engagement, however, are important aspects of invasive species management. In this study, we examined the relations between the lay public's visions of nature, their knowledge about non-native species, and their perceptions of non-native species and invasive species management, with a survey administered in the Netherlands. Within this framework, we identified three measures for perception of non-native species: perceived risk, control and engagement. In general, respondents scored moderate values for perceived risk and personal engagement. However, in the case of potential ecological or human health risks, control measures were supported. Respondents' images of the human-nature relationship proved to be relevant to engagement with problems caused by invasive species and to recognizing the need for control, while images of nature appeared to be most important in perceiving risks to the environment. We also found that eradication of non-native species was predominantly opposed for species with a high cuddliness factor, such as mammals and birds. We conclude that lay public perceptions of non-native species have to be put in the wider context of visions of nature, and we discuss the implications for public support for invasive species management.

  19. Native and Non-Native Perceptions on a Non-Native Oral Discourse in an Academic Setting

    Directory of Open Access Journals (Sweden)

    Kenan Dikilitaş

    2012-07-01

    This qualitative study investigates discourse-level patterns typically employed by a Turkish lecturer, based on the syntactic patterns found in the collected data. More specifically, the study aims to reveal how native and non-native speakers of English perceive the discourse patterns used by a non-native lecturer teaching in English. The data were gathered from a Turkish lecturer teaching finance and from interviews with both the lecturer and the students. The lecturer and the students were videotaped, and the data were evaluated by content analysis. The results revealed a difference between the way non-native and native speakers evaluate the oral discourse of a non-native lecturer teaching in English. Native speakers of English found the oral performance moderately comprehensible, while non-native speakers found it relatively comprehensible.

  20. Optimizing Automatic Speech Recognition for Low-Proficient Non-Native Speakers

    Directory of Open Access Journals (Sweden)

    Catia Cucchiarini

    2010-01-01

    Computer-Assisted Language Learning (CALL) applications for improving the oral skills of low-proficient learners have to cope with non-native speech that is particularly challenging. Since unconstrained non-native ASR is still problematic, a possible solution is to elicit constrained responses from the learners. In this paper, we describe experiments aimed at selecting utterances from lists of responses. The first experiment on utterance selection indicates that the decoding process can be improved by optimizing the language model and the acoustic models, thus reducing the utterance error rate from 29–26% to 10–8%. Since giving feedback on incorrectly recognized utterances is confusing, we verify the correctness of the utterance before providing feedback. The results of the second experiment on utterance verification indicate that combining duration-related features with a likelihood ratio (LR) yields an equal error rate (EER) of 10.3%, which is significantly better than the EER for the other measures in isolation.
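
    An equal error rate like the one reported can be estimated by sweeping a decision threshold until the false-accept and false-reject rates cross; a sketch on synthetic verification scores:

        import numpy as np

        rng = np.random.default_rng(1)
        genuine  = rng.normal(1.0, 1.0, 2000)   # scores for correct utterances
        impostor = rng.normal(-1.0, 1.0, 2000)  # scores for incorrect utterances

        thresholds = np.linspace(-4, 4, 801)
        far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
        frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects

        i = np.argmin(np.abs(far - frr))
        print(f"EER ~ {(far[i] + frr[i]) / 2:.3f} at threshold {thresholds[i]:.2f}")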

  1. Dissociating Cortical Activity during Processing of Native and Non-Native Audiovisual Speech from Early to Late Infancy

    Directory of Open Access Journals (Sweden)

    Eswen Fava

    2014-08-01

    Initially, infants are capable of discriminating phonetic contrasts across the world's languages. Starting between seven and ten months of age, they gradually lose this ability through a process of perceptual narrowing. Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech). Thus far, tracking the developmental trajectory of this tuning process has focused primarily on auditory speech alone, and generally using isolated sounds. But infants learn from speech produced by people talking to them, meaning they learn from a complex audiovisual signal. Here, we use near-infrared spectroscopy to measure blood concentration changes in the bilateral temporal cortices of infants in three different age groups: 3-to-6 months, 7-to-10 months, and 11-to-14 months. Critically, all three groups of infants were tested with continuous audiovisual speech in both their native and another, unfamiliar language. We found that at each age range, infants showed different patterns of cortical activity in response to the native and non-native stimuli. Infants in the youngest group showed bilateral cortical activity that was greater overall in response to non-native relative to native speech; the oldest group showed left-lateralized activity in response to native relative to non-native speech. These results highlight perceptual tuning as a dynamic process that happens across modalities and at different levels of stimulus complexity.

  2. Musical ability and non-native speech-sound processing are linked through sensitivity to pitch and spectral information.

    Science.gov (United States)

    Kempe, Vera; Bublitz, Dennis; Brooks, Patricia J

    2015-05-01

    Is the observed link between musical ability and non-native speech-sound processing due to enhanced sensitivity to acoustic features underlying both musical and linguistic processing? To address this question, native English speakers (N = 118) discriminated Norwegian tonal contrasts and Norwegian vowels. Short tones differing in temporal, pitch, and spectral characteristics were used to measure sensitivity to the various acoustic features implicated in musical and speech processing. Musical ability was measured using Gordon's Advanced Measures of Musical Audiation. Results showed that sensitivity to specific acoustic features played a role in non-native speech-sound processing: Controlling for non-verbal intelligence, prior foreign language-learning experience, and sex, sensitivity to pitch and spectral information partially mediated the link between musical ability and discrimination of non-native vowels and lexical tones. The findings suggest that while sensitivity to certain acoustic features partially mediates the relationship between musical ability and non-native speech-sound processing, complex tests of musical ability also tap into other shared mechanisms.

  3. Effects of noise, reverberation and foreign accent on native and non-native listeners' performance of English speech comprehension.

    Science.gov (United States)

    Peng, Z Ellen; Wang, Lily M

    2016-05-01

    A large number of non-native English speakers may be found in American classrooms, both as listeners and talkers. Little is known about how this population comprehends speech in realistic adverse acoustical conditions. A study was conducted to investigate the effects of background noise level (BNL), reverberation time (RT), and talker foreign accent on native and non-native listeners' speech comprehension, while controlling for English language abilities. A total of 115 adult listeners completed comprehension tasks under 15 acoustic conditions: three BNLs (RC-30, RC-40, and RC-50) and five RTs (from 0.4 to 1.2 s). Fifty-six listeners were tested with speech from native English-speaking talkers and 59 with native Mandarin-Chinese-speaking talkers. Results show that, while higher BNLs were generally more detrimental to listeners with lower English proficiency, all listeners experienced significant comprehension deficits above RC-40 with native English talkers. This limit was lower (i.e., above RC-30), however, with Chinese talkers. For reverberation, non-native listeners as a group performed best with RT up to 0.6 s, while native listeners performed equally well up to 1.2 s. A matched foreign accent benefit has also been identified, where the negative impact of higher reverberation does not exist for non-native listeners who share the talker's native language.

  4. Effective Prediction of Errors by Non-native Speakers Using Decision Tree for Speech Recognition-Based CALL System

    Science.gov (United States)

    Wang, Hongcui; Kawahara, Tatsuya

    CALL (Computer Assisted Language Learning) systems using ASR (Automatic Speech Recognition) for second language learning have received increasing interest recently. However, it still remains a challenge to achieve high speech recognition performance, including accurate detection of erroneous utterances by non-native speakers. Conventionally, possible error patterns, based on linguistic knowledge, are added to the lexicon and language model, or the ASR grammar network. However, this approach easily falls into a trade-off between the coverage of errors and an increase in perplexity. To solve the problem, we propose a method based on a decision tree to learn effective prediction of errors made by non-native speakers. An experimental evaluation with a number of foreign students learning Japanese shows that the proposed method can effectively generate an ASR grammar network, given a target sentence, achieving both better coverage of errors and smaller perplexity, resulting in a significant improvement in ASR accuracy.
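
    A schematic of the proposed learning step: fit a decision tree that predicts a learner's likely error pattern from simple features, then add only the predicted variants to the ASR grammar network (features and labels below are invented):

        from sklearn.tree import DecisionTreeClassifier

        # Hypothetical features per target item: [L1 id, phone class id, word length]
        X = [[0, 1, 4], [0, 2, 6], [1, 1, 4], [1, 3, 5], [0, 3, 5], [1, 2, 6]]
        y = ["drop_final_vowel", "r_l_swap", "none",
             "epenthesis", "none", "r_l_swap"]  # observed error patterns

        tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
        print(tree.predict([[1, 1, 4]]))  # predicted pattern for a new item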

  6. Are non-native plants perceived to be more risky? Factors influencing horticulturists' risk perceptions of ornamental plant species.

    Directory of Open Access Journals (Sweden)

    Franziska Humair

    Horticultural trade is recognized as an important vector in promoting the introduction and dispersal of harmful non-native plant species. Understanding horticulturists' perceptions of biotic invasions is therefore important for effective species risk management. We conducted a large-scale survey among horticulturists in Switzerland (N = 625) to reveal horticulturists' risk and benefit perceptions of ornamental plant species, their attitudes towards the regulation of non-native species, and the factors decisive for environmental risk perceptions and horticulturists' willingness to engage in risk mitigation behavior. Our results suggest that perceived familiarity with a plant species had a mitigating effect on risk perceptions, while perceptions of risk increased if a species was perceived to be non-native. However, perceptions of the non-native origin of ornamental plant species were often not congruent with scientific classifications. Horticulturists displayed positive attitudes towards mandatory trade regulations, particularly towards those targeted against known invasive species. Participants also expressed their willingness to engage in risk mitigation behavior. Yet, positive effects of risk perceptions on the willingness to engage in risk mitigation behavior were counteracted by perceptions of benefits from selling non-native ornamental species. Our results indicate that the prevalent practice in risk communication of emphasizing the non-native origin of invasive species can be ineffective, especially in the case of species of high importance to local industries and people. This is because familiarity with these plants can reduce risk perceptions and be in conflict with scientific concepts of non-nativeness. In these cases, it might be more effective to focus communication on well-documented environmental impacts of harmful species.

  7. The influence of visual speech information on the intelligibility of English consonants produced by non-native speakers.

    Science.gov (United States)

    Kawase, Saya; Hannah, Beverly; Wang, Yue

    2014-09-01

    This study examines how visual speech information affects native judgments of the intelligibility of speech sounds produced by non-native (L2) speakers. Native Canadian English perceivers as judges perceived three English phonemic contrasts (/b-v, θ-s, l-ɹ/) produced by native Japanese speakers as well as native Canadian English speakers as controls. These stimuli were presented under audio-visual (AV, with speaker voice and face), audio-only (AO), and visual-only (VO) conditions. The results showed that, across conditions, the overall intelligibility of Japanese productions of the native (Japanese)-like phonemes (/b, s, l/) was significantly higher than the non-Japanese phonemes (/v, θ, ɹ/). In terms of visual effects, the more visually salient non-Japanese phonemes /v, θ/ were perceived as significantly more intelligible when presented in the AV compared to the AO condition, indicating enhanced intelligibility when visual speech information is available. However, the non-Japanese phoneme /ɹ/ was perceived as less intelligible in the AV compared to the AO condition. Further analysis revealed that, unlike the native English productions, the Japanese speakers produced /ɹ/ without visible lip-rounding, indicating that non-native speakers' incorrect articulatory configurations may decrease the degree of intelligibility. These results suggest that visual speech information may either positively or negatively affect L2 speech intelligibility.

  8. Across-talker effects on non-native listeners' vowel perception in noise.

    Science.gov (United States)

    Bent, Tessa; Kewley-Port, Diane; Ferguson, Sarah Hargus

    2010-11-01

    This study explored how across-talker differences influence non-native vowel perception. American English (AE) and Korean listeners were presented with recordings of 10 AE vowels in /bVd/ context. The stimuli were mixed with noise and presented for identification in a 10-alternative forced-choice task. The two listener groups heard recordings of the vowels produced by 10 talkers at three signal-to-noise ratios. Overall the AE listeners identified the vowels 22% more accurately than the Korean listeners. There was a wide range of identification accuracy scores across talkers for both AE and Korean listeners. At each signal-to-noise ratio, the across-talker intelligibility scores were highly correlated for AE and Korean listeners. Acoustic analysis was conducted for 2 vowel pairs that exhibited variable accuracy across talkers for Korean listeners but high identification accuracy for AE listeners. Results demonstrated that Korean listeners' error patterns for these four vowels were strongly influenced by variability in vowel production that was within the normal range for AE talkers. These results suggest that non-native listeners are strongly influenced by across-talker variability perhaps because of the difficulty they have forming native-like vowel categories.

  10. Unpacking Race, Culture, and Class in Rural Alaska: Native and Non-Native Multidisciplinary Professionals' Perceptions of Child Sexual Abuse

    Science.gov (United States)

    Bubar, Roe; Bundy-Fazioli, Kimberly

    2011-01-01

    The purpose of this study was to unpack notions of class, culture, and race as they relate to multidisciplinary team (MDT) professionals and their perceptions of prevalence in child sexual abuse cases in Native and non-Native rural Alaska communities. Power and privilege within professional settings is significant for all social work professionals…

  11. Non-Native Japanese Listeners' Perception of Vowel Length Contrasts in Japanese and Modern Standard Arabic (MSA)

    Science.gov (United States)

    Tsukada, Kimiko

    2012-01-01

    This study aimed to compare the perception of short vs. long vowel contrasts in Japanese and Modern Standard Arabic (MSA) by four groups of listeners differing in their linguistic backgrounds: native Arabic (NA), native Japanese (NJ), non-native Japanese (NNJ) and Australian English (OZ) speakers. The NNJ and OZ groups shared the first language…

  13. Automatic pronunciation error detection in non-native speech: the case of vowel errors in Dutch.

    Science.gov (United States)

    van Doremalen, Joost; Cucchiarini, Catia; Strik, Helmer

    2013-08-01

    This research is aimed at analyzing and improving automatic pronunciation error detection in a second language. Dutch vowels spoken by adult non-native learners of Dutch are used as a test case. A first study on Dutch pronunciation by L2 learners with different L1s revealed that vowel pronunciation errors are relatively frequent and often concern subtle acoustic differences between the realization and the target sound. In a second study, automatic pronunciation error detection experiments were conducted to compare existing measures with a metric that weights the observed error patterns so as to capture the relevant acoustic differences. The results of the two studies show that error patterns carry information that can be usefully employed in weighted automatic measures of pronunciation quality. In addition, combining such a weighted metric with existing measures improves the equal error rate by 6.1 percentage points, from 0.297 for the Goodness of Pronunciation (GOP) algorithm to 0.236.
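
    The equal error rate quoted above is the operating point at which false rejections of correct realizations equal false acceptances of mispronunciations. Below is a minimal Python/NumPy sketch of how such an EER can be computed from per-token detector scores; the function name and the convention that lower scores mean "more error-like" are our assumptions, not details of the GOP study.

        import numpy as np

        def equal_error_rate(scores_correct, scores_error):
            """Sweep a decision threshold over pronunciation scores and return
            the error rate where false rejections (good tokens flagged as
            errors) and false acceptances (errors passed as good) are equal."""
            thresholds = np.sort(np.concatenate([scores_correct, scores_error]))
            best_gap, eer = 2.0, None
            for t in thresholds:
                frr = np.mean(scores_correct < t)   # good tokens rejected
                far = np.mean(scores_error >= t)    # error tokens accepted
                if abs(frr - far) < best_gap:
                    best_gap, eer = abs(frr - far), (frr + far) / 2
            return eer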

  14. Perceptual assimilation and discrimination of non-native vowel contrasts

    Science.gov (United States)

    Tyler, Michael D.; Best, Catherine T.; Faber, Alice; Levitt, Andrea G.

    2014-01-01

    Research on language-specific tuning in speech perception has focused mainly on consonants, while that on non-native vowel perception has failed to address whether the same principles apply. Therefore, non-native vowel perception was investigated here in light of relevant theoretical models: The Perceptual Assimilation Model (PAM) and the Natural Referent Vowel (NRV) framework. American-English speakers completed discrimination and L1-assimilation (categorization and goodness rating) tests on six non-native vowel contrasts. Discrimination was consistent with PAM assimilation types, but asymmetries predicted by NRV were only observed for single-category assimilations, suggesting that perceptual assimilation might modulate the effects of vowel peripherality on non-native vowel perception. PMID:24923313

  15. Visual-only discrimination between native and non-native speech

    NARCIS (Netherlands)

    Georgakis, Christos; Petridis, Stavros; Pantic, Maja

    2014-01-01

    Accent is an important biometric characteristic that is defined by the presence of specific traits in the speaking style of an individual. These are identified by patterns in the speech production system, such as those present in the vocal tract or in lip movements. Evidence from linguistics and spe

  16. The impact of tone language and non-native language listening on measuring speech quality

    NARCIS (Netherlands)

    Ebem, D.U.; Beerends, J.G.; Vugt, J. van; Schmidmer, C.; Kooij, R.E.; Uguru, J.O.

    2011-01-01

    The extent to which the modeling used in objective speech quality algorithms depends on the cultural background of listeners, as well as on language characteristics, is investigated using American English and Igbo, an African tone language. Two different approaches were used in order to separate b

  17. Neural activation in speech production and reading aloud in native and non-native languages.

    Science.gov (United States)

    Berken, Jonathan A; Gracco, Vincent L; Chen, Jen-Kai; Soles, Jennika; Watkins, Kate E; Baum, Shari; Callahan, Megan; Klein, Denise

    2015-05-15

    We used fMRI to investigate neural activation in reading aloud in bilinguals differing in age of acquisition. Three groups were compared: French-English bilinguals who acquired two languages from birth (simultaneous), French-English bilinguals who learned their L2 after the age of 5 years (sequential), and English-speaking monolinguals. While the bilingual groups contrasted in age of acquisition, they were matched for language proficiency, although sequential bilinguals produced speech with a less native-like accent in their L2 than in their L1. Simultaneous bilinguals activated similar brain regions to an equivalent degree when reading in their two languages. In contrast, sequential bilinguals more strongly activated areas related to speech-motor control and orthographic to phonological mapping, the left inferior frontal gyrus, left premotor cortex, and left fusiform gyrus, when reading aloud in L2 compared to L1. In addition, the activity in these regions showed a significant positive correlation with age of acquisition. The results provide evidence for the engagement of overlapping neural substrates for processing two languages when acquired in native context from birth. However, it appears that the maturation of certain brain regions for both speech production and phonological encoding is limited by a sensitive period for L2 acquisition regardless of language proficiency.

  18. Perception of Non-Native Consonant Length Contrast: The Role of Attention in Phonetic Processing

    Science.gov (United States)

    Porretta, Vincent J.; Tucker, Benjamin V.

    2015-01-01

    The present investigation examines English speakers' ability to identify and discriminate non-native consonant length contrast. Three groups (L1 English No-Instruction, L1 English Instruction, and L1 Finnish control) performed a speeded forced-choice identification task and a speeded AX discrimination task on Finnish non-words (e.g.…
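
    Performance in identification and AX discrimination tasks of this kind is commonly summarized with the sensitivity index d'. The sketch below computes d' from response counts using the standard yes-no formula, which is often applied to same-different data as a simple approximation; the function name and the 1/(2N) correction convention are our choices, not the study's.

        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            """z(hit rate) - z(false-alarm rate), with extreme rates nudged
            away from 0 and 1 so the z-transform stays finite."""
            n_diff = hits + misses                       # "different" trials
            n_same = false_alarms + correct_rejections   # "same" trials
            hr = min(max(hits / n_diff, 1 / (2 * n_diff)), 1 - 1 / (2 * n_diff))
            fa = min(max(false_alarms / n_same, 1 / (2 * n_same)),
                     1 - 1 / (2 * n_same))
            return norm.ppf(hr) - norm.ppf(fa)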

  19. Assessing the Performance of Automatic Speech Recognition Systems When Used by Native and Non-Native Speakers of Three Major Languages in Dictation Workflows

    DEFF Research Database (Denmark)

    Zapata, Julián; Kirkedal, Andreas Søeborg

    2015-01-01

    In this paper, we report on a two-part experiment aiming to assess and compare the performance of two types of automatic speech recognition (ASR) systems on two different computational platforms when used to augment dictation workflows. The experiment was performed with a sample of speakers of three major languages and with different linguistic profiles: non-native English speakers; non-native French speakers; and native Spanish speakers. The main objective of this experiment is to examine ASR performance in translation dictation (TD) and medical dictation (MD) workflows without manual...

  20. Pragmatic assessment of request speech act of Iranian EFL learners by non-native English speaking teachers

    Directory of Open Access Journals (Sweden)

    Minoo Alemi

    2016-07-01

    The analysis of raters' comments on the pragmatic assessment of L2 learners is among the new and understudied concepts in second language studies. To shed light on this issue, the present investigation targeted important variables such as raters' criteria and rating patterns by analyzing the interlanguage pragmatic assessment process of Iranian non-native English speaking raters (NNESRs) regarding the request speech act, while considering factors such as raters' gender and background teaching experience. For this purpose, 62 raters' rating scores and comments on Iranian EFL learners' requests, based on six situations from specified video prompts, were analyzed. The content analysis of raters' comments revealed nine criteria, including pragmalinguistic and socio-pragmatic components of language, which raters weighted differently across the six request situations. Among these criteria, politeness, the conversers' relationship, style and register, and explanation were of greatest importance to NNESRs. Furthermore, t-test and chi-square analyses of raters' rating scores and criteria across situations showed that raters' gender and teaching experience had no significant effect on the pragmatic assessment of EFL learners. In addition, the results of the study suggest the necessity of teaching L2 pragmatics in language classes and in teacher training courses.
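
    The group comparisons mentioned above (a t-test on rating scores and a chi-square test on criterion mentions) can be run directly in SciPy. The sketch below uses made-up numbers purely to show the shape of the analysis; the group labels, scores, and counts are hypothetical.

        import numpy as np
        from scipy import stats

        # Hypothetical mean rating scores from two rater groups (e.g., by gender).
        scores_a = np.array([3.2, 4.1, 3.8, 2.9, 3.5, 4.0])
        scores_b = np.array([3.4, 3.9, 3.6, 3.1, 3.7, 3.8])
        t, p_t = stats.ttest_ind(scores_a, scores_b)

        # Hypothetical counts of how often each group mentioned each criterion
        # (columns: politeness, style/register, explanation).
        counts = np.array([[40, 25, 18],
                           [35, 30, 20]])
        chi2, p_chi, dof, expected = stats.chi2_contingency(counts)
        print(f"t = {t:.2f} (p = {p_t:.3f}); chi2 = {chi2:.2f} (p = {p_chi:.3f})")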

  1. Perception of native and non-native affricate-fricative contrasts: cross-language tests on adults and infants.

    Science.gov (United States)

    Tsao, Feng-Ming; Liu, Huei-Mei; Kuhl, Patricia K

    2006-10-01

    Previous studies have shown improved sensitivity to native-language contrasts and reduced sensitivity to non-native phonetic contrasts when comparing 6-8 and 10-12-month-old infants. This developmental pattern is interpreted as reflecting the onset of language-specific processing around the first birthday. However, generalization of this finding is limited by the fact that studies have yielded inconsistent results and that insufficient numbers of phonetic contrasts have been tested developmentally; this is especially true for native-language phonetic contrasts. Three experiments assessed the effects of language experience on affricate-fricative contrasts in a cross-language study of English and Mandarin adults and infants. Experiment 1 showed that English-speaking adults score lower than Mandarin-speaking adults on Mandarin alveolo-palatal affricate-fricative discrimination. Experiment 2 examined developmental change in the discrimination of this contrast in English- and Mandarin-learning infants between 6 and 12 months of age. The results demonstrated that native-language performance significantly improved with age while performance on the non-native contrast decreased. Experiment 3 replicated the perceptual improvement for a native contrast: 6-8 and 10-12-month-old English-learning infants showed a performance increase at the older age. The results add to our knowledge of the developmental patterns of native and non-native phonetic perception.

  2. Linguistic influences in adult perception of non-native vowel contrasts.

    Science.gov (United States)

    Polka, L

    1995-02-01

    Perception of natural productions of two German vowel contrasts, /y/ vs /u/ and /Y/ vs /U/, was examined in monolingual English-speaking adults. Subjects were tested on multiple exemplars of the contrasting vowels produced in a dVt syllable by a native German speaker. Discrimination accuracy in an AXB discrimination task was well above chance for both contrasts. Most of the English adults failed to attain "nativelike" discrimination accuracy for the lax vowel pair /U/ vs /Y/, whereas all subjects showed nativelike performance in discriminating the tense vowel pair /u/ vs /y/. Results of a keyword identification and rating task provided evidence that English listeners' mapping of the German vowels to English vowel categories can be characterized as a category goodness difference assimilation, and that the difference in category goodness was more pronounced for the tense vowel pair than for the lax vowel pair. The results failed to support the hypothesis that the acoustic structure of vowels consistently favors auditory coding. Overall, the findings are compatible with existing data on discrimination of cross-language consonant contrasts in natural speech and suggest that linguistic experience shapes the discrimination of vowels and consonants as phonetic segmental units in similar ways.

  3. Exploring Non-Native EFL Teachers’ Knowledge Base: Practices and Perceptions

    Directory of Open Access Journals (Sweden)

    Anchalee Jansem

    2014-11-01

    This qualitative study was conducted to explore the knowledge base non-native EFL teachers enact during instruction, their perceived knowledge base underlying teaching practices, and their perceived pathways of knowledge base construction. Data from four sources (video recordings of classroom observations, interviews, detailed field notes taken during classroom observations, and participants' reflections) revealed that the eight participants integrated knowledge of the English language, other content areas, instructional delivery, classroom management, and the changing world and social contexts in their instruction. The findings indicated that the participants saw their knowledge as consisting of language construction and skills, other content areas, the ability to teach, understanding of students' strengths, weaknesses, and needs, the changing world, social contexts, and technology, as well as problem-solving ability. They also perceived teacher education programs, additional learning experience, teaching experience, in-service professional development activities, and the working environment as key sources of knowledge base construction for non-native teachers. Keywords: knowledge base, English as a foreign language teachers, knowledge construction

  4. Perceptual assimilation and discrimination of non-native vowel contrasts.

    Science.gov (United States)

    Tyler, Michael D; Best, Catherine T; Faber, Alice; Levitt, Andrea G

    2014-01-01

    Research on language-specific tuning in speech perception has focused mainly on consonants, while that on non-native vowel perception has failed to address whether the same principles apply. Therefore, non-native vowel perception was investigated here in light of relevant theoretical models: the Perceptual Assimilation Model (PAM) and the Natural Referent Vowel (NRV) framework. American-English speakers completed discrimination and native language assimilation (categorization and goodness rating) tests on six nonnative vowel contrasts. Discrimination was consistent with PAM assimilation types, but asymmetries predicted by NRV were only observed for single-category assimilations, suggesting that perceptual assimilation might modulate the effects of vowel peripherality on non-native vowel perception.

  5. The relationship between auditory-visual speech perception and language-specific speech perception at the onset of reading instruction in English-speaking children.

    Science.gov (United States)

    Erdener, Doğu; Burnham, Denis

    2013-10-01

    Speech perception is auditory-visual, but relatively little is known about auditory-visual compared with auditory-only speech perception. One avenue for further understanding is via developmental studies. In a recent study, Sekiyama and Burnham (2008) found that English speakers significantly increase their use of visual speech information between 6 and 8 years of age but that this development does not appear to be universal across languages. Here, the possible bases for this language-specific increase among English speakers were investigated. Four groups of English-language children (5, 6, 7, and 8 years) and a group of adults were tested on auditory-visual, auditory-only, and visual-only speech perception; language-specific speech perception with native and non-native speech sounds; articulation; and reading. Results showed that language-specific speech perception and lip-reading ability reliably predicted auditory-visual speech perception in children but that adult auditory-visual speech perception was predicted by auditory-only speech perception. The implications are discussed in terms of both auditory-visual speech perception and language development. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. The effect of input source (native vs. non-native) and EFL learners' perceptions towards it, on their listening performances, across gender

    Directory of Open Access Journals (Sweden)

    Karim Sadeghi

    2014-07-01

    The issue of the "native/non-native speaker input source" occupies an important place in any EFL listening comprehension test. This study examines whether listening to a native or non-native speaker (i.e., the input source), and students' perceptions towards it, affect the performance of upper-intermediate EFL learners in a listening test. For this purpose, an experimental design was used to compare the performance of two groups of learners on an EFL listening test. A test of 20 multiple-choice items was administered to 66 EFL learners (31 male and 35 female); half of them listened to a native speaker's voice while the other 33 test takers listened to a non-native speaker's voice. Moreover, a questionnaire on students' perceptions towards using a native or non-native input source in listening tests was administered. The results showed that the overall performance of the two groups differed significantly: the listeners to the non-native input outperformed those who listened to a native speaker. The questionnaire also revealed that students preferred the use of non-native input in listening tests. Moreover, considering gender as a moderator variable, a statistically significant main effect was found: female test takers performed better than males in both conditions. Further findings and implications are discussed in the paper.

  7. Native and Non-native English Teachers' Perceptions of their Professional Identity: Convergent or Divergent?

    Directory of Open Access Journals (Sweden)

    Zia Tajeddin

    2016-10-01

    There is still a preference for native speaker teachers in the language teaching profession, which is presumed to influence the self-perceptions of native and nonnative teachers. However, the status of English as a globalized language is changing the legitimacy of the native/nonnative teacher dichotomy. This study sought to investigate native and nonnative English-speaking teachers' perceptions of native and nonnative teachers' status and of the advantages and disadvantages of being a native or nonnative teacher. Data were collected by means of a questionnaire and a semi-structured interview. A total of 200 native and nonnative teachers of English from the UK and the US (the inner circle) and Turkey and Iran (the expanding circle) participated in this study. A significant majority of nonnative teachers believed that native speaker teachers have better speaking proficiency, better pronunciation, and greater self-confidence. The findings also showed nonnative teachers' lack of self-confidence and of awareness of their role and status compared with native-speaker teachers, which could be the result of existing inequities between native and nonnative English-speaking teachers in ELT. The findings also revealed that native teachers disagreed more strongly with the concept of native teachers' superiority over nonnative teachers. Native teachers argued that nonnative teachers have a good understanding of teaching methodology, whereas native teachers are more competent in using the language correctly. It can be concluded that teacher education programs in expanding-circle countries should include materials that raise teachers' awareness of their own professional status and role and dispel the misconception underlying the native speaker fallacy.

  8. The effect of language immersion education on the preattentive perception of native and non-native vowel contrasts.

    Science.gov (United States)

    Peltola, Maija S; Tuomainen, Outi; Koskinen, Mira; Aaltonen, Olli

    2007-01-01

    Proficiency in a second language (L2) may depend upon the age of exposure and the continued use of the mother tongue (L1) during L2 acquisition. The effect of early L2 exposure on the preattentive perception of native and non-native vowel contrasts was studied by measuring the mismatch negativity (MMN) response from 14-year-old children. The test group consisted of six Finnish children who had participated in English immersion education. The control group consisted of eight monolingual Finns. The subjects were presented with Finnish and English synthetic vowel contrasts. The aim was to see whether early exposure had resulted in the development of a new language-specific memory trace for the contrast phonemically irrelevant in L1. The results indicated that only the contrast with the largest acoustic distance elicited an MMN response in the Bilingual group, while the Monolingual group showed a response also to the native contrast. This may suggest that native-like memory traces for prototypical vowels were not formed in early language immersion.

  9. Native and non-native speech sound processing and the neural mismatch responses: A longitudinal study on classroom-based foreign language learning.

    Science.gov (United States)

    Jost, Lea B; Eberhard-Moscicka, Aleksandra K; Pleisch, Georgette; Heusser, Veronica; Brandeis, Daniel; Zevin, Jason D; Maurer, Urs

    2015-06-01

    Learning a foreign language in a natural immersion context with high exposure to the new language has been shown to change the way speech sounds of that language are processed at the neural level. It remains unclear, however, to what extent this is also the case for classroom-based foreign language learning, particularly in children. To this end, we presented a mismatch negativity (MMN) experiment during EEG recordings as part of a longitudinal developmental study: 38 monolingual (Swiss-) German speaking children (7.5 years) were tested shortly before they started to learn English at school and followed up one year later. Moreover, 22 (Swiss-) German adults were recorded. The children initially showed a positive mismatch response; an MMN emerged only when a 3 Hz high-pass filter was applied. The overlap of a slow-wave positivity with the MMN indicates that two concurrent mismatch processes were elicited in children. The children's MMN in response to the non-native speech contrast was smaller than that to the native speech contrast, irrespective of foreign language learning, suggesting that no additional neural resources were committed to processing the foreign language speech sound after one year of classroom-based learning. Copyright © 2015 Elsevier Ltd. All rights reserved.
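
    The 3 Hz high-pass filter mentioned above is what separates the adult-like MMN from the overlapping slow positivity. Below is a minimal sketch of such a filter using SciPy, applied to an epochs-by-channels-by-samples array; the function name, array layout, and filter order are illustrative assumptions, not the study's exact pipeline.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def highpass_erp(epochs, fs, cutoff=3.0, order=4):
            """Zero-phase Butterworth high-pass along the time axis of an
            (epochs x channels x samples) array, removing slow drifts and
            the slow-wave positivity that can mask the MMN."""
            b, a = butter(order, cutoff / (fs / 2), btype='highpass')
            return filtfilt(b, a, epochs, axis=-1)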

  10. Intelligibility of American English Vowels of Native and Non-Native Speakers in Quiet and Speech-Shaped Noise

    Science.gov (United States)

    Liu, Chang; Jin, Su-Hyun

    2013-01-01

    This study examined intelligibility of twelve American English vowels produced by English, Chinese, and Korean native speakers in quiet and speech-shaped noise in which vowels were presented at six sensation levels from 0 dB to 10 dB. The slopes of vowel intelligibility functions and the processing time for listeners to identify vowels were…

  12. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

    Science.gov (United States)

    Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa

    2015-02-01

    To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing.

  13. THE REFLECTION OF BILINGUALISM IN THE SPEECH OF PRESCHOOL CHILDREN SPEAKING NATIVE (ERZYA) AND NON-NATIVE (RUSSIAN) LANGUAGE

    Directory of Open Access Journals (Sweden)

    Mosina, N.M.

    2016-03-01

    This article considers the specific features of the Mordovian speech of 16 bilingual children in Mordovia, aged 3 to 7 years, who speak both the Erzya and Russian languages. Their language is studied using short picture-based stories, with the aim of identifying the influence of Russian on Erzya and detecting occurrences of interference at the lexical and grammatical levels.

  14. Student perceptions of native and non-native speaker language instructors: A comparison of ESL and Spanish

    Directory of Open Access Journals (Sweden)

    Laura Callahan

    2006-12-01

    The question of the native vs. non-native speaker status of second and foreign language instructors has been investigated chiefly from the perspective of the teacher. Anecdotal evidence suggests that students have strong opinions on the relative qualities of instruction by native and non-native speakers. Most research focuses on students of English as a foreign or second language. This paper reports on data gathered through a questionnaire administered to 55 university students: 31 students of Spanish as a foreign language (SFL) and 24 students of English as a second language (ESL). Qualitative results show what strengths students believe each type of instructor has, and quantitative results confirm that any gap students may perceive between the abilities of native and non-native instructors is not as wide as one might expect based on popular notions of the issue. ESL students showed a stronger preference for native-speaker instructors overall, and were at variance with the SFL students' ratings of native-speaker instructors' performance on a number of aspects. There was a significant correlation in both groups between having a family member who is a native speaker of the target language and student preference for and self-identification with a native speaker as instructor.

  15. Word Durations in Non-Native English

    Science.gov (United States)

    Baker, Rachel E.; Baese-Berk, Melissa; Bonnasse-Gahot, Laurent; Kim, Midam; Van Engen, Kristin J.; Bradlow, Ann R.

    2010-01-01

    In this study, we compare the effects of English lexical features on word duration for native and non-native English speakers and for non-native speakers with different L1s and a range of L2 experience. We also examine whether non-native word durations lead to judgments of a stronger foreign accent. We measured word durations in English paragraphs read by 12 American English (AE), 20 Korean, and 20 Chinese speakers. We also had AE listeners rate the "accentedness" of these non-native speakers. AE speech had shorter durations, greater within-speaker word duration variance, greater reduction of function words, and less between-speaker variance than non-native speech. However, both AE and non-native speakers showed sensitivity to lexical predictability by reducing second mentions and high frequency words. Non-native speakers with more native-like word durations, greater within-speaker word duration variance, and greater function word reduction were perceived as less accented. Overall, these findings identify word duration as an important and complex feature of foreign-accented English. PMID:21516172
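
    Findings like these rest on simple descriptive statistics: within-speaker duration variance and its correlation with accent ratings. A small Python sketch of that computation follows; all speaker IDs, durations, and ratings below are hypothetical illustrations, not the study's data.

        import numpy as np
        from scipy.stats import pearsonr

        # Hypothetical per-speaker word durations (seconds) and mean
        # accentedness ratings (higher = more accented).
        durations = {'spk01': [0.21, 0.35, 0.18, 0.42],
                     'spk02': [0.25, 0.31, 0.27, 0.33],
                     'spk03': [0.20, 0.38, 0.16, 0.45],
                     'spk04': [0.26, 0.30, 0.28, 0.32]}
        accent = {'spk01': 3.0, 'spk02': 6.5, 'spk03': 2.4, 'spk04': 7.1}

        speakers = sorted(durations)
        # Within-speaker duration variance, one feature linked to accent ratings.
        dur_var = [np.var(durations[s]) for s in speakers]
        r, p = pearsonr(dur_var, [accent[s] for s in speakers])
        print(f"r = {r:.2f}, p = {p:.3f}")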

  16. The effect of phonetic production training with visual feedback on the perception and production of foreign speech sounds.

    Science.gov (United States)

    Kartushina, Natalia; Hervais-Adelman, Alexis; Frauenfelder, Ulrich Hans; Golestani, Narly

    2015-08-01

    Second-language learners often experience major difficulties in producing non-native speech sounds. This paper introduces a training method that uses a real-time analysis of the acoustic properties of vowels produced by non-native speakers to provide them with immediate, trial-by-trial visual feedback about their articulation alongside that of the same vowels produced by native speakers. The Mahalanobis acoustic distance between non-native productions and target native acoustic spaces was used to assess L2 production accuracy. The experiment shows that 1 h of training per vowel improves the production of four non-native Danish vowels: the learners' productions were closer to the corresponding Danish target vowels after training. The production performance of a control group remained unchanged. Comparisons of pre- and post-training vowel discrimination performance in the experimental group showed improvements in perception. Correlational analyses of training-related changes in production and perception revealed no relationship. These results suggest, first, that this training method is effective in improving non-native vowel production. Second, training purely on production improves perception. Finally, it appears that improvements in production and perception do not systematically progress at equal rates within individuals.
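
    The Mahalanobis distance used here measures how far a learner's vowel token lies from the cloud of native productions, in units scaled by the natives' own variability. A minimal sketch, under the assumption that tokens are represented as acoustic feature vectors (e.g., [F1, F2] in Hz); the function and variable names are ours.

        import numpy as np

        def mahalanobis_to_native(token, native_tokens):
            """Distance from one produced vowel token to the distribution of
            native productions of the target vowel (rows = tokens). Needs more
            native tokens than feature dimensions for a stable covariance."""
            native = np.asarray(native_tokens, dtype=float)
            mu = native.mean(axis=0)
            cov_inv = np.linalg.inv(np.cov(native, rowvar=False))
            d = np.asarray(token, dtype=float) - mu
            return float(np.sqrt(d @ cov_inv @ d))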

  17. Sensorimotor influences on speech perception in infancy.

    Science.gov (United States)

    Bruderer, Alison G; Danielson, D Kyle; Kandhadai, Padmapriya; Werker, Janet F

    2015-11-01

    The influence of speech production on speech perception is well established in adults. However, because adults have a long history of both perceiving and producing speech, the extent to which the perception-production linkage is due to experience is unknown. We addressed this issue by asking whether articulatory configurations can influence infants' speech perception performance. To eliminate influences from specific linguistic experience, we studied preverbal, 6-mo-old infants and tested the discrimination of a nonnative, and hence never-before-experienced, speech sound distinction. In three experimental studies, we used teething toys to control the position and movement of the tongue tip while the infants listened to the speech sounds. Using ultrasound imaging technology, we verified that the teething toys consistently and effectively constrained the movement and positioning of infants' tongues. With a looking-time procedure, we found that temporarily restraining infants' articulators impeded their discrimination of a nonnative consonant contrast but only when the relevant articulator was selectively restrained to prevent the movements associated with producing those sounds. Our results provide striking evidence that even before infants speak their first words and without specific listening experience, sensorimotor information from the articulators influences speech perception. These results transform theories of speech perception by suggesting that even at the initial stages of development, oral-motor movements influence speech sound discrimination. Moreover, an experimentally induced "impairment" in articulator movement can compromise speech perception performance, raising the question of whether long-term oral-motor impairments may impact perceptual development.

  18. Production and perception of clear speech

    Science.gov (United States)

    Bradlow, Ann R.

    2003-04-01

    When a talker believes that the listener is likely to have speech perception difficulties due to a hearing loss, background noise, or a different native language, she or he will typically adopt a clear speaking style. Previous research has established that, with a simple set of instructions to the talker, "clear speech" can be produced by most talkers under laboratory recording conditions. Furthermore, there is reliable evidence that adult listeners with either impaired or normal hearing typically find clear speech more intelligible than conversational speech. Since clear speech production involves listener-oriented articulatory adjustments, a careful examination of the acoustic-phonetic and perceptual consequences of the conversational-to-clear speech transformation can serve as an effective window into talker- and listener-related forces in speech communication. Furthermore, clear speech research has considerable potential for the development of speech enhancement techniques. After reviewing previous and current work on the acoustic properties of clear versus conversational speech, this talk will present recent data from a cross-linguistic study of vowel production in clear speech and a cross-population study of clear speech perception. Findings from these studies contribute to an evolving view of clear speech production and perception as reflecting both universal, auditory and language-specific, phonological contrast enhancement features.

  19. STUDY ON PHASE PERCEPTION IN SPEECH

    Institute of Scientific and Technical Information of China (English)

    Tong Ming; Bian Zhengzhong; Li Xiaohui; Dai Qijun; Chen Yanpu

    2003-01-01

    The perceptual effect of phase information in speech has been studied by auditory subjective tests. With the phase spectrum of speech changed while the amplitude spectrum is kept unchanged, the tests show that: (1) if the envelope of the reconstructed speech signal is unchanged, auditory perception of the original and the reconstructed speech is indistinguishable; (2) the auditory perception of the reconstructed speech depends mainly on the amplitude of the derivative of the additive phase; (3) with td denoting the maximum relative time shift between different frequency components of the reconstructed speech signal, speech quality is excellent for td < 10 ms, good for 10 ms < td < 20 ms, fair for 20 ms < td < 35 ms, and poor for td > 35 ms.

  20. Brain structure is related to speech perception abilities in bilinguals.

    Science.gov (United States)

    Burgaleta, Miguel; Baus, Cristina; Díaz, Begoña; Sebastián-Gallés, Núria

    2014-07-01

    Morphology of the human brain predicts the speed at which individuals learn to distinguish novel foreign speech sounds after laboratory training. However, little is known about the neuroanatomical basis of individual differences in speech perception when a second language (L2) has been learned in natural environments for extended periods of time. In the present study, two samples of highly proficient bilinguals were selected according to their ability to distinguish between very similar L2 sounds, either isolated (prelexical) or within words (lexical). Structural MRI was acquired and processed to estimate vertex-wise indices of cortical thickness (CT) and surface area (CSA), and the association between cortical morphology and behavioral performance was inspected. Results revealed that performance in the lexical task was negatively associated with the thickness of the left temporal cortex and angular gyrus, as well as with the surface area of the left precuneus. Our findings, consistent with previous fMRI studies, demonstrate that morphology of the reported areas is relevant for word recognition based on phonological information. Further, we discuss the possibility that the increased CT and CSA in sound-to-meaning mapping regions found for poor perceivers of non-native speech sounds arose plastically after extended periods of increased functional activity during L2 exposure.

  1. Evidence for language transfer leading to a perceptual advantage for non-native listeners.

    Science.gov (United States)

    Chang, Charles B; Mishler, Alan

    2012-10-01

    Phonological transfer from the native language is a common problem for non-native speakers that has repeatedly been shown to result in perceptual deficits vis-à-vis native speakers. It was hypothesized, however, that transfer could help, rather than hurt, if it resulted in a beneficial bias. Due to differences in pronunciation norms between Korean and English, Koreans in the U.S. were predicted to be better than Americans at perceiving unreleased stops, not only in their native language (Korean) but also in their non-native language (English). In three experiments, Koreans were found to be significantly more accurate than Americans at identifying unreleased stops in Korean, at identifying unreleased stops in English, and at discriminating between the presence and absence of an unreleased stop in English. Taken together, these results suggest that cross-linguistic transfer is capable of boosting speech perception by non-natives beyond native levels.

  2. Native and non-native perception of phonemic length contrasts in Japanese: Effects of speaking rate and presentation context

    Science.gov (United States)

    Wilson, Amanda; Kato, Hiroaki; Tajima, Keiichi

    2005-04-01

    Japanese words can be distinguished by the length of phonemes, e.g., "chizu" (map) versus "chiizu" (cheese). Perceiving these length contrasts is therefore important for learning Japanese as a second language. The present study examined native English listeners' perception of length contrasts at different speaking rates and in different contexts. Stimuli consisted of 20 Japanese word pairs that minimally contrasted in vowel length, and 10 synthesized nonwords. The nonwords were created by modifying the duration of the second vowel of the nonword "erete" along a continuum (from "erete" to "ereete"). Stimuli were presented with or without a carrier sentence at three rates (fast, normal, slow). Rate was either fixed or randomized trial by trial. Sixteen native English and 16 native Japanese listeners participated in a single-stimulus, two-alternative forced-choice identification task. Results suggest that native Japanese listeners' identification boundaries systematically shifted due to changes in speaking rate when the stimuli were in the context of a sentence with mixed rates of presentation. In contrast, native English listeners showed a shift in the opposite direction, suggesting that they did not follow the variation in speaking rate. These results will be discussed from the viewpoint of training second-language phoneme perception. [Work supported by JSPS.]
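
    Identification boundaries along such a duration continuum are typically estimated by fitting a psychometric function to the proportion of "long" responses. Here is a sketch with SciPy; the durations and response proportions are invented for illustration, and the logistic parameterization is one common choice, not necessarily the study's.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(x, boundary, slope):
            # Proportion of "long" (e.g., "ereete") responses at duration x.
            return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

        # Hypothetical identification data: second-vowel durations (ms) and
        # proportion of "long" responses at each step of the continuum.
        durations = np.array([60, 80, 100, 120, 140, 160, 180])
        p_long = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.95, 0.99])

        (boundary, slope), _ = curve_fit(logistic, durations, p_long, p0=[120, 0.1])
        # `boundary` estimates the short/long category boundary; comparing fits
        # across speaking rates quantifies the boundary shifts described above.
        print(f"boundary = {boundary:.1f} ms, slope = {slope:.3f}")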

  3. Individual differences in degraded speech perception

    Science.gov (United States)

    Carbonell, Kathy M.

    One of the lasting concerns in audiology is unexplained individual differences in speech perception performance, even among individuals with similar audiograms. One proposal is that there are cognitive/perceptual individual differences underlying this vulnerability and that these differences are present in normal-hearing (NH) individuals but do not reveal themselves in studies that use clear speech produced in quiet (because of a ceiling effect). However, previous studies have failed to uncover cognitive/perceptual variables that explain much of the variance in NH performance on more challenging degraded speech tasks. This lack of strong correlations may be due either to examining the wrong measures (e.g., working memory capacity) or to there being no reliable differences in degraded speech performance in NH listeners (i.e., variability in performance is due to measurement noise). The proposed project has three aims: first, to establish whether there are reliable individual differences in degraded speech performance for NH listeners that are sustained both across degradation types (speech in noise, compressed speech, noise-vocoded speech) and across multiple testing sessions; second, to establish whether there are reliable differences in NH listeners' ability to adapt their phonetic categories based on short-term statistics, both across tasks and across sessions; and finally, to determine whether performance on degraded speech perception tasks is correlated with performance on phonetic adaptability tasks, thus establishing a possible explanatory variable for individual differences in speech perception for NH and hearing-impaired listeners.
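
    Of the degradations listed above, noise-vocoded speech is the most algorithmically specific: the signal is split into frequency bands and each band's amplitude envelope modulates band-limited noise. A minimal Python/SciPy sketch follows, assuming a 1-D signal sampled comfortably above twice the top band edge (e.g., 16 kHz); the band count and edges are illustrative choices.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def noise_vocode(speech, fs, n_bands=6, lo=100.0, hi=7000.0):
            """Split speech into log-spaced bands, extract each band's
            envelope, and use it to modulate band-limited noise, degrading
            spectral detail while preserving temporal envelope cues."""
            edges = np.geomspace(lo, hi, n_bands + 1)
            rng = np.random.default_rng(0)
            out = np.zeros(len(speech), dtype=float)
            for f1, f2 in zip(edges[:-1], edges[1:]):
                b, a = butter(3, [f1 / (fs / 2), f2 / (fs / 2)], btype='bandpass')
                band = filtfilt(b, a, speech)
                envelope = np.abs(hilbert(band))          # amplitude envelope
                noise = filtfilt(b, a, rng.standard_normal(len(speech)))
                out += envelope * noise
            return out / np.max(np.abs(out))              # normalize peak level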

  4. The Emergence of L2 Phonological Contrast in Perception: The Case of Korean Sibilant Fricatives

    Science.gov (United States)

    Holliday, Jeffrey J.

    2012-01-01

    The perception of non-native speech sounds is heavily influenced by the acoustic cues that are relevant for differentiating members of a listener's native (L1) phonological contrasts. Many studies of both (naive) non-native and (not naive) second language (L2) speech perception implicitly assume continuity in a listener's habits of…

  6. Neural pathways for visual speech perception.

    Science.gov (United States)

    Bernstein, Lynne E; Liebenthal, Einat

    2014-01-01

    This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

  7. Speech perception as an active cognitive process

    Directory of Open Access Journals (Sweden)

    Shannon eHeald

    2014-03-01

    One view of speech perception is that acoustic signals are transformed into representations for pattern matching to determine linguistic structure. This process can be taken as a statistical pattern-matching problem, assuming relatively stable linguistic categories are characterized by neural representations related to auditory properties of speech that can be compared to speech input. This kind of pattern matching can be termed a passive process, which implies rigidity of processing with few demands on cognitive processing. An alternative view is that speech recognition, even in early stages, is an active process in which speech analysis is attentionally guided. Note that this does not mean consciously guided, but that information-contingent changes in early auditory encoding can occur as a function of context and experience. Active processing assumes that attention, plasticity, and listening goals are important in considering how listeners cope with adverse circumstances that impair hearing, whether through masking noise in the environment or hearing loss. Although theories of speech perception have begun to incorporate some active processing, they seldom treat early speech encoding as plastic and attentionally guided. Recent research has suggested that speech perception is the product of both feedforward and feedback interactions between a number of brain regions that include descending projections perhaps as far downstream as the cochlea. It is important to understand how the ambiguity of the speech signal and constraints of context dynamically determine cognitive resources recruited during perception, including focused attention, learning, and working memory. Theories of speech perception need to go beyond the current corticocentric approach in order to account for the intrinsic dynamics of the auditory encoding of speech. In doing so, this may provide new insights into ways in which hearing disorders and loss may be treated either through augmentation or

  8. Native and Non-native Teachers’ Pragmatic Criteria for Rating Request Speech Act: The Case of American and Iranian EFL Teachers

    Directory of Open Access Journals (Sweden)

    Minoo Alemi

    2017-04-01

    Abstract: Over the last few decades, several aspects of pragmatic knowledge and its effects on teaching and learning a second language (L2) have been explored in many studies. However, among these studies, the area of interlanguage pragmatic (ILP) assessment is quite a novel issue and many of its features have remained unnoticed. As ILP assessment has received more attention recently, investigating EFL teachers' criteria for rating various speech acts has become important. In this respect, the present study aimed to investigate native and non-native EFL teachers' rating scores and criteria regarding the speech act of request. To this end, 50 American ESL teachers and 50 Iranian EFL teachers participated, rating EFL learners' responses to video-prompted Discourse Completion Tests (DCTs) on the speech act of request. Raters were asked to rate the EFL learners' responses and to state their assessment criteria. The content analysis of raters' comments revealed nine criteria that they considered in their assessment. Moreover, t-test and chi-square analyses of raters' rating scores and criteria showed significant differences between native and non-native EFL teachers' rating patterns. The results also shed light on the importance of sociopragmatic and pragmalinguistic features in native and non-native teachers' pragmatic rating, which can have several implications for L2 teachers, learners, and material developers.

  9. The Beginnings of Danish Speech Perception

    DEFF Research Database (Denmark)

    Østerbye, Torkil

    Little is known about the perception of speech sounds by native Danish listeners. However, the Danish sound system differs in several interesting ways from the sound systems of other languages. For instance, Danish is characterized, among other features, by a rich vowel inventory and by different … in the light of the rich and complex Danish sound system. The first two studies report on native adults' perception of Danish speech sounds in quiet and noise. The third study examined the development of language-specific perception in native Danish infants at 6, 9 and 12 months of age. The book points to interesting differences in speech perception and acquisition of Danish adults and infants when compared to English. The book is useful for professionals as well as students of linguistics, psycholinguistics and phonetics/phonology, or anyone else who may be interested in language.

  10. Speech perception in children with speech output disorders.

    NARCIS (Netherlands)

    Nijland, L.

    2009-01-01

    Research in the field of speech production pathology is dominated by describing deficits in output. However, perceptual problems might underlie, precede, or interact with production disorders. The present study hypothesizes that the level of the production disorders is linked to level of perception

  12. Prediction and constraint in audiovisual speech perception

    Science.gov (United States)

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported

  13. Perception and Temporal Properties of Speech

    Science.gov (United States)

    1990-07-26

    Keywords: speech perception, prosody, context effects, phonetic segments. … found to aid listeners in correctly attributing the phonological source of vowel duration. The second series of experiments examines the role of … phonetic segments, and the role of coarse-grained aspects of the speech signal in facilitating segment recognition. These extensions will address the

  14. Reflections on mirror neurons and speech perception.

    Science.gov (United States)

    Lotto, Andrew J; Hickok, Gregory S; Holt, Lori L

    2009-03-01

    The discovery of mirror neurons, a class of neurons that respond when a monkey performs an action and also when the monkey observes others producing the same action, has promoted a renaissance for the Motor Theory (MT) of speech perception. This is because mirror neurons seem to accomplish the same kind of one to one mapping between perception and action that MT theorizes to be the basis of human speech communication. However, this seeming correspondence is superficial, and there are theoretical and empirical reasons to temper enthusiasm about the explanatory role mirror neurons might have for speech perception. In fact, rather than providing support for MT, mirror neurons are actually inconsistent with the central tenets of MT.

  15. Speech perception as complex auditory categorization

    Science.gov (United States)

    Holt, Lori L.

    2002-05-01

    Despite a long and rich history of categorization research in cognitive psychology, very little work has addressed the issue of complex auditory category formation. This is especially unfortunate because the general underlying cognitive and perceptual mechanisms that guide auditory category formation are of great importance to understanding speech perception. I will discuss a new methodological approach to examining complex auditory category formation that specifically addresses issues relevant to speech perception. This approach utilizes novel nonspeech sound stimuli to gain full experimental control over listeners' history of experience. As such, the course of learning is readily measurable. Results from this methodology indicate that the structure and formation of auditory categories are a function of the statistical input distributions of sound that listeners hear, aspects of the operating characteristics of the auditory system, and characteristics of the perceptual categorization system. These results have important implications for phonetic acquisition and speech perception.

  16. Ordinal models of audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2011-01-01

    Audiovisual information is integrated in speech perception. One manifestation of this is the McGurk illusion, in which watching the articulating face alters the auditory phonetic percept. Understanding this phenomenon fully requires a computational model with predictive power. Here, we describe ordinal models that can account for the McGurk illusion. We compare this type of model to the Fuzzy Logical Model of Perception (FLMP), in which the response categories are not ordered. While the FLMP generally fit the data better than the ordinal model, it also employs more free parameters in complex experiments when the number of response categories is high, as it is for speech perception in general. Testing the predictive power of the models using a form of cross-validation, we found that ordinal models perform better than the FLMP. Based on these findings we suggest that ordinal models generally have...
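
    For reference, the FLMP mentioned above combines auditory and visual support multiplicatively and normalizes over response alternatives. A minimal sketch of that prediction rule follows; the example support values are invented for illustration.

        import numpy as np

        def flmp(auditory, visual):
            """Fuzzy Logical Model of Perception: response probability is the
            product of the auditory and visual support for each alternative,
            normalized over all alternatives."""
            support = np.asarray(auditory, dtype=float) * np.asarray(visual, dtype=float)
            return support / support.sum()

        # E.g., auditory evidence favoring /aba/ combined with visual evidence
        # favoring /aga/ shifts responses toward /aga/ (a McGurk-like case).
        print(flmp(auditory=[0.8, 0.2], visual=[0.1, 0.9]))  # -> [0.31, 0.69]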

  17. Audiovisual integration in speech perception: a multi-stage process

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    Integration of speech signals from ear and eye is a well-known feature of speech perception. This is evidenced by the McGurk illusion in which visual speech alters auditory speech perception and by the advantage observed in auditory speech detection when a visual signal is present. Here we...... investigate whether the integration of auditory and visual speech observed in these two audiovisual integration effects is a specific trait of speech perception. We further ask whether audiovisual integration is undertaken in a single processing stage or multiple processing stages.

  18. Aero-tactile integration in speech perception

    Science.gov (United States)

    Gick, Bryan; Derrick, Donald

    2013-01-01

    Visual information from a speaker’s face can enhance [1] or interfere with [2] accurate auditory perception. This integration of information across auditory and visual streams has been observed in functional imaging studies [3,4], and has typically been attributed to the frequency and robustness with which perceivers jointly encounter event-specific information from these two modalities [5]. Adding the tactile modality has long been considered a crucial next step in understanding multisensory integration. However, previous studies have found an influence of tactile input on speech perception only under limited circumstances, either where perceivers were aware of the task [6,7] or where they had received training to establish a cross-modal mapping [8-10]. Here we show that perceivers integrate naturalistic tactile information during auditory speech perception without previous training. Drawing on the observation that some speech sounds produce tiny bursts of aspiration (such as English ‘p’) [11], we applied slight, inaudible air puffs on participants’ skin at one of two locations: the right hand or the neck. Syllables heard simultaneously with cutaneous air puffs were more likely to be heard as aspirated (for example, causing participants to mishear ‘b’ as ‘p’). These results demonstrate that perceivers integrate event-relevant tactile information in auditory perception in much the same way as they do visual information. PMID:19940925

  19. Are there interactive processes in speech perception?

    Science.gov (United States)

    McClelland, James L.; Mirman, Daniel; Holt, Lori L.

    2012-01-01

    Lexical information facilitates speech perception, especially when sounds are ambiguous or degraded. The interactive approach to understanding this effect posits that this facilitation is accomplished through bi-directional flow of information, allowing lexical knowledge to influence pre-lexical processes. Alternative autonomous theories posit feed-forward processing with lexical influence restricted to post-perceptual decision processes. We review evidence supporting the prediction of interactive models that lexical influences can affect pre-lexical mechanisms, triggering compensation, adaptation and retuning of phonological processes generally taken to be pre-lexical. We argue that these and other findings point to interactive processing as a fundamental principle for perception of speech and other modalities. PMID:16843037

  20. Speech Perception and Short-Term Memory Deficits in Persistent Developmental Speech Disorder

    Science.gov (United States)

    Kenney, Mary Kay; Barac-Cikoja, Dragana; Finnegan, Kimberly; Jeffries, Neal; Ludlow, Christy L.

    2006-01-01

    Children with developmental speech disorders may have additional deficits in speech perception and/or short-term memory. To determine whether these are only transient developmental delays that can accompany the disorder in childhood or persist as part of the speech disorder, adults with a persistent familial speech disorder were tested on speech…

  1. Lip movements affect infants' audiovisual speech perception.

    Science.gov (United States)

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  2. The synergy between speech production and perception

    Science.gov (United States)

    Ru, Powen; Chi, Taishih; Shamma, Shihab

    2003-01-01

    Speech intelligibility is known to be relatively unaffected by certain deformations of the acoustic spectrum. These include translations, stretching or contracting dilations, and shearing of the spectrum (represented along the logarithmic frequency axis). It is argued here that such robustness reflects a synergy between vocal production and auditory perception. Thus, on the one hand, it is shown that these spectral distortions are produced by common and unavoidable variations among different speakers pertaining to the length, cross-sectional profile, and losses of their vocal tracts. On the other hand, it is argued that these spectral changes leave the auditory cortical representation of the spectrum largely unchanged except for translations along one of its representational axes. These assertions are supported by analyses of production and perception models. On the production side, a simplified sinusoidal model of the vocal tract is developed which analytically relates a few "articulatory" parameters, such as the extent and location of the vocal tract constriction, to the spectral peaks of the acoustic spectra synthesized from it. The model is evaluated by comparing the identification of synthesized sustained vowels to labeled natural vowels extracted from the TIMIT corpus. On the perception side, a "multiscale" model of sound processing is utilized to elucidate the effects of the deformations on the representation of the acoustic spectrum in the primary auditory cortex. Finally, the implications of these results for the perception of generally identifiable classes of sound sources beyond the specific case of speech and the vocal tract are discussed.

  3. Perception of speech in noise: neural correlates.

    Science.gov (United States)

    Song, Judy H; Skoe, Erika; Banai, Karen; Kraus, Nina

    2011-09-01

    The presence of irrelevant auditory information (other talkers, environmental noises) presents a major challenge to listening to speech. The fundamental frequency (F0) of the target speaker is thought to provide an important cue for the extraction of the speaker's voice from background noise, but little is known about the relationship between speech-in-noise (SIN) perceptual ability and neural encoding of the F0. Motivated by recent findings that music and language experience enhance brainstem representation of sound, we examined the hypothesis that brainstem encoding of the F0 is diminished to a greater degree by background noise in people with poorer perceptual abilities in noise. To this end, we measured speech-evoked auditory brainstem responses to /da/ in quiet and two multitalker babble conditions (two-talker and six-talker) in native English-speaking young adults who ranged in their ability to perceive and recall SIN. Listeners who were poorer performers on a standardized SIN measure demonstrated greater susceptibility to the degradative effects of noise on the neural encoding of the F0. Particularly diminished was their phase-locked activity to the fundamental frequency in the portion of the syllable known to be most vulnerable to perceptual disruption (i.e., the formant transition period). Our findings suggest that the subcortical representation of the F0 in noise contributes to the perception of speech in noisy conditions.
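
    As a rough illustration of how F0 encoding might be quantified, the sketch below reads the spectral magnitude at the stimulus F0 from a windowed response. The "responses" are synthetic stand-ins, and the sampling rate, window length, and F0 are assumed values, not the study's recording parameters.

        import numpy as np

        def f0_magnitude(resp, fs, f0):
            # Spectral magnitude at the stimulus F0 for a windowed response.
            spec = np.abs(np.fft.rfft(resp * np.hanning(len(resp))))
            freqs = np.fft.rfftfreq(len(resp), 1 / fs)
            return spec[np.argmin(np.abs(freqs - f0))]

        fs, f0 = 16000, 100.0
        t = np.arange(int(0.05 * fs)) / fs            # 50 ms analysis window
        quiet = np.sin(2 * np.pi * f0 * t)            # stand-in response in quiet
        noisy = (0.4 * np.sin(2 * np.pi * f0 * t)
                 + 0.2 * np.random.default_rng(0).standard_normal(len(t)))
        print(f0_magnitude(quiet, fs, f0), f0_magnitude(noisy, fs, f0))  # noise shrinks the F0 peak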

  4. Specialization in audiovisual speech perception: a replication study

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by bimodal integration in the McGurk effect. This integration effect may be specific to speech or be applied to all stimuli in general. To investigate this, Tuomainen et al. (2005) used sine-wave speech, which naïve observers may perceive as non-speech, but hear as speech once informed of the linguistic origin of the signal. Combinations of sine-wave speech and incongruent video of the talker elicited a McGurk effect only for informed observers. This indicates that the audiovisual integration effect is specific to speech perception. However, observers...... of the speaker. Observers were required to report this after primary target categorization. We found a significant McGurk effect only in the natural speech and speech mode conditions supporting the finding of Tuomainen et al. Performance in the secondary task was similar in all conditions indicating......

  6. Perception of Speech Sounds in School-Aged Children with Speech Sound Disorders.

    Science.gov (United States)

    Preston, Jonathan L; Irwin, Julia R; Turcios, Jacqueline

    2015-11-01

    Children with speech sound disorders may perceive speech differently than children with typical speech development. The nature of these speech differences is reviewed with an emphasis on assessing phoneme-specific perception for speech sounds that are produced in error. Category goodness judgment, or the ability to judge accurate and inaccurate tokens of speech sounds, plays an important role in phonological development. The software Speech Assessment and Interactive Learning System, which has been effectively used to assess preschoolers' ability to perform goodness judgments, is explored for school-aged children with residual speech errors (RSEs). However, data suggest that this particular task may not be sensitive to perceptual differences in school-aged children. The need for the development of clinical tools for assessment of speech perception in school-aged children with RSE is highlighted, and clinical suggestions are provided.

  7. Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special

    Science.gov (United States)

    Vroomen, Jean; Stekelenburg, Jeroen J.

    2011-01-01

    Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…

  8. Electrophysiological assessment of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Dau, Torsten

    Speech perception integrates signals from ear and eye. This is witnessed by a wide range of audiovisual integration effects, such as ventriloquism and the McGurk illusion. Some behavioral evidence suggests that audiovisual integration of specific aspects is special for speech perception. However, o...

  9. Lexical and sublexical units in speech perception.

    Science.gov (United States)

    Giroux, Ibrahima; Rey, Arnaud

    2009-03-01

    Saffran, Newport, and Aslin (1996a) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. Two sorts of segmentation strategies have been proposed to account for this early word-segmentation ability: bracketing strategies, in which infants are assumed to insert boundaries into continuous speech, and clustering strategies, in which infants are assumed to group certain speech sequences together into units (Swingley, 2005). In the present study, we test the predictions of two computational models instantiating each of these strategies (i.e., Serial Recurrent Networks: Elman, 1990; and Parser: Perruchet & Vinter, 1998) in an experiment where we compare the lexical and sublexical recognition performance of adults after hearing 2 or 10 min of an artificial spoken language. The results are consistent with Parser's predictions and the clustering approach, showing that performance on words is better than performance on part-words only after 10 min. This result suggests that word segmentation abilities are not merely due to stronger associations between sublexical units but to the emergence of stronger lexical representations during the development of speech perception processes.
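
    A bracketing strategy can be sketched with transitional probabilities (TPs): within-word syllable pairs have high TPs, while pairs spanning a word boundary have low TPs and so mark candidate boundaries. The three-word mini-language below is invented for illustration, in the spirit of Saffran et al.'s stimuli, and is not taken from the study.

        import random
        from collections import Counter

        # Compute syllable-to-syllable transitional probabilities in a continuous
        # stream and treat TP dips as candidate word boundaries.
        words = ["tupiro", "golabu", "bidaku"]       # hypothetical trisyllabic words
        random.seed(0)
        stream = [w[i:i + 2] for w in random.choices(words, k=200)
                  for i in range(0, 6, 2)]           # two-letter "syllables"

        pair_counts = Counter(zip(stream, stream[1:]))
        first_counts = Counter(stream[:-1])
        tp = {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

        for (a, b), p in sorted(tp.items()):
            print(f"{a}->{b}: {p:.2f}")   # within-word TPs are 1.0; across-word TPs are ~0.33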

  10. Musical expertise and foreign speech perception

    Directory of Open Access Journals (Sweden)

    Eduardo eMartínez-Montes

    2013-11-01

    The aim of this experiment was to investigate the influence of musical expertise on the automatic perception of foreign syllables and harmonic sounds. Participants were Cuban students with a high level of expertise in music or in visual arts and with the same level of general education and socio-economic background. We used a multi-feature Mismatch Negativity (MMN) design with sequences of either syllables in Mandarin Chinese or harmonic sounds, both comprising deviants in pitch contour, duration and Voice Onset Time (VOT) or equivalent that were either far from (Large deviants) or close to (Small deviants) the standard. For both Mandarin syllables and harmonic sounds, results were clear-cut in showing larger MMNs to pitch contour deviants in musicians than in visual artists. Results were less clear for duration and VOT deviants, possibly because of the specific characteristics of the stimuli. Results are interpreted as reflecting similar processing of pitch contour in speech and non-speech sounds. The implications of these results for understanding the influence of intense musical training from childhood to adulthood and of genetic predispositions for music on foreign language perception are discussed.

  11. [Speech perception in the first two years].

    Science.gov (United States)

    Bertoncini, J; Cabrera, L

    2014-10-01

    The development of speech perception relies upon early auditory capacities (i.e. discrimination, segmentation and representation). Infants are able to discriminate most of the phonetic contrasts occurring in natural languages, and at the end of the first year, this universal ability starts to narrow down to the contrasts used in the environmental language. During the second year, this specialization is characterized by the development of comprehension, lexical organization and word production. That process now appears to be the result of multiple interactions between developing perceptual, cognitive and social abilities. Distinct factors like word acquisition, sensitivity to the statistical properties of the input, or even the nature of the social interactions, might play a role at one time or another during the acquisition of phonological patterns. Experience with the native language is necessary for phonetic segments to be functional units of perception and for speech sound representations (words, syllables) to be more specified and phonetically organized. This evolution goes on beyond 24 months of age in a learning context characterized from the early stages by the interaction with other developing (linguistic and non-linguistic) capacities.

  12. Impact of second-language experience in infancy: brain measures of first- and second-language speech perception.

    Science.gov (United States)

    Conboy, Barbara T; Kuhl, Patricia K

    2011-03-01

    Language experience 'narrows' speech perception by the end of infants' first year, reducing discrimination of non-native phoneme contrasts while improving native-contrast discrimination. Previous research showed that declines in non-native discrimination were reversed by second-language experience provided at 9-10 months, but it is not known whether second-language experience affects first-language speech sound processing. Using event-related potentials (ERPs), we examined learning-related changes in brain activity to Spanish and English phoneme contrasts in monolingual English-learning infants pre- and post-exposure to Spanish from 9.5-10.5 months of age. Infants showed a significant discriminatory ERP response to the Spanish contrast at 11 months (post-exposure), but not at 9 months (pre-exposure). The English contrast elicited an earlier discriminatory response at 11 months than at 9 months, suggesting improvement in native-language processing. The results show that infants rapidly encode new phonetic information, and that improvement in native speech processing can occur during second-language learning in infancy.

  13. Voice and Speech Quality Perception Assessment and Evaluation

    CERN Document Server

    Jekosch, Ute

    2005-01-01

    Foundations of Voice and Speech Quality Perception starts out with the fundamental question: "How do listeners perceive voice and speech quality and how can these processes be modeled?" Any quantitative answers require measurements. This is natural for physical quantities but harder to imagine for perceptual measurands. This book approaches the problem by identifying major perceptual dimensions of voice and speech quality perception, defining units wherever possible and offering paradigms to position these dimensions into a structural skeleton of perceptual speech and voice quality. The emphasis is placed on voice and speech quality assessment of systems in artificial scenarios. Many scientific fields are involved. This book bridges the gap between two quite diverse fields, engineering and humanities, and establishes the new research area of Voice and Speech Quality Perception.

  14. Review of Visual Speech Perception by Hearing and Hearing-Impaired People: Clinical Implications

    Science.gov (United States)

    Woodhouse, Lynn; Hickson, Louise; Dodd, Barbara

    2009-01-01

    Background: Speech perception is often considered specific to the auditory modality, despite convincing evidence that speech processing is bimodal. The theoretical and clinical roles of speech-reading for speech perception, however, have received little attention in speech-language therapy. Aims: The role of speech-read information for speech…

  15. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    Science.gov (United States)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.
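
    A common way to summarize such simultaneity judgments is to fit a window function across audiovisual lags and track the point of subjective simultaneity (PSS) before versus after lag exposure; a PSS shift indicates recalibration. The sketch below fits a Gaussian window to invented response proportions; none of the numbers come from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian_window(soa, pss, width, peak):
            # Proportion of "simultaneous" responses as a function of lag (SOA).
            return peak * np.exp(-0.5 * ((soa - pss) / width) ** 2)

        soas = np.array([-300, -200, -100, 0, 100, 200, 300])           # ms, audio-lead negative
        p_simultaneous = np.array([0.1, 0.3, 0.7, 0.9, 0.8, 0.5, 0.2])  # hypothetical data

        (pss, width, peak), _ = curve_fit(gaussian_window, soas, p_simultaneous,
                                          p0=(0.0, 100.0, 1.0))
        print(f"PSS = {pss:.1f} ms")   # compare pre- vs post-exposure fits for recalibration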

  16. Free classification of American English dialects by native and non-native listeners.

    Science.gov (United States)

    Clopper, Cynthia G; Bradlow, Ann R

    2009-10-01

    Most second language acquisition research focuses on linguistic structures, and less research has examined the acquisition of sociolinguistic patterns. The current study explored the perceptual classification of regional dialects of American English by native and non-native listeners using a free classification task. Results revealed similar classification strategies for the native and non-native listeners. However, the native listeners were more accurate overall than the non-native listeners. In addition, the non-native listeners were less able to make use of constellations of cues to accurately classify the talkers by dialect. However, the non-native listeners were able to attend to cues that were either phonologically or sociolinguistically relevant in their native language. These results suggest that non-native listeners can use information in the speech signal to classify talkers by regional dialect, but that their lack of signal-independent cultural knowledge about variation in the second language leads to less accurate classification performance.

  17. Sound frequency affects speech emotion perception: Results from congenital amusia

    Directory of Open Access Journals (Sweden)

    Sydney eLolli

    2015-09-01

    Congenital amusics, or tone-deaf individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying band-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody (MBEP) were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task and an emotion identification task under band-pass and unfiltered speech conditions. Results showed a significant correlation between pitch discrimination threshold and emotion identification accuracy for band-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold > 16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between band-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation.
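
    A band-pass manipulation of this general kind can be sketched with a Butterworth filter; the passband below, chosen to retain F0/prosodic information while attenuating higher-frequency segmental cues, is illustrative and not the study's actual filter setting.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def bandpass(signal, fs, lo=50.0, hi=500.0, order=4):
            # Zero-phase Butterworth band-pass filter.
            sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
            return sosfiltfilt(sos, signal)

        fs = 16000
        t = np.arange(fs) / fs
        speech_like = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 2500 * t)
        filtered = bandpass(speech_like, fs)   # the 2.5 kHz component is strongly attenuated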

  18. Discriminative Phoneme Sequences Extraction for Non-Native Speaker's Origin Classification

    CERN Document Server

    Bouselmi, Ghazi; Illina, Irina; Haton, Jean-Paul

    2007-01-01

    In this paper we present an automated method for the classification of the origin of non-native speakers. The origin of non-native speakers could be identified by a human listener based on the detection of typical pronunciations for each nationality. Thus we suppose the existence of several phoneme sequences that might allow the classification of the origin of non-native speakers. Our new method is based on the extraction of discriminative sequences of phonemes from a non-native English speech database. These sequences are used to construct a probabilistic classifier for the speakers' origin. The existence of discriminative phone sequences in non-native speech is a significant result of this work. The system that we have developed achieved a correct classification rate of 96.3% and a significant error reduction compared to some other tested techniques.
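
    The core idea, scoring a speaker's phoneme sequence against per-origin sequence statistics, can be sketched as an add-one-smoothed n-gram classifier. Everything below (the toy training sequences, the trigram order, the smoothing) is an assumption for illustration; the paper's actual sequence extraction and classifier are more elaborate.

        import math
        from collections import Counter

        def trigrams(phones):
            return list(zip(phones, phones[1:], phones[2:]))

        # Hypothetical accented pronunciations, one tiny "database" per origin.
        train = {
            "it": [["d", "i", "s", "t", "i", "n", "k", "t"]],
            "fr": [["z", "i", "s", "t", "i", "n", "g"]],
        }
        models = {lang: Counter(g for seq in seqs for g in trigrams(seq))
                  for lang, seqs in train.items()}

        def classify(phones, alpha=1.0):
            # Add-one-smoothed log-likelihood under each origin's trigram model.
            scores = {}
            for lang, counts in models.items():
                total, vocab = sum(counts.values()), len(counts) + 1
                scores[lang] = sum(math.log((counts[g] + alpha) / (total + alpha * vocab))
                                   for g in trigrams(phones))
            return max(scores, key=scores.get)

        print(classify(["z", "i", "s", "t"]))   # -> 'fr' on this toy model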

  19. Recognition of spoken words by native and non-native listeners: Talker-, listener-, and item-related factors

    Science.gov (United States)

    Bradlow, Ann R.; Pisoni, David B.

    2012-01-01

    In order to gain insight into the interplay between the talker-, listener-, and item-related factors that influence speech perception, a large multi-talker database of digitally recorded spoken words was developed, and was then submitted to intelligibility tests with multiple listeners. Ten talkers produced two lists of words at three speaking rates. One list contained lexically “easy” words (words with few phonetically similar sounding “neighbors” with which they could be confused), and the other list contained lexically “hard” words (words with many phonetically similar sounding “neighbors”). An analysis of the intelligibility data obtained with native speakers of English (experiment 1) showed a strong effect of lexical similarity. Easy words had higher intelligibility scores than hard words. A strong effect of speaking rate was also found whereby slow and medium rate words had higher intelligibility scores than fast rate words. Finally, a relationship was also observed between the various stimulus factors whereby the perceptual difficulties imposed by one factor, such as a hard word spoken at a fast rate, could be overcome by the advantage gained through the listener's experience and familiarity with the speech of a particular talker. In experiment 2, the investigation was extended to another listener population, namely, non-native listeners. Results showed that the ability to take advantage of surface phonetic information, such as a consistent talker across items, is a perceptual skill that transfers easily from first to second language perception. However, non-native listeners had particular difficulty with lexically hard words even when familiarity with the items was controlled, suggesting that non-native word recognition may be compromised when fine phonetic discrimination at the segmental level is required. Taken together, the results of this study provide insight into the signal-dependent and signal-independent factors that influence spoken
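
    The lexical "easy"/"hard" distinction rests on neighborhood density: the number of lexicon entries reachable from a word by a single phoneme substitution, deletion, or addition. A minimal sketch, using an invented toy lexicon and orthographic stand-ins for phonemes:

        # Enumerate all strings one edit away from the word, then intersect
        # with the lexicon to obtain its neighborhood.
        def neighbors(word, lexicon):
            alphabet = set("".join(lexicon))
            candidates = set()
            for i in range(len(word) + 1):
                candidates.update(word[:i] + c + word[i:] for c in alphabet)         # additions
                if i < len(word):
                    candidates.add(word[:i] + word[i + 1:])                          # deletions
                    candidates.update(word[:i] + c + word[i + 1:] for c in alphabet) # substitutions
            candidates.discard(word)
            return candidates & set(lexicon)

        lexicon = {"cat", "bat", "hat", "cut", "cast", "dog"}
        print(neighbors("cat", lexicon))   # dense, lexically "hard": {'bat', 'hat', 'cut', 'cast'}
        print(neighbors("dog", lexicon))   # sparse, lexically "easy": set()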

  20. Perception of Sung Speech in Bimodal Cochlear Implant Users

    Science.gov (United States)

    Galvin, John J.; Fu, Qian-Jie

    2016-01-01

    Combined use of a hearing aid (HA) and cochlear implant (CI) has been shown to improve CI users’ speech and music performance. However, different hearing devices, test stimuli, and listening tasks may interact and obscure bimodal benefits. In this study, speech and music perception were measured in bimodal listeners for CI-only, HA-only, and CI + HA conditions, using the Sung Speech Corpus, a database of monosyllabic words produced at different fundamental frequencies. Sentence recognition was measured using sung speech in which pitch was held constant or varied across words, as well as for spoken speech. Melodic contour identification (MCI) was measured using sung speech in which the words were held constant or varied across notes. Results showed that sentence recognition was poorer with sung speech relative to spoken speech, with little difference between sung speech with a constant or variable pitch; mean performance was better with CI-only relative to HA-only, and best with CI + HA. MCI performance was better with constant words versus variable words; mean performance was better with HA-only than with CI-only and was best with CI + HA. Relative to CI-only, a strong bimodal benefit was observed for speech and music perception. Relative to the better ear, bimodal benefits remained strong for sentence recognition but were marginal for MCI. While variations in pitch and timbre may negatively affect CI users’ speech and music perception, bimodal listening may partially compensate for these deficits. PMID:27837051

  1. Incorporating Social Oriented Agent and Interactive Simulation in E-learning: Impact on Learning, Perceptions, Experiences to Non-Native English Students

    Science.gov (United States)

    Ballera, Melvin; Elssaedi, Mosbah Mohamed

    2012-01-01

    There is an unrealized potential in the use of socially-oriented pedagogical agents and interactive simulation in e-learning systems. In this paper, we investigate the impact of a socially oriented tutor agent and the incorporation of interactive simulation in e-learning on student performance, perceptions and experiences for non-native…

  2. The role of visual spatial attention in audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias; Tiippana, K.; Laarni, J.

    2009-01-01

    Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive...... but recent reports have challenged this view. Here we study the effect of visual spatial attention on the McGurk effect. By presenting a movie of two faces symmetrically displaced to each side of a central fixation point and dubbed with a single auditory speech track, we were able to discern the influences...... integration did not change. Visual spatial attention was also able to select between the faces when lip reading. This suggests that visual spatial attention acts at the level of visual speech perception prior to audiovisual integration and that the effect propagates through audiovisual integration...

  3. Music training and speech perception: a gene–environment interaction

    National Research Council Canada - National Science Library

    Schellenberg, E. Glenn

    2015-01-01

    Claims of beneficial side effects of music training are made for many different abilities, including verbal and visuospatial abilities, executive functions, working memory, IQ, and speech perception in particular...

  4. Ecological impacts of non-native species

    Science.gov (United States)

    Wilkinson, John W.

    2012-01-01

    Non-native species are considered one of the greatest threats to freshwater biodiversity worldwide (Drake et al. 1989; Allen and Flecker 1993; Dudgeon et al. 2005). Some of the first hypotheses proposed to explain global patterns of amphibian declines included the effects of non-native species (Barinaga 1990; Blaustein and Wake 1990; Wake and Morowitz 1991). Evidence for the impact of non-native species on amphibians stems (1) from correlative research that relates the distribution or abundance of a species to that of a putative non-native species, and (2) from experimental tests of the effects of a non-native species on survival, growth, development or behaviour of a target species (Kats and Ferrer 2003). Over the past two decades, research on the effects of non-native species on amphibians has mostly focused on introduced aquatic predators, particularly fish. Recent research has shifted to more complex ecological relationships such as influences of sub-lethal stressors (e.g. contaminants) on the effects of non-native species (Linder et al. 2003; Sih et al. 2004), non-native species as vectors of disease (Daszak et al. 2004; Garner et al. 2006), hybridization between non-natives and native congeners (Riley et al. 2003; Storfer et al. 2004), and the alteration of food-webs by non-native species (Nystrom et al. 2001). Other research has examined the interaction of non-native species in terms of facilitation (i.e. one non-native enabling another to become established or spread) or the synergistic effects of multiple non-native species on native amphibians, the so-called invasional meltdown hypothesis (Simberloff and Von Holle 1999). Although there is evidence that some non-native species may interact (Ricciardi 2001), there has yet to be convincing evidence that such interactions have led to an accelerated increase in the number of non-native species and cumulative impacts are still uncertain (Simberloff 2006). Applied research on the control, eradication, and

  5. Cognitive Control Factors in Speech Perception at 11 Months

    Science.gov (United States)

    Conboy, Barbara T.; Sommerville, Jessica A.; Kuhl, Patricia K.

    2008-01-01

    The development of speech perception during the 1st year reflects increasing attunement to native language features, but the mechanisms underlying this development are not completely understood. One previous study linked reductions in nonnative speech discrimination to performance on nonlinguistic tasks, whereas other studies have shown…

  6. Audio-Visual Speech Perception: A Developmental ERP Investigation

    Science.gov (United States)

    Knowland, Victoria C. P.; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael S. C.

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language…

  8. Beat Gestures Modulate Auditory Integration in Speech Perception

    Science.gov (United States)

    Biau, Emmanuel; Soto-Faraco, Salvador

    2013-01-01

    Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words…

  9. Influences of speech familiarity on immediate perception and final comprehension.

    Science.gov (United States)

    Perry, Lynn K; Mech, Emily N; MacDonald, Maryellen C; Seidenberg, Mark S

    2017-05-01

    Unfamiliar speech, spoken in a familiar language but with an accent different from the listener's, is known to increase comprehension difficulty. However, there is evidence of listeners' rapid adaptation to unfamiliar accents (although perhaps not to the level of familiar accents). This paradox might emerge from prior focus on isolated word perception and/or use of single comprehension measures. We investigated processing of fluent connected speech spoken either in a familiar or unfamiliar accent, using participants' ability to "shadow" the speech as an immediate measure as well as a comprehension test at passage end. Shadowing latencies and errors and comprehension errors increased for Unfamiliar relative to Familiar Speech conditions, especially for relatively informal rather than more academic content. Additionally, there was evidence of less adaptation to Unfamiliar than Familiar Speech. These results suggest that unfamiliar speech imposes costs, especially in the immediate timescale of perceiving speech.

  10. Cued speech for enhancing speech perception and first language development of children with cochlear implants.

    Science.gov (United States)

    Leybaert, Jacqueline; LaSasso, Carol J

    2010-06-01

    Nearly 300 million people worldwide have moderate to profound hearing loss. Hearing impairment, if not adequately managed, has strong socioeconomic and affective impact on individuals. Cochlear implants have become the most effective vehicle for helping profoundly deaf children and adults to understand spoken language, to be sensitive to environmental sounds, and, to some extent, to listen to music. The auditory information delivered by the cochlear implant remains non-optimal for speech perception because it delivers a spectrally degraded signal and lacks some of the fine temporal acoustic structure. In this article, we discuss research revealing the multimodal nature of speech perception in normally-hearing individuals, with important inter-subject variability in the weighting of auditory or visual information. We also discuss how audio-visual training, via Cued Speech, can improve speech perception in cochlear implantees, particularly in noisy contexts. Cued Speech is a system that makes use of visual information from speechreading combined with hand shapes positioned in different places around the face in order to deliver completely unambiguous information about the syllables and the phonemes of spoken language. We support our view that exposure to Cued Speech before or after the implantation could be important in the aural rehabilitation process of cochlear implantees. We describe five lines of research that are converging to support the view that Cued Speech can enhance speech perception in individuals with cochlear implants.

  11. Visual-tactile integration in speech perception: Evidence for modality neutral speech primitives.

    Science.gov (United States)

    Bicevskis, Katie; Derrick, Donald; Gick, Bryan

    2016-11-01

    Audio-visual [McGurk and MacDonald (1976). Nature 264, 746-748] and audio-tactile [Gick and Derrick (2009). Nature 462(7272), 502-504] speech stimuli enhance speech perception over audio stimuli alone. In addition, multimodal speech stimuli form an asymmetric window of integration that is consistent with the relative speeds of the various signals [Munhall, Gribble, Sacco, and Ward (1996). Percept. Psychophys. 58(3), 351-362; Gick, Ikegami, and Derrick (2010). J. Acoust. Soc. Am. 128(5), EL342-EL346]. In this experiment, participants were presented video of faces producing /pa/ and /ba/ syllables, both alone and with air puffs occurring synchronously and at different timings up to 300 ms before and after the stop release. Perceivers were asked to identify the syllable they perceived, and were more likely to respond that they perceived /pa/ when air puffs were present, with asymmetrical preference for puffs following the video signal, consistent with the relative speeds of visual and air puff signals. The results demonstrate that visual-tactile integration of speech perception occurs much as it does with audio-visual and audio-tactile stimuli. This finding contributes to the understanding of multimodal speech perception, lending support to the idea that speech is not perceived as an audio signal that is supplemented by information from other modes, but rather that primitives of speech perception are, in principle, modality neutral.

  12. Perception of the Auditory-Visual Illusion in Speech Perception by Children with Phonological Disorders

    Science.gov (United States)

    Dodd, Barbara; McIntosh, Beth; Erdener, Dogu; Burnham, Denis

    2008-01-01

    An example of the auditory-visual illusion in speech perception, first described by McGurk and MacDonald, is the perception of [ta] when listeners hear [pa] in synchrony with the lip movements for [ka]. One account of the illusion is that lip-read and heard speech are combined in an articulatory code since people who mispronounce words respond…

  13. Experimental study on phase perception in speech

    Institute of Scientific and Technical Information of China (English)

    BU Fanliang; CHEN Yanpu

    2003-01-01

    As the human ear is relatively insensitive to phase in speech, little attention has been paid to phase information in speech coding. In fact, the perceptual quality of speech may be degraded if the phase distortion is very large. The perceptual effect of the STFT (Short-Time Fourier Transform) phase spectrum is studied by auditory subjective hearing tests. Three main conclusions are: (1) If the phase information is neglected completely, the subjective quality of the reconstructed speech may be very poor; (2) Whether the neglected phase is in the low frequency band or the high frequency band, the difference from the original speech can be perceived by ear; (3) It is very difficult for the human ear to perceive the difference in speech quality between original speech and reconstructed speech when the phase quantization step size is smaller than π/7.
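
    The quantization finding can be sketched directly: analyze with an STFT, snap each bin's phase to a grid of step π/7, and resynthesize. The test signal and STFT parameters below are illustrative, not those used in the listening tests.

        import numpy as np
        from scipy.signal import stft, istft

        fs = 16000
        t = np.arange(fs) / fs
        x = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)   # stand-in for a speech signal

        step = np.pi / 7                                   # the step size found transparent
        _, _, Z = stft(x, fs=fs, nperseg=512)
        Z_q = np.abs(Z) * np.exp(1j * np.round(np.angle(Z) / step) * step)
        _, x_q = istft(Z_q, fs=fs, nperseg=512)

        n = min(len(x), len(x_q))
        print(np.max(np.abs(x[:n] - x_q[:n])))             # small residual at this step size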

  14. Exploring the role of brain oscillations in speech perception in noise: Intelligibility of isochronously retimed speech

    Directory of Open Access Journals (Sweden)

    Vincent Aubanel

    2016-08-01

    A growing body of evidence shows that brain oscillations track speech. This mechanism is thought to maximise processing efficiency by allocating resources to important speech information, effectively parsing speech into units of appropriate granularity for further decoding. However, some aspects of this mechanism remain unclear. First, while periodicity is an intrinsic property of this physiological mechanism, speech is only quasi-periodic, so it is not clear whether periodicity would present an advantage in processing. Second, it is still a matter of debate which aspect of speech triggers or maintains cortical entrainment, from bottom-up cues such as fluctuations of the amplitude envelope of speech to higher level linguistic cues such as syntactic structure. We present data from a behavioural experiment assessing the effect of isochronous retiming of speech on speech perception in noise. Two types of anchor points were defined for retiming speech, namely syllable onsets and amplitude envelope peaks. For each anchor point type, retiming was implemented at two hierarchical levels, a slow time scale around 2.5 Hz and a fast time scale around 4 Hz. Results show that while any temporal distortion resulted in reduced speech intelligibility, isochronous speech anchored to P-centers (approximated by stressed syllable vowel onsets) was significantly more intelligible than a matched anisochronous retiming, suggesting a facilitative role of periodicity defined on linguistically motivated units in processing speech in noise.
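
    Isochronous retiming can be sketched as a piecewise-linear time warp that moves each anchor onto a perfectly regular grid. The anchor times below are invented and a noise signal stands in for speech; a real implementation would operate on recorded utterances with measured syllable onsets or envelope peaks.

        import numpy as np

        fs = 16000
        anchors = np.array([0.00, 0.31, 0.74, 1.02, 1.45])        # s, quasi-periodic onsets (invented)
        iso = np.linspace(anchors[0], anchors[-1], len(anchors))  # isochronous target grid

        signal = np.random.default_rng(0).standard_normal(int(1.45 * fs) + 1)  # stand-in for speech
        t_axis = np.arange(len(signal)) / fs
        src_t = np.interp(t_axis, iso, anchors)   # piecewise-linear map: output time -> source time
        retimed = np.interp(src_t, t_axis, signal)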

  15. Speech perception of noise with binary gains

    DEFF Research Database (Denmark)

    Wang, DeLiang; Kjems, Ulrik; Pedersen, Michael Syskind

    2008-01-01

    For a given mixture of speech and noise, an ideal binary time-frequency mask is constructed by comparing speech energy and noise energy within local time-frequency units. It is observed that listeners achieve nearly perfect speech recognition from gated noise with binary gains prescribed...
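
    A minimal version of the ideal binary mask can be built from STFTs of the separately known speech and noise. The study's masks are computed in local time-frequency units of an auditory analysis; plain STFT bins are used here for simplicity, and the local SNR criterion is an assumed value.

        import numpy as np
        from scipy.signal import stft, istft

        def ideal_binary_mask(speech, noise, fs, lc_db=0.0, nperseg=512):
            # Keep only time-frequency units where speech energy exceeds
            # noise energy by the local criterion lc_db.
            _, _, S = stft(speech, fs=fs, nperseg=nperseg)
            _, _, N = stft(noise, fs=fs, nperseg=nperseg)
            snr_db = 20 * np.log10((np.abs(S) + 1e-12) / (np.abs(N) + 1e-12))
            return snr_db > lc_db

        fs = 16000
        speech = np.sin(2 * np.pi * 300 * np.arange(fs) / fs)   # stand-in for speech
        noise = np.random.default_rng(0).standard_normal(fs)

        mask = ideal_binary_mask(speech, noise, fs)
        _, _, M = stft(speech + noise, fs=fs, nperseg=512)
        _, gated = istft(M * mask, fs=fs, nperseg=512)          # binary-gated mixture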

  16. Neural correlates of quality perception for complex speech signals

    CERN Document Server

    Antons, Jan-Niklas

    2015-01-01

    This book interconnects two essential disciplines to study the perception of speech: Neuroscience and Quality of Experience, which to date have rarely been used together for the purposes of research on speech quality perception. In five key experiments, the book demonstrates the application of standard clinical methods in neurophysiology on the one hand, and of methods used in fields of research concerned with speech quality perception on the other. Using this combination, the book shows that speech stimuli with different lengths and different quality impairments are accompanied by physiological reactions related to quality variations, e.g., a positive peak in an event-related potential. Furthermore, it demonstrates that – in most cases – quality impairment intensity has an impact on the intensity of physiological reactions.

  17. Individual differences in speech-in-noise perception parallel neural speech processing and attention in preschoolers.

    Science.gov (United States)

    Thompson, Elaine C; Woodruff Carr, Kali; White-Schwoch, Travis; Otto-Meyer, Sebastian; Kraus, Nina

    2017-02-01

    From bustling classrooms to unruly lunchrooms, school settings are noisy. To learn effectively in the unwelcome company of numerous distractions, children must clearly perceive speech in noise. In older children and adults, speech-in-noise perception is supported by sensory and cognitive processes, but the correlates underlying this critical listening skill in young children (3-5 year olds) remain undetermined. Employing a longitudinal design (two evaluations separated by ∼12 months), we followed a cohort of 59 preschoolers, ages 3.0-4.9, assessing word-in-noise perception, cognitive abilities (intelligence, short-term memory, attention), and neural responses to speech. Results reveal changes in word-in-noise perception parallel changes in processing of the fundamental frequency (F0), an acoustic cue known for playing a role central to speaker identification and auditory scene analysis. Four unique developmental trajectories (speech-in-noise perception groups) confirm this relationship, in that improvements and declines in word-in-noise perception couple with enhancements and diminishments of F0 encoding, respectively. Improvements in word-in-noise perception also pair with gains in attention. Word-in-noise perception does not relate to strength of neural harmonic representation or short-term memory. These findings reinforce previously-reported roles of F0 and attention in hearing speech in noise in older children and adults, and extend this relationship to preschool children.

  18. Perceptions of French Fluency in Second Language Speech Production

    Science.gov (United States)

    Préfontaine, Yvonne

    2013-01-01

    Recent literature in second language (L2) perceived fluency has focused on English as a second language, with a primary reliance on impressions from native-speaker judges, leaving learners' self-perceptions of speech production unexplored. This study investigates the relationship between learners' and judges' perceptions of French fluency under…

  19. Children's perception of their synthetically corrected speech production.

    Science.gov (United States)

    Strömbergsson, Sofia; Wengelin, Asa; House, David

    2014-06-01

    We explore children's perception of their own speech - in its online form, in its recorded form, and in synthetically modified forms. Children with phonological disorder (PD) and children with typical speech and language development (TD) performed tasks of evaluating accuracy of the different types of speech stimuli, either immediately after having produced the utterance or after a delay. In addition, they performed a task designed to assess their ability to detect synthetic modification. Both groups showed high performance in tasks involving evaluation of other children's speech, whereas in tasks of evaluating one's own speech, the children with PD were less accurate than their TD peers. The children with PD were less sensitive to misproductions in immediate conjunction with their production of an utterance, and more accurate after a delay. Within-category modification often passed undetected, indicating a satisfactory quality of the generated speech. Potential clinical benefits of using corrective re-synthesis are discussed.

  20. Perception of words and pitch patterns in song and speech

    Directory of Open Access Journals (Sweden)

    Julia eMerrill

    2012-03-01

    This fMRI study examines shared and distinct cortical areas involved in the auditory perception of song and speech at the level of their underlying constituents: words, pitch and rhythm. Univariate and multivariate analyses were performed on the brain activity patterns of six conditions, arranged in a subtractive hierarchy: sung sentences including words, pitch and rhythm; hummed speech prosody and song melody containing only pitch patterns and rhythm; as well as the pure musical or speech rhythm. Systematic contrasts between these balanced conditions following their hierarchical organization showed a great overlap between song and speech at all levels in the bilateral temporal lobe, but suggested a differential role of the inferior frontal gyrus (IFG) and intraparietal sulcus (IPS) in processing song and speech. The left IFG was involved in word- and pitch-related processing in speech, the right IFG in processing pitch in song. Furthermore, the IPS showed sensitivity to discrete pitch relations in song as opposed to the gliding pitch in speech. Finally, the superior temporal gyrus and premotor cortex coded for general differences between words and pitch patterns, irrespective of whether they were sung or spoken. Thus, song and speech share many features which are reflected in a fundamental similarity of brain areas involved in their perception. However, fine-grained acoustic differences on word and pitch level are reflected in the activity of IFG and IPS.

  1. Speech perception at the interface of neurobiology and linguistics.

    Science.gov (United States)

    Poeppel, David; Idsardi, William J; van Wassenhove, Virginie

    2008-03-12

    Speech perception consists of a set of computations that take continuously varying acoustic waveforms as input and generate discrete representations that make contact with the lexical representations stored in long-term memory as output. Because the perceptual objects that are recognized by the speech perception system enter into subsequent linguistic computation, the format that is used for lexical representation and processing fundamentally constrains the speech perceptual processes. Consequently, theories of speech perception must, at some level, be tightly linked to theories of lexical representation. Minimally, speech perception must yield representations that smoothly and rapidly interface with stored lexical items. Adopting the perspective of Marr, we argue and provide neurobiological and psychophysical evidence for the following research programme. First, at the implementational level, speech perception is a multi-time resolution process, with perceptual analyses occurring concurrently on at least two time scales (approx. 20-80 ms, approx. 150-300 ms), commensurate with (sub)segmental and syllabic analyses, respectively. Second, at the algorithmic level, we suggest that perception proceeds on the basis of internal forward models, or uses an 'analysis-by-synthesis' approach. Third, at the computational level (in the sense of Marr), the theory of lexical representation that we adopt is principally informed by phonological research and assumes that words are represented in the mental lexicon in terms of sequences of discrete segments composed of distinctive features. One important goal of the research programme is to develop linking hypotheses between putative neurobiological primitives (e.g. temporal primitives) and those primitives derived from linguistic inquiry, to arrive ultimately at a biologically sensible and theoretically satisfying model of representation and computation in speech.

  2. Vision of tongue movements bias auditory speech perception.

    Science.gov (United States)

    D'Ausilio, Alessandro; Bartoli, Eleonora; Maffongelli, Laura; Berry, Jeffrey James; Fadiga, Luciano

    2014-10-01

    Audiovisual speech perception is likely based on the association between auditory and visual information into stable audiovisual maps. Conflicting audiovisual inputs generate perceptual illusions such as the McGurk effect. Audiovisual mismatch effects could be either driven by the detection of violations in the standard audiovisual statistics or via the sensorimotor reconstruction of the distal articulatory event that generated the audiovisual ambiguity. In order to disambiguate between the two hypotheses we exploit the fact that the tongue is hidden to vision. For this reason, tongue movement encoding can solely be learned via speech production but not via others' speech perception alone. Here we asked participants to identify speech sounds while they were shown matching or mismatching visual representations of tongue movements. Vision of congruent tongue movements facilitated auditory speech identification with respect to incongruent trials. This result suggests that direct visual experience of an articulator movement is not necessary for the generation of audiovisual mismatch effects. Furthermore, we suggest that audiovisual integration in speech may benefit from speech production learning.

  3. The effects of noise vocoding on speech quality perception.

    Science.gov (United States)

    Anderson, Melinda C; Arehart, Kathryn H; Kates, James M

    2014-03-01

    Speech perception depends on access to spectral and temporal acoustic cues. Temporal cues include slowly varying amplitude changes (i.e. temporal envelope, TE) and quickly varying amplitude changes associated with the center frequency of the auditory filter (i.e. temporal fine structure, TFS). This study quantifies the effects of TFS randomization through noise vocoding on the perception of speech quality by parametrically varying the amount of original TFS available above 1500 Hz. The two research aims were: 1) to establish the role of TFS in quality perception, and 2) to determine if the role of TFS in quality perception differs between subjects with normal hearing and subjects with sensorineural hearing loss. Ratings were obtained from 20 subjects (10 with normal hearing and 10 with hearing loss) using an 11-point quality scale. Stimuli were processed in three different ways: 1) A 32-channel noise-excited vocoder with random envelope fluctuations in the noise carrier, 2) a 32-channel noise-excited vocoder with the noise-carrier envelope smoothed, and 3) removal of high-frequency bands. Stimuli were presented in quiet and in babble noise at 18 dB and 12 dB signal-to-noise ratios. TFS randomization had a measurable detrimental effect on quality ratings for speech in quiet and a smaller effect for speech in background babble. Subjects with normal hearing and subjects with sensorineural hearing loss provided similar quality ratings for noise-vocoded speech.
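
    The noise-vocoding manipulation can be sketched as a channel vocoder: band-limit the signal, keep each band's temporal envelope, and re-impose it on band-limited noise so the original TFS is replaced by the carrier's. The channel count and band edges below are illustrative, not the 32-channel configuration used in the study.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        def noise_vocode(x, fs, edges=(100, 400, 1000, 2500, 6000)):
            rng = np.random.default_rng(0)
            out = np.zeros(len(x))
            for lo, hi in zip(edges[:-1], edges[1:]):
                sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
                band = sosfiltfilt(sos, x)
                envelope = np.abs(hilbert(band))      # slow amplitude modulation (TE)
                carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
                out += envelope * carrier             # TFS now comes from the noise carrier
            return out

        fs = 16000
        t = np.arange(fs) / fs
        vocoded = noise_vocode(np.sin(2 * np.pi * 150 * t), fs)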

  4. Cross-modal matching of audio-visual German and French fluent speech in infancy.

    Science.gov (United States)

    Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun

    2014-01-01

    The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants' audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore, providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech indicating facilitation of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life.

  6. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    Science.gov (United States)

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
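
    To make the modeling approach concrete, the toy Python sketch below fits a Gaussian mixture to unlabeled two-dimensional "audiovisual" tokens and recovers category structure from distributional statistics alone. The cue dimensions, category parameters, and use of scikit-learn are assumptions for illustration, not the authors' actual simulations.

      # Toy distributional learning of two phonetic categories from joint cues.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)
      # Hypothetical categories as clouds in (auditory cue, visual cue) space,
      # e.g. (voice onset time in ms, degree of visible lip aperture).
      cat_a = rng.normal(loc=[10.0, 0.9], scale=[5.0, 0.05], size=(500, 2))
      cat_b = rng.normal(loc=[60.0, 0.3], scale=[8.0, 0.08], size=(500, 2))
      tokens = np.vstack([cat_a, cat_b])                  # unlabeled input

      # The "learner" recovers the categories from the token distribution alone.
      gmm = GaussianMixture(n_components=2, covariance_type="full").fit(tokens)
      print(gmm.means_)                                   # recovered category centers
      print(gmm.predict([[15.0, 0.85], [55.0, 0.35]]))    # classify novel tokens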

  7. Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders

    Science.gov (United States)

    Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.

    2013-01-01

    Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their typically developing (TD) peers. To shed light on possible differences in the maturation of audiovisual speech integration, we tested younger (ages 6-12) and older (ages 13-18) children with and without ASD on a task indexing such multisensory integration. To do this, we used the McGurk effect, in which the pairing of incongruent auditory and visual speech tokens typically results in the perception of a fused percept distinct from the auditory and visual signals, indicative of active integration of the two channels conveying speech information. Whereas little difference was seen in audiovisual speech processing (i.e., reports of McGurk fusion) between the younger ASD and TD groups, there was a significant difference at the older ages. While TD controls exhibited an increased rate of fusion (i.e., integration) with age, children with ASD failed to show this increase. These data suggest arrested development of audiovisual speech integration in ASD. The results are discussed in light of the extant literature and necessary next steps in research. PMID:24218241

  8. Heartbeat perception in social anxiety before and during speech anticipation.

    Science.gov (United States)

    Stevens, Stephan; Gerlach, Alexander L; Cludius, Barbara; Silkens, Anna; Craske, Michelle G; Hermann, Christiane

    2011-02-01

    According to current cognitive models of social phobia, individuals with social anxiety create a distorted image of themselves in social situations, relying, at least partially, on interoceptive cues. We investigated differences in heartbeat perception as a proxy of interoception in 48 individuals high and low in social anxiety at baseline and while anticipating a public speech. Results revealed lower error scores for highly fearful participants both at baseline and during speech anticipation. Speech anticipation improved heartbeat perception in both groups only marginally. Eight of the nine accurate perceivers, as determined using a criterion of maximum difference between actual and counted beats, were highly socially anxious. Higher interoceptive accuracy might increase the risk of misinterpreting physical symptoms as visible signs of anxiety, which then trigger negative evaluation by others. Treatment should take into account that in socially anxious individuals perceived physical arousal is likely to be accurate rather than a false alarm. Copyright © 2010 Elsevier Ltd. All rights reserved.

  9. Auditory and visual information in speech perception: A developmental perspective.

    Science.gov (United States)

    Taitelbaum-Swead, Riki; Fostick, Leah

    This study investigates the development of audiovisual speech perception from age 4 to 80, analysing the contribution of modality, context, and features specific to the language being tested. Data from 77 participants in five age groups are presented. Speech stimuli were introduced via auditory, visual and audiovisual modalities. Monosyllabic meaningful and nonsense words were presented at a signal-to-noise ratio of 0 dB. Speech perception accuracy in the audiovisual and auditory modalities as a function of age followed an inverse U-shape, with lowest performance at ages 4-5 and 65-80. In the visual modality, a clear difference was shown between the performance of children (ages 4-5 and 8-9) and adults (age 20 and above). The findings of the current study have important implications for strategic planning in rehabilitation programmes for child and adult speakers of different languages with hearing difficulties.

  10. Cross-language and second language speech perception

    DEFF Research Database (Denmark)

    Bohn, Ocke-Schwen

    2017-01-01

    This chapter provides an overview of the main research questions and findings in the areas of second language and cross-language speech perception research, and of the most widely used models that have guided this research. The overview is structured around three overarching topics in cross-language and second language speech perception research: the mapping issue (the perceptual relationship of sounds of the native and the nonnative language in the mind of the native listener and the L2 learner), the perceptual and learning difficulty/ease issue (how this relationship may or may not cause perceptual and learning difficulty), and the plasticity issue (whether and how experience with the nonnative language affects the perceptual organization of speech sounds in the mind of L2 learners). One important general conclusion from this research is that perceptual learning is possible at all…

  11. The Role of the Listener's State in Speech Perception

    Science.gov (United States)

    Viswanathan, Navin

    2009-01-01

    Accounts of speech perception disagree on whether listeners perceive the acoustic signal (Diehl, Lotto, & Holt, 2004) or the vocal tract gestures that produce the signal (e.g., Fowler, 1986). In this dissertation, I outline a research program using a phenomenon called "perceptual compensation for coarticulation" (Mann, 1980) to examine this…

  12. Listener Perceptions of Stuttering, Prolonged Speech, and Verbal Avoidance Behaviors

    Science.gov (United States)

    Von Tiling, Johannes

    2011-01-01

    This study examined listener perceptions of different ways of speaking often produced by people who stutter. Each of 115 independent listeners made quantitative and qualitative judgments upon watching one of four randomly assigned speech samples. Each of the four video clips showed the same everyday conversation between three young men, but…

  13. Vocabulary Facilitates Speech Perception in Children With Hearing Aids.

    Science.gov (United States)

    Klein, Kelsey E; Walker, Elizabeth A; Kirby, Benjamin; McCreery, Ryan W

    2017-08-16

    We examined the effects of vocabulary, lexical characteristics (age of acquisition and phonotactic probability), and auditory access (aided audibility and daily hearing aid [HA] use) on speech perception skills in children with HAs. Participants included 24 children with HAs and 25 children with normal hearing (NH), ages 5-12 years. Groups were matched on age, expressive and receptive vocabulary, articulation, and nonverbal working memory. Participants repeated monosyllabic words and nonwords in noise. Stimuli varied on age of acquisition, lexical frequency, and phonotactic probability. Performance in each condition was measured by the signal-to-noise ratio at which the child could accurately repeat 50% of the stimuli. Children from both groups with larger vocabularies showed better performance than children with smaller vocabularies on nonwords and late-acquired words but not early-acquired words. Overall, children with HAs showed poorer performance than children with NH. Auditory access was not associated with speech perception for the children with HAs. Children with HAs show deficits in sensitivity to phonological structure but appear to take advantage of vocabulary skills to support speech perception in the same way as children with NH. Further investigation is needed to understand the causes of the gap that exists between the overall speech perception abilities of children with HAs and children with NH.
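
    The signal-to-noise measure used here is typically obtained with an adaptive procedure. The Python sketch below shows one generic possibility, a 1-up/1-down staircase that converges on the 50%-correct point; the step size, trial count, and the score_trial callback are hypothetical assumptions rather than details of this study.

      def run_srt_track(score_trial, start_snr=10.0, step=2.0, n_trials=30):
          """Estimate the SNR (dB) at 50% correct. score_trial(snr) is a
          hypothetical stub returning True when the item is repeated correctly."""
          snr, last_direction, reversals = start_snr, None, []
          for _ in range(n_trials):
              # 1-up/1-down: harder after a correct trial, easier after an error.
              direction = -1 if score_trial(snr) else +1
              if last_direction is not None and direction != last_direction:
                  reversals.append(snr)                   # record reversal points
              last_direction = direction
              snr += direction * step
          tail = reversals[-6:] or [snr]                  # fall back if no reversals
          return sum(tail) / len(tail)                    # mean of late reversals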

  14. Multisensory Speech Perception in Children with Autism Spectrum Disorders

    Science.gov (United States)

    Woynaroski, Tiffany G.; Kwakye, Leslie D.; Foss-Feig, Jennifer H.; Stevenson, Ryan A.; Stone, Wendy L.; Wallace, Mark T.

    2013-01-01

    This study examined unisensory and multisensory speech perception in 8-17 year old children with autism spectrum disorders (ASD) and typically developing controls matched on chronological age, sex, and IQ. Consonant-vowel syllables were presented in visual only, auditory only, matched audiovisual, and mismatched audiovisual ("McGurk")…

  15. Speech Perception Ability in Individuals with Friedreich Ataxia

    Science.gov (United States)

    Rance, Gary; Fava, Rosanne; Baldock, Heath; Chong, April; Barker, Elizabeth; Corben, Louise; Delatycki

    2008-01-01

    The aim of this study was to investigate auditory pathway function and speech perception ability in individuals with Friedreich ataxia (FRDA). Ten subjects confirmed by genetic testing as being homozygous for a GAA expansion in intron 1 of the FXN gene were included. While each of the subjects demonstrated normal, or near normal sound detection, 3…

  16. Visual Influences on Speech Perception in Children with Autism

    Science.gov (United States)

    Iarocci, Grace; Rombough, Adrienne; Yager, Jodi; Weeks, Daniel J.; Chua, Romeo

    2010-01-01

    The bimodal perception of speech sounds was examined in children with autism as compared to mental age--matched typically developing (TD) children. A computer task was employed wherein only the mouth region of the face was displayed and children reported what they heard or saw when presented with consonant-vowel sounds in unimodal auditory…

  18. The Role of Variation in the Perception of Accented Speech

    Science.gov (United States)

    Sumner, Meghan

    2011-01-01

    Phonetic variation has been considered a barrier that listeners must overcome in speech perception, but has been shown to be beneficial in category learning. In this paper, I show that listeners use within-speaker variation to accommodate gross categorical variation. Within the perceptual learning paradigm, listeners are exposed to p-initial words in…

  20. Visual speech acts differently than lexical context in supporting speech perception.

    Science.gov (United States)

    Samuel, Arthur G; Lieblich, Jerrold

    2014-08-01

    The speech signal is often badly articulated and heard under difficult listening conditions. To deal with these problems, listeners make use of various types of context. In the current study, we examine a type of context that in previous work has been shown to affect how listeners report what they hear: visual speech (i.e., the visible movements of the speaker's articulators). Despite the clear utility of this type of context under certain conditions, prior studies have shown that visually driven phonetic percepts (via the "McGurk" effect) are not "real" enough to affect perception of later-occurring speech; such percepts have not produced selective adaptation effects. This failure contrasts with successful adaptation by sounds that are generated by lexical context (the word that a sound occurs within). We demonstrate here that this dissociation is robust, leading to the conclusion that visual and lexical contexts operate differently. We suggest that the dissociation reflects the dual nature of speech as both a perceptual object and a linguistic object. Visual speech seems to contribute directly to the computations of the perceptual object but not the linguistic one, while lexical context is used in both types of computations.

  1. "Perception of the speech code" revisited: Speech is alphabetic after all.

    Science.gov (United States)

    Fowler, Carol A; Shankweiler, Donald; Studdert-Kennedy, Michael

    2016-03-01

    We revisit an article, "Perception of the Speech Code" (PSC), published in this journal 50 years ago (Liberman, Cooper, Shankweiler, & Studdert-Kennedy, 1967) and address one of its legacies concerning the status of phonetic segments, which persists in theories of speech today. In the perspective of PSC, segments both exist (in language as known) and do not exist (in articulation or the acoustic speech signal). Findings interpreted as showing that speech is not a sound alphabet, but, rather, that phonemes are encoded in the signal, coupled with findings that listeners perceive articulation, led to the motor theory of speech perception, a highly controversial legacy of PSC. However, a second legacy, the paradoxical perspective on segments, has been mostly unquestioned. We remove the paradox by offering an alternative supported by converging evidence that segments exist in language both as known and as used. We support the existence of segments in both language knowledge and in production by showing that phonetic segments are articulatory and dynamic and that coarticulation does not eliminate them. We show that segments leave an acoustic signature that listeners can track. This suggests that speech is well-adapted to public communication in facilitating, not creating a barrier to, exchange of language forms. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. An Analysis of Speech Structure and Perception Processes and Its Effects on Oral English Teaching Centering around Lexical Chunks

    Institute of Scientific and Technical Information of China (English)

    ZHOU Li; NIE Yong-Wei

    2015-01-01

    The paper tries to analyze speech perception in terms of its structure, process, levels and models. Some problems concerning speech perception have been touched upon. The paper aims at providing some reference for oral English teaching and learning in the light of speech perception. It is intended to arouse readers' reflection upon the effect of speech perception on oral English teaching.

  3. The Role of Somatosensory Information in Speech Perception: Imitation Improves Recognition of Disordered Speech.

    Science.gov (United States)

    Borrie, Stephanie A; Schäfer, Martina C M

    2015-12-01

    Perceptual learning paradigms involving written feedback appear to be a viable clinical tool to reduce the intelligibility burden of dysarthria. The underlying theoretical assumption is that pairing the degraded acoustics with the intended lexical targets facilitates a remapping of existing mental representations in the lexicon. This study investigated whether ties to mental representations can be strengthened by way of a somatosensory motor trace. Following an intelligibility pretest, 100 participants were assigned to 1 of 5 experimental groups. The control group received no training, but the other 4 groups received training with dysarthric speech under conditions involving a unique combination of auditory targets, written feedback, and/or a vocal imitation task. All participants then completed an intelligibility posttest. Training improved intelligibility of dysarthric speech, with the largest improvements observed when the auditory targets were accompanied by both written feedback and an imitation task. Further, a significant relationship between intelligibility improvement and imitation accuracy was identified. This study suggests that somatosensory information can strengthen the activation of speech sound maps of dysarthric speech. The findings, therefore, implicate a bidirectional relationship between speech perception and speech production as well as advance our understanding of the mechanisms that underlie perceptual learning of degraded speech.

  4. Feedback in online course for non-native English-speaking students

    CERN Document Server

    Olesova, Larisa

    2013-01-01

    Feedback in Online Course for Non-Native English-Speaking Students is an investigation of the effectiveness of audio and text feedback provided in English in an online course for non-native English-speaking students. The study presents results showing how audio and text feedback can impact non-native English-speaking students' higher-order learning as they participate in an asynchronous online course. It also discusses how students perceive both types of feedback provided. In addition, the study examines how the impact and perceptions differ when the instructor giving the

  5. Face configuration affects speech perception: Evidence from a McGurk mismatch negativity study

    DEFF Research Database (Denmark)

    Eskelund, Kasper; MacDonald, Ewen; Andersen, Tobias

    2015-01-01

    We perceive identity, expression and speech from faces. While perception of identity and expression depends crucially on the configuration of facial features, it is less clear whether this holds for visual speech perception. Facial configuration is poorly perceived for upside-down faces compared to when the face is upright, indicating that facial configuration can be important for visual speech perception. This effect can propagate to auditory speech perception through audiovisual integration, so that Thatcherization disrupts the McGurk illusion, in which visual speech alters auditory speech perception without any change in the acoustic stimulus. We found that Thatcherization disrupted a strong McGurk illusion and a correspondingly strong McGurk-MMN only for upright faces. This confirms that facial configuration can be important for audiovisual speech perception…

  6. The effects of speech motor preparation on auditory perception

    Science.gov (United States)

    Myers, John

    Perception and action are coupled via bidirectional relationships between sensory and motor systems. Motor systems influence sensory areas by imparting a feedforward influence on sensory processing termed "motor efference copy" (MEC). MEC is suggested to occur in humans because speech preparation and production modulate neural measures of auditory cortical activity. However, it is not known if MEC can affect auditory perception. We tested the hypothesis that during speech preparation auditory thresholds will increase relative to a control condition, and that the increase would be most evident for frequencies that match the upcoming vocal response. Participants performed trials in a speech condition that contained a visual cue indicating a vocal response to prepare (one of two frequencies), followed by a go signal to speak. To determine threshold shifts, voice-matched or -mismatched pure tones were presented at one of three time points between the cue and target. The control condition was the same except the visual cues did not specify a response and subjects did not speak. For each participant, we measured f0 thresholds in isolation from the task in order to establish baselines. Results indicated that auditory thresholds were highest during speech preparation, relative to baselines and a non-speech control condition, especially at suprathreshold levels. Thresholds for tones that matched the frequency of planned responses gradually increased over time, but sharply declined for the mismatched tones shortly before targets. Findings support the hypothesis that MEC influences auditory perception by modulating thresholds during speech preparation, with some specificity relative to the planned response. The threshold increase in tasks vs. baseline may reflect attentional demands of the tasks.

  7. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Theta Brain Rhythms Index Perceptual Narrowing in Infant Speech Perception

    Directory of Open Access Journals (Sweden)

    Alexis eBosseler

    2013-10-01

    Full Text Available The development of speech perception shows a dramatic transition between infancy and adulthood. Between 6 and 12 months, infants' initial ability to discriminate all phonetic units across the world's languages narrows: native discrimination increases while nonnative discrimination shows a steep decline. We used magnetoencephalography (MEG) to examine whether brain oscillations in the theta band (4-8 Hz), reflecting increases in attention and cognitive effort, would provide a neural measure of the perceptual narrowing phenomenon in speech. Using an oddball paradigm, we varied speech stimuli in two dimensions, stimulus frequency (frequent vs. infrequent) and language (native vs. nonnative speech syllables), and tested 6-month-old infants, 12-month-old infants, and adults. We hypothesized that 6-month-old infants would show increased relative theta power (RTP) for frequent syllables, regardless of their status as native or nonnative syllables, reflecting young infants' attention and cognitive effort in response to highly frequent stimuli (statistical learning). In adults, we hypothesized increased RTP for nonnative stimuli, regardless of their presentation frequency, reflecting increased cognitive effort for nonnative phonetic categories. The 12-month-old infants were expected to show a pattern in transition, but one more similar to adults than to 6-month-old infants. The MEG brain rhythm results supported these hypotheses. We suggest that perceptual narrowing in speech perception is governed by an implicit learning process. This learning process involves an implicit shift in attention from frequent events (infants) to learned categories (adults). Theta brain oscillatory activity may provide an index of perceptual narrowing beyond speech, and would offer a test of whether the early speech learning process is governed by domain-general or domain-specific processes.
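
    Relative theta power of the kind reported here can be estimated generically from any sensor time series as band power divided by broadband power. The short Python sketch below uses Welch's method from SciPy; the window length and broadband reference range are illustrative assumptions, not the authors' MEG pipeline.

      # Relative theta power (RTP): theta-band power / broadband power.
      import numpy as np
      from scipy.signal import welch

      def relative_theta_power(x, fs, theta=(4.0, 8.0), broadband=(1.0, 40.0)):
          f, psd = welch(x, fs=fs, nperseg=int(2 * fs))   # 2 s analysis windows
          in_theta = (f >= theta[0]) & (f <= theta[1])
          in_broad = (f >= broadband[0]) & (f <= broadband[1])
          return np.trapz(psd[in_theta], f[in_theta]) / np.trapz(psd[in_broad], f[in_broad])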

  9. Sources of Confusion in Infant Audiovisual Speech Perception Research

    Directory of Open Access Journals (Sweden)

    Kathleen Elizabeth Shaw

    2015-12-01

    Full Text Available Speech is a multimodal stimulus, with information provided in both the auditory and visual modalities. The resulting audiovisual signal provides relatively stable, tightly correlated cues that support speech perception and processing in a range of contexts. Despite the clear relationship between spoken language and the moving mouth that produces it, there remains considerable disagreement over how sensitive early language learners — infants — are to whether and how sight and sound co-occur. Here we examine sources of this disagreement, with a focus on how comparisons of data obtained using different paradigms and different stimuli may serve to exacerbate misunderstanding.

  10. Perceptions of Refusals to Invitations: Exploring the Minds of Foreign Language Learners

    Science.gov (United States)

    Felix-Brasdefer, J. Cesar

    2008-01-01

    Descriptions of speech act realisations of native and non-native speakers abound in the cross-cultural and interlanguage pragmatics literature. Yet, what is lacking is an analysis of the cognitive processes involved in the production of speech acts. This study examines the cognitive processes and perceptions of learners of Spanish when refusing…

  11. NATIVE VS NON-NATIVE ENGLISH TEACHERS

    Directory of Open Access Journals (Sweden)

    Masrizal Masrizal

    2013-02-01

    Full Text Available Although the majority of English language teachers worldwide are non-native English speakers (NNS), no research was conducted on these teachers until recently. A pioneering study by Peter Medgyes in 1994 stood alone for quite some time before other researchers took an interest in this issue. There is a widespread stereotype that a native speaker (NS) is by nature the best person to teach his/her foreign language. Under this assumption, there is very limited room and opportunity for a non-native teacher to teach a language that is not his/her own. The aim of this article is to analyze the differences among these teachers in order to show that non-native teachers have equal advantages that should be taken into account. The writer expects that the result of this short article could be valuable input to the area of teaching English as a foreign language in Indonesia.

  12. Effects of sounds of locomotion on speech perception

    OpenAIRE

    2015-01-01

    Human locomotion typically creates noise, a possible consequence of which is the masking of sound signals originating in the surroundings. When walking side by side, people often subconsciously synchronize their steps. The neurophysiological and evolutionary background of this behavior is unclear. The present study investigated the potential of sound created by walking to mask perception of speech and compared the masking produced by walking in step with that produced by unsynchronized walkin...

  13. Aero-tactile integration in speech perception

    OpenAIRE

    Gick, Bryan; Derrick, Donald

    2009-01-01

    Visual information from a speaker's face can enhance [1] or interfere with [2] accurate auditory perception. This integration of information across auditory and visual streams has been observed in functional imaging studies [3,4], and has typically been attributed to the frequency and robustness with which perceivers jointly encounter event-specific information from these two modalities [5]. Adding the tactile modality has long been considered a crucial next step in understanding multisensory integration...

  14. Non-natives: 141 scientists object

    NARCIS (Netherlands)

    Simberloff, D.; Van der Putten, W.H.

    2011-01-01

    Supplementary information to: Non-natives: 141 scientists object Full list of co-signatories to a Correspondence published in Nature 475, 36 (2011); doi: 10.1038/475036a. Daniel Simberloff University of Tennessee, Knoxville, Tennessee, USA. dsimberloff@utk.edu Jake Alexander Institute of Integrative

  16. Comprehending non-native speakers: theory and evidence for adjustment in manner of processing.

    Science.gov (United States)

    Lev-Ari, Shiri

    2014-01-01

    Non-native speakers have lower linguistic competence than native speakers, which renders their language less reliable in conveying their intentions. We suggest that expectations of lower competence lead listeners to adapt their manner of processing when they listen to non-native speakers. We propose that listeners use cognitive resources to adjust by increasing their reliance on top-down processes and extracting less information from the language of the non-native speaker. An eye-tracking study supports our proposal by showing that when following instructions by a non-native speaker, listeners make more contextually-induced interpretations. Those with relatively high working memory also increase their reliance on context to anticipate the speaker's upcoming reference, and are less likely to notice lexical errors in the non-native speech, indicating that they take less information from the speaker's language. These results contribute to our understanding of the flexibility in language processing and have implications for interactions between native and non-native speakers.

  17. Perception of speech by individuals with Parkinson’s disease: a review

    OpenAIRE

    Kwan, Lorinda C.; Whitehill, Tara L.

    2011-01-01

    A few clinical reports and empirical studies have suggested a possible deficit in the perception of speech in individuals with Parkinson's disease. In this paper, these studies are reviewed in an attempt to support clinical anecdotal observations by relevant empirical research findings. The combined evidence suggests a possible deficit in patients' perception of their own speech loudness. Other research studies on the perception of speech in this population were reviewed, in a broader scope o...

  18. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    Directory of Open Access Journals (Sweden)

    Antje eHeinrich

    2015-06-01

    Full Text Available Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild sensorineural hearing loss were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise), to high (sentence perception in modulated noise); cognitive tests of attention, memory, and nonverbal IQ; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that auditory environments pose on

  19. Cognitive factors and cochlear implants: some thoughts on perception, learning, and memory in speech perception.

    Science.gov (United States)

    Pisoni, D B

    2000-02-01

    Over the past few years, there has been increased interest in studying some of the cognitive factors that affect speech perception performance of cochlear implant patients. In this paper, I provide a brief theoretical overview of the fundamental assumptions of the information-processing approach to cognition and discuss the role of perception, learning, and memory in speech perception and spoken language processing. The information-processing framework provides researchers and clinicians with a new way to understand the time-course of perceptual and cognitive development and the relations between perception and production of spoken language. Directions for future research using this approach are discussed including the study of individual differences, predicting success with a cochlear implant from a set of cognitive measures of performance and developing new intervention strategies.

  20. Evaluation of the speech perception in the noise in different positions in adults with cochlear implants

    Directory of Open Access Journals (Sweden)

    Santos, Karlos Thiago Pinheiro dos

    2009-03-01

    Full Text Available Introduction: The most frequent complaint of cochlear implant users has been difficulty recognizing and understanding the speech signal in the presence of noise. Research has addressed the speech perception of cochlear implant users with focus on aspects such as the effect of reducing the signal/noise ratio on speech perception, speech recognition in noise with different types of cochlear implant and speech coding strategies, and the effects of binaural stimulation on speech perception in noise. Objective: 1) To assess speech perception in adult cochlear implant users in different positions regarding the presentation of the stimulus; 2) to compare the index of speech recognition in the frontal, ipsilateral and contralateral positions; and 3) to analyze the effect of monaural adaptation on speech perception with noise. Method: 22 adult cochlear implant users were evaluated regarding speech perception. The individuals were submitted to sentence recognition evaluation, with competitive noise at a signal/noise ratio of +10 decibels, in three positions: frontal, ipsilateral and contralateral to the cochlear implant side. Results: The results demonstrated the highest index of speech recognition in the ipsilateral position (100%) and the lowest index of speech recognition with sentences in the contralateral position (5%). Conclusion: The speech perception performance of cochlear implant users is impaired when competitive noise is introduced; the index of speech recognition is better when speech is presented ipsilaterally and worse when presented contralaterally to the cochlear implant; and speech intelligibility is more impaired when there is only monaural input.
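
    Presenting competitive noise at a fixed signal/noise ratio, as in this study, amounts to scaling the noise relative to the speech power. A minimal Python sketch of that scaling, assuming equal-length signals sampled at the same rate:

      import numpy as np

      def mix_at_snr(speech, noise, snr_db):
          # Scale the noise so that 10 * log10(P_speech / P_noise) == snr_db.
          p_speech = np.mean(speech ** 2)
          p_noise = np.mean(noise ** 2)
          gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
          return speech + gain * noise                    # e.g. snr_db=10 for +10 dB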

  1. Subcortical Differentiation of Stop Consonants Relates to Reading and Speech-in-Noise Perception

    National Research Council Canada - National Science Library

    Jane Hornickel; Erika Skoe; Trent Nicol; Steven Zecker; Nina Kraus; Michael M. Merzenich

    2009-01-01

    Children with reading impairments have deficits in phonological awareness, phonemic categorization, speech-in-noise perception, and psychophysical tasks such as frequency and temporal discrimination...

  2. Speech perception test for Arabic-speaking children.

    Science.gov (United States)

    Kishon-Rabin, L; Rosenhouse, J

    2000-01-01

    The high incidence of hearing impairment in the Arabic-speaking population in Israel, as well as the use of advanced aural rehabilitation devices, motivated the development of Arabic speech assessment tests for this population. The purpose of this paper is twofold. The first goal is to describe features that are unique to the Arabic language and that need to be considered when developing such speech tests. These include Arabic diglossia (i.e., the sharp dichotomy between Literary and Colloquial Arabic), emphatization, and a simple vowel system. The second goal is to describe a new analytic speech test that assesses the perception of significant phonological contrasts in the Colloquial Arabic variety used in Israel. The perception of voicing, place, and manner of articulation, in both initial and final word positions, was tested at four sensation levels in 10 normally-hearing subjects using a binary forced-choice paradigm. Results show a relationship between percent correct and presentation level that is in keeping with articulation curves obtained with Saudi Arabic and English monosyllabic words. Furthermore, different contrasts yielded different articulation curves: emphatization was the easiest to perceive whereas place of articulation was the most difficult. The results can be explained by the specific acoustical features of Arabic.

  3. Hemispheric asymmetry in the hierarchical perception of music and speech.

    Science.gov (United States)

    Rosenthal, Matthew A

    2016-11-01

    The perception of music and speech involves a higher level, cognitive mechanism that allows listeners to form expectations for future music and speech events. This article comprehensively reviews studies on hemispheric differences in the formation of melodic and harmonic expectations in music and selectively reviews studies on hemispheric differences in the formation of syntactic and semantic expectations in speech. On the basis of this review, it is concluded that the higher level mechanism flexibly lateralizes music processing to either hemisphere depending on the expectation generated by a given musical context. When a context generates in the listener an expectation whose elements are sequentially ordered over time, higher level processing is dominant in the left hemisphere. When a context generates in the listener an expectation whose elements are not sequentially ordered over time, higher level processing is dominant in the right hemisphere. This article concludes with a spreading activation model that describes expectations for music and speech in terms of shared temporal and nontemporal representations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Infants' preference for native audiovisual speech dissociated from congruency preference.

    Directory of Open Access Journals (Sweden)

    Kathleen Shaw

    Full Text Available Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., the speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

  5. Non-natives: 141 scientists object

    OpenAIRE

    Simberloff, D.; van der Putten, W. H.

    2011-01-01

    Supplementary information to: Non-natives: 141 scientists object Full list of co-signatories to a Correspondence published in Nature 475, 36 (2011); doi: 10.1038/475036a. Daniel Simberloff University of Tennessee, Knoxville, Tennessee, USA. Jake Alexander Institute of Integrative Biology, Zurich, Switzerland. Fred Allendorf University of Montana, Missoula, Montana, USA. James Aronson CEFE/CNRS, Montpellier, France. Pedro M. Antunes Algoma University, Sault Ste. Marie, Onta...

  6. Bridging music and speech rhythm: rhythmic priming and audio-motor training affect speech perception.

    Science.gov (United States)

    Cason, Nia; Astésano, Corine; Schön, Daniele

    2015-02-01

    Following findings that musical rhythmic priming enhances subsequent speech perception, we investigated whether rhythmic priming for spoken sentences can enhance phonological processing - the building blocks of speech - and whether audio-motor training enhances this effect. Participants heard a metrical prime followed by a sentence (with a matching/mismatching prosodic structure), for which they performed a phoneme detection task. Behavioural (RT) data were collected from two groups: one who received audio-motor training, and one who did not. We hypothesised that 1) phonological processing would be enhanced in matching conditions, and 2) audio-motor training with the musical rhythms would enhance this effect. Indeed, providing a matching rhythmic prime context resulted in faster phoneme detection, thus revealing a cross-domain effect of musical rhythm on phonological processing. In addition, our results indicate that rhythmic audio-motor training enhances this priming effect. These results have important implications for rhythm-based speech therapies, and suggest that metrical rhythm in music and speech may rely on shared temporal processing brain resources.

  7. Tracing the emergence of categorical speech perception in the human auditory system.

    Science.gov (United States)

    Bidelman, Gavin M; Moreno, Sylvain; Alain, Claude

    2013-10-01

    Speech perception requires the effortless mapping from smooth, seemingly continuous changes in sound features into discrete perceptual units, a conversion exemplified in the phenomenon of categorical perception. Explaining how/when the human brain performs this acoustic-phonetic transformation remains an elusive problem in current models and theories of speech perception. In previous attempts to decipher the neural basis of speech perception, it is often unclear whether the alleged brain correlates reflect an underlying percept or merely changes in neural activity that covary with parameters of the stimulus. Here, we recorded neuroelectric activity generated at both cortical and subcortical levels of the auditory pathway elicited by a speech vowel continuum whose percept varied categorically from /u/ to /a/. This integrative approach allows us to characterize how various auditory structures code, transform, and ultimately render the perception of speech material as well as dissociate brain responses reflecting changes in stimulus acoustics from those that index true internalized percepts. We find that activity from the brainstem mirrors properties of the speech waveform with remarkable fidelity, reflecting progressive changes in speech acoustics but not the discrete phonetic classes reported behaviorally. In comparison, patterns of late cortical evoked activity contain information reflecting distinct perceptual categories and predict the abstract phonetic speech boundaries heard by listeners. Our findings demonstrate a critical transformation in neural speech representations between brainstem and early auditory cortex analogous to an acoustic-phonetic mapping necessary to generate categorical speech percepts. Analytic modeling demonstrates that a simple nonlinearity accounts for the transformation between early (subcortical) brain activity and subsequent cortical/behavioral responses to speech (>150-200 ms) thereby describing a plausible mechanism by which the
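
    The "simple nonlinearity" conclusion can be illustrated with a toy computation: applying a sigmoid to a faithful, graded encoding of the continuum yields near-categorical responses. The Python sketch below is purely illustrative; the slope and boundary values are assumptions, not parameters fitted in the study.

      import numpy as np

      steps = np.linspace(0.0, 1.0, 7)        # graded /u/-to-/a/ continuum (7 tokens)
      subcortical = steps                     # brainstem-like faithful encoding
      k, boundary = 12.0, 0.5                 # assumed slope and category boundary
      cortical = 1.0 / (1.0 + np.exp(-k * (subcortical - boundary)))
      print(np.round(cortical, 2))            # near 0 or 1 on either side of the boundary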

  8. Early Language Development of Children at Familial Risk of Dyslexia: Speech Perception and Production

    Science.gov (United States)

    Gerrits, Ellen; de Bree, Elise

    2009-01-01

    Speech perception and speech production were examined in 3-year-old Dutch children at familial risk of developing dyslexia. Their performance in speech sound categorisation and their production of words was compared to that of age-matched children with specific language impairment (SLI) and typically developing controls. We found that speech…

  9. Audiovisual Speech Perception in Children with Developmental Language Disorder in Degraded Listening Conditions

    Science.gov (United States)

    Meronen, Auli; Tiippana, Kaisa; Westerholm, Jari; Ahonen, Timo

    2013-01-01

    Purpose: The effect of the signal-to-noise ratio (SNR) on the perception of audiovisual speech in children with and without developmental language disorder (DLD) was investigated by varying the noise level and the sound intensity of acoustic speech. The main hypotheses were that the McGurk effect (in which incongruent visual speech alters the…

  10. Mapping the speech code: Cortical responses linking the perception and production of vowels

    NARCIS (Netherlands)

    Schuerman, W.L.; Meyer, A.S.; McQueen, J.M.

    2017-01-01

    The acoustic realization of speech is constrained by the physical mechanisms by which it is produced. Yet for speech perception, the degree to which listeners utilize experience derived from speech production has long been debated. In the present study, we examined how sensorimotor adaptation during

  13. Internet video telephony allows speech reading by deaf individuals and improves speech perception by cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Georgios Mantokoudis

    Full Text Available OBJECTIVE: To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. METHODS: Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. RESULTS: Higher frame rate (>7 fps), higher camera resolution (>640 × 480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception gain +11.8%pts, p = 0.032). CONCLUSION: Webcameras have the potential to improve telecommunication of hearing-impaired individuals.

  14. The Role of Broca's Area in Speech Perception: Evidence from Aphasia Revisited

    Science.gov (United States)

    Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele

    2011-01-01

    Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that…

  16. Audiovisual Speech Perception and Eye Gaze Behavior of Adults with Asperger Syndrome

    Science.gov (United States)

    Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-01-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face…

  17. Noise on, Voicing off: Speech Perception Deficits in Children with Specific Language Impairment

    Science.gov (United States)

    Ziegler, Johannes C.; Pech-Georgel, Catherine; George, Florence; Lorenzi, Christian

    2011-01-01

    Speech perception of four phonetic categories (voicing, place, manner, and nasality) was investigated in children with specific language impairment (SLI) (n=20) and age-matched controls (n=19) in quiet and various noise conditions using an AXB two-alternative forced-choice paradigm. Children with SLI exhibited robust speech perception deficits in…

  18. The Development of the Mealings, Demuth, Dillon, and Buchholz Classroom Speech Perception Test

    Science.gov (United States)

    Mealings, Kiri T.; Demuth, Katherine; Buchholz, Jörg; Dillon, Harvey

    2015-01-01

    Purpose: Open-plan classroom styles are increasingly being adopted in Australia despite evidence that their high intrusive noise levels adversely affect learning. The aim of this study was to develop a new Australian speech perception task (the Mealings, Demuth, Dillon, and Buchholz Classroom Speech Perception Test) and use it in an open-plan…

  19. Auditory and visual lexical neighborhoods in audiovisual speech perception.

    Science.gov (United States)

    Tye-Murray, Nancy; Sommers, Mitchell; Spehar, Brent

    2007-12-01

    Much evidence suggests that the mental lexicon is organized into auditory neighborhoods, with words that are phonologically similar belonging to the same neighborhood. In this investigation, we considered the existence of visual neighborhoods. When a receiver watches someone speak a word, a neighborhood of homophenes (i.e., words that look alike on the face, such as pat and bat) is activated. The simultaneous activation of a word's auditory and visual neighborhoods may, in part, account for why individuals recognize speech better in an auditory-visual condition than what would be predicted by their performance in audition-only and vision-only conditions. A word test was administered to 3 groups of participants in audition-only, vision-only, and auditory-visual conditions, in the presence of 6-talker babble. Test words with sparse visual neighborhoods were recognized more accurately than words with dense neighborhoods in a vision-only condition. Densities of both the acoustic and visual neighborhoods as well as their intersection overlap were predictive of how well the test words were recognized in the auditory-visual condition. These results suggest that visual neighborhoods exist and that they affect auditory-visual speech perception. One implication is that in the presence of dual sensory impairment, the boundaries of both acoustic and visual neighborhoods may shift, adversely affecting speech recognition.
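
    Neighborhood density in this sense is commonly operationalized as the number of lexical entries within one phoneme substitution, addition, or deletion of a target. A small Python sketch of that definition, using spelled forms from a toy lexicon as stand-ins for phonemic transcriptions:

      def neighbors(target, lexicon):
          # True if a and b differ by exactly one substitution, addition, or deletion.
          def edit1(a, b):
              if abs(len(a) - len(b)) > 1:
                  return False
              if len(a) == len(b):            # substitution
                  return sum(x != y for x, y in zip(a, b)) == 1
              short, long_ = sorted((a, b), key=len)
              return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))
          return [w for w in lexicon if w != target and edit1(target, w)]

      print(neighbors("pat", ["bat", "pit", "pats", "at", "dog", "mat"]))
      # -> ['bat', 'pit', 'pats', 'at', 'mat']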

  20. Cross-Cultural Variation of Politeness Orientation & Speech Act Perception

    Directory of Open Access Journals (Sweden)

    Nisreen Naji Al-Khawaldeh

    2013-05-01

    This paper presents the findings of an empirical study which compares Jordanian and English native speakers’ perceptions of the speech act of thanking. The forty interviews conducted revealed some similarities but also remarkable cross-cultural differences relating to the significance of thanking, the variables affecting it, and the appropriate linguistic and paralinguistic choices, as well as their impact on the interpretation of thanking behaviour. The most important theoretical finding is that the data, while consistent with many views found in the existing literature, do not support Brown and Levinson’s (1987) claim that thanking is a speech act which intrinsically threatens the speaker’s negative face because it involves overt acceptance of an imposition on the speaker. Rather, thanking should be viewed as a means of establishing and sustaining social relationships. The study findings suggest that cultural variation in thanking is due to the high degree of sensitivity of this speech act to the complex interplay of a range of social and contextual variables, and point to some promising directions for further research.

  1. The Role of Categorical Speech Perception and Phonological Processing in Familial Risk Children with and without Dyslexia

    Science.gov (United States)

    Hakvoort, Britt; de Bree, Elise; van der Leij, Aryan; Maassen, Ben; van Setten, Ellie; Maurits, Natasha; van Zuijen, Titia L.

    2016-01-01

    Purpose: This study assessed whether a categorical speech perception (CP) deficit is associated with dyslexia or familial risk for dyslexia, by exploring a possible cascading relation from speech perception to phonology to reading and by identifying whether speech perception distinguishes familial risk (FR) children with dyslexia (FRD) from those…

  2. The Role of Categorical Speech Perception and Phonological Processing in Familial Risk Children With and Without Dyslexia

    NARCIS (Netherlands)

    Hakvoort, B.; de Bree, E.; van der Leij, A.; Maassen, B.; van Setten, E.; Maurits, N.; van Zuijen, T.L.

    2016-01-01

    Purpose This study assessed whether a categorical speech perception (CP) deficit is associated with dyslexia or familial risk for dyslexia, by exploring a possible cascading relation from speech perception to phonology to reading and by identifying whether speech perception distinguishes familial

  4. The right hemisphere is highlighted in connected natural speech production and perception.

    Science.gov (United States)

    Alexandrou, Anna Maria; Saarinen, Timo; Mäkelä, Sasu; Kujala, Jan; Salmelin, Riitta

    2017-05-15

    Current understanding of the cortical mechanisms of speech perception and production stems mostly from studies that focus on single words or sentences. However, it has been suggested that processing of real-life connected speech may rely on additional cortical mechanisms. In the present study, we examined the neural substrates of natural speech production and perception with magnetoencephalography by modulating three central features related to speech: amount of linguistic content, speaking rate and social relevance. The amount of linguistic content was modulated by contrasting natural speech production and perception to speech-like non-linguistic tasks. Meaningful speech was produced and perceived at three speaking rates: normal, slow and fast. Social relevance was probed by having participants attend to speech produced by themselves and an unknown person. These speech-related features were each associated with distinct spatiospectral modulation patterns that involved cortical regions in both hemispheres. Natural speech processing markedly engaged the right hemisphere in addition to the left. In particular, the right temporo-parietal junction, previously linked to attentional processes and social cognition, was highlighted in the task modulations. The present findings suggest that its functional role extends to active generation and perception of meaningful, socially relevant speech. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Music training and speech perception: a gene-environment interaction.

    Science.gov (United States)

    Schellenberg, E Glenn

    2015-03-01

    Claims of beneficial side effects of music training are made for many different abilities, including verbal and visuospatial abilities, executive functions, working memory, IQ, and speech perception in particular. Such claims assume that music training causes the associations even though children who take music lessons are likely to differ from other children in music aptitude, which is associated with many aspects of speech perception. Music training in childhood is also associated with cognitive, personality, and demographic variables, and it is well established that IQ and personality are determined largely by genetics. Recent evidence also indicates that the role of genetics in music aptitude and music achievement is much larger than previously thought. In short, music training is an ideal model for the study of gene-environment interactions but far less appropriate as a model for the study of plasticity. Children seek out environments, including those with music lessons, that are consistent with their predispositions; such environments exaggerate preexisting individual differences. © 2015 New York Academy of Sciences.

  6. Using TMS to study the role of the articulatory motor system in speech perception.

    Science.gov (United States)

    Möttönen, Riikka; Watkins, Kate E

    2012-09-01

    Background: The ability to communicate using speech is a remarkable skill, which requires precise coordination of articulatory movements and decoding of complex acoustic signals. According to the traditional view, speech production and perception rely on motor and auditory brain areas, respectively. However, there is growing evidence that auditory-motor circuits support both speech production and perception. Aims: In this article we provide a review of how transcranial magnetic stimulation (TMS) has been used to investigate the excitability of the motor system during listening to speech and the contribution of the motor system to performance in various speech perception tasks. We also discuss how TMS can be used in combination with brain-imaging techniques to study interactions between motor and auditory systems during speech perception. Main contribution: TMS has proven to be a powerful tool to investigate the role of the articulatory motor system in speech perception. Conclusions: TMS studies have provided support for the view that the motor structures that control the movements of the articulators contribute not only to speech production but also to speech perception.

  8. Effects of sounds of locomotion on speech perception.

    Science.gov (United States)

    Larsson, Matz; Ekström, Seth Reino; Ranjbar, Parivash

    2015-01-01

    Human locomotion typically creates noise, a possible consequence of which is the masking of sound signals originating in the surroundings. When walking side by side, people often subconsciously synchronize their steps. The neurophysiological and evolutionary background of this behavior is unclear. The present study investigated the potential of sound created by walking to mask perception of speech and compared the masking produced by walking in step with that produced by unsynchronized walking. The masking sound (footsteps on gravel) and the target sound (speech) were presented through the same speaker to 15 normal-hearing subjects. The original recorded walking sound was modified to mimic the sound of two individuals walking in pace or walking out of synchrony. The participants were instructed to adjust the sound level of the target sound until they could just comprehend the speech signal ("just follow conversation" or JFC level) when presented simultaneously with synchronized or unsynchronized walking sound at 40 dBA, 50 dBA, 60 dBA, or 70 dBA. Synchronized walking sounds produced slightly less masking of speech than did unsynchronized sound. The median JFC threshold in the synchronized condition was 38.5 dBA, while the corresponding value for the unsynchronized condition was 41.2 dBA. Combined results at all sound pressure levels showed an improvement in the signal-to-noise ratio (SNR) for synchronized footsteps; the median difference was 2.7 dB and the mean difference was 1.2 dB [P < 0.001, repeated-measures analysis of variance (RM-ANOVA)]. The difference was significant for masker levels of 50 dBA and 60 dBA, but not for 40 dBA or 70 dBA. This study provides evidence that synchronized walking may reduce the masking potential of footsteps.
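
    As a concrete illustration of the arithmetic behind the reported SNR benefit, the following Python sketch computes a per-listener JFC-based benefit from hypothetical thresholds. All values, the normal-noise model, and the paired t-test standing in for the study's RM-ANOVA are invented assumptions, not the authors' data or analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical per-listener JFC ("just follow conversation") levels in dBA
# against a 50-dBA footstep masker; values are invented for illustration.
rng = np.random.default_rng(0)
jfc_sync = 38.5 + rng.normal(0.0, 2.0, size=15)    # synchronized footsteps
jfc_unsync = 41.2 + rng.normal(0.0, 2.0, size=15)  # unsynchronized footsteps

masker_level = 50.0
snr_sync = jfc_sync - masker_level      # SNR at threshold, per listener
snr_unsync = jfc_unsync - masker_level

# A positive difference means speech could be followed at a lower level
# (a better SNR) when the footsteps were synchronized.
benefit = snr_unsync - snr_sync
t, p = stats.ttest_rel(snr_unsync, snr_sync)  # paired test (2-level RM design)
print(f"median benefit = {np.median(benefit):.1f} dB, "
      f"mean = {np.mean(benefit):.1f} dB, t = {t:.2f}, p = {p:.4f}")
```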

  9. Taiwanese University Students' Attitudes to Non-Native Speakers English Teachers

    Science.gov (United States)

    Chang, Feng-Ru

    2016-01-01

    Numerous studies have been conducted to explore issues surrounding non-native speakers (NNS) English teachers and native speaker (NS) teachers which concern, among others, the comparison between the two, the self-perceptions of NNS English teachers and the effectiveness of their teaching, and the students' opinions on and attitudes towards them.…

  10. The Knowledge Base of Non-Native English-Speaking Teachers: Perspectives of Teachers and Administrators

    Science.gov (United States)

    Zhang, Fengjuan; Zhan, Ju

    2014-01-01

    This study explores the knowledge base of non-native English-speaking teachers (NNESTs) working in the Canadian English as a second language (ESL) context. By examining NNESTs' experiences in seeking employment and teaching ESL in Canada, and investigating ESL program administrators' perceptions and hiring practices in relation to NNESTs, it…

  11. Functional overlap between regions involved in speech perception and in monitoring one's own voice during speech production.

    Science.gov (United States)

    Zheng, Zane Z; Munhall, Kevin G; Johnsrude, Ingrid S

    2010-08-01

    The fluency and the reliability of speech production suggest a mechanism that links motor commands and sensory feedback. Here, we examined the neural organization supporting such links by using fMRI to identify regions in which activity during speech production is modulated according to whether auditory feedback matches the predicted outcome or not and by examining the overlap with the network recruited during passive listening to speech sounds. We used real-time signal processing to compare brain activity when participants whispered a consonant-vowel-consonant word ("Ted") and either heard this clearly or heard voice-gated masking noise. We compared this to when they listened to yoked stimuli (identical recordings of "Ted" or noise) without speaking. Activity along the STS and superior temporal gyrus bilaterally was significantly greater if the auditory stimulus was (a) processed as the auditory concomitant of speaking and (b) did not match the predicted outcome (noise). The network exhibiting this Feedback Type x Production/Perception interaction includes a superior temporal gyrus/middle temporal gyrus region that is activated more when listening to speech than to noise. This is consistent with speech production and speech perception being linked in a control system that predicts the sensory outcome of speech acts and that processes an error signal in speech-sensitive regions when this and the sensory data do not match.

  12. Lexical Context Effects on Speech Perception in Chinese People with Autistic Traits

    OpenAIRE

    Huang, Hui-Chun

    2007-01-01

    One theory (weak central coherence) that accounts for a different perceptual-cognitive style in autism suggests that individuals with autism may be less affected by lexical knowledge in speech perception. This lexical context effect on speech perception was demonstrated by Ganong (1980) using a word-to-nonword identification test along a VOT dimension. This Ganong effect (which suggests that people tend to make their percept a real word) can be seen as one ...

  14. Beyond production: Brain responses during speech perception in adults who stutter

    Directory of Open Access Journals (Sweden)

    Tali Halag-Milo

    2016-01-01

    Developmental stuttering is a speech disorder that disrupts the ability to produce speech fluently. While stuttering is typically diagnosed based on one's behavior during speech production, some models suggest that it involves more central representations of language, and thus may affect language perception as well. Here we tested the hypothesis that developmental stuttering implicates neural systems involved in language perception, in a task that manipulates comprehensibility without an overt speech production component. We used functional magnetic resonance imaging to measure blood oxygenation level dependent (BOLD) signals in adults who do and do not stutter, while they were engaged in an incidental speech perception task. We found that speech perception evokes stronger activation in adults who stutter (AWS) compared to controls, specifically in the right inferior frontal gyrus (RIFG) and in left Heschl's gyrus (LHG). Significant differences were additionally found in the lateralization of response in the inferior frontal cortex: AWS showed bilateral inferior frontal activity, while controls showed a left-lateralized pattern of activation. These findings suggest that developmental stuttering is associated with an imbalanced neural network for speech processing, which is not limited to speech production, but also affects cortical responses during speech perception.

  15. Adaptation to delayed auditory feedback induces the temporal recalibration effect in both speech perception and production.

    Science.gov (United States)

    Yamamoto, Kosuke; Kawabata, Hideaki

    2014-12-01

    We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.
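
    A common way to quantify such a recalibration effect is to estimate the point of subjective simultaneity (PSS) before and after adaptation. The Python sketch below fits a simple Gaussian-shaped simultaneity curve to invented judgment data; both the data and the one-parameter curve shape are illustrative assumptions, not the study's model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Proportion of "simultaneous" responses as a function of the delay (ms)
# between articulation and auditory feedback. All values are invented.
delays = np.array([0, 66, 133, 200, 266, 333, 400], dtype=float)
p_before = np.array([0.95, 0.80, 0.45, 0.20, 0.10, 0.05, 0.02])
p_after  = np.array([0.90, 0.85, 0.70, 0.45, 0.20, 0.08, 0.03])  # post-adaptation

def gauss(d, pss, sigma):
    # Simultaneity curves are often modeled as a Gaussian centered on the PSS;
    # only positive delays occur here because feedback can only lag articulation.
    return np.exp(-0.5 * ((d - pss) / sigma) ** 2)

(pss_b, sig_b), _ = curve_fit(gauss, delays, p_before, p0=[50, 100])
(pss_a, sig_a), _ = curve_fit(gauss, delays, p_after, p0=[100, 100])
print(f"PSS before: {pss_b:.0f} ms, after: {pss_a:.0f} ms, "
      f"shift = {pss_a - pss_b:.0f} ms toward the adapted delay")
```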

  16. On the perception/production interface in speech processing

    Science.gov (United States)

    Hemphill, Rachel Marie

    1999-10-01

    In a series of five experiments, the author tests the hypothesis that speech processing in the human mind demands two separate phonological representations: one for perception and one for production (Menn 1980, 1983; Straight 1980; Menn & Matthei 1992). The experiments probe the structure of these mental categories and how they change in the process of acquisition. Three groups of native English-speaking subjects were taught to categorically perceive a three-way Thai voicing contrast in synthetic bilabial stop consonants, which varied only in VOT (after Pisoni, Aslin, Perey, and Hennessy 1982). Perception and production tests were administered following training. Subjects showed the ability, which improved with training, to categorically identify the three-way voicing contrast. Subsequent acoustic and perceptual analyses showed that they were unable to produce the contrast correctly, producing no difference, or manipulating acoustic variables other than VOT (vowel duration, vowel quality, nasalization, etc.). When subjects' productions were compared to their pronunciations of English labial stops, it was found that subjects construct a new production category for the Thai prevoiced stop category. In contrast, subjects split their existing English perceptual /b/ category, indicating that perceptual and production phonological categories do not change in parallel. In a subsequent experiment, subjects were re-tested on perception of the synthetic stimuli, productions of two native Thai speakers, and their own productions from the previous experiments. An analysis of the perceptual data shows that subjects performed equally well on the four tasks, indicating that they are no better at identifying their own productions than those of novel talkers or synthetic talkers. This finding contradicts the hypothetical direct link between perception and production phonologies. These results are explained in terms of separate expressive and receptive representations and the…
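
    Categorical identification along a VOT continuum, as used in these experiments, is conventionally summarized by the boundary and slope of a fitted identification function. The sketch below fits a two-category logistic to invented data; the actual three-way Thai contrast would need two such boundaries, and every number here is hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical identification data along a VOT continuum (ms): proportion
# of trials labeled "voiceless" at each step. Not from the dissertation.
vot = np.array([-60, -40, -20, 0, 20, 40, 60], dtype=float)
p_voiceless = np.array([0.02, 0.05, 0.15, 0.50, 0.85, 0.95, 0.98])

def logistic(x, boundary, slope):
    # boundary = VOT where both labels are equally likely;
    # slope = steepness, i.e. how categorical the perception is.
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

(boundary, slope), _ = curve_fit(logistic, vot, p_voiceless, p0=[0.0, 0.1])
print(f"category boundary ≈ {boundary:.1f} ms VOT, slope ≈ {slope:.3f}")
```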

  17. Analytic study of the Tadoma method: effects of hand position on segmental speech perception.

    Science.gov (United States)

    Reed, C M; Durlach, N I; Braida, L D; Schultz, M C

    1989-12-01

    In the Tadoma method of communication, deaf-blind individuals receive speech by placing a hand on the face and neck of the talker and monitoring actions associated with speech production. Previous research has documented the speech perception, speech production, and linguistic abilities of highly experienced users of the Tadoma method. The current study was performed to gain further insight into the cues involved in the perception of speech segments through Tadoma. Small-set segmental identification experiments were conducted in which the subjects' access to various types of articulatory information was systematically varied by imposing limitations on the contact of the hand with the face. Results obtained on 3 deaf-blind, highly experienced users of Tadoma were examined in terms of percent-correct scores, information transfer, and reception of speech features for each of sixteen experimental conditions. The results were generally consistent with expectations based on the speech cues assumed to be available in the various hand positions.
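
    The information-transfer measure mentioned here is conventionally computed from a stimulus-response confusion matrix, following Miller and Nicely (1955). A minimal Python version is sketched below; the toy confusion matrix is invented rather than taken from the study.

```python
import numpy as np

def information_transfer(confusions):
    """Mutual information (bits) between stimulus and response, estimated
    from a confusion matrix of counts, as in Miller & Nicely (1955)."""
    p = confusions / confusions.sum()
    p_stim = p.sum(axis=1, keepdims=True)   # row marginals (presented)
    p_resp = p.sum(axis=0, keepdims=True)   # column marginals (responded)
    nz = p > 0                              # avoid log(0)
    return float((p[nz] * np.log2(p[nz] / (p_stim @ p_resp)[nz])).sum())

# Toy confusion matrix for 3 consonants (rows: presented, cols: responded).
conf = np.array([[18, 1, 1],
                 [2, 16, 2],
                 [1, 3, 16]])
t = information_transfer(conf)
h_stim = -(conf.sum(1) / conf.sum() * np.log2(conf.sum(1) / conf.sum())).sum()
print(f"transmitted: {t:.2f} bits; relative transfer: {t / h_stim:.0%}")
```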

  18. Can auditory and visual speech perception be trained within a group setting?

    Science.gov (United States)

    Preminger, Jill E; Ziegler, Craig H

    2008-06-01

    This study attempted to determine whether auditory-only and auditory-visual speech perception could be trained in a group format. A randomized controlled trial with at least 16 participants per group was completed. A training-only group completed at least 5 hr of group speech perception training; a training plus psychosocial group completed at least 5 hr of group speech perception training and psychosocial exercises; and a control group did not receive training. Evaluations were conducted before and after training and included analytic and synthetic measures of speech perception, hearing loss-related and generic quality of life scales, and a class evaluation form. No significant group changes were measured on any of the analytic auditory-only or auditory-visual measures of speech perception, yet the majority of training participants (regardless of training group) reported improvement in auditory and auditory-visual speech perception. The training participants demonstrated a significant reduction on the emotional subscale of the hearing loss-related quality of life scale, while the control participants did not demonstrate a change on this subscale. Benefits of group audiologic rehabilitation classes may not result from an actual improvement in auditory or visual speech perception abilities, but participants still perceive training in these areas as useful.

  19. Aided and unaided speech perception by older hearing impaired listeners.

    Directory of Open Access Journals (Sweden)

    David L Woods

    The most common complaint of older hearing impaired (OHI) listeners is difficulty understanding speech in the presence of noise. However, tests of consonant-identification and sentence reception threshold (SeRT) provide different perspectives on the magnitude of impairment. Here we quantified speech perception difficulties in 24 OHI listeners in unaided and aided conditions by analyzing (1) consonant-identification thresholds and consonant confusions for 20 onset and 20 coda consonants in consonant-vowel-consonant (CVC) syllables presented at consonant-specific signal-to-noise (SNR) levels, and (2) SeRTs obtained with the Quick Speech in Noise Test (QSIN) and the Hearing in Noise Test (HINT). Compared to older normal hearing (ONH) listeners, nearly all unaided OHI listeners showed abnormal consonant-identification thresholds, abnormal consonant confusions, and reduced psychometric function slopes. Average elevations in consonant-identification thresholds exceeded 35 dB, correlated strongly with impairments in mid-frequency hearing, and were greater for hard-to-identify consonants. Advanced digital hearing aids (HAs) improved average consonant-identification thresholds by more than 17 dB, with significant HA benefit seen in 83% of OHI listeners. HAs partially normalized consonant-identification thresholds, reduced abnormal consonant confusions, and increased the slope of psychometric functions. Unaided OHI listeners showed much smaller elevations in SeRTs (mean 6.9 dB) than in consonant-identification thresholds, and SeRTs in unaided listening conditions correlated strongly (r = 0.91) with identification thresholds of easily identified consonants. HAs produced minimal SeRT benefit (2.0 dB), with only 38% of OHI listeners showing significant improvement. HA benefit on SeRTs was accurately predicted (r = 0.86) by HA benefit on easily identified consonants. Consonant-identification tests can accurately predict sentence processing deficits and HA benefit in OHI listeners.
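
    To make the reported prediction concrete, regressing per-listener SeRT benefit on consonant-identification benefit is a simple correlation/regression exercise; the sketch below does this on invented values (the effect sizes and noise levels are assumptions, not the study's data).

```python
import numpy as np

# Invented per-listener values (dB) for 24 OHI listeners: hearing-aid
# benefit on easily identified consonants vs. benefit on SeRTs.
rng = np.random.default_rng(1)
benefit_consonants = rng.normal(4.0, 2.0, size=24)
benefit_sert = 0.4 * benefit_consonants + rng.normal(0.0, 0.8, size=24)

r = np.corrcoef(benefit_consonants, benefit_sert)[0, 1]
# Least-squares line for predicting SeRT benefit from consonant benefit:
slope, intercept = np.polyfit(benefit_consonants, benefit_sert, 1)
print(f"r = {r:.2f}; predicted SeRT benefit = "
      f"{slope:.2f} * consonant benefit + {intercept:.2f} dB")
```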

  20. Hierarchical organization of speech perception in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Colin eHumphries

    2014-12-01

    Human speech consists of a variety of articulated sounds that vary dynamically in spectral composition. We investigated the neural activity associated with the perception of two types of speech segments: (a) the period of rapid spectral transition occurring at the beginning of a stop-consonant vowel (CV) syllable and (b) the subsequent spectral steady-state period occurring during the vowel segment of the syllable. Functional magnetic resonance imaging (fMRI) was recorded while subjects listened to series of synthesized CV syllables and non-phonemic control sounds. Adaptation to specific sound features was measured by varying either the transition or steady-state periods of the synthesized sounds. Two spatially distinct brain areas in the superior temporal cortex were found that were sensitive to either the type of adaptation or the type of stimulus. In a relatively large section of the bilateral dorsal superior temporal gyrus (STG), activity varied as a function of adaptation type regardless of whether the stimuli were phonemic or non-phonemic. Immediately adjacent to this region, in a more limited area of the ventral STG, increased activity was observed for phonemic trials compared to non-phonemic trials; however, no adaptation effects were found. In addition, a third area in the bilateral medial superior temporal plane showed increased activity to non-phonemic compared to phonemic sounds. The results suggest a multi-stage hierarchical stream for speech sound processing extending ventrolaterally from the superior temporal plane to the superior temporal sulcus. At successive stages in this hierarchy, neurons code for increasingly more complex spectrotemporal features. At the same time, these representations become more abstracted from the original acoustic form of the sound.

  1. Gaze-direction-based MEG averaging during audiovisual speech perception

    Directory of Open Access Journals (Sweden)

    Lotta Hirvenkari

    2010-03-01

    To take a step towards real-life-like experimental setups, we simultaneously recorded magnetoencephalographic (MEG) signals and the subject’s gaze direction during audiovisual speech perception. The stimuli were utterances of /apa/ dubbed onto two side-by-side female faces articulating /apa/ (congruent) and /aka/ (incongruent) in synchrony, repeated once every 3 s. Subjects (N = 10) were free to decide which face they viewed, and responses were averaged to two categories according to the gaze direction. The right-hemisphere 100-ms response to the onset of the second vowel (N100m’) was a fifth smaller to incongruent than congruent stimuli. The results demonstrate the feasibility of realistic viewing conditions with gaze-based averaging of MEG signals.
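
    Gaze-based averaging itself reduces to a conditional average: each epoch is assigned to a category according to the fixated face, and evoked responses are averaged within category. A schematic Python version with invented array shapes and labels (not the study's actual pipeline) follows.

```python
import numpy as np

# Gaze-based conditional averaging: assign each epoch to a category by the
# face the subject fixated, then average within category. Shapes are invented.
rng = np.random.default_rng(5)
n_epochs, n_channels, n_times = 120, 306, 600        # e.g. 3-s stimulus epochs
epochs = rng.standard_normal((n_epochs, n_channels, n_times))  # MEG epochs
gaze = rng.choice(["congruent_face", "incongruent_face"], size=n_epochs)

evoked = {face: epochs[gaze == face].mean(axis=0)    # (channels, times) average
          for face in np.unique(gaze)}
for face, resp in evoked.items():
    print(face, resp.shape)  # one evoked response per gaze category
```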

  2. The relationship of phonological ability, speech perception and auditory perception in adults with dyslexia.

    Directory of Open Access Journals (Sweden)

    Jeremy eLaw

    2014-07-01

    This study investigated whether auditory, speech perception and phonological skills are tightly interrelated or independently contributing to reading. We assessed each of these three skills in 36 adults with a past diagnosis of dyslexia and 54 matched normal reading adults. Phonological skills were tested by the typical threefold tasks, i.e. rapid automatic naming, verbal short term memory and phonological awareness. Dynamic auditory processing skills were assessed by means of a frequency modulation (FM) and an amplitude rise time (RT) task; an intensity discrimination task (ID) was included as a non-dynamic control task. Speech perception was assessed by means of sentences and words in noise tasks. Group analysis revealed significant group differences in auditory tasks (i.e. RT and ID) and in phonological processing measures, yet no differences were found for speech perception. In addition, performance on RT discrimination correlated with reading, but this relation was mediated by phonological processing and not by speech in noise. Finally, inspection of the individual scores revealed that the dyslexic readers showed an increased proportion of deviant subjects on the slow-dynamic auditory and phonological tasks, yet each individual dyslexic reader does not display a clear pattern of deficiencies across the levels of processing skills. Although our results support phonological and slow-rate dynamic auditory deficits which relate to literacy, they suggest that at the individual level, problems in reading and writing cannot be explained by the cascading auditory theory. Instead, dyslexic adults seem to vary considerably in the extent to which each of the auditory and phonological factors are expressed and interact with environmental and higher-order cognitive influences.
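
    The mediation claim (rise-time discrimination relates to reading via phonological processing) can be illustrated with a Baron-and-Kenny-style pair of regressions. The Python sketch below runs on simulated data; the coefficients and sample size are assumptions, not the study's.

```python
import numpy as np

# Illustrative mediation check: does phonological processing mediate the
# link between rise-time (RT) discrimination and reading? Data simulated.
rng = np.random.default_rng(4)
n = 90
rt = rng.standard_normal(n)                          # RT thresholds (z-scored)
phon = 0.6 * rt + 0.8 * rng.standard_normal(n)       # phonological skill (z)
reading = 0.7 * phon + 0.7 * rng.standard_normal(n)  # reading score (z)

def beta(y, predictors):
    # Ordinary least squares; returns coefficients without the intercept.
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total = beta(reading, [rt])[0]          # RT -> reading, alone
direct = beta(reading, [rt, phon])[0]   # RT -> reading, controlling phonology
print(f"total effect {total:.2f} vs direct effect {direct:.2f}; "
      "a large drop suggests mediation by phonology")
```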

  3. Functional correlates of the speech-in-noise perception impairment in dyslexia: an MRI study.

    Science.gov (United States)

    Dole, Marjorie; Meunier, Fanny; Hoen, Michel

    2014-07-01

    Dyslexia is a language-based neurodevelopmental disorder. It is characterized as a persistent deficit in reading and spelling. These difficulties have been shown to result from an underlying impairment of the phonological component of language, possibly also affecting speech perception. Although there is little evidence for such a deficit under optimal, quiet listening conditions, speech perception difficulties in adults with dyslexia are often reported under more challenging conditions, such as when speech is masked by noise. Previous studies have shown that these difficulties are more pronounced when the background noise is speech and when little spatial information is available to facilitate differentiation between target and background sound sources. In this study, we investigated the neuroimaging correlates of speech-in-speech perception in typical readers and participants with dyslexia, focusing on the effects of different listening configurations. Fourteen adults with dyslexia and 14 matched typical readers performed a subjective intelligibility rating test with single words presented against concurrent speech during functional magnetic resonance imaging (fMRI) scanning. Target words were always presented with a four-talker background in one of three listening configurations: Dichotic, Binaural or Monaural. The results showed that in the Monaural configuration, in which no spatial information was available and energetic masking was maximal, intelligibility was severely decreased in all participants, and this effect was particularly strong in participants with dyslexia. Functional imaging revealed that in this configuration, participants partially compensate for their poorer listening abilities by recruiting several areas in the cerebral networks engaged in speech perception. In the Binaural configuration, participants with dyslexia achieved the same performance level as typical readers, suggesting that they were able to use spatial information when available

  5. Understanding the threats posed by non-native species: public vs. conservation managers.

    Science.gov (United States)

    Gozlan, Rodolphe E; Burnard, Dean; Andreou, Demetra; Britton, J Robert

    2013-01-01

    Public perception is a key factor influencing current conservation policy. Therefore, it is important to determine the influence of the public, end-users and scientists on the prioritisation of conservation issues and the direct implications for policy makers. Here, we assessed public attitudes and the perception of conservation managers to five non-native species in the UK, with these supplemented by those of an ecosystem user, freshwater anglers. We found that threat perception was not influenced by the volume of scientific research or by the actual threats posed by the specific non-native species. Media interest also reflected public perception and vice versa. Anglers were most concerned with perceived threats to their recreational activities but their concerns did not correspond to the greatest demonstrated ecological threat. The perception of conservation managers was an amalgamation of public and angler opinions but was mismatched to quantified ecological risks of the species. As this suggests that invasive species management in the UK is vulnerable to a knowledge gap, researchers must consider the intrinsic characteristics of their study species to determine whether raising public perception will be effective. The case study of the topmouth gudgeon Pseudorasbora parva reveals that media pressure and political debate has greater capacity to ignite policy changes and impact studies on non-native species than scientific evidence alone.

  6. Inductive Inference in Non-Native Speech Processing and Learning

    Science.gov (United States)

    Pajak, Bozena

    2012-01-01

    Despite extensive research on language acquisition, our understanding of how people learn abstract linguistic structures remains limited. In the phonological domain, we know that perceptual reorganization in infancy results in attuning to native language (L1) phonetic categories and, consequently, in difficulty discriminating and learning…

  7. Perceived Job Skill Limitations and Participation in Education and Training Opportunities: Differences between Us Native-Born and Non-Native-Born Individuals

    Science.gov (United States)

    Smith, M. Cecil; Smith, Thomas J.

    2010-01-01

    Data from the 2003 National Assessment of Adult Literacy were examined to determine if non-native-born adults in the US differ from their native-born counterparts in (1) participation in work-related training or education, and (2) perceptions that specific skills limit their job opportunities. Results indicated that non-native-born persons were…

  8. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    …-like nature of the signal. The sine-wave speech was dubbed onto congruent and incongruent video of a talking face. Tuomainen et al. found that the McGurk effect did not occur for naïve observers, but did occur when observers were informed. This indicates that the McGurk illusion is due to a mechanism of audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect. When observers were naïve, they had little motivation to look at the face. When informed, they knew that the face was relevant for the task and this could increase their motivation for looking at the face. Since Tuomainen et al. did not monitor eye movements in their experiments, the magnitude of the effect of motivation is unknown. The purpose of our first experiment was to replicate Tuomainen et al.’s findings while controlling observers’ eye movements using a secondary…

  9. The interplay of speech perception and phonology: experimental evidence from Turkish.

    Science.gov (United States)

    Mielke, Jeff

    2003-01-01

    This study supports claims of a relationship between speech perception and phonology with evidence from a crosslinguistic perception experiment involving /h/ deletion in Turkish. Turkish /h/ is often deleted in fast speech, but only in a specific set of segmental contexts which defy traditional explanation. It is shown that /h/ deletes in environments where lower perceptibility is predicted. The results of the perception experiment verify these predictions and further show that language background has a significant impact on speech perception. Finally, this perceptual account of Turkish /h/ deletion points to an empirical means of testing the conflicting hypotheses that perception is active in the synchronic grammar or that its influence is limited to diachrony.

  10. Voluntary stuttering suppresses true stuttering: a window on the speech perception-production link.

    Science.gov (United States)

    Saltuklaroglu, Tim; Kalinowski, Joseph; Dayalu, Vikram N; Stuart, Andrew; Rastatter, Michael P

    2004-02-01

    In accord with a proposed innate link between speech perception and production (e.g., motor theory), this study provides compelling evidence for the inhibition of stuttering events in people who stutter prior to the initiation of the intended speech act, via both the perception and the production of speech gestures. Stuttering frequency during reading was reduced in 10 adults who stutter by approximately 40% in three of four experimental conditions: (1) following passive audiovisual presentation (i.e., viewing and hearing) of another person producing pseudostuttering (stutter-like syllabic repetitions) and following active shadowing of both (2) pseudostuttered and (3) fluent speech. Stuttering was not inhibited during reading following passive audiovisual presentation of fluent speech. Syllabic repetitions can inhibit stuttering both when produced and when perceived, and we suggest that these elementary stuttering forms may serve as compensatory speech gestures for releasing involuntary stuttering blocks by engaging mirror neuronal systems that are predisposed for fluent gestural imitation.

  11. Vowel perception: Effects of non-native language versus non-native dialect

    NARCIS (Netherlands)

    Cutler, A.; Smits, R.; Cooper, N.

    2005-01-01

    Three groups of listeners identified the vowel in CV and VC syllables produced by an American English talker. The listeners were (a) native speakers of American English, (b) native speakers of Australian English (different dialect), and (c) native speakers of Dutch (different language). The syllable

  13. Relative Contributions of the Dorsal vs. Ventral Speech Streams to Speech Perception are Context Dependent: a lesion study

    Directory of Open Access Journals (Sweden)

    Corianne Rogalsky

    2014-04-01

    The neural basis of speech perception has been debated for over a century. While it is generally agreed that the superior temporal lobes are critical for the perceptual analysis of speech, a major current topic is whether the motor system contributes to speech perception, with several conflicting findings attested. In a dorsal-ventral speech stream framework (Hickok & Poeppel 2007), this debate is essentially about the roles of the dorsal versus ventral speech processing streams. A major roadblock in characterizing the neuroanatomy of speech perception is task-specific effects. For example, much of the evidence for dorsal stream involvement comes from syllable discrimination type tasks, which have been found to behaviorally doubly dissociate from auditory comprehension tasks (Baker et al. 1981). Discrimination task deficits could be a result of difficulty perceiving the sounds themselves, which is the typical assumption, or they could be a result of failures in temporary maintenance of the sensory traces, or in the comparison and/or decision process. Similar complications arise in perceiving sentences: the extent of inferior frontal (i.e. dorsal stream) activation during listening to sentences increases as a function of increased task demands (Love et al. 2006). Another complication is the stimulus: much evidence for dorsal stream involvement uses speech samples lacking semantic context (CVs, non-words). The present study addresses these issues in a large-scale lesion-symptom mapping study. 158 patients with focal cerebral lesions from the Multi-site Aphasia Research Consortium underwent a structural MRI or CT scan, as well as an extensive psycholinguistic battery. Voxel-based lesion-symptom mapping was used to compare the neuroanatomy involved in the following speech perception tasks with varying phonological, semantic, and task loads: (i) two discrimination tasks of syllables (non-words and words, respectively), and (ii) two auditory comprehension tasks…
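
    At its core, voxel-based lesion-symptom mapping compares, at each voxel, the behavioral scores of patients whose lesions include that voxel against those whose lesions spare it. A toy Python version on simulated data follows; the dimensions, minimum-overlap criterion, and Welch t-test are illustrative choices, not the consortium's pipeline.

```python
import numpy as np
from scipy import stats

# Voxel-based lesion-symptom mapping in miniature. All data are simulated.
n_patients, n_voxels = 158, 5000
rng = np.random.default_rng(2)
lesion = rng.random((n_patients, n_voxels)) < 0.15   # True = voxel lesioned
scores = rng.normal(75.0, 10.0, size=n_patients)     # e.g. % correct on a task

tmap = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    hit, spare = scores[lesion[:, v]], scores[~lesion[:, v]]
    if hit.size >= 5 and spare.size >= 5:            # minimum-overlap criterion
        tmap[v] = stats.ttest_ind(spare, hit, equal_var=False).statistic
print(f"max |t| = {np.nanmax(np.abs(tmap)):.2f} "
      "(a real analysis would add multiple-comparison correction)")
```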

  14. Defining the Impact of Non-Native Species

    OpenAIRE

    Jeschke, Jonathan M; Bacher, Sven; Tim M Blackburn; Dick, Jaimie T. A.; Essl, Franz; Evans, Thomas; Gaertner, Mirijam; Hulme, Philip E.; Kühn, Ingolf; Mrugała, Agata; Pergl, Jan; Pyšek, Petr; Rabitsch, Wolfgang; Ricciardi, Anthony; Richardson, David M.

    2014-01-01

    Non-native species cause changes in the ecosystems to which they are introduced. These changes, or some of them, are usually termed impacts; they can be manifold and potentially damaging to ecosystems and biodiversity. However, the impacts of most non-native species are poorly understood, and a synthesis of available information is being hindered because authors often do not clearly define impact. We argue that explicitly defining the impact of non-native species will promote progress toward ...

  15. Compensation for Coarticulation: Disentangling Auditory and Gestural Theories of Perception of Coarticulatory Effects in Speech

    Science.gov (United States)

    Viswanathan, Navin; Magnuson, James S.; Fowler, Carol A.

    2010-01-01

    According to one approach to speech perception, listeners perceive speech by applying general pattern matching mechanisms to the acoustic signal (e.g., Diehl, Lotto, & Holt, 2004). An alternative is that listeners perceive the phonetic gestures that structured the acoustic signal (e.g., Fowler, 1986). The two accounts have offered different…

  16. Effects of Speech Style and Sex of Speaker on Person Perception.

    Science.gov (United States)

    Newcombe, Nora; Arnkoff, Diane B.

    1979-01-01

    Two experiments examined Lakoff's suggestion that men and women use different speech styles (women's speech being more polite and less assertive than men's). The effects of undergraduate students' use of three linguistic variables (tag questions, qualifiers, and compound requests) on person perception were tested. (CM)

  17. Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception

    Science.gov (United States)

    Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki

    2016-01-01

    Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs’ response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs’ early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception. PMID:27734953

  19. Prosody and Semantics Are Separate but Not Separable Channels in the Perception of Emotional Speech: Test for Rating of Emotions in Speech

    Science.gov (United States)

    Ben-David, Boaz M.; Multani, Namita; Shakuf, Vered; Rudzicz, Frank; van Lieshout, Pascal H. H. M.

    2016-01-01

    Purpose: Our aim is to explore the complex interplay of prosody (tone of speech) and semantics (verbal content) in the perception of discrete emotions in speech. Method: We implement a novel tool, the Test for Rating of Emotions in Speech. Eighty native English speakers were presented with spoken sentences made of different combinations of 5…

  20. Non-native educators in English language teaching

    CERN Document Server

    Braine, George

    2013-01-01

    The place of native and non-native speakers in the role of English teachers has probably been an issue ever since English was taught internationally. Although ESL and EFL literature is awash in, in fact dependent upon, the scrutiny of non-native learners, interest in non-native academics and teachers is fairly new. Until recently, the voices of non-native speakers articulating their own concerns have been even rarer. This book is a response to this notable vacuum in the ELT literature, providing a forum for language educators from diverse geographical origins and language backgrounds. In addition…

  1. Differential Allocation of Attention During Speech Perception in Monolingual and Bilingual Listeners

    OpenAIRE

    Astheimer, Lori B.; Berkes, Matthias; Bialystok, Ellen

    2015-01-01

    Attention is required during speech perception to focus processing resources on critical information. Previous research has shown that bilingualism modifies attentional processing in nonverbal domains. The current study used event-related potentials (ERPs) to determine whether bilingualism also modifies auditory attention during speech perception. We measured attention to word onsets in spoken English for monolinguals and Chinese-English bilinguals. Auditory probes were inserted at four times...

  2. Students Writing Emails to Faculty: An Examination of E-Politeness among Native and Non-Native Speakers of English

    Science.gov (United States)

    Biesenbach-Lucas, Sigrun

    2007-01-01

    This study combines interlanguage pragmatics and speech act research with computer-mediated communication and examines how native and non-native speakers of English formulate low- and high-imposition requests to faculty. While some research claims that email, due to absence of non-verbal cues, encourages informal language, other research has…

  3. Effects of audio-visual information and mode of speech on listener perceptions of alaryngeal speakers.

    Science.gov (United States)

    Evitts, Paul M; Van Dine, Ami; Holler, Aline

    2009-01-01

    There is minimal research on listener perceptions of an individual with a laryngectomy (IWL) based on audio-visual information. The aim of this research was to provide preliminary insight into whether listeners have different perceptions of an individual with a laryngectomy based on mode of presentation (audio-only vs. audio-visual) and mode of speech (tracheoesophageal, oesophageal, electrolaryngeal, normal). Thirty-four naïve listeners were randomly presented with a standard reading passage produced by one typical speaker from each mode of speech in both audio-only and audio-visual presentation modes. Listeners used a visual analogue scale (10 cm line) to indicate their perceptions of each speaker's personality. A significant effect of mode of speech was present. There was no significant difference in listener perceptions between modes of presentation using individual ratings. However, principal component analysis showed ratings were more favourable in the audio-visual mode. Results of this study suggest that visual information may have only a minor impact on listener perceptions of a speaker's personality, and that mode of speech and degree of speech proficiency may play only a small role in listener perceptions. However, results should be interpreted with caution as they are based on only one speaker per mode of speech.

  4. Audiovisual Speech Perception in Infancy: The Influence of Vowel Identity and Infants' Productive Abilities on Sensitivity to (Mis)Matches between Auditory and Visual Speech Cues

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias

    2016-01-01

    Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds…

  6. Emotion and lying in a non-native language.

    Science.gov (United States)

    Caldwell-Harris, Catherine L; Ayçiçeği-Dinn, Ayşe

    2009-03-01

    Bilingual speakers frequently report experiencing greater emotional resonance in their first language compared to their second. In Experiment 1, Turkish university students who had learned English as a foreign language had reduced skin conductance responses (SCRs) when listening to emotional phrases in English compared to Turkish, an effect which was most pronounced for childhood reprimands. A second type of emotional language, reading out loud true and false statements, was studied in Experiment 2. Larger SCRs were elicited by lies compared to true statements, and larger SCRs were evoked by English statements compared to Turkish statements. In contrast, ratings of how strongly participants felt they were lying showed that Turkish lies were more strongly felt than English lies. Results suggest that two factors influence the electrodermal activity elicited when bilingual speakers lie in their two languages: arousal due to emotions associated with lying, and arousal due to anxiety about managing speech production in a non-native language. Anxiety and emotionality when speaking a non-native language need to be better understood to inform practices ranging from bilingual psychotherapy to police interrogation of suspects and witnesses.

  7. Timing in audiovisual speech perception: A mini review and new psychophysical data.

    Science.gov (United States)

    Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory

    2016-02-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
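
    The classification procedure described here is a form of reverse correlation: per-frame mask visibility is related to the trial-by-trial percept, yielding a classification image over time. A stripped-down, temporal-only Python sketch on simulated trials follows; all dimensions and the simulated observer are assumptions, not the study's stimuli or data.

```python
import numpy as np

# Reverse-correlation sketch: relate per-frame mouth visibility to the
# reported percept. Dimensions and the simulated observer are invented.
rng = np.random.default_rng(3)
n_trials, n_frames = 2000, 30
visibility = rng.random((n_trials, n_frames))     # mask transparency per frame
# Simulated observer relying on frames 8-12 of the visual /aka/:
drive = visibility[:, 8:13].mean(axis=1)
fusion = (drive + 0.1 * rng.standard_normal(n_trials)) > 0.5  # True = /ata/

# Classification image: mean visibility on fusion minus non-fusion trials;
# large positive values mark frames whose visibility promoted the percept.
cimage = visibility[fusion].mean(axis=0) - visibility[~fusion].mean(axis=0)
print("most influential frame:", int(np.argmax(cimage)))
```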

  8. Perception of Audio-Visual Speech Synchrony in Spanish-Speaking Children with and without Specific Language Impairment

    Science.gov (United States)

    Pons, Ferran; Andreu, Llorenc; Sanz-Torrent, Monica; Buil-Legaz, Lucia; Lewkowicz, David J.

    2013-01-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the…

  10. Tactile enhancement of auditory and visual speech perception in untrained perceivers

    Science.gov (United States)

    Gick, Bryan; Jóhannsdóttir, Kristín M.; Gibraiel, Diana; Mühlbauer, Jeff

    2008-01-01

    A single pool of untrained subjects was tested for interactions across two bimodal perception conditions: audio-tactile, in which subjects heard and felt speech, and visual-tactile, in which subjects saw and felt speech. Identifications of English obstruent consonants were compared in bimodal and no-tactile baseline conditions. Results indicate that tactile information enhances speech perception by about 10 percent, regardless of which other mode (auditory or visual) is active. However, within-subject analysis indicates that individual subjects who benefit more from tactile information in one cross-modal condition tend to benefit less from tactile information in the other. PMID:18396924

  11. On a Supposed Dogma of Speech Perception Research: A Response to Appelbaum (1999)

    Directory of Open Access Journals (Sweden)

    Fernando Orphão de Carvalho

    2009-04-01

    Full Text Available In this paper we aim to qualify the claim, advanced by Appelbaum (1999), that speech perception research, in the last 70 years or so, has endorsed a view on the nature of speech for which no evidence can be adduced and which has resisted falsification through active ad hoc “theoretical repair” carried out by speech scientists. We show that the author’s qualms about the putative dogmatic status of speech research are utterly unwarranted, if not misconstrued as a whole. On more general grounds, the present article can be understood as a work in the rather underdeveloped area of the philosophy and history of Linguistics.

  12. Audio-visual speech perception in noise: Implanted children and young adults versus normal hearing peers.

    Science.gov (United States)

    Taitelbaum-Swead, Riki; Fostick, Leah

    2017-01-01

    The purpose of the current study was to evaluate auditory, visual and audiovisual speech perception abilities among two groups of cochlear implant (CI) users: prelingual children and long-term young adults, as compared to their normal hearing (NH) peers. This was a prospective cohort study that included 50 participants, divided into two groups of CI users (10 children and 10 adults) and two groups of normal hearing peers (15 participants each). Speech stimuli included monosyllabic meaningful and nonsense words at a signal-to-noise ratio of 0 dB. Speech stimuli were introduced via auditory, visual and audiovisual modalities. (1) CI children and adults show lower speech perception accuracy with background noise in audiovisual and auditory modalities, as compared to NH peers, but significantly higher visual speech perception scores. (2) CI children are superior to CI adults in speech perception in noise via the auditory modality, but inferior in the visual one. Both CI children and CI adults had similar audiovisual integration. The findings of the current study show that in spite of the fact that the CI children were implanted bilaterally, at a very young age, and using advanced technology, they still have difficulties in perceiving speech in adverse listening conditions even when adding the visual modality. This suggests that adding audiovisual training might be beneficial for this group by improving their audiovisual integration in difficult listening situations. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. Parental perception vs. professional assessment of speech changes following premature loss of maxillary primary incisors.

    Science.gov (United States)

    Adewumi, Abimbola O; Horton, Camille; Guelmann, Marcio; Dixon-Wood, Virginia; McGorray, Susan P

    2012-01-01

    This study's purpose was to compare parental perceptions of children's speech changes with a professional speech assessment following premature extractions of maxillary primary incisors (PEMPI). Healthy 5- to 6-year-olds, with no cognitive and speech delay and who received PEMPI between the ages of 2 and 4 years old at a university-based clinic, were recruited for the study. First, their parents took part in a telephone interview regarding their perceptions of speech changes following the extractions. The children were then invited to undergo individual speech evaluations by a certified speech and language pathologist. Of 204 patients identified from the database, 57 parental interviews were completed. Sixty percent (34) felt their children sounded different following extractions, and 65% (37) reported difficulty with pronunciation of the "s" sound. For children who were perceived by their parents to sound different, 46% had problems pronouncing words with the letters s and z. For parents who did not perceive speech changes, none of the children had problems with s and z as determined by the professionally conducted speech evaluations (Fisher exact test P=.02). Children who undergo premature extractions of maxillary primary incisors show problems articulating words containing s and z, and there is agreement between parental perceptions and actual disarticulations detected from a professional assessment.

  14. Production and perception of listener-oriented clear speech in child language.

    Science.gov (United States)

    Syrett, Kristen; Kawahara, Shigeto

    2014-11-01

    In this paper, we ask whether children are sensitive to the needs of their interlocutor, and, if so, whether they - like adults - modify acoustic characteristics of their speech as part of a communicative goal. In a production task, preschoolers participated in a word learning task that favored the use of clear speech. Children produced vowels that were longer, more intense, more dispersed in the vowel space, and had a more expanded F0 range than normal speech. Two perception studies with adults showed that these acoustic differences were perceptible and were used to distinguish normal and clear speech styles. We conclude that preschoolers are sensitive to aspects of the speaker-hearer relationship calling upon them to modify their speech in ways that benefit their listener.

  15. Preparing Non-Native English-Speaking ESL Teachers

    Science.gov (United States)

    Shin, Sarah J.

    2008-01-01

    This article addresses the challenges that non-native English-speaking teacher trainees face as they begin teaching English as a Second Language (ESL) in Western, English-speaking countries. Despite a great deal of training, non-native speaker teachers may be viewed as inadequate language teachers because they often lack native speaker competence…

  16. When the Teacher Is a Non-native Speaker

    Institute of Scientific and Technical Information of China (English)

    Péter Medgyes

    2005-01-01

    In "When the Teacher is a Non-native Speaker," Medgyes examines the differences in teaching behavior between native and non-native teachers of English, and then specifies the causes of those differences. The aim of the discussion is to raise the awareness of both groups of teachers to their respective strengths and weaknesses, and thus help them become better teachers.

  17. The Non-Native English Speaker Teachers in TESOL Movement

    Science.gov (United States)

    Kamhi-Stein, Lía D.

    2016-01-01

    It has been almost 20 years since what is known as the non-native English-speaking (NNES) professionals' movement--designed to increase the status of NNES professionals--started within the US-based TESOL International Association. However, still missing from the literature is an understanding of what a movement is, and why non-native English…

  18. PRONUNCIATION LANGUAGE SUBSYSTEM AND EEG-CORRELATES OF FOREIGN SPEECH PERCEPTION (PSYCHOACOUSTIC AND PHYSIOLOGICAL ASPECTS)

    Directory of Open Access Journals (Sweden)

    Larisa Evgenevna Deryagina

    2015-02-01

    Full Text Available This article is devoted to identifying psychoacoustic differences between languages of the Romance, Germanic and Slavic groups, as factors that hinder foreign-language learning, and to the EEG correlates of the perception and recognition of foreign speech as a communicative process. We used theoretical and methodological analysis of psycholinguistic data together with psychoacoustic and physiological studies of our own. The acoustic characteristics of foreign speech were found to affect the brain and the forms of its functioning through the auditory sensory system. The prosodic and articulatory system of the native language has a significant influence on the perception of foreign speech. Patterns of foreign-language speech perception are based on the different functions of the cerebral hemispheres. Differences in the hemispheric organization of the brain can have a significant impact on the effectiveness of learning languages belonging to the Romance, Germanic and Slavic groups, which have distinct acoustic and rhythmic-melodic features.

  19. Association of auditory steady state responses with perception of temporal modulations and speech in noise.

    Science.gov (United States)

    Manju, Venugopal; Gopika, Kizhakke Kodiyath; Arivudai Nambi, Pitchai Muthu

    2014-01-01

    Amplitude modulations in speech convey important acoustic information for speech perception. The auditory steady state response (ASSR) is thought to be a physiological correlate of amplitude modulation perception. Limited research is available exploring the association between ASSR and modulation detection ability, as well as speech perception. The correlation of modulation detection thresholds (MDTs) and speech perception in noise with ASSR was investigated in two experiments. Thirty normal-hearing individuals and 11 normal-hearing individuals in the age range of 18-24 years participated in experiments 1 and 2, respectively. MDTs were measured using ASSR and a behavioral method at 60 Hz, 80 Hz, and 120 Hz modulation frequencies in the first experiment. The ASSR threshold was obtained by estimating the minimum modulation depth required to elicit an ASSR (ASSR-MDT). There was a positive correlation between behavioral MDT and ASSR-MDT at all modulation frequencies. In the second experiment, ASSR for amplitude modulation (AM) sweeps at four different frequency ranges (30-40 Hz, 40-50 Hz, 50-60 Hz, and 60-70 Hz) was recorded. The speech recognition threshold in noise (SRTn) was estimated using a staircase procedure. There was a positive correlation between the amplitude of ASSR for the AM sweep in the 30-40 Hz range and SRTn. Results of the current study suggest that ASSR provides substantial information about temporal modulation perception and speech perception.
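
    The staircase estimation mentioned above can be illustrated generically. A minimal sketch, assuming a simple 1-down/1-up rule that converges on the 50%-correct point (the study's actual rule, step size, and stopping criterion are not given in the abstract):

        import numpy as np

        def staircase_srt(respond, start_snr=10.0, step=2.0, n_reversals=8):
            """Generic 1-down/1-up track; returns the mean SNR at the reversals."""
            snr, direction, reversals = start_snr, -1, []
            while len(reversals) < n_reversals:
                new_direction = -1 if respond(snr) else +1
                if new_direction != direction:        # the track changed direction
                    reversals.append(snr)
                direction = new_direction
                snr += direction * step
            return float(np.mean(reversals))

        # Hypothetical listener with a logistic psychometric function.
        rng = np.random.default_rng(1)
        true_srt = -3.0                               # assumed threshold, dB SNR
        listener = lambda snr: rng.random() < 1 / (1 + np.exp(-(snr - true_srt)))
        print(f"estimated SRTn: {staircase_srt(listener):.1f} dB SNR")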

  20. Working memory training to improve speech perception in noise across languages.

    Science.gov (United States)

    Ingvalson, Erin M; Dhar, Sumitrajit; Wong, Patrick C M; Liu, Hanjun

    2015-06-01

    Working memory capacity has been linked to performance on many higher cognitive tasks, including the ability to perceive speech in noise. Current efforts to train working memory have demonstrated that working memory performance can be improved, suggesting that working memory training may lead to improved speech perception in noise. A further advantage of working memory training to improve speech perception in noise is that working memory training materials are often simple, such as letters or digits, making them easily translatable across languages. The current effort tested the hypothesis that working memory training would be associated with improved speech perception in noise and that materials would easily translate across languages. Native Mandarin Chinese and native English speakers completed ten days of reversed digit span training. Reading span and speech perception in noise both significantly improved following training, whereas untrained controls showed no gains. These data suggest that working memory training may be used to improve listeners' speech perception in noise and that the materials may be quickly adapted to a wide variety of listeners.
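
    The trained task itself, reversed digit span, is simple to reproduce, which is exactly the translatability argument the abstract makes. A minimal sketch of one adaptive trial, under the common assumption that the list lengthens after a correct response and shortens after an error (the regimen's exact rules are not given here):

        import random

        def reversed_digit_span_trial(span, respond):
            """Present `span` random digits; credit a recall given in reverse order."""
            digits = [random.randint(0, 9) for _ in range(span)]
            answer = respond(digits)                  # participant's reversed recall
            return answer == digits[::-1]

        # Hypothetical error-free participant, for demonstration only.
        perfect = lambda digits: digits[::-1]

        span = 3
        for _ in range(10):                           # adaptive rule: +1/-1 digit
            span += 1 if reversed_digit_span_trial(span, perfect) else -1
        print("final span reached:", span)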

  1. Contributions of electric and acoustic hearing to bimodal speech and music perception.

    Science.gov (United States)

    Crew, Joseph D; Galvin, John J; Landsberger, David M; Fu, Qian-Jie

    2015-01-01

    Cochlear implant (CI) users have difficulty understanding speech in noisy listening conditions and perceiving music. Aided residual acoustic hearing in the contralateral ear can mitigate these limitations. The present study examined contributions of electric and acoustic hearing to speech understanding in noise and melodic pitch perception. Data was collected with the CI only, the hearing aid (HA) only, and both devices together (CI+HA). Speech reception thresholds (SRTs) were adaptively measured for simple sentences in speech babble. Melodic contour identification (MCI) was measured with and without a masker instrument; the fundamental frequency of the masker was varied to be overlapping or non-overlapping with the target contour. Results showed that the CI contributes primarily to bimodal speech perception and that the HA contributes primarily to bimodal melodic pitch perception. In general, CI+HA performance was slightly improved relative to the better ear alone (CI-only) for SRTs but not for MCI, with some subjects experiencing a decrease in bimodal MCI performance relative to the better ear alone (HA-only). Individual performance was highly variable, and the contribution of either device to bimodal perception was both subject- and task-dependent. The results suggest that individualized mapping of CIs and HAs may further improve bimodal speech and music perception.

  3. Acoustic cues in the perception of second language speech sounds

    Science.gov (United States)

    Bogacka, Anna A.

    2001-05-01

    The experiment examined which acoustic cues Polish learners of English pay attention to when distinguishing between English high vowels. Predictions concerned the influence of the Polish vowel system (no duration differences and only one vowel in the high back vowel region), the salience of duration cues, and L1 orthography. Thirty-seven Polish subjects and a control group of English native speakers identified stimuli from heed-hid and who'd-hood continua varying in spectral and duration steps. Identification scores by spectral and duration steps and F1/F2 plots of identifications are reported, along with comments on fundamental frequency variation. English subjects relied strongly on spectral cues (typical categorical perception) and barely reacted to temporal cues. Polish subjects relied strongly on temporal cues for both continua, but showed a reversed pattern of identification for the who'd-hood contrast. Their reliance on spectral cues was weak and showed a reversed pattern for the heed-hid contrast. The results were interpreted with reference to the speech learning model [Flege (1995)], the perceptual assimilation model [Best (1995)] and the ontogeny phylogeny model [Major (2001)].

  4. An algorithm of improving speech emotional perception for hearing aid

    Science.gov (United States)

    Xi, Ji; Liang, Ruiyu; Fei, Xianju

    2017-07-01

    In this paper, a speech emotion recognition (SER) algorithm is proposed to improve the emotional perception of hearing-impaired people. The algorithm uses multiple-kernel technology to overcome a drawback of the SVM: slow training speed. First, in order to improve the adaptive performance of the Gaussian radial basis function (RBF), the parameter determining the nonlinear mapping was optimized on the basis of kernel target alignment. The obtained kernel function was then used as the basis kernel of multiple kernel learning (MKL) with a slack variable that can solve the over-fitting problem. However, the slack variable also brings error into the result, so a soft-margin MKL was proposed to balance the margin against the error. An iterative algorithm was used to solve for the combination coefficients and the hyperplane equation. Experimental results show that the proposed algorithm achieves an accuracy of 90% for five kinds of emotions: happiness, sadness, anger, fear and neutral. Compared with KPCA+CCA and PIM-FSVM, the proposed algorithm has the highest accuracy.
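
    The kernel-combination step at the heart of MKL can be made concrete. The sketch below is not the authors' algorithm: it fixes the combination coefficients by hand rather than learning them iteratively, and simply feeds a weighted sum of RBF Gram matrices to a soft-margin SVM through scikit-learn's precomputed-kernel interface. All data and parameter values are placeholders.

        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(100, 12))      # hypothetical acoustic features
        y_train = rng.integers(0, 5, size=100)    # 5 emotion classes
        X_test = rng.normal(size=(10, 12))

        gammas = [0.01, 0.1, 1.0]                 # basis-kernel parameters
        weights = [0.5, 0.3, 0.2]                 # fixed combination coefficients

        def combined_kernel(A, B):
            """Weighted sum of RBF basis kernels, the core MKL construction."""
            return sum(w * rbf_kernel(A, B, gamma=g) for w, g in zip(weights, gammas))

        clf = SVC(C=1.0, kernel="precomputed")    # C controls the soft margin
        clf.fit(combined_kernel(X_train, X_train), y_train)
        print(clf.predict(combined_kernel(X_test, X_train)))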

  5. Intelligibility of non-natively produced Dutch words: interaction between segmental and suprasegmental errors.

    Science.gov (United States)

    Caspers, Johanneke; Horłoza, Katarzyna

    2012-01-01

    In the field of second language research many adhere to the idea that prosodic errors are more detrimental to the intelligibility of non-native speakers than segmental errors. The current study reports on a series of experiments testing the influence of stress errors and segmental errors, and a combination of these, on native processing of words produced by intermediate speakers of Dutch as a second language with either Mandarin Chinese or French as mother tongue. The results suggest that both stress and segmental errors influence processing, but suprasegmental errors do not outweigh segmental errors. It seems that a more 'foreign' generic pronunciation leads to a greater impact of (supra)segmental errors, suggesting that segmental and prosodic deviations should not be viewed as independent factors in processing non-native speech.

  6. Perception of Suprasegmental Features of Speech by Children with Cochlear Implants and Children with Hearing Aids

    Science.gov (United States)

    Most, Tova; Peled, Miriam

    2007-01-01

    This study assessed perception of suprasegmental features of speech by 30 prelingual children with sensorineural hearing loss. Ten children had cochlear implants (CIs), and 20 children wore hearing aids (HA): 10 with severe hearing loss and 10 with profound hearing loss. Perception of intonation, syllable stress, word emphasis, and word pattern…

  7. Is There a Relationship between Speech Identification in Noise and Categorical Perception in Children with Dyslexia?

    Science.gov (United States)

    Calcus, Axelle; Lorenzi, Christian; Collet, Gregory; Colin, Cécile; Kolinsky, Régine

    2016-01-01

    Purpose: Children with dyslexia have been suggested to experience deficits in both categorical perception (CP) and speech identification in noise (SIN) perception. However, results regarding both abilities are inconsistent, and the relationship between them is still unclear. Therefore, this study aimed to investigate the relationship between CP…

  9. Brain networks engaged in audiovisual integration during speech perception revealed by persistent homology-based network filtration.

    Science.gov (United States)

    Kim, Heejung; Hahm, Jarang; Lee, Hyekyoung; Kang, Eunjoo; Kang, Hyejin; Lee, Dong Soo

    2015-05-01

    The human brain naturally integrates audiovisual information to improve speech perception. However, in noisy environments, understanding speech is difficult and may require much effort. Although a brain network is presumed to be engaged in speech perception, it is unclear how speech-related brain regions are connected during natural bimodal audiovisual or unimodal speech perception with counterpart irrelevant noise. To investigate the topological changes of speech-related brain networks at all possible thresholds, we used a persistent homological framework through hierarchical clustering, such as single linkage distance, to analyze the connected components of the functional network during speech perception using functional magnetic resonance imaging. For speech perception, bimodal (audio-visual) speech cues or unimodal speech cues with counterpart irrelevant noise (auditory white noise or visual gum-chewing) were delivered to 15 subjects. In terms of positive relationships, similar connected components were observed in the bimodal and unimodal speech conditions during filtration. However, during speech perception with congruent audiovisual stimuli, tighter coupling of a left anterior temporal gyrus-anterior insula component and of right premotor-visual components was observed than in the auditory or visual speech cue conditions, respectively. Interestingly, under white noise, visual speech is perceived through tight negative coupling among the left inferior frontal region, right anterior cingulate, left anterior insula, and bilateral visual regions, including right middle temporal gyrus and right fusiform components. In conclusion, the speech brain network is tightly positively or negatively connected, reflecting efficient or effortful processes during natural audiovisual integration or lip-reading, respectively, in speech perception.
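
    Tracking the connected components of a correlation network across all thresholds, as in the filtration described above, is equivalent to single-linkage hierarchical clustering, which SciPy provides directly. The sketch below runs on a random placeholder matrix, not the study's fMRI data.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import squareform

        rng = np.random.default_rng(0)
        n_rois = 8                                # hypothetical brain regions
        corr = np.corrcoef(rng.normal(size=(n_rois, 50)))

        dist = 1.0 - corr                         # correlation -> distance
        np.fill_diagonal(dist, 0.0)
        Z = linkage(squareform(dist, checks=False), method="single")

        # Number of connected components (0-dimensional homology) per threshold:
        for t in (0.4, 0.7, 1.0, 1.3):
            labels = fcluster(Z, t=t, criterion="distance")
            print(f"threshold {t:.1f}: {labels.max()} component(s)")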

  10. Speech-perception-in-noise and bilateral spatial abilities in adults with delayed sequential cochlear implantation

    Directory of Open Access Journals (Sweden)

    Ilze Oosthuizen

    2012-12-01

    Full Text Available Objective: To determine speech-perception-in-noise (with speech and noise spatially distinct and coincident) and the bilateral spatial benefits of the head-shadow effect, summation, squelch and spatial release from masking in adults with delayed sequential cochlear implants. Study design: A cross-sectional one group post-test-only exploratory design was employed. Eleven adults (mean age 47 years; range 21-69 years) of the Pretoria Cochlear Implant Programme (PCIP) in South Africa with a bilateral severe-to-profound sensorineural hearing loss were recruited. Prerecorded Everyday Speech Sentences of The Central Institute for the Deaf (CID) were used to evaluate participants’ speech-in-noise perception at sentence level. An adaptive procedure was used to determine the signal-to-noise ratio (SNR, in dB) at which the participant’s speech reception threshold (SRT) was achieved. Specific calculations were used to estimate bilateral spatial benefit effects. Results: A minimal bilateral benefit for speech-in-noise perception was observed with noise directed to the first implant (CI 1) (1.69 dB) and in the speech and noise spatially coincident listening condition (0.78 dB), but was not statistically significant. The head-shadow effect at 180° was the most robust bilateral spatial benefit. An improvement in speech perception in spatially distinct speech and noise indicates that the contribution of the second implant (CI 2) is greater than that of the first implant (CI 1) for bilateral spatial benefit. Conclusion: Bilateral benefit for delayed sequentially implanted adults is less than previously reported for simultaneously and sequentially implanted adults. Delayed sequential implantation benefit seems to relate to the availability of the ear with the most favourable SNR.
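
    The "specific calculations" for these spatial effects are conventionally simple differences between SRTs measured in different speech/noise configurations. A sketch under that assumption; the SRT values and the exact difference-score conventions are illustrative, not the study's.

        # Hypothetical SRTs in dB SNR (lower = better) for one listener.
        # Keys: (listening condition, noise position). "N0" = noise from the
        # front; "N_CI1"/"N_CI2" = noise on the side of the first/second implant.
        srt = {
            ("CI1", "N0"): 2.0, ("both", "N0"): 0.5,
            ("CI1", "N_CI1"): 6.0, ("CI1", "N_CI2"): -1.0,
            ("both", "N_CI1"): -2.0, ("both", "N_CI2"): -1.5,
        }

        # Common difference-score definitions (positive = benefit, in dB):
        benefits = {
            "head shadow": srt[("CI1", "N_CI1")] - srt[("CI1", "N_CI2")],
            "summation": srt[("CI1", "N0")] - srt[("both", "N0")],
            "squelch": srt[("CI1", "N_CI2")] - srt[("both", "N_CI2")],
            "spatial release": srt[("both", "N0")] - srt[("both", "N_CI1")],
        }
        for name, value in benefits.items():
            print(f"{name}: {value:+.1f} dB")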

  11. A Cross-Linguistic ERP Examination of Audiovisual Speech Perception between English and Japanese

    Directory of Open Access Journals (Sweden)

    Satoko Hisanaga

    2011-10-01

    Full Text Available According to recent ERP (event-related potential) studies, visual speech facilitates the neural processing of auditory speech for speakers of European languages in audiovisual speech perception. We examined whether this visual facilitation also holds for Japanese speakers, for whom a weaker susceptibility to visual influence has been reported behaviorally. We conducted a cross-linguistic experiment comparing ERPs of Japanese and English language groups (JL and EL) when they were presented with congruent audiovisual as well as audio-only speech stimuli. The temporal facilitation by the additional visual speech was observed only for native speech stimuli, suggesting a role of articulatory experience for early ERP components. For native stimuli, the EL showed sustained visual facilitation for about 300 ms from audio onset. On the other hand, the visual facilitation was limited to the first 100 ms for the JL, who rather showed a visual inhibitory effect at 300 ms from audio onset. Thus the type of native language affects the neural processing of visual speech in audiovisual speech perception. This inhibition is consistent with the behaviorally reported weaker visual influence for the JL.

  12. Amélioration des Performances des Systèmes Automatiques de Reconnaissance de la Parole pour la Parole Non Native (Improving the Performance of Automatic Speech Recognition Systems for Non-Native Speech)

    CERN Document Server

    Bouselmi, Ghazi; Illina, Irina; Haton, Jean-Paul

    2007-01-01

    In this article, we present an approach for non-native automatic speech recognition (ASR). We propose two methods to adapt existing ASR systems to non-native accents. The first method is based on the modification of acoustic models through the integration of acoustic models from the speakers' mother tongue. The phonemes of the target language are pronounced in a manner similar to the speakers' native language, so we propose to combine the models of confused phonemes so that the ASR system can recognize both concurrent pronunciations. The second method we propose is a refinement of pronunciation error detection through the introduction of graphemic constraints: non-native speakers may rely on the spelling of words when speaking, so pronunciation errors may depend on the characters composing the words. The average error rate reduction that we observed is 22.5% (relative) for the sentence error rate and 34.5% (relative) for the word error rate.

  13. Influence of Telecommunication Modality, Internet Transmission Quality, and Accessories on Speech Perception in Cochlear Implant Users.

    Science.gov (United States)

    Mantokoudis, Georgios; Koller, Roger; Guignard, Jérémie; Caversaccio, Marco; Kompis, Martin; Senn, Pascal

    2017-04-24

    Telecommunication is limited or even impossible for more than one-third of all cochlear implant (CI) users. We therefore sought to study the impact of voice quality on speech perception with voice over Internet protocol (VoIP) under real and adverse network conditions. Telephone speech perception was assessed in 19 CI users (15-69 years, average 42 years), using the German HSM (Hochmair-Schulz-Moser) sentence test, comparing Skype and conventional telephone (public switched telephone network, PSTN) transmission using a personal computer (PC) and a digital enhanced cordless telecommunications (DECT) telephone dual device. Five different Internet transmission quality modes and four accessories (PC speakers, headphones, 3.5 mm jack audio cable, and induction loop) were compared. As a secondary outcome, the subjectively perceived voice quality was assessed using the mean opinion score (MOS). Telephone speech perception was significantly better with Skype than with conventional telephony (median 91.6%, P<.001); however, Skype transmissions over strongly degraded connections (packet loss >15%) were not superior to conventional telephony. In addition, there were no significant differences between the tested accessories (P>.05) using a PC. Coupling a Skype DECT phone device with an audio cable to the CI, however, resulted in higher speech perception (median 65%) and subjective MOS scores (3.2) than using PSTN (median 7.5%, P<.001). Skype calls significantly improve speech perception for CI users compared with conventional telephony under real network conditions. Listening accessories do not further improve the listening experience. Current Skype DECT telephone devices do not fully offer their technical advantages in voice quality.

  14. Mapping the Speech Code: Cortical Responses Linking the Perception and Production of Vowels.

    Science.gov (United States)

    Schuerman, William L; Meyer, Antje S; McQueen, James M

    2017-01-01

    The acoustic realization of speech is constrained by the physical mechanisms by which it is produced. Yet for speech perception, the degree to which listeners utilize experience derived from speech production has long been debated. In the present study, we examined how sensorimotor adaptation during production may affect perception, and how this relationship may be reflected in early vs. late electrophysiological responses. Participants first performed a baseline speech production task, followed by a vowel categorization task during which EEG responses were recorded. In a subsequent speech production task, half the participants received shifted auditory feedback, leading most to alter their articulations. This was followed by a second, post-training vowel categorization task. We compared changes in vowel production to both behavioral and electrophysiological changes in vowel perception. No differences in phonetic categorization were observed between groups receiving altered or unaltered feedback. However, exploratory analyses revealed correlations between vocal motor behavior and phonetic categorization. EEG analyses revealed correlations between vocal motor behavior and cortical responses in both early and late time windows. These results suggest that participants' recent production behavior influenced subsequent vowel perception. We suggest that the change in perception can be best characterized as a mapping of acoustics onto articulation.

  15. The effect of short-term musical training on speech perception in noise

    Directory of Open Access Journals (Sweden)

    Chandni Jain

    2015-03-01

    Full Text Available The aim of the study was to assess the effect of short-term musical training on speech perception in noise. In the present study, speech perception in noise was measured before and after short-term musical training. The training involved auditory perceptual training for raga identification of two Carnatic ragas, given over eight sessions. A total of 18 normal-hearing adults in the age range of 18-25 years participated in the study: group 1 consisted of ten individuals who underwent musical training and group 2 consisted of eight individuals who did not undergo any training. Results revealed that post-training, speech perception in noise improved significantly in group 1, whereas group 2 did not show any change in speech perception scores. Thus, short-term musical training produces an enhancement of speech perception in the presence of noise. However, the generalization and long-term maintenance of these benefits need to be evaluated.

  16. Mandarin speech perception in combined electric and acoustic stimulation.

    Directory of Open Access Journals (Sweden)

    Yongxin Li

    Full Text Available For deaf individuals with residual low-frequency acoustic hearing, combined use of a cochlear implant (CI) and hearing aid (HA) typically provides better speech understanding than either device alone. Because of coarse spectral resolution, CIs do not provide fundamental frequency (F0) information that contributes to understanding of tonal languages such as Mandarin Chinese. The HA can provide a good representation of F0 and, depending on the range of aided acoustic hearing, first and second formant (F1 and F2) information. In this study, Mandarin tone, vowel, and consonant recognition in quiet and noise was measured in 12 adult Mandarin-speaking bimodal listeners with the CI-only and with the CI+HA. Tone recognition was significantly better with the CI+HA in noise, but not in quiet. Vowel recognition was significantly better with the CI+HA in quiet, but not in noise. There was no significant difference in consonant recognition between the CI-only and the CI+HA in quiet or in noise. There was a wide range in bimodal benefit, with improvements often greater than 20 percentage points in some tests and conditions. The bimodal benefit was compared to CI subjects' HA-aided pure-tone average (PTA) thresholds between 250 and 2000 Hz; subjects were divided into two groups: "better" PTA (<50 dB HL) and "poorer" PTA (>50 dB HL). The bimodal benefit differed significantly between groups only for consonant recognition. The bimodal benefit for tone recognition in quiet was significantly correlated with CI experience, suggesting that bimodal CI users learn to better combine low-frequency spectro-temporal information from acoustic hearing with temporal envelope information from electric hearing. Given the small number of subjects in this study (n = 12), further research with Chinese bimodal listeners may provide more information regarding the contribution of acoustic and electric hearing to tonal language perception.

  17. Early Postimplant Speech Perception and Language Skills Predict Long-Term Language and Neurocognitive Outcomes Following Pediatric Cochlear Implantation.

    Science.gov (United States)

    Hunter, Cynthia R; Kronenberger, William G; Castellanos, Irina; Pisoni, David B

    2017-08-16

    We sought to determine whether speech perception and language skills measured early after cochlear implantation in children who are deaf, and early postimplant growth in speech perception and language skills, predict long-term speech perception, language, and neurocognitive outcomes. Thirty-six long-term users of cochlear implants, implanted at an average age of 3.4 years, completed measures of speech perception, language, and executive functioning an average of 14.4 years postimplantation. Speech perception and language skills measured in the 1st and 2nd years postimplantation and open-set word recognition measured in the 3rd and 4th years postimplantation were obtained from a research database in order to assess predictive relations with long-term outcomes. Speech perception and language skills at 6 and 18 months postimplantation were correlated with long-term outcomes for language, verbal working memory, and parent-reported executive functioning. Open-set word recognition was correlated with early speech perception and language skills and long-term speech perception and language outcomes. Hierarchical regressions showed that early speech perception and language skills at 6 months postimplantation and growth in these skills from 6 to 18 months both accounted for substantial variance in long-term outcomes for language and verbal working memory that was not explained by conventional demographic and hearing factors. Speech perception and language skills measured very early postimplantation, and early postimplant growth in speech perception and language, may be clinically relevant markers of long-term language and neurocognitive outcomes in users of cochlear implants. https://doi.org/10.23641/asha.5216200.
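
    The hierarchical regression strategy described, entering demographic and hearing covariates first and the early speech-language predictors second, then comparing the gain in explained variance, can be sketched with statsmodels. All variables below are simulated stand-ins, not the study's data.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 36
        age_at_implant = rng.uniform(1, 6, n)         # demographic block
        early_language = rng.normal(size=n)           # 6-month postimplant skill
        outcome = 0.8 * early_language - 0.3 * age_at_implant + rng.normal(size=n)

        X1 = sm.add_constant(np.column_stack([age_at_implant]))
        X2 = sm.add_constant(np.column_stack([age_at_implant, early_language]))

        r2_block1 = sm.OLS(outcome, X1).fit().rsquared
        r2_block2 = sm.OLS(outcome, X2).fit().rsquared
        print(f"delta R^2 added by early language: {r2_block2 - r2_block1:.3f}")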

  18. Visual Temporal Acuity Is Related to Auditory Speech Perception Abilities in Cochlear Implant Users.

    Science.gov (United States)

    Jahn, Kelly N; Stevenson, Ryan A; Wallace, Mark T

    Despite significant improvements in speech perception abilities following cochlear implantation, many prelingually deafened cochlear implant (CI) recipients continue to rely heavily on visual information to develop speech and language. Increased reliance on visual cues for understanding spoken language could lead to the development of unique audiovisual integration and visual-only processing abilities in these individuals. Brain imaging studies have demonstrated that good CI performers, as indexed by auditory-only speech perception abilities, have different patterns of visual cortex activation in response to visual and auditory stimuli as compared with poor CI performers. However, no studies have examined whether speech perception performance is related to any type of visual processing abilities following cochlear implantation. The purpose of the present study was to provide a preliminary examination of the relationship between clinical, auditory-only speech perception tests, and visual temporal acuity in prelingually deafened adult CI users. It was hypothesized that prelingually deafened CI users who exhibit better (i.e., more acute) visual temporal processing abilities would demonstrate better auditory-only speech perception performance than those with poorer visual temporal acuity. Ten prelingually deafened adult CI users were recruited for this study. Participants completed a visual temporal order judgment task to quantify visual temporal acuity. To assess auditory-only speech perception abilities, participants completed the consonant-nucleus-consonant word recognition test and the AzBio sentence recognition test. Results were analyzed using two-tailed partial Pearson correlations, Spearman's rho correlations, and independent samples t tests. Visual temporal acuity was significantly correlated with auditory-only word and sentence recognition abilities. In addition, proficient CI users, as assessed via auditory-only speech perception performance, demonstrated…

  19. Audio-visual speech in noise perception in dyslexia

    NARCIS (Netherlands)

    van Laarhoven, T.; Keetels, M.N.; Schakel, L.; Vroomen, J.

    2017-01-01

    Individuals with developmental dyslexia (DD) may experience, besides reading problems, other speech-related processing deficits. Here, we examined the influence of visual articulatory information (lip-read speech) at various levels of background noise on auditory word recognition in children and adults with DD…

  20. The effects of bilingualism on children's perception of speech sounds

    NARCIS (Netherlands)

    Brasileiro, I.

    2009-01-01

    The general topic addressed by this dissertation is that of bilingualism, and more specifically, the topic of bilingual acquisition of speech sounds. The central question in this study is the following: does bilingualism affect children’s perceptual development of speech sounds? The term bilingual is…

  1. The Functional Neuroanatomy of Prelexical Processing in Speech Perception

    Science.gov (United States)

    Scott, Sophie K.; Wise, Richard J. S.

    2004-01-01

    In this paper we attempt to relate the prelexical processing of speech, with particular emphasis on functional neuroimaging studies, to the study of auditory perceptual systems by disciplines in the speech and hearing sciences. The elaboration of the sound-to-meaning pathways in the human brain enables their integration into models of the human…

  2. Durations of American English vowels by native and non-native speakers: acoustic analyses and perceptual effects.

    Science.gov (United States)

    Liu, Chang; Jin, Su-Hyun; Chen, Chia-Tsen

    2014-06-01

    The goal of this study was to examine durations of American English vowels produced by English-, Chinese-, and Korean-native speakers and the effects of vowel duration on vowel intelligibility. Twelve American English vowels were recorded in the /hVd/ phonetic context by native speakers and non-native speakers. The patterns of English vowel duration as a function of vowel category were generally similar for non-native and native speakers. These results imply that using duration differences across vowels may be an important strategy for non-native speakers' production before they are able to employ spectral cues to produce and perceive English speech sounds. In the intelligibility experiment, vowels were selected from 10 native and non-native speakers and vowel durations were equalized at 170 ms. Intelligibility of vowels with original and equalized durations was evaluated by American English native listeners. Results suggested that vowel intelligibility of native and non-native speakers degraded slightly, by 3-8%, when durations were equalized, indicating that vowel duration plays a minor role in vowel intelligibility.
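
    Equalizing vowel durations at 170 ms, as in the intelligibility experiment, can be approximated with an off-the-shelf time stretcher. A sketch assuming librosa's phase-vocoder stretch; the authors' actual resynthesis method is not stated, and the file name is hypothetical.

        import librosa

        TARGET_DUR = 0.170                       # seconds, as in the experiment

        def equalize_duration(path, target=TARGET_DUR):
            y, sr = librosa.load(path, sr=None)
            rate = (len(y) / sr) / target        # rate > 1 shortens, < 1 lengthens
            return librosa.effects.time_stretch(y, rate=rate), sr

        # y_eq, sr = equalize_duration("heed_vowel.wav")   # hypothetical file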

  3. Reading fluency and speech perception speed of beginning readers with persistent reading problems: the perception of initial stop consonants and consonant clusters

    NARCIS (Netherlands)

    Snellings, P.; van der Leij, A.; Blok, H.; de Jong, P.F.

    2010-01-01

    This study investigated the role of speech perception accuracy and speed in fluent word decoding of reading disabled (RD) children. A same-different phoneme discrimination task with natural speech tested the perception of single consonants and consonant clusters by young but persistent RD children.

  4. Speech perception and talker segregation: Effects of level, pitch, and tactile support with multiple simultaneous talkers

    Science.gov (United States)

    Drullman, Rob; Bronkhorst, Adelbert W.

    2004-11-01

    Speech intelligibility was investigated by varying the number of interfering talkers, level, and mean pitch differences between target and interfering speech, and the presence of tactile support. In a first experiment the speech-reception threshold (SRT) for sentences was measured for a male talker against a background of one to eight interfering male talkers or speech noise. Speech was presented diotically and vibro-tactile support was given by presenting the low-pass-filtered signal (0-200 Hz) to the index finger. The benefit in the SRT resulting from tactile support ranged from 0 to 2.4 dB and was largest for one or two interfering talkers. A second experiment focused on masking effects of one interfering talker. The interference was the target talker's own voice with its mean pitch increased by 2, 4, 8, or 12 semitones. Level differences between target and interfering speech ranged from -16 to +4 dB. Results from measurements of correctly perceived words in sentences show an intelligibility increase of up to 27% due to tactile support. Performance gradually improves with increasing pitch difference. Louder target speech generally helps perception, but results for level differences are considerably dependent on pitch differences. Differences in performance between noise and speech maskers and between speech maskers with various mean pitches are explained by the effect of informational masking.
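
    The tactile signal described, the speech waveform band-limited to 0-200 Hz and presented to the index finger, is straightforward to derive. A SciPy sketch follows; the filter order and type are assumptions, since the abstract specifies only the passband.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        fs = 16000                               # hypothetical sampling rate, Hz
        speech = np.random.default_rng(0).normal(size=fs)   # placeholder waveform

        # 4th-order Butterworth low-pass at 200 Hz, applied zero-phase.
        sos = butter(4, 200, btype="low", fs=fs, output="sos")
        tactile_drive = sosfiltfilt(sos, speech)
        print(tactile_drive[:5])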

  5. Benefits of Music Training for Perception of Emotional Speech Prosody in Deaf Children With Cochlear Implants.

    Science.gov (United States)

    Good, Arla; Gordon, Karen A; Papsin, Blake C; Nespoli, Gabe; Hopyan, Talar; Peretz, Isabelle; Russo, Frank A

    Children who use cochlear implants (CIs) have characteristic pitch processing deficits leading to impairments in music perception and in understanding emotional intention in spoken language. Music training for normal-hearing children has previously been shown to benefit perception of emotional prosody. The purpose of the present study was to assess whether deaf children who use CIs obtain similar benefits from music training. We hypothesized that music training would lead to gains in auditory processing and that these gains would transfer to emotional speech prosody perception. Study participants were 18 child CI users (ages 6 to 15). Participants received either 6 months of music training (i.e., individualized piano lessons) or 6 months of visual art training (i.e., individualized painting lessons). Measures of music perception and emotional speech prosody perception were obtained pre-, mid-, and post-training. The Montreal Battery for Evaluation of Musical Abilities was used to measure five different aspects of music perception (scale, contour, interval, rhythm, and incidental memory). The emotional speech prosody task required participants to identify the emotional intention of a semantically neutral sentence under audio-only and audiovisual conditions. Music training led to improved performance on tasks requiring the discrimination of melodic contour and rhythm, as well as incidental memory for melodies. These improvements were predominantly found from mid- to post-training. Critically, music training also improved emotional speech prosody perception. Music training was most advantageous in audio-only conditions. Art training did not lead to the same improvements. Music training can lead to improvements in perception of music and emotional speech prosody, and thus may be an effective supplementary technique for supporting auditory rehabilitation following cochlear implantation.

  7. Development of an audiovisual speech perception app for children with autism spectrum disorders.

    Science.gov (United States)

    Irwin, Julia; Preston, Jonathan; Brancazio, Lawrence; D'angelo, Michael; Turcios, Jacqueline

    2015-01-01

    Perception of spoken language requires attention to acoustic as well as visible phonetic information. This article reviews the known differences in audiovisual speech perception in children with autism spectrum disorders (ASD) and specifies the need for interventions that address this construct. Elements of an audiovisual training program are described. This researcher-developed program, delivered via an iPad app, presents natural speech in the context of increasing noise, but supported with a speaking face. Children are cued to attend to visible articulatory information to assist in perception of the spoken words. Data from four children with ASD, ages 8-10, are presented, showing that the children improved their performance on an untrained auditory speech-in-noise task.

  8. Are mirror neurons the basis of speech perception? Evidence from five cases with damage to the purported human mirror system.

    Science.gov (United States)

    Rogalsky, Corianne; Love, Tracy; Driscoll, David; Anderson, Steven W; Hickok, Gregory

    2011-01-01

    The discovery of mirror neurons in macaque has led to a resurrection of motor theories of speech perception. Although the majority of lesion and functional imaging studies have associated perception with the temporal lobes, it has also been proposed that the 'human mirror system', which prominently includes Broca's area, is the neurophysiological substrate of speech perception. Although numerous studies have demonstrated a tight link between sensory and motor speech processes, few have directly assessed the critical prediction of mirror neuron theories of speech perception, namely that damage to the human mirror system should cause severe deficits in speech perception. The present study measured speech perception abilities of patients with lesions involving motor regions in the left posterior frontal lobe and/or inferior parietal lobule (i.e., the proposed human 'mirror system'). Performance was at or near ceiling in patients with fronto-parietal lesions. It is only when the lesion encroaches on auditory regions in the temporal lobe that perceptual deficits are evident. This suggests that 'mirror system' damage does not disrupt speech perception, but rather that auditory systems are the primary substrate for speech perception.

  9. Auditory Processing and Speech Perception in Children with Specific Language Impairment: Relations with Oral Language and Literacy Skills

    Science.gov (United States)

    Vandewalle, Ellen; Boets, Bart; Ghesquiere, Pol; Zink, Inge

    2012-01-01

    This longitudinal study investigated temporal auditory processing (frequency modulation and between-channel gap detection) and speech perception (speech-in-noise and categorical perception) in three groups of 6 years 3 months to 6 years 8 months-old children attending grade 1: (1) children with specific language impairment (SLI) and literacy delay…

  10. Speech-in-Noise Perception Deficit in Adults with Dyslexia: Effects of Background Type and Listening Configuration

    Science.gov (United States)

    Dole, Marjorie; Hoen, Michel; Meunier, Fanny

    2012-01-01

    Developmental dyslexia is associated with impaired speech-in-noise perception. The goal of the present research was to further characterize this deficit in dyslexic adults. In order to specify the mechanisms and processing strategies used by adults with dyslexia during speech-in-noise perception, we explored the influence of background type,…

  11. Results of speech perception after conversion from Spectra® to Freedom®.

    Science.gov (United States)

    Magalhães, Ana Tereza de Matos; Goffi-Gomez, Maria Valéria Schmidt; Hoshino, Ana Cristina; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; Brito, Rubens

    2012-04-01

    New technology in the Freedom® speech processor for cochlear implants was developed to improve how incoming acoustic sound is processed; this applies not only to new users, but also to previous generations of cochlear implants. Our aim was to identify the contribution of this technology for the Nucleus 22® on speech perception tests in silence and in noise, and on audiometric thresholds. A cross-sectional cohort study was undertaken. Seventeen patients were selected. The last map based on the Spectra® was revised and optimized before starting the tests. Troubleshooting was used to identify malfunction. To identify the contribution of the Freedom® technology for the Nucleus 22®, auditory thresholds and speech perception tests were performed in free field in sound-proof booths. Recorded monosyllables and sentences in silence and in noise (SNR = 0 dB) were presented at 60 dB SPL. The nonparametric Wilcoxon test for paired data was used to compare groups. The Freedom® technology applied to the Nucleus 22® showed a statistically significant difference in all speech perception tests and audiometric thresholds.
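
    The paired comparison reported here is a textbook use of the Wilcoxon signed-rank test. A SciPy sketch on hypothetical pre/post word scores (not the study's data):

        from scipy.stats import wilcoxon

        # Hypothetical monosyllable scores (% correct) per patient, with the
        # old Spectra® map versus the Freedom® upgrade.
        spectra = [40, 52, 38, 61, 45, 50, 33, 47, 55, 42]
        freedom = [48, 60, 45, 66, 52, 58, 41, 50, 63, 49]

        stat, p = wilcoxon(spectra, freedom)
        print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")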

  12. Teaching Media in the Teaching of Arabic Language to Non-Native Arabic Speakers

    Directory of Open Access Journals (Sweden)

    Rais Abdullah

    2016-06-01

    Full Text Available Learning media have demonstrated their value in helping educators convey the message of a lesson more quickly and in ways more easily grasped by students. Media play a role in enriching students' learning experience, increasing their attention to the lesson, minimizing differences in perception between teachers and students, and helping to resolve individual differences between students. Teaching Arabic to non-native speakers would be more interesting, and the material easier to learn, remember, understand and practice, if taught through such media. This article aims to explore the benefits, importance and role of instructional media in teaching Arabic to non-native speakers.

  13. Descriptions of Difficult Conversations between Native and Non-Native English Speakers: In-Group Membership and Helping Behaviors

    Science.gov (United States)

    Young, Ray; Faux, William V., II

    2011-01-01

    This study illustrated the perceptions of native English speakers about difficult conversations with non-native English speakers. A total of 114 native English speakers enrolled in undergraduate communication courses at a regional state university answered a questionnaire about a recent difficult conversation the respondent had with a non-native…

  14. On the nature of the speech perception deficits in children with autism spectrum disorders.

    Science.gov (United States)

    You, R S; Serniclaes, W; Rider, D; Chabane, N

    2017-02-01

    Previous studies have claimed to show deficits in the perception of speech sounds in autism spectrum disorders (ASD). The aim of the current study was to clarify the nature of such deficits. Children with ASD might only exhibit a lesser amount of precision in the perception of phoneme categories (CPR deficit). However, these children might further present an allophonic mode of speech perception, similar to the one evidenced in dyslexia, characterised by enhanced discrimination of acoustic differences within phoneme categories. Allophonic perception usually gives rise to a categorical perception (CP) deficit, characterised by a weaker coherence between discrimination and identification of speech sounds. The perceptual performance of ASD children was compared to that of control children of the same chronological age. Identification and discrimination data were collected for continua of natural vowels, synthetic vowels, and synthetic consonants. Results confirmed that children with ASD exhibit a CPR deficit for the three stimulus continua. These children further exhibited a trend toward allophonic perception that was, however, not accompanied by the usual CP deficit. These findings confirm that the commonly found CPR deficit is also present in ASD. Whether children with ASD also present allophonic perception requires further investigations.

  15. Effects of Musicality on the Perception of Rhythmic Structure in Speech

    Directory of Open Access Journals (Sweden)

    Natalie Boll-Avetisyan

    2017-04-01

    Full Text Available Language and music share many rhythmic properties, such as variations in intensity and duration leading to repeating patterns. Perception of rhythmic properties may rely on cognitive networks that are shared between the two domains. If so, then variability in speech rhythm perception may relate to individual differences in musicality. To examine this possibility, the present study focuses on rhythmic grouping, which is assumed to be guided by a domain-general principle, the Iambic/Trochaic law, stating that sounds alternating in intensity are grouped as strong-weak, and sounds alternating in duration are grouped as weak-strong. German listeners completed a grouping task: They heard streams of syllables alternating in intensity, duration, or neither, and had to indicate whether they perceived a strong-weak or weak-strong pattern. Moreover, their music perception abilities were measured, and they filled out a questionnaire reporting their productive musical experience. Results showed that better musical rhythm perception ability was associated with more consistent rhythmic grouping of speech, while melody perception ability and productive musical experience were not. This suggests shared cognitive procedures in the perception of rhythm in music and speech. Also, the results highlight the relevance of considering individual differences in musicality when aiming to explain variability in prosody perception.

  16. Audio-visual speech in noise perception in dyslexia.

    Science.gov (United States)

    van Laarhoven, Thijs; Keetels, Mirjam; Schakel, Lemmy; Vroomen, Jean

    2016-12-18

    Individuals with developmental dyslexia (DD) may experience, besides reading problems, other speech-related processing deficits. Here, we examined the influence of visual articulatory information (lip-read speech) at various levels of background noise on auditory word recognition in children and adults with DD. We found that children with a documented history of DD have deficits in their ability to gain benefit from lip-read information that disambiguates noise-masked speech. We show with another group of adult individuals with DD that these deficits persist into adulthood. These deficits could not be attributed to impairments in unisensory auditory word recognition. Rather, the results indicate a specific deficit in audio-visual speech processing and suggest that impaired multisensory integration might be an important aspect of DD. © 2016 John Wiley & Sons Ltd.

  17. Dissociating speech perception and comprehension at reduced levels of awareness

    NARCIS (Netherlands)

    Davis, Matthew H.; Coleman, Martin R.; Absalom, Anthony R.; Rodd, Jennifer M.; Johnsrude, Ingrid S.; Matta, Basil F.; Owen, Adrian M.; Menon, David K.

    2007-01-01

    We used functional MRI and the anesthetic agent propofol to assess the relationship among neural responses to speech, successful comprehension, and conscious awareness. Volunteers were scanned while listening to sentences containing ambiguous words, matched sentences without ambiguous words, and sig

  18. Predicting individual variation in language from infant speech perception measures

    NARCIS (Netherlands)

    A. Cristia; A. Seidl; C. Junge; M. Soderstrom; P. Hagoort

    2013-01-01

    There are increasing reports that individual variation in behavioral and neurophysiological measures of infant speech processing predicts later language outcomes, and specifically concurrent or subsequent vocabulary size. If such findings are held up under scrutiny, they could both illuminate theore

  19. Predicting Individual Variation in Language From Infant Speech Perception Measures

    NARCIS (Netherlands)

    Cristia, A.; Seidl, A.; Junge, C.M.M.; Soderstrom, M.; Hagoort, P.

    2014-01-01

    There are increasing reports that individual variation in behavioral and neurophysiological measures of infant speech processing predicts later language outcomes, and specifically concurrent or subsequent vocabulary size. If such findings are held up under scrutiny, they could both illuminate theore

  20. Context-dependent encoding in the auditory brainstem subserves enhanced speech-in-noise perception in musicians.

    Science.gov (United States)

    Parbery-Clark, A; Strait, D L; Kraus, N

    2011-10-01

    Musical training strengthens speech perception in the presence of background noise. Given that the ability to make use of speech sound regularities, such as pitch, underlies perceptual acuity in challenging listening environments, we asked whether musicians' enhanced speech-in-noise perception is facilitated by increased neural sensitivity to acoustic regularities. To this aim we examined subcortical encoding of the same speech syllable presented in predictable and variable conditions and speech-in-noise perception in 31 musicians and nonmusicians. We anticipated that musicians would demonstrate greater neural enhancement of speech presented in the predictable compared to the variable condition than nonmusicians. Accordingly, musicians demonstrated more robust neural encoding of the fundamental frequency (i.e., pitch) of speech presented in the predictable relative to the variable condition than nonmusicians. The degree of neural enhancement observed to predictable speech correlated with subjects' musical practice histories as well as with their speech-in-noise perceptual abilities. Taken together, our findings suggest that subcortical sensitivity to speech regularities is shaped by musical training and may contribute to musicians' enhanced speech-in-noise perception. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Neurophysiological Evidence That Musical Training Influences the Recruitment of Right Hemispheric Homologues for Speech Perception

    Directory of Open Access Journals (Sweden)

    McNeel Gordon Jantzen

    2014-03-01

    Full Text Available Musicians have a more accurate temporal and tonal representation of auditory stimuli than their non-musician counterparts (Kraus & Chandrasekaran, 2010; Parbery-Clark, Skoe, & Kraus, 2009; Zendel & Alain, 2008; Musacchia, Sams, Skoe, & Kraus, 2007). Musicians who are adept at the production and perception of music are also more sensitive to key acoustic features of speech such as voice onset timing and pitch. Together, these data suggest that musical training may enhance the processing of acoustic information for speech sounds. In the current study, we sought to provide neural evidence that musicians process speech and music in a similar way. We hypothesized that for musicians, right hemisphere areas traditionally associated with music are also engaged for the processing of speech sounds. In contrast, we predicted that in non-musicians processing of speech sounds would be localized to traditional left hemisphere language areas. Speech stimuli differing in voice onset time were presented using a dichotic listening paradigm. Subjects either indicated the aural location of a specified speech sound or identified a specific speech sound from a directed aural location. Musical training effects and organization of acoustic features were reflected by activity in source generators of the P50. This included greater activation of the right middle temporal gyrus (MTG) and superior temporal gyrus (STG) in musicians. The findings demonstrate recruitment of the right hemisphere in musicians for discriminating speech sounds and a putative broadening of their language network. Musicians appear to have an increased sensitivity to acoustic features and enhanced selective attention to temporal features of speech that is facilitated by musical training and supported, in part, by right hemisphere homologues of established speech processing regions of the brain.

  2. Effective connectivity analysis demonstrates involvement of premotor cortex during speech perception.

    Science.gov (United States)

    Osnes, Berge; Hugdahl, Kenneth; Specht, Karsten

    2011-02-01

    Several reports of premotor cortex involvement in speech perception have been put forward. Still, the functional role of premotor cortex is under debate. To investigate this functional role, we presented parametrically varied speech stimuli in both a behavioral and a functional magnetic resonance imaging (fMRI) study. White noise was transformed over seven distinct steps into a speech sound and presented to the participants in randomized order. The same transformation from white noise into a musical instrument sound served as the control condition. The fMRI data were modelled with Dynamic Causal Modeling (DCM), in which the effective connectivity between Heschl's gyrus, planum temporale, superior temporal sulcus and premotor cortex was tested. The fMRI results revealed a graded increase in activation in the left superior temporal sulcus. Premotor cortex activity was present only at an intermediate step, when the speech sounds became identifiable but were still distorted, and was absent when the speech sounds were clearly perceivable. A Bayesian model selection procedure favored a model that contained significant interconnections between Heschl's gyrus, planum temporale, and superior temporal sulcus when processing speech sounds. In addition, bidirectional connections between premotor cortex and superior temporal sulcus and from planum temporale to premotor cortex were significant. Processing non-speech sounds initiated no significant connections to premotor cortex. Since the highest level of motor activity was observed only when processing identifiable sounds with incomplete phonological information, it is concluded that premotor cortex is not generally necessary for speech perception but may facilitate interpreting a sound as speech when the acoustic input is sparse.

  3. Effect of attentional load on audiovisual speech perception: Evidence from ERPs

    Directory of Open Access Journals (Sweden)

    Agnès eAlsius

    2014-07-01

    Full Text Available Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e. a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  4. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    Science.gov (United States)

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  5. Contributions of local speech encoding and functional connectivity to audio-visual speech perception

    Science.gov (United States)

    Giordano, Bruno L; Ince, Robin A A; Gross, Joachim; Schyns, Philippe G; Panzeri, Stefano; Kayser, Christoph

    2017-01-01

    Seeing a speaker’s face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR, strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker’s face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments. DOI: http://dx.doi.org/10.7554/eLife.24763.001 PMID:28590903

  6. High visual resolution matters in audiovisual speech perception, but only for some.

    Science.gov (United States)

    Alsius, Agnès; Wayne, Rachel V; Paré, Martin; Munhall, Kevin G

    2016-07-01

    The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information, compared to participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with participants with the highest visual gain fixating longer on the mouth region. Our results indicate that the individual variance in audiovisual speech in noise performance can be accounted for, in part, by better use of fine facial detail information extracted from the visual signal and increased fixation on mouth regions for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.

  7. Intensive foreign language learning reveals effects on categorical perception of sibilant voicing after only 3 weeks

    DEFF Research Database (Denmark)

    Nielsen, Andreas Højlund; Horn, Nynne Thorup; Derdau Sørensen, Stine

    2015-01-01

    Models of speech learning suggest that adaptations to foreign language sound categories take place within 6-12 months of exposure to a foreign language. Results from laboratory language training show effects of very targeted training on non-native speech contrasts within only one to three weeks of training. Results from immersion studies are inconclusive, but some suggest continued effects on non-native speech perception after 6-8 years of experience. We investigated this apparent discrepancy in the timing of adaptation to foreign speech sounds in a longitudinal study of foreign language learning, with learners tested before training (T0), after three weeks (T1), six months (T2), and 19 months (T3). We used a phonemic Arabic contrast (pharyngeal vs. glottal frication) and a phonemic Dari contrast (sibilant voicing) as stimuli. We observed an effect of learning on the Dari learners’ identification of the Dari stimuli already after...

  8. Self-perceived oral communication competence in English, self-perceived employability and career expectations among non-native English speaking business professionals

    OpenAIRE

    Kuokka, Tiia

    2016-01-01

    Objective of the Study: The objectives for this thesis were 1) to understand non-native English speaking business professionals' self-perception of their oral communication competence in English, 2) to understand the importance of English language and competence in English for non-native English speaking business professionals when they consider employability and career expectations and finally 3) to study whether the concepts of self-perceived oral English communication competence, self-...

  9. Speech Intelligibility and Accents in Speech-Mediated Interfaces: Results and Recommendations

    Science.gov (United States)

    Lawrence, Halcyon M.

    2013-01-01

    There continues to be significant growth in the development and use of speech-mediated devices and technology products; however, there is no evidence that non-native English speech is used in these devices, despite the fact that English is now spoken by more non-native speakers than native speakers worldwide. This relative absence of nonnative…

  10. Relationship between speech perception in noise and phonological awareness skills for children with normal hearing.

    Science.gov (United States)

    Lewis, Dawna; Hoover, Brenda; Choi, Sangsook; Stelmachowicz, Patricia

    2010-12-01

    Speech perception difficulties experienced by children in adverse listening environments have been well documented. It has been suggested that phonological awareness may be related to children's ability to understand speech in noise. The goal of this study was to provide data that allow a clearer characterization of this potential relation in typically developing children. Doing so may result in a better understanding of how children learn to listen in noise, as well as providing information to identify children who are at risk for difficulties listening in noise. Thirty-six children (5 to 7 years) with normal hearing participated in the study. Three phonological awareness tasks (syllable counting, initial consonant same, and phoneme deletion), representing a range of skills, were administered. For the perception-in-noise tasks, nonsense syllables, monosyllabic words, and meaningful sentences with three key words were presented (50 dB SPL) at three signal-to-noise ratios (0, +5, and +10 dB). Among the speech-in-noise tasks, there was a significant effect of signal-to-noise ratio, with children performing less well at 0 dB signal-to-noise ratio for all stimuli. A significant age effect occurred only for word recognition, with 7-year-olds scoring significantly higher than 5-year-olds. For all three phonological awareness tasks, an age effect existed, with 7-year-olds again performing significantly better than 5-year-olds. However, when examining the relation between speech recognition in noise and phonological awareness skills, no single variable accounted for a significant part of the variance in performance on nonsense syllables, words, or sentences. There was, however, an association between vocabulary knowledge and speech perception in noise. Although phonological awareness skills are strongly related to reading and some children with reading difficulties also demonstrate poor speech perception in noise, results of this study question a relation between phonological
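
    For readers unfamiliar with how the fixed signal-to-noise ratios in such designs are produced: the masker is scaled relative to the speech before the two are mixed. A generic sketch of that stimulus-preparation step (placeholder signals and sampling rate; not the authors' code):

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise power ratio is `snr_db` dB, then mix."""
    noise = noise[: len(speech)]                     # trim masker to stimulus length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / 10 ** (snr_db / 10)  # SNR_dB = 10*log10(Ps/Pn)
    return speech + noise * np.sqrt(target_p_noise / p_noise)

# Example at the study's three SNRs, with white noise standing in for the masker
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)  # placeholder 1-s token at 16 kHz
noise = rng.standard_normal(16000)
mixes = {snr: mix_at_snr(speech, noise, snr) for snr in (0, 5, 10)}
```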

  11. Differential Allocation of Attention During Speech Perception in Monolingual and Bilingual Listeners.

    Science.gov (United States)

    Astheimer, Lori B; Berkes, Matthias; Bialystok, Ellen

    Attention is required during speech perception to focus processing resources on critical information. Previous research has shown that bilingualism modifies attentional processing in nonverbal domains. The current study used event-related potentials (ERPs) to determine whether bilingualism also modifies auditory attention during speech perception. We measured attention to word onsets in spoken English for monolinguals and Chinese-English bilinguals. Auditory probes were inserted at four times in a continuous narrative: concurrent with word onset, 100 ms before or after onset, and at random control times. Greater attention was indexed by an increase in the amplitude of the early negativity (N1). Among monolinguals, probes presented after word onsets elicited a larger N1 than control probes, replicating previous studies. For bilinguals, there was no N1 difference for probes at different times around word onsets, indicating less specificity in allocation of attention. These results suggest that bilingualism shapes attentional strategies during English speech comprehension.

  12. Differential Allocation of Attention During Speech Perception in Monolingual and Bilingual Listeners

    Science.gov (United States)

    Astheimer, Lori B.; Berkes, Matthias; Bialystok, Ellen

    2016-01-01

    Attention is required during speech perception to focus processing resources on critical information. Previous research has shown that bilingualism modifies attentional processing in nonverbal domains. The current study used event-related potentials (ERPs) to determine whether bilingualism also modifies auditory attention during speech perception. We measured attention to word onsets in spoken English for monolinguals and Chinese-English bilinguals. Auditory probes were inserted at four times in a continuous narrative: concurrent with word onset, 100 ms before or after onset, and at random control times. Greater attention was indexed by an increase in the amplitude of the early negativity (N1). Among monolinguals, probes presented after word onsets elicited a larger N1 than control probes, replicating previous studies. For bilinguals, there was no N1 difference for probes at different times around word onsets, indicating less specificity in allocation of attention. These results suggest that bilingualism shapes attentional strategies during English speech comprehension. PMID:27110579

  13. Self-organizing maps for measuring similarity of audiovisual speech percepts

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich

    The goal of this work is to find a way to measure similarity of audiovisual speech percepts. Phoneme-related self-organizing maps (SOM) with a rectangular basis are trained with data material from a (labeled) video film. For the training, a combination of auditory speech features and corresponding ... sentences in German with a balanced phoneme repertoire. As a result it can be stated that (i) the SOM can be trained to map auditory and visual features in a topology-preserving way and (ii) they show strain due to the influence of other audio-visual units. The SOM can be used to measure similarity amongst ... audio-visual speech percepts and to measure coarticulatory effects...

  14. The Effects of Corrective Feedback on Instructed L2 Speech Perception

    Science.gov (United States)

    Lee, Andrew H.; Lyster, Roy

    2016-01-01

    To what extent do second language (L2) learners benefit from instruction that includes corrective feedback (CF) on L2 speech perception? This article addresses this question by reporting the results of a classroom-based experimental study conducted with 32 young adult Korean learners of English. An instruction-only group and an instruction + CF…

  15. Speech Perception Deficits in Poor Readers: A Reply to Denenberg's Critique.

    Science.gov (United States)

    Studdert-Kennedy, Michael; Mody, Maria; Brady, Susan

    2000-01-01

    This rejoinder to a critique of the authors' research on speech perception deficits in poor readers answers the specific criticisms and reaffirms their conclusion that the difficulty some poor readers have with rapid /ba/-/da/ discrimination does not stem from difficulty in discriminating the rapid spectral transitions at stop-vowel syllable…

  16. Is the Sensorimotor Cortex Relevant for Speech Perception and Understanding? An Integrative Review

    Science.gov (United States)

    Schomers, Malte R.; Pulvermüller, Friedemann

    2016-01-01

    In the neuroscience of language, phonemes are frequently described as multimodal units whose neuronal representations are distributed across perisylvian cortical regions, including auditory and sensorimotor areas. A different position views phonemes primarily as acoustic entities with posterior temporal localization, which are functionally independent from frontoparietal articulatory programs. To address this current controversy, we here discuss experimental results from functional magnetic resonance imaging (fMRI) as well as transcranial magnetic stimulation (TMS) studies. On first glance, a mixed picture emerges, with earlier research documenting neurofunctional distinctions between phonemes in both temporal and frontoparietal sensorimotor systems, but some recent work seemingly failing to replicate the latter. Detailed analysis of methodological differences between studies reveals that the way experiments are set up explains whether sensorimotor cortex maps phonological information during speech perception or not. In particular, acoustic noise during the experiment and ‘motor noise’ caused by button press tasks work against the frontoparietal manifestation of phonemes. We highlight recent studies using sparse imaging and passive speech perception tasks along with multivariate pattern analysis (MVPA) and especially representational similarity analysis (RSA), which succeeded in separating acoustic-phonological from general-acoustic processes and in mapping specific phonological information on temporal and frontoparietal regions. The question about a causal role of sensorimotor cortex on speech perception and understanding is addressed by reviewing recent TMS studies. We conclude that frontoparietal cortices, including ventral motor and somatosensory areas, reflect phonological information during speech perception and exert a causal influence on language understanding. PMID:27708566

  17. Speech perception after cochlear implantation in 53 patients with otosclerosis: multicentre results.

    NARCIS (Netherlands)

    Rotteveel, L.J.C.; Snik, A.F.M.; Cooper, H.; Mawman, D.J.; Olphen, A.F. van; Mylanus, E.A.M.

    2010-01-01

    OBJECTIVES: To analyse the speech perception performance of 53 cochlear implant recipients with otosclerosis and to evaluate which factors influenced patient performance in this group. The factors included disease-related data such as demographics, pre-operative audiological characteristics, the

  18. Impact of Language on Development of Auditory-Visual Speech Perception

    Science.gov (United States)

    Sekiyama, Kaoru; Burnham, Denis

    2008-01-01

    The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various…

  19. General Auditory Processing, Speech Perception and Phonological Awareness Skills in Chinese-English Biliteracy

    Science.gov (United States)

    Chung, Kevin K. H.; McBride-Chang, Catherine; Cheung, Him; Wong, Simpson W. L.

    2013-01-01

    This study focused on the associations of general auditory processing, speech perception, phonological awareness and word reading in Cantonese-speaking children from Hong Kong learning to read both Chinese (first language [L1]) and English (second language [L2]). Children in Grades 2--4 ("N" = 133) participated and were administered…

  20. The effect of speech recognition on working postures, productivity and the perception of user friendliness

    NARCIS (Netherlands)

    Korte, E.M. de; Lingen, P. van

    2006-01-01

    A comparative, experimental study with repeated measures has been conducted to evaluate the effect of the use of speech recognition on working postures, productivity and the perception of user friendliness. Fifteen subjects performed a standardised task, first with keyboard and mouse and, after a

  1. Bullying in Children Who Stutter: Speech-Language Pathologists' Perceptions and Intervention Strategies

    Science.gov (United States)

    Blood, Gordon W.; Boyle, Michael P.; Blood, Ingrid M.; Nalesnik, Gina R.

    2010-01-01

    Bullying in school-age children is a global epidemic. School personnel play a critical role in eliminating this problem. The goals of this study were to examine speech-language pathologists' (SLPs) perceptions of bullying, endorsement of potential strategies for dealing with bullying, and associations among SLPs' responses and specific demographic…

  2. Engineering biofuel tolerance in non-native producing microorganisms.

    Science.gov (United States)

    Jin, Hu; Chen, Lei; Wang, Jiangxin; Zhang, Weiwen

    2014-01-01

    Large-scale production of renewable biofuels through microbiological processes has drawn significant attention in recent years, mostly due to increasing concerns about petroleum fuel shortages and the environmental consequences of the over-utilization of petroleum-based fuels. In addition to native biofuel-producing microbes that have been employed for biofuel production for decades, recent advances in metabolic engineering and synthetic biology have made it possible to produce biofuels in several non-native biofuel-producing microorganisms. Compared to native producers, these non-native systems carry the advantages of fast growth, simple nutrient requirements, readiness for genetic modification, and even the capability to assimilate CO2 and solar energy, making them competitive alternative systems for further decreasing the cost of biofuel production. However, the tolerance of these non-native microorganisms to toxic biofuels is naturally low, which has restricted their potential for high-efficiency biofuel production. To address this issue, research has recently been conducted to explore biofuel tolerance mechanisms and to construct robust high-tolerance strains of non-native biofuel-producing microorganisms. In this review, we critically summarize the recent progress in this area, focusing on three popular non-native biofuel-producing systems, i.e. Escherichia coli, Lactobacillus and photosynthetic cyanobacteria.

  3. Defining the impact of non-native species.

    Science.gov (United States)

    Jeschke, Jonathan M; Bacher, Sven; Blackburn, Tim M; Dick, Jaimie T A; Essl, Franz; Evans, Thomas; Gaertner, Mirijam; Hulme, Philip E; Kühn, Ingolf; Mrugała, Agata; Pergl, Jan; Pyšek, Petr; Rabitsch, Wolfgang; Ricciardi, Anthony; Richardson, David M; Sendek, Agnieszka; Vilà, Montserrat; Winter, Marten; Kumschick, Sabrina

    2014-10-01

    Non-native species cause changes in the ecosystems to which they are introduced. These changes, or some of them, are usually termed impacts; they can be manifold and potentially damaging to ecosystems and biodiversity. However, the impacts of most non-native species are poorly understood, and a synthesis of available information is being hindered because authors often do not clearly define impact. We argue that explicitly defining the impact of non-native species will promote progress toward a better understanding of the implications of changes to biodiversity and ecosystems caused by non-native species; help disentangle which aspects of scientific debates about non-native species are due to disparate definitions and which represent true scientific discord; and improve communication between scientists from different research disciplines and between scientists, managers, and policy makers. For these reasons and based on examples from the literature, we devised seven key questions that fall into 4 categories: directionality, classification and measurement, ecological or socio-economic changes, and scale. These questions should help in formulating clear and practical definitions of impact to suit specific scientific, stakeholder, or legislative contexts. © 2014 The Authors. Conservation Biology published by Wiley Periodicals, Inc., on behalf of the Society for Conservation Biology.

  4. Don't Listen With Your Mouth Full: The Role of Facial Motor Action in Visual Speech Perception.

    Science.gov (United States)

    Turner, Angela C; McIntosh, Daniel N; Moody, Eric J

    2015-06-01

    Theories of speech perception agree that visual input enhances the understanding of speech but disagree on whether physically mimicking the speaker improves understanding. This study investigated whether facial motor mimicry facilitates visual speech perception by testing whether blocking facial motor action impairs speechreading performance. Thirty-five typically developing children (19 boys; 16 girls; M age = 7 years) completed the Revised Craig Lipreading Inventory under two conditions. While observing silent videos of 15 words being spoken, participants either held a tongue depressor horizontally with their teeth (blocking facial motor action) or squeezed a ball with one hand (allowing facial motor action). As hypothesized, blocking motor action resulted in fewer correctly understood words than that of the control task. The results suggest that facial mimicry or other methods of facial action support visual speech perception in children. Future studies on the impact of motor action on the typical and atypical development of speech perception are warranted.

  5. Melodic Contour Training and Its Effect on Speech in Noise, Consonant Discrimination, and Prosody Perception for Cochlear Implant Recipients

    Directory of Open Access Journals (Sweden)

    Chi Yhun Lo

    2015-01-01

    Full Text Available Cochlear implant (CI) recipients generally have good perception of speech in quiet environments but difficulty perceiving speech in noisy conditions, reduced sensitivity to speech prosody, and difficulty appreciating music. Auditory training has been proposed as a method of improving speech perception for CI recipients, and recent efforts have focussed on the potential benefits of music-based training. This study evaluated two melodic contour training programs and their relative efficacy as measured on a number of speech perception tasks. These melodic contours were simple 5-note sequences formed into 9 contour patterns, such as “rising” or “rising-falling.” One training program controlled difficulty by manipulating interval sizes, the other by note durations. Sixteen adult CI recipients (aged 26–86 years) and twelve normal hearing (NH) adult listeners (aged 21–42 years) were tested on a speech perception battery at baseline and then after 6 weeks of melodic contour training. Results indicated that there were some benefits for speech perception tasks for CI recipients after melodic contour training. Specifically, consonant perception in quiet and question/statement prosody was improved. In comparison, NH listeners performed at ceiling for these tasks. There was no significant difference between the posttraining results for either training program, suggesting that both conferred benefits for training CI recipients to better perceive speech.
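
    The contour stimuli described above can be pictured as step patterns mapped onto a pitch scale, with interval size as one difficulty knob. A hypothetical encoding in Python (the actual patterns, base pitch, and interval sizes used in the training programs are not specified here, so all values below are illustrative assumptions):

```python
import numpy as np

A4 = 440.0  # reference pitch in Hz; an assumption, not from the study

def contour_freqs(pattern, interval_semitones, base=A4):
    """Turn a step pattern (+1 up, -1 down) into 5 note frequencies; larger
    interval sizes should make the contour easier to hear out."""
    steps = np.cumsum([0] + [s * interval_semitones for s in pattern])
    return [base * 2 ** (st / 12) for st in steps]

# Hypothetical encodings of two of the nine 5-note patterns:
rising         = [+1, +1, +1, +1]
rising_falling = [+1, +1, -1, -1]

print(contour_freqs(rising, interval_semitones=2))          # small intervals: harder
print(contour_freqs(rising_falling, interval_semitones=5))  # large intervals: easier
```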

  6. Neuronal oscillations and speech perception: critical-band temporal envelopes are the essence

    Directory of Open Access Journals (Sweden)

    Oded eGhitza

    2013-01-01

    Full Text Available A recent opinion article (Neural oscillations in speech: don’t be enslaved by the envelope; Obleser et al., 2012) questions the validity of a class of speech perception models inspired by the possible role of neuronal oscillations in decoding speech (e.g., Ghitza 2011, Giraud & Poeppel 2012). They criticize, in particular, what they see as the over-emphasis of the role of temporal speech envelope information, and the over-emphasis of entrainment to the input rhythm while neglecting the role of top-down processes in modulating the entrainment of neuronal oscillations. Here we respond to these arguments, referring to the phenomenological model of Ghitza (2011), taken as a representative of the criticized approach.

  7. Testing Speech Recognition in Spanish-English Bilingual Children with the Computer-Assisted Speech Perception Assessment (CASPA): Initial Report.

    Science.gov (United States)

    García, Paula B; Rosado Rogers, Lydia; Nishi, Kanae

    2016-01-01

    This study evaluated the English version of Computer-Assisted Speech Perception Assessment (E-CASPA) with Spanish-English bilingual children. E-CASPA has been evaluated with monolingual English speakers ages 5 years and older, but it is unknown whether a separate norm is necessary for bilingual children. Eleven Spanish-English bilingual and 12 English monolingual children (6 to 12 years old) with normal hearing participated. Responses were scored by word, phoneme, consonant, and vowel. Regardless of scores, performance across three signal-to-noise ratio conditions was similar between groups, suggesting that the same norm can be used for both bilingual and monolingual children.
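
    The multi-level scoring mentioned above (word, phoneme, consonant, vowel) can be illustrated with a toy position-by-position comparison. CASPA's actual scoring rules and phoneme notation may differ (real scoring must handle insertions and deletions), so treat this purely as a sketch:

```python
# Toy multi-level scoring of one response against one target.
# The phoneme representation and vowel set are assumptions for illustration.
TARGET = ["k", "ae", "t"]    # "cat" as a phoneme list
RESPONSE = ["k", "ae", "p"]  # listener reported "cap"
VOWELS = {"ae", "iy", "uw", "aa", "eh"}

word_correct = TARGET == RESPONSE
matches = [t == r for t, r in zip(TARGET, RESPONSE)]
phoneme_score = sum(matches) / len(TARGET)
consonant_score = (sum(m for m, t in zip(matches, TARGET) if t not in VOWELS)
                   / sum(t not in VOWELS for t in TARGET))
vowel_score = (sum(m for m, t in zip(matches, TARGET) if t in VOWELS)
               / sum(t in VOWELS for t in TARGET))
print(word_correct, phoneme_score, consonant_score, vowel_score)
```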

  8. Speech Perception in Noise by Children with Cochlear Implants

    Science.gov (United States)

    Caldwell, Amanda; Nittrouer, Susan

    2013-01-01

    Purpose: Common wisdom suggests that listening in noise poses disproportionately greater difficulty for listeners with cochlear implants (CIs) than for peers with normal hearing (NH). The purpose of this study was to examine phonological, language, and cognitive skills that might help explain speech-in-noise abilities for children with CIs.…

  9. The influence of phonetic dimensions on aphasic speech perception

    NARCIS (Netherlands)

    de Kok, D.A.; Jonkers, R.; Bastiaanse, Y.R.M.

    2010-01-01

    Individuals with aphasia have more problems detecting small differences between speech sounds than larger ones. This paper reports how phonemic processing is impaired and how this is influenced by speechreading. A non-word discrimination task was carried out with 'audiovisual', 'auditory only' and

  10. The Influence of Phonetic Dimensions on Aphasic Speech Perception

    Science.gov (United States)

    Hessler, Dorte; Jonkers, Roel; Bastiaanse, Roelien

    2010-01-01

    Individuals with aphasia have more problems detecting small differences between speech sounds than larger ones. This paper reports how phonemic processing is impaired and how this is influenced by speechreading. A non-word discrimination task was carried out with "audiovisual", "auditory only" and "visual only" stimulus display. Subjects had to…

  11. Influence of musical training on perception of L2 speech

    NARCIS (Netherlands)

    Sadakata, M.; Zanden, L.D.T. van der; Sekiyama, K.

    2010-01-01

    The current study reports specific cases in which a positive transfer of perceptual ability from the music domain to the language domain occurs. We tested whether musical training enhances discrimination and identification performance of L2 speech sounds (timing features, nasal consonants and vowels

  12. Visual Speech Perception in Children with Language Learning Impairments

    Science.gov (United States)

    Knowland, Victoria C. P.; Evans, Sam; Snell, Caroline; Rosen, Stuart

    2016-01-01

    Purpose: The purpose of the study was to assess the ability of children with developmental language learning impairments (LLIs) to use visual speech cues from the talking face. Method: In this cross-sectional study, 41 typically developing children (mean age: 8 years 0 months, range: 4 years 5 months to 11 years 10 months) and 27 children with…

  14. Infant directed speech and the development of speech perception: enhancing development or an unintended consequence?

    Science.gov (United States)

    McMurray, Bob; Kovack-Lesh, Kristine A; Goodwin, Dresden; McEchron, William

    2013-11-01

    Infant directed speech (IDS) is a speech register characterized by simpler sentences, a slower rate, and more variable prosody. Recent work has implicated it in more subtle aspects of language development. Kuhl et al. (1997) demonstrated that segmental cues for vowels are affected by IDS in a way that may enhance development: the average locations of the extreme "point" vowels (/a/, /i/ and /u/) are further apart in acoustic space. If infants learn speech categories, in part, from the statistical distributions of such cues, these changes may specifically enhance speech category learning. We revisited this by asking (1) if these findings extend to a new cue (Voice Onset Time, a cue for voicing); (2) whether they extend to the interior vowels which are much harder to learn and/or discriminate; and (3) whether these changes may be an unintended phonetic consequence of factors like speaking rate or prosodic changes associated with IDS. Eighteen caregivers were recorded reading a picture book including minimal pairs for voicing (e.g., beach/peach) and a variety of vowels to either an adult or their infant. Acoustic measurements suggested that VOT was different in IDS, but not in a way that necessarily supports better development, and that these changes are almost entirely due to slower rate of speech of IDS. Measurements of the vowel suggested that in addition to changes in the mean, there was also an increase in variance, and statistical modeling suggests that this may counteract the benefit of any expansion of the vowel space. As a whole this suggests that changes in segmental cues associated with IDS may be an unintended by-product of the slower rate of speech and different prosodic structure, and do not necessarily derive from a motivation to enhance development.
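
    The "expansion of the vowel space" at issue here is commonly quantified as the area of the triangle spanned by the point vowels /a/, /i/ and /u/ in F1-F2 space. A small illustration using the shoelace formula; the adult-directed formant values are generic textbook-style numbers and the infant-directed values are invented, neither taken from this study:

```python
# Vowel-space area as the area of the /a/-/i/-/u/ triangle in (F1, F2) space.
def triangle_area(pts):
    (x1, y1), (x2, y2), (x3, y3) = pts
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

adult_directed = [(730, 1090), (270, 2290), (300, 870)]   # /a/, /i/, /u/ in Hz
infant_directed = [(850, 1200), (240, 2600), (260, 750)]  # hypothetical expansion

print("ADS area:", triangle_area(adult_directed))
print("IDS area:", triangle_area(infant_directed))  # larger = more expanded space
```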

  15. Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study

    Science.gov (United States)

    Kumar, G. Vinodh; Halder, Tamesh; Jaiswal, Amit K.; Mukherjee, Abhishek; Roy, Dipanjan; Banerjee, Arpan

    2016-01-01

    Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (McGurk-effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal, parietal, and the integrative brain sites in the vicinity of the superior temporal sulcus (STS) for multisensory speech perception. However, if and how the network across the whole brain participates in multisensory perceptual processing remains an open question. We posit that large-scale functional connectivity among neural populations situated in distributed brain sites may provide valuable insights into the processing and fusing of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent audio-visual (AV) speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs were computed using the time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha- and beta-band coherence underlying cross-modal (illusory) perception compared to unisensory perception around a temporal window of 300–600 ms following onset of stimuli. During asynchronous speech stimuli, a global broadband coherence was observed during cross-modal perception at earlier times, along with pre-stimulus decreases of lower frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags. Thus
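
    As a rough intuition for the global coherence measure described above: a simplified, stationary stand-in averages pairwise magnitude-squared coherence over all sensor pairs within a frequency band. The paper's time-frequency version additionally tracks this quantity over time; the sketch below (synthetic data, assumed sampling rate) only shows the pairwise-averaging idea:

```python
import numpy as np
from itertools import combinations
from scipy.signal import coherence

fs = 250.0                            # assumed sampling rate in Hz
rng = np.random.default_rng(1)
eeg = rng.standard_normal((8, 5000))  # 8 sensors x 20 s of synthetic data

def band_global_coherence(data, fs, band):
    """Mean pairwise magnitude-squared coherence across all sensor pairs
    within a frequency band (a stationary simplification)."""
    vals = []
    for i, j in combinations(range(data.shape[0]), 2):
        f, cxy = coherence(data[i], data[j], fs=fs, nperseg=512)
        mask = (f >= band[0]) & (f <= band[1])
        vals.append(cxy[mask].mean())
    return float(np.mean(vals))

print("gamma-band global coherence:", band_global_coherence(eeg, fs, (30, 45)))
```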

  16. Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study.

    Science.gov (United States)

    Kumar, G Vinodh; Halder, Tamesh; Jaiswal, Amit K; Mukherjee, Abhishek; Roy, Dipanjan; Banerjee, Arpan

    2016-01-01

    Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (McGurk-effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal, parietal, and the integrative brain sites in the vicinity of the superior temporal sulcus (STS) for multisensory speech perception. However, if and how the network across the whole brain participates in multisensory perceptual processing remains an open question. We posit that large-scale functional connectivity among neural populations situated in distributed brain sites may provide valuable insights into the processing and fusing of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent audio-visual (AV) speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs were computed using the time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha- and beta-band coherence underlying cross-modal (illusory) perception compared to unisensory perception around a temporal window of 300-600 ms following onset of stimuli. During asynchronous speech stimuli, a global broadband coherence was observed during cross-modal perception at earlier times, along with pre-stimulus decreases of lower frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags. Thus, our

  17. Mapping the Developmental Trajectory and Correlates of Enhanced Pitch Perception on Speech Processing in Adults with ASD

    Science.gov (United States)

    Mayer, Jennifer L.; Hannent, Ian; Heaton, Pamela F.

    2016-01-01

    Whilst enhanced perception has been widely reported in individuals with Autism Spectrum Disorders (ASDs), relatively little is known about the developmental trajectory and impact of atypical auditory processing on speech perception in intellectually high-functioning adults with ASD. This paper presents data on perception of complex tones and…

  18. Amplified induced neural oscillatory activity predicts musicians' benefits in categorical speech perception.

    Science.gov (United States)

    Bidelman, Gavin M

    2017-04-21

    Event-related brain potentials (ERPs) reveal musical experience refines neural encoding and confers stronger categorical perception (CP) and neural organization for speech sounds. In addition to evoked brain activity, the human EEG can be decomposed into induced (non-phase-locked) responses whose various frequency bands reflect different mechanisms of perceptual-cognitive processing. Here, we aimed to clarify which spectral properties of these neural oscillations are most prone to music-related neuroplasticity and which are linked to behavioral benefits in the categorization of speech. We recorded electrical brain activity while musicians and nonmusicians rapidly identified speech tokens from a sound continuum. Time-frequency analysis parsed evoked and induced EEG into alpha- (∼10Hz), beta- (∼20Hz), and gamma- (>30Hz) frequency bands. We found that musicians' enhanced behavioral CP was accompanied by improved evoked speech responses across the frequency spectrum, complementing previously observed enhancements in evoked potential studies (i.e., ERPs). Brain-behavior correlations implied differences in the underlying neural mechanisms supporting speech CP in each group: modulations in induced gamma power predicted the slope of musicians' speech identification functions whereas early evoked alpha activity predicted behavior in nonmusicians. Collectively, findings indicate that musical training tunes speech processing via two complementary mechanisms: (i) strengthening the formation of auditory object representations for speech signals (gamma-band) and (ii) improving network control and/or the matching of sounds to internalized memory templates (alpha/beta-band). Both neurobiological enhancements may be deployed behaviorally and account for musicians' benefits in the perceptual categorization of speech. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
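
    The evoked/induced decomposition referred to above can be sketched as follows: evoked activity is what survives averaging across trials (phase-locked), while induced activity is the single-trial band power remaining after the evoked response is subtracted. A toy version on synthetic data, with band edges loosely following the abstract (not the study's analysis pipeline):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
rng = np.random.default_rng(2)
trials = rng.standard_normal((100, 500))  # 100 trials x 1 s of synthetic EEG

def band_power(x, lo, hi):
    """Band-limited instantaneous power via a band-pass filter and Hilbert envelope."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x, axis=-1))) ** 2

evoked = trials.mean(axis=0)  # phase-locked component survives the average
for name, (lo, hi) in {"alpha": (8, 12), "beta": (18, 22), "gamma": (30, 50)}.items():
    evoked_pow = band_power(evoked, lo, hi).mean()
    induced_pow = band_power(trials - evoked, lo, hi).mean()  # non-phase-locked part
    print(f"{name}: evoked {evoked_pow:.3f}, induced {induced_pow:.3f}")
```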

  19. Assessing speech perception in children with language difficulties: effects of background noise and phonetic contrast.

    Science.gov (United States)

    Vance, Maggie; Martindale, Nicola

    2012-02-01

    Deficits in speech perception are reported for some children with language impairments (LI). This deficit is more marked when listening against background noise. This study investigated the speech perception skills of young children with and without language difficulties. A speech discrimination task, using non-word minimal pairs in an XAB paradigm, was presented to 20 children with language difficulties aged 5-7 years and 33 typically developing (TD) children aged 4-7 years. Stimuli were presented in quiet and in background noise (babble), and stimuli varied in phonetic contrast, differing in either place of articulation or the presence/absence of voicing. Children with language difficulties performed less well than TD children in all conditions. There was an interaction between group and noise condition, such that children with language difficulties were more affected by the presence of noise. Both groups of children made more errors with one voicing contrast, /s z/, and there was some indication that children with language difficulties had proportionately greater difficulty with this contrast. Speech discrimination scores were significantly correlated with language scores for children with language difficulties. Issues in developing materials for the assessment of speech discrimination in children with LI are discussed.

  20. Processing Reduced Word-Forms in Speech Perception Using Probabilistic Knowledge About Speech Production

    NARCIS (Netherlands)

    Mitterer, H.; McQueen, J.M.

    2009-01-01

    Two experiments examined how Dutch listeners deal with the effects of connected-speech processes, specifically those arising from word-final /t/ reduction (e.g., whether Dutch [tas] is tax, bag, or a reduced-/t/ version of last, touch). Eye movements of Dutch participants were tracked as they looked

  1. Processing Reduced Word-Forms in Speech Perception Using Probabilistic Knowledge about Speech Production

    Science.gov (United States)

    Mitterer, Holger; McQueen, James M.

    2009-01-01

    Two experiments examined how Dutch listeners deal with the effects of connected-speech processes, specifically those arising from word-final /t/ reduction (e.g., whether Dutch [tas] is "tas," bag, or a reduced-/t/ version of "tast," touch). Eye movements of Dutch participants were tracked as they looked at arrays containing 4…

  4. Speech perception and quality of life of open-fit hearing aid users

    Science.gov (United States)

    GARCIA, Tatiana Manfrini; JACOB, Regina Tangerino de Souza; MONDELLI, Maria Fernanda Capoani Garcia

    2016-01-01

    ABSTRACT Objective: To relate the speech perception performance of individuals with high-frequency hearing loss to their quality of life before and after the fitting of an open-fit hearing aid (HA). Methods: The WHOQOL-BREF was administered before the fitting and after 90 days of HA use. The Hearing in Noise Test (HINT) was conducted in two phases: (1) at the time of fitting, without an HA (situation A) and with an HA (situation B); (2) with an HA 90 days after fitting (situation C). Study sample: Thirty subjects with high-frequency sensorineural hearing loss. Results: An analysis of variance and Tukey's test comparing the three HINT situations in quiet and noisy environments showed an improvement after the HA fitting. The WHOQOL-BREF results showed an improvement in quality of life after the HA fitting (paired t-test). The relationship between speech perception and quality of life before the HA fitting indicated a significant association between speech recognition in noisy environments and the social relations domain after the HA fitting (Pearson's correlation coefficient). Conclusions: Auditory stimulation improved the speech perception and quality of life of these individuals. PMID:27383708

  5. Cochlear Implantation in Inner Ear Malformations: Systematic Review of Speech Perception Outcomes and Intraoperative Findings.

    Science.gov (United States)

    Farhood, Zachary; Nguyen, Shaun A; Miller, Stephen C; Holcomb, Meredith A; Meyer, Ted A; Rizk, Habib G

    2017-03-01

    Objective (1) To analyze reported speech perception outcomes in patients with inner ear malformations who undergo cochlear implantation, (2) to review the surgical complications and findings, and (3) to compare the 2 classification systems of Jackler and Sennaroglu. Data Sources PubMed, Scopus (including Embase), Medline, and CINAHL Plus. Review Methods Fifty-nine articles were included that contained speech perception and/or intraoperative data. Cases were differentiated depending on whether the Jackler or Sennaroglu malformation classification was used. A meta-analysis of proportions examined incidences of complete insertion, gusher, and facial nerve aberrancy. For speech perception data, weighted means and standard deviations were calculated for all malformations for short-, medium-, and long-term follow-up. Speech tests were grouped into 3 categories-closed-set words, open-set words, and open-set sentences-and then compared through a comparison-of-means t test. Results Complete insertion was seen in 81.8% of all inner ear malformations (95% CI: 72.6-89.5); gusher was reported in 39.1% of cases (95% CI: 30.3-48.2); and facial nerve anomalies were encountered in 34.4% (95% CI: 20.1-50.3). Significant improvements in average performance were seen for closed- and open-set tests across all malformation types at 12 months postoperatively. Conclusions Cochlear implantation outcomes are favorable for those with inner ear malformations from a surgical and speech outcome standpoint. Accurate classification of anatomic malformations, as well as standardization of postimplantation speech outcomes, is necessary to improve understanding of the impact of implantation in this difficult patient population.
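
    The meta-analysis of proportions mentioned here (pooled incidences with 95% confidence intervals) can be illustrated with a minimal sketch. The per-study counts below are invented, and inverse-variance pooling on the logit scale is one common approach; the review's actual meta-analytic model may differ.

        # Sketch: pooled proportion with inverse-variance weighting on the
        # logit scale, a common approach for a meta-analysis of proportions.
        import math

        studies = [(18, 22), (40, 47), (9, 12), (25, 31)]  # (events, total)

        weights, logits = [], []
        for events, total in studies:
            # continuity correction guards against 0% or 100% proportions
            p = (events + 0.5) / (total + 1.0)
            var = 1.0 / (total * p * (1.0 - p))  # approx. variance of the logit
            logits.append(math.log(p / (1.0 - p)))
            weights.append(1.0 / var)

        pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))
        lo, hi = pooled - 1.96 * se, pooled + 1.96 * se

        def back(x):  # logit -> proportion
            return 1.0 / (1.0 + math.exp(-x))

        print(f"pooled={back(pooled):.3f}  95% CI=({back(lo):.3f}, {back(hi):.3f})")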

  6. STUDENTS WRITING EMAILS TO FACULTY: AN EXAMINATION OF E-POLITENESS AMONG NATIVE AND NON-NATIVE SPEAKERS OF ENGLISH

    Directory of Open Access Journals (Sweden)

    Sigrun Biesenbach-Lucas

    2007-02-01

    Full Text Available This study combines interlanguage pragmatics and speech act research with computer-mediated communication and examines how native and non-native speakers of English formulate low- and high-imposition requests to faculty. While some research claims that email, due to absence of non-verbal cues, encourages informal language, other research has claimed the opposite. However, email technology also allows writers to plan and revise messages before sending them, thus affording the opportunity to edit not only for grammar and mechanics, but also for pragmatic clarity and politeness. The study examines email requests sent by native and non-native English-speaking graduate students to faculty at a major American university over a period of several semesters and applies Blum-Kulka, House, and Kasper's (1989) speech act analysis framework, quantitatively to distinguish levels of directness, i.e. pragmatic clarity, and qualitatively to compare syntactic and lexical politeness devices, the request perspectives, and the specific linguistic request realization patterns preferred by native and non-native speakers. Results show that far more requests are realized through direct strategies as well as hints than conventionally indirect strategies typically found in comparative speech act studies. Politeness conventions in email, a text-only medium with little guidance in the academic institutional hierarchy, appear to be a work in progress, and native speakers demonstrate greater resources in creating e-polite messages to their professors than non-native speakers. A possible avenue for pedagogical intervention with regard to instruction in and acquisition of politeness routines in hierarchically upward email communication is presented.

  7. Acoustic Context Alters Vowel Categorization in Perception of Noise-Vocoded Speech.

    Science.gov (United States)

    Stilp, Christian E

    2017-06-01

    Normal-hearing listeners' speech perception is widely influenced by spectral contrast effects (SCEs), where perception of a given sound is biased away from stable spectral properties of preceding sounds. Despite this influence, it is not clear how these contrast effects affect speech perception for cochlear implant (CI) users whose spectral resolution is notoriously poor. This knowledge is important for understanding how CIs might better encode key spectral properties of the listening environment. Here, SCEs were measured in normal-hearing listeners using noise-vocoded speech to simulate poor spectral resolution. Listeners heard a noise-vocoded sentence where low-F1 (100-400 Hz) or high-F1 (550-850 Hz) frequency regions were amplified to encourage "eh" (/ɛ/) or "ih" (/ɪ/) responses to the following target vowel, respectively. This was done by filtering with +20 dB (experiment 1a) or +5 dB gain (experiment 1b) or filtering using 100 % of the difference between spectral envelopes of /ɛ/ and /ɪ/ endpoint vowels (experiment 2a) or only 25 % of this difference (experiment 2b). SCEs influenced identification of noise-vocoded vowels in each experiment at every level of spectral resolution. In every case but one, SCE magnitudes exceeded those reported for full-spectrum speech, particularly when spectral peaks in the preceding sentence were large (+20 dB gain, 100 % of the spectral envelope difference). Even when spectral resolution was insufficient for accurate vowel recognition, SCEs were still evident. Results are suggestive of SCEs influencing CI users' speech perception as well, encouraging further investigation of CI users' sensitivity to acoustic context.
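
    The stimulus manipulation described here amounts to boosting a frequency region of the precursor sentence. A crude FFT-domain sketch of that operation follows; the parameters are a simplification of the study's processing (which also involved noise vocoding), and the white-noise input is just a stand-in for speech.

        # Sketch: boost a frequency region of a signal by +20 dB, loosely
        # analogous to amplifying the low-F1 (100-400 Hz) region of the
        # precursor sentence. A crude FFT-domain gain, not the study's filters.
        import numpy as np

        def band_gain(signal, fs, f_lo, f_hi, gain_db):
            spec = np.fft.rfft(signal)
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            mask = (freqs >= f_lo) & (freqs <= f_hi)
            spec[mask] *= 10 ** (gain_db / 20.0)  # dB -> linear amplitude
            return np.fft.irfft(spec, n=len(signal))

        fs = 16000
        x = np.random.default_rng(1).normal(size=fs)  # 1 s noise stand-in
        y = band_gain(x, fs, 100, 400, 20.0)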

  8. The effects of blurred vision on auditory-visual speech perception in younger and older adults.

    Science.gov (United States)

    Legault, Isabelle; Gagné, Jean-Pierre; Rhoualem, Wafaa; Anderson-Gosselin, Penny

    2010-12-01

    Speech understanding is improved when the observer can both see and hear the talker. This study compared the effects of reduced visual acuity on auditory-visual (AV) speech recognition in noise among younger and older adults. Two groups of participants performed a closed-set sentence-recognition task in one auditory-alone (A-alone) condition and under three AV conditions: normal visual acuity (6/6), and with blurred vision to simulate 6/30 and 6/60 visual impairments. The results showed that (1) the addition of visual speech cues improved speech perception relative to the A-alone condition, (2) under the AV conditions, performance declined as the amount of blurring increased, (3) even under the AV condition that simulated a visual acuity of 6/60, the speech recognition scores were significantly higher than those obtained under the A-alone condition, and (4) generally, younger adults obtained higher scores than older adults under all conditions. Our results demonstrate the benefits of visual cues in enhancing speech understanding even when visual acuity is not optimal.

  9. Reading fluency and speech perception speed of beginning readers with persistent reading problems: the perception of initial stop consonants and consonant clusters.

    Science.gov (United States)

    Snellings, Patrick; van der Leij, Aryan; Blok, Henk; de Jong, Peter F

    2010-12-01

    This study investigated the role of speech perception accuracy and speed in fluent word decoding of reading disabled (RD) children. A same-different phoneme discrimination task with natural speech tested the perception of single consonants and consonant clusters by young but persistent RD children. RD children were slower than chronological age (CA) controls in recognizing identical sounds, suggesting less distinct phonemic categories. In addition, after controlling for phonetic similarity, Tallal's (Brain Lang 9:182-198, 1980) fast-transitions account of RD children's speech perception problems was contrasted with Studdert-Kennedy's (Read Writ Interdiscip J 15:5-14, 2002) similarity explanation. Results showed no specific RD deficit in perceiving fast transitions. Both phonetic similarity and fast transitions influenced accurate speech perception for RD children as well as CA controls.

  10. Initial Teacher Training Courses and Non-Native Speaker Teachers

    Science.gov (United States)

    Anderson, Jason

    2016-01-01

    This article reports on a study contrasting 41 native speakers (NSs) and 38 non-native speakers (NNSs) of English from two short initial teacher training courses, the Cambridge Certificate in English Language Teaching to Adults and the Trinity College London CertTESOL. After a brief history and literature review, I present findings on teachers'…

  12. The Ceremonial Elements of Non-Native Cultures.

    Science.gov (United States)

    Horwood, Bert

    1994-01-01

    Explores reasons behind the wrongful adoption of Native American ceremonies by Euro-Americans. Focuses on the need for ceremony, its relevance to environmental education, and the fact that some immigrant cultural traditions neither fit this new land nor value the earth. Suggests how non-Natives can express their connection to the land by creating…

  13. Empowering Non-Native English Speaking Teachers through Critical Pedagogy

    Science.gov (United States)

    Hayati, Nur

    2010-01-01

    Critical pedagogy is a teaching approach that aims to develop students' critical thinking, political and social awareness, and self-esteem through dialogue-based learning and reflection. Related to the teaching of EFL, this pedagogy holds the potential to empower non-native English-speaking teachers (NNESTs) when incorporated into English teacher…

  14. Communication Between Speech Production and Perception Within the Brain--Observation and Simulation

    Institute of Scientific and Technical Information of China (English)

    Jianwu Dang; Masato Akagi; Kiyoshi Honda

    2006-01-01

    Realization of an intelligent human-machine interface requires us to investigate human mechanisms and learn from them. This study focuses on communication between speech production and perception within the human brain and on realizing it in an artificial system. A physiological research study based on electromyographic signals (Honda, 1996) suggested that speech communication in the human brain might be based on a topological mapping between speech production and perception, according to an analogous topology between motor and sensory representations. Following this hypothesis, this study first investigated the topologies of the vowel system across the motor, kinematic, and acoustic spaces by means of a model simulation, and then examined the linkage between vowel production and perception in terms of a transformed auditory feedback (TAF) experiment. The model simulation indicated that there exists an invariant mapping from muscle activations (motor space) to articulations (kinematic space) via a coordinate consisting of force-dependent equilibrium positions, and that this mapping from the motor space to the kinematic space is unique. The motor-kinematic-acoustic deduction in the model simulation showed that the topologies were compatible from one space to another. In the TAF experiment, vowel production exhibited a compensatory response to a perturbation in the feedback sound. This implies that vowel production is controlled in reference to perception monitoring.

  15. Acoustic Features and Perceptive Cues of Songs and Dialogues in Whistled Speech: Convergences with Sung Speech

    CERN Document Server

    Meyer, Julien

    2007-01-01

    Whistled speech is a little studied local use of language shaped by several cultures of the world either for distant dialogues or for rendering traditional songs. This practice consists of an emulation of the voice thanks to a simple modulated pitch. It is therefore the result of a transformation of the vocal signal that implies simplifications in the frequency domain. The whistlers adapt their productions to the way each language combines the qualities of height perceived simultaneously by the human ear in the complex frequency spectrum of the spoken or sung voice (pitch, timbre). As a consequence, this practice underlines key acoustic cues for the intelligibility of the concerned languages. The present study provides an analysis of the acoustic and phonetic features selected by whistled speech in several traditions either in purely oral whistles (Spanish, Turkish, Mazatec) or in whistles produced with an instrument like a leaf (Akha, Hmong). It underlines the convergences with the strategies of the singing ...

  16. Acoustic Features and Perceptive Cues of Songs and Dialogues in Whistled Speech: Convergences with Sung Speech

    OpenAIRE

    Meyer, Julien

    2007-01-01

    International audience; Whistled speech is a little studied local use of language shaped by several cultures of the world either for distant dialogues or for rendering traditional songs. This practice consists of an emulation of the voice thanks to a simple modulated pitch. It is therefore the result of a transformation of the vocal signal that implies simplifications in the frequency domain. The whistlers adapt their productions to the way each language combines the qualities of height perce...

  17. The discrepancy in the perception of the public-political speech in Croatia.

    Science.gov (United States)

    Tanta, Ivan; Lesinger, Gordana

    2014-03-01

    A key place in this paper is occupied by the study of political speech in the Republic of Croatia and its impact on voters, that is, by the question of which keywords in the political speeches and public appearances of Croatian politicians their electorate wants to hear. Accordingly, we define the research topic in the form of a question: is there a discrepancy in the perception of public-political speech in Croatia, and which keywords are specific to the two main regions of Croatia and resonate with the inhabitants of those regions? Marcus Tullius Cicero, the most important Roman orator, used a specific associative mnemonic technique known as the "room technique" (the method of loci). He would expound the keywords and conceptual terms he needed for the desired topic and attach them, in the desired order and in a highly creative and distinctive way, to the rooms of a house or palace he knew well. Then, while delivering the speech, he would mentally walk through the rooms of the house or palace, and the keywords and concepts would come to mind in the desired order. Given that this kind of research on political speech is relatively recent in Croatia, it should be noted that this form of political communication is not yet sufficiently explored, particularly regarding the impact and use of keywords specific to the Republic of Croatia in everyday public and political communication. The paper analyses the political campaign speeches and promises of several winning candidates, now Croatian MEPs, for specific keywords related to economics, culture, science, education and health. The analysis is based on a comparison of survey results on the representation of keywords in the politicians' speeches with a qualitative analysis of the keywords in those speeches during the election campaign.

  18. The role of experience in the perception of phonetic detail in children's speech: a comparison between speech-language pathologists and clinically untrained listeners.

    Science.gov (United States)

    Munson, Benjamin; Johnson, Julie M; Edwards, Jan

    2012-05-01

    This study examined whether experienced speech-language pathologists (SLPs) differ from inexperienced people in their perception of phonetic detail in children's speech. Twenty-one experienced SLPs and 21 inexperienced listeners participated in a series of tasks in which they used a visual-analog scale (VAS) to rate children's natural productions of target /s/-/θ/, /t/-/k/, and /d/-// in word-initial position. Listeners rated the perceived distance between individual productions and ideal productions. The experienced listeners' ratings differed from the inexperienced listeners' ratings in four ways: they had higher intrarater reliability, showed less bias toward the more frequent sound, were more closely related to the acoustic characteristics of the children's speech, and were related to a different set of predictor variables. Results suggest that experience working as an SLP leads to better perception of phonetic detail in children's speech. Limitations and future research are discussed.
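
    Two of the listener metrics named here, intrarater reliability and the rating-acoustics relationship, reduce to simple correlations over VAS ratings. The sketch below runs on synthetic numbers (a hypothetical /s/ spectral-centroid cue and noisy repeated ratings), not the study's data.

        # Sketch: intrarater reliability (correlation between repeated VAS
        # ratings of the same tokens) and the rating-acoustics relationship,
        # computed on synthetic numbers.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        centroid = rng.normal(6000, 800, 40)     # e.g., /s/ spectral centroid (Hz)
        true_goodness = (centroid - 6000) / 800  # latent "s-ness" of each token

        def rate(noise_sd):
            # a listener's VAS rating = latent quality + rating noise
            return true_goodness + rng.normal(0, noise_sd, 40)

        first_pass, second_pass = rate(0.5), rate(0.5)

        r_intra, _ = stats.pearsonr(first_pass, second_pass)
        r_acoustic, _ = stats.pearsonr(centroid, first_pass)
        print(f"intrarater r={r_intra:.2f}, rating~acoustics r={r_acoustic:.2f}")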

  19. Auditory Verbal Working Memory as a Predictor of Speech Perception in Modulated Maskers in Listeners with Normal Hearing

    Science.gov (United States)

    Millman, Rebecca E.; Mattys, Sven L.

    2017-01-01

    Purpose: Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the…

  20. Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment.

    Science.gov (United States)

    Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J

    2013-06-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between these sources of information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception, but it is not known whether this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing in children with SLI also involves difficulties in integrating the auditory and visual aspects of speech perception.

  1. Speech perception and localisation with SCORE bimodal: a loudness normalisation strategy for combined cochlear implant and hearing aid stimulation.

    Directory of Open Access Journals (Sweden)

    Tom Francart

    Full Text Available A significant fraction of newly implanted cochlear implant recipients use a hearing aid in their non-implanted ear. SCORE bimodal is a sound processing strategy developed for this configuration, aimed at normalising loudness perception and improving binaural loudness balance. Speech perception performance in quiet and noise and sound localisation ability of six bimodal listeners were measured with and without application of SCORE. Speech perception in quiet was measured either with only acoustic, only electric, or bimodal stimulation, at soft and normal conversational levels. For speech in quiet there was a significant improvement with application of SCORE. Speech perception in noise was measured for either steady-state noise, fluctuating noise, or a competing talker, at conversational levels with bimodal stimulation. For speech in noise there was no significant effect of application of SCORE. Modelling of interaural loudness differences in a long-term-average-speech-spectrum-weighted click train indicated that left-right discrimination of sound sources can improve with application of SCORE. As SCORE was found to leave speech perception unaffected or to improve it, it seems suitable for implementation in clinical devices.

  2. Speech perception and localisation with SCORE bimodal: a loudness normalisation strategy for combined cochlear implant and hearing aid stimulation.

    Science.gov (United States)

    Francart, Tom; McDermott, Hugh

    2012-01-01

    A significant fraction of newly implanted cochlear implant recipients use a hearing aid in their non-implanted ear. SCORE bimodal is a sound processing strategy developed for this configuration, aimed at normalising loudness perception and improving binaural loudness balance. Speech perception performance in quiet and noise and sound localisation ability of six bimodal listeners were measured with and without application of SCORE. Speech perception in quiet was measured either with only acoustic, only electric, or bimodal stimulation, at soft and normal conversational levels. For speech in quiet there was a significant improvement with application of SCORE. Speech perception in noise was measured for either steady-state noise, fluctuating noise, or a competing talker, at conversational levels with bimodal stimulation. For speech in noise there was no significant effect of application of SCORE. Modelling of interaural loudness differences in a long-term-average-speech-spectrum-weighted click train indicated that left-right discrimination of sound sources can improve with application of SCORE. As SCORE was found to leave speech perception unaffected or to improve it, it seems suitable for implementation in clinical devices.
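
    Loudness normalisation of the kind SCORE aims at can be caricatured as gain-matching between the two ears. The sketch below equalises plain RMS levels on invented signals; the actual strategy is built on loudness models of electric and acoustic stimulation, so this only shows the shape of the idea.

        # Toy sketch of loudness balancing between two ear signals by RMS
        # matching. SCORE itself uses loudness models of electric and
        # acoustic hearing; this only illustrates the normalisation idea.
        import numpy as np

        def rms(x):
            return np.sqrt(np.mean(np.square(x)))

        def balance(acoustic, electric_ref):
            """Scale the acoustic-ear signal so its RMS matches the reference."""
            gain = rms(electric_ref) / max(rms(acoustic), 1e-12)
            return acoustic * gain

        rng = np.random.default_rng(3)
        hearing_aid_ear = 0.05 * rng.normal(size=16000)
        implant_ear_ref = 0.20 * rng.normal(size=16000)
        balanced = balance(hearing_aid_ear, implant_ear_ref)
        print(f"RMS after balancing: {rms(balanced):.3f} vs {rms(implant_ear_ref):.3f}")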

  3. Auditory Cortical Deactivation during Speech Production and following Speech Perception: An EEG investigation of the temporal dynamics of the auditory alpha rhythm

    Directory of Open Access Journals (Sweden)

    David E Jenson

    2015-10-01

    Full Text Available Sensorimotor integration within the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event-related spectral perturbation (ERSP) analysis of EEG data to describe anterior sensorimotor (e.g., premotor cortex; PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required ‘active’ discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68-channel EEG data from all tasks identified bilateral ‘auditory’ alpha (α) components in 15 of 29 participants, localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event-related synchronization (ERS; pFDR < .05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions also temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.
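
    The two analysis steps named here, an ICA decomposition of multichannel EEG followed by time-frequency power of a component, can be outlined on synthetic data. The study used EEGLAB-style ICA/ERSP with source localisation and baseline normalisation; the sketch below (scikit-learn's FastICA plus a SciPy spectrogram) only shows the shape of the computation.

        # Sketch: ICA decomposition of synthetic multichannel "EEG", then
        # spectral power over time for one component (an ERSP-like view,
        # without the baseline normalisation a real ERSP applies).
        import numpy as np
        from scipy.signal import spectrogram
        from sklearn.decomposition import FastICA

        fs, n_ch, n_s = 250, 68, 250 * 60
        rng = np.random.default_rng(4)
        eeg = rng.normal(size=(n_s, n_ch))  # stand-in for 68-channel EEG
        t = np.arange(n_s) / fs
        eeg[:, :10] += 2 * np.sin(2 * np.pi * 10 * t)[:, None]  # planted 10 Hz "alpha"

        ica = FastICA(n_components=20, random_state=0)
        sources = ica.fit_transform(eeg)          # (samples, components)

        f, tt, Sxx = spectrogram(sources[:, 0], fs=fs, nperseg=fs)
        alpha_power = Sxx[(f >= 8) & (f <= 13)].mean(axis=0)  # alpha band over time
        print(alpha_power[:5])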

  4. The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study

    OpenAIRE

    Arianna LaCroix; Diaz, Alvaro F.; Corianne Rogalsky

    2015-01-01

    The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel’s Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch’s neurocognitive model of music perception suggest a high degree of overlap, particularly in ...

  5. Auditory Perception and Production of Speech Feature Contrasts by Pediatric Implant Users.

    Science.gov (United States)

    Mahshie, James; Core, Cynthia; Larsen, Michael D

    2015-01-01

    The aim of the present research is to examine the relations between auditory perception and production of specific speech contrasts by children with cochlear implants (CIs) who received their implants before 3 years of age and to examine the hierarchy of abilities for perception and production for consonant and vowel features. The following features were examined: vowel height, vowel place, consonant place of articulation (front and back), continuance, and consonant voicing. Fifteen children (mean age = 4;0 and range 3;2 to 5;11) with a minimum of 18 months of experience with their implants and no additional known disabilities served as participants. Perception of feature contrasts was assessed using a modification of the Online Imitative Speech Pattern Contrast test, which uses imitation to assess speech feature perception. Production was examined by having the children name a series of pictures containing consonant and vowel segments that reflected contrasts of each feature. For five of the six feature contrasts, production accuracy was higher than perception accuracy. There was also a significant and positive correlation between accuracy of production and auditory perception for each consonant feature. This correlation was not found for vowels, owing largely to the overall high perception and production scores attained on the vowel features. The children perceived vowel feature contrasts more accurately than consonant feature contrasts. On average, the children had lower perception scores for Back Place and Continuance feature contrasts than for Anterior Place and Voicing contrasts. For all features, the median production scores were 100%; the majority of the children were able to accurately and consistently produce the feature contrasts. The mean production scores for features reflect greater score variability for consonant feature production than for vowel features. Back Place of articulation for back consonants and Continuance contrasts appeared to be the

  6. A speech perception test for children in classrooms

    Science.gov (United States)

    Feijoo, Sergio; Fernandez, Santiago; Alvarez, Jose Manuel

    2002-11-01

    The combined effects of excessive ambient noise and reverberation in classrooms interfere with speech recognition and tend to degrade the learning process of young children. This paper reports a detailed analysis of a speech recognition test carried out with two different populations of children, aged 8-9 and 10-11. Unlike English, Spanish has few minimal pairs that can be used for phoneme recognition in a closed-set manner. The test consisted of a series of two-syllable nonsense words formed by the combination of all possible syllables in Spanish. The test was administered to the children as a dictation task in which they had to write down the words spoken by their female teacher. The test was administered in two blocks on different days, and later repeated to analyze its consistency. The rationale for this procedure was (a) the test should reproduce normal academic situations, (b) all phonological and lexical context effects should be avoided, and (c) errors in both words and phonemes should be scored to unveil any possible acoustic basis for them. Although word recognition scores were similar across age groups and repetitions, phoneme errors showed high variability, questioning the validity of such a test for classroom assessment.
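
    Scoring such a dictation task at both the word and the phoneme level can be done mechanically. The sketch below marks a word correct only on an exact match and counts phoneme errors with a plain Levenshtein distance over phoneme sequences; that scoring scheme and the example word are assumptions for illustration, not necessarily the authors' procedure.

        # Sketch: word- and phoneme-level scoring of a dictation response.
        def levenshtein(a, b):
            """Edit distance between two phoneme sequences."""
            prev = list(range(len(b) + 1))
            for i, x in enumerate(a, 1):
                cur = [i]
                for j, y in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,               # deletion
                                   cur[j - 1] + 1,            # insertion
                                   prev[j - 1] + (x != y)))   # substitution
                prev = cur
            return prev[-1]

        target = ["t", "a", "s", "o"]    # phonemes of the spoken nonsense word
        response = ["t", "a", "f", "o"]  # phonemes the child wrote
        word_correct = target == response
        phoneme_errors = levenshtein(target, response)
        print(word_correct, phoneme_errors)  # False 1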

  7. A speech-perception training tool to improve phonetic transcription

    Science.gov (United States)

    Padgitt, Noelle R.; Munson, Benjamin; Carney, Edward J.

    2005-09-01

    University instruction in phonetics requires students to associate a set of quasialphabetic symbols and diacritics with speech sounds. In the case of narrow phonetic transcription, students are required to associate symbols with sounds that do not function contrastively in the language. This learning task is challenging, given that students must discriminate among different variants of sounds that are not used to convey differences in lexical meaning. Consequently, many students fail to learn phonetic transcription to the level of proficiency needed for practical application (B. Munson and K. N. Brinkman, Am. J. Speech Lang. Path. [2004]). In an effort to improve students' phonetic transcription skills, a computerized training program was developed to train students' discrimination and identification of selected phonetic contrasts. The design of the training tool was based on similar tools that have been used to train phonetic contrasts in second-language learners of English (e.g., A. Bradlow et al., J. Acoust. Soc. Am. 102, 3115 [1997]). It consists of multiple stages (bombardment, discrimination, identification) containing phonetic contrasts that students have identified as particularly difficult to perceive. This presentation will provide a demonstration of the training tool and will present preliminary data on the efficacy of this tool in improving students' phonetic transcription abilities.

  8. Segmental and suprasegmental features in speech perception in Cantonese-speaking second graders: an ERP study.

    Science.gov (United States)

    Tong, Xiuhong; McBride, Catherine; Lee, Chia-Ying; Zhang, Juan; Shuai, Lan; Maurer, Urs; Chung, Kevin K H

    2014-11-01

    Using a multiple-deviant oddball paradigm, this study examined second graders' brain responses to Cantonese speech. We aimed to address the question of whether a change in a consonant or lexical tone could be automatically detected by children. We measured auditory mismatch responses to place of articulation and voice onset time (VOT), reflecting segmental perception, as well as Cantonese lexical tones including level tone and contour tone, reflecting suprasegmental perception. The data showed that robust mismatch negativities (MMNs) were elicited by all deviants in the time window of 300-500 ms in second graders. Moreover, relative to the standard stimuli, the VOT deviant elicited a robust positive mismatch response, and the level tone deviant elicited a significant MMN in the time window of 150-300 ms. The findings suggest that Hong Kong second graders were sensitive to neural discriminations of speech sounds both at the segmental and suprasegmental levels.
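
    The mismatch responses reported here are deviant-minus-standard difference waves averaged within the stated windows (150-300 ms and 300-500 ms). The sketch below computes exactly that on synthetic epochs; the sampling rate, trial counts, and the planted negativity are invented for illustration.

        # Sketch: mismatch response as a deviant-minus-standard difference
        # wave, averaged in the two reported time windows (synthetic epochs).
        import numpy as np

        fs = 500
        t = np.arange(0, 0.6, 1 / fs)           # 0-600 ms epoch, 0 = sound onset
        rng = np.random.default_rng(5)

        def make_epochs(n_trials, mmn_amp):
            # negativity peaking ~400 ms, present only for the deviant
            bump = mmn_amp * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
            return bump + rng.normal(0, 2.0, (n_trials, t.size))

        standard = make_epochs(400, 0.0)
        deviant = make_epochs(100, -3.0)

        diff = deviant.mean(axis=0) - standard.mean(axis=0)
        for lo, hi in [(0.15, 0.30), (0.30, 0.50)]:
            win = (t >= lo) & (t < hi)
            print(f"{int(lo*1000)}-{int(hi*1000)} ms: {diff[win].mean():+.2f} uV")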

  9. Speech-perception training for older adults with hearing loss impacts word recognition and effort.

    Science.gov (United States)

    Kuchinsky, Stefanie E; Ahlstrom, Jayne B; Cute, Stephanie L; Humes, Larry E; Dubno, Judy R; Eckert, Mark A

    2014-10-01

    The current pupillometry study examined the impact of speech-perception training on word recognition and cognitive effort in older adults with hearing loss. Trainees identified more words at the follow-up than at the baseline session. Training also resulted in an overall larger and faster peaking pupillary response, even when controlling for performance and reaction time. Perceptual and cognitive capacities affected the peak amplitude of the pupil response across participants but did not diminish the impact of training on the other pupil metrics. Thus, we demonstrated that pupillometry can be used to characterize training-related and individual differences in effort during a challenging listening task. Importantly, the results indicate that speech-perception training not only affects overall word recognition, but also a physiological metric of cognitive effort, which has the potential to be a biomarker of hearing loss intervention outcome. Copyright © 2014 Society for Psychophysiological Research.
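
    The pupil metrics referred to here, peak amplitude and how quickly the response peaks, come from a baseline-corrected trial trace. A minimal sketch on a synthetic trace follows; real pupillometry pipelines also deblink, interpolate, and smooth the signal, which is omitted.

        # Sketch: baseline-corrected peak amplitude and latency of a
        # pupillary response (synthetic trace).
        import numpy as np

        fs = 60                                   # eye-tracker sampling rate (Hz)
        t = np.arange(-0.5, 3.0, 1 / fs)          # trial time re: word onset (s)
        rng = np.random.default_rng(6)
        pupil = 0.3 * np.exp(-((t - 1.2) ** 2) / (2 * 0.4 ** 2))  # dilation bump
        pupil += rng.normal(0, 0.02, t.size)

        baseline = pupil[t < 0].mean()            # pre-stimulus baseline
        corrected = pupil - baseline
        post = t >= 0
        peak_idx = np.argmax(corrected[post])
        peak_amp = corrected[post][peak_idx]
        peak_latency = t[post][peak_idx]
        print(f"peak {peak_amp:.3f} mm at {peak_latency:.2f} s")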

  10. Visual and Auditory Components in the Perception of Asynchronous Audiovisual Speech.

    Science.gov (United States)

    García-Pérez, Miguel A; Alcalá-Quintana, Rocío

    2015-12-01

    Research on asynchronous audiovisual speech perception manipulates experimental conditions to observe their effects on synchrony judgments. Probabilistic models establish a link between the sensory and decisional processes underlying such judgments and the observed data, via interpretable parameters that allow testing hypotheses and making inferences about how experimental manipulations affect such processes. Two models of this type have recently been proposed, one based on independent channels and the other using a Bayesian approach. Both models are fitted here to a common data set, with a subsequent analysis of the interpretation they provide about how experimental manipulations affected the processes underlying perceived synchrony. The data consist of synchrony judgments as a function of audiovisual offset in a speech stimulus, under four within-subjects manipulations of the quality of the visual component. The Bayesian model could not accommodate asymmetric data, was rejected by goodness-of-fit statistics for 8/16 observers, and was found to be nonidentifiable, which renders uninterpretable parameter estimates. The independent-channels model captured asymmetric data, was rejected for only 1/16 observers, and identified how sensory and decisional processes mediating asynchronous audiovisual speech perception are affected by manipulations that only alter the quality of the visual component of the speech signal.
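
    Models of this type are fitted to synchrony-judgment proportions as a function of audiovisual offset. The sketch below fits a generic asymmetric Gaussian with SciPy's curve_fit, on invented response proportions, purely to show the fitting setup; the paper's independent-channels and Bayesian models have their own closed-form equations and parameters.

        # Sketch: fitting a synchrony-judgment curve over audiovisual offsets.
        import numpy as np
        from scipy.optimize import curve_fit

        def sync_curve(soa, center, width_l, width_r, peak):
            width = np.where(soa < center, width_l, width_r)  # allows asymmetry
            return peak * np.exp(-((soa - center) ** 2) / (2 * width ** 2))

        soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400.0])  # ms
        p_sync = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.90, 0.60, 0.25, 0.10])

        popt, _ = curve_fit(sync_curve, soa, p_sync, p0=[0, 150, 150, 1])
        center, wl, wr, peak = popt
        print(f"PSS={center:.0f} ms, widths=({wl:.0f}, {wr:.0f}) ms")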

  11. Speech perception in the child brain: cortical timing and its relevance to literacy acquisition.

    Science.gov (United States)

    Parviainen, Tiina; Helenius, Päivi; Poskiparta, Elisa; Niemi, Pekka; Salmelin, Riitta

    2011-12-01

    Speech processing skills go through intensive development during mid-childhood, providing a basis also for literacy acquisition. The sequence of auditory cortical processing of speech has been characterized in adults, but very little is known about the neural representation of speech sound perception in the developing brain. We used whole-head magnetoencephalography (MEG) to record neural responses to speech and nonspeech sounds in first-graders (7-8 years old) and compared the activation sequence to that in adults. In children, the general location of neural activity in the superior temporal cortex was similar to that in adults, but in the time domain the sequence of activation was strikingly different. Cortical differentiation between sound types emerged in a prolonged response pattern at about 250 ms after sound onset, in both hemispheres, clearly later than the corresponding effect at about 100 ms in adults, which was detected specifically in the left hemisphere. Better reading skills were linked with shorter-lasting neural activation, speaking for the interdependence of the maturing neural processes of auditory perception and developing linguistic skills. This study uniquely utilized the potential of MEG in comparing both spatial and temporal characteristics of neural activation between adults and children. Besides depicting the group-typical features in cortical auditory processing, the results revealed marked interindividual variability in children.

  12. Influence of anesthesia techniques of caesarean section on memory, perception and speech

    Directory of Open Access Journals (Sweden)

    Volkov O.O.

    2014-06-01

    Full Text Available In obstetrics, postoperative cognitive dysfunction may occur after caesarean section and vaginal delivery, with poor outcomes for both mother and child. The goal was to study the influence of the anesthesia technique used for caesarean section on memory, perception and speech. With local ethics committee approval and informed consent, pregnant women were divided into 2 groups according to anesthesia method: group 1 (n=31) had spinal anesthesia, group 2 (n=34) had total intravenous anesthesia (TIVA). Spinal anesthesia: 1.8-2.2 mL of hyperbaric 0.5% bupivacaine. TIVA: thiopental sodium (4 mg/kg), succinylcholine (1-1.5 mg/kg); fentanyl (10-5-3 µg/kg per hour) and diazepam (10 mg) were used after delivery of the newborn. Luria's test was used for memory assessment, perception was studied with the "recognition of time" test, and speech with the "naming of fingers" test. Control points: (1) before surgery, (2) 24 h after the caesarean section, (3) on day 3 after surgery, (4) at discharge from hospital (day 5-7). The study showed that the initially decreased memory level in expectant mothers regressed with time after caesarean section. Memory was restored within 3 days after surgery regardless of anesthesia technique. With spinal anesthesia, the memory level on postoperative days 5-7 exceeded that with total intravenous anesthesia. Perception and speech did not depend on the time elapsed in the postoperative period, and anesthesia technique did not influence their restoration after caesarean section.

  13. Electrophysiological measures of attention during speech perception predict metalinguistic skills in children

    Directory of Open Access Journals (Sweden)

    Lori Astheimer

    2014-01-01

    Full Text Available Event-related potential (ERP) evidence demonstrates that preschool-aged children selectively attend to informative moments such as word onsets during speech perception. Although this observation indicates a role for attention in language processing, it is unclear whether this type of attention is part of basic speech perception mechanisms, higher-level language skills, or general cognitive abilities. The current study examined these possibilities by measuring ERPs from 5-year-old children listening to a narrative containing attention probes presented before, during, and after word onsets as well as at random control times. Children also completed behavioral tests assessing verbal and nonverbal skills. Probes presented after word onsets elicited a more negative ERP response beginning around 100 ms after probe onset than control probes, indicating increased attention to word-initial segments. Crucially, the magnitude of this difference was correlated with performance on verbal tasks, but showed no relationship to nonverbal measures. More specifically, ERP attention effects were most strongly correlated with performance on a complex metalinguistic task involving grammaticality judgments. These results demonstrate that effective allocation of attention during speech perception supports higher-level, controlled language processing in children by allowing them to focus on relevant information at individual word and complex sentence levels.

  14. Electrophysiological measures of attention during speech perception predict metalinguistic skills in children

    Science.gov (United States)

    Astheimer, Lori; Janus, Monika; Moreno, Sylvain; Bialystok, Ellen

    2014-01-01

    Event-related potential (ERP) evidence demonstrates that preschool-aged children selectively attend to informative moments such as word onsets during speech perception. Although this observation indicates a role for attention in language processing, it is unclear whether this type of attention is part of basic speech perception mechanisms, higher-level language skills, or general cognitive abilities. The current study examined these possibilities by measuring ERPs from 5-year-old children listening to a narrative containing attention probes presented before, during, and after word onsets as well as at random control times. Children also completed behavioral tests assessing verbal and nonverbal skills. Probes presented after word onsets elicited a more negative ERP response beginning around 100 ms after probe onset than control probes, indicating increased attention to word-initial segments. Crucially, the magnitude of this difference was correlated with performance on verbal tasks, but showed no relationship to nonverbal measures. More specifically, ERP attention effects were most strongly correlated with performance on a complex metalinguistic task involving grammaticality judgments. These results demonstrate that effective allocation of attention during speech perception supports higher-level, controlled language processing in children by allowing them to focus on relevant information at individual word and complex sentence levels. PMID:24316548

  15. Speech perception in Mandarin-speaking children with cochlear implants: A systematic review.

    Science.gov (United States)

    Chen, Yuan; Wong, Lena L N

    2017-03-15

    This paper reviewed the literature on the trajectories of post-implantation speech perception development in Mandarin-speaking children with cochlear implants (CIs) and the factors significantly affecting it. A systematic literature search of textbooks and peer-reviewed journal articles in online bibliographic databases was conducted. PubMed, Scopus and the Wiley Online Library were searched for eligible studies based on predefined inclusion and exclusion criteria. A total of 14 journal articles were selected for this review. A number of consistent results were found: children with CIs, as a group, exhibited steep improvement in early speech perception, from displaying few prelingual auditory behaviours before implantation to identifying sentences in noise after one year of CI use. After one to three years of CI use, children are expected to identify tones above chance and to recognize words in noise. In addition, early age at implantation, longer duration of CI use and higher maternal education level contributed to greater improvements in speech perception. Findings from this review will contribute to the establishment of appropriate short-term developmental goals for Mandarin-speaking children with CIs in mainland China, and clinicians could use them to determine whether children have made appropriate progress with their CIs.

  16. Reduced audiovisual integration in synesthesia--evidence from bimodal speech perception.

    Science.gov (United States)

    Sinke, Christopher; Neufeld, Janina; Zedler, Markus; Emrich, Hinderk M; Bleich, Stefan; Münte, Thomas F; Szycik, Gregor R

    2014-03-01

    Recent research suggests synesthesia results from a hypersensitive multimodal binding mechanism. To address the question of whether multimodal integration is altered in synesthetes in general, grapheme-colour and auditory-visual synesthetes were investigated using speech-related stimulation in two behavioural experiments. First, we used the McGurk illusion to test the strength and number of illusory perceptions in synesthesia. In a second step, we analysed the gain in speech perception coming from seen articulatory movements under acoustically noisy conditions. We used disyllabic nouns as stimulation and varied the signal-to-noise ratio of the auditory stream presented concurrently with a matching video of the speaker. We hypothesized that if synesthesia is due to a general hyperbinding mechanism, this group of subjects should be more susceptible to McGurk illusions and should profit more from the visual information during audiovisual speech perception. The results indicate that there are differences between synesthetes and controls concerning multisensory integration, but in the opposite direction to that hypothesized: synesthetes showed a reduced number of illusions and had a reduced gain in comprehension from viewing matching articulatory movements in comparison to control subjects. Our results indicate that rather than having a hypersensitive binding mechanism, synesthetes show weaker integration of vision and audition. © 2012 The British Psychological Society.
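
    The visual gain measured in the second experiment is simply the difference between audiovisual and audio-only identification accuracy at each signal-to-noise ratio. A minimal sketch, with invented accuracies and SNR levels:

        # Sketch: audiovisual gain as AV minus audio-only accuracy per SNR.
        snrs = [-12, -8, -4, 0]                    # dB SNR conditions (invented)
        audio_only = {-12: 0.20, -8: 0.45, -4: 0.70, 0: 0.90}
        audiovisual = {-12: 0.55, -8: 0.75, -4: 0.88, 0: 0.95}

        for snr in snrs:
            gain = audiovisual[snr] - audio_only[snr]
            print(f"SNR {snr:+} dB: visual gain = {gain:.2f}")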

  17. Frame rate of motion picture and its influence on speech perception

    Science.gov (United States)

    Nakazono, Kaoru

    1996-03-01

    The preservation of QoS for multimedia traffic through a data network is a difficult problem. We focus our attention on video frame rate and study its influence on speech perception. When sound and picture are discrepant (e.g., acoustic `ba' combined with visual `ga'), subjects perceive a different sound (such as `da'). This phenomenon is known as the McGurk effect. In this paper, the influence of degraded video frame rate on speech perception was studied. It was shown that when frame rate decreases, correct hearing is improved for discrepant stimuli and is degraded for congruent (voice and picture are the same) stimuli. Furthermore, we studied the case where lip closure was always captured by the synchronization of sampling time and lip position. In this case, frame rate has little effect on mishearing for congruent stimuli. For discrepant stimuli, mishearing is decreased with degraded frame rate. These results indicate that stiff motion of lips resulting from low frame rate cannot give enough labial information for speech perception. In addition, the effect of delaying the picture to correct for low frame rate was studied. The results, however, were not as definitive as expected because of compound effects related to the synchronization of sound and picture.

  18. Perceptions of the Seriousness of Mispronunciations of English Speech Sounds

    Directory of Open Access Journals (Sweden)

    Moedjito

    2008-01-01

    Full Text Available The present study attempts to investigate Indonesian EFL teachers' and native English speakers' perceptions of mispronunciations of English sounds by Indonesian EFL learners. For this purpose, a paper-form questionnaire consisting of 32 target mispronunciations was distributed to Indonesian secondary school teachers of English and also to native English speakers. An analysis of the respondents' perceptions has discovered that 14 out of the 32 target mispronunciations are pedagogically significant in pronunciation instruction. A further analysis of the reasons for these major mispronunciations has reconfirmed the prevalence of interference of learners' native language in their English pronunciation as a major cause of mispronunciations. It has also revealed Indonesian EFL teachers' tendency to overestimate the seriousness of their learners' pronunciations. Based on these findings, the study makes suggestions for better English pronunciation teaching in Indonesia or other EFL countries.

  19. Drivers of Non-Native Aquatic Species Invasions across the ...

    Science.gov (United States)

    Background/Question/Methods Mapping the geographic distribution of non-native aquatic species is a critically important precursor to understanding the anthropogenic and environmental factors that drive freshwater biological invasions. Such efforts are often limited to local scales and/or to a single taxon, missing the opportunity to observe and understand the drivers of macroscale invasion patterns at sub-continental or continental scales. Here we map the distribution of exotic freshwater species richness across the continental United States using publicly accessible species occurrence data (e.g., GBIF) and investigate the role of human activity in driving macroscale patterns of aquatic invasion. Using a dasymetric model of human population density and a spatially explicit model of recreational freshwater fishing demand, we analyzed the effect of these metrics of human influence on non-native aquatic species richness at the watershed scale, while controlling for spatial and sampling bias. We also assessed the effects that a temporal mismatch between occurrence data (collected since 1815) and cross-sectional predictors (developed using 2010 data) may have on model fit. Results/Conclusions Our results indicated that non-native aquatic species richness exhibits a highly patchy distribution, with hotspots in the Northeast, Great Lakes, Florida, and human population centers on the Pacific coast. These richness patterns are correlated with population density, but are m

  20. Influences of listeners’ native and other dialects on cross-language vowel perception

    Directory of Open Access Journals (Sweden)

    Daniel Williams

    2014-10-01

    Full Text Available This paper examines to what extent acoustic similarity between native and non-native vowels predicts non-native vowel perception and whether this process is influenced by listeners’ native and other non-native dialects. Listeners with Northern and Southern British English dialects completed a perceptual assimilation task in which they categorized tokens of 15 Dutch vowels in terms of English vowel categories. While the cross-language acoustic similarity of Dutch vowels to English vowels largely predicted Southern listeners’ perceptual assimilation patterns, this was not the case for Northern listeners, whose assimilation patterns resembled those of Southern listeners for all but three Dutch vowels. The cross-language acoustic similarity of Dutch vowels to Northern English vowels was re-examined by incorporating Southern English tokens, which resulted in considerable improvements in the predicting power of cross-language acoustic similarity. This suggests that Northern listeners’ assimilation of Dutch vowels to English vowels was influenced by knowledge of both native Northern and non-native Southern English vowel categories. The implications of these findings for theories of non-native speech perception are discussed.

  1. Influences of listeners' native and other dialects on cross-language vowel perception.

    Science.gov (United States)

    Williams, Daniel; Escudero, Paola

    2014-01-01

    This paper examines to what extent acoustic similarity between native and non-native vowels predicts non-native vowel perception and whether this process is influenced by listeners' native and other non-native dialects. Listeners with Northern and Southern British English dialects completed a perceptual assimilation task in which they categorized tokens of 15 Dutch vowels in terms of English vowel categories. While the cross-language acoustic similarity of Dutch vowels to English vowels largely predicted Southern listeners' perceptual assimilation patterns, this was not the case for Northern listeners, whose assimilation patterns resembled those of Southern listeners for all but three Dutch vowels. The cross-language acoustic similarity of Dutch vowels to Northern English vowels was re-examined by incorporating Southern English tokens, which resulted in considerable improvements in the predicting power of cross-language acoustic similarity. This suggests that Northern listeners' assimilation of Dutch vowels to English vowels was influenced by knowledge of both native Northern and non-native Southern English vowel categories. The implications of these findings for theories of non-native speech perception are discussed.
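
    Cross-language acoustic similarity of the kind examined here is often quantified as distance between vowel tokens and native category means in a normalised formant space, with each non-native vowel assimilating to its nearest native category. The sketch below does this in a bare F1-F2 space; all formant values and category labels are invented placeholders, and real comparisons typically also include duration and higher formants.

        # Sketch: map non-native vowels onto the acoustically nearest native
        # category in a normalised F1-F2 space (all values are placeholders).
        import numpy as np

        english = {  # (F1, F2) category means in Hz, illustrative only
            "FLEECE": (300, 2300), "KIT": (400, 2000),
            "DRESS": (550, 1850), "TRAP": (750, 1700),
        }
        dutch_tokens = {"i": (294, 2208), "I": (388, 2003), "E": (583, 1725)}

        def nearest(token, inventory):
            names = list(inventory)
            refs = np.array([inventory[n] for n in names], dtype=float)
            # normalise each formant dimension before computing distance
            d = np.linalg.norm((refs - token) / refs.std(axis=0), axis=1)
            return names[int(np.argmin(d))], d.min()

        for vowel, formants in dutch_tokens.items():
            cat, dist = nearest(np.array(formants, dtype=float), english)
            print(f"Dutch /{vowel}/ -> English {cat} (distance {dist:.2f})")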

  3. Predictive top-down integration of prior knowledge during speech perception.

    Science.gov (United States)

    Sohoglu, Ediz; Peelle, Jonathan E; Carlyon, Robert P; Davis, Matthew H

    2012-06-20

    A striking feature of human perception is that our subjective experience depends not only on sensory information from the environment but also on our prior knowledge or expectations. The precise mechanisms by which sensory information and prior knowledge are integrated remain unclear, with longstanding disagreement concerning whether integration is strictly feedforward or whether higher-level knowledge influences sensory processing through feedback connections. Here we used concurrent EEG and MEG recordings to determine how sensory information and prior knowledge are integrated in the brain during speech perception. We manipulated listeners' prior knowledge of speech content by presenting matching, mismatching, or neutral written text before a degraded (noise-vocoded) spoken word. When speech conformed to prior knowledge, subjective perceptual clarity was enhanced. This enhancement in clarity was associated with a spatiotemporal profile of brain activity uniquely consistent with a feedback process: activity in the inferior frontal gyrus was modulated by prior knowledge before activity in lower-level sensory regions of the superior temporal gyrus. In parallel, we parametrically varied the level of speech degradation, and therefore the amount of sensory detail, so that changes in neural responses attributable to sensory information and prior knowledge could be directly compared. Although sensory detail and prior knowledge both enhanced speech clarity, they had an opposite influence on the evoked response in the superior temporal gyrus. We argue that these data are best explained within the framework of predictive coding in which sensory activity is compared with top-down predictions and only unexplained activity propagated through the cortical hierarchy.

  4. [Development of early auditory and speech perception skills within one year after cochlear implantation in prelingual deaf children].

    Science.gov (United States)

    Fu, Ying; Chen, Yuan; Xi, Xin; Hong, Mengdi; Chen, Aiting; Wang, Qian; Wong, Lena

    2015-04-01

    To investigate the development of early auditory capability and speech perception in prelingually deaf children after cochlear implantation, and to study the feasibility of currently available Chinese assessment instruments for the evaluation of early auditory skills and speech perception in hearing-impaired children. A total of 83 children with severe-to-profound prelingual hearing impairment participated in this study. Participants were divided into four groups according to age at surgery: A (1-2 years), B (2-3 years), C (3-4 years) and D (4-5 years). The auditory skills and speech perception ability of the CI children were evaluated by trained audiologists using the Infant-Toddler/Meaningful Auditory Integration Scale (IT-MAIS/MAIS) questionnaire, the Mandarin Early Speech Perception (MESP) test and the Mandarin Pediatric Speech Intelligibility (MPSI) test. The questionnaires were administered in face-to-face interviews with the parents or guardians. Each child was assessed before the operation and 3 months, 6 months, and 12 months after switch-on. After cochlear implantation, early auditory development and speech perception gradually improved. All MAIS/IT-MAIS scores showed a similar increasing trend with rehabilitation duration (F=5.743, P=0.007). Preoperative and postoperative MAIS/IT-MAIS scores of children in age group C (3-4 years) were higher than those of the other groups. Children who had longer hearing aid experience before the operation demonstrated higher MAIS/IT-MAIS scores than those with little or no hearing aid experience (F=4.947, P=0.000). The MESP test showed that children could detect speech signals before they were able to perceive speech well; however, as the duration of CI use increased, speech perception ability also improved substantially. Nevertheless, only about 40% of the subjects could be evaluated using the most difficult subtest of the MPSI in quiet at 12 months after switch-on. As MCR decreased, the proportion of children who could be tested

  5. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training

    Science.gov (United States)

    Bernstein, Lynne E.; Auer, Edward T.; Eberhardt, Silvio P.; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called “reverse hierarchy theory” of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning. PMID:23515520

  6. Perceptions of The Seriousness of Mispronunciations of English Speech Sounds

    Directory of Open Access Journals (Sweden)

    Moedjito Moedjito

    2006-01-01

    The present study attempts to investigate Indonesian EFL teachers' and native English speakers' perceptions of mispronunciations of English sounds by Indonesian EFL learners. For this purpose, a paper-form questionnaire consisting of 32 target mispronunciations was distributed to Indonesian secondary school teachers of English and also to native English speakers. An analysis of the respondents' perceptions has discovered that 14 out of the 32 target mispronunciations are pedagogically significant in pronunciation instruction. A further analysis of the reasons for these major mispronunciations has reconfirmed the prevalence of interference from learners' native language in their English pronunciation as a major cause of mispronunciations. It has also revealed Indonesian EFL teachers' tendency to overestimate the seriousness of their learners' mispronunciations. Based on these findings, the study makes suggestions for better English pronunciation teaching in Indonesia and other EFL countries.

  7. New tests of the distal speech rate effect: examining cross-linguistic generalization.

    Science.gov (United States)

    Dilley, Laura C; Morrill, Tuuli H; Banzina, Elina

    2013-01-01

    Recent findings [Dilley and Pitt, 2010, Psychological Science, 21, 1664-1670] have shown that manipulating context speech rate in English can cause entire syllables to disappear or appear perceptually. The current studies tested two rate-based explanations of this phenomenon while attempting to replicate and extend these findings to another language, Russian. In Experiment 1, native Russian speakers listened to Russian sentences which had been subjected to rate manipulations and performed a lexical report task. Experiment 2 investigated speech rate effects in cross-language speech perception; non-native speakers of Russian of both high and low proficiency were tested on the same Russian sentences as in Experiment 1. They decided between two lexical interpretations of a critical portion of the sentence, where one choice contained more phonological material than the other (e.g., /stərʌ'na/ "side" vs. /strʌ'na/ "country"). In both experiments, with native and non-native speakers of Russian, context speech rate and the relative duration of the critical sentence portion were found to influence the amount of phonological material perceived. The results support the generalized rate normalization hypothesis, according to which the content perceived in a spectrally ambiguous stretch of speech depends on the duration of that content relative to the surrounding speech, while showing that the findings of Dilley and Pitt (2010) extend to a variety of morphosyntactic contexts and a new language, Russian. Findings indicate that relative timing cues across an utterance can be critical to accurate lexical perception by both native and non-native speakers.

  9. Kalispel Non-Native Fish Suppression Project 2007 Annual Report.

    Energy Technology Data Exchange (ETDEWEB)

    Wingert, Michele; Andersen, Todd [Kalispel Natural Resource Department

    2008-11-18

    Non-native salmonids are impacting native salmonid populations throughout the Pend Oreille Subbasin. Competition, hybridization, and predation by non-native fish have been identified as primary factors in the decline of some native bull trout (Salvelinus confluentus) and westslope cutthroat trout (Oncorhynchus clarki lewisi) populations. In 2007, the Kalispel Natural Resource Department (KNRD) initiated the Kalispel Non-Native Fish Suppression Project. The goal of this project is to implement actions to suppress or eradicate non-native fish in areas where native populations are declining or have been extirpated. These projects have previously been identified as critical to recovering native bull trout and westslope cutthroat trout (WCT). Lower Graham Creek was invaded by non-native rainbow trout (Oncorhynchus mykiss) and brook trout (Salvelinus fontinalis) after a small dam failed in 1991. By 2003, no genetically pure WCT remained in the lower 700 m of Graham Creek. Further invasion upstream is currently precluded by a relatively short section of steep, cascade-pool stepped channel that will likely be breached in the near future. In 2008, a fish management structure (barrier) was constructed at the mouth of Graham Creek to preclude further invasion of non-native fish into Graham Creek. Construction of the barrier was preceded by intensive electrofishing in the lower 700 m to remove and relocate all captured fish. Westslope cutthroat trout have recently been extirpated in Cee Cee Ah Creek due to displacement by brook trout. We propose treating Cee Cee Ah Creek with a piscicide to eradicate brook trout. Once eradication is complete, cutthroat trout will be translocated from nearby watersheds. In 2004, the Washington Department of Fish and Wildlife (WDFW) proposed an antimycin treatment within the subbasin; the project encountered significant public opposition and was eventually abandoned. However, over the course of planning this 2004 project, little public

  10. Auditory-visual speech perception in three- and four-year-olds and its relationship to perceptual attunement and receptive vocabulary.

    Science.gov (United States)

    Erdener, Doğu; Burnham, Denis

    2017-06-06

    Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception - lip-reading and visual influence in auditory-visual integration; (ii) the development of auditory speech perception and native language perceptual attunement; and (iii) the relationship between these and a language skill relevant at this age, receptive vocabulary. Visual speech perception skills improved even over this relatively short time period. However, regression analyses revealed that vocabulary was predicted by auditory-only speech perception, and native language attunement, but not by visual speech perception ability. The results suggest that, in contrast to infants and schoolchildren, in three- to four-year-olds the relationship between speech perception and language ability is based on auditory and not visual or auditory-visual speech perception ability. Adding these results to existing findings allows elaboration of a more complete account of the developmental course of auditory-visual speech perception.

  11. The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study

    Science.gov (United States)

    LaCroix, Arianna N.; Diaz, Alvaro F.; Rogalsky, Corianne

    2015-01-01

    The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music. PMID:26321976
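
    For readers unfamiliar with the method, activation likelihood estimation models each reported peak as a spatial probability distribution and combines the resulting maps as a probabilistic union. The toy sketch below illustrates only this core idea; the grid size, smoothing width, and foci are invented, and real implementations (e.g., GingerALE) add sample-size-dependent kernels and permutation-based thresholding:

        import numpy as np

        def ale_map(foci, shape=(20, 20, 20), fwhm=3.0):
            # Each focus becomes a Gaussian "modelled activation" map;
            # maps combine as the probability that at least one study
            # activates a given voxel.
            sigma = fwhm / 2.355
            grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape],
                                        indexing="ij"), axis=-1)
            ale = np.zeros(shape)
            for focus in foci:
                d2 = ((grid - np.asarray(focus)) ** 2).sum(axis=-1)
                modelled = np.exp(-d2 / (2 * sigma ** 2))
                ale = 1 - (1 - ale) * (1 - modelled)   # probabilistic union
            return ale

        # Two hypothetical peaks reported near the same region:
        print(ale_map([(10, 10, 10), (11, 10, 9)]).max())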

  14. NIS occurrence - Non-native species impacts on threatened and endangered salmonids

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The objectives of this project: a) Identify the distribution of non-natives in the Columbia River Basin b) Highlight the impacts of non-natives on salmonids c)...

  15. A positron emission tomography study of the neural basis of informational and energetic masking effects in speech perception

    Science.gov (United States)

    Scott, Sophie K.; Rosen, Stuart; Wickham, Lindsay; Wise, Richard J. S.

    2004-02-01

    Positron emission tomography (PET) was used to investigate the neural basis of the comprehension of speech in unmodulated noise ("energetic" masking, dominated by effects at the auditory periphery), and when presented with another speaker ("informational" masking, dominated by more central effects). Each type of signal was presented at four different signal-to-noise ratios (SNRs) (+3, 0, -3, -6 dB for the speech-in-speech, +6, +3, 0, -3 dB for the speech-in-noise), with listeners instructed to listen for meaning to the target speaker. Consistent with behavioral studies, there was SNR-dependent activation associated with the comprehension of speech in noise, with no SNR-dependent activity for the comprehension of speech-in-speech (at low or negative SNRs). There was, in addition, activation in bilateral superior temporal gyri which was associated with the informational masking condition. The extent to which this activation of classical "speech" areas of the temporal lobes might delineate the neural basis of the informational masking is considered, as is the relationship of these findings to the interfering effects of unattended speech and sound on more explicit working memory tasks. This study is a novel demonstration of candidate neural systems involved in the perception of speech in noisy environments, and of the processing of multiple speakers in the dorso-lateral temporal lobes.

  16. Parental perception of speech and tongue mobility in three-year olds after neonatal frenotomy.

    Science.gov (United States)

    Walls, Andrew; Pierce, Matthew; Wang, Hongkun; Steehler, Ashley; Steehler, Matthew; Harley, Earl H

    2014-01-01

    The goal of this study was to evaluate parental speech outcomes and tongue mobility in children with ankyloglossia who underwent frenotomy by an otolaryngologist during the neonatal period. Cohort study and retrospective telephone survey. University hospital. Neonates previously diagnosed with congenital ankyloglossia were separated into Surgical Intervention (N=71) and No Surgical Intervention (N=15) groups. A Control Group (N=18) of patients not diagnosed with congenital ankyloglossia was identified from the hospital medical record database. A survey provided by a certified speech pathologist utilized a Likert scale to assess speech perception and tongue mobility by parental listeners. The questionnaire also analyzed oral motor activities and the medical professionals who identified the ankyloglossia shortly after birth. Statistical analyses were performed with the Wilcoxon rank-sum test and Fisher's exact test. Parents reported significantly improved speech outcomes in the Surgical Intervention Group when compared to the No Surgical Intervention Group, whereas speech outcomes did not differ significantly between the Surgical Intervention Group and the Control Group (p=0.3781). Parents likewise reported improved speech outcomes and tongue mobility in children who underwent frenotomy compared to individuals who declined the operation. As a result of the data presented within this study, there appears to be a long-term benefit beyond feeding when frenotomy is performed in newborns with ankyloglossia. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Bayesian model of categorical effects in L1 and L2 speech perception

    Science.gov (United States)

    Kronrod, Yakov

    In this dissertation I present a model that captures categorical effects in both first language (L1) and second language (L2) speech perception. In L1 perception, categorical effects range from extremely strong for consonants to nearly continuous perception of vowels. I treat the problem of speech perception as a statistical inference problem, and by quantifying categoricity I obtain a unified model of both strong and weak categorical effects. In this optimal inference mechanism, the listener uses their knowledge of categories and the acoustics of the signal to infer the intended productions of the speaker. The model splits up speech variability into meaningful category variance and perceptual noise variance. The ratio of these two variances, which I call Tau, directly correlates with the degree of categorical effects for a given phoneme or continuum. By fitting the model to behavioral data from different phonemes, I show how a single parametric quantitative variation can lead to the different degrees of categorical effects seen in perception experiments with different phonemes. In L2 perception, L1 categories have been shown to exert an effect on how L2 sounds are identified and how well the listener is able to discriminate them. Various models have been developed to relate the state of L1 categories with both the initial and eventual ability to process the L2. These models largely lacked a formalized metric to measure perceptual distance, a means of making a priori predictions of behavior for a new contrast, and a way of describing non-discrete gradient effects. In the second part of my dissertation, I apply the same computational model that I used to unify L1 categorical effects to examining L2 perception. I show that we can use the model to make the same type of predictions as other SLA models, but also provide a quantitative framework while formalizing all measures of similarity and bias. Further, I show how using this model to consider L2 learners at
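
    The variance-ratio idea at the heart of the model can be made concrete with a small sketch in the spirit of Bayesian accounts of the perceptual magnet effect. Under Gaussian assumptions (an assumption made here for illustration, not a detail taken from the dissertation), the inferred production is the stimulus shrunk toward the category mean, with Tau (category variance over noise variance) governing the shrinkage:

        def perceived(stimulus, category_mean, var_category, var_noise):
            # Posterior mean of the intended production given a noisy
            # stimulus and one Gaussian category. Small tau
            # (= var_category / var_noise) means a strong pull toward
            # the category mean (categorical perception); large tau
            # means nearly continuous perception.
            w = var_category / (var_category + var_noise)
            return w * stimulus + (1 - w) * category_mean

        # Same stimulus, two units from the category mean:
        print(perceived(2.0, 0.0, var_category=0.5, var_noise=2.0))  # 0.4  (consonant-like)
        print(perceived(2.0, 0.0, var_category=4.0, var_noise=0.5))  # ~1.78 (vowel-like)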

  18. The interlanguage speech intelligibility benefit

    Science.gov (United States)

    Bent, Tessa; Bradlow, Ann R.

    2003-09-01

    This study investigated how native language background influences the intelligibility of speech by non-native talkers for non-native listeners from either the same or a different native language background as the talker. Native talkers of Chinese (n=2), Korean (n=2), and English (n=1) were recorded reading simple English sentences. Native listeners of English (n=21), Chinese (n=21), Korean (n=10), and a mixed group from various native language backgrounds (n=12) then performed a sentence recognition task with the recordings from the five talkers. Results showed that for native English listeners, the native English talker was most intelligible. However, for non-native listeners, speech from a relatively high proficiency non-native talker from the same native language background was as intelligible as speech from a native talker, giving rise to the "matched interlanguage speech intelligibility benefit." Furthermore, this interlanguage intelligibility benefit extended to the situation where the non-native talker and listeners came from different language backgrounds, giving rise to the "mismatched interlanguage speech intelligibility benefit." These findings shed light on the nature of the talker-listener interaction during speech communication.

  19. Neural correlates of audiovisual speech processing in a second language.

    Science.gov (United States)

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance.

  20. Visual Contribution to Speech Perception: Measuring the Intelligibility of Animated Talking Heads

    Directory of Open Access Journals (Sweden)

    Slim Ouni

    2006-10-01

    Animated agents are becoming increasingly frequent in research and applications in speech science. An important challenge is to evaluate the effectiveness of the agent in terms of the intelligibility of its visible speech. In three experiments, we extend and test the Sumby and Pollack (1954) metric to allow the comparison of an agent relative to a standard or reference, and also propose a new metric based on the fuzzy logical model of perception (FLMP) to describe the benefit provided by a synthetic animated face relative to the benefit provided by a natural face. A valid metric would allow direct comparisons across different experiments and would give measures of the benefit of a synthetic animated face relative to a natural face (or indeed any two conditions) and how this benefit varies as a function of the type of synthetic face, the test items (e.g., syllables versus sentences), different individuals, and applications.
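
    A common textbook formulation of the Sumby and Pollack (1954) metric normalizes the audiovisual gain by the headroom left above auditory-only performance. The sketch below uses that simple form with invented scores; it is not the paper's extended metric or its data:

        def relative_av_benefit(auditory_correct, audiovisual_correct):
            # Fraction of the possible improvement over auditory-only
            # performance that the added face actually delivers
            # (proportions correct in [0, 1]).
            return (audiovisual_correct - auditory_correct) / (1.0 - auditory_correct)

        # A synthetic face can then be scored against a natural one by
        # comparing normalized benefits measured at the same noise level:
        natural = relative_av_benefit(0.30, 0.80)    # ~0.71
        synthetic = relative_av_benefit(0.30, 0.65)  # 0.50
        print(synthetic / natural)  # synthetic yields ~70% of the natural-face benefit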

  1. Modeling auditory processing and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve

    A better understanding of how the human auditory system represents and analyzes sounds and how hearing impairment affects such processing is of great interest for researchers in the fields of auditory neuroscience, audiology, and speech communication, as well as for applications in hearing-instrument and speech technology. In this thesis, the primary focus was on the development and evaluation of a computational model of human auditory signal processing and perception. The model was initially designed to simulate the normal-hearing auditory system, with particular focus on the nonlinear processing … aimed at experimentally characterizing the effects of cochlear damage on listeners' auditory processing, in terms of sensitivity loss and reduced temporal and spectral resolution. The results showed that listeners with comparable audiograms can have very different estimated cochlear input

  2. On the role of phonetic inventory in the perception of foreign-accented speech

    Science.gov (United States)

    Sereno, Joan; McCall, Joyce; Jongman, Allard; Dijkstra, Ton; van Heuven, Walter

    2002-05-01

    The current study investigates the effect of phonetic inventory on perception of foreign-accented speech. The perception of native English speech was compared to the perception of foreign-accented English (Dutch-accented English), with selection of stimuli determined on the basis of phonetic inventory. Half of the stimuli contained phonemes that are unique to English and do not occur in Dutch (e.g., [θ] and [æ]), and the other half contained only phonemes that are similar in both English and Dutch (e.g., [s], [i]). Both word and nonword stimuli were included to investigate the role of lexical status. A native speaker of English and a native speaker of Dutch recorded all stimuli. Stimuli were then presented to 40 American listeners using a randomized blocked design in a lexical decision experiment. Results reveal an interaction between speaker (native English versus native Dutch) and phonetic inventory (unique versus common phonemes). Specifically, Dutch-accented stimuli with common phonemes were recognized faster and more accurately than Dutch-accented stimuli with unique phonemes. Results will be discussed in terms of the influence of foreign accent on word recognition processes.

  3. Speech perception and reading: two parallel modes of understanding language and implications for acquiring literacy naturally.

    Science.gov (United States)

    Massaro, Dominic W

    2012-01-01

    I review 2 seminal research reports published in this journal during its second decade more than a century ago. Given psychology's subdisciplines, they would not normally be reviewed together because one involves reading and the other speech perception. The small amount of interaction between these domains might have limited research and theoretical progress. In fact, the 2 early research reports revealed common processes involved in these 2 forms of language processing. Their illustration of the role of Wundt's apperceptive process in reading and speech perception anticipated descriptions of contemporary theories of pattern recognition, such as the fuzzy logical model of perception. Based on the commonalities between reading and listening, one can question why they have been viewed so differently. It is commonly believed that learning to read requires formal instruction and schooling, whereas spoken language is acquired from birth onward through natural interactions with people who talk. Most researchers and educators believe that spoken language is acquired naturally from birth onward and even prenatally. Learning to read, on the other hand, is not possible until the child has acquired spoken language, reaches school age, and receives formal instruction. If an appropriate form of written text is made available early in a child's life, however, the current hypothesis is that reading will also be learned inductively and emerge naturally, with no significant negative consequences. If this proposal is true, it should soon be possible to create an interactive system, Technology Assisted Reading Acquisition, to allow children to acquire literacy naturally.

  4. The effect of instantaneous input dynamic range setting on the speech perception of children with the nucleus 24 implant.

    Science.gov (United States)

    Davidson, Lisa S; Skinner, Margaret W; Holstad, Beth A; Fears, Beverly T; Richter, Marie K; Matusofsky, Margaret; Brenner, Christine; Holden, Timothy; Birath, Amy; Kettel, Jerrica L; Scollie, Susan

    2009-06-01

    The purpose of this study was to examine the effects of a wider instantaneous input dynamic range (IIDR) setting on speech perception and comfort in quiet and noise for children wearing the Nucleus 24 implant system and the Freedom speech processor. In addition, children's ability to understand soft and conversational-level speech in relation to aided sound-field thresholds was examined. Thirty children (age, 7 to 17 years) with the Nucleus 24 cochlear implant system and the Freedom speech processor with two different IIDR settings (30 versus 40 dB) were tested on the Consonant-Nucleus-Consonant (CNC) word test at 50 and 60 dB SPL, the Bamford-Kowal-Bench Speech in Noise Test, and a loudness rating task for four-talker speech noise. Aided thresholds for frequency-modulated tones, narrowband noise, and recorded Ling sounds were obtained with the two IIDRs and examined in relation to CNC scores at 50 dB SPL. Speech Intelligibility Indices were calculated using the long-term average speech spectrum of the CNC words at 50 dB SPL measured at each test site and aided thresholds. Group mean CNC scores at 50 dB SPL were significantly higher with the 40-dB IIDR than with the 30-dB IIDR, whereas scores on the Bamford-Kowal-Bench Speech in Noise Test were not significantly different for the two IIDRs. Significantly improved aided thresholds at 250 to 6000 Hz as well as higher Speech Intelligibility Indices afforded improved audibility for speech presented at soft levels (50 dB SPL). These results indicate that an increased IIDR provides improved word recognition for soft levels of speech without compromising comfort at higher levels of speech sounds or sentence recognition in noise.
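
    As background, the Speech Intelligibility Index weights the audibility of speech in each frequency band by that band's importance for intelligibility (ANSI S3.5). The sketch below is deliberately simplified, with invented band levels, thresholds, and importance weights; the real standard adds corrections (e.g., for masking and high presentation levels) omitted here:

        def sii(speech_levels, aided_thresholds, importance, dynamic_range=30.0):
            # Per band: audibility is the fraction of the speech dynamic
            # range above the aided threshold, clipped to [0, 1]; the
            # index is the importance-weighted sum across bands.
            total = 0.0
            for level, threshold, weight in zip(speech_levels, aided_thresholds, importance):
                audibility = (level - threshold) / dynamic_range
                total += weight * min(1.0, max(0.0, audibility))
            return total

        # Hypothetical 4-band example: lowering aided thresholds (as with
        # the wider IIDR) raises audibility for soft (50 dB SPL) speech.
        speech = [45, 50, 48, 40]
        weights = [0.2, 0.3, 0.3, 0.2]
        print(sii(speech, [30, 35, 40, 45], weights))  # ~0.33
        print(sii(speech, [25, 28, 32, 38], weights))  # ~0.53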

  5. Effect of extreme adaptive frequency compression in bimodal listeners on sound localization and speech perception.

    Science.gov (United States)

    Veugen, Lidwien C E; Chalupper, Josef; Mens, Lucas H M; Snik, Ad F M; van Opstal, A John

    2017-09-01

    This study aimed to improve access to high-frequency interaural level differences (ILDs) by applying extreme frequency compression (FC) in the hearing aid (HA) of 13 bimodal listeners, using a cochlear implant (CI) and conventional HA in opposite ears. An experimental signal-adaptive frequency-lowering algorithm was tested, compressing frequencies above 160 Hz into the individual audible range of residual hearing, but only for consonants (adaptive FC), thus protecting vowel formants, with the aim of preserving speech perception. In a cross-over design with at least 5 weeks of acclimatization between sessions, bimodal performance with and without adaptive FC was compared for horizontal sound localization, speech understanding in quiet and in noise, and vowel, consonant and voice-pitch perception. On average, adaptive FC did not significantly affect any of the test results. Yet, two subjects who were fitted with a relatively weak frequency compression ratio showed improved horizontal sound localization. After the study, four subjects preferred adaptive FC, four preferred standard frequency mapping, and four had no preference. Noteworthy, the subjects preferring adaptive FC were those with the best performance on all tasks, both with and without adaptive FC. On a group level, extreme adaptive FC did not change sound localization or speech understanding in bimodal listeners. Possible reasons are too-strong compression ratios, insufficient residual hearing, or that the adaptive switching, although preserving vowel perception, may have been ineffective in producing consistent ILD cues. Individual results suggested that two subjects were able to integrate the frequency-compressed HA input with that of the CI and benefitted from enhanced binaural cues for horizontal sound localization.

  6. Evaluating proposed dorsal and ventral route functions in speech perception and phonological short-term memory: Evidence from aphasia

    Directory of Open Access Journals (Sweden)

    Heather Raye Dial

    2015-04-01

    When the lexical and sublexical stimuli were matched in discriminability, scores were highly correlated and no individual demonstrated substantially better performance on lexical than sublexical perception (Figures 1a-c). However, when the word discriminations were easier (as in prior studies; e.g., Miceli et al., 1980), patients with impaired syllable discrimination were within the control range on word discrimination (Figure 1d). Finally, digit matching showed no significant relation to perception tasks (e.g., Figure 1e). Moreover, there was a wide range of digit matching spans for patients performing well on speech perception tasks (e.g., > 1.5 on syllable discrimination, with digit matching ranging from 3.6 to 6.0). These data fail to support dual route claims, suggesting that lexical processing depends on sublexical perception and that phonological STM depends on a buffer separate from speech perception mechanisms.

  7. Auditory cortical deactivation during speech production and following speech perception: an EEG investigation of the temporal dynamics of the auditory alpha rhythm.

    Science.gov (United States)

    Jenson, David; Harkrider, Ashley W; Thornton, David; Bowers, Andrew L; Saltuklaroglu, Tim

    2015-01-01

    Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event-related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68-channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants, localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event-related synchronization (ERS; pFDR-corrected), consistent with inhibition of auditory regions during speech production. The findings suggest that speech perception involves covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real time with the ICA/ERSP technique.

  8. Speech perception and language acquisition in the first year of life.

    Science.gov (United States)

    Gervain, Judit; Mehler, Jacques

    2010-01-01

    During the first year of life, infants pass important milestones in language development. We review some of the experimental evidence concerning these milestones in the domains of speech perception, phonological development, word learning, morphosyntactic acquisition, and bilingualism, emphasizing their interactions. We discuss them in the context of their biological underpinnings, introducing the most recent advances not only in language development, but also in neighboring areas such as genetics and the comparative research on animal communication systems. We argue for a theory of language acquisition that integrates behavioral, cognitive, neural, and evolutionary considerations and proposes to unify previously opposing theoretical stances, such as statistical learning, rule-based nativist accounts, and perceptual learning theories.

  9. Assessing speech perception in Swedish school-aged children: preliminary data on the Listen-Say test.

    Science.gov (United States)

    Nakeva von Mentzer, Cecilia; Sundström, Martina; Enqvist, Karin; Hällgren, Mathias

    2017-10-10

    To meet the need for a linguistic speech perception test in Swedish, the 'Listen-Say test' was developed. Minimal word pairs were used as speech material to assess seven phonetic contrasts in two auditory backgrounds. In the present study, children's speech discrimination skills in quiet and in a four-talker (4T) speech background were examined. Associations with lexical-access skills and academic achievement were explored. The study included 27 schoolchildren 7-9 years of age. Overall, the children discriminated phonetic contrasts well in both conditions (quiet: Mdn 95% correct; 4T speech: Mdn 91% correct). A significant effect of the 4T speech background was evident in three of the contrasts, connected to place of articulation, voicing and syllable complexity. Reaction times for correctly identified target words were significantly longer in the quiet condition, possibly reflecting a need for further balancing of the test order. Overall speech discrimination accuracy was moderately to highly correlated with lexical-access ability. Children identified by their teacher as having high concentration ability had the highest speech discrimination scores in both conditions, followed by children identified as having high reading ability. The first wave of data collection with the Listen-Say test indicates that the test appears to be sensitive to the predicted perceptual difficulties of phonetic contrasts, particularly in noise. The clinical benefit of using a procedure in which speech discrimination, lexical-access ability and academic achievement are taken into account is discussed, as well as issues for further test refinement.

  10. Deficits in audiovisual speech perception in normal aging emerge at the level of whole-word recognition.

    Science.gov (United States)

    Stevenson, Ryan A; Nelms, Caitlin E; Baum, Sarah H; Zurkovsky, Lilia; Barense, Morgan D; Newhouse, Paul A; Wallace, Mark T

    2015-01-01

    Over the next 2 decades, a dramatic shift in the demographics of society will take place, with a rapid growth in the population of older adults. One of the most common complaints with healthy aging is a decreased ability to successfully perceive speech, particularly in noisy environments. In such noisy environments, the presence of visual speech cues (i.e., lip movements) provide striking benefits for speech perception and comprehension, but previous research suggests that older adults gain less from such audiovisual integration than their younger peers. To determine at what processing level these behavioral differences arise in healthy-aging populations, we administered a speech-in-noise task to younger and older adults. We compared the perceptual benefits of having speech information available in both the auditory and visual modalities and examined both phoneme and whole-word recognition across varying levels of signal-to-noise ratio. For whole-word recognition, older adults relative to younger adults showed greater multisensory gains at intermediate SNRs but reduced benefit at low SNRs. By contrast, at the phoneme level both younger and older adults showed approximately equivalent increases in multisensory gain as signal-to-noise ratio decreased. Collectively, the results provide important insights into both the similarities and differences in how older and younger adults integrate auditory and visual speech cues in noisy environments and help explain some of the conflicting findings in previous studies of multisensory speech perception in healthy aging. These novel findings suggest that audiovisual processing is intact at more elementary levels of speech perception in healthy-aging populations and that deficits begin to emerge only at the more complex word-recognition level of speech signals. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Aquatic macroinvertebrate responses to native and non-native predators

    Directory of Open Access Journals (Sweden)

    Haddaway N. R.

    2014-01-01

    Non-native species can profoundly affect native ecosystems through trophic interactions with native species. Native prey may respond differently to non-native versus native predators since they lack prior experience. Here we investigate antipredator responses of two common freshwater macroinvertebrates, Gammarus pulex and Potamopyrgus jenkinsi, to olfactory cues from three predators: sympatric native fish (Gasterosteus aculeatus), sympatric native crayfish (Austropotamobius pallipes), and novel invasive crayfish (Pacifastacus leniusculus). G. pulex responded differently to fish and crayfish, showing enhanced locomotion in response to fish but a preference for the dark over the light in response to the crayfish. P. jenkinsi showed increased vertical migration in response to all three predator cues relative to controls. These different responses to fish and crayfish are hypothesised to reflect the predators' differing predation types: benthic for crayfish and pelagic for fish. However, we found no difference in response to native versus invasive crayfish, indicating that prey naiveté is unlikely to drive the impacts of invasive crayfish. The Predator Recognition Continuum Hypothesis proposes that the benefits of generalisable predator recognition outweigh the costs when predators are diverse. Generalised responses of prey as observed here will be adaptive in the presence of an invader, and may reduce novel predators' potential impacts.

  13. The effect of combined sensory and semantic components on audio-visual speech perception in older adults

    Directory of Open Access Journals (Sweden)

    Corrina eMaguinness

    2011-12-01

    Previous studies have found that perception in older people benefits from multisensory over uni-sensory information. As normal speech recognition is affected by both the auditory input and the visual lip movements of the speaker, we investigated the efficiency of audio-visual integration in an older population by manipulating the relative reliability of the auditory and visual information in speech. We also investigated the role of the semantic context of the sentence to assess whether audio-visual integration is affected by top-down semantic processing. We presented participants with audio-visual sentences in which the visual component was either blurred or not blurred. We found that there was a greater cost in recall performance for semantically meaningless speech in the audio-visual blur condition compared to the audio-visual no-blur condition, and this effect was specific to the older group. Our findings have implications for understanding how aging affects efficient multisensory integration for the perception of speech and suggest that multisensory inputs may benefit speech perception in older adults when the semantic content of the speech is unpredictable.

  14. Defining the Impact of Non-Native Species

    Science.gov (United States)

    Jeschke, Jonathan M; Bacher, Sven; Blackburn, Tim M; Dick, Jaimie T A; Essl, Franz; Evans, Thomas; Gaertner, Mirijam; Hulme, Philip E; Kühn, Ingolf; Mrugała, Agata; Pergl, Jan; Pyšek, Petr; Rabitsch, Wolfgang; Ricciardi, Anthony; Richardson, David M; Sendek, Agnieszka; VilÀ, Montserrat; Winter, Marten; Kumschick, Sabrina

    2014-01-01

    Non-native species cause changes in the ecosystems to which they are introduced. These changes, or some of them, are usually termed impacts; they can be manifold and potentially damaging to ecosystems and biodiversity. However, the impacts of most non-native species are poorly understood, and a synthesis of available information is being hindered because authors often do not clearly define impact. We argue that explicitly defining the impact of non-native species will promote progress toward a better understanding of the implications of changes to biodiversity and ecosystems caused by non-native species; help disentangle which aspects of scientific debates about non-native species are due to disparate definitions and which represent true scientific discord; and improve communication between scientists from different research disciplines and between scientists, managers, and policy makers. For these reasons and based on examples from the literature, we devised seven key questions that fall into four categories: directionality, classification and measurement, ecological or socio-economic changes, and scale. These questions should help in formulating clear and practical definitions of impact to suit specific scientific, stakeholder, or legislative contexts.

  15. The early maximum likelihood estimation model of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2015-01-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk−MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration … Cross-validation can evaluate models of audiovisual integration based on typical data sets, taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures
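
    For context, the standard MLE (optimal cue combination) account weights each modality by its reliability, i.e., its inverse variance; the fused estimate is then more reliable than either cue alone. The sketch below shows that textbook computation under Gaussian assumptions and does not reproduce the specifics of the "early" variant introduced in the study:

        def mle_integration(x_a, var_a, x_v, var_v):
            # Inverse-variance weighted average of the auditory and
            # visual estimates, plus the variance of the fused estimate.
            w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
            fused = w_a * x_a + (1.0 - w_a) * x_v
            fused_var = 1.0 / (1.0 / var_a + 1.0 / var_v)
            return fused, fused_var

        # A McGurk-like conflict: audition suggests /ba/ (coded 0.0),
        # vision suggests /ga/ (coded 1.0); the more reliable visual
        # cue pulls the fused percept toward /ga/ (or a /da/-like blend).
        print(mle_integration(0.0, var_a=1.0, x_v=1.0, var_v=0.25))  # (0.8, 0.2)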

  16. Portuguese Lexical Clusters and CVC Sequences in Speech Perception and Production.

    Science.gov (United States)

    Cunha, Conceição

    2015-01-01

    This paper investigates similarities between lexical consonant clusters and CVC sequences differing in the presence or absence of a lexical vowel in speech perception and production in two Portuguese varieties. The frequent high vowel deletion in the European variety (EP) and the realization of intervening vocalic elements between lexical clusters in Brazilian Portuguese (BP) may minimize the contrast between lexical clusters and CVC sequences in the two Portuguese varieties. In order to test this hypothesis we present a perception experiment with 72 participants and a physiological analysis of 3-dimensional movement data from 5 EP and 4 BP speakers. The perceptual results confirmed a gradual confusion of lexical clusters and CVC sequences in EP, which corresponded roughly to the gradient consonantal overlap found in production. © 2015 S. Karger AG, Basel.

  17. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    Directory of Open Access Journals (Sweden)

    Magnus eAlm

    2015-07-01

    Gender and age have been found to affect adults' audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20-30 years) and middle-aged adults (50-60 years), with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. Contrastingly, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females' general AV perceptual strategy. Although young females' speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood recurrent confirmation of the contribution of visual cues, induced by speech-reading proficiency, may gradually shift females' AV perceptual strategy towards more visually dominated responses.

  18. Audio-Visual Perception of Gender by Infants Emerges Earlier for Adult-Directed Speech

    Science.gov (United States)

    Richoz, Anne-Raphaëlle; Quinn, Paul C.; Hillairet de Boisferon, Anne; Berger, Carole; Loevenbruck, Hélène; Lewkowicz, David J.; Lee, Kang; Dole, Marjorie; Caldara, Roberto; Pascalis, Olivier

    2017-01-01

    Early multisensory perceptual experiences shape the abilities of infants to perform socially-relevant visual categorization, such as the extraction of gender, age, and emotion from faces. Here, we investigated whether multisensory perception of gender is influenced by infant-directed (IDS) or adult-directed (ADS) speech. Six-, 9-, and 12-month-old infants saw side-by-side silent video-clips of talking faces (a male and a female) and heard either a soundtrack of a female or a male voice telling a story in IDS or ADS. Infants participated in only one condition, either IDS or ADS. Consistent with earlier work, infants displayed advantages in matching female relative to male faces and voices. Moreover, the new finding that emerged in the current study was that extraction of gender from face and voice was stronger at 6 months with ADS than with IDS, whereas at 9 and 12 months, matching did not differ for IDS versus ADS. The results indicate that the ability to perceive gender in audiovisual speech is influenced by speech manner. Our data suggest that infants may extract multisensory gender information developmentally earlier when looking at adults engaged in conversation with other adults (i.e., ADS) than when adults are directly talking to them (i.e., IDS). Overall, our findings imply that the circumstances of social interaction may shape early multisensory abilities to perceive gender. PMID:28060872

  19. Preschool teachers' perceptions and reactions to challenging classroom behavior: implications for speech-language pathologists.

    Science.gov (United States)

    Nungesser, Nicole R; Watkins, Ruth V

    2005-04-01

    Awareness of issues of social competence and challenging behavior related to childhood language and communication disorders has been increasing. The purpose of this clinical exchange is to provide speech-language pathologists with basic information on communication disorders and challenging behaviors, as well as with insights into ways to support both students and classroom teachers. To provide effective services to children with language impairments and optimally support classroom staff, speech-language pathologists need to recognize (a) the interdependence of language, communication, social competence, and challenging behaviors; (b) the impact that challenging behaviors can have on evaluations of academic competency; and (c) how teachers in early childhood classrooms perceive and react to challenging behaviors. This clinical exchange provides an overview of the relationship between language, communication, and social competence, and presents preliminary survey research data investigating teachers' perceptions of and reactions to challenging behaviors. Clinical implications are discussed, including considerations for intervention with children who may exhibit challenging behaviors in combination with language disabilities, and the speech-language pathologist's instrumental role in educating and supporting classroom staff to use communication strategies when managing challenging classroom behaviors.

  20. Audiovisual speech perception at various presentation levels in Mandarin-speaking adults with cochlear implants.

    Directory of Open Access Journals (Sweden)

    Shu-Yu Liu

    Full Text Available (1) To evaluate the recognition of words, phonemes and lexical tones in audiovisual (AV) and auditory-only (AO) modes in Mandarin-speaking adults with cochlear implants (CIs); (2) to understand the effect of presentation levels on AV speech perception; (3) to learn the effect of hearing experience on AV speech perception. Thirteen deaf adults (age = 29.1±13.5 years; 8 male, 5 female) who had used CIs for >6 months and 10 normal-hearing (NH) adults participated in this study. Seven of them were prelingually deaf, and 6 postlingually deaf. The Mandarin Monosyllabic Word Recognition Test was used to assess recognition of words, phonemes and lexical tones in AV and AO conditions at 3 presentation levels: speech detection threshold (SDT), speech recognition threshold (SRT) and 10 dB SL (re: SRT). The prelingual group had better phoneme recognition in the AV mode than in the AO mode at SDT and SRT (both p = 0.016), and so did the NH group at SDT (p = 0.004). Mode difference was not noted in the postlingual group. None of the groups had significantly different tone recognition in the 2 modes. The prelingual and postlingual groups had significantly better phoneme and tone recognition than the NH one at SDT in the AO mode (p = 0.016 and p = 0.002 for phonemes; p = 0.001 and p<0.001 for tones) but were outperformed by the NH group at 10 dB SL (re: SRT) in both modes (both p<0.001 for phonemes; p<0.001 and p = 0.002 for tones). The recognition scores had a significant correlation with group, with age and sex controlled (p<0.001). Visual input may help prelingually deaf implantees to recognize phonemes but may not augment Mandarin tone recognition. The effect of presentation level seems minimal on CI users' AV perception. This indicates special considerations in developing audiological assessment protocols and rehabilitation strategies for implantees who speak tonal languages.

  1. Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers.

    Science.gov (United States)

    Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu

    2016-10-01

    The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners, specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese, and it contributed significantly to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody.
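
    The hierarchical-regression logic reported above (how much unique variance a predictor adds once the others are in the model) can be sketched in a few lines; the simulated data and effect sizes below are invented for illustration and do not reproduce the study's numbers:

    ```python
    # Hedged sketch: unique variance (delta R^2) of one predictor, estimated
    # by comparing nested ordinary least squares models on simulated data.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 100
    tone = rng.standard_normal(n)                      # lexical tone sensitivity
    intonation = 0.3 * tone + rng.standard_normal(n)   # intonation production
    vocab = 0.6 * tone + 0.25 * intonation + rng.standard_normal(n)

    def r_squared(predictors, y):
        X = np.column_stack([np.ones(len(y))] + list(predictors))
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1.0 - resid.var() / y.var()

    base = r_squared([intonation], vocab)          # reduced model
    full = r_squared([intonation, tone], vocab)    # full model
    print("unique variance for tone: %.2f" % (full - base))
    ```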

  2. Ventral and dorsal pathways of speech perception: an intracerebral ERP study.

    Science.gov (United States)

    Trébuchon, Agnès; Démonet, Jean-François; Chauvel, Patrick; Liégeois-Chauvel, Catherine

    2013-11-01

    Recent theory on the physiology of language suggests a dual-stream dorsal/ventral organization of speech perception. Using intracerebral event-related potentials (ERPs) during pre-surgical assessment of twelve drug-resistant epileptic patients, we aimed to single out electrophysiological patterns during lexical-semantic and phonological monitoring tasks involving ventral and dorsal regions, respectively. Phonological information processing predominantly occurred in the left supramarginal gyrus (dorsal stream), and lexico-semantic processing occurred in the anterior/middle temporal and fusiform gyri (ventral stream). Similar latencies were identified in response to phonological and lexico-semantic tasks, suggesting parallel processing. Typical ERP components were strongly left-lateralized, since no evoked responses were recorded in homologous right structures. Finally, ERP patterns suggested the inferior frontal gyrus as the likely final common pathway of both dorsal and ventral streams. These results provide detailed evidence of the spatial-temporal information processing in the dual pathways involved in speech perception.

  3. Knowledge and attitudes of teachers regarding the impact of classroom acoustics on speech perception and learning.

    Science.gov (United States)

    Ramma, Lebogang

    2009-01-01

    This study investigated the knowledge and attitudes of primary school teachers regarding the impact of poor classroom acoustics on learners' speech perception and learning in class. Classrooms with excessive background noise and reflective surfaces can be a barrier to learning, and it is important that teachers are aware of this. There is currently limited research data on teachers' knowledge of classroom acoustics. Seventy teachers from three Johannesburg primary schools participated in this study. A survey by way of a structured self-administered questionnaire was the primary data collection method. The findings showed that most of the participants did not have adequate knowledge of classroom acoustics. Most were also unaware of the impact that classrooms with poor acoustic environments can have on speech perception and learning. These results are discussed in relation to the practical implications of empowering teachers to manage the acoustic environment of their classrooms, limitations of the study, as well as implications for future research.

  4. Neuromodulatory Effects of Auditory Training and Hearing Aid Use on Audiovisual Speech Perception in Elderly Individuals

    Science.gov (United States)

    Yu, Luodi; Rao, Aparna; Zhang, Yang; Burton, Philip C.; Rishiq, Dania; Abrams, Harvey

    2017-01-01

    Although audiovisual (AV) training has been shown to improve overall speech perception in hearing-impaired listeners, there has been a lack of direct brain imaging data to help elucidate the neural networks and neural plasticity associated with hearing aid (HA) use and auditory training targeting speechreading. For this purpose, the current clinical case study reports functional magnetic resonance imaging (fMRI) data from two hearing-impaired patients who were first-time HA users. During the study period, both patients used HAs for 8 weeks; only one received a training program named ReadMyQuips™ (RMQ), targeting speechreading, during the second half of the study period for 4 weeks. Identical fMRI tests were administered at pre-fitting and at the end of the 8 weeks. Regions of interest (ROI), including auditory cortex and visual cortex for uni-sensory processing and superior temporal sulcus (STS) for AV integration, were identified for each person through an independent functional localizer task. The results showed experience-dependent changes involving ROIs of auditory cortex, STS and functional connectivity between uni-sensory ROIs and STS from pretest to posttest in both cases. These data provide initial evidence for malleable, experience-driven cortical functionality for AV speech perception in elderly hearing-impaired people and call for further studies with a much larger subject sample and systematic controls to fill in the knowledge gap in understanding brain plasticity associated with auditory rehabilitation in the aging population. PMID:28270763

  5. Malaysian University Students’ Attitudes towards Six Varieties of Accented Speech in English

    Directory of Open Access Journals (Sweden)

    Zainab Thamer Ahmed

    2014-10-01

    Full Text Available Previous language attitude studies indicated that in many countries all over the world, English language learners perceived native accents, either American or British, more positively than non-native accents such as the Japanese, Korean, and Austrian accents. However, in Malaysia it is still unclear which accent Malaysian learners of English tend to perceive more positively (Pillai 2009). The verbal-guise technique and an accent recognition item were adopted as indirect and direct instruments for gathering data to clarify this question. The sample included 120 Malaysian university students, who were immersed in several speech accent situations to elicit feedback on their perceptions. Essentially two research questions are addressed: (1) What are Malaysian university students' attitudes toward native and non-native English accents? (2) How familiar are students with these accents? The results indicated that the students had a bias towards the in-group accent, meaning that they evaluated non-native lecturers' accents more positively. These results supported 'social identity theory', consistent with many previous language attitude studies of this nature. The Malaysian students were able to distinguish between native and non-native accents, although there was much confusion between British and American accents.

  6. The socially-weighted encoding of spoken words: A dual-route approach to speech perception

    Directory of Open Access Journals (Sweden)

    Meghan eSumner

    2014-01-01

    Full Text Available Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: Words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially-idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially-weighted, resulting in sparse, but high-resolution clusters of socially-idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  7. Sources of Variability in Consonant Perception and Implications for Speech Perception Modeling

    DEFF Research Database (Denmark)

    Zaar, Johannes; Dau, Torsten

    2016-01-01

    …to the considered sources of variability using a measure of the perceptual distance between responses. The largest effect was found across different CVs. For stimuli of the same phonetic identity, the speech-induced variability across and within talkers and the across-listener variability were substantial…

  8. Auditory/Verbal hallucinations, speech perception neurocircuitry, and the social deafferentation hypothesis.

    Science.gov (United States)

    Hoffman, Ralph E

    2008-04-01

    Auditory/verbal hallucinations (AVHs) consist of spoken conversational speech seeming to arise from specific, nonself speakers. One-hertz repetitive transcranial magnetic stimulation (rTMS) reduces excitability in the brain region stimulated. Studies utilizing 1-Hz rTMS delivered to the left temporoparietal cortex, a brain area critical to speech perception, have demonstrated statistically significant improvements in AVHs relative to sham stimulation. A novel mechanism of AVHs is proposed whereby dramatic pre-psychotic social withdrawal prompts neuroplastic reorganization by the "social brain" to produce spurious social meaning via hallucinations of conversational speech. Preliminary evidence supporting this hypothesis includes a very high rate of social withdrawal emerging prior to the onset of frank psychosis in patients who develop schizophrenia and AVHs. Moreover, reduced AVHs elicited by temporoparietal 1-Hz rTMS are likely to reflect enhanced long-term depression. Some evidence suggests a loss of long-term depression following experimentally-induced deafferentation. Finally, abnormal cortico-cortical coupling is associated with AVHs and is also a common outcome of deafferentation. AVHs of spoken speech or "voices" are reported by 60-80% of persons with schizophrenia at various times during the course of illness. AVHs are associated with high levels of distress and functional disability, and can lead to violent acts. Among patients with AVHs, these symptoms remain poorly or incompletely responsive to currently available treatments in approximately 25% of cases. For patients with AVHs who do respond to antipsychotic drugs, there is a very high likelihood that these experiences will recur in subsequent episodes. A more precise characterization of the underlying pathophysiology may lead to more efficacious treatments.

  9. EMPOWERING NON-NATIVE ENGLISH SPEAKING TEACHERS THROUGH CRITICAL PEDAGOGY

    Directory of Open Access Journals (Sweden)

    Nur Hayati

    2010-02-01

    Full Text Available Critical pedagogy is a teaching approach that aims to develop students' critical thinking, political and social awareness, and self-esteem through dialogue learning and reflection. Related to the teaching of EFL, this pedagogy holds the potential to empower non-native English speaking teachers (NNESTs) when incorporated into English teacher education programs. It can help aspiring NNESTs to develop awareness of the political and sociocultural implications of EFL teaching, to foster their critical thinking on any concepts or ideas regarding their profession, and, more importantly, to recognize their strengths as NNESTs. Despite this potential, the role of critical pedagogy in improving EFL teacher education programs in Indonesia has not been sufficiently discussed. This article attempts to contribute to the discussion by looking at a number of ways critical pedagogy can be incorporated into such programs, the rationale for doing so, and the challenges that might arise along the way.

  10. Discriminating non-native vowels on the basis of multimodal, auditory or visual information : Effects on infants' looking patterns and discrimination

    NARCIS (Netherlands)

    Schure, Sophie Ter; Junge, Caroline

    2016-01-01

    Infants' perception of speech sound contrasts is modulated by their language environment, for example by the statistical distributions of the speech sounds they hear. Infants learn to discriminate speech sounds better when their input contains a two-peaked frequency distribution of those speech sounds…

  12. Unenthusiastic Europeans or Affected English: the Impact of Intonation on the Overall Make-up of Speech

    Directory of Open Access Journals (Sweden)

    Smiljana Komar

    2005-06-01

    Full Text Available Attitudes and emotions are expressed by linguistic as well as extra-linguistic features. The linguistic features comprise the lexis, the word-order and the intonation of the utterance. The purpose of this article is to examine the impact of intonation on our perception of speech. I will attempt to show that our expression, as well as our perception and understanding of attitudes and emotions are realized in accordance with the intonation patterns typical of the mother tongue. When listening to non-native speakers using our mother tongue we expect and tolerate errors in pronunciation, grammar and lexis but are quite ignorant and intolerant of non-native intonation patterns. Foreigners often sound unenthusiastic to native English ears. On the basis of the results obtained from an analysis of speech produced by 21 non-native speakers of English, including Slovenes, I will show that the reasons for such an impression of being unenthusiastic stem from different tonality and tonicity rules, as well as from the lack of the fall-rise tone and a very narrow pitch range with no or very few pitch jumps or slumps.

  13. Auditory Sensitivity, Speech Perception, L1 Chinese, and L2 English Reading Abilities in Hong Kong Chinese Children

    Science.gov (United States)

    Zhang, Juan; McBride-Chang, Catherine

    2014-01-01

    A 4-stage developmental model, in which auditory sensitivity is fully mediated by speech perception at both the segmental and suprasegmental levels, which are further related to word reading through their associations with phonological awareness, rapid automatized naming, verbal short-term memory and morphological awareness, was tested with…

  14. Auditory, Visual, and Auditory-Visual Speech Perception by Individuals with Cochlear Implants versus Individuals with Hearing Aids

    Science.gov (United States)

    Most, Tova; Rothem, Hilla; Luntz, Michal

    2009-01-01

    The researchers evaluated the contribution of cochlear implants (CIs) to speech perception by a sample of prelingually deaf individuals implanted after age 8 years. This group was compared with a group with profound hearing impairment (HA-P), and with a group with severe hearing impairment (HA-S), both of which used hearing aids. Words and…

  15. Open your eyes and listen carefully. Auditory and audiovisual speech perception and the McGurk effect in aphasia

    NARCIS (Netherlands)

    Klitsch, Julia Ulrike

    2008-01-01

    This dissertation investigates speech perception in three different groups of native adult speakers of Dutch: an aphasic group and two age-varying control groups. By means of two different experiments, it is examined whether the availability of visual articulatory information is beneficial to the auditory speech…

  16. Basic to Applied Research: The Benefits of Audio-Visual Speech Perception Research in Teaching Foreign Languages

    Science.gov (United States)

    Erdener, Dogu

    2016-01-01

    Traditionally, second language (L2) instruction has emphasised auditory-based instruction methods. However, this approach is restrictive in the sense that speech perception by humans is not just an auditory phenomenon but a multimodal one, and specifically, a visual one as well. In the past decade, experimental studies have shown that the…

  17. Thinking outside the (Voice) Box: A Case Study of Students' Perceptions of the Relevance of Anatomy to Speech Pathology

    Science.gov (United States)

    Weir, Kristy A.

    2008-01-01

    Speech pathology students readily identify the importance of a sound understanding of anatomical structures central to their intended profession. In contrast, they often do not recognize the relevance of a broader understanding of structure and function. This study aimed to explore students' perceptions of the relevance of anatomy to speech…

  18. The role of high-CF fibers in speech perception: Comments on Horwitz et al. (2002) (L)

    Science.gov (United States)

    Strickland, Elizabeth A.; Viemeister, Neal F.; van Tasell, Dianne J.; Preminger, Jill E.

    2004-07-01

    In a recent paper, Horwitz et al. [J. Acoust. Soc. Am. 111, 409-416 (2002)] concluded that listeners with high-frequency hearing impairment show a decrement in the perception of low-frequency speech sounds that is due to loss of information normally carried by auditory-nerve fibers with high characteristic frequencies (CFs). However, in their own study and in other studies, highpass-filtered noise did not degrade the perception of lowpass-filtered speech in listeners with normal hearing. An alternate conclusion proposed by Strickland et al. [J. Acoust. Soc. Am. 95, 497-501 (1994)] is that information conveyed by high-CF fibers is not necessary for speech perception. To reconcile these opposite conclusions, we suggest that the hearing-impaired listeners tested by Horwitz et al. may not have had normal hearing even in the low frequencies, and that the conclusion from Strickland et al. remains correct: high-CF fibers are not necessary for normal speech perception.

  1. Physiological activation of the human cerebral cortex during auditory perception and speech revealed by regional increases in cerebral blood flow

    DEFF Research Database (Denmark)

    Lassen, N A; Friberg, L

    1988-01-01

    …by measuring regional cerebral blood flow (CBF) after intracarotid Xenon-133 injection are reviewed, with emphasis on tests involving auditory perception and speech, an approach allowing Wernicke's and Broca's areas and their contralateral homologues to be visualized in vivo. The completely atraumatic tomographic CBF…

  2. The Neurobiology of Speech Perception and Production-Can Functional Imaging Tell Us Anything We Did Not Already Know?

    Science.gov (United States)

    Scott, Sophie K.

    2012-01-01

    Our understanding of the neurobiological basis for human speech production and perception has benefited from insights from psychology, neuropsychology and neurology. In this overview, I outline some of the ways that functional imaging has added to this knowledge and argue that, as a neuroanatomical tool, functional imaging has led to some…

  3. Plasticity in speech production and perception: A study of accent change in young adults

    Science.gov (United States)

    Evans, Bronwen G.; Iverson, Paul

    2005-04-01

    This study investigated plasticity in speech production and perception among university students, as individuals change their accent from regional to educated norms. Subjects were tested before beginning university, 3 months later and on completion of their first year of study. At each stage they were recorded reading a set of test words and a short passage. They also completed two perceptual tasks; they found best exemplar locations for vowels embedded in carrier sentences and identified words in noise. The results demonstrated that subjects changed their spoken accent after attending university. The changes were linked to sociolinguistic factors; subjects who were highly motivated to fit in with their university community changed their accent more. There was some evidence for a link between production and perception; between-subject differences in production and perception were correlated. However, this relationship was weaker for within-subject changes in accent over time. The results suggest that there were limitations in the ability of these subjects to acquire new phonological rules.

  4. Single-trial analysis of the neural correlates of speech quality perception

    Science.gov (United States)

    Porbadnigk, Anne K.; Treder, Matthias S.; Blankertz, Benjamin; Antons, Jan-Niklas; Schleicher, Robert; Möller, Sebastian; Curio, Gabriel; Müller, Klaus-Robert

    2013-10-01

    Objective. Assessing speech quality perception is a challenge typically addressed in behavioral and opinion-seeking experiments. Only recently, neuroimaging methods were introduced, which were used to study the neural processing of quality at group level. However, our electroencephalography (EEG) studies show that the neural correlates of quality perception are highly individual. Therefore, it became necessary to establish dedicated machine learning methods for decoding subject-specific effects. Approach. The effectiveness of our methods is shown by the data of an EEG study that investigates how the quality of spoken vowels is processed neurally. Participants were asked to indicate whether they had perceived a degradation of quality (signal-correlated noise) in vowels, presented in an oddball paradigm. Main results. We find that the P3 amplitude is attenuated with increasing noise. Single-trial analysis allows one to show that this is partly due to an increasing jitter of the P3 component. A novel classification approach helps to detect trials with presumably non-conscious processing at the threshold of perception. We show that this approach uncovers a non-trivial confounder between neural hits and neural misses. Significance. The combined use of EEG signals and machine learning methods results in a significant ‘neural’ gain in sensitivity (in processing quality loss) when compared to standard behavioral evaluation; averaged over 11 subjects, this amounts to a relative improvement in sensitivity of 35%.
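
    One ingredient of the single-trial analysis described above, estimating the latency jitter of the P3 across trials, can be sketched with template cross-correlation; this is a generic illustration on simulated data, not the machine-learning pipeline of the study:

    ```python
    # Hedged sketch: per-trial P3 latency via cross-correlation with an
    # idealized template; increasing jitter smears the averaged P3.
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 250                                   # sampling rate (Hz), assumed
    t = np.arange(0, 1.0, 1 / fs)              # 1-s epochs
    template = np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))  # idealized P3 at 400 ms

    jitters = rng.normal(0.0, 0.05, size=50)   # simulated +/-50 ms latency jitter
    trials = np.array([
        np.exp(-((t - 0.4 - j) ** 2) / (2 * 0.05 ** 2))
        + 0.5 * rng.standard_normal(t.size)
        for j in jitters
    ])

    # Lag that maximizes cross-correlation with the template, per trial
    lags = np.arange(-t.size + 1, t.size) / fs
    est = np.array([lags[np.argmax(np.correlate(tr, template, mode="full"))]
                    for tr in trials])
    print("estimated jitter SD: %.3f s (true %.3f s)" % (est.std(), jitters.std()))
    ```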

  5. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Science.gov (United States)

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training, and then performed an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  6. The effect of presentation level and stimulation rate on speech perception and modulation detection for cochlear implant users.

    Science.gov (United States)

    Brochier, Tim; McDermott, Hugh J; McKay, Colette M

    2017-06-01

    In order to improve speech understanding for cochlear implant users, it is important to maximize the transmission of temporal information. The combined effects of stimulation rate and presentation level on temporal information transfer and speech understanding remain unclear. The present study systematically varied presentation level (60, 50, and 40 dBA) and stimulation rate [500 and 2400 pulses per second per electrode (pps)] in order to observe how the effect of rate on speech understanding changes for different presentation levels. Speech recognition in quiet and noise, and acoustic amplitude modulation detection thresholds (AMDTs) were measured with acoustic stimuli presented to speech processors via direct audio input (DAI). With the 500 pps processor, results showed significantly better performance for consonant-vowel nucleus-consonant words in quiet, and a reduced effect of noise on sentence recognition. However, no rate or level effect was found for AMDTs, perhaps partly because of amplitude compression in the sound processor. AMDTs were found to be strongly correlated with the effect of noise on sentence perception at low levels. These results indicate that AMDTs, at least when measured with the CP910 Freedom speech processor via DAI, explain between-subject variance of speech understanding, but do not explain within-subject variance for different rates and levels.
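
    The AMDT stimuli referred to above are typically noises whose envelope is sinusoidally modulated at some rate and depth, with the depth adaptively reduced until the modulation is just detectable; a minimal stimulus generator, with illustrative parameter values, might look like this:

    ```python
    # Hedged sketch: sinusoidally amplitude-modulated noise for an AMDT task.
    import numpy as np

    def am_noise(fs=16000, dur=1.0, mod_rate_hz=8.0, depth=0.5, seed=1):
        """Gaussian noise with envelope 1 + depth*sin(2*pi*f_m*t).

        depth is the modulation index m in [0, 1]; a threshold procedure
        would adaptively lower m toward the just-detectable value.
        """
        rng = np.random.default_rng(seed)
        t = np.arange(int(fs * dur)) / fs
        carrier = rng.standard_normal(t.size)
        envelope = 1.0 + depth * np.sin(2 * np.pi * mod_rate_hz * t)
        return carrier * envelope

    modulated = am_noise(depth=0.5)   # clearly modulated interval
    reference = am_noise(depth=0.0)   # unmodulated comparison interval
    ```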

  7. Impacts of fire on non-native plant recruitment in black spruce forests of interior Alaska

    Science.gov (United States)

    Conway, Alexandra J.; Jean, Mélanie

    2017-01-01

    Climate change is expected to increase the extent and severity of wildfires throughout the boreal forest. Historically, black spruce (Picea mariana (Mill.) B.S.P.) forests in interior Alaska have been relatively free of non-native species, but the compounding effects of climate change and an altered fire regime could facilitate the expansion of non-native plants. We tested the effects of wildfire on non-native plant colonization by conducting a seeding experiment of non-native plants on different substrate types in a burned black spruce forest, and surveying for non-native plants in recently burned and mature black spruce forests. We found few non-native plants in burned or mature forests, despite their high roadside presence, although invasion of some burned sites by dandelion (Taraxacum officinale) indicated the potential for non-native plants to move into burned forest. Experimental germination rates were significantly higher on mineral soil compared to organic soil, indicating that severe fires that combust much of the organic layer could increase the potential for non-native plant colonization. We conclude that fire disturbances that remove the organic layer could facilitate the invasion of non-native plants providing there is a viable seed source and dispersal vector. PMID:28158284

  8. Dynamic visual speech perception in a patient with visual form agnosia.

    Science.gov (United States)

    Munhall, K G; Servos, P; Santi, A; Goodale, M A

    2002-10-01

    To examine the role of dynamic cues in visual speech perception, a patient with visual form agnosia (DF) was tested with a set of static and dynamic visual displays of three vowels. Five conditions were tested: (1) auditory only which provided only vocal pitch information, (2) dynamic visual only, (3) dynamic audiovisual with vocal pitch information, (4) dynamic audiovisual with full voice information and (5) static visual only images of postures during vowel production. DF showed normal performance in all conditions except the static visual only condition in which she scored at chance. Control subjects scored close to ceiling in this condition. The results suggest that spatiotemporal signatures for objects and events are processed separately from static form cues.

  9. Temporal dynamics of sensorimotor integration in speech perception and production: Independent component analysis of EEG data

    Directory of Open Access Journals (Sweden)

    David eJenson

    2014-07-01

    Full Text Available Activity in premotor and sensorimotor cortices is found in speech production and some perception tasks. Yet, how sensorimotor integration supports these functions is unclear due to a lack of data examining the timing of activity from these regions. Beta (~20 Hz) and alpha (~10 Hz) spectral power within the EEG µ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required covert and overt syllable productions and overt word production. Independent component analysis was performed on EEG data obtained during these conditions to (1) identify clusters of µ components common to all conditions and (2) examine real-time event-related spectral perturbations (ERSP) within the alpha and beta bands. 17 and 15 out of 20 participants produced left and right µ components, respectively, localized to the precentral gyri. Discrimination conditions were characterized by significant (pFDR<.05) early alpha event-related synchronization (ERS) prior to and during stimulus presentation and later alpha event-related desynchronization (ERD) following stimulus offset. Beta ERD began early and gained strength across time. Differences were found between quiet and noisy discrimination conditions. Both overt syllable and word productions yielded similar alpha/beta ERD that began prior to production and was strongest during muscle activity. Findings during covert production were weaker than during overt production. One explanation for these findings is that µ-beta ERD indexes early predictive coding (e.g., internal modeling) and/or overt and covert attentional/motor processes. µ-alpha ERS may index inhibitory input to the premotor cortex from sensory regions prior to and during discrimination, while µ-alpha ERD may index re-afferent sensory feedback during speech rehearsal and production.
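
    The ERSP measure used above quantifies band power over time relative to a pre-stimulus baseline; a toy version for one signal, using a short-time FFT rather than the study's ICA pipeline, could be sketched as follows (all names and values are illustrative):

    ```python
    # Hedged sketch: event-related spectral perturbation (ERSP) in the alpha
    # band, expressed in dB relative to a pre-stimulus baseline.
    import numpy as np

    fs = 250
    epochs = np.random.default_rng(3).standard_normal((40, 2 * fs))  # 40 trials, 2 s

    def band_power_timecourse(x, fs, f_lo, f_hi, win=64, hop=16):
        powers = []
        for start in range(0, x.size - win, hop):
            seg = x[start:start + win] * np.hanning(win)
            spec = np.abs(np.fft.rfft(seg)) ** 2
            freqs = np.fft.rfftfreq(win, 1 / fs)
            powers.append(spec[(freqs >= f_lo) & (freqs <= f_hi)].mean())
        return np.array(powers)

    alpha = np.array([band_power_timecourse(ep, fs, 8, 13) for ep in epochs]).mean(axis=0)
    baseline = alpha[:5].mean()                 # frames before stimulus onset
    ersp_db = 10 * np.log10(alpha / baseline)   # ERD < 0 dB, ERS > 0 dB
    ```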

  10. Gopherus agassizii (Desert Tortoise). Non-native seed dispersal

    Science.gov (United States)

    Ennen, J.R.; Loughran, Caleb L.; Lovich, Jeffrey E.

    2011-01-01

    Sahara Mustard (Brassica tournefortii) is a non-native, highly invasive weed species of southwestern U.S. deserts. Sahara Mustard is a hardy species, which flourishes under many conditions including drought and in both disturbed and undisturbed habitats (West and Nabhan 2002. In B. Tellman [ed.], Invasive Plants: Their Occurrence and Possible Impact on the Central Gulf Coast of Sonora and the Midriff Islands in the Sea of Cortes, pp. 91–111. University of Arizona Press, Tucson). Because of this species’ ability to thrive in these habitats, B. tournefortii has been able to propagate throughout the southwestern United States establishing itself in the Mojave and Sonoran Deserts in Arizona, California, Nevada, and Utah. Unfortunately, naturally disturbed areas created by native species, such as the Desert Tortoise (Gopherus agassizii), within these deserts could have facilitated the propagation of B. tournefortii. (Lovich 1998. In R. G. Westbrooks [ed.], Invasive Plants, Changing the Landscape of America: Fact Book, p. 77. Federal Interagency Committee for the Management of Noxious and Exotic Weeds [FICMNEW], Washington, DC). However, Desert Tortoises have never been directly observed dispersing Sahara Mustard seeds. Here we present observations of two Desert Tortoises dispersing Sahara Mustard seeds at the interface between the Mojave and Sonoran deserts in California.

  11. Native and Non-Native English Language Teachers

    Directory of Open Access Journals (Sweden)

    Ian Walkinshaw

    2014-05-01

    Full Text Available The English language teaching industry in East and Southeast Asia subscribes to an assumption that native English-speaking teachers (NESTs) are the gold standard of spoken and written language, whereas non-native English-speaking teachers (non-NESTs) are inferior educators because they lack this innate linguistic skill. But does this premise correspond with the views of second language learners? This article reports on research carried out with university students in Vietnam and Japan exploring the advantages and disadvantages of learning English from NESTs and non-NESTs. Contrary to the above notion, our research illuminated a number of perceived advantages—and disadvantages—in both types of teachers. Students viewed NESTs as models of pronunciation and correct language use, as well as being repositories of cultural knowledge, but they also found NESTs poor at explaining grammar, and their different cultures created tension. Non-NESTs were perceived as good teachers of grammar, and had the ability to resort to the students' first language when necessary. Students found classroom interaction with non-NESTs easier because of their shared culture. Non-NESTs' pronunciation was often deemed inferior to that of NESTs, but also easier to comprehend. Some respondents advocated learning from both types of teachers, depending on learners' proficiency and the skill being taught.

  12. Effect of hearing aid release time and presentation level on speech perception in noise in elderly individuals with hearing loss.

    Science.gov (United States)

    Pottackal Mathai, Jijo; Mohammed, Hasheem

    2017-02-01

    To investigate the effect of compression time settings and presentation levels on speech perception in noise for elderly individuals with hearing loss. To compare aided speech perception performance in these individuals with age-matched normal hearing subjects. Twenty (normal hearing) participants within the age range of 60-68 years and 20 (mild-to-moderate sensorineural hearing loss) in the age range of 60-70 years were randomly recruited for the study. In the former group, SNR-50 was determined using phonetically balanced sentences that were mixed with speech-shaped noise presented at the most comfortable level. In the SNHL group, aided SNR-50 was determined at three different presentation levels (40, 60, and 80 dB HL) after fitting binaural hearing aids that had different compression time settings (fast and slow). In the SNHL group, slow compression time settings showed significantly better SNR-50 compared to fast release time. In addition, the mean of SNR-50 in the SNHL group was comparable to normal hearing participants while using a slow release time. A hearing aid with slow compression time settings led to significantly better speech perception in noise, compared to that of a hearing aid that had fast compression time settings.
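
    SNR-50 estimation like that described above is usually implemented as an adaptive staircase: a 1-down/1-up rule converges on the SNR giving 50% correct. The sketch below simulates such a track against a logistic listener; the step size, reversal count, and psychometric slope are illustrative assumptions, not the clinical protocol:

    ```python
    # Hedged sketch: 1-down/1-up staircase converging on SNR-50,
    # run against a simulated logistic listener.
    import numpy as np

    rng = np.random.default_rng(42)

    def listener_correct(snr_db, srt_db=-3.0, slope_per_db=0.3):
        p = 1.0 / (1.0 + np.exp(-slope_per_db * (snr_db - srt_db)))
        return rng.random() < p

    snr, step = 0.0, 2.0
    reversals, last_dir = [], None
    while len(reversals) < 8:
        direction = -1 if listener_correct(snr) else +1  # harder after correct
        if last_dir is not None and direction != last_dir:
            reversals.append(snr)
        snr += direction * step
        last_dir = direction

    print("SNR-50 estimate: %.1f dB" % np.mean(reversals[2:]))  # drop early reversals
    ```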

  13. Discrimination of static and dynamic spectral patterns by children and young adults in relationship to speech perception in noise

    Directory of Open Access Journals (Sweden)

    Hanin Rayes

    2014-03-01

    Full Text Available Past work has shown a relationship between the ability to discriminate spectral patterns and measures of speech intelligibility. The purpose of this study was to investigate the ability of both children and young adults to discriminate static and dynamic spectral patterns, comparing performance between the two groups and evaluating within-group results in terms of their relationship to speech-in-noise perception. Data were collected from normal-hearing children (age range: 5.4-12.8 years) and young adults (mean age: 22.8 years) on two spectral discrimination tasks and speech-in-noise perception. The first discrimination task, involving static spectral profiles, measured the ability to detect a change in the phase of a low-density sinusoidal spectral ripple of wideband noise. Using dynamic spectral patterns, the second task determined the signal-to-noise ratio needed to discriminate the temporal pattern of frequency fluctuation imposed by stochastic low-rate frequency modulation (FM). Children performed significantly poorer than young adults on both discrimination tasks. For children, a significant correlation between speech-in-noise perception and spectral-pattern discrimination was obtained only with the dynamic patterns of the FM condition, with partial correlation suggesting that factors related to the children's age mediated the relationship.
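
    The static task's stimuli, wideband noise carrying a low-density sinusoidal spectral ripple whose phase the listener must discriminate, can be synthesized in the frequency domain; the generator below is a sketch with assumed parameter names and values, not the study's exact stimuli:

    ```python
    # Hedged sketch: noise with a sinusoidal spectral ripple on a log-frequency
    # axis; shifting `phase` by pi inverts the ripple peaks and valleys.
    import numpy as np

    def ripple_noise(fs=44100, dur=0.5, ripples_per_octave=1.0, phase=0.0,
                     depth_db=20.0, f_lo=100.0, f_hi=8000.0, seed=0):
        rng = np.random.default_rng(seed)
        n = int(fs * dur)
        spec = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # white spectrum
        f = np.fft.fftfreq(n, 1 / fs)
        band = (np.abs(f) >= f_lo) & (np.abs(f) <= f_hi)
        octaves = np.log2(np.abs(f[band]) / f_lo)        # position in octaves
        ripple_db = (depth_db / 2) * np.sin(2 * np.pi * ripples_per_octave * octaves + phase)
        mag = np.zeros(n)
        mag[band] = 10 ** (ripple_db / 20)
        x = np.fft.ifft(spec * mag).real
        return x / np.max(np.abs(x))                     # normalize to +/-1

    standard = ripple_noise(phase=0.0)
    target = ripple_noise(phase=np.pi)  # phase-shifted ripple to be detected
    ```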

  14. Neural networks for learning and prediction with applications to remote sensing and speech perception

    Science.gov (United States)

    Gjaja, Marin N.

    1997-11-01

    Neural networks for supervised and unsupervised learning are developed and applied to problems in remote sensing, continuous map learning, and speech perception. Adaptive Resonance Theory (ART) models are real-time neural networks for category learning, pattern recognition, and prediction. Unsupervised fuzzy ART networks synthesize fuzzy logic and neural networks, and supervised ARTMAP networks incorporate ART modules for prediction and classification. New ART and ARTMAP methods resulting from analyses of data structure, parameter specification, and category selection are developed. Architectural modifications providing flexibility for a variety of applications are also introduced and explored. A new methodology for automatic mapping from Landsat Thematic Mapper (TM) and terrain data, based on fuzzy ARTMAP, is developed. System capabilities are tested on a challenging remote sensing problem, prediction of vegetation classes in the Cleveland National Forest from spectral and terrain features. After training at the pixel level, performance is tested at the stand level, using sites not seen during training. Results are compared to those of maximum likelihood classifiers, back propagation neural networks, and K-nearest neighbor algorithms. Best performance is obtained using a hybrid system based on a convex combination of fuzzy ARTMAP and maximum likelihood predictions. This work forms the foundation for additional studies exploring fuzzy ARTMAP's capability to estimate class mixture composition for non-homogeneous sites. Exploratory simulations apply ARTMAP to the problem of learning continuous multidimensional mappings. A novel system architecture retains basic ARTMAP properties of incremental and fast learning in an on-line setting while adding components to solve this class of problems. The perceptual magnet effect is a language-specific phenomenon arising early in infant speech development that is characterized by a warping of speech sound perception. An
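
    The fuzzy ART building block mentioned above follows a fixed choice/match/learn cycle; a compact version of the standard Carpenter-Grossberg formulation is sketched below, with illustrative parameter values (rho is the vigilance):

    ```python
    # Hedged sketch: one presentation cycle of fuzzy ART with complement coding
    # and fast learning (beta = 1). Returns the index of the resonating category.
    import numpy as np

    def complement_code(a):
        a = np.asarray(a, dtype=float)
        return np.concatenate([a, 1.0 - a])    # keeps the city-block norm constant

    def fuzzy_art_step(x, weights, rho=0.75, alpha=0.001, beta=1.0):
        if not weights:                            # first input founds a category
            weights.append(x.copy())
            return 0
        matches = [np.minimum(x, w) for w in weights]              # fuzzy AND
        choice = [m.sum() / (alpha + w.sum()) for m, w in zip(matches, weights)]
        for j in np.argsort(choice)[::-1]:         # search categories by choice value
            if matches[j].sum() / x.sum() >= rho:  # vigilance (match) test
                weights[j] = beta * matches[j] + (1 - beta) * weights[j]
                return j                           # resonance: learn and commit
        weights.append(x.copy())                   # all categories reset: new category
        return len(weights) - 1

    W = []
    for sample in ([0.10, 0.20], [0.15, 0.22], [0.90, 0.80]):
        print(fuzzy_art_step(complement_code(sample), W))  # -> 0, 0, 1
    ```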

  15. Automatically identifying characteristic features of non-native English accents

    NARCIS (Netherlands)

    Bloem, Jelke; Wieling, Martijn; Nerbonne, John; Côté, Marie-Hélène; Knooihuizen, Remco; Nerbonne, John

    2016-01-01

    In this work, we demonstrate the application of statistical measures from dialectometry to the study of accented English speech. This new methodology enables a more quantitative approach to the study of accents. Studies on spoken dialect data have shown that a combination of representativeness (the

  16. Parents and Speech Therapist Perception of Parental Involvement in Kailila Therapy Center, Jakarta, Indonesia

    Science.gov (United States)

    Jane, Griselda; Tunjungsari, Harini

    2015-01-01

    Parental involvement in speech therapy has not been prioritized in most therapy centers in Indonesia. One of the therapy centers that has recognized the importance of parental involvement is the Kailila Speech Therapy Center. In the Kailila speech therapy center, parental involvement in children's speech therapy is an obligation that has been…

  17. International students in speech-language pathology clinical education placements: Perceptions of experience and competency development.

    Science.gov (United States)

    Attrill, Stacie; Lincoln, Michelle; McAllister, Sue

    2015-06-01

    This study aimed to describe perceptions of clinical placement experiences and competency development for international speech-language pathology students and to determine if these perceptions were different for domestic students. Domestic and international students at two Australian universities participated in nine focus group interviews. Thematic analysis led to the identification of two themes shared by international and domestic students and several separate themes. Shared themes identified the important influence of students' relationships with clinical educators, unique opportunities and learning that occurs on placement. International student themes included concerns about their communication skills and the impact of these skills on client progress. They also explored their adjustment to unfamiliar placement settings and relationships, preferring structured placements to assist this adjustment. Domestic student themes explored the critical nature of competency attainment and assessment on placement, valuing placements that enabled them to achieve their goals. The findings of this study suggest that international students experience additional communication, cultural and contextual demands on clinical placement, which may increase their learning requirements. Clinical education practices must be responsive to the learning needs of diverse student populations. Strategies are suggested to assist all students to adjust to the professional and learning expectations of clinical education placements.

  18. Use of speech generating devices can improve perception of qualifications for skilled, verbal, and interactive jobs.

    Science.gov (United States)

    Stern, Steven E; Chobany, Chelsea M; Beam, Alexander A; Hoover, Brittany N; Hull, Thomas T; Linsenbigler, Melissa; Makdad-Light, Courtney; Rubright, Courtney N

    2017-01-01

    We have previously demonstrated that when speech generating devices (SGD) are used as assistive technologies, they are preferred over the users' natural voices. We sought to examine whether using SGDs would affect listener's perceptions of hirability of people with complex communication needs. In a series of three experiments, participants rated videotaped actors, one using SGD and the other using their natural, mildly dysarthric voice, on (a) a measurement of perceptions of speaker credibility, strength, and informedness and (b) measurements of hirability for jobs coded in terms of skill, verbal ability, and interactivity. Experiment 1 examined hirability for jobs varying in terms of skill and verbal ability. Experiment 2 was a replication that examined hirability for jobs varying in terms of interactivity. Experiment 3 examined jobs in terms of skill and specific mode of interaction (face-to-face, telephone, computer-mediated). Actors were rated more favorably when using SGD than their own voices. Actors using SGD were also rated more favorably for highly skilled and highly verbal jobs. This preference for SGDs over mildly dysarthric voice was also found for jobs entailing computer-mediated-communication, particularly skillful jobs.

  19. Context-dependent impact of presuppositions on early magnetic brain responses during speech perception.

    Science.gov (United States)

    Hertrich, Ingo; Kirsten, Mareike; Tiemann, Sonja; Beck, Sigrid; Wühle, Anja; Ackermann, Hermann; Rolke, Bettina

    2015-10-01

    Discourse structure enables us to generate expectations based upon linguistic material that has already been introduced. The present magnetoencephalography (MEG) study addresses auditory perception of test sentences in which discourse coherence was manipulated by using presuppositions (PSP) that either correspond or fail to correspond to items in preceding context sentences with respect to uniqueness and existence. Context violations yielded delayed auditory M50 and enhanced auditory M200 cross-correlation responses to syllable onsets within an analysis window of 1.5s following the PSP trigger words. Furthermore, discourse incoherence yielded suppression of spectral power within an expanded alpha band ranging from 6 to 16Hz. This effect showed a bimodal temporal distribution, being significant in an early time window of 0.0-0.5s following the PSP trigger and a late interval of 2.0-2.5s. These findings indicate anticipatory top-down mechanisms interacting with various aspects of bottom-up processing during speech perception.

  20. Modern Greek Language: Acquisition of Morphology and Syntax by Non-Native Speakers

    Science.gov (United States)

    Andreou, Georgia; Karapetsas, Anargyros; Galantomos, Ioannis

    2008-01-01

    This study investigated the performance of native and non-native speakers of the Modern Greek language on morphology and syntax tasks. Non-native speakers of Greek whose native language was English, a language with strict word order and simple morphology, made more errors and answered more slowly than native speakers on morphology but not…

  1. 75 FR 60405 - Lincoln National Forest, New Mexico, Integrated Non-Native Invasive Plant Project

    Science.gov (United States)

    2010-09-30

    ... Forest Service Lincoln National Forest, New Mexico, Integrated Non-Native Invasive Plant Project AGENCY... control spread of non-native invasive plants (NNIP) within the LNF. The proposal utilizes several... methods, and adaptive management. Invasive plants designated by the State of New Mexico as noxious weeds...

  2. Language Distance and Non-Native Syntactic Processing: Evidence from Event-Related Potentials

    Science.gov (United States)

    Zawiszewski, Adam; Gutierrez, Eva; Fernandez, Beatriz; Laka, Itziar

    2011-01-01

    In this study, we explore native and non-native syntactic processing, paying special attention to the language distance factor. To this end, we compared how native speakers of Basque and highly proficient non-native speakers of Basque who are native speakers of Spanish process certain core aspects of Basque syntax. Our results suggest that…

  3. Chinese Fantasy Novel: Empirical Study on New Word Teaching for Non-Native Learners

    Science.gov (United States)

    Meng, Bok Check; Soon, Goh Ying

    2014-01-01

    Giving additional learning materials such as Chinese fantasy novels to non-native learners can be strenuous. This study seeks to provide empirical support for the usefulness of new words in a Chinese fantasy novel for enhancing vocabulary learning among non-native learners of Chinese. In general, the students agreed that they like to learn…

  4. The Impact of Non-Native English Teachers' Linguistic Insecurity on Learners' Productive Skills

    Science.gov (United States)

    Daftari, Giti Ehtesham; Tavil, Zekiye Müge

    2017-01-01

    The literature reports that discrimination between native and non-native English speaking teachers favors native speakers. The present study examines the linguistic insecurity of non-native English speaking teachers (NNESTs) and investigates its influence on learners' productive skills by using SPSS software. The eighteen teachers…

  5. Determinants of success in native and non-native listening comprehension: an individual differences approach

    NARCIS (Netherlands)

    S. Andringa; N. Olsthoorn; C. van Beuningen; R. Schoonen; J. Hulstijn

    2012-01-01

    The goal of this study was to explain individual differences in both native and non-native listening comprehension; 121 native and 113 non-native speakers of Dutch were tested on various linguistic and nonlinguistic cognitive skills thought to underlie listening comprehension. Structural equation modeling…

  6. The Factors Influencing the Motivational Strategy Use of Non-Native English Teachers

    Science.gov (United States)

    Solak, Ekrem; Bayar, Adem

    2014-01-01

    Motivation can be considered one of the most important factors determining success in the language classroom. Therefore, this research aims to determine the variables influencing the motivational strategies used by non-native English teachers in the Turkish context. 122 non-native English teachers teaching English at a state-run university prep school…

  7. Cognitive and Emotional Evaluation of Two Educational Outdoor Programs Dealing with Non-Native Bird Species

    Science.gov (United States)

    Braun, Michael; Buyer, Regine; Randler, Christoph

    2010-01-01

    "Non-native organisms are a major threat to biodiversity". This statement is often made by biologists, but general conclusions cannot be drawn easily because of contradictory evidence. To introduce pupils aged 11-14 years to this topic, we employed an educational program dealing with non-native animals in Central Europe. The pupils took part in a…

  8. Delayed Next Turn Repair Initiation in Native/Non-native Speaker English Conversation.

    Science.gov (United States)

    Wong, Jean

    2000-01-01

    Examines a form of other-initiated conversational repair that is delayed within next turn position, a form that is produced by non-native speakers of English whose native language is Mandarin. Using the framework of conversational analysis, shows that in native/non-native conversation, other-initiated repair is not always done as early as possible…

  9. Facing Innovation: Preparing Lecturers for English-Medium Instruction in a Non-Native Context.

    Science.gov (United States)

    Klaassen, R. G.; De Graaff, E.

    2001-01-01

    Discusses the effects of training on teaching staff in an innovation process: the implementation of English-medium instruction by non-native speaking lecturers to non-native speaking students. The workshop turned out to be the most appropriate professional development for the first two phases of the innovation process. (Contains 13…

  10. Cross-Linguistic Influence in Non-Native Languages: Explaining Lexical Transfer Using Language Production Models

    Science.gov (United States)

    Burton, Graham

    2013-01-01

    The focus of this research is on the nature of lexical cross-linguistic influence (CLI) between non-native languages. Using oral interviews with 157 L1 Italian high-school students studying English and German as non-native languages, the project investigated which kinds of lexis appear to be more susceptible to transfer from German to English and…

  11. Structural Correlates for Lexical Efficiency and Number of Languages in Non-Native Speakers of English

    Science.gov (United States)

    Grogan, A.; Parker Jones, O.; Ali, N.; Crinion, J.; Orabona, S.; Mechias, M. L.; Ramsden, S.; Green, D. W.; Price, C. J.

    2012-01-01

    We used structural magnetic resonance imaging (MRI) and voxel based morphometry (VBM) to investigate whether the efficiency of word processing in the non-native language (lexical efficiency) and the number of non-native languages spoken (2+ versus 1) were related to local differences in the brain structure of bilingual and multilingual speakers.…

  12. Managing conflicts arising from fisheries enhancements based on non-native fishes in southern Africa.

    Science.gov (United States)

    Ellender, B R; Woodford, D J; Weyl, O L F; Cowx, I G

    2014-12-01

    Southern Africa has a long history of non-native fish introductions for the enhancement of recreational and commercial fisheries, due to a perceived lack of suitable native species. This has resulted in some important inland fisheries being based on non-native fishes. Regionally, these introductions are predominantly not benign, and non-native fishes are considered one of the main threats to aquatic biodiversity because they affect native biota through predation, competition, habitat alteration, disease transfer and hybridization. To achieve national policy objectives of economic development, food security and poverty eradication, countries are increasingly looking towards inland fisheries as vehicles for development. As a result, conflicts have developed between economic and conservation objectives. In South Africa, as is the case for other invasive biota, the control and management of non-native fishes is included in the National Environmental Management: Biodiversity Act. Implementation measures include import and movement controls and, more recently, non-native fish eradication in conservation priority areas. Management actions are, however, complicated because many non-native fishes are important components in recreational and subsistence fisheries that contribute towards regional economies and food security. In other southern African countries, little attention has focussed on issues and management of non-native fishes, and this is cause for concern. This paper provides an overview of introductions, impacts and fisheries in southern Africa with emphasis on existing and evolving legislation, conflicts, implementation strategies and the sometimes innovative approaches that have been used to prioritize conservation areas and manage non-native fishes.

  14. Contrasting xylem vessel constraints on hydraulic conductivity between native and non-native woody understory species

    Directory of Open Access Journals (Sweden)

    Maria S Smith

    2013-11-01

    Full Text Available We examined the hydraulic properties of 82 native and non-native woody species common to forests of Eastern North America, including several congeneric groups, representing a range of anatomical wood types. We observed smaller conduit diameters with greater frequency in non-native species, corresponding to a lower calculated potential vulnerability to cavitation index. Non-native species exhibited higher vessel grouping in metaxylem compared with native species; however, solitary vessels were more prevalent in secondary xylem. A higher frequency of solitary vessels in secondary xylem was related to a lower potential vulnerability index. We found no relationship between anatomical characteristics of xylem, origin of species, and hydraulic conductivity, indicating that non-native species did not exhibit advantageous hydraulic efficiency over native species. Our results suggest anatomical advantages for non-native species under the potential for freezing-induced cavitation, perhaps permitting extended growing seasons.

  15. Neural Correlates of Early Sound Encoding and their Relationship to Speech-in-Noise Perception.

    Science.gov (United States)

    Coffey, Emily B J; Chepesiuk, Alexander M P; Herholz, Sibylle C; Baillet, Sylvain; Zatorre, Robert J

    2017-01-01

    Speech-in-noise (SIN) perception is a complex cognitive skill that affects social, vocational, and educational activities. Poor SIN ability particularly affects young and elderly populations, yet varies considerably even among healthy young adults with normal hearing. Although SIN skills are known to be influenced by top-down processes that can selectively enhance lower-level sound representations, the complementary role of feed-forward mechanisms and their relationship to musical training is poorly understood. Using a paradigm that minimizes the main top-down factors that have been implicated in SIN performance, such as working memory, we aimed to better understand how robust encoding of periodicity in the auditory system (as measured by the frequency-following response) contributes to SIN perception. Using magnetoencephalography, we found that the strength of encoding at the fundamental frequency in the brainstem, thalamus, and cortex is correlated with SIN accuracy. The amplitude of the slower cortical P2 wave was previously also shown to be related to SIN accuracy and FFR strength; we use MEG source localization to show that the P2 wave originates in a temporal region anterior to that of the cortical FFR. We also confirm that the observed enhancements were related to the extent and timing of musicianship. These results are consistent with the hypothesis that basic feed-forward sound encoding affects SIN perception by providing better information to later processing stages, and that modifying this process may be one mechanism through which musical training might enhance the auditory networks that subserve both musical and language functions.
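
    To make the central measure concrete: "strength of encoding at the fundamental frequency" can be read off the spectrum of each subject's averaged FFR waveform and correlated with behavioural SIN accuracy. A minimal sketch, not the authors' MEG pipeline; all data below are synthetic and the sampling rate is an assumption:

        import numpy as np
        from scipy.stats import pearsonr

        def f0_encoding_strength(ffr, fs, f0):
            """Spectral amplitude of an averaged FFR waveform at the stimulus f0."""
            spectrum = np.abs(np.fft.rfft(ffr)) / len(ffr)
            freqs = np.fft.rfftfreq(len(ffr), d=1.0 / fs)
            return spectrum[np.argmin(np.abs(freqs - f0))]

        # Synthetic stand-ins: 20 subjects whose FFRs contain a 98 Hz fundamental
        # at varying strengths, plus toy SIN accuracy scores tied to that strength.
        rng = np.random.default_rng(0)
        fs, f0, n_sub = 1000.0, 98.0, 20
        t = np.arange(0, 0.2, 1.0 / fs)
        gains = rng.uniform(0.2, 1.0, n_sub)
        ffrs = [g * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.5, t.size) for g in gains]
        sin_accuracy = 0.5 + 0.4 * gains + rng.normal(0, 0.05, n_sub)

        strengths = [f0_encoding_strength(w, fs, f0) for w in ffrs]
        r, p = pearsonr(strengths, sin_accuracy)
        print(f"r = {r:.2f}, p = {p:.4f}")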

  17. The effects of noise exposure and musical training on suprathreshold auditory processing and speech perception in noise.

    Science.gov (United States)

    Yeend, Ingrid; Beach, Elizabeth Francis; Sharma, Mridula; Dillon, Harvey

    2017-09-01

    Recent animal research has shown that exposure to single episodes of intense noise causes cochlear synaptopathy without affecting hearing thresholds. It has been suggested that the same may occur in humans. If so, it is hypothesized that this would result in impaired encoding of sound and lead to difficulties hearing at suprathreshold levels, particularly in challenging listening environments. The primary aim of this study was to investigate the effect of noise exposure on auditory processing, including the perception of speech in noise, in adult humans. A secondary aim was to explore whether musical training might improve some aspects of auditory processing and thus counteract or ameliorate any negative impacts of noise exposure. In a sample of 122 participants (63 female) aged 30-57 years with normal or near-normal hearing thresholds, we conducted audiometric tests, including tympanometry, audiometry, acoustic reflexes, otoacoustic emissions and medial olivocochlear responses. We also assessed temporal and spectral processing, by determining thresholds for detection of amplitude modulation and temporal fine structure. We assessed speech-in-noise perception, and conducted tests of attention, memory and sentence closure. We also calculated participants' accumulated lifetime noise exposure and administered questionnaires to assess self-reported listening difficulty and musical training. The results showed no clear link between participants' lifetime noise exposure and performance on any of the auditory processing or speech-in-noise tasks. Musical training was associated with better performance on the auditory processing tasks, but not on the speech-in-noise perception tasks. The results indicate that sentence closure skills, working memory, attention, extended high frequency hearing thresholds and medial olivocochlear suppression strength are important factors that are related to the ability to process speech in noise. Crown Copyright © 2017.

  18. Perception of Emotion in Conversational Speech by Younger and Older Listeners

    Directory of Open Access Journals (Sweden)

    Juliane Schmidt

    2016-05-01

    Full Text Available This study investigated whether age and/or differences in hearing sensitivity influence the perception of the emotion dimensions arousal (calm vs. aroused) and valence (positive vs. negative attitude) in conversational speech. To that end, this study specifically focused on the relationship between participants' ratings of short affective utterances and the utterances' acoustic parameters (pitch, intensity, and articulation rate) known to be associated with the emotion dimensions arousal and valence. Stimuli consisted of short utterances taken from a corpus of conversational speech. In two rating tasks, younger and older adults either rated arousal or valence using a 5-point scale. Mean intensity was found to be the main cue participants used in the arousal task (i.e., higher mean intensity cueing higher levels of arousal), while mean F0 was the main cue in the valence task (i.e., higher mean F0 being interpreted as more negative). Even though there were no overall age group differences in arousal or valence ratings, compared to younger adults, older adults responded less strongly to mean intensity differences cueing arousal and responded more strongly to differences in mean F0 cueing valence. Individual hearing sensitivity among the older adults did not modify the use of mean intensity as an arousal cue. However, individual hearing sensitivity generally affected valence ratings and modified the use of mean F0. We conclude that age differences in the interpretation of mean F0 as a cue for valence are likely due to age-related hearing loss, whereas age differences in rating arousal do not seem to be driven by hearing sensitivity differences between age groups (as measured by pure-tone audiometry).
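
    The cue analysis implied here, relating listeners' ratings to the utterances' acoustic parameters, can be sketched as a multiple regression with standardized predictors, where larger absolute coefficients indicate stronger cues. The data below are invented stand-ins, not the conversational corpus used in the study:

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.preprocessing import StandardScaler

        # Toy per-utterance acoustics; arousal ratings driven mainly by intensity,
        # loosely echoing the reported pattern. Everything here is synthetic.
        rng = np.random.default_rng(1)
        n = 120
        X = np.column_stack([
            rng.normal(65, 8, n),    # mean intensity (dB)
            rng.normal(180, 40, n),  # mean F0 (Hz)
            rng.normal(5, 1, n),     # articulation rate (syllables/s)
        ])
        arousal = 0.08 * X[:, 0] + rng.normal(0, 0.5, n)

        Xz = StandardScaler().fit_transform(X)
        betas = LinearRegression().fit(Xz, arousal).coef_
        for name, b in zip(["intensity", "F0", "artic. rate"], betas):
            print(f"{name}: beta = {b:+.2f}")  # larger |beta| = stronger cue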

  19. On the matching of top-down knowledge with sensory input in the perception of ambiguous speech

    Directory of Open Access Journals (Sweden)

    Hannemann R

    2010-06-01

    Full Text Available Background: How does the brain repair obliterated speech and cope with acoustically ambivalent situations? A widely discussed possibility is to use top-down information for solving the ambiguity problem. In the case of speech, this may lead to a match of bottom-up sensory input with lexical expectations, resulting in resonant states which are reflected in the induced gamma-band activity (GBA). Methods: In the present EEG study, we compared the subjects' pre-attentive GBA responses to obliterated speech segments presented after a series of correct words. The words were a minimal pair in German and differed with respect to the degree of specificity of segmental phonological information. Results: The induced GBA was larger when the expected lexical information was phonologically fully specified compared to the underspecified condition. Thus, the degree of specificity of phonological information in the mental lexicon correlates with the intensity of the matching process of bottom-up sensory input with lexical information. Conclusions: These results, together with those of a behavioural control experiment, support the notion of multi-level mechanisms involved in the repair of deficient speech. The delineated alignment of pre-existing knowledge with sensory input is in accordance with recent ideas about the role of internal forward models in speech perception.

  20. Cue integration in categorical tasks: insights from audio-visual speech perception.

    Directory of Open Access Journals (Sweden)

    Vikranth Rao Bejjanki

    Full Text Available Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues, during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one and participants' performance is consistent with an optimal model in which environmental, within-category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks.
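
    The normative rule referenced here weights each cue by its reliability (inverse variance); the abstract's point is that for categorical dimensions the within-category "environmental" variance must be added to the sensory variance before weighting. A minimal sketch of both rules, with all variances invented:

        import numpy as np

        def fused_estimate(x_a, var_a, x_v, var_v):
            """Reliability-weighted fusion of auditory and visual cues
            (the standard normative rule for continuous dimensions)."""
            w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
            return w_a * x_a + (1 - w_a) * x_v

        def categorical_weight(sensory_var, category_var):
            """For categorical tasks, the effective reliability of a cue
            also reflects within-category (environmental) variance."""
            return 1.0 / (sensory_var + category_var)

        # Continuous case: auditory estimate 10 (var 1) vs. visual 14 (var 4).
        print(f"fused estimate = {fused_estimate(10.0, 1.0, 14.0, 4.0):.1f}")

        # Categorical case: the visual cue is noisier sensorily, but the auditory
        # category is environmentally more variable, shifting weight to vision.
        w_a = categorical_weight(sensory_var=1.0, category_var=4.0)
        w_v = categorical_weight(sensory_var=2.0, category_var=0.5)
        w_a, w_v = w_a / (w_a + w_v), w_v / (w_a + w_v)
        print(f"auditory weight = {w_a:.2f}, visual weight = {w_v:.2f}")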

  1. Familiarity breeds support: speech-language pathologists' perceptions of bullying of students with autism spectrum disorders.

    Science.gov (United States)

    Blood, Gordon W; Blood, Ingrid M; Coniglio, Amy D; Finke, Erinn H; Boyle, Michael P

    2013-01-01

    Children with autism spectrum disorders (ASD) are primary targets for bullies and victimization. Research shows school personnel may be uneducated about bullying and ways to intervene. Speech-language pathologists (SLPs) in schools often work with children with ASD and may have victims of bullying on their caseloads. These victims may feel most comfortable turning to SLPs for help during one-to-one treatment sessions to discuss these types of experiences. A nationwide survey mailed to 1000 school-based SLPs, using a vignette design technique, determined perceptions about intervention for bullying and use of specific strategies. Results revealed that a majority (89%) of the SLPs' responses were in the "likely" or "very likely" to intervene categories for all types of bullying (physical, verbal, relational and cyber), regardless of whether the episode was observed or not. A factor analysis was conducted on a 14-item strategy scale for dealing with bullying for children with ASD. Three factors emerged, labeled "Report/Consult", "Educate the Victim", and "Reassure the Victim". SLPs providing no services to children with ASD on their caseloads demonstrated significantly lower mean scores for the likelihood of intervention and using select strategies. SLPs may play an important role in reducing and/or eliminating bullying episodes in children with ASD. Readers will be able to (a) explain four different types of bullying, (b) describe the important role of school personnel in reducing and eliminating bullying, (c) describe the perceptions and strategies selected by SLPs to deal with bullying episodes for students with ASD, and (d) outline the potential role of SLPs in assisting students with ASD who are victimized. Copyright © 2013 Elsevier Inc. All rights reserved.
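
    The three-factor solution on the 14-item scale points to an exploratory factor analysis; a hedged sketch of that style of analysis, assuming scikit-learn's FactorAnalysis with varimax rotation and wholly synthetic responses in place of the survey data:

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        # Synthetic stand-in for the 14-item strategy scale (respondents x items).
        rng = np.random.default_rng(2)
        responses = rng.integers(1, 6, size=(500, 14)).astype(float)

        fa = FactorAnalysis(n_components=3, rotation="varimax")
        fa.fit(responses)
        loadings = fa.components_.T  # 14 items x 3 factors
        for i, row in enumerate(loadings, start=1):
            top = int(np.argmax(np.abs(row)))
            print(f"item {i:2d} loads mainly on factor {top + 1} ({row[top]:+.2f})")

    In practice the factors are then named (here "Report/Consult", "Educate the Victim", and "Reassure the Victim") by inspecting which items load on each.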

  2. Perception drives production across sensory modalities: A network for sensorimotor integration of visual speech.

    Science.gov (United States)

    Venezia, Jonathan H; Fillmore, Paul; Matchin, William; Isenberg, A Lisette; Hickok, Gregory; Fridriksson, Julius

    2016-02-01

    Sensory information is critical for movement control, both for defining the targets of actions and providing feedback during planning or ongoing movements. This holds for speech motor control as well, where both auditory and somatosensory information have been shown to play a key role. Recent clinical research demonstrates that individuals with severe speech production deficits can show a dramatic improvement in fluency during online mimicking of an audiovisual speech signal, suggesting the existence of a visuomotor pathway for speech motor control. Here we used fMRI in healthy individuals to identify this new visuomotor circuit for speech production. Participants were asked to perceive and covertly rehearse nonsense syllable sequences presented auditorily, visually, or audiovisually. The motor act of rehearsal, which is prima facie the same whether or not it is cued with a visible talker, produced different patterns of sensorimotor activation when cued by visual or audiovisual speech (relative to auditory speech). In particular, a network of brain regions including the left posterior middle temporal gyrus and several frontoparietal sensorimotor areas activated more strongly during rehearsal cued by a visible talker versus rehearsal cued by auditory speech alone. Some of these brain regions responded exclusively to rehearsal cued by visual or audiovisual speech. This result has significant implications for models of speech motor control, for the treatment of speech output disorders, and for models of the role of speech gesture imitation in development.

  3. Speech perception and talker segregation : effects of level, pitch, and tactile support with multiple simultaneous talkers

    NARCIS (Netherlands)

    Drullman, R.; Bronkhorst, A.W.

    2004-01-01

    Speech intelligibility was investigated by varying the number of interfering talkers, level, and mean pitch differences between target and interfering speech, and the presence of tactile support. In a first experiment the speech-reception threshold (SRT) for sentences was measured for a male talker
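
    The speech-reception threshold named here is conventionally measured with an adaptive up-down procedure that tracks the SNR for 50% sentence intelligibility. A minimal staircase sketch, not this study's exact protocol; the step size, trial count, and simulated listener are all assumptions:

        import numpy as np

        def measure_srt(respond, start_snr=0.0, step=2.0, n_trials=13):
            """1-up/1-down staircase converging on ~50% intelligibility;
            respond(snr) returns True if the sentence was repeated correctly."""
            snr, track = start_snr, []
            for _ in range(n_trials):
                snr = snr - step if respond(snr) else snr + step
                track.append(snr)
            return float(np.mean(track[3:]))  # average after the initial trials

        # Toy listener with a true SRT of -6 dB and a steep psychometric function.
        rng = np.random.default_rng(3)
        listener = lambda snr: rng.random() < 1.0 / (1.0 + np.exp(-(snr + 6.0)))
        print(f"estimated SRT = {measure_srt(listener):+.1f} dB")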

  4. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    Science.gov (United States)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  5. Lexical influences on speech perception: A Granger causality analysis of MEG and EEG source estimates

    Science.gov (United States)

    Gow, David W.; Segawa, Jennifer A.; Ahlfors, Seppo P.; Lin, Fa-Hsuan

    2008-01-01

    Behavioural and functional imaging studies have demonstrated that lexical knowledge influences the categorization of perceptually ambiguous speech sounds. However, methodological and inferential constraints have so far been unable to resolve the question of whether this interaction takes the form of direct top-down influences on perceptual processing, or feedforward convergence during a decision process. We examined top-down lexical influences on the categorization of segments in a /s/-/ʃ/ continuum presented in different lexical contexts to produce a robust Ganong effect. Using integrated MEG/EEG and MRI data we found that, within a network identified by 40 Hz gamma phase locking, activation in the supramarginal gyrus associated with wordform representation influences phonetic processing in the posterior superior temporal gyrus during a period of time associated with lexical processing. This result provides direct evidence that lexical processes influence lower-level phonetic perception, and demonstrates the potential value of combining Granger causality analyses and high spatiotemporal resolution multimodal imaging data to explore the functional architecture of cognition. PMID:18703146
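
    Granger causality of the kind applied here asks whether the recent past of one source time series improves prediction of another beyond that series' own past. A minimal sketch with statsmodels on synthetic series; the region names are illustrative only, not the study's source estimates:

        import numpy as np
        from statsmodels.tsa.stattools import grangercausalitytests

        # Toy sources: a "supramarginal" series drives a "superior temporal"
        # series at a one-sample lag (entirely synthetic data).
        rng = np.random.default_rng(4)
        n = 500
        smg = rng.normal(size=n)
        stg = np.zeros(n)
        for t in range(1, n):
            stg[t] = 0.5 * smg[t - 1] + rng.normal(scale=0.8)

        # Column order is [effect, putative cause]; test lags 1 through 3.
        results = grangercausalitytests(np.column_stack([stg, smg]),
                                        maxlag=3, verbose=False)
        for lag, (tests, _) in results.items():
            f, p = tests["ssr_ftest"][:2]
            print(f"lag {lag}: F = {f:.1f}, p = {p:.4f}")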

  6. Atypical audio-visual speech perception and McGurk effects in children with specific language impairment.

    Science.gov (United States)

    Leybaert, Jacqueline; Macchi, Lucie; Huyse, Aurélie; Champoux, François; Bayard, Clémence; Colin, Cécile; Berthommier, Frédéric

    2014-01-01

    Audiovisual speech perception of children with specific language impairment (SLI) and children with typical language development (TLD) was compared in two experiments using /aCa/ syllables presented in the context of a masking release paradigm. Children had to repeat syllables presented in auditory alone, visual alone (speechreading), audiovisual congruent and incongruent (McGurk) conditions. Stimuli were masked by either stationary (ST) or amplitude modulated (AM) noise. Although children with SLI were less accurate in auditory and audiovisual speech perception, they showed an auditory masking release effect similar to that of children with TLD. Children with SLI also gave fewer correct responses in speechreading than children with TLD, indicating impairment in phonemic processing of visual speech information. In response to McGurk stimuli, children with TLD showed more fusions in AM noise than in ST noise, a consequence of the auditory masking release effect and of the influence of visual information. Children with SLI did not show this effect systematically, suggesting they were less influenced by visual speech. However, when the visual cues were easily identified, the profile of responses to McGurk stimuli was similar in both groups, suggesting that children with SLI do not suffer from an impairment of audiovisual integration. An analysis of percent of information transmitted revealed a deficit in the children with SLI, particularly for the place of articulation feature. Taken together, the data support the hypothesis of an intact peripheral processing of auditory speech information, coupled with a supramodal deficit of phonemic categorization in children with SLI. Clinical implications are discussed.

  7. Hyperarticulation of vowels enhances phonetic change responses in both native and non-native speakers of English: evidence from an auditory event-related potential study.

    Science.gov (United States)

    Uther, Maria; Giannakopoulou, Anastasia; Iverson, Paul

    2012-08-27

    The finding that hyperarticulation of vowel sounds occurs in certain speech registers (e.g., infant- and foreigner-directed speech) suggests that hyperarticulation may have a didactic function in facilitating acquisition of new phonetic categories in language learners. This event-related potential study tested whether hyperarticulation of vowels elicits larger phonetic change responses, as indexed by the mismatch negativity (MMN) component of the auditory event-related potential (ERP), in native and non-native speakers of English. Data from 11 native English-speaking and 10 native Greek-speaking participants showed that Greek speakers in general had smaller MMNs compared to English speakers, confirming previous studies demonstrating sensitivity of the MMN to language background. In terms of the effect of hyperarticulation, hyperarticulated stimuli elicited larger MMNs for both language groups, suggesting vowel space expansion does elicit larger pre-attentive phonetic change responses. Interestingly, Greek native speakers showed some P3a activity that was not present in the English native speakers, raising the possibility that additional attentional switch mechanisms are activated in non-native speakers compared to native speakers. These results give general support for models of speech learning such as Kuhl's Native Language Magnet enhanced (NLM-e) theory. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.

  8. Mapping the Developmental Trajectory and Correlates of Enhanced Pitch Perception on Speech Processing in Adults with ASD.

    Science.gov (United States)

    Mayer, Jennifer L; Hannent, Ian; Heaton, Pamela F

    2016-05-01

    Whilst enhanced perception has been widely reported in individuals with Autism Spectrum Disorders (ASDs), relatively little is known about the developmental trajectory and impact of atypical auditory processing on speech perception in intellectually high-functioning adults with ASD. This paper presents data on perception of complex tones and speech pitch in adult participants with high-functioning ASD and typical development, and compares these with pre-existing data using the same paradigm with groups of children and adolescents with and without ASD. As perceptual processing abnormalities are likely to influence behavioural performance, regression analyses were carried out on the adult data set. The findings revealed markedly different pitch discrimination trajectories and language correlates across diagnostic groups. While pitch discrimination increased with age and correlated with receptive vocabulary in groups without ASD, it was enhanced in childhood and stable across development in ASD. Pitch discrimination scores did not correlate with receptive vocabulary scores in the ASD group, and for adults with ASD superior pitch perception was associated with sensory atypicalities and diagnostic measures of symptom severity. We conclude that the development of pitch discrimination and its associated mechanisms markedly distinguishes those with and without ASD.

  9. Discrimination Between Native and Non-Native Speech Using Visual Features Only

    NARCIS (Netherlands)

    Georgakis, Christos; Petridis, Stavros; Pantic, Maja

    2015-01-01

    Accent is a soft biometric trait that can be inferred from pronunciation and articulation patterns characterizing the speaking style of an individual. Past research has addressed the task of classifying accent, as belonging to a native language speaker or a foreign language speaker, by means of the

  10. 2011 Invasive Non-native Plant Inventory dataset : Quivira National Wildlife Refuge

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This dataset is a product of the 2011 invasive non-native plant inventory conducted at Quivira National Wildlife Refuge by Utah State University. This inventory...

  11. Recreational freshwater fishing drives non-native aquatic species richness patterns at a continental scale

    Data.gov (United States)

    U.S. Environmental Protection Agency — Aim. Mapping the geographic distribution of non-native aquatic species is a critically important precursor to understanding the anthropogenic and environmental...

  12. Non-native Chinese Foreign Language (CFL) Teachers: Identity and Discourse

    DEFF Research Database (Denmark)

    Zhang, Chun

    2014-01-01

    Abstract Native Chinese foreign language (CFL) teacher identity is an emerging subject of research interest in teacher education. Yet limited research has been done on the identity construction of non-native CFL teachers in their home culture. Guided by a concept of teacher identity......-in-discourse, the paper reports on a qualitative study that explores how three non-native CFL teachers construct their teacher identity as they interact with Danish students while teaching CFL at one Danish university. Data collected from in-depth interviews over a period of two years show that the non-native CFL...... teachers face tensions and challenges in constructing their identities as CFL teachers, and the tensions and challenges that arose from Danish teaching culture could influence the non-native CFL teachers' contributions to CFL teaching in their home cultures. The findings further show that in order to cope...

  13. Perception of Music and Speech in Adolescents with Cochlear Implants – A Pilot Study on Effects of Intensive Musical Ear Training

    DEFF Research Database (Denmark)

    Petersen, Bjørn; Sørensen, Stine Derdau; Pedersen, Ellen Raben

    their standard school schedule and received no music training. Before and after the intervention period, both groups completed a set of tests for perception of music, speech and emotional prosody. In addition, the participants filled out a questionnaire which examined music listening habits and enjoyment...... measures of rehabilitation are important throughout adolescence. Music training may provide a beneficial method of strengthening not only music perception, but also linguistic skills, particularly prosody. The purpose of this study was to examine perception of music and speech and music engagement....... RESULTS: CI users significantly improved their overall music perception and discrimination of melodic contour and rhythm in particular. No effect of the music training was found on discrimination of emotional prosody or speech. The CI users described levels of music engagement and enjoyment that were...

  14. Non-native fishes in Florida freshwaters: a literature review and synthesis

    Science.gov (United States)

    Schofield, Pamela J.; Loftus, William F.

    2015-01-01

    Non-native fishes have been known from freshwater ecosystems of Florida since the 1950s, and dozens of species have established self-sustaining populations. Nonetheless, no synthesis of data collected on those species in Florida has been published until now. We searched the literature for peer-reviewed publications reporting original data for 42 species of non-native fishes in Florida that are currently established, were established in the past, or are sustained by human intervention. Since the 1950s, the number of non-native fish species increased steadily at a rate of roughly six new species per decade. Studies documented (in decreasing abundance): geographic location/range expansion, life- and natural-history characteristics (e.g., diet, habitat use), ecophysiology, community composition, population structure, behaviour, aquatic-plant management, and fisheries/aquaculture. Although there is a great deal of taxonomic uncertainty and confusion associated with many taxa, very few studies focused on clarifying taxonomic ambiguities of non-native fishes in the State. Most studies were descriptive; only 15 % were manipulative. Risk assessments, population-control studies and evaluations of effects of non-native fishes were rare topics for research, although they are highly valued by natural-resource managers. Though some authors equated lack of data with lack of effects, research is needed to confirm or deny conclusions. Much more is known regarding the effects of lionfish (Pterois spp.) on native fauna, despite its much shorter establishment time. Natural-resource managers need biological and ecological information to make policy decisions regarding non-native fishes. Given the near-absence of empirical data on effects of Florida non-native fishes, and the lengthy time-frames usually needed to collect such information, we provide suggestions for data collection in a manner that may be useful in the evaluation and prediction of non-native fish effects.

  15. Turkish Students' Perspectives on Speaking Anxiety in Native and Non-Native English Speaker Classes

    Science.gov (United States)

    Bozavli, Ebubekir; Gulmez, Recep

    2012-01-01

    The aim of this study is to reveal the effect of FLA (foreign language anxiety) in classes taught by native and non-native speakers of English. In this study, two groups of students (90 in total), of whom 38 were in an NS (native speaker) class and 52 in an NNS (non-native speaker) class, were taking an English as a second language course for 22 hours a week at Erzincan…

  16. Spatial arrangement overrules environmental factors to structure native and non-native assemblages of synanthropic harvestmen.

    Directory of Open Access Journals (Sweden)

    Christoph Muster

    Full Text Available Understanding how space affects the occurrence of native and non-native species is essential for inferring processes that shape communities. However, studies considering spatial and environmental variables for the entire community - as well as for the native and non-native assemblages in a single study - are scarce for animals. Harvestmen communities in central Europe have undergone drastic turnovers during the past decades, with several newly immigrated species, and thus provide a unique system to study such questions. We studied the wall-dwelling harvestmen communities from 52 human settlements in Luxembourg and found the assemblages to be largely dominated by non-native species (64% of specimens). Community structure was analysed using Moran's eigenvector maps as spatial variables, and landcover variables at different radii (500 m, 1000 m, 2000 m) in combination with climatic parameters as environmental variables. A surprisingly high portion of pure spatial variation (15.7% of total variance) exceeded the environmental (10.6%) and shared (4%) components of variation, but we found only minor differences between native and non-native assemblages. This could result from the ecological flexibility of both native and non-native harvestmen, which are not restricted to urban habitats but also inhabit surrounding semi-natural landscapes. Nevertheless, urban landcover variables explained more variation in the non-native community, whereas coverage of semi-natural habitats (forests, rivers) at broader radii better explained the native assemblage. This indicates that some urban characteristics apparently facilitate the establishment of non-native species. We found no evidence for competitive replacement of native by invasive species, but a community with a novel combination of native and non-native species.
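
    The pure spatial, pure environmental, and shared components reported here (15.7%, 10.6%, 4%) follow the standard variance-partitioning logic of comparing fits of nested models. A sketch of that decomposition using plain R² values on synthetic data, not the study's ordination-based analysis:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        def r2(X, y):
            return LinearRegression().fit(X, y).score(X, y)

        # Synthetic stand-ins: spatial eigenvectors (as Moran's eigenvector maps
        # would supply) and environmental predictors for one community axis.
        rng = np.random.default_rng(5)
        n = 52  # one row per settlement, mirroring the study design
        space = rng.normal(size=(n, 3))
        env = rng.normal(size=(n, 4))
        y = space @ [0.8, 0.4, 0.0] + env @ [0.3, 0.0, 0.2, 0.0] + rng.normal(size=n)

        r2_all = r2(np.hstack([space, env]), y)
        r2_space, r2_env = r2(space, y), r2(env, y)
        print(f"pure spatial : {r2_all - r2_env:.3f}")
        print(f"pure environ.: {r2_all - r2_space:.3f}")
        print(f"shared       : {r2_space + r2_env - r2_all:.3f}")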

  17. An Analysis of Student Evaluations of Native and Non Native Korean Foreign Language Teachers

    Directory of Open Access Journals (Sweden)

    Julie Damron

    2009-08-01

    Full Text Available In an effort to analyze the strengths and weaknesses of native and non-native teaching assistants and part-time teachers (both referred to as TAs in this article), students completed 632 evaluations of Korean language TAs from 2005 to 2008, and these evaluations were compiled for an analysis of variance (ANOVA). The evaluations were categorized into three groups of TAs: native Korean-speaking female, native Korean-speaking male, and non-native male; non-native females would have been included in the study, but there were not enough non-native female teachers to have a reliable sample. In an effort to encourage more self-examined teaching practices, this study addresses the greatest strengths and weaknesses of each group. Results revealed several significant differences between the ratings of the groups: native female TAs rated lowest overall, and non-native male TAs rated highest overall. The most prominent differences between groups occurred in ratings of the amount students learned, TAs' preparedness, TAs' active involvement in students' learning, TAs' enthusiasm, and TAs' tardiness. This study reviews students' written comments on the evaluations and proposes possible causes of these findings, concluding that differences in ratings are based on both teaching patterns associated with each group of TAs and student response bias that favors non-native male speakers. Teaching patterns include a tendency for native (Korean) female TAs to teach using a lecture format and non-native male TAs to teach using a discussion format; for native TAs to have difficulty adapting to the language level of the students; and a more visible enthusiasm for Korean culture held by non-native TAs. Causes for bias may include "othering" of females and natives, TA selection procedures, and trends in evaluating TAs based on language level.
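
    A minimal sketch of the one-way ANOVA across the three TA groups; the scores below are invented, with group means merely chosen to echo the reported ordering (native female lowest, non-native male highest):

        import numpy as np
        from scipy.stats import f_oneway

        # Toy evaluation scores on a 1-5 scale; 632 evaluations in total,
        # matching the abstract's count but otherwise entirely synthetic.
        rng = np.random.default_rng(6)
        native_female = rng.normal(3.6, 0.6, 210).clip(1, 5)
        native_male = rng.normal(3.9, 0.6, 210).clip(1, 5)
        nonnative_male = rng.normal(4.3, 0.6, 212).clip(1, 5)

        f, p = f_oneway(native_female, native_male, nonnative_male)
        print(f"F = {f:.1f}, p = {p:.2e}")  # post-hoc comparisons would follow if p < .05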

  18. Trophic consequences of non-native pumpkinseed Lepomis gibbosus for native pond fishes

    OpenAIRE

    Copp, G. H.; Britton, J R; Guo, Z.; Edmonds-Brown, V; Pegg, Josie; L. VILIZZI; Davison, P.

    2017-01-01

    Introduced non-native fishes can cause considerable adverse impacts on freshwater ecosystems. The pumpkinseed Lepomis gibbosus, a North American centrarchid, is one of the most widely distributed non-native fishes in Europe, having established self-sustaining populations in at least 28 countries, including the U.K. where it is predicted to become invasive under warmer climate conditions. To predict the consequences of increased invasiveness, a field experiment was completed over a summer peri...

19. Non-speech Sounds Affect the Perception of Speech Sounds in Chinese Listeners

    Institute of Scientific and Technical Information of China (English)

    刘文理; 乐国安

    2012-01-01

    Using a priming paradigm with native Chinese listeners as participants, this study examined whether non-speech sounds affect the perception of speech sounds. Experiment 1 examined the influence of pure tones on the perception of a consonant-category continuum and found that the tones affected identification along the continuum, showing a spectral contrast effect. Experiment 2 examined the influence of pure and complex tones on vowel perception and found that pure or complex tones matching the vowels' formant frequencies speeded vowel identification, showing a priming effect. Both experiments consistently found that non-speech sounds can affect the perception of speech sounds, suggesting that speech perception also involves a pre-speech stage of spectral feature analysis, consistent with the auditory theory of speech perception. A long-standing debate in the field of speech perception concerns whether specialized processing mechanisms are necessary to perceive speech sounds. The motor theory argues that speech perception is a special process and that non-speech sounds do not affect the perception of speech sounds. The auditory theory suggests that speech perception can be understood in terms of general auditory processes shared with the perception of non-speech sounds. Findings from English-speaking subjects indicate that the processing of non-speech sounds affects the perception of speech sounds, but few such studies have been conducted in Chinese. The present study administered two experiments to examine whether the processing of non-speech sounds could affect the perception of speech segments in Chinese listeners. In Experiment 1, speech sounds were a synthesized consonant continuum ranging from /ba/ to /da/. Non-speech sounds were two sine-wave tones, with frequencies equal to the F2 onset frequencies of /ba/ and /da/, respectively. Following the two tones, the /ba/-/da/ series were presented with a 50 ms ISI. Undergraduate participants were asked to identify the speech sounds. The results showed that non-speech tones influenced identification of speech targets: when the frequency of the tone was equal to the F2 onset frequency of /ba/, participants were more likely to identify consonant
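
    The priming manipulation in Experiment 1, pure tones at the F2 onset frequencies of /ba/ and /da/ followed by a 50 ms ISI, is straightforward to sketch. The frequencies below are typical textbook values, not the study's actual stimulus parameters:

        import numpy as np

        def pure_tone(freq_hz, dur_s=0.1, fs=44100, ramp_s=0.005):
            """Synthesize a pure-tone prime with onset/offset ramps."""
            t = np.arange(int(dur_s * fs)) / fs
            tone = np.sin(2 * np.pi * freq_hz * t)
            ramp = int(ramp_s * fs)
            env = np.ones_like(tone)
            env[:ramp] = np.linspace(0, 1, ramp)
            env[-ramp:] = np.linspace(1, 0, ramp)
            return tone * env

        # Assumed F2 onset values for /ba/ (low) and /da/ (high); the abstract
        # does not give the exact frequencies used.
        prime_ba = pure_tone(1100.0)
        prime_da = pure_tone(1700.0)
        isi = np.zeros(int(0.05 * 44100))           # the 50 ms ISI named above
        trial = np.concatenate([prime_ba, isi])     # the syllable would follow
        print(f"prime + ISI duration: {trial.size / 44100 * 1000:.0f} ms")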

  20. Setting Priorities for Monitoring and Managing Non-native Plants: Toward a Practical Approach

    Science.gov (United States)

    Koch, Christiane; Jeschke, Jonathan M.; Overbeck, Gerhard E.; Kollmann, Johannes

    2016-09-01

    Land managers face the challenge of setting priorities in monitoring and managing non-native plant species, as resources are limited and not all non-natives become invasive. Existing frameworks that have been proposed to rank non-native species require extensive information on their distribution, abundance, and impact. This information is difficult to obtain and often not available for many species and regions. National watch or priority lists are helpful, but it is questionable whether they provide sufficient information for environmental management on a regional scale. We therefore propose a decision tree that ranks species based on simpler albeit robust information, yet still provides reliable management recommendations. To test the decision tree, we collected and evaluated distribution data on non-native plants in highland grasslands of Southern Brazil. We compared the results with a national list from the Brazilian Invasive Species Database for the state to discuss advantages and disadvantages of the different approaches on a regional scale. Out of 38 non-native species found, only four were also present on the national list. If management relied solely on this list, many species that the decision tree identified as spreading would go unnoticed. With the suggested scheme, it is possible to assign species to active management, to monitoring, or to further evaluation, as sketched below. While national lists are certainly important, management on a regional scale should employ additional tools that adequately consider the actual risk of non-natives becoming invasive.
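
    The abstract does not reproduce the decision tree itself, so the sketch below only illustrates the style of rule-based triage it proposes, assigning species to active management, monitoring, or further evaluation; the criteria and their ordering are invented:

        def triage(non_native: bool, spreading: bool,
                   impact_known: bool, high_impact: bool) -> str:
            """Rule-based triage in the spirit of the proposed decision tree;
            the rules here are illustrative, not the published tree."""
            if not non_native:
                return "no action (native species)"
            if spreading and impact_known and high_impact:
                return "active management"
            if spreading:
                return "further evaluation of impact"
            return "monitoring"

        # Example: a non-native grass observed to spread, impact not yet studied.
        print(triage(non_native=True, spreading=True,
                     impact_known=False, high_impact=False))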