WorldWideScience

Sample records for nonnative speech discrimination

  1. Visual-only discrimination between native and non-native speech

    NARCIS (Netherlands)

    Georgakis, Christos; Petridis, Stavros; Pantic, Maja

    2014-01-01

    Accent is an important biometric characteristic that is defined by the presence of specific traits in the speaking style of an individual. These are identified by patterns in the speech production system, such as those present in the vocal tract or in lip movements. Evidence from linguistics and spe

  2. Perceptual assimilation and discrimination of non-native vowel contrasts

    OpenAIRE

    2014-01-01

    Research on language-specific tuning in speech perception has focused mainly on consonants, while that on non-native vowel perception has failed to address whether the same principles apply. Therefore, non-native vowel perception was investigated here in light of relevant theoretical models: The Perceptual Assimilation Model (PAM) and the Natural Referent Vowel (NRV) framework. American-English speakers completed discrimination and L1-assimilation (categorization and goodnes...

  3. Perceptual assimilation and discrimination of non-native vowel contrasts.

    Science.gov (United States)

    Tyler, Michael D; Best, Catherine T; Faber, Alice; Levitt, Andrea G

    2014-01-01

    Research on language-specific tuning in speech perception has focused mainly on consonants, while that on non-native vowel perception has failed to address whether the same principles apply. Therefore, non-native vowel perception was investigated here in light of relevant theoretical models: the Perceptual Assimilation Model (PAM) and the Natural Referent Vowel (NRV) framework. American-English speakers completed discrimination and native language assimilation (categorization and goodness rating) tests on six nonnative vowel contrasts. Discrimination was consistent with PAM assimilation types, but asymmetries predicted by NRV were only observed for single-category assimilations, suggesting that perceptual assimilation might modulate the effects of vowel peripherality on non-native vowel perception.

  4. Perceptual assimilation and discrimination of non-native vowel contrasts

    Science.gov (United States)

    Tyler, Michael D.; Best, Catherine T.; Faber, Alice; Levitt, Andrea G.

    2014-01-01

    Research on language-specific tuning in speech perception has focused mainly on consonants, while that on non-native vowel perception has failed to address whether the same principles apply. Therefore, non-native vowel perception was investigated here in light of relevant theoretical models: The Perceptual Assimilation Model (PAM) and the Natural Referent Vowel (NRV) framework. American-English speakers completed discrimination and L1-assimilation (categorization and goodness rating) tests on six non-native vowel contrasts. Discrimination was consistent with PAM assimilation types, but asymmetries predicted by NRV were only observed for single-category assimilations, suggesting that perceptual assimilation might modulate the effects of vowel peripherality on non-native vowel perception. PMID:24923313

  5. Auditory free classification of nonnative speech

    Science.gov (United States)

    Atagi, Eriko; Bent, Tessa

    2013-01-01

    Through experience with speech variability, listeners build categories of indexical speech characteristics including categories for talker, gender, and dialect. The auditory free classification task—a task in which listeners freely group talkers based on audio samples—has been a useful tool for examining listeners’ representations of some of these characteristics including regional dialects and different languages. The free classification task was employed in the current study to examine the perceptual representation of nonnative speech. The category structure and salient perceptual dimensions of nonnative speech were investigated from two perspectives: general similarity and perceived native language background. Talker intelligibility and whether native talkers were included were manipulated to test stimulus set effects. Results showed that degree of accent was a highly salient feature of nonnative speech for classification based on general similarity and on perceived native language background. This salience, however, was attenuated when listeners were listening to highly intelligible stimuli and attending to the talkers’ native language backgrounds. These results suggest that the context in which nonnative speech stimuli are presented—such as the listeners’ attention to the talkers’ native language and the variability of stimulus intelligibility—can influence listeners’ perceptual organization of nonnative speech. PMID:24363470

  6. Auditory free classification of nonnative speech.

    Science.gov (United States)

    Atagi, Eriko; Bent, Tessa

    2013-11-01

    Through experience with speech variability, listeners build categories of indexical speech characteristics including categories for talker, gender, and dialect. The auditory free classification task, a task in which listeners freely group talkers based on audio samples, has been a useful tool for examining listeners' representations of some of these characteristics including regional dialects and different languages. The free classification task was employed in the current study to examine the perceptual representation of nonnative speech. The category structure and salient perceptual dimensions of nonnative speech were investigated from two perspectives: general similarity and perceived native language background. Talker intelligibility and whether native talkers were included were manipulated to test stimulus set effects. Results showed that degree of accent was a highly salient feature of nonnative speech for classification based on general similarity and on perceived native language background. This salience, however, was attenuated when listeners were listening to highly intelligible stimuli and attending to the talkers' native language backgrounds. These results suggest that the context in which nonnative speech stimuli are presented, such as the listeners' attention to the talkers' native language and the variability of stimulus intelligibility, can influence listeners' perceptual organization of nonnative speech.

  7. Speech intelligibility of native and non-native speech

    NARCIS (Netherlands)

    Wijngaarden, S.J. van

    1999-01-01

    The intelligibility of speech is known to be lower if the talker is non-native instead of native for the given language. This study is aimed at quantifying the overall degradation due to acoustic-phonetic limitations of non-native talkers of Dutch, specifically of Dutch-speaking Americans who have l

  8. The role of abstraction in non-native speech perception.

    Science.gov (United States)

    Pajak, Bozena; Levy, Roger

    2014-09-01

    The end-result of perceptual reorganization in infancy is currently viewed as a reconfigured perceptual space, "warped" around native-language phonetic categories, which then acts as a direct perceptual filter on any non-native sounds: naïve-listener discrimination of non-native-sounds is determined by their mapping onto native-language phonetic categories that are acoustically/articulatorily most similar. We report results that suggest another factor in non-native speech perception: some perceptual sensitivities cannot be attributed to listeners' warped perceptual space alone, but rather to enhanced general sensitivity along phonetic dimensions that the listeners' native language employs to distinguish between categories. Specifically, we show that the knowledge of a language with short and long vowel categories leads to enhanced discrimination of non-native consonant length contrasts. We argue that these results support a view of perceptual reorganization as the consequence of learners' hierarchical inductive inferences about the structure of the language's sound system: infants not only acquire the specific phonetic category inventory, but also draw higher-order generalizations over the set of those categories, such as the overall informativity of phonetic dimensions for sound categorization. Non-native sound perception is then also determined by sensitivities that emerge from these generalizations, rather than only by mappings of non-native sounds onto native-language phonetic categories.

  9. Fluency in native and nonnative English speech

    CERN Document Server

    Götz, Sandra

    2013-01-01

    This book takes a new and holistic approach to fluency in English speech and differentiates between productive, perceptive, and nonverbal fluency. The in-depth corpus-based description of productive fluency points out major differences of how fluency is established in native and nonnative speech. It also reveals areas in which even highly advanced learners of English still deviate strongly from the native target norm and in which they have already approximated to it. Based on these findings, selected learners are subjected to native speakers' ratings of seven perceptive fluency variables in or

  10. Overnight consolidation promotes generalization across talkers in the identification of nonnative speech sounds.

    Science.gov (United States)

    Earle, F Sayako; Myers, Emily B

    2015-01-01

    This investigation explored the generalization of phonetic learning across talkers following training on a nonnative (Hindi dental and retroflex) contrast. Participants were trained in two groups, either in the morning or in the evening. Discrimination and identification performance was assessed in the trained talker and an untrained talker three times over 24 h following training. Results suggest that overnight consolidation promotes generalization across talkers in identification, but not necessarily discrimination, of nonnative speech sounds.

  11. Discriminative Phoneme Sequences Extraction for Non-Native Speaker's Origin Classification

    CERN Document Server

    Bouselmi, Ghazi; Illina, Irina; Haton, Jean-Paul

    2007-01-01

    In this paper we present an automated method for the classification of the origin of non-native speakers. The origin of non-native speakers could be identified by a human listener based on the detection of typical pronunciations for each nationality. Thus we suppose the existence of several phoneme sequences that might allow the classification of the origin of non-native speakers. Our new method is based on the extraction of discriminative sequences of phonemes from a non-native English speech database. These sequences are used to construct a probabilistic classifier for the speakers' origin. The existence of discriminative phone sequences in non-native speech is a significant result of this work. The system that we have developed achieved a significant correct classification rate of 96.3% and a significant error reduction compared to some other tested techniques.
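The abstract does not detail how the discriminative sequences are extracted or scored. As a loose illustration of the general idea only (not the authors' system), the toy sketch below scores phone trigrams per origin class with add-one-smoothed log-likelihoods; all class names and data are invented:

```python
from collections import Counter, defaultdict
import math

def ngrams(phones, n=3):
    """All overlapping phone n-grams in a sequence."""
    return [tuple(phones[i:i + n]) for i in range(len(phones) - n + 1)]

class OriginClassifier:
    """Toy origin classifier over phone trigrams: each origin class keeps
    trigram counts, and a test utterance is assigned to the class with the
    highest add-one-smoothed log-likelihood (illustrative only)."""

    def __init__(self, n=3):
        self.n = n
        self.counts = defaultdict(Counter)   # origin -> trigram counts
        self.totals = Counter()              # origin -> total trigram count
        self.vocab = set()

    def train(self, origin, phone_seqs):
        for seq in phone_seqs:
            for g in ngrams(seq, self.n):
                self.counts[origin][g] += 1
                self.totals[origin] += 1
                self.vocab.add(g)

    def classify(self, phones):
        v = len(self.vocab) or 1
        def loglik(origin):
            return sum(
                math.log((self.counts[origin][g] + 1)
                         / (self.totals[origin] + v))
                for g in ngrams(phones, self.n))
        return max(self.counts, key=loglik)
```

A real system would learn which sequences are actually discriminative from a non-native speech database; here every observed trigram simply contributes to the class likelihood.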

  12. Non-native speech perception in adverse conditions: A review

    NARCIS (Netherlands)

    Garcia Lecumberri, M.L.; Cooke, M.P.; Cutler, A.

    2010-01-01

    If listening in adverse conditions is hard, then listening in a foreign language is doubly so: non-native listeners have to cope with both imperfect signals and imperfect knowledge. Comparison of native and non-native listener performance in speech-in-noise tasks helps to clarify the role of prior l

  13. Intelligibility of native and non-native Dutch Speech

    NARCIS (Netherlands)

    Wijngaarden, S.J. van

    2001-01-01

    The intelligibility of speech is known to be lower if the speaker is non-native instead of native for the given language. This study is aimed at quantifying the overall degradation due to limitations of non-native speakers of Dutch, specifically of Dutch-speaking Americans who have lived in the Neth

  14. Perceptual learning of non-native speech contrast and functioning of the olivocochlear bundle.

    Science.gov (United States)

    Kumar, Ajith U; Hegde, Medha; Mayaleela

    2010-07-01

    The purpose of this study was to investigate the relationship between perceptual learning of non-native speech sounds and strength of feedback in the medial olivocochlear bundle (MOCB). Discrimination abilities of non-native speech sounds (Malayalam) from their native counterparts (Hindi) were monitored during 12 days of training. Contralateral inhibition of otoacoustic emissions was measured on the first and twelfth day of training. Results suggested that training significantly improved reaction time and accuracy of identification of non-native speech sounds. There was a significant positive correlation between the slope (linear) of identification scores and change in distortion product otoacoustic emission inhibition at 3000 Hz. Findings suggest that during perceptual learning feedback from the MOCB may fine-tune the brain stem and/or cochlea. However, such a change, isolated to a narrow frequency region, represents a limited effect and needs further exploration to confirm and/or extend any generalization of findings.

  15. Sleep and native language interference affect non-native speech sound learning.

    Science.gov (United States)

    Earle, F Sayako; Myers, Emily B

    2015-12-01

    Adults learning a new language are faced with a significant challenge: non-native speech sounds that are perceptually similar to sounds in one's native language can be very difficult to acquire. Sleep and native language interference, 2 factors that may help to explain this difficulty in acquisition, are addressed in 3 studies. Results of Experiment 1 showed that participants trained on a non-native contrast at night improved in discrimination 24 hr after training, while those trained in the morning showed no such improvement. Experiments 2 and 3 addressed the possibility that incidental exposure to perceptually similar native language speech sounds during the day interfered with maintenance in the morning group. Taken together, results show that the ultimate success of non-native speech sound learning depends not only on the similarity of learned sounds to the native language repertoire, but also on interference from native language sounds before sleep.

  16. Using the Speech Transmission Index to predict the intelligibility of non-native speech

    Science.gov (United States)

    van Wijngaarden, Sander J.; Steeneken, Herman J. M.; Houtgast, Tammo; Bronkhorst, Adelbert W.

    2002-05-01

    The calibration of the Speech Transmission Index (STI) is based on native speech, presented to native listeners. This means that the STI predicts speech intelligibility under the implicit assumption of fully native communication. In order to assess effects of both non-native production and non-native perception of speech, the intelligibility of short sentences was measured in various non-native scenarios, as a function of speech-to-noise ratio. Since each speech-to-noise ratio is associated with a unique STI value, this establishes the relation between sentence intelligibility and STI. The difference between native and non-native intelligibility as a function of STI was used to calculate a correction function for the STI for each separate non-native scenario. This correction function was applied to the STI ranges corresponding to certain intelligibility categories (bad-excellent). Depending on the proficiency of non-native talkers and listeners, the category boundaries were found to differ from the standard (native) boundaries by STI values up to 0.30 (on the standard 0-1 scale). The corrections needed for non-native listeners are greater than for non-native talkers with a similar level of proficiency. For some categories of non-native communicators, the qualification excellent requires an STI higher than 1.00, and therefore cannot be reached.

  17. The intelligibility of Lombard speech for non-native listeners.

    Science.gov (United States)

    Cooke, Martin; Lecumberri, Maria Luisa García

    2012-08-01

    Speech produced in the presence of noise (Lombard speech) is more intelligible in noise than speech produced in quiet, but the origin of this advantage is poorly understood. Some of the benefit appears to arise from auditory factors such as energetic masking release, but a role for linguistic enhancements similar to those exhibited in clear speech is possible. The current study examined the effect of Lombard speech in noise and in quiet for Spanish learners of English. Non-native listeners showed a substantial benefit of Lombard speech in noise, although not quite as large as that displayed by native listeners tested on the same task in an earlier study [Lu and Cooke (2008), J. Acoust. Soc. Am. 124, 3261-3275]. The difference between the two groups is unlikely to be due to energetic masking. However, Lombard speech was less intelligible in quiet for non-native listeners than normal speech. The relatively small difference in Lombard benefit in noise for native and non-native listeners, along with the absence of Lombard benefit in quiet, suggests that any contribution of linguistic enhancements in the Lombard benefit for natives is small.

  18. Using the Speech Transmission Index for predicting non-native speech intelligibility

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Houtgast, T.; Steeneken, H.J.M.

    2004-01-01

    While the Speech Transmission Index (STI) is widely applied for prediction of speech intelligibility in room acoustics and telecommunication engineering, it is unclear how to interpret STI values when non-native talkers or listeners are involved. Based on subjectively measured psychometric functions

  20. Discriminative learning for speech recognition

    CERN Document Server

    He, Xiaodong

    2008-01-01

    In this book, we introduce the background and mainstream methods of probabilistic modeling and discriminative parameter optimization for speech recognition. The specific models treated in depth include the widely used exponential-family distributions and the hidden Markov model. A detailed study is presented on unifying the common objective functions for discriminative learning in speech recognition, namely maximum mutual information (MMI), minimum classification error, and minimum phone/word error. The unification is presented, with rigorous mathematical analysis, in a common rational-functio

  1. Using the Speech Transmission Index for predicting non-native speech intelligibility

    Science.gov (United States)

    van Wijngaarden, Sander J.; Bronkhorst, Adelbert W.; Houtgast, Tammo; Steeneken, Herman J. M.

    2004-03-01

    While the Speech Transmission Index (STI) is widely applied for prediction of speech intelligibility in room acoustics and telecommunication engineering, it is unclear how to interpret STI values when non-native talkers or listeners are involved. Based on subjectively measured psychometric functions for sentence intelligibility in noise, for populations of native and non-native communicators, a correction function for the interpretation of the STI is derived. This function is applied to determine the appropriate STI ranges with qualification labels ("bad" to "excellent"), for specific populations of non-natives. The correction function is derived by relating the non-native psychometric function to the native psychometric function by a single parameter (ν). For listeners, the ν parameter is found to be highly correlated with linguistic entropy. It is shown that the proposed correction function is also valid for conditions featuring bandwidth limiting and reverberation.
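The abstract does not reproduce the functional form linking the two psychometric functions through ν. The sketch below is illustrative only: it assumes logistic psychometric functions of STI with invented parameters, and treats ν as a factor scaling the slope of the non-native curve, then maps a standard category boundary onto the non-native scale:

```python
import math

def logistic(x, mid, slope):
    """Assumed psychometric function: intelligibility as a function of STI."""
    return 1.0 / (1.0 + math.exp(-(x - mid) / slope))

def inv_logistic(p, mid, slope):
    """STI at which the logistic curve reaches intelligibility p."""
    return mid - slope * math.log(1.0 / p - 1.0)

# Invented native-curve parameters, for illustration only
MID, SLOPE = 0.5, 0.1

def corrected_boundary(sti_boundary, nu):
    """Map a standard (native) STI category boundary onto the non-native
    scale: find the STI at which the assumed non-native curve reaches the
    intelligibility the native curve attains at the standard boundary.
    Assumption: the non-native curve equals the native curve with its
    slope parameter multiplied by nu (nu > 1 means a shallower curve,
    i.e. lower proficiency)."""
    target = logistic(sti_boundary, MID, SLOPE)
    return inv_logistic(target, MID, SLOPE * nu)
```

With these invented numbers, a boundary at STI 0.6 moves to 0.65 for ν = 1.5, in the same spirit as the paper's finding that boundaries can shift upward, by as much as 0.30 STI, for some non-native populations.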

  2. Musical ability and non-native speech-sound processing are linked through sensitivity to pitch and spectral information.

    Science.gov (United States)

    Kempe, Vera; Bublitz, Dennis; Brooks, Patricia J

    2015-05-01

    Is the observed link between musical ability and non-native speech-sound processing due to enhanced sensitivity to acoustic features underlying both musical and linguistic processing? To address this question, native English speakers (N = 118) discriminated Norwegian tonal contrasts and Norwegian vowels. Short tones differing in temporal, pitch, and spectral characteristics were used to measure sensitivity to the various acoustic features implicated in musical and speech processing. Musical ability was measured using Gordon's Advanced Measures of Musical Audiation. Results showed that sensitivity to specific acoustic features played a role in non-native speech-sound processing: Controlling for non-verbal intelligence, prior foreign language-learning experience, and sex, sensitivity to pitch and spectral information partially mediated the link between musical ability and discrimination of non-native vowels and lexical tones. The findings suggest that while sensitivity to certain acoustic features partially mediates the relationship between musical ability and non-native speech-sound processing, complex tests of musical ability also tap into other shared mechanisms. © 2014 The British Psychological Society.

  3. Dissociating Cortical Activity during Processing of Native and Non-Native Audiovisual Speech from Early to Late Infancy

    Directory of Open Access Journals (Sweden)

    Eswen Fava

    2014-08-01

    Initially, infants are capable of discriminating phonetic contrasts across the world’s languages. Starting between seven and ten months of age, they gradually lose this ability through a process of perceptual narrowing. Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech). Thus far, tracking the developmental trajectory of this tuning process has been focused primarily on auditory speech alone, and generally using isolated sounds. But infants learn from speech produced by people talking to them, meaning they learn from a complex audiovisual signal. Here, we use near-infrared spectroscopy to measure blood concentration changes in the bilateral temporal cortices of infants in three different age groups: 3-to-6 months, 7-to-10 months, and 11-to-14 months. Critically, all three groups of infants were tested with continuous audiovisual speech in both their native and another, unfamiliar language. We found that at each age range, infants showed different patterns of cortical activity in response to the native and non-native stimuli. Infants in the youngest group showed bilateral cortical activity that was greater overall in response to non-native relative to native speech; the oldest group showed left lateralized activity in response to native relative to non-native speech. These results highlight perceptual tuning as a dynamic process that happens across modalities and at different levels of stimulus complexity.

  4. How much does language proficiency by non-native listeners influence speech audiometric tests in noise?

    Science.gov (United States)

    Warzybok, Anna; Brand, Thomas; Wagener, Kirsten C; Kollmeier, Birger

    2015-01-01

    The current study investigates the extent to which the linguistic complexity of three commonly employed speech recognition tests and second language proficiency influence speech recognition thresholds (SRTs) in noise in non-native listeners. SRTs were measured for non-natives and natives using three German speech recognition tests: the digit triplet test (DTT), the Oldenburg sentence test (OLSA), and the Göttingen sentence test (GÖSA). Sixty-four non-native and eight native listeners participated. Non-natives can show native-like SRTs in noise only for the linguistically easy speech material (DTT). Furthermore, the limitation of phonemic-acoustical cues in digit triplets affects speech recognition to the same extent in non-natives and natives. For more complex and less familiar speech materials, non-natives, ranging from basic to advanced proficiency in German, require on average 3-dB better signal-to-noise ratio for the OLSA and 6-dB for the GÖSA to obtain 50% speech recognition compared to native listeners. In clinical audiology, SRT measurements with a closed-set speech test (i.e. DTT for screening or OLSA test for clinical purposes) should be used with non-native listeners rather than open-set speech tests (such as the GÖSA or HINT), especially if a closed-set version in the patient's own native language is available.

  5. Speech Recognition of Non-Native Speech Using Native and Non-Native Acoustic Models

    Science.gov (United States)

    2000-08-01

    [Abstract not recoverable: the extracted text for this record is garbled, containing only fragments of the title, author contact details (David A. van Leeuwen and Rosemary Orr, TNO Human Factors Research), and parts of the reference list.]

  6. Influence of native and non-native multitalker babble on speech recognition in noise

    Directory of Open Access Journals (Sweden)

    Chandni Jain

    2014-03-01

    The aim of the study was to assess speech recognition in noise using multitalker babble of a native and a non-native language at two different signal-to-noise ratios. Speech recognition in noise was assessed in 60 participants (18 to 30 years) with normal hearing sensitivity, having Malayalam or Kannada as their native language. For this purpose, 6- and 10-talker babble was generated in Kannada and in Malayalam. Speech recognition was assessed for native listeners of both languages in the presence of native and non-native multitalker babble. Results showed that speech recognition in noise was significantly higher at a 0 dB signal-to-noise ratio (SNR) than at -3 dB SNR for both languages. Performance of Kannada listeners was significantly higher in the presence of native (Kannada) babble than non-native (Malayalam) babble. This was not the case for the Malayalam listeners, who performed equally well with native (Malayalam) and non-native (Kannada) babble. The results of the present study highlight the importance of using native multitalker babble for Kannada listeners rather than non-native babble, and of considering each SNR when estimating speech-recognition-in-noise scores. Further research is needed to assess speech recognition in Malayalam listeners in the presence of other non-native backgrounds of various types.

  7. The influence of non-native language proficiency on speech perception performance

    Directory of Open Access Journals (Sweden)

    Lisa Kilman

    2014-07-01

    The present study examined to what extent proficiency in a non-native language influences speech perception in noise. We explored how English proficiency affected native (Swedish) and non-native (English) speech perception in four speech reception threshold (SRT) conditions, including two energetic maskers (stationary noise, fluctuating noise) and two informational maskers (Swedish two-talker babble, English two-talker babble). Twenty-three normal-hearing native Swedish listeners participated, aged between 28 and 64 years. The participants also performed standardized tests of English proficiency, non-verbal reasoning, and working memory capacity. Our focus on proficiency, together with the assessment of external as well as internal, listener-related factors, allowed us to examine which variables explained intra- and inter-individual differences in native and non-native speech perception performance. The main result was that for the non-native target, the level of English proficiency is a decisive factor for speech intelligibility in noise: high English proficiency improved performance in all four conditions when the target language was English. The informational maskers interfered more with perception than the energetic maskers, especially in the non-native language. The study also confirmed that SRTs were better when the target language was native rather than non-native.

  8. Effects of training on learning non-native speech contrasts

    Science.gov (United States)

    Sinnott, Joan M.

    2002-05-01

    An animal psychoacoustic procedure was used to train human listeners to categorize two non-native phonemic distinctions. In Exp 1, Japanese perception of the English liquid contrast /r-l/ was examined. In Exp 2, American-English perception of the Hindi dental-retroflex contrast /d-D/ was examined. The training methods were identical in the two studies. The stimuli consisted of 64 CVs produced by four different native talkers (two male, two female) using four different vowels. The procedure involved manually moving a lever to make either a "go-left" or "go-right" response to categorize the stimuli. Feedback was given for correct and incorrect responses after each trial. After 32 training sessions, lasting about 8 weeks, performance was analyzed using both percent correct and response time as measures. Results showed that the Japanese listeners, as a group, were statistically similar to a group of native listeners in categorizing the liquid contrast. In contrast, the American-English listeners were not nativelike in categorizing the dental-retroflex contrast. Hypotheses for the different results in the two experiments are discussed, including possible subject-related variables. In addition, the use of an animal model is proposed to objectively "calibrate" the psychoacoustic salience of various phoneme contrasts used in human speech.

  9. Quantifying the intelligibility of speech in noise for non-native listeners

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Steeneken, H.J.M.; Houtgast, T.

    2002-01-01

    When listening to languages learned at a later age, speech intelligibility is generally lower than when listening to one's native language. The main purpose of this study is to quantify speech intelligibility in noise for specific populations of non-native listeners, only broadly addressing the unde

  10. Emergence of category-level sensitivities in non-native speech sound learning

    Directory of Open Access Journals (Sweden)

    Emily Myers

    2014-08-01

    Over the course of development, speech sounds that are contrastive in one’s native language tend to become perceived categorically: that is, listeners are unaware of variation within phonetic categories while showing excellent sensitivity to speech sounds that span linguistically meaningful phonetic category boundaries. The end stage of this developmental process is that the perceptual systems that handle acoustic-phonetic information show special tuning to native language contrasts, and as such, category-level information appears to be present at even fairly low levels of the neural processing stream. Research on adults acquiring non-native speech categories offers an avenue for investigating the interplay of category-level information and perceptual sensitivities to these sounds as speech categories emerge. In particular, one can observe the neural changes that unfold as listeners learn not only to perceive acoustic distinctions that mark non-native speech sound contrasts, but also to map these distinctions onto category-level representations. An emergent literature on the neural basis of novel and non-native speech sound learning offers new insight into this question. In this review, I will examine this literature in order to answer two key questions. First, where in the neural pathway does sensitivity to category-level phonetic information first emerge over the trajectory of speech sound learning? Second, how do frontal and temporal brain areas work in concert over the course of non-native speech sound learning? Finally, in the context of this literature I will describe a model of speech sound learning in which rapidly adapting access to categorical information in the frontal lobes modulates the sensitivity of stable, slowly adapting responses in the temporal lobes.

  11. How noise and language proficiency influence speech recognition by individual non-native listeners.

    Science.gov (United States)

    Zhang, Jin; Xie, Lingli; Li, Yongjun; Chatterjee, Monita; Ding, Nai

    2014-01-01

    This study investigated how speech recognition in noise is affected by language proficiency for individual non-native speakers. The recognition of English and Chinese sentences was measured as a function of the signal-to-noise ratio (SNR) in sixty native Chinese speakers who had never lived in an English-speaking environment. The recognition score for speech in quiet (which varied from 15% to 92%) was found to be uncorrelated with the speech recognition threshold (SRT_Q/2), i.e., the SNR at which the recognition score drops to 50% of the recognition score in quiet. This result demonstrates separable contributions of language proficiency and auditory processing to speech recognition in noise.
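As a sketch of how such a threshold can be computed, the SNR at which the recognition score falls to half of the quiet score can be read off a measured score-vs-SNR curve by interpolation. The data points below are invented for illustration, not the study's data:

```python
import numpy as np

def srt_half_quiet(snrs, scores, score_in_quiet):
    """SNR at which the recognition score falls to 50% of the score in quiet,
    by linear interpolation on an increasing score-vs-SNR curve."""
    return float(np.interp(0.5 * score_in_quiet, scores, snrs))

# Hypothetical listener: 80% correct in quiet, scores measured at five SNRs
snrs = [-12, -9, -6, -3, 0]                  # dB
scores = [0.10, 0.25, 0.40, 0.60, 0.75]      # proportion of words correct
print(srt_half_quiet(snrs, scores, 0.80))    # SNR where score = 0.40 -> -6.0
```

In practice the psychometric function would be fit (e.g., with a sigmoid) rather than linearly interpolated, but the threshold definition is the same.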

  12. Combined Acoustic and Pronunciation Modelling for Non-Native Speech Recognition

    CERN Document Server

    Bouselmi, Ghazi; Illina, Irina

    2007-01-01

    In this paper, we present several adaptation methods for non-native speech recognition. We have tested pronunciation modelling, MLLR and MAP non-native pronunciation adaptation, and HMM model retraining on the HIWIRE foreign-accented English speech database. The "phonetic confusion" scheme we developed consists of associating with each spoken phone several sequences of confused phones. In our experiments, we used different combinations of acoustic models representing the canonical and the foreign pronunciations: spoken and native models, and models adapted to the non-native accent with MAP and MLLR. The joint use of pronunciation modelling and acoustic adaptation led to further improvements in recognition accuracy. The best combination of these techniques resulted in a relative word error reduction ranging from 46% to 71%.
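The phonetic-confusion idea can be illustrated as a toy lexicon expansion: each canonical phone maps to the phone sequences a non-native speaker may produce for it, and a word's pronunciation is expanded into all combinations. The phone inventory and confusion rules below are invented, not the HIWIRE ones:

```python
from itertools import product

# Hypothetical confusion rules: each canonical (L2) phone maps to the phone
# sequences speakers were observed to produce for it (including the phone itself)
confusion = {
    "th": [["th"], ["s"], ["t"]],    # e.g. /θ/ often realized as /s/ or /t/
    "i":  [["i"], ["ih"]],
    "ng": [["ng"], ["n", "g"]],
}

def pronunciation_variants(canonical):
    """Expand a canonical phone sequence into every accented variant."""
    options = [confusion.get(p, [[p]]) for p in canonical]
    return [sum(choice, []) for choice in product(*options)]

variants = pronunciation_variants(["th", "i", "ng"])
print(len(variants))  # 3 * 2 * 2 = 12 alternative pronunciations
```

A real system would keep only variants attested in the non-native database, precisely to avoid the combinatorial blow-up this toy version exhibits.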

  13. Quantifying the intelligibility of speech in noise for non-native talkers

    Science.gov (United States)

    van Wijngaarden, Sander J.; Steeneken, Herman J. M.; Houtgast, Tammo

    2002-12-01

    The intelligibility of speech pronounced by non-native talkers is generally lower than that of speech pronounced by native talkers, especially under adverse conditions, such as high levels of background noise. The effect of foreign accent on speech intelligibility was investigated quantitatively through a series of experiments involving voices of 15 talkers, differing in language background, age of second-language (L2) acquisition and experience with the target language (Dutch). Overall speech intelligibility of L2 talkers in noise is predicted with reasonable accuracy from accent ratings by native listeners, as well as from L2 talkers' self-ratings of proficiency. For non-native speech, unlike native speech, the intelligibility of short messages (sentences) cannot be fully predicted by phoneme-based intelligibility tests. Although incorrect recognition of specific phonemes certainly occurs as a result of foreign accent, the effect of reduced phoneme recognition on the intelligibility of sentences may range from severe to virtually absent, depending on (for instance) the speech-to-noise ratio. Objective acoustic-phonetic analyses of accented speech were also carried out, but satisfactory overall predictions of speech intelligibility could not be obtained with relatively simple acoustic-phonetic measures.

  14. Cognitive Processes Underlying Nonnative Speech Production: The Significance of Recurrent Sequences.

    Science.gov (United States)

    Oppenheim, Nancy

    This study was designed to identify whether advanced nonnative speakers of English rely on recurrent sequences to produce fluent speech in conformance with neural network theories and symbolic network theories; participants were 6 advanced, speaking and listening university students, aged 18-37 years (their native countries being Korea, Japan,…

  15. Native Speakers' Perception of Non-Native English Speech

    Science.gov (United States)

    Jaber, Maysa; Hussein, Riyad F.

    2011-01-01

    This study is aimed at investigating the rating and intelligibility of different non-native varieties of English, namely French English, Japanese English and Jordanian English by native English speakers and their attitudes towards these foreign accents. To achieve the goals of this study, the researchers used a web-based questionnaire which…

  16. Phonetic processing of non-native speech in semantic vs non-semantic tasks.

    Science.gov (United States)

    Gustafson, Erin; Engstler, Caroline; Goldrick, Matthew

    2013-12-01

    Research with speakers with acquired production difficulties has suggested phonetic processing is more difficult in tasks that require semantic processing. The current research examined whether similar effects are found in bilingual phonetic processing. English-French bilinguals' productions in picture naming (which requires semantic processing) were compared to those elicited by repetition (which does not require semantic processing). Picture naming elicited slower, more accented speech than repetition. These results provide additional support for theories integrating cognitive and phonetic processes in speech production and suggest that bilingual speech research must take cognitive factors into account when assessing the structure of non-native sound systems.

  17. Fully Automated Non-Native Speech Recognition Using Confusion-Based Acoustic Model Integration

    OpenAIRE

    Bouselmi, Ghazi; Fohr, Dominique; Illina, Irina; Haton, Jean-Paul

    2005-01-01

    This paper presents a fully automated approach for the recognition of non-native speech based on acoustic model modification. For a native language (L1) and a spoken language (L2), pronunciation variants of the phones of L2 are automatically extracted from an existing non-native database as a confusion matrix with sequences of phones of L1. This is done using L1's and L2's ASR systems. This confusion concept deals with the problem of the non-existence of a match between some L2 and L1 phones. The c...

  18. A Hybrid Acoustic and Pronunciation Model Adaptation Approach for Non-native Speech Recognition

    Science.gov (United States)

    Oh, Yoo Rhee; Kim, Hong Kook

    In this paper, we propose a hybrid model adaptation approach in which pronunciation and acoustic models are adapted by incorporating the pronunciation and acoustic variabilities of non-native speech in order to improve the performance of non-native automatic speech recognition (ASR). Specifically, the proposed hybrid model adaptation can be performed at either the state-tying or triphone-modeling level, depending on the level at which acoustic model adaptation is performed. In both methods, we first analyze the pronunciation variant rules of non-native speakers and then classify each rule as either a pronunciation variant or an acoustic variant. The state-tying level hybrid method then adapts pronunciation models and acoustic models by accommodating the pronunciation variants in the pronunciation dictionary and by clustering the states of triphone acoustic models using the acoustic variants, respectively. On the other hand, the triphone-modeling level hybrid method initially adapts pronunciation models in the same way as in the state-tying level hybrid method; however, for the acoustic model adaptation, the triphone acoustic models are then re-estimated based on the adapted pronunciation models and the states of the re-estimated triphone acoustic models are clustered using the acoustic variants. From the Korean-spoken English speech recognition experiments, it is shown that ASR systems employing the state-tying and triphone-modeling level adaptation methods can reduce the average word error rates (WERs) by a relative 17.1% and 22.1% for non-native speech, respectively, when compared to a baseline ASR system.
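The relative WER reductions quoted here follow the usual definition, (baseline − adapted) / baseline, expressed as a percentage. A sketch with made-up WERs (not the paper's figures):

```python
def relative_wer_reduction(baseline_wer, adapted_wer):
    """Relative word error rate reduction, in percent."""
    return 100.0 * (baseline_wer - adapted_wer) / baseline_wer

# Illustrative numbers only: a 40.0% baseline WER for non-native speech
# drops to 31.2% after hybrid adaptation
print(round(relative_wer_reduction(40.0, 31.2), 1))  # 22.0
```

Note the distinction from an absolute reduction: the same adaptation would be an 8.8-point absolute drop but a 22.0% relative one.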

  19. Speech Recognition by Goats, Wolves, Sheep and Non-Natives

    Science.gov (United States)

    2000-08-01

    ...phones that are non-existent in the talker's native language will lead to vowel insertions, and diphthongs are likely to be replaced by a single vowel. Cited works include [Bona97] P. Bonaventura, F. Gallocchio, G. Micca, and [Bona98] P. Bonaventura, F. Gallocchio, J. Mari, G. Micca, "Speech recognition methods for non-...

  20. Inductive Inference in Non-Native Speech Processing and Learning

    Science.gov (United States)

    Pajak, Bozena

    2012-01-01

    Despite extensive research on language acquisition, our understanding of how people learn abstract linguistic structures remains limited. In the phonological domain, we know that perceptual reorganization in infancy results in attuning to native language (L1) phonetic categories and, consequently, in difficulty discriminating and learning…

  1. A Multidimensional Scaling Study of Native and Non-Native Listeners' Perception of Second Language Speech.

    Science.gov (United States)

    Foote, Jennifer A; Trofimovich, Pavel

    2016-04-01

    Second language speech learning is predicated on learners' ability to notice differences between their own language output and that of their interlocutors. Because many learners interact primarily with other second language users, it is crucial to understand which dimensions underlie the perception of second language speech by learners, compared with native speakers. For this study, 15 non-native and 10 native English speakers rated 30-s speech recordings from controlled reading and interview tasks for dissimilarity, using all pairwise combinations of recordings. PROXSCAL multidimensional scaling analyses revealed fluency and aspects of speakers' pronunciation as components underlying listener judgments but showed little agreement across listeners. Results contribute to an understanding of why second language speech learning is difficult and provide implications for language training.
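The general idea of recovering perceptual dimensions from pairwise dissimilarity judgments can be sketched with classical (Torgerson) MDS. Note that PROXSCAL, used in the study, is a different, iterative least-squares algorithm; the dissimilarity matrix below is a toy example, not rating data:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n items in k dimensions from an
    n x n symmetric matrix of pairwise dissimilarities D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)             # eigenvalues, ascending order
    top = np.argsort(vals)[::-1][:k]           # keep the k largest
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# Toy dissimilarities among four recordings (an exactly 2-D configuration)
s5 = 5 ** 0.5
D = np.array([[0, 1, 2, s5],
              [1, 0, s5, 2],
              [2, s5, 0, 1],
              [s5, 2, 1, 0]])
X = classical_mds(D)
# The embedded points reproduce the dissimilarities as 2-D distances
print(round(float(np.linalg.norm(X[0] - X[1])), 2))  # 1.0
```

With real rating data the matrix is rarely exactly Euclidean, which is why stress-minimizing methods like PROXSCAL are preferred; classical MDS then gives only an approximation.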

  2. Correlates of older adults' discrimination of acoustic properties in speech

    NARCIS (Netherlands)

    Neger, T.M.; Janse, E.; Rietveld, A.C.M.

    2015-01-01

    Auditory discrimination of speech stimuli is an essential tool in speech and language therapy, e.g., in dysarthria rehabilitation. It is unclear, however, which listener characteristics are associated with the ability to perceive differences between one's own utterance and target speech. Knowledge a

  3. Decoding speech perception by native and non-native speakers using single-trial electrophysiological data.

    Directory of Open Access Journals (Sweden)

    Alex Brandmeyer

    Full Text Available Brain-computer interfaces (BCIs) are systems that use real-time analysis of neuroimaging data to determine the mental state of their user for purposes such as providing neurofeedback. Here, we investigate the feasibility of a BCI based on speech perception. Multivariate pattern classification methods were applied to single-trial EEG data collected during speech perception by native and non-native speakers. Two principal questions were asked: (1) Can differences in the perceived categories of pairs of phonemes be decoded at the single-trial level? (2) Can these same categorical differences be decoded across participants, within or between native-language groups? Results indicated that classification performance progressively increased with respect to the categorical status (within, boundary, or across) of the stimulus contrast, and was also influenced by the native language of individual participants. Classifier performance showed strong relationships with traditional event-related potential measures and behavioral responses. The results of the cross-participant analysis indicated an overall increase in average classifier performance when trained on data from all participants (native and non-native). A second cross-participant classifier trained only on data from native speakers led to an overall improvement in performance for native speakers, but a reduction in performance for non-native speakers. We also found that the native language of a given participant could be decoded on the basis of EEG data with accuracy above 80%. These results indicate that electrophysiological responses underlying speech perception can be decoded at the single-trial level, and that decoding performance systematically reflects graded changes in the responses related to the phonological status of the stimuli. This approach could be used in extensions of the BCI paradigm to support perceptual learning during second language acquisition.
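A minimal sketch of the single-trial decoding idea, using synthetic feature vectors in place of EEG and a simple nearest-centroid decoder in place of the study's classifier (all data and parameters below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for single-trial EEG features: 100 trials x 32 channels
# per phoneme category, with a small mean shift separating the categories
X = np.vstack([rng.normal(0.0, 1.0, (100, 32)),
               rng.normal(0.6, 1.0, (100, 32))])
y = np.array([0] * 100 + [1] * 100)

def nearest_centroid_cv(X, y, folds=5):
    """Cross-validated accuracy of a nearest-class-centroid decoder."""
    order = rng.permutation(len(y))
    accs = []
    for f in range(folds):
        test = order[f::folds]
        train = np.setdiff1d(order, test)
        c0 = X[train][y[train] == 0].mean(axis=0)   # class centroids on
        c1 = X[train][y[train] == 1].mean(axis=0)   # training trials only
        pred = (np.linalg.norm(X[test] - c1, axis=1)
                < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
        accs.append(float((pred == y[test]).mean()))
    return float(np.mean(accs))

print(nearest_centroid_cv(X, y))  # well above the 0.5 chance level
```

The essential points carried over from the study are cross-validation on held-out trials and accuracy compared against chance level; real EEG decoding would add preprocessing, regularized classifiers, and per-participant analysis.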

  4. Learning foreign sounds in an alien world: videogame training improves non-native speech categorization.

    Science.gov (United States)

    Lim, Sung-joo; Holt, Lori L

    2011-01-01

    Although speech categories are defined by multiple acoustic dimensions, some are perceptually weighted more than others and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: Increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5h across 5 days exhibited improvements in /r/-/l/ perception on par with 2-4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights.

  5. Speech-on-speech masking with variable access to the linguistic content of the masker speech for native and nonnative english speakers.

    Science.gov (United States)

    Calandruccio, Lauren; Bradlow, Ann R; Dhar, Sumitrajit

    2014-04-01

    Masking release for an English sentence-recognition task in the presence of foreign-accented English speech compared with native-accented English speech was reported in Calandruccio et al. (2010a). The masking release appeared to increase as the masker intelligibility decreased. However, it could not be ruled out that spectral differences between the speech maskers were influencing the significant differences observed. The purpose of the current experiment was to minimize spectral differences between speech maskers to determine how various amounts of linguistic information within competing speech affect masking release. A mixed-model design with within-subject (four two-talker speech maskers) and between-subject (listener group) factors was conducted. Speech maskers included native-accented English speech and high-intelligibility, moderate-intelligibility, and low-intelligibility Mandarin-accented English. Normalizing the long-term average speech spectra of the maskers to each other minimized spectral differences between the masker conditions. Three listener groups were tested, including monolingual English speakers with normal hearing, nonnative English speakers with normal hearing, and monolingual English speakers with hearing loss. The nonnative English speakers were from various native language backgrounds, not including Mandarin (or any other Chinese dialect). Listeners with hearing loss had symmetric mild sloping to moderate sensorineural hearing loss. Listeners were asked to repeat back sentences that were presented in the presence of four different two-talker speech maskers. Responses were scored based on the key words within the sentences (100 key words per masker condition). A mixed-model regression analysis was used to analyze the difference in performance scores between the masker conditions and listener groups. Monolingual English speakers with normal hearing benefited when the competing speech signal was foreign accented compared with native

  6. Discriminating non-native vowels on the basis of multimodal, auditory or visual information : Effects on infants' looking patterns and discrimination

    NARCIS (Netherlands)

    Schure, Sophie Ter; Junge, Caroline

    2016-01-01

    Infants' perception of speech sound contrasts is modulated by their language environment, for example by the statistical distributions of the speech sounds they hear. Infants learn to discriminate speech sounds better when their input contains a two-peaked frequency distribution of those speech soun

  8. Visual speech fills in both discrimination and identification of non-intact auditory speech in children.

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F; McAlpine, Rachel P; Abdi, Hervé

    2017-07-20

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. bæz) coupled to non-intact (excised onsets) auditory speech (signified by /-b/æz). Children discriminated syllable pairs that differed in intactness (i.e. bæz:/-b/æz) and identified non-intact nonwords (/-b/æz). We predicted that visual speech would cause children to perceive the non-intact onsets as intact, resulting in more same responses for discrimination and more intact (i.e. bæz) responses for identification in the audiovisual than auditory mode. Visual speech for the easy-to-speechread /b/ but not for the difficult-to-speechread /g/ boosted discrimination and identification (about 35-45%) in children from four to fourteen years. The influence of visual speech on discrimination was uniquely associated with the influence of visual speech on identification and receptive vocabulary skills.

  9. Hate speech, report 2. Research on hate and discrimination

    OpenAIRE

    Eggebø, Helga; Stubberud, Elisabeth

    2016-01-01

    Hate speech has been a punishable offence in Norway since 1970. The prohibition against hate speech was incorporated into Norwegian legislation when Norway ratified the UN’s International Convention on the Elimination of All Forms of Racial Discrimination in 1970. In recent years, hate speech has become all the more current as an important issue of democracy on the public and political agenda. This is related to two processes: Firstly, the growth of extremism and radicalisation subsequent to ...

  10. Optimizing Automatic Speech Recognition for Low-Proficient Non-Native Speakers

    Directory of Open Access Journals (Sweden)

    Catia Cucchiarini

    2010-01-01

    Full Text Available Computer-Assisted Language Learning (CALL) applications for improving the oral skills of low-proficient learners have to cope with non-native speech that is particularly challenging. Since unconstrained non-native ASR is still problematic, a possible solution is to elicit constrained responses from the learners. In this paper, we describe experiments aimed at selecting utterances from lists of responses. The first experiment on utterance selection indicates that the decoding process can be improved by optimizing the language model and the acoustic models, thus reducing the utterance error rate from 29–26% to 10–8%. Since giving feedback on incorrectly recognized utterances is confusing, we verify the correctness of the utterance before providing feedback. The results of the second experiment on utterance verification indicate that combining duration-related features with a likelihood ratio (LR) yields an equal error rate (EER) of 10.3%, which is significantly better than the EER for the other measures in isolation.
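The equal error rate used in the verification experiment is the operating point where the false-rejection rate (correct utterances rejected) equals the false-acceptance rate (incorrect utterances accepted). A sketch on hypothetical verification scores, not the study's data:

```python
import numpy as np

def equal_error_rate(target_scores, impostor_scores):
    """EER: sweep thresholds until the false-rejection and
    false-acceptance rates coincide (approximately, on finite data)."""
    best_gap, eer = 2.0, 1.0
    for t in np.sort(np.concatenate([target_scores, impostor_scores])):
        frr = float(np.mean(target_scores < t))     # targets rejected
        far = float(np.mean(impostor_scores >= t))  # impostors accepted
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2.0
    return eer

rng = np.random.default_rng(1)
# Hypothetical verification scores, e.g. a likelihood ratio combined with
# duration features; higher means "more likely correctly recognized"
correct = rng.normal(2.0, 1.0, 1000)
incorrect = rng.normal(0.0, 1.0, 1000)
print(round(equal_error_rate(correct, incorrect), 3))
```

A lower EER means the verifier separates correct from incorrect utterances better; the 10.3% reported above would correspond to roughly one in ten errors of each kind at the balanced threshold.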

  11. Effects of noise, reverberation and foreign accent on native and non-native listeners' performance of English speech comprehension.

    Science.gov (United States)

    Peng, Z Ellen; Wang, Lily M

    2016-05-01

    A large number of non-native English speakers may be found in American classrooms, both as listeners and talkers. Little is known about how this population comprehends speech in realistic adverse acoustical conditions. A study was conducted to investigate the effects of background noise level (BNL), reverberation time (RT), and talker foreign accent on native and non-native listeners' speech comprehension, while controlling for English language abilities. A total of 115 adult listeners completed comprehension tasks under 15 acoustic conditions: three BNLs (RC-30, RC-40, and RC-50) and five RTs (from 0.4 to 1.2 s). Fifty-six listeners were tested with speech from native English-speaking talkers and 59 with native Mandarin-Chinese-speaking talkers. Results show that, while higher BNLs were generally more detrimental to listeners with lower English proficiency, all listeners experienced significant comprehension deficits above RC-40 with native English talkers. This limit was lower (i.e., above RC-30), however, with Chinese talkers. For reverberation, non-native listeners as a group performed best with RT up to 0.6 s, while native listeners performed equally well up to 1.2 s. A matched foreign accent benefit has also been identified, where the negative impact of higher reverberation does not exist for non-native listeners who share the talker's native language.

  12. Individual differences in the discrimination of novel speech sounds: effects of sex, temporal processing, musical and cognitive abilities.

    Science.gov (United States)

    Kempe, Vera; Thoresen, John C; Kirk, Neil W; Schaeffler, Felix; Brooks, Patricia J

    2012-01-01

    This study examined whether rapid temporal auditory processing, verbal working memory capacity, non-verbal intelligence, executive functioning, musical ability and prior foreign language experience predicted how well native English speakers (N=120) discriminated Norwegian tonal and vowel contrasts as well as a non-speech analogue of the tonal contrast and a native vowel contrast presented over noise. Results confirmed a male advantage for temporal and tonal processing, and also revealed that temporal processing was associated with both non-verbal intelligence and speech processing. In contrast, effects of musical ability on non-native speech-sound processing and of inhibitory control on vowel discrimination were not mediated by temporal processing. These results suggest that individual differences in non-native speech-sound processing are to some extent determined by temporal auditory processing ability, in which males perform better, but are also determined by a host of other abilities that are deployed flexibly depending on the characteristics of the target sounds.

  13. Individual differences in the discrimination of novel speech sounds: effects of sex, temporal processing, musical and cognitive abilities.

    Directory of Open Access Journals (Sweden)

    Vera Kempe

    Full Text Available This study examined whether rapid temporal auditory processing, verbal working memory capacity, non-verbal intelligence, executive functioning, musical ability and prior foreign language experience predicted how well native English speakers (N=120) discriminated Norwegian tonal and vowel contrasts as well as a non-speech analogue of the tonal contrast and a native vowel contrast presented over noise. Results confirmed a male advantage for temporal and tonal processing, and also revealed that temporal processing was associated with both non-verbal intelligence and speech processing. In contrast, effects of musical ability on non-native speech-sound processing and of inhibitory control on vowel discrimination were not mediated by temporal processing. These results suggest that individual differences in non-native speech-sound processing are to some extent determined by temporal auditory processing ability, in which males perform better, but are also determined by a host of other abilities that are deployed flexibly depending on the characteristics of the target sounds.

  14. Phonetic training and non-native speech perception--New memory traces evolve in just three days as indexed by the mismatch negativity (MMN) and behavioural measures.

    Science.gov (United States)

    Tamminen, Henna; Peltola, Maija S; Kujala, Teija; Näätänen, Risto

    2015-07-01

    Language-specific, automatically responding memory traces form the basis for speech sound perception, and new neural representations can also evolve for non-native speech categories. The aim of this study was to find out how a three-day phonetic listen-and-repeat training affects speech perception, and whether it generates new memory traces. We used behavioural identification, goodness rating, discrimination, and reaction time tasks together with mismatch negativity (MMN) brain response registrations to determine the training effects on native Finnish speakers. We trained the subjects on the voicing contrast in fricative sounds. Fricatives are not differentiated by voicing in Finnish, i.e., voiced fricatives do not belong to the Finnish phonological system. Therefore, they are extremely hard for Finns to learn. However, after only three days of training, the native Finnish subjects had learned to perceive the distinction. The results show striking changes in the MMN response; it was significantly larger on the second day after two training sessions. Also, the majority of the behavioural indicators showed improvement during training. Identification altered after four sessions of training, and discrimination and reaction times improved throughout training. These results suggest remarkable language-learning effects both at the perceptual and pre-attentive neural level as a result of brief listen-and-repeat training in adult participants.

  15. Effects of language experience on the discrimination of the Portuguese palatal lateral by nonnative listeners.

    Science.gov (United States)

    Santos Oliveira, Daniela; Casenhiser, Devin M; Hedrick, Mark; Teixeira, António; Bunta, Ferenc

    2016-01-01

    The purpose of this study was to investigate (1) whether manner or place takes precedence over the other during a phonological category discrimination task and (2) whether this pattern of precedence persists during the early stages of acquisition of the L2. In doing so, we investigated the Portuguese palatal lateral approximant /ʎ/ since it differs from English /l/ only by the place of articulation, and from English /j/ only by the manner of articulation. Our results indicate that monolinguals' perception of the non-native sound is dominated by manner while Portuguese learners show a different pattern of results. The results are interpreted as being consistent with evidence suggesting that manner may be neurophysiologically dominant over place of articulation. The study adds further details to the literature on the effects of experience on language acquisition, and has significant clinical implications for bilingualism in general, and foreign accent training, in particular.

  16. Effective Prediction of Errors by Non-native Speakers Using Decision Tree for Speech Recognition-Based CALL System

    Science.gov (United States)

    Wang, Hongcui; Kawahara, Tatsuya

    CALL (Computer Assisted Language Learning) systems using ASR (Automatic Speech Recognition) for second language learning have received increasing interest recently. However, it still remains a challenge to achieve high speech recognition performance, including accurate detection of erroneous utterances by non-native speakers. Conventionally, possible error patterns, based on linguistic knowledge, are added to the lexicon and language model, or the ASR grammar network. However, this approach easily runs into a trade-off between error coverage and increased perplexity. To solve the problem, we propose a method based on a decision tree to learn effective prediction of errors made by non-native speakers. An experimental evaluation with a number of foreign students learning Japanese shows that the proposed method can effectively generate an ASR grammar network, given a target sentence, to achieve both better coverage of errors and smaller perplexity, resulting in significant improvement in ASR accuracy.
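A toy version of the idea: a hand-written stand-in for a learned decision tree flags only the words likely to be mispronounced, so error arcs are added to the grammar network selectively rather than everywhere. The features, rules, and words below are invented for illustration, not taken from the paper:

```python
# Hand-written stand-in for a decision tree learned from non-native error data
def predict_error(word):
    has_liquid = any(c in word for c in "rl")   # liquids are hard for many learners
    is_long = len(word) > 6
    if has_liquid:
        return True if is_long else word.startswith("r")
    return False

# Only flagged words get extra error arcs in the ASR grammar network,
# covering likely mistakes without inflating perplexity for every word
targets = ["arigatou", "sensei", "ryokou", "hon"]
flagged = [w for w in targets if predict_error(w)]
print(flagged)  # ['arigatou', 'ryokou']
```

In the actual method the tree is induced from observed learner errors rather than written by hand, but the selective expansion of the grammar network is the same mechanism.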

  17. The influence of visual speech information on the intelligibility of English consonants produced by non-native speakers.

    Science.gov (United States)

    Kawase, Saya; Hannah, Beverly; Wang, Yue

    2014-09-01

    This study examines how visual speech information affects native judgments of the intelligibility of speech sounds produced by non-native (L2) speakers. Native Canadian English perceivers as judges perceived three English phonemic contrasts (/b-v, θ-s, l-ɹ/) produced by native Japanese speakers as well as native Canadian English speakers as controls. These stimuli were presented under audio-visual (AV, with speaker voice and face), audio-only (AO), and visual-only (VO) conditions. The results showed that, across conditions, the overall intelligibility of Japanese productions of the native (Japanese)-like phonemes (/b, s, l/) was significantly higher than the non-Japanese phonemes (/v, θ, ɹ/). In terms of visual effects, the more visually salient non-Japanese phonemes /v, θ/ were perceived as significantly more intelligible when presented in the AV compared to the AO condition, indicating enhanced intelligibility when visual speech information is available. However, the non-Japanese phoneme /ɹ/ was perceived as less intelligible in the AV compared to the AO condition. Further analysis revealed that, unlike the native English productions, the Japanese speakers produced /ɹ/ without visible lip-rounding, indicating that non-native speakers' incorrect articulatory configurations may decrease the degree of intelligibility. These results suggest that visual speech information may either positively or negatively affect L2 speech intelligibility.

  18. Automatic discrimination between laughter and speech

    NARCIS (Netherlands)

    Truong, K.; Leeuwen, D. van

    2007-01-01

    Emotions can be recognized by audible paralinguistic cues in speech. By detecting these paralinguistic cues that can consist of laughter, a trembling voice, coughs, changes in the intonation contour etc., information about the speaker’s state and emotion can be revealed. This paper describes the

  20. Speech feature discrimination in deaf children following cochlear implantation

    Science.gov (United States)

    Bergeson, Tonya R.; Pisoni, David B.; Kirk, Karen Iler

    2002-05-01

    Speech feature discrimination is a fundamental perceptual skill that is often assumed to underlie word recognition and sentence comprehension performance. To investigate the development of speech feature discrimination in deaf children with cochlear implants, we conducted a retrospective analysis of results from the Minimal Pairs Test (Robbins et al., 1988) selected from patients enrolled in a longitudinal study of speech perception and language development. The MP test uses a 2AFC procedure in which children hear a word and select one of two pictures (bat-pat). All 43 children were prelingually deafened, received a cochlear implant before 6 years of age or between ages 6 and 9, and used either oral or total communication. Children were tested once every 6 months to 1 year for 7 years; not all children were tested at each interval. By 2 years postimplant, the majority of these children achieved near-ceiling levels of discrimination performance for vowel height, vowel place, and consonant manner. Most of the children also achieved plateaus but did not reach ceiling performance for consonant place and voicing. The relationship between speech feature discrimination, spoken word recognition, and sentence comprehension will be discussed. [Work supported by NIH/NIDCD Research Grant No. R01DC00064 and NIH/NIDCD Training Grant No. T32DC00012.]

  1. Robust Speech Recognition Method Based on Discriminative Environment Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    HAN Jiqing; GAO Wen

    2001-01-01

    It is an effective approach to learn the influence of environmental parameters, such as additive noise and channel distortions, from training data for robust speech recognition. Most of the previous methods are based on the maximum likelihood estimation criterion. However, these methods do not lead to a minimum error rate result. In this paper, a novel discriminative learning method of environmental parameters, based on the Minimum Classification Error (MCE) criterion, is proposed. In the method, a simple classifier and the Generalized Probabilistic Descent (GPD) algorithm are adopted to iteratively learn the environmental parameters. Consequently, the clean speech features are estimated from the noisy speech features with the estimated environmental parameters, and then the estimates of the clean speech features are utilized in the back-end HMM classifier. Experiments show that a best error rate reduction of 32.1% is obtained, tested on a task of 18 isolated confusable Korean words, relative to a conventional HMM system.
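    The MCE/GPD combination described in this abstract can be illustrated with a toy sketch (not the paper's implementation): a scalar channel bias shifts the clean features, and Generalized Probabilistic Descent minimizes a sigmoid-smoothed classification-error loss to recover it. All class means, biases, and learning rates below are illustrative assumptions.

    ```python
    import math

    class_means = {0: -1.0, 1: 1.0}   # "clean" class models
    true_bias = 0.7                   # unknown channel offset to be estimated

    # Noisy training features: clean class mean + bias + small perturbation
    data = [(m + true_bias + d, c)
            for c, m in class_means.items()
            for d in (-0.1, 0.0, 0.1)]

    def discriminant(x, c, b):
        # Score of class c for the bias-compensated feature: higher is better
        return -(x - b - class_means[c]) ** 2

    def misclassification(x, c, b):
        # MCE measure: best competitor score minus correct-class score
        g_correct = discriminant(x, c, b)
        g_best_other = max(discriminant(x, k, b) for k in class_means if k != c)
        return g_best_other - g_correct

    def smoothed_error(b):
        # Sigmoid-smoothed 0/1 loss over the training set
        return sum(1.0 / (1.0 + math.exp(-misclassification(x, c, b)))
                   for x, c in data)

    # GPD-style iterative descent using a central-difference gradient
    b, lr, eps = 0.0, 0.05, 1e-4
    for _ in range(300):
        grad = (smoothed_error(b + eps) - smoothed_error(b - eps)) / (2 * eps)
        b -= lr * grad

    print(round(b, 2))  # estimate converges close to the true bias of 0.7
    ```

    The same loop generalizes to vector-valued environmental parameters; the point is only that the parameter is chosen to minimize classification error rather than to maximize likelihood.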

  2. Designing acoustics for linguistically diverse classrooms: Effects of background noise, reverberation and talker foreign accent on speech comprehension by native and non-native English-speaking listeners

    Science.gov (United States)

    Peng, Zhao Ellen

    The current classroom acoustics standard (ANSI S12.60-2010) recommends that core learning spaces not exceed a background noise level (BNL) of 35 dBA and a reverberation time (RT) of 0.6 seconds, based on speech intelligibility performance mainly by the native English-speaking population. Existing literature has not correlated these recommended values well with student learning outcomes. With a growing population of non-native English speakers in American classrooms, the special needs for perceiving degraded speech among non-native listeners, whether due to realistic room acoustics or talker foreign accent, have not been addressed in the current standard. This research seeks to investigate the effects of BNL and RT on the comprehension of English speech from native English and native Mandarin Chinese talkers as perceived by native and non-native English listeners, and to provide acoustic design guidelines to supplement the existing standard. This dissertation presents two studies on the effects of RT and BNL on more realistic classroom learning experiences. How do native and non-native English-speaking listeners perform on speech comprehension tasks under adverse acoustic conditions, if the English speech is produced by talkers of native English (Study 1) versus native Mandarin Chinese (Study 2)? Speech comprehension materials were played back in a listening chamber to individual listeners: native and non-native English-speaking in Study 1; native English, native Mandarin Chinese, and other non-native English-speaking in Study 2. Each listener was screened for baseline English proficiency level, and completed dual tasks simultaneously involving speech comprehension and adaptive dot-tracing under 15 acoustic conditions, comprising three BNL conditions (RC-30, 40, and 50) and five RT scenarios (0.4 to 1.2 seconds). The results show that BNL and RT negatively affect both objective performance and subjective perception of speech comprehension, more severely for non-native

  3. SPEECH EMOTION RECOGNITION USING MODIFIED QUADRATIC DISCRIMINATION FUNCTION

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Quadratic Discrimination Function (QDF) is commonly used in speech emotion recognition, which proceeds on the premise that the input data are normally distributed. In this paper, we propose a transformation to normalize the emotional features, then derive a Modified QDF (MQDF) for speech emotion recognition. Features based on prosody and voice quality are extracted, and a Principal Component Analysis Neural Network (PCANN) is used to reduce the dimension of the feature vectors. The results show that voice quality features are an effective supplement for recognition, and that the method in this paper improves the recognition rate effectively.
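    The abstract does not give the MQDF derivation, but the plain QDF it builds on is just a per-class Gaussian log-density score. A minimal sketch for scalar features, with toy "emotion" feature values that are purely illustrative:

    ```python
    import math

    # Toy per-class training data for one prosodic feature (illustrative values)
    train = {
        "neutral": [0.9, 1.0, 1.1, 1.05, 0.95],
        "angry":   [2.8, 3.1, 3.0, 2.9, 3.2],
    }

    # Fit a Gaussian (mean, variance) per class
    models = {}
    for label, xs in train.items():
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        models[label] = (mu, var)

    def qdf_score(x, mu, var):
        # Gaussian log-density with the constant term dropped:
        # g(x) = -0.5 * (x - mu)^2 / var - 0.5 * ln(var)
        return -0.5 * (x - mu) ** 2 / var - 0.5 * math.log(var)

    def classify(x):
        # Pick the class with the highest quadratic discriminant score
        return max(models, key=lambda c: qdf_score(x, *models[c]))

    print(classify(1.2), classify(2.7))  # → neutral angry
    ```

    The MQDF of the paper modifies this scoring after a normalizing transformation of the features; the decision rule (argmax over class scores) is the same.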

  4. Space discriminative function for microphone array robust speech recognition

    Institute of Scientific and Technical Information of China (English)

    Zhao Xianyu; Ou Zhijian; Wang Zuoying

    2005-01-01

    Based on the W-disjoint orthogonality of speech mixtures, a space discriminative function was proposed to enumerate and localize competing speakers in the surrounding environment. A Wiener-like post-filter was then developed to adaptively suppress interference. Experimental results with a hands-free speech recognizer under various SNR and competing-speaker settings show that nearly 69% error reduction can be obtained with a two-channel small-aperture microphone array relative to a conventional single-microphone baseline system. Comparisons were made against traditional delay-and-sum and Griffiths-Jim adaptive beamforming techniques to further assess the effectiveness of the method.
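    The delay-and-sum baseline this abstract compares against can be sketched in a few lines: compensate the inter-microphone delay of the target so the two copies add coherently while uncorrelated noise averages down. The delay, signal, and noise levels below are illustrative assumptions, not the paper's setup.

    ```python
    import random

    random.seed(0)
    n, delay = 400, 3                     # delay of mic2 relative to mic1, in samples
    source = [random.gauss(0.0, 1.0) for _ in range(n + delay)]
    mic1 = [source[i + delay] + random.gauss(0.0, 0.5) for i in range(n)]
    mic2 = [source[i] + random.gauss(0.0, 0.5) for i in range(n)]

    # Delay-and-sum: advance mic2 by the known delay, then average the channels
    beam = [(mic1[i] + mic2[i + delay]) / 2 for i in range(n - delay)]

    def mse(est, ref):
        return sum((e - r) ** 2 for e, r in zip(est, ref)) / len(est)

    target = [source[i + delay] for i in range(n - delay)]
    # Averaging two aligned channels roughly halves the noise power
    print(mse(beam, target) < mse(mic1[:n - delay], target))  # → True
    ```

    Adaptive schemes such as Griffiths-Jim, and the Wiener-like post-filter of the abstract, go beyond this by steering nulls toward interferers rather than relying on averaging alone.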

  5. Prediction of IOI-HA Scores Using Speech Reception Thresholds and Speech Discrimination Scores in Quiet

    DEFF Research Database (Denmark)

    Brännström, K Jonas; Lantz, Johannes; Nielsen, Lars Holme

    2014-01-01

    BACKGROUND: Outcome measures can be used to improve the quality of the rehabilitation by identifying and understanding which variables influence the outcome. This information can be used to improve outcomes for clients. In clinical practice, pure-tone audiometry, speech reception thresholds (SRTs......), and speech discrimination scores (SDSs) in quiet or in noise are common assessments made prior to hearing aid (HA) fittings. It is not known whether SRT and SDS in quiet relate to HA outcome measured with the International Outcome Inventory for Hearing Aids (IOI-HA). PURPOSE: The aim of the present study...

  6. Investigating Applications of Speech-to-Text Recognition Technology for a Face-to-Face Seminar to Assist Learning of Non-Native English-Speaking Participants

    Science.gov (United States)

    Shadiev, Rustam; Hwang, Wu-Yuin; Huang, Yueh-Min; Liu, Chia-Ju

    2016-01-01

    This study applied speech-to-text recognition (STR) technology to assist non-native English-speaking participants to learn at a seminar given in English. How participants used transcripts generated by the STR technology for learning and their perceptions toward the STR were explored. Three main findings are presented in this study. Most…

  7. High stimulus variability in nonnative speech learning supports formation of abstract categories: Evidence from Japanese geminates

    NARCIS (Netherlands)

    Sadakata, M.; McQueen, J.M.

    2013-01-01

    This study reports effects of a high-variability training procedure on nonnative learning of a Japanese geminate-singleton fricative contrast. Thirty native speakers of Dutch took part in a 5-day training procedure in which they identified geminate and singleton variants of the Japanese fricative

  8. Discriminating non-native vowels on the basis of multimodal, auditory or visual information: effects on infants’ looking patterns and discrimination

    Directory of Open Access Journals (Sweden)

    Sophie ter Schure

    2016-04-01

    Infants’ perception of speech sound contrasts is modulated by their language environment, for example by the statistical distributions of the speech sounds they hear. Infants learn to discriminate speech sounds better when their input contains a two-peaked frequency distribution of those speech sounds than when their input contains a one-peaked frequency distribution. Effects of frequency distributions on phonetic learning have been tested almost exclusively for auditory input. But auditory speech is usually accompanied by visual information, that is, by visible articulations. This study tested whether infants’ phonological perception is shaped by distributions of visual speech as well as by distributions of auditory speech, by comparing learning from multimodal (i.e., auditory-visual), visual-only, or auditory-only information. Dutch 8-month-old infants were exposed to either a one-peaked or two-peaked distribution from a continuum of vowels that formed a contrast in English, but not in Dutch. We used eye tracking to measure effects of distribution and sensory modality on infants’ discrimination of the contrast. Although there were no overall effects of distribution or modality, separate t-tests in each of the six training conditions demonstrated significant discrimination of the vowel contrast in the two-peaked multimodal condition. For the modalities where the mouth was visible (visual-only and multimodal) we further examined infant looking patterns for the dynamic speaker’s face. Infants in the two-peaked multimodal condition looked longer at her mouth than infants in any of the three other conditions. We propose that by eight months, infants’ native vowel categories are established insofar that learning a novel contrast is supported by attention to additional information, such as visual articulations.

  9. Discriminating Non-native Vowels on the Basis of Multimodal, Auditory or Visual Information: Effects on Infants' Looking Patterns and Discrimination.

    Science.gov (United States)

    Ter Schure, Sophie; Junge, Caroline; Boersma, Paul

    2016-01-01

    Infants' perception of speech sound contrasts is modulated by their language environment, for example by the statistical distributions of the speech sounds they hear. Infants learn to discriminate speech sounds better when their input contains a two-peaked frequency distribution of those speech sounds than when their input contains a one-peaked frequency distribution. Effects of frequency distributions on phonetic learning have been tested almost exclusively for auditory input. But auditory speech is usually accompanied by visual information, that is, by visible articulations. This study tested whether infants' phonological perception is shaped by distributions of visual speech as well as by distributions of auditory speech, by comparing learning from multimodal (i.e., auditory-visual), visual-only, or auditory-only information. Dutch 8-month-old infants were exposed to either a one-peaked or two-peaked distribution from a continuum of vowels that formed a contrast in English, but not in Dutch. We used eye tracking to measure effects of distribution and sensory modality on infants' discrimination of the contrast. Although there were no overall effects of distribution or modality, separate t-tests in each of the six training conditions demonstrated significant discrimination of the vowel contrast in the two-peaked multimodal condition. For the modalities where the mouth was visible (visual-only and multimodal) we further examined infant looking patterns for the dynamic speaker's face. Infants in the two-peaked multimodal condition looked longer at her mouth than infants in any of the three other conditions. We propose that by 8 months, infants' native vowel categories are established insofar that learning a novel contrast is supported by attention to additional information, such as visual articulations.

  10. Assessing the Performance of Automatic Speech Recognition Systems When Used by Native and Non-Native Speakers of Three Major Languages in Dictation Workflows

    DEFF Research Database (Denmark)

    Zapata, Julián; Kirkedal, Andreas Søeborg

    2015-01-01

    In this paper, we report on a two-part experiment aiming to assess and compare the performance of two types of automatic speech recognition (ASR) systems on two different computational platforms when used to augment dictation workflows. The experiment was performed with a sample of speakers...... of three major languages and with different linguistic profiles: non-native English speakers; non-native French speakers; and native Spanish speakers. The main objective of this experiment is to examine ASR performance in translation dictation (TD) and medical dictation (MD) workflows without manual...

  11. Discrimination of brief speech sounds is impaired in rats with auditory cortex lesions.

    Science.gov (United States)

    Porter, Benjamin A; Rosenthal, Tara R; Ranasinghe, Kamalini G; Kilgard, Michael P

    2011-05-16

    Auditory cortex (AC) lesions impair complex sound discrimination. However, a recent study demonstrated spared performance on an acoustic startle response test of speech discrimination following AC lesions (Floody et al., 2010). The current study reports the effects of AC lesions on two operant speech discrimination tasks. AC lesions caused a modest and quickly recovered impairment in the ability of rats to discriminate consonant-vowel-consonant speech sounds. This result seems to suggest that AC does not play a role in speech discrimination. However, the speech sounds used in both studies differed in many acoustic dimensions, and an adaptive change in discrimination strategy could allow the rats to use an acoustic difference that does not require an intact AC to discriminate. Based on our earlier observation that the first 40 ms of the spatiotemporal activity patterns elicited by speech sounds best correlate with behavioral discriminations of these sounds (Engineer et al., 2008), we predicted that eliminating additional cues by truncating speech sounds to the first 40 ms would render the stimuli indistinguishable to a rat with AC lesions. Although the initial discrimination of truncated sounds took longer to learn, the final performance paralleled that of rats using full-length consonant-vowel-consonant sounds. After 20 days of testing, half of the rats using speech onsets received bilateral AC lesions. Lesions severely impaired speech onset discrimination for at least one month post-lesion. These results support the hypothesis that auditory cortex is required to accurately discriminate the subtle differences between similar consonant and vowel sounds. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. The role of the motor system in discriminating normal and degraded speech sounds.

    Science.gov (United States)

    D'Ausilio, Alessandro; Bufalari, Ilaria; Salmas, Paola; Fadiga, Luciano

    2012-07-01

    Listening to speech recruits a network of fronto-temporo-parietal cortical areas. Classical models hold that anterior (motor) sites are involved in speech production, whereas posterior sites are involved in comprehension. This functional segregation is increasingly challenged by action-perception theories suggesting that brain circuits for speech articulation and speech perception are functionally interdependent. Recent studies report that speech listening elicits motor activities analogous to production. However, the motor system could be crucially recruited only under certain conditions that make speech discrimination hard. Here, by using event-related double-pulse transcranial magnetic stimulation (TMS) on lips and tongue motor areas, we show data suggesting that the motor system may play a role in noisy, but crucially not in noise-free, environments for the discrimination of speech signals. Copyright © 2011 Elsevier Srl. All rights reserved.

  13. Speech discrimination and lip reading in patients with word deafness or auditory agnosia.

    Science.gov (United States)

    Shindo, M; Kaga, K; Tanaka, Y

    1991-02-01

    The purpose of this study was to assess the ability of four patients with word deafness or auditory agnosia to discriminate speech by reading lips. The patients were studied using nonsense monosyllables to test for speech discrimination, a lip reading test, the Token Test for auditory comprehension, and the Aphasia test. Our results show that patients with word deafness or auditory agnosia without aphasia can improve speech comprehension by reading lips in combination with listening, as compared with lip reading or listening alone. In conclusion, lip reading was shown to be useful for speech comprehension among these patients.

  14. The impact of tone language and non-native language listening on measuring speech quality

    NARCIS (Netherlands)

    Ebem, D.U.; Beerends, J.G.; Vugt, J. van; Schmidmer, C.; Kooij, R.E.; Uguru, J.O.

    2011-01-01

    We investigate the extent to which the modeling used in objective speech quality algorithms depends on the cultural background of listeners as well as on language characteristics, using American English and Igbo, an African tone language. Two different approaches were used in order to separate b

  15. Visual speech alters the discrimination and identification of non-intact auditory speech in children with hearing loss.

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F; McAlpine, Rachel P; Abdi, Hervé

    2017-03-01

    Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes, yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets), for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to a non-intact onset/rhyme in the auditory track (/-B/aa or /-B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/-B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same- as opposed to different-responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /-B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz- as opposed to az-responses in the audiovisual than auditory mode. Performance in the audiovisual mode showed more same

  16. Twenty-two-month-olds discriminate fluent from disfluent adult-directed speech.

    Science.gov (United States)

    Soderstrom, Melanie; Morgan, James L

    2007-09-01

    Deviation of real speech from grammatical ideals due to disfluency and other speech errors presents potentially serious problems for the language learner. While infants may initially benefit from attending primarily or solely to infant-directed speech, which contains few grammatical errors, older infants may listen more to adult-directed speech. In a first experiment, Post-verbal infants preferred fluent speech to disfluent speech, while Pre-verbal infants showed no preference. In a second experiment, Post-verbal infants discriminated disfluent and fluent speech even when lexical information was removed, showing that they make use of prosodic properties of the speech stream to detect disfluency. Because disfluencies are highly correlated with grammatical errors, this sensitivity provides infants with a means of filtering ungrammaticality from their input.

  17. Pragmatic assessment of request speech act of Iranian EFL learners by non-native English speaking teachers

    Directory of Open Access Journals (Sweden)

    Minoo Alemi

    2016-07-01

    The analysis of raters' comments on pragmatic assessment of L2 learners is among the new and understudied concepts in second language studies. To shed light on this issue, the present investigation targeted important variables such as raters’ criteria and rating patterns by analyzing the interlanguage pragmatic assessment process of Iranian non-native English-speaking raters (NNESRs) regarding the request speech act, while considering important factors such as raters’ gender and background teaching experience. For this purpose, 62 raters’ rating scores and comments on Iranian EFL learners’ requests based on six situations of specified video prompts were analyzed. Content analysis of the raters’ comments revealed nine criteria, including pragmalinguistic and socio-pragmatic components of language, which were noted by raters differently across the six request situations. Among these criteria, politeness, conversers’ relationship, style and register, and explanation were of great importance to NNESRs. Furthermore, t-test and chi-square analyses of raters’ assigned rating scores and mentioned criteria across different situations verified the insignificance of factors such as raters’ gender and teaching experience in the process of EFL learners’ pragmatic assessment. In addition, the results of the study suggest the necessity of teaching L2 pragmatics in language classes and in teacher training courses.

  18. Neural activation in speech production and reading aloud in native and non-native languages.

    Science.gov (United States)

    Berken, Jonathan A; Gracco, Vincent L; Chen, Jen-Kai; Soles, Jennika; Watkins, Kate E; Baum, Shari; Callahan, Megan; Klein, Denise

    2015-05-15

    We used fMRI to investigate neural activation in reading aloud in bilinguals differing in age of acquisition. Three groups were compared: French-English bilinguals who acquired two languages from birth (simultaneous), French-English bilinguals who learned their L2 after the age of 5 years (sequential), and English-speaking monolinguals. While the bilingual groups contrasted in age of acquisition, they were matched for language proficiency, although sequential bilinguals produced speech with a less native-like accent in their L2 than in their L1. Simultaneous bilinguals activated similar brain regions to an equivalent degree when reading in their two languages. In contrast, sequential bilinguals more strongly activated areas related to speech-motor control and orthographic to phonological mapping, the left inferior frontal gyrus, left premotor cortex, and left fusiform gyrus, when reading aloud in L2 compared to L1. In addition, the activity in these regions showed a significant positive correlation with age of acquisition. The results provide evidence for the engagement of overlapping neural substrates for processing two languages when acquired in native context from birth. However, it appears that the maturation of certain brain regions for both speech production and phonological encoding is limited by a sensitive period for L2 acquisition regardless of language proficiency.

  19. Automatic pronunciation error detection in non-native speech: the case of vowel errors in Dutch.

    Science.gov (United States)

    van Doremalen, Joost; Cucchiarini, Catia; Strik, Helmer

    2013-08-01

    This research is aimed at analyzing and improving automatic pronunciation error detection in a second language. Dutch vowels spoken by adult non-native learners of Dutch are used as a test case. A first study on Dutch pronunciation by L2 learners with different L1s revealed that vowel pronunciation errors are relatively frequent and often concern subtle acoustic differences between the realization and the target sound. In a second study automatic pronunciation error detection experiments were conducted to compare existing measures to a metric that takes account of the error patterns observed to capture relevant acoustic differences. The results of the two studies do indeed show that error patterns bear information that can be usefully employed in weighted automatic measures of pronunciation quality. In addition, it appears that combining such a weighted metric with existing measures improves the equal error rate by 6.1 percentage points from 0.297, for the Goodness of Pronunciation (GOP) algorithm, to 0.236.
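    The Goodness of Pronunciation (GOP) measure this abstract benchmarks against is, in its common formulation, a duration-normalized log-likelihood ratio between the intended phone and the best-scoring competing phone. A minimal sketch with toy one-dimensional Gaussian phone models standing in for real acoustic models (all phone means and frame values are illustrative):

    ```python
    import math

    # Toy phone models: (mean, std) of a single acoustic feature per phone
    phones = {"a": (1.0, 0.2), "e": (1.8, 0.2), "i": (2.6, 0.2)}

    def log_lik(frames, mu, sigma):
        # Gaussian log-likelihood summed over frames
        return sum(-0.5 * ((f - mu) / sigma) ** 2
                   - math.log(sigma * math.sqrt(2 * math.pi)) for f in frames)

    def gop(frames, intended):
        # Duration-normalized log ratio: target phone vs. best phone overall.
        # Always <= 0; values near 0 indicate a well-pronounced target.
        target = log_lik(frames, *phones[intended])
        best = max(log_lik(frames, *phones[p]) for p in phones)
        return (target - best) / len(frames)

    good = [1.0, 1.1, 0.9, 1.05]   # frames close to the /a/ model
    poor = [1.7, 1.8, 1.75, 1.9]   # frames that match /e/ instead
    print(gop(good, "a"), gop(poor, "a"))  # 0.0 for good; strongly negative for poor
    ```

    The paper's weighted metric reweights such scores using the observed error patterns; the likelihood-ratio skeleton is the same.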

  20. Evaluation of a novel technique for assessing speech discrimination in children

    NARCIS (Netherlands)

    Newton, C.; Chiat, S.; Hald, L.A.

    2008-01-01

    Methods used to assess children's speech perception and recognition in the clinical setting are out of step with current methods used to investigate these experimentally. Traditional methods of assessing speech discrimination, such as picture pointing, yield accuracy scores which may fail to detect

  1. Cognitive Control Factors in Speech Perception at 11 Months

    Science.gov (United States)

    Conboy, Barbara T.; Sommerville, Jessica A.; Kuhl, Patricia K.

    2008-01-01

    The development of speech perception during the 1st year reflects increasing attunement to native language features, but the mechanisms underlying this development are not completely understood. One previous study linked reductions in nonnative speech discrimination to performance on nonlinguistic tasks, whereas other studies have shown…

  2. Speech Discrimination in 11-Month-Old Bilingual and Monolingual Infants: A Magnetoencephalography Study

    Science.gov (United States)

    Ferjan Ramírez, Naja; Ramírez, Rey R.; Clarke, Maggie; Taulu, Samu; Kuhl, Patricia K.

    2017-01-01

    Language experience shapes infants' abilities to process speech sounds, with universal phonetic discrimination abilities narrowing in the second half of the first year. Brain measures reveal a corresponding change in neural discrimination as the infant brain becomes selectively sensitive to its native language(s). Whether and how bilingual…

  3. Speech sound discrimination training improves auditory cortex responses in a rat model of autism

    Directory of Open Access Journals (Sweden)

    Crystal T Engineer

    2014-08-01

    Children with autism often have language impairments and degraded cortical responses to speech. Extensive behavioral interventions can improve language outcomes and cortical responses. Prenatal exposure to the antiepileptic drug valproic acid (VPA) increases the risk for autism and language impairment. Prenatal exposure to VPA also causes weaker and delayed auditory cortex responses in rats. In this study, we document speech sound discrimination ability in VPA exposed rats and document the effect of extensive speech training on auditory cortex responses. VPA exposed rats were significantly impaired at consonant, but not vowel, discrimination. Extensive speech training resulted in both stronger and faster anterior auditory field responses compared to untrained VPA exposed rats, and restored responses to control levels. This neural response improvement generalized to non-trained sounds. The rodent VPA model of autism may be used to improve the understanding of speech processing in autism and contribute to improving language outcomes.

  4. Speech sound discrimination training improves auditory cortex responses in a rat model of autism

    Science.gov (United States)

    Engineer, Crystal T.; Centanni, Tracy M.; Im, Kwok W.; Kilgard, Michael P.

    2014-01-01

    Children with autism often have language impairments and degraded cortical responses to speech. Extensive behavioral interventions can improve language outcomes and cortical responses. Prenatal exposure to the antiepileptic drug valproic acid (VPA) increases the risk for autism and language impairment. Prenatal exposure to VPA also causes weaker and delayed auditory cortex responses in rats. In this study, we document speech sound discrimination ability in VPA exposed rats and document the effect of extensive speech training on auditory cortex responses. VPA exposed rats were significantly impaired at consonant, but not vowel, discrimination. Extensive speech training resulted in both stronger and faster anterior auditory field (AAF) responses compared to untrained VPA exposed rats, and restored responses to control levels. This neural response improvement generalized to non-trained sounds. The rodent VPA model of autism may be used to improve the understanding of speech processing in autism and contribute to improving language outcomes. PMID:25140133

  5. Discrimination Between Native and Non-Native Speech Using Visual Features Only

    NARCIS (Netherlands)

    Georgakis, Christos; Petridis, Stavros; Pantic, Maja

    2015-01-01

    Accent is a soft biometric trait that can be inferred from pronunciation and articulation patterns characterizing the speaking style of an individual. Past research has addressed the task of classifying accent, as belonging to a native language speaker or a foreign language speaker, by means of the

  6. Infants' brain responses to speech suggest analysis by synthesis.

    Science.gov (United States)

    Kuhl, Patricia K; Ramírez, Rey R; Bosseler, Alexis; Lin, Jo-Fu Lotus; Imada, Toshiaki

    2014-08-01

    Historic theories of speech perception (Motor Theory and Analysis by Synthesis) invoked listeners' knowledge of speech production to explain speech perception. Neuroimaging data show that adult listeners activate motor brain areas during speech perception. In two experiments using magnetoencephalography (MEG), we investigated motor brain activation, as well as auditory brain activation, during discrimination of native and nonnative syllables in infants at two ages that straddle the developmental transition from language-universal to language-specific speech perception. Adults are also tested in Exp. 1. MEG data revealed that 7-mo-old infants activate auditory (superior temporal) as well as motor brain areas (Broca's area, cerebellum) in response to speech, and equivalently for native and nonnative syllables. However, in 11- and 12-mo-old infants, native speech activates auditory brain areas to a greater degree than nonnative, whereas nonnative speech activates motor brain areas to a greater degree than native speech. This double dissociation in 11- to 12-mo-old infants matches the pattern of results obtained in adult listeners. Our infant data are consistent with Analysis by Synthesis: auditory analysis of speech is coupled with synthesis of the motor plans necessary to produce the speech signal. The findings have implications for: (i) perception-action theories of speech perception, (ii) the impact of "motherese" on early language learning, and (iii) the "social-gating" hypothesis and humans' development of social understanding.

  7. [Audiological evaluation of the middle ear implant--speech discrimination under noise circumstances].

    Science.gov (United States)

    Saiki, T; Gyo, K; Yanagihara, N

    1990-04-01

    Speech discrimination scores under noise circumstances were studied using the middle ear implant (MEI) and a conventional hearing aid (HA-33, RION Co., Ltd.). The studies were performed in 10 patients implanted with the MEI and in 12 adult volunteers with normal hearing as controls. The tests were carried out using Japanese monosyllabic lists from the 57S speech discrimination test as the test sound and multi-talker noise as the noise source. A speaker was placed in front of the subject, one meter apart. The sound characteristics of the HA were adjusted as far as possible to those of the MEI by use of a sound equalizer. The intensity of the speech sound was set at 65 dB SPL, while that of the noise was varied between 65, 70, and 75 dB SPL at the position of the patient. Audiological evaluation of the MEI, the HA, and controls was obtained as the percentage of correct answers to 50 words in the speech discrimination test with and without noise (65, 70, 75 dB SPL). Without noise, speech discrimination scores with the MEI and the HA were 96.8 +/- 3.6% and 94.8 +/- 4.1%, respectively. Under noise (65 dB SPL), scores with the two devices were 81.6 +/- 9.1% and 66.8 +/- 10.6% (P less than 0.001). When the intensity of the noise increased to 70 and 75 dB SPL, speech discrimination scores with both devices decreased, with a consistent difference between them (P less than 0.01). Moreover, speech discrimination scores with the MEI were almost the same as controls. (ABSTRACT TRUNCATED AT 250 WORDS)

  8. Raman spectroscopy and multivariate analysis for the rapid discrimination between native-like and non-native states in freeze-dried protein formulations.

    Science.gov (United States)

    Pieters, Sigrid; Vander Heyden, Yvan; Roger, Jean-Michel; D'Hondt, Matthias; Hansen, Laurent; Palagos, Bernard; De Spiegeleer, Bart; Remon, Jean-Paul; Vervaet, Chris; De Beer, Thomas

    2013-10-01

    This study investigates whether Raman spectroscopy combined with multivariate analysis (MVA) enables a rapid and direct differentiation between two classes of conformational states, i.e., native-like and non-native proteins, in freeze-dried formulations. A data set comprising 99 spectra, from both native-like and various types of non-native freeze-dried protein formulations, was obtained by freeze-drying lactate dehydrogenase (LDH) as a model protein under various conditions. Changes in the secondary structure of the solid freeze-dried proteins were determined through visual interpretation of the blank-corrected second-derivative amide I band in the ATR-FTIR spectra (hereafter called FTIR spectra) and served as an independent reference to assign class labels. Exploratory analysis and supervised classification, using Principal Components Analysis (PCA) and Partial Least Squares - Linear Discriminant Analysis (PLS-LDA), respectively, revealed that Raman spectroscopy is able to discriminate correctly between native-like and non-native states in the tested freeze-dried LDH formulations with 95% accuracy. Backbone (i.e., amide III) and side-chain-sensitive spectral regions proved important for making the discrimination between the two classes. As discrimination was not influenced by the spectral signals from the tested excipients, there was no need for blank corrections. The Raman model may allow direct and automated analysis of the investigated quality attribute, opening possibilities for real-time and in-line quality indication as a future step. However, the sensitivity of the method should be further investigated and, where possible, improved. Copyright © 2013 Elsevier B.V. All rights reserved.
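Chemometric pipelines like the PCA and PLS-LDA used here are typically built with dedicated tooling; as a minimal stand-in, the sketch below fits a plain two-class Fisher linear discriminant to two hypothetical spectral descriptors. All data, band names, and the decision threshold are invented for illustration, not taken from the study.

```python
# Minimal two-class Fisher linear discriminant on synthetic two-feature
# "spectral" descriptors (e.g., amide III and side-chain band intensities).
# A simplified stand-in for the PLS-LDA used in the study; data are made up.

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def scatter(vectors, m):
    # Within-class scatter matrix (2x2), summed outer products of deviations.
    s = [[0.0, 0.0], [0.0, 0.0]]
    for v in vectors:
        d = [v[0] - m[0], v[1] - m[1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

def fisher_direction(class_a, class_b):
    # w = Sw^-1 (m_a - m_b), with Sw the pooled within-class scatter.
    ma, mb = mean(class_a), mean(class_b)
    sa, sb = scatter(class_a, ma), scatter(class_b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

# Hypothetical band intensities for native-like vs non-native formulations.
native = [[1.0, 0.2], [1.1, 0.3], [0.9, 0.25], [1.05, 0.22]]
non_native = [[0.6, 0.7], [0.55, 0.8], [0.65, 0.75], [0.5, 0.72]]

w = fisher_direction(native, non_native)
# Decision threshold: midpoint of the projected class means.
threshold = (sum(w[i] * mean(native)[i] for i in range(2)) +
             sum(w[i] * mean(non_native)[i] for i in range(2))) / 2

def classify(x):
    return "native-like" if w[0] * x[0] + w[1] * x[1] > threshold else "non-native"
```

Real spectra would first be reduced (e.g., by PCA) before the discriminant step; this sketch skips that to stay self-contained.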

  9. Pitch characteristics of infant-directed speech affect infants' ability to discriminate vowels.

    Science.gov (United States)

    Trainor, Laurel J; Desjardins, Renée N

    2002-06-01

    "Baby talk" or speech directed to prelinguistic infants is high in pitch and has exaggerated pitch contours (up/down patterns of pitch change) across languages and cultures. Using an acoustic model, we predicted that the large pitch contours of infant-directed speech should improve infants' ability to discriminate vowels. On the other hand, the same model predicted that high pitch would not benefit, and might actually impair, infants' ability to discriminate vowels. We then confirmed these predictions experimentally. We conclude that the exaggerated pitch contours of infant-directed speech aid infants' acquisition of vowel categories but that the high pitch of infant-directed speech must serve another function, such as attracting infants' attention or aiding emotional communication.

  10. Discrimination of stress in speech and music: a mismatch negativity (MMN) study.

    Science.gov (United States)

    Peter, Varghese; McArthur, Genevieve; Thompson, William Forde

    2012-12-01

    The aim of this study was to determine if duration-related stress in speech and music is processed in a similar way in the brain. To this end, we tested 20 adults for their abstract mismatch negativity (MMN) event-related potentials to two duration-related stress patterns: stress on the first syllable or note (long-short), and stress on the second syllable or note (short-long). A significant MMN was elicited for both speech and music except for the short-long speech stimulus. The long-short stimuli elicited larger MMN amplitudes for speech and music compared to short-long stimuli. An extra negativity, the late discriminative negativity (LDN), was observed only for music. The larger MMN amplitude for long-short stimuli might be due to the familiarity of the stress pattern in speech and music. The presence of the LDN for music may reflect greater long-term memory transfer for music stimuli.

  11. Dynamic modulation of shared sensory and motor cortical rhythms mediates speech and non-speech discrimination performance

    Directory of Open Access Journals (Sweden)

    Andrew Lee Bowers

    2014-05-01

    Oscillatory models of speech processing have proposed that rhythmic cortical oscillations in sensory and motor regions modulate speech sound processing from the bottom up via phase reset at low frequencies (3-10 Hz) and from the top down via the disinhibition of alpha/beta rhythms (8-30 Hz). To investigate how the proposed rhythms mediate perceptual performance, EEG was recorded while participants passively listened to or actively identified speech and tone-sweeps in a two-alternative forced-choice in-noise discrimination task presented at high and low signal-to-noise ratios. EEG data were decomposed using independent component analysis (ICA) and clustered across participants using principal component methods in EEGLAB. Clustering analysis showed left and right hemisphere sensorimotor and posterior temporal lobe components. In posterior temporal clusters, increases in phase reset at low frequencies were driven by the quality of bottom-up acoustic information for speech and non-speech stimuli, whereas phase reset in sensorimotor clusters was associated with top-down active task demands. A comparison of correct discrimination trials to those identified at chance showed an earlier performance-related effect for the left sensorimotor cluster relative to the left temporal lobe cluster during the syllable discrimination task only. The right sensorimotor cluster was associated with performance-related differences for tone-sweep stimuli only. Alpha/beta suppression was associated with active tasks only, in sensorimotor and temporal clusters. Findings are consistent with internal model accounts suggesting that early efferent sensorimotor models transmitted along alpha and beta channels reflect a release from inhibition related to active attention to auditory stimuli.
Results are discussed in the broader context of dynamic, oscillatory models of cognition proposing that top-down internally generated states interact with bottom-up sensory processing to enhance task performance.
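Phase reset at low frequencies is commonly quantified as inter-trial phase coherence (ITC): the mean resultant length of per-trial phases at a probe frequency, near 1 when phase is reset consistently across trials and near 0 when phase is random. Below is a stdlib-only sketch on synthetic single-channel trials; the sampling rate, probe frequency, and trial counts are illustrative, not the study's parameters.

```python
# Inter-trial phase coherence (ITC) at one frequency: consistent phase reset
# across trials yields ITC near 1; random phases yield ITC near 0.
# Synthetic single-channel trials; all parameters are illustrative.
import cmath, math, random

FS = 250        # sampling rate (Hz)
FREQ = 5.0      # probe frequency in the low (theta) range
N = 125         # samples per trial (0.5 s)

def phase_at(trial, freq):
    # Phase of the single-frequency DFT coefficient (direct correlation sum).
    coef = sum(x * cmath.exp(-2j * math.pi * freq * n / FS)
               for n, x in enumerate(trial))
    return cmath.phase(coef)

def itc(trials, freq):
    # Mean resultant length of per-trial unit phase vectors.
    return abs(sum(cmath.exp(1j * phase_at(t, freq)) for t in trials)) / len(trials)

random.seed(1)
locked = [[math.sin(2 * math.pi * FREQ * n / FS) for n in range(N)]
          for _ in range(40)]                        # phase-aligned trials
jittered = [[math.sin(2 * math.pi * FREQ * n / FS + random.uniform(0, 2 * math.pi))
             for n in range(N)] for _ in range(40)]  # random phase per trial

itc_locked, itc_random = itc(locked, FREQ), itc(jittered, FREQ)
```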

  12. Suppression of the µ rhythm during speech and non-speech discrimination revealed by independent component analysis: implications for sensorimotor integration in speech processing.

    Directory of Open Access Journals (Sweden)

    Andrew Bowers

    BACKGROUND: Constructivist theories propose that articulatory hypotheses about incoming phonetic targets may function to enhance perception by limiting the possibilities for sensory analysis. To provide evidence for this proposal, it is necessary to map ongoing, high-temporal-resolution changes in sensorimotor activity (i.e., the sensorimotor µ rhythm) to accurate speech and non-speech discrimination performance (i.e., correct trials). METHODS: Sixteen participants (15 female and 1 male) were asked to passively listen to or actively identify speech and tone-sweeps in a two-alternative forced-choice discrimination task while the electroencephalograph (EEG) was recorded from 32 channels. The stimuli were presented at signal-to-noise ratios (SNRs) in which discrimination accuracy was high (i.e., 80-100%) and at low SNRs producing discrimination performance at chance. EEG data were decomposed using independent component analysis and clustered across participants using principal component methods in EEGLAB. RESULTS: ICA revealed left and right sensorimotor µ components for 14/16 and 13/16 participants, respectively, that were identified on the basis of scalp topography, spectral peaks, and localization to the precentral and postcentral gyri. Time-frequency analysis of left and right lateralized µ component clusters revealed significant (pFDR < .05) suppression in the traditional beta frequency range (13-30 Hz) prior to, during, and following syllable discrimination trials. No significant differences from baseline were found for passive tasks. Tone conditions produced right µ beta suppression following stimulus onset only. For the left µ, significant differences in the magnitude of beta suppression were found for correct speech discrimination trials relative to chance trials following stimulus offset. CONCLUSIONS: Findings are consistent with constructivist, internal model theories proposing that early forward motor models generate predictions about likely phonemic units.

  13. Electrical brain imaging evidences left auditory cortex involvement in speech and non-speech discrimination based on temporal features

    Directory of Open Access Journals (Sweden)

    Jancke Lutz

    2007-12-01

    Abstract Background Speech perception is based on a variety of spectral and temporal acoustic features available in the acoustic signal. Voice-onset time (VOT) is considered an important cue that is cardinal for phonetic perception. Methods In the present study, we recorded and compared scalp auditory evoked potentials (AEP) in response to consonant-vowel syllables (CV) with varying voice-onset times (VOT) and non-speech analogues with varying noise-onset times (NOT). In particular, we aimed to investigate the spatio-temporal pattern of acoustic feature processing underlying elemental speech perception and relate this temporal processing mechanism to specific activations of the auditory cortex. Results Results show that the characteristic AEP waveform in response to consonant-vowel syllables is on a par with those of non-speech sounds with analogous temporal characteristics. The amplitudes of the N1a and N1b components of the auditory evoked potentials correlated significantly with the duration of the VOT in CV syllables and, likewise, with the duration of the NOT in non-speech sounds. Furthermore, current density maps indicate overlapping supratemporal networks involved in the perception of both speech and non-speech sounds, with a bilateral activation pattern during the N1a time window and leftward asymmetry during the N1b time window. Elaborate regional statistical analysis of the activation over the middle and posterior portions of the supratemporal plane (STP) revealed strongly left-lateralized responses over the middle STP for both the N1a and N1b components, and a functional leftward asymmetry over the posterior STP for the N1b component. Conclusion The present data demonstrate overlapping spatio-temporal brain responses during the perception of temporal acoustic cues in both speech and non-speech sounds. Source estimation evidences a preponderant role of the left middle and posterior auditory cortex in speech and non-speech discrimination based on temporal features.

  14. Native and non-native speech sound processing and the neural mismatch responses: A longitudinal study on classroom-based foreign language learning.

    Science.gov (United States)

    Jost, Lea B; Eberhard-Moscicka, Aleksandra K; Pleisch, Georgette; Heusser, Veronica; Brandeis, Daniel; Zevin, Jason D; Maurer, Urs

    2015-06-01

    Learning a foreign language in a natural immersion context with high exposure to the new language has been shown to change the way speech sounds of that language are processed at the neural level. It remains unclear, however, to what extent this is also the case for classroom-based foreign language learning, particularly in children. To this end, we presented a mismatch negativity (MMN) experiment during EEG recordings as part of a longitudinal developmental study: 38 monolingual (Swiss-) German-speaking children (7.5 years) were tested shortly before they started to learn English at school and followed up one year later. Moreover, 22 (Swiss-) German adults were recorded. While children initially showed a positive mismatch response, an MMN emerged when a 3-Hz high-pass filter was applied. The overlap of a slow-wave positivity with the MMN indicates that two concurrent mismatch processes were elicited in children. The children's MMN in response to the non-native speech contrast was smaller compared to the native speech contrast irrespective of foreign language learning, suggesting that no additional neural resources were committed to processing the foreign language speech sound after one year of classroom-based learning. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Sensorimotor influences on speech perception in infancy.

    Science.gov (United States)

    Bruderer, Alison G; Danielson, D Kyle; Kandhadai, Padmapriya; Werker, Janet F

    2015-11-01

    The influence of speech production on speech perception is well established in adults. However, because adults have a long history of both perceiving and producing speech, the extent to which the perception-production linkage is due to experience is unknown. We addressed this issue by asking whether articulatory configurations can influence infants' speech perception performance. To eliminate influences from specific linguistic experience, we studied preverbal, 6-mo-old infants and tested the discrimination of a nonnative, and hence never-before-experienced, speech sound distinction. In three experimental studies, we used teething toys to control the position and movement of the tongue tip while the infants listened to the speech sounds. Using ultrasound imaging technology, we verified that the teething toys consistently and effectively constrained the movement and positioning of infants' tongues. With a looking-time procedure, we found that temporarily restraining infants' articulators impeded their discrimination of a nonnative consonant contrast but only when the relevant articulator was selectively restrained to prevent the movements associated with producing those sounds. Our results provide striking evidence that even before infants speak their first words and without specific listening experience, sensorimotor information from the articulators influences speech perception. These results transform theories of speech perception by suggesting that even at the initial stages of development, oral-motor movements influence speech sound discrimination. Moreover, an experimentally induced "impairment" in articulator movement can compromise speech perception performance, raising the question of whether long-term oral-motor impairments may impact perceptual development.

  16. Pragmatic Difficulties in the Production of the Speech Act of Apology by Iraqi EFL Learners

    Science.gov (United States)

    Al-Ghazalli, Mehdi Falih; Al-Shammary, Mohanad A. Amert

    2014-01-01

    The purpose of this paper is to investigate the pragmatic difficulties encountered by Iraqi EFL university students in producing the speech act of apology. Although the act of apology is easy to recognize or use by native speakers of English, non-native speakers generally encounter difficulties in discriminating one speech act from another. The…

  17. Speech discrimination difficulties in High-Functioning Autism Spectrum Disorder are likely independent of auditory hypersensitivity.

    Directory of Open Access Journals (Sweden)

    William Andrew Dunlop

    2016-08-01

    Autism Spectrum Disorder (ASD), characterised by impaired communication skills and repetitive behaviours, can also result in differences in sensory perception. Individuals with ASD often perform normally in simple auditory tasks but poorly compared to typically developed (TD) individuals on complex auditory tasks like discriminating speech from complex background noise. A common trait of individuals with ASD is hypersensitivity to auditory stimulation. No studies to our knowledge consider whether hypersensitivity to sounds is related to differences in speech-in-noise discrimination. We provide novel evidence that individuals with high-functioning ASD show poor performance compared to TD individuals in a speech-in-noise discrimination task with an attentionally demanding background noise, but not in a purely energetic noise. Further, we demonstrate in our small sample that speech-hypersensitivity does not appear to predict performance in the speech-in-noise task. The findings support the argument that an attentional deficit, rather than a perceptual deficit, affects the ability of individuals with ASD to discriminate speech from background noise. Finally, we piloted a novel questionnaire that measures difficulty hearing in noisy environments, and sensitivity to non-verbal and verbal sounds. Psychometric analysis using 128 TD participants provided novel evidence for a difference in sensitivity to non-verbal and verbal sounds, and these findings were reinforced by participants with ASD who also completed the questionnaire. The study was limited by a small and high-functioning sample of participants with ASD. Future work could test larger sample sizes and include lower-functioning ASD participants.

  18. Tone model integration based on discriminative weight training for Putonghua speech recognition

    Institute of Scientific and Technical Information of China (English)

    HUANG Hao; ZHU Jie

    2008-01-01

    A discriminative framework of tone model integration in continuous speech recognition was proposed. The method uses model-dependent weights to scale probabilities of the hidden Markov models based on spectral features and tone models based on tonal features. The weights are discriminatively trained using the minimum phone error criterion. The update equation for the model weights, based on the extended Baum-Welch algorithm, is derived. Various schemes of model weight combination are evaluated, and a smoothing technique is introduced to make training robust to overfitting. The proposed method is evaluated on tonal-syllable-output and character-output speech recognition tasks. The experimental results show that the proposed method obtained 9.5% and 4.7% relative error reductions over global weights on the two tasks, owing to a better interpolation of the given models. This proves the effectiveness of discriminatively trained model weights for tone model integration.
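The weighted integration amounts to a log-linear combination of the spectral HMM score and the tone-model score, with weights indexed by model. The toy rescoring sketch below illustrates this; all scores, weights, and syllable labels are invented, and the paper's weights are trained by minimum phone error rather than hand-set.

```python
# Log-linear combination of spectral (HMM) and tonal model log-probabilities
# with model-dependent weights, used to rescore competing tonal-syllable
# hypotheses. All numbers below are made up for illustration.

def combined_score(hyp, weights):
    w_spec, w_tone = weights[hyp["model"]]
    return w_spec * hyp["spectral_logp"] + w_tone * hyp["tone_logp"]

def rescore(hypotheses, weights):
    # Pick the hypothesis with the highest combined score.
    return max(hypotheses, key=lambda h: combined_score(h, weights))

hypotheses = [
    {"syllable": "ma1", "model": "ma", "spectral_logp": -10.0, "tone_logp": -9.0},
    {"syllable": "ma3", "model": "ma", "spectral_logp": -10.5, "tone_logp": -4.0},
]

# A near-zero global tone weight ignores tonal evidence; a larger,
# model-dependent weight lets the tone model flip the decision.
global_w = {"ma": (1.0, 0.05)}
trained_w = {"ma": (1.0, 0.5)}

best_global = rescore(hypotheses, global_w)["syllable"]
best_trained = rescore(hypotheses, trained_w)["syllable"]
```

With the small tone weight the spectrally favored "ma1" wins; with the larger weight the tonally favored "ma3" wins, which is the behavior the model-dependent weighting is meant to enable.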

  19. Age of Acquisition and Proficiency in a Second Language Independently Influence the Perception of Non-Native Speech

    Science.gov (United States)

    Archila-Suerte, Pilar; Zevin, Jason; Bunta, Ferenc; Hernandez, Arturo E.

    2012-01-01

    Sensorimotor processing in children and higher-cognitive processing in adults could determine how non-native phonemes are acquired. This study investigates how age-of-acquisition (AOA) and proficiency-level (PL) predict native-like perception of statistically dissociated L2 categories, i.e., within-category and between-category. In a similarity…

  20. Infants' Discrimination of Familiar and Unfamiliar Accents in Speech

    Science.gov (United States)

    Butler, Joseph; Floccia, Caroline; Goslin, Jeremy; Panneton, Robin

    2011-01-01

    This study investigates infants' discrimination abilities for familiar and unfamiliar regional English accents. Using a variation of the head-turn preference procedure, 5-month-old infants demonstrated that they were able to distinguish between their own South-West English accent and an unfamiliar Welsh English accent. However, this distinction…

  1. Discriminative tonal feature extraction method in mandarin speech recognition

    Institute of Scientific and Technical Information of China (English)

    HUANG Hao; ZHU Jie

    2007-01-01

    To utilize the supra-segmental nature of Mandarin tones, this article proposes a feature extraction method for hidden Markov model (HMM)-based tone modeling. The method uses linear transforms to project the F0 (fundamental frequency) features of neighboring syllables as compensations, and adds them to the original F0 features of the current syllable. The transforms are discriminatively trained using an objective function termed "minimum tone error", which is a smooth approximation of tone recognition accuracy. Experiments show that the new tonal features achieve a 3.82% improvement in tone recognition rate over the baseline of maximum-likelihood-trained HMMs on the normal F0 features. Further experiments show that discriminative HMM training on the new features is 8.78% better than the baseline.
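The compensation step itself is simple: linearly transform the neighboring syllables' F0 vectors and add the results to the current syllable's F0 vector. In the sketch below the transforms are scaled identity matrices standing in for the discriminatively trained matrices, and all F0 values and the feature dimension are made up.

```python
# Tonal feature compensation: project the F0 vectors of neighboring syllables
# with linear transforms and add them to the current syllable's F0 features.
# The transforms here are scaled identities standing in for the matrices the
# paper trains under the minimum-tone-error criterion; data are invented.

def mat_vec(a, v):
    return [sum(a[i][j] * v[j] for j in range(len(v))) for i in range(len(a))]

def compensate(f0_prev, f0_cur, f0_next, a_prev, a_next):
    left = mat_vec(a_prev, f0_prev)     # compensation from previous syllable
    right = mat_vec(a_next, f0_next)    # compensation from next syllable
    return [c + l + r for c, l, r in zip(f0_cur, left, right)]

DIM = 3  # toy F0 feature dimension
identity = lambda s: [[s if i == j else 0.0 for j in range(DIM)] for i in range(DIM)]

f0_prev = [200.0, 190.0, 180.0]
f0_cur = [150.0, 160.0, 170.0]
f0_next = [120.0, 125.0, 130.0]
features = compensate(f0_prev, f0_cur, f0_next, identity(0.1), identity(0.1))
```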

  2. Discrimination of foreign language speech contrasts by English monolinguals and French/English bilinguals.

    Science.gov (United States)

    McKelvie-Sebileau, Pippa; Davis, Chris

    2014-05-01

    The primary aim of this study was to determine whether late French/English bilinguals are able to utilize knowledge of bilabial stop contrasts that exist in each of their separate languages to discriminate bilabial stop contrasts from a new language (Thai). Secondary aims were to determine associations between bilabial stop consonant production in the L1 and the L2, between language learning factors and production and discrimination, and to compare English bilinguals' and monolinguals' discrimination. Three Thai bilabial stop consonant pairs differentiated by Voice Onset Time (VOT) (combinations of [b], [p], and [p(h)]) were presented to 28 French-English bilinguals, 25 English-French bilinguals, and 43 English monolinguals in an AX discrimination task. It was hypothesized that L2 experience would facilitate discrimination of contrasts that were phonemic in the L2 but not in the L1 for bilinguals. Only limited support for this hypothesis was found. However, results indicate that high production proficiency bilinguals had higher discrimination of the phonemic L2 contrasts (non-phonemic in L1). Discrimination patterns indicate lasting L1 influence, with similarity between unknown foreign language contrasts and L1 contrasts influencing discrimination rates. Production results show evidence for L2 influence in the L1. Results are discussed in the context of current speech perception models.
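Performance in an AX discrimination task like this one is often summarized as the sensitivity index d', computed from hit and false-alarm rates via the standard normal inverse CDF. A stdlib sketch; the rates below are invented for illustration.

```python
# Sensitivity (d') for an AX discrimination task from hit and false-alarm
# rates: d' = z(hit rate) - z(false-alarm rate). Rates below are made up.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    z = NormalDist().inv_cdf  # standard normal quantile function
    return z(hit_rate) - z(fa_rate)

# e.g., "different" trials answered "different" 80% of the time, and
# "same" trials incorrectly answered "different" 20% of the time:
sensitivity = d_prime(0.80, 0.20)
```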

  3. Atypical central auditory speech-sound discrimination in children who stutter as indexed by the mismatch negativity

    NARCIS (Netherlands)

    Jansson-Verkasalo, E.; Eggers, K.; Järvenpää, A.; Suominen, K.; Van Den Bergh, B.R.H.; de Nil, L.; Kujala, T.

    2014-01-01

    Purpose Recent theoretical conceptualizations suggest that disfluencies in stuttering may arise from several factors, one of them being atypical auditory processing. The main purpose of the present study was to investigate whether speech sound encoding and central auditory discrimination are atypical in children who stutter.

  4. Spectral-temporal EEG dynamics of speech discrimination processing in infants during sleep.

    Science.gov (United States)

    Gilley, Phillip M; Uhler, Kristin; Watson, Kaylee; Yoshinaga-Itano, Christine

    2017-03-22

    Oddball paradigms are frequently used to study auditory discrimination by comparing event-related potential (ERP) responses to a standard, high-probability sound and to a deviant, low-probability sound. Previous research has established that such paradigms, such as the mismatch response or mismatch negativity, are useful for examining auditory processes in young children and infants across various sleep and attention states. The extent to which oddball ERP responses may reflect subtle discrimination effects, such as speech discrimination, is largely unknown, especially in infants who have not yet acquired speech and language. Mismatch responses for three contrasts (non-speech, vowel, and consonant) were computed as a spectral-temporal probability function in 24 infants, and analyzed at the group level by a modified multidimensional scaling. Immediately following an onset gamma response (30-50 Hz), the emergence of a beta oscillation (12-30 Hz) was temporally coupled with a lower-frequency theta oscillation (2-8 Hz). The spectral-temporal probability of this coupling effect relative to a subsequent theta modulation corresponds with discrimination difficulty for non-speech, vowel, and consonant contrast features. The theta modulation effect suggests that unexpected sounds are encoded as a probabilistic measure of surprise. These results support the notion that auditory discrimination is driven by the development of brain networks for predictive processing, and can be measured in infants during sleep. The results presented here have implications for the interpretation of discrimination as a probabilistic process, and may provide a basis for the development of single-subject and single-trial classification in a clinically useful context. An infant's brain is processing information about the environment and performing computations, even during sleep. These computations reflect subtle differences in acoustic feature processing that are necessary for language

  5. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech From Speech Delay: III. Theoretical Coherence of the Pause Marker with Speech Processing Deficits in Childhood Apraxia of Speech.

    Science.gov (United States)

    Shriberg, Lawrence D; Strand, Edythe A; Fourakis, Marios; Jakielski, Kathy J; Hall, Sheryl D; Karlsson, Heather B; Mabie, Heather L; McSweeny, Jane L; Tilkens, Christie M; Wilson, David L

    2017-04-14

    Previous articles in this supplement described rationale for and development of the pause marker (PM), a diagnostic marker of childhood apraxia of speech (CAS), and studies supporting its validity and reliability. The present article assesses the theoretical coherence of the PM with speech processing deficits in CAS. PM and other scores were obtained for 264 participants in 6 groups: CAS in idiopathic, neurogenetic, and complex neurodevelopmental disorders; adult-onset apraxia of speech (AAS) consequent to stroke and primary progressive apraxia of speech; and idiopathic speech delay. Participants with CAS and AAS had significantly lower scores than typically speaking reference participants and speech delay controls on measures posited to assess representational and transcoding processes. Representational deficits differed between CAS and AAS groups, with support for both underspecified linguistic representations and memory/access deficits in CAS, but for only the latter in AAS. CAS-AAS similarities in the age-sex standardized percentages of occurrence of the most frequent type of inappropriate pauses (abrupt) and significant differences in the standardized occurrence of appropriate pauses were consistent with speech processing findings. Results support the hypotheses of core representational and transcoding speech processing deficits in CAS and theoretical coherence of the PM's pause-speech elements with these deficits.

  6. Gender and vocal production mode discrimination using the high frequencies for speech and singing.

    Science.gov (United States)

    Monson, Brian B; Lotto, Andrew J; Story, Brad H

    2014-01-01

    Humans routinely produce acoustical energy at frequencies above 6 kHz during vocalization, but this frequency range is often not represented in communication devices and speech perception research. Recent advancements toward high-definition (HD) voice and extended bandwidth hearing aids have increased the interest in the high frequencies. The potential perceptual information provided by high-frequency energy (HFE) is not well characterized. We found that humans can accomplish tasks of gender discrimination and vocal production mode discrimination (speech vs. singing) when presented with acoustic stimuli containing only HFE at both amplified and normal levels. Performance in these tasks was robust in the presence of low-frequency masking noise. No substantial learning effect was observed. Listeners also were able to identify the sung and spoken text (excerpts from "The Star-Spangled Banner") with very few exposures. These results add to the increasing evidence that the high frequencies provide at least redundant information about the vocal signal, suggesting that its representation in communication devices (e.g., cell phones, hearing aids, and cochlear implants) and speech/voice synthesizers could improve these devices and benefit normal-hearing and hearing-impaired listeners.
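Isolating high-frequency energy amounts to high-pass filtering the stimuli near 6 kHz. Real stimulus preparation would use a much steeper filter; the sketch below uses a one-pole high-pass purely to illustrate the idea. The 6 kHz cutoff follows the abstract; every other parameter is illustrative.

```python
# Minimal one-pole high-pass filter sketch for isolating high-frequency
# energy (HFE). A real stimulus pipeline would use a much steeper filter;
# the 6 kHz cutoff follows the abstract, other parameters are illustrative.
import math

FS = 44100           # sampling rate (Hz)
CUTOFF = 6000.0      # high-pass cutoff (Hz)

def high_pass(samples, fs=FS, fc=CUTOFF):
    # Discrete RC high-pass: y[n] = a * (y[n-1] + x[n] - x[n-1]).
    rc = 1.0 / (2 * math.pi * fc)
    dt = 1.0 / fs
    alpha = rc / (rc + dt)
    out, prev_x, prev_y = [], samples[0], 0.0
    for x in samples[1:]:
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

tone = lambda f: [math.sin(2 * math.pi * f * n / FS) for n in range(4410)]
low_residual = rms(high_pass(tone(200.0)))     # well below cutoff: attenuated
high_residual = rms(high_pass(tone(8000.0)))   # above cutoff: largely passed
```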

  7. Gender and vocal production mode discrimination using the high frequencies for speech and singing

    Directory of Open Access Journals (Sweden)

    Brian B Monson

    2014-10-01

    Humans routinely create acoustical energy at frequencies above 6 kHz during vocalization, but this frequency range is often not represented in communication devices and speech perception research. Recent advancements toward HD voice and extended bandwidth hearing aids have increased the interest in the high frequencies. The potential perceptual information provided by high-frequency energy (HFE) is not well characterized. We found that humans can accomplish tasks of gender discrimination and vocal production mode discrimination (speech vs. singing) when presented with acoustic stimuli containing only HFE at both amplified and normal levels. Performance in these tasks was robust in the presence of low-frequency masking noise. No substantial learning effect was observed. Listeners also were able to identify the sung and spoken text (excerpts from "The Star-Spangled Banner") with very few exposures. These results add to the increasing evidence that the high frequencies provide at least redundant information about the vocal signal, suggesting that its representation in communication devices (e.g., cell phones, hearing aids, and cochlear implants) and speech/voice synthesizers could improve these devices and benefit normal-hearing and hearing-impaired listeners.

  8. Intelligibility of American English Vowels of Native and Non-Native Speakers in Quiet and Speech-Shaped Noise

    Science.gov (United States)

    Liu, Chang; Jin, Su-Hyun

    2013-01-01

    This study examined intelligibility of twelve American English vowels produced by English, Chinese, and Korean native speakers in quiet and speech-shaped noise in which vowels were presented at six sensation levels from 0 dB to 10 dB. The slopes of vowel intelligibility functions and the processing time for listeners to identify vowels were…

  9. L2 Learners' Assessments of Accentedness, Fluency, and Comprehensibility of Native and Nonnative German Speech

    Science.gov (United States)

    O'Brien, Mary Grantham

    2014-01-01

    In early stages of classroom language learning, many adult second language (L2) learners communicate primarily with one another, yet we know little about which speech stream characteristics learners tune into or the extent to which they understand this lingua franca communication. In the current study, 25 native English speakers learning German as…

  11. Theta Brain Rhythms Index Perceptual Narrowing in Infant Speech Perception

    Directory of Open Access Journals (Sweden)

    Alexis eBosseler

    2013-10-01

    The development of speech perception shows a dramatic transition between infancy and adulthood. Between 6 and 12 months, infants' initial ability to discriminate all phonetic units across the world's languages narrows: native discrimination increases while nonnative discrimination shows a steep decline. We used magnetoencephalography (MEG) to examine whether brain oscillations in the theta band (4-8 Hz), reflecting increases in attention and cognitive effort, would provide a neural measure of the perceptual narrowing phenomenon in speech. Using an oddball paradigm, we varied speech stimuli in two dimensions, stimulus frequency (frequent vs. infrequent) and language (native vs. nonnative speech syllables), and tested 6-month-old infants, 12-month-old infants, and adults. We hypothesized that 6-month-old infants would show increased relative theta power (RTP) for frequent syllables, regardless of their status as native or nonnative syllables, reflecting young infants' attention and cognitive effort in response to highly frequent stimuli (statistical learning). In adults, we hypothesized increased RTP for nonnative stimuli, regardless of their presentation frequency, reflecting increased cognitive effort for nonnative phonetic categories. The 12-month-old infants were expected to show a pattern in transition, but one more similar to adults than to 6-month-old infants. The MEG brain rhythm results supported these hypotheses. We suggest that perceptual narrowing in speech perception is governed by an implicit learning process. This learning process involves an implicit shift in attention from frequent events (infants) to learned categories (adults). Theta brain oscillatory activity may provide an index of perceptual narrowing beyond speech, and would offer a test of whether the early speech learning process is governed by domain-general or domain-specific processes.
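Relative theta power (RTP) is simply the power in the 4-8 Hz band divided by total power. Below is a stdlib sketch using a naive DFT on a synthetic one-channel signal; the band edges follow the abstract, while the signal content and sampling rate are made up.

```python
# Relative theta power (RTP): power in the 4-8 Hz band divided by total
# power, computed with a naive DFT on a synthetic one-channel signal.
# Band edges follow the abstract; everything else is illustrative.
import math, cmath

FS = 100   # sampling rate (Hz)
N = 200    # 2-second window

def power_spectrum(x):
    # |DFT|^2 for the positive-frequency bins only.
    half = len(x) // 2
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * n / len(x))
                    for n, s in enumerate(x))) ** 2 for k in range(half)]

def relative_theta_power(x, fs=FS):
    p = power_spectrum(x)
    hz_per_bin = fs / len(x)
    theta = sum(pk for k, pk in enumerate(p) if 4 <= k * hz_per_bin <= 8)
    total = sum(p[1:])  # skip the DC bin
    return theta / total

signal = [math.sin(2 * math.pi * 6 * n / FS) +        # theta component (6 Hz)
          0.5 * math.sin(2 * math.pi * 20 * n / FS)   # beta component (20 Hz)
          for n in range(N)]
rtp = relative_theta_power(signal)
```

For this signal the theta component carries 4/5 of the non-DC power, so RTP comes out near 0.8.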

  12. Comparison of ISFs and LSFs in Speech/Music Discrimination System

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The immittance spectral frequencies (ISFs) are proposed as a new set of classification features and compared with the line spectral frequencies (LSFs) applied in a frame-level wideband speech/music discrimination system. These two sets of features can be shared by the classifier and the coding module to reduce the total computational complexity, making our classification system suitable for multi-mode audio coding applications. A performance assessment and comparison of the features are made. The experimental results show that the ISFs and LSFs have similarly good performance when full covariance matrices are used in the classification models, and that the ISFs perform slightly better when diagonal matrices are used. Their statistical differences for speech and music signals are also revealed.
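
    The full- versus diagonal-covariance comparison in this abstract can be made concrete with a minimal single-Gaussian-per-class frame classifier over two-dimensional features. The toy feature vectors below are invented stand-ins for ISF/LSF-derived features, not data from the paper.

```python
import math

def fit_gaussian(samples, diagonal=False):
    """Fit a 2-D Gaussian; diagonal=True drops the off-diagonal covariance term."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    sxx = sum((x - mx) ** 2 for x, _ in samples) / n
    syy = sum((y - my) ** 2 for _, y in samples) / n
    sxy = 0.0 if diagonal else sum((x - mx) * (y - my) for x, y in samples) / n
    return (mx, my), (sxx, sxy, syy)

def log_likelihood(point, model):
    """Log density of a 2-D Gaussian, via the closed-form 2x2 inverse."""
    (mx, my), (sxx, sxy, syy) = model
    det = sxx * syy - sxy * sxy
    dx, dy = point[0] - mx, point[1] - my
    maha = (syy * dx * dx - 2.0 * sxy * dx * dy + sxx * dy * dy) / det
    return -0.5 * (maha + math.log(det)) - math.log(2.0 * math.pi)

# Invented two-dimensional frame features standing in for ISF/LSF vectors.
speech = [(1.0, 1.1), (1.2, 1.4), (0.8, 0.9), (1.1, 1.15)]
music = [(3.0, 1.0), (3.2, 1.4), (2.8, 1.2), (3.1, 0.8)]

def classify(frame, diagonal):
    """Pick the class whose (full or diagonal) Gaussian best explains the frame."""
    models = {label: fit_gaussian(data, diagonal)
              for label, data in (("speech", speech), ("music", music))}
    return max(models, key=lambda label: log_likelihood(frame, models[label]))
```

    With full covariance the model can exploit correlation between feature dimensions; zeroing the off-diagonal term is exactly the diagonal-matrix simplification the abstract compares, trading modeling power for lower computational cost.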

  13. Discrimination of static and dynamic spectral patterns by children and young adults in relationship to speech perception in noise

    Directory of Open Access Journals (Sweden)

    Hanin Rayes

    2014-03-01

    Past work has shown a relationship between the ability to discriminate spectral patterns and measures of speech intelligibility. The purpose of this study was to investigate the ability of both children and young adults to discriminate static and dynamic spectral patterns, comparing performance between the two groups and evaluating within-group results in terms of their relationship to speech-in-noise perception. Data were collected from normal-hearing children (age range: 5.4-12.8 years) and young adults (mean age: 22.8 years) on two spectral discrimination tasks and on speech-in-noise perception. The first discrimination task, involving static spectral profiles, measured the ability to detect a change in the phase of a low-density sinusoidal spectral ripple of wideband noise. Using dynamic spectral patterns, the second task determined the signal-to-noise ratio needed to discriminate the temporal pattern of frequency fluctuation imposed by stochastic low-rate frequency modulation (FM). Children performed significantly more poorly than young adults on both discrimination tasks. For children, a significant correlation between speech-in-noise perception and spectral-pattern discrimination was obtained only with the dynamic patterns of the FM condition, with partial correlation suggesting that factors related to the children's age mediated the relationship.
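
    The static task described here asks listeners to detect a phase change in a sinusoidal spectral ripple, i.e. a sinusoidal amplitude envelope imposed on noise along a log-frequency axis, where a phase inversion turns spectral peaks into valleys. A small sketch of that envelope follows; the ripple density, depth, reference frequency, and frequency grid are illustrative values, not the study's stimulus parameters.

```python
import math

def ripple_envelope_db(freqs_hz, density=1.0, phase=0.0,
                       ref_hz=100.0, depth_db=10.0):
    """Sinusoidal spectral ripple (in dB) along a log-frequency axis."""
    return [depth_db * math.sin(2 * math.pi * density * math.log2(f / ref_hz)
                                + phase)
            for f in freqs_hz]

# 64 log-spaced component frequencies from 100 Hz upward (illustrative grid).
freqs = [100.0 * 2 ** (k / 16) for k in range(64)]
standard = ripple_envelope_db(freqs, phase=0.0)
inverted = ripple_envelope_db(freqs, phase=math.pi)  # phase-reversed ripple
```

    Since sin(x + π) = -sin(x), the inverted ripple is the point-by-point negation of the standard one, which is the spectral cue listeners must detect.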

  14. THE REFLECTION OF BILINGUALISM IN THE SPEECH OF PRESCHOOL CHILDREN SPEAKING NATIVE (ERZYA) AND NON-NATIVE (RUSSIAN) LANGUAGE

    Directory of Open Access Journals (Sweden)

    Mosina, N.M.

    2016-03-01

    This article considers the specific features of the Mordovian speech of 16 bilingual children, aged 3 to 7 years, living in Mordovia and speaking both the Erzya and Russian languages. Their language is studied using short stories told from pictures, and the article attempts to identify the influence of the Russian language on Erzya and to detect occurrences of interference at the lexical and grammatical levels.

  15. Right hemisphere specialization for intensity discrimination of musical and speech sounds.

    Science.gov (United States)

    Brancucci, Alfredo; Babiloni, Claudio; Rossini, Paolo Maria; Romani, Gian Luca

    2005-01-01

    Sound intensity is the primary and most elementary feature of auditory signals. Its discrimination plays a fundamental role in different behaviours related to auditory perception such as sound source localization, motion detection, and recognition of speech sounds. This study was aimed at investigating hemispheric asymmetries for processing intensity of complex tones and consonant-vowel syllables. Forty-four right-handed non-musicians were presented with two dichotic matching-to-sample tests with focused attention: one with complex tones of different intensities (musical test) and the other with consonant-vowel syllables of different intensities (speech test). Intensity differences (60, 70, and 80 dBA) were obtained by altering the gain of a synthesized harmonic tone (260 Hz fundamental frequency) and of a consonant-vowel syllable (/ba/) recorded from a natural voice. Dependent variables were accuracy and reaction time. Results showed a significant clear-cut left ear advantage in both tests for both dependent variables. A monaural control experiment ruled out possible attentional biases. This study provides behavioural evidence of a right hemisphere specialization for the perception of the intensity of musical and speech sounds in healthy subjects.

  16. Atypical central auditory speech-sound discrimination in children who stutter as indexed by the mismatch negativity.

    Science.gov (United States)

    Jansson-Verkasalo, Eira; Eggers, Kurt; Järvenpää, Anu; Suominen, Kalervo; Van den Bergh, Bea; De Nil, Luc; Kujala, Teija

    2014-09-01

    Recent theoretical conceptualizations suggest that disfluencies in stuttering may arise from several factors, one of them being atypical auditory processing. The main purpose of the present study was to investigate whether speech sound encoding and central auditory discrimination are affected in children who stutter (CWS). Participants were 10 CWS and 12 typically developing children with fluent speech (TDC). Event-related potentials (ERPs) for syllables and syllable changes [consonant, vowel, vowel-duration, frequency (F0), and intensity changes], all critical for the speech perception and language development of CWS, were compared to those of TDC. There were no significant group differences in the amplitudes or latencies of the P1 or N2 responses elicited by the standard stimuli. However, the mismatch negativity (MMN) amplitude was significantly smaller in CWS than in TDC. For TDC, all deviants of the linguistic multifeature paradigm elicited significant MMN amplitudes, comparable with results found earlier with the same paradigm in 6-year-old children. In contrast, only the duration change elicited a significant MMN in CWS. The results showed that central auditory speech-sound processing was typical at the level of sound encoding in CWS. In contrast, central speech-sound discrimination, as indexed by the MMN for multiple sound features (both phonetic and prosodic), was atypical in the group of CWS. Findings were linked to existing conceptualizations of stuttering etiology. The reader will be able (a) to describe recent findings on central auditory speech-sound processing in individuals who stutter, (b) to describe the measurement of auditory reception and central auditory speech-sound discrimination, and (c) to describe the findings on central auditory speech-sound discrimination, as indexed by the mismatch negativity (MMN), in children who stutter.
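
    The MMN quantity compared between groups in records like this one is, at its core, the deviant-minus-standard difference of averaged ERP epochs, with the MMN amplitude read off as the negative peak of that difference wave. A minimal sketch with synthetic numbers follows; these are arbitrary units for illustration, not the study's data, and a real pipeline adds filtering, baseline correction, and latency windows.

```python
def average_erp(trials):
    """Average time-locked epochs sample by sample."""
    n = len(trials)
    return [sum(trial[i] for trial in trials) / n for i in range(len(trials[0]))]

def mismatch_response(standard_trials, deviant_trials):
    """Deviant-minus-standard difference wave; the MMN is its negative peak."""
    standard = average_erp(standard_trials)
    deviant = average_erp(deviant_trials)
    return [d - s for d, s in zip(deviant, standard)]

# Synthetic five-sample epochs (arbitrary units); deviants carry an extra
# negativity at the middle sample, mimicking a change-detection response.
standards = [[0.0, 0.1, 0.0, -0.1, 0.0], [0.0, 0.0, 0.1, -0.1, 0.0]]
deviants = [[0.0, 0.1, -0.9, -0.3, 0.0], [0.0, 0.0, -1.1, -0.3, 0.0]]

diff = mismatch_response(standards, deviants)
mmn_amplitude = min(diff)  # most negative point of the difference wave
```

    A "significantly smaller MMN," as reported for the CWS group, corresponds to this negative peak being closer to zero across participants.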

  17. Evidence for language transfer leading to a perceptual advantage for non-native listeners.

    Science.gov (United States)

    Chang, Charles B; Mishler, Alan

    2012-10-01

    Phonological transfer from the native language is a common problem for non-native speakers that has repeatedly been shown to result in perceptual deficits vis-à-vis native speakers. It was hypothesized, however, that transfer could help, rather than hurt, if it resulted in a beneficial bias. Due to differences in pronunciation norms between Korean and English, Koreans in the U.S. were predicted to be better than Americans at perceiving unreleased stops-not only in their native language (Korean) but also in their non-native language (English). In three experiments, Koreans were found to be significantly more accurate than Americans at identifying unreleased stops in Korean, at identifying unreleased stops in English, and at discriminating between the presence and absence of an unreleased stop in English. Taken together, these results suggest that cross-linguistic transfer is capable of boosting speech perception by non-natives beyond native levels.

  18. Cognitive and Linguistic Sources of Variance in 2-Year-Olds' Speech-Sound Discrimination: A Preliminary Investigation

    Science.gov (United States)

    Lalonde, Kaylah; Holt, Rachael Frush

    2014-01-01

    Purpose: This preliminary investigation explored potential cognitive and linguistic sources of variance in 2-year-olds' speech-sound discrimination by using the toddler change/no-change procedure and examined whether modifications would result in a procedure that can be used consistently with younger 2-year-olds. Method: Twenty typically…

  19. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing

    Science.gov (United States)

    Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela

    2015-01-01

    Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes. PMID:26233047

  20. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing.

    Science.gov (United States)

    Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela

    2015-07-01

    Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes.

  1. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech from Speech Delay: IV. The Pause Marker Index

    Science.gov (United States)

    Shriberg, Lawrence D.; Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose: Three previous articles provided rationale, methods, and several forms of validity support for a diagnostic marker of childhood apraxia of speech (CAS), termed the pause marker (PM). Goals of the present article were to assess the validity and stability of the PM Index (PMI) to scale CAS severity. Method: PM scores and speech, prosody,…

  2. Speech-feature discrimination in children with Asperger syndrome as determined with the multi-feature mismatch negativity paradigm.

    Science.gov (United States)

    Kujala, T; Kuuluvainen, S; Saalasti, S; Jansson-Verkasalo, E; von Wendt, L; Lepistö, T

    2010-09-01

    Asperger syndrome, belonging to the autistic spectrum of disorders, involves deficits in social interaction and in the prosodic use of language but normal development of formal language abilities. Auditory processing involves both hyper- and hyporeactivity to acoustic changes. Responses composed of mismatch negativity (MMN) and obligatory components were recorded for five types of deviations in syllables (vowel, vowel duration, consonant, syllable frequency, syllable intensity) with the multi-feature paradigm from 8- to 12-year-old children with Asperger syndrome. Children with Asperger syndrome had larger MMNs for intensity changes and smaller MMNs for frequency changes than typically developing children, whereas no MMN group differences were found for the other deviant stimuli. Furthermore, children with Asperger syndrome performed more poorly than controls on the Comprehension of Instructions subtest of a language test battery. Cortical speech-sound discrimination is aberrant in children with Asperger syndrome. This is evident as both hypersensitive and depressed neural reactions to speech-sound changes, and is associated with features (frequency, intensity) that are relevant for prosodic processing. The multi-feature MMN paradigm, which includes variation and thereby resembles natural speech-hearing circumstances, suggests an abnormal pattern of speech discrimination in Asperger syndrome, including both hypo- and hypersensitive responses to speech features.

  3. Word Durations in Non-Native English

    Science.gov (United States)

    Baker, Rachel E.; Baese-Berk, Melissa; Bonnasse-Gahot, Laurent; Kim, Midam; Van Engen, Kristin J.; Bradlow, Ann R.

    2010-01-01

    In this study, we compare the effects of English lexical features on word duration for native and non-native English speakers and for non-native speakers with different L1s and a range of L2 experience. We also examine whether non-native word durations lead to judgments of a stronger foreign accent. We measured word durations in English paragraphs read by 12 American English (AE), 20 Korean, and 20 Chinese speakers. We also had AE listeners rate the 'accentedness' of these non-native speakers. AE speech had shorter durations, greater within-speaker word duration variance, greater reduction of function words, and less between-speaker variance than non-native speech. However, both AE and non-native speakers showed sensitivity to lexical predictability by reducing second mentions and high frequency words. Non-native speakers with more native-like word durations, greater within-speaker word duration variance, and greater function word reduction were perceived as less accented. Overall, these findings identify word duration as an important and complex feature of foreign-accented English. PMID:21516172

  4. A qualitative analysis of hate speech reported to the Romanian National Council for Combating Discrimination (2003-2015)

    Directory of Open Access Journals (Sweden)

    Adriana Iordache

    2015-12-01

    The article analyzes the specificities of Romanian hate speech over a period of twelve years through a qualitative analysis of 384 decisions of the National Council for Combating Discrimination. The study employs a coding methodology that allows decisions to be separated according to the group that was the victim of hate speech. The article finds that the stereotypes employed are similar to those encountered in the international literature. The main target of hate speech is the Roma, who are seen as "dirty", "uncivilized", and a threat to Romania's image abroad. Other stereotypes encountered were those of the "disloyal" Hungarian and of the sexually promiscuous woman. Moreover, women are seen as unfit for management positions. The article also discusses stereotypes about homosexuals, who are seen as "sick", and about non-Orthodox religions, portrayed as "sectarian".

  5. Earlier speech exposure does not accelerate speech acquisition.

    Science.gov (United States)

    Peña, Marcela; Werker, Janet F; Dehaene-Lambertz, Ghislaine

    2012-08-15

    Critical periods in language acquisition have been discussed primarily with reference to studies of people who are deaf or bilingual. Here, we provide evidence on the opening of sensitivity to the linguistic environment by studying the response to a change of phoneme at a native and nonnative phonetic boundary in full-term and preterm human infants using event-related potentials. Full-term infants show a decline in their discrimination of nonnative phonetic contrasts between 9 and 12 months of age. Because the womb is a high-frequency filter, many phonemes are strongly degraded in utero. Preterm infants thus benefit from earlier and richer exposure to broadcast speech. We find that preterms do not take advantage of this enriched linguistic environment: the decrease in amplitude of the mismatch response to a nonnative change of phoneme at the end of the first year of life was dependent on maturational age and not on the duration of exposure to broadcast speech. The shaping of phonological representations by the environment is thus strongly constrained by brain maturation factors.

  6. Native and Non-native Teachers’ Pragmatic Criteria for Rating Request Speech Act: The Case of American and Iranian EFL Teachers

    Directory of Open Access Journals (Sweden)

    Minoo Alemi

    2017-04-01

    Over the last few decades, several aspects of pragmatic knowledge and its effects on teaching and learning a second language (L2) have been explored in many studies. However, among these studies, the area of interlanguage pragmatic (ILP) assessment is a quite novel issue, and many of its features have remained unnoticed. As ILP assessment has received more attention recently, investigating the criteria EFL teachers use for rating various speech acts has become important. In this respect, the present study aimed to investigate native and non-native EFL teachers' rating scores and criteria regarding the speech act of request. To this end, 50 American ESL teachers and 50 Iranian EFL teachers participated, rating EFL learners' responses to video-prompted Discourse Completion Tests (DCTs) on the speech act of request. Raters were asked to rate the EFL learners' responses and state their criteria for assessment. A content analysis of the raters' comments revealed nine criteria that they considered in their assessment. Moreover, t-test and chi-square analyses of the raters' scores and criteria showed significant differences between native and non-native EFL teachers' rating patterns. The results of this study also shed light on the importance of sociopragmatic and pragmalinguistic features in native and non-native teachers' pragmatic rating, which can have several implications for L2 teachers, learners, and materials developers.

  7. Effect of Age on Silent Gap Discrimination in Synthetic Speech Stimuli.

    Science.gov (United States)

    Lister, Jennifer; Tarver, Kenton

    2004-01-01

    The difficulty that older listeners experience understanding conversational speech may be related to their limited ability to use information present in the silent intervals (i.e., temporal gaps) between dynamic speech sounds. When temporal gaps are present between nonspeech stimuli that are spectrally invariant (e.g., noise bands or sinusoids),…

  8. Visemic Processing in Audiovisual Discrimination of Natural Speech: A Simultaneous fMRI-EEG Study

    Science.gov (United States)

    Dubois, Cyril; Otzenberger, Helene; Gounot, Daniel; Sock, Rudolph; Metz-Lutz, Marie-Noelle

    2012-01-01

    In a noisy environment, visual perception of articulatory movements improves natural speech intelligibility. Parallel to phonemic processing based on auditory signal, visemic processing constitutes a counterpart based on "visemes", the distinctive visual units of speech. Aiming at investigating the neural substrates of visemic processing in a…

  9. Effects of peripheral auditory adaptation on the discrimination of speech sounds

    OpenAIRE

    Lacerda, Francisco

    1987-01-01

    This study investigates perceptual effects of discharge rate adaptation in the auditory-nerve fibers. Discrimination tests showed that brief synthetic stimuli with stationary formants and periodic source were better discriminated when they had an abrupt as opposed to a gradual onset (non-adapted vs adapted condition). This effect was not observed for corresponding stimuli with noise source. Discrimination among synthetic /da/ stimuli (abrupt onsets) was worse than among /ad/ stimuli when the ...

  10. Discrimination

    National Research Council Canada - National Science Library

    Midtbøen, Arnfinn H; Rogstad, Jon

    2012-01-01

    ... of discrimination in the labour market as well as to the mechanisms involved in discriminatory hiring practices. The design has several advantages compared to 'single-method' approaches and provides a more substantial understanding of the processes leading to ethnic inequality in the labour market.

  11. Pitch expertise is not created equal: Cross-domain effects of musicianship and tone language experience on neural and behavioural discrimination of speech and music.

    Science.gov (United States)

    Hutka, Stefanie; Bidelman, Gavin M; Moreno, Sylvain

    2015-05-01

    Psychophysiological evidence supports a music-language association, such that experience in one domain can impact processing required in the other domain. We investigated the bidirectionality of this association by measuring event-related potentials (ERPs) in native English-speaking musicians, native tone language (Cantonese) nonmusicians, and native English-speaking nonmusician controls. We tested the degree to which pitch expertise stemming from musicianship or tone language experience similarly enhances the neural encoding of auditory information necessary for speech and music processing. Early cortical discriminatory processing for music and speech sounds was characterized using the mismatch negativity (MMN). Stimuli included 'large deviant' and 'small deviant' pairs of sounds that differed minimally in pitch (fundamental frequency, F0; contrastive musical tones) or timbre (first formant, F1; contrastive speech vowels). Behavioural F0 and F1 difference limen tasks probed listeners' perceptual acuity for these same acoustic features. Musicians and Cantonese speakers performed comparably in pitch discrimination; only musicians showed an additional advantage on timbre discrimination performance and enhanced MMN responses to both music and speech. Cantonese language experience was not associated with enhancements on neural measures, despite enhanced behavioural pitch acuity. These data suggest that while both musicianship and tone language experience enhance some aspects of auditory acuity (behavioural pitch discrimination), musicianship confers farther-reaching enhancements to auditory function, tuning both pitch and timbre-related brain processes.

  12. Speech Intelligibility and Accents in Speech-Mediated Interfaces: Results and Recommendations

    Science.gov (United States)

    Lawrence, Halcyon M.

    2013-01-01

    There continues to be significant growth in the development and use of speech-mediated devices and technology products; however, there is no evidence that non-native English speech is used in these devices, despite the fact that English is now spoken by more non-native speakers than native speakers worldwide. This relative absence of nonnative…

  13. Toward an Understanding of the Role of Speech Recognition in Nonnative Speech Assessment. TOEFL iBT Research Report. TOEFL iBT-02. ETS RR-07-02

    Science.gov (United States)

    Zechner, Klaus; Bejar, Isaac I.; Hemat, Ramin

    2007-01-01

    The increasing availability and performance of computer-based testing has prompted more research on the automatic assessment of language and speaking proficiency. In this investigation, we evaluated the feasibility of using an off-the-shelf speech-recognition system for scoring speaking prompts from the LanguEdge field test of 2002. We first…

  14. The effect of a hearing aid noise reduction algorithm on the acquisition of novel speech contrasts.

    Science.gov (United States)

    Marcoux, André M; Yathiraj, Asha; Côté, Isabelle; Logan, John

    2006-12-01

    Audiologists are reluctant to prescribe digital hearing aids with active digital noise reduction (DNR) to pre-verbal children due to their potential for an adverse effect on the acquisition of language. The present study investigated the relation between DNR and language acquisition by modeling pre-verbal language acquisition using adult listeners presented with a non-native speech contrast. Two groups of normal-hearing, monolingual Anglophone subjects were trained over four testing sessions to discriminate novel, difficult to discriminate, non-native Hindi speech contrasts in continuous noise, where one group listened to both speech items and noise processed with DNR, and where the other group listened to unprocessed speech in noise. Results did not reveal a significant difference in performance between groups across testing sessions. A significant learning effect was noted for both groups between the first and second testing sessions only. Overall, DNR does not appear to enhance or impair the acquisition of novel speech contrasts by adult listeners.

  15. Processing Nonnative Consonant Clusters in the Classroom: Perception and Production of Phonetic Detail

    Science.gov (United States)

    Davidson, Lisa; Wilson, Colin

    2016-01-01

    Recent research has shown that speakers are sensitive to non-contrastive phonetic detail present in nonnative speech (e.g. Escudero et al. 2012; Wilson et al. 2014). Difficulties in interpreting and implementing unfamiliar phonetic variation can lead nonnative speakers to modify second language forms by vowel epenthesis and other changes. These…

  16. Recognition of spoken words by native and non-native listeners: Talker-, listener-, and item-related factors

    Science.gov (United States)

    Bradlow, Ann R.; Pisoni, David B.

    2012-01-01

    In order to gain insight into the interplay between the talker-, listener-, and item-related factors that influence speech perception, a large multi-talker database of digitally recorded spoken words was developed, and was then submitted to intelligibility tests with multiple listeners. Ten talkers produced two lists of words at three speaking rates. One list contained lexically “easy” words (words with few phonetically similar sounding “neighbors” with which they could be confused), and the other list contained lexically “hard” words (words with many phonetically similar sounding “neighbors”). An analysis of the intelligibility data obtained with native speakers of English (experiment 1) showed a strong effect of lexical similarity. Easy words had higher intelligibility scores than hard words. A strong effect of speaking rate was also found whereby slow and medium rate words had higher intelligibility scores than fast rate words. Finally, a relationship was also observed between the various stimulus factors whereby the perceptual difficulties imposed by one factor, such as a hard word spoken at a fast rate, could be overcome by the advantage gained through the listener's experience and familiarity with the speech of a particular talker. In experiment 2, the investigation was extended to another listener population, namely, non-native listeners. Results showed that the ability to take advantage of surface phonetic information, such as a consistent talker across items, is a perceptual skill that transfers easily from first to second language perception. However, non-native listeners had particular difficulty with lexically hard words even when familiarity with the items was controlled, suggesting that non-native word recognition may be compromised when fine phonetic discrimination at the segmental level is required. Taken together, the results of this study provide insight into the signal-dependent and signal-independent factors that influence spoken

  17. Age-related sensitive periods influence visual language discrimination in adults.

    Science.gov (United States)

    Weikum, Whitney M; Vouloumanos, Athena; Navarra, Jordi; Soto-Faraco, Salvador; Sebastián-Gallés, Núria; Werker, Janet F

    2013-01-01

    Adults as well as infants have the capacity to discriminate languages based on visual speech alone. Here, we investigated whether adults' ability to discriminate languages based on visual speech cues is influenced by the age of language acquisition. Adult participants who had all learned English (as a first or second language) but did not speak French were shown faces of bilingual (French/English) speakers silently reciting sentences in either language. Using only visual speech information, adults who had learned English from birth or as a second language before the age of 6 could discriminate between French and English significantly better than chance. However, adults who had learned English as a second language after age 6 failed to discriminate these two languages, suggesting that early childhood exposure is crucial for using relevant visual speech information to separate languages visually. These findings raise the possibility that lowered sensitivity to non-native visual speech cues may contribute to the difficulties encountered when learning a new language in adulthood.

  18. Age-related sensitive periods influence visual language discrimination in adults

    Directory of Open Access Journals (Sweden)

    Whitney M. Weikum

    2013-11-01

    Adults as well as infants have the capacity to discriminate languages based on visual speech alone. Here, we investigated whether adults' ability to discriminate languages based on visual speech cues is influenced by the age of language acquisition. Adult participants who had all learned English (as a first or second language) but did not speak French were shown faces of bilingual (French/English) speakers silently reciting sentences in either language. Using only visual speech information, adults who had learned English from birth or as a second language before the age of 6 could discriminate between French and English significantly better than chance. However, adults who had learned English as a second language after age 6 failed to discriminate these two languages, suggesting that early childhood exposure is crucial for using relevant visual speech information to separate languages visually. These findings raise the possibility that lowered sensitivity to non-native visual speech cues may contribute to the difficulties encountered when learning a new language in adulthood.

  19. Age-related sensitive periods influence visual language discrimination in adults

    Science.gov (United States)

    Weikum, Whitney M.; Vouloumanos, Athena; Navarra, Jordi; Soto-Faraco, Salvador; Sebastián-Gallés, Núria; Werker, Janet F.

    2013-01-01

    Adults as well as infants have the capacity to discriminate languages based on visual speech alone. Here, we investigated whether adults' ability to discriminate languages based on visual speech cues is influenced by the age of language acquisition. Adult participants who had all learned English (as a first or second language) but did not speak French were shown faces of bilingual (French/English) speakers silently reciting sentences in either language. Using only visual speech information, adults who had learned English from birth or as a second language before the age of 6 could discriminate between French and English significantly better than chance. However, adults who had learned English as a second language after age 6 failed to discriminate these two languages, suggesting that early childhood exposure is crucial for using relevant visual speech information to separate languages visually. These findings raise the possibility that lowered sensitivity to non-native visual speech cues may contribute to the difficulties encountered when learning a new language in adulthood. PMID:24312020

  20. The interlanguage speech intelligibility benefit

    Science.gov (United States)

    Bent, Tessa; Bradlow, Ann R.

    2003-09-01

    This study investigated how native language background influences the intelligibility of speech by non-native talkers for non-native listeners from either the same or a different native language background as the talker. Native talkers of Chinese (n=2), Korean (n=2), and English (n=1) were recorded reading simple English sentences. Native listeners of English (n=21), Chinese (n=21), Korean (n=10), and a mixed group from various native language backgrounds (n=12) then performed a sentence recognition task with the recordings from the five talkers. Results showed that for native English listeners, the native English talker was most intelligible. However, for non-native listeners, speech from a relatively high proficiency non-native talker from the same native language background was as intelligible as speech from a native talker, giving rise to the ``matched interlanguage speech intelligibility benefit.'' Furthermore, this interlanguage intelligibility benefit extended to the situation where the non-native talker and listeners came from different language backgrounds, giving rise to the ``mismatched interlanguage speech intelligibility benefit.'' These findings shed light on the nature of the talker-listener interaction during speech communication.

  1. Free classification of American English dialects by native and non-native listeners.

    Science.gov (United States)

    Clopper, Cynthia G; Bradlow, Ann R

    2009-10-01

    Most second language acquisition research focuses on linguistic structures, and less research has examined the acquisition of sociolinguistic patterns. The current study explored the perceptual classification of regional dialects of American English by native and non-native listeners using a free classification task. Results revealed similar classification strategies for the native and non-native listeners. However, the native listeners were more accurate overall than the non-native listeners. In addition, the non-native listeners were less able to make use of constellations of cues to accurately classify the talkers by dialect. However, the non-native listeners were able to attend to cues that were either phonologically or sociolinguistically relevant in their native language. These results suggest that non-native listeners can use information in the speech signal to classify talkers by regional dialect, but that their lack of signal-independent cultural knowledge about variation in the second language leads to less accurate classification performance.

  2. Discrimination and preference of speech and non-speech sounds in autism patients

    Institute of Scientific and Technical Information of China (English)

    王崇颖; 江鸣山; 徐旸; 马斐然; 石锋

    2011-01-01

    Objective: To explore the discrimination and preference of speech and non-speech sounds in autism patients. Methods: Ten people with autism (5 children and 5 adults), diagnosed according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV), were selected from the database of the Nankai University Center for Behavioural Science, together with 10 age-matched healthy controls. All participants completed three experiments, on speech sounds, pure tones, and intonation, with stimuli recorded and manipulated in Praat, a voice-analysis program. Their discrimination and preference judgments were collected orally, and exact probability values were calculated. Results: There were no significant differences between autism patients and controls in the discrimination of speech sounds, pure tones, or intonation (P > 0.05). However, controls preferred higher-pitched speech and non-speech sounds more often than autism patients (e.g., -100 Hz/+50 Hz: 2 vs. 7, P < 0.05; 50 Hz/250 Hz: 4 vs. 10, P < 0.05), whereas autism patients preferred lower-pitched non-speech sounds (100 Hz/250 Hz: 6 vs. 3, P < 0.05). There was no significant difference in intonation preference between the groups (P > 0.05). Conclusion: People with autism show impaired auditory processing of speech and non-speech sounds.
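
    The "exact probability values" in this abstract refer to an exact test on 2x2 preference counts (e.g., 2 of 10 autism patients vs. 7 of 10 controls preferring the higher-pitched stimulus). As a hedged illustration only, a two-sided Fisher's exact test on such a table could be computed as below; the table layout and the sidedness of the test are assumptions, since the abstract does not fully specify them.

    ```python
    from math import comb

    def fisher_exact_two_sided(a, b, c, d):
        """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

        Sums the hypergeometric probabilities of all tables (with the same
        margins) that are no more probable than the observed table.
        """
        row1, row2 = a + b, c + d
        col1 = a + c
        n = row1 + row2
        total = comb(n, col1)

        def p_table(x):  # probability of the table whose top-left cell is x
            return comb(row1, x) * comb(row2, col1 - x) / total

        p_obs = p_table(a)
        lo = max(0, col1 - row2)
        hi = min(row1, col1)
        # Small tolerance so tables exactly as probable as the observed one count.
        return sum(p_table(x) for x in range(lo, hi + 1)
                   if p_table(x) <= p_obs * (1 + 1e-9))

    # Assumed layout: 2 of 10 autism patients vs. 7 of 10 controls prefer higher pitch.
    p = fisher_exact_two_sided(2, 8, 7, 3)
    ```

    In practice one would reach for `scipy.stats.fisher_exact`, which applies the same "at most as probable as observed" criterion for the two-sided p-value.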

  3. TRAINING SPEECH SOUND DISCRIMINATION IN CHILDREN WHO MISARTICULATE--A DEMONSTRATION OF THE USE OF TEACHING MACHINE TECHNIQUES IN SPEECH CORRECTION. FINAL REPORT.

    Science.gov (United States)

    HOLLAND, AUDREY L.

    This report discusses the results of a two-year demonstration project in which school-age children with functional articulation disorders routinely received auditory discrimination training by programmed instruction in an actual clinical setting. Auditory discrimination programs for the ten most frequently misarticulated English consonants were…

  4. Melodic Contour Training and Its Effect on Speech in Noise, Consonant Discrimination, and Prosody Perception for Cochlear Implant Recipients

    Directory of Open Access Journals (Sweden)

    Chi Yhun Lo

    2015-01-01

    Cochlear implant (CI) recipients generally have good perception of speech in quiet environments but difficulty perceiving speech in noisy conditions, reduced sensitivity to speech prosody, and difficulty appreciating music. Auditory training has been proposed as a method of improving speech perception for CI recipients, and recent efforts have focussed on the potential benefits of music-based training. This study evaluated two melodic contour training programs and their relative efficacy as measured on a number of speech perception tasks. These melodic contours were simple 5-note sequences formed into 9 contour patterns, such as “rising” or “rising-falling.” One training program controlled difficulty by manipulating interval sizes, the other by note durations. Sixteen adult CI recipients (aged 26–86 years) and twelve normal-hearing (NH) adult listeners (aged 21–42 years) were tested on a speech perception battery at baseline and then after 6 weeks of melodic contour training. Results indicated that there were some benefits for speech perception tasks for CI recipients after melodic contour training. Specifically, consonant perception in quiet and question/statement prosody was improved. In comparison, NH listeners performed at ceiling for these tasks. There was no significant difference between the posttraining results for either training program, suggesting that both conferred benefits for training CI recipients to better perceive speech.

  5. The Impact of Non-Native English Teachers' Linguistic Insecurity on Learners' Productive Skills

    Science.gov (United States)

    Daftari, Giti Ehtesham; Tavil, Zekiye Müge

    2017-01-01

    The literature reports discrimination between native and non-native English-speaking teachers that favors native speakers. The present study examines the linguistic insecurity of non-native English-speaking teachers (NNESTs) and investigates its influence on learners' productive skills, with analyses conducted in SPSS. The eighteen teachers…

  6. The effect of phonetic production training with visual feedback on the perception and production of foreign speech sounds.

    Science.gov (United States)

    Kartushina, Natalia; Hervais-Adelman, Alexis; Frauenfelder, Ulrich Hans; Golestani, Narly

    2015-08-01

    Second-language learners often experience major difficulties in producing non-native speech sounds. This paper introduces a training method that uses a real-time analysis of the acoustic properties of vowels produced by non-native speakers to provide them with immediate, trial-by-trial visual feedback about their articulation alongside that of the same vowels produced by native speakers. The Mahalanobis acoustic distance between non-native productions and target native acoustic spaces was used to assess L2 production accuracy. The experiment shows that 1 h of training per vowel improves the production of four non-native Danish vowels: the learners' productions were closer to the corresponding Danish target vowels after training. The production performance of a control group remained unchanged. Comparisons of pre- and post-training vowel discrimination performance in the experimental group showed improvements in perception. Correlational analyses of training-related changes in production and perception revealed no relationship. These results suggest, first, that this training method is effective in improving non-native vowel production. Second, training purely on production improves perception. Finally, it appears that improvements in production and perception do not systematically progress at equal rates within individuals.
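
    The Mahalanobis-distance accuracy metric described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the study's code: the native formant values and the learner tokens are invented, and a real assessment would use many more tokens and possibly more acoustic dimensions than F1/F2.

    ```python
    import numpy as np

    def mahalanobis(x, mean, cov):
        """Mahalanobis distance of point x from a distribution (mean, cov)."""
        diff = x - mean
        return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

    # Invented native productions of one target vowel: rows are (F1, F2) in Hz.
    native = np.array([[390.0, 2300.0], [410.0, 2250.0], [400.0, 2400.0], [380.0, 2350.0]])
    mean = native.mean(axis=0)
    cov = np.cov(native, rowvar=False)  # 2x2 covariance of the native vowel space

    # Hypothetical learner tokens: far from the native space before training,
    # closer to it after training.
    learner_before = np.array([520.0, 2000.0])
    learner_after = np.array([405.0, 2320.0])

    d_before = mahalanobis(learner_before, mean, cov)
    d_after = mahalanobis(learner_after, mean, cov)
    ```

    A smaller distance after training is what the paper interprets as improved production accuracy, since the learner's token sits closer to the native acoustic target distribution.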

  7. Impact of second-language experience in infancy: brain measures of first- and second-language speech perception.

    Science.gov (United States)

    Conboy, Barbara T; Kuhl, Patricia K

    2011-03-01

    Language experience 'narrows' speech perception by the end of infants' first year, reducing discrimination of non-native phoneme contrasts while improving native-contrast discrimination. Previous research showed that declines in non-native discrimination were reversed by second-language experience provided at 9-10 months, but it is not known whether second-language experience affects first-language speech sound processing. Using event-related potentials (ERPs), we examined learning-related changes in brain activity to Spanish and English phoneme contrasts in monolingual English-learning infants pre- and post-exposure to Spanish from 9.5-10.5 months of age. Infants showed a significant discriminatory ERP response to the Spanish contrast at 11 months (post-exposure), but not at 9 months (pre-exposure). The English contrast elicited an earlier discriminatory response at 11 months than at 9 months, suggesting improvement in native-language processing. The results show that infants rapidly encode new phonetic information, and that improvement in native speech processing can occur during second-language learning in infancy.

  8. Comparison of bimodal and bilateral cochlear implant users on speech recognition with competing talker, music perception, affective prosody discrimination, and talker identification.

    Science.gov (United States)

    Cullington, Helen E; Zeng, Fan-Gang

    2011-02-01

    Despite excellent performance in speech recognition in quiet, most cochlear implant users have great difficulty with speech recognition in noise, music perception, identifying tone of voice, and discriminating different talkers. This may be partly due to the pitch coding in cochlear implant speech processing. Most current speech processing strategies use only the envelope information; the temporal fine structure is discarded. One way to improve electric pitch perception is to use residual acoustic hearing via a hearing aid on the nonimplanted ear (bimodal hearing). This study aimed to test the hypothesis that bimodal users would perform better than bilateral cochlear implant users on tasks requiring good pitch perception. Four pitch-related tasks were used. 1. Hearing in Noise Test (HINT) sentences spoken by a male talker with a competing female, male, or child talker. 2. Montreal Battery of Evaluation of Amusia. This is a music test with six subtests examining pitch, rhythm and timing perception, and musical memory. 3. Aprosodia Battery. This has five subtests evaluating aspects of affective prosody and recognition of sarcasm. 4. Talker identification using vowels spoken by 10 different talkers (three men, three women, two boys, and two girls). Bilateral cochlear implant users were chosen as the comparison group. Thirteen bimodal and 13 bilateral adult cochlear implant users were recruited; all had good speech perception in quiet. There were no significant differences between the mean scores of the bimodal and bilateral groups on any of the tests, although the bimodal group did perform better than the bilateral group on almost all tests. Performance on the different pitch-related tasks was not correlated, meaning that if a subject performed one task well they would not necessarily perform well on another. The correlation between the bimodal users' hearing threshold levels in the aided ear and their performance on these tasks was weak. 
Although the bimodal cochlear

  9. The Interlanguage Speech Intelligibility Benefit as bias toward native-language phonology

    NARCIS (Netherlands)

    Wang, H.; V.J., van Heuven

    2015-01-01

    Two hypotheses have been advanced in the recent literature with respect to the so-called Interlanguage Speech Intelligibility Benefit (ISIB): a nonnative speaker will be better understood by another nonnative listener than a native speaker of the target language will be (a) only when the nonnative

  10. Rise Time and Formant Transition Duration in the Discrimination of Speech Sounds: The Ba-Wa Distinction in Developmental Dyslexia

    Science.gov (United States)

    Goswami, Usha; Fosker, Tim; Huss, Martina; Mead, Natasha; Szucs, Denes

    2011-01-01

    Across languages, children with developmental dyslexia have a specific difficulty with the neural representation of the sound structure (phonological structure) of speech. One likely cause of their difficulties with phonology is a perceptual difficulty in auditory temporal processing (Tallal, 1980). Tallal (1980) proposed that basic auditory…

  11. Brain Plasticity in Speech Training in Native English Speakers Learning Mandarin Tones

    Science.gov (United States)

    Heinzen, Christina Carolyn

    The current study employed behavioral and event-related potential (ERP) measures to investigate brain plasticity associated with second-language (L2) phonetic learning based on an adaptive computer training program. The program utilized the acoustic characteristics of Infant-Directed Speech (IDS) to train monolingual American English-speaking listeners to perceive Mandarin lexical tones. Behavioral identification and discrimination tasks were conducted using naturally recorded speech, carefully controlled synthetic speech, and non-speech control stimuli. The ERP experiments were conducted with selected synthetic speech stimuli in a passive listening oddball paradigm. Identical pre- and post- tests were administered on nine adult listeners, who completed two-to-three hours of perceptual training. The perceptual training sessions used pair-wise lexical tone identification, and progressed through seven levels of difficulty for each tone pair. The levels of difficulty included progression in speaker variability from one to four speakers and progression through four levels of acoustic exaggeration of duration, pitch range, and pitch contour. Behavioral results for the natural speech stimuli revealed significant training-induced improvement in identification of Tones 1, 3, and 4. Improvements in identification of Tone 4 generalized to novel stimuli as well. Additionally, comparison between discrimination of across-category and within-category stimulus pairs taken from a synthetic continuum revealed a training-induced shift toward more native-like categorical perception of the Mandarin lexical tones. Analysis of the Mismatch Negativity (MMN) responses in the ERP data revealed increased amplitude and decreased latency for pre-attentive processing of across-category discrimination as a result of training. 
There were also laterality changes in the MMN responses to the non-speech control stimuli, which could reflect reallocation of brain resources in processing pitch patterns

  12. Perception of Non-Native Consonant Length Contrast: The Role of Attention in Phonetic Processing

    Science.gov (United States)

    Porretta, Vincent J.; Tucker, Benjamin V.

    2015-01-01

    The present investigation examines English speakers' ability to identify and discriminate non-native consonant length contrast. Three groups (L1 English No-Instruction, L1 English Instruction, and L1 Finnish control) performed a speeded forced-choice identification task and a speeded AX discrimination task on Finnish non-words (e.g.…

  13. Elicitation of the Acoustic Change Complex to Long-Duration Speech Stimuli in Four-Month-Old Infants.

    Science.gov (United States)

    Chen, Ke Heng; Small, Susan A

    2015-01-01

    The acoustic change complex (ACC) is an auditory-evoked potential elicited to changes within an ongoing stimulus that indicates discrimination at the level of the auditory cortex. Only a few studies to date have attempted to record ACCs in young infants. The purpose of the present study was to investigate the elicitation of ACCs to long-duration speech stimuli in English-learning 4-month-old infants. ACCs were elicited to consonant contrasts made up of two concatenated speech tokens. The stimuli included native dental-dental /dada/ and dental-labial /daba/ contrasts and a nonnative Hindi dental-retroflex /daDa/ contrast. Each consonant-vowel speech token was 410 ms in duration. Slow cortical responses were recorded to the onset of the stimulus and to the acoustic change from /da/ to either /ba/ or /Da/ within the stimulus with significantly prolonged latencies compared with adults. ACCs were reliably elicited for all stimulus conditions with more robust morphology compared with our previous findings using stimuli that were shorter in duration. The P1 amplitudes elicited to the acoustic change in /daba/ and /daDa/ were significantly larger compared to /dada/ supporting that the brain discriminated between the speech tokens. These findings provide further evidence for the use of ACCs as an index of discrimination ability.

  15. Cross-language perceptual similarity predicts categorial discrimination of American vowels by naïve Japanese listeners.

    Science.gov (United States)

    Strange, Winifred; Hisagi, Miwako; Akahane-Yamada, Reiko; Kubo, Rieko

    2011-10-01

    Current speech perception models propose that relative perceptual difficulties with non-native segmental contrasts can be predicted from cross-language phonetic similarities. Japanese (J) listeners performed a categorical discrimination task in which nine contrasts (six adjacent height pairs, three front/back pairs) involving eight American (AE) vowels [iː, ɪ, ε, æː, ɑː, ʌ, ʊ, uː] in /hVbə/ disyllables were tested. The listeners also completed a perceptual assimilation task (categorization as J vowels with category goodness ratings). Perceptual assimilation patterns (quantified as categorization overlap scores) were highly predictive of discrimination accuracy (r(s)=0.93). Results suggested that J listeners used both spectral and temporal information in discriminating vowel contrasts.
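
    The reported relationship between assimilation-overlap scores and discrimination accuracy (r(s) = 0.93) is a Spearman rank correlation. A self-contained sketch of that computation, using invented placeholder values for the nine contrasts rather than the study's data:

    ```python
    def rank(values):
        """Average ranks (1-based), with ties sharing their mean rank."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        ranks = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # mean of 1-based positions i..j
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        return ranks

    def spearman(x, y):
        """Spearman's rho: Pearson correlation computed on the ranks."""
        rx, ry = rank(x), rank(y)
        n = len(x)
        mx, my = sum(rx) / n, sum(ry) / n
        num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        den = (sum((a - mx) ** 2 for a in rx) *
               sum((b - my) ** 2 for b in ry)) ** 0.5
        return num / den

    # Invented data: more categorization overlap tends to mean poorer
    # discrimination, so (1 - overlap) should rank with accuracy.
    overlap = [0.05, 0.10, 0.15, 0.30, 0.40, 0.55, 0.60, 0.75, 0.90]
    accuracy = [0.95, 0.93, 0.90, 0.85, 0.80, 0.70, 0.72, 0.60, 0.55]
    rho = spearman([1 - o for o in overlap], accuracy)
    ```

    With only nine contrasts, a rank-based statistic is a sensible choice: it depends only on the ordering of the overlap scores, not on their exact magnitudes.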

  16. HATE SPEECH AS COMMUNICATION

    National Research Council Canada - National Science Library

    Gladilin Aleksey Vladimirovich

    2012-01-01

    The purpose of the paper is a theoretical comprehension of hate speech from a communication point of view, on the one hand, and from the point of view of prejudice, stereotypes, and discrimination, on the other...

  17. Meaning Discrimination in Bilingual Venda Dictionaries

    Directory of Open Access Journals (Sweden)

    Munzhedzi James Mafela

    2011-10-01

    ABSTRACT: Venda, one of the minority languages in South Africa, has few dictionaries. All are translational bilingual dictionaries meant for dictionary users who are non-native speakers of the language. Dictionary users find it difficult to use the bilingual Venda dictionaries because they are confronted with equivalents which they cannot distinguish. In most cases, the equivalents of the entry-words are provided without giving meaning discrimination. Without a good command of Venda and the provision of meaning discrimination, users will find it difficult to make a correct choice of the equivalent for which they are looking. Bilingual Venda dictionaries are therefore not helpful for dictionary users who are non-native speakers of the language. Devices such as giving illustrative examples, indicating parts of speech and adding etymology could be used to solve the problem of meaning discrimination in bilingual Venda dictionaries. This article highlights the problem of the absence of meaning discrimination and suggests solutions for future Venda lexicographers in this regard.


    Keywords: BILINGUAL DICTIONARY, MEANING DISCRIMINATION, EQUIVALENCE, ENTRY-WORD, LEXICOGRAPHY, CULTURE, TRANSLATION, SOURCE LANGUAGE, TARGET LANGUAGE, SYNONYM, POLYSEMY



  18. Spanish is better than English for discriminating Portuguese vowels: acoustic similarity versus vowel inventory size.

    Science.gov (United States)

    Elvin, Jaydene; Escudero, Paola; Vasiliev, Polina

    2014-01-01

    Second language (L2) learners often struggle to distinguish sound contrasts that are not present in their native language (L1). Models of non-native and L2 sound perception claim that perceptual similarity between L1 and L2 sound contrasts correctly predicts discrimination by naïve listeners and L2 learners. The present study tested the explanatory power of vowel inventory size versus acoustic properties as predictors of discrimination accuracy when naïve Australian English (AusE) and Iberian Spanish (IS) listeners are presented with six Brazilian Portuguese (BP) vowel contrasts. Our results show that IS listeners outperformed AusE listeners, confirming that cross-linguistic acoustic properties, rather than cross-linguistic vowel inventory sizes, successfully predict non-native discrimination difficulty. Furthermore, acoustic distance between BP vowels and closest L1 vowels successfully predicted differential levels of difficulty among the six BP contrasts, with BP /e-i/ and /o-u/ being the most difficult for both listener groups. We discuss the importance of our findings for the adequacy of models of L2 speech perception.

  1. Phonetic, Phonemic, and Phonological Factors in Cross-Language Discrimination of Phonotactic Contrasts

    Science.gov (United States)

    Davidson, Lisa

    2011-01-01

    Previous research indicates that multiple levels of linguistic information play a role in the perception and discrimination of non-native phonemes. This study examines the interaction of phonetic, phonemic and phonological factors in the discrimination of non-native phonotactic contrasts. Listeners of Catalan, English, and Russian are presented…

  2. Comprehending non-native speakers: theory and evidence for adjustment in manner of processing.

    Science.gov (United States)

    Lev-Ari, Shiri

    2014-01-01

    Non-native speakers have lower linguistic competence than native speakers, which renders their language less reliable in conveying their intentions. We suggest that expectations of lower competence lead listeners to adapt their manner of processing when they listen to non-native speakers. We propose that listeners use cognitive resources to adjust by increasing their reliance on top-down processes and extracting less information from the language of the non-native speaker. An eye-tracking study supports our proposal by showing that when following instructions by a non-native speaker, listeners make more contextually-induced interpretations. Those with relatively high working memory also increase their reliance on context to anticipate the speaker's upcoming reference, and are less likely to notice lexical errors in the non-native speech, indicating that they take less information from the speaker's language. These results contribute to our understanding of the flexibility in language processing and have implications for interactions between native and non-native speakers.

  3. Multiword Lexical Units and Their Relationship to Impromptu Speech

    Science.gov (United States)

    Hsu, Jeng-yih

    2007-01-01

    Public speaking can be very threatening for native speakers of English, not to mention non-native EFL learners. Impromptu speech, perhaps the most challenging form of public speaking, is nevertheless being promoted in every city of the EFL countries. The case in Taiwan is no exception. Every year, dozens of impromptu speech contests are held…

  4. The Effect of Background Noise on the Word Activation Process in Nonnative Spoken-Word Recognition.

    Science.gov (United States)

    Scharenborg, Odette; Coumans, Juul M J; van Hout, Roeland

    2017-08-07

    This article investigates 2 questions: (1) does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? And (2) do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on (non-)native speech recognition? English and Dutch students participated in an English word recognition experiment, in which either a word's onset or offset was masked by noise. The native listeners outperformed the nonnative listeners in all listening conditions. Importantly, however, the effect of noise on the multiple activation process was found to be remarkably similar in native and nonnative listening. The presence of noise increased the set of candidate words considered for recognition in both native and nonnative listening. The results indicate that the observed performance differences between the English and Dutch listeners should not be primarily attributed to a differential effect of noise, but rather to the difference between native and nonnative listening. Additional analyses showed that word-initial information was more important than word-final information during spoken-word recognition. When word-initial information was no longer reliably available, word recognition accuracy dropped and word frequency information could no longer be used, suggesting that word frequency information is strongly tied to the onset of words and the earliest moments of lexical access. Proficiency and inhibition ability were found to influence nonnative spoken-word recognition in noise, with higher proficiency in the nonnative language and worse inhibition ability leading to improved recognition performance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. MUSAN: A Music, Speech, and Noise Corpus

    OpenAIRE

    Snyder, David; Chen, Guoguo; Povey, Daniel

    2015-01-01

    This report introduces a new corpus of music, speech, and noise. This dataset is suitable for training models for voice activity detection (VAD) and music/speech discrimination. Our corpus is released under a flexible Creative Commons license. The dataset consists of music from several genres, speech from twelve languages, and a wide assortment of technical and non-technical noises. We demonstrate use of this corpus for music/speech discrimination on Broadcast news and VAD for speaker identif...
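MUSAN is typically used to train statistical VAD and music/speech classifiers. As a minimal stand-in for the decision such a detector makes, here is a frame-energy VAD sketch; the threshold, frame length, and synthetic signals are illustrative assumptions, not part of the corpus or its recipes.

```python
import math
import random

def frame_energy_db(samples, frame_len=160):
    """Split a waveform into frames and return per-frame energy in dB."""
    energies = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        e = sum(s * s for s in frame) / frame_len
        energies.append(10 * math.log10(e + 1e-12))
    return energies

def energy_vad(samples, threshold_db=-30.0, frame_len=160):
    """Label each frame as speech-like (True) or non-speech (False) by energy."""
    return [e > threshold_db for e in frame_energy_db(samples, frame_len)]

# Synthetic example: one frame of near-silence followed by one loud frame.
random.seed(0)
silence = [random.gauss(0, 0.001) for _ in range(160)]
speech = [math.sin(2 * math.pi * 220 * t / 8000) for t in range(160)]
labels = energy_vad(silence + speech)
print(labels)  # -> [False, True]
```

A corpus like MUSAN supplies labeled speech, music, and noise so that a learned classifier can replace this fixed energy threshold.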
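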

  6. Ecological impacts of non-native species

    Science.gov (United States)

    Wilkinson, John W.

    2012-01-01

Non-native species are considered one of the greatest threats to freshwater biodiversity worldwide (Drake et al. 1989; Allen and Flecker 1993; Dudgeon et al. 2005). Some of the first hypotheses proposed to explain global patterns of amphibian declines included the effects of non-native species (Barinaga 1990; Blaustein and Wake 1990; Wake and Morowitz 1991). Evidence for the impact of non-native species on amphibians stems (1) from correlative research that relates the distribution or abundance of a species to that of a putative non-native species, and (2) from experimental tests of the effects of a non-native species on survival, growth, development or behaviour of a target species (Kats and Ferrer 2003). Over the past two decades, research on the effects of non-native species on amphibians has mostly focused on introduced aquatic predators, particularly fish. Recent research has shifted to more complex ecological relationships such as influences of sub-lethal stressors (e.g. contaminants) on the effects of non-native species (Linder et al. 2003; Sih et al. 2004), non-native species as vectors of disease (Daszak et al. 2004; Garner et al. 2006), hybridization between non-natives and native congeners (Riley et al. 2003; Storfer et al. 2004), and the alteration of food-webs by non-native species (Nystrom et al. 2001). Other research has examined the interaction of non-native species in terms of facilitation (i.e. one non-native enabling another to become established or spread) or the synergistic effects of multiple non-native species on native amphibians, the so-called invasional meltdown hypothesis (Simberloff and Von Holle 1999). Although there is evidence that some non-native species may interact (Ricciardi 2001), there has yet to be convincing evidence that such interactions have led to an accelerated increase in the number of non-native species, and cumulative impacts are still uncertain (Simberloff 2006). Applied research on the control, eradication, and

  7. The Interlanguage Speech Intelligibility Benefit as bias toward native-language phonology

    NARCIS (Netherlands)

    Wang, H.; V.J., van Heuven

    2015-01-01

Two hypotheses have been advanced in the recent literature with respect to the so-called Interlanguage Speech Intelligibility Benefit (ISIB): a nonnative speaker will be better understood by another nonnative listener than a native speaker of the target language will be (a) only when the

  8. Cognitive Complexity and Second Language Speech Production.

    Science.gov (United States)

    Appel, Gabriela; Lantolf, James P.

    A study compared the effects of cognitive complexity on the speech production of 14 advanced non-native speakers of English and 14 native English-speakers. Cognitively simple and complex tasks were distinguished based on text type (narrative versus expository). Subjects read one narrative and one expository text in separate sessions, then wrote…

  9. Distributional training of speech sounds can be done with continuous distributions

    NARCIS (Netherlands)

    Wanrooij, K.; Boersma, P.

    2013-01-01

    In previous research on distributional training of non-native speech sounds, distributions were always discontinuous: typically, each of only eight different stimuli was repeated multiple times. The current study examines distributional training with continuous distributions, in which all presented

  10. Detecting categorical perception in continuous discrimination data

    NARCIS (Netherlands)

    Boersma, P.; Chládková, K.

    2010-01-01

    We present a method for assessing categorical perception from continuous discrimination data. Until recently, categorical perception of speech has exclusively been measured by discrimination and identification experiments with a small number of repeatedly presented stimuli. Experiments by Rogers and

  11. Students Writing Emails to Faculty: An Examination of E-Politeness among Native and Non-Native Speakers of English

    Science.gov (United States)

    Biesenbach-Lucas, Sigrun

    2007-01-01

    This study combines interlanguage pragmatics and speech act research with computer-mediated communication and examines how native and non-native speakers of English formulate low- and high-imposition requests to faculty. While some research claims that email, due to absence of non-verbal cues, encourages informal language, other research has…

  12. Cross-modal matching of audio-visual German and French fluent speech in infancy.

    Science.gov (United States)

    Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun

    2014-01-01

The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants' audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, thereby providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech, indicating facilitation by temporal synchrony cues in the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life.

  13. Cross-modal matching of audio-visual German and French fluent speech in infancy.

    Directory of Open Access Journals (Sweden)

    Claudia Kubicek

Full Text Available The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants' audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, thereby providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech, indicating facilitation by temporal synchrony cues in the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life.

14. The Hearing Aids Outcome Associated with Speech Discrimination Abilities in Noise in Presbycusis Patients

    Institute of Scientific and Technical Information of China (English)

    彭璐; 梅玲; 张勤; 陈建勇; 李蕴; 任燕; 黄治物

    2015-01-01

Objective To investigate the relationship between pure-tone audiometry (PTA), speech discrimination abilities in quiet or in noise, and the international outcome inventory for hearing aids (IOI-HA) in presbycusis patients. Methods Twenty presbycusis subjects were tested in this study. Pure-tone audiometry (PTA) and speech discrimination thresholds were obtained before the subjects were fitted with hearing aids. After they had worn hearing aids for more than six months, pure-tone audiometry and speech discrimination scores in quiet (level = 65 dB SPL) and in noise (signal-to-noise ratio = 10 dB) were measured in the sound field. A stepwise forward multiple-regression analysis was performed to investigate the impact of PTA and speech discrimination scores on IOI-HA. Results The PTAs before and after hearing aid fitting showed a negative association with IOI-HA, while speech discrimination scores in quiet or in noise before and after fitting showed a positive association with IOI-HA. The speech discrimination threshold in noise was identified as the single predictor of IOI-HA (P<0.001). Conclusion The relation between speech discrimination scores in noise and IOI-HA suggests that a poor score might limit the hearing aid outcome. Speech discrimination scores in noise can help clinicians predict real-world hearing aid outcomes.
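A stepwise forward regression, as used in this abstract, enters at each step the predictor that most improves the fit. The sketch below shows only the first forward step (the predictor with the highest R² against the outcome wins); all numbers are fabricated toy data, not the study's measurements.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def first_forward_step(predictors, outcome):
    """Pick the predictor with the largest R^2 against the outcome, i.e.
    the first variable a forward stepwise regression would enter."""
    scores = {name: pearson(vals, outcome) ** 2
              for name, vals in predictors.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Fabricated toy data: IOI-HA-like scores for 6 listeners, plus two
# candidate predictors (variable names are illustrative only).
outcome = [20, 24, 26, 29, 31, 34]
predictors = {
    "PTA_dB": [70, 60, 72, 55, 65, 50],         # weakly related
    "speech_in_noise_SRT": [8, 6, 5, 3, 2, 0],  # strongly related
}
best, r2 = first_forward_step(predictors, outcome)
print(best)  # -> speech_in_noise_SRT
```

In the full procedure, subsequent steps would repeat this selection on the residuals until no remaining predictor significantly improves the model, which is how a single variable (here, the noise threshold) can end up as the sole predictor.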

  15. A general auditory bias for handling speaker variability in speech? Evidence in humans and songbirds

    Directory of Open Access Journals (Sweden)

Buddhamas Kriengwatana

    2015-08-01

Full Text Available Different speakers produce the same speech sound differently, yet listeners are still able to reliably identify the speech sound. How listeners can adjust their perception to compensate for speaker differences in speech, and whether these compensatory processes are unique to humans, is still not fully understood. In this study we compare the ability of humans and zebra finches to categorize vowels despite speaker variation in speech, in order to test the hypothesis that accommodating speaker and gender differences in isolated vowels can be achieved without prior experience with speaker-related variability. Using a behavioural Go/No-go task and identical stimuli, we compared Australian English adults' (naïve to Dutch) and zebra finches' (naïve to human speech) ability to categorize /ɪ/ and /ɛ/ vowels of a novel Dutch speaker after learning to discriminate those vowels from only one other speaker. Experiments 1 and 2 presented vowels of two speakers interspersed or blocked, respectively. Results demonstrate that categorization of vowels is possible without prior exposure to speaker-related variability in speech for zebra finches, and in non-native vowel categories for humans. Therefore, this study is the first to provide evidence for what might be a species-shared auditory bias that may supersede speaker-related information during vowel categorization. It additionally provides behavioural evidence contradicting a prior hypothesis that accommodation of speaker differences is achieved via the use of formant ratios. Therefore, investigations of alternative accounts of vowel normalization that incorporate the possibility of an auditory bias for disregarding inter-speaker variability are warranted.

  16. Pragmatic Difficulties in the Production of the Speech Act of Apology by Iraqi EFL Learners

    Directory of Open Access Journals (Sweden)

    Mehdi Falih Al-Ghazalli

    2014-12-01

Full Text Available The purpose of this paper is to investigate the pragmatic difficulties encountered by Iraqi EFL university students in producing the speech act of apology. Although the act of apology is easy to recognize or use by native speakers of English, non-native speakers generally encounter difficulties in discriminating one speech act from another. The problem can be attributed to two factors: pragma-linguistic and socio-pragmatic knowledge. The aims of this study are (1) to evaluate the socio-pragmatic level of interpreting apologies as understood and used by Iraqi EFL university learners, (2) to find out the level of difficulty they experience in producing apologies, and (3) to detect the reasons behind such misinterpretations and misuses. It is hypothesized that the socio-pragmatic interpretation of apology tends to play a crucial role in comprehending what is intended by the speaker. However, cultural gaps can be the main reason behind the EFL learners' inaccurate production of the act of apology. To verify the aforementioned hypotheses, a test was constructed and administered to a sample of 70 fourth-year Iraqi EFL university learners (morning classes). The subjects' responses were collected and linguistically analyzed in the light of an eclectic model based on Deutschmann (2003) and Lazare (2004). It was concluded that the misinterpretation or difficulty Iraqi EFL students faced is mainly attributable to their lack of socio-pragmatic knowledge. The interference of the learners' first-language culture has led to non-native productions of the speech act of apology.

  17. Infants' preference for native audiovisual speech dissociated from congruency preference.

    Directory of Open Access Journals (Sweden)

    Kathleen Shaw

Full Text Available Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., the speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native-language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

  18. Public Speech.

    Science.gov (United States)

    Green, Thomas F.

    1994-01-01

    Discusses the importance of public speech in society, noting the power of public speech to create a world and a public. The paper offers a theory of public speech, identifies types of public speech, and types of public speech fallacies. Two ways of speaking of the public and of public life are distinguished. (SM)

  19. Oral cavity awareness in nonnative speakers acquiring English.

    Science.gov (United States)

    Lohman, Patricia

    2008-06-01

    This investigation assessed awareness of the oral cavity of nonnative speakers acquiring English. University students (60 men, 60 women) were placed into three equal-size groups. The Less Experienced group lived in the USA less than 6 mo. (M = 3.3 mo., SD = 2.4). The More Experienced group lived in the United States 3 or more years (M = 5.0 yr., SD = 1.9). Native English speakers were the control group. Participants were recruited from undergraduate general education classes and passed a speech screening in English including accurate production of the seven English syllables tested, namely, suh, luh, tuh, kuh, ruh, shuh, and thuh. Participants answered four multiple-choice questions about lingual contact for each of the syllables imitated. Total test mean scores were significantly higher for the More Experienced group. Native speakers performed the task best. Findings support the effects of amount of time speaking the language. Training methods employed to teach English and slight dialectal variations may account for the significant differences seen in the two groups of nonnative speakers. Further study is warranted.

  20. Discrimination of Arabic Contrasts by American Learners

    Science.gov (United States)

    Al Mahmoud, Mahmoud S.

    2013-01-01

    This article reports on second language perception of non-native contrasts. The study specifically tests the perceptual assimilation model (PAM) by examining American learners' ability to discriminate Arabic contrasts. Twenty two native American speakers enrolled in a university level Arabic language program took part in a forced choice AXB…

  1. Speech Problems

    Science.gov (United States)

... of your treatment plan may include seeing a speech therapist, a person who is trained to treat speech disorders. How often you have to see the speech therapist will vary — you'll probably start out seeing ...

  2. The Nature of the Phonological Processing in French Dyslexic Children: Evidence for the Phonological Syllable and Linguistic Features' Role in Silent Reading and Speech Discrimination

    Science.gov (United States)

    Maionchi-Pino, Norbert; Magnan, Annie; Ecalle, Jean

    2010-01-01

    This study investigated the status of phonological representations in French dyslexic children (DY) compared with reading level- (RL) and chronological age-matched (CA) controls. We focused on the syllable's role and on the impact of French linguistic features. In Experiment 1, we assessed oral discrimination abilities of pairs of syllables that…

  3. Factors that Enhance English-Speaking Speech-Language Pathologists' Transcription of Cantonese-Speaking Children's Consonants

    Science.gov (United States)

    Lockart, Rebekah; McLeod, Sharynne

    2013-01-01

    Purpose: To investigate speech-language pathology students' ability to identify errors and transcribe typical and atypical speech in Cantonese, a nonnative language. Method: Thirty-three English-speaking speech-language pathology students completed 3 tasks in an experimental within-subjects design. Results: Task 1 (baseline) involved…

  4. Phonetic convergence in the speech of Polish learners of English

    OpenAIRE

    Zając, Magdalena

    2015-01-01

    This dissertation examines variability in the phonetic performance of L2 users of English and concentrates on speech convergence as a result of exposure to native and non-native pronunciation. The term speech convergence refers to a process during which speakers adapt their linguistic behaviour according to who they are talking or listening to. Previous studies show that the phenomenon may take place both in a speaker’s L1 (e.g. Giles, 1973; Coupland, 1984; Gregory and Webster,...

  5. Intelligibility of non-natively produced Dutch words: interaction between segmental and suprasegmental errors.

    Science.gov (United States)

    Caspers, Johanneke; Horłoza, Katarzyna

    2012-01-01

    In the field of second language research many adhere to the idea that prosodic errors are more detrimental to the intelligibility of non-native speakers than segmental errors. The current study reports on a series of experiments testing the influence of stress errors and segmental errors, and a combination of these, on native processing of words produced by intermediate speakers of Dutch as a second language with either Mandarin Chinese or French as mother tongue. The results suggest that both stress and segmental errors influence processing, but suprasegmental errors do not outweigh segmental errors. It seems that a more 'foreign' generic pronunciation leads to a greater impact of (supra)segmental errors, suggesting that segmental and prosodic deviations should not be viewed as independent factors in processing non-native speech.

  6. Supervised Speech Separation Based on Deep Learning: An Overview

    OpenAIRE

    Wang, DeLiang; Chen, Jitong

    2017-01-01

    Speech separation is the task of separating target speech from background interference. Traditionally, speech separation is studied as a signal processing problem. A more recent approach formulates speech separation as a supervised learning problem, where the discriminative patterns of speech, speakers, and background noise are learned from training data. Over the past decade, many supervised separation algorithms have been put forward. In particular, the recent introduction of deep learning ...
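Supervised separation systems are commonly trained to predict a time-frequency mask such as the ideal binary mask (IBM) from the noisy mixture. A minimal sketch of how an IBM is computed from speech and noise magnitudes (the magnitudes and local criterion below are made-up illustrative values):

```python
import math

def ideal_binary_mask(speech_mag, noise_mag, lc_db=0.0):
    """Compute the ideal binary mask (IBM) over time-frequency magnitudes.

    A T-F unit is kept (1) when the local speech-to-noise ratio exceeds the
    local criterion `lc_db`; supervised separation systems learn to predict
    this mask from features of the noisy mixture.
    """
    mask = []
    for s_frame, n_frame in zip(speech_mag, noise_mag):
        row = []
        for s, n in zip(s_frame, n_frame):
            snr_db = 20 * math.log10((s + 1e-12) / (n + 1e-12))
            row.append(1 if snr_db > lc_db else 0)
        mask.append(row)
    return mask

# Two frames x three frequency bins of (made-up) magnitudes:
speech = [[0.9, 0.1, 0.5], [0.2, 0.8, 0.05]]
noise = [[0.1, 0.9, 0.5], [0.8, 0.1, 0.6]]
print(ideal_binary_mask(speech, noise))  # -> [[1, 0, 0], [0, 1, 0]]
```

At synthesis time, the predicted mask is applied to the mixture spectrogram to retain speech-dominated units and suppress noise-dominated ones.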

  7. Durations of American English vowels by native and non-native speakers: acoustic analyses and perceptual effects.

    Science.gov (United States)

    Liu, Chang; Jin, Su-Hyun; Chen, Chia-Tsen

    2014-06-01

    The goal of this study was to examine durations of American English vowels produced by English-, Chinese-, and Korean-native speakers and the effects of vowel duration on vowel intelligibility. Twelve American English vowels were recorded in the /hVd/ phonetic context by native speakers and non-native speakers. The English vowel duration patterns as a function of vowel produced by non-native speakers were generally similar to those produced by native speakers. These results imply that using duration differences across vowels may be an important strategy for non-native speakers' production before they are able to employ spectral cues to produce and perceive English speech sounds. In the intelligibility experiment, vowels were selected from 10 native and non-native speakers and vowel durations were equalized at 170 ms. Intelligibility of vowels with original and equalized durations was evaluated by American English native listeners. Results suggested that vowel intelligibility of native and non-native speakers degraded slightly by 3-8% when durations were equalized, indicating that vowel duration plays a minor role in vowel intelligibility.
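Equalizing vowel durations at 170 ms, as in the intelligibility experiment above, amounts to stretching or compressing each token to a fixed length. Below is a crude nearest-neighbour resampling sketch; the study's actual processing method is not specified here, and real work would use a pitch-preserving technique such as PSOLA.

```python
def equalize_duration(samples, target_ms=170.0, sample_rate=16000):
    """Force a vowel token to a fixed duration by uniform index resampling.

    Nearest-neighbour index mapping stretches or compresses the token to
    exactly `target_ms` milliseconds at the given sample rate.
    """
    target_len = round(target_ms / 1000.0 * sample_rate)
    n = len(samples)
    return [samples[min(n - 1, int(i * n / target_len))]
            for i in range(target_len)]

# A 100 ms token at 16 kHz (1600 samples) stretched to 170 ms (2720 samples):
token = list(range(1600))
out = equalize_duration(token)
print(len(out))  # -> 2720
```

Because every token ends up the same length, any intelligibility change after equalization can be attributed to the loss of duration cues rather than to other spectral differences.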

  8. The effect of native vowel processing ability and frequency discrimination acuity on the phonetic training of English vowels for native speakers of Greek.

    Science.gov (United States)

    Lengeris, Angelos; Hazan, Valerie

    2010-12-01

    The perception and production of nonnative phones in second language (L2) learners can be improved via auditory training, but L2 learning is often characterized by large differences in performance across individuals. This study examined whether success in learning L2 vowels, via five sessions of high-variability phonetic training, related to the learners' native (L1) vowel processing ability or their frequency discrimination acuity. A group of native speakers of Greek received training, while another completed the pre-/post-tests but without training. Pre-/post-tests assessed different aspects of their L2 and L1 vowel processing and frequency acuity. L2 and L1 vowel processing were assessed via: (a) Natural English (L2) vowel identification in quiet and in multi-talker babble, and natural Greek (L1) vowel identification in babble; (b) the categorization of synthetic English and Greek vowel continua; and (c) discrimination of the same continua. Frequency discrimination acuity was assessed for a nonspeech continuum. Frequency discrimination acuity was related to measures of both L1 and L2 vowel processing, a finding that favors an auditory processing over a speech-specific explanation for individual variability in L2 vowel learning. The most efficient frequency discriminators at pre-test were also the most accurate both in English vowel perception and production after training.

  9. An invasion risk map for non-native aquatic macrophytes of the Iberian Peninsula

    Directory of Open Access Journals (Sweden)

    Argantonio Rodríguez-Merino

    2017-05-01

Full Text Available Freshwater systems are particularly susceptible to non-native organisms, owing to their high sensitivity to the impacts caused by these organisms. Species distribution models, which are based on both environmental and socio-economic variables, facilitate the identification of the areas most vulnerable to the spread of non-native species. We used MaxEnt to predict the potential distribution of 20 non-native aquatic macrophytes in the Iberian Peninsula. Selected variables, such as temperature seasonality and precipitation in the driest quarter, highlight the importance of climate for their distribution. Notably, human influence on the territory appears as a key variable in the distribution of the studied species. The model discriminated between favorable and unfavorable areas with high accuracy. We used the model to build an invasion risk map of aquatic macrophytes for the Iberian Peninsula that combined results from 20 individual models. It showed that the most vulnerable areas are located near the sea, in the major river basins, and in areas of high population density. These facts suggest the importance of human impact on the colonization and distribution of non-native aquatic macrophytes in the Iberian Peninsula, and more precisely of agricultural development during the Green Revolution at the end of the 1970s. Our work also emphasizes the utility of species distribution models for the prevention and management of biological invasions.
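MaxEnt fits a log-linear model over environmental covariates; a minimal stand-in scores grid cells with a logistic output and compares them, which is conceptually how a risk map ranks locations. All weights, variable names, and covariate values below are invented for illustration, not fitted to the study's data.

```python
import math

# Illustrative weights for a log-linear (MaxEnt-style) suitability model;
# the variables mirror those named in the abstract, but the numbers are
# invented.
WEIGHTS = {"temp_seasonality": -0.8, "precip_driest_q": -1.2,
           "human_influence": 2.0}
BIAS = -0.5

def invasion_risk(cell):
    """Map a grid cell's standardised covariates to a 0-1 risk score."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in cell.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic output

coastal_urban = {"temp_seasonality": -0.5, "precip_driest_q": -1.0,
                 "human_influence": 1.5}
remote_inland = {"temp_seasonality": 1.0, "precip_driest_q": 0.5,
                 "human_influence": -1.0}
print(invasion_risk(coastal_urban) > invasion_risk(remote_inland))  # -> True
```

Scoring every cell of a climate/land-use grid this way, then thresholding or binning the scores, yields the kind of invasion risk map described above.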

  10. Effect of bilingualism on lexical stress pattern discrimination in French-learning infants.

    Directory of Open Access Journals (Sweden)

    Ranka Bijeljac-Babic

Full Text Available Monolingual infants start learning the prosodic properties of their native language around 6 to 9 months of age, a fact marked by the development of preferences for predominant prosodic patterns and a decrease in sensitivity to non-native prosodic properties. The present study evaluates the effects of bilingual acquisition on speech perception by exploring how stress pattern perception may differ in French-learning 10-month-olds raised in bilingual as opposed to monolingual environments. Experiment 1 shows that monolinguals can discriminate stress patterns following a long familiarization to one of two patterns, but not after a short familiarization. In Experiment 2, two subgroups of bilingual infants growing up learning both French and another language in which stress is used lexically (the other language varying across infants) were tested under the more difficult short-familiarization condition: one with balanced input, and one receiving more input in the language other than French. Discrimination was clearly found for the other-language-dominant subgroup, establishing heightened sensitivity to stress pattern contrasts in these bilinguals as compared to monolinguals. However, the balanced bilinguals' performance was not better than that of the monolinguals, establishing an effect of the relative balance of the language input. This pattern of results is compatible with the proposal that sensitivity to prosodic contrasts is maintained or enhanced in a bilingual population compared to a monolingual population in which these contrasts are non-native, provided that this dimension is used in one of the two languages in acquisition and that infants receive enough input from that language.

  11. Phonological Memory, Attention Control, and Musical Ability: Effects of Individual Differences on Rater Judgments of Second Language Speech

    Science.gov (United States)

    Isaacs, Talia; Trofimovich, Pavel

    2011-01-01

    This study examines how listener judgments of second language speech relate to individual differences in listeners' phonological memory, attention control, and musical ability. Sixty native English listeners (30 music majors, 30 nonmusic majors) rated 40 nonnative speech samples for accentedness, comprehensibility, and fluency. The listeners were…

  12. Speech Matters

    DEFF Research Database (Denmark)

    Hasse Jørgensen, Stina

    2011-01-01

About Speech Matters - Katarina Gregos, the Greek curator's exhibition at the Danish Pavilion, the Venice Biennale 2011.

  13. Discrimination of Multiple Coronal Stop Contrasts in Wubuy (Australia: A Natural Referent Consonant Account.

    Directory of Open Access Journals (Sweden)

    Rikke L Bundgaard-Nielsen

Full Text Available Native speech perception is generally assumed to be highly efficient and accurate. Very little research has, however, directly examined the limitations of native perception, especially for contrasts that are only minimally differentiated acoustically and articulatorily. Here, we demonstrate that native speech perception may indeed be more difficult than is often assumed where phonemes are highly similar, and we address the nature and extremes of consonant perception. We present two studies of native and non-native (English) perception of the acoustically and articulatorily similar four-way coronal stop contrast /t ʈ t̪ ȶ/ (apico-alveolar, apico-retroflex, lamino-dental, lamino-alveopalatal) of Wubuy, an indigenous language of Australia. The results show that all listeners find contrasts involving /ȶ/ easy to discriminate, but that, for both groups, contrasts involving /t ʈ t̪/ are much harder. Where the two groups differ, the results largely reflect native-language (Wubuy vs. English) attunement as predicted by the Perceptual Assimilation Model. We also observe striking perceptual asymmetries in the native listeners' perception of contrasts involving the latter three stops, likely due to differences in input frequency. Such asymmetries have not previously been observed in adults, and we propose a novel Natural Referent Consonant Hypothesis to account for the results.

  14. Cross-language and second language speech perception

    DEFF Research Database (Denmark)

    Bohn, Ocke-Schwen

    2017-01-01

This chapter provides an overview of the main research questions and findings in the areas of second language and cross-language speech perception research, and of the most widely used models that have guided this research. The overview is structured in a way that addresses three overarching topics in cross-language and second language speech perception research: the mapping issue (the perceptual relationship of sounds of the native and the nonnative language in the mind of the native listener and the L2 learner), the perceptual and learning difficulty/ease issue (how this relationship may or may not cause perceptual and learning difficulty), and the plasticity issue (whether and how experience with the nonnative language affects the perceptual organization of speech sounds in the mind of L2 learners). One important general conclusion from this research is that perceptual learning is possible at all…

  15. Speech Development

    Science.gov (United States)

    ... The speech-language pathologist should consistently assess your child’s speech and language development, as well as screen for hearing problems (with ... and caregivers play a vital role in a child’s speech and language development. It is important that you talk to your ...

  16. The nature of the phonological processing in French dyslexic children: evidence for the phonological syllable and linguistic features' role in silent reading and speech discrimination.

    Science.gov (United States)

    Maïonchi-Pino, Norbert; Magnan, Annie; Ecalle, Jean

    2010-12-01

    This study investigated the status of phonological representations in French dyslexic children (DY) compared with reading level- (RL) and chronological age-matched (CA) controls. We focused on the syllable's role and on the impact of French linguistic features. In Experiment 1, we assessed oral discrimination abilities for pairs of syllables that varied as a function of voicing, mode or place of articulation, or syllable structure. Results suggest that DY children underperform controls with a 'speed-accuracy' deficit. However, DY children exhibit some of the same processing patterns as controls: as in CA and RL controls, DY children have difficulties in processing two sounds that differ only in voicing, preferentially process obstruent rather than fricative sounds, and more efficiently process CV than CCV syllables. In Experiment 2, we used a modified version of the Colé, Magnan, and Grainger (Applied Psycholinguistics 20:507-532, 1999) paradigm. Results show that DY children underperform CA controls but outperform RL controls. However, as in CA and RL controls, the data reveal that DY children are able to use phonological procedures influenced by initial syllable frequency. Thus, DY children process high-frequency syllables syllabically but low-frequency syllables phonemically. They also exhibit lexical and syllable frequency effects. Consequently, the results provide evidence that DY children's performances can be accounted for by laborious phonological syllable-based procedures as well as degraded phonological representations.

  17. Genetic Discrimination

    Science.gov (United States)

    ... in Genetics Archive Regulation of Genetic Tests Genetic Discrimination Overview Many Americans fear that participating in research ... I) and employment (Title II). Read more Genetic Discrimination and Other Laws Genetic Discrimination and Other Laws ...

  18. Non-natives: 141 scientists object

    NARCIS (Netherlands)

    Simberloff, D.; Van der Putten, W.H.

    2011-01-01

    Supplementary information to: Non-natives: 141 scientists object Full list of co-signatories to a Correspondence published in Nature 475, 36 (2011); doi: 10.1038/475036a. Daniel Simberloff University of Tennessee, Knoxville, Tennessee, USA. dsimberloff@utk.edu Jake Alexander Institute of Integrative

  20. STUDENTS WRITING EMAILS TO FACULTY: AN EXAMINATION OF E-POLITENESS AMONG NATIVE AND NON-NATIVE SPEAKERS OF ENGLISH

    Directory of Open Access Journals (Sweden)

    Sigrun Biesenbach-Lucas

    2007-02-01

    Full Text Available This study combines interlanguage pragmatics and speech act research with computer-mediated communication and examines how native and non-native speakers of English formulate low- and high-imposition requests to faculty. While some research claims that email, due to absence of non-verbal cues, encourages informal language, other research has claimed the opposite. However, email technology also allows writers to plan and revise messages before sending them, thus affording the opportunity to edit not only for grammar and mechanics, but also for pragmatic clarity and politeness. The study examines email requests sent by native and non-native English speaking graduate students to faculty at a major American university over a period of several semesters and applies Blum-Kulka, House, and Kasper’s (1989) speech act analysis framework – quantitatively to distinguish levels of directness, i.e. pragmatic clarity; and qualitatively to compare syntactic and lexical politeness devices, the request perspectives, and the specific linguistic request realization patterns preferred by native and non-native speakers. Results show that far more requests are realized through direct strategies as well as hints than conventionally indirect strategies typically found in comparative speech act studies. Politeness conventions in email, a text-only medium with little guidance in the academic institutional hierarchy, appear to be a work in progress, and native speakers demonstrate greater resources in creating e-polite messages to their professors than non-native speakers. A possible avenue for pedagogical intervention with regard to instruction in and acquisition of politeness routines in hierarchically upward email communication is presented.

  1. Current trends in multilingual speech processing

    Indian Academy of Sciences (India)

    Hervé Bourlard; John Dines; Mathew Magimai-Doss; Philip N Garner; David Imseng; Petr Motlicek; Hui Liang; Lakshmi Saheer; Fabio Valente

    2011-10-01

    In this paper, we describe recent work at Idiap Research Institute in the domain of multilingual speech processing and provide some insights into emerging challenges for the research community. Multilingual speech processing has been a topic of ongoing interest to the research community for many years and the field is now receiving renewed interest owing to two strong driving forces. Firstly, technical advances in speech recognition and synthesis are posing new challenges and opportunities to researchers. For example, discriminative features are seeing wide application by the speech recognition community, but additional issues arise when using such features in a multilingual setting. Another example is the apparent convergence of speech recognition and speech synthesis technologies in the form of statistical parametric methodologies. This convergence enables the investigation of new approaches to unified modelling for automatic speech recognition and text-to-speech synthesis (TTS) as well as cross-lingual speaker adaptation for TTS. The second driving force is the impetus being provided by both government and industry for technologies to help break down domestic and international language barriers, these also being barriers to the expansion of policy and commerce. Speech-to-speech and speech-to-text translation are thus emerging as key technologies at the heart of which lies multilingual speech processing.

  2. Emotion and lying in a non-native language.

    Science.gov (United States)

    Caldwell-Harris, Catherine L; Ayçiçeği-Dinn, Ayşe

    2009-03-01

    Bilingual speakers frequently report experiencing greater emotional resonance in their first language compared to their second. In Experiment 1, Turkish university students who had learned English as a foreign language had reduced skin conductance responses (SCRs) when listening to emotional phrases in English compared to Turkish, an effect which was most pronounced for childhood reprimands. A second type of emotional language, reading out loud true and false statements, was studied in Experiment 2. Larger SCRs were elicited by lies compared to true statements, and larger SCRs were evoked by English statements compared to Turkish statements. In contrast, ratings of how strongly participants felt they were lying showed that Turkish lies were more strongly felt than English lies. Results suggest that two factors influence the electrodermal activity elicited when bilingual speakers lie in their two languages: arousal due to emotions associated with lying, and arousal due to anxiety about managing speech production in a non-native language. Anxiety and emotionality when speaking a non-native language need to be better understood to inform practices ranging from bilingual psychotherapy to police interrogation of suspects and witnesses.

  3. To What Extent Do We Hear Phonemic Contrasts in a Non-Native Regional Variety? Tracking the Dynamics of Perceptual Processing with EEG

    Science.gov (United States)

    Dufour, Sophie; Brunelliere, Angele; Nguyen, Noel

    2013-01-01

    This combined ERP and behavioral experiment explores the dynamics of processing during the discrimination of vowels in a non-native regional variety. Southern listeners were presented with three word forms, two of which are encountered in both Standard and Southern French ([kot] and [kut]), whereas the third one exists in Standard but not Southern…

  4. Cross-linguistic perspectives on speech assessment in cleft palate

    DEFF Research Database (Denmark)

    Willadsen, Elisabeth; Henningsson, Gunilla

    2012-01-01

    This chapter deals with cross-linguistic perspectives that need to be taken into account when comparing speech assessment and speech outcome obtained from cleft palate speakers of different languages. Firstly, an overview of consonants and vowels vulnerable to the cleft condition is presented. Then, consequences for assessment of cleft palate speech by native versus non-native speakers of a language are discussed, as well as the use of phonemic versus phonetic transcription in cross-linguistic studies. Specific recommendations for the construction of speech samples in cross-linguistic studies are given. Finally, the influence of different languages on some aspects of language acquisition in young children with cleft palate is presented and discussed. Until recently, not much has been written about cross-linguistic perspectives when dealing with cleft palate speech. Most literature about assessment…

  5. Enhanced syllable discrimination thresholds in musicians.

    Directory of Open Access Journals (Sweden)

    Jennifer Zuk

    Full Text Available Speech processing inherently relies on the perception of specific, rapidly changing spectral and temporal acoustic features. Advanced acoustic perception is also integral to musical expertise, and accordingly several studies have demonstrated a significant relationship between musical training and superior processing of various aspects of speech. Speech and music appear to overlap in spectral and temporal features; however, it remains unclear which of these acoustic features, crucial for speech processing, are most closely associated with musical training. The present study examined the perceptual acuity of musicians to the acoustic components of speech necessary for intra-phonemic discrimination of synthetic syllables. We compared musicians and non-musicians on discrimination thresholds of three synthetic speech syllable continua that varied in their spectral and temporal discrimination demands, specifically voice onset time (VOT) and amplitude envelope cues in the temporal domain. Musicians demonstrated superior discrimination only for syllables that required resolution of temporal cues. Furthermore, performance on the temporal syllable continua positively correlated with the length and intensity of musical training. These findings support one potential mechanism by which musical training may selectively enhance speech perception, namely by reinforcing temporal acuity and/or perception of amplitude rise time, and implications for the translation of musical training to long-term linguistic abilities.

  6. Basic humanitarian principles applicable to non-nationals.

    Science.gov (United States)

    Goodwin-gill, G S; Jenny, R K; Perruchoud, R

    1985-01-01

    This article examines the general status in international law of certain fundamental human rights to determine the minimum "no derogation" standards, and then surveys a number of formal agreements between states governing migration matters, while examining some of the standard-setting work undertaken by the International Labor Organization (ILO) and other institutions. Article 13 of the Universal Declaration of Human Rights proclaims the right of everyone to leave any country, including his or her own. The anti-discrimination provision is widely drawn and includes national or social origin, birth, or other status. Non-discrimination is frequently the core issue in migration matters; it offers the basis for a principled approach to questions involving non-nationals and their methodological analysis, as well as a standard for the progressive elaboration of institutions and practices. As a general rule, ILO conventions give particular importance to the principle of choice of methods by states for the implementation of standards, as well as to the principle of progressive implementation. Non-discrimination implies equality of opportunity in the work field, in remuneration, job opportunity, trade union rights and benefits, social security, taxation, medical treatment, and accommodation; basic legal guarantees are also matters of concern to migrant workers, including termination of employment, non-renewal of work permits, and expulsion. Human rights are general in that they are due not because the individual is or is not a member of a particular group, and claims to such rights are determinable not according to membership, but according to the character of the right in question. The individualized aspect of fundamental human rights requires a case-by-case consideration of claims, and the recognition that to all persons certain special duties are owed.

  7. Listening to accented speech in a second language: First language and age of acquisition effects.

    Science.gov (United States)

    Larraza, Saioa; Samuel, Arthur G; Oñederra, Miren Lourdes

    2016-11-01

    Bilingual speakers must acquire the phonemic inventory of 2 languages and need to recognize spoken words cross-linguistically; a demanding job potentially made even more difficult due to dialectal variation, an intrinsic property of speech. The present work examines how bilinguals perceive second language (L2) accented speech and where accommodation to dialectal variation takes place. Dialectal effects were analyzed at different levels: An AXB discrimination task tapped phonetic-phonological representations, an auditory lexical-decision task tested for effects in accessing the lexicon, and an auditory priming task looked for semantic processing effects. Within that central focus, the goal was to see whether perceptual adjustment at a given level is affected by 2 main linguistic factors: bilinguals' first language and age of acquisition of the L2. Taking advantage of the cross-linguistic situation of the Basque language, bilinguals with different first languages (Spanish or French) and ages of acquisition of Basque (simultaneous, early, or late) were tested. Our use of multiple tasks with multiple types of bilinguals demonstrates that in spite of very similar discrimination capacity, French-Basque versus Spanish-Basque simultaneous bilinguals' performance on lexical access significantly differed. Similarly, results of the early and late groups show that the mapping of phonetic-phonological information onto lexical representations is a more demanding process that accentuates non-native processing difficulties. L1 and AoA effects were more readily overcome in semantic processing; accented variants regularly created priming effects in the different groups of bilinguals.

  8. The Effect of Explicit vs. Implicit Instruction on Mastering the Speech Act of Thanking among Iranian Male and Female EFL Learners

    Science.gov (United States)

    Ghaedrahmat, Mahdi; Alavi Nia, Parviz; Biria, Reza

    2016-01-01

    This pragmatic study investigated the speech act of thanking as used by non-native speakers of English. The study was an attempt to find whether the pragmatic awareness of Iranian EFL learners could be improved through explicit instruction of the structure of the speech act of "Thanking". In fact, this study aimed to find out if there…

  9. Emotion Recognition using Speech Features

    CERN Document Server

    Rao, K Sreenivasa

    2013-01-01

    “Emotion Recognition Using Speech Features” covers emotion-specific features present in speech and discussion of suitable models for capturing emotion-specific information for distinguishing different emotions.  The content of this book is important for designing and developing  natural and sophisticated speech systems. Drs. Rao and Koolagudi lead a discussion of how emotion-specific information is embedded in speech and how to acquire emotion-specific knowledge using appropriate statistical models. Additionally, the authors provide information about using evidence derived from various features and models. The acquired emotion-specific knowledge is useful for synthesizing emotions. Discussion includes global and local prosodic features at syllable, word and phrase levels, helpful for capturing emotion-discriminative information; use of complementary evidences obtained from excitation sources, vocal tract systems and prosodic features in order to enhance the emotion recognition performance;  and pro...

  10. Improvement Comparison of Different Lattice-based Discriminative Training Methods in Chinese-monolingual and Chinese-English-bilingual Speech Recognition%各种不同的基于词格的鉴别性训练方法在中文单语以及中英双语语音识别系统中的性能改善调研及比较

    Institute of Scientific and Technical Information of China (English)

    QIAN Yan-Min; SHAN Yu-Xiang; WANG Lin-Fang; LIU Jia

    2012-01-01

    Discriminative training approaches such as minimum phone error (MPE), feature minimum phone error (fMPE) and boosted maximum mutual information (BMMI) have brought remarkable improvement to the speech community in recent years; however, much work still remains to be done. This paper investigates the performance of three lattice-based discriminative training methods in detail, and compares different I-smoothing methods to obtain more robust models in the Chinese-monolingual situation. The complementary properties of the different discriminative training methods are explored to perform a system combination by recognizer output voting error reduction (ROVER). Although discriminative training is normally used in monolingual systems, this paper systematically investigates its use for bilingual speech recognition, including MPE, fMPE, and BMMI. A new method is proposed to generate significantly better lattices for training the bilingual model, and complementary discriminative training models are also explored to get the best ROVER performance in the bilingual situation. Experimental results show that all forms of discriminative training can reduce the word error rate in both monolingual and bilingual systems, and that combining complementary discriminative training methods can improve performance significantly.
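    The ROVER combination used in the abstract above works by aligning the word sequences produced by several recognizers and taking a vote at each word position. A minimal sketch of the voting step, assuming the hypotheses have already been aligned to equal length (real ROVER builds this alignment with dynamic programming over a word transition network, inserting NULL tokens where recognizers disagree on word count, and can weight votes by confidence scores):

    ```python
    from collections import Counter

    def rover_vote(hypotheses):
        """Word-level majority vote over pre-aligned recognizer outputs."""
        assert all(len(h) == len(hypotheses[0]) for h in hypotheses)
        combined = []
        for slot in zip(*hypotheses):
            # Pick the most frequent word at this aligned position.
            word, _count = Counter(slot).most_common(1)[0]
            combined.append(word)
        return combined

    # Three hypothetical recognizer outputs for the same utterance:
    hyps = [
        ["the", "cat", "sat", "down"],
        ["the", "bat", "sat", "down"],
        ["the", "cat", "sit", "down"],
    ]
    print(rover_vote(hyps))  # → ['the', 'cat', 'sat', 'down']
    ```

    The intuition behind combining complementary systems is that MPE-, fMPE-, and BMMI-trained models tend to make different errors, so the majority is right more often than any single system.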

  11. NATIVE VS NON-NATIVE ENGLISH TEACHERS

    Directory of Open Access Journals (Sweden)

    Masrizal Masrizal

    2013-02-01

    Full Text Available Although the majority of English language teachers worldwide are non-native English speakers (NNS), no research was conducted on these teachers until recently. Pioneering research by Peter Medgyes in 1994 stood alone for quite some time before other researchers took an interest in this issue. There is a widespread stereotype that a native speaker (NS) is by nature the best person to teach his/her language as a foreign language. In light of this assumption, we see very limited room and opportunities for a non-native teacher to teach a language that is not his/hers. The aim of this article is to analyze the differences among these teachers in order to show that non-native teachers have equal advantages that should be taken into account. The writer expects that this short article could be valuable input to the area of teaching English as a foreign language in Indonesia.

  12. Auditory perceptual simulation: Simulating speech rates or accents?

    Science.gov (United States)

    Zhou, Peiyun; Christianson, Kiel

    2016-07-01

    When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and the full sentence reveal that the auditory perceptual simulation effect again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) demonstrates further that the "speakers'" speech rates, rather than the difficulty of simulating a non-native accent, is the primary mechanism underlying auditory perceptual simulation effects.

  13. Non-natives: 141 scientists object

    OpenAIRE

    Simberloff, D.; van der Putten, W. H.

    2011-01-01

    Supplementary information to: Non-natives: 141 scientists object Full list of co-signatories to a Correspondence published in Nature 475, 36 (2011); doi: 10.1038/475036a. Daniel Simberloff University of Tennessee, Knoxville, Tennessee, USA. Jake Alexander Institute of Integrative Biology, Zurich, Switzerland. Fred Allendorf University of Montana, Missoula, Montana, USA. James Aronson CEFE/CNRS, Montpellier, France. Pedro M. Antunes Algoma University, Sault Ste. Marie, Onta...

  14. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

    Science.gov (United States)

    Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa

    2015-02-01

    To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing.

  15. Speech Indexing

    NARCIS (Netherlands)

    Ordelman, R.J.F.; Jong, de F.M.G.; Leeuwen, van D.A.; Blanken, H.M.; de Vries, A.P.; Blok, H.E.; Feng, L.

    2007-01-01

    This chapter will focus on the automatic extraction of information from the speech in multimedia documents. This approach is often referred to as speech indexing and it can be regarded as a subfield of audio indexing that also incorporates for example the analysis of music and sounds. If the objecti

  16. Plowing Speech

    OpenAIRE

    Zla ba sgrol ma

    2009-01-01

    This file contains a plowing speech and a discussion about the speech. This collection presents forty-nine audio files including: several folk song genres; folktales; and local history from the Sman shad Valley of Sde dge county. World Oral Literature Project.

  17. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions which use repeaters to compensate for the loss in signal strength on transmission links also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term applicable to these techniques, often used interchangeably with speech coding, is voice coding. This term is more generic in the sense that the…
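    As a concrete illustration of the waveform-coding side of this distinction, μ-law companding (the basis of G.711 telephony in North America) log-compresses each sample's magnitude before quantization so that quiet speech keeps proportionally more resolution. A minimal sketch; the μ = 255 constant is the standard telephony value, but the uniform 8-bit quantizer below is a simplification of G.711's actual segmented encoding:

    ```python
    import math

    MU = 255.0  # μ value used in North American G.711 telephony

    def mu_compress(x, mu=MU):
        """Compand a sample in [-1, 1]: log-compress its magnitude."""
        return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

    def mu_expand(y, mu=MU):
        """Invert the companding."""
        return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)

    def codec_8bit(x):
        """Compress, quantize to 8 bits (256 levels), expand."""
        y = mu_compress(x)
        q = round(y * 127) / 127  # simplified uniform 8-bit quantizer
        return mu_expand(q)

    # Round-trip error stays proportionally small even for quiet samples:
    for x in (0.9, 0.1, 0.01):
        print(x, abs(codec_8bit(x) - x))
    ```

    Parametric coders, by contrast, transmit model parameters (e.g. vocal-tract filter coefficients) rather than the waveform itself, trading fidelity for much lower bit rates.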

  18. Defining the Impact of Non-Native Species

    OpenAIRE

    Jeschke, Jonathan M; Bacher, Sven; Tim M Blackburn; Dick, Jaimie T. A.; Essl, Franz; Evans, Thomas; Gaertner, Mirijam; Hulme, Philip E.; Kühn, Ingolf; Mrugała, Agata; Pergl, Jan; Pyšek, Petr; Rabitsch, Wolfgang; Ricciardi, Anthony; Richardson, David M.

    2014-01-01

    Non-native species cause changes in the ecosystems to which they are introduced. These changes, or some of them, are usually termed impacts; they can be manifold and potentially damaging to ecosystems and biodiversity. However, the impacts of most non-native species are poorly understood, and a synthesis of available information is being hindered because authors often do not clearly define impact. We argue that explicitly defining the impact of non-native species will promote progress toward ...

  19. Practitioner perspectives on using nonnative plants for revegetation

    Directory of Open Access Journals (Sweden)

    Elise Gornish

    2016-09-01

    Full Text Available Restoration practitioners use both native and nonnative plant species for revegetation projects. Typically, when rehabilitating damaged working lands, more practitioners consider nonnative plants, while those working to restore habitat have focused on native plants. But this may be shifting. Novel ecosystems (non-analog communities) are commonly being discussed in academic circles, while practical factors such as affordability and availability of natives and the need for more drought-tolerant species to accommodate climate change may be making nonnative species attractive to land managers. To better understand the current use of nonnatives for revegetation, we surveyed 192 California restoration stakeholders who worked in a variety of habitats. A large portion (42%) of them considered nonnatives for their projects, and of survey respondents who did not use nonnatives in vegetation rehabilitation, almost half indicated that they would consider them in the future. Across habitats, the dominant value of nonnatives for vegetation rehabilitation was found to be erosion control, and many respondents noted the high cost and unavailability of natives as important drivers of nonnative use in revegetation projects. Moreover, 37% of respondents noted they had changed their opinion or use of nonnatives in response to climate change.

  20. Non-native educators in English language teaching

    CERN Document Server

    Braine, George

    2013-01-01

    The place of native and non-native speakers in the role of English teachers has probably been an issue ever since English was taught internationally. Although the ESL and EFL literature is awash with, in fact dependent upon, the scrutiny of non-native learners, interest in non-native academics and teachers is fairly new. Until recently, the voices of non-native speakers articulating their own concerns have been even rarer. This book is a response to this notable vacuum in the ELT literature, providing a forum for language educators from diverse geographical origins and language backgrounds. In additio

  1. Incorporating fragmentation and non-native species into distribution models to inform fluvial fish conservation.

    Science.gov (United States)

    Taylor, Andrew T; Papeş, Monica; Long, James M

    2017-09-06

    Fluvial fishes face increased imperilment from anthropogenic activities, but the specific factors contributing most to range declines are often poorly understood. For example, the shoal bass (Micropterus cataractae) is a fluvial-specialist species experiencing continual range loss, yet how perceived threats have contributed to range loss is largely unknown. We employed species distribution models (SDMs) to disentangle which factors are contributing most to shoal bass range loss by estimating a potential distribution based on natural abiotic factors and by estimating a series of current, occupied distributions that also incorporated variables characterizing land cover, non-native species, and fragmentation intensity (no fragmentation, dams only, and dams and large impoundments). Model construction allowed for interspecific relationships between non-native congeners and shoal bass to vary across fragmentation intensities. Results from the potential distribution model estimated shoal bass presence throughout much of their native basin, whereas models of current occupied distribution illustrated increased range loss as fragmentation intensified. Response curves from current occupied models indicated a potential interaction between fragmentation intensity and the relationship between shoal bass and non-native congeners, wherein non-natives may be favored at the highest fragmentation intensity. Response curves also suggested that free-flowing fragment lengths of > 100 km were necessary to support shoal bass presence. Model evaluation, including an independent validation, suggested models had favorable predictive and discriminative abilities. Similar approaches that use readily-available, diverse geospatial datasets may deliver insights into the biology and conservation needs of other fluvial species facing similar threats. This article is protected by copyright. All rights reserved.

  2. Price Discrimination

    OpenAIRE

    Armstrong, Mark

    2008-01-01

    This paper surveys recent economic research on price discrimination, both in monopoly and oligopoly markets. Topics include static and dynamic forms of price discrimination, and both final and input markets are considered. Potential antitrust aspects of price discrimination are highlighted throughout the paper. The paper argues that the informational requirements to make accurate policy are very great, and with most forms of price discrimination a laissez-faire policy may be the best availabl...
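    To make the seller-side logic concrete, consider a textbook example (my illustration, not from the paper): a zero-cost monopolist facing two segments with linear demand q = a − b·p. Third-degree price discrimination lets it charge each segment its own monopoly price, which can never yield less profit than a single uniform price, since the uniform price is always one of the discriminator's feasible choices:

    ```python
    def monopoly_price(a, b):
        """Profit-maximizing price for linear demand q = a - b*p at zero cost.

        Profit p*(a - b*p) is maximized at p = a / (2b).
        """
        return a / (2 * b)

    def profit(p, a, b):
        return p * max(a - b * p, 0.0)

    # Two hypothetical segments (a, b), e.g. business buyers vs. students.
    segments = [(10.0, 1.0), (6.0, 2.0)]

    # Third-degree discrimination: a separate monopoly price per segment.
    discr = sum(profit(monopoly_price(a, b), a, b) for a, b in segments)

    # Uniform pricing: one price set against the aggregate demand curve.
    A = sum(a for a, _ in segments)
    B = sum(b for _, b in segments)
    uniform = profit(monopoly_price(A, B), A, B)

    print(discr, uniform)  # discrimination weakly dominates for the seller
    assert discr >= uniform
    ```

    The welfare effects surveyed in the paper are subtler: discrimination raises seller profit here, but whether total surplus rises depends on how quantities shift across segments.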

  3. Subcortical Differentiation of Stop Consonants Relates to Reading and Speech-in-Noise Perception

    National Research Council Canada - National Science Library

    Jane Hornickel; Erika Skoe; Trent Nicol; Steven Zecker; Nina Kraus; Michael M. Merzenich

    2009-01-01

    Children with reading impairments have deficits in phonological awareness, phonemic categorization, speech-in-noise perception, and psychophysical tasks such as frequency and temporal discrimination...

  4. Effects of sentence context in L2 natural speech comprehension

    OpenAIRE

    Fitzpatrick, I.

    2007-01-01

    Electrophysiological studies consistently find N400 effects of semantic incongruity in non-native written language comprehension. Typically these N400 effects are later than N400 effects in native comprehension, suggesting that semantic processing in one’s second language (L2) may be delayed compared to one’s first language (L1). In this study we were firstly interested in replicating the semantic incongruity effect using natural auditory speech, which poses strong demands on the speed of processing...

  5. Cross-language activation in children's speech production: Evidence from second language learners, bilinguals, and trilinguals

    NARCIS (Netherlands)

    Poarch, G.J.; Hell, J.G. van

    2012-01-01

    In five experiments, we examined cross-language activation during speech production in various groups of bilinguals and trilinguals who differed in nonnative language proficiency, language learning background, and age. In Experiments 1, 2, 3, and 5, German 5- to 8-year-old second language learners of English, German-English bilinguals,…

  6. Cross-Language Activation in Children's Speech Production: Evidence from Second Language Learners, Bilinguals, and Trilinguals

    Science.gov (United States)

    Poarch, Gregory J.; van Hell, Janet G.

    2012-01-01

    In five experiments, we examined cross-language activation during speech production in various groups of bilinguals and trilinguals who differed in nonnative language proficiency, language learning background, and age. In Experiments 1, 2, 3, and 5, German 5- to 8-year-old second language learners of English, German-English bilinguals,…

  7. The Usefulness of Automatic Speech Recognition (ASR) Eyespeak Software in Improving Iraqi EFL Students' Pronunciation

    Science.gov (United States)

    Sidgi, Lina Fathi Sidig; Shaari, Ahmad Jelani

    2017-01-01

    The present study focuses on determining whether automatic speech recognition (ASR) technology is reliable for improving English pronunciation to Iraqi EFL students. Non-native learners of English are generally concerned about improving their pronunciation skills, and Iraqi students face difficulties in pronouncing English sounds that are not…

  8. Native-Language Benefit for Understanding Speech-in-Noise: The Contribution of Semantics

    Science.gov (United States)

    Golestani, Narly; Rosen, Stuart; Scott, Sophie K.

    2009-01-01

    Bilinguals are better able to perceive speech-in-noise in their native compared to their non-native language. This benefit is thought to be due to greater use of higher-level, linguistic context in the native language. Previous studies showing this have used sentences and do not allow us to determine which level of language contributes to this…

  10. Structural Discrimination

    DEFF Research Database (Denmark)

    Thorsen, Mira Skadegård

    In this article, I discuss structural discrimination, an underrepresented area of study in Danish discrimination and intercultural research. It is defined here as discursive and constitutive, and presented as a central element of my analytical approach, with which to understand and identify aspects of power and asymmetry in communication and interactions. With this as a defining term, I address how exclusion and discrimination exist, while also being indiscernible, within widely accepted societal norms. I introduce the concepts of microdiscrimination and benevolent discrimination as two ways of articulating particular, opaque forms of racial discrimination that occur in everyday Danish (and other) contexts, and have therefore become normalized. I present and discuss discrimination as it surfaces in data from my empirical studies of discrimination in Danish contexts...

  11. Preparing Non-Native English-Speaking ESL Teachers

    Science.gov (United States)

    Shin, Sarah J.

    2008-01-01

    This article addresses the challenges that non-native English-speaking teacher trainees face as they begin teaching English as a Second Language (ESL) in Western, English-speaking countries. Despite a great deal of training, non-native speaker teachers may be viewed as inadequate language teachers because they often lack native speaker competence…

  12. When the Teacher Is a Non-native Speaker

    Institute of Scientific and Technical Information of China (English)

    Péter Medgyes

    2005-01-01

    In "When the Teacher is a Non-native Speaker," Medgyes examines the differences in teaching behavior between native and non-native teachers of English, and then specifies the causes of those differences. The aim of the discussion is to raise the awareness of both groups of teachers to their respective strengths and weaknesses, and thus help them become better teachers.

  13. The Non-Native English Speaker Teachers in TESOL Movement

    Science.gov (United States)

    Kamhi-Stein, Lía D.

    2016-01-01

    It has been almost 20 years since what is known as the non-native English-speaking (NNES) professionals' movement--designed to increase the status of NNES professionals--started within the US-based TESOL International Association. However, still missing from the literature is an understanding of what a movement is, and why non-native English…

  14. Sound frequency affects speech emotion perception: Results from congenital amusia

    Directory of Open Access Journals (Sweden)

    Sydney Lolli

    2015-09-01

    Congenital amusics, or tone-deaf individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying band-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody (MBEP) were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task and an emotion identification task under band-pass and unfiltered speech conditions. Results showed a significant correlation between pitch discrimination threshold and emotion identification accuracy for band-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold > 16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between band-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation.
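    The core analysis, correlating each listener's pitch-discrimination threshold with emotion-identification accuracy and splitting the sample at the 16 Hz amusia cutoff, can be sketched in plain Python. Only the 16 Hz cutoff comes from the abstract; the participant numbers below are invented for illustration:

```python
import math

AMUSIA_CUTOFF_HZ = 16.0  # pitch-discrimination threshold defining amusics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: pitch thresholds (Hz) vs. band-pass emotion accuracy
thresholds = [2.0, 4.0, 8.0, 20.0, 32.0]
accuracy   = [0.90, 0.85, 0.80, 0.55, 0.45]

amusics = [t for t in thresholds if t > AMUSIA_CUTOFF_HZ]
print(len(amusics))                            # participants above cutoff
print(round(pearson_r(thresholds, accuracy), 2))  # negative: higher
                                                  # threshold, lower accuracy
```

    A strongly negative correlation in the filtered condition but not in the unfiltered one would reproduce the dissociation pattern the study reports.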

  15. Perception of native and non-native affricate-fricative contrasts: cross-language tests on adults and infants.

    Science.gov (United States)

    Tsao, Feng-Ming; Liu, Huei-Mei; Kuhl, Patricia K

    2006-10-01

    Previous studies have shown improved sensitivity to native-language contrasts and reduced sensitivity to non-native phonetic contrasts when comparing 6-8 and 10-12-month-old infants. This developmental pattern is interpreted as reflecting the onset of language-specific processing around the first birthday. However, generalization of this finding is limited by the fact that studies have yielded inconsistent results and that insufficient numbers of phonetic contrasts have been tested developmentally; this is especially true for native-language phonetic contrasts. Three experiments assessed the effects of language experience on affricate-fricative contrasts in a cross-language study of English and Mandarin adults and infants. Experiment 1 showed that English-speaking adults score lower than Mandarin-speaking adults on Mandarin alveolo-palatal affricate-fricative discrimination. Experiment 2 examined developmental change in the discrimination of this contrast in English- and Mandarin-learning infants between 6 and 12 months of age. The results demonstrated that native-language performance significantly improved with age while performance on the non-native contrast decreased. Experiment 3 replicated the perceptual improvement for a native contrast: 6-8 and 10-12-month-old English-learning infants showed a performance increase at the older age. The results add to our knowledge of the developmental patterns of native and non-native phonetic perception.

  16. Risk assessment of non-native fishes in the Balkans Region using FISK, the invasiveness screening tool for non-native freshwater fishes

    Directory of Open Access Journals (Sweden)

    P. SIMONOVIC

    2013-06-01

    A high level of freshwater fish endemism in the Balkans Region emphasizes the need for non-native species risk assessments to inform management and control measures, with pre-screening tools, such as the Fish Invasiveness Screening Kit (FISK), providing a useful first step. Applied to 43 non-native and translocated freshwater fishes in four Balkan countries, FISK reliably discriminated between invasive and non-invasive species, with a calibration threshold value of 9.5 distinguishing between species of medium and high risk sensu lato of becoming invasive. Twelve of the 43 species were assessed by scientists from two or more Balkan countries, and the remaining 31 species by a single assessor. Using the 9.5 threshold, three species were classed as low risk, 10 as medium risk, and 30 as high risk, with the latter category comprising 26 moderately high risk, three high risk, and one very high risk species. Confidence levels in the assessments were relatively constant for all species, indicating concordance amongst assessors.
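    The reported categorization can be expressed as a simple score-to-category rule. The 9.5 medium/high threshold is the calibration value from the abstract; the 1.0 low/medium cutoff is an assumption based on common FISK practice and is not stated here:

```python
def fisk_risk_category(score, medium_high_threshold=9.5):
    """Classify a FISK invasiveness score into low / medium / high risk.

    The 9.5 medium-vs-high threshold is the calibration value reported
    for the Balkans assessment; the 1.0 low-vs-medium cutoff is an
    assumption, not stated in the abstract.
    """
    if score < 1.0:
        return "low"
    if score < medium_high_threshold:
        return "medium"
    return "high"

print(fisk_risk_category(0.5))   # low
print(fisk_risk_category(5.0))   # medium
print(fisk_risk_category(12.0))  # high
```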

  17. Native and Non-Native Perceptions on a Non-Native Oral Discourse in an Academic Setting

    Directory of Open Access Journals (Sweden)

    Kenan Dikilitaş

    2012-07-01

    This qualitative study investigates discourse-level patterns typically employed by a Turkish lecturer, based on the syntactic patterns found in the collected data. More specifically, the study aims to reveal how native and non-native speakers of English perceive discourse patterns used by a non-native lecturer teaching in English. The data were gathered from a Turkish lecturer teaching finance and from interviews with both the lecturer and the students. The lecturer and the students were videotaped, and the data were evaluated by content analysis. The results revealed a difference between the way non-native and native speakers evaluate the oral discourse of a non-native lecturer teaching in English. Native speakers of English found the oral performance moderately comprehensible, while non-native speakers found it relatively comprehensible.

  18. Raspberry, not a car: context predictability and a phonological advantage in early and late learners’ processing of speech in noise

    Science.gov (United States)

    Gor, Kira

    2014-01-01

    Second language learners perform worse than native speakers under adverse listening conditions, such as speech in noise (SPIN). No data are available on heritage language speakers’ (early naturalistic interrupted learners’) ability to perceive SPIN. The current study fills this gap and investigates the perception of Russian speech in multi-talker babble noise by matched groups of high- and low-proficiency heritage speakers (HSs) and late second language learners of Russian who were native speakers of English. The study includes a control group of Russian native speakers. It manipulates the noise level (high and low) and context cloze probability (high and low). The results of the SPIN task are compared to tasks testing the control of phonology (AXB discrimination and picture-word discrimination) and lexical knowledge (a word translation task) in the same participants. The increased phonological sensitivity of HSs interacted with their ability to rely on top-down processing in sentence integration, use contextual cues, and build expectancies in the high-noise/high-context condition in a bootstrapping fashion. HSs outperformed oral-proficiency-matched late second language learners on the SPIN task and on two tests of phonological sensitivity. The outcomes of the SPIN experiment support both the early naturalistic advantage and the role of proficiency in HSs. HSs’ ability to take advantage of the high-predictability context in the high-noise condition was mitigated by their level of proficiency. Only high-proficiency HSs, and no other non-native group, took advantage of the high-predictability context that became available with better phonological processing skills in high noise. The study thus confirms high-proficiency (but not low-proficiency) HSs’ nativelike ability to combine bottom-up and top-down cues in processing SPIN. PMID:25566130

  19. The relationship between auditory-visual speech perception and language-specific speech perception at the onset of reading instruction in English-speaking children.

    Science.gov (United States)

    Erdener, Doğu; Burnham, Denis

    2013-10-01

    Speech perception is auditory-visual, but relatively little is known about auditory-visual compared with auditory-only speech perception. One avenue for further understanding is via developmental studies. In a recent study, Sekiyama and Burnham (2008) found that English speakers significantly increase their use of visual speech information between 6 and 8 years of age but that this development does not appear to be universal across languages. Here, the possible bases for this language-specific increase among English speakers were investigated. Four groups of English-language children (5, 6, 7, and 8 years) and a group of adults were tested on auditory-visual, auditory-only, and visual-only speech perception; language-specific speech perception with native and non-native speech sounds; articulation; and reading. Results showed that language-specific speech perception and lip-reading ability reliably predicted auditory-visual speech perception in children but that adult auditory-visual speech perception was predicted by auditory-only speech perception. The implications are discussed in terms of both auditory-visual speech perception and language development. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Amharic Speech Recognition for Speech Translation

    OpenAIRE

    Melese, Michael; Besacier, Laurent; Meshesha, Million

    2016-01-01

    State-of-the-art speech translation can be seen as a cascade of Automatic Speech Recognition, Statistical Machine Translation, and Text-To-Speech synthesis. In this study, an attempt is made to experiment with Amharic speech recognition for Amharic-English speech translation in the tourism domain. Since there is no Amharic speech corpus, we developed a read-speech corpus of 7.43 hr in the tourism domain. The Amharic speech corpus has been recorded after translating standard Bas...

  1. Automatic speech recognition a deep learning approach

    CERN Document Server

    Yu, Dong

    2015-01-01

    This book summarizes the recent advancement in the field of automatic speech recognition with a focus on discriminative and hierarchical models. This will be the first automatic speech recognition book to include a comprehensive coverage of recent developments such as conditional random field and deep learning techniques. It presents insights and theoretical foundation of a series of recent models such as conditional random field, semi-Markov and hidden conditional random field, deep neural network, deep belief network, and deep stacking models for sequential learning. It also discusses practical considerations of using these models in both acoustic and language modeling for continuous speech recognition.

  2. Hate speech

    Directory of Open Access Journals (Sweden)

    Anne Birgitta Nilsen

    2014-12-01

    The manifesto of the Norwegian terrorist Anders Behring Breivik is based on the “Eurabia” conspiracy theory. This theory is a key starting point for hate speech amongst many right-wing extremists in Europe, but also has ramifications beyond these environments. In brief, proponents of the Eurabia theory claim that Muslims are occupying Europe and destroying Western culture, with the assistance of the EU and European governments. By contrast, members of Al-Qaeda and other extreme Islamists promote the conspiracy theory “the Crusade” in their hate speech directed against the West. Proponents of the latter theory argue that the West is leading a crusade to eradicate Islam and Muslims, a crusade that is similarly facilitated by their governments. This article presents analyses of texts written by right-wing extremists and Muslim extremists in an effort to shed light on how hate speech promulgates conspiracy theories in order to spread hatred and intolerance. The aim of the article is to contribute to a more thorough understanding of hate speech’s nature by applying rhetorical analysis. Rhetorical analysis is chosen because it offers a means of understanding the persuasive power of speech. It is thus a suitable tool to describe how hate speech works to convince and persuade. The concepts from rhetorical theory used in this article are ethos, logos and pathos. The concept of ethos is used to pinpoint factors that contributed to Osama bin Laden's impact, namely factors that lent credibility to his promotion of the conspiracy theory of the Crusade. In particular, Bin Laden projected common sense, good morals and good will towards his audience. He seemed to have coherent and relevant arguments; he appeared to possess moral credibility; and his use of language demonstrated that he wanted the best for his audience. The concept of pathos is used to define hate speech, since hate speech targets its audience's emotions. In hate speech it is the

  3. Cross-Modal Matching of Audio-Visual German and French Fluent Speech in Infancy

    OpenAIRE

    Claudia Kubicek; Anne Hillairet de Boisferon; Eve Dupierrix; Olivier Pascalis; Hélène Lœvenbruck; Judit Gervain; Gudrun Schwarzer

    2014-01-01

    The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants' audio-visual matching ability for native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants...

  4. Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit.

    Science.gov (United States)

    Arnold, Denis; Tomaschek, Fabian; Sering, Konstantin; Lopez, Florence; Baayen, R Harald

    2017-01-01

    Sound units play a pivotal role in cognitive models of auditory comprehension. The general consensus is that during perception listeners break down speech into auditory words and subsequently phones. Indeed, cognitive speech recognition is typically taken to be computationally intractable without phones. Here we present a computational model trained on 20 hours of conversational speech that recognizes word meanings within the range of human performance (model 25%, native speakers 20-44%), without making use of phone or word form representations. Our model also successfully generates predictions about the speed and accuracy of human auditory comprehension. At the heart of the model is a 'wide' yet sparse two-layer artificial neural network with some hundred thousand input units representing summaries of changes in acoustic frequency bands, and proxies for lexical meanings as output units. We believe that our model holds promise for resolving longstanding theoretical problems surrounding the notion of the phone in linguistic theory.
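    The model family described, a wide two-layer network trained with error-driven discriminative learning, can be illustrated with a toy delta-rule (Rescorla-Wagner/Widrow-Hoff) learner. The "acoustic cues" and word meanings below are invented placeholders, not the authors' features:

```python
from collections import defaultdict

def train_delta_rule(events, rate=0.1, epochs=50):
    """Error-driven learning: weights from input cues to outcome units are
    nudged toward each outcome's target activation (1 if the outcome is
    present in the event, 0 otherwise)."""
    outcomes = {o for _, present in events for o in present}
    w = defaultdict(float)  # (cue, outcome) -> weight
    for _ in range(epochs):
        for cues, present in events:
            for o in outcomes:
                activation = sum(w[(c, o)] for c in cues)
                error = (1.0 if o in present else 0.0) - activation
                for c in cues:
                    w[(c, o)] += rate * error
    return w, outcomes

def predict(w, outcomes, cues):
    """Return the outcome with the highest summed cue-to-outcome support."""
    return max(outcomes, key=lambda o: sum(w[(c, o)] for c in cues))

# Toy "acoustic cue" events paired with the meaning they signal
events = [
    ({"f1_rise", "burst"}, {"pat"}),
    ({"f1_fall", "burst"}, {"bat"}),
    ({"f1_rise", "nasal"}, {"mat"}),
]
w, outcomes = train_delta_rule(events)
print(predict(w, outcomes, {"f1_rise", "burst"}))  # -> pat
```

    The real model differs in scale (hundreds of thousands of acoustic input units, learned from 20 hours of speech), but the update rule sketched here is the same discriminative principle: no phone layer mediates between acoustics and meaning.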

  5. Speech training alters tone frequency tuning in rat primary auditory cortex

    Science.gov (United States)

    Engineer, Crystal T.; Perez, Claudia A.; Carraway, Ryan S.; Chang, Kevin Q.; Roland, Jarod L.; Kilgard, Michael P.

    2013-01-01

    Previous studies in both humans and animals have documented improved performance following discrimination training. This enhanced performance is often associated with cortical response changes. In this study, we tested the hypothesis that long-term speech training on multiple tasks can improve primary auditory cortex (A1) responses compared to rats trained on a single speech discrimination task or experimentally naïve rats. Specifically, we compared the percent of A1 responding to trained sounds, the responses to both trained and untrained sounds, receptive field properties of A1 neurons, and the neural discrimination of pairs of speech sounds in speech trained and naïve rats. Speech training led to accurate discrimination of consonant and vowel sounds, but did not enhance A1 response strength or the neural discrimination of these sounds. Speech training altered tone responses in rats trained on six speech discrimination tasks but not in rats trained on a single speech discrimination task. Extensive speech training resulted in broader frequency tuning, shorter onset latencies, a decreased driven response to tones, and caused a shift in the frequency map to favor tones in the range where speech sounds are the loudest. Both the number of trained tasks and the number of days of training strongly predict the percent of A1 responding to a low frequency tone. Rats trained on a single speech discrimination task performed less accurately than rats trained on multiple tasks and did not exhibit A1 response changes. Our results indicate that extensive speech training can reorganize the A1 frequency map, which may have downstream consequences on speech sound processing. PMID:24344364

  6. Speech enhancement

    CERN Document Server

    Benesty, Jacob; Chen, Jingdong

    2006-01-01

    We live in a noisy world! In all applications (telecommunications, hands-free communications, recording, human-machine interfaces, etc.) that require at least one microphone, the signal of interest is usually contaminated by noise and reverberation. As a result, the microphone signal has to be "cleaned" with digital signal processing tools before it is played out, transmitted, or stored. This book is about speech enhancement. Different well-known and state-of-the-art methods for noise reduction, with one or multiple microphones, are discussed. By speech enhancement, we mean not only noise reduction...

  7. Degraded neural and behavioral processing of speech sounds in a rat model of Rett syndrome.

    Science.gov (United States)

    Engineer, Crystal T; Rahebi, Kimiya C; Borland, Michael S; Buell, Elizabeth P; Centanni, Tracy M; Fink, Melyssa K; Im, Kwok W; Wilson, Linda G; Kilgard, Michael P

    2015-11-01

    Individuals with Rett syndrome have greatly impaired speech and language abilities. Auditory brainstem responses to sounds are normal, but cortical responses are highly abnormal. In this study, we used the novel rat Mecp2 knockout model of Rett syndrome to document the neural and behavioral processing of speech sounds. We hypothesized that both speech discrimination ability and the neural response to speech sounds would be impaired in Mecp2 rats. We expected that extensive speech training would improve speech discrimination ability and the cortical response to speech sounds. Our results reveal that speech responses across all four auditory cortex fields of Mecp2 rats were hyperexcitable, responded slower, and were less able to follow rapidly presented sounds. While Mecp2 rats could accurately perform consonant and vowel discrimination tasks in quiet, they were significantly impaired at speech sound discrimination in background noise. Extensive speech training improved discrimination ability. Training shifted cortical responses in both Mecp2 and control rats to favor the onset of speech sounds. While training increased the response to low frequency sounds in control rats, the opposite occurred in Mecp2 rats. Although neural coding and plasticity are abnormal in the rat model of Rett syndrome, extensive therapy appears to be effective. These findings may help to explain some aspects of communication deficits in Rett syndrome and suggest that extensive rehabilitation therapy might prove beneficial.

  8. Near-Term Fetuses Process Temporal Features of Speech

    Science.gov (United States)

    Granier-Deferre, Carolyn; Ribeiro, Aurelie; Jacquet, Anne-Yvonne; Bassereau, Sophie

    2011-01-01

    The perception of speech and music requires processing of variations in spectra and amplitude over different time intervals. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, but whether they can process complex auditory streams, such as speech sequences and more specifically their temporal variations, fast or…

  9. History of nonnative Monk Parakeets in Mexico.

    Science.gov (United States)

    Hobson, Elizabeth A; Smith-Vidaurre, Grace; Salinas-Melgoza, Alejandro

    2017-01-01

    Nonnative Monk Parakeets have been reported in increasing numbers across many cities in Mexico, and were formally classified as an invasive species in Mexico in late 2016. However, there has not been a large-scale attempt to determine how international pet trade and national and international governmental regulations have played a part in colonization, and when the species appeared in different areas. We describe the changes in regulations that led the international pet trade market to shift to Mexico, then used international trade data to determine how many parakeets were commercially imported each year and where those individuals originated. We also quantified the recent increases in Monk Parakeet (Myiopsitta monachus) sightings in Mexico in both the scientific literature and in citizen science reports. We describe the timeline of increased reports to understand the history of nonnative Monk Parakeets in Mexico. As in other areas where the species has colonized, the main mode of transport is through the international pet trade. Over half a million Monk Parakeets were commercially imported to Mexico during 2000-2015, with the majority of importation (90%) occurring in 2008-2014, and almost all (98%) were imported from Uruguay. The earliest record of a free-flying Monk Parakeet was observed during 1994-1995 in Mexico City, but sightings of the parakeets did not become geographically widespread in either the scientific literature or citizen science databases until 2012-2015. By 2015, parakeets had been reported in 97 cities in Mexico. Mexico City has consistently seen steep increases in reporting since this species was first reported in Mexico. Here we find that both national and international legal regulations and health concerns drove a rise and fall in Monk Parakeet pet trade importations, shortly followed by widespread sightings of feral parakeets across Mexico. Further monitoring of introduced Monk Parakeet populations in Mexico is needed to understand the

  10. Speech training alters consonant and vowel responses in multiple auditory cortex fields.

    Science.gov (United States)

    Engineer, Crystal T; Rahebi, Kimiya C; Buell, Elizabeth P; Fink, Melyssa K; Kilgard, Michael P

    2015-01-01

    Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Spatial discrimination and visual discrimination

    DEFF Research Database (Denmark)

    Haagensen, Annika M. J.; Grand, Nanna; Klastrup, Signe

    2013-01-01

    Two methods investigating learning and memory in juvenile Gottingen minipigs were evaluated for potential use in preclinical toxicity testing. Twelve minipigs were tested using a spatial hole-board discrimination test including a learning phase and two memory phases. Five minipigs were tested in a visual discrimination test. The juvenile minipigs were able to learn the spatial hole-board discrimination test and showed improved working and reference memory during the learning phase. Performance in the memory phases was affected by the retention intervals, but the minipigs were able to remember the concept of the test in both memory phases. Working memory and reference memory were significantly improved in the last trials of the memory phases. In the visual discrimination test, the minipigs learned to discriminate between the three figures presented to them within 9-14 sessions. For the memory test...

  12. Speech Intelligibility

    Science.gov (United States)

    Brand, Thomas

    Speech intelligibility (SI) is important for different fields of research, engineering and diagnostics, in order to quantify very different phenomena such as the quality of recordings, communication and playback devices, the reverberation of auditoria, characteristics of hearing impairment, the benefit of using hearing aids, or combinations of these things.

  13. Speech dynamics

    NARCIS (Netherlands)

    Pols, L.C.W.

    2011-01-01

    In order for speech to be informative and communicative, segmental and suprasegmental variation is mandatory. Only this leads to meaningful words and sentences. The building blocks are no stable entities put next to each other (like beads on a string or like printed text), but there are gradual tran

  14. Effects of speech clarity on recognition memory for spoken sentences.

    Science.gov (United States)

    Van Engen, Kristin J; Chandrasekaran, Bharath; Smiljanic, Rajka

    2012-01-01

    Extensive research shows that inter-talker variability (i.e., changing the talker) affects recognition memory for speech signals. However, relatively little is known about the consequences of intra-talker variability (i.e. changes in speaking style within a talker) on the encoding of speech signals in memory. It is well established that speakers can modulate the characteristics of their own speech and produce a listener-oriented, intelligibility-enhancing speaking style in response to communication demands (e.g., when speaking to listeners with hearing impairment or non-native speakers of the language). Here we conducted two experiments to examine the role of speaking style variation in spoken language processing. First, we examined the extent to which clear speech provided benefits in challenging listening environments (i.e. speech-in-noise). Second, we compared recognition memory for sentences produced in conversational and clear speaking styles. In both experiments, semantically normal and anomalous sentences were included to investigate the role of higher-level linguistic information in the processing of speaking style variability. The results show that acoustic-phonetic modifications implemented in listener-oriented speech lead to improved speech recognition in challenging listening conditions and, crucially, to a substantial enhancement in recognition memory for sentences.

  15. The Attitudes and Perceptions of Non-Native English Speaking ...

    African Journals Online (AJOL)

    The Attitudes and Perceptions of Non-Native English Speaking Adults toward Explicit Grammar Instruction. ... to excel in their academic careers, obtain good jobs, and interact well with those who speak English. ... AJOL African Journals Online.

  16. Learning new sounds of speech: reallocation of neural substrates.

    Science.gov (United States)

    Golestani, Narly; Zatorre, Robert J

    2004-02-01

    Functional magnetic resonance imaging (fMRI) was used to investigate changes in brain activity related to phonetic learning. Ten monolingual English-speaking subjects were scanned while performing an identification task both before and after five sessions of training with a Hindi dental-retroflex nonnative contrast. Behaviorally, training resulted in an improvement in the ability to identify the nonnative contrast. Imaging results suggest that the successful learning of a nonnative phonetic contrast results in the recruitment of the same areas that are involved during the processing of native contrasts, including the left superior temporal gyrus, insula-frontal operculum, and inferior frontal gyrus. Additionally, results of correlational analyses between behavioral improvement and the blood-oxygenation-level-dependent (BOLD) signal obtained during the posttraining Hindi task suggest that the degree of success in learning is accompanied by more efficient neural processing in classical frontal speech regions, and by a reduction of deactivation relative to a noise baseline condition in left parietotemporal speech regions.

  17. Real-Time Speech/Music Classification With a Hierarchical Oblique Decision Tree

    Science.gov (United States)

    2008-04-01

    REAL-TIME SPEECH/MUSIC CLASSIFICATION WITH A HIERARCHICAL OBLIQUE DECISION TREE. Jun Wang, Qiong Wu, Haojiang Deng, Qin Yan, Institute of Acoustics. ...time speech/music classification with a hierarchical oblique decision tree. A set of discrimination features in the frequency domain are selected... handle signals without discrimination and cannot work properly in the presence of multimedia signals. This paper proposes a real-time speech/music
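The record above describes frequency-domain discrimination features feeding a decision tree. As a purely illustrative sketch (the features and threshold below are assumptions for demonstration, not the paper's actual feature set), a single tree-style split on a spectral feature might look like:

```python
import numpy as np

def spectral_features(x, sr):
    """Two simple frequency-domain features (illustrative stand-ins for
    the paper's unspecified feature set)."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    centroid = np.sum(freqs * spec) / np.sum(spec)              # spectral centroid (Hz)
    zcr = np.mean(np.abs(np.diff(np.signbit(x).astype(int))))   # zero-crossing rate
    return centroid, zcr

def classify_frame(x, sr, zcr_threshold=0.1):
    """One decision-tree-style split (threshold is an assumed toy value):
    sustained musical tones tend to have a low, stable zero-crossing rate
    compared to broadband speech-like signals."""
    _, zcr = spectral_features(x, sr)
    return "speech" if zcr > zcr_threshold else "music"
```

A real classifier of this kind would cascade many such (possibly oblique, i.e. linear-combination) splits learned from labeled audio rather than a single hand-set threshold.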

  18. Age-related sensitive periods influence visual language discrimination in adults

    OpenAIRE

    2013-01-01

    Adults as well as infants have the capacity to discriminate languages based on visual speech alone. Here, we investigated whether adults' ability to discriminate languages based on visual speech cues is influenced by the age of language acquisition. Adult participants who had all learned English (as a first or second language) but did not speak French were shown faces of bilingual (French/English) speakers silently reciting sentences in either language. Using only visual speech information, a...

  19. Nonnative Speaker-Initiated Repair in A Sequential Complex Context

    DEFF Research Database (Denmark)

    Yufu, Mamiko

    Repair has been one of the main subjects of conversation analytical studies and the focus is often put on achieving mutual understanding. However, there are also some phenomena unique to a contact situation, which may be due to restricted linguistic knowledge of nonnative speakers, difference...... to such factors as how Germans see Japanese, the interference of Japanese conversational styles, etc. Through the analyses of nonnative speaker-initiated repair, the context-sensitive complexities are demonstrated in this paper....

  1. Comparing speech characteristics in spinocerebellar ataxias type 3 and type 6 with Friedreich ataxia.

    Science.gov (United States)

    Brendel, Bettina; Synofzik, Matthis; Ackermann, Hermann; Lindig, Tobias; Schölderle, Theresa; Schöls, Ludger; Ziegler, Wolfram

    2015-01-01

    Patterns of dysarthria in spinocerebellar ataxias (SCAs) and their discriminative features remain elusive. Here we aimed to compare the dysarthria profiles of patients with SCA3 and SCA6 vs. Friedreich ataxia (FRDA), focussing on three particularly vulnerable speech parameters in ataxic dysarthria (speaking rate, prosodic modulation, and intelligibility) as well as on a specific oral non-speech variable of ataxic impairment, i.e., the irregularity of oral motor diadochokinesis (DDK). Thirty patients with SCA3, SCA6, and FRDA, matched for group size (n = 10 each), disease severity, and disease duration, produced various speech samples and DDK tasks. A discriminant analysis was used to differentiate speech and non-speech parameters between groups. Regularity of DDK was specifically impaired in SCA3, whereas impairments of speech parameters, i.e., rate and modulation, were more pronounced in SCA6. Speech parameters are particularly vulnerable in SCA6, while non-speech oral motor features are notably impaired in SCA3.

  2. Neural correlates of audiovisual speech processing in a second language.

    Science.gov (United States)

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance.

  3. Speech communications in noise

    Science.gov (United States)

    1984-07-01

    The physical characteristics of speech, the methods of speech masking measurement, and the effects of noise on speech communication are investigated. Topics include the speech signal and intelligibility, the effects of noise on intelligibility, the articulation index, and various devices for evaluating speech systems.
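The articulation index mentioned above is, in simplified form, a band-importance-weighted average of the audible portion of the signal-to-noise ratio. A minimal sketch of that idea (the 30 dB dynamic range mapping and the band weights below are illustrative assumptions, not values from any specific standard):

```python
def articulation_index(snr_db_per_band, band_importance):
    """Simplified articulation-index-style calculation (a sketch, not a
    standard-conformant implementation): each band's SNR is clipped to a
    30 dB range, mapped to an audibility value in [0, 1], and weighted
    by that band's importance for intelligibility."""
    assert abs(sum(band_importance) - 1.0) < 1e-9  # weights must sum to 1
    ai = 0.0
    for snr, w in zip(snr_db_per_band, band_importance):
        audibility = min(max((snr + 12.0) / 30.0, 0.0), 1.0)
        ai += w * audibility
    return ai
```

With equal weights over five bands, an SNR of +18 dB everywhere yields AI = 1.0 (fully audible) and -12 dB everywhere yields AI = 0.0.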

  4. Simulated Critical Differences for Speech Reception Thresholds

    Science.gov (United States)

    Pedersen, Ellen Raben; Juhl, Peter Møller

    2017-01-01

    Purpose: Critical differences state by how much 2 test results have to differ in order to be significantly different. Critical differences for discrimination scores have been available for several decades, but they do not exist for speech reception thresholds (SRTs). This study presents and discusses how critical differences for SRTs can be…
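The simulation idea behind such critical differences can be sketched as a Monte Carlo estimate: draw many pairs of independent test results with a given test-retest measurement error and find the absolute difference exceeded only 5% of the time. (This illustrates the general approach, not the authors' procedure; the parameters are assumed.)

```python
import random

def critical_difference(sd_measurement, n_sim=20000, seed=1):
    """Monte Carlo sketch of a 95% critical difference for a threshold
    measure such as an SRT: two results differing by more than this
    value are unlikely (p < .05) to come from the same true threshold."""
    rng = random.Random(seed)
    diffs = sorted(abs(rng.gauss(0, sd_measurement) - rng.gauss(0, sd_measurement))
                   for _ in range(n_sim))
    return diffs[int(0.95 * n_sim)]  # empirical 95th percentile of |difference|
```

For normally distributed errors this converges on the analytic value 1.96 x sd x sqrt(2), roughly 2.77 times the single-measurement standard deviation.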

  5. Speech and Language Development in ADHD

    OpenAIRE

    J Gordon Millichap

    1999-01-01

    Speech discrimination and phonological working memory were examined in children with ADHD (N=9), ADHD plus developmental coordination disorder (ADHD+DCD) (N=13), and 19 age-matched controls, in a study at the Neuropediatric Unit, Karolinska Institute, Stockholm, Sweden.

  6. Voicing discrimination in multilingual and multiethnic Netherlands

    African Journals Online (AJOL)

    Kate H

    This paper explores political discourse on two public issues involving discrimination in the. Netherlands ..... 'they' hate 'us' (the West, non-Muslims). ... election night speech may well lead to his ... As in many other countries, social media function as a garbage belt for discriminatory ..... communication in the public sphere.

  7. Discriminative training of self-structuring hidden control neural models

    DEFF Research Database (Denmark)

    Sørensen, Helge Bjarup Dissing; Hartmann, Uwe; Hunnerup, Preben

    1995-01-01

    This paper presents a new training algorithm for self-structuring hidden control neural (SHC) models. The SHC models were trained non-discriminatively for speech recognition applications. Better recognition performance can generally be achieved, if discriminative training is applied instead. Thus...

  8. Brain Potentials to Speech and Acoustic Sound Discrimination Uncover the Origin of Individual Differences in Perceiving the Sounds of A Second Language

    Institute of Scientific and Technical Information of China (English)

    范若琳; 莫雷; 徐贵平; 钟伟芳; 周莹; 杨力

    2014-01-01

    As to the origin of individual differences in perceiving the sounds of a second language, the scientific community has been divided. There are two alternative explanations: a general psychoacoustic origin vs. a speech-specific one. A previous study (Díaz et al., 2008) showed that such individual variability is linked to perceivers' general speech abilities. However, our research casts doubt on that conclusion for two reasons. Firstly, the study focused exclusively on speech sounds from the same language family, rather than exploring languages from different families; it therefore showed only that individual variability in L2 is related to varied sensitivity to speech sounds within the same language family, not within a general speech system spanning different language families. Moreover, the study used pure tones as its acoustic materials, neglecting another important class of acoustic signals, complex sounds. Without studying complex tones, one cannot conclude that the ability to process general sounds has no impact on the discrimination of L2 speech sounds. Here, using speech sounds from different language families as well as complex sounds, the main purpose of the present study was to explore whether individual differences in perceiving L2 stem from the ability to process general acoustic signals or phonetic stimuli within a specific language family, and further to explore whether such individual variability ultimately stems from individual sensitivity to complex tones. In the present study, 14 good L2 perceivers (GP) and 14 poor L2 perceivers (PP) were selected from 130 healthy Cantonese (L1)-Mandarin (L2) bilinguals, according to their performance in a behavioral task, to participate in the following ERP experiment. To measure the participants' sound discrimination precisely, the MMN elicited by an oddball paradigm was recorded. The ERP experiment consists of three sections, including
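The oddball paradigm used to elicit the MMN presents rare deviant stimuli among frequent standards, typically with a minimum number of standards between successive deviants. A hedged sketch of such a sequence generator (the 15% deviant probability and the gap constraint are illustrative assumptions, not the study's parameters):

```python
import random

def oddball_sequence(n_trials, deviant_prob=0.15, min_gap=2, seed=0):
    """Generate a standard/deviant trial sequence for an MMN oddball
    paradigm: deviants occur with probability ~deviant_prob, and at least
    `min_gap` standards separate consecutive deviants, so each deviant is
    preceded by a re-established standard context."""
    rng = random.Random(seed)
    seq, since_last = [], min_gap  # allow a deviant from the first trial
    for _ in range(n_trials):
        if since_last >= min_gap and rng.random() < deviant_prob:
            seq.append("deviant")
            since_last = 0
        else:
            seq.append("standard")
            since_last += 1
    return seq
```

The MMN is then computed by subtracting the ERP averaged over standard trials from the ERP averaged over deviant trials.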

  9. Going to a Speech Therapist

    Science.gov (United States)

    What Do Speech Therapists Help With? Speech therapists (also called speech-language pathologists) help people of all ...

  10. The Functional Connectome of Speech Control.

    Science.gov (United States)

    Fuertinger, Stefan; Horwitz, Barry; Simonyan, Kristina

    2015-07-01

    In the past few years, several studies have been directed to understanding the complexity of functional interactions between different brain regions during various human behaviors. Among these, neuroimaging research installed the notion that speech and language require an orchestration of brain regions for comprehension, planning, and integration of a heard sound with a spoken word. However, these studies have been largely limited to mapping the neural correlates of separate speech elements and examining distinct cortical or subcortical circuits involved in different aspects of speech control. As a result, the complexity of the brain network machinery controlling speech and language remained largely unknown. Using graph theoretical analysis of functional MRI (fMRI) data in healthy subjects, we quantified the large-scale speech network topology by constructing functional brain networks of increasing hierarchy from the resting state to motor output of meaningless syllables to complex production of real-life speech as well as compared to non-speech-related sequential finger tapping and pure tone discrimination networks. We identified a segregated network of highly connected local neural communities (hubs) in the primary sensorimotor and parietal regions, which formed a commonly shared core hub network across the examined conditions, with the left area 4p playing an important role in speech network organization. These sensorimotor core hubs exhibited features of flexible hubs based on their participation in several functional domains across different networks and ability to adaptively switch long-range functional connectivity depending on task content, resulting in a distinct community structure of each examined network. 
Specifically, compared to other tasks, speech production was characterized by the formation of six distinct neural communities with specialized recruitment of the prefrontal cortex, insula, putamen, and thalamus, which collectively forged the formation

  12. Speech research

    Science.gov (United States)

    1992-06-01

    Phonology is traditionally seen as the discipline that concerns itself with the building blocks of linguistic messages. It is the study of the structure of sound inventories of languages and of the participation of sounds in rules or processes. Phonetics, in contrast, concerns speech sounds as produced and perceived. Two extreme positions on the relationship between phonological messages and phonetic realizations are represented in the literature. One holds that the primary home for linguistic symbols, including phonological ones, is the human mind, itself housed in the human brain. The second holds that their primary home is the human vocal tract.

  13. Engineering biofuel tolerance in non-native producing microorganisms.

    Science.gov (United States)

    Jin, Hu; Chen, Lei; Wang, Jiangxin; Zhang, Weiwen

    2014-01-01

    Large-scale production of renewable biofuels through microbiological processes has drawn significant attention in recent years, mostly due to the increasing concerns on the petroleum fuel shortages and the environmental consequences of the over-utilization of petroleum-based fuels. In addition to native biofuel-producing microbes that have been employed for biofuel production for decades, recent advances in metabolic engineering and synthetic biology have made it possible to produce biofuels in several non-native biofuel-producing microorganisms. Compared to native producers, these non-native systems carry the advantages of fast growth, simple nutrient requirements, readiness for genetic modifications, and even the capability to assimilate CO2 and solar energy, making them competitive alternative systems to further decrease the biofuel production cost. However, the tolerance of these non-native microorganisms to toxic biofuels is naturally low, which has restricted their potential for high-efficiency biofuel production. To address these issues, research has recently been conducted to explore biofuel tolerance mechanisms and to construct robust high-tolerance strains of non-native biofuel-producing microorganisms. In this review, we critically summarize the recent progress in this area, focusing on three popular non-native biofuel-producing systems, i.e. Escherichia coli, Lactobacillus and photosynthetic cyanobacteria.

  14. Defining the impact of non-native species.

    Science.gov (United States)

    Jeschke, Jonathan M; Bacher, Sven; Blackburn, Tim M; Dick, Jaimie T A; Essl, Franz; Evans, Thomas; Gaertner, Mirijam; Hulme, Philip E; Kühn, Ingolf; Mrugała, Agata; Pergl, Jan; Pyšek, Petr; Rabitsch, Wolfgang; Ricciardi, Anthony; Richardson, David M; Sendek, Agnieszka; Vilà, Montserrat; Winter, Marten; Kumschick, Sabrina

    2014-10-01

    Non-native species cause changes in the ecosystems to which they are introduced. These changes, or some of them, are usually termed impacts; they can be manifold and potentially damaging to ecosystems and biodiversity. However, the impacts of most non-native species are poorly understood, and a synthesis of available information is being hindered because authors often do not clearly define impact. We argue that explicitly defining the impact of non-native species will promote progress toward a better understanding of the implications of changes to biodiversity and ecosystems caused by non-native species; help disentangle which aspects of scientific debates about non-native species are due to disparate definitions and which represent true scientific discord; and improve communication between scientists from different research disciplines and between scientists, managers, and policy makers. For these reasons and based on examples from the literature, we devised seven key questions that fall into 4 categories: directionality, classification and measurement, ecological or socio-economic changes, and scale. These questions should help in formulating clear and practical definitions of impact to suit specific scientific, stakeholder, or legislative contexts. © 2014 The Authors. Conservation Biology published by Wiley Periodicals, Inc., on behalf of the Society for Conservation Biology.

  15. Speech Sound Processing Deficits and Training-Induced Neural Plasticity in Rats with Dyslexia Gene Knockdown

    Science.gov (United States)

    Centanni, Tracy M.; Chen, Fuyi; Booker, Anne M.; Engineer, Crystal T.; Sloan, Andrew M.; Rennaker, Robert L.; LoTurco, Joseph J.; Kilgard, Michael P.

    2014-01-01

    In utero RNAi of the dyslexia-associated gene Kiaa0319 in rats (KIA-) degrades cortical responses to speech sounds and increases trial-by-trial variability in onset latency. We tested the hypothesis that KIA- rats would be impaired at speech sound discrimination. KIA- rats needed twice as much training in quiet conditions to perform at control levels and remained impaired at several speech tasks. Focused training using truncated speech sounds was able to normalize speech discrimination in quiet and background noise conditions. Training also normalized trial-by-trial neural variability and temporal phase locking. Cortical activity from speech trained KIA- rats was sufficient to accurately discriminate between similar consonant sounds. These results provide the first direct evidence that assumed reduced expression of the dyslexia-associated gene KIAA0319 can cause phoneme processing impairments similar to those seen in dyslexia and that intensive behavioral therapy can eliminate these impairments. PMID:24871331

  16. Hyperarticulation of vowels enhances phonetic change responses in both native and non-native speakers of English: evidence from an auditory event-related potential study.

    Science.gov (United States)

    Uther, Maria; Giannakopoulou, Anastasia; Iverson, Paul

    2012-08-27

    The finding that hyperarticulation of vowel sounds occurs in certain speech registers (e.g., infant- and foreigner-directed speech) suggests that hyperarticulation may have a didactic function in facilitating acquisition of new phonetic categories in language learners. This event-related potential study tested whether hyperarticulation of vowels elicits larger phonetic change responses, as indexed by the mismatch negativity (MMN) component of the auditory event-related potential (ERP) and tested native and non-native speakers of English. Data from 11 native English-speaking and 10 native Greek-speaking participants showed that Greek speakers in general had smaller MMNs compared to English speakers, confirming previous studies demonstrating sensitivity of the MMN to language background. In terms of the effect of hyperarticulation, hyperarticulated stimuli elicited larger MMNs for both language groups, suggesting vowel space expansion does elicit larger pre-attentive phonetic change responses. Interestingly Greek native speakers showed some P3a activity that was not present in the English native speakers, raising the possibility that additional attentional switch mechanisms are activated in non-native speakers compared to native speakers. These results give general support for models of speech learning such as Kuhl's Native Language Magnet enhanced (NLM-e) theory. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.

  17. LIBERDADE DE EXPRESSÃO E DISCURSO DO ÓDIO NO BRASIL / FREE SPEECH AND HATE SPEECH IN BRAZIL

    Directory of Open Access Journals (Sweden)

    Nevita Maria Pessoa de Aquino Franca Luna

    2014-12-01

    The purpose of this article is to analyze the restriction of free speech when it comes close to hate speech. In this perspective, the aim of this study is to answer the question: what understanding has the Brazilian Supreme Court adopted in cases involving the conflict between free speech and hate speech? The methodology combines a bibliographic review of the theoretical assumptions of the research (the concepts of free speech and hate speech, and the understanding of the defense rights of traditionally discriminated minorities) with empirical research (documental and jurisprudential analysis of cases judged by the American, German and Brazilian courts). Firstly, free speech is discussed, defining its meaning, content and purpose. Then, hate speech is identified as an inhibitor of free speech, for offending members of traditionally discriminated minorities, who are outnumbered or in a situation of cultural, socioeconomic or political subordination. Subsequently, some aspects of the American (negative freedom) and German (positive freedom) models are discussed, to demonstrate that different cultures adopt different legal solutions. In the end, it is concluded that the Brazilian understanding approximates the German doctrine, based on the analysis of landmark cases such as that of the publisher Siegfried Ellwanger (2003) and the Samba School Unidos do Viradouro (2008). The Brazilian approach, in a multicultural country made up of different ethnicities, leads to a new process of defending minorities which, despite involving the collision of fundamental rights (dignity, equality and freedom), is still restrained by barriers incompatible with a contemporary pluralistic democracy.

  18. NIS occurrence - Non-native species impacts on threatened and endangered salmonids

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The objectives of this project: a) Identify the distribution of non-natives in the Columbia River Basin b) Highlight the impacts of non-natives on salmonids c)...

  19. Auditory skills and brain morphology predict individual differences in adaptation to degraded speech.

    Science.gov (United States)

    Erb, Julia; Henry, Molly J; Eisner, Frank; Obleser, Jonas

    2012-07-01

    Noise-vocoded speech is a spectrally highly degraded signal, but it preserves the temporal envelope of speech. Listeners vary considerably in their ability to adapt to this degraded speech signal. Here, we hypothesised that individual differences in adaptation to vocoded speech should be predictable by non-speech auditory, cognitive, and neuroanatomical factors. We tested 18 normal-hearing participants in a short-term vocoded speech-learning paradigm (listening to 100 4-band-vocoded sentences). Non-speech auditory skills were assessed using amplitude modulation (AM) rate discrimination, where modulation rates were centred on the speech-relevant rate of 4 Hz. Working memory capacities were evaluated (digit span and nonword repetition), and structural MRI scans were examined for anatomical predictors of vocoded speech learning using voxel-based morphometry. Listeners who learned faster to understand degraded speech also showed smaller thresholds in the AM discrimination task. This ability to adjust to degraded speech is furthermore reflected anatomically in increased grey matter volume in an area of the left thalamus (pulvinar) that is strongly connected to the auditory and prefrontal cortices. Thus, individual non-speech auditory skills and left thalamus grey matter volume can predict how quickly a listener adapts to degraded speech. Copyright © 2012 Elsevier Ltd. All rights reserved.
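Noise-vocoded speech as described above discards spectral detail while preserving each band's temporal envelope: the signal is split into a few frequency bands, each band's envelope is extracted, and the envelope modulates band-limited noise. A minimal sketch of a 4-band noise vocoder (the FFT-mask filtering, log-spaced band edges, and envelope cutoff below are simplifying assumptions; published stimuli typically use proper filter banks):

```python
import numpy as np

def noise_vocode(x, sr, n_bands=4, f_lo=100.0, f_hi=4000.0, env_cut=30.0, seed=0):
    """Minimal noise vocoder sketch (illustrative, not the study's stimuli):
    split the signal into log-spaced bands via FFT masking, extract each
    band's envelope, and use it to modulate band-limited noise."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # log-spaced band edges
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    X = np.fft.rfft(x)
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(X * mask, n=len(x))
        # crude envelope: rectify, then low-pass by FFT masking at env_cut Hz
        E = np.fft.rfft(np.abs(band))
        env = np.clip(np.fft.irfft(E * (freqs <= env_cut), n=len(x)), 0.0, None)
        # band-limited noise carrier, modulated by the envelope
        carrier = np.fft.irfft(np.fft.rfft(rng.standard_normal(len(x))) * mask, n=len(x))
        out += env * carrier
    return out
```

Fewer bands yield a more degraded signal; intelligibility of such stimuli typically improves with the number of bands and with listener adaptation.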

  20. Speech recognition using articulatory and excitation source features

    CERN Document Server

    Rao, K Sreenivasa

    2017-01-01

    This book discusses the contribution of articulatory and excitation source information in discriminating sound units. The authors focus on excitation source component of speech -- and the dynamics of various articulators during speech production -- for enhancement of speech recognition (SR) performance. Speech recognition is analyzed for read, extempore, and conversation modes of speech. Five groups of articulatory features (AFs) are explored for speech recognition, in addition to conventional spectral features. Each chapter provides the motivation for exploring the specific feature for SR task, discusses the methods to extract those features, and finally suggests appropriate models to capture the sound unit specific knowledge from the proposed features. The authors close by discussing various combinations of spectral, articulatory and source features, and the desired models to enhance the performance of SR systems.

  1. Linguistic influences in adult perception of non-native vowel contrasts.

    Science.gov (United States)

    Polka, L

    1995-02-01

    Perception of natural productions of two German vowel contrasts, /y/ vs /u/ and /Y/ vs /U/, was examined in monolingual English-speaking adults. Subjects were tested on multiple exemplars of the contrasting vowels produced in a dVt syllable by a native German speaker. Discrimination accuracy in an AXB discrimination task was well above chance for both contrasts. Most of the English adults failed to attain "nativelike" discrimination accuracy for the lax vowel pair /U/ vs /Y/, whereas all subjects showed nativelike performance in discriminating the tense vowel pair /u/ vs /y/. Results of a keyword identification and rating task provided evidence that English listeners' mapping of the German vowels to English vowel categories can be characterized as a category goodness difference assimilation, and that the difference in category goodness was more pronounced for the tense vowel pair than for the lax vowel pair. The results failed to support the hypothesis that the acoustic structure of vowels consistently favors auditory coding. Overall, the findings are compatible with existing data on discrimination of cross-language consonant contrasts in natural speech and suggest that linguistic experience shapes the discrimination of vowels and consonants as phonetic segmental units in similar ways.
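In the AXB task used above, listeners hear three stimuli per trial and judge whether the middle one (X) belongs with the first (A) or the last (B). A minimal scoring sketch (the tuple format and category labels are assumed conventions for illustration):

```python
def run_axb_trial(a_cat, x_cat, b_cat, response):
    """Score one AXB trial: X matches the category of either A or B;
    the listener's response ('A' or 'B') is correct if it names the
    matching flank."""
    correct = "A" if x_cat == a_cat else "B"
    return response == correct

def axb_percent_correct(trials):
    """trials: list of (a_cat, x_cat, b_cat, response) tuples."""
    hits = sum(run_axb_trial(*t) for t in trials)
    return 100.0 * hits / len(trials)
```

Chance performance in AXB is 50%, so "well above chance" in the abstract means percent-correct scores reliably exceeding that baseline.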

  2. Morphological change and phenotypic plasticity in native and non-native pumpkinseed sunfish in response to sustained water velocities.

    Science.gov (United States)

    Yavno, S; Fox, M G

    2013-11-01

    Phenotypic plasticity can contribute to the proliferation and invasion success of nonindigenous species by promoting phenotypic changes that increase fitness, facilitate range expansion and improve survival. In this study, differences in phenotypic plasticity were investigated using young-of-year pumpkinseed sunfish from colonies established with lentic and lotic populations originating in Canada (native) and Spain (non-native). Individuals were subjected to static and flowing water treatments for 80 days. Inter- and intra-population differences were tested using ANCOVA and discriminant function analysis, and differences in phenotypic plasticity were tested through a MANOVA of discriminant function scores. Differences between Iberian and North American populations were observed in dorsal fin length, pectoral fin position and caudal peduncle length. Phenotypic plasticity had less influence on morphology than genetic factors, regardless of population origin. Contrary to predictions, Iberian pumpkinseed exhibited lower levels of phenotypic plasticity than native populations, suggesting that canalization may have occurred in the non-native populations during the processes of introduction and range expansion.

  3. Speech production, Psychology of

    NARCIS (Netherlands)

    Schriefers, H.J.; Vigliocco, G.

    2015-01-01

    Research on speech production investigates the cognitive processes involved in transforming thoughts into speech. This article starts with a discussion of the methodological issues inherent to research in speech production that illustrates how empirical approaches to speech production must differ fr

  4. New tests of the distal speech rate effect: examining cross-linguistic generalization.

    Science.gov (United States)

    Dilley, Laura C; Morrill, Tuuli H; Banzina, Elina

    2013-01-01

    Recent findings [Dilley and Pitt, 2010. Psych. Science. 21, 1664-1670] have shown that manipulating context speech rate in English can cause entire syllables to disappear or appear perceptually. The current studies tested two rate-based explanations of this phenomenon while attempting to replicate and extend these findings to another language, Russian. In Experiment 1, native Russian speakers listened to Russian sentences which had been subjected to rate manipulations and performed a lexical report task. Experiment 2 investigated speech rate effects in cross-language speech perception; non-native speakers of Russian of both high and low proficiency were tested on the same Russian sentences as in Experiment 1. They decided between two lexical interpretations of a critical portion of the sentence, where one choice contained more phonological material than the other (e.g., /stərʌ'na/ "side" vs. /strʌ'na/ "country"). In both experiments, with native and non-native speakers of Russian, context speech rate and the relative duration of the critical sentence portion were found to influence the amount of phonological material perceived. The results support the generalized rate normalization hypothesis, according to which the content perceived in a spectrally ambiguous stretch of speech depends on the duration of that content relative to the surrounding speech, while showing that the findings of Dilley and Pitt (2010) extend to a variety of morphosyntactic contexts and a new language, Russian. Findings indicate that relative timing cues across an utterance can be critical to accurate lexical perception by both native and non-native speakers.

  5. New tests of the distal speech rate effect: Examining cross-linguistic generalization

    Directory of Open Access Journals (Sweden)

    Laura eDilley

    2013-12-01

    Full Text Available Recent findings [Dilley and Pitt, 2010. Psych. Science. 21, 1664-1670] have shown that manipulating context speech rate in English can cause entire syllables to disappear or appear perceptually. The current studies tested two rate-based explanations of this phenomenon while attempting to replicate and extend these findings to another language, Russian. In Experiment 1, native Russian speakers listened to Russian sentences which had been subjected to rate manipulations and performed a lexical report task. Experiment 2 investigated speech rate effects in cross-language speech perception; non-native speakers of Russian of both high and low proficiency were tested on the same Russian sentences as in Experiment 1. They decided between two lexical interpretations of a critical portion of the sentence, where one choice contained more phonological material than the other (e.g., /stərʌ'na/ "side" vs. /strʌ'na/ "country"). In both experiments, with native and non-native speakers of Russian, context speech rate and the relative duration of the critical sentence portion were found to influence the amount of phonological material perceived. The results support the generalized rate normalization hypothesis, according to which the content perceived in a spectrally ambiguous stretch of speech depends on the duration of that content relative to the surrounding speech, while showing that the findings of Dilley and Pitt (2010) extend to a variety of morphosyntactic contexts and a new language, Russian. Findings indicate that relative timing cues across an utterance can be critical to accurate lexical perception by both native and non-native speakers.

  6. The neural processing of foreign-accented speech and its relationship to listener bias

    Directory of Open Access Journals (Sweden)

    Han-Gyol eYi

    2014-10-01

    Full Text Available Foreign-accented speech often presents a challenging listening condition. In addition to deviations from the target speech norms related to the inexperience of the nonnative speaker, listener characteristics may play a role in determining intelligibility levels. We have previously shown that an implicit visual bias for associating East Asian faces with foreignness predicts listeners' perceptual ability to process Korean-accented English audiovisual speech (Yi et al., 2013). Here, we examine the neural mechanism underlying the influence of listener bias towards foreign faces on speech perception. In a functional magnetic resonance imaging (fMRI) study, native English speakers listened to native- and Korean-accented English sentences, with or without faces. The participants' Asian-foreign association was measured using an implicit association test (IAT), conducted outside the scanner. We found that foreign-accented speech evoked greater activity in the bilateral primary auditory cortices and the inferior frontal gyri, potentially reflecting greater computational demand. Higher IAT scores, indicating greater bias, were associated with increased BOLD response to foreign-accented speech with faces in the primary auditory cortex, the early node for spectrotemporal analysis. We conclude the following: (1) foreign-accented speech perception places greater demand on the neural systems underlying speech perception; (2) the face of the talker can exaggerate the perceived foreignness of foreign-accented speech; (3) implicit Asian-foreign association is associated with decreased neural efficiency in early spectrotemporal processing.

  7. 78 FR 49717 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    ... reasons that STS has not been more widely utilized. Are people with speech disabilities not connected to... COMMISSION 47 CFR Part 64 Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With...

  8. Speech Enhancement

    DEFF Research Database (Denmark)

    Benesty, Jacob; Jensen, Jesper Rindom; Christensen, Mads Græsbøll;

    Speech enhancement is a classical problem in signal processing, yet still largely unsolved. Two of the conventional approaches for solving this problem are linear filtering, like the classical Wiener filter, and subspace methods. These approaches have traditionally been treated as different classes of methods and have been introduced in somewhat different contexts. Linear filtering methods originate in stochastic processes, while subspace methods have largely been based on developments in numerical linear algebra and matrix approximation theory. This book bridges the gap between these two classes of methods by showing how the ideas behind subspace methods can be incorporated into traditional linear filtering. In the context of subspace methods, the enhancement problem can then be seen as a classical linear filter design problem. This means that various solutions can more easily be compared...

  9. Speech therapy with obturator.

    Science.gov (United States)

    Shyammohan, A; Sreenivasulu, D

    2010-12-01

    Rehabilitation of speech is as important as closure of the defect in cases of velopharyngeal insufficiency. Often the importance of speech therapy is sidelined during the fabrication of obturators: usually the speech part is taken up only at a later stage and is relegated entirely to a speech therapist, without the active involvement of the prosthodontist. The article suggests a protocol for speech therapy in such cases, to be carried out in unison with the prosthodontist.

  10. Discrimination of speaker size from syllable phrases

    OpenAIRE

    Ives, D. Timothy; Smith, David R R; Patterson, Roy D.

    2005-01-01

    The length of the vocal tract is correlated with speaker size and, so, speech sounds have information about the size of the speaker in a form that is interpretable by the listener. A wide range of different vocal tract lengths exist in the population and humans are able to distinguish speaker size from the speech. Smith et al. [J. Acoust. Soc. Am. 117, 305–318 (2005)] presented vowel sounds to listeners and showed that the ability to discriminate speaker size extends beyond the normal range o...

  11. The Comparing Auditory Discrimination in Blind and Sighted Subjects

    Directory of Open Access Journals (Sweden)

    Dr. Hassan Ashayeri

    2000-05-01

    Full Text Available Studying auditory discrimination in children, and the role it plays in acquiring language skills, is of great importance. The relationship between articulation disorders and the ability to discriminate speech sounds is likewise an important topic for speech and language researchers. Previous event-related potential (ERP) studies have suggested a possible participation of the visual cortex in auditory processing in the blind. In this study, blind subjects were asked to discriminate 100 pairs of Farsi words (an auditory discrimination task) while listening to them from a recorded tape. The results showed that the blind subjects were able to discriminate the heard material better than sighted subjects (P<0.05). According to this study, in blind subjects cortical areas normally reserved for vision may be activated by other sensory modalities, in accordance with previous studies. We suggest that the auditory cortex expands in blind humans.

  12. Initial Teacher Training Courses and Non-Native Speaker Teachers

    Science.gov (United States)

    Anderson, Jason

    2016-01-01

    This article reports on a study contrasting 41 native speakers (NSs) and 38 non-native speakers (NNSs) of English from two short initial teacher training courses, the Cambridge Certificate in English Language Teaching to Adults and the Trinity College London CertTESOL. After a brief history and literature review, I present findings on teachers'…

  14. The Ceremonial Elements of Non-Native Cultures.

    Science.gov (United States)

    Horwood, Bert

    1994-01-01

    Explores reasons behind the wrongful adoption of Native American ceremonies by Euro-Americans. Focuses on the need for ceremony, its relevance to environmental education, and the fact that some immigrant cultural traditions neither fit this new land nor value the earth. Suggests how non-Natives can express their connection to the land by creating…

  15. Privilege (or "Noblesse Oblige") of the Nonnative Speaker of Russian.

    Science.gov (United States)

    Garza, Thomas J.

    This paper responds to Claire Kramsch's essay on the demise of the notion of the idealized native speaker as the model for second language learning and implications for second languages and cultures education. Focusing on the nonnative speaker of Russian and Russian language education in the United States, it asserts that both the quantity and…

  16. Non-Native University Students' Perception of Plagiarism

    Science.gov (United States)

    Ahmad, Ummul Khair; Mansourizadeh, Kobra; Ai, Grace Koh Ming

    2012-01-01

    Plagiarism is a complex issue especially among non-native students and it has received a lot of attention from researchers and scholars of academic writing. Some scholars attribute this problem to cultural perceptions and different attitudes toward texts. This study evaluates student perception of different aspects of plagiarism. A small group of…

  17. How TESOL Educators Teach Nonnative English-Speaking Teachers

    Science.gov (United States)

    Frazier, Stefan; Phillabaum, Scott

    2012-01-01

    This paper reports the results of a survey of California TESOL educators about issues related to nonnative English-speaking teachers (NNESTs). A good deal of research suggests that NNESTs are as effective, if not more so, than native English-speaking teachers (NESTs) and that their treatment in today's work world should be reconsidered; in…

  18. Improved discriminative training for generative model

    Institute of Scientific and Technical Information of China (English)

    WU Ya-hui; GUO Jun; LIU Gang

    2009-01-01

    This article proposes a model combination method to enhance the discriminability of a generative model. Generative and discriminative models have different optimization objectives, and each has its own advantages and drawbacks. The method proposed in this article strikes a balance between the two. It extracts the discriminative parameters from the generative model and generates a new model based on a multi-model combination. The weight for combining is determined by the ratio of the inter-class variance to the intra-class variance: the higher the ratio, the greater the weight, and the more discriminative the model will be. Experiments on speech recognition demonstrate that the new model outperforms a model trained with the traditional generative method.
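    The variance-ratio weighting described above resembles a per-dimension Fisher ratio. A minimal numpy sketch of the idea on synthetic data (the function name and data are illustrative, not taken from the article):

```python
import numpy as np

def fisher_ratio_weights(features, labels):
    """Per-dimension ratio of inter-class to intra-class variance.

    Dimensions that separate the classes well get a larger ratio and
    hence a larger combination weight (an illustration of the idea,
    not the article's exact formulation)."""
    classes = np.unique(labels)
    grand_mean = features.mean(axis=0)
    inter = np.zeros(features.shape[1])
    intra = np.zeros(features.shape[1])
    for c in classes:
        x = features[labels == c]
        inter += len(x) * (x.mean(axis=0) - grand_mean) ** 2
        intra += ((x - x.mean(axis=0)) ** 2).sum(axis=0)
    ratio = inter / np.maximum(intra, 1e-12)
    return ratio / ratio.sum()  # normalize to combination weights

# Two classes separated along dimension 0 only:
rng = np.random.default_rng(0)
a = rng.normal([0.0, 0.0], 0.1, size=(50, 2))
b = rng.normal([5.0, 0.0], 0.1, size=(50, 2))
X = np.vstack([a, b])
y = np.array([0] * 50 + [1] * 50)
w = fisher_ratio_weights(X, y)
print(w[0] > w[1])  # the discriminative dimension gets the larger weight
```

    Here dimension 0 carries all the class separation, so it receives nearly all of the combination weight.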

  19. Delayed Speech or Language Development

    Science.gov (United States)

    A KidsHealth article for parents on delayed speech or language development, covering normal speech and language development and how to tell whether a child is right on schedule.

  20. Representation Learning Based Speech Assistive System for Persons With Dysarthria.

    Science.gov (United States)

    Chandrakala, S; Rajeswari, Natarajan

    2017-09-01

    An assistive system for persons with vocal impairment due to dysarthria converts dysarthric speech to normal speech or text. Because of the articulatory deficits, dysarthric speech recognition needs a robust learning technique. Representation learning is significant for complex tasks such as dysarthric speech recognition. We focus on a robust representation for dysarthric speech recognition that involves recognizing sequential patterns of varying-length utterances. We propose a hybrid framework that uses a generative-learning-based data representation with a discriminative-learning-based classifier. In this hybrid framework, we propose to use Example-Specific Hidden Markov Models (ESHMMs) to obtain log-likelihood scores for a dysarthric speech utterance, which form a fixed-dimensional score-vector representation. This representation is used as the input to a discriminative classifier such as a support vector machine. The performance of the proposed approach is evaluated using the UA-Speech database. The recognition accuracy is much better than that of the conventional hidden Markov model based approach and the Deep Neural Network-Hidden Markov Model (DNN-HMM). The efficiency of the discriminative nature of the score-vector representation is demonstrated for "very low" intelligibility words.
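    The score-vector idea — per-example generative models whose log-likelihood scores form a fixed-dimensional representation of a variable-length utterance — can be sketched as follows. This is a toy stand-in: diagonal Gaussians replace the ESHMMs, a mean-score comparison replaces the SVM, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the ESHMM idea: one simple generative model per
# training example (a diagonal Gaussian here, rather than an HMM).
def fit_gaussian(frames):
    mu = frames.mean(axis=0)
    var = frames.var(axis=0) + 1e-3
    return mu, var

def loglik(frames, model):
    """Sum of per-frame diagonal-Gaussian log-densities."""
    mu, var = model
    z = ((frames - mu) ** 2) / var + np.log(2 * np.pi * var)
    return -0.5 * z.sum()

def score_vector(frames, models):
    """Fixed-dimensional representation: one score per example model,
    regardless of how many frames the utterance has."""
    return np.array([loglik(frames, m) for m in models])

# Variable-length "utterances" (frame sequences) from two word classes.
def utterance(center, length):
    return rng.normal(center, 0.3, size=(length, 4))

train = [(utterance([0, 0, 0, 0], int(rng.integers(5, 15))), 0) for _ in range(10)] + \
        [(utterance([2, 2, 2, 2], int(rng.integers(5, 15))), 1) for _ in range(10)]
models = [fit_gaussian(u) for u, _ in train]
labels = np.array([y for _, y in train])

# Classify a new utterance from its score vector: here by which class's
# example models assign the higher mean log-likelihood (a stand-in for
# the discriminative SVM used in the paper).
test_utt = utterance([2, 2, 2, 2], 9)
s = score_vector(test_utt, models)
pred = int(s[labels == 1].mean() > s[labels == 0].mean())
print(pred)
```

    The key property is that `s` always has one entry per training-example model, so utterances of different lengths map to vectors of the same dimension.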

  1. [Improvement in Phoneme Discrimination in Noise in Normal Hearing Adults].

    Science.gov (United States)

    Schumann, A; Garea Garcia, L; Hoppe, U

    2017-02-01

    Objective: The study's aim was to examine whether phoneme discrimination in noise can be trained in normal-hearing adults, and what effect such training has on speech recognition in noise. A computerised training program consisting of nonsense syllables presented in background noise was used to train participants' discrimination ability. Material and Methods: 46 normal-hearing subjects took part in this study, 28 in the training group and 18 in the control group. Only the training group subjects were asked to train over a period of 3 weeks, twice a week for an hour, with the computer-based training program. Speech recognition in noise was measured pre- and post-training for the training group subjects with the Freiburger Einsilber Test. The control group subjects completed test and retest measures separated by a 2-3 week break. For the training group, follow-up speech recognition was measured 2-3 months after the end of the training. Results: The majority of training group subjects improved their phoneme discrimination significantly. In addition, their speech recognition in noise improved significantly over the course of the training compared to the control group, and remained stable afterwards. Conclusions: Phoneme discrimination in noise can be trained in normal-hearing adults. The improvements have a positive effect on speech recognition in noise that persists over a longer period of time.

  2. 78 FR 49693 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    ... COMMISSION 47 CFR Part 64 Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With... this document, the Commission amends telecommunications relay services (TRS) mandatory...

  3. Musician enhancement for speech-in-noise.

    Science.gov (United States)

    Parbery-Clark, Alexandra; Skoe, Erika; Lam, Carrie; Kraus, Nina

    2009-12-01

    To investigate the effect of musical training on speech-in-noise (SIN) performance, a complex task requiring the integration of working memory and stream segregation as well as the detection of time-varying perceptual cues. Previous research has indicated that, in combination with lifelong experience with musical stream segregation, musicians have better auditory perceptual skills and working memory. It was hypothesized that musicians would benefit from these factors and perform better on speech perception in noise than age-matched nonmusician controls. The performance of 16 musicians and 15 nonmusicians was compared on clinical measures of speech perception in noise: QuickSIN and the Hearing-in-Noise Test (HINT). Working memory capacity and frequency discrimination were also assessed. All participants had normal hearing and were between the ages of 19 and 31 yr. To be categorized as a musician, participants needed to have started musical training before the age of 7 yr, have 10 or more years of consistent musical experience, and have practiced more than three times weekly within the 3 yr before study enrollment. Nonmusicians were categorized by the failure to meet the musician criteria, along with not having received musical training within the 7 yr before the study. Musicians outperformed the nonmusicians on both QuickSIN and HINT, in addition to having more fine-grained frequency discrimination and better working memory. Years of consistent musical practice correlated positively with QuickSIN, working memory, and frequency discrimination but not HINT. The results also indicate that working memory and frequency discrimination are more important for QuickSIN than for HINT. Musical experience appears to enhance the ability to hear speech in challenging listening environments. Large group differences were found for QuickSIN, and the results also suggest that this enhancement is derived in part from musicians' enhanced working memory and frequency discrimination. For HINT…

  4. Spectro-Temporal Analysis of Speech for Spanish Phoneme Recognition

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara; Serrano, Javier; Carrabina, Jordi

    2012-01-01

    State-of-the-art automatic speech recognition (ASR) systems mostly use Mel-frequency cepstral coefficients (MFCC) as acoustic features. In this paper, we propose a new discriminative analysis of acoustic features, based on spectrogram analysis. Both spectral and temporal variations of the speech signal … and enhanced by means of bi-cubic interpolation. An adaptive strategy is proposed for the size of the patches over time, to construct unique-length vectors for different phonemes. These vectors are classified based on K-nearest neighbor (KNN), linear discriminant analysis (LDA) and reduced-rank LDA (RLDA…
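    A rough sketch of the patch-resampling idea — variable-width spectrogram patches resampled to a fixed size and classified with KNN. Linear interpolation stands in for the paper's bi-cubic interpolation, and the signals are synthetic tones rather than phonemes; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def spectrogram(signal, win=64, hop=32):
    """Magnitude STFT with a Hann window (a minimal front end; the
    paper's exact spectrogram settings are not specified here)."""
    w = np.hanning(win)
    frames = [signal[i:i + win] * w
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T  # shape: (freq, time)

def fixed_length_vector(spec, shape=(16, 16)):
    """Resample a variable-width spectrogram patch to a fixed size, so
    every token yields a vector of the same length (linear interpolation
    here; the paper uses bi-cubic)."""
    rows = np.linspace(0, spec.shape[0] - 1, shape[0])
    cols = np.linspace(0, spec.shape[1] - 1, shape[1])
    tmp = np.array([np.interp(rows, np.arange(spec.shape[0]), spec[:, j])
                    for j in range(spec.shape[1])]).T       # (16, time)
    out = np.array([np.interp(cols, np.arange(spec.shape[1]), tmp[i])
                    for i in range(shape[0])])              # (16, 16)
    return out.ravel()

def knn_predict(x, X, y, k=3):
    d = np.linalg.norm(X - x, axis=1)
    return np.bincount(y[np.argsort(d)[:k]]).argmax()

# Toy "phonemes": two tones of different frequency and varying duration
# (sampling rate of 1000 Hz implied by the time vector).
def tone(freq, n):
    t = np.arange(n) / 1000.0
    return np.sin(2 * np.pi * freq * t) + 0.05 * rng.normal(size=n)

X = np.array([fixed_length_vector(spectrogram(tone(f, int(rng.integers(400, 800)))))
              for f in [100] * 8 + [300] * 8])
y = np.array([0] * 8 + [1] * 8)
pred = knn_predict(fixed_length_vector(spectrogram(tone(300, 555))), X, y)
print(pred)
```

    The resampling step is what makes KNN applicable: tokens of different durations all land in the same 256-dimensional space.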

  5. Speech and Language Impairments

    Science.gov (United States)

    ... impairment. Many children are identified as having a speech or language impairment after they enter the public school system. A teacher may notice difficulties in a child’s speech or communication skills and refer the child for ...

  6. Trainable unit selection speech synthesis under statistical framework

    Institute of Scientific and Technical Information of China (English)

    WANG RenHua; DAI LiRong; LING ZhenHua; HU Yu

    2009-01-01

    This paper proposes a trainable unit selection speech synthesis method based on a statistical modeling framework. At the training stage, acoustic features are extracted from the training database and statistical models are estimated for each feature. During synthesis, the optimal candidate unit sequence is searched out from the database following the maximum likelihood criterion derived from the trained models. Finally, the waveforms of the optimal candidate units are concatenated to produce synthetic speech. Experimental results show that this method improves the automation of system construction and the naturalness of synthetic speech compared with the conventional unit selection synthesis method. Furthermore, this paper presents a minimum unit selection error criterion for model training, tailored to the characteristics of unit selection speech synthesis, and adopts discriminative training for model parameter estimation. This criterion achieves full automation of system construction and further improves the naturalness of synthetic speech.
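    The search for the optimal candidate unit sequence is, in essence, a Viterbi-style dynamic program over candidate units at each position. A generic sketch of that search (the costs play the role of negative log-likelihoods under the trained models; the numbers and cost function are invented, not the paper's system):

```python
import numpy as np

def select_units(target_costs, concat_cost):
    """Viterbi-style search for the unit sequence minimizing total
    target cost + concatenation cost.

    target_costs: list over positions; each entry is an array of
                  per-candidate costs at that position.
    concat_cost:  function(prev_idx, next_idx) -> join cost.
    """
    n = len(target_costs)
    best = [target_costs[0]]  # best[t][j]: min cost ending in unit j at t
    back = []                 # backpointers for traceback
    for t in range(1, n):
        prev = best[-1]
        cur = np.empty(len(target_costs[t]))
        arg = np.empty(len(target_costs[t]), dtype=int)
        for j, tc in enumerate(target_costs[t]):
            total = prev + np.array([concat_cost(i, j) for i in range(len(prev))])
            arg[j] = int(np.argmin(total))
            cur[j] = total[arg[j]] + tc
        best.append(cur)
        back.append(arg)
    # Trace back the optimal candidate index at each position.
    path = [int(np.argmin(best[-1]))]
    for arg in reversed(back):
        path.append(int(arg[path[-1]]))
    return path[::-1]

# Three positions, three candidate units each; the join penalty favours
# keeping the same candidate index across consecutive positions.
costs = [np.array([0.1, 1.0, 1.0]),
         np.array([1.0, 0.2, 1.0]),
         np.array([1.0, 0.3, 1.0])]
path = select_units(costs, lambda i, j: 0.0 if i == j else 0.5)
print(path)  # → [0, 1, 1]
```

    Note how the join penalty makes the search prefer a smooth sequence (sticking with candidate 1) over greedily taking the cheapest unit at every position in isolation.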

  7. Novel Extended Phonemic Set for Mandarin Continuous Speech Recognition

    Institute of Scientific and Technical Information of China (English)

    谢湘; 匡镜明

    2003-01-01

    An extended phonemic set for Mandarin, from the perspective of speech recognition, is proposed. This set absorbs most principles of other existing phonemic sets for Mandarin, like Worldbet and SAMPA-C, and also takes advantage of practical experience from speech recognition research to increase the discriminability between word models. Experiments in speaker-independent continuous speech recognition show that hidden Markov models defined by this phonemic set perform better than those based on the initial/final units of Mandarin, and have a very compact size.

  8. Kalispel Non-Native Fish Suppression Project 2007 Annual Report.

    Energy Technology Data Exchange (ETDEWEB)

    Wingert, Michele; Andersen, Todd [Kalispel Natural Resource Department

    2008-11-18

    Non-native salmonids are impacting native salmonid populations throughout the Pend Oreille Subbasin. Competition, hybridization, and predation by non-native fish have been identified as primary factors in the decline of some native bull trout (Salvelinus confluentus) and westslope cutthroat trout (Oncorhynchus clarki lewisi) populations. In 2007, the Kalispel Natural Resource Department (KNRD) initiated the Kalispel Nonnative Fish Suppression Project. The goal of this project is to implement actions to suppress or eradicate non-native fish in areas where native populations are declining or have been extirpated. These projects have previously been identified as critical to recovering native bull trout and westslope cutthroat trout (WCT). Lower Graham Creek was invaded by non-native rainbow (Oncorhynchus mykiss) and brook trout (Salvelinus fontinalis) after a small dam failed in 1991. By 2003, no genetically pure WCT remained in the lower 700 m of Graham Creek. Further invasion upstream is currently precluded by a relatively short section of steep, cascade-pool stepped channel section that will likely be breached in the near future. In 2008, a fish management structure (barrier) was constructed at the mouth of Graham Creek to preclude further invasion of non-native fish into Graham Creek. The construction of the barrier was preceded by intensive electrofishing in the lower 700 m to remove and relocate all captured fish. Westslope cutthroat trout have recently been extirpated in Cee Cee Ah Creek due to displacement by brook trout. We propose treating Cee Cee Ah Creek with a piscicide to eradicate brook trout. Once eradication is complete, cutthroat trout will be translocated from nearby watersheds. In 2004, the Washington Department of Fish and Wildlife (WDFW) proposed an antimycin treatment within the subbasin; the project encountered significant public opposition and was eventually abandoned. However, over the course of planning this 2004 project, little public

  9. Assessing speech perception in Swedish school-aged children: preliminary data on the Listen-Say test.

    Science.gov (United States)

    Nakeva von Mentzer, Cecilia; Sundström, Martina; Enqvist, Karin; Hällgren, Mathias

    2017-10-10

    To meet the need for a linguistic speech perception test in Swedish, the 'Listen-Say test' was developed. Minimal word pairs were used as speech material to assess seven phonetic contrasts in two auditory backgrounds. In the present study, children's speech discrimination skills in quiet and in a four-talker (4T) speech background were examined. Associations with lexical-access skills and academic achievement were explored. The study included 27 school children 7-9 years of age. Overall, the children discriminated phonetic contrasts well in both conditions (quiet: Mdn 95% correct; 4T speech: Mdn 91% correct). A significant effect of the 4T speech background was evident in three of the contrasts, connected to place of articulation, voicing and syllable complexity. Reaction times for correctly identified target words were significantly longer in the quiet condition, possibly reflecting a need for further balancing of the test order. Overall speech discrimination accuracy was moderately to highly correlated with lexical-access ability. Children identified by their teacher as having high concentration ability had the highest speech discrimination scores in both conditions, followed by children identified as having high reading ability. The first wave of data collection with the Listen-Say test indicates that the test is sensitive to predicted perceptual difficulties with phonetic contrasts, particularly in noise. The clinical benefit of a procedure that takes speech discrimination, lexical-access ability and academic achievement into account is discussed, as are issues for further test refinement.

  10. Speech 7 through 12.

    Science.gov (United States)

    Nederland Independent School District, TX.

    GRADES OR AGES: Grades 7 through 12. SUBJECT MATTER: Speech. ORGANIZATION AND PHYSICAL APPEARANCE: Following the foreword, philosophy, and objectives, this guide presents a speech curriculum. The curriculum covers junior high and Speech I, II, III (senior high). Thirteen units of study are presented for junior high; each unit is divided into…

  11. Audibility of American English vowels produced by English-, Chinese-, and Korean-native speakers in long-term speech-shaped noise.

    Science.gov (United States)

    Liu, Chang; Jin, Su-Hyun

    2011-12-01

    The purpose of this study was to evaluate whether there were significant differences in audibility of American English vowels in noise produced by non-native and native speakers. Detection thresholds for 12 English vowels with equalized durations of 170 ms produced by 10 English-, Chinese- and Korean-native speakers were measured for young normal-hearing English-native listeners in the presence of speech-shaped noise presented at 70 dB SPL. Similar patterns of vowel detection thresholds as a function of the vowel category were found for native and non-native speakers, with the highest thresholds for /u/ and /ʊ/ and lowest thresholds for /i/ and /e/. In addition, vowel detection thresholds for non-native speakers were significantly lower and showed greater speaker variability than those for native speakers. Thresholds for vowel detection predicted from an excitation-pattern model corresponded well to behavioral thresholds, implying that vowel detection was primarily determined by the vowel spectrum regardless of speaker language background. Both behavioral and predicted thresholds showed that vowel audibility was similar or even better for non-native speakers than for native speakers, indicating that vowel audibility did not account for non-native speakers' lower-than-native intelligibility in noise. Effects of non-native speakers' English proficiency level on vowel audibility are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Fighting discrimination.

    Science.gov (United States)

    Wientjens, Wim; Cairns, Douglas

    2012-10-01

    In the fight against discrimination, the IDF launched the first ever International Charter of Rights and Responsibilities of People with Diabetes in 2011: a balance between rights and duties to optimize health and quality of life, to enable as normal a life as possible and to reduce/eliminate the barriers which deny realization of full potential as members of society. It is extremely frustrating to suffer blanket bans and many examples exist, including insurance, driving licenses, getting a job, keeping a job and family affairs. In this article, an example is given of how pilots with insulin treated diabetes are allowed to fly by taking the responsibility of using special blood glucose monitoring protocols. At this time the systems in the countries allowing flying for pilots with insulin treated diabetes are applauded, particularly the USA for private flying, and Canada for commercial flying. Encouraging developments may be underway in the UK for commercial flying and, if this materializes, could be used as an example for other aviation authorities to help adopt similar protocols. However, new restrictions implemented by the new European Aviation Authority take existing privileges away for National Private Pilot Licence holders with insulin treated diabetes in the UK.

  13. Pitch discrimination associated with phonological awareness: Evidence from congenital amusia.

    Science.gov (United States)

    Sun, Yanan; Lu, Xuejing; Ho, Hao Tam; Thompson, William Forde

    2017-03-13

    Research suggests that musical skills are associated with phonological abilities. To further investigate this association, we examined whether phonological impairments are evident in individuals with poor music abilities. Twenty individuals with congenital amusia and 20 matched controls were assessed on a pure-tone pitch discrimination task, a rhythm discrimination task, and four phonological tests. Amusic participants showed deficits in discriminating pitch and discriminating rhythmic patterns that involve a regular beat. At a group level, these individuals performed similarly to controls on all phonological tests. However, eight amusics with severe pitch impairment, as identified by the pitch discrimination task, exhibited significantly worse performance than all other participants in phonological awareness. A hierarchical regression analysis indicated that pitch discrimination thresholds predicted phonological awareness beyond that predicted by phonological short-term memory and rhythm discrimination. In contrast, our rhythm discrimination task did not predict phonological awareness beyond that predicted by pitch discrimination thresholds. These findings suggest that accurate pitch discrimination is critical for phonological processing. We propose that deficits in early-stage pitch discrimination may be associated with impaired phonological awareness and we discuss the shared role of pitch discrimination for processing music and speech.
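    Hierarchical regression of the kind described asks whether one predictor explains variance beyond a baseline set of predictors, i.e. whether adding it increases R². A minimal numpy sketch on synthetic data shaped like the study's design (all variable names and coefficients are invented for illustration):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid.var() / y.var()

# Synthetic data in the spirit of the design: phonological awareness is
# driven mainly by pitch thresholds, with memory and rhythm as weaker,
# correlated predictors.
rng = np.random.default_rng(3)
n = 200
pitch = rng.normal(size=n)
memory = 0.3 * pitch + rng.normal(size=n)
rhythm = 0.3 * pitch + rng.normal(size=n)
awareness = 0.8 * pitch + 0.1 * memory + rng.normal(scale=0.5, size=n)

# Step 1: baseline model (memory + rhythm); step 2: add pitch.
base = r_squared(np.column_stack([memory, rhythm]), awareness)
full = r_squared(np.column_stack([memory, rhythm, pitch]), awareness)
delta = full - base  # variance explained by pitch beyond the baseline
print(delta > 0)
```

    A positive `delta` is the "predicts beyond" result reported in the abstract; in practice its significance would be assessed with an F-test on the R² change.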

  14. Morphological change and phenotypic plasticity in native and non-native pumpkinseed sunfish in response to competition

    Science.gov (United States)

    Yavno, Stan; Rooke, Anna C.; Fox, Michael G.

    2014-06-01

    Non-indigenous species are often exposed to ecosystems with unfamiliar species, and organisms that exhibit a high degree of phenotypic plasticity may be better able to contend with the novel competitors that they may encounter during range expansion. In this study, differences in morphological plasticity were investigated using young-of-year pumpkinseed sunfish (Lepomis gibbosus) from native North American and non-native European populations. Two Canadian populations, isolated from bluegill sunfish (L. macrochirus) since the last glaciation, and two Spanish populations, isolated from bluegill since their introduction in Europe, were reared in a common environment using artificial enclosures. Fish were subjected to allopatric (without bluegill) or sympatric (with bluegill) conditions, and differences in plasticity were tested through a MANOVA of discriminant function scores. All pumpkinseed populations exhibited dietary shifts towards more benthivorous prey when held with bluegill. Differences between North American and European populations were observed in body dimensions, gill raker length and pelvic fin position. Sympatric treatments induced an increase in body width and a decrease in caudal peduncle length in native fish; non-native fish exhibited longer caudal peduncle lengths when held in sympatry with bluegill. Overall, phenotypic plasticity influenced morphological divergence less than genetic factors, regardless of population. Contrary to predictions, pumpkinseeds from Europe exhibited lower levels of phenotypic plasticity than Canadian populations, suggesting that European pumpkinseeds are more canalized than their North American counterparts.

  15. Positive Effects of Nonnative Invasive Phragmites australis on Larval Bullfrogs

    OpenAIRE

    Mary Alta Rogalski; David Kiernan Skelly

    2012-01-01

    BACKGROUND: Nonnative Phragmites australis (common reed) is one of the most intensively researched and managed invasive plant species in the United States, yet as with many invasive species, our ability to predict, control or understand the consequences of invasions is limited. Rapid spread of dense Phragmites monocultures has prompted efforts to limit its expansion and remove existing stands. Motivation for large-scale Phragmites eradication programs includes purported negative impacts on na...

  16. Discrimination and Anti-discrimination in Denmark

    DEFF Research Database (Denmark)

    Olsen, Tore Vincents

    The purpose of this report is to describe and analyse Danish anti-discrimination legislation and the debate about discrimination in Denmark in order to identify present and future legal challenges. The main focus is the implementation of the EU anti-discrimination directives in Danish law...

  18. Drivers of Non-Native Aquatic Species Invasions across the ...

    Science.gov (United States)

    Background/Question/Methods Mapping the geographic distribution of non-native aquatic species is a critically important precursor to understanding the anthropogenic and environmental factors that drive freshwater biological invasions. Such efforts are often limited to local scales and/or to a single taxon, missing the opportunity to observe and understand the drivers of macroscale invasion patterns at sub-continental or continental scales. Here we map the distribution of exotic freshwater species richness across the continental United States using publicly accessible species occurrence data (e.g., GBIF) and investigate the role of human activity in driving macroscale patterns of aquatic invasion. Using a dasymetric model of human population density and a spatially explicit model of recreational freshwater fishing demand, we analyzed the effect of these metrics of human influence on non-native aquatic species richness at the watershed scale, while controlling for spatial and sampling bias. We also assessed the effects that a temporal mismatch between occurrence data (collected since 1815) and cross-sectional predictors (developed using 2010 data) may have on model fit. Results/Conclusions Our results indicated that non-native aquatic species richness exhibits a highly patchy distribution, with hotspots in the Northeast, Great Lakes, Florida, and human population centers on the Pacific coast. These richness patterns are correlated with population density, but are m

  19. The Role of Corticostriatal Systems in Speech Category Learning.

    Science.gov (United States)

    Yi, Han-Gyol; Maddox, W Todd; Mumford, Jeanette A; Chandrasekaran, Bharath

    2016-04-01

    One of the most difficult category learning problems for humans is learning nonnative speech categories. While feedback-based category training can enhance speech learning, the mechanisms underlying these benefits are unclear. In this functional magnetic resonance imaging study, we investigated neural and computational mechanisms underlying feedback-dependent speech category learning in adults. Positive feedback activated a large corticostriatal network including the dorsolateral prefrontal cortex, inferior parietal lobule, middle temporal gyrus, caudate, putamen, and the ventral striatum. Successful learning was contingent upon the activity of domain-general category learning systems: the fast-learning reflective system, involving the dorsolateral prefrontal cortex that develops and tests explicit rules based on the feedback content, and the slow-learning reflexive system, involving the putamen in which the stimuli are implicitly associated with category responses based on the reward value in feedback. Computational modeling of response strategies revealed significant use of reflective strategies early in training and greater use of reflexive strategies later in training. Reflexive strategy use was associated with increased activation in the putamen. Our results demonstrate a critical role for the reflexive corticostriatal learning system as a function of response strategy and proficiency during speech category learning.

  20. Speech in spinocerebellar ataxia.

    Science.gov (United States)

    Schalling, Ellika; Hartelius, Lena

    2013-12-01

    Spinocerebellar ataxias (SCAs) are a heterogeneous group of autosomal dominant cerebellar ataxias clinically characterized by progressive ataxia, dysarthria and a range of other concomitant neurological symptoms. Only a few studies include detailed characterization of speech symptoms in SCA. Speech symptoms in SCA resemble ataxic dysarthria, but symptoms related to phonation may be more prominent. One study to date has shown an association between differences in speech and voice symptoms and genotype. Further studies of speech and voice phenotypes are warranted, as they could aid clinical diagnosis. In addition, instrumental speech analysis has been demonstrated to be a reliable measure that may be used to monitor disease progression or therapy outcomes in possible future pharmacological treatments. Intervention by speech and language pathologists should go beyond assessment. Clinical guidelines for management of speech, communication and swallowing need to be developed for individuals with progressive cerebellar ataxia.

  1. High-frequency energy in singing and speech

    Science.gov (United States)

    Monson, Brian Bruce

    While human speech and the human voice generate acoustical energy up to (and beyond) 20 kHz, the energy above approximately 5 kHz has been largely neglected. Evidence is accruing that this high-frequency energy contains perceptual information relevant to speech and voice, including percepts of quality, localization, and intelligibility. The present research was an initial step in the long-range goal of characterizing high-frequency energy in singing voice and speech, with particular regard for its perceptual role and its potential for modification during voice and speech production. In this study, a database of high-fidelity recordings of talkers was created and used for a broad acoustical analysis and general characterization of high-frequency energy, as well as specific characterization of phoneme category, voice and speech intensity level, and mode of production (speech versus singing) by high-frequency energy content. Directionality of radiation of high-frequency energy from the mouth was also examined. The recordings were used for perceptual experiments wherein listeners were asked to discriminate between speech and voice samples that differed only in high-frequency energy content. Listeners were also subjected to gender discrimination tasks, mode-of-production discrimination tasks, and transcription tasks with samples of speech and singing that contained only high-frequency content. The combination of these experiments has revealed that (1) human listeners are able to detect very subtle level changes in high-frequency energy, and (2) human listeners are able to extract significant perceptual information from high-frequency energy.
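
    The transcription and discrimination tasks above used samples containing only high-frequency content. A crude way to produce such stimuli is an FFT-based high-pass filter; the sketch below uses a 5 kHz cutoff taken from the abstract's "above approximately 5 kHz", which is an assumption rather than the study's exact band edge.

```python
import numpy as np

def highpass_fft(signal, sr, cutoff_hz=5000.0):
    """Keep only spectral content at or above cutoff_hz (brick-wall FFT filter).

    A brick-wall filter causes ringing artifacts; a real stimulus pipeline
    would use a proper filter design, but this suffices to illustrate
    isolating the high-frequency energy band."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    spec[freqs < cutoff_hz] = 0.0
    return np.fft.irfft(spec, n=len(signal))

# Demo: a 1 kHz tone is removed, an 8 kHz tone passes through unchanged.
sr = 44100
t = np.arange(sr) / sr
low_tone = np.sin(2 * np.pi * 1000 * t)
high_tone = np.sin(2 * np.pi * 8000 * t)
print(np.max(np.abs(highpass_fft(low_tone, sr))))   # near 0: below the cutoff
print(np.max(np.abs(highpass_fft(high_tone, sr))))  # near 1: above the cutoff
```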

  2. The Interlanguage Speech Intelligibility Benefit as Bias Toward Native-Language Phonology.

    Science.gov (United States)

    Wang, Hongyan; van Heuven, Vincent J

    2015-12-01

    Two hypotheses have been advanced in the recent literature with respect to the so-called Interlanguage Speech Intelligibility Benefit (ISIB): a nonnative speaker will be better understood by another nonnative listener than by a native speaker of the target language (a) only when the nonnatives share the same native language (matched interlanguage) or (b) even when the nonnatives have different mother tongues (non-matched interlanguage). Based on a survey of published experimental materials, the present article will demonstrate that both the restricted (a) and the generalized (b) hypotheses are false when the ISIB effect is evaluated in terms of absolute intelligibility scores. We will then propose a simple way to compute a relative measure for the ISIB (R-ISIB), which we claim is a more insightful way of evaluating the interlanguage benefit, and test the hypotheses in relative (R-ISIB) terms on the same literature data. We then find that our R-ISIB measure supports only the more restricted hypothesis (a) while rejecting the more general hypothesis (b). This finding shows that the native language shared by the interactants biases the listener toward interpreting sounds in terms of the phonology of the shared mother tongue.

  3. Malaysian University Students’ Attitudes towards Six Varieties of Accented Speech in English

    Directory of Open Access Journals (Sweden)

    Zainab Thamer Ahmed

    2014-10-01

    Full Text Available Previous language attitude studies indicated that in many countries all over the world, English language learners perceived native accents, either American or British, more positively than non-native accents such as Japanese, Korean, and Austrian accents. However, in Malaysia it is still unclear which accent Malaysian learners of English tend to perceive more positively (Pillai, 2009). The verbal-guise technique and an accent recognition item were adopted as indirect and direct instruments for gathering data to clarify this question. The sample comprised 120 Malaysian university students, who were immersed in several speech accent situations to elicit feedback on their perceptions. Essentially, two research questions are addressed: (1) What are Malaysian university students' attitudes toward native and non-native English accents? (2) How familiar are students with these accents? The results indicated that the students had a bias towards the in-group accent, meaning that they evaluated non-native lecturers' accents more positively. These results support the 'social identity theory', consistent with many previous language attitude studies of this nature. The Malaysian students were able to distinguish between native and non-native accents, although there was much confusion between British and American accents.

  4. Digital speech processing using Matlab

    CERN Document Server

    Gopi, E S

    2014-01-01

    Digital Speech Processing Using Matlab deals with digital speech pattern recognition, speech production model, speech feature extraction, and speech compression. The book is written in a manner that is suitable for beginners pursuing basic research in digital speech processing. Matlab illustrations are provided for most topics to enable better understanding of concepts. This book also deals with the basic pattern recognition techniques (illustrated with speech signals using Matlab) such as PCA, LDA, ICA, SVM, HMM, GMM, BPN, and KSOM.

  5. Cortical encoding and neurophysiological tracking of intensity and pitch cues signaling English stress patterns in native and nonnative speakers.

    Science.gov (United States)

    Chung, Wei-Lun; Bidelman, Gavin M

    2016-01-01

    We examined cross-language differences in neural encoding and tracking of intensity and pitch cues signaling English stress patterns. Auditory mismatch negativities (MMNs) were recorded in English and Mandarin listeners in response to contrastive English pseudowords whose primary stress occurred either on the first or second syllable (i.e., "nocTICity" vs. "NOCticity"). The contrastive syllable stress elicited two consecutive MMNs in both language groups, but English speakers demonstrated larger responses to stress patterns than Mandarin speakers. Correlations between the amplitude of ERPs and continuous changes in the running intensity and pitch of speech assessed how well each language group's brain activity tracked these salient acoustic features of lexical stress. We found that English speakers' neural responses tracked intensity changes in speech more closely than Mandarin speakers (higher brain-acoustic correlation). Findings demonstrate more robust and precise processing of English stress (intensity) patterns in early auditory cortical responses of native relative to nonnative speakers. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. A note on the acoustic-phonetic characteristics of non-native English vowels produced in noise

    Science.gov (United States)

    Li, Chi-Nin; Munro, Murray J.

    2003-10-01

    The Lombard reflex occurs when people unconsciously raise their vocal levels in the presence of loud background noise. Previous work has established that utterances produced in noisy environments exhibit increases in vowel duration and fundamental frequency (F0), and a shift in formant center frequencies for F1 and F2. Most studies of the Lombard reflex have been conducted with native speakers; research with second-language speakers is much less common. The present study examined the effects of the Lombard reflex on foreign-accented English vowel productions. Seven female Cantonese speakers and a comparison group of English speakers were recorded producing three vowels (/i u a/) in /bVt/ context in quiet and in 70 dB of masking noise. Vowel durations, F0, and the first two formants for each of the three vowels were measured. Analyses revealed that vowel durations and F0 were greater in the vowels produced in noise than in those produced in quiet in most cases. First formant frequencies (F1), but not F2, were consistently higher in Lombard speech than in normal speech. The findings suggest that non-native English speakers exhibit acoustic-phonetic patterns similar to those of native speakers when producing English vowels in noisy conditions.

  7. Conditional advice and inducements: are readers sensitive to implicit speech acts during comprehension?

    Science.gov (United States)

    Haigh, Matthew; Stewart, Andrew J; Wood, Jeffrey S; Connell, Louise

    2011-03-01

    Conditionals can implicitly convey a range of speech acts including promises, tips, threats and warnings. These are traditionally divided into the broader categories of advice (tips and warnings) and inducements (promises and threats). One consequence of this distinction is that speech acts from within the same category should be harder to differentiate than those from different categories. We examined this in two self-paced reading experiments. Experiment 1 revealed a rapid processing penalty when inducements (promises) and advice (tips) were anaphorically referenced using a mismatching speech act. In Experiment 2 a delayed penalty was observed when a speech act (promise or threat) was referenced by a mismatching speech act from the same category of inducements. This suggests that speech acts from the same category are harder to discriminate than those from different categories. Our findings not only support a semantic distinction between speech act categories, but also reveal pragmatic differences within categories. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Processing changes when listening to foreign-accented speech

    Directory of Open Access Journals (Sweden)

    Carlos eRomero-Rivas

    2015-03-01

    Full Text Available This study investigates the mechanisms responsible for fast changes in processing foreign-accented speech. Event Related brain Potentials (ERPs) were obtained while native speakers of Spanish listened to native and foreign-accented speakers of Spanish. We observed a less positive P200 component for foreign-accented speech relative to native speech comprehension. This suggests that the extraction of spectral information and other important acoustic features was hampered during foreign-accented speech comprehension. However, the amplitude of the N400 component for foreign-accented speech comprehension decreased across the experiment, suggesting the use of a higher level, lexical mechanism. Furthermore, during native speech comprehension, semantic violations in the critical words elicited an N400 effect followed by a late positivity. During foreign-accented speech comprehension, semantic violations only elicited an N400 effect. Overall, our results suggest that, despite a lack of improvement in phonetic discrimination, native listeners experience changes at lexical-semantic levels of processing after brief exposure to foreign-accented speech. Moreover, these results suggest that lexical access, semantic integration and linguistic re-analysis processes are permeable to external factors, such as the accent of the speaker.

  9. Processing changes when listening to foreign-accented speech

    Science.gov (United States)

    Romero-Rivas, Carlos; Martin, Clara D.; Costa, Albert

    2015-01-01

    This study investigates the mechanisms responsible for fast changes in processing foreign-accented speech. Event Related brain Potentials (ERPs) were obtained while native speakers of Spanish listened to native and foreign-accented speakers of Spanish. We observed a less positive P200 component for foreign-accented speech relative to native speech comprehension. This suggests that the extraction of spectral information and other important acoustic features was hampered during foreign-accented speech comprehension. However, the amplitude of the N400 component for foreign-accented speech comprehension decreased across the experiment, suggesting the use of a higher level, lexical mechanism. Furthermore, during native speech comprehension, semantic violations in the critical words elicited an N400 effect followed by a late positivity. During foreign-accented speech comprehension, semantic violations only elicited an N400 effect. Overall, our results suggest that, despite a lack of improvement in phonetic discrimination, native listeners experience changes at lexical-semantic levels of processing after brief exposure to foreign-accented speech. Moreover, these results suggest that lexical access, semantic integration and linguistic re-analysis processes are permeable to external factors, such as the accent of the speaker. PMID:25859209

  10. Impacts of fire on non-native plant recruitment in black spruce forests of interior Alaska

    Science.gov (United States)

    Conway, Alexandra J.; Jean, Mélanie

    2017-01-01

    Climate change is expected to increase the extent and severity of wildfires throughout the boreal forest. Historically, black spruce (Picea mariana (Mill.) B.S.P.) forests in interior Alaska have been relatively free of non-native species, but the compounding effects of climate change and an altered fire regime could facilitate the expansion of non-native plants. We tested the effects of wildfire on non-native plant colonization by conducting a seeding experiment of non-native plants on different substrate types in a burned black spruce forest, and surveying for non-native plants in recently burned and mature black spruce forests. We found few non-native plants in burned or mature forests, despite their high roadside presence, although invasion of some burned sites by dandelion (Taraxacum officinale) indicated the potential for non-native plants to move into burned forest. Experimental germination rates were significantly higher on mineral soil compared to organic soil, indicating that severe fires that combust much of the organic layer could increase the potential for non-native plant colonization. We conclude that fire disturbances that remove the organic layer could facilitate the invasion of non-native plants providing there is a viable seed source and dispersal vector. PMID:28158284

  11. Prediction of Speech Recognition in Cochlear Implant Users by Adapting Auditory Models to Psychophysical Data

    Directory of Open Access Journals (Sweden)

    Svante Stadler

    2009-01-01

    Full Text Available Users of cochlear implants (CIs) vary widely in their ability to recognize speech in noisy conditions, and many factors may influence their performance. We have investigated to what degree performance can be explained by the users' ability to discriminate spectral shapes. A speech recognition task was simulated using both a simple and a complex model of CI hearing. The models were individualized by adapting their parameters to fit the results of a spectral discrimination test. The predicted speech recognition performance was compared to experimental results, and the two were significantly correlated. The presented framework may be used to simulate the effects of changing the CI encoding strategy.

  12. Language-experience facilitates discrimination of /d-th/ in monolingual and bilingual acquisition of English.

    Science.gov (United States)

    Sundara, Megha; Polka, Linda; Genesee, Fred

    2006-06-01

    To trace how age and language experience shape the discrimination of native and non-native phonetic contrasts, we compared 4-year-olds learning English, French, or both, as well as simultaneous bilingual adults, on their ability to discriminate the English /d-th/ contrast. Findings show that the ability to discriminate the native English contrast improved with age. However, in the absence of experience with this contrast, the discrimination of French children and adults remained unchanged during development. Furthermore, although simultaneous bilingual and monolingual English adults were comparable, children exposed to both English and French were poorer at discriminating this contrast than monolingual English-learning 4-year-olds. Thus, language experience facilitates perception of the English /d-th/ contrast, and this facilitation occurs later in development when English and French are acquired simultaneously. The difference between bilingual and monolingual acquisition has implications for language organization in children with simultaneous exposure.

  13. Speech-to-Speech Relay Service

    Science.gov (United States)

    ... to make an STS call. You are then connected to an STS CA who will repeat your spoken words, making the spoken words clear to the other party. Persons with speech disabilities may also receive STS calls. The calling ...

  14. The influence of phonetic dimensions on aphasic speech perception

    NARCIS (Netherlands)

    de Kok, D.A.; Jonkers, R.; Bastiaanse, Y.R.M.

    2010-01-01

    Individuals with aphasia have more problems detecting small differences between speech sounds than larger ones. This paper reports how phonemic processing is impaired and how this is influenced by speechreading. A non-word discrimination task was carried out with 'audiovisual', 'auditory only' and

  15. The Influence of Phonetic Dimensions on Aphasic Speech Perception

    Science.gov (United States)

    Hessler, Dorte; Jonkers, Roel; Bastiaanse, Roelien

    2010-01-01

    Individuals with aphasia have more problems detecting small differences between speech sounds than larger ones. This paper reports how phonemic processing is impaired and how this is influenced by speechreading. A non-word discrimination task was carried out with "audiovisual", "auditory only" and "visual only" stimulus display. Subjects had to…

  16. Hate Speech: Political Correctness v. the First Amendment.

    Science.gov (United States)

    Stern, Ralph D.

    Both freedom of speech and freedom from discrimination are generally accepted expressions of public policy. The application of these policies, however, leads to conflicts that pose both practical and conceptual problems. This paper presents a review of court litigation and addresses the question of how to reconcile the conflicting societal goals…

  17. Influence of musical training on perception of L2 speech

    NARCIS (Netherlands)

    Sadakata, M.; Zanden, L.D.T. van der; Sekiyama, K.

    2010-01-01

    The current study reports specific cases in which a positive transfer of perceptual ability from the music domain to the language domain occurs. We tested whether musical training enhances discrimination and identification performance of L2 speech sounds (timing features, nasal consonants and vowels

  19. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    Directory of Open Access Journals (Sweden)

    Antje eHeinrich

    2015-06-01

    Full Text Available Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild SNHL were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise), to high (sentence perception in modulated noise); cognitive tests of attention, memory, and nonverbal IQ; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that auditory environments pose on

  20. Exploration of Speech Planning and Producing by Speech Error Analysis

    Institute of Scientific and Technical Information of China (English)

    冷卉

    2012-01-01

    Speech error analysis is an indirect way to investigate the processes of speech planning and production. From speech errors that people make in daily life, linguists and learners can uncover these planning and production processes more easily and clearly.

  1. The Badness of Discrimination

    DEFF Research Database (Denmark)

    Lippert-Rasmussen, Kasper

    2006-01-01

    The most blatant forms of discrimination are morally outrageous and very obviously so; but the nature and boundaries of discrimination are more controversial, and it is not clear whether all forms of discrimination are morally bad, nor why objectionable cases of discrimination are bad. In this paper I address these issues. First, I offer a taxonomy of discrimination. I then argue that discrimination is bad, when it is, because it harms people. Finally, I criticize a rival, disrespect-based account according to which discrimination is bad regardless of whether it causes harm.

  2. Indirect Speech Acts

    Institute of Scientific and Technical Information of China (English)

    李威

    2001-01-01

    Indirect speech acts are frequently used in verbal communication, and interpreting them correctly is important for developing students' communicative competence. This paper therefore presents Searle's account of indirect speech acts and explores how indirect speech acts are interpreted in accordance with two influential theories. It consists of four parts. Part one gives a general introduction to the notion of speech act theory. Part two elaborates on the conception of indirect speech acts proposed by Searle and his supplement and development of the theory of illocutionary acts. Part three deals with the interpretation of indirect speech acts. Part four draws implications from the previous study and serves as the conclusion.

  3. Speech Perception Deficits in Poor Readers: A Reply to Denenberg's Critique.

    Science.gov (United States)

    Studdert-Kennedy, Michael; Mody, Maria; Brady, Susan

    2000-01-01

    This rejoinder to a critique of the authors' research on speech perception deficits in poor readers answers the specific criticisms and reaffirms their conclusion that the difficulty some poor readers have with rapid /ba/-/da/ discrimination does not stem from difficulty in discriminating the rapid spectral transitions at stop-vowel syllable…

  4. Online Speech/Music Segmentation Based on the Variance Mean of Filter Bank Energy

    Science.gov (United States)

    Kos, Marko; Grašič, Matej; Kačič, Zdravko

    2009-12-01

    This paper presents a novel feature for online speech/music segmentation based on the variance mean of filter bank energy (VMFBE). The idea that encouraged the feature's construction is energy variation in a narrow frequency sub-band. The energy varies more rapidly, and to a greater extent, for speech than for music. Therefore, an energy variance in such a sub-band is greater for speech than for music. The radio broadcast database and the BNSI broadcast news database were used for feature discrimination and segmentation ability evaluation. The calculation procedure of the VMFBE feature has 4 out of 6 steps in common with the MFCC feature calculation procedure. Therefore, it is a very convenient speech/music discriminator for use in real-time automatic speech recognition systems based on MFCC features, because valuable processing time can be saved, and computation load is only slightly increased. Analysis of the feature's speech/music discriminative ability shows an average error rate below 10% for radio broadcast material, and it outperforms other features used for comparison by more than 8%. The proposed feature as a stand-alone speech/music discriminator in a segmentation system achieves an overall accuracy of over 94% on radio broadcast material.
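    The abstract describes the VMFBE computation only at a high level (mel-style filter bank energies, as in the MFCC front end, followed by the mean across bands of the energy variance over a short window). A minimal numpy sketch of that idea follows; all parameter values, window sizes, and function names are illustrative assumptions, not the authors' implementation:

    ```python
    import numpy as np

    def frame_signal(x, frame_len=400, hop=160):
        """Slice a 1-D signal into overlapping Hamming-windowed frames
        (assumed 25 ms frames with a 10 ms hop at 16 kHz)."""
        n = 1 + max(0, (len(x) - frame_len) // hop)
        idx = np.arange(frame_len)[None, :] + hop * np.arange(n)[:, None]
        return x[idx] * np.hamming(frame_len)

    def filterbank_energies(frames, n_filters=24, nfft=512, sr=16000):
        """Steps shared with the MFCC front end: power spectrum + triangular mel filters."""
        power = np.abs(np.fft.rfft(frames, nfft)) ** 2
        mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
        imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
        pts = imel(np.linspace(mel(0), mel(sr / 2), n_filters + 2))
        bins = np.floor((nfft + 1) * pts / sr).astype(int)
        fb = np.zeros((n_filters, nfft // 2 + 1))
        for i in range(n_filters):
            l, c, r = bins[i], bins[i + 1], bins[i + 2]
            fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
            fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
        return power @ fb.T  # (n_frames, n_filters)

    def vmfbe(x, win=30):
        """Mean (across bands) of the per-band energy variance over a sliding window."""
        e = filterbank_energies(frame_signal(np.asarray(x, dtype=float)))
        n = e.shape[0] - win + 1
        var = np.stack([e[i:i + win].var(axis=0) for i in range(n)])
        return var.mean(axis=1)  # one value per analysis window

    # Speech-like amplitude-modulated noise should score higher than a steady tone.
    rng = np.random.default_rng(0)
    t = np.arange(32000) / 16000.0
    speechy = rng.standard_normal(32000) * (0.2 + np.abs(np.sin(2 * np.pi * 4 * t)))
    tone = np.sin(2 * np.pi * 440 * t)
    print(vmfbe(speechy).mean() > vmfbe(tone).mean())
    ```

    The toy comparison at the end only illustrates the feature's premise: sub-band energy varies more for speech-like signals than for steady music-like ones.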

  5. Online Speech/Music Segmentation Based on the Variance Mean of Filter Bank Energy

    Directory of Open Access Journals (Sweden)

    Zdravko Kačič

    2009-01-01

    Full Text Available This paper presents a novel feature for online speech/music segmentation based on the variance mean of filter bank energy (VMFBE). The idea that encouraged the feature's construction is energy variation in a narrow frequency sub-band. The energy varies more rapidly, and to a greater extent, for speech than for music. Therefore, an energy variance in such a sub-band is greater for speech than for music. The radio broadcast database and the BNSI broadcast news database were used for feature discrimination and segmentation ability evaluation. The calculation procedure of the VMFBE feature has 4 out of 6 steps in common with the MFCC feature calculation procedure. Therefore, it is a very convenient speech/music discriminator for use in real-time automatic speech recognition systems based on MFCC features, because valuable processing time can be saved, and computation load is only slightly increased. Analysis of the feature's speech/music discriminative ability shows an average error rate below 10% for radio broadcast material, and it outperforms other features used for comparison by more than 8%. The proposed feature as a stand-alone speech/music discriminator in a segmentation system achieves an overall accuracy of over 94% on radio broadcast material.

  6. The Nature of Auditory Discrimination Problems in Children with Specific Language Impairment: An MMN Study

    Science.gov (United States)

    Davids, Nina; Segers, Eliane; van den Brink, Danielle; Mitterer, Holger; van Balkom, Hans; Hagoort, Peter; Verhoeven, Ludo

    2011-01-01

    Many children with specific language impairment (SLI) show impairments in discriminating auditorily presented stimuli. The present study investigates whether these discrimination problems are speech specific or of a general auditory nature. This was studied using a linguistic and a nonlinguistic contrast that were matched for acoustic complexity in…

  7. Esophageal speeches modified by the Speech Enhancer Program®

    OpenAIRE

    Manochiopinig, Sriwimon; Boonpramuk, Panuthat

    2014-01-01

    Esophageal speech appears to be the first choice of speech treatment after a laryngectomy. However, many people with a laryngectomy are unable to speak well. The aim of this study was to evaluate the post-modification speech quality of Thai esophageal speakers using the Speech Enhancer Program®. The method adopted was to approach five speech–language pathologists to assess the speech accuracy and intelligibility of the words and continuous speech of the seven people with a laryngectomy. A comparison study was conduc…

  8. Native and Non-native English Teachers' Perceptions of their Professional Identity: Convergent or Divergent?

    Directory of Open Access Journals (Sweden)

    Zia Tajeddin

    2016-10-01

    Full Text Available There is still a preference for native speaker teachers in the language teaching profession, which is supposed to influence the self-perceptions of native and nonnative teachers. However, the status of English as a globalized language is changing the legitimacy of native/nonnative teacher dichotomy. This study sought to investigate native and nonnative English-speaking teachers’ perceptions about native and nonnative teachers’ status and the advantages and disadvantages of being a native or nonnative teacher. Data were collected by means of a questionnaire and a semi-structured interview. A total of 200 native and nonnative teachers of English from the UK and the US, i.e. the inner circle, and Turkey and Iran, the expanding circle, participated in this study. A significant majority of nonnative teachers believed that native speaker teachers have better speaking proficiency, better pronunciation, and greater self-confidence. The findings also showed nonnative teachers’ lack of self-confidence and awareness of their role and status compared with native-speaker teachers, which could be the result of existing inequities between native and nonnative English-speaking teachers in ELT. The findings also revealed that native teachers disagreed more strongly with the concept of native teachers’ superiority over nonnative teachers. Native teachers argued that nonnative teachers have a good understanding of teaching methodology whereas native teachers are more competent in correct language. It can be concluded that teacher education programs in the expanding-circle countries should include materials for teachers to raise their awareness of their own professional status and role and to remove their misconception about native speaker fallacy.

  9. Socially-Tolerable Discrimination

    OpenAIRE

    J. Atsu Amegashie

    2008-01-01

    History is replete with overt discrimination of various forms. However, these forms of discrimination are not equally tolerable. For example, discrimination based on immutable or prohibitively unalterable characteristics such as race or gender is much less acceptable. Why? I develop a simple model of conflict which is driven by either racial (gender) discrimination or generational discrimination (i.e., young versus old). I show that there exist parameters of the model where racial (gender) di...

  10. Phonological representations are unconsciously used when processing complex, non-speech signals.

    Directory of Open Access Journals (Sweden)

    Mahan Azadpour

    Full Text Available Neuroimaging studies of speech processing increasingly rely on artificial speech-like sounds whose perceptual status as speech or non-speech is assigned by simple subjective judgments; brain activation patterns are interpreted according to these status assignments. The naïve perceptual status of one such stimulus, spectrally-rotated speech (not consciously perceived as speech by naïve subjects), was evaluated in discrimination and forced identification experiments. Discrimination of variation in spectrally-rotated syllables in one group of naïve subjects was strongly related to the pattern of similarities in phonological identification of the same stimuli provided by a second, independent group of naïve subjects, suggesting either that (1) naïve rotated syllable perception involves phonetic-like processing, or (2) perception is solely based on physical acoustic similarity, and similar sounds are provided with similar phonetic identities. Analysis of acoustic (Euclidean distances of center frequency values of formants) and phonetic similarities in the perception of the vowel portions of the rotated syllables revealed that discrimination was significantly and independently influenced by both acoustic and phonological information. We conclude that simple subjective assessments of artificial speech-like sounds can be misleading, as perception of such sounds may initially and unconsciously utilize speech-like, phonological processing.
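    The acoustic-similarity measure mentioned in the abstract (Euclidean distance between formant center frequencies) is straightforward to sketch. The formant values below are invented purely for illustration:

    ```python
    import numpy as np

    # Formant center frequencies (F1, F2, F3 in Hz) for three hypothetical
    # vowel tokens; the numbers are made up for this example.
    formants = {
        "token_a": np.array([700.0, 1200.0, 2600.0]),
        "token_b": np.array([710.0, 1250.0, 2580.0]),
        "token_c": np.array([300.0, 2300.0, 3000.0]),
    }

    def acoustic_distance(f1, f2):
        """Euclidean distance between formant-frequency vectors, as far as the
        abstract describes the acoustic-similarity measure."""
        return float(np.linalg.norm(f1 - f2))

    d_ab = acoustic_distance(formants["token_a"], formants["token_b"])
    d_ac = acoustic_distance(formants["token_a"], formants["token_c"])
    print(d_ab < d_ac)  # acoustically similar tokens lie closer: True
    ```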

  11. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication, including voice, will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. The book outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the…

  12. Ear, Hearing and Speech

    DEFF Research Database (Denmark)

    Poulsen, Torben

    2000-01-01

    An introduction is given to the anatomy and the function of the ear, basic psychoacoustic matters (hearing threshold, loudness, masking), the speech signal, and speech intelligibility. The lecture note is written for the course: Fundamentals of Acoustics and Noise Control (51001).

  13. Advances in Speech Recognition

    CERN Document Server

    Neustein, Amy

    2010-01-01

    This volume comprises contributions from eminent leaders in the speech industry, and presents a comprehensive and in-depth analysis of the progress of speech technology in the topical areas of mobile settings, healthcare, and call centers. The material addresses the technical aspects of voice technology within the framework of societal needs, such as the use of speech recognition software to produce up-to-date electronic health records, notwithstanding patients' changes to health plans and physicians. Included is discussion of speech engineering, linguistics, human factors ana…

  14. Landscape genetics of the nonnative red fox of California.

    Science.gov (United States)

    Sacks, Benjamin N; Brazeal, Jennifer L; Lewis, Jeffrey C

    2016-07-01

    Invasive mammalian carnivores contribute disproportionately to declines in global biodiversity. In California, nonnative red foxes (Vulpes vulpes) have significantly impacted endangered ground-nesting birds and native canids. These foxes derive primarily from captive-reared animals associated with the fur-farming industry. Over the past five decades, the cumulative area occupied by nonnative red fox increased to cover much of central and southern California. We used a landscape-genetic approach involving mitochondrial DNA (mtDNA) sequences and 13 microsatellites of 402 nonnative red foxes removed in predator control programs to investigate source populations, contemporary connectivity, and metapopulation dynamics. Both markers indicated high population structuring consistent with origins from multiple introductions and low subsequent gene flow. Landscape-genetic modeling indicated that population connectivity was especially low among coastal sampling sites surrounded by mountainous wildlands but somewhat higher through topographically flat, urban and agricultural landscapes. The genetic composition of populations tended to be stable for multiple generations, indicating a degree of demographic resilience to predator removal programs. However, in two sites where intensive predator control reduced fox abundance, we observed increases in immigration, suggesting potential for recolonization to counter eradication attempts. These findings, along with continued genetic monitoring, can help guide localized management of foxes by identifying points of introductions and routes of spread and evaluating the relative importance of reproduction and immigration in maintaining populations. More generally, the study illustrates the utility of a landscape-genetic approach for understanding invasion dynamics and metapopulation structure of one of the world's most destructive invasive mammals, the red fox.

  15. Integrated Phoneme Subspace Method for Speech Feature Extraction

    Directory of Open Access Journals (Sweden)

    Park Hyunsin

    2009-01-01

    Full Text Available Speech feature extraction has been a key focus in robust speech recognition research. In this work, we discuss data-driven linear feature transformations applied to feature vectors in the logarithmic mel-frequency filter bank domain. Transformations are based on principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA). Furthermore, this paper introduces a new feature extraction technique that collects the correlation information among phoneme subspaces and reconstructs feature space for representing phonemic information efficiently. The proposed speech feature vector is generated by projecting an observed vector onto an integrated phoneme subspace (IPS) based on PCA or ICA. The performance of the new feature was evaluated for isolated word speech recognition. The proposed method provided higher recognition accuracy than conventional methods in clean and reverberant environments.
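    The abstract gives only the outline of the IPS construction. A loose numpy sketch of the idea, building one PCA subspace per phoneme and stacking them into a single integrated projection, follows; the toy data, dimensions, and the use of plain PCA (rather than the paper's exact procedure) are assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def pca_basis(X, k):
        """Top-k principal directions of the feature matrix X (rows = frames)."""
        Xc = X - X.mean(axis=0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        return vt[:k]  # (k, dim)

    # Toy stand-in for log mel filter bank frames of three phoneme classes.
    dim, k = 24, 3
    phoneme_frames = {p: rng.standard_normal((200, dim)) + rng.standard_normal(dim) * 2
                      for p in ("a", "i", "u")}

    # One subspace per phoneme, stacked into an integrated projection matrix.
    bases = [pca_basis(X, k) for X in phoneme_frames.values()]
    ips = np.vstack(bases)            # (3*k, dim) integrated phoneme subspace
    frame = rng.standard_normal(dim)  # one observed feature vector
    feature = ips @ frame             # projected IPS-style feature
    print(feature.shape)              # (9,)
    ```

    In the paper the projection is derived from PCA or ICA over phoneme-specific data; this sketch shows only the "project onto stacked per-class subspaces" mechanic.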

  16. Speech-Language Therapy (For Parents)

    Science.gov (United States)

    A KidsHealth article for parents on speech-language therapy for children with speech and/or language disorders, covering speech disorders, language disorders, and feeding disorders. …

  17. Language Distance and Non-Native Syntactic Processing: Evidence from Event-Related Potentials

    Science.gov (United States)

    Zawiszewski, Adam; Gutierrez, Eva; Fernandez, Beatriz; Laka, Itziar

    2011-01-01

    In this study, we explore native and non-native syntactic processing, paying special attention to the language distance factor. To this end, we compared how native speakers of Basque and highly proficient non-native speakers of Basque who are native speakers of Spanish process certain core aspects of Basque syntax. Our results suggest that…

  18. Native and Nonnative Teachers of L2 Pronunciation: Effects on Learner Performance

    Science.gov (United States)

    Levis, John M.; Sonsaat, Sinem; Link, Stephanie; Barriuso, Taylor Anne

    2016-01-01

    Both native and nonnative language teachers often find pronunciation a difficult skill to teach because of inadequate training or uncertainty about the effectiveness of instruction. But nonnative language teachers may also see themselves as inadequate models for pronunciation, leading to increased uncertainty about whether they should teach…

  19. Grammatical versus Pragmatic Error: Employer Perceptions of Nonnative and Native English Speakers

    Science.gov (United States)

    Wolfe, Joanna; Shanmugaraj, Nisha; Sipe, Jaclyn

    2016-01-01

    Many communication instructors make allowances for grammatical error in nonnative English speakers' writing, but do businesspeople do the same? We asked 169 businesspeople to comment on three versions of an email with different types of errors. We found that businesspeople do make allowances for errors made by nonnative English speakers,…

  20. Student Perceptions of How TESOL Educators Teach Nonnative English-Speaking Teachers

    Science.gov (United States)

    Phillabaum, Scott; Frazier, Stefan

    2013-01-01

    Recent research on how TESOL professionals educate nonnative English-speaking students in MA programs indicates a general conviction that native-speaking and nonnative-speaking MA students should be treated equally during their studies in MA programs. Absent from this discussion and much of the literature on this topic, however, are the voices of…

  1. Chinese Fantasy Novel: Empirical Study on New Word Teaching for Non-Native Learners

    Science.gov (United States)

    Meng, Bok Check; Soon, Goh Ying

    2014-01-01

    Giving additional learning materials such as Chinese fantasy novel to non-native learners can be strenuous. This study seeks to render empirical support on the usefulness of the use of new words in Chinese fantasy novel to enhance vocabulary learning among the non-native learners of Chinese. In general, the students agreed that they like to learn…

  2. Determinants of success in native and non-native listening comprehension: an individual differences approach

    NARCIS (Netherlands)

    S. Andringa; N. Olsthoorn; C. van Beuningen; R. Schoonen; J. Hulstijn

    2012-01-01

    The goal of this study was to explain individual differences in both native and non-native listening comprehension; 121 native and 113 non-native speakers of Dutch were tested on various linguistic and nonlinguistic cognitive skills thought to underlie listening comprehension. Structural equation mo

  3. Co-occurring nonnative woody shrubs have additive and non-additive soil legacies

    DEFF Research Database (Denmark)

    Kuebbing, Sara E; Patterson, Courtney M; Classen, Aimée T;

    2016-01-01

    To maximize limited conservation funds and prioritize management projects that are likely to succeed, accurate assessment of invasive nonnative species impacts is essential. A common challenge to prioritization is a limited knowledge of the difference between the impacts of a single nonnative spe...

  4. The Factors Influencing the Motivational Strategy Use of Non-Native English Teachers

    Science.gov (United States)

    Solak, Ekrem; Bayar, Adem

    2014-01-01

    Motivation can be considered one of the most important factors determining success in language classroom. Therefore, this research aims to determine the variables influencing the motivational strategies used by non-native English teachers in Turkish context. 122 non-native English teachers teaching English at a state-run university prep school…

  5. The Identity (Re)Construction of Nonnative English Teachers Stepping into Native Turkish Teachers' Shoes

    Science.gov (United States)

    Mutlu, Sevcan; Ortaçtepe, Deniz

    2016-01-01

    The present study explored the identity (re)construction of five nonnative English teachers who went to the USA on a prestigious scholarship for one year to teach their native language, Turkish. In that sense, it investigated how this shift from being a nonnative English teacher to a native Turkish teacher influenced their self-image,…

  6. Cognitive and Emotional Evaluation of Two Educational Outdoor Programs Dealing with Non-Native Bird Species

    Science.gov (United States)

    Braun, Michael; Buyer, Regine; Randler, Christoph

    2010-01-01

    "Non-native organisms are a major threat to biodiversity". This statement is often made by biologists, but general conclusions cannot be drawn easily because of contradictory evidence. To introduce pupils aged 11-14 years to this topic, we employed an educational program dealing with non-native animals in Central Europe. The pupils took part in a…

  7. The effect of L1 orthography on non-native vowel perception

    NARCIS (Netherlands)

    Escudero, P.; Wanrooij, K.E.

    2010-01-01

    Previous research has shown that orthography influences the learning and processing of spoken non-native words. In this paper, we examine the effect of L1 orthography on non-native sound perception. In Experiment 1, 204 Spanish learners of Dutch and a control group of 20 native speakers of Dutch

  8. The Effect of L1 Orthography on Non-Native Vowel Perception

    Science.gov (United States)

    Escudero, Paola; Wanrooij, Karin

    2010-01-01

    Previous research has shown that orthography influences the learning and processing of spoken non-native words. In this paper, we examine the effect of L1 orthography on non-native sound perception. In Experiment 1, 204 Spanish learners of Dutch and a control group of 20 native speakers of Dutch were asked to classify Dutch vowel tokens by…

  9. Other-Repair in Japanese Conversation Between Nonnative and Native Speakers.

    Science.gov (United States)

    Hosoda, Yuri

    2000-01-01

    Looks at conversation in Japanese between native and nonnative speaking peers. Focuses on other-repair, taking the conversation analytic approach to phenomena in nonnative discourse. Identifies a range of speaking practices as well as embodied resources that are understood by recipients as inviting or initiating other repair. (Author/VWL)

  10. Facing Innovation: Preparing Lecturers for English-Medium Instruction in a Non-Native Context.

    Science.gov (United States)

    Klaassen, R. G.; De Graaff, E.

    2001-01-01

    Discusses the effects of training on the teaching staff in an innovation process that is the implementation of English-medium instruction by non-native speaking lecturers to non-native speaking students. The workshop turned out to be the most appropriate professional development for the first two phases in the innovation process. (Contains 13…

  11. Cross-Linguistic Influence in Non-Native Languages: Explaining Lexical Transfer Using Language Production Models

    Science.gov (United States)

    Burton, Graham

    2013-01-01

    The focus of this research is on the nature of lexical cross-linguistic influence (CLI) between non-native languages. Using oral interviews with 157 L1 Italian high-school students studying English and German as non-native languages, the project investigated which kinds of lexis appear to be more susceptible to transfer from German to English and…

  12. Structural Correlates for Lexical Efficiency and Number of Languages in Non-Native Speakers of English

    Science.gov (United States)

    Grogan, A.; Parker Jones, O.; Ali, N.; Crinion, J.; Orabona, S.; Mechias, M. L.; Ramsden, S.; Green, D. W.; Price, C. J.

    2012-01-01

    We used structural magnetic resonance imaging (MRI) and voxel based morphometry (VBM) to investigate whether the efficiency of word processing in the non-native language (lexical efficiency) and the number of non-native languages spoken (2+ versus 1) were related to local differences in the brain structure of bilingual and multilingual speakers.…

  13. Managing conflicts arising from fisheries enhancements based on non-native fishes in southern Africa.

    Science.gov (United States)

    Ellender, B R; Woodford, D J; Weyl, O L F; Cowx, I G

    2014-12-01

    Southern Africa has a long history of non-native fish introductions for the enhancement of recreational and commercial fisheries, due to a perceived lack of suitable native species. This has resulted in some important inland fisheries being based on non-native fishes. Regionally, these introductions are predominantly not benign, and non-native fishes are considered one of the main threats to aquatic biodiversity because they affect native biota through predation, competition, habitat alteration, disease transfer and hybridization. To achieve national policy objectives of economic development, food security and poverty eradication, countries are increasingly looking towards inland fisheries as vehicles for development. As a result, conflicts have developed between economic and conservation objectives. In South Africa, as is the case for other invasive biota, the control and management of non-native fishes is included in the National Environmental Management: Biodiversity Act. Implementation measures include import and movement controls and, more recently, non-native fish eradication in conservation priority areas. Management actions are, however, complicated because many non-native fishes are important components in recreational and subsistence fisheries that contribute towards regional economies and food security. In other southern African countries, little attention has focussed on issues and management of non-native fishes, and this is cause for concern. This paper provides an overview of introductions, impacts and fisheries in southern Africa with emphasis on existing and evolving legislation, conflicts, implementation strategies and the sometimes innovative approaches that have been used to prioritize conservation areas and manage non-native fishes.

  14. Cross-Linguistic Influence in Non-Native Languages: Explaining Lexical Transfer Using Language Production Models

    Science.gov (United States)

    Burton, Graham

    2013-01-01

    The focus of this research is on the nature of lexical cross-linguistic influence (CLI) between non-native languages. Using oral interviews with 157 L1 Italian high-school students studying English and German as non-native languages, the project investigated which kinds of lexis appear to be more susceptible to transfer from German to English and…

  15. Strategies for Nonnative-English-Speaking Teachers' Continued Development as Professionals

    Science.gov (United States)

    De Oliveira, Luciana C.

    2011-01-01

    This article contributes to the literature on nonnative-English-speaking (NNES) teachers by providing specific ways in which they can use their nonnative status in the classroom and in their professional work in the field of teaching English to speakers of other languages (TESOL). Drawing on the author's own experiences as an English learner, she…

  16. Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation.

    Science.gov (United States)

    Banks, Briony; Gowen, Emma; Munro, Kevin J; Adank, Patti

    2015-01-01

    Perceptual adaptation allows humans to recognize different varieties of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated if listeners can see a speaker's facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise. A control group also underwent training with visual-only (speech-reading) cues. We observed no significant difference in perceptual adaptation between any of the groups. To address a number of remaining questions, we carried out a second study using a different accent, speaker and experimental design, in which participants listened to sentences in a non-native (Japanese) accent with audiovisual or audio-only cues, without separate training. Participants' eye gaze was recorded to verify that they looked at the speaker's face during audiovisual trials. Recognition accuracy was significantly better for audiovisual than for audio-only stimuli; however, no statistical difference in perceptual adaptation was observed between the two modalities. Furthermore, Bayesian analysis suggested that the data supported the null hypothesis. Our results suggest that although the availability of visual speech cues may be immediately beneficial for recognition of unfamiliar accented speech in noise, it does not improve perceptual adaptation.

  17. Speech Compression for Noise-Corrupted Thai Expressive Speech

    Directory of Open Access Journals (Sweden)

    Suphattharachai Chomphan

    2011-01-01

    Full Text Available Problem statement: In speech communication, speech coding aims at preserving speech quality at a lower coding bitrate. When considering the communication environment, various types of noise deteriorate the speech quality. Expressive speech with different speaking styles may yield different speech quality under the same coding method. Approach: This research proposed a study of speech compression for noise-corrupted Thai expressive speech using two coding methods, CS-ACELP and MP-CELP. The speech material included a hundred male speech utterances and a hundred female speech utterances. Four speaking styles were included: enjoyable, sad, angry, and reading styles. Five sentences of Thai speech were chosen. Three types of noise were included (train, car, and air conditioner). Five levels of each type of noise were varied from 0-20 dB. The subjective test of mean opinion score was exploited in the evaluation process. Results: The experimental results showed that CS-ACELP gave better speech quality than MP-CELP at all three bitrates of 6000, 8600, and 12600 bps. When considering the levels of noise, the 20-dB noise gave the best speech quality, while 0-dB noise gave the worst speech quality. When considering the speech gender, female speech gave better results than male speech. When considering the types of noise, the air-conditioner noise gave the best speech quality, while the train noise gave the worst speech quality. Conclusion: From the study, it can be seen that coding methods, types of noise, levels of noise, and speech gender influence the coded speech quality.
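    Noise-corrupted stimuli of the kind described (speech plus noise at 0-20 dB SNR) are commonly constructed by scaling the noise to hit a target signal-to-noise ratio before mixing. The sketch below shows that generic construction, not the paper's exact procedure:

    ```python
    import numpy as np

    def mix_at_snr(speech, noise, snr_db):
        """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
        then add it to the speech. Generic stimulus construction; the paper's
        own mixing procedure is not described in the abstract."""
        speech = np.asarray(speech, dtype=float)
        noise = np.asarray(noise, dtype=float)[:len(speech)]
        p_s = np.mean(speech ** 2)
        p_n = np.mean(noise ** 2)
        target_p_n = p_s / (10.0 ** (snr_db / 10.0))
        return speech + noise * np.sqrt(target_p_n / p_n)

    rng = np.random.default_rng(2)
    s = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000.0)  # 1 s "speech" stand-in
    n = rng.standard_normal(16000)                            # white-noise masker
    for snr in (0, 10, 20):
        mixed = mix_at_snr(s, n, snr)
        achieved = 10 * np.log10(np.mean(s ** 2) / np.mean((mixed - s) ** 2))
        print(round(achieved, 1))  # 0.0, 10.0, 20.0
    ```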

  18. Contrasting xylem vessel constraints on hydraulic conductivity between native and non-native woody understory species

    Directory of Open Access Journals (Sweden)

    Maria S Smith

    2013-11-01

    Full Text Available We examined the hydraulic properties of 82 native and non-native woody species common to forests of Eastern North America, including several congeneric groups, representing a range of anatomical wood types. We observed smaller conduit diameters with greater frequency in non-native species, corresponding to a lower calculated potential vulnerability to cavitation index. Non-native species exhibited higher vessel grouping in metaxylem compared with native species; however, solitary vessels were more prevalent in secondary xylem. A higher frequency of solitary vessels in secondary xylem was related to a lower potential vulnerability index. We found no relationship between anatomical characteristics of xylem, origin of species, and hydraulic conductivity, indicating that non-native species did not exhibit advantageous hydraulic efficiency over native species. Our results suggest anatomical advantages for non-native species under the risk of freezing-induced cavitation, perhaps permitting extended growing seasons.

  19. Gender Discrimination in English

    Institute of Scientific and Technical Information of China (English)

    廖敏慧

    2014-01-01

    Gender discrimination in language is usually defined as discrimination based on sex, especially discrimination against women. With the rise of the women's liberation movement in the 1960s and 1970s, and the improvement of women's social status in recent years, gender discrimination in English has attracted more and more attention. Based on previous studies, this thesis first discusses the manifestations of gender discrimination in English vocabulary and address terms, then analyzes the factors behind gender discrimination in English from social and cultural perspectives, and finally puts forward some methods for avoiding or eliminating gender discrimination in English.

  20. IITKGP-SESC: Speech Database for Emotion Analysis

    Science.gov (United States)

    Koolagudi, Shashidhar G.; Maity, Sudhamay; Kumar, Vuppala Anil; Chakrabarti, Saswat; Rao, K. Sreenivasa

    In this paper, we introduce a speech database for analyzing the emotions present in speech signals. The proposed database is recorded in the Telugu language using professional artists from All India Radio (AIR), Vijayawada, India. The speech corpus is collected by simulating eight different emotions using neutral (emotion-free) statements. The database is named the Indian Institute of Technology Kharagpur Simulated Emotion Speech Corpus (IITKGP-SESC). The proposed database will be useful for characterizing the emotions present in speech. Further, the emotion-specific knowledge present in speech at different levels can be acquired by developing emotion-specific models using features from the vocal tract system, the excitation source, and prosody. This paper describes the design, acquisition, post-processing, and evaluation of the proposed speech database (IITKGP-SESC). The quality of the emotions present in the database is evaluated using subjective listening tests. Finally, statistical models are developed using prosodic features, and the discrimination of the emotions is carried out by performing classification of emotions using the developed statistical models.

  1. Formant discrimination in noise for isolated vowels

    Science.gov (United States)

    Liu, Chang; Kewley-Port, Diane

    2004-11-01

    Formant discrimination for isolated vowels presented in noise was investigated for normal-hearing listeners. Discrimination thresholds for F1 and F2, for the seven American English vowels /i, ɪ, ɛ, æ, ʌ, ɑ, u/, were measured under two types of noise, long-term speech-shaped noise (LTSS) and multitalker babble, and also under quiet listening conditions. Signal-to-noise ratios (SNR) varied from -4 to +4 dB in steps of 2 dB. All three factors, formant frequency, signal-to-noise ratio, and noise type, had significant effects on vowel formant discrimination. Significant interactions among the three factors showed that threshold-frequency functions depended on SNR and noise type. The thresholds at the lowest SNR were elevated by a factor of about 3 compared to those in quiet. The masking functions (threshold vs. SNR) were well described by a negative exponential over F1 and F2 for both LTSS and babble noise. Speech-shaped noise was a slightly more effective masker than multitalker babble, presumably reflecting small benefits (1.5 dB) due to the temporal variation of the babble.
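The negative-exponential masking function reported above can be sketched numerically. The parameter values below (quiet threshold, amplitude, decay rate) are hypothetical, chosen only so that the threshold at the lowest SNR is about three times the quiet threshold, as the abstract states; they are not the paper's fitted values.

```python
import math

def formant_threshold(snr_db, t_quiet=14.0, a=30.0, b=0.35):
    """Hypothetical masking function: discrimination threshold (Hz) decays as a
    negative exponential of SNR toward the quiet threshold t_quiet."""
    return t_quiet + a * math.exp(-b * (snr_db + 4.0))

# Thresholds across the SNR range used in the study (-4 to +4 dB in 2-dB steps)
thresholds = {snr: round(formant_threshold(snr), 1) for snr in range(-4, 5, 2)}
```

With these placeholder parameters the threshold is roughly 3x the quiet threshold at -4 dB SNR and approaches the quiet threshold as SNR improves, matching the qualitative shape described above.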

  2. Tracking Speech Sound Acquisition

    Science.gov (United States)

    Powell, Thomas W.

    2011-01-01

    This article describes a procedure to aid in the clinical appraisal of child speech. The approach, based on the work by Dinnsen, Chin, Elbert, and Powell (1990; Some constraints on functionally disordered phonologies: Phonetic inventories and phonotactics. "Journal of Speech and Hearing Research", 33, 28-37), uses a railway idiom to track gains in…

  3. Preschool Connected Speech Inventory.

    Science.gov (United States)

    DiJohnson, Albert; And Others

    This speech inventory developed for a study of aurally handicapped preschool children (see TM 001 129) provides information on intonation patterns in connected speech. The inventory consists of a list of phrases and simple sentences accompanied by pictorial clues. The test is individually administered by a teacher-examiner who presents the spoken…

  4. Illustrated Speech Anatomy.

    Science.gov (United States)

    Shearer, William M.

    Written for students in the fields of speech correction and audiology, the text deals with the following: structures involved in respiration; the skeleton and the processes of inhalation and exhalation; phonation and pitch, the larynx, and esophageal speech; muscles involved in articulation; muscles involved in resonance; and the anatomy of the…

  5. Private Speech in Ballet

    Science.gov (United States)

    Johnston, Dale

    2006-01-01

    Authoritarian teaching practices in ballet inhibit the use of private speech. This paper highlights the critical importance of private speech in the cognitive development of young ballet students, within what is largely a non-verbal art form. It draws upon research by Russian psychologist Lev Vygotsky and contemporary socioculturalists, to…

  8. Free Speech Yearbook 1976.

    Science.gov (United States)

    Phifer, Gregg, Ed.

    The articles collected in this annual address several aspects of First Amendment Law. The following titles are included: "Freedom of Speech As an Academic Discipline" (Franklyn S. Haiman), "Free Speech and Foreign-Policy Decision Making" (Douglas N. Freeman), "The Supreme Court and the First Amendment: 1975-1976"…

  10. Advertising and Free Speech.

    Science.gov (United States)

    Hyman, Allen, Ed.; Johnson, M. Bruce, Ed.

    The articles collected in this book originated at a conference at which legal and economic scholars discussed the issue of First Amendment protection for commercial speech. The first article, in arguing for freedom for commercial speech, finds inconsistent and untenable the arguments of those who advocate freedom from regulation for political…

  11. Free Speech. No. 38.

    Science.gov (United States)

    Kane, Peter E., Ed.

    This issue of "Free Speech" contains the following articles: "Daniel Schorr Relieved of Reporting Duties" by Laurence Stern, "The Sellout at CBS" by Michael Harrington, "Defending Dan Schorr" by Tom Wicker, "Speech to the Washington Press Club, February 25, 1976" by Daniel Schorr, "Funds Voted For Schorr Inquiry" by Richard Lyons, "Erosion of the…

  12. Charisma in business speeches

    DEFF Research Database (Denmark)

    Niebuhr, Oliver; Brem, Alexander; Novák-Tót, Eszter

    2016-01-01

    Charisma is a key component of spoken language interaction; and it is probably for this reason that charismatic speech has been the subject of intensive research for centuries. However, what is still largely missing is a quantitative and objective line of research that, firstly, involves analyses of the acoustic-prosodic signal, secondly, focuses on business speeches like product presentations, and, thirdly, in doing so, advances the still fairly fragmentary evidence on the prosodic correlates of charismatic speech. We show that the prosodic features of charisma in political speeches also apply to business speeches. Consistent with the public opinion, our findings are indicative of Steve Jobs being a more charismatic speaker than Mark Zuckerberg. Beyond previous studies, our data suggest that rhythm and emphatic accentuation are also involved in conveying charisma. Furthermore, the differences…

  13. Positive effects of nonnative invasive Phragmites australis on larval bullfrogs.

    Directory of Open Access Journals (Sweden)

    Mary Alta Rogalski

    BACKGROUND: Nonnative Phragmites australis (common reed is one of the most intensively researched and managed invasive plant species in the United States, yet as with many invasive species, our ability to predict, control or understand the consequences of invasions is limited. Rapid spread of dense Phragmites monocultures has prompted efforts to limit its expansion and remove existing stands. Motivation for large-scale Phragmites eradication programs includes purported negative impacts on native wildlife, a view based primarily on observational results. We took an experimental approach to test this assumption, estimating the effects of nonnative Phragmites australis on a native amphibian. METHODOLOGY/PRINCIPAL FINDINGS: Concurrent common garden and reciprocal transplant field experiments revealed consistently strong positive influences of Phragmites on Rana catesbeiana (North American bullfrog larval performance. Decomposing Phragmites litter appears to contribute to the effect. CONCLUSIONS/SIGNIFICANCE: Positive effects of Phragmites merit further research, particularly in regions where both Phragmites and R. catesbeiana are invasive. More broadly, the findings of this study reinforce the importance of experimental evaluations of the effects of biological invasion to make informed conservation and restoration decisions.

  14. Lexical support for phonetic perception during nonnative spoken word recognition.

    Science.gov (United States)

    Samuel, Arthur G; Frost, Ram

    2015-12-01

    Second language comprehension is generally not as efficient and effective as native language comprehension. In the present study, we tested the hypothesis that lower-level processes such as lexical support for phonetic perception are a contributing factor to these differences. For native listeners, it has been shown that the perception of ambiguous acoustic–phonetic segments is driven by lexical factors (Samuel, Psychological Science, 12, 348-351, 2001). Here, we tested whether nonnative listeners can use lexical context in the same way. Native Hebrew speakers living in Israel were tested with American English stimuli. When subtle acoustic cues in the stimuli worked against the lexical context, these nonnative speakers showed no evidence of lexical guidance of phonetic perception. This result conflicts with the performance of native speakers, who demonstrate lexical effects on phonetic perception even with conflicting acoustic cues. When stimuli without any conflicting cues were used, the native Hebrew subjects produced results similar to those of native English speakers, showing lexical support for phonetic perception in their second language. In contrast, native Arabic speakers, who were less proficient in English than the native Hebrew speakers, showed no ability to use lexical activation to support phonetic perception, even without any conflicting cues. These results reinforce previous demonstrations of lexical support of phonetic perception and demonstrate how proficiency modulates the use of lexical information in driving phonetic perception.

  15. Aquatic macroinvertebrate responses to native and non-native predators

    Directory of Open Access Journals (Sweden)

    Haddaway N. R.

    2014-01-01

    Non-native species can profoundly affect native ecosystems through trophic interactions with native species. Native prey may respond differently to non-native versus native predators since they lack prior experience. Here we investigate antipredator responses of two common freshwater macroinvertebrates, Gammarus pulex and Potamopyrgus jenkinsi, to olfactory cues from three predators: sympatric native fish (Gasterosteus aculeatus), sympatric native crayfish (Austropotamobius pallipes), and novel invasive crayfish (Pacifastacus leniusculus). G. pulex responded differently to fish and crayfish, showing enhanced locomotion in response to fish but a preference for the dark over the light in response to the crayfish. P. jenkinsi showed increased vertical migration in response to all three predator cues relative to controls. These different responses to fish and crayfish are hypothesised to reflect the predators' differing predation types: benthic for crayfish and pelagic for fish. However, we found no difference in response to native versus invasive crayfish, indicating that prey naiveté is unlikely to drive the impacts of invasive crayfish. The Predator Recognition Continuum Hypothesis proposes that the benefits of generalisable predator recognition outweigh the costs when predators are diverse. Generalised responses of prey as observed here will be adaptive in the presence of an invader, and may reduce novel predators' potential impacts.

  16. Effect of explicit dimensional instruction on speech category learning.

    Science.gov (United States)

    Chandrasekaran, Bharath; Yi, Han-Gyol; Smayda, Kirsten E; Maddox, W Todd

    2016-02-01

    Learning nonnative speech categories is often considered a challenging task in adulthood. This difficulty is driven by cross-language differences in weighting critical auditory dimensions that differentiate speech categories. For example, previous studies have shown that differentiating Mandarin tonal categories requires attending to dimensions related to pitch height and direction. Relative to native speakers of Mandarin, the pitch direction dimension is underweighted by native English speakers. In the current study, we examined the effect of explicit instructions (dimension instruction) on native English speakers' Mandarin tone category learning within the framework of a dual-learning systems (DLS) model. This model predicts that successful speech category learning is initially mediated by an explicit, reflective learning system that frequently utilizes unidimensional rules, with an eventual switch to a more implicit, reflexive learning system that utilizes multidimensional rules. Participants were explicitly instructed to focus on and/or ignore the pitch height dimension or the pitch direction dimension, or were given no explicit prime. Our results show that instructions directing participants to focus on pitch direction, and instructions diverting attention away from pitch height, resulted in enhanced tone categorization. Computational modeling of participant responses suggested that instruction related to pitch direction led to faster and more frequent use of multidimensional reflexive strategies and enhanced perceptual selectivity along the previously underweighted pitch direction dimension.

  17. Brain structure is related to speech perception abilities in bilinguals.

    Science.gov (United States)

    Burgaleta, Miguel; Baus, Cristina; Díaz, Begoña; Sebastián-Gallés, Núria

    2014-07-01

    Morphology of the human brain predicts the speed at which individuals learn to distinguish novel foreign speech sounds after laboratory training. However, little is known about the neuroanatomical basis of individual differences in speech perception when a second language (L2) has been learned in natural environments for extended periods of time. In the present study, two samples of highly proficient bilinguals were selected according to their ability to distinguish between very similar L2 sounds, either isolated (prelexical) or within words (lexical). Structural MRI was acquired and processed to estimate vertex-wise indices of cortical thickness (CT) and surface area (CSA), and the association between cortical morphology and behavioral performance was inspected. Results revealed that performance in the lexical task was negatively associated with the thickness of the left temporal cortex and angular gyrus, as well as with the surface area of the left precuneus. Our findings, consistently with previous fMRI studies, demonstrate that morphology of the reported areas is relevant for word recognition based on phonological information. Further, we discuss the possibility that increased CT and CSA in sound-to-meaning mapping regions, found for poor non-native speech sounds perceivers, would have plastically arisen after extended periods of increased functional activity during L2 exposure.

  18. The role of rhythm in perceiving speech in noise: a comparison of percussionists, vocalists and non-musicians

    OpenAIRE

    Slater, Jessica; Kraus, Nina

    2015-01-01

    The natural rhythms of speech help a listener follow what is being said, especially in noisy conditions. There is increasing evidence for links between rhythm abilities and language skills; however, the role of rhythm-related expertise in perceiving speech in noise is unknown. The present study assesses musical competence (rhythmic and melodic discrimination), speech-in-noise perception and auditory working memory in young adult percussionists, vocalists and non-musicians. Outcomes reveal tha...

  19. Defining the Impact of Non-Native Species

    Science.gov (United States)

    Jeschke, Jonathan M; Bacher, Sven; Blackburn, Tim M; Dick, Jaimie T A; Essl, Franz; Evans, Thomas; Gaertner, Mirijam; Hulme, Philip E; Kühn, Ingolf; Mrugała, Agata; Pergl, Jan; Pyšek, Petr; Rabitsch, Wolfgang; Ricciardi, Anthony; Richardson, David M; Sendek, Agnieszka; Vilà, Montserrat; Winter, Marten; Kumschick, Sabrina

    2014-01-01

    Non-native species cause changes in the ecosystems to which they are introduced. These changes, or some of them, are usually termed impacts; they can be manifold and potentially damaging to ecosystems and biodiversity. However, the impacts of most non-native species are poorly understood, and a synthesis of available information is being hindered because authors often do not clearly define impact. We argue that explicitly defining the impact of non-native species will promote progress toward a better understanding of the implications of changes to biodiversity and ecosystems caused by non-native species; help disentangle which aspects of scientific debates about non-native species are due to disparate definitions and which represent true scientific discord; and improve communication between scientists from different research disciplines and between scientists, managers, and policy makers. For these reasons and based on examples from the literature, we devised seven key questions that fall into 4 categories: directionality, classification and measurement, ecological or socio-economic changes, and scale. These questions should help in formulating clear and practical definitions of impact to suit specific scientific, stakeholder, or legislative contexts.

  20. Unsupervised Linear Discriminant Analysis

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    An algorithm for unsupervised linear discriminant analysis is presented. Optimal unsupervised discriminant vectors are obtained by maximizing the covariance of all samples while minimizing the covariance of local k-nearest-neighbor samples. The experimental results show the algorithm is effective.
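The stated objective, maximize the covariance of all samples while minimizing the covariance among each sample's k nearest neighbours, can be sketched as a generalized eigenproblem. This is an illustrative reconstruction, not the authors' implementation; the regularization term, the neighbour count, and the eigenproblem formulation are assumptions.

```python
import numpy as np

def unsupervised_discriminant_vectors(X, k=5, n_components=2, reg=1e-6):
    """Directions that maximize total covariance while minimizing the local
    scatter of k-nearest-neighbour samples (illustrative sketch)."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    S_total = Xc.T @ Xc / n                       # covariance of all samples
    # local scatter accumulated over k-nearest-neighbour differences
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    S_local = np.zeros((d, d))
    for i in range(n):
        nn = np.argsort(dist[i])[1:k + 1]         # skip the point itself
        diffs = X[nn] - X[i]
        S_local += diffs.T @ diffs
    S_local /= n * k
    # maximize w'S_total w / w'S_local w  ->  generalized eigenproblem
    M = np.linalg.solve(S_local + reg * np.eye(d), S_total)
    evals, evecs = np.linalg.eig(M)
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_components]]
```

On data with two well-separated clusters, the leading direction found this way separates the clusters even though no labels are given, since between-cluster spread inflates the total covariance but not the local k-nearest-neighbour covariance.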

  1. Predicting speech intelligibility in conditions with nonlinearly processed noisy speech

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    The speech-based envelope power spectrum model (sEPSM; [1]) was proposed in order to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII). The sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv), which was demonstrated to successfully predict speech intelligibility in conditions with nonlinearly processed noisy speech, such as processing with spectral subtraction. Moreover, a multiresolution version (mr-sEPSM) was demonstrated to account for speech intelligibility in various conditions with stationary and fluctuating… from computational auditory scene analysis and further support the hypothesis that the SNRenv is a powerful metric for speech intelligibility prediction…
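As a rough illustration of the SNRenv idea, comparing the modulation (envelope) power of speech and noise, here is a single-band sketch. The actual sEPSM processes the signal through a gammatone filterbank and a bank of modulation filters, so everything below (the rectify-and-smooth envelope, the single band, the AC/DC normalization) is a deliberate simplification, not the model itself.

```python
import numpy as np

def envelope_snr_db(speech, noise, fs, env_cutoff_hz=30.0):
    """Single-band sketch of SNRenv: ratio of normalized envelope (modulation)
    power of the speech to that of the noise, in dB."""
    def mod_power(x):
        # crude envelope: rectify, then moving-average low-pass (~env_cutoff_hz)
        win = max(1, int(fs / env_cutoff_hz))
        env = np.convolve(np.abs(x), np.ones(win) / win, mode="same")
        return np.var(env) / (np.mean(env) ** 2 + 1e-12)  # AC power / DC power
    return 10.0 * np.log10(mod_power(speech) / (mod_power(noise) + 1e-12))
```

An amplitude-modulated tone (strong envelope fluctuations, like speech) scored against steady noise (flat envelope) yields a clearly positive SNRenv under this sketch.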

  2. English sentence recognition in speech-shaped noise and multi-talker babble for English-, Chinese-, and Korean-native listeners.

    Science.gov (United States)

    Jin, Su-Hyun; Liu, Chang

    2012-11-01

    This study aimed to investigate English sentence recognition in quiet and two types of maskers, multi-talker babble (MTB) and long-term speech-shaped noise (LTSSN), with varied signal-to-noise ratios, for English-, Chinese-, and Korean-native listeners. Results showed that first, sentence recognition for non-native listeners was affected more by background noise than that for native listeners; second, the masking effects of LTSSN were similar between Chinese and Korean listeners, but the masking effects of MTB were greater for Chinese than for Korean listeners, suggesting possible interaction effects between the non-native listener's native language and speech-like competing noise in sentence recognition.

  3. Airline Price Discrimination

    OpenAIRE

    Stacey, Brian

    2015-01-01

    Price discrimination enjoys a long history in the airline industry. Borenstein (1989) discusses price discrimination through frequent flyer programs from 1985 as related to the Piedmont-US Air merger; price discrimination strategies have grown in size and scope since then. From Saturday stay-over requirements to fares that vary with time of purchase, the airline industry is uniquely situated to enjoy the fruits of price discrimination.

  4. Enhanced cognitive and perceptual processing: A computational basis for the musician advantage in speech learning

    Directory of Open Access Journals (Sweden)

    Kirsten Smayda

    2015-05-01

    Long-term music training can positively impact speech processing. A recent framework developed to explain such cross-domain plasticity posits that music training-related advantages in speech processing are due to shared cognitive and perceptual processes between music and speech. Although perceptual and cognitive processing advantages due to music training have been independently demonstrated, to date no study has examined perceptual and cognitive processing within the context of a single task. The present study examines the impact of long-term music training on speech learning from a rigorous, computational perspective derived from signal detection theory. Our computational models provide independent estimates of cognitive and perceptual processing in native English-speaking musicians (n=15, mean age = 25 years) and non-musicians (n=15, mean age = 23 years) learning to categorize non-native lexical pitch patterns (Mandarin tones). Musicians outperformed non-musicians in this task. Model-based analyses suggested that musicians shifted from simple unidimensional decision strategies to more optimal multidimensional decision strategies sooner than non-musicians. In addition, musicians used optimal decisional strategies more often than non-musicians. However, musicians and non-musicians who used multidimensional strategies showed no difference in performance. We estimated parameters that quantify the magnitude of perceptual variability along two dimensions that are critical for tone categorization: pitch height and pitch direction. Both musicians and non-musicians showed a decrease in perceptual variability along the pitch height dimension, but only musicians showed a significant reduction in perceptual variability along the pitch direction dimension. Notably, these advantages persisted during a generalization phase, when no feedback was provided. These results provide an insight into the mechanisms underlying the musician advantage observed in non-native…

  5. Pitch deviation analysis of pathological voice in connected speech.

    Science.gov (United States)

    Laflen, J Brandon; Lazarus, Cathy L; Amin, Milan R

    2008-02-01

    This study compares normal and pathologic voices using a novel voice analysis algorithm that examines pitch deviation during connected speech. The study evaluates the clinical potential of the algorithm as a mechanism to distinguish between normal and pathologic voices using connected speech. Adult vocalizations from normal subjects and from patients with known benign free-edge vocal fold lesions were analyzed. Recordings had been previously obtained in quiet under controlled conditions. Two phrases and a sustained /a/ were recorded per subject. The subject populations consisted of 10 normal and 31 abnormal subjects. The voice analysis algorithm generated 2-dimensional patterns that represent pitch deviation in time and under variable window widths. Measures were collected from these patterns for window widths between 10 and 250 ms. For comparison, jitter and shimmer measures were collected from the sustained /a/ by means of the Computerized Speech Lab (CSL). A t-test and tests of sensitivity and specificity assessed discrimination between the normal and abnormal populations. More than 58% of the measures collected from connected speech outperformed the CSL jitter and shimmer measures in population discrimination. Twenty-five percent of the experimental measures (including /a/) indicated significantly different populations in connected speech.
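The "2-dimensional patterns" of pitch deviation over time and window width described above can be pictured as follows. This is only a guess at the general shape of such an analysis, not the authors' algorithm; the frame rate, the set of window widths, and the local-mean definition of deviation are placeholders.

```python
import numpy as np

def pitch_deviation_pattern(f0, frame_rate_hz, widths_ms=(10, 50, 100, 250)):
    """Deviation of an F0 track from its local mean, one row per analysis
    window width (illustrative time-by-window-width deviation pattern)."""
    rows = []
    for w_ms in widths_ms:
        w = max(1, int(w_ms * frame_rate_hz / 1000.0))
        pad = w // 2
        fpad = np.pad(f0, pad, mode="edge")       # avoid zero-padding edge bias
        local_mean = np.convolve(fpad, np.ones(w) / w, mode="same")[pad:pad + len(f0)]
        rows.append(np.abs(f0 - local_mean))
    return np.vstack(rows)          # shape: (len(widths_ms), len(f0))
```

A perfectly steady pitch track yields a near-zero pattern at every window width, while a jittery (pathological-style) track yields large deviations, which is the kind of contrast a normal/pathologic discriminator could exploit.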

  6. Sperry Univac speech communications technology

    Science.gov (United States)

    Medress, Mark F.

    1977-01-01

    Technology and systems for effective verbal communication with computers were developed. A continuous speech recognition system for verbal input, a word spotting system to locate key words in conversational speech, prosodic tools to aid speech analysis, and a prerecorded voice response system for speech output are described.

  7. Voice and Speech after Laryngectomy

    Science.gov (United States)

    Stajner-Katusic, Smiljka; Horga, Damir; Musura, Maja; Globlek, Dubravka

    2006-01-01

    The aim of the investigation is to compare voice and speech quality in alaryngeal patients using esophageal speech (ESOP, eight subjects), electroacoustical speech aid (EACA, six subjects) and tracheoesophageal voice prosthesis (TEVP, three subjects). The subjects reading a short story were recorded in the sound-proof booth and the speech samples…

  8. Speech Correction in the Schools.

    Science.gov (United States)

    Eisenson, Jon; Ogilvie, Mardel

    An introduction to the problems and therapeutic needs of school age children whose speech requires remedial attention, the text is intended for both the classroom teacher and the speech correctionist. General considerations include classification and incidence of speech defects, speech correction services, the teacher as a speaker, the mechanism…

  9. Environmental Contamination of Normal Speech.

    Science.gov (United States)

    Harley, Trevor A.

    1990-01-01

    Environmentally contaminated speech errors (irrelevant words or phrases derived from the speaker's environment and erroneously incorporated into speech) are hypothesized to occur at a high level of speech processing, but with a relatively late insertion point. The data indicate that speech production processes are not independent of other…

  10. Speech processing in mobile environments

    CERN Document Server

    Rao, K Sreenivasa

    2014-01-01

    This book focuses on speech processing in the presence of low-bit-rate coding and varying background environments. The methods presented in the book exploit speech events that are robust in noisy environments. Accurate estimation of these crucial events is useful for carrying out various speech tasks such as speech recognition, speaker recognition, and speech rate modification in mobile environments. The authors provide insights into designing and developing robust methods to process speech in mobile environments, covering temporal and spectral enhancement methods that minimize the effect of noise, and examining methods and models for speech and speaker recognition applications in mobile environments.

  11. Early detection of nonnative alleles in fish populations: When sample size actually matters

    Science.gov (United States)

    Croce, Patrick Della; Poole, Geoffrey C.; Payne, Robert A.; Gresswell, Bob

    2017-01-01

    Reliable detection of nonnative alleles is crucial for the conservation of sensitive native fish populations at risk of introgression. Typically, nonnative alleles in a population are detected through the analysis of genetic markers in a sample of individuals. Here we show that common assumptions associated with such analyses yield substantial overestimates of the likelihood of detecting nonnative alleles. We present a revised equation to estimate the likelihood of detecting nonnative alleles in a population with a given level of admixture. The new equation incorporates the effects of the genotypic structure of the sampled population and shows that conventional methods overestimate the likelihood of detection, especially when nonnative or F-1 hybrid individuals are present. Under such circumstances—which are typical of early stages of introgression and therefore most important for conservation efforts—our results show that improved detection of nonnative alleles arises primarily from increasing the number of individuals sampled rather than increasing the number of genetic markers analyzed. Using the revised equation, we describe a new approach to determining the number of individuals to sample and the number of diagnostic markers to analyze when attempting to monitor the arrival of nonnative alleles in native populations.
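The abstract's core point, that the conventional independence assumption overestimates detection probability when nonnative alleles are concentrated in a few F1 hybrids, can be illustrated with a small Monte Carlo comparison. The admixture level, sample sizes, and the all-heterozygous-F1 scenario below are illustrative assumptions; this is not the paper's revised equation.

```python
import random

def detect_naive(p, n_ind, n_markers):
    """Conventional estimate: every sampled allele copy is independently
    nonnative with probability p (the population admixture proportion)."""
    return 1.0 - (1.0 - p) ** (2 * n_ind * n_markers)

def detect_f1_structured(p, n_ind, trials=20000, seed=1):
    """Same admixture p, but carried entirely by F1 hybrids (heterozygous at
    every diagnostic marker): detection requires sampling at least one F1."""
    frac_f1 = 2.0 * p            # each F1 carries 50% nonnative alleles
    rng = random.Random(seed)
    hits = sum(
        any(rng.random() < frac_f1 for _ in range(n_ind))
        for _ in range(trials)
    )
    return hits / trials
```

With p = 0.05, 10 individuals, and 8 markers, the naive formula gives nearly 1.0 while the F1-structured estimate is only about 0.65, and adding markers does not help at all in the structured case, since one sampled F1 already reveals nonnative alleles at every marker. This mirrors the abstract's conclusion that sampling more individuals, not more markers, drives early detection.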

  12. Relative Contributions of the Dorsal vs. Ventral Speech Streams to Speech Perception are Context Dependent: a lesion study

    Directory of Open Access Journals (Sweden)

    Corianne Rogalsky

    2014-04-01

    The neural basis of speech perception has been debated for over a century. While it is generally agreed that the superior temporal lobes are critical for the perceptual analysis of speech, a major current topic is whether the motor system contributes to speech perception, with several conflicting findings attested. In a dorsal-ventral speech stream framework (Hickok & Poeppel 2007), this debate is essentially about the roles of the dorsal versus ventral speech processing streams. A major roadblock in characterizing the neuroanatomy of speech perception is task-specific effects. For example, much of the evidence for dorsal stream involvement comes from syllable discrimination type tasks, which have been found to behaviorally doubly dissociate from auditory comprehension tasks (Baker et al. 1981). Discrimination task deficits could be a result of difficulty perceiving the sounds themselves, which is the typical assumption, or they could be a result of failures in temporary maintenance of the sensory traces, or in the comparison and/or the decision process. Similar complications arise in perceiving sentences: the extent of inferior frontal (i.e., dorsal stream) activation during listening to sentences increases as a function of increased task demands (Love et al. 2006). Another complication is the stimulus: much evidence for dorsal stream involvement uses speech samples lacking semantic context (CVs, non-words). The present study addresses these issues in a large-scale lesion-symptom mapping study. 158 patients with focal cerebral lesions from the Multi-site Aphasia Research Consortium underwent a structural MRI or CT scan, as well as an extensive psycholinguistic battery. Voxel-based lesion symptom mapping was used to compare the neuroanatomy involved in the following speech perception tasks with varying phonological, semantic, and task loads: (i) two discrimination tasks of syllables (non-words and words, respectively), (ii) two auditory comprehension tasks…

  13. Global Freedom of Speech

    DEFF Research Database (Denmark)

    Binderup, Lars Grassme

    2007-01-01

    …, as opposed to a legal norm, that curbs exercises of the right to free speech that offend the feelings or beliefs of members from other cultural groups. The paper rejects the suggestion that acceptance of such a norm is in line with liberal egalitarian thinking. Following a review of the classical liberal egalitarian reasons for free speech - reasons from overall welfare, from autonomy and from respect for the equality of citizens - it is argued that these reasons outweigh the proposed reasons for curbing culturally offensive speech. Currently controversial cases such as that of the Danish Cartoon Controversy…

  14. The Rhetoric in English Speech

    Institute of Scientific and Technical Information of China (English)

    马鑫

    2014-01-01

    English speech has a very long history and has always been highly valued. People give speeches in economic activities, political forums and academic reports to express their opinions and to inform or persuade others. English speech plays a rather important role in English literature, and the distinctiveness of a speech's theme owes much to its rhetoric. This paper discusses parallelism, repetition and rhetorical questions in English speech, aiming to help readers better appreciate their charm.

  15. Discriminately Decreasing Discriminability with Learned Image Filters

    CERN Document Server

    Whitehill, Jacob

    2011-01-01

    In machine learning and computer vision, input images are often filtered to increase data discriminability. In some situations, however, one may wish to purposely decrease discriminability of one classification task (a "distractor" task), while simultaneously preserving information relevant to another (the task-of-interest): For example, it may be important to mask the identity of persons contained in face images before submitting them to a crowdsourcing site (e.g., Mechanical Turk) when labeling them for certain facial attributes. Another example is inter-dataset generalization: when training on a dataset with a particular covariance structure among multiple attributes, it may be useful to suppress one attribute while preserving another so that a trained classifier does not learn spurious correlations between attributes. In this paper we present an algorithm that finds optimal filters to give high discriminability to one task while simultaneously giving low discriminability to a distractor task. We present r...
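The core trade-off described in this record — high discriminability for the task-of-interest, low discriminability for the distractor — can be sketched as a Fisher-style generalized eigenproblem over a linear projection. This is a simplification for illustration (the paper learns image filters, and the scatter definitions and `lam` trade-off parameter below are assumptions, not the paper's objective):

```python
import numpy as np

def distractor_suppressing_filter(X, y_interest, y_distractor, lam=1.0):
    """Find a 1-D linear filter w that keeps the task-of-interest separable
    while suppressing the distractor task.

    X: (n_samples, n_features); y_*: class labels for the two tasks.
    Maximizes between-class scatter of the interest labels relative to
    (total scatter + lam * between-class scatter of the distractor).
    """
    def between_scatter(X, y):
        mu = X.mean(axis=0)
        S = np.zeros((X.shape[1], X.shape[1]))
        for c in np.unique(y):
            d = X[y == c].mean(axis=0) - mu
            S += (y == c).sum() * np.outer(d, d)
        return S

    Sb_interest = between_scatter(X, y_interest)
    Sb_distract = between_scatter(X, y_distractor)
    St = np.cov(X, rowvar=False) * (len(X) - 1)  # total scatter as stand-in
    B = St + lam * Sb_distract + 1e-6 * np.eye(X.shape[1])
    # Generalized eigenproblem: Sb_interest w = lambda * B w
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B, Sb_interest))
    w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    return w / np.linalg.norm(w)
```

Raising `lam` penalizes directions along which the distractor classes separate, so the returned filter preserves the task-of-interest while masking the distractor attribute.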

  16. Bayesian estimation of keyword confidence in Chinese continuous speech recognition

    Institute of Scientific and Technical Information of China (English)

    HAO Jie; LI Xing

    2003-01-01

    In a syllable-based speaker-independent Chinese continuous speech recognition system based on classical Hidden Markov Model (HMM), a Bayesian approach of keyword confidence estimation is studied, which utilizes both acoustic layer scores and syllable-based statistical language model (LM) score. The Maximum a posteriori (MAP) confidence measure is proposed, and the forward-backward algorithm calculating the MAP confidence scores is deduced. The performance of the MAP confidence measure is evaluated in keyword spotting application and the experiment results show that the MAP confidence scores provide high discriminability for keyword candidates. Furthermore, the MAP confidence measure can be applied to various speech recognition applications.
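The MAP confidence idea — combining an acoustic log-likelihood with a language-model log prior and normalizing over competing hypotheses — can be sketched as a Bayes computation over a closed candidate set. This is illustrative only; the paper's forward-backward algorithm normalizes over syllable lattices rather than a fixed list, and the candidate names below are hypothetical:

```python
import math

def map_confidence(log_acoustic, log_lm, candidates):
    """MAP-style keyword confidence: P(k | O) for each candidate k.

    log_acoustic[k]: acoustic log-likelihood log p(O | k).
    log_lm[k]: language-model log prior log P(k).
    Posteriors are normalized over the candidate set (log-sum-exp for
    numerical stability), approximating the sum over all hypotheses.
    """
    log_joint = {k: log_acoustic[k] + log_lm[k] for k in candidates}
    m = max(log_joint.values())
    z = m + math.log(sum(math.exp(v - m) for v in log_joint.values()))
    return {k: math.exp(log_joint[k] - z) for k in candidates}
```

A keyword candidate is then accepted when its posterior exceeds a tuned threshold, which is what gives the confidence score its discriminability between true and false keyword hits.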

  17. Unenthusiastic Europeans or Affected English: the Impact of Intonation on the Overall Make-up of Speech

    Directory of Open Access Journals (Sweden)

    Smiljana Komar

    2005-06-01

    Full Text Available Attitudes and emotions are expressed by linguistic as well as extra-linguistic features. The linguistic features comprise the lexis, the word-order and the intonation of the utterance. The purpose of this article is to examine the impact of intonation on our perception of speech. I will attempt to show that our expression, as well as our perception and understanding of attitudes and emotions are realized in accordance with the intonation patterns typical of the mother tongue. When listening to non-native speakers using our mother tongue we expect and tolerate errors in pronunciation, grammar and lexis but are quite ignorant and intolerant of non-native intonation patterns. Foreigners often sound unenthusiastic to native English ears. On the basis of the results obtained from an analysis of speech produced by 21 non-native speakers of English, including Slovenes, I will show that the reasons for such an impression of being unenthusiastic stem from different tonality and tonicity rules, as well as from the lack of the fall-rise tone and a very narrow pitch range with no or very few pitch jumps or slumps.

  18. Feedback in online course for non-native English-speaking students

    CERN Document Server

    Olesova, Larisa

    2013-01-01

    Feedback in Online Course for Non-Native English-Speaking Students is an investigation of the effectiveness of audio and text feedback provided in English in an online course for non-native English-speaking students. The study presents results showing how audio and text feedback can affect non-native English-speaking students' higher-order learning as they participate in an asynchronous online course. It also discusses how students perceive both types of feedback. In addition, the study examines how the impact and perceptions differ when the instructor giving the

  19. Infant directed speech and the development of speech perception: enhancing development or an unintended consequence?

    Science.gov (United States)

    McMurray, Bob; Kovack-Lesh, Kristine A; Goodwin, Dresden; McEchron, William

    2013-11-01

    Infant directed speech (IDS) is a speech register characterized by simpler sentences, a slower rate, and more variable prosody. Recent work has implicated it in more subtle aspects of language development. Kuhl et al. (1997) demonstrated that segmental cues for vowels are affected by IDS in a way that may enhance development: the average locations of the extreme "point" vowels (/a/, /i/ and /u/) are further apart in acoustic space. If infants learn speech categories, in part, from the statistical distributions of such cues, these changes may specifically enhance speech category learning. We revisited this by asking (1) if these findings extend to a new cue (Voice Onset Time, a cue for voicing); (2) whether they extend to the interior vowels which are much harder to learn and/or discriminate; and (3) whether these changes may be an unintended phonetic consequence of factors like speaking rate or prosodic changes associated with IDS. Eighteen caregivers were recorded reading a picture book including minimal pairs for voicing (e.g., beach/peach) and a variety of vowels to either an adult or their infant. Acoustic measurements suggested that VOT was different in IDS, but not in a way that necessarily supports better development, and that these changes are almost entirely due to slower rate of speech of IDS. Measurements of the vowel suggested that in addition to changes in the mean, there was also an increase in variance, and statistical modeling suggests that this may counteract the benefit of any expansion of the vowel space. As a whole this suggests that changes in segmental cues associated with IDS may be an unintended by-product of the slower rate of speech and different prosodic structure, and do not necessarily derive from a motivation to enhance development.
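The study's central statistical point — that expanding the mean locations of vowel categories may be counteracted by increased variance — follows from any d'-style separability index, since discriminability scales with mean separation divided by spread. A toy illustration with hypothetical one-dimensional formant values (not the paper's measurements or model):

```python
import math

def category_separability(mu1, sd1, mu2, sd2):
    """d'-style discriminability of two 1-D phonetic categories:
    mean separation relative to the pooled standard deviation."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2.0)
    return abs(mu1 - mu2) / pooled_sd

# Hypothetical values: IDS pushes the category means apart but is noisier.
adult_directed = category_separability(500, 60, 800, 60)
infant_directed = category_separability(450, 110, 880, 110)
```

With these numbers the infant-directed categories are farther apart yet less separable, which is exactly the trade-off the statistical modeling in the paper points to.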

  20. Visual speech form influences the speed of auditory speech processing.

    Science.gov (United States)

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2013-09-01

    An important property of visual speech (movements of the lips and mouth) is that it generally begins before auditory speech. Research using brain-based paradigms has demonstrated that seeing visual speech speeds up the activation of the listener's auditory cortex but it is not clear whether these observed neural processes link to behaviour. It was hypothesized that the very early portion of visual speech (occurring before auditory speech) will allow listeners to predict the following auditory event and so facilitate the speed of speech perception. This was tested in the current behavioural experiments. Further, we tested whether the salience of the visual speech played a role in this speech facilitation effect (Experiment 1). We also determined the relative contributions that visual form (what) and temporal (when) cues made (Experiment 2). The results showed that visual speech cues facilitated response times and that this was based on form rather than temporal cues. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Anxiety and ritualized speech

    Science.gov (United States)

    Lalljee, Mansur; Cook, Mark

    1975-01-01

    The experiment examines the effects of anxiety on a number of words that seem irrelevant to semantic communication. The Units of Ritualized Speech (URSs) considered are: 'I mean', 'in fact', 'really', 'sort of', 'well' and 'you know'. (Editor)

  3. Speech intelligibility in hospitals.

    Science.gov (United States)

    Ryherd, Erica E; Moeller, Michael; Hsu, Timothy

    2013-07-01

    Effective communication between staff members is key to patient safety in hospitals. A variety of patient care activities including admittance, evaluation, and treatment rely on oral communication. Surprisingly, published information on speech intelligibility in hospitals is extremely limited. In this study, speech intelligibility measurements and occupant evaluations were conducted in 20 units of five different U.S. hospitals. A variety of unit types and locations were studied. Results show that, overall, no unit had "good" intelligibility based on the speech intelligibility index (SII > 0.75), and several locations were found to have "poor" intelligibility. The study documents speech intelligibility across a variety of hospitals and unit types, offers some evidence of the positive impact of absorption on intelligibility, and identifies areas for future research.

  4. Speech disorders - children

    Science.gov (United States)

  5. Speech impairment (adult)

    Science.gov (United States)

  6. Recognizing GSM Digital Speech

    OpenAIRE

    Gallardo-Antolín, Ascensión; Peláez-Moreno, Carmen; Díaz-de-María, Fernando

    2005-01-01

    The Global System for Mobile (GSM) environment encompasses three main problems for automatic speech recognition (ASR) systems: noisy scenarios, source coding distortion, and transmission errors. The first one has already received much attention; however, source coding distortion and transmission errors must be explicitly addressed. In this paper, we propose an alternative front-end for speech recognition over GSM networks. This front-end is specially conceived to be effective against source c...

  7. Speech Compression and Synthesis

    Science.gov (United States)

    1980-10-01

    Phonological rules combined with diphone ... improved the algorithms used by the phonetic synthesis program for gain normalization and time ... phonetic vocoder, spectral template. This report describes our work for the past two years on speech compression and synthesis. Since there was an ... From Block 19: speech recognition, phoneme recognition. ... initial design for a phonetic recognition program. We also recorded and partially labeled a ...

  9. The Visual Mismatch Negativity Elicited with Visual Speech Stimuli

    Directory of Open Access Journals (Sweden)

    Benjamin T. Files

    2013-07-01

    Full Text Available The visual mismatch negativity (vMMN), deriving from the brain's response to stimulus deviance, is thought to be generated by the cortex that represents the stimulus. The vMMN response to visual speech stimuli was used in a study of the lateralization of visual speech processing. Previous research suggested that the right posterior temporal cortex has specialization for processing simple non-speech face gestures, and the left posterior temporal cortex has specialization for processing visual speech gestures. Here, visual speech consonant-vowel (CV) stimuli with controlled perceptual dissimilarities were presented in an electroencephalography (EEG) vMMN paradigm. The vMMNs were obtained using the comparison of event-related potentials (ERPs) for separate CVs in their roles as deviant versus their roles as standard. Four separate vMMN contrasts were tested, two with the perceptually far deviants (i.e., zha or fa) and two with the near deviants (i.e., zha or ta). Only far deviants evoked the vMMN response over the left posterior temporal cortex. All four deviants evoked vMMNs over the right posterior temporal cortex. The results are interpreted as evidence that the left posterior temporal cortex represents speech stimuli that are perceived as different consonants, and the right posterior temporal cortex represents face gestures that may not be discriminable as different CVs.
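The identity-matched comparison this paradigm relies on — each CV's ERP in its role as deviant minus the same CV's ERP in its role as standard — can be sketched minimally. Array shapes and values below are hypothetical; real vMMN analyses add filtering, baseline correction and statistical testing:

```python
import numpy as np

def vmmn_difference_wave(erp_as_deviant, erp_as_standard):
    """Identity-matched vMMN sketch: subtract a stimulus's trial-averaged
    ERP in its role as standard from its ERP in its role as deviant, so
    the difference reflects deviance processing rather than low-level
    stimulus properties.

    Both inputs: (n_trials, n_timepoints) arrays for the SAME CV stimulus.
    Returns the (n_timepoints,) difference wave (deviant minus standard).
    """
    return erp_as_deviant.mean(axis=0) - erp_as_standard.mean(axis=0)
```

Using the same stimulus in both roles is the design choice that lets the negativity be attributed to deviance detection in the cortex representing that stimulus.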

  10. Automatically identifying characteristic features of non-native English accents

    NARCIS (Netherlands)

    Bloem, Jelke; Wieling, Martijn; Nerbonne, John; Côté, Marie-Hélène; Knooihuizen, Remco; Nerbonne, John

    2016-01-01

    In this work, we demonstrate the application of statistical measures from dialectometry to the study of accented English speech. This new methodology enables a more quantitative approach to the study of accents. Studies on spoken dialect data have shown that a combination of representativeness (the

  11. Perception of police on discrimination in Serbia

    Directory of Open Access Journals (Sweden)

    Zekavica Radomir

    2014-01-01

    Full Text Available This paper presents and analyses results deriving from the research on the attitudes of criminal investigation officers in five police departments in Serbia: Belgrade, Novi Sad, Novi Pazar, Subotica and Vranje. The case studies examined the attitudes of members of criminal investigation police and their perception(s) of discrimination towards vulnerable groups. The study aimed to determine the level of animosity exhibited in speech, to analyse socio-ethnic distance, to observe reactions towards measures designed to improve the situation of vulnerable groups, to consider the relationship among institutions regarding their responsibility for the occurrence of discrimination and its impact on the reduction of it, to discuss personal experiences of discrimination and to analyse attitudes regarding certain claims of a stereotypical character. Moreover, the paper also presents a comparative analysis of similar surveys on the perception of citizens towards discrimination that have thus far been conducted in Serbia. The results demonstrated that the police in Serbia did not exhibit a particularly discriminatory attitude towards citizens. It is important to note that the most prominent socio-ethnic distances were exhibited in relation to Roma and members of the LGBT community.

  12. Native language shapes automatic neural processing of speech.

    Science.gov (United States)

    Intartaglia, Bastien; White-Schwoch, Travis; Meunier, Christine; Roman, Stéphane; Kraus, Nina; Schön, Daniele

    2016-08-01

    The development of the phoneme inventory is driven by the acoustic-phonetic properties of one's native language. Neural representation of speech is known to be shaped by language experience, as indexed by cortical responses, and recent studies suggest that subcortical processing also exhibits this attunement to native language. However, most work to date has focused on the differences between tonal and non-tonal languages that use pitch variations to convey phonemic categories. The aim of this cross-language study is to determine whether subcortical encoding of speech sounds is sensitive to language experience by comparing native speakers of two non-tonal languages (French and English). We hypothesized that neural representations would be more robust and fine-grained for speech sounds that belong to the native phonemic inventory of the listener, and especially for the dimensions that are phonetically relevant to the listener such as high frequency components. We recorded neural responses of American English and French native speakers, listening to natural syllables of both languages. Results showed that, independently of the stimulus, American participants exhibited greater neural representation of the fundamental frequency compared to French participants, consistent with the importance of the fundamental frequency to convey stress patterns in English. Furthermore, participants showed more robust encoding and more precise spectral representations of the first formant when listening to the syllable of their native language as compared to non-native language. These results align with the hypothesis that language experience shapes sensory processing of speech and that this plasticity occurs as a function of what is meaningful to a listener.

  13. Comparison of acoustically coupled and mechanically coupled speech.

    Science.gov (United States)

    Cook, R O; Hamm, C W; Thomas, W G; Royster, L H

    1981-01-01

    Phonetically balanced word lists were mechanically coupled onto the ossicular chain of anesthetized guinea pigs by piezoelectric-type drivers and the resulting cochlear microphonic (CM) recorded on magnetic tape. Similar recordings of the CM resulting from free field tympanic membrane stimulation by hi-fi speakers were also obtained. The recordings were compared by conventional discrimination testing. In discrimination testing of all the raw recordings, listeners achieved essentially perfect scores. Addition of masking noise sufficient to reduce mean discrimination scores to 65-70% revealed no significant discrimination differences. When piezoelectrically initiated, CM-derived lists were compared with similar lists passed through hearing aids in an anechoic chamber, the preference of a panel of listeners for the quality of mechanically coupled speech was significantly higher. Mechanical-acoustical displacement equivalency at normal physiological levels and freedom from mechanical-electrical artifact were demonstrated by measurement of ossicular chain displacement by a fiber optic lever displacement transducer.

  14. A window into the intoxicated mind? Speech as an index of psychoactive drug effects.

    Science.gov (United States)

    Bedi, Gillinder; Cecchi, Guillermo A; Slezak, Diego F; Carrillo, Facundo; Sigman, Mariano; de Wit, Harriet

    2014-09-01

    Abused drugs can profoundly alter mental states in ways that may motivate drug use. These effects are usually assessed with self-report, an approach that is vulnerable to biases. Analyzing speech during intoxication may present a more direct, objective measure, offering a unique 'window' into the mind. Here, we employed computational analyses of speech semantic and topological structure after ±3,4-methylenedioxymethamphetamine (MDMA; 'ecstasy') and methamphetamine in 13 ecstasy users. In 4 sessions, participants completed a 10-min speech task after MDMA (0.75 and 1.5 mg/kg), methamphetamine (20 mg), or placebo. Latent Semantic Analyses identified the semantic proximity between speech content and concepts relevant to drug effects. Graph-based analyses identified topological speech characteristics. Group-level drug effects on semantic distances and topology were assessed. Machine-learning analyses (with leave-one-out cross-validation) assessed whether speech characteristics could predict drug condition in the individual subject. Speech after MDMA (1.5 mg/kg) had greater semantic proximity than placebo to the concepts friend, support, intimacy, and rapport. Speech on MDMA (0.75 mg/kg) had greater proximity to empathy than placebo. Conversely, speech on methamphetamine was further from compassion than placebo. Classifiers discriminated between MDMA (1.5 mg/kg) and placebo with 88% accuracy, and MDMA (1.5 mg/kg) and methamphetamine with 84% accuracy. For the two MDMA doses, the classifier performed at chance. These data suggest that automated semantic speech analyses can capture subtle alterations in mental state, accurately discriminating between drugs. The findings also illustrate the potential for automated speech-based approaches to characterize clinically relevant alterations to mental state, including those occurring in psychiatric illness.
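The semantic-proximity step in this analysis — measuring how close a speech sample sits to a concept such as "friend" or "empathy" — reduces to a cosine similarity in an embedding space. A minimal sketch with hypothetical two-dimensional vectors (the study used high-dimensional Latent Semantic Analysis embeddings, not these toy values):

```python
import numpy as np

def semantic_proximity(word_vectors, concept_vector):
    """Proximity of a speech sample to a concept, in the spirit of the
    LSA analysis: cosine similarity between the mean of the sample's
    word vectors and the concept's vector.

    word_vectors: (n_words, dim) array for the transcribed speech.
    concept_vector: (dim,) array for the target concept.
    """
    doc = np.mean(word_vectors, axis=0)
    return float(np.dot(doc, concept_vector) /
                 (np.linalg.norm(doc) * np.linalg.norm(concept_vector)))
```

Per-concept proximities like these, together with graph-topological speech features, are the kind of inputs the study's machine-learning classifiers used to discriminate drug conditions.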

  15. Retaining a Foothold on the Slippery Paths of Academia: University Women, Indirect Discrimination, and the Academic Marketplace

    Science.gov (United States)

    Wilson, Jacqueline Z.; Marks, Genee; Noone, Lynne; Hamilton-Mackenzie, Jennifer

    2010-01-01

    This paper examines indirect discrimination in Australian universities that tends to obstruct and delay women's academic careers. The topic is defined and contextualised via a 1998 speech by the Australian Human Rights Commission's Sex Discrimination Commissioner, juxtaposed with a brief contemporaneous exemplar. The paper discusses the prevalence…

  16. 2011 Invasive Non-native Plant Inventory dataset : Quivira National Wildlife Refuge

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This dataset is a product of the 2011 invasive non-native plant inventory conducted at Quivira National Wildlife Refuge by Utah State University. This inventory...

  17. Recreational freshwater fishing drives non-native aquatic species richness patterns at a continental scale

    Data.gov (United States)

    U.S. Environmental Protection Agency — Aim. Mapping the geographic distribution of non-native aquatic species is a critically important precursor to understanding the anthropogenic and environmental...

  18. Non-native Chinese Foreign Language (CFL) Teachers: Identity and Discourse

    DEFF Research Database (Denmark)

    Zhang, Chun

    2014-01-01

    Abstract Non-native Chinese foreign language (CFL) teacher identity is an emerging subject of research interest in teacher education. Yet, limited study has been done on the construction of Non-native CFL teachers in their home culture. Guided by a concept of teacher identity-in-discourse, the paper reports on a qualitative study that explores how three Non-native CFL teachers construct their teacher identity as they interact with Danish students while teaching CFL at one Danish university. Data collected from in-depth interviews over a period of two years show that the Non-native CFL teachers face tensions and challenges in constructing their identities as CFL teachers, and the tensions and challenges that arose from Danish teaching culture could influence the Non-native CFL teachers' contributions to CFL teaching in their home cultures. The findings further show that in order to cope...

  19. Computer-based speech therapy for childhood speech sound disorders.

    Science.gov (United States)

    Furlong, Lisa; Erickson, Shane; Morris, Meg E

    2017-07-01

    With the current worldwide workforce shortage of Speech-Language Pathologists, new and innovative ways of delivering therapy to children with speech sound disorders are needed. Computer-based speech therapy may be an effective and viable means of addressing service access issues for children with speech sound disorders. To evaluate the efficacy of computer-based speech therapy programs for children with speech sound disorders. Studies reporting the efficacy of computer-based speech therapy programs were identified via a systematic, computerised database search. Key study characteristics, results, main findings and details of computer-based speech therapy programs were extracted. The methodological quality was evaluated using a structured critical appraisal tool. 14 studies were identified and a total of 11 computer-based speech therapy programs were evaluated. The results showed that computer-based speech therapy is associated with positive clinical changes for some children with speech sound disorders. There is a need for collaborative research between computer engineers and clinicians, particularly during the design and development of computer-based speech therapy programs. Evaluation using rigorous experimental designs is required to understand the benefits of computer-based speech therapy. The reader will be able to 1) discuss how computer-based speech therapy has the potential to improve service access for children with speech sound disorders, 2) explain the ways in which computer-based speech therapy programs may enhance traditional tabletop therapy and 3) compare the features of computer-based speech therapy programs designed for different client populations. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. SPEECH DISORDERS ENCOUNTERED DURING SPEECH THERAPY AND THERAPY TECHNIQUES

    Directory of Open Access Journals (Sweden)

    İlhan ERDEM

    2013-06-01

    Full Text Available Speech, a physical and mental process, uses agreed signs and sounds to turn an idea in the mind into a message. To identify the sounds of speech, it is essential to know the structure and function of the various organs that allow conversation to happen. Because speech is a physical and mental process, many factors can lead to speech disorders. A speech disorder can be related to language acquisition, or it can be caused by many medical and psychological factors. Speaking is the collective work of many organs, like an orchestra. Speech is a very complex skill with a mental dimension, so it must be determined which of these obstacles inhibits conversation. A speech disorder is a defect in speech flow, rhythm, pitch, stress, composition or vocalization. In this study, speech disorders such as articulation disorders, stuttering, aphasia, dysarthria, local dialect speech, tongue- and lip-laziness, and rapid speech are examined in terms of language skills. The causes of speech disorders were investigated, and suggestions for remediation were presented and discussed.

  1. [Speech perception in the first two years].

    Science.gov (United States)

    Bertoncini, J; Cabrera, L

    2014-10-01

    The development of speech perception relies upon early auditory capacities (i.e. discrimination, segmentation and representation). Infants are able to discriminate most of the phonetic contrasts occurring in natural languages, and at the end of the first year, this universal ability starts to narrow down to the contrasts used in the environmental language. During the second year, this specialization is characterized by the development of comprehension, lexical organization and word production. That process appears now as the result of multiple interactions between perceptual, cognitive and social developing abilities. Distinct factors like word acquisition, sensitivity to the statistical properties of the input, or even the nature of the social interactions, might play a role at one time or another during the acquisition of phonological patterns. Experience with the native language is necessary for phonetic segments to be functional units of perception and for speech sound representations (words, syllables) to be more specified and phonetically organized. This evolution goes on beyond 24 months of age in a learning context characterized from the early stages by the interaction with other developing (linguistic and non-linguistic) capacities.

  2. Positive effects of non-native grasses on the growth of a native annual in a southern California ecosystem.

    Science.gov (United States)

    Pec, Gregory J; Carlton, Gary C

    2014-01-01

    Fire disturbance is considered a major factor in the promotion of non-native plant species. Non-native grasses are adapted to fire and can alter environmental conditions and reduce resource availability in native coastal sage scrub and chaparral communities of southern California. In these communities persistence of non-native grasses following fire can inhibit establishment and growth of woody species. This may allow certain native herbaceous species to colonize and persist beneath gaps in the canopy. A field manipulative experiment with control, litter, and bare ground treatments was used to examine the impact of non-native grasses on growth and establishment of a native herbaceous species, Cryptantha muricata. C. muricata seedling survival, growth, and reproduction were greatest in the control treatment where non-native grasses were present. C. muricata plants growing in the presence of non-native grasses produced more than twice the number of flowers and more than twice the reproductive biomass of plants growing in the treatments where non-native grasses were removed. Total biomass and number of fruits were also greater in the plants growing in the presence of non-native grasses. Total biomass and reproductive biomass was also greater in late germinants than early germinants growing in the presence of non-native grasses. This study suggests a potential positive effect of non-native grasses on the performance of a particular native annual in a southern California ecosystem.

  4. Practical speech user interface design

    CERN Document Server

    Lewis, James R

    2010-01-01

    Although speech is the most natural form of communication between humans, most people find using speech to communicate with machines anything but natural. Drawing from psychology, human-computer interaction, linguistics, and communication theory, Practical Speech User Interface Design provides a comprehensive yet concise survey of practical speech user interface (SUI) design. It offers practice-based and research-based guidance on how to design effective, efficient, and pleasant speech applications that people can really use. Focusing on the design of speech user interfaces for IVR application

  5. SVM with discriminative dynamic time alignment

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In the past several years, support vector machines (SVM) have achieved huge success in many fields, especially in pattern recognition. But the standard SVM cannot deal with length-variable vectors, which is one severe obstacle for its application to some important areas, such as speech recognition and part-of-speech tagging. The paper proposed a novel SVM with discriminative dynamic time alignment (DDTA-SVM) to solve this problem. When training the DDTA-SVM classifier, different time alignment strategies were adopted to manipulate the training samples in the kernel functions according to their category information, which contributed to great improvements in the training speed and generalization capability of the classifier. Since the alignment operator was embedded in the kernel functions, the training algorithms of the standard SVM were still compatible with DDTA-SVM. In order to increase the reliability of the classification, a new classification algorithm was suggested. Preliminary experimental results on a Chinese confusable-syllable speech classification task show that DDTA-SVM obtains faster convergence speed and better classification performance than dynamic time alignment kernel SVM (DTAK-SVM). Moreover, DDTA-SVM also gives higher classification precision compared to the conventional HMM. This proves that the proposed method is effective, especially for confusable length-variable pattern classification tasks.
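The core idea above, an alignment operator embedded in the kernel so that an SVM can compare length-variable sequences, can be illustrated with a plain dynamic-time-alignment kernel. This is a simplified sketch of the general technique, not the paper's category-dependent DDTA variant; the function names and the `gamma` parameter are our own assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-alignment distance between two variable-length
    feature sequences a (n x d) and b (m x d)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m] / (n + m)   # length-normalised alignment cost

def dtw_kernel(a, b, gamma=0.5):
    """Radial-basis-style kernel built on the alignment distance;
    such a kernel can be fed to any SVM that accepts precomputed
    Gram matrices."""
    return np.exp(-gamma * dtw_distance(a, b))
```

Because the alignment lives inside the kernel, the standard SVM training machinery is untouched, which is exactly the compatibility property the abstract highlights.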

  6. The role of rhythm in perceiving speech in noise: a comparison of percussionists, vocalists and non-musicians.

    Science.gov (United States)

    Slater, Jessica; Kraus, Nina

    2016-02-01

    The natural rhythms of speech help a listener follow what is being said, especially in noisy conditions. There is increasing evidence for links between rhythm abilities and language skills; however, the role of rhythm-related expertise in perceiving speech in noise is unknown. The present study assesses musical competence (rhythmic and melodic discrimination), speech-in-noise perception and auditory working memory in young adult percussionists, vocalists and non-musicians. Outcomes reveal that better ability to discriminate rhythms is associated with better sentence-in-noise (but not words-in-noise) perception across all participants. These outcomes suggest that sensitivity to rhythm helps a listener understand unfolding speech patterns in degraded listening conditions, and that observations of a "musician advantage" for speech-in-noise perception may be mediated in part by superior rhythm skills.

  7. Speech-Language Therapy (For Parents)

    Science.gov (United States)

    ... Speech-language pathologists (SLPs), often informally known as speech therapists, are professionals educated in the study of human ...

  8. Non-native fishes in Florida freshwaters: a literature review and synthesis

    Science.gov (United States)

    Schofield, Pamela J.; Loftus, William F.

    2015-01-01

    Non-native fishes have been known from freshwater ecosystems of Florida since the 1950s, and dozens of species have established self-sustaining populations. Nonetheless, no synthesis of data collected on those species in Florida has been published until now. We searched the literature for peer-reviewed publications reporting original data for 42 species of non-native fishes in Florida that are currently established, were established in the past, or are sustained by human intervention. Since the 1950s, the number of non-native fish species increased steadily at a rate of roughly six new species per decade. Studies documented (in decreasing abundance): geographic location/range expansion, life- and natural-history characteristics (e.g., diet, habitat use), ecophysiology, community composition, population structure, behaviour, aquatic-plant management, and fisheries/aquaculture. Although there is a great deal of taxonomic uncertainty and confusion associated with many taxa, very few studies focused on clarifying taxonomic ambiguities of non-native fishes in the State. Most studies were descriptive; only 15 % were manipulative. Risk assessments, population-control studies and evaluations of effects of non-native fishes were rare topics for research, although they are highly valued by natural-resource managers. Though some authors equated lack of data with lack of effects, research is needed to confirm or deny conclusions. Much more is known regarding the effects of lionfish (Pterois spp.) on native fauna, despite its much shorter establishment time. Natural-resource managers need biological and ecological information to make policy decisions regarding non-native fishes. Given the near-absence of empirical data on effects of Florida non-native fishes, and the lengthy time-frames usually needed to collect such information, we provide suggestions for data collection in a manner that may be useful in the evaluation and prediction of non-native fish effects.

  9. Spatial arrangement overrules environmental factors to structure native and non-native assemblages of synanthropic harvestmen.

    Directory of Open Access Journals (Sweden)

    Christoph Muster

    Full Text Available Understanding how space affects the occurrence of native and non-native species is essential for inferring processes that shape communities. However, studies considering spatial and environmental variables for the entire community - as well as for the native and non-native assemblages in a single study - are scarce for animals. Harvestmen communities in central Europe have undergone drastic turnovers during the past decades, with several newly immigrated species, and thus provide a unique system to study such questions. We studied the wall-dwelling harvestmen communities from 52 human settlements in Luxembourg and found the assemblages to be largely dominated by non-native species (64% of specimens). Community structure was analysed using Moran's eigenvector maps as spatial variables, and landcover variables at different radii (500 m, 1000 m, 2000 m) in combination with climatic parameters as environmental variables. A surprisingly high portion of pure spatial variation (15.7% of total variance) exceeded the environmental (10.6%) and shared (4%) components of variation, but we found only minor differences between native and non-native assemblages. This could result from the ecological flexibility of both native and non-native harvestmen, which are not restricted to urban habitats but also inhabit surrounding semi-natural landscapes. Nevertheless, urban landcover variables explained more variation in the non-native community, whereas coverage of semi-natural habitats (forests, rivers) at broader radii better explained the native assemblage. This indicates that some urban characteristics apparently facilitate the establishment of non-native species. We found no evidence for competitive replacement of native by invasive species, but rather a community with a novel combination of native and non-native species.

  10. Effects of Nonnative Ungulate Removal on Plant Communities and Soil Biogeochemistry in Tropical Forests

    Science.gov (United States)

    Cole, R. J.; Litton, C. M.; Giardina, C. P.; Sparks, J. P.

    2014-12-01

    Non-native ungulates have substantial impacts on native ecosystems globally, altering both plant communities and soil biogeochemistry. Across tropical and temperate ecosystems, land managers fence and remove non-native ungulates to conserve native biodiversity, a costly management action, yet long-term outcomes are not well quantified. Specifically, knowledge gaps include: (i) the magnitude and time frame of plant community recovery; (ii) the response of non-native invasive plants; and (iii) changes to soil biogeochemistry. In 2010, we established a series of paired ungulate presence vs. removal plots that span a 20 yr. chronosequence in tropical montane wet forests on the Island of Hawaii to quantify the impacts and temporal legacy of feral pig removal on plant communities and soil biogeochemistry. We also compared soil biogeochemistry in targeted areas of low and high feral pig impact. Our work shows that both native and non-native vegetation respond positively to release from top-down control following removal of feral pigs, but species of high conservation concern recover only if initially present at the time of non-native ungulate removal. Feral pig impacts on soil biogeochemistry appear to last for at least 20 years following ungulate removal. We observed that both soil physical and chemical properties changed with feral pig removal. Soil bulk density and volumetric water content decreased while extractable base cations and inorganic N increased in low vs. high feral pig impact areas. We hypothesize that altered soil biogeochemistry facilitates continued invasions by non-native plants, even decades after non-native ungulate removal. Future work will concentrate on comparisons between wet and dry forest ecosystems and test whether manipulation of soil nutrients can be used to favor native vs. non-native plant establishment.

  11. Trophic consequences of non-native pumpkinseed Lepomis gibbosus for native pond fishes

    OpenAIRE

    Copp, G. H.; Britton, J R; Guo, Z.; Edmonds-Brown, V; Pegg, Josie; L. VILIZZI; Davison, P.

    2017-01-01

    Introduced non-native fishes can cause considerable adverse impacts on freshwater ecosystems. The pumpkinseed Lepomis gibbosus, a North American centrarchid, is one of the most widely distributed non-native fishes in Europe, having established self-sustaining populations in at least 28 countries, including the U.K. where it is predicted to become invasive under warmer climate conditions. To predict the consequences of increased invasiveness, a field experiment was completed over a summer peri...

  12. Speech processing using maximum likelihood continuity mapping

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, John E. (Santa Fe, NM)

    2000-01-01

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  13. Speech processing using maximum likelihood continuity mapping

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.E.

    2000-04-18

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  14. Managing the reaction effects of speech disorders on speech ...

    African Journals Online (AJOL)

    Speech disorders are responsible for defective speaking. It is usually ... They occur as a result of the persistent frustrations that speech defectives encounter for speaking defectively. This paper ...

  15. Under-resourced speech recognition based on the speech manifold

    CSIR Research Space (South Africa)

    Sahraeian, R

    2015-09-01

    Full Text Available Conventional acoustic modeling involves estimating many parameters to effectively model feature distributions. The sparseness of speech and text data, however, degrades the reliability of the estimation process and makes speech recognition a...

  16. Auditory Cortical Deactivation during Speech Production and following Speech Perception: An EEG investigation of the temporal dynamics of the auditory alpha rhythm

    Directory of Open Access Journals (Sweden)

    David E Jenson

    2015-10-01

    Full Text Available Sensorimotor integration within the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event-related spectral perturbation (ERSP) analysis of EEG data to describe anterior sensorimotor (e.g., premotor cortex; PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required 'active' discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68-channel EEG data from all tasks identified bilateral 'auditory' alpha (α) components in 15 of 29 participants, localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event-related synchronization (ERS; pFDR < .05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions also temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real time with the ICA/ERSP technique.

  17. Setting Priorities for Monitoring and Managing Non-native Plants: Toward a Practical Approach

    Science.gov (United States)

    Koch, Christiane; Jeschke, Jonathan M.; Overbeck, Gerhard E.; Kollmann, Johannes

    2016-09-01

    Land managers face the challenge to set priorities in monitoring and managing non-native plant species, as resources are limited and not all non-natives become invasive. Existing frameworks that have been proposed to rank non-native species require extensive information on their distribution, abundance, and impact. This information is difficult to obtain and often not available for many species and regions. National watch or priority lists are helpful, but it is questionable whether they provide sufficient information for environmental management on a regional scale. We therefore propose a decision tree that ranks species based on more simple albeit robust information, but still provides reliable management recommendations. To test the decision tree, we collected and evaluated distribution data from non-native plants in highland grasslands of Southern Brazil. We compared the results with a national list from the Brazilian Invasive Species Database for the state to discuss advantages and disadvantages of the different approaches on a regional scale. Out of 38 non-native species found, only four were also present on the national list. If management would solely rely on this list, many species that were identified as spreading based on the decision tree would go unnoticed. With the suggested scheme, it is possible to assign species to active management, to monitoring, or further evaluation. While national lists are certainly important, management on a regional scale should employ additional tools that adequately consider the actual risk of non-natives to become invasive.
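The ranking logic described above, assigning species to active management, monitoring, or further evaluation from simple but robust field information, can be sketched as a small decision function. The branch criteria below are illustrative placeholders, not the actual nodes of the published decision tree.

```python
def rank_species(is_present, is_spreading, has_known_impact):
    """Toy management decision tree based on simple field data.

    The criteria are hypothetical stand-ins for the paper's
    actual decision nodes; only the overall shape (a few robust
    yes/no questions leading to a management recommendation)
    follows the abstract.
    """
    if not is_present:
        return "no action"
    if is_spreading and has_known_impact:
        return "active management"
    if is_spreading:
        return "monitoring"
    return "further evaluation"
```

The appeal of such a scheme is that every input can be answered from a regional field survey, without the distribution, abundance, and impact data that fuller ranking frameworks require.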

  18. Exploring Public Perception of Non-native Species from a Visions of Nature Perspective

    Science.gov (United States)

    Verbrugge, Laura N. H.; Van den Born, Riyan J. G.; Lenders, H. J. Rob

    2013-12-01

    Not much is known about lay public perceptions of non-native species and their underlying values. Public awareness and engagement, however, are important aspects in invasive species management. In this study, we examined the relations between the lay public's visions of nature, their knowledge about non-native species, and their perceptions of non-native species and invasive species management with a survey administered in the Netherlands. Within this framework, we identified three measures for perception of non-native species: perceived risk, control and engagement. In general, respondents scored moderate values for perceived risk and personal engagement. However, in case of potential ecological or human health risks, control measures were supported. Respondents' images of the human-nature relationship proved to be relevant in engagement in problems caused by invasive species and in recognizing the need for control, while images of nature appeared to be most important in perceiving risks to the environment. We also found that eradication of non-native species was predominantly opposed for species with a high cuddliness factor such as mammals and bird species. We conclude that lay public perceptions of non-native species have to be put in a wider context of visions of nature, and we discuss the implications for public support for invasive species management.

  19. The Efficient Coding of Speech: Cross-Linguistic Differences.

    Science.gov (United States)

    Guevara Erra, Ramon; Gervain, Judit

    2016-01-01

    Neural coding in the auditory system has been shown to obey the principle of efficient neural coding. The statistical properties of speech appear to be particularly well matched to the auditory neural code. However, only English has so far been analyzed from an efficient coding perspective. It thus remains unknown whether such an approach is able to capture differences between the sound patterns of different languages. Here, we use independent component analysis to derive information theoretically optimal, non-redundant codes (filter populations) for seven typologically distinct languages (Dutch, English, Japanese, Marathi, Polish, Spanish and Turkish) and relate the statistical properties of these filter populations to documented differences in the speech rhythms (Analysis 1) and consonant inventories (Analysis 2) of these languages. We show that consonant class membership plays a particularly important role in shaping the statistical structure of speech in different languages, suggesting that acoustic transience, a property that discriminates consonant classes from one another, is highly relevant for efficient coding.

  20. SPEECH/MUSIC CLASSIFICATION USING WAVELET BASED FEATURE EXTRACTION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Thiruvengatanadhan Ramalingam

    2014-01-01

    Full Text Available Audio classification is a fundamental step in coping with the rapid growth in audio data volume. Due to the increasing size of multimedia sources, speech/music classification is one of the most important issues for multimedia information retrieval. In this work a speech/music discrimination system is developed which utilizes the Discrete Wavelet Transform (DWT) as the acoustic feature. Multi-resolution analysis is a significant statistical way to extract features from the input signal, and in this study a method is deployed to model the extracted wavelet features. Support Vector Machines (SVM) are based on the principle of structural risk minimization. SVM is applied to classify audio into the classes speech and music by learning from training data. The proposed method then extends the application of Gaussian Mixture Models (GMM) to estimate the probability density function using maximum-likelihood decision methods. The system shows significant results with an accuracy of 94.5%.
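The feature-extraction stage described above, multi-resolution DWT analysis of the audio signal, can be sketched with a hand-rolled one-level Haar transform applied recursively; the relative energy in each subband then serves as the classifier input. This is a minimal illustration under our own naming and layout assumptions, not the paper's exact front end, and the SVM/GMM back end is omitted.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                           # pad to even length
        x = np.append(x, 0.0)
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass (detail)
    return s, d

def subband_energies(x, levels=3):
    """Multi-resolution feature vector: relative energy in each
    detail band plus the final approximation band."""
    feats = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        feats.append(np.sum(detail ** 2))
    feats.append(np.sum(approx ** 2))
    feats = np.array(feats)
    return feats / feats.sum()               # normalise to relative energies
```

Speech tends to concentrate energy in the lower subbands while music spreads it more widely, which is why such compact energy vectors can separate the two classes.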

  1. Development of a Mandarin-English Bilingual Speech Recognition System for Real World Music Retrieval

    Science.gov (United States)

    Zhang, Qingqing; Pan, Jielin; Lin, Yang; Shao, Jian; Yan, Yonghong

    In recent decades, there has been a great deal of research into the problem of bilingual speech recognition: developing a recognizer that can handle inter- and intra-sentential language switching between two languages. This paper presents our recent work on the development of a grammar-constrained, Mandarin-English bilingual Speech Recognition System (MESRS) for real world music retrieval. Two of the main difficult issues in handling bilingual speech recognition systems for real world applications are tackled in this paper. One is to balance the performance and the complexity of the bilingual speech recognition system; the other is to effectively deal with matrix-language accents in the embedded language. In order to process the intra-sentential language switching and reduce the amount of data required to robustly estimate statistical models, a compact single set of bilingual acoustic models derived by phone set merging and clustering is developed instead of using two separate monolingual models for each language. In our study, a novel Two-pass phone clustering method based on Confusion Matrix (TCM) is presented and compared with the log-likelihood measure method. Experiments show that TCM can achieve better performance. Since potential system users' native language is Mandarin, which is regarded as the matrix language in our application, their pronunciations of English as the embedded language usually contain Mandarin accents. In order to deal with matrix-language accents in the embedded language, different non-native adaptation approaches are investigated. Experiments show that the model retraining method outperforms the other common adaptation methods such as Maximum A Posteriori (MAP).
With the effective incorporation of approaches on phone clustering and non-native adaptation, the Phrase Error Rate (PER) of MESRS for English utterances was reduced by 24.47% relatively compared to the baseline monolingual English system while the PER on Mandarin utterances was

  2. Intelligibility of speech of children with speech and sound disorders

    OpenAIRE

    Ivetac, Tina

    2014-01-01

    The purpose of this study is to examine speech intelligibility of children with primary speech and sound disorders aged 3 to 6 years in everyday life. The research problem is based on the degree to which parents or guardians, immediate family members (sister, brother, grandparents), extended family members (aunt, uncle, cousin), child's friends, other acquaintances, child's teachers and strangers understand the speech of children with speech sound disorders. We examined whether the level ...

  3. Automatic speech recognition An evaluation of Google Speech

    OpenAIRE

    Stenman, Magnus

    2015-01-01

    The use of speech recognition is increasing rapidly and it is now available in smart TVs, desktop computers, every new smart phone, etc., allowing us to talk to computers naturally. With its use in home appliances, education, and even in surgical procedures, accuracy and speed become very important. This thesis aims to give an introduction to speech recognition and discuss its use in robotics. An evaluation of Google Speech, using Google’s speech API, in regards to word error rate and translation ...

  4. The mechanism of speech processing in congenital amusia: evidence from Mandarin speakers.

    Directory of Open Access Journals (Sweden)

    Fang Liu

    Full Text Available Congenital amusia is a neuro-developmental disorder of pitch perception that causes severe problems with music processing but only subtle difficulties in speech processing. This study investigated speech processing in a group of Mandarin speakers with congenital amusia. Thirteen Mandarin amusics and thirteen matched controls participated in a set of tone and intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on word discrimination in natural speech and their gliding tone analogs. They also performed worse than controls on discriminating gliding tone sequences derived from statements and questions, and showed elevated thresholds for pitch change detection and pitch direction discrimination. However, they performed as well as controls on word identification, and on statement-question identification and discrimination in natural speech. Overall, tasks that involved multiple acoustic cues to communicative meaning were not impacted by amusia. Only when the tasks relied mainly on pitch sensitivity did amusics show impaired performance compared to controls. These findings help explain why amusia only affects speech processing in subtle ways. Further studies on a larger sample of Mandarin amusics and on amusics of other language backgrounds are needed to consolidate these results.

  5. Differential Diagnosis of Severe Speech Disorders Using Speech Gestures

    Science.gov (United States)

    Bahr, Ruth Huntley

    2005-01-01

    The differentiation of childhood apraxia of speech from severe phonological disorder is a common clinical problem. This article reports on an attempt to describe speech errors in children with childhood apraxia of speech on the basis of gesture use and acoustic analyses of articulatory gestures. The focus was on the movement of articulators and…

  6. Attention fine-tunes auditory-motor processing of speech sounds.

    Science.gov (United States)

    Möttönen, Riikka; van de Ven, Gido M; Watkins, Kate E

    2014-03-12

    The earliest stages of cortical processing of speech sounds take place in the auditory cortex. Transcranial magnetic stimulation (TMS) studies have provided evidence that the human articulatory motor cortex contributes also to speech processing. For example, stimulation of the motor lip representation influences specifically discrimination of lip-articulated speech sounds. However, the timing of the neural mechanisms underlying these articulator-specific motor contributions to speech processing is unknown. Furthermore, it is unclear whether they depend on attention. Here, we used magnetoencephalography and TMS to investigate the effect of attention on specificity and timing of interactions between the auditory and motor cortex during processing of speech sounds. We found that TMS-induced disruption of the motor lip representation modulated specifically the early auditory-cortex responses to lip-articulated speech sounds when they were attended. These articulator-specific modulations were left-lateralized and remarkably early, occurring 60-100 ms after sound onset. When speech sounds were ignored, the effect of this motor disruption on auditory-cortex responses was nonspecific and bilateral, and it started later, 170 ms after sound onset. The findings indicate that articulatory motor cortex can contribute to auditory processing of speech sounds even in the absence of behavioral tasks and when the sounds are not in the focus of attention. Importantly, the findings also show that attention can selectively facilitate the interaction of the auditory cortex with specific articulator representations during speech processing.

  7. Tackling the complexity in speech

    DEFF Research Database (Denmark)

    section includes four carefully selected chapters. They deal with facets of speech production, speech acoustics, and/or speech perception or recognition, place them in an integrated phonetic-phonological perspective, and relate them in more or less explicit ways to aspects of speech technology. Therefore, we hope that this volume can help speech scientists with traditional training in phonetics and phonology to keep up with the latest developments in speech technology. In the opposite direction, speech researchers starting from a technological perspective will hopefully get inspired by reading about the questions, phenomena, and communicative functions that are currently addressed in phonetics and phonology. Either way, the future of speech research lies in international, interdisciplinary collaborations, and our volume is meant to reflect and facilitate such collaborations...

  8. Denial Denied: Freedom of Speech

    Directory of Open Access Journals (Sweden)

    Glen Newey

    2009-12-01

    Full Text Available Free speech is a widely held principle. This is in some ways surprising, since formal and informal censorship of speech is widespread, and rather different issues seem to arise depending on whether the censorship concerns who speaks, what content is spoken or how it is spoken. I argue that despite these facts, free speech can indeed be seen as a unitary principle. On my analysis, the core of the free speech principle is the denial of the denial of speech, whether to a speaker, to a proposition, or to a mode of expression. Underlying free speech is the principle of freedom of association, according to which speech is both a precondition of future association (e.g. as a medium for negotiation) and a mode of association in its own right. I conclude by applying this account briefly to two contentious issues: hate speech and pornography.

  10. The Speech Anxiety Thoughts Inventory: scale development and preliminary psychometric data.

    Science.gov (United States)

    Cho, Yongrae; Smits, Jasper A J; Telch, Michael J

    2004-01-01

    Cognitions have been known to play a central role in the development, maintenance, and treatment of speech anxiety. However, few instruments are currently available to assess the cognitive contents associated with speech anxiety. This report describes three studies examining the psychometric characteristics of a revised English version of the Speech Anxiety Thoughts Inventory (SATI), an instrument measuring maladaptive cognitions associated with speech anxiety. In Study 1, factor analyses of the SATI revealed a two-factor solution: "prediction of poor performance" and "fear of negative evaluation by audience". In Study 2, the two-factor structure was replicated. In addition, results revealed stability over a four-week period, high internal consistency, and good convergent and discriminant validity. In Study 3, the scale demonstrated sensitivity to change following brief exposure-based treatments. These findings suggest that the SATI is a highly reliable, valid measure for assessing the cognitive features of speech anxiety.

  11. A new frequency scale of Chinese whispered speech in the application of speaker identification

    Institute of Scientific and Technical Information of China (English)

    LIN Wei; YANG Lili; XU Boling

    2006-01-01

    In this paper, the frequency characteristics of Chinese whispered speech were investigated by filter-bank analysis. It was shown that the first and third formants are more important than the other formants for speaker identification in Chinese whispered speech. The experiment showed that the 800-1200 Hz and 2800-3200 Hz ranges were the most significant frequency ranges for discriminating speakers. Based on this result, a new feature scale named the whisper sensitive scale (WSS) was proposed to replace the commonly used Mel scale when extracting cepstral coefficients from the whispered speech signal. Furthermore, a speaker identification system for whispered speech was presented, based on modified Hidden Markov Models that integrate the advantages of the WSCC (whisper sensitive cepstral coefficient) and LPCC. The new system performed better at speaker identification of Chinese whispered speech than the traditional method.
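    The abstract does not specify the WSS band edges or the exact feature pipeline, but the general recipe it modifies is the standard filter-bank cepstrum: replace the Mel-spaced triangular filters with filters concentrated in the bands found to matter (800-1200 Hz and 2800-3200 Hz), take log band energies, then a DCT. A minimal numpy sketch, with hypothetical center frequencies, might look like this:

```python
import numpy as np

def triangular_filterbank(centers_hz, n_fft=512, sr=8000):
    """Triangular filters at given center frequencies (Hz) over an FFT grid."""
    freqs = np.linspace(0, sr / 2, n_fft // 2 + 1)
    edges = np.concatenate(([0.0], centers_hz, [sr / 2]))
    fb = np.zeros((len(centers_hz), len(freqs)))
    for i in range(len(centers_hz)):
        lo, c, hi = edges[i], edges[i + 1], edges[i + 2]
        up = (freqs - lo) / (c - lo)        # rising slope toward the center
        down = (hi - freqs) / (hi - c)      # falling slope past the center
        fb[i] = np.clip(np.minimum(up, down), 0.0, None)
    return fb

def cepstral_coeffs(power_spectrum, fb, n_ceps=8):
    """Log filter-bank energies followed by a DCT-II, as in MFCC extraction."""
    energies = np.log(fb @ power_spectrum + 1e-10)
    n = len(energies)
    k = np.arange(n_ceps)[:, None]
    m = np.arange(n)[None, :]
    dct = np.cos(np.pi * k * (2 * m + 1) / (2 * n))
    return dct @ energies

# Hypothetical "whisper-sensitive" centers: denser around 800-1200 and 2800-3200 Hz.
centers = np.array([400, 800, 950, 1100, 1250, 1800, 2400, 2800, 2950, 3100, 3250, 3700])
fb = triangular_filterbank(centers)
spectrum = np.abs(np.fft.rfft(np.random.default_rng(0).standard_normal(512))) ** 2
print(cepstral_coeffs(spectrum, fb).shape)  # (8,)
```

Here `centers` is an illustrative choice, not the published WSS; a real implementation would fit the frequency warping to the discrimination data reported in the paper.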

  12. Intelligibility of time-compressed speech: the effect of uniform versus non-uniform time-compression algorithms.

    Science.gov (United States)

    Schlueter, Anne; Lemke, Ulrike; Kollmeier, Birger; Holube, Inga

    2014-03-01

    For assessing hearing aid algorithms, a method is sought to shift the threshold of a speech-in-noise test to (mostly positive) signal-to-noise ratios (SNRs) that allow discrimination across algorithmic settings and are most relevant for hearing-impaired listeners in daily life. Hence, time-compressed speech with higher speech rates was evaluated to parametrically increase the difficulty of the test while preserving most of the relevant acoustical speech cues. A uniform and a non-uniform algorithm were used to compress the sentences of the German Oldenburg Sentence Test at different speech rates. In comparison, the non-uniform algorithm exhibited greater deviations from the targeted time compression, as well as greater changes of the phoneme duration, spectra, and modulation spectra. Speech intelligibility for fast Oldenburg sentences in background noise at different SNRs was determined with 48 normal-hearing listeners. The results confirmed decreasing intelligibility with increasing speech rate. Speech had to be compressed to more than 30% of its original length to reach 50% intelligibility at positive SNRs. Characteristics influencing the discrimination ability of the test for assessing effective SNR changes were investigated. Subjective and objective measures indicated a clear advantage of the uniform algorithm in comparison to the non-uniform algorithm for the application in speech-in-noise tests.
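    The paper's uniform algorithm is not specified beyond its constant compression factor, but the basic idea of uniform time compression can be sketched with a naive windowed overlap-add: read analysis frames from the input at `rate` times the synthesis hop, and overlap-add them at the smaller output hop. The hypothetical `uniform_time_compress` below is only a sketch, not the Oldenburg test's algorithm; a production method would add waveform-similarity alignment (as in WSOLA) to avoid phase artifacts on periodic signals.

```python
import numpy as np

def uniform_time_compress(x, rate=2.0, frame=512, hop_out=128):
    """Naive overlap-add time-scale modification: read frames from the input
    spaced rate * hop_out apart, write them hop_out apart under a Hann window.
    rate = 2.0 halves the duration (the speech plays twice as fast)."""
    hop_in = int(round(rate * hop_out))
    window = np.hanning(frame)
    n_frames = max(1, (len(x) - frame) // hop_in + 1)
    out = np.zeros(n_frames * hop_out + frame)
    norm = np.zeros_like(out)
    for i in range(n_frames):
        seg = x[i * hop_in : i * hop_in + frame]
        out[i * hop_out : i * hop_out + len(seg)] += seg * window[: len(seg)]
        norm[i * hop_out : i * hop_out + len(seg)] += window[: len(seg)]
    return out / np.maximum(norm, 1e-8)   # undo the summed-window gain

sr = 16000
t = np.arange(sr) / sr                    # 1 s of a 200-Hz tone
x = np.sin(2 * np.pi * 200 * t)
y = uniform_time_compress(x, rate=2.0)
print(round(len(y) / len(x), 2))          # 0.52: about half the duration
```

Compressing to more than 30% of the original length, as in the study, corresponds to `rate` greater than about 3.3 in this sketch.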

  13. Discrimination against Black Students

    Science.gov (United States)

    Aloud, Ashwaq; Alsulayyim, Maryam

    2016-01-01

    Discrimination is a structured way of abusing people based on racial differences, barring them from accessing wealth, political participation, and engagement in many spheres of human life. Racism and discrimination are inherently rooted in institutions in society; the problem has spread across many social segments of the society including…

  14. INTERSECTIONAL DISCRIMINATION AGAINST CHILDREN

    DEFF Research Database (Denmark)

    Ravnbøl, Camilla Ida

    This paper adds a perspective to existing research on child protection by engaging in a debate on intersectional discrimination and its relationship to child protection. The paper has a twofold objective: (1) to further establish intersectionality as a concept to address discrimination against children, and (2) to illustrate the importance of addressing intersectionality within rights-based programmes of child protection.

  15. Flash-Type Discrimination

    Science.gov (United States)

    Koshak, William J.

    2010-01-01

    This viewgraph presentation describes the significant progress made in the flash-type discrimination algorithm development. The contents include: 1) Highlights of Progress for GLM-R3 Flash-Type discrimination Algorithm Development; 2) Maximum Group Area (MGA) Data; 3) Retrieval Errors from Simulations; and 4) Preliminary Global-scale Retrieval.

  17. Speech spectrogram expert

    Energy Technology Data Exchange (ETDEWEB)

    Johannsen, J.; Macallister, J.; Michalek, T.; Ross, S.

    1983-01-01

    Various authors have pointed out that humans can become quite adept at deriving phonetic transcriptions from speech spectrograms (as good as 90 percent accuracy at the phoneme level). The authors describe an expert system which attempts to simulate this performance. The speech spectrogram expert (spex) is actually a society made up of three experts: a 2-dimensional vision expert, an acoustic-phonetic expert, and a phonetics expert. The visual reasoning expert finds important visual features of the spectrogram. The acoustic-phonetic expert reasons about how visual features relate to phonemes, and about how phonemes change visually in different contexts. The phonetics expert reasons about allowable phoneme sequences and transformations, and deduces an English spelling for phoneme strings. The speech spectrogram expert is highly interactive, allowing users to investigate hypotheses and edit rules. 10 references.

  18. RECOGNISING SPEECH ACTS

    Directory of Open Access Journals (Sweden)

    Phyllis Kaburise

    2012-09-01

    Full Text Available Speech Act Theory (SAT), a theory in pragmatics, is an attempt to describe what happens during linguistic interactions. Inherent within SAT is the idea that language forms and intentions are relatively formulaic and that there is a direct correspondence between sentence forms (for example, in terms of structure and lexicon) and the function or meaning of an utterance. The contention offered in this paper is that when such a correspondence does not exist, as in indirect speech utterances, this creates challenges for English second language speakers and may result in miscommunication. This arises because indirect speech acts allow speakers to employ various pragmatic devices such as inference, implicature, presuppositions and context clues to transmit their messages. Such devices, operating within the non-literal level of language competence, may pose challenges for ESL learners.

  19. Protection limits on free speech

    Institute of Scientific and Technical Information of China (English)

    李敏

    2014-01-01

    Freedom of speech is one of the basic rights of citizens and should receive broad protection. In the real context of China, however, the questions of which kinds of speech can be protected or restricted, and of how to draw the limit between state power and free speech, are worth considering. People tend to ignore freedom of speech and its function, so that some arguments cannot be aired in open debate.

  20. Influence of musical training on understanding voiced and whispered speech in noise.

    Science.gov (United States)

    Ruggles, Dorea R; Freyman, Richard L; Oxenham, Andrew J

    2014-01-01

    This study tested the hypothesis that the previously reported advantage of musicians over non-musicians in understanding speech in noise arises from more efficient or robust coding of periodic voiced speech, particularly in fluctuating backgrounds. Speech intelligibility was measured in listeners with extensive musical training, and in those with very little musical training or experience, using normal (voiced) or whispered (unvoiced) grammatically correct nonsense sentences in noise that was spectrally shaped to match the long-term spectrum of the speech, and was either continuous or gated with a 16-Hz square wave. Performance was also measured in clinical speech-in-noise tests and in pitch discrimination. Musicians exhibited enhanced pitch discrimination, as expected. However, no systematic or statistically significant advantage for musicians over non-musicians was found in understanding either voiced or whispered sentences in either continuous or gated noise. Musicians also showed no statistically significant advantage in the clinical speech-in-noise tests. Overall, the results provide no evidence for a significant difference between young adult musicians and non-musicians in their ability to understand speech in noise.

  1. Non-Native Pre-Service English Teachers’ Narratives about Their Pronunciation Learning and Implications for Pronunciation Training

    OpenAIRE

    Chin Wen Chien

    2014-01-01

    This study analyzes 58 non-native pre-service elementary school English teachers’ narratives about their pronunciation learning and teaching. Two important findings emerge in this study. First, participants did not have the same attitude toward their roles as non-native English speakers regarding pronunciation learning and teaching. Second, regardless of their attitude or roles as non-native English speakers, participants claimed that when they become language teachers in the future, they wi...

  2. The University and Free Speech

    OpenAIRE

    Grcic, Joseph

    2014-01-01

    Free speech is a necessary condition for the growth of knowledge and the implementation of real and rational democracy. Educational institutions play a central role in socializing individuals to function within their society. Academic freedom is the right to free speech in the context of the university and tenure, properly interpreted, is a necessary component of protecting academic freedom and free speech.

  3. Designing speech for a recipient

    DEFF Research Database (Denmark)

    Fischer, Kerstin

    is investigated on three candidates for so-called ‘simplified registers’: speech to children (also called motherese or baby talk), speech to foreigners (also called foreigner talk) and speech to robots. The volume integrates research from various disciplines, such as psychology, sociolinguistics...

  4. ADMINISTRATIVE GUIDE IN SPEECH CORRECTION.

    Science.gov (United States)

    HEALEY, WILLIAM C.

    Written primarily for school superintendents, principals, speech clinicians, and supervisors, this guide outlines the mechanics of organizing and conducting speech correction activities in the public schools. It includes the requirements for certification of a speech clinician in Missouri and describes essential steps for the development of a…

  5. SPEECH DISORDERS ENCOUNTERED DURING SPEECH THERAPY AND THERAPY TECHNIQUES

    OpenAIRE

    2013-01-01

    Speech is a physical and mental process in which agreed signs and sounds are used to convey a message from one mind to another. To identify the sounds of speech, it is essential to know the structure and function of the various organs that make conversation possible. Because speech is both a physical and a mental process, many factors can lead to speech disorders. A speech disorder can concern language acquisition, and it can also be caused by many medical and psychological factors. Disordered sp...

  6. Freedom of Speech and Hate Speech: an analysis of possible limits for freedom of speech

    National Research Council Canada - National Science Library

    Riva Sobrado de Freitas; Matheus Felipe de Castro

    2013-01-01

    With a view to determining the outlines of the Freedom of Speech and to specifying its contents, we face hate speech as an offensive and repulsive manifestation, particularly directed at minority groups...

  7. Gopherus agassizii (Desert Tortoise). Non-native seed dispersal

    Science.gov (United States)

    Ennen, J.R.; Loughran, Caleb L.; Lovich, Jeffrey E.

    2011-01-01

    Sahara Mustard (Brassica tournefortii) is a non-native, highly invasive weed species of southwestern U.S. deserts. Sahara Mustard is a hardy species, which flourishes under many conditions including drought and in both disturbed and undisturbed habitats (West and Nabhan 2002. In B. Tellman [ed.], Invasive Plants: Their Occurrence and Possible Impact on the Central Gulf Coast of Sonora and the Midriff Islands in the Sea of Cortes, pp. 91–111. University of Arizona Press, Tucson). Because of this species’ ability to thrive in these habitats, B. tournefortii has been able to propagate throughout the southwestern United States establishing itself in the Mojave and Sonoran Deserts in Arizona, California, Nevada, and Utah. Unfortunately, naturally disturbed areas created by native species, such as the Desert Tortoise (Gopherus agassizii), within these deserts could have facilitated the propagation of B. tournefortii. (Lovich 1998. In R. G. Westbrooks [ed.], Invasive Plants, Changing the Landscape of America: Fact Book, p. 77. Federal Interagency Committee for the Management of Noxious and Exotic Weeds [FICMNEW], Washington, DC). However, Desert Tortoises have never been directly observed dispersing Sahara Mustard seeds. Here we present observations of two Desert Tortoises dispersing Sahara Mustard seeds at the interface between the Mojave and Sonoran deserts in California.

  8. Native and Non-Native English Language Teachers

    Directory of Open Access Journals (Sweden)

    Ian Walkinshaw

    2014-05-01

    Full Text Available The English language teaching industry in East and Southeast Asia subscribes to an assumption that native English-speaking teachers (NESTs) are the gold standard of spoken and written language, whereas non-native English-speaking teachers (non-NESTs) are inferior educators because they lack this innate linguistic skill. But does this premise correspond with the views of second language learners? This article reports on research carried out with university students in Vietnam and Japan exploring the advantages and disadvantages of learning English from NESTs and non-NESTs. Contrary to the above notion, our research illuminated a number of perceived advantages—and disadvantages—in both types of teachers. Students viewed NESTs as models of pronunciation and correct language use, as well as being repositories of cultural knowledge, but they also found NESTs poor at explaining grammar, and their different cultures created tension. Non-NESTs were perceived as good teachers of grammar, and had the ability to resort to the students’ first language when necessary. Students found classroom interaction with non-NESTs easier because of their shared culture. Non-NESTs’ pronunciation was often deemed inferior to that of NESTs, but also easier to comprehend. Some respondents advocated learning from both types of teachers, depending on learners’ proficiency and the skill being taught.

  9. Reading fluency and speech perception speed of beginning readers with persistent reading problems: the perception of initial stop consonants and consonant clusters

    NARCIS (Netherlands)

    Snellings, P.; van der Leij, A.; Blok, H.; de Jong, P.F.

    2010-01-01

    This study investigated the role of speech perception accuracy and speed in fluent word decoding of reading disabled (RD) children. A same-different phoneme discrimination task with natural speech tested the perception of single consonants and consonant clusters by young but persistent RD children.

  10. An Ecosystem-Service Approach to Evaluate the Role of Non-Native Species in Urbanized Wetlands

    Science.gov (United States)

    Yam, Rita S. W.; Huang, Ko-Pu; Hsieh, Hwey-Lian; Lin, Hsing-Juh; Huang, Shou-Chung

    2015-01-01

    Natural wetlands have been increasingly transformed into urbanized ecosystems commonly colonized by stress-tolerant non-native species. Although non-native species present numerous threats to natural ecosystems, some could provide important benefits to urbanized ecosystems. This study investigated the extent of colonization by non-native fish and bird species of three urbanized wetlands in subtropical Taiwan. Using literature data the role of each non-native species in the urbanized wetland was evaluated by their effect (benefits/damages) on ecosystem services (ES) based on their ecological traits. Our sites were seriously colonized by non-native fishes (39%–100%), but wetland ES. Our results indicated the importance of non-native fishes in supporting ES by serving as food source to fish-eating waterbirds (native, and migratory species) due to their high abundance, particularly for Oreochromis spp. However, all non-native birds are regarded as “harmful” species causing important ecosystem disservices, and thus eradication of these bird-invaders from urban wetlands would be needed. This simple framework for role evaluation of non-native species represents a holistic and transferable approach to facilitate decision making on management priority of non-native species in urbanized wetlands. PMID:25860870

  11. Speech transmission index from running speech: A neural network approach

    Science.gov (United States)

    Li, F. F.; Cox, T. J.

    2003-04-01

    Speech transmission index (STI) is an important objective parameter concerning speech intelligibility for sound transmission channels. It is normally measured with specific test signals to ensure high accuracy and good repeatability. Measurement with running speech was previously proposed, but accuracy is compromised and hence applications limited. A new approach that uses artificial neural networks to accurately extract the STI from received running speech is developed in this paper. Neural networks are trained on a large set of transmitted speech examples with prior knowledge of the transmission channels' STIs. The networks perform complicated nonlinear function mappings and spectral feature memorization to enable accurate objective parameter extraction from transmitted speech. Validations via simulations demonstrate the feasibility of this new method on a one-net-one-speech extract basis. In this case, accuracy is comparable with normal measurement methods. This provides an alternative to standard measurement techniques, and it is intended that the neural network method can facilitate occupied room acoustic measurements.
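    The network's internals are not given in the abstract, but the quantity it learns to predict has a standard closed form: per IEC 60268-16, each modulation transfer index m is mapped to an apparent signal-to-noise ratio, clipped to ±15 dB, rescaled to a transmission index in [0, 1], and averaged over bands and modulation frequencies. A simplified sketch (illustrative equal band weights, omitting the standard's redundancy correction and octave-band weighting):

```python
import numpy as np

def sti_from_modulation(m):
    """Simplified STI: m is a (bands, mod_freqs) matrix of modulation
    transfer indices in (0, 1). Each index becomes an apparent SNR,
    clipped to +/-15 dB, mapped onto [0, 1], then averaged."""
    snr = 10.0 * np.log10(m / (1.0 - m))
    snr = np.clip(snr, -15.0, 15.0)          # limit to the useful range
    ti = (snr + 15.0) / 30.0                 # map [-15, 15] dB onto [0, 1]
    return ti.mean()

# Perfect channel (m near 1) vs. a heavily smeared one (m near 0.3),
# over 7 octave bands and 14 modulation frequencies as in the standard.
clean = np.full((7, 14), 0.99)
poor = np.full((7, 14), 0.30)
print(round(sti_from_modulation(clean), 2))  # 1.0
print(round(sti_from_modulation(poor), 2))   # 0.38
```

A neural network such as the one described would be trained to regress this target directly from features of the received running speech, with the channel's measured STI as the label.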

  12. Non-native species in the vascular flora of highlands and mountains of Iceland

    Directory of Open Access Journals (Sweden)

    Pawel Wasowicz

    2016-01-01

    Full Text Available The highlands and mountains of Iceland are one of the largest remaining wilderness areas in Europe. This study aimed to provide comprehensive and up-to-date data on non-native plant species in these areas and to answer the following questions: (1) How many non-native vascular plant species inhabit highland and mountainous environments in Iceland? (2) Do temporal trends in the immigration of alien species to Iceland differ between highland and lowland areas? (3) Does the incidence of alien species in the disturbed and undisturbed areas within Icelandic highlands differ? (4) Does the spread of non-native species in Iceland proceed from lowlands to highlands? and (5) Can we detect hot-spots in the distribution of non-native taxa within the highlands? Overall, 16 non-native vascular plant species were detected, including 11 casuals and 5 naturalized taxa (1 invasive). Results showed that temporal trends in alien species immigration to highland and lowland areas are similar, but it is clear that the process of colonization of highland areas is still in its initial phase. Non-native plants tended to occur close to man-made infrastructure and buildings, including huts, shelters, roads, etc. Analysis of spatio-temporal patterns showed that the spread within highland areas is a second step in non-native plant colonization in Iceland. Several statistically significant hot spots of alien plant occurrences were identified using the Getis-Ord Gi* statistic, and these were linked to human disturbance. This research suggests that human-mediated dispersal is the main driving force increasing the risk of invasion in Iceland’s highlands and mountain areas.
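    The Getis-Ord Gi* statistic used for the hot-spot analysis has a closed form: for each location, compare the weighted sum of neighboring values (including the location's own value) against its expectation under spatial randomness, yielding a z-score. A hedged numpy sketch with simple binary distance-band weights (the paper's actual weighting scheme is not stated in the abstract):

```python
import numpy as np

def getis_ord_gi_star(values, coords, radius):
    """Getis-Ord Gi* with binary distance-band weights. Each point's own
    value is included in its neighborhood (the '*' variant). Returns a
    z-score per point; large positive values indicate hot spots."""
    x = np.asarray(values, dtype=float)
    n = len(x)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = (d <= radius).astype(float)           # w[i, i] = 1 since d[i, i] = 0
    xbar = x.mean()
    s = np.sqrt((x ** 2).mean() - xbar ** 2)  # population std. deviation
    wx = w @ x                                # sum of neighboring values
    sw = w.sum(axis=1)                        # sum of weights per point
    sw2 = (w ** 2).sum(axis=1)
    denom = s * np.sqrt((n * sw2 - sw ** 2) / (n - 1))
    return (wx - xbar * sw) / denom

# Toy data: 40 low-count background points plus a high-count corner cluster.
rng = np.random.default_rng(0)
coords = np.vstack([rng.random((40, 2)) * 0.6,          # background
                    0.8 + rng.random((10, 2)) * 0.2])   # clustered corner
values = np.concatenate([np.ones(40), np.full(10, 10.0)])
z = getis_ord_gi_star(values, coords, radius=0.2)
print(z[40:].mean() > z[:40].mean())  # True: the corner cluster is a hot spot
```

z-scores above roughly 1.96 would be flagged as significant hot spots at the 5% level, before any multiple-comparison correction.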

  13. Non-native species in the vascular flora of highlands and mountains of Iceland.

    Science.gov (United States)

    Wasowicz, Pawel

    2016-01-01

    The highlands and mountains of Iceland are one of the largest remaining wilderness areas in Europe. This study aimed to provide comprehensive and up-to-date data on non-native plant species in these areas and to answer the following questions: (1) How many non-native vascular plant species inhabit highland and mountainous environments in Iceland? (2) Do temporal trends in the immigration of alien species to Iceland differ between highland and lowland areas? (3) Does the incidence of alien species in the disturbed and undisturbed areas within Icelandic highlands differ? (4) Does the spread of non-native species in Iceland proceed from lowlands to highlands? and (5) Can we detect hot-spots in the distribution of non-native taxa within the highlands? Overall, 16 non-native vascular plant species were detected, including 11 casuals and 5 naturalized taxa (1 invasive). Results showed that temporal trends in alien species immigration to highland and lowland areas are similar, but it is clear that the process of colonization of highland areas is still in its initial phase. Non-native plants tended to occur close to man-made infrastructure and buildings, including huts, shelters, roads, etc. Analysis of spatio-temporal patterns showed that the spread within highland areas is a second step in non-native plant colonization in Iceland. Several statistically significant hot spots of alien plant occurrences were identified using the Getis-Ord Gi* statistic, and these were linked to human disturbance. This research suggests that human-mediated dispersal is the main driving force increasing the risk of invasion in Iceland's highlands and mountain areas.

  14. Discriminability and identification of English vowels by native Japanese speakers in different consonantal contexts

    Science.gov (United States)

    Nozawa, Takeshi; Frieda, Elaina M.; Wayland, Ratree

    2003-10-01

    The purpose of the present experiment was to examine the effects of consonantal context on discrimination and identification of English vowels by native Japanese speakers learning English in Japan. A number of studies have assessed the effects of consonantal contexts on the perception of nonnative vowels. For instance, Strange et al. (1996, 2001) found that perceptual assimilation of nonnative vowels is affected by consonantal contexts, and Morrison (2002) has shown that Japanese speakers use durational cues to perceive English /i/-/I/. The present study revealed that consonantal context affects discriminability and identification of each English vowel differently. Of all the six vowel contrasts tested, /i/-/I/ was the most likely to be affected by the voicing status of the surrounding consonants, being easier to discriminate in voiceless consonantal contexts. Moreover, /I/ is more likely to be equated with the Japanese short vowel /i/ in a voiceless consonantal context, which is in keeping with Morrison (2002). /æ/-/ɑ/, on the other hand, is the most strongly affected by the place of articulation of the preceding consonants. [Work supported by Grant-in-Aid for Scientific Research (C)(1)(1410635).]

  15. FET frequency discriminator

    Science.gov (United States)

    Mawhinney, F. D.

    1982-03-01

    The FET Frequency Discriminator is an experimental microwave frequency discriminator developed for use in a specialized set-on VCO frequency memory system. Additional development and evaluation work has been done during this program to more fully determine the applicability of the FET frequency discriminator as a low-cost, expendable receiver front-end for both surveillance and ECM systems. Various methods for adjusting the frequency-to-voltage characteristic of the discriminator, as well as the effects of detector characteristics and ambient temperature changes, were evaluated. A number of discriminators for use in the 7- to 11-GHz and the 11- to 18-GHz bands were fabricated and tested. Interim breadboard and final packaged models were either delivered or installed in developmental frequency systems. The major limitations and deficiencies of the FET frequency discriminator that were reviewed during the program include the effects of temperature, input power level variations, nonlinearity, and component repeatability. Additional effort will be required to advance the developmental status of the FET frequency discriminator to the level necessary for inclusion in low-cost receiver systems, but the basic simplicity of the approach continues to show much promise.

  16. Brain-inspired speech segmentation for automatic speech recognition using the speech envelope as a temporal reference

    OpenAIRE

    Byeongwook Lee; Kwang-Hyun Cho

    2016-01-01

    Speech segmentation is a crucial step in automatic speech recognition because additional speech analyses are performed for each framed speech segment. Conventional segmentation techniques primarily segment speech using a fixed frame size for computational simplicity. However, this approach is insufficient for capturing the quasi-regular structure of speech, which causes substantial recognition failure in noisy environments. How does the brain handle quasi-regular structured speech and maintai...
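    The abstract is truncated before the implementation, but its core idea (use the slow amplitude envelope as a temporal reference and cut the signal at low-energy points rather than at fixed frame boundaries) can be sketched generically. This is an assumption-laden toy, not the authors' method: the envelope here is a rectified moving average, and boundaries are midpoints of low-energy runs.

```python
import numpy as np

def envelope(x, sr, cutoff_hz=10.0):
    """Amplitude envelope: rectify, then smooth with a moving average whose
    length is one period of cutoff_hz (a crude low-pass)."""
    win = max(1, int(sr / cutoff_hz))
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

def segment_boundaries(x, sr, thresh_ratio=0.1):
    """Segment boundaries at midpoints of low-energy runs of the envelope --
    one simple way to cut speech into roughly syllable-sized chunks."""
    env = envelope(x, sr)
    low = env < thresh_ratio * env.max()
    edges = np.flatnonzero(np.diff(low.astype(int)))  # run starts/ends
    return np.array([(a + b) // 2 for a, b in zip(edges[::2], edges[1::2])])

# Synthetic "syllables": three 150-ms tone bursts separated by 100-ms silences.
sr = 8000
burst = np.sin(2 * np.pi * 300 * np.arange(int(0.15 * sr)) / sr)
gap = np.zeros(int(0.1 * sr))
x = np.concatenate([burst, gap, burst, gap, burst])
b = segment_boundaries(x, sr)
print(len(b))  # 2 (one boundary per silent gap)
```

A brain-inspired version in the spirit of the abstract would replace the fixed threshold with tracking of the quasi-regular envelope rhythm, so that segment lengths adapt to the speaker's syllable rate.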

  17. Speech Coding Development for Audio fixing Using Spectrum Analysis

    Directory of Open Access Journals (Sweden)

    S. Nageswara Rao; C.D. Naidu; K. Jaya Sankar

    2012-12-01

    Full Text Available A new method is presented for the enhancement of speech signals contaminated by speech-correlated noise, such as that in the output of a speech coder. The module is based on numerical speech-processing algorithms that model the inner ear and generate the stimulus signals for the cilia cells (brain). The method is also based on constrained optimization of a criterion. The interface uses a gammachirp filter bank consisting of 16 band-pass IIR filters. The method is implemented on a block-by-block basis and uses two constraints. The first constraint ensures that the signal power is preserved. A modification constraint ensures that the power of the difference between the enhanced and unenhanced signals is less than a fraction of the power of the unenhanced signal. The applied method increases the periodicity of the speech signal. Sounds that are not nearly periodic are perceptually unaffected by the optimization because of the modification constraint. The results demonstrated a degree of discrimination and interference between different sounds, especially in a multi-speaker environment.

  18. Global Freedom of Speech

    DEFF Research Database (Denmark)

    Binderup, Lars Grassme

    2007-01-01

    , as opposed to a legal norm, that curbs exercises of the right to free speech that offend the feelings or beliefs of members from other cultural groups. The paper rejects the suggestion that acceptance of such a norm is in line with liberal egalitarian thinking. Following a review of the classical liberal...

  19. Speech and Hearing Therapy.

    Science.gov (United States)

    Sakata, Reiko; Sakata, Robert

    1978-01-01

    In the public school, the speech and hearing therapist attempts to foster child growth and development through the provision of services basic to awareness of self and others, management of personal and social interactions, and development of strategies for coping with the handicap. (MM)

  20. Perceptual learning in speech

    NARCIS (Netherlands)

    Norris, D.; McQueen, J.M.; Cutler, A.

    2003-01-01

    This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listener