WorldWideScience

Sample records for spoken pseudoword processing

  1. The neural correlates of morphological complexity processing: Detecting structure in pseudowords.

    Science.gov (United States)

    Schuster, Swetlana; Scharinger, Mathias; Brooks, Colin; Lahiri, Aditi; Hartwigsen, Gesa

    2018-03-02

    Morphological complexity is a highly debated issue in visual word recognition. Previous neuroimaging studies have shown that speakers are sensitive to degrees of morphological complexity. Two-step derived complex words (bridging, derived via bridge(N) > bridge(V) > bridging) elicited stronger activation in the left inferior frontal gyrus than their one-step derived counterparts (running, via run(V) > running). However, it remains unclear whether sensitivity to degrees of morphological complexity extends to pseudowords. If this were the case, it would indicate that abstract knowledge of morphological structure is independent of lexicality. We addressed this question by investigating the processing of two sets of pseudowords in German. Both sets contained morphologically viable two-step derived pseudowords differing in the number of derivational steps required to access an existing lexical representation, and therefore in the degree of structural analysis expected during processing. Using a 2 × 2 factorial design, we found lexicality effects to be distinct from processing signatures relating to structural analysis in pseudowords. Semantically driven processes such as lexical search showed a more frontal distribution, while combinatorial processes related to structural analysis engaged more parietal parts of the network. Specifically, more complex pseudowords showed increased activation in parietal regions (right superior parietal lobe and left precuneus) relative to pseudowords that required less structural analysis to arrive at an existing lexical representation. As the two sets were matched on cohort size and surface form, these results highlight the role of internal levels of morphological structure even in forms that do not possess a lexical representation. © 2018 Wiley Periodicals, Inc.
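
    The central manipulation, the number of derivational steps needed to reach an existing lexical entry, can be operationalised as iterative affix stripping against a lexicon. The following is a minimal illustrative sketch, not the authors' stimulus-construction procedure; the mini-lexicon and suffix inventory are invented for the example.

        # Hypothetical mini-lexicon and suffix inventory (invented for illustration).
        LEXICON = {"trink", "trinkbar"}            # attested stems/derived forms
        SUFFIXES = ["keit", "heit", "bar", "ung"]  # derivational suffixes

        def derivational_steps(pseudoword, max_steps=3):
            """Count suffix-stripping steps until an attested form is reached."""
            form, steps = pseudoword, 0
            while steps <= max_steps:
                if form in LEXICON:
                    return steps
                for suffix in SUFFIXES:
                    if form.endswith(suffix) and len(form) > len(suffix):
                        form = form[:-len(suffix)]
                        steps += 1
                        break
                else:
                    return None  # no strippable suffix, no lexical entry reached
            return None

        print(derivational_steps("trinkbarkeit"))  # 1 step: trinkbarkeit -> trinkbar
        print(derivational_steps("trinkheitung"))  # 2 steps: -> trinkheit -> trink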

  2. Alpha and theta brain oscillations index dissociable processes in spoken word recognition.

    Science.gov (United States)

    Strauß, Antje; Kotz, Sonja A; Scharinger, Mathias; Obleser, Jonas

    2014-08-15

    Slow neural oscillations (~1-15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad frequency range are disputed, with most studies hypothesizing about only a single frequency band. The present study utilizes an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that, within the slow neural oscillatory range, distinct functional signatures and cortical networks can be identified at least for theta (~3-7 Hz) and alpha frequencies (~8-12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word-pseudoword continuum, ranging from (1) real words through (2) ambiguous pseudowords (deviating from real words in only one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words through randomized syllables). By means of time-frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally 'gate' lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. Taken together, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition. Copyright © 2014 Elsevier Inc. All rights reserved.
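
    The dissociation rests on standard time-frequency decomposition of the EEG into band-limited power. As a rough illustration of the technique (not the authors' actual pipeline; the sampling rate, band centres, and synthetic signal below are assumptions), alpha and theta power can be extracted with a complex Morlet wavelet:

        import numpy as np

        def morlet_power(signal, fs, freq, n_cycles=5):
            """Instantaneous power at `freq` via complex Morlet convolution."""
            sigma_t = n_cycles / (2 * np.pi * freq)           # wavelet width (s)
            t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
            wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
            wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # energy normalisation
            return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

        # Synthetic single channel: noise plus an alpha (10 Hz) burst after 1 s.
        fs = 250
        t = np.arange(0, 2, 1 / fs)
        eeg = np.random.randn(t.size) + 2 * np.sin(2 * np.pi * 10 * t) * (t > 1)

        theta = morlet_power(eeg, fs, freq=5).mean()    # theta band centre
        alpha = morlet_power(eeg, fs, freq=10).mean()   # alpha band centre
        print(f"mean theta power {theta:.2f}, mean alpha power {alpha:.2f}")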

  3. Processing complex pseudo-words in mild cognitive impairment: The interaction of preserved morphological rule knowledge with compromised cognitive ability.

    Science.gov (United States)

    Manouilidou, Christina; Dolenc, Barbara; Marvin, Tatjana; Pirtošek, Zvezdan

    2016-01-01

    Mild cognitive impairment (MCI) affects the cognitive performance of elderly adults, although not severely enough to warrant a diagnosis of dementia. Previous research reports subtle language impairments in individuals with MCI, specifically in domains related to lexical meaning. The present study used both an off-line (grammaticality judgment) and an on-line (lexical decision) task to examine aspects of lexical processing and how they are affected by MCI. 21 healthy older adults and 23 individuals with MCI saw complex pseudo-words that violated various principles of word formation in Slovenian and decided whether each letter string was an actual word of their language. The pseudo-words varied in the severity of the violations they contained. A task effect was found, with MCI performance being similar to that of healthy controls in the off-line task but different in the on-line task. Overall, the MCI group responded more slowly than the elderly controls. No significant differences were observed in the off-line task, while the on-line task revealed a main effect of Violation type, a main effect of Group, and a significant Violation × Group interaction, reflecting difficulty for the MCI group in processing pseudo-words in real time. That is, while individuals with MCI seem to retain morphological rule knowledge, they experience additional difficulties when processing complex pseudo-words. This was attributed to an executive dysfunction associated with MCI that delays the recognition of ungrammatical formations.

  4. LEXICAL NEIGHBOURHOOD EFFECTS IN PSEUDOWORD SPELLING

    OpenAIRE

    Marie-Josèphe Tainturier

    2013-01-01

    The general aim of this study is to contribute to a better understanding of the cognitive processes that underpin skilled adult spelling. More specifically, it investigates the influence of lexical neighbours on pseudo-word spelling with the goal of providing a more detailed account of the interaction between lexical and sublexical sources of knowledge in spelling. In prior research examining this topic, subjects typically heard lists composed of both words and pseudo-words and had to make a ...

  5. A study of base frequency in Spanish skilled and reading-disabled children: all children benefit from morphological processing in defining complex pseudowords.

    Science.gov (United States)

    Lázaro, Miguel

    2012-05-01

    In this study, the base frequency (BF) effect is explored in reading-disabled and skilled readers of Spanish. A pseudoword definition task was completed by two groups of children. The pseudowords were composed from existing stems and affixes. The results show a facilitatory BF effect, suggesting that all children benefited from this aspect of morphology. A significant effect of group was also observed, showing that skilled readers scored better than reading-disabled children. The interaction between these variables was not significant. The overall pattern of data suggests that all children benefited from morphological processing to perform the definition task but that phonological difficulties in reading-disabled children prevented them from benefitting from the BF effect as much as their skilled peers. Copyright © 2012 John Wiley & Sons, Ltd.

  6. Lexical neighborhood effects in pseudoword spelling.

    Science.gov (United States)

    Tainturier, Marie-Josèphe; Bosse, Marie-Line; Roberts, Daniel J; Valdois, Sylviane; Rapp, Brenda

    2013-01-01

    The general aim of this study is to contribute to a better understanding of the cognitive processes that underpin skilled adult spelling. More specifically, it investigates the influence of lexical neighbors on pseudo-word spelling with the goal of providing a more detailed account of the interaction between lexical and sublexical sources of knowledge in spelling. In prior research examining this topic, adult participants typically heard lists composed of both words and pseudo-words and had to make a lexical decision to each stimulus before writing the pseudo-words. However, these priming paradigms are susceptible to strategic influence and may therefore not give a clear picture of the processes normally engaged in spelling unfamiliar words. In our two experiments involving 71 French-speaking literate adults, only pseudo-words were presented, which participants were simply requested to write to dictation using the first spelling that came to mind. Unbeknownst to participants, pseudo-words varied according to whether they did or did not have a phonological word neighbor. Results revealed that low-probability phoneme/grapheme mappings (e.g., /o/ -> aud in French) were used significantly more often in spelling pseudo-words with a close phonological lexical neighbor with that spelling (e.g., /krepo/ derived from "crapaud," /krapo/) than in spelling pseudo-words with no close neighbors (e.g., /frøpo/). In addition, the strength of this lexical influence increased with the lexical frequency of the word neighbors as well as with their degree of phonetic overlap with the pseudo-word targets. These results indicate that information from lexical and sublexical processes is integrated in the course of spelling, and a specific theoretical account as to how such integration may occur is introduced.
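
    The key measure, the probability of a phoneme-grapheme mapping such as /o/ -> aud, is conventionally estimated from counts over an aligned lexicon. A toy sketch of that estimation follows; the miniature alignments are invented for illustration and are not the corpus statistics used in the study:

        from collections import Counter, defaultdict

        # Hypothetical phoneme-grapheme alignments (invented miniature lexicon).
        aligned_lexicon = [
            [("k", "c"), ("r", "r"), ("a", "a"), ("p", "p"), ("o", "aud")],  # crapaud
            [("g", "g"), ("a", "â"), ("t", "t"), ("o", "eau")],              # gâteau
            [("v", "v"), ("e", "é"), ("l", "l"), ("o", "o")],                # vélo
            [("m", "m"), ("o", "o"), ("t", "t"), ("o", "o")],                # moto
        ]

        counts = defaultdict(Counter)
        for word in aligned_lexicon:
            for phoneme, grapheme in word:
                counts[phoneme][grapheme] += 1

        def mapping_probability(phoneme, grapheme):
            """P(grapheme | phoneme) by relative frequency."""
            total = sum(counts[phoneme].values())
            return counts[phoneme][grapheme] / total if total else 0.0

        print(mapping_probability("o", "aud"))  # 0.2 -- a low-probability mapping
        print(mapping_probability("o", "o"))    # 0.6 -- the dominant mapping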

  7. LEXICAL NEIGHBOURHOOD EFFECTS IN PSEUDOWORD SPELLING

    Directory of Open Access Journals (Sweden)

    Marie-Josèphe Tainturier

    2013-11-01

    The general aim of this study is to contribute to a better understanding of the cognitive processes that underpin skilled adult spelling. More specifically, it investigates the influence of lexical neighbours on pseudo-word spelling with the goal of providing a more detailed account of the interaction between lexical and sublexical sources of knowledge in spelling. In prior research examining this topic, subjects typically heard lists composed of both words and pseudo-words and had to make a lexical decision to each stimulus before writing the pseudo-words. However, these priming paradigms are susceptible to strategic influence and may therefore not give a clear picture of the processes normally engaged in spelling unfamiliar words. In our two experiments involving 71 French-speaking literate adults, only pseudo-words were presented, which participants were simply requested to write to dictation using the first spelling that came to mind. Unbeknownst to participants, pseudo-words varied according to whether they did or did not have a phonological word neighbour. Results revealed that low-probability phoneme/grapheme mappings (e.g., /o/ -> aud in French) were used significantly more often in spelling pseudo-words with a close phonological lexical neighbour with that spelling (e.g., /krepo/ derived from crapaud, /krapo/) than in spelling pseudo-words with no close neighbours (e.g., /frøpo/). In addition, the strength of this lexical influence increased with the lexical frequency of the word neighbours as well as with their degree of phonetic overlap with the pseudo-word targets. These results indicate that the activation from lexical and sublexical processes is integrated in the course of spelling, and a specific theoretical account as to how such integration may occur is introduced.

  8. An fMRI study of concreteness effects in spoken word recognition.

    Science.gov (United States)

    Roxbury, Tracy; McMahon, Katie; Copland, David A

    2014-09-30

    Evidence for the brain mechanisms recruited when processing concrete versus abstract concepts has been largely derived from studies employing visual stimuli. The tasks and baseline contrasts used have also involved varying degrees of lexical processing. This study investigated the neural basis of the concreteness effect during spoken word recognition and employed a lexical decision task with a novel pseudoword condition. The participants were seventeen healthy young adults (9 females). The stimuli consisted of (a) concrete, high imageability nouns, (b) abstract, low imageability nouns and (c) opaque legal pseudowords presented in a pseudorandomised, event-related design. Activation for the concrete, abstract and pseudoword conditions was analysed using anatomical regions of interest derived from previous findings of concrete and abstract word processing. Behaviourally, lexical decision reaction times for the concrete condition were significantly faster than both the abstract and pseudoword conditions, and the abstract condition was significantly faster than the pseudoword condition. During spoken word recognition, significant activity was elicited by concrete words relative to pseudowords in the left fusiform and left anterior middle temporal gyrus. These findings confirm the involvement of a widely distributed network of brain regions that are activated in response to the spoken recognition of concrete but not abstract words. Our findings are consistent with the proposal that distinct brain regions are engaged as convergence zones and enable the binding of supramodal input.

  9. How Language Affects Children’s Use of Derivational Morphology in Visual Word and Pseudoword Processing: Evidence from a Cross-Language Study

    Directory of Open Access Journals (Sweden)

    Séverine Casalis

    2015-04-01

    Developing readers have been shown to rely on morphemes in visual word recognition across several naming, lexical decision and priming experiments. However, the impact of morphology in reading is not consistent across studies with differing results emerging not only between but also within writing systems. Here, we report a cross-language experiment involving the English and French languages, which aims to compare directly the impact of morphology in word recognition in the two languages. Monolingual French-speaking and English-speaking children matched for grade level (Part 1) and for age (Part 2) participated in the study. Two lexical decision tasks (one in French, one in English) featured words and pseudowords with exactly the same structure in each language. The presence of a root (R+) and a suffix ending (S+) was manipulated orthogonally, leading to four possible combinations in words (R+S+: e.g. postal; R+S-: e.g. turnip; R-S+: e.g. rascal; and R-S-: e.g. bishop) and in pseudowords (R+S+: e.g. pondal; R+S-: e.g. curlip; R-S+: e.g. vosnal; and R-S-: e.g. hethop). Results indicate that the presence of morphemes facilitates children’s recognition of words and impedes their ability to reject pseudowords in both languages. Nevertheless, effects extend across accuracy and latencies in French but are restricted to accuracy in English, suggesting a higher degree of morphological processing efficiency in French. We argue that the inconsistencies found between languages emphasise the need for developmental models of word recognition to integrate a morpheme level whose elaboration is tuned by the productivity and transparency of the derivational system.

  10. How language affects children's use of derivational morphology in visual word and pseudoword processing: evidence from a cross-language study.

    Science.gov (United States)

    Casalis, Séverine; Quémart, Pauline; Duncan, Lynne G

    2015-01-01

    Developing readers have been shown to rely on morphemes in visual word recognition across several naming, lexical decision and priming experiments. However, the impact of morphology in reading is not consistent across studies with differing results emerging not only between but also within writing systems. Here, we report a cross-language experiment involving the English and French languages, which aims to compare directly the impact of morphology in word recognition in the two languages. Monolingual French-speaking and English-speaking children matched for grade level (Part 1) and for age (Part 2) participated in the study. Two lexical decision tasks (one in French, one in English) featured words and pseudowords with exactly the same structure in each language. The presence of a root (R+) and a suffix ending (S+) was manipulated orthogonally, leading to four possible combinations in words (R+S+: e.g., postal; R+S-: e.g., turnip; R-S+: e.g., rascal; and R-S-: e.g., bishop) and in pseudowords (R+S+: e.g., pondal; R+S-: e.g., curlip; R-S+: e.g., vosnal; and R-S-: e.g., hethop). Results indicate that the presence of morphemes facilitates children's recognition of words and impedes their ability to reject pseudowords in both languages. Nevertheless, effects extend across accuracy and latencies in French but are restricted to accuracy in English, suggesting a higher degree of morphological processing efficiency in French. We argue that the inconsistencies found between languages emphasize the need for developmental models of word recognition to integrate a morpheme level whose elaboration is tuned by the productivity and transparency of the derivational system.

  11. The role of planum temporale in processing accent variation in spoken language comprehension.

    NARCIS (Netherlands)

    Adank, P.M.; Noordzij, M.L.; Hagoort, P.

    2012-01-01

    A repetition-suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation (speaker and accent) during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and a…

  12. Lexical neighborhood effects in pseudoword spelling

    OpenAIRE

    Tainturier, Marie-Josèphe; Bosse, Marie-Line; Roberts, Daniel J.; Valdois, Sylviane; Rapp, Brenda

    2013-01-01

    The general aim of this study is to contribute to a better understanding of the cognitive processes that underpin skilled adult spelling. More specifically, it investigates the influence of lexical neighbors on pseudo-word spelling with the goal of providing a more detailed account of the interaction between lexical and sublexical sources of knowledge in spelling. In prior research examining this topic, adult participants typically heard lists composed of both words and ...

  13. Research on Spoken Language Processing. Progress Report No. 21 (1996-1997).

    Science.gov (United States)

    Pisoni, David B.

    This 21st annual progress report summarizes research activities on speech perception and spoken language processing carried out in the Speech Research Laboratory, Department of Psychology, Indiana University in Bloomington. As with previous reports, the goal is to summarize accomplishments during 1996 and 1997 and make them readily available. Some…

  14. Neural stages of spoken, written, and signed word processing in beginning second language learners.

    Science.gov (United States)

    Leonard, Matthew K; Ferjan Ramirez, Naja; Torres, Christina; Hatrak, Marla; Mayberry, Rachel I; Halgren, Eric

    2013-01-01

    We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language.

  15. Children's Verbal Working Memory: Role of Processing Complexity in Predicting Spoken Sentence Comprehension

    Science.gov (United States)

    Magimairaj, Beula M.; Montgomery, James W.

    2012-01-01

    Purpose: This study investigated the role of processing complexity of verbal working memory tasks in predicting spoken sentence comprehension in typically developing children. Of interest was whether simple and more complex working memory tasks have similar or different power in predicting sentence comprehension. Method: Sixty-five children (6- to…

  16. The role of planum temporale in processing accent variation in spoken language comprehension

    NARCIS (Netherlands)

    Adank, P.M.; Noordzij, M.L.; Hagoort, P.

    2012-01-01

    A repetition–suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation—speaker and accent—during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and

  17. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    Science.gov (United States)

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.

  18. Processing Relationships Between Language-Being-Spoken and Other Speech Dimensions in Monolingual and Bilingual Listeners.

    Science.gov (United States)

    Vaughn, Charlotte R; Bradlow, Ann R

    2017-12-01

    While indexical information is implicated in many levels of language processing, little is known about the internal structure of the system of indexical dimensions, particularly in bilinguals. A series of three experiments using the speeded classification paradigm investigated the relationship between various indexical and non-linguistic dimensions of speech in processing. Specifically, we examined the relationship between a lesser-studied indexical dimension relevant to bilinguals, which language is being spoken (in these experiments, either Mandarin Chinese or English), and three other dimensions: talker identity (Experiment 1), talker gender (Experiment 2), and amplitude of speech (Experiment 3). Results demonstrate that language-being-spoken is integrated in processing with each of the other dimensions tested, and that these processing dependencies seem to be independent of listeners' bilingual status or experience with the languages tested. Moreover, the data reveal processing interference asymmetries, suggesting a processing hierarchy for indexical, non-linguistic speech features.

  19. Embedded Stem Priming Effects in Prefixed and Suffixed Pseudowords

    Science.gov (United States)

    Beyersmann, Elisabeth; Cavalli, Eddy; Casalis, Séverine; Colé, Pascale

    2016-01-01

    Previous research has repeatedly revealed evidence for morpho-orthographic priming effects in suffixed words. However, evidence for the morphological chunking of prefixed words is sparse and ambiguous. The goal of the present study was to directly contrast the processing of prefixed and suffixed pseudowords within the same experiment. We carried…

  20. The time course of morphological processing during spoken word recognition in Chinese.

    Science.gov (United States)

    Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan

    2017-12-01

    We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, earlier than the whole-word competitor did. These results suggest that lexical access to the auditory word is incremental and that morphological processing (i.e., semantic access to the first constituent) occurs at an early processing stage, before access to the representation of the whole word in Chinese.
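
    Divergence points of this kind are read off time courses of fixation proportions per display region. A minimal sketch of that computation on synthetic data; the coding scheme, bin size, and five-bin criterion are assumptions for illustration, not the study's analysis parameters:

        import numpy as np

        # Synthetic coded gaze samples: samples[trial, time_bin] holds
        # 0 = distractor, 1 = morphemic competitor, 2 = whole-word competitor.
        rng = np.random.default_rng(0)
        n_trials, n_bins = 40, 60                    # e.g., 60 bins of 20 ms
        samples = rng.integers(0, 3, size=(n_trials, n_bins))
        samples[:, 30:] = rng.integers(1, 3, size=(n_trials, n_bins - 30))  # late competitor bias

        def fixation_proportions(samples, region):
            """Proportion of trials fixating `region` in each time bin."""
            return (samples == region).mean(axis=0)

        p_distractor = fixation_proportions(samples, 0)
        p_morphemic = fixation_proportions(samples, 1)

        # Crude divergence estimate: first bin where the competitor exceeds
        # the distractor for five consecutive bins.
        above = p_morphemic > p_distractor
        for start in range(n_bins - 5):
            if above[start:start + 5].all():
                print(f"divergence at bin {start} (~{start * 20} ms)")
                break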

  1. Symbolic gestures and spoken language are processed by a common neural system.

    Science.gov (United States)

    Xu, Jiang; Gannon, Patrick J; Emmorey, Karen; Smith, Jason F; Braun, Allen R

    2009-12-08

    Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating "be quiet"), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Models (GLM) contrasts identified shared and unique activations and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects.

  2. The role of the left Brodmann's areas 44 and 45 in reading words and pseudowords

    OpenAIRE

    Heim, S.; Alter, K.; Ischebeck, A.; Amunts, K.; Eickhoff, S.; Mohlberg, H.; Zilles, K.; von Cramon, D.; Friederici, A.

    2005-01-01

    In this functional magnetic resonance imaging (fMRI) study, we investigated the influence of two tasks (lexical decision, LDT; phonological decision, PDT) on activation in Broca's region (left Brodmann's areas [BA] 44 and 45) during the processing of visually presented words and pseudowords. Reaction times were longer for pseudowords than words in the LDT but did not differ in the PDT. By combining the fMRI data with cytoarchitectonic anatomical probability maps, we demonstrated that the left BA 44 an...

  3. Functional Brain Activation Differences in School-Age Children with Speech Sound Errors: Speech and Print Processing

    Science.gov (United States)

    Preston, Jonathan L.; Felsenfeld, Susan; Frost, Stephen J.; Mencl, W. Einar; Fulbright, Robert K.; Grigorenko, Elena L.; Landi, Nicole; Seki, Ayumi; Pugh, Kenneth R.

    2012-01-01

    Purpose: To examine neural response to spoken and printed language in children with speech sound errors (SSE). Method: Functional magnetic resonance imaging was used to compare processing of auditorily and visually presented words and pseudowords in 17 children with SSE, ages 8;6[years;months] through 10;10, with 17 matched controls. Results: When…

  4. Using the readiness potential of button-press and verbal response within spoken language processing.

    Science.gov (United States)

    Jansen, Stefanie; Wesselmeier, Hendrik; de Ruiter, Jan P; Mueller, Horst M

    2014-07-30

    Although research on turn-taking in spoken dialogue is now abundant, a typical EEG signature associated with the anticipation of turn-ends has not yet been identified. The purpose of this study was to examine whether readiness potentials (RP) can be used to study the anticipation of turn-ends, using both a motoric finger-movement task and an articulatory-movement task. The goal was to determine the onset of early, preconscious turn-end anticipation processes through the simultaneous registration of EEG measures (RP) and behavioural measures (anticipation timing accuracy, ATA). For our behavioural measures, we used both button-press and verbal responses ("yes"). In the experiment, 30 subjects were asked to listen to auditorily presented utterances and press a button or utter a brief verbal response when they expected the end of the turn. During the task, a 32-channel EEG signal was recorded. The results showed that the RPs during verbal and button-press responses developed similarly and had an almost identical time course: the RP signals started to develop 1170 vs. 1190 ms before the behavioural responses. Until now, turn-end anticipation has usually been studied using behavioural methods, for instance by measuring anticipation timing accuracy, a measurement that reflects conscious behavioural processes and is insensitive to preconscious anticipation processes. The similar time course of the recorded RP signals for both verbal and button-press responses provides evidence for the validity of using RPs as an online marker for response preparation in turn-taking and spoken dialogue research. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Neural dynamics of morphological processing in spoken word comprehension: Laterality and automaticity

    Directory of Open Access Journals (Sweden)

    Caroline M. Whiting

    2013-11-01

    Rapid and automatic processing of grammatical complexity is argued to take place during speech comprehension, engaging a left-lateralised fronto-temporal language network. Here we address how neural activity in these regions is modulated by the grammatical properties of spoken words. We used combined magneto- and electroencephalography (MEG, EEG) to delineate the spatiotemporal patterns of activity that support the recognition of morphologically complex words in English with inflectional (-s) and derivational (-er) affixes (e.g. bakes, baker). The mismatch negativity (MMN), an index of linguistic memory traces elicited in a passive listening paradigm, was used to examine the neural dynamics elicited by morphologically complex words. Results revealed an initial peak 130-180 ms after the deviation point with a major source in left superior temporal cortex. The localisation of this early activation showed a sensitivity to two grammatical properties of the stimuli: (1) the presence of morphological complexity, with affixed words showing increased left-laterality compared to non-affixed words; and (2) the grammatical category, with affixed verbs showing greater left-lateralisation in inferior frontal gyrus compared to affixed nouns (bakes vs. beaks). This automatic brain response was additionally sensitive to semantic coherence (the meaning of the stem vs. the meaning of the whole form) in fronto-temporal regions. These results demonstrate that the spatiotemporal pattern of neural activity in spoken word processing is modulated by the presence of morphological structure, predominantly engaging the left-hemisphere’s fronto-temporal language network, and does not require focused attention on the linguistic input.

  6. Stress Assignment in Reading Italian Polysyllabic Pseudowords

    Science.gov (United States)

    Sulpizio, Simone; Arduino, Lisa S.; Paizi, Despina; Burani, Cristina

    2013-01-01

    In 4 naming experiments we investigated how Italian readers assign stress to pseudowords. We assessed whether participants assign stress following distributional information such as stress neighborhood (the proportion and number of existent words sharing orthographic ending and stress pattern) and whether such distributional information affects…

  7. Feature Statistics Modulate the Activation of Meaning During Spoken Word Processing.

    Science.gov (United States)

    Devereux, Barry J; Taylor, Kirsten I; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K

    2016-03-01

    Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in (distinctiveness/sharedness) and likelihood of co-occurrence (correlational strength)--determine conceptual activation. To test these claims, we investigated the role of distinctiveness/sharedness and correlational strength in speech-to-meaning mapping, using a lexical decision task and computational simulations. Responses were faster for concepts with higher sharedness, suggesting that shared features are facilitatory in tasks like lexical decision that require access to them. Correlational strength facilitated responses for slower participants, suggesting a time-sensitive co-occurrence-driven settling mechanism. The computational simulation showed similar effects, with early effects of shared features and later effects of correlational strength. These results support a general-to-specific account of conceptual processing, whereby early activation of shared features is followed by the gradual emergence of a specific target representation. Copyright © 2015 The Authors. Cognitive Science published by Cognitive Science Society, Inc.
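
    The two feature statistics named here have natural definitions over a binary concept × feature matrix: sharedness is the number of concepts a feature occurs in, and correlational strength can be approximated as the mean pairwise correlation among a concept's features. A sketch under those assumed definitions (the toy matrix is invented; the paper's exact formulas may differ):

        import numpy as np

        # Hypothetical binary matrix: rows = concepts, columns = semantic features.
        # Concepts: dog, cat, car, rock; features: has_legs, barks, has_wheels, moves.
        M = np.array([
            [1, 1, 0, 1],   # dog
            [1, 0, 0, 1],   # cat
            [0, 0, 1, 1],   # car
            [0, 0, 0, 0],   # rock
        ])

        sharedness = M.sum(axis=0)            # concepts per feature: [2, 1, 1, 3]
        corr = np.corrcoef(M, rowvar=False)   # pairwise feature correlations

        def correlational_strength(concept_idx):
            """Mean correlation among the features a concept possesses."""
            feats = np.flatnonzero(M[concept_idx])
            pairs = [(i, j) for i in feats for j in feats if i < j]
            return float(np.mean([corr[i, j] for i, j in pairs]))

        print(sharedness)                  # 'moves' is highly shared, 'barks' distinctive
        print(correlational_strength(0))   # strength for 'dog'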

  8. Feature Statistics Modulate the Activation of Meaning during Spoken Word Processing

    Science.gov (United States)

    Devereux, Barry J.; Taylor, Kirsten I.; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K.

    2016-01-01

    Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in ("distinctiveness/sharedness") and likelihood of co-occurrence ("correlational…

  9. Spoken Lebanese.

    Science.gov (United States)

    Feghali, Maksoud N.

    This book teaches the Arabic Lebanese dialect through topics such as food, clothing, transportation, and leisure activities. It also provides background material on the Arab World in general and the region where Lebanese Arabic is spoken or understood--Lebanon, Syria, Jordan, Palestine--in particular. This language guide is based on the phonetic…

  10. Tarzan Helps to Find Dyslexics: A Pseudo-Word Test

    Science.gov (United States)

    Takala, Marjatta; Kuusela, Jorma

    2009-01-01

    A pseudo-word test called Tarzan will be presented and standard scores for high school-aged students between 16 and 18 will be suggested. The test uses E. R. Burroughs' text, in which pseudo-words are added in order to study phonological coding and, through that, possible dyslexia. Girls performed better on the test and their scores correlated…

  11. Activating gender stereotypes during online spoken language processing: evidence from Visual World Eye Tracking.

    Science.gov (United States)

    Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G

    2010-01-01

    This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.

  12. How and When Accentuation Influences Temporally Selective Attention and Subsequent Semantic Processing during On-Line Spoken Language Comprehension: An ERP Study

    Science.gov (United States)

    Li, Xiao-qing; Ren, Gui-qin

    2012-01-01

    An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…

  13. Effects of prosody on spoken Thai word perception in pre-attentive brain processing: a pilot study

    Directory of Open Access Journals (Sweden)

    Kittipun Arunphalungsanti

    2016-12-01

    This study investigated the effect of unfamiliar stressed prosody on spoken Thai word perception during pre-attentive brain processing, as indexed by the N2a and brain-wave oscillatory activity. EEG recordings were obtained from eleven participants, who were instructed to ignore the sound stimuli while watching silent movies. Results showed that perception of words with unfamiliar stress prosody elicited an N2a component, and quantitative EEG analysis found that theta- and delta-wave power was generated principally in the frontal area. It is possible that the unfamiliar prosody, with its differing frequencies, durations, and intensities of the Thai word sounds, induced highly selective attention and retrieval of information from episodic memory at the pre-attentive stage of speech perception. This electrophysiological evidence could be used in further work to develop clinical tests that evaluate frontal lobe function in speech perception.

  14. Word and Pseudoword Superiority Effects: Evidence From a Shallow Orthography Language.

    Science.gov (United States)

    Ripamonti, Enrico; Luzzatti, Claudio; Zoccolotti, Pierluigi; Traficante, Daniela

    2017-08-03

    The Word Superiority Effect (WSE) denotes better recognition of a letter embedded in a word rather than in a pseudoword. Alongside the WSE, a Pseudoword Superiority Effect (PSE) has also been described: it is easier to recognize a letter in a legal pseudoword than in an unpronounceable nonword. To date, both the WSE and the PSE have mainly been tested with English speakers. The present study uses the Reicher-Wheeler paradigm with native speakers of Italian (a shallow orthography language). Unlike in English and French, we found a WSE for reaction times (RTs) only, whereas the PSE was significant for both accuracy and RTs. This finding indicates that, in the Reicher-Wheeler task, readers of a shallow orthography language can effectively rely on both the lexical and the sublexical routes. As to the effect of letter position, a clear advantage for the first letter position emerged, a finding suggesting fine-grained processing of the letter strings with coding of letter position, and indicating the role of visual acuity and crowding factors.

  15. Grammatical number processing and anticipatory eye movements are not tightly coordinated in English spoken language comprehension

    Directory of Open Access Journals (Sweden)

    Brian eRiordan

    2015-05-01

    Recent studies of eye movements in world-situated language comprehension have demonstrated that rapid processing of morphosyntactic information – e.g., grammatical gender and number marking – can produce anticipatory eye movements to referents in the visual scene. We investigated how the type of morphosyntactic information and the goals of language users in comprehension affected eye movements, focusing on the processing of grammatical number morphology in English-speaking adults. Participants’ eye movements were recorded as they listened to simple English declarative ("There are the lions.") and interrogative ("Where are the lions?") sentences. In Experiment 1, no differences were observed in speed to fixate target referents when grammatical number information was informative relative to when it was not. The same result was obtained in a speeded task (Experiment 2) and in a task using mixed sentence types (Experiment 3). We conclude that grammatical number processing in English and eye movements to potential referents are not tightly coordinated. These results suggest limits on the role of predictive eye movements in concurrent linguistic and scene processing. We discuss how these results can inform and constrain predictive approaches to language processing.

  16. Electrophysiological evidence for the involvement of the approximate number system in preschoolers' processing of spoken number words.

    Science.gov (United States)

    Pinhas, Michal; Donohue, Sarah E; Woldorff, Marty G; Brannon, Elizabeth M

    2014-09-01

    Little is known about the neural underpinnings of number word comprehension in young children. Here we investigated the neural processing of these words during the crucial developmental window in which children learn their meanings and asked whether such processing relies on the Approximate Number System. ERPs were recorded as 3- to 5-year-old children heard the words one, two, three, or six while looking at pictures of 1, 2, 3, or 6 objects. The auditory number word was incongruent with the number of visual objects on half the trials and congruent on the other half. Children's number word comprehension predicted their ERP incongruency effects. Specifically, children with the least number word knowledge did not show any ERP incongruency effects, whereas those with intermediate and high number word knowledge showed an enhanced, negative polarity incongruency response (N(inc)) over centroparietal sites from 200 to 500 msec after the number word onset. This negativity was followed by an enhanced, positive polarity incongruency effect (P(inc)) that emerged bilaterally over parietal sites at about 700 msec. Moreover, children with the most number word knowledge showed ratio dependence in the P(inc) (larger for greater compared with smaller numerical mismatches), a hallmark of the Approximate Number System. Importantly, a similar modulation of the P(inc) from 700 to 800 msec was found in children with intermediate number word knowledge. These results provide the first neural correlates of spoken number word comprehension in preschoolers and are consistent with the view that children map number words onto approximate number representations before they fully master the verbal count list.

  17. Comprehension of derivational morphemes in words and pseudo-words in semantic variant primary progressive aphasia

    Directory of Open Access Journals (Sweden)

    Noémie Auclair-Ouellet

    2014-04-01

    The results of the word condition alone cannot rule out the possibility that errors in the svPPA group were caused by difficulty in understanding words rather than in processing derivational morphemes. However, the lexical context provided in this condition did not speed up the performance of svPPA individuals as it did in the control group. Most importantly, results from the pseudo-word condition showed that in the svPPA group, the association between the morpheme and its meaning was not performed as readily and reliably as in the control group. These results support the involvement of semantic memory in morphological processing.

  18. How Do Raters Judge Spoken Vocabulary?

    Science.gov (United States)

    Li, Hui

    2016-01-01

    The aim of the study was to investigate how raters come to their decisions when judging spoken vocabulary. Segmental rating was introduced to quantify raters' decision-making process. It is hoped that this simulated study brings fresh insight to future methodological considerations with spoken data. Twenty trainee raters assessed five Chinese…

  19. Attention to language: novel MEG paradigm for registering involuntary language processing in the brain.

    Science.gov (United States)

    Shtyrov, Yury; Smith, Marie L; Horner, Aidan J; Henson, Richard; Nathan, Pradeep J; Bullmore, Edward T; Pulvermüller, Friedemann

    2012-09-01

    Previous research indicates that, under explicit instructions to listen to spoken stimuli or in speech-oriented behavioural tasks, the brain's responses to senseless pseudowords are larger than those to meaningful words; the reverse is true in non-attended conditions. These differential responses could be used as a tool to trace linguistic processes in the brain and their interaction with attention. However, as previous studies relied on explicit instructions to attend or ignore the stimuli, a technique for automatic attention modulation (i.e., not dependent on explicit instruction) would be more advantageous, especially when cooperation with instructions may not be guaranteed (e.g., neurological patients, children, etc.). Here we present a novel paradigm in which the stimulus context automatically draws attention to speech. In a non-attend passive auditory oddball sequence, rare words and pseudowords were presented among frequent non-speech tones of variable frequency and length. The low percentage of spoken stimuli guarantees an involuntary attention switch to them. The speech stimuli, in turn, could be disambiguated as words or pseudowords only at their end, at the last phoneme, after the attention switch would already have occurred. Our results confirmed that this paradigm can indeed be used to induce automatic shifts of attention to spoken input. At ~250 ms after stimulus onset, a P3a-like neuromagnetic deflection was registered to spoken (but not tone) stimuli, indicating an involuntary attention shift. Later, after the word-pseudoword divergence point, we found a larger oddball response to pseudowords than words, best explained by neural processes of lexical search facilitated through increased attention. Furthermore, we demonstrate a breakdown of this orderly pattern of neurocognitive processes as a result of sleep deprivation. The new paradigm may thus be an efficient way to assess language comprehension processes and their dynamic interaction with those…
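
    The paradigm hinges on stimulus statistics: rare spoken items embedded among frequent tones that vary in frequency and duration, so that speech itself captures attention, and word/pseudoword status is only decidable at the final phoneme. A sketch of generating such a sequence; the probabilities and stimulus inventories are illustrative, not the study's actual parameters:

        import random

        random.seed(1)

        # Illustrative inventories; the study's actual stimuli differ.
        tones = [(freq, dur) for freq in (440, 554, 659, 880) for dur in (0.2, 0.3, 0.4)]
        speech = ["word_a.wav", "word_b.wav", "pseudo_a.wav", "pseudo_b.wav"]

        P_SPEECH = 0.1   # rare speech deviants among frequent tones

        def make_sequence(n_trials):
            seq, last_was_speech = [], False
            for _ in range(n_trials):
                # Avoid back-to-back speech items so each deviant triggers
                # a fresh involuntary attention switch.
                if not last_was_speech and random.random() < P_SPEECH:
                    seq.append(("speech", random.choice(speech)))
                    last_was_speech = True
                else:
                    seq.append(("tone", random.choice(tones)))
                    last_was_speech = False
            return seq

        sequence = make_sequence(500)
        n_speech = sum(kind == "speech" for kind, _ in sequence)
        print(f"{n_speech} speech trials out of {len(sequence)}")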

  20. Musicians' working memory for tones, words, and pseudowords.

    Science.gov (United States)

    Benassi-Werke, Mariana E; Queiroz, Marcelo; Araújo, Rúben S; Bueno, Orlando F A; Oliveira, Maria Gabriela M

    2012-01-01

    Studies investigating factors that influence tone recognition generally use recognition tests, whereas the majority of studies on verbal material use self-generated responses in the form of serial recall tests. In the present study we investigated whether tonal and verbal materials share the same cognitive mechanisms, presenting an experimental instrument that evaluates short-term and working memory for tones using self-generated sung responses that may be compared to verbal tests. This paradigm was designed according to the same structure as the forward and backward digit span tests, but using digits, pseudowords, and tones as stimuli. The profiles of amateur singers and professional singers on these tests were compared for forward and backward digit, pseudoword, tone, and contour spans. In addition, an absolute-pitch experimental group was included, in order to observe the possible use of verbal labels in tone memorization tasks. In general, we observed that musical schooling has a slight positive influence on the recall of tones, as opposed to verbal material, which is not influenced by musical schooling. Furthermore, the ability to reproduce melodic contours (up and down patterns) is generally higher than the ability to reproduce exact tone sequences. However, backward spans were lower than forward spans for all stimuli (digits, pseudowords, tones, contours). Curiously, backward spans were disproportionately lower for tones than for verbal material; that is, the requirement to recall sequences in backward rather than forward order seems to differentially affect tonal stimuli. This difference does not vary according to musical expertise.

  1. A common neural system is activated in hearing non-signers to process French sign language and spoken French.

    Science.gov (United States)

    Courtin, Cyril; Jobard, Gael; Vigneau, Mathieu; Beaucousin, Virginie; Razafimandimby, Annick; Hervé, Pierre-Yves; Mellet, Emmanuel; Zago, Laure; Petit, Laurent; Mazoyer, Bernard; Tzourio-Mazoyer, Nathalie

    2011-01-15

    We used functional magnetic resonance imaging to investigate the areas activated by signed narratives in non-signing subjects naïve to sign language (SL) and compared this to the activation obtained when they heard speech in their mother tongue. A subset of left hemisphere (LH) language areas activated when participants watched an audio-visual narrative in their mother tongue was also activated when they observed a signed narrative. The inferior frontal (IFG) and precentral (Prec) gyri, the posterior parts of the planum temporale (pPT) and of the superior temporal sulcus (pSTS), and the occipito-temporal junction (OTJ) were activated by both languages. The activity of these regions was not related to the presence of communicative intent because no such changes were observed when the non-signers watched a muted video of a spoken narrative. Recruitment was also not triggered by the linguistic structure of SL, because the areas, except pPT, were not activated when subjects listened to an unknown spoken language. The comparison of brain reactivity for spoken and sign languages shows that SL has a special status in the brain compared to speech; in contrast to an unknown oral language, the neural correlates of SL overlap LH speech comprehension areas in non-signers. These results support the idea that strong relationships exist between areas involved in human action observation and language, suggesting that the observation of hand gestures has shaped the lexico-semantic language areas, as proposed by the motor theory of speech. As a whole, the present results support the theory of a gestural origin of language. Copyright © 2010 Elsevier Inc. All rights reserved.

  2. Non-linear processing of a linear speech stream: The influence of morphological structure on the recognition of spoken Arabic words.

    Science.gov (United States)

    Gwilliams, L; Marantz, A

    2015-08-01

    Although the significance of morphological structure is established in visual word processing, its role in auditory processing remains unclear. Using magnetoencephalography we probe the significance of the root morpheme for spoken Arabic words with two experimental manipulations. First, we compare a model of auditory processing that calculates probable lexical outcomes based on whole-word competitors against a model that considers only the root as relevant to lexical identification. Second, we assess violations of the root-specific Obligatory Contour Principle (OCP), which disallows root-initial consonant gemination. Our results show root prediction to significantly correlate with neural activity in superior temporal regions, independent of predictions based on whole-word competitors. Furthermore, words that violated the OCP constraint were significantly easier to dismiss as valid words than probability-matched counterparts. The findings suggest that lexical auditory processing is dependent upon morphological structure, and that the root forms a principal unit through which spoken words are recognised. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
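
    The contrast tested here, whole-word competitors versus root-based prediction, can be illustrated with a toy lexicon: given the phonemes heard so far, one model conditions on all whole-word completions, the other on the consonantal roots consistent with the input. A sketch with an invented, root-annotated mini-lexicon (the study's actual probability model is estimated from a real Arabic lexicon and linked to MEG activity; everything below is illustrative):

        from collections import Counter

        # Hypothetical lexicon: surface form -> (consonantal root, corpus frequency).
        lexicon = {
            "kataba": ("ktb", 50),   # invented frequencies
            "kitaab": ("ktb", 80),
            "kaatib": ("ktb", 20),
            "kasara": ("ksr", 30),
            "kabiir": ("kbr", 60),
        }

        def whole_word_cohort(prefix):
            """Frequency-weighted distribution over whole-word completions."""
            cohort = {w: f for w, (_, f) in lexicon.items() if w.startswith(prefix)}
            total = sum(cohort.values())
            return {w: f / total for w, f in cohort.items()}

        def root_cohort(prefix):
            """Distribution over roots consistent with the input so far."""
            roots = Counter()
            for w, (root, f) in lexicon.items():
                if w.startswith(prefix):
                    roots[root] += f
            total = sum(roots.values())
            return {r: f / total for r, f in roots.items()}

        print(whole_word_cohort("ka"))  # four word candidates...
        print(root_cohort("ka"))        # ...collapse onto three root hypotheses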

  3. Training Pseudoword Reading in Acquired Dyslexia: A Phonological Complexity Approach

    Science.gov (United States)

    Riley, Ellyn A.; Thompson, Cynthia K.

    2015-01-01

    Background: Individuals with acquired phonological dyslexia experience difficulty associating written letters with corresponding sounds, especially in pseudowords. Previous studies have shown that reading can be improved in these individuals by training letter-sound correspondence, practicing phonological skills, or using combined approaches. However, generalization to untrained items is typically limited. Aims: We investigated whether principles of phonological complexity can be applied to training letter-sound correspondence reading in acquired phonological dyslexia to improve generalization to untrained words. Based on previous work in other linguistic domains, we hypothesized that training phonologically “more complex” material (i.e., consonant clusters with small sonority differences) would result in generalization to phonologically “less complex” material (i.e., consonant clusters with larger sonority differences), but this generalization pattern would not be demonstrated when training the “less complex” material. Methods & Procedures: We used a single-participant, multiple baseline design across participants and behaviors to examine phonological complexity as a training variable in five individuals. Based on participants' error data from a previous experiment, a “more complex” onset and a “less complex” onset were selected for training for each participant. Training order assignment was pseudo-randomized and counterbalanced across participants. Three participants were trained in the “more complex” condition and two in the “less complex” condition while tracking oral reading accuracy of both onsets. Outcomes & Results: As predicted, participants trained in the “more complex” condition demonstrated improved pseudoword reading of the trained cluster and generalization to pseudowords with the untrained, “simple” onset, but not vice versa. Conclusions: These findings suggest phonological complexity can be used to improve…
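
    The complexity metric invoked here is computable from a standard sonority scale: an onset cluster's sonority difference is the sonority rise from its first to its second consonant, and smaller rises count as more complex. A brief sketch; the scale values and phone classification below are one common convention, not necessarily the one used in the study:

        # One common sonority scale (illustrative; scales differ across analyses).
        SONORITY = {"stop": 1, "fricative": 2, "nasal": 3, "liquid": 4, "glide": 5}
        PHONE_CLASS = {
            "p": "stop", "t": "stop", "k": "stop", "b": "stop", "d": "stop",
            "f": "fricative", "s": "fricative",
            "m": "nasal", "n": "nasal",
            "l": "liquid", "r": "liquid",
            "w": "glide", "j": "glide",
        }

        def sonority_difference(onset):
            """Sonority rise across a two-consonant onset; smaller = more complex."""
            first, second = onset
            return SONORITY[PHONE_CLASS[second]] - SONORITY[PHONE_CLASS[first]]

        # Rank some onsets from most to least complex (smallest rise first).
        onsets = ["pl", "pr", "fn", "sm", "pw"]
        for onset in sorted(onsets, key=sonority_difference):
            print(onset, sonority_difference(onset))
        # Training small-rise onsets (e.g., 'fn') is predicted to generalise to
        # larger-rise onsets (e.g., 'pw'), but not the reverse.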

  4. Spoken Dialogue Systems

    CERN Document Server

    Jokinen, Kristiina

    2009-01-01

    Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides…

  5. The interaction of lexical semantics and cohort competition in spoken word recognition: an fMRI study.

    Science.gov (United States)

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A; Marslen-Wilson, William D; Tyler, Lorraine K

    2011-12-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning [Marslen-Wilson, W. D. Functional parallelism in spoken word-recognition. Cognition, 25, 71-102, 1987]. We examined these potential interactions in an fMRI study by presenting participants with words and pseudowords for lexical decision. In a factorial design, we manipulated (a) cohort competition (high/low competitive cohorts which vary the number of competing word candidates) and (b) the word's semantic properties (high/low imageability). A previous behavioral study [Tyler, L. K., Voice, J. K., & Moss, H. E. The interaction of meaning and sound in spoken word recognition. Psychonomic Bulletin & Review, 7, 320-326, 2000] showed that imageability facilitated word recognition but only for words in high competition cohorts. Here we found greater activity in the left inferior frontal gyrus (BA 45, 47) and the right inferior frontal gyrus (BA 47) with increased cohort competition, an imageability effect in the left posterior middle temporal gyrus/angular gyrus (BA 39), and a significant interaction between imageability and cohort competition in the left posterior superior temporal gyrus/middle temporal gyrus (BA 21, 22). In words with high competition cohorts, high imageability words generated stronger activity than low imageability words, indicating a facilitatory role of imageability in a highly competitive cohort context. For words in low competition cohorts, there was no effect of imageability. These results support the behavioral data in showing that selection processes do not rely solely on bottom-up acoustic-phonetic cues but rather that the semantic properties of candidate words facilitate discrimination between competitors.

  6. Featuring Old/New Recognition: The Two Faces of the Pseudoword Effect

    Science.gov (United States)

    Joordens, Steve; Ozubko, Jason D.; Niewiadomski, Marty W.

    2008-01-01

    In his analysis of the pseudoword effect [Greene, R.L. (2004). Recognition memory for pseudowords. "Journal of Memory and Language," 50, 259-267], Greene suggests that nonwords can feel more familiar than words in a recognition context if the orthographic features of the nonword match well with the features of the items presented at study. One possible…

  7. Speed discrimination predicts word but not pseudo-word reading rate in adults and children.

    Science.gov (United States)

    Main, Keith L; Pestilli, Franco; Mezer, Aviv; Yeatman, Jason; Martin, Ryan; Phipps, Stephanie; Wandell, Brian

    2014-11-01

    Visual processing in the magnocellular pathway is a reputed influence on word recognition and reading performance. However, the mechanisms behind this relationship are still unclear. To explore this concept, we measured reading rate, speed discrimination, and contrast detection thresholds in adults and children with a wide range of reading abilities. We found that speed discrimination thresholds are higher in children than in adults and are correlated with age. Speed discrimination thresholds are also correlated with reading rates, but only for real words, not pseudo-words. Conversely, we found no correlations between contrast detection thresholds and the reading rates. We also found no correlations between speed discrimination or contrast detection and WASI subtest scores. These findings indicate that familiarity is a factor in magnocellular operations that may influence reading rate. We suggest this effect supports the idea that the magnocellular pathway contributes to word reading through an analysis of letter position. Published by Elsevier Inc.

  8. Teaching Spoken Spanish

    Science.gov (United States)

    Lipski, John M.

    1976-01-01

    The need to teach students speaking skills in Spanish, and to choose among the many standard dialects spoken in the Hispanic world (as well as literary and colloquial speech), presents a challenge to the Spanish teacher. Some phonetic considerations helpful in solving these problems are offered. (CHK)

  9. Teaching the Spoken Language.

    Science.gov (United States)

    Brown, Gillian

    1981-01-01

    Issues involved in teaching and assessing communicative competence are identified and applied to adolescent native English speakers with low levels of academic achievement. A distinction is drawn between transactional versus interactional speech, short versus long speaking turns, and spoken language influenced or not influenced by written…

  10. Neuroimaging studies of word and pseudoword reading: consistencies, inconsistencies, and limitations.

    Science.gov (United States)

    Mechelli, Andrea; Gorno-Tempini, Maria Luisa; Price, Cathy J

    2003-02-15

    Several functional neuroimaging studies have compared words and pseudowords to test different cognitive models of reading. There are difficulties with this approach, however, because cognitive models do not make clear-cut predictions at the neural level. Therefore, results can only be interpreted on the basis of prior knowledge of cognitive anatomy. Furthermore, studies comparing words and pseudowords have produced inconsistent results. The inconsistencies could reflect false-positive results due to the low statistical thresholds applied or confounds from nonlexical aspects of the stimuli. Alternatively, they may reflect true effects that are inconsistent across subjects; dependent on experimental parameters such as stimulus rate or duration; or not replicated across studies because of insufficient statistical power. In this fMRI study, we investigate consistent and inconsistent differences between word and pseudoword reading in 20 subjects, and distinguish between effects associated with increases and decreases in activity relative to fixation. In addition, the interaction of word type with stimulus duration is explored. We find that words and pseudowords activate the same set of regions relative to fixation, and within this system, there is greater activation for pseudowords than words in the left frontal operculum, left posterior inferior temporal gyrus, and the right cerebellum. The only effects of words relative to pseudowords consistent over subjects are due to decreases in activity for pseudowords relative to fixation; and there are no significant interactions between word type and stimulus duration. Finally, we observe inconsistent but highly significant effects of word type at the individual subject level. These results (i) illustrate that pseudowords place increased demands on areas that have previously been linked to lexical retrieval, and (ii) highlight the importance of including one or more baselines to qualify word type effects. Furthermore, (iii

  11. Accessing the spoken word

    OpenAIRE

    Goldman, Jerry; Renals, Steve; Bird, Steven; de Jong, Franciska; Federico, Marcello; Fleischhauer, Carl; Kornbluh, Mark; Lamel, Lori; Oard, Douglas W; Stewart, Claire; Wright, Richard

    2005-01-01

    Spoken-word audio collections cover many domains, including radio and television broadcasts, oral narratives, governmental proceedings, lectures, and telephone conversations. The collection, access, and preservation of such data is stimulated by political, economic, cultural, and educational needs. This paper outlines the major issues in the field, reviews the current state of technology, examines the rapidly changing policy issues relating to privacy and copyright, and presents issues relati...

  12. Automatic processing of unattended lexical information in visual oddball presentation: neurophysiological evidence

    Directory of Open Access Journals (Sweden)

    Yury Shtyrov

    2013-08-01

    Previous electrophysiological studies of automatic language processing revealed early (100-200 ms) reflections of access to lexical characteristics of the speech signal using the so-called mismatch negativity (MMN), a negative ERP deflection elicited by infrequent irregularities in unattended repetitive auditory stimulation. In those studies, lexical processing of spoken stimuli became manifest as an enhanced ERP in response to unattended real words as opposed to phonologically matched but meaningless pseudoword stimuli. This lexical ERP enhancement was explained by automatic activation of word memory traces realised as distributed, strongly intra-connected neuronal circuits, whose robustness guarantees memory trace activation even in the absence of attention to spoken input. Such an account would predict the automatic activation of these memory traces upon any presentation of linguistic information, irrespective of the presentation modality. As previous lexical MMN studies exclusively used auditory stimulation, we here adapted the lexical MMN paradigm to investigate early automatic lexical effects in the visual modality. In a visual oddball sequence, matched short word and pseudoword stimuli were presented tachistoscopically in the perifoveal area outside the visual focus of attention, while the subjects’ attention was concentrated on a concurrent non-linguistic visual dual task in the centre of the screen. Using EEG, we found a visual analogue of the lexical ERP enhancement effect, with unattended written words producing larger brain response amplitudes than matched pseudowords, starting at ~100 ms. Furthermore, we also found a significant visual MMN, reported here for the first time for unattended lexical stimuli presented perifoveally. The data suggest early automatic lexical processing of visually presented language outside the focus of attention.

  13. SPOKEN CORPORA: RATIONALE AND APPLICATION

    Directory of Open Access Journals (Sweden)

    John Newman

    2008-12-01

    Despite the abundance of electronic corpora now available to researchers, corpora of natural speech are still relatively rare and relatively costly. This paper suggests reasons why spoken corpora are needed, despite the formidable problems of construction. The multiple purposes of such corpora and the involvement of very different kinds of language communities in such projects mean that there is no one single blueprint for the design, markup, and distribution of spoken corpora. A number of different spoken corpora are reviewed to illustrate a range of possibilities for the construction of spoken corpora.

  14. Growth of Word and Pseudoword Reading Efficiency in Alphabetic Orthographies: Impact of Consistency.

    Science.gov (United States)

    Caravolas, Markéta

    2017-07-01

    Word and pseudoword reading are related abilities fundamental to reading development in alphabetic orthographies. They are assumed to index, respectively, children's orthographic representations of words and the underlying “self-teaching mechanism” of alphabetic pseudoword decoding through which those representations are acquired. Little is known about concurrent growth trajectories of these skills in the early grades among children learning different alphabetic orthographies. In the present study, between- and within-group latent growth models of word and pseudoword reading efficiency were tested on data spanning Grades 1 and 2 from learners of the inconsistent English and consistent Czech and Slovak orthographies. Several language-general patterns emerged. Significant growth was observed for both skills in all languages. Growth was faster for word than pseudoword reading efficiency, and strong lexicality effects that increased over time were obtained across languages. Language-specific patterns were also found. In line with predictions about the costs of learning lower-consistency orthographies, readers of English experienced relatively slower growth on both reading skills. However, their lag was smaller, and evident only at the latter two time points, for word reading. In contrast, on pseudoword reading, the English group performed considerably less well than their Czech and Slovak peers at every time point. Thus, weaker decoding skills were the main contributor to the larger lexicality effects of the English group. These findings are considered within the frame of recent theorizing about the effect of orthographic consistency on decoding as a self-teaching mechanism in alphabetic reading acquisition.
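
    As a rough illustration of the growth-trajectory logic in this abstract (ordinary per-child regressions rather than the latent growth models the author actually fitted), the following Python sketch simulates longitudinal reading-efficiency scores for two orthography groups and compares their mean growth slopes; every number is simulated:

        import numpy as np

        rng = np.random.default_rng(0)
        times = np.array([0.0, 0.5, 1.0])  # three assessments, Grade 1 to Grade 2

        def simulate_group(n, start, rate):
            """Reading-efficiency scores: intercept + slope * time + noise."""
            slopes = rng.normal(rate, 2.0, n)
            return start + np.outer(slopes, times) + rng.normal(0.0, 1.0, (n, 3))

        def mean_slope(scores):
            """Mean of per-child OLS growth slopes over the three time points."""
            t = times - times.mean()
            return ((scores - scores.mean(axis=1, keepdims=True)) @ t / (t @ t)).mean()

        english = simulate_group(100, start=10.0, rate=12.0)  # slower simulated growth
        czech = simulate_group(100, start=10.0, rate=16.0)    # faster simulated growth
        print(mean_slope(english), mean_slope(czech))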

  15. Immediate memory for pseudowords and phonological awareness are associated in adults and pre-reading children

    Science.gov (United States)

    Clark, Nathaniel B.; McRoberts, Gerald W.; Van Dyke, Julie A.; Shankweiler, Donald P.; Braze, David

    2016-01-01

    This study investigated phonological components of reading skill at two ages, using a novel pseudoword repetition task for assessing phonological memory (PM). Pseudowords were designed to incorporate control over segmental, prosodic and lexical features. In experiment 1, the materials were administered to 3- and 4-year-old children together with a standardized test of phonological awareness (PA). PA and pseudoword repetition showed a moderate positive correlation, independent of age. Experiment 2, which targeted young adults, employed the same pseudoword materials, with a different administration protocol, together with standardized indices of PA, other memory measures, and decoding skill. The results showed moderate to strong positive correlations among our novel pseudoword repetition task, measures of PM and PA, and decoding. Together, the findings demonstrate the feasibility of assessing PM with the same carefully controlled materials at widely spaced points in age, adding to present resources for assessing phonological memory and better enabling future studies to map the development of relationships among phonological capabilities in both typically developing children and those with language-related impairments. PMID:22690715

  16. Phonological processing of rhyme in spoken language and location in sign language by deaf and hearing participants: a neurophysiological study.

    Science.gov (United States)

    Colin, C; Zuinen, T; Bayard, C; Leybaert, J

    2013-06-01

    Sign languages (SL), like oral languages (OL), organize elementary, meaningless units into meaningful semantic units. Our aim was to compare, at behavioral and neurophysiological levels, the processing of the location parameter in French Belgian SL to that of the rhyme in oral French. Ten hearing and 10 profoundly deaf adults performed a rhyme judgment task in OL and a similarity judgment on location in SL. Stimuli were pairs of pictures. As regards OL, deaf subjects' performances, although above chance level, were significantly lower than those of hearing subjects, suggesting that a metaphonological analysis is possible for deaf people but rests on phonological representations that are less precise than in hearing people. As regards SL, deaf subjects' scores indicated that a metaphonological judgment may be performed on location. The contingent negative variation (CNV) evoked by the first picture of a pair was similar in hearing subjects in OL and in deaf subjects in OL and SL. However, an N400 evoked by the second picture of the non-rhyming pairs was evidenced only in hearing subjects in OL. The absence of an N400 in deaf subjects may be interpreted as a failure to associate two words according to their rhyme in OL or to their location in SL. Although deaf participants can perform metaphonological judgments in OL, they differ from hearing participants both behaviorally and in ERPs. Judgment of location in SL is possible for deaf signers but, contrary to rhyme judgment in hearing participants, does not elicit any N400. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  17. The effect of phonotactic probability and neighbourhood density on pseudoword learning in 6- and 7-year-old children

    NARCIS (Netherlands)

    van der Kleij, S.W.; Rispens, J.E.; Scheper, A.R.

    2016-01-01

    The aim of this study was to examine the influence of phonotactic probability (PP) and neighbourhood density (ND) on pseudoword learning in 17 Dutch-speaking typically developing children (mean age 7;2). They were familiarized with 16 one-syllable pseudowords varying in PP (high vs low) and ND (high

  18. The Inclusion of Pseudowords within the Year One Phonics "Screening Check" in English Primary Schools

    Science.gov (United States)

    Gibson, Howard; England, Jennifer

    2016-01-01

    The paper highlights problems surrounding the Year 1 Phonics Screening Check that has accompanied the legislative framework for synthetic phonics in English primary schools. It investigates the inclusion of pseudowords and raises questions regarding their generation and categorization, the rationale for their inclusion and the assumption that the…

  19. The neural circuitry involved in the reading of German words and pseudowords: A PET study

    NARCIS (Netherlands)

    Hagoort, P.; Indefrey, P.; Brown, C.; Herzog, H.; Steinmetz, H.; Seitz, R.J.

    1999-01-01

    Silent reading and reading aloud of German words and pseudowords were used in a PET study using (15O) butanol to examine the neural correlates of reading and of the phonological conversion of legal letter strings, with or without meaning. The results of 11 healthy, right-handed volunteers in the age

  20. Can cognitive models explain brain activation during word and pseudoword reading? A meta-analysis of 36 neuroimaging studies.

    Science.gov (United States)

    Taylor, J S H; Rastle, Kathleen; Davis, Matthew H

    2013-07-01

    Reading in many alphabetic writing systems depends on both item-specific knowledge used to read irregular words (sew, yacht) and generative spelling-sound knowledge used to read pseudowords (tew, yash). Research into the neural basis of these abilities has been directed largely by cognitive accounts proposed by the dual-route cascaded and triangle models of reading. We develop a framework that enables predictions for neural activity to be derived from cognitive models of reading using 2 principles: (a) the extent to which a model component or brain region is engaged by a stimulus and (b) how much effort is exerted in processing that stimulus. To evaluate the derived predictions, we conducted a meta-analysis of 36 neuroimaging studies of reading using the quantitative activation likelihood estimation technique. Reliable clusters of activity are localized during word versus pseudoword and irregular versus regular word reading and demonstrate a great deal of convergence between the functional organization of the reading system put forward by cognitive models and the neural systems activated during reading tasks. Specifically, left-hemisphere activation clusters are revealed reflecting orthographic analysis (occipitotemporal cortex), lexical and/or semantic processing (anterior fusiform, middle temporal gyrus), spelling-sound conversion (inferior parietal cortex), and phonological output resolution (inferior frontal gyrus). Our framework and results establish that cognitive models of reading are relevant for interpreting neuroimaging studies and that neuroscientific studies can provide data relevant for advancing cognitive models. This article thus provides a firm empirical foundation from which to improve integration between cognitive and neural accounts of the reading process. 2013 APA, all rights reserved
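
    The activation likelihood estimation (ALE) technique named above has a compact core: each reported peak coordinate is blurred into a Gaussian probability map, maps are combined within a study, and studies are merged by probabilistic union. The Python sketch below illustrates that core on a toy voxel grid; the grid size, smoothing width, and coordinates are arbitrary stand-ins, not values from the meta-analysis:

        import numpy as np

        # 20x20x20 toy voxel grid; each voxel holds its (x, y, z) coordinates.
        GRID = np.stack(np.meshgrid(*[np.arange(20)] * 3, indexing="ij"), axis=-1)

        def focus_map(peak, sigma=2.0):
            """Gaussian 'modeled activation' map around one reported peak."""
            d2 = ((GRID - np.asarray(peak)) ** 2).sum(axis=-1)
            return np.exp(-d2 / (2 * sigma ** 2))

        def study_map(peaks):
            """Per-study map: probability that at least one focus is at a voxel."""
            return 1.0 - np.prod(1.0 - np.stack([focus_map(p) for p in peaks]), axis=0)

        def ale(studies):
            """ALE value at each voxel: probabilistic union across studies."""
            return 1.0 - np.prod(1.0 - np.stack([study_map(s) for s in studies]), axis=0)

        # Three toy 'studies'; two report peaks near (5, 5, 5), one elsewhere.
        toy = [[(5, 5, 5), (6, 5, 5)], [(5, 6, 5)], [(14, 3, 9)]]
        print(ale(toy).max())  # highest ALE falls near the converging peaks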

  1. Utility of spoken dialog systems

    CSIR Research Space (South Africa)

    Barnard, E

    2008-12-01

    The commercial successes of spoken dialog systems in the developed world provide encouragement for their use in the developing world, where speech could play a role in the dissemination of relevant information in local languages. We investigate...

  2. Orthographic effects in spoken word recognition: Evidence from Chinese.

    Science.gov (United States)

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  3. Convergent and diagnostic validity of STAVUX, a word and pseudoword spelling test for adults.

    Science.gov (United States)

    Östberg, Per; Backlund, Charlotte; Lindström, Emma

    2016-10-01

    Few comprehensive spelling tests are available in Swedish, and none have been validated in adults with reading and writing disorders. The recently developed STAVUX test includes word and pseudoword spelling subtests with high internal consistency and adult norms stratified by education. This study evaluated the convergent and diagnostic validity of STAVUX in adults with dyslexia. Forty-six adults, 23 with dyslexia and 23 controls, took STAVUX together with a standard word-decoding test and a self-rated measure of spelling skills. STAVUX subtest scores showed moderate to strong correlations with word-decoding scores and predicted self-rated spelling skills. Word and pseudoword subtest scores both predicted dyslexia status. Receiver-operating characteristic (ROC) analysis showed excellent diagnostic discriminability. Sensitivity was 91% and specificity 96%. In conclusion, the results of this study support the convergent and diagnostic validity of STAVUX.

  4. Elementary School Students' Spoken Activities and Their Responses in Math Learning by Peer-Tutoring

    Science.gov (United States)

    Baiduri

    2017-01-01

    Students' activities in the learning process are an important indicator of the quality of that process; one such activity is spoken activity. This study was intended to analyze elementary school students' spoken activities and their responses in Math learning through peer tutoring. A descriptive qualitative design was employed by means…

  5. Towards Adaptive Spoken Dialog Systems

    CERN Document Server

    Schmitt, Alexander

    2013-01-01

    In Monitoring Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for the recognition of negative dialog patterns in Spoken Dialog Systems (SDS). The presented stochastic methods allow flexible, portable, and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develops novel approaches to speech-based emotion recognition, using a hybrid approach to model emotions. The authors make use of statistical methods based on acoustic, linguistic, and contextual features to examine the relationship between the interaction flow and the occurrence of emotions, using non-acted recordings of several thousand real users from commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m...

  6. Event-related potentials reflecting the frequency of unattended spoken words

    DEFF Research Database (Denmark)

    Shtyrov, Yury; Kimppa, Lilli; Pulvermüller, Friedemann

    2011-01-01

    How are words represented in the human brain, and can these representations be qualitatively assessed with respect to their structure and properties? Recent research demonstrates that neurophysiological signatures of individual words can be measured when subjects do not focus their attention... in passive non-attend conditions, with acoustically matched high- and low-frequency words along with pseudo-words. Using factorial and correlation analyses, we found that already at ~120 ms after the spoken stimulus information was available, the amplitude of brain responses was modulated by the words' lexical... for the most frequent word stimuli; later on (~270 ms), a more global lexicality effect with bilateral perisylvian sources was found for all stimuli, suggesting faster access to more frequent lexical entries. Our results support the account of word memory traces as interconnected neuronal circuits, and suggest...

  7. Recognizing Young Readers' Spoken Questions

    Science.gov (United States)

    Chen, Wei; Mostow, Jack; Aist, Gregory

    2013-01-01

    Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…

  8. Correlative Conjunctions in Spoken Texts

    Czech Academy of Sciences Publication Activity Database

    Poukarová, Petra

    2017-01-01

    Vol. 68, No. 2 (2017), pp. 305-315 ISSN 0021-5597 R&D Projects: GA ČR GA15-01116S Institutional support: RVO:68378092 Keywords: correlative conjunctions * spoken Czech * cohesion Subject RIV: AI - Linguistics OBOR OECD: Linguistics http://www.juls.savba.sk/ediela/jc/2017/2/jc17-02.pdf

  9. Mobile Information Access with Spoken Query Answering

    DEFF Research Database (Denmark)

    Brøndsted, Tom; Larsen, Henrik Legind; Larsen, Lars Bo

    2006-01-01

    This paper addresses the problem of information and service accessibility in mobile devices with limited resources. A solution is developed and tested through a prototype that applies state-of-the-art Distributed Speech Recognition (DSR) and knowledge-based Information Retrieval (IR) processing... for spoken query answering. For the DSR part, a configurable DSR system is implemented on the basis of the ETSI-DSR advanced front-end and the SPHINX IV recognizer. For the knowledge-based IR part, a distributed system solution is developed for fast retrieval of the most relevant documents, with a text...

  10. New evidence for phonological processing during visual word recognition: the case of Arabic.

    Science.gov (United States)

    Bentin, S; Ibrahim, R

    1996-03-01

    Lexical decision and naming were examined with words and pseudowords in literary Arabic and with transliterations of words in a Palestinian dialect that has no written form. Although the transliterations were visually unfamiliar, they were not easily rejected in lexical decision, and they were more slowly accepted in phonologically based lexical decision. Naming transliterations of spoken words was slower than naming of literary words and pseudowords. Apparently, phonological computation is mandatory for both lexical decision and naming. A large frequency effect in both lexical decision and naming suggests that addressed phonology is an option for familiar orthographic patterns. The frequency effect on processing transliterations indicated that lexical phonology is involved with prelexical phonological computation even if addressed phonology is not possible. These data support a combination between a cascade-type process, in which partial products of the grapheme-to-phoneme translation activate phonological units in the lexicon, and an interactive model, in which the activated lexical units feed back, shaping the prelexical phonological computation process.

  11. SOA-dependent N400 and P300 semantic priming effects using pseudoword primes and a delayed lexical decision.

    Science.gov (United States)

    Hill, Holger; Ott, Friederike; Weisbrod, Matthias

    2005-06-01

    In a previous semantic priming study, we found a semantic distance effect on the lexical-decision-related P300 only when the SOA was short (150 ms), but no difference in RT and N400 priming effects between short and long (700 ms) SOAs. To investigate this further, we separated priming from lexical decision by using a delayed lexical decision in the present study. In the short SOA only, primed targets evoked an early peaking (approximately 480 ms) P300-like component, probably because the subject detected the semantic relationship implicitly. We hypothesize that in tasks requiring an immediate lexical decision, this early P300 and the later lexical decision P300 (approximately 600 ms) are additive. Secondly, we found both a direct and an indirect priming effect for both SOAs in the ERP amplitude of the N400 time window. However, the N400 component itself was considerably larger in the long SOA than in the short SOA. We interpret this finding as an ERP correlate of deeper semantic processing in the long SOA, due to increased attention provoked by the use of pseudoword primes. In contrast, in the short SOA, subjects might have used shallower semantic processing. N400, P300, and RTs are all sensitive to semantic priming, but the modulation patterns are not consistent. This raises the question as to which variable reflects an immediate physiological correlate of semantic priming, and which variable reflects co-occurring processes associated with semantic priming.

  12. Individual Differences in Online Spoken Word Recognition: Implications for SLI

    Science.gov (United States)

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2010-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have…

  13. Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study

    Science.gov (United States)

    Moeller, Aleidine J.; Theiler, Janine

    2014-01-01

    Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…

  14. Increased facilitatory connectivity from the pre-SMA to the left dorsal premotor cortex during pseudoword repetition

    DEFF Research Database (Denmark)

    Hartwigsen, Gesa; Saur, Dorothee; Price, Cathy J

    2013-01-01

    Previous studies have demonstrated that the repetition of pseudowords engages a network of premotor areas for articulatory planning and articulation. However, it remains unclear how these premotor areas interact and drive one another during speech production. We used fMRI with dynamic causal... were common to repetition in both modalities. We thus obtained three seed regions: the bilateral pre-SMA, left dorsal premotor cortex (PMd), and left ventral premotor cortex, which were used to test 63 different models of effective connectivity in the premotor network for pseudoword relative to word repetition. The optimal model was identified with Bayesian model selection and reflected a network with driving input to pre-SMA and an increase in facilitatory drive from pre-SMA to PMd during repetition of pseudowords. The task-specific increase in effective connectivity from pre-SMA to left PMd suggests...

  15. Pedagogy for Liberation: Spoken Word Poetry in Urban Schools

    Science.gov (United States)

    Fiore, Mia

    2015-01-01

    The Black Arts Movement of the 1960s and 1970s, hip hop of the 1980s and early 1990s, and spoken word poetry have each attempted to initiate the dialogical process outlined by Paulo Freire as necessary in overturning oppression. Each art form has done this by critically engaging with the world and questioning dominant systems of power. However,…

  16. Porting a spoken language identification system to a new environment.

    CSIR Research Space (South Africa)

    Peche, M

    2008-11-01

    ...the carefully selected training data used to construct the system initially. The authors investigated the process of porting a Spoken Language Identification (S-LID) system to a new environment and describe methods to prepare it for more effective use...

  17. Spoken Indian language identification: a review of features and ...

    Indian Academy of Sciences (India)

    BAKSHI AARTI

    2018-04-12

    ...sound of that language. These language-specific properties can be exploited to identify a spoken language reliably. Automatic language identification has emerged as a prominent research area in Indian language processing. People from different regions of India speak around 800 different languages.

  18. Introducing Spoken Dialogue Systems into Intelligent Environments

    CERN Document Server

    Heinroth, Tobias

    2013-01-01

    Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager (ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive...

  19. Neural Correlates of Sublexical Processing in Phonological Working Memory

    Science.gov (United States)

    McGettigan, Carolyn; Warren, Jane E.; Eisner, Frank; Marshall, Chloe R.; Shanmugalingam, Pradheep; Scott, Sophie K.

    2011-01-01

    This study investigated links between working memory and speech processing systems. We used delayed pseudoword repetition in fMRI to investigate the neural correlates of sublexical structure in phonological working memory (pWM). We orthogonally varied the number of syllables and consonant clusters in auditory pseudowords and measured the neural…

  20. Language Non-Selective Activation of Orthography during Spoken Word Processing in Hindi-English Sequential Bilinguals: An Eye Tracking Visual World Study

    Science.gov (United States)

    Mishra, Ramesh Kumar; Singh, Niharika

    2014-01-01

    Previous psycholinguistic studies have shown that bilinguals activate lexical items of both the languages during auditory and visual word processing. In this study we examined if Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non target language while listening to words either…

  1. Time course of Chinese monosyllabic spoken word recognition: evidence from ERP analyses.

    Science.gov (United States)

    Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua

    2011-06-01

    Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could be different from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not these two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our findings showed that the three partial mismatches (onset, rime, and tone mismatches) equally modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400 ms after the spoken word), whereas the syllable-mismatched words elicited an earlier and stronger N400 than the three partial-mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure, or syllable-based holistic processing, rather than on phonemic segment-based processing. We interpret the differences in spoken word…

  2. Professionals' Guidance about Spoken Language Multilingualism and Spoken Language Choice for Children with Hearing Loss

    Science.gov (United States)

    Crowe, Kathryn; McLeod, Sharynne

    2016-01-01

    The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…

  3. Talker and background noise specificity in spoken word recognition memory

    Directory of Open Access Journals (Sweden)

    Angela Cooper

    2017-11-01

    Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a function of consistency versus variation in the talker’s voice (talker condition) and background noise (noise condition) using a delayed recognition memory paradigm. The speech and noise signals were spectrally separated, such that changes in a simultaneously presented non-speech signal (background noise) from exposure to test would not be accompanied by concomitant changes in the target speech signal. The results revealed that listeners can encode both signal-intrinsic talker and signal-extrinsic noise information into integrated cognitive representations, critically even when the two auditory streams are spectrally non-overlapping. However, the extent to which extra-linguistic episodic information is encoded alongside linguistic information appears to be modulated by syllabic characteristics, with specificity effects found only for monosyllabic items. These findings suggest that encoding and retrieval of episodic information during spoken word processing may be modulated by lexical characteristics.

  4. Spoken Language Understanding Software for Language Learning

    Directory of Open Access Journals (Sweden)

    Hassan Alam

    2008-04-01

    In this paper we describe a preliminary, work-in-progress Spoken Language Understanding Software (SLUS) with tailored feedback options, which uses an interactive spoken language interface to teach Iraqi Arabic and culture to second language learners. The SLUS analyzes input speech from the second language learner and grades it for correct pronunciation in terms of supra-segmental and rudimentary segmental errors, such as missing consonants. We evaluated this software on training data with the help of two native speakers, and found that the software recorded an accuracy of around 70% in the law-and-order domain. For future work, we plan to develop similar systems for multiple languages.

  5. Spoken Language Understanding Systems for Extracting Semantic Information from Speech

    CERN Document Server

    Tur, Gokhan

    2011-01-01

    Spoken language understanding (SLU) is an emerging field between speech and language processing, investigating human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning, and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances, and their applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, usin...

  6. Czech spoken in Bohemia and Moravia

    NARCIS (Netherlands)

    Šimáčková, Š.; Podlipský, V.J.; Chládková, K.

    2012-01-01

    As a western Slavic language of the Indo-European family, Czech is closest to Slovak and Polish. It is spoken as a native language by nearly 10 million people in the Czech Republic (Czech Statistical Office n.d.). About two million people living abroad, mostly in the USA, Canada, Austria, Germany,

  7. Artfulness in Young Children's Spoken Narratives

    Science.gov (United States)

    Glenn-Applegate, Katherine; Breit-Smith, Allison; Justice, Laura M.; Piasta, Shayne B.

    2010-01-01

    Research Findings: Artfulness is rarely considered as an indicator of quality in young children's spoken narratives. Although some studies have examined artfulness in the narratives of children 5 and older, no studies to date have focused on the artfulness of preschoolers' oral narratives. This study examined the artfulness of fictional spoken…

  8. A Mother Tongue Spoken Mainly by Fathers.

    Science.gov (United States)

    Corsetti, Renato

    1996-01-01

    Reviews what is known about Esperanto as a home language and first language. Recorded cases of Esperanto-speaking families have been known since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggest that this "artificial bilingualism" can be as successful…

  9. Spoken Grammar and Its Role in the English Language Classroom

    Science.gov (United States)

    Hilliard, Amanda

    2014-01-01

    This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…

  10. Italian children with dyslexia are also poor in reading English words, but accurate in reading English pseudowords.

    Science.gov (United States)

    Palladino, Paola; Bellagamba, Isabella; Ferrari, Marcella; Cornoldi, Cesare

    2013-08-01

    It has been argued that children with dyslexia (DC) are poor at learning a foreign language (L2) and, in particular, reading foreign words. This assumption is so general that an Italian law (law 170, October, 2010) has established that DC may be completely exempted from foreign language learning and, in any case, should not be engaged in tuition via written material. However, evidence of L2 difficulties of DC is scarce and, in particular, absent for Italian children learning English. This absence of data is problematic, as it precludes information on the pattern of weaknesses and strengths, which could be found in DC. The present paper assessed these issues by administering an English word and pseudoword reading test to 23 DC and to 23 control children, matched for age, gender, schooling and IQ. The patterns of difficulties were examined individually for accuracy and speed, and the role of measures of native (L1) competence in L2 difficulties was also taken into account. Results confirmed that Italian DC are also poor in reading English words. However, they are accurate in reading pseudowords, suggesting that they have assimilated English pronunciation rules. Difficulties in L2 were, to some extent, but not completely, explained by difficulties in reading in L1. Copyright © 2013 John Wiley & Sons, Ltd.

  11. Role of Working Memory in Children's Understanding Spoken Narrative: A Preliminary Investigation

    Science.gov (United States)

    Montgomery, James W.; Polunenko, Anzhela; Marinellie, Sally A.

    2009-01-01

    The role of phonological short-term memory (PSTM), attentional resource capacity/allocation, and processing speed on children's spoken narrative comprehension was investigated. Sixty-seven children (6-11 years) completed a digit span task (PSTM), concurrent verbal processing and storage (CPS) task (resource capacity/allocation), auditory-visual…

  12. Phonological Analysis of University Students’ Spoken Discourse

    Directory of Open Access Journals (Sweden)

    Clara Herlina

    2011-04-01

    The study of discourse is the study of language in actual use. In this article, the writer investigates the phonological features, both segmental and supra-segmental, in the spoken discourse of Indonesian university students. The data were taken from recordings of 15 conversations by 30 students of Bina Nusantara University who were taking the English Entrant subject (TOEFL-iBT). The writer concludes that the students are still influenced by their first language in their spoken discourse, resulting in English with an Indonesian accent. Even though this does not cause misunderstanding at the moment, it may become problematic if they have to communicate in the real world.

  13. Spoken word recognition without a TRACE

    Science.gov (United States)

    Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan

    2013-01-01

    How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, raising the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
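
    The string-kernel idea in this abstract can be sketched briefly: coding a word as a bag of ordered phoneme pairs ("open diphones") yields a time-invariant vector, so units need not be duplicated at every time step as in TRACE. The Python below is a minimal illustration of that representation; the phoneme strings and the cosine similarity measure are illustrative choices, not the published model's exact specification:

        from collections import Counter
        from itertools import combinations
        from math import sqrt

        def open_diphones(phonemes):
            """Bag of ordered phoneme pairs, e.g. 'kat' -> ka, kt, at."""
            return Counter(a + b for a, b in combinations(phonemes, 2))

        def similarity(word1, word2):
            """Cosine similarity between two diphone count vectors."""
            v1, v2 = open_diphones(word1), open_diphones(word2)
            dot = sum(v1[k] * v2[k] for k in v1)
            norm = (sqrt(sum(c * c for c in v1.values())) *
                    sqrt(sum(c * c for c in v2.values())))
            return dot / norm

        # The coding is time-invariant: no unit is tied to a position in a
        # memory trace, unlike TRACE's reduplicated time-specific units.
        print(similarity("kat", "kats"))  # high: shared order information
        print(similarity("kat", "tak"))   # 0.0: same phonemes, different order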

  14. Fourth International Workshop on Spoken Dialog Systems

    CERN Document Server

    Rosset, Sophie; Garnier-Rizet, Martine; Devillers, Laurence; Natural Interaction with Robots, Knowbots and Smartphones : Putting Spoken Dialog Systems into Practice

    2014-01-01

    These proceedings present the state-of-the-art in spoken dialog systems with applications in robotics, knowledge access, and communication. They specifically address: 1. Dialog for interacting with smartphones; 2. Dialog for Open Domain knowledge access; 3. Dialog for robot interaction; 4. Mediated dialog (including crosslingual dialog involving Speech Translation); and, 5. Dialog quality evaluation. These articles were presented at the IWSDS 2012 workshop.

  15. Dust, a spoken word poem by Guante

    Directory of Open Access Journals (Sweden)

    Kyle Tran Myhre

    2017-06-01

    In "Dust," spoken word poet Kyle "Guante" Tran Myhre crafts a multi-vocal exploration of the connections between the internment of Japanese Americans during World War II and the current struggles against xenophobia in general and Islamophobia specifically. Weaving together personal narrative, quotes from multiple voices, and "verse journalism" (a term coined by Gwendolyn Brooks), the poem seeks to bridge past and present in order to inform a more just future.

  16. Native language, spoken language, translation and trade

    OpenAIRE

    Jacques Melitz; Farid Toubal

    2012-01-01

    We construct new series for common native language and common spoken language for 195 countries, which we use together with series for common official language and linguistic proximity in order to draw inferences about (1) the aggregate impact of all linguistic factors on bilateral trade, (2) whether the linguistic influences come from ethnicity and trust or ease of communication, and (3) insofar as they come from ease of communication, to what extent translation and interpreters play a role...

  17. Effects of Auditory and Visual Priming on the Identification of Spoken Words.

    Science.gov (United States)

    Shigeno, Sumi

    2017-04-01

    This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.

  18. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    Science.gov (United States)

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…

  19. The Slow Developmental Time Course of Real-Time Spoken Word Recognition

    Science.gov (United States)

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

    This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…

  20. Strategies to Reduce the Negative Effects of Spoken Explanatory Text on Integrated Tasks

    Science.gov (United States)

    Singh, Anne-Marie; Marcus, Nadine; Ayres, Paul

    2017-01-01

    Two experiments involving 125 grade-10 students learning about commerce investigated strategies to overcome the transient information effect caused by explanatory spoken text. The transient information effect occurs when learning is reduced as a result of information disappearing before the learner has time to adequately process it, or link it…

  1. How Are Pronunciation Variants of Spoken Words Recognized? A Test of Generalization to Newly Learned Words

    Science.gov (United States)

    Pitt, Mark A.

    2009-01-01

    One account of how pronunciation variants of spoken words (center → "senner" or "sennah") are recognized is that sublexical processes use information about variation in the same phonological environments to recover the intended segments [Gaskell, G., & Marslen-Wilson, W. D. (1998). Mechanisms of phonological inference in speech perception.…

  2. Effects of Aging and Noise on Real-Time Spoken Word Recognition: Evidence from Eye Movements

    Science.gov (United States)

    Ben-David, Boaz M.; Chambers, Craig G.; Daneman, Meredyth; Pichora-Fuller, M. Kathleen; Reingold, Eyal M.; Schneider, Bruce A.

    2011-01-01

    Purpose: To use eye tracking to investigate age differences in real-time lexical processing in quiet and in noise in light of the fact that older adults find it more difficult than younger adults to understand conversations in noisy situations. Method: Twenty-four younger and 24 older adults followed spoken instructions referring to depicted…

  3. Using Language Sample Analysis to Assess Spoken Language Production in Adolescents

    Science.gov (United States)

    Miller, Jon F.; Andriacchi, Karen; Nockerts, Ann

    2016-01-01

    Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents. Method: Over the past 30 years, thousands of language samples have been collected from typical…

  4. Elementary School Students’ Spoken Activities and their Responses in Math Learning by Peer-Tutoring

    Directory of Open Access Journals (Sweden)

    Baiduri

    2017-04-01

    Students’ activities in the learning process are an important indicator of the quality of that process; one such activity is spoken activity. This study was intended to analyze elementary school students’ spoken activities and their responses in Math learning through peer tutoring. A descriptive qualitative design was employed, combining a qualitative approach with a case study. The data were collected through observation, field notes, interviews, and a questionnaire administered to 24 fifth-graders of the First State Elementary School of Kunjang, Kediri, East Java, Indonesia. Four students were recruited as tutors, while the rest were subdivided into four groups. The data taken from the observation and questionnaire were analyzed descriptively and categorized on a scale from poor to excellent. The data collected from the interviews were analyzed through the interactive model: data reduction, data display, and conclusion drawing. The findings showed that the tutors’ spoken activities, covering questioning, answering, explaining, discussing, and presenting, improved across the three meetings and developed sharply overall. In addition, the spoken activities of the student groups were considered good. Moreover, there was a positive, linear relationship between the tutors’ activity and their groups’ activities.

  5. Recording voiceover the spoken word in media

    CERN Document Server

    Blakemore, Tom

    2015-01-01

    The only book on the market to specifically address its audience, Recording Voiceover is the comprehensive guide for engineers looking to understand the aspects of capturing the spoken word. Discussing all phases of the recording session, Recording Voiceover addresses everything from microphone recommendations for voice recording to pre-production considerations, including setting up the studio, working with and directing the voice talent, and strategies for reducing or eliminating distracting noise elements found in human speech. Recording Voiceover features in-depth, specific recommendations…

  6. Sizing up the competition: quantifying the influence of the mental lexicon on auditory and visual spoken word recognition.

    Science.gov (United States)

    Strand, Julia F; Sommers, Mitchell S

    2011-09-01

    Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America
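    The phi-square measure referred to above is not defined in this record; the sketch below is only a plausible reading of the idea, computing a phi-square value (chi-square divided by the total count) between the response distributions of two stimuli in a phoneme confusion matrix. The confusion counts are invented.

```python
# Rough sketch (an assumption, not the study's code): phi-square as a
# confusability measure between two stimuli, computed from their
# response-count rows in a confusion matrix.
import numpy as np
from scipy.stats import chi2_contingency

def phi_square(counts_a, counts_b):
    """Return chi-square / N for two response-count distributions."""
    table = np.array([counts_a, counts_b])
    # Drop response categories chosen for neither stimulus; all-zero
    # columns would make the chi-square expectations undefined.
    table = table[:, table.sum(axis=0) > 0]
    chi2, _, _, _ = chi2_contingency(table)
    return chi2 / table.sum()

# Invented confusion counts for /p/ and /b/ over responses p, b, t, d:
# larger values mean the two stimuli draw more dissimilar responses.
print(phi_square([80, 15, 4, 1], [20, 70, 2, 8]))
```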

  7. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    Science.gov (United States)

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  8. Direction Asymmetries in Spoken and Signed Language Interpreting

    Science.gov (United States)

    Nicodemus, Brenda; Emmorey, Karen

    2013-01-01

    Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…

  9. Spoken and Written Communication: Are Five Vowels Enough?

    Science.gov (United States)

    Abbott, Gerry

    The comparatively small vowel inventory of Bantu languages leads young Bantu learners to produce "undifferentiations," so that, for example, the spoken forms of "hat,""hut,""heart" and "hurt" sound the same to a British ear. The two criteria for a non-native speaker's spoken performance are…

  10. Attention to spoken word planning: Chronometric and neuroimaging evidence

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2008-01-01

    This article reviews chronometric and neuroimaging evidence on attention to spoken word planning, using the WEAVER++ model as theoretical framework. First, chronometric studies on the time to initiate vocal responding and gaze shifting suggest that spoken word planning may require some attention,

  11. Spoken Grammar: Where Are We and Where Are We Going?

    Science.gov (United States)

    Carter, Ronald; McCarthy, Michael

    2017-01-01

    This article synthesises progress made in the description of spoken (especially conversational) grammar over the 20 years since the authors published a paper in this journal arguing for a re-thinking of grammatical description and pedagogy based on spoken corpus evidence. We begin with a glance back at the 16th century and the teaching of Latin…

  12. Enhancing the Performance of Female Students in Spoken English

    Science.gov (United States)

    Inegbeboh, Bridget O.

    2009-01-01

    Female students have been discriminated against right from birth in their various cultures and this affects the way they perform in Spoken English class, and how they rate themselves. They have been conditioned to believe that the male gender is superior to the female gender, so they leave the male students to excel in spoken English, while they…

  13. Assessing spoken-language educational interpreting: Measuring up ...

    African Journals Online (AJOL)

    Assessing spoken-language educational interpreting: Measuring up and measuring right. Lenelle Foster, Adriaan Cupido. Abstract. This article, primarily, presents a critical evaluation of the development and refinement of the assessment instrument used to assess formally the spoken-language educational interpreters at ...

  14. Spoken language corpora for the nine official African languages of ...

    African Journals Online (AJOL)

    Spoken language corpora for the nine official African languages of South Africa. Jens Allwood, AP Hendrikse. Abstract. In this paper we give an outline of a corpus planning project which aims to develop linguistic resources for the nine official African languages of South Africa in the form of corpora, more specifically spoken ...

  15. Distinguish Spoken English from Written English: Rich Feature Analysis

    Science.gov (United States)

    Tian, Xiufeng

    2013-01-01

    This article aims at the feature analysis of four expository essays (Texts A/B/C/D) written by secondary school students, with a focus on the differences between spoken and written language. Texts C and D are better written compared with the other two (Texts A & B), which are considered more spoken in their language use. The language features are…

  16. Assessing spoken word recognition in children who are deaf or hard of hearing: a translational approach.

    Science.gov (United States)

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy

    2012-06-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss. American Academy of Audiology.

  17. What Comes First, What Comes Next: Information Packaging in Written and Spoken Language

    Directory of Open Access Journals (Sweden)

    Vladislav Smolka

    2017-07-01

    The paper explores similarities and differences in the strategies of structuring information at sentence level in spoken and written language, respectively. In particular, it is concerned with the position of the rheme in the sentence in the two different modalities of language, and with the application and correlation of the end-focus and end-weight principles. The assumption is that while there is a general tendency in both written and spoken language to place the focus in or close to the final position, owing to the limitations imposed by short-term memory capacity (and possibly by other factors), for the sake of easy processability it may occasionally be more felicitous in spoken language to place the rhematic element in the initial position or at least close to the beginning of the sentence. The paper aims to identify differences in the function of selected grammatical structures in written and spoken language, respectively, and to point out circumstances under which initial focus is a convenient alternative to the usual end-focus principle.

  18. Presentation video retrieval using automatically recovered slide and spoken text

    Science.gov (United States)

    Cooper, Matthew

    2013-03-01

    Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.
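    The record does not specify the retrieval model, so the following is a hedged baseline sketch of the comparison it describes: index each video once by its OCR'd slide text and once by its ASR transcript, then rank videos by TF-IDF cosine similarity against a query. All texts, the query, and the function name are hypothetical.

```python
# Illustrative TF-IDF retrieval baseline over per-video text blobs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_videos(query, documents):
    """Return video indices sorted from best to worst match."""
    vectorizer = TfidfVectorizer(stop_words="english")
    doc_matrix = vectorizer.fit_transform(documents)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    return scores.argsort()[::-1]

# Hypothetical per-video text from slides (OCR) and speech (ASR);
# the noisier spoken text illustrates the differing error profiles.
slide_texts = ["gradient descent convergence proof",
               "bayesian networks exact inference"]
spoken_texts = ["so today we will talk a bit about gradients",
                "and now, um, inference in graphical models"]
print(rank_videos("gradient descent", slide_texts))
print(rank_videos("gradient descent", spoken_texts))
```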

  19. On the Usability of Spoken Dialogue Systems

    DEFF Research Database (Denmark)

    Larsen, Lars Bo

    This work is centred on the methods and problems associated with defining and measuring the usability of Spoken Dialogue Systems (SDS). The starting point is the fact that speech-based interfaces have several times during the last 20 years fallen short of the high expectations and predictions held… by industry, researchers and analysts. Several studies in the SDS literature indicate that this can be ascribed to a lack of attention from the speech technology community towards the usability of such systems. The experimental results presented in this work are based on a field trial with the OVID home… model roughly explains 50% of the observed variance in user satisfaction based on measures of task success and speech recognition accuracy, a result similar to those obtained at AT&T. The applied methods are discussed and evaluated critically…

  20. SPOKEN BAHASA INDONESIA BY GERMAN STUDENTS

    Directory of Open Access Journals (Sweden)

    I Nengah Sudipa

    2014-11-01

    This article investigates the spoken ability of German students using Bahasa Indonesia (BI). They had studied it for six weeks in the IBSN Program at Udayana University, Bali, Indonesia. The data was collected at the time the students sat for the mid-term oral test and was further analyzed with reference to the standard usage of BI. The result suggests that most students managed to express several concepts related to (1) LOCATION; (2) TIME; (3) TRANSPORT; (4) PURPOSE; (5) TRANSACTION; (6) IMPRESSION; (7) REASON; (8) FOOD AND BEVERAGE; and (9) NUMBER AND PERSON. The only problem a few students might encounter is the influence from their own language system, called interference, especially in word order.

  1. Deep bottleneck features for spoken language identification.

    Directory of Open Access Journals (Sweden)

    Bing Jiang

    A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF based i-vector representation for each speech utterance. Results on NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system proposed.
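    As a hedged sketch of the bottleneck idea only (the layer sizes, activations, and training targets below are assumptions; the record does not give the exact topology), a deep bottleneck feature extractor is a DNN trained on a frame-classification task whose narrow middle layer is then kept as a low-dimensional feature:

```python
# Minimal PyTorch sketch of extracting deep bottleneck features (DBFs).
import torch
import torch.nn as nn

class BottleneckDNN(nn.Module):
    """Feedforward net whose narrow middle layer yields DBFs."""
    def __init__(self, n_in=440, n_hidden=1024, n_bottleneck=40, n_out=3000):
        super().__init__()
        self.pre = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.Sigmoid(),
            nn.Linear(n_hidden, n_hidden), nn.Sigmoid(),
            nn.Linear(n_hidden, n_bottleneck),   # the bottleneck layer
        )
        self.post = nn.Sequential(
            nn.Sigmoid(),
            nn.Linear(n_bottleneck, n_out),      # e.g., phone-state targets
        )

    def forward(self, x):                        # used during training
        return self.post(self.pre(x))

    def extract_dbf(self, x):
        # After training, discard the layers above the bottleneck and
        # keep these compact activations as features (e.g., for i-vectors).
        with torch.no_grad():
            return self.pre(x)

model = BottleneckDNN()
frames = torch.randn(8, 440)             # 8 hypothetical acoustic frames
print(model.extract_dbf(frames).shape)   # torch.Size([8, 40])
```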

  2. Early use of orthographic information in spoken word recognition: Event-related potential evidence from the Korean language.

    Science.gov (United States)

    Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung

    2016-04-01

    This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects of orthographic syllable neighborhood size and sound-to-spelling consistency on the P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that the P200 was smaller for words with large orthographic syllable neighborhoods than for those with small ones. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results provide support for the assumption that orthographic information is used early during the prelexical spoken word recognition process. © 2015 Society for Psychophysiological Research.

  3. Functional MRI activation in children with and without dyslexia during pseudoword aural repeat and visual decode: before and after treatment.

    Science.gov (United States)

    Richards, Todd; Berninger, Virginia; Winn, William; Stock, Pat; Wagner, Richard; Muse, Andrea; Maravilla, Kenneth

    2007-11-01

    Children without dyslexia (n=10) received nonphonological treatment, and those with dyslexia received phonological (n=11) or nonphonological (n=9) treatment. Before and after treatment they performed aural repeat, visual decode, and aural match pseudoword tasks during functional MRI scanning that separated stimulus input from response production. Group map analysis indicated that children with dyslexia overactivated compared with good readers during the aural-repeat/aural-match contrast in bilateral frontal (Brodmann's area [BA] 3, 4, 5, 6, 9), left parietal (BA 2, 3), left temporal (BA 38), and right temporal (BA 20, 21, 37) regions (stimulus input) and underactivated in right frontal (BA 24, 32) and right insula (BA 48) regions (response production); they underactivated in BA 19/V5 during the visual-decode/aural-match contrast (response production). Individual brain analysis for children with dyslexia revealed that during the aural-repeat/aural-match contrast (stimulus input), phonological treatment decreased and normalized activation in left supramarginal gyrus and postcentral gyrus. Nonphonological treatment increased and normalized activation during the visual-decode/aural-match contrast (response production) in BA19/V5 and changed activation in the same direction as good readers during aural-repeat/aural-match contrast (stimulus input) in left postcentral gyrus. The significance of the findings for competing theories of dyslexia is discussed. PsycINFO Database Record (c) 2007 APA, all rights reserved.

  4. Does IQ affect the functional brain network involved in pseudoword reading in students with reading disability? A magnetoencephalography study.

    Science.gov (United States)

    Simos, Panagiotis G; Rezaie, Roozbeh; Papanicolaou, Andrew C; Fletcher, Jack M

    2014-01-01

    The study examined whether individual differences in performance and verbal IQ affect the profiles of reading-related regional brain activation in 127 students experiencing reading difficulties and typical readers. Using magnetoencephalography in a pseudoword read-aloud task, we compared brain activation profiles of students experiencing word-level reading difficulties who did (n = 29) or did not (n = 36) meet the IQ-reading achievement discrepancy criterion. Typical readers assigned to a lower-IQ (n = 18) or a higher IQ (n = 44) subgroup served as controls. Minimum norm estimates of regional cortical activity revealed that the degree of hypoactivation in the left superior temporal and supramarginal gyri in both RD subgroups was not affected by IQ. Moreover, IQ did not moderate the positive association between degree of activation in the left fusiform gyrus and phonological decoding ability. We did find, however, that the hypoactivation of the left pars opercularis in RD was restricted to lower-IQ participants. In accordance with previous morphometric and fMRI studies, degree of activity in inferior frontal, and inferior parietal regions correlated with IQ across reading ability subgroups. Results are consistent with current views questioning the relevance of IQ-discrepancy criteria in the diagnosis of dyslexia.

  5. LANGUAGE POLICIES PURSUED IN THE AXIS OF OTHERING AND IN THE PROCESS OF CONVERTING SPOKEN LANGUAGE OF TURKS LIVING IN RUSSIA INTO THEIR WRITTEN LANGUAGE / RUSYA'DA YASAYAN TÜRKLERİN KONUSMA DİLLERİNİN YAZI DİLİNE DÖNÜSTÜRÜLME SÜRECİ VE ÖTEKİLESTİRME EKSENİNDE İZLENEN DİL POLİTİKALARI

    Directory of Open Access Journals (Sweden)

    Süleyman Kaan YALÇIN (M.A.H.

    2008-12-01

    Language is realized in two ways: spoken language and written language. Every language has the characteristics of a spoken language; however, not every language has the characteristics of a written language, since there are some requirements for a language to be deemed a written language. These requirements are selection, coding, standardization and becoming widespread. A language must meet these requirements, whether in a natural or an artificial way, to be deemed a written (standard) language. Turkish, which developed as a single written language until the 13th century, was divided into West Turkish and North-East Turkish by meeting the requirements of a written language in a natural way. Following this separation, and through a natural process, it showed some internal differences; however, the policy of converting the spoken language of each Turkic clan into its own written language, a policy pursued by Russia in a planned way, turned Turkish, which had come to the 20th century as a few written languages, into 20 different written languages. The implementation of the discriminatory language policies suggested to the Russian government by missionaries such as Slinky and Ostramov, the forcible imposition of a Cyrillic alphabet full of different and unnecessary signs on each Turkic clan, and the othering activities of the Soviet boarding schools had considerable effects on this process. This study aims at explaining that the conversion of the spoken languages of the Turkish societies in Russia into written languages did not result from a natural process; the historical development of the Turkish language, which was shaped into 20 separate written languages only because of the pressure exerted by political will; and how the Russians subjected the language concept (which is the memory of a nation) to an artificial process.

  6. An fMRI study of concreteness effects during spoken word recognition in aging. Preservation or attenuation?

    Directory of Open Access Journals (Sweden)

    Tracy eRoxbury

    2016-01-01

    It is unclear whether healthy aging influences concreteness effects (i.e., the processing advantage seen for concrete over abstract words) and their associated neural mechanisms. We conducted an fMRI study on young and older healthy adults performing auditory lexical decisions on concrete versus abstract words. We found that spoken comprehension of concrete and abstract words appears relatively preserved for healthy older individuals, including the concreteness effect. This preserved performance was supported by altered activity in left hemisphere regions including the inferior and middle frontal gyri, angular gyrus, and fusiform gyrus. This pattern is consistent with age-related compensatory mechanisms supporting spoken word processing.

  7. Comparison of Word Intelligibility in Spoken and Sung Phrases

    Directory of Open Access Journals (Sweden)

    Lauren B. Collister

    2008-09-01

    Twenty listeners were exposed to spoken and sung passages in English produced by three trained vocalists. Passages included representative words extracted from a large database of vocal lyrics, including both popular and classical repertoires. Target words were set within spoken or sung carrier phrases. Sung carrier phrases were selected from classical vocal melodies. Roughly a quarter of all words sung by an unaccompanied soloist were misheard. Sung passages showed a seven-fold decrease in intelligibility compared with their spoken counterparts. The perceptual mistakes occurring with vowels replicate previous studies showing the centralization of vowels. Significant confusions are also evident for consonants, especially voiced stops and nasals.

  8. Assessing spoken word recognition in children who are deaf or hard of hearing: A translational approach

    OpenAIRE

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S.; Young, Nancy

    2012-01-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization and lexical discrimination that may contribute to individual varia...

  9. Using Spoken Language to Facilitate Military Transportation Planning

    National Research Council Canada - National Science Library

    Bates, Madeleine; Ellard, Dan; Peterson, Pat; Shaked, Varda

    1991-01-01

    ... In an effort to demonstrate the relevance of SIS technology to real-world military applications, BBN has undertaken the task of providing a spoken language interface to DART, a system for military...

  10. ELSIE: The Quick Reaction Spoken Language Translation (QRSLT)

    National Research Council Canada - National Science Library

    Montgomery, Christine

    2000-01-01

    The objective of this effort was to develop a prototype, hand-held or body-mounted spoken language translator to assist military and law enforcement personnel in interacting with non-English-speaking people...

  11. Verb Errors in Advanced Spoken English

    Directory of Open Access Journals (Sweden)

    Tomáš Gráf

    2017-07-01

    As an experienced teacher of advanced learners of English I am deeply aware of recurrent problems which these learners experience as regards grammatical accuracy. In this paper, I focus on researching inaccuracies in the use of verbal categories. I draw the data from a spoken learner corpus, LINDSEI_CZ, and analyze the performance of 50 advanced (C1–C2) learners of English whose mother tongue is Czech. The main method used is Computer-aided Error Analysis within the larger framework of Learner Corpus Research. The results reveal that the key area of difficulty is the use of tenses and tense agreements, and especially the use of the present perfect. Other error-prone aspects are also described. The study also identifies a number of triggers which may lie at the root of the problems. The identification of these triggers reveals deficiencies in the teaching of grammar, mainly too much focus on decontextualized practice, use of potentially confusing rules, and the lack of attempt to deal with broader notions such as continuity and perfectiveness. Whilst the study is useful for the teachers of advanced learners, its pedagogical implications stretch to lower levels of proficiency as well.

  12. Time-compressed spoken words enhance driving performance in complex visual scenarios : evidence of crossmodal semantic priming effects in basic cognitive experiments and applied driving simulator studies

    OpenAIRE

    Castronovo, Angela

    2014-01-01

    Would speech warnings be a good option to inform drivers about time-critical traffic situations? Even though spoken words take time until they can be understood, listening is well trained from the earliest age and happens quite automatically. Therefore, it is conceivable that spoken words could immediately preactivate semantically identical (but physically diverse) visual information, and thereby enhance respective processing. Interestingly, this implies a crossmodal semantic effect of audito...

  13. Talker and background noise specificity in spoken word recognition memory

    OpenAIRE

    Cooper, Angela; Bradlow, Ann R.

    2017-01-01

    Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a fu...

  14. Automatic disambiguation of morphosyntax in spoken language corpora

    OpenAIRE

    Parisse , Christophe; Le Normand , Marie-Thérèse

    2000-01-01

    The use of computer tools has led to major advances in the study of spoken language corpora. One area that has shown particular progress is the study of child language development. Although it is now easy to lexically tag every word in a spoken language corpus, one still has to choose between numerous ambiguous forms, especially with languages such as French or English, where more than 70% of words are ambiguous. Computational linguistics can now provide a fully automa...

  15. Effects of speech clarity on recognition memory for spoken sentences.

    Science.gov (United States)

    Van Engen, Kristin J; Chandrasekaran, Bharath; Smiljanic, Rajka

    2012-01-01

    Extensive research shows that inter-talker variability (i.e., changing the talker) affects recognition memory for speech signals. However, relatively little is known about the consequences of intra-talker variability (i.e. changes in speaking style within a talker) on the encoding of speech signals in memory. It is well established that speakers can modulate the characteristics of their own speech and produce a listener-oriented, intelligibility-enhancing speaking style in response to communication demands (e.g., when speaking to listeners with hearing impairment or non-native speakers of the language). Here we conducted two experiments to examine the role of speaking style variation in spoken language processing. First, we examined the extent to which clear speech provided benefits in challenging listening environments (i.e. speech-in-noise). Second, we compared recognition memory for sentences produced in conversational and clear speaking styles. In both experiments, semantically normal and anomalous sentences were included to investigate the role of higher-level linguistic information in the processing of speaking style variability. The results show that acoustic-phonetic modifications implemented in listener-oriented speech lead to improved speech recognition in challenging listening conditions and, crucially, to a substantial enhancement in recognition memory for sentences.

  16. Estimating Spoken Dialog System Quality with User Models

    CERN Document Server

    Engelbrecht, Klaus-Peter

    2013-01-01

    Spoken dialog systems have the potential to offer highly intuitive user interfaces, as they allow systems to be controlled using natural language. However, the complexity inherent in natural language dialogs means that careful testing of the system must be carried out from the very beginning of the design process.   This book examines how user models can be used to support such early evaluations in two ways:  by running simulations of dialogs, and by estimating the quality judgments of users. First, a design environment supporting the creation of dialog flows, the simulation of dialogs, and the analysis of the simulated data is proposed.  How the quality of user simulations may be quantified with respect to their suitability for both formative and summative evaluation is then discussed. The remainder of the book is dedicated to the problem of predicting quality judgments of users based on interaction data. New modeling approaches are presented, which process the dialogs as sequences, and which allow knowl...
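    The summary above describes predicting user quality judgments from dialogues processed as sequences of interaction measures. The sketch below is a loose stand-in for that idea, not the book's models: each dialogue is collapsed into fixed-size features and mapped to a rating with ridge regression; all measures and numbers are invented.

```python
# Toy judgment predictor over per-exchange dialogue measures.
import numpy as np
from sklearn.linear_model import Ridge

def sequence_features(dialog):
    """Collapse a variable-length dialogue into fixed-size features."""
    d = np.asarray(dialog, dtype=float)   # rows: exchanges, cols: measures
    return np.concatenate([d.mean(axis=0), d[-1]])  # averages + final state

# Hypothetical dialogues: [asr_confidence, was_reprompted] per exchange.
dialogs = [[[0.9, 0], [0.8, 0]],
           [[0.4, 1], [0.3, 1], [0.5, 1]]]
ratings = [4.5, 2.0]                      # user judgments on a 1-5 scale

X = np.array([sequence_features(d) for d in dialogs])
model = Ridge().fit(X, ratings)
print(model.predict(X))
```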

  17. Vowel and Consonant Replacements in the Spoken French of Ijebu Undergraduate French Learners in Selected Universities in South West of Nigeria

    Directory of Open Access Journals (Sweden)

    Iyiola Amos Damilare

    2015-04-01

    Substitution is a phonological process in language. Existing studies have examined deletion in several languages and dialects, with less attention paid to the spoken French of Ijebu undergraduates. This article therefore examined substitution as a dominant phenomenon in the spoken French of thirty-four Ijebu Undergraduate French Learners (IUFLs) in selected universities in the South West of Nigeria, with a view to establishing the dominance of substitution in the spoken French of IUFLs. The data collection was through tape-recording of participants' production of 30 sentences containing both French vowel and consonant sounds. The results revealed inappropriate replacement of vowels and consonants in the medial and final positions in the spoken French of IUFLs.

  18. The time course of spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen; Zhang, Qin; Guo, Chunyan

    2014-10-01

    In the present study, two experiments were carried out to investigate the time course of spoken word recognition in Mandarin Chinese using both event-related potentials (ERPs) and behavioral measures. To address the hypothesis that there is an early phonological processing stage independent of semantics during spoken word recognition, a unimodal word-matching paradigm was employed, in which both prime and target words were presented auditorily. Experiment 1 manipulated the phonological relations between disyllabic primes and targets, and found an enhanced P2 (200-270 ms post-target onset) as well as a smaller early N400 to word-initial phonological mismatches over fronto-central scalp sites. Experiment 2 manipulated both phonological and semantic relations between monosyllabic primes and targets, and replicated the phonological mismatch-associated P2, which was not modulated by semantic relations. Overall, these results suggest that P2 is a sensitive electrophysiological index of early phonological processing independent of semantics in Mandarin Chinese spoken word recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Semantic Richness Effects in Spoken Word Recognition: A Lexical Decision and Semantic Categorization Megastudy.

    Science.gov (United States)

    Goh, Winston D; Yap, Melvin J; Lau, Mabel C; Ng, Melvin M R; Tan, Luuan-Chin

    2016-01-01

    A large number of studies have demonstrated that semantic richness dimensions (e.g., number of features, semantic neighborhood density, semantic diversity, concreteness, emotional valence) influence word recognition processes. Some of these richness effects appear to be task-general, while others have been found to vary across tasks. Importantly, almost all of these findings come from the visual word recognition literature. To address this gap, we examined the extent to which these semantic richness effects are also found in spoken word recognition, using a megastudy approach that allows for an examination of the relative contribution of the various semantic properties to performance in two tasks: lexical decision and semantic categorization. The results show that concreteness, valence, and number of features accounted for unique variance in latencies across both tasks in a similar direction (faster responses for spoken words that were concrete, emotionally valenced, and with a high number of features), while arousal, semantic neighborhood density, and semantic diversity did not influence latencies. Implications for spoken word recognition processes are discussed.

  20. "Visual" Cortex Responds to Spoken Language in Blind Children.

    Science.gov (United States)

    Bedny, Marina; Richardson, Hilary; Saxe, Rebecca

    2015-08-19

    Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4-17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions. Studies of plasticity provide key insights into how experience shapes the human brain. The "visual" cortex of adults who are blind from birth responds to touch, sound, and spoken language. To date, all existing studies have been conducted with adults, so little is known about the developmental trajectory of plasticity. We used fMRI to study the emergence of "visual" cortex responses to sound and spoken language in blind children and adolescents. We find that "visual" cortex responses to sound increase between 4 and 17 years of age. By contrast, responses to spoken language are present by 4 years of age and are not related to Braille-learning. These findings suggest that, early in development, human cortex can take on a strikingly wide range of functions. Copyright © 2015 the authors 0270-6474/15/3511674-08$15.00/0.

  1. An exaggerated effect for proper nouns in a case of superior written over spoken word production.

    Science.gov (United States)

    Kemmerer, David; Tranel, Daniel; Manzel, Ken

    2005-02-01

    We describe a brain-damaged subject, RR, who manifests superior written over spoken naming of concrete entities from a wide range of conceptual domains. His spoken naming difficulties are due primarily to an impairment of lexical-phonological processing, which implies that his successful written naming does not depend on prior access to the sound structures of words. His performance therefore provides further support for the "orthographic autonomy hypothesis," which maintains that written word production is not obligatorily mediated by phonological knowledge. The case of RR is especially interesting, however, because for him the dissociation between impaired spoken naming and relatively preserved written naming is significantly greater for two categories of unique concrete entities that are lexicalised as proper nouns (specifically, famous faces and famous landmarks) than for five categories of nonunique (i.e., basic level) concrete entities that are lexicalised as common nouns (specifically, animals, fruits/vegetables, tools/utensils, musical instruments, and vehicles). Furthermore, RR's predominant error types in the oral modality are different for the two types of stimuli: omissions for unique entities vs. semantic errors for nonunique entities. We consider two alternative explanations for RR's extreme difficulty in producing the spoken forms of proper nouns: (1) a disconnection between the meanings of proper nouns and the corresponding word nodes in the phonological output lexicon; or (2) damage to the word nodes themselves. We argue that RR's combined behavioural and lesion data do not clearly adjudicate between the two explanations, but that they favour the first explanation over the second.

  2. Toddlers' sensitivity to within-word coarticulation during spoken word recognition: Developmental differences in lexical competition.

    Science.gov (United States)

    Zamuner, Tania S; Moore, Charlotte; Desmeules-Trudel, Félix

    2016-12-01

    To understand speech, listeners need to be able to decode the speech stream into meaningful units. However, coarticulation causes phonemes to differ based on their context. Because coarticulation is an ever-present component of the speech stream, it follows that listeners may exploit this source of information for cues to the identity of the words being spoken. This research investigates the development of listeners' sensitivity to coarticulation cues below the level of the phoneme in spoken word recognition. Using a looking-while-listening paradigm, adults and 2- and 3-year-old children were tested on coarticulation cues that either matched or mismatched the target. Both adults and children predicted upcoming phonemes based on anticipatory coarticulation to make decisions about word identity. The overall results demonstrate that coarticulation cues are a fundamental component of children's spoken word recognition system. However, children did not show the same resolution as adults of the mismatching coarticulation cues and competitor inhibition, indicating that children's processing systems are still developing. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Does textual feedback hinder spoken interaction in natural language?

    Science.gov (United States)

    Le Bigot, Ludovic; Terrier, Patrice; Jamet, Eric; Botherel, Valerie; Rouet, Jean-Francois

    2010-01-01

    The aim of the study was to determine the influence of textual feedback on the content and outcome of spoken interaction with a natural language dialogue system. More specifically, the assumption that textual feedback could disrupt spoken interaction was tested in a human-computer dialogue situation. In total, 48 adult participants, familiar with the system, had to find restaurants based on simple or difficult scenarios using a real natural language service system in a speech-only (phone), speech plus textual dialogue history (multimodal) or text-only (web) modality. The linguistic contents of the dialogues differed as a function of modality, but were similar whether the textual feedback was included in the spoken condition or not. These results add to burgeoning research efforts on multimodal feedback, in suggesting that textual feedback may have little or no detrimental effect on information searching with a real system. STATEMENT OF RELEVANCE: The results suggest that adding textual feedback to interfaces for human-computer dialogue could enhance spoken interaction rather than create interference. The literature currently suggests that adding textual feedback to tasks that depend on the visual sense benefits human-computer interaction. The addition of textual output when the spoken modality is heavily taxed by the task was investigated.

  4. Children show right-lateralized effects of spoken word-form learning.

    Directory of Open Access Journals (Sweden)

    Anni Nora

    It is commonly thought that phonological learning is different in young children compared to adults, possibly due to the speech processing system not yet having reached full native-language specialization. However, the neurocognitive mechanisms of phonological learning in children are poorly understood. We employed magnetoencephalography (MEG) to track cortical correlates of incidental learning of meaningless word forms over two days as 6-8-year-olds overtly repeated them. Native (Finnish) pseudowords were compared with words of foreign sound structure (Korean) to investigate whether the cortical learning effects would be more dependent on previous proficiency in the language rather than maturational factors. Half of the items were encountered four times on the first day and once more on the following day. Incidental learning of these recurring word forms manifested as improved repetition accuracy and a correlated reduction of activation in the right superior temporal cortex, similarly for both languages and on both experimental days, and in contrast to a salient left-hemisphere emphasis previously reported in adults. We propose that children, when learning new word forms in either a native or foreign language, are not yet constrained by left-hemispheric segmental processing and established sublexical native-language representations. Instead, they may rely more on supra-segmental contours and prosody.

  5. The time course of lexical competition during spoken word recognition in Mandarin Chinese: an event-related potential study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen

    2016-01-20

    The present study investigated the effect of lexical competition on the time course of spoken word recognition in Mandarin Chinese using a unimodal auditory priming paradigm. Two kinds of competitive environments were designed. In one session (session 1), only the unrelated and the identical primes were presented before the target words. In the other session (session 2), besides the two conditions in session 1, the target words were also preceded by cohort primes that have the same initial syllables as the targets. Behavioral results showed an inhibitory effect of the cohort competitors (primes) on target word recognition. The event-related potential results showed that spoken word recognition processing in the middle and late latency windows is modulated by whether the phonologically related competitors are presented or not. Specifically, preceding activation of the competitors can induce direct competition between multiple candidate words and lead to increased processing difficulties, primarily at the word disambiguation and selection stage during Mandarin Chinese spoken word recognition. The current study provided both behavioral and electrophysiological evidence for the lexical competition effect among candidate words during spoken word recognition.

  6. The determinants of spoken and written picture naming latencies.

    Science.gov (United States)

    Bonin, Patrick; Chalard, Marylène; Méot, Alain; Fayol, Michel

    2002-02-01

    The influence of nine variables on the latencies to write down or to speak aloud the names of pictures taken from Snodgrass and Vanderwart (1980) was investigated in French adults. The major determinants of both written and spoken picture naming latencies were image variability, image agreement and age of acquisition. To a lesser extent, name agreement was also found to have an impact in both production modes. The implications of the findings for theoretical views of both spoken and written picture naming are discussed.

  7. The relation of the number of languages spoken to performance in different cognitive abilities in old age.

    Science.gov (United States)

    Ihle, Andreas; Oris, Michel; Fagot, Delphine; Kliegel, Matthias

    2016-12-01

    Findings on the association of speaking different languages with cognitive functioning in old age are inconsistent and inconclusive so far. Therefore, the present study set out to investigate the relation of the number of languages spoken to cognitive performance and its interplay with several other markers of cognitive reserve in a large sample of older adults. Two thousand eight hundred and twelve older adults served as the sample for the present study. Psychometric tests of verbal abilities, basic processing speed, and cognitive flexibility were administered. In addition, individuals were interviewed on the different languages they spoke on a regular basis, educational attainment, occupation, and engagement in different activities throughout adulthood. A higher number of languages regularly spoken was significantly associated with better performance in verbal abilities and processing speed, but was unrelated to cognitive flexibility. Regression analyses showed that the number of languages spoken predicted cognitive performance over and above leisure activities/physical demand of job/gainful activity as respective additional predictors, but not over and above educational attainment/cognitive level of job. There was no significant moderation of the association of the number of languages spoken with cognitive performance in any model. The present data suggest that speaking different languages on a regular basis may additionally contribute to the build-up of cognitive reserve in old age. Yet, this may not be universal, but linked to verbal abilities and basic cognitive processing speed. Moreover, it may be dependent on other types of cognitive stimulation that individuals also engaged in during their life course.

  8. Spoken Word Recognition of Chinese Words in Continuous Speech

    Science.gov (United States)

    Yip, Michael C. W.

    2015-01-01

    The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending positions of Cantonese syllables than others, these kinds of probabilistic information about syllables may cue the locations…

  9. Animated and Static Concept Maps Enhance Learning from Spoken Narration

    Science.gov (United States)

    Adesope, Olusola O.; Nesbit, John C.

    2013-01-01

    An animated concept map represents verbal information in a node-link diagram that changes over time. The goals of the experiment were to evaluate the instructional effects of presenting an animated concept map concurrently with semantically equivalent spoken narration. The study used a 2 x 2 factorial design in which an animation factor (animated…

  10. A Comparison between Written and Spoken Narratives in Aphasia

    Science.gov (United States)

    Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena

    2009-01-01

    The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…

  11. Prosodic Parallelism – comparing spoken and written language

    Directory of Open Access Journals (Sweden)

    Richard Wiese

    2016-10-01

    The Prosodic Parallelism hypothesis claims that adjacent prosodic categories prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies that feet contained in the same phonological phrase display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested by conducting a corpus study which used large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question of whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.

  12. Assessing spoken-language educational interpreting: Measuring up ...

    African Journals Online (AJOL)

    Kate H

    This article critically evaluates the development and refinement of the assessment instrument used to formally assess the spoken-language educational interpreters at Stellenbosch University (SU). Research … [Fragment of the assessment rubric: Is the interpreter suited to the module? Is the interpreter easier to follow? Technical: microphone technique; lag. Completeness. Language use. Vocabulary. Role. Personal objectives …]

  13. Using the Corpus of Spoken Afrikaans to generate an Afrikaans ...

    African Journals Online (AJOL)

    This paper presents two chatbot systems, ALICE and Elizabeth, illustrating the dialogue knowledge representation and pattern matching techniques of each. We discuss the problems which arise when using the Corpus of Spoken Afrikaans (Korpus Gesproke Afrikaans) to retrain the ALICE chatbot system with human ...

  14. Autosegmental Representation of Epenthesis in the Spoken French ...

    African Journals Online (AJOL)

    Therefore, this paper examined vowel insertion in the spoken French of 50 Ijebu Undergraduate French Learners (IUFLs) in Selected Universities in South West of Nigeria. The data collection for this study was through tape-recording of participants' production of 30 sentences containing both French vowel and consonant ...

  15. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,

  16. Automated Scoring of L2 Spoken English with Random Forests

    Science.gov (United States)

    Kobayashi, Yuichiro; Abe, Mariko

    2016-01-01

    The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…
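    A minimal sketch of the approach as described, classifying learners into discrete proficiency levels with a random forest over objectively measurable features; the feature set and data below are invented placeholders, not the study's.

```python
# Toy random-forest scoring of oral proficiency levels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical per-learner features: words per minute, type-token ratio,
# mean pause length, and frequency of low-frequency lexical items.
X = rng.normal(size=(200, 4))
y = rng.integers(0, 4, size=200)   # four discrete proficiency levels

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # near chance on noise
print(clf.fit(X, y).feature_importances_)        # per-feature weights
```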

  17. Flipper: An Information State Component for Spoken Dialogue Systems

    NARCIS (Netherlands)

    ter Maat, Mark; Heylen, Dirk K.J.; Vilhjálmsson, Hannes; Kopp, Stefan; Marsella, Stacy; Thórisson, Kristinn

    This paper introduces Flipper, a specification language and interpreter for Information State Update rules that can be used for developing spoken dialogue systems and embodied conversational agents. The system uses XML templates to modify the information state and to select behaviours to perform.
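    Flipper itself specifies its rules as XML templates; purely to illustrate the Information State Update pattern it interprets (preconditions over a shared state, plus effects that modify the state and select behaviours), here is a tiny Python analogue:

```python
# Minimal Information State Update loop (illustrative, not Flipper's API).
information_state = {"user_utterance": "hello", "greeted": False}

rules = [
    {   # fire once when the user greets and we have not replied yet
        "precondition": lambda s: s["user_utterance"] == "hello"
                                  and not s["greeted"],
        "effect": lambda s: s.update(greeted=True,
                                     next_behaviour="say_hello"),
    },
]

def run_rules(state, rules):
    """Apply every rule whose precondition holds on the current state."""
    for rule in rules:
        if rule["precondition"](state):
            rule["effect"](state)
    return state

print(run_rules(information_state, rules))
```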

  18. Pair Counting to Improve Grammar and Spoken Fluency

    Science.gov (United States)

    Hanson, Stephanie

    2017-01-01

    English language learners are often more grammatically accurate in writing than in speaking. As students focus on meaning while speaking, their spoken fluency comes at a cost: their grammatical accuracy decreases. The author wanted to find a way to help her students improve their oral grammar; that is, she wanted them to focus on grammar while…

  19. A memory-based shallow parser for spoken Dutch

    NARCIS (Netherlands)

    Canisius, S.V.M.; van den Bosch, A.; Decadt, B.; Hoste, V.; De Pauw, G.

    2004-01-01

    We describe the development of a Dutch memory-based shallow parser. The availability of large treebanks for Dutch, such as the one provided by the Spoken Dutch Corpus, allows memory-based learners to be trained on examples of shallow parsing taken from the treebank, and act as a shallow parser after

  20. The Link between Vocabulary Knowledge and Spoken L2 Fluency

    Science.gov (United States)

    Hilton, Heather

    2008-01-01

    In spite of the vast numbers of articles devoted to vocabulary acquisition in a foreign language, few studies address the contribution of lexical knowledge to spoken fluency. The present article begins with basic definitions of the temporal characteristics of oral fluency, summarizing L1 research over several decades, and then presents fluency…

  1. Oral and Literate Strategies in Spoken and Written Narratives.

    Science.gov (United States)

    Tannen, Deborah

    1982-01-01

    Discusses comparative analysis of spoken and written versions of a narrative to demonstrate that features which have been identified as characterizing oral discourse are also found in written discourse and that the written short story combines syntactic complexity expected in writing with features which create involvement expected in speaking.…

  2. Evaluation of Noisy Transcripts for Spoken Document Retrieval

    NARCIS (Netherlands)

    van der Werff, Laurens Bastiaan

    2012-01-01

    This thesis introduces a novel framework for the evaluation of Automatic Speech Recognition (ASR) transcripts in a Spoken Document Retrieval (SDR) context. The basic premise is that ASR transcripts must be evaluated by measuring the impact of noise in the transcripts on the search results of a

  3. Phonological Interference in the Spoken English Performance of the ...

    African Journals Online (AJOL)

    This paper sets out to examine the phonological interference in the spoken English performance of the Izon speaker. It emphasizes that the level of interference is not just a result of the systemic differences that exist between the two language systems (Izon and English) but also a result of interlanguage factors such ...

  4. Producing complex spoken numerals for time and space

    NARCIS (Netherlands)

    Meeuwissen, M.H.W.

    2004-01-01

    This thesis addressed the spoken production of complex numerals for time and space. The production of complex numerical expressions like those involved in telling time (e.g., 'quarter to four') or producing house numbers (e.g., 'two hundred forty-five') has been almost completely ignored. Yet, adult

  5. Spoken Idiom Recognition: Meaning Retrieval and Word Expectancy

    Science.gov (United States)

    Tabossi, Patrizia; Fanari, Rachele; Wolf, Kinou

    2005-01-01

    This study investigates recognition of spoken idioms occurring in neutral contexts. Experiment 1 showed that both predictable and non-predictable idiom meanings are available at string offset. Yet, only predictable idiom meanings are active halfway through a string and remain active after the string's literal conclusion. Experiment 2 showed that…

  6. "Context and Spoken Word Recognition in a Novel Lexicon": Correction

    Science.gov (United States)

    Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.

    2009-01-01

    Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct.…

  7. An Analysis of Spoken Grammar: The Case for Production

    Science.gov (United States)

    Mumford, Simon

    2009-01-01

    Corpus-based grammars, notably "Cambridge Grammar of English," give explicit information on the forms and use of native-speaker grammar, including spoken grammar. Native-speaker norms as a necessary goal in language teaching are contested by supporters of English as a Lingua Franca (ELF); however, this article argues for the inclusion of selected…

  8. Automated Metadata Extraction for Semantic Access to Spoken Word Archives

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Heeren, W.F.L.; van Hessen, Adrianus J.; Ordelman, Roeland J.F.; Nijholt, Antinus; Ruiz Miyares, L.; Alvarez Silva, M.R.

    2011-01-01

    Archival practice is shifting from the analogue to the digital world. A specific subset of heritage collections that imposes interesting challenges for the field of language and speech technology is spoken word archives. Given the enormous backlog at audiovisual archives of unannotated materials and

  9. Lexical competition in non-native spoken-word recognition

    NARCIS (Netherlands)

    Weber, A.C.; Cutler, A.

    2004-01-01

    Six eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name (pencil, given target

  10. IMPACT ON THE INDIGENOUS LANGUAGES SPOKEN IN NIGERIA ...

    African Journals Online (AJOL)

    This article examines the impact of the hegemony of English, as a common lingua franca, referred to as a global language, on the indigenous languages spoken in Nigeria. Since English, through the British political imperialism and because of the economic supremacy of English dominated countries, has assumed the ...

  11. Spoken Persuasive Discourse Abilities of Adolescents with Acquired Brain Injury

    Science.gov (United States)

    Moran, Catherine; Kirk, Cecilia; Powell, Emma

    2012-01-01

    Purpose: The aim of this study was to examine the performance of adolescents with acquired brain injury (ABI) during a spoken persuasive discourse task. Persuasive discourse is frequently used in social and academic settings and is of importance in the study of adolescent language. Method: Participants included 8 adolescents with ABI and 8 peers…

  12. Interaction in Spoken Word Recognition Models: Feedback Helps

    Science.gov (United States)

    Magnuson, James S.; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D.

    2018-01-01

    Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis. PMID:29666593
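
    The record does not include the simulation code; the toy sketch below only illustrates the interactive-activation idea under discussion (bottom-up phoneme evidence, a small lexicon, and optional word-to-phoneme feedback), so settling times with and without feedback can be compared. The two-word lexicon, update rule, and all parameters are invented and are far simpler than TRACE.

```python
# Toy interactive-activation loop in the spirit of the TRACE simulations
# described above (NOT the TRACE/jTRACE implementation): phoneme units feed
# a two-word lexicon, and optional word-to-phoneme feedback reinforces
# consistent phonemes. Lexicon, update rule, and parameters are invented.

import numpy as np

LEXICON = {"cat": "kat", "cap": "kap"}
PHONES = sorted(set("".join(LEXICON.values())))  # ['a', 'k', 'p', 't']
MATCH = np.array([[1.0 if p in w else 0.0 for p in PHONES]
                  for w in LEXICON.values()])    # word x phoneme overlap

def cycles_to_recognize(input_phones, feedback, noise, seed=0,
                        max_steps=500, threshold=0.9):
    rng = np.random.default_rng(seed)
    phon = np.zeros(len(PHONES))
    word = np.zeros(len(LEXICON))
    evidence = np.array([0.05 if p in input_phones else 0.0 for p in PHONES])
    for step in range(max_steps):
        noisy = evidence + noise * rng.normal(size=len(PHONES))
        phon = np.clip(phon + noisy + feedback * MATCH.T @ word, 0, 1)
        word = np.clip(word + 0.05 * MATCH @ phon
                       - 0.10 * (word.sum() - word)   # lateral inhibition
                       - 0.02, 0, 1)                  # decay
        if word.max() >= threshold:
            return step
    return max_steps

for fb in (0.0, 0.03):
    t = cycles_to_recognize("kat", feedback=fb, noise=0.02)
    print(f"feedback={fb}: recognized at cycle {t}")
```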

  13. Interaction in Spoken Word Recognition Models: Feedback Helps

    Directory of Open Access Journals (Sweden)

    James S. Magnuson

    2018-04-01

    Full Text Available Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis.

  14. In a Manner of Speaking: Assessing Frequent Spoken Figurative Idioms to Assist ESL/EFL Teachers

    Science.gov (United States)

    Grant, Lynn E.

    2007-01-01

    This article outlines criteria to define a figurative idiom, and then compares the frequent figurative idioms identified in two sources of spoken American English (academic and contemporary) to their frequency in spoken British English. This is done by searching the spoken part of the British National Corpus (BNC), to see whether they are frequent…

  15. Understanding Non-Restrictive "Which"-Clauses in Spoken English, Which Is Not an Easy Thing.

    Science.gov (United States)

    Tao, Hongyin; McCarthy, Michael J.

    2001-01-01

    Reexamines the notion of non-restrictive relative clauses (NRRCs) in light of spoken corpus evidence, based on analysis of 692 occurrences of non-restrictive "which"-clauses in British and American spoken English data. Reviews traditional conceptions of NRRCs and recent work on the broader notion of subordination in spoken grammar.…

  16. Ragnar Rommetveit's Approach to Everyday Spoken Dialogue from Within.

    Science.gov (United States)

    Kowal, Sabine; O'Connell, Daniel C

    2016-04-01

    The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue with a focus on both shared consciousness and linguistically mediated meaning. He developed this approach originally through his engagement with mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to an experimental methodology that did not allow for the engagement of interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, meaning potential of utterances, and epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he engaged psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues only occasionally. Although his work remained unfinished and open to development, it provides both a challenging alternative and supplement to current Anglo-American research on spoken dialogue and some overlap therewith.

  17. The socially-weighted encoding of spoken words: A dual-route approach to speech perception

    Directory of Open Access Journals (Sweden)

    Meghan Sumner

    2014-01-01

    Full Text Available Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: Words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially-idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially-weighted, resulting in sparse, but high-resolution clusters of socially-idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  18. Castsearch - Context Based Spoken Document Retrieval

    DEFF Research Database (Denmark)

    Mølgaard, Lasse Lohilahti; Jørgensen, Kasper Winther; Hansen, Lars Kai

    2007-01-01

    The paper describes our work on the development of a system for retrieval of relevant stories from broadcast news. The system utilizes a combination of audio processing and text mining. The audio processing consists of a segmentation step that partitions the audio into speech and music. The speech...

  19. Semantic Relations Cause Interference in Spoken Language Comprehension When Using Repeated Definite References, Not Pronouns.

    Science.gov (United States)

    Peters, Sara A; Boiteau, Timothy W; Almor, Amit

    2016-01-01

    The choice and processing of referential expressions depend on the referents' status within the discourse, such that pronouns are generally preferred over full repetitive references when the referent is salient. Here we report two visual-world experiments showing that: (1) in spoken language comprehension, this preference is reflected in delayed fixations to referents mentioned after repeated definite references compared with after pronouns; (2) repeated references are processed differently than new references; (3) long-term semantic memory representations affect the processing of pronouns and repeated names differently. Overall, these results support the role of semantic discourse representation in referential processing and reveal important details about how pronouns and full repeated references are processed in the context of these representations. The results suggest the need for modifications to current theoretical accounts of reference processing such as Discourse Prominence Theory and the Informational Load Hypothesis.

  20. Task modulation of disyllabic spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen; Chang, Ruohan; Guo, Chunyan

    2016-05-16

    Using unimodal auditory tasks of word-matching and meaning-matching, this study investigated how the phonological and semantic processes in Chinese disyllabic spoken word recognition are modulated by top-down mechanism induced by experimental tasks. Both semantic similarity and word-initial phonological similarity between the primes and targets were manipulated. Results showed that at an early stage of recognition (~150-250 ms), an enhanced P2 was elicited by the word-initial phonological mismatch in both tasks. In the ~300-500 ms window, a fronto-central negative component was elicited by word-initial phonological similarities in the word-matching task, while a parietal negativity was elicited by semantically unrelated primes in the meaning-matching task, indicating that both the semantic and phonological processes can be involved in this time window, depending on the task requirements. In the late stage (~500-700 ms), a centro-parietal Late N400 was elicited in both tasks, but with a larger effect in the meaning-matching task than in the word-matching task. This finding suggests that the semantic representation of the spoken words can be activated automatically in the late stage of recognition, even when semantic processing is not required. However, the magnitude of the semantic activation is modulated by task requirements.

  1. High arousal words influence subsequent processing of neutral information: evidence from event-related potentials.

    Science.gov (United States)

    Hinojosa, José A; Méndez-Bértolo, Constantino; Pozo, Miguel A

    2012-11-01

    Recent data suggest that word valence modulates subsequent cognitive processing. However, the contribution of word arousal is less understood. In this study, behavioral and electrophysiological measures to neutral nouns and pseudowords that were preceded by either a high-arousal or a low-arousal word were recorded during a lexical decision task. Effects were found at an electrophysiological level. Target words and pseudowords elicited enhanced N100 amplitudes when they were preceded by high- compared to low-arousing words. This effect may reflect perceptual potentiation during the allocation of attentional resources when the new stimulus is processed. Enhanced amplitudes in a late positivity when target words and pseudowords followed high-arousal primes were also observed, which could be related to sustained attention during supplementary analyses at a post-lexical level. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Beta oscillations reflect memory and motor aspects of spoken word production.

    Science.gov (United States)

    Piai, Vitória; Roelofs, Ardi; Rommers, Joost; Maris, Eric

    2015-07-01

    Two major components form the basis of spoken word production: the access of conceptual and lexical/phonological information in long-term memory, and motor preparation and execution of an articulatory program. Whereas the motor aspects of word production have been well characterized as reflected in alpha-beta desynchronization, the memory aspects have remained poorly understood. Using magnetoencephalography, we investigated the neurophysiological signature of not only motor but also memory aspects of spoken-word production. Participants named or judged pictures after reading sentences. To probe the involvement of the memory component, we manipulated sentence context. Sentence contexts were either constraining or nonconstraining toward the final word, presented as a picture. In the judgment task, participants indicated with a left-hand button press whether the picture was expected given the sentence. In the naming task, they named the picture. Naming and judgment were faster with constraining than nonconstraining contexts. Alpha-beta desynchronization was found for constraining relative to nonconstraining contexts pre-picture presentation. For the judgment task, beta desynchronization was observed in left posterior brain areas associated with conceptual processing and in right motor cortex. For the naming task, in addition to the same left posterior brain areas, beta desynchronization was found in left anterior and posterior temporal cortex (associated with memory aspects), left inferior frontal cortex, and bilateral ventral premotor cortex (associated with motor aspects). These results suggest that memory and motor components of spoken word production are reflected in overlapping brain oscillations in the beta band. © 2015 Wiley Periodicals, Inc.

  3. Spoken Document Retrieval Based on Confusion Network with Syllable Fragments

    Directory of Open Access Journals (Sweden)

    Zhang Lei

    2012-11-01

    Full Text Available This paper addresses the problem of spoken document retrieval under noisy conditions by incorporating sound selection of a basic unit and an output form of a speech recognition system. Syllable fragments are combined with a confusion network in a spoken document retrieval task. After selecting an appropriate syllable fragment, a lattice is converted into a confusion network that is able to minimize the word error rate instead of maximizing the whole-sentence recognition rate. A vector space model is adopted in the retrieval task, where tf-idf weights are derived from the posterior probability. The confusion network with syllable fragments is able to improve the mean average precision (MAP) score by 0.342 and 0.066 over the one-best scheme and the lattice, respectively.
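
    A sketch of the weighting scheme the record describes: instead of hard counts from a one-best transcript, the term frequency of a unit is the sum of its posterior probabilities over the slots of the confusion network, and tf-idf weights are built from these soft counts. The data structures, toy posteriors, and idf smoothing below are illustrative assumptions, not the paper's exact formulas.

```python
# Soft term frequencies from a confusion network, as described above: each
# slot holds competing hypotheses with posterior probabilities, and a unit's
# tf in a document is the sum of its posteriors. Toy data; the paper's exact
# idf and normalisation details are not reproduced.

import math
from collections import defaultdict

# document -> confusion network: list of slots, slot = {unit: posterior}
docs = {
    "doc1": [{"ba": 0.7, "pa": 0.3}, {"shi": 0.9, "si": 0.1}],
    "doc2": [{"pa": 0.6, "ba": 0.4}, {"si": 1.0}],
}

def expected_tf(cn):
    tf = defaultdict(float)
    for slot in cn:
        for unit, posterior in slot.items():
            tf[unit] += posterior            # soft count, not a hard count
    return tf

tfs = {d: expected_tf(cn) for d, cn in docs.items()}
df = defaultdict(int)                        # document frequencies
for tf in tfs.values():
    for unit in tf:
        df[unit] += 1

N = len(docs)
def tfidf(doc, unit):
    return tfs[doc].get(unit, 0.0) * math.log((N + 1) / (df[unit] + 1))

print(round(tfidf("doc1", "shi"), 3))        # 0.9 * log(3/2) ~ 0.365
```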

  4. Criteria for the segmentation of spoken input into individual utterances

    OpenAIRE

    Mast, Marion; Maier, Elisabeth; Schmitz, Birte

    1995-01-01

    This report describes how spoken language turns are segmented into utterances in the framework of the verbmobil project. The problem of segmenting turns is directly related to the task of annotating a discourse with dialogue act information: an utterance can be characterized as a stretch of dialogue that is attributed one dialogue act. Unfortunately, this rule in many cases is insufficient and many doubtful cases remain. We tried to at least reduce the number of unclear cases by providing a n...

  5. METONYMY BASED ON CULTURAL BACKGROUND KNOWLEDGE AND PRAGMATIC INFERENCING: EVIDENCE FROM SPOKEN DISCOURSE

    Directory of Open Access Journals (Sweden)

    Arijana Krišković

    2009-01-01

    Full Text Available The characterization of metonymy as a conceptual tool for guiding inferencing in language has opened a new field of study in cognitive linguistics and pragmatics. To appreciate the value of metonymy for pragmatic inferencing, metonymy should not be viewed as performing only its prototypical referential function. Metonymic mappings are operative in speech acts at the level of reference, predication, proposition and illocution. The aim of this paper is to study the role of metonymy in pragmatic inferencing in spoken discourse in television interviews. Case analyses of authentic utterances classified as illocutionary metonymies, following the pragmatic typology of metonymic functions, are presented. The inferencing processes are facilitated by metonymic connections existing between domains or subdomains in the same functional domain. It has been widely accepted by cognitive linguists that universal human knowledge and embodiment are essential for the interpretation of metonymy. This analysis points to the role of cultural background knowledge in understanding target meanings. All these aspects of metonymic connections are exploited in complex inferential processes in spoken discourse. In most cases, metaphoric mappings are also a part of utterance interpretation.

  6. Oscillatory Brain Responses Reflect Anticipation during Comprehension of Speech Acts in Spoken Dialog.

    Science.gov (United States)

    Gisladottir, Rosa S; Bögels, Sara; Levinson, Stephen C

    2018-01-01

    Everyday conversation requires listeners to quickly recognize verbal actions, so-called speech acts, from the underspecified linguistic code and prepare a relevant response within the tight time constraints of turn-taking. The goal of this study was to determine the time-course of speech act recognition by investigating oscillatory EEG activity during comprehension of spoken dialog. Participants listened to short, spoken dialogs with target utterances that delivered three distinct speech acts (Answers, Declinations, Pre-offers). The targets were identical across conditions at lexico-syntactic and phonetic/prosodic levels but differed in the pragmatic interpretation of the speech act performed. Speech act comprehension was associated with reduced power in the alpha/beta bands just prior to Declination speech acts, relative to Answers and Pre-offers. In addition, we observed reduced power in the theta band during the beginning of Declinations, relative to Answers. Based on the role of alpha and beta desynchronization in anticipatory processes, the results are taken to indicate that anticipation plays a role in speech act recognition. Anticipation of speech acts could be critical for efficient turn-taking, allowing interactants to quickly recognize speech acts and respond within the tight time frame characteristic of conversation. The results show that anticipatory processes can be triggered by the characteristics of the interaction, including the speech act type.

  7. Oscillatory Brain Responses Reflect Anticipation during Comprehension of Speech Acts in Spoken Dialog

    Directory of Open Access Journals (Sweden)

    Rosa S. Gisladottir

    2018-02-01

    Full Text Available Everyday conversation requires listeners to quickly recognize verbal actions, so-called speech acts, from the underspecified linguistic code and prepare a relevant response within the tight time constraints of turn-taking. The goal of this study was to determine the time-course of speech act recognition by investigating oscillatory EEG activity during comprehension of spoken dialog. Participants listened to short, spoken dialogs with target utterances that delivered three distinct speech acts (Answers, Declinations, Pre-offers). The targets were identical across conditions at lexico-syntactic and phonetic/prosodic levels but differed in the pragmatic interpretation of the speech act performed. Speech act comprehension was associated with reduced power in the alpha/beta bands just prior to Declination speech acts, relative to Answers and Pre-offers. In addition, we observed reduced power in the theta band during the beginning of Declinations, relative to Answers. Based on the role of alpha and beta desynchronization in anticipatory processes, the results are taken to indicate that anticipation plays a role in speech act recognition. Anticipation of speech acts could be critical for efficient turn-taking, allowing interactants to quickly recognize speech acts and respond within the tight time frame characteristic of conversation. The results show that anticipatory processes can be triggered by the characteristics of the interaction, including the speech act type.

  8. Linguistic adaptations during spoken and multimodal error resolution.

    Science.gov (United States)

    Oviatt, S; Bernard, J; Levow, G A

    1998-01-01

    Fragile error handling in recognition-based systems is a major problem that degrades their performance, frustrates users, and limits commercial potential. The aim of the present research was to analyze the types and magnitude of linguistic adaptation that occur during spoken and multimodal human-computer error resolution. A semiautomatic simulation method with a novel error-generation capability was used to collect samples of users' spoken and pen-based input immediately before and after recognition errors, and at different spiral depths in terms of the number of repetitions needed to resolve an error. When correcting persistent recognition errors, results revealed that users adapt their speech and language in three qualitatively different ways. First, they increase linguistic contrast through alternation of input modes and lexical content over repeated correction attempts. Second, when correcting with verbatim speech, they increase hyperarticulation by lengthening speech segments and pauses, and increasing the use of final falling contours. Third, when they hyperarticulate, users simultaneously suppress linguistic variability in their speech signal's amplitude and fundamental frequency. These findings are discussed from the perspective of enhancement of linguistic intelligibility. Implications are also discussed for corroboration and generalization of the Computer-elicited Hyperarticulate Adaptation Model (CHAM), and for improved error handling capabilities in next-generation spoken language and multimodal systems.

  9. Automatic disambiguation of morphosyntax in spoken language corpora.

    Science.gov (United States)

    Parisse, C; Le Normand, M T

    2000-08-01

    The use of computer tools has led to major advances in the study of spoken language corpora. One area that has shown particular progress is the study of child language development. Although it is now easy to lexically tag every word in a spoken language corpus, one still has to choose between numerous ambiguous forms, especially with languages such as French or English, where more than 70% of words are ambiguous. Computational linguistics can now provide a fully automatic disambiguation of lexical tags. The tool presented here (POST) can tag and disambiguate a large text in a few seconds. This tool complements systems dealing with language transcription and suggests further theoretical developments in the assessment of the status of morphosyntax in spoken language corpora. The program currently works for French and English, but it can be easily adapted for use with other languages. The analysis and computation of a corpus produced by normal French children 2-4 years of age, as well as of a sample corpus produced by French SLI children, are given as examples.
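
    The abstract does not spell out POST's internals, so the following is only a generic sketch of how ambiguous lexical tags are commonly disambiguated: a bigram tag-transition model plus per-word tag likelihoods, decoded with the Viterbi algorithm. The mini French lexicon and all probabilities are invented.

```python
# Generic bigram-HMM tag disambiguation with Viterbi decoding; POST's actual
# model is not described in the abstract, and the mini French lexicon and all
# probabilities below are invented.

import math

# word -> {candidate tag: P(word | tag)}; ambiguous words list several tags
LEXICON = {
    "la":    {"DET": 0.9, "PRO": 0.1},
    "porte": {"NOUN": 0.6, "VERB": 0.4},
}
# P(tag | previous tag), with "<s>" as the start state
TRANS = {
    ("<s>", "DET"): 0.6, ("<s>", "PRO"): 0.4,
    ("DET", "NOUN"): 0.8, ("DET", "VERB"): 0.2,
    ("PRO", "NOUN"): 0.3, ("PRO", "VERB"): 0.7,
}

def viterbi(words):
    states = {"<s>": (0.0, [])}              # tag -> (log prob, best path)
    for w in words:
        new = {}
        for tag, emit in LEXICON[w].items():
            best = max(
                (lp + math.log(TRANS.get((prev, tag), 1e-6))
                 + math.log(emit), prev)
                for prev, (lp, _) in states.items())
            new[tag] = (best[0], states[best[1]][1] + [tag])
        states = new
    return max(states.values(), key=lambda v: v[0])[1]

print(viterbi(["la", "porte"]))              # ['DET', 'NOUN']
```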

  10. Reliability and validity of the C-BiLLT: a new instrument to assess comprehension of spoken language in young children with cerebral palsy and complex communication needs.

    Science.gov (United States)

    Geytenbeek, Joke J; Mokkink, Lidwine B; Knol, Dirk L; Vermeulen, R Jeroen; Oostrom, Kim J

    2014-09-01

    In clinical practice, a variety of diagnostic tests are available to assess a child's comprehension of spoken language. However, none of these tests have been designed specifically for use with children who have severe motor impairments and who experience severe difficulty when using speech to communicate. This article describes the process of investigating the reliability and validity of the Computer-Based Instrument for Low Motor Language Testing (C-BiLLT), which was specifically developed to assess spoken Dutch language comprehension in children with cerebral palsy and complex communication needs. The study included 806 children with typical development, and 87 nonspeaking children with cerebral palsy and complex communication needs, and was designed to provide information on the psychometric qualities of the C-BiLLT. The potential utility of the C-BiLLT as a measure of spoken Dutch language comprehension abilities for children with cerebral palsy and complex communication needs is discussed.

  11. Interface for Barge-in Free Spoken Dialogue System Based on Sound Field Reproduction and Microphone Array

    Directory of Open Access Journals (Sweden)

    Hinamoto Yoichi

    2007-01-01

    Full Text Available A barge-in free spoken dialogue interface using sound field control and microphone array is proposed. In the conventional spoken dialogue system using an acoustic echo canceller, it is indispensable to estimate a room transfer function, especially when the transfer function is changed by various interferences. However, the estimation is difficult when the user and the system speak simultaneously. To resolve the problem, we propose a sound field control technique to prevent the response sound from being observed. Combined with a microphone array, the proposed method can achieve high elimination performance with no adaptive process. The efficacy of the proposed interface is ascertained in the experiments on the basis of sound elimination and speech recognition.

  12. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    Science.gov (United States)

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

    In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we use fMRI to compare the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs, compared to either SJM or PJM without CCs, recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged the left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Common neural substrates for inhibition of spoken and manual responses.

    Science.gov (United States)

    Xue, Gui; Aron, Adam R; Poldrack, Russell A

    2008-08-01

    The inhibition of speech acts is a critical aspect of human executive control over thought and action, but its neural underpinnings are poorly understood. Using functional magnetic resonance imaging and the stop-signal paradigm, we examined the neural correlates of speech control in comparison to manual motor control. Initiation of a verbal response activated left inferior frontal cortex (IFC: Broca's area). Successful inhibition of speech (naming of letters or pseudowords) engaged a region of right IFC (including pars opercularis and anterior insular cortex) as well as presupplementary motor area (pre-SMA); these regions were also activated by successful inhibition of a hand response (i.e., a button press). Moreover, the speed with which subjects inhibited their responses, stop-signal reaction time, was significantly correlated between speech and manual inhibition tasks. These findings suggest a functional dissociation of left and right IFC in initiating versus inhibiting vocal responses, and that manual responses and speech acts share a common inhibitory mechanism localized in the right IFC and pre-SMA.

  14. Brain-to-text: Decoding spoken phrases from phone representations in the brain

    Directory of Open Access Journals (Sweden)

    Christian Herff

    2015-06-01

    Full Text Available It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system achieved word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step towards human-machine communication based on imagined speech.

  15. Psycholinguistic norms for action photographs in French and their relationships with spoken and written latencies.

    Science.gov (United States)

    Bonin, Patrick; Boyer, Bruno; Méot, Alain; Fayol, Michel; Droit, Sylvie

    2004-02-01

    A set of 142 photographs of actions (taken from Fiez & Tranel, 1997) was standardized in French on name agreement, image agreement, conceptual familiarity, visual complexity, imageability, age of acquisition, and duration of the depicted actions. Objective word frequency measures were provided for the infinitive modal forms of the verbs and for the cumulative frequency of the verbal forms associated with the photographs. Statistics on the variables collected for action items were provided and compared with the statistics on the same variables collected for object items. The relationships between these variables were analyzed, and certain comparisons between the current database and other similar published databases of pictures of actions are reported. Spoken and written naming latencies were also collected for the photographs of actions, and multiple regression analyses revealed that name agreement, image agreement, and age of acquisition are the major determinants of action naming speed. Finally, certain analyses were performed to compare object and action naming times. The norms and the spoken and written naming latencies corresponding to the pictures are available on the Internet (http://www.psy.univ-bpclermont.fr/~pbonin/pbonin-eng.html) and should be of great use to researchers interested in the processing of actions.

  16. Positive Emotional Language in the Final Words Spoken Directly Before Execution.

    Science.gov (United States)

    Hirschmüller, Sarah; Egloff, Boris

    2015-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.

  17. Positive Emotional Language in the Final Words Spoken Directly Before Execution

    Directory of Open Access Journals (Sweden)

    Sarah Hirschmüller

    2016-01-01

    Full Text Available How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one’s own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.

  18. Discourse before gender: An event-related brain potential study on the interplay of semantic and syntactic information during spoken language understanding

    NARCIS (Netherlands)

    Brown, C.M.; Berkum, J.J.A. van; Hagoort, P.

    2000-01-01

    A study is presented on the effects of discourse-semantic and lexical-syntactic information during spoken sentence processing. Event-related brain potentials (ERPs) were registered while subjects listened to discourses that ended in a sentence with a temporary syntactic ambiguity. The prior

  19. Spoken English Language Development Among Native Signing Children With Cochlear Implants

    OpenAIRE

    Davidson, Kathryn; Lillo-Martin, Diane; Chen Pichler, Deborah

    2013-01-01

    Bilingualism is common throughout the world, and bilingual children regularly develop into fluently bilingual adults. In contrast, children with cochlear implants (CIs) are frequently encouraged to focus on a spoken language to the exclusion of sign language. Here, we investigate the spoken English language skills of 5 children with CIs who also have deaf signing parents, and so receive exposure to a full natural sign language (American Sign Language, ASL) from birth, in addition to spoken En...

  20. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    Science.gov (United States)

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H215O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  1. Encoding lexical tones in jTRACE: a simulation of monosyllabic spoken word recognition in Mandarin Chinese.

    Science.gov (United States)

    Shuai, Lan; Malins, Jeffrey G

    2017-02-01

    Despite its prevalence as one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to consider tonal languages such as Mandarin Chinese. A key reason for this is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we borrowed from its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that had used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then subsequently simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to those of published data with human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.
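
    jTRACE's actual unit inventory is not given in the abstract; the toy encoding below merely illustrates the stated design goal, namely tone information that becomes available on roughly the same schedule as vowel information, by ramping segmental and tonal activations together over input time slices. The feature scheme is invented, not jTRACE's.

```python
# Toy illustration of the encoding idea described above: the input is a
# sequence of time slices in which vowel and tone features ramp up together,
# so tonal information becomes available on roughly the same schedule as
# vowel information. This is an invented feature scheme, not jTRACE's actual
# unit inventory.

import numpy as np

UNITS = ["m", "a", "tone1", "tone2"]          # segmental + tonal units

def encode(syllable: str, tone: str, slices_per_unit: int = 4) -> np.ndarray:
    """Return a (time, units) activation matrix for one spoken syllable."""
    track = np.zeros((slices_per_unit * len(syllable), len(UNITS)))
    for i, seg in enumerate(syllable):
        onset = i * slices_per_unit
        ramp = np.linspace(0.2, 1.0, slices_per_unit)
        track[onset:onset + slices_per_unit, UNITS.index(seg)] = ramp
        if seg in "aeiou":                     # tone co-ramps with the vowel
            track[onset:onset + slices_per_unit, UNITS.index(tone)] = ramp
    return track

print(encode("ma", "tone1").round(2))
```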

  2. Evaluating the spoken English proficiency of graduates of foreign medical schools.

    Science.gov (United States)

    Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E

    2001-08-01

    The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in a high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared by various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.
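
    As a hedged illustration of the generalizability analysis mentioned above, the sketch below estimates variance components for a one-facet candidates-by-encounters design from a simulated rating matrix and computes the relative G coefficient for a 10-encounter assessment. The real CSA design has additional facets (cases, SPs) that are not modelled here, and all numbers are simulated.

```python
# Hedged sketch of a one-facet generalizability analysis (candidates x
# encounters) using standard random-effects ANOVA estimators. The rating
# matrix is simulated; the ECFMG data and extra facets are not modelled.

import numpy as np

rng = np.random.default_rng(1)
n_p, n_i = 200, 10                        # candidates, encounters
person = rng.normal(0, 1.0, (n_p, 1))     # true spoken-English proficiency
ratings = person + rng.normal(0, 0.7, (n_p, n_i))  # plus encounter noise

grand = ratings.mean()
person_means = ratings.mean(axis=1)
item_means = ratings.mean(axis=0)

ss_p = n_i * ((person_means - grand) ** 2).sum()
ss_i = n_p * ((item_means - grand) ** 2).sum()
ss_res = ((ratings - grand) ** 2).sum() - ss_p - ss_i

ms_p = ss_p / (n_p - 1)
ms_res = ss_res / ((n_p - 1) * (n_i - 1))

var_res = ms_res
var_p = max((ms_p - ms_res) / n_i, 0.0)

# relative G coefficient for a 10-encounter assessment
g = var_p / (var_p + var_res / n_i)
print(f"sigma2_p={var_p:.3f} sigma2_res={var_res:.3f} G={g:.3f}")
```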

  3. Speech Recognition System and Formant Based Analysis of Spoken Arabic Vowels

    Science.gov (United States)

    Alotaibi, Yousef Ajami; Hussain, Amir

    Arabic is one of the world's oldest languages and is currently the second most spoken language in terms of number of speakers. However, it has not received much attention from the traditional speech processing research community. This study is specifically concerned with the analysis of vowels in modern standard Arabic dialect. The first and second formant values in these vowels are investigated and the differences and similarities between the vowels are explored using consonant-vowels-consonant (CVC) utterances. For this purpose, an HMM based recognizer was built to classify the vowels and the performance of the recognizer analyzed to help understand the similarities and dissimilarities between the phonetic features of vowels. The vowels are also analyzed in both time and frequency domains, and the consistent findings of the analysis are expected to facilitate future Arabic speech processing tasks such as vowel and speech recognition and classification.
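
    The record's exact formant-tracking pipeline is not specified; below is a generic LPC-based formant estimator of the kind commonly used for F1/F2 measurement: pre-emphasis and windowing, autocorrelation LPC via the Levinson-Durbin recursion, and conversion of the predictor's complex roots to frequencies, discarding wide-bandwidth poles. The synthetic test signal and all thresholds are illustrative.

```python
# Generic LPC-based F1/F2 estimation (not the record's exact pipeline):
# pre-emphasis + Hamming window, autocorrelation LPC via Levinson-Durbin,
# then predictor roots -> formant frequencies, discarding wide-bandwidth
# poles. Test tones and thresholds are illustrative.

import numpy as np

def levinson(r, order):
    """Levinson-Durbin recursion: autocorrelation -> LPC coefficients."""
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i-1:0:-1])) / err
        a[1:i] = a[1:i] + k * a[i-1:0:-1]
        a[i] = k
        err *= 1.0 - k * k
    return a

def formants(frame, fs, order=8):
    frame = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])  # pre-emphasis
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, "full")[len(frame)-1 : len(frame)+order]
    a = levinson(r, order)
    freqs = []
    for root in np.roots(a):
        if root.imag <= 0:
            continue                          # keep one of each pole pair
        f = np.angle(root) * fs / (2 * np.pi)
        bw = -(fs / np.pi) * np.log(np.abs(root))
        if f > 90 and bw < 400:
            freqs.append(f)
    return sorted(freqs)[:2]                  # rough F1, F2

fs = 10_000
t = np.arange(0, 0.03, 1 / fs)                # 30 ms synthetic "vowel"
sig = np.sin(2*np.pi*700*t) + 0.5*np.sin(2*np.pi*1200*t)
print([round(f) for f in formants(sig, fs)])  # approximately [700, 1200]
```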

  4. Phonotactic spoken language identification with limited training data

    CSIR Research Space (South Africa)

    Peche, M

    2007-08-01

    Full Text Available Because of their role as world languages that are widely spoken in Africa, our initial LID system was designed to distinguish between English, French and Portuguese, for which phone recognizers and language models were trained. The system's error rates are then examined when no Japanese acoustic models are constructed: an increasing amount of Japanese training data is used to train the language classifier of an English-only (E), an English-French (EF), and an English-French-Portuguese (EFP) PPR system.
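
    A minimal sketch of the phonotactic scoring step in a PPR-style system like the one described: per-language phone-bigram models score the phone string produced by a recognizer, and the highest-scoring language wins. The phone recognizer itself is omitted, and the phone strings and smoothing constant are toy assumptions.

```python
# Sketch of phonotactic language identification scoring: train one phone
# bigram model per language, then score a decoded phone string under each.
# Phone strings and the add-alpha smoothing are toy data, not the paper's.

import math
from collections import defaultdict

def train_bigram(phone_strings, alpha=0.5):
    counts, unigrams = defaultdict(float), defaultdict(float)
    vocab = set()
    for s in phone_strings:
        phones = ["<s>"] + s.split()
        vocab.update(phones)
        for a, b in zip(phones, phones[1:]):
            counts[(a, b)] += 1
            unigrams[a] += 1
    V = len(vocab) + 1
    return lambda a, b: math.log((counts[(a, b)] + alpha) /
                                 (unigrams[a] + alpha * V))

models = {
    "english":    train_bigram(["dh ax k ae t", "k ae t s"]),
    "portuguese": train_bigram(["u g a t u", "g a t u sh"]),
}

def identify(phone_string):
    phones = ["<s>"] + phone_string.split()
    scores = {lang: sum(lp(a, b) for a, b in zip(phones, phones[1:]))
              for lang, lp in models.items()}
    return max(scores, key=scores.get), scores

print(identify("k ae t"))   # english should win on these toy models
```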

  5. Syntax and reading comprehension: a meta-analysis of different spoken-syntax assessments.

    Science.gov (United States)

    Brimo, Danielle; Lund, Emily; Sapp, Alysha

    2017-12-18

    Syntax is a language skill purported to support children's reading comprehension. However, researchers who have examined whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments report inconsistent results. The aim was to determine whether differences in how syntax is measured affect whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments. Studies that included a group comparison design, children with average and below-average reading comprehension, and a spoken-syntax assessment were selected for review. Fourteen articles from a total of 1281 reviewed met the inclusion criteria. The 14 articles were coded for the age of the children, score on the reading comprehension assessment, type of spoken-syntax assessment, type of syntax construct measured and score on the spoken-syntax assessment. A random-effects model was used to analyze the difference between the effect sizes of the types of spoken-syntax assessments and the difference between the effect sizes of the syntax construct measured. There was a significant difference between children with average and below-average reading comprehension on spoken-syntax assessments. Those with average and below-average reading comprehension scored significantly differently on spoken-syntax assessments when norm-referenced and researcher-created assessments were compared. However, when the type of construct was compared, children with average and below-average reading comprehension scored significantly differently on assessments that measured knowledge of spoken syntax, but not on assessments that measured awareness of spoken syntax. The results of this meta-analysis confirmed that the type of spoken-syntax assessment, whether norm-referenced or researcher-created, did not explain why some researchers reported that there were no significant differences between children with average and below
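
    The abstract reports a random-effects model over effect sizes; as a hedged illustration, the sketch below pools invented effect sizes with the standard DerSimonian-Laird estimator. The review's actual data are not reproduced here.

```python
# Hedged sketch of a DerSimonian-Laird random-effects meta-analysis like the
# one described above; effect sizes and variances are invented.

import numpy as np

g = np.array([0.8, 0.5, 1.1, 0.3, 0.9])       # per-study effect sizes (toy)
v = np.array([0.05, 0.08, 0.04, 0.10, 0.06])  # per-study sampling variances
w = 1 / v
fixed = np.sum(w * g) / w.sum()               # fixed-effect pooled mean
q = np.sum(w * (g - fixed) ** 2)              # Cochran's Q
c = w.sum() - np.sum(w ** 2) / w.sum()
tau2 = max(0.0, (q - (len(g) - 1)) / c)       # DL between-study variance
w_re = 1 / (v + tau2)                         # random-effects weights
pooled = np.sum(w_re * g) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
print(f"pooled g = {pooled:.2f} +/- {1.96 * se:.2f}, tau^2 = {tau2:.3f}")
```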

  6. Syllable Frequency and Spoken Word Recognition: An Inhibitory Effect.

    Science.gov (United States)

    González-Alvarez, Julio; Palomar-García, María-Angeles

    2016-08-01

    Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of the lexical frequency and, importantly, an inhibitory effect of the first syllable frequency on reaction times and error rates. © The Author(s) 2016.
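
    The abstract says latencies and error rates were analyzed with linear mixed models; the following is a minimal, hypothetical sketch of such an analysis in Python with statsmodels, using a simulated 2 x 2 design and by-subject random intercepts only. Column names, effect sizes, and the random-effects structure are assumptions, since the authors' exact model specification is not given.

```python
# Hypothetical linear mixed-model analysis of a 2x2 lexical-frequency by
# first-syllable-frequency design with by-subject random intercepts.
# Simulated data; not the study's dataset or exact model.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_items = 45, 32
rows = []
for s in range(n_subj):
    subj_shift = rng.normal(0, 40)                 # per-subject speed
    for lexfreq in (0, 1):                         # low/high word frequency
        for syllfreq in (0, 1):                    # low/high syllable freq
            for _ in range(n_items // 4):
                rt = (650 - 30 * lexfreq + 20 * syllfreq  # inhibitory syllable effect
                      + subj_shift + rng.normal(0, 60))
                rows.append((f"s{s}", lexfreq, syllfreq, rt))
df = pd.DataFrame(rows, columns=["subject", "lexfreq", "syllfreq", "rt"])

model = smf.mixedlm("rt ~ lexfreq * syllfreq", df, groups=df["subject"])
print(model.fit().summary())
```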

  7. THE RECOGNITION OF SPOKEN MONO-MORPHEMIC COMPOUNDS IN CHINESE

    Directory of Open Access Journals (Sweden)

    Yu-da Lai

    2012-12-01

    Full Text Available This paper explores the auditory lexical access of mono-morphemic compounds in Chinese as a way of understanding the role of orthography in the recognition of spoken words. In traditional Chinese linguistics, a compound is a word written with two or more characters whether or not they are morphemic. A monomorphemic compound may either be a binding word, written with characters that only appear in this one word, or a non-binding word, written with characters that are chosen for their pronunciation but that also appear in other words. Our goal was to determine if this purely orthographic difference affects auditory lexical access by conducting a series of four experiments with materials matched by whole-word frequency, syllable frequency, cross-syllable predictability, cohort size, and acoustic duration, but differing in binding. An auditory lexical decision task (LDT) found an orthographic effect: binding words were recognized more quickly than non-binding words. However, this effect disappeared in an auditory repetition and in a visual LDT with the same materials, implying that the orthographic effect during auditory lexical access was localized to the decision component and involved the influence of cross-character predictability without the activation of orthographic representations. This claim was further confirmed by overall faster recognition of spoken binding words in a cross-modal LDT with different types of visual interference. The theoretical and practical consequences of these findings are discussed.

  8. Event related potentials during the visual discrimination of words and pseudowords by children

    Directory of Open Access Journals (Sweden)

    Lineu C. Fonseca

    2006-09-01

    Full Text Available Event-related potentials (ERPs) in reading were studied in children performing a word/pseudoword discrimination task. Seventy-nine children (9 to 11 years old), all without elements suggesting brain injury and with school performance compatible with their age, were studied. The ERPs were recorded while 100 words and 100 pseudowords were presented visually, one after another in random order. For each stimulus the child pressed a key corresponding to the discrimination between word and pseudoword. Recordings were made from the electrodes of the 10-20 system, and mean amplitudes, peak latencies, and the amplitude from 200 to 550 milliseconds were measured. The most significant differences between the ERPs occurred at Cz, with greater negativity of the mean amplitude between 425 and 550 milliseconds for pseudowords (N400). The N400 was earlier in the 11-year-olds. The influence of age was thus evident, as were the differences in ERPs between words and pseudowords.

  9. A word by any other intonation: fMRI evidence for implicit memory traces for pitch contours of spoken words in adult brains.

    Directory of Open Access Journals (Sweden)

    Michael Inspector

    OBJECTIVES: Intonation may serve as a cue for facilitated recognition and processing of spoken words, and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern. EXPERIMENTAL DESIGN: Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) all words in a set were presented with a flat, monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept constant across the three repetitions; (iii) each word had a different arbitrary pitch contour on each of its repetitions. PRINCIPAL FINDINGS: The repeated presentation of words with a fixed pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21, 22) bilaterally, and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects. CONCLUSIONS: Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language-associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.

  10. Long-term temporal tracking of speech rate affects spoken-word recognition.

    Science.gov (United States)

    Baese-Berk, Melissa M; Heffner, Christopher C; Dilley, Laura C; Pitt, Mark A; Morrill, Tuuli H; McAuley, J Devin

    2014-08-01

    Humans unconsciously track a wide array of distributional characteristics in their sensory environment. Recent research in spoken-language processing has demonstrated that the speech rate surrounding a target region within an utterance influences which words, and how many words, listeners hear later in that utterance. On the basis of hypotheses that listeners track timing information in speech over long timescales, we investigated the possibility that the perception of words is sensitive to speech rate over such a timescale (e.g., an extended conversation). Results demonstrated that listeners tracked variation in the overall pace of speech over an extended duration (analogous to that of a conversation that listeners might have outside the lab) and that this global speech rate influenced which words listeners reported hearing. The effects of speech rate became stronger over time. Our findings are consistent with the hypothesis that neural entrainment by speech occurs on multiple timescales, some lasting more than an hour. © The Author(s) 2014.

  11. English Listeners Use Suprasegmental Cues to Lexical Stress Early during Spoken-Word Recognition

    Science.gov (United States)

    Jesse, Alexandra; Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g.,…

  12. The Frequency and Functions of "Just" in British Academic Spoken English

    Science.gov (United States)

    Grant, Lynn E.

    2011-01-01

    This study investigates the frequency and functions of "just" in British academic spoken English. It adopts the meanings of "just" established by Lindemann and Mauranen (2001), taken from the occurrences of "just" across five speech events in the Michigan Corpus of Academic Spoken English (MICASE), to see if they also apply to occurrences of "just"…

  13. Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts

    Science.gov (United States)

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…

  14. "Poetry Does Really Educate": An Interview with Spoken Word Poet Luka Lesson

    Science.gov (United States)

    Xerri, Daniel

    2016-01-01

    Spoken word poetry is a means of engaging young people with a genre that has often been much maligned in classrooms all over the world. This interview with the Australian spoken word poet Luka Lesson explores issues that are of pressing concern to poetry education. These include the idea that engagement with poetry in schools can be enhanced by…

  15. The Listening and Spoken Language Data Repository: Design and Project Overview

    Science.gov (United States)

    Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.

    2018-01-01

    Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…

  16. What does že jo (and že ne) mean in spoken dialogue

    Czech Academy of Sciences Publication Activity Database

    Komrsková, Zuzana

    2017-01-01

    Roč. 68, č. 2 (2017), s. 229-237 ISSN 0021-5597 R&D Projects: GA ČR GA15-01116S Institutional support: RVO:68378092 Keywords : spoken language * spoken corpus * tag question * response word Subject RIV: AI - Linguistics OBOR OECD: Linguistics http://www.juls.savba.sk/ediela/jc/2017/2/jc17-02.pdf

  17. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants

    NARCIS (Netherlands)

    Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M. van; Keuning, J.; Knoors, H.; Verhoeven, L.

    2016-01-01

    BACKGROUND: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. AIMS: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken

  19. Word Up: Using Spoken Word and Hip Hop Subject Matter in Pre-College Writing Instruction.

    Science.gov (United States)

    Sirc, Geoffrey; Sutton, Terri

    2009-01-01

    In June 2008, the Department of English at the University of Minnesota partnered with the Minnesota Spoken Word Association to inaugurate an outreach literacy program for local high-school students and teachers. The four-day institute, named "In Da Tradition," used spoken word and hip hop to teach academic and creative writing to core-city…

  20. Using the TED Talks to Evaluate Spoken Post-editing of Machine Translation

    DEFF Research Database (Denmark)

    Liyanapathirana, Jeevanthi; Popescu-Belis, Andrei

    2016-01-01

    . To obtain a data set with spoken post-editing information, we use the French version of TED talks as the source texts submitted to MT, and the spoken English counterparts as their corrections, which are submitted to an ASR system. We experiment with various levels of artificial ASR noise and also...

  1. Four Functionally Distinct Regions in the Left Supramarginal Gyrus Support Word Processing.

    Science.gov (United States)

    Oberhuber, M; Hope, T M H; Seghier, M L; Parker Jones, O; Prejawa, S; Green, D W; Price, C J

    2016-09-06

    We used fMRI in 85 healthy participants to investigate whether different parts of the left supramarginal gyrus (SMG) are involved in processing phonological inputs and outputs. The experiment involved 2 tasks (speech production (SP) and one-back (OB) matching) on 8 different types of stimuli that systematically varied the demands on sensory processing (visual vs. auditory), sublexical phonological input (words and pseudowords vs. nonverbal stimuli), and semantic content (words and objects vs. pseudowords and meaningless baseline stimuli). In ventral SMG, we found an anterior subregion associated with articulatory sequencing (for SP > OB matching) and a posterior subregion associated with auditory short-term memory (for all auditory > visual stimuli and written words and pseudowords > objects). In dorsal SMG, a posterior subregion was most highly activated by words, indicating a role in the integration of sublexical and lexical cues. In anterior dorsal SMG, activation was higher for both pseudoword reading and object naming compared with word reading, which is more consistent with executive demands than phonological processing. The dissociation of these four "functionally-distinct" regions, all within left SMG, has implications for differentiating between different types of phonological processing, understanding the functional anatomy of language and predicting the effect of brain damage. © The Author 2016. Published by Oxford University Press.

  2. The Relationship between Phonological and Auditory Processing and Brain Organization in Beginning Readers

    Science.gov (United States)

    Pugh, Kenneth R.; Landi, Nicole; Preston, Jonathan L.; Mencl, W. Einar; Austin, Alison C.; Sibley, Daragh; Fulbright, Robert K.; Seidenberg, Mark S.; Grigorenko, Elena L.; Constable, R. Todd; Molfese, Peter; Frost, Stephen J.

    2013-01-01

    We employed brain-behavior analyses to explore the relationship between performance on tasks measuring phonological awareness, pseudoword decoding, and rapid auditory processing (all predictors of reading (dis)ability) and brain organization for print and speech in beginning readers. For print-related activation, we observed a shared set of…

  3. Spoken commands control robot that handles radioactive materials

    International Nuclear Information System (INIS)

    Phelan, P.F.; Keddy, C.; Beugelsdojk, T.J.

    1989-01-01

    Several robotic systems have been developed by Los Alamos National Laboratory to handle radioactive material. Because of safety considerations, the robotic system must be under direct human supervision and interactive control continuously. In this paper, we describe the implementation of a voice-recognition system that permits this control, yet allows the robot to perform complex preprogrammed manipulations without the operator's intervention. To provide better interactive control, we connected a speech synthesis unit to the robot's control computer, which provides audible feedback to the operator. Thus, upon completion of a task, or if an emergency arises, an appropriate spoken message can be reported by the control computer. The training, programming, and operation of this commercially available system are discussed, as are the practical problems encountered during operations.

  4. Computational Interpersonal Communication: Communication Studies and Spoken Dialogue Systems

    Directory of Open Access Journals (Sweden)

    David J. Gunkel

    2016-09-01

    With the advent of spoken dialogue systems (SDS), communication can no longer be considered a human-to-human transaction. It now involves machines. These mechanisms are not just a medium through which human messages pass, but now occupy the position of the other in social interactions. But the development of robust and efficient conversational agents is not just an engineering challenge. It also depends on research in human conversational behavior. It is the thesis of this paper that communication studies is best situated to respond to this need. The paper argues (1) that research in communication can supply the information necessary to respond to and resolve many of the open problems in SDS engineering, and (2) that the development of SDS applications can provide the discipline of communication with unique opportunities to test extant theory and verify experimental results. We call this new area of interdisciplinary collaboration "computational interpersonal communication" (CIC).

  5. Predicting user mental states in spoken dialogue systems

    Science.gov (United States)

    Callejas, Zoraida; Griol, David; López-Cózar, Ramón

    2011-12-01

    In this paper we propose a method for predicting the user mental state for the development of more efficient and usable spoken dialogue systems. This prediction, carried out for each user turn in the dialogue, makes it possible to adapt the system dynamically to the user needs. The mental state is built on the basis of the emotional state of the user and their intention, and is recognized by means of a module conceived as an intermediate phase between natural language understanding and the dialogue management in the architecture of the systems. We have implemented the method in the UAH system, for which the evaluation results with both simulated and real users show that taking into account the user's mental state improves system performance as well as its perceived quality.
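
The module described above sits between natural language understanding and dialogue management, fusing the recognized emotional state with the predicted intention. The sketch below illustrates that architecture only in outline; the labels, rules, and `NLUResult` structure are hypothetical stand-ins, not the UAH system's actual components.

```python
from dataclasses import dataclass

@dataclass
class NLUResult:
    intention: str      # e.g. "request_info", "repeat", "complain"
    emotion: str        # e.g. "neutral", "angry", "confused"
    confidence: float   # recognizer confidence in [0, 1]

def predict_mental_state(nlu: NLUResult) -> str:
    """Map an (emotion, intention) pair to a coarse mental-state label."""
    if nlu.emotion == "angry" or nlu.intention == "complain":
        return "frustrated"
    if nlu.emotion == "confused" or nlu.confidence < 0.5:
        return "lost"
    return "on_track"

def adapt_dialogue(state: str) -> str:
    """Choose a dialogue-management strategy given the user's mental state."""
    return {
        "frustrated": "apologize_and_offer_operator",
        "lost": "reprompt_with_simpler_wording",
        "on_track": "continue_normal_flow",
    }[state]

print(adapt_dialogue(predict_mental_state(NLUResult("repeat", "confused", 0.8))))
```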

  7. Converting Retrieved Spoken Documents into Text Using an Auto Associative Neural Network

    Directory of Open Access Journals (Sweden)

    J. SANGEETHA

    2016-06-01

    This paper frames a novel methodology for spoken document information retrieval over spontaneous speech corpora and for converting the retrieved documents into the corresponding language text. The proposed work involves three major areas: spoken keyword detection, spoken document retrieval, and automatic speech recognition. Keyword spotting exploits the distribution-capturing capability of the Auto Associative Neural Network (AANN) for spoken keyword detection. It involves sliding a frame-based keyword template along the audio documents and using a confidence score, obtained from the normalized squared error of the AANN, to search for a match. This work presents a new spoken keyword spotting algorithm. Based on the matches, the spoken documents are retrieved and clustered together. In the speech recognition step, the retrieved documents are converted into the corresponding language text using the AANN classifier. The experiments are conducted on a Dravidian language database, and the results suggest that the proposed method is promising for retrieving the documents relevant to a spoken query and transforming them into the corresponding language text.
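
The sliding-template search described above can be outlined compactly. The sketch below assumes an already-trained autoassociative network, represented by a placeholder `reconstruct` function, and derives a confidence score from the normalized squared reconstruction error, as the abstract describes; all names and the threshold are illustrative.

```python
import numpy as np

def reconstruct(frame: np.ndarray) -> np.ndarray:
    # Placeholder for the trained AANN's forward pass (input -> bottleneck
    # -> input); here we just add small noise to simulate reconstruction.
    return frame + 0.05 * np.random.randn(*frame.shape)

def confidence(window: np.ndarray) -> float:
    """Confidence from the normalized squared reconstruction error:
    frames the AANN models well reconstruct with low error."""
    err = sum(np.sum((f - reconstruct(f)) ** 2) / np.sum(f ** 2)
              for f in window) / len(window)
    return float(np.exp(-err))

def spot_keyword(doc_feats: np.ndarray, kw_len: int, threshold: float = 0.9):
    """Slide a keyword-sized window over the document's feature frames and
    return the start indices whose confidence exceeds the threshold."""
    return [t for t in range(len(doc_feats) - kw_len + 1)
            if confidence(doc_feats[t:t + kw_len]) >= threshold]

# Example with random 13-dimensional MFCC-like frames.
doc = np.random.randn(200, 13)
print(spot_keyword(doc, kw_len=30)[:5])
```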

  8. Interference of the spoken language on children's writing: cancellation processes of the dental occlusive /d/ and final vibrant /r/

    Directory of Open Access Journals (Sweden)

    Socorro Cláudia Tavares de Sousa

    2009-01-01

    The present study aims to investigate the influence of the spoken language on children's writing in relation to the phenomena of cancellation of the dental /d/ and the final vibrant /r/. We elaborated and applied a research instrument with children from primary schools in Fortaleza. We used the software SPSS to analyze the data. The results showed that male sex and words of three or more syllables are factors that partially influence the realization of the dependent variable /no/, and that verbs and level of education are conditioning elements for the cancellation of the final vibrant /r/.

  9. The Effect of Lexical Frequency on Spoken Word Recognition in Young and Older Listeners

    Science.gov (United States)

    Revill, Kathleen Pirog; Spieler, Daniel H.

    2011-01-01

    When identifying spoken words, older listeners may have difficulty resolving lexical competition or may place a greater weight on factors like lexical frequency. To obtain information about age differences in the time course of spoken word recognition, young and older adults’ eye movements were monitored as they followed spoken instructions to click on objects displayed on a computer screen. Older listeners were more likely than younger listeners to fixate high-frequency displayed phonological competitors. However, degradation of auditory quality in younger listeners does not reproduce this result. These data are most consistent with an increased role for lexical frequency with age. PMID:21707175

  10. Word frequencies in written and spoken English based on the British National Corpus

    CERN Document Server

    Leech, Geoffrey; Wilson, Andrew (All Of Lancaster University)

    2014-01-01

    Word Frequencies in Written and Spoken English is a landmark volume in the development of vocabulary frequency studies. Whereas previous books have in general given frequency information about the written language only, this book provides information on both speech and writing. It not only gives information about the language as a whole, but also about the differences between spoken and written English, and between different spoken and written varieties of the language. The frequencies are derived from a wide-ranging and up-to-date corpus of English: the British National Corpus.

  11. Development of a spoken language identification system for South African languages

    CSIR Research Space (South Africa)

    Peché, M

    2009-12-01

    This article introduces the first Spoken Language Identification system developed to distinguish among all eleven of South Africa’s official languages. The PPR-LM (Parallel Phoneme Recognition followed by Language Modeling) architecture...
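
For readers unfamiliar with the PPR-LM architecture named above, the following sketch shows its scoring logic in miniature: each candidate language contributes a phoneme recognizer and a phonotactic language model, and the utterance is assigned to the language whose pipeline yields the highest log-likelihood. The bigram training data and the `recognize_phones` stub are invented placeholders, not components of the CSIR system.

```python
import math
from collections import defaultdict

def train_bigram_lm(phone_strings, alpha=1.0):
    """Train an add-alpha smoothed bigram LM over phone sequences and
    return a function scoring a phone string by log-likelihood."""
    counts, context, vocab = defaultdict(float), defaultdict(float), set()
    for s in phone_strings:
        phones = ["<s>"] + s.split() + ["</s>"]
        vocab.update(phones)
        for a, b in zip(phones, phones[1:]):
            counts[(a, b)] += 1
            context[a] += 1
    V = len(vocab)
    def logprob(s):
        phones = ["<s>"] + s.split() + ["</s>"]
        return sum(math.log((counts[(a, b)] + alpha) / (context[a] + alpha * V))
                   for a, b in zip(phones, phones[1:]))
    return logprob

# Toy phonotactic models for two languages (invented training strings).
lms = {
    "zul": train_bigram_lm(["s a w u b o n a", "u n j a n i"]),
    "afr": train_bigram_lm(["g o e i e m o r e", "h o e g a a n d i t"]),
}

def recognize_phones(audio, language):
    """Placeholder for the language-specific phoneme recognizer."""
    return "s a w u b o n a"  # pretend decoding result

def identify(audio):
    return max(lms, key=lambda lang: lms[lang](recognize_phones(audio, lang)))

print(identify(None))  # -> "zul" for this toy example
```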

  12. Code-switched English pronunciation modeling for Swahili spoken term detection

    CSIR Research Space (South Africa)

    Kleynhans, N

    2016-05-01

    Full Text Available We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching, where speakers switch language in a conversation, occurs frequently in multilingual environments, and typically...

  13. Initial fieldwork for LWAZI: a telephone-based spoken dialog system for rural South Africa

    CSIR Research Space (South Africa)

    Gumede, T

    2009-03-01

    government information and services. Our interviews, focus group discussions and surveys revealed that Lwazi, a telephone-based spoken dialog system, could greatly support current South African government efforts to effectively connect citizens to available...

  14. Children reading spoken words: interactions between vocabulary and orthographic expectancy.

    Science.gov (United States)

    Wegener, Signy; Wang, Hua-Chen; de Lissa, Peter; Robidoux, Serje; Nation, Kate; Castles, Anne

    2017-07-12

    There is an established association between children's oral vocabulary and their word reading but its basis is not well understood. Here, we present evidence from eye movements for a novel mechanism underlying this association. Two groups of 18 Grade 4 children received oral vocabulary training on one set of 16 novel words (e.g., 'nesh', 'coib'), but no training on another set. The words were assigned spellings that were either predictable from phonology (e.g., nesh) or unpredictable (e.g., koyb). These were subsequently shown in print, embedded in sentences. Reading times were shorter for orally familiar than unfamiliar items, and for words with predictable than unpredictable spellings but, importantly, there was an interaction between the two: children demonstrated a larger benefit of oral familiarity for predictable than for unpredictable items. These findings indicate that children form initial orthographic expectations about spoken words before first seeing them in print. A video abstract of this article can be viewed at: https://youtu.be/jvpJwpKMM3E. © 2017 John Wiley & Sons Ltd.

  15. Give and take: syntactic priming during spoken language comprehension.

    Science.gov (United States)

    Thothathiri, Malathi; Snedeker, Jesse

    2008-07-01

    Syntactic priming during language production is pervasive and well-studied. Hearing, reading, speaking or writing a sentence with a given structure increases the probability of subsequently producing the same structure, regardless of whether the prime and target share lexical content. In contrast, syntactic priming during comprehension has proven more elusive, fueling claims that comprehension is less dependent on general syntactic representations and more dependent on lexical knowledge. In three experiments we explored syntactic priming during spoken language comprehension. Participants acted out double-object (DO) or prepositional-object (PO) dative sentences while their eye movements were recorded. Prime sentences used different verbs and nouns than the target sentences. In target sentences, the onset of the direct-object noun was consistent with both an animate recipient and an inanimate theme, creating a temporary ambiguity in the argument structure of the verb (DO e.g., Show the horse the book; PO e.g., Show the horn to the dog). We measured the difference in looks to the potential recipient and the potential theme during the ambiguous interval. In all experiments, participants who heard DO primes showed a greater preference for the recipient over the theme than those who heard PO primes, demonstrating across-verb priming during online language comprehension. These results accord with priming found in production studies, indicating a role for abstract structural information during comprehension as well as production.

  16. Foreign Language Tutoring in Oral Conversations Using Spoken Dialog Systems

    Science.gov (United States)

    Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae

    Although there have been enormous investments into English education all around the world, not many differences have been made to change the English instruction style. Considering the shortcomings for the current teaching-learning methodology, we have been investigating advanced computer-assisted language learning (CALL) systems. This paper aims at summarizing a set of POSTECH approaches including theories, technologies, systems, and field studies and providing relevant pointers. On top of the state-of-the-art technologies of spoken dialog system, a variety of adaptations have been applied to overcome some problems caused by numerous errors and variations naturally produced by non-native speakers. Furthermore, a number of methods have been developed for generating educational feedback that help learners develop to be proficient. Integrating these efforts resulted in intelligent educational robots — Mero and Engkey — and virtual 3D language learning games, Pomy. To verify the effects of our approaches on students' communicative abilities, we have conducted a field study at an elementary school in Korea. The results showed that our CALL approaches can be enjoyable and fruitful activities for students. Although the results of this study bring us a step closer to understanding computer-based education, more studies are needed to consolidate the findings.

  17. Spoken Language Production in Young Adults: Examining Syntactic Complexity.

    Science.gov (United States)

    Nippold, Marilyn A; Frantz-Kaspar, Megan W; Vigeland, Laura M

    2017-05-24

    In this study, we examined syntactic complexity in the spoken language samples of young adults. Its purpose was to contribute to the expanding knowledge base in later language development and to begin building a normative database of language samples that potentially could be used to evaluate young adults with known or suspected language impairment. Forty adults (mean age = 22 years, 10 months) with typical language development participated in an interview that consisted of 3 speaking tasks: a general conversation about common, everyday topics; a narrative retelling task that involved fables; and a question-and-answer, critical-thinking task about the fables. Each speaker's interview was audio-recorded, transcribed, broken into communication units, coded for main and subordinate clauses, entered into Systematic Analysis of Language Transcripts (Miller, Iglesias, & Nockerts, 2004), and analyzed for mean length of communication unit and clausal density. Both the narrative and critical-thinking tasks elicited significantly greater syntactic complexity than the conversational task. It was also found that syntactic complexity was significantly greater during the narrative task than the critical-thinking task. Syntactic complexity was best revealed by a narrative task that involved fables. The study offers benchmarks for language development during early adulthood.
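
The two summary measures used above, mean length of communication unit (MLCU) and clausal density, reduce to simple arithmetic once a transcript has been segmented and coded. A worked example on invented counts:

```python
# Hypothetical coded transcript: each communication unit (C-unit) carries a
# word count plus the number of main and subordinate clauses coded by hand.
c_units = [
    # (words in unit, main clauses, subordinate clauses)
    (7, 1, 0),
    (12, 1, 1),
    (15, 1, 2),
    (9, 1, 0),
]

total_words = sum(w for w, _, _ in c_units)
total_clauses = sum(m + s for _, m, s in c_units)

mlcu = total_words / len(c_units)               # mean length of C-unit
clausal_density = total_clauses / len(c_units)  # clauses per C-unit

print(f"MLCU = {mlcu:.2f} words; clausal density = {clausal_density:.2f}")
# -> MLCU = 10.75 words; clausal density = 1.75
```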

  18. Spoken language interface for a network management system

    Science.gov (United States)

    Remington, Robert J.

    1999-11-01

    Leaders within the Information Technology (IT) industry are expressing a general concern that the products used to deliver and manage today's communications network capabilities require far too much effort to learn and to use, even by highly skilled and increasingly scarce support personnel. The usability of network management systems must be significantly improved if they are to deliver the performance and quality of service needed to meet the ever-increasing demand for new Internet-based information and services. Fortunately, recent advances in spoken language (SL) interface technologies show promise for significantly improving the usability of most interactive IT applications, including network management systems. The emerging SL interfaces will allow users to communicate with IT applications through words and phrases -- our most familiar form of everyday communication. Recent advancements in SL technologies have resulted in new commercial products that are being operationally deployed at an increasing rate. The present paper describes a project aimed at the application of new SL interface technology for improving the usability of an advanced network management system. It describes several SL interface features that are being incorporated within an existing system with a modern graphical user interface (GUI), including 3-D visualization of network topology and network performance data. The rationale for using these SL interface features to augment existing user interfaces is presented, along with selected task scenarios to provide insight into how a SL interface will simplify the operator's task and enhance overall system usability.

  19. Native Language Spoken as a Risk Marker for Tooth Decay.

    Science.gov (United States)

    Carson, J; Walker, L A; Sanders, B J; Jones, J E; Weddell, J A; Tomlin, A M

    2015-01-01

    The purpose of this study was to assess dmft, the number of decayed, missing (due to caries), and/or filled primary teeth, of English-speaking and non-English-speaking patients of a hospital-based pediatric dental clinic under the age of 72 months, to determine if native language is a risk marker for tooth decay. Records from an outpatient dental clinic which met the inclusion criteria were reviewed. Patient demographics and dmft score were recorded, and the patients were separated into three groups by the native language spoken by their parents: English, Spanish, and all other languages. A total of 419 charts were assessed: 253 English-speaking, 126 Spanish-speaking, and 40 other native languages. After accounting for patient characteristics, dmft was significantly higher for the other-language group than for the English-speaking and Spanish-speaking groups (p < 0.05). Those patients under 72 months of age whose parents' native language is not English or Spanish have the highest risk for increased dmft when compared to English- and Spanish-speaking patients. Providers should consider taking additional time to educate patients and their parents, in their native language, on the importance of routine dental care and oral hygiene.

  20. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection (Pub Version, Open Access)

    Science.gov (United States)

    2016-05-03

    We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching, where speakers switch language in a conversation, occurs frequently in multilingual environments. Our research focuses on pronunciation modeling of English (embedded language) words within Swahili speech. (Proceedings of SLTU 2016, Workshop on Spoken Language Technologies for Under-resourced Languages, 9-12 May 2016, Yogyakarta, Indonesia; Procedia Computer Science 81 (2016) 128-135.)

  1. Spoken English and the question of grammar: the role of the functional model

    OpenAIRE

    Coffin, Caroline

    2003-01-01

    Given the nature of spoken text, the first requirement of an appropriate grammar is its ability to account for stretches of language (including recurring types of text or genres), in addition to clause level patterns. Second, the grammatical model needs to be part of a wider theory of language that recognises the functional nature and educational purposes of spoken text. The model also needs to be designed in a sufficiently comprehensive way so as to account for grammatical forms in speech...

  2. Why Dose Frequency Affects Spoken Vocabulary in Preschoolers With Down Syndrome.

    Science.gov (United States)

    Yoder, Paul J; Woynaroski, Tiffany; Fey, Marc E; Warren, Steven F; Gardner, Elizabeth

    2015-07-01

    In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only the participants with DS, we found that more therapy led to larger spoken vocabularies at posttreatment because it increased children's canonical syllabic communication and receptive vocabulary growth early in the treatment phase.

  3. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach.

    Science.gov (United States)

    Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and a dataset. Typically, data must be processed to extract useful features to perform LID. Extracting features for LID is, based on the literature, a mature process in which the standard features have already been developed, using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC), the Gaussian Mixture Model (GMM), and ending with the i-vector based framework. However, the process of learning based on the extracted features remains to be improved (i.e. optimised) to capture all the embedded knowledge in the extracted features. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single-hidden-layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of the weights of the input-to-hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches for the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated based on LID with datasets created from eight different languages. The results showed the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) over the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared to an accuracy of only 95.00% for SA-ELM LID.
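
As background to the abstract above, the basic ELM training rule can be stated in a few lines: input weights and biases are drawn at random, and only the output weights are solved in closed form through the Moore-Penrose pseudo-inverse. The sketch below shows that baseline rule on toy data; it does not attempt the SA-ELM/ESA-ELM optimisation of the random stage, and all data are simulated stand-ins for LID feature vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=64):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy problem: two "languages" as Gaussian blobs in a 20-d feature space.
X = np.vstack([rng.normal(0, 1, (100, 20)), rng.normal(2, 1, (100, 20))])
Y = np.zeros((200, 2)); Y[:100, 0] = 1; Y[100:, 1] = 1

W, b, beta = elm_train(X, Y)
acc = (elm_predict(X, W, b, beta) == np.r_[np.zeros(100), np.ones(100)]).mean()
print(f"training accuracy: {acc:.2%}")
```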

  4. How vocabulary size in two languages relates to efficiency in spoken word recognition by young Spanish-English bilinguals.

    Science.gov (United States)

    Marchman, Virginia A; Fernald, Anne; Hurtado, Nereyda

    2010-09-01

    Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n = 26; age 2;6). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children's facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children's ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language.
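
The key statistic above is a within-language association that survives controlling for other variables, i.e., a partial correlation. One standard way to compute it is to correlate the residuals of both measures after regressing out the controls; the sketch below does this on simulated data (all variable names and effect sizes are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 26
# Controls: e.g., other-language vocabulary and general processing speed.
controls = rng.normal(size=(n, 2))
vocab_es = controls @ [0.3, 0.1] + rng.normal(size=n)
eff_es = 0.6 * vocab_es + controls @ [0.2, 0.4] + rng.normal(size=n)

def residualize(y, X):
    """Residuals of y after ordinary-least-squares regression on X."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return y - X1 @ beta

r = np.corrcoef(residualize(vocab_es, controls),
                residualize(eff_es, controls))[0, 1]
print(f"partial r (vocabulary ~ efficiency | controls) = {r:.2f}")
```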

  5. Inferior Frontal Cortex Contributions to the Recognition of Spoken Words and Their Constituent Speech Sounds.

    Science.gov (United States)

    Rogers, Jack C; Davis, Matthew H

    2017-05-01

    Speech perception and comprehension are often challenged by the need to recognize speech sounds that are degraded or ambiguous. Here, we explore the cognitive and neural mechanisms involved in resolving ambiguity in the identity of speech sounds using syllables that contain ambiguous phonetic segments (e.g., intermediate sounds between /b/ and /g/ as in "blade" and "glade"). We used an audio-morphing procedure to create a large set of natural sounding minimal pairs that contain phonetically ambiguous onset or offset consonants (differing in place, manner, or voicing). These ambiguous segments occurred in different lexical contexts (i.e., in words or pseudowords, such as blade-glade or blem-glem) and in different phonological environments (i.e., with neighboring syllables that differed in lexical status, such as blouse-glouse). These stimuli allowed us to explore the impact of phonetic ambiguity on the speed and accuracy of lexical decision responses (Experiment 1), semantic categorization responses (Experiment 2), and the magnitude of BOLD fMRI responses during attentive comprehension (Experiment 3). For both behavioral and neural measures, observed effects of phonetic ambiguity were influenced by lexical context leading to slower responses and increased activity in the left inferior frontal gyrus for high-ambiguity syllables that distinguish pairs of words, but not for equivalent pseudowords. These findings suggest lexical involvement in the resolution of phonetic ambiguity. Implications for speech perception and the role of inferior frontal regions are discussed.

  6. SyllabO+: A new tool to study sublexical phenomena in spoken Quebec French.

    Science.gov (United States)

    Bédard, Pascale; Audet, Anne-Marie; Drouin, Patrick; Roy, Johanna-Pascale; Rivard, Julie; Tremblay, Pascale

    2017-10-01

    Sublexical phonotactic regularities in language have a major impact on language development, as well as on speech processing and production throughout the entire lifespan. To understand the impact of phonotactic regularities on speech and language functions at the behavioral and neural levels, it is essential to have access to oral language corpora to study these complex phenomena in different languages. Yet, probably because of their complexity, oral language corpora remain less common than written language corpora. This article presents the first corpus and database of spoken Quebec French syllables and phones: SyllabO+. This corpus contains phonetic transcriptions of over 300,000 syllables (over 690,000 phones) extracted from recordings of 184 healthy adult native Quebec French speakers, ranging in age from 20 to 97 years. To ensure the representativeness of the corpus, these recordings were made in both formal and familiar communication contexts. Phonotactic distributional statistics (e.g., syllable and co-occurrence frequencies, percentages, percentile ranks, transition probabilities, and pointwise mutual information) were computed from the corpus. An open-access online application to search the database was developed, and is available at www.speechneurolab.ca/syllabo. In this article, we present a brief overview of the corpus, as well as the syllable and phone databases, and we discuss their practical applications in various fields of research, including cognitive neuroscience, psycholinguistics, neurolinguistics, experimental psychology, phonetics, and phonology. Nonacademic practical applications are also discussed, including uses in speech-language pathology.
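
Two of the distributional statistics listed above can be illustrated directly. The sketch below computes a forward transition probability P(s2 | s1) and the pointwise mutual information, PMI(s1, s2) = log2(P(s1, s2) / (P(s1) P(s2))), over an invented syllable stream; SyllabO+ derives the real values from its transcribed corpus.

```python
import math
from collections import Counter

# Invented syllable stream standing in for a transcribed corpus.
syllables = "la pa la to la pa ri to la pa".split()
pairs = list(zip(syllables, syllables[1:]))

uni, bi = Counter(syllables), Counter(pairs)
n_uni, n_bi = len(syllables), len(pairs)

def transition_prob(s1, s2):
    """Forward transition probability P(s2 | s1)."""
    return bi[(s1, s2)] / uni[s1]

def pmi(s1, s2):
    """Pointwise mutual information in bits."""
    p_joint = bi[(s1, s2)] / n_bi
    return math.log2(p_joint / ((uni[s1] / n_uni) * (uni[s2] / n_uni)))

print(f"P(pa | la) = {transition_prob('la', 'pa'):.2f}")   # -> 0.75
print(f"PMI(la, pa) = {pmi('la', 'pa'):.2f} bits")
```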

  7. Textual, Genre and Social Features of Spoken Grammar: A Corpus-Based Approach

    Directory of Open Access Journals (Sweden)

    Carmen Pérez-Llantada

    2009-02-01

    This paper describes a corpus-based approach to teaching and learning spoken grammar for English for Academic Purposes with reference to Bhatia’s (2002) multi-perspective model for discourse analysis: a textual perspective, a genre perspective and a social perspective. From a textual perspective, corpus-informed instruction helps students identify grammar items through statistical frequencies, collocational patterns, context-sensitive meanings and discoursal uses of words. From a genre perspective, corpus observation provides students with exposure to recurrent lexico-grammatical patterns across different academic text types (genres). From a social perspective, corpus models can be used to raise learners’ awareness of how speakers’ different discourse roles, discourse privileges and power statuses are enacted in their grammar choices. The paper describes corpus-based instructional procedures, gives samples of learners’ linguistic output, and provides comments on the students’ response to this method of instruction. Data resulting from the assessment process and student production suggest that corpus-informed instruction grounded in Bhatia’s multi-perspective model can constitute a pedagogical approach in order to (i) obtain positive student responses from input and authentic samples of grammar use, (ii) help students identify and understand the textual, genre and social aspects of grammar in real contexts of use, and therefore (iii) help develop students’ ability to use grammar accurately and appropriately.

  8. Brain-based translation: fMRI decoding of spoken words in bilinguals reveals language-independent semantic representations in anterior temporal lobe.

    Science.gov (United States)

    Correia, João; Formisano, Elia; Valente, Giancarlo; Hausfeld, Lars; Jansma, Bernadette; Bonte, Milene

    2014-01-01

    Bilinguals derive the same semantic concepts from equivalent, but acoustically different, words in their first and second languages. The neural mechanisms underlying the representation of language-independent concepts in the brain remain unclear. Here, we measured fMRI in human bilingual listeners and reveal that response patterns to individual spoken nouns in one language (e.g., "horse" in English) accurately predict the response patterns to equivalent nouns in the other language (e.g., "paard" in Dutch). Stimuli were four monosyllabic words in both languages, all from the category of "animal" nouns. For each word, pronunciations from three different speakers were included, allowing the investigation of speaker-independent representations of individual words. We used multivariate classifiers and a searchlight method to map the informative fMRI response patterns that enable decoding spoken words within languages (within-language discrimination) and across languages (across-language generalization). Response patterns discriminative of spoken words within language were distributed in multiple cortical regions, reflecting the complexity of the neural networks recruited during speech and language processing. Response patterns discriminative of spoken words across language were limited to localized clusters in the left anterior temporal lobe, the left angular gyrus and the posterior bank of the left postcentral gyrus, the right posterior superior temporal sulcus/superior temporal gyrus, the right medial anterior temporal lobe, the right anterior insula, and bilateral occipital cortex. These results corroborate the existence of "hub" regions organizing semantic-conceptual knowledge in abstract form at the fine-grained level of within semantic category discriminations.
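
The across-language generalization logic reported above can be summarized as: train a word-identity classifier on response patterns from one language and test it on patterns evoked by translation equivalents in the other. The sketch below reproduces only that logic on simulated patterns with a shared word-specific component; it is not the authors' searchlight pipeline, and scikit-learn's LinearSVC stands in for their multivariate classifier.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_words, n_trials, n_vox = 4, 30, 50
word_patterns = rng.normal(size=(n_words, n_vox))  # shared "semantic" code

def simulate(language_noise):
    """Simulate trial-wise voxel patterns for each word in one language."""
    X, y = [], []
    for w in range(n_words):
        X.append(word_patterns[w]
                 + rng.normal(0, language_noise, (n_trials, n_vox)))
        y += [w] * n_trials
    return np.vstack(X), np.array(y)

X_en, y_en = simulate(1.0)   # e.g., English trials
X_nl, y_nl = simulate(1.0)   # e.g., Dutch trials of the same concepts

clf = LinearSVC(dual=False).fit(X_en, y_en)   # train within one language
print(f"across-language accuracy: {clf.score(X_nl, y_nl):.2%} (chance = 25%)")
```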

  9. The Implementation of Communicative Language Teaching (CLT) to Teach Spoken Recounts in Senior High School

    Directory of Open Access Journals (Sweden)

    Eri Rusnawati

    2016-10-01

    The aim of this research was to describe the implementation of the Communicative Language Teaching (CLT) method for teaching spoken recounts. This research examined qualitative data and describes phenomena occurring in the classroom. The data of this study were the behaviors and responses of the students learning spoken recounts through the CLT method. The subjects of this research were the 34 students of class X of SMA Negeri 1 Kuaro. Observations and interviews were conducted in order to collect data on teaching spoken recounts through three activities (presentation, role-play, and carrying out procedures). Among the findings was that CLT improved the students' speaking ability in learning recounts. Based on the improvement charts, it is concluded that the students' grammar, vocabulary, pronunciation, fluency, and performance improved; that is, the students' spoken recount performance increased. Had the presentation been placed at the end of the sequence of activities, the students' spoken recount performance would have been even better. The conclusion is that the implementation of the CLT method and its three practices contributed to the improvement of the students' speaking ability in learning recounts, and indeed the CLT method led them to have the courage to construct meaningful communication with confidence. Keywords: Communicative Language Teaching (CLT), recount, speaking, student responses

  10. How Spoken Language Comprehension is Achieved by Older Listeners in Difficult Listening Situations.

    Science.gov (United States)

    Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth

    2016-01-01

    Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to

  11. A Descriptive Study of Registers Found in Spoken and Written Communication (A Semantic Analysis

    Directory of Open Access Journals (Sweden)

    Nurul Hidayah

    2016-07-01

    This research is a descriptive study of registers found in spoken and written communication. The type of this research is descriptive qualitative research; it describes the data of the study, namely registers in spoken and written communication found in a book entitled "Communicating! Theory and Practice" and from the internet. The data can be in the form of words, phrases and abbreviations. As the method of data collection, the writer uses the library method as her instrument, relating it to the study of register in spoken and written communication. The technique of analyzing the data is the descriptive method. The types of register are separated into formal register and informal register, and the meaning of each register is identified.

  12. The Language, Tone and Prosody of Emotions: Neural Substrates and Dynamics of Spoken-Word Emotion Perception.

    Science.gov (United States)

    Liebenthal, Einat; Silbersweig, David A; Stern, Emily

    2016-01-01

    Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala (a subcortical center for emotion perception) are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody, which evolves on longer time scales and is conveyed by fine-grained spectral cues, appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role in the prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states.

  13. Improving Spoken Language Outcomes for Children With Hearing Loss: Data-driven Instruction.

    Science.gov (United States)

    Douglas, Michael

    2016-02-01

    To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of predevice fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved statistically significant higher scores for total language on the CELF-P and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. The preliminary results of this study support further investigation of DDI to determine whether this method can consistently

  14. Four Functionally Distinct Regions in the Left Supramarginal Gyrus Support Word Processing

    OpenAIRE

    Oberhuber, M.; Hope, T. M. H.; Seghier, M. L.; Parker Jones, O.; Prejawa, S.; Green, D. W.; Price, C. J

    2016-01-01

    We used fMRI in 85 healthy participants to investigate whether different parts of the left supramarginal gyrus (SMG) are involved in processing phonological inputs and outputs. The experiment involved 2 tasks (speech production (SP) and one-back (OB) matching) on 8 different types of stimuli that systematically varied the demands on sensory processing (visual vs. auditory), sublexical phonological input (words and pseudowords vs. nonverbal stimuli), and semantic content (words and objects vs....

  15. Orthographic consistency affects spoken word recognition at different grain-sizes

    DEFF Research Database (Denmark)

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies also found that listeners recognize words whose rhyme vowel has a typical spelling (e.g., lobe) faster than words with consistent rhymes where the vowel has a less typical spelling (e.g., loaf). The present study extends the previous literature by showing that auditory word recognition is affected by orthographic regularities at different grain sizes, just like written word recognition and spelling. The theoretical and methodological implications for future research in spoken word recognition are discussed.

  16. Long-term memory traces for familiar spoken words in tonal languages as revealed by the Mismatch Negativity

    Directory of Open Access Journals (Sweden)

    Naiphinich Kotchabhakdi

    2004-11-01

    Mismatch negativity (MMN), a primary response to an acoustic change and an index of sensory memory, was used to investigate the processing of the discrimination between familiar and unfamiliar consonant-vowel (CV) speech contrasts. The MMN was elicited by rare familiar words presented among repetitive unfamiliar words. Phonetic and phonological contrasts were identical in all conditions. The MMN elicited by the familiar word deviant was larger than that elicited by the unfamiliar word deviant. The presence of a syllable contrast did not significantly alter the word-elicited MMN in amplitude or scalp voltage field distribution. Thus, our results indicate the existence of a word-related MMN enhancement largely independent of the word status of the standard stimulus. This enhancement may reflect the presence of a long-term memory trace for familiar spoken words in tonal languages.

  17. The power of the spoken word in life, psychiatry, and psychoanalysis--a contribution to interpersonal psychoanalysis.

    Science.gov (United States)

    Lothane, Zvi

    2007-09-01

    Starting with an 1890 essay by Freud, the author goes in search of an interpersonal psychology native to Freud's psychoanalytic method, to psychoanalysis, and to the interpersonal method in psychiatry. This derives from the basic interpersonal nature of the human situation in the lives of individuals and social groups. Psychiatry, the healing of the soul, and psychotherapy, therapy of the soul, are examined from the perspective of the communication model, based on the essential interpersonal function of language and the spoken word: persons addressing speech to themselves and to others in relationships between family members, others in society, and the professionals who serve them. The communicational model is also applied in examining psychiatric disorders and psychiatric diagnoses, as well as psychodynamic formulas, which leads to a reformulation of psychoanalytic therapy as a process. A plea is entered to define psychoanalysis as an interpersonal discipline, in analogy to Sullivan's interpersonal psychiatry.

  18. Intervention Effects on Spoken-Language Outcomes for Children with Autism: A Systematic Review and Meta-Analysis

    Science.gov (United States)

    Hampton, L. H.; Kaiser, A. P.

    2016-01-01

    Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…

  19. Semantic Fluency in Deaf Children Who Use Spoken and Signed Language in Comparison with Hearing Peers

    Science.gov (United States)

    Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.

    2018-01-01

    Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…

  20. Corrective Feedback, Spoken Accuracy and Fluency, and the Trade-Off Hypothesis

    Science.gov (United States)

    Chehr Azad, Mohammad Hassan; Farrokhi, Farahman; Zohrabi, Mohammad

    2018-01-01

    The current study was an attempt to investigate the effects of different corrective feedback (CF) conditions on Iranian EFL learners' spoken accuracy and fluency (AF) and the trade-off between them. Consequently, four pre-intermediate intact classes were randomly selected as the control, delayed explicit metalinguistic CF, extensive recast, and…

  1. Investigating L2 Spoken English through the Role Play Learner Corpus

    Science.gov (United States)

    Nava, Andrea; Pedrazzini, Luciana

    2011-01-01

    We describe an exploratory study carried out within the University of Milan, Department of English, the aim of which was to analyse features of the spoken English of first-year Modern Languages undergraduates. We compiled a learner corpus, the "Role Play" corpus, which consisted of 69 role-play interactions in English carried out by…

  2. Orthographic Consistency Affects Spoken Word Recognition at Different Grain-Sizes

    Science.gov (United States)

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating…

  3. The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions

    Science.gov (United States)

    Brouwer, Susanne; Bradlow, Ann R.

    2016-01-01

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…

  4. Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech

    Science.gov (United States)

    Yip, Michael C.

    2016-01-01

    Two word-spotting experiments were conducted to examine the question of whether native Cantonese listeners are constrained by phonotactics information in spoken word recognition of Chinese words in speech. Because no legal consonant clusters occurred within an individual Chinese word, this kind of categorical phonotactics information of Chinese…

  5. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    Science.gov (United States)

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  6. Expected Test Scores for Preschoolers with a Cochlear Implant Who Use Spoken Language

    Science.gov (United States)

    Nicholas, Johanna G.; Geers, Ann E.

    2008-01-01

    Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed…

  7. Difference between Written and Spoken Czech: The Case of Verbal Nouns Denoting an Action

    Czech Academy of Sciences Publication Activity Database

    Kolářová, V.; Kolář, Jan; Mikulová, M.

    2017-01-01

    Roč. 107, č. 1 (2017), s. 19-38 ISSN 0032-6585 Institutional support: RVO:67985840 Keywords : written Czech * spoken Czech * verbal nouns Subject RIV: AI - Linguistics OBOR OECD: Pure mathematics https://www.degruyter.com/view/j/pralin.2017.107.issue-1/pralin-2017-0002/pralin-2017-0002.xml

  8. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    Directory of Open Access Journals (Sweden)

    Rachel Schiff

    2018-04-01

    Full Text Available This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through the junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings make important theoretical and practical contributions to Arabic reading theory in general, and they extend previous work on the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.

  9. Cooperativity in Human-Machine and Human-Human Spoken Dialogue.

    Science.gov (United States)

    Bernsen, Niels Ole; And Others

    1996-01-01

    Presents principles of dialog cooperativity derived from a corpus of task-oriented spoken human-machine dialog. Analyzes the "corpus" to produce a set of dialog design principles intended to prevent users from having to initiate clarification and repair metacommunication that the system would not understand. Proposes a "more…

  10. Between Syntax and Pragmatics: The Causal Conjunction Protože in Spoken and Written Czech

    Czech Academy of Sciences Publication Activity Database

    Čermáková, Anna; Komrsková, Zuzana; Kopřivová, Marie; Poukarová, Petra

    -, 25.04.2017 (2017), s. 393-414 ISSN 2509-9507 R&D Projects: GA ČR GA15-01116S Institutional support: RVO:68378092 Keywords : Causality * Discourse marker * Spoken language * Czech Subject RIV: AI - Linguistics OBOR OECD: Linguistics https://link.springer.com/content/pdf/10.1007%2Fs41701-017-0014-y.pdf

  11. Teaching Spoken Discourse Markers Explicitly: A Comparison of III and PPP

    Science.gov (United States)

    Jones, Christian; Carter, Ronald

    2014-01-01

    This article reports on mixed methods classroom research carried out at a British university. The study investigates the effectiveness of two different explicit teaching frameworks, Illustration--Interaction--Induction (III) and Present--Practice--Produce (PPP) used to teach the same spoken discourse markers (DMs) to two different groups of…

  12. Beta oscillations reflect memory and motor aspects of spoken word production

    NARCIS (Netherlands)

    Piai, V.; Roelofs, A.P.A.; Rommers, J.; Maris, E.G.G.

    2015-01-01

    Two major components form the basis of spoken word production: the access of conceptual and lexical/phonological information in long-term memory, and motor preparation and execution of an articulatory program. Whereas the motor aspects of word production have been well characterized as reflected in

  13. Why Dose Frequency Affects Spoken Vocabulary in Preschoolers with Down Syndrome

    Science.gov (United States)

    Yoder, Paul J.; Woynaroski, Tiffany; Fey, Marc E.; Warren, Steven F.; Gardner, Elizabeth

    2015-01-01

    In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only…

  14. Evaluating spoken dialogue systems according to de-facto standards: A case study

    NARCIS (Netherlands)

    Möller, S.; Smeele, P.; Boland, H.; Krebber, J.

    2007-01-01

    In the present paper, we investigate the validity and reliability of de-facto evaluation standards, defined for measuring or predicting the quality of the interaction with spoken dialogue systems. Two experiments have been carried out with a dialogue system for controlling domestic devices. During

  15. The Role of Oral Communicative Tasks (OCT) in Developing the Spoken Proficiency of Engineering Students

    Science.gov (United States)

    Shantha, S.; Mekala, S.

    2017-01-01

    The mastery of speaking skills in English has become a major requisite in engineering industry. Engineers are expected to possess speaking skills for executing their routine activities and career prospects. The article focuses on the experimental study conducted to improve English spoken proficiency of Indian engineering students using task-based…

  16. Effects of Tasks on Spoken Interaction and Motivation in English Language Learners

    Science.gov (United States)

    Carrero Pérez, Nubia Patricia

    2016-01-01

    Task based learning (TBL) or Task based learning and teaching (TBLT) is a communicative approach widely applied in settings where English has been taught as a foreign language (EFL). It has been documented as greatly useful to improve learners' communication skills. This research intended to find the effect of tasks on students' spoken interaction…

  17. KANNADA--A CULTURAL INTRODUCTION TO THE SPOKEN STYLES OF THE LANGUAGE.

    Science.gov (United States)

    KRISHNAMURTHI, M.G.; MCCORMACK, WILLIAM

    THE TWENTY GRADED UNITS IN THIS TEXT CONSTITUTE AN INTRODUCTION TO BOTH INFORMAL AND FORMAL SPOKEN KANNADA. THE FIRST TWO UNITS PRESENT THE KANNADA MATERIAL IN PHONETIC TRANSCRIPTION ONLY, WITH KANNADA SCRIPT GRADUALLY INTRODUCED FROM UNIT III ON. A TYPICAL LESSON-UNIT INCLUDES--(1) A DIALOG IN PHONETIC TRANSCRIPTION AND ENGLISH TRANSLATION, (2)…

  18. Gesture in Multiparty Interaction: A Study of Embodied Discourse in Spoken English and American Sign Language

    Science.gov (United States)

    Shaw, Emily P.

    2013-01-01

    This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…

  19. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    Science.gov (United States)

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  20. Authentic ESL Spoken Materials: Soap Opera and Sitcom versus Natural Conversation

    Science.gov (United States)

    Al-Surmi, Mansoor Ali

    2012-01-01

    TV shows, especially soap operas and sitcoms, are usually considered by ESL practitioners as a source of authentic spoken conversational materials presumably because they reflect the linguistic features of natural conversation. However, practitioners might be faced with the dilemma of how to evaluate whether such conversational materials reflect…

  1. Webster's word power better English grammar improve your written and spoken English

    CERN Document Server

    Kirkpatrick, Betty

    2014-01-01

    With questions and answer sections throughout, this book helps you to improve your written and spoken English through understanding the structure of the English language. This is a thorough and useful book with all parts of speech and grammar explained. Used by ELT self-study students.

  2. Perception and Lateralization of Spoken Emotion by Youths with High-Functioning Forms of Autism

    Science.gov (United States)

    Baker, Kimberly F.; Montgomery, Allen A.; Abramson, Ruth

    2010-01-01

    The perception and the cerebral lateralization of spoken emotions were investigated in children and adolescents with high-functioning forms of autism (HFFA), and age-matched typically developing controls (TDC). A dichotic listening task using nonsense passages was used to investigate the recognition of four emotions: happiness, sadness, anger, and…

  3. User-Centred Design for Chinese-Oriented Spoken English Learning System

    Science.gov (United States)

    Yu, Ping; Pan, Yingxin; Li, Chen; Zhang, Zengxiu; Shi, Qin; Chu, Wenpei; Liu, Mingzhuo; Zhu, Zhiting

    2016-01-01

    Oral production is an important part in English learning. Lack of a language environment with efficient instruction and feedback is a big issue for non-native speakers' English spoken skill improvement. A computer-assisted language learning system can provide many potential benefits to language learners. It allows adequate instructions and instant…

  4. Difference between Written and Spoken Czech: The Case of Verbal Nouns Denoting an Action

    Czech Academy of Sciences Publication Activity Database

    Kolářová, V.; Kolář, Jan; Mikulová, M.

    2017-01-01

    Roč. 107, č. 1 (2017), s. 19-38 ISSN 0032-6585 Institutional support: RVO:67985840 Keywords : written Czech * spoken Czech * verbal nouns Subject RIV: AI - Linguistics OBOR OECD: Pure mathematics https://www.degruyter.com/view/j/pralin.2017.107.issue-1/pralin-2017-0002/pralin-2017-0002.xml

  5. Chunk Learning and the Development of Spoken Discourse in a Japanese as a Foreign Language Classroom

    Science.gov (United States)

    Taguchi, Naoko

    2007-01-01

    This study examined the development of spoken discourse among L2 learners of Japanese who received extensive practice on grammatical chunks. Participants in this study were 22 college students enrolled in an elementary Japanese course. They received instruction on a set of grammatical chunks in class through communicative drills and the…

  6. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    Science.gov (United States)

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.
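
    The multiple-linear-regression step reported above translates directly into code. The sketch below is a hypothetical illustration in Python with synthetic data (statsmodels is an assumed dependency; the variable names mirror the abstract's predictors, not the study's actual dataset):

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data mirroring the abstract's predictors; not the real dataset.
rng = np.random.default_rng(42)
n = 39                                # the study assessed 39 children
age = rng.uniform(5, 12, n)           # age at testing (years)
phoneme = rng.uniform(0, 100, n)      # phoneme perception score
closure = rng.uniform(0, 100, n)      # auditory word closure score
lexical = 2*age + 0.4*phoneme + 0.3*closure + rng.normal(0, 8, n)

# Regress the lexical outcome on the three predictors, as the abstract does.
X = sm.add_constant(np.column_stack([age, phoneme, closure]))
fit = sm.OLS(lexical, X).fit()
print(fit.params)     # intercept and one coefficient per predictor
print(fit.rsquared)   # proportion of variance explained
```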

  7. A Spoken-Language Intervention for School-Aged Boys with Fragile X Syndrome

    Science.gov (United States)

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2016-01-01

    Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…

  8. Improving the Grammatical Accuracy of the Spoken English of Indonesian International Kindergarten Students

    Science.gov (United States)

    Gozali, Imelda; Harjanto, Ignatius

    2014-01-01

    The need to improve the spoken English of kindergarten students in an international preschool in Surabaya prompted this Classroom Action Research (CAR). It involved the implementation of Form-Focused Instruction (FFI) strategy coupled with Corrective Feedback (CF) in Grammar lessons. Four grammar topics were selected, namely Regular Plural form,…

  9. Mental Imagery as Revealed by Eye Movements and Spoken Predicates: A Test of Neurolinguistic Programming.

    Science.gov (United States)

    Elich, Matthew; And Others

    1985-01-01

    Tested Bandler and Grinder's proposal that eye movement direction and spoken predicates are indicative of sensory modality of imagery. Subjects reported images in the three modes, but no relation between imagery and eye movements or predicates was found. Visual images were most vivid and often reported. Most subjects rated themselves as visual,…

  10. Developing and Testing EVALOE: A Tool for Assessing Spoken Language Teaching and Learning in the Classroom

    Science.gov (United States)

    Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José

    2015-01-01

    Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…

  11. Probabilistic Phonotactics as a Cue for Recognizing Spoken Cantonese Words in Speech

    Science.gov (United States)

    Yip, Michael C. W.

    2017-01-01

    Previous experimental psycholinguistic studies suggested that probabilistic phonotactic information may hint at the locations of word boundaries in continuous speech and hence offers an interesting solution to the empirical question of how we recognize/segment individual spoken words in speech. We investigated this issue by using…

  12. Monitoring the Performance of Human and Automated Scores for Spoken Responses

    Science.gov (United States)

    Wang, Zhen; Zechner, Klaus; Sun, Yu

    2018-01-01

    As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish…

  13. Spoken language interaction with model uncertainty: an adaptive human-robot interaction system

    Science.gov (United States)

    Doshi, Finale; Roy, Nicholas

    2008-12-01

    Spoken language is one of the most intuitive forms of interaction between humans and agents. Unfortunately, agents that interact with people using natural language often experience communication errors and do not correctly understand the user's intentions. Recent systems have successfully used probabilistic models of speech, language and user behaviour to generate robust dialogue performance in the presence of noisy speech recognition and ambiguous language choices, but decisions made using these probabilistic models are still prone to errors owing to the complexity of acquiring and maintaining a complete model of human language and behaviour. In this paper, a decision-theoretic model for human-robot interaction using natural language is described. The algorithm is based on the Partially Observable Markov Decision Process (POMDP), which allows agents to choose actions that are robust not only to uncertainty from noisy or ambiguous speech recognition but also unknown user models. Like most dialogue systems, a POMDP is defined by a large number of parameters that may be difficult to specify a priori from domain knowledge, and learning these parameters from the user may require an unacceptably long training period. An extension to the POMDP model is described that allows the agent to acquire a linguistic model of the user online, including new vocabulary and word choice preferences. The approach not only avoids a training period of constant questioning as the agent learns, but also allows the agent actively to query for additional information when its uncertainty suggests a high risk of mistakes. The approach is demonstrated both in simulation and on a natural language interaction system for a robotic wheelchair application.
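
    As a concrete illustration of the POMDP belief-tracking this record describes, the sketch below implements the standard discrete Bayes filter over hidden user intents. The two-intent domestic example and all probability tables are assumptions for illustration, not the authors' model:

```python
import numpy as np

# Hypothetical 2-intent domestic example: the user wants either "music"
# or "lights". T[a, s, s'] is the state-transition table; the intent is
# assumed to stay fixed while the system asks (identity matrix).
T = np.array([np.eye(2)])
# Z[a, s', o]: the speech recognizer reports the true intent 80% of the time.
Z = np.array([[[0.8, 0.2],
               [0.2, 0.8]]])

def belief_update(b, a, o):
    """Bayes filter: b'(s') is proportional to Z[a, s', o] * sum_s T[a, s, s'] * b(s)."""
    b_next = Z[a, :, o] * (b @ T[a])
    return b_next / b_next.sum()

b = np.array([0.5, 0.5])        # uniform prior over user intents
b = belief_update(b, a=0, o=0)  # system asks once, recognizer hears "music"
print(b)                        # -> [0.8 0.2]
```

    The POMDP policy then chooses the next action (answer, clarify, query for vocabulary) by weighing expected reward against this belief, which is what lets the agent ask for more information exactly when its uncertainty implies a high risk of mistakes.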

  14. The Spoken Word, the Book and the Image in the Work of Evangelization

    Directory of Open Access Journals (Sweden)

    Jerzy Strzelczyk

    2017-06-01

    Full Text Available Little is known about the ‘material’ equipment of the early missionaries who set out to evangelize pagans and apostates, since the authors of the sources focused mainly on the successes (or failures) of the missions. Information concerning the ‘infrastructure’ of missions is rather occasional and fragmentary. The major part in the process of evangelization must have been played by the spoken word, preached directly or through an interpreter, at least in the areas and milieus remote from the centers of ancient civilization. It could not have been otherwise when coming into contact with communities which did not know the art of reading, still less writing. A little more attention is devoted to the other two media, that is, the written word and the image. The significance of the written word was manifold, and, at least as far as the basic liturgical books are concerned (the missal, the evangeliary?), the manuscripts were indispensable elements of missionaries’ equipment. In certain circumstances the books which the missionaries had at their disposal could acquire special, even magical, significance, most comprehensible to the Christianized people (the examples given: the evangeliary of St. Winfried-Boniface in the face of death at the hands of a pagan Frisian, and the episode with a manuscript in the story of Anskar’s mission written by Rimbert). The role of plastic art representations (images) during the missions is much less frequently mentioned in the sources. After quoting a few relevant examples (Bede the Venerable, Ermoldus Nigellus, Paul the Deacon, Thietmar of Merseburg), the author also cites an interesting, although not entirely successful, attempt to use drama to instruct the Livonians in the faith while converting them to Christianity, as reported by Henry of Latvia.

  15. Impact of cognitive function and dysarthria on spoken language and perceived speech severity in multiple sclerosis

    Science.gov (United States)

    Feenaughty, Lynda

    Purpose: The current study sought to investigate the separate effects of dysarthria and cognitive status on global speech timing, speech hesitation, and linguistic complexity characteristics, and how these speech behaviors shape listener impressions, for three connected speech tasks presumed to differ in cognitive-linguistic demand, in four carefully defined speaker groups: 1) MS with cognitive deficits (MSCI), 2) MS with clinically diagnosed dysarthria and intact cognition (MSDYS), 3) MS without dysarthria or cognitive deficits (MS), and 4) healthy talkers (CON). The relationship between neuropsychological test scores and speech-language production and perceptual variables for speakers with cognitive deficits was also explored. Methods: 48 speakers, including 36 individuals reporting a neurological diagnosis of MS and 12 healthy talkers, participated. The three MS groups and the control group each contained 12 speakers (8 women and 4 men). Cognitive function was quantified using standard clinical tests of memory, information processing speed, and executive function. A standard z-score of ≤ -1.50 indicated deficits in a given cognitive domain. Three certified speech-language pathologists determined the clinical diagnosis of dysarthria for speakers with MS. Experimental speech tasks of interest included audio-recordings of an oral reading of the Grandfather passage and two spontaneous speech samples in the form of Familiar and Unfamiliar descriptive discourse. Various measures of spoken language were of interest. Suprasegmental acoustic measures included speech and articulatory rate. Linguistic speech hesitation measures included pause frequency (i.e., silent and filled pauses), mean silent pause duration, grammatical appropriateness of pauses, and interjection frequency. For the two discourse samples, three standard measures of language complexity were obtained, including subordination index, inter-sentence cohesion adequacy, and lexical diversity. Ten listeners

  16. Early processing of orthographic language membership information in bilingual visual word recognition: Evidence from ERPs.

    Science.gov (United States)

    Hoversten, Liv J; Brothers, Trevor; Swaab, Tamara Y; Traxler, Matthew J

    2017-08-01

    For successful language comprehension, bilinguals often must exert top-down control to access and select lexical representations within a single language. These control processes may critically depend on identification of the language to which a word belongs, but it is currently unclear when different sources of such language membership information become available during word recognition. In the present study, we used event-related potentials to investigate the time course of influence of orthographic language membership cues. Using an oddball detection paradigm, we observed early neural effects of orthographic bias (Spanish vs. English orthography) that preceded effects of lexicality (word vs. pseudoword). This early orthographic pop-out effect was observed for both words and pseudowords, suggesting that this cue is available prior to full lexical access. We discuss the role of orthographic bias for models of bilingual word recognition and its potential role in the suppression of nontarget lexical information. Published by Elsevier Ltd.

  17. A neural mechanism for recognizing speech spoken by different speakers

    NARCIS (Netherlands)

    Kreitewolf, Jens; Gaudrain, Etienne; von Kriegstein, Katharina

    2014-01-01

    Understanding speech from different speakers is a sophisticated process, particularly because the same acoustic parameters convey important information about both the speech message and the person speaking. How the human brain accomplishes speech recognition under such conditions is unknown. One

  18. Verbal short-term memory development and spoken language outcomes in deaf children with cochlear implants.

    Science.gov (United States)

    Harris, Michael S; Kronenberger, William G; Gao, Sujuan; Hoen, Helena M; Miyamoto, Richard T; Pisoni, David B

    2013-01-01

    Cochlear implants (CIs) help many deaf children achieve near-normal speech and language (S/L) milestones. Nevertheless, high levels of unexplained variability in S/L outcomes are limiting factors in improving the effectiveness of CIs in deaf children. The objective of this study was to longitudinally assess the role of verbal short-term memory (STM) and working memory (WM) capacity as a progress-limiting source of variability in S/L outcomes after CI in children. Longitudinal study of 66 children with CIs for prelingual severe-to-profound hearing loss. Outcome measures included performance on digit span forward (DSF), digit span backward (DSB), and four conventional S/L measures that examined spoken-word recognition (Phonetically Balanced Kindergarten word test), receptive vocabulary (Peabody Picture Vocabulary Test ), sentence-recognition skills (Hearing in Noise Test), and receptive and expressive language functioning (Clinical Evaluation of Language Fundamentals Fourth Edition Core Language Score; CELF). Growth curves for DSF and DSB in the CI sample over time were comparable in slope, but consistently lagged in magnitude relative to norms for normal-hearing peers of the same age. For DSF and DSB, 50.5% and 44.0%, respectively, of the CI sample scored more than 1 SD below the normative mean for raw scores across all ages. The first (baseline) DSF score significantly predicted all endpoint scores for the four S/L measures, and DSF slope (growth) over time predicted CELF scores. DSF baseline and slope accounted for an additional 13 to 31% of variance in S/L scores after controlling for conventional predictor variables such as: chronological age at time of testing, age at time of implantation, communication mode (auditory-oral communication versus total communication), and maternal education. Only DSB baseline scores predicted endpoint language scores on Peabody Picture Vocabulary Test and CELF. DSB slopes were not significantly related to any endpoint S/L measures

  19. Brain basis of phonological awareness for spoken language in children and its disruption in dyslexia.

    Science.gov (United States)

    Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-04-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.

  20. The emergence and development of a spoken standard in England (1400-1926)

    DEFF Research Database (Denmark)

    Nielsen, Hans Frede

    2017-01-01

    The beginnings of a spoken standard in England go back to late Middle English and early Modern English times, when southern speech and especially the idiom of the Court, London and the Home Counties acquired prestige beyond that of other regional dialects. With the increasing stabilization of English orthography in the seventeenth and eighteenth centuries, spelling conventions often came to serve as guidelines for proper pronunciation, a notion that was rejected, however, by elocutionists before and just after 1800. It was only with the introduction of compulsory education in the latter half of the nineteenth century, and with the establishment of the "great public boarding-schools" in particular, that a non-localized spoken standard based on educated southern speech came into being. "Public School English" had become a class dialect, and under the name of "Received Pronunciation" (a term devised…

  1. The power of the spoken word: sociolinguistic cues influence the misinformation effect.

    Science.gov (United States)

    Vornik, Lana A; Sharman, Stefanie J; Garry, Maryanne

    2003-01-01

    We investigated whether the sociolinguistic information delivered by spoken, accented postevent narratives would influence the misinformation effect. New Zealand subjects listened to misleading postevent information spoken in either a New Zealand (NZ) or North American (NA) accent. Consistent with earlier research, we found that NA accents were seen as more powerful and more socially attractive. We found that accents per se had no influence on the misinformation effect but sociolinguistic factors did: both power and social attractiveness affected subjects' susceptibility to misleading postevent suggestions. When subjects rated the speaker highly on power, social attractiveness did not matter; they were equally misled. However, when subjects rated the speaker low on power, social attractiveness did matter: subjects who rated the speaker high on social attractiveness were more misled than subjects who rated it lower. There were similar effects for confidence. These results have implications for our understanding of social influences on the misinformation effect.

  2. Comparing spoken language treatments for minimally verbal preschoolers with autism spectrum disorders.

    Science.gov (United States)

    Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna

    2013-02-01

    Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in each group achieved benchmarks for the first stage of functional spoken language development, as defined by Tager-Flusberg et al. (J Speech Lang Hear Res, 52: 643-652, 2009). Analyses of moderators of treatment suggest that joint attention moderates response to both treatments, and children with better receptive language pre-treatment do better with the naturalistic method, while those with lower receptive language show better response to the discrete trial treatment. The implications of these findings are discussed.

  3. Factors Influencing Verbal Intelligence and Spoken Language in Children with Phenylketonuria.

    Science.gov (United States)

    Soleymani, Zahra; Keramati, Nasrin; Rohani, Farzaneh; Jalaei, Shohre

    2015-05-01

    To determine verbal intelligence and spoken language of children with phenylketonuria and to study the effect of age at diagnosis and phenylalanine plasma level on these abilities. Cross-sectional. Children with phenylketonuria were recruited from pediatric hospitals in 2012. Normal control subjects were recruited from kindergartens in Tehran. 30 phenylketonuria and 42 control subjects aged 4-6.5 years. Skills were compared between 3 phenylketonuria groups categorized by age at diagnosis/treatment, and between the phenylketonuria and control groups. Scores on Wechsler Preschool and Primary Scale of Intelligence for verbal and total intelligence, and Test of Language Development-Primary, third edition for spoken language, listening, speaking, semantics, syntax, and organization. The performance of control subjects was significantly better than that of early-treated subjects for all composite quotients from Test of Language Development and verbal intelligence (P…) … phenylketonuria subjects.

  4. Why not model spoken word recognition instead of phoneme monitoring?

    NARCIS (Netherlands)

    Vroomen, J.; de Gelder, B.

    2000-01-01

    Norris, McQueen & Cutler present a detailed account of the decision stage of the phoneme monitoring task. However, we question whether this contributes to our understanding of the speech recognition process itself, and we fail to see why phonotactic knowledge is playing a role in phoneme

  5. Enhancing spoken connected-digit recognition accuracy by error ...

    Indian Academy of Sciences (India)


  6. A neural mechanism for recognizing speech spoken by different speakers.

    Science.gov (United States)

    Kreitewolf, Jens; Gaudrain, Etienne; von Kriegstein, Katharina

    2014-05-01

    Understanding speech from different speakers is a sophisticated process, particularly because the same acoustic parameters convey important information about both the speech message and the person speaking. How the human brain accomplishes speech recognition under such conditions is unknown. One view is that speaker information is discarded at early processing stages and not used for understanding the speech message. An alternative view is that speaker information is exploited to improve speech recognition. Consistent with the latter view, previous research identified functional interactions between the left- and the right-hemispheric superior temporal sulcus/gyrus, which process speech- and speaker-specific vocal tract parameters, respectively. Vocal tract parameters are one of the two major acoustic features that determine both speaker identity and speech message (phonemes). Here, using functional magnetic resonance imaging (fMRI), we show that a similar interaction exists for glottal fold parameters between the left and right Heschl's gyri. Glottal fold parameters are the other main acoustic feature that determines speaker identity and speech message (linguistic prosody). The findings suggest that interactions between left- and right-hemispheric areas are specific to the processing of different acoustic features of speech and speaker, and that they represent a general neural mechanism when understanding speech from different speakers. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Spoken Grammar: An Urgent Necessity in the EFL Context

    Science.gov (United States)

    Al-wossabi, Sami A.

    2014-01-01

    Recent studies in corpus linguistics have revealed apparent inconsistencies between the prescriptive grammar presented in EFL textbooks and the type of grammar used in the speech of native speakers. Such variations and learning gaps deprive EFL learners of the actual use of English and delay their oral/aural developmental processes. The focus of…

  8. Silent Letters Are Activated in Spoken Word Recognition

    Science.gov (United States)

    Ranbom, Larissa J.; Connine, Cynthia M.

    2011-01-01

    Four experiments are reported that investigate processing of mispronounced words for which the phonological form is inconsistent with the graphemic form (words spelled with silent letters). Words produced as mispronunciations that are consistent with their spelling were more confusable with their citation form counterpart than mispronunciations…

  9. Spoken word production: A theory of lexical access

    NARCIS (Netherlands)

    Levelt, W.J.M.

    2001-01-01

    A core operation in speech production is the preparation of words from a semantic base. The theory of lexical access reviewed in this article covers a sequence of processing stages beginning with the speaker's focusing on a target concept and ending with the initiation of articulation. The initial

  10. Attention demands of spoken word planning: A review

    NARCIS (Netherlands)

    Roelofs, A.P.A.; Piai, V.

    2011-01-01

    Attention and language are among the most intensively researched abilities in the cognitive neurosciences, but the relation between these abilities has largely been neglected. There is increasing evidence, however, that linguistic processes, such as those underlying the planning of words, cannot

  11. a model for incremental grounding in spoken dialogue systems

    NARCIS (Netherlands)

    Visser, Thomas; Traum, David; DeVault, David; op den Akker, Hendrikus J.A.

    2012-01-01

    Recent advances in incremental language processing for dialogue systems promise to enable more natural conversation between humans and computers. By analyzing the user's utterance while it is still in progress, systems can provide more human-like overlapping and backchannel responses to convey their

  12. Understanding the Relationship between Latino Students' Preferred Learning Styles and Their Language Spoken at Home

    Science.gov (United States)

    Maldonado Torres, Sonia Enid

    2016-01-01

    The purpose of this study was to explore the relationships between Latino students' learning styles and their language spoken at home. Results of the study indicated that students who spoke Spanish at home had higher means in the Active Experimentation modality of learning (M = 31.38, SD = 5.70) than students who spoke English (M = 28.08,…

  13. A Study on Motivation and Strategy Use of Bangladeshi University Students to Learn Spoken English

    OpenAIRE

    Mst. Moriam, Quadir

    2008-01-01

    This study discusses motivation and strategy use of university students to learn spoken English in Bangladesh. A group of 355 (187 males and 168 females) university students participated in this investigation. To measure learners' degree of motivation, a modified version of the questionnaire used by Schmidt et al. (1996) was administered. Participants reported their strategy use on a modified version of SILL, the Strategy Inventory for Language Learning, version 7.0 (Oxford, 1990). In order to fin...

  14. Sentence Recognition in Quiet and Noise by Pediatric Cochlear Implant Users: Relationships to Spoken Language.

    Science.gov (United States)

    Eisenberg, Laurie S; Fisher, Laurel M; Johnson, Karen C; Ganguly, Dianne Hammes; Grace, Thelma; Niparko, John K

    2016-02-01

    We investigated associations between sentence recognition and spoken language for children with cochlear implants (CI) enrolled in the Childhood Development after Cochlear Implantation (CDaCI) study. In a prospective longitudinal study, sentence recognition percent-correct scores and language standard scores were correlated at 48-, 60-, and 72-months post-CI activation. Six tertiary CI centers in the United States. Children with CIs participating in the CDaCI study. Cochlear implantation. Sentence recognition was assessed using the Hearing In Noise Test for Children (HINT-C) in quiet and at +10, +5, and 0 dB signal-to-noise ratio (S/N). Spoken language was assessed using the Clinical Assessment of Spoken Language (CASL) core composite and the antonyms, paragraph comprehension (syntax comprehension), syntax construction (expression), and pragmatic judgment tests. Positive linear relationships were found between CASL scores and HINT-C sentence scores when the sentences were delivered in quiet and at +10 and +5 dB S/N, but not at 0 dB S/N. At 48 months post-CI, sentence scores at +10 and +5 dB S/N were most strongly associated with CASL antonyms. At 60 and 72 months, sentence recognition in noise was most strongly associated with paragraph comprehension and syntax construction. Children with CIs learn spoken language in a variety of acoustic environments. Despite the observed inconsistent performance in different listening situations and noise-challenged environments, many children with CIs are able to build lexicons and learn the rules of grammar that enable recognition of sentences.

  15. The Role of Oral Communicative Tasks (OCT) in Developing the Spoken Proficiency of Engineering Students

    OpenAIRE

    S. Shantha; S. Mekala

    2017-01-01

    The mastery of speaking skills in English has become a major requisite in engineering industry. Engineers are expected to possess speaking skills for executing their routine activities and career prospects. The article focuses on the experimental study conducted to improve English spoken proficiency of Indian engineering students using task-based approach. Tasks are activities that concentrates on the learners in providing the main context and focus for learning. Therefore, a task facilitates...

  16. Discourse context and the recognition of reduced and canonical spoken words

    OpenAIRE

    Brouwer, S.; Mitterer, H.; Huettig, F.

    2013-01-01

    In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" ...

  17. Working memory affects older adults' use of context in spoken-word recognition.

    Science.gov (United States)

    Janse, Esther; Jesse, Alexandra

    2014-01-01

    Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate older listeners' ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether verbal working memory predicts older adults' ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect as fast and as accurately as possible target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) affected the speed of recognition. Contextual facilitation was modulated by older listeners' verbal working memory (measured with a backward digit span task) and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners' immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.

  18. I Feel You: The Design and Evaluation of a Domotic Affect-Sensitive Spoken Conversational Agent

    Directory of Open Access Journals (Sweden)

    Juan Manuel Montero

    2013-08-01

    Full Text Available We describe the work on infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need for the Dialog Manager to be modified, as is done in most existing dialog systems, in order to be adaptive. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely, frustration and contentment, from dialog features, a non-conventional source, in the attempt of moving towards a more user-centric approach. The final part reports the evaluation results obtained from a user study, in which both versions of the agent (non-adaptive and emotionally-adaptive) were compared. The results provide substantial evidence with respect to the benefits of adding emotion in a spoken conversational agent, especially in mitigating users’ frustrations and, ultimately, improving their satisfaction.

  19. I feel you: the design and evaluation of a domotic affect-sensitive spoken conversational agent.

    Science.gov (United States)

    Lutfi, Syaheerah Lebai; Fernández-Martínez, Fernando; Lorenzo-Trueba, Jaime; Barra-Chicote, Roberto; Montero, Juan Manuel

    2013-08-13

    We describe the work on infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need for the Dialog Manager to be modified, as is done in most existing dialog systems, in order to be adaptive. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely, frustration and contentment, from dialog features, a non-conventional source, in the attempt of moving towards a more user-centric approach. The final part reports the evaluation results obtained from a user study, in which both versions of the agent (non-adaptive and emotionally-adaptive) were compared. The results provide substantial evidence with respect to the benefits of adding emotion in a spoken conversational agent, especially in mitigating users' frustrations and, ultimately, improving their satisfaction.

  20. Spoken sentence production in college students with dyslexia: working memory and vocabulary effects.

    Science.gov (United States)

    Wiseheart, Rebecca; Altmann, Lori J P

    2017-11-21

    Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group differences can be attributable to cognitive differences between groups. Fifty-one college students with and without dyslexia were asked to produce sentences from stimuli comprising a verb and two nouns. Verb types varied in argument structure and morphological form and nouns varied in animacy. Outcome measures were precision (measured by fluency, grammaticality and completeness) and efficiency (measured by response times). Vocabulary and working memory tests were also administered and used as predictors of sentence production performance. Relative to non-dyslexic peers, students with dyslexia responded significantly slower and produced sentences that were significantly less precise in terms of fluency, grammaticality and completeness. The primary predictors of precision and efficiency were working memory, which differed between groups, and vocabulary, which did not. College students with dyslexia were significantly less facile and flexible on this spoken sentence-production task than typical readers, which is consistent with previous studies of school-age children with dyslexia. Group differences in performance were traced primarily to limited working memory, and were somewhat mitigated by strong vocabulary. © 2017 Royal College of Speech and Language Therapists.

  1. Adaptation to Pronunciation Variations in Indonesian Spoken Query-Based Information Retrieval

    Science.gov (United States)

    Lestari, Dessi Puji; Furui, Sadaoki

    Recognition errors of proper nouns and foreign words significantly decrease the performance of ASR-based speech applications such as voice dialing systems, speech summarization, spoken document retrieval, and spoken query-based information retrieval (IR). The reason is that proper nouns and words that come from other languages are usually the most important key words. The loss of such words due to misrecognition in turn leads to a loss of significant information from the speech source. This paper focuses on how to improve the performance of Indonesian ASR by alleviating the problem of pronunciation variation of proper nouns and foreign words (English words in particular). To improve the proper noun recognition accuracy, proper-noun specific acoustic models are created by supervised adaptation using maximum likelihood linear regression (MLLR). To improve English word recognition, the pronunciation of English words contained in the lexicon is fixed by using rule-based English-to-Indonesian phoneme mapping. The effectiveness of the proposed method was confirmed through spoken query based Indonesian IR. We used Inference Network-based (IN-based) IR and compared its results with those of the classical Vector Space Model (VSM) IR, both using a tf-idf weighting schema. Experimental results show that IN-based IR outperforms VSM IR.
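
    The classical tf-idf-weighted vector space model mentioned above can be sketched compactly. The toy corpus and query below are assumptions; the weighting (term frequency times log inverse document frequency) and cosine ranking follow the standard VSM formulation the record refers to:

```python
import math
from collections import Counter

# Toy corpus; documents and query are assumptions for illustration.
docs = {
    "d1": "jakarta hotel booking".split(),
    "d2": "hotel room price jakarta".split(),
    "d3": "train schedule bandung".split(),
}

def tf_idf(docs):
    """Weight each term by raw term frequency times log inverse doc frequency."""
    n = len(docs)
    df = Counter(t for words in docs.values() for t in set(words))
    return {d: {t: c * math.log(n / df[t]) for t, c in Counter(w).items()}
            for d, w in docs.items()}

def cosine(u, v):
    """Cosine similarity between two sparse tf-idf vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vecs = tf_idf(docs)
query = {"jakarta": 1.0, "hotel": 1.0}  # recognized spoken query, as weights
print(sorted(docs, key=lambda d: cosine(query, vecs[d]), reverse=True))
```

    An inference-network model, which the paper reports outperforming this baseline, would use the same term weights but combine evidence through a Bayesian network rather than a single cosine score.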

  2. Narrative skills in deaf children who use spoken English: Dissociations between macro and microstructural devices.

    Science.gov (United States)

    Jones, A C; Toscano, E; Botting, N; Marshall, C R; Atkinson, J R; Denmark, T; Herman, R; Morgan, G

    2016-12-01

    Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and of micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6- to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single-word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices correlated most strongly with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices more dependent on finer linguistic and pragmatic skills. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  3. Computational modeling of turn-taking dynamics in spoken conversations

    OpenAIRE

    Chowdhury, Shammur Absar

    2017-01-01

    The study of human interaction dynamics has been at the center of multiple research disciplines, including computer and social sciences, conversational analysis and psychology, for decades. Recent interest has been shown with the aim of designing computational models to improve human-machine interaction systems as well as to support humans in their decision-making process. Turn-taking is one of the key aspects of conversational dynamics in dyadic conversations and is an integral part of hu...

  4. Endowing Spoken Language Dialogue System with Emotional Intelligence

    DEFF Research Database (Denmark)

    André, Elisabeth; Rehm, Matthias; Minker, Wolfgang

    2004-01-01

    While most dialogue systems restrict themselves to the adjustment of the propositional contents, our work concentrates on the generation of stylistic variations in order to improve the user’s perception of the interaction. To accomplish this goal, our approach integrates a social theory of politeness with a cognitive theory of emotions. We propose a hierarchical selection process for politeness behaviors in order to enable the refinement of decisions in case additional context information becomes available.

  5. Development of Infrared Lip Movement Sensor for Spoken Word Recognition

    Directory of Open Access Journals (Sweden)

    Takahiro Yoshida

    2007-12-01

    Full Text Available Lip movement of a speaker is very informative for many applications of speech signal processing, such as multi-modal speech recognition and password authentication without a speech signal. However, collecting multi-modal speech information requires a video camera, a large amount of memory, a video interface, and a high-speed processor to extract lip movement in real time. Such a system tends to be expensive and large, which is one reason preventing the wider use of multi-modal speech processing. In this study, we have developed a simple infrared lip movement sensor mounted on a headset, which makes it possible to acquire lip movement with a PDA, mobile phone, or notebook PC. The sensor consists of an infrared LED and an infrared photo transistor, and measures lip movement from the light reflected from the mouth region. In experiments, we achieved a 66% word recognition rate using lip movement features alone. This result shows that our sensor can be utilized as a tool for multi-modal speech processing when combined with a microphone mounted on the headset.

  6. The Self-Organization of a Spoken Word

    Directory of Open Access Journals (Sweden)

    John G. eHolden

    2012-07-01

    Full Text Available Pronunciation time probability density and hazard functions from large speeded word naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics—interaction dominant dynamics. Lognormal and inverse power-law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power-law distributions offered better descriptions of the participants’ distributions than the ex-Gaussian or ex-Wald—alternatives corresponding to additive, superposed, component processes. The evidence for interaction dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions.
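
    The distribution-comparison logic described above can be illustrated in miniature: fit candidate response-time distributions by maximum likelihood and compare them with an information criterion. The SciPy sketch below pits a plain lognormal against the ex-Gaussian on synthetic latencies; the mixtures of lognormal and inverse power-law distributions actually used in the study would require a custom likelihood.

        import numpy as np
        from scipy import stats

        # Synthetic pronunciation times (ms), lognormal by construction.
        rt = stats.lognorm.rvs(s=0.3, scale=600, size=2000, random_state=0)

        def aic(dist, data):
            """Akaike information criterion for an MLE fit of `dist`."""
            params = dist.fit(data)
            loglik = np.sum(dist.logpdf(data, *params))
            return 2 * len(params) - 2 * loglik

        print("lognormal AIC:  ", aic(stats.lognorm, rt))   # lower is better
        print("ex-Gaussian AIC:", aic(stats.exponnorm, rt))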

  7. Children’s recall of words spoken in their first and second language: Effects of signal-to-noise ratio and reverberation time

    Directory of Open Access Journals (Sweden)

    Anders eHurtig

    2016-01-01

    Full Text Available Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few studies have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants’ first (L1) and second (L2) language. A total of 72 children (10 years old) participated in this study. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 sec) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 than in L1. Words presented at a high SNR (+12 dBA) were recalled better than words presented at a low SNR (+3 dBA). Reverberation time interacted with SNR such that at +12 dB the shorter reverberation time improved recall, but at +3 dB it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language.
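
    A minimal sketch of the implied 2 (SNR) x 2 (reverberation time) within-subject analysis, using the repeated-measures ANOVA from statsmodels on invented recall scores for 72 children (not the study's data):

        import numpy as np
        import pandas as pd
        from statsmodels.stats.anova import AnovaRM

        rng = np.random.default_rng(1)
        rows = []
        for subj in range(72):                     # 72 children, as above
            for snr in ("+3 dBA", "+12 dBA"):
                for rev in ("0.3 s", "1.2 s"):
                    base = 12.0 if snr == "+12 dBA" else 9.0
                    rows.append({"subject": subj, "snr": snr, "reverb": rev,
                                 "recall": base + rng.normal(0, 2)})
        df = pd.DataFrame(rows)

        res = AnovaRM(df, depvar="recall", subject="subject",
                      within=["snr", "reverb"]).fit()
        print(res.anova_table)                     # main effects + interaction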

  8. Open-Source Multi-Language Audio Database for Spoken Language Processing Applications

    Science.gov (United States)

    2012-12-01

    Widespread use of internet acronyms such as “brb” and “lol” occurred occasionally in casual speech, implying the assimilation of today’s digital jargon in verbal communication.

  9. Great expectations: Specific lexical anticipation influences the processing of spoken language

    Directory of Open Access Journals (Sweden)

    Nieuwland Mante S

    2007-10-01

    Full Text Available Abstract Background Recently several studies have shown that people use contextual information to make predictions about the rest of the sentence or story as the text unfolds. Using event-related potentials (ERPs), we tested whether these on-line predictions are based on a message-level representation of the discourse or on simple automatic activation by individual words. Subjects heard short stories that were highly constraining for one specific noun, or stories that were not specifically predictive but contained the same prime words as the predictive stories. To test whether listeners make specific predictions, critical nouns were preceded by an adjective that was inflected according to, or in contrast with, the gender of the expected noun. Results When the message of the preceding discourse was predictive, adjectives with an unexpected gender inflection evoked a negative deflection over right-frontal electrodes between 300 and 600 ms. This effect was not present in the prime control context, indicating that the prediction mismatch does not hinge on word-based priming but is based on the actual message of the discourse. Conclusion When listening to a constraining discourse, people rapidly make very specific predictions about the remainder of the story as the story unfolds. These predictions are not simply based on word-based automatic activation, but take into account the actual message of the discourse.

  10. The Effect of Background Noise on the Word Activation Process in Nonnative Spoken-Word Recognition

    Science.gov (United States)

    Scharenborg, Odette; Coumans, Juul M. J.; van Hout, Roeland

    2018-01-01

    This article investigates 2 questions: (1) does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? And (2) do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on…

  11. Detailed Phonetic Labeling of Multi-language Database for Spoken Language Processing Applications

    Science.gov (United States)

    2015-03-01

    …frontend for representing speech information. This feature set presents a detailed look at one general flavor of time-frequency features, focusing on… The next step was to segment the signal into overlapping frames, using a Kaiser window with β of 6 (similar to a Hamming window). A 512-point FFT of…

  12. Event Processing in the Visual World: Projected Motion Paths during Spoken Sentence Comprehension

    Science.gov (United States)

    Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue

    2016-01-01

    Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the…

  13. An overview of technology for spoken interaction with machines

    Science.gov (United States)

    Hunt, M. J.

    1988-02-01

    This report provides a non-mathematical introduction to speech input and output technology. It is divided into three parts. The first presents necessary background information on speech: on its nature, its production and perception, and on methods of analysis and coding used in speech I/O. A central message is that our subjective impression of speech is misleading and causes us to underestimate the complexity of speech communication. The second part is concerned with speech output and discusses the trade-offs that must be made between the quality and flexibility of the speech generated and the complexity and storage requirements of the speech output system. The final - and longest - part of the report deals with speech recognition. Arguments are presented in favor of statistical rather than rule-based approaches to speech recognition. The categories of recognizer currently available and the algorithms they use are briefly described, with the general conclusion that the performance obtained depends critically on the training process: on the type and quantity of the training material and on the amount of information derived from it. Three more detailed sections cover spectral representations and distance measures, the particular set of representations classed as auditory models, and techniques for handling noise and distortions. The last section discusses the difficulties of specifying recognizer performance, and recommends that all performance measurements should be treated with circumspection.
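
    To make the report's pairing of spectral representations and distance measures concrete, the sketch below computes framed log-power spectra and compares two signals with dynamic time warping (DTW), a standard matching technique of the era surveyed. It is a bare-bones numpy illustration; practical recognizers would use mel or auditory-model features, as the report discusses.

        import numpy as np

        def log_spectra(signal, frame=256, hop=128):
            """Framed, Hann-windowed log-power spectra."""
            frames = [signal[i:i + frame] * np.hanning(frame)
                      for i in range(0, len(signal) - frame, hop)]
            return np.log(np.abs(np.fft.rfft(frames, axis=1)) ** 2 + 1e-10)

        def dtw(a, b):
            """Cumulative DTW alignment cost between two feature sequences."""
            d = np.full((len(a) + 1, len(b) + 1), np.inf)
            d[0, 0] = 0.0
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])
                    d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
            return d[-1, -1]

        sr = 16000
        x = np.sin(2 * np.pi * 440 * np.arange(8000) / sr)   # toy "utterances"
        y = np.sin(2 * np.pi * 445 * np.arange(9000) / sr)
        print(dtw(log_spectra(x), log_spectra(y)))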

  14. Integration of Pragmatic and Phonetic Cues in Spoken Word Recognition

    Science.gov (United States)

    Rohde, Hannah; Ettlinger, Marc

    2015-01-01

    Although previous research has established that multiple top-down factors guide the identification of words during speech processing, the ultimate range of information sources that listeners integrate from different levels of linguistic structure is still unknown. In a set of experiments, we investigate whether comprehenders can integrate information from the two most disparate domains: pragmatic inference and phonetic perception. Using contexts that trigger pragmatic expectations regarding upcoming coreference (expectations for either he or she), we test listeners' identification of phonetic category boundaries (using acoustically ambiguous words on the /hi/~/ʃi/ continuum). The results indicate that, in addition to phonetic cues, word recognition also reflects pragmatic inference. These findings are consistent with evidence for top-down contextual effects from lexical, syntactic, and semantic cues, but they extend this previous work by testing cues at the pragmatic level and by eliminating a statistical-frequency confound that might otherwise explain the previously reported results. We conclude by exploring the time-course of this interaction and discussing how different models of cue integration could be adapted to account for our results. PMID:22250908
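
    The reported interaction can be captured by a simple Bayesian cue-combination sketch: a pragmatic prior over the expected pronoun is combined with an acoustic likelihood along the continuum. The logistic likelihood and every parameter value below are illustrative assumptions, not the authors' model.

        import numpy as np

        def p_she(step, prior_she, steps=7):
            """Posterior P(/ʃi/) from continuum step and pragmatic prior."""
            # Acoustic likelihood: logistic in step (1 = clear /hi/, 7 = clear /ʃi/).
            like_she = 1.0 / (1.0 + np.exp(-(step - (steps + 1) / 2)))
            post = like_she * prior_she
            return post / (post + (1.0 - like_she) * (1.0 - prior_she))

        for step in range(1, 8):
            print(step,
                  round(p_she(step, prior_she=0.8), 2),   # context favoring "she"
                  round(p_she(step, prior_she=0.2), 2))   # context favoring "he"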

  15. Predictors of spoken language development following pediatric cochlear implantation.

    Science.gov (United States)

    Boons, Tinne; Brokx, Jan P L; Dhooge, Ingeborg; Frijns, Johan H M; Peeraer, Louis; Vermeulen, Anneke; Wouters, Jan; van Wieringen, Astrid

    2012-01-01

    Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to make an estimation of the optimal age at implantation to provide maximal opportunities for the child to achieve good language skills afterward. The third aim was to gain more insight into the causes of variability to set recommendations for optimizing the rehabilitation process of prelingually deaf children with CIs. Receptive and expressive language development of 288 children who received CIs by age five was analyzed in a retrospective multicenter study. Outcome measures were language quotients (LQs) on the Reynell Developmental Language Scales and Schlichting Expressive Language Test at 1, 2, and 3 years after implantation. Independent predictive variables were nine child-related, environmental, and auditory factors. A series of multiple regression analyses determined the amount of variance in expressive and receptive language outcomes attributable to each predictor when controlling for the other variables. Simple linear regressions with age at first fitting and independent samples t tests demonstrated that children implanted before the age of two performed significantly better on all tests than children who were implanted at an older age. The mean LQ was 0.78 with an SD of 0.18. A child with an LQ lower than 0.60 (= 0.78-0.18) within 3 years after implantation was labeled as a weak performer compared with other deaf children implanted before the age of two. Contralateral stimulation with a second CI or a hearing aid and the absence of additional disabilities were related to better language outcomes. The effect of environmental factors, comprising multilingualism, parental involvement, and communication mode increased over time. Three years after implantation, the total multiple
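
    The regression logic described above can be sketched as variance partitioning: each predictor's unique contribution to the language quotient (LQ) is the drop in R² when that predictor is removed from the full model. The predictor names and data below are invented for illustration, not the study's nine factors.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 288                                    # sample size, as above
        df = pd.DataFrame({
            "age_at_implant": rng.uniform(0.5, 5.0, n),
            "contralateral_stim": rng.integers(0, 2, n).astype(float),
            "parental_involvement": rng.normal(0.0, 1.0, n),
        })
        df["lq"] = (0.9 - 0.05 * df["age_at_implant"]
                    + 0.05 * df["contralateral_stim"]
                    + 0.03 * df["parental_involvement"]
                    + rng.normal(0.0, 0.15, n))

        full = sm.OLS(df["lq"], sm.add_constant(df.drop(columns="lq"))).fit()
        for pred in ("age_at_implant", "contralateral_stim", "parental_involvement"):
            reduced = sm.OLS(df["lq"], sm.add_constant(
                df.drop(columns=["lq", pred]))).fit()
            print(pred, "unique R^2:", round(full.rsquared - reduced.rsquared, 3))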

  16. Contribution of writing to reading: Dissociation between cognitive and motor process in the left dorsal premotor cortex.

    Science.gov (United States)

    Pattamadilok, Chotiga; Ponz, Aurélie; Planton, Samuel; Bonnard, Mireille

    2016-04-01

    Functional brain imaging studies have reported activation of the left dorsal premotor cortex (PMd), a key area in the writing network, during reading tasks. However, it remains unclear whether this area is causally relevant for written stimulus recognition or whether its activation simply results from a passive coactivation of reading and writing networks. Here, we used chronometric paired-pulse transcranial magnetic stimulation (TMS) to address this issue by disrupting the activity of the PMd, the so-called Exner's area, while participants performed a lexical decision task. Both words and pseudowords were presented in printed and handwritten characters. The latter was assumed to be closely associated with motor representations of handwriting gestures. We found that TMS over the PMd in relatively early time-windows, i.e., between 60 and 160 ms after the stimulus onset, increased reaction times to pseudowords without affecting word recognition. Interestingly, this result pattern was found for both printed and handwritten characters, that is, regardless of whether the characters evoked motor representations of writing actions. Our results showed that under some circumstances the activation of the PMd does not simply result from passive association between reading and writing networks but has a functional role in the reading process. At least at an early stage of written stimulus recognition, this role seems to depend on a common sublexical and serial process underlying writing and pseudoword reading rather than on an implicit evocation of writing actions during reading as typically assumed. © 2016 Wiley Periodicals, Inc.

  17. Role of Working Memory Storage and Attention Focus Switching in Children’s Comprehension of Spoken Object Relative Sentences

    Directory of Open Access Journals (Sweden)

    Mianisha C. Finney

    2014-01-01

    Full Text Available The present study evaluated a two-mechanism memory model of the online auditory comprehension of object relative (OR) sentences in 7–11-year-old typically developing children. Mechanisms of interest included working memory storage (WMS) and attention focus switching. We predicted that both mechanisms would be important for comprehension. Forty-four children completed a listening span task indexing WMS, an auditory attention focus switching task, and an agent selection task indexing spoken sentence comprehension. Regression analyses indicated that WMS and attention focus switching accuracy each accounted for significant and unique variance in the children’s OR comprehension after accounting for age. Results were interpreted to suggest that WMS is important for OR comprehension by supporting children’s ability to retain both noun phrase 1 and noun phrase 2 prior to their reactivating noun phrase 1 from memory in order to integrate it into a developing structure. Attention focus switching was interpreted to be critical in supporting children’s noun phrase 1 reactivation, as they needed to switch their focus of attention momentarily away from ongoing language processing to memory retrieval.

  18. Conducting spoken word recognition research online: Validation and a new timing method.

    Science.gov (United States)

    Slote, Joseph; Strand, Julia F

    2016-06-01

    Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni, Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling, Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis, PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research.
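
    The validation argument rests on correlating per-item measures collected in the lab with the same items' online measures. A minimal sketch (the accuracy values are invented placeholders, not the study's data):

        import numpy as np
        from scipy.stats import pearsonr

        # Identification accuracy for the same words, lab vs. online (invented).
        lab_acc    = np.array([0.92, 0.85, 0.60, 0.71, 0.88, 0.45, 0.79, 0.66])
        online_acc = np.array([0.89, 0.80, 0.52, 0.75, 0.83, 0.40, 0.70, 0.68])

        r, p = pearsonr(lab_acc, online_acc)
        print(f"r = {r:.2f}, p = {p:.3f}")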

  19. Modality differences between written and spoken story retelling in healthy older adults

    Directory of Open Access Journals (Sweden)

    Jessica Ann Obermeyer

    2015-04-01

    Methods: Ten native English-speaking healthy elderly participants between the ages of 50 and 80 were recruited. Exclusionary criteria included neurological disease/injury, history of learning disability, uncorrected hearing or vision impairment, history of drug/alcohol abuse, and presence of cognitive decline (based on the Cognitive Linguistic Quick Test). Spoken and written discourse was analyzed for microlinguistic measures including total words, percent correct information units (CIUs; Nicholas & Brookshire, 1993), and percent complete utterances (CUs; Edmonds et al., 2009). CIUs measure relevant and informative words, while CUs focus at the sentence level and measure whether a relevant subject, verb, and object (if appropriate) are present. Results: Analysis was completed using the Wilcoxon Rank Sum Test due to the small sample size. Preliminary results revealed that healthy elderly people produced significantly more words in spoken retellings than in written retellings (p=.000); however, this measure contrasted with %CIUs and %CUs, with participants producing significantly higher %CIUs (p=.000) and %CUs (p=.000) in written story retellings than in spoken story retellings. Conclusion: These findings indicate that written retellings, while shorter, were more accurate at both the word (CIU) and sentence (CU) level. This observation could be related to the ability to revise written text and therefore make it more concise, whereas the nature of speech results in more embellishment and “thinking out loud,” such as comments about the task, associated observations about the story, etc. We plan to run more participants and conduct a main concepts analysis (before conference time) to gain more insight into modality differences and implications.
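
    A small sketch of the scoring and test named above: percent CIUs computed per sample and compared across modalities with SciPy's Wilcoxon rank-sum test. All counts are invented placeholders, not the study's data.

        import numpy as np
        from scipy.stats import ranksums

        def pct_cius(cius, total_words):
            """Percent correct information units per retelling."""
            return 100.0 * cius / total_words

        spoken  = pct_cius(np.array([140, 122, 150, 131]),
                           np.array([210, 190, 240, 205]))
        written = pct_cius(np.array([95, 88, 102, 90]),
                           np.array([120, 110, 130, 115]))
        stat, p = ranksums(written, spoken)
        print(f"W = {stat:.2f}, p = {p:.3f}")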

  20. Recognition memory for Braille or spoken words: an fMRI study in early blind.

    Science.gov (United States)

    Burton, Harold; Sinclair, Robert J; Agato, Alvin

    2012-02-15

    We examined cortical activity in the early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 years. In an event-related design, we studied blood oxygen level-dependent responses to studied ("old") compared to novel ("new") words. Presentation mode was Braille or spoken. Responses were larger for identified "new" words read in Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken "new" words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words, and occipital cortex responded to spoken words but not differentially with "old"/"new" recognition. Left dorsolateral prefrontal cortex had larger responses to "old" words only with Braille. Larger occipital cortex responses to "new" Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted individuals noted larger responses for "new" words studied in association with pictures that created a distinctiveness heuristic source factor, which enhanced recollection during remembering. Prior behavioral studies in the early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering "old" words. A larger response when identifying "new" words possibly resulted from exhaustively recollecting the sensory properties of "old" words in modality-appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a "sensory echo" that aids recollection. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script.

    Science.gov (United States)

    Zhang, Qingfang; Wang, Cheng

    2014-01-01

    The effects of word frequency (WF) and syllable frequency (SF) are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French, or English. The present study investigated the effects of WF and SF, and their interaction, in Chinese written and spoken production. Significant facilitatory WF and SF effects were observed in spoken as well as in written production. The SF effect in writing indicated that phonological properties (i.e., syllabic frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the SF effect over repetitions was divergent in the two modalities: it was significant in the first two repetitions in spoken production, whereas it was significant only in the second repetition in written production. Due to the fragility of the SF effect in writing, we suggest that the phonological influence in handwritten production is not mandatory or universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis, rather than the phonological mediation hypothesis. The absence of an interaction between WF and SF showed that the SF effect is independent of the WF effect in spoken and written output modalities. The implications of these results for written production models are discussed.

  2. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script

    Directory of Open Access Journals (Sweden)

    Qingfang eZhang

    2014-02-01

    Full Text Available The effects of word frequency and syllable frequency are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French, or English. The present study investigated the effects of word frequency and syllable frequency, and their interaction, in Chinese written and spoken production. Significant facilitatory word frequency and syllable frequency effects were observed in spoken as well as in written production. The syllable frequency effect in writing indicated that phonological properties (i.e., syllabic frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the syllable frequency effect over repetitions was divergent in the two modalities: it was significant in the first two repetitions in spoken production, whereas it was significant only in the second repetition in written production. Due to the fragility of the syllable frequency effect in writing, we suggest that the phonological influence in handwritten production is not mandatory or universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis, rather than the phonological mediation hypothesis. The absence of an interaction between word frequency and syllable frequency showed that the syllable frequency effect is independent of the word frequency effect in spoken and written output modalities. The implications of these results for written production models are discussed.

  3. The reciprocal relations between morphological processes and reading.

    Science.gov (United States)

    Kruk, Richard S; Bergman, Krista

    2013-01-01

    Reciprocal relations between emerging morphological processes and reading skills were examined in a longitudinal study tracking children from Grade 1 through Grade 3. The aim was to examine predictive relationships between productive morphological processing involving composing and decomposing of inflections and derivations, reading ability for pseudoword and word decoding, and word and passage reading comprehension after controlling for initial abilities in reading, morphological processing, phonological awareness, and vocabulary. Reciprocal influences were indicated by predictive relations among initial morphological processes and later reading abilities co-occurring with relationships between initial reading abilities and later morphological processes. Using multilevel modeling, decomposing and composing were found to predict emerging word decoding and word and passage comprehension but not pseudoword decoding. Reading comprehension predicted growth in decomposing. Subsequent regression analyses of model-estimated early linear growth in predictors and later linear growth in outcomes showed that early growth in morphological processes predicted later growth in word decoding and passage comprehension. Although reciprocal relations between emerging morphological processes and reading skills were observed, the different patterns on each side of the reciprocal "coin" indicated that the mechanisms underlying predictive influences are likely different but related to quality of lexical representations. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Long-term repetition priming in spoken and written word production: evidence for a contribution of phonology to handwriting.

    Science.gov (United States)

    Damian, Markus F; Dorjee, Dusana; Stadthagen-Gonzalez, Hans

    2011-07-01

    Although it is relatively well established that access to orthographic codes in production tasks is possible via an autonomous link between meaning and spelling (e.g., Rapp, Benzing, & Caramazza, 1997), the relative contribution of phonology to orthographic access remains unclear. Two experiments demonstrated persistent repetition priming in spoken and written single-word responses, respectively. Two further experiments showed priming from spoken to written responses and vice versa, which is interpreted as reflecting a role of phonology in constraining orthographic access. A final experiment showed priming from spoken onto written responses even when participants engaged in articulatory suppression during writing. Overall, the results support the view that access to orthography codes is accomplished via both the autonomous link between meaning and spelling and an indirect route via phonology.

  5. A re-examination of (the) same using data from spoken English

    Directory of Open Access Journals (Sweden)

    Jean Wong

    2008-04-01

    Full Text Available This paper reports on a qualitative discourse analysis of 290 tokens of (the) same occurring in spoken American English. Our study of these naturally occurring tokens extends and elaborates on the analysis of this expression that was proposed by Halliday and Hasan (1976). We also review other prior research on (the) same in our attempt to provide data-based answers to the following three questions: (1) under what conditions is the definite article the obligatory or optional with same? (2) what are the head nouns that typically follow same, and why is there sometimes no head noun? (3) what type(s) of cohesive relationships can (the) same signal in spoken English discourse? Finally, we explore some typical pedagogical treatments of (the) same in current ESL/EFL textbooks and reference grammars. Then we make our own suggestions regarding how teachers of English as a second or foreign language might go about presenting this useful expression to their learners.

  6. Word learning and phonetic processing in preschool-age children.

    Science.gov (United States)

    Havy, Mélanie; Bertoncini, Josiane; Nazzi, Thierry

    2011-01-01

    Consonants and vowels have been shown to play different relative roles in different processes, including retrieving known words from pseudowords during adulthood or simultaneously learning two phonetically similar pseudowords during infancy or toddlerhood. The current study explores the extent to which French-speaking 3- to 5-year-olds exhibit a so-called "consonant bias" in a task simulating word acquisition, that is, when learning new words for unfamiliar objects. In Experiment 1, the to-be-learned words differed both by a consonant and a vowel (e.g., /byf/-/duf/), and children needed to choose which of the two objects to associate with a third one whose name differed from both objects by either a consonant or a vowel (e.g., /dyf/). In such a conflict condition, children needed to favor (or neglect) either consonant information or vowel information. The results show that only 3-year-olds preferentially chose the consonant identity, thereby neglecting the vowel change. The older children (and adults) did not exhibit any response bias. In Experiment 2, children needed to pick up one of two objects whose names differed on either consonant information or vowel information. Whereas 3-year-olds performed better with pairs of pseudowords contrasting on consonants, the pattern of asymmetry was reversed in 4-year-olds, and 5-year-olds did not exhibit any significant response bias. Interestingly, girls showed overall better performance and exhibited earlier changes in performance than boys. The changes in consonant/vowel asymmetry in preschoolers are discussed in relation with developments in linguistic (lexical and morphosyntactic) and cognitive processing. Copyright © 2010 Elsevier Inc. All rights reserved.

  7. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition.

    Science.gov (United States)

    Jesse, Alexandra; McQueen, James M

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., 'ca-vi from cavia "guinea pig" vs. 'ka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-'jec from projector "projector" vs. 'pro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.

  8. Brain Basis of Phonological Awareness for Spoken Language in Children and Its Disruption in Dyslexia

    Science.gov (United States)

    Norton, Elizabeth S.; Christodoulou, Joanna A.; Gaab, Nadine; Lieberman, Daniel A.; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D. E.

    2012-01-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7–13) and a younger group of kindergarteners (ages 5–6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia. PMID:21693783

  9. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    Science.gov (United States)

    Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2018-01-01

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime awake. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  10. Sign Language and Spoken Language for Children With Hearing Loss: A Systematic Review.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung

    2016-01-01

    Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.

  11. Applicability of the Spoken Knowledge in Low Literacy Patients with Diabetes in Brazilian elderly.

    Science.gov (United States)

    Souza, Jonas Gordilho; Apolinario, Daniel; Farfel, José Marcelo; Jaluul, Omar; Magaldi, Regina Miksian; Busse, Alexandre Leopold; Campora, Flávia; Jacob-Filho, Wilson

    2016-01-01

    To translate, adapt and evaluate the properties of a Brazilian Portuguese version of the Spoken Knowledge in Low Literacy Patients with Diabetes, a questionnaire that evaluates diabetes knowledge. A cross-sectional study with type 2 diabetes patients aged ≥60 years, seen at a public healthcare organization in the city of Sao Paulo (SP). After the development of the Portuguese version, we evaluated the psychometric properties and the association with sociodemographic and clinical variables. The regression models were adjusted for sociodemographic data, functional health literacy, duration of disease, use of insulin, and glycemic control. We evaluated 129 type 2 diabetic patients, with mean age of 75.9 (±6.2) years, mean schooling of 5.2 (±4.4) years, mean glycosylated hemoglobin of 7.2% (±1.4), and mean score on the Spoken Knowledge in Low Literacy Patients with Diabetes of 42.1% (±25.8). In the regression model, the variables independently associated with the Spoken Knowledge in Low Literacy Patients with Diabetes were schooling (B=0.193; p=0.003), use of insulin (B=1.326; p=0.004), duration of diabetes (B=0.053; p=0.022) and health literacy (B=0.108; p=0.021). The coefficient of determination was 0.273. Cronbach's alpha was 0.75, demonstrating appropriate internal consistency. This translated version of the Spoken Knowledge in Low Literacy Patients with Diabetes showed to be adequate to evaluate diabetes knowledge in elderly patients with low schooling levels. It presented normal distribution and adequate internal consistency, with no ceiling or floor effect. The tool is easy to use, can be applied quickly, and does not depend on reading skills.
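
    The internal-consistency figure reported above follows from the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A sketch on simulated 0/1 answers (not the study's data):

        import numpy as np

        def cronbach_alpha(scores):
            """scores: respondents x items matrix."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_var = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_var / total_var)

        rng = np.random.default_rng(2)
        ability = rng.normal(size=(129, 1))          # latent diabetes knowledge
        answers = (rng.random((129, 10))
                   < 1 / (1 + np.exp(-ability))).astype(int)
        print(round(cronbach_alpha(answers), 2))     # correlated items raise alpha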

  12. A Transcription Scheme for Languages Employing the Arabic Script Motivated by Speech Processing Application

    National Research Council Canada - National Science Library

    Ganjavi, Shadi; Georgiou, Panayiotis G; Narayanan, Shrikanth

    2004-01-01

    ... (The DARPA Babylon Program; Narayanan, 2003). In this paper, we discuss transcription systems needed for automated spoken language processing applications in Persian, which uses the Arabic script for writing...

  13. Auditory-verbal therapy for promoting spoken language development in children with permanent hearing impairments.

    Science.gov (United States)

    Brennan-Jones, Christopher G; White, Jo; Rush, Robert W; Law, James

    2014-03-12

    Congenital or early-acquired hearing impairment poses a major barrier to the development of spoken language and communication. Early detection and effective (re)habilitative interventions are essential for parents and families who wish their children to achieve age-appropriate spoken language. Auditory-verbal therapy (AVT) is a (re)habilitative approach aimed at children with hearing impairments. AVT comprises intensive early intervention therapy sessions with a focus on audition, technological management and involvement of the child's caregivers in therapy sessions; it is typically the only therapy approach used to specifically promote avoidance or exclusion of non-auditory facial communication. The primary goal of AVT is to achieve age-appropriate spoken language and for this to be used as the primary or sole method of communication. AVT programmes are expanding throughout the world; however, little evidence can be found on the effectiveness of the intervention. To assess the effectiveness of auditory-verbal therapy (AVT) in developing receptive and expressive spoken language in children who are hearing impaired. CENTRAL, MEDLINE, EMBASE, PsycINFO, CINAHL, speechBITE and eight other databases were searched in March 2013. We also searched two trials registers and three theses repositories, checked reference lists and contacted study authors to identify additional studies. The review considered prospective randomised controlled trials (RCTs) and quasi-randomised studies of children (birth to 18 years) with a significant (≥ 40 dBHL) permanent (congenital or early-acquired) hearing impairment, undergoing a programme of auditory-verbal therapy, administered by a certified auditory-verbal therapist for a period of at least six months. Comparison groups considered for inclusion were waiting list and treatment as usual controls. Two review authors independently assessed titles and abstracts identified from the searches and obtained full-text versions of all potentially

  14. Prediction of Audience Response from Spoken Sequences, Speech Pauses and Co-speech Gestures in Humorous Discourse by Barack Obama

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2017-01-01

    In this paper, we aim to predict audience response from simple spoken sequences, speech pauses and co-speech gestures in annotated video- and audio-recorded speeches by Barack Obama at the Annual White House Correspondents’ Association Dinner in 2011 and 2016. At these dinners, the American president mocks himself, his collaborators, political adversaries and the press corps, making the audience react with cheers, laughter and/or applause. The results of the prediction experiment demonstrate that information about spoken sequences, pauses and co-speech gestures by Obama can be used to predict...

  15. "Poetry Is Not a Special Club": How Has an Introduction to the Secondary Discourse of Spoken Word Made Poetry a Memorable Learning Experience for Young People?

    Science.gov (United States)

    Dymoke, Sue

    2017-01-01

    This paper explores the impact of a Spoken Word Education Programme (SWEP hereafter) on young people's engagement with poetry in a group of schools in London, UK. It does so with reference to the secondary Discourses of school-based learning and the Spoken Word community, an artistic "community of practice" into which they were being…

  16. Learning dialog act processing

    OpenAIRE

    Wermter, Stefan; Löchel, Matthias

    1996-01-01

    In this paper we describe a new approach for learning dialog act processing. In this approach we integrate a symbolic semantic segmentation parser with a learning dialog act network. In order to cope with the unforeseeable errors and variations of spoken language, we have concentrated on robust data-driven learning. This approach already compares favorably with the statistical average plausibility method, produces a segmentation and dialog act assignment for all utterances in a robust manner,...

  17. Children's Spoken Word Recognition and Contributions to Phonological Awareness and Nonword Repetition: A 1-Year Follow-Up

    Science.gov (United States)

    Metsala, Jamie L.; Stavrinos, Despina; Walley, Amanda C.

    2009-01-01

    This study examined effects of lexical factors on children's spoken word recognition across a 1-year time span, and contributions to phonological awareness and nonword repetition. Across the year, children identified words based on less input on a speech-gating task. For word repetition, older children improved for the most familiar words. There…

  18. The differential effects of written and spoken presentation for the modification of interpretation and judgmental bias in children.

    Science.gov (United States)

    Vassilopoulos, Stephanos P; Blackwell, Simon E; Misailidi, Plousia; Kyritsi, Alexandra; Ayfanti, Maria

    2014-09-01

    Interpretation training programs, in which individuals are trained to interpret ambiguous scenarios in either a negative or benign way, have proven effective in altering anxiety-related cognitive biases in both children and adults. The current study investigated whether the effects of the interpretation training procedure in children are differentiated according to the mode of presentation of the training. Ninety-four primary school children (aged 10-12 years) scoring above the mean on a social anxiety scale were randomly allocated to four groups, in which they were trained using written or spoken presentation of training materials in either the negative or benign direction. For the negative training, children who heard the training material spoken aloud (spoken presentation) made more negative interpretations of ambiguous social events, compared to children who read the training material (written presentation). However, for the benign training, there was less clear evidence for a differentiation of the effects between the two modes of presentation, although children in the spoken presentation group performed better in a stressful task and showed a trend to rate their mood as more positive after the task than children in the written presentation group. These results not only further our understanding of the genesis of cognitive bias in children, but also highlight the need for further investigation of how to optimize the effectiveness of interpretation training in children.

  19. Influence of Spoken Language on the Initial Acquisition of Reading/Writing: Critical Analysis of Verbal Deficit Theory

    Science.gov (United States)

    Ramos-Sanchez, Jose Luis; Cuadrado-Gordillo, Isabel

    2004-01-01

    This article presents the results of a quasi-experimental study of whether there exists a causal relationship between spoken language and the initial learning of reading/writing. The subjects were two matched samples each of 24 preschool pupils (boys and girls), controlling for certain relevant external variables. It was found that there was no…

  20. Word Detection in Sung and Spoken Sentences in Children With Typical Language Development or With Specific Language Impairment.

    Science.gov (United States)

    Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine

    2015-01-01

    Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. In Experiment 1, 69 children with TLD (7-10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7-12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Word detection abilities improved with age in both TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection.

  1. The Development and Validation of the "Academic Spoken English Strategies Survey (ASESS)" for Non-Native English Speaking Graduate Students

    Science.gov (United States)

    Schroeder, Rui M.

    2016-01-01

    This study reports on the three-year development and validation of a new assessment tool--the Academic Spoken English Strategies Survey (ASESS). The questionnaire is the first of its kind to assess the listening and speaking strategy use of non-native English speaking (NNES) graduate students. A combination of sources was used to develop the…

  2. Receipt and use of spoken and written over-the-counter medicine information: insights into Australian and UK consumers' experiences.

    Science.gov (United States)

    Tong, Vivien; Raynor, David K; Aslani, Parisa

    2018-04-01

    To explore Australian and UK consumers' receipt and use of spoken and written medicine information and examine the role of leaflets for consumers of over-the-counter (OTC) medicines. Semistructured interviews were conducted with 37 Australian and 39 UK consumers to explore information received with their most recent OTC medicine purchase, and how information was used at different times post-purchase. Interviews were audio-recorded, transcribed verbatim and thematically analysed. Similarities were evident between the key themes identified from Australian and UK consumers' experiences. Consumers infrequently sought spoken information and reported that pharmacy staff provided minimal spoken information for OTC medicines. Leaflets were not always received or wanted and had a less salient role as an information source for repeat OTC purchases. Consumers tended not to read OTC labels or leaflets. Product familiarity led to consumers tending not to seek information on labels or leaflets. When labels were consulted, directions for use were commonly read. However, OTC medicine information in general was infrequently revisited. As familiarity is not an infallible proxy for safe and effective medication use, strategies to promote the value and use of these OTC medicine information sources are important and needed. Minimal spoken information provision coupled with limited written information use may adversely impact medication safety in self-management. © 2017 Royal Pharmaceutical Society.

  3. Portuguese spoken in Almoxarife, Sao Tome: relative clauses with “ku” and “com” [with]

    Directory of Open Access Journals (Sweden)

    Carlos Filipe Guimarães Figueiredo

    2014-12-01

    Full Text Available This paper analyses relative clauses with “ku” [substrate relativizer] and “com” [with] in the Portuguese of Almoxarife, spoken by the community of Almoxarife on São Tomé island, aiming to uncover the motivations that trigger the use of these relativizers.

  4. Word Detection in Sung and Spoken Sentences in Children With Typical Language Development or With Specific Language Impairment

    Science.gov (United States)

    Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine

    2015-01-01

    Background: Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. Method: In Experiment 1, 69 children with TLD (7–10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7–12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. Results: In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Conclusions: Word detection abilities improved with age in both TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection. PMID:26767070

  5. Semantic Richness and Word Learning in Children with Hearing Loss Who Are Developing Spoken Language: A Single Case Design Study

    Science.gov (United States)

    Lund, Emily; Douglas, W. Michael; Schuele, C. Melanie

    2015-01-01

    Children with hearing loss who are developing spoken language tend to lag behind children with normal hearing in vocabulary knowledge. Thus, researchers must validate instructional practices that lead to improved vocabulary outcomes for children with hearing loss. The purpose of this study was to investigate how semantic richness of instruction…

  6. The Attitudes and Motivation of Children towards Learning Rarely Spoken Foreign Languages: A Case Study from Saudi Arabia

    Science.gov (United States)

    Al-Nofaie, Haifa

    2018-01-01

    This article discusses the attitudes and motivations of two Saudi children learning Japanese as a foreign language (hence JFL), a language which is rarely spoken in the country. Studies regarding children's motivation for learning foreign languages that are not widely spread in their contexts in informal settings are scarce. The aim of the study…

  7. Use of Spoken and Written Japanese Did Not Protect Japanese-American Men From Cognitive Decline in Late Life

    Science.gov (United States)

    Gruhl, Jonathan C.; Erosheva, Elena A.; Gibbons, Laura E.; McCurry, Susan M.; Rhoads, Kristoffer; Nguyen, Viet; Arani, Keerthi; Masaki, Kamal; White, Lon

    2010-01-01

    Objectives. Spoken bilingualism may be associated with cognitive reserve. Mastering a complicated written language may be associated with additional reserve. We sought to determine if midlife use of spoken and written Japanese was associated with lower rates of late life cognitive decline. Methods. Participants were second-generation Japanese-American men from the Hawaiian island of Oahu, born 1900–1919, free of dementia in 1991, and categorized based on midlife self-reported use of spoken and written Japanese (total n included in primary analysis = 2,520). Cognitive functioning was measured with the Cognitive Abilities Screening Instrument scored using item response theory. We used mixed effects models, controlling for age, income, education, smoking status, apolipoprotein E e4 alleles, and number of study visits. Results. Rates of cognitive decline were not related to use of spoken or written Japanese. This finding was consistent across numerous sensitivity analyses. Discussion. We did not find evidence to support the hypothesis that multilingualism is associated with cognitive reserve. PMID:20639282
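
    As an illustration of the modelling approach named in this abstract (a longitudinal mixed-effects model of cognitive scores with the listed covariates), here is a minimal sketch using statsmodels; the data file and all column names are hypothetical placeholders, not the study's variables.

```python
# Hedged sketch of a longitudinal mixed-effects model of cognitive decline.
# All file and column names below are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Long format: one row per study visit per participant.
df = pd.read_csv("cognition_visits.csv")

# Fixed effects for midlife Japanese use and the covariates from the
# abstract; random intercept and slope over visits for each participant.
model = smf.mixedlm(
    "casi_score ~ spoken_japanese + written_japanese + age + income"
    " + education + smoker + apoe_e4 + visit",
    data=df,
    groups=df["subject_id"],
    re_formula="~visit",
)
result = model.fit()
print(result.summary())
```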

  8. Early Gesture Provides a Helping Hand to Spoken Vocabulary Development for Children with Autism, Down Syndrome, and Typical Development

    Science.gov (United States)

    Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Baumann, Stephanie

    2017-01-01

    Typically developing (TD) children refer to objects uniquely in gesture (e.g., point at a cat) before they produce verbal labels for these objects ("cat"). The onset of such gestures predicts the onset of similar spoken words, showing a strong positive relation between early gestures and early words. We asked whether gesture plays the…

  9. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences

    Science.gov (United States)

    Koeritzer, Margaret A.; Rogers, Chad S.; Van Engen, Kristin J.; Peelle, Jonathan E.

    2018-01-01

    Purpose: The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. Method: We tested 30 young adults and 30 older adults. Participants heard lists of sentences in…

  10. Detecting uncertainty in spoken dialogues: an explorative research to the automatic detection of a speakers' uncertainty by using prosodic markers

    NARCIS (Netherlands)

    Dral, J.; Heylen, Dirk K.J.; op den Akker, Hendrikus J.A.; Ahmad, K.

    2008-01-01

    This paper reports results in the automatic detection of speaker uncertainty in spoken dialogues by using prosodic markers. For this purpose a substantial part of the AMI corpus (a multi-modal multi-party meeting corpus) has been selected and converted to a suitable format so its data could be analyzed.

  11. Detecting Uncertainty in Spoken Dialogues: An explorative research for the automatic detection of speaker uncertainty by using prosodic markers

    NARCIS (Netherlands)

    Dral, Jeroen; Heylen, Dirk K.J.; op den Akker, Hendrikus J.A.; Ahmad, Kurshid

    2011-01-01

    This paper reports results in automatic detection of speaker uncertainty in spoken dialogues by using prosodic markers. For this purpose a substantial part of the AMI corpus (a multi-modal multi-party meeting corpus) has been selected and converted to a suitable format so its data could be analyzed.

  12. About Development and Innovation of the Slovak Spoken Language Dialogue System

    Directory of Open Access Journals (Sweden)

    Jozef Juhár

    2009-05-01

    The research and development of the Slovak spoken language dialogue system (SLDS) is described in the paper. The dialogue system is based on the DARPA Communicator architecture and was developed in the period from July 2003 to June 2006. It consists of the Galaxy hub and telephony, automatic speech recognition, text-to-speech, backend, transport and VoiceXML dialogue management and automatic evaluation modules. The dialogue system is demonstrated and tested via two pilot applications, "Weather Forecast" and "Public Transport Timetables". The required information is retrieved from Internet resources in multi-user mode through PSTN, ISDN, GSM and/or VoIP networks. Further development since 2006 is also described in the paper.

  13. Machine Translation Projects for Portuguese at INESC ID's Spoken Language Systems Laboratory

    Directory of Open Access Journals (Sweden)

    Anabela Barreiro

    2014-12-01

    Language technologies, in particular machine translation applications, have the potential to help break down linguistic and cultural barriers, presenting an important contribution to the globalization and internationalization of the Portuguese language by allowing content to be shared 'from' and 'to' this language. This article presents the research work developed at the Spoken Language Systems Laboratory of INESC-ID in the field of machine translation, namely automated speech translation, the translation of microblogs, and the creation of a hybrid machine translation system. We focus on the hybrid system, which combines linguistic knowledge, in particular semantico-syntactic knowledge, with statistical knowledge to increase the level of translation quality.

  14. The Influence of Topic Status on Written and Spoken Sentence Production.

    Science.gov (United States)

    Cowles, H Wind; Ferreira, Victor S

    2011-12-01

    Four experiments investigate the influence of topic status and givenness on how speakers and writers structure sentences. The results of these experiments show that when a referent is previously given, it is more likely to be produced early in both sentences and word lists, confirming prior work showing that givenness increases the accessibility of given referents. When a referent is previously given and assigned topic status, it is even more likely to be produced early in a sentence, but not in a word list. Thus, there appears to be an early mention advantage for topics that is present in both written and spoken modalities, but is specific to sentence production. These results suggest that information-structure constructs like topic exert an influence that is not based only on increased accessibility, but also reflects mapping to syntactic structure during sentence production.

  15. The Role of Camp in Promoting the Participants’ Spoken English Expression

    Directory of Open Access Journals (Sweden)

    jalaluddin Jalaluddin

    2016-01-01

    The study investigated the topics of participants' spoken expression in an English camp and how the topics were discussed. A case study was applied as the research design. Data were gained from focus-group interviews, observation, and a questionnaire. The results showed that the participants talked about various topics, which could be categorized into two types, i.e. guided topics and situational topics. Guided topics were discussed by the participants in guided conditions; situational topics, on the other hand, appeared naturally with respect to the situation. The data also indicated that the participants' activeness and confidence in talking in English gradually increased during the English camp. The findings suggest that English camps be held regularly, as they can boost participants' English speaking skills.

  16. A Spoken Language Intervention for School-Aged Boys with fragile X Syndrome

    Science.gov (United States)

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2015-01-01

    Using a single case design, a parent-mediated spoken language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared story-telling using wordless picture books and targeted three empirically-derived language support strategies. All sessions were implemented via distance video-teleconferencing. Parent education sessions were followed by 12 weekly clinician coaching and feedback sessions. Data were collected weekly during independent homework and clinician observation sessions. Relative to baseline, mothers increased their use of targeted strategies and dyads increased the frequency and duration of story-related talking. Generalized effects of the intervention on lexical diversity and grammatical complexity were observed. Implications for practice are discussed. PMID:27119214

  17. A Corpus-based Linguistic Analysis on Spoken Corpus: Semantic Prosodies on “Robots”

    Directory of Open Access Journals (Sweden)

    Yunisrina Qismullah Yusuf

    2010-04-01

    This study focuses on the semantic prosody of the word "robot", based on the words that collocate with it in spoken data. The data are taken from a lecturer's talk discussing two topics: man and machines in perfect harmony, and the effective temperature of workplaces. For annotation, the UCREL CLAWS5 tagset was used, with horizontal output style. The corpus design follows ICE. The analysis reveals more positive than negative semantic prosody for "robot" in the data, with 52 occurrences classified as positive (94.5%) and 3 as negative (5.5%). The words most frequently collocated with "robot" in the data are service (8 collocations), machines (20 collocations), surgical system (15 collocations) and intelligence (13 collocations).
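
    For readers unfamiliar with the technique, a window-based collocate count of the kind reported above can be sketched in a few lines of Python. This is not the CLAWS5/ICE pipeline used in the study, and the transcript file name is a placeholder.

```python
# Count words occurring within +/- `window` tokens of a node word.
from collections import Counter

def collocates(tokens, node="robot", window=4):
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok.lower() == node:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[tokens[j].lower()] += 1
    return counts

tokens = open("lecture_transcript.txt").read().split()  # hypothetical file
print(collocates(tokens).most_common(10))
```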

  18. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning.

    Science.gov (United States)

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M

    2016-03-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest, ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success.
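
    Two of the measures named in this abstract, ALFF and node-level graph metrics, can be sketched as follows. The band limits, threshold, TR, input files and ROI index are assumptions for illustration, not the study's parameters.

```python
# Hedged sketch: ALFF for one ROI time series, plus degree and local
# efficiency of a node in a thresholded functional network.
import numpy as np
import networkx as nx
from scipy.signal import periodogram

def alff(ts, tr=2.0, band=(0.01, 0.08)):
    """One common ALFF definition: summed amplitude in the low-frequency band."""
    freqs, power = periodogram(ts, fs=1.0 / tr)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.sqrt(power[mask]).sum()

ts = np.load("stg_timeseries.npy")          # hypothetical ROI time series
print("ALFF:", alff(ts))

corr = np.load("roi_correlations.npy")      # hypothetical (n_roi, n_roi) matrix
adj = (np.abs(corr) > 0.3) & ~np.eye(len(corr), dtype=bool)
G = nx.from_numpy_array(adj.astype(int))

stg = 12                                    # hypothetical index of the left STG
print("degree:", G.degree[stg])
# Local efficiency of a node = global efficiency of its neighbourhood subgraph.
print("local efficiency:", nx.global_efficiency(G.subgraph(list(G[stg]))))
```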

  20. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    Science.gov (United States)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.

  1. Morphosyntactic constructs in the development of spoken and written Hebrew text production.

    Science.gov (United States)

    Ravid, Dorit; Zilberbuch, Shoshana

    2003-05-01

    This study examined the distribution of two Hebrew nominal structures-N-N compounds and denominal adjectives-in spoken and written texts of two genres produced by 90 native-speaking participants in three age groups: eleven/twelve-year-olds (6th graders), sixteen/seventeen-year-olds (11th graders), and adults. The two constructions are later linguistic acquisitions, part of the profound lexical and syntactic changes that occur in language development during the school years. They are investigated in the context of learning how modality (speech vs. writing) and genre (biographical vs. expository texts) affect the production of continuous discourse. Participants were asked to speak and write about two topics, one biographical, describing the life of a public figure or of a friend; and another, expository, discussing one of ten topics such as the cinema, cats, or higher academic studies. N-N compounding was found to be the main device of complex subcategorization in Hebrew discourse, unrelated to genre. Denominal adjectives are a secondary subcategorizing device emerging only during the late teen years, a linguistic resource untapped until very late, more restricted to specific text types than N-N compounding, and characteristic of expository writing. Written texts were found to be denser than spoken texts lexically and syntactically as measured by number of novel N-N compounds and denominal adjectives per clause, and in older age groups this difference was found to be more pronounced. The paper contributes to our understanding of how the syntax/lexicon interface changes with age, modality and genre in the context of later language acquisition.

  2. Insight into the neurophysiological processes of melodically intoned language with functional MRI.

    Science.gov (United States)

    Méndez Orellana, Carolina P; van de Sandt-Koenderman, Mieke E; Saliasi, Emi; van der Meulen, Ineke; Klip, Simone; van der Lugt, Aad; Smits, Marion

    2014-09-01

    Melodic Intonation Therapy (MIT) uses the melodic elements of speech to improve language production in severe nonfluent aphasia. A crucial element of MIT is the melodically intoned auditory input: the patient listens to the therapist singing a target utterance. Such input of melodically intoned language facilitates production, whereas auditory input of spoken language does not. Using a sparse sampling fMRI sequence, we examined the differential auditory processing of spoken and melodically intoned language. Nineteen right-handed healthy volunteers performed an auditory lexical decision task in an event related design consisting of spoken and melodically intoned meaningful and meaningless items. The control conditions consisted of neutral utterances, either melodically intoned or spoken. Irrespective of whether the items were normally spoken or melodically intoned, meaningful items showed greater activation in the supramarginal gyrus and inferior parietal lobule, predominantly in the left hemisphere. Melodically intoned language activated both temporal lobes rather symmetrically, as well as the right frontal lobe cortices, indicating that these regions are engaged in the acoustic complexity of melodically intoned stimuli. Compared to spoken language, melodically intoned language activated sensory motor regions and articulatory language networks in the left hemisphere, but only when meaningful language was used. Our results suggest that the facilitatory effect of MIT may, in part, depend on an auditory input which combines melody and meaning. Combined melody and meaning provide a sound basis for the further investigation of melodic language processing in aphasic patients, and eventually the neurophysiological processes underlying MIT.

  3. The Peculiarities of the Adverbs Functioning of the Dialect Spoken in the v. Shevchenkove, Kiliya district, Odessa Region

    Directory of Open Access Journals (Sweden)

    Maryna Delyusto

    2013-08-01

    The article gives new evidence about the adverb as a part of the grammatical system of the Ukrainian steppe dialect spread in the area between the Danube and the Dniester rivers. The author proves that the grammatical system of the dialect spoken in the v. Shevchenkove, Kiliya district, Odessa region is determined by the historical development of the Ukrainian language rather than the influence of neighboring dialects.

  4. Age and amount of exposure to a foreign language during childhood: behavioral and ERP data on the semantic comprehension of spoken English by Japanese children.

    Science.gov (United States)

    Ojima, Shiro; Matsuba-Kurita, Hiroko; Nakamura, Naoko; Hoshino, Takahiro; Hagiwara, Hiroko

    2011-06-01

    Children's foreign-language (FL) learning is a matter of much social as well as scientific debate. Previous behavioral research indicates that starting language learning late in life can lead to problems in phonological processing. Inadequate phonological capacity may impede lexical learning and semantic processing (phonological bottleneck hypothesis). Using both behavioral and neuroimaging data, here we examine the effects of age of first exposure (AOFE) and total hours of exposure (HOE) to English, on 350 Japanese primary-school children's semantic processing of spoken English. Children's English proficiency scores and N400 event-related brain potentials (ERPs) were analyzed in multiple regression analyses. The results showed (1) that later, rather than earlier, AOFE led to higher English proficiency and larger N400 amplitudes, when HOE was controlled for; and (2) that longer HOE led to higher English proficiency and larger N400 amplitudes, whether AOFE was controlled for or not. These data highlight the important role of amount of exposure in FL learning, and cast doubt on the view that starting FL learning earlier always produces better results. Copyright © 2011 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  5. Spoken language development in oral preschool children with permanent childhood deafness.

    Science.gov (United States)

    Sarant, Julia Z; Holt, Colleen M; Dowell, Richard C; Rickards, Field W; Blamey, Peter J

    2009-01-01

    This article documented spoken language outcomes for preschool children with hearing loss and examined the relationships between language abilities and characteristics of children such as degree of hearing loss, cognitive abilities, age at entry to early intervention, and parent involvement in children's intervention programs. Participants were evaluated using a combination of the Child Development Inventory, the Peabody Picture Vocabulary Test, and the Preschool Clinical Evaluation of Language Fundamentals depending on their age at the time of assessment. Maternal education, cognitive ability, and family involvement were also measured. Over half of the children who participated in this study had poor language outcomes overall. No significant differences were found in language outcomes on any of the measures for children who were diagnosed early and those diagnosed later. Multiple regression analyses showed that family participation, degree of hearing loss, and cognitive ability significantly predicted language outcomes and together accounted for almost 60% of the variance in scores. This article highlights the importance of family participation in intervention programs to enable children to achieve optimal language outcomes. Further work may clarify the effects of early diagnosis on language outcomes for preschool children.
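
    The multiple regression reported above can be illustrated with a short statsmodels sketch; the data file and predictor names are hypothetical stand-ins for the study's measures.

```python
# Hedged sketch of a multiple regression predicting language outcomes.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("language_outcomes.csv")   # hypothetical file
model = smf.ols(
    "language_score ~ family_participation + hearing_loss_db + cognitive_ability",
    data=df,
).fit()
print(model.rsquared)   # the abstract reports ~0.60 for the combined predictors
print(model.summary())
```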

  6. Foreign body aspiration and language spoken at home: 10-year review.

    Science.gov (United States)

    Choroomi, S; Curotta, J

    2011-07-01

    To review foreign body aspiration cases encountered over a 10-year period in a tertiary paediatric hospital, and to assess correlation between foreign body type and language spoken at home. Retrospective chart review of all children undergoing direct laryngobronchoscopy for foreign body aspiration over a 10-year period. Age, sex, foreign body type, complications, hospital stay and home language were analysed. At direct laryngobronchoscopy, 132 children had foreign body aspiration (male:female ratio 1.31:1; mean age 32 months (2.67 years)). Mean hospital stay was 2.0 days. Foreign bodies most commonly comprised food matter (53/132; 40.1 per cent), followed by non-food matter (44/132; 33.33 per cent), a negative endoscopy (11/132; 8.33 per cent) and unknown composition (24/132; 18.2 per cent). Most parents spoke English (92/132, 69.7 per cent; vs non-English-speaking 40/132, 30.3 per cent), but non-English-speaking patients had disproportionately more food foreign bodies, and significantly more nut aspirations (p = 0.0065). Results constitute level 2b evidence. Patients from non-English speaking backgrounds had a significantly higher incidence of food (particularly nut) aspiration. Awareness-raising and public education is needed in relevant communities to prevent certain foods, particularly nuts, being given to children too young to chew and swallow them adequately.

  7. Algebraic Topology of Multi-Brain Connectivity Networks Reveals Dissimilarity in Functional Patterns during Spoken Communications.

    Directory of Open Access Journals (Sweden)

    Bosiljka Tadić

    Human behaviour in various circumstances mirrors the corresponding brain connectivity patterns, which are suitably represented by functional brain networks. While the objective analysis of these networks by graph theory tools has deepened our understanding of brain functions, the multi-brain structures and connections underlying human social behaviour remain largely unexplored. In this study, we analyse the aggregate graph that maps coordination of EEG signals previously recorded during spoken communications in two groups of six listeners and two speakers. Applying an innovative approach based on the algebraic topology of graphs, we analyse higher-order topological complexes consisting of mutually interwoven cliques of high order into which the identified functional connections organise. Our results reveal that the topological quantifiers provide new suitable measures for differences in the brain activity patterns and inter-brain synchronisation between speakers and listeners. Moreover, higher topological complexity correlates with the listener's concentration on the story, confirmed by self-rating, and with closeness to the speaker's brain activity pattern, as measured by network-to-network distance. The connectivity structures of the frontal and parietal lobes consistently constitute distinct clusters, which extend across the listener group. Formally, the topology quantifiers of the multi-brain communities exceed the sum of those of the participating individuals and also reflect the listener's rated attributes of the speaker and the narrated subject. In the broader context, the study exposes the relevance of higher topological structures (besides standard graph measures) for characterising functional brain networks under different stimuli.
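
    As a simplified illustration of the clique-based notion of higher-order structure used above, the following sketch enumerates maximal cliques in a thresholded coordination graph and summarises their orders. The input matrix and threshold are assumptions, and the paper's full algebraic-topology machinery goes well beyond this.

```python
# Hedged sketch: clique-order distribution of a functional connectivity graph.
from collections import Counter
import numpy as np
import networkx as nx

corr = np.load("eeg_coordination.npy")      # hypothetical coordination matrix
adj = (np.abs(corr) > 0.5) & ~np.eye(len(corr), dtype=bool)
G = nx.from_numpy_array(adj.astype(int))

orders = Counter(len(c) for c in nx.find_cliques(G))   # maximal cliques
for k in sorted(orders):
    print(f"maximal cliques of order {k}: {orders[k]}")
```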

  8. Satisfaction with telemedicine for teaching listening and spoken language to children with hearing loss.

    Science.gov (United States)

    Constantinescu, Gabriella

    2012-07-01

    Auditory-Verbal Therapy (AVT) is an effective early intervention for children with hearing loss. The Hear and Say Centre in Brisbane offers AVT sessions to families soon after diagnosis, and about 20% of the families in Queensland participate via PC-based videoconferencing (Skype). Parent and therapist satisfaction with the telemedicine sessions was examined by questionnaire. All families had been enrolled in the telemedicine AVT programme for at least six months. Their average distance from the Hear and Say Centre was 600 km. Questionnaires were completed by 13 of the 17 parents and all five therapists. Parents and therapists generally expressed high satisfaction in the majority of the sections of the questionnaire, e.g. most rated the audio and video quality as good or excellent. All parents felt comfortable or as comfortable as face-to-face when discussing matters with the therapist online, and were satisfied or as satisfied as face-to-face with their level and their child's level of interaction/rapport with the therapist. All therapists were satisfied or very satisfied with the telemedicine AVT programme. The results demonstrate the potential of telemedicine service delivery for teaching listening and spoken language to children with hearing loss in rural and remote areas of Australia.

  9. Intelligibility of American English vowels and consonants spoken by international students in the United States.

    Science.gov (United States)

    Jin, Su-Hyun; Liu, Chang

    2014-04-01

    PURPOSE: The purpose of this study was to examine the intelligibility of English consonants and vowels produced by Chinese-native (CN) and Korean-native (KN) students enrolled in American universities. METHOD: 16 English-native (EN), 32 CN, and 32 KN speakers participated in this study. The intelligibility of 16 American English consonants and 16 vowels spoken by native and nonnative speakers of English was evaluated by EN listeners. All nonnative speakers also completed a survey of their language backgrounds. RESULTS: Although the intelligibility of consonants and diphthongs for nonnative speakers was comparable to that of native speakers, the intelligibility of monophthongs was significantly lower for CN and KN speakers than for EN speakers. Sociolinguistic factors such as the age of arrival in the United States and daily use of English, as well as a linguistic factor, difference in vowel space between native (L1) and nonnative (L2) language, partially contributed to vowel intelligibility for CN and KN groups. There was no significant correlation between the length of U.S. residency and phoneme intelligibility. CONCLUSION: Results indicated that the major difficulty in phonemic production in English for Chinese and Korean speakers is with vowels rather than consonants. This might be useful for developing training methods to improve English intelligibility for foreign students in the United States.

  10. The Role of Oral Communicative Tasks (OCT) in Developing the Spoken Proficiency of Engineering Students

    Directory of Open Access Journals (Sweden)

    S. Shantha

    2017-04-01

    The mastery of speaking skills in English has become a major requisite in the engineering industry. Engineers are expected to possess speaking skills for executing their routine activities and for career prospects. The article focuses on an experimental study conducted to improve the English spoken proficiency of Indian engineering students using a task-based approach. Tasks are activities that centre on the learners, providing the main context and focus for learning; a task therefore leads learners to use language rather than merely learn about it. The article further explores the pivotal role played by the pedagogical intervention in enabling the learners to improve their speaking skill in L2. The participants chosen for the control and experimental groups were first-year civil engineering students, 38 in each group. The vital tool used in the study is the set of oral communicative tasks administered to the experimental group, which enabled the students to think and generate sentences on their own orally. A t test was computed to compare the performance of the students in the control and experimental groups. The results of the statistical analysis revealed a significant improvement in the oral proficiency of the experimental group.
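
    The group comparison described above amounts to an independent-samples t test, sketched below with placeholder scores in place of the study's data.

```python
# Hedged sketch of the t test comparing control and experimental groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(60, 10, 38)            # placeholder post-test scores
experimental = rng.normal(68, 10, 38)       # placeholder post-test scores

t, p = stats.ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p:.4f}")
```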

  11. Olomouc Corpus of Spoken Czech: characterization and main features of the project

    Directory of Open Access Journals (Sweden)

    Pořízka, Petr

    2009-01-01

    This study presents the results of the author's research project, the Olomouc Corpus of Spoken Czech (OCSC). The paper focuses on the state and phases of constructing the corpus, its methodology and annotation. Within the OCSC a so-called dual system of transcription is used: (1) an orthographic transcription for the purpose of linguistic (morphological) analysis and tagging, and (2) a phonetic transcription consisting of three layers of text: the actual transcription, plus various types of metatexts as the second and third layers, including communicative aspects of the texts. The criteria for the selection of speakers are also listed, and a statistical analysis of the sociolinguistic categories (gender, age, type of education, types of recordings) is presented as well. This analysis can serve as a basis for partially correcting possible imbalance among these sociolinguistic parameters. The annotation rules and principles are described at the end of the study.

  12. A functional analysis of indeterminate Subject in Popular Portuguese spoken in São Paulo

    Directory of Open Access Journals (Sweden)

    Deize Crespim Pereira

    2013-08-01

    Using the theoretical and methodological tools of Functional Linguistics, this paper presents a quantitative analysis of pronominal and verbal forms expressing subject indeterminacy in the Popular Portuguese spoken in São Paulo. The data consist of 23 interviews from the Popular Portuguese Project in São Paulo: recordings of the speech of adults who are illiterate or have only a few years of schooling, both immigrants and people born in São Paulo, living in slums in the capital. The analysis demonstrates that (i) there are many forms that express subject indeterminacy in Popular Portuguese; (ii) since indeterminacy is a matter of degree, some forms are more indeterminate than others, which are accompanied by referential clues in the text or context; (iii) these forms can have different functions according to the context in which they appear; and (iv) they can refer to from one to three grammatical persons when they reach a high degree of generalization.

  13. Does segmental overlap help or hurt? Evidence from blocked cyclic naming in spoken and written production.

    Science.gov (United States)

    Breining, Bonnie; Nozari, Nazbanou; Rapp, Brenda

    2016-04-01

    Past research has demonstrated interference effects when words are named in the context of multiple items that share a meaning. This interference has been explained within various incremental learning accounts of word production, which propose that each attempt at mapping semantic features to lexical items induces slight but persistent changes that result in cumulative interference. We examined whether similar interference-generating mechanisms operate during the mapping of lexical items to segments by examining the production of words in the context of others that share segments. Previous research has shown that initial-segment overlap amongst a set of target words produces facilitation, not interference. However, this initial-segment facilitation is likely due to strategic preparation, an external factor that may mask underlying interference. In the present study, we applied a novel manipulation in which the segmental overlap across target items was distributed unpredictably across word positions, in order to reduce strategic response preparation. This manipulation led to interference in both spoken (Exp. 1) and written (Exp. 2) production. We suggest that these findings are consistent with a competitive learning mechanism that applies across stages and modalities of word production.

  14. Identification of four class emotion from Indonesian spoken language using acoustic and lexical features

    Science.gov (United States)

    Kasyidi, Fatan; Puji Lestari, Dessi

    2018-03-01

    One of the important aspects of human-to-human communication is understanding the emotion of each party. Interactions between humans and computers continue to develop, especially affective interaction, in which emotion recognition is an important component. This paper presents our extended work on emotion recognition in spoken Indonesian to identify four main classes of emotion: happy, sad, angry, and contentment, using a combination of acoustic/prosodic features and lexical features. We construct an emotional speech corpus from Indonesian television talk shows, where the situations are as close as possible to natural situations. After constructing the corpus, the acoustic/prosodic and lexical features are extracted to train the emotion model. We employ several machine learning algorithms, such as Support Vector Machine (SVM), Naive Bayes, and Random Forest, to obtain the best model. Results on the test data show that the best model, an SVM with an RBF kernel, achieves an F-measure of 0.447 using only the acoustic/prosodic features and 0.488 using both acoustic/prosodic and lexical features to recognize the four emotion classes.
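
    The classification setup described above (an RBF-kernel SVM over combined feature vectors, scored by F-measure) can be sketched with scikit-learn; feature extraction is assumed already done, and the input arrays are hypothetical.

```python
# Hedged sketch: four-class emotion recognition with an RBF-kernel SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import f1_score

X = np.load("features.npy")   # per-utterance acoustic/prosodic + lexical features
y = np.load("labels.npy")     # 0..3: happy, sad, angry, contentment

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```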

  15. The role of visual representations during the lexical access of spoken words.

    Science.gov (United States)

    Lewis, Gwyneth; Poeppel, David

    2014-07-01

    Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability--concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. Copyright © 2014 Elsevier Inc. All rights reserved.
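
    The abstract's "correlational time course analysis" is not spelled out here, but its general shape is plausibly the following: at each time point, correlate trial-by-trial ROI activation with a continuous item variable such as imageability. The sketch below is a guess under that assumption, with hypothetical array names and shapes.

```python
# Hedged sketch: per-timepoint correlation of trial activation with imageability.
import numpy as np
from scipy.stats import pearsonr

activation = np.load("roi_trials_by_time.npy")   # shape (n_trials, n_timepoints)
imageability = np.load("imageability.npy")       # shape (n_trials,)

r_course = np.array(
    [pearsonr(activation[:, t], imageability)[0] for t in range(activation.shape[1])]
)
print("peak |r| at time index", np.abs(r_course).argmax())
```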

  16. Listening in circles. Spoken drama and the architects of sound, 1750-1830.

    Science.gov (United States)

    Tkaczyk, Viktoria

    2014-07-01

    The establishment of the discipline of architectural acoustics is generally attributed to the physicist Wallace Clement Sabine, who developed the formula for reverberation time around 1900, and with it the possibility of making calculated prognoses about the acoustic potential of a particular design. If, however, we shift the perspective from the history of this discipline to the history of architectural knowledge and praxis, it becomes apparent that the topos of 'good sound' had already entered the discourse much earlier. This paper traces the Europe-wide discussion on theatre architecture between 1750 and 1830. It will be shown that the period of investigation is marked by an increasing interest in auditorium acoustics, one linked to the emergence of a bourgeois theatre culture and the growing socio-political importance of the spoken word. In the wake of this development the search among architects for new methods of acoustic research started to differ fundamentally from an analogical reasoning on the nature of sound propagation and reflection, which in part dated back to antiquity. Through their attempts to find new ways of visualising the behaviour of sound in enclosed spaces and to rethink both the materiality and the mediality of theatre auditoria, architects helped pave the way for the establishment of architectural acoustics as an academic discipline around 1900.

  17. Repeats in advanced spoken English of learners with Czech as L1

    Directory of Open Access Journals (Sweden)

    Tomáš Gráf

    2017-09-01

    The article reports on the findings of an empirical study of the use of repeats, one of the markers of disfluency, in advanced learner English, and contributes to the study of L2 fluency. An analysis of 13 hours of recordings of interviews with 50 advanced learners of English with Czech as L1 revealed 1,905 instances of repeats, which mainly (78%) consisted of one-word repeats occurring at the beginning of clauses and constituents. Two-word repeats were less frequent (19%) but appeared in the same positions within the utterances. Longer repeats are much rarer (<2.5%). A comparison with available analyses shows that Czech advanced learners of English use repeats in a similar way to advanced learners of English with a different L1, and also to native speakers. If repeats are accepted as fluencemes, i.e. components contributing to fluency, it would appear clear that many advanced learners successfully adopt this native-like strategy, either as a result of exposure to native speech or as transfer from their L1s. Whilst a question remains whether such fluency-enhancing strategies ought to become part of L2 instruction, it is argued that spoken learner corpora also ought to include samples of the learners' L1 production.
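
    Counting one- and two-word immediate repeats of the kind analysed above can be sketched in plain Python; real disfluency coding is more careful (it excludes deliberate rhetorical repetition, for instance), so treat this as illustrative only.

```python
# Hedged sketch: count immediate one- and two-word repeats in a token list.
def count_repeats(tokens):
    one = sum(1 for a, b in zip(tokens, tokens[1:]) if a == b)
    two = sum(
        1
        for i in range(len(tokens) - 3)
        if tokens[i : i + 2] == tokens[i + 2 : i + 4]
    )
    return one, two

tokens = "well i i think that is that is fine".split()
print(count_repeats(tokens))  # -> (1, 1)
```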

  18. AlignTool: The automatic temporal alignment of spoken utterances in German, Dutch, and British English for psycholinguistic purposes.

    Science.gov (United States)

    Schillingmann, Lars; Ernst, Jessica; Keite, Verena; Wrede, Britta; Meyer, Antje S; Belke, Eva

    2018-01-29

    In language production research, the latency with which speakers produce a spoken response to a stimulus and the onset and offset times of words in longer utterances are key dependent variables. Measuring these variables automatically often yields partially incorrect results. However, exact measurements through the visual inspection of the recordings are extremely time-consuming. We present AlignTool, an open-source alignment tool that establishes preliminarily the onset and offset times of words and phonemes in spoken utterances using Praat, and subsequently performs a forced alignment of the spoken utterances and their orthographic transcriptions in the automatic speech recognition system MAUS. AlignTool creates a Praat TextGrid file for inspection and manual correction by the user, if necessary. We evaluated AlignTool's performance with recordings of single-word and four-word utterances as well as semi-spontaneous speech. AlignTool performs well with audio signals with an excellent signal-to-noise ratio, requiring virtually no corrections. For audio signals of lesser quality, AlignTool still is highly functional but its results may require more frequent manual corrections. We also found that audio recordings including long silent intervals tended to pose greater difficulties for AlignTool than recordings filled with speech, which AlignTool analyzed well overall. We expect that by semi-automatizing the temporal analysis of complex utterances, AlignTool will open new avenues in language production research.
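
    This is not AlignTool itself, but the preliminary onset estimation it performs via Praat can be approximated with the Parselmouth interface to Praat: find the first frame at which the intensity contour exceeds a threshold. The file name and threshold below are illustrative.

```python
# Hedged sketch: threshold-based speech onset estimation via Praat/Parselmouth.
import parselmouth

snd = parselmouth.Sound("response.wav")   # hypothetical recording
intensity = snd.to_intensity()
times = intensity.xs()                    # frame times in seconds
values = intensity.values[0]              # intensity contour in dB

threshold = 50.0                          # dB; would be tuned per recording
onset = next((t for t, v in zip(times, values) if v > threshold), None)
if onset is None:
    print("no frame exceeded the threshold")
else:
    print(f"estimated speech onset: {onset:.3f} s")
```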

  19. "They never realized that, you know": linguistic collocation and interactional functions of you know in contemporary academin spoken english

    Directory of Open Access Journals (Sweden)

    Rodrigo Borba

    2012-12-01

    Discourse markers are a collection of one-word or multiword terms that help language users organize their utterances on the grammatical, semantic, pragmatic and interactional levels. Researchers have characterized some of their roles in written and spoken discourse (Halliday & Hasan, 1976; Schiffrin, 1988, 2001). Following this trend, this paper advances a discussion of discourse markers in contemporary academic spoken English. Through quantitative and qualitative analyses of the use of the discourse marker 'you know' in the Michigan Corpus of Academic Spoken English (MICASE), we describe its frequency in this corpus, its collocation on the sentence level and its interactional functions. Grammatically, a concordance analysis shows that you know (like other discourse markers) is linguistically flexible, as it can be placed in almost any grammatical slot of an utterance. Interactionally, a qualitative analysis indicates that its use in contemporary English goes beyond the uses described in the literature. We argue that besides serving as a hedging strategy (Lakoff, 1975), you know also serves as a powerful face-saving technique (Goffman, 1955) which constructs students' identities vis-à-vis their professors' and vice versa.
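
    A bare-bones concordancer for the bigram "you know", of the sort one might run over transcripts, is sketched below; the input file is a placeholder, and no claim is made about MICASE's actual format.

```python
# Hedged sketch: print keyword-in-context lines for the bigram "you know".
def concordance(tokens, window=5):
    for i in range(len(tokens) - 1):
        if tokens[i].lower() == "you" and tokens[i + 1].lower() == "know":
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 2:i + 2 + window])
            print(f"{left:>40} [you know] {right}")

tokens = open("transcript.txt").read().split()   # hypothetical file
concordance(tokens)
```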

  20. The effects of sign language on spoken language acquisition in children with hearing loss: a systematic review protocol.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David

    2013-12-06

    Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity

  1. Birds, primates, and spoken language origins: behavioral phenotypes and neurobiological substrates.

    Science.gov (United States)

    Petkov, Christopher I; Jarvis, Erich D

    2012-01-01

    Vocal learners such as humans and songbirds can learn to produce elaborate patterns of structurally organized vocalizations, whereas many other vertebrates such as non-human primates and most other bird groups either cannot or do so to a very limited degree. To explain the similarities among humans and vocal-learning birds and the differences with other species, various theories have been proposed. One set of theories are motor theories, which underscore the role of the motor system as an evolutionary substrate for vocal production learning. For instance, the motor theory of speech and song perception proposes enhanced auditory perceptual learning of speech in humans and song in birds, which suggests a considerable level of neurobiological specialization. Another, a motor theory of vocal learning origin, proposes that the brain pathways that control the learning and production of song and speech were derived from adjacent motor brain pathways. Another set of theories are cognitive theories, which address the interface between cognition and the auditory-vocal domains to support language learning in humans. Here we critically review the behavioral and neurobiological evidence for parallels and differences between the so-called vocal learners and vocal non-learners in the context of motor and cognitive theories. In doing so, we note that, behaviorally, vocal-production learning abilities are more distributed than categorical, as are the auditory-learning abilities of animals. We propose testable hypotheses on the extent of the specializations and cross-species correspondences suggested by motor and cognitive theories. We believe that determining how spoken language evolved is likely to become clearer with concerted efforts in testing comparative data from many non-human animal species. PMID:22912615

  2. The Spoken Knowledge in Low Literacy in Diabetes scale: a diabetes knowledge scale for vulnerable patients.

    Science.gov (United States)

    Rothman, Russell L; Malone, Robb; Bryant, Betsy; Wolfe, Catherine; Padgett, Penelope; DeWalt, Darren A; Weinberger, Morris; Pignone, Michael

    2005-01-01

    The purpose of this study was to develop and validate a new knowledge scale for patients with type 2 diabetes and poor literacy: the Spoken Knowledge in Low Literacy patients with Diabetes (SKILLD). The authors evaluated the 10-item SKILLD among 217 patients with type 2 diabetes and poor glycemic control at an academic general medicine clinic. Internal reliability was measured using the Kuder-Richardson coefficient. Performance on the SKILLD was compared to patient socioeconomic status, literacy level, duration of diabetes, and glycated hemoglobin (A1C). Respondents' mean age was 55 years, and they had diabetes for an average of 8.4 years; 38% had less than a sixth-grade literacy level. The average score on the SKILLD was 49%. Less than one third of patients knew the signs of hypoglycemia or the normal fasting blood glucose range. The internal reliability of the SKILLD was good (0.72). Higher performance on the SKILLD was significantly correlated with higher income (r = 0.22), education level (r = 0.36), literacy status (r = 0.33), duration of diabetes (r = 0.30), and lower A1C (r = -0.16). When dichotomized, patients with low SKILLD scores (≤50%) had significantly higher A1C (11.2% vs 10.3%, P < .01). This difference remained significant when adjusted for covariates. The SKILLD demonstrated good internal consistency and validity. It revealed significant knowledge deficits and was associated with glycemic control. The SKILLD represents a practical scale for patients with diabetes and low literacy.
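
    The internal-reliability figure quoted above is a Kuder-Richardson 20 coefficient, KR20 = k/(k-1) * (1 - Σ p_i q_i / σ²_total), which is straightforward to compute from a respondents-by-items matrix of 0/1 scores; the input file below is hypothetical.

```python
# Hedged sketch: Kuder-Richardson 20 from binary item responses.
import numpy as np

X = np.loadtxt("skilld_responses.txt")   # shape (n_respondents, 10), values 0/1
k = X.shape[1]
p = X.mean(axis=0)                       # proportion answering each item correctly
q = 1 - p
total_var = X.sum(axis=1).var(ddof=1)    # variance of respondents' total scores

kr20 = (k / (k - 1)) * (1 - (p * q).sum() / total_var)
print(f"KR-20 = {kr20:.2f}")
```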

  3. Engaging Minority Youth in Diabetes Prevention Efforts Through a Participatory, Spoken-Word Social Marketing Campaign.

    Science.gov (United States)

    Rogers, Elizabeth A; Fine, Sarah C; Handley, Margaret A; Davis, Hodari B; Kass, James; Schillinger, Dean

    2017-07-01

    To examine the reach, efficacy, and adoption of The Bigger Picture, a type 2 diabetes (T2DM) social marketing campaign that uses spoken-word public service announcements (PSAs) to teach youth about socioenvironmental conditions influencing T2DM risk. A nonexperimental pilot dissemination evaluation through high school assemblies and a Web-based platform was used. The study took place in San Francisco Bay Area high schools during 2013. In the study, 885 students were sampled from 13 high schools. A 1-hour assembly provided data, poet performances, video PSAs, and Web-based platform information. A Web-based platform featured the campaign Web site and social media. Student surveys preassembly and postassembly (knowledge, attitudes), assembly observations, school demographics, counts of Web-based utilization, and adoption were measured. Descriptive statistics, McNemar's χ2 test, and mixed modeling accounting for clustering were used to analyze data. The campaign included 23 youth poet-created PSAs. It reached >2400 students (93% self-identified non-white) through school assemblies and has garnered >1,000,000 views of Web-based video PSAs. School participants demonstrated increased short-term knowledge of T2DM as preventable, with risk driven by socioenvironmental factors (34% preassembly identified environmental causes as influencing T2DM risk compared to 83% postassembly), and perceived greater personal salience of T2DM risk reduction (p < .001 for all). The campaign has been adopted by regional public health departments. The Bigger Picture campaign showed its potential for reaching and engaging diverse youth. Campaign messaging is being adopted by stakeholders.

  4. INDIVIDUAL ACCOUNTABILITY IN COOPERATIVE LEARNING: MORE OPPORTUNITIES TO PRODUCE SPOKEN ENGLISH

    Directory of Open Access Journals (Sweden)

    Puji Astuti

    2017-05-01

    The contribution of cooperative learning (CL) in promoting second and foreign language learning has been widely acknowledged. Little scholarly attention, however, has been given to revealing how this teaching method works and promotes learners' improved communicative competence. This qualitative case study explores the important role that individual accountability in CL plays in giving English as a Foreign Language (EFL) learners in Indonesia the opportunity to use the target language of English. While individual accountability is a principle of and one of the activities in CL, it is currently understudied, thus little is known about how it enhances EFL learning. This study aims to address this gap by conducting a constructivist grounded theory analysis of participant observation, in-depth interview, and document analysis data drawn from two secondary school EFL teachers, 77 students in the observed classrooms, and four focal students. The analysis shows that through individual accountability in CL, the EFL learners had opportunities to use the target language, which may have contributed to the attainment of communicative competence, the goal of the EFL instruction. More specifically, compared to the use of conventional group work in the observed classrooms, through the activities of individual accountability in CL, i.e., performances and peer interaction, the EFL learners had more opportunities to use spoken English. The present study recommends that teachers, especially those new to CL, follow the preset procedure of selected CL instructional strategies or structures in order to recognize the activities within individual accountability in CL and understand how these activities benefit students.

  6. Vývoj sociální kognice českých neslyšících dětí — uživatelů českého znakového jazyka a uživatelů mluvené češtiny: adaptace testové baterie : Development of Social Cognition in Czech Deaf Children — Czech Sign Language Users and Czech Spoken Language Users: Adaptation of a Test Battery

    Directory of Open Access Journals (Sweden)

    Andrea Hudáková

    2017-11-01

    The present paper describes the process of adapting a set of tasks for testing theory-of-mind competencies, the Theory of Mind Task Battery, for use with the population of Czech Deaf children — both users of Czech Sign Language and those using spoken Czech.

  7. Reply to David Kemmerer's "a critique of Mark D. Allen's 'the preservation of verb subcategory knowledge in a spoken language comprehension deficit'".

    Science.gov (United States)

    Allen, Mark D; Owens, Tyler E

    2008-07-01

    Allen [Allen, M. D. (2005). The preservation of verb subcategory knowledge in a spoken language comprehension deficit. Brain and Language, 95, 255-264] presents evidence from a single patient, WBN, to motivate a theory of lexical processing and representation in which syntactic information may be encoded and retrieved independently of semantic information. In his critique, Kemmerer argues that because Allen depended entirely on preposition-based verb subcategory violations to test WBN's knowledge of correct argument structure, his results, at best, address a "strawman" theory. This argument rests on the assumption that preposition subcategory options are superficial syntactic phenomena which are not represented by argument structure proper. We demonstrate that preposition subcategory is in fact treated as semantically determined argument structure in the theories that Allen evaluated, and thus far from irrelevant. In further discussion of grammatically relevant versus irrelevant semantic features, Kemmerer offers a review of his own studies. However, due to an important design shortcoming in these experiments, we remain unconvinced. Reemphasizing the fact that Allen (2005) never claimed to rule out all semantic contributions to syntax, we propose an improvement in Kemmerer's approach that might provide more satisfactory evidence on the distinction between the kinds of relevant versus irrelevant features his studies have addressed.

  8. Emergent Literacy Skills in Preschool Children With Hearing Loss Who Use Spoken Language: Initial Findings From the Early Language and Literacy Acquisition (ELLA) Study.

    Science.gov (United States)

    Werfel, Krystal L

    2017-10-05

    The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed measures of oral language, phonological processing, and print knowledge twice at a 6-month interval. A series of repeated-measures analyses of variance were used to compare change across groups. Main effects of time were observed for all variables except phonological recoding. Main effects of group were observed for vocabulary, morphosyntax, phonological memory, and concepts of print. Interaction effects were observed for phonological awareness and concepts of print. Children with hearing loss performed more poorly than children with normal hearing on measures of oral language, phonological memory, and conceptual print knowledge. Two interaction effects were present. For phonological awareness and concepts of print, children with hearing loss demonstrated less positive change than children with normal hearing. Although children with hearing loss generally demonstrated positive growth in emergent literacy skills, their initial performance was lower than that of children with normal hearing, and rates of change were not sufficient to catch up to their peers over time.
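
    For readers who want to see the shape of such an analysis, a minimal sketch of a group-by-time comparison follows in Python. It is not the ELLA study's actual pipeline: the file name and the column names (child, group, time, score) are hypothetical, and pingouin's mixed ANOVA is used as one convenient stand-in for the repeated-measures analyses of variance reported above.

      # Sketch: group x time mixed ANOVA for one emergent-literacy measure.
      import pandas as pd
      import pingouin as pg

      # Hypothetical long-format data: one row per child per time point.
      df = pd.read_csv("ella_scores.csv")

      # Main effects of time and group, plus their interaction, for one outcome.
      aov = pg.mixed_anova(data=df, dv="score", within="time",
                           subject="child", between="group")
      print(aov)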

  9. The Relationship between Intrinsic Couplings of the Visual Word Form Area with Spoken Language Network and Reading Ability in Children and Adults

    Directory of Open Access Journals (Sweden)

    Yu Li

    2017-06-01

    Reading plays a key role in education and communication in modern society. Learning to read establishes connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify differences in the relationship between VWFA-language area connections and reading performance in adults and children. The results showed that: (1) the spontaneous connectivity between the VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults than in children; (2) the spontaneous functional patterns of connectivity between the VWFA and the language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from the LIFG to the VWFA was negatively correlated with reading ability in adults but not in children; (4) the RSFCs between the left posterior middle frontal gyrus (LpMFG) and the VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from the LIFG to the LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between the VWFA and the language network for reading, and into the role of the unique features of Chinese in the neural circuits of reading.
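
    To make the two connectivity measures concrete, here is a minimal Python sketch under stated assumptions: the ROI time series are random stand-ins rather than real preprocessed BOLD extractions, Pearson correlation stands in for RSFC, and statsmodels' standard Granger test stands in for the study's GCA. It illustrates the general technique, not the authors' pipeline.

      # Sketch: RSFC (correlation) and Granger causality between two ROI series.
      import numpy as np
      from scipy.stats import pearsonr
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(0)
      roi_vwfa = rng.standard_normal(200)   # stand-in for a VWFA time series
      roi_lifg = rng.standard_normal(200)   # stand-in for a LIFG time series

      # Resting-state functional connectivity as a simple Pearson correlation.
      r, p = pearsonr(roi_vwfa, roi_lifg)
      print(f"RSFC: r = {r:.3f}, p = {p:.3f}")

      # Does LIFG Granger-cause VWFA? Column order is [effect, cause].
      grangercausalitytests(np.column_stack([roi_vwfa, roi_lifg]), maxlag=2)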

  10. Gray matter structure and morphosyntax within a spoken narrative in typically developing children and children with high functioning autism.

    Science.gov (United States)

    Mills, Brian D; Lai, Janie; Brown, Timothy T; Erhart, Matthew; Halgren, Eric; Reilly, Judy; Appelbaum, Mark; Moses, Pamela

    2013-01-01

    This study examined the relationship between magnetic resonance imaging (MRI)-based measures of gray matter structure and morphosyntax production in a spoken narrative in 17 typical children (TD) and 11 children with high functioning autism (HFA) between 6 and 13 years of age. In the TD group, cortical structure was related to narrative performance in the left inferior frontal gyrus (Broca's area), the right middle frontal sulcus, and the right inferior temporal sulcus. No associations were found in children with HFA. These findings suggest a systematic coupling between brain structure and spontaneous language in TD children and a disruption of these relationships in children with HFA.

  11. Left inferior frontal gyrus mediates morphosyntax: ERP evidence from verb processing in left-hemisphere damaged patients.

    Science.gov (United States)

    Regel, Stefanie; Kotz, Sonja A; Henseler, Ilona; Friederici, Angela D

    2017-01-01

    Neurocognitive models of language comprehension have proposed different mechanisms with different neural substrates mediating human language processing. Whether the left inferior frontal gyrus (LIFG) is engaged in morpho-syntactic information processing is still controversially debated. The present study addresses this issue by examining the processing of irregular verb inflection in real words (e.g., swim > swum > swam) and pseudowords (e.g., frim > frum > fram) using event-related brain potentials (ERPs) in neurological patients with lesions in the LIFG involving Broca's area, as well as in healthy controls. Different ERP patterns in response to the grammatical violations were observed in the two groups. Controls showed a biphasic negativity-P600 pattern in response to incorrect verb inflections, whereas patients with LIFG lesions displayed an N400. For incorrect pseudoword inflections, a late positivity was found in controls, while no ERP effects were obtained in patients. These findings of different ERP patterns in the two groups strongly indicate an involvement of the LIFG in morphosyntactic processing, thereby suggesting a specialization of brain regions for different language functions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Students who are deaf and hard of hearing and use sign language: considerations and strategies for developing spoken language and literacy skills.

    Science.gov (United States)

    Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane

    2012-11-01

    There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.

  13. EVALUATIVE LANGUAGE IN SPOKEN AND SIGNED STORIES TOLD BY A DEAF CHILD WITH A COCHLEAR IMPLANT: WORDS, SIGNS OR PARALINGUISTIC EXPRESSIONS?

    Directory of Open Access Journals (Sweden)

    Ritva Takkinen

    2011-01-01

    In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish Sign Language (FinSL) and spoken Finnish. He was born deaf but got a cochlear implant at the age of five. The data consist of a spoken and a signed version of “The Frog Story”. The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices – comments on a character and the character’s actions as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.

  14. Visual attention shift to printed words during spoken word recognition in Chinese: The role of phonological information.

    Science.gov (United States)

    Shen, Wei; Qu, Qingqing; Tong, Xiuhong

    2018-01-25

    The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition, using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and the eye movements are recorded. In Experiment 1, phonological information was manipulated at full phonological overlap; in Experiment 2, it was manipulated at partial phonological overlap; and in Experiment 3, the phonological competitors were manipulated to share either full or partial overlap with targets directly. Results of the three experiments showed that phonological competitor effects were observed in both the full and partial phonological overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggests that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.

  15. The relationship between spoken English proficiency and participation in higher education, employment and income from two Australian censuses.

    Science.gov (United States)

    Blake, Helen L; Mcleod, Sharynne; Verdon, Sarah; Fuller, Gail

    2018-04-01

    Proficiency in the language of the country of residence has implications for an individual's level of education, employability, income and social integration. This paper explores the relationship between the spoken English proficiency of residents of Australia on census day and their educational level, employment and income to provide insight into multilingual speakers' ability to participate in Australia as an English-dominant society. Data presented are derived from two Australian censuses (2006 and 2011) of over 19 million people. The proportion of Australians who reported speaking a language other than English at home was 21.5% in the 2006 census and 23.2% in the 2011 census. Multilingual speakers who also spoke English very well were more likely to have post-graduate qualifications, full-time employment and high income than monolingual English-speaking Australians. However, multilingual speakers who reported speaking English not well were much less likely to have post-graduate qualifications or full-time employment than monolingual English-speaking Australians. These findings provide insight into the socioeconomic and educational profiles of multilingual speakers, which will inform the understanding of people such as speech-language pathologists who provide them with support. The results indicate spoken English proficiency may impact participation in Australian society. These findings challenge the "monolingual mindset" by demonstrating that outcomes for multilingual speakers in education, employment and income are higher than for monolingual speakers.

  16. Accent modulates access to word meaning: Evidence for a speaker-model account of spoken word recognition.

    Science.gov (United States)

    Cai, Zhenguang G; Gilbert, Rebecca A; Davis, Matthew H; Gaskell, M Gareth; Farrar, Lauren; Adler, Sarah; Rodd, Jennifer M

    2017-11-01

    Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of "bonnet") in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Social inclusion for children with hearing loss in listening and spoken Language early intervention: an exploratory study.

    Science.gov (United States)

    Constantinescu-Sharpe, Gabriella; Phillips, Rebecca L; Davis, Aleisha; Dornan, Dimity; Hogan, Anthony

    2017-03-14

    Social inclusion is a common focus of listening and spoken language (LSL) early intervention for children with hearing loss. This exploratory study compared the social inclusion of young children with hearing loss educated using a listening and spoken language approach with population data. A framework for understanding the scope of social inclusion is presented in the Background. This framework guided the use of a shortened, modified version of the Longitudinal Study of Australian Children (LSAC) to measure two of the five facets of social inclusion ('education' and 'interacting with society and fulfilling social goals'). The survey was completed by parents of children with hearing loss aged 4-5 years who were educated using a LSL approach (n = 78; a 37% response rate). These responses were compared to those obtained for typical hearing children in the LSAC dataset (n = 3265). Analyses revealed that most children with hearing loss had comparable outcomes to those with typical hearing on the 'education' and 'interacting with society and fulfilling social roles' facets of social inclusion. These exploratory findings are positive and warrant further investigation across all five facets of the framework to identify which factors influence social inclusion.

  18. Conceptualization of Man's Behavioral and Physical Characteristics as Animal Metaphors in the Spoken Discourse of Khezel People

    Directory of Open Access Journals (Sweden)

    Aliakbari, Mohammad

    2013-01-01

    Cognitive theory of metaphor has changed our understanding of metaphor from a figurative device to a matter of thought. It holds that metaphors are cognitively as well as culturally motivated. Despite having similar images in some languages, the culture-specific aspect of animal metaphors inspired the researchers to explore this area of the metaphoric system in a local Kurdish variety, to investigate how animal metaphors are reflected in spoken discourse. To achieve this objective, the authors collected and analyzed animal expressions adopted for praise and degradation of physical and behavioral characteristics in the Khezeli dialect in Ilam, Iran. To create a representative corpus, the authors scrutinized the spoken language and oral poetry of the dialect. The collected data indicate that more wild than domestic and more degrading than praising animal expressions are used for man's physical and behavioral characteristics. It is also confirmed that aspects of appearance, size, and physical characteristics as well as body parts of animals are transferred to humans. Further, users' attitudes toward animals are reflected in their metaphors. Users were also found to have three categories of positive, positive/negative, and negative connotations for animal names. Despite the existence of similarities in the underlying patterns of metaphoric use in different languages, the research came to the conclusion that the types of animals used, their connotations and interpretations may be worlds apart, and taking the meaning of one for another may lead to misunderstanding.

  19. The Plausibility of Tonal Evolution in the Malay Dialect Spoken in Thailand: Evidence from an Acoustic Study

    Directory of Open Access Journals (Sweden)

    Phanintra Teeranon

    2007-12-01

    The F0 values of vowels following voiceless consonants are higher than those of vowels following voiced consonants; high vowels have a higher F0 than low vowels. It has also been found that when high vowels follow voiced consonants, the F0 values decrease. In contrast, low vowels following voiceless consonants show increasing F0 values. In other words, the voicing of initial consonants has been found to counterbalance the intrinsic F0 values of high and low vowels (House and Fairbanks 1953, Lehiste and Peterson 1961, Lehiste 1970, Laver 1994, Teeranon 2006). To test whether these three findings are applicable to a disyllabic language, the F0 values of high and low vowels following voiceless and voiced consonants were studied in a Malay dialect of the Austronesian language family spoken in Pathumthani Province, Thailand. The data were collected from three male informants, aged 30-35. The Praat program was used for acoustic analysis. The findings revealed the influence of the voicing of initial consonants on the F0 of vowels to be greater than the influence of vowel height. Evidence from this acoustic study shows the plausibility of the Malay dialect spoken in Pathumthani becoming a tonal language through the influence of initial consonants rather than through the high-low vowel dimension.
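
    As an illustration of the kind of measurement involved, the sketch below extracts F0 with Praat's pitch tracker through the parselmouth Python bindings. The file name and segment times are hypothetical; the original analysis was done in Praat itself.

      # Sketch: vowel-onset F0 extraction via Praat (parselmouth bindings).
      import numpy as np
      import parselmouth

      snd = parselmouth.Sound("malay_token.wav")    # hypothetical recording
      pitch = snd.to_pitch(time_step=0.01)          # 10 ms analysis frames
      f0 = pitch.selected_array["frequency"]        # Hz; 0 where unvoiced
      times = pitch.xs()

      # Mean F0 over the first 50 ms of a vowel assumed to start at 0.32 s.
      onset = (times >= 0.32) & (times < 0.37)
      voiced = f0[onset][f0[onset] > 0]
      print(f"Vowel-onset F0: {voiced.mean():.1f} Hz")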

  20. Neural correlates of sublexical processing in phonological working memory.

    Science.gov (United States)

    McGettigan, Carolyn; Warren, Jane E; Eisner, Frank; Marshall, Chloe R; Shanmugalingam, Pradheep; Scott, Sophie K

    2011-04-01

    This study investigated links between working memory and speech processing systems. We used delayed pseudoword repetition in fMRI to investigate the neural correlates of sublexical structure in phonological working memory (pWM). We orthogonally varied the number of syllables and consonant clusters in auditory pseudowords and measured the neural responses to these manipulations under conditions of covert rehearsal (Experiment 1). A left-dominant network of temporal and motor cortex showed increased activity for longer items, with motor cortex only showing greater activity concomitant with adding consonant clusters. An individual-differences analysis revealed a significant positive relationship between activity in the angular gyrus and the hippocampus, and accuracy on pseudoword repetition. As models of pWM stipulate that its neural correlates should be activated during both perception and production/rehearsal [Buchsbaum, B. R., & D'Esposito, M. The search for the phonological store: From loop to convolution. Journal of Cognitive Neuroscience, 20, 762-778, 2008; Jacquemot, C., & Scott, S. K. What is the relationship between phonological short-term memory and speech processing? Trends in Cognitive Sciences, 10, 480-486, 2006; Baddeley, A. D., & Hitch, G. Working memory. In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 8, pp. 47-89). New York: Academic Press, 1974], we further assessed the effects of the two factors in a separate passive listening experiment (Experiment 2). In this experiment, the effect of the number of syllables was concentrated in posterior-medial regions of the supratemporal plane bilaterally, although there was no evidence of a significant response to added clusters. Taken together, the results identify the planum temporale as a key region in pWM; within this region, representations are likely to take the form of auditory or audiomotor "templates" or "chunks" at the level of the syllable.

  1. Prosodic and narrative processing in American Sign Language: An fMRI study

    Science.gov (United States)

    Newman, Aaron J.; Supalla, Ted; Hauser, Peter; Newport, Elissa; Bavelier, Daphne

    2010-01-01

    Signed languages such as American Sign Language (ASL) are natural human languages that share all of the core properties of spoken human languages, but differ in the modality through which they are communicated. Neuroimaging and patient studies have suggested similar left hemisphere (LH)-dominant patterns of brain organization for signed and spoken languages, suggesting that the linguistic nature of the information, rather than modality, drives brain organization for language. However, the role of the right hemisphere (RH) in sign language has been less explored. In spoken languages, the RH supports the processing of numerous types of narrative-level information, including prosody, affect, facial expression, and discourse structure. In the present fMRI study, we contrasted the processing of ASL sentences that contained these types of narrative information with similar sentences without marked narrative cues. For all sentences, Deaf native signers showed robust bilateral activation of perisylvian language cortices, as well as the basal ganglia, medial frontal and medial temporal regions. However, RH activation in the inferior frontal gyrus and superior temporal sulcus was greater for sentences containing narrative devices, including areas involved in processing narrative content in spoken languages. These results provide additional support for the claim that all natural human languages rely on a core set of LH brain regions, and extend our knowledge to show that narrative linguistic functions typically associated with the RH in spoken languages are similarly organized in signed languages. PMID:20347996

  2. The Differences between Spoken and Written Grammar in English, in Comparison with Vietnamese (Las Diferencias entre la Gramática Oral y Escrita del Idioma Inglés en Comparación con el Idioma Vietnamita)

    Science.gov (United States)

    Thanh, Nguyen Cao

    2015-01-01

    The fundamental point of this paper is to describe and evaluate some differences between spoken and written grammar in English, and compare some of the points with Vietnamese. This paper illustrates that spoken grammar is less rigid than written grammar. Moreover, it highlights the distinction between speaking and writing in terms of subordination…

  3. The Effects of Phonological Short-Term Memory and Speech Perception on Spoken Sentence Comprehension in Children: Simulating Deficits in an Experimental Design

    Science.gov (United States)

    Higgins, Meaghan C.; Penney, Sarah B.; Robertson, Erin K.

    2017-01-01

    The roles of phonological short-term memory (pSTM) and speech perception in spoken sentence comprehension were examined in an experimental design. Deficits in pSTM and speech perception were simulated through task demands while typically-developing children (N = 71) completed a sentence-picture matching task. Children performed the control,…

  4. Comprehension of spoken language in non-speaking children with severe cerebral palsy: an explorative study on associations with motor type and disabilities

    NARCIS (Netherlands)

    Geytenbeek, J.J.M.; Vermeulen, R.J.; Becher, J.G.; Oostrom, K.J.

    2015-01-01

    Aim: To assess spoken language comprehension in non-speaking children with severe cerebral palsy (CP) and to explore possible associations with motor type and disability. Method: Eighty-seven non-speaking children (44 males, 43 females, mean age 6y 8mo, SD 2y 1mo) with spastic (54%) or dyskinetic

  5. A Multilingual Approach to Analysing Standardized Test Results: Immigrant Primary School Children and the Role of Languages Spoken in a Bi-/Multilingual Community

    Science.gov (United States)

    De Angelis, Gessica

    2014-01-01

    The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…

  6. Stimulus variability and the phonetic relevance hypothesis: effects of variability in speaking style, fundamental frequency, and speaking rate on spoken word identification.

    Science.gov (United States)

    Sommers, Mitchell S; Barcroft, Joe

    2006-04-01

    Three experiments were conducted to examine the effects of trial-to-trial variations in speaking style, fundamental frequency, and speaking rate on identification of spoken words. In addition, the experiments investigated whether any effects of stimulus variability would be modulated by phonetic confusability (i.e., lexical difficulty). In Experiment 1, trial-to-trial variations in speaking style reduced the overall identification performance compared with conditions containing no speaking-style variability. In addition, the effects of variability were greater for phonetically confusable words than for phonetically distinct words. In Experiment 2, variations in fundamental frequency were found to have no significant effects on spoken word identification and did not interact with lexical difficulty. In Experiment 3, two different methods for varying speaking rate were found to have equivalent negative effects on spoken word recognition and similar interactions with lexical difficulty. Overall, the findings are consistent with a phonetic-relevance hypothesis, in which accommodating sources of acoustic-phonetic variability that affect phonetically relevant properties of speech signals can impair spoken word identification. In contrast, variability in parameters of the speech signal that do not affect phonetically relevant properties are not expected to affect overall identification performance. Implications of these findings for the nature and development of lexical representations are discussed.

  7. How Does the Linguistic Distance between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances during Verbal Memory Examination

    Science.gov (United States)

    Taha, Haitham

    2017-01-01

    The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using auditory verbal memory test that was adapted according to the Rey Auditory Verbal Learning Test and developed in three versions: Pure spoken language version (SL), pure standard language version (SA), and…

  8. Emergent Literacy Skills in Preschool Children with Hearing Loss Who Use Spoken Language: Initial Findings from the Early Language and Literacy Acquisition (ELLA) Study

    Science.gov (United States)

    Werfel, Krystal L.

    2017-01-01

    Purpose: The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Method: Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed…

  9. Harnessing the Power of Informal Learning: Using WeChat, the Semi-Synchronous Group Chat, to Enhance Spoken Fluency in Chinese Learners

    Science.gov (United States)

    Sadoux, Marion

    2017-01-01

    This research is an exploratory study that seeks to evaluate the potential of the Chinese app WeChat to enhance the spoken fluency of learners of French in China, who report having limited and insufficient opportunities to practice speaking in their daily life. WeChat is an extremely popular instant messenger facilitating communication through a…

  10. The Beneficial Role of L1 Spoken Language Skills on Initial L2 Sign Language Learning: Cognitive and Linguistic Predictors of M2L2 Acquisition

    Science.gov (United States)

    Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.

    2017-01-01

    Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…

  11. Talk About Mouth Speculums: Collocational Competence and Spoken Fluency in Non-Native English-Speaking University Lecturers

    DEFF Research Database (Denmark)

    Westbrook, Pete

    Despite the large body of research into formulaic language and fluency, there seems to be a lack of empirical evidence for how collocations, often considered a subset of formulaic language, might impact on fluency. To address this problem, this dissertation examined to what extent correlations might exist between overall language proficiency, collocational competence and spoken fluency in non-native English-speaking university lecturers. The data came from 15 20-minute mini-lectures recorded between 2009 and 2011 for an English oral proficiency test for lecturers employed at the University… with fluency measures calculated for each lecturer. Initial findings across all lecturers showed no correlation between collocational competence and either overall proficiency or fluency. However, further analysis of lecturers by department revealed that possible correlations were hidden by variations…

  12. Distance delivery of a spoken language intervention for school-aged and adolescent boys with fragile X syndrome.

    Science.gov (United States)

    McDuffie, Andrea; Banasik, Amy; Bullard, Lauren; Nelson, Sarah; Feigles, Robyn Tempero; Hagerman, Randi; Abbeduto, Leonard

    2018-01-01

    A small randomized group design (N = 20) was used to examine a parent-implemented intervention designed to improve the spoken language skills of school-aged and adolescent boys with FXS, the leading cause of inherited intellectual disability. The intervention was implemented by speech-language pathologists who used distance video-teleconferencing to deliver the intervention. The intervention taught mothers to use a set of language facilitation strategies while interacting with their children in the context of shared story-telling. Treatment group mothers significantly improved their use of the targeted intervention strategies. Children in the treatment group increased the duration of engagement in the shared story-telling activity as well as use of utterances that maintained the topic of the story. Children also showed increases in lexical diversity, but not in grammatical complexity.

  13. EEG decoding of spoken words in bilingual listeners: from words to language invariant semantic-conceptual representations

    Directory of Open Access Journals (Sweden)

    João Mendonça Correia

    2015-02-01

    Spoken word recognition and production require fast transformations between acoustic, phonological and conceptual neural representations. Bilinguals perform these transformations in native and non-native languages, deriving unified semantic concepts from equivalent, but acoustically different, words. Here we exploit this capacity of bilinguals to investigate input-invariant semantic representations in the brain. We acquired EEG data while Dutch subjects, highly proficient in English, listened to four monosyllabic and acoustically distinct animal words in both languages (e.g. ‘paard’-‘horse’). Multivariate pattern analysis (MVPA) was applied to identify EEG response patterns that discriminate between individual words within one language (within-language discrimination) and generalize meaning across the two languages (across-language generalization). Furthermore, employing two EEG feature selection approaches, we assessed the contribution of temporal and oscillatory EEG features to our classification results. MVPA revealed that within-language discrimination was possible in a broad time window (~50-620 ms after word onset), probably reflecting acoustic-phonetic and semantic-conceptual differences between the words. Most interestingly, significant across-language generalization was possible around 550-600 ms, suggesting the activation of common semantic-conceptual representations from the Dutch and English nouns. Both types of classification showed a strong contribution of oscillations below 12 Hz, indicating the importance of low-frequency oscillations in the neural representation of individual words and concepts. This study demonstrates the feasibility of MVPA to decode individual spoken words from EEG responses and to assess the spectro-temporal dynamics of their language-invariant semantic-conceptual representations. We discuss how this method and these results could be relevant to track the neural mechanisms underlying conceptual encoding in…
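
    The logic of the two classification analyses can be sketched in a few lines of Python. The arrays below are random stand-ins for (trials x features) EEG patterns, and a linear SVM stands in for whatever classifier the authors used; the point is only the train/test structure of within-language discrimination versus across-language generalization.

      # Sketch: within-language decoding and across-language generalization.
      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X_nl = rng.standard_normal((80, 64))   # Dutch trials (stand-in EEG)
      X_en = rng.standard_normal((80, 64))   # English trials (stand-in EEG)
      y_nl = np.repeat([0, 1, 2, 3], 20)     # four animal concepts
      y_en = np.repeat([0, 1, 2, 3], 20)

      clf = LinearSVC()
      # Within-language discrimination: cross-validated within Dutch trials.
      print(cross_val_score(clf, X_nl, y_nl, cv=5).mean())
      # Across-language generalization: train on Dutch, test on English.
      print(clf.fit(X_nl, y_nl).score(X_en, y_en))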

  14. When one person's mistake is another's standard usage: The effect of foreign accent on syntactic processing

    NARCIS (Netherlands)

    Hanuliková, A.; Alphen, P.M. van; Goch, M.M. van; Weber, A.C.

    2012-01-01

    How do native listeners process grammatical errors that are frequent in non-native speech? We investigated whether the neural correlates of syntactic processing are modulated by speaker identity. ERPs to gender agreement errors in sentences spoken by a native speaker were compared with the same

  15. Direct and indirect speech in aphasia : studies of spoken discourse production and comprehension

    NARCIS (Netherlands)

    Groenewold, Rimke

    2015-01-01

    Speakers with aphasia (a language impairment due to acquired brain damage) have difficulty processing grammatically complex sentences. In this dissertation we study the processing of direct speech constructions (e.g., John said: “I have to leave”) by people with and without aphasia. First, we study

  16. There Is No Culturally Responsive Teaching Spoken Here: A Critical Race Perspective

    Science.gov (United States)

    Hayes, Cleveland; Juarez, Brenda

    2012-01-01

    In this article, we are concerned with White racial domination as a process that occurs in teacher education and the ways it operates to hinder the preparation of teachers to effectively teach all students. Our purpose is to identify and highlight moments within processes of White racial domination when individuals and groups have and make choices…

  17. Attention for speaking: domain-general control from the anterior cingulate cortex in spoken word production

    NARCIS (Netherlands)

    Piai, V.; Roelofs, A.P.A.; Acheson, D.J.; Takashima, A.

    2013-01-01

    Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and

  18. Changes to English as an Additional Language Writers' Research Articles: From Spoken to Written Register

    Science.gov (United States)

    Koyalan, Aylin; Mumford, Simon

    2011-01-01

    The process of writing journal articles is increasingly being seen as a collaborative process, especially where the authors are English as an Additional Language (EAL) academics. This study examines the changes made in terms of register to EAL writers' journal articles by a native-speaker writing centre advisor at a private university in Turkey.…

  19. Conversational interfaces for task-oriented spoken dialogues: design aspects influencing interaction quality

    NARCIS (Netherlands)

    Niculescu, A.I.

    2011-01-01

    This dissertation focuses on the design and evaluation of speech-based conversational interfaces for task-oriented dialogues. Conversational interfaces are software programs enabling interaction with computer devices through natural language dialogue. Even though processing conversational speech is

  20. The Bakayat Spoken-Text Tradition: The Articulation of Religious Values and Social Discourse of the Sasak Community in Lombok

    Directory of Open Access Journals (Sweden)

    I Made Suyasa

    2017-01-01

    This study explored the bakayat spoken-text tradition of the Sasak people in Lombok. The tradition was used as a medium for preaching on Islamic days, customs and ceremonies, as well as for appreciating folk literature. Malay literary texts that contained religious values were articulated continuously in various social discourses by the community that owned this tradition. The impact of globalization and the inclusion of various Islamic doctrines in Lombok have threatened the existence of the bakayat tradition, and now most Sasak people, especially the younger ones, are not interested in it. The background explained above is the main reason why this study was conducted. Moreover, only a few studies have investigated the bakayat tradition in depth. The present study focused on the history, structure, function, meaning, and articulation of the religious values and social discourse of the bakayat tradition of the Sasak people. This research used the descriptive analytical method, and the data were analyzed using the interpretive qualitative method. The theories used in this study were the theory of narratology proposed by Gerard Genette (1986), the theory of articulation proposed by Stuart Hall (1986), the theory of functions, and the theory of semiotics. The results of this study showed that the historical development of the Sasak bakayat tradition was characterized by the emergence of Islam in Lombok, which significantly contributed to the existence of bakayat. It was followed by the Islamic Malay literature, which was used as the reading material in the bakayat tradition and as a medium for learning Islam. The historical development of the bakayat Sasak was explained in various aspects such as religious, cultural, political, and social aspects. The structure of the bakayat text was a form of articulation in spoken style which involved the characteristics of the

  1. Advances in natural language processing.

    Science.gov (United States)

    Hirschberg, Julia; Manning, Christopher D

    2015-07-17

    Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area. Copyright © 2015, American Association for the Advancement of Science.

  2. Preferential inspection of recent real-world events over future events: evidence from eye tracking during spoken sentence comprehension

    Directory of Open Access Journals (Sweden)

    Pia eKnoeferle

    2011-12-01

    Eye-tracking findings suggest people prefer to ground their spoken language comprehension by focusing on recently seen events more than by anticipating future events: when the verb in NP1-VERB-ADV-NP2 sentences was referentially ambiguous between a recently depicted and an equally plausible future clipart action, listeners fixated the target of the recent action more often at the verb than the object that hadn't yet been acted upon. We examined whether this inspection preference generalizes to real-world events, and whether it is (vs. isn't) modulated by how often people see recent and future events acted out. In a first eye-tracking study, the experimenter performed an action (e.g., sugaring pancakes), and then a spoken sentence either referred to that action or to an equally plausible future action (e.g., sugaring strawberries). At the verb, people more often inspected the pancakes (the recent target) than the strawberries (the future target), thus replicating the recent-event preference with these real-world actions. Adverb tense, indicating a future versus past event, had no effect on participants' visual attention. In a second study we increased the frequency of future actions such that participants saw 50/50 future and recent actions. During the verb people mostly inspected the recent action target, but subsequently they began to rely on tense, and anticipated the future target more often for future- than past-tense adverbs. A corpus study showed that the verbs and adverbs indicating past versus future actions were equally frequent, suggesting that long-term frequency biases did not cause the recent-event preference. Thus, (a) recent real-world actions can rapidly influence comprehension (as indexed by eye gaze to objects), and (b) people prefer to first inspect a recent action target (vs. an object that will soon be acted upon), even when past and future actions occur with equal frequency. A simple frequency-of-experience account cannot accommodate these

  3. The influence of background music on recognition processes of Chinese characters: an ERP study.

    Science.gov (United States)

    Liu, Baolin; Huang, Yizhou; Wang, Zhongning; Wu, Guangning

    2012-06-19

    In this paper, we employed the RSS (rapid stream stimulation) paradigm to study the recognition of Chinese characters against background music. Real Chinese characters (upright or rotated) were used as target stimuli, while pseudowords were used as background stimuli. Subjects were required to detect real characters while listening to Mozart's Sonata K. 448 and in silence. Both the behavioral and ERP results support the conclusion that Mozart's music mainly served as a distractor in the recognition of real Chinese characters in this experiment. The modulation of RP (recognition potential) by Mozart's music differed across orientations of Chinese characters; in particular, the modulation of the RP elicited by upright Chinese characters was more significant, suggesting that the music factor and the orientation factor interact to affect the RP component. In brief, the simultaneous playing of Mozart's music did not improve subjects' performance in the detection of real Chinese characters. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  4. Lexical Tone Variation and Spoken Word Recognition in Preschool Children: Effects of Perceptual Salience

    Science.gov (United States)

    Singh, Leher; Tan, Aloysia; Wewalaarachchi, Thilanga D.

    2017-01-01

    Children undergo gradual progression in their ability to differentiate correct and incorrect pronunciations of words, a process that is crucial to establishing a native vocabulary. For the most part, the development of mature phonological representations has been researched by investigating children's sensitivity to consonant and vowel variation,…

  5. Deviant ERP Response to Spoken Non-Words among Adolescents Exposed to Cocaine in Utero

    Science.gov (United States)

    Landi, Nicole; Crowley, Michael J.; Wu, Jia; Bailey, Christopher A.; Mayes, Linda C.

    2012-01-01

    Concern for the impact of prenatal cocaine exposure (PCE) on human language development is based on observations of impaired performance on assessments of language skills in these children relative to non-exposed children. We investigated the effects of PCE on speech processing ability using event-related potentials (ERPs) among a sample of…

  6. Task-Oriented Spoken Dialog System for Second-Language Learning

    Science.gov (United States)

    Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun

    2016-01-01

    This paper introduces a Dialog-Based Computer Assisted second-Language Learning (DB-CALL) system using task-oriented dialogue processing technology. The system promotes dialogue with a second-language learner for a specific task, such as purchasing tour tickets, ordering food, passing through immigration, etc. The dialog system plays a role of a…

  7. Attention for speaking: domain-general control from the anterior cingulate cortex in spoken word production

    Directory of Open Access Journals (Sweden)

    Vitoria ePiai

    2013-12-01

    Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes have remained relatively underspecified. We report the results of an fMRI study examining the neural substrates related to performance in three attention-demanding tasks varying in the amount of linguistic processing: vocal picture naming while ignoring distractors (picture-word interference, PWI); vocal colour naming while ignoring distractors (Stroop); and manual object discrimination while ignoring spatial position (Simon task). All three tasks had congruent and incongruent stimuli, while PWI and Stroop also had neutral stimuli. Analyses focusing on common activation across tasks identified a portion of the dorsal anterior cingulate cortex that was active in incongruent trials for all three tasks, suggesting that this region subserves a domain-general attentional control function. In the language tasks, this area showed increased activity for incongruent relative to congruent stimuli, consistent with the involvement of domain-general mechanisms of attentional control in word production. The two language tasks also showed activity in the anterior-superior temporal gyrus. Activity increased for neutral PWI stimuli (picture and word did not share the same semantic category) relative to incongruent (categorically related) and congruent stimuli. This finding is consistent with the involvement of language-specific areas in word production, possibly related to retrieval of lexical-semantic information from memory. The current results thus suggest that in addition to engaging language-specific areas for core linguistic processes, speaking also engages the anterior cingulate cortex, a region that is likely implementing domain

  8. Fast mapping semantic features: performance of adults with normal language, history of disorders of spoken and written language, and attention deficit hyperactivity disorder on a word-learning task.

    Science.gov (United States)

    Alt, Mary; Gutmann, Michelle L

    2009-01-01

    This study was designed to test the word learning abilities of adults with typical language abilities, those with a history of disorders of spoken or written language (hDSWL), and hDSWL plus attention deficit hyperactivity disorder (+ADHD). Sixty-eight adults were required to associate a novel object with a novel label, and then recognize semantic features of the object and phonological features of the label. Participants were tested for overt ability (accuracy) and covert processing (reaction time). The +ADHD group was less accurate at mapping semantic features and slower to respond to lexical labels than both other groups. Different factors correlated with word learning performance for each group. Adults with language and attention deficits are more impaired at word learning than adults with language deficits only. Despite behavioral profiles like typical peers, adults with hDSWL may use different processing strategies than their peers. Readers will be able to: (1) recognize the influence of a dual disability (hDSWL and ADHD) on word learning outcomes; (2) identify factors that may contribute to word learning in adults in terms of (a) the nature of the words to be learned and (b) the language processing of the learner.

  9. The Priority of Listening Comprehension over Speaking in the Language Acquisition Process

    Science.gov (United States)

    Xu, Fang

    2011-01-01

    By elaborating on the definition of listening comprehension, the characteristics of spoken discourse, the relationship between STM and LTM, and Krashen's notion of comprehensible input, the paper argues that giving priority to listening comprehension over speaking in the language acquisition process is necessary.

  10. Early Language Experience Facilitates the Processing of Gender Agreement in Spanish Heritage Speakers

    Science.gov (United States)

    Montrul, Silvina; Davidson, Justin; De La Fuente, Israel; Foote, Rebecca

    2014-01-01

    We examined how age of acquisition in Spanish heritage speakers and L2 learners interacts with implicitness vs. explicitness of tasks in gender processing of canonical and non-canonical ending nouns. Twenty-three Spanish native speakers, 29 heritage speakers, and 33 proficiency-matched L2 learners completed three on-line spoken word recognition…

  11. Processing Lexical and Speaker Information in Repetition and Semantic/Associative Priming

    Science.gov (United States)

    Lee, Chao-Yang; Zhang, Yu

    2018-01-01

    The purpose of this study is to investigate the interaction between processing lexical and speaker-specific information in spoken word recognition. The specific question is whether repetition and semantic/associative priming is reduced when the prime and target are produced by different speakers. In Experiment 1, the prime and target were repeated…

  12. The role of visual representations within working memory for paired-associate and serial order of spoken words.

    Science.gov (United States)

    Ueno, Taiji; Saito, Satoru

    2013-09-01

    Caplan and colleagues have recently explained paired-associate learning and serial-order learning with a single-mechanism computational model by assuming differential degrees of isolation. Specifically, two items in a pair can be grouped together and associated to positional codes that are somewhat isolated from the rest of the items. In contrast, the degree of isolation among the studied items is lower in serial-order learning. One of the key predictions drawn from this theory is that any variables that help chunking of two adjacent items into a group should be beneficial to paired-associate learning, more than serial-order learning. To test this idea, the role of visual representations in memory for spoken verbal materials (i.e., imagery) was compared between two types of learning directly. Experiment 1 showed stronger effects of word concreteness and of concurrent presentation of irrelevant visual stimuli (dynamic visual noise: DVN) in paired-associate memory than in serial-order memory, consistent with the prediction. Experiment 2 revealed that the irrelevant visual stimuli effect was boosted when the participants had to actively maintain the information within working memory, rather than feed it to long-term memory for subsequent recall, due to cue overloading. This indicates that the sensory input from irrelevant visual stimuli can reach and affect visual representations of verbal items within working memory, and that this disruption can be attenuated when the information within working memory can be efficiently supported by long-term memory for subsequent recall.

  13. Computer Assisted Testing of Spoken English: A Study of the SFLEP College English Oral Test System in China

    Directory of Open Access Journals (Sweden)

    John Lowe

    2009-06-01

    This paper reports on the ongoing evaluation of a computer-assisted system (CEOTS) for assessing spoken English skills among Chinese university students. This system is being developed to deal with the negative backwash effects of the present system of assessment of speaking skills, which is only available to a tiny minority. We present data from a survey of students at the developing institution (USTC), with follow-up interviews and further interviews with English language teachers, to gauge reactions to the test and its impact on language learning. We identify the key issue as being one of validity, with a tension existing between the construct and consequential validities of the existing system and of CEOTS. We argue that a computer-based system seems to offer the only solution to the negative backwash problem, but the development of the technology required to meet current construct validity demands makes this a very long-term prospect. We suggest that a compromise between the competing forms of validity must therefore be accepted, probably well before a computer-based system can deliver the level of interaction with the examinees that would emulate the present face-to-face mode.

  14. UNDERSTANDING TENOR IN SPOKEN TEXTS IN YEAR XII ENGLISH TEXTBOOK TO IMPROVE THE APPROPRIACY OF THE TEXTS

    Directory of Open Access Journals (Sweden)

    Noeris Meristiani

    2011-07-01

    The goal of English Language Teaching is communicative competence. To reach this goal, students should be supplied with good model texts, and these texts should reflect appropriate language use. By analyzing the context of situation, focusing on tenor, the meanings constructed to build relationships among the interactants in spoken texts can be unfolded. This study aims at investigating the interpersonal relations (tenor) of the interactants in conversation texts as well as the appropriacy of their realization in the given contexts. The study was conducted under discourse analysis by applying a descriptive qualitative method. There were eight conversation texts which function as examples in five chapters of a textbook. The data were analyzed using lexicogrammatical analysis, described, and interpreted contextually. Then, the realization of the tenor of the texts was further analyzed in terms of appropriacy to suggest improvements. The results of the study show that the tenor indicates relationships between friend-friend, student-student, questioners-respondents, mother-son, and teacher-student; the power is equal and unequal; and the social distances show frequent contact, relatively frequent contact, relatively low contact, high and low affective involvement, using informal, relatively informal, relatively formal, and formal language. There are also some indications of inappropriate tenor realization in all texts, which should be improved in terms of the degree of formality and the realization of societal roles, status, and affective involvement. Keywords: context of situation, tenor, appropriacy.

  15. Reflexive anaphor resolution in spoken language comprehension: structural constraints and beyond

    Science.gov (United States)

    Clackson, Kaili; Heyer, Vera

    2014-01-01

    We report results from an eye-tracking during listening study examining English-speaking adults’ online processing of reflexive pronouns, and specifically whether the search for an antecedent is restricted to syntactically appropriate positions. Participants listened to a short story where the recipient of an object was introduced with a reflexive, and were asked to identify the object recipient as quickly as possible. This allowed for the recording of participants’ offline interpretation of the reflexive, response times, and eye movements on hearing the reflexive. Whilst our offline results show that the ultimate interpretation for reflexives was constrained by binding principles, the response time, and eye-movement data revealed that during processing participants were temporarily distracted by a structurally inappropriate competitor antecedent when this was prominent in the discourse. These results indicate that in addition to binding principles, online referential decisions are also affected by discourse-level information. PMID:25191290

  16. General language performance measures in spoken and written narrative and expository discourse of school-age children with language learning disabilities.

    Science.gov (United States)

    Scott, C M; Windsor, J

    2000-04-01

    Language performance in naturalistic contexts can be characterized by general measures of productivity, fluency, lexical diversity, and grammatical complexity and accuracy. The use of such measures as indices of language impairment in older children is open to questions of method and interpretation. This study evaluated the extent to which 10 general language performance measures (GLPM) differentiated school-age children with language learning disabilities (LLD) from chronological-age (CA) and language-age (LA) peers. Children produced both spoken and written summaries of two educational videotapes that provided models of either narrative or expository (informational) discourse. Productivity measures, including total T-units, total words, and words per minute, were significantly lower for children with LLD than for CA children. Fluency (percent T-units with mazes) and lexical diversity (number of different words) measures were similar for all children. Grammatical complexity as measured by words per T-unit was significantly lower for LLD children. However, there was no difference among groups for clauses per T-unit. The only measure that distinguished children with LLD from both CA and LA peers was the extent of grammatical error. Effects of discourse genre and modality were consistent across groups. Compared to narratives, expository summaries were shorter, less fluent (spoken versions), more complex (words per T-unit), and more error prone. Written summaries were shorter and had more errors than spoken versions. For many LLD and LA children, expository writing was exceedingly difficult. Implications for accounts of language impairment in older children are discussed.

  17. Visual information constrains early and late stages of spoken-word recognition in sentence context.

    Science.gov (United States)

    Brunellière, Angèle; Sánchez-García, Carolina; Ikumi, Nara; Soto-Faraco, Salvador

    2013-07-01

    Audiovisual speech perception has been frequently studied considering phoneme, syllable and word processing levels. Here, we examined the constraints that visual speech information might exert during the recognition of words embedded in a natural sentence context. We recorded event-related potentials (ERPs) to words that could be either strongly or weakly predictable on the basis of the prior semantic sentential context and whose initial phoneme varied in the degree of visual saliency from lip movements. When the sentences were presented audio-visually (Experiment 1), words weakly predicted from semantic context elicited a larger long-lasting N400, compared to strongly predictable words. This semantic effect interacted with the degree of visual saliency over a late part of the N400. When comparing audio-visual versus auditory alone presentation (Experiment 2), the typical amplitude-reduction effect over the auditory-evoked N100 response was observed in the audiovisual modality. Interestingly, a specific benefit of high- versus low-visual saliency constraints occurred over the early N100 response and at the late N400 time window, confirming the result of Experiment 1. Taken together, our results indicate that the saliency of visual speech can exert an influence over both auditory processing and word recognition at relatively late stages, and thus suggest strong interactivity between audio-visual integration and other (arguably higher) stages of information processing during natural speech comprehension. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. You had me at "Hello": Rapid extraction of dialect information from spoken words.

    Science.gov (United States)

    Scharinger, Mathias; Monahan, Philip J; Idsardi, William J

    2011-06-15

    Research on the neuronal underpinnings of speaker identity recognition has identified voice-selective areas in the human brain with evolutionary homologues in non-human primates who have comparable areas for processing species-specific calls. Most studies have focused on estimating the extent and location of these areas. In contrast, relatively few experiments have investigated the time-course of speaker identity, and in particular, dialect processing and identification by electro- or neuromagnetic means. We show here that dialect extraction occurs speaker-independently, pre-attentively and categorically. We used Standard American English and African-American English exemplars of 'Hello' in a magnetoencephalographic (MEG) Mismatch Negativity (MMN) experiment. The MMN as an automatic change detection response of the brain reflected dialect differences that were not entirely reducible to acoustic differences between the pronunciations of 'Hello'. Source analyses of the M100, an auditory evoked response to the vowels suggested additional processing in voice-selective areas whenever a dialect change was detected. These findings are not only relevant for the cognitive neuroscience of language, but also for the social sciences concerned with dialect and race perception. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Long lasting musical training modifies language processing: a Dichotic Fused Word Test study.

    Science.gov (United States)

    Sebastiani, L; Castellani, E

    2016-01-01

    Musical training modifies neural areas associated with both music and language and enhances speech perception and discrimination by engaging right-hemisphere regions classically associated with music processing. On this basis, we hypothesized that participants with extended musical training would show reduced left-hemisphere dominance for speech. To verify this hypothesis, two groups of right-handed individuals, one with long-term musical training and one with no musical training, participated in a Dichotic Fused Word Test consisting of the simultaneous presentation of different pairs of rhyming words and pseudo-words, one to each ear. Participants typically report the right-ear input more often than the left-ear input. This effect, called the right ear advantage (REA), reflects left-hemisphere dominance for speech processing. In our study, we expected musicians to show a reduced dichotic-listening REA for linguistic stimuli. The main result of this study was the attenuation, and in some cases the complete suppression, of the dichotic effect in musicians, since most of them perceived both words simultaneously. This finding suggests that both hemispheres may have similar verbal competence and contribute to speech processing in parallel. This contrasts with the normal brain organization, in which the hemispheres cooperate but are engaged in different analyses of speech. The "two words" perception also extended to pseudo-words. Thus, musical training, by shaping the language circuits, could enhance bilateral processing of stimuli with linguistic characteristics (i.e., phonetics) independently of semantics.
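
    The REA is conventionally quantified with a laterality index over ear-specific reports; the sketch below uses the standard (R − L)/(R + L) formula with hypothetical trial counts. Note that the "both words" responses reported for musicians fall outside this two-alternative index and would be tallied separately:

      def ear_advantage(right_reports: int, left_reports: int) -> float:
          """Laterality index in percent: positive values indicate a right
          ear advantage (REA), i.e., left-hemisphere speech dominance."""
          total = right_reports + left_reports
          if total == 0:
              raise ValueError("no ear-specific reports to score")
          return 100.0 * (right_reports - left_reports) / total

      print(ear_advantage(26, 14))  # 30.0: clear REA (non-musician-like)
      print(ear_advantage(20, 20))  #  0.0: no ear advantage (musician-like)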

  20. The perceptual flow of phonetic feature processing

    DEFF Research Database (Denmark)

    Greenberg, Steven; Christiansen, Thomas Ulrich

    2008-01-01

    How does the brain process spoken language? It is our thesis that word intelligibility and consonant identification are insufficient by themselves to model how the speech signal is decoded; a finer-grained approach is required. In this study, listeners identified 11 different Danish consonants…, and posterior probabilities associated with phonetic-feature decoding were computed from confusion matrices in order to deduce the temporal flow of phonetic processing. Decoding the feature Manner-of-Articulation depends on accurate decoding of the feature Voicing (but not vice versa), and decoding Place… This asymmetric pattern of feature decoding may provide extra-segmental information of utility for speech processing, particularly in adverse listening conditions…
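
    The core computation over confusion matrices can be sketched as follows. This is a simplified illustration: the consonant set, the counts, and the voicing map are made up, whereas the study's actual analysis operated on time-gated Danish consonant confusions:

      import numpy as np

      # Toy stimulus-by-response confusion counts for four consonants.
      consonants = ["b", "p", "d", "t"]
      voiced = {"b": True, "p": False, "d": True, "t": False}
      confusions = np.array([
          [50,  5, 12,  3],   # presented /b/
          [ 4, 48,  2, 16],   # presented /p/
          [10,  3, 52,  5],   # presented /d/
          [ 2, 14,  6, 48],   # presented /t/
      ])

      def p_feature_transmitted(confusions, labels, feature):
          """Probability that a feature survives decoding: response mass on
          consonants sharing the presented segment's feature value, even
          when the segment itself was misidentified."""
          hits = sum(confusions[i, j]
                     for i, s in enumerate(labels)
                     for j, r in enumerate(labels)
                     if feature[s] == feature[r])
          return hits / confusions.sum()

      print(f"P(voicing decoded) = {p_feature_transmitted(confusions, consonants, voiced):.2f}")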

  1. Energy and protein intake increases with an electronic bedside spoken meal ordering system compared to a paper menu in hospital patients.

    Science.gov (United States)

    Maunder, Kirsty; Lazarus, Carmel; Walton, Karen; Williams, Peter; Ferguson, Maree; Beck, Eleanor

    2015-08-01

    Electronic bedside spoken meal ordering systems (BMOS) have the potential to improve patient dietary intakes, but there are few published evaluation studies. The aim of this study was to determine changes in the dietary intake and satisfaction of hospital patients, as well as the role of the Nutrition Assistant (NA), associated with the implementation of an electronic BMOS compared to a paper menu. This study evaluated the effect of a BMOS compared to a paper menu at a 210-bed tertiary private hospital in Sydney during 2011-2012. Patient dietary intake, patient satisfaction and changes in the NA role were the key outcomes measured. Dietary intake was estimated from observational recordings and photographs of meal trays (before and after patient intake) over two 48 h periods. Patient satisfaction was measured through written surveys, and the NA role was compared through a review of work schedules, observation, time recordings of patient contact, written surveys and structured interviews. Baseline data were collected across five wards from 54 patients (75% response rate) whilst using the paper menu service, and after the BMOS was introduced across the same five wards, from 65 patients (95% response rate). The paper menu and BMOS cohorts' demographics, self-reported health, appetite, weight, body mass index, dietary requirements, and overall foodservice satisfaction remained consistent. However, 80% of patients preferred the BMOS, and importantly mean daily energy and protein intakes increased significantly (paper menu versus BMOS): 6273 kJ versus 8273 kJ and 66 g versus 83 g protein; both p < 0.05. No additional time was required for the NA role; however, direct patient interaction increased significantly (p < 0.05), and patient awareness of the NA and their role increased with the BMOS. The utilisation of a BMOS improved patient energy and protein intake. These results are most likely due to an enhancement of existing NA work processes, enabling more NA time with patients.

  2. Online lexical competition during spoken word recognition and word learning in children and adults.

    Science.gov (United States)

    Henderson, Lisa; Weighall, Anna; Brown, Helen; Gaskell, Gareth

    2013-01-01

    Lexical competition that occurs as speech unfolds is a hallmark of adult oral language comprehension crucial to rapid incremental speech processing. This study used pause detection to examine whether lexical competition operates similarly at 7-8 years and tested variables that influence "online" lexical activity in adults. Children (n = 20) and adults (n = 17) were slower to detect pauses in familiar words with later uniqueness points. Faster latencies were obtained for words with late uniqueness points in constraining compared with neutral sentences; no such effect was observed for early unique words. Following exposure to novel competitors ("biscal"), children (n = 18) and adults (n = 18) showed competition for existing words with early uniqueness points ("biscuit") after 24 hr. Thus, online lexical competition effects are remarkably similar across development. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.
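
    A word's uniqueness point is the position at which its onset no longer matches any other entry in the lexicon, and learning a novel competitor such as "biscal" pushes that point later for "biscuit". A minimal sketch of the computation (toy orthographic lexicon of our own; real analyses use phonemic transcriptions of a full lexicon):

      def uniqueness_point(word: str, lexicon: set) -> int:
          """1-based position at which `word` diverges from every other
          entry; len(word) + 1 if another entry contains it as a prefix."""
          others = lexicon - {word}
          for i in range(1, len(word) + 1):
              if not any(o.startswith(word[:i]) for o in others):
                  return i
          return len(word) + 1

      lex = {"biscuit", "bishop", "bill", "candle", "candy"}
      print(uniqueness_point("biscuit", lex))               # 4: "bisc" is unique
      print(uniqueness_point("biscuit", lex | {"biscal"}))  # 5: competitor extends it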

  3. [Modes of analysis in qualitative research in health: critical perspective and spoken reflexions].

    Science.gov (United States)

    Amezcua, Manuel; Gálvez Toro, Alberto

    2002-01-01

    Qualitative health research rests on consistent theoretical and methodological foundations provided mainly by the social sciences. However, the overlap between social and health domains remains a wide, underexplored multidisciplinary field. This article offers an overview of the main paradigms, methodologies and theoretical tendencies of qualitative research analyses within the health sciences context. Based on an initial classification, two opposite extremes for setting the bounds of the qualitative analysis continuum are discussed: from designs focusing on data description, which are purely exploratory, to those which engage in theorizing processes so as to draw out interpretations and inferences. Qualitative research is an important tool in the analysis of health problems from a social and cultural point of view. Adopting different procedures, such as content and discourse analysis, qualitative research approaches communication patterns and examines diverse language ideologies. Sociological and anthropological traditions provide distinctive methodologies, such as ethnomethodology or analytic induction, which make it possible to understand the context in which phenomena appear and to formulate theoretical proposals to explain them. Lastly, some keys are suggested for developing a common ground from which new epistemological perspectives, based on the convergence of different disciplines, may be set out.

  4. Human inferior colliculus activity relates to individual differences in spoken language learning.

    Science.gov (United States)

    Chandrasekaran, Bharath; Kraus, Nina; Wong, Patrick C M

    2012-03-01

    A challenge to learning words of a foreign language is encoding nonnative phonemes, a process typically attributed to cortical circuitry. Using multimodal imaging methods [functional magnetic resonance imaging-adaptation (fMRI-A) and auditory brain stem responses (ABR)], we examined the extent to which pretraining pitch encoding in the inferior colliculus (IC), a primary midbrain structure, related to individual variability in learning to successfully use nonnative pitch patterns to distinguish words in American English-speaking adults. fMRI-A indexed the efficiency of pitch representation localized to the IC, whereas ABR quantified midbrain pitch-related activity with millisecond precision. In line with neural "sharpening" models, we found that efficient IC pitch pattern representation (indexed by fMRI) related to superior neural representation of pitch patterns (indexed by ABR), and consequently more successful word learning following sound-to-meaning training. Our results establish a critical role for the IC in speech-sound representation, consistent with the established role for the IC in the representation of communication signals in other animal models.

  5. Interaction between episodic and semantic memory networks in the acquisition and consolidation of novel spoken words.

    Science.gov (United States)

    Takashima, Atsuko; Bakker, Iske; van Hell, Janet G; Janzen, Gabriele; McQueen, James M

    2017-04-01

    When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and again after a delay of one week. Words learned with meanings were remembered better than those learned without meanings. Both episodic (hippocampus-dependent) and semantic (dependent on distributed neocortical areas) memory systems were utilised during recognition of the novel words. The extent to which the two systems were involved changed as a function of time and the amount of associated information, with more involvement of both systems for the meaningful words than for the form-only words after the one-week delay. These results suggest that the reason the meaningful words were remembered better is that their retrieval can benefit more from these two complementary memory systems. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Young children learning Spanish make rapid use of grammatical gender in spoken word recognition.

    Science.gov (United States)

    Lew-Williams, Casey; Fernald, Anne

    2007-03-01

    All nouns in Spanish have grammatical gender, with obligatory gender marking on preceding articles (e.g., la and el, the feminine and masculine forms of "the," respectively). Adult native speakers of languages with grammatical gender exploit this cue in on-line sentence interpretation. In a study investigating the early development of this ability, Spanish-learning children (34-42 months) were tested in an eye-tracking procedure. Presented with pairs of pictures with names of either the same grammatical gender (la pelota, "ball [feminine]"; la galleta, "cookie [feminine]") or different grammatical gender (la pelota; el zapato, "shoe [masculine]"), they heard sentences referring to one picture (Encuentra la pelota, "Find the ball"). The children were faster to orient to the referent on different-gender trials, when the article was potentially informative, than on same-gender trials, when it was not, and this ability was correlated with productive measures of lexical and grammatical competence. Spanish-learning children who can speak only 500 words already use gender-marked articles in establishing reference, a processing advantage characteristic of native Spanish-speaking adults.

  7. Let's all speak together! Exploring the masking effects of various languages on spoken word identification in multi-linguistic babble.

    Science.gov (United States)

    Gautreau, Aurore; Hoen, Michel; Meunier, Fanny

    2013-01-01

    This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures: an intelligibility task (at a -5 dB signal-to-noise ratio, SNR) and two lexical decision tasks (at -5 dB and 0 dB SNR) performed with spoken French target words. In these three experiments we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) produced either in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained spectro-temporal information similar to that of babble but lacked linguistic information. At -5 dB SNR, both tasks revealed significantly divergent results between the unknown languages, with Italian and French hindering French target-word identification to a similar extent, whereas Irish led to significantly better performance on these tasks. By comparing the performances obtained with speech and fluctuating-noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it revealed a linguistic effect only for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference differed. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.
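
    Mixing a target word with babble at a fixed SNR follows directly from the definition SNR_dB = 10*log10(P_signal / P_noise); at -5 dB the masker carries roughly 3.16 times the power of the speech. A minimal sketch, assuming equal-length waveform arrays at the same sample rate and a non-silent masker:

      import numpy as np

      def mix_at_snr(speech: np.ndarray, masker: np.ndarray, snr_db: float) -> np.ndarray:
          """Scale `masker` so that 10*log10(P_speech / P_masker) equals
          `snr_db`, then add it to the speech."""
          p_speech = np.mean(speech ** 2)
          p_masker = np.mean(masker ** 2)
          target_p = p_speech / (10 ** (snr_db / 10.0))
          return speech + masker * np.sqrt(target_p / p_masker)

      # At snr_db = -5, the required masker power is p_speech * 10**0.5, i.e. ~3.16x.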

  8. Oral narrative context effects on poor readers' spoken language performance: story retelling, story generation, and personal narratives.

    Science.gov (United States)

    Westerveld, Marleen F; Gillon, Gail T

    2010-04-01

    This investigation explored the effects of oral narrative elicitation context on children's spoken language performance. Oral narratives were produced by a group of 11 children with reading disability (aged between 7;11 and 9;3) and an age-matched control group of 11 children with typical reading skills in three different contexts: story retelling, story generation, and personal narratives. In the story retelling condition, the children listened to a story on tape while looking at the pictures in a book, before being asked to retell the story without the pictures. In the story generation context, the children were shown a picture containing a scene and were asked to make up their own story. Personal narratives were elicited with the help of photos and short narrative prompts. The transcripts were analysed at microstructure level on measures of verbal productivity, semantic diversity, and morphosyntax. Consistent with previous research, the results revealed no significant interactions between group and context, indicating that the two groups of children responded to the type of elicitation context in a similar way. There was a significant group effect, however, with the typical readers showing better performance overall on measures of morphosyntax and semantic diversity. There was also a significant effect of elicitation context with both groups of children producing the longest, linguistically most dense language samples in the story retelling context. Finally, the most significant differences in group performance were observed in the story retelling condition, with the typical readers outperforming the poor readers on measures of verbal productivity, number of different words, and percent complex sentences. The results from this study confirm that oral narrative samples can distinguish between good and poor readers and that the story retelling condition may be a particularly useful context for identifying strengths and weaknesses in oral narrative performance.

  9. Recognition without Identification for Words, Pseudowords and Nonwords

    Science.gov (United States)

    Arndt, Jason; Lee, Karen; Flora, David B.

    2008-01-01

    Three experiments examined whether the representations underlying recognition memory familiarity can be episodic in nature. Recognition without identification [Cleary, A. M., & Greene, R. L. (2000). Recognition without identification. "Journal of Experimental Psychology: Learning, Memory, and Cognition," 26, 1063-1069; Peynircioglu, Z. F. (1990).…

  10. Two-year-olds' sensitivity to subphonemic mismatch during online spoken word recognition.

    Science.gov (United States)

    Paquette-Smith, Melissa; Fecher, Natalie; Johnson, Elizabeth K

    2016-11-01

    Sensitivity to noncontrastive subphonemic detail plays an important role in adult speech processing, but little is known about children's use of this information during online word recognition. In two eye-tracking experiments, we investigate 2-year-olds' sensitivity to a specific type of subphonemic detail: coarticulatory mismatch. In Experiment 1, toddlers viewed images of familiar objects (e.g., a boat and a book) while hearing labels containing appropriate or inappropriate coarticulation. Inappropriate coarticulation was created by cross-splicing the coda of the target word onto the onset of another word that shared the same onset and nucleus (e.g., to create boat, the final consonant of boat was cross-spliced onto the initial CV of bone). We tested 24-month-olds and 29-month-olds in this paradigm. Both age groups behaved similarly, readily detecting the inappropriate coarticulation (i.e., showing better recognition of identity-spliced than cross-spliced items). In Experiment 2, we asked how children's sensitivity to subphonemic mismatch compared to their sensitivity to phonemic mismatch. Twenty-nine-month-olds were presented with targets that contained either a phonemic (e.g., the final consonant of boat was spliced onto the initial CV of bait) or a subphonemic mismatch (e.g., the final consonant of boat was spliced onto the initial CV of bone). Here, the subphonemic (coarticulatory) mismatch was not nearly as disruptive to children's word recognition as a phonemic mismatch. Taken together, our findings support the view that 2-year-olds, like adults, use subphonemic information to optimize online word recognition.
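
    The cross-splicing manipulation amounts to joining the initial CV of one recording to the coda of another at hand-picked cut points. The sketch below shows only the splicing mechanics under our own simplifying assumptions (sample-index cut points supplied by hand, both at least `ramp` samples from the relevant edge, and a short linear crossfade to avoid clicks); it is not the study's exact procedure:

      import numpy as np

      def cross_splice(cv_token: np.ndarray, coda_token: np.ndarray,
                       cv_cut: int, coda_cut: int, ramp: int = 32) -> np.ndarray:
          """Join the initial CV of `cv_token` (up to cv_cut) to the coda of
          `coda_token` (from coda_cut), overlap-adding `ramp` samples.
          E.g., the CV of 'bone' + the final /t/ of 'boat'."""
          head = cv_token[:cv_cut].astype(float)
          tail = coda_token[coda_cut:].astype(float)
          fade = np.linspace(0.0, 1.0, ramp)
          mixed = head[-ramp:] * (1.0 - fade) + tail[:ramp] * fade
          return np.concatenate([head[:-ramp], mixed, tail[ramp:]])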

  11. Vocabulary learning in a Yorkshire terrier: slow mapping of spoken words.

    Directory of Open Access Journals (Sweden)

    Ulrike Griebel

    Full Text Available Rapid vocabulary learning in children has been attributed to "fast mapping", with new words often claimed to be learned through a single presentation. As reported in 2004 in Science a border collie (Rico) not only learned to identify more than 200 words, but fast mapped the new words, remembering meanings after just one presentation. Our research tests the fast mapping interpretation of the Science paper based on Rico's results, while extending the demonstration of large vocabulary recognition to a lap dog. We tested a Yorkshire terrier (Bailey) with the same procedures as Rico, illustrating that Bailey accurately retrieved randomly selected toys from a set of 117 on voice command of the owner. Second we tested her retrieval based on two additional voices, one male, one female, with different accents that had never been involved in her training, again showing she was capable of recognition by voice command. Third, we did both exclusion-based training of new items (toys she had never seen before with names she had never heard before) embedded in a set of known items, with subsequent retention tests designed as in the Rico experiment. After Bailey succeeded on exclusion and retention tests, a crucial evaluation of true mapping tested items previously successfully retrieved in exclusion and retention, but now pitted against each other in a two-choice task. Bailey failed on the true mapping task repeatedly, illustrating that the claim of fast mapping in Rico had not been proven, because no true mapping task had ever been conducted with him. It appears that the task called retention in the Rico study only demonstrated success in retrieval by a process of extended exclusion.

  12. Vocabulary Learning in a Yorkshire Terrier: Slow Mapping of Spoken Words

    Science.gov (United States)

    Griebel, Ulrike; Oller, D. Kimbrough

    2012-01-01

    Rapid vocabulary learning in children has been attributed to “fast mapping”, with new words often claimed to be learned through a single presentation. As reported in 2004 in Science a border collie (Rico) not only learned to identify more than 200 words, but fast mapped the new words, remembering meanings after just one presentation. Our research tests the fast mapping interpretation of the Science paper based on Rico's results, while extending the demonstration of large vocabulary recognition to a lap dog. We tested a Yorkshire terrier (Bailey) with the same procedures as Rico, illustrating that Bailey accurately retrieved randomly selected toys from a set of 117 on voice command of the owner. Second we tested her retrieval based on two additional voices, one male, one female, with different accents that had never been involved in her training, again showing she was capable of recognition by voice command. Third, we did both exclusion-based training of new items (toys she had never seen before with names she had never heard before) embedded in a set of known items, with subsequent retention tests designed as in the Rico experiment. After Bailey succeeded on exclusion and retention tests, a crucial evaluation of true mapping tested items previously successfully retrieved in exclusion and retention, but now pitted against each other in a two-choice task. Bailey failed on the true mapping task repeatedly, illustrating that the claim of fast mapping in Rico had not been proven, because no true mapping task had ever been conducted with him. It appears that the task called retention in the Rico study only demonstrated success in retrieval by a process of extended exclusion. PMID:22363421

  13. Spatiotemporal Convergence of Semantic Processing in Reading and Speech Perception

    OpenAIRE

    Vartiainen, J.; Parviainen, T.; Salmelin, Riitta

    2009-01-01

    Retrieval of word meaning from the semantic system and its integration with context are often assumed to be shared by spoken and written words. How is modality-independent semantic processing manifested in the brain, spatially and temporally? Time-sensitive neuroimaging allows tracking of neural activation sequences. Use of semantically related versus unrelated word pairs or sentences ending with a semantically highly or less plausible word, in separate studies of the auditory and visual moda...

  14. "I Guess I "Do" Know a Good Story": Re-Envisioning Writing Process with Native American Students and Communities

    Science.gov (United States)

    Stanton, Christine Rogers; Sutton, Karl

    2012-01-01

    In two projects described in this article, the authors discuss the use of Photovoice and Elder Interviews to draw upon visual and spoken forms of community-based literacy, generate ideas for written projects, promote a connection to community and culture, and engage students in critical analysis of writing process. Both projects took place in…

  15. Similarities and Differences in the Processing of Written Text by Skilled and Less Skilled Readers with Prelingual Deafness

    Science.gov (United States)

    Miller, Paul

    2013-01-01

    This study focuses on similarities and differences in the processing of written text by individuals with prelingual deafness from different reading levels that used Hebrew as their first spoken language and Israeli Sign Language as their primary manual communication mode. Data were gathered from three sources, including (a) a sentence…

  16. Reading Performance Is Predicted by More Than Phonological Processing

    Directory of Open Access Journals (Sweden)

    Michelle Y. Kibby

    2014-09-01

    Full Text Available We compared three phonological processing components (phonological awareness, rapid automatized naming, and phonological memory), verbal working memory, and attention control in terms of how well they predict the various aspects of reading (word recognition, pseudoword decoding, fluency, and comprehension) in a mixed sample of 182 children ages 8-12 years. Participants displayed a wide range of reading ability and attention control. Multiple regression was used to determine how well the phonological processing components, verbal working memory, and attention control predict reading performance. All equations were highly significant. Phonological memory predicted word identification and decoding. In addition, phonological awareness and rapid automatized naming predicted every aspect of reading assessed, supporting the notion that phonological processing is a core contributor to reading ability. Nonetheless, phonological processing was not the only predictor of reading performance: verbal working memory predicted fluency, decoding and comprehension, and attention control predicted fluency. Based upon our results, when using Baddeley's model of working memory it appears that the phonological loop contributes to basic reading ability, whereas the central executive contributes to fluency and comprehension, along with decoding. Attention control was of interest because some children with attention problems have poor reading ability even when the impairment is not sufficient to warrant an ADHD diagnosis. Our finding that attention control predicts reading fluency is consistent with prior research showing that sustained attention plays a role in fluency. Taken together, our results suggest that reading is a highly complex skill that entails more than phonological processing to perform well.
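
    The analytic approach is ordinary multiple regression with one equation per reading outcome. A minimal sketch with simulated data (the variable names and the simulated coefficient pattern are ours, loosely echoing the reported fluency equation; statsmodels is one standard choice):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 182  # sample size matching the study
      names = ["phon_awareness", "ran", "phon_memory", "verbal_wm", "attention"]
      X = rng.normal(size=(n, len(names)))
      # Simulated fluency: loads on awareness, RAN, verbal WM, and attention.
      y = (0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.25 * X[:, 3] + 0.2 * X[:, 4]
           + rng.normal(size=n))

      model = sm.OLS(y, sm.add_constant(X)).fit()
      print(model.summary(xname=["const"] + names))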

  17. Discourse segmentation and the management of multiple tasks in single episodes of air traffic controller-pilot spoken radio communication

    Directory of Open Access Journals (Sweden)

    Paul A. Falzon

    2009-06-01

    Full Text Available Episodes of VHF radio-mediated pilot-controller spoken communication in which multiple tasks are conducted are engendered in and through the skilful deployment and combination, by the parties to the talk, of multiple orders of discourse segmentation. These orders of segmentation are manifest at the levels of transmission design and sequential organisation. Both features are analysed from a Conversation Analytic standpoint in order to track their segment-by-segment genesis, development and completion. From the analysis it emerges that, in addition to the serial type of sequential organisation described by Schegloff (1986), there exists an alternative form of organisation that enables tasks to be managed in a quasi-parallel manner, and which affords controllers and pilots a number of practical advantages in the conduct of their radio-mediated service encounters.

  18. Effect of orthographic processes on letter-identity and letter-position encoding in dyslexic children

    Directory of Open Access Journals (Sweden)

    Caroline eReilhac

    2012-05-01

    Full Text Available The ability to identify letters and encode their positions is a crucial step of the word recognition process. However, despite their word identification problems, the ability of dyslexic children to encode letter identity and letter position within strings has not been systematically investigated. This study aimed at filling this gap and further explored how letter-identity and letter-position encoding is modulated by letter context in developmental dyslexia. For this purpose, a letter-string comparison task was administered to French dyslexic children and two control groups matched on chronological age (CA) and reading age (RA). Children had to judge whether two successively and briefly presented 4-letter strings were identical or different. Letter position and letter identity were manipulated through the transposition (e.g., RTGM vs. RMGT) or the substitution (e.g., TSHF vs. TGHD) of two letters. Non-words, pseudo-words and words were used as stimuli to investigate sub-lexical and lexical effects on letter encoding. Dyslexic children showed both substitution and transposition detection problems relative to CA controls. A substitution advantage over transpositions was found only for words in dyslexic children, whereas it extended to pseudo-words in RA controls and to all types of items in CA controls. Letters were better identified in the dyslexic group when belonging to orthographically familiar strings. Letter-position encoding was severely impaired in dyslexic children, who showed no word-context effect, in contrast to CA controls. Overall, the current findings point to a strong letter-identity and letter-position encoding disorder in developmental dyslexia.
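
    Stimulus pairs of this kind are easy to generate programmatically; the sketch below reproduces the two manipulations on consonant strings (the letter inventory and the swapped positions are our own illustrative assumptions):

      import random

      CONSONANTS = "BCDFGHJKLMNPQRSTVWXZ"

      def transpose(s: str, i: int, j: int) -> str:
          """Letter-position change: swap the letters at i and j,
          preserving identities (e.g., RTGM -> RMGT for i=1, j=3)."""
          chars = list(s)
          chars[i], chars[j] = chars[j], chars[i]
          return "".join(chars)

      def substitute(s: str, i: int, j: int) -> str:
          """Letter-identity change: replace the letters at i and j with
          letters absent from the string (e.g., TSHF -> TGHD)."""
          chars = list(s)
          for k in (i, j):
              chars[k] = random.choice([c for c in CONSONANTS if c not in s])
          return "".join(chars)

      print(transpose("RTGM", 1, 3))   # RMGT
      print(substitute("TSHF", 1, 3))  # e.g., TGHD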

  19. Linking mathematical modeling with human neuroimaging to segregate verbal working memory maintenance processes from stimulus encoding.

    Science.gov (United States)

    McKenna, Benjamin S; Brown, Gregory G; Drummond, Sean P A; Turner, Travis H; Mano, Quintino R

    2013-03-01

    A fundamental dissociation for most working memory (WM) theories involves the separation of sensory-perceptual encoding of stimulus information from the maintenance of this information. The present paper reports tests of this separability hypothesis for visually presented pseudowords at both mathematical and neuroimaging levels of analysis. The levels of analysis were linked by two experimental manipulations (visual degradation and pseudoword-length variation) that coupled findings from a mathematical modeling study of WM, performed in a separate sample, to findings from an event-related functional MRI (fMRI) study reported in the present paper. Results from the mathematical modeling study generated parametric signatures of stimulus encoding and of WM rehearsal and displacement. These signatures led to specific predictions about neurophysiological responses to the study manipulations in a priori regions of interest (ROIs). Results demonstrated the predicted dissociations of activation signatures in several ROIs. Significant patterns of brain response mirroring the encode signature were observed only during the task encode interval and only in the visual cortex and posterior fusiform gyrus. In contrast, significant brain response mirroring the rehearsal/displacement signature was observed only in the dorsolateral prefrontal cortex, inferior frontal gyrus, and supramarginal gyrus. The present findings support the separability hypothesis insofar as brain regions that underlie sensory-perceptual processes demonstrated encode signatures, whereas brain regions that support WM maintenance demonstrated the rehearsal/displacement signature. These results also provide evidence for the utility of combining mathematical modeling with fMRI to integrate information across cognitive and neural levels of analysis. PsycINFO Database Record (c) 2013 APA, all rights reserved.
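
    The general linking strategy, regressing ROI responses on model-derived signatures, can be sketched schematically. This is our own minimal illustration of the idea, not the authors' pipeline; a real analysis would, among other things, convolve the signatures with a hemodynamic response function:

      import numpy as np

      def signature_betas(roi_ts: np.ndarray, encode_sig: np.ndarray,
                          rehearse_sig: np.ndarray) -> np.ndarray:
          """Least-squares fit of an ROI time series on an intercept plus
          model-derived encode and rehearsal/displacement signatures; the
          betas index how strongly the region tracks each process."""
          X = np.column_stack([np.ones(len(roi_ts)), encode_sig, rehearse_sig])
          betas, *_ = np.linalg.lstsq(X, roi_ts, rcond=None)
          return betas  # [intercept, encode, rehearsal/displacement]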

  20. Language as a multimodal phenomenon: implications for language learning, processing and evolution.

    Science.gov (United States)

    Vigliocco, Gabriella; Perniss, Pamela; Vinson, David

    2014-09-19

    Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language is composed wholly of an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms and, finally, iconicity is the norm, rather than the exception in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  1. Neural Processing of Emotional Prosody across the Adult Lifespan.

    Science.gov (United States)

    Demenescu, Liliana Ramona; Kato, Yutaka; Mathiak, Klaus

    2015-01-01

    Emotion recognition deficits emerge with increasing age, in particular a decline in the identification of sadness. However, little is known about the age-related changes of emotion processing in sensory, affective, and executive brain areas. This functional magnetic resonance imaging (fMRI) study investigated neural correlates of auditory processing of prosody across the adult lifespan. Unattended detection of emotional prosody changes was assessed in 21 young (age range: 18-35 years), 19 middle-aged (age range: 36-55 years), and 15 older (age range: 56-75 years) adults. Pseudowords uttered with neutral prosody were standards in an oddball paradigm with angry, sad, happy, and gender deviants (total 20% deviants). Changes in emotional prosody and voice gender elicited bilateral superior temporal gyri (STG) responses reflecting automatic encoding of prosody. At the right STG, responses to sad deviants decreased linearly with age, whereas happy events exhibited a nonlinear relationship. In contrast to the behavioral data, no age-by-sex interaction emerged in the neural networks. The aging decline in the processing of emotional prosodic cues emerges already at an early automatic stage of information processing, at the level of the auditory cortex. However, top-down modulation may introduce an additional perceptual bias, for example towards positive stimuli, and may depend on context factors such as the listener's sex.

  2. Neural Processing of Emotional Prosody across the Adult Lifespan

    Directory of Open Access Journals (Sweden)

    Liliana Ramona Demenescu

    2015-01-01

    Full Text Available Emotion recognition deficits emerge with increasing age, in particular a decline in the identification of sadness. However, little is known about the age-related changes of emotion processing in sensory, affective, and executive brain areas. This functional magnetic resonance imaging (fMRI) study investigated neural correlates of auditory processing of prosody across the adult lifespan. Unattended detection of emotional prosody changes was assessed in 21 young (age range: 18–35 years), 19 middle-aged (age range: 36–55 years), and 15 older (age range: 56–75 years) adults. Pseudowords uttered with neutral prosody were standards in an oddball paradigm with angry, sad, happy, and gender deviants (total 20% deviants). Changes in emotional prosody and voice gender elicited bilateral superior temporal gyri (STG) responses reflecting automatic encoding of prosody. At the right STG, responses to sad deviants decreased linearly with age, whereas happy events exhibited a nonlinear relationship. In contrast to the behavioral data, no age-by-sex interaction emerged in the neural networks. The aging decline in the processing of emotional prosodic cues emerges already at an early automatic stage of information processing, at the level of the auditory cortex. However, top-down modulation may introduce an additional perceptual bias, for example towards positive stimuli, and may depend on context factors such as the listener’s sex.
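
    A trial sequence for such an oddball block can be generated in a few lines. The no-two-deviants-in-a-row constraint and the trial count below are our own common-practice assumptions; the draw probability is inverted so the realized deviant rate matches the reported 20%:

      import random

      def oddball_sequence(n_trials=500, target_rate=0.20,
                           deviants=("angry", "sad", "happy", "gender")):
          """Neutral standards with four deviant types and no two deviants
          in a row. Under that constraint a draw probability p yields a
          realized deviant rate of p / (1 + p), so we invert: p = r / (1 - r)."""
          p = target_rate / (1.0 - target_rate)
          seq, prev_deviant = [], False
          for _ in range(n_trials):
              if not prev_deviant and random.random() < p:
                  seq.append(random.choice(deviants))
                  prev_deviant = True
              else:
                  seq.append("standard")
                  prev_deviant = False
          return seq

      trials = oddball_sequence()
      print(sum(t != "standard" for t in trials) / len(trials))  # ~0.20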

  3. ON IF AND WHETHER COMPLEMENT CLAUSES OF SEE, WONDER, AND KNOW IN CONTEMPORARY SPOKEN ACADEMIC AMERICAN ENGLISH: A CORPUS-BASED STUDY

    Directory of Open Access Journals (Sweden)


    2013-10-01

    Full Text Available The main goal of this article is to investigate the distribution of two apparently vying finite complementation patterns, if and whether clauses, accompanying three mental verbs (see, wonder, and know) in the MICASE corpus of spoken academic American English. The default introspective theoretical assumption that the two investigated complementizers are in free distribution was not corroborated by the empirical inquiry. The three verbs do evince linguistic preferences regarding complementation, preferences which depend on a number of factors: the valency pattern of a given verb, co(n)text, sub-genre, and the like. Moreover, the investigation also appears to have demonstrated that, with respect to the complementation of see, wonder, and know, spoken academic English bears a greater resemblance to everyday conversation than to written academic English, thus corroborating the contention that field prevails over mode (to employ Hallidayan parlance). Furthermore, the inquiry into the semantics of the three mental verbs indicates that their meanings are affected by the genre, inasmuch as the verbs tend to depart from their default dictionary definitions by conveying less-prototypical meanings. This finding, in turn, provides a rationale for probing into the pragmatics and functions of the three verbs. It must be stressed that the results should not be generalised due to the relatively small corpus size, which implies that further research is indicated.

  4. Project ASPIRE: Spoken Language Intervention Curriculum for Parents of Low-socioeconomic Status and Their Deaf and Hard-of-Hearing Children.

    Science.gov (United States)

    Suskind, Dana L; Graf, Eileen; Leffel, Kristin R; Hernandez, Marc W; Suskind, Elizabeth; Webber, Robert; Tannenbaum, Sally; Nevins, Mary Ellen

    2016-02-01

    To investigate the impact of a spoken language intervention curriculum that aims to improve the language environments that caregivers of low socioeconomic status (SES) provide for their deaf and hard-of-hearing (D/HH) children with cochlear implants (CI) and hearing aids (HA), in support of the children's spoken language development. The study used a quasiexperimental design in a tertiary setting. Thirty-two caregiver-child dyads of low SES (as defined by caregiver education ≤ MA/MS and the income proxies of Medicaid or WIC/LINK) took part in a curriculum designed to improve D/HH children's early language environments. Outcome measures were changes in caregiver knowledge of child language development (questionnaire scores) and language behavior (word types, word tokens, utterances, mean length of utterance [MLU], LENA Adult Word Count [AWC], and Conversational Turn Count [CTC]). There were significant increases in caregiver questionnaire scores as well as utterances, word types, word tokens, and MLU in the treatment but not the control group, and no significant changes in LENA outcomes. Results partially support the notion that caregiver-directed language enrichment interventions can change the home language environments of D/HH children from low-SES backgrounds. Further longitudinal studies are necessary.

  5. A randomized trial comparison of the effects of verbal and pictorial naturalistic communication strategies on spoken language for young children with autism.

    Science.gov (United States)

    Schreibman, Laura; Stahmer, Aubyn C

    2014-05-01

    Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally-based intervention, Pivotal Response Training (PRT) to a pictorially-based behavioral intervention, the Picture Exchange Communication System (PECS) on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.

  6. Failing to get the gist of what's being said: background noise impairs higher-order cognitive processing.

    Science.gov (United States)

    Marsh, John E; Ljung, Robert; Nöstl, Anatole; Threadgold, Emma; Campbell, Tom A

    2015-01-01

    A dynamic interplay is known to exist between auditory processing and human cognition. For example, prior investigations of speech-in-noise have revealed there is more to learning than just listening: Even if all words within a spoken list are correctly heard in noise, later memory for those words is typically impoverished. These investigations supported a view that there is a "gap" between the intelligibility of speech and memory for that speech. Here, the notion was that this gap between speech intelligibility and memorability is a function of the extent to which the spoken message seizes limited immediate memory resources (e.g., Kjellberg et al., 2008). Accordingly, the more difficult the processing of the spoken message, the less resources are available for elaboration, storage, and recall of that spoken material. However, it was not previously known how increasing that difficulty affected the memory processing of semantically rich spoken material. This investigation showed that noise impairs higher levels of cognitive analysis. A variant of the Deese-Roediger-McDermott procedure that encourages semantic elaborative processes was deployed. On each trial, participants listened to a 36-item list comprising 12 words blocked by each of 3 different themes. Each of those 12 words (e.g., bed, tired, snore…) was associated with a "critical" lure theme word that was not presented (e.g., sleep). Word lists were either presented without noise or at a signal-to-noise ratio of 5 decibels upon an A-weighting. Noise reduced false recall of the critical words, and decreased the semantic clustering of recall. Theoretical and practical implications are discussed.

  7. Failing to Get the Gist of What’s Being Said: Background Noise Impairs Higher Order Cognitive Processing

    Directory of Open Access Journals (Sweden)

    John Everett Marsh

    2015-05-01

    Full Text Available A dynamic interplay is known to exist between auditory processing and human cognition. For example, prior investigations of speech-in-noise have revealed there is more to learning than just listening: even if all words within a spoken list are correctly heard in noise, later memory for those words is typically impoverished. At signal-to-noise ratios low enough that listeners could still identify the words, participants could not necessarily remember them. These investigations supported a view that there is a gap between the intelligibility of speech and memory for that speech. Here, the notion was that this gap between speech intelligibility and memorability is a function of the extent to which the spoken message seizes limited immediate memory resources (e.g., Kjellberg, Ljung, & Hallman, 2008). Accordingly, the more difficult the processing of the spoken message, the fewer resources are available for elaboration, storage, and recall of that spoken material. However, it was not previously known how increasing that difficulty affected the memory processing of semantically rich spoken material. This investigation showed that noise impairs higher levels of cognitive analysis. A variant of the Deese-Roediger-McDermott procedure that encourages semantic elaborative processes was deployed. On each trial, participants listened to a 36-item list comprising 12 words blocked by each of 3 different themes. Each of those 12 words (e.g., bed, tired, snore…) was associated with a critical lure theme word that was not presented (e.g., sleep). Word lists were either presented without noise or at a signal-to-noise ratio of 5 decibels upon an A-weighting. Noise reduced false recall of the critical words and decreased the semantic clustering of recall. Theoretical and practical implications are discussed.

  8. Evaluation of innovation processes

    Directory of Open Access Journals (Sweden)

    Jakub Tabas

    2012-01-01

    Full Text Available At present, innovation is spoken of as an engine of the world economy, because innovations are transforming not only individual business entities but whole industries. Innovation has become a necessity for business entities in order to survive in volatile, challenging markets; in this way, innovations are a driving force of company performance. The problem that arises here is the question of measuring an innovation's effect on the financial performance of a company, or of selecting between two or more possible variants of an innovation's realization. Authors focused on innovation processes are divided into two groups in their attitudes towards the question of the influence of innovations on the financial performance of companies. One group presents the idea that no reliable measurement is possible or efficient. The second group presents methods that are theoretically applicable to this measurement, but these authors base their approaches mostly on methods for measuring the effectiveness of investments, or they suggest employing indicators or ratios that are not clearly connected with the outcome of the innovation process. The aim of the submitted article is to compare different approaches to the evaluation of innovation processes. The authors compare various approaches and, by use of analysis and synthesis, determine their own method for measuring the outcome of an innovation process.

  9. Brain bases of morphological processing in young children.

    Science.gov (United States)

    Arredondo, Maria M; Ip, Ka I; Shih Ju Hsu, Lucy; Tardif, Twila; Kovelman, Ioulia

    2015-08-01

    How does the developing brain support the transition from spoken language to print? Two spoken language abilities form the initial base of child literacy across languages: knowledge of language sounds (phonology) and knowledge of the smallest units that carry meaning (morphology). While phonology has received much attention from the field, the brain mechanisms that support morphological competence for learning to read remain largely unknown. In the present study, young English-speaking children completed an auditory morphological awareness task behaviorally (n = 69, ages 6-12) and in fMRI (n = 16). The data revealed two findings: First, children with better morphological abilities showed greater activation in left temporoparietal regions previously thought to be important for supporting phonological reading skills, suggesting that this region supports multiple language abilities for successful reading acquisition. Second, children showed activation in left frontal regions previously found active in young Chinese readers, suggesting morphological processes for reading acquisition might be similar across languages. These findings offer new insights for developing a comprehensive model of how spoken language abilities support children's reading acquisition across languages. © 2015 Wiley Periodicals, Inc.

  10. Altering Practices to Include Bimodal-bilingual (ASL-Spoken English) Programming at a Small School for the Deaf in Canada.

    Science.gov (United States)

    Priestley, Karen; Enns, Charlotte; Arbuckle, Shauna

    2018-01-01

    Bimodal-bilingual programs are emerging as one way to meet broader needs and provide expanded language, educational, and social-emotional opportunities for students who are deaf and hard of hearing (Marschark, Tang, & Knoors, 2014; Paludneviciene & Harris, 2011). However, there is limited research on students' spoken language development, signed language growth, academic outcomes, or the social-emotional factors associated with these programs (Marschark, Tang, & Knoors, 2014; Nussbaum & Scott, 2011; Spencer & Marschark, 2010). The purpose of this case study was to look at formal and informal student outcomes as well as staff and parent perceptions during the first 3 years of implementing a bimodal-bilingual (ASL and spoken English) program within an ASL milieu at a small school for the deaf. Speech and language assessment results for five students were analyzed over a 3-year period and indicated that the students made significant positive gains in all areas, although results were variable. Staff and parent

  11. The effect of decreased interletter spacing on orthographic processing.

    Science.gov (United States)

    Montani, Veronica; Facoetti, Andrea; Zorzi, Marco

    2015-06-01

    There is growing interest in how perceptual factors such as the spacing between letters within words modulate performance in visual word recognition and reading aloud. Extra-large letter spacing can strongly improve the reading performance of dyslexic children, and a small increase with respect to the standard spacing seems beneficial even for skilled word recognition in adult readers. In the present study we examined the effect of decreased letter spacing on perceptual identification and lexical decision tasks. Identification in the decreased spacing condition was slower than identification of normally spaced strings, thereby confirming that the reciprocal interference among letters located in close proximity (crowding) poses critical constraints on visual word processing. Importantly, the effect of spacing was not modulated by string length, suggesting that the locus of the spacing effect is at the level of letter detectors. Moreover, the processing of crowded letters was facilitated by top-down support from orthographic lexical representation as indicated by the fact that decreased spacing affected pseudowords significantly more than words. Conversely, in the lexical decision task only word responses were affected by the spacing manipulation. Overall, our findings support the hypothesis that increased crowding is particularly harmful for phonological decoding, thereby adversely affecting reading development in dyslexic children.
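
    As a rough illustration of the stimulus manipulation described above (not the authors' actual materials), a decreased-spacing item can be rendered by scaling each letter's advance width; a minimal Python sketch, assuming Pillow is installed and a TrueType font file is available (font path, sizes and the example word are hypothetical):

      # Render a letter string with a given interletter-spacing factor
      # (1.0 = standard spacing, values below 1.0 = decreased spacing).
      from PIL import Image, ImageDraw, ImageFont

      FONT_PATH = "DejaVuSans.ttf"  # assumption: point this at a font on your system

      def render_string(text, spacing_factor=1.0, font_size=32):
          font = ImageFont.truetype(FONT_PATH, font_size)
          canvas = Image.new("L", (font_size * len(text) * 2, font_size * 2), color=255)
          draw = ImageDraw.Draw(canvas)
          x = font_size // 2
          for ch in text:
              draw.text((x, font_size // 2), ch, font=font, fill=0)
              # advance by this letter's own width, scaled by the spacing factor
              x += int(draw.textlength(ch, font=font) * spacing_factor)
          return canvas

      # e.g. render_string("lamp", spacing_factor=0.8) yields a crowded item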

  12. Do as I say! …but who says what I should say - or do? On the definition of a standard spoken command vocabulary for ICT devices and services

    NARCIS (Netherlands)

    Niman, B. von; Chaplin, C.; Collado-Vega, J.A.; Groh, L.; McGlashan, S.; Mellors, W.; Leeuwen, D.A. van

    2002-01-01

    This paper describes the development of a new ETSI Standard (ES): Generic spoken command vocabulary for ICT devices and services. Its basic approach focuses on simplifying the learning procedure for end-users, thereby allowing for reuse of basic knowledge between different terminal devices and
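
    The core idea of such a standard, sketched loosely below, is that one small spoken command set maps to the same actions on every compliant device, so users can transfer what they have learned; the commands and actions here are invented examples, not the actual ETSI ES vocabulary:

      # Invented examples of generic spoken commands shared across ICT devices,
      # so knowledge learned on one terminal transfers to another.
      GENERIC_COMMANDS = {
          "stop": "halt the current activity",
          "help": "list or explain the available commands",
          "back": "return to the previous state",
      }

      def handle_utterance(utterance: str) -> str:
          """Look up a recognized spoken command; fall back gracefully."""
          return GENERIC_COMMANDS.get(utterance.strip().lower(), "unrecognized command")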

  13. TEACHING TURKISH AS SPOKEN IN TURKEY TO TURKIC SPEAKERS - TÜRK DİLLİLERE TÜRKİYE TÜRKÇESİ ÖĞRETİMİ NASIL OLMALIDIR?

    Directory of Open Access Journals (Sweden)

    Ali TAŞTEKİN

    2015-12-01

    Full Text Available Attributing different titles to the activity of teaching Turkish to non-native speakers is related to the perspective of those who conduct this activity. If Turkish language teaching centres are sub-units of the Schools of Foreign Languages and Departments of Foreign Languages of our universities, or if teachers have a foreign language background, then the title “Teaching Turkish as a Foreign Language” is adopted and claimed to be universal. In determining success at teaching and learning, the psychological perception of the educational activity and the associational power of the words used are far more important factors than the teacher, students, educational environment and educational tools. For this reason, avoiding the negative connotations of the adjective “foreign” in the activity of teaching foreigners Turkish as spoken in Turkey would be beneficial. In order for the activity of Teaching Turkish as Spoken in Turkey to Turkic Speakers to be successful, it is crucial to attend to the formal and contextual quality of the books written for this purpose. Almost none of the course books and supplementary books in the field of teaching Turkish to non-native speakers have taken Teaching Turkish as Spoken in Turkey to Turkic Speakers into consideration. The books written for the purpose of teaching Turkish to non-native speakers should be examined thoroughly in terms of content and method and should be organized in accordance with the purpose and level of readiness of the target audience. Activities of Teaching Turkish as Spoken in Turkey to Turkic Speakers are still conducted at public and private primary and secondary schools and colleges, as well as in private courses, by self-taught teachers trained within a master-apprentice relationship. Turkic populations who had long been parted by necessity have found the opportunity to reunite and turn towards common objectives after the dissolution of the Union of Soviet Socialist Republics. This recent

  14. The impact of inverted text on visual word processing: An fMRI study.

    Science.gov (United States)

    Sussman, Bethany L; Reddigari, Samir; Newman, Sharlene D

    2018-06-01

    Visual word recognition has been studied for decades. One question that has received limited attention is how different text presentation orientations disrupt word recognition. By examining how word recognition processes may be disrupted by different text orientations, it is hoped that new insights can be gained concerning the process. Here, we examined the impact of rotating and inverting text on the neural network responsible for visual word recognition, focusing primarily on a region of the occipito-temporal cortex referred to as the visual word form area (VWFA). A lexical decision task was employed in which words and pseudowords were presented in one of three orientations (upright, rotated or inverted). The results demonstrate that inversion caused the greatest disruption of visual word recognition processes. Both rotated and inverted text elicited increased activation in spatial attention regions within the right parietal cortex. However, inverted text recruited phonological and articulatory processing regions within the left inferior frontal and left inferior parietal cortices. Finally, the VWFA was found not to behave like the fusiform face area, in that unusual text orientations resulted in increased rather than decreased activation. It is hypothesized here that VWFA activation is modulated by feedback from linguistic processes. Copyright © 2018 Elsevier Inc. All rights reserved.
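
    The three orientation conditions can be sketched as transformations of one upright word image; the exact angles are assumptions (the abstract does not specify them), with "rotated" taken as 90 degrees and "inverted" as 180 degrees:

      # Derive the three presentation conditions from an upright word image.
      from PIL import Image

      def orientation_conditions(upright: Image.Image) -> dict:
          return {
              "upright": upright,
              "rotated": upright.rotate(90, expand=True),  # assumed rotation angle
              "inverted": upright.rotate(180),             # upside-down text
          }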

  15. How appropriate are the English language test requirements for non-UK-trained nurses? A qualitative study of spoken communication in UK hospitals.

    Science.gov (United States)

    Sedgwick, Carole; Garner, Mark

    2017-06-01

    Non-native speakers of English who hold nursing qualifications from outside the UK are required to provide evidence of English language competence by achieving a minimum overall score of Band 7 on the International English Language Testing System (IELTS) academic test. To describe the English language required to deal with the daily demands of nursing in the UK. To compare these abilities with the stipulated levels on the language test. A tracking study was conducted with 4 nurses, and focus groups with 11 further nurses. The transcripts of the interviews and focus groups were analysed thematically for recurrent themes. These findings were then compared with the requirements of the IELTS spoken test. The study was conducted outside the participants' working shifts in busy London hospitals. The participants in the tracking study were selected opportunistically; all were trained in non-English speaking countries. Snowball sampling was used for the focus group participants, of whom 4 were non-native and 7 native speakers of English. In the tracking study, each of the 4 nurses was interviewed on four occasions, outside the workplace, and as close to the end of a shift as possible. They were asked to recount their spoken interactions during the course of their shift. The participants in the focus groups were asked to describe their typical interactions with patients, family members, doctors, and nursing colleagues. They were prompted to recall specific instances of frequently-occurring communication problems. All interactions were audio-recorded, with the participants' permission, and transcribed. Nurses are at the centre of communication for patient care. They have to use appropriate registers to communicate with a range of health professionals, patients and their families. They must elicit information, calm and reassure, instruct, check procedures, ask for and give opinions, agree and disagree. Politeness strategies are needed to avoid threats to face. They participate in medical

  16. How the stigma of low literacy can impair patient-professional spoken interactions and affect health: insights from a qualitative investigation.

    Science.gov (United States)

    Easton, Phyllis; Entwistle, Vikki A; Williams, Brian

    2013-08-16

    Low literacy is a significant problem across the developed world. A considerable body of research has reported associations between low literacy and less appropriate access to healthcare services, lower likelihood of self-managing health conditions well, and poorer health outcomes. There is a need to explore the previously neglected perspectives of people with low literacy to help explain how low literacy can lead to poor health, and to consider how to improve the ability of health services to meet their needs. Two-stage qualitative study. In-depth individual interviews followed by focus groups to confirm analysis and develop suggestions for service improvements. A purposive sample of 29 adults with English as their first language who had sought help with literacy was recruited from an Adult Learning Centre in the UK. Over and above the well-documented difficulties that people with low literacy can have with the written information and complex explanations and instructions they encounter as they use health services, the stigma of low literacy had significant negative implications for participants' spoken interactions with healthcare professionals. Participants described various difficulties in consultations, some of which had impacted negatively on their broader healthcare experiences and abilities to self-manage health conditions. Some communication difficulties were apparently perpetuated or exacerbated because participants limited their conversational engagement and used a variety of strategies to cover up their low literacy that could send misleading signals to health professionals. Participants' biographical narratives revealed that the ways in which they managed their low literacy in healthcare settings, as in other social contexts, stemmed from highly negative experiences with literacy-related stigma, usually from their schooldays onwards. They also suggest that literacy-related stigma can significantly undermine mental wellbeing by prompting self

  17. Shielding voices: The modulation of binding processes between voice features and response features by task representations.

    Science.gov (United States)

    Bogon, Johanna; Eisenbarth, Hedwig; Landgraf, Steffen; Dreisbach, Gesine

    2017-09-01

    Vocal events offer not only semantic-linguistic content but also information about the identity and the emotional-motivational state of the speaker. Furthermore, most vocal events have implications for our actions and therefore include action-related features. But the relevance and irrelevance of vocal features vary from task to task. The present study investigates binding processes for perceptual and action-related features of spoken words and their modulation by the task representation of the listener. Participants reacted with two response keys to eight different words spoken by a male or a female voice (Experiment 1) or spoken by an angry or neutral male voice (Experiment 2). There were two instruction conditions: half of the participants learned eight stimulus-response mappings by rote (SR), and half applied a binary task rule (TR). In both experiments, SR-instructed participants showed clear evidence of binding between voice and response features, indicated by an interaction between the irrelevant voice feature and the response. By contrast, as indicated by a three-way interaction with instruction, no such binding was found in the TR-instructed group. These results are suggestive of binding and shielding as two adaptive mechanisms that ensure successful communication and action in a dynamic social environment.
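
    The contrast between the two instruction conditions can be made concrete with a sketch: the same eight word-to-key mappings are either memorized item by item (SR) or derived from one binary rule (TR). The words, keys and rule below are hypothetical, since the abstract does not report them:

      # SR condition: eight stimulus-response mappings learned by rote.
      SR_MAPPING = {
          "Hund": "left", "Katze": "left", "Vogel": "left", "Fisch": "left",
          "Tisch": "right", "Lampe": "right", "Stuhl": "right", "Fenster": "right",
      }

      # TR condition: one binary rule (living vs. non-living, purely for
      # illustration) yields the same responses without rote memorization.
      LIVING = {"Hund", "Katze", "Vogel", "Fisch"}

      def tr_response(word: str) -> str:
          return "left" if word in LIVING else "right"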

  18. Transcription and Annotation of a Japanese Accented Spoken Corpus of L2 Spanish for the Development of CAPT Applications

    Science.gov (United States)

    Carranza, Mario

    2016-01-01

    This paper addresses the process of transcribing and annotating spontaneous non-native speech with the aim of compiling a training corpus for the development of Computer Assisted Pronunciation Training (CAPT) applications, enhanced with Automatic Speech Recognition (ASR) technology. To better adapt ASR technology to CAPT tools, the recognition…
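
    One plausible shape for a single pronunciation-error annotation in such a corpus is sketched below; the real tiers, labels and transcription conventions are not described in this truncated record, so all fields and values are assumptions:

      # A hypothetical record for one annotated mispronunciation.
      from dataclasses import dataclass

      @dataclass
      class PronAnnotation:
          word: str        # target Spanish word
          canonical: str   # expected phone sequence
          realized: str    # phones the L1-Japanese speaker actually produced
          error_type: str  # e.g. "substitution", "epenthesis"

      ann = PronAnnotation(word="perro", canonical="p e rr o",
                           realized="p e l o", error_type="substitution")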

  19. Phonological awareness development in children with and without spoken language difficulties: A 12-month longitudinal study of German-speaking pre-school children.

    Science.gov (United States)

    Schaefer, Blanca; Stackhouse, Joy; Wells, Bill

    2017-10-01

    There is strong empirical evidence that English-speaking children with spoken language difficulties (SLD) often have phonological awareness (PA) deficits. The aim of this study was to explore longitudinally if this is also true of pre-school children speaking German, a language that makes extensive use of derivational morphemes which may impact on the acquisition of different PA levels. Thirty 4-year-old children with SLD were assessed on 11 PA subtests at three points over a 12-month period and compared with 97 four-year-old typically developing (TD) children. The TD-group had a mean percentage correct of over 50% for the majority of tasks (including phoneme tasks) and their PA skills developed significantly over time. In contrast, the SLD-group improved their PA performance over time on syllable and rhyme, but not on phoneme level tasks. Group comparisons revealed that children with SLD had weaker PA skills, particularly on phoneme level tasks. The study contributes a longitudinal perspective on PA development before school entry. In line with their English-speaking peers, German-speaking children with SLD showed poorer PA skills than TD peers, indicating that the relationship between SLD and PA is similar across these two related but different languages.
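
    The group-by-time comparison reported above reduces to averaging percent-correct scores over group, time point and PA level; a minimal pandas sketch, with column names assumed rather than taken from the study:

      # df: one row per child per subtest administration, with assumed columns
      # group ("TD"/"SLD"), time (1-3), level ("syllable"/"rhyme"/"phoneme")
      # and pct_correct.
      import pandas as pd

      def pa_summary(df: pd.DataFrame) -> pd.DataFrame:
          return (df.groupby(["group", "time", "level"])["pct_correct"]
                    .mean()
                    .unstack("level"))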

  20. Language spoken at home and the association between ethnicity and doctor-patient communication in primary care: analysis of survey data for South Asian and White British patients.

    Science.gov (United States)

    Brodie, Kara; Abel, Gary; Burt, Jenni

    2016-03-03

    To investigate if language spoken at home mediates the relationship between ethnicity and doctor-patient communication for South Asian and White British patients. We conducted secondary analysis of patient experience survey data collected from 5870 patients across 25 English general practices. Mixed-effects linear regression estimated the difference in composite general practitioner-patient communication scores between White British and South Asian patients, controlling for practice, patient demographics and patient language. There was strong evidence of an association between doctor-patient communication scores and ethnicity. South Asian patients reported scores averaging 3.0 percentage points lower (on a scale of 0-100) than White British patients (95% CI -4.9 to -1.1, p=0.002). This difference reduced to 1.4 points (95% CI -3.1 to 0.4) after accounting for speaking a non-English language at home; respondents who spoke a non-English language at home reported scores averaging 3.3 points lower than English speakers (95% CI -6.4 to -0.2). South Asian patients rate communication lower than White British patients within the same practices and with similar demographics. Our analysis further shows that this disparity is largely mediated by language. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
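
    A hedged sketch of the reported analysis using statsmodels: a linear mixed model of communication score on ethnicity with a random intercept per practice, refitted with home language added to assess mediation. File and variable names are assumptions, and the demographic controls are omitted for brevity:

      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("gp_survey.csv")  # hypothetical file: one row per patient

      # Model 1: communication score on ethnicity alone
      m1 = smf.mixedlm("comm_score ~ ethnicity", df, groups=df["practice"]).fit()

      # Model 2: add home language; attenuation of the ethnicity coefficient
      # relative to Model 1 is the mediation pattern the paper reports
      m2 = smf.mixedlm("comm_score ~ ethnicity + non_english_at_home",
                       df, groups=df["practice"]).fit()

      print(m1.summary())
      print(m2.summary())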