WorldWideScience

Sample records for spoken word unfolds

  1. Accessing the spoken word

    OpenAIRE

    Goldman, Jerry; Renals, Steve; Bird, Steven; de Jong, Franciska; Federico, Marcello; Fleischhauer, Carl; Kornbluh, Mark; Lamel, Lori; Oard, Douglas W; Stewart, Claire; Wright, Richard

    2005-01-01

    Spoken-word audio collections cover many domains, including radio and television broadcasts, oral narratives, governmental proceedings, lectures, and telephone conversations. The collection, access, and preservation of such data is stimulated by political, economic, cultural, and educational needs. This paper outlines the major issues in the field, reviews the current state of technology, examines the rapidly changing policy issues relating to privacy and copyright, and presents issues relati...

  2. Spoken word recognition without a TRACE

    Science.gov (United States)

    Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan

    2013-01-01

    How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically-tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, begging the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
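    The string-kernel idea in this abstract can be illustrated with a toy sketch: represent each word as a bag of time-invariant diphone units, so two phoneme sequences can be compared without TRACE-style time-specific reduplication. This is a minimal illustration under my own assumptions (adjacent diphones only, cosine similarity), not the authors' implementation, which also uses non-adjacent (open) diphones.

```python
from collections import Counter
from math import sqrt

def diphones(phonemes):
    """Bag of adjacent ordered phoneme pairs (time-invariant units)."""
    return Counter(zip(phonemes, phonemes[1:]))

def similarity(a, b):
    """Cosine similarity between the diphone bags of two words."""
    da, db = diphones(a), diphones(b)
    dot = sum(count * db[pair] for pair, count in da.items())
    norm = sqrt(sum(v * v for v in da.values())) * sqrt(sum(v * v for v in db.values()))
    return dot / norm if norm else 0.0

# The same diphone unit matches wherever it occurs in time, so no
# position-specific copies of units (as in TRACE) are needed.
print(similarity(list("cat"), list("scat")))  # high overlap despite the time shift
print(similarity(list("cat"), list("dog")))   # no shared diphones -> 0.0
```

    Because the units are position-independent, the inventory stays fixed as utterance length grows, which is the source of the savings the abstract describes.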

  3. Dust, a spoken word poem by Guante

    Directory of Open Access Journals (Sweden)

    Kyle Tran Myhre

    2017-06-01

    In "Dust," spoken word poet Kyle "Guante" Tran Myhre crafts a multi-vocal exploration of the connections between the internment of Japanese Americans during World War II and the current struggles against xenophobia in general and Islamophobia specifically. Weaving together personal narrative, quotes from multiple voices, and "verse journalism" (a term coined by Gwendolyn Brooks), the poem seeks to bridge past and present in order to inform a more just future.

  4. Spoken Word Recognition of Chinese Words in Continuous Speech

    Science.gov (United States)

    Yip, Michael C. W.

    2015-01-01

    The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending position of Cantonese syllables than others, this kind of probabilistic syllable information may cue the locations…

  5. Recording voiceover the spoken word in media

    CERN Document Server

    Blakemore, Tom

    2015-01-01

    The only book on the market to specifically address its audience, Recording Voiceover is the comprehensive guide for engineers looking to understand the aspects of capturing the spoken word. Discussing all phases of the recording session, Recording Voiceover addresses everything from microphone recommendations for voice recording to pre-production considerations, including setting up the studio, working with and directing the voice talent, and strategies for reducing or eliminating distracting noise elements found in human speech. Recording Voiceover features in-depth, specific recommendations…

  6. Orthographic effects in spoken word recognition: Evidence from Chinese.

    Science.gov (United States)

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  7. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    Science.gov (United States)

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  8. Attention to spoken word planning: Chronometric and neuroimaging evidence

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2008-01-01

    This article reviews chronometric and neuroimaging evidence on attention to spoken word planning, using the WEAVER++ model as theoretical framework. First, chronometric studies on the time to initiate vocal responding and gaze shifting suggest that spoken word planning may require some attention,

  9. Comparison of Word Intelligibility in Spoken and Sung Phrases

    Directory of Open Access Journals (Sweden)

    Lauren B. Collister

    2008-09-01

    Twenty listeners were exposed to spoken and sung passages in English produced by three trained vocalists. Passages included representative words extracted from a large database of vocal lyrics, including both popular and classical repertoires. Target words were set within spoken or sung carrier phrases. Sung carrier phrases were selected from classical vocal melodies. Roughly a quarter of all words sung by an unaccompanied soloist were misheard. Sung passages showed a seven-fold decrease in intelligibility compared with their spoken counterparts. The perceptual mistakes occurring with vowels replicate previous studies showing the centralization of vowels. Significant confusions are also evident for consonants, especially voiced stops and nasals.

  10. Talker and background noise specificity in spoken word recognition memory

    OpenAIRE

    Cooper, Angela; Bradlow, Ann R.

    2017-01-01

    Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a fu...

  11. Individual Differences in Online Spoken Word Recognition: Implications for SLI

    Science.gov (United States)

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2010-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have…

  12. Word Up: Using Spoken Word and Hip Hop Subject Matter in Pre-College Writing Instruction.

    Science.gov (United States)

    Sirc, Geoffrey; Sutton, Terri

    2009-01-01

    In June 2008, the Department of English at the University of Minnesota partnered with the Minnesota Spoken Word Association to inaugurate an outreach literacy program for local high-school students and teachers. The four-day institute, named "In Da Tradition," used spoken word and hip hop to teach academic and creative writing to core-city…

  13. Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech

    Science.gov (United States)

    Yip, Michael C.

    2016-01-01

    Two word-spotting experiments were conducted to examine the question of whether native Cantonese listeners are constrained by phonotactics information in spoken word recognition of Chinese words in speech. Because no legal consonant clusters occurred within an individual Chinese word, this kind of categorical phonotactics information of Chinese…

  14. Pedagogy for Liberation: Spoken Word Poetry in Urban Schools

    Science.gov (United States)

    Fiore, Mia

    2015-01-01

    The Black Arts Movement of the 1960s and 1970s, hip hop of the 1980s and early 1990s, and spoken word poetry have each attempted to initiate the dialogical process outlined by Paulo Freire as necessary in overturning oppression. Each art form has done this by critically engaging with the world and questioning dominant systems of power. However,…

  15. "Context and Spoken Word Recognition in a Novel Lexicon": Correction

    Science.gov (United States)

    Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.

    2009-01-01

    Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct.…

  16. Automated Metadata Extraction for Semantic Access to Spoken Word Archives

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Heeren, W.F.L.; van Hessen, Adrianus J.; Ordelman, Roeland J.F.; Nijholt, Antinus; Ruiz Miyares, L.; Alvarez Silva, M.R.

    2011-01-01

    Archival practice is shifting from the analogue to the digital world. A specific subset of heritage collections that impose interesting challenges for the field of language and speech technology are spoken word archives. Given the enormous backlog at audiovisual archives of unannotated materials and

  17. Lexical competition in non-native spoken-word recognition

    NARCIS (Netherlands)

    Weber, A.C.; Cutler, A.

    2004-01-01

    Six eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name (pencil, given target

  18. Word Frequencies in Written and Spoken English

    African Journals Online (AJOL)

    R.B. Ruthven

    Gabriele Stein. Developing Your English Vocabulary: A Systematic New Approach. 2002, VIII + 272 pp. ... objective of this book is twofold: to compile a lexical core and to maximise the skills of language students by ... chapter 3, she offers twelve major ways of expanding this core-word list and differentiating lexical items to ...

  19. Word Frequencies in Written and Spoken English

    African Journals Online (AJOL)

    R.B. Ruthven

    data of the corpus and includes more formal audio material (lectures, TV and radio broadcasting). The book begins with a 20-page introduction, which is sometimes quite technical, but ... grounds words that belong to the core vocabulary of the language such as tool-. Lexikos 15 (AFRILEX-reeks/series 15: 2005): 338-339 ...

  20. The Activation of Embedded Words in Spoken Word Recognition

    Science.gov (United States)

    Zhang, Xujin; Samuel, Arthur G.

    2015-01-01

    The current study investigated how listeners understand English words that have shorter words embedded in them. A series of auditory-auditory priming experiments assessed the activation of six types of embedded words (2 embedded positions × 3 embedded proportions) under different listening conditions. Facilitation of lexical decision responses to targets (e.g., pig) associated with words embedded in primes (e.g., hamster) indexed activation of the embedded words (e.g., ham). When the listening conditions were optimal, isolated embedded words (e.g., ham) primed their targets in all six conditions (Experiment 1a). Within carrier words (e.g., hamster), the same set of embedded words produced priming only when they were at the beginning or comprised a large proportion of the carrier word (Experiment 1b). When the listening conditions were made suboptimal by expanding or compressing the primes, significant priming was found for isolated embedded words (Experiment 2a), but no priming was produced when the carrier words were compressed/expanded (Experiment 2b). Similarly, priming was eliminated when the carrier words were presented with one segment replaced by noise (Experiment 3). When cognitive load was imposed, priming for embedded words was again found when they were presented in isolation (Experiment 4a), but not when they were embedded in the carrier words (Experiment 4b). The results suggest that both embedded position and proportion play important roles in the activation of embedded words, but that such activation only occurs under unusually good listening conditions. PMID:25593407

  1. Syllable Frequency and Spoken Word Recognition: An Inhibitory Effect.

    Science.gov (United States)

    González-Alvarez, Julio; Palomar-García, María-Angeles

    2016-08-01

    Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of the lexical frequency and, importantly, an inhibitory effect of the first syllable frequency on reaction times and error rates. © The Author(s) 2016.

  2. Talker and background noise specificity in spoken word recognition memory

    Directory of Open Access Journals (Sweden)

    Angela Cooper

    2017-11-01

    Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a function of consistency versus variation in the talker’s voice (talker condition) and background noise (noise condition) using a delayed recognition memory paradigm. The speech and noise signals were spectrally-separated, such that changes in a simultaneously presented non-speech signal (background noise) from exposure to test would not be accompanied by concomitant changes in the target speech signal. The results revealed that listeners can encode both signal-intrinsic talker and signal-extrinsic noise information into integrated cognitive representations, critically even when the two auditory streams are spectrally non-overlapping. However, the extent to which extra-linguistic episodic information is encoded alongside linguistic information appears to be modulated by syllabic characteristics, with specificity effects found only for monosyllabic items. These findings suggest that encoding and retrieval of episodic information during spoken word processing may be modulated by lexical characteristics.

  3. Children reading spoken words: interactions between vocabulary and orthographic expectancy.

    Science.gov (United States)

    Wegener, Signy; Wang, Hua-Chen; de Lissa, Peter; Robidoux, Serje; Nation, Kate; Castles, Anne

    2017-07-12

    There is an established association between children's oral vocabulary and their word reading but its basis is not well understood. Here, we present evidence from eye movements for a novel mechanism underlying this association. Two groups of 18 Grade 4 children received oral vocabulary training on one set of 16 novel words (e.g., 'nesh', 'coib'), but no training on another set. The words were assigned spellings that were either predictable from phonology (e.g., nesh) or unpredictable (e.g., koyb). These were subsequently shown in print, embedded in sentences. Reading times were shorter for orally familiar than unfamiliar items, and for words with predictable than unpredictable spellings but, importantly, there was an interaction between the two: children demonstrated a larger benefit of oral familiarity for predictable than for unpredictable items. These findings indicate that children form initial orthographic expectations about spoken words before first seeing them in print. A video abstract of this article can be viewed at: https://youtu.be/jvpJwpKMM3E. © 2017 John Wiley & Sons Ltd.

  4. How Are Pronunciation Variants of Spoken Words Recognized? A Test of Generalization to Newly Learned Words

    Science.gov (United States)

    Pitt, Mark A.

    2009-01-01

    One account of how pronunciation variants of spoken words (center-> "senner" or "sennah") are recognized is that sublexical processes use information about variation in the same phonological environments to recover the intended segments [Gaskell, G., & Marslen-Wilson, W. D. (1998). Mechanisms of phonological inference in speech perception.…

  5. Toddlers' sensitivity to within-word coarticulation during spoken word recognition: Developmental differences in lexical competition.

    Science.gov (United States)

    Zamuner, Tania S; Moore, Charlotte; Desmeules-Trudel, Félix

    2016-12-01

    To understand speech, listeners need to be able to decode the speech stream into meaningful units. However, coarticulation causes phonemes to differ based on their context. Because coarticulation is an ever-present component of the speech stream, it follows that listeners may exploit this source of information for cues to the identity of the words being spoken. This research investigates the development of listeners' sensitivity to coarticulation cues below the level of the phoneme in spoken word recognition. Using a looking-while-listening paradigm, adults and 2- and 3-year-old children were tested on coarticulation cues that either matched or mismatched the target. Both adults and children predicted upcoming phonemes based on anticipatory coarticulation to make decisions about word identity. The overall results demonstrate that coarticulation cues are a fundamental component of children's spoken word recognition system. However, children did not show the same resolution as adults of the mismatching coarticulation cues and competitor inhibition, indicating that children's processing systems are still developing. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. "Poetry Does Really Educate": An Interview with Spoken Word Poet Luka Lesson

    Science.gov (United States)

    Xerri, Daniel

    2016-01-01

    Spoken word poetry is a means of engaging young people with a genre that has often been much maligned in classrooms all over the world. This interview with the Australian spoken word poet Luka Lesson explores issues that are of pressing concern to poetry education. These include the idea that engagement with poetry in schools can be enhanced by…

  7. Time course of Chinese monosyllabic spoken word recognition: evidence from ERP analyses.

    Science.gov (United States)

    Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua

    2011-06-01

    Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could be different from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not these two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our findings showed that the three partial mismatches (onset, rime, and tone mismatches) equally modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400 ms after the spoken word), whereas the syllable mismatched words elicited an earlier and stronger N400 than the three partial mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure or syllable-based holistic processing rather than phonemic segment-based processing. We interpret the differences in spoken word…

  8. Interaction in Spoken Word Recognition Models: Feedback Helps

    Science.gov (United States)

    Magnuson, James S.; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D.

    2018-01-01

    Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feed forward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis. PMID:29666593
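    The feedback benefit described in this abstract can be illustrated with a toy interactive-activation loop. This is a minimal sketch of the general idea, not TRACE itself; the two-word lexicon, rate, and activation values are illustrative assumptions, and decay and lateral inhibition are omitted. Word units pool bottom-up phoneme activation; with feedback on, word activation flows back to constituent phonemes, so a noise-degraded phoneme is partially restored and the intended word wins by a wider margin.

```python
# Toy interactive-activation sketch: phoneme layer below, word layer above.
LEXICON = {"cat": ["k", "ae", "t"], "cap": ["k", "ae", "p"]}

def settle(bottom_up, feedback, steps=10, rate=0.2):
    """Iterate bottom-up pooling and (optionally) top-down feedback."""
    phon = dict(bottom_up)  # phoneme activations
    words = {}
    for _ in range(steps):
        # Each word unit averages the activation of its phonemes.
        words = {w: sum(phon[p] for p in ps) / len(ps) for w, ps in LEXICON.items()}
        if feedback:
            # Top-down support: word activation flows back to its phonemes.
            for w, ps in LEXICON.items():
                for p in ps:
                    phon[p] += rate * words[w] / len(ps)
    return words

# Degraded input: the final /t/ is weak, as if masked by noise.
noisy = {"k": 1.0, "ae": 1.0, "t": 0.3, "p": 0.25}
no_fb = settle(noisy, feedback=False)
with_fb = settle(noisy, feedback=True)
print("margin without feedback:", no_fb["cat"] - no_fb["cap"])
print("margin with feedback:   ", with_fb["cat"] - with_fb["cap"])
```

    In this sketch the cat-over-cap margin grows with feedback because the slightly stronger word sends slightly more support back to its distinguishing phoneme, mirroring the robustness-in-noise argument made above.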

  9. Interaction in Spoken Word Recognition Models: Feedback Helps

    Directory of Open Access Journals (Sweden)

    James S. Magnuson

    2018-04-01

    Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feed forward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis.

  10. Orthographic consistency affects spoken word recognition at different grain-sizes

    DEFF Research Database (Denmark)

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous ... (e.g., lobe) faster than words with consistent rhymes where the vowel has a less typical spelling (e.g., loaf). The present study extends previous literature by showing that auditory word recognition is affected by orthographic regularities at different grain sizes, just like written word recognition ... and spelling. The theoretical and methodological implications for future research in spoken word recognition are discussed.

  11. The Slow Developmental Time Course of Real-Time Spoken Word Recognition

    Science.gov (United States)

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

    This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…

  12. Effects of Auditory and Visual Priming on the Identification of Spoken Words.

    Science.gov (United States)

    Shigeno, Sumi

    2017-04-01

    This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.

  13. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    Science.gov (United States)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.

  14. Online lexical competition during spoken word recognition and word learning in children and adults.

    Science.gov (United States)

    Henderson, Lisa; Weighall, Anna; Brown, Helen; Gaskell, Gareth

    2013-01-01

    Lexical competition that occurs as speech unfolds is a hallmark of adult oral language comprehension crucial to rapid incremental speech processing. This study used pause detection to examine whether lexical competition operates similarly at 7-8 years and tested variables that influence "online" lexical activity in adults. Children (n = 20) and adults (n = 17) were slower to detect pauses in familiar words with later uniqueness points. Faster latencies were obtained for words with late uniqueness points in constraining compared with neutral sentences; no such effect was observed for early unique words. Following exposure to novel competitors ("biscal"), children (n = 18) and adults (n = 18) showed competition for existing words with early uniqueness points ("biscuit") after 24 hr. Thus, online lexical competition effects are remarkably similar across development. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.
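
    The uniqueness point manipulated in the study above is the position at which a word's onset no longer matches any other entry in the lexicon. A minimal sketch of how it can be computed, using a toy, hypothetical lexicon (real studies use full phonemic lexicons):

    ```python
    def uniqueness_point(word, lexicon):
        """Return the 1-based position at which `word` diverges from every
        other lexicon entry, or None if it remains a prefix of another word.
        Works over plain strings or phoneme lists."""
        others = [w for w in lexicon if w != word]
        for i in range(1, len(word) + 1):
            prefix = word[:i]
            if not any(w[:i] == prefix for w in others):
                return i
        return None

    lexicon = ["biscuit", "biscal", "bird"]      # toy example
    print(uniqueness_point("biscuit", lexicon))  # -> 5 ("biscu" rules out "biscal")
    print(uniqueness_point("bird", lexicon))     # -> 3 ("bir")
    ```

    Note how the novel competitor "biscal" pushes the uniqueness point of "biscuit" rightward, which is exactly the competition effect the study measures after exposure.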

  15. English Listeners Use Suprasegmental Cues to Lexical Stress Early during Spoken-Word Recognition

    Science.gov (United States)

    Jesse, Alexandra; Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g.,…

  16. Orthographic Consistency Affects Spoken Word Recognition at Different Grain-Sizes

    Science.gov (United States)

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating…

  17. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    Science.gov (United States)

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  18. The Effect of Lexical Frequency on Spoken Word Recognition in Young and Older Listeners

    Science.gov (United States)

    Revill, Kathleen Pirog; Spieler, Daniel H.

    2011-01-01

    When identifying spoken words, older listeners may have difficulty resolving lexical competition or may place a greater weight on factors like lexical frequency. To obtain information about age differences in the time course of spoken word recognition, young and older adults’ eye movements were monitored as they followed spoken instructions to click on objects displayed on a computer screen. Older listeners were more likely than younger listeners to fixate high-frequency displayed phonological competitors. However, degradation of auditory quality in younger listeners did not reproduce this result. These data are most consistent with an increased role for lexical frequency with age. PMID:21707175

  19. The time course of morphological processing during spoken word recognition in Chinese.

    Science.gov (United States)

    Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan

    2017-12-01

    We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, which was earlier than the whole-word competitor. These results suggest that lexical access to the auditory word is incremental and that morphological processing (i.e., semantic access to the first constituent) occurs at an early processing stage, before access to the representation of the whole word in Chinese.

  20. Discourse context and the recognition of reduced and canonical spoken words

    OpenAIRE

    Brouwer, S.; Mitterer, H.; Huettig, F.

    2013-01-01

    In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" ...

  1. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script

    Directory of Open Access Journals (Sweden)

    Qingfang Zhang

    2014-02-01

    The effects of word frequency and syllable frequency are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French, or English. The present study investigated the effects of word frequency and syllable frequency, and their interaction, in Chinese written and spoken production. Significant facilitatory word frequency and syllable frequency effects were observed in spoken as well as in written production. The syllable frequency effect in writing indicated that phonological properties (i.e., syllabic frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the syllable frequency effect over repetitions diverged between the two modalities: it was significant in the first two repetitions in spoken production, whereas it was significant only in the second repetition in written production. Given the fragility of the syllable frequency effect in writing, we suggest that the phonological influence in handwritten production is not mandatory and universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis, rather than the phonological mediation hypothesis. The absence of an interaction between word frequency and syllable frequency showed that the syllable frequency effect is independent of the word frequency effect in spoken and written output modalities. The implications of these results for written production models are discussed.

  2. The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions

    Science.gov (United States)

    Brouwer, Susanne; Bradlow, Ann R.

    2016-01-01

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…

  3. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    Science.gov (United States)

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…

  4. Neural stages of spoken, written, and signed word processing in beginning second language learners.

    Science.gov (United States)

    Leonard, Matthew K; Ferjan Ramirez, Naja; Torres, Christina; Hatrak, Marla; Mayberry, Rachel I; Halgren, Eric

    2013-01-01

    We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language.

  5. Beta oscillations reflect memory and motor aspects of spoken word production

    NARCIS (Netherlands)

    Piai, V.; Roelofs, A.P.A.; Rommers, J.; Maris, E.G.G.

    2015-01-01

    Two major components form the basis of spoken word production: the access of conceptual and lexical/phonological information in long-term memory, and motor preparation and execution of an articulatory program. Whereas the motor aspects of word production have been well characterized as reflected in

  6. Probabilistic Phonotactics as a Cue for Recognizing Spoken Cantonese Words in Speech

    Science.gov (United States)

    Yip, Michael C. W.

    2017-01-01

    Previous experimental psycholinguistic studies suggested that probabilistic phonotactic information may hint at the locations of word boundaries in continuous speech, offering a possible solution to the empirical question of how we recognize/segment individual spoken words in speech. We investigated this issue by using…

  7. Spoken Idiom Recognition: Meaning Retrieval and Word Expectancy

    Science.gov (United States)

    Tabossi, Patrizia; Fanari, Rachele; Wolf, Kinou

    2005-01-01

    This study investigates recognition of spoken idioms occurring in neutral contexts. Experiment 1 showed that both predictable and non-predictable idiom meanings are available at string offset. Yet, only predictable idiom meanings are active halfway through a string and remain active after the string's literal conclusion. Experiment 2 showed that…

  8. Word frequencies in written and spoken English based on the British National Corpus

    CERN Document Server

    Leech, Geoffrey; Wilson, Andrew (all of Lancaster University)

    2014-01-01

    Word Frequencies in Written and Spoken English is a landmark volume in the development of vocabulary frequency studies. Whereas previous books have in general given frequency information about the written language only, this book provides information on both speech and writing. It not only gives information about the language as a whole, but also about the differences between spoken and written English, and between different spoken and written varieties of the language. The frequencies are derived from a wide-ranging and up-to-date corpus of English: the British National Corpus.

  9. Accent modulates access to word meaning: Evidence for a speaker-model account of spoken word recognition.

    Science.gov (United States)

    Cai, Zhenguang G; Gilbert, Rebecca A; Davis, Matthew H; Gaskell, M Gareth; Farrar, Lauren; Adler, Sarah; Rodd, Jennifer M

    2017-11-01

    Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of "bonnet") in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Silent Letters Are Activated in Spoken Word Recognition

    Science.gov (United States)

    Ranbom, Larissa J.; Connine, Cynthia M.

    2011-01-01

    Four experiments are reported that investigate processing of mispronounced words for which the phonological form is inconsistent with the graphemic form (words spelled with silent letters). Words produced as mispronunciations that are consistent with their spelling were more confusable with their citation form counterpart than mispronunciations…

  11. Assessing spoken word recognition in children who are deaf or hard of hearing: a translational approach.

    Science.gov (United States)

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy

    2012-06-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss. American Academy of Audiology.

  12. Competition in the perception of spoken Japanese words

    NARCIS (Netherlands)

    Otake, T.; McQueen, J.M.; Cutler, A.

    2010-01-01

    Japanese listeners detected Japanese words embedded at the end of nonsense sequences (e.g., kaba 'hippopotamus' in gyachikaba). When the final portion of the preceding context together with the initial portion of the word (e.g., here, the sequence chika) was compatible with many lexical competitors,

  13. Recognition memory for Braille or spoken words: an fMRI study in early blind.

    Science.gov (United States)

    Burton, Harold; Sinclair, Robert J; Agato, Alvin

    2012-02-15

    We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 years. In an event-related design, we studied blood oxygen level-dependent responses to studied ("old") compared to novel ("new") words. Words were presented in Braille or spoken form. Responses were larger for identified "new" words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken "new" words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words, and occipital cortex responded to spoken words but not differentially with "old"/"new" recognition. Left dorsolateral prefrontal cortex had larger responses to "old" words only with Braille. Larger occipital cortex responses to "new" Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted participants noted larger responses for "new" words studied in association with pictures, which created a distinctiveness heuristic source factor that enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering "old" words. A larger response when identifying "new" words possibly resulted from exhaustively recollecting the sensory properties of "old" words in modality-appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a "sensory echo" that aids recollection. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Feature Statistics Modulate the Activation of Meaning during Spoken Word Processing

    Science.gov (United States)

    Devereux, Barry J.; Taylor, Kirsten I.; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K.

    2016-01-01

    Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in ("distinctiveness/sharedness") and likelihood of co-occurrence ("correlational…

  15. Assessing spoken word recognition in children who are deaf or hard of hearing: A translational approach

    OpenAIRE

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S.; Young, Nancy

    2012-01-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization and lexical discrimination that may contribute to individual varia...

  16. The time course of spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen; Zhang, Qin; Guo, Chunyan

    2014-10-01

    In the present study, two experiments were carried out to investigate the time course of spoken word recognition in Mandarin Chinese using both event-related potentials (ERPs) and behavioral measures. To address the hypothesis that there is an early phonological processing stage independent of semantics during spoken word recognition, a unimodal word-matching paradigm was employed, in which both prime and target words were presented auditorily. Experiment 1 manipulated the phonological relations between disyllabic primes and targets, and found an enhanced P2 (200-270 ms post-target onset) as well as a smaller early N400 to word-initial phonological mismatches over fronto-central scalp sites. Experiment 2 manipulated both phonological and semantic relations between monosyllabic primes and targets, and replicated the phonological mismatch-associated P2, which was not modulated by semantic relations. Overall, these results suggest that P2 is a sensitive electrophysiological index of early phonological processing independent of semantics in Mandarin Chinese spoken word recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Visual attention shift to printed words during spoken word recognition in Chinese: The role of phonological information.

    Science.gov (United States)

    Shen, Wei; Qu, Qingqing; Tong, Xiuhong

    2018-01-25

    The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition by using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and the eye movements are recorded. In Experiment 1, phonological information was manipulated at the full-phonological overlap; in Experiment 2, phonological information at the partial-phonological overlap was manipulated; and in Experiment 3, the phonological competitors were manipulated to share either fulloverlap or partial-overlap with targets directly. Results of the three experiments showed that the phonological competitor effects were observed at both the full-phonological overlap and partial-phonological overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggested that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.

  18. Sizing up the competition: quantifying the influence of the mental lexicon on auditory and visual spoken word recognition.

    Science.gov (United States)

    Strand, Julia F; Sommers, Mitchell S

    2011-09-01

    Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America
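
    The phi-square statistic mentioned above is, in the general contingency-table sense, the chi-square statistic normalized by the number of observations; the exact computation in the cited work may differ. A minimal sketch, assuming a confusion matrix in which each row holds one stimulus item's response counts:

    ```python
    def phi_square(row_a, row_b):
        """Phi-square dissimilarity between two response-count rows of a
        confusion matrix: chi-square over the 2 x k table, divided by N.
        0 means identical response distributions; 1 means disjoint ones."""
        table = [row_a, row_b]
        n = sum(row_a) + sum(row_b)
        row_sums = [sum(row_a), sum(row_b)]
        col_sums = [row_a[j] + row_b[j] for j in range(len(row_a))]
        chi2 = 0.0
        for i in range(2):
            for j in range(len(row_a)):
                expected = row_sums[i] * col_sums[j] / n
                if expected > 0:  # skip empty response categories
                    chi2 += (table[i][j] - expected) ** 2 / expected
        return chi2 / n

    # Disjoint response profiles are maximally dissimilar:
    print(phi_square([10, 0], [0, 10]))  # -> 1.0
    # Identical response profiles give 0:
    print(phi_square([5, 5], [5, 5]))    # -> 0.0
    ```

    Under this reading, low phi-square between two words' perceptual response profiles means high confusability, and hence stronger lexical competition.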

  19. Event-related potentials reflecting the frequency of unattended spoken words

    DEFF Research Database (Denmark)

    Shtyrov, Yury; Kimppa, Lilli; Pulvermüller, Friedemann

    2011-01-01

    How are words represented in the human brain, and can these representations be qualitatively assessed with respect to their structure and properties? Recent research demonstrates that neurophysiological signatures of individual words can be measured when subjects do not focus their attention…, in passive non-attend conditions, with acoustically matched high- and low-frequency words along with pseudo-words. Using factorial and correlation analyses, we found that already at ~120 ms after the spoken stimulus information was available, the amplitude of brain responses was modulated by the words' lexical… for the most frequent word stimuli; later on (~270 ms), a more global lexicality effect with bilateral perisylvian sources was found for all stimuli, suggesting faster access to more frequent lexical entries. Our results support the account of word memory traces as interconnected neuronal circuits, and suggest…

  20. Spoken word production: A theory of lexical access

    NARCIS (Netherlands)

    Levelt, W.J.M.

    2001-01-01

    A core operation in speech production is the preparation of words from a semantic base. The theory of lexical access reviewed in this article covers a sequence of processing stages beginning with the speaker's focusing on a target concept and ending with the initiation of articulation. The initial

  1. Attention demands of spoken word planning: A review

    NARCIS (Netherlands)

    Roelofs, A.P.A.; Piai, V.

    2011-01-01

    Attention and language are among the most intensively researched abilities in the cognitive neurosciences, but the relation between these abilities has largely been neglected. There is increasing evidence, however, that linguistic processes, such as those underlying the planning of words, cannot

  2. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    Directory of Open Access Journals (Sweden)

    Rachel Schiff

    2018-04-01

    This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings make important theoretical and practical contributions to Arabic reading theory in general, and they extend previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.

  3. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script.

    Science.gov (United States)

    Zhang, Qingfang; Wang, Cheng

    2014-01-01

    The effects of word frequency (WF) and syllable frequency (SF) are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French, or English. The present study investigated the effects of WF and SF, and their interaction, in Chinese written and spoken production. Significant facilitatory WF and SF effects were observed in spoken as well as in written production. The SF effect in writing indicated that phonological properties (i.e., syllabic frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the SF effect over repetitions diverged between the two modalities: it was significant in the first two repetitions in spoken production, whereas it was significant only in the second repetition in written production. Given the fragility of the SF effect in writing, we suggest that the phonological influence in handwritten production is not mandatory and universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis, rather than the phonological mediation hypothesis. The absence of an interaction between WF and SF showed that the SF effect is independent of the WF effect in spoken and written output modalities. The implications of these results for written production models are discussed.

  4. Semantic Richness Effects in Spoken Word Recognition: A Lexical Decision and Semantic Categorization Megastudy.

    Science.gov (United States)

    Goh, Winston D; Yap, Melvin J; Lau, Mabel C; Ng, Melvin M R; Tan, Luuan-Chin

    2016-01-01

    A large number of studies have demonstrated that semantic richness dimensions [e.g., number of features, semantic neighborhood density, semantic diversity, concreteness, emotional valence] influence word recognition processes. Some of these richness effects appear to be task-general, while others have been found to vary across tasks. Importantly, almost all of these findings come from the visual word recognition literature. To address this gap, we examined the extent to which these semantic richness effects are also found in spoken word recognition, using a megastudy approach that allows for an examination of the relative contribution of the various semantic properties to performance in two tasks: lexical decision and semantic categorization. The results show that concreteness, valence, and number of features accounted for unique variance in latencies across both tasks in a similar direction: faster responses for spoken words that were concrete, emotionally valenced, and with a high number of features. Arousal, semantic neighborhood density, and semantic diversity did not influence latencies. Implications for spoken word recognition processes are discussed.

  5. Integration of Pragmatic and Phonetic Cues in Spoken Word Recognition

    Science.gov (United States)

    Rohde, Hannah; Ettlinger, Marc

    2015-01-01

    Although previous research has established that multiple top-down factors guide the identification of words during speech processing, the ultimate range of information sources that listeners integrate from different levels of linguistic structure is still unknown. In a set of experiments, we investigate whether comprehenders can integrate information from the two most disparate domains: pragmatic inference and phonetic perception. Using contexts that trigger pragmatic expectations regarding upcoming coreference (expectations for either he or she), we test listeners' identification of phonetic category boundaries (using acoustically ambiguous words on the /hi/~/ʃi/ continuum). The results indicate that, in addition to phonetic cues, word recognition also reflects pragmatic inference. These findings are consistent with evidence for top-down contextual effects from lexical, syntactic, and semantic cues, but they extend this previous work by testing cues at the pragmatic level and by eliminating a statistical-frequency confound that might otherwise explain the previously reported results. We conclude by exploring the time-course of this interaction and discussing how different models of cue integration could be adapted to account for our results. PMID:22250908

  6. Task modulation of disyllabic spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen; Chang, Ruohan; Guo, Chunyan

    2016-05-16

    Using unimodal auditory tasks of word-matching and meaning-matching, this study investigated how the phonological and semantic processes in Chinese disyllabic spoken word recognition are modulated by top-down mechanisms induced by experimental tasks. Both semantic similarity and word-initial phonological similarity between the primes and targets were manipulated. Results showed that at an early stage of recognition (~150-250 ms), an enhanced P2 was elicited by word-initial phonological mismatch in both tasks. In the ~300-500 ms window, a fronto-central negative component was elicited by word-initial phonological similarity in the word-matching task, while a parietal negativity was elicited by semantically unrelated primes in the meaning-matching task, indicating that both semantic and phonological processes can be involved in this time window, depending on the task requirements. In the late stage (~500-700 ms), a centro-parietal Late N400 was elicited in both tasks, but with a larger effect in the meaning-matching task than in the word-matching task. This finding suggests that the semantic representation of spoken words can be activated automatically in the late stage of recognition, even when semantic processing is not required. However, the magnitude of the semantic activation is modulated by task requirements.

  7. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    Science.gov (United States)

    Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2018-01-01

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime wakefulness. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  8. Children's Spoken Word Recognition and Contributions to Phonological Awareness and Nonword Repetition: A 1-Year Follow-Up

    Science.gov (United States)

    Metsala, Jamie L.; Stavrinos, Despina; Walley, Amanda C.

    2009-01-01

    This study examined effects of lexical factors on children's spoken word recognition across a 1-year time span, and contributions to phonological awareness and nonword repetition. Across the year, children identified words based on less input on a speech-gating task. For word repetition, older children improved for the most familiar words. There…

  9. Conducting spoken word recognition research online: Validation and a new timing method.

    Science.gov (United States)

    Slote, Joseph; Strand, Julia F

    2016-06-01

    Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research.
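    The validation logic described above, correlating online measures with laboratory measures, can be sketched in a few lines. The per-word identification accuracies below are hypothetical stand-ins, not data from the study:

    ```python
    import math

    def pearson_r(xs, ys):
        """Pearson correlation between two equal-length lists of scores."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Hypothetical per-word identification accuracies (laboratory vs. AMT).
    lab = [0.95, 0.88, 0.72, 0.60, 0.81]
    online = [0.90, 0.85, 0.65, 0.55, 0.78]

    print(pearson_r(lab, online))  # a strong positive correlation
    ```

    A correlation near 1 would mirror the paper's finding that online and laboratory scores track each other closely, even though online responses were faster and less accurate overall.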

  10. Alpha and theta brain oscillations index dissociable processes in spoken word recognition.

    Science.gov (United States)

    Strauß, Antje; Kotz, Sonja A; Scharinger, Mathias; Obleser, Jonas

    2014-08-15

    Slow neural oscillations (~1-15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad range of frequencies are disputed, with most studies hypothesizing only about single frequency bands. The present study utilizes an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that within the slow neural oscillatory frequency range, distinct functional signatures and cortical networks can be identified at least for theta- (~3-7 Hz) and alpha-frequencies (~8-12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word-pseudoword continuum: ranging from (1) real words over (2) ambiguous pseudowords (deviating from real words only in one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words by randomized syllables). By means of time-frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally 'gate' lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. To this end, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition. Copyright © 2014 Elsevier Inc. All rights reserved.
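    The core of such a time-frequency analysis, separating theta (~3-7 Hz) from alpha (~8-12 Hz) power, can be illustrated with a plain discrete Fourier transform on a synthetic signal. Real MEG/EEG pipelines use dedicated toolboxes and spatial filtering; everything below is a toy illustration:

    ```python
    import cmath
    import math

    FS = 100  # sampling rate in Hz
    N = 400   # 4 s of signal

    def dft_power(x):
        """Power spectrum (first half of the DFT bins) of a real signal."""
        n = len(x)
        return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))) ** 2 for k in range(n // 2)]

    def band_power(power, lo_hz, hi_hz, fs, n):
        """Sum spectral power over bins whose frequency lies in [lo_hz, hi_hz]."""
        return sum(p for k, p in enumerate(power) if lo_hz <= k * fs / n <= hi_hz)

    # Synthetic signal: a strong 5 Hz (theta) plus a weak 10 Hz (alpha) component.
    sig = [2.0 * math.sin(2 * math.pi * 5 * t / FS)
           + 0.5 * math.sin(2 * math.pi * 10 * t / FS) for t in range(N)]

    power = dft_power(sig)
    theta = band_power(power, 3, 7, FS, N)
    alpha = band_power(power, 8, 12, FS, N)
    print(theta > alpha)  # True: the theta component dominates this toy signal
    ```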

  11. Early use of orthographic information in spoken word recognition: Event-related potential evidence from the Korean language.

    Science.gov (United States)

    Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung

    2016-04-01

    This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects due to orthographic syllable neighborhood size and sound-to-spelling consistency on P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that P200 was smaller for words with large orthographic syllable neighborhoods than for those with small ones. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results provide support for the assumption that orthographic information is used early during the prelexical spoken word recognition process. © 2015 Society for Psychophysiological Research.

  12. Development of Infrared Lip Movement Sensor for Spoken Word Recognition

    Directory of Open Access Journals (Sweden)

    Takahiro Yoshida

    2007-12-01

    Full Text Available Lip movement of a speaker is highly informative for many applications of speech signal processing, such as multi-modal speech recognition and password authentication without a speech signal. However, collecting multi-modal speech information has required a video camera, a large amount of memory, a video interface, and a high-speed processor to extract lip movement in real time. Such a system tends to be expensive and large, which is one reason the use of multi-modal speech processing has been limited. In this study, we developed a simple infrared lip movement sensor mounted on a headset, making it possible to acquire lip movement with a PDA, mobile phone, or notebook PC. The sensor consists of an infrared LED and an infrared phototransistor, and measures lip movement from the light reflected from the mouth region. In experiments, we achieved a 66% word recognition rate using lip movement features alone. This result shows that the developed sensor can serve as a tool for multi-modal speech processing when combined with a microphone mounted on the same headset.
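    As a sketch of how a reflectance trace from such a sensor might be segmented into an utterance, the thresholding scheme below is a hypothetical illustration (the abstract does not specify the actual feature extraction):

    ```python
    def segment_utterance(samples, rest_level, threshold):
        """Return (start, end) indices where lip movement deviates from rest."""
        active = [i for i, s in enumerate(samples)
                  if abs(s - rest_level) > threshold]
        return (active[0], active[-1]) if active else None

    # Hypothetical phototransistor readings: lips at rest, then a word, then rest.
    trace = [0.50, 0.51, 0.49, 0.62, 0.70, 0.66, 0.58, 0.50, 0.51]
    print(segment_utterance(trace, rest_level=0.50, threshold=0.05))  # (3, 6)
    ```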

  13. Positive Emotional Language in the Final Words Spoken Directly Before Execution.

    Science.gov (United States)

    Hirschmüller, Sarah; Egloff, Boris

    2015-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.
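    The word-counting core of such computerized quantitative text analysis can be sketched as follows. The miniature emotion lexica are illustrative placeholders for the validated dictionaries such studies rely on:

    ```python
    # Placeholder lexica; real analyses use validated dictionaries (e.g., LIWC).
    POSITIVE = {"love", "peace", "happy", "thank", "hope", "free"}
    NEGATIVE = {"hate", "fear", "pain", "sorry", "guilt", "sad"}

    def emotion_proportions(text):
        """Proportions of positive and negative emotion words among all tokens."""
        tokens = [w.strip(".,!?'\"").lower() for w in text.split()]
        pos = sum(t in POSITIVE for t in tokens)
        neg = sum(t in NEGATIVE for t in tokens)
        return pos / len(tokens), neg / len(tokens)

    statement = "I love you all and I hope you find peace. Thank you."
    pos, neg = emotion_proportions(statement)
    print(pos > neg)  # True: positive emotion words outnumber negative ones
    ```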

  15. An exaggerated effect for proper nouns in a case of superior written over spoken word production.

    Science.gov (United States)

    Kemmerer, David; Tranel, Daniel; Manzel, Ken

    2005-02-01

    We describe a brain-damaged subject, RR, who manifests superior written over spoken naming of concrete entities from a wide range of conceptual domains. His spoken naming difficulties are due primarily to an impairment of lexical-phonological processing, which implies that his successful written naming does not depend on prior access to the sound structures of words. His performance therefore provides further support for the "orthographic autonomy hypothesis," which maintains that written word production is not obligatorily mediated by phonological knowledge. The case of RR is especially interesting, however, because for him the dissociation between impaired spoken naming and relatively preserved written naming is significantly greater for two categories of unique concrete entities that are lexicalised as proper nouns (specifically, famous faces and famous landmarks) than for five categories of nonunique (i.e., basic-level) concrete entities that are lexicalised as common nouns (specifically, animals, fruits/vegetables, tools/utensils, musical instruments, and vehicles). Furthermore, RR's predominant error types in the oral modality are different for the two types of stimuli: omissions for unique entities vs. semantic errors for nonunique entities. We consider two alternative explanations for RR's extreme difficulty in producing the spoken forms of proper nouns: (1) a disconnection between the meanings of proper nouns and the corresponding word nodes in the phonological output lexicon; or (2) damage to the word nodes themselves. We argue that RR's combined behavioural and lesion data do not clearly adjudicate between the two explanations, but that they favour the first explanation over the second.

  16. Working memory affects older adults' use of context in spoken-word recognition.

    Science.gov (United States)

    Janse, Esther; Jesse, Alexandra

    2014-01-01

    Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate older listeners' ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether verbal working memory predicts older adults' ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect as fast and as accurately as possible target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) affected the speed of recognition. Contextual facilitation was modulated by older listeners' verbal working memory (measured with a backward digit span task) and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners' immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.

  17. EEG decoding of spoken words in bilingual listeners: from words to language invariant semantic-conceptual representations

    Directory of Open Access Journals (Sweden)

    João Mendonça Correia

    2015-02-01

    Full Text Available Spoken word recognition and production require fast transformations between acoustic, phonological, and conceptual neural representations. Bilinguals perform these transformations in native and non-native languages, deriving unified semantic concepts from equivalent, but acoustically different, words. Here we exploit this capacity of bilinguals to investigate input-invariant semantic representations in the brain. We acquired EEG data while Dutch subjects, highly proficient in English, listened to four monosyllabic and acoustically distinct animal words in both languages (e.g., ‘paard’/‘horse’). Multivariate pattern analysis (MVPA) was applied to identify EEG response patterns that discriminate between individual words within one language (within-language discrimination) and generalize meaning across two languages (across-language generalization). Furthermore, employing two EEG feature selection approaches, we assessed the contribution of temporal and oscillatory EEG features to our classification results. MVPA revealed that within-language discrimination was possible in a broad time window (~50-620 ms after word onset), probably reflecting acoustic-phonetic and semantic-conceptual differences between the words. Most interestingly, significant across-language generalization was possible around 550-600 ms, suggesting the activation of common semantic-conceptual representations from the Dutch and English nouns. Both types of classification showed a strong contribution of oscillations below 12 Hz, indicating the importance of low-frequency oscillations in the neural representation of individual words and concepts. This study demonstrates the feasibility of MVPA to decode individual spoken words from EEG responses and to assess the spectro-temporal dynamics of their language-invariant semantic-conceptual representations. We discuss how this method and results could be relevant to track the neural mechanisms underlying conceptual encoding in
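    The across-language generalization analysis can be illustrated with a toy nearest-centroid classifier: fit class centroids on trials from one language and test on trials from the other. The two-dimensional "EEG features" below are invented for the sketch and stand in for the real spatio-temporal feature vectors:

    ```python
    def centroid(vectors):
        """Mean vector of a list of equal-length feature vectors."""
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

    def nearest(x, centroids):
        """Label of the centroid closest to x (squared Euclidean distance)."""
        def dist2(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return min(centroids, key=lambda label: dist2(x, centroids[label]))

    # Invented per-concept feature vectors: train on Dutch, test on English.
    dutch_trials = {"horse": [[1.0, 0.1], [0.9, 0.2]],
                    "duck": [[0.1, 1.0], [0.2, 0.9]]}
    english_trials = {"horse": [0.95, 0.15], "duck": [0.15, 0.95]}

    cents = {concept: centroid(trials) for concept, trials in dutch_trials.items()}
    hits = sum(nearest(x, cents) == concept
               for concept, x in english_trials.items())
    print(hits / len(english_trials))  # across-language decoding accuracy
    ```

    Above-chance accuracy on the held-out language would be the toy analogue of the paper's across-language generalization result.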

  18. The socially-weighted encoding of spoken words: A dual-route approach to speech perception

    Directory of Open Access Journals (Sweden)

    Meghan eSumner

    2014-01-01

    Full Text Available Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: Words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially-idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially-weighted, resulting in sparse, but high-resolution clusters of socially-idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  19. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition.

    Science.gov (United States)

    Jesse, Alexandra; McQueen, James M

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., 'ca-vi from cavia "guinea pig" vs. 'ka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-'jec from projector "projector" vs. 'pro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.

  20. An fMRI study of concreteness effects in spoken word recognition.

    Science.gov (United States)

    Roxbury, Tracy; McMahon, Katie; Copland, David A

    2014-09-30

    Evidence for the brain mechanisms recruited when processing concrete versus abstract concepts has been largely derived from studies employing visual stimuli. The tasks and baseline contrasts used have also involved varying degrees of lexical processing. This study investigated the neural basis of the concreteness effect during spoken word recognition and employed a lexical decision task with a novel pseudoword condition. The participants were seventeen healthy young adults (9 females). The stimuli consisted of (a) concrete, high-imageability nouns, (b) abstract, low-imageability nouns, and (c) opaque legal pseudowords presented in a pseudorandomised, event-related design. Activation for the concrete, abstract, and pseudoword conditions was analysed using anatomical regions of interest derived from previous findings of concrete and abstract word processing. Behaviourally, lexical decision reaction times for the concrete condition were significantly faster than both abstract and pseudoword conditions, and the abstract condition was significantly faster than the pseudoword condition (p < .05), consistent with a concreteness advantage in spoken word recognition. Significant activity was also elicited by concrete words relative to pseudowords in the left fusiform and left anterior middle temporal gyrus. These findings confirm the involvement of a widely distributed network of brain regions that are activated in response to the spoken recognition of concrete but not abstract words. Our findings are consistent with the proposal that distinct brain regions are engaged as convergence zones and enable the binding of supramodal input.

  1. Long-term temporal tracking of speech rate affects spoken-word recognition.

    Science.gov (United States)

    Baese-Berk, Melissa M; Heffner, Christopher C; Dilley, Laura C; Pitt, Mark A; Morrill, Tuuli H; McAuley, J Devin

    2014-08-01

    Humans unconsciously track a wide array of distributional characteristics in their sensory environment. Recent research in spoken-language processing has demonstrated that the speech rate surrounding a target region within an utterance influences which words, and how many words, listeners hear later in that utterance. On the basis of hypotheses that listeners track timing information in speech over long timescales, we investigated the possibility that the perception of words is sensitive to speech rate over such a timescale (e.g., an extended conversation). Results demonstrated that listeners tracked variation in the overall pace of speech over an extended duration (analogous to that of a conversation that listeners might have outside the lab) and that this global speech rate influenced which words listeners reported hearing. The effects of speech rate became stronger over time. Our findings are consistent with the hypothesis that neural entrainment by speech occurs on multiple timescales, some lasting more than an hour. © The Author(s) 2014.

  2. Beta oscillations reflect memory and motor aspects of spoken word production.

    Science.gov (United States)

    Piai, Vitória; Roelofs, Ardi; Rommers, Joost; Maris, Eric

    2015-07-01

    Two major components form the basis of spoken word production: the access of conceptual and lexical/phonological information in long-term memory, and motor preparation and execution of an articulatory program. Whereas the motor aspects of word production have been well characterized as reflected in alpha-beta desynchronization, the memory aspects have remained poorly understood. Using magnetoencephalography, we investigated the neurophysiological signature of not only motor but also memory aspects of spoken-word production. Participants named or judged pictures after reading sentences. To probe the involvement of the memory component, we manipulated sentence context. Sentence contexts were either constraining or nonconstraining toward the final word, presented as a picture. In the judgment task, participants indicated with a left-hand button press whether the picture was expected given the sentence. In the naming task, they named the picture. Naming and judgment were faster with constraining than nonconstraining contexts. Alpha-beta desynchronization was found for constraining relative to nonconstraining contexts pre-picture presentation. For the judgment task, beta desynchronization was observed in left posterior brain areas associated with conceptual processing and in right motor cortex. For the naming task, in addition to the same left posterior brain areas, beta desynchronization was found in left anterior and posterior temporal cortex (associated with memory aspects), left inferior frontal cortex, and bilateral ventral premotor cortex (associated with motor aspects). These results suggest that memory and motor components of spoken word production are reflected in overlapping brain oscillations in the beta band. © 2015 Wiley Periodicals, Inc.

  3. An fMRI study of concreteness effects during spoken word recognition in aging. Preservation or attenuation?

    Directory of Open Access Journals (Sweden)

    Tracy eRoxbury

    2016-01-01

    Full Text Available It is unclear whether healthy aging influences concreteness effects (i.e., the processing advantage seen for concrete over abstract words) and their associated neural mechanisms. We conducted an fMRI study on young and older healthy adults performing auditory lexical decisions on concrete versus abstract words. We found that spoken comprehension of concrete and abstract words appears relatively preserved in healthy older individuals, including the concreteness effect. This preserved performance was supported by altered activity in left-hemisphere regions including the inferior and middle frontal gyri, angular gyrus, and fusiform gyrus. This pattern is consistent with age-related compensatory mechanisms supporting spoken word processing.

  4. The time course of lexical competition during spoken word recognition in Mandarin Chinese: an event-related potential study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen

    2016-01-20

    The present study investigated the effect of lexical competition on the time course of spoken word recognition in Mandarin Chinese using a unimodal auditory priming paradigm. Two kinds of competitive environments were designed. In one session (session 1), only the unrelated and the identical primes were presented before the target words. In the other session (session 2), besides the two conditions in session 1, the target words were also preceded by cohort primes that have the same initial syllables as the targets. Behavioral results showed an inhibitory effect of the cohort competitors (primes) on target word recognition. The event-related potential results showed that spoken word recognition processing in the middle and late latency windows is modulated by whether the phonologically related competitors are presented or not. Specifically, preceding activation of the competitors can induce direct competition between multiple candidate words and lead to increased processing difficulty, primarily at the word disambiguation and selection stage during Mandarin Chinese spoken word recognition. The current study provided both behavioral and electrophysiological evidence for the lexical competition effect among candidate words during spoken word recognition.

  5. Neural dynamics of morphological processing in spoken word comprehension: Laterality and automaticity

    Directory of Open Access Journals (Sweden)

    Caroline M. Whiting

    2013-11-01

    Full Text Available Rapid and automatic processing of grammatical complexity is argued to take place during speech comprehension, engaging a left-lateralised fronto-temporal language network. Here we address how neural activity in these regions is modulated by the grammatical properties of spoken words. We used combined magneto- and electroencephalography (MEG, EEG) to delineate the spatiotemporal patterns of activity that support the recognition of morphologically complex words in English with inflectional (-s) and derivational (-er) affixes (e.g., bakes, baker). The mismatch negativity (MMN), an index of linguistic memory traces elicited in a passive listening paradigm, was used to examine the neural dynamics elicited by morphologically complex words. Results revealed an initial peak 130-180 ms after the deviation point with a major source in left superior temporal cortex. The localisation of this early activation showed a sensitivity to two grammatical properties of the stimuli: (1) the presence of morphological complexity, with affixed words showing increased left-laterality compared to non-affixed words; and (2) the grammatical category, with affixed verbs showing greater left-lateralisation in inferior frontal gyrus compared to affixed nouns (bakes vs. beaks). This automatic brain response was additionally sensitive to semantic coherence (the meaning of the stem vs. the meaning of the whole form) in fronto-temporal regions. These results demonstrate that the spatiotemporal pattern of neural activity in spoken word processing is modulated by the presence of morphological structure, predominantly engaging the left hemisphere's fronto-temporal language network, and does not require focused attention on the linguistic input.

  6. A word by any other intonation: fMRI evidence for implicit memory traces for pitch contours of spoken words in adult brains.

    Directory of Open Access Journals (Sweden)

    Michael Inspector

    OBJECTIVES: Intonation may serve as a cue for facilitated recognition and processing of spoken words, and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern. EXPERIMENTAL DESIGN: Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) all words in a set were presented with a flat, monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept constant throughout the three repetitions; (iii) each word had a different arbitrary pitch contour in each of its repetitions. PRINCIPAL FINDINGS: The repeated presentation of words with a set pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21, 22) bilaterally, and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects. CONCLUSIONS: Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language-associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.

  7. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds

    DEFF Research Database (Denmark)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin

    2017-01-01

    …-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound… 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: a change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality…

  8. Spoken Word Recognition and Serial Recall of Words from Components in the Phonological Network

    Science.gov (United States)

    Siew, Cynthia S. Q.; Vitevitch, Michael S.

    2016-01-01

    Network science uses mathematical techniques to study complex systems such as the phonological lexicon (Vitevitch, 2008). The phonological network consists of a "giant component" (the largest connected component of the network) and "lexical islands" (smaller groups of words that are connected to each other, but not to the giant…
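The "giant component" and "lexical islands" named in this record are ordinary graph-theoretic notions and can be recovered with a plain breadth-first search. The sketch below is a toy illustration, not Vitevitch's actual network: the lexicon is invented, spellings stand in for phoneme transcriptions, and only one-substitution neighbors are counted (real phonological neighborhoods also allow one-phoneme additions and deletions).

```python
from collections import deque

# Toy phonological lexicon; spellings stand in for phoneme transcriptions.
WORDS = ["cat", "bat", "hat", "can", "cot", "rat", "mat",
         "ski", "sky", "spa", "zen"]

def is_neighbor(w1, w2):
    """One-substitution neighbors (real neighborhoods also allow add/delete)."""
    return len(w1) == len(w2) and sum(a != b for a, b in zip(w1, w2)) == 1

def connected_components(words):
    """Partition the lexicon into connected components via breadth-first search."""
    seen, components = set(), []
    for start in words:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            w = queue.popleft()
            if w in comp:
                continue
            comp.add(w)
            queue.extend(v for v in words if v not in comp and is_neighbor(w, v))
        seen |= comp
        components.append(comp)
    # Largest component first: the "giant component"; the rest are "lexical islands".
    return sorted(components, key=len, reverse=True)

comps = connected_components(WORDS)
print("giant component:", sorted(comps[0]))
print("lexical islands:", [sorted(c) for c in comps[1:]])
```

In this toy lexicon the seven -at/-ot words form the giant component, "ski"/"sky" form a two-word island, and "spa" and "zen" are isolates, mirroring the structure the abstract describes.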

  9. The Effect of Background Noise on the Word Activation Process in Nonnative Spoken-Word Recognition

    Science.gov (United States)

    Scharenborg, Odette; Coumans, Juul M. J.; van Hout, Roeland

    2018-01-01

    This article investigates 2 questions: (1) does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? And (2) do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on…

  10. The interaction of lexical semantics and cohort competition in spoken word recognition: an fMRI study.

    Science.gov (United States)

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A; Marslen-Wilson, William D; Tyler, Lorraine K

    2011-12-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning [Marslen-Wilson, W. D. Functional parallelism in spoken word-recognition. Cognition, 25, 71-102, 1987]. We examined these potential interactions in an fMRI study by presenting participants with words and pseudowords for lexical decision. In a factorial design, we manipulated (a) cohort competition (high/low competitive cohorts which vary the number of competing word candidates) and (b) the word's semantic properties (high/low imageability). A previous behavioral study [Tyler, L. K., Voice, J. K., & Moss, H. E. The interaction of meaning and sound in spoken word recognition. Psychonomic Bulletin & Review, 7, 320-326, 2000] showed that imageability facilitated word recognition but only for words in high competition cohorts. Here we found greater activity in the left inferior frontal gyrus (BA 45, 47) and the right inferior frontal gyrus (BA 47) with increased cohort competition, an imageability effect in the left posterior middle temporal gyrus/angular gyrus (BA 39), and a significant interaction between imageability and cohort competition in the left posterior superior temporal gyrus/middle temporal gyrus (BA 21, 22). In words with high competition cohorts, high imageability words generated stronger activity than low imageability words, indicating a facilitatory role of imageability in a highly competitive cohort context. For words in low competition cohorts, there was no effect of imageability. These results support the behavioral data in showing that selection processes do not rely solely on bottom-up acoustic-phonetic cues but rather that the semantic properties of candidate words facilitate discrimination between competitors.

  11. "Poetry Is Not a Special Club": How Has an Introduction to the Secondary Discourse of Spoken Word Made Poetry a Memorable Learning Experience for Young People?

    Science.gov (United States)

    Dymoke, Sue

    2017-01-01

    This paper explores the impact of a Spoken Word Education Programme (SWEP hereafter) on young people's engagement with poetry in a group of schools in London, UK. It does so with reference to the secondary Discourses of school-based learning and the Spoken Word community, an artistic "community of practice" into which they were being…

  12. Long-term memory traces for familiar spoken words in tonal languages as revealed by the Mismatch Negativity

    Directory of Open Access Journals (Sweden)

    Naiphinich Kotchabhakdi

    2004-11-01

    Mismatch negativity (MMN), a primary response to an acoustic change and an index of sensory memory, was used to investigate the processing of the discrimination between familiar and unfamiliar Consonant-Vowel (CV) speech contrasts. The MMN was elicited by rare familiar words presented among repetitive unfamiliar words. Phonetic and phonological contrasts were identical in all conditions. MMN elicited by the familiar word deviant was larger than that elicited by the unfamiliar word deviant. The presence of syllable contrast did significantly alter the word-elicited MMN in amplitude and scalp voltage field distribution. Thus, our results indicate the existence of word-related MMN enhancement largely independent of the word status of the standard stimulus. This enhancement may reflect the presence of a long-term memory trace for familiar spoken words in tonal languages.

  13. Word Detection in Sung and Spoken Sentences in Children With Typical Language Development or With Specific Language Impairment.

    Science.gov (United States)

    Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine

    2015-01-01

    Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. In Experiment 1, 69 children with TLD (7-10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7-12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Word detection abilities improved with age in both TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection.

  14. Word Detection in Sung and Spoken Sentences in Children With Typical Language Development or With Specific Language Impairment

    Science.gov (United States)

    Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine

    2015-01-01

    Background: Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. Method: In Experiment 1, 69 children with TLD (7–10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7–12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. Results: In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Conclusions: Word detection abilities improved with age in both TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection. PMID:26767070

  15. The role of visual representations during the lexical access of spoken words.

    Science.gov (United States)

    Lewis, Gwyneth; Poeppel, David

    2014-07-01

    Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability--concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. The Spoken Word, the Book and the Image in the Work of Evangelization

    Directory of Open Access Journals (Sweden)

    Jerzy Strzelczyk

    2017-06-01

    Little is known about the ‘material’ equipment of the early missionaries who set out to evangelize pagans and apostates, since the authors of the sources focused mainly on the successes (or failures) of the missions. Information concerning the ‘infrastructure’ of missions is rather occasional and of fragmentary nature. The major part in the process of evangelization must have been played by the spoken word, preached directly or through an interpreter, at least in the areas and milieus remote from the centers of ancient civilization. It could not have been otherwise when coming into contact with communities which did not know the art of reading, still less writing. A little more attention is devoted to the other two media, that is, the written word and the images. The significance of the written word was manifold, and – at least as far as the basic liturgical books are concerned (the missal, the evangeliary?) – the manuscripts were indispensable elements of missionaries’ equipment. In certain circumstances the books which the missionaries had at their disposal could acquire special – even magical – significance, the most comprehensible to the Christianized people (the examples given: the evangeliary of St. Winfried-Boniface in the face of death at the hands of a pagan Frisian, the episode with a manuscript in the story of Anskar’s mission written by Rimbert). The role of the plastic art representations (images) during the missions is much less frequently mentioned in the sources. After quoting a few relevant examples (Bede the Venerable, Ermoldus Nigellus, Paul the Deacon, Thietmar of Merseburg), the author also cites an interesting, although not entirely successful, attempt to use drama to instruct the Livonians in the faith while converting them to Christianity, which was reported by Henry of Latvia.

  17. Long-term repetition priming in spoken and written word production: evidence for a contribution of phonology to handwriting.

    Science.gov (United States)

    Damian, Markus F; Dorjee, Dusana; Stadthagen-Gonzalez, Hans

    2011-07-01

    Although it is relatively well established that access to orthographic codes in production tasks is possible via an autonomous link between meaning and spelling (e.g., Rapp, Benzing, & Caramazza, 1997), the relative contribution of phonology to orthographic access remains unclear. Two experiments demonstrated persistent repetition priming in spoken and written single-word responses, respectively. Two further experiments showed priming from spoken to written responses and vice versa, which is interpreted as reflecting a role of phonology in constraining orthographic access. A final experiment showed priming from spoken onto written responses even when participants engaged in articulatory suppression during writing. Overall, the results support the view that access to orthography codes is accomplished via both the autonomous link between meaning and spelling and an indirect route via phonology.

  18. Encoding lexical tones in jTRACE: a simulation of monosyllabic spoken word recognition in Mandarin Chinese.

    Science.gov (United States)

    Shuai, Lan; Malins, Jeffrey G

    2017-02-01

    Despite its prevalence as one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to consider tonal languages such as Mandarin Chinese. A key reason for this is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we drew on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to those of published data with human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.

  19. Feature Statistics Modulate the Activation of Meaning During Spoken Word Processing.

    Science.gov (United States)

    Devereux, Barry J; Taylor, Kirsten I; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K

    2016-03-01

    Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in (distinctiveness/sharedness) and likelihood of co-occurrence (correlational strength)--determine conceptual activation. To test these claims, we investigated the role of distinctiveness/sharedness and correlational strength in speech-to-meaning mapping, using a lexical decision task and computational simulations. Responses were faster for concepts with higher sharedness, suggesting that shared features are facilitatory in tasks like lexical decision that require access to them. Correlational strength facilitated responses for slower participants, suggesting a time-sensitive co-occurrence-driven settling mechanism. The computational simulation showed similar effects, with early effects of shared features and later effects of correlational strength. These results support a general-to-specific account of conceptual processing, whereby early activation of shared features is followed by the gradual emergence of a specific target representation. Copyright © 2015 The Authors. Cognitive Science published by Cognitive Science Society, Inc.
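The two feature statistics this record names can be made concrete with a toy computation. The concept-feature norms below are invented for illustration, and "correlational strength" is deliberately simplified here to a joint co-occurrence probability rather than the production-frequency correlations used with real property norms.

```python
# Toy concept-feature norms (invented for illustration).
CONCEPTS = {
    "dog":    {"has_legs", "has_fur", "barks"},
    "cat":    {"has_legs", "has_fur", "purrs"},
    "horse":  {"has_legs", "has_fur", "has_mane"},
    "canary": {"has_legs", "has_wings", "sings"},
}

# Sharedness: the number of concepts a feature occurs in;
# distinctiveness is conventionally its inverse.
sharedness = {}
for feats in CONCEPTS.values():
    for f in feats:
        sharedness[f] = sharedness.get(f, 0) + 1
distinctiveness = {f: 1 / n for f, n in sharedness.items()}

# Correlational strength, simplified here to the joint probability
# that two features occur in the same concept.
def co_occurrence(f1, f2):
    return sum(f1 in fs and f2 in fs for fs in CONCEPTS.values()) / len(CONCEPTS)

print(sharedness["has_legs"])                # shared: occurs in every concept
print(distinctiveness["barks"])              # distinctive: occurs in one concept
print(co_occurrence("has_legs", "has_fur"))
```

On this toy set, "has_legs" is maximally shared while "barks" is maximally distinctive, and "has_legs"/"has_fur" co-occur strongly, the kind of contrast the abstract's account turns on.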

  20. Time-compressed spoken words enhance driving performance in complex visual scenarios : evidence of crossmodal semantic priming effects in basic cognitive experiments and applied driving simulator studies

    OpenAIRE

    Castronovo, Angela

    2014-01-01

    Would speech warnings be a good option to inform drivers about time-critical traffic situations? Even though spoken words take time until they can be understood, listening is well trained from the earliest age and happens quite automatically. Therefore, it is conceivable that spoken words could immediately preactivate semantically identical (but physically diverse) visual information, and thereby enhance respective processing. Interestingly, this implies a crossmodal semantic effect of audito...

  1. Engaging Minority Youth in Diabetes Prevention Efforts Through a Participatory, Spoken-Word Social Marketing Campaign.

    Science.gov (United States)

    Rogers, Elizabeth A; Fine, Sarah C; Handley, Margaret A; Davis, Hodari B; Kass, James; Schillinger, Dean

    2017-07-01

    To examine the reach, efficacy, and adoption of The Bigger Picture, a type 2 diabetes (T2DM) social marketing campaign that uses spoken-word public service announcements (PSAs) to teach youth about socioenvironmental conditions influencing T2DM risk. A nonexperimental pilot dissemination evaluation through high school assemblies and a Web-based platform was used. The study took place in San Francisco Bay Area high schools during 2013. In the study, 885 students were sampled from 13 high schools. A 1-hour assembly provided data, poet performances, video PSAs, and Web-based platform information. A Web-based platform featured the campaign Web site and social media. Student surveys preassembly and postassembly (knowledge, attitudes), assembly observations, school demographics, counts of Web-based utilization, and adoption were measured. Descriptive statistics, McNemar's χ² test, and mixed modeling accounting for clustering were used to analyze data. The campaign included 23 youth poet-created PSAs. It reached >2400 students (93% self-identified non-white) through school assemblies and has garnered >1,000,000 views of Web-based video PSAs. School participants demonstrated increased short-term knowledge of T2DM as preventable, with risk driven by socioenvironmental factors (34% preassembly identified environmental causes as influencing T2DM risk compared to 83% postassembly), and perceived greater personal salience of T2DM risk reduction (p < .001 for all). The campaign has been adopted by regional public health departments. The Bigger Picture campaign showed its potential for reaching and engaging diverse youth. Campaign messaging is being adopted by stakeholders.

  2. Electrophysiological evidence for the involvement of the approximate number system in preschoolers' processing of spoken number words.

    Science.gov (United States)

    Pinhas, Michal; Donohue, Sarah E; Woldorff, Marty G; Brannon, Elizabeth M

    2014-09-01

    Little is known about the neural underpinnings of number word comprehension in young children. Here we investigated the neural processing of these words during the crucial developmental window in which children learn their meanings and asked whether such processing relies on the Approximate Number System. ERPs were recorded as 3- to 5-year-old children heard the words one, two, three, or six while looking at pictures of 1, 2, 3, or 6 objects. The auditory number word was incongruent with the number of visual objects on half the trials and congruent on the other half. Children's number word comprehension predicted their ERP incongruency effects. Specifically, children with the least number word knowledge did not show any ERP incongruency effects, whereas those with intermediate and high number word knowledge showed an enhanced, negative polarity incongruency response (N(inc)) over centroparietal sites from 200 to 500 msec after the number word onset. This negativity was followed by an enhanced, positive polarity incongruency effect (P(inc)) that emerged bilaterally over parietal sites at about 700 msec. Moreover, children with the most number word knowledge showed ratio dependence in the P(inc) (larger for greater compared with smaller numerical mismatches), a hallmark of the Approximate Number System. Importantly, a similar modulation of the P(inc) from 700 to 800 msec was found in children with intermediate number word knowledge. These results provide the first neural correlates of spoken number word comprehension in preschoolers and are consistent with the view that children map number words onto approximate number representations before they fully master the verbal count list.

  3. Effects of prosody on spoken Thai word perception in pre-attentive brain processing: a pilot study

    Directory of Open Access Journals (Sweden)

    Kittipun Arunphalungsanti

    2016-12-01

    This study aimed to investigate the effect of unfamiliar stressed prosody on spoken Thai word perception in the pre-attentive processing of the brain, as evaluated by the N2a and brain-wave oscillatory activity. EEG recordings were obtained from eleven participants, who were instructed to ignore the sound stimuli while watching silent movies. Results showed that the prosody of unfamiliar stressed words elicited the N2a component, and the quantitative EEG analysis found that theta and delta wave powers were principally generated in the frontal area. It is possible that the unfamiliar prosody, with its different frequencies, durations, and intensities of the sound of Thai words, induced highly selective attention and retrieval of information from episodic memory at the pre-attentive stage of speech perception. This brain electrical activity evidence could be used in further study for the development of valuable clinical tests to evaluate frontal lobe function in speech perception.

  4. Effects of Aging and Noise on Real-Time Spoken Word Recognition: Evidence from Eye Movements

    Science.gov (United States)

    Ben-David, Boaz M.; Chambers, Craig G.; Daneman, Meredyth; Pichora-Fuller, M. Kathleen; Reingold, Eyal M.; Schneider, Bruce A.

    2011-01-01

    Purpose: To use eye tracking to investigate age differences in real-time lexical processing in quiet and in noise in light of the fact that older adults find it more difficult than younger adults to understand conversations in noisy situations. Method: Twenty-four younger and 24 older adults followed spoken instructions referring to depicted…

  5. Webster's word power better English grammar improve your written and spoken English

    CERN Document Server

    Kirkpatrick, Betty

    2014-01-01

    With questions and answer sections throughout, this book helps you to improve your written and spoken English through understanding the structure of the English language. This is a thorough and useful book with all parts of speech and grammar explained. Used by ELT self-study students.

  6. Prosody and Spoken Word Recognition in Early and Late Spanish-English Bilingual Individuals

    Science.gov (United States)

    Boutsen, Frank R.; Dvorak, Justin D.; Deweber, Derick D.

    2017-01-01

    Purpose: This study was conducted to compare the influence of word properties on gated single-word recognition in monolingual and bilingual individuals under conditions of native and nonnative accent and to determine whether word-form prosody facilitates recognition in bilingual individuals. Method: Word recognition was assessed in monolingual and…

  7. The Influence of the Phonological Neighborhood Clustering Coefficient on Spoken Word Recognition

    Science.gov (United States)

    Chan, Kit Ying; Vitevitch, Michael S.

    2009-01-01

    Clustering coefficient--a measure derived from the new science of networks--refers to the proportion of phonological neighbors of a target word that are also neighbors of each other. Consider the words "bat", "hat", and "can", all of which are neighbors of the word "cat"; the words "bat" and…
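The clustering coefficient defined in this record is straightforward to compute. The sketch below reuses the abstract's own example ("bat", "hat", and "can" as neighbors of "cat"); it treats spellings as phoneme strings and counts only one-substitution neighbors (real phonological neighborhoods also include one-phoneme additions and deletions), both simplifying assumptions.

```python
from itertools import combinations

def is_neighbor(w1, w2):
    """One-substitution neighbors (real neighborhoods also allow add/delete)."""
    return len(w1) == len(w2) and sum(a != b for a, b in zip(w1, w2)) == 1

def clustering_coefficient(target, lexicon):
    """Proportion of the target's neighbor pairs that are neighbors of each other."""
    nbrs = [w for w in lexicon if is_neighbor(target, w)]
    if len(nbrs) < 2:
        return 0.0
    pairs = list(combinations(nbrs, 2))
    return sum(is_neighbor(a, b) for a, b in pairs) / len(pairs)

# "bat", "hat", and "can" are all neighbors of "cat", but only the pair
# ("bat", "hat") are neighbors of each other: 1 linked pair out of 3.
print(clustering_coefficient("cat", ["bat", "hat", "can"]))
```

Here "cat" gets a clustering coefficient of 1/3: of the three possible pairs among its neighbors, only "bat"/"hat" are themselves neighbors.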

  8. The Language, Tone and Prosody of Emotions: Neural Substrates and Dynamics of Spoken-Word Emotion Perception.

    Science.gov (United States)

    Liebenthal, Einat; Silbersweig, David A; Stern, Emily

    2016-01-01

    Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala--a subcortical center for emotion perception--are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody, which evolves on longer time scales and is conveyed by fine-grained spectral cues, appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role for prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states.

  9. The power of the spoken word: sociolinguistic cues influence the misinformation effect.

    Science.gov (United States)

    Vornik, Lana A; Sharman, Stefanie J; Garry, Maryanne

    2003-01-01

    We investigated whether the sociolinguistic information delivered by spoken, accented postevent narratives would influence the misinformation effect. New Zealand subjects listened to misleading postevent information spoken in either a New Zealand (NZ) or North American (NA) accent. Consistent with earlier research, we found that NA accents were seen as more powerful and more socially attractive. We found that accents per se had no influence on the misinformation effect but sociolinguistic factors did: both power and social attractiveness affected subjects' susceptibility to misleading postevent suggestions. When subjects rated the speaker highly on power, social attractiveness did not matter; they were equally misled. However, when subjects rated the speaker low on power, social attractiveness did matter: subjects who rated the speaker high on social attractiveness were more misled than subjects who rated it lower. There were similar effects for confidence. These results have implications for our understanding of social influences on the misinformation effect.

  10. Stimulus variability and the phonetic relevance hypothesis: effects of variability in speaking style, fundamental frequency, and speaking rate on spoken word identification.

    Science.gov (United States)

    Sommers, Mitchell S; Barcroft, Joe

    2006-04-01

    Three experiments were conducted to examine the effects of trial-to-trial variations in speaking style, fundamental frequency, and speaking rate on identification of spoken words. In addition, the experiments investigated whether any effects of stimulus variability would be modulated by phonetic confusability (i.e., lexical difficulty). In Experiment 1, trial-to-trial variations in speaking style reduced the overall identification performance compared with conditions containing no speaking-style variability. In addition, the effects of variability were greater for phonetically confusable words than for phonetically distinct words. In Experiment 2, variations in fundamental frequency were found to have no significant effects on spoken word identification and did not interact with lexical difficulty. In Experiment 3, two different methods for varying speaking rate were found to have equivalent negative effects on spoken word recognition and similar interactions with lexical difficulty. Overall, the findings are consistent with a phonetic-relevance hypothesis, in which accommodating sources of acoustic-phonetic variability that affect phonetically relevant properties of speech signals can impair spoken word identification. In contrast, variability in parameters of the speech signal that do not affect phonetically relevant properties are not expected to affect overall identification performance. Implications of these findings for the nature and development of lexical representations are discussed.

  11. A connectionist model for the simulation of human spoken-word recognition

    NARCIS (Netherlands)

    Kuijk, D.J. van; Wittenburg, P.; Dijkstra, A.F.J.; Brinker, B.P.L.M. Den; Beek, P.J.; Brand, A.N.; Maarse, F.J.; Mulder, L.J.M.

    1999-01-01

    A new psycholinguistically motivated, neural network-based model of human spoken-word recognition is presented. In contrast to earlier models, it uses real speech as input. At the word layer, acoustical and temporal information is stored by sequences of connected sensory neurons that pass on sensor…
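
    The competition dynamics that such connectionist models implement can be sketched minimally: word units gain activation from matching input and suppress one another through lateral inhibition. A toy illustration (lexicon, parameters, and update rule are invented for exposition, not the model described above):

```python
# Toy interactive-activation sketch: word units receive bottom-up support
# from matching input phonemes and compete via lateral inhibition.
# All parameters and the mini-lexicon are illustrative only.

LEXICON = {"cat": "kat", "cap": "kap", "dog": "dog"}

def recognize(input_phonemes, steps_per_phoneme=10,
              excitation=0.1, inhibition=0.05, decay=0.02):
    act = {w: 0.0 for w in LEXICON}
    for t, ph in enumerate(input_phonemes):
        for _ in range(steps_per_phoneme):
            total = sum(act.values())
            for w, phones in LEXICON.items():
                # bottom-up support only if this word predicts the current phoneme
                support = excitation if t < len(phones) and phones[t] == ph else 0.0
                # lateral inhibition from all other word units
                inhib = inhibition * (total - act[w])
                act[w] = min(1.0, max(0.0, act[w] + support - inhib - decay * act[w]))
    return act

acts = recognize("kat")
# "cat" matches all three input phonemes, "cap" only two, "dog" none
assert acts["cat"] > acts["cap"] > acts["dog"]
```

The key qualitative behaviour, shared with models like the one above, is that partially matching words ("cap") stay active as competitors until disambiguating input arrives.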

  12. Learning Spoken Words via the Ears and Eyes: Evidence from 30-Month-Old Children

    Directory of Open Access Journals (Sweden)

    Mélanie Havy

    2017-12-01

    From the very first moments of their lives, infants are able to link specific movements of the visual articulators to auditory speech signals. However, recent evidence indicates that infants focus primarily on auditory speech signals when learning new words. Here, we ask whether 30-month-old children are able to learn new words based solely on visible speech information, and whether information from both auditory and visual modalities is available after learning in only one modality. To test this, children were taught new lexical mappings. One group of children experienced the words in the auditory modality (i.e., the acoustic form of the word with no accompanying face). Another group experienced the words in the visual modality (seeing a silent talking face). Lexical recognition was tested in either the learning modality or in the other modality. Results revealed successful word learning in either modality. Results further showed cross-modal recognition following an auditory-only, but not a visual-only, experience of the words. Together, these findings suggest that visible speech becomes increasingly informative for the purpose of lexical learning, but that an auditory-only experience evokes a cross-modal representation of the words.

  13. Distinctive Phonological Features Differ in Relevance for Both Spoken and Written Word Recognition

    Science.gov (United States)

    Ernestus, Mirjam; Mak, Willem Marinus

    2004-01-01

    This paper discusses four experiments on Dutch which show that distinctive phonological features differ in their relevance for word recognition. The relevance of a feature for word recognition depends on its phonological stability, that is, the extent to which that feature is generally realized in accordance with its lexical specification in the…

  14. How vocabulary size in two languages relates to efficiency in spoken word recognition by young Spanish-English bilinguals.

    Science.gov (United States)

    Marchman, Virginia A; Fernald, Anne; Hurtado, Nereyda

    2010-09-01

    Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n = 26; age 2;6). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children's facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children's ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language.
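
    The "after controlling for" analyses here are partial correlations. A minimal sketch with simulated data (variable names and effect sizes are illustrative, not the study's dataset):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out covariates z (least squares)."""
    z = np.column_stack([np.ones(len(x)), z])
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]  # residuals of x given z
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]  # residuals of y given z
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
n = 200
speed_other = rng.normal(size=n)          # e.g., processing speed in the other language
vocab = 0.8 * rng.normal(size=n)          # vocabulary size in the same language
efficiency = 0.6 * vocab + 0.3 * speed_other + 0.5 * rng.normal(size=n)

r = partial_corr(efficiency, vocab, speed_other)
assert r > 0.5  # within-language link survives controlling for the covariate
```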

  15. The power of the spoken word in life, psychiatry, and psychoanalysis--a contribution to interpersonal psychoanalysis.

    Science.gov (United States)

    Lothane, Zvi

    2007-09-01

    Starting with an 1890 essay by Freud, the author goes in search of an interpersonal psychology native to Freud's psychoanalytic method, to psychoanalysis, and to the interpersonal method in psychiatry. This derives from the basic interpersonal nature of the human situation in the lives of individuals and social groups. Psychiatry, the healing of the soul, and psychotherapy, therapy of the soul, are examined from the perspective of the communication model, based on the essential interpersonal function of language and the spoken word: persons addressing speech to themselves and to others in relationships, between family members, others in society, and the professionals who serve them. The communicational model is also applied in examining psychiatric disorders and psychiatric diagnoses, as well as psychodynamic formulas, which leads to a reformulation of psychoanalytic therapy as a process. A plea is entered to define psychoanalysis as an interpersonal discipline, in analogy to Sullivan's interpersonal psychiatry.

  16. Interaction between episodic and semantic memory networks in the acquisition and consolidation of novel spoken words.

    Science.gov (United States)

    Takashima, Atsuko; Bakker, Iske; van Hell, Janet G; Janzen, Gabriele; McQueen, James M

    2017-04-01

    When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and again after a delay of one week. Words learned with meanings were remembered better than those learned without meanings. Both episodic (hippocampus-dependent) and semantic (dependent on distributed neocortical areas) memory systems were utilised during recognition of the novel words. The extent to which the two systems were involved changed as a function of time and the amount of associated information, with more involvement of both systems for the meaningful words than for the form-only words after the one-week delay. These results suggest that the reason the meaningful words were remembered better is that their retrieval can benefit more from these two complementary memory systems. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Distinct Patterns of Brain Activity Characterise Lexical Activation and Competition in Spoken Word Production

    NARCIS (Netherlands)

    Piai, V.; Roelofs, A.P.A.; Jensen, O.; Schoffelen, J.M.; Bonnefond, M.

    2014-01-01

    According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography study…

  18. Visual information constrains early and late stages of spoken-word recognition in sentence context.

    Science.gov (United States)

    Brunellière, Angèle; Sánchez-García, Carolina; Ikumi, Nara; Soto-Faraco, Salvador

    2013-07-01

    Audiovisual speech perception has been frequently studied considering phoneme, syllable and word processing levels. Here, we examined the constraints that visual speech information might exert during the recognition of words embedded in a natural sentence context. We recorded event-related potentials (ERPs) to words that could be either strongly or weakly predictable on the basis of the prior semantic sentential context and, whose initial phoneme varied in the degree of visual saliency from lip movements. When the sentences were presented audio-visually (Experiment 1), words weakly predicted from semantic context elicited a larger long-lasting N400, compared to strongly predictable words. This semantic effect interacted with the degree of visual saliency over a late part of the N400. When comparing audio-visual versus auditory alone presentation (Experiment 2), the typical amplitude-reduction effect over the auditory-evoked N100 response was observed in the audiovisual modality. Interestingly, a specific benefit of high- versus low-visual saliency constraints occurred over the early N100 response and at the late N400 time window, confirming the result of Experiment 1. Taken together, our results indicate that the saliency of visual speech can exert an influence over both auditory processing and word recognition at relatively late stages, and thus suggest strong interactivity between audio-visual integration and other (arguably higher) stages of information processing during natural speech comprehension. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Distinct patterns of brain activity characterise lexical activation and competition in spoken word production.

    Directory of Open Access Journals (Sweden)

    Vitória Piai

    According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography study in which the activation of competing words was manipulated by presenting pictures (e.g., dog) with distractor words. The distractor and picture name were semantically related (cat), unrelated (pin), or identical (dog). Related distractors are stronger competitors to the picture name because they receive additional activation from the picture relative to other distractors. Picture naming times were longer with related than with unrelated and identical distractors. Phase-locked and non-phase-locked activity were distinct but temporally related. Phase-locked activity in left temporal cortex, peaking at 400 ms, was larger on unrelated than on related and identical trials, suggesting differential activation of alternative words by the picture-word stimuli. Non-phase-locked activity between roughly 350 and 650 ms (4-10 Hz) in left superior frontal gyrus was larger on related than on unrelated and identical trials, suggesting differential resolution of the competition among the alternatives, as reflected in the naming times. These findings characterise distinct patterns of activity associated with lexical activation and competition, supporting the theory that words are selected by competition.
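
    The phase-locked versus non-phase-locked distinction can be illustrated with simulated trials: averaging across trials preserves the phase-locked (evoked) component, while subtracting that average from each trial before computing power isolates the non-phase-locked (induced) part. A sketch under those assumptions (simulated data, not the study's MEG pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_samples, fs = 100, 500, 1000
t = np.arange(n_samples) / fs

trials = np.zeros((n_trials, n_samples))
for i in range(n_trials):
    evoked = np.sin(2 * np.pi * 8 * t)                 # same phase on every trial
    phase = rng.uniform(0, 2 * np.pi)
    induced = 0.8 * np.sin(2 * np.pi * 8 * t + phase)  # random phase per trial
    trials[i] = evoked + induced + 0.1 * rng.normal(size=n_samples)

phase_locked = trials.mean(axis=0)        # random-phase activity averages out
non_phase_locked = trials - phase_locked  # per-trial residual
induced_power = (non_phase_locked ** 2).mean()

assert np.abs(phase_locked).max() > 0.85  # evoked component survives averaging
assert 0.2 < induced_power < 0.5          # ~0.8**2 / 2 = 0.32 from the induced part
```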

  20. Grasp it loudly! Supporting actions with semantically congruent spoken action words.

    Directory of Open Access Journals (Sweden)

    Raphaël Fargier

    Evidence for cross-talk between motor and language brain structures has accumulated over the past several years. However, while a significant amount of research has focused on the interaction between language perception and action, little attention has been paid to the potential impact of language production on overt motor behaviour. The aim of the present study was to test whether verbalizing during a grasp-to-displace action would affect motor behaviour and, if so, whether this effect would depend on the semantic content of the pronounced word (Experiment I). Furthermore, we sought to test the stability of such effects in a different group of participants and to investigate at which stage of the motor act language intervenes (Experiment II). For this, participants were asked to reach, grasp and displace an object while overtly pronouncing verbal descriptions of the action ("grasp" and "put down") or unrelated words (e.g., "butterfly" and "pigeon"). Fine-grained analyses of several kinematic parameters, such as velocity peaks, revealed that when participants produced action-related words their movements became faster compared with conditions in which they did not verbalize or in which they produced words that were not related to the action. These effects likely result from the functional interaction between semantic retrieval of the words and the planning and programming of the action. Therefore, links between (action) language and motor structures are significant, to the point that language can refine overt motor behaviour.

  1. Parallel language activation and cognitive control during spoken word recognition in bilinguals

    Science.gov (United States)

    Blumenfeld, Henrike K.; Marian, Viorica

    2013-01-01

    Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals’ parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300 and 500 ms after word onset was associated with smaller Stroop effects; between 633 and 767 ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842

  2. Brain-based translation: fMRI decoding of spoken words in bilinguals reveals language-independent semantic representations in anterior temporal lobe.

    Science.gov (United States)

    Correia, João; Formisano, Elia; Valente, Giancarlo; Hausfeld, Lars; Jansma, Bernadette; Bonte, Milene

    2014-01-01

    Bilinguals derive the same semantic concepts from equivalent, but acoustically different, words in their first and second languages. The neural mechanisms underlying the representation of language-independent concepts in the brain remain unclear. Here, we measured fMRI in human bilingual listeners and reveal that response patterns to individual spoken nouns in one language (e.g., "horse" in English) accurately predict the response patterns to equivalent nouns in the other language (e.g., "paard" in Dutch). Stimuli were four monosyllabic words in both languages, all from the category of "animal" nouns. For each word, pronunciations from three different speakers were included, allowing the investigation of speaker-independent representations of individual words. We used multivariate classifiers and a searchlight method to map the informative fMRI response patterns that enable decoding spoken words within languages (within-language discrimination) and across languages (across-language generalization). Response patterns discriminative of spoken words within language were distributed in multiple cortical regions, reflecting the complexity of the neural networks recruited during speech and language processing. Response patterns discriminative of spoken words across language were limited to localized clusters in the left anterior temporal lobe, the left angular gyrus and the posterior bank of the left postcentral gyrus, the right posterior superior temporal sulcus/superior temporal gyrus, the right medial anterior temporal lobe, the right anterior insula, and bilateral occipital cortex. These results corroborate the existence of "hub" regions organizing semantic-conceptual knowledge in abstract form at the fine-grained level of within semantic category discriminations.
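
    The across-language generalization logic (train on response patterns from one language, test on translation equivalents in the other) can be sketched with simulated patterns and a simple nearest-centroid classifier. The actual study used multivariate classifiers with a searchlight over real fMRI data; everything below is simulated:

```python
import numpy as np

rng = np.random.default_rng(42)
n_voxels, n_reps = 50, 20
concepts = ["horse", "duck", "bear", "shark"]
# a shared, language-independent "semantic" pattern per concept
proto = {c: rng.normal(size=n_voxels) for c in concepts}

def patterns(noise_sd):
    """Simulated noisy response patterns for each concept."""
    X, y = [], []
    for c in concepts:
        for _ in range(n_reps):
            X.append(proto[c] + noise_sd * rng.normal(size=n_voxels))
            y.append(c)
    return np.array(X), y

X_en, y_en = patterns(0.8)  # English presentations (training)
X_nl, y_nl = patterns(0.8)  # Dutch presentations (held-out test)

centroids = {c: X_en[[i for i, lab in enumerate(y_en) if lab == c]].mean(axis=0)
             for c in concepts}

def classify(x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

acc = np.mean([classify(x) == lab for x, lab in zip(X_nl, y_nl)])
assert acc > 0.9  # generalizes across languages when concept codes are shared
```

Above-chance across-language accuracy is the signature of a language-independent representation; in the study this was computed locally within each searchlight sphere.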

  3. Lexical Tone Variation and Spoken Word Recognition in Preschool Children: Effects of Perceptual Salience

    Science.gov (United States)

    Singh, Leher; Tan, Aloysia; Wewalaarachchi, Thilanga D.

    2017-01-01

    Children undergo gradual progression in their ability to differentiate correct and incorrect pronunciations of words, a process that is crucial to establishing a native vocabulary. For the most part, the development of mature phonological representations has been researched by investigating children's sensitivity to consonant and vowel variation,…

  4. Children show right-lateralized effects of spoken word-form learning.

    Directory of Open Access Journals (Sweden)

    Anni Nora

    It is commonly thought that phonological learning is different in young children compared with adults, possibly because the speech processing system has not yet reached full native-language specialization. However, the neurocognitive mechanisms of phonological learning in children are poorly understood. We employed magnetoencephalography (MEG) to track cortical correlates of incidental learning of meaningless word forms over two days as 6-to-8-year-olds overtly repeated them. Native (Finnish) pseudowords were compared with words of foreign sound structure (Korean) to investigate whether the cortical learning effects would be more dependent on previous proficiency in the language than on maturational factors. Half of the items were encountered four times on the first day and once more on the following day. Incidental learning of these recurring word forms manifested as improved repetition accuracy and a correlated reduction of activation in the right superior temporal cortex, similarly for both languages and on both experimental days, in contrast to the salient left-hemisphere emphasis previously reported in adults. We propose that children, when learning new word forms in either a native or a foreign language, are not yet constrained by left-hemispheric segmental processing and established sublexical native-language representations. Instead, they may rely more on supra-segmental contours and prosody.

  5. Attention, gaze shifting, and dual-task interference from phonological encoding in spoken word planning

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2008-01-01

    Controversy exists about whether dual-task interference from word planning reflects structural bottleneck or attentional control factors. Here, participants named pictures whose names could or could not be phonologically prepared, and they manually responded to arrows presented away from (Experiment…

  6. Two-year-olds' sensitivity to subphonemic mismatch during online spoken word recognition.

    Science.gov (United States)

    Paquette-Smith, Melissa; Fecher, Natalie; Johnson, Elizabeth K

    2016-11-01

    Sensitivity to noncontrastive subphonemic detail plays an important role in adult speech processing, but little is known about children's use of this information during online word recognition. In two eye-tracking experiments, we investigate 2-year-olds' sensitivity to a specific type of subphonemic detail: coarticulatory mismatch. In Experiment 1, toddlers viewed images of familiar objects (e.g., a boat and a book) while hearing labels containing appropriate or inappropriate coarticulation. Inappropriate coarticulation was created by cross-splicing the coda of the target word onto the onset of another word that shared the same onset and nucleus (e.g., to create boat, the final consonant of boat was cross-spliced onto the initial CV of bone). We tested 24-month-olds and 29-month-olds in this paradigm. Both age groups behaved similarly, readily detecting the inappropriate coarticulation (i.e., showing better recognition of identity-spliced than cross-spliced items). In Experiment 2, we asked how children's sensitivity to subphonemic mismatch compared to their sensitivity to phonemic mismatch. Twenty-nine-month-olds were presented with targets that contained either a phonemic (e.g., the final consonant of boat was spliced onto the initial CV of bait) or a subphonemic mismatch (e.g., the final consonant of boat was spliced onto the initial CV of bone). Here, the subphonemic (coarticulatory) mismatch was not nearly as disruptive to children's word recognition as a phonemic mismatch. Taken together, our findings support the view that 2-year-olds, like adults, use subphonemic information to optimize online word recognition.

  7. Attention for speaking: domain-general control from the anterior cingulate cortex in spoken word production

    Directory of Open Access Journals (Sweden)

    Vitoria ePiai

    2013-12-01

    Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes have remained relatively underspecified. We report the results of an fMRI study examining the neural substrates related to performance in three attention-demanding tasks varying in the amount of linguistic processing: vocal picture naming while ignoring distractors (picture-word interference, PWI); vocal colour naming while ignoring distractors (Stroop); and manual object discrimination while ignoring spatial position (Simon task). All three tasks had congruent and incongruent stimuli, while PWI and Stroop also had neutral stimuli. Analyses focusing on common activation across tasks identified a portion of the dorsal anterior cingulate cortex that was active on incongruent trials in all three tasks, suggesting that this region subserves a domain-general attentional control function. In the language tasks, this area showed increased activity for incongruent relative to congruent stimuli, consistent with the involvement of domain-general mechanisms of attentional control in word production. The two language tasks also showed activity in anterior-superior temporal gyrus. Activity increased for neutral PWI stimuli (picture and word did not share the same semantic category) relative to incongruent (categorically related) and congruent stimuli. This finding is consistent with the involvement of language-specific areas in word production, possibly related to retrieval of lexical-semantic information from memory. The current results thus suggest that in addition to engaging language-specific areas for core linguistic processes, speaking also engages the anterior cingulate cortex, a region that is likely implementing domain-general attentional control.

  8. Pre-activation negativity (PrAN) in brain potentials to unfolding words

    Directory of Open Access Journals (Sweden)

    Pelle Söderström

    2016-10-01

    We describe an ERP effect termed the 'pre-activation negativity' (PrAN), which is proposed to index the degree of pre-activation of upcoming word-internal morphemes in speech processing. Using lexical competition measures based on word-initial speech fragments (WIFs), as well as statistical analyses of ERP data from three experiments, it is shown that the PrAN is sensitive to lexical competition and that it reflects the degree of predictive certainty: the negativity is larger when there are fewer upcoming lexical competitors.
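
    A fragment-based lexical competition measure of this kind can be sketched as a cohort count over a lexicon: the fewer entries remain compatible with a word-initial fragment, the greater the predictive certainty. A toy illustration (hypothetical mini-lexicon, not the study's materials):

```python
# Cohort-style competition count: how many lexicon entries are still
# consistent with a word-initial fragment? Toy lexicon, illustrative only.

LEXICON = ["candle", "candy", "cannon", "canvas", "captain", "dolphin"]

def competitors(fragment):
    return [w for w in LEXICON if w.startswith(fragment)]

assert len(competitors("ca")) == 5
assert len(competitors("can")) == 4
assert len(competitors("cand")) == 2   # candle, candy
assert competitors("candl") == ["candle"]  # fully disambiguated
```

On this measure, a larger PrAN would be expected at "candl" (one remaining candidate) than at "ca" (five remaining candidates).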

  9. Let's all speak together! Exploring the masking effects of various languages on spoken word identification in multi-linguistic babble.

    Science.gov (United States)

    Gautreau, Aurore; Hoen, Michel; Meunier, Fanny

    2013-01-01

    This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures, which included an intelligibility task (at a -5 dB Signal-to-Noise Ratio) and 2 lexical decision tasks (at a -5 dB and 0 dB SNR) that were performed with French spoken target words. In these 3 experiments we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) that were produced in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained similar spectro-temporal information as babble but lacked linguistic information. At -5 dB SNR, both tasks revealed significantly divergent results between the unknown languages (i.e., Irish and Italian) with Italian and French hindering French target word identification to a similar extent, whereas Irish led to significantly better performances on these tasks. By comparing the performances obtained with speech and fluctuating noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it only revealed a linguistic effect for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference was different. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.
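
    The -5 dB and 0 dB SNR conditions imply scaling the masker relative to the target's level. A minimal sketch of how such a mix could be constructed (simulated signals and a standard RMS-based definition of SNR; not the authors' stimulus-preparation code):

```python
import numpy as np

def mix_at_snr(target, masker, snr_db):
    """Rescale masker so the target-to-masker RMS ratio equals snr_db, then mix."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    masker_scaled = masker * (rms(target) / rms(masker)) / (10 ** (snr_db / 20))
    return target + masker_scaled, masker_scaled

rng = np.random.default_rng(3)
target = rng.normal(size=16000)        # stands in for a spoken target word
babble = 2.5 * rng.normal(size=16000)  # stands in for 4-talker babble

_, scaled = mix_at_snr(target, babble, -5.0)
achieved = 20 * np.log10(np.sqrt(np.mean(target**2)) / np.sqrt(np.mean(scaled**2)))
assert abs(achieved - (-5.0)) < 1e-9   # masker sits 5 dB above the target
```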

  10. Vocabulary learning in a Yorkshire terrier: slow mapping of spoken words.

    Directory of Open Access Journals (Sweden)

    Ulrike Griebel

    Rapid vocabulary learning in children has been attributed to "fast mapping", with new words often claimed to be learned through a single presentation. As reported in 2004 in Science, a border collie (Rico) not only learned to identify more than 200 words, but fast mapped the new words, remembering meanings after just one presentation. Our research tests the fast mapping interpretation of the Science paper based on Rico's results, while extending the demonstration of large vocabulary recognition to a lap dog. We tested a Yorkshire terrier (Bailey) with the same procedures as Rico, illustrating that Bailey accurately retrieved randomly selected toys from a set of 117 on voice command of the owner. Second, we tested her retrieval based on two additional voices, one male, one female, with different accents that had never been involved in her training, again showing she was capable of recognition by voice command. Third, we did both exclusion-based training of new items (toys she had never seen before with names she had never heard before) embedded in a set of known items, with subsequent retention tests designed as in the Rico experiment. After Bailey succeeded on exclusion and retention tests, a crucial evaluation of true mapping tested items previously successfully retrieved in exclusion and retention, but now pitted against each other in a two-choice task. Bailey failed on the true mapping task repeatedly, illustrating that the claim of fast mapping in Rico had not been proven, because no true mapping task had ever been conducted with him. It appears that the task called retention in the Rico study only demonstrated success in retrieval by a process of extended exclusion.

  11. Vocabulary Learning in a Yorkshire Terrier: Slow Mapping of Spoken Words

    Science.gov (United States)

    Griebel, Ulrike; Oller, D. Kimbrough

    2012-01-01

    Rapid vocabulary learning in children has been attributed to “fast mapping”, with new words often claimed to be learned through a single presentation. As reported in 2004 in Science a border collie (Rico) not only learned to identify more than 200 words, but fast mapped the new words, remembering meanings after just one presentation. Our research tests the fast mapping interpretation of the Science paper based on Rico's results, while extending the demonstration of large vocabulary recognition to a lap dog. We tested a Yorkshire terrier (Bailey) with the same procedures as Rico, illustrating that Bailey accurately retrieved randomly selected toys from a set of 117 on voice command of the owner. Second we tested her retrieval based on two additional voices, one male, one female, with different accents that had never been involved in her training, again showing she was capable of recognition by voice command. Third, we did both exclusion-based training of new items (toys she had never seen before with names she had never heard before) embedded in a set of known items, with subsequent retention tests designed as in the Rico experiment. After Bailey succeeded on exclusion and retention tests, a crucial evaluation of true mapping tested items previously successfully retrieved in exclusion and retention, but now pitted against each other in a two-choice task. Bailey failed on the true mapping task repeatedly, illustrating that the claim of fast mapping in Rico had not been proven, because no true mapping task had ever been conducted with him. It appears that the task called retention in the Rico study only demonstrated success in retrieval by a process of extended exclusion. PMID:22363421

  12. Inferior Frontal Cortex Contributions to the Recognition of Spoken Words and Their Constituent Speech Sounds.

    Science.gov (United States)

    Rogers, Jack C; Davis, Matthew H

    2017-05-01

    Speech perception and comprehension are often challenged by the need to recognize speech sounds that are degraded or ambiguous. Here, we explore the cognitive and neural mechanisms involved in resolving ambiguity in the identity of speech sounds using syllables that contain ambiguous phonetic segments (e.g., intermediate sounds between /b/ and /g/ as in "blade" and "glade"). We used an audio-morphing procedure to create a large set of natural sounding minimal pairs that contain phonetically ambiguous onset or offset consonants (differing in place, manner, or voicing). These ambiguous segments occurred in different lexical contexts (i.e., in words or pseudowords, such as blade-glade or blem-glem) and in different phonological environments (i.e., with neighboring syllables that differed in lexical status, such as blouse-glouse). These stimuli allowed us to explore the impact of phonetic ambiguity on the speed and accuracy of lexical decision responses (Experiment 1), semantic categorization responses (Experiment 2), and the magnitude of BOLD fMRI responses during attentive comprehension (Experiment 3). For both behavioral and neural measures, observed effects of phonetic ambiguity were influenced by lexical context leading to slower responses and increased activity in the left inferior frontal gyrus for high-ambiguity syllables that distinguish pairs of words, but not for equivalent pseudowords. These findings suggest lexical involvement in the resolution of phonetic ambiguity. Implications for speech perception and the role of inferior frontal regions are discussed.

  13. Non-linear processing of a linear speech stream: The influence of morphological structure on the recognition of spoken Arabic words.

    Science.gov (United States)

    Gwilliams, L; Marantz, A

    2015-08-01

    Although the significance of morphological structure is established in visual word processing, its role in auditory processing remains unclear. Using magnetoencephalography we probe the significance of the root morpheme for spoken Arabic words with two experimental manipulations. First we compare a model of auditory processing that calculates probable lexical outcomes based on whole-word competitors, versus a model that only considers the root as relevant to lexical identification. Second, we assess violations to the root-specific Obligatory Contour Principle (OCP), which disallows root-initial consonant gemination. Our results show root prediction to significantly correlate with neural activity in superior temporal regions, independent of predictions based on whole-word competitors. Furthermore, words that violated the OCP constraint were significantly easier to dismiss as valid words than probability-matched counterparts. The findings suggest that lexical auditory processing is dependent upon morphological structure, and that the root forms a principal unit through which spoken words are recognised. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  14. The role of visual representations within working memory for paired-associate and serial order of spoken words.

    Science.gov (United States)

    Ueno, Taiji; Saito, Satoru

    2013-09-01

    Caplan and colleagues have recently explained paired-associate learning and serial-order learning with a single-mechanism computational model by assuming differential degrees of isolation. Specifically, two items in a pair can be grouped together and associated to positional codes that are somewhat isolated from the rest of the items. In contrast, the degree of isolation among the studied items is lower in serial-order learning. One of the key predictions drawn from this theory is that any variables that help chunking of two adjacent items into a group should be beneficial to paired-associate learning, more than serial-order learning. To test this idea, the role of visual representations in memory for spoken verbal materials (i.e., imagery) was compared between two types of learning directly. Experiment 1 showed stronger effects of word concreteness and of concurrent presentation of irrelevant visual stimuli (dynamic visual noise: DVN) in paired-associate memory than in serial-order memory, consistent with the prediction. Experiment 2 revealed that the irrelevant visual stimuli effect was boosted when the participants had to actively maintain the information within working memory, rather than feed it to long-term memory for subsequent recall, due to cue overloading. This indicates that the sensory input from irrelevant visual stimuli can reach and affect visual representations of verbal items within working memory, and that this disruption can be attenuated when the information within working memory can be efficiently supported by long-term memory for subsequent recall.

  15. Young children learning Spanish make rapid use of grammatical gender in spoken word recognition.

    Science.gov (United States)

    Lew-Williams, Casey; Fernald, Anne

    2007-03-01

    All nouns in Spanish have grammatical gender, with obligatory gender marking on preceding articles (e.g., la and el, the feminine and masculine forms of "the," respectively). Adult native speakers of languages with grammatical gender exploit this cue in on-line sentence interpretation. In a study investigating the early development of this ability, Spanish-learning children (34-42 months) were tested in an eye-tracking procedure. Presented with pairs of pictures with names of either the same grammatical gender (la pelota, "ball [feminine]"; la galleta, "cookie [feminine]") or different grammatical gender (la pelota; el zapato, "shoe [masculine]"), they heard sentences referring to one picture (Encuentra la pelota, "Find the ball"). The children were faster to orient to the referent on different-gender trials, when the article was potentially informative, than on same-gender trials, when it was not, and this ability was correlated with productive measures of lexical and grammatical competence. Spanish-learning children who can speak only 500 words already use gender-marked articles in establishing reference, a processing advantage characteristic of native Spanish-speaking adults.

  16. Children’s recall of words spoken in their first and second language: Effects of signal-to-noise ratio and reverberation time

    Directory of Open Access Journals (Sweden)

    Anders Hurtig

    2016-01-01

    Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few studies have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants’ first (L1) and second (L2) language. A total of 72 children (10 years old) participated in this study. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 s) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 than when they were presented in L1. Words presented with a high SNR (+12 dBA) were recalled better than words presented with a low SNR (+3 dBA). Reverberation time interacted with SNR such that at +12 dBA the shorter reverberation time improved recall, but at +3 dBA it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language.
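    The SNR manipulation above can be made concrete with a minimal sketch: a signal-to-noise ratio in decibels is ten times the base-10 logarithm of the ratio of signal power to noise power (the study's +3 and +12 dBA figures are A-weighted variants of this quantity). The waveforms and sample rate below are illustrative stand-ins, not the study's materials.

```python
# Minimal sketch, assuming plain (unweighted) power: SNR in dB from two waveforms.
import numpy as np

def snr_db(signal: np.ndarray, noise: np.ndarray) -> float:
    """10 * log10 of the ratio of mean signal power to mean noise power."""
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    return 10.0 * np.log10(p_signal / p_noise)

# Illustrative stand-ins: a 220 Hz tone as "speech", a quieter 100 Hz tone as "noise".
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
speech = np.sin(2 * np.pi * 220 * t)
noise = 0.5 * np.sin(2 * np.pi * 100 * t)
print(round(snr_db(speech, noise), 1))  # amplitude ratio 2 -> power ratio 4 -> 6.0 dB
```

    Halving the noise amplitude again would add another ~6 dB, which is roughly the step between the study's two listening conditions.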

  17. Semantic Richness and Word Learning in Children with Hearing Loss Who Are Developing Spoken Language: A Single Case Design Study

    Science.gov (United States)

    Lund, Emily; Douglas, W. Michael; Schuele, C. Melanie

    2015-01-01

    Children with hearing loss who are developing spoken language tend to lag behind children with normal hearing in vocabulary knowledge. Thus, researchers must validate instructional practices that lead to improved vocabulary outcomes for children with hearing loss. The purpose of this study was to investigate how semantic richness of instruction…

  18. Spoken word recognition in young tone language learners: Age-dependent effects of segmental and suprasegmental variation.

    Science.gov (United States)

    Ma, Weiyi; Zhou, Peng; Singh, Leher; Gao, Liqun

    2017-02-01

    The majority of the world's languages rely on both segmental (vowels, consonants) and suprasegmental (lexical tones) information to contrast the meanings of individual words. However, research on early language development has mostly focused on the acquisition of vowel-consonant languages. Developmental research comparing sensitivity to segmental and suprasegmental features in young tone learners is extremely rare. This study examined 2- and 3-year-old monolingual tone learners' sensitivity to vowels and tones. Experiment 1a tested the influence of vowel and tone variation on novel word learning. Vowel and tone variation hindered word recognition efficiency in both age groups. However, tone variation hindered word recognition accuracy only in 2-year-olds, while 3-year-olds were insensitive to tone variation. Experiment 1b demonstrated that 3-year-olds could use tones to learn new words when additional support was provided, and additionally, that Tone 3 words were exceptionally difficult to learn. Experiment 2 confirmed a similar pattern of results when children were presented with familiar words. This study is the first to show that despite the importance of tones in tone languages, vowels maintain primacy over tones in young children's word recognition and that tone sensitivity in word learning and recognition changes between 2 and 3 years of age. The findings suggest that early lexical processes are more tightly constrained by variation in vowels than by tones. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. The Relationship between Intrinsic Couplings of the Visual Word Form Area with Spoken Language Network and Reading Ability in Children and Adults

    Directory of Open Access Journals (Sweden)

    Yu Li

    2017-06-01

    Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger causality analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and the language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability only in adults but not in children; (4) the RSFCs between the left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading.

  20. Unfolding Participation

    DEFF Research Database (Denmark)

    Saad-Sulonen, Joanna; Halskov, Kim; Eriksson, Eva

    2015-01-01

    The aim of the Unfolding Participation workshop is to outline an agenda for the next 10 years of participatory design (PD) and participatory human computer interaction (HCI) research. We will do that through a double strategy: 1) by critically interrogating the concept of participation (unfolding the concept itself), while at the same time, 2) reflecting on the way that participation unfolds across different participatory configurations. We invite researchers and practitioners from PD and HCI and fields in which information technology mediated participation is embedded (e.g. in political studies, urban planning, participatory arts, business, science and technology studies) to bring a plurality of perspectives and expertise related to participation.

  1. Spoken Lebanese.

    Science.gov (United States)

    Feghali, Maksoud N.

    This book teaches the Arabic Lebanese dialect through topics such as food, clothing, transportation, and leisure activities. It also provides background material on the Arab World in general and the region where Lebanese Arabic is spoken or understood--Lebanon, Syria, Jordan, Palestine--in particular. This language guide is based on the phonetic…

  2. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults

    Science.gov (United States)

    Bernstein, Lynne E.; Eberhardt, Silvio P.; Auer, Edward T.

    2014-01-01

    Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We

  4. EVALUATIVE LANGUAGE IN SPOKEN AND SIGNED STORIES TOLD BY A DEAF CHILD WITH A COCHLEAR IMPLANT: WORDS, SIGNS OR PARALINGUISTIC EXPRESSIONS?

    Directory of Open Access Journals (Sweden)

    Ritva Takkinen

    2011-01-01

    In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish sign language (FinSL) and spoken Finnish. He was born deaf but got a cochlear implant at the age of five. The data consist of a spoken and a signed version of “The Frog Story”. The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices: comments on a character and the character’s actions, as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.

  5. Fast mapping semantic features: performance of adults with normal language, history of disorders of spoken and written language, and attention deficit hyperactivity disorder on a word-learning task.

    Science.gov (United States)

    Alt, Mary; Gutmann, Michelle L

    2009-01-01

    This study was designed to test the word learning abilities of adults with typical language abilities, those with a history of disorders of spoken or written language (hDSWL), and hDSWL plus attention deficit hyperactivity disorder (+ADHD). Sixty-eight adults were required to associate a novel object with a novel label, and then recognize semantic features of the object and phonological features of the label. Participants were tested for overt ability (accuracy) and covert processing (reaction time). The +ADHD group was less accurate at mapping semantic features and slower to respond to lexical labels than both other groups. Different factors correlated with word learning performance for each group. Adults with language and attention deficits are more impaired at word learning than adults with language deficits only. Despite behavioral profiles like typical peers, adults with hDSWL may use different processing strategies than their peers. Readers will be able to: (1) recognize the influence of a dual disability (hDSWL and ADHD) on word learning outcomes; (2) identify factors that may contribute to word learning in adults in terms of (a) the nature of the words to be learned and (b) the language processing of the learner.

  6. Activation of words with phonological overlap

    Directory of Open Access Journals (Sweden)

    Claudia K. Friedrich

    2013-08-01

    Multiple lexical representations overlapping with the input (cohort neighbors) are temporarily activated in the listener’s mental lexicon when speech unfolds in time. Activation for cohort neighbors appears to rapidly decline as soon as there is mismatch with the input. However, it is a matter of debate whether or not they are completely excluded from further processing. We recorded behavioral data and event-related brain potentials (ERPs) in auditory-visual word onset priming during a lexical decision task. As primes we used the first two syllables of spoken German words. In a carrier word condition, the primes were extracted from spoken versions of the target words (ano- from ANORAK 'anorak'). In a cohort neighbor condition, the primes were taken from words that overlap with the target word up to the second nucleus (ana- taken from ANANAS 'pineapple'). Relative to a control condition, where primes and targets were unrelated, lexical decision responses for cohort neighbors were delayed. This reveals that cohort neighbors are disfavored by the decision processes at the behavioral front end. In contrast, left-anterior ERPs reflected long-lasting facilitated processing of cohort neighbors. We interpret these results as evidence for extended parallel processing of cohort neighbors. That is, in parallel with the preparation and elicitation of delayed lexical decision responses to cohort neighbors, aspects of the processing system appear to keep track of those less efficient candidates.

  7. Language Non-Selective Activation of Orthography during Spoken Word Processing in Hindi-English Sequential Bilinguals: An Eye Tracking Visual World Study

    Science.gov (United States)

    Mishra, Ramesh Kumar; Singh, Niharika

    2014-01-01

    Previous psycholinguistic studies have shown that bilinguals activate lexical items of both the languages during auditory and visual word processing. In this study we examined if Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non target language while listening to words either…

  8. Lexical mediation of phonotactic frequency effects on spoken word recognition: A Granger causality analysis of MRI-constrained MEG/EEG data.

    Science.gov (United States)

    Gow, David W; Olson, Bruna B

    2015-07-01

    Phonotactic frequency effects play a crucial role in a number of debates over language processing and representation. It is unclear, however, whether these effects reflect prelexical sensitivity to phonotactic frequency, or lexical "gang effects" in speech perception. In this paper, we use Granger causality analysis of MRI-constrained MEG/EEG data to understand how phonotactic frequency influences neural processing dynamics during auditory lexical decision. Effective connectivity analysis showed weaker feedforward influence from brain regions involved in acoustic-phonetic processing (superior temporal gyrus) to lexical areas (supramarginal gyrus) for high phonotactic frequency words, but stronger top-down lexical influence for the same items. Low entropy nonwords (nonwords judged to closely resemble real words) showed a similar pattern of interactions between brain regions involved in lexical and acoustic-phonetic processing. These results contradict the predictions of a feedforward model of phonotactic frequency facilitation, but support the predictions of a lexically mediated account.

  9. Automatic disambiguation of morphosyntax in spoken language corpora

    OpenAIRE

    Parisse , Christophe; Le Normand , Marie-Thérèse

    2000-01-01

    International audience; The use of computer tools has led to major advances in the study of spoken language corpora. One area that has shown particular progress is the study of child language development. Although it is now easy to lexically tag every word in a spoken language corpus, one still has to choose between numerous ambiguous forms, especially with languages such as French or English, where more than 70% of words are ambiguous. Computational linguistics can now provide a fully automa...

  10. The influence of spelling on phonological encoding in word reading, object naming, and word generation

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2006-01-01

    Does the spelling of a word mandatorily constrain spoken word production, or does it do so only when spelling is relevant for the production task at hand? Damian and Bowers (2003) reported spelling effects in spoken word production in English using a prompt–response word generation task. Preparation

  11. Audiovisual Spoken Word Training can Promote or Impede Auditory-only Perceptual Learning: Results from Prelingually Deafened Adults with Late-Acquired Cochlear Implants and Normal-Hearing Adults

    Directory of Open Access Journals (Sweden)

    Lynne E Bernstein

    2014-08-01

    Training with audiovisual (AV) speech can promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. Pre-/perilingually deafened adults rely on visual speech even when they also use a cochlear implant. This study investigated whether visual speech promotes auditory perceptual learning in these cochlear implant users. In Experiment 1, 28 prelingually deafened adults with late-acquired cochlear implants were assigned to learn paired associations between spoken disyllabic C(=consonant)V(=vowel)CVC nonsense words and nonsense pictures (fribbles), under AV and then under auditory-only (AO) (or counter-balanced AO then AV) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across AV and AO training, AO PA test scores improved, as did identification of consonants in untrained CVCVC stimuli. However, whenever PA training was carried out with AV stimuli, AO test scores were steeply reduced. Experiment 2 repeated the experiment with 43 normal-hearing adults. Their AO test scores did not drop following AV PA training and even increased relative to scores following AO training. Normal-hearing participants' consonant identification scores also improved, but with a pattern that contrasted with the cochlear implant users’: normal-hearing adults were most accurate for medial consonants, whereas cochlear implant users were most accurate for initial consonants. The results are interpreted within a multisensory reverse hierarchy theory, which predicts that perceptual tasks are carried out whenever possible based on immediate high-level perception without scrutiny of lower-level features. The theory implies that, based on their bias towards visual speech, cochlear implant participants learned the PAs with greater reliance on vision to the detriment of auditory perceptual learning. Normal-hearing participants' learning took advantage of the concurrence between auditory and visual

  12. Spoken Dialogue Systems

    CERN Document Server

    Jokinen, Kristiina

    2009-01-01

    Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides

  13. Word Frequencies in Written and Spoken English

    African Journals Online (AJOL)

    R.B. Ruthven

    extent of the emphasis on the acquisition of vocabulary in school curricula. After a brief introduction, the author looks in chapter 2 at major books which in the 20th century worked on a controlled vocabulary for foreign-language learners in Europe, Asia and America. This section provides the background for the elaboration of ...

  14. Effects of Rhyme and Spelling Patterns on Auditory Word ERPs Depend on Selective Attention to Phonology

    Science.gov (United States)

    Yoncheva, Yuliya N.; Maurer, Urs; Zevin, Jason D.; McCandliss, Bruce D.

    2013-01-01

    ERP responses to spoken words are sensitive to both rhyming effects and effects of associated spelling patterns. Are such effects automatically elicited by spoken words or dependent on selectively attending to phonology? To address this question, ERP responses to spoken word pairs were investigated under two equally demanding listening tasks that…

  15. Teaching Spoken Spanish

    Science.gov (United States)

    Lipski, John M.

    1976-01-01

    The need to teach students speaking skills in Spanish, and to choose among the many standard dialects spoken in the Hispanic world (as well as literary and colloquial speech), presents a challenge to the Spanish teacher. Some phonetic considerations helpful in solving these problems are offered. (CHK)

  16. Teaching the Spoken Language.

    Science.gov (United States)

    Brown, Gillian

    1981-01-01

    Issues involved in teaching and assessing communicative competence are identified and applied to adolescent native English speakers with low levels of academic achievement. A distinction is drawn between transactional versus interactional speech, short versus long speaking turns, and spoken language influenced or not influenced by written…

  17. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  18. What does že jo (and že ne) mean in spoken dialogue

    Czech Academy of Sciences Publication Activity Database

    Komrsková, Zuzana

    2017-01-01

    Roč. 68, č. 2 (2017), s. 229-237 ISSN 0021-5597 R&D Projects: GA ČR GA15-01116S Institutional support: RVO:68378092 Keywords: spoken language * spoken corpus * tag question * response word Subject RIV: AI - Linguistics OBOR OECD: Linguistics http://www.juls.savba.sk/ediela/jc/2017/2/jc17-02.pdf

  19. NMR of unfolded proteins

    Indian Academy of Sciences (India)

    Unknown

    2005-01-03

    Jan 3, 2005 ... 'out' response to environmental changes with structural complexity ... of 3D structure at atomic resolution of folded proteins ... 5.14 HIV-1 protease. NMR identification of local structural preferences in HIV-1 protease in the 'unfolded state' at 6 M guanidine hydrochloride has been reported. Analyses.

  20. Phantom Word Activation in L2

    Science.gov (United States)

    Broersma, Mirjam; Cutler, Anne

    2008-01-01

    L2 listening can involve the phantom activation of words which are not actually in the input. All spoken-word recognition involves multiple concurrent activation of word candidates, with selection of the correct words achieved by a process of competition between them. L2 listening involves more such activation than L1 listening, and we report two…

  1. Code-switched English pronunciation modeling for Swahili spoken term detection

    CSIR Research Space (South Africa)

    Kleynhans, N

    2016-05-01

    We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching, where speakers switch language in a conversation, occurs frequently in multilingual environments, and typically...

  2. THE RECOGNITION OF SPOKEN MONO-MORPHEMIC COMPOUNDS IN CHINESE

    Directory of Open Access Journals (Sweden)

    Yu-da Lai

    2012-12-01

    This paper explores the auditory lexical access of mono-morphemic compounds in Chinese as a way of understanding the role of orthography in the recognition of spoken words. In traditional Chinese linguistics, a compound is a word written with two or more characters, whether or not they are morphemic. A mono-morphemic compound may either be a binding word, written with characters that only appear in this one word, or a non-binding word, written with characters that are chosen for their pronunciation but that also appear in other words. Our goal was to determine whether this purely orthographic difference affects auditory lexical access by conducting a series of four experiments with materials matched by whole-word frequency, syllable frequency, cross-syllable predictability, cohort size, and acoustic duration, but differing in binding. An auditory lexical decision task (LDT) found an orthographic effect: binding words were recognized more quickly than non-binding words. However, this effect disappeared in an auditory repetition task and in a visual LDT with the same materials, implying that the orthographic effect during auditory lexical access was localized to the decision component and involved the influence of cross-character predictability without the activation of orthographic representations. This claim was further confirmed by overall faster recognition of spoken binding words in a cross-modal LDT with different types of visual interference. The theoretical and practical consequences of these findings are discussed.

  3. SPOKEN CORPORA: RATIONALE AND APPLICATION

    Directory of Open Access Journals (Sweden)

    John Newman

    2008-12-01

    Despite the abundance of electronic corpora now available to researchers, corpora of natural speech are still relatively rare and relatively costly. This paper suggests reasons why spoken corpora are needed, despite the formidable problems of construction. The multiple purposes of such corpora and the involvement of very different kinds of language communities in such projects mean that there is no one single blueprint for the design, markup, and distribution of spoken corpora. A number of different spoken corpora are reviewed to illustrate a range of possibilities for the construction of spoken corpora.

  4. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection (Pub Version, Open Access)

    Science.gov (United States)

    2016-05-03

    SLTU 2016, 9-12 May 2016, Yogyakarta, Indonesia; Procedia Computer Science 81 (2016) 128-135. We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching... Our research focuses on pronunciation modeling of English (embedded language) words within

  5. Native and Nonnative Use of Multi-Word vs. One-Word Verbs

    Science.gov (United States)

    Siyanova, Anna; Schmitt, Norbert

    2007-01-01

    One of the choices available in English is between one-word verbs (train at the gym) and their multi-word counterparts (work out at the gym). Multi-word verbs tend to be colloquial in tone and are a particular feature of informal spoken discourse. Previous research suggests that English learners often have problems with multi-word verbs, and may…

  6. Word Order Acquisition in Persian Speaking Children

    Directory of Open Access Journals (Sweden)

    Nahid Jalilevand

    2017-06-01

    Discussion: Despite the fact that spoken Persian has no strict word order, Persian-speaking children tend to use other logically possible orders of subject (S), verb (V), and object (O) less often than the SOV structure.

  7. Segmentation of Written Words in French

    Science.gov (United States)

    Chetail, Fabienne; Content, Alain

    2013-01-01

    Syllabification of spoken words has been largely used to define syllabic properties of written words, such as the number of syllables or syllabic boundaries. By contrast, some authors proposed that the functional structure of written words stems from visuo-orthographic features rather than from the transposition of phonological structure into the…

  8. Utility of spoken dialog systems

    CSIR Research Space (South Africa)

    Barnard, E

    2008-12-01

    Full Text Available The commercial successes of spoken dialog systems in the developed world provide encouragement for their use in the developing world, where speech could play a role in the dissemination of relevant information in local languages. We investigate...

  9. Unfolding Green Defense

    DEFF Research Database (Denmark)

    Larsen, Kristian Knus

    2015-01-01

    In recent years, many states have developed and implemented green solutions for defense. Building on these initiatives, NATO formulated the NATO Green Defence Framework in 2014. The framework provides a broad basis for cooperation within the Alliance on green solutions for defense. This report aims to inform and support the further development of green solutions by unfolding how green technologies and green strategies have been developed and used to handle current security challenges. The report initially focuses on the security challenges that are being linked to green defense, namely fuel consumption in military operations, defense expenditure, energy security, and global climate change. The report then proceeds to introduce the NATO Green Defence Framework before exploring specific current uses of green technologies and green strategies for defense. The report concludes that a number...

  10. Verification of unfold error estimates in the unfold operator code

    International Nuclear Information System (INIS)

    Fehl, D.L.; Biggs, F.

    1997-01-01

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. copyright 1997 American Institute of Physics
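The Monte Carlo check described in the abstract can be sketched in a few lines: perturb simulated data with Gaussian deviates, unfold each data set, and compare the sample spread against the error propagated through the inverse response (the error-matrix estimate). The toy 2x2 response matrix and all numbers below are invented for illustration; this is not the UFO code.

```python
import math
import random

# Hypothetical toy problem: 2 response functions, 2 spectrum bins.
R = [[0.8, 0.3],   # response matrix: detector channel x spectrum bin
     [0.2, 0.7]]
s_true = [5.0, 3.0]                      # assumed "true" spectrum
d_true = [sum(R[i][j] * s_true[j] for j in range(2)) for i in range(2)]
sigma = [0.05 * d for d in d_true]       # 5% (standard deviation) imprecision

def invert2(M):
    # closed-form inverse of a 2x2 matrix
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

Rinv = invert2(R)

# Built-in style estimate: propagate the data variances through the inverse.
var_analytic = [sum((Rinv[j][i] * sigma[i]) ** 2 for i in range(2))
                for j in range(2)]

# Monte Carlo estimate: unfold many noisy data sets, take the sample spread.
random.seed(0)
trials = [[sum(Rinv[j][i] * (d_true[i] + random.gauss(0, sigma[i]))
               for i in range(2)) for j in range(2)] for _ in range(10000)]
var_mc = [sum((t[j] - s_true[j]) ** 2 for t in trials) / len(trials)
          for j in range(2)]

for j in range(2):
    print(f"bin {j}: analytic std {math.sqrt(var_analytic[j]):.3f}, "
          f"MC std {math.sqrt(var_mc[j]):.3f}")
```

For a linear, fully determined unfold the two estimates agree to within Monte Carlo resolution, which is the comparison the abstract reports.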

  11. SPOKEN BAHASA INDONESIA BY GERMAN STUDENTS

    Directory of Open Access Journals (Sweden)

    I Nengah Sudipa

    2014-11-01

    Full Text Available This article investigates the spoken ability of German students using Bahasa Indonesia (BI). They studied it for six weeks in the IBSN Program at Udayana University, Bali, Indonesia. The data was collected when the students sat for the mid-term oral test and was further analyzed with reference to the standard usage of BI. The result suggests that most students managed to express several concepts related to (1) LOCATION; (2) TIME; (3) TRANSPORT; (4) PURPOSE; (5) TRANSACTION; (6) IMPRESSION; (7) REASON; (8) FOOD AND BEVERAGE; and (9) NUMBER AND PERSON. The only problem a few students might encounter is interference from their own language system, especially in word order.

  12. Towards Adaptive Spoken Dialog Systems

    CERN Document Server

    Schmitt, Alexander

    2013-01-01

    In Monitoring Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for the recognition of negative dialog patterns in Spoken Dialog Systems (SDS). The presented stochastic methods allow flexible, portable and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develops novel approaches to speech-based emotion recognition using a hybrid approach to model emotions. The authors make use of statistical methods based on acoustic, linguistic and contextual features to examine the relationship between the interaction flow and the occurrence of emotions, using non-acted recordings of several thousand real users from commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m...

  13. Mechanical Protein Unfolding and Degradation.

    Science.gov (United States)

    Olivares, Adrian O; Baker, Tania A; Sauer, Robert T

    2018-02-10

    AAA+ proteolytic machines use energy from ATP hydrolysis to degrade damaged, misfolded, or unneeded proteins. Protein degradation occurs within a barrel-shaped self-compartmentalized peptidase. Before protein substrates can enter this peptidase, they must be unfolded and then translocated through the axial pore of an AAA+ ring hexamer. An unstructured region of the protein substrate is initially engaged in the axial pore, and conformational changes in the ring, powered by ATP hydrolysis, generate a mechanical force that pulls on and denatures the substrate. The same conformational changes in the hexameric ring then mediate mechanical translocation of the unfolded polypeptide into the peptidase chamber. For the bacterial ClpXP and ClpAP AAA+ proteases, the mechanical activities of protein unfolding and translocation have been directly visualized by single-molecule optical trapping. These studies in combination with structural and biochemical experiments illuminate many principles that underlie this universal mechanism of ATP-fueled protein unfolding and subsequent destruction.

  14. The Study of Synonymous Word "Mistake"

    OpenAIRE

    Suwardi, Albertus

    2016-01-01

    This article discusses the synonymous word "mistake". The discussion will also cover the meaning of "word" itself. Words can be considered as forms, whether spoken or written, or alternatively as composite expressions, which combine form and meaning. Synonyms are different phonological words which have the same or very similar meanings. The synonyms of mistake are error, fault, blunder, slip, slip-up, gaffe and inaccuracy. The data is taken from a computer program. The procedure of data collection is...

  15. Prosodic Parallelism – comparing spoken and written language

    Directory of Open Access Journals (Sweden)

    Richard Wiese

    2016-10-01

    Full Text Available The Prosodic Parallelism hypothesis claims that adjacent prosodic categories prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies that feet contained in the same phonological phrase display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested by conducting a corpus study which used large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question of whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.

  16. Recognizing Young Readers' Spoken Questions

    Science.gov (United States)

    Chen, Wei; Mostow, Jack; Aist, Gregory

    2013-01-01

    Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…

  17. Correlative Conjunctions in Spoken Texts

    Czech Academy of Sciences Publication Activity Database

    Poukarová, Petra

    2017-01-01

    Roč. 68, č. 2 (2017), s. 305-315 ISSN 0021-5597 R&D Projects: GA ČR GA15-01116S Institutional support: RVO:68378092 Keywords : correlative conjunctions * spoken Czech * cohesion Subject RIV: AI - Linguistics OBOR OECD: Linguistics http://www.juls.savba.sk/ediela/jc/2017/2/jc17-02.pdf

  18. Selective attention to phonology dynamically modulates initial encoding of auditory words within the left hemisphere.

    Science.gov (United States)

    Yoncheva, Yuliya; Maurer, Urs; Zevin, Jason D; McCandliss, Bruce D

    2014-08-15

    Selective attention to phonology, i.e., the ability to attend to sub-syllabic units within spoken words, is a critical precursor to literacy acquisition. Recent functional magnetic resonance imaging evidence has demonstrated that a left-lateralized network of frontal, temporal, and posterior language regions, including the visual word form area, supports this skill. The current event-related potential (ERP) study investigated the temporal dynamics of selective attention to phonology during spoken word perception. We tested the hypothesis that selective attention to phonology dynamically modulates stimulus encoding by recruiting left-lateralized processes specifically while the information critical for performance is unfolding. Selective attention to phonology was captured by manipulating listening goals: skilled adult readers attended to either rhyme or melody within auditory stimulus pairs. Each pair superimposed rhyming and melodic information ensuring identical sensory stimulation. Selective attention to phonology produced distinct early and late topographic ERP effects during stimulus encoding. Data-driven source localization analyses revealed that selective attention to phonology led to significantly greater recruitment of left-lateralized posterior and extensive temporal regions, which was notably concurrent with the rhyme-relevant information within the word. Furthermore, selective attention effects were specific to auditory stimulus encoding and not observed in response to cues, arguing against the notion that they reflect sustained task setting. Collectively, these results demonstrate that selective attention to phonology dynamically engages a left-lateralized network during the critical time-period of perception for achieving phonological analysis goals. These findings suggest a key role for selective attention in on-line phonological computations. Furthermore, these findings motivate future research on the role that neural mechanisms of attention may

  19. Age of acquisition and word frequency in written picture naming.

    Science.gov (United States)

    Bonin, P; Fayol, M; Chalard, M

    2001-05-01

    This study investigates age of acquisition (AoA) and word frequency effects in both spoken and written picture naming. In the first two experiments, reliable AoA effects on object naming speed, with objective word frequency controlled for, were found in both spoken (Experiment 1) and written picture naming (Experiment 2). In contrast, no reliable objective word frequency effects were observed on naming speed, with AoA controlled for, in either spoken (Experiment 3) or written (Experiment 4) picture naming. The implications of the findings for written picture naming are briefly discussed.

  20. Unfolding Visual Lexical Decision in Time

    Science.gov (United States)

    Barca, Laura; Pezzulo, Giovanni

    2012-01-01

    Visual lexical decision is a classical paradigm in psycholinguistics, and numerous studies have assessed the so-called "lexicality effect" (i.e., better performance with lexical than non-lexical stimuli). Far less is known about the dynamics of choice, because many studies measured overall reaction times, which are not informative about underlying processes. To unfold visual lexical decision in (over) time, we measured participants' hand movements toward one of two item alternatives by recording the streaming x,y coordinates of the computer mouse. Participants categorized four kinds of stimuli as "lexical" or "non-lexical": high and low frequency words, pseudowords, and letter strings. Spatial attraction toward the opposite category was present for low frequency words and pseudowords. Increasing the ambiguity of the stimuli led to greater movement complexity and trajectory attraction to competitors, whereas no such effect was present for high frequency words and letter strings. Results fit well with dynamic models of perceptual decision-making, which describe the process as a competition between alternatives guided by the continuous accumulation of evidence. More broadly, our results point to a key role of statistical decision theory in studying linguistic processing in terms of dynamic and non-modular mechanisms. PMID:22563419
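The trajectory-attraction idea above is typically quantified with a simple geometric measure. A minimal sketch, computing the maximum perpendicular deviation of a mouse trajectory from the straight line joining its start and end points (the sample trajectory is invented for illustration):

```python
import math

def max_deviation(traj):
    # maximum perpendicular distance of any (x, y) sample from the straight
    # line between the first and last points of the trajectory
    (x0, y0), (x1, y1) = traj[0], traj[-1]
    norm = math.hypot(x1 - x0, y1 - y0)
    return max(abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / norm
               for x, y in traj)

# a trajectory that bulges 3 units toward the competitor before settling
print(max_deviation([(0, 0), (3, 5), (0, 10)]))  # -> 3.0
```

Larger deviations indicate stronger spatial attraction toward the competing response alternative.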

  1. Automatic disambiguation of morphosyntax in spoken language corpora.

    Science.gov (United States)

    Parisse, C; Le Normand, M T

    2000-08-01

    The use of computer tools has led to major advances in the study of spoken language corpora. One area that has shown particular progress is the study of child language development. Although it is now easy to lexically tag every word in a spoken language corpus, one still has to choose between numerous ambiguous forms, especially with languages such as French or English, where more than 70% of words are ambiguous. Computational linguistics can now provide a fully automatic disambiguation of lexical tags. The tool presented here (POST) can tag and disambiguate a large text in a few seconds. This tool complements systems dealing with language transcription and suggests further theoretical developments in the assessment of the status of morphosyntax in spoken language corpora. The program currently works for French and English, but it can be easily adapted for use with other languages. The analysis and computation of a corpus produced by normal French children 2-4 years of age, as well as of a sample corpus produced by French SLI children, are given as examples.

  2. Word order in Russian Sign Language

    NARCIS (Netherlands)

    Kimmelman, V.

    2012-01-01

    The article discusses word order, the syntactic arrangement of words in a sentence, clause, or phrase as one of the most crucial aspects of grammar of any spoken language. It aims to investigate the order of the primary constituents which can either be subject, object, or verb of a simple

  3. Adaptation to Pronunciation Variations in Indonesian Spoken Query-Based Information Retrieval

    Science.gov (United States)

    Lestari, Dessi Puji; Furui, Sadaoki

    Recognition errors of proper nouns and foreign words significantly decrease the performance of ASR-based speech applications such as voice dialing systems, speech summarization, spoken document retrieval, and spoken query-based information retrieval (IR). The reason is that proper nouns and words that come from other languages are usually the most important key words. The loss of such words due to misrecognition in turn leads to a loss of significant information from the speech source. This paper focuses on how to improve the performance of Indonesian ASR by alleviating the problem of pronunciation variation of proper nouns and foreign words (English words in particular). To improve the proper noun recognition accuracy, proper-noun specific acoustic models are created by supervised adaptation using maximum likelihood linear regression (MLLR). To improve English word recognition, the pronunciation of English words contained in the lexicon is fixed by using rule-based English-to-Indonesian phoneme mapping. The effectiveness of the proposed method was confirmed through spoken query based Indonesian IR. We used Inference Network-based (IN-based) IR and compared its results with those of the classical Vector Space Model (VSM) IR, both using a tf-idf weighting schema. Experimental results show that IN-based IR outperforms VSM IR.
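The rule-based phoneme mapping described above lends itself to a short sketch. The rule table below is invented for illustration and is not the paper's actual English-to-Indonesian mapping:

```python
# Hypothetical mapping rules: each embedded-language (English) phoneme absent
# from the matrix language (Indonesian) is replaced by a close native phoneme.
RULES = {"TH": "T", "DH": "D", "V": "F", "Z": "S", "SH": "S"}

def adapt_pronunciation(phones):
    # keep phonemes shared by both languages, rewrite the rest by rule
    return [RULES.get(p, p) for p in phones]

print(adapt_pronunciation(["DH", "AH", "V", "IY"]))  # -> ['D', 'AH', 'F', 'IY']
```

Applying such rules to the English entries of the lexicon keeps the recognizer's phone set purely Indonesian while approximating the foreign pronunciations.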

  4. Spoken Document Retrieval Based on Confusion Network with Syllable Fragments

    Directory of Open Access Journals (Sweden)

    Zhang Lei

    2012-11-01

    Full Text Available This paper addresses the problem of spoken document retrieval under noisy conditions by sound selection of both the basic unit and the output form of a speech recognition system. The syllable fragment is combined with a confusion network in a spoken document retrieval task. After selecting an appropriate syllable fragment, a lattice is converted into a confusion network, which minimizes the word error rate instead of maximizing the whole-sentence recognition rate. A vector space model is adopted in the retrieval task, where tf-idf weights are derived from the posterior probabilities. The confusion network with syllable fragments improves the mean average precision (MAP) score by 0.342 and 0.066 over the one-best scheme and the lattice, respectively.
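Deriving tf-idf weights from confusion-network posteriors, as the abstract describes, amounts to replacing hard 1-best term counts with expected counts. A minimal sketch with invented documents, fragments, and probabilities:

```python
import math

# Hypothetical confusion networks: each spoken document is a list of slots, and
# each slot maps a syllable-fragment hypothesis to its posterior probability.
docs = {
    "doc1": [{"ba": 0.9, "pa": 0.1}, {"ka": 0.6, "ga": 0.4}],
    "doc2": [{"ba": 0.2, "ma": 0.8}, {"ta": 1.0}],
    "doc3": [{"ta": 0.7, "da": 0.3}],
}

def expected_tf(cn, term):
    # soft term frequency: sum of posteriors over slots instead of 1-best counts
    return sum(slot.get(term, 0.0) for slot in cn)

def tf_idf(term):
    df = sum(1 for cn in docs.values() if expected_tf(cn, term) > 0)
    idf = math.log(len(docs) / df) if df else 0.0
    return {name: expected_tf(cn, term) * idf for name, cn in docs.items()}

scores = tf_idf("ba")
print(scores)
```

Documents where the recognizer was more confident about the query fragment score higher, even when the 1-best hypothesis missed it.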

  5. Iterative nonlinear unfolding code: TWOGO

    International Nuclear Information System (INIS)

    Hajnal, F.

    1981-03-01

    A new iterative unfolding code, TWOGO, was developed to analyze Bonner sphere neutron measurements. The code includes two different unfolding schemes which alternate on successive iterations. The iterative process can be terminated either when the ratio of the coefficients of variation in terms of the measured and calculated responses is unity, or when the percentage difference between the measured and evaluated sphere responses is less than the average measurement error. The code was extensively tested with various known spectra and real multisphere neutron measurements which were performed inside the containments of pressurized water reactors.
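The percentage-difference stopping rule above can be illustrated with one classic iterative scheme (a SAND-II-style multiplicative update); the response matrix, data, and tolerance below are invented and this is not the TWOGO code itself:

```python
# Hypothetical Bonner-sphere setup: 3 sphere responses x 3 energy bins.
R = [[0.9, 0.1, 0.0],
     [0.3, 0.6, 0.1],
     [0.0, 0.2, 0.8]]
measured = [4.0, 5.0, 3.0]     # measured sphere responses
avg_error_pct = 1.0            # assumed average measurement error (percent)

spectrum = [1.0, 1.0, 1.0]     # flat starting spectrum
for it in range(1000):
    calc = [sum(R[i][j] * spectrum[j] for j in range(3)) for i in range(3)]
    # stop when the mean measured-vs-calculated percentage difference is
    # below the average measurement error
    diff = 100 * sum(abs(m - c) / m for m, c in zip(measured, calc)) / 3
    if diff < avg_error_pct:
        break
    # multiplicative update: scale each bin by a response-weighted data ratio
    for j in range(3):
        w = sum(R[i][j] for i in range(3))
        spectrum[j] *= sum(R[i][j] * measured[i] / calc[i]
                           for i in range(3)) / w

print(it, [round(s, 2) for s in spectrum])
```

Because the update is multiplicative, a non-negative starting spectrum stays non-negative, which is one reason ratio schemes are popular for neutron unfolding.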

  6. Semantic Access to Embedded Words? Electrophysiological and Behavioral Evidence from Spanish and English

    Science.gov (United States)

    Macizo, Pedro; Van Petten, Cyma; O'Rourke, Polly L.

    2012-01-01

    Many multisyllabic words contain shorter words that are not semantic units, like the CAP in HANDICAP and the DURA ("hard") in VERDURA ("vegetable"). The spaces between printed words identify word boundaries, but spurious identification of these embedded words is a potentially greater challenge for spoken language comprehension, a challenge that is…

  7. Early Gesture Provides a Helping Hand to Spoken Vocabulary Development for Children with Autism, Down Syndrome, and Typical Development

    Science.gov (United States)

    Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Baumann, Stephanie

    2017-01-01

    Typically developing (TD) children refer to objects uniquely in gesture (e.g., point at a cat) before they produce verbal labels for these objects ("cat"). The onset of such gestures predicts the onset of similar spoken words, showing a strong positive relation between early gestures and early words. We asked whether gesture plays the…

  8. Does Hearing Several Speakers Reduce Foreign Word Learning?

    Science.gov (United States)

    Ludington, Jason Darryl

    2016-01-01

    Learning spoken word forms is a vital part of second language learning, and CALL lends itself well to this training. Not enough is known, however, about how auditory variation across speech tokens may affect receptive word learning. To find out, 144 Thai university students with no knowledge of the Patani Malay language learned 24 foreign words in…

  9. Interpreting Chicken-Scratch: Lexical Access for Handwritten Words

    Science.gov (United States)

    Barnhart, Anthony S.; Goldinger, Stephen D.

    2010-01-01

    Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word…

  10. Why not model spoken word recognition instead of phoneme monitoring?

    NARCIS (Netherlands)

    Vroomen, J.; de Gelder, B.

    2000-01-01

    Norris, McQueen & Cutler present a detailed account of the decision stage of the phoneme monitoring task. However, we question whether this contributes to our understanding of the speech recognition process itself, and we fail to see why phonotactic knowledge is playing a role in phoneme

  11. Robust Audio Indexing for Dutch Spoken-word Collections

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; de Jong, Franciska M.G.; Huijbregts, M.A.H.; van Leeuwen, David

    2005-01-01

    Abstract—Whereas the growth of storage capacity is in accordance with widely acknowledged predictions, the possibilities to index and access the archives created is lagging behind. This is especially the case in the oral history domain and much of the rich content in these collections runs the risk

  12. The Self-Organization of a Spoken Word

    Directory of Open Access Journals (Sweden)

    John G. eHolden

    2012-07-01

    Full Text Available Pronunciation time probability density and hazard functions from large speeded word-naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics—interaction dominant dynamics. Lognormal and inverse power-law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power-law distributions offered better descriptions of the participants' distributions than the ex-Gaussian or ex-Wald, alternatives corresponding to additive, superposed, component processes. The evidence for interaction dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions.

  13. Many a true word spoken in jest : Visuele voorstellingspraktyke in ...

    African Journals Online (AJOL)

    Although humour conceals the cultural exclusion in the data set, the cultural codes in the visual material generalise the non-Western 'other' as either extremely religious or as fundamentally different. Key terms: hegemony, Flemish language textbook, Critical Discourse, Analysis, focus group discussion, representational ...

  14. Introducing Spoken Dialogue Systems into Intelligent Environments

    CERN Document Server

    Heinroth, Tobias

    2013-01-01

    Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager (ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive ...

  15. Tracking the Time Course of Word-Frequency Effects in Auditory Word Recognition with Event-Related Potentials

    Science.gov (United States)

    Dufour, Sophie; Brunelliere, Angele; Frauenfelder, Ulrich H.

    2013-01-01

    Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to…

  16. When two newly-acquired words are one: New words differing in stress alone are not automatically represented differently

    NARCIS (Netherlands)

    Sulpizio, S.; McQueen, J.M.

    2011-01-01

    Do listeners use lexical stress at an early stage in word learning? Artificial-lexicon studies have shown that listeners can learn new spoken words easily. These studies used non-words differing in consonants and/or vowels, but not differing only in stress. If listeners use stress information in

  17. Voice congruency facilitates word recognition.

    Science.gov (United States)

    Campeanu, Sandra; Craik, Fergus I M; Alain, Claude

    2013-01-01

    Behavioral studies of spoken word memory have shown that context congruency facilitates both word and source recognition, though the level at which context exerts its influence remains equivocal. We measured event-related potentials (ERPs) while participants performed both types of recognition task with words spoken in four voices. Two voice parameters (i.e., gender and accent) varied between speakers, with the possibility that none, one or two of these parameters was congruent between study and test. Results indicated that reinstating the study voice at test facilitated both word and source recognition, compared to similar or no context congruency at test. Behavioral effects were paralleled by two ERP modulations. First, in the word recognition test, the left parietal old/new effect showed a positive deflection reflective of context congruency between study and test words. Namely, the same speaker condition provided the most positive deflection of all correctly identified old words. In the source recognition test, a right frontal positivity was found for the same speaker condition compared to the different speaker conditions, regardless of response success. Taken together, the results of this study suggest that the benefit of context congruency is reflected behaviorally and in ERP modulations traditionally associated with recognition memory.

  18. Voice congruency facilitates word recognition.

    Directory of Open Access Journals (Sweden)

    Sandra Campeanu

    Full Text Available Behavioral studies of spoken word memory have shown that context congruency facilitates both word and source recognition, though the level at which context exerts its influence remains equivocal. We measured event-related potentials (ERPs) while participants performed both types of recognition task with words spoken in four voices. Two voice parameters (i.e., gender and accent) varied between speakers, with the possibility that none, one or two of these parameters was congruent between study and test. Results indicated that reinstating the study voice at test facilitated both word and source recognition, compared to similar or no context congruency at test. Behavioral effects were paralleled by two ERP modulations. First, in the word recognition test, the left parietal old/new effect showed a positive deflection reflective of context congruency between study and test words. Namely, the same speaker condition provided the most positive deflection of all correctly identified old words. In the source recognition test, a right frontal positivity was found for the same speaker condition compared to the different speaker conditions, regardless of response success. Taken together, the results of this study suggest that the benefit of context congruency is reflected behaviorally and in ERP modulations traditionally associated with recognition memory.

  19. Unfolding and unfoldability of digital pulses in the z-domain

    Science.gov (United States)

    Regadío, Alberto; Sánchez-Prieto, Sebastián

    2018-04-01

    The unfolding (or deconvolution) technique is used in the development of digital pulse processing systems applied to particle detection. This technique is applied to digital signals obtained by digitization of analog signals that represent the combined response of the particle detectors and the associated signal conditioning electronics. This work describes a technique to determine whether a signal is unfoldable. For unfoldable signals, the characteristics of the unfolding system (unfolder) are presented. Finally, examples of the method applied to a real experimental setup are discussed.
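A minimal z-domain example of the idea above, under assumed notation: an exponential detector pulse h[n] = a**n has transfer function H(z) = 1/(1 - a*z**-1), so the unfolder is the two-tap FIR inverse x[n] = y[n] - a*y[n-1]. The signal is unfoldable here because the inverse is stable (|a| < 1); the pulse shape and input are invented for illustration:

```python
a = 0.8
x = [0.0, 1.0, 0.0, 0.0, 0.5, 0.0]   # true (impulse-like) detector input

# Fold: convolve x with the exponential pulse tail h[n] = a**n, n >= 0.
y = [sum(x[k] * a ** (n - k) for k in range(n + 1)) for n in range(len(x))]

# Unfold: apply the FIR inverse filter x[n] = y[n] - a*y[n-1].
x_rec = [y[0]] + [y[n] - a * y[n - 1] for n in range(1, len(y))]

print([round(v, 6) for v in x_rec])  # -> [0.0, 1.0, 0.0, 0.0, 0.5, 0.0]
```

The recovered sequence matches the input exactly; when H(z) has zeros outside the unit circle the inverse filter is unstable and direct unfolding of this kind fails.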

  20. Social interaction facilitates word learning in preverbal infants: Word-object mapping and word segmentation.

    Science.gov (United States)

    Hakuno, Yoko; Omori, Takahide; Yamamoto, Jun-Ichi; Minagawa, Yasuyo

    2017-08-01

    In natural settings, infants learn spoken language with the aid of a caregiver who explicitly provides social signals. Although previous studies have demonstrated that young infants are sensitive to these signals that facilitate language development, the impact of real-life interactions on early word segmentation and word-object mapping remains elusive. We tested whether infants aged 5-6 months and 9-10 months could segment a word from continuous speech and acquire a word-object relation in an ecologically valid setting. In Experiment 1, infants were exposed to a live tutor, while in Experiment 2, another group of infants were exposed to a televised tutor. Results indicate that both younger and older infants were capable of segmenting a word and learning a word-object association only when the stimuli were derived from a live tutor in a natural manner, suggesting that real-life interaction enhances the learning of spoken words in preverbal infants. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. How Do Raters Judge Spoken Vocabulary?

    Science.gov (United States)

    Li, Hui

    2016-01-01

    The aim of the study was to investigate how raters come to their decisions when judging spoken vocabulary. Segmental rating was introduced to quantify raters' decision-making process. It is hoped that this simulated study brings fresh insight to future methodological considerations with spoken data. Twenty trainee raters assessed five Chinese…

  2. Some words on Word

    NARCIS (Netherlands)

    Janssen, Maarten; Visser, A.

    In many disciplines, the notion of a word is of central importance. For instance, morphology studies le mot comme tel, pris isolément ("the word as such, taken in isolation"; Mel'čuk, 1993 [74]). In the philosophy of language the word was often considered to be the primary bearer of meaning. Lexicography has as its fundamental role

  3. BUMS--Bonner sphere Unfolding Made Simple: an HTML-based multisphere neutron spectrometer unfolding package

    CERN Document Server

    Sweezy, J; Veinot, K

    2002-01-01

    A new multisphere neutron spectrometer unfolding package, Bonner sphere Unfolding Made Simple (BUMS), has been developed that uses an HTML interface to simplify data input and code execution for both novice and advanced users. The package combines the unfolding algorithms contained in other popular unfolding codes under one easy-to-use interface, using web-browsing software to provide a graphical front end to the algorithms. BUMS integrates the SPUNIT, BON, MAXIET, and SAND-II unfolding algorithms into a single package, and also includes a library of 14 response matrices, 58 starting spectra, and 24 dose and detector responses. BUMS has several improvements beyond the addition of unfolding algorithms: it can search for the most appropriate starting spectrum, and plots of the unfolded neutron spectra are generated automatically. The BUMS package runs via a web server and may be accessed by any computer with access to the Internet at h...
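As a rough sketch of what such unfolding algorithms do, the following toy example applies a generic multiplicative iterative update (in the spirit of, but not identical to, the SPUNIT/SAND-II algorithms named above) to an invented 4-sphere, 12-group response matrix; all sizes and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy response matrix: 4 "spheres" x 12 energy groups (all entries positive).
R = rng.uniform(0.1, 1.0, size=(4, 12))
phi_true = rng.uniform(0.5, 2.0, size=12)
counts = R @ phi_true                    # measured sphere readings

# Multiplicative iterative unfolding from a flat starting spectrum:
# each group is corrected by the response-weighted measured/predicted ratio.
phi = np.ones(12)
for _ in range(5000):
    ratio = counts / (R @ phi)           # measured / predicted, per sphere
    phi *= (R.T @ ratio) / R.sum(axis=0) # weighted correction, per group

assert np.allclose(R @ phi, counts, rtol=1e-2)  # unfolded spectrum reproduces the readings
```

Because the system is underdetermined (4 readings, 12 groups), the result depends on the starting spectrum, which is why BUMS's library of starting spectra and its search feature matter.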

  4. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    Science.gov (United States)

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Phonological and Semantic Knowledge Are Causal Influences on Learning to Read Words in Chinese

    Science.gov (United States)

    Zhou, Lulin; Duff, Fiona J.; Hulme, Charles

    2015-01-01

    We report a training study that assesses whether teaching the pronunciation and meaning of spoken words improves Chinese children's subsequent attempts to learn to read the words. Teaching the pronunciations of words helps children to learn to read those same words, and teaching the pronunciations and meanings improves learning still further.…

  6. Locus of Word Frequency Effects in Spelling to Dictation: Still at the Orthographic Level!

    Science.gov (United States)

    Bonin, Patrick; Laroche, Betty; Perret, Cyril

    2016-01-01

    The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied on objective word frequency and on phonological…

  7. Analyzing Forced Unfolding of Protein Tandems by Ordered Variates, 2: Dependent Unfolding Times

    Science.gov (United States)

    Bura, E.; Klimov, D. K.; Barsegov, V.

    2008-01-01

    Statistical analyses of forced unfolding data for protein tandems, i.e., unfolding forces (force-ramp) and unfolding times (force-clamp), used in single-molecule dynamic force spectroscopy rely on the assumption that the unfolding transitions of individual protein domains are independent (uncorrelated) and characterized, respectively, by identically distributed unfolding forces and unfolding times. In our previous work, we showed that in the experimentally accessible piconewton force range, this assumption, which holds at a lower constant force, may break at an elevated force level, i.e., the unfolding transitions may become correlated when force is increased. In this work, we develop much needed statistical tests for assessing the independence of the unobserved forced unfolding times for individual protein domains in the tandem and equality of their parent distributions, which are based solely on the observed ordered unfolding times. The use and performance of these tests are illustrated through the analysis of unfolding times for computer models of protein tandems. The proposed tests can be used in force-clamp atomic force microscopy experiments to obtain accurate information on protein forced unfolding and to probe data on the presence of interdomain interactions. The order statistics-based formalism is extended to cover the analysis of correlated unfolding transitions. The use of order statistics leads naturally to the development of new kinetic models, which describe the probabilities of ordered unfolding transitions rather than the populations of chemical species. PMID:18065466

  8. A Descriptive Study of Registers Found in Spoken and Written Communication (A Semantic Analysis)

    Directory of Open Access Journals (Sweden)

    Nurul Hidayah

    2016-07-01

    Full Text Available This research is a descriptive study of registers found in spoken and written communication. The type of this research is descriptive qualitative research. The data of the study are registers in spoken and written communication found in a book entitled "Communicating! Theory and Practice" and on the internet. The data can take the form of words, phrases, and abbreviations. For data collection, the writer uses the library method as her instrument and relates it to the study of registers in spoken and written communication. The data are analyzed using the descriptive method. The registers are separated into formal and informal registers, and the meaning of each register is identified.

  9. Improving Spoken Language Outcomes for Children With Hearing Loss: Data-driven Instruction.

    Science.gov (United States)

    Douglas, Michael

    2016-02-01

    To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of predevice fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved significantly higher scores for total language on the CELF-P and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. 
The preliminary results of this study support further investigation regarding DDI to investigate whether this method can consistently

  10. Voice reinstatement modulates neural indices of continuous word recognition.

    Science.gov (United States)

    Campeanu, Sandra; Craik, Fergus I M; Backer, Kristina C; Alain, Claude

    2014-09-01

    The present study was designed to examine listeners' ability to use voice information incidentally during spoken word recognition. We recorded event-related brain potentials (ERPs) during a continuous recognition paradigm in which participants indicated on each trial whether the spoken word was "new" or "old." Old items were presented at 2, 8 or 16 words following the first presentation. Context congruency was manipulated by having the same word repeated by either the same speaker or a different speaker. The different speaker could share the gender, accent or neither feature with the word presented the first time. Participants' accuracy was higher when the old word was spoken by the same speaker than by a different speaker. In addition, accuracy decreased with increasing lag. The correct identification of old words was accompanied by an enhanced late positivity over parietal sites, with no difference found between voice congruency conditions. In contrast, an earlier voice reinstatement effect was observed over frontal sites, an index of priming that preceded recollection in this task. Our results provide further evidence that acoustic and semantic information are integrated into a unified trace and that acoustic information facilitates spoken word recollection. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. A linear iterative unfolding method

    International Nuclear Information System (INIS)

    László, András

    2012-01-01

    A frequently faced task in experimental physics is to measure the probability distribution of some quantity. Often the quantity to be measured is smeared by a non-ideal detector response or by some physical process. The procedure of removing this smearing effect from the measured distribution is called unfolding, and it is a delicate problem in signal processing due to the well-known numerical ill-posedness of the task. Various methods have been invented which, given some assumptions on the initial probability distribution, try to regularize the unfolding problem. Most of these methods definitely introduce bias into the estimate of the initial probability distribution. We propose a linear iterative method (motivated by the Neumann series / Landweber iteration known in functional analysis) which has the advantage that no assumptions on the initial probability distribution are needed, and the only regularization parameter is the stopping order of the iteration, which can be used to choose the best compromise between the introduced bias and the propagated statistical and systematic errors. The method is consistent: 'binwise' convergence to the initial probability distribution is proved in the absence of measurement errors under a quite general condition on the response function. This condition holds for practical applications such as convolutions, calorimeter response functions, momentum reconstruction response functions based on tracking in a magnetic field, etc. In the presence of measurement errors, explicit formulae for the propagation of the three important error terms are provided: the bias error (the distance from the unknown to-be-reconstructed initial distribution at a finite iteration order), the statistical error, and the systematic error. A trade-off between these three error terms can be used to define an optimal iteration stopping criterion, and the errors can be estimated there. We provide a numerical C library for the implementation of the method, which incorporates automatic
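The Landweber-type iteration the abstract refers to can be sketched as follows; the Gaussian smearing matrix and the stopping order used here are illustrative assumptions, with the number of iterations playing exactly the regularizing role described above.

```python
import numpy as np

n = 50
# Illustrative smearing (response) matrix: Gaussian blur along the diagonal.
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

x_true = np.zeros(n)
x_true[15:20] = 1.0
x_true[30:40] = 0.5
y = A @ x_true                       # measured (smeared) distribution

# Landweber iteration: x_{k+1} = x_k + tau * A^T (y - A x_k),
# convergent for tau < 2/||A||^2; the stopping order is the only
# regularization parameter.
tau = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(2000):
    x += tau * A.T @ (y - A @ x)

# After enough iterations the unfolded estimate reproduces the measurement.
assert np.linalg.norm(A @ x - y) < 0.05 * np.linalg.norm(y)
```

Stopping earlier biases the estimate toward the smooth start; iterating longer reduces that bias but amplifies propagated measurement noise, which is the trade-off the abstract's error formulae quantify.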

  12. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    Science.gov (United States)

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.

  13. Professionals' Guidance about Spoken Language Multilingualism and Spoken Language Choice for Children with Hearing Loss

    Science.gov (United States)

    Crowe, Kathryn; McLeod, Sharynne

    2016-01-01

    The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…

  14. Analyzing Forced Unfolding of Protein Tandems by Ordered Variates, 1: Independent Unfolding Times

    Science.gov (United States)

    Bura, E.; Klimov, D. K.; Barsegov, V.

    2007-01-01

    Most mechanically active proteins are organized into tandems of identical repeats, (D)N, or heterogeneous tandems, D1–D2–…–DN. In current atomic force microscopy experiments, conformational transitions of protein tandems can be accessed by employing a constant stretching force f (force-clamp) and by analyzing the recorded unfolding times of individual domains. Analysis of unfolding data for homogeneous tandems relies on the assumption that unfolding times are independent and identically distributed, and involves inference of the (parent) probability density of unfolding times from the histogram of the combined unfolding times. This procedure cannot be used to describe tandems characterized by interdomain interactions, or heterogeneous tandems. In this article, we introduce an alternative approach that is based on recognizing that the observed data are ordered, i.e., first, second, third, etc., unfolding times. The approach is exemplified through the analysis of unfolding times for a computer model of homogeneous and heterogeneous tandems subjected to constant force. We show that, in the experimentally accessible range of stretching forces, the independent and identically distributed assumption may not hold. Specifically, the uncorrelated unfolding transitions of individual domains at lower force may become correlated (dependent) at elevated force levels. The proposed formalism can be used in atomic force microscopy experiments to infer the unfolding time distributions of individual domains from experimental histograms of ordered unfolding times, and it can be extended to analyzing protein tandems that exhibit interdomain interactions. PMID:17496033
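The ordered-unfolding-times viewpoint can be illustrated with a small simulation under the independence assumption that such tests are designed to probe; the exponential unfolding-time model and all parameters are assumptions for illustration, not the paper's computer model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed model: a homogeneous tandem of N domains whose unfolding times
# under constant force are i.i.d. exponential with rate lam.
N, lam, pulls = 5, 2.0, 20000
times = rng.exponential(1.0 / lam, size=(pulls, N))
ordered = np.sort(times, axis=1)          # observed ordered unfolding times

# Under independence, the k-th spacing (k-th minus (k-1)-th order statistic)
# is exponential with rate (N - k + 1)*lam, hence mean 1/((N-k+1)*lam).
gaps = np.diff(np.concatenate([np.zeros((pulls, 1)), ordered], axis=1), axis=1)
expected_means = 1.0 / ((N - np.arange(N)) * lam)

# A simple moment check: empirical spacing means match order-statistics theory.
assert np.allclose(gaps.mean(axis=0), expected_means, rtol=0.05)
```

Systematic deviation of the observed spacing distributions from such order-statistics predictions would signal correlated (dependent) unfolding transitions of the kind the paper discusses at elevated force.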

  15. Thermal dissociation and unfolding of insulin

    DEFF Research Database (Denmark)

    Huus, Kasper; Havelund, Svend; Olsen, Helle B

    2005-01-01

    The thermal stability of human insulin was studied by differential scanning microcalorimetry and near-UV circular dichroism as a function of the zinc/protein ratio, to elucidate the dissociation and unfolding processes of insulin in different association states. Zinc-free insulin, which is primarily dimeric at room temperature, unfolded at approximately 70 degrees C. The two monomeric insulin mutants Asp(B28) and Asp(B9),Glu(B27) unfolded at higher temperatures, but with enthalpies of unfolding that were approximately 30% smaller. Small amounts of zinc caused a biphasic thermal denaturation pattern of insulin. The biphasic denaturation is caused by a redistribution of zinc ions during the heating process and results in two distinct transitions with T(m)'s of approximately 70 and approximately 87 degrees C, corresponding to monomer/dimer and hexamer, respectively. At high zinc concentrations (≥5 Zn(2...

  16. Spoken Language Understanding Software for Language Learning

    Directory of Open Access Journals (Sweden)

    Hassan Alam

    2008-04-01

    Full Text Available In this paper we describe a preliminary, work-in-progress Spoken Language Understanding Software (SLUS) with tailored feedback options, which uses an interactive spoken language interface to teach Iraqi Arabic and culture to second language learners. The SLUS analyzes input speech by the second language learner and grades it for correct pronunciation in terms of supra-segmental and rudimentary segmental errors such as missing consonants. We evaluated this software on training data with the help of two native speakers, and found that it recorded an accuracy of around 70% in the law-and-order domain. For future work, we plan to develop similar systems for multiple languages.

  17. Theology of Jesus’ words from the cross

    Directory of Open Access Journals (Sweden)

    Bogdan Zbroja

    2012-09-01

    Full Text Available The article presents the theological message of the last words that Jesus spoke from the height of the cross. The content is arranged around three kinds of Christ's relations: the words addressed to God the Father; the words addressed to the good people standing by the cross; and the so-called declarations, which the Master addressed to no one in particular but uttered in general. All these words speak of the Master's love. They express His full awareness of what was being done and of the decision He voluntarily took. Above all, the Lord's statements reveal His obedience to the will of God expressed in the inspired words of the Holy Scriptures. Jesus fulfills all the prophecies of the Old Testament through the words He pronounced and the works He accomplished, which would become the content of the New Testament.

  18. [Use of Freiburg monosyllabic test words in the contemporary German language : Currentness of the test words].

    Science.gov (United States)

    Steffens, T

    2016-08-01

    The Freiburg monosyllabic test has a word inventory based on word frequency in written sources from the 19th century, which is unevenly distributed across the test lists. The median word-frequency rankings in contemporary language of nine test lists deviate significantly from the overall median of all 400 monosyllables. Lists 1, 6, 9, 10, and 17 include significantly more very rarely used words; lists 2, 3, 5, and 15 include significantly more very frequently used words. Compared with word frequency in contemporary spoken German, about 45 % of the test words are practically no longer used. Owing to this high proportion of extremely rare or obsolete words, the word inventory is no longer representative of the contemporary German language, neither written nor spoken. Highly educated persons with a large vocabulary are thereby favored. The reference values for normal-hearing persons should therefore be reevaluated.

  19. Tracking the time course of word-frequency effects in auditory word recognition with event-related potentials.

    Science.gov (United States)

    Dufour, Sophie; Brunellière, Angèle; Frauenfelder, Ulrich H

    2013-04-01

    Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to reflect mechanisms involved in word identification, was also examined. The ERP data showed a clear frequency effect as early as 350 ms from word onset on the P350, followed by a later effect at word offset on the late N400. A neighborhood density effect was also found at an early stage of spoken-word processing on the PMN, and at word offset on the late N400. Overall, our ERP differences for word frequency suggest that frequency affects the core processes of word identification starting from the initial phase of lexical activation and including target word selection. They thus rule out any interpretation of the word frequency effect that is limited to a purely decisional locus after word identification has been completed. Copyright © 2012 Cognitive Science Society, Inc.

  20. A Corpus-based Linguistic Analysis on Spoken Corpus: Semantic Prosodies on “Robots”

    Directory of Open Access Journals (Sweden)

    Yunisrina Qismullah Yusuf

    2010-04-01

    Full Text Available This study focuses on the semantic prosodies of the word "robot" based on the words that collocate with it in spoken data. The data are collected from a lecturer's talk on two topics: man and machines in perfect harmony, and the effective temperature of workplaces. For annotation, the UCREL CLAWS5 tagset is used, with horizontal output style selected. The corpus design follows ICE. The analysis reveals more positive than negative semantic prosodies on the word "robot" in the data, with 52 occurrences found for positive (94.5%) and 3 occurrences for negative (5.5%). The words most often collocated with "robot" in the data are service with 8 collocations, machines with 20 collocations, surgical system with 15 collocations, and intelligence with 13 collocations.
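A collocation count of the kind reported above can be sketched in a few lines of Python; the toy sentence, node word, and window size are illustrative assumptions, not the study's corpus or settings.

```python
from collections import Counter

# Count words appearing within a +/-2 token window of the node word "robot".
tokens = ("the robot arm moves the part while another robot "
          "checks the surgical system").split()
node, window = "robot", 2

collocates = Counter()
for i, tok in enumerate(tokens):
    if tok == node:
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        # Count every token in the window except the node occurrence itself.
        collocates.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)

assert collocates["the"] == 2      # "the" collocates with both occurrences
```

A real study of semantic prosody would then classify these collocates as positive or negative, as the abstract describes for "robot".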

  1. Czech spoken in Bohemia and Moravia

    NARCIS (Netherlands)

    Šimáčková, Š.; Podlipský, V.J.; Chládková, K.

    2012-01-01

    As a western Slavic language of the Indo-European family, Czech is closest to Slovak and Polish. It is spoken as a native language by nearly 10 million people in the Czech Republic (Czech Statistical Office n.d.). About two million people living abroad, mostly in the USA, Canada, Austria, Germany,

  2. Artfulness in Young Children's Spoken Narratives

    Science.gov (United States)

    Glenn-Applegate, Katherine; Breit-Smith, Allison; Justice, Laura M.; Piasta, Shayne B.

    2010-01-01

    Research Findings: Artfulness is rarely considered as an indicator of quality in young children's spoken narratives. Although some studies have examined artfulness in the narratives of children 5 and older, no studies to date have focused on the artfulness of preschoolers' oral narratives. This study examined the artfulness of fictional spoken…

  3. A Mother Tongue Spoken Mainly by Fathers.

    Science.gov (United States)

    Corsetti, Renato

    1996-01-01

    Reviews what is known about Esperanto as a home language and first language. Recorded cases of Esperanto-speaking families are known since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggests that this "artificial bilingualism" can be as successful…

  4. Spoken Grammar and Its Role in the English Language Classroom

    Science.gov (United States)

    Hilliard, Amanda

    2014-01-01

    This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…

  5. Forehearing words: Pre-activation of word endings at word onset.

    Science.gov (United States)

    Roll, Mikael; Söderström, Pelle; Frid, Johan; Mannfolk, Peter; Horne, Merle

    2017-09-29

    With speech unfolding at rates of up to 6-7 syllables per second, speech perception and understanding involve rapid identification of speech sounds and pre-activation of morphemes and words. Using event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI), we investigated the time course and neural sources of the pre-activation of word endings as participants heard the beginnings of unfolding words. ERPs showed a pre-activation negativity (PrAN) for word beginnings (the first two segmental phonemes) with few possible completions. PrAN increased gradually as the number of possible completions of a word onset decreased and the lexical frequency of the completions increased. The early brain potential effect for few possible word completions was associated with a blood-oxygen-level-dependent (BOLD) contrast increase in Broca's area (pars opercularis of the left inferior frontal gyrus) and the angular gyrus of the left parietal lobe. We suggest early involvement of the left prefrontal cortex in inhibiting irrelevant left parietal activation during lexical selection. The results further our understanding of the importance of Broca's area in rapid online pre-activation of words. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  6. NEUPAC, Experimental Neutron Spectra Unfolding with Sensitivities

    International Nuclear Information System (INIS)

    Sasaki, Makoto; Nakazawa, Masaharu

    1986-01-01

    1 - Description of problem or function: The code is able to determine the integral quantities and their sensitivities, together with an estimate of the unfolded spectrum and integral quantities. The code also performs a chi-square test of the input/output data, and contains many options for the calculational routines. 2 - Method of solution: The code is based on the J1-type unfolding method, and the estimated neutron flux spectrum is obtained as its solution. 3 - Restrictions on the complexity of the problem: The maximum number of energy groups used for unfolding is 620. The maximum number of reaction rates and the window functions given as input is 20. The total storage requirement depends on the amount of input data

  7. Brain basis of phonological awareness for spoken language in children and its disruption in dyslexia.

    Science.gov (United States)

    Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-04-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.

  8. Comparing spoken language treatments for minimally verbal preschoolers with autism spectrum disorders.

    Science.gov (United States)

    Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna

    2013-02-01

    Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in each group achieved benchmarks for the first stage of functional spoken language development, as defined by Tager-Flusberg et al. (J Speech Lang Hear Res, 52: 643-652, 2009). Analyses of moderators of treatment suggest that joint attention moderates response to both treatments, and children with better receptive language pre-treatment do better with the naturalistic method, while those with lower receptive language show better response to the discrete trial treatment. The implications of these findings are discussed.

  9. Modality differences between written and spoken story retelling in healthy older adults

    Directory of Open Access Journals (Sweden)

    Jessica Ann Obermeyer

    2015-04-01

    Methods: Ten native English-speaking healthy elderly participants between the ages of 50 and 80 were recruited. Exclusionary criteria included neurological disease/injury, history of learning disability, uncorrected hearing or vision impairment, history of drug/alcohol abuse, and presence of cognitive decline (based on the Cognitive Linguistic Quick Test). Spoken and written discourse was analyzed for microlinguistic measures including total words, percent correct information units (CIUs; Nicholas & Brookshire, 1993), and percent complete utterances (CUs; Edmonds et al., 2009). CIUs measure relevant and informative words, while CUs focus at the sentence level and measure whether a relevant subject, verb, and object (if appropriate) are present. Results: Analysis was completed using the Wilcoxon Rank Sum Test due to the small sample size. Preliminary results revealed that healthy elderly people produced significantly more words in spoken retellings than in written retellings (p=.000); however, this measure contrasted with %CIUs and %CUs, with participants producing significantly higher %CIUs (p=.000) and %CUs (p=.000) in written story retellings than in spoken story retellings. Conclusion: These findings indicate that written retellings, while shorter, contained higher accuracy at both the word (CIU) and sentence (CU) level. This observation could be related to the ability to revise written text and therefore make it more concise, whereas the nature of speech results in more embellishment and "thinking out loud," such as comments about the task, associated observations about the story, etc. We plan to run more participants and conduct a main concepts analysis (before conference time) to gain more insight into modality differences and implications.

  10. Phonological Analysis of University Students’ Spoken Discourse

    Directory of Open Access Journals (Sweden)

    Clara Herlina

    2011-04-01

    Full Text Available The study of discourse is the study of language in actual use. In this article, the writer investigates the phonological features, both segmental and supra-segmental, in the spoken discourse of Indonesian university students. The data were taken from recordings of 15 conversations by 30 students of Bina Nusantara University who are taking the English Entrant subject (TOEFL-iBT). The writer concludes that the students are still influenced by their first language in their spoken discourse, which results in English with an Indonesian accent. Even though this does not cause misunderstanding at the moment, it may become problematic if they have to communicate in the real world.

  11. Fourth International Workshop on Spoken Dialog Systems

    CERN Document Server

    Rosset, Sophie; Garnier-Rizet, Martine; Devillers, Laurence; Natural Interaction with Robots, Knowbots and Smartphones : Putting Spoken Dialog Systems into Practice

    2014-01-01

    These proceedings present the state of the art in spoken dialog systems, with applications in robotics, knowledge access and communication. They address specifically: 1. Dialog for interacting with smartphones; 2. Dialog for open-domain knowledge access; 3. Dialog for robot interaction; 4. Mediated dialog (including cross-lingual dialog involving speech translation); and 5. Dialog quality evaluation. These articles were presented at the IWSDS 2012 workshop.

  12. Native language, spoken language, translation and trade

    OpenAIRE

    Jacques Melitz; Farid Toubal

    2012-01-01

    We construct new series for common native language and common spoken language for 195 countries, which we use together with series for common official language and linguistic proximity in order to draw inferences about (1) the aggregate impact of all linguistic factors on bilateral trade, (2) whether the linguistic influences come from ethnicity and trust or ease of communication, and (3) insofar as they come from ease of communication, to what extent translation and interpreters play a role...

  13. AlignTool: The automatic temporal alignment of spoken utterances in German, Dutch, and British English for psycholinguistic purposes.

    Science.gov (United States)

    Schillingmann, Lars; Ernst, Jessica; Keite, Verena; Wrede, Britta; Meyer, Antje S; Belke, Eva

    2018-01-29

    In language production research, the latency with which speakers produce a spoken response to a stimulus and the onset and offset times of words in longer utterances are key dependent variables. Measuring these variables automatically often yields partially incorrect results, yet exact measurement through visual inspection of the recordings is extremely time-consuming. We present AlignTool, an open-source alignment tool that first establishes preliminary onset and offset times of words and phonemes in spoken utterances using Praat, and subsequently performs a forced alignment of the spoken utterances and their orthographic transcriptions in the automatic speech recognition system MAUS. AlignTool creates a Praat TextGrid file for inspection and manual correction by the user, if necessary. We evaluated AlignTool's performance with recordings of single-word and four-word utterances as well as semi-spontaneous speech. AlignTool performs well with audio signals with an excellent signal-to-noise ratio, requiring virtually no corrections. For audio signals of lesser quality, AlignTool is still highly functional, but its results may require more frequent manual corrections. We also found that audio recordings including long silent intervals tended to pose greater difficulties for AlignTool than recordings filled with speech, which AlignTool analyzed well overall. We expect that by semi-automating the temporal analysis of complex utterances, AlignTool will open new avenues in language production research.
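    The word-level bookkeeping behind such alignments can be illustrated with a minimal sketch: given an aligned phone tier and the number of phones in each word, derive each word's onset and offset times. The flat tuple representation and the function name are illustrative, not AlignTool's actual TextGrid data structures:

```python
def word_boundaries(phone_intervals, word_phone_counts):
    """Derive word onset/offset times from an aligned phone tier.
    phone_intervals: [(phone_label, start_s, end_s), ...] in utterance order;
    word_phone_counts: number of phones belonging to each successive word."""
    times, i = [], 0
    for n in word_phone_counts:
        onset = phone_intervals[i][1]           # start of the word's first phone
        offset = phone_intervals[i + n - 1][2]  # end of the word's last phone
        times.append((onset, offset))
        i += n
    return times
```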

  14. An energetic model for macromolecules unfolding in stretching experiments

    Science.gov (United States)

    De Tommasi, D.; Millardi, N.; Puglisi, G.; Saccomandi, G.

    2013-01-01

    We propose a simple approach, based on the minimization of the total (entropic plus unfolding) energy of a two-state system, to describe the unfolding of multi-domain macromolecules (proteins, silks, polysaccharides, nanopolymers). The model is fully analytical and enlightens the role of the different energetic components regulating the unfolding evolution. As an explicit example, we compare the analytical results with a titin atomic force microscopy stretch-induced unfolding experiment showing the ability of the model to quantitatively reproduce the experimental behaviour. In the thermodynamic limit, the sawtooth force–elongation unfolding curve degenerates to a constant force unfolding plateau. PMID:24047874
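    A toy numerical illustration of the sawtooth force-elongation curve: a worm-like-chain force law whose contour length grows each time a domain unfolds. The paper's model obtains unfolding from energy minimization; this sketch substitutes a fixed force threshold, and all parameter values are illustrative assumptions:

```python
def wlc_force(x, L, p=0.4, kT=4.11):
    """Worm-like-chain interpolation force (pN) at extension x (nm)
    for contour length L (nm), persistence length p, thermal energy kT."""
    t = x / L
    return (kT / p) * (0.25 / (1 - t) ** 2 - 0.25 + t)

def sawtooth(extensions, n_domains=3, dL=28.0, L0=60.0, f_unfold=200.0):
    """Toy sawtooth trace: when the WLC force exceeds f_unfold, one domain
    unfolds, the contour length grows by dL, and the force relaxes."""
    L, unfolded, forces = L0, 0, []
    for x in extensions:
        f = wlc_force(x, L)
        if f > f_unfold and unfolded < n_domains:
            unfolded += 1
            L += dL
            f = wlc_force(x, L)  # recompute with the lengthened chain
        forces.append(f)
    return forces
```

    Each unfolding event lengthens the chain and relaxes the force, producing the sawtooth; with many domains the peaks approach the constant-force plateau mentioned in the abstract.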

  15. Using brain potentials to functionally localise Stroop-like effects in colour and picture naming: Perceptual encoding versus word planning

    NARCIS (Netherlands)

    Shitova, Natalia; Roelofs, Ardi; Schriefers, Herbert; Bastiaansen, M.C.M.; Schoffelen, Jan-Mathijs

    2016-01-01

    The colour-word Stroop task and the picture-word interference task (PWI) have been used extensively to study the functional processes underlying spoken word production. One of the consistent behavioural effects in both tasks is the Stroop-like effect: The reaction time (RT) is longer on incongruent

  17. Studies of the Processing of Single Words Using Positron Tomographic Measures of Cerebral Blood Flow Change.

    Science.gov (United States)

    1987-01-01

    in dyslexia provide support for a direct route from visual word forms to semantic and articulatory codes. There also seems to be independence in the...experiment (LaBerge & Samuels, 1974; Rumelhart & McClelland, 1982, 1986). Examples of some of these separate codes include a visual image of the...form of a spoken word (visual code), pronunciation of the word (phonological code), or the association of related words (semantic codes). Studies of the

  18. Comparison of neutron spectrum unfolding codes

    International Nuclear Information System (INIS)

    Zijp, W.

    1979-02-01

    This final report contains a set of four ECN reports. The first deals with the comparison of the neutron spectrum unfolding codes CRYSTAL BALL, RFSP-JUL, SAND II and STAY'SL. The other three present the results of calculations on the influence of statistical weights in CRYSTAL BALL, SAND II and RFSP-JUL.

  19. Chemical and thermal unfolding of calreticulin

    DEFF Research Database (Denmark)

    Duus, K.; Larsen, N.; Tran, T. A. T.

    2013-01-01

    assay, we have investigated the chemical and thermal stability of calreticulin. When the chemical stability of calreticulin was assessed, a midpoint for calreticulin unfolding was calculated to 3.0M urea using CD data at 222 nm. Using the fluorescent dye binding thermal shift assay, calreticulin...

  20. Word classes

    DEFF Research Database (Denmark)

    Rijkhoff, Jan

    2007-01-01

    This article provides an overview of recent literature and research on word classes, focusing in particular on typological approaches to word classification. The cross-linguistic classification of word class systems (or parts-of-speech systems) presented in this article is based on statements found in grammatical descriptions of some 50 languages, which together constitute a representative sample of the world’s languages (Hengeveld et al. 2004: 529). It appears that there are both quantitative and qualitative differences between the word class systems of individual languages. Whereas some languages employ a parts-of-speech system that includes the categories Verb, Noun, Adjective and Adverb, other languages may use only a subset of these four lexical categories. Furthermore, quite a few languages have a major word class whose members cannot be classified in terms of the categories Verb – Noun – Adjective...

  1. Great expectations: Specific lexical anticipation influences the processing of spoken language

    Directory of Open Access Journals (Sweden)

    Nieuwland Mante S

    2007-10-01

    Full Text Available Abstract Background Recently, several studies have shown that people use contextual information to make predictions about the rest of a sentence or story as the text unfolds. Using event-related potentials (ERPs), we tested whether these on-line predictions are based on a message-level representation of the discourse or on simple automatic activation by individual words. Subjects heard short stories that were highly constraining for one specific noun, or stories that were not specifically predictive but contained the same prime words as the predictive stories. To test whether listeners make specific predictions, critical nouns were preceded by an adjective that was inflected according to, or in contrast with, the gender of the expected noun. Results When the message of the preceding discourse was predictive, adjectives with an unexpected gender inflection evoked a negative deflection over right-frontal electrodes between 300 and 600 ms. This effect was not present in the prime control context, indicating that the prediction mismatch does not hinge on word-based priming but is based on the actual message of the discourse. Conclusion When listening to a constraining discourse, people rapidly make very specific predictions about the remainder of the story as it unfolds. These predictions are not simply based on word-based automatic activation, but take into account the actual message of the discourse.

  2. Brain-to-text: Decoding spoken phrases from phone representations in the brain

    Directory of Open Access Journals (Sweden)

    Christian Herff

    2015-06-01

    Full Text Available It has long been speculated whether communication between humans and machines based on natural speech-related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones, or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system achieved word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step towards human-machine communication based on imagined speech.
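    The phone-level decoding step borrowed from ASR can be illustrated with a textbook Viterbi decoder over additive log-probability scores (a generic sketch; the actual Brain-To-Text system uses full ASR machinery trained on ECoG features, not this toy):

```python
def viterbi(obs_scores, trans_scores, init_scores):
    """Most likely phone sequence given per-frame scores (e.g. log-probs).
    obs_scores: list of dicts, one per frame: {phone: score of frame given phone};
    trans_scores: {(prev_phone, cur_phone): transition score};
    init_scores: {phone: initial score}."""
    phones = list(init_scores)
    V = [{p: init_scores[p] + obs_scores[0][p] for p in phones}]
    back = []
    for t in range(1, len(obs_scores)):
        col, ptr = {}, {}
        for cur in phones:
            prev = max(phones, key=lambda q: V[-1][q] + trans_scores[(q, cur)])
            col[cur] = V[-1][prev] + trans_scores[(prev, cur)] + obs_scores[t][cur]
            ptr[cur] = prev
        V.append(col)
        back.append(ptr)
    # backtrace from the best final phone
    path = [max(phones, key=lambda p: V[-1][p])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```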

  3. Symbolic gestures and spoken language are processed by a common neural system.

    Science.gov (United States)

    Xu, Jiang; Gannon, Patrick J; Emmorey, Karen; Smith, Jason F; Braun, Allen R

    2009-12-08

    Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating "be quiet"), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Models (GLM) contrasts identified shared and unique activations and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects.

  4. Polysynthesis in Hueyapan Nahuatl: The Status of Noun Phrases, Basic Word Order, and Other Concerns

    DEFF Research Database (Denmark)

    Pharao Hansen, Magnus

    2010-01-01

    This article presents data showing that the syntax of the Nahuatl dialect spoken in Hueyapan, Morelos, Mexico has traits of nonconfigurationality: free word order and free pro-drop, with predicate-initial word order being pragmatically neutral. It permits discontinuous noun phrases and has no nat...

  5. The visual-auditory color-word Stroop asymmetry and its time course

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2005-01-01

    Four experiments examined crossmodal versions of the Stroop task in order (1) to look for Stroop asymmetries in color naming, spoken-word naming, and written-word naming and to evaluate the time course of these asymmetries, and (2) to compare these findings to current models of the Stroop effect.

  6. African American English and Spelling: How Do Second Graders Spell Dialect-Sensitive Features of Words?

    Science.gov (United States)

    Patton-Terry, Nicole; Connor, Carol

    2010-01-01

    This study explored the spelling skills of African American second graders who produced African American English (AAE) features in speech. The children (N = 92), who varied in spoken AAE use and word reading skills, were asked to spell words that contained phonological and morphological dialect-sensitive (DS) features that can vary between AAE and…

  7. Spoken language interface for a network management system

    Science.gov (United States)

    Remington, Robert J.

    1999-11-01

    Leaders within the Information Technology (IT) industry are expressing a general concern that the products used to deliver and manage today's communications network capabilities require far too much effort to learn and to use, even by highly skilled and increasingly scarce support personnel. The usability of network management systems must be significantly improved if they are to deliver the performance and quality of service needed to meet the ever-increasing demand for new Internet-based information and services. Fortunately, recent advances in spoken language (SL) interface technologies show promise for significantly improving the usability of most interactive IT applications, including network management systems. The emerging SL interfaces will allow users to communicate with IT applications through words and phrases -- our most familiar form of everyday communication. Recent advancements in SL technologies have resulted in new commercial products that are being operationally deployed at an increasing rate. The present paper describes a project aimed at the application of new SL interface technology for improving the usability of an advanced network management system. It describes several SL interface features that are being incorporated within an existing system with a modern graphical user interface (GUI), including 3-D visualization of network topology and network performance data. The rationale for using these SL interface features to augment existing user interfaces is presented, along with selected task scenarios to provide insight into how a SL interface will simplify the operator's task and enhance overall system usability.

  8. Action and Object Word Writing in a Case of Bilingual Aphasia

    Directory of Open Access Journals (Sweden)

    Maria Kambanaros

    2012-01-01

    Full Text Available We report the spoken and written naming of a bilingual speaker with aphasia in two languages that differ in morphological complexity, orthographic transparency and script: Greek and English. AA presented with difficulties in spoken picture naming together with preserved written picture naming for action words in Greek. In English, AA showed similar performance across both tasks for action and object words, i.e., difficulties retrieving action and object names in both spoken and written naming. Our findings support the hypothesis that the cognitive processes used for spoken and written naming are independent components of the language system and can be selectively impaired after brain injury. In the case of bilingual speakers, such processes impact both languages. We conclude that grammatical category is an organizing principle in bilingual dysgraphia.

  9. Mobile Information Access with Spoken Query Answering

    DEFF Research Database (Denmark)

    Brøndsted, Tom; Larsen, Henrik Legind; Larsen, Lars Bo

    2006-01-01

    This paper addresses the problem of information and service accessibility in mobile devices with limited resources. A solution is developed and tested through a prototype that applies state-of-the-art Distributed Speech Recognition (DSR) and knowledge-based Information Retrieval (IR) processing for spoken query answering. For the DSR part, a configurable DSR system is implemented on the basis of the ETSI-DSR advanced front-end and the SPHINX IV recognizer. For the knowledge-based IR part, a distributed system solution is developed for fast retrieval of the most relevant documents, with a text...

  10. Narrative skills in deaf children who use spoken English: Dissociations between macro and microstructural devices.

    Science.gov (United States)

    Jones, A C; Toscano, E; Botting, N; Marshall, C R; Atkinson, J R; Denmark, T; Herman, R; Morgan, G

    2016-12-01

    Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and in their use of micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6- to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single-word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices correlated most strongly with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices more dependent on finer linguistic and pragmatic skills. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  11. The Role of Secondary-Stressed and Unstressed-Unreduced Syllables in Word Recognition: Acoustic and Perceptual Studies with Russian Learners of English

    Science.gov (United States)

    Banzina, Elina; Dilley, Laura C.; Hewitt, Lynne E.

    2016-01-01

    The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated the production of SS and UU syllables by Russian learners of English. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found,…

  12. Does segmental overlap help or hurt? Evidence from blocked cyclic naming in spoken and written production.

    Science.gov (United States)

    Breining, Bonnie; Nozari, Nazbanou; Rapp, Brenda

    2016-04-01

    Past research has demonstrated interference effects when words are named in the context of multiple items that share a meaning. This interference has been explained within various incremental learning accounts of word production, which propose that each attempt at mapping semantic features to lexical items induces slight but persistent changes that result in cumulative interference. We examined whether similar interference-generating mechanisms operate during the mapping of lexical items to segments by examining the production of words in the context of others that share segments. Previous research has shown that initial-segment overlap amongst a set of target words produces facilitation, not interference. However, this initial-segment facilitation is likely due to strategic preparation, an external factor that may mask underlying interference. In the present study, we applied a novel manipulation in which the segmental overlap across target items was distributed unpredictably across word positions, in order to reduce strategic response preparation. This manipulation led to interference in both spoken (Exp. 1) and written (Exp. 2) production. We suggest that these findings are consistent with a competitive learning mechanism that applies across stages and modalities of word production.
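    The incremental learning account described above (each naming attempt slightly strengthens the mapping that was used and persistently weakens overlapping competitors, so interference accumulates over the set) can be caricatured in a few lines. The weights, learning rate, and inverse-strength "cost" are illustrative assumptions, not the authors' model:

```python
def name_trials(words, shared, rate=0.2):
    """Toy incremental-learning account of cumulative interference.
    Each naming trial boosts the target's connection weight; when items
    share features ('shared'), competitors are persistently weakened,
    so later items in the set incur a higher retrieval cost."""
    w = {word: 1.0 for word in words}
    costs = []
    for target in words:
        costs.append(1.0 / w[target])      # retrieval cost ~ inverse strength
        w[target] += rate                  # strengthen the used mapping
        for other in words:
            if other != target and shared:
                w[other] -= rate / 2       # persistent competitor weakening
    return costs
```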

  13. Spectral unfolding of fast neutron energy distributions

    Science.gov (United States)

    Mosby, Michelle; Jackman, Kevin; Engle, Jonathan

    2015-10-01

    The characterization of the energy distribution of a neutron flux is difficult in experiments with constrained geometry where techniques such as time of flight cannot be used to resolve the distribution. The measurement of neutron fluxes in reactors, which often present similar challenges, has been accomplished using radioactivation foils as an indirect probe. Spectral unfolding codes use statistical methods to adjust MCNP predictions of neutron energy distributions using quantified radioactive residuals produced in these foils. We have applied a modification of this established neutron flux characterization technique to experimentally characterize the neutron flux in the critical assemblies at the Nevada National Security Site (NNSS) and the spallation neutron flux at the Isotope Production Facility (IPF) at Los Alamos National Laboratory (LANL). Results of the unfolding procedure are presented and compared with a priori MCNP predictions, and the implications for measurements using the neutron fluxes at these facilities are discussed.
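    One classic family of spectral unfolding algorithms adjusts an a priori spectrum multiplicatively until calculated foil activities match the measured ones (SAND-II-style iteration with fractional-contribution weights). The sketch below is a bare-bones illustration of that iteration, not the code used at the NNSS or IPF:

```python
import math

def unfold(R, A, phi0, iters=50):
    """Minimal SAND-II-style multiplicative spectrum adjustment (a sketch).
    R[i][j]: response of foil i to energy group j (e.g. group cross section);
    A[i]: measured activity of foil i; phi0[j]: a-priori group flux guess."""
    phi = list(phi0)
    for _ in range(iters):
        # activities predicted by the current spectrum
        calc = [sum(R[i][j] * phi[j] for j in range(len(phi))) for i in range(len(A))]
        new = []
        for j in range(len(phi)):
            # weight each foil by group j's fractional contribution to it
            w = [R[i][j] * phi[j] / calc[i] for i in range(len(A))]
            num = sum(w[i] * math.log(A[i] / calc[i]) for i in range(len(A)))
            den = sum(w) or 1.0
            new.append(phi[j] * math.exp(num / den))
        phi = new
    return phi
```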

  14. Unfolded aplanats for high-concentration photovoltaics.

    Science.gov (United States)

    Gordon, Jeffrey M; Feuermann, Daniel; Young, Pete

    2008-05-15

    The exigencies of high-concentration photovoltaics motivate optics that (1) obviate the need for optical bonds, (2) exhibit maximal optical tolerance, (3) are not damaged at off-axis orientation, and (4) allow convenient location of the solar cell and heat sink. We show that dual-mirror unfolded aplanats can satisfy all these criteria. Lens enhancement improves compactness and, with millimeter-scale cells, concentrator depth is only a few centimeters, amenable to precise large-volume fabrication.

  15. Transition-Systems, Event Structures, and Unfoldings

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Rozenberg, Grzegorz; Thiagarajan, P.S.

    1995-01-01

    A subclass of transition systems called elementary transition systems can be identified with the help of axioms based on a structural notion called regions. Elementary transition systems have been shown to be the transition system model of a basic system model of net theory called elementary net ...... event structures. We then propose an operation of unfolding elementary transition systems into occurrence transition systems. We prove that it is "correct" in a strong categorical sense.
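    The idea of unfolding a transition system into an acyclic system of runs can be illustrated with a toy depth-bounded unfolding into a tree, where each node is the run that reaches it (a finite approximation for illustration; the paper's construction is categorical and not depth-bounded):

```python
def unfold_ts(transitions, init, depth):
    """Unfold a transition system into a tree of runs (an acyclic system).
    transitions: {state: [(label, next_state), ...]}; returns the tree's edges,
    where each node is the tuple of states visited from the initial state."""
    edges, frontier = [], [(init,)]
    for _ in range(depth):
        nxt = []
        for run in frontier:
            for label, s in transitions.get(run[-1], []):
                child = run + (s,)           # extending the run keeps the tree acyclic
                edges.append((run, label, child))
                nxt.append(child)
        frontier = nxt
    return edges
```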

  16. Stimulus-independent semantic bias misdirects word recognition in older adults.

    Science.gov (United States)

    Rogers, Chad S; Wingfield, Arthur

    2015-07-01

    Older adults' normally adaptive use of semantic context to aid in word recognition can have a negative consequence of causing misrecognitions, especially when the word actually spoken sounds similar to a word that more closely fits the context. Word-pairs were presented to young and older adults, with the second word of the pair masked by multi-talker babble varying in signal-to-noise ratio. Results confirmed older adults' greater tendency to misidentify words based on their semantic context compared to the young adults, and to do so with a higher level of confidence. This age difference was unaffected by differences in the relative level of acoustic masking.

  17. Word Learning Deficits in Children With Dyslexia.

    Science.gov (United States)

    Alt, Mary; Hogan, Tiffany; Green, Samuel; Gray, Shelley; Cabbage, Kathryn; Cowan, Nelson

    2017-04-14

    The purpose of this study is to investigate word learning in children with dyslexia to ascertain their strengths and weaknesses during the configuration stage of word learning. Children with typical development (N = 116) and dyslexia (N = 68) participated in computer-based word learning games that assessed word learning in 4 sets of games that manipulated phonological or visuospatial demands. All children were monolingual English-speaking 2nd graders without oral language impairment. The word learning games measured children's ability to link novel names with novel objects, to make decisions about the accuracy of those names and objects, to recognize the semantic features of the objects, and to produce the names of the novel words. Accuracy data were analyzed using analyses of covariance with nonverbal intelligence scores as a covariate. Word learning deficits were evident for children with dyslexia across every type of manipulation and on 3 of 5 tasks, but not for every combination of task/manipulation. Deficits were more common when task demands taxed phonology. Visuospatial manipulations led to both disadvantages and advantages for children with dyslexia. Children with dyslexia evidence spoken word learning deficits, but their performance is highly dependent on manipulations and task demand, suggesting a processing trade-off between visuospatial and phonological demands.

  18. Cross-modal working memory binding and L1-L2 word learning.

    Science.gov (United States)

    Wang, Shinmin; Allen, Richard J; Fang, Shin-Yi; Li, Ping

    2017-11-01

    The ability to create temporary binding representations of information from different sources in working memory has recently been found to relate to the development of monolingual word recognition in children. The current study explored this possible relationship in an adult word-learning context. We assessed whether the relationship between cross-modal working memory binding and lexical development would be observed in the learning of associations between unfamiliar spoken words and their semantic referents, and whether it would vary across experimental conditions in first- and second-language word learning. A group of English monolinguals were recruited to learn 24 spoken disyllable Mandarin Chinese words in association with either familiar or novel objects as semantic referents. They also took a working memory task in which their ability to temporarily bind auditory-verbal and visual information was measured. Participants' performance on this task was uniquely linked to their learning and retention of words for both novel objects and for familiar objects. This suggests that, at least for spoken language, cross-modal working memory binding might play a similar role in second language-like (i.e., learning new words for familiar objects) and in more native-like situations (i.e., learning new words for novel objects). Our findings provide new evidence for the role of cross-modal working memory binding in L1 word learning and further indicate that early stages of picture-based word learning in L2 might rely on similar cognitive processes as in L1.

  19. Direction Asymmetries in Spoken and Signed Language Interpreting

    Science.gov (United States)

    Nicodemus, Brenda; Emmorey, Karen

    2013-01-01

    Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…

  20. Spoken and Written Communication: Are Five Vowels Enough?

    Science.gov (United States)

    Abbott, Gerry

    The comparatively small vowel inventory of Bantu languages leads young Bantu learners to produce "undifferentiations," so that, for example, the spoken forms of "hat,""hut,""heart" and "hurt" sound the same to a British ear. The two criteria for a non-native speaker's spoken performance are…

  1. Spoken Grammar: Where Are We and Where Are We Going?

    Science.gov (United States)

    Carter, Ronald; McCarthy, Michael

    2017-01-01

    This article synthesises progress made in the description of spoken (especially conversational) grammar over the 20 years since the authors published a paper in this journal arguing for a re-thinking of grammatical description and pedagogy based on spoken corpus evidence. We begin with a glance back at the 16th century and the teaching of Latin…

  2. Enhancing the Performance of Female Students in Spoken English

    Science.gov (United States)

    Inegbeboh, Bridget O.

    2009-01-01

    Female students have been discriminated against right from birth in their various cultures and this affects the way they perform in Spoken English class, and how they rate themselves. They have been conditioned to believe that the male gender is superior to the female gender, so they leave the male students to excel in spoken English, while they…

  3. Assessing spoken-language educational interpreting: Measuring up ...

    African Journals Online (AJOL)

    Assessing spoken-language educational interpreting: Measuring up and measuring right. Lenelle Foster, Adriaan Cupido. Abstract. This article, primarily, presents a critical evaluation of the development and refinement of the assessment instrument used to assess formally the spoken-language educational interpreters at ...

  4. Spoken language corpora for the nine official African languages of ...

    African Journals Online (AJOL)

    Spoken language corpora for the nine official African languages of South Africa. Jens Allwood, AP Hendrikse. Abstract. In this paper we give an outline of a corpus planning project which aims to develop linguistic resources for the nine official African languages of South Africa in the form of corpora, more specifically spoken ...

  5. Distinguish Spoken English from Written English: Rich Feature Analysis

    Science.gov (United States)

    Tian, Xiufeng

    2013-01-01

    This article presents a feature analysis of four expository essays (Texts A/B/C/D) written by secondary school students, with a focus on the differences between spoken and written language. Texts C and D are better written than the other two (Texts A & B), which are considered more spoken in their language use. The language features are…

  6. Periodic words connected with the Fibonacci words

    Directory of Open Access Journals (Sweden)

    G. M. Barabash

    2016-06-01

    Full Text Available In this paper we introduce two families of periodic words (FLP-words of type 1 and FLP-words of type 2) that are connected with the Fibonacci words and investigate their properties.
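    The FLP-word constructions themselves are defined only in the full paper; as context, the underlying Fibonacci words can be generated by iterating the standard substitution 0 → 01, 1 → 0. A minimal sketch (illustrative only, not the paper's FLP-word construction):

```python
def fibonacci_word(n):
    """Return the Fibonacci word after n applications of the
    substitution 0 -> 01, 1 -> 0, starting from "0"."""
    w = "0"
    for _ in range(n):
        w = "".join("01" if c == "0" else "0" for c in w)
    return w

# successive lengths are Fibonacci numbers: 1, 2, 3, 5, 8, 13, ...
print(fibonacci_word(4))  # -> 01001010
```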

  7. Presentation video retrieval using automatically recovered slide and spoken text

    Science.gov (United States)

    Cooper, Matthew

    2013-03-01

    Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.

  8. The Influence of Topic Status on Written and Spoken Sentence Production.

    Science.gov (United States)

    Cowles, H Wind; Ferreira, Victor S

    2011-12-01

    Four experiments investigate the influence of topic status and givenness on how speakers and writers structure sentences. The results of these experiments show that when a referent is previously given, it is more likely to be produced early in both sentences and word lists, confirming prior work showing that givenness increases the accessibility of given referents. When a referent is previously given and assigned topic status, it is even more likely to be produced early in a sentence, but not in a word list. Thus, there appears to be an early mention advantage for topics that is present in both written and spoken modalities, but is specific to sentence production. These results suggest that information-structure constructs like topic exert an influence that is not based only on increased accessibility, but also reflects mapping to syntactic structure during sentence production.

  9. Learning words

    DEFF Research Database (Denmark)

    Jaswal, Vikram K.; Hansen, Mikkel

    2006-01-01

    Children tend to infer that when a speaker uses a new label, the label refers to an unlabeled object rather than one they already know the label for. Does this inference reflect a default assumption that words are mutually exclusive? Or does it instead reflect the result of a pragmatic reasoning process about what the speaker intended? In two studies, we distinguish between these possibilities. Preschoolers watched as a speaker pointed toward (Study 1) or looked at (Study 2) a familiar object while requesting the referent for a new word (e.g. 'Can you give me the blicket?'). In both studies, despite the speaker's unambiguous behavioral cue indicating an intent to refer to a familiar object, children inferred that the novel label referred to an unfamiliar object. These results suggest that children expect words to be mutually exclusive even when a speaker provides some kinds of pragmatic...

  10. Word prediction

    Energy Technology Data Exchange (ETDEWEB)

    Rumelhart, D.E.; Skokowski, P.G.; Martin, B.O.

    1995-05-01

    In this project we have developed a language model based on Artificial Neural Networks (ANNs) for use in conjunction with automatic textual search or speech recognition systems. The model can be trained on large corpora of text to produce probability estimates that would improve the ability of systems to identify words in a sentence given partial contextual information. The model uses a gradient-descent learning procedure to develop a metric of similarity among terms in a corpus, based on context. Using lexical categories based on this metric, a network can then be trained to do serial word probability estimation. Such a metric can also be used to improve the performance of topic-based search by allowing retrieval of information that is related to desired topics even if no obvious set of key words unites all the retrieved items.
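    The record describes the approach only at a high level (a gradient-descent-trained similarity metric feeding serial word probability estimation); none of the project's actual architecture or corpora are given. As an illustration only, a toy next-word predictor in the same spirit — a word-embedding layer and output layer trained by plain gradient descent on cross-entropy — can be sketched as:

```python
import math
import random

random.seed(0)
vocab = ["the", "cat", "sat", "mat"]
idx = {w: i for i, w in enumerate(vocab)}
# bigram training pairs from a made-up toy corpus
pairs = [("the", "cat"), ("cat", "sat"), ("sat", "the"), ("the", "mat")]
V, D = len(vocab), 6
E = [[random.gauss(0, 0.1) for _ in range(D)] for _ in range(V)]  # word embeddings
W = [[random.gauss(0, 0.1) for _ in range(V)] for _ in range(D)]  # output weights

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def predict(i):
    # probability distribution over the next word, given word i
    return softmax([sum(E[i][d] * W[d][j] for d in range(D)) for j in range(V)])

lr = 0.1
for _ in range(500):  # gradient descent on cross-entropy loss
    for w1, w2 in pairs:
        i, j = idx[w1], idx[w2]
        p = predict(i)
        g = [p[k] - (1.0 if k == j else 0.0) for k in range(V)]  # dLoss/dlogits
        dE = [sum(W[d][k] * g[k] for k in range(V)) for d in range(D)]
        for d in range(D):
            for k in range(V):
                W[d][k] -= lr * E[i][d] * g[k]
            E[i][d] -= lr * dE[d]

p_the = predict(idx["the"])  # probability mass concentrates on "cat" and "mat"
```

After training, the model assigns most of the next-word probability after "the" to the two continuations it has seen ("cat" and "mat") and almost none to "sat" — the serial word probability estimation the record refers to, in miniature.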

  11. On the Usability of Spoken Dialogue Systems

    DEFF Research Database (Denmark)

    Larsen, Lars Bo

    This work is centred on the methods and problems associated with defining and measuring the usability of Spoken Dialogue Systems (SDS). The starting point is the fact that speech-based interfaces have several times during the last 20 years fallen short of the high expectations and predictions held by industry, researchers and analysts. Several studies in the SDS literature indicate that this can be ascribed to a lack of attention from the speech technology community towards the usability of such systems. The experimental results presented in this work are based on a field trial with the OVID home… model roughly explains 50% of the observed variance in the user satisfaction based on measures of task success and speech recognition accuracy, a result similar to those obtained at AT&T. The applied methods are discussed and evaluated critically.

  12. Deep bottleneck features for spoken language identification.

    Directory of Open Access Journals (Sweden)

    Bing Jiang

    Full Text Available A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF-based i-vector representation for each speech utterance. Results on the NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system proposed.

  13. Word order variation and foregrounding of complement clauses

    DEFF Research Database (Denmark)

    Christensen, Tanya Karoli; Jensen, Torben Juel

    2015-01-01

    Through mixed models analyses of complement clauses in a corpus of spoken Danish we examine the role of sentence adverbials in relation to a word order distinction in Scandinavian signalled by the relative position of sentence adverbials and finite verb (V>Adv vs. Adv>V). The type of sentence adverbial was the third-most important factor in explaining the word order variation: sentence adverbials categorized as 'dialogic' are significantly associated with V>Adv word order. We argue that the results are readily interpretable in the light of the semantico-pragmatic hypothesis that V>Adv signals...

  14. General language performance measures in spoken and written narrative and expository discourse of school-age children with language learning disabilities.

    Science.gov (United States)

    Scott, C M; Windsor, J

    2000-04-01

    Language performance in naturalistic contexts can be characterized by general measures of productivity, fluency, lexical diversity, and grammatical complexity and accuracy. The use of such measures as indices of language impairment in older children is open to questions of method and interpretation. This study evaluated the extent to which 10 general language performance measures (GLPM) differentiated school-age children with language learning disabilities (LLD) from chronological-age (CA) and language-age (LA) peers. Children produced both spoken and written summaries of two educational videotapes that provided models of either narrative or expository (informational) discourse. Productivity measures, including total T-units, total words, and words per minute, were significantly lower for children with LLD than for CA children. Fluency (percent T-units with mazes) and lexical diversity (number of different words) measures were similar for all children. Grammatical complexity as measured by words per T-unit was significantly lower for LLD children. However, there was no difference among groups for clauses per T-unit. The only measure that distinguished children with LLD from both CA and LA peers was the extent of grammatical error. Effects of discourse genre and modality were consistent across groups. Compared to narratives, expository summaries were shorter, less fluent (spoken versions), more complex (words per T-unit), and more error prone. Written summaries were shorter and had more errors than spoken versions. For many LLD and LA children, expository writing was exceedingly difficult. Implications for accounts of language impairment in older children are discussed.

  15. Effects of Word Frequency and Transitional Probability on Word Reading Durations of Younger and Older Speakers.

    Science.gov (United States)

    Moers, Cornelia; Meyer, Antje; Janse, Esther

    2017-06-01

    High-frequency units are usually processed faster than low-frequency units in language comprehension and language production. Frequency effects have been shown for words as well as word combinations. Word co-occurrence effects can be operationalized in terms of transitional probability (TP). TPs reflect how probable a word is, conditioned on its right or left neighbouring word. This corpus study investigates whether three different age groups (younger children, 8-12 years; adolescents, 12-18 years; and older Dutch speakers, 62-95 years) show frequency and TP context effects on spoken word durations in reading aloud, and whether the age groups differ in the size of these effects. Results show consistent effects of TP on word durations for all age groups. Thus, TP seems to influence the processing of words in context, beyond the well-established effect of word frequency, across the entire age range. However, the study also indicates that age groups differ in the size of TP effects, with older adults having smaller TP effects than adolescent readers. Our results show that probabilistic reduction effects in reading aloud may at least partly stem from contextual facilitation that leads to faster reading times in skilled readers, as well as in young language learners.
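    Forward transitional probability, as defined in the record, is the bigram count normalized by the count of the left neighbour: TP(w2 | w1) = count(w1 w2) / count(w1). A minimal sketch with a made-up toy corpus:

```python
from collections import Counter

def transitional_probabilities(tokens):
    """Forward TP: P(w2 | w1) = count(w1 w2) / count(w1)."""
    unigrams = Counter(tokens[:-1])            # every token that has a right neighbour
    bigrams = Counter(zip(tokens[:-1], tokens[1:]))
    return {(w1, w2): c / unigrams[w1] for (w1, w2), c in bigrams.items()}

tokens = "the dog chased the cat and the dog barked".split()
tp = transitional_probabilities(tokens)
# "the" occurs 3 times with a right neighbour; 2 of those are followed by "dog"
```

A word duration study like the one above would then use values such as `tp[("the", "dog")]` (here 2/3) as a per-token predictor.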

  16. Unfolding Implementation in Industrial Market Segmentation

    DEFF Research Database (Denmark)

    Bøjgaard, John; Ellegaard, Chris

    2011-01-01

    Market segmentation is an important method of strategic marketing and constitutes a cornerstone of the marketing literature. It has undergone extensive scientific inquiry during the past 50 years. Reporting on an extensive review of the market segmentation literature, the challenging task of implementing industrial market segmentation is discussed and unfolded in this article. Extant literature has identified segmentation implementation as a core challenge for marketers, but also one which has received limited empirical attention. Future research opportunities for marketing management are formulated in this article. Three key elements and challenges connected to execution of market segmentation are identified: organization, motivation, and adaptation.

  17. Sarbalap! Words.

    Science.gov (United States)

    Cantu, Virginia, Comp.; And Others

    Prepared by bilingual teacher aide students, this glossary provides the Spanish translation of about 1,300 English words used in the bilingual classroom. Intended to serve as a handy reference for teachers, teacher aides, and students, the glossary can also be used in teacher training programs as a vocabulary builder for future bilingual teachers…

  18. Word Formation below and above Little x: Evidence from Sign Language of the Netherlands

    Directory of Open Access Journals (Sweden)

    Inge Zwitserlood

    2004-01-01

    Full Text Available Although in many respects sign languages have a similar structure to that of spoken languages, the different modalities in which both types of languages are expressed cause differences in structure as well. One of the most striking differences between spoken and sign languages is the influence of the interface between grammar and PF on the surface form of utterances. Spoken language words and phrases are in general characterized by sequential strings of sounds, morphemes and words, while in sign languages we find that many phonemes, morphemes, and even words are expressed simultaneously. A linguistic model should be able to account for the structures that occur in both spoken and sign languages. In this paper, I will discuss the morphological/morphosyntactic structure of signs in Nederlandse Gebarentaal (Sign Language of the Netherlands, henceforth NGT), with special focus on the components ‘place of articulation’ and ‘handshape’. I will focus on their multiple functions in the grammar of NGT and argue that the framework of Distributed Morphology (DM), which accounts for word formation in spoken languages, is also suited to account for the formation of structures in sign languages. First I will introduce the phonological and morphological structure of NGT signs. Then, I will briefly outline the major characteristics of the DM framework. Finally, I will account for signs that have the same surface form but have a different morphological structure by means of that framework.

  19. The Influence of Emotional Words on Sentence Processing: Electrophysiological and Behavioral Evidence

    Science.gov (United States)

    Martin-Loeches, Manuel; Fernandez, Anabel; Schacht, Annekathrin; Sommer, Werner; Casado, Pilar; Jimenez-Ortega, Laura; Fondevila, Sabela

    2012-01-01

    Whereas most previous studies on emotion in language have focussed on single words, we investigated the influence of the emotional valence of a word on the syntactic and semantic processes unfolding during sentence comprehension, by means of event-related brain potentials (ERP). Experiment 1 assessed how positive, negative, and neutral adjectives…

  20. On the spectral unfolding of chaotic and mixed systems

    Science.gov (United States)

    Abuelenin, Sherif M.

    2018-02-01

    Random matrix theory (RMT) provides a framework for studying the spectral fluctuations in physical systems. RMT is capable of making predictions for the fluctuations only after the removal of the secular properties of the spectrum. The spectral unfolding procedure is used to separate the local level fluctuations from the overall energy dependence of the level separation. The unfolding procedure is not unique. Several studies have shown that statistics of long-range correlations in the spectrum are very sensitive to the choice of the unfolding function in polynomial unfolding. This can give misleading results regarding the chaoticity of quantum systems. In this letter, we consider the spectra of ordered eigenvalues of large random matrices. We show that the main cause behind the reported sensitivity to the unfolding polynomial degree is the inclusion of specific extreme eigenvalue(s) in the unfolding process.

  1. The impact of music on learning and consolidation of novel words.

    Science.gov (United States)

    Tamminen, Jakke; Rastle, Kathleen; Darby, Jess; Lucas, Rebecca; Williamson, Victoria J

    2017-01-01

    Music can be a powerful mnemonic device, as shown by a body of literature demonstrating that listening to text sung to a familiar melody results in better memory for the words compared to conditions where they are spoken. Furthermore, patients with a range of memory impairments appear to be able to form new declarative memories when they are encoded in the form of lyrics in a song, while unable to remember similar materials after hearing them in the spoken modality. Whether music facilitates the acquisition of completely new information, such as new vocabulary, remains unknown. Here we report three experiments in which adult participants learned novel words in the spoken or sung modality. While we found no benefit of musical presentation on free recall or recognition memory of novel words, novel words learned in the sung modality were more strongly integrated in the mental lexicon compared to words learned in the spoken modality. This advantage for the sung words was only present when the training melody was familiar. The impact of musical presentation on learning therefore appears to extend beyond episodic memory and can be reflected in the emergence and properties of new lexical representations.

  2. Does Set for Variability Mediate the Influence of Vocabulary Knowledge on the Development of Word Recognition Skills?

    Science.gov (United States)

    Tunmer, William E.; Chapman, James W.

    2012-01-01

    This study investigated the hypothesis that vocabulary influences word recognition skills indirectly through "set for variability", the ability to determine the correct pronunciation of approximations to spoken English words. One hundred forty children participating in a 3-year longitudinal study were administered reading and…

  3. Auditory Perception and Word Recognition in Cantonese-Chinese Speaking Children with and without Specific Language Impairment

    Science.gov (United States)

    Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.

    2017-01-01

    Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…

  4. Recurrent Word Combinations in EAP Test-Taker Writing: Differences between High- and Low-Proficiency Levels

    Science.gov (United States)

    Appel, Randy; Wood, David

    2016-01-01

    The correct use of frequently occurring word combinations represents an important part of language proficiency in spoken and written discourse. This study investigates the use of English-language recurrent word combinations in low-level and high-level L2 English academic essays sourced from the Canadian Academic English Language (CAEL) assessment.…

  5. Word wheels

    CERN Document Server

    Clark, Kathryn

    2013-01-01

    Targeting the specific problems learners have with language structure, these multi-sensory exercises appeal to all age groups, including adults. Exercises use sight, sound and touch and are also suitable for English as an Additional Language and Basic Skills students. Word Wheels includes off-the-shelf resources such as lesson plans and photocopiable worksheets, an interactive CD with practice exercises, and support material for the busy teacher or non-specialist staff, as well as homework activities.

  6. Mechanical unfolding of proteins: reduction to a single-reaction coordinate unfolding potential, and an application of the Jarzynski Relation

    Science.gov (United States)

    Olmsted, Peter; West, Daniel; Paci, Emanuele

    2007-03-01

    Single molecule force spectroscopy (AFM, optical tweezers, etc) has revolutionized the study of many biopolymers, including DNA, RNA, and proteins. In this talk I will discuss recent work on modelling of mechanical unfolding of proteins, as often probed by AFM. I will address two issues in obtaining a coarse-grained description of protein unfolding: how to project the entire energy landscape onto an effective one dimensional unfolding potential, and how to apply the Jarzynski Relation to extract equilibrium free energies from nonequilibrium unfolding experiments.

  7. Locus of word frequency effects in spelling to dictation: Still at the orthographic level!

    Science.gov (United States)

    Bonin, Patrick; Laroche, Betty; Perret, Cyril

    2016-11-01

    The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied in objective word frequency and in phonological neighborhood density were orally presented to adults who had to write them down. Following the additive factors logic (Sternberg, 1969, 2001), if word frequency in spelling to dictation influences a processing level (the orthographic output level) different from the one influenced by phonological neighborhood density (spoken word recognition), the impact of the two factors should be additive. In contrast, their influence should be overadditive if they act at the same processing level in spelling to dictation, namely the spoken word recognition level. We found that both factors had a reliable influence on the spelling latencies but did not interact. This finding is in line with an orthographic output locus hypothesis of word frequency effects in spelling to dictation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
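    The additive-factors prediction can be stated concretely: in a 2 × 2 design, additivity means the interaction contrast on the cell means is zero. A sketch with made-up latencies (all numbers hypothetical, chosen so the frequency effect is 60 ms and the density effect is 30 ms at both levels):

```python
# hypothetical cell means in ms: word frequency x phonological neighbourhood density
means = {("high_freq", "dense"): 620.0,
         ("high_freq", "sparse"): 650.0,
         ("low_freq", "dense"): 680.0,
         ("low_freq", "sparse"): 710.0}

def interaction_contrast(m):
    """(LF,sparse - LF,dense) - (HF,sparse - HF,dense): zero means the two
    factors are additive, i.e. they affect different processing stages."""
    return ((m[("low_freq", "sparse")] - m[("low_freq", "dense")])
            - (m[("high_freq", "sparse")] - m[("high_freq", "dense")]))
```

With these means the contrast is (710 − 680) − (650 − 620) = 0, the additive pattern the study reports; an overadditive pattern would make it positive.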

  8. Using Spoken Language to Facilitate Military Transportation Planning

    National Research Council Canada - National Science Library

    Bates, Madeleine; Ellard, Dan; Peterson, Pat; Shaked, Varda

    1991-01-01

    .... In an effort to demonstrate the relevance of SIS technology to real-world military applications, BBN has undertaken the task of providing a spoken language interface to DART, a system for military...

  9. ELSIE: The Quick Reaction Spoken Language Translation (QRSLT)

    National Research Council Canada - National Science Library

    Montgomery, Christine

    2000-01-01

    The objective of this effort was to develop a prototype, hand-held or body-mounted spoken language translator to assist military and law enforcement personnel in interacting with non-English-speaking people...

  10. Verb Errors in Advanced Spoken English

    Directory of Open Access Journals (Sweden)

    Tomáš Gráf

    2017-07-01

    Full Text Available As an experienced teacher of advanced learners of English I am deeply aware of recurrent problems which these learners experience as regards grammatical accuracy. In this paper, I focus on researching inaccuracies in the use of verbal categories. I draw the data from a spoken learner corpus LINDSEI_CZ and analyze the performance of 50 advanced (C1–C2) learners of English whose mother tongue is Czech. The main method used is Computer-aided Error Analysis within the larger framework of Learner Corpus Research. The results reveal that the key area of difficulty is the use of tenses and tense agreements, and especially the use of the present perfect. Other error-prone aspects are also described. The study also identifies a number of triggers which may lie at the root of the problems. The identification of these triggers reveals deficiencies in the teaching of grammar, mainly too much focus on decontextualized practice, use of potentially confusing rules, and the lack of attempt to deal with broader notions such as continuity and perfectiveness. Whilst the study is useful for the teachers of advanced learners, its pedagogical implications stretch to lower levels of proficiency as well.

  11. Unfolding the phenomenon of interrater agreement

    DEFF Research Database (Denmark)

    Slaug, Björn; Schilling, Oliver; Helle, Tina

    2012-01-01

    OBJECTIVE: The overall objective was to unfold the phenomenon of interrater agreement: to identify potential sources of variation in agreement data and to explore how they can be statistically accounted for. The ultimate aim was to propose recommendations for in-depth examination of agreement... shares of agreement variation were calculated. Multilevel regression analysis was carried out, using rater and item characteristics as predictors of agreement variation. RESULTS: Following a conceptual decomposition, the agreement variation was statistically disentangled into relative shares. The raters accounted for 6-11%, the items for 32-33%, and the residual for 57-60% of the variation. Multilevel regression analysis showed barrier prevalence and raters' familiarity with using standardized instruments to have the strongest impact on agreement. CONCLUSION: Supported by a conceptual analysis, we propose...

  12. Kinetics of protein unfolding at interfaces

    International Nuclear Information System (INIS)

    Yano, Yohko F

    2012-01-01

    The conformation of protein molecules is determined by a balance of various forces, including van der Waals attraction, electrostatic interaction, hydrogen bonding, and conformational entropy. When protein molecules encounter an interface, they are often adsorbed on the interface. The conformation of an adsorbed protein molecule strongly depends on the interaction between the protein and the interface. Recent time-resolved investigations have revealed that protein conformation changes during the adsorption process due to the protein-protein interaction increasing with increasing interface coverage. External conditions also affect the protein conformation. This review considers recent dynamic observations of protein adsorption at various interfaces and their implications for the kinetics of protein unfolding at interfaces. (topical review)

  13. Review of unfolding methods for neutron flux dosimetry

    International Nuclear Information System (INIS)

    Stallmann, F.W.; Kam, F.B.K.

    1975-01-01

    The primary method in reactor dosimetry is the foil activation technique. To translate the activation measurements into neutron fluxes, a special data processing technique called unfolding is needed. Some general observations about the problems and the reliability of this approach to reactor dosimetry are presented. Current unfolding methods are reviewed. 12 references. (auth)

  14. Characterization of protein unfolding with solid-state nanopores.

    Science.gov (United States)

    Li, Jiali; Fologea, Daniel; Rollings, Ryan; Ledden, Brad

    2014-03-01

    In this work, we review the process of protein unfolding characterized by a solid-state nanopore based device. The occupied or excluded volume of a protein molecule in a nanopore depends on the protein's conformation or shape. A folded protein has a larger excluded volume in a nanopore, thus it blocks more ionic current flow than its unfolded form and produces a greater current blockage amplitude. The time duration a protein stays in a pore also depends on the protein's folding state. We use Bovine Serum Albumin (BSA) as a model protein to discuss the current blockage amplitude and the time duration associated with the protein unfolding process. BSA molecules were measured in folded, partially unfolded, and completely unfolded conformations in solid-state nanopores. We discuss experimental results, data analysis, and theoretical considerations of BSA protein unfolding measured with silicon nitride nanopores. We show this nanopore method is capable of characterizing a protein's unfolding process at the single-molecule level. Problems and future studies in the characterization of protein unfolding using a solid-state nanopore device are also discussed.

  15. XBP1, Unfolded Protein Response, and Endocrine Responsiveness

    Science.gov (United States)

    2012-05-01

    organelles or unfolded/misfolded/aggregated proteins. Under normal conditions, this provides a quality-control mechanism, removing damaged ... which attempts to restore metabolic homeostasis through the catabolic lysis of aggregated proteins, unfolded/misfolded proteins or damaged subcellular ... molecular sensors and binds to the misfolded proteins in an attempt to activate their repair (30), thus activating the sensors. It seems likely

  16. Thermal unfolding of Acanthamoeba myosin II and skeletal muscle myosin.

    Science.gov (United States)

    Zolkiewski, M; Redowicz, M J; Korn, E D; Ginsburg, A

    1996-04-16

    Studies on the thermal unfolding of monomeric Acanthamoeba myosin II and other myosins, in particular skeletal muscle myosin, using differential scanning calorimetry (DSC) are reviewed. The unfolding transitions for intact myosin or its head fragment are irreversible, whereas those of the rod part and its fragments are completely reversible. Acanthamoeba myosin II unfolds with a high degree of cooperativity from ca. 40-45 degrees C at pH 7.5 in 0.6 M KCl, producing a single, sharp endotherm in DSC. In contrast, thermal transitions of rabbit skeletal muscle myosin occur over a broader temperature range (ca. 40-60 degrees C) under the same conditions. The DSC studies on the unfolding of the myosin rod and its fragments allow identification of cooperative domains, each of which unfolds according to a two-state mechanism. Also, DSC data show the effect of the nucleotide-induced conformational changes in the myosin head on the protein stability.

  17. Word Domain Disambiguation via Word Sense Disambiguation

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.

    2006-06-04

    Word subject domains have been widely used to improve the performance of word sense disambiguation algorithms. However, comparatively little effort has been devoted so far to the disambiguation of word subject domains. The few existing approaches have focused on the development of algorithms specific to word domain disambiguation. In this paper we explore an alternative approach where word domain disambiguation is achieved via word sense disambiguation. Our study shows that this approach yields very strong results, suggesting that word domain disambiguation can be addressed in terms of word sense disambiguation with no need for special-purpose algorithms.

  18. Character-based Recognition of Simple Word Gesture

    Directory of Open Access Journals (Sweden)

    Paulus Insap Santosa

    2013-11-01

Full Text Available People with normal senses use spoken language to communicate with others. This method cannot be used by those with hearing and speech impairments, and these two groups will have difficulty when they try to communicate with each other using their own languages. Sign language is not easy to learn: there are various sign languages, and not many tutors are available. This research focused on recognizing a simple word gesture based on the characters that form the word to be recognized. The method used for character recognition was the nearest-neighbour method, which identified different fingers by the different markers attached to each finger. Recognition of a simple word gesture was tested by providing the series of characters that makes up the intended word. The accuracy of simple word gesture recognition depended upon the accuracy of recognition of each character.
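The nearest-neighbour step described in this record can be sketched as follows. The marker colours, labels and feature choice are hypothetical illustrations, not values from the paper:

```python
import numpy as np

# Hypothetical templates: each finger marker is represented by a feature
# vector (here, the mean RGB colour of its marker). An observed marker is
# assigned the label of the closest stored template (1-nearest neighbour).
TEMPLATES = {
    "thumb":  np.array([255.0, 0.0, 0.0]),    # red marker
    "index":  np.array([0.0, 255.0, 0.0]),    # green marker
    "middle": np.array([0.0, 0.0, 255.0]),    # blue marker
}

def nearest_neighbour(feature):
    """Return the template label with the smallest Euclidean distance."""
    return min(TEMPLATES, key=lambda k: np.linalg.norm(TEMPLATES[k] - feature))

observed = np.array([240.0, 20.0, 10.0])  # a noisy red-ish marker
print(nearest_neighbour(observed))        # -> thumb
```

A word is then recognized by running this classifier once per character and concatenating the results, which is why the abstract notes that word-level accuracy is bounded by per-character accuracy.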

  19. When does word frequency influence written production?

    Directory of Open Access Journals (Sweden)

    Cristina eBaus

    2013-12-01

Full Text Available The aim of the present study was to explore the central (e.g., lexical processing) and peripheral processes (motor preparation and execution) underlying word production during typewriting. To do so, we tested non-professional typers in a picture typing task while continuously recording EEG. Participants were instructed to write (by means of a standard keyboard) the corresponding name for a given picture. The lexical frequency of the words was manipulated: half of the picture names were of high-frequency while the remaining were of low-frequency. Different measures were obtained: (1) first keystroke latency and (2) keystroke latency of the subsequent letters and duration of the word. Moreover, ERPs locked to the onset of the picture presentation were analysed to explore the temporal course of word frequency in typewriting. The results showed an effect of word frequency for the first keystroke latency but not for the duration of the word or the speed at which letters were typed (interstroke intervals). The electrophysiological results showed the expected ERP frequency effect at posterior sites: amplitudes for low-frequency words were more positive than those for high-frequency words. However, relative to previous evidence in the spoken modality, the frequency effect appeared in a later time-window. These results demonstrate two marked differences in the processing dynamics underpinning typing compared to speaking: First, central processing dynamics between speaking and typing differ already in the manner in which words are accessed; second, central processing differences in typing, unlike speaking, do not cascade to peripheral processes involved in response execution.

  20. When does word frequency influence written production?

    Science.gov (United States)

    Baus, Cristina; Strijkers, Kristof; Costa, Albert

    2013-01-01

The aim of the present study was to explore the central (e.g., lexical processing) and peripheral processes (motor preparation and execution) underlying word production during typewriting. To do so, we tested non-professional typers in a picture typing task while continuously recording EEG. Participants were instructed to write (by means of a standard keyboard) the corresponding name for a given picture. The lexical frequency of the words was manipulated: half of the picture names were of high-frequency while the remaining were of low-frequency. Different measures were obtained: (1) first keystroke latency and (2) keystroke latency of the subsequent letters and duration of the word. Moreover, ERPs locked to the onset of the picture presentation were analyzed to explore the temporal course of word frequency in typewriting. The results showed an effect of word frequency for the first keystroke latency but not for the duration of the word or the speed at which letters were typed (interstroke intervals). The electrophysiological results showed the expected ERP frequency effect at posterior sites: amplitudes for low-frequency words were more positive than those for high-frequency words. However, relative to previous evidence in the spoken modality, the frequency effect appeared in a later time-window. These results demonstrate two marked differences in the processing dynamics underpinning typing compared to speaking: First, central processing dynamics between speaking and typing differ already in the manner in which words are accessed; second, central processing differences in typing, unlike speaking, do not cascade to peripheral processes involved in response execution.

  1. Word and object recognition during reading acquisition: MEG evidence.

    Science.gov (United States)

    Caffarra, Sendy; Martin, Clara D; Lizarazu, Mikel; Lallier, Marie; Zarraga, Asier; Molinaro, Nicola; Carreiras, Manuel

    2017-04-01

Studies on adults suggest that reading-induced brain changes might not be limited to linguistic processes. It is still unclear whether these results can be generalized to reading development. The present study shows to what extent neural responses to verbal and nonverbal stimuli are reorganized while children learn to read. MEG data of thirty Basque children (4-8y) were collected while they were presented with written words, spoken words and visual objects. The evoked fields elicited by the experimental stimuli were compared to their scrambled counterparts. Visual words elicited left posterior (200-300ms) and temporal activations (400-800ms). The size of these effects increased as reading performance improved, suggesting a reorganization of children's visual word responses. Spoken words elicited greater left temporal responses relative to scrambles (300-700ms). No evidence for the influence of reading expertise was observed. Brain responses to objects were greater than to scrambles in bilateral posterior regions (200-500ms). There was a greater left hemisphere involvement as reading errors decreased, suggesting a strengthened verbal decoding of visual configurations with reading acquisition. The present results reveal that learning to read not only influences written word processing, but also affects visual object recognition, suggesting a non-language specific impact of reading on children's neural mechanisms. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Word and object recognition during reading acquisition: MEG evidence

    Directory of Open Access Journals (Sweden)

    Sendy Caffarra

    2017-04-01

Full Text Available Studies on adults suggest that reading-induced brain changes might not be limited to linguistic processes. It is still unclear whether these results can be generalized to reading development. The present study shows to what extent neural responses to verbal and nonverbal stimuli are reorganized while children learn to read. MEG data of thirty Basque children (4–8y) were collected while they were presented with written words, spoken words and visual objects. The evoked fields elicited by the experimental stimuli were compared to their scrambled counterparts. Visual words elicited left posterior (200–300 ms) and temporal activations (400–800 ms). The size of these effects increased as reading performance improved, suggesting a reorganization of children’s visual word responses. Spoken words elicited greater left temporal responses relative to scrambles (300–700 ms). No evidence for the influence of reading expertise was observed. Brain responses to objects were greater than to scrambles in bilateral posterior regions (200–500 ms). There was a greater left hemisphere involvement as reading errors decreased, suggesting a strengthened verbal decoding of visual configurations with reading acquisition. The present results reveal that learning to read not only influences written word processing, but also affects visual object recognition, suggesting a non-language specific impact of reading on children’s neural mechanisms.

  3. A Spoken English Recognition Expert System.

    Science.gov (United States)

    1983-09-01

    approach. In comparing these two approaches, Chomsky writes: Our main conclusion will be that familiar linguistic theory has only a limited adequacy...from Chomsky: In general, we introduce an element or a sentence form transformationally only when by so doing we manage to eliminate special...testing and debugging of functionally isolated modules. LISP was considered because of the facility with which it can manipulate word strings. The

  4. Brain Basis of Phonological Awareness for Spoken Language in Children and Its Disruption in Dyslexia

    Science.gov (United States)

    Norton, Elizabeth S.; Christodoulou, Joanna A.; Gaab, Nadine; Lieberman, Daniel A.; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D. E.

    2012-01-01

Phonological awareness, the knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7–13) and a younger group of kindergarteners (ages 5–6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia. PMID:21693783

  5. Psycholinguistic norms for action photographs in French and their relationships with spoken and written latencies.

    Science.gov (United States)

    Bonin, Patrick; Boyer, Bruno; Méot, Alain; Fayol, Michel; Droit, Sylvie

    2004-02-01

    A set of 142 photographs of actions (taken from Fiez & Tranel, 1997) was standardized in French on name agreement, image agreement, conceptual familiarity, visual complexity, imageability, age of acquisition, and duration of the depicted actions. Objective word frequency measures were provided for the infinitive modal forms of the verbs and for the cumulative frequency of the verbal forms associated with the photographs. Statistics on the variables collected for action items were provided and compared with the statistics on the same variables collected for object items. The relationships between these variables were analyzed, and certain comparisons between the current database and other similar published databases of pictures of actions are reported. Spoken and written naming latencies were also collected for the photographs of actions, and multiple regression analyses revealed that name agreement, image agreement, and age of acquisition are the major determinants of action naming speed. Finally, certain analyses were performed to compare object and action naming times. The norms and the spoken and written naming latencies corresponding to the pictures are available on the Internet (http://www.psy.univ-bpclermont.fr/~pbonin/pbonin-eng.html) and should be of great use to researchers interested in the processing of actions.

  6. Electronic Control System Of Home Appliances Using Speech Command Words

    Directory of Open Access Journals (Sweden)

    Aye Min Soe

    2015-06-01

Full Text Available Abstract The main idea of this paper is to develop a speech recognition system. Using this system, smart home appliances are controlled by spoken words. The spoken words chosen for recognition are "Fan On", "Fan Off", "Light On", "Light Off", "TV On" and "TV Off". The input of the system takes speech signals to control home appliances. The proposed system has two main parts: speech recognition and an electronic control system for smart home appliances. Speech recognition is implemented in the MATLAB environment and contains two main modules: feature extraction and feature matching. Mel Frequency Cepstral Coefficients (MFCC) are used for feature extraction. A Vector Quantization (VQ) approach using a clustering algorithm is applied for feature matching. In the electrical home appliance control system, an RF module is used to carry the command signal from the PC to the microcontroller wirelessly. The microcontroller is connected to a driver circuit for the relay and motor. The input commands are recognized very well, and the system performs well in controlling home appliances by spoken words.
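The VQ feature-matching stage described in this record can be sketched as follows. This is a minimal illustration in Python with NumPy rather than MATLAB, with random 2-D vectors standing in for real MFCC frames; the cluster positions and word labels are invented. Each command word gets a k-means codebook, and an utterance is assigned to the word whose codebook quantizes its frames with the least distortion:

```python
import numpy as np

def train_codebook(frames, k=4, iters=25, seed=0):
    """Toy k-means (LBG-style) codebook over feature frames."""
    rng = np.random.default_rng(seed)
    codebook = frames[rng.choice(len(frames), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each frame to its nearest codeword
        d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # leave empty cells unchanged
                codebook[j] = frames[labels == j].mean(axis=0)
    return codebook

def distortion(frames, codebook):
    """Mean distance from each frame to its nearest codeword."""
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Synthetic stand-ins for MFCC frames of two command words.
rng = np.random.default_rng(1)
frames_on = rng.normal([0.0, 0.0], 0.3, (200, 2))
frames_off = rng.normal([5.0, 5.0], 0.3, (200, 2))
cb_on, cb_off = train_codebook(frames_on), train_codebook(frames_off)

test_utterance = rng.normal([0.0, 0.0], 0.3, (50, 2))   # an "On"-like utterance
recognized = "on" if distortion(test_utterance, cb_on) < distortion(test_utterance, cb_off) else "off"
```

In a real system the frames would be MFCC vectors extracted from the microphone signal, and one codebook would be trained per command word.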

  7. Mrs. Malaprop’s Neighborhood: Using Word Errors to Reveal Neighborhood Structure

    Science.gov (United States)

    Goldrick, Matthew; Folk, Jocelyn R.; Rapp, Brenda

    2009-01-01

    Many theories of language production and perception assume that in the normal course of processing a word, additional non-target words (lexical neighbors) become active. The properties of these neighbors can provide insight into the structure of representations and processing mechanisms in the language processing system. To infer the properties of neighbors, we examined the non-semantic errors produced in both spoken and written word production by four individuals who suffered neurological injury. Using converging evidence from multiple language tasks, we first demonstrate that the errors originate in disruption to the processes involved in the retrieval of word form representations from long-term memory. The targets and errors produced were then examined for their similarity along a number of dimensions. A novel statistical simulation procedure was developed to determine the significance of the observed similarities between targets and errors relative to multiple chance baselines. The results reveal that in addition to position-specific form overlap (the only consistent claim of traditional definitions of neighborhood structure) the dimensions of lexical frequency, grammatical category, target length and initial segment independently contribute to the activation of non-target words in both spoken and written production. Additional analyses confirm the relevance of these dimensions for word production showing that, in both written and spoken modalities, the retrieval of a target word is facilitated by increasing neighborhood density, as defined by the results of the target-error analyses. PMID:20161591
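The chance-baseline logic of the statistical simulation described in this record can be illustrated with a toy permutation test; the word pairs below are invented examples, not items from the study. The observed target-error similarity is compared against similarities obtained by randomly re-pairing errors with targets:

```python
import numpy as np

rng = np.random.default_rng(1)

def shared_initial(a, b):
    """One simple similarity dimension: same initial segment."""
    return a[0] == b[0]

targets = ["cat", "dog", "pen", "cup", "map"]
errors  = ["can", "dot", "peg", "cut", "mat"]  # hypothetical target-error pairs

observed = np.mean([shared_initial(t, e) for t, e in zip(targets, errors)])

# Chance baseline: re-pair errors with random targets many times and record
# how similar the shuffled pairings are on the same dimension.
null = []
for _ in range(10000):
    shuffled = rng.permutation(errors)
    null.append(np.mean([shared_initial(t, e) for t, e in zip(targets, shuffled)]))
p_value = np.mean(np.array(null) >= observed)
```

A dimension (here, the initial segment) counts as contributing to neighborhood structure when the observed similarity sits in the upper tail of this null distribution, i.e. when `p_value` is small.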

  8. "Visual" Cortex Responds to Spoken Language in Blind Children.

    Science.gov (United States)

    Bedny, Marina; Richardson, Hilary; Saxe, Rebecca

    2015-08-19

    Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4-17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions. Studies of plasticity provide key insights into how experience shapes the human brain. The "visual" cortex of adults who are blind from birth responds to touch, sound, and spoken language. To date, all existing studies have been conducted with adults, so little is known about the developmental trajectory of plasticity. We used fMRI to study the emergence of "visual" cortex responses to sound and spoken language in blind children and adolescents. We find that "visual" cortex responses to sound increase between 4 and 17 years of age. By contrast, responses to spoken language are present by 4 years of age and are not related to Braille-learning. These findings suggest that, early in development, human cortex can take on a strikingly wide range of functions. 
Copyright © 2015 the authors 0270-6474/15/3511674-08$15.00/0.

  9. Adapting the Freiburg monosyllabic word test for Slovenian

    Directory of Open Access Journals (Sweden)

    Tatjana Marvin

    2017-12-01

Full Text Available Speech audiometry is one of the standard methods used to diagnose the type of hearing loss and to assess the communication function of the patient by determining the level of the patient’s ability to understand and repeat words presented to him or her in a hearing test. For this purpose, the Slovenian adaptations of the German tests developed by Hahlbrock (1953, 1960) – the Freiburg Monosyllabic Word Test and the Freiburg Number Test – are used in Slovenia (adapted in 1968 by Pompe). In this paper we focus on the Freiburg Monosyllabic Word Test for Slovenian, which has been criticized by patients as well as in the literature for the unequal difficulty and frequency of the words, with many of these being extremely rare or even obsolete. As part of the patient’s communication function is retrieving the meaning of individual words by guessing, the less frequent and consequently less familiar words do not contribute to reliable testing results. We therefore adapt the test by identifying and removing such words and supplement them with phonetically similar words to preserve the phonetic balance of the list. The words used for replacement are extracted from the written corpus of Slovenian Gigafida and the spoken corpus of Slovenian GOS, while the optimal combinations of words are established by using computational algorithms.

  10. A genetic algorithm based method for neutron spectrum unfolding

    International Nuclear Information System (INIS)

    Suman, Vitisha; Sarkar, P.K.

    2013-03-01

An approach to neutron spectrum unfolding based on a stochastic evolutionary search mechanism, the Genetic Algorithm (GA), is presented. It is tested by unfolding a set of simulated spectra, and the unfolded spectra are compared to the output of a standard code, FERDOR. The method was then applied to a set of measured pulse-height spectra of neutrons from an AmBe source, as well as of neutrons emitted from Li(p,n) and Ag(C,n) nuclear reactions carried out in the accelerator environment. The unfolded spectra, compared to the output of FERDOR, show good agreement in the case of the AmBe and Li(p,n) spectra; in the case of the Ag(C,n) spectra, the GA method results in some fluctuations. The necessity of smoothing the obtained solution is also studied, which leads to an approximation that finally yields an appropriate solution. Several smoothing techniques, such as second-difference smoothing, Monte Carlo averaging, a combination of both, and Gaussian-based smoothing, are also studied. Unfolded results obtained after inclusion of the smoothing criteria are in close agreement with the output of the FERDOR code. The present method is also tested on a set of underdetermined problems, the outputs of which are compared to the unfolded spectra obtained from FERDOR applied to a completely determined problem, showing a good match. The distribution of the unfolded spectra is also studied. Uncertainty propagation in the unfolded spectra due to errors present in the measurement as well as in the response function is also carried out. The method appears promising for unfolding both completely determined and underdetermined problems, and it also has provisions for carrying out uncertainty analysis. (author)
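The GA-based unfolding idea in this record can be sketched schematically, using entirely made-up numbers (a 4-group toy response matrix, not FERDOR or measured data): candidate spectra evolve by selection, crossover and mutation so as to minimize the misfit between predicted and measured counts.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy detector response R (4 energy groups x 4 channels) and a "true" spectrum.
R = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.2, 0.7, 0.1, 0.0],
              [0.0, 0.2, 0.7, 0.1],
              [0.0, 0.0, 0.1, 0.9]])
true_phi = np.array([1.0, 3.0, 2.0, 0.5])
counts = R @ true_phi            # simulated measurement (noise-free for clarity)

def fitness(phi):
    """Negative sum-of-squares misfit between predicted and measured counts."""
    return -np.sum((R @ phi - counts) ** 2)

pop = rng.uniform(0.0, 5.0, size=(200, 4))       # random non-negative spectra
for generation in range(300):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-50:]]      # selection: keep the best 50
    idx = rng.integers(0, 50, size=(200, 2))     # pick parent pairs
    mix = rng.random((200, 4))
    children = np.where(mix < 0.5, parents[idx[:, 0]], parents[idx[:, 1]])  # uniform crossover
    children += rng.normal(0.0, 0.05, children.shape)  # mutation
    pop = np.clip(children, 0.0, None)                 # enforce non-negative flux

best = pop[np.argmax([fitness(p) for p in pop])]
```

A real implementation would add the smoothing criteria discussed in the abstract as a penalty term in the fitness, and propagate measurement and response uncertainties by repeating the unfolding over perturbed inputs.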

  11. Time course of syllabic and sub-syllabic processing in Mandarin word production: Evidence from the picture-word interference paradigm.

    Science.gov (United States)

    Wang, Jie; Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2017-06-05

    The time course of phonological encoding in Mandarin monosyllabic word production was investigated by using the picture-word interference paradigm. Participants were asked to name pictures in Mandarin while visual distractor words were presented before, at, or after picture onset (i.e., stimulus-onset asynchrony/SOA = -100, 0, or +100 ms, respectively). Compared with the unrelated control, the distractors sharing atonal syllables with the picture names significantly facilitated the naming responses at -100- and 0-ms SOAs. In addition, the facilitation effect of sharing word-initial segments only appeared at 0-ms SOA, and null effects were found for sharing word-final segments. These results indicate that both syllables and subsyllabic units play important roles in Mandarin spoken word production and more critically that syllabic processing precedes subsyllabic processing. The current results lend strong support to the proximate units principle (O'Seaghdha, Chen, & Chen, 2010), which holds that the phonological structure of spoken word production is language-specific and that atonal syllables are the proximate phonological units in Mandarin Chinese. On the other hand, the significance of word-initial segments over word-final segments suggests that serial processing of segmental information seems to be universal across Germanic languages and Chinese, which remains to be verified in future studies.

  12. Unfolding of globular polymers by external force

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Samuel; Terentjev, Eugene M., E-mail: emt1000@cam.ac.uk [Cavendish Laboratory, University of Cambridge, J.J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom)

    2015-11-14

    We examine the problem of a polymer chain, folded into a globule in poor solvent, subjected to a constant tensile force. Such a situation represents a Gibbs thermodynamic ensemble and is useful for analysing force-clamp atomic force microscopy measurements, now very common in molecular biophysics. Using a basic Flory mean-field theory, we account for surface interactions of monomers with solvent. Under an increasing tensile force, a first-order phase transition occurs from a compact globule to a fully extended chain, in an “all-or-nothing” unfolding event. This contrasts with the regime of imposed extension, first studied by Halperin and Zhulina [Europhys. Lett. 15, 417 (1991)], where there is a regime of coexistence of a partial globule with an extended chain segment. We relate the transition forces in this problem to the solvent quality and degree of polymerisation, and also find analytical expressions for the energy barriers present in the problem. Using these expressions, we analyse the kinetic problem of a force-ramp experiment and show that the force at which a globule ruptures depends on the rate of loading.
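The "all-or-nothing" character of the transition can be conveyed with a schematic two-state free energy; this is a generic sketch, not the paper's actual Flory expressions. At constant force the globule and stretched-chain branches cross at a well-defined threshold:

```latex
% Schematic Gibbs free energies of the two branches for a chain of N monomers
% (\varepsilon: monomer cohesion in poor solvent, \gamma: surface energy
% scale, a: monomer size, f: applied tensile force).
G_{\text{globule}}(N) \approx -\varepsilon N + \gamma N^{2/3},
\qquad
G_{\text{stretched}}(N, f) \approx -f N a .
% First-order transition where the branches cross:
%   G_globule = G_stretched  =>  f^{*} \approx (\varepsilon - \gamma N^{-1/3})/a,
% so for large N the rupture force tends to \varepsilon/a, with a finite-size
% surface correction of order N^{-1/3}.
```

Because both branches are extensive in $N$ while the barrier between them comes from the surface term, the chain jumps discontinuously from compact to fully extended at $f^{*}$, in contrast with the partial-globule coexistence seen at imposed extension.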

  13. Neutron spectrum unfolding using neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.

    2004-01-01

An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using a large set of neutron spectra compiled by the International Atomic Energy Agency. These include spectra from isotopic neutron sources, and reference and operational neutron spectra obtained from accelerators and nuclear reactors. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra and the UTA4 matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the corresponding spectrum was used as output during neural network training. The network has 7 input nodes, 56 neurons in the hidden layer and 31 neurons in the output layer. After training, the network was tested with the Bonner sphere count rates produced by twelve neutron spectra. The network allows unfolding of the neutron spectrum from count rates measured with Bonner spheres. Good results are obtained when the test count rates belong to neutron spectra used during training, and acceptable results are obtained for count rates from actual neutron fields; however, the network fails when the count rates belong to monoenergetic neutron sources. (Author)
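The count-rates-to-spectrum mapping in this record can be sketched with a miniature multilayer perceptron trained by plain gradient descent. Everything here is made up for illustration except the 7-56-31 layout quoted in the abstract: the synthetic "response" below is a random matrix, not the UTA4 response, and the training data are random vectors, not IAEA spectra.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 7, 56, 31   # layout quoted in the abstract

# Synthetic stand-in data: a fixed random rectified-linear map plays the role
# of the physics relating count rates to spectra.
W_true = rng.normal(0.0, 0.3, (n_in, n_out))
X = rng.uniform(0.0, 1.0, (500, n_in))      # "count rate" vectors
Y = np.maximum(X @ W_true, 0.0)             # "spectra" to be recovered

W1 = rng.normal(0.0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, n_out)); b2 = np.zeros(n_out)

lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                # hidden layer
    P = H @ W2 + b2                         # linear output layer
    G = 2.0 * (P - Y) / len(X)              # gradient of MSE w.r.t. P
    W2 -= lr * H.T @ G;  b2 -= lr * G.sum(axis=0)
    GH = (G @ W2.T) * (1.0 - H**2)          # backprop through tanh
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(axis=0)

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2)
baseline = np.mean((Y - Y.mean(axis=0)) ** 2)   # predict-the-mean error
```

After training, a new count-rate vector is unfolded with a single forward pass; the abstract's failure on monoenergetic sources is the usual caveat that such a network only interpolates within the family of spectra it was trained on.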

  14. Does textual feedback hinder spoken interaction in natural language?

    Science.gov (United States)

    Le Bigot, Ludovic; Terrier, Patrice; Jamet, Eric; Botherel, Valerie; Rouet, Jean-Francois

    2010-01-01

    The aim of the study was to determine the influence of textual feedback on the content and outcome of spoken interaction with a natural language dialogue system. More specifically, the assumption that textual feedback could disrupt spoken interaction was tested in a human-computer dialogue situation. In total, 48 adult participants, familiar with the system, had to find restaurants based on simple or difficult scenarios using a real natural language service system in a speech-only (phone), speech plus textual dialogue history (multimodal) or text-only (web) modality. The linguistic contents of the dialogues differed as a function of modality, but were similar whether the textual feedback was included in the spoken condition or not. These results add to burgeoning research efforts on multimodal feedback, in suggesting that textual feedback may have little or no detrimental effect on information searching with a real system. STATEMENT OF RELEVANCE: The results suggest that adding textual feedback to interfaces for human-computer dialogue could enhance spoken interaction rather than create interference. The literature currently suggests that adding textual feedback to tasks that depend on the visual sense benefits human-computer interaction. The addition of textual output when the spoken modality is heavily taxed by the task was investigated.

  15. "They never realized that, you know": linguistic collocation and interactional functions of you know in contemporary academic spoken english

    Directory of Open Access Journals (Sweden)

    Rodrigo Borba

    2012-12-01

Full Text Available Discourse markers are a collection of one-word or multi-word terms that help language users organize their utterances on the grammatical, semantic, pragmatic and interactional levels. Researchers have characterized some of their roles in written and spoken discourse (Halliday & Hasan, 1976; Schiffrin, 1988, 2001). Following this trend, this paper advances a discussion of discourse markers in contemporary academic spoken English. Through quantitative and qualitative analyses of the use of the discourse marker ‘you know’ in the Michigan Corpus of Academic Spoken English (MICASE), we describe its frequency in this corpus, its collocation on the sentence level and its interactional functions. Grammatically, a concordance analysis shows that you know (like other discourse markers) is linguistically flexible, as it seems to be placed in any grammatical slot of an utterance. Interactionally, a qualitative analysis indicates that its use in contemporary English goes beyond the uses described in the literature. We argue that besides serving as a hedging strategy (Lakoff, 1975), you know also serves as a powerful face-saving (Goffman, 1955) technique which constructs students’ identities vis-à-vis their professors’ and vice versa.
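The concordance step behind such collocation analyses can be sketched with a tiny key-word-in-context (KWIC) routine; the sample sentence below is invented, not a MICASE excerpt:

```python
def kwic(tokens, target, window=3):
    """Key-word-in-context lines for every occurrence of a (possibly
    multi-word) target in a token list."""
    t = target.split()
    hits = []
    for i in range(len(tokens) - len(t) + 1):
        if tokens[i:i + len(t)] == t:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + len(t):i + len(t) + window])
            hits.append(f"{left} [{target}] {right}".strip())
    return hits

sample = "they never realized that you know the deadline had passed".split()
lines = kwic(sample, "you know")
```

Counting the tokens that recur in the left and right windows across many such lines is what yields the sentence-level collocation profile the abstract describes.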

  16. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning.

    Science.gov (United States)

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M

    2016-03-01

A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest--ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success.

  17. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning

    Science.gov (United States)

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C.M.

    2016-01-01

A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The “competition” (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest – ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success.

  18. Gated Word Recognition by Postlingually Deafened Adults with Cochlear Implants: Influence of Semantic Context

    Science.gov (United States)

    Patro, Chhayakanta; Mendel, Lisa Lucks

    2018-01-01

    Purpose: The main goal of this study was to investigate the minimum amount of sensory information required to recognize spoken words (isolation points [IPs]) in listeners with cochlear implants (CIs) and investigate facilitative effects of semantic contexts on the IPs. Method: Listeners with CIs as well as those with normal hearing (NH)…

  19. Age of Acquisition and Sensitivity to Gender in Spanish Word Recognition

    Science.gov (United States)

    Foote, Rebecca

    2014-01-01

    Speakers of gender-agreement languages use gender-marked elements of the noun phrase in spoken-word recognition: A congruent marking on a determiner or adjective facilitates the recognition of a subsequent noun, while an incongruent marking inhibits its recognition. However, while monolinguals and early language learners evidence this…

  20. Cognitive, Linguistic and Print-Related Predictors of Preschool Children's Word Spelling and Name Writing

    Science.gov (United States)

    Milburn, Trelani F.; Hipfner-Boucher, Kathleen; Weitzman, Elaine; Greenberg, Janice; Pelletier, Janette; Girolametto, Luigi

    2017-01-01

    Preschool children begin to represent spoken language in print long before receiving formal instruction in spelling and writing. The current study sought to identify the component skills that contribute to preschool children's ability to begin to spell words and write their name. Ninety-five preschool children (mean age = 57 months) completed a…

  1. Tracking Eye Movements to Localize Stroop Interference in Naming: Word Planning Versus Articulatory Buffering

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2014-01-01

    Investigators have found no agreement on the functional locus of Stroop interference in vocal naming. Whereas it has long been assumed that the interference arises during spoken word planning, more recently some investigators have revived an account from the 1960s and 1970s holding that the

  2. Six-Month-Olds Comprehend Words that Refer to Parts of the Body

    Science.gov (United States)

    Tincoff, Ruth; Jusczyk, Peter W.

    2012-01-01

    Comprehending spoken words requires a lexicon of sound patterns and knowledge of their referents in the world. Tincoff and Jusczyk (1999) demonstrated that 6-month-olds link the sound patterns "Mommy" and "Daddy" to video images of their parents, but not to other adults. This finding suggests that comprehension emerges at this young age and might…

  3. High Frequency rTMS over the Left Parietal Lobule Increases Non-Word Reading Accuracy

    Science.gov (United States)

    Costanzo, Floriana; Menghini, Deny; Caltagirone, Carlo; Oliveri, Massimiliano; Vicari, Stefano

    2012-01-01

    Increasing evidence in the literature supports the usefulness of Transcranial Magnetic Stimulation (TMS) in studying reading processes. Two brain regions are primarily involved in phonological decoding: the left superior temporal gyrus (STG), which is associated with the auditory representation of spoken words, and the left inferior parietal lobe…

  4. Second Language Learners' Contiguous and Discontiguous Multi-Word Unit Use Over Time

    NARCIS (Netherlands)

    Yuldashev, Aziz; Fernandez, Julieta; Thorne, Steven L.

    Research has described the key role of formulaic language use in both written and spoken communication (Schmitt, 2004; Wray, 2002), as well as in relation to L2 learning (Ellis, Simpson-Vlach, & Maynard, 2008). Relatively few studies have examined related fixed and semifixed multi-word units (MWUs),

  5. Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children

    Science.gov (United States)

    Vales, Catarina; Smith, Linda B.

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…

  6. Word, Words, Words: Ellul and the Mediocritization of Language

    Science.gov (United States)

    Foltz, Franz; Foltz, Frederick

    2012-01-01

    The authors explore how technique via propaganda has replaced the word with images creating a mass society and limiting the ability of people to act as individuals. They begin by looking at how words affect human society and how they have changed over time. They explore how technology has altered the meaning of words in order to create a more…

  7. Impact of Diglossia on Word and Non-word Repetition among Language Impaired and Typically Developing Arabic Native Speaking Children

    Directory of Open Access Journals (Sweden)

    Elinor Saiegh-Haddad

    2017-11-01

    The study tested the impact of the phonological and lexical distance between a dialect of Palestinian Arabic spoken in the north of Israel (SpA) and Modern Standard Arabic (StA or MSA) on word and non-word repetition in children with specific language impairment (SLI) and in typically developing (TD) age-matched controls. Fifty kindergarten children (25 SLI, 25 TD; mean age 5;5) and fifty first grade children (25 SLI, 25 TD; mean age 6;11) were tested with a repetition task for 1–4 syllable long real words and pseudo words. Items varied systematically in whether each encoded a novel StA phoneme or not, namely a phoneme that is only used in StA but not in the spoken dialect targeted. Real words also varied in whether they were lexically novel, meaning whether the word is used only in StA, but not in SpA. SLI children were found to significantly underperform TD children on all repetition tasks, indicating a general phonological memory deficit. More interesting for the current investigation is the observed strong and consistent effect of phonological novelty on word and non-word repetition in SLI and TD children, with a stronger effect observed in SLI. In contrast with phonological novelty, the effect of lexical novelty on word repetition was limited and it did not interact with group. The results are argued to reflect the role of linguistic distance in phonological memory for novel linguistic units in Arabic SLI and, hence, to support a specific Linguistic Distance Hypothesis of SLI in a diglossic setting. The implications of the findings for assessment, diagnosis and intervention with Arabic speaking children with SLI are discussed.

  8. Spoken Word Recognition in Children with Autism Spectrum Disorder: The Role of Visual Disengagement

    Science.gov (United States)

    Venker, Courtney E.

    2017-01-01

    Deficits in visual disengagement are one of the earliest emerging differences in infants who are later diagnosed with autism spectrum disorder. Although researchers have speculated that deficits in visual disengagement could have negative effects on the development of children with autism spectrum disorder, we do not know which skills are…

  9. Deviant ERP Response to Spoken Non-Words among Adolescents Exposed to Cocaine in Utero

    Science.gov (United States)

    Landi, Nicole; Crowley, Michael J.; Wu, Jia; Bailey, Christopher A.; Mayes, Linda C.

    2012-01-01

    Concern for the impact of prenatal cocaine exposure (PCE) on human language development is based on observations of impaired performance on assessments of language skills in these children relative to non-exposed children. We investigated the effects of PCE on speech processing ability using event-related potentials (ERPs) among a sample of…

  10. You had me at "Hello": Rapid extraction of dialect information from spoken words.

    Science.gov (United States)

    Scharinger, Mathias; Monahan, Philip J; Idsardi, William J

    2011-06-15

    Research on the neuronal underpinnings of speaker identity recognition has identified voice-selective areas in the human brain with evolutionary homologues in non-human primates who have comparable areas for processing species-specific calls. Most studies have focused on estimating the extent and location of these areas. In contrast, relatively few experiments have investigated the time-course of speaker identity, and in particular, dialect processing and identification by electro- or neuromagnetic means. We show here that dialect extraction occurs speaker-independently, pre-attentively and categorically. We used Standard American English and African-American English exemplars of 'Hello' in a magnetoencephalographic (MEG) Mismatch Negativity (MMN) experiment. The MMN as an automatic change detection response of the brain reflected dialect differences that were not entirely reducible to acoustic differences between the pronunciations of 'Hello'. Source analyses of the M100, an auditory evoked response to the vowels suggested additional processing in voice-selective areas whenever a dialect change was detected. These findings are not only relevant for the cognitive neuroscience of language, but also for the social sciences concerned with dialect and race perception. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Attention for speaking: domain-general control from the anterior cingulate cortex in spoken word production

    NARCIS (Netherlands)

    Piai, V.; Roelofs, A.P.A.; Acheson, D.J.; Takashima, A.

    2013-01-01

    Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and

  12. Pupils' Knowledge and Spoken Literary Response beyond Polite Meaningless Words: Studying Yeats's "Easter, 1916"

    Science.gov (United States)

    Gordon, John

    2016-01-01

    This article presents research exploring the knowledge pupils bring to texts introduced to them for literary study, how they share knowledge through talk, and how it is elicited by the teacher in the course of an English lesson. It sets classroom discussion in a context where new examination requirements diminish the relevance of social, cultural…

  13. Revenge of the Spoken Word?: Writing, Performance, and New Media in Urban West Africa

    Directory of Open Access Journals (Sweden)

    Moradewun Adejunmobi

    2011-03-01

    This paper examines the impact of digital media on the relationship between writing, performance, and textuality from the perspective of literate verbal artists in Mali. It considers why some highly educated verbal artists in urban Africa self-identify as writers despite the oralizing properties of new media, and despite the fact that their own works circulate entirely through performance. The motivating factors are identified as a desire to present themselves as composers rather than as performers of texts, and to differentiate their work from that of minimally educated performers of texts associated with traditional orality.

  14. Roy Reider (1914-1979) selections from his written and spoken words

    International Nuclear Information System (INIS)

    Paxton, H.C.

    1980-01-01

    Comments by Roy Reider on chemical criticality control, the fundamentals of safety, policy and responsibility, on written procedures, profiting from accidents, safety training, early history of criticality safety, requirements for the possible, the value of enlightened challenge, public acceptance of a new risk, and on prophets of doom are presented

  15. The Power of the Spoken Word in Defining Religion and Thought: A Case Study

    Directory of Open Access Journals (Sweden)

    Hilary Watt

    2009-01-01

    This essay explores the relationship between religion and language through a literature review of animist scholarship and, in particular, a case study of the animist worldview of Hmong immigrants to the United States. An analysis of the existing literature reveals how the Hmong worldview (which has remained remarkably intact despite widely dispersed settlements) both informs and is informed by the Hmong language. Hmong is contrasted with English with regard to both languages’ respective affinities to the scientific worldview and Christianity. I conclude that Hmong and other "pre-scientific" languages have fundamental incompatibilities with the Western worldview (which both informs and is informed by dualistic linguistic conventions of modern language, a modern notion of scientific causality, and Judeo-Christian notions of the body/soul dichotomy). This incompatibility proves to be a major stumbling block for Western scholars of animist religion, who bring their own linguistic and cultural biases to their scholarship.

  16. Writing Workshop Revisited: Confronting Communicative Dilemmas through Spoken Word Poetry in a High School English Classroom

    Science.gov (United States)

    Scarbrough, Burke; Allen, Anna-Ruth

    2014-01-01

    Workshop pedagogy is a staple of writing classrooms at all levels. However, little research has explored the pedagogical moves that can address longstanding critiques of writing workshop, nor the sorts of rhetorical challenges that teachers and students in secondary classrooms can tackle through workshops. This article documents and analyzes the…

  17. Catalogue to select the initial guess spectrum during unfolding

    CERN Document Server

    Vega-Carrillo, H R

    2002-01-01

    A new method to select the initial guess spectrum is presented. Neutron spectra unfolded from Bonner sphere data are dependent on the initial guess spectrum used in the unfolding code. The method is based on a catalogue of detector count rates calculated from a set of reported neutron spectra. The spectra of three isotopic neutron sources, ²⁵²Cf, ²³⁹PuBe and ²⁵²Cf/D₂O, were measured to test the method. The unfolding was carried out using the three initial guess options included in the BUNKIUT code. Neutron spectra were also calculated using MCNP code. Unfolded spectra were compared with those calculated; in all the cases our method gives the best results.
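
The catalogue idea described here, choosing the initial guess spectrum whose predicted detector count rates best match the measurement, might be sketched as follows. The response matrix, the catalogue spectra, and the least-squares matching criterion are all illustrative assumptions, not the paper's actual data or procedure.

```python
import numpy as np

def select_initial_guess(response, catalogue, measured):
    """Return the catalogue entry whose predicted Bonner-sphere count
    rates best match the measurement (least squares on normalized shapes)."""
    target = measured / measured.sum()
    best, best_err = None, np.inf
    for name, phi in catalogue.items():
        predicted = response @ phi               # count rate per sphere
        predicted = predicted / predicted.sum()  # compare shapes, not scale
        err = np.sum((predicted - target) ** 2)
        if err < best_err:
            best, best_err = name, err
    return best, best_err

# Toy 3-sphere, 4-energy-bin illustration (all numbers invented).
response = np.array([[0.9, 0.5, 0.2, 0.1],
                     [0.4, 0.8, 0.6, 0.3],
                     [0.1, 0.3, 0.7, 0.9]])
catalogue = {"Cf-252": np.array([0.10, 0.40, 0.40, 0.10]),
             "PuBe":   np.array([0.05, 0.20, 0.45, 0.30])}
measured = 1.7 * (response @ catalogue["PuBe"])  # a scaled 'measurement'
print(select_initial_guess(response, catalogue, measured)[0])  # -> PuBe
```

The selected spectrum would then be handed to the unfolding code as its starting point.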

  18. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    Science.gov (United States)

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H₂¹⁵O PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  19. An Iterative, Dynamically Stabilized (IDS) Method of Data Unfolding

    CERN Document Server

    Malaescu, Bogdan

    2011-01-01

    We describe an iterative unfolding method for experimental data, making use of a regularization function. The use of this function allows one to build an improved normalization procedure for Monte Carlo spectra, unbiased by the presence of possible new structures in data. We unfold, in a dynamically stable way, data spectra which can be strongly affected by fluctuations in the background subtraction and simultaneously reconstruct structures which were not initially simulated.
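
As a rough sketch of iterative response-matrix unfolding, here is a d'Agostini-style iterative Bayesian update. This is a common baseline scheme, not the paper's IDS algorithm, which adds a regularization function and dynamic stabilization on top of such iterations; the response matrix and spectra below are invented.

```python
import numpy as np

def iterative_unfold(response, measured, prior, n_iter=50):
    """Iterative Bayesian (d'Agostini-style) unfolding sketch.
    response[j, i] = P(observed bin j | true bin i)."""
    truth = prior.astype(float).copy()
    eps = np.maximum(response.sum(axis=0), 1e-12)  # detection efficiency
    for _ in range(n_iter):
        folded = response @ truth                  # expected observation
        ratio = measured / np.maximum(folded, 1e-12)
        truth = truth * (response.T @ ratio) / eps # multiplicative update
    return truth

# Toy 2-bin smearing example (numbers invented for illustration).
R = np.array([[0.8, 0.2],
              [0.2, 0.8]])
true_spectrum = np.array([100.0, 300.0])
observed = R @ true_spectrum
estimate = iterative_unfold(R, observed, prior=np.array([200.0, 200.0]))
print(np.round(estimate))  # converges toward [100. 300.]
```

Each iteration folds the current estimate through the response and rescales it by the observed-to-expected ratio, which is why an uncontrolled version can amplify fluctuations, the problem the paper's regularization addresses.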

  20. MAWRID: A Model of Arabic Word Reading in Development.

    Science.gov (United States)

    Saiegh-Haddad, Elinor

    2017-07-01

    This article offers a model of Arabic word reading according to which three conspicuous features of the Arabic language and orthography shape the development of word reading in this language: (a) vowelization/vocalization, or the use of diacritical marks to represent short vowels and other features of articulation; (b) morphological structure, namely, the predominance and transparency of derivational morphological structure in the linguistic and orthographic representation of the Arabic word; and (c) diglossia, specifically, the lexical and lexico-phonological distance between the spoken and the standard forms of Arabic words. It is argued that the triangulation of these features governs the acquisition and deployment of reading mechanisms across development. Moreover, the difficulties that readers encounter in their journey from beginning to skilled reading may be better understood if evaluated within these language-specific features of Arabic language and orthography.

  1. The determinants of spoken and written picture naming latencies.

    Science.gov (United States)

    Bonin, Patrick; Chalard, Marylène; Méot, Alain; Fayol, Michel

    2002-02-01

    The influence of nine variables on the latencies to write down or to speak aloud the names of pictures taken from Snodgrass and Vanderwart (1980) was investigated in French adults. The major determinants of both written and spoken picture naming latencies were image variability, image agreement and age of acquisition. To a lesser extent, name agreement was also found to have an impact in both production modes. The implications of the findings for theoretical views of both spoken and written picture naming are discussed.

  2. Unfolding method for first-principles LCAO electronic structure calculations

    Science.gov (United States)

    Lee, Chi-Cheng; Yamada-Takamura, Yukiko; Ozaki, Taisuke

    2013-08-01

    Unfolding the band structure of a supercell to a normal cell enables us to investigate how symmetry breakers such as surfaces and impurities perturb the band structure of the normal cell. We generalize the unfolding method, originally developed based on Wannier functions, to the linear combination of atomic orbitals (LCAO) method, and present a general formula to calculate the unfolded spectral weight. The LCAO basis set is ideal for the unfolding method because the basis functions allocated to each atomic species are invariant regardless of the existence of surface and impurity. The unfolded spectral weight is well defined by the property of the LCAO basis functions. In exchange for the property, the non-orthogonality of the LCAO basis functions has to be taken into account. We show how the non-orthogonality can be properly incorporated in the general formula. As an illustration of the method, we calculate the dispersive quantized spectral weight of a ZrB2 slab and show strong spectral broadening in the out-of-plane direction, demonstrating the usefulness of the unfolding method.

  3. Unfolding method for first-principles LCAO electronic structure calculations

    International Nuclear Information System (INIS)

    Lee, Chi-Cheng; Yamada-Takamura, Yukiko; Ozaki, Taisuke

    2013-01-01

    Unfolding the band structure of a supercell to a normal cell enables us to investigate how symmetry breakers such as surfaces and impurities perturb the band structure of the normal cell. We generalize the unfolding method, originally developed based on Wannier functions, to the linear combination of atomic orbitals (LCAO) method, and present a general formula to calculate the unfolded spectral weight. The LCAO basis set is ideal for the unfolding method because the basis functions allocated to each atomic species are invariant regardless of the existence of surface and impurity. The unfolded spectral weight is well defined by the property of the LCAO basis functions. In exchange for the property, the non-orthogonality of the LCAO basis functions has to be taken into account. We show how the non-orthogonality can be properly incorporated in the general formula. As an illustration of the method, we calculate the dispersive quantized spectral weight of a ZrB2 slab and show strong spectral broadening in the out-of-plane direction, demonstrating the usefulness of the unfolding method. (paper)

  4. Unfolding method for first-principles LCAO electronic structure calculations.

    Science.gov (United States)

    Lee, Chi-Cheng; Yamada-Takamura, Yukiko; Ozaki, Taisuke

    2013-08-28

    Unfolding the band structure of a supercell to a normal cell enables us to investigate how symmetry breakers such as surfaces and impurities perturb the band structure of the normal cell. We generalize the unfolding method, originally developed based on Wannier functions, to the linear combination of atomic orbitals (LCAO) method, and present a general formula to calculate the unfolded spectral weight. The LCAO basis set is ideal for the unfolding method because the basis functions allocated to each atomic species are invariant regardless of the existence of surface and impurity. The unfolded spectral weight is well defined by the property of the LCAO basis functions. In exchange for the property, the non-orthogonality of the LCAO basis functions has to be taken into account. We show how the non-orthogonality can be properly incorporated in the general formula. As an illustration of the method, we calculate the dispersive quantized spectral weight of a ZrB2 slab and show strong spectral broadening in the out-of-plane direction, demonstrating the usefulness of the unfolding method.
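
The unfolding of supercell eigenstates into primitive-cell spectral weights can be illustrated in the simplest setting, a 1D tight-binding chain with an orthogonal basis (the paper's key extension, incorporating the LCAO overlap matrix for non-orthogonal bases, is deliberately omitted here). For a perfect chain, every supercell eigenstate should unfold with unit total weight onto momenta lying on the primitive band E(K) = 2t cos K.

```python
import numpy as np

# 1D tight-binding chain, one orbital per primitive cell, hopping t.
# Diagonalize an N-cell periodic supercell, then unfold each eigenstate
# onto the N primitive-cell momenta K_j = 2*pi*j/N.
N, t = 8, -1.0
H = np.zeros((N, N))
for m in range(N):
    H[m, (m + 1) % N] += t
    H[(m + 1) % N, m] += t
E, C = np.linalg.eigh(H)

Ks = 2.0 * np.pi * np.arange(N) / N
phases = np.exp(-1j * np.outer(Ks, np.arange(N)))  # Bloch projectors
for n in range(N):
    w = np.abs(phases @ C[:, n]) ** 2 / N  # spectral weight at each K
    assert np.isclose(w.sum(), 1.0)        # weights are normalized
    K_star = Ks[np.argmax(w)]
    # the dominant K reproduces the primitive band E(K) = 2 t cos K
    assert np.isclose(E[n], 2.0 * t * np.cos(K_star))
print("all supercell states unfolded onto the primitive band")
```

With a surface or impurity breaking translational symmetry, the weight would instead spread over several K values, which is the broadening effect the paper reports for the ZrB2 slab.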

  5. Branches of Triangulated Origami Near the Unfolded State

    Science.gov (United States)

    Chen, Bryan Gin-ge; Santangelo, Christian D.

    2018-01-01

    Origami structures are characterized by a network of folds and vertices joining unbendable plates. For applications to mechanical design and self-folding structures, it is essential to understand the interplay between the set of folds in the unfolded origami and the possible 3D folded configurations. When deforming a structure that has been folded, one can often linearize the geometric constraints, but the degeneracy of the unfolded state makes a linear approach impossible there. We derive a theory for the second-order infinitesimal rigidity of an initially unfolded triangulated origami structure and use it to study the set of nearly unfolded configurations of origami with four boundary vertices. We find that locally, this set consists of a number of distinct "branches" which intersect at the unfolded state, and that the number of these branches is exponential in the number of vertices. We find numerical and analytical evidence that suggests that the branches are characterized by choosing each internal vertex to either "pop up" or "pop down." The large number of pathways along which one can fold an initially unfolded origami structure strongly indicates that a generic structure is likely to become trapped in a "misfolded" state. Thus, new techniques for creating self-folding origami are likely necessary; controlling the popping state of the vertices may be one possibility.

  6. Branches of Triangulated Origami Near the Unfolded State

    Directory of Open Access Journals (Sweden)

    Bryan Gin-ge Chen

    2018-02-01

    Origami structures are characterized by a network of folds and vertices joining unbendable plates. For applications to mechanical design and self-folding structures, it is essential to understand the interplay between the set of folds in the unfolded origami and the possible 3D folded configurations. When deforming a structure that has been folded, one can often linearize the geometric constraints, but the degeneracy of the unfolded state makes a linear approach impossible there. We derive a theory for the second-order infinitesimal rigidity of an initially unfolded triangulated origami structure and use it to study the set of nearly unfolded configurations of origami with four boundary vertices. We find that locally, this set consists of a number of distinct “branches” which intersect at the unfolded state, and that the number of these branches is exponential in the number of vertices. We find numerical and analytical evidence that suggests that the branches are characterized by choosing each internal vertex to either “pop up” or “pop down.” The large number of pathways along which one can fold an initially unfolded origami structure strongly indicates that a generic structure is likely to become trapped in a “misfolded” state. Thus, new techniques for creating self-folding origami are likely necessary; controlling the popping state of the vertices may be one possibility.
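
The branch count discussed in these two records, one candidate branch per choice of "pop up" or "pop down" for each internal vertex, is exponential by construction; a trivial enumeration (function name ours) makes the counting explicit.

```python
from itertools import product

def popping_states(n_internal):
    """All candidate branch labels: each internal vertex either
    'pops up' (+1) or 'pops down' (-1), giving 2**n sign patterns."""
    return list(product((+1, -1), repeat=n_internal))

print(len(popping_states(3)))  # 8 candidate branches for 3 internal vertices
```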

  7. Project ASPIRE: Spoken Language Intervention Curriculum for Parents of Low-socioeconomic Status and Their Deaf and Hard-of-Hearing Children.

    Science.gov (United States)

    Suskind, Dana L; Graf, Eileen; Leffel, Kristin R; Hernandez, Marc W; Suskind, Elizabeth; Webber, Robert; Tannenbaum, Sally; Nevins, Mary Ellen

    2016-02-01

    To investigate the impact of a spoken language intervention curriculum aiming to improve the language environments caregivers of low socioeconomic status (SES) provide for their D/HH children with CI & HA to support children's spoken language development. Quasiexperimental. Tertiary. Thirty-two caregiver-child dyads of low-SES (as defined by caregiver education ≤ MA/MS and the income proxies = Medicaid or WIC/LINK) and children aged curriculum designed to improve D/HH children's early language environments. Changes in caregiver knowledge of child language development (questionnaire scores) and language behavior (word types, word tokens, utterances, mean length of utterance [MLU], LENA Adult Word Count (AWC), Conversational Turn Count (CTC)). Significant increases in caregiver questionnaire scores as well as utterances, word types, word tokens, and MLU in the treatment but not the control group. No significant changes in LENA outcomes. Results partially support the notion that caregiver-directed language enrichment interventions can change home language environments of D/HH children from low-SES backgrounds. Further longitudinal studies are necessary.

  8. What's in a Word?

    OpenAIRE

    Henderson, Jennifer A

    2007-01-01

    Words are all around us to the point that their complexity is lost in familiarity. The term “word” itself can ambiguously refer to different linguistic concepts: orthographic words, phonological words, grammatical words, word-forms, lexemes, and to an extent lexical items. While it is hard to come up with exception-less criteria for wordhood, some typical properties are that words are writeable and spellable, consist of morphemes, are syntactic units, carry meaning, and interrelate with oth...

  9. Animated and Static Concept Maps Enhance Learning from Spoken Narration

    Science.gov (United States)

    Adesope, Olusola O.; Nesbit, John C.

    2013-01-01

    An animated concept map represents verbal information in a node-link diagram that changes over time. The goals of the experiment were to evaluate the instructional effects of presenting an animated concept map concurrently with semantically equivalent spoken narration. The study used a 2 x 2 factorial design in which an animation factor (animated…

  10. A Comparison between Written and Spoken Narratives in Aphasia

    Science.gov (United States)

    Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena

    2009-01-01

    The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…

  11. Assessing spoken-language educational interpreting: Measuring up ...

    African Journals Online (AJOL)

    Kate H

    assessment instrument used to formally assess the spoken-language educational interpreters at Stellenbosch University (SU). Research ..... Is the interpreter suited to the module? Is the interpreter easier to follow? Technical. Microphone technique. Lag. Completeness. Language use. Vocabulary. Role. Personal Objectives ...

  12. Using the Corpus of Spoken Afrikaans to generate an Afrikaans ...

    African Journals Online (AJOL)

    This paper presents two chatbot systems, ALICE and Elizabeth, illustrating the dialogue knowledge representation and pattern matching techniques of each. We discuss the problems which arise when using the Corpus of Spoken Afrikaans (Korpus Gesproke Afrikaans) to retrain the ALICE chatbot system with human ...

  13. Autosegmental Representation of Epenthesis in the Spoken French ...

    African Journals Online (AJOL)

    Therefore, this paper examined vowel insertion in the spoken French of 50 Ijebu Undergraduate French Learners (IUFLs) in Selected Universities in South West of Nigeria. The data collection for this study was through tape-recording of participants' production of 30 sentences containing both French vowel and consonant ...

  14. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,

  15. Automated Scoring of L2 Spoken English with Random Forests

    Science.gov (United States)

    Kobayashi, Yuichiro; Abe, Mariko

    2016-01-01

    The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…

  16. Flipper: An Information State Component for Spoken Dialogue Systems

    NARCIS (Netherlands)

    ter Maat, Mark; Heylen, Dirk K.J.; Vilhjálmsson, Hannes; Kopp, Stefan; Marsella, Stacy; Thórisson, Kristinn

    This paper introduces Flipper, a specification language and interpreter for Information State Update rules that can be used for developing spoken dialogue systems and embodied conversational agents. The system uses XML templates to modify the information state and to select behaviours to perform.

  17. Pair Counting to Improve Grammar and Spoken Fluency

    Science.gov (United States)

    Hanson, Stephanie

    2017-01-01

    English language learners are often more grammatically accurate in writing than in speaking. As students focus on meaning while speaking, their spoken fluency comes at a cost: their grammatical accuracy decreases. The author wanted to find a way to help her students improve their oral grammar; that is, she wanted them to focus on grammar while…

  18. A memory-based shallow parser for spoken Dutch

    NARCIS (Netherlands)

    Canisius, S.V.M.; van den Bosch, A.; Decadt, B.; Hoste, V.; De Pauw, G.

    2004-01-01

    We describe the development of a Dutch memory-based shallow parser. The availability of large treebanks for Dutch, such as the one provided by the Spoken Dutch Corpus, allows memory-based learners to be trained on examples of shallow parsing taken from the treebank, and act as a shallow parser after

  19. The Link between Vocabulary Knowledge and Spoken L2 Fluency

    Science.gov (United States)

    Hilton, Heather

    2008-01-01

    In spite of the vast numbers of articles devoted to vocabulary acquisition in a foreign language, few studies address the contribution of lexical knowledge to spoken fluency. The present article begins with basic definitions of the temporal characteristics of oral fluency, summarizing L1 research over several decades, and then presents fluency…

  20. Oral and Literate Strategies in Spoken and Written Narratives.

    Science.gov (United States)

    Tannen, Deborah

    1982-01-01

    Discusses comparative analysis of spoken and written versions of a narrative to demonstrate that features which have been identified as characterizing oral discourse are also found in written discourse and that the written short story combines syntactic complexity expected in writing with features which create involvement expected in speaking.…

  1. Porting a spoken language identification system to a new environment.

    CSIR Research Space (South Africa)

    Peche, M

    2008-11-01

    Full Text Available. The authors investigated the process of porting a Spoken Language Identification (S-LID) system to a new environment, away from the carefully selected training data used to construct the system initially, and describe methods to prepare it for more effective use...

  2. Evaluation of Noisy Transcripts for Spoken Document Retrieval

    NARCIS (Netherlands)

    van der Werff, Laurens Bastiaan

    2012-01-01

    This thesis introduces a novel framework for the evaluation of Automatic Speech Recognition (ASR) transcripts in a Spoken Document Retrieval (SDR) context. The basic premise is that ASR transcripts must be evaluated by measuring the impact of noise in the transcripts on the search results of a

  3. Phonological Interference in the Spoken English Performance of the ...

    African Journals Online (AJOL)

    This paper sets out to examine the phonological interference in the spoken English performance of the Izon speaker. It emphasizes that the level of interference is not just as a result of the systemic differences that exist between both language systems (Izon and English) but also as a result of the interlanguage factors such ...

  4. Producing complex spoken numerals for time and space

    NARCIS (Netherlands)

    Meeuwissen, M.H.W.

    2004-01-01

    This thesis addressed the spoken production of complex numerals for time and space. The production of complex numerical expressions like those involved in telling time (e.g., 'quarter to four') or producing house numbers (e.g., 'two hundred forty-five') has been almost completely ignored. Yet, adult

  5. An Analysis of Spoken Grammar: The Case for Production

    Science.gov (United States)

    Mumford, Simon

    2009-01-01

    Corpus-based grammars, notably "Cambridge Grammar of English," give explicit information on the forms and use of native-speaker grammar, including spoken grammar. Native-speaker norms as a necessary goal in language teaching are contested by supporters of English as a Lingua Franca (ELF); however, this article argues for the inclusion of selected…

  6. IMPACT ON THE INDIGENOUS LANGUAGES SPOKEN IN NIGERIA ...

    African Journals Online (AJOL)

    This article examines the impact of the hegemony of English, as a common lingua franca, referred to as a global language, on the indigenous languages spoken in Nigeria. Since English, through the British political imperialism and because of the economic supremacy of English dominated countries, has assumed the ...

  7. Spoken Indian language identification: a review of features and ...

    Indian Academy of Sciences (India)

    BAKSHI AARTI

    2018-04-12

    Apr 12, 2018 ... These language-specific properties can be exploited to identify a spoken language reliably. Automatic language identification has emerged as a prominent research area in Indian language processing. People from different regions of India speak around 800 different languages.

  8. Spoken Persuasive Discourse Abilities of Adolescents with Acquired Brain Injury

    Science.gov (United States)

    Moran, Catherine; Kirk, Cecilia; Powell, Emma

    2012-01-01

    Purpose: The aim of this study was to examine the performance of adolescents with acquired brain injury (ABI) during a spoken persuasive discourse task. Persuasive discourse is frequently used in social and academic settings and is of importance in the study of adolescent language. Method: Participants included 8 adolescents with ABI and 8 peers…

  9. Efficient unfolding pattern recognition in single molecule force spectroscopy data

    Directory of Open Access Journals (Sweden)

    Labudde Dirk

    2011-06-01

    Full Text Available. Abstract. Background: Single-molecule force spectroscopy (SMFS) is a technique that measures the force necessary to unfold a protein. SMFS experiments generate Force-Distance (F-D) curves. A statistical analysis of a set of F-D curves reveals different unfolding pathways, and information on protein structure, conformation, functional states, and inter- and intra-molecular interactions can be derived. Results: In the present work, we propose a pattern recognition algorithm and apply it to datasets from SMFS experiments on the membrane protein bacteriorhodopsin (bR). We discuss the unfolding pathways found in bR, which are characterised by main peaks and side peaks. A main peak is the result of the pairwise unfolding of the transmembrane helices; in contrast, a side peak is an unfolding event within an alpha-helix or other secondary structural element. The algorithm is capable of detecting side peaks along with main peaks. Therefore, we can detect the individual unfolding pathway as the sequence of events labeled with their occurrences and co-occurrences specific to bR's unfolding pathway. We find that side peaks do not co-occur with one another in curves as frequently as main peaks do, which may imply a synergistic effect between helices. While main peaks co-occur as pairs in at least 50% of curves, side peaks co-occur with one another in less than 10% of curves. Moreover, the algorithm's runtime scales well as the dataset size increases. Conclusions: Our algorithm satisfies the requirements of an automated methodology that combines high accuracy with efficiency in analyzing SMFS datasets. The algorithm tackles the force spectroscopy analysis bottleneck, leading to more consistent and reproducible results.
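
    The co-occurrence statistics quoted above (main peaks pairing in at least 50% of curves, side peaks in under 10%) reduce to a simple tally over the events detected in each curve. A minimal sketch, using hypothetical event labels and toy data in place of real F-D peak detections:

```python
from itertools import combinations

def cooccurrence_rates(curves):
    """Fraction of curves in which each pair of unfolding events co-occurs.

    `curves` is a list of sets of event labels (e.g. peak positions binned
    by contour length) detected in each F-D curve.
    """
    n = len(curves)
    events = sorted(set().union(*curves))
    return {(a, b): sum(1 for c in curves if a in c and b in c) / n
            for a, b in combinations(events, 2)}

# Toy data: main peaks "E" and "C" unfold pairwise in most curves,
# while side peak "s1" appears only occasionally.
curves = [{"E", "C"}, {"E", "C", "s1"}, {"E", "C"}, {"E"}]
rates = cooccurrence_rates(curves)
# rates[("C", "E")] = 0.75, rates[("C", "s1")] = rates[("E", "s1")] = 0.25
```

    In this toy run the main-peak pair exceeds the 50% threshold while the side-peak pairs fall below it, mirroring the kind of contrast the abstract reports.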

  10. In a Manner of Speaking: Assessing Frequent Spoken Figurative Idioms to Assist ESL/EFL Teachers

    Science.gov (United States)

    Grant, Lynn E.

    2007-01-01

    This article outlines criteria to define a figurative idiom, and then compares the frequent figurative idioms identified in two sources of spoken American English (academic and contemporary) to their frequency in spoken British English. This is done by searching the spoken part of the British National Corpus (BNC), to see whether they are frequent…

  11. Understanding Non-Restrictive "Which"-Clauses in Spoken English, Which Is Not an Easy Thing.

    Science.gov (United States)

    Tao, Hongyin; McCarthy, Michael J.

    2001-01-01

    Reexamines the notion of non-restrictive relative clauses (NRRCs) in light of spoken corpus evidence, based on analysis of 692 occurrences of non-restrictive "which"-clauses in British and American spoken English data. Reviews traditional conceptions of NRRCs and recent work on the broader notion of subordination in spoken grammar.…

  12. Design and performance of a large vocabulary discrete word recognition system. Volume 1: Technical report. [real time computer technique for voice data processing

    Science.gov (United States)

    1973-01-01

    The development, construction, and test of a 100-word vocabulary near real time word recognition system are reported. Included are reasonable replacement of any one or all 100 words in the vocabulary, rapid learning of a new speaker, storage and retrieval of training sets, verbal or manual single word deletion, continuous adaptation with verbal or manual error correction, on-line verification of vocabulary as spoken, system modes selectable via verification display keyboard, relationship of classified word to neighboring word, and a versatile input/output interface to accommodate a variety of applications.

  13. Ragnar Rommetveit's Approach to Everyday Spoken Dialogue from Within.

    Science.gov (United States)

    Kowal, Sabine; O'Connell, Daniel C

    2016-04-01

    The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue with a focus on both shared consciousness and linguistically mediated meaning. He developed this approach originally in his engagement of mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to experimental methodology which did not allow the engagement of interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, meaning potential of utterances, and epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he engaged psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues only occasionally. Although his work remained unfinished and open to development, it provides both a challenging alternative and supplement to current Anglo-American research on spoken dialogue and some overlap therewith.

  14. Young children's mapping between arrays, number words, and digits.

    Science.gov (United States)

    Benoit, Laurent; Lehalle, Henri; Molina, Michèle; Tijus, Charles; Jouen, François

    2013-10-01

    This study investigates when young children develop the ability to map between three numerical representations: arrays, spoken number words, and digits. Children (3, 4, and 5 years old) had to map between the two directions (e.g., array-to-digit vs. digit-to-array) of each of these three representation pairs, with small (1-3) and large numbers (4-6). Five-year-olds were at ceiling in all tasks. Three-year-olds succeeded when mapping between arrays and number words for small numbers (but not large numbers), and failed when mapping between arrays and digits and between number words and digits. The main finding was that four-year-olds performed equally well when mapping between arrays and number words and when mapping between arrays and digits. However, they performed more poorly when mapping between number words and digits. Taken together, these results suggest that children first learn to map number words to arrays, then learn to map digits to arrays and finally map number words to digits. These findings highlight the importance of directly exploring when children acquire digits rather than assuming that they acquire digits directly from number words. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Repeats in advanced spoken English of learners with Czech as L1

    Directory of Open Access Journals (Sweden)

    Tomáš Gráf

    2017-09-01

    Full Text Available. The article reports on the findings of an empirical study of the use of repeats – as one of the markers of disfluency – in advanced learner English and contributes to the study of L2 fluency. An analysis of 13 hours of recordings of interviews with 50 advanced learners of English with Czech as L1 revealed 1,905 instances of repeats, which mainly (78%) consisted of one-word repeats occurring at the beginning of clauses and constituents. Two-word repeats were less frequent (19%) but appeared in the same positions within the utterances. Longer repeats are much rarer (<2.5%). A comparison with available analyses shows that Czech advanced learners of English use repeats in a similar way to advanced learners of English with a different L1 and also to native speakers. If repeats are accepted as fluencemes, i.e. components contributing to fluency, it would appear that many advanced learners successfully adopt this native-like strategy, either as a result of exposure to native speech or as transfer from their L1s. Whilst a question remains whether such fluency-enhancing strategies ought to become part of L2 instruction, it is argued that spoken learner corpora also ought to include samples of the learners' L1 production.

  16. Unfolding code for neutron spectrometry based on neural nets technology

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Vega C, H. R.

    2012-10-01

    The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches, and novel methods based on Artificial Neural Networks have been widely investigated. In this work, a neutron spectrum unfolding code based on neural net technology is presented. This unfolding code, called Neutron Spectrometry and Dosimetry by means of Artificial Neural Networks, was designed with a graphical interface in the LabVIEW programming environment. The core of the code is an embedded neural network architecture, previously optimized by the Robust Design of Artificial Neural Networks methodology. The code is easy to use, friendly and intuitive to the user. It was designed for a Bonner Sphere System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. The main feature of the code is that, as input data, only seven count-rate measurements with a Bonner sphere spectrometer are required to simultaneously unfold the 60 energy bins of the neutron spectrum and to calculate 15 dosimetric quantities for radiation protection purposes. The code generates a full report in html format with all relevant information. (Author)
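
    The record above describes mapping seven Bonner-sphere count rates onto a 60-bin spectrum with an embedded neural network. As a sketch of that mapping's shape only — the actual trained weights and architecture are not given here, so the weights below are random placeholders — a plain feed-forward pass might look like:

```python
import math
import random

def mlp_unfold(counts, w1, b1, w2, b2):
    """Toy feed-forward pass: 7 count rates -> 60 spectrum bins.

    A single tanh hidden layer followed by a linear output layer;
    the real code's architecture and trained weights are not public.
    """
    hidden = [math.tanh(sum(w * c for w, c in zip(row, counts)) + b)
              for row, b in zip(w1, b1)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(w2, b2)]

random.seed(0)
n_in, n_hid, n_out = 7, 12, 60          # 7 sphere readings -> 60 energy bins
w1 = [[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
w2 = [[random.gauss(0, 0.1) for _ in range(n_hid)] for _ in range(n_out)]
b2 = [0.0] * n_out

spectrum = mlp_unfold([1.0, 0.8, 0.9, 1.2, 0.7, 0.5, 0.3], w1, b1, w2, b2)
```

    With trained weights, the same forward pass would yield the unfolded spectrum; the dosimetric quantities the abstract mentions would then be integrals of that spectrum against fluence-to-dose conversion coefficients.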

  17. Visual phonology: the effects of orthographic consistency on different auditory word recognition tasks.

    Science.gov (United States)

    Ziegler, Johannes C; Ferrand, Ludovic; Montant, Marie

    2004-07-01

    In this study, we investigated orthographic influences on spoken word recognition. The degree of spelling inconsistency was manipulated while rime phonology was held constant. Inconsistent words with subdominant spellings were processed more slowly than inconsistent words with dominant spellings. This graded consistency effect was obtained in three experiments. However, the effect was strongest in lexical decision, intermediate in rime detection, and weakest in auditory naming. We conclude that (1) orthographic consistency effects are not artifacts of phonological, phonetic, or phonotactic properties of the stimulus material; (2) orthographic effects can be found even when the error rate is extremely low, which rules out the possibility that they result from strategies used to reduce task difficulty; and (3) orthographic effects are not restricted to lexical decision. However, they are stronger in lexical decision than in other tasks. Overall, the study shows that learning about orthography alters the way we process spoken language.

  18. Learning Word Sense Embeddings from Word Sense Definitions

    OpenAIRE

    Li, Qi; Li, Tianshi; Chang, Baobao

    2016-01-01

    Word embeddings play a significant role in many modern NLP systems. Since learning one representation per word is problematic for polysemous words and homonymous words, researchers propose to use one embedding per word sense. Their approaches mainly train word sense embeddings on a corpus. In this paper, we propose to use word sense definitions to learn one embedding per word sense. Experimental results on word similarity tasks and a word sense disambiguation task show that word sense embeddi...

  19. Influences of lexical tone and pitch on word recognition in bilingual infants.

    Science.gov (United States)

    Singh, Leher; Foong, Joanne

    2012-08-01

    Infants' abilities to discriminate native and non-native phonemes have been extensively investigated in monolingual learners, demonstrating a transition from language-general to language-specific sensitivities over the first year after birth. However, these studies have mostly been limited to the study of vowels and consonants in monolingual learners. There is relatively little research on other types of phonetic segments, such as lexical tone, even though tone languages are very well represented across languages of the world. The goal of the present study is to investigate how Mandarin Chinese-English bilingual learners contend with non-phonemic pitch variation in English spoken word recognition. This is contrasted with their treatment of phonemic changes in lexical tone in Mandarin spoken word recognition. The experimental design was cross-sectional and three age groups were sampled (7.5 months, 9 months and 11 months). Results demonstrated limited generalization abilities at 7.5 months, where infants only recognized words in English when matched in pitch and words in Mandarin that were matched in tone. At 9 months, infants recognized words in Mandarin Chinese that matched in tone, but also falsely recognized words that contrasted in tone. At this age, infants also recognized English words whether they were matched or mismatched in pitch. By 11 months, infants correctly recognized pitch-matched and pitch-mismatched words in English but only recognized tonal matches in Mandarin Chinese. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. [Unfolding item response model using best-worst scaling].

    Science.gov (United States)

    Ikehara, Kazuya

    2015-02-01

    In attitude measurement and sensory tests, the unfolding model is typically used. In this model, response probability is formulated by the distance between the person and the stimulus. In this study, we proposed an unfolding item response model using best-worst scaling (BWU model), in which a person chooses the best and worst stimulus among repeatedly presented subsets of stimuli. We also formulated an unfolding model using best scaling (BU model), and compared the accuracy of estimates between the BU and BWU models. A simulation experiment showed that the BWU model performed much better than the BU model in terms of bias and root mean square errors of estimates. With reference to Usami (2011), the proposed models were applied to actual data to measure attitudes toward tardiness. Results indicated high similarity between stimuli estimates generated with the proposed models and those of Usami (2011).
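
    A common way to formalize the setup described above — response probabilities driven by the person-stimulus distance, with best and worst choices over a presented subset — is a MaxDiff-style model with squared-distance utilities. The parameterization below is an illustrative assumption, not necessarily the paper's:

```python
import math
from itertools import permutations

def best_worst_probs(theta, deltas):
    """Best-worst choice probabilities under a toy unfolding model.

    The utility of stimulus i for a person at theta is u_i = -(theta - delta_i)**2,
    and P(best = i, worst = j) is taken proportional to exp(u_i - u_j),
    a standard MaxDiff formulation.
    """
    u = [-(theta - d) ** 2 for d in deltas]
    weights = {(i, j): math.exp(u[i] - u[j])
               for i, j in permutations(range(len(deltas)), 2)}
    z = sum(weights.values())
    return {pair: w / z for pair, w in weights.items()}

# Stimulus locations -1.0, 0.0, 1.5 for a person at theta = 0.2:
probs = best_worst_probs(theta=0.2, deltas=[-1.0, 0.0, 1.5])
best, worst = max(probs, key=probs.get)
# best = 1 (location nearest theta), worst = 2 (location farthest from theta)
```

    The "unfolding" character is visible in the utilities: preference peaks at the person's own location and falls off symmetrically in both directions, unlike a monotone (dominance) item response model.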

  1. Periodicity-Free Unfolding Method of Electronic Energy Spectra

    Science.gov (United States)

    Kosugi, Taichi; Nishi, Hirofumi; Kato, Yasuyuki; Matsushita, Yu-ichiro

    2017-12-01

    We propose a novel periodicity-free unfolding method of electronic energy spectra. Our new method solves the serious problem that a calculated electronic band structure strongly depends on the choice of the simulation cell, i.e., primitive cell or supercell. The present method projects the electronic states onto the free-electron states, giving rise to plane-wave unfolded spectra. Using the method, the energy spectra can be calculated as a quantity independent of the choice of the simulation cell. We examine the unfolded energy spectra in detail for the following three models and clarify the validity of our method: a one-dimensional two-chain model, monolayer graphene, and twisted bilayer graphene. We also discuss the relation between our present method and the spectra observed in angle-resolved photoemission spectroscopy (ARPES) experiments.

  2. Narrative-based intervention for word-finding difficulties: a case study.

    Science.gov (United States)

    Marks, Ian; Stokes, Stephanie F

    2010-01-01

    Children with word-finding difficulties manifest a high frequency of word-finding characteristics in narrative, yet word-finding interventions have concentrated on single-word treatments and outcome measures. This study measured the effectiveness of a narrative-based intervention in improving single-word picture-naming and word-finding characteristics in narrative in a case study. A case study, quasi-experimental design was employed. The participant was tested on picture naming and spoken word to picture matching on control and treatment words at pre-, mid-, and post-therapy and an 8-month maintenance point. Narrative samples at pre- and post-therapy were analysed for word-finding characteristics and language production. A narrative-based language intervention for word-finding difficulties (NBLI-WF) was carried out for eight sessions, over 3 weeks. The data were subjected to a repeated-measures trend analysis for dichotomous data. Significant improvement occurred for naming accuracy of treatment, but not for control words. The pattern of word-finding characteristics in narrative changed, but the frequency did not reduce. NBLI-WF was effective in improving naming accuracy in this single case, but there were limitations to the research. Further research is required to assess the changes that may occur in language production and word-finding characteristics in narrative. Community clinicians are encouraged to refine clinical practice to ensure clinical research meets quality indicators.

  3. Estimating valence from the sound of a word: Computational, experimental, and cross-linguistic evidence.

    Science.gov (United States)

    Louwerse, Max; Qu, Zhan

    2017-06-01

    It is assumed linguistic symbols must be grounded in perceptual information to attain meaning, because the sound of a word in a language has an arbitrary relation with its referent. This paper demonstrates that a strong arbitrariness claim should be reconsidered. In a computational study, we showed that one phonological feature (nasals in the beginning of a word) predicted negative valence in three European languages (English, Dutch, and German) and positive valence in Chinese. In three experiments, we tested whether participants used this feature in estimating the valence of a word. In Experiment 1, Chinese and Dutch participants rated the valence of written valence-neutral words, with Chinese participants rating the nasal-first neutral-valence words more positive and the Dutch participants rating nasal-first neutral-valence words more negative. In Experiment 2, Chinese (and Dutch) participants rated the valence of Dutch (and Chinese) written valence-neutral words without being able to understand the meaning of these words. The patterns replicated the valence patterns from Experiment 1. When the written words from Experiment 2 were transformed into spoken words, results in Experiment 3 again showed that participants estimated the valence of words on the basis of the sound of the word. The computational study and psycholinguistic experiments indicated that language users can bootstrap meaning from the sound of a word.

  4. The unfolding effects on the protein hydration shell and partial molar volume: a computational study.

    Science.gov (United States)

    Del Galdo, Sara; Amadei, Andrea

    2016-10-12

    In this paper we apply the computational analysis recently proposed by our group to characterize the solvation properties of a native protein in aqueous solution, and to four model aqueous solutions of globular proteins in their unfolded states thus characterizing the protein unfolded state hydration shell and quantitatively evaluating the protein unfolded state partial molar volumes. Moreover, by using both the native and unfolded protein partial molar volumes, we obtain the corresponding variations (unfolding partial molar volumes) to be compared with the available experimental estimates. We also reconstruct the temperature and pressure dependence of the unfolding partial molar volume of Myoglobin dissecting the structural and hydration effects involved in the process.

  5. It's a Mad, Mad Wordle: For a New Take on Text, Try This Fun Word Cloud Generator

    Science.gov (United States)

    Foote, Carolyn

    2009-01-01

    Nation. New. Common. Generation. These are among the most frequently used words spoken by President Barack Obama in his January 2009 inauguration speech as seen in a fascinating visual display called a Wordle. Educators, too, can harness the power of Wordle to enhance learning. Imagine providing students with a whole new perspective on…

  6. Criteria for the segmentation of spoken input into individual utterances

    OpenAIRE

    Mast, Marion; Maier, Elisabeth; Schmitz, Birte

    1995-01-01

    This report describes how spoken language turns are segmented into utterances in the framework of the verbmobil project. The problem of segmenting turns is directly related to the task of annotating a discourse with dialogue act information: an utterance can be characterized as a stretch of dialogue that is attributed one dialogue act. Unfortunately, this rule in many cases is insufficient and many doubtful cases remain. We tried to at least reduce the number of unclear cases by providing a n...

  7. Linguistic adaptations during spoken and multimodal error resolution.

    Science.gov (United States)

    Oviatt, S; Bernard, J; Levow, G A

    1998-01-01

    Fragile error handling in recognition-based systems is a major problem that degrades their performance, frustrates users, and limits commercial potential. The aim of the present research was to analyze the types and magnitude of linguistic adaptation that occur during spoken and multimodal human-computer error resolution. A semiautomatic simulation method with a novel error-generation capability was used to collect samples of users' spoken and pen-based input immediately before and after recognition errors, and at different spiral depths in terms of the number of repetitions needed to resolve an error. When correcting persistent recognition errors, results revealed that users adapt their speech and language in three qualitatively different ways. First, they increase linguistic contrast through alternation of input modes and lexical content over repeated correction attempts. Second, when correcting with verbatim speech, they increase hyperarticulation by lengthening speech segments and pauses, and increasing the use of final falling contours. Third, when they hyperarticulate, users simultaneously suppress linguistic variability in their speech signal's amplitude and fundamental frequency. These findings are discussed from the perspective of enhancement of linguistic intelligibility. Implications are also discussed for corroboration and generalization of the Computer-elicited Hyperarticulate Adaptation Model (CHAM), and for improved error handling capabilities in next-generation spoken language and multimodal systems.

  8. Non-Arbitrariness in Mapping Word Form to Meaning: Cross-Linguistic Formal Markers of Word Concreteness.

    Science.gov (United States)

    Reilly, Jamie; Hung, Jinyi; Westbury, Chris

    2017-05-01

    Arbitrary symbolism is a linguistic doctrine that predicts an orthogonal relationship between word forms and their corresponding meanings. Recent corpora analyses have demonstrated violations of arbitrary symbolism with respect to concreteness, a variable characterizing the sensorimotor salience of a word. In addition to qualitative semantic differences, abstract and concrete words are also marked by distinct morphophonological structures such as length and morphological complexity. Native English speakers show sensitivity to these markers in tasks such as auditory word recognition and naming. One unanswered question is whether this violation of arbitrariness reflects an idiosyncratic property of the English lexicon or whether word concreteness is a marked phenomenon across other natural languages. We isolated concrete and abstract English nouns (N = 400), and translated each into Russian, Arabic, Dutch, Mandarin, Hindi, Korean, Hebrew, and American Sign Language. We conducted offline acoustic analyses of abstract and concrete word length discrepancies across languages. In a separate experiment, native English speakers (N = 56) with no prior knowledge of these foreign languages judged concreteness of these nouns (e.g., Can you see, hear, feel, or touch this? Yes/No). Each naïve participant heard pre-recorded words presented in randomized blocks of three foreign languages following a brief listening exposure to a narrative sample from each respective language. Concrete and abstract words differed by length across five of eight languages, and prediction accuracy exceeded chance for four of eight languages. These results suggest that word concreteness is a marked phenomenon across several of the world's most widely spoken languages. We interpret these findings as supportive of an adaptive cognitive heuristic that allows listeners to exploit non-arbitrary mappings of word form to word meaning. Copyright © 2016 Cognitive Science Society, Inc.

  9. The unfolded protein response in neurodegenerative diseases: a neuropathological perspective

    NARCIS (Netherlands)

    Scheper, Wiep; Hoozemans, Jeroen J. M.

    2015-01-01

    The unfolded protein response (UPR) is a stress response of the endoplasmic reticulum (ER) to a disturbance in protein folding. The so-called ER stress sensors PERK, IRE1 and ATF6 play a central role in the initiation and regulation of the UPR. The accumulation of misfolded and aggregated proteins

  10. PPARγ Ligand-Induced Unfolded Protein Responses in Monocytes

    African Journals Online (AJOL)

    High levels of oxLDL lead to cell dysfunction and apoptosis, a phenomenon known as lipotoxicity. Disturbing endoplasmic reticulum (ER) function results in ER stress and unfolded protein response (UPR), which tends to restore ER homeostasis but switches to apoptosis when ER stress is prolonged. In the present study the ...

  11. Structural changes during the unfolding of Bovine serum albumin in ...

    Indian Academy of Sciences (India)

    The native form of serum albumin is the most important soluble protein in the body plasma. In order to investigate the structural changes of Bovine serum albumin (BSA) during its unfolding in the presence of urea, a small-angle neutron scattering (SANS) study was performed. The scattering curves of dilute solutions of BSA ...

  12. Unfolding intermediates of the mutant His-107-Tyr of human ...

    Indian Academy of Sciences (India)

    Srabani Taraphder

    Abstract. The mutant His-107-Tyr of human carbonic anhydrase II (HCA II) is highly unstable and has long been linked to a misfolding disease known as carbonic anhydrase deficiency syndrome (CADS). High temperature unfolding trajectories of the mutant are obtained from classical molecular dynamics simulations.

  13. Nonintegrability of the unfolding of the fold-Hopf bifurcation

    Science.gov (United States)

    Yagasaki, Kazuyuki

    2018-02-01

    We consider the unfolding of the codimension-two fold-Hopf bifurcation and prove its meromorphic nonintegrability in the sense of Bogoyavlenskij for almost all parameter values. Our proof is based on a generalized version of the Morales-Ramis-Simó theory for non-Hamiltonian systems, using the related variational equations up to second order.
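For orientation, the truncated normal form of the codimension-two fold-Hopf (zero-Hopf) unfolding is commonly written in cylindrical coordinates $(r, \theta, z)$; this is the standard textbook form with unfolding parameters $\mu_1, \mu_2$, and the paper's exact conventions may differ:

```latex
\begin{aligned}
\dot{r} &= \mu_1 r + a r z, \\
\dot{z} &= \mu_2 + b r^2 - z^2, \\
\dot{\theta} &= \omega + O(|r| + |z|),
\end{aligned} \qquad a \neq 0,\ b \neq 0.
```

Nonintegrability results of the kind described above concern the full two-parameter family obtained by varying $\mu_1$ and $\mu_2$.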

  14. Perceived Helpfulness and Unfolding Processen in Body-Oriented ...

    African Journals Online (AJOL)

    Perceived Helpfulness and Unfolding Processen in Body-Oriented Therapy Practice. C Price, K Krycka, T Breitenbucher, N Brown. No abstract available.

  15. Unfolding Lives in Digital Worlds: Digikid Teachers Revisited

    Science.gov (United States)

    Graham, Lynda

    2012-01-01

    In this paper, I describe ongoing research exploring ways in which young teachers' digital lives unfold inside and outside classrooms. I first interviewed teachers in 2006, and identified three different routes into digital worlds: serious solitary self-taught, serious solitary school-taught and playful social. A number of teachers agreed to be…

  16. Word 2013 for dummies

    CERN Document Server

    Gookin, Dan

    2013-01-01

    This bestselling guide to Microsoft Word is the first and last word on Word 2013. It's a whole new Word, so jump right into this book and learn how to make the most of it. Bestselling For Dummies author Dan Gookin puts his usual fun and friendly candor back to work to show you how to navigate the new features of Word 2013. Completely in tune with the needs of the beginning user, Gookin explains how to use Word 2013 quickly and efficiently so that you can spend more time working on your projects and less time trying to figure it all out. Walks you through the capabilities…

  17. Combinatorics on words Christoffel words and repetitions in words

    CERN Document Server

    Berstel, Jean; Reutenauer, Christophe; Saliola, Franco V

    2008-01-01

    The two parts of this text are based on two series of lectures delivered by Jean Berstel and Christophe Reutenauer in March 2007 at the Centre de Recherches Mathématiques, Montréal, Canada. Part I represents the first modern and comprehensive exposition of the theory of Christoffel words. Part II presents numerous combinatorial and algorithmic aspects of repetition-free words stemming from the work of Axel Thue, a pioneer in the theory of combinatorics on words. A beginner to the theory of combinatorics on words will be motivated by the numerous examples, and the large variety of exercises, which make the book unique at this level of exposition. The clean and streamlined exposition and the extensive bibliography will also be appreciated. After reading this book, beginners should be ready to read modern research papers in this rapidly growing field and contribute their own research to its development. Experienced readers will be interested in the finitary approach to Sturmian words that Christoffel words offer…

  18. The Plausibility of Tonal Evolution in the Malay Dialect Spoken in Thailand: Evidence from an Acoustic Study

    Directory of Open Access Journals (Sweden)

    Phanintra Teeranon

    2007-12-01

    The F0 values of vowels following voiceless consonants are higher than those of vowels following voiced consonants; high vowels have a higher F0 than low vowels. It has also been found that when high vowels follow voiced consonants, the F0 values decrease. In contrast, low vowels following voiceless consonants show increasing F0 values. In other words, the voicing of initial consonants has been found to counterbalance the intrinsic F0 values of high and low vowels (House and Fairbanks 1953, Lehiste and Peterson 1961, Lehiste 1970, Laver 1994, Teeranon 2006). To test whether these three findings are applicable to a disyllabic language, the F0 values of high and low vowels following voiceless and voiced consonants were studied in a Malay dialect of the Austronesian language family spoken in Pathumthani Province, Thailand. The data was collected from three male informants, aged 30-35. The Praat program was used for acoustic analysis. The findings revealed the influence of the voicing of initial consonants on the F0 of vowels to be greater than that of the influence of vowel height. Evidence from this acoustic study shows the plausibility for the Malay dialect spoken in Pathumthani to become a tonal language through the influence of initial consonants rather than through the influence of the high-low vowel dimension.

  19. UNDERSTANDING TENOR IN SPOKEN TEXTS IN YEAR XII ENGLISH TEXTBOOK TO IMPROVE THE APPROPRIACY OF THE TEXTS

    Directory of Open Access Journals (Sweden)

    Noeris Meristiani

    2011-07-01

    ABSTRACT: The goal of English Language Teaching is communicative competence. To reach this goal, students should be supplied with good model texts, and these texts should consider the appropriacy of language use. By analyzing the context of situation, with a focus on tenor, the meanings constructed to build the relationships among the interactants in spoken texts can be unfolded. This study aims at investigating the interpersonal relations (tenor) of the interactants in the conversation texts as well as the appropriacy of their realization in the given contexts. The study was conducted under discourse analysis by applying a descriptive qualitative method. There were eight conversation texts which function as examples in five chapters of a textbook. The data were analyzed by using lexicogrammatical analysis, described, and interpreted contextually. Then, the realization of the tenor of the texts was further analyzed in terms of appropriacy to suggest improvement. The results of the study show that the tenor indicates relationships between friend-friend, student-student, questioners-respondents, mother-son, and teacher-student; the power is equal and unequal; the social distances show frequent contact, relatively frequent contact, relatively low contact, high and low affective involvement, using informal, relatively informal, relatively formal, and formal language. There are also some indications of inappropriacy of tenor realization in all texts. These should be improved in the use of degree of formality and in the realization of societal roles, status, and affective involvement. Keywords: context of situation, tenor, appropriacy.

  20. A joint model of word segmentation and meaning acquisition through cross-situational learning.

    Science.gov (United States)

    Räsänen, Okko; Rasilo, Heikki

    2015-10-01

    Human infants learn meanings for spoken words in complex interactions with other people, but the exact learning mechanisms are unknown. Among researchers, a widely studied learning mechanism is called cross-situational learning (XSL). In XSL, word meanings are learned when learners accumulate statistical information between spoken words and co-occurring objects or events, allowing the learner to overcome referential uncertainty after having sufficient experience with individually ambiguous scenarios. Existing models in this area have mainly assumed that the learner is capable of segmenting words from speech before grounding them to their referential meaning, while segmentation itself has been treated relatively independently of the meaning acquisition. In this article, we argue that XSL is not just a mechanism for word-to-meaning mapping, but that it provides strong cues for proto-lexical word segmentation. If a learner directly solves the correspondence problem between continuous speech input and the contextual referents being talked about, segmentation of the input into word-like units emerges as a by-product of the learning. We present a theoretical model for joint acquisition of proto-lexical segments and their meanings without assuming a priori knowledge of the language. We also investigate the behavior of the model using a computational implementation, making use of transition probability-based statistical learning. Results from simulations show that the model is not only capable of replicating behavioral data on word learning in artificial languages, but also shows effective learning of word segments and their meanings from continuous speech. Moreover, when augmented with a simple familiarity preference during learning, the model shows a good fit to human behavioral data in XSL tasks. 
These results support the idea of simultaneous segmentation and meaning acquisition and show that comprehensive models of early word segmentation should take referential word
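The co-occurrence accumulation at the heart of XSL can be illustrated in a few lines. This is a toy sketch with invented words and referents, not the authors' model (which operates on continuous speech without presegmented words):

```python
from collections import defaultdict

# Toy scenes: each pairs a set of heard words with a set of visible referents.
scenes = [
    ({"dog", "ball"}, {"DOG", "BALL"}),
    ({"dog", "cup"},  {"DOG", "CUP"}),
    ({"ball", "cup"}, {"BALL", "CUP"}),
]

# Accumulate word-referent co-occurrence counts across ambiguous scenes.
counts = defaultdict(lambda: defaultdict(int))
for words, referents in scenes:
    for w in words:
        for r in referents:
            counts[w][r] += 1

# Map each word to its most frequently co-occurring referent.
lexicon = {w: max(refs, key=refs.get) for w, refs in counts.items()}
print(lexicon["dog"])  # DOG
```

No single scene disambiguates any word, yet aggregating across scenes resolves every mapping, which is exactly the referential-uncertainty argument in the abstract.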

  1. Can the meaning of multiple words be integrated unconsciously?

    Science.gov (United States)

    van Gaal, Simon; Naccache, Lionel; Meuwese, Julia D I; van Loon, Anouk M; Leighton, Alexandra H; Cohen, Laurent; Dehaene, Stanislas

    2014-05-05

    What are the limits of unconscious language processing? Can language circuits process simple grammatical constructions unconsciously and integrate the meaning of several unseen words? Using behavioural priming and electroencephalography (EEG), we studied a specific rule-based linguistic operation traditionally thought to require conscious cognitive control: the negation of valence. In a masked priming paradigm, two masked words were successively (Experiment 1) or simultaneously presented (Experiment 2), a modifier ('not'/'very') and an adjective (e.g. 'good'/'bad'), followed by a visible target noun (e.g. 'peace'/'murder'). Subjects indicated whether the target noun had a positive or negative valence. The combination of these three words could either be contextually consistent (e.g. 'very bad - murder') or inconsistent (e.g. 'not bad - murder'). EEG recordings revealed that grammatical negations could unfold partly unconsciously, as reflected in similar occipito-parietal N400 effects for conscious and unconscious three-word sequences forming inconsistent combinations. However, only conscious word sequences elicited P600 effects, later in time. Overall, these results suggest that multiple unconscious words can be rapidly integrated and that an unconscious negation can automatically 'flip the sign' of an unconscious adjective. These findings not only extend the limits of subliminal combinatorial language processes, but also highlight how consciousness modulates the grammatical integration of multiple words.

  2. Contending with foreign accent in early word learning.

    Science.gov (United States)

    Schmale, Rachel; Hollich, George; Seidl, Amanda

    2011-11-01

    By their second birthday, children are beginning to map meaning to form with relative ease. One challenge for these developing abilities is separating information relevant to word identity (i.e. phonemic information) from irrelevant information (e.g. voice and foreign accent). Nevertheless, little is known about toddlers' abilities to ignore irrelevant phonetic detail when faced with the demanding task of word learning. In an experiment with English-learning toddlers, we examined the impact of foreign accent on word learning. Findings revealed that while toddlers aged 2;6 successfully generalized newly learned words spoken by a Spanish-accented speaker and a native English speaker, success of those aged 2;0 was restricted. Specifically, toddlers aged 2;0 failed to generalize words when trained by the native English speaker and tested by the Spanish-accented speaker. Data suggest that exposure to foreign accent in training may promote generalization of newly learned forms. These findings are considered in the context of developmental changes in early word representations.

  3. Fast mapping of novel word forms traced neurophysiologically

    Directory of Open Access Journals (Sweden)

    Yury eShtyrov

    2011-11-01

    Human capacity to quickly learn new words, critical for our ability to communicate using language, is well-known from behavioural studies and observations, but its neural underpinnings remain unclear. In this study, we have used event-related potentials to record brain activity to novel spoken word forms as they are being learnt by the human nervous system through passive auditory exposure. We found that the brain response dynamics change dramatically within the short (20 min) exposure session: as the subjects become familiarised with the novel word forms, the early (~100 ms) fronto-central activity they elicit increases in magnitude and becomes similar to that of known real words. At the same time, acoustically similar real words used as control stimuli show a relatively stable response throughout the recording session; these differences between the stimulus groups are confirmed using both factorial and linear regression analyses. Furthermore, acoustically matched novel non-speech stimuli do not demonstrate a similar response increase, suggesting neural specificity of this rapid learning phenomenon to linguistic stimuli. Left-lateralised perisylvian cortical networks appear to underlie such fast mapping of novel word forms onto the brain’s mental lexicon.

  4. Spoken English Language Development Among Native Signing Children With Cochlear Implants

    OpenAIRE

    Davidson, Kathryn; Lillo-Martin, Diane; Chen Pichler, Deborah

    2013-01-01

    Bilingualism is common throughout the world, and bilingual children regularly develop into fluently bilingual adults. In contrast, children with cochlear implants (CIs) are frequently encouraged to focus on a spoken language to the exclusion of sign language. Here, we investigate the spoken English language skills of 5 children with CIs who also have deaf signing parents, and so receive exposure to a full natural sign language (American Sign Language, ASL) from birth, in addition to spoken En...

  5. Learning during processing: Word learning doesn’t wait for word recognition to finish

    Science.gov (United States)

    Apfelbaum, Keith S.; McMurray, Bob

    2017-01-01

    Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. PMID:27471082

  6. WordPress Bible

    CERN Document Server

    Brazell, Aaron

    2010-01-01

    The WordPress Bible provides a complete and thorough guide to the largest self-hosted blogging tool. This guide starts by covering the basics of WordPress, such as installation and the principles of blogging, marketing and social media interaction, but then quickly ramps the reader up to more intermediate to advanced-level topics such as plugins, the WordPress Loop, themes and templates, custom fields, caching, security and more. The WordPress Bible is the only complete resource one needs to learn WordPress from beginning to end.

  7. Evaluating the spoken English proficiency of graduates of foreign medical schools.

    Science.gov (United States)

    Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E

    2001-08-01

    The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared by various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.
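The consistency estimate described above follows the standard generalizability-theory form for a persons-crossed-with-encounters design. As a sketch in generic G-theory notation (not necessarily the paper's exact variance decomposition):

```latex
E\hat{\rho}^{2} = \frac{\hat{\sigma}^{2}_{p}}{\hat{\sigma}^{2}_{p} + \hat{\sigma}^{2}_{pe}/n_{e}}
```

where $\hat{\sigma}^{2}_{p}$ is the between-candidate variance component, $\hat{\sigma}^{2}_{pe}$ the residual candidate-by-encounter component, and $n_{e}$ the number of encounters (10 in the CSA design). The coefficient rises toward 1 as encounter-specific noise is averaged over more encounters.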

  8. Textual, Genre and Social Features of Spoken Grammar: A Corpus-Based Approach

    Directory of Open Access Journals (Sweden)

    Carmen Pérez-Llantada

    2009-02-01

    This paper describes a corpus-based approach to teaching and learning spoken grammar for English for Academic Purposes with reference to Bhatia’s (2002) multi-perspective model for discourse analysis: a textual perspective, a genre perspective and a social perspective. From a textual perspective, corpus-informed instruction helps students identify grammar items through statistical frequencies, collocational patterns, context-sensitive meanings and discoursal uses of words. From a genre perspective, corpus observation provides students with exposure to recurrent lexico-grammatical patterns across different academic text types (genres). From a social perspective, corpus models can be used to raise learners’ awareness of how speakers’ different discourse roles, discourse privileges and power statuses are enacted in their grammar choices. The paper describes corpus-based instructional procedures, gives samples of learners’ linguistic output, and provides comments on the students’ response to this method of instruction. Data resulting from the assessment process and student production suggest that corpus-informed instruction grounded in Bhatia’s multi-perspective model can constitute a pedagogical approach in order to (i) obtain positive student responses from input and authentic samples of grammar use, (ii) help students identify and understand the textual, genre and social aspects of grammar in real contexts of use, and therefore (iii) help develop students’ ability to use grammar accurately and appropriately.
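The "statistical frequencies and collocational patterns" referred to above reduce, at their simplest, to n-gram counting over a corpus. A minimal sketch with an invented miniature corpus (real corpus work would add tokenization, normalization, and association measures such as mutual information):

```python
from collections import Counter

# Invented miniature corpus of academic speech.
tokens = "the results suggest that the results indicate that".split()

# Count adjacent word pairs (bigrams) as a crude collocation measure.
bigrams = Counter(zip(tokens, tokens[1:]))
print(bigrams[("the", "results")])  # 2
```

Sorting such counts by frequency is what surfaces recurrent lexico-grammatical patterns for classroom observation.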

  9. Listening in circles. Spoken drama and the architects of sound, 1750-1830.

    Science.gov (United States)

    Tkaczyk, Viktoria

    2014-07-01

    The establishment of the discipline of architectural acoustics is generally attributed to the physicist Wallace Clement Sabine, who developed the formula for reverberation time around 1900, and with it the possibility of making calculated prognoses about the acoustic potential of a particular design. If, however, we shift the perspective from the history of this discipline to the history of architectural knowledge and praxis, it becomes apparent that the topos of 'good sound' had already entered the discourse much earlier. This paper traces the Europe-wide discussion on theatre architecture between 1750 and 1830. It will be shown that the period of investigation is marked by an increasing interest in auditorium acoustics, one linked to the emergence of a bourgeois theatre culture and the growing socio-political importance of the spoken word. In the wake of this development the search among architects for new methods of acoustic research started to differ fundamentally from an analogical reasoning on the nature of sound propagation and reflection, which in part dated back to antiquity. Through their attempts to find new ways of visualising the behaviour of sound in enclosed spaces and to rethink both the materiality and the mediality of theatre auditoria, architects helped pave the way for the establishment of architectural acoustics as an academic discipline around 1900.

  10. When two and too don't go together: a selective phonological deficit sparing number words.

    Science.gov (United States)

    Bencini, Giulia M L; Pozzan, Lucia; Bertella, Laura; Mori, Ileana; Pignatti, Riccardo; Ceriani, Francesca; Semenza, Carlo

    2011-10-01

    We report the case of an Italian speaker (GBC) with classical Wernicke's aphasia syndrome following a vascular lesion in the left posterior middle temporal region. GBC exhibited a selective phonological deficit in spoken language production (repetition and reading) which affected all word classes irrespective of grammatical class, frequency, and length. GBC's production of number words, in contrast, was error free. The specific pattern of phonological errors on non-number words allows us to attribute the locus of impairment at the level of phonological form retrieval of a correctly selected lexical entry. These data support the claim that number words are represented and processed differently from other word categories in language production. Copyright © 2011 Elsevier Srl. All rights reserved.

  11. Spoken Language Understanding Systems for Extracting Semantic Information from Speech

    CERN Document Server

    Tur, Gokhan

    2011-01-01

    Spoken language understanding (SLU) is an emerging field in between speech and language processing, investigating human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances and its applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, using…

  12. Phonotactic spoken language identification with limited training data

    CSIR Research Space (South Africa)

    Peche, M

    2007-08-01

    An increasing amount of Japanese training data is used to train the language classifier of an English-only (E), an English-French (EF), and an English-French-Portuguese (EFP) PPR system, and recognition rates are examined for the case when no Japanese acoustic models are constructed. Because of their role as world languages that are widely spoken in Africa, the initial LID system was designed to distinguish between English, French and Portuguese. We therefore trained phone recognizers and language…
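In phonotactic LID of the PPR (parallel phone recognition) family, a recognized phone string is scored under per-language phone n-gram models and the best-scoring language wins. A minimal bigram sketch with made-up phone strings (illustrative only, not the CSIR system itself):

```python
import math
from collections import Counter

def train_bigram(phones, vocab, alpha=1.0):
    """Laplace-smoothed phone-bigram model; returns P(b | a)."""
    big = Counter(zip(phones, phones[1:]))
    uni = Counter(phones[:-1])
    v = len(vocab)
    return lambda a, b: (big[(a, b)] + alpha) / (uni[a] + alpha * v)

def log_score(model, phones):
    """Log-likelihood of a phone string under a bigram model."""
    return sum(math.log(model(a, b)) for a, b in zip(phones, phones[1:]))

# Made-up phone inventories and training strings for two languages.
vocab = {"p", "a", "t", "u"}
models = {"EN": train_bigram(list("patapata"), vocab),
          "FR": train_bigram(list("tupatupu"), vocab)}

utt = list("pata")  # phone string from the recognizer
best = max(models, key=lambda lang: log_score(models[lang], utt))
print(best)  # EN
```

The limited-training-data problem discussed in the abstract corresponds to how quickly these n-gram estimates (here crudely Laplace-smoothed) become reliable as training phones accumulate.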

  13. Plant transducers of the endoplasmic reticulum unfolded protein response

    KAUST Repository

    Iwata, Yuji

    2012-12-01

    The unfolded protein response (UPR) activates a set of genes to overcome accumulation of unfolded proteins in the endoplasmic reticulum (ER), a condition termed ER stress, and constitutes an essential part of ER protein quality control that ensures efficient maturation of secretory and membrane proteins in eukaryotes. Recent studies on Arabidopsis and rice identified the signaling pathway in which the ER membrane-localized ribonuclease IRE1 (inositol-requiring enzyme 1) catalyzes unconventional cytoplasmic splicing of mRNA, thereby producing the active transcription factor Arabidopsis bZIP60 (basic leucine zipper 60) and its ortholog in rice. Here we review recent findings identifying the molecular components of the plant UPR, including IRE1/bZIP60 and the membrane-bound transcription factors bZIP17 and bZIP28, and implicating its importance in several physiological phenomena such as pathogen response. © 2012 Elsevier Ltd.

  14. Amyloid protein unfolding and insertion kinetics on neuronal membrane mimics

    Science.gov (United States)

    Qiu, Liming; Buie, Creighton; Vaughn, Mark; Cheng, Kwan

    2010-03-01

    Atomistic details of beta-amyloid (Aβ) protein unfolding and lipid interaction kinetics mediated by the neuronal membrane surface are important for developing new therapeutic strategies to prevent and cure Alzheimer's disease. Using all-atom MD simulations, we explored the early unfolding and insertion kinetics of 40- and 42-residue Aβ in binary lipid mixtures with and without cholesterol that mimic the cholesterol-depleted and cholesterol-enriched lipid nanodomains of neurons. The protein conformational transition kinetics was evaluated from the secondary structure profile versus simulation time plot. The extent of membrane disruption was examined by the calculated order parameters of lipid acyl chains and cholesterol fused rings as well as the density profiles of water and lipid headgroups at defined regions across the lipid bilayer from our simulations. Our results revealed that both the cholesterol content and the length of the protein affect protein insertion and membrane stability in our model lipid bilayer systems.
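The acyl-chain order parameters mentioned above are conventionally the deuterium order parameters; as a reminder of the standard definition used in MD analysis (a generic convention, not specific to this study):

```latex
S_{CD} = \left\langle \frac{3\cos^{2}\theta - 1}{2} \right\rangle
```

where $\theta$ is the angle between a C–H bond vector and the bilayer normal, and the average runs over lipids and simulation time. Values near 0 indicate disordered chains; larger magnitudes indicate ordering, so a drop in $S_{CD}$ after protein insertion signals membrane disruption.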

  15. Unfolding of spectra with continuum and discrete components

    International Nuclear Information System (INIS)

    Sperling, M.; Reed, J.; Shreve, D.

    1979-01-01

    The purpose of unfolding is to determine the existence of discrete spectral components, their energies and intensities, as well as the shape and intensity of the spectral continuum. Codes implementing these and related ancillary processes share needs for vector algebra and for scalar, vector, and matrix input-output, storage, and graphic display, and possess an interrelated descriptive vocabulary. DELPHI is an interactive English-language command system that maintains basic data structures and alters them by activating sequences of basic utilities. MAZNAI is a gamma-ray spectral unfolding code for NaI data with discrete and continuum components, with extremely powerful peak recognition and resolution enhancement capabilities. MAZAS is a high-speed line-strength estimation code for NaI data with predetermined line energies. 7 figures
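At its core, spectral unfolding inverts the detector response: measured channel counts y relate to the incident spectrum φ through a response matrix R, with y = Rφ. A deliberately tiny sketch with invented numbers (real codes like those described here add smoothing, positivity constraints, and peak models):

```python
def unfold_2x2(R, y):
    """Solve y = R @ phi exactly for a 2x2 response matrix R."""
    (a, b), (c, d) = R
    det = a * d - b * c
    return [(d * y[0] - b * y[1]) / det,
            (a * y[1] - c * y[0]) / det]

# Invented 2-channel response (rows: measured channels) and measured counts.
R = [[0.9, 0.2],   # channel 1 sees 90% of bin 1, 20% of bin 2
     [0.1, 0.8]]   # channel 2 sees 10% of bin 1, 80% of bin 2
y = [110.0, 90.0]
phi = unfold_2x2(R, y)
print(phi)  # approximately [100.0, 100.0]
```

With many channels the direct inverse amplifies statistical noise, which is why practical unfolding codes regularize rather than invert exactly.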

  16. Directional Unfolded Source Term (DUST) for Compton Cameras.

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Dean J.; Horne, Steven M.; O'Brien, Sean; Thoreson, Gregory G.

    2018-03-01

    A Directional Unfolded Source Term (DUST) algorithm was developed to enable improved spectral analysis capabilities using data collected by Compton cameras. Achieving this objective required modification of the detector response function in the Gamma Detector Response and Analysis Software (GADRAS). Experimental data that were collected in support of this work include measurements of calibration sources at a range of separation distances and cylindrical depleted uranium castings.

  17. The unfolding turmoil of 2007 - 2008: Lessons and responses

    OpenAIRE

    Cohen, Ben; Remolona, Eli

    2008-01-01

    While the unfolding financial turmoil has involved new elements, more fundamental elements have remained the same. New elements include structured credit, the originate-to-distribute business model and the tri-party repurchase agreement. The recurrence of crises reflects a basic procyclicality in the system, which is characterized by a build-up of risk-taking and leverage in good times and an abrupt withdrawal from risk and an unwinding of leverage in bad times. To deal with the adverse liqui...

  18. Unfolding education for sustainable development as didactic thinking and practice

    DEFF Research Database (Denmark)

    Madsen, Katrine Dahl

    2013-01-01

    This article’s primary objective is to unfold how teachers translate education for sustainable development (ESD) in a school context. The article argues that exploring tensions, ruptures and openings apparent in this meeting is crucial for the development of existing teaching practices in relation … the analytical foundation; thus it is the practices as seen from the ‘inside’. Furthermore, ESD practices are considered in a broader societal perspective, pointing to the critical power of the practice lens.

  19. Measurement of the unfolded protein response (UPR) in monocytes.

    LENUS (Irish Health Repository)

    Carroll, Tomás P

    2011-01-01

    In mammalian cells, the primary function of the endoplasmic reticulum (ER) is to synthesize and assemble membrane and secreted proteins. As the main site of protein folding and posttranslational modification in the cell, the ER operates a highly conserved quality control system to ensure only correctly assembled proteins exit the ER and misfolded and unfolded proteins are retained for disposal. Any disruption in the equilibrium of the ER engages a multifaceted intracellular signaling pathway termed the unfolded protein response (UPR) to restore normal conditions in the cell. A variety of pathological conditions can induce activation of the UPR, including neurodegenerative disorders such as Parkinson's disease, metabolic disorders such as atherosclerosis, and conformational disorders such as cystic fibrosis. Conformational disorders are characterized by mutations that modify the final structure of a protein and any cells that express abnormal protein risk functional impairment. The monocyte is an important and long-lived immune cell and acts as a key immunological orchestrator, dictating the intensity and duration of the host immune response. Monocytes expressing misfolded or unfolded protein may exhibit UPR activation and this can compromise the host immune system. Here, we describe in detail methods and protocols for the examination of UPR activation in peripheral blood monocytes. This guide should provide new investigators to the field with a broad understanding of the tools required to investigate the UPR in the monocyte.


  1. The unfolded protein response in ischemic heart disease.

    Science.gov (United States)

    Wang, Xiaoding; Xu, Lin; Gillette, Thomas G; Jiang, Xuejun; Wang, Zhao V

    2018-02-20

    Ischemic heart disease is a severe stress condition that causes extensive pathological alterations and triggers cardiac cell death. Accumulating evidence suggests that the unfolded protein response (UPR) is strongly induced by myocardial ischemia. The UPR is an evolutionarily conserved cellular response to cope with protein-folding stress, from yeast to mammals. Endoplasmic reticulum (ER) transmembrane sensors detect the accumulation of unfolded proteins and stimulate a signaling network to accommodate unfolded and misfolded proteins. Distinct mechanisms participate in the activation of three major signal pathways, viz. protein kinase RNA-like ER kinase, inositol-requiring protein 1, and activating transcription factor 6, to transiently suppress protein translation, enhance protein folding capacity of the ER, and augment ER-associated degradation to refold denatured proteins and restore cellular homeostasis. However, if the stress is severe and persistent, the UPR elicits inflammatory and apoptotic pathways to eliminate terminally affected cells. The ER is therefore recognized as a vitally important organelle that determines cell survival or death. Recent studies indicate the UPR plays critical roles in the pathophysiology of ischemic heart disease. The three signaling branches may elicit distinct but overlapping effects in cardiac response to ischemia. Here, we outline the findings and discuss the mechanisms of action and therapeutic potentials of the UPR in the treatment of ischemic heart disease.

  2. ATP-induced noncooperative thermal unfolding of hen lysozyme

    International Nuclear Information System (INIS)

    Liu, Honglin; Yin, Peidong; He, Shengnan; Sun, Zhihu; Tao, Ye; Huang, Yan; Zhuang, Hao; Zhang, Guobin; Wei, Shiqiang

    2010-01-01

    To understand the role of ATP underlying the enhanced amyloidosis of hen egg white lysozyme (HEWL), synchrotron radiation circular dichroism, combined with tryptophan fluorescence, dynamic light-scattering, and differential scanning calorimetry, is used to examine alterations of the conformation and thermal unfolding pathway of HEWL in the presence of ATP, Mg²⁺-ATP, ADP, AMP, etc. It is revealed that the binding of ATP to HEWL through strong electrostatic interaction changes the secondary structures of HEWL and moves the exposed residue W62 into a hydrophobic environment. This alteration of W62 decreases the β-domain stability of HEWL, induces noncooperative unfolding of the secondary structures, and produces a partially unfolded intermediate. This intermediate, containing relatively rich α-helix and fewer β-sheet structures, has a great tendency to aggregate. The results imply that the ease of aggregation of HEWL is related to the extent of denaturation of the amyloidogenic region, rather than to the electrostatic neutralizing effect or a monomeric β-sheet-enriched intermediate.

  3. Processing Electromyographic Signals to Recognize Words

    Science.gov (United States)

    Jorgensen, C. C.; Lee, D. D.

    2009-01-01

    A recently invented speech-recognition method applies to words that are articulated by means of the tongue and throat muscles but are otherwise not voiced or, at most, are spoken sotto voce. This method could satisfy a need for speech recognition under circumstances in which normal audible speech is difficult, poses a hazard, is disturbing to listeners, or compromises privacy. The method could also be used to augment traditional speech recognition by providing an additional source of information about articulator activity. The method can be characterized as intermediate between (1) conventional speech recognition through processing of voice sounds and (2) a method, not yet developed, of processing electroencephalographic signals to extract unspoken words directly from thoughts. This method involves computational processing of digitized electromyographic (EMG) signals from muscle innervation acquired by surface electrodes under a subject's chin near the tongue and on the side of the subject's throat near the larynx. After preprocessing, digitization, and feature extraction, EMG signals are processed by a neural-network pattern classifier, implemented in software, that performs the bulk of the recognition task as described.
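
The preprocess-extract-classify pipeline described in this record can be sketched in a few lines. This is an illustrative toy only, not the actual system: the window-level features (RMS energy and zero-crossing rate) and the nearest-centroid classifier standing in for the neural-network pattern classifier are assumptions for demonstration.

```python
import math

def extract_features(signal, window=50):
    """Split a 1-D EMG sample stream into fixed windows; return (RMS, ZCR) per window."""
    feats = []
    for start in range(0, len(signal) - window + 1, window):
        w = signal[start:start + window]
        rms = math.sqrt(sum(x * x for x in w) / window)          # energy feature
        zcr = sum(1 for a, b in zip(w, w[1:]) if a * b < 0) / window  # sign changes
        feats.append((rms, zcr))
    return feats

def train_centroids(labelled_examples):
    """Average the feature vectors of each word class into one centroid."""
    centroids = {}
    for label, signal in labelled_examples:
        flat = [v for pair in extract_features(signal) for v in pair]
        acc, n = centroids.get(label, ([0.0] * len(flat), 0))
        centroids[label] = ([a + v for a, v in zip(acc, flat)], n + 1)
    return {lbl: [a / n for a in acc] for lbl, (acc, n) in centroids.items()}

def classify(signal, centroids):
    """Assign a signal to the word class with the nearest feature centroid."""
    flat = [v for pair in extract_features(signal) for v in pair]
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(centroids[lbl], flat)))
```

A real system would use richer spectral features and a trained neural network, but the overall flow (windowing, feature extraction, pattern classification) is the same.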

  4. Do handwritten words magnify lexical effects in visual word recognition?

    Science.gov (United States)

    Perea, Manuel; Gil-López, Cristina; Beléndez, Victoria; Carreiras, Manuel

    2016-01-01

    An examination of how the word recognition system is able to process handwritten words is fundamental to formulate a comprehensive model of visual word recognition. Previous research has revealed that the magnitude of lexical effects (e.g., the word-frequency effect) is greater with handwritten words than with printed words. In the present lexical decision experiments, we examined whether the quality of handwritten words moderates the recruitment of top-down feedback, as reflected in word-frequency effects. Results showed a reading cost for difficult-to-read and easy-to-read handwritten words relative to printed words. But the critical finding was that difficult-to-read handwritten words, but not easy-to-read handwritten words, showed a greater word-frequency effect than printed words. Therefore, the inherent physical variability of handwritten words does not necessarily boost the magnitude of lexical effects.

  5. Syntax and reading comprehension: a meta-analysis of different spoken-syntax assessments.

    Science.gov (United States)

    Brimo, Danielle; Lund, Emily; Sapp, Alysha

    2017-12-18

    Syntax is a language skill purported to support children's reading comprehension. However, researchers who have examined whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments report inconsistent results. This meta-analysis aimed to determine whether differences in how syntax is measured affect whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments. Studies that included a group comparison design, children with average and below-average reading comprehension, and a spoken-syntax assessment were selected for review. Fourteen articles from a total of 1281 reviewed met the inclusionary criteria. The 14 articles were coded for the age of the children, score on the reading comprehension assessment, type of spoken-syntax assessment, type of syntax construct measured, and score on the spoken-syntax assessment. A random-effects model was used to analyze the difference between the effect sizes of the types of spoken-syntax assessments and the difference between the effect sizes of the syntax constructs measured. There was a significant difference between children with average and below-average reading comprehension on spoken-syntax assessments. The two groups scored significantly differently when norm-referenced and researcher-created assessments were compared. However, when the type of construct was compared, children with average and below-average reading comprehension scored significantly differently on assessments that measured knowledge of spoken syntax, but not on assessments that measured awareness of spoken syntax. The results of this meta-analysis confirmed that the type of spoken-syntax assessment, whether norm-referenced or researcher-created, did not explain why some researchers reported no significant differences between children with average and below-average reading comprehension.
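
A random-effects synthesis of the kind described above is commonly computed with the DerSimonian-Laird estimator. The sketch below is a generic illustration of that estimator, not the authors' actual computation, and the effect sizes used to exercise it are invented.

```python
def dersimonian_laird(effects, variances):
    """Pool study effect sizes under a random-effects model (DerSimonian-Laird)."""
    # fixed-effect (inverse-variance) weights
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)          # between-study variance estimate
    # random-effects weights incorporate tau^2
    wr = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(wr, effects)) / sum(wr)
    se = (1.0 / sum(wr)) ** 0.5
    return pooled, tau2, se
```

When the studies are homogeneous (Q ≤ df), τ² is truncated to zero and the estimate reduces to the fixed-effect pooled mean.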

  6. A Spoken Dialogue System for Command and Control

    Science.gov (United States)

    2012-10-01

    2.4.2.2 The Syntax ...comprised of three core components: lexicon, syntax, and declarations. Each module is briefly discussed below. 2.4.2.1 The Lexicon The lexicon consists...wh- word, etc. Each lexical entry is specified for its word-(sub-)class, and relevant semantic, grammatical, morphological and graphemic properties

  7. Autosegmental Representation of Epenthesis in the Spoken French ...

    African Journals Online (AJOL)

    Nneka Umera-Okeke

    Many languages insert a so-called prop vowel at the end of a word to avoid the loss of a non-permitted cluster. This cluster can come about due ..... reading materials used during the course of our field work and these are shown in the ..... Prosodic domains in phonology: Sanskrit revisited, Aronoff, M. &.

  8. Word graphs: The second set

    NARCIS (Netherlands)

    Hoede, C.; Liu, X.

    1998-01-01

    In continuation of the paper of Hoede and Li on word graphs for a set of prepositions, word graphs are given for adjectives, adverbs and Chinese classifier words. It is argued that these three classes of words belong to a general class of words that may be called adwords. These words express the

  9. ROMANCE LOAN WORDS IN HRELJIĆ BEDROOM

    Directory of Open Access Journals (Sweden)

    Lina Pliško

    2016-01-01

    In this paper we present the immediate etymology (etymologia proxima) of twenty words of Romance origin belonging to the semantic fields of furniture (5), bed parts (4), bed linen (5), and decorations and certain objects found in the bedroom (6). The words have been obtained through field work in the Hreljići area, and attestations of these words were sought in dictionaries of the speech of the north Adriatic (the Boljun, Grobnik, Labin, Medulin, and Roveria dialects) as well as the south Adriatic, primarily island, regions (Ugljan, Pag, Brač, Hvar). On the basis of the analysis of all the words obtained, it may be concluded that only two words from the questionnaire are of Slavic origin: postelja and punjava, and that, according to the immediate etymology, twenty words are of Istro-Venetian origin, i.e., from the Istrian variants of the Venetian dialect, which has been spoken in the region of Istria for centuries. This idiom is still spoken by many today, although it no longer serves as a lingua franca among the several ethnic and language groups living in the area as it once did: nowadays its role has been taken over by the standard Croatian language. By comparing the words obtained from Hreljići with those from other Čakavian dialects in Istria (the Medulin, Labin, Boljun and Roverian dialects, and Grobnik), as well as those from the southern Adriatic islands (Novlja on the island of Pag, Kukljica on the island of Ugljan, Brač, and Pitava and Zavala on Hvar), we have concluded that many words are used and have been preserved in the same form and with the same meanings that can be found in the dialect of Hreljići. In all the dictionaries we have consulted, nine words and their variants corresponding to those in Hreljići have been attested: armar/armarun/ormarun, lampadina/lampa, koltrina, šusta/šušta, kučeta/kočeta, štramac, lancun, kušin, intima/intimela. Two attested words of Venetian origin have only been found in certain Istrian idioms:

  10. Words as cultivators of others' minds.

    Science.gov (United States)

    Schilhab, Theresa S S

    2015-01-01

    The embodied-grounded view of cognition and language holds that sensorimotor experiences in the form of 're-enactments' or 'simulations' are significant to the individual's development of concepts and competent language use. However, a typical objection to the explanatory force of this view is that, in everyday life, we engage in linguistic exchanges about much more than might be directly accessible to our senses. For instance, when knowledge-sharing occurs as part of deep conversations between a teacher and student, language is the salient tool by which to obtain understanding, through the unfolding of explanations. Here, the acquisition of knowledge is realized through language, and the constitution of knowledge seems entirely linguistic. In this paper, based on a review of selected studies within contemporary embodied cognitive science, I propose that such linguistic exchanges, though occurring independently of direct experience, are in fact disguised forms of embodied cognition, leading to the reconciliation of the opposing views. I suggest that, in conversation, interlocutors use Words as Cultivators (WAC) of other minds as a direct result of their embodied-grounded origin, rendering WAC a radical interpretation of the Words as social Tools (WAT) proposal. The WAC hypothesis endorses the view of language as dynamic, continuously integrating with, and negotiating, cognitive processes in the individual. One such dynamic feature results from the 'linguification process', a term by which I refer to the socially produced mapping of a word to its referent which, mediated by the interlocutor, turns words into cultivators of others' minds. In support of the linguification process hypothesis and WAC, I review relevant embodied-grounded research, and selected studies of instructed fear conditioning and guided imagery.

  11. Nurturing a lexical legacy: reading experience is critical for the development of word reading skill

    Science.gov (United States)

    Nation, Kate

    2017-12-01

    The scientific study of reading has taught us much about the beginnings of reading in childhood, with clear evidence that the gateway to reading opens when children are able to decode, or 'sound out' written words. Similarly, there is a large evidence base charting the cognitive processes that characterise skilled word recognition in adults. Less understood is how children develop word reading expertise. Once basic reading skills are in place, what factors are critical for children to move from novice to expert? This paper outlines the role of reading experience in this transition. Encountering individual words in text provides opportunities for children to refine their knowledge about how spelling represents spoken language. Alongside this, however, reading experience provides much more than repeated exposure to individual words in isolation. According to the lexical legacy perspective, outlined in this paper, experiencing words in diverse and meaningful language environments is critical for the development of word reading skill. At its heart is the idea that reading provides exposure to words in many different contexts, episodes and experiences which, over time, sum to a rich and nuanced database about their lexical history within an individual's experience. These rich and diverse encounters bring about local variation at the word level: a lexical legacy that is measurable during word reading behaviour, even in skilled adults.

  12. Word 2010 Bible

    CERN Document Server

    Tyson, Herb

    2010-01-01

    In-depth guidance on Word 2010 from a Microsoft MVP. Microsoft Word 2010 arrives with many changes and improvements, and this comprehensive guide from Microsoft MVP Herb Tyson is your expert, one-stop resource for it all. Master Word's new features such as a new interface and customized Ribbon, major new productivity-boosting collaboration tools, how to publish directly to blogs, how to work with XML, and much more. Follow step-by-step instructions and best practices, avoid pitfalls, discover practical workarounds, and get the very most out of your new Word 2010 with this packed guide. Coverag

  13. Perception of words and pitch patterns in song and speech

    Directory of Open Access Journals (Sweden)

    Julia Merrill

    2012-03-01

    This fMRI study examines shared and distinct cortical areas involved in the auditory perception of song and speech at the level of their underlying constituents: words, pitch and rhythm. Univariate and multivariate analyses were performed on the brain activity patterns of six conditions, arranged in a subtractive hierarchy: sung sentences including words, pitch and rhythm; hummed speech prosody and song melody containing only pitch patterns and rhythm; as well as the pure musical or speech rhythm. Systematic contrasts between these balanced conditions following their hierarchical organization showed a great overlap between song and speech at all levels in the bilateral temporal lobe, but suggested a differential role of the inferior frontal gyrus (IFG) and intraparietal sulcus (IPS) in processing song and speech. The left IFG was involved in word- and pitch-related processing in speech, the right IFG in processing pitch in song. Furthermore, the IPS showed sensitivity to discrete pitch relations in song as opposed to the gliding pitch in speech. Finally, the superior temporal gyrus and premotor cortex coded for general differences between words and pitch patterns, irrespective of whether they were sung or spoken. Thus, song and speech share many features which are reflected in a fundamental similarity of brain areas involved in their perception. However, fine-grained acoustic differences on word and pitch level are reflected in the activity of IFG and IPS.

  14. Targeted memory reactivation of newly learned words during sleep triggers REM-mediated integration of new memories and existing knowledge.

    Science.gov (United States)

    Tamminen, Jakke; Lambon Ralph, Matthew A; Lewis, Penelope A

    2017-01-01

    Recent memories are spontaneously reactivated during sleep, leading to their gradual strengthening. Whether reactivation also mediates the integration of new memories with existing knowledge is unknown. We used targeted memory reactivation (TMR) during slow-wave sleep (SWS) to selectively cue reactivation of newly learned spoken words. While integration of new words into their phonological neighbourhood was observed in both cued and uncued words after sleep, TMR-triggered integration was predicted by the time spent in rapid eye movement (REM) sleep. These data support complementary roles for SWS and REM in memory consolidation.

  15. Different neurophysiological mechanisms underlying word and rule extraction from speech.

    Directory of Open Access Journals (Sweden)

    Ruth De Diego Balaguer

    The initial process of identifying words from spoken language and the detection of more subtle regularities underlying their structure are mandatory processes for language acquisition. Little is known about the cognitive mechanisms that allow us to extract these two types of information and their specific time-course of acquisition following initial contact with a new language. We report time-related electrophysiological changes that occurred while participants learned an artificial language. These changes strongly correlated with the discovery of the structural rules embedded in the words. These changes were clearly different from those related to word learning and occurred during the first minutes of exposure. There is a functional distinction in the nature of the electrophysiological signals during acquisition: an increase in negativity (N400) in the central electrodes is related to word learning, and development of a frontal positivity (P2) is related to rule learning. In addition, the results of an online implicit test and a post-learning test indicate that, once the rules of the language have been acquired, new words following the rule are processed as words of the language. By contrast, new words violating the rule induce syntax-related electrophysiological responses when inserted online in the stream (an early frontal negativity followed by a late posterior positivity) and clear lexical effects when presented in isolation (N400 modulation). The present study provides direct evidence suggesting that the mechanisms to extract words and structural dependencies from continuous speech are functionally segregated. When these mechanisms are engaged, the electrophysiological marker associated with rule learning appears very quickly, during the earliest phases of exposure to a new language.

  16. The Frequency and Functions of "Just" in British Academic Spoken English

    Science.gov (United States)

    Grant, Lynn E.

    2011-01-01

    This study investigates the frequency and functions of "just" in British academic spoken English. It adopts the meanings of "just" established by Lindemann and Mauranen, 2001, taken from the occurrences of "just" across five speech events in the Michigan Corpus of Academic Spoken English (MICASE) to see if they also apply to occurrences of "just"…

  17. Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts

    Science.gov (United States)

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…

  18. The Listening and Spoken Language Data Repository: Design and Project Overview

    Science.gov (United States)

    Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.

    2018-01-01

    Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…

  19. Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study

    Science.gov (United States)

    Moeller, Aleidine J.; Theiler, Janine

    2014-01-01

    Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…

  20. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants

    NARCIS (Netherlands)

    Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M. van; Keuning, J.; Knoors, H.; Verhoeven, L.

    2016-01-01

    BACKGROUND: Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. AIMS: In the present study, we examined the extent of delay in lexical and morphosyntactic spoken

  2. Elementary School Students' Spoken Activities and Their Responses in Math Learning by Peer-Tutoring

    Science.gov (United States)

    Baiduri

    2017-01-01

    Students' activities in the learning process are very important to indicate the quality of learning process. One of which is spoken activity. This study was intended to analyze the elementary school students' spoken activities and their responses in joining Math learning process by peer-tutoring. Descriptive qualitative design was piloted by means…

  3. Using the TED Talks to Evaluate Spoken Post-editing of Machine Translation

    DEFF Research Database (Denmark)

    Liyanapathirana, Jeevanthi; Popescu-Belis, Andrei

    2016-01-01

    To obtain a data set with spoken post-editing information, we use the French version of TED talks as the source texts submitted to MT, and the spoken English counterparts as their corrections, which are submitted to an ASR system. We experiment with various levels of artificial ASR noise and also...

  4. The oral and written side of word production in young and older adults: generation of lexical neighbors.

    Science.gov (United States)

    Robert, Christelle; Mathey, Stéphanie

    2018-03-01

    The aim of the present study was to investigate the effects of aging on both spoken and written word production by using analogous tasks. To do so, a phonological neighbor generation task (Experiment 1) and an orthographic neighbor generation task (Experiment 2) were designed. In both tasks, young and older participants were given a word and had to generate as many words as they could think of by changing one phoneme in the target word (Experiment 1) or one letter in the target word (Experiment 2). The data of the two experiments were consistent, showing that the older adults generated fewer lexical neighbors and made more errors than the young adults. For both groups, the number of words produced, as well as their lexical frequency, decreased as a function of time. These data strongly support the assumption of a symmetrical age-related decline in the transmission of activation within the phonological and orthographic systems.
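
The neighbor generation tasks above can be simulated against a machine-readable lexicon: an orthographic neighbor is any lexicon word that differs from the target by exactly one letter in the same position (Experiment 2), and a phonological version would operate on phoneme strings in the same way (Experiment 1). A toy sketch, with an invented miniature lexicon for illustration:

```python
def neighbors(target, lexicon):
    """Return lexicon entries differing from target by exactly one symbol."""
    def one_apart(a, b):
        # same length, exactly one substituted position
        return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1
    return sorted(w for w in lexicon if one_apart(target, w))

toy_lexicon = {"cat", "cot", "bat", "car", "dog", "cart"}
# neighbors("cat", toy_lexicon) -> ["bat", "car", "cot"]
```

For a phonological run, the same function can be applied to tuples of phoneme symbols instead of letter strings.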

  5. Word of Jeremiah - Word of God

    DEFF Research Database (Denmark)

    Holt, Else Kragelund

    2007-01-01

    The article examines the relationship between God, prophet and the people in the Book of Jeremiah. The analysis shows a close connection, almost an identification, between the divine word (and consequently God himself) and the prophet, so that the prophet becomes a metaphor for God. This is done...

  6. Lexical access in children with hearing loss or specific language impairment, using the cross-modal picture–word interference paradigm

    NARCIS (Netherlands)

    Hoog, B.E. de; Langereis, M.C.; Weerdenburg, M.W.C. van; Knoors, H.E.T.; Verhoeven, L.T.W.

    2015-01-01

    In this study we compared lexical access to spoken words in 25 deaf children with cochlear implants (CIs), 13 hard-of-hearing (HoH) children and 20 children with specific language impairment (SLI). Twenty-one age-matched typically developing children served as controls. The children with CIs and the

  8. When Diglossia Meets Dyslexia: The Effect of Diglossia on Voweled and Unvoweled Word Reading among Native Arabic-Speaking Dyslexic Children

    Science.gov (United States)

    Schiff, Rachel; Saiegh-Haddad, Elinor

    2017-01-01

    Native Arabic speakers read in a language variety that is different from the one they use for everyday speech. The aim of the present study was: (1) to examine Spoken Arabic (SpA) and Standard Arabic (StA) voweled and unvoweled word reading among native-speaking sixth graders with developmental dyslexia; and (2) to determine whether SpA reading…

  9. Tracking the time course of lexical access in orthographic production: An event-related potential study of word frequency effects in written picture naming.

    Science.gov (United States)

    Qu, Qingqing; Zhang, Qingfang; Damian, Markus F

    2016-08-01

    Previous studies of spoken picture naming using event-related potentials (ERPs) have shown that speakers initiate lexical access within 200ms after stimulus onset. In the present study, we investigated the time course of lexical access in written, rather than spoken, word production. Chinese participants wrote target object names which varied in word frequency, and written naming times and ERPs were measured. Writing latencies exhibited a classical frequency effect (faster responses for high- than for low-frequency names). More importantly, ERP results revealed that electrophysiological activity elicited by high- and low-frequency target names started to diverge as early as 168ms post picture onset. We conclude that lexical access during written word production is initiated within 200ms after picture onset. This estimate is compatible with previous studies on spoken production, which likewise showed a rapid onset of lexical access (i.e., within 200ms after stimulus onset). We suggest that written and spoken word production share the lexicalization stage.

  10. Word Processing for All.

    Science.gov (United States)

    Abbott, Chris

    1991-01-01

    Pupils with special educational needs are finding that the use of word processors can give them a new confidence and pride in their own abilities. This article describes the use of such devices as the "mouse," on-screen word lists, spell checkers, and overlay keyboards. (JDD)

  12. Word Translation Entropy

    DEFF Research Database (Denmark)

    Schaeffer, Moritz; Dragsted, Barbara; Hvelplund, Kristian Tangsgaard

    2016-01-01

    This study reports on an investigation into the relationship between the number of translation alternatives for a single word and eye movements on the source text. In addition, the effect of word order differences between source and target text on eye movements on the source text is studied. In p...

  13. Visual Word Ambiguity

    NARCIS (Netherlands)

    van Gemert, J.C.; Veenman, C.J.; Smeulders, A.W.M.; Geusebroek, J.M.

    2010-01-01

    This paper studies automatic image classification by modeling soft assignment in the popular codebook model. The codebook model describes an image as a bag of discrete visual words selected from a vocabulary, where the frequency distributions of visual words in an image allow classification. One
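
The codebook model summarized above can be illustrated concretely: each image descriptor is assigned to vocabulary words either hard (the nearest word only) or soft (kernel-weighted over all words), and the image is represented by the resulting frequency histogram. A minimal sketch with a hypothetical two-word vocabulary; the Gaussian-kernel soft assignment follows the general idea of soft codebook assignment, not the paper's exact formulation:

```python
import math

def assign_histograms(descriptors, vocabulary, sigma=1.0):
    """Build hard- and soft-assignment codebook histograms for one image.

    Hard assignment counts only the nearest visual word per descriptor;
    soft assignment spreads each descriptor over all words with a
    Gaussian kernel, modeling the ambiguity of the word choice.
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    hard = [0.0] * len(vocabulary)
    soft = [0.0] * len(vocabulary)
    for d in descriptors:
        dists = [dist2(d, v) for v in vocabulary]
        hard[dists.index(min(dists))] += 1.0            # winner takes all
        weights = [math.exp(-q / (2.0 * sigma ** 2)) for q in dists]
        total = sum(weights)
        for i, w in enumerate(weights):
            soft[i] += w / total                        # distribute one unit of mass
    n = float(len(descriptors))
    # normalize counts to the frequency distributions used for classification
    return [h / n for h in hard], [s / n for s in soft]
```

In practice the vocabulary would come from clustering (e.g., k-means over training descriptors) and the histograms would feed a classifier.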

  14. A randomized trial comparison of the effects of verbal and pictorial naturalistic communication strategies on spoken language for young children with autism.

    Science.gov (United States)

    Schreibman, Laura; Stahmer, Aubyn C

    2014-05-01

    Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally-based intervention, Pivotal Response Training (PRT), to a pictorially-based behavioral intervention, the Picture Exchange Communication System (PECS), on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.

  15. Refolding of SDS-Unfolded Proteins by Nonionic Surfactants.

    Science.gov (United States)

    Kaspersen, Jørn Døvling; Søndergaard, Anne; Madsen, Daniel Jhaf; Otzen, Daniel E; Pedersen, Jan Skov

    2017-04-25

    The strong and usually denaturing interaction between anionic surfactants (AS) and proteins/enzymes has both benefits and drawbacks: for example, it is put to good use in electrophoretic mass determinations but limits enzyme efficiency in detergent formulations. Therefore, studies of the interactions between proteins and AS as well as nonionic surfactants (NIS) are of both basic and applied relevance. The AS sodium dodecyl sulfate (SDS) denatures and unfolds globular proteins under most conditions. In contrast, NIS such as octaethylene glycol monododecyl ether (C12E8) and dodecyl maltoside (DDM) protect bovine serum albumin (BSA) from unfolding in SDS. Membrane proteins denatured in SDS can also be refolded by addition of NIS. Here, we investigate whether globular proteins unfolded by SDS can be refolded upon addition of C12E8 and DDM. Four proteins, BSA, α-lactalbumin (αLA), lysozyme, and β-lactoglobulin (βLG), were studied by small-angle X-ray scattering and both near- and far-UV circular dichroism. We attempted to refold all proteins and their SDS complexes by the addition of C12E8, while DDM was additionally added to SDS-denatured αLA and βLG. Except for αLA, the proteins did not interact with NIS alone. For all proteins, the addition of NIS to the protein-SDS samples resulted in extraction of the SDS from the protein-SDS complexes and refolding of βLG, BSA, and lysozyme, while αLA changed to its NIS-bound state instead of the native state. We conclude that NIS competes with globular proteins for association with SDS, making it possible to release and refold SDS-denatured proteins by adding sufficient amounts of NIS, unless the protein also interacts with NIS alone.

  16. WordPress Bible

    CERN Document Server

    Brazell, Aaron

    2011-01-01

    Get the latest word on the biggest self-hosted blogging tool on the market. Within a week of the announcement of WordPress 3.0, it had been downloaded over a million times. Now you can get on the bandwagon of this popular open-source blogging tool with WordPress Bible, 2nd Edition. Whether you're a casual blogger or a programming pro, this comprehensive guide covers the latest version of WordPress, from the basics through advanced application development. If you want to thoroughly learn WordPress, this is the book you need to succeed. Explores the principles of blogging, marketing, and social media

  17. Spoken commands control robot that handles radioactive materials

    International Nuclear Information System (INIS)

    Phelan, P.F.; Keddy, C.; Beugelsdojk, T.J.

    1989-01-01

    Several robotic systems have been developed by Los Alamos National Laboratory to handle radioactive material. Because of safety considerations, the robotic system must be under direct human supervision and interactive control continuously. In this paper, we describe the implementation of a voice-recognition system that permits this control, yet allows the robot to perform complex preprogrammed manipulations without the operator's intervention. To provide better interactive control, we connected a speech synthesis unit to the robot's control computer, which provides audible feedback to the operator. Thus, upon completion of a task, or if an emergency arises, an appropriate spoken message can be reported by the control computer. The training, programming, and operation of this commercially available system are discussed, as are the practical problems encountered during operations.

  18. Computational Interpersonal Communication: Communication Studies and Spoken Dialogue Systems

    Directory of Open Access Journals (Sweden)

    David J. Gunkel

    2016-09-01

    Full Text Available With the advent of spoken dialogue systems (SDS), communication can no longer be considered a human-to-human transaction. It now involves machines. These mechanisms are not just a medium through which human messages pass; they now occupy the position of the other in social interactions. But the development of robust and efficient conversational agents is not just an engineering challenge. It also depends on research in human conversational behavior. It is the thesis of this paper that communication studies is best situated to respond to this need. The paper argues (1) that research in communication can supply the information necessary to respond to and resolve many of the open problems in SDS engineering, and (2) that the development of SDS applications can provide the discipline of communication with unique opportunities to test extant theory and verify experimental results. We call this new area of interdisciplinary collaboration "computational interpersonal communication" (CIC).

  19. Predicting user mental states in spoken dialogue systems

    Science.gov (United States)

    Callejas, Zoraida; Griol, David; López-Cózar, Ramón

    2011-12-01

    In this paper we propose a method for predicting the user mental state for the development of more efficient and usable spoken dialogue systems. This prediction, carried out for each user turn in the dialogue, makes it possible to adapt the system dynamically to the user needs. The mental state is built on the basis of the emotional state of the user and their intention, and is recognized by means of a module conceived as an intermediate phase between natural language understanding and the dialogue management in the architecture of the systems. We have implemented the method in the UAH system, for which the evaluation results with both simulated and real users show that taking into account the user's mental state improves system performance as well as its perceived quality.

  20. Predicting user mental states in spoken dialogue systems

    Directory of Open Access Journals (Sweden)

    Griol David

    2011-01-01

    Full Text Available In this paper we propose a method for predicting the user mental state for the development of more efficient and usable spoken dialogue systems. This prediction, carried out for each user turn in the dialogue, makes it possible to adapt the system dynamically to the user needs. The mental state is built on the basis of the emotional state of the user and their intention, and is recognized by means of a module conceived as an intermediate phase between natural language understanding and the dialogue management in the architecture of the systems. We have implemented the method in the UAH system, for which the evaluation results with both simulated and real users show that taking into account the user's mental state improves system performance as well as its perceived quality.

  1. Evaluation of spectral unfolding techniques for neutron spectroscopy

    International Nuclear Information System (INIS)

    Sunden, Erik Andersson; Conroy, S.; Ericsson, G.; Johnson, M. Gatu; Giacomelli, L.; Hellesen, C.; Hjalmarsson, A.; Ronchi, E.; Sjoestrand, H.; Weiszflog, M.; Kaellne, J.; Gorini, G.; Tardocchi, M.

    2008-01-01

    The precision of the JET installations of MAXED, GRAVEL and the L-curve version of MAXED has been evaluated by using synthetic neutron spectra. The number of counts needed for the detector systems NE213 and MPR to keep the error of the MAXED-unfolded neutron spectra below 10% is ∼10⁶ and ∼10⁴, respectively. For GRAVEL the corresponding numbers are ∼10⁷ and ∼3·10⁴ for NE213 and MPR, respectively.
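
    The scaling of the required counts follows from Poisson statistics: the relative statistical error on N counts is roughly 1/√N. A minimal single-bin sketch (illustration only; the full-spectrum figures quoted above are larger because the error also propagates through the unfolding response matrix):

```python
import math

def counts_for_relative_error(rel_err):
    """Counts N needed so the Poisson error sqrt(N)/N = 1/sqrt(N)
    falls below the requested relative error."""
    return math.ceil(1.0 / rel_err ** 2)

# A single bin needs on the order of 100 counts for a 10% error.
print(counts_for_relative_error(0.10))  # → 100
```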

  2. Unfolding of Vortices into Topological Stripes in a Multiferroic Material

    Science.gov (United States)

    Wang, X.; Mostovoy, M.; Han, M. G.; Horibe, Y.; Aoki, T.; Zhu, Y.; Cheong, S.-W.

    2014-06-01

    Multiferroic hexagonal RMnO3 (R = rare earths) crystals exhibit dense networks of vortex lines at which six domain walls merge. While the domain walls can be readily moved with an applied electric field, the vortex cores so far have been impossible to control. Our experiments demonstrate that shear strain induces a Magnus-type force pulling vortices and antivortices in opposite directions and unfolding them into a topological stripe domain state. We discuss the analogy between this effect and the current-driven dynamics of vortices in superconductors and superfluids.

  3. Unfolding of neutron spectra from Godiva type critical assemblies

    International Nuclear Information System (INIS)

    Harvey, J.T.; Meason, J.L.; Wright, H.L.

    1976-01-01

    The results from three experiments conducted at the White Sands Missile Range Fast Burst Reactor Facility are discussed. The experiments were designed to measure the ''free-field'' neutron leakage spectrum and the neutron spectra from mildly perturbed environments. SAND-II was used to calculate the neutron spectrum utilizing several different trial input spectra for each experiment. Comparisons are made between the unfolded neutron spectrum for each trial input on the basis of the following parameters: average neutron energy (above 10 keV), integral fluence (above 10 keV), spectral index, and the hardness parameter φ_eq/φ.
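
    Two of the comparison parameters above can be computed directly from a binned spectrum. A minimal sketch (the bin energies and fluences are invented toy numbers; the actual SAND-II analysis uses its full energy-group structure):

```python
import numpy as np

def spectrum_parameters(energies, fluence, e_min=0.01):
    """Average neutron energy and integral fluence above e_min (MeV)
    for a binned spectrum; fluence entries are per-bin fluences."""
    energies = np.asarray(energies, dtype=float)
    fluence = np.asarray(fluence, dtype=float)
    mask = energies > e_min          # restrict to bins above 10 keV
    integral = fluence[mask].sum()
    e_avg = (energies[mask] * fluence[mask]).sum() / integral
    return e_avg, integral

# Toy 4-bin spectrum: one bin below the 10 keV cut, three above.
e_avg, phi = spectrum_parameters([0.005, 0.1, 1.0, 2.0],
                                 [1e10, 4e9, 2e9, 1e9])
```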

  4. Solving inverse problems with the unfolding program TRUEE: Examples in astroparticle physics

    International Nuclear Information System (INIS)

    Milke, N.; Doert, M.; Klepser, S.; Mazin, D.; Blobel, V.; Rhode, W.

    2013-01-01

    The unfolding program TRUEE is a software package for the numerical solution of inverse problems. The algorithm was first applied in the FORTRAN 77 program RUN. RUN is an event-based unfolding algorithm which makes use of the Tikhonov regularization. It has been tested and compared to different unfolding applications and stood out with notably stable results and reliable error estimation. TRUEE is a conversion of RUN to C++, which works within the powerful ROOT framework. The program has been extended for more user-friendliness and delivers unfolding results identical to those of RUN. Besides the simple installation of the software and the generation of graphics, new functions facilitate the user's choice of unfolding parameters and observables. In this paper, we introduce the new unfolding program and present its performance by applying it to two exemplary data sets from astroparticle physics, taken with the MAGIC telescopes and the IceCube neutrino detector, respectively.
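
    Tikhonov regularization, the core idea behind RUN/TRUEE, can be sketched in a few lines. This is an illustrative toy, not TRUEE's event-based algorithm: the response matrix, true spectrum, and regularization strength below are all invented.

```python
import numpy as np

def tikhonov_unfold(A, g, tau):
    """Regularized solution of g ≈ A f: minimizes
    ||A f - g||^2 + tau ||f||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + tau * np.eye(n), A.T @ g)

# Toy smearing matrix and spectrum; with exact data and small tau,
# the regularized solution recovers the input.
A = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
f_true = np.array([1.0, 3.0, 2.0])
f_est = tikhonov_unfold(A, A @ f_true, tau=1e-6)
```

    In a real unfolding, tau trades statistical noise against bias, which is exactly the parameter choice the TRUEE functions are said to assist with.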

  5. The role of partial knowledge in statistical word learning

    Science.gov (United States)

    Fricker, Damian C.; Yu, Chen; Smith, Linda B.

    2013-01-01

    A critical question about the nature of human learning is whether it is an all-or-none or a gradual, accumulative process. Associative and statistical theories of word learning rely critically on the latter assumption: that the process of learning a word's meaning unfolds over time. That is, learning the correct referent for a word involves the accumulation of partial knowledge across multiple instances. Some theories also make an even stronger claim: Partial knowledge of one word–object mapping can speed up the acquisition of other word–object mappings. We present three experiments that test and verify these claims by exposing learners to two consecutive blocks of cross-situational learning, in which half of the words and objects in the second block were those that participants failed to learn in Block 1. In line with an accumulative account, re-exposure to these mis-mapped items accelerated the acquisition of both previously experienced mappings and wholly new word–object mappings. But how does partial knowledge of some words speed the acquisition of others? We consider two hypotheses. First, partial knowledge of a word could reduce the amount of information required for it to reach threshold, and the supra-threshold mapping could subsequently aid in the acquisition of new mappings. Alternatively, partial knowledge of a word's meaning could be useful for disambiguating the meanings of other words even before the threshold of learning is reached. We construct and compare computational models embodying each of these hypotheses and show that the latter provides a better explanation of the empirical data. PMID:23702980
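
    The accumulative, threshold-based account can be illustrated with a toy cross-situational learner in which sub-threshold co-occurrence counts persist across blocks. The threshold value and the vocabulary are hypothetical; the authors' actual models are richer.

```python
from collections import defaultdict

class XSitLearner:
    """Toy cross-situational learner: accumulates word-object
    co-occurrence counts; a mapping counts as 'learned' once its
    count passes a threshold."""
    def __init__(self, threshold=3):
        self.counts = defaultdict(int)
        self.threshold = threshold

    def observe(self, words, objects):
        # Every word co-occurs with every object on an ambiguous trial.
        for w in words:
            for o in objects:
                self.counts[(w, o)] += 1

    def learned(self, word, obj):
        return self.counts[(word, obj)] >= self.threshold

learner = XSitLearner()
# Block 1: two trials leave 'dax'→ball below threshold...
learner.observe(["dax", "wug"], ["ball", "cup"])
learner.observe(["dax"], ["ball"])
assert not learner.learned("dax", "ball")
# ...but the partial count persists, so one more exposure suffices.
learner.observe(["dax"], ["ball"])
assert learner.learned("dax", "ball")
```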

  6. TEACHING TURKISH AS SPOKEN IN TURKEY TO TURKIC SPEAKERS - TÜRK DİLLİLERE TÜRKİYE TÜRKÇESİ ÖĞRETİMİ NASIL OLMALIDIR?

    Directory of Open Access Journals (Sweden)

    Ali TAŞTEKİN

    2015-12-01

    Full Text Available Attributing different titles to the activity of teaching Turkish to non-native speakers is related to the perspective of those who conduct this activity. If Turkish Language teaching centres are sub-units of Schools of Foreign Languages and Departments of Foreign Languages of our Universities or teachers have a foreign language background, then the title “Teaching Turkish as a Foreign Language” is adopted and claimed to be universal. In determining success at teaching and learning, the psychological perception of the educational activity and the associational power of the words used are far more important factors than the teacher, students, educational environment and educational tools. For this reason, avoiding the negative connotations of the adjective “foreign” in the activity of teaching foreigners Turkish as spoken in Turkey would be beneficial. In order for the activity of Teaching Turkish as Spoken in Turkey to Turkic Speakers to be successful, it is crucial to dwell on the formal and contextual quality of the books written for this purpose. Almost none of the course books and supplementary books in the field of teaching Turkish to non-native speakers has taken Teaching Turkish as Spoken in Turkey to Turkic Speakers into consideration. The books written for the purpose of teaching Turkish to non-speakers should be examined thoroughly in terms of content and method and should be organized in accordance with the purpose and level of readiness of the target audience. Activities of Teaching Turkish as Spoken in Turkey to Turkic Speakers are still conducted at public and private primary and secondary schools and colleges as well as private courses by self-educated teachers who are trained within a master-apprentice relationship. Turkic populations who had long been parted by necessity have found the opportunity to reunite and turn towards common objectives after the dissolution of The Union of Soviet Socialist Republics. This recent

  7. Unfolding Simulations of Holomyoglobin from Four Mammals: Identification of Intermediates and β-Sheet Formation from Partially Unfolded States

    DEFF Research Database (Denmark)

    Dasmeh, Pouria; Kepp, Kasper Planeta

    2013-01-01

    simulations of holoMb and the first comparative study of unfolding of protein orthologs from different species (sperm whale, pig, horse, and harbor seal). We also provide new interpretations of experimental mean molecular ellipticities of myoglobin intermediates, notably correcting for random coil and number...... of helices in intermediates. The simulated holoproteins at 310 K displayed structures and dynamics in agreement with crystal structures (Rg ∼1.48–1.51 nm, helicity ∼75%). At 400 K, heme was not lost, but some helix loss was observed in pig and horse, suggesting that these helices are less stable...

  8. Network Unfolding Map by Vertex-Edge Dynamics Modeling.

    Science.gov (United States)

    Verri, Filipe Alves Neto; Urio, Paulo Roberto; Zhao, Liang

    2018-02-01

    The emergence of collective dynamics in neural networks is a mechanism of the animal and human brain for information processing. In this paper, we develop a computational technique using distributed processing elements in a complex network, which are called particles, to solve semisupervised learning problems. Three actions govern the particles' dynamics: generation, walking, and absorption. Labeled vertices generate new particles that compete against rival particles for edge domination. Active particles randomly walk in the network until they are absorbed by either a rival vertex or an edge currently dominated by rival particles. The result from the model evolution consists of sets of edges arranged by the label dominance. Each set tends to form a connected subnetwork to represent a data class. Although the intrinsic dynamics of the model is a stochastic one, we prove that there exists a deterministic version with largely reduced computational complexity; specifically, with linear growth. Furthermore, the edge domination process corresponds to an unfolding map in such a way that edges "stretch" and "shrink" according to the vertex-edge dynamics. Consequently, the unfolding effect summarizes the relevant relationships between vertices and the uncovered data classes. The proposed model captures important details of connectivity patterns over the vertex-edge dynamics evolution, in contrast to the previous approaches, which focused on only vertex or only edge dynamics. Computer simulations reveal that the new model can identify nonlinear features in both real and artificial data, including boundaries between distinct classes and overlapping structures of data.
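
    The semisupervised setting the model addresses can be loosely illustrated with a generic deterministic label-spreading sketch on vertices. This is a simplification for illustration only, not the authors' vertex-edge particle model: the graph and seed labels below are invented.

```python
import numpy as np

def label_propagation(adj, seeds, iters=50):
    """Deterministic label spreading: each unlabeled vertex repeatedly
    takes the mean label score of its neighbors; seeds stay clamped."""
    n = len(adj)
    labels = sorted(set(seeds.values()))
    score = np.zeros((n, len(labels)))
    for v, lab in seeds.items():
        score[v, labels.index(lab)] = 1.0
    for _ in range(iters):
        new = np.zeros_like(score)
        for v, nbrs in adj.items():
            new[v] = score[nbrs].mean(axis=0)
        for v, lab in seeds.items():          # clamp labeled vertices
            new[v] = 0.0
            new[v, labels.index(lab)] = 1.0
        score = new
    return {v: labels[int(score[v].argmax())] for v in adj}

# Two triangles joined by a bridge edge (2-3); one seed per class.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
pred = label_propagation(adj, {0: "A", 5: "B"})
```

    Each triangle ends up carrying its seed's label, mirroring how the particle model's dominated edge sets form connected subnetworks per class.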

  9. Spectrum unfolding by the least-squares methods

    International Nuclear Information System (INIS)

    Perey, F.G.

    1977-01-01

    The method of least squares is briefly reviewed, and the conditions under which it may be used are stated. From this analysis, a least-squares approach to the solution of the dosimetry neutron spectrum unfolding problem is introduced. The mathematical solution to this least-squares problem is derived from the general solution. The existence of this solution is analyzed in some detail. A χ²-test is derived for the consistency of the input data which does not require the solution to be obtained first. The fact that the problem is technically nonlinear, but should be treated in general as a linear one, is argued. Therefore, the solution should not be obtained by iteration. Two interpretations are made for the solution of the code STAY'SL, which solves this least-squares problem. The relationship of the solution to this least-squares problem to those obtained currently by other methods of solving the dosimetry neutron spectrum unfolding problem is extensively discussed. It is shown that the least-squares method does not require more input information than would be needed by current methods in order to estimate the uncertainties in their solutions. From this discussion it is concluded that the proposed least-squares method does provide the best complete solution, with uncertainties, to the problem as it is understood now. Finally, some implications of this method are mentioned regarding future work required in order to exploit its potential fully.
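
    The adjustment described here can be sketched as a generalized least-squares update in the spirit of STAY'SL (the actual code differs in detail; the toy prior, response, and covariances below are invented). Note that the χ² consistency value is computed from the input data alone, before any solution is formed.

```python
import numpy as np

def gls_adjust(f0, cov_f, R, m, cov_m):
    """Generalized least-squares spectrum adjustment (sketch): a prior
    spectrum f0 with covariance cov_f is updated by measurements
    m ≈ R f with covariance cov_m; also returns a chi^2 consistency
    value that does not require the solution."""
    r = m - R @ f0                      # residual of the prior prediction
    S = R @ cov_f @ R.T + cov_m         # covariance of that residual
    K = cov_f @ R.T @ np.linalg.inv(S)  # gain
    f = f0 + K @ r                      # adjusted spectrum
    cov = cov_f - K @ R @ cov_f         # reduced uncertainty
    chi2 = float(r @ np.linalg.solve(S, r))
    return f, cov, chi2

f0 = np.array([1.0, 2.0])          # toy two-group prior spectrum
cov_f = 0.1 * np.eye(2)
R = np.array([[1.0, 1.0]])         # one dosimeter response
m = np.array([3.0])                # measurement consistent with the prior
cov_m = np.array([[0.05]])
f, cov, chi2 = gls_adjust(f0, cov_f, R, m, cov_m)
```

    With consistent input (chi2 = 0) the spectrum is unchanged, yet its covariance still shrinks, illustrating how the method delivers uncertainties alongside the solution.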

  10. Constrained Unfolding of a Helical Peptide: Implicit versus Explicit Solvents.

    Directory of Open Access Journals (Sweden)

    Hailey R Bureau

    Full Text Available Steered Molecular Dynamics (SMD) has been seen to provide the potential of mean force (PMF) along a peptide unfolding pathway effectively but at significant computational cost, particularly in all-atom solvents. Adaptive steered molecular dynamics (ASMD) has been seen to provide a significant computational advantage by limiting the spread of the trajectories in a staged approach. The contraction of the trajectories at the end of each stage can be performed by taking a structure whose nonequilibrium work is closest to the Jarzynski average (in naive ASMD) or by relaxing the trajectories under a no-work condition (in full-relaxation ASMD, namely FR-ASMD). Both approaches have been used to determine the energetics and hydrogen-bonding structure along the pathway for unfolding of a benchmark peptide initially constrained as an α-helix in a water environment. The energetics are quite different from those in vacuum, but are found to be similar between implicit and explicit solvents. Surprisingly, the hydrogen-bonding pathways are also similar in the implicit and explicit solvents despite the fact that the solvent contact plays an important role in opening the helix.
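
    The Jarzynski average used in the naive-ASMD contraction step is straightforward to compute from the stage's nonequilibrium work values, dF = -kT ln⟨exp(-W/kT)⟩. A sketch with invented work values (works and kT in the same energy units):

```python
import numpy as np

def jarzynski_free_energy(works, kT):
    """Jarzynski estimate of the free-energy change from a set of
    nonequilibrium work values: dF = -kT * ln <exp(-W/kT)>."""
    x = -np.asarray(works, dtype=float) / kT
    xmax = x.max()                       # log-sum-exp for stability
    return -kT * (xmax + np.log(np.mean(np.exp(x - xmax))))

def closest_to_average(works, kT):
    """Naive-ASMD contraction: index of the trajectory whose work is
    closest to the Jarzynski average at the end of a stage."""
    dF = jarzynski_free_energy(works, kT)
    return int(np.argmin(np.abs(np.asarray(works) - dF)))

idx = closest_to_average([1.0, 2.0, 3.0], kT=0.6)
```

    By Jensen's inequality the Jarzynski estimate never exceeds the mean work, which is why low-work trajectories dominate the average.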

  11. Understanding how biodiversity unfolds through time under neutral theory.

    Science.gov (United States)

    Missa, Olivier; Dytham, Calvin; Morlon, Hélène

    2016-04-05

    Theoretical predictions for biodiversity patterns are typically derived under the assumption that ecological systems have reached a dynamic equilibrium. Yet, there is increasing evidence that various aspects of ecological systems, including (but not limited to) species richness, are not at equilibrium. Here, we use simulations to analyse how biodiversity patterns unfold through time. In particular, we focus on the relative time required for various biodiversity patterns (macroecological or phylogenetic) to reach equilibrium. We simulate spatially explicit metacommunities according to the Neutral Theory of Biodiversity (NTB) under three modes of speciation, which differ in how evenly a parent species is split between its two daughter species. We find that species richness stabilizes first, followed by species area relationships (SAR) and finally species abundance distributions (SAD). The difference in timing of equilibrium between these different macroecological patterns is the largest when the split of individuals between sibling species at speciation is the most uneven. Phylogenetic patterns of biodiversity take even longer to stabilize (tens to hundreds of times longer than species richness) so that equilibrium predictions from neutral theory for these patterns are unlikely to be relevant. Our results suggest that it may be unwise to assume that biodiversity patterns are at equilibrium and provide a first step in studying how these patterns unfold through time.
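
    The essence of a neutral simulation can be sketched with a minimal zero-sum community under point-mutation speciation (parameters invented; the authors' metacommunities are spatially explicit and also track SARs, SADs, and phylogenies):

```python
import random
random.seed(1)

def neutral_community(J=200, nu=0.01, steps=20000):
    """Minimal neutral model: a community of J individuals; each death
    is replaced either by a new species (probability nu, point-mutation
    speciation) or by the offspring of a random individual. Returns
    the final species richness."""
    community = [0] * J          # start as a single species
    next_species = 1
    for _ in range(steps):
        i = random.randrange(J)  # random death
        if random.random() < nu:
            community[i] = next_species
            next_species += 1
        else:
            community[i] = community[random.randrange(J)]
    return len(set(community))

richness = neutral_community()
```

    Tracking richness over the steps (rather than only at the end) shows the kind of transient-versus-equilibrium behaviour the paper analyses.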

  12. A neutron spectrum unfolding code based on iterative procedures

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Vega C, H. R.

    2012-10-01

    In this work, version 3.0 of the neutron spectrum unfolding code Neutron Spectrometry and Dosimetry from Universidad Autonoma de Zacatecas (NSDUAZ) is presented. This code was designed in a graphical interface under the LabVIEW programming environment and is based on the SPUNIT iterative algorithm, using as input data only the count rates obtained with 7 Bonner spheres based on a ⁶LiI(Eu) neutron detector. The main features of the code are: it is intuitive and friendly to the user; it has a programming routine which automatically selects the initial guess spectrum by using a set of neutron spectra compiled by the International Atomic Energy Agency. Besides the neutron spectrum, this code calculates the total flux, the mean energy, H(10), h(10), 15 dosimetric quantities for radiation protection purposes and 7 survey meter responses, in four energy grids, based on the International Atomic Energy Agency compilation. This code generates a full report in html format with all relevant information. In this work, the neutron spectrum of a ²⁴¹AmBe neutron source in air, located at 150 cm from the detector, is unfolded. (Author)
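
    An iterative multiplicative unfolding step can be sketched as below. This is a Richardson–Lucy-style update for illustration only; the SPUNIT rule implemented in NSDUAZ may differ in detail, and the 2×2 response matrix is invented (a real Bonner-sphere problem has 7 sphere responses over many energy bins).

```python
import numpy as np

def iterative_unfold(R, counts, f0, iters=2000):
    """Iterative multiplicative unfolding (Richardson-Lucy style).
    R[i, j] is the response of sphere i to energy bin j; counts are
    the measured sphere count rates; f0 is the initial guess spectrum."""
    f = np.asarray(f0, dtype=float).copy()
    R = np.asarray(R, dtype=float)
    counts = np.asarray(counts, dtype=float)
    norm = R.sum(axis=0)
    for _ in range(iters):
        predicted = R @ f
        # Rescale each bin by the response-weighted measured/predicted ratio.
        f *= (R.T @ (counts / predicted)) / norm
    return f

R = np.array([[1.0, 0.3],
              [0.2, 1.0]])
counts = R @ np.array([2.0, 1.0])      # synthetic exact measurement
f_est = iterative_unfold(R, counts, [1.0, 1.0])
```

    The quality of the initial guess f0 matters in underdetermined problems, which is why NSDUAZ's automatic selection from the IAEA spectrum compilation is a highlighted feature.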

  13. CONVERTING RETRIEVED SPOKEN DOCUMENTS INTO TEXT USING AN AUTO ASSOCIATIVE NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    J. SANGEETHA

    2016-06-01

    Full Text Available This paper frames a novel methodology for spoken document information retrieval from spontaneous speech corpora and for converting the retrieved document into the corresponding language text. The proposed work involves three major areas, namely spoken keyword detection, spoken document retrieval and automatic speech recognition. The keyword spotting exploits the distribution-capturing capability of the Auto Associative Neural Network (AANN) for spoken keyword detection. It involves sliding a frame-based keyword template along the audio documents and searching for a match by means of a confidence score acquired from the normalized squared error of the AANN. This work presents a new spoken keyword spotting algorithm. Based on the match, the spoken documents are retrieved and clustered together. In the speech recognition step, the retrieved documents are converted into the corresponding language text using the AANN classifier. The experiments are conducted using the Dravidian language database and the results suggest that the proposed method is promising for retrieving the relevant documents of a spoken query as a key and transforming them into the corresponding language.
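
    Confidence scoring from an autoassociative model's normalized squared reconstruction error can be sketched as follows. The exp(-e) mapping and the rank-1 linear "network" standing in for a trained AANN are illustrative assumptions, not the paper's model:

```python
import numpy as np

def aann_confidence(frames, reconstruct):
    """Confidence from an autoassociative model: the normalized squared
    reconstruction error e = ||x - x_hat||^2 / ||x||^2 is mapped to
    c = exp(-e), so frames resembling the training data score near 1."""
    frames = np.asarray(frames, dtype=float)
    recon = reconstruct(frames)
    err = np.sum((frames - recon) ** 2, axis=1) / np.sum(frames ** 2, axis=1)
    return np.exp(-err)

# Toy stand-in for a trained AANN: a rank-1 linear autoencoder that
# captures the direction [1, 1] of the "training distribution".
u = np.array([[1.0], [1.0]]) / np.sqrt(2.0)
reconstruct = lambda X: X @ u @ u.T

in_dist = aann_confidence([[2.0, 2.0]], reconstruct)    # on the subspace
out_dist = aann_confidence([[2.0, -2.0]], reconstruct)  # orthogonal to it
```

    Sliding a keyword template over an audio document then reduces to thresholding this confidence at each frame position.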

  14. A question of words.

    Science.gov (United States)

    Mains, Heather

    2003-01-01

    By tracing powerful patriarchal words originally applied to mass production, military manoeuvres, building construction and war, and then ascribing them to birth we can see how Western culture denounces the mother's true role as the central and essential figure in the birth of her own children. Many words for birth imply fear, doubt and opposition. Yet when we consciously select words to describe giving birth as a form of art--a giving of creation--we can not only place the woman at the centre of the birth event, but also grant the respect and honour due her as she brings forth new life.

  15. Spoken language interaction with model uncertainty: an adaptive human-robot interaction system

    Science.gov (United States)

    Doshi, Finale; Roy, Nicholas

    2008-12-01

    Spoken language is one of the most intuitive forms of interaction between humans and agents. Unfortunately, agents that interact with people using natural language often experience communication errors and do not correctly understand the user's intentions. Recent systems have successfully used probabilistic models of speech, language and user behaviour to generate robust dialogue performance in the presence of noisy speech recognition and ambiguous language choices, but decisions made using these probabilistic models are still prone to errors owing to the complexity of acquiring and maintaining a complete model of human language and behaviour. In this paper, a decision-theoretic model for human-robot interaction using natural language is described. The algorithm is based on the Partially Observable Markov Decision Process (POMDP), which allows agents to choose actions that are robust not only to uncertainty from noisy or ambiguous speech recognition but also to unknown user models. Like most dialogue systems, a POMDP is defined by a large number of parameters that may be difficult to specify a priori from domain knowledge, and learning these parameters from the user may require an unacceptably long training period. An extension to the POMDP model is described that allows the agent to acquire a linguistic model of the user online, including new vocabulary and word choice preferences. The approach not only avoids a training period of constant questioning as the agent learns, but also allows the agent actively to query for additional information when its uncertainty suggests a high risk of mistakes. The approach is demonstrated both in simulation and on a natural language interaction system for a robotic wheelchair application.
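
    The belief update at the heart of such a POMDP dialogue manager is a Bayes rule over user intents. A sketch with invented intents and recognizer likelihoods (a full POMDP additionally models state transitions, actions, and rewards):

```python
import numpy as np

def belief_update(belief, obs_likelihood):
    """POMDP-style belief update over user intents (the intent is static
    here, so the transition model is the identity): Bayes rule combining
    the prior belief with a noisy observation likelihood."""
    b = np.asarray(belief) * np.asarray(obs_likelihood)
    return b / b.sum()

# Two hypothetical intents: "go to kitchen" vs "go to bathroom".
belief = np.array([0.5, 0.5])
p_obs_given_intent = np.array([0.8, 0.3])  # P("heard 'kitchen'" | intent)
belief = belief_update(belief, p_obs_given_intent)
# If the belief is still spread out, a POMDP policy would choose a
# clarification query rather than committing to a navigation action.
```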

  16. How Spoken Language Comprehension is Achieved by Older Listeners in Difficult Listening Situations.

    Science.gov (United States)

    Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth

    2016-01-01

    Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. 
When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to

  17. Words that Pop!

    Science.gov (United States)

    Russell, Shirley

    1988-01-01

    To excite students' appreciation of language, comic book words--onomatopoeia--are a useful tool. Exercises and books are suggested. A list of books for adults and children is recommended, and a reproducible page is provided. (JL)

  18. Non-intentional but not automatic: reduction of word- and arrow-based compatibility effects by sound distractors in the same categorical domain.

    Science.gov (United States)

    Miles, James D; Proctor, Robert W

    2009-10-01

    In the current study, we show that the non-intentional processing of visually presented words and symbols can be attenuated by sounds. Importantly, this attenuation is dependent on the similarity in categorical domain between the sounds and words or symbols. Participants performed a task in which left or right responses were made contingent on the color of a centrally presented target that was either a location word (LEFT or RIGHT) or a left or right arrow. Responses were faster when they were on the side congruent with the word or arrow. This bias was reduced for location words by a neutral spoken word and for arrows by a tone series, but not vice versa. We suggest that words and symbols are processed with minimal attentional requirements until they are categorized into specific knowledge domains, but then become sensitive to other information within the same domain regardless of the similarity between modalities.

  19. Nine Words - Nine Columns

    DEFF Research Database (Denmark)

    Trempe Jr., Robert B.; Buthke, Jan

    2016-01-01

    of computational and mechanical processes towards an aesthetic. Each team received a single word, translating and evolving that word first into a double-curved computational surface, next a ruled computational surface, and then a physically shaped foam mold via a 6-axis robot. The foam molds then operated...... as formwork for the shaping of wood veneer. The resulting columns ‘wear’ every aspect of this design pipeline process and display the power of process towards an architectural resolution....

  20. Flexible Word Classes

    DEFF Research Database (Denmark)

    van Lier, Eva; Rijkhoff, Jan

    2013-01-01

    • First major publication on the phenomenon • Offers cross-linguistic, descriptive, and diverse theoretical approaches • Includes analysis of data from different language families and from lesser studied languages This book is the first major cross-linguistic study of 'flexible words', i.e. words...... Indonesian, Santali, Sri Lanka Malay, Lushootseed, Gooniyandi, and Late Archaic Chinese. Readership: Linguists and students of linguistics and cognitive sciences, anthropologists, philosophers...

  1. Sonority and early words

    DEFF Research Database (Denmark)

    Kjærbæk, Laila; Boeg Thomsen, Ditte; Lambertsen, Claus

    2015-01-01

    Syllables play an important role in children’s early language acquisition, and children appear to rely on clear syllabic structures as a key to word acquisition (Vihman 1996; Oller 2000). However, not all languages present children with equally clear cues to syllabic structure, and since the spec… Danish acquisition therefore presents us with the opportunity to examine how children respond to the task of word learning when the input language offers less clear cues to syllabic structure than usually seen. To investigate the sound structure in Danish children’s lexical development, we need a model of syllable… …-29 months. For the two children, the phonetic structure of the first ten words to occur is compared with that of the last ten words to occur before 30 months of age, and with that of ten words in between. Measures related to the sonority envelope, viz. sonority types and in particular sonority rises…

  2. Amyl: A Misunderstood Word

    Science.gov (United States)

    Kjonaas, Richard A.

    1996-12-01

    There is much confusion associated with the word amyl. For example, many textbooks draw a structural formula of n-pentyl acetate rather than isopentyl acetate when referring to the chief component of banana oil (amyl acetate). When younger chemists are taught to use the words propyl, butyl, and pentyl in place of n-propyl, n-butyl, and n-pentyl, they then incorrectly assume that this practice also applies to the word amyl. As is the case with banana oil, if the word amyl is going to be used to refer to just one of the isomeric pentyl groups, it should rightfully be isopentyl. The reason for this dates back to an abundant and important article of commerce called amylic alcohol (also called potato oil) which consisted chiefly of isopentyl alcohol. In fact, one can look in various chemical catalogs and handbooks of today and see such names as amyl benzoate and amyl nitrite used in place of isopentyl benzoate and isopentyl nitrite. Adding to all the confusion is the common practice of using the word amyl along with the singular form of another word when referring to an isomeric mixture; i.e. using amyl acetate rather than amyl acetates when referring to a mixture of pentyl acetates.

  3. The statistical trade-off between word order and word structure – Large-scale evidence for the principle of least effort

    Science.gov (United States)

    Koplenig, Alexander; Meyer, Peter; Wolfer, Sascha; Müller-Spitzer, Carolin

    2017-01-01

    Languages employ different strategies to transmit structural and grammatical information. While, for example, grammatical dependency relationships in sentences are mainly conveyed by the ordering of words in languages like Mandarin Chinese or Vietnamese, word ordering is much less restricted in languages such as Inupiatun or Quechua, as these languages (also) use the internal structure of words (e.g. inflectional morphology) to mark grammatical relationships in a sentence. Based on a quantitative analysis of more than 1,500 unique translations of different books of the Bible in almost 1,200 different languages that are spoken as a native language by approximately 6 billion people (more than 80% of the world population), we present large-scale evidence for a statistical trade-off between the amount of information conveyed by the ordering of words and the amount of information conveyed by internal word structure: languages that rely more strongly on word order information tend to rely less on word structure information, and vice versa. Put differently, if less information is carried within the word, more information has to be spread among words in order to communicate successfully. In addition, we find that, despite differences in the way information is expressed, there is also evidence for a trade-off between different books of the biblical canon that recurs with little variation across languages: the more informative the word order of a book, the less informative its word structure, and vice versa. We argue that this might suggest that, on the one hand, languages encode information in very different (but efficient) ways, while on the other hand, content-related and stylistic features are statistically encoded in very similar ways. PMID:28282435
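The order/structure trade-off quantified in this study can be illustrated with a toy compression experiment (a sketch for intuition, not the authors' actual method): shuffling the words of a text destroys ordering regularities, so the compression gain of the original text over a word-shuffled copy gives a rough estimate of the information carried by word order. The function names and the zlib-based estimator below are illustrative assumptions.

```python
import random
import zlib

def compressed_size(text: str) -> int:
    """Size in bytes of the zlib-compressed UTF-8 text (max compression)."""
    return len(zlib.compress(text.encode("utf-8"), 9))

def word_order_information(text: str, seed: int = 0) -> int:
    """Estimate bytes of information carried by word order as the
    compression gain of the original text over a word-shuffled copy.
    Shuffling destroys ordering regularities, so the shuffled text
    compresses worse; the difference approximates order information."""
    words = text.split()
    shuffled = words[:]
    random.Random(seed).shuffle(shuffled)
    return compressed_size(" ".join(shuffled)) - compressed_size(" ".join(words))

# Highly ordered (repetitive) text loses a lot of compressibility when shuffled.
sentence = "in the beginning god created the heaven and the earth " * 20
print(word_order_information(sentence) > 0)  # True
```

A fuller version of this idea would compare such order estimates against the compressibility of word-internal structure (e.g. the set of inflected word forms) across parallel translations.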

  4. Verbal short-term memory development and spoken language outcomes in deaf children with cochlear implants.

    Science.gov (United States)

    Harris, Michael S; Kronenberger, William G; Gao, Sujuan; Hoen, Helena M; Miyamoto, Richard T; Pisoni, David B

    2013-01-01

    Cochlear implants (CIs) help many deaf children achieve near-normal speech and language (S/L) milestones. Nevertheless, high levels of unexplained variability in S/L outcomes are limiting factors in improving the effectiveness of CIs in deaf children. The objective of this study was to longitudinally assess the role of verbal short-term memory (STM) and working memory (WM) capacity as a progress-limiting source of variability in S/L outcomes after CI in children. Longitudinal study of 66 children with CIs for prelingual severe-to-profound hearing loss. Outcome measures included performance on digit span forward (DSF), digit span backward (DSB), and four conventional S/L measures that examined spoken-word recognition (Phonetically Balanced Kindergarten word test), receptive vocabulary (Peabody Picture Vocabulary Test), sentence-recognition skills (Hearing in Noise Test), and receptive and expressive language functioning (Clinical Evaluation of Language Fundamentals Fourth Edition Core Language Score; CELF). Growth curves for DSF and DSB in the CI sample over time were comparable in slope, but consistently lagged in magnitude relative to norms for normal-hearing peers of the same age. For DSF and DSB, 50.5% and 44.0%, respectively, of the CI sample scored more than 1 SD below the normative mean for raw scores across all ages. The first (baseline) DSF score significantly predicted all endpoint scores for the four S/L measures, and DSF slope (growth) over time predicted CELF scores. DSF baseline and slope accounted for an additional 13 to 31% of variance in S/L scores after controlling for conventional predictor variables such as chronological age at time of testing, age at time of implantation, communication mode (auditory-oral communication versus total communication), and maternal education. Only DSB baseline scores predicted endpoint language scores on the Peabody Picture Vocabulary Test and CELF. DSB slopes were not significantly related to any endpoint S/L measures.

  5. Unsupervised learning of temporal features for word categorization in a spiking neural network model of the auditory brain.

    Science.gov (United States)

    Higgins, Irina; Stringer, Simon; Schnupp, Jan

    2017-01-01

    The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-timing-dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker-independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable.
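The STDP learning such models rely on can be sketched with the standard pair-based rule (a generic textbook form, not this paper's exact implementation or parameters): each pre-before-post spike pair potentiates a synapse and each post-before-pre pair depresses it, with exponentially decaying time windows. All parameter values below are illustrative.

```python
import math

def stdp_update(w, pre_times, post_times, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based spike-timing-dependent plasticity: potentiate when a
    presynaptic spike precedes a postsynaptic spike (LTP), depress when
    it follows (LTD). Spike times are in ms; the updated weight is
    clipped to [w_min, w_max]."""
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if dt > 0:      # pre before post -> potentiation
                dw += a_plus * math.exp(-dt / tau_plus)
            elif dt < 0:    # post before pre -> depression
                dw -= a_minus * math.exp(dt / tau_minus)
    return min(w_max, max(w_min, w + dw))

# Causal pairing strengthens the synapse; anti-causal pairing weakens it.
print(stdp_update(0.5, [10.0], [15.0]) > 0.5)  # True
print(stdp_update(0.5, [15.0], [10.0]) < 0.5)  # True
```

Repeatedly applying such a rule to synapses driven by stable input spike patterns is what allows selective groups of neurons (the PGs above) to emerge.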

  6. Evolution and thermodynamics of the slow unfolding of hyperstable monomeric proteins

    Directory of Open Access Journals (Sweden)

    Koga Yuichi

    2010-07-01

    Background: The unfolding speed of some hyperthermophilic proteins is dramatically lower than that of their mesostable homologs. Ribonuclease HII from the hyperthermophilic archaeon Thermococcus kodakaraensis (Tk-RNase HII) is stabilized by its remarkably slow unfolding rate, whereas RNase HI from the thermophilic bacterium Thermus thermophilus (Tt-RNase HI) unfolds rapidly, at a rate comparable to that of RNase HI from Escherichia coli (Ec-RNase HI). Results: To clarify whether the difference in unfolding rate is due to differences in the types of RNase H or differences between proteins from archaea and bacteria, we examined the equilibrium stability and unfolding reactions of RNases HII from the hyperthermophilic bacteria Thermotoga maritima (Tm-RNase HII) and Aquifex aeolicus (Aa-RNase HII) and of RNase HI from the hyperthermophilic archaeon Sulfolobus tokodaii (Sto-RNase HI). These proteins from hyperthermophiles are more stable than Ec-RNase HI over all the temperature ranges examined. The observed unfolding speeds of all hyperstable proteins at the denaturant concentrations studied are much lower than those of Ec-RNase HI, in accordance with the familiar slow unfolding of hyperstable proteins. However, the unfolding rate constants of these RNases H in water are dispersed, and the unfolding rate constants of the thermophilic archaeal proteins are lower than those of the thermophilic bacterial proteins. Conclusions: These results suggest that the slow unfolding of thermophilic proteins is determined by the evolutionary history of the organisms involved. The unfolding rate constants in water are related to the amount of buried hydrophobic residues in the tertiary structure.

  7. P2-13: Location word Cues' Effect on Location Discrimination Task: Cross-Modal Study

    Directory of Open Access Journals (Sweden)

    Satoko Ohtsuka

    2012-10-01

    As is well known, participants are slower and make more errors in responding to the display color of an incongruent color word than to a congruent one. This traditional Stroop effect is often accounted for by relatively automatic and dominant word processing. Although the word-dominance account has been widely supported, it is not clear over what range of perceptual tasks it is valid. Here we aimed to examine whether the word-dominance effect is observed in location Stroop tasks and in audio-visual situations. The participants were required to press a key according to the location of visual (Experiment 1) or auditory (Experiment 2) targets, left or right, as quickly as possible. A cue of written (Experiments 1a and 2a) or spoken (Experiments 1b and 2b) location words, “left” or “right”, was presented on the left or right side of the fixation with cue lead times (CLTs) of 200 ms and 1200 ms. Reaction time from target presentation to key press was recorded as the dependent variable. The results were that the location validity effect was marked in within-modality but less so in cross-modality trials. The word validity effect was strong in within- but not in cross-modality trials. The CLT gave some effect of inhibition of return. So word dominance could be less effective in location tasks and in cross-modal situations. Spatial correspondence seems to overcome the word effect.

  8. Thermal unfolding of a Ca- and Lanthanide-binding protein

    Energy Technology Data Exchange (ETDEWEB)

    Fahmy, Karim [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Biophysics; Goettfert, M. [Technische Univ. Dresden (Germany); Knoeppel, J.

    2017-06-01

    The MIIA (metal ion-induced autocleavage) domain of the protein Vic001052 from the pathogen Vibrio coralliilyticus comprises 173 amino acids and exhibits Ca-dependent autoproteolytic activity. It shows homology to nodulation proteins which are secreted by Rhizobiacea into plant host cells, where they exert Ca-dependent functions. We have studied the structural and energetic aspects of the metal-protein interactions of the MIIA domain, which appear attractive for engineering metal-binding synthetic peptides. Using a non-cleavable MIIA domain construct, we detected very similar structural changes upon binding to Ca²⁺ and Eu³⁺. The thermal denaturation of the Ca-bound state was studied by circular dichroism spectroscopy. The metal-bound folded state unfolds reversibly into an unstructured metal-free state similar to the metal-free state at room temperature.

  9. Unfolding/refolding studies of the myosin rod.

    Science.gov (United States)

    Nozais, M; Bechet, J J

    1993-12-15

    The effect of guanidine hydrochloride on the gel-filtration chromatography, viscosity, far-ultraviolet circular dichroism and fluorescence emission intensity of the myosin rod was studied under equilibrium conditions. The normalized transition curves for each of these methods were comparable, with a midpoint at a guanidine hydrochloride concentration of 1.75-2 M. The curves were not, however, superposable, suggesting that the loss of helix content and the dissociation of the two chains of the myosin rod were not tightly linked. Furthermore, they were unexpectedly independent of the protein concentration over the range 0.05-20 microM. These phenomena are interpreted by taking into account the large size of the molecule. A stepwise process is proposed as a model for the unfolding of the myosin rod.
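Normalized transition curves of the kind described here come from a standard two-state equilibrium analysis: the observed spectroscopic signal is converted into a fraction unfolded, and the free energy of unfolding follows from the equilibrium constant. The sketch below assumes a simple monomeric two-state N ↔ U transition, which, as the abstract itself notes, is only an approximation for a two-chain coiled coil like the myosin rod; all names and values are illustrative.

```python
import math

def fraction_unfolded(y_obs: float, y_folded: float, y_unfolded: float) -> float:
    """Convert an observed spectroscopic signal (e.g. ellipticity or
    fluorescence) into the fraction of unfolded molecules, assuming the
    signal is a linear combination of the folded and unfolded baselines."""
    return (y_folded - y_obs) / (y_folded - y_unfolded)

def delta_g_unfolding(fu: float, temp_k: float = 298.15) -> float:
    """Free energy of unfolding in J/mol from K = fU / (1 - fU),
    for a monomeric two-state N <-> U equilibrium."""
    gas_const = 8.314  # J/(mol*K)
    return -gas_const * temp_k * math.log(fu / (1.0 - fu))

# At the transition midpoint (here ~1.75-2 M GdnHCl), fU = 0.5, so the
# apparent free energy of unfolding is zero.
fu_mid = fraction_unfolded(y_obs=0.5, y_folded=1.0, y_unfolded=0.0)
print(abs(delta_g_unfolding(fu_mid)) < 1e-9)  # True
```

The concentration independence reported in the abstract is what licenses treating the dissociating two-chain rod with a quasi-monomeric model at all; a true bimolecular N₂ ↔ 2U scheme would make the midpoint shift with protein concentration.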

  10. The Unfolded Protein Response and Cell Fate Control.

    Science.gov (United States)

    Hetz, Claudio; Papa, Feroz R

    2018-01-18

    The secretory capacity of a cell is constantly challenged by physiological demands and pathological perturbations. To adjust and match the protein-folding capacity of the endoplasmic reticulum (ER) to changing secretory needs, cells employ a dynamic intracellular signaling pathway known as the unfolded protein response (UPR). Homeostatic activation of the UPR enforces adaptive programs that modulate and augment key aspects of the entire secretory pathway, whereas maladaptive UPR outputs trigger apoptosis. Here, we discuss recent advances into how the UPR integrates information about the intensity and duration of ER stress stimuli in order to control cell fate. These findings are timely and significant because they inform an evolving mechanistic understanding of a wide variety of human diseases, including diabetes mellitus, neurodegeneration, and cancer, thus opening up the potential for new therapeutic modalities to treat these diverse diseases.

  11. Unfolding the phenomenon of inter-rater agreement

    DEFF Research Database (Denmark)

    Slaug, Bjørn; Schilling, Oliver; Helle, Tina

    2011-01-01

    Objective: The overall objective was to unfold the phenomenon of inter-rater agreement: to identify potential sources of variation in agreement data and to explore how they can be statistically accounted for. The ultimate aim was to propose recommendations for in-depth examination of agreement… Using agreement indices, relative shares of agreement variation were calculated, and multilevel regression analysis was carried out using rater and item characteristics as predictors of agreement variation. Results: The raters accounted for 6-11 % of the agreement variation, the items for 33-39 % and the contexts for 53-60 %. Multilevel regression analysis showed barrier prevalence and raters’ familiarity with using standardized instruments to have the strongest impact on agreement, though for study design reasons contextual characteristics were not included. Conclusion: Supported by a conceptual analysis, we propose an approach…

  12. Unfolded protein response in hepatitis C virus infection

    Directory of Open Access Journals (Sweden)

    Shiu-Wan eChan

    2014-05-01

    Hepatitis C virus (HCV) is a single-stranded, positive-sense RNA virus of clinical importance. The virus establishes a chronic infection and can progress from chronic hepatitis and steatosis to fibrosis, cirrhosis and hepatocellular carcinoma. The mechanisms of viral persistence and pathogenesis are poorly understood. Recently the unfolded protein response (UPR), a cellular homeostatic response to endoplasmic reticulum (ER) stress, has emerged as a major contributing factor in many human diseases. It is also evident that viruses interact with the host UPR in many different ways, and the outcome could be pro-viral, anti-viral or pathogenic, depending on the particular type of infection. Here we present evidence for the elicitation of chronic ER stress in HCV infection. We analyze the UPR signaling pathways involved in HCV infection and the various levels of UPR regulation by different viral proteins, and finally we propose several mechanisms by which the virus provokes the UPR.

  13. Unfolding and Refolding Embodiment into the Landscape of Ubiquitous Computing

    DEFF Research Database (Denmark)

    Schick, Lea; Malmborg, Lone

    2009-01-01

    This paper advocates the future of the body as a distributed and shared embodiment: an unfolded body that doesn’t end at one’s skin, but emerges as intercorporeality between bodies and the technological environment. Looking at new tendencies within interaction design and ubiquitous computing, and seeing how these are to an increasing extent focusing on sociality, context-awareness, relations, affects, connectedness, and collectivity, we examine how these new technological movements can change our perception of embodiment towards a distributed and shared one. By examining interactive textiles as part of a rising future landscape of multi-sensory networks, we exemplify how the new technologies can shatter dichotomies and challenge traditional notions of embodiment and the subject. Finally, we show how this ‘new embodiment’ manifests Deleuze’s philosophy of the body as something unstable…

  14. The Unfolded Protein Response in Chronic Obstructive Pulmonary Disease.

    Science.gov (United States)

    Kelsen, Steven G

    2016-04-01

    Accumulation of nonfunctional and potentially cytotoxic, misfolded proteins in chronic obstructive pulmonary disease (COPD) is believed to contribute to lung cell apoptosis, inflammation, and autophagy. Because of its fundamental role as a quality control system in protein metabolism, the "unfolded protein response" (UPR) is of potential importance in the pathogenesis of COPD. The UPR comprises a series of transcriptional, translational, and post-translational processes that decrease protein synthesis while enhancing protein folding capacity and protein degradation. Several studies have suggested that the UPR contributes to lung cell apoptosis and lung inflammation in at least some subjects with human COPD. However, information on the prevalence of the UPR in subjects with COPD, the lung cells that manifest a UPR, and the role of the UPR in the pathogenesis of COPD is extremely limited and requires additional study.

  15. Development of a spoken language identification system for South African languages

    CSIR Research Space (South Africa)

    Peché, M

    2009-12-01

    Full Text Available This article introduces the first Spoken Language Identification system developed to distinguish among all eleven of South Africa’s official languages. The PPR-LM (Parallel Phoneme Recognition followed by Language Modeling) architecture...

  16. The role of planum temporale in processing accent variation in spoken language comprehension.

    NARCIS (Netherlands)

    Adank, P.M.; Noordzij, M.L.; Hagoort, P.

    2012-01-01

    A repetition-suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation, speaker and accent, during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and a…

  17. Initial fieldwork for LWAZI: a telephone-based spoken dialog system for rural South Africa

    CSIR Research Space (South Africa)

    Gumede, T

    2009-03-01

    …government information and services. Our interviews, focus group discussions and surveys revealed that Lwazi, a telephone-based spoken dialog system, could greatly support current South African government efforts to effectively connect citizens to available…

  18. WORD LEVEL DISCRIMINATIVE TRAINING FOR HANDWRITTEN WORD RECOGNITION

    NARCIS (Netherlands)

    Chen, W.; Gader, P.

    2004-01-01

    Word-level training refers to the process of learning the parameters of a word recognition system based on word-level criterion functions. Previously, researchers trained lexicon-driven handwritten word recognition systems at the character level individually. These systems generally use statistical…

  19. The Unfolding of Value Sources During Online Business Model Transformation

    Directory of Open Access Journals (Sweden)

    Nadja Hoßbach

    2016-12-01

    Purpose: In the magazine publishing industry, viable online business models are still rare to absent. To prepare for the ‘digital future’ and safeguard their long-term survival, many publishers are currently in the process of transforming their online business model. Against this backdrop, this study aims to develop a deeper understanding of (1) how the different building blocks of an online business model are transformed over time and (2) how sources of value creation unfold during this transformation process. Methodology: To answer our research question, we conducted a longitudinal case study with a leading German business magazine publisher (called BIZ). Data was triangulated from multiple sources including interviews, internal documents, and direct observations. Findings: Based on our case study, we find that BIZ used the transformation process to differentiate its online business model from its traditional print business model along several dimensions, and that BIZ’s online business model changed from an efficiency- to a complementarity- to a novelty-based model during this process. Research implications: Our findings suggest that different business model transformation phases relate to different value sources, questioning the appropriateness of value source-based approaches for classifying business models. Practical implications: The results of our case study highlight the need for online-offline business model differentiation and point to the important distinction between service and product differentiation. Originality: Our study contributes to the business model literature by applying a dynamic and holistic perspective on the link between online business model changes and unfolding value sources.

  20. Give and take: syntactic priming during spoken language comprehension.

    Science.gov (United States)

    Thothathiri, Malathi; Snedeker, Jesse

    2008-07-01

    Syntactic priming during language production is pervasive and well-studied. Hearing, reading, speaking or writing a sentence with a given structure increases the probability of subsequently producing the same structure, regardless of whether the prime and target share lexical content. In contrast, syntactic priming during comprehension has proven more elusive, fueling claims that comprehension is less dependent on general syntactic representations and more dependent on lexical knowledge. In three experiments we explored syntactic priming during spoken language comprehension. Participants acted out double-object (DO) or prepositional-object (PO) dative sentences while their eye movements were recorded. Prime sentences used different verbs and nouns than the target sentences. In target sentences, the onset of the direct-object noun was consistent with both an animate recipient and an inanimate theme, creating a temporary ambiguity in the argument structure of the verb (DO e.g., Show the horse the book; PO e.g., Show the horn to the dog). We measured the difference in looks to the potential recipient and the potential theme during the ambiguous interval. In all experiments, participants who heard DO primes showed a greater preference for the recipient over the theme than those who heard PO primes, demonstrating across-verb priming during online language comprehension. These results accord with priming found in production studies, indicating a role for abstract structural information during comprehension as well as production.