WorldWideScience

Sample records for infant-directed speech ids

  1. Influences of Infant-Directed Speech on Early Word Recognition

    Science.gov (United States)

    Singh, Leher; Nestor, Sarah; Parikh, Chandni; Yull, Ashley

    2009-01-01

    When addressing infants, many adults adopt a particular type of speech, known as infant-directed speech (IDS). IDS is characterized by exaggerated intonation, as well as reduced speech rate, shorter utterance duration, and grammatical simplification. It is commonly asserted that IDS serves in part to facilitate language learning. Although…

  2. Lip Movement Exaggerations during Infant-Directed Speech

    Science.gov (United States)

    Green, Jordan R.; Nip, Ignatius S. B.; Wilson, Erin M.; Mefferd, Antje S.; Yunusova, Yana

    2010-01-01

    Purpose: Although a growing body of literature has identified the positive effects of visual speech on speech and language learning, oral movements of infant-directed speech (IDS) have rarely been studied. This investigation used 3-dimensional motion capture technology to describe how mothers modify their lip movements when talking to their…

  3. Recognizing intentions in infant-directed speech: evidence for universals.

    Science.gov (United States)

    Bryant, Gregory A; Barrett, H Clark

    2007-08-01

    In all languages studied to date, distinct prosodic contours characterize different intention categories of infant-directed (ID) speech. This vocal behavior likely exists universally as a species-typical trait, but little research has examined whether listeners can accurately recognize intentions in ID speech using only vocal cues, without access to semantic information. We recorded native-English-speaking mothers producing four intention categories of utterances (prohibition, approval, comfort, and attention) as both ID and adult-directed (AD) speech, and we then presented the utterances to Shuar adults (South American hunter-horticulturalists). Shuar subjects were able to reliably distinguish ID from AD speech and were able to reliably recognize the intention categories in both types of speech, although performance was significantly better with ID speech. This is the first demonstration that adult listeners in an indigenous, nonindustrialized, and nonliterate culture can accurately infer intentions from both ID speech and AD speech in a language they do not speak.

  4. Linking infant-directed speech and face preferences to language outcomes in infants at risk for autism spectrum disorder.

    Science.gov (United States)

    Droucker, Danielle; Curtin, Suzanne; Vouloumanos, Athena

    2013-04-01

    In this study, the authors aimed to examine whether biases for infant-directed (ID) speech and faces differ between infant siblings of children with autism spectrum disorder (ASD) (SIBS-A) and infant siblings of typically developing children (SIBS-TD), and whether speech and face biases predict language outcomes and risk group membership. Thirty-six infants were tested at ages 6, 8, 12, and 18 months. Infants heard 2 ID and 2 adult-directed (AD) speech passages paired with either a checkerboard or a face. The authors assessed expressive language at 12 and 18 months and general functioning at 12 months using the Mullen Scales of Early Learning (Mullen, 1995). Both infant groups preferred ID to AD speech and preferred faces to checkerboards. SIBS-TD demonstrated higher expressive language at 18 months than did SIBS-A, a finding that correlated with preferences for ID speech at 12 months. Although both groups looked longer to face stimuli than to the checkerboard, the magnitude of the preference was smaller in SIBS-A and predicted expressive vocabulary at 18 months in this group. Infants' preference for faces contributed to risk-group membership in a logistic regression analysis. Infants at heightened risk of ASD differ from typically developing infants in their preferences for ID speech and faces, which may underlie deficits in later language development and social communication.

  5. Acoustic characteristics of Danish infant directed speech

    DEFF Research Database (Denmark)

    Bohn, Ocke-Schwen

    2013-01-01

speaking to their 18-month-old children (infant-directed speech, IDS) as opposed to an adult (adult-directed speech, ADS). Caregivers were recorded talking about toy animals in conversations with their child and with an adult interlocutor. The toy names were designed to elicit Danish contrasts differing…, the Euclidean F1/F2 differences between vowels, F0 of the stressed (first) syllable in the toy name, as well as the duration of the stressed syllable, the vowels, and the fricatives. Results of the acoustic differences between ADS and IDS were compared to the results of parents' reports on the children…
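The Euclidean F1/F2 distance mentioned in this abstract is a standard measure of how far apart two vowels sit in acoustic space. A minimal sketch, assuming nothing about the study's actual data (the formant values below are illustrative only):

```python
import math

def vowel_distance(v1, v2):
    """Euclidean distance between two vowels in F1/F2 space (Hz).

    Each vowel is given as a (F1, F2) pair of formant frequencies.
    """
    return math.hypot(v1[0] - v2[0], v1[1] - v2[1])

# Illustrative (F1, F2) values in Hz -- not from the study
i_vowel = (280.0, 2250.0)   # a high front vowel like /i/
a_vowel = (700.0, 1300.0)   # a low vowel like /a/

dist = vowel_distance(i_vowel, a_vowel)
print(round(dist, 1))  # larger distances = a more expanded vowel space
```

Comparing such distances between IDS and ADS tokens is one way to quantify whether caregivers "expand" their vowel space when addressing infants.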

  6. Infant Directed Speech Enhances Statistical Learning in Newborn Infants: An ERP Study.

    Directory of Open Access Journals (Sweden)

    Alexis N Bosseler

Statistical learning and the social contexts of language addressed to infants are hypothesized to play important roles in early language development. Previous behavioral work has found that the exaggerated prosodic contours of infant-directed speech (IDS) facilitate statistical learning in 8-month-old infants. Here we examined the neural processes involved in on-line statistical learning and investigated whether the use of IDS facilitates statistical learning in sleeping newborns. Event-related potentials (ERPs) were recorded while newborns were exposed to 12 pseudo-words, six spoken with the exaggerated pitch contours of IDS and six spoken without exaggerated pitch contours (ADS), in ten alternating blocks. We examined whether ERP amplitudes for syllable position within a pseudo-word (word-initial vs. word-medial vs. word-final, indicating statistical word learning) and speech register (ADS vs. IDS) would interact. The ADS and IDS registers elicited similar ERP patterns for syllable position in an early 0-100 ms component but elicited different ERP effects in both polarity and topographical distribution at 200-400 ms and 450-650 ms. These results provide the first evidence that the exaggerated pitch contours of IDS result in differences in brain activity linked to on-line statistical learning in sleeping newborns.

  7. Vowels in infant-directed speech: More breathy and more variable, but not clearer.

    Science.gov (United States)

    Miyazawa, Kouki; Shinya, Takahito; Martin, Andrew; Kikuchi, Hideaki; Mazuka, Reiko

    2017-09-01

    Infant-directed speech (IDS) is known to differ from adult-directed speech (ADS) in a number of ways, and it has often been argued that some of these IDS properties facilitate infants' acquisition of language. An influential study in support of this view is Kuhl et al. (1997), which found that vowels in IDS are produced with expanded first and second formants (F1/F2) on average, indicating that the vowels are acoustically further apart in IDS than in ADS. These results have been interpreted to mean that the way vowels are produced in IDS makes infants' task of learning vowel categories easier. The present paper revisits this interpretation by means of a thorough analysis of IDS vowels using a large-scale corpus of Japanese natural utterances. We will show that the expansion of F1/F2 values does occur in spontaneous IDS even when the vowels' prosodic position, lexical pitch accent, and lexical bias are accounted for. When IDS vowels are compared to carefully read speech (CS) by the same mothers, however, larger variability among IDS vowel tokens means that the acoustic distances among vowels are farther apart only in CS, but not in IDS when compared to ADS. Finally, we will show that IDS vowels are significantly more breathy than ADS or CS vowels. Taken together, our results demonstrate that even though expansion of formant values occurs in spontaneous IDS, this expansion cannot be interpreted as an indication that the acoustic distances among vowels are farther apart, as is the case in CS. Instead, we found that IDS vowels are characterized by breathy voice, which has been associated with the communication of emotional affect. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Infant-Directed Speech Drives Social Preferences in 5-Month-Old Infants

    Science.gov (United States)

    Schachner, Adena; Hannon, Erin E.

    2011-01-01

Adults across cultures speak to infants in a specific infant-directed manner. We asked whether infants use this manner of speech (infant- or adult-directed) to guide their subsequent visual preferences for social partners. We found that 5-month-old infants encode an individual's use of infant-directed speech and adult-directed speech, and use this…

  9. Cross-Cultural Register Differences in Infant-Directed Speech: An Initial Study.

    Directory of Open Access Journals (Sweden)

    Lama K Farran

Infant-directed speech (IDS) provides an environment that appears to play a significant role in the origins of language in the human infant. Differences have been reported in the use of IDS across cultures, suggesting different styles of infant language-learning. Importantly, both cross-cultural and intra-cultural research suggest there may be a positive relationship between the use of IDS and rates of language development, underscoring the need to investigate cultural differences more deeply. The majority of studies, however, have conceptualized IDS monolithically, granting little attention to a potentially key distinction in how IDS manifests across cultures during the first two years. This study examines and quantifies for the first time differences within IDS in the use of baby register (IDS/BR), an acoustically identifiable type of IDS that includes features such as high pitch, long duration, and smooth intonation (the register that is usually assumed to occur in IDS), and adult register (IDS/AR), the type of IDS that does not include such features and thus sounds as if it could have been addressed to an adult. We studied IDS across 19 American and 19 Lebanese mother-infant dyads, with particular focus on the differential use of registers within IDS as mothers interacted with their infants ages 0-24 months. Our results showed considerable usage of IDS/AR (>30% of utterances) and a tendency for Lebanese mothers to use more IDS than American mothers. Implications for future research on IDS and its role in elucidating how language evolves across cultures are explored.

  10. Cross-Cultural Register Differences in Infant-Directed Speech: An Initial Study.

    Science.gov (United States)

    Farran, Lama K; Lee, Chia-Cheng; Yoo, Hyunjoo; Oller, D Kimbrough

    2016-01-01

    Infant-directed speech (IDS) provides an environment that appears to play a significant role in the origins of language in the human infant. Differences have been reported in the use of IDS across cultures, suggesting different styles of infant language-learning. Importantly, both cross-cultural and intra-cultural research suggest there may be a positive relationship between the use of IDS and rates of language development, underscoring the need to investigate cultural differences more deeply. The majority of studies, however, have conceptualized IDS monolithically, granting little attention to a potentially key distinction in how IDS manifests across cultures during the first two years. This study examines and quantifies for the first time differences within IDS in the use of baby register (IDS/BR), an acoustically identifiable type of IDS that includes features such as high pitch, long duration, and smooth intonation (the register that is usually assumed to occur in IDS), and adult register (IDS/AR), the type of IDS that does not include such features and thus sounds as if it could have been addressed to an adult. We studied IDS across 19 American and 19 Lebanese mother-infant dyads, with particular focus on the differential use of registers within IDS as mothers interacted with their infants ages 0-24 months. Our results showed considerable usage of IDS/AR (>30% of utterances) and a tendency for Lebanese mothers to use more IDS than American mothers. Implications for future research on IDS and its role in elucidating how language evolves across cultures are explored.

  11. How Salient Are Onomatopoeia in the Early Input? A Prosodic Analysis of Infant-Directed Speech

    Science.gov (United States)

    Laing, Catherine E.; Vihman, Marilyn; Keren-Portnoy, Tamar

    2017-01-01

    Onomatopoeia are frequently identified amongst infants' earliest words (Menn & Vihman, 2011), yet few authors have considered why this might be, and even fewer have explored this phenomenon empirically. Here we analyze mothers' production of onomatopoeia in infant-directed speech (IDS) to provide an input-based perspective on these forms.…

  12. Auditory observation of infant-directed speech by mothers: Experience-dependent interaction between language and emotion in the basal ganglia

    Directory of Open Access Journals (Sweden)

Yoshi-Taka Matsuda

    2014-11-01

Adults address infants with a special speech register known as infant-directed speech (IDS), which conveys both linguistic and emotional information through its characteristic lexicon and exaggerated prosody (e.g., higher pitched, slower, and hyperarticulated). Although caregivers are known to regulate the usage of IDS (linguistic and emotional components) depending on their child's development, the underlying neural substrates of this flexible modification are largely unknown. Here, using an auditory observation method and functional magnetic resonance imaging (fMRI) of four different groups of females, we revealed the experience-dependent influence of the emotional component on linguistic processing in the right caudate nucleus when mothers process IDS: (1) non-mothers, who do not use IDS regularly, showed no significant difference between IDS and adult-directed speech (ADS); (2) mothers with preverbal infants, who primarily use the emotional component of IDS, showed the main effect of the emotional component of IDS; (3) mothers with toddlers at the two-word stage, who use both linguistic and emotional components of IDS, showed an interaction between the linguistic and emotional components of IDS; and (4) mothers with school-age children, who use ADS rather than IDS toward their children, showed a tendency toward the main effect of ADS. The task that was most comparable to the naturalistic categories of IDS (i.e., explicit-language and implicit-emotion processing) recruited the right caudate nucleus, but it was not recruited in the control, less naturalistic condition (explicit-emotion and implicit-language processing). Our results indicate that the right caudate nucleus processes experience- and task-dependent interactions between language and emotion in mothers' IDS.

  13. Now You Hear It, Now You Don't: Vowel Devoicing in Japanese Infant-Directed Speech

    Science.gov (United States)

    Fais, Laurel; Kajikawa, Sachiyo; Amano, Shigeaki; Werker, Janet F.

    2010-01-01

    In this work, we examine a context in which a conflict arises between two roles that infant-directed speech (IDS) plays: making language structure salient and modeling the adult form of a language. Vowel devoicing in fluent adult Japanese creates violations of the canonical Japanese consonant-vowel word structure pattern by systematically…

  14. Face Preferences for Infant- and Adult-Directed Speakers in Infants of Depressed and Nondepressed Mothers: Association with Infant Cognitive Development.

    Science.gov (United States)

    Kaplan, Peter S; Asherin, Ryan M; Vogeli, Jo M; Fekri, Shiva M; Scheyer, Kathryn E; Everhart, Kevin D

    2018-01-01

Face preferences for speakers of infant-directed and adult-directed speech (IDS and ADS) were investigated in 4- to 13.5-month-old infants of depressed and non-depressed mothers. Following 1 min of exposure to an ID or AD speaker (order counterbalanced), infants had an immediate paired-comparison test with a still, silent image of the familiarized versus a novel face. In the test phase, ID face preference ratios were significantly lower in infants of depressed than non-depressed mothers. Infants' ID face preference ratios, but not AD face preference ratios, correlated with their percentile scores on the cognitive (Cog) scale of the Bayley Scales of Infant & Toddler Development (3rd Edition; BSID-III), assessed concurrently. Regression analyses revealed that infant ID face preferences significantly predicted infant Cog percentiles even after demographic risk factors and maternal depression had been controlled. Infants may use IDS to select social partners who are likely to support and facilitate cognitive development.

  15. Lexical Tones in Mandarin Chinese Infant-Directed Speech: Age-Related Changes in the Second Year of Life

    Directory of Open Access Journals (Sweden)

    Mengru Han

    2018-04-01

Tonal information is essential to early word learning in tone languages. Although numerous studies have investigated the intonational and segmental properties of infant-directed speech (IDS), only a few studies have explored the properties of lexical tones in IDS. These studies mostly focused on the first year of life; thus little is known about how lexical tones in IDS change as children's vocabulary acquisition accelerates in the second year (Goldfield and Reznick, 1990; Bloom, 2001). The present study examines whether Mandarin Chinese mothers hyperarticulate lexical tones in IDS addressing 18- and 24-month-old children (ages at which children are learning words at a rapid pace) vs. adult-directed speech (ADS). Thirty-nine Mandarin Chinese-speaking mothers were tested in a semi-spontaneous picture-book-reading task, in which they told the same story to their child (IDS condition) and to an adult (ADS condition). Results for the F0 measurements (minimum F0, maximum F0, and F0 range) of tone in the speech data revealed a continuum of differences among IDS addressing 18-month-olds, IDS addressing 24-month-olds, and ADS. Lexical tones in IDS addressing 18-month-old children had a higher minimum F0, higher maximum F0, and larger pitch range than lexical tones in ADS. Lexical tones in IDS addressing 24-month-old children showed more similarity to ADS tones with respect to pitch height: there were no differences in minimum F0 and maximum F0 between ADS and IDS. However, the F0 range was still larger. These results suggest that lexical tones are generally hyperarticulated in Mandarin Chinese IDS addressing 18- and 24-month-old children despite the change in pitch level over time. Mandarin Chinese mothers hyperarticulate lexical tones in IDS when talking to toddlers, potentially facilitating tone acquisition and word learning.
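The F0 measurements this abstract reports (minimum F0, maximum F0, and F0 range) are straightforward to compute from a pitch contour. A minimal sketch with made-up contours, not data from the study:

```python
def f0_measures(contour):
    """Return (min F0, max F0, F0 range) in Hz for a pitch contour.

    Unvoiced frames, commonly coded as 0 or None by pitch trackers,
    are ignored.
    """
    voiced = [f for f in contour if f]  # drop 0/None (unvoiced) frames
    lo, hi = min(voiced), max(voiced)
    return lo, hi, hi - lo

# Illustrative contours (Hz) -- invented values, not study data
ids_tone = [0, 310, 340, 385, 420, 400, 0]   # IDS-like: higher, wider
ads_tone = [0, 210, 225, 250, 245, 230, 0]   # ADS-like: lower, narrower

ids_min, ids_max, ids_range = f0_measures(ids_tone)
ads_min, ads_max, ads_range = f0_measures(ads_tone)
print(ids_range, ads_range)
```

A higher minimum, higher maximum, and larger range in the IDS contour, as in this toy comparison, is the hyperarticulation pattern the study reports for tones addressed to 18-month-olds.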

  16. Hierarchical organization in the temporal structure of infant-directed speech and song.

    Science.gov (United States)

    Falk, Simone; Kello, Christopher T

    2017-06-01

Caregivers alter the temporal structure of their utterances when talking and singing to infants compared with adult communication. The present study tested whether temporal variability in infant-directed registers serves to emphasize the hierarchical temporal structure of speech. Fifteen German-speaking mothers sang a play song and told a story to their 6-month-old infants, or to an adult. Recordings were analyzed using a recently developed method that determines the degree of nested clustering of temporal events in speech. Events were defined as peaks in the amplitude envelope, and clusters of various sizes related to periods of acoustic speech energy at varying timescales. Infant-directed speech and song clearly showed greater event clustering compared with adult-directed registers, at multiple timescales of hundreds of milliseconds to tens of seconds. We discuss the relation of this newly discovered acoustic property to temporal variability in linguistic units and its potential implications for parent-infant communication and infants learning the hierarchical structures of speech and language. Copyright © 2017 Elsevier B.V. All rights reserved.
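The core idea here, that amplitude-envelope peaks can be more or less clustered at a given timescale, can be illustrated with a much simpler index than the study's own method: the variance-to-mean ratio of event counts in fixed windows, which is about 1 for unclustered (Poisson-like) event streams and larger when events bunch together. The event times below are invented, purely to show the contrast:

```python
def event_counts(event_times, window, total):
    """Count events in consecutive windows of `window` seconds over `total` seconds."""
    n_windows = int(total / window)
    counts = [0] * n_windows
    for t in event_times:
        idx = int(t / window)
        if idx < n_windows:
            counts[idx] += 1
    return counts

def clustering_index(event_times, window, total):
    """Variance-to-mean ratio of window counts: ~1 for Poisson-like
    (unclustered) events, >1 when events bunch into clusters."""
    counts = event_counts(event_times, window, total)
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return var / mean if mean else 0.0

# Illustrative event times (s): tightly clustered vs evenly spaced
clustered = [0.1, 0.15, 0.2, 0.25, 5.0, 5.05, 5.1, 5.15]
regular = [i * 1.25 for i in range(8)]

print(clustering_index(clustered, 1.0, 10.0),
      clustering_index(regular, 1.0, 10.0))
```

Repeating this comparison across many window sizes gives a clustering profile over timescales, which is the kind of multi-timescale measure the study applies to infant- versus adult-directed recordings.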

  17. 'Who's a good boy?!' Dogs prefer naturalistic dog-directed speech.

    Science.gov (United States)

    Benjamin, Alex; Slocombe, Katie

    2018-05-01

Infant-directed speech (IDS) is a special speech register thought to aid language acquisition and improve affiliation in human infants. Although IDS shares some of its properties with dog-directed speech (DDS), it is unclear whether the production of DDS is functional, or simply an overgeneralisation of IDS within Western cultures. One recent study found that, while puppies attended more to a script read with DDS compared with adult-directed speech (ADS), adult dogs displayed no preference. In contrast, using naturalistic speech and a more ecologically valid set-up, we found that adult dogs attended to and showed more affiliative behaviour towards a speaker of DDS than of ADS. To explore whether this preference for DDS was modulated by the dog-specific words typically used in DDS, the acoustic features (prosody) of DDS or a combination of the two, we conducted a second experiment. Here the stimuli from experiment 1 were produced with reversed prosody, meaning the prosody and content of ADS and DDS were mismatched. The results revealed no significant effect of speech type, or content, suggesting that it may be the combination of the acoustic properties and the dog-related content of DDS that modulates the preference shown for naturalistic DDS. Overall, the results of this study suggest that naturalistic DDS, comprising both dog-directed prosody and dog-relevant content words, improves dogs' attention and may strengthen the affiliative bond between humans and their pets.

  18. Sixteen-Month-Old Infants Segment Words from Infant- and Adult-Directed Speech

    Science.gov (United States)

    Mani, Nivedita; Pätzold, Wiebke

    2016-01-01

One of the first challenges facing the young language learner is the task of segmenting words from a natural language speech stream, without prior knowledge of how these words sound. Studies with younger children find that it is easier for children to segment words from fluent speech when the words are presented in infant-directed speech, i.e., the…

  19. A Privileged Status for Male Infant-Directed Speech in Infants of Depressed Mothers? Role of Father Involvement

    Science.gov (United States)

    Kaplan, Peter S.; Danko, Christina M.; Diaz, Andres

    2010-01-01

    Prior research showed that 5- to 13-month-old infants of chronically depressed mothers did not learn to associate a segment of infant-directed speech produced by their own mothers or an unfamiliar nondepressed mother with a smiling female face, but showed better-than-normal learning when a segment of infant-directed speech produced by an…

  20. Asymmetry in infants' selective attention to facial features during visual processing of infant-directed speech

    OpenAIRE

    Smith, Nicholas A.; Gibilisco, Colleen R.; Meisinger, Rachel E.; Hankey, Maren

    2013-01-01

    Two experiments used eye tracking to examine how infant and adult observers distribute their eye gaze on videos of a mother producing infant- and adult-directed speech. Both groups showed greater attention to the eyes than to the nose and mouth, as well as an asymmetrical focus on the talker’s right eye for infant-directed speech stimuli. Observers continued to look more at the talker’s apparent right eye when the video stimuli were mirror flipped, suggesting that the asymmetry reflects a per...

  1. Phonetic Category Cues in Adult-Directed Speech: Evidence from Three Languages with Distinct Vowel Characteristics

    Science.gov (United States)

    Pons, Ferran; Biesanz, Jeremy C.; Kajikawa, Sachiyo; Fais, Laurel; Narayan, Chandan R.; Amano, Shigeaki; Werker, Janet F.

    2012-01-01

    Using an artificial language learning manipulation, Maye, Werker, and Gerken (2002) demonstrated that infants' speech sound categories change as a function of the distributional properties of the input. In a recent study, Werker et al. (2007) showed that Infant-directed Speech (IDS) input contains reliable acoustic cues that support distributional…

  2. Mother-Infant Face-to-Face Interaction: The Communicative Value of Infant-Directed Talking and Singing.

    Science.gov (United States)

    Arias, Diana; Peña, Marcela

Across cultures, healthy infants show a high interest in infant-directed (ID) talking and singing. Despite ID talking and ID singing being very similar in physical properties, infants respond differentially to each of them. The mechanisms underpinning these different responses are still under discussion. This study explored the behavioral (n = 26) and brain (n = 14) responses of 6- to 8-month-old infants to ID talking and ID singing during a face-to-face mother-infant interaction with their own mother. Behavioral response was analyzed from offline video coding, and brain response was estimated from the analysis of electrophysiological recordings. We found that during ID talking, infants displayed a significantly higher number of visual contacts, vocalizations, and body movements than during ID singing. Moreover, only during ID talking were the number of visual contacts and vocalizations positively correlated with the number of questions and pauses in the mother's speech. Our results suggest that ID talking provides infants with specific cues that allow them not only to react to mother stimulation, but also to act toward them, displaying a rudimentary version of turn-taking behavior. Brain activity partially supported that interpretation. The relevance of our results for bonding is discussed. © 2016 S. Karger AG, Basel.

  3. Prosodic differences between declaratives and interrogatives in infant-directed speech.

    Science.gov (United States)

    Geffen, Susan; Mintz, Toben H

    2017-07-01

    In many languages, declaratives and interrogatives differ in word order properties, and in syntactic organization more broadly. Thus, in order to learn the distinct syntactic properties of the two sentence types, learners must first be able to distinguish them using non-syntactic information. Prosodic information is often assumed to be a useful basis for this type of discrimination, although no systematic studies of the prosodic cues available to infants have been reported. Analysis of maternal speech in three Standard American English-speaking mother-infant dyads found that polar interrogatives differed from declaratives on the patterning of pitch and duration on the final two syllables, but wh-questions did not. Thus, while prosody is unlikely to aid discrimination of declaratives from wh-questions, infant-directed speech provides prosodic information that infants could use to distinguish declaratives and polar interrogatives. We discuss how learners could leverage this information to identify all question forms, in the context of syntax acquisition.

  4. Speech versus singing: Infants choose happier sounds

    Directory of Open Access Journals (Sweden)

Marieve Corbeil

    2013-06-01

Infants prefer speech to non-vocal sounds and to non-human vocalizations, and they prefer happy-sounding speech to neutral speech. They also exhibit an interest in singing, but there is little knowledge of their relative interest in speech and singing. The present study explored infants’ attention to unfamiliar audio samples of speech and singing. In Experiment 1, infants 4-13 months of age were exposed to happy-sounding infant-directed speech versus hummed lullabies by the same woman. They listened significantly longer to the speech, which had considerably greater acoustic variability and expressiveness, than to the lullabies. In Experiment 2, infants of comparable age who heard the lyrics of a Turkish children’s song spoken versus sung in a joyful/happy manner did not exhibit differential listening. Infants in Experiment 3 heard the happily sung lyrics of the Turkish children’s song versus a version that was spoken in an adult-directed or affectively neutral manner. They listened significantly longer to the sung version. Overall, happy voice quality rather than vocal mode (speech or singing) was the principal contributor to infant attention, regardless of age.

  5. Asymmetry in infants’ selective attention to facial features during visual processing of infant-directed speech

    Directory of Open Access Journals (Sweden)

    Nicholas A Smith

    2013-09-01

Two experiments used eye tracking to examine how infant and adult observers distribute their eye gaze on videos of a mother producing infant- and adult-directed speech. Both groups showed greater attention to the eyes than to the nose and mouth, as well as an asymmetrical focus on the talker’s right eye for infant-directed speech stimuli. Observers continued to look more at the talker’s apparent right eye when the video stimuli were mirror flipped, suggesting that the asymmetry reflects a perceptual processing bias rather than a stimulus artifact, which may be related to cerebral lateralization of emotion processing.

  6. When Infants Talk, Infants Listen: Pre-Babbling Infants Prefer Listening to Speech with Infant Vocal Properties

    Science.gov (United States)

    Masapollo, Matthew; Polka, Linda; Ménard, Lucie

    2016-01-01

    To learn to produce speech, infants must effectively monitor and assess their own speech output. Yet very little is known about how infants perceive speech produced by an infant, which has higher voice pitch and formant frequencies compared to adult or child speech. Here, we tested whether pre-babbling infants (at 4-6 months) prefer listening to…

  7. Self-Regulation and Infant-Directed Singing in Infants with Down Syndrome.

    Science.gov (United States)

    de l'Etoile, Shannon K

    2015-01-01

    Infants learn how to regulate internal states and subsequent behavior through dyadic interactions with caregivers. During infant-directed (ID) singing, mothers help infants practice attentional control and arousal modulation, thus providing critical experience in self-regulation. Infants with Down syndrome are known to have attention deficits and delayed information processing as well as difficulty managing arousability, factors that may disrupt their efforts at self-regulation. The researcher explored responses to ID singing in infants with Down syndrome (DS) and compared them with those of typically developing (TD) infants. Behaviors measured included infant gaze and affect as indicators of self-regulation. Participants included 3- to 9-month-old infants with and without DS who were videotaped throughout a 2-minute face-to-face interaction during which their mothers sang to them any song(s) of their choosing. Infant behavior was then coded for percentage of time spent demonstrating a specific gaze or affect type. All infants displayed sustained gaze more than any other gaze type. TD infants demonstrated intermittent gaze significantly more often than infants with DS. Infant status had no effect on affect type, and all infants showed predominantly neutral affect. Findings suggest that ID singing effectively maintains infant attention for both TD infants and infants with DS. However, infants with DS may have difficulty shifting attention during ID singing as needed to adjust arousal levels and self-regulate. High levels of neutral affect for all infants imply that ID singing is likely to promote a calm, curious state, regardless of infant status. © the American Music Therapy Association 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Infant-Directed Speech Supports Phonetic Category Learning in English and Japanese

    Science.gov (United States)

    Werker, Janet F.; Pons, Ferran; Dietrich, Christiane; Kajikawa, Sachiyo; Fais, Laurel; Amano, Shigeaki

    2007-01-01

    Across the first year of life, infants show decreased sensitivity to phonetic differences not used in the native language [Werker, J. F., & Tees, R. C. (1984). Cross-language speech perception: evidence for perceptual reorganization during the first year of life. "Infant Behaviour and Development," 7, 49-63]. In an artificial language learning…

  9. Infants' brain responses to speech suggest analysis by synthesis.

    Science.gov (United States)

    Kuhl, Patricia K; Ramírez, Rey R; Bosseler, Alexis; Lin, Jo-Fu Lotus; Imada, Toshiaki

    2014-08-05

    Historic theories of speech perception (Motor Theory and Analysis by Synthesis) invoked listeners' knowledge of speech production to explain speech perception. Neuroimaging data show that adult listeners activate motor brain areas during speech perception. In two experiments using magnetoencephalography (MEG), we investigated motor brain activation, as well as auditory brain activation, during discrimination of native and nonnative syllables in infants at two ages that straddle the developmental transition from language-universal to language-specific speech perception. Adults were also tested in Exp. 1. MEG data revealed that 7-mo-old infants activate auditory (superior temporal) as well as motor brain areas (Broca's area, cerebellum) in response to speech, and equivalently for native and nonnative syllables. However, in 11- and 12-mo-old infants, native speech activates auditory brain areas to a greater degree than nonnative, whereas nonnative speech activates motor brain areas to a greater degree than native speech. This double dissociation in 11- to 12-mo-old infants matches the pattern of results obtained in adult listeners. Our infant data are consistent with Analysis by Synthesis: auditory analysis of speech is coupled with synthesis of the motor plans necessary to produce the speech signal. The findings have implications for: (i) perception-action theories of speech perception, (ii) the impact of "motherese" on early language learning, and (iii) the "social-gating" hypothesis and humans' development of social understanding.

  10. Lip movements affect infants' audiovisual speech perception.

    Science.gov (United States)

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  11. Refining Stimulus Parameters in Assessing Infant Speech Perception Using Visual Reinforcement Infant Speech Discrimination: Sensation Level.

    Science.gov (United States)

    Uhler, Kristin M; Baca, Rosalinda; Dudas, Emily; Fredrickson, Tammy

    2015-01-01

    Speech perception measures have long been considered an integral piece of the audiological assessment battery. Currently, a prelinguistic, standardized measure of speech perception is missing in the clinical assessment battery for infants and young toddlers. Such a measure would allow systematic assessment of speech perception abilities of infants as well as the potential to investigate the impact that early identification of hearing loss and early fitting of amplification have on the auditory pathways. To investigate the impact of sensation level (SL) on the ability of infants with normal hearing (NH) to discriminate /a-i/ and /ba-da/ and to determine whether performance on the two contrasts differs significantly in predicting the discrimination criterion. The design was based on a survival analysis model for event occurrence and a repeated measures logistic model for binary outcomes. The outcome for survival analysis was the minimum SL for criterion and the outcome for the logistic regression model was the presence/absence of achieving the criterion. Criterion achievement was designated when an infant's proportion correct score was >0.75 on the discrimination performance task. Twenty-two infants with NH sensitivity participated in this study. There were 9 males and 13 females, aged 6-14 mo. Testing took place over two to three sessions. The first session consisted of a hearing test, threshold assessment of the two speech sounds (/a/ and /i/), and, if time and attention allowed, visual reinforcement infant speech discrimination (VRISD). The second session consisted of VRISD assessment for the two test contrasts (/a-i/ and /ba-da/). The presentation level started at 50 dBA. If the infant was unable to successfully achieve criterion (>0.75) at 50 dBA, the presentation level was increased to 70 dBA followed by 60 dBA. Data examination included an event analysis, which provided the probability of criterion distribution across SL. The second stage of the analysis was a
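
    The criterion logic described in this record is simple enough to illustrate. The following Python sketch (hypothetical names and data layout, not the study's code) finds the minimum sensation level at which a block of trials exceeds the 0.75 proportion-correct criterion; infants who never reach criterion would correspond to right-censored cases in the survival analysis.

    ```python
    # Hypothetical sketch of the VRISD criterion check; names and data
    # layout are illustrative, not taken from the study.

    CRITERION = 0.75  # proportion correct required for criterion achievement

    def proportion_correct(responses):
        """Proportion of correct trials; responses is a list of booleans."""
        return sum(responses) / len(responses)

    def minimum_criterion_sl(results_by_sl):
        """Return the lowest sensation level (dB) whose trial block exceeds
        the criterion, or None if criterion is never reached (a
        right-censored observation in the survival analysis)."""
        passing = [sl for sl, responses in results_by_sl.items()
                   if proportion_correct(responses) > CRITERION]
        return min(passing) if passing else None
    ```

    For example, an infant scoring 8/10 at 50 dB and 10/10 at 70 dB would be assigned a minimum criterion SL of 50 dB.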

  12. Dog-directed speech: why do we use it and do dogs pay attention to it?

    Science.gov (United States)

    Ben-Aderet, Tobey; Gallego-Abenza, Mario; Reby, David; Mathevon, Nicolas

    2017-01-11

    Pet-directed speech is strikingly similar to infant-directed speech, a peculiar speaking pattern with higher pitch and slower tempo known to engage infants' attention and promote language learning. Here, we report the first investigation of potential factors modulating the use of dog-directed speech, as well as its immediate impact on dogs' behaviour. We recorded adult participants speaking in front of pictures of puppies, adult and old dogs, and analysed the quality of their speech. We then performed playback experiments to assess dogs' reaction to dog-directed speech compared with normal speech. We found that human speakers used dog-directed speech with dogs of all ages and that the acoustic structure of dog-directed speech was mostly independent of dog age, except for sound pitch which was relatively higher when communicating with puppies. Playback demonstrated that, in the absence of other non-auditory cues, puppies were highly reactive to dog-directed speech, and that the pitch was a key factor modulating their behaviour, suggesting that this specific speech register has a functional value in young dogs. Conversely, older dogs did not react differentially to dog-directed speech compared with normal speech. The fact that speakers continue to use dog-directed speech with older dogs therefore suggests that this speech pattern may mainly be a spontaneous attempt to facilitate interactions with non-verbal listeners. © 2017 The Author(s).

  13. Infants' preference for native audiovisual speech dissociated from congruency preference.

    Directory of Open Access Journals (Sweden)

    Kathleen Shaw

    Full Text Available Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

  14. Word-Form Familiarity Bootstraps Infant Speech Segmentation

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Mani, Nivedita

    2013-01-01

    At about 7 months of age, infants listen longer to sentences containing familiar words--but not deviant pronunciations of familiar words (Jusczyk & Aslin, 1995). This finding suggests that infants are able to segment familiar words from fluent speech and that they store words in sufficient phonological detail to recognize deviations from a…

  15. Infants' Behaviors as Antecedents and Consequents of Mothers' Responsive and Directive Utterances

    Science.gov (United States)

    Masur, Elise Frank; Flynn, Valerie; Lloyd, Carrie A.

    2013-01-01

    To investigate possible influences on and consequences of mothers' speech, specific infant behaviors preceding and following four pragmatic categories of mothers' utterances--responsive utterances, supportive behavioral directives, intrusive behavioral directives, and intrusive attentional directives--were examined longitudinally during dyadic…

  16. Musical intervention enhances infants' neural processing of temporal structure in music and speech.

    Science.gov (United States)

    Zhao, T Christina; Kuhl, Patricia K

    2016-05-10

    Individuals with music training in early childhood show enhanced processing of musical sounds, an effect that generalizes to speech processing. However, the conclusions drawn from previous studies are limited due to the possible confounds of predisposition and other factors affecting musicians and nonmusicians. We used a randomized design to test the effects of a laboratory-controlled music intervention on young infants' neural processing of music and speech. Nine-month-old infants were randomly assigned to music (intervention) or play (control) activities for 12 sessions. The intervention targeted temporal structure learning using triple meter in music (e.g., waltz), which is difficult for infants, and it incorporated key characteristics of typical infant music classes to maximize learning (e.g., multimodal, social, and repetitive experiences). Controls had similar multimodal, social, repetitive play, but without music. Upon completion, infants' neural processing of temporal structure was tested in both music (tones in triple meter) and speech (foreign syllable structure). Infants' neural processing was quantified by the mismatch response (MMR) measured with a traditional oddball paradigm using magnetoencephalography (MEG). The intervention group exhibited significantly larger MMRs in response to music temporal structure violations in both auditory and prefrontal cortical regions. Identical results were obtained for temporal structure changes in speech. The intervention thus enhanced temporal structure processing not only in music, but also in speech, at 9 mo of age. We argue that the intervention enhanced infants' ability to extract temporal structure information and to predict future events in time, a skill affecting both music and speech processing.

  17. Acoustic parameters of infant-directed singing in mothers of infants with down syndrome.

    Science.gov (United States)

    de l'Etoile, Shannon; Behura, Samarth; Zopluoglu, Cengiz

    2017-11-01

    This study compared the acoustic parameters and degree of perceived warmth in two types of infant-directed (ID) songs - the lullaby and the playsong - between mothers of infants with Down syndrome (DS) and mothers of typically-developing (TD) infants. Participants included mothers of 15 DS infants and 15 TD infants between 3 and 9 months of age. Each mother's singing voice was digitally recorded while singing to her infant and subjected to feature extraction and data mining. Mothers of DS infants and TD infants sang both lullabies and playsongs with similar frequency. In comparison with mothers of TD infants, mothers of DS infants used a higher maximum pitch and more key changes during playsong. Mothers of DS infants also took more time to establish a rhythmic structure in their singing. These differences suggest mothers are sensitive to the attentional and arousal needs of their DS infants. Mothers of TD infants sang with a higher degree of perceived warmth which does not agree with previous observations of "forceful warmth" in mothers of DS infants. In comparison with lullaby, all mothers sang playsong with higher overall pitch and slower tempo. Playsongs were also distinguished by higher levels of spectral centroid properties related to emotional expressivity, as well as higher degrees of perceived warmth. These similarities help to define specific song types, and suggest that all mothers sing in an expressive manner that can modulate infant arousal, including mothers of DS infants. Copyright © 2017 Elsevier Inc. All rights reserved.
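
    One of the extracted features mentioned in this record, the spectral centroid, has a simple definition: the magnitude-weighted mean frequency of a spectrum, commonly associated with perceived brightness. A minimal Python sketch (illustrative only; the study's feature-extraction and data-mining pipeline is not shown in the abstract):

    ```python
    # Illustrative sketch of a spectral centroid computation: the
    # magnitude-weighted mean of the spectrum's frequency bins.

    def spectral_centroid(freqs, magnitudes):
        """Centroid frequency (Hz) of a magnitude spectrum.

        freqs and magnitudes are parallel sequences; returns 0.0 for a
        silent (all-zero) frame to avoid division by zero."""
        total = sum(magnitudes)
        if total == 0:
            return 0.0
        return sum(f * m for f, m in zip(freqs, magnitudes)) / total
    ```

    A spectrum with more energy at high frequencies yields a higher centroid, which is why the feature tracks timbral qualities linked to vocal expressivity.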

  18. Audiovisual Speech Perception in Infancy: The Influence of Vowel Identity and Infants' Productive Abilities on Sensitivity to (Mis)Matches between Auditory and Visual Speech Cues

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias

    2016-01-01

    Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds…

  19. Acoustic analyses of speech sounds and rhythms in Japanese- and English-learning infants

    Directory of Open Access Journals (Sweden)

    Yuko eYamashita

    2013-02-01

    Full Text Available The purpose of this study was to explore developmental changes in spectral fluctuations and temporal periodicity in the speech of Japanese- and English-learning infants. Three age groups (15, 20, and 24 months) were selected, because infants diversify phonetic inventories with age. Natural speech of the infants was recorded. We utilized a critical-band-filter bank, which simulated the frequency resolution in adults’ auditory periphery. First, the correlations between the critical-band outputs represented by factor analysis were observed in order to see how the critical bands should be connected to each other, if a listener is to differentiate sounds in infants’ speech. In the following analysis, we analyzed the temporal fluctuations of factor scores by calculating autocorrelations. The present analysis identified three factors observed in adult speech at 24 months of age in both linguistic environments. These three factors were shifted to a higher frequency range corresponding to the smaller vocal tract size of the infants. The results suggest that the vocal tract structures of the infants had developed to an adult-like configuration by 24 months of age in both language environments. The number of utterances exhibiting periodicity at shorter timescales increased with age in both environments. This trend was clearer in the Japanese environment.
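
    The periodicity analysis described in this record rests on autocorrelation of the factor-score time series. A minimal Python sketch of normalized autocorrelation (illustrative, not the authors' implementation):

    ```python
    # Illustrative sketch: assessing temporal periodicity of a factor-score
    # or band-envelope time series via normalized autocorrelation.

    def autocorrelation(x, lag):
        """Normalized autocorrelation of sequence x at a given lag.

        Returns a value in roughly [-1, 1]; values near 1 indicate strong
        periodicity at that lag. Constant signals return 0.0."""
        n = len(x)
        mean = sum(x) / n
        var = sum((v - mean) ** 2 for v in x)
        if var == 0:
            return 0.0
        cov = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag))
        return cov / var

    def is_periodic(x, lag, threshold=0.5):
        """Crude periodicity flag: a strong autocorrelation peak at the lag."""
        return autocorrelation(x, lag) > threshold
    ```

    An alternating signal such as [0, 1, 0, 1, ...] shows a strong positive peak at lag 2 and a strong negative value at lag 1, the kind of structure the age-group comparison in the study would pick up.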

  20. Brain responses and looking behaviour during audiovisual speech integration in infants predict auditory speech comprehension in the second year of life.

    Directory of Open Access Journals (Sweden)

    Elena V Kushnerenko

    2013-07-01

    Full Text Available The use of visual cues during the processing of audiovisual speech is known to be less efficient in children and adults with language difficulties, and such difficulties are known to be more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6-9 months to 14-16 months of age. We used eye-tracking to examine whether individual differences in visual attention during audiovisual processing of speech in 6- to 9-month-old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these 6- to 9-month-old infants also participated in an event-related potential (ERP) audiovisual task within the same experimental session. Language development was then followed up at the age of 14-16 months, using two measures of language development, the Preschool Language Scale (PLS) and the Oxford Communicative Development Inventory (CDI). The results show that those infants who were less efficient in auditory speech processing at the age of 6-9 months had lower receptive language scores at 14-16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audio-visually incongruent stimuli at 6-9 months were both significantly associated with language development at 14-16 months. These findings add to the understanding of individual differences in neural signatures of audiovisual processing and associated looking behaviour in infants.

  1. Relationships between Structural and Acoustic Properties of Maternal Talk and Children's Early Word Recognition

    Science.gov (United States)

    Suttora, Chiara; Salerni, Nicoletta; Zanchi, Paola; Zampini, Laura; Spinelli, Maria; Fasolo, Mirco

    2017-01-01

    This study aimed to investigate specific associations between structural and acoustic characteristics of infant-directed (ID) speech and word recognition. Thirty Italian-acquiring children and their mothers were tested when the children were 1;3. Children's word recognition was measured with the looking-while-listening task. Maternal ID speech was…

  2. Identifying Cortical Lateralization of Speech Processing in Infants Using Near-Infrared Spectroscopy

    Science.gov (United States)

    Bortfeld, Heather; Fava, Eswen; Boas, David A.

    2010-01-01

    We investigate the utility of near-infrared spectroscopy (NIRS) as an alternative technique for studying infant speech processing. NIRS is an optical imaging technology that uses relative changes in total hemoglobin concentration and oxygenation as an indicator of neural activation. Procedurally, NIRS has the advantage over more common methods (e.g., fMRI) in that it can be used to study the neural responses of behaviorally active infants. Older infants (aged 6–9 months) were allowed to sit on their caretakers’ laps during stimulus presentation to determine relative differences in focal activity in the temporal region of the brain during speech processing. Results revealed a dissociation of sensory-specific processing in two cortical regions, the left and right temporal lobes. These findings are consistent with those obtained using other neurophysiological methods and point to the utility of NIRS as a means of establishing neural correlates of language development in older (and more active) infants. PMID:19142766

  3. The Artistic Infant Directed Performance: A Mycroanalysis of the Adult's Movements and Sounds.

    Science.gov (United States)

    Español, Silvia; Shifres, Favio

    2015-09-01

    Intersubjectivity experiences established between adults and infants are partially determined by the particular ways in which adults are active in front of babies. A substantial body of research focuses on the "musicality" of infant-directed speech (defined melodic contours, tonal and rhythm variations, etc.) and its role in linguistic enculturation. However, researchers have recently suggested that adults also bring a multimodal performance to infants. Accordingly, some scholars seem to find indicators of the genesis of the performing arts (mainly music and dance) in such multimodal stimulation. We analyze the adult performance using analytical categories and methodologies of analysis broadly validated in the fields of music performance and movement analysis in contemporary dance. We present microanalyses of an interaction scene between an adult and a 7-month-old infant that evidenced structural aspects of infant-directed multimodal performance compatible with music and dance structures, and suggest functions of adult performance similar to performing arts functions or related to them.

  4. Brain Plasticity in Speech Training in Native English Speakers Learning Mandarin Tones

    Science.gov (United States)

    Heinzen, Christina Carolyn

    The current study employed behavioral and event-related potential (ERP) measures to investigate brain plasticity associated with second-language (L2) phonetic learning based on an adaptive computer training program. The program utilized the acoustic characteristics of Infant-Directed Speech (IDS) to train monolingual American English-speaking listeners to perceive Mandarin lexical tones. Behavioral identification and discrimination tasks were conducted using naturally recorded speech, carefully controlled synthetic speech, and non-speech control stimuli. The ERP experiments were conducted with selected synthetic speech stimuli in a passive listening oddball paradigm. Identical pre- and post-tests were administered to nine adult listeners, who completed two to three hours of perceptual training. The perceptual training sessions used pair-wise lexical tone identification and progressed through seven levels of difficulty for each tone pair. The levels of difficulty included progression in speaker variability from one to four speakers and progression through four levels of acoustic exaggeration of duration, pitch range, and pitch contour. Behavioral results for the natural speech stimuli revealed significant training-induced improvement in identification of Tones 1, 3, and 4. Improvements in identification of Tone 4 generalized to novel stimuli as well. Additionally, comparison between discrimination of across-category and within-category stimulus pairs taken from a synthetic continuum revealed a training-induced shift toward more native-like categorical perception of the Mandarin lexical tones. Analysis of the Mismatch Negativity (MMN) responses in the ERP data revealed increased amplitude and decreased latency for pre-attentive processing of across-category discrimination as a result of training. There were also laterality changes in the MMN responses to the non-speech control stimuli, which could reflect reallocation of brain resources in processing pitch patterns.

  5. Atypical audiovisual speech integration in infants at risk for autism.

    Directory of Open Access Journals (Sweden)

    Jeanne A Guiraud

    Full Text Available The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, where one face is articulating /ba/ and the other /ga/, with one face congruent with the syllable sound being presented simultaneously, the other face incongruent. This method was successful in showing that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/ - audio /ba/ and the congruent visual /ba/ - audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type of syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/ - audio /ga/ display compared with the congruent visual /ga/ - audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated ANOVA: displays x fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated ANOVA, displays x conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated ANOVA: displays x conditions x low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.

  6. Audio-visual speech perception in infants and toddlers with Down syndrome, fragile X syndrome, and Williams syndrome.

    Science.gov (United States)

    D'Souza, Dean; D'Souza, Hana; Johnson, Mark H; Karmiloff-Smith, Annette

    2016-08-01

    Typically-developing (TD) infants can construct unified cross-modal percepts, such as a speaking face, by integrating auditory-visual (AV) information. This skill is a key building block upon which higher-level skills, such as word learning, are built. Because word learning is seriously delayed in most children with neurodevelopmental disorders, we assessed the hypothesis that this delay partly results from a deficit in integrating AV speech cues. AV speech integration has rarely been investigated in neurodevelopmental disorders, and never previously in infants. We probed for the McGurk effect, which occurs when the auditory component of one sound (/ba/) is paired with the visual component of another sound (/ga/), leading to the perception of an illusory third sound (/da/ or /tha/). We measured AV integration in 95 infants/toddlers with Down, fragile X, or Williams syndrome, whom we matched on Chronological and Mental Age to 25 TD infants. We also assessed a more basic AV perceptual ability: sensitivity to matching vs. mismatching AV speech stimuli. Infants with Williams syndrome failed to demonstrate a McGurk effect, indicating poor AV speech integration. Moreover, while the TD children discriminated between matching and mismatching AV stimuli, none of the other groups did, hinting at a basic deficit or delay in AV speech processing, which is likely to constrain subsequent language development. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Volubility, consonant, and syllable characteristics in infants and toddlers later diagnosed with childhood apraxia of speech: A pilot study.

    Science.gov (United States)

    Overby, Megan; Caspari, Susan S

    2015-01-01

    This pilot study explored volubility, consonant singleton acquisition, and syllable structure development in infants and toddlers (birth-24 months) with typical speech sound production (TYP) and in those later diagnosed with childhood apraxia of speech (CAS). A retrospective longitudinal between- and within-subjects research design was utilized (TYP N=2; CAS N=4). Vocalizations from participants were analyzed between birth-24 months from home videotapes, volunteered by the children's parents, according to type (nonresonant vs. resonant), volubility, place and manner of consonant singletons, and syllable shape (V, CV, VC, CVC, VCV, CVCV, VCVC, and "Other"). Group differences in volubility were not significant, but statistically significant differences were found in the number of resonant and non-resonant productions; different consonant singletons; different place features; different manner classes; and the proportional use of fricative, glottal, and voiceless phones. Infants and toddlers in the CAS group also demonstrated difficulty with CVCs, had limited syllable shapes, and possible regression of vowel syllable structure. Data corroborate parent reports that infants and toddlers later diagnosed with CAS present differently than do those with typical speech sound skills. Additional study with infants and toddlers later diagnosed with non-CAS speech sound disorder is needed. Readers will: (1) describe current perspectives on volubility of infants and toddlers later diagnosed with CAS; (2) describe current perspectives of the consonant singleton and syllable shape inventories of infants and toddlers later diagnosed with CAS; and (3) discuss the potential differences between the speech sound development of infants and toddlers later diagnosed with CAS and those with typical speech sound skill. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Influences on infant speech processing: toward a new synthesis.

    Science.gov (United States)

    Werker, J F; Tees, R C

    1999-01-01

    To comprehend and produce language, we must be able to recognize the sound patterns of our language and the rules for how these sounds "map on" to meaning. Human infants are born with a remarkable array of perceptual sensitivities that allow them to detect the basic properties that are common to the world's languages. During the first year of life, these sensitivities undergo modification reflecting an exquisite tuning to just that phonological information that is needed to map sound to meaning in the native language. We review this transition from language-general to language-specific perceptual sensitivity that occurs during the first year of life and consider whether the changes propel the child into word learning. To account for the broad-based initial sensitivities and subsequent reorganizations, we offer an integrated transactional framework based on the notion of a specialized perceptual-motor system that has evolved to serve human speech, but which functions in concert with other developing abilities. In so doing, we highlight the links between infant speech perception, babbling, and word learning.

  9. Multimodal Infant-Directed Communication: How Caregivers Combine Tactile and Linguistic Cues

    Science.gov (United States)

    Abu-Zhaya, Rana; Seidl, Amanda; Cristia, Alejandrina

    2017-01-01

    Both touch and speech independently have been shown to play an important role in infant development. However, little is known about how they may be combined in the input to the child. We examined the use of touch and speech together by having mothers read their 5-month-olds books about body parts and animals. Results suggest that speech+touch…

  10. Perception of Speech Modulation Cues by 6-Month-Old Infants

    Science.gov (United States)

    Cabrera, Laurianne; Bertoncini, Josiane; Lorenzi, Christian

    2013-01-01

    Purpose: The capacity of 6-month-old infants to discriminate a voicing contrast (/aba/--/apa/) on the basis of "amplitude modulation (AM) cues" and "frequency modulation (FM) cues" was evaluated. Method: Several vocoded speech conditions were designed to either degrade FM cues in 4 or 32 bands or degrade AM in 32 bands. Infants…

  11. Perceived Liveliness and Speech Comprehensibility in Aphasia: The Effects of Direct Speech in Auditory Narratives

    Science.gov (United States)

    Groenewold, Rimke; Bastiaanse, Roelien; Nickels, Lyndsey; Huiskes, Mike

    2014-01-01

    Background: Previous studies have shown that in semi-spontaneous speech, individuals with Broca's and anomic aphasia produce relatively many direct speech constructions. It has been claimed that in "healthy" communication direct speech constructions contribute to the liveliness, and indirectly to the comprehensibility, of speech.…

  12. The Influence of Direct and Indirect Speech on Source Memory

    Directory of Open Access Journals (Sweden)

    Anita Eerland

    2018-02-01

People perceive the same situation described in direct speech (e.g., John said, “I like the food at this restaurant”) as more vivid and perceptually engaging than one described in indirect speech (e.g., John said that he likes the food at the restaurant). So, if direct speech enhances the perception of vividness relative to indirect speech, what are the effects of using indirect speech? In four experiments, we examined whether the use of direct and indirect speech influences the comprehender’s memory for the identity of the speaker. Participants read a direct or an indirect speech version of a story and then addressed statements to one of the four protagonists of the story in a memory task. We found better source memory at the level of protagonist gender after indirect than direct speech (Exp. 1–3). When the story was rewritten to make the protagonists more distinctive, we also found an effect of speech type on source memory at the level of the individual, with better memory after indirect than direct speech (Exp. 3–4). Memory for the content of the story, however, was not influenced by speech type (Exp. 4). While previous research showed that direct speech may enhance memory for how something was said, we conclude that indirect speech enhances memory for who said what.

  13. Differing Developmental Trajectories in Heart Rate Responses to Speech Stimuli in Infants at High and Low Risk for Autism Spectrum Disorder.

    Science.gov (United States)

    Perdue, Katherine L; Edwards, Laura A; Tager-Flusberg, Helen; Nelson, Charles A

    2017-08-01

    We investigated heart rate (HR) in infants at 3, 6, 9, and 12 months of age, at high (HRA) and low (LRC) familial risk for ASD, to identify potential endophenotypes of ASD risk related to attentional responses. HR was extracted from functional near-infrared spectroscopy recordings while infants listened to speech stimuli. Longitudinal analysis revealed that HRA infants and males generally had lower baseline HR than LRC infants and females. HRA infants showed decreased HR responses to early trials over development, while LRC infants showed increased responses. These findings suggest altered developmental trajectories in physiological responses to speech stimuli over the first year of life, with HRA infants showing less social orienting over time.

  14. Direct speech constructions in aphasic Dutch narratives

    NARCIS (Netherlands)

    Groenewold, Rimke; Bastiaanse, Roelien; Huiskes, Mike

    2013-01-01

    Background: Previous studies have shown that individuals with aphasia are usually able to produce direct reported speech constructions. So far these studies have mainly been conducted in English. The results show that direct speech is beneficial for aphasic speakers for various reasons. In Dutch the

  15. Acoustic-Emergent Phonology in the Amplitude Envelope of Child-Directed Speech.

    Directory of Open Access Journals (Sweden)

    Victoria Leong

    Full Text Available When acquiring language, young children may use acoustic spectro-temporal patterns in speech to derive phonological units in spoken language (e.g., prosodic stress patterns, syllables, phonemes. Children appear to learn acoustic-phonological mappings rapidly, without direct instruction, yet the underlying developmental mechanisms remain unclear. Across different languages, a relationship between amplitude envelope sensitivity and phonological development has been found, suggesting that children may make use of amplitude modulation (AM patterns within the envelope to develop a phonological system. Here we present the Spectral Amplitude Modulation Phase Hierarchy (S-AMPH model, a set of algorithms for deriving the dominant AM patterns in child-directed speech (CDS. Using Principal Components Analysis, we show that rhythmic CDS contains an AM hierarchy comprising 3 core modulation timescales. These timescales correspond to key phonological units: prosodic stress (Stress AM, ~2 Hz, syllables (Syllable AM, ~5 Hz and onset-rime units (Phoneme AM, ~20 Hz. We argue that these AM patterns could in principle be used by naïve listeners to compute acoustic-phonological mappings without lexical knowledge. We then demonstrate that the modulation statistics within this AM hierarchy indeed parse the speech signal into a primitive hierarchically-organised phonological system comprising stress feet (proto-words, syllables and onset-rime units. We apply the S-AMPH model to two other CDS corpora, one spontaneous and one deliberately-timed. The model accurately identified 72-82% (freely-read CDS and 90-98% (rhythmically-regular CDS stress patterns, syllables and onset-rime units. This in-principle demonstration that primitive phonology can be extracted from speech AMs is termed Acoustic-Emergent Phonology (AEP theory. 
AEP theory provides a set of methods for examining how early phonological development is shaped by the temporal modulation structure of speech across
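The record above describes extracting amplitude-modulation (AM) patterns at stress (~2 Hz), syllable (~5 Hz), and phoneme (~20 Hz) timescales. The published S-AMPH model uses a spectral band decomposition with PCA; as a much simpler illustration of the underlying idea only, the hypothetical sketch below (not the authors' implementation; all signal parameters are invented) extracts a crude amplitude envelope and measures its modulation energy at the three timescales:

```python
import math

def envelope(signal, sample_rate, smooth_hz=40.0):
    """Crude amplitude envelope: full-wave rectification followed by a
    moving-average smoother with a window of roughly 1/smooth_hz seconds."""
    win = max(1, int(sample_rate / smooth_hz))
    rect = [abs(x) for x in signal]
    out, acc = [], 0.0
    for i, v in enumerate(rect):
        acc += v
        if i >= win:
            acc -= rect[i - win]
        out.append(acc / min(i + 1, win))
    return out

def modulation_energy(env, sample_rate, mod_hz):
    """Energy of the envelope at one modulation frequency: a single DFT
    bin computed on the mean-removed envelope."""
    n = len(env)
    mean = sum(env) / n
    re = im = 0.0
    for i, v in enumerate(env):
        ang = 2.0 * math.pi * mod_hz * i / sample_rate
        re += (v - mean) * math.cos(ang)
        im -= (v - mean) * math.sin(ang)
    return (re * re + im * im) / n

# Toy "speech": a 100 Hz carrier whose amplitude is modulated at a
# syllable-like 5 Hz rate.
sr = 1000
sig = [(1.0 + math.sin(2 * math.pi * 5 * t / sr))
       * math.sin(2 * math.pi * 100 * t / sr) for t in range(2 * sr)]
env = envelope(sig, sr)
e_stress = modulation_energy(env, sr, 2.0)   # ~2 Hz, stress timescale
e_syll = modulation_energy(env, sr, 5.0)     # ~5 Hz, syllable timescale
e_phon = modulation_energy(env, sr, 20.0)    # ~20 Hz, onset-rime timescale
```

For this toy signal the syllable-rate energy dominates the other two bands, which is the kind of dominant-timescale pattern the S-AMPH analysis looks for in real CDS recordings.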

  16. Perceived liveliness and speech comprehensibility in aphasia : the effects of direct speech in auditory narratives

    NARCIS (Netherlands)

    Groenewold, Rimke; Bastiaanse, Roelien; Nickels, Lyndsey; Huiskes, Mike

    2014-01-01

    Background: Previous studies have shown that in semi-spontaneous speech, individuals with Broca's and anomic aphasia produce relatively many direct speech constructions. It has been claimed that in 'healthy' communication direct speech constructions contribute to the liveliness, and indirectly to

  17. The politeness prosody of the Javanese directive speech

    Directory of Open Access Journals (Sweden)

    F.X. Rahyono

    2009-10-01

This experimental phonetic research deals with the prosody of directive speech in Javanese. The research procedures were: (1) speech production, (2) acoustic analysis, and (3) a perception test. The data investigated are three directive utterances, in the form of statements, commands, and questions. The data were obtained by recording dialogues that present polite as well as impolite speech. Three acoustic experiments were conducted for statements, commands, and questions in directive speech: (1) modifications of duration, (2) modifications of contour, and (3) modifications of fundamental frequency. The results of the subsequent perception tests (90 stimuli, 24 subjects) were analysed statistically with ANOVA (Analysis of Variance). Based on this statistical analysis, the prosodic characteristics of polite and impolite speech were identified.

  18. THE DIRECTIVE SPEECH ACTS USED IN ENGLISH SPEAKING CLASS

    Directory of Open Access Journals (Sweden)

    Muhammad Khatib Bayanuddin

    2016-12-01

This research presents an analysis of the directive speech acts used in the third-semester English speaking class of the English study program of IAIN STS Jambi. The aims of this research are to describe the types of directive speech acts and the politeness strategies found in the English speaking class. This research used a descriptive qualitative method, applied to describe clearly the types and politeness strategies of directive speech acts based on the data from the English speaking class. The results showed that several types and politeness strategies of directive speech acts occur in the English speaking class: requestives, questions, requirements, prohibitives, permissives, and advisories as types, as well as on-record indirect strategies (prediction statement, strong obligation statement, possibility statement, weaker obligation statement, volitional statement), direct strategies (imperative, performative), and nonsentential strategies as politeness strategies. It is hoped that the findings add to knowledge of linguistics, especially of directive speech acts, and can be developed in future research. Key words: directive speech acts, types, politeness strategies.

  19. Sensitivity of cortical auditory evoked potential detection for hearing-impaired infants in response to short speech sounds

    Directory of Open Access Journals (Sweden)

    Bram Van Dun

    2012-01-01


Background: Cortical auditory evoked potentials (CAEPs) are an emerging tool for hearing aid fitting evaluation in young children who cannot provide reliable behavioral feedback. It is therefore useful to determine the relationship between the sensation level of speech sounds and the detection sensitivity of CAEPs.

    Design and methods: Twenty-five sensorineurally hearing impaired infants with an age range of 8 to 30 months were tested once, 18 aided and 7 unaided. First, behavioral thresholds of speech stimuli /m/, /g/, and /t/ were determined using visual reinforcement orientation audiometry (VROA. Afterwards, the same speech stimuli were presented at 55, 65, and 75 dB SPL, and CAEP recordings were made. An automatic statistical detection paradigm was used for CAEP detection.

    Results: For sensation levels above 0, 10, and 20 dB respectively, detection sensitivities were equal to 72 ± 10, 75 ± 10, and 78 ± 12%. In 79% of the cases, automatic detection p-values became smaller when the sensation level was increased by 10 dB.

    Conclusions: The results of this study suggest that the presence or absence of CAEPs can provide some indication of the audibility of a speech sound for infants with sensorineural hearing loss. The detection of a CAEP provides confidence, to a degree commensurate with the detection probability, that the infant is detecting that sound at the level presented. When testing infants where the audibility of speech sounds has not been established behaviorally, the lack of a cortical response indicates the possibility, but by no means a certainty, that the sensation level is 10 dB or less.
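The abstract refers only to "an automatic statistical detection paradigm" for deciding whether a CAEP is present, without specifying the algorithm. One generic stand-in for such a detector (a sketch under that assumption, not the study's actual method) is a sign-flip permutation test on per-epoch mean amplitudes:

```python
import random

def detection_p_value(epochs, n_perm=2000, seed=0):
    """Sign-flip permutation test for a time-locked evoked response.

    Under the null hypothesis (no response), each epoch's polarity is
    arbitrary, so the observed grand-mean amplitude is compared against a
    null distribution built by randomly flipping the sign of each epoch.
    """
    means = [sum(e) / len(e) for e in epochs]   # per-epoch mean amplitude
    observed = abs(sum(means) / len(means))     # unsigned grand mean
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        flipped = [m * rng.choice((-1.0, 1.0)) for m in means]
        if abs(sum(flipped) / len(flipped)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)            # add-one to avoid p = 0

# A consistent deflection across epochs yields a small p-value; epochs
# with no time-locked response yield a large one (toy data, invented).
signal_epochs = [[1.0] * 50 for _ in range(20)]
noise_epochs = [[1.0] * 50, [-1.0] * 50] * 10
p_signal = detection_p_value(signal_epochs)
p_noise = detection_p_value(noise_epochs)
```

In an aided-infant workflow like the one described, such a p-value per stimulus level would drive the automatic "response present / absent" decision, with the caveat from the conclusions that absence of a detection is weak evidence of inaudibility.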

  20. Atypical lateralization of ERP response to native and non-native speech in infants at risk for autism spectrum disorder.

    Science.gov (United States)

    Seery, Anne M; Vogel-Farley, Vanessa; Tager-Flusberg, Helen; Nelson, Charles A

    2013-07-01

Language impairment is common in autism spectrum disorders (ASD) and is often accompanied by atypical neural lateralization. However, it is unclear when in development language impairment or atypical lateralization first emerges. To address these questions, we recorded event-related potentials (ERPs) to native and non-native speech contrasts longitudinally in infants at risk for ASD (HRA) over the first year of life, to determine whether atypical lateralization is present as an endophenotype early in development and whether these infants show delay in a very basic precursor of language acquisition: phonemic perceptual narrowing. The ERP response of the HRA group to a non-native speech contrast revealed a trajectory of perceptual narrowing similar to that of a group of low-risk controls (LRC), suggesting that phonemic perceptual narrowing does not appear to be delayed in these high-risk infants. In contrast, there were significant group differences in the development of the lateralized ERP response to speech: between 6 and 12 months the LRC group displayed a lateralized response to the speech sounds, while the HRA group failed to display this pattern. We suggest the possibility that atypical lateralization to speech may be an ASD endophenotype over the first year of life. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Finding Words and Word Structure in Artificial Speech: The Development of Infants' Sensitivity to Morphosyntactic Regularities

    Science.gov (United States)

    Marchetto, Erika; Bonatti, Luca L.

    2015-01-01

    To achieve language proficiency, infants must find the building blocks of speech and master the rules governing their legal combinations. However, these problems are linked: words are also built according to rules. Here, we explored early morphosyntactic sensitivity by testing when and how infants could find either words or within-word structure…

  2. Precursors to language in preterm infants: speech perception abilities in the first year of life.

    Science.gov (United States)

    Bosch, Laura

    2011-01-01

Language development in infants born very preterm is often compromised. Poor language skills have been described in preschoolers, and differences between preterms and full terms, relative to early vocabulary size and morphosyntactic complexity, have also been identified. However, very few data are available concerning early speech perception abilities and their predictive value for later language outcomes. An overview is presented of the results obtained in a prospective study exploring the link between early speech perception abilities and lexical development in the second year of life in a population of very preterm infants (≤32 gestation weeks). Specifically, behavioral measures relative to (a) native-language recognition and discrimination from a rhythmically distant and a rhythmically close nonfamiliar language, and (b) monosyllabic word-form segmentation, were obtained and compared to data from full-term infants. Expressive vocabulary at two test ages (12 and 18 months, corrected age for gestation) was measured using the MacArthur Communicative Development Inventory. Behavioral results indicated that differences between preterm and control groups were present, but only evident when task demands were high in terms of language processing, selective attention to relevant information, and memory load. When responses could be based on acquired knowledge from accumulated linguistic experience, between-group differences were no longer observed. Critically, while preterm infants responded satisfactorily to the native-language recognition and discrimination tasks, they clearly differed from full-term infants in the more challenging activity of extracting and retaining word-form units from fluent speech, a fundamental ability for starting to build a lexicon. Correlations between results from the language discrimination tasks and expressive vocabulary measures could not be systematically established. However, attention time to novel words in the word segmentation

  3. The attention-getting capacity of whines and child-directed speech.

    Science.gov (United States)

    Chang, Rosemarie Sokol; Thompson, Nicholas S

    2010-06-03

The current study tested the ability of whines and child-directed speech to attract the attention of listeners involved in a story repetition task. Twenty non-parents and 17 parents were presented with two dull stories, each playing to a separate ear, and asked to repeat one of the stories verbatim. The story that participants were instructed to ignore was interrupted occasionally with the reader whining and using child-directed speech. While repeating the passage, participants were monitored for Galvanic skin response, heart rate, and blood pressure. Based on 4 measures, participants tuned in more to whining, and to a lesser extent child-directed speech, than neutral speech segments that served as a control. Participants, regardless of gender or parental status, made more mistakes when presented with the whine or child-directed speech, they recalled hearing those vocalizations, they recognized more words from the whining segment than the neutral control segment, and they exhibited higher Galvanic skin response during the presence of whines and child-directed speech than neutral speech segments. Whines and child-directed speech appear to be integral members of a suite of vocalizations designed to get the attention of attachment partners by playing to an auditory sensitivity among humans. Whines in particular may serve the function of eliciting care at a time when caregivers switch from primarily mothers to greater care from other caregivers.

  4. The Attention-Getting Capacity of Whines and Child-Directed Speech

    Directory of Open Access Journals (Sweden)

    Rosemarie Sokol Chang

    2010-04-01

The current study tested the ability of whines and child-directed speech to attract the attention of listeners involved in a story repetition task. Twenty non-parents and 17 parents were presented with two dull stories, each playing to a separate ear, and asked to repeat one of the stories verbatim. The story that participants were instructed to ignore was interrupted occasionally with the reader whining and using child-directed speech. While repeating the passage, participants were monitored for Galvanic skin response, heart rate, and blood pressure. Based on 4 measures, participants tuned in more to whining, and to a lesser extent child-directed speech, than neutral speech segments that served as a control. Participants, regardless of gender or parental status, made more mistakes when presented with the whine or child-directed speech, they recalled hearing those vocalizations, they recognized more words from the whining segment than the neutral control segment, and they exhibited higher Galvanic skin response during the presence of whines and child-directed speech than neutral speech segments. Whines and child-directed speech appear to be integral members of a suite of vocalizations designed to get the attention of attachment partners by playing to an auditory sensitivity among humans. Whines in particular may serve the function of eliciting care at a time when caregivers switch from primarily mothers to greater care from other caregivers.

  5. Contextual modulation of reading rate for direct versus indirect speech quotations.

    Science.gov (United States)

    Yao, Bo; Scheepers, Christoph

    2011-12-01

    In human communication, direct speech (e.g., Mary said: "I'm hungry") is perceived to be more vivid than indirect speech (e.g., Mary said [that] she was hungry). However, the processing consequences of this distinction are largely unclear. In two experiments, participants were asked to either orally (Experiment 1) or silently (Experiment 2, eye-tracking) read written stories that contained either a direct speech or an indirect speech quotation. The context preceding those quotations described a situation that implied either a fast-speaking or a slow-speaking quoted protagonist. It was found that this context manipulation affected reading rates (in both oral and silent reading) for direct speech quotations, but not for indirect speech quotations. This suggests that readers are more likely to engage in perceptual simulations of the reported speech act when reading direct speech as opposed to meaning-equivalent indirect speech quotations, as part of a more vivid representation of the former. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Infant and Toddler Oral- and Manual-Motor Skills Predict Later Speech Fluency in Autism

    Science.gov (United States)

    Gernsbacher, Morton Ann; Sauer, Eve A.; Geye, Heather M.; Schweigert, Emily K.; Goldsmith, H. Hill

    2008-01-01

    Background: Spoken and gestural communication proficiency varies greatly among autistic individuals. Three studies examined the role of oral- and manual-motor skill in predicting autistic children's speech development. Methods: Study 1 investigated whether infant and toddler oral- and manual-motor skills predict middle childhood and teenage speech…

  7. Sensorimotor influences on speech perception in infancy.

    Science.gov (United States)

    Bruderer, Alison G; Danielson, D Kyle; Kandhadai, Padmapriya; Werker, Janet F

    2015-11-03

    The influence of speech production on speech perception is well established in adults. However, because adults have a long history of both perceiving and producing speech, the extent to which the perception-production linkage is due to experience is unknown. We addressed this issue by asking whether articulatory configurations can influence infants' speech perception performance. To eliminate influences from specific linguistic experience, we studied preverbal, 6-mo-old infants and tested the discrimination of a nonnative, and hence never-before-experienced, speech sound distinction. In three experimental studies, we used teething toys to control the position and movement of the tongue tip while the infants listened to the speech sounds. Using ultrasound imaging technology, we verified that the teething toys consistently and effectively constrained the movement and positioning of infants' tongues. With a looking-time procedure, we found that temporarily restraining infants' articulators impeded their discrimination of a nonnative consonant contrast but only when the relevant articulator was selectively restrained to prevent the movements associated with producing those sounds. Our results provide striking evidence that even before infants speak their first words and without specific listening experience, sensorimotor information from the articulators influences speech perception. These results transform theories of speech perception by suggesting that even at the initial stages of development, oral-motor movements influence speech sound discrimination. Moreover, an experimentally induced "impairment" in articulator movement can compromise speech perception performance, raising the question of whether long-term oral-motor impairments may impact perceptual development.

  8. The knowledge of pregnant teenagers on Speech Therapy related to maternal-infant health care - doi:10.5020/18061230.2007.p207

    Directory of Open Access Journals (Sweden)

    Daniela Carvalho Neves

    2012-01-01

This study aimed to investigate the knowledge of pregnant teenagers about Speech Therapy related to maternal-infant health care. A qualitative analysis was made based on a thematic investigation of the subject matter. Ten pregnant teenagers, with chronological ages between ten and nineteen years, took part in the survey. They were between the fifth and ninth months of gestational age and were being attended at the Center of Pregnant Teenagers' Attention Care at Fortaleza General Hospital. Data collection involved the application of a semi-structured interview broaching topics that could identify what the pregnant teenagers knew about Speech Therapy and maternal-infant care. Education interventions, related to Speech Therapy health promotion, were also accomplished. The results pointed out that the pregnant teenagers' level of knowledge on aspects such as food transition and utensils, oral habits, language stimulation, and hearing loss detection was still incipient and unsatisfactory. It is concluded that the knowledge of the pregnant teenagers about Speech Therapy related to maternal-infant health care was unsatisfactory, which demonstrates the importance of education interventions related to human communication health care for the studied sample.

  9. Brain responses to audiovisual speech mismatch in infants are associated with individual differences in looking behaviour.

    Science.gov (United States)

    Kushnerenko, Elena; Tomalski, Przemyslaw; Ballieux, Haiko; Ribeiro, Helena; Potton, Anita; Axelsson, Emma L; Murphy, Elizabeth; Moore, Derek G

    2013-11-01

Research on audiovisual speech integration has reported high levels of individual variability, especially among young infants. In the present study we tested the hypothesis that this variability results from individual differences in the maturation of audiovisual speech processing during infancy. A developmental shift in selective attention to audiovisual speech has been demonstrated between 6 and 9 months with an increase in the time spent looking to articulating mouths as compared to eyes (Lewkowicz & Hansen-Tift (2012) Proc. Natl Acad. Sci. USA, 109, 1431-1436; Tomalski et al. (2012) Eur. J. Dev. Psychol., 1-14). In the present study we tested whether these changes in behavioural maturational level are associated with differences in brain responses to audiovisual speech across this age range. We measured high-density event-related potentials (ERPs) in response to videos of audiovisually matching and mismatched syllables /ba/ and /ga/, and subsequently examined visual scanning of the same stimuli with eye-tracking. There were no clear age-specific changes in ERPs, but the amplitude of audiovisual mismatch response (AVMMR) to the combination of visual /ba/ and auditory /ga/ was strongly negatively associated with looking time to the mouth in the same condition. These results have significant implications for our understanding of individual differences in neural signatures of audiovisual speech processing in infants, suggesting that they are not strictly related to chronological age but instead associated with the maturation of looking behaviour, and develop at individual rates in the second half of the first year of life. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  10. Directive Speech Act of Imamu in Katoba Discourse of Muna Ethnic

    Science.gov (United States)

    Ardianto, Ardianto; Hadirman, Hardiman

    2018-05-01

One of the traditions of the Muna ethnic group is the katoba ritual, a tradition embodying local knowledge whose existence has been maintained for generations until today. It is a ritual of becoming an Islamic person, of repentance, and of the formation of the character of a child (male or female) who will enter adulthood (6-11 years), performed using directive speech. In the katoba ritual, a child who is in-katoba is introduced to the teachings of the Islamic religion, customs, manners towards parents and siblings, and behaviour towards others, which the child is expected to put into practice in daily life. This study aims to describe and explain the directive speech acts of the imamu in the katoba discourse of the Muna ethnic group. This research uses a qualitative approach. Data are collected from a natural setting, namely katoba speech discourses. The data consist of two types: (a) speech data, and (b) field note data. Data are analyzed using an interactive model with four stages: (1) data collection, (2) data reduction, (3) data display, and (4) conclusion and verification. The results show, firstly, that the forms of directive speech acts include declarative and imperative forms; secondly, that the functions of directive speech acts include teaching, explaining, suggesting, and expecting; and thirdly, that the strategies of directive speech acts include both direct and indirect strategies. The results of this study can be applied in the development of character-learning materials at schools, and can also form part of the local content (mulok) curriculum at school.

  11. Pragmatic Study of Directive Speech Acts in Stories in Alquran

    Directory of Open Access Journals (Sweden)

    Rochmat Budi Santosa

    2016-10-01

This study aims at describing the directive speech acts in the verses that contain the stories in the Qur'an. Specifically, the objectives of this study are to assess the sub-directive speech acts contained in the verses of the stories and the dominant directive speech acts. The research target is the verses (ayat) containing stories in the Qur'an. This study emphasizes the problem of finding the meaning of verses pragmatically. The data in this study are all expressions of verses about the stories in the Qur'an that contain directive speech acts. In addition, data in the form of the contexts behind the emergence of the story verses in the Qur'an are also included. The data collection techniques used are reading and note-taking. The data analysis was conducted using content analysis, classifying directive speech acts into the 6 (six) categories of Bach and Harnish's theory: requestives, questions, requirements, prohibitives, permissives, and advisories. The results show that the requestives consist of only 1 (one) verse, namely the sub-directive asking for patience. Among sub-directive questions, there are questions asking about what, question tags, why, asking for permission, who, where, which, possibilities, and offering. For the sub-directive requirements there are 60 (sixty) types of command. The command to pray is the most frequent (24 verses), and the command to give attention is in second position with 21 verses. Among sub-directive prohibitives, we found 19 kinds of restrictions. As for permissives, there is only 1 (one) verse, which allows punishment. Among advisories there are 2 kinds of advice: counsel to fear the punishment of God (1 verse), and advice to be humble (1 verse). Thus it can be said that the stories in the Alquran really contain messages, including a message to the people to carry out the commands of God and keep away from His prohibitions. The purpose is to crystallize the basic

  12. Auditory-visual speech integration by prelinguistic infants: perception of an emergent consonant in the McGurk effect.

    Science.gov (United States)

    Burnham, Denis; Dodd, Barbara

    2004-12-01

The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. Copyright 2004 Wiley Periodicals, Inc.

  13. Infants in cocktail parties

    Science.gov (United States)

    Newman, Rochelle S.

    2003-04-01

Most work on listeners' ability to separate streams of speech has focused on adults. Yet infants also find themselves in noisy environments. In order to learn from their caregivers' speech in these settings, they must first separate it from background noise such as that from television shows and siblings. Previous work has found that 7.5-month-old infants can separate streams of speech when the target voice is more intense than the distractor voice (Newman and Jusczyk, 1996), when the target voice is known to the infant (Barker and Newman, 2000), or when infants are presented with an audiovisual (rather than auditory-only) signal (Hollich, Jusczyk, and Newman, 2001). Unfortunately, the paradigm in these studies can only be used on infants at least 7.5 months of age, limiting the ability to investigate how stream segregation develops over time. The present work uses a new paradigm to explore younger infants' ability to separate streams of speech. Infants aged 4.5 months heard a female talker repeat either their own name or another infant's name, while several other voices spoke fluently in the background. We present data on infants' ability to recognize their own name in this cocktail party situation. [Work supported by NSF and NICHD.]

  14. Infants Show Stability of Goal-Directed Imitation

    Science.gov (United States)

    Sakkalou, Elena; Ellis-Davies, Kate; Fowler, Nia C.; Hilbrink, Elma E.; Gattis, Merideth

    2013-01-01

    Previous studies have reported that infants selectively reproduce observed actions and have argued that this selectivity reflects understanding of intentions and goals, or goal-directed imitation. We reasoned that if selective imitation of goal-directed actions reflects understanding of intentions, infants should demonstrate stability across…

  15. Statistical learning in a natural language by 8-month-old infants.

    Science.gov (United States)

    Pelucchi, Bruna; Hay, Jessica F; Saffran, Jenny R

    2009-01-01

    Numerous studies over the past decade support the claim that infants are equipped with powerful statistical language learning mechanisms. The primary evidence for statistical language learning in word segmentation comes from studies using artificial languages, continuous streams of synthesized syllables that are highly simplified relative to real speech. To what extent can these conclusions be scaled up to natural language learning? In the current experiments, English-learning 8-month-old infants' ability to track transitional probabilities in fluent infant-directed Italian speech was tested (N = 72). The results suggest that infants are sensitive to transitional probability cues in unfamiliar natural language stimuli, and support the claim that statistical learning is sufficiently robust to support aspects of real-world language acquisition.
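    The statistic at issue in such word-segmentation studies, the forward transitional probability between adjacent syllables, is easy to make concrete. The sketch below is illustrative only (the syllable stream and "words" are invented for this example, not the Italian stimuli used in the study): within a word, TP(y|x) = count(xy) / count(x) tends to be high, while it drops at word boundaries, which is the cue infants are thought to exploit.

```python
# Illustrative sketch (hypothetical stimuli, not the study's materials):
# computing forward transitional probabilities over a syllable stream.
from collections import Counter

def transitional_probabilities(syllables):
    """Map each adjacent pair (x, y) to TP(y|x) = count(xy) / count(x)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# A toy stream built from two made-up "words": "fu ga" and "me lu".
stream = "fu ga me lu fu ga fu ga me lu me lu fu ga me lu".split()
tps = transitional_probabilities(stream)
print(tps[("fu", "ga")])  # within-word transition: 1.0
print(tps[("ga", "me")])  # across a word boundary: 0.75
```

A learner that posits boundaries where transitional probability dips would recover the two toy words from this stream.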

  16. Do 6-Month-Olds Understand That Speech Can Communicate?

    Science.gov (United States)

    Vouloumanos, Athena; Martin, Alia; Onishi, Kristine H.

    2014-01-01

    Adults and 12-month-old infants recognize that even unfamiliar speech can communicate information between third parties, suggesting that they can separate the communicative function of speech from its lexical content. But do infants recognize that speech can communicate due to their experience understanding and producing language, or do they…

  17. Neural responses to multimodal ostensive signals in 5-month-old infants.

    Directory of Open Access Journals (Sweden)

    Eugenio Parise

    Full Text Available Infants' sensitivity to ostensive signals, such as direct eye contact and infant-directed speech, is well documented in the literature. We investigated how infants interpret such signals by assessing common processing mechanisms devoted to them and by measuring neural responses to their compounds. In Experiment 1, we found that ostensive signals from different modalities display overlapping electrophysiological activity in 5-month-old infants, suggesting that these signals share neural processing mechanisms independently of their modality. In Experiment 2, we found that the activation to ostensive signals from different modalities is not additive to each other, but rather reflects the presence of ostension in either stimulus stream. These data support the thesis that ostensive signals obligatorily indicate to young infants that communication is directed to them.

  18. Silent reading of direct versus indirect speech activates voice-selective areas in the auditory cortex.

    Science.gov (United States)

    Yao, Bo; Belin, Pascal; Scheepers, Christoph

    2011-10-01

    In human communication, direct speech (e.g., Mary said: "I'm hungry") is perceived to be more vivid than indirect speech (e.g., Mary said [that] she was hungry). However, for silent reading, the representational consequences of this distinction are still unclear. Although many of us share the intuition of an "inner voice," particularly during silent reading of direct speech statements in text, there has been little direct empirical confirmation of this experience so far. Combining fMRI with eye tracking in human volunteers, we show that silent reading of direct versus indirect speech engenders differential brain activation in voice-selective areas of the auditory cortex. This suggests that readers are indeed more likely to engage in perceptual simulations (or spontaneous imagery) of the reported speaker's voice when reading direct speech as opposed to meaning-equivalent indirect speech statements as part of a more vivid representation of the former. Our results may be interpreted in line with embodied cognition and form a starting point for more sophisticated interdisciplinary research on the nature of auditory mental simulation during reading.

  19. Direct speech quotations promote low relative-clause attachment in silent reading of English.

    Science.gov (United States)

    Yao, Bo; Scheepers, Christoph

    2018-07-01

    The implicit prosody hypothesis (Fodor, 1998, 2002) proposes that silent reading coincides with a default, implicit form of prosody to facilitate sentence processing. Recent research demonstrated that a more vivid form of implicit prosody is mentally simulated during silent reading of direct speech quotations (e.g., Mary said, "This dress is beautiful"), with neural and behavioural consequences (e.g., Yao, Belin, & Scheepers, 2011; Yao & Scheepers, 2011). Here, we explored the relation between 'default' and 'simulated' implicit prosody in the context of relative-clause (RC) attachment in English. Apart from confirming a general low RC-attachment preference in both production (Experiment 1) and comprehension (Experiments 2 and 3), we found that during written sentence completion (Experiment 1) or when reading silently (Experiment 2), the low RC-attachment preference was reliably enhanced when the critical sentences were embedded in direct speech quotations as compared to indirect speech or narrative sentences. However, when reading aloud (Experiment 3), direct speech did not enhance the general low RC-attachment preference. The results from Experiments 1 and 2 suggest a quantitative boost to implicit prosody (via auditory perceptual simulation) during silent production/comprehension of direct speech. By contrast, when reading aloud (Experiment 3), prosody becomes equally salient across conditions due to its explicit nature; indirect speech and narrative sentences thus become as susceptible to prosody-induced syntactic biases as direct speech. The present findings suggest a shared cognitive basis between default implicit prosody and simulated implicit prosody, providing a new platform for studying the effects of implicit prosody on sentence processing. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Speech preference is associated with autistic-like behavior in 18-months-olds at risk for Autism Spectrum Disorder.

    Science.gov (United States)

    Curtin, Suzanne; Vouloumanos, Athena

    2013-09-01

    We examined whether infants' preference for speech at 12 months is associated with autistic-like behaviors at 18 months in infants who are at increased risk for autism spectrum disorder (ASD) because they have an older sibling diagnosed with ASD and in low-risk infants. Only low-risk infants listened significantly longer to speech than to nonspeech at 12 months. In both groups, relative preference for speech correlated positively with general cognitive ability at 12 months. However, in high-risk infants only, preference for speech was associated with autistic-like behavior at 18 months, while in low-risk infants, preference for speech correlated with language abilities. This suggests that in children at risk for ASD an atypical species-specific bias for speech may underlie atypical social development.

  1. Infants learn better from left to right: a directional bias in infants? sequence learning

    OpenAIRE

    Bulf, Hermann; de Hevia, Maria Dolores; Gariboldi, Valeria; Macchi Cassia, Viola

    2017-01-01

    A wealth of studies show that human adults map ordered information onto a directional spatial continuum. We asked whether mapping ordinal information into a directional space constitutes an early predisposition, already functional prior to the acquisition of symbolic knowledge and language. While it is known that preverbal infants represent numerical order along a left-to-right spatial continuum, no studies have investigated yet whether infants, like adults, organize any kind of ordinal infor...

  2. Dysfluencies in the speech of adults with intellectual disabilities and reported speech difficulties.

    Science.gov (United States)

    Coppens-Hofman, Marjolein C; Terband, Hayo R; Maassen, Ben A M; van Schrojenstein Lantman-De Valk, Henny M J; van Zaalen-op't Hof, Yvonne; Snik, Ad F M

    2013-01-01

    In individuals with an intellectual disability, speech dysfluencies are more common than in the general population. In clinical practice, these fluency disorders are generally diagnosed and treated as stuttering rather than cluttering. The aim was to characterise the type of dysfluencies in adults with intellectual disabilities and reported speech difficulties, with an emphasis on manifestations of stuttering and cluttering, a distinction intended to help optimise treatment aimed at improving fluency and intelligibility. The dysfluencies in the spontaneous speech of 28 adults (18-40 years; 16 men) with mild and moderate intellectual disabilities (IQs 40-70), who were characterised as poorly intelligible by their caregivers, were analysed using the speech norms for typically developing adults and children. The speakers were subsequently assigned to different diagnostic categories by relating their resulting dysfluency profiles to mean articulatory rate and articulatory rate variability. Twenty-two (75%) of the participants showed clinically significant dysfluencies, of which 21% were classified as cluttering, 29% as cluttering-stuttering and 25% as clear cluttering at normal articulatory rate. The characteristic pattern of stuttering did not occur. The dysfluencies in the speech of adults with intellectual disabilities and poor intelligibility show patterns that are specific for this population. Together, the results suggest that in this specific group of dysfluent speakers interventions should be aimed at cluttering rather than stuttering. The reader will be able to (1) describe patterns of dysfluencies in the speech of adults with intellectual disabilities that are specific for this group of people, (2) explain that a high rate of dysfluencies in speech is potentially a major determiner of poor intelligibility in adults with ID and (3) describe suggestions for intervention focusing on cluttering rather than stuttering in dysfluent speakers with ID. Copyright © 2013 Elsevier Inc.

  3. Contextual Modulation of Reading Rate for Direct versus Indirect Speech Quotations

    Science.gov (United States)

    Yao, Bo; Scheepers, Christoph

    2011-01-01

    In human communication, direct speech (e.g., Mary said: "I'm hungry") is perceived to be more vivid than indirect speech (e.g., Mary said [that] she was hungry). However, the processing consequences of this distinction are largely unclear. In two experiments, participants were asked to either orally (Experiment 1) or silently (Experiment 2,…

  4. An Overview of Iron in Term Breast-Fed Infants

    Directory of Open Access Journals (Sweden)

    Wafaa A. Qasem

    2015-01-01

    Full Text Available Background: Iron is an essential nutrient for normal growth and neurodevelopment of infants. Iron deficiency (ID) remains the most common micronutrient deficiency worldwide. There are convincing data that ID is associated with negative effects on neurological and psychomotor development. Objectives: In this review, we provide an overview of current knowledge of the importance of iron in normal term breast-fed infants, with a focus on recommendations, metabolism, and iron requirements. Conclusions: Health organizations around the world recommend the introduction of iron-rich foods or iron supplements for growing infants to prevent ID. However, there is no routine screening for ID in infancy. Multicenter trials with long-term follow-up are needed to investigate the association between iron fortification/supplementation and various health outcomes.

  5. Brain 'talks over' boring quotes: top-down activation of voice-selective areas while listening to monotonous direct speech quotations.

    Science.gov (United States)

    Yao, Bo; Belin, Pascal; Scheepers, Christoph

    2012-04-15

    In human communication, direct speech (e.g., Mary said, "I'm hungry") is perceived as more vivid than indirect speech (e.g., Mary said that she was hungry). This vividness distinction has previously been found to underlie silent reading of quotations: Using functional magnetic resonance imaging (fMRI), we found that direct speech elicited higher brain activity in the temporal voice areas (TVA) of the auditory cortex than indirect speech, consistent with an "inner voice" experience in reading direct speech. Here we show that listening to monotonously spoken direct versus indirect speech quotations also engenders differential TVA activity. This suggests that individuals engage in top-down simulations or imagery of enriched supra-segmental acoustic representations while listening to monotonous direct speech. The findings shed new light on the acoustic nature of the "inner voice" in understanding direct speech. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. Infants learn better from left to right: a directional bias in infants' sequence learning.

    Science.gov (United States)

    Bulf, Hermann; de Hevia, Maria Dolores; Gariboldi, Valeria; Macchi Cassia, Viola

    2017-05-26

    A wealth of studies show that human adults map ordered information onto a directional spatial continuum. We asked whether mapping ordinal information into a directional space constitutes an early predisposition, already functional prior to the acquisition of symbolic knowledge and language. While it is known that preverbal infants represent numerical order along a left-to-right spatial continuum, no studies have yet investigated whether infants, like adults, organize any kind of ordinal information onto a directional space. We investigated whether 7-month-olds' ability to learn high-order rule-like patterns from visual sequences of geometric shapes was affected by the spatial orientation of the sequences (left-to-right vs. right-to-left). Results showed that infants readily learn rule-like patterns when visual sequences were presented from left to right, but not when presented from right to left. This result provides evidence that spatial orientation critically determines preverbal infants' ability to perceive and learn ordered information in visual sequences, consistent with the idea that a left-to-right spatially organized mental representation of ordered dimensions might be rooted in biologically-determined constraints on human brain development.

  7. Investigating the neural correlates of voice versus speech-sound directed information in pre-school children.

    Directory of Open Access Journals (Sweden)

    Nora Maria Raschle

    Full Text Available Studies in sleeping newborns and infants propose that the superior temporal sulcus is involved in speech processing soon after birth. Speech processing also implicitly requires the analysis of the human voice, which conveys both linguistic and extra-linguistic information. However, due to technical and practical challenges when neuroimaging young children, evidence of neural correlates of speech and/or voice processing in toddlers and young children remains scarce. In the current study, we used functional magnetic resonance imaging (fMRI) in 20 typically developing preschool children (average age = 5.8 y; range 5.2-6.8 y) to investigate brain activation during judgments about vocal identity versus the initial speech sound of spoken object words. FMRI results reveal common brain regions responsible for voice-specific and speech-sound specific processing of spoken object words including bilateral primary and secondary language areas of the brain. Contrasting voice-specific with speech-sound specific processing predominantly activates the anterior part of the right-hemispheric superior temporal sulcus. Furthermore, the right STS is functionally correlated with left-hemispheric temporal and right-hemispheric prefrontal regions. This finding underlines the importance of the right superior temporal sulcus as a temporal voice area and indicates that this brain region is specialized, and functions similarly to adults, by the age of five. We thus extend previous knowledge of voice-specific regions and their functional connections to the young brain, which may further our understanding of the neuronal mechanism of speech-specific processing in children with developmental disorders, such as autism or specific language impairments.

  8. The Beginnings of Danish Speech Perception

    DEFF Research Database (Denmark)

    Østerbye, Torkil

    Little is known about the perception of speech sounds by native Danish listeners. However, the Danish sound system differs in several interesting ways from the sound systems of other languages. For instance, Danish is characterized, among other features, by a rich vowel inventory and by different reductions of speech sounds evident in the pronunciation of the language. This book (originally a PhD thesis) consists of three studies based on the results of two experiments. The experiments were designed to provide knowledge of the perception of Danish speech sounds by Danish adults and infants, in the light of the rich and complex Danish sound system. The first two studies report on native adults' perception of Danish speech sounds in quiet and noise. The third study examined the development of language-specific perception in native Danish infants at 6, 9 and 12 months of age. The book points…

  9. Beam transport radiation shielding for branch lines 2-ID-B and 2-ID-C

    International Nuclear Information System (INIS)

    Feng, Y.P.; Lai, B.; McNulty, I.; Dejus, R.J.; Randall, K.J.; Yun, W.

    1995-01-01

    The x-ray radiation shielding requirements beyond the first optics enclosure have been considered for the beam transport of the 2-ID-B and 2-ID-C branch lines of Sector 2 (SRI-CAT) of the APS. The first three optical components (mirrors) of the 2-ID-B branch are contained within the shielded first optics enclosure. Calculations indicate that scattering of the primary synchrotron beam by beamline components outside the enclosure, such as apertures and monochromators, or by gas particles in case of vacuum failure is within safe limits for this branch. A standard 2.5-inch-diameter stainless steel pipe with 1/16-inch-thick walls provides adequate shielding to reduce the radiation dose equivalent rate to human tissue to below the maximum permissible limit of 0.25 mrem/hr. The 2-ID-C branch requires, between the first optics enclosure where only two mirrors are used and the housing for the third mirror, additional lead shielding (0.75 mm) and a minimum approach distance of 2.6 cm. A direct beam stop consisting of at least 4.5 mm of lead is also required immediately downstream of the third mirror for 2-ID-C. Finally, to stop the direct beam from escaping the experimental station, a beam stop consisting of at least 4-mm or 2.5-mm steel is required for the 2-ID-B or 2-ID-C branches, respectively. This final requirement can be met by the vacuum chambers used to house the experiments for both branch lines
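    The shielding thicknesses quoted above follow from exponential photon attenuation, I = I0 · exp(−μx). The sketch below is purely illustrative and is not drawn from the report: the linear attenuation coefficient used is an assumed placeholder (real coefficients depend strongly on photon energy and are tabulated per material), but it shows how a transmitted-intensity fraction is computed for a given shield thickness.

```python
# Hedged sketch: exponential photon attenuation I = I0 * exp(-mu * x),
# the relation underlying shielding-thickness requirements.
# mu below is an ASSUMED illustrative value, not one from the report.
import math

def attenuation_factor(mu_per_cm, thickness_cm):
    """Fraction of incident intensity transmitted through a shield."""
    return math.exp(-mu_per_cm * thickness_cm)

# Hypothetical: lead with mu = 50 /cm at some photon energy, 0.75 mm thick.
print(attenuation_factor(50.0, 0.075))
```

In practice the required thickness is found by inverting this relation, x = ln(I0/I) / μ, against the permissible dose-rate limit.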

  10. Transcranial direct current stimulation over left inferior frontal cortex improves speech fluency in adults who stutter.

    Science.gov (United States)

    Chesters, Jennifer; Möttönen, Riikka; Watkins, Kate E

    2018-04-01

    See Crinion (doi:10.1093/brain/awy075) for a scientific commentary on this article. Stuttering is a neurodevelopmental condition affecting 5% of children, and persisting in 1% of adults. Promoting lasting fluency improvement in adults who stutter is a particular challenge. Novel interventions to improve outcomes are of value, therefore. Previous work in patients with acquired motor and language disorders reported enhanced benefits of behavioural therapies when paired with transcranial direct current stimulation. Here, we report the results of the first trial investigating whether transcranial direct current stimulation can improve speech fluency in adults who stutter. We predicted that applying anodal stimulation to the left inferior frontal cortex during speech production with temporary fluency inducers would result in longer-lasting fluency improvements. Thirty male adults who stutter completed a randomized, double-blind, controlled trial of anodal transcranial direct current stimulation over left inferior frontal cortex. Fifteen participants received 20 min of 1-mA stimulation on five consecutive days while speech fluency was temporarily induced using choral and metronome-timed speech. The other 15 participants received the same speech fluency intervention with sham stimulation. Speech fluency during reading and conversation was assessed at baseline, before and after the stimulation on each day of the 5-day intervention, and at 1 and 6 weeks after the end of the intervention. Anodal stimulation combined with speech fluency training significantly reduced the percentage of disfluent speech measured 1 week after the intervention compared with fluency intervention alone. At 6 weeks after the intervention, this improvement was maintained during reading but not during conversation. 
Outcome scores at both post-intervention time points on a clinical assessment tool (the Stuttering Severity Instrument, version 4) also showed significant improvement in the group receiving

  11. Maternal and paternal pragmatic speech directed to young children with Down syndrome and typical development.

    Science.gov (United States)

    de Falco, Simona; Venuti, Paola; Esposito, Gianluca; Bornstein, Marc H

    2011-02-01

    The aim of this study was to compare functional features of maternal and paternal speech directed to children with Down syndrome and developmental age-matched typically developing children. Altogether 88 parents (44 mothers and 44 fathers) and their 44 young children (22 children with Down syndrome and 22 typically developing children) participated. Parents' speech directed to children was obtained through observation of naturalistic parent-child dyadic interactions. Verbatim transcripts of maternal and paternal language were categorized in terms of the primary function of each speech unit. Parents (both mothers and fathers) of children with Down syndrome used more affect-salient speech compared to parents of typically developing children. Although parents used the same amounts of information-salient speech, parents of children with Down syndrome used more direct statements and asked fewer questions than did parents of typically developing children. Concerning parent gender, in both groups mothers used more language than fathers and specifically more descriptions. These findings held when controlling for child age, MLU, and family SES. This study highlights strengths and weaknesses of parental communication to children with Down syndrome and helps to identify areas of potential improvement through intervention. Copyright © 2010 Elsevier Inc. All rights reserved.

  12. Bilingualism modulates infants' selective attention to the mouth of a talking face.

    Science.gov (United States)

    Pons, Ferran; Bosch, Laura; Lewkowicz, David J

    2015-04-01

    Infants growing up in bilingual environments succeed at learning two languages. What adaptive processes enable them to master the more complex nature of bilingual input? One possibility is that bilingual infants take greater advantage of the redundancy of the audiovisual speech that they usually experience during social interactions. Thus, we investigated whether bilingual infants' need to keep languages apart increases their attention to the mouth as a source of redundant and reliable speech cues. We measured selective attention to talking faces in 4-, 8-, and 12-month-old Catalan and Spanish monolingual and bilingual infants. Monolinguals looked more at the eyes than the mouth at 4 months and more at the mouth than the eyes at 8 months in response to both native and nonnative speech, but they looked more at the mouth than the eyes at 12 months only in response to nonnative speech. In contrast, bilinguals looked equally at the eyes and mouth at 4 months, more at the mouth than the eyes at 8 months, and more at the mouth than the eyes at 12 months, and these patterns of responses were found for both native and nonnative speech at all ages. Thus, to support their dual-language acquisition processes, bilingual infants exploit the greater perceptual salience of redundant audiovisual speech cues at an earlier age and for a longer time than monolingual infants. © The Author(s) 2015.

  13. Dissociating Cortical Activity during Processing of Native and Non-Native Audiovisual Speech from Early to Late Infancy

    Directory of Open Access Journals (Sweden)

    Eswen Fava

    2014-08-01

    Full Text Available Initially, infants are capable of discriminating phonetic contrasts across the world's languages. Starting between seven and ten months of age, they gradually lose this ability through a process of perceptual narrowing. Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech). Thus far, tracking the developmental trajectory of this tuning process has been focused primarily on auditory speech alone, and generally using isolated sounds. But infants learn from speech produced by people talking to them, meaning they learn from a complex audiovisual signal. Here, we use near-infrared spectroscopy to measure blood concentration changes in the bilateral temporal cortices of infants in three different age groups: 3-to-6 months, 7-to-10 months, and 11-to-14 months. Critically, all three groups of infants were tested with continuous audiovisual speech in both their native and another, unfamiliar language. We found that at each age range, infants showed different patterns of cortical activity in response to the native and non-native stimuli. Infants in the youngest group showed bilateral cortical activity that was greater overall in response to non-native relative to native speech; the oldest group showed left lateralized activity in response to native relative to non-native speech. These results highlight perceptual tuning as a dynamic process that happens across modalities and at different levels of stimulus complexity.

  14. Infant speech-sound discrimination testing: effects of stimulus intensity and procedural model on measures of performance.

    Science.gov (United States)

    Nozza, R J

    1987-06-01

    Performance of infants in a speech-sound discrimination task (/ba/ vs /da/) was measured at three stimulus intensity levels (50, 60, and 70 dB SPL) using the operant head-turn procedure. The procedure was modified so that data could be treated as though from a single-interval (yes-no) procedure, as is commonly done, as well as if from a sustained attention (vigilance) task. Discrimination performance changed significantly with increase in intensity, suggesting caution in the interpretation of results from infant discrimination studies in which only single stimulus intensity levels within this range are used. The assumptions made about the underlying methodological model did not change the performance-intensity relationships. However, infants demonstrated response decrement, typical of vigilance tasks, which supports the notion that the head-turn procedure is represented best by the vigilance model. Analysis then was done according to a method designed for tasks with undefined observation intervals [C. S. Watson and T. L. Nichols, J. Acoust. Soc. Am. 59, 655-668 (1976)]. Results reveal that, while group data are reasonably well represented across levels of difficulty by the fixed-interval model, there is a variation in performance as a function of time following trial onset that could lead to underestimation of performance in some cases.
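    The single-interval (yes-no) treatment of head-turn data that the abstract contrasts with the vigilance model is typically summarized with the signal-detection sensitivity index d′. The sketch below is a generic illustration with made-up hit and false-alarm counts, not data from the study:

```python
# Hedged sketch: d' under the single-interval (yes-no) model.
# The trial counts below are hypothetical, invented for illustration.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# Hypothetical head-turn data: head turns on change vs. no-change trials.
print(round(d_prime(hits=18, misses=6, false_alarms=7, correct_rejections=17), 2))
```

Under the vigilance treatment, by contrast, responses would additionally be scored as a function of time since trial onset rather than collapsed into a single hit rate per condition.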

  15. Maternal and paternal pragmatic speech directed to young children with Down syndrome and typical development

    OpenAIRE

    de Falco, Simona; Venuti, Paola; Esposito, Gianluca; Bornstein, Marc H.

    2011-01-01

    The aim of this study was to compare functional features of maternal and paternal speech directed to children with Down syndrome and developmental age-matched typically developing children. Altogether 88 parents (44 mothers and 44 fathers) and their 44 young children (22 children with Down syndrome and 22 typically developing children) participated. Parents’ speech directed to children was obtained through observation of naturalistic parent–child dyadic interactions. Verbatim transcripts of m...

  16. The Influence of Direct and Indirect Speech on Mental Representations

    NARCIS (Netherlands)

    A. Eerland (Anita); J.A.A. Engelen (Jan A.A.); R.A. Zwaan (Rolf)

    2013-01-01

    Language can be viewed as a set of cues that modulate the comprehender's thought processes. It is a very subtle instrument. For example, the literature suggests that people perceive direct speech (e.g., Joanne said: 'I went out for dinner last night') as more vivid and perceptually

  17. DIRECTIVE SPEECH ACTS REALIZATION OF INDONESIAN EFL TEACHER

    Directory of Open Access Journals (Sweden)

    Cucu Suhartini

    2015-06-01

    Full Text Available This research examines the types and functions of directive speech acts performed by an Indonesian EFL teacher in one senior high school in Kuningan, Indonesia. This study uses a qualitative method. The data of this research were taken from a video transcription containing the directives spoken by the EFL teacher and were analyzed based on Kreidler's (1998) theory. The findings show that there are three types of directives used by the teacher: commands, requests, and suggestions. The most frequent type of directive performed is commands, with 233 occurrences (94.8%). It was also found that the directives serve five functions: elicitation, instruction, advice, threat, and attention-getter. The most frequent function of directives used is elicitation, with 108 occurrences (44%). From the findings, it is concluded that the use of commands shows the teacher's dominance. Yet, this type of directive is not easily understood by the students. Therefore, it is suggested that the teacher use other types of directives more, such as requests and suggestions, to encourage students' participation.

  18. The organization and reorganization of audiovisual speech perception in the first year of life.

    Science.gov (United States)

    Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F

    2017-04-01

    The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six- and nine-, but not 11-months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.

  19. Lexical and sublexical units in speech perception.

    Science.gov (United States)

    Giroux, Ibrahima; Rey, Arnaud

    2009-03-01

    Saffran, Newport, and Aslin (1996a) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. Two sorts of segmentation strategies have been proposed to account for this early word-segmentation ability: bracketing strategies, in which infants are assumed to insert boundaries into continuous speech, and clustering strategies, in which infants are assumed to group certain speech sequences together into units (Swingley, 2005). In the present study, we test the predictions of two computational models instantiating each of these strategies (i.e., simple recurrent networks: Elman, 1990; and Parser: Perruchet & Vinter, 1998) in an experiment in which we compare the lexical and sublexical recognition performance of adults after hearing 2 or 10 min of an artificial spoken language. The results are consistent with Parser's predictions and the clustering approach, showing that performance on words is better than performance on part-words only after 10 min. This result suggests that word segmentation abilities are not merely due to stronger associations between sublexical units but to the emergence of stronger lexical representations during the development of speech perception processes. Copyright © 2009, Cognitive Science Society, Inc.
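As a toy illustration of the bracketing idea (our sketch, not the models tested in the study), a segmenter can insert a word boundary wherever the forward transitional probability between adjacent syllables dips to a local minimum; the stream, lexicon, and function name below are all illustrative:

```python
from collections import Counter

def tp_segment(syllables):
    """Insert a word boundary wherever the forward transitional probability
    P(next syllable | current syllable) is a local minimum."""
    pairs = Counter(zip(syllables, syllables[1:]))
    firsts = Counter(syllables[:-1])
    tp = [pairs[a, b] / firsts[a] for a, b in zip(syllables, syllables[1:])]
    words, start = [], 0
    for i in range(1, len(tp) - 1):
        if tp[i] < tp[i - 1] and tp[i] < tp[i + 1]:  # TP dip => boundary
            words.append("".join(syllables[start:i + 1]))
            start = i + 1
    words.append("".join(syllables[start:]))
    return words

# A stream built from two trisyllabic "words" in varied order: within-word
# TPs are 1.0, across-word TPs are lower, so the dips mark the word edges.
lexicon = [["bi", "da", "ku"], ["pa", "do", "ti"]]
order = [0, 1, 0, 0, 1, 1, 0, 1]
stream = [s for w in order for s in lexicon[w]]
print(tp_segment(stream))
# → ['bidaku', 'padoti', 'bidaku', 'bidaku', 'padoti', 'padoti', 'bidaku', 'padoti']
```

Note that the order must vary: with strictly alternating words, every transitional probability is 1.0 and no dip exists, which is why statistical-learning stimuli randomize word order.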

  20. Learning to Match Auditory and Visual Speech Cues: Social Influences on Acquisition of Phonological Categories

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2015-01-01

    Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential…

  1. Effects of Familiarity and Feeding on Newborn Speech-Voice Recognition

    Science.gov (United States)

    Valiante, A. Grace; Barr, Ronald G.; Zelazo, Philip R.; Brant, Rollin; Young, Simon N.

    2013-01-01

    Newborn infants preferentially orient to familiar over unfamiliar speech sounds. They are also better at remembering unfamiliar speech sounds for short periods of time if learning and retention occur after a feed than before. It is unknown whether short-term memory for speech is enhanced when the sound is familiar (versus unfamiliar) and, if so,…

  2. Learning-induced neural plasticity of speech processing before birth.

    Science.gov (United States)

    Partanen, Eino; Kujala, Teija; Näätänen, Risto; Liitola, Auli; Sambeth, Anke; Huotilainen, Minna

    2013-09-10

    Learning, the foundation of adaptive and intelligent behavior, is based on plastic changes in neural assemblies, reflected by the modulation of electric brain responses. In infancy, auditory learning implicates the formation and strengthening of neural long-term memory traces, improving discrimination skills, in particular those forming the prerequisites for speech perception and understanding. Although previous behavioral observations show that newborns react differentially to unfamiliar sounds vs. familiar sound material that they were exposed to as fetuses, the neural basis of fetal learning has not thus far been investigated. Here we demonstrate direct neural correlates of human fetal learning of speech-like auditory stimuli. We presented variants of words to fetuses; unlike infants with no exposure to these stimuli, the exposed fetuses showed enhanced brain activity (mismatch responses) in response to pitch changes for the trained variants after birth. Furthermore, a significant correlation existed between the amount of prenatal exposure and brain activity, with greater activity being associated with a higher amount of prenatal speech exposure. Moreover, the learning effect was generalized to other types of similar speech sounds not included in the training material. Consequently, our results indicate neural commitment specifically tuned to the speech features heard before birth and their memory representations.

  3. Done Wrong or Said Wrong? Young Children Understand the Normative Directions of Fit of Different Speech Acts

    Science.gov (United States)

    Rakoczy, Hannes; Tomasello, Michael

    2009-01-01

    Young children use and comprehend different kinds of speech acts from the beginning of their communicative development. But it is not clear how they understand the conventional and normative structure of such speech acts. In particular, imperative speech acts have a world-to-word direction of fit, such that their fulfillment means that the world…

  4. Breastfeeding and Red Meat Intake Are Associated with Iron Status in Healthy Korean Weaning-age Infants.

    Science.gov (United States)

    Hong, Jeana; Chang, Ju Young; Shin, Sue; Oh, Sohee

    2017-06-01

    The present study investigated risk factors for iron deficiency (ID) and iron deficiency anemia (IDA) during late infancy, including feeding type and complementary feeding (CF) practice. Healthy term Korean infants (8-15 months) were weighed, and questionnaires regarding delivery, feeding, and weaning were completed by their caregivers. We also examined levels of hemoglobin, serum iron/total iron-binding capacity, serum ferritin, and mean corpuscular volume (MCV). Among 619 infants, ID and IDA were present in 174 infants (28.1%) and 87 infants (14.0%), respectively. The 288 infants with exclusively/mostly breastfeeding until late infancy (BFL) were most likely to exhibit ID (53.1%) and IDA (28.1%). The risk of ID was independently associated with BFL (adjusted odds ratio [aOR], 47.5; 95% confidence interval [CI], 18.3-122.9), male sex (aOR, 1.9; 95% CI, 1.2-2.9), fold weight gain (aOR, 2.6; 95% CI, 1.5-4.6), and perceived inadequacy of red meat intake (aOR, 1.7; 95% CI, 1.0-2.7). In addition to the risk factors for ID, Cesarean section delivery (aOR, 1.9; 95% CI, 1.1-3.2) and low parental CF-related knowledge (aOR, 2.8; 95% CI, 1.5-5.2) were risk factors for IDA. In conclusion, prolonged breastfeeding and perceived inadequacy of red meat intake may be among the important feeding-related risk factors of ID and IDA. Therefore, more meticulous education and monitoring of iron-rich food intake, such as red meat, with iron supplementation or iron status testing during late infancy if necessary, should be considered for breastfed Korean infants, especially for those with additional risk factors for ID or IDA. © 2017 The Korean Academy of Medical Sciences.

  5. The development of speech production in children with cleft palate

    DEFF Research Database (Denmark)

    Willadsen, Elisabeth; Chapman, Kathy

    2012-01-01

    The purpose of this chapter is to provide an overview of speech development of children with cleft palate +/- cleft lip. The chapter will begin with a discussion of the impact of clefting on speech. Next, we will provide a brief description of those factors impacting speech development … for this population of children. Finally, research examining various aspects of speech development of infants and young children with cleft palate (birth to age five) will be reviewed. This final section will be organized by typical stages of speech sound development (e.g., prespeech, the early word stage …)

  6. Infants distinguish antisocial actions directed towards fair and unfair agents.

    Directory of Open Access Journals (Sweden)

    Marek Meristo

    Three experiments provide evidence of an incipient sense of fairness in preverbal infants. Ten-month-old infants were shown cartoon videos with two agents, the 'donors', who distributed resources to two identical recipients. One donor always distributed the goods equally, while the other performed unequal distributions by giving everything to one recipient. In the test phase, a third agent hit or took resources away from either the fair or the unfair donor. We found that infants looked longer when the antisocial actions were directed towards the unfair rather than the fair donor. These findings support the view that infants are able to evaluate agents based on their distributive actions and suggest that the foundations of human socio-moral competence are acquired independently of parental feedback and linguistic experience.

  7. Direct and indirect measures of speech articulator motions using low power EM sensors

    International Nuclear Information System (INIS)

    Barnes, T; Burnett, G; Gable, T; Holzrichter, J F; Ng, L

    1999-01-01

    Low power Electromagnetic (EM) Wave sensors can measure general properties of human speech articulator motions as speech is produced. See Holzrichter, Burnett, Ng, and Lea, J.Acoust.Soc.Am. 103 (1) 622 (1998). Experiments have demonstrated extremely accurate pitch measurements (<1 Hz per pitch cycle) and accurate detection of the onset of voiced speech. Recent measurements of pressure-induced tracheal motions enable very good spectral and amplitude estimates of a voiced excitation function. The use of the measured excitation functions and pitch-synchronous processing enables the determination, for each pitch cycle, of an accurate transfer function and, indirectly, of the corresponding articulator motions. In addition, direct measurements have been made of EM wave reflections from articulator interfaces, including jaw, tongue, and palate, simultaneously with acoustic and glottal open/close signals. While several types of EM sensors are suitable for speech articulator measurements, the homodyne sensor has been found to provide good spatial and temporal resolution for several applications.
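A minimal sketch of how a measured excitation function enables per-cycle transfer-function estimation by spectral division (our illustration under simplifying assumptions, not the authors' processing chain; the function name and toy filter are hypothetical):

```python
import numpy as np

def transfer_function(excitation, speech, n_fft=512, eps=1e-8):
    """Estimate a transfer function by regularized spectral division:
    H(f) = S(f) / E(f), with the excitation E measured directly."""
    E = np.fft.rfft(excitation, n_fft)
    S = np.fft.rfft(speech, n_fft)
    return S / (E + eps)  # eps guards against near-zero excitation bins

# Toy check: filter a noise "excitation" with a known 2-tap FIR standing in
# for the vocal tract; the estimate recovers that filter's spectrum.
rng = np.random.default_rng(0)
e = rng.standard_normal(256)
h = np.array([1.0, -0.9])
s = np.convolve(e, h)
H = transfer_function(e, s)
```

In the paper's setting the division would be applied pitch-synchronously, one glottal cycle at a time, which is what makes the per-cycle transfer functions (and hence articulator motion estimates) well conditioned.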

  8. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.

  9. Musical intervention enhances infants’ neural processing of temporal structure in music and speech

    OpenAIRE

    Zhao, T. Christina; Kuhl, Patricia K.

    2016-01-01

    Musicians show enhanced musical pitch and meter processing, effects that generalize to speech. Yet potential differences between musicians and nonmusicians limit conclusions. We examined the effects of a randomized laboratory-controlled music intervention on music and speech processing in 9-mo-old infants. The Intervention exposed infants to music in triple meter (the waltz) in a social environment. Controls engaged in similar social play without music. After 12 sessions, infants’ temporal in...

  10. CDS is not what you think - Hypoarticulation in Danish Child Directed Speech

    DEFF Research Database (Denmark)

    Dideriksen, Christina Rejkjær; Fusaroli, Riccardo

    … et al. 2008). A previous study relying on lab-elicited stimuli indicated that Danish CDS might be peculiar, with a surprising lack of increased articulation (Bohn 2013). In the current study, we focused on longer naturalistic recordings in an environment known and safe for both child and mother … common CDS acoustic traits: increased pitch and pitch variability and lower speech rate. However, we also find a significantly reduced vowel space when compared to adult-directed speech, which is especially surprising given the wide range of Danish vocalic sounds. We are currently extending the analysis … and cultural affordances and the many complex routes to learn a language.

  11. Sex-specific automatic responses to infant cries: TMS reveals greater excitability in females than males in motor evoked potentials

    Directory of Open Access Journals (Sweden)

    Irene eMessina

    2016-01-01

    Neuroimaging reveals that infant cries activate parts of the premotor cortical system. To validate this effect in a more direct way, we used event-related transcranial magnetic stimulation (TMS). Here, we investigated the presence and the time course of modulation of motor cortex excitability in young adults who listened to infant cries. Specifically, we recorded motor evoked potentials (MEPs) from the biceps brachii (BB) and interosseus dorsalis primus (ID1) muscles as produced by TMS delivered from 0 to 250 ms from sound onset in six steps of 50 ms in 10 females and 10 males. We observed an excitatory modulation of MEPs at 100 ms from the onset of the infant cry, specific to females and to the ID1 muscle. We regard this modulation as a response to natural cry sounds because it was delayed and attenuated for stimuli increasingly different from natural cries, and was absent in a separate group of females who listened to non-cry stimuli physically matched to natural infant cries. Furthermore, the 100-ms latency of this modulation is not compatible with a voluntary reaction to the stimulus but suggests an automatic, bottom-up audiomotor association. The brains of adult females appear to be tuned to respond to infant cries with automatic motor excitation. This effect may reflect the greater and longstanding burden on females in caregiving for infants.

  12. Understanding the Abstract Role of Speech in Communication at 12 Months

    Science.gov (United States)

    Martin, Alia; Onishi, Kristine H.; Vouloumanos, Athena

    2012-01-01

    Adult humans recognize that even unfamiliar speech can communicate information between third parties, demonstrating an ability to separate communicative function from linguistic content. We examined whether 12-month-old infants understand that speech can communicate before they understand the meanings of specific words. Specifically, we test the…

  13. Id1 and Id3 expression is associated with increasing grade of prostate cancer: Id3 preferentially regulates CDKN1B

    International Nuclear Information System (INIS)

    Sharma, Pankaj; Patel, Divya; Chaudhary, Jaideep

    2012-01-01

    As transcriptional regulators of basic helix–loop–helix (bHLH) transcription factors and non-bHLH factors, the inhibitor of differentiation (Id1, Id2, Id3, and Id4) proteins play a critical role in the coordinated regulation of cell growth, differentiation, tumorigenesis, and angiogenesis. Id1 regulates prostate cancer (PCa) cell proliferation, apoptosis, and androgen independence, but its clinical significance in PCa remains controversial. Moreover, there is a lack of evidence on the expression of Id2 and Id3 in PCa progression. In this study we investigated the expression of Id2 and Id3 and reevaluated the expression of Id1 in PCa. We show that increased Id1 and Id3 protein expression is strongly associated with increasing grade of PCa. At the molecular level, we report that silencing either Id1 or Id3 attenuates cell cycle progression. Although structurally and mechanistically similar, our results show that these proteins are noncompensatory, at least in PCa progression. Moreover, through gene silencing approaches we show that Id1 and Id3 primarily attenuate CDKN1A (p21) and CDKN1B (p27), respectively. We also demonstrate that silencing Id3 alone attenuates proliferation of PCa cells significantly more than silencing Id1. We propose that increased Id1 and Id3 expression attenuates all three cyclin-dependent kinase inhibitors (CDKN2B, -1A, and -1B), resulting in a more aggressive PCa phenotype.

  14. Exploring untrained interpreters' use of direct versus indirect speech

    DEFF Research Database (Denmark)

    Dubslaff, Friedel; Martinsen, Bodil

    2005-01-01

    This study examines the interrelations between the use of direct vs. indirect speech by primary participants and by dialogue interpreters, focusing on pronoun shifts and their interactional functions. The data consist of four simulated interpreter-mediated medical interviews based on the same … by personalizing the indefinite pronoun 'one' when relaying from doctor to patient. All other pronoun shifts occurred in connection with interactional problems caused almost exclusively by the interpreters' lack of knowledge about medical terminology, even though the terms used were in fact non-specialized ones. The study also indicates that primary parties' shifts from direct to indirect address are closely related either to the form or to the content of the interpreter's prior utterance. Finally, it emerges that repeated one-language talk, triggered by the interpreter's problems with medical terminology, can …

  15. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

    Science.gov (United States)

    Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa

    2015-02-01

    To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Hearing faces: how the infant brain matches the face it sees with the speech it hears.

    Science.gov (United States)

    Bristow, Davina; Dehaene-Lambertz, Ghislaine; Mattout, Jeremie; Soares, Catherine; Gliga, Teodora; Baillet, Sylvain; Mangin, Jean-François

    2009-05-01

    Speech is not a purely auditory signal. From around 2 months of age, infants are able to correctly match the vowel they hear with the appropriate articulating face. However, there is no behavioral evidence of integrated audiovisual perception until 4 months of age, at the earliest, when an illusory percept can be created by the fusion of the auditory stimulus and of the facial cues (McGurk effect). To understand how infants initially match the articulatory movements they see with the sounds they hear, we recorded high-density ERPs in response to auditory vowels that followed a congruent or incongruent silently articulating face in 10-week-old infants. In a first experiment, we determined that auditory-visual integration occurs during the early stages of perception as in adults. The mismatch response was similar in timing and in topography whether the preceding vowels were presented visually or aurally. In the second experiment, we studied audiovisual integration in the linguistic (vowel perception) and nonlinguistic (gender perception) domain. We observed a mismatch response for both types of change at similar latencies. Their topographies were significantly different, demonstrating that cross-modal integration of these features is computed in parallel by two different networks. Indeed, brain source modeling revealed that phoneme and gender computations were lateralized toward the left and toward the right hemisphere, respectively, suggesting that each hemisphere possesses an early processing bias. We also observed repetition suppression in temporal regions and repetition enhancement in frontal regions. These results underscore the complexity and structure of the human cortical organization that sustains communication from the first weeks of life on.

  17. Speech and Language Development after Infant Tracheostomy.

    Science.gov (United States)

    Hill, Betsy P.; Singer, Lynn T.

    1990-01-01

    When assessed for speech/language development, 31 children (age 1-12) fitted with endotracheal tubes for more than 3 months beginning by age 13 months showed overall language functioning within normal limits and commensurate with cognitive ability. However, a pattern of expressive language disability was noted in the oldest group. (Author/JDD)

  18. Direct and indirect speech in aphasia : studies of spoken discourse production and comprehension

    NARCIS (Netherlands)

    Groenewold, Rimke

    2015-01-01

    Speakers with aphasia (a language impairment due to acquired brain damage) have difficulty processing grammatically complex sentences. In this dissertation we study the processing of direct speech constructions (e.g., John said: “I have to leave”) by people with and without aphasia. First, we study

  19. The representation of language within language : A syntactico-pragmatic typology of direct speech

    NARCIS (Netherlands)

    de Vries, M.

    The recursive phenomenon of direct speech (quotation) comes in many different forms, and it is arguably an important and widely used ingredient of both spoken and written language. This article builds on (and provides indirect support for) the idea that quotations are to be defined pragmatically as

  20. Language input and acquisition in a Mayan village: how important is directed speech?

    Science.gov (United States)

    Shneidman, Laura A; Goldin-Meadow, Susan

    2012-09-01

    Theories of language acquisition have highlighted the importance of adult speakers as active participants in children's language learning. However, in many communities children are reported to be directly engaged by their caregivers only rarely (Lieven, 1994). This observation raises the possibility that these children learn language from observing, rather than participating in, communicative exchanges. In this paper, we quantify naturally occurring language input in one community where directed interaction with children has been reported to be rare (Yucatec Mayan). We compare this input to the input heard by children growing up in large families in the United States, and we consider how directed and overheard input relate to Mayan children's later vocabulary. In Study 1, we demonstrate that 1-year-old Mayan children do indeed hear a smaller proportion of total input in directed speech than children from the US. In Study 2, we show that for Mayan (but not US) children, there are great increases in the proportion of directed input that children receive between 13 and 35 months. In Study 3, we explore the validity of using videotaped data in a Mayan village. In Study 4, we demonstrate that word types directed to Mayan children from adults at 24 months (but not word types overheard by children or word types directed from other children) predict later vocabulary. These findings suggest that adult talk directed to children is important for early word learning, even in communities where much of children's early language input comes from overheard speech. © 2012 Blackwell Publishing Ltd.

  1. A Systematic Review to Define the Speech and Language Benefit of Early (<12 Months) Pediatric Cochlear Implantation.

    Science.gov (United States)

    Bruijnzeel, Hanneke; Ziylan, Fuat; Stegeman, Inge; Topsakal, Vedat; Grolman, Wilko

    2016-01-01

    This review aimed to evaluate the additional benefit of pediatric cochlear implantation before 12 months of age considering improved speech and language development and auditory performance. We conducted a search in PubMed, EMBASE and CINAHL databases and included studies comparing groups with different ages at implantation and assessing speech perception and speech production, receptive language and/or auditory performance. We included studies with a high directness of evidence (DoE). We retrieved 3,360 articles. Ten studies with a high DoE were included. Four articles with medium DoE were discussed in addition. Six studies compared infants implanted before 12 months with children implanted between 12 and 24 months. Follow-up ranged from 6 months to 9 years. Cochlear implantation before the age of 2 years is beneficial according to one speech perception score (phonetically balanced kindergarten combined with consonant-nucleus-consonant) but not on Glendonald auditory screening procedure scores. Implantation before 12 months resulted in better speech production (diagnostic evaluation of articulation and phonology and infant-toddler meaningful auditory integration scale), auditory performance (Categories of Auditory Performance-II score) and receptive language scores (2 out of 5; Preschool Language Scale combined with oral and written language skills and Peabody Picture Vocabulary Test). The current best evidence lacks level 1 evidence studies and consists mainly of cohort studies with a moderate to high risk of bias. Included studies showed consistent evidence that cochlear implantation should be performed early in life, but evidence is inconsistent on all speech and language outcome measures regarding the additional benefit of implantation before the age of 12 months. Long-term follow-up studies are necessary to provide insight on additional benefits of early pediatric cochlear implantation. © 2016 S. Karger AG, Basel.

  2. Perception of the Multisensory Coherence of Fluent Audiovisual Speech in Infancy: Its Emergence & the Role of Experience

    Science.gov (United States)

    Lewkowicz, David J.; Minar, Nicholas J.; Tift, Amy H.; Brandon, Melissa

    2014-01-01

    To investigate the developmental emergence of the ability to perceive the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8–10, and 12–14 month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor the 8–10 month-old infants exhibited audio-visual matching in that neither group exhibited greater looking at the matching monologue. In contrast, the 12–14 month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, they perceived the multisensory coherence of native-language monologues earlier in the test trials than of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12–14 month olds did not depend on audio-visual synchrony whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audio-visual synchrony cues are more important in the perception of the multisensory coherence of non-native than native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. PMID:25462038

  3. Inner Speech's Relationship With Overt Speech in Poststroke Aphasia.

    Science.gov (United States)

    Stark, Brielle C; Geva, Sharon; Warburton, Elizabeth A

    2017-09-18

    Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech in aphasia with selected measures of language and cognition. Thirty-eight persons with chronic aphasia (27 men, 11 women; average age 64.53 ± 13.29 years, time since stroke 8-111 months) were classified as having relatively preserved inner and overt speech (n = 21), relatively preserved inner speech with poor overt speech (n = 8), or not classified due to insufficient measurements of inner and/or overt speech (n = 9). Inner speech scores (by group) were correlated with selected measures of language and cognition from the Comprehensive Aphasia Test (Swinburn, Porter, & Howard, 2004). The group with poor overt speech showed a significant relationship between inner speech and overt naming (r = .95), whereas correlations between inner speech and the language and cognition factors were not significant for the group with relatively good overt speech. As in previous research, we show that relatively preserved inner speech is found alongside otherwise severe production deficits in PWA. PWA with poor overt speech may rely more on preserved inner speech for overt picture naming (perhaps due to shared resources with verbal working memory) and for written picture description (perhaps due to reliance on inner speech given perceived task difficulty). Assessments of inner speech may be useful as a standard component of aphasia screening, and therapy focused on improving and using inner speech may prove clinically worthwhile. https://doi.org/10.23641/asha.5303542.

  4. SecurID

    CERN Multimedia

    Now called RSA SecurID, SecurID is a two-factor authentication mechanism developed by Security Dynamics for users of a network resource. It works on the one-time password principle, based on a shared secret. Every sixty seconds, the token device generates a new six-digit code on its screen, derived from the current time (internal clock) and the seed (the SecurID private key stored on the device and also held by the SecurID server). During an authentication request, the SecurID server checks the entered code by performing exactly the same calculation as the device: the server knows both pieces of information required for this calculation, the current time and the seed of your device. Access is allowed if the code calculated by the server matches the code you entered.
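The shared-secret-plus-clock principle can be sketched with a generic HMAC-based time code (an illustration only: RSA's actual SecurID algorithm is proprietary, and the function name and seed below are made up):

```python
import hashlib
import hmac
import struct

def time_token(seed: bytes, now: float, step: int = 60, digits: int = 6) -> str:
    """Illustrative time-based code: HMAC(seed, time window), truncated to a
    fixed number of decimal digits (TOTP-style, not RSA's actual cipher)."""
    window = int(now // step)                       # same value for 60 s
    mac = hmac.new(seed, struct.pack(">Q", window), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                         # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server-side check: same seed + same clock window => same six digits.
seed = b"per-token-secret"
assert time_token(seed, 960.0) == time_token(seed, 1019.9)  # one 60-s window
```

Because both sides derive the code independently from the seed and the clock, nothing secret crosses the network during login; only clock drift between device and server needs tolerating in practice.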

  5. Influence of directionality and maximal power output on speech understanding with bone anchored hearing implants in single sided deafness

    OpenAIRE

    Krempaska, Silvia; Koval, Juraj; Schmid, Christoph; Pfiffner, Flurin; Kurz, Anja; Kompis, Martin

    2014-01-01

    Bone-anchored hearing implants (BAHI) are routinely used to alleviate the effects of the acoustic head shadow in single-sided sensorineural deafness (SSD). In this study, the influence of the directional microphone setting and the maximum power output of the BAHI sound processor on speech understanding in noise in a laboratory setting were investigated. Eight adult BAHI users with SSD participated in this pilot study. Speech understanding in noise was measured using a new Slovak speech-in-noi...

  6. On the Perception of Speech Sounds as Biologically Significant Signals

    Science.gov (United States)

    Pisoni, David B.

    2012-01-01

    This paper reviews some of the major evidence and arguments currently available to support the view that human speech perception may require the use of specialized neural mechanisms for perceptual analysis. Experiments using synthetically produced speech signals with adults are briefly summarized and extensions of these results to infants and other organisms are reviewed with an emphasis towards detailing those aspects of speech perception that may require some need for specialized species-specific processors. Finally, some comments on the role of early experience in perceptual development are provided as an attempt to identify promising areas of new research in speech perception. PMID:399200

  7. Linguistic Processing of Accented Speech Across the Lifespan

    Directory of Open Access Journals (Sweden)

    Alejandrina eCristia

    2012-11-01

    In most of the world, people have regular exposure to multiple accents. Therefore, learning to quickly process accented speech is a prerequisite to successful communication. In this paper, we examine work on the perception of accented speech across the lifespan, from early infancy to late adulthood. Unfamiliar accents initially impair linguistic processing by infants, children, younger adults, and older adults, but listeners of all ages come to adapt to accented speech. Emergent research also goes beyond these perceptual abilities, by assessing links with production and the relative contributions of linguistic knowledge and general cognitive skills. We conclude by underlining points of convergence across ages, and the gaps that remain to be addressed in future work.

  8. Hypohidrotic ectodermal dysplasia, osteopetrosis, lymphedema, and immunodeficiency in an infant with multiple opportunistic infections.

    Science.gov (United States)

    Carlberg, Valerie M; Lofgren, Sabra M; Mann, Julianne A; Austin, Jared P; Nolt, Dawn; Shereck, Evan B; Davila-Saldana, Blachy; Zonana, Jonathan; Krol, Alfons L

    2014-01-01

    Osteopetrosis, lymphedema, hypohidrotic ectodermal dysplasia, and immunodeficiency (OL-HED-ID) is a rare X-linked disorder with only three reported prior cases in the English-language literature. We describe a case of OL-HED-ID in a male infant who initially presented with congenital lymphedema, leukocytosis, and thrombocytopenia of unknown etiology at 7 days of age. He subsequently developed gram-negative sepsis and multiple opportunistic infections including high-level cytomegalovirus viremia and Pneumocystis jiroveci pneumonia. The infant was noted to have mildly xerotic skin, fine sparse hair, and periorbital wrinkling, all features suggestive of ectodermal dysplasia. Skeletal imaging showed findings consistent with osteopetrosis, and immunologic investigation revealed hypogammaglobulinemia and mixed T- and B-cell dysfunction. Genetic testing revealed a novel mutation in the nuclear factor kappa B (NF-κB) essential modulator (NEMO) gene, confirming the diagnosis of OL-HED-ID. Mutations in the NEMO gene have been reported in association with hypohidrotic ectodermal dysplasia with immunodeficiency (HED-ID), OL-HED-ID, and incontinentia pigmenti. In this case, we report a novel mutation in the NEMO gene associated with OL-HED-ID. This article highlights the dermatologic manifestations of a rare disorder, OL-HED-ID, and underscores the importance of early recognition and prompt intervention to prevent life-threatening infections. © 2013 Wiley Periodicals, Inc.

  9. 78 FR 65555 - Establishment of Class E Airspace; Salmon, ID

    Science.gov (United States)

    2013-11-01

    ...-0531; Airspace Docket No. 13-ANM-20] Establishment of Class E Airspace; Salmon, ID AGENCY: Federal... at the Salmon VHF Omni-Directional Radio Range/Distance Measuring Equipment (VOR/DME) navigation aid, Salmon, ID, to facilitate vectoring of Instrument Flight Rules (IFR) aircraft under control of Salt Lake...

  10. Speech neglect: A strange educational blind spot

    Science.gov (United States)

    Harris, Katherine Safford

    2005-09-01

    Speaking is universally acknowledged as an important human talent, yet as a topic of educated common knowledge, it is peculiarly neglected. Partly, this is a consequence of the relatively recent growth of research on speech perception, production, and development, but also a function of the way that information is sliced up by undergraduate colleges. Although the basic acoustic mechanism of vowel production was known to Helmholtz, the ability to view speech production as a physiological event is evolving even now with such techniques as fMRI. Intensive research on speech perception emerged only in the early 1930s as Fletcher and the engineers at Bell Telephone Laboratories developed the transmission of speech over telephone lines. The study of speech development was revolutionized by the papers of Eimas and his colleagues on speech perception in infants in the 1970s. Dissemination of knowledge in these fields is the responsibility of no single academic discipline. It forms a center for two departments, Linguistics, and Speech and Hearing, but in the former, there is a heavy emphasis on other aspects of language than speech and, in the latter, a focus on clinical practice. For psychologists, it is a rather minor component of a very diverse assembly of topics. I will focus on these three fields in proposing possible remedies.

  11. Tonal synchrony in mother-infant interaction based on harmonic and pentatonic series.

    Science.gov (United States)

    Van Puyvelde, Martine; Vanfleteren, Pol; Loots, Gerrit; Deschuyffeleer, Sara; Vinck, Bart; Jacquet, Wolfgang; Verhelst, Werner

    2010-12-01

    This study reports the occurrence of 'tonal synchrony' as a new dimension of early mother-infant interaction synchrony. The findings are based on a tonal and temporal analysis of vocal interactions between 15 mothers and their 3-month-old infants during 5 min of free-play in a laboratory setting. In total, 558 vocal exchanges were identified and analysed, of which 84% reflected harmonic or pentatonic series. Another 10% of the exchanges contained absolute and/or relative pitch and/or interval imitations. The total durations of dyads being in tonal synchrony were normally distributed (M=3.71, SD=2.44). Vocalisations based on harmonic series appeared organised around the major triad, containing significantly more simple frequency ratios (octave, fifth and third) than complex ones (non-major triad tones). Tonal synchrony and its characteristics are discussed in relation to infant-directed speech, communicative musicality, pre-reflective communication and its impact on the quality of early mother-infant interaction and child's development. Copyright © 2010 Elsevier Inc. All rights reserved.
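The interval counts reported above (more octaves 2:1, fifths 3:2, and major thirds 5:4 than complex, non-major-triad ratios) can be illustrated with a small classifier. The helper name and tolerance are assumptions for illustration, not the authors' actual analysis procedure.

```python
from fractions import Fraction

# "Simple" major-triad frequency ratios, as counted in the study.
SIMPLE_RATIOS = {
    Fraction(2, 1): "octave",
    Fraction(3, 2): "fifth",
    Fraction(5, 4): "major third",
}

def classify_interval(f1_hz, f2_hz, tolerance=0.03):
    """Classify the interval between two fundamental frequencies as a
    simple major-triad ratio or 'complex' (hypothetical tolerance)."""
    ratio = max(f1_hz, f2_hz) / min(f1_hz, f2_hz)
    for target, name in SIMPLE_RATIOS.items():
        if abs(ratio - float(target)) / float(target) <= tolerance:
            return name
    return "complex"

# A mother's 220 Hz tone answered by an infant's 330 Hz tone is a fifth.
print(classify_interval(220.0, 330.0))  # fifth
print(classify_interval(220.0, 260.0))  # complex
```

Exact small-integer ratios like these are what make two voices sound harmonically "in tune" with each other, which is the core of the tonal-synchrony measure.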

  12. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    Science.gov (United States)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.

  13. Current Policies and New Directions for Speech-Language Pathology Assistants.

    Science.gov (United States)

    Paul-Brown, Diane; Goldberg, Lynette R

    2001-01-01

    This article provides an overview of current American Speech-Language-Hearing Association (ASHA) policies for the appropriate use and supervision of speech-language pathology assistants with an emphasis on the need to preserve the role of fully qualified speech-language pathologists in the service delivery system. Seven challenging issues surrounding the appropriate use of speech-language pathology assistants are considered. These include registering assistants and approving training programs; membership in ASHA; discrepancies between state requirements and ASHA policies; preparation for serving diverse multicultural, bilingual, and international populations; supervision considerations; funding and reimbursement for assistants; and perspectives on career-ladder/bachelor-level personnel. The formation of a National Leadership Council is proposed to develop a coordinated strategic plan for addressing these controversial and potentially divisive issues related to speech-language pathology assistants. This council would implement strategies for future development in the areas of professional education pertaining to assistant-level supervision, instruction of assistants, communication networks, policy development, research, and the dissemination/promotion of information regarding assistants.

  14. Look Who’s Talking NOW! Parentese Speech, Social Context, and Language Development Across Time

    Directory of Open Access Journals (Sweden)

    Nairán Ramírez-Esparza

    2017-06-01

    In previous studies, we found that the social interactions infants experience in their everyday lives at 11 and 14 months of age affect language ability at 24 months of age. These studies investigated relationships between the speech style (i.e., parentese speech vs. standard speech) and social context (i.e., one-on-one (1:1) vs. group) of language input in infancy and later speech development (i.e., at 24 months of age), controlling for socioeconomic status (SES). Results showed that the amount of exposure to parentese speech-1:1 in infancy was related to productive vocabulary at 24 months. The general goal of the present study was to investigate changes in (1) the pattern of social interactions between caregivers and their children from infancy to childhood and (2) relationships among speech style, social context, and language learning across time. Our study sample consisted of 30 participants from the previously published infant studies, evaluated at 33 months of age. Social interactions were assessed at home using digital first-person perspective recordings of the auditory environment. We found that caregivers use less parentese speech-1:1, and more standard speech-1:1, as their children get older. Furthermore, we found that the effects of parentese speech-1:1 in infancy on later language development at 24 months persist at 33 months of age. Finally, we found that exposure to standard speech-1:1 in childhood was the only social interaction that related to concurrent word production/use. Mediation analyses showed that standard speech-1:1 in childhood fully mediated the effects of parentese speech-1:1 in infancy on language development in childhood, controlling for SES. This study demonstrates that engaging in one-on-one interactions in infancy and later in life has important implications for language development.

  15. Batf3 and Id2 have a synergistic effect on Irf8-directed classical CD8α+ dendritic cell development

    KAUST Repository

    Jaiswal, Hemant

    2013-11-13

    Dendritic cells (DCs) are heterogeneous cell populations represented by different subtypes, each varying in terms of gene expression patterns and specific functions. Recent studies identified transcription factors essential for the development of different DC subtypes, yet molecular mechanisms for the developmental program and functions remain poorly understood. In this study, we developed and characterized a mouse DC progenitor-like cell line, designated DC9, from Irf8-/- bone marrow cells as a model for DC development and function. Expression of Irf8 in DC9 cells led to plasmacytoid DCs and CD8α+ DC-like cells, with a concomitant increase in plasmacytoid DC- and CD8α+ DC-specific gene transcripts and induction of type I IFNs and IL12p40 following TLR ligand stimulation. Irf8 expression in DC9 cells led to an increase in Id2 and Batf3 transcript levels, transcription factors shown to be important for the development of CD8α+ DCs. We show that, without Irf8, expression of Id2 and Batf3 was not sufficient for directing classical CD8α+ DC development. When coexpressed with Irf8, Batf3 and Id2 had a synergistic effect on classical CD8α+ DC development. We demonstrate that Irf8 is upstream of Batf3 and Id2 in the classical CD8α+ DC developmental program and define the hierarchical relationship of transcription factors important for classical CD8α+ DC development.

  16. A Study of Directive Speech Acts Used by Iranian Nursery School Children: The Impact of Context on Children’s Linguistic Choices

    Directory of Open Access Journals (Sweden)

    Shohreh Shahpouri Arani

    2012-09-01

    This paper aims at finding out the forms and functions of directive speech acts uttered by Persian-speaking children. The writer’s goal is to discover the distinct strategies applied by children of nursery school age regarding three parameters: the choice of form, the negotiation of communicative goals within conversation, and the protection of face. The data collected for this purpose are based on actual school conversational situations that were audio recorded in four nursery schools during classroom work and playtime activities. Children, who are the subjects of this study, are of both sexes and various social backgrounds. The results revealed that (1) the investigation of children’s directive speech acts confirms that they are aware of the social parameters of talk (Andersen-Slosberg, 1990; Ervin-Tripp et al., 1990); (2) they use linguistic forms that differ from those used by adults as politeness markers, such as polite 2nd-person-plural subject agreement on the verb and the words “please” and “thank you”; (3) they use declaratives with illocutionary force in order to mark distance (Georgalidou, 2001). Keywords: Iranian children’s speech; Directive speech act; Politeness; Conversational analysis; Persian

  17. Expressive Vocabulary Acquisition in Children with Intellectual Disability: Speech or Manual Signs?

    Science.gov (United States)

    Vandereet, Joke; Maes, Bea; Lembrechts, Dirk; Zink, Inge

    2011-01-01

    Background: The aim of this study was to examine the degree to which children with intellectual disability (ID) depend on manual signs during their expressive vocabulary acquisition, in relation to child and social-environmental characteristics. Method: Expressive vocabulary acquisition in speech and manual signs was monitored over a 2-year period…

  18. Validation of a Dutch language screening instrument for 5-year-old preterm infants.

    NARCIS (Netherlands)

    Knuijt, S.; Sondaar, M.; Kleine, M.J. de; Kollee, L.A.A.

    2004-01-01

    AIM: The validation of the Dutch Taal Screenings Test (TST), a language-screening test, which is included in a follow-up instrument developed to enable paediatricians to assess 5-y-old preterm infants for their motor, cognitive and speech and language development. METHODS: The speech and language

  19. 78 FR 45478 - Proposed Establishment of Class E Airspace; Salmon, ID

    Science.gov (United States)

    2013-07-29

    ...-0531; Airspace Docket No. 13-ANM-20] Proposed Establishment of Class E Airspace; Salmon, ID AGENCY... action proposes to establish Class E airspace at the Salmon VHF Omni-Directional Radio Range/Distance Measuring Equipment (VOR/DME) navigation aid, Salmon, ID, to facilitate vectoring of Instrument Flight Rules...

  20. Tackling the complexity in speech

    DEFF Research Database (Denmark)

    section includes four carefully selected chapters. They deal with facets of speech production, speech acoustics, and/or speech perception or recognition, place them in an integrated phonetic-phonological perspective, and relate them in more or less explicit ways to aspects of speech technology. Therefore......, we hope that this volume can help speech scientists with traditional training in phonetics and phonology to keep up with the latest developments in speech technology. In the opposite direction, speech researchers starting from a technological perspective will hopefully get inspired by reading about...... the questions, phenomena, and communicative functions that are currently addressed in phonetics and phonology. Either way, the future of speech research lies in international, interdisciplinary collaborations, and our volume is meant to reflect and facilitate such collaborations...

  1. Fundamental Frequency and Direction-of-Arrival Estimation for Multichannel Speech Enhancement

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam

    Audio systems receive the speech signals of interest usually in the presence of noise. The noise has profound impacts on the quality and intelligibility of the speech signals, and it is therefore clear that the noisy signals must be cleaned up before being played back, stored, or analyzed. We can...... estimate the speech signal of interest from the noisy signals using a priori knowledge about it. A human speech signal is broadband and consists of both voiced and unvoiced parts. The voiced part is quasi-periodic with a time-varying fundamental frequency (or pitch as it is commonly referred to). We...... their time differences which eventually may further reduce the effects of noise. This thesis introduces a number of principles and methods to estimate periodic signals in noisy environments with application to multichannel speech enhancement. We propose model-based signal enhancement concerning the model...

  2. The interaction between acoustic salience and language experience in developmental speech perception: evidence from nasal place discrimination.

    Science.gov (United States)

    Narayan, Chandan R; Werker, Janet F; Beddor, Patrice Speeter

    2010-05-01

    Previous research suggests that infant speech perception reorganizes in the first year: young infants discriminate both native and non-native phonetic contrasts, but by 10-12 months difficult non-native contrasts are less discriminable whereas performance improves on native contrasts. In the current study, four experiments tested the hypothesis that, in addition to the influence of native language experience, acoustic salience also affects the perceptual reorganization that takes place in infancy. Using a visual habituation paradigm, two nasal place distinctions that differ in relative acoustic salience, acoustically robust labial-alveolar [ma]-[na] and acoustically less salient alveolar-velar [na]-[ŋa], were presented to infants in a cross-language design. English-learning infants at 6-8 and 10-12 months showed discrimination of the native and acoustically robust [ma]-[na] (Experiment 1), but not the non-native (in initial position) and acoustically less salient [na]-[ŋa] (Experiment 2). Very young (4-5-month-old) English-learning infants tested on the same native and non-native contrasts also showed discrimination of only the [ma]-[na] distinction (Experiment 3). Filipino-learning infants, whose ambient language includes the syllable-initial alveolar (/n/)-velar (/ŋ/) contrast, showed discrimination of native [na]-[ŋa] at 10-12 months, but not at 6-8 months (Experiment 4). These results support the hypothesis that acoustic salience affects speech perception in infancy, with native language experience facilitating discrimination of an acoustically similar phonetic distinction [na]-[ŋa]. We discuss the implications of this developmental profile for a comprehensive theory of speech perception in infancy.

  3. Designing speech for a recipient

    DEFF Research Database (Denmark)

    Fischer, Kerstin

    This study asks how speakers adjust their speech to their addressees, focusing on the potential roles of cognitive representations such as partner models, automatic processes such as interactive alignment, and social processes such as interactional negotiation. The nature of addressee orientation......, psycholinguistics and conversation analysis, and offers both overviews of child-directed, foreigner-directed and robot-directed speech and in-depth analyses of the processes involved in adjusting to a communication partner....

  4. Why the Left Hemisphere Is Dominant for Speech Production: Connecting the Dots

    Directory of Open Access Journals (Sweden)

    Harvey Martin Sussman

    2015-12-01

    Evidence from seemingly disparate areas of speech/language research is reviewed to form a unified theoretical account for why the left hemisphere is specialized for speech production. Research findings from studies investigating hemispheric lateralization of infant babbling, the primacy of the syllable in phonological structure, rhyming performance in split-brain patients, rhyming ability and phonetic categorization in children diagnosed with developmental apraxia of speech, rules governing exchange errors in spoonerisms, organizational principles of neocortical control of learned motor behaviors, and multi-electrode recordings of human neuronal responses to speech sounds are described and common threads highlighted. It is suggested that the emergence, in developmental neurogenesis, of a hard-wired, syllabically-organized, neural substrate representing the phonemic sound elements of one’s language, particularly the vocalic nucleus, is the crucial factor underlying the left hemisphere’s dominance for speech production.

  5. Infants' social withdrawal symptoms assessed with a direct infant observation method in primary health care.

    Science.gov (United States)

    Puura, Kaija; Mäntymaa, Mirjami; Luoma, Ilona; Kaukonen, Pälvi; Guedeney, Antoine; Salmelin, Raili; Tamminen, Tuula

    2010-12-01

    Distressed infants may withdraw from social interaction, but recognising infants' social withdrawal is difficult. The aims of the study were to see whether an infant observation method can be reliably used by front-line workers, and to examine the prevalence of infants' social withdrawal symptoms. A random sample of 363 families with four-, eight- or 18-month-old infants participated in the study. The infants were examined by general practitioners (GPs) in well-baby clinics with the Alarm Distress BaBy Scale (ADBB), an observation method developed for clinical settings. A score of five or more on the ADBB Scale in two subsequent assessments at a two-week interval was regarded as a sign of clinically significant infant social withdrawal. Kappas were calculated for the GPs' correct rating of withdrawn/not withdrawn against a set of videotapes rated by the developer of the method, Professor Guedeney, and his research group. The kappas for their ratings ranged from 0.5 to 1. The frequency of infants scoring above the cut-off in two subsequent assessments was 3%. The ADBB Scale is a promising method for detecting infant social withdrawal in front-line services. Three percent of infants showed sustained social withdrawal as a sign of distress in this normal population sample. Copyright © 2010 Elsevier Inc. All rights reserved.

  6. Speech understanding and directional hearing for hearing-impaired subjects with in-the-ear and behind-the-ear hearing aids

    NARCIS (Netherlands)

    Leeuw, A. R.; Dreschler, W. A.

    1987-01-01

    With respect to acoustical properties, in-the-ear (ITE) aids should give better understanding and directional hearing than behind-the-ear (BTE) aids. Also hearing-impaired subjects often prefer ITEs. A study was performed to assess objectively the improvement in speech understanding and directional

  7. Communication Experiences of Zalora.co.id Customers (A Phenomenological Study of Zalora.co.id Customers)

    OpenAIRE

    Aji, Widya Andhika; Pradekso, Tandiyo; Ulfa, Nurist Surayya

    2013-01-01

    COMMUNICATION EXPERIENCES OF ZALORA.CO.ID CUSTOMERS (A Phenomenological Study of Zalora.co.id Customers). By: Widya Andhika Aji, Faculty of Social and Political Sciences, Diponegoro University, Semarang. ABSTRACT: This study aims to analyze the communication experiences and the consumers' understanding and reception of marketing communication messages when shopping at Zalora.co.id. Using a sample of five informants and the interview method, it can be concluded that Zalora.co.id customers have communication experiences ...

  8. Speech Recognition for the iCub Platform

    Directory of Open Access Journals (Sweden)

    Bertrand Higy

    2018-02-01

    This paper describes open source software (available at https://github.com/robotology/natural-speech) to build automatic speech recognition (ASR) systems and run them within the YARP platform. The toolkit is designed (i) to allow non-ASR experts to easily create their own ASR system and run it on iCub and (ii) to build deep learning-based models specifically addressing the main challenges an ASR system faces in the context of verbal human–iCub interactions. The toolkit mostly consists of Python, C++ code and shell scripts integrated in YARP. As an additional contribution, a second codebase (written in Matlab) is provided for more expert ASR users who want to experiment with bio-inspired and developmental learning-inspired ASR systems. Specifically, we provide code for two distinct kinds of speech recognition: “articulatory” and “unsupervised” speech recognition. The first is largely inspired by influential neurobiological theories of speech perception which assume speech perception to be mediated by brain motor cortex activities. Our articulatory systems have been shown to outperform strong deep learning-based baselines. The second type of recognition systems, the “unsupervised” systems, do not use any supervised information (contrary to most ASR systems, including our articulatory systems). To some extent, they mimic an infant who has to discover the basic speech units of a language by herself. In addition, we provide resources consisting of pre-trained deep learning models for ASR, and a 2.5-h speech dataset of spoken commands, the VoCub dataset, which can be used to adapt an ASR system to the typical acoustic environments in which iCub operates.

  9. Id-1 and Id-2 genes and products as markers of epithelial cancer

    Science.gov (United States)

    Desprez, Pierre-Yves (El Cerrito, CA); Campisi, Judith (Berkeley, CA)

    2008-09-30

    A method for detection and prognosis of breast cancer and other types of cancer. The method comprises detecting the expression, if any, of both the Id-1 and Id-2 genes, or the ratio of their gene products, in samples of breast tissue obtained from a patient. When expressed, the Id-1 gene is a prognostic indicator that breast cancer cells are invasive and metastatic, whereas the Id-2 gene is a prognostic indicator that breast cancer cells are localized and noninvasive in the breast tissue.

  10. Fast phonetic learning occurs already in 2-to-3-month old infants. An ERP study

    NARCIS (Netherlands)

    Wanrooij, K.; Boersma, P.; van Zuijen, T.L.

    2014-01-01

    An important mechanism for learning speech sounds in the first year of life is ‘distributional learning’, i.e., learning by simply listening to the frequency distributions of the speech sounds in the environment. In the lab, fast distributional learning has been reported for infants in the second

  11. Speech Synthesis Applied to Language Teaching.

    Science.gov (United States)

    Sherwood, Bruce

    1981-01-01

    The experimental addition of speech output to computer-based Esperanto lessons using speech synthesized from text is described. Because of Esperanto's phonetic spelling and simple rhythm, it is particularly easy to describe the mechanisms of Esperanto synthesis. Attention is directed to how the text-to-speech conversion is performed and the ways…

  12. THE ONTOGENESIS OF SPEECH DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    T. E. Braudo

    2017-01-01

    The purpose of this article is to acquaint specialists working with children having developmental disorders with age-related norms for speech development. Many well-known linguists and psychologists have studied speech ontogenesis (logogenesis). Speech is a higher mental function which integrates many functional systems. Speech development in infants during the first months after birth is ensured by innate hearing and the emerging ability to fix the gaze on the face of an adult. Innate emotional reactions also develop during this period, turning into nonverbal forms of communication. At about 6 months a baby starts to pronounce some syllables; at 7–9 months the baby repeats various sound combinations pronounced by adults. At 10–11 months a baby begins to react to words referred to him/her. The first words usually appear at the age of 1 year; this is the start of the stage of active speech development. At this time it is acceptable if a child confuses or rearranges sounds, distorts or misses them. By the age of 1.5 years a child begins to understand the abstract explanations of adults. Significant vocabulary enlargement occurs between 2 and 3 years; the grammatical structures of the language are formed during this period (a child starts to use phrases and sentences). Preschool age (3–7 years) is characterized by incorrect, but steadily improving, pronunciation of sounds and phonemic perception. The vocabulary increases; abstract speech and retelling are being formed. Children over 7 years of age continue to improve grammar, writing and reading skills. The described stages may not have strict age boundaries, since they depend not only on the environment, but also on the child’s mental constitution, heredity and character.

  13. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely on the basis of a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
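The waveform-coding branch mentioned above can be illustrated with μ-law companding, the logarithmic scheme that G.711 telephony uses in an 8-bit piecewise form. This continuous sketch is a simplification of that standard, not its exact bit layout: small amplitudes are expanded before quantization so they keep finer resolution than loud ones.

```python
import math

MU = 255  # mu-law companding constant (North American G.711)

def mu_law_encode(x):
    """Compress a sample in [-1, 1] logarithmically before quantization."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_decode(y):
    """Exact inverse of mu_law_encode."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def codec(x, bits=8):
    """Round-trip one sample: compand, quantize to 'bits' levels, expand."""
    levels = 2 ** (bits - 1)
    q = round(mu_law_encode(x) * levels) / levels
    return mu_law_decode(q)

# Quiet samples survive 8-bit quantization with small relative error.
for x in (0.01, 0.1, 0.9):
    print(f"{x:>5} -> {codec(x):.4f}")
```

With uniform 8-bit quantization, a 0.01-amplitude sample would fall near the quantizer step size and be badly distorted; companding spends the available levels where speech energy actually lives.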

  14. 78 FR 49693 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    ...-Speech Services for Individuals with Hearing and Speech Disabilities, Report and Order (Order), document...] Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities...

  15. Infants' Background Television Exposure during Play: Negative Relations to the Quantity and Quality of Mothers' Speech and Infants' Vocabulary Acquisition

    Science.gov (United States)

    Masur, Elise Frank; Flynn, Valerie; Olson, Janet

    2016-01-01

    Research on immediate effects of background television during mother-infant toy play shows that an operating television in the room disrupts maternal communicative behaviors crucial for infants' vocabulary acquisition. This study is the first to examine associations between frequent background TV/video exposure during mother-infant toy play at…

  16. Improving Understanding of Emotional Speech Acoustic Content

    Science.gov (United States)

    Tinnemore, Anna

    Children with cochlear implants show deficits in identifying the emotional intent of utterances without facial or body language cues. A known limitation of cochlear implants is their inability to accurately convey the fundamental frequency contour of speech, which carries the majority of the information needed to identify emotional intent. Without reliable access to the fundamental frequency, other acoustic cues to vocal emotion, if identifiable, could be used to guide therapies for training children with cochlear implants to better identify vocal emotion. The current study analyzed recordings of adults speaking neutral sentences with a set array of emotions in a child-directed and adult-directed manner. The goal was to identify acoustic cues that contribute to emotion identification that may be enhanced in child-directed speech but are also present in adult-directed speech. Results of this study showed that there were significant differences in the variation of the fundamental frequency, the variation of intensity, and the rate of speech among emotions and between intended audiences.

  17. Perceptual statistical learning over one week in child speech production.

    Science.gov (United States)

    Richtsmeier, Peter T; Goffman, Lisa

    2017-07-01

    What cognitive mechanisms account for the trajectory of speech sound development, in particular, gradually increasing accuracy during childhood? An intriguing potential contributor is statistical learning, a type of learning that has been studied frequently in infant perception but less often in child speech production. To assess the relevance of statistical learning to developing speech accuracy, we carried out a statistical learning experiment with four- and five-year-olds in which statistical learning was examined over one week. Children were familiarized with and tested on word-medial consonant sequences in novel words. There was only modest evidence for statistical learning, primarily in the first few productions of the first session. This initial learning effect nevertheless aligns with previous statistical learning research. Furthermore, the overall learning effect was similar to an estimate of weekly accuracy growth based on normative studies. The results implicate other important factors in speech sound development, particularly learning via production. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Infant Gaze Following during Parent-Infant Coviewing of Baby Videos

    Science.gov (United States)

    Demers, Lindsay B.; Hanson, Katherine G.; Kirkorian, Heather L.; Pempek, Tiffany A.; Anderson, Daniel R.

    2013-01-01

    A total of 122 parent–infant dyads were observed as they watched a familiar or novel infant-directed video in a laboratory setting. Infants were between 12-15 and 18-21 months old. Infants were more likely to look toward the TV immediately following their parents' look toward the TV. This apparent social influence on infant looking at television…

  19. The Longevity of Statistical Learning: When Infant Memory Decays, Isolated Words Come to the Rescue

    Science.gov (United States)

    Karaman, Ferhat; Hay, Jessica F.

    2018-01-01

    Research over the past 2 decades has demonstrated that infants are equipped with remarkable computational abilities that allow them to find words in continuous speech. Infants can encode information about the transitional probability (TP) between syllables to segment words from artificial and natural languages. As previous research has tested…
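
    The transitional-probability computation these segmentation studies rely on is simple to sketch: TP(B|A) = frequency(AB) / frequency(A) over the syllable stream. The toy two-word "language" below is hypothetical, not a stimulus set from the study:

```python
from collections import Counter

def transitional_probs(syllables):
    """Forward transitional probability TP(B|A) = count(A B) / count(A)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# A toy 'language' of two words, bi-da and ku-po, concatenated without pauses.
stream = "bi da ku po bi da bi da ku po".split()
tps = transitional_probs(stream)
print(tps[("bi", "da")])  # within-word TP: 1.0
print(tps[("da", "ku")])  # across-word TP: lower
```

    Word boundaries fall where the TP dips, which is the statistical cue infants are hypothesized to exploit.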

  20. Auditory-Visual Speech Perception in Three- and Four-Year-Olds and Its Relationship to Perceptual Attunement and Receptive Vocabulary

    Science.gov (United States)

    Erdener, Dogu; Burnham, Denis

    2018-01-01

    Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception--lip-reading and visual…

  1. Direct magnitude estimates of speech intelligibility in dysarthria: effects of a chosen standard.

    Science.gov (United States)

    Weismer, Gary; Laures, Jacqueline S

    2002-06-01

    Direct magnitude estimation (DME) has been used frequently as a perceptual scaling technique in studies of the speech intelligibility of persons with speech disorders. The technique is typically used with a standard, or reference stimulus, chosen as a good exemplar of "midrange" intelligibility. In several published studies, the standard has been chosen subjectively, usually on the basis of the expertise of the investigators. The current experiment demonstrates that a fixed set of sentence-level utterances, obtained from 4 individuals with dysarthria (2 with Parkinson disease, 2 with traumatic brain injury) as well as 3 neurologically normal speakers, is scaled differently depending on the identity of the standard. Four different standards were used in the main experiment, three of which were judged qualitatively in two independent evaluations to be good exemplars of midrange intelligibility. Acoustic analyses did not reveal obvious differences between these four standards but suggested that the standard with the worst-scaled intelligibility had much poorer voice source characteristics compared to the other three standards. Results are discussed in terms of possible standardization of midrange intelligibility exemplars for DME experiments.
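
    As a rough sketch of how DME ratings are typically made comparable across listeners, one common step is modulus equalization: each listener's estimates are divided by that listener's own geometric mean, cancelling idiosyncratic choices of number range. The listener labels and numbers below are hypothetical:

```python
import math

def geometric_mean(xs):
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def equalize_moduli(ratings_by_listener):
    """Modulus equalization for DME: divide each listener's estimates by
    that listener's geometric mean, so listeners who used different
    number ranges become directly comparable."""
    return {
        listener: [r / geometric_mean(rs) for r in rs]
        for listener, rs in ratings_by_listener.items()
    }

# Two hypothetical listeners rating the same three utterances; listener B
# used numbers ~10x larger, but the equalized profiles agree.
raw = {"A": [50.0, 100.0, 200.0], "B": [500.0, 1000.0, 2000.0]}
eq = equalize_moduli(raw)
print(eq["A"])  # matches eq["B"] up to rounding
```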

  2. Association of health profession and direct-to-consumer marketing with infant formula choice and switching.

    Science.gov (United States)

    Huang, Yi; Labiner-Wolfe, Judith; Huang, Hui; Choiniere, Conrad J; Fein, Sara B

    2013-03-01

    Infant formula is marketed by health professionals and directly to consumers. Formula marketing has been shown to reduce breastfeeding, but the relation with switching formulas has not been studied. Willingness to switch formula can enable families to spend less on formula. Data are from the Infant Feeding Practices Study II, a United States national longitudinal study. Mothers were asked about media exposure to formula information during pregnancy, receiving formula samples or coupons at hospital discharge, reasons for their formula choice at infant age 1 month, and formula switching at infant ages 2, 5, 7, and 9 months. Analysis included 1,700 mothers who fed formula at infant age 1 month; it used logistic regression and longitudinal data analysis methods to evaluate the association between marketing and formula choice and switching. Most mothers were exposed to both types of formula marketing. Mothers who received a sample of formula from the hospital at birth were more likely to use the hospital formula 1 month later. Mothers who chose formula at 1 month because their doctor recommended it were less likely to switch formula than those who chose in response to direct-to-consumer marketing. Mothers who chose a formula because it was used in the hospital were less likely to switch if they had not been exposed to Internet web-based formula information when pregnant or if they received a formula sample in the mail. Marketing formula through health professionals may decrease mothers' willingness to switch formula. © 2013, Copyright the Authors Journal compilation © 2013, Wiley Periodicals, Inc.

  3. The interpersonal level in English: reported speech

    NARCIS (Netherlands)

    Keizer, E.

    2009-01-01

    The aim of this article is to describe and classify a number of different forms of English reported speech (or thought), and subsequently to analyze and represent them within the theory of FDG. First, the most prototypical forms of reported speech are discussed (direct and indirect speech);

  4. Categorization in 3- and 4-Month-Old Infants: An Advantage of Words over Tones

    Science.gov (United States)

    Ferry, Alissa L.; Hespos, Susan J.; Waxman, Sandra R.

    2010-01-01

    Neonates prefer human speech to other nonlinguistic auditory stimuli. However, it remains an open question whether there are any conceptual consequences of words on object categorization in infants younger than 6 months. The current study examined the influence of words and tones on object categorization in forty-six 3- to 4-month-old infants.…

  5. The Role of Auditory and Visual Speech in Word Learning at 18 Months and in Adulthood

    Science.gov (United States)

    Havy, Mélanie; Foroud, Afra; Fais, Laurel; Werker, Janet F.

    2017-01-01

    Visual information influences speech perception in both infants and adults. It is still unknown whether lexical representations are multisensory. To address this question, we exposed 18-month-old infants (n = 32) and adults (n = 32) to new word-object pairings: Participants either heard the acoustic form of the words or saw the talking face in…

  6. Fast phonetic learning occurs already in 2-to-3-month old infants: an ERP study.

    Directory of Open Access Journals (Sweden)

    Karin eWanrooij

    2014-02-01

    An important mechanism for learning speech sounds in the first year of life is ‘distributional learning’, i.e., learning by simply listening to the frequency distributions of the speech sounds in the environment. In the lab, fast distributional learning has been reported for infants in the second half of the first year; the present study examined whether it can also be demonstrated at a much younger age, long before the onset of language-specific speech perception (which roughly emerges between 6 and 12 months). To investigate this, Dutch infants aged 2 to 3 months were presented with either a unimodal or a bimodal vowel distribution based on the English /æ/~/ε/ contrast, for only twelve minutes. Subsequently, mismatch responses (MMRs) were measured in an oddball paradigm, where one half of the infants in each group heard a representative [æ] as the standard and a representative [ε] as the deviant, and the other half heard the same reversed. The results (from the combined MMRs during wakefulness and active sleep) disclosed a larger MMR, implying better discrimination of [æ] and [ε], for bimodally than unimodally trained infants, thus extending an effect of distributional training found in previous behavioral research to a much younger age, when speech perception is still universal rather than language-specific, and to a new method (ERP). Moreover, the analysis revealed a robust interaction between the distribution (unimodal vs. bimodal) and the identity of the standard stimulus ([æ] vs. [ε]), which provides evidence for an interplay between a perceptual asymmetry and distributional learning. The outcomes show that distributional learning can affect vowel perception already in the first months of life.
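
    The unimodal versus bimodal familiarization described above can be sketched as token-frequency distributions over a stimulus continuum. The counts below are illustrative (loosely modeled on classic 8-step distributional-learning designs), not the actual stimuli of this study:

```python
import random

# Token counts per continuum step (1 = clear /ae/-like token, 8 = clear /E/-like).
# Bimodal familiarization clusters tokens near steps 2 and 7;
# unimodal familiarization clusters them at the continuum centre.
bimodal  = {1: 1, 2: 4, 3: 2, 4: 1, 5: 1, 6: 2, 7: 4, 8: 1}
unimodal = {1: 1, 2: 1, 3: 2, 4: 4, 5: 4, 6: 2, 7: 1, 8: 1}

def familiarization_stream(freqs, reps=10, seed=0):
    """Expand step -> count into a randomized familiarization sequence."""
    tokens = [step for step, n in freqs.items() for _ in range(n)] * reps
    random.Random(seed).shuffle(tokens)
    return tokens

stream = familiarization_stream(bimodal)
print(len(stream))  # 160 tokens
```

    Both conditions present the same total exposure; only the shape of the frequency distribution differs, which is exactly the cue distributional learning is claimed to exploit.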

  7. Simultaneous natural speech and AAC interventions for children with childhood apraxia of speech: lessons from a speech-language pathologist focus group.

    Science.gov (United States)

    Oommen, Elizabeth R; McCarthy, John W

    2015-03-01

    In childhood apraxia of speech (CAS), children exhibit varying levels of speech intelligibility depending on the nature of errors in articulation and prosody. Augmentative and alternative communication (AAC) strategies are beneficial, and commonly adopted with children with CAS. This study focused on the decision-making process and strategies adopted by speech-language pathologists (SLPs) when simultaneously implementing interventions that focused on natural speech and AAC. Eight SLPs, with significant clinical experience in CAS and AAC interventions, participated in an online focus group. Thematic analysis revealed eight themes: key decision-making factors; treatment history and rationale; benefits; challenges; therapy strategies and activities; collaboration with team members; recommendations; and other comments. Results are discussed along with clinical implications and directions for future research.

  8. Reference Ranges of Reticulocyte Haemoglobin Content in Preterm and Term Infants: A Retrospective Analysis.

    Science.gov (United States)

    Lorenz, Laila; Peter, Andreas; Arand, Jörg; Springer, Fabian; Poets, Christian F; Franz, Axel R

    2017-01-01

    Despite iron supplementation, some preterm infants develop iron deficiency (ID). The optimal iron status parameter for early detection of ID has yet to be determined. To establish reference ranges for reticulocyte haemoglobin content (Ret-He) in preterm and term infants and to identify confounding factors. Retrospective analyses of Ret-He and complete blood count in infants with a clinically indicated blood sample obtained within 24 h after birth. Mean (SD) Ret-He was 30.7 (3.0) pg in very preterm infants with low gestational age (GA); Ret-He correlated only negligibly with pH (r = -0.07). There was a slight variation in Ret-He with mode of delivery [normal vaginal delivery: 32.3 (3.2) pg, secondary caesarean section (CS): 31.4 (3.0) pg, instrumental delivery: 31.3 (2.7) pg and elective CS: 31.2 (2.8) pg]. GA at birth has a negligible impact on Ret-He, and the lower limit of the normal reference range in newborns within 24 h after birth can be set to 25 pg. Moreover, Ret-He appears to be a robust parameter that is not influenced by perinatal factors within the first 24 h after birth. © 2016 S. Karger AG, Basel.

  9. Comparison of the optimized conditions for genotyping of ACE ID ...

    African Journals Online (AJOL)

    The ACE ID polymorphism is indispensable for the genetic epidemiology of several cardiovascular and non-cardiovascular diseases due to its direct influence on ACE activity levels. In the present work, conditions were optimized for its analysis using conventional and direct blood PCR (DB PCR). Blood samples from nine normotensive ...

  10. Speech-to-Speech Relay Service

    Science.gov (United States)

    Consumer Guide Speech to Speech Relay Service Speech-to-Speech (STS) is one form of Telecommunications Relay Service (TRS). TRS is a service that allows persons with hearing and speech disabilities ...

  11. Social eye gaze modulates processing of speech and co-speech gesture.

    Science.gov (United States)

    Holler, Judith; Schubotz, Louise; Kelly, Spencer; Hagoort, Peter; Schuetze, Manuela; Özyürek, Aslı

    2014-12-01

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech+gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker's preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients' speech processing suffers, gestures can enhance the comprehension of a speaker's message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. The relationship of phonological ability, speech perception and auditory perception in adults with dyslexia.

    Directory of Open Access Journals (Sweden)

    Jeremy eLaw

    2014-07-01

    This study investigated whether auditory, speech perception and phonological skills are tightly interrelated or contribute independently to reading. We assessed each of these three skills in 36 adults with a past diagnosis of dyslexia and 54 matched normal-reading adults. Phonological skills were tested by the typical threefold tasks, i.e. rapid automatic naming, verbal short-term memory and phonological awareness. Dynamic auditory processing skills were assessed by means of a frequency modulation (FM) and an amplitude rise time (RT) task; an intensity discrimination task (ID) was included as a non-dynamic control task. Speech perception was assessed by means of sentences-in-noise and words-in-noise tasks. Group analysis revealed significant group differences in the auditory tasks (i.e. RT and ID) and in the phonological processing measures, yet no differences were found for speech perception. In addition, performance on RT discrimination correlated with reading, but this relation was mediated by phonological processing and not by speech in noise. Finally, inspection of the individual scores revealed that the dyslexic readers showed an increased proportion of deviant subjects on the slow-dynamic auditory and phonological tasks, yet individual dyslexic readers did not display a clear pattern of deficiencies across the levels of processing skills. Although our results support phonological and slow-rate dynamic auditory deficits that relate to literacy, they suggest that at the individual level, problems in reading and writing cannot be explained by the cascading auditory theory. Instead, dyslexic adults seem to vary considerably in the extent to which each of the auditory and phonological factors is expressed and interacts with environmental and higher-order cognitive influences.

  13. Infants generalize representations of statistically segmented words

    Directory of Open Access Journals (Sweden)

    Katharine eGraf Estes

    2012-10-01

    The acoustic variation in language presents learners with a substantial challenge. To learn by tracking statistical regularities in speech, infants must recognize words across tokens that differ based on characteristics such as the speaker’s voice, affect, or the sentence context. Previous statistical learning studies have not investigated how these types of surface form variation affect learning. The present experiments used tasks tailored to two distinct developmental levels to investigate the robustness of statistical learning to variation. Experiment 1 examined statistical word segmentation in 11-month-olds and found that infants can recognize statistically segmented words across a change in the speaker’s voice from segmentation to testing. The direction of infants’ preferences suggests that recognizing words across a voice change is more difficult than recognizing them in a consistent voice. Experiment 2 tested whether 17-month-olds can generalize the output of statistical learning across variation to support word learning. The infants were successful in their generalization; they associated referents with statistically defined words despite a change in voice from segmentation to label learning. Infants’ learning patterns also indicate that they formed representations of across-word syllable sequences during segmentation. Thus, low probability sequences can act as object labels in some conditions. The findings of these experiments suggest that the units that emerge during statistical learning are not perceptually constrained, but rather are robust to naturalistic acoustic variation.

  14. Sparsity in Linear Predictive Coding of Speech

    DEFF Research Database (Denmark)

    Giacobello, Daniele

    of the effectiveness of their application in audio processing. The second part of the thesis deals with introducing sparsity directly in the linear prediction analysis-by-synthesis (LPAS) speech coding paradigm. We first propose a novel near-optimal method to look for a sparse approximate excitation using a compressed...... one with direct applications to coding but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter can be modeled as an impulse train, i.e., a sparse sequence. Introducing sparsity in the LP framework will also bring to de- velop the concept...... sensing formulation. Furthermore, we define a novel re-estimation procedure to adapt the predictor coefficients to the given sparse excitation, balancing the two representations in the context of speech coding. Finally, the advantages of the compact parametric representation of a segment of speech, given...

  15. Feeding outcomes in infants after supraglottoplasty.

    Science.gov (United States)

    Eustaquio, Marcia; Lee, Erika Nevin; Digoy, G Paul

    2011-11-01

    Review the impact of bilateral supraglottoplasty on feeding and compare the risk of postoperative feeding difficulties between infants with and without additional comorbidities. Case series with chart review. Children's hospital. The medical records of all patients between birth and 12 months of age treated for laryngomalacia with bilateral supraglottoplasty by a single surgeon (GPD) between December 2005 and September 2009 and followed for a minimum of 1 month were reviewed. Infants with significant comorbidities were evaluated separately. Nutritional intake before and after surgery, as well as speech and language pathology reports, was reviewed to qualify any feeding difficulties. Age at the time of surgery, additional surgical interventions, medical comorbidities, and length of follow-up were also noted during chart review. Of 81 infants who underwent bilateral supraglottoplasty, 75 were eligible for this review. In the cohort of infants without comorbidities, 46 of 48 (96%) had no change or an improvement in their oral intake after surgery. Both patients with initial worsening of feeding resumed oral intake within 2 months. In the group of patients with additional medical comorbidities, 22% required further interventions such as nasogastric tube, dietary modification, or gastrostomy tube placement. Supraglottoplasty in infants has a low incidence of persistent postoperative dysphagia. Infants with additional comorbidities are at a higher risk of feeding difficulty than otherwise healthy infants.

  16. Speech Motor Programming in Apraxia of Speech: Evidence from a Delayed Picture-Word Interference Task

    Science.gov (United States)

    Mailend, Marja-Liisa; Maas, Edwin

    2013-01-01

    Purpose: Apraxia of speech (AOS) is considered a speech motor programming impairment, but the specific nature of the impairment remains a matter of debate. This study investigated 2 hypotheses about the underlying impairment in AOS framed within the Directions Into Velocities of Articulators (DIVA; Guenther, Ghosh, & Tourville, 2006) model: The…

  17. Learning to pronounce first words in three languages: an investigation of caregiver and infant behavior using a computational model of an infant.

    Directory of Open Access Journals (Sweden)

    Ian S Howard

    Words are made up of speech sounds. Almost all accounts of child speech development assume that children learn the pronunciation of first-language (L1) speech sounds by imitation, most claiming that the child performs some kind of auditory matching to the elements of ambient speech. However, there is evidence to support an alternative account, and we investigate the non-imitative child behavior and well-attested caregiver behavior that this account posits using Elija, a computational model of an infant. Through unsupervised active learning, Elija began by discovering motor patterns, which produced sounds. In separate interaction experiments, native speakers of English, French and German then played the role of his caregiver. In their first interactions with Elija, they were allowed to respond to his sounds if they felt this was natural. We analyzed the interactions through phonemic transcriptions of the caregivers' utterances and found that they interpreted his output within the framework of their native languages. Their form of response was almost always a reformulation of Elija's utterance into well-formed sounds of L1. Elija retained those motor patterns to which a caregiver responded and formed associations between his motor pattern and the response it provoked. Thus in a second phase of interaction, he was able to parse input utterances in terms of the caregiver responses he had heard previously, and respond using his associated motor patterns. This capacity enabled the caregivers to teach Elija to pronounce some simple words in their native languages, by his serial imitation of the words' component speech sounds. Overall, our results demonstrate that the natural responses and behaviors of human subjects to infant-like vocalizations can take a computational model from a biologically plausible initial state through to word pronunciation. This provides support for an alternative to current auditory matching hypotheses for how children learn to…

  18. Speech reception with different bilateral directional processing schemes: Influence of binaural hearing, audiometric asymmetry, and acoustic scenario.

    Science.gov (United States)

    Neher, Tobias; Wagener, Kirsten C; Latzel, Matthias

    2017-09-01

    Hearing aid (HA) users can differ markedly in their benefit from directional processing (or beamforming) algorithms. The current study therefore investigated candidacy for different bilateral directional processing schemes. Groups of elderly listeners with symmetric (N = 20) or asymmetric (N = 19) hearing thresholds for frequencies below 2 kHz, a large spread in the binaural intelligibility level difference (BILD), and no difference in age, overall degree of hearing loss, or performance on a measure of selective attention took part. Aided speech reception was measured using virtual acoustics together with a simulation of a linked pair of completely occluding behind-the-ear HAs. Five processing schemes and three acoustic scenarios were used. The processing schemes differed in the tradeoff between signal-to-noise ratio (SNR) improvement and binaural cue preservation. The acoustic scenarios consisted of a frontal target talker presented against two speech maskers from ±60° azimuth or spatially diffuse cafeteria noise. For both groups, a significant interaction between BILD, processing scheme, and acoustic scenario was found. This interaction implied that, in situations with lateral speech maskers, HA users with BILDs larger than about 2 dB profited more from preserved low-frequency binaural cues than from greater SNR improvement, whereas for smaller BILDs the opposite was true. Audiometric asymmetry reduced the influence of binaural hearing. In spatially diffuse noise, the maximal SNR improvement was generally beneficial. N0Sπ detection performance at 500 Hz predicted the benefit from low-frequency binaural cues. Together, these findings provide a basis for adapting bilateral directional processing to individual and situational influences. Further research is needed to investigate their generalizability to more realistic HA conditions (e.g., with low-frequency vent-transmitted sound). Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Effect of Body Position on Energy Expenditure of Preterm Infants as Determined by Simultaneous Direct and Indirect Calorimetry.

    Science.gov (United States)

    Bell, Edward F; Johnson, Karen J; Dove, Edwin L

    2017-04-01

    Background: Indirect calorimetry is the standard method for estimating energy expenditure in clinical research. Few studies have evaluated indirect calorimetry in infants by comparing it with simultaneous direct calorimetry. Our purpose was (1) to compare the energy expenditure of preterm infants determined by these two methods, direct calorimetry and indirect calorimetry; and (2) to examine the effect of body position, supine or prone, on energy expenditure. Study Design: We measured energy expenditure by simultaneous direct (heat loss by gradient-layer calorimeter corrected for heat storage) and indirect calorimetry (whole-body oxygen consumption and carbon dioxide production) in 15 growing preterm infants during two consecutive interfeeding intervals, once in the supine position and once in the prone position. Results: The mean energy expenditure for all measurements in both positions did not differ significantly by the method used: 2.82 (standard deviation [SD] 0.42) kcal/kg/h by direct calorimetry and 2.78 (SD 0.48) kcal/kg/h by indirect calorimetry. The energy expenditure was significantly lower, by 10%, in the prone than in the supine position, whether examined by direct calorimetry (2.67 vs. 2.97 kcal/kg/h) or indirect calorimetry. Conclusion: Direct calorimetry and indirect calorimetry gave similar estimates of energy expenditure. Energy expenditure was 10% lower in the prone position than in the supine position.
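
    Indirect calorimetry of the kind described here converts measured oxygen consumption (VO2) and carbon dioxide production (VCO2) into energy expenditure, commonly via the abbreviated Weir equation. The sketch below assumes that standard formula; the gas-exchange values are hypothetical, chosen merely to fall near the reported results:

```python
def weir_energy_expenditure(vo2_l_per_min, vco2_l_per_min):
    """Abbreviated Weir equation (no urinary nitrogen term):
    EE [kcal/min] = 3.941 * VO2 [L/min] + 1.106 * VCO2 [L/min]."""
    return 3.941 * vo2_l_per_min + 1.106 * vco2_l_per_min

def ee_kcal_per_kg_per_h(vo2_ml_kg_min, vco2_ml_kg_min):
    """Per-kilogram hourly EE from ml/kg/min gas-exchange rates."""
    return weir_energy_expenditure(vo2_ml_kg_min / 1000,
                                   vco2_ml_kg_min / 1000) * 60

# Hypothetical preterm-infant gas-exchange rates in ml/kg/min:
print(round(ee_kcal_per_kg_per_h(9.5, 8.5), 2))  # → 2.81 kcal/kg/h
```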

  20. Development of isotope dilution-liquid chromatography/mass spectrometry combined with standard addition techniques for the accurate determination of tocopherols in infant formula

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joonhee; Jang, Eun-Sil; Kim, Byungjoo, E-mail: byungjoo@kriss.re.kr

    2013-07-17

    Highlights: • ID-LC/MS showed biased results for tocopherol analysis in infant formula. • H/D exchange of deuterated tocopherols during sample preparation was the source of bias. • Standard addition (SA)-ID-LC/MS was developed as an alternative to ID-LC/MS. • Details of the calculation and uncertainty evaluation of SA-IDMS are described. • SA-ID-LC/MS showed a higher-order metrological quality as a reference method. Abstract: During the development of isotope dilution-liquid chromatography/mass spectrometry (ID-LC/MS) for tocopherol analysis in infant formula, biased measurement results were observed when deuterium-labeled tocopherols were used as internal standards. It turned out that the biases came from intermolecular H/D exchange and intramolecular H/D scrambling of the internal standards during sample preparation. The degree of H/D exchange and scrambling depended considerably on the sample matrix. Standard addition-isotope dilution mass spectrometry (SA-IDMS) based on LC/MS was developed in this study to overcome the shortcomings of using deuterium-labeled internal standards while retaining the inherent advantage of isotope dilution techniques for accurate recovery correction in sample preparation. Details of the experimental scheme, calculation equations, and uncertainty evaluation scheme are described in this article. The proposed SA-IDMS method was applied to several infant formula samples to test its validity. The method was proven to have a higher-order metrological quality, providing very accurate and precise measurement results.
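
    The standard-addition principle behind SA-IDMS can be sketched as a linear extrapolation: spike aliquots of the sample with known analyte amounts, regress instrument response on amount added, and recover the original concentration from the x-intercept. The numbers below are synthetic and noise-free; the actual SA-IDMS method additionally folds in the isotope-ratio measurement:

```python
def linear_fit(xs, ys):
    """Ordinary least squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def standard_addition_conc(added, responses):
    """Extrapolate to zero response: c0 = -x_intercept = intercept / slope."""
    slope, intercept = linear_fit(added, responses)
    return intercept / slope

# Spikes of 0, 5, 10, 15 mg/kg tocopherol give proportional responses;
# the unspiked sample behaves like ~8 mg/kg analyte (hypothetical data).
added = [0.0, 5.0, 10.0, 15.0]
responses = [2.4 * (8.0 + a) for a in added]
print(standard_addition_conc(added, responses))  # ≈ 8.0 mg/kg
```

    Because the calibration is built inside the sample itself, matrix effects on sensitivity cancel, which is the property SA-IDMS combines with isotope-dilution recovery correction.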

  1. Risk profiles of infants ≥32 weeks' gestational age with ...

    African Journals Online (AJOL)

    Background. Infants in neonatal intensive care are at risk of swallowing difficulties, in particular oropharyngeal dysphagia (OPD) and oesophageal dysphagia (OD). OPD is treated by speech-language therapists while OD is managed by doctors. Diagnosis of dysphagia is a challenge as equipment for instrumental ...

  2. THE USE OF EXPRESSIVE SPEECH ACTS IN HANNAH MONTANA SESSION 1

    Directory of Open Access Journals (Sweden)

    Nur Vita Handayani

    2015-07-01

    Full Text Available This study aims to describe the kinds and forms of expressive speech acts in Hannah Montana Session 1. It is a descriptive qualitative study. The research object was the expressive speech act, and the data source was utterances containing expressive speech acts in the film Hannah Montana Session 1. The researcher collected the data using the observation method and a noting technique, and analyzed them with a descriptive qualitative method. The findings show ten kinds of expressive speech act in Hannah Montana Session 1: expressing apology, thanks, sympathy, attitudes, greeting, wishes, joy, pain, likes, and dislikes. The forms of expressive speech act found are direct literal, direct non-literal, indirect literal, and indirect non-literal expressive speech acts.

  3. Prelinguistic Behavior of Infants of Assisted Reproductive Techniques

    Science.gov (United States)

    Noori, Soudabeh; Nedaeifard, Leila; Agarasouli, Zahra; Koohpaiehzadeh, Jalil; Kermani, Ramin Mozafari; Fazeli, Abolhasan Shahzadeh

    2012-01-01

    Objective The aim of this study was to assess the effects of different assisted reproductive techniques (ART), such as in vitro fertilization (IVF) and intracytoplasmic sperm injection (ICSI), on the prelinguistic behavior of infants conceived by these techniques. Methods In this descriptive, cross-sectional study, the prelinguistic behavior of 151 full-term ART infants from the Royan Institute was assessed at the Children's Health and Development Research Center of Tehran from August 2007 to August 2009. Questionnaires were completed by parents when the infants were 9 months old. The questionnaire was standardized according to the Early Language Milestone Scale-2 (ELM-2). Data were analyzed with SPSS version 16 using the chi-square test. Findings Twenty-two (14.5%) of the infants were conceived by IVF and 129 (85.4%) by ICSI. Delayed reduplicated babbling was more frequent among ICSI than IVF infants. The only significant difference between the two sexes was in echolalia delay; echolalia was delayed more often in boys. Delayed reduplicated babbling was also more frequent in infants of younger mothers. There was no relation between speech and language defects of the parents and those of the infants. Conclusion This study showed that the prelinguistic behavior of ART infants is affected by the kind of ART method, infant sex, and the mother's age at the time of pregnancy. PMID:23431035

  4. Microbiological assessment and evaluation of rehydration instructions on powdered infant formulas, follow-up formulas, and infant foods in Malaysia.

    Science.gov (United States)

    Abdullah Sani, N; Hartantyo, S H P; Forsythe, S J

    2013-01-01

    A total of 90 samples comprising powdered infant formulas (n=51), follow-up formulas (n=21), and infant foods (n=18) from 15 domestic and imported brands were purchased from various retailers in Klang Valley, Malaysia and evaluated in terms of microbiological quality and the similarity of rehydration instructions on the product label to guidelines set by the World Health Organization. Microbiological analysis included the determination of aerobic plate count (APC) and the presence of Enterobacteriaceae and Cronobacter spp. Isolates of interest were identified using ID 32E (bioMérieux France, Craponne, France). In this study, 87% of the powdered infant formulas, follow-up formulas, and infant foods analyzed had an APC below the permitted level. Not all product labels instructed the use of water at or above 70°C for formula preparation, as specified by the 2008 revised World Health Organization guidelines. Six brands instructed the use of water at 40 to 55°C, a temperature range that would support the survival and even growth of Enterobacteriaceae. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  5. Guinea pig ID-like families of SINEs.

    Science.gov (United States)

    Kass, David H; Schaetz, Brian A; Beitler, Lindsey; Bonney, Kevin M; Jamison, Nicole; Wiesner, Cathy

    2009-05-01

    Previous studies have indicated a paucity of SINEs within the genomes of the guinea pig and nutria, representatives of the Hystricognathi suborder of rodents. More recent work has shown that the guinea pig genome contains a large number of B1 elements, expanding to various levels among different rodents. In this work we utilized A-B PCR and screened GenBank with sequences from isolated clones to identify potentially uncharacterized SINEs within the guinea pig genome, and identified numerous sequences with a high degree of similarity (>92%) specific to the guinea pig. The presence of A-tails and flanking direct repeats associated with these sequences supported the identification of a full-length SINE, with a consensus sequence notably distinct from other rodent SINEs. Although most similar to the ID SINE, it clearly was not derived from the known ID master gene (BC1), hence we refer to this element as guinea pig ID-like (GPIDL). Using the consensus to screen the guinea pig genomic database (Assembly CavPor2) with Ensembl BlastView, we estimated at least 100,000 copies, which contrasts markedly with just over 100 copies of ID elements. Additionally we provided evidence of recent integrations of GPIDL, as two of seven analyzed conserved GPIDL-containing loci demonstrated presence/absence variants in Cavia porcellus and C. aperea. Using intra-IDL PCR and sequence analyses we also provide evidence that GPIDL is derived from a hystricognath-specific SINE family. These results demonstrate that this SINE family continues to contribute to the dynamics of genomes of hystricognath rodents.

  6. Bi-directional effects of depressed mood in the postnatal period on mother-infant non-verbal engagement with picture books.

    Science.gov (United States)

    Reissland, Nadja; Burt, Mike

    2010-12-01

    The purpose of the present study is to examine the bi-directional nature of maternal depressed mood in the postnatal period on maternal and infant non-verbal behaviors while looking at a picture book. Although, it is acknowledged that non-verbal engagement with picture books in infancy plays an important role, the effect of maternal depressed mood on stimulating the interest of infants in books is not known. Sixty-one mothers and their infants, 38 boys and 23 girls, were observed twice approximately 3 months apart (first observation: mean age 6.8 months, range 3-11 months, 32 mothers with depressed mood; second observation: mean age 10.2 months, range 6-16 months, 17 mothers with depressed mood). There was a significant effect for depressed mood on negative behaviors: infants of mothers with depressed mood tended to push away and close books more often. The frequency of negative behaviors (pushing the book away/closing it on the part of the infant and withholding the book and restraining the infant on the part of the mother) were behaviors which if expressed during the first visit were more likely to be expressed during the second visit. Levels of negative behaviors by mother and infant were strongly related during each visit. Additionally, the pattern between visits suggests that maternal negative behavior may be the cause of her infant negative behavior. These results are discussed in terms of the effects of maternal depressed mood on the bi-directional relation of non-verbal engagement of mother and child. Crown Copyright © 2010. Published by Elsevier Inc. All rights reserved.

  7. Determination of local chromatin composition by CasID.

    Science.gov (United States)

    Schmidtmann, Elisabeth; Anton, Tobias; Rombaut, Pascaline; Herzog, Franz; Leonhardt, Heinrich

    2016-09-02

    Chromatin structure and function are determined by a plethora of proteins whose genome-wide distribution is typically assessed by immunoprecipitation (ChIP). Here, we developed a novel tool to investigate the local chromatin environment at specific DNA sequences. We combined the programmable DNA binding of dCas9 with the promiscuous biotin ligase BirA* (CasID) to biotinylate proteins in the direct vicinity of specific loci. Subsequent streptavidin-mediated precipitation and mass spectrometry identified both known and previously unknown chromatin factors associated with repetitive telomeric, major satellite and minor satellite DNA. With super-resolution microscopy, we confirmed the localization of the putative transcription factor ZNF512 at chromocenters. The versatility of CasID facilitates the systematic elucidation of functional protein complexes and locus-specific chromatin composition.

  8. A proposed HTTP service based IDS

    Directory of Open Access Journals (Sweden)

    Mohamed M. Abd-Eldayem

    2014-03-01

    Full Text Available The tremendous growth of web-based applications has increased information-security vulnerabilities over the Internet. Security administrators use an Intrusion Detection System (IDS) to monitor network traffic and host activities and to detect attacks against hosts and network resources. In this paper an IDS based on a Naïve Bayes classifier is analyzed. The main objective is to enhance IDS performance by preparing the training data set so that malicious connections that exploit the HTTP service can be detected. Results of the application are demonstrated and discussed. In the training phase of the proposed IDS, a feature-selection technique based on the Naïve Bayes classifier is first used to identify the most important HTTP traffic features for detecting HTTP attacks. In the testing and running phases, the proposed IDS classifies network traffic according to the requested service; then, based on the selected features, the Naïve Bayes classifier analyzes the HTTP-service traffic and identifies normal HTTP connections and attacks. The performance of the IDS is measured through experiments using the NSL-KDD data set. The results show a detection rate of about 99%, a false-positive rate of about 1%, and a false-negative rate of about 0.25%; the proposed IDS therefore achieves the highest detection rate and the lowest false-alarm rate compared with other leading IDSs.
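    The paper's own feature selection and NSL-KDD processing are not shown in the abstract. The classification step it describes, a Naïve Bayes classifier over selected HTTP traffic features, can be sketched minimally as follows; the toy features, their values, and the nominal smoothing vocabulary size of 10 are all illustrative assumptions, not the paper's setup.

```python
from collections import defaultdict
import math

class NaiveBayes:
    """Minimal categorical Naive Bayes with Laplace smoothing."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.priors = {c: y.count(c) / len(y) for c in self.classes}
        # counts[class][feature_index][feature_value] -> occurrence count
        self.counts = {c: [defaultdict(int) for _ in X[0]] for c in self.classes}
        self.totals = {c: 0 for c in self.classes}
        for xi, yi in zip(X, y):
            self.totals[yi] += 1
            for j, v in enumerate(xi):
                self.counts[yi][j][v] += 1
        return self

    def predict(self, x):
        best, best_lp = None, -math.inf
        for c in self.classes:
            lp = math.log(self.priors[c])
            for j, v in enumerate(x):
                # Laplace smoothing, assuming a nominal 10-value vocabulary
                lp += math.log((self.counts[c][j][v] + 1) /
                               (self.totals[c] + 10))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Toy HTTP connections: (method, URI-length bucket, status bucket)
X = [("GET", "short", "2xx"), ("GET", "short", "2xx"),
     ("POST", "long", "4xx"), ("POST", "long", "5xx")]
y = ["normal", "normal", "attack", "attack"]
clf = NaiveBayes().fit(X, y)
print(clf.predict(("GET", "short", "2xx")))   # -> normal
print(clf.predict(("POST", "long", "4xx")))   # -> attack
```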

  9. Emerging technologies with potential for objectively evaluating speech recognition skills.

    Science.gov (United States)

    Rawool, Vishakha Waman

    2016-01-01

    Work-related exposure to noise and other ototoxins can cause damage to the cochlea, synapses between the inner hair cells, the auditory nerve fibers, and higher auditory pathways, leading to difficulties in recognizing speech. Procedures designed to determine speech recognition scores (SRS) in an objective manner can be helpful in disability compensation cases where the worker claims to have poor speech perception due to exposure to noise or ototoxins. Such measures can also be helpful in determining SRS in individuals who cannot provide reliable responses to speech stimuli, including patients with Alzheimer's disease, traumatic brain injuries, and infants with and without hearing loss. Cost-effective neural monitoring hardware and software is being rapidly refined due to the high demand for neurogaming (games involving the use of brain-computer interfaces), health, and other applications. More specifically, two related advances in neuro-technology include relative ease in recording neural activity and availability of sophisticated analysing techniques. These techniques are reviewed in the current article and their applications for developing objective SRS procedures are proposed. Issues related to neuroaudioethics (ethics related to collection of neural data evoked by auditory stimuli including speech) and neurosecurity (preservation of a person's neural mechanisms and free will) are also discussed.

  10. Newborn infants' sensitivity to perceptual cues to lexical and grammatical words.

    Science.gov (United States)

    Shi, R; Werker, J F; Morgan, J L

    1999-09-30

    In our study newborn infants were presented with lists of lexical and grammatical words prepared from natural maternal speech. The results show that newborns are able to categorically discriminate these sets of words based on a constellation of perceptual cues that distinguish them. This general ability to detect and categorically discriminate sets of words on the basis of multiple acoustic and phonological cues may provide a perceptual base that can help older infants bootstrap into the acquisition of grammatical categories and syntactic structure.

  11. Inhibitor of differentiation 4 (Id4) is a potential tumor suppressor in prostate cancer

    International Nuclear Information System (INIS)

    Carey, Jason PW; Asirvatham, Ananthi J; Galm, Oliver; Ghogomu, Tandeih A; Chaudhary, Jaideep

    2009-01-01

    Inhibitor of differentiation 4 (Id4), a member of the Id gene family, is also a dominant negative regulator of basic helix loop helix (bHLH) transcription factors. Some of the functions of Id4 appear to be unique as compared to its other family members Id1, Id2 and Id3. Loss of Id4 gene expression in many cancers in association with promoter hypermethylation has led to the proposal that Id4 may act as a tumor suppressor. In this study we provide functional evidence that Id4 indeed acts as a tumor suppressor and is part of a cancer associated epigenetic re-programming. Data mining was used to demonstrate Id4 expression in prostate cancer. Methylation specific polymerase chain reaction (MSP) analysis was performed to understand molecular mechanisms associated with Id4 expression in prostate cancer cell lines. The effect of ectopic Id4 expression in DU145 cells was determined by cell cycle analysis (3H thymidine incorporation and FACS), expression of androgen receptor, p53 and cyclin dependent kinase inhibitors p27 and p21 by a combination of RT-PCR, real time-PCR, western blot and immuno-cytochemical analysis. Id4 expression was down-regulated in prostate cancer. Id4 expression was also down-regulated in the prostate cancer cell line DU145 due to promoter hyper-methylation. Ectopic Id4 expression in the DU145 prostate cancer cell line led to increased apoptosis and decreased cell proliferation, due in part to an S-phase arrest. In addition to S-phase arrest, ectopic Id4 expression in PC3 cells also resulted in a prolonged G2/M phase. At the molecular level these changes were associated with increased androgen receptor (AR), p21, p27 and p53 expression in DU145 cells. The results suggest that Id4 acts directly as a tumor suppressor by influencing a hierarchy of cellular processes at multiple levels that leads to decreased cell proliferation and a change in morphology that is possibly mediated through induction of previously silenced tumor suppressors.

  12. When Meaning Is Not Enough: Distributional and Semantic Cues to Word Categorization in Child Directed Speech.

    Science.gov (United States)

    Feijoo, Sara; Muñoz, Carmen; Amadó, Anna; Serrat, Elisabet

    2017-01-01

    One of the most important tasks in first language development is assigning words to their grammatical category. The Semantic Bootstrapping Hypothesis postulates that, in order to accomplish this task, children are guided by a neat correspondence between semantic and grammatical categories, since nouns typically refer to objects and verbs to actions. It is this correspondence that guides children's initial word categorization. Other approaches, on the other hand, suggest that children might make use of distributional cues and word contexts to accomplish the word categorization task. According to such approaches, the Semantic Bootstrapping assumption offers an important limitation, as it might not be true that all the nouns that children hear refer to specific objects or people. In order to explore that, we carried out two studies based on analyses of children's linguistic input. We analyzed child-directed speech addressed to four children under the age of 2;6, taken from the CHILDES database. The corpora were selected from the Manchester corpus. The corpora from the four selected children contained a total of 10,681 word types and 364,196 word tokens. In our first study, discriminant analyses were performed using semantic cues alone. The results show that many of the nouns found in parents' speech do not relate to specific objects and that semantic information alone might not be sufficient for successful word categorization. Given that there must be an additional source of information which, alongside with semantics, might assist young learners in word categorization, our second study explores the availability of both distributional and semantic cues in child-directed speech. Our results confirm that this combination might yield better results for word categorization. These results are in line with theories that suggest the need for an integration of multiple cues from different sources in language development.
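    The study's discriminant analyses are not detailed in the abstract, but the idea that distributional contexts carry category information can be illustrated by tallying, for each word, the words that immediately precede it in a corpus. The toy corpus and helper function below are hypothetical, not the Manchester corpus data.

```python
from collections import Counter, defaultdict

def frame_profiles(utterances):
    """Collect, for each word, counts of the words immediately preceding it.

    Words with similar preceding-word distributions (e.g. appearing after
    determiners such as 'the'/'a' versus after subjects such as 'you')
    tend to share a grammatical category.
    """
    profiles = defaultdict(Counter)
    for utt in utterances:
        tokens = utt.lower().split()
        for prev, word in zip(tokens, tokens[1:]):
            profiles[word][prev] += 1
    return profiles

corpus = ["the dog ate the biscuit",
          "a dog saw a cat",
          "you saw the cat",
          "you ate a biscuit"]
p = frame_profiles(corpus)
print(p["dog"])   # occurs only after determiners ('the', 'a')
print(p["saw"])   # occurs only after subjects ('dog', 'you')
```

    Clustering words by these context profiles separates the toy nouns from the toy verbs without any semantic information, which is the kind of distributional cue the study examines alongside semantics.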

  13. Speech Alarms Pilot Study

    Science.gov (United States)

    Sandor, A.; Moses, H. R.

    2016-01-01

    Currently on the International Space Station (ISS) and other space vehicles Caution & Warning (C&W) alerts are represented with various auditory tones that correspond to the type of event. This system relies on the crew's ability to remember what each tone represents in a high stress, high workload environment when responding to the alert. Furthermore, crew receive training a year or more in advance of the mission, which makes remembering the semantic meaning of the alerts more difficult. The current system works for missions conducted close to Earth, where ground operators can assist as needed. On long duration missions, however, crews will need to handle off-nominal events autonomously. There is evidence that speech alarms may be easier and faster to recognize, especially during an off-nominal event. The Information Presentation Directed Research Project (FY07-FY09) funded by the Human Research Program included several studies investigating C&W alerts. The studies evaluated tone alerts currently in use with NASA flight deck displays along with candidate speech alerts. A follow-on study used four types of speech alerts to investigate how quickly various types of auditory alerts with and without a speech component - either at the beginning or at the end of the tone - can be identified. Even though crew were familiar with the tone alert from training or direct mission experience, alerts starting with a speech component were identified faster than alerts starting with a tone. The current study replicated the results from the previous study in a more rigorous experimental design to determine if the candidate speech alarms are ready for transition to operations or if more research is needed. Four types of alarms (caution, warning, fire, and depressurization) were presented to participants in both tone and speech formats in laboratory settings and later in the Human Exploration Research Analog (HERA).
In the laboratory study, the alerts were presented by software and participants were

  14. An analysis of machine translation and speech synthesis in speech-to-speech translation system

    OpenAIRE

    Hashimoto, K.; Yamagishi, J.; Byrne, W.; King, S.; Tokuda, K.

    2011-01-01

    This paper provides an analysis of the impacts of machine translation and speech synthesis on speech-to-speech translation systems. The speech-to-speech translation system consists of three components: speech recognition, machine translation and speech synthesis. Many techniques for integration of speech recognition and machine translation have been proposed. However, speech synthesis has not yet been considered. Therefore, in this paper, we focus on machine translation and speech synthesis, ...

  15. idRHa+ProMod - Rail Hardening Control System

    International Nuclear Information System (INIS)

    Ferro, L

    2016-01-01

    idRHa+ProMod is the process control system developed by Primetals Technologies to foresee the thermo-mechanical evolution and micro-structural composition of rail steels subjected to slack quenching in idRHa+ Rail Hardening equipment in a simulation environment. This tool can be used both off-line or in-line, giving the user the chance to test and study the best cooling strategies or leaving the automatic control system free to adjust the proper cooling recipe. Optimization criteria have been tailored in order to determine the best cooling conditions according to the metallurgical requirements imposed by the main rail standards and also taking into account the elastoplastic bending phenomena occurring during all stages of the head hardening process. The computational core of idRHa+ProMod is a thermal finite element procedure coupled with special algorithms developed to work out the main thermo-physical properties of steel, to predict the non-isothermal austenite decomposition into all the relevant phases and subsequently to evaluate the amount of latent heat of transformation released, the compound thermal expansion coefficient and the amount of plastic deformation in the material. Air mist and air blade boundary conditions have been carefully investigated by means of pilot plant tests aimed to study the jet impingement on rail surfaces and the cooling efficiency at all working conditions. Heat transfer coefficients have been further checked and adjusted directly on field during commissioning. idRHa+ is a trademark of Primetals Technologies Italy Srl (paper)

  16. idRHa+ProMod - Rail Hardening Control System

    Science.gov (United States)

    Ferro, L.

    2016-03-01

    idRHa+ProMod is the process control system developed by Primetals Technologies to foresee the thermo-mechanical evolution and micro-structural composition of rail steels subjected to slack quenching in idRHa+ Rail Hardening equipment in a simulation environment. This tool can be used both off-line or in-line, giving the user the chance to test and study the best cooling strategies or leaving the automatic control system free to adjust the proper cooling recipe. Optimization criteria have been tailored in order to determine the best cooling conditions according to the metallurgical requirements imposed by the main rail standards and also taking into account the elastoplastic bending phenomena occurring during all stages of the head hardening process. The computational core of idRHa+ProMod is a thermal finite element procedure coupled with special algorithms developed to work out the main thermo-physical properties of steel, to predict the non-isothermal austenite decomposition into all the relevant phases and subsequently to evaluate the amount of latent heat of transformation released, the compound thermal expansion coefficient and the amount of plastic deformation in the material. Air mist and air blade boundary conditions have been carefully investigated by means of pilot plant tests aimed to study the jet impingement on rail surfaces and the cooling efficiency at all working conditions. Heat transfer coefficients have been further checked and adjusted directly on field during commissioning. idRHa+ is a trademark of Primetals Technologies Italy Srl

  17. Early Behavioral Intervention to Improve Social Communication Function in Infants with TSC

    Science.gov (United States)

    2016-10-01

    Disability (ID) using the...infant to complete the JASPER intervention at the UCLA site, acquired during our Social Scenes Paradigm (see Figures 1 & 2). The plot demonstrates that...Theory in Intellectual and Developmental Disabilities San Diego, CA • Jeste, S.S., (2016). Can rare disorders pave the way to

  18. Massage therapy improves the development of HIV-exposed infants living in a low socio-economic, peri-urban community of South Africa.

    Science.gov (United States)

    Perez, E M; Carrara, H; Bourne, L; Berg, A; Swanevelder, S; Hendricks, M K

    2015-02-01

    The aim of this study was to assess the effect of massage therapy on the growth and development of infants of HIV-infected mothers in a low socio-economic community in Cape Town. It was a prospective, randomised, controlled intervention trial that included massage therapy and control groups of HIV-infected mothers and their normal birth weight infants who were enrolled in the prevention of mother-to-child transmission (PMTCT) programme. Participants were recruited at the 6-week clinic visit and followed up every 2 weeks until their infants were 9 months of age. Mother-infant pairs in the massage therapy and control groups included 73 and 88 at 6 weeks and 55 and 58 at 9 months, respectively. Mothers in the intervention group were trained to massage their infants for 15 min daily. The socioeconomic status, immunity, relationship with the partner and mental pain of mothers; the infants' dietary intake, anthropometry and development (Griffiths Mental Development Scales); and haematological and iron status of mothers and infants were assessed at baseline and follow-up. Nine infants (5.3%) were HIV-infected on the HIV DNA PCR test at 6 weeks. Despite significantly higher levels of maternal mental pain, infants in the massage therapy group compared to the control group scored higher in all five of the Griffiths Scales of Mental Development and significantly higher in the mean quotient (p=0.002) and mean percentile (p=0.004) for the hearing and speech scale at 9 months. Based on the mean difference in scores, the massage therapy group showed greater improvement for all five scales compared to the control group. The mean difference in scores was significantly greater for the hearing and speech quotient (21.9 vs. 11.2) (p …). Massage therapy improved development and had a significant effect on the hearing and speech and general quotients of HIV-exposed infants in this study. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Tinjauan Desain Website Kemlu.Go.Id

    OpenAIRE

    Danu Widhyatmoko

    2013-01-01

    Review of kemlu.go.id website design is a research report on Kemlu.go.id website design. Kemlu.go.id website aims to be the information gateway of Ministry of Foreign Affairs, and also as guidelines of foreign politic policies of Republic of Indonesia. The review had been accomplished by using analytical method based on the "Nine Essential Principles for Good Web Design" developed by Collis Ta'eed (2007). At the end of the article, several recommendations in developing kemlu.go.id website are...

  20. Speech emotion recognition methods: A literature review

    Science.gov (United States)

    Basharirad, Babak; Moradhaseli, Mohammadreza

    2017-10-01

    Recently, research attention to emotional speech signals has been boosted in human-machine interfaces due to the availability of high computation capability. Many systems have been proposed in the literature to identify the emotional state through speech. Selection of suitable feature sets, design of proper classification methods, and preparation of an appropriate dataset are the main key issues of speech emotion recognition systems. This paper critically analyzes the currently available approaches to speech emotion recognition based on three evaluation parameters (feature set, classification of features, and accuracy). In addition, this paper evaluates the performance and limitations of available methods, and highlights promising directions for improving speech emotion recognition systems.

  1. A characterization of verb use in Turkish agrammatic narrative speech.

    Science.gov (United States)

    Arslan, Seçkin; Bamyacı, Elif; Bastiaanse, Roelien

    2016-01-01

    This study investigates the characteristics of narrative-speech production and the use of verbs in Turkish agrammatic speakers (n = 10) compared to non-brain-damaged controls (n = 10). To elicit narrative-speech samples, personal interviews and storytelling tasks were conducted. Turkish has a large and regular verb inflection paradigm where verbs are inflected for evidentiality (i.e. direct versus indirect evidence available to the speaker). Particularly, we explored the general characteristics of the speech samples (e.g. utterance length) and the uses of lexical, finite and non-finite verbs and direct and indirect evidentials. The results show that speech rate is slow, verbs per utterance are lower than normal and the verb diversity is reduced in the agrammatic speakers. Verb inflection is relatively intact; however, a trade-off pattern between inflection for direct evidentials and verb diversity is found. The implications of the data are discussed in connection with narrative-speech production studies on other languages.

  2. The role of gestures in spatial working memory and speech.

    Science.gov (United States)

    Morsella, Ezequiel; Krauss, Robert M

    2004-01-01

    Co-speech gestures traditionally have been considered communicative, but they may also serve other functions. For example, hand-arm movements seem to facilitate both spatial working memory and speech production. It has been proposed that gestures facilitate speech indirectly by sustaining spatial representations in working memory. Alternatively, gestures may affect speech production directly by activating embodied semantic representations involved in lexical search. Consistent with the first hypothesis, we found participants gestured more when describing visual objects from memory and when describing objects that were difficult to remember and encode verbally. However, they also gestured when describing a visually accessible object, and gesture restriction produced dysfluent speech even when spatial memory was untaxed, suggesting that gestures can directly affect both spatial memory and lexical retrieval.

  3. Electron ID in ATLAS Run 2

    CERN Document Server

    Thais, Savannah Jennifer; The ATLAS collaboration

    2018-01-01

    Efficient and accurate electron identification is of critical importance to measuring many physics processes with leptons in the final state, including H->4l, dark vector boson searches, and various SUSY searches. This poster will describe the current status of the Likelihood driven Electron ID, highlighting the recent move from a MC driven ID to a data-driven ID. It will include the most recent identification efficiency and scale-factor measurements. Additionally, it will describe continued improvements for Run 2 electron ID, highlighting improvements in the low pt region and potential Machine Learning improvements.

  4. Tinjauan Desain Website Kemlu.Go.Id

    Directory of Open Access Journals (Sweden)

    Danu Widhyatmoko

    2013-04-01

    Full Text Available Review of kemlu.go.id website design is a research report on Kemlu.go.id website design. Kemlu.go.id website aims to be the information gateway of Ministry of Foreign Affairs, and also as guidelines of foreign politic policies of Republic of Indonesia. The review had been accomplished by using analytical method based on the "Nine Essential Principles for Good Web Design" developed by Collis Ta'eed (2007. At the end of the article, several recommendations in developing kemlu.go.id website are presented to create better appearance.  

  5. The native-language benefit for talker identification is robust in 7.5-month-old infants.

    Science.gov (United States)

    Fecher, Natalie; Johnson, Elizabeth K

    2018-04-26

    Adults recognize talkers better when the talkers speak a familiar language than when they speak an unfamiliar language. This language familiarity effect (LFE) demonstrates the inseparable nature of linguistic and indexical information in adult spoken language processing. Relatively little is known about children's integration of linguistic and indexical information in speech. For example, to date, only one study has explored the LFE in infants. Here, we sought to better understand the maturation of speech processing abilities in infants by replicating this earlier study using a more stringent experimental design (eliminating a potential voice-language confound), a different test population (English- rather than Dutch-learning infants), and a new language pairing (English vs. Polish rather than Dutch vs. Italian or Japanese). Furthermore, we explored the language exposure conditions required for infants to develop an LFE for a formerly unfamiliar language. We hypothesized based on previous studies (including the perceptual narrowing literature) that infants might develop an LFE more readily than would adults. Although our findings replicate those of the earlier study (demonstrating that the LFE is robust in 7.5-month-olds), we found no evidence that infants need less language exposure than do adults to develop an LFE. We concluded that both infants and adults need extensive (potentially live) exposure to an unfamiliar language before talker identification in that language improves. Moreover, our study suggests that the LFE is likely rooted in early emerging phonology rather than shared lexical knowledge and that infants already closely resemble adults in their processing of linguistic and indexical information. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  6. Methods for eliciting, annotating, and analyzing databases for child speech development.

    Science.gov (United States)

    Beckman, Mary E; Plummer, Andrew R; Munson, Benjamin; Reidy, Patrick F

    2017-09-01

    Methods from automatic speech recognition (ASR), such as segmentation and forced alignment, have facilitated the rapid annotation and analysis of very large adult speech databases and databases of caregiver-infant interaction, enabling advances in speech science that were unimaginable just a few decades ago. This paper centers on two main problems that must be addressed in order to have analogous resources for developing and exploiting databases of young children's speech. The first problem is to understand and appreciate the differences between adult and child speech that cause ASR models developed for adult speech to fail when applied to child speech. These differences include the fact that children's vocal tracts are smaller than those of adult males and also changing rapidly in size and shape over the course of development, leading to between-talker variability across age groups that dwarfs the between-talker differences between adult men and women. Moreover, children do not achieve fully adult-like speech motor control until they are young adults, and their vocabularies and phonological proficiency are developing as well, leading to considerably more within-talker variability as well as more between-talker variability. The second problem then is to determine what annotation schemas and analysis techniques can most usefully capture relevant aspects of this variability. Indeed, standard acoustic characterizations applied to child speech reveal that adult-centered annotation schemas fail to capture phenomena such as the emergence of covert contrasts in children's developing phonological systems, while also revealing children's nonuniform progression toward community speech norms as they acquire the phonological systems of their native languages. 
Both problems point to the need for more basic research into the growth and development of the articulatory system (as well as of the lexicon and phonological system) that is oriented explicitly toward the construction of
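The segmentation and forced-alignment methods mentioned above rest on dynamic programming: find the lowest-cost monotonic assignment of acoustic frames to a known sequence of phone models. A toy sketch of that idea, using hypothetical one-dimensional "features" in place of a real acoustic model:

```python
# Toy illustration (not an ASR system) of the dynamic-programming idea behind
# forced alignment: assign each observed frame to one template in a fixed,
# ordered sequence of phone templates, minimizing cumulative distance.
# The scalar "features" below are purely hypothetical.

def align(frames, templates):
    """Return minimal cumulative distance and a frame->template assignment."""
    n, m = len(frames), len(templates)
    INF = float("inf")
    cost = [[INF] * m for _ in range(n)]
    back = [[0] * m for _ in range(n)]
    for i, f in enumerate(frames):
        for j, t in enumerate(templates):
            d = abs(f - t)  # local distance between frame and template
            if i == 0:
                cost[i][j] = d if j == 0 else INF  # must start at template 0
            else:
                # either stay on the same template or advance from the previous one
                best_j = j
                if j > 0 and cost[i - 1][j - 1] < cost[i - 1][j]:
                    best_j = j - 1
                cost[i][j] = d + cost[i - 1][best_j]
                back[i][j] = best_j
    # backtrack from the final template at the final frame
    path = [m - 1]
    for i in range(n - 1, 0, -1):
        path.append(back[i][path[-1]])
    return cost[n - 1][m - 1], path[::-1]

# Frames drift from ~1.0 to ~5.0; templates are two "phones" at 1.0 and 5.0.
dist, path = align([1.0, 1.1, 4.9, 5.0, 5.2], [1.0, 5.0])
print(path)  # -> [0, 0, 1, 1, 1]: the first two frames map to phone 0
```

Real aligners replace the scalar distance with HMM/DNN acoustic model scores, but the monotonic dynamic-programming structure is the same.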

  7. Development of The Viking Speech Scale to classify the speech of children with cerebral palsy.

    Science.gov (United States)

    Pennington, Lindsay; Virella, Daniel; Mjøen, Tone; da Graça Andrada, Maria; Murray, Janice; Colver, Allan; Himmelmann, Kate; Rackauskaite, Gija; Greitane, Andra; Prasauskiene, Audrone; Andersen, Guro; de la Cruz, Javier

    2013-10-01

    Surveillance registers monitor the prevalence of cerebral palsy and the severity of resulting impairments across time and place. The motor disorders of cerebral palsy can affect children's speech production and limit their intelligibility. We describe the development of a scale to classify children's speech performance for use in cerebral palsy surveillance registers, and its reliability across raters and across time. Speech and language therapists, other healthcare professionals and parents classified the speech of 139 children with cerebral palsy (85 boys, 54 girls; mean age 6.03 years, SD 1.09) from observation and previous knowledge of the children. Another group of health professionals rated children's speech from information in their medical notes. With the exception of parents, raters reclassified children's speech at least four weeks after their initial classification. Raters were asked to rate how easy the scale was to use and how well the scale described the child's speech production using Likert scales. Inter-rater reliability was moderate to substantial (k>.58 for all comparisons). Test-retest reliability was substantial to almost perfect for all groups (k>.68). Over 74% of raters found the scale easy or very easy to use; 66% of parents and over 70% of health care professionals judged the scale to describe children's speech well or very well. We conclude that the Viking Speech Scale is a reliable tool to describe the speech performance of children with cerebral palsy, which can be applied through direct observation of children or through case note review. Copyright © 2013 Elsevier Ltd. All rights reserved.
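The kappa values reported above (e.g., k > .58 for inter-rater reliability) follow from a short formula. A minimal sketch; the rater labels are hypothetical examples, not the study's data:

```python
# Cohen's kappa for two raters classifying the same items:
# kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
# p_e is agreement expected by chance from each rater's marginal frequencies.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the product of marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters assigning scale levels I-IV to 10 children.
a = ["I", "I", "II", "II", "III", "III", "IV", "IV", "I", "II"]
b = ["I", "I", "II", "III", "III", "III", "IV", "III", "I", "II"]
print(round(cohens_kappa(a, b), 2))  # -> 0.73 (substantial agreement)
```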

  8. Idékatalog Appetit på maden

    DEFF Research Database (Denmark)

    Jensen, Tenna; Jespersen, Astrid Pernille; Grønnow, Liv Cæcilie

    2015-01-01

    The idea catalogue (Idékatalog) is a stand-alone publication belonging to the project Appetit på maden. It was developed for use in the City of Copenhagen (Københavns Kommune).

  9. DNA origami-based shape IDs for single-molecule nanomechanical genotyping

    Science.gov (United States)

    Zhang, Honglu; Chao, Jie; Pan, Dun; Liu, Huajie; Qiang, Yu; Liu, Ke; Cui, Chengjun; Chen, Jianhua; Huang, Qing; Hu, Jun; Wang, Lianhui; Huang, Wei; Shi, Yongyong; Fan, Chunhai

    2017-04-01

    Variations on DNA sequences profoundly affect how we develop diseases and respond to pathogens and drugs. Atomic force microscopy (AFM) provides a nanomechanical imaging approach for genetic analysis with nanometre resolution. However, unlike fluorescence imaging that has wavelength-specific fluorophores, the lack of shape-specific labels largely hampers widespread applications of AFM imaging. Here we report the development of a set of differentially shaped, highly hybridizable self-assembled DNA origami nanostructures serving as shape IDs for magnified nanomechanical imaging of single-nucleotide polymorphisms. Using these origami shape IDs, we directly genotype single molecules of human genomic DNA with an ultrahigh resolution of ~10 nm and the multiplexing ability. Further, we determine three types of disease-associated, long-range haplotypes in samples from the Han Chinese population. Single-molecule analysis allows robust haplotyping even for samples with low labelling efficiency. We expect this generic shape ID-based nanomechanical approach to hold great potential in genetic analysis at the single-molecule level.

  10. Characterizing a neurodegenerative syndrome: primary progressive apraxia of speech.

    Science.gov (United States)

    Josephs, Keith A; Duffy, Joseph R; Strand, Edythe A; Machulda, Mary M; Senjem, Matthew L; Master, Ankit V; Lowe, Val J; Jack, Clifford R; Whitwell, Jennifer L

    2012-05-01

    and increased mean diffusivity of the superior longitudinal fasciculus, particularly the premotor components. Statistical parametric mapping of the [(18)F]-fluorodeoxyglucose positron emission tomography scans revealed focal hypometabolism of superior lateral premotor cortex and supplementary motor area, although there was some variability across subjects noted with CortexID analysis. [(11)C]-Pittsburgh compound B positron emission tomography binding was increased in only one of the 12 subjects, although it was unclear whether the increase was actually related to the primary progressive apraxia of speech. A syndrome characterized by progressive pure apraxia of speech clearly exists, with a neuroanatomic correlate of superior lateral premotor and supplementary motor atrophy, making this syndrome distinct from primary progressive aphasia.

  11. Infants' Developing Understanding of Social Gaze

    Science.gov (United States)

    Beier, Jonathan S.; Spelke, Elizabeth S.

    2012-01-01

    Young infants are sensitive to self-directed social actions, but do they appreciate the intentional, target-directed nature of such behaviors? The authors addressed this question by investigating infants' understanding of social gaze in third-party interactions (N = 104). Ten-month-old infants discriminated between 2 people in mutual versus…

  12. Inhibitor of differentiation 4 (Id4) is a potential tumor suppressor in prostate cancer

    Directory of Open Access Journals (Sweden)

    Carey Jason PW

    2009-06-01

    Background: Inhibitor of differentiation 4 (Id4), a member of the Id gene family, is also a dominant negative regulator of basic helix-loop-helix (bHLH) transcription factors. Some of the functions of Id4 appear to be unique compared with those of its family members Id1, Id2 and Id3. Loss of Id4 gene expression in many cancers, in association with promoter hypermethylation, has led to the proposal that Id4 may act as a tumor suppressor. In this study we provide functional evidence that Id4 indeed acts as a tumor suppressor and is part of a cancer-associated epigenetic re-programming. Methods: Data mining was used to demonstrate Id4 expression in prostate cancer. Methylation-specific polymerase chain reaction (MSP) analysis was performed to understand the molecular mechanisms associated with Id4 expression in prostate cancer cell lines. The effect of ectopic Id4 expression in DU145 cells was determined by cell cycle analysis (3H-thymidine incorporation and FACS) and by expression of androgen receptor, p53 and the cyclin-dependent kinase inhibitors p27 and p21, using a combination of RT-PCR, real-time PCR, western blot and immunocytochemical analysis. Results: Id4 expression was down-regulated in prostate cancer. Id4 expression was also down-regulated in the prostate cancer line DU145 due to promoter hypermethylation. Ectopic Id4 expression in the DU145 prostate cancer cell line led to increased apoptosis and decreased cell proliferation, due in part to an S-phase arrest. In addition to S-phase arrest, ectopic Id4 expression in PC3 cells also resulted in a prolonged G2/M phase. At the molecular level these changes were associated with increased androgen receptor (AR), p21, p27 and p53 expression in DU145 cells.
    Conclusion: The results suggest that Id4 acts directly as a tumor suppressor by influencing a hierarchy of cellular processes at multiple levels that leads to decreased cell proliferation and a change in morphology that is possibly mediated through induction of previously

  13. Auditory Discrimination of Lexical Stress Patterns in Hearing-Impaired Infants with Cochlear Implants Compared with Normal Hearing: Influence of Acoustic Cues and Listening Experience to the Ambient Language.

    Science.gov (United States)

    Segal, Osnat; Houston, Derek; Kishon-Rabin, Liat

    2016-01-01

    To assess discrimination of lexical stress patterns in infants with cochlear implants (CI) compared with infants with normal hearing (NH). While criteria for cochlear implantation have expanded to infants as young as 6 months, little is known regarding infants' processing of suprasegmental-prosodic cues, which are known to be important for the first stages of language acquisition. Lexical stress is an example of such a cue, which, in hearing infants, has been shown to assist in segmenting words from fluent speech and in distinguishing between words that differ only in their stress pattern. To date, however, there are no data on the ability of infants with CIs to perceive lexical stress. Such information will provide insight into the speech characteristics that are available to these infants in their first steps of language acquisition. This is of particular interest given the known limitations of the CI device in transmitting speech information that is mediated by changes in fundamental frequency. Two groups of infants participated in this study. The first group included 20 profoundly hearing-impaired infants with CI, 12 to 33 months old, implanted under the age of 2.5 years (median age of implantation = 14.5 months), with 1 to 6 months of CI use (mean = 2.7 months) and no known additional problems. The second group included 48 NH infants, 11 to 14 months old, with normal development and no known risk factors for developmental delays. Infants were tested on their ability to discriminate between nonsense words that differed only in their stress pattern (/dóti/ versus /dotí/ and /dotí/ versus /dóti/) using the visual habituation procedure. The measure for discrimination was the change in looking time between the last habituation trial (e.g., /dóti/) and the novel trial (e.g., /dotí/). (1) Infants with CI showed discrimination between lexical stress patterns with only limited auditory experience with their implant device, (2) discrimination of stress

  14. Iron requirements of infants and toddlers

    DEFF Research Database (Denmark)

    Domellöf, Magnus; Braegger, Christian; Campoy, Cristina

    2014-01-01

    Iron deficiency (ID) is the most common micronutrient deficiency worldwide, and young children are a special risk group since their rapid growth leads to high iron requirements. Risk factors associated with a higher prevalence of iron deficiency anemia (IDA) include low birth weight, high cow's milk intake, low intake of iron-rich complementary foods, low socioeconomic status and immigrant status. The aim of this position paper is to review the field and provide recommendations regarding iron requirements in infants and toddlers, including those of moderately or marginally low birth weight. There is no evidence that iron supplementation of pregnant women improves iron status in their offspring in a European setting. Delayed cord clamping reduces the risk of iron deficiency. There is insufficient evidence to support general iron supplementation of healthy, European infants and toddlers of normal birth weight.

  15. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    Science.gov (United States)

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
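The statistical learning idea described above can be sketched with a generic mixture-model fit. This is an illustration only, not the authors' simulation: the two-dimensional "auditory + visual" cue values are synthetic, and the model is a plain diagonal-covariance GMM trained by EM:

```python
# Sketch: learn phonological categories from joint (auditory, visual) cue
# distributions with a 2-component Gaussian mixture fitted by EM.
# All cue values are synthetic; component count and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic categories, each emitting correlated (auditory, visual) cues.
cat0 = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
cat1 = rng.normal([3.0, 2.0], 0.5, size=(200, 2))
X = np.vstack([cat0, cat1])

mu = np.array([[0.5, 0.5], [2.0, 2.0]])  # initial means
var = np.ones((2, 2))                    # per-dimension variances
w = np.array([0.5, 0.5])                 # mixing weights
for _ in range(50):
    # E-step: responsibility of each component for each data point.
    dens = np.stack([
        w[k] * np.prod(np.exp(-(X - mu[k])**2 / (2 * var[k]))
                       / np.sqrt(2 * np.pi * var[k]), axis=1)
        for k in range(2)], axis=1)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances from responsibilities.
    nk = resp.sum(axis=0)
    w = nk / len(X)
    mu = (resp.T @ X) / nk[:, None]
    var = np.stack([(resp[:, k, None] * (X - mu[k])**2).sum(0) / nk[k]
                    for k in range(2)])

print(np.round(mu, 1))  # recovered category means, near (0, 0) and (3, 2)
```

The responsibilities computed in the E-step act as the learned cue weights: a new (auditory, visual) pair is categorized by which component assigns it higher posterior probability, which is how a distributional learner can come to treat the two cues as a unified signal.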

  16. Neural Entrainment to Speech Modulates Speech Intelligibility

    NARCIS (Netherlands)

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Baskent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and

  17. Direct identification and recognition of yeast species from clinical material by using albicans ID and CHROMagar Candida plates.

    OpenAIRE

    Baumgartner, C; Freydiere, A M; Gille, Y

    1996-01-01

    Two chromogenic media, Albicans ID and CHROMagar Candida agar plates, were compared with a reference medium, Sabouraud-chloramphenicol agar, and standard methods for the identification of yeast species. This study involved 951 clinical specimens. The detection rates for the two chromogenic media for polymicrobial specimens were 20% higher than that for the Sabouraud-chloramphenicol agar plates. The rates of identification of Candida albicans for Albicans ID and CHROMagar Candida agar plates w...

  18. VLM Tool for IDS Integration

    Directory of Open Access Journals (Sweden)

    Cǎtǎlin NAE

    2010-03-01

    This paper is dedicated to a very specific type of analysis tool (VLM - Vortex Lattice Method) to be integrated in an IDS - Integrated Design System, tailored for use by the small aircraft industry. The major interest is the possibility of simulating, at very low computational cost, a preliminary set of global aerodynamic characteristics (Lift, Drag, Pitching Moment) and aerodynamic derivatives for longitudinal and lateral-directional stability analysis. This work enables fast investigation of the influence of configuration changes in a very efficient computational environment. Using experimental data and/or CFD information for a specific calibration of the VLM method, the reliability of the analysis may be increased so that a first (iteration zero) aerodynamic evaluation of the preliminary 3D configuration is possible. The output of this tool is basic-state aerodynamics and the associated stability and control derivatives, as well as a complete set of information on specific loads on major airframe components. The major interest in using and validating this type of method comes from the possibility of integrating it as a tool in an IDS system for the conceptual design phase, as considered for development in the CESAR project (IP, EU FP6).

  19. 77 FR 55813 - Transition of DOE-ID Public Reading Room

    Science.gov (United States)

    2012-09-11

    ... to the INL Research Library at 1776 Science Center Drive, Idaho Falls, ID 83401, beginning September 1, 2012. Access to documents will also be electronically accessible through the World Wide Web. For direction in accessing documents electronically through the World Wide Web, please refer to the Idaho...

  20. Direct and Indirect Effects of Behavioral Parent Training on Infant Language Production.

    Science.gov (United States)

    Bagner, Daniel M; Garcia, Dainelys; Hill, Ryan

    2016-03-01

    Given the strong association between early behavior problems and language impairment, we examined the effect of a brief home-based adaptation of Parent-child Interaction Therapy on infant language production. Sixty infants (55% male; mean age 13.47±1.31 months) were recruited at a large urban primary care clinic and were included if their scores exceeded the 75th percentile on a brief screener of early behavior problems. Families were randomly assigned to receive the home-based parenting intervention or standard pediatric primary care. The observed number of infant total (i.e., token) and different (i.e., type) utterances spoken during an observation of an infant-led play and a parent-report measure of infant externalizing behavior problems were examined at pre- and post-intervention and at 3- and 6-month follow-ups. Infants receiving the intervention demonstrated a significantly higher number of observed different and total utterances at the 6-month follow-up compared to infants in standard care. Furthermore, there was an indirect effect of the intervention on infant language production, such that the intervention led to decreases in infant externalizing behavior problems from pre- to post-intervention, which, in turn, led to increases in infant different utterances at the 3- and 6-month follow-ups and total utterances at the 6-month follow-up. Results provide initial evidence for the effect of this brief and home-based intervention on infant language production, including the indirect effect of the intervention on infant language through improvements in infant behavior, highlighting the importance of targeting behavior problems in early intervention. Copyright © 2015. Published by Elsevier Ltd.

  1. Infants' Temperament and Mothers', and Fathers' Depression Predict Infants' Attention to Objects Paired with Emotional Faces.

    Science.gov (United States)

    Aktar, Evin; Mandell, Dorothy J; de Vente, Wieke; Majdandžić, Mirjana; Raijmakers, Maartje E J; Bögels, Susan M

    2016-07-01

    Between 10 and 14 months, infants gain the ability to learn about unfamiliar stimuli by observing others' emotional reactions to those stimuli, so-called social referencing (SR). Joint processing of emotion and head/gaze direction is essential for SR. This study tested emotion and head/gaze direction effects on infants' attention via pupillometry in the period following the emergence of SR. Pupil responses of 14-to-17-month-old infants (N = 57) were measured during computerized presentations of unfamiliar objects alone, before and after being paired with emotional (happy, sad, fearful vs. neutral) faces gazing toward (vs. away from) the objects. Additionally, the associations of infants' temperament and parents' negative affect/depression/anxiety with infants' pupil responses were explored. Both mothers and fathers of participating infants completed questionnaires about their negative affect, depression and anxiety symptoms and their infants' negative temperament. Infants allocated more attention (larger pupils) to negative vs. neutral faces when the faces were presented alone, while they allocated less attention to objects paired with emotional vs. neutral faces independent of head/gaze direction. Sad (but not fearful) temperament predicted more attention to emotional faces. Infants' sad temperament moderated the associations of mothers' depression (but not anxiety) with infants' attention to objects. Maternal depression predicted more attention to objects paired with emotional expressions in infants low in sad temperament, while it predicted less attention in infants high in sad temperament. Fathers' depression (but not anxiety) predicted more attention to objects paired with emotional expressions independent of infants' temperament. We conclude that infants' own temperamental dispositions for sadness, and their exposure to mothers' and fathers' depressed moods, may influence infants' attention to emotion-object associations in social learning contexts.

  2. Speech misperception: speaking and seeing interfere differently with hearing.

    Directory of Open Access Journals (Sweden)

    Takemi Mochida

    Speech perception is thought to be linked to speech motor production. This linkage is considered to mediate multimodal aspects of speech perception, such as audio-visual and audio-tactile integration. However, direct coupling between articulatory movement and auditory perception has been little studied. The present study reveals a clear dissociation between the effects of a listener's own speech action and the effects of viewing another's speech movements on the perception of auditory phonemes. We assessed the intelligibility of the syllables [pa], [ta], and [ka] when listeners silently and simultaneously articulated syllables that were congruent/incongruent with the syllables they heard. The intelligibility was compared with a condition where the listeners simultaneously watched another's mouth producing congruent/incongruent syllables, but did not articulate. The intelligibility of [ta] and [ka] was degraded by articulating [ka] and [ta] respectively, which are associated with the same primary articulator (tongue) as the heard syllables. But they were not affected by articulating [pa], which is associated with a different primary articulator (lips) from the heard syllables. In contrast, the intelligibility of [ta] and [ka] was degraded by watching the production of [pa]. These results indicate that the articulatory-induced distortion of speech perception occurs in an articulator-specific manner while visually induced distortion does not. The articulator-specific nature of the auditory-motor interaction in speech perception suggests that speech motor processing directly contributes to our ability to hear speech.

  3. Child speech, language and communication need re-examined in a public health context: a new direction for the speech and language therapy profession.

    Science.gov (United States)

    Law, James; Reilly, Sheena; Snow, Pamela C

    2013-01-01

    Historically, speech and language therapy services for children have been framed within a rehabilitative framework, with explicit assumptions made about providing therapy to individuals. While this is clearly important in many cases, we argue that this model needs revisiting for a number of reasons. First, our understanding of the nature of disability, and therefore communication disabilities, has changed over the past century. Second, there is an increasing understanding of the impact that the social gradient has on early communication difficulties. Finally, how these factors interact with one another and have an impact across the life course remains poorly understood. To describe the public health paradigm and explore its implications for speech and language therapy with children. We test the application of public health methodologies to speech and language therapy services by looking at four dimensions of service delivery: (1) the uptake of services and whether those children who need services receive them; (2) the development of universal prevention services in relation to social disadvantage; (3) the risk of over-interpreting co-morbidity from clinical samples; and (4) the overlap between communicative competence and mental health. It is concluded that there is a strong case for speech and language therapy services to be reconceptualized to respond to the needs of the whole population and according to socially determined needs, focusing on primary prevention. This is not to disregard individual need, but to highlight the needs of the population as a whole. Although the socio-political context differs between countries, we maintain that this is relevant wherever speech and language therapists have a responsibility for covering whole populations. Finally, we recommend that speech and language therapy services be conceptualized within the framework laid down in The Ottawa Charter for Health Promotion. © 2013 Royal College of Speech and Language

  4. Reporting and Reacting: Concurrent Responses to Reported Speech.

    Science.gov (United States)

    Holt, Elizabeth

    2000-01-01

    Uses conversation analysis to investigate reported speech in talk-in-interaction. Beginning with an examination of direct and indirect reported speech, the article highlights some of the design features of the former, and the sequential environments in which it occurs. (Author/VWL)

  5. Hate speech

    Directory of Open Access Journals (Sweden)

    Anne Birgitta Nilsen

    2014-12-01

    The manifesto of the Norwegian terrorist Anders Behring Breivik is based on the “Eurabia” conspiracy theory. This theory is a key starting point for hate speech amongst many right-wing extremists in Europe, but also has ramifications beyond these environments. In brief, proponents of the Eurabia theory claim that Muslims are occupying Europe and destroying Western culture, with the assistance of the EU and European governments. By contrast, members of Al-Qaeda and other extreme Islamists promote the conspiracy theory “the Crusade” in their hate speech directed against the West. Proponents of the latter theory argue that the West is leading a crusade to eradicate Islam and Muslims, a crusade that is similarly facilitated by their governments. This article presents analyses of texts written by right-wing extremists and Muslim extremists in an effort to shed light on how hate speech promulgates conspiracy theories in order to spread hatred and intolerance.The aim of the article is to contribute to a more thorough understanding of hate speech’s nature by applying rhetorical analysis. Rhetorical analysis is chosen because it offers a means of understanding the persuasive power of speech. It is thus a suitable tool to describe how hate speech works to convince and persuade. The concepts from rhetorical theory used in this article are ethos, logos and pathos. The concept of ethos is used to pinpoint factors that contributed to Osama bin Laden's impact, namely factors that lent credibility to his promotion of the conspiracy theory of the Crusade. In particular, Bin Laden projected common sense, good morals and good will towards his audience. He seemed to have coherent and relevant arguments; he appeared to possess moral credibility; and his use of language demonstrated that he wanted the best for his audience.The concept of pathos is used to define hate speech, since hate speech targets its audience's emotions. In hate speech it is the

  6. Social interaction facilitates word learning in preverbal infants: Word-object mapping and word segmentation.

    Science.gov (United States)

    Hakuno, Yoko; Omori, Takahide; Yamamoto, Jun-Ichi; Minagawa, Yasuyo

    2017-08-01

    In natural settings, infants learn spoken language with the aid of a caregiver who explicitly provides social signals. Although previous studies have demonstrated that young infants are sensitive to these signals that facilitate language development, the impact of real-life interactions on early word segmentation and word-object mapping remains elusive. We tested whether infants aged 5-6 months and 9-10 months could segment a word from continuous speech and acquire a word-object relation in an ecologically valid setting. In Experiment 1, infants were exposed to a live tutor, while in Experiment 2, another group of infants were exposed to a televised tutor. Results indicate that both younger and older infants were capable of segmenting a word and learning a word-object association only when the stimuli were derived from a live tutor in a natural manner, suggesting that real-life interaction enhances the learning of spoken words in preverbal infants. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Facts about Infectious Diseases (ID)

    Science.gov (United States)

    ... an ID Specialist? Facts about ID Pocketcard Infectious diseases are caused by microscopic organisms that penetrate the body’s natural ... from diseases such as AIDS or treatment of diseases such as cancer, may allow ... of contaminated food or water, bites from vectors such as ticks or mosquitoes ...

  8. One ID Card for the Entire Campus.

    Science.gov (United States)

    Ridenour, David P.; Ferguson, Linda M.

    1986-01-01

    The implementation by Indiana State University of a machine-readable photo ID system for their food services prompted an investigation into the available alternatives and requirements for a more efficient all-University ID card system. The new ID system is described. (AUTHOR/MLW)

  9. Power in methods: language to infants in structured and naturalistic contexts.

    Science.gov (United States)

    Tamis-LeMonda, Catherine S; Kuchirko, Yana; Luo, Rufan; Escobar, Kelly; Bornstein, Marc H

    2017-11-01

    Methods can powerfully affect conclusions about infant experiences and learning. Data from naturalistic observations may paint a very different picture of learning and development from those based on structured tasks, as illustrated in studies of infant walking, object permanence, intention understanding, and so forth. Using language as a model system, we compared the speech of 40 mothers to their 13-month-old infants during structured play and naturalistic home routines. The contrasting methods yielded unique portrayals of infant language experiences, while simultaneously underscoring cross-situational correspondence at an individual level. Infants experienced substantially more total words and different words per minute during structured play than they did during naturalistic routines. Language input during structured play was consistently dense from minute to minute, whereas language during naturalistic routines showed striking fluctuations interspersed with silence. Despite these differences, infants' language experiences during structured play mirrored the peak language interactions infants experienced during naturalistic routines, and correlations between language inputs in the two conditions were strong. The implications of developmental methods for documenting the nature of experiences and individual differences are discussed. © 2017 John Wiley & Sons Ltd.

  10. Imitation and speech: commonalities within Broca's area.

    Science.gov (United States)

    Kühn, Simone; Brass, Marcel; Gallinat, Jürgen

    2013-11-01

    The so-called embodiment of communication has attracted considerable interest. Recently a growing number of studies have proposed a link between Broca's area's involvement in action processing and its involvement in speech. The present quantitative meta-analysis set out to test whether neuroimaging studies on imitation and overt speech show overlap within the inferior frontal gyrus. By means of activation likelihood estimation (ALE), we investigated concurrence of brain regions activated by object-free hand imitation studies as well as overt speech studies including simple syllable and more complex word production. We found direct overlap between imitation and speech in bilateral pars opercularis (BA 44) within Broca's area. Subtraction analyses revealed no unique localization for either speech or imitation. To verify the potential of ALE subtraction analysis to detect unique involvement within Broca's area, we contrasted the results of a meta-analysis on motor inhibition and imitation and found separable regions involved for imitation. This is the first meta-analysis to compare the neural correlates of imitation and overt speech. The results are in line with the proposed evolutionary roots of speech in imitation.

  11. The Advantage of Story-Telling: Children's Interpretation of Reported Speech in Narratives

    Science.gov (United States)

    Köder, Franziska; Maier, Emar

    2018-01-01

    Children struggle with the interpretation of pronouns in direct speech ("Ann said, 'I get a cookie'"), but not in indirect speech ("Ann said that she gets a cookie") (Köder & Maier, 2016). Yet children's books consistently favor direct over indirect speech (Baker & Freebody, 1989). To reconcile these seemingly…

  12. THE REGULATIONS RELATING TO FOODSTUFFS FOR INFANTS AND YOUNG CHILDREN (R 991): A FORMULA FOR THE PROMOTION OF BREASTFEEDING OR CENSORSHIP OF COMMERCIAL SPEECH?

    Directory of Open Access Journals (Sweden)

    Lize Mills

    2014-04-01

    Full Text Available The regulation of commercial speech in the interests of public health is an issue which has recently become the topic of numerous debates. Two examples of such governmental regulation are the subjects of discussion in this article, namely the prohibition on the advertising and promotion of tobacco products, as well as the proposed prohibition on the advertising and promotion of infant formulae and other foods and products marketed as being suitable for infants or young children. The article seeks to evaluate the recently proposed regulations published in terms of the Foodstuffs, Cosmetics and Disinfectants Act in the light of the reasoning by the Supreme Court of Appeal in the British American Tobacco South Africa (Pty) Limited v Minister of Health 463/2011 [2012] ZASCA 107 (20 June 2012) decision, and in particular in terms of the section 36 test of reasonableness and proportionality found in the Constitution of the Republic of South Africa, 1996. It argues that, although the South African Department of Health must be applauded for its attempt at improving public health in the country, some of the provisions of the proposed regulations are not constitutionally sound. It will be contended that, despite the fact that the promotion of breastfeeding is a laudable goal, the introduction only of measures which restrict the right to advertise these types of products will not necessarily achieve this objective.

  13. [Prosody, speech input and language acquisition].

    Science.gov (United States)

    Jungheim, M; Miller, S; Kühn, D; Ptok, M

    2014-04-01

    In order to acquire language, children require speech input. The prosody of the speech input plays an important role. In most cultures adults modify their code when communicating with children. Compared to normal speech, this code differs especially with regard to prosody. For this review, a selective literature search in PubMed and Scopus was performed. Prosodic characteristics are a key feature of spoken language. By analysing prosodic features, children gain knowledge about underlying grammatical structures. Child-directed speech (CDS) is modified in a way that meaningful sequences are highlighted acoustically so that important information can be extracted from the continuous speech flow more easily. CDS is said to enhance the representation of linguistic signs. Taking into consideration what has previously been described in the literature regarding the perception of suprasegmentals, CDS seems to be able to support language acquisition due to the correspondence of prosodic and syntactic units. However, no findings have been reported stating that the linguistically reduced CDS could hinder first language acquisition.

  14. Id-1 and Id-2 genes and products as therapeutic targets for treatment of breast cancer and other types of carcinoma

    Science.gov (United States)

    Desprez, Pierre-Yves; Campisi, Judith

    2014-09-30

    A method for treatment and amelioration of breast, cervical, ovarian, endometrial, squamous cell, and prostate cancer and melanoma in a patient comprising targeting Id-1 or Id-2 gene expression with a delivery vehicle comprising a product which modulates Id-1 or Id-2 expression.

  15. Speech impairment in Down syndrome: a review.

    Science.gov (United States)

    Kent, Ray D; Vorperian, Houri K

    2013-02-01

    This review summarizes research on disorders of speech production in Down syndrome (DS) for the purposes of informing clinical services and guiding future research. Review of the literature was based on searches using MEDLINE, Google Scholar, PsycINFO, and HighWire Press, as well as consideration of reference lists in retrieved documents (including online sources). Search terms emphasized functions related to voice, articulation, phonology, prosody, fluency, and intelligibility. The following conclusions pertain to four major areas of review: voice, speech sounds, fluency and prosody, and intelligibility. The first major area is voice. Although a number of studies have reported on vocal abnormalities in DS, major questions remain about the nature and frequency of the phonatory disorder. Results of perceptual and acoustic studies have been mixed, making it difficult to draw firm conclusions or even to identify sensitive measures for future study. The second major area is speech sounds. Articulatory and phonological studies show that speech patterns in DS are a combination of delayed development and errors not seen in typical development. Delayed (i.e., developmental) and disordered (i.e., nondevelopmental) patterns are evident by the age of about 3 years, although DS-related abnormalities possibly appear earlier, even in infant babbling. The third major area is fluency and prosody. Stuttering and/or cluttering occur in DS at rates of 10%-45%, compared with about 1% in the general population. Research also points to significant disturbances in prosody. The fourth major area is intelligibility. Studies consistently show marked limitations in this area, but only recently has the research gone beyond simple rating scales.

  16. Speech-Language Dissociations, Distractibility, and Childhood Stuttering

    Science.gov (United States)

    Conture, Edward G.; Walden, Tedra A.; Lambert, Warren E.

    2015-01-01

    Purpose This study investigated the relation among speech-language dissociations, attentional distractibility, and childhood stuttering. Method Participants were 82 preschool-age children who stutter (CWS) and 120 who do not stutter (CWNS). Correlation-based statistics (Bates, Appelbaum, Salcedo, Saygin, & Pizzamiglio, 2003) identified dissociations across 5 norm-based speech-language subtests. The Behavioral Style Questionnaire Distractibility subscale measured attentional distractibility. Analyses addressed (a) between-groups differences in the number of children exhibiting speech-language dissociations; (b) between-groups distractibility differences; (c) the relation between distractibility and speech-language dissociations; and (d) whether interactions between distractibility and dissociations predicted the frequency of total, stuttered, and nonstuttered disfluencies. Results More preschool-age CWS exhibited speech-language dissociations compared with CWNS, and more boys exhibited dissociations compared with girls. In addition, male CWS were less distractible than female CWS and female CWNS. For CWS, but not CWNS, less distractibility (i.e., greater attention) was associated with more speech-language dissociations. Last, interactions between distractibility and dissociations did not predict speech disfluencies in CWS or CWNS. Conclusions The present findings suggest that for preschool-age CWS, attentional processes are associated with speech-language dissociations. Future investigations are warranted to better understand the directionality of effect of this association (e.g., inefficient attentional processes → speech-language dissociations vs. inefficient attentional processes ← speech-language dissociations). PMID:26126203

  17. UBE2A deficiency syndrome: Mild to severe intellectual disability accompanied by seizures, absent speech, urogenital, and skin anomalies in male patients.

    NARCIS (Netherlands)

    Leeuw, N. de; Bulk, S.; Green, A.; Jaeckle-Santos, L.; Baker, L.A.; Zinn, A.R.; Kleefstra, T.; Smagt, J.J. van der; Vianne Morgante, A.M.; Vries, L.B.A. de; Bokhoven, J.H.L.M. van; Brouwer, A.P.M. de

    2010-01-01

    We describe three patients with a comparable deletion encompassing SLC25A43, SLC25A5, CXorf56, UBE2A, NKRF, and two non-coding RNA genes, U1 and LOC100303728. Moderate to severe intellectual disability (ID), psychomotor retardation, severely impaired/absent speech, seizures, and urogenital anomalies

  18. Speaker gaze increases information coupling between infant and adult brains.

    Science.gov (United States)

    Leong, Victoria; Byrne, Elizabeth; Clackson, Kaili; Georgieva, Stanimira; Lam, Sarah; Wass, Sam

    2017-12-12

    When infants and adults communicate, they exchange social signals of availability and communicative intention such as eye gaze. Previous research indicates that when communication is successful, close temporal dependencies arise between adult speakers' and listeners' neural activity. However, it is not known whether similar neural contingencies exist within adult-infant dyads. Here, we used dual-electroencephalography to assess whether direct gaze increases neural coupling between adults and infants during screen-based and live interactions. In experiment 1 (n = 17), infants viewed videos of an adult who was singing nursery rhymes with (i) direct gaze (looking forward), (ii) indirect gaze (head and eyes averted by 20°), or (iii) direct-oblique gaze (head averted but eyes orientated forward). In experiment 2 (n = 19), infants viewed the same adult in a live context, singing with direct or indirect gaze. Gaze-related changes in adult-infant neural network connectivity were measured using partial directed coherence. Across both experiments, the adult had a significant (Granger) causal influence on infants' neural activity, which was stronger during direct and direct-oblique gaze relative to indirect gaze. During live interactions, infants also influenced the adult more during direct than indirect gaze. Further, infants vocalized more frequently during live direct gaze, and individual infants who vocalized longer also elicited stronger synchronization from the adult. These results demonstrate that direct gaze strengthens bidirectional adult-infant neural connectivity during communication. Thus, ostensive social signals could act to bring brains into mutual temporal alignment, creating a joint-networked state that is structured to facilitate information transfer during early communication and learning. Copyright © 2017 the Author(s). Published by PNAS.

  19. The Antibody Response of Pregnant Cameroonian Women to VAR2CSA ID1-ID2a, a Small Recombinant Protein Containing the CSA-Binding Site

    Science.gov (United States)

    Babakhanyan, Anna; Leke, Rose G. F.; Salanti, Ali; Bobbili, Naveen; Gwanmesia, Philomina; Leke, Robert J. I.; Quakyi, Isabella A.; Chen, John J.; Taylor, Diane Wallace

    2014-01-01

    In pregnant women, Plasmodium falciparum-infected erythrocytes expressing the VAR2CSA antigen bind to chondroitin sulfate A in the placenta causing placental malaria. The binding site of VAR2CSA is present in the ID1-ID2a region. This study sought to determine if pregnant Cameroonian women naturally acquire antibodies to ID1-ID2a and if antibodies to ID1-ID2a correlate with absence of placental malaria at delivery. Antibody levels to full-length VAR2CSA and ID1-ID2a were measured in plasma samples from 745 pregnant Cameroonian women, 144 Cameroonian men, and 66 US subjects. IgM levels and IgG avidity to ID1-ID2a were also determined. As expected, antibodies to ID1-ID2a were absent in US controls. Although pregnant Cameroonian women developed increasing levels of antibodies to full-length VAR2CSA during pregnancy, no increase in either IgM or IgG to ID1-ID2a was observed. Surprisingly, no differences in antibody levels to ID1-ID2a were detected between Cameroonian men and pregnant women. For example, in rural settings only 8–9% of males had antibodies to full-length VAR2CSA, but 90–96% had antibodies to ID1-ID2a. In addition, no significant difference in the avidity of IgG to ID1-ID2a was found between pregnant women and Cameroonian men, and no correlation between antibody levels at delivery and absence of placental malaria was found. Thus, the response to ID1-ID2a was not pregnancy specific, but was predominantly against cross-reactive epitopes, which may have been induced by other PfEMP1 antigens, malarial antigens, or microbes. Currently, ID1-ID2a is a leading vaccine candidate, since it binds to CSA with the same affinity as the full-length molecule and elicits binding-inhibitory antibodies in animals. Further studies are needed to determine if the presence of naturally acquired cross-reactive antibodies in women living in malaria-endemic countries will alter the response to ID1-ID2a following vaccination with ID1-ID2a. PMID:24505415

  20. The antibody response of pregnant Cameroonian women to VAR2CSA ID1-ID2a, a small recombinant protein containing the CSA-binding site.

    Directory of Open Access Journals (Sweden)

    Anna Babakhanyan

    Full Text Available In pregnant women, Plasmodium falciparum-infected erythrocytes expressing the VAR2CSA antigen bind to chondroitin sulfate A in the placenta causing placental malaria. The binding site of VAR2CSA is present in the ID1-ID2a region. This study sought to determine if pregnant Cameroonian women naturally acquire antibodies to ID1-ID2a and if antibodies to ID1-ID2a correlate with absence of placental malaria at delivery. Antibody levels to full-length VAR2CSA and ID1-ID2a were measured in plasma samples from 745 pregnant Cameroonian women, 144 Cameroonian men, and 66 US subjects. IgM levels and IgG avidity to ID1-ID2a were also determined. As expected, antibodies to ID1-ID2a were absent in US controls. Although pregnant Cameroonian women developed increasing levels of antibodies to full-length VAR2CSA during pregnancy, no increase in either IgM or IgG to ID1-ID2a was observed. Surprisingly, no differences in antibody levels to ID1-ID2a were detected between Cameroonian men and pregnant women. For example, in rural settings only 8-9% of males had antibodies to full-length VAR2CSA, but 90-96% had antibodies to ID1-ID2a. In addition, no significant difference in the avidity of IgG to ID1-ID2a was found between pregnant women and Cameroonian men, and no correlation between antibody levels at delivery and absence of placental malaria was found. Thus, the response to ID1-ID2a was not pregnancy specific, but was predominantly against cross-reactive epitopes, which may have been induced by other PfEMP1 antigens, malarial antigens, or microbes. Currently, ID1-ID2a is a leading vaccine candidate, since it binds to CSA with the same affinity as the full-length molecule and elicits binding-inhibitory antibodies in animals. Further studies are needed to determine if the presence of naturally acquired cross-reactive antibodies in women living in malaria-endemic countries will alter the response to ID1-ID2a following vaccination with ID1-ID2a.

  1. Evaluation of ID-PaGIA syphilis antibody test.

    Science.gov (United States)

    Naaber, Paul; Makoid, Ene; Aus, Anneli; Loivukene, Krista; Poder, Airi

    2009-01-01

    Laboratory diagnosis of syphilis is usually accomplished by serology. There are currently a large number of different commercial treponemal tests available that vary in format, sensitivity and specificity. The aim was to evaluate the ID-PaGIA Syphilis Antibody Test as an alternative to other specific treponemal tests for primary screening or confirmation of diagnosis. Serum samples from healthy adults (n = 100) were used to determine the specificity of ID-PaGIA. To evaluate the sensitivity of ID-PaGIA, serum samples (n = 101) from patients with confirmed or suspected syphilis were tested for syphilis antibodies with FTA-Abs IgM, ID-PaGIA, ELISA IgM and TPHA tests. No false-positive results were found with ID-PaGIA. Sensitivity of the various treponemal tests was the following: FTA-Abs IgM, 95.5%; ID-PaGIA and ELISA IgM, 94%; and TPHA, 75%. The positive and negative predictive values of ID-PaGIA were 100 and 89.5%, respectively. Compared with other treponemal tests, ID-PaGIA has excellent sensitivity and specificity.

  2. Start/End Delays of Voiced and Unvoiced Speech Signals

    Energy Technology Data Exchange (ETDEWEB)

    Herrnstein, A

    1999-09-24

    Recent experiments using low-power EM-radar-like sensors (e.g., GEMs) have demonstrated a new method for measuring vocal fold activity and the onset times of voiced speech, as vocal fold contact begins to take place. Similarly, the end time of a voiced speech segment can be measured. Second, it appears that in most normal uses of American English speech, unvoiced-speech segments directly precede or directly follow voiced-speech segments. For many applications, it is useful to know typical duration times of these unvoiced speech segments. A corpus, assembled earlier, of spoken "Timit" words, phrases, and sentences, recorded using simultaneously measured acoustic and EM-sensor glottal signals from 16 male speakers, was used for this study. By inspecting the onset (or end) of unvoiced speech, using the acoustic signal, and the onset (or end) of voiced speech, using the EM-sensor signal, the average duration times for unvoiced segments preceding onset of vocalization were found to be 300 ms, and for following segments, 500 ms. An unvoiced speech period is then defined in time, first by using the onset of the EM-sensed glottal signal as the onset-time marker for the voiced speech segment and the end marker for the unvoiced segment. Then, by subtracting 300 ms from the onset time mark of voicing, the unvoiced speech segment start time is found. Similarly, the times for a following unvoiced speech segment can be found. While data of this nature have proven to be useful for work in our laboratory, a great deal of additional work remains to validate such data for use with general populations of users. These procedures have been useful for applying optimal processing algorithms over time segments of unvoiced, voiced, and non-speech acoustic signals. For example, these data appear to be of use in speaker validation, in vocoding, and in denoising algorithms.
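    The bracketing procedure in this abstract can be sketched as follows. This is a minimal illustration, assuming voicing onset/end times (in seconds) taken from an EM-sensed glottal signal; the 300 ms and 500 ms offsets are the corpus-average durations reported above, not per-speaker values, and the function name and structure are hypothetical:

    ```python
    # Corpus-average unvoiced-segment durations reported in the abstract.
    PRECEDING_UNVOICED_S = 0.300  # unvoiced run before voicing onset
    FOLLOWING_UNVOICED_S = 0.500  # unvoiced run after voicing end

    def unvoiced_spans(voiced_onset_s, voiced_end_s):
        """Estimate (start, end) times of the unvoiced segments that
        bracket one voiced segment, given EM-sensed voicing times."""
        # Preceding unvoiced segment: subtract the average duration from
        # the voicing onset, clamped so it never starts before time 0.
        before = (max(0.0, voiced_onset_s - PRECEDING_UNVOICED_S), voiced_onset_s)
        # Following unvoiced segment: extend past the voicing end.
        after = (voiced_end_s, voiced_end_s + FOLLOWING_UNVOICED_S)
        return before, after

    # Example: a voiced segment detected from 1.2 s to 1.9 s.
    before, after = unvoiced_spans(1.2, 1.9)
    ```

    In practice, such estimated spans would only delimit where to apply unvoiced-specific processing (e.g., denoising); actual unvoiced durations vary around these averages.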

  3. Communicating prognosis with parents of critically ill infants: direct observation of clinician behaviors.

    Science.gov (United States)

    Boss, R D; Lemmon, M E; Arnold, R M; Donohue, P K

    2017-11-01

    Delivering prognostic information to families requires clinicians to forecast an infant's illness course and future. We lack robust empirical data about how prognosis is shared and how that affects clinician-family concordance regarding infant outcomes. Prospective audiorecording of neonatal intensive care unit family conferences, immediately followed by parent/clinician surveys. Existing qualitative analysis frameworks were applied. We analyzed 19 conferences. Most prognostic discussion targeted predicted infant functional needs, for example, medications or feeding. There was little discussion of how infant prognosis would affect infant/family quality of life. Prognostic framing was typically optimistic. Most parents left the conference believing their infant's prognosis to be more optimistic than did clinicians. Clinician approach to prognostic disclosure in these audiotaped family conferences tended to be broad and optimistic, without detail regarding implications of infant health for infant/family quality of life. Families and clinicians left these conversations with little consensus about infant prognosis.

  4. The fragility of freedom of speech.

    Science.gov (United States)

    Shackel, Nicholas

    2013-05-01

    Freedom of speech is a fundamental liberty that imposes a stringent duty of tolerance. Tolerance is limited by direct incitements to violence. False notions and bad laws on speech have obscured our view of this freedom. Hence, perhaps, the self-righteous intolerance, incitements and threats in response to Giubilini and Minerva. Those who disagree have the right to argue back but their attempts to shut us up are morally wrong.

  5. Musician advantage for speech-on-speech perception

    NARCIS (Netherlands)

    Başkent, Deniz; Gaudrain, Etienne

    Evidence for transfer of musical training to better perception of speech in noise has been mixed. Unlike speech-in-noise, speech-on-speech perception utilizes many of the skills that musical training improves, such as better pitch perception and stream segregation, as well as use of higher-level

  6. An Empirical Study of the Use of Norm-based Direct Speech in Danish Courtrooms - or Deviations from such Use - on the Basis of the Prescriptive Norm-based Contents of Danish Court Proceedings

    DEFF Research Database (Denmark)

    Christensen, Tina Paulsen

    in terms of what constitutes good interpreting. When selecting a form of address, the guidelines state that the Danish legislative community equates good interpreting and interpreting quality with the use of the direct, first-person style and that this applies to all actors in a courtroom. For example…, Berk-Seligson (1990: 61/151) notes that professional actors often approach the interpreter and not the non-native speaking individual. The present empirical study examines the legal discourse in terms of the use of direct speech in Danish court proceedings. In this context, it is examined whether any… conclusions may be drawn for situations in which professional actors, specifically judges and justices, deviate from the recommended use of direct speech. Thus, the primary objective of my paper is to study the potential correlation between the use of direct/indirect speech and certain contents of court…

  7. INTEGRATING MACHINE TRANSLATION AND SPEECH SYNTHESIS COMPONENT FOR ENGLISH TO DRAVIDIAN LANGUAGE SPEECH TO SPEECH TRANSLATION SYSTEM

    Directory of Open Access Journals (Sweden)

    J. SANGEETHA

    2015-02-01

    Full Text Available This paper describes an interface between the machine translation and speech synthesis components of an English-to-Tamil speech-to-speech translation system. The speech translation system consists of three modules: automatic speech recognition, machine translation and text-to-speech synthesis. Many procedures for the integration of speech recognition and machine translation have been proposed, but the speech synthesis component has not yet received the same attention. In this paper, we focus on the integration of machine translation and speech synthesis, and report a subjective evaluation investigating the impact of the speech synthesis component, the machine translation component, and their integration. Here we implement a hybrid machine translation approach (a combination of rule-based and statistical machine translation) and a concatenative, syllable-based speech synthesis technique. In order to retain the naturalness and intelligibility of the synthesized speech, Auto Associative Neural Network (AANN) prosody prediction is used in this work. The results of this investigation demonstrate that the naturalness and intelligibility of the synthesized speech are strongly influenced by the fluency and correctness of the translated text.

  8. Mutations affecting the SAND domain of DEAF1 cause intellectual disability with severe speech impairment and behavioral problems.

    Science.gov (United States)

    Vulto-van Silfhout, Anneke T; Rajamanickam, Shivakumar; Jensik, Philip J; Vergult, Sarah; de Rocker, Nina; Newhall, Kathryn J; Raghavan, Ramya; Reardon, Sara N; Jarrett, Kelsey; McIntyre, Tara; Bulinski, Joseph; Ownby, Stacy L; Huggenvik, Jodi I; McKnight, G Stanley; Rose, Gregory M; Cai, Xiang; Willaert, Andy; Zweier, Christiane; Endele, Sabine; de Ligt, Joep; van Bon, Bregje W M; Lugtenberg, Dorien; de Vries, Petra F; Veltman, Joris A; van Bokhoven, Hans; Brunner, Han G; Rauch, Anita; de Brouwer, Arjan P M; Carvill, Gemma L; Hoischen, Alexander; Mefford, Heather C; Eichler, Evan E; Vissers, Lisenka E L M; Menten, Björn; Collard, Michael W; de Vries, Bert B A

    2014-05-01

    Recently, we identified in two individuals with intellectual disability (ID) different de novo mutations in DEAF1, which encodes a transcription factor with an important role in embryonic development. To ascertain whether these mutations in DEAF1 are causative for the ID phenotype, we performed targeted resequencing of DEAF1 in an additional cohort of over 2,300 individuals with unexplained ID and identified two additional individuals with de novo mutations in this gene. All four individuals had severe ID with severely affected speech development, and three showed severe behavioral problems. DEAF1 is highly expressed in the CNS, especially during early embryonic development. All four mutations were missense mutations affecting the SAND domain of DEAF1. Altered DEAF1 harboring any of the four amino acid changes showed impaired transcriptional regulation of the DEAF1 promoter. Moreover, behavioral studies in mice with a conditional knockout of Deaf1 in the brain showed memory deficits and increased anxiety-like behavior. Our results demonstrate that mutations in DEAF1 cause ID and behavioral problems, most likely as a result of impaired transcriptional regulation by DEAF1. Copyright © 2014 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  9. Differences in Neural Correlates of Speech Perception in 3 Month Olds at High and Low Risk for Autism Spectrum Disorder.

    Science.gov (United States)

    Edwards, Laura A; Wagner, Jennifer B; Tager-Flusberg, Helen; Nelson, Charles A

    2017-10-01

    In this study, we investigated neural precursors of language acquisition as potential endophenotypes of autism spectrum disorder (ASD) in 3-month-old infants at high and low familial ASD risk. Infants were imaged using functional near-infrared spectroscopy while they listened to auditory stimuli containing syllable repetitions; their neural responses were analyzed over left and right temporal regions. While female low risk infants showed initial neural activation that decreased over exposure to repetition-based stimuli, potentially indicating a habituation response to repetition in speech, female high risk infants showed no changes in neural activity over exposure. This finding may indicate a potential neural endophenotype of language development or ASD specific to females at risk for the disorder.

  10. Freedom of racist speech: Ego and expressive threats.

    Science.gov (United States)

    White, Mark H; Crandall, Christian S

    2017-09-01

    Do claims of "free speech" provide cover for prejudice? We investigate whether this defense of racist or hate speech serves as a justification for prejudice. In a series of 8 studies (N = 1,624), we found that explicit racial prejudice is a reliable predictor of the "free speech defense" of racist expression. Participants endorsed free speech values for singing racist songs or posting racist comments on social media; people high in prejudice endorsed free speech more than people low in prejudice (meta-analytic r = .43). This endorsement was not principled: high levels of prejudice did not predict endorsement of free speech values when identical speech was directed at coworkers or the police. Participants low in explicit racial prejudice actively avoided endorsing free speech values in racialized conditions compared to nonracial conditions, but participants high in racial prejudice increased their endorsement of free speech values in racialized conditions. Three experiments failed to find evidence that defense of racist speech by the highly prejudiced was based in self-relevant or self-protective motives. Two experiments found evidence that the free speech argument protected participants' own freedom to express their attitudes; the defense of others' racist speech seems motivated more by threats to autonomy than by threats to self-regard. These studies serve as an elaboration of the Justification-Suppression Model (Crandall & Eshleman, 2003) of prejudice expression. The justification of racist speech by endorsing fundamental political values can serve to buffer racial and hate speech from normative disapproval. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. The influence of spectral and spatial characteristics of early reflections on speech intelligibility

    DEFF Research Database (Denmark)

    Arweiler, Iris; Buchholz, Jörg; Dau, Torsten

    The auditory system employs different strategies to facilitate speech intelligibility in complex listening conditions. One of them is the integration of early reflections (ER’s) with the direct sound (DS) to increase the effective speech level. So far the underlying mechanisms of ER processing have… of listeners that speech intelligibility improved with added ER energy, but less than with added DS energy. An efficiency factor was introduced to quantify this effect. The difference in speech intelligibility could be mainly ascribed to the differences in the spectrum between the speech signals… binaural). The direction-dependency could be explained by the spectral changes introduced by the pinna, head, and torso. The results will be important with regard to the influence of signal processing strategies in modern hearing aids on speech intelligibility, because they might alter the spectral…

  12. Comparison of two speech privacy measurements, articulation index (AI) and speech privacy noise isolation class (NIC'), in open workplaces

    Science.gov (United States)

    Yoon, Heakyung C.; Loftness, Vivian

    2002-05-01

    Lack of speech privacy has been reported to be the main source of dissatisfaction among occupants in open workplaces, according to workplace surveys. Two speech privacy measurements, Articulation Index (AI), standardized by the American National Standards Institute in 1969, and Speech Privacy Noise Isolation Class (NIC', Noise Isolation Class Prime), adapted from Noise Isolation Class (NIC) by the U.S. General Services Administration (GSA) in 1979, have been claimed as objective tools to measure speech privacy in open offices. To evaluate which criterion, normal privacy for AI or satisfied privacy for NIC', is the better tool for assessing speech privacy in a dynamic open office environment, measurements were taken in the field. AI and NIC' values for different partition heights and workplace configurations were measured following ASTM E1130 (Standard Test Method for Objective Measurement of Speech Privacy in Open Offices Using Articulation Index) and GSA tests PBS-C.1 (Method for the Direct Measurement of Speech-Privacy Potential (SPP) Based on Subjective Judgments) and PBS-C.2 (Public Building Service Standard Method of Test Method for the Sufficient Verification of Speech-Privacy Potential (SPP) Based on Objective Measurements Including Methods for the Rating of Functional Interzone Attenuation and NC-Background), respectively.
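
    The Articulation Index referenced above is, at its core, a band-weighted audibility measure. The sketch below illustrates only that idea; the five bands, the equal importance weights, and the use of a 0-30 dB audibility range are simplifying assumptions for illustration, not the ANSI S3.5-1969 tables.

```python
# Simplified sketch of the band-weighted idea behind the Articulation Index (AI).
# The number of bands, the importance weights, and the 0-30 dB clipping range
# below are illustrative assumptions, not the exact ANSI S3.5-1969 procedure.

def articulation_index(band_snrs_db, band_weights):
    """AI as a weighted sum of per-band audibility.

    Each band's speech-to-noise ratio (dB) is clipped to [0, 30] and
    normalized, then weighted by that band's assumed importance for
    speech; the weights must sum to 1.
    """
    if abs(sum(band_weights) - 1.0) > 1e-9:
        raise ValueError("band importance weights must sum to 1")
    ai = 0.0
    for snr, w in zip(band_snrs_db, band_weights):
        audibility = min(max(snr, 0.0), 30.0) / 30.0  # fraction of the 30 dB range
        ai += w * audibility
    return ai

# Hypothetical 5-band example with equal weights.
weights = [0.2] * 5
print(round(articulation_index([30, 30, 30, 30, 30], weights), 6))  # 1.0: fully audible
print(round(articulation_index([0, 0, 0, 0, 0], weights), 6))       # 0.0: fully masked
print(round(articulation_index([15, 15, 15, 15, 15], weights), 6))  # 0.5
```

    Higher AI corresponds to more intelligible (and hence less private) speech, which is why open-office privacy criteria are stated as AI thresholds.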

  13. Coming Out of Their Shell: The Speech and Writing of Two Young Bilinguals in the Classroom.

    Science.gov (United States)

    Parke, Tim; Drury, Rose

    2000-01-01

    Examines the linguistic complexity and functional variety of the speech and writing of 2 bilingual children in year 2 of a British infant school. Focuses on the contexts of language use and shows the children making causal connections between separate episodes of the observation phase, considered proof of learning. (JPB)

  14. On the validity of Ksub(Id)-measurements in instrumented impact tests

    International Nuclear Information System (INIS)

    Kalthoff, J.F.; Winkler, S.; Klemm, W.; Beinert, J.

    1979-01-01

    The influence of inertia effects in determining the dynamic fracture toughness Ksub(Id) by instrumented impact testing is investigated. Model experiments in the brittle fracture regime are carried out with precracked bend specimens machined from the epoxy resin Araldite B. As is usual in these tests, the loads at the tup of the impinging striker are recorded as a function of time during the impact process. For reference purposes, the dynamic fracture toughness value Ksub(Id)sup(m1) is derived from the measured maximum load utilizing static stress intensity factor formulas. In addition to this conventional procedure, the actual stress intensity factors are measured directly at the tip of the crack by means of the shadow optical method of caustics applied in combination with high speed photography. The critical value of these optically measured stress intensity factors (for onset of crack propagation), Ksub(Id)sup(opt), is the true dynamic fracture toughness. In the experiments, the specimen size and the impact velocity were varied. In accordance with expectations, it is found that the hammer load signal is not correlated with the actual crack tip stress intensity factor values by a simple proportionality. The conventionally determined Ksub(Id)sup(m1)-value overestimates the true dynamic fracture toughness Ksub(Id)sup(opt). This overestimation becomes larger for larger specimen sizes and larger impact velocities. The results demonstrate the dominating influence inertia effects can have on hammer load measurements and emphasize the importance of eliminating these effects in order to determine non-erroneous dynamic fracture toughness values. (orig.)

  15. Acoustic processing of temporally modulated sounds in infants: evidence from a combined near-infrared spectroscopy and EEG study

    Directory of Open Access Journals (Sweden)

    Silke eTelkemeyer

    2011-04-01

    Full Text Available Speech perception requires rapid extraction of the linguistic content from the acoustic signal. The ability to efficiently process rapid changes in auditory information is important for decoding speech and thereby crucial during language acquisition. Investigating functional networks of speech perception in infancy might elucidate neuronal ensembles supporting perceptual abilities that gate language acquisition. Interhemispheric specializations for language have been demonstrated in infants. How these asymmetries are shaped by basic temporal acoustic properties is under debate. We recently provided evidence that newborns process non-linguistic sounds sharing temporal features with language in a differential and lateralized fashion. The present study used the same material while measuring brain responses of 6- and 3-month-old infants using simultaneous recordings of electroencephalography (EEG) and near-infrared spectroscopy (NIRS). NIRS reveals that the lateralization observed in newborns remains constant over the first months of life. While fast acoustic modulations elicit bilateral neuronal activations, slow modulations lead to right-lateralized responses. Additionally, auditory evoked potentials and oscillatory EEG responses show differential responses for fast and slow modulations, indicating a sensitivity to temporal acoustic variations. Oscillatory responses reveal an effect of development, that is, 6- but not 3-month-old infants show stronger theta-band desynchronization for slowly modulated sounds. Whether this developmental effect is due to increasing fine-grained perception for spectrotemporal sounds in general remains speculative. Our findings support the notion that a more general specialization for acoustic properties can be considered the basis for lateralization of speech perception. The results show that concurrent assessment of vascular-based imaging and electrophysiological responses has great potential in research on language acquisition.

  16. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    Science.gov (United States)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

    Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden-Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS such as filled pauses will require future research.
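
    The classification scheme described above pairs frame-level acoustic features with an HMM. The toy version below illustrates the decoding step only: the two states (LSS vs. NLSS), the discretized observation symbols, and all probabilities are invented for the sketch, not taken from the paper, and the sequence is decoded with the standard Viterbi algorithm.

```python
# Toy two-state HMM in the spirit of LSS/NLSS frame labelling.
# States, observation alphabet, and all probabilities are invented.
import math

STATES = ["LSS", "NLSS"]
START = {"LSS": 0.8, "NLSS": 0.2}
TRANS = {"LSS": {"LSS": 0.9, "NLSS": 0.1},
         "NLSS": {"LSS": 0.3, "NLSS": 0.7}}
# Discretized acoustic observations: 'v' = voiced frame, 'b' = breath-like frame.
EMIT = {"LSS": {"v": 0.9, "b": 0.1},
        "NLSS": {"v": 0.2, "b": 0.8}}

def viterbi(obs):
    """Most likely LSS/NLSS state sequence for a frame-level observation string."""
    # Log-probabilities avoid underflow on long sequences.
    v = [{s: math.log(START[s]) + math.log(EMIT[s][obs[0]]) for s in STATES}]
    back = []
    for o in obs[1:]:
        scores, ptr = {}, {}
        for s in STATES:
            prev, score = max(
                ((p, v[-1][p] + math.log(TRANS[p][s])) for p in STATES),
                key=lambda t: t[1])
            scores[s] = score + math.log(EMIT[s][o])
            ptr[s] = prev
        v.append(scores)
        back.append(ptr)
    # Backtrack from the best final state.
    state = max(STATES, key=lambda s: v[-1][s])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return list(reversed(path))

print(viterbi("vvvbbv"))  # -> ['LSS', 'LSS', 'LSS', 'NLSS', 'NLSS', 'LSS']
```

    The transition probabilities encode the prior that frames tend to stay in the same class, so isolated ambiguous frames are smoothed over rather than flipping the label.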

  17. The Personal Coat-of-Arms Speech: Application in the Basic Course.

    Science.gov (United States)

    Matula, Theodore

    The personal coat-of-arms speech provides students with specific directions in constructing a speech of introduction and has been used with success at Illinois State University, Daley College (Chicago), St. Xavier University (Chicago), and Ohio State University. The personal coat-of-arms speech gives students concrete experience on which to draw…

  18. Modern Tools in Patient-Centred Speech Therapy for Romanian Language

    Directory of Open Access Journals (Sweden)

    Mirela Danubianu

    2016-03-01

    Full Text Available The most common way to communicate with those around us is speech. Suffering from a speech disorder can have negative social effects: from low confidence and morale to problems with social interaction and the ability to live independently as adults. Speech therapy intervention is a complex process with particular objectives: discovery and identification of speech disorders, and directing the therapy to correction, recovery, compensation, adaptation and social integration of patients. Computer-based Speech Therapy systems are a real help for therapists by creating a special learning environment. The Romanian language is a phonetic one, with special linguistic particularities. This paper aims to present a few computer-based speech therapy systems developed for the treatment of various speech disorders specific to the Romanian language.

  19. Speech cues contribute to audiovisual spatial integration.

    Directory of Open Access Journals (Sweden)

    Christopher W Bishop

    Full Text Available Speech is the most important form of human communication but ambient sounds and competing talkers often degrade its acoustics. Fortunately the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech-cues interact with audiovisual spatial integration mechanisms. Here, we combine two well established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech-cues can impede integration in space. This suggests a direct but asymmetrical influence between ventral 'what' and dorsal 'where' pathways.

  20. Neural entrainment to rhythmically-presented auditory, visual and audio-visual speech in children

    Directory of Open Access Journals (Sweden)

    Alan James Power

    2012-07-01

    Full Text Available Auditory cortical oscillations have been proposed to play an important role in speech perception. It is suggested that the brain may take temporal ‘samples’ of information from the speech stream at different rates, phase-resetting ongoing oscillations so that they are aligned with similar frequency bands in the input (‘phase locking’). Information from these frequency bands is then bound together for speech perception. To date, there are no explorations of neural phase-locking and entrainment to speech input in children. However, it is clear from studies of language acquisition that infants use both visual speech information and auditory speech information in learning. In order to study neural entrainment to speech in typically-developing children, we use a rhythmic entrainment paradigm (underlying 2 Hz or delta rate) based on repetition of the syllable ba, presented in either the auditory modality alone, the visual modality alone, or as auditory-visual speech (via a talking head). To ensure attention to the task, children aged 13 years were asked to press a button as fast as possible when the ba stimulus violated the rhythm for each stream type. Rhythmic violation depended on delaying the occurrence of a ba in the isochronous stream. Neural entrainment was demonstrated for all stream types, and individual differences in standardized measures of language processing were related to auditory entrainment at the theta rate. Further, there was significant modulation of the preferred phase of auditory entrainment in the theta band when visual speech cues were present, indicating cross-modal phase resetting. The rhythmic entrainment paradigm developed here offers a method for exploring individual differences in oscillatory phase locking during development. In particular, a method for assessing neural entrainment and cross-modal phase resetting would be useful for exploring developmental learning difficulties thought to involve temporal sampling
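
    The ‘phase locking’ described above is commonly quantified as an inter-trial phase-locking value: the length of the mean unit phasor across trials at a given frequency. A minimal sketch follows, using hypothetical phase inputs; in a real analysis the phases would come from a narrow-band filter or wavelet at, say, the 2 Hz stimulation rate.

```python
# Minimal sketch of the inter-trial phase-locking value (PLV).
# Input phases are hypothetical; real ones would be extracted from EEG.
import cmath, math

def phase_locking_value(phases_rad):
    """|mean over trials of exp(i*phase)|: 1 = perfect locking, ~0 = random."""
    mean_phasor = sum(cmath.exp(1j * p) for p in phases_rad) / len(phases_rad)
    return abs(mean_phasor)

# Identical phase on every trial -> perfect locking (1.0 up to rounding).
print(round(phase_locking_value([0.7] * 10), 6))
# Phases spread uniformly around the circle -> no locking (0.0 up to rounding).
spread = [2 * math.pi * k / 8 for k in range(8)]
print(round(phase_locking_value(spread), 6))
```

    The same statistic, computed per frequency band, is what lets a study compare entrainment strength at delta versus theta rates across children.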

  1. Infant word recognition: Insights from TRACE simulations.

    Science.gov (United States)

    Mayor, Julien; Plunkett, Kim

    2014-02-01

    The TRACE model of speech perception (McClelland & Elman, 1986) is used to simulate results from the infant word recognition literature, to provide a unified, theoretical framework for interpreting these findings. In a first set of simulations, we demonstrate how TRACE can reconcile apparently conflicting findings suggesting, on the one hand, that consonants play a pre-eminent role in lexical acquisition (Nespor, Peña & Mehler, 2003; Nazzi, 2005), and on the other, that there is a symmetry in infant sensitivity to vowel and consonant mispronunciations of familiar words (Mani & Plunkett, 2007). In a second series of simulations, we use TRACE to simulate infants' graded sensitivity to mispronunciations of familiar words as reported by White and Morgan (2008). An unexpected outcome is that TRACE fails to demonstrate graded sensitivity for White and Morgan's stimuli unless the inhibitory parameters in TRACE are substantially reduced. We explore the ramifications of this finding for theories of lexical development. Finally, TRACE mimics the impact of phonological neighbourhoods on early word learning reported by Swingley and Aslin (2007). TRACE offers an alternative explanation of these findings in terms of mispronunciations of lexical items rather than imputing word learning to infants. Together these simulations provide an evaluation of Developmental (Jusczyk, 1993) and Familiarity (Metsala, 1999) accounts of word recognition by infants and young children. The findings point to a role for both theoretical approaches whereby vocabulary structure and content constrain infant word recognition in an experience-dependent fashion, and highlight the continuity in the processes and representations involved in lexical development during the second year of life.

  2. Effect of Time and Temperature on Thickened Infant Formula.

    Science.gov (United States)

    Gosa, Memorie M; Dodrill, Pamela

    2017-04-01

    Unlike adult populations, who primarily depend on liquids for hydration alone, infants rely on liquids to provide them with hydration and nutrition. Speech-language pathologists working within pediatric medical settings often identify dysphagia in patients and subsequently recommend thickened liquids to reduce aspiration risk. Caregivers frequently report difficulty attempting to prepare infant formula to the prescribed thickness. This study was designed to determine (1) the relationship between consistencies in modified barium swallow studies and thickened infant formulas and (2) the effects of time and temperature on the resulting thickness of infant formula. Prepackaged barium consistencies and 1 standard infant formula that was thickened with rice cereal and with 2 commercially available thickening agents were studied. Thickness was determined via a line spread test after various time and temperature conditions were met. There were significant differences between the thickened formula and barium test consistencies. Formula thickened with rice cereal separated over time into thin liquid and solid residue. Formula thickened with a starch-based thickening agent was thicker than the desired consistency immediately after mixing, and it continued to thicken over time. The data from this project suggest that nectar-thick and honey-thick infant formulas undergo significant changes in flow rates within 30 minutes of preparation or if refrigerated and then reheated after 3 hours. Additional empirical evidence is warranted to determine the most reliable methods and safest products for thickening infant formula when necessary for effective dysphagia management.

  3. Music and Speech Perception in Children Using Sung Speech.

    Science.gov (United States)

    Nie, Yingjiu; Galvin, John J; Morikawa, Michael; André, Victoria; Wheeler, Harley; Fu, Qian-Jie

    2018-01-01

    This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training, participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and speech in noise but not for speech in quiet. MCI performance was significantly poorer with the mixed timbre stimuli. Speech performance in noise was significantly poorer with the fixed or mixed pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet were significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners.

  4. Identification of Staphylococcus species and subspecies with the MicroScan Pos ID and Rapid Pos ID panel systems.

    Science.gov (United States)

    Kloos, W E; George, C G

    1991-01-01

    The accuracies of the MicroScan Pos ID and Rapid Pos ID panel systems (Baxter Diagnostic Inc., MicroScan Division, West Sacramento, Calif.) were compared with each other and with the accuracies of conventional methods for the identification of 25 Staphylococcus species and 4 subspecies. Conventional methods included those used in the original descriptions of species and subspecies and DNA-DNA hybridization. The Pos ID panel uses a battery of 18 tests, and the Rapid Pos ID panel uses a battery of 42 tests for the identification of Staphylococcus species. The Pos ID panel has modified conventional and chromogenic tests that can be read after 15 to 48 h of incubation; the Rapid Pos ID panel has tests that use fluorogenic substrates or fluorometric indicators, and test results can be read after 2 h of incubation in the autoSCAN-W/A. Results indicated that both MicroScan systems had a high degree of congruence (greater than or equal to 90%) with conventional methods for the species S. capitis, S. aureus, S. auricularis, S. saprophyticus, S. cohnii, S. arlettae, S. carnosus, S. lentus, and S. sciuri and, in particular, the subspecies S. capitis subsp. capitis and S. cohnii subsp. cohnii. The Rapid Pos ID panel system also had greater than or equal to 90% congruence with conventional methods for S. epidermidis, S. caprae, S. warneri subsp. 2, S. xylosus, S. kloosii, and S. caseolyticus. For both MicroScan systems, congruence with conventional methods was 80 to 90% for S. haemolyticus subsp. 1, S. equorum, S. intermedius, and S. hyicus; and in addition, with the Rapid Pos ID panel system congruence was 80 to 89% for S. capitis subsp. ureolyticus, S. warneri subsp. 1, S. hominis, S. cohnii subsp. urealyticum, and S. simulans. The MicroScan systems identified a lower percentage (50 to 75%) of strains of S. lugdunensis, S. gallinarum, S. schleiferi, and S. chromogenes, although the addition of specific tests to the systems might increase the accuracy of identification

  5. The selective role of premotor cortex in speech perception: a contribution to phoneme judgements but not speech comprehension.

    Science.gov (United States)

    Krieger-Redwood, Katya; Gaskell, M Gareth; Lindsay, Shane; Jefferies, Elizabeth

    2013-12-01

    Several accounts of speech perception propose that the areas involved in producing language are also involved in perceiving it. In line with this view, neuroimaging studies show activation of premotor cortex (PMC) during phoneme judgment tasks; however, there is debate about whether speech perception necessarily involves motor processes, across all task contexts, or whether the contribution of PMC is restricted to tasks requiring explicit phoneme awareness. Some aspects of speech processing, such as mapping sounds onto meaning, may proceed without the involvement of motor speech areas if PMC specifically contributes to the manipulation and categorical perception of phonemes. We applied TMS to three sites (PMC, posterior superior temporal gyrus, and occipital pole) and, for the first time within the TMS literature, directly contrasted two speech perception tasks that required explicit phoneme decisions and mapping of speech sounds onto semantic categories, respectively. TMS to PMC disrupted explicit phonological judgments but not access to meaning for the same speech stimuli. TMS to two further sites confirmed that this pattern was site specific and did not reflect a generic difference in the susceptibility of our experimental tasks to TMS: stimulation of pSTG, a site involved in auditory processing, disrupted performance in both language tasks, whereas stimulation of occipital pole had no effect on performance in either task. These findings demonstrate that, although PMC is important for explicit phonological judgments, crucially, PMC is not necessary for mapping speech onto meanings.

  6. Speed and direction changes induce the perception of animacy in 7-month-old infants

    Directory of Open Access Journals (Sweden)

    Birgit eTräuble

    2014-10-01

    Full Text Available A large body of research has documented infants’ ability to classify animate and inanimate objects based on static or dynamic information. It has been shown that infants less than one year of age transfer animacy-specific expectations from dynamic point-light displays to static images. The present study examined whether basic motion cues that typically trigger judgments of perceptual animacy in older children and adults lead 7-month-olds to infer an ambiguous object’s identity from dynamic information. Infants were tested with a novel paradigm that required inferring the animacy status of an ambiguous moving shape. An ambiguous shape emerged from behind a screen and its identity could only be inferred from its motion. Its motion pattern varied distinctively between scenes: it either changed speed and direction in an animate way, or it moved along a straight path at a constant speed (i.e. in an inanimate way). At test, the identity of the shape was revealed and it was either consistent or inconsistent with its motion pattern. Infants looked longer on trials with the inconsistent outcome. We conclude that 7-month-olds’ representations of animates and inanimates include category-specific associations between static and dynamic attributes. Moreover, these associations seem to hold for simple dynamic cues that are considered minimal conditions for animacy perception.

  7. Apraxia of Speech

    Science.gov (United States)

    Apraxia of speech (AOS), also known as acquired apraxia of speech …

  8. Infant mortality in the Marshall Islands.

    Science.gov (United States)

    Levy, S J; Booth, H

    1988-12-01

    Levy and Booth present previously unpublished infant mortality rates for the Marshall Islands. They use an indirect method to estimate infant mortality from the 1973 and 1980 censuses, then apply indirect and direct methods of estimation to data from the Marshall Islands Women's Health Survey of 1985. Comparing the results with estimates of infant mortality obtained from vital registration data enables them to estimate the extent of underregistration of infant deaths. The authors conclude that the 1973 census appears to be the most valid information source. Direct estimates from the Women's Health Survey data suggest that infant mortality has increased since 1970-1974, whereas the indirect estimates indicate a decreasing trend in infant mortality rates, converging with the direct estimates in more recent years. In view of increased efforts to improve maternal and child health in the mid-1970s, the decreasing trend is plausible. It is impossible to estimate accurately infant mortality in the Marshall Islands during 1980-1984 from the available data. Estimates based on registration data for 1975-1979 are at least 40% too low. The authors speculate that the estimate of 33 deaths per 1000 live births obtained from registration data for 1984 is 40-50% too low. In round figures, a value of 60 deaths per 1000 may be taken as the final estimate for 1980-1984.
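
    The underregistration arithmetic in this abstract can be checked directly: if a registered rate understates the true rate by a fraction f (i.e. "f too low" is read as registered = true × (1 - f), an interpretive assumption), then the adjusted rate is registered / (1 - f). A quick worked check of the 1984 figures:

```python
# Worked check of the underregistration adjustment described in the abstract:
# if registration misses a fraction f of infant deaths, the registered rate is
# (1 - f) times the true rate, so true_rate = registered_rate / (1 - f).

def adjusted_imr(registered_per_1000, underregistration_fraction):
    return registered_per_1000 / (1.0 - underregistration_fraction)

low = adjusted_imr(33, 0.40)   # ~55 per 1000 live births (40% underregistration)
high = adjusted_imr(33, 0.50)  # ~66 per 1000 live births (50% underregistration)
print(low, high)  # the range brackets the rounded final estimate of ~60 per 1000
```

    The 55-66 range is consistent with the "round figures" value of 60 deaths per 1000 given for 1980-1984.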

  9. Commissioning of the IDS Neutron Detector and $\\beta$-decay fast-timing studies at IDS

    CERN Document Server

    Piersa, Monika

    2016-01-01

    The following report describes my scientific activities performed during the Summer Student Programme at ISOLDE. The main part of my project was focused on commissioning the neutron detector dedicated to nuclear decay studies at the ISOLDE Decay Station (IDS). I participated in all the steps needed to make it operational for the IS609 experiment. In the testing phase, we obtained the expected detector response, and calibrations confirmed its successful commissioning. The detector was mounted in the desired geometry at IDS and used in measurements of the beta-delayed neutron emission of $^8$He. After completing the aforementioned part of my project, I became familiar with the fast-timing method. This technique was applied at IDS in the IS610 experiment performed in June 2016 to explore the structure of neutron-rich $^{130-134}$Sn nuclei. Since the main part of my PhD studies will be the analysis of data collected in this experiment, the second part of my project was dedicated to acquiring knowledge about technical de...

  10. Social signal processing for studying parent-infant interaction

    Directory of Open Access Journals (Sweden)

    Marie eAvril

    2014-12-01

    Full Text Available Studying early interactions is a core issue of infant development and psychopathology. Automatic social signal processing theoretically offers the possibility to extract and analyse communication by taking an integrative perspective, considering the multimodal nature and dynamics of behaviours (including synchrony). This paper proposes an explorative method to acquire and extract relevant social signals from a naturalistic early parent-infant interaction. An experimental setup is proposed based on both clinical and technical requirements. We extracted various cues from body postures and speech productions of partners using the IMI2S (Interaction, Multimodal Integration, and Social Signal) framework. Preliminary clinical and computational results are reported for two dyads (one pathological, in a situation of severe emotional neglect, and one normal control) as an illustration of our cross-disciplinary protocol. The results from both clinical and computational analyses highlight similar differences: the pathological dyad shows dyssynchronic interaction led by the infant whereas the control dyad shows synchronic interaction and a smooth interactive dialog. The results suggest that the current method might be promising for future studies.

  11. A new module in neural differentiation control: two microRNAs upregulated by retinoic acid, miR-9 and -103, target the differentiation inhibitor ID2.

    Directory of Open Access Journals (Sweden)

    Daniela Annibali

    Full Text Available The transcription factor ID2 is an important repressor of neural differentiation strongly implicated in nervous system cancers. MicroRNAs (miRNAs) are increasingly involved in differentiation control and cancer development. Here we show that two miRNAs upregulated on differentiation of neuroblastoma cells, miR-9 and miR-103, restrain ID2 expression by directly targeting the coding sequence and 3' untranslated region of the ID2-encoding messenger RNA, respectively. Notably, the two miRNAs show an inverse correlation with ID2 during neuroblastoma cell differentiation induced by retinoic acid. Overexpression of miR-9 and miR-103 in neuroblastoma cells reduces proliferation and promotes differentiation, as it was shown to occur upon ID2 inhibition. Conversely, an ID2 mutant that cannot be targeted by either miRNA prevents retinoic acid-induced differentiation more efficiently than wild-type ID2. These findings reveal a new regulatory module involving two microRNAs upregulated during neural differentiation that directly target expression of the key differentiation inhibitor ID2, suggesting that its alteration may be involved in neural cancer development.

  12. Common neural substrates support speech and non-speech vocal tract gestures.

    Science.gov (United States)

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M J; Poletto, Christopher J; Ludlow, Christy L

    2009-08-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as nonsense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech syllables without meaning. Brain activation related to overt production was captured with BOLD fMRI using a sparse sampling design for both conditions. Speech and non-speech were compared using voxel-wise whole brain analyses, and ROI analyses focused on frontal and temporoparietal structures previously reported to support speech production. Results showed substantial activation overlap between speech and non-speech function in these regions. Although non-speech gesture production showed greater extent and amplitude of activation in the regions examined, both speech and non-speech showed comparable left laterality in activation for both target perception and production. These findings posit a more general role of the previously proposed "auditory dorsal stream" in the left hemisphere: to support the production of vocal tract gestures that are not limited to speech processing.

  13. High-frequency energy in singing and speech

    Science.gov (United States)

    Monson, Brian Bruce

    While human speech and the human voice generate acoustical energy up to (and beyond) 20 kHz, the energy above approximately 5 kHz has been largely neglected. Evidence is accruing that this high-frequency energy contains perceptual information relevant to speech and voice, including percepts of quality, localization, and intelligibility. The present research was an initial step in the long-range goal of characterizing high-frequency energy in singing voice and speech, with particular regard for its perceptual role and its potential for modification during voice and speech production. In this study, a database of high-fidelity recordings of talkers was created and used for a broad acoustical analysis and general characterization of high-frequency energy, as well as specific characterization of phoneme category, voice and speech intensity level, and mode of production (speech versus singing) by high-frequency energy content. Directionality of radiation of high-frequency energy from the mouth was also examined. The recordings were used for perceptual experiments wherein listeners were asked to discriminate between speech and voice samples that differed only in high-frequency energy content. Listeners were also subjected to gender discrimination tasks, mode-of-production discrimination tasks, and transcription tasks with samples of speech and singing that contained only high-frequency content. The combination of these experiments has revealed that (1) human listeners are able to detect very subtle level changes in high-frequency energy, and (2) human listeners are able to extract significant perceptual information from high-frequency energy.

  14. Accentuate or repeat? Brain signatures of developmental periods in infant word recognition.

    Science.gov (United States)

    Männel, Claudia; Friederici, Angela D

    2013-01-01

    Language acquisition has long been discussed as an interaction between biological preconditions and environmental input. This general interaction seems particularly salient in lexical acquisition, where infants are already able to detect unknown words in sentences at 7 months of age, guided by phonological and statistical information in the speech input. While this information results from the linguistic structure of a given language, infants also exploit situational information, such as speakers' additional word accentuation and word repetition. The current study investigated the developmental trajectory of infants' sensitivity to these two situational input cues in word recognition. Testing infants at 6, 9, and 12 months of age, we hypothesized that different age groups are differentially sensitive to accentuation and repetition. In a familiarization-test paradigm, event-related brain potentials (ERPs) revealed age-related differences in infants' word recognition as a function of situational input cues: at 6 months infants only recognized previously accentuated words, at 9 months both accentuation and repetition played a role, while at 12 months only repetition was effective. These developmental changes are suggested to result from infants' advancing linguistic experience and parallel auditory cortex maturation. Our data indicate very narrow and specific input-sensitive periods in infant word recognition, with accentuation being effective prior to repetition. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Language or Music, Mother or Mozart? Structural and Environmental Influences on Infants' Language Networks

    Science.gov (United States)

    Dehaene-Lambertz, G.; Montavont, A.; Jobert, A.; Allirol, L.; Dubois, J.; Hertz-Pannier, L.; Dehaene, S.

    2010-01-01

    Understanding how language emerged in our species calls for a detailed investigation of the initial specialization of the human brain for speech processing. Our earlier research demonstrated that an adult-like left-lateralized network of perisylvian areas is already active when infants listen to sentences in their native language, but did not…

  16. The Functional Connectome of Speech Control.

    Directory of Open Access Journals (Sweden)

    Stefan Fuertinger

    2015-07-01

    Full Text Available In the past few years, several studies have been directed at understanding the complexity of functional interactions between different brain regions during various human behaviors. Among these, neuroimaging research has established the notion that speech and language require an orchestration of brain regions for comprehension, planning, and integration of a heard sound with a spoken word. However, these studies have been largely limited to mapping the neural correlates of separate speech elements and examining distinct cortical or subcortical circuits involved in different aspects of speech control. As a result, the complexity of the brain network machinery controlling speech and language has remained largely unknown. Using graph theoretical analysis of functional MRI (fMRI) data in healthy subjects, we quantified large-scale speech network topology by constructing functional brain networks of increasing hierarchy, from the resting state, to motor output of meaningless syllables, to complex production of real-life speech, and compared these to non-speech-related sequential finger tapping and pure tone discrimination networks. We identified a segregated network of highly connected local neural communities (hubs) in the primary sensorimotor and parietal regions, which formed a commonly shared core hub network across the examined conditions, with the left area 4p playing an important role in speech network organization. These sensorimotor core hubs exhibited features of flexible hubs, based on their participation in several functional domains across different networks and their ability to adaptively switch long-range functional connectivity depending on task content, resulting in a distinct community structure for each examined network. Specifically, compared to other tasks, speech production was characterized by the formation of six distinct neural communities with specialized recruitment of the prefrontal cortex, insula, putamen, and thalamus, which collectively
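The "flexible hub" characterization in graph-theoretic network studies is typically grounded in metrics such as the Guimerà-Amaral participation coefficient, P_i = 1 - Σ_s (k_is / k_i)², which is high for nodes whose edges spread across modules. A toy sketch on a hand-built adjacency matrix (the matrix, modules, and values are invented for illustration; this is not the study's pipeline):

```python
from collections import defaultdict

def participation_coefficient(adj, modules):
    """Participation coefficient per node: P_i = 1 - sum_s (k_is / k_i)**2,
    where k_i is node i's degree and k_is its degree within module s.
    P_i near 0: edges stay in one module; P_i near 1: edges spread evenly
    across modules (a 'connector hub')."""
    coeffs = []
    for i, row in enumerate(adj):
        k_i = sum(row)
        if k_i == 0:
            coeffs.append(0.0)
            continue
        k_is = defaultdict(int)
        for j, w in enumerate(row):
            if w:
                k_is[modules[j]] += w
        coeffs.append(1.0 - sum((k / k_i) ** 2 for k in k_is.values()))
    return coeffs

# Toy 4-node network: nodes 0-1 in module "A", nodes 2-3 in module "B"
adj = [[0, 1, 1, 0],
       [1, 0, 0, 0],
       [1, 0, 0, 1],
       [0, 0, 1, 0]]
p = participation_coefficient(adj, ["A", "A", "B", "B"])  # [0.5, 0.0, 0.5, 0.0]
```

Nodes 0 and 2 bridge the two modules, so their edges split evenly between them and their coefficients are highest.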

  17. Estimation of the dose to the nursing infant due to direct irradiation from activity present in maternal organs and tissues

    International Nuclear Information System (INIS)

    Hunt, J. G.; Nosske, D.; Dos Santos, D. S.

    2005-01-01

    Radionuclides deposited internally in the mother will give rise to a radiation dose in the infant in two ways. The radionuclides may be transferred through milk and give rise to an internal dose in the infant, or the radionuclides may emit photons that are absorbed by the infant, giving rise to an external dose. In this paper, the external dose to the newborn infant caused by direct irradiation was estimated for monoenergetic photons. Voxel models (also called voxel phantoms) of the mother and infant were made in three geometries. These models, consisting of volume elements, or voxels, were designed so that the infant model was placed in the lap, at the breast and on the shoulder of the mother model. The Visual Monte Carlo (VMC) code was used to transport the photons through the voxel models. Source regions for the emitted photons, such as the whole body, the thyroid, the lung, the liver and the skeleton, were chosen. For the validation of the calculation procedure, VMC results were favourably compared with the results obtained by using other Monte Carlo programs and also with the previously published results for specific absorbed fractions. This paper provides estimates of the external dose per photon to the infant for photon energies between 0.05 and 2.5 MeV. The external dose per photon estimates were made for the three geometries and for the sources listed above. The results show that, for the geometry of the nursing infant model at the breast, the highest dose to the infant per photon comes from radionuclides deposited in the mother's liver. For the nursing infant model at the shoulder, the highest dose to the infant per photon comes from radionuclides deposited in the mother's thyroid, and for the nursing infant model in the lap, the highest dose to the infant per photon comes from radionuclides deposited uniformly in the whole body. The dose per photon results were then used to estimate the dose an infant might receive over the lactation period (6 months
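Monte Carlo photon transport of the kind performed by codes such as VMC samples photon free paths from an exponential distribution governed by the medium's attenuation coefficient. A deliberately simplified 1-D slab version (a toy model, not the VMC code; the mu and thickness values are arbitrary) illustrates the sampling step:

```python
import math
import random

def fraction_absorbed(mu_per_cm, thickness_cm, n_photons=100_000, seed=1):
    """Toy 1-D Monte Carlo: fraction of normally incident photons interacting
    within a homogeneous slab. Free paths are sampled from an exponential
    distribution with linear attenuation coefficient mu; the estimate
    converges to the analytic value 1 - exp(-mu * thickness)."""
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(n_photons):
        # inverse-transform sampling of the exponential free path
        path = -math.log(1.0 - rng.random()) / mu_per_cm
        if path < thickness_cm:
            absorbed += 1
    return absorbed / n_photons

frac = fraction_absorbed(0.2, 5.0)  # analytic value: 1 - e**-1, about 0.632
```

Real voxel-model codes extend this step with 3-D geometry, per-voxel materials, and scattering physics, but the free-path sampling above is the core of the random walk.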

  18. Sounds and silence: An optical topography study of language recognition at birth

    Science.gov (United States)

    Peña, Marcela; Maki, Atsushi; Kovačić, Damir; Dehaene-Lambertz, Ghislaine; Koizumi, Hideaki; Bouquet, Furio; Mehler, Jacques

    2003-09-01

    Does the neonate's brain have left hemisphere (LH) dominance for speech? Twelve full-term neonates participated in an optical topography study designed to assess whether the neonate brain responds specifically to linguistic stimuli. Participants were tested with normal infant-directed speech, with the same utterances played in reverse and without auditory stimulation. We used a 24-channel optical topography device to assess changes in the concentration of total hemoglobin in response to auditory stimulation in 12 areas of the right hemisphere and 12 areas of the LH. We found that LH temporal areas showed significantly more activation when infants were exposed to normal speech than to backward speech or silence. We conclude that neonates are born with an LH superiority to process specific properties of speech.

  19. Direct identification and recognition of yeast species from clinical material by using Albicans ID and CHROMagar Candida plates.

    Science.gov (United States)

    Baumgartner, C; Freydiere, A M; Gille, Y

    1996-02-01

    Two chromogenic media, Albicans ID and CHROMagar Candida agar plates, were compared with a reference medium, Sabouraud-chloramphenicol agar, and standard methods for the identification of yeast species. This study involved 951 clinical specimens. The detection rates for the two chromogenic media for polymicrobial specimens were 20% higher than that for the Sabouraud-chloramphenicol agar plates. The rates of identification of Candida albicans for Albicans ID and CHROMagar Candida agar plates were, respectively, 37.0 and 6.0% after 24 h of incubation and 93.6 and 92.2% after 72 h of incubation, with specificities of 99.8 and 100%. Furthermore, CHROMagar Candida plates identified 13 of 14 Candida tropicalis and 9 of 12 Candida krusei strains after 48 h of incubation.

  20. Speech Production and Speech Discrimination by Hearing-Impaired Children.

    Science.gov (United States)

    Novelli-Olmstead, Tina; Ling, Daniel

    1984-01-01

    Seven hearing impaired children (five to seven years old) assigned to the Speakers group made highly significant gains in speech production and auditory discrimination of speech, while Listeners made only slight speech production gains and no gains in auditory discrimination. Combined speech and auditory training was more effective than auditory…

  1. Automatic Emotion Recognition in Speech: Possibilities and Significance

    Directory of Open Access Journals (Sweden)

    Milana Bojanić

    2009-12-01

    Full Text Available Automatic speech recognition and spoken language understanding are crucial steps towards natural human-machine interaction. The main task of the speech communication process is the recognition of the word sequence, but the recognition of prosody, emotion, and stress tags may be of particular importance as well. This paper discusses the possibilities of recognizing emotion from the speech signal in order to improve ASR, and also provides an analysis of acoustic features that can be used for the detection of a speaker's emotion and stress. The paper also provides a short overview of emotion and stress classification techniques. The importance and place of emotional speech recognition is shown in the domain of human-computer interactive systems and the transaction communication model. Directions for future work are given at the end of this work.

  2. The role of visual spatial attention in audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias; Tiippana, K.; Laarni, J.

    2009-01-01

    Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive. … from each of the faces and from the voice on the auditory speech percept. We found that directing visual spatial attention towards a face increased the influence of that face on auditory perception. However, the influence of the voice on auditory perception did not change, suggesting that audiovisual integration did not change. Visual spatial attention was also able to select between the faces when lip reading. This suggests that visual spatial attention acts at the level of visual speech perception prior to audiovisual integration and that the effect propagates through audiovisual integration…

  3. Ultrasound applicability in Speech Language Pathology and Audiology.

    Science.gov (United States)

    Barberena, Luciana da Silva; Brasil, Brunah de Castro; Melo, Roberta Michelon; Mezzomo, Carolina Lisbôa; Mota, Helena Bolli; Keske-Soares, Márcia

    2014-01-01

    To present recent studies that used ultrasound in the fields of Speech Language Pathology and Audiology, which evidence possibilities for the applicability of this technique in different subareas. A bibliographic search was carried out in the PubMed database, using the keywords "ultrasonic," "speech," "phonetics," "Speech, Language and Hearing Sciences," "voice," "deglutition," and "myofunctional therapy," comprising some areas of Speech Language Pathology and Audiology Sciences. The keywords "ultrasound," "ultrasonography," "swallow," "orofacial myofunctional therapy," and "orofacial myology" were also used in the search. Studies in humans from the past 5 years were selected. In the preselection, duplicated studies, articles not fully available, and those that did not present a direct relation between ultrasound and Speech Language Pathology and Audiology Sciences were discarded. The data were analyzed descriptively and classified into subareas of Speech Language Pathology and Audiology Sciences. The following items were considered: purposes, participants, procedures, and results. We selected 12 articles for the ultrasound versus speech/phonetics subarea, 5 for ultrasound versus voice, 1 for ultrasound versus muscles of mastication, and 10 for ultrasound versus swallow. Review studies relating "ultrasound" and "Speech Language Pathology and Audiology Sciences" in the past 5 years were not found. Different studies on the use of ultrasound in Speech Language Pathology and Audiology Sciences were found. Each of them, according to its purpose, confirms new possibilities for the use of this instrument in the several subareas, aiming at a more accurate diagnosis and new evaluative and therapeutic possibilities.

  4. Expression of Id2 in the Second Heart Field and Cardiac Defects in Id2 Knock-Out Mice

    NARCIS (Netherlands)

    Jongbloed, M. R. M.; Vicente-Steijn, R.; Douglas, Y. L.; Wisse, L. J.; Mori, K.; Yokota, Y.; Bartelings, M. M.; Schalij, M. J.; Mahtab, E. A.; Poelmann, R. E.; Gittenberger-De Groot, A. C.

    2011-01-01

    The inhibitor of differentiation Id2 is expressed in the mesoderm of the second heart field, which contributes myocardial and mesenchymal cells to the primary heart tube. The role of Id2 in cardiac development is insufficiently understood. Heart development was studied in sequential developmental stages in

  5. Stuttering Frequency, Speech Rate, Speech Naturalness, and Speech Effort During the Production of Voluntary Stuttering.

    Science.gov (United States)

    Davidow, Jason H; Grossman, Heather L; Edge, Robin L

    2018-05-01

    Voluntary stuttering techniques involve persons who stutter purposefully interjecting disfluencies into their speech. Little research has been conducted on the impact of these techniques on the speech pattern of persons who stutter. The present study examined whether changes in the frequency of voluntary stuttering accompanied changes in stuttering frequency, articulation rate, speech naturalness, and speech effort. In total, 12 persons who stutter aged 16-34 years participated. Participants read four 300-syllable passages during a control condition, and three voluntary stuttering conditions that involved attempting to produce purposeful, tension-free repetitions of initial sounds or syllables of a word for two or more repetitions (i.e., bouncing). The three voluntary stuttering conditions included bouncing on 5%, 10%, and 15% of syllables read. Friedman tests and follow-up Wilcoxon signed ranks tests were conducted for the statistical analyses. Stuttering frequency, articulation rate, and speech naturalness were significantly different between the voluntary stuttering conditions. Speech effort did not differ between the voluntary stuttering conditions. Stuttering frequency was significantly lower during the three voluntary stuttering conditions compared to the control condition, and speech effort was significantly lower during two of the three voluntary stuttering conditions compared to the control condition. Due to changes in articulation rate across the voluntary stuttering conditions, it is difficult to conclude, as has been suggested previously, that voluntary stuttering is the reason for stuttering reductions found when using voluntary stuttering techniques. Additionally, future investigations should examine different types of voluntary stuttering over an extended period of time to determine their impact on stuttering frequency, speech rate, speech naturalness, and speech effort.

  6. Comprehension of synthetic speech and digitized natural speech by adults with aphasia.

    Science.gov (United States)

    Hux, Karen; Knollman-Porter, Kelly; Brown, Jessica; Wallace, Sarah E

    2017-09-01

    Using text-to-speech technology to provide simultaneous written and auditory content presentation may help compensate for chronic reading challenges if people with aphasia can understand synthetic speech output; however, inherent auditory comprehension challenges experienced by people with aphasia may make understanding synthetic speech difficult. This study's purpose was to compare the preferences and auditory comprehension accuracy of people with aphasia when listening to sentences generated with digitized natural speech, Alex synthetic speech (i.e., Macintosh platform), or David synthetic speech (i.e., Windows platform). The methodology required each of 20 participants with aphasia to select one of four images corresponding in meaning to each of 60 sentences comprising three stimulus sets. Results revealed significantly better accuracy given digitized natural speech than either synthetic speech option; however, individual participant performance analyses revealed three patterns: (a) comparable accuracy regardless of speech condition for 30% of participants, (b) comparable accuracy between digitized natural speech and one, but not both, synthetic speech option for 45% of participants, and (c) greater accuracy with digitized natural speech than with either synthetic speech option for remaining participants. Ranking and Likert-scale rating data revealed a preference for digitized natural speech and David synthetic speech over Alex synthetic speech. Results suggest many individuals with aphasia can comprehend synthetic speech options available on popular operating systems. Further examination of synthetic speech use to support reading comprehension through text-to-speech technology is thus warranted. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Common neural substrates support speech and non-speech vocal tract gestures

    OpenAIRE

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M.J.; Poletto, Christopher J.; Ludlow, Christy L.

    2009-01-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal-tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as nonsense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech sylla...

  8. Impact of the linguistic environment on speech perception : comparing bilingual and monolingual populations

    OpenAIRE

    Roessler, Abeba, 1981-

    2012-01-01

    The present dissertation set out to investigate how the linguistic environment affects speech perception. Three sets of studies have explored effects of bilingualism on word recognition in adults and infants, and the impact of first-language linguistic knowledge on rule learning in adults. In the present work, we have found evidence in three auditory priming studies that bilingual adults, in contrast to monolinguals, have developed mechanisms to effectively overcome interference from irrelevant...

  9. Introductory speeches

    International Nuclear Information System (INIS)

    2001-01-01

    This CD is a multimedia presentation of the programme of safety upgrading of the Bohunice V1 NPP. This chapter consists of an introductory commentary and 4 introductory speeches (video records): (1) Introductory speech of Vincent Pillar, Board chairman and director general of Slovak electric, Plc. (SE); (2) Introductory speech of Stefan Schmidt, director of SE - Bohunice Nuclear power plants; (3) Introductory speech of Jan Korec, Board chairman and director general of VUJE Trnava, Inc. - Engineering, Design and Research Organisation, Trnava; (4) Introductory speech of Dietrich Kuschel, Senior vice-president of FRAMATOME ANP Project and Engineering

  10. Predicting speech intelligibility in conditions with nonlinearly processed noisy speech

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    The speech-based envelope power spectrum model (sEPSM; [1]) was proposed in order to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII). The sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv), which was demonstrated to successfully predict speech intelligibility in conditions with nonlinearly processed noisy speech, such as processing with spectral subtraction. Moreover, a multiresolution version (mr-sEPSM) was demonstrated to account for speech intelligibility in various conditions with stationary and fluctuating…
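The SNRenv metric compares the envelope power (envelope variance normalized by its squared mean) of processed speech and noise: SNRenv = 10·log10(P_env,S / P_env,N). A pure-Python toy illustration with synthetic sinusoidal envelopes (the envelopes and modulation depths are invented for illustration; the sEPSM itself additionally passes envelopes through a modulation filterbank, which is omitted here):

```python
import math

def envelope_power(env):
    """Normalized envelope power: variance of the envelope divided by its
    squared mean (AC power relative to DC), as in sEPSM-style models."""
    mean = sum(env) / len(env)
    var = sum((e - mean) ** 2 for e in env) / len(env)
    return var / mean ** 2

def snr_env_db(speech_env, noise_env):
    """SNRenv = 10 * log10(P_env,speech / P_env,noise)."""
    return 10.0 * math.log10(envelope_power(speech_env) / envelope_power(noise_env))

# Synthetic envelopes: "speech" modulated at depth 0.8, "noise" at depth 0.1
N = 1000
speech_env = [1 + 0.8 * math.sin(2 * math.pi * 4 * k / N) for k in range(N)]
noise_env = [1 + 0.1 * math.sin(2 * math.pi * 4 * k / N) for k in range(N)]
snr = snr_env_db(speech_env, noise_env)  # 10*log10((0.8/0.1)**2), about 18.1 dB
```

The key design idea is that noise reduction schemes like spectral subtraction can improve the waveform SNR while leaving the envelope-domain ratio (and thus predicted intelligibility) largely unimproved.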

  11. Digitized Ethnic Hate Speech: Understanding Effects of Digital Media Hate Speech on Citizen Journalism in Kenya

    Directory of Open Access Journals (Sweden)

    Stephen Gichuhi Kimotho

    2016-06-01

    Full Text Available Ethnicity in Kenya permeates all spheres of life. However, it is in politics that ethnicity is most visible. Election time in Kenya often leads to ethnic competition and hatred, often expressed through various media. Ethnic hate speech characterized the 2007 general elections in party rallies and through text messages, emails, posters and leaflets. This resulted in widespread skirmishes that left over 1200 people dead and many displaced (KNHRC, 2008). In 2013, however, the new battle zone was the war of words on social media platforms. More than at any other time in Kenyan history, Kenyans poured vitriolic ethnic hate speech through digital media like Facebook, Twitter and blogs. Although scholars have studied the role and effects of mainstream media like television and radio in proliferating ethnic hate speech in Kenya (Michael Chege, 2008; Goldstein & Rotich, 2008a; Ismail & Deane, 2008; Jacqueline Klopp & Prisca Kamungi, 2007), little has been done in regard to social media. This paper investigated the nature of digitized hate speech by: describing the forms of ethnic hate speech on social media in Kenya; the effects of ethnic hate speech on Kenyans' perception of ethnic entities; ethnic conflict; and the ethics of citizen journalism. This study adopted a descriptive interpretive design and utilized Austin's Speech Act Theory, which explains the use of language to achieve desired purposes and direct behaviour (Tarhom & Miracle, 2013). Content published between January and April 2013 from six purposefully identified blogs was analysed. Questionnaires were used to collect data from university students, as they form a good sample of the Kenyan population, are most active on social media and are drawn from all parts of the country. Qualitative data were analysed using NVivo 10 software, while responses from the questionnaire were analysed using IBM SPSS version 21. The findings indicated that Facebook and Twitter were the main platforms used to

  12. Exploring Australian speech-language pathologists' use and perceptions of non-speech oral motor exercises.

    Science.gov (United States)

    Rumbach, Anna F; Rose, Tanya A; Cheah, Mynn

    2018-01-29

    To explore Australian speech-language pathologists' use of non-speech oral motor exercises, and rationales for using/not using non-speech oral motor exercises in clinical practice. A total of 124 speech-language pathologists practising in Australia, working with paediatric and/or adult clients with speech sound difficulties, completed an online survey. The majority of speech-language pathologists reported that they did not use non-speech oral motor exercises when working with paediatric or adult clients with speech sound difficulties. However, more than half of the speech-language pathologists working with adult clients who have dysarthria reported using non-speech oral motor exercises with this population. The most frequently reported rationale for using non-speech oral motor exercises in speech sound difficulty management was to improve awareness/placement of articulators. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound difficulties. This study provides an overview of Australian speech-language pathologists' reported use and perceptions of non-speech oral motor exercises' applicability and efficacy in treating paediatric and adult clients who have speech sound difficulties. The research findings provide speech-language pathologists with insight into how and why non-speech oral motor exercises are currently used, and adds to the knowledge base regarding Australian speech-language pathology practice of non-speech oral motor exercises in the treatment of speech sound difficulties. Implications for Rehabilitation Non-speech oral motor exercises refer to oral motor activities which do not involve speech, but involve the manipulation or stimulation of oral structures including the lips, tongue, jaw, and soft palate. Non-speech oral motor exercises are intended to improve the function (e.g., movement, strength) of oral structures. 

  13. [Improving speech comprehension using a new cochlear implant speech processor].

    Science.gov (United States)

    Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A

    2009-06-01

    The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who demonstrated speech comprehension performance with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions of at least 80% correct scores and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improved signal-to-noise ratio for speech comprehension thresholds (i.e., signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg

  14. Speech production gains following constraint-induced movement therapy in children with hemiparesis.

    Science.gov (United States)

    Allison, Kristen M; Reidy, Teressa Garcia; Boyle, Mary; Naber, Erin; Carney, Joan; Pidcock, Frank S

    2017-01-01

    The purpose of this study was to investigate changes in speech skills of children who have hemiparesis and speech impairment after participation in a constraint-induced movement therapy (CIMT) program. While case studies have reported collateral speech gains following CIMT, the effect of CIMT on speech production has not previously been directly investigated to the knowledge of these investigators. Eighteen children with hemiparesis and co-occurring speech impairment participated in a 21-day clinical CIMT program. The Goldman-Fristoe Test of Articulation-2 (GFTA-2) was used to assess children's articulation of speech sounds before and after the intervention. Changes in percent of consonants correct (PCC) on the GFTA-2 were used as a measure of change in speech production. Children made significant gains in PCC following CIMT. Gains were similar in children with left and right-sided hemiparesis, and across age groups. This study reports significant collateral gains in speech production following CIMT and suggests benefits of CIMT may also spread to speech motor domains.
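Percent consonants correct is the share of target consonants a child produces correctly. A toy sketch using orthographic letters as stand-ins for phonetic transcription (the position-based alignment and the consonant set are simplifications; clinical PCC scoring follows transcription conventions not modeled here):

```python
CONSONANTS = set("bcdfghjklmnpqrstvwxz")  # orthographic stand-in for a phoneme set

def percent_consonants_correct(target, produced):
    """PCC = correctly produced target consonants / target consonants * 100.
    Toy version: assumes the two transcriptions are position-aligned."""
    targets = [(i, ch) for i, ch in enumerate(target) if ch in CONSONANTS]
    if not targets:
        return 100.0
    correct = sum(1 for i, ch in targets
                  if i < len(produced) and produced[i] == ch)
    return 100.0 * correct / len(targets)

pcc = percent_consonants_correct("rabbit", "wabbit")  # one consonant error: 75.0
```

A pre/post intervention change score is then just the difference between two such percentages over the test's word list.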

  15. "Frequent frames" in German child-directed speech: a limited cue to grammatical categories.

    Science.gov (United States)

    Stumper, Barbara; Bannard, Colin; Lieven, Elena; Tomasello, Michael

    2011-08-01

    Mintz (2003) found that in English child-directed speech, frequently occurring frames formed by linking the preceding (A) and succeeding (B) word (A_x_B) could accurately predict the syntactic category of the intervening word (x). This has been successfully extended to French (Chemla, Mintz, Bernal, & Christophe, 2009). In this paper, we show that, as for Dutch (Erkelens, 2009), frequent frames in German do not enable such accurate lexical categorization. This can be explained by the characteristics of German including a less restricted word order compared to English or French and the frequent use of some forms as both determiner and pronoun in colloquial German. Finally, we explore the relationship between the accuracy of frames and their potential utility and find that even some of those frames showing high token-based accuracy are of limited value because they are in fact set phrases with little or no variability in the slot position. Copyright © 2011 Cognitive Science Society, Inc.
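A frequent frame in Mintz's sense is a pair of words (A, B) that recurrently co-occur with exactly one word intervening; in English, the middle words collected by a frequent frame tend to share a syntactic category. A minimal sketch on an invented toy corpus (the sentences and the frequency threshold are illustrative):

```python
from collections import defaultdict

def frequent_frames(utterances, min_count=2):
    """Collect A_x_B frames: for each pair of words (A, B) occurring with
    exactly one word between them, gather the intervening words x."""
    frames = defaultdict(list)
    for utt in utterances:
        words = utt.split()
        for a, x, b in zip(words, words[1:], words[2:]):
            frames[(a, b)].append(x)
    # keep only frames occurring at least min_count times
    return {f: xs for f, xs in frames.items() if len(xs) >= min_count}

# Invented toy corpus
corpus = ["you want it", "you need it", "you see it", "the dog barks"]
frames = frequent_frames(corpus)
# {('you', 'it'): ['want', 'need', 'see']} -- the frame's slot gathers verbs
```

The paper's point can be read off such a structure: a frame is accurate when its slot fillers belong to one category, and useful only when the slot also shows real variability rather than being a set phrase.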

  16. Sleep confers a benefit for retention of statistical language learning in 6.5-month-old infants.

    Science.gov (United States)

    Simon, Katharine N S; Werchan, Denise; Goldstein, Michael R; Sweeney, Lucia; Bootzin, Richard R; Nadel, Lynn; Gómez, Rebecca L

    2017-04-01

    Infants show a robust ability to track transitional probabilities within language and can use this information to extract words from continuous speech. The degree to which infants remember these words across a delay is unknown. Given the well-established benefits of sleep for long-term memory retention in adults, we examined whether sleep similarly facilitates memory in 6.5-month-olds. Infants listened to an artificial language for 7 minutes, followed by a period of sleep or wakefulness. After a time-matched delay for the sleep and wakefulness groups, we measured retention using the head-turn-preference procedure. Infants who slept retained memory for the extracted words that was prone to interference during the test. Infants who remained awake showed no retention. Within the nap group, retention correlated with three electrophysiological measures: (1) absolute theta across the brain, (2) absolute alpha across the brain, and (3) greater fronto-central slow wave activity (SWA). Copyright © 2016 Elsevier Inc. All rights reserved.
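
    The statistical learning mechanism this record relies on can be sketched in a few lines: within a word, the transitional probability (TP) between adjacent syllables is high, while across a word boundary it drops. The three-word artificial language and syllable stream below are invented for the sketch, not the study's stimuli:

```python
import random
from collections import Counter

random.seed(0)
words = ["bida", "kupa", "tigo"]

# Build a continuous syllable stream from randomly ordered words.
stream = []
for _ in range(200):
    w = random.choice(words)
    stream.extend([w[:2], w[2:]])  # two syllables per word

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def tp(s1, s2):
    """P(s2 | s1): transitional probability from syllable s1 to s2."""
    return pair_counts[(s1, s2)] / first_counts[s1]

print(tp("bi", "da"))  # 1.0: "bi" is always followed by "da" (within-word)
print(tp("da", "ku"))  # well below 1.0: after "da" any word onset can follow
```

A learner tracking these probabilities can posit word boundaries wherever TP dips, which is how infants are thought to extract words from the continuous stream.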

  17. Early dyadic patterns of mother-infant interactions and outcomes of prematurity at 18 months.

    Science.gov (United States)

    Forcada-Guex, Margarita; Pierrehumbert, Blaise; Borghini, Ayala; Moessinger, Adrien; Muller-Nix, Carole

    2006-07-01

    With the increased survival of very preterm infants, there is a growing concern for their developmental and socioemotional outcomes. The quality of the early mother-infant relationship has been noted as 1 of the factors that may exacerbate or soften the potentially adverse impact of preterm birth, particularly concerning the infant's later competencies and development. The first purpose of the study was to identify at 6 months of corrected age whether there were specific dyadic mother-infant patterns of interaction in preterm as compared with term mother-infant dyads. The second purpose was to examine the potential impact of these dyadic patterns on the infant's behavioral and developmental outcomes at 18 months of corrected age. During a 12-month period (January-December 1998), all preterm infants who were compliance, difficult, and passivity). At 18 months, behavioral outcomes of the children were assessed on the basis of a semistructured interview of the mother, the Symptom Check List. The Symptom Check List explores 4 groups of behavioral symptoms: sleeping problems, eating problems, psychosomatic symptoms, and behavioral and emotional disorders. At the same age, developmental outcomes were evaluated using the Griffiths Developmental Scales. Five areas were evaluated: locomotor, personal-social, hearing and speech, eye-hand coordination, and performance. Among the possible dyadic patterns of interaction, 2 patterns emerge recurrently in mother-infant preterm dyads: a "cooperative pattern" with a sensitive mother and a cooperative-responsive infant (28%) and a "controlling pattern" with a controlling mother and a compulsive-compliant infant (28%). The remaining 44% form a heterogeneous group that gathers all of the other preterm dyads and is composed of 1 sensitive mother-passive infant; 10 controlling mothers with a cooperative, difficult, or passive infant; and 10 unresponsive mothers with a cooperative, difficult, or passive infant. Among the term control

  18. Speech Intelligibility Advantages using an Acoustic Beamformer Display

    Science.gov (United States)

    Begault, Durand R.; Sunder, Kaushik; Godfroy, Martine; Otto, Peter

    2015-01-01

    A speech intelligibility test conforming to the Modified Rhyme Test of ANSI S3.2 "Method for Measuring the Intelligibility of Speech Over Communication Systems" was conducted using a prototype 12-channel acoustic beamformer system. The target speech material (signal) was identified against speech babble (noise), with calculated signal-noise ratios of 0, 5 and 10 dB. The signal was delivered at a fixed beam orientation of 135 deg (re 90 deg as the frontal direction of the array) and the noise at 135 deg (co-located) and 0 deg (separated). A significant improvement in intelligibility from 57% to 73% was found for spatial separation for the same signal-noise ratio (0 dB). Significant effects for improved intelligibility due to spatial separation were also found for higher signal-noise ratios (5 and 10 dB).
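
    The signal-noise ratios quoted above (0, 5 and 10 dB) follow from the standard decibel definition for amplitude ratios. A minimal sketch with invented RMS levels:

```python
import math

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio in decibels from RMS amplitudes."""
    return 20.0 * math.log10(signal_rms / noise_rms)

print(snr_db(1.0, 1.0))            # 0.0 dB: target at the same level as the babble
print(round(snr_db(2.0, 1.0), 1))  # 6.0 dB: doubling the target amplitude
```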

  19. The analysis of speech acts patterns in two Egyptian inaugural speeches

    Directory of Open Access Journals (Sweden)

    Imad Hayif Sameer

    2017-09-01

    The theory of speech acts, which clarifies what people do when they speak, is concerned not with individual words or sentences as the basic elements of human communication, but with the particular speech acts performed when uttering words. A speech act is the attempt to do something purely by speaking; many things can be done by speaking. Speech acts are studied under what is called speech act theory, and belong to the domain of pragmatics. In this paper, two Egyptian inaugural speeches, by El-Sadat and El-Sisi and belonging to different periods, were analyzed to find out whether there were differences within this genre in the same culture. The study showed a very small difference between the two speeches, which were analyzed according to Searle's theory of speech acts. In El-Sadat's speech, commissives occupied the first place; in El-Sisi's speech, assertives did. Within the speeches of one culture, the differences depended on the circumstances that surrounded each President's election at the time. Speech acts were tools the speakers used to convey what they wanted and to obtain support from their audiences.

  20. Speech recognition implementation in radiology

    International Nuclear Information System (INIS)

    White, Keith S.

    2005-01-01

    Continuous speech recognition (SR) is an emerging technology that allows direct digital transcription of dictated radiology reports. The SR systems are being widely deployed in the radiology community. This is a review of technical and practical issues that should be considered when implementing an SR system. (orig.)

  1. Speech Problems

    Science.gov (United States)

    KidsHealth / For Teens / Speech Problems. ... a person's ability to speak clearly. Some common speech and language disorders: stuttering is a problem that ...

  2. Alternative Speech Communication System for Persons with Severe Speech Disorders

    Science.gov (United States)

    Selouani, Sid-Ahmed; Sidi Yakoub, Mohammed; O'Shaughnessy, Douglas

    2009-12-01

    Assistive speech-enabled systems are proposed to help both French and English speaking persons with various speech disorders. The proposed assistive systems use automatic speech recognition (ASR) and speech synthesis in order to enhance the quality of communication. These systems aim at improving the intelligibility of pathologic speech making it as natural as possible and close to the original voice of the speaker. The resynthesized utterances use new basic units, a new concatenating algorithm and a grafting technique to correct the poorly pronounced phonemes. The ASR responses are uttered by the new speech synthesis system in order to convey an intelligible message to listeners. Experiments involving four American speakers with severe dysarthria and two Acadian French speakers with sound substitution disorders (SSDs) are carried out to demonstrate the efficiency of the proposed methods. An improvement of the Perceptual Evaluation of the Speech Quality (PESQ) value of 5% and more than 20% is achieved by the speech synthesis systems that deal with SSD and dysarthria, respectively.

  3. Body-Part Tracking of Infants

    DEFF Research Database (Denmark)

    Olsen, Mikkel Damgaard; Herskind, Anna; Nielsen, Jens Bo

    2014-01-01

    Motion tracking is a widely used technique for analyzing and measuring adult human movement. These methods cannot, however, be transferred directly to motion tracking of infants because of large differences in the underlying human model. Motion tracking of infants can be used for automatic...

  4. Low-income fathers’ speech to toddlers during book reading versus toy play

    Science.gov (United States)

    Salo, Virginia C.; Rowe, Meredith L.; Leech, Kathryn A.; Cabrera, Natasha J.

    2016-01-01

    Fathers’ child-directed speech across two contexts was examined. Father–child dyads from sixty-nine low-income families were videotaped interacting during book reading and toy play when children were 2;0. Fathers used more diverse vocabulary and asked more questions during book reading while their mean length of utterance was longer during toy play. Variation in these specific characteristics of fathers’ speech that differed across contexts was also positively associated with child vocabulary skill measured on the MacArthur-Bates Communicative Development Inventory. Results are discussed in terms of how different contexts elicit specific qualities of child-directed speech that may promote language use and development. PMID:26541647

  6. Distributional structure in language: contributions to noun-verb difficulty differences in infant word recognition.

    Science.gov (United States)

    Willits, Jon A; Seidenberg, Mark S; Saffran, Jenny R

    2014-09-01

    What makes some words easy for infants to recognize, and other words difficult? We addressed this issue in the context of prior results suggesting that infants have difficulty recognizing verbs relative to nouns. In this work, we highlight the role played by the distributional contexts in which nouns and verbs occur. Distributional statistics predict that English nouns should generally be easier to recognize than verbs in fluent speech. However, there are situations in which distributional statistics provide similar support for verbs. The statistics for verbs that occur with the English morpheme -ing, for example, should facilitate verb recognition. In two experiments with 7.5- and 9.5-month-old infants, we tested the importance of distributional statistics for word recognition by varying the frequency of the contextual frames in which verbs occur. The results support the conclusion that distributional statistics are utilized by infant language learners and contribute to noun-verb differences in word recognition. Copyright © 2014. Published by Elsevier B.V.

  7. A Danish open-set speech corpus for competing-speech studies

    DEFF Research Database (Denmark)

    Nielsen, Jens Bo; Dau, Torsten; Neher, Tobias

    2014-01-01

    Studies investigating speech-on-speech masking effects commonly use closed-set speech materials such as the coordinate response measure [Bolia et al. (2000). J. Acoust. Soc. Am. 107, 1065-1066]. However, these studies typically result in very low (i.e., negative) speech recognition thresholds (SRTs) when the competing speech signals are spatially separated. To achieve higher SRTs that correspond more closely to natural communication situations, an open-set, low-context, multi-talker speech corpus was developed. Three sets of 268 unique Danish sentences were created, and each set was recorded with one of three professional female talkers. The intelligibility of each sentence in the presence of speech-shaped noise was measured. For each talker, 200 approximately equally intelligible sentences were then selected and systematically distributed into 10 test lists. Test list homogeneity was assessed...

  8. Speech entrainment enables patients with Broca’s aphasia to produce fluent speech

    Science.gov (United States)

    Hubbard, H. Isabel; Hudspeth, Sarah Grace; Holland, Audrey L.; Bonilha, Leonardo; Fromm, Davida; Rorden, Chris

    2012-01-01

    A distinguishing feature of Broca’s aphasia is non-fluent halting speech typically involving one to three words per utterance. Yet, despite such profound impairments, some patients can mimic audio-visual speech stimuli enabling them to produce fluent speech in real time. We call this effect ‘speech entrainment’ and reveal its neural mechanism as well as explore its usefulness as a treatment for speech production in Broca’s aphasia. In Experiment 1, 13 patients with Broca’s aphasia were tested in three conditions: (i) speech entrainment with audio-visual feedback where they attempted to mimic a speaker whose mouth was seen on an iPod screen; (ii) speech entrainment with audio-only feedback where patients mimicked heard speech; and (iii) spontaneous speech where patients spoke freely about assigned topics. The patients produced a greater variety of words using audio-visual feedback compared with audio-only feedback and spontaneous speech. No difference was found between audio-only feedback and spontaneous speech. In Experiment 2, 10 of the 13 patients included in Experiment 1 and 20 control subjects underwent functional magnetic resonance imaging to determine the neural mechanism that supports speech entrainment. Group results with patients and controls revealed greater bilateral cortical activation for speech produced during speech entrainment compared with spontaneous speech at the junction of the anterior insula and Brodmann area 47, in Brodmann area 37, and unilaterally in the left middle temporal gyrus and the dorsal portion of Broca’s area. Probabilistic white matter tracts constructed for these regions in the normal subjects revealed a structural network connected via the corpus callosum and ventral fibres through the extreme capsule. Unilateral areas were connected via the arcuate fasciculus. In Experiment 3, all patients included in Experiment 1 participated in a 6-week treatment phase using speech entrainment to improve speech production

  9. Out of the Mouths of Babes: Vocal Production in Infant Siblings of Children with ASD

    Science.gov (United States)

    Paul, Rhea; Fuerst, Yael; Ramsay, Gordon; Chawarska, Kasia; Klin, Ami

    2011-01-01

    Background: Younger siblings of children with autism spectrum disorders (ASD) are at higher risk for acquiring these disorders than the general population. Language development is usually delayed in children with ASD. The present study examines the development of pre-speech vocal behavior in infants at risk for ASD due to the presence of an older…

  10. Asymmetric Dynamic Attunement of Speech and Gestures in the Construction of Children's Understanding.

    Science.gov (United States)

    De Jonge-Hoekstra, Lisette; Van der Steen, Steffie; Van Geert, Paul; Cox, Ralf F A

    2016-01-01

    As children learn, they use their speech to express words and their hands to gesture. This study investigates the interplay between real-time gestures and speech as children construct cognitive understanding during a hands-on science task. Twelve children (M = 6, F = 6) from kindergarten (n = 5) and first grade (n = 7) participated in this study. Each verbal utterance and gesture during the task was coded on a complexity scale derived from dynamic skill theory. To explore the interplay between speech and gestures, we applied a cross recurrence quantification analysis (CRQA) to the two coupled time series of the skill levels of verbalizations and gestures. The analysis focused on (1) the temporal relation between gestures and speech, (2) the relative strength and direction of the interaction between gestures and speech, (3) the relative strength and direction between gestures and speech for different levels of understanding, and (4) relations between CRQA measures and other child characteristics. The results show that older and younger children differ in the (temporal) asymmetry of the gestures-speech interaction. For younger children, the balance leans more toward gestures leading speech in time, while for older children it leans more toward speech leading gestures. Secondly, at the group level, speech attracts gestures in a more dynamically stable fashion than vice versa, and this asymmetry between gestures and speech extends to lower and higher understanding levels. Yet, for older children, the mutual coupling between gestures and speech is more dynamically stable at the higher understanding levels. Gestures and speech are more synchronized in time as children get older. A higher score on schools' language tests is related to speech attracting gestures more rigidly and to more asymmetry between gestures and speech, only for the less difficult understanding levels. A higher score on math or past science tasks is related to less asymmetry between gestures and
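
    The cross-recurrence idea at the core of CRQA can be sketched minimally: two coded time series recur wherever their categorical levels match, and the balance of matches above versus below the matrix diagonal indicates which series tends to lead in time. The two level sequences below are invented toy data, and real CRQA computes further measures (e.g., line-based determinism) not shown here:

```python
# Invented skill-level codings for one child; speech lags gestures here.
gestures = [1, 1, 2, 2, 3, 3, 3, 4]
speech   = [1, 1, 1, 2, 2, 3, 3, 3]

n = len(gestures)
# Cross-recurrence matrix: cr[i][j] = 1 where the gesture level at time i
# equals the speech level at time j.
cr = [[int(gestures[i] == speech[j]) for j in range(n)] for i in range(n)]

total = sum(sum(row) for row in cr)
rr = total / (n * n)  # recurrence rate: overall coupling strength
# Asymmetry: recurrences above vs. below the main diagonal.
upper = sum(cr[i][j] for i in range(n) for j in range(n) if j > i)
lower = sum(cr[i][j] for i in range(n) for j in range(n) if j < i)
print(rr, upper, lower)  # upper > lower: speech reaches each level later
```

Here the upper triangle dominates because speech matches gesture levels only at later time points, the signature of gestures leading speech that the study reports for younger children.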

  11. Multimodal Speech Capture System for Speech Rehabilitation and Learning.

    Science.gov (United States)

    Sebkhi, Nordine; Desai, Dhyey; Islam, Mohammad; Lu, Jun; Wilson, Kimberly; Ghovanloo, Maysam

    2017-11-01

    Speech-language pathologists (SLPs) are trained to correct articulation of people diagnosed with motor speech disorders by analyzing articulators' motion and assessing speech outcome while patients speak. To assist SLPs in this task, we are presenting the multimodal speech capture system (MSCS) that records and displays kinematics of key speech articulators, the tongue and lips, along with voice, using unobtrusive methods. Collected speech modalities, tongue motion, lips gestures, and voice are visualized not only in real-time to provide patients with instant feedback but also offline to allow SLPs to perform post-analysis of articulators' motion, particularly the tongue, with its prominent but hardly visible role in articulation. We describe the MSCS hardware and software components, and demonstrate its basic visualization capabilities by a healthy individual repeating the words "Hello World." A proof-of-concept prototype has been successfully developed for this purpose, and will be used in future clinical studies to evaluate its potential impact on accelerating speech rehabilitation by enabling patients to speak naturally. Pattern matching algorithms to be applied to the collected data can provide patients with quantitative and objective feedback on their speech performance, unlike current methods that are mostly subjective, and may vary from one SLP to another.

  12. Speech motor coordination in Dutch-speaking children with DAS studied with EMMA

    NARCIS (Netherlands)

    Nijland, L.; Maassen, B.A.M.; Hulstijn, W.; Peters, H.F.M.

    2004-01-01

    Developmental apraxia of speech (DAS) is generally classified as a 'speech motor' disorder. Direct measurement of articulatory movement is, however, virtually non-existent. In the present study we investigated the coordination between articulators in children with DAS using kinematic measurements.

  13. Speech Motor Control in Fluent and Dysfluent Speech Production of an Individual with Apraxia of Speech and Broca's Aphasia

    Science.gov (United States)

    van Lieshout, Pascal H. H. M.; Bose, Arpita; Square, Paula A.; Steele, Catriona M.

    2007-01-01

    Apraxia of speech (AOS) is typically described as a motor-speech disorder with clinically well-defined symptoms, but without a clear understanding of the underlying problems in motor control. A number of studies have compared the speech of subjects with AOS to the fluent speech of controls, but only a few have included speech movement data and if…

  14. The influence of spectral characteristics of early reflections on speech intelligibility

    DEFF Research Database (Denmark)

    Arweiler, Iris; Buchholz, Jörg

    2011-01-01

    The auditory system takes advantage of early reflections (ERs) in a room by integrating them with the direct sound (DS) and thereby increasing the effective speech level. In the present paper, the benefit of realistic ERs for speech intelligibility in diffuse speech-shaped noise was investigated … ascribed to their altered spectrum compared to the DS and to the filtering by the torso, head, and pinna. No binaural processing other than a binaural summation effect could be observed.

  15. Markers of Deception in Italian Speech

    Directory of Open Access Journals (Sweden)

    Katelyn Spence

    2012-10-01

    Lying is a universal activity and the detection of lying a universal concern. Presently, there is great interest in determining objective measures of deception. The examination of speech, in particular, holds promise in this regard; yet, most of what we know about the relationship between speech and lying is based on the assessment of English-speaking participants. Few studies have examined indicators of deception in languages other than English. The world’s languages differ in significant ways, and cross-linguistic studies of deceptive communications are a research imperative. Here we review some of these differences amongst the world’s languages, and provide an overview of a number of recent studies demonstrating that cross-linguistic research is a worthwhile endeavour. In addition, we report the results of an empirical investigation of pitch, response latency, and speech rate as cues to deception in Italian speech. True and false opinions were elicited in an audio-taped interview. A within subjects analysis revealed no significant difference between the average pitch of the two conditions; however, speech rate was significantly slower, while response latency was longer, during deception compared with truth-telling. We explore the implications of these findings and propose directions for future research, with the aim of expanding the cross-linguistic branch of research on markers of deception.

  16. Long-term outcome in term breech infants with low Apgar score--a population-based follow-up

    DEFF Research Database (Denmark)

    Krebs, L; Langhoff-Roos, J; Thorngren-Jerneck, K

    2001-01-01

    and 218 controls. RESULTS: Four cases (4.6%) and one control (0.5%) had cerebral palsy. In infants without cerebral palsy, speech/language problems were more frequent than in controls (10.6 versus 3.2%) (P=0.02). There were no differences in rates of deficits in attention, motor control and perception (DAMP...

  17. The impact of direct speech framing expressions on the narrative: a contrastive case study of Gabriel García Márquez’s Buen viaje, señor Presidente and its English translation

    Directory of Open Access Journals (Sweden)

    Jadwiga Linde-Usiekniewicz

    2014-09-01

    This paper discusses an application of Relevance Theory methodology to the analysis of a literary text: Gabriel García Márquez’s short story “Buen viaje, señor Presidente” and its English translation. A “close reading” carried out on a linguistic rather than literary basis allows for adding yet another layer of interpretation to this complex story. The analysis concentrates on the representation of direct speech, and particularly on the impact of direct speech framing clauses on the reading of dialogic turns. Specifically, it is argued that the explicit mention of the addressee by indirect object pronouns (which are optional in direct speech framing turns in Spanish) makes the tension between the two protagonists even more palpable, so that apparently courteous turns can be interpreted as defiant or otherwise antagonistic. In English, a similar role is played by the contrast between the absence of quotative inversion with subject pronouns and its presence when speakers are identified by full nominals. The parallel effect in both linguistic versions is traced to the distinction between linguistic items carrying mainly conceptual meaning (nominals) and those carrying mainly procedural meaning (pronouns), and to the different ways these two kinds of elements are processed in comprehension. The paper also provides some arguments for leaving aside literary considerations and treating a literary text as an act of ostensive communication.

  18. Comparison of analgesic effect of direct breastfeeding, oral 25% dextrose solution and placebo during 1st DPT vaccination in healthy term infants: a randomized, placebo controlled trial.

    Science.gov (United States)

    Goswami, Gaurav; Upadhyay, Amit; Gupta, Navratan Kumar; Chaudhry, Rajesh; Chawla, Deepak; Sreenivas, V

    2013-07-01

    To compare the analgesic effect of direct breastfeeding, 25% dextrose solution and placebo during the first intramuscular whole-cell DPT injection in infants aged 6 weeks to 3 months. Randomized, placebo-controlled trial. Immunization clinic of the Department of Pediatrics, LLRM Medical College. Infants coming for their first DPT vaccination were randomized into three groups of 40 each. The primary outcome variable was the duration of cry after vaccination. Secondary outcome variables were the Modified Facial Coding Score (MFCS) and the latency of onset of cry. 120 babies were enrolled equally into the breastfed, 25% dextrose-fed and distilled-water-fed groups. Median (interquartile range) duration of cry was significantly lower in breastfed (33.5 (17-54) seconds) and 25% dextrose-fed babies (47.5 (31-67.5) seconds) than in babies given distilled water (80.5 (33.5-119.5) seconds) (P<0.001). MFCS at 1 min and 3 min was significantly lower in directly breastfed and dextrose-fed babies. Direct breastfeeding and 25% dextrose act as analgesics in infants less than 3 months of age undergoing DPT vaccination.

  19. Effects of directional microphone and adaptive multichannel noise reduction algorithm on cochlear implant performance.

    Science.gov (United States)

    Chung, King; Zeng, Fan-Gang; Acker, Kyle N

    2006-10-01

    Although cochlear implant (CI) users have enjoyed good speech recognition in quiet, they still have difficulties understanding speech in noise. We conducted three experiments to determine whether a directional microphone and an adaptive multichannel noise reduction algorithm could enhance CI performance in noise and whether Speech Transmission Index (STI) can be used to predict CI performance in various acoustic and signal processing conditions. In Experiment I, CI users listened to speech in noise processed by 4 hearing aid settings: omni-directional microphone, omni-directional microphone plus noise reduction, directional microphone, and directional microphone plus noise reduction. The directional microphone significantly improved speech recognition in noise. Both directional microphone and noise reduction algorithm improved overall preference. In Experiment II, normal hearing individuals listened to the recorded speech produced by 4- or 8-channel CI simulations. The 8-channel simulation yielded similar speech recognition results as in Experiment I, whereas the 4-channel simulation produced no significant difference among the 4 settings. In Experiment III, we examined the relationship between STIs and speech recognition. The results suggested that STI could predict actual and simulated CI speech intelligibility with acoustic degradation and the directional microphone, but not the noise reduction algorithm. Implications for intelligibility enhancement are discussed.

  20. Cyclin D1, Id1 and EMT in breast cancer

    International Nuclear Information System (INIS)

    Tobin, Nicholas P; Sims, Andrew H; Lundgren, Katja L; Lehn, Sophie; Landberg, Göran

    2011-01-01

    Cyclin D1 is a well-characterised cell cycle regulator with established oncogenic capabilities. Despite these properties, studies report contrasting links to tumour aggressiveness. It has previously been shown that silencing cyclin D1 increases the migratory capacity of MDA-MB-231 breast cancer cells with a concomitant increase in 'inhibitor of differentiation 1' (ID1) gene expression. Id1 is known to be associated with more invasive features of cancer and with the epithelial-mesenchymal transition (EMT). Here, we sought to determine whether the increase in cell motility following cyclin D1 silencing was mediated by Id1 and enhanced EMT features. To further substantiate these findings, we aimed to delineate the link between CCND1, ID1 and EMT, as well as clinical properties in primary breast cancer. Protein and gene expression of ID1, CCND1 and EMT markers were determined in MDA-MB-231 and ZR75 cells by western blot and qPCR. Cell migration and promoter occupancy were monitored by transwell and ChIP assays, respectively. Gene expression was analysed from publicly available datasets. The increase in cell migration following cyclin D1 silencing in MDA-MB-231 cells was abolished by Id1 siRNA treatment, and we observed cyclin D1 occupancy of the Id1 promoter region. Moreover, ID1 and SNAI2 gene expression was increased following cyclin D1 knock-down, an effect reversed with Id1 siRNA treatment. Similar migratory and SNAI2 increases were noted for the ER-positive ZR75-1 cell line, but in an Id1-independent manner. In a meta-analysis of 1107 breast cancer samples, CCND1-low/ID1-high tumours displayed increased expression of EMT markers and were associated with reduced recurrence-free survival. Finally, a greater percentage of CCND1-low/ID1-high tumours were found in the EMT-like 'claudin-low' subtype of breast cancer than in other subtypes. These results indicate that increased migration of MDA-MB-231 cells following cyclin D1 silencing can be mediated by Id1.

  1. FlowIDS

    OpenAIRE

    Sabolčák, Peter

    2006-01-01

    FlowIDS is a system that can detect some kinds of undesirable traffic in computer networks (undesirable traffic can also be, e.g., virus activity or network overload) and, in most cases, take countermeasures configured by the administrator. Information about data flows is provided by the network infrastructure hardware; elimination of undesirable activity is done through changes to network hardware settings. Given the number of solutions available on the market, I decided to focus on Cisco n...

  2. Parent-child interaction in motor speech therapy.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Jethava, Vibhuti; Pukonen, Margit; Huynh, Anna; Goshulak, Debra; Kroll, Robert; van Lieshout, Pascal

    2018-01-01

    This study measures the reliability and sensitivity of a modified Parent-Child Interaction Observation scale (PCIOs) used to monitor the quality of parent-child interaction. The scale is part of a home-training program employed with direct motor speech intervention for children with speech sound disorders. Eighty-four preschool-age children with speech sound disorders were provided either high-intensity (2×/week for 10 weeks) or low-intensity (1×/week for 10 weeks) motor speech intervention. Clinicians completed the PCIOs at the beginning, middle, and end of treatment. Inter-rater reliability (Kappa scores) was determined by an independent speech-language pathologist who assessed videotaped sessions at the midpoint of the treatment block. Intervention sensitivity of the scale was evaluated using a Friedman test for each item, followed up with Wilcoxon pairwise comparisons where appropriate. We obtained fair-to-good inter-rater reliability (Kappa = 0.33-0.64) for the PCIOs using only video-based scoring. Child-related items were more strongly influenced by differences in treatment intensity than parent-related items, where a greater number of sessions positively influenced parent learning of treatment skills and child behaviors. The adapted PCIOs is reliable and sensitive for monitoring the quality of parent-child interactions in a 10-week block of motor speech intervention with adjunct home therapy. Implications for rehabilitation: Parent-centered therapy is considered a cost-effective method of speech and language service delivery. However, parent-centered models may be difficult to implement for treatments such as developmental motor speech interventions that require a high degree of skill and training. For children with speech sound disorders and motor speech difficulties, a translated and adapted version of the parent-child observation scale was found to be sufficiently reliable and sensitive to assess changes in the quality of the parent-child interactions during
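
    The Kappa figures quoted above are Cohen's kappa, agreement between two raters corrected for chance. A minimal sketch follows; the two raters' item scores are invented, chosen only so the result lands in the study's reported 0.33-0.64 range:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical scores."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Expected agreement if each rater assigned categories independently.
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

a = [1, 1, 2, 2, 3, 3, 1, 2]  # invented scores, rater A
b = [1, 1, 2, 3, 3, 3, 1, 1]  # invented scores, rater B
print(round(cohens_kappa(a, b), 2))  # 0.63
```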

  3. Infant word recognition: Insights from TRACE simulations☆

    Science.gov (United States)

    Mayor, Julien; Plunkett, Kim

    2014-01-01

    The TRACE model of speech perception (McClelland & Elman, 1986) is used to simulate results from the infant word recognition literature, to provide a unified, theoretical framework for interpreting these findings. In a first set of simulations, we demonstrate how TRACE can reconcile apparently conflicting findings suggesting, on the one hand, that consonants play a pre-eminent role in lexical acquisition (Nespor, Peña & Mehler, 2003; Nazzi, 2005), and on the other, that there is a symmetry in infant sensitivity to vowel and consonant mispronunciations of familiar words (Mani & Plunkett, 2007). In a second series of simulations, we use TRACE to simulate infants’ graded sensitivity to mispronunciations of familiar words as reported by White and Morgan (2008). An unexpected outcome is that TRACE fails to demonstrate graded sensitivity for White and Morgan’s stimuli unless the inhibitory parameters in TRACE are substantially reduced. We explore the ramifications of this finding for theories of lexical development. Finally, TRACE mimics the impact of phonological neighbourhoods on early word learning reported by Swingley and Aslin (2007). TRACE offers an alternative explanation of these findings in terms of mispronunciations of lexical items rather than imputing word learning to infants. Together these simulations provide an evaluation of Developmental (Jusczyk, 1993) and Familiarity (Metsala, 1999) accounts of word recognition by infants and young children. The findings point to a role for both theoretical approaches whereby vocabulary structure and content constrain infant word recognition in an experience-dependent fashion, and highlight the continuity in the processes and representations involved in lexical development during the second year of life. PMID:24493907

  4. Comparison of the reliability of parental reporting and the direct test of the Thai Speech and Language Test.

    Science.gov (United States)

    Prathanee, Benjamas; Angsupakorn, Nipa; Pumnum, Tawitree; Seepuaham, Cholada; Jaiyong, Pechcharat

    2012-11-01

    To determine the reliability of parental or caregivers' reporting and of direct testing with the Thai Speech and Language Test for Children Aged 0-4 Years Old. Five investigators assessed speech and language abilities from video in both contexts: the parental or caregivers' report form and the direct test form of the Thai Speech and Language Test for Children Aged 0-4 Years Old. Twenty-five normal children and 30 children with delayed development or at risk for delayed speech and language skills were assessed at age intervals of 3, 6, 9, 12, 15, 18, 24, 30, 36 and 48 months. Reliability between parental or caregivers' reporting and direct testing was at a moderate level (0.41-0.60). Inter-rater reliability among investigators was excellent (0.86-1.00). The parental or caregivers' report form of the Thai Speech and Language Test for Children Aged 0-4 Years Old was a moderately reliable indicator. Trained professionals can use both forms of this test as reliable tools at an excellent level.

  5. Accountability Steps for Highly Reluctant Speech: Tiered-Services Consultation in a Head Start Classroom

    Science.gov (United States)

    Howe, Heather; Barnett, David

    2013-01-01

    This consultation description reports parent and teacher problem solving for a preschool child with no typical speech directed to teachers or peers, and, by parent report, normal speech at home. This child's initial pattern of speech was similar to selective mutism, a low-incidence disorder often first detected during the preschool years, but…

  6. Learning to walk changes infants' social interactions.

    Science.gov (United States)

    Clearfield, Melissa W

    2011-02-01

    The onset of crawling marks a motor, cognitive and social milestone. The present study investigated whether independent walking marks a second milestone for social behaviors. In Experiment 1, the social and exploratory behaviors of crawling infants were observed while crawling and in a baby-walker, resulting in no differences based on posture. In Experiment 2, the social behaviors of independently walking infants were compared to age-matched crawling infants in a baby-walker. Independently walking infants spent significantly more time interacting with the toys and with their mothers, and also made more vocalizations and more directed gestures compared to infants in the walker. Experiment 3 tracked infants' social behaviors longitudinally across the transition from crawling and walking. Even when controlled for age, the transition to independent walking marked increased interaction time with mothers, as well as more sophisticated interactions, including directing mothers' attention to particular objects. The results suggest a developmental progression linking social interactions with milestones in locomotor development. Copyright © 2010 Elsevier Inc. All rights reserved.

  7. Infants' Temperament and Mothers', and Fathers' Depression Predict Infants' Attention to Objects Paired with Emotional Faces

    OpenAIRE

    Aktar, Evin; Mandell, Dorothy J.; de Vente, Wieke; Majdandžić, Mirjana; Raijmakers, Maartje E. J.; Bögels, Susan M.

    2015-01-01

    Between 10 and 14 months, infants gain the ability to learn about unfamiliar stimuli by observing others’ emotional reactions to those stimuli, so called social referencing (SR). Joint processing of emotion and head/gaze direction is essential for SR. This study tested emotion and head/gaze direction effects on infants’ attention via pupillometry in the period following the emergence of SR. Pupil responses of 14-to-17-month-old infants (N = 57) were measured during computerized presentations ...

  8. Aliénation et idéologie

    OpenAIRE

    Kanabus, Benoît; Popa, Délia

    2017-01-01

    From there, we can better hear the question on which the thousand pages of the Marx close: "Marx's thought places us before the abyssal question: what is life?" This question can be taken up afresh in the age of advanced capitalism, in which, as Adorno noted, life has become "the ideology of its own absence." The search for another relation to ideology then takes over from critical theory's anti-ideological struggle. This search starts from the ...

  9. Infant emotion regulation: relations to bedtime emotional availability, attachment security, and temperament.

    Science.gov (United States)

    Kim, Bo-Ram; Stifter, Cynthia A; Philbrook, Lauren E; Teti, Douglas M

    2014-11-01

    The present study examines the influences of mothers' emotional availability toward their infants during bedtime, infant attachment security, and interactions between bedtime parenting and attachment with infant temperamental negative affectivity, on infants' emotion regulation strategy use at 12 and 18 months. Infants' emotion regulation strategies were assessed during a frustration task that required infants to regulate their emotions in the absence of parental support. Whereas emotional availability was not directly related to infants' emotion regulation strategies, infant attachment security had direct relations with infants' orienting toward the environment and tension reduction behaviors. Both maternal emotional availability and security of the mother-infant attachment relationship interacted with infant temperamental negative affectivity to predict two strategies that were less adaptive in regulating frustration. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Content Analysis of Language-Promoting Teaching Strategies Used in Infant-Directed Media

    Science.gov (United States)

    Vaala, Sarah E.; Linebarger, Deborah L.; Fenstermacher, Susan K.; Tedone, Ashley; Brey, Elizabeth; Barr, Rachel; Moses, Annie; Shwery, Clay E.; Calvert, Sandra L.

    2010-01-01

    The number of videos produced specifically for infants and toddlers has grown exponentially in the last decade. Many of these products make educational claims regarding young children's language development. This study explores infant media producer claims regarding language development, and the extent to which these claims reflect different…

  11. Enhancement of speech signals - with a focus on voiced speech models

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie

    This thesis deals with speech enhancement, i.e., noise reduction in speech signals. This has applications in, e.g., hearing aids and teleconference systems. We consider a signal-driven approach to speech enhancement where a model of the speech is assumed and filters are generated based on this model. The basic model used in this thesis is the harmonic model, which is a commonly used model for describing the voiced part of the speech signal. We show that it can be beneficial to extend the model to take inharmonicities or the non-stationarity of speech into account. Extending the model...

  12. ID3 contributes to cerebrospinal fluid seeding and poor prognosis in medulloblastoma

    International Nuclear Information System (INIS)

    Phi, Ji Hoon; Choi, Seung Ah; Lim, Sang-Hee; Lee, Joongyub; Wang, Kyu-Chang; Park, Sung-Hye; Kim, Seung-Ki

    2013-01-01

    The inhibitor of differentiation (ID) genes have been implicated as promoters of tumor progression and metastasis in many human cancers. The current study investigated the expression and functional roles of ID genes in seeding and prognosis of medulloblastoma. ID gene expression was screened in human medulloblastoma tissues. Knockdown of the ID3 gene was performed in medulloblastoma cells in vitro. The expression of metastasis-related genes after ID3 knockdown was assessed. The effect of ID3 knockdown on tumor seeding was observed in an animal model in vivo. The survival of medulloblastoma patients was plotted according to ID3 expression levels. Significantly higher ID3 expression was observed in medulloblastoma with cerebrospinal fluid seeding than in tumors without seeding. Knockdown of ID3 decreased proliferation, increased apoptosis, and suppressed the migration of D283 medulloblastoma cells in vitro. In a seeding model of medulloblastoma, ID3 knockdown in vivo with shRNA inhibited the growth of primary tumors, prevented the development of leptomeningeal seeding, and prolonged animal survival. High ID3 expression was associated with shorter survival of medulloblastoma patients, especially in Group 4 medulloblastomas. High ID3 expression is associated with medulloblastoma seeding and is a poor prognostic factor, especially in patients with Group 4 tumors. ID3 may represent the metastatic/aggressive phenotype of a subgroup of medulloblastoma.

  13. Intelligibility for Binaural Speech with Discarded Low-SNR Speech Components.

    Science.gov (United States)

    Schoenmaker, Esther; van de Par, Steven

    2016-01-01

    Speech intelligibility in multitalker settings improves when the target speaker is spatially separated from the interfering speakers. A factor that may contribute to this improvement is the improved detectability of target-speech components due to binaural interaction in analogy to the Binaural Masking Level Difference (BMLD). This would allow listeners to hear target speech components within specific time-frequency intervals that have a negative SNR, similar to the improvement in the detectability of a tone in noise when these contain disparate interaural difference cues. To investigate whether these negative-SNR target-speech components indeed contribute to speech intelligibility, a stimulus manipulation was performed where all target components were removed when local SNRs were smaller than a certain criterion value. It can be expected that for sufficiently high criterion values target speech components will be removed that do contribute to speech intelligibility. For spatially separated speakers, assuming that a BMLD-like detection advantage contributes to intelligibility, degradation in intelligibility is expected already at criterion values below 0 dB SNR. However, for collocated speakers it is expected that higher criterion values can be applied without impairing speech intelligibility. Results show that degradation of intelligibility for separated speakers is only seen for criterion values of 0 dB and above, indicating a negligible contribution of a BMLD-like detection advantage in multitalker settings. These results show that the spatial benefit is related to a spatial separation of speech components at positive local SNRs rather than to a BMLD-like detection improvement for speech components at negative local SNRs.
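
The stimulus manipulation described above, removing target components wherever the local SNR falls below a criterion, amounts to a binary time-frequency mask. A minimal sketch; the spectrogram magnitudes below are invented, and a real implementation would operate on STFT magnitudes of the target and interfering speech:

```python
from math import log10

def discard_low_snr(target_tf, masker_tf, criterion_db=0.0):
    """Zero out target time-frequency cells whose local SNR is below the
    criterion. target_tf / masker_tf: magnitude spectrograms as nested
    lists (frequency x time)."""
    eps = 1e-12  # avoid log of zero
    processed, keep = [], []
    for t_row, m_row in zip(target_tf, masker_tf):
        p_row, k_row = [], []
        for t, m in zip(t_row, m_row):
            snr_db = 20 * log10((t + eps) / (m + eps))
            k = snr_db >= criterion_db
            k_row.append(k)
            p_row.append(t if k else 0.0)
        processed.append(p_row)
        keep.append(k_row)
    return processed, keep

# Toy 2x3 spectrograms (arbitrary magnitudes, illustrative only)
target = [[1.0, 0.2, 0.5],
          [0.1, 2.0, 0.3]]
masker = [[0.5, 0.4, 0.5],
          [0.2, 1.0, 0.9]]
processed, keep = discard_low_snr(target, masker, criterion_db=0.0)
print(keep)  # only cells at 0 dB local SNR or better survive
```

Raising `criterion_db` discards progressively more of the target, which is how the study probed whether negative-SNR components contribute to intelligibility.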

  14. An experimental Dutch keyboard-to-speech system for the speech impaired

    NARCIS (Netherlands)

    Deliege, R.J.H.

    1989-01-01

    An experimental Dutch keyboard-to-speech system has been developed to explore the possibilities and limitations of Dutch speech synthesis in a communication aid for the speech impaired. The system uses diphones and a formant synthesizer chip for speech synthesis. Input to the system is in

  15. Speech Function and Speech Role in Carl Fredricksen's Dialogue on Up Movie

    OpenAIRE

    Rehana, Ridha; Silitonga, Sortha

    2013-01-01

    One aim of this article is to show, through a concrete example, how speech function and speech role are used in a movie. The illustrative example is taken from the dialogue of the Up movie. Central to the analysis is the form of dialogue in the Up movie that contains speech functions and speech roles, i.e. statement, offer, question, command, giving, and demanding. 269 dialogue lines as delivered by the actors were interpreted, and the use of speech functions and speech roles was identified.

  16. The ID-ticket makes its triumphal march in Tallinn / Anneli Lepp

    Index Scriptorium Estoniae

    Lepp, Anneli

    2006-01-01

    See also Postimees: in Russian, 12 Jan., p. 3. According to Mart Moosus, chief economist of the Tallinn Transport Department, the ID-ticket accounted for 60.7% of public transport ticket revenue in Tallinn, while in Tartu the corresponding figure was below 1%. Tiina Telling, finance director of AS Connex Eesti, the bus company serving Tartu, attributes the large difference to the fact that in Tallinn the ID-ticket is 40-88% cheaper than the paper ticket, whereas in Tartu the price difference is absent or even negative, to the ID-ticket's disadvantage. Supplement: ID-pilet

  17. PERSONAL BRANDING OF PRABOWO SUBIANTO (A QUANTITATIVE CONTENT ANALYSIS OF PRABOWO SUBIANTO'S PERSONAL BRANDING ON THE ONLINE NEWS SITES REPUBLIKA.CO.ID AND TEMPO.CO.ID, 9 JUNE - 9 JULY 2014)

    Directory of Open Access Journals (Sweden)

    Hendro Agus Prakoso

    2016-09-01

    ... the news coverage of the two media. The news items numbered 107 on Republika.co.id and 51 on Tempo.co.id. The range of Republika.co.id's coverage was dominated by the concept of Specialization, at 22.4% of 107 items, while Tempo.co.id's was dominated by the concept of Good reputation, at 23.5% of 51 items. News sources on Republika.co.id came mostly from direct reporting (42.9% of 107 items), whereas on Tempo.co.id the majority came from intellectual sources (54.9% of 51 items).

  18. Experimental comparison between speech transmission index, rapid speech transmission index, and speech intelligibility index.

    Science.gov (United States)

    Larm, Petra; Hongisto, Valtteri

    2006-02-01

    During the acoustical design of, e.g., auditoria or open-plan offices, it is important to know how speech can be perceived in various parts of the room. Different objective methods have been developed to measure and predict speech intelligibility, and these have been extensively used in various spaces. In this study, two such methods were compared, the speech transmission index (STI) and the speech intelligibility index (SII). Also the simplification of the STI, the room acoustics speech transmission index (RASTI), was considered. These quantities are all based on determining an apparent speech-to-noise ratio on selected frequency bands and summing them using a specific weighting. For comparison, some data were needed on the possible differences of these methods resulting from the calculation scheme and also measuring equipment. Their prediction accuracy was also of interest. Measurements were made in a laboratory having adjustable noise level and absorption, and in a real auditorium. It was found that the measurement equipment, especially the selection of the loudspeaker, can greatly affect the accuracy of the results. The prediction accuracy of the RASTI was found acceptable, if the input values for the prediction are accurately known, even though the studied space was not ideally diffuse.
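
The common core of these measures, an apparent speech-to-noise ratio per frequency band, clamped, rescaled to a 0-1 audibility, and summed with band-importance weights, can be sketched as follows. This is a simplified SII-style calculation with hypothetical band SNRs and flat weights, not the full standardized procedure (the STI additionally derives its apparent SNRs from modulation transfer functions):

```python
def snr_based_index(band_snrs_db, band_weights):
    """Simplified SII/STI-style index: clamp each band SNR to [-15, +15] dB,
    rescale to 0..1, and sum using band-importance weights that total 1."""
    assert abs(sum(band_weights) - 1.0) < 1e-9
    index = 0.0
    for snr, w in zip(band_snrs_db, band_weights):
        clamped = max(-15.0, min(15.0, snr))   # SNRs beyond +/-15 dB saturate
        index += w * (clamped + 15.0) / 30.0   # map [-15, 15] dB -> [0, 1]
    return index

# Hypothetical octave-band SNRs and flat importance weights
snrs = [20.0, 5.0, -3.0, -20.0]
weights = [0.25, 0.25, 0.25, 0.25]
print(round(snr_based_index(snrs, weights), 3))  # 0.517
```

The clamping explains why a very clean band cannot compensate for a very noisy one beyond a point, a property shared by STI, RASTI, and SII.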

  19. Asymmetric dynamic attunement of speech and gestures in the construction of children’s understanding

    Directory of Open Access Journals (Sweden)

    Lisette eDe Jonge-Hoekstra

    2016-03-01

    Full Text Available As children learn, they use their speech to express words and their hands to gesture. This study investigates the interplay between real-time gestures and speech as children construct cognitive understanding during a hands-on science task. 12 children (M = 6, F = 6) from Kindergarten (n = 5) and first grade (n = 7) participated in this study. Each verbal utterance and gesture during the task was coded on a complexity scale derived from dynamic skill theory. To explore the interplay between speech and gestures, we applied a cross recurrence quantification analysis (CRQA) to the two coupled time series of the skill levels of verbalizations and gestures. The analysis focused on (1) the temporal relation between gestures and speech, (2) the relative strength and direction of the interaction between gestures and speech, (3) the relative strength and direction between gestures and speech for different levels of understanding, and (4) relations between CRQA measures and other child characteristics. The results show that older and younger children differ in the (temporal) asymmetry in the gestures-speech interaction. For younger children, the balance leans more towards gestures leading speech in time, while for older children the balance leans more towards speech leading gestures. Secondly, at the group level, speech attracts gestures in a more dynamically stable fashion than vice versa, and this asymmetry between gestures and speech extends to lower and higher understanding levels. Yet, for older children, the mutual coupling between gestures and speech is more dynamically stable at the higher understanding levels. Gestures and speech are more synchronized in time as children grow older. A higher score on schools' language tests is related to speech attracting gestures more rigidly and to more asymmetry between gestures and speech, but only for the less difficult understanding levels. A higher score on math or past science tasks is related to less asymmetry between
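
The cross recurrence idea can be illustrated on two short coded series: a recurrence matrix marks where the two series visit matching states, and the imbalance of recurrences above versus below the line of synchrony indicates which series leads. A toy sketch; the complexity codes are invented, and real CRQA toolboxes add embedding, radius selection, and line-based measures:

```python
def cross_recurrence(x, y, radius=0):
    """Cross recurrence matrix for two coded time series (e.g., skill levels
    of gestures and speech). Cell (i, j) recurs when |x[i] - y[j]| <= radius."""
    return [[1 if abs(xi - yj) <= radius else 0 for yj in y] for xi in x]

def asymmetry(rec):
    """Share of recurrence above vs below the line of synchrony (the main
    diagonal); values > 0 mean x-states tend to precede matching y-states."""
    n, m = len(rec), len(rec[0])
    upper = sum(rec[i][j] for i in range(n) for j in range(m) if j > i)
    lower = sum(rec[i][j] for i in range(n) for j in range(m) if j < i)
    total = upper + lower
    return (upper - lower) / total if total else 0.0

# Hypothetical complexity codes: speech reaches each level one step after gesture
gesture = [1, 2, 2, 3, 3, 4]
speech  = [1, 1, 2, 2, 3, 3]
rec = cross_recurrence(gesture, speech)
print(asymmetry(rec))  # 1.0: all matches lie above the diagonal, gestures lead
```

A positive asymmetry of this kind is the matrix-level signature behind the paper's finding that gestures lead speech in younger children.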

  20. Look Who's Talking: Speech Style and Social Context in Language Input to Infants Are Linked to Concurrent and Future Speech Development

    Science.gov (United States)

    Ramírez-Esparza, Nairán; García-Sierra, Adrián; Kuhl, Patricia K.

    2014-01-01

    Language input is necessary for language learning, yet little is known about whether, in natural environments, the speech style and social context of language input to children impacts language development. In the present study we investigated the relationship between language input and language development, examining both the style of parental…

  1. Motivational Projections of Russian Spontaneous Speech

    Directory of Open Access Journals (Sweden)

    Galina M. Shipitsina

    2017-06-01

    Full Text Available The article deals with the semantic, pragmatic and structural features of the motivation of words, phrases and dialogues in contemporary Russian popular speech. These structural features are characterized by originality and unconventional use. The language material is the result of the authors' direct observation of spontaneous verbal communication between people of different social and age groups. The words and remarks were analyzed in compliance with the communication system of the national Russian language and the cultural background of popular speech. The study found that spoken discourse offers additional ways to increase the expressiveness of a statement. It is important to note that spontaneous speech identifies lacunae in the nominative means of the language and its vocabulary system. It is also shown that prefixation is an effective and regular way of presenting the same action. The most typical forms, ways and means of updating language resources as a result of the linguistic creativity of native speakers were identified.

  2. SPEECH ACT OF ILTIFAT AND ITS INDONESIAN TRANSLATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Zaka Al Farisi

    2015-01-01

    Full Text Available Iltifat (shifting) speech act is distinctive and considered a unique style of Arabic. It invites potential errors when translated into Indonesian. Therefore, the translation of the iltifat speech act into another language is an important issue. The objective of the study is to identify the translation procedures/techniques and the ideology required in dealing with the iltifat speech act. This research is directed at translation as a cognitive product of a translator. The data used in the present study were a corpus of Koranic verses that contain the iltifat speech act, along with their translations. Data analysis used a descriptive-evaluative method with a content analysis model. The data source consisted of the Koran and its translation. A purposive sampling technique was employed, with the sample being the iltifat speech acts contained in the Koran. The results showed that more than 60% of iltifat speech acts were translated using the literal procedure. The significant number of literal translations of the verses asserts that the Ministry of Religious Affairs tended to use the literal method of translation. In other words, the Koran translation made by the Ministry of Religious Affairs tended to be oriented to the source language in dealing with the iltifat speech act. The frequency of the literal procedure shows a tendency toward a foreignization ideology. Transitional pronouns contained in the iltifat speech act can be translated clearly when thick translations are used, in the form of descriptions in parentheses. In this case, explanation can be a choice in translating the iltifat speech act.

  3. Robust Speech/Non-Speech Classification in Heterogeneous Multimedia Content

    NARCIS (Netherlands)

    Huijbregts, M.A.H.; de Jong, Franciska M.G.

    In this paper we present a speech/non-speech classification method that allows high quality classification without the need to know in advance what kinds of audible non-speech events are present in an audio recording and that does not require a single parameter to be tuned on in-domain data. Because

  4. Co-variation of tonality in the music and speech of different cultures.

    Directory of Open Access Journals (Sweden)

    Shui' er Han

    Full Text Available Whereas the use of discrete pitch intervals is characteristic of most musical traditions, the size of the intervals and the way in which they are used is culturally specific. Here we examine the hypothesis that these differences arise because of a link between the tonal characteristics of a culture's music and its speech. We tested this idea by comparing pitch intervals in the traditional music of three tone language cultures (Chinese, Thai and Vietnamese) and three non-tone language cultures (American, French and German) with pitch intervals between voiced speech segments. Changes in pitch direction occur more frequently and pitch intervals are larger in the music of tone compared to non-tone language cultures. More frequent changes in pitch direction and larger pitch intervals are also apparent in the speech of tone compared to non-tone language cultures. These observations suggest that the different tonal preferences apparent in music across cultures are closely related to the differences in the tonal characteristics of voiced speech.
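
Pitch intervals of the kind compared in this study are conventionally expressed in semitones, computed from the frequency ratio of successive segments as 12·log2(f2/f1). A minimal helper (the frequencies below are illustrative):

```python
from math import log2

def interval_semitones(f1_hz, f2_hz):
    """Pitch interval between two fundamental frequencies, in semitones.
    Positive values are rising intervals, negative values falling."""
    return 12 * log2(f2_hz / f1_hz)

# An octave (2:1) is +12 semitones; a perfect fifth (3:2) is about +7
print(round(interval_semitones(220.0, 440.0), 1))  # 12.0
print(round(interval_semitones(220.0, 330.0), 1))  # 7.0
```

Working in semitones rather than Hz makes intervals comparable across speakers and instruments with different registers, which is what allows music and speech of different cultures to be compared directly.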

  5. Greater contribution of cerebral than extracerebral hemodynamics to near-infrared spectroscopy signals for functional activation and resting-state connectivity in infants.

    Science.gov (United States)

    Funane, Tsukasa; Homae, Fumitaka; Watanabe, Hama; Kiguchi, Masashi; Taga, Gentaro

    2014-10-01

    While near-infrared spectroscopy (NIRS) has been increasingly applied to neuroimaging and functional connectivity studies in infants, it has not been quantitatively examined to what extent deep tissue (such as cerebral tissue), as opposed to shallow tissue (such as scalp), contributes to NIRS signals measured in infants. A method for separating the effects of deep- and shallow-tissue layers was applied to data from nine sleeping three-month-old infants who had been exposed to 3-s speech sounds or silence (i.e., resting state) and whose hemodynamic changes over their bilateral temporal cortices had been measured by using an NIRS system with multiple source-detector (S-D) distances. The deep-layer contribution was found to be large during resting [67% at S-D 20 mm, 78% at S-D 30 mm for oxygenated hemoglobin (oxy-Hb)] as well as during the speech condition (72% at S-D 20 mm, 82% at S-D 30 mm for oxy-Hb). A left-right connectivity analysis showed that correlation coefficients between left and right channels did not differ between original- and deep-layer signals under no-stimulus conditions and that those of original- and deep-layer signals were larger than those of the shallow layer. These results suggest that NIRS signals obtained in infants with appropriate S-D distances largely reflect cerebral hemodynamic changes.
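
Multi-distance layer separation of this kind rests on the idea that a short source-detector channel samples mainly shallow (scalp) tissue, so a scaled copy of it can be subtracted from the long channel to estimate the deep component. A schematic sketch with invented time series and a known scaling factor; real methods estimate the scaling from the data rather than assuming it:

```python
def separate_layers(long_ch, short_ch, alpha):
    """Estimate the deep-layer signal by subtracting the scaled short-channel
    (shallow, scalp-dominated) signal from the long channel.
    alpha: assumed shallow-signal scaling between the two S-D distances."""
    return [l - alpha * s for l, s in zip(long_ch, short_ch)]

# Toy series: the long channel mixes a deep response with shallow interference
shallow = [0.0, 0.5, 1.0, 0.5, 0.0]   # scalp hemodynamics (a.u.)
deep    = [0.0, 1.0, 2.0, 2.0, 1.0]   # cerebral response (a.u.)
alpha = 0.6                            # assumed mixing coefficient
long_ch = [d + alpha * s for d, s in zip(deep, shallow)]

recovered = separate_layers(long_ch, shallow, alpha)
print(recovered)  # recovers the deep component (up to float rounding)
```

With the true alpha, the deep series is recovered exactly; with a misestimated alpha, residual scalp signal leaks into the "deep" estimate, which is why the study's percentage contributions matter.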

  6. A functionally significant polymorphism in ID3 is associated with human coronary pathology.

    Directory of Open Access Journals (Sweden)

    Ani Manichaikul

    Full Text Available We previously identified an association between the ID3 SNP rs11574 and carotid intima-media thickness in the Diabetes Heart Study, a predominantly White diabetic population. The nonsynonymous SNP rs11574 results in an amino acid substitution in the C-terminal region of ID3, attenuating the dominant negative function of ID3 as an inhibitor of basic HLH factor E12-mediated transcription. In the current investigation, we characterize the association between the functionally significant polymorphism in ID3, rs11574, and human coronary pathology. The Multi-Ethnic Study of Atherosclerosis (MESA) is a longitudinal study of subclinical cardiovascular disease, including non-Hispanic White (n = 2,588), African American (n = 2,560) and Hispanic (n = 2,130) participants with data on coronary artery calcium (CAC). The Coronary Assessment in Virginia cohort (CAVA) included 71 patients aged 30-80 years undergoing a medically necessary cardiac catheterization and intravascular ultrasound (IVUS) at the University of Virginia. The ID3 SNP rs11574 risk allele was associated with the presence of CAC in MESA Whites (P = 0.017). In addition, the risk allele was associated with greater atheroma burden and stenosis in the CAVA cohort (P = 0.003 and P = 0.04, respectively). The risk allele remained predictive of atheroma burden in multivariate analysis (Model 1: covariates age, gender, and LDL; regression coefficient = 9.578, SE = 3.657, p = 0.0110. Model 2: covariates of Model 1 plus presence of hypertension and presence of diabetes; regression coefficient = 8.389, SE = 4.788, p = 0.0163). We present additional cohorts that demonstrate association of the ID3 SNP rs11574 directly with human coronary artery pathology as measured by CAC and IVUS: one a multiethnic, relatively healthy population with low levels of diabetes, and the second a predominantly White population with a higher incidence of T2DM referred for cardiac catheterization.

  7. Epigenetic inactivation of inhibitor of differentiation 4 (Id4) correlates with prostate cancer

    International Nuclear Information System (INIS)

    Sharma, Pankaj; Chinaranagari, Swathi; Patel, Divya; Carey, Jason; Chaudhary, Jaideep

    2012-01-01

    The inhibitor of DNA-binding (Id) proteins, Id1-4, are negative regulators of basic helix-loop-helix (bHLH) transcription factors. As key regulators of cell cycle and differentiation, Id proteins are increasingly observed in many cancers, and their expression is associated with aggressiveness of the disease. Of all four Id proteins, the expression of Id1, Id2, and to a lesser extent Id3 in prostate cancer, and the underlying molecular mechanism, is relatively well known. In contrast, our previous results demonstrated that Id4 acts as a potential tumor suppressor in prostate cancer. In the present study, we extend these observations and demonstrate that Id4 is down-regulated in prostate cancer due to promoter hypermethylation. We used prostate cancer tissue microarrays to investigate Id4 expression. Methylation-specific PCR on bisulfite-treated DNA was used to determine the methylation status of the Id4 promoter in laser-capture micro-dissected normal, stroma and prostate cancer regions. High Id4 expression was observed in normal prostate epithelial cells. In prostate cancer, a stage-dependent decrease in Id4 expression was observed, with the majority of high-grade cancers showing no Id4 expression. Furthermore, Id4 expression progressively decreased in the prostate cancer cell line LNCaP, with no expression in the androgen-insensitive LNCaP-C81 cell line. Conversely, Id4 promoter hypermethylation increased in LNCaP-C81 cells, suggesting epigenetic silencing. In prostate cancer samples, loss of Id4 expression was also associated with promoter hypermethylation. Our results demonstrate loss of Id4 expression in prostate cancer due to promoter hypermethylation. The data strongly support the role of Id4 as a tumor suppressor.

  8. Coordination of head movements and speech in first encounter dialogues

    DEFF Research Database (Denmark)

    Paggio, Patrizia

    2015-01-01

    This paper presents an analysis of the temporal alignment between head movements and associated speech segments in the NOMCO corpus of first encounter dialogues [1]. Our results show that head movements tend to start slightly before the onset of the corresponding speech sequence and to end slightly after, but also that there are delays in both directions in the range of -/+ 1 s. Various factors that may influence delay duration are investigated. Correlations are found between delay length and the duration of the speech sequences associated with the head movements. Effects due to the different...

  9. Intelligibility of speech of children with speech and sound disorders

    OpenAIRE

    Ivetac, Tina

    2014-01-01

    The purpose of this study is to examine speech intelligibility of children with primary speech and sound disorders aged 3 to 6 years in everyday life. The research problem is based on the degree to which parents or guardians, immediate family members (sister, brother, grandparents), extended family members (aunt, uncle, cousin), child's friends, other acquaintances, child's teachers and strangers understand the speech of children with speech sound disorders. We examined whether the level ...

  10. Structured Semantic Knowledge Can Emerge Automatically from Predicting Word Sequences in Child-Directed Speech

    Science.gov (United States)

    Huebner, Philip A.; Willits, Jon A.

    2018-01-01

    Previous research has suggested that distributional learning mechanisms may contribute to the acquisition of semantic knowledge. However, distributional learning mechanisms, statistical learning, and contemporary “deep learning” approaches have been criticized for being incapable of learning the kind of abstract and structured knowledge that many think is required for acquisition of semantic knowledge. In this paper, we show that recurrent neural networks, trained on noisy naturalistic speech to children, do in fact learn what appears to be abstract and structured knowledge. We trained two types of recurrent neural networks (Simple Recurrent Network, and Long Short-Term Memory) to predict word sequences in a 5-million-word corpus of speech directed to children ages 0–3 years old, and assessed what semantic knowledge they acquired. We found that learned internal representations are encoding various abstract grammatical and semantic features that are useful for predicting word sequences. Assessing the organization of semantic knowledge in terms of the similarity structure, we found evidence of emergent categorical and hierarchical structure in both models. We found that the Long Short-term Memory (LSTM) and SRN are both learning very similar kinds of representations, but the LSTM achieved higher levels of performance on a quantitative evaluation. We also trained a non-recurrent neural network, Skip-gram, on the same input to compare our results to the state-of-the-art in machine learning. We found that Skip-gram achieves relatively similar performance to the LSTM, but is representing words more in terms of thematic compared to taxonomic relations, and we provide reasons why this might be the case. Our findings show that a learning system that derives abstract, distributed representations for the purpose of predicting sequential dependencies in naturalistic language may provide insight into emergence of many properties of the developing semantic system. PMID
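
The distributional principle these models exploit, that words occurring in similar contexts come to have similar representations, can be shown without any neural network. Below is a toy count-based sketch on an invented child-directed corpus; it is a stand-in for the paper's SRN/LSTM prediction models, not a reimplementation:

```python
from collections import defaultdict
from math import sqrt

def cooccurrence_vectors(sentences, window=1):
    """Count-based distributional vectors: each word is represented by
    counts of its neighbors within the given window."""
    vecs = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    vecs[w][sent[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

# Tiny hypothetical child-directed corpus
corpus = [
    "look at the dog".split(),
    "look at the cat".split(),
    "eat the apple".split(),
    "eat the banana".split(),
]
v = cooccurrence_vectors(corpus, window=2)
# Animals share contexts with each other more than with foods
print(cosine(v["dog"], v["cat"]) > cosine(v["dog"], v["apple"]))  # True
```

Even this crude counting scheme groups "dog" with "cat" and separates them from "apple", a miniature version of the emergent categorical structure the paper reports for its recurrent networks.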

  11. Structured Semantic Knowledge Can Emerge Automatically from Predicting Word Sequences in Child-Directed Speech

    Directory of Open Access Journals (Sweden)

    Philip A. Huebner

    2018-02-01

    Full Text Available Previous research has suggested that distributional learning mechanisms may contribute to the acquisition of semantic knowledge. However, distributional learning mechanisms, statistical learning, and contemporary “deep learning” approaches have been criticized for being incapable of learning the kind of abstract and structured knowledge that many think is required for acquisition of semantic knowledge. In this paper, we show that recurrent neural networks, trained on noisy naturalistic speech to children, do in fact learn what appears to be abstract and structured knowledge. We trained two types of recurrent neural networks (Simple Recurrent Network, and Long Short-Term Memory) to predict word sequences in a 5-million-word corpus of speech directed to children ages 0–3 years old, and assessed what semantic knowledge they acquired. We found that learned internal representations are encoding various abstract grammatical and semantic features that are useful for predicting word sequences. Assessing the organization of semantic knowledge in terms of the similarity structure, we found evidence of emergent categorical and hierarchical structure in both models. We found that the Long Short-term Memory (LSTM) and SRN are both learning very similar kinds of representations, but the LSTM achieved higher levels of performance on a quantitative evaluation. We also trained a non-recurrent neural network, Skip-gram, on the same input to compare our results to the state-of-the-art in machine learning. We found that Skip-gram achieves relatively similar performance to the LSTM, but is representing words more in terms of thematic compared to taxonomic relations, and we provide reasons why this might be the case. Our findings show that a learning system that derives abstract, distributed representations for the purpose of predicting sequential dependencies in naturalistic language may provide insight into emergence of many properties of the developing
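    The distributional-learning idea behind these models can be illustrated without any neural network: even raw co-occurrence counts over child-directed word sequences pull same-category words together. The toy corpus, window size, and category pairs below are hypothetical illustrations, not the study's materials or models; a minimal sketch in Python:

```python
from collections import defaultdict
from math import sqrt

# Toy child-directed utterances (hypothetical stand-ins for the 5M-word corpus).
corpus = [
    "the dog eats the bone",
    "the cat eats the fish",
    "the dog chases the cat",
    "the truck carries the box",
    "the car carries the box",
    "the truck chases the car",
]

# Build simple distributional vectors: counts of words seen within a +/-1-word
# window. This is the raw co-occurrence signal that prediction-based models
# like the SRN, LSTM, and Skip-gram exploit, stripped of any learning.
vectors = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in (i - 1, i + 1):
            if 0 <= j < len(words):
                vectors[w][words[j]] += 1

def cosine(a, b):
    """Cosine similarity between the context-count vectors of words a and b."""
    keys = set(vectors[a]) | set(vectors[b])
    dot = sum(vectors[a][k] * vectors[b][k] for k in keys)
    na = sqrt(sum(v * v for v in vectors[a].values()))
    nb = sqrt(sum(v * v for v in vectors[b].values()))
    return dot / (na * nb)

# Words sharing a category (animals vs. vehicles) end up more similar than
# cross-category words, i.e. a hint of emergent taxonomic structure.
print(cosine("dog", "cat") > cosine("dog", "truck"))  # prints True
```

Real models go further by compressing these contexts into dense hidden representations, but the similarity structure they are credited with discovering is already latent in counts like these.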

  12. Compressed Domain Packet Loss Concealment of Sinusoidally Coded Speech

    DEFF Research Database (Denmark)

    Rødbro, Christoffer A.; Christensen, Mads Græsbøll; Andersen, Søren Vang

    2003-01-01

    We consider the problem of packet loss concealment for voice over IP (VoIP). The speech signal is compressed at the transmitter using a sinusoidal coding scheme working at 8 kbit/s. At the receiver, packet loss concealment is carried out working directly on the quantized sinusoidal parameters, based on time-scaling of the packets surrounding the missing ones. Subjective listening tests show promising results indicating the potential of sinusoidal speech coding for VoIP.
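    The concealment idea described here, operating directly on the quantized sinusoidal parameters of the packets surrounding a loss, can be sketched as simple parameter interpolation. This is an illustrative simplification, not the paper's 8 kbit/s scheme; in particular, a real coder matches sinusoids across frames by frequency proximity (birth/death tracking) rather than by index, and also manages phase continuity:

```python
# Minimal sketch: conceal a lost frame in a sinusoidal speech coder by
# interpolating the quantized parameters of the surrounding frames.

def conceal_lost_frame(prev_frame, next_frame):
    """Each frame is a list of (amplitude, frequency_hz) sinusoid parameters.

    Sinusoids are matched by list index here for simplicity; a real coder
    would match them by frequency proximity across frames.
    """
    concealed = []
    for (a0, f0), (a1, f1) in zip(prev_frame, next_frame):
        # Midpoint interpolation of amplitude and frequency for the gap.
        concealed.append(((a0 + a1) / 2.0, (f0 + f1) / 2.0))
    return concealed

prev_frame = [(0.9, 220.0), (0.5, 440.0), (0.2, 660.0)]
next_frame = [(0.7, 230.0), (0.4, 460.0), (0.2, 690.0)]
gap_frame = conceal_lost_frame(prev_frame, next_frame)
```

The receiver would then resynthesize the gap from `gap_frame` and time-scale the neighbouring packets to keep the waveform continuous.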

  13. DELVING INTO SPEECH ACT A Case Of Indonesian EFL Young Learners

    Directory of Open Access Journals (Sweden)

    Swastika Septiani, S.Pd

    2017-04-01

    Full Text Available This study attempts to describe the use of speech acts in primary school. This study is intended to identify the speech acts performed in primary school, to find the most dominant speech acts performed in elementary school, to give a brief description of how speech acts are applied in primary school, and to know how to apply the results of the study in English teaching and learning for young learners. The speech acts performed in primary school are classified based on Searle's theory of speech acts. The most dominant speech act performed in primary school is Directive (41.17%), the second most performed is Declarative (33.33%), the third most performed are Representative and Expressive (each 11.76%), and the least performed is Commissive (1.9%). The speech acts performed in elementary school are applied in the context of situation determined by the National Education Standards Agency (BSNP). The speech acts performed in fourth grade have to be applied in the context of the classroom, and the speech acts performed in fifth grade have to be applied in the context of the school, whereas the speech acts performed in sixth grade have to be applied in the context of the students' surroundings. The results of this study are highly expected to give a significant contribution to English teaching and learning for young learners. By acknowledging the characteristics of young learners and the way they learn English as a foreign language, teachers are expected to have inventive strategies and various techniques to create a fun and conducive atmosphere in English class.

  14. Comparison of Perinatal Risk Factors Associated with Autism Spectrum Disorder (ASD), Intellectual Disability (ID), and Co-Occurring ASD and ID

    Science.gov (United States)

    Schieve, Laura A.; Clayton, Heather B.; Durkin, Maureen S.; Wingate, Martha S.; Drews-Botsch, Carolyn

    2015-01-01

    While studies report associations between perinatal outcomes and both autism spectrum disorder (ASD) and intellectual disability (ID), there has been little study of ASD with versus without co-occurring ID. We compared perinatal risk factors among 7547 children in the 2006-2010 Autism and Developmental Disability Monitoring Network classified as…

  15. Achieving Payoffs from an Industry Cloud Ecosystem at BankID

    DEFF Research Database (Denmark)

    Eaton, Ben; Hallingby, Hanne Kristine; Nesse, Per-Jonny

    2014-01-01

    BankID is an industry cloud owned by Norwegian banks. It provides electronic identity, authentication and electronic signing capabilities for banking, merchant and government services. More than 60% of the population uses BankID services. As the broader ecosystem around BankID evolved, challenges—arising from tensions between different parts of the ecosystem—had to be resolved. The four lessons learned from the BankID case will help others to build an industry cloud and establish a healthy ecosystem to service a broad user base.

  16. Speech disorders - children

    Science.gov (United States)

    ... disorder; Voice disorders; Vocal disorders; Disfluency; Communication disorder - speech disorder; Speech disorder - stuttering ... evaluation tools that can help identify and diagnose speech disorders: Denver Articulation Screening Examination, Goldman-Fristoe Test of ...

  17. Early Childhood Neurodevelopmental Outcomes in Infants Exposed to Infectious Syphilis In Utero.

    Science.gov (United States)

    Verghese, Valsan P; Hendson, Leonora; Singh, Ameeta; Guenette, Tamara; Gratrix, Jennifer; Robinson, Joan L

    2018-06-01

    There are minimal neurodevelopmental follow-up data for infants exposed to syphilis in utero. This is an inception cohort study of infants exposed to syphilis in utero. We reviewed women with reactive syphilis serology in pregnancy or at delivery in Edmonton (Canada) from 2002 through 2010, and describe the neurodevelopmental outcomes of children with and without congenital syphilis. There were 39 births to women with reactive syphilis serology, 9 of whom had late latent syphilis (n = 4), stillbirths (n = 2) or early neonatal deaths (n = 3), leaving 30 survivors, of whom 11 with and 7 without congenital syphilis had neurodevelopmental assessment. Those with congenital syphilis were all born to women with inadequate syphilis treatment before delivery. Neurodevelopmental impairment was documented in 3 of 11 (27%) infants with congenital syphilis and 1 of 7 (14%) without congenital syphilis, with speech-language delays in 4 of 11 (36%) with congenital syphilis and 3 of 7 (42%) without congenital syphilis. Infants born to mothers with reactive syphilis serology during pregnancy are at high risk for neurodevelopmental impairment, whether or not they have congenital syphilis, so all should be offered neurodevelopmental assessments and early referral for services as required.

  18. Neurophysiology of speech differences in childhood apraxia of speech.

    Science.gov (United States)

    Preston, Jonathan L; Molfese, Peter J; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia R; Landi, Nicole

    2014-01-01

    Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.

  19. The influence of masker type on early reflection processing and speech intelligibility (L)

    DEFF Research Database (Denmark)

    Arweiler, Iris; Buchholz, Jörg M.; Dau, Torsten

    2013-01-01

    Arweiler and Buchholz [J. Acoust. Soc. Am. 130, 996-1005 (2011)] showed that, while the energy of early reflections (ERs) in a room improves speech intelligibility, the benefit is smaller than that provided by the energy of the direct sound (DS). In terms of integration of ERs and DS, binaural listening did not provide a benefit from ERs apart from a binaural energy summation, such that monaural auditory processing could account for the data. However, a diffuse speech-shaped noise (SSN) was used in the speech intelligibility experiments, which does not provide distinct binaural cues to the auditory system. In the present study, the monaural and binaural benefit from ERs for speech intelligibility was investigated using three directional maskers presented from 90° azimuth: a SSN, a multi-talker babble, and a reversed two-talker masker. For normal-hearing as well as hearing-impaired listeners...

  20. Can Chimpanzee Infants ("Pan Troglodytes") Form Categorical Representations in the Same Manner as Human Infants ("Homo Sapiens")?

    Science.gov (United States)

    Murai, Chizuko; Kosugi, Daisuke; Tomonaga, Masaki; Tanaka, Masayuki; Matsuzawa, Tetsuro; Itakura, Shoji

    2005-01-01

    We directly compared chimpanzee infants and human infants for categorical representations of three global-like categories (mammals, furniture and vehicles), using the familiarization-novelty preference technique. Neither species received any training during the experiments. We used the time that participants spent looking at the stimulus object…

  1. Spectral Ripple Discrimination in Normal-Hearing Infants.

    Science.gov (United States)

    Horn, David L; Won, Jong Ho; Rubinstein, Jay T; Werner, Lynne A

    Spectral resolution is a correlate of open-set speech understanding in postlingually deaf adults and prelingually deaf children who use cochlear implants (CIs). To apply measures of spectral resolution to assess device efficacy in younger CI users, it is necessary to understand how spectral resolution develops in normal-hearing children. In this study, spectral ripple discrimination (SRD) was used to measure listeners' sensitivity to a shift in phase of the spectral envelope of a broadband noise. Both resolution of peak to peak location (frequency resolution) and peak to trough intensity (across-channel intensity resolution) are required for SRD. SRD was measured as the highest ripple density (in ripples per octave) for which a listener could discriminate a 90° shift in phase of the sinusoidally-modulated amplitude spectrum. A 2 × 3 between-subjects design was used to assess the effects of age (7-month-old infants versus adults) and ripple peak/trough "depth" (10, 13, and 20 dB) on SRD in normal-hearing listeners (experiment 1). In experiment 2, SRD thresholds in the same age groups were compared using a task in which ripple starting phases were randomized across trials to obscure within-channel intensity cues. In experiment 3, the randomized starting phase method was used to measure SRD as a function of age (3-month-old infants, 7-month-old infants, and young adults) and ripple depth (10 and 20 dB in repeated measures design). In experiment 1, there was a significant interaction between age and ripple depth. The infant SRDs were significantly poorer than the adult SRDs at 10 and 13 dB ripple depths but adult-like at 20 dB depth. This result is consistent with immature across-channel intensity resolution. In contrast, the trajectory of SRD as a function of depth was steeper for infants than adults suggesting that frequency resolution was better in infants than adults. However, in experiment 2 infant performance was significantly poorer than adults at 20 dB
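    A spectral-ripple contour of the kind used for SRD can be sketched as a sinusoidal modulation of the (log-)amplitude spectrum across octaves, with the discrimination target being a 90° shift of the modulation phase. The parameter values and the 100 Hz reference frequency below are illustrative assumptions, not the study's exact stimulus specification:

```python
import math

def ripple_gain_db(freq_hz, ripples_per_octave, depth_db, phase_rad):
    """Gain (dB) of a spectral ripple at freq_hz: the amplitude spectrum is
    sinusoidally modulated on a log-frequency (octave) axis."""
    octaves = math.log2(freq_hz / 100.0)  # octaves above a 100 Hz reference
    return (depth_db / 2.0) * math.sin(
        2.0 * math.pi * ripples_per_octave * octaves + phase_rad
    )

# Sample a 1 ripple/octave, 20 dB deep contour from 100 Hz to ~6.4 kHz.
freqs = [100.0 * 2 ** (k / 100.0) for k in range(600)]
standard = [ripple_gain_db(f, 1.0, 20.0, 0.0) for f in freqs]
shifted = [ripple_gain_db(f, 1.0, 20.0, math.pi / 2.0) for f in freqs]

# The 90-degree phase shift moves the spectral peaks: the two contours differ
# substantially even though ripple density and depth are identical.
max_diff = max(abs(s - t) for s, t in zip(standard, shifted))
```

Discriminating `standard` from `shifted` requires resolving both where the peaks sit (frequency resolution) and how deep the troughs are (across-channel intensity resolution), which is why SRD taxes both abilities.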

  2. IMI's CANCER-ID: Status of liquid biopsy standardization

    NARCIS (Netherlands)

    Pantel, Klaus; Terstappen, Leon W. M. M.; Baggiani, Barbara; Krahn, Thomas; Schlange, Thomas

    The CANCER-ID (www.cancer-id.eu) consortium was established in early 2015 with more than 30 partners as part of the Innovative Medicines Initiative (IMI), Europe's largest public-private partnership funded in equal parts by the European Union and the European Federation of Pharmaceutical Industries

  3. Security analysis for biometric data in ID documents

    NARCIS (Netherlands)

    Schimke, S.; Kiltz, S.; Vielhauer, C.; Kalker, A.A.C.M.

    2005-01-01

    In this paper we analyze chances and challenges with respect to the security of using biometrics in ID documents. We identify goals for ID documents, set by national and international authorities, and discuss the degree of security which is obtainable with the inclusion of biometrics into documents.

  4. Using others' words: conversational use of reported speech by individuals with aphasia and their communication partners.

    Science.gov (United States)

    Hengst, Julie A; Frame, Simone R; Neuman-Stritzel, Tiffany; Gannaway, Rachel

    2005-02-01

    Reported speech, wherein one quotes or paraphrases the speech of another, has been studied extensively as a set of linguistic and discourse practices. Researchers agree that reported speech is pervasive, found across languages, and used in diverse contexts. However, to date, there have been no studies of the use of reported speech among individuals with aphasia. Grounded in an interactional sociolinguistic perspective, the study presented here documents and analyzes the use of reported speech by 7 adults with mild to moderately severe aphasia and their routine communication partners. Each of the 7 pairs was videotaped in 4 everyday activities at home or around the community, yielding over 27 hr of conversational interaction for analysis. A coding scheme was developed that identified 5 types of explicitly marked reported speech: direct, indirect, projected, indexed, and undecided. Analysis of the data documented reported speech as a common discourse practice used successfully by the individuals with aphasia and their communication partners. All participants produced reported speech at least once, and across all observations the target pairs produced 400 reported speech episodes (RSEs), 149 by individuals with aphasia and 251 by their communication partners. For all participants, direct and indirect forms were the most prevalent (70% of RSEs). Situated discourse analysis of specific episodes of reported speech used by 3 of the pairs provides detailed portraits of the diverse interactional, referential, social, and discourse functions of reported speech and explores ways that the pairs used reported speech to successfully frame talk despite their ongoing management of aphasia.

  5. Shared Musical Knowledge in 11-Month-Old Infants

    Science.gov (United States)

    Mehr, Samuel A.; Spelke, Elizabeth S.

    2018-01-01

    Five-month-old infants selectively attend to novel people who sing melodies originally learned from a parent, but not melodies learned from a musical toy or from an unfamiliar singing adult, suggesting that music conveys social information to infant listeners. Here, we test this interpretation further in older infants with a more direct measure of…

  6. Listeners Experience Linguistic Masking Release in Noise-Vocoded Speech-in-Speech Recognition

    Science.gov (United States)

    Viswanathan, Navin; Kokkinakis, Kostas; Williams, Brittany T.

    2018-01-01

    Purpose: The purpose of this study was to evaluate whether listeners with normal hearing perceiving noise-vocoded speech-in-speech demonstrate better intelligibility of target speech when the background speech was mismatched in language (linguistic release from masking [LRM]) and/or location (spatial release from masking [SRM]) relative to the…

  7. A brief review of revocable ID-based public key cryptosystem

    Directory of Open Access Journals (Sweden)

    Tsu-Yang Wu

    2016-03-01

    Full Text Available The design of ID-based cryptography has received much attention from researchers. However, how to revoke a misbehaving/compromised user in an ID-based public key cryptosystem has become an important research issue. Recently, Tseng and Tsai proposed a novel public key cryptosystem called revocable ID-based public key cryptosystem (RIBE) to solve the revocation problem. Later on, numerous research papers based on the Tseng-Tsai RIBE were proposed. In this paper, we briefly review Tseng and Tsai's RIBE. We hope this review can help readers to understand Tseng and Tsai's revocable ID-based public key cryptosystem.

  8. Id1 represses osteoclast-dependent transcription and affects bone formation and hematopoiesis.

    Directory of Open Access Journals (Sweden)

    April S Chan

    2009-11-01

    Full Text Available The bone-bone marrow interface is an area of the bone marrow microenvironment in which both bone remodeling cells, osteoblasts and osteoclasts, and hematopoietic cells are anatomically juxtaposed. The close proximity of these cells naturally suggests that they interact with one another, but these interactions are just beginning to be characterized. An Id1(-/-) mouse model was used to assess the role of Id1 in the bone marrow microenvironment. Micro-computed tomography and fracture tests showed that Id1(-/-) mice have reduced bone mass and increased bone fragility, consistent with an osteoporotic phenotype. Osteoclastogenesis and pit formation assays revealed that loss of Id1 increased osteoclast differentiation and resorption activity, both in vivo and in vitro, suggesting a cell-autonomous role for Id1 as a negative regulator of osteoclast differentiation. Examination by flow cytometry of the hematopoietic compartment of Id1(-/-) mice showed an increase in myeloid differentiation. Additionally, we found increased expression of the osteoclast genes TRAP, Oscar, and CTSK in the Id1(-/-) bone marrow microenvironment. Lastly, transplantation of wild-type bone marrow into Id1(-/-) mice repressed TRAP, Oscar, and CTSK expression and activity and rescued the hematopoietic and bone phenotype in these mice. In conclusion, we demonstrate an osteoporotic phenotype in Id1(-/-) mice and a mechanism for Id1 transcriptional control of osteoclast-associated genes. Our results identify Id1 as a principal player responsible for the dynamic cross-talk between bone and bone marrow hematopoietic cells.

  9. Speech Perception and Short-Term Memory Deficits in Persistent Developmental Speech Disorder

    Science.gov (United States)

    Kenney, Mary Kay; Barac-Cikoja, Dragana; Finnegan, Kimberly; Jeffries, Neal; Ludlow, Christy L.

    2006-01-01

    Children with developmental speech disorders may have additional deficits in speech perception and/or short-term memory. To determine whether these are only transient developmental delays that can accompany the disorder in childhood or persist as part of the speech disorder, adults with a persistent familial speech disorder were tested on speech…

  10. Infant ERPs Separate Children at Risk of Dyslexia Who Become Good Readers from Those Who Become Poor Readers

    Science.gov (United States)

    Zuijen, Titia L.; Plakas, Anna; Maassen, Ben A. M.; Maurits, Natasha M.; van der Leij, Aryan

    2013-01-01

    Dyslexia is heritable and associated with phonological processing deficits that can be reflected in the event-related potentials (ERPs). Here, we recorded ERPs from 2-month-old infants at risk of dyslexia and from a control group to investigate whether their auditory system processes /bAk/ and /dAk/ changes differently. The speech sounds were…

  11. Automatic speech recognition (ASR) based approach for speech therapy of aphasic patients: A review

    Science.gov (United States)

    Jamal, Norezmi; Shanta, Shahnoor; Mahmud, Farhanahani; Sha'abani, MNAH

    2017-09-01

    This paper reviews state-of-the-art automatic speech recognition (ASR) based approaches for speech therapy of aphasic patients. Aphasia is a condition in which the affected person suffers from a speech and language disorder resulting from a stroke or brain injury. Since there is a growing body of evidence indicating the possibility of improving the symptoms at an early stage, ASR based solutions are increasingly being researched for speech and language therapy. ASR is a technology that transcribes human speech into text by matching it against the system's library. This is particularly useful in speech rehabilitation therapies as it provides accurate, real-time evaluation of speech input from an individual with a speech disorder. ASR based approaches for speech therapy recognize the speech input from the aphasic patient and provide real-time feedback on their mistakes. However, the accuracy of ASR is dependent on many factors such as phoneme recognition, speech continuity, speaker and environmental differences, as well as the depth of knowledge on human language understanding. Hence, the review examines recent developments in ASR technologies and their performance for individuals with speech and language disorders.

  12. The Prevalence of Speech Disorder in Primary School Students in Yazd-Iran

    Directory of Open Access Journals (Sweden)

    Sedighah Akhavan Karbasi

    2011-01-01

    Full Text Available Communication disorders are widespread disabling problems associated with adverse long-term outcomes that impact individuals, families and the academic achievement of children in the school years, and affect vocational choices later in adulthood. The aim of this study was to determine the prevalence of speech disorders, specifically stuttering, voice, and speech-sound disorders, in primary school students in Yazd, Iran. In a descriptive study, 7881 primary school students in Yazd were evaluated for speech disorders using a direct, face-to-face assessment technique in 2005. The prevalence of total speech disorders was 14.8%, among whom 13.8% had a speech-sound disorder, 1.2% stuttering and 0.47% a voice disorder. The prevalence of speech disorders was higher in males (16.7%) than in females (12.7%). The pattern of prevalence of the three speech disorders differed significantly according to gender, parental education and number of family members. There was no significant difference across speech disorders by birth order, religion or paternal consanguinity. These prevalence figures are higher than those of most studies using parent or teacher reports.

  13. La certificazione professionale I&D in Europa

    OpenAIRE

    Franco, Augusta

    2003-01-01

    The paper outlines the activity of European I&D associations in the field of certification of competences, within the ECIA federation's area as well as at the national level. It includes references to the history and outcomes of the DECIDoc project; the objectives and operative stages of the CERTIDoc project, promoted by ECIA; and initiatives of I&D associations in France, Spain, Germany and Italy. Secondly, it illustrates the activity of the Italian librarians and archivists associatio...

  14. Spectral Ripple Discrimination in Normal Hearing Infants

    Science.gov (United States)

    Horn, David L.; Won, Jong Ho; Rubinstein, Jay T.; Werner, Lynne A.

    2016-01-01

    Objectives Spectral resolution is a correlate of open-set speech understanding in post-lingually deaf adults as well as pre-lingually deaf children who use cochlear implants (CIs). In order to apply measures of spectral resolution to assess device efficacy in younger CI users, it is necessary to understand how spectral resolution develops in NH children. In this study, spectral ripple discrimination (SRD) was used to measure listeners’ sensitivity to a shift in phase of the spectral envelope of a broadband noise. Both resolution of peak to peak location (frequency resolution) and peak to trough intensity (across-channel intensity resolution) are required for SRD. Design SRD was measured as the highest ripple density (in ripples per octave) for which a listener could discriminate a 90 degree shift in phase of the sinusoidally-modulated amplitude spectrum. A 2X3 between subjects design was used to assess the effects of age (7-month-old infants versus adults) and ripple peak/trough “depth” (10, 13, and 20 dB) on SRD in normal hearing listeners (Experiment 1). In Experiment 2, SRD thresholds in the same age groups were compared using a task in which ripple starting phases were randomized across trials to obscure within-channel intensity cues. In Experiment 3, the randomized starting phase method was used to measure SRD as a function of age (3-month-old infants, 7-month-old infants, and young adults) and ripple depth (10 and 20 dB in repeated measures design). Results In Experiment 1, there was a significant interaction between age and ripple depth. The Infant SRDs were significantly poorer than the adult SRDs at 10 and 13 dB ripple depths but adult-like at 20 dB depth. This result is consistent with immature across-channel intensity resolution. In contrast, the trajectory of SRD as a function of depth was steeper for infants than adults suggesting that frequency resolution was better in infants than adults. However, in Experiment 2 infant performance was

  15. Music training for the development of speech segmentation.

    Science.gov (United States)

    François, Clément; Chobert, Julie; Besson, Mireille; Schön, Daniele

    2013-09-01

    The role of music training in fostering brain plasticity and developing high cognitive skills, notably linguistic abilities, is of great interest from both a scientific and a societal perspective. Here, we report results of a longitudinal study over 2 years using both behavioral and electrophysiological measures and a test-training-retest procedure to examine the influence of music training on speech segmentation in 8-year-old children. Children were pseudo-randomly assigned to either music or painting training and were tested on their ability to extract meaningless words from a continuous flow of nonsense syllables. While no between-group differences were found before training, both behavioral and electrophysiological measures showed improved speech segmentation skills across testing sessions for the music group only. These results show that music training directly causes facilitation in speech segmentation, thereby pointing to the importance of music for speech perception and more generally for children's language development. Finally these results have strong implications for promoting the development of music-based remediation strategies for children with language-based learning impairments.

  16. Speech and Language Delay

    Science.gov (United States)

    ... Speech and Language Delay ... What is a speech and language delay? A speech and language delay ...

  17. Plasticity in the Human Speech Motor System Drives Changes in Speech Perception

    Science.gov (United States)

    Lametti, Daniel R.; Rochet-Capellan, Amélie; Neufeld, Emily; Shiller, Douglas M.

    2014-01-01

    Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback and they were subsequently tested for perceptual change. In two separate experiments, involving two different auditory perceptual continua, we show that changes in the speech motor system that accompany learning drive changes in auditory speech perception. Specifically, we obtained changes in speech perception when adaptation to altered auditory feedback led to speech production that fell into the phonetic range of the speech perceptual tests. However, a similar change in perception was not observed when the auditory feedback that subjects received during learning fell into the phonetic range of the perceptual tests. This indicates that the central motor outflow associated with vocal sensorimotor adaptation drives changes to the perceptual classification of speech sounds. PMID:25080594

  18. Speech-like rhythm in a voiced and voiceless orangutan call.

    Directory of Open Access Journals (Sweden)

    Adriano R Lameira

    Full Text Available The evolutionary origins of speech remain obscure. Recently, it was proposed that speech derived from monkey facial signals which exhibit a speech-like rhythm of ∼5 open-close lip cycles per second. In monkeys, these signals may also be vocalized, offering a plausible evolutionary stepping stone towards speech. Three essential predictions remain, however, to be tested to assess this hypothesis' validity: (i) great apes, our closest relatives, should likewise produce 5 Hz rhythm signals; (ii) speech-like rhythm should involve calls articulatorily similar to consonants and vowels, given that speech rhythm is the direct product of stringing together these two basic elements; and (iii) speech-like rhythm should be experience-based. Via cinematic analyses we demonstrate that an ex-entertainment orangutan produces two calls at a speech-like rhythm, coined "clicks" and "faux-speech." Like voiceless consonants, clicks required no vocal fold action, but did involve independent manoeuvring over lips and tongue. In parallel to vowels, faux-speech showed harmonic and formant modulations, implying vocal fold and supralaryngeal action. This rhythm was several times faster than orangutan chewing rates, as observed in monkeys and humans. Critically, this rhythm was seven-fold faster, and contextually distinct, than any other known rhythmic calls described to date in the largest database of the orangutan repertoire ever assembled. The first two predictions advanced by this study are validated and, based on parsimony and exclusion of potential alternative explanations, initial support is given to the third prediction. Irrespective of the putative origins of these calls and underlying mechanisms, our findings demonstrate irrevocably that great apes are not respiratorily, articulatorily, or neurologically constrained for the production of consonant- and vowel-like calls at speech rhythm. Orangutan clicks and faux-speech confirm the importance of rhythmic speech

  19. Childhood apraxia of speech: A survey of praxis and typical speech characteristics.

    Science.gov (United States)

    Malmenholt, Ann; Lohmander, Anette; McAllister, Anita

    2017-07-01

    The purpose of this study was to investigate current knowledge of the diagnosis childhood apraxia of speech (CAS) in Sweden and compare speech characteristics and symptoms to those of earlier survey findings among mainly English speakers. In a web-based questionnaire, 178 Swedish speech-language pathologists (SLPs) anonymously answered questions about their perception of typical speech characteristics for CAS. They graded their own assessment skills and estimated clinical occurrence. The seven top speech characteristics reported as typical for children with CAS were: inconsistent speech production (85%), sequencing difficulties (71%), oro-motor deficits (63%), vowel errors (62%), voicing errors (61%), consonant cluster deletions (54%), and prosodic disturbance (53%). Motor-programming deficits, described as lack of automatization of speech movements, were perceived by 82%. All listed characteristics were consistent with the American Speech-Language-Hearing Association (ASHA) consensus-based features, Strand's 10-point checklist, and the diagnostic model proposed by Ozanne. The mode for clinical occurrence was 5%. The number of suspected cases of CAS in the clinical caseload was approximately one new patient per SLP per year. The results support and add to findings from studies of CAS in English-speaking children, with similar speech characteristics regarded as typical. Possibly, these findings could contribute to cross-linguistic consensus on CAS characteristics.

  20. Some Behavioral and Neurobiological Constraints on Theories of Audiovisual Speech Integration: A Review and Suggestions for New Directions

    Science.gov (United States)

    Altieri, Nicholas; Pisoni, David B.; Townsend, James T.

    2012-01-01

    Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield’s feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration. PMID:21968081

  1. Restoring the missing features of the corrupted speech using linear interpolation methods

    Science.gov (United States)

    Rassem, Taha H.; Makbol, Nasrin M.; Hasan, Ali Muttaleb; Zaki, Siti Syazni Mohd; Girija, P. N.

    2017-10-01

    One of the main challenges in Automatic Speech Recognition (ASR) is noise. The performance of an ASR system degrades significantly if the speech is corrupted by noise. In the spectrogram representation of a speech signal, deleting low Signal-to-Noise Ratio (SNR) elements leaves an incomplete spectrogram. The speech recognizer must then restore the missing elements, either by modifying the spectrogram during recognition or by reconstructing it before recognition is performed. This can be done using different spectrogram reconstruction methods. In this paper, the geometrical spectrogram reconstruction methods suggested by some researchers are implemented as a toolbox. In these geometrical reconstruction methods, linear interpolation along time or along frequency is used to predict the missing elements between adjacent observed elements in the spectrogram. Moreover, a new linear interpolation method using time and frequency together is presented. The CMU Sphinx III software is used in the experiments to test the performance of the linear interpolation reconstruction method. The experiments are done under different conditions, such as different window lengths and different utterance lengths. The speech corpus consists of 20 male and 20 female speakers, each contributing two different utterances. As a result, 80% recognition accuracy is achieved at 25% SNR.
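
The interpolation scheme described above is straightforward to sketch. The following is a minimal illustration, not the authors' toolbox: missing low-SNR cells are marked NaN, each is predicted linearly from its observed neighbours along time and along frequency, and the combined method is assumed here to average the two directions.

```python
import numpy as np

def interp_missing_1d(row):
    """Linearly interpolate NaN gaps between observed neighbours in a 1-D array."""
    idx = np.arange(row.size)
    known = ~np.isnan(row)
    if known.sum() < 2:          # not enough anchors to interpolate
        return row
    out = row.copy()
    out[~known] = np.interp(idx[~known], idx[known], row[known])
    return out

def reconstruct(spec):
    """Fill missing cells of a (freq x time) spectrogram, NaN marking
    deleted low-SNR elements, by averaging time-wise and frequency-wise
    linear interpolation (assumed combination rule)."""
    along_time = np.apply_along_axis(interp_missing_1d, 1, spec)
    along_freq = np.apply_along_axis(interp_missing_1d, 0, spec)
    return (along_time + along_freq) / 2.0
```

A cell surrounded by observed neighbours is thus predicted from both directions at once, which is the motivation the abstract gives for combining time and frequency.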

  2. Can very early music interventions promote at-risk infants' development?

    Science.gov (United States)

    Virtala, Paula; Partanen, Eino

    2018-04-30

    Music and musical activities are often a natural part of parenting. As accumulating evidence shows, music can promote auditory and language development in infancy and early childhood. It may even help to support auditory and language skills in infants whose development is compromised by heritable conditions, like the reading deficit dyslexia, or by environmental factors, such as premature birth. For example, infants born to dyslexic parents can have atypical brain responses to speech sounds and subsequent challenges in language development. Children born very preterm, in turn, have an increased likelihood of sensory, cognitive, and motor deficits. To ameliorate these deficits, we have developed early interventions focusing on music. Preliminary results of our ongoing longitudinal studies suggest that music making and parental singing promote infants' early language development and auditory neural processing. Together with previous findings in the field, the present studies highlight the role of active, social music making in supporting auditory and language development in at-risk children and infants. Once completed, the studies will illuminate both risk and protective factors in development and offer a comprehensive model of understanding the promises of music activities in promoting positive developmental outcomes during the first years of life. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals Inc. on behalf of The New York Academy of Sciences.

  3. Iron Stores of Breastfed Infants during the First Year of Life

    Directory of Open Access Journals (Sweden)

    Ekhard E. Ziegler

    2014-05-01

    Full Text Available The birth iron endowment provides iron for growth in the first months of life. We describe the iron endowment under conditions of low dietary iron supply. Subjects were infants participating in a trial of Vitamin D supplementation from 1 to 9 months. Infants were exclusively breastfed at enrollment but could receive complementary foods from 4 months, but not formula. Plasma ferritin (PF) and transferrin receptor (TfR) were determined at 1, 2, 4, 5.5, 7.5, 9 and 12 months. At 1 month PF ranged from 38 to 752 µg/L and was only weakly related to maternal PF. PF declined subsequently and flattened out at 5.5 months. PF of females was significantly higher than PF of males except at 12 months. TfR increased with age and was inversely correlated with PF. PF and TfR tracked strongly until 9 months. Iron deficiency (PF < 10 µg/L) began to appear at 4 months and increased in frequency until 9 months. Infants with ID were born with a low iron endowment. We concluded that the birth iron endowment is highly variable in size and a small endowment places infants at risk of iron deficiency before 6 months. Boys have smaller iron endowments and are at greater risk of iron deficiency than girls.

  4. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual, as evidenced by the McGurk effect, in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or be applied to all stimuli in general. To investigate...... of audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect. When observers were naïve, they had little motivation to look at the face. When informed, they knew that the face was relevant for the task and this could increase...... visual detection task. In our first experiment, observers presented with congruent and incongruent audiovisual sine-wave speech stimuli showed a McGurk effect only when informed of the speech nature of the stimulus. Performance on the secondary visual task was very good, thus supporting the finding...

  5. How may the basal ganglia contribute to auditory categorization and speech perception?

    Directory of Open Access Journals (Sweden)

    Sung-Joo Lim

    2014-08-01

    Full Text Available Listeners must accomplish two complementary perceptual feats in extracting a message from speech. They must discriminate linguistically-relevant acoustic variability and generalize across irrelevant variability. Said another way, they must categorize speech. Since the mapping of acoustic variability is language-specific, these categories must be learned from experience. Thus, understanding how, in general, the auditory system acquires and represents categories can inform us about the toolbox of mechanisms available to speech perception. This perspective invites consideration of findings from cognitive neuroscience literatures outside of the speech domain as a means of constraining models of speech perception. Although neurobiological models of speech perception have mainly focused on cerebral cortex, research outside the speech domain is consistent with the possibility of significant subcortical contributions in category learning. Here, we review the functional role of one such structure, the basal ganglia. We examine research from animal electrophysiology, human neuroimaging, and behavior to consider characteristics of basal ganglia processing that may be advantageous for speech category learning. We also present emerging evidence for a direct role for basal ganglia in learning auditory categories in a complex, naturalistic task intended to model the incidental manner in which speech categories are acquired. To conclude, we highlight new research questions that arise in incorporating the broader neuroscience research literature in modeling speech perception, and suggest how understanding contributions of the basal ganglia can inform attempts to optimize training protocols for learning non-native speech categories in adulthood.

  6. An Intelligibility Assessment of Toddlers with Cleft Lip and Palate Who Received and Did Not Receive Presurgical Infant Orthopedic Treatment.

    Science.gov (United States)

    Konst, Emmy M.; Weersink-Braks, Hanny; Rietveld, Toni; Peters, Herman

    2000-01-01

    The influence of presurgical infant orthopedic treatment (PIO) on speech intelligibility was evaluated with 10 toddlers who used PIO during the first year of life and 10 who did not. Treated children were rated as exhibiting greater intelligibility; however, transcription data indicated there were no group differences in actual intelligibility.…

  7. People with ID as interviewers and co-researchers: experiences and reflection.

    NARCIS (Netherlands)

    Lieshout, H. van

    2012-01-01

    Aim: To share the experience of working with people with intellectual disabilities (ID) as interviewers in a qualitative study about community participation of people with ID. We reflect on two perspectives: the interviewers and the researchers. Method: Eighteen people with ID were interviewed by

  8. Speech-Language Therapy (For Parents)

    Science.gov (United States)


  9. Prelinguistic communication development in children with childhood apraxia of speech: a retrospective analysis.

    Science.gov (United States)

    Highman, Chantelle; Leitão, Suze; Hennessey, Neville; Piek, Jan

    2012-02-01

    In a retrospective study of prelinguistic communication development, clinically referred preschool children (n = 9) aged 3-4 years, who as infants had failed a community-based screening program, were evaluated for features of childhood apraxia of speech (CAS). Four children showed no features and had either delayed or normal language; five had three to seven CAS features, and all of these exhibited delayed language. These children were matched by age with 21 children with typically-developing (TD) speech and language skills. Case-control comparisons of retrospective data from 9 months of age for two participants with more severe features of CAS at preschool age showed a dissociated pattern, with low expressive quotients on the Receptive-Expressive Emergent Language Assessment-Second Edition (REEL-2) and records of infrequent babbling, but normal receptive quotients. However, other profiles were observed. Two children with milder CAS features showed poor receptive and expressive development similar to other clinically referred children with no CAS features, and one child with severe CAS features showed poor receptive but normal expressive developmental milestones at 9 months and records of frequent babbling. Results suggest some but not all children with features of suspected CAS have a selective deficit originating within speech motor development.

  10. Digital speech processing using Matlab

    CERN Document Server

    Gopi, E S

    2014-01-01

    Digital Speech Processing Using Matlab deals with digital speech pattern recognition, speech production model, speech feature extraction, and speech compression. The book is written in a manner that is suitable for beginners pursuing basic research in digital speech processing. Matlab illustrations are provided for most topics to enable better understanding of concepts. This book also deals with the basic pattern recognition techniques (illustrated with speech signals using Matlab) such as PCA, LDA, ICA, SVM, HMM, GMM, BPN, and KSOM.

  11. Developmental apraxia of speech in children. Quantitive assessment of speech characteristics

    NARCIS (Netherlands)

    Thoonen, G.H.J.

    1998-01-01

    Developmental apraxia of speech (DAS) in children is a speech disorder, supposed to have a neurological origin, which is commonly considered to result from particular deficits in speech processing (i.e., phonological planning, motor programming). However, the label DAS has often been used as

  12. Causal inference of asynchronous audiovisual speech

    Directory of Open Access Journals (Sweden)

    John F Magnotti

    2013-11-01

    Full Text Available During speech perception, humans integrate auditory information from the voice with visual information from the face. This multisensory integration increases perceptual precision, but only if the two cues come from the same talker; this requirement has been largely ignored by current models of speech perception. We describe a generative model of multisensory speech perception that includes this critical step of determining the likelihood that the voice and face information have a common cause. A key feature of the model is that it is based on a principled analysis of how an observer should solve this causal inference problem using the asynchrony between two cues and the reliability of the cues. This allows the model to make predictions about the behavior of subjects performing a synchrony judgment task, predictive power that does not exist in other approaches, such as post hoc fitting of Gaussian curves to behavioral data. We tested the model predictions against the performance of 37 subjects performing a synchrony judgment task viewing audiovisual speech under a variety of manipulations, including varying asynchronies, intelligibility, and visual cue reliability. The causal inference model outperformed the Gaussian model across two experiments, providing a better fit to the behavioral data with fewer parameters. Because the causal inference model is derived from a principled understanding of the task, model parameters are directly interpretable in terms of stimulus and subject properties.
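
The causal-inference step the abstract describes can be sketched in a few lines. The sketch below is a simplified stand-in for the paper's generative model (function name and parameter values are illustrative): under a common cause the measured audiovisual asynchrony should be small, scattered only by sensory noise; under separate causes it is more broadly distributed; Bayes' rule then weighs the two hypotheses.

```python
import math

def p_common(x, sigma_meas, sigma_sep, prior=0.5):
    """Posterior probability that voice and face share a common cause, given a
    measured asynchrony x (seconds). Common cause: true asynchrony is 0, so
    x ~ N(0, sigma_meas). Separate causes: asynchrony varies broadly, modelled
    here as x ~ N(0, sqrt(sigma_meas^2 + sigma_sep^2)). Simplified assumption,
    not the authors' exact formulation."""
    def normpdf(v, s):
        return math.exp(-0.5 * (v / s) ** 2) / (s * math.sqrt(2 * math.pi))
    like_c1 = normpdf(x, sigma_meas)                       # common cause
    like_c2 = normpdf(x, math.hypot(sigma_meas, sigma_sep))  # separate causes
    return like_c1 * prior / (like_c1 * prior + like_c2 * (1 - prior))
```

A synchrony judgment then falls out naturally: respond "synchronous" when the posterior exceeds 0.5, with the decision boundary shifting as cue reliability (sigma_meas) changes.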

  13. An online ID identification system for liquefied-gas cylinder plant

    Science.gov (United States)

    He, Jin; Ding, Zhenwen; Han, Lei; Zhang, Hao

    2017-11-01

    An automatic ID identification system for gas cylinders' online production was developed based on the production conditions and requirements of the Technical Committee for Standardization of Gas Cylinders. A cylinder ID image acquisition system was designed to improve the image contrast of ID regions on gas cylinders against the background. Then the ID digits region was located by the CNN template matching algorithm. Following that, an adaptive threshold method based on the analysis of local average grey value and standard deviation was proposed to overcome defects of non-uniform background in the segmentation results. To improve the single-digit identification accuracy, two BP neural networks were trained, respectively, for the identification of all digits and of the easily confusable digits. If a single digit was classified as one of the confusable digits by the former network, it was further tested by the latter, and the latter's result was taken as the final identification result for that digit. At last, majority voting was adopted to decide the final identification result for the 6-digit cylinder ID. The developed system was installed on a production line of a liquefied-petroleum-gas cylinder plant and worked in parallel with the existing weighing step on the line. Through the field test, the correct identification rate for a single ID digit was 94.73%, and none of the 2000 tested cylinder IDs was misclassified through the majority voting.
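
Two of the steps above lend themselves to a compact sketch. The code below is illustrative, not the authors' implementation: the local threshold is a Niblack-style T = mean + k·std over a square window (the paper's exact formula is not given here, and `win` and `k` are arbitrary), and the vote is a per-position majority over repeated reads of a 6-digit ID.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from collections import Counter

def adaptive_threshold(img, win=15, k=-0.2):
    """Binarize with a local threshold T = mean + k*std over a win x win
    neighbourhood, so a non-uniform background does not need one global cut."""
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    windows = sliding_window_view(p, (win, win))   # shape (H, W, win, win)
    mean = windows.mean(axis=(2, 3))
    std = windows.std(axis=(2, 3))
    return img > mean + k * std

def vote_id(reads):
    """Majority vote per digit position over repeated reads of a cylinder ID."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*reads))
```

The voting step shows why the field test could reach zero whole-ID errors despite a 94.73% single-digit rate: an occasional misread digit is outvoted by the other reads of the same position.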

  14. How musical expertise shapes speech perception: evidence from auditory classification images.

    Science.gov (United States)

    Varnet, Léo; Wang, Tianyun; Peter, Chloe; Meunier, Fanny; Hoen, Michel

    2015-09-24

    It is now well established that extensive musical training percolates to higher levels of cognition, such as speech processing. However, the lack of a precise technique to investigate the specific listening strategy involved in speech comprehension has made it difficult to determine how musicians' higher performance in non-speech tasks contributes to their enhanced speech comprehension. The recently developed Auditory Classification Image approach reveals the precise time-frequency regions used by participants when performing phonemic categorizations in noise. Here we used this technique on 19 non-musicians and 19 professional musicians. We found that both groups used very similar listening strategies, but the musicians relied more heavily on the two main acoustic cues, at the onset of the first formant and at the onsets of the second and third formants. Additionally, they responded more consistently to stimuli. These observations provide a direct visualization of auditory plasticity resulting from extensive musical training and shed light on the level of functional transfer between auditory processing and speech perception.

  15. Transformation of Flaubert’s Free Indirect Speech in Film Adaptation Madame Bovary by Claude Chabrol

    OpenAIRE

    Florence Gacoin Marks

    2013-01-01

    The paper deals with the transformation of Flaubert’s free indirect speech in the film Madame Bovary by Claude Chabrol. Conversion of free indirect speech into direct speech or into narration by an external narrator (voice-over) cannot be avoided; it does, however, pose many problems because of the potential ambiguousness (polyphony) of free indirect speech. In such cases, Chabrol often finds effective solutions which bring the film closer to Flaubert’s style. Nevertheless, it remains clear t...

  16. Analysis of vocal signal in its amplitude - time representation. speech synthesis-by-rules

    International Nuclear Information System (INIS)

    Rodet, Xavier

    1977-01-01

    In the first part of this dissertation, natural speech production and the resulting acoustic waveform are examined under various aspects: communication, phonetics, frequency and temporal analysis. Our own study of the direct signal is compared to other research in these different fields, and fundamental features of vocal signals are described. The second part deals with the numerous methods already used for automatic text-to-speech synthesis. In the last part, we describe the new speech synthesis-by-rule methods that we have worked out, and we present in detail the structure of the real-time speech synthesiser that we have implemented on a mini-computer. (author) [fr

  17. Perception of synthetic speech produced automatically by rule: Intelligibility of eight text-to-speech systems.

    Science.gov (United States)

    Greene, Beth G; Logan, John S; Pisoni, David B

    1986-03-01

    We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered.

  19. Cerebellar tDCS dissociates the timing of perceptual decisions from perceptual change in speech

    NARCIS (Netherlands)

    Lametti, D.R.; Oostwoud Wijdenes, L.; Bonaiuto, J.; Bestmann, S.; Rothwell, J.C.

    2016-01-01

    Neuroimaging studies suggest that the cerebellum might play a role in both speech perception and speech perceptual learning. However, it remains unclear what this role is: does the cerebellum directly contribute to the perceptual decision? Or does it contribute to the timing of perceptual decisions?

  20. Direct implantation of rapamycin-eluting stents with bioresorbable drug carrier technology utilising the Svelte coronary stent-on-a-wire: the DIRECT II study.

    Science.gov (United States)

    Verheye, Stefan; Khattab, Ahmed A; Carrie, Didier; Stella, Pieter; Slagboom, Ton; Bartunek, Jozef; Onuma, Yoshinobu; Serruys, Patrick W

    2016-08-05

    Our aim was to demonstrate the safety and efficacy of the Svelte sirolimus-eluting coronary stent-on-a-wire Integrated Delivery System (IDS) with bioresorbable drug coating compared to the Resolute Integrity zotarolimus-eluting stent with durable polymer in patients with de novo coronary artery lesions. Direct stenting, particularly in conjunction with transradial intervention (TRI), has been associated with reduced bleeding complications, procedure time, radiation exposure and contrast administration compared to conventional stenting with wiring and predilatation. The low-profile Svelte IDS is designed to facilitate TRI and direct stenting, reducing the number of procedural steps, time and cost associated with coronary stenting. DIRECT II was a prospective, multicentre trial which enrolled 159 patients to establish non-inferiority of the Svelte IDS versus Resolute Integrity using a 2:1 randomisation. The primary endpoint was angiographic in-stent late lumen loss (LLL) at six months. Target vessel failure (TVF), as well as secondary clinical endpoints, will be assessed annually up to five years. At six months, in-stent LLL was 0.09±0.31 mm in the Svelte IDS group compared to 0.13±0.27 mm in the Resolute Integrity group (p<0.001 for non-inferiority). TVF at one year was similar across the Svelte IDS and Resolute Integrity groups (6.5% vs. 9.8%, respectively). DIRECT II demonstrated the non-inferiority of the Svelte IDS to Resolute Integrity with respect to in-stent LLL at six months. Clinical outcomes at one year were comparable between the two groups.

  1. The speech perception skills of children with and without speech sound disorder.

    Science.gov (United States)

    Hearnshaw, Stephanie; Baker, Elise; Munro, Natalie

    To investigate whether Australian-English speaking children with and without speech sound disorder (SSD) differ in their overall speech perception accuracy. Additionally, to investigate differences in the perception of specific phonemes and the association between speech perception and speech production skills. Twenty-five Australian-English speaking children aged 48-60 months participated in this study. The SSD group included 12 children and the typically developing (TD) group included 13 children. Children completed routine speech and language assessments in addition to an experimental Australian-English lexical and phonetic judgement task based on Rvachew's Speech Assessment and Interactive Learning System (SAILS) program (Rvachew, 2009). This task included eight words across four word-initial phonemes: /k, ɹ, ʃ, s/. Children with SSD showed significantly poorer perceptual accuracy on the lexical and phonetic judgement task compared with TD peers. The phonemes /ɹ/ and /s/ were most frequently perceived in error across both groups. Additionally, the phoneme /ɹ/ was most commonly produced in error. There was also a positive correlation between overall speech perception and speech production scores. Children with SSD perceived speech less accurately than their typically developing peers. The findings suggest that an Australian-English variation of a lexical and phonetic judgement task similar to the SAILS program is promising and worthy of a larger scale study. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Experience is Instrumental in Tuning a Link Between Language and Cognition: Evidence from 6- to 7- Month-Old Infants' Object Categorization.

    Science.gov (United States)

    Perszyk, Danielle R; Waxman, Sandra R

    2017-04-19

    At birth, infants not only prefer listening to human vocalizations, but also have begun to link these vocalizations to cognition: For infants as young as three months of age, listening to human language supports object categorization, a core cognitive capacity. This precocious link is initially broad: At 3 and 4 months, vocalizations of both humans and nonhuman primates support categorization. But by 6 months, infants have narrowed the link: Only human vocalizations support object categorization. Here we ask what guides infants as they tune their initially broad link to a more precise one, engaged only by the vocalizations of our species. Across three studies, we use a novel exposure paradigm to examine the effects of experience. We document that merely exposing infants to nonhuman primate vocalizations enables infants to preserve the early-established link between this signal and categorization. In contrast, exposing infants to backward speech - a signal that fails to support categorization at any age - offers no such advantage. Our findings reveal the power of early experience as infants specify which signals, from an initially broad set, they will continue to link to cognition.

  3. Speech Matters

    DEFF Research Database (Denmark)

    Hasse Jørgensen, Stina

    2011-01-01

    About Speech Matters, the Greek curator Katarina Gregos's exhibition at the Danish Pavilion, the 2011 Venice Biennale.

  4. Implementation of a Tour Guide Robot System Using RFID Technology and Viterbi Algorithm-Based HMM for Speech Recognition

    Directory of Open Access Journals (Sweden)

    Neng-Sheng Pai

    2014-01-01

    Full Text Available This paper applied speech recognition and RFID technologies to extend an omni-directional mobile robot with voice control and tour-guide introduction functions. For speech recognition, the speech signals were captured by short-time processing. The speaker first recorded isolated words for the robot to create a speech database of specific speakers. After pre-processing of this speech database, the feature parameters of cepstrum and delta-cepstrum were obtained using linear predictive coding (LPC). The Hidden Markov Model (HMM) was then used for model training on the speech database, and the Viterbi algorithm was used to find an optimal state sequence as the reference model for speech recognition. The trained reference models were put into the industrial computer on the robot platform, and the user entered the isolated words to be tested. After processing with the same front end, the test utterance was compared against the reference models; the model whose Viterbi path yielded the maximum total probability was taken as the recognition result. Finally, the speech recognition and RFID systems were tested in an actual environment to prove their feasibility and stability, and were implemented on the omni-directional mobile robot.
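
The Viterbi decoding step described above can be sketched compactly. This is a generic log-domain implementation (illustrative, not the authors' code): given log transition, initial, and per-frame emission scores, it returns the most probable state sequence and its score.

```python
import numpy as np

def viterbi(log_A, log_B_obs, log_pi):
    """Most probable HMM state path.
    log_A[i, j]   : log P(state j at t | state i at t-1)
    log_B_obs[t,i]: log P(observation at frame t | state i)
    log_pi[i]     : log P(initial state i)"""
    T, N = log_B_obs.shape
    delta = log_pi + log_B_obs[0]          # best score ending in each state
    psi = np.zeros((T, N), dtype=int)      # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_A    # (prev, cur) transition scores
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B_obs[t]
    # backtrack from the best final state
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return float(delta.max()), path[::-1]
```

In the recognition loop described in the abstract, this routine would be run once per trained word model, and the word whose model yields the highest total log-probability wins.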

  5. Relating hearing loss and executive functions to hearing aid users’ preference for, and speech recognition with, different combinations of binaural noise reduction and microphone directionality

    Directory of Open Access Journals (Sweden)

    Tobias Neher

    2014-12-01

    Full Text Available Knowledge of how executive functions relate to preferred hearing aid (HA) processing is sparse and seemingly inconsistent with related knowledge for speech recognition outcomes. This study thus aimed to find out if (1) performance on a measure of reading span (RS) is related to preferred binaural noise reduction (NR) strength, (2) similar relations exist for two different, nonverbal measures of executive function, (3) pure-tone average hearing loss (PTA), signal-to-noise ratio (SNR), and microphone directionality (DIR) also influence preferred NR strength, and (4) preference and speech recognition outcomes are similar. Sixty elderly HA users took part. Six HA conditions consisting of omnidirectional or cardioid microphones followed by inactive, moderate, or strong binaural NR as well as linear amplification were tested. Outcome was assessed at fixed SNRs using headphone simulations of a frontal target talker in a busy cafeteria. Analyses showed positive effects of active NR and DIR on preference, and negative and positive effects of, respectively, strong NR and DIR on speech recognition. Also, while moderate NR was the most preferred NR setting overall, preference for strong NR increased with SNR. No relation between RS and preference was found. However, larger PTA was related to weaker preference for inactive NR and stronger preference for strong NR for both microphone modes. Equivalent (but weaker) relations between worse performance on one nonverbal measure of executive function and the HA conditions without DIR were found. For speech recognition, there were relations between HA condition, PTA, and RS, but their pattern differed from that for preference. Altogether, these results indicate that, while moderate NR works well in general, a notable proportion of HA users prefer stronger NR. Furthermore, PTA and executive functions can account for some of the variability in preference for, and speech recognition with, different binaural NR and DIR settings.

  6. Speech Inconsistency in Children with Childhood Apraxia of Speech, Language Impairment, and Speech Delay: Depends on the Stimuli

    Science.gov (United States)

    Iuzzini-Seigel, Jenya; Hogan, Tiffany P.; Green, Jordan R.

    2017-01-01

    Purpose: The current research sought to determine (a) if speech inconsistency is a core feature of childhood apraxia of speech (CAS) or if it is driven by comorbid language impairment that affects a large subset of children with CAS and (b) if speech inconsistency is a sensitive and specific diagnostic marker that can differentiate between CAS and…

  7. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    Science.gov (United States)

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line

  8. Clear Speech - Mere Speech? How segmental and prosodic speech reduction shape the impression that speakers create on listeners

    DEFF Research Database (Denmark)

    Niebuhr, Oliver

    2017-01-01

    ...whether variation in the degree of reduction also has a systematic effect on the attributes we ascribe to the speaker who produces the speech signal. A perception experiment was carried out for German in which 46 listeners judged whether or not speakers showing 3 different combinations of segmental and prosodic reduction levels (unreduced, moderately reduced, strongly reduced) are appropriately described by 13 physical, social, and cognitive attributes. The experiment shows that clear speech is not mere speech, and less clear speech is not just reduced either. Rather, results revealed a complex interplay of reduction levels and perceived speaker attributes in which moderate reduction can make a better impression on listeners than no reduction. In addition to its relevance in reduction models and theories, this interplay is instructive for various fields of speech application from social robotics to charisma...

  9. Preserved speech abilities and compensation following prefrontal damage.

    Science.gov (United States)

    Buckner, R L; Corbetta, M; Schatz, J; Raichle, M E; Petersen, S E

    1996-02-06

    Lesions to left frontal cortex in humans produce speech production impairments (nonfluent aphasia). These impairments vary from subject to subject and performance on certain speech production tasks can be relatively preserved in some patients. A possible explanation for preservation of function under these circumstances is that areas outside left prefrontal cortex are used to compensate for the injured brain area. We report here a direct demonstration of preserved language function in a stroke patient (LF1) apparently due to the activation of a compensatory brain pathway. We used functional brain imaging with positron emission tomography (PET) as a basis for this study.

  10. Can chimpanzee infants (Pan troglodytes) form categorical representations in the same manner as human infants (Homo sapiens)?

    Science.gov (United States)

    Murai, Chizuko; Kosugi, Daisuke; Tomonaga, Masaki; Tanaka, Masayuki; Matsuzawa, Tetsuro; Itakura, Shoji

    2005-05-01

    We directly compared chimpanzee infants and human infants for categorical representations of three global-like categories (mammals, furniture and vehicles), using the familiarization-novelty preference technique. Neither species received any training during the experiments. As a measure, we used the time that participants spent looking at the stimulus object while touching it. During the familiarization phase, participants were presented with four familiarization objects from one of three categories (e.g. mammals). In the test phase, they were then presented with a pair of novel objects, one from the familiar category and one from a novel category (e.g. vehicles). The chimpanzee infants did not show significant habituation, whereas human infants did. Most importantly, however, both species showed significant novelty preference in the test phase. This indicates that chimpanzee infants, like human infants, formed categorical representations at a global-like level. Implications for the shared origins and species-specificity of categorization abilities, and the cognitive operations underlying categorization, are discussed.

  11. Learning to Produce Syllabic Speech Sounds via Reward-Modulated Neural Plasticity

    Science.gov (United States)

    Warlaumont, Anne S.; Finnegan, Megan K.

    2016-01-01

    At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. How the infant’s nervous system supports the acquisition of this ability is unknown. Here we present a computational model that combines a spiking neural network, reinforcement-modulated spike-timing-dependent plasticity, and a human-like vocal tract to simulate the acquisition of canonical babbling. Like human infants, the model’s frequency of canonical babbling gradually increases. The model is rewarded when it produces a sound that is more auditorily salient than sounds it has previously produced. This is consistent with data from human infants indicating that contingent adult responses shape infant behavior and with data from deaf and tracheostomized infants indicating that hearing, including hearing one’s own vocalizations, is critical for canonical babbling development. Reward receipt increases the level of dopamine in the neural network. The neural network contains a reservoir with recurrent connections and two motor neuron groups, one agonist and one antagonist, which control the masseter and orbicularis oris muscles, promoting or inhibiting mouth closure. The model learns to increase the number of salient, syllabic sounds it produces by adjusting the base level of muscle activation and increasing their range of activity. Our results support the possibility that through dopamine-modulated spike-timing-dependent plasticity, the motor cortex learns to harness its natural oscillations in activity in order to produce syllabic sounds. It thus suggests that learning to produce rhythmic mouth movements for speech production may be supported by general cortical learning mechanisms. The model makes several testable predictions and has implications for our understanding not only of how syllabic vocalizations develop
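    The core learning rule in such models, reward-modulated spike-timing-dependent plasticity, can be sketched compactly: coincident pre/post spiking builds a decaying eligibility trace, and a scalar dopamine signal gates when that trace is converted into weight change. The sketch below is a generic eligibility-trace formulation, not the authors' implementation; the sizes, constants, and the simplified (unsigned) coincidence rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 20, 2            # e.g. reservoir neurons -> agonist/antagonist motor groups
w = rng.uniform(0.0, 0.5, (n_post, n_pre))   # synaptic weights
elig = np.zeros_like(w)          # eligibility trace accumulating STDP events

tau_e, lr = 50.0, 0.1            # trace decay constant (ms) and learning rate (illustrative)

def step(pre_spikes, post_spikes, dopamine, dt=1.0):
    """One timestep of reward-modulated plasticity.
    pre_spikes: (n_pre,) 0/1; post_spikes: (n_post,) 0/1;
    dopamine: scalar reward signal (e.g. raised when a salient sound is produced)."""
    global w, elig
    # Hebbian coincidence marks candidate synapses (a full STDP rule would
    # use signed pre-before-post vs. post-before-pre timing windows).
    elig += np.outer(post_spikes, pre_spikes)
    elig *= np.exp(-dt / tau_e)                # trace decays between rewards
    w += lr * dopamine * elig                  # reward gates the weight change
    np.clip(w, 0.0, 1.0, out=w)

# Toy run: random spiking, with reward delivered on the last step only.
for t in range(100):
    step(rng.integers(0, 2, n_pre), rng.integers(0, 2, n_post),
         dopamine=1.0 if t == 99 else 0.0)
```

The design point the model exploits is that weight changes happen only when the dopamine signal is nonzero, so synapses active shortly before a salient vocalization are selectively strengthened.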

  12. Electrophysiological and hemodynamic mismatch responses in rats listening to human speech syllables.

    Directory of Open Access Journals (Sweden)

    Mahdi Mahmoudzadeh

    Full Text Available Speech is a complex auditory stimulus which is processed according to several time-scales. Whereas consonant discrimination requires resolving rapid acoustic events, voice perception relies on slower cues. Humans, right from preterm ages, are particularly efficient at encoding temporal cues. To compare the capacities of preterms to those observed in other mammals, we tested anesthetized adult rats using exactly the same paradigm as that used in preterm neonates. We simultaneously recorded neural (using ECoG) and hemodynamic (using fNIRS) responses to series of human speech syllables and investigated the brain response to a change of consonant (ba vs. ga) and to a change of voice (male vs. female). Both methods revealed concordant results, although ECoG measures were more sensitive than fNIRS. Responses to syllables were bilateral, but with marked right-hemispheric lateralization. Responses to voice changes were observed with both methods, while only ECoG was sensitive to consonant changes. These results suggest that rats more effectively processed the speech envelope than fine temporal cues, in contrast with human preterm neonates, in whom the opposite effects were observed. Cross-species comparisons constitute a very valuable tool for defining the singularities of the human brain and the species-specific biases that may help human infants learn their native language.

  13. Supporting co-creation with software, the idSpace platform

    NARCIS (Netherlands)

    Van Rosmalen, Peter; Boon, Jo; Bitter-Rijpkema, Marlies; Sie, Rory; Sloep, Peter

    2014-01-01

    Innovation, in general, requires teamwork among specialist of different disciplines. The idSpace project developed ideas on how teams of collaborating innovators could best be supported. These ideas were embodied in a platform that the project developed. This idSpace platform allows its users to

  14. Under-resourced speech recognition based on the speech manifold

    CSIR Research Space (South Africa)

    Sahraeian, R

    2015-09-01

    Full Text Available Conventional acoustic modeling involves estimating many parameters to effectively model feature distributions. The sparseness of speech and text data, however, degrades the reliability of the estimation process and makes speech recognition a...

  15. PRACTICING SPEECH THERAPY INTERVENTION FOR SOCIAL INTEGRATION OF CHILDREN WITH SPEECH DISORDERS

    Directory of Open Access Journals (Sweden)

    Martin Ofelia POPESCU

    2016-11-01

    Full Text Available The article presents a concise speech-correction intervention program for dyslalia, combined with the development of intrapersonal, interpersonal, and social-integration capacities in children with speech disorders. The program's main objectives are: increasing the potential for individual social integration by correcting speech disorders while developing intra- and interpersonal capacities, and increasing the potential for social integration of children and community groups by optimizing the socio-relational context of children with speech disorders. The program included 60 children/students with dyslalic speech disorders (monomorphic and polymorphic dyslalia) from 11 educational institutions - 6 kindergartens and 5 schools/secondary schools - affiliated with the inter-school logopedic centre (CLI) of Targu Jiu city and areas of Gorj district. The program was implemented under the assumption that a therapeutic-formative intervention to correct speech disorders would, in combination with correcting pronunciation disorders, optimize the social integration of children with speech disorders. The results confirm the hypothesis and attest to the efficiency of the intervention program.

  16. Schizophrenia alters intra-network functional connectivity in the caudate for detecting speech under informational speech masking conditions.

    Science.gov (United States)

    Zheng, Yingjun; Wu, Chao; Li, Juanhua; Li, Ruikeng; Peng, Hongjun; She, Shenglin; Ning, Yuping; Li, Liang

    2018-04-04

    Speech recognition under noisy "cocktail-party" environments involves multiple perceptual/cognitive processes, including target detection, selective attention, irrelevant-signal inhibition, sensory/working memory, and speech production. Compared to healthy listeners, people with schizophrenia are more vulnerable to masking stimuli and perform worse in speech recognition under speech-on-speech masking conditions. Although the schizophrenia-related speech-recognition impairment under "cocktail-party" conditions is associated with deficits in various perceptual/cognitive processes, it is crucial to know whether the brain substrates critically underlying speech detection against informational speech masking are impaired in people with schizophrenia. Using functional magnetic resonance imaging (fMRI), this study investigated differences between people with schizophrenia (n = 19, mean age = 33 ± 10 years) and their matched healthy controls (n = 15, mean age = 30 ± 9 years) in intra-network functional connectivity (FC) specifically associated with target-speech detection under speech-on-speech-masking conditions. The target-speech detection performance under the speech-on-speech-masking condition in participants with schizophrenia was significantly worse than that in matched healthy participants (healthy controls). Moreover, in healthy controls, but not participants with schizophrenia, the strength of intra-network FC within the bilateral caudate was positively correlated with speech-detection performance under the speech-masking conditions. Compared to controls, patients showed an altered spatial activity pattern and decreased intra-network FC in the caudate. In people with schizophrenia, the declined speech-detection performance under speech-on-speech masking conditions is associated with reduced intra-caudate functional connectivity, which normally contributes to detecting target speech against speech masking via its function of suppressing masking-speech signals.

  17. A Contextual Model for Identity Management (IdM) Interfaces

    Science.gov (United States)

    Fuller, Nathaniel J.

    2014-01-01

    The usability of Identity Management (IdM) systems is highly dependent upon design that simplifies the processes of identification, authentication, and authorization. Recent findings reveal two critical problems that degrade IdM usability: (1) unfeasible techniques for managing various digital identifiers, and (2) ambiguous security interfaces.…

  18. Clarté des idées innées ?

    DEFF Research Database (Denmark)

    Schøsler, Jørn

    2013-01-01

    An analysis is given of the concepts of 'evidence' and 'innate ideas' in Descartes and Locke, as well as in the French Enlightenment philosophers.

  19. Cost Analysis of Direct versus Indirect and Individual versus Group Modes of Manual-Based Speech-and-Language Therapy for Primary School-Age Children with Primary Language Impairment

    Science.gov (United States)

    Dickson, Kirstin; Marshall, Marjorie; Boyle, James; McCartney, Elspeth; O'Hare, Anne; Forbes, John

    2009-01-01

    Background: The study is the first within trial cost analysis of direct versus indirect and individual versus group modes of speech-and-language therapy for children with primary language impairment. Aims: To compare the short-run resource consequences of the four interventions alongside the effects achieved measured by standardized scores on a…

  20. Speech disorder prevention

    Directory of Open Access Journals (Sweden)

    Miladis Fornaris-Méndez

    2017-04-01

    Full Text Available Language therapy has shifted from a medical focus to a preventive focus. Difficulties are still evident in this preventive work, however, because more space is devoted to correcting language disorders. Because speech disorders are the most frequently occurring dysfunction, the preventive work undertaken to avoid their appearance acquires special importance. Speech education from early childhood makes it easier to prevent the appearance of speech disorders in children. The present work aims to offer different activities for the prevention of speech disorders.

  1. Functional connectivity in the first year of life in infants at risk for autism spectrum disorder: an EEG study.

    Directory of Open Access Journals (Sweden)

    Giulia Righi

    Full Text Available In the field of autism research, recent work has been devoted to studying both behavioral and neural markers that may aid in early identification of autism spectrum disorder (ASD). These studies have often tested infants who have a significant family history of autism spectrum disorder, given the increased prevalence observed among such infants. In the present study we tested infants at high and low risk for ASD (based on having an older sibling diagnosed with the disorder or not) at 6 and 12 months of age. We computed intrahemispheric linear coherence between anterior and posterior sites as a measure of neural functional connectivity derived from electroencephalography while the infants were listening to speech sounds. We found that by 12 months of age infants at risk for ASD showed reduced functional connectivity compared to low-risk infants. Moreover, by 12 months of age infants later diagnosed with ASD showed reduced functional connectivity compared to both infants at low risk for the disorder and infants at high risk who were not later diagnosed with ASD. Significant differences in functional connectivity were also found between low-risk infants and high-risk infants who did not go on to develop ASD. These results demonstrate that reduced functional connectivity appears to be related to genetic vulnerability for ASD. Moreover, they provide further evidence that ASD is broadly characterized by differences in neural integration that emerge during the first year of life.

  2. Functional connectivity in the first year of life in infants at risk for autism spectrum disorder: an EEG study.

    Science.gov (United States)

    Righi, Giulia; Tierney, Adrienne L; Tager-Flusberg, Helen; Nelson, Charles A

    2014-01-01

    In the field of autism research, recent work has been devoted to studying both behavioral and neural markers that may aid in early identification of autism spectrum disorder (ASD). These studies have often tested infants who have a significant family history of autism spectrum disorder, given the increased prevalence observed among such infants. In the present study we tested infants at high and low risk for ASD (based on having an older sibling diagnosed with the disorder or not) at 6 and 12 months of age. We computed intrahemispheric linear coherence between anterior and posterior sites as a measure of neural functional connectivity derived from electroencephalography while the infants were listening to speech sounds. We found that by 12 months of age infants at risk for ASD showed reduced functional connectivity compared to low-risk infants. Moreover, by 12 months of age infants later diagnosed with ASD showed reduced functional connectivity compared to both infants at low risk for the disorder and infants at high risk who were not later diagnosed with ASD. Significant differences in functional connectivity were also found between low-risk infants and high-risk infants who did not go on to develop ASD. These results demonstrate that reduced functional connectivity appears to be related to genetic vulnerability for ASD. Moreover, they provide further evidence that ASD is broadly characterized by differences in neural integration that emerge during the first year of life.
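    The intrahemispheric linear coherence measure used in this study can be computed with standard spectral tools. Below is a minimal sketch using scipy's magnitude-squared coherence on two synthetic channels standing in for an anterior and a posterior electrode; the sampling rate, frequency band, and signals are illustrative assumptions, not the study's parameters:

```python
import numpy as np
from scipy.signal import coherence

fs = 250.0                      # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

# Two toy "electrode" signals sharing a 10 Hz component plus independent noise,
# standing in for an anterior (e.g. frontal) and posterior (e.g. parietal) site.
shared = np.sin(2 * np.pi * 10 * t)
anterior = shared + 0.5 * rng.standard_normal(t.size)
posterior = shared + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence: 0 = no linear coupling, 1 = perfect coupling.
f, Cxy = coherence(anterior, posterior, fs=fs, nperseg=512)

# Average coherence in a band of interest (e.g. alpha, 8-12 Hz).
band = (f >= 8) & (f <= 12)
print(round(float(Cxy[band].mean()), 3))
```

Coherence peaks at the shared 10 Hz component and stays low elsewhere; a per-infant, per-electrode-pair version of this band average is the kind of scalar that such group comparisons are built on.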

  3. How to save distressed IDS-physician marriages: a case study.

    Science.gov (United States)

    Collins, H; Johnson, B A

    1998-04-01

    A hospital-driven IDS that encounters serious problems resulting from ownership of a physician practice should address those problems by focusing on three core areas: vision and leadership, effectiveness of operations, and physician compensation arrangements. If changes in these areas do not lead to improvements, the IDS may need to consider organizational restructuring. In one case study, a hospital-driven IDS faced the problem of owning a poorly performing MSO with a captive physician group. The IDS's governing board determined that the organization lacked effective communication with the physicians and that realization of the organization's vision would require greater physician involvement in organizational decision making. The organization is expected to undergo some corporate reorganization in which physicians will acquire an equity interest in the enterprise.

  4. Speech and Speech-Related Quality of Life After Late Palate Repair: A Patient's Perspective.

    Science.gov (United States)

    Schönmeyr, Björn; Wendby, Lisa; Sharma, Mitali; Jacobson, Lia; Restrepo, Carolina; Campbell, Alex

    2015-07-01

    Many patients with cleft palate deformities worldwide receive treatment at a later age than is recommended for normal speech to develop. The outcomes after late palate repairs in terms of speech and quality of life (QOL) still remain largely unstudied. In the current study, questionnaires were used to assess the patients' perception of speech and QOL before and after primary palate repair. All of the patients were operated at a cleft center in northeast India and had a cleft palate with a normal lip or with a cleft lip that had been previously repaired. A total of 134 patients (7-35 years) were interviewed preoperatively and 46 patients (7-32 years) were assessed in the postoperative survey. The survey showed that scores based on the speech handicap index, concerning speech and speech-related QOL, did not improve postoperatively. In fact, the questionnaires indicated that the speech became more unpredictable (P reported that their self-confidence had improved after the operation. Thus, the majority of interviewed patients who underwent late primary palate repair were satisfied with the surgery. At the same time, speech and speech-related QOL did not improve according to the speech handicap index-based survey. Speech predictability may even become worse and nasal regurgitation may increase after late palate repair, according to these results.

  5. Visual Speech Fills in Both Discrimination and Identification of Non-Intact Auditory Speech in Children

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve

    2018-01-01

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…

  6. A Good IDS Response Protocol of MANET Containment Strategies

    Science.gov (United States)

    Cheng, Bo-Chao; Chen, Huan; Tseng, Ryh-Yuh

    Much recent research concentrates on designing an Intrusion Detection System (IDS) to detect the misbehavior of malicious nodes in MANETs, with their ad-hoc and mobile natures. However, without rapid and appropriate IDS response mechanisms performing follow-up management services, even the best IDS cannot achieve the desired primary goal of incident response. A competent containment strategy is needed to limit the extent of an attack in the Incident Response Life Cycle. Inspired by the T-cell mechanisms of the human immune system, we propose an efficient MANET IDS response protocol (T-SecAODV) that can rapidly and accurately disseminate alerts about malicious-node attacks to other nodes so that they modify their AODV routing tables to isolate the malicious nodes. Simulations are conducted with the Qualnet network simulator, and the results indicate that T-SecAODV is able to spread alerts steadily while greatly reducing faulty rumors under simultaneous multiple malicious-node attacks.
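    The containment idea, flooding an alert so that every node blacklists the accused node and purges routes through it, can be sketched independently of AODV internals. Everything below (the `Node` class, the single-next-hop routing table, the duplicate-suppression set) is an illustrative abstraction, not the T-SecAODV protocol itself:

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []          # directly reachable nodes
        self.routes = {}             # destination -> next hop (toy routing table)
        self.blacklist = set()       # nodes isolated by received alerts
        self.seen_alerts = set()     # alert ids already processed (stops re-flooding)

    def receive_alert(self, alert_id, malicious):
        """Process an IDS alert: isolate the accused node, purge routes
        through it, and re-broadcast the alert once to all neighbors."""
        if alert_id in self.seen_alerts:
            return                   # duplicate: flood terminates here
        self.seen_alerts.add(alert_id)
        self.blacklist.add(malicious)
        # Drop routes whose next hop is the accused node, and routes to it.
        self.routes = {dst: nh for dst, nh in self.routes.items()
                       if nh != malicious and dst != malicious}
        for nb in self.neighbors:
            nb.receive_alert(alert_id, malicious)

# Toy 4-node chain a-b-c-d where everyone routes toward d via the next node.
a, b, c, d = (Node(n) for n in "abcd")
a.neighbors, b.neighbors, c.neighbors, d.neighbors = [b], [a, c], [b, d], [c]
a.routes = {"d": "b"}; b.routes = {"d": "c"}; c.routes = {"d": "d"}

b.receive_alert("alert-1", malicious="c")   # b's IDS accuses c
print(a.blacklist, b.routes)                # alert reached a; b's route via c purged
```

A real protocol must additionally authenticate alerts (so the flood cannot itself be abused to spread "faulty rumors") and trigger route re-discovery around the isolated node; this sketch shows only the dissemination and purge steps.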

  7. Speech in spinocerebellar ataxia.

    Science.gov (United States)

    Schalling, Ellika; Hartelius, Lena

    2013-12-01

    Spinocerebellar ataxias (SCAs) are a heterogeneous group of autosomal dominant cerebellar ataxias clinically characterized by progressive ataxia, dysarthria, and a range of other concomitant neurological symptoms. Only a few studies include detailed characterization of speech symptoms in SCA. Speech symptoms in SCA resemble ataxic dysarthria, but symptoms related to phonation may be more prominent. One study to date has shown an association between genotype and differences in speech and voice symptoms. Further studies of speech and voice phenotypes are warranted, possibly to aid in clinical diagnosis. In addition, instrumental speech analysis has been demonstrated to be a reliable measure that may be used to monitor disease progression or therapy outcomes in possible future pharmacological treatments. Intervention by speech and language pathologists should go beyond assessment. Clinical guidelines for the management of speech, communication, and swallowing need to be developed for individuals with progressive cerebellar ataxia. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Ekspert : mobiil-ID kasutamine valimistel turvariske ei tekita / Holger Roonemaa

    Index Scriptorium Estoniae

    Roonemaa, Holger

    2008-01-01

    According to e-voting project manager Taavi Martens, there is no reason to doubt the security of mobile-ID. In the view of electoral committee chairman Heiki Sibul, m-ID cards should be issued by the Citizenship and Migration Board.

  9. Picture This: How to Establish an Effective School ID Card Program

    Science.gov (United States)

    Finkelstein, David

    2013-01-01

    Most school districts do not have an ID card policy that everyone knows and follows, yet many school districts are implementing ID card programs to address concerns about safety, efficiency, and convenience. A well-thought-out ID card program leads to greater security and smoother operations throughout the school and should thus be a priority.…

  10. Predicting Speech Intelligibility with a Multiple Speech Subsystems Approach in Children with Cerebral Palsy

    Science.gov (United States)

    Lee, Jimin; Hustad, Katherine C.; Weismer, Gary

    2014-01-01

    Purpose: Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystems approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Method: Nine acoustic variables reflecting different subsystems, and…

  11. Id-1 is not expressed in the luminal epithelial cells of mammary glands

    International Nuclear Information System (INIS)

    Uehara, Norihisa; Chou, Yu-Chien; Galvez, Jose J; Candia, Paola de; Cardiff, Robert D; Benezra, Robert; Shyamala, Gopalan

    2003-01-01

    The family of inhibitor of differentiation/DNA binding (Id) proteins is known to regulate development in several tissues. One member of this gene family, Id-1, has been implicated in mammary development and carcinogenesis. Mammary glands contain various cell types, among which the luminal epithelial cells are primarily targeted for proliferation, differentiation and carcinogenesis. Therefore, to assess the precise significance of Id-1 in mammary biology and carcinogenesis, we examined its cellular localization in vivo using immunohistochemistry. Extracts of whole mammary glands from wild type and Id-1 null mutant mice, and tissue sections from paraffin-embedded mouse mammary glands from various developmental stages and normal human breast were subjected to immunoblot and immunohistochemical analyses, respectively. In both these procedures, an anti-Id-1 rabbit polyclonal antibody was used for detection of Id-1. In immunoblot analyses, using whole mammary gland extracts, Id-1 was detected. In immunohistochemical analyses, however, Id-1 was not detected in the luminal epithelial cells of mammary glands during any stage of development, but it was detected in vascular endothelial cells. Id-1 is not expressed in the luminal epithelial cells of mammary glands

  12. A causal test of the motor theory of speech perception: a case of impaired speech production and spared speech perception.

    Science.gov (United States)

    Stasenko, Alena; Bonn, Cory; Teghipco, Alex; Garcea, Frank E; Sweet, Catherine; Dombovy, Mary; McDonough, Joyce; Mahon, Bradford Z

    2015-01-01

    The debate about the causal role of the motor system in speech perception has been reignited by demonstrations that motor processes are engaged during the processing of speech sounds. Here, we evaluate which aspects of auditory speech processing are affected, and which are not, in a stroke patient with dysfunction of the speech motor system. We found that the patient showed a normal phonemic categorical boundary when discriminating two non-words that differ by a minimal pair (e.g., ADA-AGA). However, using the same stimuli, the patient was unable to identify or label the non-word stimuli (using a button-press response). A control task showed that he could identify speech sounds by speaker gender, ruling out a general labelling impairment. These data suggest that while the motor system is not causally involved in perception of the speech signal, it may be used when other cues (e.g., meaning, context) are not available.
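    The "normal phonemic categorical boundary" reported here is typically estimated by fitting a logistic psychometric function to identification responses along an acoustic continuum and reading off its 50% crossover. A minimal sketch of that analysis follows; the continuum steps and response proportions are invented for illustration, not the patient's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: probability of one response category,
    x0 = category boundary (50% crossover), k = slope (steeper = more categorical)."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# 7-step synthesized ADA-AGA continuum and hypothetical proportions of 'AGA' responses.
steps = np.arange(1, 8, dtype=float)
p_aga = np.array([0.02, 0.05, 0.10, 0.45, 0.90, 0.97, 0.99])

(x0, k), _ = curve_fit(logistic, steps, p_aga, p0=[4.0, 1.0])
print(f"boundary at step {x0:.2f}, slope {k:.2f}")
```

A sharp slope with an abrupt 50% crossover is the signature of categorical perception; in discrimination tasks the same boundary shows up as better discrimination for cross-boundary pairs than for within-category pairs.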

  13. Maternal prenatal cortisol and infant cognitive development: moderation by infant-mother attachment.

    Science.gov (United States)

    Bergman, Kristin; Sarkar, Pampa; Glover, Vivette; O'Connor, Thomas G

    2010-06-01

    Experimental animal studies suggest that early glucocorticoid exposure may have lasting effects on the neurodevelopment of the offspring; animal studies also suggest that this effect may be eliminated by positive postnatal rearing. The relevance of these findings to humans is not known. We prospectively followed 125 mothers and their normally developing children from pregnancy through 17 months postnatal. Amniotic fluid was obtained at, on average, 17.2 weeks gestation; infants were assessed at an average age of 17 months with the Bayley Scales of Infant Development, and ratings of infant-mother attachment classification were made from the standard Ainsworth Strange Situation assessment. Prenatal cortisol exposure, indexed by amniotic fluid levels, negatively predicted cognitive ability in the infant, independent of prenatal, obstetric, and socioeconomic factors. This association was moderated by child-mother attachment: in children with an insecure attachment, the correlation was [r(54) = -.47, p < .001]; in contrast, the association was nonexistent in children who had a secure attachment [r(70) = -.05, ns]. These findings mimic experimental animal findings and provide the first direct human evidence that increased cortisol in utero is associated with impaired cognitive development, and that its impact is dependent on the quality of the mother-infant relationship. Copyright 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
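    Moderation of this kind, where the cortisol-cognition slope differs by attachment classification, is conventionally tested with an interaction term in a regression. Below is a hedged sketch on simulated data; the variable names, effect sizes, and random seed are illustrative only, not the study's data or analysis code:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 125
cortisol = rng.standard_normal(n)        # standardized prenatal cortisol (simulated)
secure = rng.integers(0, 2, n)           # 1 = secure attachment, 0 = insecure (simulated)

# Simulate the reported pattern: cortisol predicts lower cognition only when insecure.
cognition = -0.5 * cortisol * (1 - secure) + 0.5 * rng.standard_normal(n)

# OLS with an interaction term: cognition ~ cortisol + secure + cortisol:secure
X = np.column_stack([np.ones(n), cortisol, secure, cortisol * secure])
beta, *_ = np.linalg.lstsq(X, cognition, rcond=None)
b0, b_cort, b_sec, b_int = beta

# Simple slopes: cortisol effect within each attachment group.
print(f"slope if insecure: {b_cort:.2f}, slope if secure: {b_cort + b_int:.2f}")
```

The interaction coefficient `b_int` is the moderation test: a significant `b_int` means the cortisol slope in the secure group (`b_cort + b_int`) differs from the slope in the insecure group (`b_cort`), mirroring the r = -.47 vs. r = -.05 contrast reported above.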

  14. The Relationship between Speech Production and Speech Perception Deficits in Parkinson's Disease

    Science.gov (United States)

    De Keyser, Kim; Santens, Patrick; Bockstael, Annelies; Botteldooren, Dick; Talsma, Durk; De Vos, Stefanie; Van Cauwenberghe, Mieke; Verheugen, Femke; Corthals, Paul; De Letter, Miet

    2016-01-01

    Purpose: This study investigated the possible relationship between hypokinetic speech production and speech intensity perception in patients with Parkinson's disease (PD). Method: Participants included 14 patients with idiopathic PD and 14 matched healthy controls (HCs) with normal hearing and cognition. First, speech production was objectified…

  15. Visual speech information: a help or hindrance in perceptual processing of dysarthric speech.

    Science.gov (United States)

    Borrie, Stephanie A

    2015-03-01

    This study investigated the influence of visual speech information on perceptual processing of neurologically degraded speech. Fifty listeners identified spastic dysarthric speech under both audio (A) and audiovisual (AV) conditions. Condition comparisons revealed that the addition of visual speech information enhanced processing of the neurologically degraded input in terms of (a) acuity (percent phonemes correct) of vowels and consonants and (b) recognition (percent words correct) of predictive and nonpredictive phrases. Listeners exploited stress-based segmentation strategies more readily in AV conditions, suggesting that the perceptual benefit associated with adding visual speech information to the auditory signal-the AV advantage-has both segmental and suprasegmental origins. Results also revealed that the magnitude of the AV advantage can be predicted, to some degree, by the extent to which an individual utilizes syllabic stress cues to inform word recognition in AV conditions. Findings inform the development of a listener-specific model of speech perception that applies to processing of dysarthric speech in everyday communication contexts.

  16. The treatment of apraxia of speech : Speech and music therapy, an innovative joint effort

    NARCIS (Netherlands)

    Hurkmans, Josephus Johannes Stephanus

    2016-01-01

    Apraxia of Speech (AoS) is a neurogenic speech disorder. A wide variety of behavioural methods have been developed to treat AoS. Various therapy programmes use musical elements to improve speech production. A unique therapy programme combining elements of speech therapy and music therapy is called

  17. Speech-associated gestures, Broca’s area, and the human mirror system

    Science.gov (United States)

    Skipper, Jeremy I.; Goldin-Meadow, Susan; Nusbaum, Howard C.; Small, Steven L

    2009-01-01

    Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca’s area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a “mirror” or “observation–execution matching” system). We asked whether the role that Broca’s area plays in processing speech-associated gestures is consistent with the semantic retrieval/selection account (predicting relatively weak interactions between Broca’s area and other cortical areas because the meaningful information that speech-associated gestures convey reduces semantic ambiguity and thus reduces the need for semantic retrieval/selection) or the action recognition account (predicting strong interactions between Broca’s area and other cortical areas because speech-associated gestures are goal-directed actions that are “mirrored”). We compared the functional connectivity of Broca’s area with other cortical areas when participants listened to stories while watching meaningful speech-associated gestures, speech-irrelevant self-grooming hand movements, or no hand movements. A network analysis of neuroimaging data showed that interactions involving Broca’s area and other cortical areas were weakest when spoken language was accompanied by meaningful speech-associated gestures, and strongest when spoken language was accompanied by self-grooming hand movements or by no hand movements at all. Results are discussed with respect to the role that the human mirror system plays in processing speech-associated movements. PMID:17533001

  18. Practical speech user interface design

    CERN Document Server

    Lewis, James R

    2010-01-01

    Although speech is the most natural form of communication between humans, most people find using speech to communicate with machines anything but natural. Drawing from psychology, human-computer interaction, linguistics, and communication theory, Practical Speech User Interface Design provides a comprehensive yet concise survey of practical speech user interface (SUI) design. It offers practice-based and research-based guidance on how to design effective, efficient, and pleasant speech applications that people can really use. Focusing on the design of speech user interfaces for IVR application

  19. Speech understanding in noise with an eyeglass hearing aid: asymmetric fitting and the head shadow benefit of anterior microphones.

    Science.gov (United States)

    Mens, Lucas H M

    2011-01-01

    To test speech understanding in noise using array microphones integrated in an eyeglass device, and to test whether microphones placed anteriorly at the temple provide better directivity than microphones above the pinna. Sentences were presented from the front and uncorrelated noise from 45, 135, 225, and 315°. Fifteen hearing-impaired participants with a significant speech discrimination loss were included, as well as 5 normal-hearing listeners. The device (Varibel) improved speech understanding in noise compared to most conventional directional devices, with a directional benefit of 5.3 dB in the asymmetric fit mode, which was not significantly different from the bilateral fully directional mode (6.3 dB). Anterior microphones outperformed microphones at a conventional position above the pinna by 2.6 dB. By integrating microphones in an eyeglass frame, a long array can be used, resulting in a higher directivity index and improved speech understanding in noise. An asymmetric fit did not significantly reduce performance and can be considered to increase acceptance and environmental awareness. Directional microphones at the temple seemed to profit more from the head shadow than microphones above the pinna, better suppressing noise from behind the listener.

  20. Motor Speech Phenotypes of Frontotemporal Dementia, Primary Progressive Aphasia, and Progressive Apraxia of Speech

    Science.gov (United States)

    Poole, Matthew L.; Brodtmann, Amy; Darby, David; Vogel, Adam P.

    2017-01-01

    Purpose: Our purpose was to create a comprehensive review of speech impairment in frontotemporal dementia (FTD), primary progressive aphasia (PPA), and progressive apraxia of speech in order to identify the most effective measures for diagnosis and monitoring, and to elucidate associations between speech and neuroimaging. Method: Speech and…

  1. Ethical Challenges in Infant Feeding Research

    Directory of Open Access Journals (Sweden)

    Colin Binns

    2017-01-01

    Infants have a complex set of nutrient requirements to meet the demands of their high metabolic rate, growth, and immunological and cognitive development. Infant nutrition lays the foundation for health throughout life. While infant feeding research is essential, it must be conducted to the highest ethical standards. The objective of this paper is to discuss the implications of developments in infant nutrition for the ethics of infant feeding research and the implications for obtaining informed consent. A search of the medical literature was undertaken using the PubMed, Science Direct, Web of Knowledge, Proquest, and CINAHL databases. From a total of 9303 papers identified, the full texts of 87 articles that contained discussion of consent issues in infant feeding trials were obtained and read; after further screening, 42 papers were included in the results and discussion. Recent developments in infant nutrition of significance to ethics assessment include the improved survival of low birth weight infants, increasing evidence of the value of breastfeeding, and evidence of the lifelong importance of infant feeding and development in the first 1000 days of life in chronic disease epidemiology. Informed consent is a difficult issue, but should always include information on the value of preserving breastfeeding options. Project monitoring should be cognisant of the long-term implications of growth rates and early life nutrition.

  2. Joint Service Aircrew Mask (JSAM) - Tactical Aircraft (TA) A/P22P-14A Respirator Assembly (V)5: Speech Intelligibility Performance with Double Hearing Protection, HGU-84/P Flight Helmet

    Science.gov (United States)

    2017-04-06

    data does not license the holder or any other person or corporation; or convey any rights or permission to manufacture, use, or sell any patented...airworthiness. The JSAM-TA Respirator Assembly (V)5 (Figure 2) is a chemical, biological, and radiological respirator assembly manufactured by Cam Lock...Classic™ sizing matrix for speech intelligibility: Subject ID#, Gender, HGU-84/P Helmet, Helmet Liner (inches), Earcup Spacers (centered behind

  3. An analysis of the masking of speech by competing speech using self-report data.

    Science.gov (United States)

    Agus, Trevor R; Akeroyd, Michael A; Noble, William; Bhullar, Navjot

    2009-01-01

    Many of the items in the "Speech, Spatial, and Qualities of Hearing" scale questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol. 43, 85-99 (2004)] are concerned with speech understanding in a variety of backgrounds, both speech and nonspeech. To study whether these self-report data reflect informational masking, previously collected data on 414 people were analyzed. The lowest scores (greatest difficulties) were found for the two items in which there were two speech targets, with successively higher scores for competing speech (six items), energetic masking (one item), and no masking (three items). The results suggest significant masking by competing speech in everyday listening situations.

  4. Speech Understanding with a New Implant Technology: A Comparative Study with a New Nonskin Penetrating Baha System

    Directory of Open Access Journals (Sweden)

    Anja Kurz

    2014-01-01

    Objective. To compare hearing and speech understanding between a new, nonskin-penetrating Baha system (Baha Attract) and the current Baha system using a skin-penetrating abutment. Methods. Hearing and speech understanding were measured in 16 experienced Baha users. The transmission path via the abutment was compared to a simulated Baha Attract transmission path by attaching the implantable magnet to the abutment and then by adding a sample of artificial skin and the external parts of the Baha Attract system. Four different measurements were performed: bone conduction thresholds directly through the sound processor (BC Direct), aided sound field thresholds, aided speech understanding in quiet, and aided speech understanding in noise. Results. The simulated Baha Attract transmission path introduced an attenuation starting from approximately 5 dB at 1000 Hz and increasing to 20–25 dB above 6000 Hz. However, aided sound field thresholds showed smaller differences, and aided speech understanding in quiet and in noise did not differ significantly between the two transmission paths. Conclusion. The Baha Attract system transmission path introduces predominantly high-frequency attenuation. This attenuation can be partially compensated by adequate fitting of the speech processor. No significant decrease in speech understanding in either quiet or noise was found.

  5. Neural pathways for visual speech perception

    Directory of Open Access Journals (Sweden)

    Lynne E Bernstein

    2014-12-01

    This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread, diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

  6. Mothers who are securely attached in pregnancy show more attuned infant mirroring at 7 months postpartum

    Science.gov (United States)

    Kim, Sohye; Fonagy, Peter; Allen, Jon; Martinez, Sheila; Iyengar, Udita; Strathearn, Lane

    2014-01-01

    This study contrasted two forms of mother-infant mirroring: the mother's imitation of the infant's facial, gestural, or vocal behavior (i.e., “direct mirroring”) and the mother's ostensive verbalization of the infant's internal state, marked as distinct from the infant's experience (i.e., “intention mirroring”). Fifty mothers completed the Adult Attachment Interview during the third trimester of pregnancy. Mothers returned with their infants 7 months postpartum and completed a modified still-face procedure. While direct mirroring did not distinguish between secure and insecure/dismissing mothers, secure mothers were observed to engage in intention mirroring more than twice as frequently as did insecure/dismissing mothers. Infants of the two mother groups also demonstrated differences, with infants of secure mothers directing their attention toward their mothers at a higher frequency than did infants of insecure/dismissing mothers. The findings underscore marked and ostensive verbalization as a distinguishing feature of secure mothers’ well-attuned, affect-mirroring communication with their infants. PMID:25020112

  7. Influence of father-infant relationship on infant development: A father-involvement intervention in Vietnam.

    Science.gov (United States)

    Rempel, Lynn A; Rempel, John K; Khuc, Toan Nang; Vui, Le Thi

    2017-10-01

    We examined the extent to which fathers can be taught and encouraged to develop positive relationships with their children, especially in infancy, and the effects of this fathering intervention on infant development. A multifaceted relationally focused intervention was used to assist fathers in Vietnam to engage in responsive direct and indirect involvement with their infants and work together with the mother as part of a parenting team. Fathers and mothers from 13 communes in a rural and semiurban district were recruited to the intervention group. Intervention fathers received group and individual counseling before and after birth, an interactive print resource, community messages about fathering, and the opportunity to participate in a Fathers Club. Couples from 12 comparable communes in a noncontiguous district were recruited to the control group. Fathers and mothers completed questionnaires at the prebirth recruitment and at 1-, 4-, and 9-months postbirth. Intervention fathers demonstrated greater increase in knowledge and attitudes regarding father-infant relationships. Both fathers and mothers reported that fathers engaged in more affection, care-taking, and play in the early months of their infants' lives and fathers felt more attached to their infants right from birth. A developmental assessment at 9 months showed that intervention infants demonstrated higher levels of motor, language, and personal/social development. This study demonstrated that fathers can be taught to interact more sensitively, responsively, and effectively with their newborn infants. Their increased interaction and emotional attachment appears to lay the foundation for enhanced infant development. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. Part-of-speech effects on text-to-speech synthesis

    CSIR Research Space (South Africa)

    Schlunz, GI

    2010-11-01

    Full Text Available One of the goals of text-to-speech (TTS) systems is to produce natural-sounding synthesised speech. Towards this end various natural language processing (NLP) tasks are performed to model the prosodic aspects of the TTS voice. One of the fundamental...

  9. Effect of age at cochlear implantation on auditory and speech development of children with auditory neuropathy spectrum disorder.

    Science.gov (United States)

    Liu, Yuying; Dong, Ruijuan; Li, Yuling; Xu, Tianqiu; Li, Yongxin; Chen, Xueqing; Gong, Shusheng

    2014-12-01

    To evaluate the auditory and speech abilities in children with auditory neuropathy spectrum disorder (ANSD) after cochlear implantation (CI) and determine the role of age at implantation. Ten children participated in this retrospective case series study. All children had evidence of ANSD. All subjects had no cochlear nerve deficiency on magnetic resonance imaging and had used their cochlear implants for 12-84 months. We divided the children into two groups: those who underwent implantation before 24 months of age and those who underwent implantation after 24 months of age. Their auditory and speech abilities were evaluated using the following: behavioral audiometry, the Categories of Auditory Performance (CAP), the Meaningful Auditory Integration Scale (MAIS), the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS), the Standard-Chinese version of the Monosyllabic Lexical Neighborhood Test (LNT), the Multisyllabic Lexical Neighborhood Test (MLNT), the Speech Intelligibility Rating (SIR), and the Meaningful Use of Speech Scale (MUSS). All children showed progress in their auditory and language abilities. The 4-frequency (500, 1000, 2000, and 4000 Hz) average aided hearing thresholds ranged from 17.5 to 57.5 dB HL. All children developed time-related auditory perception and speech skills. Scores of children with ANSD who received cochlear implants before 24 months tended to be better than those of children who received cochlear implants after 24 months. Seven children completed the Mandarin Lexical Neighborhood Test. Approximately half of the children showed improved open-set speech recognition. Cochlear implantation is helpful for children with ANSD and may be a good treatment option for many of them. In addition, children with ANSD fitted with cochlear implants before 24 months tended to acquire auditory and speech skills better than children fitted with cochlear implants after 24 months. Copyright © 2014

  10. Parent-infant psychotherapy for improving parental and infant mental health.

    Science.gov (United States)

    Barlow, Jane; Bennett, Cathy; Midgley, Nick; Larkin, Soili K; Wei, Yinghui

    2015-01-08

    Parent-infant psychotherapy (PIP) is a dyadic intervention that works with parent and infant together, with the aim of improving the parent-infant relationship and promoting infant attachment and optimal infant development. PIP aims to achieve this by targeting the mother's view of her infant, which may be affected by her own experiences, and linking it to her current relationship with her child, in order to improve the parent-infant relationship directly. Objectives: 1. To assess the effectiveness of PIP in improving parental and infant mental health and the parent-infant relationship. 2. To identify the programme components that appear to be associated with more effective outcomes and factors that modify intervention effectiveness (e.g. programme duration, programme focus). We searched the following electronic databases on 13 January 2014: Cochrane Central Register of Controlled Trials (CENTRAL, 2014, Issue 1), Ovid MEDLINE, EMBASE, CINAHL, PsycINFO, BIOSIS Citation Index, Science Citation Index, ERIC, and Sociological Abstracts. We also searched the metaRegister of Controlled Trials, checked reference lists, and contacted study authors and other experts. Two review authors assessed study eligibility independently. We included randomised controlled trials (RCTs) and quasi-randomised controlled trials (quasi-RCTs) that compared a PIP programme directed at parents with infants aged 24 months or less at study entry with a control condition (i.e. waiting-list, no treatment, or treatment-as-usual), and that used at least one standardised measure of parental or infant functioning. We also included studies that only used a second treatment group. We adhered to the standard methodological procedures of The Cochrane Collaboration. We standardised the treatment effect for each outcome in each study by dividing the mean difference (MD) in post-intervention scores between the intervention and control groups by the pooled standard deviation. We presented standardised mean differences (SMDs) and
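
The standardisation step quoted above (mean difference divided by the pooled standard deviation) is the usual SMD computation; a minimal sketch with hypothetical group statistics, not values from the review:

```python
import math

def pooled_sd(sd1, n1, sd2, n2):
    # Pooled standard deviation of two independent groups
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def standardised_mean_difference(mean_tx, sd_tx, n_tx, mean_ctl, sd_ctl, n_ctl):
    # SMD = (difference in post-intervention group means) / pooled SD
    return (mean_tx - mean_ctl) / pooled_sd(sd_tx, n_tx, sd_ctl, n_ctl)

# Hypothetical example: intervention group scores half a pooled SD higher
smd = standardised_mean_difference(105.0, 10.0, 40, 100.0, 10.0, 42)
print(round(smd, 2))  # 0.5
```

Dividing by the pooled SD puts outcomes measured on different scales onto a common, unitless metric so they can be combined across studies.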

  11. A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments

    Directory of Open Access Journals (Sweden)

    Jing Mi

    2016-09-01

    Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal, and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model.

  12. A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments.

    Science.gov (United States)

    Mi, Jing; Colburn, H Steven

    2016-10-03

    Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model. © The Author(s) 2016.
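
The EC-based mask estimation described in this abstract can be sketched as follows. This is a toy illustration, not the authors' implementation: it assumes a frontal target whose left- and right-ear STFT components are already equalized (equal amplitude, zero phase difference), so cancellation reduces to a subtraction, and it thresholds the per-unit energy change to build the binary mask.

```python
import numpy as np

def ec_binary_mask(stft_left, stft_right, threshold_db=-6.0):
    """Toy Equalization-Cancellation (EC) based binary mask.

    Assumes a frontal target, so the two ear signals are already
    'equalized' and cancellation is a simple subtraction. Units where
    subtraction removes most of the energy are target-dominated (mask=1).
    """
    energy_in = np.abs(stft_left) ** 2 + np.abs(stft_right) ** 2
    energy_out = np.abs(stft_left - stft_right) ** 2
    # Energy change (dB) between EC input and output per T-F unit
    change_db = 10 * np.log10((energy_out + 1e-12) / (energy_in + 1e-12))
    return (change_db < threshold_db).astype(int)

# Toy example: unit 0 holds a perfectly correlated (frontal target)
# component, unit 1 an uncorrelated (masker) component.
stft_l = np.array([1.0 + 0j, 1.0 + 0j])
stft_r = np.array([1.0 + 0j, -1.0 + 0j])
print(ec_binary_mask(stft_l, stft_r))  # [1 0]
```

In a full model, the equalization step would first compensate interaural time and level differences for the target direction before the subtraction.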

  13. ID4 promotes AR expression and blocks tumorigenicity of PC3 prostate cancer cells

    International Nuclear Information System (INIS)

    Komaragiri, Shravan Kumar; Bostanthirige, Dhanushka H.; Morton, Derrick J.; Patel, Divya; Joshi, Jugal; Upadhyay, Sunil; Chaudhary, Jaideep

    2016-01-01

    Deregulation of tumor suppressor genes is associated with tumorigenesis and the development of cancer. In prostate cancer, ID4 is epigenetically silenced and acts as a tumor suppressor. In normal prostate epithelial cells, ID4 collaborates with androgen receptor (AR) and p53 to exert its tumor suppressor activity. Previous studies have shown that ID4 promotes the tumor suppressive function of AR, whereas loss of ID4 results in tumor promoter activity of AR. A previous study from our lab showed that ectopic ID4 expression in DU145 cells attenuates proliferation and promotes AR expression, suggesting that ID4-dependent AR activity is tumor suppressive. In this study, we examined the effect of ectopic expression of ID4 on the highly malignant prostate cancer cell line PC3. Here we show that stable overexpression of ID4 in PC3 cells leads to increased apoptosis and decreased cell proliferation and migration. In addition, in vivo studies showed a decrease in tumor size and volume of ID4-overexpressing PC3 cells in nude mice. At the molecular level, these changes were associated with increased androgen receptor (AR), p21, and AR-dependent FKBP51 expression. At the mechanistic level, ID4 may regulate the expression or function of AR through specific but yet unknown AR co-regulators that may determine the final outcome of AR function. - Highlights: • ID4 expression induces AR expression in PC3 cells, which generally lack AR. • ID4 expression increased apoptosis and decreased cell proliferation and invasion. • Overexpression of ID4 reduces tumor growth of subcutaneous xenografts in vivo. • ID4 induces p21 and FKBP51 expression, co-factors of AR tumor suppressor activity.

  14. ID4 promotes AR expression and blocks tumorigenicity of PC3 prostate cancer cells

    Energy Technology Data Exchange (ETDEWEB)

    Komaragiri, Shravan Kumar; Bostanthirige, Dhanushka H.; Morton, Derrick J.; Patel, Divya; Joshi, Jugal; Upadhyay, Sunil; Chaudhary, Jaideep, E-mail: jchaudhary@cau.edu

    2016-09-09

    Deregulation of tumor suppressor genes is associated with tumorigenesis and the development of cancer. In prostate cancer, ID4 is epigenetically silenced and acts as a tumor suppressor. In normal prostate epithelial cells, ID4 collaborates with androgen receptor (AR) and p53 to exert its tumor suppressor activity. Previous studies have shown that ID4 promotes the tumor suppressive function of AR, whereas loss of ID4 results in tumor promoter activity of AR. A previous study from our lab showed that ectopic ID4 expression in DU145 cells attenuates proliferation and promotes AR expression, suggesting that ID4-dependent AR activity is tumor suppressive. In this study, we examined the effect of ectopic expression of ID4 on the highly malignant prostate cancer cell line PC3. Here we show that stable overexpression of ID4 in PC3 cells leads to increased apoptosis and decreased cell proliferation and migration. In addition, in vivo studies showed a decrease in tumor size and volume of ID4-overexpressing PC3 cells in nude mice. At the molecular level, these changes were associated with increased androgen receptor (AR), p21, and AR-dependent FKBP51 expression. At the mechanistic level, ID4 may regulate the expression or function of AR through specific but yet unknown AR co-regulators that may determine the final outcome of AR function. - Highlights: • ID4 expression induces AR expression in PC3 cells, which generally lack AR. • ID4 expression increased apoptosis and decreased cell proliferation and invasion. • Overexpression of ID4 reduces tumor growth of subcutaneous xenografts in vivo. • ID4 induces p21 and FKBP51 expression, co-factors of AR tumor suppressor activity.

  15. Neuronal populations in the occipital cortex of the blind synchronize to the temporal dynamics of speech

    Science.gov (United States)

    Van Ackeren, Markus Johannes; Barbero, Francesca M; Mattioni, Stefania; Bottini, Roberto

    2018-01-01

    The occipital cortex of early blind individuals (EB) activates during speech processing, challenging the notion of a hard-wired neurobiology of language. But, at what stage of speech processing do occipital regions participate in EB? Here we demonstrate that parieto-occipital regions in EB enhance their synchronization to acoustic fluctuations in human speech in the theta-range (corresponding to syllabic rate), irrespective of speech intelligibility. Crucially, enhanced synchronization to the intelligibility of speech was selectively observed in primary visual cortex in EB, suggesting that this region is at the interface between speech perception and comprehension. Moreover, EB showed overall enhanced functional connectivity between temporal and occipital cortices that are sensitive to speech intelligibility and altered directionality when compared to the sighted group. These findings suggest that the occipital cortex of the blind adopts an architecture that allows the tracking of speech material, and therefore does not fully abstract from the reorganized sensory inputs it receives. PMID:29338838
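
The notion of "synchronization to acoustic fluctuations in the theta-range" can be illustrated with a minimal numpy sketch: band-limit a syllabic-rate envelope and a simulated neural signal to 4-8 Hz and measure how well the neural signal tracks the envelope. All signals, rates, and parameters here are synthetic stand-ins, not the study's analysis pipeline.

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    # Ideal band-pass: zero out FFT components outside [lo, hi] Hz
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, d=1 / fs)
    X[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(X, n=x.size)

fs = 100.0                      # Hz, assumed common sampling rate
t = np.arange(0, 60, 1 / fs)    # 60 s of signal
rng = np.random.default_rng(0)

envelope = np.sin(2 * np.pi * 5 * t)    # 5 Hz ~ syllabic-rate fluctuation
neural = 0.7 * envelope + rng.standard_normal(t.size)  # tracks it + noise

env_theta = bandpass_fft(envelope, fs, 4.0, 8.0)
neu_theta = bandpass_fft(neural, fs, 4.0, 8.0)
tracking = float(np.corrcoef(env_theta, neu_theta)[0, 1])
print(round(tracking, 2))
```

A neural signal that follows the envelope yields a theta-band correlation well above zero; a signal unrelated to the stimulus would hover near zero.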

  16. 75 FR 26701 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Science.gov (United States)

    2010-05-12

    ...] Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities... proposed compensation rates for Interstate TRS, Speech-to-Speech Services (STS), Captioned Telephone... costs reported in the data submitted to NECA by VRS providers. In this regard, document DA 10-761 also...

  17. Predicting automatic speech recognition performance over communication channels from instrumental speech quality and intelligibility scores

    NARCIS (Netherlands)

    Gallardo, L.F.; Möller, S.; Beerends, J.

    2017-01-01

    The performance of automatic speech recognition based on coded-decoded speech heavily depends on the quality of the transmitted signals, determined by channel impairments. This paper examines relationships between speech recognition performance and measurements of speech quality and intelligibility

  18. [Non-speech oral motor treatment efficacy for children with developmental speech sound disorders].

    Science.gov (United States)

    Ygual-Fernandez, A; Cervera-Merida, J F

    2016-01-01

    In the treatment of speech disorders by means of speech therapy, two antagonistic methodological approaches are applied: non-verbal ones, based on oral motor exercises (OME), and verbal ones, which are based on speech processing tasks with syllables, phonemes and words. In Spain, OME programmes are called 'programas de praxias', and are widely used and valued by speech therapists. This paper reviews the studies conducted on the effectiveness of OME-based treatments applied to children with speech disorders, together with the theoretical arguments that could justify, or not, their usefulness. Over the last few decades evidence has been gathered about the lack of efficacy of this approach to treat developmental speech disorders and pronunciation problems in populations without any neurological alteration of motor functioning. The American Speech-Language-Hearing Association has advised against its use taking into account the principles of evidence-based practice. The knowledge gathered to date on motor control shows that the pattern of mobility and its corresponding organisation in the brain are different in speech and other non-verbal functions linked to nutrition and breathing. Neither the studies on their effectiveness nor the arguments based on motor control studies recommend the use of OME-based programmes for the treatment of pronunciation problems in children with developmental language disorders.

  19. Cross-language differences in cue use for speech segmentation

    NARCIS (Netherlands)

    Tyler, M.D.; Cutler, A.

    2009-01-01

    Two artificial-language learning experiments directly compared English, French, and Dutch listeners' use of suprasegmental cues for continuous-speech segmentation. In both experiments, listeners heard unbroken sequences of consonant-vowel syllables, composed of recurring three- and four-syllable

  20. The Self-Identity Protein IdsD Is Communicated between Cells in Swarming Proteus mirabilis Colonies.

    Science.gov (United States)

    Saak, Christina C; Gibbs, Karine A

    2016-12-15

    Proteus mirabilis is a social bacterium that is capable of self (kin) versus nonself recognition. Swarming colonies of this bacterium expand outward on surfaces to centimeter-scale distances due to the collective motility of individual cells. Colonies of genetically distinct populations remain separate, while those of identical populations merge. Ids proteins are essential for this recognition behavior. Two of these proteins, IdsD and IdsE, encode identity information for each strain. These two proteins bind in vitro in an allele-restrictive manner. IdsD-IdsE binding is correlated with the merging of populations, whereas a lack of binding is correlated with the separation of populations. Key questions remained about the in vivo interactions of IdsD and IdsE, specifically, whether IdsD and IdsE bind within single cells or whether IdsD-IdsE interactions occur across neighboring cells and, if so, which of the two proteins is exchanged. Here we demonstrate that IdsD must originate from another cell to communicate identity and that this nonresident IdsD interacts with IdsE resident in the recipient cell. Furthermore, we show that unbound IdsD in recipient cells does not cause cell death and instead appears to contribute to a restriction in the expansion radius of the swarming colony. We conclude that P. mirabilis communicates IdsD between neighboring cells for nonlethal kin recognition, which suggests that the Ids proteins constitute a type of cell-cell communication. We demonstrate that self (kin) versus nonself recognition in P. mirabilis entails the cell-cell communication of an identity-encoding protein that is exported from one cell and received by another. We further show that this intercellular exchange affects swarm colony expansion in a nonlethal manner, which adds social communication to the list of potential swarm-related regulatory factors. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  1. Notes for a Theory of the Infant Poetry

    Directory of Open Access Journals (Sweden)

    Ramón Luis Herrera Rojas

    2017-11-01

    Full Text Available This work presents the essential features of children's poetry from an integrative, interdisciplinary perspective that draws on the contemporary sciences of language and literature. Among the factors identified are: language turning on itself in a phonic-rhythmic materialization whose meanings stimulate ludic action; interculturality fed by the flow of oral tradition; an isotopic density that favors textual coherence; speech that dynamizes narration; the prevalence of concrete imagery over abstraction; and the visual constitution of the childlike lyric speaker. Together these factors tend toward a singular relation with tradition and move among tensions exceptionally resolved in the work of many highly creative poets. Such aspects are exemplified through texts by relevant Spanish and Latin American authors.

  2. Événement, idéologie et utopie

    Directory of Open Access Journals (Sweden)

    Jean-Luc AMALRIC

    2014-12-01

    Full Text Available This paper attempts to sketch out the hypothesis that the Ricœurian conception of a dynamic mediatization of the contradictions of the social imaginary presupposes an original correlation between ideology and utopia, which can itself be understood only from the event that institutes a constitutive social imaginary. The first part of the paper marks out the specificity of the Ricœurian theory of ideology and utopia in terms of "imaginative practices", underlining the determining influence of Jacques Ellul's theses on ideology. The second part develops a regressive argument, starting from the Ricœurian re-appropriation of Mannheim's dialectical conception of ideology and utopia and leading back to the idea of an event foundation for these two opposed forms of the social imaginary. Keywords: Event, Ideology, Utopia, Ellul, Mannheim.

  3. The functional anatomy of speech perception: Dorsal and ventral processing pathways

    Science.gov (United States)

    Hickok, Gregory

    2003-04-01

    Drawing on recent developments in the cortical organization of vision, and on data from a variety of sources, Hickok and Poeppel (2000) have proposed a new model of the functional anatomy of speech perception. The model posits that early cortical stages of speech perception involve auditory fields in the superior temporal gyrus bilaterally (although asymmetrically). This cortical processing system then diverges into two broad processing streams, a ventral stream, involved in mapping sound onto meaning, and a dorsal stream, involved in mapping sound onto articulatory-based representations. The ventral stream projects ventrolaterally toward inferior posterior temporal cortex which serves as an interface between sound and meaning. The dorsal stream projects dorsoposteriorly toward the parietal lobe and ultimately to frontal regions. This network provides a mechanism for the development and maintenance of "parity" between auditory and motor representations of speech. Although the dorsal stream represents a tight connection between speech perception and speech production, it is not a critical component of the speech perception process under ecologically natural listening conditions. Some degree of bi-directionality in both the dorsal and ventral pathways is also proposed. A variety of recent empirical tests of this model have provided further support for the proposal.

  4. 75 FR 54040 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Science.gov (United States)

    2010-09-03

    ...] Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities...; speech-to-speech (STS); pay-per-call (900) calls; types of calls; and equal access to interexchange... of a report, due April 16, 2011, addressing whether it is necessary for the waivers to remain in...

  5. Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems

    Science.gov (United States)

    Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan

    2010-01-01

    A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the statistical nature of signal (i.e., speech) and noise that exists in the spatial and temporal domains. As a result, the automatic speech recognition (ASR) accuracy can be improved to the level at which crewmembers would find the speech interface useful. The developed speech human/machine interface will enable both crewmember usability and operational efficiency. It offers fast data/text entry, small overall size, and light weight. In addition, this design will free the hands and eyes of a suited crewmember. The system components and steps include beam forming/multi-channel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, ASR HMM (Hidden Markov Model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise. When it is used in spacesuits, the rate drops to about 33 percent. With the developed microphone array speech-processing technologies, the performance is improved and the phoneme recognition accuracy rate rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and using speech samples collected from inside spacesuits. In addition, arithmetic complexity models for the major HMM-based ASR components were developed. They can help real-time ASR system designers select appropriate tasks under computational resource constraints.
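    The multichannel front end described above (beamforming plus noise reduction) can be illustrated with a minimal delay-and-sum beamformer, the simplest member of that family. This is a hedged sketch with toy signals and hypothetical delays, not the system's actual implementation:

```python
import numpy as np

def delay_and_sum(signals, delays_samples):
    """Average channels after compensating each one's arrival delay.

    signals: (n_channels, n_samples) array of microphone signals.
    delays_samples: per-channel integer delays (in samples) relative to the source.
    """
    n_ch, _ = signals.shape
    aligned = [np.roll(signals[ch], -delays_samples[ch]) for ch in range(n_ch)]
    return np.mean(aligned, axis=0)

# Toy scene: the same unit pulse reaches three microphones with different delays,
# each corrupted by independent noise. Alignment makes the speech add coherently
# while the uncorrelated noise partially averages out.
rng = np.random.default_rng(0)
pulse = np.zeros(100)
pulse[10] = 1.0
delays = [0, 3, 7]
mics = np.stack([np.roll(pulse, d) + 0.1 * rng.standard_normal(100) for d in delays])
enhanced = delay_and_sum(mics, delays)
```

    In a real array the delays would be estimated from the microphone geometry or by cross-correlation rather than assumed, and the averaged output would then feed the single-channel noise reduction stage.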

  6. Speech therapy in peripheral facial palsy: an orofacial myofunctional approach

    Directory of Open Access Journals (Sweden)

    Hipólito Virgílio Magalhães Júnior

    2009-12-01

    Full Text Available Objective: To delineate the contributions of speech therapy in the rehabilitation of peripheral facial palsy, describing the role of the orofacial myofunctional approach in this process. Methods: A literature review of articles published since 1995, conducted from March to December 2008, based on the characterization of peripheral facial palsy and its relation to speech-language disorders involving orofacial impairments in mobility, speech and chewing, among others. The review prioritized scientific journal articles and specific chapters from the studied period. As inclusion criteria, the literature should contain data on peripheral facial palsy, references to changes in the stomatognathic system, and the orofacial myofunctional approach. We excluded studies that addressed central paralysis, congenital palsy and palsies of non-idiopathic causes. Results: The literature has addressed the contribution of speech therapy to the rehabilitation of facial symmetry, with improvement in the retention of liquids and soft foods during chewing and swallowing. The orofacial myofunctional approach contextualized the role of speech therapy in improving the coordination of speech articulation and in the gain of oral control during chewing and swallowing. Conclusion: Speech therapy contributed to the rehabilitation of peripheral facial palsy by applying the orofacial myofunctional approach to the reestablishment of facial symmetry, working on the functions of the stomatognathic system through orofacial exercises and chewing training in association with articulation training. A greater number of publications is needed in this specific area of the speech therapy profession.

  7. Environmental Contamination of Normal Speech.

    Science.gov (United States)

    Harley, Trevor A.

    1990-01-01

    Environmentally contaminated speech errors (irrelevant words or phrases derived from the speaker's environment and erroneously incorporated into speech) are hypothesized to occur at a high level of speech processing, but with a relatively late insertion point. The data indicate that speech production processes are not independent of other…

  8. Regulation of Id2 expression in EL4 T lymphoma cells overexpressing growth hormone.

    Science.gov (United States)

    Weigent, Douglas A

    2009-01-01

    In previous studies, we have shown that overexpression of growth hormone (GH) in cells of the immune system upregulates proteins involved in cell growth and protects from apoptosis. Here, we report that overexpression of GH in EL4 T lymphoma cells (GHo) also significantly increased levels of the inhibitor of differentiation-2 (Id2). The increase in Id2 was suggested in both Id2 promoter luciferase assays and by Western analysis for Id2 protein. To identify the regulatory elements that mediate transcriptional activation by GH in the Id2 promoter, promoter deletion analysis was performed. Deletion analysis revealed that transactivation involved a 301-132 bp region upstream of the Id2 transcriptional start site. The pattern in the human GHo Jurkat T lymphoma cell line paralleled that found in the mouse GHo EL4 T lymphoma cell line. Significantly less Id2 was detected in the nucleus of GHo EL4 T lymphoma cells compared to vector-alone controls. Although serum increased the levels of Id2 in control vector-alone cells, no difference was found in the total levels of Id2 in GHo EL4 T lymphoma cells treated with or without serum. The increase in Id2 expression in GHo EL4 T lymphoma cells measured by Id2 promoter luciferase expression and Western blot analysis was blocked by the overexpression of a dominant-negative mutant of STAT5. The results suggest that in EL4 T lymphoma cells overexpressing GH, there is an upregulation of Id2 protein that appears to involve STAT protein activity.

  9. Emotionally conditioning the target-speech voice enhances recognition of the target speech under "cocktail-party" listening conditions.

    Science.gov (United States)

    Lu, Lingxi; Bao, Xiaohan; Chen, Jing; Qu, Tianshu; Wu, Xihong; Li, Liang

    2018-05-01

    Under a noisy "cocktail-party" listening condition with multiple people talking, listeners can use various perceptual/cognitive unmasking cues to improve recognition of the target speech against informational speech-on-speech masking. One potential unmasking cue is the emotion expressed in a speech voice, by means of certain acoustical features. However, it was unclear whether emotionally conditioning a target-speech voice that has none of the typical acoustical features of emotions (i.e., an emotionally neutral voice) can be used by listeners for enhancing target-speech recognition under speech-on-speech masking conditions. In this study we examined the recognition of target speech against a two-talker speech masker both before and after the emotionally neutral target voice was paired with a loud female screaming sound that has a marked negative emotional valence. The results showed that recognition of the target speech (especially the first keyword in a target sentence) was significantly improved by emotionally conditioning the target speaker's voice. Moreover, the emotional unmasking effect was independent of the unmasking effect of the perceived spatial separation between the target speech and the masker. Also, (skin conductance) electrodermal responses became stronger after emotional learning when the target speech and masker were perceptually co-located, suggesting an increase of listening efforts when the target speech was informationally masked. These results indicate that emotionally conditioning the target speaker's voice does not change the acoustical parameters of the target-speech stimuli, but the emotionally conditioned vocal features can be used as cues for unmasking target speech.

  10. Direction of Attentional Focus in Biofeedback Treatment for /R/ Misarticulation

    Science.gov (United States)

    McAllister Byun, Tara; Swartz, Michelle T.; Halpin, Peter F.; Szeredi, Daniel; Maas, Edwin

    2016-01-01

    Background: Maintaining an external direction of focus during practice is reported to facilitate acquisition of non-speech motor skills, but it is not known whether these findings also apply to treatment for speech errors. This question has particular relevance for treatment incorporating visual biofeedback, where clinician cueing can direct the…

  11. Multilevel Analysis in Analyzing Speech Data

    Science.gov (United States)

    Guddattu, Vasudeva; Krishna, Y.

    2011-01-01

    The speech produced by human vocal tract is a complex acoustic signal, with diverse applications in phonetics, speech synthesis, automatic speech recognition, speaker identification, communication aids, speech pathology, speech perception, machine translation, hearing research, rehabilitation and assessment of communication disorders and many…

  12. Best practices for the implementation of the REAL ID Act.

    Science.gov (United States)

    2015-10-01

    The REAL ID Act specifies the minimum standards that must be used to produce and issue driver's licenses and identification cards that are REAL ID compliant. Beginning in 2020, if a person does not possess a form of identification that meets REA...

  13. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech

    Science.gov (United States)

    Bremner, Paul; Leonards, Ute

    2016-01-01

    Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated if robot-produced iconic gestures are comprehensible, and are integrated with speech. Robot performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances. PMID:26925010

  14. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech

    Directory of Open Access Journals (Sweden)

    Paul Adam Bremner

    2016-02-01

    Full Text Available Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realised remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated if robot-produced iconic gestures are comprehensible, and are integrated with speech. Robot performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances.

  15. Speech Enhancement by MAP Spectral Amplitude Estimation Using a Super-Gaussian Speech Model

    Directory of Open Access Journals (Sweden)

    Lotter Thomas

    2005-01-01

    Full Text Available This contribution presents two spectral amplitude estimators for acoustical background noise suppression based on maximum a posteriori estimation and super-Gaussian statistical modelling of the speech DFT amplitudes. The probability density function of the speech spectral amplitude is modelled with a simple parametric function, which allows a high approximation accuracy for Laplace- or Gamma-distributed real and imaginary parts of the speech DFT coefficients. Also, the statistical model can be adapted to optimally fit the distribution of the speech spectral amplitudes for a specific noise reduction system. Based on the super-Gaussian statistical model, computationally efficient maximum a posteriori speech estimators are derived, which outperform the commonly applied Ephraim-Malah algorithm.
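    As a sketch of the estimation framework the abstract describes (symbols here are generic assumptions, not the paper's exact notation): the noisy DFT coefficient is modelled as Y = S + N, and the speech spectral amplitude A = |S| receives a parametric super-Gaussian prior, so the MAP amplitude estimate is

```latex
% MAP estimation of the speech spectral amplitude A = |S| from Y = S + N:
\hat{A} = \arg\max_{A}\, p(A \mid Y)
        = \arg\max_{A}\,\bigl[\ln p(Y \mid A) + \ln p(A)\bigr]

% Parametric super-Gaussian amplitude prior (shape parameters \mu, \nu;
% speech standard deviation \sigma_S); suitable choices of \mu and \nu
% approximate Laplace- or Gamma-distributed real/imaginary parts:
p(A) \propto \frac{A^{\mu}}{\sigma_S^{\mu+1}}
             \exp\!\left(-\nu\,\frac{A}{\sigma_S}\right)
```

    Setting the derivative of the log-posterior to zero yields a closed-form estimator in terms of the a priori and a posteriori SNRs, which is what makes the approach computationally efficient compared with a numerical MAP search.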

  16. Strain Map of the Tongue in Normal and ALS Speech Patterns from Tagged and Diffusion MRI.

    Science.gov (United States)

    Xing, Fangxu; Prince, Jerry L; Stone, Maureen; Reese, Timothy G; Atassi, Nazem; Wedeen, Van J; El Fakhri, Georges; Woo, Jonghye

    2018-02-01

    Amyotrophic Lateral Sclerosis (ALS) is a neurological disease that causes death of neurons controlling muscle movements. Loss of speech and swallowing functions is a major impact due to degeneration of the tongue muscles. In speech studies using magnetic resonance (MR) techniques, diffusion tensor imaging (DTI) is used to capture internal tongue muscle fiber structures in three dimensions (3D) in a non-invasive manner. Tagged magnetic resonance images (tMRI) are used to record tongue motion during speech. In this work, we aim to combine information obtained with both MR imaging techniques to compare the functionality characteristics of the tongue between normal and ALS subjects. We first extracted 3D motion of the tongue using tMRI from fourteen normal subjects in speech. The estimated motion sequences were then warped using diffeomorphic registration into the b0 spaces of the DTI data of two normal subjects and an ALS patient. We then constructed motion atlases by averaging all warped motion fields in each b0 space, and computed strain in the line of action along the muscle fiber directions provided by tractography. Strain in line with the fiber directions provides a quantitative map of the potentially active regions of the tongue during speech. Comparison between normal and ALS subjects explores how the volume of compressing tongue tissue changes in speech as the muscles degenerate. The proposed framework provides for the first time a dynamic map of contracting fibers in ALS speech patterns, and has the potential to provide more insight into the detrimental effects of ALS on speech.
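    The key quantity here, strain in the line of action along a fiber direction, is in continuum-mechanics terms the projection of the strain tensor onto the unit fiber vector (f^T E f). A minimal sketch of that projection with toy values, not the authors' tMRI/DTI pipeline:

```python
import numpy as np

def strain_along_fiber(E, f):
    """Project a 3x3 strain tensor E onto a unit fiber direction f (f^T E f)."""
    f = np.asarray(f, dtype=float)
    f = f / np.linalg.norm(f)   # normalize so the result is a pure strain value
    return float(f @ E @ f)

# Toy strain state: 5% extension along x, 2% compression along y, no shear.
E = np.diag([0.05, -0.02, 0.0])
ex = strain_along_fiber(E, [1, 0, 0])  # fiber aligned with x: positive (stretch)
ey = strain_along_fiber(E, [0, 1, 0])  # fiber aligned with y: negative (compression)
```

    In the study's setting E would come from the tMRI-derived motion field at each voxel and f from DTI tractography; negative values along the fiber mark contracting (compressing) tissue.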

  17. Ontwerpen van onderwijs om ‘self-directed learning’ te stimuleren [Designing instruction to foster self-directed learning]

    NARCIS (Netherlands)

    Brand-Gruwel, Saskia

    2010-01-01

    Brand-Gruwel, S. (2010, March). Ontwerpen van onderwijs om ‘self-directed learning’ te stimuleren [Designing instruction to foster self-directed learning]. Keynote presented at the 3rd 4C/ID-conference, Utrecht, The Netherlands.

  18. ID card number detection algorithm based on convolutional neural network

    Science.gov (United States)

    Zhu, Jian; Ma, Hanjie; Feng, Jie; Dai, Leiyan

    2018-04-01

    In this paper, a new detection algorithm based on a convolutional neural network is presented in order to realize fast and convenient ID information extraction in multiple scenarios. The algorithm uses a mobile device running the Android operating system to locate and extract the ID number. It exploits the distinctive color distribution of the ID card to select the appropriate channel component; applies threshold segmentation, noise processing, and morphological processing to binarize the image; and uses image rotation and a projection method for horizontal correction when the image is tilted. Finally, single characters are extracted by the projection method and recognized using a convolutional neural network. Tests show that processing a single ID number image, from extraction to recognition, takes about 80 ms with an accuracy of about 99%, so the method can be applied in real production and living environments.
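    The character-extraction step the abstract describes, splitting the binarized number strip at blank columns of its vertical projection profile, can be sketched as follows (toy binary image; the paper's thresholds and preprocessing are not reproduced):

```python
import numpy as np

def segment_by_projection(binary_img):
    """Split a binarized text strip into character spans using the vertical
    projection profile (count of foreground pixels per column)."""
    ink = binary_img.sum(axis=0) > 0   # True for columns containing ink
    spans, start = [], None
    for x, on in enumerate(ink):
        if on and start is None:
            start = x                  # character begins
        elif not on and start is not None:
            spans.append((start, x))   # character ends at first blank column
            start = None
    if start is not None:
        spans.append((start, len(ink)))  # strip ends inside a character
    return spans

# Toy strip: two "characters" of ink separated by blank columns.
img = np.zeros((5, 12), dtype=int)
img[:, 1:4] = 1
img[:, 6:10] = 1
spans = segment_by_projection(img)   # two spans: columns 1-3 and 6-9
```

    Each span would then be cropped and passed to the CNN classifier; the same column-sum idea, applied row-wise, supports the horizontal (tilt) correction mentioned above.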

  19. Exploring the role of brain oscillations in speech perception in noise: Intelligibility of isochronously retimed speech

    Directory of Open Access Journals (Sweden)

    Vincent Aubanel

    2016-08-01

    Full Text Available A growing body of evidence shows that brain oscillations track speech. This mechanism is thought to maximise processing efficiency by allocating resources to important speech information, effectively parsing speech into units of appropriate granularity for further decoding. However, some aspects of this mechanism remain unclear. First, while periodicity is an intrinsic property of this physiological mechanism, speech is only quasi-periodic, so it is not clear whether periodicity would present an advantage in processing. Second, it is still a matter of debate which aspect of speech triggers or maintains cortical entrainment, from bottom-up cues such as fluctuations of the amplitude envelope of speech to higher level linguistic cues such as syntactic structure. We present data from a behavioural experiment assessing the effect of isochronous retiming of speech on speech perception in noise. Two types of anchor points were defined for retiming speech, namely syllable onsets and amplitude envelope peaks. For each anchor point type, retiming was implemented at two hierarchical levels, a slow time scale around 2.5 Hz and a fast time scale around 4 Hz. Results show that while any temporal distortion resulted in reduced speech intelligibility, isochronous speech anchored to P-centers (approximated by stressed syllable vowel onsets was significantly more intelligible than a matched anisochronous retiming, suggesting a facilitative role of periodicity defined on linguistically motivated units in processing speech in noise.
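    The isochronous retiming used in the experiment can be sketched by mapping the measured anchor points (syllable onsets or envelope peaks) onto an equally spaced grid spanning the same total duration; the speech between anchors would then be time-stretched accordingly. A minimal sketch with hypothetical onset times:

```python
import numpy as np

def isochronous_grid(anchor_times):
    """Replace measured anchor times (e.g. syllable onsets) with equally spaced
    times spanning the same overall duration, i.e. perfect isochrony."""
    t = np.asarray(anchor_times, dtype=float)
    return np.linspace(t[0], t[-1], len(t))

# Hypothetical quasi-periodic syllable onsets (seconds), roughly 4 Hz:
onsets = [0.00, 0.21, 0.55, 0.74, 1.00]
grid = isochronous_grid(onsets)   # equally spaced anchors, 0.25 s apart
```

    A matched anisochronous control, as in the study's comparison condition, would apply the same overall amount of temporal distortion but without landing the anchors on a periodic grid.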

  20. Patients with hippocampal amnesia successfully integrate gesture and speech.

    Science.gov (United States)

    Hilverman, Caitlin; Clough, Sharice; Duff, Melissa C; Cook, Susan Wagner

    2018-06-19

    During conversation, people integrate information from co-speech hand gestures with information in spoken language. For example, after hearing the sentence, "A piece of the log flew up and hit Carl in the face" while viewing a gesture directed at the nose, people tend to later report that the log hit Carl in the nose (information only in gesture) rather than in the face (information in speech). The cognitive and neural mechanisms that support the integration of gesture with speech are unclear. One possibility is that the hippocampus - known for its role in relational memory and information integration - is necessary for integrating gesture and speech. To test this possibility, we examined how patients with hippocampal amnesia and healthy and brain-damaged comparison participants express information from gesture in a narrative retelling task. Participants watched videos of an experimenter telling narratives that included hand gestures that contained supplementary information. Participants were asked to retell the narratives and their spoken retellings were assessed for the presence of information from gesture. For features that had been accompanied by supplementary gesture, patients with amnesia retold fewer of these features overall and fewer retellings that matched the speech from the narrative. Yet their retellings included features that contained information that had been present uniquely in gesture in amounts that were not reliably different from comparison groups. Thus, a functioning hippocampus is not necessary for gesture-speech integration over short timescales. Providing unique information in gesture may enhance communication for individuals with declarative memory impairment, possibly via non-declarative memory mechanisms. Copyright © 2018. Published by Elsevier Ltd.

  1. Infant-Directed Media: An Analysis of Product Information and Claims

    Science.gov (United States)

    Fenstermacher, Susan K.; Barr, Rachel; Salerno, Katherine; Garcia, Amaya; Shwery, Clay E.; Calvert, Sandra L.; Linebarger, Deborah L.

    2010-01-01

    Infant DVDs typically have titles and even company names that imply some educational benefit. It is not known whether these educational claims are reflected in actual content. The present study examined this question. Of 686 claims (across 58 programs) listed on packaging, websites and promotional materials, implicit claims were most frequent…

  2. Ear, Hearing and Speech

    DEFF Research Database (Denmark)

    Poulsen, Torben

    2000-01-01

    An introduction is given to the anatomy and function of the ear, basic psychoacoustic matters (hearing threshold, loudness, masking), the speech signal, and speech intelligibility. The lecture note is written for the course Fundamentals of Acoustics and Noise Control (51001).

  3. REFORMASI PEMAHAMAN TEORI MAQᾹṢID SYARIAH Analisis Pendekatan Sistem Jasser Auda

    Directory of Open Access Journals (Sweden)

    Muhammad Iqbal Fasa

    2016-12-01

    Full Text Available This paper attempts to present Jasser Auda's reformist ideas concerning the theoretical understanding of maqᾱṣid sharia. With his systems approach, Auda criticizes classical maqᾱṣid theory for its hierarchical and narrow mindset: classical maqᾱṣid places its emphasis on protection and preservation, whereas the new maqᾱṣid theory emphasizes development (construction) and rights. Accordingly, Auda developed the concept of human development as the main target of maslahah (public interest). The features of Auda's systems approach are: cognitive nature, interrelatedness, wholeness, openness, multi-dimensionality, and purposefulness. At the end of the discussion, the authors extend Jasser Auda's thought by offering a maqᾱṣid sharia concept in the context of Islamic economics.

  4. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech and music

    Directory of Open Access Journals (Sweden)

    Hwee Ling eLee

    2014-08-01

    Full Text Available This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogues of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past three years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practice fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practice was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and, to a marginally significant degree, to natural speech.

  5. From Gesture to Speech

    Directory of Open Access Journals (Sweden)

    Maurizio Gentilucci

    2012-11-01

    Full Text Available One of the major problems concerning the evolution of human language is to understand how sounds became associated with meaningful gestures. It has been proposed that the circuit controlling gestures and speech evolved from a circuit involved in the control of arm and mouth movements related to ingestion. This circuit contributed to the evolution of spoken language, moving from a system of communication based on arm gestures. The discovery of mirror neurons has provided strong support for the gestural theory of speech origin because they offer a natural substrate for the embodiment of language and create a direct link between sender and receiver of a message. Behavioural studies indicate that manual gestures are linked to mouth movements used for syllable emission. Grasping with the hand selectively affected movement of inner or outer parts of the mouth according to syllable pronunciation, and hand postures, in addition to hand actions, influenced the control of mouth grasp and vocalization. Gestures and words are also related to each other. It was found that when producing communicative gestures (emblems), the intention to interact directly with a conspecific was transferred from gestures to words, inducing modification in voice parameters. Transfer effects of the meaning of representational gestures were found on both vocalizations and meaningful words. It has been concluded that the results of our studies suggest the existence of a system relating gesture to vocalization which was the precursor of a more general system reciprocally relating gesture to word.

  6. Soundless Speech/ Wordless Writing: Language and German Silent Cinema

    Directory of Open Access Journals (Sweden)

    Marc Silberman

    2010-12-01

    Full Text Available If language loses its communicative and interpretative functions in direct proportion to the loss of its referential grounding, then the modernist crisis is simultaneously a crisis of its signifying practices. This means that the evolution of the silent cinema is a particularly rich site to examine the problematic relationship of language and image. This essay presents several expressionist films as a specific response to this crisis in order to describe the diverse cinematic forms of resistance to the word, to articulated speech. While some filmmakers developed the silence of the silent film into a “gestural language” that dramatized light and movement, others reproduced the film figures’ silent speech by means of graphically stylized intertitles. My thesis is that the expressionist cinema maintained an idealistic notion of the film as a pure work of art that aimed at a unified composition of all elements and missed the opportunity to explore the rich semiotic possibilities of the new technological medium with its hybrid, synergetic forms and provocative force. Hence, the expressionist cinema marks a transition or even the endpoint of a long process of reflection about the communicative possibilities of language that shifted to a fundamentally new level with the invention of sound cinema at the end of the 1920s.

  7. Interoperability for electronic ID

    OpenAIRE

    Zygadlo, Zuzanna

    2009-01-01

    Electronic Business, including eBanking, eCommerce and eGovernmental services, is today based on a large variety of security solutions, comprising electronic IDs provided by a broad community of Public Key Infrastructure (PKI) vendors. Significant differences in implementations of those solutions introduce a problem of lack of interoperability in electronic business, which have not yet been resolved by standardization and interoperability initiatives based on existing PKI trust models. It i...

  8. Effect of gap detection threshold on consistency of speech in children with speech sound disorder.

    Science.gov (United States)

    Sayyahi, Fateme; Soleymani, Zahra; Akbari, Mohammad; Bijankhan, Mahmood; Dolatshahi, Behrooz

    2017-02-01

    The present study examined the relationship between gap detection threshold and speech error consistency in children with speech sound disorder. The participants were children five to six years of age who were categorized into three groups: typical speech, consistent speech disorder (CSD) and inconsistent speech disorder (ISD). The phonetic gap detection threshold test used for this study is a valid test comprising six syllables with inter-stimulus intervals between 20 and 300 ms. The participants were asked to listen to the recorded stimuli three times and indicate whether they heard one or two sounds. There was no significant difference between the typical and CSD groups (p=0.55), but there were significant differences in performance between the ISD and CSD groups and between the ISD and typical groups (p=0.00). The ISD group discriminated between speech sounds at a higher threshold. Children with inconsistent speech errors could not distinguish speech sounds during time-limited phonetic discrimination. It is suggested that inconsistency in speech reflects inconsistency in auditory perception, caused by a high gap detection threshold. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. The effects of asymmetric directional microphone fittings on acceptance of background noise.

    Science.gov (United States)

    Kim, Jong S; Bryan, Melinda Freyaldenhoven

    2011-05-01

    The effects of asymmetric directional microphone fittings (i.e., an omnidirectional microphone on one ear and a directional microphone on the other) on speech understanding in noise and acceptance of background noise were investigated in 15 full-time hearing aid users. Subjects were fitted binaurally with four directional microphone conditions (i.e., binaural omnidirectional, right asymmetric directional, left asymmetric directional and binaural directional microphones) using Siemens Intuis Directional behind-the-ear hearing aids. Speech understanding in noise was assessed using the Hearing in Noise Test, and acceptance of background noise was assessed using the Acceptable Noise Level procedure. Speech was presented from 0° while noise was presented from 180° azimuth. The results revealed that speech understanding in noise improved when using asymmetric directional microphones compared to binaural omnidirectional microphone fittings and was not significantly hindered compared to binaural directional microphone fittings. The results also revealed that listeners accepted more background noise when fitted with asymmetric directional microphones as compared to binaural omnidirectional microphones. Lastly, the results revealed that the acceptance of noise was further increased for the binaural directional microphones when compared to the asymmetric directional microphones, maximizing listeners' willingness to accept background noise in the presence of noise. Clinical implications will be discussed.

  10. Speech Perception as a Multimodal Phenomenon

    OpenAIRE

    Rosenblum, Lawrence D.

    2008-01-01

    Speech perception is inherently multimodal. Visual speech (lip-reading) information is used by all perceivers and readily integrates with auditory speech. Imaging research suggests that the brain treats auditory and visual speech similarly. These findings have led some researchers to consider that speech perception works by extracting amodal information that takes the same form across modalities. From this perspective, speech integration is a property of the input information itself. Amodal s...

  11. Associations of Maternal and Infant Testosterone and Cortisol Levels With Maternal Depressive Symptoms and Infant Socioemotional Problems

    Science.gov (United States)

    Cho, June; Su, Xiaogang; Phillips, Vivien; Holditch-Davis, Diane

    2015-01-01

    This study examined the associations of testosterone and cortisol levels with maternal depressive symptoms and infant socioemotional (SE) problems that are influenced by infant gender. A total of 62 mothers and their very-low-birth-weight (VLBW) infants were recruited from a neonatal intensive care unit at a tertiary medical center in the southeast United States. Data were collected at three time points (before 40 weeks’ postmenstrual age [PMA] and at 3 months and 6 months of age corrected for prematurity). Measures included infant medical record review, maternal interview, biochemical assays of salivary hormone levels in mother-VLBW infant pairs, and standard questionnaires. Generalized estimating equations with separate analyses for boys and girls showed that maternal testosterone level was negatively associated with depressive symptoms in mothers of boys, whereas infant testosterone level was negatively associated with maternal report of infant SE problems in girls after controlling for characteristics of mothers and infants and the number of days post birth at saliva collection. Not surprisingly, the SE problems were positively associated with a number of medical complications. Mothers with more depressive symptoms reported that their infants had more SE problems. Mothers with higher testosterone levels reported that girls, but not boys, had fewer SE problems. In summary, high levels of testosterone could have a protective role against maternal depressive symptoms and infant SE problems. Future research needs to be directed toward clinical application of these preliminary results. PMID:25954021

  12. Effects of Maternal Anxiety Disorders on Infant Self-Comforting Behaviors: The Role of Maternal Bonding, Infant Gender and Age.

    Science.gov (United States)

    Müller, Mitho; Tronick, Ed; Zietlow, Anna-Lena; Nonnenmacher, Nora; Verschoor, Stephan; Träuble, Birgit

    We investigated the links between maternal bonding, maternal anxiety disorders, and infant self-comforting behaviors. Furthermore, we looked at the moderating roles of infant gender and age. Our sample (n = 69) comprised 28 mothers with an anxiety disorder (according to DSM-IV criteria) and 41 controls, each with their 2.5- to 8-month-old infant (41 females and 28 males). Infant behaviors were recorded during the Face-to-Face Still-Face paradigm. Maternal bonding was assessed by the Postpartum Bonding Questionnaire. Conditional process analyses revealed that lower maternal bonding partially mediated the association between maternal anxiety disorders and increased self-comforting behaviors, but only in older female infants (over 5.5 months of age). However, considering maternal anxiety disorders without the influence of bonding, older female infants (over 5.5 months of age) showed decreased rates of self-comforting behaviors, while younger male infants (under 3 months of age) showed increased rates in the case of maternal anxiety disorder. The results suggest that older female infants (over 5.5 months of age) are more sensitive to lower maternal bonding in the context of maternal anxiety disorders. Furthermore, the results suggest a different use of self-directed regulation strategies for male and female infants of mothers with anxiety disorders and low bonding, depending on infant age. The results are discussed in the light of gender-specific developmental trajectories. © 2016 S. Karger AG, Basel.

  13. Poor Speech Perception Is Not a Core Deficit of Childhood Apraxia of Speech: Preliminary Findings

    Science.gov (United States)

    Zuk, Jennifer; Iuzzini-Seigel, Jenya; Cabbage, Kathryn; Green, Jordan R.; Hogan, Tiffany P.

    2018-01-01

    Purpose: Childhood apraxia of speech (CAS) is hypothesized to arise from deficits in speech motor planning and programming, but the influence of abnormal speech perception in CAS on these processes is debated. This study examined speech perception abilities among children with CAS with and without language impairment compared to those with…

  14. Automatic ID heat load generation in ANSYS code

    International Nuclear Information System (INIS)

    Wang, Zhibi.

    1992-01-01

    Detailed power density profiles are critical in the execution of a thermal analysis using a finite element (FE) code such as ANSYS. Unfortunately, as yet there is no easy way to directly input precise power profiles into ANSYS. A straightforward way to do this is to hand-calculate the power of each node or element and then type the data into the code. Every time a change is made to the FE model, the data must be recalculated and reentered. One way to solve this problem is to generate a set of discrete data, using another code such as PHOTON2, and curve-fit the data. Using curve-fitted formulae has several disadvantages. It is time consuming because of the need to run a second code to generate the data, curve-fit them, and check the results. Additionally, because the fits do not generalize to different beamlines or different parameters, the above work must be repeated for each case. And errors in the power profiles due to curve-fitting result in errors in the analysis. To solve the problem once and for all, with the capability to apply to any insertion device (ID), a program for the ID power profile was written in the ANSYS Parametric Design Language (APDL). This program is implemented as an ANSYS command with input parameters of peak magnetic field, deflection parameter, length of ID, and distance from the source. Once the command is issued, all the heat loads are automatically generatedated by the code
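
    The parametric approach this abstract describes (evaluate a power-density profile from a few ID parameters, then turn it into per-node heat loads) can be sketched in Python. This is a schematic stand-in, not the APDL macro: the Gaussian profile, the grid, and all parameter values below are invented for illustration, since the real profile depends on peak field, deflection parameter, device length and source distance.

```python
import math

def gaussian_power_density(x, y, p_total, sigma):
    """Hypothetical bell-shaped ID power-density profile (W/m^2).

    A real insertion-device profile is computed from the beam and
    magnet parameters; a normalized Gaussian is used here purely
    as a stand-in so the bookkeeping can be demonstrated.
    """
    return p_total / (2 * math.pi * sigma**2) * math.exp(
        -(x**2 + y**2) / (2 * sigma**2)
    )

def nodal_heat_loads(nodes, p_total, sigma, cell_area):
    """Assign a heat load (W) to each (x, y) node: density times tributary area."""
    return [gaussian_power_density(x, y, p_total, sigma) * cell_area
            for x, y in nodes]

# Build a uniform grid of nodes covering +/-5 sigma and check that the
# per-node loads sum back to (approximately) the total absorbed power.
sigma, p_total, n = 0.001, 100.0, 101     # 1 mm spread, 100 W, 101x101 grid
half = 5 * sigma
step = 2 * half / (n - 1)
nodes = [(-half + i * step, -half + j * step)
         for i in range(n) for j in range(n)]
loads = nodal_heat_loads(nodes, p_total, sigma, step**2)
print(round(sum(loads), 1))
```

    In an actual APDL implementation the same loop would issue nodal body-force (heat generation) commands instead of collecting values in a list; the point of the sketch is that regenerating the loads after a mesh change is a single re-run, not a re-typing exercise.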

  15. Two-month-old infants at risk for dyslexia do not discriminate |bAk| from |dAk|: A brain-mapping study

    NARCIS (Netherlands)

    van Leeuwen, Theo; Been, Pieter; van Herten, Marieke; Zwarts, Frans; Maassen, Ben; Leij, Aryan van der

    Dyslexics have problems with categorization of speech sounds, in particular when rapid temporal processing is involved such as in formant transitions of stop-consonants. Infants are already sensitive to such auditory features, but here we show that precursors of impaired categorization are already

  16. Shared acoustic codes underlie emotional communication in music and speech-Evidence from deep transfer learning.

    Science.gov (United States)

    Coutinho, Eduardo; Schuller, Björn

    2017-01-01

    Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, in such a way that the communication of specific emotions is achieved, at least to a certain extent, by means of shared acoustic patterns. From an affective-sciences point of view, determining the degree of overlap between the two domains is fundamental to understanding the shared mechanisms underlying this phenomenon. From a machine-learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities for enlarging the amount of data available to develop music and speech emotion recognition systems. In this article, we investigate time-continuous predictions of emotion (arousal and valence) in music and speech, and transfer learning between these domains. We establish a comparative framework including intra-domain (i.e., models trained and tested on the same modality, either music or speech) and cross-domain experiments (i.e., models trained on one modality and tested on the other). In the cross-domain context, we evaluated two strategies: direct transfer between domains, and the contribution of transfer learning techniques (feature-representation transfer based on denoising autoencoders) for reducing the gap between the feature space distributions. Our results demonstrate excellent cross-domain generalisation performance with and without feature-representation transfer in both directions. In the case of music, cross-domain approaches outperformed intra-domain models for valence estimation, whereas for speech intra-domain models achieved the best performance. This is the first demonstration of shared acoustic codes for emotional expression in music and speech in the time-continuous domain.
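
    The feature-representation-transfer idea (corrupt the input, train a network to reconstruct the clean version, then reuse its hidden layer as a representation shared across domains) can be sketched with a minimal NumPy denoising autoencoder. This is an illustrative toy, not the authors' model: the synthetic "features", network size, noise level and training settings below are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "acoustic feature" matrix: 200 frames x 8 features, a synthetic
# stand-in for the real music/speech descriptors used in such studies.
X = rng.standard_normal((200, 8))

# Single-hidden-layer denoising autoencoder.
n_in, n_hid, lr = 8, 4, 0.5
W1 = rng.standard_normal((n_in, n_hid)) * 0.1
b1 = np.zeros(n_hid)
W2 = rng.standard_normal((n_hid, n_in)) * 0.1
b2 = np.zeros(n_in)

def forward(Xn):
    H = np.tanh(Xn @ W1 + b1)          # encoder: hidden representation
    return H, H @ W2 + b2              # decoder: linear reconstruction

def mse(A, B):
    return float(np.mean((A - B) ** 2))

_, X0 = forward(X)
initial_loss = mse(X0, X)

for _ in range(500):
    Xn = X + 0.1 * rng.standard_normal(X.shape)   # corrupt the input
    H, Xr = forward(Xn)
    err = (Xr - X) / X.size                       # d(MSE)/d(Xr) up to factor 2
    gW2 = 2 * (H.T @ err)
    gb2 = 2 * err.sum(axis=0)
    dH = err @ W2.T * (1 - H ** 2)                # backprop through tanh
    gW1 = 2 * (Xn.T @ dH)
    gb1 = 2 * dH.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, Xr = forward(X)
final_loss = mse(Xr, X)
print(final_loss < initial_loss)
```

    After training, `forward(X)[0]` would serve as the transferred feature representation; in a cross-domain setup one would train the autoencoder on one modality and encode the other with it.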

  17. Speech Respiratory Measures in Spastic Cerebral Palsied and Normal Children

    Directory of Open Access Journals (Sweden)

    Hashem Shemshadi

    2007-10-01

    Full Text Available Objective: This research was designed to determine speech respiratory measures in spastic cerebral palsied children versus normal ones, to be used as an applicable tool in speech therapy plans. Materials & Methods: Via a comparative cross-sectional (case-control) study, using directive goal-oriented sampling for cases and convenience sampling for controls, twenty spastic cerebral palsied children and twenty controls, matched for age (5-12 years old) and sex (F=20, M=20), were identified. All inclusion and exclusion criteria were considered through thorough reviews of past medical, clinical and paraclinical records (such as chest X-ray and complete blood counts) to rule out any possible pulmonary and/or systemic disorders. Speech respiratory indices were determined with a respirometer (ST 1-dysphonia), made and normalized by Glasgow University. Obtained data were analyzed by the independent t-test. Results: There were significant differences between the case and control groups for mean tidal volume, phonatory volume and vital capacity at α=0.05, and these values in patients were lower (by 34%) than in normal children (P<0.001). Conclusion: The measures obtained are crucial for speech therapists in any primary rehabilitative speech therapy plan for spastic cerebral palsied children.
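
    The independent t-test used to compare the two groups can be sketched as follows. The pooled-variance formula is the standard one; the sample values are invented for illustration and are not the study's data.

```python
import math

def independent_t(sample_a, sample_b):
    """Pooled-variance independent-samples t statistic for two groups."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    return (mean_a - mean_b) / se

# Hypothetical tidal-volume readings (arbitrary units) for two small groups:
cp_group = [1.0, 2.0, 3.0]
control_group = [4.0, 5.0, 6.0]
print(round(independent_t(cp_group, control_group), 3))
```

    The statistic would then be compared against a t distribution with na + nb − 2 degrees of freedom to obtain the reported P value.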

  18. DIRECTIONS OF USEING ELECTRONIC MEANS IN TEACHING SCIENTIFIC STYLE OF SPEECH

    Directory of Open Access Journals (Sweden)

    Л Б Белоглазова

    2015-12-01

    Full Text Available The article notes that modern human cognitive activity is related to the implementation of information processes by means of information and communication technologies. The author identifies three main directions for the use of electronic media in teaching the scientific style of speech: (1) working with electronic textbooks; (2) searching for scientific literature in electronic libraries; (3) using computer software for content analysis of scientific texts. These directions are analyzed. It is stated that the introduction of electronic means into the educational process should be accompanied by the creation of specialized classrooms and their provision with modern equipment.

  19. Psychometrics and latent structure of the IDS and QIDS with young adult students.

    Science.gov (United States)

    González, David Andrés; Boals, Adriel; Jenkins, Sharon Rae; Schuler, Eric R; Taylor, Daniel

    2013-07-01

    Students and young adults have high rates of suicide and depression, and thus are a population of interest. To date, there is no normative psychometric information on the IDS and QIDS in these populations. Furthermore, there is equivocal evidence on the factor structure and subscales of the IDS. Two samples of young adult students (ns=475 and 1681) were given multiple measures to test the psychometrics and dimensionality of the IDS and QIDS. The IDS, its subscales, and the QIDS had acceptable internal consistencies (αs=.79-.90) and favorable convergent and divergent validity correlations. A three-factor structure and two Rasch-derived subscales best fit the IDS. The samples were collected from one university, which may limit generalizability. The IDS and QIDS are desirable measures of depressive symptoms when studying young adult students. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication-including voice-will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networksOffering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the