WorldWideScience

Sample records for playing analyzing speeches

  1. Multilevel Analysis in Analyzing Speech Data

    Science.gov (United States)

    Guddattu, Vasudeva; Krishna, Y.

    2011-01-01

    The speech produced by human vocal tract is a complex acoustic signal, with diverse applications in phonetics, speech synthesis, automatic speech recognition, speaker identification, communication aids, speech pathology, speech perception, machine translation, hearing research, rehabilitation and assessment of communication disorders and many…

  2. Real time speech formant analyzer and display

    Energy Technology Data Exchange (ETDEWEB)

    Holland, George E. (Ames, IA); Struve, Walter S. (Ames, IA); Homer, John F. (Ames, IA)

    1987-01-01

    A speech analyzer for interpretation of sound includes a sound input which converts the sound into a signal representing the sound. The signal is passed through a plurality of frequency pass filters to derive a plurality of frequency formants. These formants are converted to voltage signals by frequency-to-voltage converters and then are prepared for visual display in continuous real time. Parameters from the inputted sound are also derived and displayed. The display may then be interpreted by the user. The preferred embodiment includes a microprocessor which is interfaced with a television set for displaying the sound formants. The microprocessor software enables the sound analyzer to present a variety of display modes for interpretive and therapeutic use by the user.
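
    The record above describes a filter-bank approach: the signal is split by band-pass filters and the per-band energies indicate where the formants lie. The sketch below is only a rough software analogue of that idea, not the patented hardware; the band widths, filter order, search ranges and the synthetic test frame are all assumptions for illustration.

        # Rough filter-bank formant estimate (assumed bands; not the patented design).
        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def band_energy(frame, fs, lo, hi):
            """Energy of the frame after band-pass filtering between lo and hi Hz."""
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            return float(np.sum(sosfiltfilt(sos, frame) ** 2))

        def estimate_formant(frame, fs, search_lo, search_hi, step=100):
            """Centre of the 200 Hz band with maximal energy inside a search range."""
            best_f, best_e = search_lo, -1.0
            for lo in range(search_lo, search_hi - 200 + 1, step):
                e = band_energy(frame, fs, lo, lo + 200)
                if e > best_e:
                    best_f, best_e = lo + 100, e
            return best_f

        if __name__ == "__main__":
            fs = 8000
            t = np.arange(0, 0.03, 1.0 / fs)
            # Synthetic vowel-like frame with energy near 700 Hz (F1) and 1200 Hz (F2).
            frame = np.sin(2 * np.pi * 700 * t) + 0.6 * np.sin(2 * np.pi * 1200 * t)
            print("F1 ~", estimate_formant(frame, fs, 200, 1000), "Hz")
            print("F2 ~", estimate_formant(frame, fs, 800, 2500), "Hz")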

  3. A Role-Playing Exercise for Analyzing Intercultural Communication.

    Science.gov (United States)

    Lehman, Carol M.; Taylor, G. Stephen

    1994-01-01

    Presents a business communication scenario for students to role-play and analyze, using a systematic process for identifying communication roadblocks occurring on the job, especially when communicating with people from diverse backgrounds and other countries. Discusses ways to improve the communication process to enable students to develop…

  4. Vocal tract resonances in speech, singing, and playing musical instruments.

    Science.gov (United States)

    Wolfe, Joe; Garnier, Maëva; Smith, John

    2009-01-01

    In both the voice and musical wind instruments, a valve (vocal folds, lips, or reed) lies between an upstream and a downstream duct: trachea and vocal tract for the voice; vocal tract and bore for the instrument. Examining the structural similarities and functional differences gives insight into their operation and the duct-valve interactions. In speech and singing, vocal tract resonances usually determine the spectral envelope and usually have a smaller influence on the operating frequency. The resonances are important not only for the phonemic information they produce, but also because of their contribution to voice timbre, loudness, and efficiency. The role of the tract resonances is usually different in brass and some woodwind instruments, where they modify and to some extent compete or collaborate with resonances of the instrument to control the vibration of a reed or the player's lips, and/or the spectrum of air flow into the instrument. We give a brief overview of oscillator mechanisms and vocal tract acoustics. We discuss recent and current research on how the acoustical resonances of the vocal tract are involved in singing and the playing of musical wind instruments. Finally, we compare techniques used in determining tract resonances and suggest some future developments.

  5. Smile Analyzer: A Software Package for Analyzing the Characteristics of the Speech and Smile

    Directory of Open Access Journals (Sweden)

    Farzin Heravi

    2012-09-01

    Full Text Available Taking into account the factors related to lip-tooth relationships in orthodontic diagnosis and treatment planning is of prime importance. Manual quantitative analysis of facial parameters on photographs during smile and speech is a difficult and time-consuming job. Since there is no comprehensive and user-friendly software package, we developed a software program called "Smile Analyzer" in the Department of Orthodontics of Mashhad Faculty of Dentistry for measuring the parameters related to lip-tooth relationships and other facial landmarks on the photographs taken during various facial expressions. The software was designed using Visual Basic .NET, and ADO.NET was used for developing its Microsoft Access database. The program runs on Microsoft Windows. It is capable of analyzing many parameters or variables in many patients' photographs, although 19 of the more common variables are predefined as a default list of variables. When all variables are measured or calculated, a report can be generated and saved in either PDF or MS Excel format. Data are readily transferable to statistical software like SPSS for Windows.

  6. Smile Analyzer: A Software Package for Analyzing the Characteristics of the Speech and Smile

    Directory of Open Access Journals (Sweden)

    Roozbeh Rashed

    2013-01-01

    Full Text Available Taking into account the factors related to lip-tooth relationships in orthodontic diagnosis and treatment planning is of prime importance. Manual quantitative analysis of facial parameters on photographs during smile and speech is a difficult and time-consuming job. Since there is no comprehensive and user-friendly software package, we developed a software program called "Smile Analyzer" in the Department of Orthodontics of Mashhad Faculty of Dentistry for measuring the parameters related to lip-tooth relationships and other facial landmarks on the photographs taken during various facial expressions. The software was designed using Visual Basic .NET, and ADO.NET was used for developing its Microsoft Access database. The program runs on Microsoft Windows. It is capable of analyzing many parameters or variables in many patients' photographs, although 19 of the more common variables are predefined as a default list of variables. When all variables are measured or calculated, a report can be generated and saved in either PDF or MS Excel format. Data are readily transferable to statistical software like SPSS for Windows.

  7. Five analogies between a King's Speech treatment and contemporary play therapies.

    Science.gov (United States)

    Terr, Lenore C

    2012-01-01

    Psychiatric patients frequently respond positively to play therapy, which may rely on psychoanalytic, Jungian, cognitive-behavioral, familial, school-based, or other theories. I wished to determine if there were unifying principles that tie together these various types of play treatments. The fact-based film, The King's Speech, vividly illustrates play utilized by Lionel Logue in his speech treatment (1926-1939) of the future King of England. In the film I found five analogies to the play therapy I employ in office practice. The play scenes in The King's Speech point to five unifying principles among contemporary play therapies: (1) the crucial nature of the relationship, (2) the centrality of having fun, (3) the occasional reliance on others, (4) the interjection of pithy talk, and (5) the usefulness of a little drama. No matter what theory a play therapist subscribes to, these five unifying principles should be kept in mind during treatment.

  8. Park Play: a picture description task for assessing childhood motor speech disorders.

    Science.gov (United States)

    Patel, Rupal; Connaghan, Kathryn

    2014-08-01

    The purpose of this study was to develop a picture description task for eliciting connected speech from children with motor speech disorders. The Park Play scene is a child-friendly picture description task aimed at augmenting current assessment protocols for childhood motor speech disorders. The design process included a literature review to: (1) establish optimal design features for child assessment, (2) identify a set of evidence-based speech targets specifically tailored to tax the motor speech system, and (3) enhance current assessment tools. To establish proof of concept, five children (ages 4;3-11;1) with dysarthria or childhood apraxia of speech were audio-recorded while describing the Park Play scene. Feedback from the feasibility test informed iterative design modifications. Descriptive, segmental, and prosodic analyses revealed the task was effective in eliciting desired targets in a connected speech sample, thereby yielding additional information beyond the syllables, words, and sentences generally elicited through imitation during the traditional motor speech examination. Further discussion includes approaches to adapt the task for a variety of clinical needs.

  9. Low-income fathers' speech to toddlers during book reading versus toy play.

    Science.gov (United States)

    Salo, Virginia C; Rowe, Meredith L; Leech, Kathryn A; Cabrera, Natasha J

    2016-11-01

    Fathers' child-directed speech across two contexts was examined. Father-child dyads from sixty-nine low-income families were videotaped interacting during book reading and toy play when children were 2;0. Fathers used more diverse vocabulary and asked more questions during book reading while their mean length of utterance was longer during toy play. Variation in these specific characteristics of fathers' speech that differed across contexts was also positively associated with child vocabulary skill measured on the MacArthur-Bates Communicative Development Inventory. Results are discussed in terms of how different contexts elicit specific qualities of child-directed speech that may promote language use and development.
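
    Two of the measures mentioned above, mean length of utterance (MLU) and vocabulary diversity, are easy to approximate from transcripts. The sketch below is a word-based simplification (MLU is conventionally counted in morphemes, and the study's child vocabulary measure came from the MacArthur-Bates CDI rather than transcripts); the sample utterances are invented.

        def mlu_words(utterances):
            """Mean length of utterance, counted in words rather than morphemes."""
            lengths = [len(u.split()) for u in utterances]
            return sum(lengths) / len(lengths)

        def type_token_ratio(utterances):
            """Distinct words / total words, a rough measure of vocabulary diversity."""
            words = [w.lower() for u in utterances for w in u.split()]
            return len(set(words)) / len(words)

        if __name__ == "__main__":
            sample = ["look at the big dog", "what is that", "the dog runs"]
            print(round(mlu_words(sample), 2), round(type_token_ratio(sample), 2))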

  10. Low-Income Fathers' Speech to Toddlers during Book Reading versus Toy Play

    Science.gov (United States)

    Salo, Virginia C.; Rowe, Meredith L.; Leech, Kathryn A.; Cabrera, Natasha J.

    2016-01-01

    Fathers' child-directed speech across two contexts was examined. Father-child dyads from sixty-nine low-income families were videotaped interacting during book reading and toy play when children were 2;0. Fathers used more diverse vocabulary and asked more questions during book reading while their mean length of utterance was longer during toy…

  11. Analyzing the Politeness of Speech from the Influence of Pragmatic Factors

    Institute of Scientific and Technical Information of China (English)

    陈静

    2009-01-01

    This paper focuses on the pragmatic factors affecting the politeness of speech. Through an analysis of the importance of politeness in speech communication and of the influence of sociolinguistics, psycholinguistics, cultural linguistics and pragmatics on language use, I discuss three main pragmatic factors affecting the politeness of speech: 1. social factors; 2. cultural factors; 3. linguistic factors.

  12. MODELS OF THE SPEECH PORTRAIT AND THE LINGVOPSYCHOLOGICAL PORTRAIT OF DRAMATIC CHARACTERS IN A.P. CHEKHOV'S PLAY "UNCLE VANYA"

    Directory of Open Access Journals (Sweden)

    Jiang Zhiyan

    2016-01-01

    Full Text Available This article addresses the problem of modeling the speech portrait and portraying dramatic characters by studying the lingvopsychological lexicon from the standpoint of lingvopsychology. Although a large number of works on the speech portrait have been published, the speech portrait of the dramatic characters of Chekhov's plays remains poorly studied. Moreover, researchers have not found an effective approach to describing and analyzing the speech portrait of the dramatic character in Chekhov's drama. In our study we therefore first created a model for studying the speech portrait of a Chekhov play (based on the play "Uncle Vanya"), taking linguistic culture and lingvopsychology into account. In our view, the lingvopsychological portrait consists of lingvopsychological lexicons that reflect mental state and mental activity. Taking a psychological object of research, we apply linguistic research methods, carrying out a comparative analysis of the semantic fields that form the lingvopsychological lexicons of mental state and mental activity. The psychosemantic fields manifest oppositions among the semantic fields of the lingvopsychological lexicons of mental state: passive versus active, pessimism versus optimism. In addition, the evaluative-semantic and moral-semantic fields of mental state can reveal the lingvopsychological portrait. Oppositions among the semantic fields of mental activity make up the lingvopsychological lexicons: love ‒ hate, envy; like ‒ bother.

  13. Computerized Method for Diagnosing and Analyzing Speech Articulation Disorder for Malay Language Speakers

    Directory of Open Access Journals (Sweden)

    Mohd. Nizam Mazenan

    2014-06-01

    Full Text Available This study aims to develop a computerized technique that uses speech recognition as a helping tool for early detection in speech therapy diagnosis. Speech disorders can be divided into several categories, not all of which are fully covered in this research; this study proposes a diagnostic method only for patients who suffer from articulation disorder. A set of Malay-language vocabulary has been designed to tackle this issue, covering selected Malay consonants as target words. Ten Malay target words were chosen to test the recognition accuracy of the system, and the samples were taken from real patients at Hospital Sultanah Aminah (HSA), whose speech therapists at the Speech Therapy Center assisted in the clinical trial. The accuracy of the system will help the speech therapist (ST) give an early diagnostic analysis for the patient before the next step is proposed. An early experiment achieved almost 50% correctly recognized samples.

  14. Using Video Modeling to Teach Young Children with Autism Developmentally Appropriate Play and Connected Speech

    Science.gov (United States)

    Scheflen, Sarah Clifford; Freeman, Stephanny F. N.; Paparella, Tanya

    2012-01-01

    Four children with autism were taught play skills through the use of video modeling. Video instruction was used to model play and appropriate language through a developmental sequence of play levels integrated with language techniques. Results showed that children with autism could successfully use video modeling to learn how to play appropriately…

  15. Ibsen's Plays in China And Their Ethical Value: A Speech at the Closing Ceremony of the Third International Ibsen Conference in China

    Institute of Scientific and Technical Information of China (English)

    Nie Zhenzhao

    2005-01-01

    This is a speech delivered at the closing ceremony of The Third International Ibsen Conference in China. It introduces some of the scholars studying Ibsen in China and their work, and takes A Doll's House as an example for an ethical analysis. The speech singles out Professor Wang Zhongxiang, Professor Kwok-kan Tam, Professor Knut Brynhildsvoll and others, evaluating their studies as evidence of the achievements of Ibsen studies in the New Period in China. From the perspective of ethical literary criticism, the speech also analyses A Doll's House as a moral play that raises moral questions, and concludes that Ibsen's so-called social problem plays are in fact ethical problem plays.

  16. Analyzing Members' Motivations to Participate in Role-Playing and Self-Expression Based Virtual Communities

    Science.gov (United States)

    Lee, Young Eun; Saharia, Aditya

    With the rapid growth of computer-mediated communication technologies in the last two decades, various types of virtual communities have emerged. Some communities provide a role-playing arena, enabled by avatars, while others provide an arena for expressing and promoting detailed personal profiles to enhance members' offline social networks. Because these virtual communities have different foci, different factors motivate members to participate in them. In this study, we examine differences in members' motivations to participate in role-playing versus self-expression based virtual communities. To achieve this goal, we apply the Wang and Fesenmaier (2004) framework, which explains members' participation in terms of their functional, social, psychological, and hedonic needs. The primary contributions of this study are twofold: first, it demonstrates differences between role-playing and self-expression based communities; second, it provides a comprehensive framework describing members' motivation to participate in virtual communities.

  17. Analyzing children games played in Konya during republic era from different variables

    Directory of Open Access Journals (Sweden)

    Abdükadir Kabadayı

    2006-12-01

    Full Text Available Play is what young children are involved in for most of their waking hours. Through play, they integrate all their knowledge and skills. Play serves a number of functions for the young child: it aids in developing problem-solving skills; promotes social and cognitive competence; aids in the development of the distinction between reality and fantasy; promotes curiosity; helps communication, attention span, self-control, social language and literacy skills; provides a vehicle for the adult to learn how children view the world; and can be therapeutic. In this research, the individual and team games that children have played in the centre of Konya and its central towns since the declaration of the Turkish Republic were identified and examined through interviews with 24 people who were born and have lived in Konya. Nearly 32 individual and team games were documented one by one from these living sources, and their contributions to child education and development were evaluated statistically.

  18. Analyzing the Learning Process of an Online Role-Playing Discussion Activity

    Science.gov (United States)

    Hou, Huei-Tse

    2012-01-01

    Instructional activities based on online discussion strategies have gained prevalence in recent years. Within this context, a crucial research topic is to design innovative and appropriate online discussion strategies that assist learners in attaining a deeper level of interaction and higher cognitive skills. By analyzing the process of online…

  19. Critical Thinking Process in English Speech

    Institute of Scientific and Technical Information of China (English)

    WANG Jia-li

    2016-01-01

    With the development of mass media, English speech has become an important way for international cultural exchange in the context of globalization. Whether it is a political speech, a motivational speech, or an ordinary public speech, the wisdom and charm of critical thinking are always given full play. This study analyzes the cultivation of critical thinking in English speech with the aid of representative examples, which is significant for cultivating college students' critical thinking as well as developing their critical thinking skills in English speech.

  1. Speech Development

    Science.gov (United States)

    ... The speech-language pathologist should consistently assess your child’s speech and language development, as well as screen for hearing problems (with ... and caregivers play a vital role in a child’s speech and language development. It is important that you talk to your ...

  2. How many mechanisms are needed to analyze speech? A connectionist simulation of structural rule learning in artificial language acquisition.

    Science.gov (United States)

    Laakso, Aarre; Calvo, Paco

    2011-01-01

    Some empirical evidence in the artificial language acquisition literature has been taken to suggest that statistical learning mechanisms are insufficient for extracting structural information from an artificial language. According to the more than one mechanism (MOM) hypothesis, at least two mechanisms are required in order to acquire language from speech: (a) a statistical mechanism for speech segmentation; and (b) an additional rule-following mechanism in order to induce grammatical regularities. In this article, we present a set of neural network studies demonstrating that a single statistical mechanism can mimic the apparent discovery of structural regularities, beyond the segmentation of speech. We argue that our results undermine one argument for the MOM hypothesis. Copyright © 2011 Cognitive Science Society, Inc.

  3. The influence of maternal language responsiveness on the expressive speech production of children with autism spectrum disorders: a microanalysis of mother-child play interactions.

    Science.gov (United States)

    Walton, Katherine M; Ingersoll, Brooke R

    2015-05-01

    Adult responsiveness is related to language development both in young typically developing children and in children with autism spectrum disorders, such that parents who use more responsive language with their children have children who develop better language skills over time. This study used a micro-analytic technique to examine how two facets of maternal utterances, relationship to child focus of attention and degree of demandingness, influenced the immediate use of appropriate expressive language of preschool-aged children with autism spectrum disorders (n = 28) and toddlers with typical development (n = 16) within a naturalistic mother-child play session. Mothers' use of follow-in demanding language was most likely to elicit appropriate expressive speech in both children with autism spectrum disorders and children with typical development. For children with autism spectrum disorders, but not children with typical development, mothers' use of orienting cues conferred an additional benefit for expressive speech production. These findings are consistent with the naturalistic behavioral intervention philosophy and suggest that following a child's lead while prompting for language is likely to elicit speech production in children with autism spectrum disorders and children with typical development. Furthermore, using orienting cues may help children with autism spectrum disorders to verbally respond. © The Author(s) 2014.

  4. Do syllables play a role in German speech perception? Behavioural and electrophysiological data from primed lexical decision

    Directory of Open Access Journals (Sweden)

    Heidrun Bien

    2015-01-01

    Full Text Available We investigated the role of the syllable during speech processing in German, in an auditory-auditory fragment priming study with lexical decision and simultaneous EEG registration. Spoken fragment primes either shared segments with the spoken targets (related) or not (unrelated), and this segmental overlap either corresponded to the first syllable of the target (e.g., /teis/ - /teisti/) or not (e.g., /teis/ - /teistləs/). Similar prime conditions applied for word and pseudoword targets. Lexical decision latencies revealed facilitation due to related fragments that corresponded to the first syllable of the target (/teis/ - /teisti/) in some but not all (/teist/ - /teistləs/) conditions. Despite segmental overlap, there were no positive effects for related fragments that mismatched the first syllable. No facilitation was observed for pseudowords. The EEG analyses showed a consistent effect of relatedness, independent of syllabic match, from 200-500 ms, including the P350 and N400 windows. Moreover, this held for words and pseudowords alike. The only specific effect of syllabic match for related prime-target pairs was observed in the time window from 200-300 ms. We discuss the nature and potential origin of these effects, and their relevance for speech processing and lexical access.

  5. Infants' Background Television Exposure during Play: Negative Relations to the Quantity and Quality of Mothers' Speech and Infants' Vocabulary Acquisition

    Science.gov (United States)

    Masur, Elise Frank; Flynn, Valerie; Olson, Janet

    2016-01-01

    Research on immediate effects of background television during mother-infant toy play shows that an operating television in the room disrupts maternal communicative behaviors crucial for infants' vocabulary acquisition. This study is the first to examine associations between frequent background TV/video exposure during mother-infant toy play at…

  6. A Case Study: Analyzing City Vitality with Four Pillars of Activity-Live, Work, Shop, and Play.

    Science.gov (United States)

    Griffin, Matt; Nordstrom, Blake W; Scholes, Jon; Joncas, Kate; Gordon, Patrick; Krivenko, Elliott; Haynes, Winston; Higdon, Roger; Stewart, Elizabeth; Kolker, Natali; Montague, Elizabeth; Kolker, Eugene

    2016-03-01

    This case study evaluates and tracks vitality of a city (Seattle), based on a data-driven approach, using strategic, robust, and sustainable metrics. This case study was collaboratively conducted by the Downtown Seattle Association (DSA) and CDO Analytics teams. The DSA is a nonprofit organization focused on making the city of Seattle and its Downtown a healthy and vibrant place to Live, Work, Shop, and Play. DSA primarily operates through public policy advocacy, community and business development, and marketing. In 2010, the organization turned to CDO Analytics ( cdoanalytics.org ) to develop a process that can guide and strategically focus DSA efforts and resources for maximal benefit to the city of Seattle and its Downtown. CDO Analytics was asked to develop clear, easily understood, and robust metrics for a baseline evaluation of the health of the city, as well as for ongoing monitoring and comparisons of the vitality, sustainability, and growth. The DSA and CDO Analytics teams strategized on how to effectively assess and track the vitality of Seattle and its Downtown. The two teams filtered a variety of data sources, and evaluated the veracity of multiple diverse metrics. This iterative process resulted in the development of a small number of strategic, simple, reliable, and sustainable metrics across four pillars of activity: Live, Work, Shop, and Play. Data during the 5 years before 2010 were used for the development of the metrics and model and its training, and data during the 5 years from 2010 and on were used for testing and validation. This work enabled DSA to routinely track these strategic metrics, use them to monitor the vitality of Downtown Seattle, prioritize improvements, and identify new value-added programs. As a result, the four-pillar approach became an integral part of the data-driven decision-making and execution of the Seattle community's improvement activities. The approach described in this case study is actionable, robust, inexpensive

  7. Lope and the Battle-Speech

    Directory of Open Access Journals (Sweden)

    Juan Carlos Iglesias-Zoido

    2013-05-01

    Full Text Available This article analyzes the way in which Lope de Vega conceives in his theater the pre-battle harangue, the most characteristic speech in ancient and renaissance historiography. Having this aim in mind, I have analyzed the role played by this type of speech in a group of plays dealing with historical and military subjects. These plays were written in a period when Lope was particularly interested in historical issues: La Santa Liga (1598-1603), Arauco domado (1599), El asalto de Mastrique (1595-1606) and Los Guanches de Tenerife (1604-1606).

  8. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions which use repeaters to compensate for the loss in signal strength on transmission links also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term applicable to these techniques, often used interchangeably with speech coding, is voice coding. This term is more generic in the sense that the
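
    As a concrete illustration of the waveform-coding branch described above, the sketch below implements G.711-style mu-law companding, the classic 8-bit telephone speech code. The toy sine segment stands in for a real speech frame; the example is ours, not part of the record.

        import numpy as np

        MU = 255.0  # companding constant used by G.711 mu-law

        def mulaw_encode(x):
            """Samples in [-1, 1] -> 8-bit codes (0..255)."""
            y = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
            return np.round((y + 1) / 2 * 255).astype(np.uint8)

        def mulaw_decode(code):
            """8-bit codes back to samples in [-1, 1]."""
            y = code.astype(np.float64) / 255 * 2 - 1
            return np.sign(y) * ((1 + MU) ** np.abs(y) - 1) / MU

        if __name__ == "__main__":
            t = np.linspace(0, 0.02, 320)
            x = 0.3 * np.sin(2 * np.pi * 200 * t)   # toy low-level "speech" segment
            err = x - mulaw_decode(mulaw_encode(x))
            print("max reconstruction error:", float(np.max(np.abs(err))))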

  9. Fifty years after Martin Luther King’s speech, Obama’s gradual approach to political change still needs King’s visionary dream to play against

    OpenAIRE

    Kier, Ruth

    2013-01-01

    Last week saw the 50th anniversary of Martin Luther King’s ‘I have a dream’ speech, which was marked at an event by President Barack Obama. Rune Kier writes that while King’s speech was one which articulated abrupt and revolutionary change to achieve equality against an apparently stagnant establishment, Obama’s rhetoric is that of gradual, hard-won, political change. Despite these differences, King’s speech is still the vision that Obama is striving for.

  10. Speech enhancement

    CERN Document Server

    Benesty, Jacob; Chen, Jingdong

    2006-01-01

    We live in a noisy world! In all applications (telecommunications, hands-free communications, recording, human-machine interfaces, etc.) that require at least one microphone, the signal of interest is usually contaminated by noise and reverberation. As a result, the microphone signal has to be "cleaned" with digital signal processing tools before it is played out, transmitted, or stored. This book is about speech enhancement. Different well-known and state-of-the-art methods for noise reduction, with one or multiple microphones, are discussed. By speech enhancement, we mean not only noise red

  11. The Rhetoric in English Speech

    Institute of Scientific and Technical Information of China (English)

    马鑫

    2014-01-01

    English speech has a very long history and has always been highly valued. People usually give speeches in economic activities, political forums and academic reports to express their opinions and to persuade others. English speech plays a rather important role in English literature, and the distinct theme of a speech owes much to its rhetoric. This paper discusses parallelism, repetition and rhetorical questions in English speech, aiming to help people better appreciate their charm.

  12. Temporal modulations in speech and music.

    Science.gov (United States)

    Ding, Nai; Patel, Aniruddh D; Chen, Lin; Butler, Henry; Luo, Cheng; Poeppel, David

    2017-02-14

    Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25-32 Hz) temporal modulations in sound intensity, and compare the modulation properties of speech and music. We analyze these modulations using over 25 h of speech and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent, modulation spectrum is observed for music, including classical music played by single instruments of different types, symphonic, jazz, and rock. The temporal modulations of speech and music show broad but well-separated peaks around 5 and 2 Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility which should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and its neural processing.
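
    A broadband version of the modulation spectrum described above can be sketched in a few lines: compute the intensity envelope of the signal, Fourier-transform it, and keep the 0.25-32 Hz band. This is a simplification of the published method (which uses a cochlear filter bank over many hours of recordings); the 100 Hz envelope rate and the 5 Hz-modulated noise test signal are assumptions for illustration.

        import numpy as np

        def modulation_spectrum(x, fs, env_fs=100):
            """Power of the slow intensity modulations of x between 0.25 and 32 Hz."""
            hop = fs // env_fs
            n = (len(x) // hop) * hop
            env = np.abs(x[:n]).reshape(-1, hop).mean(axis=1)  # envelope sampled at ~env_fs Hz
            env = env - env.mean()
            power = np.abs(np.fft.rfft(env)) ** 2
            freqs = np.fft.rfftfreq(len(env), d=1.0 / env_fs)
            keep = (freqs >= 0.25) & (freqs <= 32.0)
            return freqs[keep], power[keep]

        if __name__ == "__main__":
            fs = 16000
            t = np.arange(0, 10.0, 1.0 / fs)
            rng = np.random.default_rng(0)
            # Toy signal whose intensity is modulated at ~5 Hz, the peak reported for speech.
            x = (1.0 + 0.8 * np.sin(2 * np.pi * 5 * t)) * rng.standard_normal(t.size)
            freqs, power = modulation_spectrum(x, fs)
            print("dominant modulation:", round(float(freqs[np.argmax(power)]), 2), "Hz")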

  13. Visual speech form influences the speed of auditory speech processing.

    Science.gov (United States)

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2013-09-01

    An important property of visual speech (movements of the lips and mouth) is that it generally begins before auditory speech. Research using brain-based paradigms has demonstrated that seeing visual speech speeds up the activation of the listener's auditory cortex but it is not clear whether these observed neural processes link to behaviour. It was hypothesized that the very early portion of visual speech (occurring before auditory speech) will allow listeners to predict the following auditory event and so facilitate the speed of speech perception. This was tested in the current behavioural experiments. Further, we tested whether the salience of the visual speech played a role in this speech facilitation effect (Experiment 1). We also determined the relative contributions that visual form (what) and temporal (when) cues made (Experiment 2). The results showed that visual speech cues facilitated response times and that this was based on form rather than temporal cues. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. The University and Free Speech

    OpenAIRE

    Grcic, Joseph

    2014-01-01

    Free speech is a necessary condition for the growth of knowledge and the implementation of real and rational democracy. Educational institutions play a central role in socializing individuals to function within their society. Academic freedom is the right to free speech in the context of the university, and tenure, properly interpreted, is a necessary component of protecting academic freedom and free speech.

  15. Public Speech.

    Science.gov (United States)

    Green, Thomas F.

    1994-01-01

    Discusses the importance of public speech in society, noting the power of public speech to create a world and a public. The paper offers a theory of public speech, identifies types of public speech, and types of public speech fallacies. Two ways of speaking of the public and of public life are distinguished. (SM)

  16. Behavioral Signal Processing: Deriving Human Behavioral Informatics From Speech and Language: Computational techniques are presented to analyze and model expressed and perceived human behavior-variedly characterized as typical, atypical, distressed, and disordered-from speech and language cues and their applications in health, commerce, education, and beyond.

    Science.gov (United States)

    Narayanan, Shrikanth; Georgiou, Panayiotis G

    2013-02-01

    The expression and experience of human behavior are complex and multimodal and characterized by individual and contextual heterogeneity and variability. Speech and spoken language communication cues offer an important means for measuring and modeling human behavior. Observational research and practice across a variety of domains from commerce to healthcare rely on speech- and language-based informatics for crucial assessment and diagnostic information and for planning and tracking response to an intervention. In this paper, we describe some of the opportunities as well as emerging methodologies and applications of human behavioral signal processing (BSP) technology and algorithms for quantitatively understanding and modeling typical, atypical, and distressed human behavior with a specific focus on speech- and language-based communicative, affective, and social behavior. We describe the three important BSP components of acquiring behavioral data in an ecologically valid manner across laboratory to real-world settings, extracting and analyzing behavioral cues from measured data, and developing models offering predictive and decision-making support. We highlight both the foundational speech and language processing building blocks as well as the novel processing and modeling opportunities. Using examples drawn from specific real-world applications ranging from literacy assessment and autism diagnostics to psychotherapy for addiction and marital well being, we illustrate behavioral informatics applications of these signal processing techniques that contribute to quantifying higher level, often subjectively described, human behavior in a domain-sensitive fashion.

  17. Speech Problems

    Science.gov (United States)

    ... of your treatment plan may include seeing a speech therapist , a person who is trained to treat speech disorders. How often you have to see the speech therapist will vary — you'll probably start out seeing ...

  18. Aesthetic Play

    DEFF Research Database (Denmark)

    Bang, Jytte Susanne

    2012-01-01

    The present article explores the role of music-related artefacts and technologies in children’s lives. More specifically, it analyzes how four 10- to 11-year-old girls use CDs and DVD games in their music-play activities and which developmental themes and potentials may accrue from such activities … to the children’s complex life-worlds. Further, this leads to an analysis of music-play activities as play with an art-form (music), which includes aesthetic dimensions and gives the music-play activities their character of being aesthetic play. Following Lev Vygotsky’s insight that art is a way of building life...

  19. Information Mining from Product Reviews Based on Part-of-Speech Analysis

    Institute of Scientific and Technical Information of China (English)

    冯秀珍; 郝鹏

    2013-01-01

    Based on an analysis of the parts of speech of the words that express product features and the corresponding evaluations in a corpus, the most common parts of speech for expressing product features and evaluations, and their order of importance, are determined. A set of rules for extracting product features and the corresponding evaluation information is proposed, and a formula for computing the semantic orientation of evaluation sentences is established according to these rules. Experiments show that this method achieves high accuracy in extracting product features and in judging the semantic orientation of the corresponding evaluations. Mining product features and the corresponding evaluation information can provide valuable reference for enterprises' new product development and product recommendation, and serves as an important theoretical basis for subsequent production decisions.
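
    The rule-based idea summarized above (pair feature nouns with nearby opinion adjectives and score the pair from a polarity lexicon, flipping the sign under negation) can be sketched as follows. The extraction rule, the tiny lexicon and the example sentence are illustrative assumptions, not the paper's actual rules or scoring formula.

        # Tiny assumed polarity lexicon and negation list (not taken from the paper).
        POLARITY = {"great": 1.0, "good": 1.0, "poor": -1.0, "blurry": -1.0}
        NEGATIONS = {"not", "never"}

        def extract_and_score(tagged):
            """tagged: list of (word, pos) pairs with Penn-style tags ('NN', 'JJ', 'RB', ...).
            Pair each noun with the nearest following adjective and score it from the
            lexicon, flipping the sign if a negation word occurs between them."""
            triples = []
            for i, (word, pos) in enumerate(tagged):
                if pos != "NN":
                    continue
                for j in range(i + 1, len(tagged)):
                    w, p = tagged[j]
                    if p == "JJ":
                        score = POLARITY.get(w, 0.0)
                        if any(t in NEGATIONS for t, _ in tagged[i:j]):
                            score = -score
                        triples.append((word, w, score))
                        break
            return triples

        if __name__ == "__main__":
            sentence = [("screen", "NN"), ("is", "VBZ"), ("great", "JJ"), ("but", "CC"),
                        ("battery", "NN"), ("is", "VBZ"), ("not", "RB"), ("good", "JJ")]
            # Expected: [('screen', 'great', 1.0), ('battery', 'good', -1.0)]
            print(extract_and_score(sentence))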

  20. Initial consonant deletion in bilingual Spanish-English-speaking children with speech sound disorders.

    Science.gov (United States)

    Fabiano-Smith, Leah; Cuzner, Suzanne Lea

    2017-09-13

    The purpose of this study was to utilize a theoretical model of bilingual speech sound production as a framework for analyzing the speech of bilingual children with speech sound disorders. In order to distinguish speech difference from speech disorder, we examined between-language interaction on initial consonant deletion, an error pattern found cross-linguistically in the speech of children with speech sound disorders. Thirteen monolingual English-speaking and bilingual Spanish-and English-speaking preschoolers with speech sound disorders were audio-recorded during a single word picture-naming task and their recordings were phonetically transcribed. Initial consonant deletion errors were examined both quantitatively and qualitatively. An analysis of cross-linguistic effects and an analysis of phonemic complexity were performed. Monolingual English-speaking children exhibited initial consonant deletion at a significantly lower rate than bilingual children in their Spanish productions; however, no other quantitative differences were found across groups or languages. Qualitative differences yielded between-language interaction in the error patterns of bilingual children. Phonemic complexity appeared to play a role in initial consonant deletion. Evidence from the speech of bilingual children with speech sound disorders supports analysing bilingual speech using a cross-linguistic framework. Both theoretical and clinical implications are discussed.

  1. Changes in breathing while listening to read speech: the effect of reader and speech mode

    Science.gov (United States)

    Rochet-Capellan, Amélie; Fuchs, Susanne

    2013-01-01

    The current paper extends previous work on breathing during speech perception and provides supplementary material regarding the hypothesis that adaptation of breathing during perception “could be a basis for understanding and imitating actions performed by other people” (Paccalin and Jeannerod, 2000). The experiments were designed to test how the differences in reader breathing due to speaker-specific characteristics, or differences induced by changes in loudness level or speech rate influence the listener breathing. Two readers (a male and a female) were pre-recorded while reading short texts with normal and then loud speech (both readers) or slow speech (female only). These recordings were then played back to 48 female listeners. The movements of the rib cage and abdomen were analyzed for both the readers and the listeners. Breathing profiles were characterized by the movement expansion due to inhalation and the duration of the breathing cycle. We found that both loudness and speech rate affected each reader’s breathing in different ways. Listener breathing was different when listening to the male or the female reader and to the different speech modes. However, differences in listener breathing were not systematically in the same direction as reader differences. The breathing of listeners was strongly sensitive to the order of presentation of speech mode and displayed some adaptation in the time course of the experiment in some conditions. In contrast to specific alignments of breathing previously observed in face-to-face dialog, no clear evidence for a listener–reader alignment in breathing was found in this purely auditory speech perception task. The results and methods are relevant to the question of the involvement of physiological adaptations in speech perception and to the basic mechanisms of listener–speaker coupling. PMID:24367344

  2. Changes in breathing while listening to read speech: the effect of reader and speech mode.

    Science.gov (United States)

    Rochet-Capellan, Amélie; Fuchs, Susanne

    2013-01-01

    The current paper extends previous work on breathing during speech perception and provides supplementary material regarding the hypothesis that adaptation of breathing during perception "could be a basis for understanding and imitating actions performed by other people" (Paccalin and Jeannerod, 2000). The experiments were designed to test how the differences in reader breathing due to speaker-specific characteristics, or differences induced by changes in loudness level or speech rate influence the listener breathing. Two readers (a male and a female) were pre-recorded while reading short texts with normal and then loud speech (both readers) or slow speech (female only). These recordings were then played back to 48 female listeners. The movements of the rib cage and abdomen were analyzed for both the readers and the listeners. Breathing profiles were characterized by the movement expansion due to inhalation and the duration of the breathing cycle. We found that both loudness and speech rate affected each reader's breathing in different ways. Listener breathing was different when listening to the male or the female reader and to the different speech modes. However, differences in listener breathing were not systematically in the same direction as reader differences. The breathing of listeners was strongly sensitive to the order of presentation of speech mode and displayed some adaptation in the time course of the experiment in some conditions. In contrast to specific alignments of breathing previously observed in face-to-face dialog, no clear evidence for a listener-reader alignment in breathing was found in this purely auditory speech perception task. The results and methods are relevant to the question of the involvement of physiological adaptations in speech perception and to the basic mechanisms of listener-speaker coupling.

  3. The Native Plasmid pML21 Plays a Role in Stress Tolerance in Enterococcus faecalis ML21, as Analyzed by Plasmid Curing Using Plasmid Incompatibility.

    Science.gov (United States)

    Zuo, Fang-Lei; Chen, Li-Li; Zeng, Zhu; Feng, Xiu-Juan; Yu, Rui; Lu, Xiao-Ming; Ma, Hui-Qin; Chen, Shang-Wu

    2016-02-01

    To investigate the role of the native plasmid pML21 in Enterococcus faecalis ML21's response to abiotic stresses, the plasmid pML21 was cured based on the principle of plasmid incompatibility and segregational instability, generating the E. faecalis mutant strain ML0. The mutant and the wild-type strains were exposed to abiotic stresses: bile salts, low pH, H2O2, ethanol, heat, and NaCl, and their survival rates were measured. We found that curing of pML21 led to reduced stress tolerance in E. faecalis ML0, especially to oxidative and osmotic stress. Complementation analysis suggested that the genes from pML21 played different roles in stress tolerance. The results indicate that pML21 plays a role in E. faecalis ML21's response to abiotic stresses.

  4. Preventive measures in speech and language therapy

    OpenAIRE

    Slokar, Polona

    2014-01-01

    Preventive care plays an important role in speech and language therapy. Through training, a speech and language therapist informs the expert and the general public about his efforts in the field of feeding, speech and language development, as well as about the missing elements that may appear in relation to communication and feeding. A speech and language therapist is also responsible for early detection of irregularities and of those factors which affect speech and language development. To a...

  5. The Stylistic Analysis of Public Speech

    Institute of Scientific and Technical Information of China (English)

    李龙

    2011-01-01

    Public speech is a very important part of our daily life. The ability to deliver a good public speech is something we need to learn and to have, especially in the service sector. This paper attempts to analyze the style of public speech, in the hope of providing inspiration whenever we deliver such a speech.

  6. Conversation, speech acts, and memory.

    Science.gov (United States)

    Holtgraves, Thomas

    2008-03-01

    Speakers frequently have specific intentions that they want others to recognize (Grice, 1957). These specific intentions can be viewed as speech acts (Searle, 1969), and I argue that they play a role in long-term memory for conversation utterances. Five experiments were conducted to examine this idea. Participants in all experiments read scenarios ending with either a target utterance that performed a specific speech act (brag, beg, etc.) or a carefully matched control. Participants were more likely to falsely recall and recognize speech act verbs after having read the speech act version than after having read the control version, and the speech act verbs served as better recall cues for the speech act utterances than for the controls. Experiment 5 documented individual differences in the encoding of speech act verbs. The results suggest that people recognize and retain the actions that people perform with their utterances and that this is one of the organizing principles of conversation memory.

  7. Speech Matters

    DEFF Research Database (Denmark)

    Hasse Jørgensen, Stina

    2011-01-01

    About Speech Matters - Katarina Gregos, the Greek curator's exhibition at the Danish Pavilion, the Venice Biennale 2011.

  8. Sensorimotor Interactions in Speech Learning

    Directory of Open Access Journals (Sweden)

    Douglas M Shiller

    2011-10-01

    Full Text Available Auditory input is essential for normal speech development and plays a key role in speech production throughout the life span. In traditional models, auditory input plays two critical roles: (1) establishing the acoustic correlates of speech sounds that serve, in part, as the targets of speech production, and (2) serving as a source of feedback about a talker's own speech outcomes. This talk will focus on both of these roles, describing a series of studies that examine the capacity of children and adults to adapt to real-time manipulations of auditory feedback during speech production. In one study, we examined sensory and motor adaptation to a manipulation of auditory feedback during production of the fricative "s". In contrast to prior accounts, adaptive changes were observed not only in speech motor output but also in subjects' perception of the sound. In a second study, speech adaptation was examined following a period of auditory-perceptual training targeting the perception of vowels. The perceptual training was found to systematically improve subjects' motor adaptation response to altered auditory feedback during speech production. The results of both studies support the idea that perceptual and motor processes are tightly coupled in speech production learning, and that the degree and nature of this coupling may change with development.

  9. Relationship between speech motor control and speech intelligibility in children with speech sound disorders.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Pukonen, Margit; Goshulak, Debra; Yu, Vickie Y; Kadis, Darren S; Kroll, Robert; Pang, Elizabeth W; De Nil, Luc F

    2013-01-01

    The current study was undertaken to investigate the impact of speech motor issues on the speech intelligibility of children with moderate to severe speech sound disorders (SSD) within the context of the PROMPT intervention approach. The word-level Children's Speech Intelligibility Measure (CSIM), the sentence-level Beginner's Intelligibility Test (BIT) and tests of speech motor control and articulation proficiency were administered to 12 children (3:11 to 6:7 years) before and after PROMPT therapy. PROMPT treatment was provided for 45 min twice a week for 8 weeks. Twenty-four naïve adult listeners aged 22-46 years judged the intelligibility of the words and sentences. For CSIM, each time a recorded word was played to the listeners they were asked to look at a list of 12 words (multiple-choice format) and circle the word while for BIT sentences, the listeners were asked to write down everything they heard. Words correctly circled (CSIM) or transcribed (BIT) were averaged across three naïve judges to calculate percentage speech intelligibility. Speech intelligibility at both the word and sentence level was significantly correlated with speech motor control, but not articulatory proficiency. Further, the severity of speech motor planning and sequencing issues may potentially be a limiting factor in connected speech intelligibility and highlights the need to target these issues early and directly in treatment. The reader will be able to: (1) outline the advantages and disadvantages of using word- and sentence-level speech intelligibility tests; (2) describe the impact of speech motor control and articulatory proficiency on speech intelligibility; and (3) describe how speech motor control and speech intelligibility data may provide critical information to aid treatment planning. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Freedom of Speech as an Academic Discipline.

    Science.gov (United States)

    Haiman, Franklyn S.

    Since its formation, the Speech Communication Association's Committee on Freedom of Speech has played a critical leadership role in course offerings, research efforts, and regional activities in freedom of speech. Areas in which research has been done and in which further research should be carried out include: historical-critical research, in…

  11. Speech Indexing

    NARCIS (Netherlands)

    Ordelman, R.J.F.; Jong, de F.M.G.; Leeuwen, van D.A.; Blanken, H.M.; de Vries, A.P.; Blok, H.E.; Feng, L.

    2007-01-01

    This chapter will focus on the automatic extraction of information from the speech in multimedia documents. This approach is often referred to as speech indexing and it can be regarded as a subfield of audio indexing that also incorporates for example the analysis of music and sounds. If the objecti

  12. Plowing Speech

    OpenAIRE

    Zla ba sgrol ma

    2009-01-01

    This file contains a plowing speech and a discussion about the speech. This collection presents forty-nine audio files including several folk song genres, folktales, and local history from the Sman shad Valley of Sde dge county. World Oral Literature Project.

  13. Perception of Speech Sounds in School-Aged Children with Speech Sound Disorders.

    Science.gov (United States)

    Preston, Jonathan L; Irwin, Julia R; Turcios, Jacqueline

    2015-11-01

    Children with speech sound disorders may perceive speech differently than children with typical speech development. The nature of these speech differences is reviewed with an emphasis on assessing phoneme-specific perception for speech sounds that are produced in error. Category goodness judgment, or the ability to judge accurate and inaccurate tokens of speech sounds, plays an important role in phonological development. The software Speech Assessment and Interactive Learning System, which has been effectively used to assess preschoolers' ability to perform goodness judgments, is explored for school-aged children with residual speech errors (RSEs). However, data suggest that this particular task may not be sensitive to perceptual differences in school-aged children. The need for the development of clinical tools for assessment of speech perception in school-aged children with RSE is highlighted, and clinical suggestions are provided. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  14. The Efficient Coding of Speech: Cross-Linguistic Differences.

    Science.gov (United States)

    Guevara Erra, Ramon; Gervain, Judit

    2016-01-01

    Neural coding in the auditory system has been shown to obey the principle of efficient neural coding. The statistical properties of speech appear to be particularly well matched to the auditory neural code. However, only English has so far been analyzed from an efficient coding perspective. It thus remains unknown whether such an approach is able to capture differences between the sound patterns of different languages. Here, we use independent component analysis to derive information theoretically optimal, non-redundant codes (filter populations) for seven typologically distinct languages (Dutch, English, Japanese, Marathi, Polish, Spanish and Turkish) and relate the statistical properties of these filter populations to documented differences in the speech rhythms (Analysis 1) and consonant inventories (Analysis 2) of these languages. We show that consonant class membership plays a particularly important role in shaping the statistical structure of speech in different languages, suggesting that acoustic transience, a property that discriminates consonant classes from one another, is highly relevant for efficient coding.
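
    The core step of such an analysis, learning a non-redundant filter population with independent component analysis over short signal windows, can be sketched with scikit-learn's FastICA as below. The window length, the number of filters and the Laplacian-noise stand-in for real multi-hour speech corpora are assumptions for illustration.

        import numpy as np
        from sklearn.decomposition import FastICA

        def learn_filters(signal, win=128, n_filters=16, seed=0):
            """Cut the signal into non-overlapping windows and learn ICA filters over them."""
            n = (len(signal) // win) * win
            windows = signal[:n].reshape(-1, win)   # one window per row
            ica = FastICA(n_components=n_filters, max_iter=1000, random_state=seed)
            ica.fit(windows)
            return ica.components_                  # rows are the learned temporal filters

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            toy = rng.laplace(size=200_000)         # stand-in for a recorded speech waveform
            filters = learn_filters(toy)
            print(filters.shape)                    # (16, 128)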

  15. Speech Therapy Prevention in Kindergarten

    Directory of Open Access Journals (Sweden)

    Vašíková Jana

    2017-08-01

    Full Text Available Introduction: This contribution presents the results of a research study focused on speech therapy in kindergartens. This research was realized in Zlín Region. It explains how speech therapy prevention is realized in kindergartens, determines the educational qualifications of teachers for this activity and verifies the quality of the applied methodologies in the daily program of kindergartens. Methods: The empirical part of the study was conducted through qualitative research. For data collection, we used participant observation. We analyzed the research data and presented them verbally, using frequency tables and graphs, which were subsequently interpreted. Results: In this research, 71% of the teachers completed a course of speech therapy prevention, 28% of the teachers received pedagogical training and just 1% of the teachers are clinical speech pathologists. In spite of this, the research data show that, in most kindergartens, the aim of speech therapy prevention is to correct deficiencies in speech and voice. The content of speech therapy prevention is implemented in this direction. Discussion: Awareness of teachers and parents regarding speech therapy prevention in kindergartens. Limitations: This research was implemented in autumn of 2016 in Zlín Region. Research data cannot be generalized to the entire population. We have the ambition to expand this research to other regions next year. Conclusions: Results show that both forms of speech therapy prevention - individual and group - are used. It is also often a combination of both. The aim of the individual form is, in most cases, to prepare a child for cooperation during voice correction. The research also confirmed that most teachers do not have sufficient education in speech therapy. Most of them completed a course of speech therapy as primary prevention educators. The results also show that teachers spend a lot of time on speech therapy prevention in

  16. Play Matters

    DEFF Research Database (Denmark)

    Sicart (Vila), Miguel Angel

    … but not necessarily fun. Play can be dangerous, addictive, and destructive. Along the way, Sicart considers playfulness, the capacity to use play outside the context of play; toys, the materialization of play -- instruments but also play pals; playgrounds, play spaces that enable all kinds of play; beauty, the aesthetics of play through action; political play -- from Maradona's goal against England in the 1986 World Cup to the hacktivist activities of Anonymous; the political, aesthetic, and moral activity of game design; and why play and computers get along so well.

  17. Advanced Persuasive Speaking, English, Speech: 5114.112.

    Science.gov (United States)

    Dade County Public Schools, Miami, FL.

    Developed as a high school quinmester unit on persuasive speaking, this guide provides the teacher with teaching strategies for a course which analyzes speeches from "Vital Speeches of the Day," political speeches, TV commercials, and other types of speeches. Practical use of persuasive methods for school, community, county, state, and…

  18. Speech Evaluation with Special Focus on Children Suffering from Apraxia of Speech

    Directory of Open Access Journals (Sweden)

    Manasi Dixit

    2013-07-01

    Full Text Available Speech disorders are very complicated in individuals suffering from Apraxia of Speech (AOS). In this paper, the pathological cases of speech-disabled children affected with AOS are analyzed. The speech signal samples of children of age between three to eight years are considered for the present study. These speech signals are digitized and enhanced using the Speech Pause Index, Jitter, Skew, and Kurtosis analysis. This analysis is conducted on speech data samples which are concerned with both place of articulation and manner of articulation. The speech disability of pathological subjects was estimated using results of the above analysis.
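
    To make the measures named above concrete, the sketch below computes frame-level skewness, kurtosis, and a crude pause proportion for one recording. It is an illustration only, not the paper's procedure; the file name, frame sizes, and energy threshold are assumptions, and jitter is omitted because it requires pitch-period tracking.

    # Illustrative frame-level statistics for a speech recording.
    import numpy as np
    import soundfile as sf
    from scipy.stats import skew, kurtosis

    x, sr = sf.read("speech.wav")                 # assumed mono recording
    frame = int(0.025 * sr)                       # 25 ms frames
    hop = int(0.010 * sr)                         # 10 ms hop
    frames = np.array([x[i:i + frame] for i in range(0, len(x) - frame, hop)])

    energy = (frames ** 2).mean(axis=1)
    threshold = 0.01 * energy.max()               # arbitrary silence threshold
    pause_ratio = float(np.mean(energy < threshold))   # rough pause-index proxy

    voiced = frames[energy >= threshold].ravel()
    print("skew:", skew(voiced), "kurtosis:", kurtosis(voiced),
          "pause ratio:", pause_ratio)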

  19. Playful Gaming.

    Science.gov (United States)

    Makedon, Alexander

    A philosophical analysis of play and games is undertaken in this paper. Playful gaming, which is shown to be a synthesis of play and games, is utilized as a category for undertaking the examination of play and games. The significance of playful gaming to education is demonstrated through analyses of Plato's, Dewey's, Sartre's, and Marcuse's…

  20. Speak, Move, Play and Learn with Children on the Autism Spectrum: Activities to Boost Communication Skills, Sensory Integration and Coordination Using Simple Ideas from Speech and Language Pathology and Occupational Therapy

    Science.gov (United States)

    Brady, Lois Jean; Gonzalez, America X.; Zawadzki, Maciej; Presley, Corinda

    2012-01-01

    This practical resource is brimming with ideas and guidance for using simple ideas from speech and language pathology and occupational therapy to boost communication, sensory integration, and coordination skills in children on the autism spectrum. Suitable for use in the classroom, at home, and in community settings, it is packed with…

  2. Amharic Speech Recognition for Speech Translation

    OpenAIRE

    Melese, Michael; Besacier, Laurent; Meshesha, Million

    2016-01-01

    The state-of-the-art speech translation can be seen as a cascade of Automatic Speech Recognition, Statistical Machine Translation and Text-To-Speech synthesis. In this study an attempt is made to experiment on Amharic speech recognition for Amharic-English speech translation in the tourism domain. Since there is no Amharic speech corpus, we developed a read-speech corpus of 7.43 hours in the tourism domain. The Amharic speech corpus has been recorded after translating standard Bas...

  3. Aesthetic Play

    DEFF Research Database (Denmark)

    Bang, Jytte Susanne

    2012-01-01

    …to the children’s complex life-worlds. Further, this leads to an analysis of music-play activities as play with an art form (music), which includes aesthetic dimensions and gives the music-play activities their character of being aesthetic play. Following Lev Vygotsky’s insight that art is a way of building life...

  4. Hate speech

    Directory of Open Access Journals (Sweden)

    Anne Birgitta Nilsen

    2014-12-01

    Full Text Available The manifesto of the Norwegian terrorist Anders Behring Breivik is based on the “Eurabia” conspiracy theory. This theory is a key starting point for hate speech amongst many right-wing extremists in Europe, but also has ramifications beyond these environments. In brief, proponents of the Eurabia theory claim that Muslims are occupying Europe and destroying Western culture, with the assistance of the EU and European governments. By contrast, members of Al-Qaeda and other extreme Islamists promote the conspiracy theory “the Crusade” in their hate speech directed against the West. Proponents of the latter theory argue that the West is leading a crusade to eradicate Islam and Muslims, a crusade that is similarly facilitated by their governments. This article presents analyses of texts written by right-wing extremists and Muslim extremists in an effort to shed light on how hate speech promulgates conspiracy theories in order to spread hatred and intolerance. The aim of the article is to contribute to a more thorough understanding of hate speech’s nature by applying rhetorical analysis. Rhetorical analysis is chosen because it offers a means of understanding the persuasive power of speech. It is thus a suitable tool to describe how hate speech works to convince and persuade. The concepts from rhetorical theory used in this article are ethos, logos and pathos. The concept of ethos is used to pinpoint factors that contributed to Osama bin Laden's impact, namely factors that lent credibility to his promotion of the conspiracy theory of the Crusade. In particular, Bin Laden projected common sense, good morals and good will towards his audience. He seemed to have coherent and relevant arguments; he appeared to possess moral credibility; and his use of language demonstrated that he wanted the best for his audience. The concept of pathos is used to define hate speech, since hate speech targets its audience's emotions. In hate speech it is the

  5. TEST-RETEST RELIABILITY OF INDEPENDENT PHONOLOGICAL MEASURES OF 2-YEAR-OLD SPEECH: A PILOT STUDY

    Directory of Open Access Journals (Sweden)

    Katherine Marie WITTLER

    2016-09-01

    Full Text Available Introduction: Within the field of speech-language pathology, many assume commonly used informal speech sound measures are reliable. However, lack of scientific evidence to support this assumption is problematic. Speech-language pathologists often use informal speech sound analyses for establishing baseline behaviors from which therapeutic progress can be measured. Few researchers have examined the test-retest reliability of informal phonological measures when evaluating the speech productions of young children. Clinically, data regarding these measures are critical for facilitating evidence-based decision making for speech-language assessment and treatment. Objectives: The aim of the present study was to identify the evidence base regarding temporal reliability of two such informal speech sound measures, phonetic inventory and word shape analysis, with two-year-old children. Methods: The researchers examined analyses conducted from conversational speech samples taken exactly one week apart for three children 29 to 33 months of age. The videotaped 20-minute play-based conversational samples were completed while the children interacted with their mothers. The samples were then transcribed using the International Phonetic Alphabet (IPA) and analyzed using the two informal measures noted above. Results: Based on visual inspection of the data, the test-retest reliability of initial consonant and consonant cluster productions was unstable between the two conversational samples. However, phonetic inventories for final consonants and word shape analyses were relatively stable over time. Conclusion: Although more data are needed, the results of this study indicate that academic faculty, clinical educators, and practicing speech-language pathologists should be cautious when interpreting informal speech sound analyses based on play-based communication samples of young children.

  6. Speech Intelligibility

    Science.gov (United States)

    Brand, Thomas

    Speech intelligibility (SI) is important for different fields of research, engineering and diagnostics in order to quantify very different phenomena like the quality of recordings, communication and playback devices, the reverberation of auditoria, characteristics of hearing impairment, the benefit of using hearing aids, or combinations of these.

  7. Speech dynamics

    NARCIS (Netherlands)

    Pols, L.C.W.

    2011-01-01

    In order for speech to be informative and communicative, segmental and suprasegmental variation is mandatory. Only this leads to meaningful words and sentences. The building blocks are no stable entities put next to each other (like beads on a string or like printed text), but there are gradual tran

  8. Acoustic differences among casual, conversational, and read speech

    Science.gov (United States)

    Pinnow, DeAnna

    Speech is a complex behavior that allows speakers to use many variations to satisfy the demands connected with multiple speaking environments. Speech research typically obtains speech samples in a controlled laboratory setting using read material, yet anecdotal observations of such speech, particularly from talkers with a speech and language impairment, have identified a "performance" effect in the produced speech which masks the characteristics of impaired speech outside of the lab (Goberman, Recker, & Parveen, 2010). The aim of the current study was to investigate acoustic differences among laboratory read, laboratory conversational, and casual speech through well-defined speech tasks in the laboratory and in talkers' natural environments. Eleven healthy research participants performed lab recording tasks (19 read sentences and a dialogue about their life) and collected natural-environment recordings of themselves over 3-day periods using portable recorders. Segments were analyzed for articulatory, voice, and prosodic acoustic characteristics using computer software and hand counting. The current study results indicate that lab-read speech was significantly different from casual speech: greater articulation range, improved voice quality measures, lower speech rate, and lower mean pitch. One implication of the results is that different laboratory techniques may be beneficial in obtaining speech samples that are more like casual speech, thus making it easier to correctly analyze abnormal speech characteristics with fewer errors.
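
    Two of the acoustic measures compared in this record, mean pitch and pitch variability, can be estimated from a recording as in the hedged sketch below; the file name and pitch-range bounds are assumptions for the example, not the study's settings.

    # Estimate mean pitch and pitch variability for one recording.
    import numpy as np
    import librosa

    y, sr = librosa.load("sample_speech.wav", sr=None)     # assumed mono file
    f0, voiced_flag, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
    f0_voiced = f0[voiced_flag & ~np.isnan(f0)]            # keep voiced frames only

    print("mean pitch (Hz):", f0_voiced.mean())
    print("pitch SD (Hz):", f0_voiced.std())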

  9. The role of the insula in speech and language processing.

    Science.gov (United States)

    Oh, Anna; Duerden, Emma G; Pang, Elizabeth W

    2014-08-01

    Lesion and neuroimaging studies indicate that the insula mediates motor aspects of speech production, specifically, articulatory control. Although it has direct connections to Broca's area, the canonical speech production region, the insula is also broadly connected with other speech and language centres, and may play a role in coordinating higher-order cognitive aspects of speech and language production. The extent of the insula's involvement in speech and language processing was assessed using the Activation Likelihood Estimation (ALE) method. Meta-analyses of 42 fMRI studies with healthy adults were performed, comparing insula activation during performance of language (expressive and receptive) and speech (production and perception) tasks. Both tasks activated bilateral anterior insulae. However, speech perception tasks preferentially activated the left dorsal mid-insula, whereas expressive language tasks activated left ventral mid-insula. Results suggest distinct regions of the mid-insula play different roles in speech and language processing.

  10. On Speech Act Theory in Conversations of the Holy Bible

    Institute of Scientific and Technical Information of China (English)

    YANG Hongya

    2014-01-01

    Speech act theory is an important theory in current pragmatics, which originated with the Oxford philosopher John Langshaw Austin. Speech act theory started from research on the function of everyday language. There are few papers using speech act theory to analyze literary works. The Holy Bible is a literary treasure of human history, so this paper tries to use speech act theory to analyze conversations in the Bible and provide some enlightenment for readers.

  11. Playful Membership

    DEFF Research Database (Denmark)

    Åkerstrøm Andersen, Niels; Pors, Justine Grønbæk

    2014-01-01

    This article studies the implications of current attempts by organizations to adapt to a world of constant change by introducing the notion of playful organizational membership. To this end we conduct a brief semantic history of organizational play and argue that when organizations play, employees are expected to engage in playful exploration of alternative selves. Drawing on Niklas Luhmann's theory of time and decision-making and Gregory Bateson's theory of play, the article analyses three empirical examples of how games play with conceptions of time. We explore how games represent an organizational desire to reach out - not just to the future - but to futures beyond the future presently imaginable. The article concludes that playful membership is membership through which employees are expected to develop a surplus of potential identities and continuously cross boundaries between real and virtual...

  12. Activities to Encourage Speech and Language Development

    Science.gov (United States)

    Activities to Encourage Speech and Language Development (Birth to 2 Years): Encourage your baby ... Play games with your child such as "house." Exchange roles in the family, with you pretending to ...

  13. Speech communications in noise

    Science.gov (United States)

    1984-07-01

    The physical characteristics of speech, the methods of speech masking measurement, and the effects of noise on speech communication are investigated. Topics include the speech signal and intelligibility, the effects of noise on intelligibility, the articulation index, and various devices for evaluating speech systems.
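
    The articulation index mentioned here is, in one common simplified form, a weighted average of band signal-to-noise ratios clipped to the useful dynamic range of speech. The sketch below uses equal band weights and invented SNR values purely for illustration; standardized AI/SII procedures use tabulated band-importance weights.

    # Simplified articulation-index style calculation (illustrative only).
    import numpy as np

    band_snr_db = np.array([18.0, 12.0, 6.0, 0.0, -6.0])       # made-up band SNRs
    contribution = np.clip((band_snr_db + 12.0) / 30.0, 0.0, 1.0)
    articulation_index = contribution.mean()                    # equal weights assumed
    print("approximate articulation index:", articulation_index)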

  14. Playful Literacy

    DEFF Research Database (Denmark)

    Froes, Isabel

    2017-01-01

    …these practices, which compose the taxonomy of tablet play. My contribution lies in identifying and proposing a series of theoretical concepts that complement recent theories related to play and digital literacy studies. The data collected through observations informed some noteworthy aspects, including how … with tablets' physical and digital affordances shape children's digital play. This thesis presents how young children's current practices when playing with tablets inform digital experiences in Denmark and Japan. Through an interdisciplinary lens and a grounded theory approach, I have identified and mapped … vocabulary in children's digital play experiences. These early digital experiences set the rules for the playgrounds and assert digital tablets as twenty-first-century toys, shaping young children's playful literacy…

  15. Speech rate effects on the processing of conversational speech across the adult life span.

    Science.gov (United States)

    Koch, Xaver; Janse, Esther

    2016-04-01

    This study investigates the effect of speech rate on spoken word recognition across the adult life span. Contrary to previous studies, conversational materials with a natural variation in speech rate were used rather than lab-recorded stimuli that are subsequently artificially time-compressed. It was investigated whether older adults' speech recognition is more adversely affected by increased speech rate compared to younger and middle-aged adults, and which individual listener characteristics (e.g., hearing, fluid cognitive processing ability) predict the size of the speech rate effect on recognition performance. In an eye-tracking experiment, participants indicated with a mouse-click which visually presented words they recognized in a conversational fragment. Click response times, gaze, and pupil size data were analyzed. As expected, click response times and gaze behavior were affected by speech rate, indicating that word recognition is more difficult if speech rate is faster. Contrary to earlier findings, increased speech rate affected the age groups to the same extent. Fluid cognitive processing ability predicted general recognition performance, but did not modulate the speech rate effect. These findings emphasize that earlier results of age by speech rate interactions mainly obtained with artificially speeded materials may not generalize to speech rate variation as encountered in conversational speech.

  16. Pretend play.

    Science.gov (United States)

    Weisberg, Deena Skolnick

    2015-01-01

    Pretend play is a form of playful behavior that involves nonliteral action. Although on the surface this activity appears to be merely for fun, recent research has discovered that children's pretend play has connections to important cognitive and social skills, such as symbolic thinking, theory of mind, and counterfactual reasoning. The current article first defines pretend play and then reviews the arguments and evidence for these three connections. Pretend play has a nonliteral correspondence to reality, hence pretending may provide children with practice with navigating symbolic relationships, which may strengthen their language skills. Pretend play and theory of mind reasoning share a focus on others' mental states in order to correctly interpret their behavior, hence pretending and theory of mind may be mutually supportive in development. Pretend play and counterfactual reasoning both involve representing nonreal states of affairs, hence pretending may facilitate children's counterfactual abilities. These connections make pretend play an important phenomenon in cognitive science: Studying children's pretend play can provide insight into these other abilities and their developmental trajectories, and thereby into human cognitive architecture and its development.

  17. Playful Interaction

    DEFF Research Database (Denmark)

    2003-01-01

    The video Playful Interaction describes a future architectural office, and envisions ideas and concepts for playful interactions between people, materials and appliances in a pervasive and augmented working environment. The video both describes existing developments, technologies and designs as well as ideas not yet implemented, such as playful modes of interaction with an augmented ball. Playful Interaction has been used as a hybrid of a vision video and a video prototype (1). Externally the video has been used to visualise our new ideas, and internally the video has also worked to inspire...

  18. Mediatized play

    DEFF Research Database (Denmark)

    Johansen, Stine Liv

    Children’s play must nowadays be understood as a mediatized field in society and culture. Media – understood in a very broad sense – hold considerable explanatory power in describing and understanding the practice of play, since play happens with, through, and inspired by media of different sorts. In this presentation the case of ‘playing soccer’ will be outlined through its different mediated manifestations, including soccer games and programs on TV, computer games, magazines, books, YouTube videos and soccer trading cards.

  19. Play practices and play moods

    DEFF Research Database (Denmark)

    Karoff, Helle Skovbjerg

    2013-01-01

    The aim of this article is to develop a view of play as a relation between play practices and play moods based on an empirical study of children's everyday life and by using Bateson's term of ‘framing’ [(1955/2001). In Steps to an ecology of mind (pp. 75–80). Chicago: University of Chicago Press], Schmidt's notion of ‘commonness’ [(2005). Om respekten. København: Danmarks Pædagogiske Universitets Forlag; (2011). On respect. Copenhagen: Danish School of Education University Press] and Heidegger's term ‘mood’ [(1938/1996). Time and being. Cornwall: Wiley-Blackwell]. Play mood is a state of being in which we are open and ready, both to others and their production of meaning and to new opportunities for producing meaning. This play mood is created when we engage with the world during play practices. The article points out four types of play moods – devotion, intensity, tension and euphorica – which...

  20. Symbolic play and language development.

    Science.gov (United States)

    Orr, Edna; Geva, Ronny

    2015-02-01

    Symbolic play and language are known to be highly interrelated, but the developmental process involved in this relationship is not clear. Three hypothetical paths were postulated to explore how play and language drive each other: (1) direct paths, whereby initiation of basic forms in symbolic action or babbling, will be directly related to all later emerging language and motor outputs; (2) an indirect interactive path, whereby basic forms in symbolic action will be associated with more complex forms in symbolic play, as well as with babbling, and babbling mediates the relationship between symbolic play and speech; and (3) a dual path, whereby basic forms in symbolic play will be associated with basic forms of language, and complex forms of symbolic play will be associated with complex forms of language. We micro-coded 288 symbolic vignettes gathered during a yearlong prospective bi-weekly examination (N=14; from 6 to 18 months of age). Results showed that the age of initiation of single-object symbolic play correlates strongly with the age of initiation of later-emerging symbolic and vocal outputs; its frequency at initiation is correlated with frequency at initiation of babbling, later-emerging speech, and multi-object play in initiation. Results support the notion that a single-object play relates to the development of other symbolic forms via a direct relationship and an indirect relationship, rather than a dual-path hypothesis.

  1. Perceptual learning of interrupted speech.

    Directory of Open Access Journals (Sweden)

    Michel Ruben Benard

    Full Text Available The intelligibility of periodically interrupted speech improves once the silent gaps are filled with noise bursts. This improvement has been attributed to phonemic restoration, a top-down repair mechanism that helps intelligibility of degraded speech in daily life. Two hypotheses were investigated using perceptual learning of interrupted speech. If different cognitive processes played a role in restoring interrupted speech with and without filler noise, the two forms of speech would be learned at different rates and with different perceived mental effort. If the restoration benefit were an artificial outcome of using the ecologically invalid stimulus of speech with silent gaps, this benefit would diminish with training. Two groups of normal-hearing listeners were trained, one with interrupted sentences with the filler noise, and the other without. Feedback was provided with the auditory playback of the unprocessed and processed sentences, as well as the visual display of the sentence text. Training increased the overall performance significantly; however, the restoration benefit did not diminish. The increase in intelligibility and the decrease in perceived mental effort were relatively similar between the groups, implying similar cognitive mechanisms for the restoration of the two types of interruptions. Training effects were generalizable, as both groups also improved their performance with the form of speech they were not trained with, and they were retainable. Due to null results and the relatively small number of participants (10 per group), further research is needed to more confidently draw conclusions. Nevertheless, training with interrupted speech seems to be effective, stimulating participants to more actively and efficiently use the top-down restoration. This finding further implies the potential of this training approach as a rehabilitative tool for hearing-impaired/elderly populations.

  2. The inhibition of stuttering via the presentation of natural speech and sinusoidal speech analogs.

    Science.gov (United States)

    Saltuklaroglu, Tim; Kalinowski, Joseph

    2006-08-14

    Sensory signals containing speech or gestural (articulatory) information (e.g., choral speech) have repeatedly been found to be highly effective inhibitors of stuttering. Sine wave analogs of speech consist of a trio of changing pure tones representative of formant frequencies. They are otherwise devoid of traditional speech cues, yet have proven to evoke consistent linguistic percepts in listeners. Thus, we investigated the potency of sinusoidal speech for inhibiting stuttering. Ten adults who stutter read while listening to (a) forward-flowing natural speech; (b) forward-flowing sinusoid analogs of natural speech; (c) reversed natural speech; (d) reversed sinusoid analogs of natural speech; and (e) a continuous 1000 Hz pure tone. The levels of stuttering inhibition achieved using the sinusoidal stimuli were potent and not significantly different from those achieved using natural speech (approximately 50% in forward conditions and approximately 25% in the reversed conditions), suggesting that the patterns of undulating pure tones are sufficient to endow sinusoidal sentences with 'quasi-gestural' qualities. These data highlight the sensitivity of a specialized 'phonetic module' for extracting gestural information from sensory stimuli. Stuttering inhibition is thought to occur when perceived gestural information facilitates fluent productions via the engagement of mirror neurons (e.g., in Broca's area), which appear to play a crucial role in our ability to perceive and produce speech.
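
    A sine-wave speech analog of the kind used in this study is simply three pure tones whose frequencies follow the first three formant tracks. The sketch below synthesizes such an analog from made-up linear formant ramps; real stimuli would use formant tracks measured from a natural utterance.

    # Synthesize a toy sine-wave speech analog from assumed formant tracks.
    import numpy as np
    import soundfile as sf

    sr, dur = 16000, 1.0
    t = np.arange(int(sr * dur)) / sr
    f1 = np.linspace(300, 800, t.size)        # hypothetical formant tracks (Hz)
    f2 = np.linspace(1200, 1800, t.size)
    f3 = np.linspace(2500, 2700, t.size)

    def tone(freqs):
        # integrate instantaneous frequency to obtain a smooth phase track
        phase = 2 * np.pi * np.cumsum(freqs) / sr
        return np.sin(phase)

    analog = 0.2 * (tone(f1) + 0.5 * tone(f2) + 0.25 * tone(f3))
    sf.write("sinewave_analog.wav", analog, sr)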

  3. Utility of TMS to understand the neurobiology of speech.

    Science.gov (United States)

    Murakami, Takenobu; Ugawa, Yoshikazu; Ziemann, Ulf

    2013-01-01

    According to a traditional view, speech perception and production are processed largely separately in sensory and motor brain areas. Recent psycholinguistic and neuroimaging studies provide novel evidence that the sensory and motor systems dynamically interact in speech processing, by demonstrating that speech perception and imitation share regional brain activations. However, the exact nature and mechanisms of these sensorimotor interactions are not completely understood yet. Transcranial magnetic stimulation (TMS) has often been used in the cognitive neurosciences, including speech research, as a complementary technique to behavioral and neuroimaging studies. Here we provide an up-to-date review focusing on TMS studies that explored speech perception and imitation. Single-pulse TMS of the primary motor cortex (M1) demonstrated a speech specific and somatotopically specific increase of excitability of the M1 lip area during speech perception (listening to speech or lip reading). A paired-coil TMS approach showed increases in effective connectivity from brain regions that are involved in speech processing to the M1 lip area when listening to speech. TMS in virtual lesion mode applied to speech processing areas modulated performance of phonological recognition and imitation of perceived speech. In summary, TMS is an innovative tool to investigate processing of speech perception and imitation. TMS studies have provided strong evidence that the sensory system is critically involved in mapping sensory input onto motor output and that the motor system plays an important role in speech perception.

  4. Utility of TMS to understand the neurobiology of speech

    Directory of Open Access Journals (Sweden)

    Takenobu eMurakami

    2013-07-01

    Full Text Available According to a traditional view, speech perception and production are processed largely separately in sensory and motor brain areas. Recent psycholinguistic and neuroimaging studies provide novel evidence that the sensory and motor systems dynamically interact in speech processing, by demonstrating that speech perception and imitation share regional brain activations. However, the exact nature and mechanisms of these sensorimotor interactions are not completely understood yet. Transcranial magnetic stimulation (TMS) has often been used in the cognitive neurosciences, including speech research, as a complementary technique to behavioral and neuroimaging studies. Here we provide an up-to-date review focusing on TMS studies that explored speech perception and imitation. Single-pulse TMS of the primary motor cortex (M1) demonstrated a speech specific and somatotopically specific increase of excitability of the M1 lip area during speech perception (listening to speech or lip reading). A paired-coil TMS approach showed increases in effective connectivity from brain regions that are involved in speech processing to the M1 lip area when listening to speech. TMS in virtual lesion mode applied to speech processing areas modulated performance of phonological recognition and imitation of perceived speech. In summary, TMS is an innovative tool to investigate processing of speech perception and imitation. TMS studies have provided strong evidence that the sensory system is critically involved in mapping sensory input onto motor output and that the motor system plays an important role in speech perception.

  5. Some articulatory details of emotional speech

    Science.gov (United States)

    Lee, Sungbok; Yildirim, Serdar; Bulut, Murtaza; Kazemzadeh, Abe; Narayanan, Shrikanth

    2005-09-01

    Differences in speech articulation among four emotion types, neutral, anger, sadness, and happiness are investigated by analyzing tongue tip, jaw, and lip movement data collected from one male and one female speaker of American English. The data were collected using an electromagnetic articulography (EMA) system while subjects produced simulated emotional speech. Pitch, root-mean-square (rms) energy and the first three formants were estimated for vowel segments. For both speakers, angry speech exhibited the largest rms energy and largest articulatory activity in terms of displacement range and movement speed. Happy speech is characterized by the largest pitch variability. It has higher rms energy than neutral speech but articulatory activity is rather comparable to, or less than, neutral speech. That is, happy speech is more prominent in voicing activity than in articulation. Sad speech exhibits the longest sentence duration and lower rms energy. However, its articulatory activity is no less than neutral speech. Interestingly, for the male speaker, articulation for vowels in sad speech is consistently more peripheral (i.e., more forwarded displacements) when compared to other emotions. However, this does not hold for the female subject. These and other results will be discussed in detail with associated acoustics and perceived emotional qualities. [Work supported by NIH.]

  6. Method and apparatus for obtaining complete speech signals for speech recognition applications

    Science.gov (United States)

    Abrash, Victor (Inventor); Cesari, Federico (Inventor); Franco, Horacio (Inventor); George, Christopher (Inventor); Zheng, Jing (Inventor)

    2009-01-01

    The present invention relates to a method and apparatus for obtaining complete speech signals for speech recognition applications. In one embodiment, the method continuously records an audio stream comprising a sequence of frames to a circular buffer. When a user command to commence or terminate speech recognition is received, the method obtains a number of frames of the audio stream occurring before or after the user command in order to identify an augmented audio signal for speech recognition processing. In further embodiments, the method analyzes the augmented audio signal in order to locate starting and ending speech endpoints that bound at least a portion of speech to be processed for recognition. At least one of the speech endpoints is located using a Hidden Markov Model.
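
    The core idea described here, keeping recent audio in a circular buffer so that material from just before a user command can still be handed to the recognizer, can be sketched as below. This is only an illustration of the buffering idea, not the patented implementation; frame sizes and the simulated audio source are assumptions.

    # Ring buffer that retains ~0.5 s of audio preceding a "start" command.
    from collections import deque
    import numpy as np

    FRAME_LEN = 160                  # e.g. 10 ms frames at 16 kHz
    PRE_ROLL_FRAMES = 50             # about 0.5 s of pre-command audio

    ring = deque(maxlen=PRE_ROLL_FRAMES)

    def on_new_frame(frame):
        """Called continuously by the audio source; the oldest frame drops off."""
        ring.append(frame)

    def start_recognition():
        """Return buffered pre-command audio to prepend to the live stream."""
        return np.concatenate(list(ring)) if ring else np.zeros(0, dtype=np.float32)

    # simulate a continuous audio stream, then a user command
    for _ in range(200):
        on_new_frame(np.random.randn(FRAME_LEN).astype(np.float32))
    augmented_prefix = start_recognition()       # audio captured before the command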

  7. Going to a Speech Therapist

    Science.gov (United States)

    Going to a Speech Therapist ... therapists (also called speech-language pathologists). What Do Speech Therapists Help With? Speech therapists help people of all ...

  8. Building Searchable Collections of Enterprise Speech Data.

    Science.gov (United States)

    Cooper, James W.; Viswanathan, Mahesh; Byron, Donna; Chan, Margaret

    The study has applied speech recognition and text-mining technologies to a set of recorded outbound marketing calls and analyzed the results. Since speaker-independent speech recognition technology results in a significantly lower recognition rate than that found when the recognizer is trained for a particular speaker, a number of post-processing…

  9. Neural speech recognition: continuous phoneme decoding using spatiotemporal representations of human cortical activity

    Science.gov (United States)

    Moses, David A.; Mesgarani, Nima; Leonard, Matthew K.; Chang, Edward F.

    2016-10-01

    Objective. The superior temporal gyrus (STG) and neighboring brain regions play a key role in human language processing. Previous studies have attempted to reconstruct speech information from brain activity in the STG, but few of them incorporate the probabilistic framework and engineering methodology used in modern speech recognition systems. In this work, we describe the initial efforts toward the design of a neural speech recognition (NSR) system that performs continuous phoneme recognition on English stimuli with arbitrary vocabulary sizes using the high gamma band power of local field potentials in the STG and neighboring cortical areas obtained via electrocorticography. Approach. The system implements a Viterbi decoder that incorporates phoneme likelihood estimates from a linear discriminant analysis model and transition probabilities from an n-gram phonemic language model. Grid searches were used in an attempt to determine optimal parameterizations of the feature vectors and Viterbi decoder. Main results. The performance of the system was significantly improved by using spatiotemporal representations of the neural activity (as opposed to purely spatial representations) and by including language modeling and Viterbi decoding in the NSR system. Significance. These results emphasize the importance of modeling the temporal dynamics of neural responses when analyzing their variations with respect to varying stimuli and demonstrate that speech recognition techniques can be successfully leveraged when decoding speech from neural signals. Guided by the results detailed in this work, further development of the NSR system could have applications in the fields of automatic speech recognition and neural prosthetics.
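
    The decoding step described in this record, combining per-frame phoneme likelihoods with n-gram transition probabilities in a Viterbi search, can be illustrated with the toy decoder below. The frame scores and bigram probabilities are random stand-ins for the LDA and language-model outputs, so this is a sketch of the general technique rather than the NSR system itself.

    # Toy Viterbi decoder over phoneme log-likelihoods and bigram transitions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_frames, n_phones = 20, 5
    loglik = np.log(rng.dirichlet(np.ones(n_phones), size=n_frames))    # frame scores
    logtrans = np.log(rng.dirichlet(np.ones(n_phones), size=n_phones))  # bigram LM

    delta = loglik[0].copy()                        # best score ending in each phone
    backptr = np.zeros((n_frames, n_phones), dtype=int)
    for t in range(1, n_frames):
        cand = delta[:, None] + logtrans            # previous phone -> current phone
        backptr[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0) + loglik[t]

    path = [int(delta.argmax())]
    for t in range(n_frames - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    best_phone_sequence = path[::-1]                # most probable phoneme sequence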

  10. Mobile speech and advanced natural language solutions

    CERN Document Server

    Markowitz, Judith

    2013-01-01

    Mobile Speech and Advanced Natural Language Solutions provides a comprehensive and forward-looking treatment of natural speech in the mobile environment. This fourteen-chapter anthology brings together lead scientists from Apple, Google, IBM, AT&T, Yahoo! Research and other companies, along with academicians, technology developers and market analysts.  They analyze the growing markets for mobile speech, new methodological approaches to the study of natural language, empirical research findings on natural language and mobility, and future trends in mobile speech.  Mobile Speech opens with a challenge to the industry to broaden the discussion about speech in mobile environments beyond the smartphone, to consider natural language applications across different domains.   Among the new natural language methods introduced in this book are Sequence Package Analysis, which locates and extracts valuable opinion-related data buried in online postings; microintonation as a way to make TTS truly human-like; and se...

  11. Playful Organizations

    DEFF Research Database (Denmark)

    Pors, Justine Grønbæk; Åkerstrøm Andersen, Niels

    2015-01-01

    This article explores how organisational play becomes a managerial tool to increase and benefit from undecidability. The article draws on Niklas Luhmann's concept of decision and on Gregory Bateson's theory of play to create a conceptual framework for analysing the relation between decision and undecidability. With an empirical point of departure in Danish public school policy and two concrete examples of games utilised in school development, the article analyses how play is a way for organisations to simultaneously decide and also avoid making a decision, thus keeping flexibility and possibilities intact. In its final sections, the article discusses what happens to conditions of decision-making when organisations do not just see undecidability as a given condition, but as a limited resource indispensable for change and renewal. The article advances discussions of organisational play by exploring...

  12. Speech research

    Science.gov (United States)

    1992-06-01

    Phonology is traditionally seen as the discipline that concerns itself with the building blocks of linguistic messages. It is the study of the structure of sound inventories of languages and of the participation of sounds in rules or processes. Phonetics, in contrast, concerns speech sounds as produced and perceived. Two extreme positions on the relationship between phonological messages and phonetic realizations are represented in the literature. One holds that the primary home for linguistic symbols, including phonological ones, is the human mind, itself housed in the human brain. The second holds that their primary home is the human vocal tract.

  13. Auditory-perceptual learning improves speech motor adaptation in children.

    Science.gov (United States)

    Shiller, Douglas M; Rochon, Marie-Lyne

    2014-08-01

    Auditory feedback plays an important role in children's speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback; however, it is not known whether their capacity for motor learning is limited by immature auditory-perceptual abilities. Here, the link between speech perceptual ability and the capacity for motor learning was explored in two groups of 5- to 7-year-old children who underwent a period of auditory perceptual training followed by tests of speech motor adaptation to altered auditory feedback. One group received perceptual training on a speech acoustic property relevant to the motor task while a control group received perceptual training on an irrelevant speech contrast. Learned perceptual improvements led to an enhancement in speech motor adaptation (proportional to the perceptual change) only for the experimental group. The results indicate that children's ability to perceive relevant speech acoustic properties has a direct influence on their capacity for sensory-based speech motor adaptation.

  14. Group play

    DEFF Research Database (Denmark)

    Tychsen, Anders; Hitchens, Michael; Brolund, Thea

    2008-01-01

    Role-playing games (RPGs) are a well-known game form, existing in a number of formats, including tabletop, live action, and various digital forms. Despite their popularity, empirical studies of these games are relatively rare. In particular there have been few examinations of the effects of the various formats used by RPGs on the gaming experience. This article presents the results of an empirical study, examining how multi-player tabletop RPGs are affected as they are ported to the digital medium. Issues examined include the use of disposition assessments to predict play experience, the effect of group dynamics, the influence of the fictional game characters and the comparative play experience between the two formats. The results indicate that group dynamics and the relationship between the players and their digital characters are integral to the quality of the gaming experience in multiplayer...

  15. Postphenomenological Play

    DEFF Research Database (Denmark)

    Hammar, Emil

    This paper aims to identify an understanding of digital games in virtual environments by using Don Ihde’s (1990) postphenomenological approach to how technology mediates the world to human beings in conjunction with Hans-Georg Gadamer’s (1993) notion of play. Through this tentatively proposed amalgamation of theories I point towards an alternative understanding of the relationship between play and game as not only dialectic, but also as socially and ethically relevant qua the design and implementation of the game as technology.

  16. Speech production, Psychology of

    NARCIS (Netherlands)

    Schriefers, H.J.; Vigliocco, G.

    2015-01-01

    Research on speech production investigates the cognitive processes involved in transforming thoughts into speech. This article starts with a discussion of the methodological issues inherent to research in speech production that illustrates how empirical approaches to speech production must differ fr

  17. Clay Play

    Science.gov (United States)

    Rogers, Liz; Steffan, Dana

    2009-01-01

    This article describes how to use clay as a potential material for young children to explore. As teachers, the authors find that their dialogue about the potential of clay as a learning medium raises many questions: (1) What makes clay so enticing? (2) Why are teachers noticing different play and conversation around the clay table as compared to…

  18. 78 FR 49717 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    ... reasons that STS has not been more widely utilized. Are people with speech disabilities not connected to... COMMISSION 47 CFR Part 64 Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With...

  19. A Software Agent for Speech Abiding Systems

    Directory of Open Access Journals (Sweden)

    R. Manoharan

    2009-01-01

    Full Text Available Problem statement: In order to bring speech into the mainstream of business processes, an efficient digital signal processor is necessary. The Fast Fourier Transform (FFT) and the symmetry of its butterfly structure make hardware implementation easier. With the proposed DSP and software, established together as a system named here the “Speech Abiding System” (SAS), a software agent is built which involves the digital representation of speech signals and the use of digital processors to analyze, synthesize, or modify such signals. The proposed SAS addresses the issues in two parts. Part I: capturing speaker- and language-independent, error-free speech content for speech application processing; and Part II: delivering the speech content as an input to the Speech User Applications/Interface (SUI). Approach: The Discrete Fourier Transform (DFT) of the speech signal is the essential ingredient of the SAS, and the Discrete-Time Fourier Transform (DTFT) links the discrete-time domain to the continuous-frequency domain. The direct computation of the DFT is prohibitively expensive in terms of the required computer operations. Fortunately, a number of “fast” transforms have been developed that are mathematically equivalent to the DFT but require significantly fewer computer operations for their implementation. Results: From Part I, the SAS is able to capture error-free speech content to facilitate speech as a good input in the mainstream of business processing. Part II provides an environment to implement speech user applications at a primitive level. Conclusion/Recommendations: With the SAS agent and the required hardware architecture, a Finite State Automaton (FSA) machine can be created to develop globally oriented, domain-specific speech user applications easily. It will have a major impact on interoperability and disintermediation in the Information Technology Cycle (ITC) for computer program generation.
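
    The cost argument in this abstract, that the direct DFT needs on the order of N^2 operations while an FFT needs roughly N log N, can be checked numerically with the small sketch below; the naive matrix DFT is written out only for comparison and is not how the SAS would be implemented.

    # Direct O(N^2) DFT versus numpy's FFT: identical spectra, fewer operations.
    import numpy as np

    def direct_dft(x):
        N = len(x)
        n = np.arange(N)
        W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # N x N twiddle-factor matrix
        return W @ x

    x = np.random.randn(256)
    assert np.allclose(direct_dft(x), np.fft.fft(x))   # same result as the FFT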

  20. Paraconsistent semantics of speech acts

    NARCIS (Netherlands)

    Dunin-Kȩplicz, Barbara; Strachocka, Alina; Szałas, Andrzej; Verbrugge, Rineke

    2015-01-01

    This paper discusses an implementation of four speech acts: assert, concede, request and challenge in a paraconsistent framework. A natural four-valued model of interaction yields multiple new cognitive situations. They are analyzed in the context of communicative relations, which partially replace

  1. Speech Enhancement

    DEFF Research Database (Denmark)

    Benesty, Jacob; Jensen, Jesper Rindom; Christensen, Mads Græsbøll;

    Speech enhancement is a classical problem in signal processing, yet still largely unsolved. Two of the conventional approaches for solving this problem are linear filtering, like the classical Wiener filter, and subspace methods. These approaches have traditionally been treated as different classes of methods and have been introduced in somewhat different contexts. Linear filtering methods originate in stochastic processes, while subspace methods have largely been based on developments in numerical linear algebra and matrix approximation theory. This book bridges the gap between these two classes of methods by showing how the ideas behind subspace methods can be incorporated into traditional linear filtering. In the context of subspace methods, the enhancement problem can then be seen as a classical linear filter design problem. This means that various solutions can more easily be compared...
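
    A minimal version of the linear-filtering approach discussed in the book is the spectral Wiener gain sketched below; it assumes a mono recording whose first 250 ms is speech-free so the noise spectrum can be estimated there, and the file names and STFT settings are illustrative.

    # Simple spectral-domain Wiener filter for noisy speech (illustrative).
    import numpy as np
    import soundfile as sf
    from scipy.signal import stft, istft

    x, sr = sf.read("noisy_speech.wav")                    # assumed mono input
    f, t, X = stft(x, fs=sr, nperseg=512)

    noise_frames = X[:, t < 0.25]                          # assumed noise-only lead-in
    noise_psd = (np.abs(noise_frames) ** 2).mean(axis=1, keepdims=True)

    snr = np.maximum((np.abs(X) ** 2) / noise_psd - 1.0, 0.0)   # rough a priori SNR
    gain = snr / (snr + 1.0)                               # Wiener gain per bin
    _, enhanced = istft(gain * X, fs=sr, nperseg=512)
    sf.write("enhanced_speech.wav", enhanced, sr)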

  2. Speech therapy with obturator.

    Science.gov (United States)

    Shyammohan, A; Sreenivasulu, D

    2010-12-01

    Rehabilitation of speech is tantamount to closure of defect in cases with velopharyngeal insufficiency. Often the importance of speech therapy is sidelined during the fabrication of obturators. Usually the speech part is taken up only at a later stage and is relegated entirely to a speech therapist without the active involvement of the prosthodontist. The article suggests a protocol for speech therapy in such cases to be done in unison with a prosthodontist.

  3. Prediction and constraint in audiovisual speech perception

    Science.gov (United States)

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported

  4. Group play

    DEFF Research Database (Denmark)

    Tychsen, Anders; Hitchens, Michael; Brolund, Thea

    2008-01-01

    Role-playing games (RPGs) are a well-known game form, existing in a number of formats, including tabletop, live action, and various digital forms. Despite their popularity, empirical studies of these games are relatively rare. In particular there have been few examinations of the effects of the various formats used by RPGs on the gaming experience. This article presents the results of an empirical study, examining how multi-player tabletop RPGs are affected as they are ported to the digital medium. Issues examined include the use of disposition assessments to predict play experience, the effect ... RPGs, with the first being of greater importance to digital games and the latter to the tabletop version.

  5. Automatic Speech Recognition from Neural Signals: A Focused Review

    Directory of Open Access Journals (Sweden)

    Christian Herff

    2016-09-01

    Full Text Available Speech interfaces have become widely accepted and are nowadays integrated in various real-life applications and devices. They have become a part of our daily life. However, speech interfaces presume the ability to produce intelligible speech, which might be impossible due to loud environments, concern about bothering bystanders, or an inability to produce speech (i.e., patients suffering from locked-in syndrome). For these reasons it would be highly desirable not to speak but to simply envision oneself saying words or sentences. Interfaces based on imagined speech would enable fast and natural communication without the need for audible speech and would give a voice to otherwise mute people. This focused review analyzes the potential of different brain imaging techniques to recognize speech from neural signals by applying Automatic Speech Recognition technology. We argue that modalities based on metabolic processes, such as functional Near Infrared Spectroscopy and functional Magnetic Resonance Imaging, are less suited for Automatic Speech Recognition from neural signals due to low temporal resolution, but are very useful for the investigation of the underlying neural mechanisms involved in speech processes. In contrast, electrophysiologic activity is fast enough to capture speech processes and is therefore better suited for ASR. Our experimental results indicate the potential of these signals for speech recognition from neural data, with a focus on invasively measured brain activity (electrocorticography). As a first example of Automatic Speech Recognition techniques applied to neural signals, we discuss the "Brain-to-text" system.

  6. Gender difference in speech intelligibility using speech intelligibility tests and acoustic analyses

    Science.gov (United States)

    2010-01-01

    PURPOSE The purpose of this study was to compare men with women in terms of speech intelligibility, to investigate the validity of objective acoustic parameters related with speech intelligibility, and to try to set up standard data for future studies in various fields of prosthodontics. MATERIALS AND METHODS Twenty men and women served as subjects in the present study. After recording of sample sounds, speech intelligibility tests by three speech pathologists and acoustic analyses were performed. Comparison of the speech intelligibility test scores and acoustic parameters such as fundamental frequency, fundamental frequency range, formant frequency, formant ranges, vowel working space area, and vowel dispersion was done between men and women. In addition, the correlations between the speech intelligibility values and acoustic variables were analyzed. RESULTS Women showed significantly higher speech intelligibility scores than men and there were significant differences between men and women in most of the acoustic parameters used in the present study. However, the correlations between the speech intelligibility scores and acoustic parameters were low. CONCLUSION The speech intelligibility test and acoustic parameters used in the present study were effective in differentiating male voice from female voice, and their values might be used in future studies related to patients involved with maxillofacial prosthodontics. However, further studies are needed on the correlation between speech intelligibility tests and objective acoustic parameters. PMID:21165272
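
    One of the parameters listed above, the vowel working space area, is simply the area of the polygon spanned by the corner vowels in the F1-F2 plane. The sketch below applies the shoelace formula to invented formant values; the numbers are placeholders, not data from this study.

    # Vowel space area from (F1, F2) values of four corner vowels.
    import numpy as np

    vowels = np.array([          # hypothetical (F1, F2) in Hz for /i/, /ae/, /a/, /u/
        [270, 2290],
        [660, 1720],
        [730, 1090],
        [300,  870],
    ], dtype=float)

    def polygon_area(points):
        """Shoelace formula; vertices must be ordered around the polygon."""
        x, y = points[:, 0], points[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

    print("vowel space area (Hz^2):", polygon_area(vowels))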

  7. Relationship between Chinese speech intelligibility and speech transmission index in rooms using dichotic listening

    Institute of Scientific and Technical Information of China (English)

    PENG JianXin

    2008-01-01

    Speech intelligibility (SI) is an important index for the design and assessment of halls intended for speech. The relationship between Chinese speech intelligibility scores in rooms and the speech transmission index (STI) under diotic listening conditions was studied in a previous paper using monaural room impulse responses obtained from the room acoustical simulation software Odeon. The present study employs simulated binaural room impulse responses and auralization techniques to obtain subjective Chinese speech intelligibility scores using a rhyme test. The relationship between Chinese speech intelligibility scores and STI is built and validated in rooms using dichotic (binaural) listening. The result shows that there is a high correlation between Chinese speech intelligibility scores and STI under dichotic listening. The relationship between Chinese speech intelligibility scores and STI under diotic and dichotic listening conditions is also analyzed. Compared with diotic listening, dichotic (binaural) listening (an actual listening situation) improves the signal-to-noise ratio by 2.7 dB for Mandarin Chinese speech intelligibility. The STI method can predict and evaluate speech intelligibility for Mandarin Chinese in rooms under dichotic (binaural) listening.

  8. Content words in Hebrew child-directed speech.

    Science.gov (United States)

    Adi-Bensaid, L; Ben-David, A; Tubul-Lavy, G

    2015-08-01

    The goal of the study was to examine whether the 'noun-bias' phenomenon, which exists in the lexicon of Hebrew-speaking children, also exists in Hebrew child-directed speech (CDS) as well as in Hebrew adult-directed speech (ADS). In addition, we aimed to describe the use of the different classes of content words in the speech of Hebrew-speaking parents to their children at different ages compared to the speech of parents to adults (ADS). Thirty infants (age range 8:5-33 months) were divided into three stages according to age: pre-lexical, single-word, and early grammar. The ADS corpus included 18 Hebrew-speaking parents of children at the same three stages of language development as in the CDS corpus. The CDS corpus was collected from parent-child dyads during naturalistic activities at home: mealtime, bathing, and play. The ADS corpus was collected from parent-experimenter interactions including the parent watching a video and then being interviewed by the experimenter. 200 utterances of each sample were transcribed, coded for types and tokens and analyzed quantitatively and qualitatively. Results show that in CDS, when speaking to infants of all ages, parents' use of types and tokens of verbs and nouns was similar and significantly higher than their use of adjectives or adverbs. In ADS, however, verbs were the main lexical category used by Hebrew-speaking parents in both types and tokens. It seems that both the properties of the input language (e.g. the pro-drop parameter) and the interactional styles of the caregivers are important factors that may influence the high presence of verbs in Hebrew-speaking parents' ADS and CDS. The negative correlation between the widespread use of verbs in the speech of parents to their infants and the 'noun-bias' phenomenon in the Hebrew-child lexicon will be discussed in detail.
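
    The type and token coding described above amounts to counting distinct word forms and total occurrences per lexical category. A minimal sketch, assuming the utterances have already been transcribed and tagged (the toy data below are placeholders, not the study's transcripts):

```python
from collections import Counter

# Toy tagged utterances: (word, lexical category) pairs; placeholders only.
utterances = [
    [("give", "verb"), ("ball", "noun")],
    [("big", "adjective"), ("ball", "noun"), ("roll", "verb")],
]

token_counts = Counter()           # total occurrences per category
types_per_category = {}            # distinct word forms per category
for utterance in utterances:
    for word, category in utterance:
        token_counts[category] += 1
        types_per_category.setdefault(category, set()).add(word)

type_counts = {cat: len(words) for cat, words in types_per_category.items()}
# e.g. type_counts -> {"verb": 2, "noun": 1, "adjective": 1}
```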

  9. A Clinician Survey of Speech and Non-Speech Characteristics of Neurogenic Stuttering

    Science.gov (United States)

    Theys, Catherine; van Wieringen, Astrid; De Nil, Luc F.

    2008-01-01

    This study presents survey data on 58 Dutch-speaking patients with neurogenic stuttering following various neurological injuries. Stroke was the most prevalent cause of stuttering in our patients, followed by traumatic brain injury, neurodegenerative diseases, and other causes. Speech and non-speech characteristics were analyzed separately for…

  11. Analyzing Obama's Victory Speech in 2008 Presidential Election by Language's Meta-functions%从语言的元功能看政治演讲——以奥巴马2008年大选获胜演说为例

    Institute of Scientific and Technical Information of China (English)

    姜雪; 刘薇

    2009-01-01

    From the perspective of functional grammar and critical text analysis, an attempt is made to analyze how a political speech fulfills the ideational function through transitivity, voice and polarity, the interpersonal function through mood and modality, and the textual function through theme-rheme structure, information structure and cohesion, with Barack Obama's victory speech in the 2008 presidential election as a sample. It is quite evident that the speaker's command of subjective and objective attitudes, the development of the speech's credibility and persuasiveness, and the construction of the speaker's social identity, interpersonal relationships and ideology can all be realized by language's three meta-functions.

  12. Playing Possum

    Directory of Open Access Journals (Sweden)

    Enrico Euli

    2016-07-01

    Full Text Available Our society is drenched in catastrophe, with the growth of financial crisis, environmental cataclysm and militarization as its most blatant and mortifying phenomena. Humans struggle with depression, a sense of impotence, and anguish towards a future perceived as a threat. One possibility for keeping ourselves alive lies in strengthening our ability to ‘play Possum’, an exercise in desisting and resistance: to firmly say ‘no’ to a world that proposes just one way of being and living free and imposes it as the only, unavoidable destiny.

  13. Playful Technology

    DEFF Research Database (Denmark)

    Johansen, Stine Liv; Eriksson, Eva

    2013-01-01

    In this paper, the design of future services for children in Danish public libraries is discussed, in the light of new challenges and opportunities in relation to new media and technologies. The Danish government has over the last few years initiated and described a range of initiatives regarding...... in the library, the changing role of the librarians and the library space. We argue that intertwining traditional library services with new media forms and engaging play is the core challenge for future design in physical public libraries, but also that it is through new media and technology that new...

  15. Playing cards.

    Science.gov (United States)

    1977-01-01

    Mrs. Zahia Marzouk, vice-president of the Alexandria Family Planning Association and a living legend of Egyptian family planning, does not believe in talking about problems. She is far too busy learning from people and teaching them. Her latest brainstorm is a set of playing cards designed to help girls and women to read and learn about family planning at the same time. The 5 packs of cards, representing familiar words and sounds, and each with a family planning joker, took Mrs. Marzouk 6 months to design and paint by hand. They have now been printed, packed into packets provided by UNICEF, and distributed to some 2000 literacy groups in factories and family planning clinics. Each woman who succeeds in learning to read is encouraged to teach 4 others. They then go to the family planning clinic to be examined and gain a certificate. For the teacher who has made them proficient there is a special prize. Girls at El Brinth village outside Alexandria are pictured playing cards at the family planning center where they are learning various skills including how to read.

  16. Relations between affective music and speech: evidence from dynamics of affective piano performance and speech production.

    Science.gov (United States)

    Liu, Xiaoluan; Xu, Yi

    2015-01-01

    This study compares affective piano performance with speech production from the perspective of dynamics: unlike previous research, this study uses finger force and articulatory effort as indexes reflecting the dynamics of affective piano performance and speech production respectively. Moreover, for the first time physical constraints such as piano fingerings and speech articulatory constraints are included due to their potential contribution to different patterns of dynamics. A piano performance experiment and speech production experiment were conducted in four emotions: anger, fear, happiness and sadness. The results show that in both piano performance and speech production, anger and happiness generally have high dynamics while sadness has the lowest dynamics. Fingerings interact with fear in the piano experiment and articulatory constraints interact with anger in the speech experiment, i.e., large physical constraints produce significantly higher dynamics than small physical constraints in piano performance under the condition of fear and in speech production under the condition of anger. Using production experiments, this study firstly supports previous perception studies on relations between affective music and speech. Moreover, this is the first study to show quantitative evidence for the importance of considering motor aspects such as dynamics in comparing music performance and speech production in which motor mechanisms play a crucial role.

  17. Relations between affective music and speech: Evidence from dynamics of affective piano performance and speech production

    Directory of Open Access Journals (Sweden)

    Xiaoluan eLiu

    2015-07-01

    Full Text Available This study compares affective piano performance with speech production from the perspective of dynamics: unlike previous research, this study uses finger force and articulatory effort as indexes reflecting the dynamics of affective piano performance and speech production respectively. Moreover, for the first time physical constraints such as piano fingerings and speech articulatory distance are included due to their potential contribution to different patterns of dynamics. A piano performance experiment and speech production experiment were conducted in four emotions: anger, fear, happiness and sadness. The results show that in both piano performance and speech production, anger and happiness generally have high dynamics while sadness has the lowest dynamics, with fear in the middle. Fingerings interact with fear in the piano experiment and articulatory distance interacts with anger in the speech experiment, i.e., large physical constraints produce significantly higher dynamics than small physical constraints in piano performance under the condition of fear and in speech production under the condition of anger. Using production experiments, this study firstly supports previous perception studies on relations between affective music and speech. Moreover, this is the first study to show quantitative evidence for the importance of considering motor aspects such as dynamics in comparing music performance and speech production in which motor mechanisms play a crucial role.

  18. Visual speech fills in both discrimination and identification of non-intact auditory speech in children.

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F; McAlpine, Rachel P; Abdi, Hervé

    2017-07-20

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. bæz) coupled to non-intact (excised onsets) auditory speech (signified by /-b/æz). Children discriminated syllable pairs that differed in intactness (i.e. bæz:/-b/æz) and identified non-intact nonwords (/-b/æz). We predicted that visual speech would cause children to perceive the non-intact onsets as intact, resulting in more same responses for discrimination and more intact (i.e. bæz) responses for identification in the audiovisual than auditory mode. Visual speech for the easy-to-speechread /b/ but not for the difficult-to-speechread /g/ boosted discrimination and identification (about 35-45%) in children from four to fourteen years. The influence of visual speech on discrimination was uniquely associated with the influence of visual speech on identification and receptive vocabulary skills.

  19. CONVERGING TOWARDS A COMMON SPEECH CODE: IMITATIVE AND PERCEPTUO-MOTOR RECALIBRATION PROCESSES IN SPEECH PRODUCTION

    Directory of Open Access Journals (Sweden)

    Marc eSato

    2013-07-01

    Full Text Available Auditory and somatosensory systems play a key role in speech motor control. In the act of speaking, segmental speech movements are programmed to reach phonemic sensory goals, which in turn are used to estimate actual sensory feedback in order to further control production. The adult’s tendency to automatically imitate a number of acoustic-phonetic characteristics in another speaker's speech, however, suggests that speech production not only relies on the intended phonemic sensory goals and actual sensory feedback but also on the processing of external speech inputs. These online adaptive changes in speech production, or phonetic convergence effects, are thought to facilitate conversational exchange by contributing to setting a common perceptuo-motor ground between the speaker and the listener. In line with previous studies on phonetic convergence, we here demonstrate, in a non-interactive situation of communication, online unintentional and voluntary imitative changes in relevant acoustic features of acoustic vowel targets (fundamental and first formant frequencies) during speech production and imitation. In addition, perceptuo-motor recalibration processes, or after-effects, occurred not only after vowel production and imitation but also after auditory categorization of the acoustic vowel targets. Altogether, these findings demonstrate adaptive plasticity of phonemic sensory-motor goals and suggest that, apart from sensory-motor knowledge, speech production continuously draws on perceptual learning from the external speech environment.

  20. Phonetic enhancement of Mandarin vowels and tones: Infant-directed speech and Lombard speech.

    Science.gov (United States)

    Tang, Ping; Xu Rattanasone, Nan; Yuen, Ivan; Demuth, Katherine

    2017-08-01

    Speech units are reported to be hyperarticulated in both infant-directed speech (IDS) and Lombard speech. Since these two registers have typically been studied separately, it is unclear if the same speech units are hyperarticulated in the same manner between these registers. The aim of the present study is to compare the effect of register on vowel and tone modification in the tonal language Mandarin Chinese. Vowels and tones were produced by 15 Mandarin-speaking mothers during interactions with their 12-month-old infants in a play session (IDS), in conversation with a Mandarin-speaking adult in a 70 dBA eight-talker babble noise environment (Lombard speech), and in a quiet environment (adult-directed speech). Vowel space expansion was observed in IDS and Lombard speech; however, the patterns of vowel shift differed between the two registers. IDS displayed tone space expansion only in the utterance-final position, whereas there was no tone space expansion in Lombard speech. The overall pitch increased for all tones in both registers. The tone-bearing vowel duration also increased in both registers, but only in utterance-final position. The difference in speech modifications between these two registers is discussed in light of speakers' different communicative needs.

  1. SII-Based Speech Preprocessing for Intelligibility Improvement in Noise

    DEFF Research Database (Denmark)

    Taal, Cees H.; Jensen, Jesper

    2013-01-01

    A linear time-invariant filter is designed in order to improve speech understanding when the speech is played back in a noisy environment. To accomplish this, the speech intelligibility index (SII) is maximized under the constraint that the speech energy is held constant. A nonlinear approximation is used for the SII such that a closed-form solution exists to the constrained optimization problem. The resulting filter is dependent both on the long-term average noise and speech spectrum and the global SNR and, in general, has a high-pass characteristic. In contrast to existing methods, the proposed filter sets certain frequency bands to zero when they do not contribute to intelligibility anymore. Experiments show large intelligibility improvements with the proposed method when used in stationary speech-shaped noise. However, it was also found that the method does not perform well for speech...
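
    The record above describes maximizing an SII-like objective under a fixed-energy constraint. The sketch below poses a simplified numerical version of that idea with generic band data and SciPy's constrained optimizer; the band values, the clipped-SNR audibility term, and the importance weights are placeholders, not the paper's actual SII formulation or its closed-form solution.

```python
import numpy as np
from scipy.optimize import minimize

# Per-band long-term speech power, noise power, and band-importance weights.
# All numbers are illustrative placeholders, not values from the paper.
speech_pow = np.array([1.0, 0.8, 0.6, 0.4, 0.2])
noise_pow = np.array([0.5, 0.6, 0.3, 0.1, 0.05])
importance = np.array([0.15, 0.25, 0.25, 0.20, 0.15])

def neg_sii_like(gains_db):
    g = 10 ** (gains_db / 10)                       # per-band power gains of the filter
    snr_db = 10 * np.log10(g * speech_pow / noise_pow)
    audibility = np.clip((snr_db + 15) / 30, 0, 1)  # SII-style mapping of band SNR to [0, 1]
    return -np.sum(importance * audibility)

# Constraint: filtered speech energy equals the original speech energy.
energy = {"type": "eq",
          "fun": lambda gdb: np.sum(10 ** (gdb / 10) * speech_pow) - speech_pow.sum()}

res = minimize(neg_sii_like, x0=np.zeros(speech_pow.size), constraints=[energy])
optimal_gains_db = res.x                            # per-band gains of the resulting filter
```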

  2. Delayed Speech or Language Development

    Science.gov (United States)

    ... child is right on schedule. Normal Speech & Language Development: It's important to discuss early speech and language ...

  3. Introducing Two New Terms into the Literature of Hate Speech: “Hate Discourse” and “Hate Speech Act” Application of “speech act theory” into hate speech studies in the era of Web 2.0

    OpenAIRE

    Özarslan, Yrd. Doç. Dr. Zeynep

    2014-01-01

    The aim of this paper is to explain the need for a revision of the term “hate speech” in the era of Web 2.0 and to introduce two new terms into the literature of hate speech with the help of application of “speech act theory”, that is “hate discourse” and “hate speech act.” The need for the revision arises from the examination of the methodology used to analyze hate speech, which is critical discourse analysis (CDA). Even though CDA seems fairly sufficient for hate speech analysis in traditio...

  5. The Influence of Visual and Auditory Information on the Perception of Speech and Non-Speech Oral Movements in Patients with Left Hemisphere Lesions

    Science.gov (United States)

    Schmid, Gabriele; Thielmann, Anke; Ziegler, Wolfram

    2009-01-01

    Patients with lesions of the left hemisphere often suffer from oral-facial apraxia, apraxia of speech, and aphasia. In these patients, visual features often play a critical role in speech and language therapy, when pictured lip shapes or the therapist's visible mouth movements are used to facilitate speech production and articulation. This demands…

  6. The Functional Connectome of Speech Control.

    Science.gov (United States)

    Fuertinger, Stefan; Horwitz, Barry; Simonyan, Kristina

    2015-07-01

    In the past few years, several studies have been directed to understanding the complexity of functional interactions between different brain regions during various human behaviors. Among these, neuroimaging research installed the notion that speech and language require an orchestration of brain regions for comprehension, planning, and integration of a heard sound with a spoken word. However, these studies have been largely limited to mapping the neural correlates of separate speech elements and examining distinct cortical or subcortical circuits involved in different aspects of speech control. As a result, the complexity of the brain network machinery controlling speech and language remained largely unknown. Using graph theoretical analysis of functional MRI (fMRI) data in healthy subjects, we quantified the large-scale speech network topology by constructing functional brain networks of increasing hierarchy from the resting state to motor output of meaningless syllables to complex production of real-life speech as well as compared to non-speech-related sequential finger tapping and pure tone discrimination networks. We identified a segregated network of highly connected local neural communities (hubs) in the primary sensorimotor and parietal regions, which formed a commonly shared core hub network across the examined conditions, with the left area 4p playing an important role in speech network organization. These sensorimotor core hubs exhibited features of flexible hubs based on their participation in several functional domains across different networks and ability to adaptively switch long-range functional connectivity depending on task content, resulting in a distinct community structure of each examined network. Specifically, compared to other tasks, speech production was characterized by the formation of six distinct neural communities with specialized recruitment of the prefrontal cortex, insula, putamen, and thalamus, which collectively forged the formation

  7. The Functional Connectome of Speech Control.

    Directory of Open Access Journals (Sweden)

    Stefan Fuertinger

    2015-07-01

    Full Text Available In the past few years, several studies have been directed to understanding the complexity of functional interactions between different brain regions during various human behaviors. Among these, neuroimaging research installed the notion that speech and language require an orchestration of brain regions for comprehension, planning, and integration of a heard sound with a spoken word. However, these studies have been largely limited to mapping the neural correlates of separate speech elements and examining distinct cortical or subcortical circuits involved in different aspects of speech control. As a result, the complexity of the brain network machinery controlling speech and language remained largely unknown. Using graph theoretical analysis of functional MRI (fMRI) data in healthy subjects, we quantified the large-scale speech network topology by constructing functional brain networks of increasing hierarchy from the resting state to motor output of meaningless syllables to complex production of real-life speech as well as compared to non-speech-related sequential finger tapping and pure tone discrimination networks. We identified a segregated network of highly connected local neural communities (hubs) in the primary sensorimotor and parietal regions, which formed a commonly shared core hub network across the examined conditions, with the left area 4p playing an important role in speech network organization. These sensorimotor core hubs exhibited features of flexible hubs based on their participation in several functional domains across different networks and ability to adaptively switch long-range functional connectivity depending on task content, resulting in a distinct community structure of each examined network. Specifically, compared to other tasks, speech production was characterized by the formation of six distinct neural communities with specialized recruitment of the prefrontal cortex, insula, putamen, and thalamus, which collectively
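
    As a rough illustration of the graph-theoretical workflow the record describes (building a functional network from fMRI time series and identifying highly connected hubs and communities), here is a minimal sketch using NetworkX. The synthetic data, the correlation threshold, and the top-degree hub criterion are assumptions for illustration, not the study's parameters.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

# Synthetic stand-in for region-by-time fMRI signals (n_regions x n_timepoints).
rng = np.random.default_rng(0)
ts = rng.standard_normal((90, 200))

# Functional connectivity: thresholded absolute correlation between regions.
corr = np.corrcoef(ts)
np.fill_diagonal(corr, 0)
adj = (np.abs(corr) > 0.15).astype(int)       # threshold is an arbitrary choice here

G = nx.from_numpy_array(adj)
degree = dict(G.degree())
hubs = sorted(degree, key=degree.get, reverse=True)[:10]   # top-degree nodes as "hubs"
communities = community.greedy_modularity_communities(G)   # community structure
```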

  8. Hierarchical organization in the temporal structure of infant-directed speech and song.

    Science.gov (United States)

    Falk, Simone; Kello, Christopher T

    2017-06-01

    Caregivers alter the temporal structure of their utterances when talking and singing to infants compared with adult communication. The present study tested whether temporal variability in infant-directed registers serves to emphasize the hierarchical temporal structure of speech. Fifteen German-speaking mothers sang a play song and told a story to their 6-month-old infants, or to an adult. Recordings were analyzed using a recently developed method that determines the degree of nested clustering of temporal events in speech. Events were defined as peaks in the amplitude envelope, and clusters of various sizes related to periods of acoustic speech energy at varying timescales. Infant-directed speech and song clearly showed greater event clustering compared with adult-directed registers, at multiple timescales of hundreds of milliseconds to tens of seconds. We discuss the relation of this newly discovered acoustic property to temporal variability in linguistic units and its potential implications for parent-infant communication and infants learning the hierarchical structures of speech and language. Copyright © 2017 Elsevier B.V. All rights reserved.
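
    As a rough sketch of the kind of analysis described above (peaks in the amplitude envelope as events, with clustering assessed at multiple timescales), the function below detects envelope peaks and summarizes clustering with an Allan-factor-style variance-to-mean ratio of window counts. This statistic and the parameters are assumptions for illustration, not necessarily the paper's exact method.

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

def event_clustering(x, sr, timescales=(0.1, 0.5, 1.0, 5.0, 10.0)):
    """Detect amplitude-envelope peaks and measure their clustering at several
    timescales via a variance-to-mean ratio of event counts per window."""
    envelope = np.abs(hilbert(x))                             # amplitude envelope
    peaks, _ = find_peaks(envelope, distance=int(0.02 * sr))  # events >= 20 ms apart
    event_times = peaks / sr
    clustering = {}
    for window in timescales:
        edges = np.arange(0.0, event_times.max() + window, window)
        counts, _ = np.histogram(event_times, bins=edges)
        clustering[window] = counts.var() / max(counts.mean(), 1e-9)  # > 1 => clustered
    return clustering

# Example with a stand-in signal (white noise) at 16 kHz:
sr = 16000
clustering = event_clustering(np.random.randn(sr * 30), sr)
```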

  9. Analyzing Orientations

    Science.gov (United States)

    Ruggles, Clive L. N.

    Archaeoastronomical field survey typically involves the measurement of structural orientations (i.e., orientations along and between built structures) in relation to the visible landscape and particularly the surrounding horizon. This chapter focuses on the process of analyzing the astronomical potential of oriented structures, whether in the field or as a desktop appraisal, with the aim of establishing the archaeoastronomical "facts". It does not address questions of data selection (see instead Chap. 25, "Best Practice for Evaluating the Astronomical Significance of Archaeological Sites", 10.1007/978-1-4614-6141-8_25) or interpretation (see Chap. 24, "Nature and Analysis of Material Evidence Relevant to Archaeoastronomy", 10.1007/978-1-4614-6141-8_22). The main necessity is to determine the azimuth, horizon altitude, and declination in the direction "indicated" by any structural orientation. Normally, there are a range of possibilities, reflecting the various errors and uncertainties in estimating the intended (or, at least, the constructed) orientation, and in more formal approaches an attempt is made to assign a probability distribution extending over a spread of declinations. These probability distributions can then be cumulated in order to visualize and analyze the combined data from several orientations, so as to identify any consistent astronomical associations that can then be correlated with the declinations of particular astronomical objects or phenomena at any era in the past. The whole process raises various procedural and methodological issues and does not proceed in isolation from the consideration of corroborative data, which is essential in order to develop viable cultural interpretations.
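
    The core computation mentioned here, turning an indicated azimuth and horizon altitude at a site of known latitude into a declination, follows from the standard horizontal-to-equatorial coordinate transformation. The sketch below implements that formula (azimuth measured from true north, refraction and parallax ignored); the example numbers are arbitrary.

```python
import numpy as np

def declination(azimuth_deg, altitude_deg, latitude_deg):
    """Declination of the point on the celestial sphere indicated by an orientation,
    from sin(dec) = sin(lat)*sin(alt) + cos(lat)*cos(alt)*cos(az).
    Azimuth is measured from true north; atmospheric refraction is ignored."""
    az, alt, lat = np.radians([azimuth_deg, altitude_deg, latitude_deg])
    sin_dec = np.sin(lat) * np.sin(alt) + np.cos(lat) * np.cos(alt) * np.cos(az)
    return np.degrees(np.arcsin(sin_dec))

# Example: azimuth 120 deg, horizon altitude 2 deg, at latitude 52 deg N.
dec = declination(120.0, 2.0, 52.0)
```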

  10. 78 FR 49693 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    ... COMMISSION 47 CFR Part 64 Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With... this document, the Commission amends telecommunications relay services (TRS) mandatory...

  11. DIRECTIVE SPEECH ACT IN THE MOVIE SLEEPING BEAUTY

    Directory of Open Access Journals (Sweden)

    Muhartoyo

    2013-09-01

    Full Text Available Pragmatics is one of the linguistic studies that is quite attractive to learn more about. There are many aspects of pragmatics; one of them deals with speech acts. Speech acts consist of many categories; one of them is the directive speech act. This study aims to identify the directive speech acts performed in the movie Sleeping Beauty. Likewise, it will find out how often directive speech acts are performed and which types of directive speech act are most frequently used in the movie. This study used a qualitative method in which data collection was done by watching the movie, analyzing the body movements and the dialogues of each character, reading the script, and library research. A total of 139 directive speech acts were successfully identified. The result of the analysis showed that the directive speech act of ordering is the most frequently used in the movie (21.6%). The least frequently used directive speech act is the inviting directive speech act (0.7%). The study also revealed the importance of directive speech acts in keeping the flow of the storyline of the movie. This study is expected to give some useful insights into understanding what directive speech acts are.

  12. Speech and Language Impairments

    Science.gov (United States)

    ... impairment. Many children are identified as having a speech or language impairment after they enter the public school system. A teacher may notice difficulties in a child’s speech or communication skills and refer the child for ...

  13. Mechanisms of enhancing visual-speech recognition by prior auditory information.

    Science.gov (United States)

    Blank, Helen; von Kriegstein, Katharina

    2013-01-15

    Speech recognition from visual-only faces is difficult, but can be improved by prior information about what is said. Here, we investigated how the human brain uses prior information from auditory speech to improve visual-speech recognition. In a functional magnetic resonance imaging study, participants performed a visual-speech recognition task, indicating whether the word spoken in visual-only videos matched the preceding auditory-only speech, and a control task (face-identity recognition) containing exactly the same stimuli. We localized a visual-speech processing network by contrasting activity during visual-speech recognition with the control task. Within this network, the left posterior superior temporal sulcus (STS) showed increased activity and interacted with auditory-speech areas if prior information from auditory speech did not match the visual speech. This mismatch-related activity and the functional connectivity to auditory-speech areas were specific for speech, i.e., they were not present in the control task. The mismatch-related activity correlated positively with performance, indicating that posterior STS was behaviorally relevant for visual-speech recognition. In line with predictive coding frameworks, these findings suggest that prediction error signals are produced if visually presented speech does not match the prediction from preceding auditory speech, and that this mechanism plays a role in optimizing visual-speech recognition by prior information. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Speech 7 through 12.

    Science.gov (United States)

    Nederland Independent School District, TX.

    GRADES OR AGES: Grades 7 through 12. SUBJECT MATTER: Speech. ORGANIZATION AND PHYSICAL APPEARANCE: Following the foreword, philosophy, and objectives, this guide presents a speech curriculum. The curriculum covers junior high and Speech I, II, III (senior high). Thirteen units of study are presented for junior high; each unit is divided into…

  15. The Efficient Coding of Speech: Cross-Linguistic Differences.

    Directory of Open Access Journals (Sweden)

    Ramon Guevara Erra

    Full Text Available Neural coding in the auditory system has been shown to obey the principle of efficient neural coding. The statistical properties of speech appear to be particularly well matched to the auditory neural code. However, only English has so far been analyzed from an efficient coding perspective. It thus remains unknown whether such an approach is able to capture differences between the sound patterns of different languages. Here, we use independent component analysis to derive information theoretically optimal, non-redundant codes (filter populations) for seven typologically distinct languages (Dutch, English, Japanese, Marathi, Polish, Spanish and Turkish) and relate the statistical properties of these filter populations to documented differences in the speech rhythms (Analysis 1) and consonant inventories (Analysis 2) of these languages. We show that consonant class membership plays a particularly important role in shaping the statistical structure of speech in different languages, suggesting that acoustic transience, a property that discriminates consonant classes from one another, is highly relevant for efficient coding.
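
    The independent component analysis step described above can be sketched as follows: cut a speech waveform into short frames and learn an ICA basis over them, whose components serve as the learned "filters". The stand-in signal, frame length, and number of components below are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import FastICA

sr = 16000
signal = np.random.randn(sr * 5)          # stand-in for a real speech recording

# Short, overlapping frames (10 ms frames with a 5 ms hop at 16 kHz).
frame_len, hop = 160, 80
frames = np.stack([signal[i:i + frame_len]
                   for i in range(0, len(signal) - frame_len, hop)])
frames -= frames.mean(axis=1, keepdims=True)

ica = FastICA(n_components=64, max_iter=1000, random_state=0)
sources = ica.fit_transform(frames)       # per-frame activations of the components
filters = ica.components_                 # rows = learned (efficient-coding style) filters
```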

  16. Heart Rate Extraction from Vowel Speech Signals

    Institute of Scientific and Technical Information of China (English)

    Abdelwadood Mesleh; Dmitriy Skopin; Sergey Baglikov; Anas Quteishat

    2012-01-01

    This paper presents a novel non-contact heart rate extraction method based on vowel speech signals. The proposed method models the relationship between vowel speech production and heart activity, exploiting the observation that each heartbeat causes a brief shift (evolution) in the vowel's formants. The short-time Fourier transform (STFT) is used to detect the formant maximum peaks so as to accurately estimate the heart rate. Compared with a traditional contact pulse oximeter, the average accuracy of the proposed non-contact heart rate extraction method exceeds 95%. The proposed non-contact heart rate extraction method is expected to play an important role in modern medical applications.
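
    A very rough sketch of the signal chain implied above: take the STFT of a sustained vowel, track the frequency of the strongest low-frequency spectral peak (a formant proxy) over time, and read the heart rate from the dominant modulation frequency of that track. The band limits, frame sizes, and stand-in signal are all assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.signal import stft

sr = 16000
x = np.random.randn(sr * 10)                   # stand-in for a recorded sustained vowel

f, t, Z = stft(x, fs=sr, nperseg=1024, noverlap=768)
band = f < 1000                                # look below 1 kHz for a low formant
peak_track = f[band][np.argmax(np.abs(Z[band, :]), axis=0)]
peak_track = peak_track - peak_track.mean()    # keep only the fluctuation over time

frame_rate = sr / (1024 - 768)                 # STFT frames per second
spectrum = np.abs(np.fft.rfft(peak_track)) ** 2
freqs = np.fft.rfftfreq(peak_track.size, d=1.0 / frame_rate)
hr_band = (freqs > 0.7) & (freqs < 3.0)        # 42-180 beats per minute
heart_rate_bpm = 60.0 * freqs[hr_band][np.argmax(spectrum[hr_band])]
```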

  17. Neural bases of childhood speech disorders: lateralization and plasticity for speech functions during development.

    Science.gov (United States)

    Liégeois, Frédérique J; Morgan, Angela T

    2012-01-01

    Current models of speech production in adults emphasize the crucial role played by the left perisylvian cortex, primary and pre-motor cortices, the basal ganglia, and the cerebellum for normal speech production. Whether similar brain-behaviour relationships and leftward cortical dominance are found in childhood remains unclear. Here we reviewed recent evidence linking motor speech disorders (apraxia of speech and dysarthria) and brain abnormalities in children and adolescents with developmental, progressive, or childhood-acquired conditions. We found no evidence that unilateral damage can result in apraxia of speech, or that left hemisphere lesions are more likely to result in dysarthria than lesions to the right. The few studies reporting on childhood apraxia of speech converged towards morphological, structural, metabolic or epileptic anomalies affecting the basal ganglia, perisylvian and rolandic cortices bilaterally. Persistent dysarthria, similarly, was commonly reported in individuals with syndromes and conditions affecting these same structures bilaterally. In conclusion, for the first time we provide evidence that long-term and severe childhood speech disorders result predominantly from bilateral disruption of the neural networks involved in speech production.

  18. Speech in spinocerebellar ataxia.

    Science.gov (United States)

    Schalling, Ellika; Hartelius, Lena

    2013-12-01

    Spinocerebellar ataxias (SCAs) are a heterogeneous group of autosomal dominant cerebellar ataxias clinically characterized by progressive ataxia, dysarthria and a range of other concomitant neurological symptoms. Only a few studies include detailed characterization of speech symptoms in SCA. Speech symptoms in SCA resemble ataxic dysarthria but symptoms related to phonation may be more prominent. One study to date has shown an association between differences in speech and voice symptoms related to genotype. More studies of speech and voice phenotypes are motivated, to possibly aid in clinical diagnosis. In addition, instrumental speech analysis has been demonstrated to be a reliable measure that may be used to monitor disease progression or therapy outcomes in possible future pharmacological treatments. Intervention by speech and language pathologists should go beyond assessment. Clinical guidelines for management of speech, communication and swallowing need to be developed for individuals with progressive cerebellar ataxia.

  19. AHP 28: Too Much Loving-Kindness to Repay: Funeral Speeches of the Wenquan Pumi

    Directory of Open Access Journals (Sweden)

    Gerong Pincuo (kɛ́izoŋ pʰiŋtsʰu)

    2013-12-01

    Full Text Available Two Pumi funeral speech rituals of the Wenquan Pumi area in northwestern Yunnan Province illustrate the traditional genre of speeches through their use of metaphor and parallelism. The speeches express the central concept of giving and repaying that plays an important role in strengthening social cohesion among Pumi relatives.

  20. Neural Coding of Formant-Exaggerated Speech in the Infant Brain

    Science.gov (United States)

    Zhang, Yang; Koerner, Tess; Miller, Sharon; Grice-Patil, Zach; Svec, Adam; Akbari, David; Tusler, Liz; Carney, Edward

    2011-01-01

    Speech scientists have long proposed that formant exaggeration in infant-directed speech plays an important role in language acquisition. This event-related potential (ERP) study investigated neural coding of formant-exaggerated speech in 6-12-month-old infants. Two synthetic /i/ vowels were presented in alternating blocks to test the effects of…

  2. Acoustic Analysis of PD Speech

    Directory of Open Access Journals (Sweden)

    Karen Chenausky

    2011-01-01

    Full Text Available According to the U.S. National Institutes of Health, approximately 500,000 Americans have Parkinson's disease (PD), with roughly another 50,000 receiving new diagnoses each year. 70%–90% of these people also have the hypokinetic dysarthria associated with PD. Deep brain stimulation (DBS) substantially relieves motor symptoms in advanced-stage patients for whom medication produces disabling dyskinesias. This study investigated speech changes as a result of DBS settings chosen to maximize motor performance. The speech of 10 PD patients and 12 normal controls was analyzed for syllable rate and variability, syllable length patterning, vowel fraction, voice-onset time variability, and spirantization. These were normalized by the controls' standard deviation to represent distance from normal and combined into a composite measure. Results show that DBS settings relieving motor symptoms can improve speech, making it up to three standard deviations closer to normal. However, the clinically motivated settings evaluated here show greater capacity to impair, rather than improve, speech. A feedback device developed from these findings could be useful to clinicians adjusting DBS parameters, as a means for ensuring they do not unwittingly choose DBS settings which impair patients' communication.
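
    The composite measure described above, each speech variable normalized by the control group's standard deviation and combined into a single distance from normal, can be sketched like this; the measure names and numbers are placeholders, not the study's data.

```python
import numpy as np

# Control-group values and one patient's values per speech measure (placeholders).
controls = {
    "syllable_rate": np.array([5.2, 5.6, 5.1, 5.4, 5.3]),
    "vowel_fraction": np.array([0.42, 0.45, 0.40, 0.44, 0.43]),
    "vot_variability": np.array([8.0, 7.5, 9.0, 8.2, 7.8]),
}
patient = {"syllable_rate": 4.1, "vowel_fraction": 0.52, "vot_variability": 13.0}

def composite_distance(patient, controls):
    """Mean absolute z-score across measures: distance from normal in control SDs."""
    z = [abs(patient[m] - controls[m].mean()) / controls[m].std(ddof=1) for m in patient]
    return float(np.mean(z))

distance_sd = composite_distance(patient, controls)
```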

  3. Digital speech processing using Matlab

    CERN Document Server

    Gopi, E S

    2014-01-01

    Digital Speech Processing Using Matlab deals with digital speech pattern recognition, speech production model, speech feature extraction, and speech compression. The book is written in a manner that is suitable for beginners pursuing basic research in digital speech processing. Matlab illustrations are provided for most topics to enable better understanding of concepts. This book also deals with the basic pattern recognition techniques (illustrated with speech signals using Matlab) such as PCA, LDA, ICA, SVM, HMM, GMM, BPN, and KSOM.

  4. The role of synchrony and ambiguity in speech-gesture integration during comprehension.

    Science.gov (United States)

    Habets, Boukje; Kita, Sotaro; Shao, Zeshu; Ozyurek, Asli; Hagoort, Peter

    2011-08-01

    During face-to-face communication, one not only hears speech but also sees a speaker's communicative hand movements. It has been shown that such hand gestures play an important role in communication, where the two modalities influence each other's interpretation. A gesture typically temporally overlaps with coexpressive speech, but the gesture is often initiated before (but not after) the coexpressive speech. The present ERP study investigated what degree of asynchrony in the speech and gesture onsets is optimal for semantic integration of the concurrent gesture and speech. Videos of a person gesturing were combined with speech segments that were either semantically congruent or incongruent with the gesture. Although gesture and speech always overlapped in time, gesture and speech were presented with three different degrees of asynchrony. In the SOA 0 condition, the gesture onset and the speech onset were simultaneous. In the SOA 160 and 360 conditions, speech was delayed by 160 and 360 msec, respectively. ERPs time locked to speech onset showed a significant difference between semantically congruent versus incongruent gesture-speech combinations on the N400 for the SOA 0 and 160 conditions. No significant difference was found for the SOA 360 condition. These results imply that speech and gesture are integrated most efficiently when the differences in onsets do not exceed a certain time span, because iconic gestures need speech to be disambiguated in a way relevant to the speech context.

  5. An Analysis of Speech Structure and Perception Processes and Its Effects on Oral English Teaching Centering around Lexical Chunks

    Institute of Scientific and Technical Information of China (English)

    ZHOU Li; NIE Yong-Wei

    2015-01-01

    The paper tries to analyze speech perception in terms of its structure, process, levels and models. Some problems concerning speech perception have been touched upon. The paper aims to provide some reference for oral English teaching and learning in the light of speech perception. It is intended to arouse readers’ reflection upon the effect of speech perception on oral English teaching.

  6. A Rule Based System for Speech Language Context Understanding

    Institute of Scientific and Technical Information of China (English)

    Imran Sarwar Bajwa; Muhammad Abbas Choudhary

    2006-01-01

    Speech and natural language content are major tools of communication. This research paper presents a natural-language-processing-based automated system for understanding speech language text. A new rule-based model is presented for analyzing natural language and extracting the relevant meanings from a given text. The user writes natural language text in simple English in a few paragraphs, and the designed system analyzes the given script. After composite analysis and extraction of the associated information, the system assigns particular meanings to an assortment of speech language text on the basis of its context. The system uses standard speech language rules that are clearly defined for speech languages such as English, Urdu, Chinese, Arabic, French, etc. It provides a quick and reliable way to comprehend speech language context and generate the respective meanings.

  7. Speech defect and orthodontics: a contemporary review.

    Science.gov (United States)

    Doshi, Umal Hiralal; Bhad-Patil, Wasundhara A

    2011-01-01

    In conjunction with the lips, tongue, and oropharynx, the teeth play an important role in the articulation of consonants via airflow obstruction and modification. Therefore, along with these articulators, any orthodontic therapy that changes their position may play a role in speech disorders. This paper examines the relevant studies and discusses the difficulties of scientific investigation in this area. The ability of patients to adapt their speech to compensate for most handicapping occlusion and facial deformities is recognized, but the mechanism for this adaptation remains incompletely understood. The overall conclusion is that while certain malocclusions show a relationship with speech defects, this does not appear to correlate with the severity of the condition. There is no direct cause-and-effect relationship. Similarly, no guarantees of improvement can be given to patients undergoing orthodontic or orthognathic correction of malocclusion.

  8. Duration, Pitch, and Loudness in Kunqu Opera Stage Speech.

    Science.gov (United States)

    Han, Qichao; Sundberg, Johan

    2017-03-01

    Kunqu is a special type of opera within the Chinese tradition with 600 years of history. In it, stage speech is used for the spoken dialogue. It is performed in the Ming Dynasty's mandarin language and is a much more dominant part of the play than singing. Stage speech deviates considerably from normal conversational speech with respect to duration, loudness, and pitch. This paper compares these properties in stage speech and conversational speech. A famous, highly experienced female singer performed stage speech and read the same lyrics in a conversational speech mode. Clear differences were found. Compared with conversational speech, stage speech had longer word and sentence durations, and word duration was less variable. The average sound level was 16 dB higher. Mean fundamental frequency was also considerably higher and more varied. Within sentences, both loudness and fundamental frequency tended to vary according to a low-high-low pattern. Some of the findings fail to support current opinions regarding the characteristics of stage speech, and in this sense the study demonstrates the relevance of objective measurements in descriptions of vocal styles. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  9. Speech-recognition interfaces for music information retrieval

    Science.gov (United States)

    Goto, Masataka

    2005-09-01

    This paper describes two hands-free music information retrieval (MIR) systems that enable a user to retrieve and play back a musical piece by saying its title or the artist's name. Although various interfaces for MIR have been proposed, speech-recognition interfaces suitable for retrieving musical pieces have not been studied. Our MIR-based jukebox systems employ two different speech-recognition interfaces for MIR, speech completion and speech spotter, which exploit intentionally controlled nonverbal speech information in original ways. The first is a music retrieval system with the speech-completion interface that is suitable for music stores and car-driving situations. When a user only remembers part of the name of a musical piece or an artist and utters only a remembered fragment, the system helps the user recall and enter the name by completing the fragment. The second is a background-music playback system with the speech-spotter interface that can enrich human-human conversation. When a user is talking to another person, the system allows the user to enter voice commands for music playback control by spotting a special voice-command utterance in face-to-face or telephone conversations. Experimental results from use of these systems have demonstrated the effectiveness of the speech-completion and speech-spotter interfaces. (Video clips: http://staff.aist.go.jp/m.goto/MIR/speech-if.html)

  10. Symbolic Communication as Speech in United States Supreme Court Jurisprudence

    OpenAIRE

    Łukasz Machaj

    2011-01-01

    The First Amendment to the United States Constitution forbids government to pass any law which abridges freedom of speech. Notwithstanding the absolute tenor of the clause, this guarantee is clearly not limitless; its boundaries are established mainly in the course of Constitutional adjudication. The United States Supreme Court has extended free speech guarantees to so-called symbolic speech, i.e. to nonverbal expression of ideas, views or emotions. The article analyzes basic criteria and lim...

  11. Speech-to-Speech Relay Service

    Science.gov (United States)

    ... to make an STS call. You are then connected to an STS CA who will repeat your spoken words, making the spoken words clear to the other party. Persons with speech disabilities may also receive STS calls. The calling ...

  13. Playing Together: Analyzing Jazz Improvisation to Improve the Multiframe

    Directory of Open Access Journals (Sweden)

    Zach Powers

    2016-06-01

    Full Text Available Musical terminology is often used when discussing narrative forms of art. However, this is seldom accompanied by a systematic application of musical concepts for use by artists in these other mediums. Comics, in particular, parallel music in terms of the multiframe, where various individual elements are perceived at once. Therefore, a useful analogy can be made between the multiframe and thematic and vertical musical construction. The interactivity among jazz musicians during a collective improvisation exemplifies this musical simultaneity, and this article creates an analogy between improvisation and narrative comics, deriving several analytical tools that can be used to inform the creation of more meaningful multiframes.

  14. Corpus Design for Malay Corpus-based Speech Synthesis System

    Directory of Open Access Journals (Sweden)

    Tian-Swee Tan

    2009-01-01

    Full Text Available Problem statement: The speech corpus is one of the major components of corpus-based synthesis. The quality and coverage of the speech corpus affect the quality of the synthesized speech. Approach: This study proposes a corpus design for a Malay corpus-based speech synthesis system. This includes a study of the design criteria for corpus-based speech synthesis, the design of the Malay corpus-based database, and the concatenation engine of the Malay corpus-based synthesis system. A 10-million-word digital text corpus of Malay was collected from Malay internet news. This text corpus was analyzed using word frequency counts to find the high-frequency words to be used for designing the sentences of the speech corpus. Results: Altogether 381 sentences for the speech corpus were designed using 70% of the high-frequency words from the 10-million-word text corpus. The speech corpus consists of 16,826 phoneme units and its total storage size is 37.6 MB. All the phone units are phonetically transcribed to preserve the phonetic context of their origin, to be used as phonetic context units. This speech corpus was labeled at the phoneme level and used for variable-length continuous phoneme-based concatenation. Conclusion/Recommendation: This study has proposed a platform for designing a speech corpus, especially for Malay text-to-speech, which can be further enhanced to support greater coverage and higher naturalness of the synthetic speech.
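
    The frequency-driven step in the corpus design described above, counting word frequencies in a large text corpus and keeping the high-frequency words that together account for roughly 70% of all tokens, can be sketched as follows; the corpus here is a tiny placeholder string, not the 10-million-word news corpus.

```python
import re
from collections import Counter

# Placeholder text standing in for the large Malay news corpus.
corpus_text = "berita hari ini ... berita ekonomi hari ini ... berita sukan ..."
tokens = re.findall(r"\w+", corpus_text.lower())

freq = Counter(tokens)
total = sum(freq.values())

high_freq_words, covered = [], 0
for word, count in freq.most_common():
    high_freq_words.append(word)
    covered += count
    if covered / total >= 0.70:     # high-frequency words covering 70% of tokens
        break
# Sentences for the speech corpus are then composed from `high_freq_words`.
```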

  15. Exploration of Speech Planning and Producing by Speech Error Analysis

    Institute of Scientific and Technical Information of China (English)

    冷卉

    2012-01-01

    Speech error analysis is an indirect way to discover speech planning and producing processes. From some speech errors made by people in their daily life, linguists and learners can reveal the planning and producing processes more easily and clearly.

  16. Indirect Speech Acts

    Institute of Scientific and Technical Information of China (English)

    李威

    2001-01-01

    Indirect speech acts are frequently used in verbal communication, and their interpretation is of great importance for developing students' communicative competence. This paper, therefore, intends to present Searle's indirect speech acts and explore how indirect speech acts are interpreted in accordance with two influential theories. It consists of four parts. Part one gives a general introduction to the notion of speech act theory. Part two elaborates on the conception of indirect speech acts proposed by Searle and his supplement and development of illocutionary acts. Part three deals with the interpretation of indirect speech acts. Part four draws implications from the previous study and also serves as the conclusion of the dissertation.

  17. Task Difficulty in Oral Speech Act Production

    Science.gov (United States)

    Taguchi, Naoko

    2007-01-01

    This study took a pragmatic approach to examining the effects of task difficulty on L2 oral output. Twenty native English speakers and 59 Japanese students of English at two different proficiency levels produced speech acts of requests and refusals in a role play task. The task had two situation types based on three social variables:…

  18. Reach for Speech: Communication Skills through Sociodrama.

    Science.gov (United States)

    Landy, Robert J.; Borisoff, Deborah J.

    1987-01-01

    Describes a program to help secondary school students develop speech skills by exploring social issues through role-playing. Notes that this method motivates discouraged students, reduces communication anxiety, improves research skills, fosters appropriate verbal and nonverbal skills, and stimulates affective learning. (JG)

  19. Esophageal speeches modified by the Speech Enhancer Program®

    OpenAIRE

    Manochiopinig, Sriwimon; Boonpramuk, Panuthat

    2014-01-01

    Esophageal speech appears to be the first choice of speech treatment after laryngectomy. However, many people who have undergone laryngectomy are unable to speak well. The aim of this study was to evaluate the quality of the modified speech of Thai esophageal speakers using the Speech Enhancer Program®. Five speech–language pathologists were asked to assess the speech accuracy and intelligibility of the words and continuous speech of the seven laryngectomized speakers. A comparison study was conduc...

  20. Speech Prosody in Persian Language

    Directory of Open Access Journals (Sweden)

    Maryam Nikravesh

    2014-05-01

    Full Text Available Background: In verbal communication, in addition to semantic and grammatical aspects such as vocabulary, syntax and phonemes, special voice characteristics are used that are called speech prosody. Speech prosody is one of the important factors of communication and includes intonation, duration, pitch, loudness, stress, rhythm, etc. The aim of this survey is to study some factors of prosody, namely duration, fundamental frequency range and intonation contour. Materials and Methods: This study was performed with a cross-sectional, descriptive-analytic approach. The participants included 134 males and females between 18 and 30 years old who speak Persian normally. Two sentences, one interrogative and one declarative, were studied. Voice samples were analyzed with the Dr. Speech software (real analysis software), the data were analyzed with one-way analysis of variance and independent t-tests, and the intonation contour was drawn for each sentence. Results: Mean duration differed significantly between the sentence types. Mean duration also differed significantly between females and males. Fundamental frequency range did not differ significantly between the sentence types. The fundamental frequency range in females was higher than in males. Conclusion: Duration is an effective factor in Persian prosody. The higher fundamental frequency range in females results from different anatomical and physiological mechanisms in the phonation system; it also reflects how the language is used by female Farsi speakers. The final part of the intonation contour is rising in yes/no questions and falling in declarative sentences.
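
    The two acoustic measures at the center of this record, utterance duration and fundamental frequency range, can be estimated from a recording with a simple frame-wise F0 tracker. The sketch below uses a toy autocorrelation-based tracker on a synthetic signal; it is not the Dr. Speech algorithm, and all thresholds and frame sizes are assumptions.

```python
import numpy as np

def f0_track(x, sr, fmin=75, fmax=400, frame=0.04, hop=0.01):
    """Toy autocorrelation F0 tracker: one estimate per frame, 0 where unvoiced."""
    n, h = int(frame * sr), int(hop * sr)
    lo, hi = int(sr / fmax), int(sr / fmin)
    f0 = []
    for i in range(0, len(x) - n, h):
        seg = x[i:i + n] - np.mean(x[i:i + n])
        ac = np.correlate(seg, seg, "full")[n - 1:]
        if ac[0] <= 0:
            f0.append(0.0)
            continue
        lag = lo + np.argmax(ac[lo:hi])
        f0.append(sr / lag if ac[lag] / ac[0] > 0.3 else 0.0)  # crude voicing check
    return np.array(f0)

# Duration and F0 range of an utterance (stand-in 1-second synthetic signal):
sr = 16000
x = np.sin(2 * np.pi * 150 * np.arange(sr) / sr)
f0 = f0_track(x, sr)
voiced = f0[f0 > 0]
duration_s = len(x) / sr
f0_range = voiced.max() - voiced.min() if voiced.size else 0.0
```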

  1. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication-including voice-will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  2. Ear, Hearing and Speech

    DEFF Research Database (Denmark)

    Poulsen, Torben

    2000-01-01

    An introduction is given to the anatomy and the function of the ear, basic psychoacoustic matters (hearing threshold, loudness, masking), the speech signal and speech intelligibility. The lecture note is written for the course: Fundamentals of Acoustics and Noise Control (51001).

  3. Advances in Speech Recognition

    CERN Document Server

    Neustein, Amy

    2010-01-01

    This volume is comprised of contributions from eminent leaders in the speech industry, and presents a comprehensive and in-depth analysis of the progress of speech technology in the topical areas of mobile settings, healthcare and call centers. The material addresses the technical aspects of voice technology within the framework of societal needs, such as the use of speech recognition software to produce up-to-date electronic health records, notwithstanding patients making changes to health plans and physicians. Included will be discussion of speech engineering, linguistics, human factors ana

  4. Statistical speech segmentation and word learning in parallel: scaffolding from child-directed speech

    Directory of Open Access Journals (Sweden)

    Daniel eYurovsky

    2012-10-01

    Full Text Available In order to acquire their native languages, children must learn richly structured systems with regularities at multiple levels. While structure at different levels could be learned serially, e.g. speech segmentation coming before word-object mapping, redundancies across levels make parallel learning more efficient. For instance, a series of syllables is likely to be a word not only because of high transitional probabilities, but also because of a consistently co-occurring object. But additional statistics require additional processing, and thus might not be useful to cognitively constrained learners. We show that the structure of child-directed speech makes this problem solvable for human learners. First, a corpus of child-directed speech was recorded from parents and children engaged in a naturalistic free-play task. Analyses revealed two consistent regularities in the sentence structure of naming events. These regularities were subsequently encoded in an artificial language to which adult participants were exposed in the context of simultaneous statistical speech segmentation and word learning. Either regularity was sufficient to support successful learning, but no learning occurred in the absence of both regularities. Thus, the structure of child-directed speech plays an important role in scaffolding speech segmentation and word learning in parallel.
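
    The transitional-probability cue mentioned in this abstract is easy to make concrete. The Python sketch below (the syllable stream and its "words" are invented for illustration, not taken from the study's corpus) computes forward transitional probabilities P(next | current) over a syllable sequence; within-word transitions come out higher than across-word ones.

        from collections import Counter

        def transitional_probabilities(syllables):
            """Forward transitional probability P(B | A) = count(A B) / count(A)."""
            pair_counts = Counter(zip(syllables, syllables[1:]))
            unit_counts = Counter(syllables[:-1])
            return {(a, b): c / unit_counts[a] for (a, b), c in pair_counts.items()}

        # Hypothetical stream in which "do-bu" and "ti-ga" behave like words:
        stream = "do bu ti ga do bu la po ti ga do bu ti ga".split()
        for pair, tp in sorted(transitional_probabilities(stream).items(), key=lambda x: -x[1]):
            print(pair, round(tp, 2))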

  5. Semantic contingency in the father's and mother's speech: a comparative analysis / Contingência semântica das falas materna e paterna: uma análise comparativa

    Directory of Open Access Journals (Sweden)

    Patrícia Nunes da Fonsêca

    2006-01-01

    Full Text Available The aim of this study is to compare the semantic contingency of the father's and mother's speech directed to the child, considering the child's participation in the conversation. Twelve (12) children between 24 and 31 months old and their parents, from middle-class families in the city of João Pessoa, Brazil, participated in this study. The parents were videotaped while interacting with their children in a free play situation for 20 minutes in their homes. The results indicated that the mothers gave significantly more continuity to the children's speech than the fathers did. In relation to other contingent behaviour, the mothers produced more recasts and imitations of the boys' speech than of the girls'. These results are discussed considering the children's level of linguistic development and the context in which the interactions occurred, and by analyzing the implications of contingency and speech breakdown for the process of language acquisition.

  6. THE USE OF EXPRESSIVE SPEECH ACTS IN HANNAH MONTANA SESSION 1

    Directory of Open Access Journals (Sweden)

    Nur Vita Handayani

    2015-07-01

    Full Text Available This study aims to describe the kinds and forms of expressive speech acts in Hannah Montana Session 1. It employs a descriptive qualitative method. The research object was the expressive speech act, and the data source was utterances containing expressive speech acts in the film Hannah Montana Session 1. The researcher used the observation method and a noting technique to collect the data, and the data were analyzed descriptively and qualitatively. The findings show that ten kinds of expressive speech act are found in Hannah Montana Session 1, namely expressing apology, expressing thanks, expressing sympathy, expressing attitudes, expressing greeting, expressing wishes, expressing joy, expressing pain, expressing likes, and expressing dislikes. The forms of expressive speech act are direct literal expressive speech act, direct non-literal expressive speech act, indirect literal expressive speech act, and indirect non-literal expressive speech act.

  7. Use of Deixis in Donald Trump's Campaign Speech

    OpenAIRE

    Hanim, Saidatul

    2017-01-01

    The aims of this study are (1) to find out the types of deixis in Donald Trump's campaign speech, (2) to find out the reasons for the use of the dominant type of deixis in Donald Trump's campaign speech and (3) to find out whether or not deixis is used appropriately in Donald Trump's campaign speech. This research is conducted using qualitative content analysis. The data of the study are the utterances from the script of Donald Trump's campaign speech. The data are analyzed by using Levinson ...

  8. Speech-Language Therapy (For Parents)

    Science.gov (United States)

    A KidsHealth guide for parents on speech-language therapy for children with speech and/or language disorders, covering speech disorders, language disorders, and feeding disorders.

  9. System Integration and Control in a Speech Understanding System

    Science.gov (United States)

    1975-09-01

    present in a specified time region of the input. Phonological and acoustic-phonetic rules are used by the mapper to relate phonetic spellings to ... speech waveform. An acoustic-phonetic processor analyzes the digitized waveform to extract parameters based on speech production characteristics. The

  10. Rhetorical and Linguistic Analysis of Bush's Second Inaugural Speech

    Science.gov (United States)

    Sameer, Imad Hayif

    2017-01-01

    This study attempts to analyze Bush's second inaugural speech. It aims at investigating the use of linguistic strategies in it, drawing on two models, Aristotle's and Atkinson's (1984), to direct attention towards those strategies. The analysis shows that Bush's second inaugural speech is successful…

  11. The Study of Full Light Speech Signal Collection System

    Institute of Scientific and Technical Information of China (English)

    JIA Bo; JIN Yaqiu; ZHANG Wei; HU Li; YE Kunzhen

    2001-01-01

    The demodulation characteristics of 3×3 optical fiber couplers are analyzed, and their application to coherent communication systems and speech signal collection is pointed out. Experiments verify the feasibility of an all-optical speech signal collection system.

  12. Speech recognition using articulatory and excitation source features

    CERN Document Server

    Rao, K Sreenivasa

    2017-01-01

    This book discusses the contribution of articulatory and excitation source information in discriminating sound units. The authors focus on excitation source component of speech -- and the dynamics of various articulators during speech production -- for enhancement of speech recognition (SR) performance. Speech recognition is analyzed for read, extempore, and conversation modes of speech. Five groups of articulatory features (AFs) are explored for speech recognition, in addition to conventional spectral features. Each chapter provides the motivation for exploring the specific feature for SR task, discusses the methods to extract those features, and finally suggests appropriate models to capture the sound unit specific knowledge from the proposed features. The authors close by discussing various combinations of spectral, articulatory and source features, and the desired models to enhance the performance of SR systems.

  13. Speech Compression for Noise-Corrupted Thai Expressive Speech

    Directory of Open Access Journals (Sweden)

    Suphattharachai Chomphan

    2011-01-01

    Full Text Available Problem statement: In speech communication, speech coding aims at preserving speech quality at a lower coding bitrate. In real communication environments, various types of noise deteriorate speech quality, and expressive speech with different speaking styles may yield different speech quality under the same coding method. Approach: This research studied speech compression for noise-corrupted Thai expressive speech using two coding methods, CS-ACELP and MP-CELP. The speech material included a hundred male and a hundred female speech utterances. Four speaking styles were included: enjoyable, sad, angry and reading. Five Thai sentences were chosen. Three types of noise were included (train, car and air conditioner), each at five levels from 0-20 dB. The subjective mean opinion score test was used in the evaluation. Results: The experimental results showed that CS-ACELP gave better speech quality than MP-CELP at all three bitrates of 6000, 8600 and 12600 bps. Regarding noise level, 20-dB noise gave the best speech quality, while 0-dB noise gave the worst. Regarding speaker gender, female speech gave better results than male speech. Regarding noise type, air-conditioner noise gave the best speech quality, while train noise gave the worst. Conclusion: The coding method, the type and level of noise, and the speaker gender all influence coded speech quality.
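
    The noise conditions described above (each noise type added at levels between 0 and 20 dB SNR) can be reproduced by scaling the noise against the speech before coding. A minimal numpy sketch of that mixing step is given below; the signals are synthetic stand-ins, not the Thai speech material of the study.

        import numpy as np

        def mix_at_snr(speech, noise, snr_db):
            """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then add it."""
            noise = np.resize(noise, speech.shape)      # repeat or trim noise to the speech length
            p_speech = np.mean(speech ** 2)
            p_noise = np.mean(noise ** 2)
            gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
            return speech + gain * noise

        # Hypothetical usage: build several noise levels for one noise type.
        rng = np.random.default_rng(0)
        speech = rng.standard_normal(16000)             # stand-in for a 1-s utterance at 16 kHz
        noise = rng.standard_normal(16000)              # stand-in for train/car/air-conditioner noise
        noisy = {snr: mix_at_snr(speech, noise, snr) for snr in (0, 5, 10, 15, 20)}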

  14. SUSTAINABILITY IN THE BOWELS OF SPEECHES

    Directory of Open Access Journals (Sweden)

    Jadir Mauro Galvao

    2012-10-01

    Full Text Available The theme of sustainability has not yet managed to become an integral part of the theoretical repertoire underlying our most everyday actions; it visits some of our thoughts and permeates many of our speeches, but little more. The big event of 2012, the Rio+20 meeting, gathered gazes from every corner of the planet around this burning theme, and yet we still advance timidly. Although it is not very clear what the term sustainability encompasses, it does not sound entirely strange: we associate it with things like ecology, the planet, the waste emitted by factory smokestacks, deforestation, recycling and global warming. Our goal in this article, however, is less to clarify the term conceptually and more to observe how it appears in the speeches of that conference. When the competent authorities talk about sustainability, what do they relate it to? We intend to investigate, in the lines and between the lines of these speeches, the assumptions associated with the term. To that end, we will analyze the speech of the People's Summit, the opening speech of President Dilma and the emblematic speech of the President of Uruguay, José Pepe Mujica.

  15. Expression of future prospective in indirect speech

    Directory of Open Access Journals (Sweden)

    Bodnaruk Elena Vladimirovna

    2015-03-01

    Full Text Available The article analyzes the characteristics and use of the grammatical, lexical and lexico-grammatical means used to create a future prospect in indirect discourse. The material for the study was narrative prose by contemporary German writers. The analysis of the empirical material shows that indirect discourse has a preterite basis and most frequently renders the inner speech of characters. The most widely used form with future semantics in preterite-based indirect speech is conditional I, which formally has a conjunctive basis but is mostly used with indicative semantics. The preterite indicative competes with conditional I in indirect speech. A characteristic feature of indirect speech is the use of modal verbs, which, thanks to their semantics, usually refer an action to a later time and thereby create the future prospect of the statement. The most frequent were the modal verbs wollen and sollen in the preterite; the verbs müssen and können were rarer. German indirect speech is also distinguished by the use of forms on a conjunctive basis: the preterite and pluperfect conjunctive. Both forms express values similar to those of the indicative, but with a slightly more pronounced seme of uncertainty accompanying their future uses in indirect speech. In addition, the pluperfect conjunctive differs from the other forms by the presence of a seme of completeness.

  16. Emil Kraepelin's dream speech: a psychoanalytic interpretation.

    Science.gov (United States)

    Engels, Huub; Heynick, Frank; van der Staak, Cees

    2003-10-01

    Freud's contemporary fellow psychiatrist Emil Kraepelin collected over the course of several decades some 700 specimens of speech in dreams, mostly his own, along with various concomitant data. These generally exhibit far more obvious primary-process influence than do the dream speech specimens found in Freud's corpus; but Kraepelin eschewed any depth-psychology interpretation. In this paper the authors first explore the respective orientations of Freud and Kraepelin to mind and brain, and normal and pathological phenomena, particularly as these relate to speech and dreaming. They then proceed, with the help of biographical sources, to analyze a selection of Kraepelin's deviant dream speech in the manner that was pioneered by Freud, most notably in his 'Autodidasker' dream. They find that Kraepelin's particular concern with the preservation of his rather uncommon family name--and with the preservation of his medical nomenclature, which lent prestige to that name--appears to provide a key link in a chain of associations for elucidating his dream speech specimens. They further suggest, more generally, that one's proper name, as a minimal characteristic of the ego during sleep, may prove to be a key in interpreting the dream speech of others as well.

  17. Tracking Speech Sound Acquisition

    Science.gov (United States)

    Powell, Thomas W.

    2011-01-01

    This article describes a procedure to aid in the clinical appraisal of child speech. The approach, based on the work by Dinnsen, Chin, Elbert, and Powell (1990; Some constraints on functionally disordered phonologies: Phonetic inventories and phonotactics. "Journal of Speech and Hearing Research", 33, 28-37), uses a railway idiom to track gains in…

  18. Preschool Connected Speech Inventory.

    Science.gov (United States)

    DiJohnson, Albert; And Others

    This speech inventory developed for a study of aurally handicapped preschool children (see TM 001 129) provides information on intonation patterns in connected speech. The inventory consists of a list of phrases and simple sentences accompanied by pictorial clues. The test is individually administered by a teacher-examiner who presents the spoken…

  19. Illustrated Speech Anatomy.

    Science.gov (United States)

    Shearer, William M.

    Written for students in the fields of speech correction and audiology, the text deals with the following: structures involved in respiration; the skeleton and the processes of inhalation and exhalation; phonation and pitch, the larynx, and esophageal speech; muscles involved in articulation; muscles involved in resonance; and the anatomy of the…

  20. Private Speech in Ballet

    Science.gov (United States)

    Johnston, Dale

    2006-01-01

    Authoritarian teaching practices in ballet inhibit the use of private speech. This paper highlights the critical importance of private speech in the cognitive development of young ballet students, within what is largely a non-verbal art form. It draws upon research by Russian psychologist Lev Vygotsky and contemporary socioculturalists, to…

  3. Free Speech Yearbook 1976.

    Science.gov (United States)

    Phifer, Gregg, Ed.

    The articles collected in this annual address several aspects of First Amendment Law. The following titles are included: "Freedom of Speech As an Academic Discipline" (Franklyn S. Haiman), "Free Speech and Foreign-Policy Decision Making" (Douglas N. Freeman), "The Supreme Court and the First Amendment: 1975-1976"…

  5. Advertising and Free Speech.

    Science.gov (United States)

    Hyman, Allen, Ed.; Johnson, M. Bruce, Ed.

    The articles collected in this book originated at a conference at which legal and economic scholars discussed the issue of First Amendment protection for commercial speech. The first article, in arguing for freedom for commercial speech, finds inconsistent and untenable the arguments of those who advocate freedom from regulation for political…

  6. Free Speech. No. 38.

    Science.gov (United States)

    Kane, Peter E., Ed.

    This issue of "Free Speech" contains the following articles: "Daniel Schorr Relieved of Reporting Duties" by Laurence Stern, "The Sellout at CBS" by Michael Harrington, "Defending Dan Schorr" by Tom Wicker, "Speech to the Washington Press Club, February 25, 1976" by Daniel Schorr, "Funds Voted For Schorr Inquiry" by Richard Lyons, "Erosion of the…

  7. Neural Oscillations Carry Speech Rhythm through to Comprehension.

    Science.gov (United States)

    Peelle, Jonathan E; Davis, Matthew H

    2012-01-01

    A key feature of speech is the quasi-regular rhythmic information contained in its slow amplitude modulations. In this article we review the information conveyed by speech rhythm, and the role of ongoing brain oscillations in listeners' processing of this content. Our starting point is the fact that speech is inherently temporal, and that rhythmic information conveyed by the amplitude envelope contains important markers for place and manner of articulation, segmental information, and speech rate. Behavioral studies demonstrate that amplitude envelope information is relied upon by listeners and plays a key role in speech intelligibility. Extending behavioral findings, data from neuroimaging - particularly electroencephalography (EEG) and magnetoencephalography (MEG) - point to phase locking by ongoing cortical oscillations to low-frequency information (~4-8 Hz) in the speech envelope. This phase modulation effectively encodes a prediction of when important events (such as stressed syllables) are likely to occur, and acts to increase sensitivity to these relevant acoustic cues. We suggest a framework through which such neural entrainment to speech rhythm can explain effects of speech rate on word and segment perception (i.e., that the perception of phonemes and words in connected speech is influenced by preceding speech rate). Neuroanatomically, acoustic amplitude modulations are processed largely bilaterally in auditory cortex, with intelligible speech resulting in differential recruitment of left-hemisphere regions. Notable among these is lateral anterior temporal cortex, which we propose functions in a domain-general fashion to support ongoing memory and integration of meaningful input. Together, the reviewed evidence suggests that low-frequency oscillations in the acoustic speech signal form the foundation of a rhythmic hierarchy supporting spoken language, mirrored by phase-locked oscillations in the human brain.
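
    The low-frequency envelope information that the review centers on can be extracted in a few lines. The scipy sketch below is only a schematic of the analysis idea (broadband Hilbert envelope band-passed to roughly 4-8 Hz); it is not the authors' processing pipeline.

        import numpy as np
        from scipy.signal import hilbert, butter, filtfilt

        def theta_band_envelope(speech, fs, lo=4.0, hi=8.0):
            """Broadband amplitude envelope of `speech`, band-passed to the ~4-8 Hz range."""
            envelope = np.abs(hilbert(speech))                       # instantaneous amplitude
            b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, envelope)                          # zero-phase band-pass

        # Hypothetical usage on a synthetic 5-Hz-modulated tone at 16 kHz:
        fs = 16000
        t = np.arange(2 * fs) / fs
        speech_like = np.sin(2 * np.pi * 150 * t) * (1 + 0.8 * np.sin(2 * np.pi * 5 * t))
        slow_envelope = theta_band_envelope(speech_like, fs)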

  8. Neural oscillations carry speech rhythm through to comprehension

    Directory of Open Access Journals (Sweden)

    Jonathan E Peelle

    2012-09-01

    Full Text Available A key feature of speech is the quasi-regular rhythmic information contained in its slow amplitude modulations. In this article we review the information conveyed by speech rhythm, and the role of ongoing brain oscillations in listeners’ processing of this content. Our starting point is the fact that speech is inherently temporal, and that rhythmic information conveyed by the amplitude envelope contains important markers for place and manner of articulation, segmental information, and speech rate. Behavioral studies demonstrate that amplitude envelope information is relied upon by listeners and plays a key role in speech intelligibility. Extending behavioral findings, data from neuroimaging—particularly electroencephalography (EEG) and magnetoencephalography (MEG)—point to phase locking by ongoing cortical oscillations to low-frequency information (~4–8 Hz) in the speech envelope. This phase modulation effectively encodes a prediction of when important events (such as stressed syllables) are likely to occur, and acts to increase sensitivity to these relevant acoustic cues. We suggest a framework through which such neural entrainment to speech rhythm can explain effects of speech rate on word and on segment perception (i.e., that the perception of phonemes and words in connected speech is influenced by preceding speech rate). Neuroanatomically, acoustic amplitude modulations are processed largely bilaterally in auditory cortex, with intelligible speech resulting in additional recruitment of left hemisphere regions. Notable among these is lateral anterior temporal cortex, which we propose functions in a domain-general fashion to support ongoing memory and integration of meaningful input. Together, the reviewed evidence suggests that low frequency oscillations in the acoustic speech signal form the foundation of a rhythmic hierarchy supporting spoken language, mirrored by phase-locked oscillations in the human brain.

  9. Apraxia of speech and cerebellar mutism syndrome: a case report.

    Science.gov (United States)

    De Witte, E; Wilssens, I; De Surgeloose, D; Dua, G; Moens, M; Verhoeven, J; Manto, M; Mariën, P

    2017-01-01

    Cerebellar mutism syndrome (CMS) or posterior fossa syndrome (PFS) consists of a constellation of neuropsychiatric, neuropsychological and neurogenic speech and language deficits. It is most commonly observed in children after posterior fossa tumor surgery. The most prominent feature of CMS is mutism, which generally starts a few days after the operation, has a limited duration and is typically followed by motor speech deficits. However, the core speech disorder subserving CMS is still unclear. This study investigates the speech and language symptoms following posterior fossa medulloblastoma surgery in a 12-year-old right-handed boy. An extensive battery of formal speech (DIAS = Diagnostic Instrument for Apraxia of Speech) and language tests was administered during a follow-up of 6 weeks after surgery. Although the neurological and neuropsychological (affective, cognitive) symptoms of this patient are consistent with Schmahmann's syndrome, the speech and language symptoms were markedly different from what is typically described in the literature. In-depth analyses of speech production revealed features consistent with a diagnosis of apraxia of speech (AoS) while ataxic dysarthria was completely absent. In addition, language assessments showed genuine aphasic deficits as reflected by distorted language production and perception, word-finding difficulties, grammatical disturbances and verbal fluency deficits. To the best of our knowledge this case might be the first example that clearly demonstrates that a higher level motor planning disorder (apraxia) may be the origin of disrupted speech in CMS. In addition, the identification of non-motor linguistic disturbances during follow-up adds to the view that the cerebellum not only plays a crucial role in the planning and execution of speech but also in linguistic processing. Whether the cerebellum has a direct or indirect role in motor speech planning needs to be further investigated.

  10. Charisma in business speeches

    DEFF Research Database (Denmark)

    Niebuhr, Oliver; Brem, Alexander; Novák-Tót, Eszter

    2016-01-01

    Charisma is a key component of spoken language interaction; and it is probably for this reason that charismatic speech has been the subject of intensive research for centuries. However, what is still largely missing is a quantitative and objective line of research that, firstly, involves analyses of the acoustic-prosodic signal, secondly, focuses on business speeches like product presentations, and, thirdly, in doing so, advances the still fairly fragmentary evidence on the prosodic correlates of charismatic speech. We show that the prosodic features of charisma in political speeches also apply to business speeches. Consistent with the public opinion, our findings are indicative of Steve Jobs being a more charismatic speaker than Mark Zuckerberg. Beyond previous studies, our data suggest that rhythm and emphatic accentuation are also involved in conveying charisma. Furthermore, the differences...

  11. Analysis of speech-based speech transmission index methods with implications for nonlinear operations

    Science.gov (United States)

    Goldsworthy, Ray L.; Greenberg, Julie E.

    2004-12-01

    The Speech Transmission Index (STI) is a physical metric that is well correlated with the intelligibility of speech degraded by additive noise and reverberation. The traditional STI uses modulated noise as a probe signal and is valid for assessing degradations that result from linear operations on the speech signal. Researchers have attempted to extend the STI to predict the intelligibility of nonlinearly processed speech by proposing variations that use speech as a probe signal. This work considers four previously proposed speech-based STI methods and four novel methods, studied under conditions of additive noise, reverberation, and two nonlinear operations (envelope thresholding and spectral subtraction). Analyzing intermediate metrics in the STI calculation reveals why some methods fail for nonlinear operations. Results indicate that none of the previously proposed methods is adequate for all of the conditions considered, while four proposed methods produce qualitatively reasonable results and warrant further study. The discussion considers the relevance of this work to predicting the intelligibility of cochlear-implant processed speech.
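
    For readers unfamiliar with the quantities involved, the sketch below illustrates the general shape of a speech-based STI computation: octave-band envelopes of the clean probe and the degraded speech yield a modulation transfer value per band, which is converted to an apparent SNR and a transmission index. It is a deliberately simplified stand-in (an envelope-regression style estimate with equal band weights and no modulation filterbank), not one of the specific methods compared in the paper.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        OCTAVE_CENTERS = [125, 250, 500, 1000, 2000, 4000, 8000]     # Hz

        def band_envelope(x, fs, fc):
            """Hilbert envelope of `x` restricted to the octave band centred at `fc`."""
            lo, hi = fc / np.sqrt(2) / (fs / 2), min(fc * np.sqrt(2) / (fs / 2), 0.99)
            sos = butter(4, [lo, hi], btype="band", output="sos")
            return np.abs(hilbert(sosfiltfilt(sos, x)))

        def sti_like_index(clean, degraded, fs):
            """Crude speech-based STI: per-band MTF -> apparent SNR (clipped) -> mean TI."""
            tis = []
            for fc in OCTAVE_CENTERS:
                if fc * np.sqrt(2) >= fs / 2:
                    continue                                         # skip bands above Nyquist
                ec, ed = band_envelope(clean, fs, fc), band_envelope(degraded, fs, fc)
                ec0, ed0 = ec - ec.mean(), ed - ed.mean()
                m = (ec.mean() / ed.mean()) * np.dot(ec0, ed0) / np.dot(ec0, ec0)
                m = np.clip(m, 1e-3, 1 - 1e-3)
                snr_apparent = np.clip(10 * np.log10(m / (1 - m)), -15, 15)
                tis.append((snr_apparent + 15) / 30)                 # transmission index in [0, 1]
            return float(np.mean(tis))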

  12. Investigation of habitual pitch during free play activities for preschool-aged children.

    Science.gov (United States)

    Chen, Yang; Kimelman, Mikael D Z; Micco, Katie

    2009-01-01

    This study is designed to compare the habitual pitch measured in two different speech activities (free play and the traditionally used structured speech activity) for normally developing preschool-aged children, to explore to what extent preschoolers vary their vocal pitch across different speech environments. Habitual pitch measurements were conducted for 10 normally developing children (2 boys, 8 girls) between the ages of 31 months and 71 months during two different activities: (1) free play; and (2) structured speech. Speech samples were recorded using a throat microphone connected to a wireless transmitter in both activities. The habitual pitch (in Hz) was measured for all collected speech samples using voice analysis software (Real-Time Pitch). Significantly higher habitual pitch was found during free play than during structured speech activities. In addition, there was no significant difference in habitual pitch across the various structured speech activities. Findings suggest that preschoolers' vocal use is more effortful during free play than during structured activities. It is recommended that a comprehensive evaluation of young children's voice be based on speech/voice samples collected from both free play and structured activities.
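
    Habitual pitch in studies like this one boils down to a central-tendency statistic of F0 over voiced frames. The self-contained sketch below uses a plain autocorrelation pitch tracker with arbitrarily chosen thresholds and a child-appropriate search range; it is only an illustration of the measurement, not the Real-Time Pitch algorithm used in the study.

        import numpy as np

        def frame_f0(frame, fs, fmin=150.0, fmax=500.0):
            """Autocorrelation-based F0 of one frame, or None if the frame looks unvoiced."""
            frame = frame - frame.mean()
            if np.max(np.abs(frame)) < 1e-3:                 # silence gate (arbitrary threshold)
                return None
            ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
            lo, hi = int(fs / fmax), int(fs / fmin)
            lag = lo + int(np.argmax(ac[lo:hi]))
            if ac[lag] < 0.3 * ac[0]:                        # weak periodicity -> treat as unvoiced
                return None
            return fs / lag

        def habitual_pitch(signal, fs, frame_ms=40):
            """Median F0 (Hz) over voiced frames, a simple stand-in for 'habitual pitch'."""
            n = int(fs * frame_ms / 1000)
            f0s = (frame_f0(signal[i:i + n], fs) for i in range(0, len(signal) - n, n))
            voiced = [f0 for f0 in f0s if f0 is not None]
            return float(np.median(voiced)) if voiced else float("nan")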

  13. Fundamental Frequency and Direction-of-Arrival Estimation for Multichannel Speech Enhancement

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam

    Audio systems receive the speech signals of interest usually in the presence of noise. The noise has profound impacts on the quality and intelligibility of the speech signals, and it is therefore clear that the noisy signals must be cleaned up before being played back, stored, or analyzed. We can...... array, the multichannel periodic signals may have different phases due to the time-differences-of-arrivals (TDOAs) which are related to the direction-of-arrival (DOA) of the impinging sound waves. Hence, the outputs of the array can be steered to the direction of the signal of interest in order to align......-based methods in colored noise. Evaluations of the estimators comparing with the minimum variance of the deterministic parameters and the other methods confirm that the proposed estimators are statistically efficient in colored noise and computationally simple. Finally, we propose model-based beamformers...

  14. A Corpus Analysis of Patterns of Age-Related Change in Conversational Speech

    OpenAIRE

    2010-01-01

    Conversational speech from over three hundred speakers aged 17 to 68 years old was analyzed for age-related changes in the timing and content of spoken language production. Overall, several relationships between the lexical content, timing, and fluency of speech emerged, such that more novel and lower frequency words are associated with slower speech and higher levels of disfluencies. Speaker age was associated with slower speech and more filled pauses, particularly those associated with lexi...

  15. Post-error Correction in Automatic Speech Recognition Using Discourse Information

    OpenAIRE

    Kang,S.; Kim, J. -H.; Seo, J.

    2014-01-01

    Overcoming speech recognition errors in the field of human-computer interaction is important in ensuring a consistent user experience. This paper proposes a semantic-oriented post-processing approach for the correction of errors in speech recognition. The novelty of the model proposed here is that it re-ranks the n-best hypothesis of speech recognition based on the user's intention, which is analyzed from previous discourse information, while conventional automatic speech reco...

  16. Problematic Game Play: The Diagnostic Value of Playing Motives, Passion, and Playing Time in Men

    Directory of Open Access Journals (Sweden)

    Julia Kneer

    2015-04-01

    Full Text Available Internet gaming disorder is currently listed in the DSM, not in order to diagnose such a disorder but to encourage research into this phenomenon. Even though it is still questionable whether Internet Gaming Disorder exists and can be judged a form of addiction, problematic game play is already well documented as causing problems in daily life. Approaches to predicting problematic tendencies in digital game play have mainly focused on playing time as a diagnostic criterion. However, motives for engaging in digital game play and obsessive passion for game play have also been found to predict problematic game play, but have not yet been investigated together. The present study aims at (1) analyzing whether obsessive passion can be distinguished from problematic game play as a separate concept, and (2) testing motives for game play, passion, and playing time for their predictive value for problematic tendencies. We found (N = 99 males, age: M = 22.80, SD = 3.81) that obsessive passion can be conceptually separated from problematic game play. In addition, the results suggest that, compared to playing time alone, immersion as a playing motive and obsessive passion have added predictive value for problematic game play. The implications focus on broadening the criteria used to diagnose problematic playing.

  17. Intensity of guitar playing as a function of auditory feedback.

    Science.gov (United States)

    Johnson, C I; Pick, H L; Garber, S R; Siegel, G M

    1978-06-01

    Subjects played an electric guitar while auditory feedback was attenuated or amplified at seven sidetone levels varying in 10-dB steps around a comfortable listening level. The sidetone signal was presented in quiet (experiment I) and in several levels of white noise (experiment II). Subjects compensated for feedback changes, demonstrating a sidetone-amplification effect as well as a Lombard effect. The similarity of these results to those found previously for speech suggests that guitar playing can be a useful analog for the function of auditory feedback in speech production. Unlike previous findings for speech, the sidetone-amplification effect was not potentiated by masking, consistent with a hypothesis that potentiation in speech is attributable to interference with bone conduction caused by the masking noise.

  18. Predicting speech intelligibility in conditions with nonlinearly processed noisy speech

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    The speech-based envelope power spectrum model (sEPSM; [1]) was proposed in order to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII). The sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv), which was demonstrated...... to successfully predict speech intelligibility in conditions with nonlinearly processed noisy speech, such as processing with spectral subtraction. Moreover, a multiresolution version (mr-sEPSM) was demonstrated to account for speech intelligibility in various conditions with stationary and fluctuating...... from computational auditory scene analysis and further support the hypothesis that the SNRenv is a powerful metric for speech intelligibility prediction....

  19. Sperry Univac speech communications technology

    Science.gov (United States)

    Medress, Mark F.

    1977-01-01

    Technology and systems for effective verbal communication with computers were developed. A continuous speech recognition system for verbal input, a word spotting system to locate key words in conversational speech, prosodic tools to aid speech analysis, and a prerecorded voice response system for speech output are described.

  20. Voice and Speech after Laryngectomy

    Science.gov (United States)

    Stajner-Katusic, Smiljka; Horga, Damir; Musura, Maja; Globlek, Dubravka

    2006-01-01

    The aim of the investigation is to compare voice and speech quality in alaryngeal patients using esophageal speech (ESOP, eight subjects), electroacoustical speech aid (EACA, six subjects) and tracheoesophageal voice prosthesis (TEVP, three subjects). The subjects reading a short story were recorded in the sound-proof booth and the speech samples…

  1. Speech Correction in the Schools.

    Science.gov (United States)

    Eisenson, Jon; Ogilvie, Mardel

    An introduction to the problems and therapeutic needs of school age children whose speech requires remedial attention, the text is intended for both the classroom teacher and the speech correctionist. General considerations include classification and incidence of speech defects, speech correction services, the teacher as a speaker, the mechanism…

  2. Environmental Contamination of Normal Speech.

    Science.gov (United States)

    Harley, Trevor A.

    1990-01-01

    Environmentally contaminated speech errors (irrelevant words or phrases derived from the speaker's environment and erroneously incorporated into speech) are hypothesized to occur at a high level of speech processing, but with a relatively late insertion point. The data indicate that speech production processes are not independent of other…

  3. [Prosody, speech input and language acquisition].

    Science.gov (United States)

    Jungheim, M; Miller, S; Kühn, D; Ptok, M

    2014-04-01

    In order to acquire language, children require speech input. The prosody of the speech input plays an important role. In most cultures adults modify their code when communicating with children. Compared to normal speech this code differs especially with regard to prosody. For this review a selective literature search in PubMed and Scopus was performed. Prosodic characteristics are a key feature of spoken language. By analysing prosodic features, children gain knowledge about underlying grammatical structures. Child-directed speech (CDS) is modified in a way that meaningful sequences are highlighted acoustically so that important information can be extracted from the continuous speech flow more easily. CDS is said to enhance the representation of linguistic signs. Taking into consideration what has previously been described in the literature regarding the perception of suprasegmentals, CDS seems to be able to support language acquisition due to the correspondence of prosodic and syntactic units. However, no findings have been reported, stating that the linguistically reduced CDS could hinder first language acquisition.

  4. [The systematic selection of speech audiometric procedures].

    Science.gov (United States)

    Steffens, T

    2017-03-01

    The impact of hearing loss on the ability to participate in verbal communication can be directly quantified through the use of speech audiometry. Advances in technology and the associated reduction in background noise interference for hearing aids have allowed the reproduction of very complex acoustic environments, analogous to those in which conversations occur in daily life. These capabilities have led to the creation of numerous advanced speech audiometry measures, test procedures and environments, far beyond the presentation of isolated words in an otherwise noise-free testing booth. The aim of this study was to develop a set of systematic criteria for the appropriate selection of speech audiometric material, which are presented in this article in relationship to the most widely used test procedures. Before an appropriate speech test can be selected from the numerous procedures available, the precise aims of the evaluation should first be defined. Specific test characteristics, such as validity, objectivity, reliability and sensitivity, are important for the selection of the correct test for the specific goals. A concrete understanding of the goals of the evaluation as well as of the specific test criteria plays a crucial role in the selection of speech audiometry testing procedures.

  5. Automatic Speech Signal Analysis for Clinical Diagnosis and Assessment of Speech Disorders

    CERN Document Server

    Baghai-Ravary, Ladan

    2013-01-01

    Automatic Speech Signal Analysis for Clinical Diagnosis and Assessment of Speech Disorders provides a survey of methods designed to aid clinicians in the diagnosis and monitoring of speech disorders such as dysarthria and dyspraxia, with an emphasis on the signal processing techniques, statistical validity of the results presented in the literature, and the appropriateness of methods that do not require specialized equipment, rigorously controlled recording procedures or highly skilled personnel to interpret results. Such techniques offer the promise of a simple and cost-effective, yet objective, assessment of a range of medical conditions, which would be of great value to clinicians. The ideal scenario would begin with the collection of examples of the clients’ speech, either over the phone or using portable recording devices operated by non-specialist nursing staff. The recordings could then be analyzed initially to aid diagnosis of conditions, and subsequently to monitor the clients’ progress and res...

  6. Analysis of the Phonological Skills of Children with Down Syndrome from Single Word and Connected Speech Samples.

    Science.gov (United States)

    Iacono, Teresa A.

    1998-01-01

    This study compared the speech of five children (ages 5 and 6) with Down syndrome across single word samples elicited from tests and connected speech samples elicited during play. Analysis indicated that connected speech samples provided fewer total words and word tokens and sometimes failed to target certain later developing phonemes. Clinical…

  7. Private Speech Use in Arithmetical Calculation: Contributory Role of Phonological Awareness in Children with and without Mathematical Difficulties

    Science.gov (United States)

    Ostad, Snorre A.

    2013-01-01

    The majority of recent studies conclude that children's private speech development (private speech internalization) is related to and important for mathematical development and disabilities. It is far from clear, however, whether private speech internalization itself plays any causal role in the development of mathematical competence. The main…

  9. Speech outcome after surgical treatment for oral and oropharyngeal cancer : A longitudinal assessment of patients reconstructed by a microvascular flap

    NARCIS (Netherlands)

    Borggreven, PA; Verdonck-de Leeuw; Langendijk, JA; Doornaert, P; Koster, MN; de Bree, R; Leemans, R

    Background. The aim of the study was to analyze speech outcome for patients with advanced oral/oropharyngeal cancer treated with reconstructive surgery and adjuvant radiotherapy. Methods. Speech tests (communicative suitability, intelligibility, articulation, nasality, and consonant errors) were

  10. Speech processing in mobile environments

    CERN Document Server

    Rao, K Sreenivasa

    2014-01-01

    This book focuses on speech processing in the presence of low-bit-rate coding and varying background environments. The methods presented in the book exploit the speech events which are robust in noisy environments. Accurate estimation of these crucial events will be useful for carrying out various speech tasks such as speech recognition, speaker recognition and speech rate modification in mobile environments. The authors provide insights into designing and developing robust methods to process speech in mobile environments, covering temporal and spectral enhancement methods to minimize the effect of noise and examining methods and models for speech and speaker recognition applications in mobile environments.

  11. A common functional neural network for overt production of speech and gesture.

    Science.gov (United States)

    Marstaller, L; Burianová, H

    2015-01-22

    The perception of co-speech gestures, i.e., hand movements that co-occur with speech, has been investigated by several studies. The results show that the perception of co-speech gestures engages a core set of frontal, temporal, and parietal areas. However, no study has yet investigated the neural processes underlying the production of co-speech gestures. Specifically, it remains an open question whether Broca's area is central to the coordination of speech and gestures as has been suggested previously. The objective of this study was to use functional magnetic resonance imaging to (i) investigate the regional activations underlying overt production of speech, gestures, and co-speech gestures, and (ii) examine functional connectivity with Broca's area. We hypothesized that co-speech gesture production would activate frontal, temporal, and parietal regions that are similar to areas previously found during co-speech gesture perception and that both speech and gesture as well as co-speech gesture production would engage a neural network connected to Broca's area. Whole-brain analysis confirmed our hypothesis and showed that co-speech gesturing did engage brain areas that form part of networks known to subserve language and gesture. Functional connectivity analysis further revealed a functional network connected to Broca's area that is common to speech, gesture, and co-speech gesture production. This network consists of brain areas that play essential roles in motor control, suggesting that the coordination of speech and gesture is mediated by a shared motor control network. Our findings thus lend support to the idea that speech can influence co-speech gesture production on a motoric level. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Source Separation via Spectral Masking for Speech Recognition Systems

    Directory of Open Access Journals (Sweden)

    Gustavo Fernandes Rodrigues

    2012-12-01

    Full Text Available In this paper we present an insight into the use of spectral masking techniques in the time-frequency domain as a preprocessing step for speech signal recognition. Speech recognition systems have their performance negatively affected in noisy environments or in the presence of other speech signals. The limits of these masking techniques for different levels of the signal-to-noise ratio are discussed. We show the robustness of the spectral masking techniques against four types of noise: white, pink, brown and human speech noise (babble noise). The main contribution of this work is to analyze the performance limits of recognition systems using spectral masking. We obtain an increase of 18% in the speech hit rate when the speech signals were corrupted by other speech signals or babble noise, at signal-to-noise ratios of approximately 1, 10 and 20 dB. On the other hand, applying the ideal binary masks to mixtures corrupted by white, pink and brown noise results in an average increase of 9% in the speech hit rate at the same signal-to-noise ratios. The experimental results suggest that the spectral masking techniques are more suitable for the case of babble noise, which is produced by human speech, than for white, pink and brown noise.
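
    The ideal binary mask mentioned above keeps only those time-frequency cells in which the (separately known) speech energy exceeds the noise energy by a chosen local SNR criterion. A minimal STFT-domain sketch with scipy follows; the 0 dB threshold and window length are assumptions for illustration, and the speech and noise signals are expected to be equal-length arrays.

        import numpy as np
        from scipy.signal import stft, istft

        def apply_ideal_binary_mask(speech, noise, fs, lc_db=0.0, nperseg=512):
            """Mask the noisy mixture, keeping cells where the local SNR exceeds `lc_db`."""
            _, _, S = stft(speech, fs, nperseg=nperseg)
            _, _, N = stft(noise, fs, nperseg=nperseg)
            local_snr_db = 10 * np.log10((np.abs(S) ** 2 + 1e-12) / (np.abs(N) ** 2 + 1e-12))
            mask = (local_snr_db > lc_db).astype(float)      # 1 where speech dominates, else 0
            _, _, Y = stft(speech + noise, fs, nperseg=nperseg)
            _, enhanced = istft(mask * Y, fs, nperseg=nperseg)
            return enhanced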

  13. Speech nasality and nasometry in cleft lip and palate

    Directory of Open Access Journals (Sweden)

    Fabiane Rodrigues Larangeira

    Full Text Available INTRODUCTION: Perceptual evaluation is considered the gold standard to evaluate speech nasality. Several procedures are used to collect and analyze perceptual data, which makes it susceptible to errors. Therefore, there has been an increasing desire to find methods that can improve the assessment. OBJECTIVE: To describe and compare the results of speech nasality obtained by assessments of live speech, the Test of Hypernasality (THYPER), assessments of audio-recorded speech, and nasometry. METHODS: A retrospective study consisting of 331 patients with operated unilateral cleft lip and palate. Speech nasality was assessed by four methods of assessment: live perceptual judgement, THYPER, audio-recorded speech sample judgement by multiple judges, and nasometry. All data were collected from medical records of patients, with the exception of the speech sample recording assessment, which was carried out by multiple judges. RESULTS: The results showed that the highest percentages of absence of hypernasality were obtained from judgements performed live and from the THYPER, with equal results between them (79%). Lower percentages were obtained from the recordings by judges (66%) and from nasometry (57%). CONCLUSION: The best results among the four speech nasality evaluation methods were obtained for the ones performed live (live nasality judgement by a speech pathologist and THYPER).
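
    Nasometry itself rests on a simple ratio: the acoustic energy captured by a nasal microphone expressed as a percentage of the combined nasal-plus-oral energy. The sketch below shows that computation for a two-channel recording; it is an illustration of the underlying measure, not the instrument or protocol used in this study.

        import numpy as np

        def nasalance_percent(nasal_channel, oral_channel):
            """Nasalance score: nasal RMS energy as a percentage of nasal + oral RMS energy."""
            nasal_rms = np.sqrt(np.mean(np.asarray(nasal_channel, dtype=float) ** 2))
            oral_rms = np.sqrt(np.mean(np.asarray(oral_channel, dtype=float) ** 2))
            return 100.0 * nasal_rms / (nasal_rms + oral_rms + 1e-12)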

  14. Global Freedom of Speech

    DEFF Research Database (Denmark)

    Binderup, Lars Grassme

    2007-01-01

    , as opposed to a legal norm, that curbs exercises of the right to free speech that offend the feelings or beliefs of members from other cultural groups. The paper rejects the suggestion that acceptance of such a norm is in line with liberal egalitarian thinking. Following a review of the classical liberal...... egalitarian reasons for free speech - reasons from overall welfare, from autonomy and from respect for the equality of citizens - it is argued that these reasons outweigh the proposed reasons for curbing culturally offensive speech. Currently controversial cases such as that of the Danish Cartoon Controversy...

  15. Thematic Progression and Textual Coherence in Speech

    Institute of Scientific and Technical Information of China (English)

    周云鹤

    2014-01-01

    Thematic progression can affect the flow of information and directly affect the discourse coherence. This paper analyzes thematic progression patterns of a speech about “people and nature” in the “CCTV Cup” English Speaking Contest and finds that there are three progression patterns in this text, which are parallel progression, continuous progression, and crossing progression.

  16. Speech Recognition, Disability, and College Composition

    Science.gov (United States)

    Nelson, Lorna M.; Reynolds, Thomas W., Jr.

    2015-01-01

    This study examined the composing processes of five postsecondary students who used or were learning to use speech recognition software (SR) for college-level writing. The study analyzed their composing processes through observation, interviews, and analysis of written products over a series of composing sessions. This investigation was prompted…

  17. Hearing versus Listening: Attention to Speech and Its Role in Language Acquisition in Deaf Infants with Cochlear Implants.

    Science.gov (United States)

    Houston, Derek M; Bergeson, Tonya R

    2014-01-01

    The advent of cochlear implantation has provided thousands of deaf infants and children access to speech and the opportunity to learn spoken language. Whether or not deaf infants successfully learn spoken language after implantation may depend in part on the extent to which they listen to speech rather than just hear it. We explore this question by examining the role that attention to speech plays in early language development according to a prominent model of infant speech perception - Jusczyk's WRAPSA model - and by reviewing the kinds of speech input that maintains normal-hearing infants' attention. We then review recent findings suggesting that cochlear-implanted infants' attention to speech is reduced compared to normal-hearing infants and that speech input to these infants differs from input to infants with normal hearing. Finally, we discuss possible roles attention to speech may play on deaf children's language acquisition after cochlear implantation in light of these findings and predictions from Jusczyk's WRAPSA model.

  18. Song and speech: examining the link between singing talent and speech imitation ability.

    Science.gov (United States)

    Christiner, Markus; Reiterer, Susanne M

    2013-01-01

    In previous research on speech imitation, musicality, and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and auditory memory with singing fitting better into the category of "speech" on the productive level and "music" on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. (3) The ability to sing improves the memory span of the auditory working memory.

  19. Song and speech: examining the link between singing talent and speech imitation ability

    Directory of Open Access Journals (Sweden)

    Markus eChristiner

    2013-11-01

    Full Text Available In previous research on speech imitation, musicality and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64 % of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66 % of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer’s sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and sound memory, with singing fitting better into the category of "speech" on the productive level and "music" on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. 1. Motor flexibility and the ability to sing improve language and musical function. 2. Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. 3. The ability to sing improves the memory span of auditory short-term memory.

  20. LIBERDADE DE EXPRESSÃO E DISCURSO DO ÓDIO NO BRASIL / FREE SPEECH AND HATE SPEECH IN BRAZIL

    Directory of Open Access Journals (Sweden)

    Nevita Maria Pessoa de Aquino Franca Luna

    2014-12-01

    Full Text Available The purpose of this article is to analyze the restriction of free speech when it comes close to hate speech. From this perspective, the study aims to answer the question: what understanding has the Brazilian Supreme Court adopted in cases involving the conflict between free speech and hate speech? The methodology combines a bibliographic review of the theoretical assumptions of the research (the concepts of free speech and hate speech, and the rights of defense of traditionally discriminated minorities) with empirical research (documental and jurisprudential analysis of cases judged by the American, German and Brazilian courts). Firstly, free speech is discussed, defining its meaning, content and purpose. Then, hate speech is presented as an inhibitor of free speech because it offends members of traditionally discriminated minorities, who are outnumbered or in a situation of cultural, socioeconomic or political subordination. Subsequently, some aspects of the American (negative freedom) and German (positive freedom) models are discussed, to demonstrate that different cultures adopt different legal solutions. Finally, it is concluded that the Brazilian understanding approximates the German doctrine, based on the analysis of landmark cases such as that of the publisher Siegfried Ellwanger (2003) and the Samba School Unidos do Viradouro (2008). The Brazilian approach, shaped by a multicultural country made up of different ethnicities, leads to a new process of defending minorities which, despite involving the collision of fundamental rights (dignity, equality and freedom), is still restrained by barriers incompatible with a contemporary pluralistic democracy.

  1. Speech evaluation in children with temporomandibular disorders.

    Science.gov (United States)

    Pizolato, Raquel Aparecida; Fernandes, Frederico Silva de Freitas; Gavião, Maria Beatriz Duarte

    2011-10-01

    The aims of this study were to evaluate the influence of temporomandibular disorders (TMD) on speech in children, and to verify the influence of occlusal characteristics. Speech and dental occlusal characteristics were assessed in 152 Brazilian children (78 boys and 74 girls), aged 8 to 12 (mean age 10.05 ± 1.39 years), with or without TMD signs and symptoms. The clinical signs were evaluated using the Research Diagnostic Criteria for TMD (RDC/TMD) (axis I) and the symptoms were evaluated using a questionnaire. The following groups were formed: Group TMD (n=40), TMD signs and symptoms (Group S and S, n=68), TMD signs or symptoms (Group S or S, n=33), and without signs and symptoms (Group N, n=11). Articulatory speech disorders were diagnosed during spontaneous speech and repetition of words using the "Phonological Assessment of Child Speech" for the Portuguese language. A list of 40 phonologically balanced words, read by the speech pathologist and repeated by the children, was also applied. Data were analyzed by descriptive statistics and Fisher's exact or Chi-square tests (α=0.05). A slight prevalence of articulatory disturbances, such as substitutions, omissions and distortions of the sibilants /s/ and /z/, and no deviations in jaw lateral movements were observed. Reduction of vertical amplitude was found in 10 children, the prevalence being greater in children with TMD signs and symptoms than in the normal children. Tongue protrusion in the phonemes /t/, /d/, /n/, /l/ and frontal lip position in the phonemes /s/ and /z/ were the most prevalent visual alterations. There was a high percentage of dental occlusal alterations. There was no association between TMD and speech disorders. Occlusal alterations may be factors of influence, allowing distortions and frontal lisp in the phonemes /s/ and /z/ and inadequate tongue position in the phonemes /t/, /d/, /n/ and /l/.
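
    The abstract states that associations were tested with Fisher's exact or chi-square tests at α=0.05. Below is a minimal sketch of such a test on a hypothetical 2x2 table (TMD signs/symptoms versus presence of a speech disorder) using scipy; the counts are invented for illustration and merely chosen to mirror the study's null result.

        from scipy.stats import chi2_contingency, fisher_exact

        # Hypothetical 2x2 contingency table (counts are illustrative, not the study's):
        # rows = TMD signs/symptoms present vs absent, columns = speech disorder yes/no
        table = [[25, 83],
                 [ 9, 35]]

        chi2, p_chi2, dof, expected = chi2_contingency(table)
        odds_ratio, p_fisher = fisher_exact(table)

        alpha = 0.05
        print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")
        print("association significant" if min(p_chi2, p_fisher) < alpha else
              "no significant association (consistent with the study's conclusion)")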

  2. Speech evaluation in children with temporomandibular disorders

    Directory of Open Access Journals (Sweden)

    Raquel Aparecida Pizolato

    2011-10-01

    Full Text Available OBJECTIVE: The aims of this study were to evaluate the influence of temporomandibular disorders (TMD on speech in children, and to verify the influence of occlusal characteristics. MATERIAL AND METHODS: Speech and dental occlusal characteristics were assessed in 152 Brazilian children (78 boys and 74 girls, aged 8 to 12 (mean age 10.05 ± 1.39 years with or without TMD signs and symptoms. The clinical signs were evaluated using the Research Diagnostic Criteria for TMD (RDC/TMD (axis I and the symptoms were evaluated using a questionnaire. The following groups were formed: Group TMD (n=40, TMD signs and symptoms (Group S and S, n=68, TMD signs or symptoms (Group S or S, n=33, and without signs and symptoms (Group N, n=11. Articulatory speech disorders were diagnosed during spontaneous speech and repetition of the words using the "Phonological Assessment of Child Speech" for the Portuguese language. It was also applied a list of 40 phonological balanced words, read by the speech pathologist and repeated by the children. Data were analyzed by descriptive statistics, Fisher's exact or Chi-square tests (α=0.05. RESULTS: A slight prevalence of articulatory disturbances, such as substitutions, omissions and distortions of the sibilants /s/ and /z/, and no deviations in jaw lateral movements were observed. Reduction of vertical amplitude was found in 10 children, the prevalence being greater in TMD signs and symptoms children than in the normal children. The tongue protrusion in phonemes /t/, /d/, /n/, /l/ and frontal lips in phonemes /s/ and /z/ were the most prevalent visual alterations. There was a high percentage of dental occlusal alterations. CONCLUSIONS: There was no association between TMD and speech disorders. Occlusal alterations may be factors of influence, allowing distortions and frontal lisp in phonemes /s/ and /z/ and inadequate tongue position in phonemes /t/; /d/; /n/; /l/.

  3. Impact of language on functional connectivity for audiovisual speech integration.

    Science.gov (United States)

    Shinozaki, Jun; Hiroe, Nobuo; Sato, Masa-Aki; Nagamine, Takashi; Sekiyama, Kaoru

    2016-08-11

    Visual information about lip and facial movements plays a role in audiovisual (AV) speech perception. Although this has been widely confirmed, previous behavioural studies have shown interlanguage differences, that is, native Japanese speakers do not integrate auditory and visual speech as closely as native English speakers. To elucidate the neural basis of such interlanguage differences, 22 native English speakers and 24 native Japanese speakers were examined in behavioural or functional Magnetic Resonance Imaging (fMRI) experiments while mono-syllabic speech was presented under AV, auditory-only, or visual-only conditions for speech identification. Behavioural results indicated that the English speakers identified visual speech more quickly than the Japanese speakers, and that the temporal facilitation effect of congruent visual speech was significant in the English speakers but not in the Japanese speakers. Using fMRI data, we examined the functional connectivity among brain regions important for auditory-visual interplay. The results indicated that the English speakers had significantly stronger connectivity between the visual motion area MT and the Heschl's gyrus compared with the Japanese speakers, which may subserve lower-level visual influences on speech perception in English speakers in a multisensory environment. These results suggested that linguistic experience strongly affects neural connectivity involved in AV speech integration.
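
    A common, simple way to quantify the functional connectivity discussed above is the Pearson correlation between two regions' BOLD time series. The sketch below does this for synthetic MT and Heschl's gyrus signals; it illustrates the general measure only, not the study's actual analysis pipeline.

        import numpy as np

        rng = np.random.default_rng(1)
        n_timepoints = 200

        # Synthetic BOLD time series for two regions of interest (illustrative only)
        shared = rng.normal(size=n_timepoints)                  # common fluctuation
        mt = shared + 0.8 * rng.normal(size=n_timepoints)       # visual motion area MT
        heschl = shared + 0.8 * rng.normal(size=n_timepoints)   # Heschl's gyrus

        # Functional connectivity here is simply the Pearson correlation between
        # the two regions' time series; a stronger correlation means stronger connectivity.
        r = np.corrcoef(mt, heschl)[0, 1]
        print(f"MT-Heschl functional connectivity (Pearson r) = {r:.2f}")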

  4. Impact of language on functional connectivity for audiovisual speech integration

    Science.gov (United States)

    Shinozaki, Jun; Hiroe, Nobuo; Sato, Masa-aki; Nagamine, Takashi; Sekiyama, Kaoru

    2016-01-01

    Visual information about lip and facial movements plays a role in audiovisual (AV) speech perception. Although this has been widely confirmed, previous behavioural studies have shown interlanguage differences, that is, native Japanese speakers do not integrate auditory and visual speech as closely as native English speakers. To elucidate the neural basis of such interlanguage differences, 22 native English speakers and 24 native Japanese speakers were examined in behavioural or functional Magnetic Resonance Imaging (fMRI) experiments while mono-syllabic speech was presented under AV, auditory-only, or visual-only conditions for speech identification. Behavioural results indicated that the English speakers identified visual speech more quickly than the Japanese speakers, and that the temporal facilitation effect of congruent visual speech was significant in the English speakers but not in the Japanese speakers. Using fMRI data, we examined the functional connectivity among brain regions important for auditory-visual interplay. The results indicated that the English speakers had significantly stronger connectivity between the visual motion area MT and the Heschl’s gyrus compared with the Japanese speakers, which may subserve lower-level visual influences on speech perception in English speakers in a multisensory environment. These results suggested that linguistic experience strongly affects neural connectivity involved in AV speech integration. PMID:27510407

  5. Anxiety and ritualized speech

    Science.gov (United States)

    Lalljee, Mansur; Cook, Mark

    1975-01-01

    The experiment examines the effects of anxiety on the use of a number of words that seem irrelevant to semantic communication. The Units of Ritualized Speech (URSs) considered are: 'I mean', 'in fact', 'really', 'sort of', 'well' and 'you know'. (Editor)

  7. HATE SPEECH AS COMMUNICATION

    National Research Council Canada - National Science Library

    Gladilin Aleksey Vladimirovich

    2012-01-01

    The purpose of the paper is a theoretical comprehension of hate speech from a communication point of view, on the one hand, and from the point of view of prejudice, stereotypes and discrimination on the other...

  8. Speech intelligibility in hospitals.

    Science.gov (United States)

    Ryherd, Erica E; Moeller, Michael; Hsu, Timothy

    2013-07-01

    Effective communication between staff members is key to patient safety in hospitals. A variety of patient care activities including admittance, evaluation, and treatment rely on oral communication. Surprisingly, published information on speech intelligibility in hospitals is extremely limited. In this study, speech intelligibility measurements and occupant evaluations were conducted in 20 units of five different U.S. hospitals. A variety of unit types and locations were studied. Results show that overall, no unit had "good" intelligibility based on the speech intelligibility index (SII > 0.75), and several locations were found to have "poor" intelligibility. The study provides a baseline of speech intelligibility across a variety of hospitals and unit types, offers some evidence of the positive impact of absorption on intelligibility, and identifies areas for future research.
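
    As a small illustration of how measured SII values map onto the categories used above: the 0.75 cutoff for "good" comes from the abstract, while the 0.45 cutoff for "poor" is an assumed, commonly cited value rather than one taken from the study.

        def rate_sii(sii: float) -> str:
            """Classify a speech intelligibility index value.

            The 0.75 'good' cutoff comes from the abstract; the 0.45 'poor' cutoff
            is an assumed, commonly cited value, not taken from the study.
            """
            if sii > 0.75:
                return "good"
            if sii < 0.45:
                return "poor"
            return "fair"

        # Hypothetical measurements from different hospital locations
        for location, sii in [("nurse station", 0.62), ("patient room", 0.41), ("corridor", 0.78)]:
            print(f"{location}: SII={sii:.2f} -> {rate_sii(sii)}")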

  9. Speech disorders - children

    Science.gov (United States)

    MedlinePlus encyclopedia entry on speech disorders in children: //medlineplus.gov/ency/article/001430.htm

  10. Speech impairment (adult)

    Science.gov (United States)

    MedlinePlus encyclopedia entry on speech impairment in adults: //medlineplus.gov/ency/article/003204.htm

  11. Cultural-historical and cognitive approaches to understanding the origins of development of written speech

    Directory of Open Access Journals (Sweden)

    L.F. Obukhova

    2014-08-01

    Full Text Available We present an analysis of the emergence and development of written speech, its relationship to oral speech, and its connections to the symbolic and modeling activities of preschool children, namely playing and drawing. While a child's drawing is traditionally interpreted in psychology either as a measure of intellectual development, or as a projective technique, or as a criterion for the creative giftedness of the child, in this article the artistic activity is analyzed as a prerequisite for the development of written speech. The article substantiates the hypothesis that mastery of "picture writing", the ability to display verbal content in a schematic pictorial plan, is connected to success in written speech at school age. Along with the classical works of L.S. Vygotsky, D.B. Elkonin and A.R. Luria dedicated to finding the origins of writing, the article presents current Russian and foreign frameworks for forming the preconditions of writing, based on the concepts of cultural-historical theory ("higher mental functions", "zone of proximal development", etc.). In Western psychology, a number of pilot studies have used the developmental function of drawing for teaching writing skills to children of 5-7 years old. However, in cognitive psychology, the relationship between drawing and writing is most often reduced mainly to the analysis of general motor circuits. Despite the revival of research on writing and its origins in the last decade, in both Russian and foreign psychology, written speech remains an insufficiently studied problem.

  12. Recognizing GSM Digital Speech

    OpenAIRE

    Gallardo-Antolín, Ascensión; Peláez-Moreno, Carmen; Díaz-de-María, Fernando

    2005-01-01

    The Global System for Mobile (GSM) environment encompasses three main problems for automatic speech recognition (ASR) systems: noisy scenarios, source coding distortion, and transmission errors. The first one has already received much attention; however, source coding distortion and transmission errors must be explicitly addressed. In this paper, we propose an alternative front-end for speech recognition over GSM networks. This front-end is specially conceived to be effective against source c...

  13. Speech Compression and Synthesis

    Science.gov (United States)

    1980-10-01

    phonological rules combined with diphones improved the algorithms used by the phonetic synthesis program for gain normalization and time... phonetic vocoder, spectral template. This report describes work for the past two years on speech compression and synthesis. Since there was an... from Block 19: speech recognition, phoneme recognition. ... initial design for a phonetic recognition program. We also recorded and partially labeled a

  14. Recognizing GSM Digital Speech

    OpenAIRE

    2005-01-01

    The Global System for Mobile (GSM) environment encompasses three main problems for automatic speech recognition (ASR) systems: noisy scenarios, source coding distortion, and transmission errors. The first one has already received much attention; however, source coding distortion and transmission errors must be explicitly addressed. In this paper, we propose an alternative front-end for speech recognition over GSM networks. This front-end is specially conceived to be effective against source c...

  15. Perception drives production across sensory modalities: A network for sensorimotor integration of visual speech.

    Science.gov (United States)

    Venezia, Jonathan H; Fillmore, Paul; Matchin, William; Isenberg, A Lisette; Hickok, Gregory; Fridriksson, Julius

    2016-02-01

    Sensory information is critical for movement control, both for defining the targets of actions and providing feedback during planning or ongoing movements. This holds for speech motor control as well, where both auditory and somatosensory information have been shown to play a key role. Recent clinical research demonstrates that individuals with severe speech production deficits can show a dramatic improvement in fluency during online mimicking of an audiovisual speech signal suggesting the existence of a visuomotor pathway for speech motor control. Here we used fMRI in healthy individuals to identify this new visuomotor circuit for speech production. Participants were asked to perceive and covertly rehearse nonsense syllable sequences presented auditorily, visually, or audiovisually. The motor act of rehearsal, which is prima facie the same whether or not it is cued with a visible talker, produced different patterns of sensorimotor activation when cued by visual or audiovisual speech (relative to auditory speech). In particular, a network of brain regions including the left posterior middle temporal gyrus and several frontoparietal sensorimotor areas activated more strongly during rehearsal cued by a visible talker versus rehearsal cued by auditory speech alone. Some of these brain regions responded exclusively to rehearsal cued by visual or audiovisual speech. This result has significant implications for models of speech motor control, for the treatment of speech output disorders, and for models of the role of speech gesture imitation in development.

  16. A Generative Model of Speech Production in Broca's and Wernicke's Areas.

    Science.gov (United States)

    Price, Cathy J; Crinion, Jenny T; Macsweeney, Mairéad

    2011-01-01

    Speech production involves the generation of an auditory signal from the articulators and vocal tract. When the intended auditory signal does not match the produced sounds, subsequent articulatory commands can be adjusted to reduce the difference between the intended and produced sounds. This requires an internal model of the intended speech output that can be compared to the produced speech. The aim of this functional imaging study was to identify brain activation related to the internal model of speech production after activation related to vocalization, auditory feedback, and movement in the articulators had been controlled. There were four conditions: silent articulation of speech, non-speech mouth movements, finger tapping, and visual fixation. In the speech conditions, participants produced the mouth movements associated with the words "one" and "three." We eliminated auditory feedback from the spoken output by instructing participants to articulate these words without producing any sound. The non-speech mouth movement conditions involved lip pursing and tongue protrusions to control for movement in the articulators. The main difference between our speech and non-speech mouth movement conditions is that prior experience producing speech sounds leads to the automatic and covert generation of auditory and phonological associations that may play a role in predicting auditory feedback. We found that, relative to non-speech mouth movements, silent speech activated Broca's area in the left dorsal pars opercularis and Wernicke's area in the left posterior superior temporal sulcus. We discuss these results in the context of a generative model of speech production and propose that Broca's and Wernicke's areas may be involved in predicting the speech output that follows articulation. These predictions could provide a mechanism by which rapid movement of the articulators is precisely matched to the intended speech outputs during future articulations.

  17. A generative model of speech production in Broca’s and Wernicke’s areas

    Directory of Open Access Journals (Sweden)

    Cathy J Price

    2011-09-01

    Full Text Available Speech production involves the generation of an auditory signal from the articulators and vocal tract. When the intended auditory signal does not match the produced sounds, subsequent articulatory commands can be adjusted to reduce the difference between the intended and produced sounds. This requires an internal model of the intended speech output that can be compared to the produced speech. The aim of this functional imaging study was to identify brain activation related to the internal model of speech production after activation related to vocalisation, auditory feedback and movement in the articulators had been controlled. There were four conditions: silent articulation of speech, non-speech mouth movements, finger tapping and visual fixation. In the speech conditions, participants produced the mouth movements associated with the words one and three. We eliminated auditory feedback from the spoken output by instructing participants to articulate these words without producing any sound. The non-speech mouth movement conditions involved lip pursing and tongue protrusions to control for movement in the articulators. The main difference between our speech and non-speech mouth movement conditions is that prior experience producing speech sounds leads to the automatic and covert generation of auditory and phonological associations that may play a role in predicting auditory feedback. We found that, relative to non-speech mouth movements, silent speech activated Broca's area in the left dorsal pars opercularis and Wernicke's area in the left posterior superior temporal sulcus. We discuss these results in the context of a generative model of speech production and propose that Broca’s and Wernicke’s areas may be involved in predicting the speech output that follows articulation. These predictions could provide a mechanism by which rapid movement of the articulators is precisely matched to the intended speech outputs during future articulations.

  18. An Effective System for Acute Spotting Aberration in the Speech of Abnormal Children Via Artificial Neural Network and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    C. R. Bharathi

    2012-01-01

    Full Text Available Problem statement: In a real-world environment, speech signal processing plays a vital role among the research communities. A wide range of research is carried out in this field for denoising, enhancement and more. Besides these, stress management is important to identify the spot at which the stress has to be made. Approach: In this study, in order to provide proper speech practice for the abnormal person, their speech is analyzed. Initially, the normal and abnormal person's speech are obtained with the same set of words. As an initial process, the Mel Frequency Cepstrum Coefficients (MFCC) are extracted from both words and Principal Component Analysis (PCA) is applied to reduce the dimensionality of the words. From the dimensionality-reduced words, the parameters are obtained and then these parameters are utilized to train the ANN, which is used to identify the word that is abnormal. After identifying the abnormal word, the acute word is extracted through a thresholding operation and then the FFT is computed for the acute word. From this FFT, the parameters are obtained and then these parameters are used in the genetic algorithm for optimization. The GA is used to identify the spot at which the speech practice is required for the abnormal person. Results: The proposed system is implemented in the working platform of MATLAB. The performance of the proposed system is tested by generating the dataset for normal and abnormal female children. Conclusion: In this study, an effective system has been proposed to identify the abnormal word, and the spot at which the speech has to be improved is also identified.
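
    A minimal sketch of the front part of the pipeline described above: MFCCs are extracted per word, PCA reduces their dimensionality, and an artificial neural network separates normal from abnormal words. It uses librosa and scikit-learn as stand-in tools and synthetic audio instead of children's recordings, and it omits the FFT and genetic-algorithm stage that locates the stress spot.

        import numpy as np
        import librosa
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier

        def word_features(signal, sr=16000, n_mfcc=13):
            # Mel Frequency Cepstrum Coefficients, averaged over time into one vector per word
            mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
            return mfcc.mean(axis=1)

        # Synthetic stand-ins for recorded words (the study used real children's speech)
        rng = np.random.default_rng(2)
        sr = 16000
        words = [rng.normal(scale=0.1, size=sr) for _ in range(40)]
        labels = np.array([0] * 20 + [1] * 20)   # 0 = normal, 1 = abnormal (illustrative)

        X = np.array([word_features(w, sr) for w in words])

        # PCA reduces the dimensionality of the MFCC vectors, as in the abstract
        X_reduced = PCA(n_components=5).fit_transform(X)

        # An artificial neural network then identifies which words are abnormal
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        clf.fit(X_reduced, labels)
        print("training accuracy:", clf.score(X_reduced, labels))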

  19. Learning Fault-tolerant Speech Parsing with SCREEN

    CERN Document Server

    Wermter, S; Wermter, Stefan; Weber, Volker

    1994-01-01

    This paper describes a new approach and a system SCREEN for fault-tolerant speech parsing. SCREEN stands for Symbolic Connectionist Robust EnterprisE for Natural language. Speech parsing describes the syntactic and semantic analysis of spontaneous spoken language. The general approach is based on incremental immediate flat analysis, learning of syntactic and semantic speech parsing, parallel integration of current hypotheses, and the consideration of various forms of speech related errors. The goal for this approach is to explore the parallel interactions between various knowledge sources for learning incremental fault-tolerant speech parsing. This approach is examined in a system SCREEN using various hybrid connectionist techniques. Hybrid connectionist techniques are examined because of their promising properties of inherent fault tolerance, learning, gradedness and parallel constraint integration. The input for SCREEN is hypotheses about recognized words of a spoken utterance potentially analyzed by a spe...

  20. Aspects of Connected Speech Processes in Nigerian English

    Directory of Open Access Journals (Sweden)

    Rotimi Olanrele Oladipupo

    2014-12-01

    Full Text Available Nigerian English (NigE), like other new Englishes, possesses its unique features at various domains of phonology. This article examined aspects of connected speech processes (CSPs), the phenomena that account for sound modifications and simplifications in speech, with a view to establishing features that characterize Standard NigE connected speech. Natural phonology (NP), which provides explanations for substitutions, alternations, and variations in the speech of second language speakers, was adopted as the theoretical framework. The subjects of the study were 360 educated NigE speakers, accidentally sampled from different language groups in Nigeria. The CSPs found in their semi-spontaneous speeches were transcribed perceptually and analyzed statistically, by allotting marks to instances of occurrence and converting such to percentages. Three categories of CSPs were identified in the data: dominant, minor, and idiosyncratic processes. The study affirms that only the dominant CSPs, typical of NigE speakers, are acceptable as Standard Nigerian spoken English.

  1. Computer-based speech therapy for childhood speech sound disorders.

    Science.gov (United States)

    Furlong, Lisa; Erickson, Shane; Morris, Meg E

    2017-07-01

    With the current worldwide workforce shortage of Speech-Language Pathologists, new and innovative ways of delivering therapy to children with speech sound disorders are needed. Computer-based speech therapy may be an effective and viable means of addressing service access issues for children with speech sound disorders. To evaluate the efficacy of computer-based speech therapy programs for children with speech sound disorders, studies reporting the efficacy of computer-based speech therapy programs were identified via a systematic, computerised database search. Key study characteristics, results, main findings and details of computer-based speech therapy programs were extracted. The methodological quality was evaluated using a structured critical appraisal tool. 14 studies were identified and a total of 11 computer-based speech therapy programs were evaluated. The results showed that computer-based speech therapy is associated with positive clinical changes for some children with speech sound disorders. There is a need for collaborative research between computer engineers and clinicians, particularly during the design and development of computer-based speech therapy programs. Evaluation using rigorous experimental designs is required to understand the benefits of computer-based speech therapy. The reader will be able to 1) discuss how computer-based speech therapy has the potential to improve service access for children with speech sound disorders, 2) explain the ways in which computer-based speech therapy programs may enhance traditional tabletop therapy and 3) compare the features of computer-based speech therapy programs designed for different client populations. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. SPEECH DISORDERS ENCOUNTERED DURING SPEECH THERAPY AND THERAPY TECHNIQUES

    Directory of Open Access Journals (Sweden)

    İlhan ERDEM

    2013-06-01

    Full Text Available Speech, which is both a physical and a mental process, uses agreed signs and sounds to turn a sense in the mind into a message. To understand the sounds of speech, it is essential to know the structure and function of the various organs that make conversation possible. Because speech is a physical and mental process, many factors can lead to speech disorders. A speech disorder can relate to language acquisition as well as be caused by many medical and psychological factors. Speaking is the collective work of many organs, like an orchestra. Because speech is a very complex skill with a mental dimension, it must be determined which of these obstacles inhibits conversation. A speech disorder is a defect in speech flow, rhythm, pitch, stress, composition or vocalization. In this study, speech disorders such as articulation disorders, stuttering, aphasia, dysarthria, local dialect speech, tongue and lip laziness, and rapid speech are treated as speech defects in terms of language skills. The causes of these speech disorders were investigated and suggestions for their remedy were discussed.

  3. Analytical Study of High Pitch Delay Resolution Technique for Tonal Speech Coding

    Directory of Open Access Journals (Sweden)

    Suphattharachai Chomphan

    2012-01-01

    Full Text Available Problem statement: In tonal-language speech, since tone plays an important role not only in the naturalness but also in the intelligibility of the speech, it must be treated appropriately in a speech coder algorithm. Approach: This study presents an analytical study of the technique of High Pitch Delay Resolution (HPDR) applied to the adaptive codebook of the core coder of a Multi-Pulse based Code Excited Linear Predictive (MP-CELP) coder. Results: The experimental results show that the speech quality of the MP-CELP speech coder with the HPDR technique is improved above the speech quality of the conventional coder. An optimum resolution of pitch delay is also presented. Conclusion: From the analytical study, it has been found that the proposed technique can improve the speech coding quality.
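
    The HPDR idea is to search the adaptive codebook over fractional as well as integer pitch delays. The sketch below illustrates this with a quarter-sample lag search that maximizes normalized correlation against a linearly interpolated past excitation; it is a simplified illustration of fractional pitch search, not the MP-CELP coder itself.

        import numpy as np

        def best_pitch_delay(excitation, frame, lag_range=(20, 140), resolution=0.25):
            """Search integer and fractional pitch delays (in samples) that maximize
            the normalized correlation between the current frame and the delayed
            past excitation. Fractional delays use linear interpolation.
            Simplified sketch: lags shorter than the frame are skipped, and real
            CELP coders use better interpolation filters."""
            n = len(frame)
            best_lag, best_score = None, -np.inf
            for lag in np.arange(lag_range[0], lag_range[1], resolution):
                # indices into the past excitation, measured back from the frame start
                idx = np.arange(n) - lag + len(excitation)
                lo = np.floor(idx).astype(int)
                frac = idx - lo
                if lo.min() < 0 or lo.max() + 1 >= len(excitation):
                    continue
                delayed = (1 - frac) * excitation[lo] + frac * excitation[lo + 1]
                energy = np.dot(delayed, delayed)
                if energy == 0:
                    continue
                score = np.dot(frame, delayed) ** 2 / energy
                if score > best_score:
                    best_lag, best_score = lag, score
            return best_lag

        # Toy example: a periodic signal with a pitch period of 60.5 samples
        rng = np.random.default_rng(3)
        period = 60.5
        t = np.arange(1000)
        signal = np.sin(2 * np.pi * t / period) + 0.05 * rng.normal(size=t.size)
        past, frame = signal[:900], signal[900:960]
        print("estimated pitch delay:", best_pitch_delay(past, frame, lag_range=(40, 100)))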

  4. The role of the motor system in discriminating normal and degraded speech sounds.

    Science.gov (United States)

    D'Ausilio, Alessandro; Bufalari, Ilaria; Salmas, Paola; Fadiga, Luciano

    2012-07-01

    Listening to speech recruits a network of fronto-temporo-parietal cortical areas. Classical models consider anterior, motor, sites involved in speech production whereas posterior sites involved in comprehension. This functional segregation is more and more challenged by action-perception theories suggesting that brain circuits for speech articulation and speech perception are functionally interdependent. Recent studies report that speech listening elicits motor activities analogous to production. However, the motor system could be crucially recruited only under certain conditions that make speech discrimination hard. Here, by using event-related double-pulse transcranial magnetic stimulation (TMS) on lips and tongue motor areas, we show data suggesting that the motor system may play a role in noisy, but crucially not in noise-free environments, for the discrimination of speech signals. Copyright © 2011 Elsevier Srl. All rights reserved.

  5. Practical speech user interface design

    CERN Document Server

    Lewis, James R

    2010-01-01

    Although speech is the most natural form of communication between humans, most people find using speech to communicate with machines anything but natural. Drawing from psychology, human-computer interaction, linguistics, and communication theory, Practical Speech User Interface Design provides a comprehensive yet concise survey of practical speech user interface (SUI) design. It offers practice-based and research-based guidance on how to design effective, efficient, and pleasant speech applications that people can really use. Focusing on the design of speech user interfaces for IVR application

  6. Speech-Language Therapy (For Parents)

    Science.gov (United States)

    ... Speech-language pathologists (SLPs), often informally known as speech therapists, are professionals educated in the study of human ...

  7. Speech processing using maximum likelihood continuity mapping

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, John E. (Santa Fe, NM)

    2000-01-01

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  8. Speech processing using maximum likelihood continuity mapping

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.E.

    2000-04-18

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  9. Managing the reaction effects of speech disorders on speech ...

    African Journals Online (AJOL)

    Speech disorders are responsible for defective speaking. It is usually ... They occur as a result of the persistent frustrations which speech defectives usually encounter for speaking defectively. This paper ...

  10. Under-resourced speech recognition based on the speech manifold

    CSIR Research Space (South Africa)

    Sahraeian, R

    2015-09-01

    Full Text Available Conventional acoustic modeling involves estimating many parameters to effectively model feature distributions. The sparseness of speech and text data, however, degrades the reliability of the estimation process and makes speech recognition a...

  11. Resourcing speech-language pathologists to work with multilingual children.

    Science.gov (United States)

    McLeod, Sharynne

    2014-06-01

    Speech-language pathologists play important roles in supporting people to be competent communicators in the languages of their communities. However, with over 7000 languages spoken throughout the world and the majority of the global population being multilingual, there is often a mismatch between the languages spoken by children and families and their speech-language pathologists. This paper provides insights into service provision for multilingual children within an English-dominant country by viewing Australia's multilingual population as a microcosm of ethnolinguistic minorities. Recent population studies of Australian pre-school children show that their most common languages other than English are: Arabic, Cantonese, Vietnamese, Italian, Mandarin, Spanish, and Greek. Although 20.2% of services by Speech Pathology Australia members are offered in languages other than English, there is a mismatch between the language of the services and the languages of children within similar geographical communities. Australian speech-language pathologists typically use informal or English-based assessments and intervention tools with multilingual children. Thus, there is a need for accessible culturally and linguistically appropriate resources for working with multilingual children. Recent international collaborations have resulted in practical strategies to support speech-language pathologists during assessment, intervention, and collaboration with families, communities, and other professionals. The International Expert Panel on Multilingual Children's Speech was assembled to prepare a position paper to address issues faced by speech-language pathologists when working with multilingual populations. The Multilingual Children's Speech website ( http://www.csu.edu.au/research/multilingual-speech ) addresses one of the aims of the position paper by providing free resources and information for speech-language pathologists about more than 45 languages. These international

  12. Auditory-motor learning during speech production in 9-11-year-old children.

    Directory of Open Access Journals (Sweden)

    Douglas M Shiller

    Full Text Available BACKGROUND: Hearing ability is essential for normal speech development, however the precise mechanisms linking auditory input and the improvement of speaking ability remain poorly understood. Auditory feedback during speech production is believed to play a critical role by providing the nervous system with information about speech outcomes that is used to learn and subsequently fine-tune speech motor output. Surprisingly, few studies have directly investigated such auditory-motor learning in the speech production of typically developing children. METHODOLOGY/PRINCIPAL FINDINGS: In the present study, we manipulated auditory feedback during speech production in a group of 9-11-year old children, as well as in adults. Following a period of speech practice under conditions of altered auditory feedback, compensatory changes in speech production and perception were examined. Consistent with prior studies, the adults exhibited compensatory changes in both their speech motor output and their perceptual representations of speech sound categories. The children exhibited compensatory changes in the motor domain, with a change in speech output that was similar in magnitude to that of the adults, however the children showed no reliable compensatory effect on their perceptual representations. CONCLUSIONS: The results indicate that 9-11-year-old children, whose speech motor and perceptual abilities are still not fully developed, are nonetheless capable of auditory-feedback-based sensorimotor adaptation, supporting a role for such learning processes in speech motor development. Auditory feedback may play a more limited role, however, in the fine-tuning of children's perceptual representations of speech sound categories.

  13. Intelligibility of speech of children with speech and sound disorders

    OpenAIRE

    Ivetac, Tina

    2014-01-01

    The purpose of this study is to examine speech intelligibility of children with primary speech and sound disorders aged 3 to 6 years in everyday life. The research problem is based on the degree to which parents or guardians, immediate family members (sister, brother, grandparents), extended family members (aunt, uncle, cousin), child's friends, other acquaintances, child's teachers and strangers understand the speech of children with speech sound disorders. We examined whether the level ...

  14. Automatic speech recognition An evaluation of Google Speech

    OpenAIRE

    Stenman, Magnus

    2015-01-01

    The use of speech recognition is increasing rapidly and it is now available in smart TVs, desktop computers, every new smart phone, etc., allowing us to talk to computers naturally. With its use in home appliances, education and even in surgical procedures, accuracy and speed become very important. This thesis aims to give an introduction to speech recognition and discuss its use in robotics. An evaluation of Google Speech, using Google's speech API, with regard to word error rate and translation ...
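
    Word error rate, the metric the thesis evaluates, is the word-level edit distance between a reference transcript and the recognizer output, divided by the reference length. A minimal, API-independent sketch:

        def word_error_rate(reference: str, hypothesis: str) -> float:
            """WER = (substitutions + deletions + insertions) / reference length,
            computed with a standard Levenshtein edit-distance DP over words."""
            ref, hyp = reference.split(), hypothesis.split()
            d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                d[i][0] = i
            for j in range(len(hyp) + 1):
                d[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1,        # deletion
                                  d[i][j - 1] + 1,        # insertion
                                  d[i - 1][j - 1] + cost) # substitution / match
            return d[len(ref)][len(hyp)] / max(len(ref), 1)

        # Hypothetical reference transcript vs. recognizer output
        print(word_error_rate("turn the robot to the left", "turn the robot left"))  # ~0.33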

  15. Developmental trends in children's pretend play.

    Science.gov (United States)

    Lyytinen, P

    1991-01-01

    The developmental trends in pretend play were investigated in children 2-6 years of age (18 in each of five age groups) by examining changes in pretend action and speech separately. Play behaviour was assessed by using a selected set of Duplo Lego toys. Interest focused on occurrence of decentration, decontextualization and integration at different age levels. The proportions of decentred and decontextualized acts, action integrations and play themes, increased linearly with age. Changes in substitutive and inventive actions were, however, more minor than expected. Single-scheme combinations did not reveal any essential aspect of the development of children's symbolic competence. In this sense, multischeme combinations were more important in revealing the children's way of organizing toy material. Linear age trends were not found for language measures. The use of decentred utterances, language integrations and linguistically expressed themes were individual-specific rather than age-related. Issues for studying pretend play in language-impaired groups are also suggested.

  16. Song and speech: examining the link between singing talent and speech imitation ability

    Science.gov (United States)

    Christiner, Markus; Reiterer, Susanne M.

    2013-01-01

    In previous research on speech imitation, musicality, and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and auditory memory with singing fitting better into the category of “speech” on the productive level and “music” on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. (3) The ability to sing improves the memory span of the auditory working memory. PMID:24319438

  17. Happy Festivus! Parody as playful consumer resistance

    DEFF Research Database (Denmark)

    Mikkonen, Ilona; Bajde, Domen

    2013-01-01

    Drawing upon literary theory, play and consumer resistance literature, we conceptualize consumer parodic resistance – a resistant form of play that critically refunctions dominant consumption discourses and marketplace ideologies. We explore parodic resistance empirically by analyzing Festivus...

  18. Differential Diagnosis of Severe Speech Disorders Using Speech Gestures

    Science.gov (United States)

    Bahr, Ruth Huntley

    2005-01-01

    The differentiation of childhood apraxia of speech from severe phonological disorder is a common clinical problem. This article reports on an attempt to describe speech errors in children with childhood apraxia of speech on the basis of gesture use and acoustic analyses of articulatory gestures. The focus was on the movement of articulators and…

  19. Kannada Phonemes to Speech Dictionary: Statistical Approach

    Directory of Open Access Journals (Sweden)

    Mallamma V. Reddy

    2017-01-01

    Full Text Available The input or output of a natural language processing system can be either written text or speech. To process written text we need to analyze lexical, syntactic and semantic knowledge about the language, discourse information and real-world knowledge; to process spoken language, we need to analyze everything required to process written text, along with the challenges of speech recognition and speech synthesis. This paper describes how the articulatory phonetics of Kannada is used to generate the phoneme-to-speech dictionary for Kannada; a statistical computational approach is used to map the elements which are taken from the input query or documents. In articulatory phonetics, the place of articulation of a consonant is the point of contact where an obstruction occurs in the vocal tract between an articulatory gesture, an active articulator (typically some part of the tongue) and a passive location (typically some part of the roof of the mouth). Along with the manner of articulation and the phonation, this gives the consonant its distinctive sound. The results are presented for the same.
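
    As a minimal sketch of the statistical mapping idea, the snippet below keys a dictionary by phoneme sequences and resolves ambiguous entries by corpus frequency. The romanized entries and counts are invented placeholders, not the paper's lexicon.

        # Hypothetical dictionary: phoneme sequence -> candidate words with corpus counts.
        # Entries are romanized placeholders, not real lexicon data.
        phoneme_dict = {
            ("m", "a", "n", "e"): {"mane (house)": 120, "maNe (plank)": 15},
            ("h", "a", "l", "u"): {"haalu (milk)": 90},
        }

        def lookup(phonemes):
            """Return the most frequent word for a phoneme sequence (statistical choice),
            or None if the sequence is not in the dictionary."""
            candidates = phoneme_dict.get(tuple(phonemes))
            if not candidates:
                return None
            return max(candidates, key=candidates.get)

        print(lookup(["m", "a", "n", "e"]))   # picks the higher-count candidate
        print(lookup(["h", "a", "l", "u"]))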

  20. Erotic Language as Dramatic Action in Plays by Lyly and Shakespeare

    Science.gov (United States)

    Knoll, Gillian

    2012-01-01

    This study closely examines the language of desire in the dramatic works of John Lyly and William Shakespeare, and argues that contemplative and analytical speeches about desire function as modes of action in their plays. Erotic speeches do more than express desire in a purely descriptive or perlocutionary capacity distinct from the action of the…

  2. Evaluation of the speech perception in the noise in different positions in adults with cochlear implants

    Directory of Open Access Journals (Sweden)

    Santos, Karlos Thiago Pinheiro dos

    2009-03-01

    Full Text Available Introduction: The most frequent complaint of cochlear implant users has been difficulty recognizing and understanding the speech signal in the presence of noise. Research has been carried out on the speech perception of cochlear implant users with a focus on aspects such as the effect of reducing the signal/noise ratio on speech perception, speech recognition in noise with different types of cochlear implant and strategies of speech codification, and the effects of binaural stimulation on speech perception in noise. Objective: (1) To assess speech perception in adult cochlear implant users in different positions regarding the presentation of the stimulus, (2) to compare the index of speech recognition in the frontal, ipsilateral and contralateral positions and (3) to analyze the effect of monaural adaptation on speech perception with noise. Method: 22 adult cochlear implant users were evaluated regarding speech perception. The individuals were submitted to a sentence recognition evaluation, with competitive noise at a signal/noise ratio of +10 decibels, in three positions: frontal, ipsilateral and contralateral to the cochlear implant side. Results: The results demonstrated the largest index of speech recognition in the ipsilateral position (100%) and the lowest index of speech recognition with sentences in the contralateral position (5%). Conclusion: The performance of speech perception in cochlear implant users is impaired when competitive noise is introduced; the index of speech recognition is better when the speech is presented ipsilaterally, and consequently worse when presented contralaterally to the cochlear implant, and speech intelligibility is more impaired when there is only monaural input.
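
    The sentences above were presented with competing noise at a +10 dB signal/noise ratio. The sketch below shows how a noise track is scaled to reach such a target SNR before mixing; the "speech" here is a toy tone, not actual test material.

        import numpy as np

        def mix_at_snr(speech, noise, snr_db):
            """Scale `noise` so that the speech-to-noise power ratio is `snr_db`
            decibels, then return the mixture."""
            speech_power = np.mean(speech ** 2)
            noise_power = np.mean(noise ** 2)
            # Required noise power for the target SNR: P_speech / P_noise = 10^(SNR/10)
            target_noise_power = speech_power / (10 ** (snr_db / 10))
            scaled_noise = noise * np.sqrt(target_noise_power / noise_power)
            return speech + scaled_noise

        rng = np.random.default_rng(4)
        speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # toy "speech" tone
        noise = rng.normal(size=16000)
        mix = mix_at_snr(speech, noise, snr_db=10)

        # Verify the achieved SNR
        achieved = 10 * np.log10(np.mean(speech ** 2) / np.mean((mix - speech) ** 2))
        print(f"achieved SNR: {achieved:.1f} dB")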

  3. Individual differences in speech-in-noise perception parallel neural speech processing and attention in preschoolers.

    Science.gov (United States)

    Thompson, Elaine C; Woodruff Carr, Kali; White-Schwoch, Travis; Otto-Meyer, Sebastian; Kraus, Nina

    2017-02-01

    From bustling classrooms to unruly lunchrooms, school settings are noisy. To learn effectively in the unwelcome company of numerous distractions, children must clearly perceive speech in noise. In older children and adults, speech-in-noise perception is supported by sensory and cognitive processes, but the correlates underlying this critical listening skill in young children (3-5 year olds) remain undetermined. Employing a longitudinal design (two evaluations separated by ∼12 months), we followed a cohort of 59 preschoolers, ages 3.0-4.9, assessing word-in-noise perception, cognitive abilities (intelligence, short-term memory, attention), and neural responses to speech. Results reveal changes in word-in-noise perception parallel changes in processing of the fundamental frequency (F0), an acoustic cue known for playing a role central to speaker identification and auditory scene analysis. Four unique developmental trajectories (speech-in-noise perception groups) confirm this relationship, in that improvements and declines in word-in-noise perception couple with enhancements and diminishments of F0 encoding, respectively. Improvements in word-in-noise perception also pair with gains in attention. Word-in-noise perception does not relate to strength of neural harmonic representation or short-term memory. These findings reinforce previously-reported roles of F0 and attention in hearing speech in noise in older children and adults, and extend this relationship to preschool children. Copyright © 2016 Elsevier B.V. All rights reserved.
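
    The neural measure above concerns encoding of the fundamental frequency (F0). As a reference point for what F0 is acoustically, the sketch below estimates it from a synthetic voiced signal with a basic autocorrelation peak search; this is not the study's frequency-following-response analysis.

        import numpy as np

        def estimate_f0(signal, sr, fmin=80.0, fmax=400.0):
            """Estimate the fundamental frequency with a simple autocorrelation peak
            search, restricted to a plausible voice range."""
            signal = signal - signal.mean()
            ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
            lag_min, lag_max = int(sr / fmax), int(sr / fmin)
            best_lag = lag_min + np.argmax(ac[lag_min:lag_max])
            return sr / best_lag

        sr = 16000
        t = np.arange(0, 0.1, 1 / sr)
        # Synthetic harmonic signal with a 150 Hz fundamental (stand-in for a voiced sound)
        voiced = sum(np.sin(2 * np.pi * 150 * (k + 1) * t) / (k + 1) for k in range(4))
        print(f"estimated F0: {estimate_f0(voiced, sr):.1f} Hz")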

  4. Tackling the complexity in speech

    DEFF Research Database (Denmark)

    section includes four carefully selected chapters. They deal with facets of speech production, speech acoustics, and/or speech perception or recognition, place them in an integrated phonetic-phonological perspective, and relate them in more or less explicit ways to aspects of speech technology. Therefore......, we hope that this volume can help speech scientists with traditional training in phonetics and phonology to keep up with the latest developments in speech technology. In the opposite direction, speech researchers starting from a technological perspective will hopefully get inspired by reading about...... the questions, phenomena, and communicative functions that are currently addressed in phonetics and phonology. Either way, the future of speech research lies in international, interdisciplinary collaborations, and our volume is meant to reflect and facilitate such collaborations...

  5. Denial Denied: Freedom of Speech

    Directory of Open Access Journals (Sweden)

    Glen Newey

    2009-12-01

    Full Text Available Free speech is a widely held principle. This is in some ways surprising, since formal and informal censorship of speech is widespread, and rather different issues seem to arise depending on whether the censorship concerns who speaks, what content is spoken or how it is spoken. I argue that despite these facts, free speech can indeed be seen as a unitary principle. On my analysis, the core of the free speech principle is the denial of the denial of speech, whether to a speaker, to a proposition, or to a mode of expression. Underlying free speech is the principle of freedom of association, according to which speech is both a precondition of future association (e.g. as a medium for negotiation) and a mode of association in its own right. I conclude by applying this account briefly to two contentious issues: hate speech and pornography.

  7. Speech Criticism, Group Presentations, and Centrality: A Marriage Made in Heaven for the Basic Public Speaking Course.

    Science.gov (United States)

    Ayres, Joe; Sonandre, Debbie Ayres

    This paper presents an exercise which serves as an addition to public speaking courses. Showing students how to uncover the speech patterns that shape their lives allows them to appreciate the importance of speech communication in their lives. In the exercise, groups analyze speeches and report their findings to the class. The exercise improves…

  8. Analysis of Pause Occurrence in Three Kinds of Modified Speech: Public Address, Caretaker Talk, and Foreigner Talk.

    Science.gov (United States)

    Osada, Nobuko

    2003-01-01

    Analyzes the occurrence of silent pauses in monologues, especially in modified speech, such as in public address, caretaker talk, and foreigner talk. Discusses speech rate, articulation rate, pause unit length, individual pause length, and pause percentage relative to overall speech time. (Author/VWL)
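
    Pause percentage, one of the measures listed above, is silent time divided by total speaking time. The sketch below estimates it from frame-level RMS energy using an assumed silence threshold, run on a toy signal:

        import numpy as np

        def pause_percentage(signal, sr, frame_ms=20, silence_rms=0.02):
            """Percentage of frames whose RMS energy falls below `silence_rms`
            (an assumed threshold), taken as silent pause time over total time."""
            frame_len = int(sr * frame_ms / 1000)
            n_frames = len(signal) // frame_len
            frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
            rms = np.sqrt(np.mean(frames ** 2, axis=1))
            return 100.0 * np.mean(rms < silence_rms)

        # Toy signal: 1 s of "speech" (a tone), 0.5 s of silence, 0.5 s of "speech"
        sr = 16000
        tone = 0.3 * np.sin(2 * np.pi * 200 * np.arange(sr) / sr)
        signal = np.concatenate([tone, np.zeros(sr // 2), tone[:sr // 2]])
        print(f"pause percentage: {pause_percentage(signal, sr):.1f}%")  # about 25%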

  9. Speech spectrogram expert

    Energy Technology Data Exchange (ETDEWEB)

    Johannsen, J.; Macallister, J.; Michalek, T.; Ross, S.

    1983-01-01

    Various authors have pointed out that humans can become quite adept at deriving phonetic transcriptions from speech spectrograms (as good as 90 percent accuracy at the phoneme level). The authors describe an expert system which attempts to simulate this performance. The speech spectrogram expert (spex) is actually a society made up of three experts: a 2-dimensional vision expert, an acoustic-phonetic expert, and a phonetics expert. The visual reasoning expert finds important visual features of the spectrogram. The acoustic-phonetic expert reasons about how visual features relate to phonemes, and about how phonemes change visually in different contexts. The phonetics expert reasons about allowable phoneme sequences and transformations, and deduces an English spelling for phoneme strings. The speech spectrogram expert is highly interactive, allowing users to investigate hypotheses and edit rules. 10 references.
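
    A spectrogram is the time-frequency picture such a system reasons over. The sketch below computes one with scipy and extracts the dominant frequency per frame, a crude stand-in for the visual features the vision expert would find; the signal is synthetic.

        import numpy as np
        from scipy.signal import spectrogram

        sr = 8000
        t = np.arange(0, 1.0, 1 / sr)
        # Toy signal: a tone that jumps from 300 Hz to 1200 Hz halfway through,
        # standing in for a formant-like change the spectrogram would make visible.
        signal = np.where(t < 0.5,
                          np.sin(2 * np.pi * 300 * t),
                          np.sin(2 * np.pi * 1200 * t))

        freqs, times, sxx = spectrogram(signal, fs=sr, nperseg=256)

        # The dominant frequency per time frame is one simple "visual feature"
        dominant = freqs[np.argmax(sxx, axis=0)]
        print("dominant frequency, first and last frame:",
              round(dominant[0]), "Hz,", round(dominant[-1]), "Hz")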

  10. RECOGNISING SPEECH ACTS

    Directory of Open Access Journals (Sweden)

    Phyllis Kaburise

    2012-09-01

    Full Text Available Speech Act Theory (SAT), a theory in pragmatics, is an attempt to describe what happens during linguistic interactions. Inherent within SAT is the idea that language forms and intentions are relatively formulaic and that there is a direct correspondence between sentence forms (for example, in terms of structure and lexicon) and the function or meaning of an utterance. The contention offered in this paper is that when such a correspondence does not exist, as in indirect speech utterances, this creates challenges for English second language speakers and may result in miscommunication. This arises because indirect speech acts allow speakers to employ various pragmatic devices such as inference, implicature, presuppositions and context clues to transmit their messages. Such devices, operating within the non-literal level of language competence, may pose challenges for ESL learners.

  11. Protection limits on free speech

    Institute of Scientific and Technical Information of China (English)

    李敏

    2014-01-01

    Freedom of speech is one of the basic rights of citizens and should receive broad protection. In the real context of China, however, it is worth considering what kinds of speech can be protected and what kinds can be restricted, and how to draw the line between state power and free speech. People tend to ignore freedom of speech and its function, so that some arguments cannot be aired in open debate.

  12. Rhetorical Flaws in Brutus’ Forum Speech in Julius Caesar: A Carefully Controlled Weakness?

    Directory of Open Access Journals (Sweden)

    Dominic Cheetham

    2017-06-01

    Full Text Available In Julius Caesar Shakespeare reproduces one of the pivotal moments in European history. Brutus and Mark Antony, through the medium of their forum speeches, compete for the support of the people of Rome. In the play, as in history, Mark Antony wins this contest of language. Critics are generally agreed that Antony has the better speech, but also that Brutus’ speech is still exceptionally good. Traditionally the question of how Antony’s speech is superior is argued by examining differences between the two speeches; however, this approach has not resulted in any critical consensus. This paper takes the opening lines of the speeches as the only point of direct convergence between the content and the rhetorical forms used by Brutus and Antony and argues that Brutus’ opening tricolon is structurally inferior to Mark Antony’s. Analysis of the following rhetorical schemes in Brutus’ speech reveals further structural weaknesses. Shakespeare gives Brutus a speech rich in perceptually salient rhetorical schemes but introduces small, less salient, structural weaknesses into those schemes. The tightly structured linguistic patterns which make up the majority of Brutus’ speech give an impression of great rhetorical skill. This skilful impression obscures the minor faults or weaknesses that quietly and subtly reduce the overall power of the speech. By identifying the weaknesses in Brutus’ forms we add an extra element to the discussion of these speeches and at the same time display how subtly and effectively Shakespeare uses rhetorical forms to control audience response and appreciation.

  13. Designing speech for a recipient

    DEFF Research Database (Denmark)

    Fischer, Kerstin

    is investigated on three candidates for so-called ‘simplified registers’: speech to children (also called motherese or baby talk), speech to foreigners (also called foreigner talk) and speech to robots. The volume integrates research from various disciplines, such as psychology, sociolinguistics...

  14. ADMINISTRATIVE GUIDE IN SPEECH CORRECTION.

    Science.gov (United States)

    HEALEY, WILLIAM C.

    WRITTEN PRIMARILY FOR SCHOOL SUPERINTENDENTS, PRINCIPALS, SPEECH CLINICIANS, AND SUPERVISORS, THIS GUIDE OUTLINES THE MECHANICS OF ORGANIZING AND CONDUCTING SPEECH CORRECTION ACTIVITIES IN THE PUBLIC SCHOOLS. IT INCLUDES THE REQUIREMENTS FOR CERTIFICATION OF A SPEECH CLINICIAN IN MISSOURI AND DESCRIBES ESSENTIAL STEPS FOR THE DEVELOPMENT OF A…

  15. SPEECH DISORDERS ENCOUNTERED DURING SPEECH THERAPY AND THERAPY TECHNIQUES

    OpenAIRE

    2013-01-01

    Speech is a physical and mental process in which agreed-upon signs and sounds are used to convey a message from one mind to another. To identify the sounds of speech, it is essential to know the structure and function of the various organs that make conversation possible. Because speech is a physical and mental process, many factors can lead to speech disorders. A speech disorder can concern language acquisition, or it can be caused by many medical and psychological factors. Disordered sp...

  16. Play for Power

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Musharraf steps down under pressure, but Pakistan’s political battles are far from over. Recently, Pakistan had a political earthquake. On August 18, President Pervez Musharraf announced his resignation in a nationally televised speech. After the Pakistani Parliament accepted his resignation, the 57-year-old Speaker of the Pakistan Senate, Muhammadmian Soomro, became acting president in accordance with Pakistan’s Constitution. Lawmakers must select a new president within 30 days.

  17. Freedom of Speech and Hate Speech: an analysis of possible limits for freedom of speech

    National Research Council Canada - National Science Library

    Riva Sobrado de Freitas; Matheus Felipe de Castro

    2013-01-01

      In a view to determining the outlines of the Freedom of Speech and to specify its contents, we face hate speech as an offensive and repulsive manifestation, particularly directed to minority groups...

  18. Children's Empowerment in Play

    Science.gov (United States)

    Canning, Natalie

    2007-01-01

    This article examines the level of empowerment and autonomy children can create in their play experiences. It examines the play discourses that children build and maintain and considers the importance of play contexts in supporting children's emotional and social development. These aspects of play are often unseen or misunderstood by the adult…

  19. The Play of Psychotherapy

    Science.gov (United States)

    Marks-Tarlow, Terry

    2012-01-01

    The author reviews the role of play within psychotherapy. She does not discuss the formal play therapy especially popular for young children, nor play from the Jungian perspective that encourages the use of the sand tray with adults. Instead, she focuses on the informal use of play during psychotherapy as it is orchestrated intuitively. Because…

  20. Two people playing together: some thoughts on play, playing, and playfulness in psychoanalytic work.

    Science.gov (United States)

    Vliegen, Nicole

    2009-01-01

    Children's play and the playfulness of adolescents and adults are important indicators of personal growth and development. When a child is not able to play, or an adolescent/adult is not able to be playful with thoughts and ideas, psychotherapy can help to find a more playful and creative stance. Elaborating Winnicott's (1968, p. 591) statement that "psychotherapy has to do with two people playing together," three perspectives on play in psychotherapy are discussed. In the first point of view, the child gets in touch with and can work through aspects of his or her inner world, while playing in the presence of the therapist. The power of play is then rooted in the playful communication with the self. In a second perspective, in play the child is communicating aspects of his or her inner world to the therapist as a significant other. In a third view, in "playing together" child and therapist are co-constructing new meanings. These three perspectives on play are valid at different moments of a therapy process or for different children, depending on the complex vicissitudes of the child's constitution, life experiences, development, and psychic structure. Concerning these three perspectives, a parallel can be drawn between the therapist's attitude toward the child's play and the way the therapist responds to the verbal play of an adolescent or adult. We illustrate this with the case of Jacob, a late adolescent hardly able to play with ideas.

  1. Applying Play to Therapy.

    Science.gov (United States)

    Ritter, Patricia S.; Fokes, Joann

    The objectives of this paper are (1) to present the relationship of play to language and cognition, (2) to describe the stages of play and discuss recent literature about the characteristics of play, and (3) to describe the use of play with the multifaceted goals of cognition, pragmatics, semantics, syntax, and morphology as an intervention…

  2. Speech transmission index from running speech: A neural network approach

    Science.gov (United States)

    Li, F. F.; Cox, T. J.

    2003-04-01

    Speech transmission index (STI) is an important objective parameter concerning speech intelligibility for sound transmission channels. It is normally measured with specific test signals to ensure high accuracy and good repeatability. Measurement with running speech was previously proposed, but accuracy is compromised and hence applications limited. A new approach that uses artificial neural networks to accurately extract the STI from received running speech is developed in this paper. Neural networks are trained on a large set of transmitted speech examples with prior knowledge of the transmission channels' STIs. The networks perform complicated nonlinear function mappings and spectral feature memorization to enable accurate objective parameter extraction from transmitted speech. Validations via simulations demonstrate the feasibility of this new method on a one-net-one-speech extract basis. In this case, accuracy is comparable with normal measurement methods. This provides an alternative to standard measurement techniques, and it is intended that the neural network method can facilitate occupied room acoustic measurements.
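    The record above describes mapping features extracted from received running speech to a known STI value with a trained neural network. The following is a minimal, generic sketch of that idea using scikit-learn; the feature set, network size, and the synthetic training data are illustrative assumptions, not the authors' design.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      # Assume each row of X holds spectral features extracted from one
      # received running-speech excerpt, and y holds the STI measured for
      # the corresponding transmission channel with a standard method.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 30))                       # placeholder feature matrix
      y = np.clip(rng.normal(0.6, 0.15, 200), 0.0, 1.0)    # placeholder STI targets

      model = make_pipeline(
          StandardScaler(),
          MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
      )
      model.fit(X, y)

      # Predict the STI of a channel from features of newly received speech.
      sti_estimate = model.predict(X[:1])
      print(float(sti_estimate[0]))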

  3. The effect of fear on paralinguistic aspects of speech in patients with panic disorder with agoraphobia.

    Science.gov (United States)

    Hagenaars, Muriel A; van Minnen, Agnes

    2005-01-01

    The present study investigated the effect of fear on paralinguistic aspects of speech in patients suffering from panic disorder with agoraphobia (N = 25). An experiment was conducted that comprised two modules: Autobiographical Talking and Script Talking. Each module consisted of two emotional conditions: Fearful and Happy. Speech was recorded digitally and analyzed using PRAAT, a computer program designed to extract paralinguistic measures from digitally recorded spoken sound. In addition to subjective fear, several speech characteristics were measured as a reflection of psychophysiology: rate of speech, mean pitch and pitch variability. Results show that in Autobiographical Talking speech was slower, had a lower pitch, and a lower pitch variability than in Script Talking. Pitch variability was lower in Fearful than in Happy speech. The findings indicate that paralinguistic aspects of speech, especially pitch variability, are promising measures to gain information about fear processing during the recollection of autobiographical memories.
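    The paralinguistic measures described above (speech rate, mean pitch, pitch variability) were extracted with PRAAT; a rough Python equivalent can be sketched with librosa, as below. The file name, pitch range, and the voiced-fraction speech-rate proxy are illustrative assumptions, not the study's procedure.

      import numpy as np
      import librosa

      # Load a mono recording (file name is a placeholder).
      y, sr = librosa.load("fearful_recall.wav", sr=None, mono=True)

      # Estimate a fundamental-frequency (F0) track with probabilistic YIN.
      f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=75.0, fmax=500.0, sr=sr)

      # Keep only voiced frames for the pitch statistics.
      voiced_f0 = f0[voiced_flag & ~np.isnan(f0)]

      mean_pitch = float(np.mean(voiced_f0))           # mean F0 in Hz
      pitch_variability = float(np.std(voiced_f0))     # F0 standard deviation in Hz

      # A crude proxy related to speech rate: the fraction of the recording
      # that is voiced (default pyin hop length is 512 samples).
      voiced_seconds = voiced_flag.sum() * (512 / sr)
      voiced_fraction = voiced_seconds / (len(y) / sr)

      print(mean_pitch, pitch_variability, voiced_fraction)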

  4. Enhancement of Non-Air Conducted Speech Based on Wavelet-Packet Adaptive Threshold

    Directory of Open Access Journals (Sweden)

    Xijing Jing

    2013-01-01

    Full Text Available This study developed a new kind of speech detecting method using millimeter wave (MMW) radar. Because of the advantages of the millimeter wave, this speech detecting method has great potential for wide application. However, the MMW-conducted speech has low intelligibility and poor audibility, since it is corrupted by additive combined noise. This paper therefore also developed a wavelet-packet thresholding algorithm, using both hard and soft thresholds to remove noise, based on the wavelet packet's good capability for analyzing time-frequency signals. Compared with traditional speech enhancement algorithms, results from both simulation and listening evaluation suggest that the proposed algorithm performs better at noise removal while the distortion of the MMW radar speech remains acceptable; the enhanced speech also sounds more pleasant to human listeners, improving over classical speech enhancement algorithms.
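    As a rough illustration of wavelet-packet thresholding with hard or soft thresholds, a sketch using PyWavelets follows. The universal threshold and the noise-level heuristic used here are generic choices, not the adaptive rule proposed in the paper.

      import numpy as np
      import pywt

      def wp_denoise(x, wavelet="db8", level=4, mode="soft"):
          """Wavelet-packet thresholding of a 1-D noisy speech signal x."""
          wp = pywt.WaveletPacket(data=x, wavelet=wavelet,
                                  mode="symmetric", maxlevel=level)
          nodes = wp.get_level(level, order="natural")

          # Estimate the noise level from the highest-frequency node
          # (a common heuristic; the paper's adaptive rule differs).
          sigma = np.median(np.abs(nodes[-1].data)) / 0.6745
          thresh = sigma * np.sqrt(2.0 * np.log(len(x)))

          # Threshold every terminal node, then reconstruct.
          for node in nodes:
              wp[node.path] = pywt.threshold(node.data, thresh, mode=mode)

          return wp.reconstruct(update=False)[: len(x)]

      # Example: denoise a noise-corrupted dummy signal.
      rng = np.random.default_rng(0)
      clean = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 8000))
      noisy = clean + 0.3 * rng.standard_normal(8000)
      enhanced = wp_denoise(noisy, mode="soft")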

  5. Metaphor Analysis of Chinese Premier Wen’s Cambridge Speech

    Institute of Scientific and Technical Information of China (English)

    LUO Luo

    2014-01-01

    Metaphor is more than an ostensible decoration of language. It is an integral part of human thought and of an ideologized worldview. This article analyzes the metaphor use in Chinese Premier Wen Jiabao’s speech at Cambridge in February 2009, in an attempt to display how the preferred metaphors serve the purpose of this speech and reflect Premier Wen’s construction of China’s situation.

  6. How charisma is perceived from speech. A multidimensional approach

    OpenAIRE

    Signorello, Rosario; D'Errico, Francesca; Poggi, Isabella; Demolin, Didier

    2012-01-01

    International audience; A leader's charisma is conveyed by multiple aspects of his perceivable behavior, among which are the acoustic-prosodic characteristics of speech. We present here a study on the perception of charisma in political speech that aims to investigate the notion of charisma and to validate a theoretical framework on a multidimensional scale of charisma perception. The study points out that a multidimensional approach to charisma makes it possible to better analyze which factors are related t...

  7. Brain-inspired speech segmentation for automatic speech recognition using the speech envelope as a temporal reference

    OpenAIRE

    Byeongwook Lee; Kwang-Hyun Cho

    2016-01-01

    Speech segmentation is a crucial step in automatic speech recognition because additional speech analyses are performed for each framed speech segment. Conventional segmentation techniques primarily segment speech using a fixed frame size for computational simplicity. However, this approach is insufficient for capturing the quasi-regular structure of speech, which causes substantial recognition failure in noisy environments. How does the brain handle quasi-regular structured speech and maintai...

  8. Listening with Your Eyes: The Importance of Speech-Related Gestures in the Language Classroom.

    Science.gov (United States)

    Harris, Tony

    2003-01-01

    Argues nonverbal communication (NVC) forms an important part of everyday speech transmission and should occupy a more central position in second and foreign language teaching than it currently does. Examines the role played by NVC in a three-turn conversational exchange and the literature supporting the notion that speech-related gestures have a…

  9. Family Worlds: Couple Satisfaction, Parenting Style, and Mothers' and Fathers' Speech to Young Children.

    Science.gov (United States)

    Pratt, Michael W.; And Others

    1992-01-01

    Investigated relations between certain family context variables and the conversational behavior of 36 parents who were playing with their 3 year olds. Transcripts were coded for types of conversational functions and structure of parent speech. Marital satisfaction was associated with aspects of parent speech. (LB)

  10. Speech Perception Engages a General Timer: Evidence from a Divided Attention Word Identification Task

    Science.gov (United States)

    Casini, Laurence; Burle, Boris; Nguyen, Noel

    2009-01-01

    Time is essential to speech. The duration of speech segments plays a critical role in the perceptual identification of these segments, and therefore in that of spoken words. Here, using a French word identification task, we show that vowels are perceived as shorter when attention is divided between two tasks, as compared to a single task control…

  11. Cerebellar tDCS dissociates the timing of perceptual decisions from perceptual change in speech

    NARCIS (Netherlands)

    Lametti, D.R.; Oostwoud Wijdenes, L.; Bonaiuto, J.; Bestmann, S.; Rothwell, J.C.

    2016-01-01

    Neuroimaging studies suggest that the cerebellum might play a role in both speech perception and speech perceptual learning. However, it remains unclear what this role is: does the cerebellum directly contribute to the perceptual decision? Or does it contribute to the timing of perceptual decisions?

  12. Automatic recognition of spontaneous emotions in speech using acoustic and lexical features

    NARCIS (Netherlands)

    Raaijmakers, S.; Truong, K.P.

    2008-01-01

    We developed acoustic and lexical classifiers, based on a boosting algorithm, to assess the separability on arousal and valence dimensions in spontaneous emotional speech. The spontaneous emotional speech data was acquired by inviting subjects to play a first-person shooter video game. Our acoustic

  13. Family Worlds: Couple Satisfaction, Parenting Style, and Mothers' and Fathers' Speech to Young Children.

    Science.gov (United States)

    Pratt, Michael W.; And Others

    1992-01-01

    Investigated relations between certain family context variables and the conversational behavior of 36 parents who were playing with their 3 year olds. Transcripts were coded for types of conversational functions and structure of parent speech. Marital satisfaction was associated with aspects of parent speech. (LB)

  14. A phonetic investigation of single word versus connected speech production in children with persisting speech difficulties relating to cleft palate.

    Science.gov (United States)

    Howard, Sara

    2013-03-01

    Objective: To investigate the phonetic and phonological parameters of speech production associated with cleft palate in single words and in sentence repetition in order to explore the impact of connected speech processes, prosody, and word juncture on word production across contexts. Participants: Two boys (aged 9 years 5 months and 11 years 0 months) with persisting speech impairments related to a history of unilateral cleft lip and palate formed the main focus of the study; three typical adult male speakers provided control data. Method: Audio, video, and electropalatographic recordings were made of the participants producing single words and repeating two sets of sentences. The data were transcribed and the electropalatographic recordings were analyzed to explore lingual-palatal contact patterns across the different speech conditions. Acoustic analysis was used to further inform the perceptual analysis and to make specific durational measurements. Results: The two boys' speech production differed across the speech conditions. Both boys showed typical and atypical phonetic features in their connected speech production. One boy, although often unintelligible, resembled the adult speakers more closely prosodically and in his specific connected speech behaviors at word boundaries. The second boy produced developmentally atypical phonetic adjustments at word boundaries that appeared to promote intelligibility at the expense of naturalness. Conclusion: For older children with persisting speech impairments, it is particularly important to examine specific features of connected speech production, including word juncture and prosody. Sentence repetition data provide useful information to this end, but further investigations encompassing detailed perceptual and instrumental analysis of real conversational data are warranted.

  15. Global Freedom of Speech

    DEFF Research Database (Denmark)

    Binderup, Lars Grassme

    2007-01-01

    , as opposed to a legal norm, that curbs exercises of the right to free speech that offend the feelings or beliefs of members from other cultural groups. The paper rejects the suggestion that acceptance of such a norm is in line with liberal egalitarian thinking. Following a review of the classical liberal...

  16. Speech and Hearing Therapy.

    Science.gov (United States)

    Sakata, Reiko; Sakata, Robert

    1978-01-01

    In the public school, the speech and hearing therapist attempts to foster child growth and development through the provision of services basic to awareness of self and others, management of personal and social interactions, and development of strategies for coping with the handicap. (MM)

  17. Perceptual learning in speech

    NARCIS (Netherlands)

    Norris, D.; McQueen, J.M.; Cutler, A.

    2003-01-01

    This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listener

  18. Speech and Language Delay

    Science.gov (United States)

    ... home affect my child’s language and speech? The brain has to work harder to interpret and use 2 languages, so it may take longer for children to start using either one or both of the languages they’re learning. It’s not unusual for a bilingual child to ...

  19. Mandarin Visual Speech Information

    Science.gov (United States)

    Chen, Trevor H.

    2010-01-01

    While the auditory-only aspects of Mandarin speech are heavily-researched and well-known in the field, this dissertation addresses its lesser-known aspects: The visual and audio-visual perception of Mandarin segmental information and lexical-tone information. Chapter II of this dissertation focuses on the audiovisual perception of Mandarin…

  20. Speech After Banquet

    Science.gov (United States)

    Yang, Chen Ning

    2013-05-01

    I am usually not so short of words, but the previous speeches have rendered me really speechless. I have known and admired the eloquence of Freeman Dyson, but I did not know that there is a hidden eloquence in my colleague George Sterman...

  1. Speech disfluency in centenarians.

    Science.gov (United States)

    Searl, Jeffrey P; Gabel, Rodney M; Fulks, J Steven

    2002-01-01

    Other than a single case presentation of a 105-year-old female, no other studies have addressed the speech fluency characteristics of centenarians. The purpose of this study was to provide descriptive information on the fluency characteristics of speakers between the ages of 100-103 years. Conversational speech samples from seven speakers were evaluated for the frequency and types of disfluencies and speech rate. The centenarian speakers had a disfluency rate similar to that reported for 70-, 80-, and early 90-year-olds. The types of disfluencies observed also were similar to those reported for younger elderly speakers (primarily whole word/phrase, or formulative fluency breaks). Finally, the speech rate data for the current group of speakers supports prior literature reports of a slower rate with advancing age, but extends the finding to centenarians. As a result of this activity, participants will be able to: (1) describe the frequency of disfluency breaks and the types of disfluencies exhibited by centenarian speakers, (2) describe the mean and range of speaking rates in centenarians, and (3) compare the present findings for centenarians to the fluency and speaking rate characteristics reported in the literature.

  2. Mandarin Visual Speech Information

    Science.gov (United States)

    Chen, Trevor H.

    2010-01-01

    While the auditory-only aspects of Mandarin speech are heavily-researched and well-known in the field, this dissertation addresses its lesser-known aspects: The visual and audio-visual perception of Mandarin segmental information and lexical-tone information. Chapter II of this dissertation focuses on the audiovisual perception of Mandarin…

  3. The Commercial Speech Doctrine.

    Science.gov (United States)

    Luebke, Barbara F.

    In its 1942 ruling in the "Valentine vs. Christensen" case, the Supreme Court established the doctrine that commercial speech is not protected by the First Amendment. In 1975, in the "Bigelow vs. Virginia" case, the Supreme Court took a decisive step toward abrogating that doctrine, by ruling that advertising is not stripped of…

  4. Cohesive and coherent connected speech deficits in mild stroke.

    Science.gov (United States)

    Barker, Megan S; Young, Breanne; Robinson, Gail A

    2017-05-01

    Spoken language production theories and lesion studies highlight several important prelinguistic conceptual preparation processes involved in the production of cohesive and coherent connected speech. Cohesion and coherence broadly connect sentences with preceding ideas and the overall topic. Broader cognitive mechanisms may mediate these processes. This study aims to investigate (1) whether stroke patients without aphasia exhibit impairments in cohesion and coherence in connected speech, and (2) the role of attention and executive functions in the production of connected speech. Eighteen stroke patients (8 right hemisphere stroke [RHS]; 6 left [LHS]) and 21 healthy controls completed two self-generated narrative tasks to elicit connected speech. A multi-level analysis of within and between-sentence processing ability was conducted. Cohesion and coherence impairments were found in the stroke group, particularly RHS patients, relative to controls. In the whole stroke group, better performance on the Hayling Test of executive function, which taps verbal initiation/suppression, was related to fewer propositional repetitions and global coherence errors. Better performance on attention tasks was related to fewer propositional repetitions, and decreased global coherence errors. In the RHS group, aspects of cohesive and coherent speech were associated with better performance on attention tasks. Better Hayling Test scores were related to more cohesive and coherent speech in RHS patients, and more coherent speech in LHS patients. Thus, we documented connected speech deficits in a heterogeneous stroke group without prominent aphasia. Our results suggest that broader cognitive processes may play a role in producing connected speech at the early conceptual preparation stage. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. ANALYSIS OF PRESIDENTIAL INAUGURAL ADDRESSES USING SEARLES TAXONOMY OF SPEECH ACTS

    Directory of Open Access Journals (Sweden)

    Paulus Widiatmoko

    2017-06-01

    Full Text Available This study analyzes the performance of Searle's speech acts in presidential inaugural addresses. Five presidential inaugural addresses taken as the samples of this study were analyzed using Searle's speech act taxonomy and the different distinctive features of illocutionary acts. The findings of this study revealed the frequency and comparison of the speech act performance. Each inaugural address possessed distinctive characteristics influenced by the sociopolitical, economic, and historical situation of the countries. In addition, some commonalities in relation to the performance of Searle's speech act taxonomy were also observed.

  6. Electronic Instruments -- Played or Used?

    Science.gov (United States)

    Ulveland, Randall Dana

    1998-01-01

    Compares the experience of playing an acoustic instrument to an electronic instrument by analyzing the constant structures and relationships between the experiences. Concludes that students' understanding of the physical experience of making music increases when experiences with acoustic instruments precede their exposure to electronic…

  7. Metaheuristic applications to speech enhancement

    CERN Document Server

    Kunche, Prajna

    2016-01-01

    This book serves as a basic reference for those interested in the application of metaheuristics to speech enhancement. The major goal of the book is to explain the basic concepts of optimization methods and their use in heuristic optimization in speech enhancement to scientists, practicing engineers, and academic researchers in speech processing. The authors discuss why it has been a challenging problem for researchers to develop new enhancement algorithms that aid in the quality and intelligibility of degraded speech. They present powerful optimization methods for speech enhancement that can help solve noise reduction problems. Readers will be able to understand the fundamentals of speech processing as well as the optimization techniques, how the speech enhancement algorithms are implemented by utilizing optimization methods, and will be given the tools to develop new algorithms. The authors also provide a comprehensive literature survey regarding the topic.

  8. Runs of homozygosity associated with speech delay in autism in a taiwanese han population: evidence for the recessive model.

    Directory of Open Access Journals (Sweden)

    Ping-I Lin

    Full Text Available Runs of homozygosity (ROH) may play a role in complex diseases. In the current study, we aimed to test if ROHs are linked to the risk of autism and related language impairment. We analyzed 546,080 SNPs in 315 Han Chinese affected with autism and 1,115 controls. ROH was defined as an extended homozygous haplotype spanning at least 500 kb. Relative extended haplotype homozygosity (REHH) for the trait-associated ROH region was calculated to search for the signature of selection sweeps. Totally, we identified 676 ROH regions. An ROH region on 11q22.3 was significantly associated with speech delay (corrected p = 1.73×10^(-8)). This region contains the NPAT and ATM genes associated with ataxia telangiectasia characterized by language impairment; the CUL5 (cullin 5) gene in the same region may modulate the neuronal migration process related to language functions. These three genes are highly expressed in the cerebellum. No evidence for recent positive selection was detected on the core haplotypes in this region. The same ROH region was also nominally significantly associated with speech delay in another independent sample (p = 0.037; combinatorial analysis Stouffer's z trend = 0.0005). Taken together, our findings suggest that extended recessive loci on 11q22.3 may play a role in language impairment in autism. More research is warranted to investigate if these genes influence speech pathology by perturbing cerebellar functions.

  9. Runs of homozygosity associated with speech delay in autism in a taiwanese han population: evidence for the recessive model.

    Science.gov (United States)

    Lin, Ping-I; Kuo, Po-Hsiu; Chen, Chia-Hsiang; Wu, Jer-Yuarn; Gau, Susan S-F; Wu, Yu-Yu; Liu, Shih-Kai

    2013-01-01

    Runs of homozygosity (ROH) may play a role in complex diseases. In the current study, we aimed to test if ROHs are linked to the risk of autism and related language impairment. We analyzed 546,080 SNPs in 315 Han Chinese affected with autism and 1,115 controls. ROH was defined as an extended homozygous haplotype spanning at least 500 kb. Relative extended haplotype homozygosity (REHH) for the trait-associated ROH region was calculated to search for the signature of selection sweeps. Totally, we identified 676 ROH regions. An ROH region on 11q22.3 was significantly associated with speech delay (corrected p = 1.73×10^(-8)). This region contains the NPAT and ATM genes associated with ataxia telangiectasia characterized by language impairment; the CUL5 (cullin 5) gene in the same region may modulate the neuronal migration process related to language functions. These three genes are highly expressed in the cerebellum. No evidence for recent positive selection was detected on the core haplotypes in this region. The same ROH region was also nominally significantly associated with speech delay in another independent sample (p = 0.037; combinatorial analysis Stouffer's z trend = 0.0005). Taken together, our findings suggest that extended recessive loci on 11q22.3 may play a role in language impairment in autism. More research is warranted to investigate if these genes influence speech pathology by perturbing cerebellar functions.
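    The ROH definition used in both records above (an extended homozygous stretch spanning at least 500 kb) can be implemented as a simple scan over sorted SNP calls. The sketch below is a deliberately simplified illustration; real ROH callers (e.g., PLINK) additionally tolerate occasional heterozygous or missing calls and apply SNP-density filters.

      def find_roh(positions, genotypes, min_span=500_000):
          """Return (start_bp, end_bp) spans of consecutive homozygous SNPs
          covering at least `min_span` base pairs on one chromosome.

          positions : list of SNP base-pair positions, sorted ascending
          genotypes : list of 0/1/2 allele counts; 0 and 2 are homozygous
          """
          runs, start_idx = [], None
          for i, g in enumerate(genotypes):
              homozygous = g in (0, 2)
              if homozygous and start_idx is None:
                  start_idx = i
              if (not homozygous or i == len(genotypes) - 1) and start_idx is not None:
                  end_idx = i if homozygous else i - 1
                  span = positions[end_idx] - positions[start_idx]
                  if span >= min_span:
                      runs.append((positions[start_idx], positions[end_idx]))
                  start_idx = None
          return runs

      # Toy example: a 600 kb homozygous stretch flanked by heterozygous calls.
      pos = [100_000, 300_000, 500_000, 700_000, 900_000, 1_000_000]
      gts = [1,       0,       2,       0,       2,       1]
      print(find_roh(pos, gts))   # [(300000, 900000)]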

  10. Speech disorders in students in Belo Horizonte.

    Science.gov (United States)

    Rabelo, Alessandra Terra Vasconcelos; Alves, Claudia Regina Lindgren; Goulart, Lúcia Maria H Figueiredo; Friche, Amélia Augusta de Lima; Lemos, Stela Maris Aguiar; Campos, Fernanda Rodrigues; Friche, Clarice Passos

    2011-12-01

    To describe speech disorders in students from 1st to 4th grades, and to investigate possible associations between these disorders and stomatognathic system and auditory processing disorders. Cross-sectional study with stratified random sample composed of 288 students, calculated based on an universe of 1,189 children enrolled in public schools from the area covered by a health center in Belo Horizonte. The median age was 8.9 years, and 49.7% were male. Assessment used a stomatognathic system protocol adapted from the Myofunctional Evaluation Guidelines, the Phonology task of the ABFW - Child Language Test, and a simplified auditory processing evaluation. Data were statistically analyzed. From the subjects studied, 31.9% had speech disorder. From these, 18% presented phonetic deviation, 9.7% phonological deviation, and 4.2% phonetic and phonological deviation. Linguistic variation was observed in 38.5% of the children. There was a higher proportion of children with phonetic deviation in 1st grade, and a higher proportion of children younger than 8 years old with both phonetic and phonological deviations. Phonetic deviation was associated to stomatognathic system disorder, and phonological deviation was associated to auditory processing disorder. The prevalence of speech disorders in 1st to 4th grade students is considered high. Moreover, these disorders are associated to other Speech-Language Pathology and Audiology alterations, which suggest that one disorder may be a consequence of the other, indicating the need for early diagnosis and intervention.

  11. CREATIVE STYLISTICS AND CREATIVE SPEECH TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    Natalia A. Kupina

    2016-01-01

    Full Text Available The article develops the linguo-aesthetic ideas of Professor V.P. Grigoriev that are connected to a wider interpretation of poetic language and the isolation of a ‘creatheme’ as a unit of poetic language. Analyzing data from the “Integrum” database, the article discovers specific characteristics of the lexical compatibility of the popular word kreativny (‘creative’). The study reconstructs a fragment of the worldview that reflects the current conventional understanding of the subjects and areas of creative activity, including creative speech, as well as the products of this activity prevailing at the present time. The article raises the question of the formation and development of creative stylistics: its object, subject, goals, and vector of development. For the purposes of specific stylistic analysis, the article examines isolated creathemes from the colloquial speech of factory workers, newspaper writing, official/business speech, advertising texts, applied poetry texts, and mass literature. The study includes an analysis of newspaper headlines that scrutinizes the phenomenon of paronymic attraction. In interpreting creative speech technologies in women's prose, we show, based on the ‘packaging material’ notion coined by V.P. Grigoriev, that creathemes with aesthetic meanings, which strengthen the axiological function of the sentence and/or text, serve as this sort of ‘packaging’ in writing.

  12. A Mobile Phone based Speech Therapist

    OpenAIRE

    Pandey, Vinod K.; Pande, Arun; Kopparapu, Sunil Kumar

    2016-01-01

    Patients with articulatory disorders often have difficulty in speaking. These patients need several speech therapy sessions to enable them speak normally. These therapy sessions are conducted by a specialized speech therapist. The goal of speech therapy is to develop good speech habits as well as to teach how to articulate sounds the right way. Speech therapy is critical for continuous improvement to regain normal speech. Speech therapy sessions require a patient to travel to a hospital or a ...

  13. Speech-on-speech masking with variable access to the linguistic content of the masker speech for native and nonnative english speakers.

    Science.gov (United States)

    Calandruccio, Lauren; Bradlow, Ann R; Dhar, Sumitrajit

    2014-04-01

    Masking release for an English sentence-recognition task in the presence of foreign-accented English speech compared with native-accented English speech was reported in Calandruccio et al (2010a). The masking release appeared to increase as the masker intelligibility decreased. However, it could not be ruled out that spectral differences between the speech maskers were influencing the significant differences observed. The purpose of the current experiment was to minimize spectral differences between speech maskers to determine how various amounts of linguistic information within competing speech affect masking release. A mixed-model design with within-subject (four two-talker speech maskers) and between-subject (listener group) factors was conducted. Speech maskers included native-accented English speech and high-intelligibility, moderate-intelligibility, and low-intelligibility Mandarin-accented English. Normalizing the long-term average speech spectra of the maskers to each other minimized spectral differences between the masker conditions. Three listener groups were tested, including monolingual English speakers with normal hearing, nonnative English speakers with normal hearing, and monolingual English speakers with hearing loss. The nonnative English speakers were from various native language backgrounds, not including Mandarin (or any other Chinese dialect). Listeners with hearing loss had symmetric mild sloping to moderate sensorineural hearing loss. Listeners were asked to repeat back sentences that were presented in the presence of four different two-talker speech maskers. Responses were scored based on the key words within the sentences (100 key words per masker condition). A mixed-model regression analysis was used to analyze the difference in performance scores between the masker conditions and listener groups. Monolingual English speakers with normal hearing benefited when the competing speech signal was foreign accented compared with native

  14. The play grid

    DEFF Research Database (Denmark)

    Fogh, Rune; Johansen, Asger

    2013-01-01

    In this paper we propose The Play Grid, a model for systemizing different play types. The approach is psychological by nature and the actual Play Grid is based, therefore, on two pairs of fundamental and widely acknowledged distinguishing characteristics of the ego, namely: extraversion vs...... at the Play Grid. Thus, the model has four quadrants, each of them describing one of four play types: the Assembler, the Director, the Explorer, and the Improviser. It is our hope that the Play Grid can be a useful design tool for making entertainment products for children....

  15. On Optimal Linear Filtering of Speech for Near-End Listening Enhancement

    DEFF Research Database (Denmark)

    Taal, Cees H.; Jensen, Jesper; Leijon, Arne

    2013-01-01

    In this letter the focus is on linear filtering of speech before degradation due to additive background noise. The goal is to design the filter such that the speech intelligibility index (SII) is maximized when the speech is played back in a known noisy environment. Moreover, a power constraint...... suboptimal. In this work we propose a nonlinear approximation of the SII which is accurate for all SNRs. Experiments show large intelligibility improvements with the proposed method over the unprocessed noisy speech and better performance than one state-of-the art method....

  16. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    Directory of Open Access Journals (Sweden)

    Antje eHeinrich

    2015-06-01

    Full Text Available Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50-74 years with mild SNHL were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise), to high (sentence perception in modulated noise); cognitive tests of attention, memory, and nonverbal IQ; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that auditory environments pose on

  17. IITKGP-SESC: Speech Database for Emotion Analysis

    Science.gov (United States)

    Koolagudi, Shashidhar G.; Maity, Sudhamay; Kumar, Vuppala Anil; Chakrabarti, Saswat; Rao, K. Sreenivasa

    In this paper, we are introducing the speech database for analyzing the emotions present in speech signals. The proposed database is recorded in Telugu language using the professional artists from All India Radio (AIR), Vijayawada, India. The speech corpus is collected by simulating eight different emotions using the neutral (emotion free) statements. The database is named as Indian Institute of Technology Kharagpur Simulated Emotion Speech Corpus (IITKGP-SESC). The proposed database will be useful for characterizing the emotions present in speech. Further, the emotion specific knowledge present in speech at different levels can be acquired by developing the emotion specific models using the features from vocal tract system, excitation source and prosody. This paper describes the design, acquisition, post processing and evaluation of the proposed speech database (IITKGP-SESC). The quality of the emotions present in the database is evaluated using subjective listening tests. Finally, statistical models are developed using prosodic features, and the discrimination of the emotions is carried out by performing the classification of emotions using the developed statistical models.

  18. Human phoneme recognition depending on speech-intrinsic variability.

    Science.gov (United States)

    Meyer, Bernd T; Jürgens, Tim; Wesker, Thorsten; Brand, Thomas; Kollmeier, Birger

    2010-11-01

    The influence of different sources of speech-intrinsic variation (speaking rate, effort, style and dialect or accent) on human speech perception was investigated. In listening experiments with 16 listeners, confusions of consonant-vowel-consonant (CVC) and vowel-consonant-vowel (VCV) sounds in speech-weighted noise were analyzed. Experiments were based on the OLLO logatome speech database, which was designed for a man-machine comparison. It contains utterances spoken by 50 speakers from five dialect/accent regions and covers several intrinsic variations. By comparing results depending on intrinsic and extrinsic variations (i.e., different levels of masking noise), the degradation induced by variabilities can be expressed in terms of the SNR. The spectral level distance between the respective speech segment and the long-term spectrum of the masking noise was found to be a good predictor for recognition rates, while phoneme confusions were influenced by the distance to spectrally close phonemes. An analysis based on transmitted information of articulatory features showed that voicing and manner of articulation are comparatively robust cues in the presence of intrinsic variations, whereas the coding of place is more degraded. The database and detailed results have been made available for comparisons between human speech recognition (HSR) and automatic speech recognizers (ASR).

  19. Role-Playing Mitosis.

    Science.gov (United States)

    Wyn, Mark A.; Stegink, Steven J.

    2000-01-01

    Introduces a role playing activity that actively engages students in the learning process of mitosis. Students play either chromosomes carrying information, or cells in the cell membrane. (Contains 11 references.) (Author/YDS)

  20. Play the Tuberculosis Game

    Science.gov (United States)

    About the game: Discover and experience some of the classic methods ...

  1. Play the MRI Game

    Science.gov (United States)

    About the game: In the MRI imaging technique, strong magnets and ...

  2. Play the Electrocardiogram Game

    Science.gov (United States)

    About the game: ECG is used for diagnosing heart conditions by ...

  3. Learning Through Play

    Science.gov (United States)

    ... play, such as using play dough, LEGOs, and board games. Toys such as puzzles, pegboards, beads, and lacing ... Building sets, books, bicycles, roller skates, ice skates, board games, checkers, beginning sports • Middle Schoolers and Adolescents: Athletics, ...

  4. Children, Time, and Play

    DEFF Research Database (Denmark)

    Elkind, David; Rinaldi, Carla; Flemmert Jensen, Anne;

    Proceedings from the conference "Children, Time, and Play". Danish University of Education, January 30th 2003.

  5. Role-Playing Mitosis.

    Science.gov (United States)

    Wyn, Mark A.; Stegink, Steven J.

    2000-01-01

    Introduces a role playing activity that actively engages students in the learning process of mitosis. Students play either chromosomes carrying information, or cells in the cell membrane. (Contains 11 references.) (Author/YDS)

  6. Play at Work

    DEFF Research Database (Denmark)

    Meier Sørensen, Bent; Spoelstra, Sverre

    2012-01-01

    The interest in organizational play is growing, both in popular business discourse and organization studies. As the presumption that play is dysfunctional for organizations is increasingly discarded, the existing positions may be divided into two camps; one proposes ‘serious play’ as an engine for business and the other insists that work and play are largely indistinguishable in the postindustrial organization. Our field study of a design and communications company in Denmark shows that organizational play can be much more than just functional to the organization. We identify three ways in which workplaces engage in play: play as a (serious) continuation of work, play as a (critical) intervention into work and play as an (uninvited) usurpation of work.

  7. Speech Motor Control in Fluent and Dysfluent Speech Production of an Individual with Apraxia of Speech and Broca's Aphasia

    Science.gov (United States)

    van Lieshout, Pascal H. H. M.; Bose, Arpita; Square, Paula A.; Steele, Catriona M.

    2007-01-01

    Apraxia of speech (AOS) is typically described as a motor-speech disorder with clinically well-defined symptoms, but without a clear understanding of the underlying problems in motor control. A number of studies have compared the speech of subjects with AOS to the fluent speech of controls, but only a few have included speech movement data and if…

  8. Playing with social identities

    DEFF Research Database (Denmark)

    Winther-Lindqvist, Ditte Alexandra

    2013-01-01

    This chapter offers support for Vygotsky’s claim that all play involves both an imagined situation as well as rules. Synthesising Schousboe’s comprehensive model of spheres of realities in playing (see Chapter 1, this volume) with Lev Vygotskys insight that all playing involve rules as well...

  9. Toddlers: Learning by Playing

    Science.gov (United States)

    ... child's play, but toddlers are hard at work learning important physical skills as they gain muscle control, ...

  10. Playing against the Game

    Science.gov (United States)

    Remmele, Bernd

    2017-01-01

    The paper first outlines a differentiation of play/game-motivations that include "negative" attitudes against the play/game itself, such as cheating or spoilsporting. This problem is of particular importance with regard to learning games because they are not "played" for themselves--at least in the first place--but due to an…

  11. Play the Mosquito Game

    Science.gov (United States)

    About the games: Malaria is one of the world's most common ...

  12. (Steering) interactive play behavior

    NARCIS (Netherlands)

    Delden, van Robertus Wilhelmus

    2017-01-01

    Play is a powerful means to have an impact on the cognitive, social-emotional, and/or motor skills development. The introduction of technology brings new possibilities to provide engaging and entertaining whole-body play activities. Technology mediates the play activities and in this way changes how

  13. (Steering) interactive play behavior

    NARCIS (Netherlands)

    van Delden, Robertus Wilhelmus

    2017-01-01

    Play is a powerful means to have an impact on the cognitive, social-emotional, and/or motor skills development. The introduction of technology brings new possibilities to provide engaging and entertaining whole-body play activities. Technology mediates the play activities and in this way changes how

  14. Speech processing system demonstrated by positron emission tomography (PET). A review of the literature

    Energy Technology Data Exchange (ETDEWEB)

    Hirano, Shigeru; Naito, Yasushi; Kojima, Hisayoshi [Kyoto Univ. (Japan)

    1996-03-01

    We review the literature on speech processing in the central nervous system as demonstrated by positron emission tomography (PET). Activation study using PET has been proved to be a useful and non-invasive method of investigating the speech processing system in normal subjects. In speech recognition, the auditory association areas and lexico-semantic areas called Wernicke's area play important roles. Broca's area, motor areas, supplementary motor cortices and the prefrontal area have been proved to be related to speech output. Visual speech stimulation activates not only the visual association areas but also the temporal region and prefrontal area, especially in lexico-semantic processing. Higher level speech processing, such as conversation which includes auditory processing, vocalization and thinking, activates broad areas in both hemispheres. This paper also discusses problems to be resolved in the future. (author) 42 refs.

  15. Internet video telephony allows speech reading by deaf individuals and improves speech perception by cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Georgios Mantokoudis

    Full Text Available OBJECTIVE: To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. METHODS: Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. RESULTS: Higher frame rate (>7 fps), higher camera resolution (>640 × 480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI-users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). CONCLUSION: Webcameras have the potential to improve telecommunication of hearing-impaired individuals.

  16. Noise estimation Algorithms for Speech Enhancement in highly non-stationary Environments

    Directory of Open Access Journals (Sweden)

    Anuradha R Fukane

    2011-03-01

    Full Text Available A noise estimation algorithm plays an important role in speech enhancement. Speech enhancement is needed for automatic speaker recognition, man-machine communication, voice recognition systems, speech coders, hearing aids, video conferencing, and many other applications related to speech processing. All of these are real-world systems whose only available input is the noisy speech signal, so the noise component must be removed from the noisy speech before the enhanced speech signal can be applied to these systems. In most speech enhancement algorithms, it is assumed that an estimate of the noise spectrum is available. The noise estimate is a critical part of, and important for, speech enhancement algorithms. If the noise estimate is too low, annoying residual noise will remain audible; if the noise estimate is too high, speech will be distorted and lose intelligibility. This paper focuses on different approaches to noise estimation. Section I is the introduction, Section II explains a simple voice activity detector (VAD) approach to noise estimation, Section III explains different classes of noise estimation algorithms, Section IV explains the performance evaluation of noise estimation algorithms, and Section V concludes.
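    To make the VAD-based approach of Section II concrete, a generic sketch follows: the noise power spectrum is updated by recursive averaging only in frames that a crude energy-based voice activity detector labels as speech-absent. The smoothing factor and energy margin are illustrative assumptions, not values from the surveyed algorithms.

      import numpy as np

      def estimate_noise_psd(frames, alpha=0.9, vad_margin=2.0):
          """Recursively average the power spectrum of noise-only frames.

          frames : 2-D array (n_frames, frame_len) of windowed time-domain frames
          alpha  : smoothing factor for the recursive average
          vad_margin : a frame is called "speech" if its energy exceeds
                       vad_margin times the running noise energy (crude VAD)
          """
          noise_psd = np.abs(np.fft.rfft(frames[0])) ** 2   # initialize from first frame
          noise_energy = noise_psd.sum()
          for frame in frames[1:]:
              psd = np.abs(np.fft.rfft(frame)) ** 2
              if psd.sum() < vad_margin * noise_energy:     # speech judged absent
                  noise_psd = alpha * noise_psd + (1.0 - alpha) * psd
                  noise_energy = noise_psd.sum()
              # else: speech present, keep the previous noise estimate
          return noise_psd

      # Toy usage with random "noisy speech" frames of 256 samples each.
      rng = np.random.default_rng(0)
      frames = rng.standard_normal((100, 256)) * 0.1
      noise_psd = estimate_noise_psd(frames)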

  17. Error analysis to improve the speech recognition accuracy on Telugu language

    Indian Academy of Sciences (India)

    N Usha Rani; P N Girija

    2012-12-01

    Speech is one of the most important communication channels among people. Speech recognition occupies a prominent place in communication between humans and machines. Several factors affect the accuracy of a speech recognition system. Much effort has been devoted to increasing the accuracy of speech recognition systems, yet current systems still generate erroneous output. Telugu is one of the most widely spoken south Indian languages. In the proposed Telugu speech recognition system, errors obtained from the decoder are analysed to improve the performance of the speech recognition system. The static pronunciation dictionary plays a key role in speech recognition accuracy. Modifications should be performed in the dictionary used in the decoder of the speech recognition system. This modification reduces the number of confusion pairs, which improves the performance of the speech recognition system. Language model scores also vary with this modification. The hit rate increases considerably, and the false alarms change, as the pronunciation dictionary is modified. Variations are observed in different error measures such as F-measure, error rate and Word Error Rate (WER) on application of the proposed method.
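    Since the record reports changes in Word Error Rate (WER) after modifying the pronunciation dictionary, the standard edit-distance WER computation is sketched below; it is generic and not specific to the Telugu system described, and the example words are placeholders.

      def word_error_rate(reference, hypothesis):
          """WER = (substitutions + deletions + insertions) / reference length."""
          ref, hyp = reference.split(), hypothesis.split()
          # Dynamic-programming edit distance between the two word sequences.
          d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
          for i in range(len(ref) + 1):
              d[i][0] = i
          for j in range(len(hyp) + 1):
              d[0][j] = j
          for i in range(1, len(ref) + 1):
              for j in range(1, len(hyp) + 1):
                  cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                  d[i][j] = min(d[i - 1][j] + 1,         # deletion
                                d[i][j - 1] + 1,         # insertion
                                d[i - 1][j - 1] + cost)  # substitution or match
          return d[len(ref)][len(hyp)] / max(len(ref), 1)

      # Example with placeholder transliterated words (1 substitution in 3 words).
      print(word_error_rate("naaku telugu vachu", "naaku telugu raadu"))  # 0.333...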

  18. A scheme of improving the quality of speech mixing in multi-media conference system

    Science.gov (United States)

    Lu, Meilian; Xu, Jiang; Chen, Pengfei

    2013-03-01

    The factors that influence voice quality in a multi-media conference over an IP network, such as delay and the accumulation of delay, are analyzed. A scheme that adopts a mixing buffer and a uniform mixing period to increase the quality of the mixed speech is proposed. A prototype multi-media conference system using this scheme shows that the quality issue of speech mixing can be solved well, and that the speech quality is good on a local area network.
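    To make the mixing step concrete, the sketch below sums equal-length PCM frames from several participants over one mixing period and guards against overflow by clipping. This is a generic illustration, not the paper's buffered scheme; in practice each listener's own stream would be excluded from the mix sent back to that listener.

      import numpy as np

      def mix_frames(frames, sample_width_bits=16):
          """Mix equal-length int16 PCM frames from several participants.

          frames : list of 1-D numpy int16 arrays, one per participant,
                   all covering the same mixing period.
          """
          peak = 2 ** (sample_width_bits - 1) - 1
          # Sum in a wider dtype to avoid intermediate overflow.
          mixed = np.sum([f.astype(np.int32) for f in frames], axis=0)
          # Clip back into the 16-bit range (simple overflow guard).
          return np.clip(mixed, -peak - 1, peak).astype(np.int16)

      # Toy usage: three participants, one 20 ms frame at 8 kHz (160 samples).
      rng = np.random.default_rng(0)
      participants = [rng.integers(-3000, 3000, 160).astype(np.int16) for _ in range(3)]
      frame_for_listener = mix_frames(participants)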

  19. Hate Speech: Power in the Marketplace.

    Science.gov (United States)

    Harrison, Jack B.

    1994-01-01

A discussion of hate speech and freedom of speech on college campuses examines what distinguishes hate speech from normal, objectionable interpersonal comments and looks at Supreme Court decisions on the limits of student free speech. Two cases specifically concerning regulation of hate speech on campus are considered: Chaplinsky v. New…

  20. Playing with social identities

    DEFF Research Database (Denmark)

    Winther-Lindqvist, Ditte Alexandra

    2013-01-01

This chapter offers support for Vygotsky's claim that all play involves both an imagined situation and rules. Synthesising Schousboe's comprehensive model of spheres of realities in playing (see Chapter 1, this volume) with Lev Vygotsky's insight that all playing involves rules as well...... as pretence, children's play is understood as an activity involving rules of the social order (roles and positions) as well as identification processes (imagined situations). The theoretical argumentation builds on empirical examples obtained in two different Danish day-care centres. The chapter is informed...... by ethnographic observations and draws on illustrative examples of symbolic group play as well as game-play with rules (soccer) among 5-year-old boys. Findings suggest that day-care children's play involves negotiation of roles, positioning and identification, and rules – and that these negotiations...

  1. Study of acoustic correlates associate with emotional speech

    Science.gov (United States)

    Yildirim, Serdar; Lee, Sungbok; Lee, Chul Min; Bulut, Murtaza; Busso, Carlos; Kazemzadeh, Ebrahim; Narayanan, Shrikanth

    2004-10-01

This study investigates the acoustic characteristics of four different emotions expressed in speech. The aim is to obtain detailed acoustic knowledge on how a speech signal is modulated by changes from neutral to a certain emotional state. Such knowledge is necessary for automatic emotion recognition and classification and emotional speech synthesis. Speech data obtained from two semi-professional actresses are analyzed and compared. Each subject produces 211 sentences with four different emotions: neutral, sad, angry, happy. We analyze changes in temporal and acoustic parameters such as magnitude and variability of segmental duration, fundamental frequency and the first three formant frequencies as a function of emotion. Acoustic differences among the emotions are also explored with mutual information computation, multidimensional scaling and acoustic likelihood comparison with normal speech. Results indicate that speech associated with anger and happiness is characterized by longer duration, shorter interword silence, higher pitch and rms energy with wider ranges. Sadness is distinguished from other emotions by lower rms energy and longer interword silence. Interestingly, the difference in formant pattern between [happiness/anger] and [neutral/sadness] is better reflected in back vowels such as /a/(/father/) than in front vowels. Detailed results on intra- and interspeaker variability will be reported.
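
    Two of the acoustic parameters discussed above, frame RMS energy and fundamental frequency, can be extracted with a few lines of code. This is a generic sketch (an autocorrelation pitch estimator with an assumed 75-400 Hz search range), not the analysis pipeline used in the study.

```python
import numpy as np

def frame_rms(frame):
    """Root-mean-square energy of one frame."""
    return float(np.sqrt(np.mean(frame ** 2)))

def frame_f0(frame, sr, fmin=75, fmax=400):
    """Crude fundamental-frequency estimate from the autocorrelation peak
    inside an assumed pitch range of fmin..fmax Hz."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 16000
t = np.arange(0, 0.04, 1 / sr)               # one 40 ms frame
frame = 0.5 * np.sin(2 * np.pi * 200 * t)    # synthetic 200 Hz "voiced" frame
print(round(frame_rms(frame), 3), round(frame_f0(frame, sr), 1))  # ~0.354, ~200.0 Hz
```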

  2. VARIATIONS OF DIRECTIVE SPEECH ACT IN TEMBANG DOLANAN

    Directory of Open Access Journals (Sweden)

    Daru Winarti

    2015-10-01

Full Text Available This article discusses the directive speech acts contained in tembang dolanan. Using a pragmatic approach, particularly the framework of speech act theory, this article analyzes the different types of directive speech acts, the contexts in which they are embodied, and their level of politeness. The data used in this research consisted of various tembang dolanan that contain directive statements. These data were analyzed using interpretation and inference and are presented as a descriptive analysis, which systematically illustrates and elaborates the facts and the relationships between phenomena. In the dolanan songs, directive speech acts can be expressed directly or indirectly. Direct expression is conventionally used to command, invite, and urge, while in indirect expression the command is conveyed not through an imperative sentence but through declarative sentences, obligation-stating sentences, and questions. The use of direct speech acts generally lacks politeness value because they tend to contain elements of coercion, make no effort to obscure the form of an order, and display the superiority of the speaker. On the other hand, the use of indirect speech acts appears to be an attempt to obscure the command so as to be more polite, in the hope that the addressee will respond to the command willingly.

  3. Performativity and Hate Speech: Expressions of Male Homosexuality in Cali

    Directory of Open Access Journals (Sweden)

    Andrés Felipe Castelar

    2012-12-01

Full Text Available This article analyzes the speech of a group of self-identified homosexual men in Cali in which they refer to the visibilization of homosexuality and its consequences. Distancing itself from explanations of such speech as “internal homophobia” or “endo-discrimination,” the current study instead utilizes Judith Butler’s concept of performativity in three categories of analysis: allegory, implicit and explicit norms, and desire/aversion. Based on this analysis, this hate speech can be read as a succession of performative acts that constitute an idealized subject while simultaneously reaffirming a desired yet threatened masculinity. These performative acts not only allow the speakers to defend themselves against a perceived persecution for a sexual abjection that threatens heteronormativity, but also allow a strong component of homoerotic desire to be embedded within such speech.

  4. MEASURABILITY OF ORAL SPEECH SAMPLE AS A TEST QUALITY

    Directory of Open Access Journals (Sweden)

    Olena Petrashchuk

    2011-03-01

Full Text Available Abstract. The article deals with the problem of the measurability of an oral speech sample as a test quality. This quality is required for the reliable assessment of speaking skills. The main focus is on the specific nature of the speaking skill, including its mental, communicative and social aspects. The assessment of speaking skills is analyzed through the prism of the descriptors of the rating scales proposed in ICAO documents. The oral proficiency interview method is applied to obtain an oral speech sample measurable against the scales. The measurability of the oral speech sample is considered a Speaking Test quality alongside other test qualities such as validity and reliability. Keywords: aviation English language proficiency, ICAO rating scale, measurability of oral speech performance, oral speech sample, speaking skill.

  5. Variation and Synthetic Speech

    CERN Document Server

    Miller, C; Massey, N; Miller, Corey; Karaali, Orhan; Massey, Noel

    1997-01-01

We describe the approach to linguistic variation taken by the Motorola speech synthesizer. A pan-dialectal pronunciation dictionary is described, which serves as the training data for a neural network based letter-to-sound converter. Subsequent to dictionary retrieval or letter-to-sound generation, pronunciations are submitted to a neural network based postlexical module. The postlexical module has been trained on aligned dictionary pronunciations and hand-labeled narrow phonetic transcriptions. This architecture permits the learning of individual postlexical variation, and can be retrained for each speaker whose voice is being modeled for synthesis. Learning variation in this way can result in greater naturalness for the synthetic speech that is produced by the system.

  6. Speech is Golden

    DEFF Research Database (Denmark)

    Juel Henrichsen, Peter

    2014-01-01

    Most of the Danish municipalities are ready to begin to adopt automatic speech recognition, but at the same time remain nervous following a long series of bad business cases in the recent past. Complaints are voiced over costly licences and low service levels, typical effects of a de facto monopoly...... on the supply side. The present article reports on a new public action strategy which has taken shape in the course of 2013-14. While Denmark is a small language area, our public sector is well organised and has considerable purchasing power. Across this past year, Danish local authorities have organised around...... of the present article, in the role of economically neutral advisers. The aim of the initiative is to pave the way for the first profitable contract in the field - which we hope to see in 2014 - an event which would precisely break the present deadlock and open up a billion EUR market for speech technology...

  7. [Improving speech comprehension using a new cochlear implant speech processor].

    Science.gov (United States)

    Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A

    2009-06-01

The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who demonstrated speech comprehension performance with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions of at least 80% correct scores and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improved signal-to-noise ratio for speech comprehension thresholds (i.e., signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg

  8. Neurophysiology of speech differences in childhood apraxia of speech.

    Science.gov (United States)

    Preston, Jonathan L; Molfese, Peter J; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia R; Landi, Nicole

    2014-01-01

    Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.

  9. Speech Impairments in Intellectual Disability: An Acoustic Study

    Directory of Open Access Journals (Sweden)

    Sumanlata Gautam

    2016-08-01

Full Text Available Speech is the primary means of human communication. Speech production starts at an early age and matures as children grow. People with intellectual or learning disabilities have deficits in speech production and face difficulties in communication. These people need tailor-made therapies or training for rehabilitation so that they can lead their lives independently. To provide such special training, it is important to know the exact nature of the speech impairment through acoustic analysis. This study calculated spectro-temporal features, relevant to brain structures and encoded at short and long timescales, in the speech of 82 subjects: 32 typically developing children, 20 adults, and 30 participants with intellectual disabilities (severity ranging from mild to moderate). The results revealed that short timescales, which encode information such as formant transitions, differed significantly between the typically developing group and the intellectually disabled group, whereas long timescales were similar across groups. The short timescales also differed significantly within the typically developing group but not within the intellectually disabled group. The findings suggest that the features encoded at short timescales and the ratio (short/long) play a significant role in classifying the groups. It is shown that classifier models with good accuracy can be constructed using the acoustic features under investigation, which indicates that these features are relevant for differentiating normal and disordered speech. Such classification models can help in the early diagnosis of intellectual or learning disabilities.
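
    As a rough illustration of features measured at two timescales, the sketch below computes mean log energy over short and long analysis windows and their short/long ratio. The window lengths and the synthetic test signal are assumptions; the study's actual spectro-temporal features are more elaborate.

```python
import numpy as np

def timescale_energies(signal, sr, short_ms=25, long_ms=250):
    """Mean log energy of the signal over short and long analysis windows,
    plus their ratio (short/long), as a crude stand-in for timescale features."""
    def windowed_log_energy(win_ms):
        win = int(sr * win_ms / 1000)
        n = len(signal) // win
        frames = signal[: n * win].reshape(n, win)
        return float(np.mean(np.log(np.sum(frames ** 2, axis=1) + 1e-12)))
    short, long_ = windowed_log_energy(short_ms), windowed_log_energy(long_ms)
    return short, long_, short / long_

sr = 16000
rng = np.random.default_rng(4)
# noise with a slow 4 Hz amplitude modulation, loosely mimicking a speech envelope
speech_like = rng.normal(0, 0.1, sr) * (1 + np.sin(2 * np.pi * 4 * np.arange(sr) / sr))
print(tuple(round(v, 3) for v in timescale_energies(speech_like, sr)))
```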

  10. Hiding Information under Speech

    Science.gov (United States)

    2005-12-12

as it arrives in real time, and it disappears as fast as it arrives. Furthermore, our cognitive process for translating audio sounds to the meaning... steganography, whose goal is to make the embedded data completely undetectable. In addition, we must dismiss the idea of hiding data by using any... therefore, an image has more room to hide data; and (2) speech steganography has not led to many money-making commercial businesses. For these two

  11. Speech Quality Measurement

    Science.gov (United States)

    1977-06-10

…noise test, t=2 for the low-pass filter test, and t=3 for the ADPCM coding test; s is the subject number… a separate speech quality laboratory controlled by the NOVA 830 computer. Each of the stations has a CRT, 15 response buttons, a "red button

  12. Binary Masking & Speech Intelligibility

    DEFF Research Database (Denmark)

    Boldt, Jesper

The purpose of this thesis is to examine how binary masking can be used to increase intelligibility in situations where hearing impaired listeners have difficulties understanding what is being said. The major part of the experiments carried out in this thesis can be categorized as either experime...... mask using a directional system and a method for correcting errors in the target binary mask. The last part of the thesis proposes a new method for objective evaluation of speech intelligibility.
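
    A common reference point for binary masking is the ideal binary mask, which keeps a time-frequency unit when its local target-to-noise ratio exceeds a threshold. The sketch below computes such a mask from separate target and noise signals with a simple STFT; the local criterion of 0 dB and the frame parameters are assumptions, not values taken from the thesis.

```python
import numpy as np

def stft_mag2(x, frame=256, hop=128):
    """Squared-magnitude STFT with a Hann window (helper for the mask)."""
    win = np.hanning(frame)
    n = 1 + (len(x) - frame) // hop
    frames = np.stack([x[i * hop:i * hop + frame] * win for i in range(n)])
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

def ideal_binary_mask(target, noise, lc_db=0.0):
    """1 where the local SNR of a time-frequency unit exceeds lc_db, else 0."""
    t_pow, n_pow = stft_mag2(target), stft_mag2(noise)
    local_snr_db = 10 * np.log10(t_pow / (n_pow + 1e-12) + 1e-12)
    return (local_snr_db > lc_db).astype(int)

rng = np.random.default_rng(1)
target = np.sin(2 * np.pi * 500 * np.arange(16000) / 16000)   # "target" tone
noise = rng.normal(0, 0.5, 16000)                             # masking noise
mask = ideal_binary_mask(target, noise)
print(mask.shape, mask.mean())   # fraction of retained time-frequency units
```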

  13. Spoken Word Recognition of Chinese Words in Continuous Speech

    Science.gov (United States)

    Yip, Michael C. W.

    2015-01-01

The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending positions of Cantonese syllables than others, this kind of probabilistic information about syllables may cue the locations…

  14. Methodology and Resources of the Itinerant Speech and Hearing Teacher

    Science.gov (United States)

    Carrion-Martinez, Jose J; de la Rosa, Antonio Luque

    2013-01-01

Introduction: Twenty years of practice and professional development have passed since the emergence of the itinerant speech and hearing teacher, so it seems appropriate to reflect on the role this figure has been playing, in order to identify what should be considered to improve the approach adopted and to promote the quality of its educational…

  15. Application of the wavelet transform for speech processing

    Science.gov (United States)

    Maes, Stephane

    1994-01-01

    Speaker identification and word spotting will shortly play a key role in space applications. An approach based on the wavelet transform is presented that, in the context of the 'modulation model,' enables extraction of speech features which are used as input for the classification process.
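
    A generic sketch of wavelet-based feature extraction in this spirit (not the author's modulation-model features): a multi-level discrete wavelet decomposition of a speech frame, with the log energy of each subband used as a feature vector. The wavelet family and decomposition depth are assumptions; the example uses the PyWavelets package.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_features(frame, wavelet="db4", level=4):
    """Log subband energies of a multi-level discrete wavelet decomposition."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

sr = 16000
t = np.arange(0, 0.032, 1 / sr)                          # one 32 ms frame
frame = np.sin(2 * np.pi * 300 * t) + 0.1 * np.random.randn(len(t))
print(wavelet_features(frame))                           # (level + 1) feature values
```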

  16. An early influence of common ground during speech planning

    NARCIS (Netherlands)

    Vanlangendonck, F.; Willems, R.M.; Menenti, L.M.E.; Hagoort, P.

    2016-01-01

    In order to communicate successfully, speakers have to take into account which information they share with their addressee, i.e. common ground. In the current experiment we investigated how and when common ground affects speech planning by tracking speakers' eye movements while they played a

  18. Facial speech gestures: the relation between visual speech processing, phonological awareness, and developmental dyslexia in 10-year-olds.

    Science.gov (United States)

    Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Friederici, Angela D

    2016-11-01

    Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing visual components of speech facilitates speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown to be associated with impairments in reading and spelling (i.e. developmental dyslexia), but visual aspects of phoneme processing have not been investigated in individuals with such deficits. The present study analyzed the passive visual Mismatch Response (vMMR) in school children with and without developmental dyslexia in response to video-recorded mouth movements pronouncing syllables silently. Our results reveal that both groups of children showed processing of visual speech stimuli, but with different scalp distribution. Children without developmental dyslexia showed a vMMR with typical posterior distribution. In contrast, children with developmental dyslexia showed a vMMR with anterior distribution, which was even more pronounced in children with severe phonological deficits and very low spelling abilities. As anterior scalp distributions are typically reported for auditory speech processing, the anterior vMMR of children with developmental dyslexia might suggest an attempt to anticipate potentially upcoming auditory speech information in order to support phonological processing, which has been shown to be deficient in children with developmental dyslexia. © 2015 John Wiley & Sons Ltd.

  19. Abortion and compelled physician speech.

    Science.gov (United States)

    Orentlicher, David

    2015-01-01

    Informed consent mandates for abortion providers may infringe the First Amendment's freedom of speech. On the other hand, they may reinforce the physician's duty to obtain informed consent. Courts can promote both doctrines by ensuring that compelled physician speech pertains to medical facts about abortion rather than abortion ideology and that compelled speech is truthful and not misleading. © 2015 American Society of Law, Medicine & Ethics, Inc.

  20. Noise Reduction in Car Speech

    OpenAIRE

    V. Bolom

    2009-01-01

    This paper presents properties of chosen multichannel algorithms for speech enhancement in a noisy environment. These methods are suitable for hands-free communication in a car cabin. Criteria for evaluation of these systems are also presented. The criteria consider both the level of noise suppression and the level of speech distortion. The performance of multichannel algorithms is investigated for a mixed model of speech signals and car noise and for real signals recorded in a car. 

  1. Speech recognition in university classrooms

    OpenAIRE

    Wald, Mike; Bain, Keith; Basson, Sara H

    2002-01-01

    The LIBERATED LEARNING PROJECT (LLP) is an applied research project studying two core questions: 1) Can speech recognition (SR) technology successfully digitize lectures to display spoken words as text in university classrooms? 2) Can speech recognition technology be used successfully as an alternative to traditional classroom notetaking for persons with disabilities? This paper addresses these intriguing questions and explores the underlying complex relationship between speech recognition te...

  2. Noise Reduction in Car Speech

    Directory of Open Access Journals (Sweden)

    V. Bolom

    2009-01-01

    Full Text Available This paper presents properties of chosen multichannel algorithms for speech enhancement in a noisy environment. These methods are suitable for hands-free communication in a car cabin. Criteria for evaluation of these systems are also presented. The criteria consider both the level of noise suppression and the level of speech distortion. The performance of multichannel algorithms is investigated for a mixed model of speech signals and car noise and for real signals recorded in a car. 

  3. Late Modern Play Culture

    DEFF Research Database (Denmark)

    Skovbjerg-Karoff, Helle

    2008-01-01

Children's play and culture have changed over recent years, and it is possible to understand these changes as the result of more general changes in society. We witness many changes connected to demographic aspects of children's lives. First of all, it is a fact that large groups....... They are changing play arenas in order to find the identity which suits them. In order to play, children must know and be conscious of the cultural heritage which contains knowledge of how to organize a playing session, the aesthetics and the techniques of playing, and this is something that is handed down...... from one generation to the next. Because older children are no longer present as younger children grow up, the traditional "cultural leaders" are gone. They have taken with them much of the inspiration for play as well as important knowledge about how to organise a game. In that sense we can say...

  4. Play, dreams, and creativity.

    Science.gov (United States)

    Oremland, J D

    1998-01-01

Viewed ontogenetically, creating, dreaming, and playing are a variant of object relatedness. It is suggested that, in recapitulating the ontogenetic sequence, creating, dreaming, and playing each, as a process, is initiated by de-differentiation to primal union, evolves into transitional functioning, and is consummated in tertiary cognitive discourse. The products of the triad--the created object, the dream, and play--are viewed as synergistic psychodynamic composites of topical, personal, and archetypal imperatives. Creating, dreaming, and playing are easily overburdened by events, becoming stereotypical and repetitious. Nowhere is this more clearly seen than in the play of chronically ill, hospitalized children. It is suggested that with development generally, playing is replaced by formalized games; only dreaming continues as the vestige of early creative abilities.

  5. Now you hear it, now you don't: vowel devoicing in Japanese infant-directed speech.

    Science.gov (United States)

    Fais, Laurel; Kajikawa, Sachiyo; Amano, Shigeaki; Werker, Janet F

    2010-03-01

    In this work, we examine a context in which a conflict arises between two roles that infant-directed speech (IDS) plays: making language structure salient and modeling the adult form of a language. Vowel devoicing in fluent adult Japanese creates violations of the canonical Japanese consonant-vowel word structure pattern by systematically devoicing particular vowels, yielding surface consonant clusters. We measured vowel devoicing rates in a corpus of infant- and adult-directed Japanese speech, for both read and spontaneous speech, and found that the mothers in our study preserve the fluent adult form of the language and mask underlying phonological structure by devoicing vowels in infant-directed speech at virtually the same rates as those for adult-directed speech. The results highlight the complex interrelationships among the modifications to adult speech that comprise infant-directed speech, and that form the input from which infants begin to build the eventual mature form of their native language.

  6. Play and virtuality

    Directory of Open Access Journals (Sweden)

    Svein Sando

    2010-07-01

Full Text Available The similarities between virtuality and play are obvious, beginning with, for instance, the ubiquitous character of both. This paper deals with how insights from research on play can be used to enlighten our understanding of the ethical dimensions of activities in cyberspace, and vice versa. In particular, a central claim that play is beyond vice and virtue is debated and contested. http://dx.doi.org/10.5324/eip.v4i2.1762

  7. Why do Dolphins Play?

    Directory of Open Access Journals (Sweden)

    Stan A. Kuczaj

    2014-05-01

    Full Text Available Play is an important aspect of dolphin life, perhaps even an essential one. Play provides opportunities for dolphin calves to practice and perfect locomotor skills, including those involved in foraging and mating strategies and behaviors. Play also allows dolphin calves to learn important social skills and acquire information about the characteristics and predispositions of members of their social group, particularly their peers. In addition to helping dolphin calves learn how to behave, play also provides valuable opportunities for them to learn how to think. The ability to create and control play contexts enables dolphins to create novel experiences for themselves and their playmates under relatively safe conditions. The behavioral variability and individual creativity that characterize dolphin play yield ample opportunities for individual cognitive development as well as social learning, and sometimes result in innovations that are reproduced by other members of the group. Although adults sometimes produce innovative play, calves are the primary source of such innovations. Calves are also more likely to imitate novel play behaviors than are adults, and so calves contribute significantly to both the creation and transmission of novel play behaviors within a group. Not unexpectedly, then, the complexity of dolphin play increases with the involvement of peers. As a result, the opportunity to observe and/or interact with other dolphin calves enhances the effects of play on the acquisition and maintenance of flexible problem solving skills, the emergence and strengthening of social and communicative competencies, and the establishment of social relationships. It seems that play may have evolved to help young dolphins learn to adapt to novel situations in both their physical and social worlds, the beneficial result being a set of abilities that increases the likelihood that an individual survives and reproduces.

  8. PlayBook三人行 [Three PlayBook Users]

    Institute of Scientific and Technical Information of China (English)

    黑莓时光

    2011-01-01

The PlayBook comes from another fruit that is not Apple: BlackBerry. It is not an iPad, yet it is also a tablet. PBers, the users of this imperfect tablet, the PlayBook, are cheerful, earnest, and persistent. The three people who love the PlayBook follow their own life paths, yet wear the same upturned smile.

  9. Speech Recognition on Mobile Devices

    DEFF Research Database (Denmark)

    Tan, Zheng-Hua; Lindberg, Børge

    2010-01-01

    The enthusiasm of deploying automatic speech recognition (ASR) on mobile devices is driven both by remarkable advances in ASR technology and by the demand for efficient user interfaces on such devices as mobile phones and personal digital assistants (PDAs). This chapter presents an overview of ASR...... in the mobile context covering motivations, challenges, fundamental techniques and applications. Three ASR architectures are introduced: embedded speech recognition, distributed speech recognition and network speech recognition. Their pros and cons and implementation issues are discussed. Applications within...... command and control, text entry and search are presented with an emphasis on mobile text entry....

  10. Speech Recognition: How Do We Teach It?

    Science.gov (United States)

    Barksdale, Karl

    2002-01-01

    States that growing use of speech recognition software has made voice writing an essential computer skill. Describes how to present the topic, develop basic speech recognition skills, and teach speech recognition outlining, writing, proofreading, and editing. (Contains 14 references.) (SK)

  11. Huntington's Disease: Speech, Language and Swallowing

    Science.gov (United States)

  12. The relationship between the intelligibility of time-compressed speech and speech in noise in young and elderly listeners

    Science.gov (United States)

    Versfeld, Niek J.; Dreschler, Wouter A.

    2002-01-01

    A conventional measure to determine the ability to understand speech in noisy backgrounds is the so-called speech reception threshold (SRT) for sentences. It yields the signal-to-noise ratio (in dB) for which half of the sentences are correctly perceived. The SRT defines to what degree speech must be audible to a listener in order to become just intelligible. There are indications that elderly listeners have greater difficulty in understanding speech in adverse listening conditions than young listeners. This may be partly due to the differences in hearing sensitivity (presbycusis), hence audibility, but other factors, such as temporal acuity, may also play a significant role. A potential measure for the temporal acuity may be the threshold to which speech can be accelerated, or compressed in time. A new test is introduced where the speech rate is varied adaptively. In analogy to the SRT, the time-compression threshold (or TCT) then is defined as the speech rate (expressed in syllables per second) for which half of the sentences are correctly perceived. In experiment I, the TCT test is introduced and normative data are provided. In experiment II, four groups of subjects (young and elderly normal-hearing and hearing-impaired subjects) participated, and the SRT's in stationary and fluctuating speech-shaped noise were determined, as well as the TCT. The results show that the SRT in fluctuating noise and the TCT are highly correlated. All tests indicate that, even after correction for the hearing loss, elderly normal-hearing subjects perform worse than young normal-hearing subjects. The results indicate that the use of the TCT test or the SRT test in fluctuating noise is preferred over the SRT test in stationary noise.
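
    The adaptive logic behind an SRT-style measurement can be sketched as a 1-up/1-down staircase: the SNR is lowered after a correct response and raised after an error, so the track converges on the 50%-correct point. The step size, trial count and the simulated logistic listener below are illustrative assumptions, not the protocol of the test described above.

```python
import numpy as np

def measure_srt(respond, start_snr=0.0, step=2.0, n_trials=30):
    """1-up/1-down adaptive track: returns the mean SNR over the later trials
    as the estimated speech reception threshold (50% correct)."""
    snr, track = start_snr, []
    for _ in range(n_trials):
        correct = respond(snr)            # True if the sentence was repeated correctly
        snr += -step if correct else step
        track.append(snr)
    return float(np.mean(track[len(track) // 2:]))   # average the converged half

# simulated listener whose probability of success follows a logistic curve around -6 dB
rng = np.random.default_rng(2)
true_srt = -6.0
listener = lambda snr: rng.random() < 1 / (1 + np.exp(-(snr - true_srt)))
print(round(measure_srt(listener), 1))   # should land near -6 dB
```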

  13. A Systematic Review of Speech Assessments for Children With Autism Spectrum Disorder: Recommendations for Best Practice.

    Science.gov (United States)

    Broome, Kate; McCabe, Patricia; Docking, Kimberley; Doble, Maree

    2017-08-15

    The purpose of this systematic review was to provide a summary and evaluation of speech assessments used with children with autism spectrum disorders (ASD). A subsequent narrative review was completed to ascertain the core components of an evidence-based pediatric speech assessment, which, together with the results of the systematic review, provide clinical and research guidelines for best practice. A systematic search of eight databases was used to find peer-reviewed research articles published between 1990 and 2014 assessing the speech of children with ASD. Eligible articles were categorized according to the assessment methods used and the speech characteristics described. The review identified 21 articles that met the inclusion criteria, search criteria, and confidence in ASD diagnosis. The speech of prelinguistic participants was assessed in seven articles. Speech assessments with verbal participants were completed in 15 articles with segmental and suprasegmental aspects of speech analyzed. Assessment methods included connected speech samples, single-word naming tasks, speech imitation tasks, and analysis of the production of words and sentences. Clinical and research guidelines for speech assessment of children with ASD are outlined. Future comparisons will be facilitated by the use of consistent reporting methods in research focusing on children with ASD.

  14. Alternative Speech Communication System for Persons with Severe Speech Disorders

    Science.gov (United States)

    Selouani, Sid-Ahmed; Sidi Yakoub, Mohammed; O'Shaughnessy, Douglas

    2009-12-01

    Assistive speech-enabled systems are proposed to help both French and English speaking persons with various speech disorders. The proposed assistive systems use automatic speech recognition (ASR) and speech synthesis in order to enhance the quality of communication. These systems aim at improving the intelligibility of pathologic speech making it as natural as possible and close to the original voice of the speaker. The resynthesized utterances use new basic units, a new concatenating algorithm and a grafting technique to correct the poorly pronounced phonemes. The ASR responses are uttered by the new speech synthesis system in order to convey an intelligible message to listeners. Experiments involving four American speakers with severe dysarthria and two Acadian French speakers with sound substitution disorders (SSDs) are carried out to demonstrate the efficiency of the proposed methods. An improvement of the Perceptual Evaluation of the Speech Quality (PESQ) value of 5% and more than 20% is achieved by the speech synthesis systems that deal with SSD and dysarthria, respectively.

  15. Measurement of speech parameters in casual speech of dementia patients

    NARCIS (Netherlands)

    Ossewaarde, Roelant; Jonkers, Roel; Jalvingh, Fedor; Bastiaanse, Yvonne

Measurement of speech parameters in casual speech of dementia patients. Roelant Adriaan Ossewaarde (1,2), Roel Jonkers (1), Fedor Jalvingh (1,3), Roelien Bastiaanse (1). (1) CLCG, University of Groningen (NL); (2) HU University of Applied Sciences Utrecht (NL); (3) St. Marienhospital - Vechta, Geriatric Clinic Vechta

  16. Play your part

    CERN Document Server

    Ramsey, Gaynor

    1978-01-01

Play your part is a collection of ten situations in which students have to take on the roles of particular people and express their opinions, feelings or arguments about the situation. Play your part is intended for use with advanced students of English.

  17. Role Playing and Skits

    Science.gov (United States)

    Letwin, Robert, Ed.

    1975-01-01

    Explores non-scripted role playing, dialogue role playing, sociodrama, and skits as variations of simulation techniques. Provides step-by-step guidelines for conducting such sessions. Successful Meetings, Bill Communications, Inc., 1422 Chestnut Street, Philadelphia, Pa. 19102. Subscription Rates: yearly (US, Canada, Mexico) $14.00; elsewhere,…

  18. The Play's the Thing

    Science.gov (United States)

    Bateman, Barbara

    2005-01-01

    The modern special education theater in the United States has hosted many plays, none with a larger or more diverse cast than the learning disabilities (LD) play. During the prologue, the children with LD were waiting in the wings, not yet identified as LD but there, nonetheless. With the advent of compulsory education in this country, awareness…

  19. Playfulness and Openness

    DEFF Research Database (Denmark)

    Marchetti, Emanuela; Petersson, Eva

    2011-01-01

    What does it mean to design a playful learning tool? What is needed for a learning tool to be perceived by potential users as playful? These questions emerged reflecting on a Participatory Design process aimed at enhancing museum-learning practice from the perspective of primary school children. ...

  20. Play as Experience

    Science.gov (United States)

    Henricks, Thomas S.

    2015-01-01

    The author investigates what he believes one of the more important aspects of play--the experience it generates in its participants. He considers the quality of this experience in relation to five ways of viewing play--as action, interaction, activity, disposition, and within a context. He treats broadly the different forms of affect, including…

  1. Art of Play

    DEFF Research Database (Denmark)

    Froes, Isabel Cristina G.; Walker, Kevin

    2011-01-01

    Play is a key element in cultural development, according to the Dutch historian Johan Huizinga. Nowadays many of us interact with other people in online games and social networks, through multiple digital devices. But harnessing playful activities for museum learning is mostly undeveloped. In thi...

  2. Family Play Therapy.

    Science.gov (United States)

    Ariel, Shlomo

    This paper examines a case study of family play therapy in Israel. The unique contributions of play therapy are evaluated including the therapy's accessibility to young children, its richness and flexibility, its exposure of covert patterns, its wealth of therapeutic means, and its therapeutic economy. The systematization of the therapy attempts…

  3. Return to Play

    Science.gov (United States)

    Mangan, Marianne

    2013-01-01

    Call it physical activity, call it games, or call it play. Whatever its name, it's a place we all need to return to. In the physical education, recreation, and dance professions, we need to redesign programs to address the need for and want of play that is inherent in all of us.

  4. Playful Collaboration (Or Not)

    DEFF Research Database (Denmark)

    Bogers, Marcel; Sproedt, Henrik

    2012-01-01

    This article explores how playing games can be used to teach intangible social interaction across boundaries, in particular within open collaborative innovation. We present an exploratory case study of how students learned from playing a board game in a graduate course of the international...... imply several opportunities and challenges within education and beyond....

  6. Play framework cookbook

    CERN Document Server

    Reelsen, Alexander

    2015-01-01

This book is aimed at advanced developers who are looking to harness the power of Play 2.x. This book will also be useful for professionals looking to dive deeper into web development. Play 2.x is an excellent framework to accelerate your learning of advanced topics.

  7. Let's Just Play

    Science.gov (United States)

    Schmidt, Janet

    2003-01-01

    Children have a right to play. The idea is so simple it seems self-evident. But a stroll through any toy superstore, or any half-hour of so-called "children's" programming on commercial TV, makes it clear that violence, not play, dominates what's being sold. In this article, the author discusses how teachers and parents share the responsibility in…

  8. Dynamic Processes of Speech Development by Seven Adult Learners of Japanese in a Domestic Immersion Context

    Science.gov (United States)

    Fukuda, Makiko

    2014-01-01

    The present study revealed the dynamic process of speech development in a domestic immersion program by seven adult beginning learners of Japanese. The speech data were analyzed with fluency, accuracy, and complexity measurements at group, interindividual, and intraindividual levels. The results revealed the complex nature of language development…

  9. An Analysis of the Language Features of Barack Obama’s Inaugural Speeches

    Institute of Scientific and Technical Information of China (English)

    李红梅; 吴丹; 朱耀顺

    2014-01-01

This thesis tries to analyze the language features of Barack Obama's two inaugural speeches in 2008 and 2012 from a linguistic perspective, covering sentence types (such as imperative sentences) as well as figures of speech, including parallelism, rhetorical questions, alliteration, hyperbole, simile, metaphor and so on.

  10. Distributed Speech Recognition Systems and Some Key Factors Affecting It's Performance

    Institute of Scientific and Technical Information of China (English)

    YE Lei; YANG Zhen

    2003-01-01

In this paper we first analyze the Distributed Speech Recognition (DSR) system and the key factors that affect its performance, and then focus on the relationship between the length of the test speech and the recognition accuracy of the system. Some experimental results are given at the end.

  11. Discourse Analysis of Political Speeches from the Perspective of Functional Linguistics

    Institute of Scientific and Technical Information of China (English)

    韩菲菲; 杨秀娟

    2013-01-01

Language is the primary tool of human communication, speech is a collective reflection of the art of language, and discourse analysis is a means of appreciating that art. This paper uses discourse analysis tools to analyze the entire discourse of a Chinese political leader's speech at Cambridge University from different aspects, to show the beauty of the words and the force of the language.

  12. The Acquisition of Relative Clauses in Spontaneous Child Speech in Mandarin Chinese

    Science.gov (United States)

    Chen, Jidong; Shirai, Yasuhiro

    2015-01-01

    This study investigates the developmental trajectory of relative clauses (RCs) in Mandarin-learning children's speech. We analyze the spontaneous production of RCs by four monolingual Mandarin-learning children (0;11 to 3;5) and their input from a longitudinal naturalistic speech corpus (Min, 1994). The results reveal that in terms of the…

  13. A Discourse Analysis of Two Speeches on the Topic of I Have a Dream

    Institute of Scientific and Technical Information of China (English)

    戴嘉谊

    2014-01-01

The author selects two speeches that deal with the same topic, "I have a dream", delivered by Martin Luther King, Jr. and Bai Yansong respectively. For each speech, the structure of the text and the way context and intended readership are incorporated in the writing are analyzed. It is concluded that structure, context and intended readership are handled differently in the two speeches.

  15. Speech Analysis of Bengali Speaking Children with Repaired Cleft Lip & Palate

    Science.gov (United States)

    Chakrabarty, Madhushree; Kumar, Suman; Chatterjee, Indranil; Maheshwari, Neha

    2012-01-01

The present study aims at analyzing speech samples of four Bengali-speaking children with repaired cleft palates, with a view to differentiating between misarticulations arising from a deficit in linguistic skills and those arising from structural or motoric limitations. Spontaneous speech samples were collected and subjected to a number of linguistic analyses…

  17. Evaluation of speech function on repairing defects of maxilla and palate with temporalis muscle flap

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

Objective: To evaluate speech function after repairing defects of the maxilla and palate with a temporalis muscle flap following benign or malignant tumor resection. Methods: Lateral cephalograms and speech intelligibility were obtained in 19 cases in which defects of the maxilla and palate were repaired with a temporalis muscle flap, and the recovery of speech function was analyzed. Results: Among the 19 patients, 15 cases (78.00%) showed complete velopharyngeal closure, 3 cases (15.80%) marginal velopharyngeal closure, and 1 case (5.26%) insufficient velopharyngeal closure. The average speech intelligibility was 94.3%, close to normal speech intelligibility. Conclusion: Repairing defects of the maxilla and palate with a temporalis muscle flap can reconstruct the phonatory structures, preserve palatal function, and restore speech function after the operation.

  18. Automated Gesturing for Virtual Characters: Speech-driven and Text-driven Approaches

    Directory of Open Access Journals (Sweden)

    Goranka Zoric

    2006-04-01

Full Text Available We present two methods for automatic facial gesturing of graphically embodied animated agents. In the first, a conversational agent is driven by speech in an automatic lip-sync process: by analyzing the speech input, lip movements are determined from the speech signal. The second method provides a virtual speaker capable of reading plain English text and rendering it as speech accompanied by appropriate facial gestures. The proposed statistical model for generating the virtual speaker's facial gestures can also be applied as an addition to the lip synchronization process in order to obtain speech-driven facial gesturing; in this case the statistical model is triggered by the prosody of the input speech instead of by lexical analysis of the input text.

  19. Discourse, Statement and Speech Act

    Directory of Open Access Journals (Sweden)

    Елена Александровна Красина

    2016-12-01

Full Text Available Being a component of socio-cultural interaction, discourse constitutes a sophisticated cohesion of language form, meaning and performance, i.e. a communicative event or act. This cohesion with event and performance lets us treat discourse as a certain lifeform, appealing both to communicative interaction and to the pragmatic environment, using the methodology of E. Benveniste, M. Foucault, I. Kecskes, J.R. Searle et al. In linguistics and other fields of humanitarian knowledge, the notion of discourse facilitates the integration of studies in the humanities. Principles of integration and incorporation into a broad humanitarian context reveal several topics of discourse-speech act-utterance interaction and lead to substantive solutions of a number of linguistic problems, in particular that of the utterance. Logicians define the utterance through the proposition; linguists, through the sentence; speech act theory does so by means of the illocutionary act. Integrated into a discourse or a part of it, utterances make up their integral, though not unique, constituents. In relation to speech acts, the utterance is the unique definitional domain synchronically modelling and denoting the speech act by means of propositional content. The goal of the research is to show the conditions of interaction and correlation of discourse, speech act and utterance as linguistic constructions, to reveal similarities and differences in their characteristics, and to prove the constructive role of the utterance as a minimal unit of speech production. The discourse-speech act-utterance correlation supports the role of the utterance as a discrete unit within the syntactic continuum, facing both language and speech: it belongs exclusively neither to language nor to speech, but specifies their interaction in the course of speech activity, exposing simultaneously its nature as an 'atom of discourse' and creating the definitional domain of a speech act.

  20. Brain networks engaged in audiovisual integration during speech perception revealed by persistent homology-based network filtration.

    Science.gov (United States)

    Kim, Heejung; Hahm, Jarang; Lee, Hyekyoung; Kang, Eunjoo; Kang, Hyejin; Lee, Dong Soo

    2015-05-01

The human brain naturally integrates audiovisual information to improve speech perception. However, in noisy environments, understanding speech is difficult and may require much effort. Although the brain network is supposed to be engaged in speech perception, it is unclear how speech-related brain regions are connected during natural bimodal audiovisual or unimodal speech perception with counterpart irrelevant noise. To investigate the topological changes of speech-related brain networks at all possible thresholds, we used a persistent homological framework through hierarchical clustering, such as single linkage distance, to analyze the connected components of the functional network during speech perception using functional magnetic resonance imaging. For speech perception, bimodal (audio-visual) speech cues or unimodal speech cues with counterpart irrelevant noise (auditory white noise or visual gum-chewing) were delivered to 15 subjects. In terms of positive relationships, similar connected components were observed in the bimodal and unimodal speech conditions during filtration. However, during speech perception with congruent audiovisual stimuli, tighter couplings of the left anterior temporal gyrus-anterior insula component and of the right premotor-visual components were observed than in the auditory or visual speech cue conditions, respectively. Interestingly, visual speech is perceived under white noise through tight negative coupling of the left inferior frontal region with the right anterior cingulate, left anterior insula, and bilateral visual regions, including right middle temporal gyrus and right fusiform components. In conclusion, the speech brain network is tightly positively or negatively connected, and can reflect efficient or effortful processes during natural audiovisual integration or lip-reading, respectively, in speech perception.
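
    The threshold-free filtration idea can be illustrated in a few lines: build a distance matrix from region-wise correlations, run single-linkage clustering, and read off how the number of connected components shrinks as the filtration threshold grows. The sketch uses SciPy's hierarchical clustering on a toy correlation matrix; it illustrates the principle and is not the study's analysis code.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def component_filtration(corr, thresholds):
    """Number of connected components of the network at each distance threshold,
    using single-linkage clustering on distance = 1 - correlation."""
    dist = 1.0 - corr
    np.fill_diagonal(dist, 0.0)
    z = linkage(squareform(dist, checks=False), method="single")
    return [int(fcluster(z, t, criterion="distance").max()) for t in thresholds]

# toy correlation matrix for 5 "regions": two tight pairs plus one loner
corr = np.array([[1.0, 0.9, 0.2, 0.1, 0.0],
                 [0.9, 1.0, 0.1, 0.2, 0.0],
                 [0.2, 0.1, 1.0, 0.8, 0.1],
                 [0.1, 0.2, 0.8, 1.0, 0.1],
                 [0.0, 0.0, 0.1, 0.1, 1.0]])
print(component_filtration(corr, thresholds=[0.05, 0.3, 0.85, 0.95]))
# [5, 3, 2, 1]: components merge as the filtration threshold increases
```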

  1. The Attention-Getting Capacity of Whines and Child-Directed Speech

    Directory of Open Access Journals (Sweden)

    Rosemarie Sokol Chang

    2010-04-01

    Full Text Available The current study tested the ability of whines and child-directed speech to attract the attention of listeners involved in a story repetition task. Twenty non-parents and 17 parents were presented with two dull stories, each playing to a separate ear, and asked to repeat one of the stories verbatim. The story that participants were instructed to ignore was interrupted occasionally with the reader whining and using child-directed speech. While repeating the passage, participants were monitored for Galvanic skin response, heart rate, and blood pressure. Based on 4 measures, participants tuned in more to whining, and to a lesser extent child-directed speech, than neutral speech segments that served as a control. Participants, regardless of gender or parental status, made more mistakes when presented with the whine or child-directed speech, they recalled hearing those vocalizations, they recognized more words from the whining segment than the neutral control segment, and they exhibited higher Galvanic skin response during the presence of whines and child-directed speech than neutral speech segments. Whines and child-directed speech appear to be integral members of a suite of vocalizations designed to get the attention of attachment partners by playing to an auditory sensitivity among humans. Whines in particular may serve the function of eliciting care at a time when caregivers switch from primarily mothers to greater care from other caregivers.

  2. The frontal aslant tract underlies speech fluency in persistent developmental stuttering.

    Science.gov (United States)

    Kronfeld-Duenias, Vered; Amir, Ofer; Ezrati-Vinacour, Ruth; Civier, Oren; Ben-Shachar, Michal

    2016-01-01

    The frontal aslant tract (FAT) is a pathway that connects the inferior frontal gyrus with the supplementary motor area (SMA) and pre-SMA. The FAT was recently identified and introduced as part of a "motor stream" that plays an important role in speech production. In this study, we use diffusion imaging to examine the hypothesis that the FAT underlies speech fluency, by studying its properties in individuals with persistent developmental stuttering, a speech disorder that disrupts the production of fluent speech. We use tractography to quantify the volume and diffusion properties of the FAT in a group of adults who stutter (AWS) and fluent controls. Additionally, we use tractography to extract these measures from the corticospinal tract (CST), a well-known component of the motor system. We compute diffusion measures in multiple points along the tracts, and examine the correlation between these diffusion measures and behavioral measures of speech fluency. Our data show increased mean diffusivity in bilateral FAT of AWS compared with controls. In addition, the results show regions within the left FAT and the left CST where diffusivity values are increased in AWS compared with controls. Last, we report that in AWS, diffusivity values measured within sub-regions of the left FAT negatively correlate with speech fluency. Our findings are the first to relate the FAT with fluent speech production in stuttering, thus adding to the current knowledge of the functional role that this tract plays in speech production and to the literature of the etiology of persistent developmental stuttering.

  3. Study on adaptive compressed sensing & reconstruction of quantized speech signals

    Science.gov (United States)

    Yunyun, Ji; Zhen, Yang

    2012-12-01

Compressed sensing (CS) has been a rising focus in recent years because it simultaneously samples and compresses sparse signals. Speech signals can be considered approximately sparse, or compressible, in some domains because of their natural characteristics, so there is great promise in applying compressed sensing to speech. This paper addresses three aspects. First, the sparsity of speech signals and suitable sparsifying matrices are analyzed, and an adaptive sparsifying matrix based on the long-term prediction of voiced speech is constructed. Second, a CS matrix called the two-block diagonal (TBD) matrix is constructed for speech signals based on existing block diagonal matrix theory; its performance is found empirically to be superior to that of the dense Gaussian random matrix when the sparsifying matrix is the DCT basis. Finally, the effect of quantizing the projections is considered. Two corollaries about the impact of adaptive and nonadaptive quantization on reconstruction performance with the two matrices, the TBD matrix and the dense Gaussian random matrix, are derived. We find that adaptive quantization and the TBD matrix are two effective ways to mitigate the quantization effect on the reconstruction of speech signals in the CS framework.
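
    A generic compressed-sensing sketch along these lines (sparse representation in a DCT basis, random measurements, greedy recovery): it uses a plain dense Gaussian sensing matrix and a small hand-rolled orthogonal matching pursuit, not the paper's two-block diagonal matrix or its adaptive quantization, and the signal length, measurement count and sparsity are assumed values.

```python
import numpy as np
from scipy.fftpack import idct

rng = np.random.default_rng(3)
n, m, k = 256, 96, 8                         # signal length, measurements, sparsity (assumed)

# a test signal that is exactly k-sparse in the DCT domain
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
x = idct(coeffs, norm="ortho")

phi = rng.normal(0, 1 / np.sqrt(m), (m, n))  # dense Gaussian sensing matrix
psi = idct(np.eye(n), norm="ortho", axis=0)  # DCT synthesis (sparsifying) basis
a = phi @ psi                                # effective dictionary seen by the solver
y = phi @ x                                  # compressed measurements

def omp(a, y, k):
    """Orthogonal matching pursuit: pick the best-correlated atom k times,
    re-fitting the coefficients on the selected support by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(a.T @ residual))))
        coef, *_ = np.linalg.lstsq(a[:, support], y, rcond=None)
        residual = y - a[:, support] @ coef
    full = np.zeros(a.shape[1])
    full[support] = coef
    return full

x_rec = idct(omp(a, y, k), norm="ortho")
print(round(float(np.linalg.norm(x - x_rec) / np.linalg.norm(x)), 4))   # close to 0
```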

  4. The neural processing of foreign-accented speech and its relationship to listener bias

    Directory of Open Access Journals (Sweden)

Han-Gyol Yi

    2014-10-01

Full Text Available Foreign-accented speech often presents a challenging listening condition. In addition to deviations from the target speech norms related to the inexperience of the nonnative speaker, listener characteristics may play a role in determining intelligibility levels. We have previously shown that an implicit visual bias for associating East Asian faces and foreignness predicts the listeners' perceptual ability to process Korean-accented English audiovisual speech (Yi et al., 2013). Here, we examine the neural mechanism underlying the influence of listener bias toward foreign faces on speech perception. In a functional magnetic resonance imaging (fMRI) study, native English speakers listened to native- and Korean-accented English sentences, with or without faces. The participants' Asian-foreign association was measured using an implicit association test (IAT), conducted outside the scanner. We found that foreign-accented speech evoked greater activity in the bilateral primary auditory cortices and the inferior frontal gyri, potentially reflecting greater computational demand. Higher IAT scores, indicating greater bias, were associated with increased BOLD response to foreign-accented speech with faces in the primary auditory cortex, the early node for spectrotemporal analysis. We conclude the following: (1) foreign-accented speech perception places greater demand on the neural systems underlying speech perception; (2) the face of the talker can exaggerate the perceived foreignness of foreign-accented speech; (3) implicit Asian-foreign association is associated with decreased neural efficiency in early spectrotemporal processing.

  5. External Sources of Individual Differences? A Cross-Linguistic Analysis of the Phonetics of Mothers' Speech to 1-Year-Old Children.

    Science.gov (United States)

    Vihman, Marilyn M.; And Others

    1994-01-01

    Sampled the speech of American, French, and Swedish mothers to their one-year-olds to analyze the distribution of phonetic parameters of adult speech, as well as children's own early words. Found that variability is greater in child words than in adult speech, and mother-child dyads showed no evidence of specific maternal influence on the phonetics of…

  6. Crew Activity Analyzer

    Science.gov (United States)

    Murray, James; Kirillov, Alexander

    2008-01-01

    The crew activity analyzer (CAA) is a system of electronic hardware and software for automatically identifying patterns of group activity among crew members working together in an office, cockpit, workshop, laboratory, or other enclosed space. The CAA synchronously records multiple streams of data from digital video cameras, wireless microphones, and position sensors, then plays back and processes the data to identify activity patterns specified by human analysts. The processing greatly reduces the amount of time that the analysts must spend in examining large amounts of data, enabling the analysts to concentrate on subsets of data that represent activities of interest. The CAA has potential for use in a variety of governmental and commercial applications, including planning for crews for future long space flights, designing facilities wherein humans must work in proximity for long times, improving crew training and measuring crew performance in military settings, human-factors and safety assessment, development of team procedures, and behavioral and ethnographic research. The data-acquisition hardware of the CAA (see figure) includes two video cameras: an overhead one aimed upward at a paraboloidal mirror on the ceiling and one mounted on a wall aimed in a downward slant toward the crew area. As many as four wireless microphones can be worn by crew members. The audio signals received from the microphones are digitized, then compressed in preparation for storage. Approximate locations of as many as four crew members are measured by use of a Cricket indoor location system. [The Cricket indoor location system includes ultrasonic/radio beacon and listener units. A Cricket beacon (in this case, worn by a crew member) simultaneously transmits a pulse of ultrasound and a radio signal that contains identifying information. Each Cricket listener unit measures the difference between the times of reception of the ultrasound and radio signals from an identified beacon

  7. Teaching Speech Acts

    Directory of Open Access Journals (Sweden)

    Teaching Speech Acts

    2007-01-01

    Full Text Available In this paper I argue that pragmatic ability must become part of what we teach in the classroom if we are to realize the goals of communicative competence for our students. I review the research on pragmatics, especially those articles that point to the effectiveness of teaching pragmatics in an explicit manner, and those that posit methods for teaching. I also note two areas of scholarship that address classroom needs—the use of authentic data and appropriate assessment tools. The essay concludes with a summary of my own experience teaching speech acts in an advanced-level Portuguese class.

  8. Speech Understanding Systems

    Science.gov (United States)

    1976-02-01

    kHz that is a fixed number of decibels below the maximum value in the spectrum. A value of zero, however, is not recommended. (c) Speech for the...probability distributions for [t,p,k,d,n] should be evaluated using the observed parameters. But the scores on each of the vowels are all bad, so...plosives [p,t,k] is to examine the burst frequency and the voice-onset-time (VOT) when the plosive is followed by a vowel or semi-vowel. However, if

  9. A Speech Intelligibility Index-based approach to predict the speech reception threshold for sentences in fluctuating noise for normal-hearing listeners

    Science.gov (United States)

    Rhebergen, Koenraad S.; Versfeld, Niek J.

    2005-04-01

    The SII model in its present form (ANSI S3.5-1997, American National Standards Institute, New York) can accurately describe intelligibility for speech in stationary noise but fails to do so for nonstationary noise maskers. Here, an extension to the SII model is proposed with the aim of predicting speech intelligibility in both stationary and fluctuating noise. The basic principle of the present approach is that both the speech and noise signals are partitioned into small time frames. Within each time frame the conventional SII is determined, yielding the speech information available to the listener at that time frame. Next, the SII values of these time frames are averaged, resulting in the SII for that particular condition. Using speech reception threshold (SRT) data from the literature, the extension to the present SII model can give a good account of SRTs in stationary noise, fluctuating speech noise, interrupted noise, and multiple-talker noise. The predictions for sinusoidally intensity modulated (SIM) noise and real speech or speech-like maskers are better than with the original SII model, but are still not accurate. For the latter type of maskers, informational masking may play a role.
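
    As a rough illustration of the frame-based extension described above (not the ANSI S3.5-1997 procedure itself), the following Python sketch computes a simplified SII-like value per short time frame from band SNRs and averages the values over the condition; the band partition, the audibility function and the flat band-importance weights are all stand-in assumptions.

```python
import numpy as np

def band_levels(frame, n_bands=18):
    """Crude log-spaced band energies from an FFT (a stand-in for critical bands)."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    edges = np.unique(np.logspace(0, np.log10(len(spec)), n_bands + 1).astype(int))
    return np.array([spec[a:b].sum() + 1e-12 for a, b in zip(edges[:-1], edges[1:])])

def frame_sii(speech_frame, noise_frame, weights):
    """Simplified per-frame 'SII': weighted band audibility derived from band SNRs."""
    snr_db = 10 * np.log10(band_levels(speech_frame) / band_levels(noise_frame))
    audibility = np.clip((snr_db + 15.0) / 30.0, 0.0, 1.0)   # simplified audibility rule
    return float(np.sum(weights[: len(audibility)] * audibility))

def extended_sii(speech, noise, fs, frame_ms=12.5):
    """Average the per-frame values over the whole condition, as described above."""
    hop = int(fs * frame_ms / 1000)
    weights = np.full(18, 1.0 / 18)           # flat band-importance weights (assumption)
    n = min(len(speech), len(noise)) - hop
    vals = [frame_sii(speech[i:i + hop], noise[i:i + hop], weights)
            for i in range(0, n, hop)]
    return float(np.mean(vals))
```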

  10. Representation of speech in human auditory cortex: is it special?

    Science.gov (United States)

    Steinschneider, Mitchell; Nourski, Kirill V; Fishman, Yonatan I

    2013-11-01

    Successful categorization of phonemes in speech requires that the brain analyze the acoustic signal along both spectral and temporal dimensions. Neural encoding of the stimulus amplitude envelope is critical for parsing the speech stream into syllabic units. Encoding of voice onset time (VOT) and place of articulation (POA), cues necessary for determining phonemic identity, occurs within shorter time frames. An unresolved question is whether the neural representation of speech is based on processing mechanisms that are unique to humans and shaped by learning and experience, or is based on rules governing general auditory processing that are also present in non-human animals. This question was examined by comparing the neural activity elicited by speech and other complex vocalizations in primary auditory cortex of macaques, who are limited vocal learners, with that in Heschl's gyrus, the putative location of primary auditory cortex in humans. Entrainment to the amplitude envelope is neither specific to humans nor to human speech. VOT is represented by responses time-locked to consonant release and voicing onset in both humans and monkeys. Temporal representation of VOT is observed both for isolated syllables and for syllables embedded in the more naturalistic context of running speech. The fundamental frequency of male speakers is represented by more rapid neural activity phase-locked to the glottal pulsation rate in both humans and monkeys. In both species, the differential representation of stop consonants varying in their POA can be predicted by the relationship between the frequency selectivity of neurons and the onset spectra of the speech sounds. These findings indicate that the neurophysiology of primary auditory cortex is similar in monkeys and humans despite their vastly different experience with human speech, and that Heschl's gyrus is engaged in general auditory, and not language-specific, processing. This article is part of a Special Issue entitled

  11. Separating Underdetermined Convolutive Speech Mixtures

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Wang, DeLiang; Larsen, Jan

    2006-01-01

    a method for underdetermined blind source separation of convolutive mixtures. The proposed framework is applicable for separation of instantaneous as well as convolutive speech mixtures. It is possible to iteratively extract each speech signal from the mixture by combining blind source separation...

  12. Methods of Teaching Speech Recognition

    Science.gov (United States)

    Rader, Martha H.; Bailey, Glenn A.

    2010-01-01

    Objective: This article introduces the history and development of speech recognition, addresses its role in the business curriculum, outlines related national and state standards, describes instructional strategies, and discusses the assessment of student achievement in speech recognition classes. Methods: Research methods included a synthesis of…

  13. PESQ Based Speech Intelligibility Measurement

    NARCIS (Netherlands)

    Beerends, J.G.; Buuren, R.A. van; Vugt, J.M. van; Verhave, J.A.

    2009-01-01

    Several measurement techniques exist to quantify the intelligibility of a speech transmission chain. In the objective domain, the Articulation Index [1] and the Speech Transmission Index STI [2], [3], [4], [5] have been standardized for predicting intelligibility. The STI uses a signal that contains

  14. Perceptual Learning of Interrupted Speech

    NARCIS (Netherlands)

    Benard, Michel Ruben; Başkent, Deniz

    2013-01-01

    The intelligibility of periodically interrupted speech improves once the silent gaps are filled with noise bursts. This improvement has been attributed to phonemic restoration, a top-down repair mechanism that helps intelligibility of degraded speech in daily life. Two hypotheses were investigated u

  15. Play vs. Procedures

    DEFF Research Database (Denmark)

    Hammar, Emil

    Through the theories of play by Gadamer (2004) and Henricks (2006), I will show how the relationship between play and game can be understood as dialectic and disruptive, thus challenging understandings of how the procedures of games determine player activity and vice versa. As such, I posit some...... analytical consequences for understandings of digital games as procedurally fixed (Bogost, 2006; Flanagan, 2009; Brathwaite & Sharp, 2010). That is, if digital games are argued to be procedurally fixed and if play is an appropriative and dialectic activity, then it could be argued that the latter affects...

  16. Visualizing structures of speech expressiveness

    DEFF Research Database (Denmark)

    Herbelin, Bruno; Jensen, Karl Kristoffer; Graugaard, Lars

    2008-01-01

    Speech is both beautiful and informative. In this work, a conceptual study of speech, through investigation of the tower of Babel, the archetypal phonemes, and a study of the reasons for uses of language, is undertaken in order to create an artistic work investigating the nature of speech. The Babel myth speaks about distance created when aspiring to the heaven as the reason for language division. Meanwhile, Locquin states through thorough investigations that only a few phonemes are present throughout history. Our interpretation is that a system able to recognize archetypal phonemes through vowels and consonants, and which converts the speech energy into visual particles that form complex visual structures, provides us with a means to present the expressiveness of speech in a visual mode. This system is presented in an artwork whose scenario is inspired by the reasons of language...

  17. Speech Compression Using Multecirculerletet Transform

    Directory of Open Access Journals (Sweden)

    Sulaiman Murtadha

    2012-01-01

    Full Text Available Compressing speech reduces data storage requirements and the time needed to transmit digitized speech over long-haul links such as the Internet. To obtain the best performance in speech compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. The MCT basis functions are derived from the GHM basis functions using 2D linear convolution. The fast computation algorithms introduced here add desirable features to the transform. We further assess the performance of the MCT in a speech compression application. This paper discusses the effect of using the DWT and the MCT (in one and two dimensions) on speech compression. DWT and MCT performance in terms of compression ratio (CR), mean square error (MSE), and peak signal-to-noise ratio (PSNR) is assessed. Computer simulation results indicate that the two-dimensional MCT offers a better compression ratio, MSE, and PSNR than the DWT.
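
    The following Python sketch, assuming the PyWavelets package is available, illustrates the evaluation pipeline for the DWT branch only (the MCT/GHM multiwavelet transform itself is not implemented here): a frame is compressed by zeroing small wavelet coefficients and the resulting CR, MSE and PSNR are reported.

```python
import numpy as np
import pywt

def dwt_compress_metrics(x, wavelet="db4", level=4, keep=0.10):
    """Keep only the largest wavelet coefficients, reconstruct, and report CR/MSE/PSNR."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    flat = np.concatenate(coeffs)
    thresh = np.quantile(np.abs(flat), 1 - keep)            # keep the largest 10% of coeffs
    kept = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
    x_rec = pywt.waverec(kept, wavelet)[: len(x)]

    n_kept = int(sum((np.abs(c) >= thresh).sum() for c in coeffs))
    cr = flat.size / max(1, n_kept)                         # coefficient-count compression ratio
    mse = float(np.mean((x - x_rec) ** 2))
    psnr = 10 * np.log10(np.max(np.abs(x)) ** 2 / mse) if mse > 0 else np.inf
    return cr, mse, psnr

# Example with a synthetic frame standing in for real speech samples.
t = np.linspace(0, 0.032, 512, endpoint=False)
frame = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)
print(dwt_compress_metrics(frame))
```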

  18. Play and Power

    DEFF Research Database (Denmark)

    The power of play, so central to psychoanalytic theory and practice, is conjoined to the social psychological or socio-politically coloured concept of power, giving rise to many fruitful discussions of how these concepts manifest themselves in clinical work with children, groups and adults....... The inspiration for this book was the 3-section EFPP conference in Copenhagen in May 2007 with the main theme "Play and Power". At the conference and in the book, this theme is presented both inside and outside the therapeutic space. It is amply illustrated in clinical cases from individual psychotherapies....... Play and power are also explored in the broader context of the community, however. In relation to society at large, psychoanalytic psychotherapy has important contributions to offer society, and we need playful creativity and power to bring forward our knowledge about it....

  19. From Gesture to Speech

    Directory of Open Access Journals (Sweden)

    Maurizio Gentilucci

    2012-11-01

    Full Text Available One of the major problems concerning the evolution of human language is to understand how sounds became associated with meaningful gestures. It has been proposed that the circuit controlling gestures and speech evolved from a circuit involved in the control of arm and mouth movements related to ingestion. This circuit contributed to the evolution of spoken language, moving from a system of communication based on arm gestures. The discovery of the mirror neurons has provided strong support for the gestural theory of speech origin because they offer a natural substrate for the embodiment of language and create a direct link between sender and receiver of a message. Behavioural studies indicate that manual gestures are linked to mouth movements used for syllable emission. Grasping with the hand selectively affected movement of inner or outer parts of the mouth according to syllable pronunciation, and hand postures, in addition to hand actions, influenced the control of mouth grasp and vocalization. Gestures and words are also related to each other. It was found that when producing communicative gestures (emblems), the intention to interact directly with a conspecific was transferred from gestures to words, inducing modification in voice parameters. Transfer effects of the meaning of representational gestures were found on both vocalizations and meaningful words. It has been concluded that the results of our studies suggest the existence of a system relating gesture to vocalization which was a precursor of a more general system reciprocally relating gesture to word.

  20. Dichotic speech tests.

    Science.gov (United States)

    Hällgren, M; Johansson, M; Larsby, B; Arlinger, S

    1998-01-01

    When central auditory dysfunction is present, the ability to understand speech in difficult listening situations can be affected. To study this phenomenon, dichotic speech tests were performed with test material in the Swedish language. Digits, spondees, sentences and consonant-vowel syllables were used as stimuli and the reporting was free or directed. The test material was recorded on CD. The study includes a normal group of 30 people in three different age categories: 11 years, 23-27 years and 67-70 years. It also includes two different groups of subjects with suspected central auditory lesions: 11 children with reading and writing difficulties and 4 adults earlier exposed to organic solvents. The results from the normal group do not show any differences in performance due to age. The children with reading and writing difficulties show a significant deviation for one test with digits and one test with syllables. Three of the four adults exposed to solvents show a significant deviation from the normal group.

  1. Interactions between distal speech rate, linguistic knowledge, and speech environment.

    Science.gov (United States)

    Morrill, Tuuli; Baese-Berk, Melissa; Heffner, Christopher; Dilley, Laura

    2015-10-01

    During lexical access, listeners use both signal-based and knowledge-based cues, and information from the linguistic context can affect the perception of acoustic speech information. Recent findings suggest that the various cues used in lexical access are implemented with flexibility and may be affected by information from the larger speech context. We conducted 2 experiments to examine effects of a signal-based cue (distal speech rate) and a knowledge-based cue (linguistic structure) on lexical perception. In Experiment 1, we manipulated distal speech rate in utterances where an acoustically ambiguous critical word was either obligatory for the utterance to be syntactically well formed (e.g., Conner knew that bread and butter (are) both in the pantry) or optional (e.g., Don must see the harbor (or) boats). In Experiment 2, we examined identical target utterances as in Experiment 1 but changed the distribution of linguistic structures in the fillers. The results of the 2 experiments demonstrate that speech rate and linguistic knowledge about critical word obligatoriness can both influence speech perception. In addition, it is possible to alter the strength of a signal-based cue by changing information in the speech environment. These results provide support for models of word segmentation that include flexible weighting of signal-based and knowledge-based cues.

  2. PCA-Based Speech Enhancement for Distorted Speech Recognition

    Directory of Open Access Journals (Sweden)

    Tetsuya Takiguchi

    2007-09-01

    Full Text Available We investigated a robust speech feature extraction method using kernel PCA (Principal Component Analysis) for distorted speech recognition. Kernel PCA has been suggested for various image processing tasks requiring an image model, such as denoising, where a noise-free image is constructed from a noisy input image. Much research on robust speech feature extraction has been done, but it remains difficult to completely remove additive or convolutional noise (distortion). The most commonly used noise-removal techniques are based on spectral-domain operations, after which the MFCC (Mel Frequency Cepstral Coefficient) is computed for speech recognition, with the DCT (Discrete Cosine Transform) applied to the mel-scale filter bank output. This paper describes a new PCA-based speech enhancement algorithm using kernel PCA instead of the DCT, where the main speech element is projected onto low-order features, while the noise or distortion element is projected onto high-order features. Its effectiveness is confirmed by word recognition experiments on distorted speech.
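
    A hedged sketch of the underlying idea, using scikit-learn's KernelPCA on a synthetic feature matrix rather than the paper's MFCC/filter-bank pipeline: noisy frames are projected onto a low-order kernel subspace and mapped back, keeping the dominant structure and discarding the high-order components that mostly carry the distortion. Hyperparameters are illustrative only.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
# Synthetic "clean" feature matrix (frames x feature dimensions) plus additive noise.
clean = np.sin(np.linspace(0, 8 * np.pi, 400))[:, None] * rng.standard_normal((1, 24))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

kpca = KernelPCA(n_components=8, kernel="rbf", gamma=0.05,
                 fit_inverse_transform=True, alpha=1e-3)
low_order = kpca.fit_transform(noisy)          # main speech element -> low-order features
enhanced = kpca.inverse_transform(low_order)   # pre-image approximation in feature space

print("noisy MSE vs clean:", float(np.mean((noisy - clean) ** 2)))
print("kPCA  MSE vs clean:", float(np.mean((enhanced - clean) ** 2)))
```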

  3. Neural correlates of audiovisual speech processing in a second language.

    Science.gov (United States)

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance.

  4. The effects of news frames and political speech sources on political attitudes: the moderating role of values

    NARCIS (Netherlands)

    Waheed, M.; Schuck, A.; Neijens, P.; de Vreese, C.

    2015-01-01

    This study investigated the extent to which values play a role in affecting citizens’ political attitudes when exposed to different media news frames and political speech sources. To test this, we designed a survey experiment which used news coverage of a political speech concerning the cultural pra

  5. Brief Training with Co-Speech Gesture Lends a Hand to Word Learning in a Foreign Language

    Science.gov (United States)

    Kelly, Spencer D.; McDevitt, Tara; Esch, Megan

    2009-01-01

    Recent research in psychology and neuroscience has demonstrated that co-speech gestures are semantically integrated with speech during language comprehension and development. The present study explored whether gestures also play a role in language learning in adults. In Experiment 1, we exposed adults to a brief training session presenting novel…

  6. Application of Microsoft Speech SDK Based on C# Language

    Institute of Scientific and Technical Information of China (English)

    白林如; 纪浩哲

    2013-01-01

    This paper analyzes the speech application programming interface (SAPI) provided by the Microsoft Speech SDK 5.1 software development kit, discusses how to call the speech synthesis and speech recognition interfaces of SAPI from the C# language, and implements voice interaction functionality for a database application.

  7. Hate Speech/Free Speech: Using Feminist Perspectives To Foster On-Campus Dialogue.

    Science.gov (United States)

    Cornwell, Nancy; Orbe, Mark P.; Warren, Kiesha

    1999-01-01

    Explores the complex issues inherent in the tension between hate speech and free speech, focusing on the phenomenon of hate speech on college campuses. Describes the challenges to hate speech made by critical race theorists and explains how a feminist critique can reorient the parameters of hate speech. (SLD)

  9. The Micro-Speech Acts and Characterization in Pride and Prejudice: A Case Study of Elizabeth's Four Typical Micro-Speech Acts

    Institute of Scientific and Technical Information of China (English)

    王军

    2011-01-01

    Using Speech Act Theory, this paper analyzes four typical micro-speech acts of Elizabeth, the sharply drawn heroine of Pride and Prejudice: sarcasm, mockery, regret, and apology. The analysis illustrates her character traits and supports the conclusion that speech acts can play a key role in characterization, and that Speech Act Theory can provide a new perspective for the interpretation of literary texts.

  10. Connected Speech Processes in Australian English.

    Science.gov (United States)

    Ingram, J. C. L.

    1989-01-01

    Explores the role of Connected Speech Processes (CSP) in accounting for sociolinguistically significant dimensions of speech variation, and presents initial findings on the distribution of CSPs in the speech of Australian adolescents. The data were gathered as part of a wider survey of speech of Brisbane school children. (Contains 26 references.)…

  11. Linguistic Units and Speech Production Theory.

    Science.gov (United States)

    MacNeilage, Peter F.

    This paper examines the validity of the concept of linguistic units in a theory of speech production. Substantiating data are drawn from the study of the speech production process itself. Secondarily, an attempt is made to reconcile the postulation of linguistic units in speech production theory with their apparent absence in the speech signal.…

  12. Automated Speech Rate Measurement in Dysarthria

    Science.gov (United States)

    Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc

    2015-01-01

    Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…
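
    The study's own algorithm is not specified in this abstract; as a point of reference, a common baseline for automated speech-rate estimation counts syllable nuclei as prominent peaks in a smoothed intensity envelope and divides by the sample duration, as in the SciPy-based sketch below (all thresholds are illustrative assumptions).

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_speech_rate(samples, fs, frame_ms=20, min_gap_ms=150):
    """Rough syllables-per-second estimate from peaks in a smoothed RMS energy envelope."""
    samples = np.asarray(samples, dtype=float)
    frame = int(fs * frame_ms / 1000)
    n_frames = len(samples) // frame
    energy = np.array([np.sqrt(np.mean(samples[i * frame:(i + 1) * frame] ** 2))
                       for i in range(n_frames)])
    energy = np.convolve(energy, np.ones(3) / 3, mode="same")   # light smoothing
    floor = 0.2 * energy.max()                                  # crude silence floor
    peaks, _ = find_peaks(energy, height=floor,
                          distance=max(1, int(min_gap_ms / frame_ms)))
    duration_s = len(samples) / fs
    return len(peaks) / duration_s          # candidate syllable nuclei per second
```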

  13. Coevolution of Human Speech and Trade

    NARCIS (Netherlands)

    Horan, R.D.; Bulte, E.H.; Shogren, J.F.

    2008-01-01

    We propose a paleoeconomic coevolutionary explanation for the origin of speech in modern humans. The coevolutionary process, in which trade facilitates speech and speech facilitates trade, gives rise to multiple stable trajectories. While a 'trade-speech' equilibrium is not an inevitable outcome for

  15. Emotional Communication in Speech and Music: The Role of Melodic and Rhythmic Contrasts

    Directory of Open Access Journals (Sweden)

    Lena Rachel Quinto

    2013-04-01

    Full Text Available Many acoustic features convey emotion similarly in speech and music. Researchers have established that acoustic features such as pitch height, tempo, and intensity carry important emotional information in both domains. In this investigation, we examined the emotional significance of melodic and rhythmic contrasts between successive syllables or tones in speech and music, referred to as Melodic Interval Variability (MIV) and the normalized Pairwise Variability Index (nPVI). The spoken stimuli were 96 tokens expressing the emotions of irritation, fear, happiness, sadness, tenderness or no emotion. The music stimuli were 96 phrases, played with or without performance expression and composed with the intention of communicating the same emotions. Results showed that speech, but not music, was characterized by changes in MIV as a function of intended emotion. Music and speech were both characterized by changes in nPVI as a function of intended emotion. The results suggest that these measures may signal emotional intentions differently in speech and music.
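
    For concreteness, the two contrast measures named above can be computed as in the sketch below: nPVI follows the standard normalized Pairwise Variability Index formula over successive durations, and MIV is taken here as the coefficient of variation of successive pitch-interval sizes (times 100), the usual definition, though the paper's exact operationalization should be checked.

```python
import numpy as np

def npvi(durations):
    """Normalized Pairwise Variability Index over successive syllable/tone durations."""
    d = np.asarray(durations, dtype=float)
    pair_terms = np.abs(np.diff(d)) / ((d[:-1] + d[1:]) / 2)
    return 100.0 * pair_terms.mean()

def miv(pitches_semitones):
    """Melodic Interval Variability: coefficient of variation of interval sizes x 100."""
    intervals = np.abs(np.diff(np.asarray(pitches_semitones, dtype=float)))
    return 100.0 * intervals.std() / intervals.mean()

# Toy example: syllable durations (s) and syllable/tone pitches (semitones).
print(npvi([0.12, 0.20, 0.09, 0.25, 0.11]))
print(miv([60, 62, 65, 64, 69, 67]))
```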

  16. The Speech Anxiety Thoughts Inventory: scale development and preliminary psychometric data.

    Science.gov (United States)

    Cho, Yongrae; Smits, Jasper A J; Telch, Michael J

    2004-01-01

    Cognitions have been known to play a central role in the development, maintenance, and treatment of speech anxiety. However, few instruments are currently available to assess cognitive contents associated with speech anxiety. This report describes three studies examining the psychometric characteristics of a revised English version of the Speech Anxiety Thoughts Inventory (SATI), an instrument measuring maladaptive cognitions associated with speech anxiety. In Study 1, factor analyses of the SATI revealed a two-factor solution: "prediction of poor performance" and "fear of negative evaluation by audience". In Study 2, the two-factor structure was replicated. In addition, results revealed stability over a four-week period, high internal consistency, and good convergent and discriminant validity. In Study 3, the scale demonstrated sensitivity to change following brief exposure-based treatments. These findings suggest that the SATI is a highly reliable, valid measure to assess cognitive features of speech anxiety.

  17. Individual differences in children's private speech: the role of imaginary companions.

    Science.gov (United States)

    Davis, Paige E; Meins, Elizabeth; Fernyhough, Charles

    2013-11-01

    Relations between children's imaginary companion status and their engagement in private speech during free play were investigated in a socially diverse sample of 5-year-olds (N=148). Controlling for socioeconomic status, receptive verbal ability, total number of utterances, and duration of observation, there was a main effect of imaginary companion status on type of private speech. Children who had imaginary companions were more likely to engage in covert private speech compared with their peers who did not have imaginary companions. These results suggest that the private speech of children with imaginary companions is more internalized than that of their peers who do not have imaginary companions and that social engagement with imaginary beings may fulfill a similar role to social engagement with real-life partners in the developmental progression of private speech. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Transitioning from analog to digital audio recording in childhood speech sound disorders

    Science.gov (United States)

    Shriberg, Lawrence D.; McSweeny, Jane L.; Anderson, Bruce E.; Campbell, Thomas F.; Chial, Michael R.; Green, Jordan R.; Hauner, Katherina K.; Moore, Christopher A.; Rusiewicz, Heather L.; Wilson, David L.

    2014-01-01

    Few empirical findings or technical guidelines are available on the current transition from analog to digital audio recording in childhood speech sound disorders. Of particular concern in the present context was whether a transition from analog- to digital-based transcription and coding of prosody and voice features might require re-standardizing a reference database for research in childhood speech sound disorders. Two research transcribers with different levels of experience glossed, transcribed, and prosody-voice coded conversational speech samples from eight children with mild to severe speech disorders of unknown origin. The samples were recorded, stored, and played back using representative analog and digital audio systems. Effect sizes calculated for an array of analog versus digital comparisons ranged from negligible to medium, with a trend for participants’ speech competency scores to be slightly lower for samples obtained and transcribed using the digital system. We discuss the implications of these and other findings for research and clinical practice. PMID:16019779

  19. Gaze aversion to stuttered speech: a pilot study investigating differential visual attention to stuttered and fluent speech.

    Science.gov (United States)

    Bowers, Andrew L; Crawcour, Stephen C; Saltuklaroglu, Tim; Kalinowski, Joseph

    2010-01-01

    pragmatic factors in communication partners. Regardless of the factors contributing to the response, its primary importance may be that gaze aversion is a visible communication partner signal informing the person stuttering that something is amiss in the interaction and hence, may contribute to inducing negative emotions in the persons stuttering, via engagement of the mirror neuron system. We suggest that witnessing and interpreting communication partner responses to stuttering may play a role when a person who stutters engages in future interactions, perhaps contributing to the development of covert strategies to hide stuttering. 2010 Royal College of Speech & Language Therapists.

  20. Playing the Numbers Game.

    Science.gov (United States)

    O'Hara, Rory

    1984-01-01

    The recent, experimental enrollment planning and resource allocation efforts of Britain's National Advisory Board are analyzed and some potential problems for institutions posed by them are discussed. (MSE)

  1. Speech recognition from spectral dynamics

    Indian Academy of Sciences (India)

    Hynek Hermansky

    2011-10-01

    Information is carried in changes of a signal. The paper starts with revisiting Dudley’s concept of the carrier nature of speech. It points to its close connection to modulation spectra of speech and argues against short-term spectral envelopes as dominant carriers of the linguistic information in speech. The history of spectral representations of speech is briefly discussed. Some of the history of the gradual infusion of the modulation spectrum concept into automatic recognition of speech (ASR) comes next, pointing to the relationship of modulation spectrum processing to well-accepted ASR techniques such as dynamic speech features or RelAtive SpecTrAl (RASTA) filtering. Next, the frequency domain perceptual linear prediction technique for deriving autoregressive models of temporal trajectories of spectral power in individual frequency bands is reviewed. Finally, posterior-based features, which allow for straightforward application of modulation frequency domain information, are described. The paper is tutorial in nature, aims at a historical global overview of attempts at using spectral dynamics in machine recognition of speech, and does not always provide enough detail of the described techniques. However, extensive references to earlier work are provided to compensate for the lack of detail in the paper.
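
    As a small illustration of the "spectral dynamics" idea (not a reimplementation of RASTA or of frequency-domain perceptual linear prediction), the SciPy sketch below takes the log-energy trajectory of one STFT band and examines its modulation spectrum; the band and frame settings are arbitrary.

```python
import numpy as np
from scipy.signal import stft

def band_modulation_spectrum(samples, fs, band_hz=(800, 1200), frame_ms=25, hop_ms=10):
    """Modulation spectrum of the log-energy trajectory in one frequency band."""
    nper = int(fs * frame_ms / 1000)
    f, t, Z = stft(samples, fs=fs, nperseg=nper,
                   noverlap=nper - int(fs * hop_ms / 1000))
    band = (f >= band_hz[0]) & (f < band_hz[1])
    traj = np.log(np.abs(Z[band]).mean(axis=0) + 1e-10)   # band log-energy over time
    traj = traj - traj.mean()
    frame_rate = 1000.0 / hop_ms                          # trajectory sampling rate (Hz)
    mod_spec = np.abs(np.fft.rfft(traj))
    mod_freqs = np.fft.rfftfreq(len(traj), d=1.0 / frame_rate)
    return mod_freqs, mod_spec   # speech typically shows strong 2-8 Hz modulations
```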

  2. ARMA Modelling for Whispered Speech

    Institute of Scientific and Technical Information of China (English)

    Xue-li LI; Wei-dong ZHOU

    2010-01-01

    An Autoregressive Moving Average (ARMA) model for whispered speech is proposed. Compared with normal speech, whispered speech has no fundamental frequency because the glottis is semi-opened and turbulent flow is created, and formant shifting exists in the lower frequency region due to the narrowing of the tract in the false vocal fold regions and weak acoustic coupling with the subglottal system. Analysis shows that the effect of the subglottal system is to introduce additional pole-zero pairs into the vocal tract transfer function. Theoretically, the method based on an ARMA process is superior to that based on an AR process in the spectral analysis of whispered speech. Two methods, the least squares modified Yule-Walker likelihood estimate (LSMY) algorithm and the Frequency-Domain Steiglitz-McBride (FDSM) algorithm, are applied to the ARMA model for whispered speech. The performance evaluation shows that the ARMA model is much more appropriate for representing whispered speech than the AR model, and that the FDSM algorithm provides a more accurate estimation of the whispered speech spectral envelope than the LSMY algorithm, at the cost of higher computational complexity.
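
    The LSMY and FDSM estimators themselves are not reproduced here; as a hedged illustration of pole-zero (ARMA) versus all-pole (AR) modelling of a single frame, the sketch below fits both with statsmodels and compares AIC values on a synthetic stand-in signal. The model orders are arbitrary examples.

```python
import numpy as np
from scipy.signal import lfilter
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
excitation = rng.standard_normal(400)
frame = lfilter([1.0], [1.0, -1.3, 0.7], excitation)   # synthetic stand-in for a whispered frame

ar_fit = ARIMA(frame, order=(8, 0, 0)).fit()     # all-pole (AR) model
arma_fit = ARIMA(frame, order=(8, 0, 4)).fit()   # pole-zero (ARMA) model

print("AR(8)     AIC:", round(ar_fit.aic, 1))
print("ARMA(8,4) AIC:", round(arma_fit.aic, 1))
# The MA terms (zeros) can capture spectral dips introduced by subglottal coupling
# that an all-pole spectrum has to approximate with poles alone.
```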

  3. Can play be defined?

    DEFF Research Database (Denmark)

    Eichberg, Henning

    2015-01-01

    Can play be defined? There is reason to raise critical questions about the established academic demand that a phenomenon – also in humanist studies – should first of all be defined, i.e. de-lineated and by neat lines limited to a “little box” that can be handled. The following chapter develops t....... Human beings can very well understand play – or whatever phenomenon in human life – without defining it........ The academic imperative of definition seems to be linked to the positivistic attempts – and produces sometimes monstrous definitions. Have they any philosophical value for our knowledge of what play is? Definition is not a universal instrument of knowledge-building, but a culturally specific construction

  4. Playing and gaming

    DEFF Research Database (Denmark)

    Karoff, Helle Skovbjerg; Ejsing-Duun, Stine; Hanghøj, Thorkild

    2013-01-01

    The paper develops an approach of playing and gaming activities through the perspective of both activities as mood activities. The point of departure is that a game is a tool with which we, through our practices, achieve different moods. This is based on an empirical study of children's everyday lives, where the differences emerge through actual practices, i.e. through the creation of meaning in the specific situations. The overall argument is that it is not that important whether it is a playing or a gaming activity - it is however crucial to be aware of how moods occur and what their optimal...... dimensions: practices and moods. Practice is the concept of all the doing in the activities. Moods are the particular concept of sense and feeling of being, which is what we are drawn to when we are playing or gaming....

  6. INTEGRATING MACHINE TRANSLATION AND SPEECH SYNTHESIS COMPONENT FOR ENGLISH TO DRAVIDIAN LANGUAGE SPEECH TO SPEECH TRANSLATION SYSTEM

    Directory of Open Access Journals (Sweden)

    J. SANGEETHA

    2015-02-01

    Full Text Available This paper provides an interface between the machine translation and speech synthesis systems for converting English speech to Tamil text in an English-to-Tamil speech-to-speech translation system. The speech translation system consists of three modules: automatic speech recognition, machine translation and text-to-speech synthesis. Many procedures for incorporating speech recognition and machine translation have been proposed, but the speech synthesis component has not yet been considered. In this paper, we focus on the integration of machine translation and speech synthesis, and report a subjective evaluation investigating the impact of the speech synthesis and machine translation components and of their integration. Here we implement a hybrid machine translation system (a combination of rule-based and statistical machine translation) and a concatenative syllable-based speech synthesis technique. In order to retain the naturalness and intelligibility of the synthesized speech, Auto Associative Neural Network (AANN) prosody prediction is used in this work. The results of this system investigation demonstrate that the naturalness and intelligibility of the synthesized speech are strongly influenced by the fluency and correctness of the translated text.
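
    A purely structural sketch of the three-stage pipeline described above (ASR, then hybrid MT, then syllable-based concatenative TTS). Every function below is a hypothetical placeholder: the paper's recognizer, rule-based/statistical combination, AANN prosody model and synthesizer are not reproduced.

```python
def recognize_english(english_audio: bytes) -> str:
    """ASR stage (placeholder; plug in an English recognizer here)."""
    raise NotImplementedError

def rule_based_mt(english_text: str):
    """Rule-based transfer; returns None when no rule applies (stub)."""
    return None

def statistical_mt(english_text: str) -> str:
    """Statistical fallback translation (identity stub)."""
    return english_text

def hybrid_translate(english_text: str) -> str:
    """Hybrid MT as described above: rules first, statistical model as fallback."""
    tamil = rule_based_mt(english_text)
    return tamil if tamil is not None else statistical_mt(english_text)

def synthesize_tamil(tamil_text: str) -> bytes:
    """Concatenative syllable-based TTS with a prosody model (all stubs)."""
    syllables = tamil_text.split()              # stand-in for real syllabification
    durations = [0.2 for _ in syllables]        # stand-in for AANN prosody prediction
    return b"".join(b"" for _ in zip(syllables, durations))

def speech_to_speech(english_audio: bytes) -> bytes:
    """ASR -> hybrid MT -> syllable TTS, chained end to end."""
    return synthesize_tamil(hybrid_translate(recognize_english(english_audio)))
```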

  7. Brain-inspired speech segmentation for automatic speech recognition using the speech envelope as a temporal reference

    Science.gov (United States)

    Lee, Byeongwook; Cho, Kwang-Hyun

    2016-11-01

    Speech segmentation is a crucial step in automatic speech recognition because additional speech analyses are performed for each framed speech segment. Conventional segmentation techniques primarily segment speech using a fixed frame size for computational simplicity. However, this approach is insufficient for capturing the quasi-regular structure of speech, which causes substantial recognition failure in noisy environments. How does the brain handle quasi-regular structured speech and maintain high recognition performance under any circumstance? Recent neurophysiological studies have suggested that the phase of neuronal oscillations in the auditory cortex contributes to accurate speech recognition by guiding speech segmentation into smaller units at different timescales. A phase-locked relationship between neuronal oscillation and the speech envelope has recently been obtained, which suggests that the speech envelope provides a foundation for multi-timescale speech segmental information. In this study, we quantitatively investigated the role of the speech envelope as a potential temporal reference to segment speech using its instantaneous phase information. We evaluated the proposed approach by the achieved information gain and recognition performance in various noisy environments. The results indicate that the proposed segmentation scheme not only extracts more information from speech but also provides greater robustness in a recognition test.
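
    A hedged sketch of envelope-guided segmentation along these lines: extract the amplitude envelope with a Hilbert transform, band-limit it to the syllabic-rate region, and place boundaries where the envelope's instantaneous phase wraps around. Filter settings and rates are illustrative, and this is not claimed to reproduce the paper's exact scheme.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def envelope_phase_boundaries(samples, fs, band=(2.0, 9.0), env_fs=100):
    """Return candidate segment-boundary times (s) from envelope phase resets."""
    envelope = np.abs(hilbert(np.asarray(samples, dtype=float)))   # amplitude envelope
    step = int(fs // env_fs)
    env = envelope[::step]                                         # crude decimation to ~env_fs Hz
    b, a = butter(2, band, btype="band", fs=env_fs)
    slow_env = filtfilt(b, a, env)                                 # syllabic-rate component
    phase = np.angle(hilbert(slow_env))                            # instantaneous phase
    wraps = np.where(np.diff(phase) < -np.pi)[0]                   # phase resets ~ segment edges
    return wraps * step / fs

# Usage on a synthetic amplitude-modulated carrier standing in for speech:
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
toy = (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 300 * t)
print(envelope_phase_boundaries(toy, fs))
```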

  8. A Survey on Speech Enhancement Methodologies

    Directory of Open Access Journals (Sweden)

    Ravi Kumar. K

    2016-12-01

    Full Text Available Speech enhancement is a technique which processes noisy speech signals. The aim of speech enhancement is to improve the perceived quality of speech and/or to improve its intelligibility. Due to its vast applications in mobile telephony, VoIP, hearing aids, Skype and speaker recognition, the challenges in speech enhancement have grown over the years. It is particularly challenging to suppress background noise that affects human communication in noisy environments like airports, road works, traffic, and cars. The objective of this survey paper is to outline the single-channel speech enhancement methodologies used for enhancing speech signals corrupted with additive background noise, and to discuss the challenges and opportunities of single-channel speech enhancement. This paper mainly focuses on transform-domain techniques and supervised (NMF, HMM) speech enhancement techniques, and gives a framework for developments in speech enhancement methodologies.
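
    As one concrete example from the transform-domain family surveyed above, the sketch below implements plain magnitude spectral subtraction with a noise estimate taken from the first few frames (assumed speech-free) and a small spectral floor; the parameter values are illustrative.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs, noise_frames=6, over_sub=1.0, floor=0.02):
    """Subtract an average noise magnitude spectrum from each STFT frame."""
    f, t, Z = stft(noisy, fs=fs, nperseg=512)
    mag, phase = np.abs(Z), np.angle(Z)
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)    # noise estimate
    clean_mag = np.maximum(mag - over_sub * noise_mag, floor * noise_mag)
    _, enhanced = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=512)
    return enhanced[: len(noisy)]
```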

  9. General game playing

    CERN Document Server

    Genesereth, Michael

    2014-01-01

    General game players are computer systems able to play strategy games based solely on formal game descriptions supplied at "runtime" (in other words, they don't know the rules until the game starts). Unlike specialized game players, such as Deep Blue, general game players cannot rely on algorithms designed in advance for specific games; they must discover such algorithms themselves. General game playing expertise depends on intelligence on the part of the game player and not just intelligence of the programmer of the game player. GGP is an interesting application in its own right. It is intell

  10. Playful Collaboration (or Not)

    DEFF Research Database (Denmark)

    Bogers, Marcel; Sproedt, Henrik

    2011-01-01

    This paper explores how games and play, which are deeply rooted in human beings as a way to learn and interact, can be used to teach certain concepts and practices related to open collaborative innovation. We discuss how playing games can be a source of creativity, imagination and fun, while it can also be conducive to deep learning. As such, a game can engage different dimensions of learning and embed elements of active, collaborative, cooperative and problem-based learning. Building on this logic, we present an exploratory case study of the use of a particular board game in a class of a course...

  11. Five recent play dates

    DEFF Research Database (Denmark)

    Abildgaard, Mette Simonsen; Birkbak, Andreas; Jensen, Torben Elgaard

    2017-01-01

    An advantage of the playground metaphor is that it comes with the activity of going out on ‘play dates’ and developing friendships. In such playful relationships, there is always something at stake, but the interaction is also fun and inherently exploratory. In the following, we take a tour of five recent collaborative projects that the TANTlab has participated in. The projects differ widely and testify to different experiences with collaboration and intervention – from a data sprint on obesity with other researchers to a Facebook-driven intervention in Aalborg municipality’s primary school reform...

  12. Analyzing Peace Pedagogies

    Science.gov (United States)

    Haavelsrud, Magnus; Stenberg, Oddbjorn

    2012-01-01

    Eleven articles on peace education published in the first volume of the Journal of Peace Education are analyzed. This selection comprises peace education programs that have been planned or carried out in different contexts. In analyzing peace pedagogies as proposed in the 11 contributions, we have chosen network analysis as our method--enabling…

  13. Computational neuroanatomy of speech production.

    Science.gov (United States)

    Hickok, Gregory

    2012-01-05

    Speech production has been studied predominantly from within two traditions, psycholinguistics and motor control. These traditions have rarely interacted, and the resulting chasm between these approaches seems to reflect a level of analysis difference: whereas motor control is concerned with lower-level articulatory control, psycholinguistics focuses on higher-level linguistic processing. However, closer examination of both approaches reveals a substantial convergence of ideas. The goal of this article is to integrate psycholinguistic and motor control approaches to speech production. The result of this synthesis is a neuroanatomically grounded, hierarchical state feedback control model of speech production.

  14. Speech enhancement theory and practice

    CERN Document Server

    Loizou, Philipos C

    2013-01-01

    With the proliferation of mobile devices and hearing devices, including hearing aids and cochlear implants, there is a growing and pressing need to design algorithms that can improve speech intelligibility without sacrificing quality. Responding to this need, Speech Enhancement: Theory and Practice, Second Edition introduces readers to the basic problems of speech enhancement and the various algorithms proposed to solve these problems. Updated and expanded, this second edition of the bestselling textbook broadens its scope to include evaluation measures and enhancement algorithms aimed at impr

  15. On Low-level Cognitive Components of Speech

    DEFF Research Database (Denmark)

    Feng, Ling; Hansen, Lars Kai

    2005-01-01

    In this paper we analyze speech for low-level cognitive features using linear component analysis. We demonstrate generalizable component 'fingerprints' stemming from both phonemes and speaker. Phonemes are fingerprints found at the basic analysis window time scale (20 msec), while speaker 'voiceprints' are found at time scales around 1000 msec. The analysis is based on homomorphic filtering features and energy-based sparsification....

  16. Steganalysis of recorded speech

    Science.gov (United States)

    Johnson, Micah K.; Lyu, Siwei; Farid, Hany

    2005-03-01

    Digital audio provides a suitable cover for high-throughput steganography. At 16 bits per sample and sampled at a rate of 44,100 Hz, digital audio has the bit-rate to support large messages. In addition, audio is often transient and unpredictable, facilitating the hiding of messages. Using an approach similar to our universal image steganalysis, we show that hidden messages alter the underlying statistics of audio signals. Our statistical model begins by building a linear basis that captures certain statistical properties of audio signals. A low-dimensional statistical feature vector is extracted from this basis representation and used by a non-linear support vector machine for classification. We show the efficacy of this approach on LSB embedding and Hide4PGP. While no explicit assumptions about the content of the audio are made, our technique has been developed and tested on high-quality recorded speech.
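
    For reference, the kind of LSB embedding targeted by such steganalysis can be sketched as below (the detector itself, a linear basis of statistical features plus an SVM, is not reproduced): message bits replace the least significant bit of 16-bit PCM samples, changing roughly half of the touched samples.

```python
import numpy as np

def lsb_embed(samples_int16, message_bits):
    """Write one message bit into the LSB of each of the first len(message_bits) samples."""
    stego = samples_int16.copy()
    bits = np.asarray(message_bits, dtype=np.int16)
    stego[: len(bits)] = (stego[: len(bits)] & ~np.int16(1)) | bits
    return stego

def lsb_extract(stego_int16, n_bits):
    """Read the LSBs back out of the first n_bits samples."""
    return (stego_int16[:n_bits] & 1).astype(np.uint8)

rng = np.random.default_rng(3)
cover = (rng.standard_normal(44100) * 8000).astype(np.int16)   # 1 s of fake 16-bit audio
msg = rng.integers(0, 2, 256)
stego = lsb_embed(cover, msg)
assert np.array_equal(lsb_extract(stego, 256), msg.astype(np.uint8))
print("samples changed:", int(np.count_nonzero(cover != stego)), "of", cover.size)
```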

  17. Speech recovery device

    Energy Technology Data Exchange (ETDEWEB)

    Frankle, Christen M.

    2004-04-20

    There is provided an apparatus and method for assisting speech recovery in people with inability to speak due to aphasia, apraxia or another condition with similar effect. A hollow, rigid, thin-walled tube with semi-circular or semi-elliptical cut out shapes at each open end is positioned such that one end mates with the throat/voice box area of the neck of the assistor and the other end mates with the throat/voice box area of the assisted. The speaking person (assistor) makes sounds that produce standing wave vibrations at the same frequency in the vocal cords of the assisted person. Driving the assisted person's vocal cords with the assisted person being able to hear the correct tone enables the assisted person to speak by simply amplifying the vibration of membranes in their throat.

  18. Silog: Speech Input Logon

    Science.gov (United States)

    Grau, Sergio; Allen, Tony; Sherkat, Nasser

    Silog is a biometric authentication system that extends the conventional PC logon process using voice verification. Users enter their ID and password using a conventional Windows logon procedure but then the biometric authentication stage makes a Voice over IP (VoIP) call to a VoiceXML (VXML) server. User interaction with this speech-enabled component then allows the user's voice characteristics to be extracted as part of a simple user/system spoken dialogue. If the captured voice characteristics match those of a previously registered voice profile, then network access is granted. If no match is possible, then a potential unauthorised system access has been detected and the logon process is aborted.

  19. Speech recovery device

    Energy Technology Data Exchange (ETDEWEB)

    Frankle, Christen M.

    2000-10-19

    There is provided an apparatus and method for assisting speech recovery in people with inability to speak due to aphasia, apraxia or another condition with similar effect. A hollow, rigid, thin-walled tube with semi-circular or semi-elliptical cut out shapes at each open end is positioned such that one end mates with the throat/voice box area of the neck of the assistor and the other end mates with the throat/voice box area of the assisted. The speaking person (assistor) makes sounds that produce standing wave vibrations at the same frequency in the vocal cords of the assisted person. Driving the assisted person's vocal cords with the assisted person being able to hear the correct tone enables the assisted person to speak by simply amplifying the vibration of membranes in their throat.

  20. Join Cost for Unit Selection Speech Synthesis

    OpenAIRE

    Vepa, Jithendra

    2004-01-01

    Undoubtedly, state-of-the-art unit selection-based concatenative speech systems produce very high quality synthetic speech. This is due to a large speech database containing many instances of each speech unit, with a varied and natural distribution of prosodic and spectral characteristics. The join cost, which measures how well two units can be joined together, is one of the main criteria for selecting appropriate units from this large speech database. The ideal join cost is one that measur...

  1. Speech distortion measure based on auditory properties

    Institute of Scientific and Technical Information of China (English)

    CHEN Guo; HU Xiulin; ZHANG Yunyu; ZHU Yaoting

    2000-01-01

    The Perceptual Spectrum Distortion (PSD) measure, based on the auditory properties of human beings, is presented to measure speech distortion. The PSD measure calculates the speech distortion distance by simulating these auditory properties and converting the short-time speech power spectrum to an auditory perceptual spectrum. Preliminary simulation experiments in comparison with the Itakura measure have been done. The results show that the PSD measure is a preferable speech distortion measure and more consistent with subjective assessment of speech quality.

  2. A NOVEL APPROACH TO STUTTERED SPEECH CORRECTION

    OpenAIRE

    Alim Sabur Ajibola; Nahrul Khair bin Alang Md. Rashid; Wahju Sediono; Nik Nur Wahidah Nik Hashim

    2016-01-01

    Stuttered speech is dysfluency-rich speech, more prevalent in males than in females. It has been associated with insufficient air pressure or poor articulation, even though the root causes are more complex. The primary features include prolonged speech and repetitive speech, while some of its secondary features include anxiety, fear, and shame. This study used LPC analysis and synthesis algorithms to reconstruct the stuttered speech. The results were evaluated using cepstral distance, Itakura...
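
    A minimal LPC analysis/synthesis round trip of the kind referred to above, built from autocorrelation LPC via SciPy's solve_toeplitz rather than the authors' exact implementation; the frame here is synthetic, whereas in practice this would run frame by frame over the recorded stuttered speech.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_coefficients(frame, order=12):
    """Autocorrelation-method LPC: returns the analysis (whitening) filter A(z)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][: order + 1]
    a = solve_toeplitz(r[:order], r[1:order + 1])
    return np.concatenate(([1.0], -a))

rng = np.random.default_rng(4)
frame = lfilter([1.0], [1.0, -1.2, 0.8], rng.standard_normal(480))  # stand-in voiced frame

A = lpc_coefficients(frame, order=12)
residual = lfilter(A, [1.0], frame)             # analysis: prediction residual
resynth = lfilter([1.0], A, residual)           # synthesis: all-pole reconstruction
print("max reconstruction error:", float(np.max(np.abs(frame - resynth))))
```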

  3. Association of Velopharyngeal Insufficiency With Quality of Life and Patient-Reported Outcomes After Speech Surgery.

    Science.gov (United States)

    Bhuskute, Aditi; Skirko, Jonathan R; Roth, Christina; Bayoumi, Ahmed; Durbin-Johnson, Blythe; Tollefson, Travis T

    2017-09-01

    Patients with cleft palate and other causes of velopharyngeal insufficiency (VPI) suffer adverse effects on social interactions and communication. Measurement of these patient-reported outcomes is needed to help guide surgical and nonsurgical care. To further validate the VPI Effects on Life Outcomes (VELO) instrument, measure the change in quality of life (QOL) after speech surgery, and test the association of change in speech with change in QOL. Prospective descriptive cohort including children and young adults undergoing speech surgery for VPI in a tertiary academic center. Participants completed the validated VELO instrument before and after surgical treatment. The main outcome measures were preoperative and postoperative VELO scores and the perceptual speech assessment of speech intelligibility. The VELO scores are divided into subscale domains. Changes in VELO after surgery were analyzed using linear regression models. VELO scores were analyzed as a function of speech intelligibility adjusting for age and cleft type. The correlation between speech intelligibility rating and VELO scores was estimated using the polyserial correlation. Twenty-nine patients (13 males and 16 females) were included. Mean (SD) age was 7.9 (4.1) years (range, 4-20 years). Pharyngeal flap was used in 14 (48%) cases, Furlow palatoplasty in 12 (41%), and sphincter pharyngoplasty in 1 (3%). The mean (SD) preoperative speech intelligibility rating was 1.71 (1.08), which decreased postoperatively to 0.79 (0.93) in 24 patients who completed protocol (P after surgery (Pafter surgery (P = .36). Speech Intelligibility was correlated with preoperative and postoperative total VELO score (P after surgery was correlated with change in speech intelligibility. Speech surgery improves VPI-specific quality of life. We confirmed validation in a population of untreated patients with VPI and included pharyngeal flap surgery, which had not previously been included in validation studies. The VELO

  4. Theater, Speech, Light

    Directory of Open Access Journals (Sweden)

    Primož Vitez

    2011-07-01

    Full Text Available This paper considers a medium as a substantial translator: an intermediary between the producers and receivers of a communicational act. A medium is a material support to the spiritual potential of human sources. If the medium is a support to meaning, then the relations between different media can be interpreted as a space for making sense of these meanings, a generator of sense: it means that the interaction of substances creates an intermedial space that conceives of a contextualization of specific meaningful elements in order to combine them into the sense of a communicational intervention. The theater itself is multimedia. A theatrical event is a communicational act based on a combination of several autonomous structures: text, scenography, light design, sound, directing, literary interpretation, speech, and, of course, the one that contains all of these: the actor in a human body. The actor is a physical and symbolic, anatomic, and emblematic figure in the synesthetic theatrical act because he reunites in his body all the essential principles and components of theater itself. The actor is an audio-visual being, made of kinetic energy, speech, and human spirit. The actor’s body, as a source, instrument, and goal of the theater, becomes an intersection of sound and light. However, theater as intermedial art is no intermediate practice; it must be seen as interposing bodies between conceivers and receivers, between authors and auditors. The body is not self-evident; the body in contemporary art forms is being redefined as a privilege. The art needs bodily dimensions to explore the medial qualities of substances: because it is alive, it returns to studying biology. The fact that theater is an archaic art form is also the purest promise of its future.

  5. Talk in Interaction in the Speech-Language Pathology Clinic: Bringing Theory to Practice through Discourse

    Science.gov (United States)

    Leahy, Margaret M.; Walsh, Irene P.

    2008-01-01

    The importance of learning about and applying clinical discourse analysis to enhance the talk in interaction in the speech-language pathology clinic is discussed. The benefits of analyzing clinical discourse to explicate therapy dynamics are described.

  6. Speech of people with autism: Echolalia and echolalic speech

    OpenAIRE

    Błeszyński, Jacek Jarosław

    2013-01-01

    Speech of people with autism is recognised as one of the basic diagnostic, therapeutic and theoretical problems. One of the most common symptoms of autism in children is echolalia, described here as being of different types and severity. This paper presents the results of studies into different levels of echolalia, both in normally developing children and in children diagnosed with autism, discusses the differences between simple echolalia and echolalic speech - which can be considered to b...

  7. The Activity of Play

    DEFF Research Database (Denmark)

    Pichlmair, Martin

    2016-01-01

    This paper presents Activity Theory as a framework for understanding the action of playing games with the intention of building a foundation for the creation of new game design tools and methods. Activity Theory, an epistemological framework rooted in Soviet psychology of the first half of the 20th century…

  8. Playing the Role

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    The G20 London summit made history. While applauding the summit’s productive communiqué, Ni Xiaoling, senior financial observer with Xinhua News Agency, warns of the gap between the greater responsibilities the International Monetary Fund shoulders and its limited capabilities to play the role of coordinator in economic globalization.

  9. Abstraction through Game Play

    Science.gov (United States)

    Avraamidou, Antri; Monaghan, John; Walker, Aisha

    2012-01-01

    This paper examines the computer game play of an 11-year-old boy. In the course of building a virtual house he developed and used, without assistance, an artefact and an accompanying strategy to ensure that his house was symmetric. We argue that the creation and use of this artefact-strategy is a mathematical abstraction. The discussion…

  10. Mobilities at Play

    DEFF Research Database (Denmark)

    Ungruhe, Christian

    2017-01-01

    -level perspective there is still an analytical gap between the ambitions and experiences of migrating players and economic power relations at play on the one hand and the socio-cultural embedding of the transnational connections in football migration on the other. In order to understand why and how football...

  11. Play's Importance in School

    Science.gov (United States)

    Sandberg, Anette; Heden, Rebecca

    2011-01-01

    The purpose of this study is to contribute knowledge on and gain an understanding of elementary school teachers' perspectives on the function of play in children's learning processes. The study is qualitative with a hermeneutical approach and has George Herbert Mead as a theoretical frame of reference. Interviews have been carried out with seven…

  12. Play framework essentials

    CERN Document Server

    Richard-Foy, Julien

    2014-01-01

    This book targets Java and Scala developers who already have some experience in web development and who want to master Play framework quickly and efficiently. This book assumes you have a good level of knowledge and understanding of efficient Java and Scala code.

  13. A Significant Play

    Institute of Scientific and Technical Information of China (English)

    梁海光; 陈明

    2002-01-01

    Yesterday evening, I went to see a play. It was really significant. It was about Zheng Xiaoyue, a very clever and diligent middle school student. Unfortunately, her mother died when she and her brother were very young. Her father was out of work and,

  14. Tonal Language Speech Compression Based on a Bitrate Scalable Multi-Pulse Based Code Excited Linear Prediction Coder

    Directory of Open Access Journals (Sweden)

    Suphattharachai Chomphan

    2011-01-01

    Full Text Available Problem statement: Speech compression is an important issue in modern digital speech communication, and bitrate scalability also plays a significant role, since the capacity of a communication system varies over time. When considering a tonal language such as Thai, tone plays an important role in the naturalness and intelligibility of the speech, so it must be treated appropriately; these issues are taken into account in this study. Approach: This study proposes a modification of a flexible Multi-Pulse based Code Excited Linear Predictive (MP-CELP) coder with bitrate scalability for tonal-language speech in multimedia applications. The coder consists of a core coder and bitrate-scalable tools. High pitch-delay resolutions are applied to the adaptive codebook of the core coder to improve tonal-language speech quality. The bitrate-scalable tool employs multi-stage excitation coding based on an embedded-coding approach, and the multi-pulse excitation codebook at each stage is adaptively produced depending on the excitation signal selected at the previous stage. Results: The experimental results show that the speech quality of the proposed coder is improved over that of the conventional coder without pitch-resolution adaptation. Conclusion: The proposed approach improves speech compression quality for a tonal language, and the functionality of bitrate scalability is also provided.
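
    The high pitch-delay resolution mentioned above can be illustrated with a toy adaptive-codebook search that tries fractional as well as integer pitch lags against a target excitation; this is an illustrative sketch under assumed lag ranges and linear interpolation, not the coder proposed in the paper:

        import numpy as np

        def adaptive_codebook_search(past_exc, target, lag_min=20, lag_max=140, resolution=0.25):
            """Search integer and fractional pitch lags; return the lag and gain whose
            delayed past excitation best matches the target (maximum normalized correlation)."""
            n = len(target)
            t = np.arange(n)
            best_lag, best_gain, best_score = None, 0.0, -np.inf
            for lag in np.arange(lag_min, lag_max + resolution, resolution):
                # Fractional delays are realized by linear interpolation of the past excitation.
                idx = len(past_exc) - lag + t
                cand = np.interp(idx, np.arange(len(past_exc)), past_exc)
                energy = float(np.dot(cand, cand))
                if energy <= 0.0:
                    continue
                corr = float(np.dot(target, cand))
                score = corr * corr / energy
                if score > best_score:
                    best_lag, best_gain, best_score = lag, corr / energy, score
            return best_lag, best_gain

        # Toy usage: a periodic excitation whose true pitch lag is 60.4 samples.
        rng = np.random.default_rng(0)
        past = np.sin(2 * np.pi * np.arange(400) / 60.4) + 0.05 * rng.standard_normal(400)
        target = np.sin(2 * np.pi * np.arange(400, 480) / 60.4)
        lag, gain = adaptive_codebook_search(past, target)
        print(f"estimated pitch lag: {lag:.2f} samples, gain: {gain:.2f}")

    With resolution set to 1.0 the same search degenerates to an integer-lag codebook, which is the contrast the abstract draws for tonal-language quality.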

  15. The Intermodulation Lockin Analyzer

    CERN Document Server

    Tholen, Erik A; Forchheimer, Daniel; Schuler, Vivien; Tholen, Mats O; Hutter, Carsten; Haviland, David B

    2011-01-01

    Nonlinear systems can be probed by driving them with two or more pure tones while measuring the intermodulation products of the drive tones in the response. We describe a digital lock-in analyzer which is designed explicitly for this purpose. The analyzer is implemented on a field-programmable gate array, providing speed in analysis, real-time feedback and stability in operation. The use of the analyzer is demonstrated for Intermodulation Atomic Force Microscopy. A generalization of the intermodulation spectral technique to arbitrary drive waveforms is discussed.
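
    A minimal numerical sketch of the measurement principle (not of the FPGA lock-in itself): drive a toy nonlinear system with two pure tones and read the intermodulation products off the Fourier bins at mixing frequencies such as 2f1 - f2; the sample rate, tone frequencies, and cubic nonlinearity are assumptions chosen so every tone falls on an exact bin.

        import numpy as np

        fs = 1_000_000                           # sample rate, Hz (assumed)
        T = 0.01                                 # window length, s -> 100 Hz bin spacing
        t = np.arange(int(fs * T)) / fs
        f1, f2 = 100_000.0, 100_300.0            # two pure drive tones (assumed)

        drive = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
        response = drive + 0.05 * drive**3       # toy nonlinear system with a cubic term

        # Lock-in-style readout: amplitude of the response at each frequency of interest.
        spectrum = np.abs(np.fft.rfft(response)) / len(response) * 2
        freqs = np.fft.rfftfreq(len(response), 1 / fs)

        def amplitude(f):
            return spectrum[np.argmin(np.abs(freqs - f))]

        for label, f in [("f1", f1), ("f2", f2),
                         ("2f1 - f2", 2 * f1 - f2), ("2f2 - f1", 2 * f2 - f1)]:
            print(f"{label:>9} at {f:8.0f} Hz: amplitude {amplitude(f):.4f}")

    The third-order products appear at 99 700 Hz and 100 600 Hz even though no drive power was applied there, which is the signature the analyzer is built to extract.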

  16. Analyzing in the present

    DEFF Research Database (Denmark)

    Revsbæk, Line; Tanggaard, Lene

    2015-01-01

    The article presents a notion of “analyzing in the present” as a source of inspiration in analyzing qualitative research materials. The term emerged from extensive listening to interview recordings during everyday commuting to university campus. Paying attention to the way different parts...... the interdependency between researcher and researched. On this basis, we advocate an explicit “open-state-of mind” listening as a key aspect of analyzing qualitative material, often described only as a matter of reading transcribed empirical materials, reading theory, and writing. The article contributes...

  17. Subtitle Translation of Puns in English Sitcoms from the Perspective of Speech Act Theory-A Case Study of Mind Your Language

    Institute of Scientific and Technical Information of China (English)

    ZOU Yu-juan

    2015-01-01

    Puns play a vital role in sitcoms in creating humor or irony. Good translations of puns can decide whether target-language audiences obtain the same understanding of, and reaction to, a sitcom as source-language audiences do. Nevertheless, it is not easy for many translators to render some English puns faithfully and smoothly into Chinese. Therefore, this paper analyzes the subtitle translation of puns in the British sitcom Mind Your Language from the perspective of Speech Act Theory, so as to find useful and helpful strategies for the translation of puns in subtitling.

  18. Psychogenetic studies of speech and language abilities: The short review and studying prospects

    Directory of Open Access Journals (Sweden)

    Chernov D.N.

    2015-03-01

    Full Text Available The article reviews the basic results of psychogenetic research on speech and language abilities. It shows that shared-environment factors, alongside genetic influences, play a considerable role in the formation of specific features of speech and language, and that their role is especially large in early ontogenesis. Later, the contribution of the shared environment to individual differences in speech and language development falls while the role of hereditary factors increases. Considerable ontogenetic stability of speech and language abilities is observed, which is provided by the stability of genetic and shared-environmental influences. Medical-biological factors (such as prematurity) and social factors (the socioeconomic and educational status of parents, the degree of orderliness of the home environment) can play a moderate role in changing the genotype-environment proportions. Theoretical and practical consequences of the research are discussed.

  19. Analyzing binding data.

    Science.gov (United States)

    Motulsky, Harvey J; Neubig, Richard R

    2010-07-01

    Measuring the rate and extent of radioligand binding provides information on the number of binding sites and on the affinity and accessibility of these binding sites for various drugs. This unit explains how to design and analyze such experiments.
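
    As a worked example of the kind of analysis the unit covers, the sketch below fits a one-site saturation binding model, specific binding = Bmax·[L]/(Kd + [L]), to hypothetical radioligand data; the concentrations, binding values, and units are made up.

        import numpy as np
        from scipy.optimize import curve_fit

        def one_site(L, Bmax, Kd):
            """Specific binding for a single class of sites."""
            return Bmax * L / (Kd + L)

        # Hypothetical data: free radioligand concentration (nM) vs specific binding (fmol/mg).
        conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
        bound = np.array([9.0, 24.0, 55.0, 95.0, 130.0, 148.0, 155.0])

        (Bmax, Kd), _ = curve_fit(one_site, conc, bound, p0=[150.0, 2.0])
        print(f"Bmax = {Bmax:.1f} fmol/mg, Kd = {Kd:.2f} nM")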

  20. Analog multivariate counting analyzers

    CERN Document Server

    Nikitin, A V; Armstrong, T P

    2003-01-01

    Characterizing rates of occurrence of various features of a signal is of great importance in numerous types of physical measurements. Such signal features can be defined as certain discrete coincidence events, e.g. crossings of a signal with a given threshold, or occurrence of extrema of a certain amplitude. We describe measuring rates of such events by means of analog multivariate counting analyzers. Given a continuous scalar or multicomponent (vector) input signal, an analog counting analyzer outputs a continuous signal with the instantaneous magnitude equal to the rate of occurrence of certain coincidence events. The analog nature of the proposed analyzers allows us to reformulate many problems of the traditional counting measurements, and cast them in a form which is readily addressed by methods of differential calculus rather than by algebraic or logical means of digital signal processing. Analog counting analyzers can be easily implemented in discrete or integrated electronic circuits, do not suffer fro...
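
    A digital toy version of the idea (the paper describes analog circuits, which are not reproduced here): define a coincidence event as an upward crossing of a threshold and output the instantaneous rate of such events counted over a sliding window; the signal, threshold, and window length are assumptions.

        import numpy as np

        def upward_crossing_rate(x, threshold, fs, window_s=0.1):
            """Instantaneous rate (events per second) of upward crossings of `threshold`,
            counted over a sliding window of `window_s` seconds."""
            crossings = (x[:-1] < threshold) & (x[1:] >= threshold)   # discrete coincidence events
            win = max(1, int(window_s * fs))
            counts = np.convolve(crossings.astype(float), np.ones(win), mode="same")
            return counts / window_s

        # Toy input: noise whose amplitude grows with time, so the crossing rate grows too.
        fs = 10_000
        t = np.arange(5 * fs) / fs
        x = (0.2 + 0.8 * t / t[-1]) * np.random.default_rng(1).standard_normal(len(t))
        rate = upward_crossing_rate(x, threshold=1.0, fs=fs)
        print(f"crossing rate, first vs last second: {rate[:fs].mean():.1f} vs {rate[-fs:].mean():.1f} per second")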

  1. Miniature mass analyzer

    CERN Document Server

    Cuna, C; Lupsa, N; Cuna, S; Tuzson, B

    2003-01-01

    The paper presents the concept of different mass analyzers that were specifically designed as small-dimension instruments able to detect the main environmental pollutants with great sensitivity and accuracy. Mass spectrometers are well-suited instruments for the chemical and isotopic analysis needed in environmental surveillance. Usually, this is done by sampling the soil, air or water, followed by laboratory analysis. To avoid drawbacks caused by sample alteration during the sampling process and transport, 'in situ' analysis is preferred. Theoretically, any type of mass analyzer can be miniaturized, but some are more appropriate than others. Quadrupole mass filters and traps, magnetic sector, time-of-flight and ion cyclotron mass analyzers can be successfully shrunk; for each of them some performance is sacrificed, and one must know which parameters need to be kept unchanged. To satisfy the miniaturization criteria of the analyzer, it is necessary to use asymmetrical geometries, with ion beam obl...
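
    As a small worked example of one scaling relation behind such miniaturization, the sketch below evaluates the textbook Mathieu stability parameters of a linear quadrupole filter for a singly charged ion, a = 8eU/(m·r0²·Ω²) and q = 4eV/(m·r0²·Ω²); the operating values (mass 44 u, r0 = 1 mm, 4 MHz RF) are assumptions, not specifications from the paper.

        import numpy as np

        e = 1.602176634e-19          # elementary charge, C
        amu = 1.66053906660e-27      # atomic mass unit, kg

        def mathieu_aq(m_u, U, V, r0, f):
            """Mathieu parameters for a singly charged ion in a linear quadrupole.
            m_u: ion mass in u, U: DC voltage (V), V: RF amplitude (V),
            r0: field radius (m), f: RF frequency (Hz)."""
            m = m_u * amu
            omega = 2 * np.pi * f
            a = 8 * e * U / (m * r0**2 * omega**2)
            q = 4 * e * V / (m * r0**2 * omega**2)
            return a, q

        # Assumed operating point for a miniature filter: small field radius, high RF frequency.
        a, q = mathieu_aq(m_u=44.0, U=8.3, V=50.0, r0=1.0e-3, f=4.0e6)
        print(f"a = {a:.3f}, q = {q:.3f}  (tip of the first stability region: a ~ 0.237, q ~ 0.706)")

    Shrinking r0 while keeping the ion mass range fixed forces either lower voltages or a higher RF frequency to stay near the stability apex, which is one of the trade-offs miniature analyzers must balance.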

  2. Improving speech intelligibility for binaural voice transmission under disturbing noise and reverberation using virtual speaker lateralization

    Directory of Open Access Journals (Sweden)

    A.L. Padilla Ortiz

    2015-06-01

    Full Text Available Subjective speech intelligibility tests were carried out in order to investigate strategies to improve speech intelligibility in binaural voice transmission when listening from different azimuth angles under adverse listening conditions. Phonetically balanced, bi-syllable meaningful words in Spanish were used as speech material. The speech signal was played back through headphones, undisturbed, and also with the addition of high levels of disturbing noise or reverberation, with a signal-to-noise ratio of SNR = –10 dB and a reverberation time of T60 = 10 s. Speech samples were contaminated with interaurally uncorrelated noise and interaurally correlated reverberation, which previous studies have shown to be the more adverse conditions. Results show that, for speech contaminated with interaurally uncorrelated noise, intelligibility scores improve for azimuth angles around ±30° over speech intelligibility at 0°. On the other hand, for interaurally correlated reverberation, binaural speech intelligibility is reduced when listening at azimuth angles around ±30°, in comparison with listening at 0° or at azimuth angles around ±60°.
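
    A sketch of one building block of such a setup: scaling a noise signal so that the mixture reaches a target SNR, with an independent noise realization per ear to approximate interaurally uncorrelated noise. The speech signal below is a synthetic stand-in, and file handling, spatialization, and calibration are omitted.

        import numpy as np

        def mix_at_snr(speech, noise, snr_db):
            """Scale `noise` so the speech-to-noise power ratio of the mixture is `snr_db`."""
            p_speech = np.mean(speech**2)
            p_noise = np.mean(noise**2)
            gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
            return speech + gain * noise

        fs = 16_000
        t = np.arange(2 * fs) / fs
        speech = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))  # stand-in for speech

        rng = np.random.default_rng(7)
        left = mix_at_snr(speech, rng.standard_normal(len(t)), snr_db=-10)    # independent noise per ear
        right = mix_at_snr(speech, rng.standard_normal(len(t)), snr_db=-10)   # -> interaurally uncorrelated
        binaural = np.stack([left, right], axis=1)
        print(binaural.shape)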

  3. Effects of utterance length and vocal loudness on speech breathing in older adults.

    Science.gov (United States)

    Huber, Jessica E

    2008-12-31

    Age-related reductions in pulmonary elastic recoil and respiratory muscle strength can affect how older adults generate subglottal pressure required for speech production. The present study examined age-related changes in speech breathing by manipulating utterance length and loudness during a connected speech task (monologue). Twenty-three older adults and twenty-eight young adults produced a monologue at comfortable loudness and pitch and with multi-talker babble noise playing in the room to elicit louder speech. Dependent variables included sound pressure level, speech rate, and lung volume initiation, termination, and excursion. Older adults produced shorter utterances than young adults overall. Age-related effects were larger for longer utterances. Older adults demonstrated very different lung volume adjustments for loud speech than young adults. These results suggest that older adults have a more difficult time when the speech system is being taxed by both utterance length and loudness. The data were consistent with the hypothesis that both young and older adults use utterance length in premotor speech planning processes.

  4. Speech Enhancement Algorithm Using Sub band Two Step Decision Directed Approach with Adaptive Weighting factor and Noise Masking Threshold

    Directory of Open Access Journals (Sweden)

    Deepa Dhanaskodi

    2011-01-01

    Full Text Available Problem statement: Speech enhancement plays an important role in speech processing systems such as speech recognition, mobile communication, and hearing aids. Approach: In this work, the human perceptual auditory masking effect is incorporated into a single-channel speech enhancement algorithm. The algorithm is based on a criterion by which the audible noise may be masked rather than attenuated, thereby reducing the chance of distorting the speech. The basic decision-directed approach is used for efficient reduction of musical noise; its estimate of the a priori SNR, which is a crucial parameter of the spectral gain, follows the a posteriori SNR with a delay of one frame in speech frames. In this work, a simple adaptive speech enhancement technique is employed on a sub-band basis, using an adaptive sigmoid-type function to determine the weighting factor of the two-step decision-directed (TSDD) algorithm. In turn, the spectral estimate is used to obtain a perceptual gain factor. Results: Objective and subjective measures such as SNR, MSE, and IS distance were obtained, which show the ability of the proposed method to enhance noisy speech efficiently. Conclusion/Recommendations: Performance assessment shows that the proposal achieves more significant noise reduction and better spectral estimation of weak speech spectral components from a noisy signal than the conventional speech enhancement algorithm.
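
    The decision-directed estimator the abstract builds on combines the previous frame's clean-speech estimate with the current a posteriori SNR; the sketch below shows that update for a single sub-band, with a sigmoid-shaped weighting factor standing in for the paper's adaptive weighting and a Wiener-type gain standing in for its perceptual gain (the adaptation rule, parameter values, and toy power track are assumptions).

        import numpy as np

        def sigmoid_alpha(xi_prev_db, lo=0.92, hi=0.99, center=0.0, slope=0.5):
            """Assumed sigmoid mapping from the previous a priori SNR (dB) to the weighting factor."""
            return lo + (hi - lo) / (1.0 + np.exp(-slope * (xi_prev_db - center)))

        def decision_directed(noisy_power, noise_power):
            """Frame-by-frame a priori SNR estimation and gain for one sub-band."""
            xi = 1.0                                  # initial a priori SNR
            amp_prev_sq = 0.0                         # previous clean-speech power estimate
            gains = []
            for p_y in noisy_power:
                gamma = p_y / noise_power             # a posteriori SNR
                alpha = sigmoid_alpha(10 * np.log10(max(xi, 1e-6)))
                xi = alpha * amp_prev_sq / noise_power + (1 - alpha) * max(gamma - 1.0, 0.0)
                g = xi / (1.0 + xi)                   # Wiener-type gain from the a priori SNR
                amp_prev_sq = (g**2) * p_y            # clean-speech estimate carried to the next frame
                gains.append(g)
            return np.array(gains)

        # Toy sub-band power track: noise-only frames, then speech-plus-noise, then noise again.
        noisy_power = np.concatenate([np.full(20, 1.0), np.full(20, 8.0), np.full(20, 1.0)])
        print(np.round(decision_directed(noisy_power, noise_power=1.0), 2))

    The one-frame lag of the a priori SNR behind the a posteriori SNR, which the abstract identifies as a source of musical-noise trade-offs, is visible in how gradually the gain rises when the speech frames begin.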

  5. Switched Scalar Quantization with Adaptation Performed on both the Power and the Distribution of Speech Signal

    Directory of Open Access Journals (Sweden)

    L. V. Stoimenov

    2011-11-01

    Full Text Available This paper analyzes models for switched scalar quantization of sources with Laplacian and Gaussian distributions. We analyzed real telephone speech and propose a switched scalar quantization model which, in addition to adaptation to the power of the speech signal, includes adaptation to its distribution (Gaussian or Laplacian), resulting in better voice quality as expressed by the signal-to-quantization-noise ratio (SQNR).
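
    A simplified sketch of the switching idea: each frame is normalized by its estimated power (adaptation to power) and then quantized with one of two scalar quantizers, one tuned on Gaussian training data and one on heavier-tailed Laplacian data, with the choice made by a simple kurtosis test (adaptation to distribution). The quantizer design, the switching statistic, and the toy frames are illustrative assumptions rather than the model proposed in the paper.

        import numpy as np

        rng = np.random.default_rng(3)

        def uniform_quantize(x, step, levels=8):
            """Midrise uniform scalar quantizer with `levels` levels of step `step`."""
            idx = np.clip(np.floor(x / step), -levels // 2, levels // 2 - 1)
            return (idx + 0.5) * step

        def best_step(samples, levels=8):
            """Pick the step size that minimizes MSE on unit-variance training samples."""
            steps = np.linspace(0.2, 1.5, 200)
            errs = [np.mean((samples - uniform_quantize(samples, s, levels))**2) for s in steps]
            return steps[int(np.argmin(errs))]

        step_gauss = best_step(rng.standard_normal(20_000))
        step_lap = best_step(rng.laplace(scale=1 / np.sqrt(2), size=20_000))

        def encode_frame(frame, levels=8):
            """Normalize by frame RMS, switch the codebook by a kurtosis test, quantize."""
            rms = np.sqrt(np.mean(frame**2)) + 1e-12
            x = frame / rms
            kurt = np.mean(x**4) / np.mean(x**2)**2          # Laplacian ~ 6, Gaussian ~ 3
            step = step_lap if kurt > 4.5 else step_gauss
            return uniform_quantize(x, step, levels) * rms

        # Toy frames: a quiet Gaussian-like frame and a louder Laplacian-like frame.
        for frame in [0.3 * rng.standard_normal(160),
                      2.0 * rng.laplace(scale=1 / np.sqrt(2), size=160)]:
            q = encode_frame(frame)
            sqnr = 10 * np.log10(np.mean(frame**2) / np.mean((frame - q)**2))
            print(f"frame SQNR: {sqnr:.1f} dB")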

  6. Acoustic analysis of the unvoiced stop consonants for detecting hypernasal speech

    OpenAIRE

    Castellanos Domínguez, César Germán; Sepúlveda Sepúlveda, Franklin Alexander; Godino Llorente, Juan Ignacio

    2008-01-01

    Speakers with evidence of a defective velopharyngeal mechanism produce speech with inappropriate nasal resonance (hypernasal speech). Voice analysis methods for the detection of hypernasality commonly use vowels and nasalized vowels. However, to obtain a more general assessment of this abnormality it is necessary to analyze stops and fricatives as well. This study describes a method for hypernasality detection that analyzes the unvoiced Spanish stop consonants /k/ and /p/. The importance of ph...
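
    A rough illustration of the kind of acoustic cue one might compute on an unvoiced stop segment (this is not the method of the paper): the ratio of low-frequency to total spectral energy, since inappropriate nasal coupling tends to add low-frequency resonance to segments that should be burst-like. The band edge, the synthetic burst, and the added murmur are assumptions.

        import numpy as np

        def low_band_energy_ratio(segment, fs, cutoff_hz=500.0):
            """Energy below `cutoff_hz` divided by the total energy of the windowed segment."""
            spec = np.abs(np.fft.rfft(segment * np.hanning(len(segment))))**2
            freqs = np.fft.rfftfreq(len(segment), 1 / fs)
            return spec[freqs < cutoff_hz].sum() / spec.sum()

        fs = 16_000
        rng = np.random.default_rng(5)
        burst = rng.standard_normal(400) * np.hanning(400)                  # stand-in for a /k/ or /p/ burst
        nasal_murmur = 0.8 * np.sin(2 * np.pi * 250 * np.arange(400) / fs)  # stand-in for nasal leakage

        print(f"clean burst:       {low_band_energy_ratio(burst, fs):.2f}")
        print(f"burst plus murmur: {low_band_energy_ratio(burst + nasal_murmur, fs):.2f}")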

  8. Analyzing Microarray Data.

    Science.gov (United States)

    Hung, Jui-Hung; Weng, Zhiping

    2017-03-01

    Because there is no widely used software for analyzing RNA-seq data that has a graphical user interface, this protocol provides an example of analyzing microarray data using Babelomics. This analysis entails performing quantile normalization and then detecting differentially expressed genes associated with the transgenesis of a human oncogene c-Myc in mice. Finally, hierarchical clustering is performed on the differentially expressed genes using the Cluster program, and the results are visualized using TreeView.
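
    Quantile normalization, the first step in the protocol, forces every array to share the same empirical distribution; the sketch below implements the standard rank-and-average recipe on a made-up expression matrix (the Babelomics web interface itself is not reproduced here).

        import numpy as np
        import pandas as pd

        def quantile_normalize(df):
            """Quantile-normalize the columns (arrays) of a genes-by-arrays expression matrix."""
            ranks = df.rank(method="first").astype(int) - 1            # 0-based rank of each gene per array
            mean_by_rank = np.sort(df.values, axis=0).mean(axis=1)     # mean expression at each rank
            return df.apply(lambda col: pd.Series(mean_by_rank[ranks[col.name].values], index=df.index))

        # Hypothetical 5-gene x 3-array expression matrix.
        expr = pd.DataFrame(
            {"array1": [5.0, 2.0, 3.0, 4.0, 1.0],
             "array2": [4.0, 1.0, 4.5, 2.0, 3.0],
             "array3": [3.0, 4.0, 6.0, 5.0, 2.0]},
            index=[f"gene{i}" for i in range(1, 6)],
        )
        print(quantile_normalize(expr))

    After normalization every column holds exactly the same set of values, only assigned to different genes, which is what makes downstream differential-expression comparisons across arrays meaningful.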

  9. Emotion Recognition using Speech Features

    CERN Document Server

    Rao, K Sreenivasa

    2013-01-01

    “Emotion Recognition Using Speech Features” covers emotion-specific features present in speech and discussion of suitable models for capturing emotion-specific information for distinguishing different emotions.  The content of this book is important for designing and developing  natural and sophisticated speech systems. Drs. Rao and Koolagudi lead a discussion of how emotion-specific information is embedded in speech and how to acquire emotion-specific knowledge using appropriate statistical models. Additionally, the authors provide information about using evidence derived from various features and models. The acquired emotion-specific knowledge is useful for synthesizing emotions. Discussion includes global and local prosodic features at syllable, word and phrase levels, helpful for capturing emotion-discriminative information; use of complementary evidences obtained from excitation sources, vocal tract systems and prosodic features in order to enhance the emotion recognition performance;  and pro...
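
    A small sketch of the frame-level prosodic and excitation-related measurements such systems typically feed into statistical models: short-time energy, zero-crossing rate, and a crude autocorrelation pitch estimate. The frame sizes, the lag range, and the synthetic signal are illustrative assumptions, not the book's feature set.

        import numpy as np

        def frame_features(x, fs, frame_len=0.025, hop=0.010):
            """Per-frame (energy, zero-crossing rate, pitch estimate in Hz)."""
            n, h = int(frame_len * fs), int(hop * fs)
            feats = []
            for start in range(0, len(x) - n, h):
                f = x[start:start + n]
                energy = float(np.mean(f**2))
                zcr = float(np.mean(np.abs(np.diff(np.sign(f)))) / 2)
                # Crude pitch: autocorrelation peak within a plausible lag range (80-400 Hz).
                ac = np.correlate(f, f, mode="full")[n - 1:]
                lo, hi = int(fs / 400), int(fs / 80)
                pitch = fs / (lo + int(np.argmax(ac[lo:hi]))) if energy > 1e-6 else 0.0
                feats.append((energy, zcr, pitch))
            return np.array(feats)

        fs = 16_000
        t = np.arange(int(0.5 * fs)) / fs
        signal = 0.5 * np.sign(np.sin(2 * np.pi * 150 * t))    # stand-in for a voiced utterance
        print(frame_features(signal, fs)[:3])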

  10. Why Go to Speech Therapy?

    Science.gov (United States)

    ... teens who stutter make positive changes in their communication skills. As you work with your speech pathologist ...

  11. English Speeches Of Three Minutes

    Institute of Scientific and Technical Information of China (English)

    凌和军; 丁小琴

    2002-01-01

    English speeches, which were made at the beginning of this term, are popular among us English learners, as they are very useful for improving our spoken English. So each of us is very interested in joining the activity.

  12. Speech and Language Developmental Milestones

    Science.gov (United States)

    ... also use special spoken tests to evaluate your child. A hearing test is often included in the evaluation because a hearing problem can affect speech and language development. Depending on the result of the evaluation, the ...

  13. Writing, Inner Speech, and Meditation.

    Science.gov (United States)

    Moffett, James

    1982-01-01

    Examines the interrelationships among meditation, inner speech (stream of consciousness), and writing. Considers the possibilities and implications of using the techniques of meditation in educational settings, especially in the writing classroom. (RL)

  14. Delayed Speech or Language Development

    Science.gov (United States)

    ... What Parents Can Do ... Your son ... for communication exchange and participation? What kind of feedback does the child get? When speech, language, hearing, ...

  15. Pragmatic Difficulties in the Production of the Speech Act of Apology by Iraqi EFL Learners

    Directory of Open Access Journals (Sweden)

    Mehdi Falih Al-Ghazalli

    2014-12-01

    Full Text Available The purpose of this paper is to investigate the pragmatic difficulties encountered by Iraqi EFL university students in producing the speech act of apology. Although the act of apology is easy for native speakers of English to recognize or use, non-native speakers generally encounter difficulties in discriminating one speech act from another. The problem can be attributed to two factors: pragma-linguistic and socio-pragmatic knowledge. The aims of this study are (1) to evaluate the socio-pragmatic level of interpreting apologies as understood and used by Iraqi EFL university learners, (2) to find out the level of difficulty they experience in producing apologies, and (3) to detect the reasons behind such misinterpretations and misuses. It is hypothesized that the socio-pragmatic interpretation of apology tends to play a crucial role in comprehending what is intended by the speaker, and that cultural gaps can be the main reason behind EFL learners' inaccurate production of the act of apology. To verify these hypotheses, a test was constructed and administered to a sample of 70 fourth-year Iraqi EFL university learners (morning classes). The subjects' responses were collected and linguistically analyzed in the light of an eclectic model based on Deutschmann (2003) and Lazare (2004). It is concluded that the misinterpretation or difficulty Iraqi EFL students faced is mainly attributable to their lack of socio-pragmatic knowledge, and that interference from the learners' first-language culture has led to non-native productions of the speech act of apology.

  16. Play or science?

    DEFF Research Database (Denmark)

    Lieberoth, Andreas; Pedersen, Mads Kock; Sherson, Jacob

    2015-01-01

    Crowdscience games may hold unique potentials as learning opportunities compared to games made for fun or education. They are part of an actual science problem-solving process: by playing, players help scientists, and thereby interact with real, continuous research processes. This mixes the two...... worlds of play and science in new ways. During usability testing we discovered that users of the crowdscience game Quantum Dreams tended to answer questions in game terms, even when directed explicitly to give science explanations. We then examined these competing frames of understanding through a mixed...... correlational and grounded theory analysis. This essay presents the core ideas of crowdscience games as learning opportunities, and reports how a group of players used “game”, “science” and “conceptual” frames to interpret their experience. Our results suggest that oscillating between the frames instead...

  17. Understanding Games as Played

    DEFF Research Database (Denmark)

    Leino, Olli Tapio

    2009-01-01

    Researchers interested in player’s experience would assumedly, across disciplines, agree that the goal behind enquiries into player’s experience is to understand how games’ features end up affecting the player’s experience. Much of the contemporary interdisciplinary research into player......’s experience leans toward the empirical-scientific, in the forms of (neuro)psychology, sociology and cognitive science, to name a few. In such approaches, for example, demonstrating a correlation between physiological symptoms and an in-game event may amount to ‘understanding’. However, the experience of computer...... game play is a viable topic also for computer game studies within the general tradition of humanities. In such a context, the idea of ‘understanding an experience’ invites an approach focusing on the experienced significance of events and objects within computer game play. This focus, in turn, suggests...

  18. Ravens at Play

    Directory of Open Access Journals (Sweden)

    Deborah Bird Rose

    2011-09-01

    Full Text Available ‘We were driving through Death Valley, an American-Australian and two Aussies, taking the scenic route from Las Vegas to Santa Cruz.’ This multi-voiced account of multispecies encounters along a highway takes up the challenge of playful and humorous writing that is as well deeply serious and theoretically provocative. Our travels brought us into what Donna Haraway calls the contact zone: a region of recognition and response. The contact zone is a place of significant questions: ‘Who are you, and so who are we? Here we are, and so what are we to become?’ Events were everything in this ecology of play, in which the movements of all the actors involved the material field in its entirety. We were brought into dances of approach and withdrawal, dances emerging directly, to paraphrase Brian Massumi, from the dynamic relation between a myriad of charged particles.

  19. Public Computation & Boundary Play

    CERN Document Server

    Sengupta, Pratim

    2016-01-01

    In this paper, we introduce 'public computation' as a genre of learning environments that can be used to radically broaden public participation in authentic, computation-enabled STEM disciplinary practices. Our paradigmatic approach utilizes open source software designed for professional scientists, engineers and digital artists, and situates it in an undiluted form, alongside live and archived expert support, in a public space. We present a case study of DigiPlay, a prototypical public computation space we designed at the University of Calgary, where users can interact directly with scientific simulations as well as the underlying open source code using an array of massive multi-touch screens. We argue that in such a space, public interactions with the code can be thought of as boundary work and play, through which public participation becomes a legitimate scientific act, as the public engages in scientific creation through truly open-ended explorations with the code.

  20. Play. Learn. Innovate

    DEFF Research Database (Denmark)

    Sproedt, Henrik

    evidence that play and games could be interesting perspectives to take in order to understand complex social interaction. I come to the conclusion that – in innovation settings – the social dynamics that affect the process are essentially about transformation of knowledge across boundaries. I propose......„Play. Learn. Innovate. – Grasping the Social Dynamics of Participatory Innovation“ the title of this thesis describes how the complex interplay of unexpected events led to some burning questions and eventually to this thesis, which one could call an innovation*1*. During several years...... study were to better understand the theoretical foundations and practical implications of complex social interaction in organizational innovation settings. As I did not find any existing models or hypotheses that I was interested in testing I set out to discover how I could grasp complex social...