WorldWideScience

Sample records for united nations speech

  1. National features of speech etiquette

    OpenAIRE

    Nacafova S.

    2017-01-01

    The article shows the differences between the speech etiquette of different peoples. The most important thing is to find a common language with a given interlocutor. Knowledge of national etiquette and national character helps one learn the principles of speech of another nation. The article indicates in which cases certain forms of etiquette are considered acceptable. At the same time, the rules of etiquette are emphasized in the conduct of a dialogue in official meetings and, for example, in the ex...

  2. Lexical and sublexical units in speech perception.

    Science.gov (United States)

    Giroux, Ibrahima; Rey, Arnaud

    2009-03-01

    Saffran, Newport, and Aslin (1996a) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. Two sorts of segmentation strategies have been proposed to account for this early word-segmentation ability: bracketing strategies, in which infants are assumed to insert boundaries into continuous speech, and clustering strategies, in which infants are assumed to group certain speech sequences together into units (Swingley, 2005). In the present study, we test the predictions of two computational models instantiating each of these strategies (i.e., Simple Recurrent Networks: Elman, 1990; and Parser: Perruchet & Vinter, 1998) in an experiment where we compare the lexical and sublexical recognition performance of adults after hearing 2 or 10 min of an artificial spoken language. The results are consistent with Parser's predictions and the clustering approach, showing that performance on words is better than performance on part-words only after 10 min. This result suggests that word segmentation abilities are not merely due to stronger associations between sublexical units but to the emergence of stronger lexical representations during the development of speech perception processes. Copyright © 2009, Cognitive Science Society, Inc.
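    Saffran-style statistical segmentation is often illustrated with transitional probabilities between adjacent syllables: a word boundary is posited wherever the probability of the next syllable given the current one dips. The following is a minimal sketch of that boundary-insertion ("bracketing") idea; the syllable stream and threshold are hypothetical, and this does not implement Parser's clustering mechanism.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current), estimated from adjacent-pair counts."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, threshold=0.75):
    """Insert a word boundary wherever P(next | current) dips below threshold."""
    tp = transitional_probabilities(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tp[(a, b)] < threshold:  # low predictability -> likely word edge
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Hypothetical artificial language: the "words" tupiro, golabu, bidaku
# concatenated into a continuous syllable stream with no pauses.
stream = "tu pi ro go la bu tu pi ro bi da ku go la bu bi da ku tu pi ro go la bu".split()
print(sorted(set(segment(stream))))  # ['bidaku', 'golabu', 'tupiro']
```

    Within-word pairs here always have transitional probability 1.0, while pairs spanning a word edge are less predictable, so the dips recover the original vocabulary.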

  3. L’unité intonative dans les textes oralisés // Intonation unit in read speech

    Directory of Open Access Journals (Sweden)

    Lea Tylečková

    2015-12-01

    Full Text Available Prosodic phrasing, i.e. division of speech into intonation units, represents a phenomenon which is central to language comprehension. Incorrect prosodic boundary markings may lead to serious misunderstandings and ambiguous interpretations of utterances. The present paper investigates prosodic competencies of Czech students of French in the domain of prosodic phrasing in French read speech. Two texts of different length are examined through a perceptual method to observe how Czech speakers of French (B1–B2 level of CEFR) divide read speech into prosodic units compared to French native speakers.

  4. A Survey of Speech Education in United States Two-Year Colleges.

    Science.gov (United States)

    Planck, Carolyn Roberts

    The status of speech education in all United States two-year colleges is discussed. Both public and private schools are examined. Two separate studies were conducted, each utilizing the same procedure. The specific aspects with which the research was concerned were: (1) availability of speech courses, (2) departmentalization of speech courses, (3)…

  5. Acoustic assessment of speech privacy curtains in two nursing units

    Science.gov (United States)

    Pope, Diana S.; Miller-Klein, Erik T.

    2016-01-01

    Hospitals have complex soundscapes that create challenges to patient care. Extraneous noise and high reverberation rates impair speech intelligibility, which leads to raised voices. In an unintended spiral, the increasing noise may result in diminished speech privacy, as people speak loudly to be heard over the din. The products available to improve hospital soundscapes include construction materials that absorb sound (acoustic ceiling tiles, carpet, wall insulation) and reduce reverberation rates. Enhanced privacy curtains are now available and offer potential for a relatively simple way to improve speech privacy and speech intelligibility by absorbing sound at the hospital patient's bedside. Acoustic assessments were performed over 2 days on two nursing units with a similar design in the same hospital. One unit was built with 1970s-standard hospital construction and the other was newly refurbished (2013) with sound-absorbing features. In addition, we determined the effect of an enhanced privacy curtain versus standard privacy curtains using acoustic measures of speech privacy and speech intelligibility indexes. Privacy curtains provided auditory protection for the patients. In general, that protection was increased by the use of enhanced privacy curtains. On average, the enhanced curtain improved sound absorption from 20% to 30%; however, there was considerable variability, depending on the configuration of the rooms tested. Enhanced privacy curtains provide measurable improvement to the acoustics of patient rooms but cannot overcome larger acoustic design issues. To shorten reverberation time, additional sound absorption and more compact, fragmented nursing unit floor plates should be considered. PMID:26780959

  8. Fishing for meaningful units in connected speech

    DEFF Research Database (Denmark)

    Henrichsen, Peter Juel; Christiansen, Thomas Ulrich

    2009-01-01

    In many branches of spoken language analysis including ASR, the set of smallest meaningful units of speech is taken to coincide with the set of phones or phonemes. However, fishing for phones is difficult, error-prone, and computationally expensive. We present an experiment, based on machine...

  9. Eleanor Roosevelt, the United Nations and the Role of Radio Communications

    NARCIS (Netherlands)

    Luscombe, Anya

    Eleanor Roosevelt communicated with the public through a variety of media, both before, during and following her time in the White House. In 1946 she became part of the US delegation to the newly formed United Nations and she used newspaper columns, speeches and radio broadcasts to converse with citizens about the importance of the UN.

  10. Free Speech and GWOT: Back to the Future?

    National Research Council Canada - National Science Library

    Hargis, Michael J

    2008-01-01

    "... abridging the freedom of speech ..." Although the language of that provision may seem clear, the history of the United States is replete with examples of restrictions upon free speech, particularly during times of national crisis...

  11. The National Outcomes Measurement System for Pediatric Speech-Language Pathology

    Science.gov (United States)

    Mullen, Robert; Schooling, Tracy

    2010-01-01

    Purpose: The American Speech-Language-Hearing Association's (ASHA's) National Outcomes Measurement System (NOMS) was developed in the late 1990s. The primary purpose was to serve as a source of data for speech-language pathologists (SLPs) who found themselves called on to provide empirical evidence of the functional outcomes associated with their…

  12. Automatic transcription of continuous speech into syllable-like units ...

    Indian Academy of Sciences (India)

    … style HMM models are generated for each of the clusters during training. During testing … manual segmentation at syllable-like units followed by isolated-style recognition of continuous speech … obtaining demisyllabic reference patterns.

  13. Intervention Techniques Used With Autism Spectrum Disorder by Speech-Language Pathologists in the United States and Taiwan: A Descriptive Analysis of Practice in Clinical Settings.

    Science.gov (United States)

    Hsieh, Ming-Yeh; Lynch, Georgina; Madison, Charles

    2018-04-27

    This study examined intervention techniques used with children with autism spectrum disorder (ASD) by speech-language pathologists (SLPs) in the United States and Taiwan working in clinic/hospital settings. The research questions addressed intervention techniques used with children with ASD, intervention techniques used with different age groups (under and over 8 years old), and training received before using the intervention techniques. The survey was distributed through the American Speech-Language-Hearing Association to selected SLPs across the United States. In Taiwan, the survey (Chinese version) was distributed through the Taiwan Speech-Language Pathologist Union to certified SLPs. Results revealed that SLPs in the United States and Taiwan used 4 common intervention techniques: Social Skill Training, Augmentative and Alternative Communication, Picture Exchange Communication System, and Social Stories. Taiwanese SLPs reported SLP preparation program training across these common intervention strategies. In the United States, SLPs reported training via SLP preparation programs, peer therapists, and self-teaching. Most SLPs reported using established or emerging evidence-based practices as defined by the National Professional Development Center (2014) and the National Standards Report (2015). Future research should address comparison of SLP preparation programs to examine the impact of preprofessional training on use of evidence-based practices to treat ASD.

  14. Speech Language Assessments in Te Reo in a Primary School Maori Immersion Unit

    Science.gov (United States)

    Naidoo, Kershni

    2012-01-01

    This research originated from the need for a speech and language therapy assessment in te reo Maori for a particular child who attended a Maori immersion unit. A Speech and Language Therapy te reo assessment had already been developed but it needed to be revised and normative data collected. Discussions and assessments were carried out in a…

  15. Eisenhower's "Atoms for Peace" Speech: A Case Study in the Strategic Use of Language.

    Science.gov (United States)

    Medhurst, Martin J.

    1987-01-01

    Examines speech delivered by President Eisenhower to General Assembly of the United Nations in December 1953. Demonstrates how a complex rhetorical situation resulted in the crafting and exploitation of a public policy address. Speech bolstered international image of the United States as peacemaker, warned the Soviets against a preemptive nuclear…

  16. Copenhagen failure : a rhetorical treatise of how speeches unite and divide mankind

    OpenAIRE

    Kortetmäki, Teea

    2010-01-01

    The purpose of this treatise is to analyse five of the Copenhagen Climate Convention's main speeches to see how they supported or weakened the agreement possibilities at the convention. Particular focus is on the elements that divide or unite negotiators and on whether the summit's failing outcome was already built into the pre-planned speeches held at the main podium. Theoretically, the study builds on Kenneth Burke's identification thesis and Elizabeth L. Malone's climate change debate an...

  17. Closing speech at the First National Forum on Energy

    International Nuclear Information System (INIS)

    Castro Ruz, F.

    1984-01-01

    This speech sets out the purposes and importance of the First National Forum on Energy. It includes an analysis of the measures adopted to conserve energy and the prospects for energy development in Cuba, and lays the groundwork for nuclear energy development. It discusses, among other aspects, the growth of energy consumption and the development of fuel production, and analyzes the international situation, especially that of the developing countries. Aspects related to the energy resources of the USSR and its nuclear energy development are mentioned. The speech also notes the cooperation received from and the economic exchange carried out with the socialist countries. Other economic aspects related to Cuba are also analyzed. (B.R.D.)

  18. Eleanor Roosevelt, the United Nations and the Role of Radio Communications

    OpenAIRE

    Luscombe, Anya

    2016-01-01

    Eleanor Roosevelt communicated with the public through a variety of media, both before, during and following her time in the White House. In 1946 she became part of the US delegation to the newly formed United Nations and she used newspaper columns, speeches and radio broadcasts to converse with citizens about the importance of the UN. This paper focuses on some of her radio performances of the early 1950s, both in the USA and in Europe. Despite increasing competition from television in the 1...

  19. Availability of Pre-Admission Information to Prospective Graduate Students in Speech-Language Pathology

    Science.gov (United States)

    Tekieli Koay, Mary Ellen; Lass, Norman J.; Parrill, Madaline; Naeser, Danielle; Babin, Kelly; Bayer, Olivia; Cook, Megan; Elmore, Madeline; Frye, Rachel; Kerwood, Samantha

    2016-01-01

    An extensive Internet search was conducted to obtain pre-admission information and acceptance statistics from 260 graduate programmes in speech-language pathology accredited by the American Speech-Language-Hearing Association (ASHA) in the United States. ASHA is the national professional, scientific and credentialing association for members and…

  20. Eczema Is Associated with Childhood Speech Disorder: A Retrospective Analysis from the National Survey of Children's Health and the National Health Interview Survey.

    Science.gov (United States)

    Strom, Mark A; Silverberg, Jonathan I

    2016-01-01

    To determine whether eczema is associated with an increased risk of a speech disorder, we analyzed data on 354,416 children and adolescents from 19 US population-based cohorts: the 2003-2004 and 2007-2008 National Survey of Children's Health and the 1997-2013 National Health Interview Survey, each a prospective, questionnaire-based cohort. In multivariate survey logistic regression models adjusting for sociodemographics and comorbid allergic disease, eczema was significantly associated with higher odds of speech disorder in 12 of 19 cohorts (P < .05). The prevalence of speech disorder in children with eczema was 4.7% (95% CI 4.5%-5.0%) compared with 2.2% (95% CI 2.2%-2.3%) in children without eczema. In pooled multivariate analysis, eczema was associated with increased odds of speech disorder (aOR [95% CI] 1.81 [1.57-2.05], P < .05). History of eczema was associated with moderate (2.35 [1.34-4.10], P = .003) and severe (2.28 [1.11-4.72], P = .03) speech disorder. Finally, significant interactions were found, such that children with both eczema and attention deficit disorder (with or without hyperactivity) or sleep disturbance had a far greater risk of speech disorders than with either condition alone. Pediatric eczema may be associated with increased risk of speech disorder. Further prospective studies are needed to characterize the exact nature of this association. Copyright © 2016 Elsevier Inc. All rights reserved.
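    For readers outside epidemiology, the unadjusted odds ratio implied by the two quoted prevalences can be recomputed by hand; the paper's aOR of 1.81 differs because it additionally adjusts for sociodemographics and comorbid allergic disease. A quick sketch using only the figures reported above:

```python
# Prevalence of speech disorder reported in the abstract:
# 4.7% among children with eczema, 2.2% among children without.
p_eczema, p_no_eczema = 0.047, 0.022

odds_eczema = p_eczema / (1 - p_eczema)           # ~0.0493
odds_no_eczema = p_no_eczema / (1 - p_no_eczema)  # ~0.0225
odds_ratio = odds_eczema / odds_no_eczema

print(round(odds_ratio, 2))  # 2.19 (unadjusted, vs the reported adjusted aOR of 1.81)
```

    The gap between 2.19 and 1.81 shows how much of the raw association the covariate adjustment absorbs.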

  1. Barack Obama’s pauses and gestures in humorous speeches

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2017-01-01

    The main aim of this paper is to investigate speech pauses and gestures as means to engage the audience and present a humorous message in an effective way. The data consist of two speeches by US President Barack Obama at the 2011 and 2016 Annual White House Correspondents' Association Dinner… He produced significantly more hand gestures in 2016 than in 2011. An analysis of the hand gestures produced by Barack Obama in two political speeches held at the United Nations in 2011 and 2016 confirms that the president produced significantly fewer communicative co-speech hand gestures during those speeches… and they emphasise the speech segment which they follow or precede. We also found a highly significant correlation between Obama's speech pauses and audience response. Obama produces numerous head movements, facial expressions and hand gestures, and their functions are related to both discourse content and structure…

  2. United Kingdom national paediatric bilateral cochlear implant audit: preliminary results.

    Science.gov (United States)

    Cullington, Helen; Bele, Devyanee; Brinton, Julie; Lutman, Mark

    2013-11-01

    Prior to 2009, United Kingdom (UK) public funding was mainly only available for children to receive unilateral cochlear implants. In 2009, the National Institute for Health and Care Excellence published guidance for cochlear implantation following their review. According to these guidelines, all suitable children are eligible to have simultaneous bilateral cochlear implants or a sequential bilateral cochlear implant if they had received the first before the guidelines were published. Fifteen UK cochlear implant centres formed a consortium to carry out a multi-centre audit. The audit involves collecting data from simultaneously and sequentially implanted children at four intervals: before bilateral cochlear implants or before the sequential implant, 1, 2, and 3 years after bilateral implants. The measures include localization, speech recognition in quiet and background noise, speech production, listening, vocabulary, parental perception, quality of life, and surgical data including complications. The audit has now passed the 2-year point, and data have been received on 850 children. This article provides a first view of some data received up until March 2012.

  3. ACTION OF UNIFORM SEARCH ALGORITHM WHEN SELECTING LANGUAGE UNITS IN THE PROCESS OF SPEECH

    Directory of Open Access Journals (Sweden)

    Ирина Михайловна Некипелова

    2013-05-01

    Full Text Available The article investigates the operation of a uniform search algorithm when a person selects language units to produce speech. The process is connected with the phenomenon of speech optimization, which shortens the time needed to formulate what one wants to say and achieves maximum precision in expressing thoughts. The uniform search algorithm works at both the conscious and subconscious levels and favours the automatic production and perception of speech. The realization of a person's cognitive potential in communication sets in motion a complex mechanism of self-organization and self-regulation of language. In turn, this results in the optimization of the language system, serving not only the speaker's self-actualization but also communication in society. A problem-oriented search method is used to investigate the optimization mechanisms characteristic of speech production and language stabilization. DOI: http://dx.doi.org/10.12731/2218-7405-2013-4-50

  4. [United Nations world population prize to Dr. Halfdan Mahler. Acceptance speech].

    Science.gov (United States)

    Mahler, H

    1995-06-01

    The professional achievements of Halfdan Mahler, for which he was awarded the 1995 UN World Population Prize, are summarized, and Dr. Mahler's acceptance speech is presented. Dr. Mahler worked for reproductive health and sustainable development during his six years as secretary general of the IPPF. Under his leadership, the IPPF established world standards for family planning and reproductive health. Dr. Mahler also guided creation and implementation of the long-term IPPF strategic plan, Vision 2000. During his tenure as director general of the World Health Organization from 1973 to 1988, he established the special program of education, development, and training for research in human reproduction. Dr. Mahler's acceptance speech sketched a world of the future in which women control their reproductive lives and enjoy equality with men in work and at home, where adolescents understand and control their sexuality, where all children are desired and cared for, and where hard work brings success even in the poorest population sectors. The challenges of achieving this vision are enormous. The world's population will have doubled to 10 billion, and tensions and inequities will persist. But if the vision is not fulfilled, the present population will triple to 15 billion and competition for every kind of resource will be intolerable. In order to succeed, the rights to free and informed reproductive decision making must be guaranteed for every couple. Harmful practices that violate the right to autonomous reproductive decision making, such as early marriage or female genital mutilation, must be eliminated. Governments must commit themselves to educating and providing resources to women so that they can exercise their rights. Family planning services must be extended to the poor and marginal population sectors that still are denied access, and to adolescents who are at risk of unwanted pregnancy and disease.

  5. 3 CFR - Waiver of Reimbursement Under the United Nations Participation Act to Support the United Nations...

    Science.gov (United States)

    2010-01-01

    ... Participation Act to Support the United Nations/African Union Mission in Darfur Presidential Documents Other... the United Nations Participation Act to Support the United Nations/African Union Mission in Darfur... the United Nations/African Union Mission in Darfur to support the airlift of equipment for...

  6. Nations United: The United Nations, the United States, and the Global Campaign Against Terrorism. A Curriculum Unit & Video for Secondary Schools.

    Science.gov (United States)

    Houlihan, Christina; McLeod, Shannon

    This curriculum unit and 1-hour videotape are designed to help students understand the purpose and functions of the United Nations (UN) and explore the relationship between the United Nations and the United States. The UN's role in the global counterterrorism campaign serves as a case study for the unit. The students are asked to develop a basic…

  7. Free Speech Yearbook 1980.

    Science.gov (United States)

    Kane, Peter E., Ed.

    The 11 articles in this collection deal with theoretical and practical freedom of speech issues. The topics covered are (1) the United States Supreme Court and communication theory; (2) truth, knowledge, and a democratic respect for diversity; (3) denial of freedom of speech in Jock Yablonski's campaign for the presidency of the United Mine…

  8. Free Speech as a Cultural Value in the United States

    Directory of Open Access Journals (Sweden)

    Mauricio J. Alvarez

    2018-02-01

    Full Text Available Political orientation influences support for free speech, with liberals often reporting greater support for free speech than conservatives. We hypothesized that this effect should be moderated by cultural context: individualist cultures value individual self-expression and self-determination, and collectivist cultures value group harmony and conformity. These different foci should differently influence liberals' and conservatives' support for free speech within these cultures. Two studies evaluated the joint influence of political orientation and cultural context on support for free speech. Study 1, using a multilevel analysis of data from 37 U.S. states (n = 1,001), showed that conservatives report stronger support for free speech in collectivist states, whereas there were no differences between conservatives and liberals in support for free speech in individualist states. Study 2 (n = 90) confirmed this pattern by priming independent and interdependent self-construals in liberals and conservatives. Results demonstrate the importance of cultural context for free speech. Findings suggest that in the U.S., support for free speech might be embraced for different reasons: conservatives' support for free speech appears to be motivated by a focus on collectively held values favoring free speech, while liberals' support for free speech might be motivated by a focus on individualist self-expression.

  9. Alternative Speech Communication System for Persons with Severe Speech Disorders

    Science.gov (United States)

    Selouani, Sid-Ahmed; Sidi Yakoub, Mohammed; O'Shaughnessy, Douglas

    2009-12-01

    Assistive speech-enabled systems are proposed to help both French- and English-speaking persons with various speech disorders. The proposed assistive systems use automatic speech recognition (ASR) and speech synthesis in order to enhance the quality of communication. These systems aim at improving the intelligibility of pathologic speech, making it as natural as possible and close to the original voice of the speaker. The resynthesized utterances use new basic units, a new concatenating algorithm and a grafting technique to correct the poorly pronounced phonemes. The ASR responses are uttered by the new speech synthesis system in order to convey an intelligible message to listeners. Experiments involving four American speakers with severe dysarthria and two Acadian French speakers with sound substitution disorders (SSDs) are carried out to demonstrate the efficiency of the proposed methods. Improvements in the Perceptual Evaluation of Speech Quality (PESQ) value of 5% and of more than 20% are achieved by the speech synthesis systems that deal with SSD and dysarthria, respectively.

  10. A qualitative analysis of hate speech reported to the Romanian National Council for Combating Discrimination (2003‑2015)

    OpenAIRE

    Adriana Iordache

    2015-01-01

    The article analyzes the specificities of Romanian hate speech over a period of twelve years through a qualitative analysis of 384 Decisions of the National Council for Combating Discrimination. The study employs a coding methodology which allows one to separate decisions according to the group that was the victim of hate speech. The article finds that stereotypes employed are similar to those encountered in the international literature. The main target of hate speech is the Roma, who are ...

  11. Free Speech Yearbook 1978.

    Science.gov (United States)

    Phifer, Gregg, Ed.

    The 17 articles in this collection deal with theoretical and practical freedom of speech issues. The topics include: freedom of speech in Marquette Park, Illinois; Nazis in Skokie, Illinois; freedom of expression in the Confederate States of America; Robert M. LaFollette's arguments for free speech and the rights of Congress; the United States…

  12. Exploring the role of brain oscillations in speech perception in noise: Intelligibility of isochronously retimed speech

    Directory of Open Access Journals (Sweden)

    Vincent Aubanel

    2016-08-01

    Full Text Available A growing body of evidence shows that brain oscillations track speech. This mechanism is thought to maximise processing efficiency by allocating resources to important speech information, effectively parsing speech into units of appropriate granularity for further decoding. However, some aspects of this mechanism remain unclear. First, while periodicity is an intrinsic property of this physiological mechanism, speech is only quasi-periodic, so it is not clear whether periodicity would present an advantage in processing. Second, it is still a matter of debate which aspect of speech triggers or maintains cortical entrainment, from bottom-up cues such as fluctuations of the amplitude envelope of speech to higher level linguistic cues such as syntactic structure. We present data from a behavioural experiment assessing the effect of isochronous retiming of speech on speech perception in noise. Two types of anchor points were defined for retiming speech, namely syllable onsets and amplitude envelope peaks. For each anchor point type, retiming was implemented at two hierarchical levels, a slow time scale around 2.5 Hz and a fast time scale around 4 Hz. Results show that while any temporal distortion resulted in reduced speech intelligibility, isochronous speech anchored to P-centers (approximated by stressed syllable vowel onsets) was significantly more intelligible than a matched anisochronous retiming, suggesting a facilitative role of periodicity defined on linguistically motivated units in processing speech in noise.
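    The isochronous retiming described above amounts to moving each anchor point (syllable onset or envelope peak) onto a uniform time grid spanning the same interval. A minimal sketch of that mapping, with hypothetical anchor times; the actual study would also have to resynthesize the speech signal between anchors:

```python
def isochronous_anchors(onsets):
    """Map anchor times (seconds) to equally spaced times with the same
    start, end, and count -- the targets of isochronous retiming."""
    n = len(onsets)
    step = (onsets[-1] - onsets[0]) / (n - 1)
    return [round(onsets[0] + i * step, 6) for i in range(n)]

# Hypothetical syllable onsets, irregular but near the slow ~2.5 Hz rate:
onsets = [0.00, 0.31, 0.52, 0.94, 1.20]
print(isochronous_anchors(onsets))  # [0.0, 0.3, 0.6, 0.9, 1.2]
```

    Anchoring to different event types (syllable onsets vs envelope peaks) changes only which measured times feed this grid, not the mapping itself.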

  13. The Role of Music in Speech Intelligibility of Learners with Post Lingual Hearing Impairment in Selected Units in Lusaka District

    Science.gov (United States)

    Katongo, Emily Mwamba; Ndhlovu, Daniel

    2015-01-01

    This study sought to establish the role of music in speech intelligibility of learners with Post Lingual Hearing Impairment (PLHI) and strategies teachers used to enhance speech intelligibility in learners with PLHI in selected special units for the deaf in Lusaka district. The study used a descriptive research design. Qualitative and quantitative…

  14. Who Receives Speech/Language Services by 5 Years of Age in the United States?

    Science.gov (United States)

    Hammer, Carol Scheffner; Farkas, George; Hillemeier, Marianne M.; Maczuga, Steve; Cook, Michael; Morano, Stephanie

    2016-01-01

    Purpose We sought to identify factors predictive of or associated with receipt of speech/language services during early childhood. We did so by analyzing data from the Early Childhood Longitudinal Study–Birth Cohort (ECLS-B; Andreassen & Fletcher, 2005), a nationally representative data set maintained by the U.S. Department of Education. We addressed two research questions of particular importance to speech-language pathology practice and policy. First, do early vocabulary delays increase children's likelihood of receiving speech/language services? Second, are minority children systematically less likely to receive these services than otherwise similar White children? Method Multivariate logistic regression analyses were performed for a population-based sample of 9,600 children and families participating in the ECLS-B. Results Expressive vocabulary delays by 24 months of age were strongly associated with and predictive of children's receipt of speech/language services at 24, 48, and 60 months of age (adjusted odds ratio range = 4.32–16.60). Black children were less likely to receive speech/language services than otherwise similar White children at 24, 48, and 60 months of age (adjusted odds ratio range = 0.42–0.55). Lower socioeconomic status children and those whose parental primary language was other than English were also less likely to receive services. Being born with very low birth weight also significantly increased children's receipt of services at 24, 48, and 60 months of age. Conclusion Expressive vocabulary delays at 24 months of age increase children’s risk for later speech/language services. Increased use of culturally and linguistically sensitive practices may help racial/ethnic minority children access needed services. PMID:26579989
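The adjusted odds ratios reported in this abstract are exponentiated coefficients from multivariate logistic regression. As a simpler illustration of what an odds ratio measures, an unadjusted OR can be computed directly from a 2 × 2 table; the counts below are hypothetical, not the study's data:

```python
def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    Adjusted ORs (as in the study) would instead be exp(beta) for each
    predictor in a fitted logistic regression with covariates."""
    return (a / b) / (c / d)

# Hypothetical counts: 20 of 100 children with an early vocabulary delay
# received services, versus 5 of 100 children without a delay.
or_delay = odds_ratio(20, 80, 5, 95)
```

An OR above 1 (here 4.75) indicates the exposure is associated with higher odds of the outcome, in the same direction as the 4.32–16.60 range reported above.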

  15. Freedom of Speech Newsletter, September, 1975.

    Science.gov (United States)

    Allen, Winfred G., Jr., Ed.

    The Freedom of Speech Newsletter is the communication medium for the Freedom of Speech Interest Group of the Western Speech Communication Association. The newsletter contains such features as a statement of concern by the National Ad Hoc Committee Against Censorship; Reticence and Free Speech, an article by James F. Vickrey discussing the subtle…

  16. New Nordic Exceptionalism: Jeuno JE Kim and Ewa Einhorn's The United Nations of Norden and other realist utopias

    Directory of Open Access Journals (Sweden)

    Mathias Danbolt

    2016-11-01

    Full Text Available At the 2009 Nordic Culture Forum summit in Berlin that centered on the profiling and branding of the Nordic region in a globalized world, one presenter stood out from the crowd. The lobbyist Annika Sigurdardottir delivered a speech that called for the establishment of “The United Nations of Norden”: A Nordic union that would gather the nations and restore Norden's role as the “moral superpower of the world.” Sigurdardottir's presentation generated such a heated debate that the organizers had to intervene and reveal that the speech was a performance made by the artists Jeuno JE Kim and Ewa Einhorn. This article takes Kim and Einhorn's intervention as a starting point for a critical discussion of the history and politics of Nordic image-building. The article suggests that the reason Kim and Einhorn's speech passed as a serious proposal was due to its meticulous mimicking of two discursive formations that have been central to the debates on the branding of Nordicity over the last decades: on the one hand, the discourse of “Nordic exceptionalism,” that since the 1960s has been central to the promotion of a Nordic political, socio-economic, and internationalist “third way” model, and, on the other hand, the discourse on the “New Nordic,” that emerged out of the New Nordic Food-movement in the early 2000s, and which has given art and culture a privileged role in the international re-fashioning of the Nordic brand. Through an analysis of Kim and Einhorn's United Nations of Norden (UNN-performance, the article examines the historical development and ideological underpinnings of the image of Nordic unity at play in the discourses of Nordic exceptionalism and the New Nordic. By focusing on how the UNN-project puts pressure on the role of utopian imaginaries in the construction of Nordic self-images, the article describes the emergence of a discursive framework of New Nordic Exceptionalism.

  17. Spotlight on Speech Codes 2011: The State of Free Speech on Our Nation's Campuses

    Science.gov (United States)

    Foundation for Individual Rights in Education (NJ1), 2011

    2011-01-01

    Each year, the Foundation for Individual Rights in Education (FIRE) conducts a rigorous survey of restrictions on speech at America's colleges and universities. The survey and accompanying report explore the extent to which schools are meeting their legal and moral obligations to uphold students' and faculty members' rights to freedom of speech,…

  18. Spotlight on Speech Codes 2009: The State of Free Speech on Our Nation's Campuses

    Science.gov (United States)

    Foundation for Individual Rights in Education (NJ1), 2009

    2009-01-01

    Each year, the Foundation for Individual Rights in Education (FIRE) conducts a wide, detailed survey of restrictions on speech at America's colleges and universities. The survey and resulting report explore the extent to which schools are meeting their obligations to uphold students' and faculty members' rights to freedom of speech, freedom of…

  19. Spotlight on Speech Codes 2010: The State of Free Speech on Our Nation's Campuses

    Science.gov (United States)

    Foundation for Individual Rights in Education (NJ1), 2010

    2010-01-01

    Each year, the Foundation for Individual Rights in Education (FIRE) conducts a rigorous survey of restrictions on speech at America's colleges and universities. The survey and resulting report explore the extent to which schools are meeting their legal and moral obligations to uphold students' and faculty members' rights to freedom of speech,…

  20. United Nations Peacekeeping: Issues for Congress

    National Research Council Canada - National Science Library

    Browne, Marjorie A

    2008-01-01

    A major issue facing the United Nations, the United States, and the 110th Congress is the extent to which the United Nations has the capacity to restore or keep the peace in the changing world environment...

  1. United Nations Peacekeeping: Issues for Congress

    National Research Council Canada - National Science Library

    Browne, Marjorie A

    2007-01-01

    A major issue facing the United Nations, the United States, and the 110th Congress is the extent to which the United Nations has the capacity to restore or keep the peace in the changing world environment...

  2. Building a Prototype Text to Speech for Sanskrit

    Science.gov (United States)

    Mahananda, Baiju; Raju, C. M. S.; Patil, Ramalinga Reddy; Jha, Narayana; Varakhedi, Shrinivasa; Kishore, Prahallad

This paper describes the work done in building a prototype text-to-speech system for Sanskrit. A basic prototype text-to-speech is built using a simplified Sanskrit phone set and employing a unit selection technique, where prerecorded sub-word units are concatenated to synthesize a sentence. We also discuss the issues involved in building a full-fledged text-to-speech system for Sanskrit.
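The concatenative idea behind unit selection can be sketched as a toy example. The phone labels and "waveforms" below are hypothetical stand-ins, not the paper's Sanskrit phone set:

```python
# Toy unit database: phone label -> prerecorded samples (hypothetical).
UNIT_DB = {
    "a": [0.1, 0.2, 0.1],
    "k": [0.0, 0.5],
    "m": [0.2, 0.2, 0.3],
}

def synthesize(phones):
    """Concatenate the stored unit for each phone in the target sequence.
    A real unit-selection system also scores multiple candidate units by
    target and join costs; here each phone has exactly one candidate."""
    wave = []
    for p in phones:
        if p not in UNIT_DB:
            raise KeyError(f"no recorded unit for phone {p!r}")
        wave.extend(UNIT_DB[p])
    return wave
```

Synthesizing `["k", "a", "m"]` simply splices the three stored snippets end to end; smoothing the joins is what distinguishes a practical system from this sketch.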

  3. Speech Alarms Pilot Study

    Science.gov (United States)

    Sandor, Aniko; Moses, Haifa

    2016-01-01

Speech alarms have been used extensively in aviation and are included in the International Building Code (IBC) and the National Fire Protection Association's (NFPA) Life Safety Code. However, they have not been implemented on space vehicles. Previous studies conducted at NASA JSC showed that speech alarms lead to faster identification and higher accuracy. This research evaluated updated speech and tone alerts in a laboratory environment and in the Human Exploration Research Analog (HERA) in a realistic setup.

  4. United Nations and Multilateralism: Appraising USA's Unilateralism ...

    African Journals Online (AJOL)

    DrNneka

    global peace and security, as well as the survival of the United Nations. This is because ... Key Words: United Nations, multilateralism, United States, unilateralism, national interest, UN Charter ..... Lebanon, Iraq, Turkey, Egypt, Jordan, etc.

  5. Free Speech as a Cultural Value in the United States

    OpenAIRE

    Alvarez, Mauricio J.; Kemmelmeier, Markus

    2017-01-01

    Political orientation influences support for free speech, with liberals often reporting greater support for free speech than conservatives. We hypothesized that this effect should be moderated by cultural context: individualist cultures value individual self-expression and self-determination, and collectivist cultures value group harmony and conformity. These different foci should differently influence liberals and conservatives’ support for free speech within these cultures. Two studies eval...

  6. The Enduring Grand Strategy of the United States Represented as a Mirror Strategy

    Science.gov (United States)

    2016-04-04

    Diplomatic Values: Promoting Democracy and Human Rights .............. 18 Information: Promote Freedom of speech at home and abroad...the American people. President Ronald Reagan articulated these ideals in his speech to the United Nations General Assembly: " Freedom is not the sole...every other year.48 Information: Promote Freedom of speech at home and abroad49 President Harry S. Truman in an address to Congress, asserted the

  7. Speech recognition using articulatory and excitation source features

    CERN Document Server

    Rao, K Sreenivasa

    2017-01-01

This book discusses the contribution of articulatory and excitation source information in discriminating sound units. The authors focus on the excitation source component of speech -- and the dynamics of various articulators during speech production -- for enhancement of speech recognition (SR) performance. Speech recognition is analyzed for read, extempore, and conversation modes of speech. Five groups of articulatory features (AFs) are explored for speech recognition, in addition to conventional spectral features. Each chapter provides the motivation for exploring the specific feature for SR task, discusses the methods to extract those features, and finally suggests appropriate models to capture the sound unit specific knowledge from the proposed features. The authors close by discussing various combinations of spectral, articulatory and source features, and the desired models to enhance the performance of SR systems.

  8. READING THE VALUES OF LIBERAL FEMINISM IN HILLARY CLINTON’S SPEECH AT THE DEMOCRATIC NATIONAL CONVENTION 2016

    Directory of Open Access Journals (Sweden)

    Andra Fakhrian

    2017-12-01

Full Text Available In history, Hillary Clinton is the only female candidate for president of the United States; she has made a name for herself in politics and has played significant roles as a politician in America. Hillary demonstrates strong leadership and embodies the image of women in the modern era. Thus, this study examines the representation of women through the lens of Hillary Clinton. It is a descriptive qualitative study supported by primary data from the script of Hillary Clinton’s speech at the Democratic National Convention 2016, along with relevant literature as secondary data. The theory of liberal feminism is used to provide a deep analysis of women’s roles in modern society. Women now seize the same chances to take on roles in modern society: not only in the domestic sphere, but also in traditionally masculine areas.

  9. Prevalence and Parental Risk Factors for Speech Disability Associated with Cleft Palate in Chinese Children—A National Survey

    Science.gov (United States)

    Yun, Chunfeng; Wang, Zhenjie; He, Ping; Guo, Chao; Chen, Gong; Zheng, Xiaoying

    2016-01-01

Although the prevalence of oral clefts in China is among the highest worldwide, little is known about the prevalence of speech disability associated with cleft palate in Chinese children. The data for this study were collected from the Second China National Sample Survey on Disability, and identification of speech disability associated with cleft palate was based on consensus manuals. Logistic regression was used to estimate odds ratios (ORs) and 95% confidence intervals (CIs). A weighted number of 112,070 disabled children affected by cleft palate were identified, yielding a prevalence of 3.45 per 10,000 children (95% CI: 3.19–3.71). A history of speech disability in the mother (OR = 20.266, 95% CI 5.788–70.959) was significantly associated with speech disability associated with cleft palate in the offspring. Our results showed that maternal speech disability, older paternal child-bearing age, and lower levels of parental education were independent risk factors for speech disability associated with cleft palate for children in China. These findings may have important implications for health disparities and prevention. PMID:27886104

  10. Speech and Debate as Civic Education

    Science.gov (United States)

    Hogan, J. Michael; Kurr, Jeffrey A.; Johnson, Jeremy D.; Bergmaier, Michael J.

    2016-01-01

    In light of the U.S. Senate's designation of March 15, 2016 as "National Speech and Debate Education Day" (S. Res. 398, 2016), it only seems fitting that "Communication Education" devote a special section to the role of speech and debate in civic education. Speech and debate have been at the heart of the communication…

  11. 75 FR 65561 - United Nations Day, 2010

    Science.gov (United States)

    2010-10-26

    ... A Proclamation Sixty-five years ago, 51 nations came together in the aftermath of one of history's... all peoples. The United Nations has made great advances since it first developed out of ruin and... of nations. The United Nations' humanitarian assistance lifts up countless lives, supporting nations...

  12. Model-Based Synthesis of Visual Speech Movements from 3D Video

    Directory of Open Access Journals (Sweden)

James D. Edge

    2009-01-01

Full Text Available We describe a method for the synthesis of visual speech movements using a hybrid unit selection/model-based approach. Speech lip movements are captured using a 3D stereo face capture system and split up into phonetic units. A dynamic parameterisation of this data is constructed which maintains the relationship between lip shapes and velocities; within this parameterisation a model of how lips move is built and is used in the animation of visual speech movements from speech audio input. The mapping from audio parameters to lip movements is disambiguated by selecting only the most similar stored phonetic units to the target utterance during synthesis. By combining properties of model-based synthesis (e.g., HMMs, neural nets) with unit selection, we improve the quality of our speech synthesis.

  13. Longitudinal Comparison of the Speech and Language Performance of United States-Born and Internationally Adopted Toddlers with Cleft Lip and Palate: A Pilot Study.

    Science.gov (United States)

    Scherer, Nancy J; Baker, Shauna; Kaiser, Ann; Frey, Jennifer R

    2018-01-01

    Objective This study compares the early speech and language development of children with cleft palate with or without cleft lip who were adopted internationally with children born in the United States. Design Prospective longitudinal description of early speech and language development between 18 and 36 months of age. Participants This study compares four children (age range = 19 to 38 months) with cleft palate with or without cleft lip who were adopted internationally with four children (age range = 19 to 38 months) with cleft palate with or without cleft lip who were born in the United States, matched for age, gender, and cleft type across three time points over 10 to 12 months. Main Outcome Measures Children's speech-language skills were analyzed using standardized tests, parent surveys, language samples, and single-word phonological assessments to determine differences between the groups. Results The mean scores for the children in the internationally adopted group were lower than the group born in the United States at all three time points for expressive language and speech sound production measures. Examination of matched pairs demonstrated observable differences for two of the four pairs. No differences were observed in cognitive performance and receptive language measures. Conclusions The results suggest a cumulative effect of later palate repair and/or a variety of health and environmental factors associated with their early circumstances that persist to age 3 years. Early intervention to address the trajectory of speech and language is warranted. Given the findings from this small pilot study, a larger study of the long-term speech and language development of children who are internationally adopted and have cleft palate with or without cleft lip is recommended.

  14. 38 CFR 8.18 - Total disability-speech.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Total disability-speech... SERVICE LIFE INSURANCE Premium Waivers and Total Disability § 8.18 Total disability—speech. The organic loss of speech shall be deemed to be total disability under National Service Life Insurance. [67 FR...

  15. United States of America National Report

    International Nuclear Information System (INIS)

    1992-01-01

    The United States has produced this report as part of the preparations for the United Nations Conference on Environment and Development (UNCED) to be held in Brazil in June 1992. It summarizes this nation's efforts to protect and enhance the quality of the human environment in concert with its efforts to provide economic well-being during the two decades since the United Nations Conference on the Human Environment was held in Stockholm. The information presented in this report is primarily and deliberately retrospective. It is an attempt to portray the many human, economic and natural resources of the United States, to describe resource use and the principal national laws and programs established to protect these resources, and to analyze key issues on the agenda of UNCED. This analysis is presented in terms of past and present conditions and trends, measures of progress made in responding to the key issues, and a summary of government activities, underway or pending, to address ongoing or newly emerging national environmental and resource management problems

  16. Oversight Institutions Within the United Nations

    DEFF Research Database (Denmark)

    Pontoppidan, Caroline Aggestam

    2015-01-01

    This article will give a description of the role of internal audit and governance functions within the United Nations system. The United Nations has, during the last 10 years, worked to establish effective oversight services. Oversight, governance and hereunder the internal audit function has been...

  17. The United Nations at 40

    International Nuclear Information System (INIS)

    1985-01-01

    The United Nations adopted a resolution expressing the hope that 1985 would mark the beginning of an era of durable and global peace and justice, social and economic development and progress and independence of all peoples. 1985 is the organization's 40th anniversary year - the United Nations Charter entered into force on 24 October 1945 - and the Assembly has chosen 'United Nations for a better world' as the anniversary theme. It also has decided to hold a brief commemorative session culminating on 24 October this year. Member States of the UN also have been urged to organize appropriate observance of the anniversary, with the widest possible participation, and to consider the creation of national committees to evaluate the contribution of the UN system over the past four decades, its continuing relevance in the current international situation, and ways in which it could be strengthened and made more effective. Among other things, the Assembly in its resolution appealed to the international mass media, both public and private, to contribute more effectively to dissemination of information on UN activities. During the commemorative session planned this October, a final document is expected to be adopted for which the Assembly has asked the Preparatory Committee for the Fortieth Anniversary of the United Nations to compose a suitable text. The Preparatory Committee had been established by the Assembly in 1983, and by December 1984, 98 countries had joined in its work, which relates to various activities

  18. The United Nations at 40

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1985-10-01

    The United Nations adopted a resolution expressing the hope that 1985 would mark the beginning of an era of durable and global peace and justice, social and economic development and progress and independence of all peoples. 1985 is the organization's 40th anniversary year - the United Nations Charter entered into force on 24 October 1945 - and the Assembly has chosen 'United Nations for a better world' as the anniversary theme. It also has decided to hold a brief commemorative session culminating on 24 October this year. Member States of the UN also have been urged to organize appropriate observance of the anniversary, with the widest possible participation, and to consider the creation of national committees to evaluate the contribution of the UN system over the past four decades, its continuing relevance in the current international situation, and ways in which it could be strengthened and made more effective. Among other things, the Assembly in its resolution appealed to the international mass media, both public and private, to contribute more effectively to dissemination of information on UN activities. During the commemorative session planned this October, a final document is expected to be adopted for which the Assembly has asked the Preparatory Committee for the Fortieth Anniversary of the United Nations to compose a suitable text. The Preparatory Committee had been established by the Assembly in 1983, and by December 1984, 98 countries had joined in its work, which relates to various activities.

  19. International boundary experiences by the United Nations

    Science.gov (United States)

    Kagawa, A.

    2013-12-01

    Over the last few decades, the United Nations (UN) has been approached by Security Council and Member States on international boundary issues. The United Nations regards the adequate delimitation and demarcation of international boundaries as a very important element for the maintenance of peace and security in fragile post-conflict situations, establishment of friendly relationships and cross-border cooperation between States. This paper will present the main principles and framework the United Nations applies to support the process of international boundary delimitation and demarcation activities. The United Nations is involved in international boundary issues following the principle of impartiality and neutrality and its role as mediator. Since international boundary issues are multi-faceted, a range of expertise is required and the United Nations Secretariat is in a good position to provide diverse expertise within the multiple departments. Expertise in different departments ranging from legal, political, technical, administrative and logistical are mobilised in different ways to provide support to Member States depending on their specific needs. This presentation aims to highlight some of the international boundary projects that the United Nations Cartographic Section has been involved in order to provide the technical support to different boundary requirements as each international boundary issue requires specific focus and attention whether it be in preparation, delimitation, demarcation or management. Increasingly, the United Nations is leveraging geospatial technology to facilitate boundary delimitation and demarcation process between Member States. Through the presentation of the various case studies ranging from Iraq - Kuwait, Israel - Lebanon (Blue Line), Eritrea - Ethiopia, Cyprus (Green Line), Cameroon - Nigeria, Sudan - South Sudan, it will illustrate how geospatial technology is increasingly used to carry out the support. 

  20. The United Nations and Its Critics.

    Science.gov (United States)

    Menon, Bhaskar P.

    1989-01-01

    Provides a brief history of the development of the United Nations. Identifies achievements of the United Nations in the promotion of human rights, the translation of the Universal Declaration of Human Rights into binding international covenants, and the establishment of monitoring mechanisms to ensure the protection of human rights. (KO)

  1. Theoretical Value in Teaching Freedom of Speech.

    Science.gov (United States)

    Carney, John J., Jr.

    The exercise of freedom of speech within our nation has deteriorated. A practical value in teaching free speech is the possibility of restoring a commitment to its principles by educators. What must be taught is why freedom of speech is important, why it has been compromised, and the extent to which it has been compromised. Every technological…

  2. United States National Seismographic Network

    International Nuclear Information System (INIS)

    Buland, R.

    1993-09-01

The concept of a United States National Seismograph Network (USNSN) dates back nearly 30 years. The idea was revived several times over the decades, but never funded. For example, a national network was proposed and discussed at great length in the so-called Bolt Report (U.S. Earthquake Observatories: Recommendations for a New National Network, National Academy Press, Washington, D.C., 1980, 122 pp). From the beginning, a national network was viewed as augmenting and complementing the relatively dense, predominantly short-period vertical coverage of selected areas provided by the Regional Seismograph Networks (RSN's) with a sparse, well-distributed network of three-component, observatory-quality, permanent stations. The opportunity finally to begin developing a national network arose in 1986 with discussions between the US Geological Survey (USGS) and the Nuclear Regulatory Commission (NRC). Under the agreement signed in 1987, the NRC has provided $5 M in new funding for capital equipment (over the period 1987-1992) and the USGS has provided personnel and facilities to develop, deploy, and operate the network. Because the NRC funding was earmarked for the eastern United States, new USNSN station deployments are mostly east of 105 degrees W longitude, while the network in the western United States is mostly made up of cooperating stations (stations meeting USNSN design goals, but deployed and operated by other institutions, which provide a logical extension to the USNSN)

  3. Multiple functional units in the preattentive segmentation of speech in Japanese: evidence from word illusions.

    Science.gov (United States)

    Nakamura, Miyoko; Kolinsky, Régine

    2014-12-01

We explored the functional units of speech segmentation in Japanese using dichotic presentation and a detection task requiring no intentional sublexical analysis. Indeed, illusory perception of a target word might result from preattentive migration of phonemes, morae, or syllables from one ear to the other. In Experiment 1, Japanese listeners detected targets presented in hiragana and/or kanji. Phoneme migrations did occur, suggesting that orthography-independent sublexical constituents play some role in segmentation. However, syllable and especially mora migrations were more numerous. This pattern of results was not observed in French speakers (Experiment 2), suggesting that it reflects native segmentation in Japanese. To control for the intervention of kanji representations (many words are written in kanji, and one kanji often corresponds to one syllable), in Experiment 3, Japanese listeners were presented with target loanwords that can be written only in katakana. Again, phoneme migrations occurred, while the first mora and syllable led to similar rates of illusory percepts. No migration occurred for the second, "special" mora (/J/ or /N/), probably because this constitutes the latter part of a heavy syllable. Overall, these findings suggest that multiple units, such as morae, syllables, and even phonemes, function independently of orthographic knowledge in Japanese preattentive speech segmentation.

  4. 76 FR 66845 - United Nations Day, 2011

    Science.gov (United States)

    2011-10-27

    ... become ever more intertwined, the leadership, staff, and member states of the United Nations continue to... a time of dramatic political transformation, the United Nations can embrace democratic movements and...

  5. A qualitative analysis of hate speech reported to the Romanian National Council for Combating Discrimination (2003‑2015

    Directory of Open Access Journals (Sweden)

    Adriana Iordache

    2015-12-01

    Full Text Available The article analyzes the specificities of Romanian hate speech over a period of twelve years through a qualitative analysis of 384 Decisions of the National Council for Combating Discrimination. The study employs a coding methodology which allows one to separate decisions according to the group that was the victim of hate speech. The article finds that stereotypes employed are similar to those encountered in the international literature. The main target of hate speech is the Roma, who are seen as „dirty“, „uncivilized“ and a threat to Romania’s image abroad. Other stereotypes encountered were that of the „disloyal“ Hungarian and of the sexually promiscuous woman. Moreover, women are seen as unfit for management positions. The article also discusses stereotypes about homosexuals, who are seen as „sick“ and about non-orthodox religions, portrayed as „sectarian“.

  6. 78 FR 49693 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    ...-Speech Services for Individuals with Hearing and Speech Disabilities, Report and Order (Order), document...] Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities...

  7. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    International Nuclear Information System (INIS)

    Holzrichter, J.F.; Ng, L.C.

    1998-01-01

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs

  8. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    Science.gov (United States)

    Holzrichter, John F.; Ng, Lawrence C.

    1998-01-01

The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced speech, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well-defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching.

  9. Quick Statistics about Voice, Speech, and Language

    Science.gov (United States)

Quick Statistics About Voice, Speech, Language. Voice, Speech, Language, and ... no 205. Hyattsville, MD: National Center for Health Statistics. 2015. Hoffman HJ, Li C-M, Losonczy K, ...

  10. United Nations and multilateralism: appraising USA's unilateralism ...

    African Journals Online (AJOL)

    Multilateralism as symbolized by the United Nations Organization, seems to have come under threat today, and nowhere is this more evident than in the United States-United Nations relations particularly in the area of military interventions around the world. The aim of this paper is to appraise the practice of the principle of ...

  11. The United Nations: It's More Than You Think.

    Science.gov (United States)

    Lord, Juliana G.; Gardner, Janet

This guide accompanies a 30-minute color video of the same name. Chapters include: (1) "History of the United Nations" including information on the League of Nations, the birth of the United Nations, and the home of the United Nations; (2) "Structure of the Organization" which discusses each of the sections--General Assembly,…

  12. Ultra low bit-rate speech coding

    CERN Document Server

    Ramasubramanian, V

    2015-01-01

    "Ultra Low Bit-Rate Speech Coding" focuses on the specialized topic of speech coding at very low bit-rates of 1 Kbits/sec and less, particularly at the lower ends of this range, down to 100 bps. The authors set forth the fundamental results and trends that form the basis for such ultra low bit-rates to be viable and provide a comprehensive overview of various techniques and systems in literature to date, with particular attention to their work in the paradigm of unit-selection based segment quantization. The book is for research students, academic faculty and researchers, and industry practitioners in the areas of speech processing and speech coding.

  13. Objective voice and speech analysis of persons with chronic hoarseness by prosodic analysis of speech samples.

    Science.gov (United States)

    Haderlein, Tino; Döllinger, Michael; Matoušek, Václav; Nöth, Elmar

    2016-10-01

    Automatic voice assessment is often performed using sustained vowels. In contrast, speech analysis of read-out texts can be applied to voice and speech assessment. Automatic speech recognition and prosodic analysis were used to find regression formulae between automatic and perceptual assessment of four voice and four speech criteria. The regression was trained with 21 men and 62 women (average age 49.2 years) and tested with another set of 24 men and 49 women (48.3 years), all suffering from chronic hoarseness. They read the text 'Der Nordwind und die Sonne' ('The North Wind and the Sun'). Five voice and speech therapists evaluated the data on 5-point Likert scales. Ten prosodic and recognition accuracy measures (features) were identified which describe all the examined criteria. Inter-rater correlation within the expert group was between r = 0.63 for the criterion 'match of breath and sense units' and r = 0.87 for the overall voice quality. Human-machine correlation was between r = 0.40 for the match of breath and sense units and r = 0.82 for intelligibility. The perceptual ratings of different criteria were highly correlated with each other. Likewise, the feature sets modeling the criteria were very similar. The automatic method is suitable for assessing chronic hoarseness in general and for subgroups of functional and organic dysphonia. In its current version, it is almost as reliable as a randomly picked rater from a group of voice and speech therapists.

  14. The United Nations disarmament yearbook. V. 19: 1994

    International Nuclear Information System (INIS)

    1995-01-01

The United Nations Disarmament Yearbook contains a review of the main developments and negotiations in the field of disarmament taking place each year, together with a brief history of the major issues. The series began with the 1976 edition. The Yearbook makes no claim to present fully the views of Member States of the Organization. For further information on the official positions of States, readers should consult the Official Records of the General Assembly and other sources. General Assembly resolutions and decisions are quoted in The Yearbook in the form in which they were adopted by the General Assembly. For the edited texts of these documents for 1994, readers should consult the Official Records of the General Assembly, Forty-ninth Session, Supplement No. 49 (A/49/49). For an overview of the work of the United Nations in the field of disarmament, one should consult The United Nations and Disarmament: A Short History (UN, 1988). A more detailed account is included in The United Nations and Disarmament: 1945-1970; The United Nations and Disarmament: 1970-1975; and previous volumes of The United Nations Disarmament Yearbook.

  15. Speech-to-Speech Relay Service

    Science.gov (United States)

    Consumer Guide Speech to Speech Relay Service Speech-to-Speech (STS) is one form of Telecommunications Relay Service (TRS). TRS is a service that allows persons with hearing and speech disabilities ...

  16. The Different Functions of Speech in Defamation and Privacy Cases.

    Science.gov (United States)

    Kebbel, Gary

    1984-01-01

Reviews United States Supreme Court decisions since 1900 to show that free speech decisions often rest on the circumstances surrounding the speech. Indicates that freedom of speech wins out over privacy when the speech serves a social or political function, but not when personal happiness is at issue.

  17. Using Start/End Timings of Spectral Transitions Between Phonemes in Concatenative Speech Synthesis

    OpenAIRE

    Toshio Hirai; Seiichi Tenpaku; Kiyohiro Shikano

    2002-01-01

The definition of "phoneme boundary timing" in a speech corpus affects the quality of concatenative speech synthesis systems. For example, if the selected speech unit is not appropriately matched to the speech unit of the required phoneme environment, the quality may be degraded. In this paper, a dynamic segment boundary definition is proposed. In the definition, the concatenation point is chosen from the start or end timings of spectral transition depending on the phoneme environment at the ...

  18. The benefit of combining a deep neural network architecture with ideal ratio mask estimation in computational speech segregation to improve speech intelligibility.

    Science.gov (United States)

    Bentsen, Thomas; May, Tobias; Kressner, Abigail A; Dau, Torsten

    2018-01-01

    Computational speech segregation attempts to automatically separate speech from noise. This is challenging in conditions with interfering talkers and low signal-to-noise ratios. Recent approaches have adopted deep neural networks and successfully demonstrated speech intelligibility improvements. A selection of components may be responsible for the success with these state-of-the-art approaches: the system architecture, a time frame concatenation technique and the learning objective. The aim of this study was to explore the roles and the relative contributions of these components by measuring speech intelligibility in normal-hearing listeners. A substantial improvement of 25.4 percentage points in speech intelligibility scores was found going from a subband-based architecture, in which a Gaussian Mixture Model-based classifier predicts the distributions of speech and noise for each frequency channel, to a state-of-the-art deep neural network-based architecture. Another improvement of 13.9 percentage points was obtained by changing the learning objective from the ideal binary mask, in which individual time-frequency units are labeled as either speech- or noise-dominated, to the ideal ratio mask, where the units are assigned a continuous value between zero and one. Therefore, both components play significant roles and by combining them, speech intelligibility improvements were obtained in a six-talker condition at a low signal-to-noise ratio.
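The two learning objectives contrasted in this record, the ideal binary mask and the ideal ratio mask, can be sketched in a few lines. The toy spectrograms and the 0 dB local-SNR criterion below are illustrative assumptions, not details taken from the study:

```python
import numpy as np

# Toy power spectrograms (frequency bins x time frames); in a real system
# these would come from an STFT of separately recorded speech and noise.
rng = np.random.default_rng(0)
speech_power = rng.random((4, 5))
noise_power = rng.random((4, 5))

# Ideal binary mask: each time-frequency unit is labeled 1 (speech-dominated)
# or 0 (noise-dominated); a 0 dB local-SNR criterion is assumed here.
ibm = (speech_power > noise_power).astype(float)

# Ideal ratio mask: each unit gets a continuous value between zero and one,
# the Wiener-filter-like ratio of speech power to total power.
irm = speech_power / (speech_power + noise_power)

assert set(np.unique(ibm)) <= {0.0, 1.0}      # binary labels
assert np.all((irm >= 0.0) & (irm <= 1.0))    # continuous values in [0, 1]
```

In a trained segregation system these ideal masks serve as training targets; switching the target from the binary to the continuous ratio form is the change the record credits with the 13.9 percentage-point intelligibility gain.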

  19. Malaysia’s Participation in a United Nations Standing Force: A Question of National Security

    Science.gov (United States)

    2002-05-31

    Armed Forces Defence College and during a key note address at the National Security Conference, Malaysian Defense Minister, Dato’ Najib Tun Razak ...Oxford: Oxford University Press, 1999), 197-198. 9Speech by Dato’ Sri Mohd Najib Tun Abdul Razak , “Regional Insecurity: Preparing For Low to High...Resolution 15, No.2, (1971) Dato’ Sri Mohd Najib Tun Razak . “Executive Interview.” Asian Defence Journal (October 2001): 14-16. General Tan Sri Dato

  20. United States v. Caronia: The increasing strength of commercial free speech and potential new emphasis on classifying off-label promotion as "false and misleading".

    Science.gov (United States)

    Scheineson, Marc J; Cuevas, Guillermo

    2013-01-01

The authority of the United States Food and Drug Administration (FDA) to prohibit off-label promotion of drug products suffered another serious setback in United States v. Caronia. In a legal system where physicians may legally prescribe pharmaceuticals for unapproved uses in their practice of medicine, the Federal appeals court affirmed the commercial free speech rights of manufacturers to use truthful, non-misleading speech about lawfully marketed products. As a result of this case, and others upon which the decision is based, FDA is likely to challenge manufacturer promotion more carefully, and only when it can demonstrate that claims are not truthful but are false or misleading, or otherwise deprive the prescriber of adequate directions for use.

  1. Interpersonal Learning Systems for National Speech-Communication.

    Science.gov (United States)

    Heinberg, Paul

A consensus has prevailed among educators that Americans of varying ethnic, social, cultural, and linguistic backgrounds who must communicate with each other in social, academic, and occupational situations might achieve a greater degree of rapport if the dialect of the English mutually spoken and the speech mannerisms used were standardized.…

  2. Fathers of the Nation: Barack Obama Addresses Nelson Mandela

    Directory of Open Access Journals (Sweden)

    Elisa Bordin

    2014-11-01

Full Text Available This essay analyzes Barack Obama’s Nelson Mandela Memorial speech together with other seminal texts of Obama’s political and personal creed, such as his book Dreams from My Father (1995) and his speech “A More Perfect Union” (2008). This reading becomes helpful to understand Mandela’s transnational power, which Obama uses to comment on the United States by comparing Madiba to other American “fathers of the nation.” Thus, he uproots Mandela from a specifically South African legacy, expands his figure, and addresses him as a transnational father of his own nation, whose power, influence, and example transcend South African borders. As a consequence of this enlargement and transnational validation of Mandela’s figure, the speech delivered at the Memorial becomes an occasion to tackle America’s past and future, while the memory of Madiba and his driving example in Obama’s life serve to reinforce previous positions conveyed in other discourses by the American President, such as the “A More Perfect Union” speech delivered in Philadelphia in 2008.

  3. Voice and Speech Quality Perception Assessment and Evaluation

    CERN Document Server

    Jekosch, Ute

    2005-01-01

    Foundations of Voice and Speech Quality Perception starts out with the fundamental question of: "How do listeners perceive voice and speech quality and how can these processes be modeled?" Any quantitative answers require measurements. This is natural for physical quantities but harder to imagine for perceptual measurands. This book approaches the problem by actually identifying major perceptual dimensions of voice and speech quality perception, defining units wherever possible and offering paradigms to position these dimensions into a structural skeleton of perceptual speech and voice quality. The emphasis is placed on voice and speech quality assessment of systems in artificial scenarios. Many scientific fields are involved. This book bridges the gap between two quite diverse fields, engineering and humanities, and establishes the new research area of Voice and Speech Quality Perception.

  4. Lwazi II Final Report: Increasing the impact of speech technologies in South Africa

    CSIR Research Space (South Africa)

    Calteaux, K

    2013-02-01

Full Text Available North-West University, Potchefstroom Campus; Department of Basic Education, National School Nutrition Unit; Thusong Service Centres (Bushbuckridge, Musina and Sterkspruit); Senqu Municipality; Afrivet Training Services; Kokotla Junior Secondary... activities should also continue, in order to refine these technologies and improve their robustness and scalability. Acronyms: API – Application programming interface; ASR – Automatic speech recognition; ATS – Afrivet Training Services; CDW...

  5. The Texts of the Agency's Agreements with the United Nations

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1959-10-30

    The texts of the following agreements and supplementary agreements between the Agency and the United Nations are reproduced in this document for the information of all Members of the Agency: I. A. Agreement Governing the Relationship Between the United Nations and the International Atomic Energy Agency; B. Protocol Concerning the Entry into Force of the Agreement between the United Nations and the International Atomic Energy Agency; II. Administrative Arrangement Concerning the Use of the United Nations Laissez-Passer by Officials of the International Atomic Energy Agency; and III. Agreement for the Admission of the International Atomic Energy Agency into the United Nations Joint Staff Pension Fund.

  6. The Texts of the Agency's Agreements with the United Nations

    International Nuclear Information System (INIS)

    1959-01-01

    The texts of the following agreements and supplementary agreements between the Agency and the United Nations are reproduced in this document for the information of all Members of the Agency: I. A. Agreement Governing the Relationship Between the United Nations and the International Atomic Energy Agency; B. Protocol Concerning the Entry into Force of the Agreement between the United Nations and the International Atomic Energy Agency; II. Administrative Arrangement Concerning the Use of the United Nations Laissez-Passer by Officials of the International Atomic Energy Agency; and III. Agreement for the Admission of the International Atomic Energy Agency into the United Nations Joint Staff Pension Fund

  7. The Texts of the Agency's Agreements with the United Nations

    International Nuclear Information System (INIS)

    1959-01-01

The texts of the following agreements and supplementary agreements between the Agency and the United Nations are reproduced in this document for the information of all Members of the Agency: I. A. Agreement Governing the Relationship Between the United Nations and the International Atomic Energy Agency; B. Protocol Concerning the Entry into Force of the Agreement between the United Nations and the International Atomic Energy Agency; II. Administrative Arrangement Concerning the Use of the United Nations Laissez-Passer by Officials of the International Atomic Energy Agency; and III. Agreement for the Admission of the International Atomic Energy Agency into the United Nations Joint Staff Pension Fund [ru]

  8. The Texts of the Agency's Agreements with the United Nations

    International Nuclear Information System (INIS)

    1959-01-01

The texts of the following agreements and supplementary agreements between the Agency and the United Nations are reproduced in this document for the information of all Members of the Agency: I. A. Agreement Governing the Relationship Between the United Nations and the International Atomic Energy Agency; B. Protocol Concerning the Entry into Force of the Agreement between the United Nations and the International Atomic Energy Agency; II. Administrative Arrangement Concerning the Use of the United Nations Laissez-Passer by Officials of the International Atomic Energy Agency; and III. Agreement for the Admission of the International Atomic Energy Agency into the United Nations Joint Staff Pension Fund [es]

  9. Analysis of Serbian Military Riverine Units Capability for Participation in the United Nations Peacekeeping Operations

    Directory of Open Access Journals (Sweden)

    Slobodan Radojevic

    2017-06-01

Full Text Available This paper analyses the personnel, training capacities and equipment required for participation in United Nations peacekeeping operations with riverine elements. In order to meet the necessary capabilities for engagement in United Nations peacekeeping operations, Serbian military riverine units have to be compatible with the issued UN requirements. The Serbian Armed Forces have the potential to reach such requirements, with the River Flotilla as a pivot for participation in UN missions. The Serbian Military Academy has adopted and developed an educational and training program in accordance with the provisions and recommendations of the IMO conventions and IMO model courses, and thus has the capacity to educate and train military riverine units for participation in United Nations peacekeeping operations. Moreover, Serbia has a Multinational Operations Training Center and a Peacekeeping Operations Center certified to provide selection, training, equipping and preparation of individuals and units for United Nations multinational operations.

  10. [Prosody, speech input and language acquisition].

    Science.gov (United States)

    Jungheim, M; Miller, S; Kühn, D; Ptok, M

    2014-04-01

    In order to acquire language, children require speech input. The prosody of the speech input plays an important role. In most cultures adults modify their code when communicating with children. Compared to normal speech this code differs especially with regard to prosody. For this review a selective literature search in PubMed and Scopus was performed. Prosodic characteristics are a key feature of spoken language. By analysing prosodic features, children gain knowledge about underlying grammatical structures. Child-directed speech (CDS) is modified in a way that meaningful sequences are highlighted acoustically so that important information can be extracted from the continuous speech flow more easily. CDS is said to enhance the representation of linguistic signs. Taking into consideration what has previously been described in the literature regarding the perception of suprasegmentals, CDS seems to be able to support language acquisition due to the correspondence of prosodic and syntactic units. However, no findings have been reported, stating that the linguistically reduced CDS could hinder first language acquisition.

  11. An analysis of machine translation and speech synthesis in speech-to-speech translation system

    OpenAIRE

    Hashimoto, K.; Yamagishi, J.; Byrne, W.; King, S.; Tokuda, K.

    2011-01-01

    This paper provides an analysis of the impacts of machine translation and speech synthesis on speech-to-speech translation systems. The speech-to-speech translation system consists of three components: speech recognition, machine translation and speech synthesis. Many techniques for integration of speech recognition and machine translation have been proposed. However, speech synthesis has not yet been considered. Therefore, in this paper, we focus on machine translation and speech synthesis, ...

  12. Microscopic prediction of speech intelligibility in spatially distributed speech-shaped noise for normal-hearing listeners.

    Science.gov (United States)

    Geravanchizadeh, Masoud; Fallah, Ali

    2015-12-01

A binaural and psychoacoustically motivated intelligibility model, based on a well-known monaural microscopic model, is proposed. This model simulates a phoneme recognition task in the presence of spatially distributed speech-shaped noise in anechoic scenarios. In the proposed model, binaural advantage effects are considered by generating a feature vector for a dynamic-time-warping speech recognizer. This vector consists of three subvectors: two monaural subvectors to model better-ear hearing, and a binaural subvector to simulate the binaural unmasking effect. The binaural unit of the model is based on equalization-cancellation theory. The model operates blindly, meaning that separate recordings of speech and noise are not required for the predictions. Speech intelligibility tests were conducted with 12 normal-hearing listeners by collecting speech reception thresholds (SRTs) in the presence of single and multiple sources of speech-shaped noise. The comparison of the model predictions with the measured binaural SRTs, and with the predictions of a macroscopic binaural model called extended equalization-cancellation, shows that this approach predicts intelligibility in anechoic scenarios with good precision. The square of the correlation coefficient (r²) and the mean absolute error between the model predictions and the measurements are 0.98 and 0.62 dB, respectively.
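The two goodness-of-fit figures quoted at the end (r² and mean absolute error between predicted and measured SRTs) are straightforward to compute for any pair of series; the SRT values below are invented placeholders, not the study's data:

```python
import math

# Hypothetical predicted and measured SRTs in dB (placeholder values only).
predicted = [-6.1, -8.3, -4.9, -7.2, -9.0]
measured = [-6.5, -8.0, -4.4, -7.8, -9.3]

n = len(predicted)
mean_p = sum(predicted) / n
mean_m = sum(measured) / n

# Pearson correlation coefficient, then squared (the r^2 that is reported).
cov = sum((p - mean_p) * (m - mean_m) for p, m in zip(predicted, measured))
var_p = sum((p - mean_p) ** 2 for p in predicted)
var_m = sum((m - mean_m) ** 2 for m in measured)
r_squared = (cov / math.sqrt(var_p * var_m)) ** 2

# Mean absolute error between predictions and measurements, in dB.
mae = sum(abs(p - m) for p, m in zip(predicted, measured)) / n

assert 0.0 <= r_squared <= 1.0
assert mae >= 0.0
```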

  13. Scale-free amplitude modulation of neuronal oscillations tracks comprehension of accelerated speech

    NARCIS (Netherlands)

    Borges, Ana Filipa Teixeira; Giraud, Anne Lise; Mansvelder, Huibert D.; Linkenkaer-Hansen, Klaus

    2018-01-01

    Speech comprehension is preserved up to a threefold acceleration, but deteriorates rapidly at higher speeds. Current models posit that perceptual resilience to accelerated speech is limited by the brain’s ability to parse speech into syllabic units using δ/θ oscillations. Here, we investigated

  14. Estimation of net ecosystem exchange at the Skukuza flux site, Kruger National Park, South Africa

    CSIR Research Space (South Africa)

    Nickless, A

    2011-03-01

Full Text Available, Manlay R., Ngom D., Ntoupka M., Ouattara S., Savadogo P., Sawadogo L., Seghieri J., Tiveau D. Évaluation de la productivité et de la biomasse des savanes sèches africaines : l'apport du collectif SAVAFOR. Bois et Forêts des Tropiques, 60... UNFCCC United Nations Framework Convention on Climate Change; UNOPS United Nations Office for Project Services; UR2PI Unité de Recherche sur la Productivité des Plantations Industrielles (ex CTFT). Key note speeches on Africa...

  15. Neural Entrainment to Speech Modulates Speech Intelligibility

    NARCIS (Netherlands)

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Baskent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and

  16. 31 CFR 515.334 - United States national.

    Science.gov (United States)

    2010-07-01

... 31 Money and Finance: Treasury 3, 2010-07-01. Section 515.334 United States national. Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) OFFICE... of the United States, and which has its principal place of business in the United States. [61 FR...

  17. Language and Speech Improvement for Kindergarten and First Grade. A Supplementary Handbook.

    Science.gov (United States)

    Cole, Roberta; And Others

    The 16-unit language and speech improvement handbook for kindergarten and first grade students contains an introductory section which includes a discussion of the child's developmental speech and language characteristics, a sound development chart, a speech and hearing language screening test, the Henja articulation test, and a general outline of…

  18. Attention mechanisms and the mosaic evolution of speech

    Directory of Open Access Journals (Sweden)

    Pedro Tiago Martins

    2014-12-01

Full Text Available There is still no categorical answer for why humans, and no other species, have speech, or why speech is the way it is. Several purely anatomical arguments have been put forward, but they have been shown to be false, biologically implausible, or of limited scope. This perspective paper supports the idea that evolutionary theories of speech could benefit from a focus on the cognitive mechanisms that make speech possible, for which antecedents in evolutionary history and brain correlates can be found. This type of approach is part of a very recent but rapidly growing tradition, which has provided crucial insights into the nature of human speech by focusing on the biological bases of vocal learning. Here, we call attention to what might be an important ingredient for speech. We contend that a general mechanism of attention, which manifests itself not only in the visual but also the auditory (and possibly other) modalities, might be one of the key pieces of human speech, in addition to the mechanisms underlying vocal learning and the pairing of facial gestures with vocalic units.

  19. Speech recognition systems on the Cell Broadband Engine

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Y; Jones, H; Vaidya, S; Perrone, M; Tydlitat, B; Nanda, A

    2007-04-20

In this paper we describe our design, implementation, and first results of a prototype connected-phoneme-based speech recognition system on the Cell Broadband Engine™ (Cell/B.E.). Automatic speech recognition decodes speech samples into plain text (other representations are possible) and must process samples at real-time rates. Fortunately, the computational tasks involved in this pipeline are highly data-parallel and can receive significant hardware acceleration from vector-streaming architectures such as the Cell/B.E. Identifying and exploiting these parallelism opportunities is challenging, but also critical to improving system performance. We observed, from our initial performance timings, that a single Cell/B.E. processor can recognize speech from thousands of simultaneous voice channels in real time--a channel density that is orders-of-magnitude greater than the capacity of existing software speech recognizers based on CPUs (central processing units). This result emphasizes the potential for Cell/B.E.-based speech recognition and will likely lead to the future development of production speech systems using Cell/B.E. clusters.

  20. Computational speech segregation based on an auditory-inspired modulation analysis

    DEFF Research Database (Denmark)

    May, Tobias; Dau, Torsten

    2014-01-01

A monaural speech segregation system is presented that estimates the ideal binary mask from noisy speech based on the supervised learning of amplitude modulation spectrogram (AMS) features. Instead of using linearly scaled modulation filters with constant absolute bandwidth, an auditory-inspired... about speech activity present in neighboring time-frequency units. In order to evaluate the generalization performance of the system to unseen acoustic conditions, the speech segregation system is trained with a limited set of low signal-to-noise ratio (SNR) conditions, but tested over a wide range...
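The amplitude modulation spectrogram idea behind the features can be sketched for a single subband: extract the subband envelope, then analyze its modulation content. The test signal, sampling rate, and the crude rectify-and-smooth envelope extractor below are illustrative assumptions, not the paper's auditory-inspired filterbank:

```python
import numpy as np

# Toy subband signal: a 4 Hz amplitude modulation on a 100 Hz carrier,
# standing in for one auditory-filter output (all values illustrative).
fs = 1000
t = np.arange(fs) / fs
subband = (1 + 0.8 * np.cos(2 * np.pi * 4 * t)) * np.cos(2 * np.pi * 100 * t)

# Envelope by half-wave rectification and moving-average smoothing
# (a crude stand-in for Hilbert-based envelope extraction).
rectified = np.maximum(subband, 0.0)
kernel = np.ones(25) / 25
envelope = np.convolve(rectified, kernel, mode="same")

# Modulation spectrum of the envelope: the 4 Hz modulation component
# should dominate after the DC component is removed.
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
mod_freqs = np.fft.rfftfreq(len(envelope), d=1 / fs)
peak_freq = mod_freqs[np.argmax(spectrum)]
assert abs(peak_freq - 4.0) < 1.0
```

An AMS feature vector collects such modulation-spectrum magnitudes across subbands and modulation-frequency bins for each time-frequency unit.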

  1. Perspectives on Inclusive Education with Reference to United Nations

    Science.gov (United States)

    Sharma, Arvind

    2015-01-01

    This essay explores inclusive education and explains the role of United Nations for imparting it to different nations. Undoubtedly, the UN and the United Nations Children's Fund (UNICEF) strive for all children to have equitable access to education as a basic human right. The Convention on the Rights of the Child (CRC) combined with the Convention…

  2. Risk and protective factors associated with speech and language impairment in a nationally representative sample of 4- to 5-year-old children.

    Science.gov (United States)

    Harrison, Linda J; McLeod, Sharynne

    2010-04-01

    To determine risk and protective factors for speech and language impairment in early childhood. Data are presented for a nationally representative sample of 4,983 children participating in the Longitudinal Study of Australian Children (described in McLeod & Harrison, 2009). Thirty-one child, parent, family, and community factors previously reported as being predictors of speech and language impairment were tested as predictors of (a) parent-rated expressive speech/language concern and (b) receptive language concern, (c) use of speech-language pathology services, and (d) low receptive vocabulary. Bivariate logistic regression analyses confirmed 29 of the identified factors. However, when tested concurrently with other predictors in multivariate analyses, only 19 remained significant: 9 for 2-4 outcomes and 10 for 1 outcome. Consistent risk factors were being male, having ongoing hearing problems, and having a more reactive temperament. Protective factors were having a more persistent and sociable temperament and higher levels of maternal well-being. Results differed by outcome for having an older sibling, parents speaking a language other than English, and parental support for children's learning at home. Identification of children requiring speech and language assessment requires consideration of the context of family life as well as biological and psychosocial factors intrinsic to the child.
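For a single binary predictor, the bivariate logistic-regression slope reported in analyses like this one reduces to a log odds ratio, which can be read directly off a 2x2 table. The counts below are invented for illustration and are not taken from the Australian sample:

```python
import math

# Hypothetical 2x2 table: parent-rated speech/language concern (yes/no)
# cross-tabulated with sex (male/female). Counts are made up.
male_concern, male_no = 320, 2180
female_concern, female_no = 190, 2293

odds_male = male_concern / male_no
odds_female = female_concern / female_no
odds_ratio = odds_male / odds_female

# For a binary predictor, the bivariate logistic-regression slope
# equals the natural log of the odds ratio.
beta = math.log(odds_ratio)

assert odds_ratio > 1.0  # males at higher odds in this invented table
assert beta > 0.0
```

Multivariate analyses like the one described then fit all predictors jointly, which is why 10 of the 29 bivariately significant factors dropped out.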

  3. Comparison of two speech privacy measurements, articulation index (AI) and speech privacy noise isolation class (NIC'), in open workplaces

    Science.gov (United States)

    Yoon, Heakyung C.; Loftness, Vivian

    2002-05-01

Lack of speech privacy has been reported to be the main dissatisfaction among occupants in open workplaces, according to workplace surveys. Two speech privacy measurements, Articulation Index (AI), standardized by the American National Standards Institute in 1969, and Speech Privacy Noise Isolation Class (NIC', Noise Isolation Class Prime), adapted from Noise Isolation Class (NIC) by the U.S. General Services Administration (GSA) in 1979, have been claimed as objective tools to measure speech privacy in open offices. To evaluate which of the two criteria, "normal privacy" for AI or "satisfied privacy" for NIC', is the better tool for assessing speech privacy in a dynamic open office environment, measurements were taken in the field. AIs and NIC's for different partition heights and workplace configurations were measured following ASTM E1130 (Standard Test Method for Objective Measurement of Speech Privacy in Open Offices Using Articulation Index) and GSA tests PBS-C.1 (Method for the Direct Measurement of Speech-Privacy Potential (SPP) Based on Subjective Judgments) and PBS-C.2 (Public Building Service Standard Method of Test Method for the Sufficient Verification of Speech-Privacy Potential (SPP) Based on Objective Measurements Including Methods for the Rating of Functional Interzone Attenuation and NC-Background), respectively.
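The band-audibility idea behind the Articulation Index can be sketched as a weighted sum of clipped per-band signal-to-noise ratios. The band SNRs, the importance weights, and the 30 dB clipping range below are placeholders illustrating the shape of the calculation, not the ANSI/ASTM tables:

```python
# Sketch of the Articulation Index idea: per-band speech-to-noise ratios
# are clipped to a 30 dB useful range and combined with importance
# weights. All numbers are hypothetical placeholders.
band_snr_db = [12.0, 6.0, -3.0, 18.0, 35.0]  # hypothetical band SNRs
weights = [0.2, 0.25, 0.25, 0.2, 0.1]        # hypothetical weights, sum to 1

ai = 0.0
for snr, w in zip(band_snr_db, weights):
    audible = min(max(snr + 12.0, 0.0), 30.0) / 30.0  # clip to [0, 1]
    ai += w * audible

assert 0.0 <= ai <= 1.0
```

A high AI means speech from a neighboring workstation is highly intelligible, i.e. privacy is poor, so open-plan privacy criteria look for low AI values.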

  4. Proposal for a United Nations Basic Space Technology Initiative

    Science.gov (United States)

    Balogh, Werner

    Putting space technology and its applications to work for sustainable economic and social development is the primary objective of the United Nations Programme on Space Applications, launched in 1971. A specific goal for achieving this objective is to establish a sustainable national space capacity. The traditional line of thinking has supported a logical progression from building capacity in basic space science, to using space applications and finally - possibly - to establishing indigenous space technology capabilities. The experience in some countries suggests that such a strict line of progression does not necessarily hold true and that priority given to the establishment of early indigenous space technology capabilities may contribute to promoting the operational use of space applications in support of sustainable economic and social development. Based on these findings and on the experiences with the United Nations Basic Space Science Initiative (UNBSSI) as well as on a series of United Nations/International Academy of Astronautics Workshops on Small Satellites in the Service of Developing Countries, the United Nations Office for Outer Space Affairs (UNOOSA) is considering the launch of a dedicated United Nations Basic Space Technology Initiative (UNBSTI). The initiative would aim to contribute to capacity building in basic space technology and could include, among other relevant fields, activities related to the space and ground segments of small satellites and their applications. It would also provide an international framework for enhancing cooperation between all interested actors, facilitate the exchange of information on best practices, and contribute to standardization efforts. It is expected that these activities would advance the operational use of space technology and its applications in an increasing number of space-using countries and emerging space nations. The paper reports on these initial considerations and on the potential value-adding role

  5. The Congo crisis, the United Nations, and Zimbabwean nationalism ...

    African Journals Online (AJOL)

    United Nations moved swiftly in response to Lumumba's immediate request for assistance .... of apathy towards the white man in Africa and a strong desire to rid .... such actions would legitimatise the intervention of the Congo government in.

  6. Predicting Prosody from Text for Text-to-Speech Synthesis

    CERN Document Server

    Rao, K Sreenivasa

    2012-01-01

    Predicting Prosody from Text for Text-to-Speech Synthesis covers the specific aspects of prosody, mainly focusing on how to predict the prosodic information from linguistic text, and then how to exploit the predicted prosodic knowledge for various speech applications. Author K. Sreenivasa Rao discusses proposed methods along with state-of-the-art techniques for the acquisition and incorporation of prosodic knowledge for developing speech systems. Positional, contextual and phonological features are proposed for representing the linguistic and production constraints of the sound units present in the text. This book is intended for graduate students and researchers working in the area of speech processing.

  7. Pitch Synchronous Segmentation of Speech Signals

    Data.gov (United States)

    National Aeronautics and Space Administration — The Pitch Synchronous Segmentation (PSS) that accelerates speech without changing its fundamental frequency method could be applied and evaluated for use at NASA....

  8. Free Speech Yearbook 1979.

    Science.gov (United States)

    Kane, Peter E., Ed.

    The seven articles in this collection deal with theoretical and practical freedom of speech issues. Topics covered are: the United States Supreme Court, motion picture censorship, and the color line; judicial decision making; the established scientific community's suppression of the ideas of Immanuel Velikovsky; the problems of avant-garde jazz,…

  9. TIMLOGORO - AN INTERACTIVE PLATFORM DESIGN FOR SPEECH THERAPY

    Directory of Open Access Journals (Sweden)

    Georgeta PÂNIȘOARĂ

    2016-12-01

This article presents some technical and pedagogical features of an interactive platform used for language therapy. The Timlogoro project demonstrates that technology is an effective tool in learning and, in particular, a viable solution for improving speech disorders at different stages of life. A digital platform for different categories of users with speech impairments (children and adults) is well grounded in pedagogical principles. In speech therapy, the computer was originally used to assess deficiencies; nowadays it has become a useful tool in language rehabilitation. A few Romanian speech therapists create digital applications that will be used in therapy for recovery. This work was supported by a grant of the Romanian National Authority for Scientific Research (UEFISCDI).

  10. Hello World, It's Me: Bringing the Basic Speech Communication Course into the Digital Age

    Science.gov (United States)

    Kirkwood, Jessica; Gutgold, Nichola D.; Manley, Destiny

    2011-01-01

    During the past decade, instructors of speech communication have been adapting the introductory speech course to keep up with the television age. Learning units in speech textbooks now teach how to speak well on television, as well as how to interpret speeches in the media. This article argues that the computer age invites adaptation of the…

  11. Theoretical Background and Methodology of Senior Preschoolers’ Enlargement of the Vocabulary by Means of Phraseological Units

    Directory of Open Access Journals (Sweden)

    Mysan Inna

    2015-06-01

The article deals with the problem of developing preschoolers' speech by means of phraseological units. It shows their importance for children's mental development, emphasising that the most striking facets of a child's language are reflected in its means of expression, that is, in phraseology. Phraseological units make language varied, expressive, emotional, vivid and figurative, and deepen its ethno-cultural identity. The article presents the results of an analysis of children's listening to and use of phraseological units in oral speech. Taking into account data on how children perceive, understand and use phraseological units in their statements, a methodology for work on phraseology in kindergarten is presented, based on the following assumption: the speech and mental development of preschool children depends on timely and methodically correct work on phraseological units, because at this age children begin to comprehend figurative meanings of words and master the process of meaning change. In addition, at this age the child's vocabulary is sufficiently formed to acquire the richness and national characteristics of the native language, as well as the expressive and aesthetic functions that phraseological units perform. The methodology comprises principles, goals, objectives, contents, methods, organisational forms and means of enlarging the vocabulary (impressive and expressive) with phraseological expressions.

  12. The role of temporal resolution in modulation-based speech segregation

    DEFF Research Database (Denmark)

    May, Tobias; Bentsen, Thomas; Dau, Torsten

    2015-01-01

    speech and noise activity on the basis of individual time-frequency (T-F) units. One important parameter of the segregation system is the window duration of the analysis-synthesis stage, which determines the lower limit of modulation frequencies that can be represented but also the temporal acuity...... with which the segregation system can manipulate individual T-F units. To clarify the consequences of this trade-off on modulation-based speech segregation performance, the influence of the window duration was systematically investigated...

  13. Inner Speech's Relationship With Overt Speech in Poststroke Aphasia.

    Science.gov (United States)

    Stark, Brielle C; Geva, Sharon; Warburton, Elizabeth A

    2017-09-18

Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech in aphasia with selected measures of language and cognition. Thirty-eight persons with chronic aphasia (27 men, 11 women; average age 64.53 ± 13.29 years, time since stroke 8-111 months) were classified as having relatively preserved inner and overt speech (n = 21), relatively preserved inner speech with poor overt speech (n = 8), or not classified due to insufficient measurements of inner and/or overt speech (n = 9). Inner speech scores (by group) were correlated with selected measures of language and cognition from the Comprehensive Aphasia Test (Swinburn, Porter, & Howard, 2004). The group with poor overt speech showed a significant relationship of inner speech with overt naming (r = .95, p < .001), whereas correlations between inner speech and the language and cognition factors were not significant for the group with relatively good overt speech. As in previous research, we show that relatively preserved inner speech is found alongside otherwise severe production deficits in PWA. PWA with poor overt speech may rely more on preserved inner speech for overt picture naming (perhaps due to shared resources with verbal working memory) and for written picture description (perhaps due to reliance on inner speech due to perceived task difficulty). Assessments of inner speech may be useful as a standard component of aphasia screening, and therapy focused on improving and using inner speech may prove clinically worthwhile. https://doi.org/10.23641/asha.5303542.

  14. The benefit of combining a deep neural network architecture with ideal ratio mask estimation in computational speech segregation to improve speech intelligibility

    DEFF Research Database (Denmark)

    Bentsen, Thomas; May, Tobias; Kressner, Abigail Anne

    2018-01-01

    Computational speech segregation attempts to automatically separate speech from noise. This is challenging in conditions with interfering talkers and low signal-to-noise ratios. Recent approaches have adopted deep neural networks and successfully demonstrated speech intelligibility improvements....... A selection of components may be responsible for the success with these state-of-the-art approaches: the system architecture, a time frame concatenation technique and the learning objective. The aim of this study was to explore the roles and the relative contributions of these components by measuring speech......, to a state-of-the-art deep neural network-based architecture. Another improvement of 13.9 percentage points was obtained by changing the learning objective from the ideal binary mask, in which individual time-frequency units are labeled as either speech- or noise-dominated, to the ideal ratio mask, where...
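
The two learning objectives mentioned, the ideal binary mask (IBM) and the ideal ratio mask (IRM), can be written down for a single time-frequency unit given the oracle speech and noise magnitudes. The definitions below are common textbook forms; the study's exact local criterion (LC) and any exponent on the IRM may differ.

```python
import math

# Oracle masks for one time-frequency (T-F) unit, given speech and noise
# magnitudes. Common textbook definitions; the paper may use a tuned
# local criterion (LC) or an exponent on the IRM.

def ideal_binary_mask(speech_mag, noise_mag, lc_db=0.0):
    """1 if the local SNR exceeds the criterion LC (in dB), else 0."""
    snr_db = 20.0 * math.log10(speech_mag / noise_mag)
    return 1.0 if snr_db > lc_db else 0.0

def ideal_ratio_mask(speech_mag, noise_mag):
    """Soft gain in (0, 1): speech energy over total energy in the unit."""
    s2, n2 = speech_mag ** 2, noise_mag ** 2
    return s2 / (s2 + n2)

print(ideal_binary_mask(10.0, 1.0))  # 1.0  (speech-dominated unit kept)
print(ideal_binary_mask(1.0, 10.0))  # 0.0  (noise-dominated unit removed)
print(ideal_ratio_mask(1.0, 1.0))    # 0.5  (equal energy: half gain)
```

The contrast is visible directly: the IBM makes a hard keep/discard decision per unit, while the IRM applies a graded gain, which is the change in learning objective that the study credits with part of the intelligibility improvement.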

  15. Acoustic-Emergent Phonology in the Amplitude Envelope of Child-Directed Speech.

    Directory of Open Access Journals (Sweden)

    Victoria Leong

When acquiring language, young children may use acoustic spectro-temporal patterns in speech to derive phonological units in spoken language (e.g., prosodic stress patterns, syllables, phonemes). Children appear to learn acoustic-phonological mappings rapidly, without direct instruction, yet the underlying developmental mechanisms remain unclear. Across different languages, a relationship between amplitude envelope sensitivity and phonological development has been found, suggesting that children may make use of amplitude modulation (AM) patterns within the envelope to develop a phonological system. Here we present the Spectral Amplitude Modulation Phase Hierarchy (S-AMPH) model, a set of algorithms for deriving the dominant AM patterns in child-directed speech (CDS). Using Principal Components Analysis, we show that rhythmic CDS contains an AM hierarchy comprising 3 core modulation timescales. These timescales correspond to key phonological units: prosodic stress (Stress AM, ~2 Hz), syllables (Syllable AM, ~5 Hz) and onset-rime units (Phoneme AM, ~20 Hz). We argue that these AM patterns could in principle be used by naïve listeners to compute acoustic-phonological mappings without lexical knowledge. We then demonstrate that the modulation statistics within this AM hierarchy indeed parse the speech signal into a primitive hierarchically-organised phonological system comprising stress feet (proto-words), syllables and onset-rime units. We apply the S-AMPH model to two other CDS corpora, one spontaneous and one deliberately-timed. The model accurately identified 72-82% (freely-read CDS) and 90-98% (rhythmically-regular CDS) of stress patterns, syllables and onset-rime units. This in-principle demonstration that primitive phonology can be extracted from speech AMs is termed Acoustic-Emergent Phonology (AEP) theory. AEP theory provides a set of methods for examining how early phonological development is shaped by the temporal modulation structure of speech across
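
The three modulation timescales (~2 Hz stress, ~5 Hz syllable, ~20 Hz onset-rime) can be illustrated by decomposing a synthetic amplitude envelope into components at those rates. The sketch below builds an envelope from three sinusoids and recovers each component's amplitude with a plain DFT correlation; it is a toy stand-in for the S-AMPH modulation filterbank, not the model itself.

```python
import math

# Toy decomposition of an amplitude envelope into stress (~2 Hz),
# syllable (~5 Hz) and phoneme/onset-rime (~20 Hz) modulation rates.
# A stand-in for the S-AMPH modulation filterbank, not the model itself.

FS = 100  # envelope sample rate (Hz); 1-second excerpt
N = 100
RATES = {"stress": 2, "syllable": 5, "phoneme": 20}          # Hz
TRUE_AMPS = {"stress": 1.0, "syllable": 0.6, "phoneme": 0.3}

# Synthetic envelope: one sinusoid per modulation timescale.
env = [sum(TRUE_AMPS[k] * math.sin(2 * math.pi * RATES[k] * n / FS)
           for k in RATES) for n in range(N)]

def amplitude_at(signal, freq_hz):
    """Amplitude of the sinusoidal component at an exact DFT bin."""
    n_samples = len(signal)
    k = round(freq_hz * n_samples / FS)  # bin index (exact for these rates)
    re = sum(x * math.cos(2 * math.pi * k * n / n_samples)
             for n, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * k * n / n_samples)
             for n, x in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n_samples

recovered = {name: amplitude_at(env, hz) for name, hz in RATES.items()}
```

Because the three rates fall on exact DFT bins of the 1-second excerpt, each timescale's amplitude is recovered independently, which is the sense in which the envelope carries separable stress, syllable and onset-rime information.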

  16. United Nations Charter, Chapter VII, Article 43: Now or Never.

    Science.gov (United States)

    Burkle, Frederick M

    2018-04-25

    For more than 75 years, the United Nations Charter has functioned without the benefit of Chapter VII, Article 43, which commits all United Nations member states "to make available to the Security Council, on its call, armed forces, assistance, facilities, including rights of passage necessary for the purpose of maintaining international peace and security." The consequences imposed by this 1945 decision have had a dramatic negative impact on the United Nation's functional capacity as a global body for peace and security. This article summarizes the struggle to implement Article 43 over the decades from the onset of the Cold War, through diplomatic attempts during the post-Cold War era, to current and often controversial attempts to provide some semblance of conflict containment through peace enforcement missions. The rapid growth of globalization and the capability of many nations to provide democratic protections to their populations are again threatened by superpower hegemony and the development of novel unconventional global threats. The survival of the United Nations requires many long overdue organizational structure and governance power reforms, including implementation of a robust United Nations Standing Task Force under Article 43. (Disaster Med Public Health Preparedness. 2018;page 1 of 8).

  17. Musician advantage for speech-on-speech perception

    NARCIS (Netherlands)

    Başkent, Deniz; Gaudrain, Etienne

    Evidence for transfer of musical training to better perception of speech in noise has been mixed. Unlike speech-in-noise, speech-on-speech perception utilizes many of the skills that musical training improves, such as better pitch perception and stream segregation, as well as use of higher-level

  18. PERSON DEIXIS IN USA PRESIDENTIAL CAMPAIGN SPEECHES

    Directory of Open Access Journals (Sweden)

    Nanda Anggarani Putri

    2015-06-01

This study investigates the use of person deixis in presidential campaign speeches. This study is important because the use of person deixis in political speeches has been shown by many studies to have significant effects on the audience. The study largely employs a descriptive qualitative method; it also employs a simple quantitative method in calculating the number of personal pronouns used in the speeches and their percentages. The data for the study were collected from the transcriptions of six presidential campaign speeches of Barack Obama and Mitt Romney during campaign rallies in various places across the United States of America in July, September, and November 2012. The results of this study show that the presidential candidates make the best use of pronouns as a way to promote themselves and to attack their opponents. The results also suggest that the use of pronouns in the speeches enables the candidates to construct positive identity and reality, which are favorable to them and make them appear more eligible for the position.
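
The simple quantitative step described, counting personal pronouns and computing their percentages, can be sketched as follows. The pronoun list and the excerpt are invented for illustration, not taken from the corpus.

```python
from collections import Counter

# Count person-deixis markers (personal pronouns) in a speech excerpt
# and report their percentages, mirroring the study's quantitative step.
# Pronoun inventory and excerpt are illustrative only.

PRONOUNS = {"i", "me", "my", "we", "us", "our", "you", "your",
            "he", "him", "his", "she", "her", "they", "them", "their"}

def person_deixis_counts(text):
    tokens = [t.strip(".,!?;:\"'").lower() for t in text.split()]
    hits = Counter(t for t in tokens if t in PRONOUNS)
    total = sum(hits.values())
    percentages = {p: 100.0 * c / total for p, c in hits.items()}
    return hits, percentages

excerpt = "We will fight for you, because I believe in our future."
hits, pct = person_deixis_counts(excerpt)
print(sum(hits.values()))  # 4 pronouns: 'we', 'you', 'i', 'our'
```

A fuller analysis would group first-person singular vs. plural and second-person forms, since the self-promotion vs. opponent-attack contrast the study reports hinges on that distinction.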

  19. INTEGRATING MACHINE TRANSLATION AND SPEECH SYNTHESIS COMPONENT FOR ENGLISH TO DRAVIDIAN LANGUAGE SPEECH TO SPEECH TRANSLATION SYSTEM

    Directory of Open Access Journals (Sweden)

    J. SANGEETHA

    2015-02-01

This paper provides an interface between machine translation and speech synthesis for converting English speech to Tamil speech in an English-to-Tamil speech-to-speech translation system. The speech translation system consists of three modules: automatic speech recognition, machine translation and text-to-speech synthesis. Many procedures for integrating speech recognition and machine translation have been proposed, but the speech synthesis component has not yet been evaluated. In this paper, we focus on the integration of machine translation and speech synthesis, and report a subjective evaluation of the impact of the speech synthesis, the machine translation and the integrated machine translation and speech synthesis components. We implement a hybrid machine translation system (a combination of rule-based and statistical machine translation) and a concatenative syllable-based speech synthesis technique. To retain the naturalness and intelligibility of the synthesized speech, Auto Associative Neural Network (AANN) prosody prediction is used in this work. The results of this investigation demonstrate that the naturalness and intelligibility of the synthesized speech are strongly influenced by the fluency and correctness of the translated text.

  20. Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit.

    Science.gov (United States)

    Arnold, Denis; Tomaschek, Fabian; Sering, Konstantin; Lopez, Florence; Baayen, R Harald

    2017-01-01

Sound units play a pivotal role in cognitive models of auditory comprehension. The general consensus is that during perception listeners break down speech into auditory words and subsequently phones. Indeed, cognitive speech recognition is typically taken to be computationally intractable without phones. Here we present a computational model trained on 20 hours of conversational speech that recognizes word meanings within the range of human performance (model 25%, native speakers 20-44%), without making use of phone or word form representations. Our model also successfully generates predictions about the speed and accuracy of human auditory comprehension. At the heart of the model is a 'wide' yet sparse two-layer artificial neural network with some hundred thousand input units representing summaries of changes in acoustic frequency bands, and proxies for lexical meanings as output units. We believe that our model holds promise for resolving longstanding theoretical problems surrounding the notion of the phone in linguistic theory.
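
The error-driven, two-layer discrimination idea can be sketched with a delta-rule (Rescorla-Wagner/Widrow-Hoff style) learner that maps acoustic cue indicators directly onto meaning outcomes, with no phone layer in between. The cue names, outcomes and events below are invented miniatures; the actual model uses hundreds of thousands of acoustic summary features.

```python
from collections import defaultdict

# Minimal error-driven (delta-rule) learner mapping acoustic cues straight
# to meaning outcomes, with no intermediate phone layer. Cue/outcome names
# and events are invented miniatures for illustration.

LEARNING_RATE = 0.1
weights = defaultdict(float)  # (cue, outcome) -> association weight

def activation(cues, outcome):
    """Summed support for a meaning, given the active acoustic cues."""
    return sum(weights[(c, outcome)] for c in cues)

def learn(cues, present_outcome, all_outcomes):
    """Widrow-Hoff update: push the present outcome toward 1, others to 0."""
    for o in all_outcomes:
        target = 1.0 if o == present_outcome else 0.0
        error = target - activation(cues, o)
        for c in cues:
            weights[(c, o)] += LEARNING_RATE * error

OUTCOMES = ["WORLD", "WORD"]
events = [({"rising-f2", "long-vowel"}, "WORLD"),
          ({"falling-f2", "short-vowel"}, "WORD")] * 50

for cues, outcome in events:
    learn(cues, outcome, OUTCOMES)

# After training, each event's cues activate its meaning most strongly.
best = max(OUTCOMES, key=lambda o: activation({"rising-f2", "long-vowel"}, o))
print(best)  # WORLD
```

Recognition here is discrimination between meanings competing for the same cues, which is the sense in which the model bypasses the phone as a recognition unit.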

  1. 78 FR 64385 - United Nations Day, 2013

    Science.gov (United States)

    2013-10-28

    ... A Proclamation In 1945, after two world wars that showed the horrific lethality of modern conflict.... We celebrate the organization's challenging and often unheralded work of forging a world in which... children and grandchildren from the ravages of war, the members of the United Nations committed ``to unite...

  2. Mock Trial: A Window to Free Speech Rights and Abilities

    Science.gov (United States)

    Schwartz, Sherry

    2010-01-01

    This article provides some strategies to alleviate the current tensions between personal responsibility and freedom of speech rights in the public school classroom. The article advocates the necessity of making sure students understand the points and implications of the first amendment by providing a mock trial unit concerning free speech rights.…

  3. The United Nations University and Information Development.

    Science.gov (United States)

    Tanaskovic, Ines Wesley

    1994-01-01

    Describes the role of the United Nations University (UNU) in promoting the effective use of new information technologies in support of science and technology for development. The UNU Information and Decision Systems (INDES) project examines the constraints preventing developing nations from using advances in informatics and from integrating their…

  4. Automatic initial and final segmentation in cleft palate speech of Mandarin speakers.

    Directory of Open Access Journals (Sweden)

    Ling He

Speech unit segmentation is an important pre-processing step in the analysis of cleft palate speech. In Mandarin, a syllable is composed of two parts: an initial and a final. In cleft palate speech, resonance disorders occur at the finals and the voiced initials, while articulation disorders occur at the unvoiced initials. Thus, the initials and finals are the minimum speech units that can reflect the characteristics of cleft palate speech disorders. In this work, an automatic initial/final segmentation method is proposed as a pre-processing step for cleft palate speech signal processing. The tested cleft palate speech utterances were collected from the Cleft Palate Speech Treatment Center at the Hospital of Stomatology, Sichuan University, which treats the largest number of cleft palate patients in China. The cleft palate speech data include 824 speech segments, and the control samples contain 228 speech segments. First, syllables are extracted from the speech utterances. The proposed syllable extraction method avoids a training stage and performs well for both voiced and unvoiced speech. The syllables are then classified as having "quasi-unvoiced" or "quasi-voiced" initials, and separate initial/final segmentation methods are proposed for the two types. Moreover, a two-step segmentation method is proposed: the rough locations of syllable and initial/final boundaries are refined in the second step to improve the robustness of the segmentation accuracy. The experiments show that initial/final segmentation accuracies are higher for syllables with quasi-unvoiced initials than for those with quasi-voiced initials. For the cleft palate speech, the mean time error is 4.4 ms for syllables with quasi-unvoiced initials and 25.7 ms for syllables with quasi-voiced initials, and the correct segmentation accuracy P30 for all syllables is 91.69%. For the control samples, P30 for all the
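
The two reported evaluation figures, mean time error and P30 (the percentage of predicted boundaries falling within 30 ms of the manual boundary), can be computed as below. The boundary times are invented, and the sketch assumes predicted and reference boundaries are already paired one-to-one.

```python
# Segmentation evaluation: mean absolute time error and P30, the
# percentage of predicted boundaries within 30 ms of the reference.
# Assumes predicted and reference boundaries are aligned one-to-one;
# the boundary times below are invented for illustration.

def boundary_metrics(predicted_s, reference_s, tolerance_s=0.030):
    errors = [abs(p - r) for p, r in zip(predicted_s, reference_s)]
    mean_error = sum(errors) / len(errors)
    within = sum(1 for e in errors if e <= tolerance_s)
    p_tol = 100.0 * within / len(errors)
    return mean_error, p_tol

pred = [0.110, 0.480, 0.920, 1.340]  # predicted boundaries (s)
ref  = [0.100, 0.500, 0.880, 1.340]  # manual reference boundaries (s)
mean_err, p30 = boundary_metrics(pred, ref)
print(round(mean_err * 1000, 1), round(p30, 1))  # 17.5 75.0
```

Here one boundary misses by 40 ms, so P30 is 75% while the mean error stays at 17.5 ms; reporting both metrics, as the paper does, separates typical accuracy from the rate of outright misses.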

  5. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    Science.gov (United States)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS such as filled pauses will require future research.
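
The HMM decoding step can be sketched as a two-state Viterbi pass that labels each frame LSS or NLSS from per-frame emission log-likelihoods. All probabilities below are invented; in the paper the emissions would come from the acoustic feature models, not be given directly.

```python
import math

# Two-state Viterbi decoding of frames into language speech sounds (LSS)
# vs. non-language speech sounds (NLSS). Probabilities are invented;
# in practice per-frame likelihoods come from acoustic feature models.

STATES = ("LSS", "NLSS")
LOG_INIT = [math.log(0.5), math.log(0.5)]
LOG_TRANS = [[math.log(0.8), math.log(0.2)],   # from LSS
             [math.log(0.2), math.log(0.8)]]   # from NLSS
# Per-frame emission log-likelihoods [P(frame|LSS), P(frame|NLSS)].
LOG_EMIT = [[math.log(0.9), math.log(0.1)],    # speech-like frame
            [math.log(0.9), math.log(0.1)],    # speech-like frame
            [math.log(0.1), math.log(0.9)]]    # breath/click-like frame

def viterbi(log_init, log_trans, log_emit):
    n_states = len(log_init)
    delta = [log_init[s] + log_emit[0][s] for s in range(n_states)]
    backptr = []
    for t in range(1, len(log_emit)):
        new_delta, bp = [], []
        for s in range(n_states):
            prev = max(range(n_states),
                       key=lambda r: delta[r] + log_trans[r][s])
            new_delta.append(delta[prev] + log_trans[prev][s] + log_emit[t][s])
            bp.append(prev)
        delta, backptr = new_delta, backptr + [bp]
    path = [max(range(n_states), key=lambda s: delta[s])]
    for bp in reversed(backptr):   # trace the best path backwards
        path.append(bp[path[-1]])
    path.reverse()
    return [STATES[s] for s in path]

print(viterbi(LOG_INIT, LOG_TRANS, LOG_EMIT))  # ['LSS', 'LSS', 'NLSS']
```

The self-transition bias (0.8) smooths over single-frame evidence glitches, which is the practical reason for decoding with an HMM rather than classifying each frame independently.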

  6. National Assessment of College Student Learning: Identifying College Graduates' Essential Skills in Writing, Speech and Listening, and Critical Thinking. Final Project Report.

    Science.gov (United States)

    Jones, Elizabeth A.; And Others

    This study used an iterative Delphi survey process of about 600 faculty, employers, and policymakers to identify writing, speech and listening, and critical thinking skills that college graduates should achieve to become effective employees and citizens (National Education Goal 6). Participants reached a consensus about the importance in critical…

  7. Human rights or security? Positions on asylum in European Parliament speeches

    DEFF Research Database (Denmark)

    Frid-Nielsen, Snorre Sylvester

    2018-01-01

    This study examines speeches in the European Parliament relating to asylum. Conceptually, it tests hypotheses concerning the relation between national parties and Members of European Parliament (MEPs). The computer-based content analysis method Wordfish is used to examine 876 speeches from 2004-2...

  8. Recent advances in Automatic Speech Recognition for Vietnamese

    OpenAIRE

    Le , Viet-Bac; Besacier , Laurent; Seng , Sopheap; Bigi , Brigitte; Do , Thi-Ngoc-Diep

    2008-01-01

This paper presents our recent activities on automatic speech recognition for Vietnamese. First, our text data collection and processing methods and tools are described. For language modeling, we investigate word, sub-word and also hybrid word/sub-word models. For acoustic modeling, where only limited speech data are available for Vietnamese, we propose some crosslingual acoustic modeling techniques. Furthermore, since the use of sub-word units can reduce the high out-...

  9. Describing Speech Usage in Daily Activities in Typical Adults.

    Science.gov (United States)

    Anderson, Laine; Baylor, Carolyn R; Eadie, Tanya L; Yorkston, Kathryn M

    2016-01-01

    "Speech usage" refers to what people want or need to do with their speech to meet communication demands in life roles. The purpose of this study was to contribute to validation of the Levels of Speech Usage scale by providing descriptive data from a sample of adults without communication disorders, comparing this scale to a published Occupational Voice Demands scale and examining predictors of speech usage levels. This is a survey design. Adults aged ≥25 years without reported communication disorders were recruited nationally to complete an online questionnaire. The questionnaire included the Levels of Speech Usage scale, questions about relevant occupational and nonoccupational activities (eg, socializing, hobbies, childcare, and so forth), and demographic information. Participants were also categorized according to Koufman and Isaacson occupational voice demands scale. A total of 276 participants completed the questionnaires. People who worked for pay tended to report higher levels of speech usage than those who do not work for pay. Regression analyses showed employment to be the major contributor to speech usage; however, considerable variance left unaccounted for suggests that determinants of speech usage and the relationship between speech usage, employment, and other life activities are not yet fully defined. The Levels of Speech Usage may be a viable instrument to systematically rate speech usage because it captures both occupational and nonoccupational speech demands. These data from a sample of typical adults may provide a reference to help in interpreting the impact of communication disorders on speech usage patterns. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  10. Freedom of Speech and the Communication Discipline: Defending the Value of Low-Value Speech. Wicked Problems Forum: Freedom of Speech at Colleges and Universities

    Science.gov (United States)

    Herbeck, Dale A.

    2018-01-01

    Heated battles over free speech have erupted on college campuses across the United States in recent months. Some of the most prominent incidents involve efforts by students to prevent public appearances by speakers espousing controversial viewpoints. Efforts to silence offensive speakers on college campuses are not new; in these endeavors, one can…

  11. Music and Speech Perception in Children Using Sung Speech.

    Science.gov (United States)

    Nie, Yingjiu; Galvin, John J; Morikawa, Michael; André, Victoria; Wheeler, Harley; Fu, Qian-Jie

    2018-01-01

This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training, participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and speech in noise but not for speech in quiet. MCI performance was significantly poorer with the mixed timbre stimuli. Speech performance in noise was significantly poorer with the fixed or mixed pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet were significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners.

  12. Unit: Plants, Inspection Pack, National Trial Print.

    Science.gov (United States)

    Australian Science Education Project, Toorak, Victoria.

    This is a National Trial Print of a unit on plants produced as a part of the Australian Science Education Project. The unit consists of an information booklet for students, a booklet for recording student data, and a teacher's guide. The material, designed for use with students in the upper elementary grades, takes from 15 to 20 forty-minute…

  13. Error Consistency in Acquired Apraxia of Speech with Aphasia: Effects of the Analysis Unit

    Science.gov (United States)

    Haley, Katarina L.; Cunningham, Kevin T.; Eaton, Catherine Torrington; Jacks, Adam

    2018-01-01

    Purpose: Diagnostic recommendations for acquired apraxia of speech (AOS) have been contradictory concerning whether speech sound errors are consistent or variable. Studies have reported divergent findings that, on face value, could argue either for or against error consistency as a diagnostic criterion. The purpose of this study was to explain…

  14. Apraxia of Speech

    Science.gov (United States)

    What is apraxia of speech? Apraxia of speech (AOS)—also known as acquired ...

  15. Out-of-synchrony speech entrainment in developmental dyslexia.

    Science.gov (United States)

    Molinaro, Nicola; Lizarazu, Mikel; Lallier, Marie; Bourguignon, Mathieu; Carreiras, Manuel

    2016-08-01

    Developmental dyslexia is a reading disorder often characterized by reduced awareness of speech units. Whether the neural source of this phonological disorder in dyslexic readers results from the malfunctioning of the primary auditory system or damaged feedback communication between higher-order phonological regions (i.e., left inferior frontal regions) and the auditory cortex is still under dispute. Here we recorded magnetoencephalographic (MEG) signals from 20 dyslexic readers and 20 age-matched controls while they were listening to ∼10-s-long spoken sentences. Compared to controls, dyslexic readers had (1) an impaired neural entrainment to speech in the delta band (0.5-1 Hz); (2) a reduced delta synchronization in both the right auditory cortex and the left inferior frontal gyrus; and (3) an impaired feedforward functional coupling between neural oscillations in the right auditory cortex and the left inferior frontal regions. This shows that during speech listening, individuals with developmental dyslexia present reduced neural synchrony to low-frequency speech oscillations in primary auditory regions that hinders higher-order speech processing steps. The present findings, thus, strengthen proposals assuming that improper low-frequency acoustic entrainment affects speech sampling. This low speech-brain synchronization has the strong potential to cause severe consequences for both phonological and reading skills. Interestingly, the reduced speech-brain synchronization in dyslexic readers compared to normal readers (and its higher-order consequences across the speech processing network) appears preserved through the development from childhood to adulthood. Thus, the evaluation of speech-brain synchronization could possibly serve as a diagnostic tool for early detection of children at risk of dyslexia. Hum Brain Mapp 37:2767-2783, 2016. © 2016 Wiley Periodicals, Inc.

  16. The United Nations disarmament yearbook. V. 29: 2004

    International Nuclear Information System (INIS)

    2005-09-01

    The United Nations Disarmament Yearbook is designed to be a concise reference work. As a good amount of background information is condensed, it may be helpful to consult previous editions. Factual information, presented where possible in tabular form, is provided in the appendices. Web sites of United Nations departments and specialized agencies, intergovernmental organizations, research institutes and non-governmental organizations appear as footnotes. The Department for Disarmament Affairs draws your attention to its web site at http://disarmament.un.org where up-to-date information on disarmament issues may be obtained throughout the year.

  17. Common neural substrates support speech and non-speech vocal tract gestures.

    Science.gov (United States)

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M J; Poletto, Christopher J; Ludlow, Christy L

    2009-08-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as nonsense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech syllables without meaning. Brain activation related to overt production was captured with BOLD fMRI using a sparse sampling design for both conditions. Speech and non-speech were compared using voxel-wise whole-brain analyses, and ROI analyses focused on frontal and temporoparietal structures previously reported to support speech production. Results showed substantial overlap between speech and non-speech activation across these regions. Although non-speech gesture production showed greater extent and amplitude of activation in the regions examined, both speech and non-speech showed comparable left laterality in activation for both target perception and production. These findings suggest a more general role of the previously proposed "auditory dorsal stream" in the left hemisphere: supporting the production of vocal tract gestures that are not limited to speech processing.

  18. Speech Recognition for the iCub Platform

    Directory of Open Access Journals (Sweden)

    Bertrand Higy

    2018-02-01

    Full Text Available This paper describes open source software (available at https://github.com/robotology/natural-speech) to build automatic speech recognition (ASR) systems and run them within the YARP platform. The toolkit is designed (i) to allow non-ASR experts to easily create their own ASR system and run it on iCub and (ii) to build deep learning-based models specifically addressing the main challenges an ASR system faces in the context of verbal human–iCub interactions. The toolkit mostly consists of Python and C++ code and shell scripts integrated in YARP. As an additional contribution, a second codebase (written in Matlab) is provided for more expert ASR users who want to experiment with bio-inspired and developmental learning-inspired ASR systems. Specifically, we provide code for two distinct kinds of speech recognition: “articulatory” and “unsupervised” speech recognition. The first is largely inspired by influential neurobiological theories of speech perception which assume speech perception to be mediated by brain motor cortex activities. Our articulatory systems have been shown to outperform strong deep learning-based baselines. The second type of recognition systems, the “unsupervised” systems, do not use any supervised information (contrary to most ASR systems, including our articulatory systems). To some extent, they mimic an infant who has to discover the basic speech units of a language by herself. In addition, we provide resources consisting of pre-trained deep learning models for ASR, and a 2.5-h speech dataset of spoken commands, the VoCub dataset, which can be used to adapt an ASR system to the typical acoustic environments in which iCub operates.

  19. Modelling the Architecture of Phonetic Plans: Evidence from Apraxia of Speech

    Science.gov (United States)

    Ziegler, Wolfram

    2009-01-01

    In theories of spoken language production, the gestural code prescribing the movements of the speech organs is usually viewed as a linear string of holistic, encapsulated, hard-wired, phonetic plans, e.g., of the size of phonemes or syllables. Interactions between phonetic units on the surface of overt speech are commonly attributed to either the…

  20. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.
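The linear prediction model traced in this abstract can be illustrated with a small sketch: the autocorrelation method solved by the Levinson-Durbin recursion, the core computation behind classical LPC speech coders. This is a toy illustration under simplifying assumptions (a synthetic first-order signal and an arbitrary predictor order), not the procedure of any particular coding standard.

```python
def autocorr(x, maxlag):
    # Biased autocorrelation estimates r[0..maxlag]
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(maxlag + 1)]

def levinson_durbin(r, order):
    # Solve the LPC normal equations by the Levinson-Durbin recursion;
    # returns polynomial coefficients a[0..order] (a[0] = 1) and the
    # final prediction error power.
    a = [1.0] + [0.0] * order
    err = r[0]
    for m in range(1, order + 1):
        acc = sum(a[i] * r[m - i] for i in range(m))
        k = -acc / err          # reflection coefficient for step m
        new_a = a[:]
        for i in range(1, m):
            new_a[i] = a[i] + k * a[m - i]
        new_a[m] = k
        a = new_a
        err *= (1.0 - k * k)    # prediction error shrinks at each step
    return a, err

# Illustrative signal: a decaying single-pole sequence x[n] = 0.9^n
x = [0.9 ** n for n in range(64)]
a, err = levinson_durbin(autocorr(x, 2), 2)
```

For this synthetic single-pole signal the recursion recovers a first predictor coefficient close to -0.9, matching the generating pole, while the second coefficient stays near zero.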

  1. Tools for the assessment of childhood apraxia of speech.

    Science.gov (United States)

    Gubiani, Marileda Barichello; Pagliarin, Karina Carlesso; Keske-Soares, Marcia

    2015-01-01

    This study systematically reviews the literature on the main tools used to evaluate childhood apraxia of speech (CAS). The search strategy includes Scopus, PubMed, and Embase databases. Empirical studies that used tools for assessing CAS were selected. Articles were selected by two independent researchers. The search retrieved 695 articles, out of which 12 were included in the study. Five tools were identified: Verbal Motor Production Assessment for Children, Dynamic Evaluation of Motor Speech Skill, The Orofacial Praxis Test, Kaufman Speech Praxis Test for Children, and Madison Speech Assessment Protocol. There are few instruments available for CAS assessment and most of them are intended to assess praxis and/or orofacial movements, sequences of orofacial movements, articulation of syllables and phonemes, spontaneous speech, and prosody. There are some tests for assessment and diagnosis of CAS. However, few studies on this topic have been conducted at the national level, and there are few protocols available to assess and assist in an accurate diagnosis.

  2. Automatic Smoker Detection from Telephone Speech Signals

    DEFF Research Database (Denmark)

    Poorjam, Amir Hossein; Hesaraki, Soheila; Safavi, Saeid

    2017-01-01

    This paper proposes automatic smoking habit detection from spontaneous telephone speech signals. In this method, each utterance is modeled using i-vector and non-negative factor analysis (NFA) frameworks, which yield low-dimensional representation of utterances by applying factor analysis...... method is evaluated on telephone speech signals of speakers whose smoking habits are known drawn from the National Institute of Standards and Technology (NIST) 2008 and 2010 Speaker Recognition Evaluation databases. Experimental results over 1194 utterances show the effectiveness of the proposed approach...... for the automatic smoking habit detection task....

  3. Speech Production and Speech Discrimination by Hearing-Impaired Children.

    Science.gov (United States)

    Novelli-Olmstead, Tina; Ling, Daniel

    1984-01-01

    Seven hearing impaired children (five to seven years old) assigned to the Speakers group made highly significant gains in speech production and auditory discrimination of speech, while Listeners made only slight speech production gains and no gains in auditory discrimination. Combined speech and auditory training was more effective than auditory…

  4. The Origin of the United Nations

    Directory of Open Access Journals (Sweden)

    Carlos Yordan

    2010-11-01

    Full Text Available This article explains the origins of the United Nations' global counter-terrorism system. We argue that three factors shaped the system's decentralized and state-centered characteristics. The first is the UN's reactions to terrorism prior to the attacks of 11 September 2001. The second factor is the growing relevance of transnational governance networks. The third force is the interests and concerns of the Security Council's permanent members, which ultimately shaped the system's architecture. Keywords: 9/11; United Nations; Security Council; transnational governance networks; counter-terrorism system

  5. Stuttering Frequency, Speech Rate, Speech Naturalness, and Speech Effort During the Production of Voluntary Stuttering.

    Science.gov (United States)

    Davidow, Jason H; Grossman, Heather L; Edge, Robin L

    2018-05-01

    Voluntary stuttering techniques involve persons who stutter purposefully interjecting disfluencies into their speech. Little research has been conducted on the impact of these techniques on the speech pattern of persons who stutter. The present study examined whether changes in the frequency of voluntary stuttering accompanied changes in stuttering frequency, articulation rate, speech naturalness, and speech effort. In total, 12 persons who stutter aged 16-34 years participated. Participants read four 300-syllable passages during a control condition, and three voluntary stuttering conditions that involved attempting to produce purposeful, tension-free repetitions of initial sounds or syllables of a word for two or more repetitions (i.e., bouncing). The three voluntary stuttering conditions included bouncing on 5%, 10%, and 15% of syllables read. Friedman tests and follow-up Wilcoxon signed ranks tests were conducted for the statistical analyses. Stuttering frequency, articulation rate, and speech naturalness were significantly different between the voluntary stuttering conditions. Speech effort did not differ between the voluntary stuttering conditions. Stuttering frequency was significantly lower during the three voluntary stuttering conditions compared to the control condition, and speech effort was significantly lower during two of the three voluntary stuttering conditions compared to the control condition. Due to changes in articulation rate across the voluntary stuttering conditions, it is difficult to conclude, as has been suggested previously, that voluntary stuttering is the reason for stuttering reductions found when using voluntary stuttering techniques. Additionally, future investigations should examine different types of voluntary stuttering over an extended period of time to determine their impact on stuttering frequency, speech rate, speech naturalness, and speech effort.

  6. Comprehension of synthetic speech and digitized natural speech by adults with aphasia.

    Science.gov (United States)

    Hux, Karen; Knollman-Porter, Kelly; Brown, Jessica; Wallace, Sarah E

    2017-09-01

    Using text-to-speech technology to provide simultaneous written and auditory content presentation may help compensate for chronic reading challenges if people with aphasia can understand synthetic speech output; however, inherent auditory comprehension challenges experienced by people with aphasia may make understanding synthetic speech difficult. This study's purpose was to compare the preferences and auditory comprehension accuracy of people with aphasia when listening to sentences generated with digitized natural speech, Alex synthetic speech (i.e., Macintosh platform), or David synthetic speech (i.e., Windows platform). The methodology required each of 20 participants with aphasia to select one of four images corresponding in meaning to each of 60 sentences comprising three stimulus sets. Results revealed significantly better accuracy given digitized natural speech than either synthetic speech option; however, individual participant performance analyses revealed three patterns: (a) comparable accuracy regardless of speech condition for 30% of participants, (b) comparable accuracy between digitized natural speech and one, but not both, synthetic speech option for 45% of participants, and (c) greater accuracy with digitized natural speech than with either synthetic speech option for remaining participants. Ranking and Likert-scale rating data revealed a preference for digitized natural speech and David synthetic speech over Alex synthetic speech. Results suggest many individuals with aphasia can comprehend synthetic speech options available on popular operating systems. Further examination of synthetic speech use to support reading comprehension through text-to-speech technology is thus warranted. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Common neural substrates support speech and non-speech vocal tract gestures

    OpenAIRE

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M.J.; Poletto, Christopher J.; Ludlow, Christy L.

    2009-01-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal-tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as nonsense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech sylla...

  8. Introductory speeches

    International Nuclear Information System (INIS)

    2001-01-01

    This CD is a multimedia presentation of the programme for safety upgrading of the Bohunice V1 NPP. This chapter consists of an introductory commentary and 4 introductory speeches (video records): (1) Introductory speech of Vincent Pillar, Board chairman and director general of Slovak electric, Plc. (SE); (2) Introductory speech of Stefan Schmidt, director of SE - Bohunice Nuclear power plants; (3) Introductory speech of Jan Korec, Board chairman and director general of VUJE Trnava, Inc. - Engineering, Design and Research Organisation, Trnava; (4) Introductory speech of Dietrich Kuschel, Senior vice-president of FRAMATOME ANP Project and Engineering

  9. Integrating speech in time depends on temporal expectancies and attention.

    Science.gov (United States)

    Scharinger, Mathias; Steinberg, Johanna; Tavano, Alessandro

    2017-08-01

    Sensory information that unfolds in time, such as in speech perception, relies on efficient chunking mechanisms in order to yield optimally-sized units for further processing. Whether or not two successive acoustic events receive a one-unit or a two-unit interpretation seems to depend on the fit between their temporal extent and a stipulated temporal window of integration. However, there is ongoing debate on how flexible this temporal window of integration should be, especially for the processing of speech sounds. Furthermore, there is no direct evidence of whether attention may modulate the temporal constraints on the integration window. For this reason, we here examine how different word durations, which lead to different temporal separations of sound onsets, interact with attention. In an Electroencephalography (EEG) study, participants actively and passively listened to words where word-final consonants were occasionally omitted. Words had either a natural duration or were artificially prolonged in order to increase the separation of speech sound onsets. Omission responses to incomplete speech input, originating in left temporal cortex, decreased when the critical speech sound was separated from previous sounds by more than 250 msec, i.e., when the separation was larger than the stipulated temporal window of integration (125-150 msec). Attention, on the other hand, only increased omission responses for stimuli with natural durations. We complemented the event-related potential (ERP) analyses by a frequency-domain analysis at the stimulus presentation rate. Notably, the power at the stimulation frequency showed the same duration and attention effects as the omission responses. We interpret these findings on the background of existing research on temporal integration windows and further suggest that our findings may be accounted for within the framework of predictive coding. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. ABSTRACT NOUNS IN THE SPEECH OF THE ENGLISHMEN (BASED ON FICTION WORKS AND BRITISH NATIONAL CORPUS)

    Directory of Open Access Journals (Sweden)

    Natalia Veniaminovna Khokhlova

    2015-01-01

    Full Text Available The research aimed at studying the use of abstract nouns in the Englishmen's speech from the standpoint of sociolinguistics. The article introduces a new, sociolinguistic, approach to the research of abstract nouns; it is also the first time they are studied in a language corpus. The first stage of the research was based on works of fiction: abstract nouns were extracted for analysis from the statements of characters belonging to opposite social classes. Later, these data were compared with the results of the original corpus research based on the British National Corpus: sentences with nouns were selected from the conversational subcorpus of the BNC and were further sorted into abstract nouns, concrete nouns, and words denoting people. Then, their frequency and vocabulary were studied with regard to the speakers' age, gender and social standing. The results revealed that abstract words are used more often than concrete ones regardless of the speaker's social characteristics; however, the size and content of the vocabulary differ (it is generally more substantial in the speech of women and representatives of higher social classes). The results of this research can be used in elaborating a course of the English language or in teaching general linguistics, sociolinguistics and country studies.

  11. Predicting speech intelligibility in conditions with nonlinearly processed noisy speech

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    The speech-based envelope power spectrum model (sEPSM; [1]) was proposed in order to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII). The sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv), which was demonstrated...... to successfully predict speech intelligibility in conditions with nonlinearly processed noisy speech, such as processing with spectral subtraction. Moreover, a multiresolution version (mr-sEPSM) was demonstrated to account for speech intelligibility in various conditions with stationary and fluctuating...
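The SNRenv quantity at the heart of the sEPSM compares the modulation (envelope) power of the speech with that of the noise. The sketch below illustrates only this core ratio, using the common normalization of envelope AC power by the squared envelope mean; the published model additionally applies a modulation filterbank across audio channels, so the toy envelopes and function names here are illustrative assumptions, not the full model.

```python
import math

def envelope_power(env):
    # Normalized AC power of an envelope: variance divided by squared mean
    n = len(env)
    mean = sum(env) / n
    var = sum((e - mean) ** 2 for e in env) / n
    return var / mean ** 2

def snr_env_db(p_env_speech, p_env_noise):
    # Envelope-domain signal-to-noise ratio in dB
    return 10.0 * math.log10(p_env_speech / p_env_noise)

# Toy envelopes: a strongly modulated "speech" envelope vs. a weakly
# modulated "noise" envelope, both 4-Hz sinusoids over 100 samples
speech_env = [1.0 + 0.8 * math.sin(2 * math.pi * 4 * t / 100) for t in range(100)]
noise_env = [1.0 + 0.1 * math.sin(2 * math.pi * 4 * t / 100) for t in range(100)]
snr = snr_env_db(envelope_power(speech_env), envelope_power(noise_env))
```

With these toy envelopes the envelope power ratio is (0.8/0.1)^2 = 64, i.e., an SNRenv of about 18 dB.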

  12. Oral speech teaching to students of mathematic specialties: a grammatical aspect

    Directory of Open Access Journals (Sweden)

    Ibragimov I.I.

    2016-08-01

    Full Text Available The paper considers features of teaching the grammatical aspects of English speech. The case studies include undergraduates of mathematical specialties. The content of students' educational activity at the final stage of language teaching is pointed out. In addition, the structure of the grammar section is described, along with a special didactic training unit within which the grammatical phenomena used in oral speech are mastered.

  13. Exploring Australian speech-language pathologists' use and perceptions of non-speech oral motor exercises.

    Science.gov (United States)

    Rumbach, Anna F; Rose, Tanya A; Cheah, Mynn

    2018-01-29

    To explore Australian speech-language pathologists' use of non-speech oral motor exercises, and rationales for using/not using non-speech oral motor exercises in clinical practice. A total of 124 speech-language pathologists practising in Australia, working with paediatric and/or adult clients with speech sound difficulties, completed an online survey. The majority of speech-language pathologists reported that they did not use non-speech oral motor exercises when working with paediatric or adult clients with speech sound difficulties. However, more than half of the speech-language pathologists working with adult clients who have dysarthria reported using non-speech oral motor exercises with this population. The most frequently reported rationale for using non-speech oral motor exercises in speech sound difficulty management was to improve awareness/placement of articulators. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound difficulties. This study provides an overview of Australian speech-language pathologists' reported use and perceptions of non-speech oral motor exercises' applicability and efficacy in treating paediatric and adult clients who have speech sound difficulties. The research findings provide speech-language pathologists with insight into how and why non-speech oral motor exercises are currently used, and adds to the knowledge base regarding Australian speech-language pathology practice of non-speech oral motor exercises in the treatment of speech sound difficulties. Implications for Rehabilitation Non-speech oral motor exercises refer to oral motor activities which do not involve speech, but involve the manipulation or stimulation of oral structures including the lips, tongue, jaw, and soft palate. Non-speech oral motor exercises are intended to improve the function (e.g., movement, strength) of oral structures. 

  14. The segment as the minimal planning unit in speech production and reading aloud: evidence and implications.

    Science.gov (United States)

    Kawamoto, Alan H; Liu, Qiang; Kello, Christopher T

    2015-01-01

    Speech production and reading aloud studies have much in common, especially the last stages involved in producing a response. We focus on the minimal planning unit (MPU) in articulation. Although most researchers now assume that the MPU is the syllable, we argue that it is at least as small as the segment based on negative response latencies (i.e., response initiation before presentation of the complete target) and longer initial segment durations in a reading aloud task where the initial segment is primed. We also discuss why such evidence was not found in earlier studies. Next, we rebut arguments that the segment cannot be the MPU by appealing to flexible planning scope whereby planning units of different sizes can be used due to individual differences, as well as stimulus and experimental design differences. We also discuss why negative response latencies do not arise in some situations and why anticipatory coarticulation does not preclude the segment MPU. Finally, we argue that the segment MPU is also important because it provides an alternative explanation of results implicated in the serial vs. parallel processing debate.

  15. [Improving speech comprehension using a new cochlear implant speech processor].

    Science.gov (United States)

    Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A

    2009-06-01

    The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who demonstrated speech comprehension performance with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions of at least 80% correct scores and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improved signal-to-noise ratio for speech comprehension thresholds (i.e., signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg

  16. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
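As a concrete example of coding speech "directly as a waveform", the sketch below implements μ-law companding followed by uniform quantization, in the spirit of 8-bit log-PCM telephony. It uses the continuous μ-law formula rather than the segmented encoding of the actual G.711 standard, so it is a simplified illustration rather than a standards-conformant codec.

```python
import math

MU = 255.0  # companding constant used in 8-bit log-PCM telephony

def mu_law_compress(x):
    # Map x in [-1, 1] to a companded value in [-1, 1]
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    # Exact inverse of mu_law_compress
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def codec(x, bits=8):
    # Compand, quantize uniformly to the given bit depth, then expand
    levels = 2 ** (bits - 1) - 1
    q = round(mu_law_compress(x) * levels) / levels
    return mu_law_expand(q)
```

The logarithmic compression allocates quantization levels more densely near zero, which is where most of the energy of a speech waveform lies; this is why 8-bit log-PCM sounds far better than 8-bit uniform PCM at the same bit rate.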

  17. Motivational Projections of Russian Spontaneous Speech

    Directory of Open Access Journals (Sweden)

    Galina M. Shipitsina

    2017-06-01

    Full Text Available The article deals with the semantic, pragmatic and structural features of the motivation of words, phrases and dialogues in contemporary Russian popular speech. These structural features are characterized by originality and unconventional use. The language material is the result of the authors' direct observation of spontaneous verbal communication between people of different social and age groups. The words and remarks were analyzed in relation to the communication system of the national Russian language and the cultural background of popular speech. The study found that spoken discourse offers additional ways to increase the expressiveness of a statement. It is important to note that spontaneous speech identifies lacunae in the nominative means and vocabulary system of the language. It is shown that prefixation is also an effective and regular way of presenting the same action. The most typical forms, ways and means of updating language resources as a result of the linguistic creativity of native speakers were identified.

  18. A Cross-Cultural Approach to Speech-Act-Sets: The Case of Apologies

    Directory of Open Access Journals (Sweden)

    Válková Silvie

    2014-07-01

    Full Text Available The aim of this paper is to contribute to the validity of recent research into speech act theory by advocating the idea that with some of the traditional speech acts, their overt language manifestations that emerge from corpus data remind us of ritualised scenarios of speech-act-sets rather than single acts, with configurations of core and peripheral units reflecting the socio-cultural norms of the expectations and culture-bound values of a given language community. One of the prototypical manifestations of speech-act-sets, apologies, will be discussed to demonstrate a procedure which can be used to identify, analyse, describe and cross-culturally compare the validity of speech-act-set theory and provide evidence of its relevance for studying the English-Czech interface in this particular domain of human interaction.

  19. The analysis of speech acts patterns in two Egyptian inaugural speeches

    Directory of Open Access Journals (Sweden)

    Imad Hayif Sameer

    2017-09-01

    Full Text Available The theory of speech acts, which clarifies what people do when they speak, is not about the individual words or sentences that form the basic elements of human communication, but rather about the particular speech acts that are performed when uttering words. A speech act is the attempt at doing something purely by speaking; many things can be done by speaking. Speech acts are studied under what is called speech act theory and belong to the domain of pragmatics. In this paper, two Egyptian inaugural speeches, by El-Sadat and El-Sisi, belonging to different periods, were analyzed to find out whether there are differences within this genre in the same culture. The study showed only a very small difference between the two speeches, which were analyzed according to Searle's theory of speech acts. In El-Sadat's speech, commissives occupied first place; in El-Sisi's speech, assertives did. Within the speeches of one culture, the differences depended on the circumstances that surrounded each president's election at the time. Speech acts were tools the speakers used to convey what they wanted and to obtain support from their audiences.

  20. Speech Problems

    Science.gov (United States)

    ... Speech Problems KidsHealth / For Teens / Speech Problems ... a person's ability to speak clearly. Some Common Speech and Language Disorders: Stuttering is a problem that ...

  1. A Danish open-set speech corpus for competing-speech studies

    DEFF Research Database (Denmark)

    Nielsen, Jens Bo; Dau, Torsten; Neher, Tobias

    2014-01-01

    Studies investigating speech-on-speech masking effects commonly use closed-set speech materials such as the coordinate response measure [Bolia et al. (2000). J. Acoust. Soc. Am. 107, 1065-1066]. However, these studies typically result in very low (i.e., negative) speech recognition thresholds (SRTs) when the competing speech signals are spatially separated. To achieve higher SRTs that correspond more closely to natural communication situations, an open-set, low-context, multi-talker speech corpus was developed. Three sets of 268 unique Danish sentences were created, and each set was recorded with one of three professional female talkers. The intelligibility of each sentence in the presence of speech-shaped noise was measured. For each talker, 200 approximately equally intelligible sentences were then selected and systematically distributed into 10 test lists. Test list homogeneity was assessed...

  2. Speech entrainment enables patients with Broca’s aphasia to produce fluent speech

    Science.gov (United States)

    Hubbard, H. Isabel; Hudspeth, Sarah Grace; Holland, Audrey L.; Bonilha, Leonardo; Fromm, Davida; Rorden, Chris

    2012-01-01

    A distinguishing feature of Broca’s aphasia is non-fluent halting speech typically involving one to three words per utterance. Yet, despite such profound impairments, some patients can mimic audio-visual speech stimuli enabling them to produce fluent speech in real time. We call this effect ‘speech entrainment’ and reveal its neural mechanism as well as explore its usefulness as a treatment for speech production in Broca’s aphasia. In Experiment 1, 13 patients with Broca’s aphasia were tested in three conditions: (i) speech entrainment with audio-visual feedback where they attempted to mimic a speaker whose mouth was seen on an iPod screen; (ii) speech entrainment with audio-only feedback where patients mimicked heard speech; and (iii) spontaneous speech where patients spoke freely about assigned topics. The patients produced a greater variety of words using audio-visual feedback compared with audio-only feedback and spontaneous speech. No difference was found between audio-only feedback and spontaneous speech. In Experiment 2, 10 of the 13 patients included in Experiment 1 and 20 control subjects underwent functional magnetic resonance imaging to determine the neural mechanism that supports speech entrainment. Group results with patients and controls revealed greater bilateral cortical activation for speech produced during speech entrainment compared with spontaneous speech at the junction of the anterior insula and Brodmann area 47, in Brodmann area 37, and unilaterally in the left middle temporal gyrus and the dorsal portion of Broca’s area. Probabilistic white matter tracts constructed for these regions in the normal subjects revealed a structural network connected via the corpus callosum and ventral fibres through the extreme capsule. Unilateral areas were connected via the arcuate fasciculus. In Experiment 3, all patients included in Experiment 1 participated in a 6-week treatment phase using speech entrainment to improve speech production

  3. Current Policies and New Directions for Speech-Language Pathology Assistants.

    Science.gov (United States)

    Paul-Brown, Diane; Goldberg, Lynette R

    2001-01-01

    This article provides an overview of current American Speech-Language-Hearing Association (ASHA) policies for the appropriate use and supervision of speech-language pathology assistants with an emphasis on the need to preserve the role of fully qualified speech-language pathologists in the service delivery system. Seven challenging issues surrounding the appropriate use of speech-language pathology assistants are considered. These include registering assistants and approving training programs; membership in ASHA; discrepancies between state requirements and ASHA policies; preparation for serving diverse multicultural, bilingual, and international populations; supervision considerations; funding and reimbursement for assistants; and perspectives on career-ladder/bachelor-level personnel. The formation of a National Leadership Council is proposed to develop a coordinated strategic plan for addressing these controversial and potentially divisive issues related to speech-language pathology assistants. This council would implement strategies for future development in the areas of professional education pertaining to assistant-level supervision, instruction of assistants, communication networks, policy development, research, and the dissemination/promotion of information regarding assistants.

  4. Reported Speech in Conversational Storytelling during Nursing Shift Handover Meetings

    Science.gov (United States)

    Bangerter, Adrian; Mayor, Eric; Pekarek Doehler, Simona

    2011-01-01

    Shift handovers in nursing units involve formal transmission of information and informal conversation about non-routine events. Informal conversation often involves telling stories. Direct reported speech (DRS) was studied in handover storytelling in two nursing care units. The study goal is to contribute to a better understanding of conversation…

  5. Visit of H.E. Mr. S. Marchi, Ambassador and Permanent Representative for Canada to the Office of the United Nations at Geneva and H.E. Mr. Ch. Westdal, Alternate Permanent Representative, Ambassador to the Office of the United Nations Permanent Representative and Ambassador to the United Nations for Disarmament for Canada

    CERN Multimedia

    Patrice Loiez

    2000-01-01


  6. Freedom of Speech Newsletter, May 1976.

    Science.gov (United States)

    Allen, Winfred G., Jr., Ed.

    This issue of the "Freedom of Speech Newsletter" contains three articles. "Big Brother, 1976--Judges and the Gag Order" by Miles Clark examines constitutional censorship of the media and government secrecy. "Democratic Rights: A Socialist View" by Kipp Dawson argues that "the rulers of the United States have never granted the American people any…

  7. Multimodal Speech Capture System for Speech Rehabilitation and Learning.

    Science.gov (United States)

    Sebkhi, Nordine; Desai, Dhyey; Islam, Mohammad; Lu, Jun; Wilson, Kimberly; Ghovanloo, Maysam

    2017-11-01

    Speech-language pathologists (SLPs) are trained to correct the articulation of people diagnosed with motor speech disorders by analyzing articulator motion and assessing speech outcomes while patients speak. To assist SLPs in this task, we present the multimodal speech capture system (MSCS), which records and displays the kinematics of key speech articulators, the tongue and lips, along with voice, using unobtrusive methods. The collected speech modalities (tongue motion, lip gestures, and voice) are visualized not only in real time, to provide patients with instant feedback, but also offline, to allow SLPs to perform post-analysis of articulator motion, particularly of the tongue, with its prominent but hardly visible role in articulation. We describe the MSCS hardware and software components and demonstrate its basic visualization capabilities with a healthy individual repeating the words "Hello World." A proof-of-concept prototype has been successfully developed and will be used in future clinical studies to evaluate its potential impact on accelerating speech rehabilitation by enabling patients to speak naturally. Pattern-matching algorithms applied to the collected data can provide patients with quantitative and objective feedback on their speech performance, unlike current methods, which are mostly subjective and may vary from one SLP to another.

  8. Speech Motor Control in Fluent and Dysfluent Speech Production of an Individual with Apraxia of Speech and Broca's Aphasia

    Science.gov (United States)

    van Lieshout, Pascal H. H. M.; Bose, Arpita; Square, Paula A.; Steele, Catriona M.

    2007-01-01

    Apraxia of speech (AOS) is typically described as a motor-speech disorder with clinically well-defined symptoms, but without a clear understanding of the underlying problems in motor control. A number of studies have compared the speech of subjects with AOS to the fluent speech of controls, but only a few have included speech movement data and if…

  9. DELVING INTO SPEECH ACT A Case Of Indonesian EFL Young Learners

    Directory of Open Access Journals (Sweden)

    Swastika Septiani, S.Pd

    2017-04-01

    Full Text Available This study attempts to describe the use of speech acts in primary school. It is intended to identify the speech acts performed in primary school, to find the most dominant ones, to give a brief description of how speech acts are applied there, and to show how the results can inform the teaching of English to young learners. The speech acts are classified according to Searle's taxonomy. The most dominant speech act performed in primary school is the directive (41.17%), followed by the declarative (33.33%), then the representative and the expressive (11.76% each), with the commissive the least frequent (1.9%). The speech acts are applied in contexts of situation determined by the National Education Standards Agency (BSNP): speech acts performed in the fourth grade are applied in the context of the classroom, those in the fifth grade in the context of the school, and those in the sixth grade in the context of the students' surroundings. The results are expected to make a significant contribution to the teaching of English to young learners. By acknowledging the characteristics of young learners and the way they learn English as a foreign language, teachers can devise inventive strategies and varied techniques to create a fun and conducive atmosphere in the English class.
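    The reported category shares can be reproduced with a simple frequency tally over coded utterances. The sketch below is illustrative only: the counts (21, 17, 6, 6, 1 out of 51 utterances) are hypothetical numbers chosen to yield shares close to those reported, not the study's actual data.

```python
from collections import Counter

def speech_act_distribution(labels):
    """Return each speech-act category's share of the total, in percent."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {act: round(100 * n / total, 2) for act, n in counts.items()}

# Hypothetical coded utterances (counts chosen only to approximate the
# percentages reported in the abstract; not the study's real data).
coded = (["directive"] * 21 + ["declarative"] * 17 +
         ["representative"] * 6 + ["expressive"] * 6 + ["commissive"] * 1)

print(speech_act_distribution(coded))
# directive 41.18, declarative 33.33, representative 11.76,
# expressive 11.76, commissive 1.96
```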

  10. Postdeployment reintegration experiences of female soldiers from national guard and reserve units in the United States.

    Science.gov (United States)

    Kelly, Patricia J; Berkel, LaVerne A; Nilsson, Johanna E

    2014-01-01

    Women are an integral part of Reserve and National Guard units and active duty armed forces of the United States. Deployment to conflict and war zones is a difficult experience for both soldiers and their families. On return from deployment, all soldiers face the challenge of reintegration into family life and society, but those from the National Guard and Reserve units face the additional challenge of reintegration in relative isolation from other soldiers. There is limited research about the reintegration experiences of women and the functioning of the families during reintegration following deployment. The goal was to document postdeployment family reintegration experiences of women in the National Guard. Semistructured interviews were conducted with 42 female members of Midwestern National Guard units. Directed content analysis was used to identify categories of experiences related to women's family reintegration. Five categories of postdeployment experience for female soldiers and their families were identified: Life Is More Complex, Loss of Military Role, Deployment Changes You, Reestablishing Partner Connections, and Being Mom Again. The categories reflected individual and family issues, and both need to be considered when soldiers and their families seek care. Additional research is needed to fully understand the specific impact of gender on women's reintegration.

  11. An Update from the United Nations

    Science.gov (United States)

    Staley, Lynn

    2005-01-01

    On September 8, 9, and 10, the United Nations (UN) Department of Information (DPI) partnered with the non-governmental organizations (NGOs) to sponsor the 57th Annual DPI/NGO Conference in New York City. In his welcoming remarks, Kofi Annan, Secretary-General of the UN, highlighted the theme of the conference, "Millennium Development Goals (MDGs):…

  12. Visemic Processing in Audiovisual Discrimination of Natural Speech: A Simultaneous fMRI-EEG Study

    Science.gov (United States)

    Dubois, Cyril; Otzenberger, Helene; Gounot, Daniel; Sock, Rudolph; Metz-Lutz, Marie-Noelle

    2012-01-01

    In a noisy environment, visual perception of articulatory movements improves natural speech intelligibility. Parallel to phonemic processing based on auditory signal, visemic processing constitutes a counterpart based on "visemes", the distinctive visual units of speech. Aiming at investigating the neural substrates of visemic processing in a…

  13. Report from UNSCEAR to the United Nations General Assembly

    International Nuclear Information System (INIS)

    2001-01-01

    Over the past few years, the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) has undertaken a broad review of the sources and effects of ionizing radiation. The results of this work are presented for the general reader in the 2000 Report to the General Assembly. This report, with its supporting scientific annexes aimed at the general scientific community, was published as 'Sources and Effects of Ionizing Radiation, United Nations Scientific Committee on the Effects of Atomic Radiation 2000 Report to the General Assembly, with Scientific Annexes'.

  14. Methodology for Speech Assessment in the Scandcleft Project-An International Randomized Clinical Trial on Palatal Surgery

    DEFF Research Database (Denmark)

    Willadsen, Elisabeth

    2009-01-01

    Objective: To present the methodology for speech assessment in the Scandcleft project and discuss issues from a pilot study. Design: Description of methodology and blinded test for speech assessment. Speech samples and instructions for data collection and analysis for comparisons of speech outcomes across the five included languages were developed and tested. Participants and Materials: Randomly selected video recordings of 10 5-year-old children from each language (n = 50) were included in the project. Speech material consisted of test consonants in single words, connected speech, and syllable chains. ... -sum and the overall rating of VPC was 78%. Conclusions: Pooling data of speakers of different languages in the same trial and comparing speech outcome across trials seems possible if the assessment of speech concerns consonants and is confined to speech units that are phonetically similar across languages. Agreed...

  15. Development of a speech-based dialogue system for report dictation and machine control in the endoscopic laboratory.

    Science.gov (United States)

    Molnar, B; Gergely, J; Toth, G; Pronai, L; Zagoni, T; Papik, K; Tulassay, Z

    2000-01-01

    Reporting and machine control based on speech technology can enhance work efficiency in the gastrointestinal endoscopy laboratory. The status and activation of endoscopy laboratory equipment were described as a multivariate parameter and function system. Speech recognition, text evaluation and action definition engines were installed. Special programs were developed for the grammatical analysis of command sentences, and a rule-based expert system for the definition of machine answers. A speech backup engine provides feedback to the user. Techniques were applied based on the "Hidden Markov" model of discrete word, user-independent speech recognition and on phoneme-based speech synthesis. Speech samples were collected from three male low-tone investigators. The dictation module and machine control modules were incorporated in a personal computer (PC) simulation program. Altogether 100 unidentified patient records were analyzed. The sentences were grouped according to keywords, which indicate the main topics of a gastrointestinal endoscopy report. They were: "endoscope", "esophagus", "cardia", "fundus", "corpus", "antrum", "pylorus", "bulbus", and "postbulbar section", in addition to the major pathological findings: "erosion", "ulceration", and "malignancy". "Biopsy" and "diagnosis" were also included. We implemented wireless speech communication control commands for equipment including an endoscopy unit, video, monitor, printer, and PC. The recognition rate was 95%. Speech technology may soon become an integrated part of our daily routine in the endoscopy laboratory. A central speech and laboratory computer could be the most efficient alternative to having separate speech recognition units in all items of equipment.
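    The keyword grouping described above lends itself to a simple rule-based router from dictated sentences to report topics. The sketch below reuses the keywords listed in the abstract, but the grouping into sections, the matching logic, and the names are illustrative assumptions, not the published system.

```python
# Rule-based routing of dictated endoscopy sentences to report sections.
# Keywords come from the abstract; the section grouping is hypothetical.
SECTION_KEYWORDS = {
    "instrument": ["endoscope"],
    "anatomy": ["esophagus", "cardia", "fundus", "corpus", "antrum",
                "pylorus", "bulbus", "postbulbar section"],
    "pathology": ["erosion", "ulceration", "malignancy"],
    "procedure": ["biopsy", "diagnosis"],
}

def route_sentence(sentence):
    """Assign a dictated sentence to the first section whose keyword it contains."""
    text = sentence.lower()
    for section, keywords in SECTION_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return section
    return "unclassified"

print(route_sentence("Small erosion seen in the antrum"))
# -> anatomy (the first matching section in dict order wins)
```

    A real system would need to resolve such multi-keyword sentences more carefully, e.g., by tagging every matching section rather than only the first.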

  16. Self-organizing maps for measuring similarity of audiovisual speech percepts

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich

    The goal of this work is to find a way to measure similarity of audiovisual speech percepts. Phoneme-related self-organizing maps (SOM) with a rectangular basis are trained with data material from a (labeled) video film. For the training, a combination of auditory speech features and corresponding....... Dependent on the training data, these other units may also be contextually immediate neighboring units. The poster demonstrates the idea with text material spoken by one individual subject using a set of simple audio-visual features. The data material for the training process consists of 44 labeled...... sentences in German with a balanced phoneme repertoire. As a result it can be stated that (i) the SOM can be trained to map auditory and visual features in a topology-preserving way and (ii) they show strain due to the influence of other audio-visual units. The SOM can be used to measure similarity amongst...

  17. HOSPITAL SOUNDSCAPE: ACOUSTICS EVALUATION IN NEONATAL INTENSIVE CARE UNIT (NICU) ROOM OF A NATIONAL HOSPITAL IN JAKARTA, INDONESIA

    Directory of Open Access Journals (Sweden)

    SARWONO R. Sugeng Joko

    2016-12-01

    Full Text Available Acoustic comfort in a room is one of the most important building physics aspects to be observed in public spaces such as hospitals, especially in intensive care units such as the NICU. Research on the acoustic conditions of NICUs in Indonesia is still limited. The acoustical study conducted in this research uses objective, subjective, and simulation methods based on the soundscape concept, with a focus on the nurses' perception. The research was conducted at a national hospital in Jakarta. According to the National Standardization Agency of Indonesia (SNI) and the World Health Organization (WHO), the suitable sound pressure level (SPL) for noise in a patient's room is 35 dBA. The study found that the equivalent SPL exceeded this standard. The soundscape in the NICU can be improved by adding a curtain on the incubator's side, installing a glass partition, and placing a ceiling absorber in the nurse station area. Simulation showed that the SPL in the room decreased by an average of 8.9 dBA for the ventilator alarm source and 8.2 dBA for the medical staff conversation source, and that the speech transmission index (STI) improved from the "bad"-to-"good" range to the "fair"-to-"excellent" range.

  18. Enhancement of speech signals - with a focus on voiced speech models

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie

    This thesis deals with speech enhancement, i.e., noise reduction in speech signals. This has applications in, e.g., hearing aids and teleconference systems. We consider a signal-driven approach to speech enhancement where a model of the speech is assumed and filters are generated based on this model. The basic model used in this thesis is the harmonic model, which is a commonly used model for describing the voiced part of the speech signal. We show that it can be beneficial to extend the model to take inharmonicities or the non-stationarity of speech into account. Extending the model...

  19. Group delay functions and its applications in speech technology

    Indian Academy of Sciences (India)

    (iii) High resolution property: the (anti)resonance peaks (due to complex ...). Resolving power of the group delay spectrum: z-plane (a, d, g), magnitude ... ... speech signal into syllable-like units, without knowledge of the phonetic transcription.

  20. Intelligibility for Binaural Speech with Discarded Low-SNR Speech Components.

    Science.gov (United States)

    Schoenmaker, Esther; van de Par, Steven

    2016-01-01

    Speech intelligibility in multitalker settings improves when the target speaker is spatially separated from the interfering speakers. A factor that may contribute to this improvement is the improved detectability of target-speech components due to binaural interaction in analogy to the Binaural Masking Level Difference (BMLD). This would allow listeners to hear target speech components within specific time-frequency intervals that have a negative SNR, similar to the improvement in the detectability of a tone in noise when these contain disparate interaural difference cues. To investigate whether these negative-SNR target-speech components indeed contribute to speech intelligibility, a stimulus manipulation was performed where all target components were removed when local SNRs were smaller than a certain criterion value. It can be expected that for sufficiently high criterion values target speech components will be removed that do contribute to speech intelligibility. For spatially separated speakers, assuming that a BMLD-like detection advantage contributes to intelligibility, degradation in intelligibility is expected already at criterion values below 0 dB SNR. However, for collocated speakers it is expected that higher criterion values can be applied without impairing speech intelligibility. Results show that degradation of intelligibility for separated speakers is only seen for criterion values of 0 dB and above, indicating a negligible contribution of a BMLD-like detection advantage in multitalker settings. These results show that the spatial benefit is related to a spatial separation of speech components at positive local SNRs rather than to a BMLD-like detection improvement for speech components at negative local SNRs.
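    The stimulus manipulation described above (discarding target components whose local SNR falls below a criterion) can be sketched on time-frequency magnitude arrays. This is a simplified illustration of the idea, not the authors' exact signal processing chain.

```python
import numpy as np

def discard_low_snr(target_mag, masker_mag, criterion_db, eps=1e-12):
    """Zero out target time-frequency cells whose local SNR is below criterion_db.

    target_mag, masker_mag: arrays of spectral magnitudes (same shape).
    Returns the pruned target magnitudes and the fraction of cells removed.
    """
    local_snr_db = 20 * np.log10((target_mag + eps) / (masker_mag + eps))
    keep = local_snr_db >= criterion_db
    pruned = np.where(keep, target_mag, 0.0)
    removed_fraction = 1.0 - keep.mean()
    return pruned, removed_fraction

# Toy 2x3 "spectrograms": the target dominates some cells, the masker others.
target = np.array([[1.0, 0.1, 1.0], [0.1, 1.0, 0.1]])
masker = np.array([[0.1, 1.0, 1.0], [1.0, 0.1, 0.1]])

pruned, removed = discard_low_snr(target, masker, criterion_db=0.0)
print(removed)  # fraction of target cells pruned at a 0 dB criterion
```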

  1. An experimental Dutch keyboard-to-speech system for the speech impaired

    NARCIS (Netherlands)

    Deliege, R.J.H.

    1989-01-01

    An experimental Dutch keyboard-to-speech system has been developed to explore the possibilities and limitations of Dutch speech synthesis in a communication aid for the speech impaired. The system uses diphones and a formant synthesizer chip for speech synthesis. Input to the system is in

  2. Stable 1-Norm Error Minimization Based Linear Predictors for Speech Modeling

    DEFF Research Database (Denmark)

    Giacobello, Daniele; Christensen, Mads Græsbøll; Jensen, Tobias Lindstrøm

    2014-01-01

    In linear prediction of speech, the 1-norm error minimization criterion has been shown to provide a valid alternative to the 2-norm minimization criterion. However, unlike 2-norm minimization, 1-norm minimization does not guarantee the stability of the corresponding all-pole filter and can generate saturations when the filter is used to synthesize speech. In this paper, we introduce two new methods to obtain intrinsically stable predictors with 1-norm minimization. The first method is based on constraining the roots of the predictor to lie within the unit circle by reducing the numerical range ... based linear prediction for modeling and coding of speech.
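    The 1-norm error criterion mentioned above can be minimized exactly as a linear program: introduce slack variables t_n >= |e_n| and minimize their sum. The sketch below (using scipy) shows plain, unconstrained 1-norm linear prediction; it does not implement the paper's stability-constrained methods.

```python
import numpy as np
from scipy.optimize import linprog

def lp_1norm(x, order):
    """1-norm linear prediction: minimize sum_n |x[n] - sum_k a[k] x[n-k-1]|.

    Solved as an LP over (a, t) with constraints -t <= x - Xa <= t.
    Returns the predictor coefficients a (length `order`).
    """
    N = len(x)
    # Regression matrix: row for sample n holds [x[n-1], ..., x[n-order]].
    X = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    y = x[order:]
    m = len(y)
    # Variables z = [a (order), t (m)]; minimize sum(t).
    c = np.concatenate([np.zeros(order), np.ones(m)])
    # y - Xa - t <= 0  and  Xa - y - t <= 0
    A_ub = np.block([[-X, -np.eye(m)], [X, -np.eye(m)]])
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * order + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:order]

# Noiseless AR(2) signal: the LP can drive the 1-norm error to zero
# and should recover the generating coefficients.
a_true = [1.0, -0.5]
x = np.zeros(24)
x[0] = 1.0
for n in range(1, 24):
    x[n] = a_true[0] * x[n - 1] + (a_true[1] * x[n - 2] if n >= 2 else 0.0)

print(lp_1norm(x, order=2))  # close to [1.0, -0.5]
```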

  3. Commercial Speech Protection and Alcoholic Beverage Advertising.

    Science.gov (United States)

    Greer, Sue

    An examination of the laws governing commercial speech protection and alcoholic beverage advertisements, this document details the legal precedents for and implications of banning such advertising. An introduction looks at the current amount of alcohol consumed in the United States and the recent campaigns to have alcoholic beverage ads banned.…

  4. Speech Function and Speech Role in Carl Fredricksen's Dialogue on Up Movie

    OpenAIRE

    Rehana, Ridha; Silitonga, Sortha

    2013-01-01

    One aim of this article is to show, through a concrete example, how speech function and speech role are used in a movie. The illustrative example is taken from the dialogue of the movie Up. Central to the analysis is the form of dialogue in Up that contains speech functions and speech roles, i.e., statement, offer, question, command, giving, and demanding. 269 dialogues performed by the actors were interpreted, and the use of speech function and speech role was identified.

  5. Experimental comparison between speech transmission index, rapid speech transmission index, and speech intelligibility index.

    Science.gov (United States)

    Larm, Petra; Hongisto, Valtteri

    2006-02-01

    During the acoustical design of, e.g., auditoria or open-plan offices, it is important to know how speech can be perceived in various parts of the room. Different objective methods have been developed to measure and predict speech intelligibility, and these have been extensively used in various spaces. In this study, two such methods were compared, the speech transmission index (STI) and the speech intelligibility index (SII). Also the simplification of the STI, the room acoustics speech transmission index (RASTI), was considered. These quantities are all based on determining an apparent speech-to-noise ratio on selected frequency bands and summing them using a specific weighting. For comparison, some data were needed on the possible differences of these methods resulting from the calculation scheme and also measuring equipment. Their prediction accuracy was also of interest. Measurements were made in a laboratory having adjustable noise level and absorption, and in a real auditorium. It was found that the measurement equipment, especially the selection of the loudspeaker, can greatly affect the accuracy of the results. The prediction accuracy of the RASTI was found acceptable, if the input values for the prediction are accurately known, even though the studied space was not ideally diffuse.
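    The common core of these indices, a per-band apparent SNR that is clipped and combined with a band-importance weighting, can be illustrated in a few lines. This toy version clips band SNRs to +/-15 dB and maps them linearly to an audibility in [0, 1], roughly in the spirit of the SII; the band weights here are made-up placeholders, not the standardized importance values, and no modulation-transfer analysis (as in the STI) is performed.

```python
def toy_intelligibility_index(band_snrs_db, band_weights):
    """Weighted sum of per-band audibilities, each derived from an
    apparent SNR clipped to [-15, +15] dB and mapped to [0, 1]."""
    assert len(band_snrs_db) == len(band_weights)
    assert abs(sum(band_weights) - 1.0) < 1e-9  # weights must sum to one
    index = 0.0
    for snr, w in zip(band_snrs_db, band_weights):
        clipped = max(-15.0, min(15.0, snr))
        audibility = (clipped + 15.0) / 30.0
        index += w * audibility
    return index

# Four bands with hypothetical importance weights.
weights = [0.2, 0.3, 0.3, 0.2]
print(toy_intelligibility_index([20, 5, -3, -20], weights))  # ~0.52
```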

  6. Academic Freedom in Classroom Speech: A Heuristic Model for U.S. Catholic Higher Education

    Science.gov (United States)

    Jacobs, Richard M.

    2010-01-01

    As the nation's Catholic universities and colleges continually clarify their identity, this article examines academic freedom in classroom speech, offering a heuristic model for use as board members, academic administrators, and faculty leaders discuss, evaluate, and judge allegations of misconduct in classroom speech. Focusing upon the practice…

  7. Robust Speech/Non-Speech Classification in Heterogeneous Multimedia Content

    NARCIS (Netherlands)

    Huijbregts, M.A.H.; de Jong, Franciska M.G.

    In this paper we present a speech/non-speech classification method that allows high quality classification without the need to know in advance what kinds of audible non-speech events are present in an audio recording and that does not require a single parameter to be tuned on in-domain data. Because

  8. Early Recovery of Aphasia through Thrombolysis: The Significance of Spontaneous Speech.

    Science.gov (United States)

    Furlanis, Giovanni; Ridolfi, Mariana; Polverino, Paola; Menichelli, Alina; Caruso, Paola; Naccarato, Marcello; Sartori, Arianna; Torelli, Lucio; Pesavento, Valentina; Manganotti, Paolo

    2018-07-01

    Aphasia is one of the most devastating stroke-related consequences for social interaction and daily activities. Aphasia recovery in acute stroke depends on the degree of reperfusion after thrombolysis or thrombectomy. As aphasia assessment tests are often time-consuming for patients with acute stroke, physicians have been developing rapid and simple tests. The aim of our study is to evaluate the improvement of language functions in the earliest stage in patients treated with thrombolysis and in nontreated patients using our rapid screening test. Our study is a single-center prospective observational study conducted at the Stroke Unit of the University Medical Hospital of Trieste (January-December 2016). Patients treated with thrombolysis and nontreated patients underwent 3 aphasia assessments through our rapid screening test (at baseline, 24 hours, and 72 hours). The screening test assesses spontaneous speech, oral comprehension of words, reading aloud and comprehension of written words, oral comprehension of sentences, naming, repetition of words and a sentence, and writing words. The study included 40 patients: 18 patients treated with thrombolysis and 22 nontreated patients. Both groups improved over time. Among all language parameters, spontaneous speech was statistically significant between 24 and 72 hours (P value = .012), and between baseline and 72 hours (P value = .017). Our study demonstrates that patients treated with thrombolysis experience greater improvement in language than the nontreated patients. The difference between the 2 groups is increasingly evident over time. Moreover, spontaneous speech is the parameter marked by the greatest improvement. Copyright © 2018 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  9. Intelligibility of speech of children with speech and sound disorders

    OpenAIRE

    Ivetac, Tina

    2014-01-01

    The purpose of this study is to examine speech intelligibility of children with primary speech and sound disorders aged 3 to 6 years in everyday life. The research problem is based on the degree to which parents or guardians, immediate family members (sister, brother, grandparents), extended family members (aunt, uncle, cousin), child's friends, other acquaintances, child's teachers and strangers understand the speech of children with speech sound disorders. We examined whether the level ...

  10. Prevalence and etiologies of adult communication disabilities in the United States: Results from the 2012 National Health Interview Survey.

    Science.gov (United States)

    Morris, Megan A; Meier, Sarah K; Griffin, Joan M; Branda, Megan E; Phelan, Sean M

    2016-01-01

    Communication disabilities, including speech, language and voice disabilities, can significantly impact a person's quality of life, employment and health status. Despite this, little is known about the prevalence and etiology of communication disabilities in the general adult population. To assess the prevalence and etiology of communication disabilities in a nationally representative adult sample. We conducted a cross-sectional study and analyzed the responses of non-institutionalized adults to the Sample Adult Core questionnaire within the 2012 National Health Interview Survey. We used respondents' self-report of having a speech, language or voice disability within the past year and receiving a diagnosis for one of these communication disabilities, as well as the etiology of their communication disability. We additionally examined the responses by subgroups, including sex, age, race and ethnicity, and geographical area. In 2012 approximately 10% of the US adult population reported a communication disability, while only 2% of adults reported receiving a diagnosis. The rates of speech, language and voice disabilities and diagnoses varied across gender, race/ethnicity and geographic groups. The most common response for the etiology of a communication disability was "something else." Improved understanding of population prevalence and etiologies of communication disabilities will assist in appropriately directing rehabilitation and medical services, potentially reducing the burden of communication disabilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Speech disorders - children

    Science.gov (United States)

    ... disorder; Voice disorders; Vocal disorders; Disfluency; Communication disorder - speech disorder; Speech disorder - stuttering ... evaluation tools that can help identify and diagnose speech disorders: Denver Articulation Screening Examination Goldman-Fristoe Test of ...

  12. Neurophysiology of speech differences in childhood apraxia of speech.

    Science.gov (United States)

    Preston, Jonathan L; Molfese, Peter J; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia R; Landi, Nicole

    2014-01-01

    Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.

  13. The National Security Strategy Under the United Nations and International Law

    Science.gov (United States)

    2004-03-19

    a result of that war." This was addressed in 1951 by Hans Kelsen in a legal analysis of fundamental problems with the UN Charter. He concluded that...www.zmag.org/content/print_article.cfm>; Internet; accessed 31 January 2004. 36 Charter of the United Nations, Article 107. 37 Kearly, 27–28. 38 Hans Kelsen

  14. United Nations - African Union Cooperation in Conflict Prevention, Peacekeeping and Peacebuilding

    Directory of Open Access Journals (Sweden)

    Liliya Igorevna Romadan

    2015-01-01

    Full Text Available The article addresses the cooperation between the United Nations and regional organizations, in particular the African Union, in the sphere of security and the settlement of conflicts. Over the last decade the role of the AU and subregional organizations has dramatically increased. Through its agencies for ensuring peace and security, the African Union is making a significant contribution to strengthening stability and promoting democracy and human rights in Africa. At the beginning of the article the authors review the level of security on the African continent and highlight the sharpest conflict zones. According to research, one of the most turbulent regions on the continent in terms of security is North-East Africa. The continuing quarter-century war in Somalia, the conflicted relations between Somalia and Ethiopia, the border crisis between Ethiopia and Eritrea, which in the late 20th century turned into a war between the two countries, and, finally, a number of armed clashes in Sudan have attracted the special attention of the entire world community to the region. The authors focus on the cooperation between the United Nations and the African Union in settling regional conflicts and conducting peacekeeping operations. The main mechanisms and methods used by the United Nations and the African Union in peacekeeping operations are analyzed in detail. The situation in Somalia and the efforts that the United Nations and the African Union are making towards stabilization in this country are also studied. The authors describe the basic elements of, and review, the mixed multicomponent peacekeeping operation of the United Nations and the African Union in Sudan. In conclusion, the authors highlight measures that could strengthen the strategic cooperation between the United Nations and the African Union.
According to the authors, the most important task is to solve the problems of financing joint peacekeeping operations.

  15. A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments

    Directory of Open Access Journals (Sweden)

    Jing Mi

    2016-09-01

    Full Text Available Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model.

  16. A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments.

    Science.gov (United States)

    Mi, Jing; Colburn, H Steven

    2016-10-03

    Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model. © The Author(s) 2016.
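
    The EC-based mask construction described above can be sketched in a few lines. This is a simplification under an explicit assumption (the target is taken to be at zero interaural delay, so "equalization" reduces to the identity and cancellation is simply the difference between the two ears); the paper's model operates on filtered, delayed, and gain-adjusted signals:

```python
import math

def ec_binary_mask(left_tf, right_tf, thresh_db=3.0):
    """Sketch of an EC-based binary mask. Cancelling the target (here
    L - R per T-F unit, with the target assumed at zero interaural
    delay) removes most energy in units the target dominates; a large
    input-to-output energy drop therefore flags a target-dominated
    time-frequency unit, which the binary decision rule retains."""
    mask = []
    for l, r in zip(left_tf, right_tf):
        in_energy = abs(l) ** 2 + abs(r) ** 2
        out_energy = abs(l - r) ** 2
        drop_db = 10 * math.log10((in_energy + 1e-12) / (out_energy + 1e-12))
        mask.append(1 if drop_db > thresh_db else 0)
    return mask

# Two toy T-F units: the first target-dominated (identical at both
# ears), the second masker-dominated (out of phase across ears).
mask = ec_binary_mask([1.0, 1.0], [1.0, -1.0])  # -> [1, 0]
```

    The threshold value and the per-unit formulation are illustrative choices, not the model's calibrated parameters.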

  17. Parameter masks for close talk speech segregation using deep neural networks

    Directory of Open Access Journals (Sweden)

    Jiang Yi

    2015-01-01

    Full Text Available A deep neural network (DNN) based close-talk speech segregation algorithm is introduced. One nearby microphone is used to collect the target speech, as "close talk" indicates, and another microphone is used to capture the noise in the environment. The time and energy differences between the two microphones' signals are used as the segregation cue. A DNN estimator on each frequency channel is used to calculate the parameter masks, which represent the target speech energy in each time-frequency (T-F) unit. Experimental results show the good performance of the proposed system: the signal-to-noise ratio (SNR) improvement is 8.1 dB in the 0 dB noisy environment.
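
    The "parameter mask" the DNN estimates can be illustrated with the common ideal-ratio-mask definition (an assumption; the paper's exact target may differ): the fraction of each T-F unit's energy that belongs to the target speech, which a trained network would predict from the two-microphone cues:

```python
def parameter_mask(target_energy, noise_energy):
    """Per-T-F-unit soft mask: fraction of energy belonging to the
    target. Values near 1 mark target-dominated units, near 0
    noise-dominated units; a DNN estimator would approximate these
    values from observed cues rather than from the true energies."""
    return [t / (t + n + 1e-12) for t, n in zip(target_energy, noise_energy)]

# Toy T-F units: target-dominated, noise-dominated, and evenly mixed.
m = parameter_mask([4.0, 0.0, 1.0], [0.0, 4.0, 1.0])
```

    Applying such a mask to the mixture attenuates noise-dominated units, which is what produces the SNR improvement reported above.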

  18. Toward a Pax Universalis: A Historical Critique of the National Military Strategy for the 1990s

    Science.gov (United States)

    1992-04-01

    some to be ethnocentric and perhaps neo-imperialistic. It is not intended to be so. In a speech to the United Nations on 23 September 1991, President...intelligence. As an example, at Adrianople in A.D. 378, Valens thought he was facing only 10,000 Visigoths; the enemy force actually numbered many more than

  19. Denmark's National Inventory Reports. Submitted under the United Nations framework convention on climate change

    International Nuclear Information System (INIS)

    Boll Illerup, J.; Lyck, E.; Winther, M.; Rasmussen, E.

    2000-01-01

    This report is Denmark's National Inventory Report reported to the Conference of the Parties under the United Nations Framework Convention on Climate Change (UNFCCC), due by 15 April 2000. The report contains information on Denmark's inventories for all years from 1990 to 1998 for CO2, CH4, N2O, NOx, CO, NMVOC, SO2, HFCs, PFCs and SF. (au)

  20. Population Health in Pediatric Speech and Language Disorders: Available Data Sources and a Research Agenda for the Field.

    Science.gov (United States)

    Raghavan, Ramesh; Camarata, Stephen; White, Karl; Barbaresi, William; Parish, Susan; Krahn, Gloria

    2018-05-17

    The aim of the study was to provide an overview of population science as applied to speech and language disorders, illustrate data sources, and advance a research agenda on the epidemiology of these conditions. Computer-aided database searches were performed to identify key national surveys and other sources of data necessary to establish the incidence, prevalence, and course and outcome of speech and language disorders. This article also summarizes a research agenda that could enhance our understanding of the epidemiology of these disorders. Although the data yielded estimates of prevalence and incidence for speech and language disorders, existing sources of data are inadequate to establish reliable rates of incidence, prevalence, and outcomes for speech and language disorders at the population level. Greater support for inclusion of speech and language disorder-relevant questions is necessary in national health surveys to build the population science in the field.

  1. Attitudes of Turkish speech and language therapists toward stuttering.

    Science.gov (United States)

    Maviş, Ilknur; St Louis, Kenneth O; Özdemir, Sertan; Toğram, Bülent

    2013-06-01

    The study sought to identify clinical beliefs and attitudes of speech and language therapists (SLTs) in Turkey and to compare them to previous research on SLTs in the USA and UK. The Clinician Attitudes Toward Stuttering (CATS) inventory was administered by mail to nearly all practicing SLTs in Turkey (n=61). Turkish SLTs, whose caseloads contained a substantial number of people who stutter, agreed with CATS items suggesting psychological causes and problems for those who stutter. They strongly believed therapy should focus on parent counseling for preschoolers who stutter as well as adolescents. They were not optimistic about carrying out stuttering therapy nor about the likelihood that children who stutter could be effectively treated. Attitudes toward stuttering by clinicians who treat them are important considerations in the conduct and outcomes of stuttering therapy. Additionally, SLTs working with stuttering clients should benefit from professional views and clinical experiences of their colleagues from surveys such as this one. The reader will be able to describe: (a) the components of the CATS, (b) common themes in Turkish speech and language therapists' attitudes toward stuttering, (c) differences between the attitudes of speech and language therapists from Turkey versus the United States and the United Kingdom. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. United Nations Environment Programme. Annual Review 1981.

    Science.gov (United States)

    United Nations Environment Programme, Nairobi (Kenya).

    This edition of the United Nations Environment Programme (UNEP) annual report is structured in three parts. Part 1 focuses on three contemporary problems (ground water, toxic chemicals and human food chains and environmental economics) and attempts to solve them. Also included is a modified extract of "The Annual State of the Environment…

  3. Unit: Petroleum, Inspection Pack, National Trial Print.

    Science.gov (United States)

    Australian Science Education Project, Toorak, Victoria.

    This is a National Trial Print of a unit on petroleum developed for the Australian Science Education Project. The package contains the teacher's edition of the written material and a script for a film entitled "The Extraordinary Experience of Nicholas Nodwell" emphasizing the uses of petroleum and petroleum products in daily life and…

  4. Reducing language to rhythm: Amazonian Bora drummed language exploits speech rhythm for long-distance communication

    Science.gov (United States)

    Seifart, Frank; Meyer, Julien; Grawunder, Sven; Dentel, Laure

    2018-04-01

    Many drum communication systems around the world transmit information by emulating tonal and rhythmic patterns of spoken languages in sequences of drumbeats. Their rhythmic characteristics, in particular, have not been systematically studied so far, although understanding them represents a rare occasion for providing an original insight into the basic units of speech rhythm as selected by natural speech practices directly based on beats. Here, we analyse a corpus of Bora drum communication from the northwest Amazon, which is nowadays endangered with extinction. We show that four rhythmic units are encoded in the length of pauses between beats. We argue that these units correspond to vowel-to-vowel intervals with different numbers of consonants and vowel lengths. By contrast, aligning beats with syllables, mora or only vowel length yields inconsistent results. Moreover, we also show that Bora drummed messages conventionally select rhythmically distinct markers to further distinguish words. The two phonological tones represented in drummed speech encode only few lexical contrasts. Rhythm thus appears to crucially contribute to the intelligibility of drummed Bora. Our study provides novel evidence for the role of rhythmic structures composed of vowel-to-vowel intervals in the complex puzzle concerning the redundancy and distinctiveness of acoustic features embedded in speech.
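
    The finding above, that four rhythmic units are encoded in the length of pauses between beats, amounts to a simple decoding rule: assign each inter-beat pause to the nearest unit category. The sketch below is purely illustrative; the category labels and durations are made up, not the measured Bora values:

```python
# Hypothetical rhythmic unit categories (vowel-to-vowel intervals with
# different consonant counts / vowel lengths) and assumed durations.
UNIT_DURATIONS = {        # seconds, illustrative only
    "V.V":   0.20,        # no intervening consonant
    "VC.V":  0.30,        # one consonant
    "VCC.V": 0.40,        # consonant cluster
    "V:.V":  0.55,        # long vowel
}

def classify_pause(pause_s):
    """Assign a beat-to-beat pause to the nearest rhythmic unit."""
    return min(UNIT_DURATIONS, key=lambda u: abs(UNIT_DURATIONS[u] - pause_s))

sequence = [0.21, 0.52, 0.33]
units = [classify_pause(p) for p in sequence]  # -> ['V.V', 'V:.V', 'VC.V']
```

    The paper's actual analysis aligns measured pause distributions with phonological interval types; this nearest-category rule only conveys the shape of that mapping.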

  5. Perceptual effects of noise reduction by time-frequency masking of noisy speech.

    Science.gov (United States)

    Brons, Inge; Houben, Rolph; Dreschler, Wouter A

    2012-10-01

    Time-frequency masking is a method for noise reduction that is based on the time-frequency representation of a speech-in-noise signal. Depending on the estimated signal-to-noise ratio (SNR), each time-frequency unit is either attenuated or not. A special type of time-frequency mask is the ideal binary mask (IBM), which has access to the real SNR (ideal). The IBM either retains or removes each time-frequency unit (binary mask). The IBM provides large improvements in speech intelligibility and is a valuable tool for investigating how different factors influence intelligibility. This study extends the standard outcome measure (speech intelligibility) with additional perceptual measures relevant for noise reduction: listening effort, noise annoyance, speech naturalness, and overall preference. Four types of time-frequency masking were evaluated: the original IBM; a tempered version of the IBM (called ITM), which applies limited and non-binary attenuation; and non-ideal masking (also tempered) with two different types of noise-estimation algorithms. The results from ideal masking imply that there is a trade-off between intelligibility and sound quality, which depends on the attenuation strength. Additionally, the results for non-ideal masking suggest that subjective measures can show effects of noise reduction even if noise reduction does not lead to differences in intelligibility.
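
    The IBM's retain-or-remove rule, and the contrast with a tempered mask, can be sketched directly. The tempering rule below (attenuate rejected units to a floor gain rather than removing them) is an assumption in the spirit of the ITM described above, not the study's exact formulation:

```python
def ideal_binary_mask(snr_db, lc_db=0.0):
    """IBM: retain (1) a time-frequency unit whose true local SNR
    exceeds the local criterion lc_db, otherwise remove it (0)."""
    return [1.0 if s > lc_db else 0.0 for s in snr_db]

def tempered_mask(snr_db, lc_db=0.0, floor=0.2):
    """Assumed ITM-style tempering: rejected units are attenuated to a
    floor gain instead of being removed outright, trading some noise
    suppression for sound quality."""
    return [1.0 if s > lc_db else floor for s in snr_db]

local_snr = [6.0, -3.0, 0.5]          # true per-unit SNR in dB
ibm = ideal_binary_mask(local_snr)    # -> [1.0, 0.0, 1.0]
itm = tempered_mask(local_snr)        # -> [1.0, 0.2, 1.0]
```

    The attenuation strength (here the `floor` parameter) is exactly the knob behind the intelligibility-versus-quality trade-off the study reports.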

  6. Atypical Speech and Language Development: A Consensus Study on Clinical Signs in the Netherlands

    Science.gov (United States)

    Visser-Bochane, Margot I.; Gerrits, Ellen; van der Schans, Cees P.; Reijneveld, Sijmen A.; Luinge, Margreet R.

    2017-01-01

    Background: Atypical speech and language development is one of the most common developmental difficulties in young children. However, which clinical signs characterize atypical speech-language development at what age is not clear. Aim: To achieve a national and valid consensus on clinical signs and red flags (i.e. most urgent clinical signs) for…

  7. Listeners Experience Linguistic Masking Release in Noise-Vocoded Speech-in-Speech Recognition

    Science.gov (United States)

    Viswanathan, Navin; Kokkinakis, Kostas; Williams, Brittany T.

    2018-01-01

    Purpose: The purpose of this study was to evaluate whether listeners with normal hearing perceiving noise-vocoded speech-in-speech demonstrate better intelligibility of target speech when the background speech was mismatched in language (linguistic release from masking [LRM]) and/or location (spatial release from masking [SRM]) relative to the…

  8. Concert | United Nations Orchestra at CERN | 19 September

    CERN Multimedia

    2014-01-01

    The United Nations Orchestra will give a concert on the occasion of CERN’s 60th anniversary.   Under the baton of conductor and artistic director Antoine Marguier, the Orchestra will have the pleasure to accompany the soloist Maestro Matteo Fedeli, who, under the patronage of the Permanent Mission of Italy to the United Nations, will perform on a Stradivarius violin. The programme for the concert comprises: Jacques Offenbach, Orpheus in the Underworld Overture Franz von Suppé, Poet and Peasant Overture Camille Saint-Saëns, Introduction & Rondo Capriccioso for solo violin and orchestra Georges Bizet, Carmen Suite No. 1 Franz Lehár, Gold and Silver Waltz Gioachino Rossini, William Tell Overture   Doors open at 6 p.m. The concert will take place in a marquee behind the Globe of Science and Innovation, CERN Book your ticket here.

  9. The United Nations Basic Space Science Initiative

    Science.gov (United States)

    Haubold, Hans; Balogh, Werner

    2014-05-01

    The basic space science initiative was a long-term effort for the development of astronomy and space science through regional and international cooperation in this field on a worldwide basis, particularly in developing nations. Basic space science workshops were co-sponsored and co-organized by ESA, JAXA, and NASA. A series of workshops on basic space science was held from 1991 to 2004 (India 1991, Costa Rica and Colombia 1992, Nigeria 1993, Egypt 1994, Sri Lanka 1995, Germany 1996, Honduras 1997, Jordan 1999, France 2000, Mauritius 2001, Argentina 2002, and China 2004; http://neutrino.aquaphoenix.com/un-esa/) and addressed the status of astronomy in Asia and the Pacific, Latin America and the Caribbean, Africa, and Western Asia. Through the lead of the National Astronomical Observatory Japan, astronomical telescope facilities were inaugurated in seven developing nations and planetariums were established in twenty developing nations based on the donation of respective equipment by Japan. Pursuant to resolutions of the Committee on the Peaceful Uses of Outer Space of the United Nations (COPUOS) and its Scientific and Technical Subcommittee, since 2005, these workshops focused on the preparations for and the follow-ups to the International Heliophysical Year 2007 (UAE 2005, India 2006, Japan 2007, Bulgaria 2008, South Korea 2009; www.unoosa.org/oosa/SAP/bss/ihy2007/index.html). IHY's legacy is the current operation of 16 worldwide instrument arrays with more than 1000 instruments recording data on solar-terrestrial interaction from coronal mass ejections to variations of the total electron content in the ionosphere (http://iswisecretariat.org/). Instruments are provided to hosting institutions by entities of Armenia, Brazil, France, Israel, Japan, Switzerland, and the United States. Starting in 2010, the workshops focused on the International Space Weather Initiative (ISWI) as mandated in a three-year work plan as part of the deliberations of COPUOS.
Workshops on ISWI

  10. Speech Perception and Short-Term Memory Deficits in Persistent Developmental Speech Disorder

    Science.gov (United States)

    Kenney, Mary Kay; Barac-Cikoja, Dragana; Finnegan, Kimberly; Jeffries, Neal; Ludlow, Christy L.

    2006-01-01

    Children with developmental speech disorders may have additional deficits in speech perception and/or short-term memory. To determine whether these are only transient developmental delays that can accompany the disorder in childhood or persist as part of the speech disorder, adults with a persistent familial speech disorder were tested on speech…

  11. The United Nations Basic Space Science Initiative

    Science.gov (United States)

    Haubold, H. J.

    2006-08-01

    Pursuant to recommendations of the United Nations Conference on the Exploration and Peaceful Uses of Outer Space (UNISPACE III) and deliberations of the United Nations Committee on the Peaceful Uses of Outer Space (UNCOPUOS), annual UN/European Space Agency workshops on basic space science have been held around the world since 1991. These workshops contribute to the development of astrophysics and space science, particularly in developing nations. Following a process of prioritization, the workshops identified the following elements as particularly important for international cooperation in the field: (i) operation of astronomical telescope facilities implementing TRIPOD, (ii) virtual observatories, (iii) astrophysical data systems, (iv) concurrent design capabilities for the development of international space missions, and (v) theoretical astrophysics such as applications of nonextensive statistical mechanics. Beginning in 2005, the workshops focus on preparations for the International Heliophysical Year 2007 (IHY2007). The workshops continue to facilitate the establishment of astronomical telescope facilities as pursued by Japan and the development of low-cost, ground-based, world-wide instrument arrays as led by the IHY secretariat. Wamsteker, W., Albrecht, R. and Haubold, H.J.: Developing Basic Space Science World-Wide: A Decade of UN/ESA Workshops. Kluwer Academic Publishers, Dordrecht 2004. http://ihy2007.org http://www.unoosa.org/oosa/en/SAP/bss/ihy2007/index.html http://www.cbpf.br/GrupPesq/StatisticalPhys/biblio.htm

  12. Automatic speech recognition (ASR) based approach for speech therapy of aphasic patients: A review

    Science.gov (United States)

    Jamal, Norezmi; Shanta, Shahnoor; Mahmud, Farhanahani; Sha'abani, MNAH

    2017-09-01

    This paper reviews state-of-the-art automatic speech recognition (ASR) based approaches for speech therapy of aphasic patients. Aphasia is a condition in which the affected person suffers from a speech and language disorder resulting from a stroke or brain injury. Since there is a growing body of evidence indicating the possibility of improving the symptoms at an early stage, ASR-based solutions are increasingly being researched for speech and language therapy. ASR is a technology that converts human speech into transcript text by matching it with the system's library. This is particularly useful in speech rehabilitation therapies, as it provides an accurate, real-time evaluation of speech input from an individual with a speech disorder. ASR-based approaches for speech therapy recognize the speech input from the aphasic patient and provide real-time feedback on their mistakes. However, the accuracy of ASR depends on many factors such as phoneme recognition, speech continuity, speaker and environmental differences, as well as the depth of our knowledge of human language understanding. Hence, the review examines recent developments in ASR technologies and their performance for individuals with speech and language disorders.

  13. Performing mathematics activities with non-standard units of measurement using robots controlled via speech-generating devices: three case studies.

    Science.gov (United States)

    Adams, Kim D; Cook, Albert M

    2017-07-01

    Purpose: To examine how using a Lego robot controlled via a speech-generating device (SGD) can contribute to how students with physical and communication impairments perform hands-on and communicative mathematics measurement activities. This study was a follow-up to a previous study. Method: Three students with cerebral palsy used the robot to measure objects using non-standard units, such as straws, and then compared and ordered the objects using the resulting measurement. Their performance was assessed, and the manipulation and communication events were observed. Teachers and education assistants were interviewed regarding robot use. Results: Similar benefits to the previous study were found. Gaps in student procedural knowledge were identified, such as knowing to place measurement units tip-to-tip, and students' reporting revealed gaps in conceptual understanding. However, performance improved with repeated practice. Stakeholders identified that some robot tasks took too long or were too difficult to perform. Conclusions: Having access to both their SGD and a robot gave the students multiple ways to show their understanding of the measurement concepts. Though they could participate actively in the new mathematics activities, robot use is most appropriate in short tasks requiring reasonable operational skill. Implications for Rehabilitation: Lego robots controlled via speech-generating devices (SGDs) can help students to engage in the mathematics pedagogy of performing hands-on activities while communicating about concepts. Students can "show what they know" using the Lego robots, and report and reflect on concepts using the SGD. Level 1 and Level 2 mathematics measurement activities have been adapted to be accomplished by the Lego robot. Other activities can likely be accomplished with similar robot adaptations (e.g., gripper, pen). It is not recommended to use the robot to measure items that are long, or perform measurements that require high

  14. Combined Aphasia and Apraxia of Speech Treatment (CAAST): effects of a novel therapy.

    Science.gov (United States)

    Wambaugh, Julie L; Wright, Sandra; Nessler, Christina; Mauszycki, Shannon C

    2014-12-01

    This investigation was designed to examine the effects of a newly developed treatment for aphasia and acquired apraxia of speech (AOS). Combined Aphasia and Apraxia of Speech Treatment (CAAST) targets language and speech production simultaneously, with treatment techniques derived from Response Elaboration Training (Kearns, 1985) and Sound Production Treatment (Wambaugh, Kalinyak-Fliszar, West, & Doyle, 1998). The purpose of this study was to determine whether CAAST was associated with positive changes in verbal language and speech production with speakers with aphasia and AOS. Four participants with chronic aphasia and AOS received CAAST applied sequentially to sets of pictures in the context of multiple baseline designs. CAAST entailed elaboration of participant-initiated utterances, with sound production training applied as needed to the elaborated productions. The dependent variables were (a) production of correct information units (CIUs; Nicholas & Brookshire, 1993) in response to experimental picture stimuli, (b) percentage of consonants correct in sentence repetition, and (c) speech intelligibility. CAAST was associated with increased CIU production in trained and untrained picture sets for all participants. Gains in sound production accuracy and speech intelligibility varied across participants; a modification of CAAST to provide additional speech production treatment may be desirable.

  15. Speech and Language Delay

    Science.gov (United States)

    ... Speech and Language Delay ... What is a speech and language delay? A speech and language delay ...

  16. Plasticity in the Human Speech Motor System Drives Changes in Speech Perception

    Science.gov (United States)

    Lametti, Daniel R.; Rochet-Capellan, Amélie; Neufeld, Emily; Shiller, Douglas M.

    2014-01-01

    Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback and they were subsequently tested for perceptual change. In two separate experiments, involving two different auditory perceptual continua, we show that changes in the speech motor system that accompany learning drive changes in auditory speech perception. Specifically, we obtained changes in speech perception when adaptation to altered auditory feedback led to speech production that fell into the phonetic range of the speech perceptual tests. However, a similar change in perception was not observed when the auditory feedback that subjects received during learning fell into the phonetic range of the perceptual tests. This indicates that the central motor outflow associated with vocal sensorimotor adaptation drives changes to the perceptual classification of speech sounds. PMID:25080594

  17. Childhood apraxia of speech: A survey of praxis and typical speech characteristics.

    Science.gov (United States)

    Malmenholt, Ann; Lohmander, Anette; McAllister, Anita

    2017-07-01

    The purpose of this study was to investigate current knowledge of the diagnosis childhood apraxia of speech (CAS) in Sweden and compare speech characteristics and symptoms to those of earlier surveys among mainly English speakers. In a web-based questionnaire, 178 Swedish speech-language pathologists (SLPs) anonymously answered questions about their perception of typical speech characteristics for CAS. They rated their own assessment skills and estimated the clinical occurrence of CAS. The seven top speech characteristics reported as typical for children with CAS were: inconsistent speech production (85%), sequencing difficulties (71%), oro-motor deficits (63%), vowel errors (62%), voicing errors (61%), consonant cluster deletions (54%), and prosodic disturbance (53%). Motor-programming deficits described as lack of automatization of speech movements were perceived by 82%. All listed characteristics were consistent with the American Speech-Language-Hearing Association (ASHA) consensus-based features, Strand's 10-point checklist, and the diagnostic model proposed by Ozanne. The mode for clinical occurrence was 5%. The number of suspected cases of CAS in the clinical caseload was approximately one new patient per year per SLP. The results support and add to findings from studies of CAS in English-speaking children with similar speech characteristics regarded as typical. Possibly, these findings could contribute to cross-linguistic consensus on CAS characteristics.

  18. Speech-specific tuning of neurons in human superior temporal gyrus.

    Science.gov (United States)

    Chan, Alexander M; Dykstra, Andrew R; Jayaram, Vinay; Leonard, Matthew K; Travis, Katherine E; Gygi, Brian; Baker, Janet M; Eskandar, Emad; Hochberg, Leigh R; Halgren, Eric; Cash, Sydney S

    2014-10-01

    How the brain extracts words from auditory signals is an unanswered question. We recorded approximately 150 single and multi-units from the left anterior superior temporal gyrus of a patient during multiple auditory experiments. Against low background activity, 45% of units robustly fired to particular spoken words with little or no response to pure tones, noise-vocoded speech, or environmental sounds. Many units were tuned to complex but specific sets of phonemes, which were influenced by local context but invariant to speaker, and suppressed during self-produced speech. The firing of several units to specific visual letters was correlated with their response to the corresponding auditory phonemes, providing the first direct neural evidence for phonological recoding during reading. Maximal decoding of individual phoneme and word identities was attained using firing rates from approximately 5 neurons within 200 ms after word onset. Thus, neurons in human superior temporal gyrus use sparse spatially organized population encoding of complex acoustic-phonetic features to help recognize auditory and visual words. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
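
    Decoding word identity from the firing rates of a few neurons can be illustrated with a nearest-centroid classifier over spike-count vectors. This is a generic sketch of that style of population decoding, not the authors' method; the labels and spike counts are invented.

```python
def centroid(vectors):
    """Element-wise mean of a list of firing-rate vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid(train, x):
    """train: {label: [rate_vector, ...]}. Returns the label whose training
    centroid is closest (squared Euclidean distance) to test vector x."""
    cents = {label: centroid(vs) for label, vs in train.items()}
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(cents, key=lambda label: dist(cents[label]))

# Invented spike counts from 5 units in a 200 ms window after word onset
train = {
    "dog": [[12, 3, 0, 7, 1], [10, 4, 1, 6, 0], [11, 2, 0, 8, 1]],
    "cat": [[2, 9, 6, 1, 5], [3, 8, 7, 0, 4], [1, 10, 5, 2, 6]],
}
print(nearest_centroid(train, [11, 3, 1, 7, 0]))  # prints "dog"
```

    With sparse, word-selective units like those reported, even a handful of rate dimensions can separate classes well, which is consistent with the ~5-neuron decoding figure in the abstract.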

  19. Unifying Speech and Language in a Developmentally Sensitive Model of Production.

    Science.gov (United States)

    Redford, Melissa A

    2015-11-01

    Speaking is an intentional activity. It is also a complex motor skill; one that exhibits protracted development and the fully automatic character of an overlearned behavior. Together these observations suggest an analogy with skilled behavior in the non-language domain. This analogy is used here to argue for a model of production that is grounded in the activity of speaking and structured during language acquisition. The focus is on the plan that controls the execution of fluent speech; specifically, on the units that are activated during the production of an intonational phrase. These units are schemas: temporally structured sequences of remembered actions and their sensory outcomes. Schemas are activated and inhibited via associated goals, which are linked to specific meanings. Schemas may fuse together over developmental time with repeated use to form larger units, thereby affecting the relative timing of sequential action in participating schemas. In this way, the hierarchical structure of the speech plan and ensuing rhythm patterns of speech are a product of development. Individual schemas may also become differentiated during development, but only if subsequences are associated with meaning. The necessary association of action and meaning gives rise to assumptions about the primacy of certain linguistic forms in the production process. Overall, schema representations connect usage-based theories of language to the action of speaking.

  20. Evidence-based speech-language pathology practices in schools: findings from a national survey.

    Science.gov (United States)

    Hoffman, Lavae M; Ireland, Marie; Hall-Mills, Shannon; Flynn, Perry

    2013-07-01

    This study documented evidence-based practice (EBP) patterns as reported by speech-language pathologists (SLPs) employed in public schools during 2010-2011. Using an online survey, practitioners reported their EBP training experiences, resources available in their workplaces, and the frequency with which they engage in specific EBP activities, as well as their resource needs and future training format preferences. A total of 2,762 SLPs in 28 states participated in the online survey, 85% of whom reported holding the Certificate of Clinical Competence in Speech-Language Pathology credential. Results revealed that one quarter of survey respondents had no formal training in EBP, 11% of SLPs worked in school districts with official EBP procedural guidelines, and 91% had no scheduled time to support EBP activities. The majority of SLPs posed and researched 0 to 2 EBP questions per year and read 0 to 4 American Speech-Language-Hearing Association (ASHA) journal articles per year on either assessment or intervention topics. Use of ASHA online resources and engagement in EBP activities were documented to be low. However, results also revealed that school-based SLPs have high interest in additional training and resources to support scientifically based practices. Suggestions for enhancing EBP support in public schools and augmenting knowledge transfer are provided.

  1. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by the McGurk effect in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or be applied to all stimuli in general. To investigate...... of audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect. When observers were naïve, they had little motivation to look at the face. When informed, they knew that the face was relevant for the task and this could increase...... visual detection task. In our first experiment, observers presented with congruent and incongruent audiovisual sine-wave speech stimuli showed a McGurk effect only when informed of the speech nature of the stimulus. Performance on the secondary visual task was very good, thus supporting the finding...

  2. Speech-Language Therapy (For Parents)

    Science.gov (United States)


  3. Annual Report of the United Nations Joint Staff Pension Board. The Report Made In 1974

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1974-11-06

    Pursuant to the requirement in Article 14(a) of the Regulations of the United Nations Joint Staff Pension Fund that the United Nations Joint Staff Pension Board present an annual report to the General Assembly of the United Nations and to the member organizations of the Fund, the United Nations has published the report presented by the Board in 1974 as Supplement No. 9 to the Official Records of the General Assembly: Twenty-Ninth Session (A/9609). The report has thus already been communicated to Governments. However, if any Member should require additional copies, the Secretariat is ready to obtain them.

  4. Annual Report of the United Nations Joint Staff Pension Board. The Report made in 1975

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1975-11-27

    Pursuant to the requirement in Article 14(a) of the Regulations of the United Nations Joint Staff Pension Fund that the United Nations Joint Staff Pension Board present an annual report to the General Assembly of the United Nations and to the member organizations of the Fund, the United Nations has published the report presented by the Board in 1975 as Supplement No. 9 to the Official Records of the General Assembly: Thirtieth Session (A/10009). The report has thus already been communicated to Governments. However, if any Member should require additional copies, the Secretariat is ready to obtain them.

  5. Annual Report of the United Nations Joint Staff Pension Board. The Report made in 1972

    International Nuclear Information System (INIS)

    1972-01-01

    Pursuant to the requirement in Article 14 of the Regulations of the United Nations Joint Staff Pension Fund that the United Nations Joint Staff Pension Board present an annual report to the General Assembly of the United Nations and to the member organizations of the Fund, the United Nations has published the report presented by the Board in 1972 as Supplement No. 9 to the Official Records of the General Assembly: Twenty-Seventh Session (A/8709). The report has thus already been communicated to Governments. However, if any Member should require additional copies, the Secretariat is ready to obtain them

  6. Annual Report of the United Nations Joint Staff Pension Board. The Report Made In 1974

    International Nuclear Information System (INIS)

    1974-01-01

    Pursuant to the requirement in Article 14(a) of the Regulations of the United Nations Joint Staff Pension Fund that the United Nations Joint Staff Pension Board present an annual report to the General Assembly of the United Nations and to the member organizations of the Fund, the United Nations has published the report presented by the Board in 1974 as Supplement No. 9 to the Official Records of the General Assembly: Twenty-Ninth Session (A/9609). The report has thus already been communicated to Governments. However, if any Member should require additional copies, the Secretariat is ready to obtain them

  7. Annual Report of the United Nations Joint Staff Pension Board. The Report made in 1975

    International Nuclear Information System (INIS)

    1975-01-01

    Pursuant to the requirement in Article 14(a) of the Regulations of the United Nations Joint Staff Pension Fund that the United Nations Joint Staff Pension Board present an annual report to the General Assembly of the United Nations and to the member organizations of the Fund, the United Nations has published the report presented by the Board in 1975 as Supplement No. 9 to the Official Records of the General Assembly: Thirtieth Session (A/10009). The report has thus already been communicated to Governments. However, if any Member should require additional copies, the Secretariat is ready to obtain them

  8. Annual Report of the United Nations Joint Staff Pension Board. The Report made in 1972

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1972-11-28

    Pursuant to the requirement in Article 14 of the Regulations of the United Nations Joint Staff Pension Fund that the United Nations Joint Staff Pension Board present an annual report to the General Assembly of the United Nations and to the member organizations of the Fund, the United Nations has published the report presented by the Board in 1972 as Supplement No. 9 to the Official Records of the General Assembly: Twenty-Seventh Session (A/8709). The report has thus already been communicated to Governments. However, if any Member should require additional copies, the Secretariat is ready to obtain them.

  9. Digital speech processing using Matlab

    CERN Document Server

    Gopi, E S

    2014-01-01

    Digital Speech Processing Using Matlab deals with digital speech pattern recognition, speech production model, speech feature extraction, and speech compression. The book is written in a manner that is suitable for beginners pursuing basic research in digital speech processing. Matlab illustrations are provided for most topics to enable better understanding of concepts. This book also deals with the basic pattern recognition techniques (illustrated with speech signals using Matlab) such as PCA, LDA, ICA, SVM, HMM, GMM, BPN, and KSOM.
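
    As a taste of one technique on the book's list, a first principal component (PCA) can be extracted from a small set of feature vectors by power iteration on their covariance matrix. The book's examples are in Matlab; this is a hypothetical plain-Python sketch with toy data, not material from the book.

```python
def first_principal_component(data, iters=200):
    """Leading eigenvector of the covariance matrix via power iteration.
    data: list of equal-length feature vectors (e.g. speech features)."""
    dim, n = len(data[0]), len(data)
    means = [sum(row[j] for row in data) / n for j in range(dim)]
    centered = [[row[j] - means[j] for j in range(dim)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(dim)]
           for i in range(dim)]
    v = [1.0] * dim
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(dim)) for i in range(dim)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy 2-D feature data stretched along the y = x direction
data = [[0, 0], [1, 1.1], [2, 1.9], [3, 3.2], [4, 3.9]]
pc1 = first_principal_component(data)
# pc1 points (up to sign) roughly along [0.707, 0.707]
```

    In practice one would use a library eigendecomposition (e.g. Matlab's eig), but the power-iteration form makes the underlying idea of the dominant variance direction explicit.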

  10. USGS Governmental Unit Boundaries Overlay Map Service from The National Map

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The USGS Governmental Unit Boundaries service from The National Map (TNM) represents major civil areas for the Nation, including States or Territories, counties (or...

  11. CERN’s new seat at the United Nations

    CERN Multimedia

    Antonella Del Rosso

    2013-01-01

    At the end of December, the General Assembly of the United Nations in New York granted CERN Observer status. As the only science organisation to acquire this prestigious status in the Assembly, CERN hopes to be able to raise awareness about the importance of fundamental science for society more effectively.   “Both CERN and the United Nations are committed to promoting science as a driving element for society. Both organisations promote dialogue between different cultures and can propose concrete models for peaceful cooperation towards objectives that benefit society as a whole,” says Maurizio Bona, CERN's officer in charge of relations with international organisations. Although the basic motivations are clear, obtaining the prestigious status from the UN was a long process that required negotiations and diplomatic work. Following some preliminary contacts with Switzerland starting in spring 2012, the resolution to grant observer status to CERN was jointly submitted...

  12. Developmental apraxia of speech in children. Quantitive assessment of speech characteristics

    NARCIS (Netherlands)

    Thoonen, G.H.J.

    1998-01-01

    Developmental apraxia of speech (DAS) in children is a speech disorder, supposed to have a neurological origin, which is commonly considered to result from particular deficits in speech processing (i.e., phonological planning, motor programming). However, the label DAS has often been used as

  13. Origins of a national seismic system in the United States

    Science.gov (United States)

    Filson, John R.; Arabasz, Walter J.

    2016-01-01

    This historical review traces the origins of the current national seismic system in the United States, a cooperative effort that unifies national, regional, and local‐scale seismic monitoring within the structure of the Advanced National Seismic System (ANSS). The review covers (1) the history and technological evolution of U.S. seismic networks leading up to the 1990s, (2) factors that made the 1960s and 1970s a watershed period for national attention to seismology, earthquake hazards, and seismic monitoring, (3) genesis of the vision of a national seismic system during 1980–1983, (4) obstacles and breakthroughs during 1984–1989, (5) consensus building and convergence during 1990–1992, and finally (6) the two‐step realization of a national system during 1993–2000. Particular importance is placed on developments during the period between 1980 and 1993 that culminated in the adoption of a charter for the Council of the National Seismic System (CNSS)—the foundation for the later ANSS. Central to this story is how many individuals worked together toward a common goal of a more rational and sustainable approach to national earthquake monitoring in the United States. The review ends with the emergence of ANSS during 1999 and 2000 and its statutory authorization by Congress in November 2000.

  14. Statement to the 54th session of the United Nations General Assembly. United Nations, New York, 4 November 1999

    International Nuclear Information System (INIS)

    ElBaradei, M.

    1999-01-01

    In his Statement to the 54th Session of the United Nations General Assembly (New York, 4 November 1999), the Director General of the IAEA presented some of the Agency's major achievements in fulfilling its mandate as described in the Annual Report of the IAEA for 1998, and also some of the challenges and opportunities that lie ahead

  15. Perception of synthetic speech produced automatically by rule: Intelligibility of eight text-to-speech systems.

    Science.gov (United States)

    Greene, Beth G; Logan, John S; Pisoni, David B

    1986-03-01

    We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered.
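
    Scoring a closed-set test such as the MRT reduces to percent correct per voice, which can then be bucketed into quality categories like those above. A minimal sketch with invented responses (the word pairs and scores are illustrative, not data from the study):

```python
def mrt_score(responses):
    """responses: list of (chosen_word, target_word) pairs for one voice.
    Returns percent of targets correctly identified."""
    correct = sum(1 for chosen, target in responses if chosen == target)
    return 100.0 * correct / len(responses)

# Invented closed-set responses for two voices
natural = [("bat", "bat"), ("pat", "pat"), ("mat", "mat"), ("cat", "bat")]
low_quality = [("bat", "bat"), ("hat", "pat"), ("rat", "mat"), ("cat", "bat")]

print(mrt_score(natural))      # prints 75.0
print(mrt_score(low_quality))  # prints 25.0
```

    Splitting the same tally by initial versus final consonant position would support position-specific comparisons like the initial-consonant equivalence reported for DECtalk-Paul.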

  16. Perception of synthetic speech produced automatically by rule: Intelligibility of eight text-to-speech systems

    Science.gov (United States)

    GREENE, BETH G.; LOGAN, JOHN S.; PISONI, DAVID B.

    2012-01-01

    We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered. PMID:23225916

  17. Fast forward for the United Nations. Human security becomes a unifying force

    International Nuclear Information System (INIS)

    Annan, Kofi

    2005-01-01

    This paper presents the author's vision of a safer world and a better United Nations. The global threats of our age include terrorism, deadly weapons, genocide, infectious disease, poverty, environmental degradation and organized crime. They will not wait for States to sort out their differences. That is why we must act now to strengthen our collective defences. We must unite to master today's threats, and not allow them to divide and master us. And I submit that the only universal instrument that can bring States together in such a global effort is the United Nations. One must acknowledge that the United Nations is not perfect. At times, it shows its age. But our world will not easily find a better instrument for forging a sustained, global response to today's threats. We must use it to unite around common priorities - and act on them. And we must agree on a plan to reform the United Nations - and get on with the job of implementing it. This message lies at the heart of the recent report, A More Secure World: Our Shared Responsibility. It is the work of the Panel of 16 men and women from around the world I appointed last year. The report contains a powerful vision of collective security. Whether the threat is terrorism or AIDS, a threat to one is a threat to all. Our defences are only as strong as their weakest link. We will be safest if we work together

  18. The speech perception skills of children with and without speech sound disorder.

    Science.gov (United States)

    Hearnshaw, Stephanie; Baker, Elise; Munro, Natalie

    To investigate whether Australian-English speaking children with and without speech sound disorder (SSD) differ in their overall speech perception accuracy. Additionally, to investigate differences in the perception of specific phonemes and the association between speech perception and speech production skills. Twenty-five Australian-English speaking children aged 48-60 months participated in this study. The SSD group included 12 children and the typically developing (TD) group included 13 children. Children completed routine speech and language assessments in addition to an experimental Australian-English lexical and phonetic judgement task based on Rvachew's Speech Assessment and Interactive Learning System (SAILS) program (Rvachew, 2009). This task included eight words across four word-initial phonemes: /k, ɹ, ʃ, s/. Children with SSD showed significantly poorer perceptual accuracy on the lexical and phonetic judgement task compared with TD peers. The phonemes /ɹ/ and /s/ were most frequently perceived in error across both groups. Additionally, the phoneme /ɹ/ was most commonly produced in error. There was also a positive correlation between overall speech perception and speech production scores. Children with SSD perceived speech less accurately than their typically developing peers. The findings suggest that an Australian-English variation of a lexical and phonetic judgement task similar to the SAILS program is promising and worthy of a larger scale study. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Speech Matters

    DEFF Research Database (Denmark)

    Hasse Jørgensen, Stina

    2011-01-01

    About Speech Matters - Katarina Gregos, the Greek curator's exhibition at the Danish Pavilion, the Venice Biennale 2011.

  20. South Africa: ANC Youth League President found guilty of hate speech.

    Science.gov (United States)

    Sinclair, Kelly

    2010-06-01

    On 15 March 2010, the Johannesburg Equality Court found African National Congress (ANC) Youth League President Julius Malema guilty of hate speech and harassment for his comments regarding rape survivors.

  1. New representative of the Director-General of the IAEA to the United Nations

    International Nuclear Information System (INIS)

    2000-01-01

    The document gives information about Mr. Kwaku Aning (Ghana) who was nominated as the Representative of the Director-General of the IAEA to the United Nations and as Director of its Office at the United Nations Headquarters in New York, USA, as of 1 February 2000

  2. Hate speech

    Directory of Open Access Journals (Sweden)

    Anne Birgitta Nilsen

    2014-12-01

    The manifesto of the Norwegian terrorist Anders Behring Breivik is based on the “Eurabia” conspiracy theory. This theory is a key starting point for hate speech amongst many right-wing extremists in Europe, but also has ramifications beyond these environments. In brief, proponents of the Eurabia theory claim that Muslims are occupying Europe and destroying Western culture, with the assistance of the EU and European governments. By contrast, members of Al-Qaeda and other extreme Islamists promote the conspiracy theory “the Crusade” in their hate speech directed against the West. Proponents of the latter theory argue that the West is leading a crusade to eradicate Islam and Muslims, a crusade that is similarly facilitated by their governments. This article presents analyses of texts written by right-wing extremists and Muslim extremists in an effort to shed light on how hate speech promulgates conspiracy theories in order to spread hatred and intolerance. The aim of the article is to contribute to a more thorough understanding of hate speech’s nature by applying rhetorical analysis. Rhetorical analysis is chosen because it offers a means of understanding the persuasive power of speech. It is thus a suitable tool to describe how hate speech works to convince and persuade. The concepts from rhetorical theory used in this article are ethos, logos and pathos. The concept of ethos is used to pinpoint factors that contributed to Osama bin Laden's impact, namely factors that lent credibility to his promotion of the conspiracy theory of the Crusade. In particular, Bin Laden projected common sense, good morals and good will towards his audience. He seemed to have coherent and relevant arguments; he appeared to possess moral credibility; and his use of language demonstrated that he wanted the best for his audience. The concept of pathos is used to define hate speech, since hate speech targets its audience's emotions. In hate speech it is the

  3. Hierarchical organization in the temporal structure of infant-directed speech and song.

    Science.gov (United States)

    Falk, Simone; Kello, Christopher T

    2017-06-01

    Caregivers alter the temporal structure of their utterances when talking and singing to infants compared with adult communication. The present study tested whether temporal variability in infant-directed registers serves to emphasize the hierarchical temporal structure of speech. Fifteen German-speaking mothers sang a play song and told a story to their 6-month-old infants, or to an adult. Recordings were analyzed using a recently developed method that determines the degree of nested clustering of temporal events in speech. Events were defined as peaks in the amplitude envelope, and clusters of various sizes related to periods of acoustic speech energy at varying timescales. Infant-directed speech and song clearly showed greater event clustering compared with adult-directed registers, at multiple timescales of hundreds of milliseconds to tens of seconds. We discuss the relation of this newly discovered acoustic property to temporal variability in linguistic units and its potential implications for parent-infant communication and infants learning the hierarchical structures of speech and language. Copyright © 2017 Elsevier B.V. All rights reserved.
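
    One common way to quantify nested clustering of acoustic events across timescales, in the spirit of the method described, is the Allan factor of event counts in windows of increasing size: values near 1 indicate Poisson-like timing, while values that grow with window size indicate hierarchical clustering. The sketch below is a generic illustration with invented event times, not the authors' exact procedure.

```python
def allan_factor(event_times, window, duration):
    """Allan factor of event counts over adjacent windows of a given size.
    AF(T) = E[(N_{k+1} - N_k)^2] / (2 E[N_k]) for counts N_k per window."""
    n_windows = int(duration // window)
    counts = [0] * n_windows
    for t in event_times:
        k = int(t // window)
        if k < n_windows:
            counts[k] += 1
    diffs_sq = [(counts[k + 1] - counts[k]) ** 2 for k in range(n_windows - 1)]
    mean_count = sum(counts) / n_windows
    return sum(diffs_sq) / len(diffs_sq) / (2.0 * mean_count)

# Invented, strongly clustered event times (two bursts) over 8 seconds
events = [0.1, 0.15, 0.2, 0.25, 4.1, 4.15, 4.2, 4.25]
af_small = allan_factor(events, window=0.5, duration=8.0)  # 3.2
af_large = allan_factor(events, window=2.0, duration=8.0)  # 4.0
```

    Sweeping the window size from hundreds of milliseconds to tens of seconds and plotting AF against window size yields the multi-timescale clustering profile that the abstract contrasts between infant- and adult-directed registers.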

  4. Speech Inconsistency in Children with Childhood Apraxia of Speech, Language Impairment, and Speech Delay: Depends on the Stimuli

    Science.gov (United States)

    Iuzzini-Seigel, Jenya; Hogan, Tiffany P.; Green, Jordan R.

    2017-01-01

    Purpose: The current research sought to determine (a) if speech inconsistency is a core feature of childhood apraxia of speech (CAS) or if it is driven by comorbid language impairment that affects a large subset of children with CAS and (b) if speech inconsistency is a sensitive and specific diagnostic marker that can differentiate between CAS and…

  5. Reflective practices at the Security Council: Children and armed conflict and the three United Nations.

    Science.gov (United States)

    Bode, Ingvild

    2018-06-01

    The United Nations Security Council passed its first resolution on children in armed conflict in 1999, making it one of the oldest examples of Security Council engagement with a thematic mandate and leading to the creation of a dedicated working group in 2005. Existing theoretical accounts of the Security Council cannot account for the developing substance of the children and armed conflict agenda as they are macro-oriented and focus exclusively on states. I argue that Security Council decision-making on thematic mandates is a productive process whose outcomes are created by and through practices of actors across the three United Nations: member states (the first United Nations), United Nations officials (the second United Nations) and non-governmental organizations (the third United Nations). In presenting a practice-based, micro-oriented analysis of the children and armed conflict agenda, the article aims to deliver on the empirical promise of practice theories in International Relations. I make two contributions to practice-based understandings: first, I argue that actors across the three United Nations engage in reflective practices of a strategic or tactical nature to manage, arrange or create space in Security Council decision-making. Portraying practices as reflective rather than as only based on tacit knowledge highlights how actors may creatively adapt their practices to social situations. Second, I argue that particular individuals from the three United Nations are more likely to become recognized as competent performers of practices because of their personality, understood as plural socialization experiences. This adds varied individual agency to practice theories that, despite their micro-level interests, have focused on how agency is relationally constituted.

  6. Speech-driven environmental control systems--a qualitative analysis of users' perceptions.

    Science.gov (United States)

    Judge, Simon; Robertson, Zoë; Hawley, Mark; Enderby, Pam

    2009-05-01

    To explore users' experiences and perceptions of speech-driven environmental control systems (SPECS) as part of a larger project aiming to develop a new SPECS. The motivation for this part of the project was to add to the evidence base for the use of SPECS and to determine the key design specifications for a new speech-driven system from a user's perspective. Semi-structured interviews were conducted with 12 users of SPECS from around the United Kingdom. These interviews were transcribed and analysed using a qualitative method based on framework analysis. Reliability is the main influence on the use of SPECS. All the participants gave examples of occasions when their speech-driven system was unreliable; in some instances, this unreliability was reported as not being a problem (e.g., for changing television channels); however, it was perceived as a problem for more safety critical functions (e.g., opening a door). Reliability was cited by participants as the reason for using a switch-operated system as back up. Benefits of speech-driven systems focused on speech operation enabling access when other methods were not possible; quicker operation and better aesthetic considerations. Overall, there was a perception of increased independence from the use of speech-driven environmental control. In general, speech was considered a useful method of operating environmental controls by the participants interviewed; however, their perceptions regarding reliability often influenced their decision to have backup or alternative systems for certain functions.

  7. Speech pathologists' experiences with stroke clinical practice guidelines and the barriers and facilitators influencing their use: a national descriptive study.

    Science.gov (United States)

    Hadely, Kathleen A; Power, Emma; O'Halloran, Robyn

    2014-03-06

    Communication and swallowing disorders are a common consequence of stroke. Clinical practice guidelines (CPGs) have been created to assist health professionals to put research evidence into clinical practice and can improve stroke care outcomes. However, CPGs are often not successfully implemented in clinical practice and research is needed to explore the factors that influence speech pathologists' implementation of stroke CPGs. This study aimed to describe speech pathologists' experiences and current use of guidelines, and to identify what factors influence speech pathologists' implementation of stroke CPGs. Speech pathologists working in stroke rehabilitation who had used a stroke CPG were invited to complete a 39-item online survey. Content analysis and descriptive and inferential statistics were used to analyse the data. 320 participants from all states and territories of Australia were surveyed. Almost all speech pathologists had used a stroke CPG and had found the guideline "somewhat useful" or "very useful". Factors that speech pathologists perceived influenced CPG implementation included the: (a) guideline itself, (b) work environment, (c) aspects related to the speech pathologist themselves, (d) patient characteristics, and (e) types of implementation strategies provided. There are many different factors that can influence speech pathologists' implementation of CPGs. The factors that influenced the implementation of CPGs can be understood in terms of knowledge creation and implementation frameworks. Speech pathologists should continue to adapt the stroke CPG to their local work environment and evaluate their use. To enhance guideline implementation, they may benefit from a combination of educational meetings and resources, outreach visits, support from senior colleagues, and audit and feedback strategies.

  8. Clear Speech - Mere Speech? How segmental and prosodic speech reduction shape the impression that speakers create on listeners

    DEFF Research Database (Denmark)

    Niebuhr, Oliver

    2017-01-01

    The study investigated whether variation in the degree of reduction also has a systematic effect on the attributes we ascribe to the speaker who produces the speech signal. A perception experiment was carried out for German in which 46 listeners judged whether or not speakers showing 3 different combinations of segmental and prosodic reduction levels (unreduced, moderately reduced, strongly reduced) are appropriately described by 13 physical, social, and cognitive attributes. The experiment shows that clear speech is not mere speech, and less clear speech is not just reduced either. Rather, results revealed a complex interplay of reduction levels and perceived speaker attributes in which moderate reduction can make a better impression on listeners than no reduction. In addition to its relevance in reduction models and theories, this interplay is instructive for various fields of speech application, from social robotics to charisma...

  9. Addressing Child Poverty: How Does the United States Compare With Other Nations?

    Science.gov (United States)

    Smeeding, Timothy; Thévenot, Céline

    2016-04-01

    Poverty during childhood raises a number of policy challenges. The earliest years are critical in terms of future cognitive and emotional development and early health outcomes, and have long-lasting consequences on future health. In this article child poverty in the United States is compared with a set of other developed countries. To the surprise of few, results show that child poverty is high in the United States. But why is poverty so much higher in the United States than in other rich nations? Among child poverty drivers, household composition and parent's labor market participation matter a great deal. But these are not insurmountable problems. Many of these disadvantages can be overcome by appropriate public policies. For example, single mothers have a very high probability of poverty in the United States, but this is not the case in other countries where the provision of work support increases mothers' labor earnings and together with strong public cash support effectively reduces child poverty. In this article we focus on the role and design of public expenditure to understand the functioning of the different national systems and highlight ways for improvements to reduce child poverty in the United States. We compare relative child poverty in the United States with poverty in a set of selected countries. The takeaway is that the United States underinvests in its children and their families and in so doing this leads to high child poverty and poor health and educational outcomes. If a nation like the United States wants to decrease poverty and improve health and life chances for poor children, it must support parental employment and incomes, and invest in children's futures as do other similar nations with less child poverty. Copyright © 2016 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  10. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    Science.gov (United States)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.

  11. Lost in Translation: Understanding Students' Use of Social Networking and Online Resources to Support Early Clinical Practices. A National Survey of Graduate Speech-Language Pathology Students

    Science.gov (United States)

    Boster, Jamie B.; McCarthy, John W.

    2018-01-01

    The Internet is a source of many resources for graduate speech-language pathology (SLP) students. It is important to understand the resources students are aware of, which they use, and why they are being chosen as sources of information for therapy activities. A national online survey of graduate SLP students was conducted to assess their…

  12. National Agricultural Library | United States Department of Agriculture

    Science.gov (United States)


  13. Under-resourced speech recognition based on the speech manifold

    CSIR Research Space (South Africa)

    Sahraeian, R

    2015-09-01

    Full Text Available Conventional acoustic modeling involves estimating many parameters to effectively model feature distributions. The sparseness of speech and text data, however, degrades the reliability of the estimation process and makes speech recognition a...

  14. PRACTICING SPEECH THERAPY INTERVENTION FOR SOCIAL INTEGRATION OF CHILDREN WITH SPEECH DISORDERS

    Directory of Open Access Journals (Sweden)

    Martin Ofelia POPESCU

    2016-11-01

    Full Text Available The article presents a concise speech correction intervention program for dyslalia, combined with the development of intrapersonal and interpersonal capacities and the social integration of children with speech disorders. The program's main objectives are: increasing the potential for individual social integration by correcting speech disorders in conjunction with intra- and interpersonal capacities, and increasing the potential of children and community groups for social integration by optimizing the socio-relational context of children with speech disorders. The program included 60 children/students with dyslalic speech disorders (monomorphic and polymorphic dyslalia) from 11 educational institutions - 6 kindergartens and 5 schools/secondary schools - affiliated with the inter-school logopedic centre (CLI) of Targu Jiu city and areas of Gorj district. The program was implemented under the assumption that therapeutic-formative intervention to correct speech disorders and facilitate social integration would lead, together with the correction of pronunciation disorders, to improved social integration of children with speech disorders. The results confirm the hypothesis and provide evidence of the intervention program's efficiency.

  15. Bipolar Disorder in Children: Implications for Speech-Language Pathologists

    Science.gov (United States)

    Quattlebaum, Patricia D.; Grier, Betsy C.; Klubnik, Cynthia

    2012-01-01

    In the United States, bipolar disorder is an increasingly common diagnosis in children, and these children can present with severe behavior problems and emotionality. Many studies have documented the frequent coexistence of behavior disorders and speech-language disorders. Like other children with behavior disorders, children with bipolar disorder…

  16. Schizophrenia alters intra-network functional connectivity in the caudate for detecting speech under informational speech masking conditions.

    Science.gov (United States)

    Zheng, Yingjun; Wu, Chao; Li, Juanhua; Li, Ruikeng; Peng, Hongjun; She, Shenglin; Ning, Yuping; Li, Liang

    2018-04-04

    Speech recognition in noisy "cocktail-party" environments involves multiple perceptual/cognitive processes, including target detection, selective attention, irrelevant signal inhibition, sensory/working memory, and speech production. Compared to healthy listeners, people with schizophrenia are more vulnerable to masking stimuli and perform worse in speech recognition under speech-on-speech masking conditions. Although the schizophrenia-related speech-recognition impairment under "cocktail-party" conditions is associated with deficits of various perceptual/cognitive processes, it is crucial to know whether the brain substrates critically underlying speech detection against informational speech masking are impaired in people with schizophrenia. Using functional magnetic resonance imaging (fMRI), this study investigated differences between people with schizophrenia (n = 19, mean age = 33 ± 10 years) and their matched healthy controls (n = 15, mean age = 30 ± 9 years) in intra-network functional connectivity (FC) specifically associated with target-speech detection under speech-on-speech-masking conditions. The target-speech detection performance under the speech-on-speech-masking condition in participants with schizophrenia was significantly worse than that in matched healthy participants (healthy controls). Moreover, in healthy controls, but not participants with schizophrenia, the strength of intra-network FC within the bilateral caudate was positively correlated with the speech-detection performance under the speech-masking conditions. Compared to controls, patients showed an altered spatial activity pattern and decreased intra-network FC in the caudate. In people with schizophrenia, the decline in speech-detection performance under speech-on-speech masking conditions is associated with reduced intra-caudate functional connectivity, which normally contributes to detecting target speech against speech masking via its functions of suppressing masking-speech signals.

  17. Speech disorder prevention

    Directory of Open Access Journals (Sweden)

    Miladis Fornaris-Méndez

    2017-04-01

    Full Text Available Speech therapy has moved from a medical focus toward a preventive focus. However, difficulties are evident in carrying out this preventive task, because more space is devoted to the correction of language disorders. Because speech disorders are the most frequently appearing dysfunction, the preventive work carried out to avoid their appearance acquires special importance. Speech education from early childhood makes it easier to prevent the appearance of speech disorders in children. The present work aims to offer different activities for the prevention of speech disorders.

  18. On the Importance of Audiovisual Coherence for the Perceived Quality of Synthesized Visual Speech

    Directory of Open Access Journals (Sweden)

    Wesley Mattheyses

    2009-01-01

    Full Text Available Audiovisual text-to-speech systems convert a written text into an audiovisual speech signal. Typically, the visual mode of the synthetic speech is synthesized separately from the audio, the latter being either natural or synthesized speech. However, the perception of mismatches between these two information streams requires experimental exploration since it could degrade the quality of the output. In order to increase the intermodal coherence in synthetic 2D photorealistic speech, we extended the well-known unit selection audio synthesis technique to work with multimodal segments containing original combinations of audio and video. Subjective experiments confirm that the audiovisual signals created by our multimodal synthesis strategy are indeed perceived as being more synchronous than those of systems in which both modes are not intrinsically coherent. Furthermore, it is shown that the degree of coherence between the auditory mode and the visual mode has an influence on the perceived quality of the synthetic visual speech fragment. In addition, the audio quality was found to have only a minor influence on the perceived visual signal's quality.
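    The unit-selection strategy described in this record - choosing, for each target position, a stored segment so that the summed target and concatenation (join) costs are minimal - is conventionally solved with dynamic programming. The toy sketch below (candidate units, costs, and the join function are invented for illustration, not taken from the paper's system) shows the idea:

```python
def select_units(target_costs, join_cost):
    """Pick one candidate unit per slot, minimizing the sum of per-slot
    target costs plus pairwise join (concatenation) costs, Viterbi-style."""
    # best[u] = (cheapest total cost of a path ending in unit u, that path)
    best = {u: (c, [u]) for u, c in target_costs[0].items()}
    for slot in target_costs[1:]:
        new = {}
        for u, tc in slot.items():
            # cheapest predecessor for candidate u
            prev_u, (prev_c, prev_path) = min(
                best.items(), key=lambda kv: kv[1][0] + join_cost(kv[0], u))
            new[u] = (prev_c + join_cost(prev_u, u) + tc, prev_path + [u])
        best = new
    return min(best.values(), key=lambda cp: cp[0])

# Toy example: units are numbers; joining dissimilar units costs more.
slots = [{1: 0.0, 5: 1.0}, {2: 0.0, 6: 0.5}, {3: 0.0, 7: 0.2}]
cost, path = select_units(slots, lambda u, v: abs(u - v) * 0.1)
# The smooth sequence 1 -> 2 -> 3 wins over locally tempting alternatives.
```

    The multimodal twist in the paper is simply that each candidate unit carries audio and video together, so the join cost penalizes discontinuity in both streams at once.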

  19. Speech and Speech-Related Quality of Life After Late Palate Repair: A Patient's Perspective.

    Science.gov (United States)

    Schönmeyr, Björn; Wendby, Lisa; Sharma, Mitali; Jacobson, Lia; Restrepo, Carolina; Campbell, Alex

    2015-07-01

    Many patients with cleft palate deformities worldwide receive treatment at a later age than is recommended for normal speech to develop. The outcomes after late palate repairs in terms of speech and quality of life (QOL) still remain largely unstudied. In the current study, questionnaires were used to assess the patients' perception of speech and QOL before and after primary palate repair. All of the patients were operated at a cleft center in northeast India and had a cleft palate with a normal lip or with a cleft lip that had been previously repaired. A total of 134 patients (7-35 years) were interviewed preoperatively and 46 patients (7-32 years) were assessed in the postoperative survey. The survey showed that scores based on the speech handicap index, concerning speech and speech-related QOL, did not improve postoperatively. In fact, the questionnaires indicated that the speech became more unpredictable (P reported that their self-confidence had improved after the operation. Thus, the majority of interviewed patients who underwent late primary palate repair were satisfied with the surgery. At the same time, speech and speech-related QOL did not improve according to the speech handicap index-based survey. Speech predictability may even become worse and nasal regurgitation may increase after late palate repair, according to these results.

  20. Visual Speech Fills in Both Discrimination and Identification of Non-Intact Auditory Speech in Children

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve

    2018-01-01

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…

  1. Tackling the complexity in speech

    DEFF Research Database (Denmark)

    The section includes four carefully selected chapters. They deal with facets of speech production, speech acoustics, and/or speech perception or recognition, place them in an integrated phonetic-phonological perspective, and relate them in more or less explicit ways to aspects of speech technology. Therefore, we hope that this volume can help speech scientists with traditional training in phonetics and phonology to keep up with the latest developments in speech technology. In the opposite direction, speech researchers starting from a technological perspective will hopefully get inspired by reading about the questions, phenomena, and communicative functions that are currently addressed in phonetics and phonology. Either way, the future of speech research lies in international, interdisciplinary collaborations, and our volume is meant to reflect and facilitate such collaborations...

  2. Speech in spinocerebellar ataxia.

    Science.gov (United States)

    Schalling, Ellika; Hartelius, Lena

    2013-12-01

    Spinocerebellar ataxias (SCAs) are a heterogeneous group of autosomal dominant cerebellar ataxias clinically characterized by progressive ataxia, dysarthria and a range of other concomitant neurological symptoms. Only a few studies include detailed characterization of speech symptoms in SCA. Speech symptoms in SCA resemble ataxic dysarthria but symptoms related to phonation may be more prominent. One study to date has shown an association between differences in speech and voice symptoms related to genotype. More studies of speech and voice phenotypes are motivated, to possibly aid in clinical diagnosis. In addition, instrumental speech analysis has been demonstrated to be a reliable measure that may be used to monitor disease progression or therapy outcomes in possible future pharmacological treatments. Intervention by speech and language pathologists should go beyond assessment. Clinical guidelines for management of speech, communication and swallowing need to be developed for individuals with progressive cerebellar ataxia. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. The Texts of the Agency's Agreements with the United Nations; Texte Des Accords Conclus Entre L'Agence Et L'Organisation Des Nations Unies

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1959-10-30

    The texts of the following agreements and supplementary agreements between the Agency and the United Nations are reproduced in this document for the information of all Members of the Agency: I. A. Agreement Governing the Relationship Between the United Nations and the International Atomic Energy Agency; B. Protocol Concerning the Entry into Force of the Agreement between the United Nations and the International Atomic Energy Agency; II. Administrative Arrangement Concerning the Use of the United Nations Laissez-Passer by Officials of the International Atomic Energy Agency; and III. Agreement for the Admission of the International Atomic Energy Agency into the United Nations Joint Staff Pension Fund.

  4. Principles and foundation: national standards on quantities and units in nuclear science field

    International Nuclear Information System (INIS)

    Chen Lishu

    1993-11-01

    The main contents of the National Standards Quantities and units of atomic and nuclear physics (GB 3102.9) and Quantities and units of nuclear reactions and ionizing radiations (GB 3102.10) are presented, in which the most important quantities in the nuclear science field are given with their symbols and definitions. The principles and foundation underlying these National Standards, including the International System of Units (SI) and its application to the nuclear science field, are explained.

  5. European regulation of cross-border hate speech in cyberspace: The limits of legislation

    OpenAIRE

    Banks, James

    2011-01-01

    This paper examines the complexities of regulating hate speech on the Internet through legal frameworks. It demonstrates the limitations of unilateral national content legislation and the difficulties inherent in multilateral efforts to regulate the Internet. The paper highlights how the US's commitment to free speech has undermined European efforts to construct a truly international regulatory system. It is argued that a broad coalition of citizens, industry and government, employing technol...

  6. Predicting Speech Intelligibility with a Multiple Speech Subsystems Approach in Children with Cerebral Palsy

    Science.gov (United States)

    Lee, Jimin; Hustad, Katherine C.; Weismer, Gary

    2014-01-01

    Purpose: Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystems approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Method: Nine acoustic variables reflecting different subsystems, and…

  7. A causal test of the motor theory of speech perception: a case of impaired speech production and spared speech perception.

    Science.gov (United States)

    Stasenko, Alena; Bonn, Cory; Teghipco, Alex; Garcea, Frank E; Sweet, Catherine; Dombovy, Mary; McDonough, Joyce; Mahon, Bradford Z

    2015-01-01

    The debate about the causal role of the motor system in speech perception has been reignited by demonstrations that motor processes are engaged during the processing of speech sounds. Here, we evaluate which aspects of auditory speech processing are affected, and which are not, in a stroke patient with dysfunction of the speech motor system. We found that the patient showed a normal phonemic categorical boundary when discriminating two non-words that differ by a minimal pair (e.g., ADA-AGA). However, using the same stimuli, the patient was unable to identify or label the non-word stimuli (using a button-press response). A control task showed that he could identify speech sounds by speaker gender, ruling out a general labelling impairment. These data suggest that while the motor system is not causally involved in perception of the speech signal, it may be used when other cues (e.g., meaning, context) are not available.

  8. Overview of the Common Core State Standard initiative and educational reform movement from the vantage of speech-language pathologists.

    Science.gov (United States)

    Staskowski, Maureen

    2012-05-01

    Educational reform is sweeping the country. The adoption and implementation of the Common Core State Standards in almost every state are meant to transform education: to update the way schools educate and the way students learn, and ultimately to prepare the nation's next generation for the global workplace. This article describes the Common Core State Standards initiative and the underlying concerns about the quality of education in the United States, as well as the opportunities this reform initiative affords speech-language pathologists. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  9. Asthma, hay fever, and food allergy are associated with caregiver-reported speech disorders in US children.

    Science.gov (United States)

    Strom, Mark A; Silverberg, Jonathan I

    2016-09-01

    Children with asthma, hay fever, and food allergy may have several factors that increase their risk of speech disorder, including allergic inflammation, ADD/ADHD, and sleep disturbance. However, few studies have examined a relationship between asthma, allergic disease, and speech disorder. We sought to determine whether asthma, hay fever, and food allergy are associated with speech disorder in children and whether disease severity, sleep disturbance, or ADD/ADHD modified such associations. We analyzed cross-sectional data on 337,285 children aged 2-17 years from 19 US population-based studies, including the 1997-2013 National Health Interview Survey and the 2003/4 and 2007/8 National Survey of Children's Health. In multivariate models, controlling for age, demographic factors, healthcare utilization, and history of eczema, lifetime history of asthma (odds ratio [95% confidence interval]: 1.18 [1.04-1.34], p = 0.01), and one-year history of hay fever (1.44 [1.28-1.62], p speech disorder. Children with current (1.37 [1.15-1.59] p = 0.0003) but not past (p = 0.06) asthma had increased risk of speech disorder. In one study that assessed caregiver-reported asthma severity, mild (1.58 [1.20-2.08], p = 0.001) and moderate (2.99 [1.54-3.41], p speech disorder; however, severe asthma was associated with the highest odds of speech disorder (5.70 [2.36-13.78], p = 0.0001). Childhood asthma, hay fever, and food allergy are associated with increased risk of speech disorder. Future prospective studies are needed to characterize the associations. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
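    The effect sizes in this record are reported as odds ratios with 95% confidence intervals. For readers unfamiliar with the format, the standard computation from a 2×2 exposure-outcome table is sketched below (the counts are made up for illustration, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = cases/non-cases among exposed, c/d = among unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # std. error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 30/70 exposed cases/non-cases vs. 20/80 unexposed.
or_, lo, hi = odds_ratio_ci(30, 70, 20, 80)
# or_ is about 1.71, but the CI spans 1.0, so this toy association
# would not be statistically significant.
```

    An interval that excludes 1.0, as in the hay fever result above (1.44 [1.28-1.62]), indicates a significant association.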

  10. The Relationship between Speech Production and Speech Perception Deficits in Parkinson's Disease

    Science.gov (United States)

    De Keyser, Kim; Santens, Patrick; Bockstael, Annelies; Botteldooren, Dick; Talsma, Durk; De Vos, Stefanie; Van Cauwenberghe, Mieke; Verheugen, Femke; Corthals, Paul; De Letter, Miet

    2016-01-01

    Purpose: This study investigated the possible relationship between hypokinetic speech production and speech intensity perception in patients with Parkinson's disease (PD). Method: Participants included 14 patients with idiopathic PD and 14 matched healthy controls (HCs) with normal hearing and cognition. First, speech production was objectified…

  11. Visual speech information: a help or hindrance in perceptual processing of dysarthric speech.

    Science.gov (United States)

    Borrie, Stephanie A

    2015-03-01

    This study investigated the influence of visual speech information on perceptual processing of neurologically degraded speech. Fifty listeners identified spastic dysarthric speech under both audio (A) and audiovisual (AV) conditions. Condition comparisons revealed that the addition of visual speech information enhanced processing of the neurologically degraded input in terms of (a) acuity (percent phonemes correct) of vowels and consonants and (b) recognition (percent words correct) of predictive and nonpredictive phrases. Listeners exploited stress-based segmentation strategies more readily in AV conditions, suggesting that the perceptual benefit associated with adding visual speech information to the auditory signal-the AV advantage-has both segmental and suprasegmental origins. Results also revealed that the magnitude of the AV advantage can be predicted, to some degree, by the extent to which an individual utilizes syllabic stress cues to inform word recognition in AV conditions. Findings inform the development of a listener-specific model of speech perception that applies to processing of dysarthric speech in everyday communication contexts.

  12. The treatment of apraxia of speech : Speech and music therapy, an innovative joint effort

    NARCIS (Netherlands)

    Hurkmans, Josephus Johannes Stephanus

    2016-01-01

    Apraxia of Speech (AoS) is a neurogenic speech disorder. A wide variety of behavioural methods have been developed to treat AoS. Various therapy programmes use musical elements to improve speech production. A unique therapy programme combining elements of speech therapy and music therapy is called

  13. Practical speech user interface design

    CERN Document Server

    Lewis, James R

    2010-01-01

    Although speech is the most natural form of communication between humans, most people find using speech to communicate with machines anything but natural. Drawing from psychology, human-computer interaction, linguistics, and communication theory, Practical Speech User Interface Design provides a comprehensive yet concise survey of practical speech user interface (SUI) design. It offers practice-based and research-based guidance on how to design effective, efficient, and pleasant speech applications that people can really use. Focusing on the design of speech user interfaces for IVR application

  14. Sensory integration dysfunction affects efficacy of speech therapy on children with functional articulation disorders

    Directory of Open Access Journals (Sweden)

    Tung LC

    2013-01-01

    Full Text Available Li-Chen Tung,1,# Chin-Kai Lin,2,# Ching-Lin Hsieh,3,4 Ching-Chi Chen,1 Chin-Tsan Huang,1 Chun-Hou Wang5,6 1Department of Physical Medicine and Rehabilitation, Chi Mei Medical Center, Tainan, 2Program of Early Intervention, Department of Early Childhood Education, National Taichung University of Education, Taichung, 3School of Occupational Therapy, College of Medicine, National Taiwan University, Taipei, 4Department of Physical Medicine and Rehabilitation, National Taiwan University Hospital, Taipei, 5School of Physical Therapy, College of Medical Science and Technology, Chung Shan Medical University, Taichung, 6Physical Therapy Room, Chung Shan Medical University Hospital, Taichung, Taiwan. #These authors contributed equally. Background: Articulation disorders in young children are due to defects occurring at a certain stage in sensory and motor development. Some children with functional articulation disorders may also have sensory integration dysfunction (SID). We hypothesized that speech therapy would be less efficacious in children with SID than in those without SID. Hence, the purpose of this study was to compare the efficacy of speech therapy in two groups of children with functional articulation disorders: those without and those with SID. Method: A total of 30 young children with functional articulation disorders were divided into two groups, the no-SID group (15 children) and the SID group (15 children). The number of pronunciation mistakes was evaluated before and after speech therapy. Results: There were no statistically significant differences in age, sex, sibling order, education of parents, and pretest number of mistakes in pronunciation between the two groups (P > 0.05). The mean and standard deviation in the pre- and posttest number of mistakes in pronunciation were 10.5 ± 3.2 and 3.3 ± 3.3 in the no-SID group, and 10.1 ± 2.9 and 6.9 ± 3.5 in the SID group, respectively. Results showed great changes after speech therapy treatment (F

  15. Multilateral Disarmament and the Special Session: Twelfth Conference on the United Nations of the Next Decade.

    Science.gov (United States)

    Stanley Foundation, Muscatine, IA.

    The report discusses issues relating to multilateral disarmament in the context of the Special Session of the United Nations General Assembly to be convened in 1978. Intended as a forum for the exchange of ideas of government leaders from the United States and other nations about the international peace-keeping role of the United Nations, the…

  16. Motor Speech Phenotypes of Frontotemporal Dementia, Primary Progressive Aphasia, and Progressive Apraxia of Speech

    Science.gov (United States)

    Poole, Matthew L.; Brodtmann, Amy; Darby, David; Vogel, Adam P.

    2017-01-01

    Purpose: Our purpose was to create a comprehensive review of speech impairment in frontotemporal dementia (FTD), primary progressive aphasia (PPA), and progressive apraxia of speech in order to identify the most effective measures for diagnosis and monitoring, and to elucidate associations between speech and neuroimaging. Method: Speech and…

  17. United Nations Global Compact as a driver of Sustainable Development through businesses

    OpenAIRE

    Bereng, Reitumetse Esther

    2018-01-01

    The United Nations Global Compact (UNGC) was created in 2000 as a global compact between the United Nations and the Corporate Sector to induce businesses to incorporate principles that relate to human rights, labour, environment and anti-corruption into their corporate actions in order to contribute to sustainable development. This report reviews the tools used by the UNGC to ensure that its members´ strategies and operations align to the basic principles.

  18. An analysis of the masking of speech by competing speech using self-report data.

    Science.gov (United States)

    Agus, Trevor R; Akeroyd, Michael A; Noble, William; Bhullar, Navjot

    2009-01-01

    Many of the items in the "Speech, Spatial, and Qualities of Hearing" scale questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol. 43, 85-99 (2004)] are concerned with speech understanding in a variety of backgrounds, both speech and nonspeech. To study whether these self-report data reflect informational masking, previously collected data on 414 people were analyzed. The lowest scores (greatest difficulties) were found for the two items in which there were two speech targets, with successively higher scores for competing speech (six items), energetic masking (one item), and no masking (three items). The results suggest significant masking by competing speech in everyday listening situations.

  19. Speech chronemics--a hidden dimension of speech. Theoretical background, measurement and clinical validity.

    Science.gov (United States)

    Krüger, H P

    1989-02-01

    The term "speech chronemics" is introduced to characterize a research strategy which extracts from the physical qualities of the speech signal only the pattern of ons ("speaking") and offs ("pausing"). Research in this field can be structured along the methodological dimensions "unit of time", "number of speakers", and "quality of the prosodic measures". It is shown that a researcher's actual decision for one method largely determines the outcome of his study. Then, with the Logoport, a new portable measurement device is presented. It enables the researcher to study speaking behavior over long periods of time (up to 24 hours) in the normal environment of his subjects. Two experiments are reported. The first shows the validity of articulation pauses for variations in the physiological state of the organism. The second study shows a new beta-blocking agent to have sociotropic effects: in a long-term trial, socially high-strung subjects showed improved interaction behavior (compared to placebo and to socially easy-going persons) in their everyday life. Finally, the need for a comprehensive theoretical foundation and for standardization of measurement situations and methods is emphasized.

  20. Sustainable Procurement in the United Nations

    DEFF Research Database (Denmark)

    Lund-Thomsen, Peter; Costa, Nives

    2011-01-01

    This paper deals with the integration of economic, social and environmental criteria into the purchasing practices of the United Nations (UN) system--also known as the UN engagement in sustainable procurement (SP). We argue that the debates about the pros and cons of the UN engaging in SP are highly contested among UN procurement officers and member states. However, so far the debate has mostly been based on assumptions about how the implementation of SP might affect developing country stakeholders. In fact, very few academic studies have been made of the economic, social and environmental...

  1. Neural pathways for visual speech perception

    Directory of Open Access Journals (Sweden)

    Lynne E Bernstein

    2014-12-01

    Full Text Available This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread, diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

  2. Russian Speech in Radio: Norm and Deviation

    Directory of Open Access Journals (Sweden)

    Igor V. Nefedov

    2017-06-01

    Full Text Available National radio, like television, is called upon to bring to the masses not only relevant information but also a high culture of language. Serious demands have always been placed on oral public speech with regard to the correctness and uniformity of pronunciation. However, analysis of the language practice of broadcasting today often reveals a discrepancy between the use of linguistic resources and existing literary norms. From the end of December 2016 to early April 2017, the author of the article listened to and analyzed, from the point of view of language correctness, the majority of programs on the radio station Komsomolskaya Pravda (KP). In general, while recognizing the good speech qualification of the workers of this radio station, as well as of their «guests» (political scientists, lawyers, historians, etc.), one cannot but note the presence of a significant number of errors in their speech. The material presented in the article allows us to conclude that broadcasting is currently losing its position in the field of speech culture. Neglect of the rules of the Russian language on the radio station «Komsomolskaya Pravda» negatively affects the image of the Russian language that is formed in the minds of listeners. The language of radio should strive to become a standard of cleanliness and high culture for the population, since it has enormous power of mass impact and supports the unity of the cultural and linguistic space.

  3. United nations internship programme policy and the need for its amendment

    Directory of Open Access Journals (Sweden)

    Novaković Marko

    2017-01-01

    Full Text Available An internship at the United Nations is an opportunity that young people interested in international law, international relations, and many other fields perceive as the best possible career starting point, and rightfully so. The United Nations internship is an experience second to none in the world of international organizations, and this is why it must be available to the widest range of people, regardless of their status, place of birth and social context. However, the current United Nations internship policy is very controversial and in desperate need of change. While voices calling for a change of policy are raised more and more, this topic has very rarely been addressed in academic literature across the world, and papers and books dealing exclusively with this issue are almost non-existent. In this article, the author will address the main points of concern regarding unpaid internships and will offer potential solutions for improvement. This article is a humble contribution that will hopefully instigate wider academic acknowledgment of this problem and eventually contribute to the resolution of this unfortunate practice.

  4. Part-of-speech effects on text-to-speech synthesis

    CSIR Research Space (South Africa)

    Schlunz, GI

    2010-11-01

    Full Text Available One of the goals of text-to-speech (TTS) systems is to produce natural-sounding synthesised speech. Towards this end various natural language processing (NLP) tasks are performed to model the prosodic aspects of the TTS voice. One of the fundamental...

  5. Suppression of the µ rhythm during speech and non-speech discrimination revealed by independent component analysis: implications for sensorimotor integration in speech processing.

    Science.gov (United States)

    Bowers, Andrew; Saltuklaroglu, Tim; Harkrider, Ashley; Cuellar, Megan

    2013-01-01

    Constructivist theories propose that articulatory hypotheses about incoming phonetic targets may function to enhance perception by limiting the possibilities for sensory analysis. To provide evidence for this proposal, it is necessary to map ongoing, high-temporal-resolution changes in sensorimotor activity (i.e., the sensorimotor μ rhythm) to accurate speech and non-speech discrimination performance (i.e., correct trials). Sixteen participants (15 female and 1 male) were asked to passively listen to or actively identify speech and tone-sweeps in a two-alternative forced-choice discrimination task while the electroencephalograph (EEG) was recorded from 32 channels. The stimuli were presented at signal-to-noise ratios (SNRs) at which discrimination accuracy was high (i.e., 80-100%) and at low SNRs producing discrimination performance at chance. EEG data were decomposed using independent component analysis and clustered across participants using principal component methods in EEGLAB. ICA revealed left and right sensorimotor µ components for 14/16 and 13/16 participants, respectively, that were identified on the basis of scalp topography, spectral peaks, and localization to the precentral and postcentral gyri. Time-frequency analysis of left and right lateralized µ component clusters revealed significant (pFDR-corrected) differences for speech discrimination trials relative to chance trials following stimulus offset. Findings are consistent with constructivist, internal model theories proposing that early forward motor models generate predictions about likely phonemic units that are then synthesized with incoming sensory cues during active as opposed to passive processing. Future directions and possible translational value for clinical populations in which sensorimotor integration may play a functional role are discussed.
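The ICA decomposition step described in this record can be illustrated in a few lines. The sketch below is a hypothetical example on synthetic two-source data (not the study's 32-channel EEG or EEGLAB pipeline), using scikit-learn's FastICA to unmix linearly mixed signals:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic sources standing in for rhythmic EEG activity.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
s1 = np.sin(2 * np.pi * 10 * t)             # 10 Hz "mu-like" rhythm
s2 = np.sign(np.sin(2 * np.pi * 3 * t))     # slow square wave
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((2000, 2))

A = np.array([[1.0, 0.5], [0.4, 1.2]])      # unknown channel mixing
X = S @ A.T                                  # observed "channels"

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                 # estimated sources

# Each estimated component should correlate strongly with one true
# source (recovery is only up to sign and permutation).
corr = np.abs(np.corrcoef(S.T, S_est.T)[:2, 2:])
print(corr.max(axis=1))
```

In practice the study's components were then screened by scalp topography and spectral peaks; here the cross-correlation matrix plays that validation role.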

  6. 75 FR 26701 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Science.gov (United States)

    2010-05-12

    ...] Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities... proposed compensation rates for Interstate TRS, Speech-to-Speech Services (STS), Captioned Telephone... costs reported in the data submitted to NECA by VRS providers. In this regard, document DA 10-761 also...

  7. Predicting automatic speech recognition performance over communication channels from instrumental speech quality and intelligibility scores

    NARCIS (Netherlands)

    Gallardo, L.F.; Möller, S.; Beerends, J.

    2017-01-01

    The performance of automatic speech recognition based on coded-decoded speech heavily depends on the quality of the transmitted signals, determined by channel impairments. This paper examines relationships between speech recognition performance and measurements of speech quality and intelligibility

  8. [Non-speech oral motor treatment efficacy for children with developmental speech sound disorders].

    Science.gov (United States)

    Ygual-Fernandez, A; Cervera-Merida, J F

    2016-01-01

    In the treatment of speech disorders by means of speech therapy two antagonistic methodological approaches are applied: non-verbal ones, based on oral motor exercises (OME), and verbal ones, which are based on speech processing tasks with syllables, phonemes and words. In Spain, OME programmes are called 'programas de praxias', and are widely used and valued by speech therapists. To review the studies conducted on the effectiveness of OME-based treatments applied to children with speech disorders and the theoretical arguments that could justify, or not, their usefulness. Over the last few decades evidence has been gathered about the lack of efficacy of this approach to treat developmental speech disorders and pronunciation problems in populations without any neurological alteration of motor functioning. The American Speech-Language-Hearing Association has advised against its use taking into account the principles of evidence-based practice. The knowledge gathered to date on motor control shows that the pattern of mobility and its corresponding organisation in the brain are different in speech and other non-verbal functions linked to nutrition and breathing. Neither the studies on their effectiveness nor the arguments based on motor control studies recommend the use of OME-based programmes for the treatment of pronunciation problems in children with developmental language disorders.

  9. Ethiopia before the United Nations Treaty Monitoring Bodies

    Directory of Open Access Journals (Sweden)

    Eva Brems

    2007-08-01

    Full Text Available Among the many human rights conventions adopted by the UN, seven are known — together with their additional protocols — as the core international human rights instruments: - The International Convention on the Elimination of All Forms of Racial Discrimination; - The International Covenant on Civil and Political Rights; - The International Covenant on Economic, Social and Cultural Rights; - The Convention on the Elimination of all Forms of Discrimination against Women; - The Convention against Torture and other Cruel, Inhuman or Degrading Treatment or Punishment; - The Convention on the Rights of the Child; - The International Convention on the Protection of the Rights of all Migrant Workers and Members of their Families. The main international control mechanism under these conventions is what may be considered the standard mechanism in international human rights protection: state reporting before an international committee. An initial report is usually due one year after joining the treaty, and afterwards reports are due periodically (every four or five years). The international committees examine the reports submitted by the state parties. In the course of this examination they include information from other sources, such as the press, other United Nations materials or NGO information. They also hold a meeting with representatives of the state submitting the report. At the end of this process the committee issues 'concluding observations' or 'concluding comments'. This paper focuses on the experience of one state — Ethiopia — with the seven core human rights treaties. This should allow the reader to gain insights both into the human rights situation in Ethiopia and into the functioning of the United Nations human rights protection system. Key Words: United Nations, Human Rights Conventions, State Reporting, Human Rights Situation in Ethiopia

  10. Current National Weather Service Watches, Warnings, or Advisories for the United States

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The National Weather Service (NWS) Storm Prediction Center uses RSS feeds to disseminate all watches, warnings and advisories for the United States that are...

  11. Research of Features of the Phonetic System of Speech and Identification of Announcers on the Voice

    Directory of Open Access Journals (Sweden)

    Roman Aleksandrovich Vasilyev

    2013-02-01

    Full Text Available In this work, a method for the phonetic analysis of speech is proposed: the extraction of elementary speech units, such as individual phonemes, from the continuous stream of a specific announcer's informal conversation. A practical algorithm for speaker identification, i.e., the process of determining which of a set of announcers is speaking, is described.

  12. Denmark's National Inventory Report - Submitted under the United Nations Framework Convention on Climate Change, 1990-2001

    DEFF Research Database (Denmark)

    Illerup, J. B.; Lyck, E.; Nielsen, M.

    This report is Denmark's National Inventory Report to the Conference of the Parties under the United Nations Framework Convention on Climate Change (UNFCCC), due by 15 April 2003. The report contains information on Denmark's inventories for all years from 1990 to 2001 for CO2, CH4, N2O, CO, NMVOC, SO2, HFCs, PFCs and SF6.

  13. 75 FR 54040 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Science.gov (United States)

    2010-09-03

    ...] Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities...; speech-to-speech (STS); pay-per-call (900) calls; types of calls; and equal access to interexchange... of a report, due April 16, 2011, addressing whether it is necessary for the waivers to remain in...

  14. Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems

    Science.gov (United States)

    Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan

    2010-01-01

    A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the statistical nature of signal (i.e., speech) and noise that exist in the spatial and temporal domains. As a result, the automatic speech recognition (ASR) accuracy can be improved to the level at which crewmembers would find the speech interface useful. The developed speech human/machine interface will enable both crewmember usability and operational efficiency. It offers a fast rate of data/text entry, a small overall size, and light weight. In addition, this design will free the hands and eyes of a suited crewmember. The system components and steps include beamforming/multichannel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, ASR HMM (Hidden Markov Model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise. When it is used in spacesuits, the rate drops to about 33 percent. With the developed microphone array speech-processing technologies, the performance is improved and the phoneme recognition accuracy rate rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and using speech samples collected from inside spacesuits. In addition, arithmetic complexity models for the major HMM-based ASR components were developed. They can help real-time ASR system designers select proper tasks when faced with constraints in computational resources.
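The multichannel noise-reduction idea in this record can be sketched with the simplest beamformer. The example below is a hedged illustration of delay-and-sum beamforming, not the spacesuit system's actual algorithm; the per-microphone delays, noise level, and sinusoidal "speech" are all assumed for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 16000
n = fs  # one second of samples
speech = np.sin(2 * np.pi * 440 * np.arange(n) / fs)  # stand-in "speech"

delays = [0, 3, 5, 8]  # assumed per-microphone delays, in samples
mics = [np.roll(speech, d) + 0.5 * rng.standard_normal(n) for d in delays]

# Align each channel by its known delay, then average: the coherent
# speech adds up while uncorrelated noise partially cancels
# (roughly 1/M of the noise power for M microphones).
aligned = [np.roll(x, -d) for x, d in zip(mics, delays)]
out = np.mean(aligned, axis=0)

def snr_db(sig, ref):
    noise = sig - ref
    return 10 * np.log10(np.sum(ref**2) / np.sum(noise**2))

print(snr_db(mics[0], speech), snr_db(out, speech))
```

With four microphones and this noise level, the beamformed output gains roughly 6 dB of SNR over a single channel; real systems must also estimate the delays (steering) rather than assume them.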

  15. The United Nations Framework Classification for World Petroleum Resources

    Science.gov (United States)

    Ahlbrandt, T.S.; Blystad, P.; Young, E.D.; Slavov, S.; Heiberg, S.

    2003-01-01

    The United Nations has developed an international framework classification for solid fuels and minerals (UNFC). This is now being extended to petroleum by building on the joint classification of the Society of Petroleum Engineers (SPE), the World Petroleum Congresses (WPC) and the American Association of Petroleum Geologists (AAPG). The UNFC is a 3-dimensional classification: this is necessary in order to migrate accounts of resource quantities that are developed on one or two of the axes to the common basis, and it provides for more precise reporting and analysis, which is particularly useful in analyses of contingent resources. The characteristics of the SPE/WPC/AAPG classification have been preserved and enhanced to facilitate improved international and national petroleum resource management, corporate business process management and financial reporting. A UN intergovernmental committee responsible for extending the UNFC to extractive energy resources (coal, petroleum and uranium) will meet in Geneva on October 30th and 31st to review experiences gained and comments received during 2003. A recommended classification will then be delivered for consideration to the United Nations through the Committee on Sustainable Energy of the Economic Commission for Europe (UN ECE).

  16. Model United Nations comes to CERN

    CERN Multimedia

    Anaïs Schaeffer

    2012-01-01

    From 20 to 22 January pupils from international schools in Switzerland, France and Turkey came to CERN for three days of "UN-type" conferences.   The MUN organisers, who are all pupils at the Lycée international in Ferney-Voltaire, worked tirelessly for weeks to make the event a real success. The members of the MUN/MFNU association at the Lycée international in Ferney-Voltaire spent several months preparing for their first "Model United Nations" (MUN),  a simulation of a UN session at which young "diplomats" take on the role of delegates representing different nations to discuss a given topic. And as their chosen topic was science, it was only natural that they should hold the event at CERN. For three days, from 20 to 22 January, no fewer than 340 pupils from 12 international schools* in Switzerland, France and Turkey came together to deliberate, consult and debate on the importance of scientific progress fo...

  17. Environmental Contamination of Normal Speech.

    Science.gov (United States)

    Harley, Trevor A.

    1990-01-01

    Environmentally contaminated speech errors (irrelevant words or phrases derived from the speaker's environment and erroneously incorporated into speech) are hypothesized to occur at a high level of speech processing, but with a relatively late insertion point. The data indicate that speech production processes are not independent of other…

  18. Updated United Nations Framework Classification for reserves and resources of extractive industries

    Science.gov (United States)

    Ahlbrandt, T.S.; Blaise, J.R.; Blystad, P.; Kelter, D.; Gabrielyants, G.; Heiberg, S.; Martinez, A.; Ross, J.G.; Slavov, S.; Subelj, A.; Young, E.D.

    2004-01-01

    The United Nations has studied how the oil and gas resource classification developed jointly by the SPE, the World Petroleum Congress (WPC) and the American Association of Petroleum Geologists (AAPG) could be harmonized with the United Nations Framework Classification (UNFC) for Solid Fuel and Mineral Resources (1). The United Nations has continued to build on this and other works, with support from many relevant international organizations, with the objective of updating the UNFC to apply to the extractive industries. The result is the United Nations Framework Classification for Energy and Mineral Resources (2), which this paper will present. Reserves and resources are categorized with respect to three sets of criteria: economic and commercial viability; field project status and feasibility; and the level of geologic knowledge. The field project status criteria are readily recognized as the ones highlighted in the SPE/WPC/AAPG classification system of 2000. The geologic criteria absorb the rich traditions that form the primary basis for the Russian classification system, and the ones used to delimit, in part, proved reserves. Economic and commercial criteria facilitate the use of the classification in general, and reflect the commercial considerations used to delimit proved reserves in particular. The classification system will help to develop a common understanding of reserves and resources for all the extractive industries and will assist: international and national resources management to secure supplies; industries' management of business processes to achieve efficiency in exploration and production; and an appropriate basis for documenting the value of reserves and resources in financial statements.

  19. Emotionally conditioning the target-speech voice enhances recognition of the target speech under "cocktail-party" listening conditions.

    Science.gov (United States)

    Lu, Lingxi; Bao, Xiaohan; Chen, Jing; Qu, Tianshu; Wu, Xihong; Li, Liang

    2018-05-01

    Under a noisy "cocktail-party" listening condition with multiple people talking, listeners can use various perceptual/cognitive unmasking cues to improve recognition of the target speech against informational speech-on-speech masking. One potential unmasking cue is the emotion expressed in a speech voice, by means of certain acoustical features. However, it was unclear whether emotionally conditioning a target-speech voice that has none of the typical acoustical features of emotions (i.e., an emotionally neutral voice) can be used by listeners for enhancing target-speech recognition under speech-on-speech masking conditions. In this study we examined the recognition of target speech against a two-talker speech masker both before and after the emotionally neutral target voice was paired with a loud female screaming sound that has a marked negative emotional valence. The results showed that recognition of the target speech (especially the first keyword in a target sentence) was significantly improved by emotionally conditioning the target speaker's voice. Moreover, the emotional unmasking effect was independent of the unmasking effect of the perceived spatial separation between the target speech and the masker. Also, (skin conductance) electrodermal responses became stronger after emotional learning when the target speech and masker were perceptually co-located, suggesting an increase of listening efforts when the target speech was informationally masked. These results indicate that emotionally conditioning the target speaker's voice does not change the acoustical parameters of the target-speech stimuli, but the emotionally conditioned vocal features can be used as cues for unmasking target speech.

  20. United Nations' Concept of Justice and Fairness in The Context of ...

    African Journals Online (AJOL)

    Perhaps the inability of the United Nations to manage some international conflicts successfully, coupled with its passivity on matters that involve some powerful nations, may be responsible for its criticism by some analysts. These critics, in turn, may not have considered holistically the UN programmes which have ...

  1. Multilevel Analysis in Analyzing Speech Data

    Science.gov (United States)

    Guddattu, Vasudeva; Krishna, Y.

    2011-01-01

    The speech produced by human vocal tract is a complex acoustic signal, with diverse applications in phonetics, speech synthesis, automatic speech recognition, speaker identification, communication aids, speech pathology, speech perception, machine translation, hearing research, rehabilitation and assessment of communication disorders and many…

  2. Perceived Liveliness and Speech Comprehensibility in Aphasia: The Effects of Direct Speech in Auditory Narratives

    Science.gov (United States)

    Groenewold, Rimke; Bastiaanse, Roelien; Nickels, Lyndsey; Huiskes, Mike

    2014-01-01

    Background: Previous studies have shown that in semi-spontaneous speech, individuals with Broca's and anomic aphasia produce relatively many direct speech constructions. It has been claimed that in "healthy" communication direct speech constructions contribute to the liveliness, and indirectly to the comprehensibility, of speech.…

  3. Speech Enhancement by MAP Spectral Amplitude Estimation Using a Super-Gaussian Speech Model

    Directory of Open Access Journals (Sweden)

    Lotter Thomas

    2005-01-01

    Full Text Available This contribution presents two spectral amplitude estimators for acoustical background noise suppression based on maximum a posteriori estimation and super-Gaussian statistical modelling of the speech DFT amplitudes. The probability density function of the speech spectral amplitude is modelled with a simple parametric function, which allows a high approximation accuracy for Laplace- or Gamma-distributed real and imaginary parts of the speech DFT coefficients. Also, the statistical model can be adapted to optimally fit the distribution of the speech spectral amplitudes for a specific noise reduction system. Based on the super-Gaussian statistical model, computationally efficient maximum a posteriori speech estimators are derived, which outperform the commonly applied Ephraim-Malah algorithm.
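For readers unfamiliar with spectral amplitude estimation, the snippet below sketches the Gaussian-model baseline (a Wiener gain applied per time-frequency bin) that MAP super-Gaussian estimators of this kind refine. It is not the estimator derived in the paper; the speech and noise power spectra are synthetic, and the noise PSD is assumed known rather than estimated:

```python
import numpy as np

rng = np.random.default_rng(2)
n_frames, n_bins = 50, 257
speech_psd = rng.gamma(2.0, 1.0, (n_frames, n_bins))  # synthetic speech PSD
noise_psd = np.full(n_bins, 0.5)                       # assumed-known noise PSD

noisy_psd = speech_psd + noise_psd

# A priori SNR per bin (in practice estimated, e.g. decision-directed;
# exact here because the synthetic speech PSD is known).
xi = speech_psd / noise_psd
gain = xi / (1.0 + xi)        # Wiener gain, always between 0 and 1

# Estimated clean spectral amplitude: gain times the noisy amplitude.
est_amplitude = gain * np.sqrt(noisy_psd)
print(gain.min(), gain.max())
```

The super-Gaussian MAP estimators in the record replace this Gaussian-prior gain with one derived from Laplace- or Gamma-shaped speech priors, which suppresses low-SNR bins more aggressively.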

  4. Ear, Hearing and Speech

    DEFF Research Database (Denmark)

    Poulsen, Torben

    2000-01-01

    An introduction is given to the anatomy and the function of the ear, basic psychoacoustic matters (hearing threshold, loudness, masking), the speech signal, and speech intelligibility. The lecture note is written for the course Fundamentals of Acoustics and Noise Control (51001).

  5. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech and music

    Directory of Open Access Journals (Sweden)

    Hwee Ling Lee

    2014-08-01

    Full Text Available This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogues of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past three years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practicing fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practicing was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and, to a marginally significant degree, to natural speech.

  6. Pacific Northwest National Laboratory Facility Radionuclide Emissions Units and Sampling Systems

    Energy Technology Data Exchange (ETDEWEB)

    Barnett, J. Matthew; Brown, Jason H.; Walker, Brian A.

    2012-04-01

    Battelle–Pacific Northwest Division operates numerous research and development (R&D) laboratories in Richland, WA, including those associated with Pacific Northwest National Laboratory (PNNL) on the U.S. Department of Energy (DOE)’s Hanford Site and PNNL Site that have the potential for radionuclide air emissions. The National Emission Standard for Hazardous Air Pollutants (NESHAP 40 CFR 61, Subparts H and I) requires an assessment of all emission units that have the potential for radionuclide air emissions. Potential emissions are assessed annually by PNNL staff members. Sampling, monitoring, and other regulatory compliance requirements are designated based upon the potential-to-emit dose criteria found in the regulations. The purpose of this document is to describe the facility radionuclide air emission sampling program and provide current and historical facility emission unit system performance, operation, and design information. For sampled systems, a description of the buildings, exhaust units, control technologies, and sample extraction details is provided for each registered emission unit. Additionally, applicable stack sampler configuration drawings, figures, and photographs are provided. Deregistered emission unit details are provided as necessary for up to 5 years post closure.

  7. Effect of gap detection threshold on consistency of speech in children with speech sound disorder.

    Science.gov (United States)

    Sayyahi, Fateme; Soleymani, Zahra; Akbari, Mohammad; Bijankhan, Mahmood; Dolatshahi, Behrooz

    2017-02-01

    The present study examined the relationship between gap detection threshold and speech error consistency in children with speech sound disorder. The participants were children five to six years of age who were categorized into three groups of typical speech, consistent speech disorder (CSD) and inconsistent speech disorder (ISD). The phonetic gap detection threshold test was used for this study, which is a valid test comprising six syllables with inter-stimulus intervals between 20 and 300 ms. The participants were asked to listen to the recorded stimuli three times and indicate whether they heard one or two sounds. There was no significant difference between the typical and CSD groups (p=0.55), but there were significant differences in performance between the ISD and CSD groups and the ISD and typical groups (p=0.00). The ISD group discriminated between speech sounds at a higher threshold. Children with inconsistent speech errors could not distinguish speech sounds during time-limited phonetic discrimination. It is suggested that inconsistency in speech is a representation of inconsistency in auditory perception, which is caused by a high gap detection threshold. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Speech Perception as a Multimodal Phenomenon

    OpenAIRE

    Rosenblum, Lawrence D.

    2008-01-01

    Speech perception is inherently multimodal. Visual speech (lip-reading) information is used by all perceivers and readily integrates with auditory speech. Imaging research suggests that the brain treats auditory and visual speech similarly. These findings have led some researchers to consider that speech perception works by extracting amodal information that takes the same form across modalities. From this perspective, speech integration is a property of the input information itself. Amodal s...

  9. Poor Speech Perception Is Not a Core Deficit of Childhood Apraxia of Speech: Preliminary Findings

    Science.gov (United States)

    Zuk, Jennifer; Iuzzini-Seigel, Jenya; Cabbage, Kathryn; Green, Jordan R.; Hogan, Tiffany P.

    2018-01-01

    Purpose: Childhood apraxia of speech (CAS) is hypothesized to arise from deficits in speech motor planning and programming, but the influence of abnormal speech perception in CAS on these processes is debated. This study examined speech perception abilities among children with CAS with and without language impairment compared to those with…

  10. EPA's Role in the United Nations Economic and Social Council

    Science.gov (United States)

    The United Nations Economic and Social Council (ECOSOC) considers the world’s economic, social, and environmental challenges. ECOSOC is composed of subsidiary bodies, including the recently concluded Commission on Sustainable Development (CSD).

  11. Fourth national communication to the United Nations Framework Convention on Climate Change

    International Nuclear Information System (INIS)

    2006-01-01

    France, like the other parties involved, must periodically report on its actions against climate change. This fourth national communication follows an outline defined by the Conference of the Parties to the United Nations Framework Convention on Climate Change, and succeeds the third national communication published in 2001. Its nine chapters present the actions taken to reduce and halt greenhouse gas emissions and to limit their impacts on the environment and public health: an analytical summary, the national circumstances, the inventory, the policies and measures, the projections and overall effects of the policies and measures, the assessment of vulnerability and of the consequences of climate change together with adaptation measures, the financial resources and technology transfer, the research programmes, and education, training and public awareness. (A.L.B.)

  12. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication, including voice, will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  13. The role of the United Nations in the field of verification

    International Nuclear Information System (INIS)

    1991-01-01

    By resolution 43/81 B of 7 December 1988, the General Assembly requested the Secretary-General to undertake, with the assistance of a group of qualified governmental experts, an in-depth study of the role of the United Nations in the field of verification. In August 1990, the Secretary-General transmitted to the General Assembly the unanimously approved report of the experts. The report is structured in six chapters and contains a bibliographic appendix on technical aspects of verification. The Introduction provides a brief historical background on the development of the question of verification in the United Nations context, culminating with the adoption by the General Assembly of resolution 43/81 B, which requested the study. Chapters II and III address the definition and functions of verification and the various approaches, methods, procedures and techniques used in the process of verification. Chapters IV and V examine the existing activities of the United Nations in the field of verification, possibilities for improvements in those activities as well as possible additional activities, while addressing the organizational, technical, legal, operational and financial implications of each of the possibilities discussed. Chapter VI presents the conclusions and recommendations of the Group.

  14. The Neural Bases of Difficult Speech Comprehension and Speech Production: Two Activation Likelihood Estimation (ALE) Meta-Analyses

    Science.gov (United States)

    Adank, Patti

    2012-01-01

    The role of speech production mechanisms in difficult speech comprehension is the subject of on-going debate in speech science. Two Activation Likelihood Estimation (ALE) analyses were conducted on neuroimaging studies investigating difficult speech comprehension or speech production. Meta-analysis 1 included 10 studies contrasting comprehension…

  15. Metaheuristic applications to speech enhancement

    CERN Document Server

    Kunche, Prajna

    2016-01-01

    This book serves as a basic reference for those interested in the application of metaheuristics to speech enhancement. The major goal of the book is to explain the basic concepts of optimization methods and their use in heuristic optimization in speech enhancement to scientists, practicing engineers, and academic researchers in speech processing. The authors discuss why it has been a challenging problem for researchers to develop new enhancement algorithms that aid the quality and intelligibility of degraded speech. They present powerful optimization methods for speech enhancement that can help to solve noise reduction problems. Readers will be able to understand the fundamentals of speech processing as well as the optimization techniques, how the speech enhancement algorithms are implemented by utilizing optimization methods, and will be given the tools to develop new algorithms. The authors also provide a comprehensive literature survey regarding the topic.

  16. Systematic Studies of Modified Vocalization: The Effect of Speech Rate on Speech Production Measures during Metronome-Paced Speech in Persons Who Stutter

    Science.gov (United States)

    Davidow, Jason H.

    2014-01-01

    Background: Metronome-paced speech results in the elimination, or substantial reduction, of stuttering moments. The cause of fluency during this fluency-inducing condition is unknown. Several investigations have reported changes in speech pattern characteristics from a control condition to a metronome-paced speech condition, but failure to control…

  17. TongueToSpeech (TTS): Wearable wireless assistive device for augmented speech.

    Science.gov (United States)

    Marjanovic, Nicholas; Piccinini, Giacomo; Kerr, Kevin; Esmailbeigi, Hananeh

    2017-07-01

    Speech is an important aspect of human communication; individuals with speech impairment are unable to communicate vocally in real time. Our team has developed the TongueToSpeech (TTS) device with the goal of augmenting speech communication for the vocally impaired. The proposed device is a wearable wireless assistive device that incorporates a capacitive touch keyboard interface embedded inside a discrete retainer. This device connects to a computer, tablet or a smartphone via a Bluetooth connection. The developed TTS application converts text typed by the tongue into audible speech. Our studies have concluded that an 8-contact-point configuration between the tongue and the TTS device would yield the best user precision and speed performance. On average, using the TTS device inside the oral cavity takes 2.5 times longer than the pointer finger using a T9 (Text on 9 keys) keyboard configuration to type the same phrase. In conclusion, we have developed a discrete noninvasive wearable device that allows vocally impaired individuals to communicate in real time.

  18. Social eye gaze modulates processing of speech and co-speech gesture.

    Science.gov (United States)

    Holler, Judith; Schubotz, Louise; Kelly, Spencer; Hagoort, Peter; Schuetze, Manuela; Özyürek, Aslı

    2014-12-01

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech+gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker's preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients' speech processing suffers, gestures can enhance the comprehension of a speaker's message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Nigeria and the United States: An Analysis of National Goals

    National Research Council Canada - National Science Library

    McCarthy, John M

    2008-01-01

    Since the beginning of the 21st century, the continent of Africa has regained its importance to the United States and other developed nations, primarily due to its vast amounts of untapped resources...

  20. Education for Sustainable Development at the United Nations Conference on Sustainable Development (Rio+20)

    Science.gov (United States)

    Journal of Education for Sustainable Development, 2012

    2012-01-01

    The United Nations Conference on Sustainable Development (Rio+20) was held in Rio de Janeiro, Brazil, 20-22 June 2012, marking the twentieth anniversary of the United Nations Conference on Sustainable Development in Rio de Janeiro in 1992 and the tenth anniversary of the 2002 World Summit on Sustainable Development in Johannesburg. With more than…

  1. Electrophysiological evidence for speech-specific audiovisual integration.

    Science.gov (United States)

    Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode. © 2013 Published by Elsevier Ltd.

  2. ROTARY DAY AT THE UNITED NATIONS OFFICE IN GENEVA

    CERN Multimedia

    Staff Association

    2017-01-01

    We have been informed about the Rotary day at the United Nations office in Geneva. Join us on November 10th & 11th, 2017 at the United Nations office, Avenue de la Paix 8-14, 1211 Geneva, Switzerland. PEACE: MAKING A DIFFERENCE! Conflict and violence displace millions of people each year. Half of those killed in conflict are children, and 90 percent are civilians. We, Rotarians, refuse conflict as a way of life. But how can we contribute to Peace? And what about you? Are you keen on meeting exceptional individuals and exchanging ideas to move forward? Would you like to network and collaborate with Rotarians, Government Representatives, International Civil Servants, Representatives of Nongovernmental Organizations and Liberal Professions, Businessmen/women, and Students to make a difference in Peace? In November 2017, come to Geneva, get involved, and formulate recommendations to the international community. Together, we’ll celebrate Rotary...

  3. United Nations programme for the assistance in Uruguay mining exploration

    International Nuclear Information System (INIS)

    1976-01-01

    The Uruguayan government asked the United Nations to develop a technical assistance programme covering geological study of the Valentines iron deposits. This agreement was signed as "Mining prospection assistance in Uruguay".

  4. Visual context enhanced. The joint contribution of iconic gestures and visible speech to degraded speech comprehension.

    NARCIS (Netherlands)

    Drijvers, L.; Özyürek, A.

    2017-01-01

    Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech

  5. Denmark's National Inventory Reports. Submitted under the United Nations framework convention on climate change

    Energy Technology Data Exchange (ETDEWEB)

    Boll Illerup, J.; Lyck, E.; Winther, M. [Danmarks Miljoeundersoegelser, Afd. for Systemanalyse (Denmark); Rasmussen, E. [Energistyrelsen (Denmark)

    2000-05-01

    This report is Denmark's National Inventory Report reported to the Conference of the Parties under the United Nations Framework Convention on Climate Change (UNFCCC), due by 15 April 2000. The report contains information on Denmark's inventories for all years from 1990 to 1998 for CO{sub 2}, CH{sub 4}, N{sub 2}O, NO{sub x}, CO, NMVOC, SO{sub 2}, HFCs, PFCs and SF{sub 6}. (au)

  6. A Comparison of Coverage of Speech and Press Verdicts of Supreme Court.

    Science.gov (United States)

    Hale, F. Dennis

    1979-01-01

    An analysis of the coverage by ten newspapers of 20 United States Supreme Court decisions concerning freedom of the press and 20 decisions concerning freedom of speech revealed that the newspapers gave significantly greater coverage to the press decisions. (GT)

  7. Multisensory integration of speech sounds with letters vs. visual speech : only visual speech induces the mismatch negativity

    NARCIS (Netherlands)

    Stekelenburg, J.J.; Keetels, M.N.; Vroomen, J.H.M.

    2018-01-01

    Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect.

  8. Speech Research

    Science.gov (United States)

    Several articles addressing topics in speech research are presented. The topics include: exploring the functional significance of physiological tremor: a biospectroscopic approach; differences between experienced and inexperienced listeners to deaf speech; a language-oriented view of reading and its disabilities; phonetic factors in letter detection; categorical perception; short-term recall by deaf signers of American Sign Language; a common basis for auditory sensory storage in perception and immediate memory; phonological awareness and verbal short-term memory; initiation versus execution time during manual and oral counting by stutterers; trading relations in the perception of speech by five-year-old children; the role of the strap muscles in pitch lowering; phonetic validation of distinctive features; consonants and syllable boundaries; and vowel information in postvocalic fricatives.

  9. Represented Speech in Qualitative Health Research

    DEFF Research Database (Denmark)

    Musaeus, Peter

    2017-01-01

    Represented speech refers to speech where we reference somebody. Represented speech is an important phenomenon in everyday conversation, health care communication, and qualitative research. This case will draw first from a case study on physicians’ workplace learning and second from a case study on nurses’ apprenticeship learning. The aim of the case is to guide the qualitative researcher to use own and others’ voices in the interview and to be sensitive to represented speech in everyday conversation. Moreover, reported speech matters to health professionals who aim to represent the voice of their patients. Qualitative researchers and students might learn to encourage interviewees to elaborate different voices or perspectives. Qualitative researchers working with natural speech might pay attention to how people talk and use represented speech. Finally, represented speech might be relevant...

  10. Spectral integration in speech and non-speech sounds

    Science.gov (United States)

    Jacewicz, Ewa

    2005-04-01

    Spectral integration (or formant averaging) was proposed in vowel perception research to account for the observation that a reduction of the intensity of one of two closely spaced formants (as in /u/) produced a predictable shift in vowel quality [Delattre et al., Word 8, 195-210 (1952)]. A related observation was reported in psychoacoustics, indicating that when the components of a two-tone periodic complex differ in amplitude and frequency, its perceived pitch is shifted toward that of the more intense tone [Helmholtz, App. XIV (1875/1948)]. Subsequent research in both fields focused on the frequency interval that separates these two spectral components, in an attempt to determine the size of the bandwidth for spectral integration to occur. This talk will review the accumulated evidence for and against spectral integration within the hypothesized limit of 3.5 Bark for static and dynamic signals in speech perception and psychoacoustics. Based on similarities in the processing of speech and non-speech sounds, it is suggested that spectral integration may reflect a general property of the auditory system. A larger frequency bandwidth, possibly close to 3.5 Bark, may be utilized in integrating acoustic information, including speech, complex signals, or sound quality of a violin.
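
    The 3.5 Bark criterion discussed in this abstract can be made concrete with a short sketch (the Traunmüller Hz-to-Bark approximation is assumed here; the function names are illustrative): two spectral components are candidates for integration when their center frequencies lie within 3.5 Bark of each other.

```python
# Sketch: test whether two spectral components fall within the hypothesized
# 3.5 Bark integration bandwidth. Assumes Traunmüller's approximation of
# the Bark scale; names and example frequencies are illustrative.

def hz_to_bark(f_hz: float) -> float:
    """Critical-band rate (Bark) for a frequency in Hz (Traunmüller formula)."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def within_integration_band(f1_hz: float, f2_hz: float, limit=3.5) -> bool:
    """True if the two components are separated by no more than `limit` Bark."""
    return abs(hz_to_bark(f1_hz) - hz_to_bark(f2_hz)) <= limit

# Closely spaced formants (e.g., the high F2/F3 region) fall inside the
# hypothesized band; widely spaced ones (e.g., a low F1 vs. F3) do not.
print(within_integration_band(2000.0, 2500.0))  # True  (~1.5 Bark apart)
print(within_integration_band(300.0, 2500.0))   # False (~11.5 Bark apart)
```

    Note that the Bark scale is roughly linear below 500 Hz and logarithmic above, so a fixed 3.5 Bark window corresponds to a much wider interval in Hz at high frequencies than at low ones.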

  11. The 2011 United Nations High-Level Meeting on Non ...

    African Journals Online (AJOL)

    The 2011 United Nations High-Level Meeting on Non- Communicable Diseases: The Africa agenda calls for a 5-by-5 approach. ... The Political Declaration issued at the meeting focused the attention of world leaders and the global health community on the prevention and control of noncommunicable diseases (NCDs).

  12. The challenges of preventive diplomacy: The United Nations' post ...

    African Journals Online (AJOL)

    In Africa, however, where international borders are porous and state organs are sometimes not in .... the media, and information dissemination form part of the options available to the United .... National boundaries are blurred by ..... and arrangements for the free flow of information, including the monitoring of regional arms ...

  13. Measurement of speech parameters in casual speech of dementia patients

    NARCIS (Netherlands)

    Ossewaarde, Roelant; Jonkers, Roel; Jalvingh, Fedor; Bastiaanse, Yvonne

    Roelant Adriaan Ossewaarde (1,2), Roel Jonkers (1), Fedor Jalvingh (1,3), Roelien Bastiaanse (1). Affiliations: 1. CLCG, University of Groningen (NL); 2. HU University of Applied Sciences Utrecht (NL); 3. St. Marienhospital Vechta, Geriatric Clinic, Vechta.

  14. Evidence-Based Speech-Language Pathology Practices in Schools: Findings from a National Survey

    Science.gov (United States)

    Hoffman, LaVae M.; Ireland, Marie; Hall-Mills, Shannon; Flynn, Perry

    2013-01-01

    Purpose: This study documented evidence-based practice (EBP) patterns as reported by speech-language pathologists (SLPs) employed in public schools during 2010-2011. Method: Using an online survey, practioners reported their EBP training experiences, resources available in their workplaces, and the frequency with which they engage in specific EBP…

  15. United Nations Environment Programme. Annual Report of the Executive Director, 1985.

    Science.gov (United States)

    United Nations Environment Programme, Nairobi (Kenya).

    This report to the Governing Council of the United Nations Environment Programme (UNEP) was prepared to provide the governments of member nations with information on what UNEP had done during 1985, and to serve as a communications mechanism to replace the usual meeting of the Governing Council in 1986. It contains chapters on: (1) the year in…

  16. Forest health monitoring in the United States: focus on national reports

    Science.gov (United States)

    Kurt Riitters; Kevin Potter

    2013-01-01

    The health and sustainability of United States forests have been monitored for many years from several different perspectives. The national Forest Health Monitoring (FHM) Program was established in 1990 by Federal and State agencies to develop a national system for monitoring and reporting on the status and trends of forest ecosystem health. We describe and illustrate...

  17. PTSD: National Center for PTSD

    Medline Plus


  18. Development of The Viking Speech Scale to classify the speech of children with cerebral palsy.

    Science.gov (United States)

    Pennington, Lindsay; Virella, Daniel; Mjøen, Tone; da Graça Andrada, Maria; Murray, Janice; Colver, Allan; Himmelmann, Kate; Rackauskaite, Gija; Greitane, Andra; Prasauskiene, Audrone; Andersen, Guro; de la Cruz, Javier

    2013-10-01

    Surveillance registers monitor the prevalence of cerebral palsy and the severity of resulting impairments across time and place. The motor disorders of cerebral palsy can affect children's speech production and limit their intelligibility. We describe the development of a scale to classify children's speech performance for use in cerebral palsy surveillance registers, and its reliability across raters and across time. Speech and language therapists, other healthcare professionals and parents classified the speech of 139 children with cerebral palsy (85 boys, 54 girls; mean age 6.03 years, SD 1.09) from observation and previous knowledge of the children. Another group of health professionals rated children's speech from information in their medical notes. With the exception of parents, raters reclassified children's speech at least four weeks after their initial classification. Raters were asked to rate how easy the scale was to use and how well the scale described the child's speech production using Likert scales. Inter-rater reliability was moderate to substantial (k>.58 for all comparisons). Test-retest reliability was substantial to almost perfect for all groups (k>.68). Over 74% of raters found the scale easy or very easy to use; 66% of parents and over 70% of health care professionals judged the scale to describe children's speech well or very well. We conclude that the Viking Speech Scale is a reliable tool to describe the speech performance of children with cerebral palsy, which can be applied through direct observation of children or through case note review. Copyright © 2013 Elsevier Ltd. All rights reserved.
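
    The inter-rater agreement statistic reported above (Cohen's kappa, e.g., k>.58) corrects raw percent agreement for the agreement expected by chance. A minimal sketch, using made-up ratings rather than the study's data:

```python
# Sketch: Cohen's kappa for two raters' categorical classifications
# (e.g., speech-scale levels 1-4). Illustrative data only, not the
# study's ratings.
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected if the raters labeled independently
    with their own marginal category frequencies."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in freq_a.keys() & freq_b.keys())
    return (p_o - p_e) / (1 - p_e)

# Two raters classify eight children into levels 1-4.
a = [1, 1, 2, 2, 3, 3, 4, 4]
b = [1, 1, 2, 3, 3, 3, 4, 2]
print(round(cohen_kappa(a, b), 3))  # 0.667
```

    On the conventional Landis and Koch benchmarks, values above .60 are read as substantial agreement, which is the range the scale's inter-rater and test-retest figures fall into.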

  19. SECURITY IN SUSTAINABLE DEVELOPMENT: COMPARING UNITED NATIONS 2030 AGENDA FOR SUSTAINABLE DEVELOPMENT WITH MILLENNIUM DECLARATION

    Directory of Open Access Journals (Sweden)

    Ahmet BARBAK

    2017-06-01

    This study aims to compare the United Nations 2030 Agenda for Sustainable Development with the Millennium Declaration in terms of their security conceptualizations, to explore changes in security thinking and policy components (goals, targets, principles, priorities, etc.) over time. In doing so, it is envisaged that the United Nations’ expectations of member states regarding their national security policies and organizations could be revealed. Security thinking has changed since the late 1980s with the introduction of the sustainable development approach by the United Nations. This shift in security thinking encompasses human security and the security-development nexus. Holding all member states responsible, the Millennium Declaration and the 2030 Agenda for Sustainable Development constitute the primary and the most recent outcome documents of the United Nations’ sustainable development policy. Both documents have security components. This enables extracting security elements and comparing them in an analytical manner. Consequently, findings are compared and discussed in terms of public policy and organization at the national level.

  20. Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension

    Science.gov (United States)

    Drijvers, Linda; Ozyurek, Asli

    2017-01-01

    Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Method:…

  1. Hate speech y tolerancia religiosa en el sistema helvético de democracia participativa // Hate speech and religious tolerance in the swiss participatory democracy.

    Directory of Open Access Journals (Sweden)

    David Martín Herrera

    2014-08-01

    On 21 December 1965, the General Assembly of the United Nations sounded an alarm over the constant manifestations of racial discrimination and over governmental policies based on racial superiority or hatred. The result was a convention condemning all propaganda and all organisations based on the idea of the superiority of one race or group of persons of a particular skin colour or ethnic origin. It declared illegal all organised propaganda activities and all those who would promote racial discrimination or incite to it. One year later, on 16 December 1966, the same Assembly announced another international covenant prohibiting any propaganda for war and any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence. Both were widely accepted and internationally ratified. However, more than four decades later, we still stand between zenith and nadir. Switzerland, too, was not immune to these manifestations of superiority and hatred. Its famous historical hospitality has suffered in recent years, on the one hand because of Swiss scepticism about accepting international law, and on the other because of the rise of ultra-conservative political parties which, through their speeches and propaganda, have on numerous occasions managed to incite against minorities in breach of international human rights law and national law: minorities whom they consider a threat to Swiss cultural and historical values.

  2. An analysis of leader-media conflicts through center-periphery paradigm: comparing media reactions after Davos upheaval of R.T. Erdoğan and Durban-lI speech of M. Ahmadinejad

    OpenAIRE

    Kılıç, Cihan; Kilic, Cihan

    2010-01-01

    On January 29, 2009, at the Davos International Meeting, Turkish Prime Minister Recep Tayyip Erdogan stormed out after the moderator didn’t allow him to speak during a debate with the Israeli president Simon Peres. Iranian president Mahmoud Ahmadinejad spoke at the United Nations Durban Review Conference on Racism on April 20, 2009. Delegates from twenty-three countries walked out of Iranian President Mahmoud Ahmadinejad's speech at the UN Durban Review Conference held in Geneva in response t...

  3. "A necessary supplement" : what the United Nations global compact is and is not

    OpenAIRE

    Rasche, Andreas

    2009-01-01

    The United Nations Global Compact is with currently more than 6,000 voluntary participants the world's largest corporate citizenship initiative. This article first analyzes three critical allegations often made against the Compact by looking at the academic and nonacademic literature. (1) The Compact supports the capture of the United Nations by "big business." (2) Its 10 principles are vague and thus hard to implement. (3) The Compact is not accountable due to an absence of verification mech...

  4. The Neural Basis of Speech Perception through Lipreading and Manual Cues: Evidence from Deaf Native Users of Cued Speech

    Science.gov (United States)

    Aparicio, Mario; Peigneux, Philippe; Charlier, Brigitte; Balériaux, Danielle; Kavec, Martin; Leybaert, Jacqueline

    2017-01-01

    We present here the first neuroimaging data for perception of Cued Speech (CS) by deaf adults who are native users of CS. CS is a visual mode of communicating a spoken language through a set of manual cues which accompany lipreading and disambiguate it. With CS, sublexical units of the oral language are conveyed clearly and completely through the visual modality without requiring hearing. The comparison of neural processing of CS in deaf individuals with processing of audiovisual (AV) speech in normally hearing individuals represents a unique opportunity to explore the similarities and differences in neural processing of an oral language delivered in a visuo-manual vs. an AV modality. The study included deaf adult participants who were early CS users and native hearing users of French who process speech audiovisually. Words were presented in an event-related fMRI design. Three conditions were presented to each group of participants. The deaf participants saw CS words (manual + lipread), words presented as manual cues alone, and words presented to be lipread without manual cues. The hearing group saw AV spoken words, audio-alone and lipread-alone. Three findings are highlighted. First, the middle and superior temporal gyrus (excluding Heschl’s gyrus) and left inferior frontal gyrus pars triangularis constituted a common, amodal neural basis for AV and CS perception. Second, integration was inferred in posterior parts of superior temporal sulcus for audio and lipread information in AV speech, but in the occipito-temporal junction, including MT/V5, for the manual cues and lipreading in CS. Third, the perception of manual cues showed a much greater overlap with the regions activated by CS (manual + lipreading) than lipreading alone did. This supports the notion that manual cues play a larger role than lipreading for CS processing. The present study contributes to a better understanding of the role of manual cues as support of visual speech perception in the framework

  5. Speech enhancement using emotion dependent codebooks

    NARCIS (Netherlands)

    Naidu, D.H.R.; Srinivasan, S.

    2012-01-01

    Several speech enhancement approaches utilize trained models of clean speech data, such as codebooks, Gaussian mixtures, and hidden Markov models. These models are typically trained on neutral clean speech data, without any emotion. However, in practical scenarios, emotional speech is a common

  6. Linguistic contributions to speech-on-speech masking for native and non-native listeners: Language familiarity and semantic content

    Science.gov (United States)

    Brouwer, Susanne; Van Engen, Kristin J.; Calandruccio, Lauren; Bradlow, Ann R.

    2012-01-01

This study examined whether speech-on-speech masking is sensitive to variation in the degree of similarity between the target and the masker speech. Three experiments investigated whether speech-in-speech recognition varies across different background speech languages (English vs Dutch) for both English and Dutch targets, as well as across variation in the semantic content of the background speech (meaningful vs semantically anomalous sentences), and across variation in listener status vis-à-vis the target and masker languages (native, non-native, or unfamiliar). The results showed that the more similar the target speech is to the masker speech (e.g., same vs different language, same vs different levels of semantic content), the greater the interference on speech recognition accuracy. Moreover, the listener’s knowledge of the target and the background language modulates the size of the release from masking. These factors had an especially strong effect on masking effectiveness in highly unfavorable listening conditions. Overall, this research provided evidence that the degree of target-masker similarity plays a significant role in speech-in-speech recognition. The results also give insight into how listeners allocate their resources differently depending on whether they are listening to their first or second language. PMID:22352516

  7. Speech-specificity of two audiovisual integration effects

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2010-01-01

Seeing the talker’s articulatory mouth movements can influence the auditory speech percept both in speech identification and detection tasks. Here we show that these audiovisual integration effects also occur for sine wave speech (SWS), which is an impoverished speech signal that naïve observers often fail to perceive as speech. While audiovisual integration in the identification task only occurred when observers were informed of the speech-like nature of SWS, integration occurred in the detection task both for informed and naïve observers. This shows that both speech-specific and general mechanisms underlie audiovisual integration of speech.

  8. Recognizing speech in a novel accent: the motor theory of speech perception reframed.

    Science.gov (United States)

    Moulin-Frier, Clément; Arbib, Michael A

    2013-08-01

The motor theory of speech perception holds that we perceive the speech of another in terms of a motor representation of that speech. However, when we have learned to recognize a foreign accent, it seems plausible that recognition of a word rarely involves reconstruction of the speech gestures of the speaker rather than the listener. To better assess the motor theory and this observation, we proceed in three stages. Part 1 places the motor theory of speech perception in a larger framework based on our earlier models of the adaptive formation of mirror neurons for grasping, and for viewing extensions of that mirror system as part of a larger system for neuro-linguistic processing, augmented by the present consideration of recognizing speech in a novel accent. Part 2 then offers a novel computational model of how a listener comes to understand the speech of someone speaking the listener's native language with a foreign accent. The core tenet of the model is that the listener uses hypotheses about the word the speaker is currently uttering to update probabilities linking the sound produced by the speaker to phonemes in the native language repertoire of the listener. This, on average, improves the recognition of later words. The model is neutral regarding the nature of the representations it uses (motor vs. auditory). It serves as a reference point for the discussion in Part 3, which proposes a dual-stream neuro-linguistic architecture to revisit claims for and against the motor theory of speech perception and the relevance of mirror neurons, and extracts some implications for the reframing of the motor theory.

  9. Advocate: A Distributed Architecture for Speech-to-Speech Translation

    Science.gov (United States)

    2009-01-01

…tecture, are either wrapped natural-language processing (NLP) components or objects developed from scratch using the architecture’s API. GATE is… framework, we put together a demonstration Arabic-to-English speech translation system using both internally developed (Arabic speech recognition and MT… conditions of our Arabic S2S demonstration system described earlier. Once again, the data size was varied and eighty identical requests were…

  10. A National Audit of Smoking Cessation Services in Irish Maternity Units

    LENUS (Irish Health Repository)

    2017-06-01

    There is international consensus that smoking cessation in the first half of pregnancy improves foetal outcomes. We surveyed all 19 maternity units nationally about their antenatal smoking cessation practices. All units recorded details on maternal smoking at the first antenatal visit. Only one unit validated the self-reported smoking status of pregnant women using a carbon monoxide breath test. Twelve units (63%) recorded timing of smoking cessation. In all units women who reported smoking were given verbal cessation advice. This was supported by written advice in 12 units (63%), but only six units (32%) had all midwives trained to provide this advice. Only five units (26%) reported routinely revisiting smoking status later in pregnancy. Although smoking is an important modifiable risk factor for adverse pregnancy outcomes, smoking cessation services are inadequate in the Irish maternity services and there are variations in practices between hospitals.

  11. The United Nations General Assembly and Disarmament 1987

    International Nuclear Information System (INIS)

    1988-01-01

    The report offers a summary of the proposals made and action taken on disarmament issues by the Assembly at its forty-second regular session. It is published in the framework of the World Disarmament Campaign, which was launched by a unanimous decision of the Assembly in 1982 to inform, to educate and to generate public understanding and support for the objectives of the United Nations in the field of disarmament

  12. Using the Speech Transmission Index for predicting non-native speech intelligibility

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Houtgast, T.; Steeneken, H.J.M.

    2004-01-01

While the Speech Transmission Index (STI) is widely applied for prediction of speech intelligibility in room acoustics and telecommunication engineering, it is unclear how to interpret STI values when non-native talkers or listeners are involved. Based on subjectively measured psychometric functions

  13. Speech Planning Happens before Speech Execution: Online Reaction Time Methods in the Study of Apraxia of Speech

    Science.gov (United States)

    Maas, Edwin; Mailend, Marja-Liisa

    2012-01-01

Purpose: The purpose of this article is to present an argument for the use of online reaction time (RT) methods in the study of apraxia of speech (AOS), and to review the existing small literature in this area and the contributions it has made to our fundamental understanding of speech planning (deficits) in AOS. Method: Following a brief…

  14. Predicting speech intelligibility in adverse conditions: evaluation of the speech-based envelope power spectrum model

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2011-01-01

The speech-based envelope power spectrum model (sEPSM) [Jørgensen and Dau (2011). J. Acoust. Soc. Am. 130 (3), 1475-1487] estimates the envelope signal-to-noise ratio (SNRenv) of distorted speech and accurately describes the speech recognition thresholds (SRT) for normal-hearing listeners. Here, the model is evaluated in adverse conditions by comparing predictions to measured data from [Kjems et al. (2009). J. Acoust. Soc. Am. 126 (3), 1415-1426], where speech is mixed with four different interferers, including speech-shaped noise, bottle noise, car noise, and cafe noise. The model accounts well for the differences in intelligibility observed for the different interferers. None of the standardized models successfully describe these data.
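The model's central quantity, the envelope signal-to-noise ratio (SNRenv), compares the modulation power of the speech envelope with that of the interferer's envelope. A deliberately minimal, single-band sketch of that idea follows; the full sEPSM uses a Hilbert envelope and a modulation filterbank, so the rectify-and-smooth envelope and the direct clean-speech/noise comparison here are simplifying assumptions for illustration only:

```python
import math

def envelope(signal, win=32):
    """Crude amplitude envelope: rectify, then smooth with a moving average.
    (The real sEPSM uses a Hilbert envelope and a modulation filterbank.)"""
    rect = [abs(x) for x in signal]
    out = []
    for i in range(len(rect)):
        lo = max(0, i - win // 2)
        hi = min(len(rect), i + win // 2 + 1)
        out.append(sum(rect[lo:hi]) / (hi - lo))
    return out

def env_power(env):
    """AC power of the envelope, normalised by its squared mean (DC power)."""
    mean = sum(env) / len(env)
    ac = sum((e - mean) ** 2 for e in env) / len(env)
    return ac / (mean ** 2)

def snr_env_db(clean_speech, noise):
    """Envelope SNR in dB: ratio of speech to noise envelope power."""
    ps = env_power(envelope(clean_speech))
    pn = env_power(envelope(noise))
    return 10.0 * math.log10(ps / pn)
```

For a 4 Hz amplitude-modulated tone (speech-like envelope fluctuations) against a steady tone, the modulated signal carries far more normalised envelope power, so SNRenv comes out positive.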

  15. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  16. Video Release: 47th Vice President of the United States Joseph R. Biden Jr. Speech at HUPO2017 Global Leadership Gala | Office of Cancer Clinical Proteomics Research

    Science.gov (United States)

    The Human Proteome Organization (HUPO) has released a video of the keynote speech given by the 47th Vice President of the United States of America Joseph R. Biden Jr. at the HUPO2017 Global Leadership Gala. Under the gala theme “International Cooperation in the Fight Against Cancer,” Biden recognized cancer as a collection of related diseases, the importance of data sharing and harmonization, and the need for collaboration across scientific disciplines as inflection points in cancer research.

  17. Cleft Audit Protocol for Speech (CAPS-A): A Comprehensive Training Package for Speech Analysis

    Science.gov (United States)

    Sell, D.; John, A.; Harding-Bell, A.; Sweeney, T.; Hegarty, F.; Freeman, J.

    2009-01-01

    Background: The previous literature has largely focused on speech analysis systems and ignored process issues, such as the nature of adequate speech samples, data acquisition, recording and playback. Although there has been recognition of the need for training on tools used in speech analysis associated with cleft palate, little attention has been…

  18. Perceived liveliness and speech comprehensibility in aphasia : the effects of direct speech in auditory narratives

    NARCIS (Netherlands)

    Groenewold, Rimke; Bastiaanse, Roelien; Nickels, Lyndsey; Huiskes, Mike

    2014-01-01

    Background: Previous studies have shown that in semi-spontaneous speech, individuals with Broca's and anomic aphasia produce relatively many direct speech constructions. It has been claimed that in 'healthy' communication direct speech constructions contribute to the liveliness, and indirectly to

  19. Preschool speech intelligibility and vocabulary skills predict long-term speech and language outcomes following cochlear implantation in early childhood.

    Science.gov (United States)

    Castellanos, Irina; Kronenberger, William G; Beer, Jessica; Henning, Shirley C; Colson, Bethany G; Pisoni, David B

    2014-07-01

    Speech and language measures during grade school predict adolescent speech-language outcomes in children who receive cochlear implants (CIs), but no research has examined whether speech and language functioning at even younger ages is predictive of long-term outcomes in this population. The purpose of this study was to examine whether early preschool measures of speech and language performance predict speech-language functioning in long-term users of CIs. Early measures of speech intelligibility and receptive vocabulary (obtained during preschool ages of 3-6 years) in a sample of 35 prelingually deaf, early-implanted children predicted speech perception, language, and verbal working memory skills up to 18 years later. Age of onset of deafness and age at implantation added additional variance to preschool speech intelligibility in predicting some long-term outcome scores, but the relationship between preschool speech-language skills and later speech-language outcomes was not significantly attenuated by the addition of these hearing history variables. These findings suggest that speech and language development during the preschool years is predictive of long-term speech and language functioning in early-implanted, prelingually deaf children. As a result, measures of speech-language functioning at preschool ages can be used to identify and adjust interventions for very young CI users who may be at long-term risk for suboptimal speech and language outcomes.

  20. The United Nations' endeavour to standardize mineral resource classification

    International Nuclear Information System (INIS)

    Schanz, J.J. Jr.

    1980-01-01

    The United Nations' Economic and Social Council passed a resolution in July 1975 calling for the development of a mineral resources classification system to be used in reporting data to the United Nations. Following preparation of background papers and an agenda by the UN Centre for Natural Resources, Energy and Transport, a panel of experts recommended a classification system to the Council's Committee on Natural Resources. The Committee met in Turkey in June 1979 and has reported favourably to the Council on the proposed system. The classification system is designed to provide maximum capability for requesting and receiving data from the resources data systems already used internally by major mineral producing nations. In addition, the system provides for flexibility in adjusting to the particular needs of individual mineral commodities. The proposed system involves three basic categories of in-situ resources: R-1, reliable estimates of known deposits; R-2, preliminary estimates of the extensions of known deposits; and, R-3, tentative estimates of quantities to be found in undiscovered deposits. As an option for given countries and commodities, the R-1 category can be further sub-divided into: R-1-E, economic; R-1-M, marginal; and R-1-S, sub-economic. Finally, the classification scheme provides for all categories to have a parallel set of estimates of recoverable mineral quantities. (author)

  1. Speech Clarity Index (Ψ): A Distance-Based Speech Quality Indicator and Recognition Rate Prediction for Dysarthric Speakers with Cerebral Palsy

    Science.gov (United States)

    Kayasith, Prakasith; Theeramunkong, Thanaruk

It is a tedious and subjective task to measure the severity of dysarthria by manually evaluating a speaker's speech with the available standard assessment methods based on human perception. This paper presents an automated approach to assessing the speech quality of a dysarthric speaker with cerebral palsy. Considering two complementary factors, speech consistency and speech distinction, a speech quality indicator called the speech clarity index (Ψ) is proposed as a measure of the speaker's ability to produce a consistent speech signal for a given word and distinguishable speech signals for different words. As an application, it can be used to assess speech quality and forecast the speech recognition rate for an individual dysarthric speaker before the exhaustive implementation of an automatic speech recognition system for that speaker. The effectiveness of Ψ as a speech recognition rate predictor is evaluated by rank-order inconsistency, correlation coefficient, and root-mean-square difference. The evaluations were done by comparing its predicted recognition rates with those predicted by the standard methods, the articulatory and intelligibility tests, based on two recognition systems (HMM and ANN). The results show that Ψ is a promising indicator for predicting the recognition rate of dysarthric speech. All experiments were done on a speech corpus composed of speech data from eight normal speakers and eight dysarthric speakers.
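The abstract does not give the formula for Ψ, but its two ingredients, within-word consistency and between-word distinction, can be sketched as distance computations over acoustic feature vectors. Everything below (the feature representation, the centroid comparison, and the ratio itself) is a hypothetical reconstruction of the idea, not the authors' definition:

```python
import math
from itertools import combinations

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def consistency(tokens):
    """Mean pairwise distance between repetitions of the SAME word
    (smaller = more consistent). Assumes at least two repetitions."""
    pairs = list(combinations(tokens, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

def distinction(centroids):
    """Mean pairwise distance between the average vectors of DIFFERENT
    words (larger = more distinct)."""
    pairs = list(combinations(centroids, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

def clarity_index(word_tokens):
    """word_tokens: dict mapping word -> list of feature vectors, one per
    repetition. Hypothetical Psi: distinction across words divided by the
    mean within-word inconsistency (higher = clearer speech)."""
    cons = [consistency(toks) for toks in word_tokens.values()]
    cents = [[sum(col) / len(col) for col in zip(*toks)]
             for toks in word_tokens.values()]
    mean_cons = sum(cons) / len(cons)
    return distinction(cents) / (mean_cons + 1e-9)
```

Tight, well-separated word clusters then score higher than loose, overlapping ones, mirroring the intended ordering from clearer to more severely dysarthric speech.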

  2. Automated Speech Rate Measurement in Dysarthria

    Science.gov (United States)

    Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc

    2015-01-01

    Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…

  3. Simultaneous natural speech and AAC interventions for children with childhood apraxia of speech: lessons from a speech-language pathologist focus group.

    Science.gov (United States)

    Oommen, Elizabeth R; McCarthy, John W

    2015-03-01

    In childhood apraxia of speech (CAS), children exhibit varying levels of speech intelligibility depending on the nature of errors in articulation and prosody. Augmentative and alternative communication (AAC) strategies are beneficial, and commonly adopted with children with CAS. This study focused on the decision-making process and strategies adopted by speech-language pathologists (SLPs) when simultaneously implementing interventions that focused on natural speech and AAC. Eight SLPs, with significant clinical experience in CAS and AAC interventions, participated in an online focus group. Thematic analysis revealed eight themes: key decision-making factors; treatment history and rationale; benefits; challenges; therapy strategies and activities; collaboration with team members; recommendations; and other comments. Results are discussed along with clinical implications and directions for future research.

  4. Speech Recognition on Mobile Devices

    DEFF Research Database (Denmark)

    Tan, Zheng-Hua; Lindberg, Børge

    2010-01-01

The enthusiasm of deploying automatic speech recognition (ASR) on mobile devices is driven both by remarkable advances in ASR technology and by the demand for efficient user interfaces on such devices as mobile phones and personal digital assistants (PDAs). This chapter presents an overview of ASR in the mobile context covering motivations, challenges, fundamental techniques and applications. Three ASR architectures are introduced: embedded speech recognition, distributed speech recognition and network speech recognition. Their pros and cons and implementation issues are discussed. Applications within…

  5. Song and speech: examining the link between singing talent and speech imitation ability.

    Science.gov (United States)

    Christiner, Markus; Reiterer, Susanne M

    2013-01-01

    In previous research on speech imitation, musicality, and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and auditory memory with singing fitting better into the category of "speech" on the productive level and "music" on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. (3) The ability to sing improves the memory span of the auditory working memory.

  6. Song and speech: examining the link between singing talent and speech imitation ability

    Directory of Open Access Journals (Sweden)

    Markus eChristiner

    2013-11-01

Full Text Available In previous research on speech imitation, musicality and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer’s sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and sound memory, with singing fitting better into the category of "speech" on the productive level and "music" on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. 1. Motor flexibility and the ability to sing improve language and musical function. 2. Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. 3. The ability to sing improves the memory span of the auditory short-term memory.

  7. Is the Speech Transmission Index (STI) a robust measure of sound system speech intelligibility performance?

    Science.gov (United States)

    Mapp, Peter

    2002-11-01

Although RaSTI is a good indicator of the speech intelligibility capability of auditoria and similar spaces, during the past 2-3 years it has been shown that RaSTI is not a robust predictor of sound system intelligibility performance. Instead, it is now recommended, within both national and international codes and standards, that full STI measurement and analysis be employed. However, new research is reported that indicates that STI is not as flawless or robust as many believe. The paper highlights a number of potential error mechanisms. It is shown that the measurement technique and signal excitation stimulus can have a significant effect on the overall result and accuracy, particularly where DSP-based equipment is employed. It is also shown that in its current state of development, STI is not capable of appropriately accounting for a number of fundamental speech and system attributes, including typical sound system frequency response variations and anomalies. This is particularly shown to be the case when a system is operating under reverberant conditions. Comparisons between actual system measurements and corresponding word score data are reported, where errors of up to 50% are found. The implications for VA and PA system performance verification will be discussed.
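For reference, the STI under discussion is derived from measured modulation transfer values m: each m is converted to an apparent signal-to-noise ratio, clipped to ±15 dB, mapped to a transmission index, and averaged with octave-band weights. A compact sketch of that basic chain follows; the band weights below are illustrative placeholders, as the exact weights (and further corrections such as auditory masking) are defined in IEC 60268-16:

```python
import math

# Illustrative octave-band weights for 125 Hz .. 8 kHz (the real weights
# are specified in IEC 60268-16 and differ by revision and talker gender).
BAND_WEIGHTS = [0.13, 0.14, 0.11, 0.12, 0.19, 0.17, 0.14]

def transmission_index(m):
    """Convert one modulation transfer value m (0..1) to a transmission
    index via the apparent SNR, clipped to +/-15 dB."""
    m = min(max(m, 1e-6), 1 - 1e-6)         # keep the log finite
    snr = 10.0 * math.log10(m / (1.0 - m))  # apparent SNR in dB
    snr = max(-15.0, min(15.0, snr))        # clip to +/-15 dB
    return (snr + 15.0) / 30.0              # map to 0..1

def sti(m_matrix):
    """m_matrix: 7 octave bands x 14 modulation frequencies of measured
    m values. STI is the band-weighted mean transmission index."""
    band_ti = [sum(transmission_index(m) for m in row) / len(row)
               for row in m_matrix]
    return sum(w * ti for w, ti in zip(BAND_WEIGHTS, band_ti))
```

With m = 0.5 in every band and modulation frequency, the apparent SNR is 0 dB and the STI comes out at 0.5; errors in the measured m values propagate directly through this chain, which is one reason the stimulus and measurement technique matter so much.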

  8. Building Human Rights, Peace and Development within the United Nations

    Directory of Open Access Journals (Sweden)

    Christian Guillermet Fernández

    2015-01-01

Full Text Available War and peace have perpetually alternated in history. Consequently, peace has always been seen as an endless project, even a dream, to be realized in brotherhood by everyone across the earth. Since the 17th century the elimination of war and armed conflict has been a political and humanitarian objective of all nations in the world. Both the League of Nations and the United Nations were conceived with the spirit of eliminating the risk of war through the promotion of peace, cooperation and solidarity among nations. The Universal Declaration of Human Rights and the subsequent human rights instruments were drafted with a sincere aspiration of promoting the value of peace and human rights worldwide. International practice shows the close linkage between the disregard of human rights and the existence of war and armed conflict. It follows that the role of human rights in the prevention of war and armed conflict is very important. Since 2008 the Human Rights Council has been working on the ‘Promotion of the Right of Peoples to Peace.’ Pursuant to resolutions 20/15 and 23/16, the Council decided firstly to establish, and secondly to extend, the mandate of the Open-Ended Working Group (OEWG) aimed at progressively negotiating a draft United Nations declaration on the right to peace. In its second session (July 2014), the OEWG welcomed the approach of the Chairperson-Rapporteur, which is based on the relationship between the right to life and human rights, peace and development.

  9. Automatic speech recognition used for evaluation of text-to-speech systems

    Czech Academy of Sciences Publication Activity Database

    Vích, Robert; Nouza, J.; Vondra, Martin

-, no. 5042 (2008), pp. 136-148 ISSN 0302-9743 R&D Projects: GA AV ČR 1ET301710509; GA AV ČR 1QS108040569 Institutional research plan: CEZ:AV0Z20670512 Keywords: speech recognition * speech processing Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering

  10. Human factors engineering of interfaces for speech and text in the office

    NARCIS (Netherlands)

    Nes, van F.L.

    1986-01-01

    Current data-processing equipment almost exclusively uses one input medium: the keyboard, and one output medium: the visual display unit. An alternative to typing would be welcome in view of the effort needed to become proficient in typing; speech may provide this alternative if a proper spee

  11. SynFace—Speech-Driven Facial Animation for Virtual Speech-Reading Support

    Directory of Open Access Journals (Sweden)

    Giampiero Salvi

    2009-01-01

Full Text Available This paper describes SynFace, a supportive technology that aims at enhancing audio-based spoken communication in adverse acoustic conditions by providing the missing visual information in the form of an animated talking head. Firstly, we describe the system architecture, consisting of a 3D animated face model controlled from the speech input by a specifically optimised phonetic recogniser. Secondly, we report on speech intelligibility experiments with focus on multilinguality and robustness to audio quality. The system, already available for Swedish, English, and Flemish, was optimised for German and for the Swedish wide-band speech quality available in TV, radio, and Internet communication. Lastly, the paper covers experiments with nonverbal motions driven from the speech signal. It is shown that turn-taking gestures can be used to affect the flow of human-human dialogues. We have focused specifically on two categories of cues that may be extracted from the acoustic signal: prominence/emphasis and interactional cues (turn-taking/back-channelling).

  12. The Effect of English Verbal Songs on Connected Speech Aspects of Adult English Learners’ Speech Production

    Directory of Open Access Journals (Sweden)

    Farshid Tayari Ashtiani

    2015-02-01

Full Text Available The present study was an attempt to investigate the impact of English verbal songs on connected speech aspects of adult English learners’ speech production. 40 participants were selected based on their performance on a piloted and validated version of the NELSON test given to 60 intermediate English learners at a language institute in Tehran. They were then equally distributed into control and experimental groups and received a validated pretest of reading aloud and speaking in English. Afterward, the treatment was carried out over 18 sessions by singing preselected songs chosen according to criteria such as popularity, familiarity, and the amount and speed of speech delivery. At the end, the posttests of reading aloud and speaking in English were administered. The results revealed that the treatment had statistically significant positive effects on the connected speech aspects of English learners’ speech production at the .05 level of significance. Meanwhile, the results showed that there was no significant difference between the experimental group’s mean scores on the posttests of reading aloud and speaking. It was thus concluded that providing EFL learners with English verbal songs could positively affect connected speech aspects of both modes of speech production, reading aloud and speaking. The findings of this study have pedagogical implications for language teachers to be more aware and knowledgeable of the benefits of verbal songs in promoting the naturalness and fluency of language learners’ speech production. Keywords: English Verbal Songs, Connected Speech, Speech Production, Reading Aloud, Speaking

  13. National Lexicography Units: Past, Present, Future Nasionale leksikografiese eenhede: Verlede, hede, toekoms

    Directory of Open Access Journals (Sweden)

    Mariëtta Alberts

    2012-01-01

    Full Text Available This article deals with the national dictionary offices of the previous bilingual dispensation, the eleven official national dictionary offices in the present multilingual dispensation, and the future prospects of these offices. It discusses the past dispensation in terms of the need and reasons for the establishment of national dictionary offices, i.e. national lexicography units (NLUs). Attention is given to the prescripts of the National Lexicography Units Bill (1996) for the establishment of NLUs, as well as the transfer of these units from the Department of Arts, Culture, Science and Technology to the Pan South African Language Board. The restructuring of dictionary units that existed prior to the multilingual dispensation is considered, together with the establishment of new dictionary units for the official African languages. The present situation is dealt with by describing the status quo at the NLUs in terms of housing, administration, funding, management, training, computerisation, cooperation, production and the like. The article concludes with some questions and reservations about the future of the NLUs, followed by a number of apposite recommendations.

  14. Enhancement of a radiation safety system through the use of a microprocessor-controlled speech synthesizer

    International Nuclear Information System (INIS)

    Keefe, D.J.; McDowell, W.P.

    1980-01-01

    A speech synthesizer is being used to differentiate eight separate safety alarms on a high energy accelerator at Argonne National Laboratory. A single board microcomputer monitors eight signals from an existing radiation safety logic circuit. The microcomputer is programmed to output the proper code at the proper time and sequence to a speech synthesizer which supplies the audio input to a local public address system. This eliminates the requirement for eight different alarm tones and the personnel training required to differentiate among them. A twenty-word vocabulary was found adequate to supply the necessary safety announcements. The article describes the techniques used to interface the speech synthesizer into the existing safety logic circuit.
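
The annunciation logic described above can be sketched as follows: poll the eight alarm inputs as a status byte and, for each alarm that has just become active, send a phrase built from a small fixed vocabulary to the synthesizer. The bit assignments, phrases, and function names are illustrative assumptions, not the Argonne implementation.

```python
# Hypothetical alarm-to-phrase table; a 20-word vocabulary is enough
# to compose all eight announcements (assumed phrases, for illustration).
ALARM_PHRASES = {
    0: "radiation alarm area one evacuate now",
    1: "radiation alarm area two evacuate now",
    2: "high radiation level beam on",
    3: "high radiation level beam off",
    4: "interlock open area one",
    5: "interlock open area two",
    6: "monitor failure area one",
    7: "monitor failure area two",
}

def active_alarms(status_byte: int) -> list[int]:
    """Decode an 8-bit status byte into a list of active alarm numbers."""
    return [bit for bit in range(8) if status_byte & (1 << bit)]

def announcements(previous: int, current: int) -> list[str]:
    """Return phrases only for alarms that have just become active,
    so an alarm that stays set is not announced repeatedly."""
    newly_set = current & ~previous
    return [ALARM_PHRASES[b] for b in active_alarms(newly_set)]
```

In the real system the phrase codes would be written to the synthesizer chip one word at a time; the edge-detection idea (announce only newly set bits) is the part this sketch is meant to show.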

  15. PTSD: National Center for PTSD

    Medline Plus


  16. An analysis of the masking of speech by competing speech using self-report data (L)

    OpenAIRE

    Agus, Trevor R.; Akeroyd, Michael A.; Noble, William; Bhullar, Navjot

    2009-01-01

    Many of the items in the “Speech, Spatial, and Qualities of Hearing” scale questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol. 43, 85–99 (2004)] are concerned with speech understanding in a variety of backgrounds, both speech and nonspeech. To study whether these self-report data reflected informational masking, previously collected data on 414 people were analyzed. The lowest scores (greatest difficulties) were found for the two items in which there were two speech targets, with successively ...

  17. Illustrated Speech Anatomy.

    Science.gov (United States)

    Shearer, William M.

    Written for students in the fields of speech correction and audiology, the text deals with the following: structures involved in respiration; the skeleton and the processes of inhalation and exhalation; phonation and pitch, the larynx, and esophageal speech; muscles involved in articulation; muscles involved in resonance; and the anatomy of the…

  18. Speech Entrainment Compensates for Broca's Area Damage

    Science.gov (United States)

    Fridriksson, Julius; Basilakos, Alexandra; Hickok, Gregory; Bonilha, Leonardo; Rorden, Chris

    2015-01-01

    Speech entrainment (SE), the online mimicking of an audiovisual speech model, has been shown to increase speech fluency in patients with Broca's aphasia. However, not all individuals with aphasia benefit from SE. The purpose of this study was to identify patterns of cortical damage that predict a positive response to SE's fluency-inducing effects. Forty-four chronic patients with left hemisphere stroke (15 female) were included in this study. Participants completed two tasks: 1) spontaneous speech production, and 2) audiovisual SE. Number of different words per minute was calculated as a speech output measure for each task, with the difference between SE and spontaneous speech conditions yielding a measure of fluency improvement. Voxel-wise lesion-symptom mapping (VLSM) was used to relate the number of different words per minute for spontaneous speech, SE, and SE-related improvement to patterns of brain damage in order to predict lesion locations associated with the fluency-inducing response to speech entrainment. Individuals with Broca's aphasia demonstrated a significant increase in different words per minute during speech entrainment versus spontaneous speech. A similar pattern of improvement was not seen in patients with other types of aphasia. VLSM analysis revealed that damage to the inferior frontal gyrus predicted this response. Results suggest that SE exerts its fluency-inducing effects by providing a surrogate target for speech production via internal monitoring processes. Clinically, these results add further support for the use of speech entrainment to improve speech production and may help select patients for speech entrainment treatment. PMID:25989443
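
The core of the VLSM step can be illustrated with a minimal sketch: at each voxel, patients with damage there are compared against patients without it on the behavioural score. This is a simplified, hypothetical version (binary lesion maps, Welch's t statistic, no correction for multiple comparisons), not the analysis pipeline used in the study.

```python
import numpy as np

def welch_t(a: np.ndarray, b: np.ndarray) -> float:
    """Welch's t statistic for two independent samples."""
    va, vb = a.var(ddof=1), b.var(ddof=1)
    return (a.mean() - b.mean()) / np.sqrt(va / len(a) + vb / len(b))

def vlsm_t_map(lesions: np.ndarray, scores: np.ndarray, min_n: int = 3) -> np.ndarray:
    """For each voxel, compare scores of lesioned vs. spared patients.

    lesions: (n_patients, n_voxels) binary array, 1 = voxel damaged
    scores:  (n_patients,) behavioural measure (e.g. words per minute)
    Returns (n_voxels,) t-values; NaN where either group is too small.
    """
    n_voxels = lesions.shape[1]
    tmap = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        damaged = scores[lesions[:, v] == 1]
        spared = scores[lesions[:, v] == 0]
        if len(damaged) >= min_n and len(spared) >= min_n:
            tmap[v] = welch_t(damaged, spared)
    return tmap
```

Negative t-values then flag voxels where damage predicts lower scores; real VLSM additionally controls family-wise error across the many thousands of voxels tested.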

  19. An Analytical Review of the United States National Interests in Korea

    National Research Council Canada - National Science Library

    Swope, Frederick

    2004-01-01

    ... and interests for continued security on the peninsula and in the region. It will address these new growing tensions and review the United States National interests and policy differences with South Korea...

  20. [Spontaneous speech prosody and discourse analysis in schizophrenia and Fronto Temporal Dementia (FTD) patients].

    Science.gov (United States)

    Martínez, Angela; Felizzola Donado, Carlos Alberto; Matallana Eslava, Diana Lucía

    2015-01-01

    Patients with schizophrenia and patients with the linguistic variants of Frontotemporal Dementia (FTD) share some language characteristics, such as lexical access difficulties and disordered speech with disruptions, frequent pauses, interruptions and reformulations. In schizophrenia patients this reflects a difficulty of affect expression, while in FTD patients it reflects a linguistic impairment. This study, through an analysis of a series of cases assessed both in the Memory Clinic and in the Mental Health Unit of HUSI-PUJ (Hospital Universitario San Ignacio), with additional language assessment (discourse analysis and acoustic analysis), presents distinctive features of FTD in its linguistic variants and of schizophrenia that can guide the specialist in finding early markers for a differential diagnosis. Among patients with the language variants of FTD, 100% of cases showed difficulty understanding complex linguistic structures, together with marked problems of speech fluency. In patients with schizophrenia, there were significant alterations in the expression of the suprasegmental elements of speech, as well as disruptions in discourse. We show how in-depth language assessment allows some of the rules for the speech and prosody analysis of patients with dementia and schizophrenia to be reassessed, and we suggest how elements of speech are useful in guiding the diagnosis and correlating with functional compromise in everyday psychiatric practice. Copyright © 2014 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.

  1. Patterns of poststroke brain damage that predict speech production errors in apraxia of speech and aphasia dissociate.

    Science.gov (United States)

    Basilakos, Alexandra; Rorden, Chris; Bonilha, Leonardo; Moser, Dana; Fridriksson, Julius

    2015-06-01

    Acquired apraxia of speech (AOS) is a motor speech disorder caused by brain damage. AOS often co-occurs with aphasia, a language disorder in which patients may also demonstrate speech production errors. The overlap of speech production deficits in both disorders has raised questions on whether AOS emerges from a unique pattern of brain damage or as a subelement of the aphasic syndrome. The purpose of this study was to determine whether speech production errors in AOS and aphasia are associated with distinctive patterns of brain injury. Forty-three patients with history of a single left-hemisphere stroke underwent comprehensive speech and language testing. The AOS Rating Scale was used to rate speech errors specific to AOS versus speech errors that can also be associated with both AOS and aphasia. Localized brain damage was identified using structural magnetic resonance imaging, and voxel-based lesion-impairment mapping was used to evaluate the relationship between speech errors specific to AOS, those that can occur in AOS or aphasia, and brain damage. The pattern of brain damage associated with AOS was most strongly associated with damage to cortical motor regions, with additional involvement of somatosensory areas. Speech production deficits that could be attributed to AOS or aphasia were associated with damage to the temporal lobe and the inferior precentral frontal regions. AOS likely occurs in conjunction with aphasia because of the proximity of the brain areas supporting speech and language, but the neurobiological substrate for each disorder differs. © 2015 American Heart Association, Inc.

  2. Denmark's national inventory report. Submitted under the United Nations framework convention on climate change, 1990-2001. Emission inventories

    International Nuclear Information System (INIS)

    Illerup, J.B.; Lyck, E.; Nielsen, M.; Winther, M.; Hjort Mikkelsen, M.

    2003-01-01

    This report is Denmark's National Inventory Report submitted to the Conference of the Parties under the United Nations Framework Convention on Climate Change (UNFCCC), due by 15 April 2003. The report contains information on Denmark's inventories for all years from 1990 to 2001 for CO2, CH4, N2O, CO, NMVOC, SO2, HFCs, PFCs and SF6. (au)

  3. A NOVEL APPROACH TO STUTTERED SPEECH CORRECTION

    Directory of Open Access Journals (Sweden)

    Alim Sabur Ajibola

    2016-06-01

    Full Text Available Stuttered speech is a dysfluency-rich form of speech, more prevalent in males than females. It has been associated with insufficient air pressure or poor articulation, even though the root causes are more complex. Its primary features include prolonged and repetitive speech, while its secondary features include anxiety, fear, and shame. This study used LPC analysis and synthesis algorithms to reconstruct stuttered speech. The results were evaluated using cepstral distance, Itakura-Saito distance, mean square error, and likelihood ratio. These measures implied perfect speech reconstruction quality. ASR was used for further testing, and the results showed that all the reconstructed speech samples were perfectly recognized, while only three samples of the original speech were perfectly recognized.
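
The LPC analysis-synthesis loop mentioned above can be sketched minimally: estimate predictor coefficients by the standard autocorrelation method, inverse-filter the frame to its residual, then drive the synthesis filter with that residual. Reconstruction from the unmodified residual is exact by construction, which is the sense in which quality measures can indicate "perfect" reconstruction; the study's actual algorithm, order, and parameters are not specified here.

```python
import numpy as np

def lpc_coeffs(frame: np.ndarray, order: int) -> np.ndarray:
    """Autocorrelation-method LPC: solve the normal equations R a = r."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def analyze_synthesize(frame: np.ndarray, order: int = 8):
    """Inverse-filter the frame to its LPC residual, then resynthesize."""
    a = lpc_coeffs(frame, order)
    n = len(frame)
    residual = np.zeros(n)
    for i in range(n):
        past = frame[max(0, i - order):i][::-1]      # s[i-1], s[i-2], ...
        residual[i] = frame[i] - np.dot(a[:len(past)], past)
    rebuilt = np.zeros(n)
    for i in range(n):
        past = rebuilt[max(0, i - order):i][::-1]
        rebuilt[i] = residual[i] + np.dot(a[:len(past)], past)
    return rebuilt, residual
```

Correcting stutter, rather than merely reconstructing, would involve editing the residual and durations (e.g. removing repeated segments) before resynthesis.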

  4. Prisoner Fasting as Symbolic Speech: The Ultimate Speech-Action Test.

    Science.gov (United States)

    Sneed, Don; Stonecipher, Harry W.

    The ultimate test of the speech-action dichotomy, as it relates to symbolic speech to be considered by the courts, may be the fasting of prison inmates who use hunger strikes to protest the conditions of their confinement or to make political statements. While hunger strikes have been utilized by prisoners for years as a means of protest, it was…

  5. Childhood apraxia of speech and multiple phonological disorders in Cairo-Egyptian Arabic speaking children: language, speech, and oro-motor differences.

    Science.gov (United States)

    Aziz, Azza Adel; Shohdi, Sahar; Osman, Dalia Mostafa; Habib, Emad Iskander

    2010-06-01

    Childhood apraxia of speech is a neurological childhood speech-sound disorder in which the precision and consistency of movements underlying speech are impaired in the absence of neuromuscular deficits. Children with childhood apraxia of speech and those with multiple phonological disorder share some common phonological errors that can be misleading in diagnosis. This study posed the question of whether there are significant differences in language, speech and non-speech oral performances between children with childhood apraxia of speech, children with multiple phonological disorder and normal children that can be used for differential diagnosis. 30 pre-school children between the ages of 4 and 6 years served as participants. Each of these children represented one of 3 possible subject-groups: Group 1: multiple phonological disorder; Group 2: suspected cases of childhood apraxia of speech; Group 3: control group with no communication disorder. Assessment procedures included parent interviews, testing of non-speech oral motor skills and testing of speech skills. Data showed that children with suspected childhood apraxia of speech had significantly lower language scores only in their expressive abilities. Non-speech tasks did not identify significant differences between the childhood apraxia of speech and multiple phonological disorder groups except for those which required two sequential motor performances. In speech tasks, both consonant and vowel accuracy were significantly lower and more inconsistent in the childhood apraxia of speech group than in the multiple phonological disorder group. Syllable number, shape and sequence accuracy in the childhood apraxia of speech group differed significantly from the other two groups. In addition, children with childhood apraxia of speech showed greater difficulty in processing prosodic features, indicating a clear need to address these variables for the differential diagnosis and treatment of children with childhood apraxia of speech.

  6. Individual differences in degraded speech perception

    Science.gov (United States)

    Carbonell, Kathy M.

    One of the lasting concerns in audiology is the unexplained individual differences in speech perception performance, even among individuals with similar audiograms. One proposal is that cognitive/perceptual individual differences underlie this vulnerability and that these differences are present in normal-hearing (NH) individuals but do not reveal themselves in studies that use clear speech produced in quiet (because of a ceiling effect). However, previous studies have failed to uncover cognitive/perceptual variables that explain much of the variance in NH performance on more challenging degraded-speech tasks. This lack of strong correlations may be due either to examining the wrong measures (e.g., working memory capacity) or to there being no reliable differences in degraded-speech performance among NH listeners (i.e., variability in performance is due to measurement noise). The proposed project has three aims: first, to establish whether there are reliable individual differences in degraded-speech performance for NH listeners that are sustained both across degradation types (speech in noise, compressed speech, noise-vocoded speech) and across multiple testing sessions; second, to establish whether there are reliable differences in NH listeners' ability to adapt their phonetic categories based on short-term statistics, both across tasks and across sessions; and finally, to determine whether performance on degraded-speech perception tasks is correlated with performance on phonetic adaptability tasks, thus establishing a possible explanatory variable for individual differences in speech perception for NH and hearing-impaired listeners.
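
The simplest of the degradation types listed (speech in noise) is produced by scaling a noise signal so that the mixture hits a target signal-to-noise ratio before adding it to the speech. A minimal sketch under the assumption of stationary noise, not the study's stimuli:

```python
import numpy as np

def add_noise_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix noise into speech, scaling the noise to hit a target SNR in dB."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # SNR (dB) = 10 * log10(speech_power / noise_power), solved for noise power
    target_noise_power = speech_power / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_noise_power / noise_power)
```

Compressed and noise-vocoded speech require time-scale modification and filterbank envelope extraction respectively, so they are not shown here.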

  7. Collective speech acts

    NARCIS (Netherlands)

    Meijers, A.W.M.; Tsohatzidis, S.L.

    2007-01-01

    From its early development in the 1960s, speech act theory always had an individualistic orientation. It focused exclusively on speech acts performed by individual agents. Paradigmatic examples are ‘I promise that p’, ‘I order that p’, and ‘I declare that p’. There is a single speaker and a single

  8. Commencement Speech as a Hybrid Polydiscursive Practice

    Directory of Open Access Journals (Sweden)

    Светлана Викторовна Иванова

    2017-12-01

    Full Text Available Discourse and media communication researchers have noted that popular discursive and communicative practices tend toward hybridization and convergence. Discourse, understood as language in use, is flexible; consequently, one and the same text can represent several types of discourse. A vivid example of this tendency is the American commencement speech (also called a commencement address or graduation speech), an address to university graduates which, in line with the modern trend, is delivered by outstanding media personalities (politicians, athletes, actors, etc.). The objective of this study is to define the specificity of the realization of polydiscursive practices within commencement speech. The research involves discursive, contextual, stylistic and definitive analyses. Methodologically the study is based on discourse analysis theory; in particular, the notion of a discursive practice as a verbalized social practice makes up the conceptual basis of the research. The research draws upon a hundred commencement speeches delivered by prominent representatives of American society from the 1980s to the present. In brief, commencement speech belongs to the institutional discourse that public speech embodies. Its institutional parameters are well represented in speeches delivered by people in power, such as American and university presidents. Nevertheless, as the results of the research indicate, the institutional character of commencement speech is not its only feature. Conceptual information analysis makes it possible to assign commencement speech to didactic discourse, as it is aimed at teaching university graduates how to deal with the challenges life is rich in. Discursive practices of personal discourse are also actively integrated into commencement speech discourse. More than that, existential discursive practices also find their way into the discourse under study. Commencement

  9. The effectiveness of Speech-Music Therapy for Aphasia (SMTA) in five speakers with Apraxia of Speech and aphasia

    NARCIS (Netherlands)

    Hurkmans, Joost; Jonkers, Roel; de Bruijn, Madeleen; Boonstra, Anne M.; Hartman, Paul P.; Arendzen, Hans; Reinders - Messelink, Heelen

    2015-01-01

    Background: Several studies using musical elements in the treatment of neurological language and speech disorders have reported improvement of speech production. One such programme, Speech-Music Therapy for Aphasia (SMTA), integrates speech therapy and music therapy (MT) to treat the individual with

  10. Current trends in multilingual speech processing

    Indian Academy of Sciences (India)

    2016-08-26

    ; speech-to-speech translation; language identification. ... interest owing to two strong driving forces. Firstly, technical advances in speech recognition and synthesis are posing new challenges and opportunities to researchers.

  11. Do long-term tongue piercings affect speech quality?

    Science.gov (United States)

    Heinen, Esther; Birkholz, Peter; Willmes, Klaus; Neuschaefer-Rube, Christiane

    2017-10-01

    To explore possible effects of tongue piercing on perceived speech quality. Using a quasi-experimental design, we analyzed the effect of tongue piercing on speech in a perception experiment. Samples of spontaneous speech and read speech were recorded from 20 long-term pierced and 20 non-pierced individuals (10 males, 10 females each). The individuals having a tongue piercing were recorded with attached and removed piercing. The audio samples were blindly rated by 26 female and 20 male laypersons and by 5 female speech-language pathologists with regard to perceived speech quality along 5 dimensions: speech clarity, speech rate, prosody, rhythm and fluency. We found no statistically significant differences for any of the speech quality dimensions between the pierced and non-pierced individuals, neither for the read nor for the spontaneous speech. In addition, neither length nor position of piercing had a significant effect on speech quality. The removal of tongue piercings had no effects on speech performance either. Rating differences between laypersons and speech-language pathologists were not dependent on the presence of a tongue piercing. People are able to perfectly adapt their articulation to long-term tongue piercings such that their speech quality is not perceptually affected.

  12. Second- and Foreign-Language Variation in Tense Backshifting in Indirect Reported Speech

    Science.gov (United States)

    Charkova, Krassimira D.; Halliday, Laura J.

    2011-01-01

    This study examined how English learners in second-language (SL) and foreign-language (FL) contexts employ tense backshifting in indirect reported speech. Participants included 35 international students in the United States, 37 Bulgarian speakers of English, 38 Bosnian speakers of English, and 41 native English speakers. The instrument involved…

  13. From the Field: Speech Therapy Outcome Measures--Interview with Dr. Pam Enderby

    Science.gov (United States)

    Montgomery, Judy K.

    2015-01-01

    This article is an interview with Dr. Pam Enderby--a speech language therapist and professor at the Institute of General Practice and Primary Care at the University of Sheffield, Community Sciences Centre, Northern General Hospital, in the United Kingdom--conducted by Judy Montgomery, Editor in Chief, of "Communication Disorders…

  14. EnviroAtlas - National Inventory of Dams for the Conterminous United States

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is a summary of the National Dams Inventory data from the 2009 survey. The file contains counts of inventoried dams by 12-digit hydrologic units...

  15. Patterns of Post-Stroke Brain Damage that Predict Speech Production Errors in Apraxia of Speech and Aphasia Dissociate

    Science.gov (United States)

    Basilakos, Alexandra; Rorden, Chris; Bonilha, Leonardo; Moser, Dana; Fridriksson, Julius

    2015-01-01

    Background and Purpose Acquired apraxia of speech (AOS) is a motor speech disorder caused by brain damage. AOS often co-occurs with aphasia, a language disorder in which patients may also demonstrate speech production errors. The overlap of speech production deficits in both disorders has raised questions regarding whether AOS emerges from a unique pattern of brain damage or as a sub-element of the aphasic syndrome. The purpose of this study was to determine whether speech production errors in AOS and aphasia are associated with distinctive patterns of brain injury. Methods Forty-three patients with history of a single left-hemisphere stroke underwent comprehensive speech and language testing. The Apraxia of Speech Rating Scale was used to rate speech errors specific to AOS versus speech errors that can also be associated with AOS and/or aphasia. Localized brain damage was identified using structural MRI, and voxel-based lesion-impairment mapping was used to evaluate the relationship between speech errors specific to AOS, those that can occur in AOS and/or aphasia, and brain damage. Results The pattern of brain damage associated with AOS was most strongly associated with damage to cortical motor regions, with additional involvement of somatosensory areas. Speech production deficits that could be attributed to AOS and/or aphasia were associated with damage to the temporal lobe and the inferior pre-central frontal regions. Conclusion AOS likely occurs in conjunction with aphasia due to the proximity of the brain areas supporting speech and language, but the neurobiological substrate for each disorder differs. PMID:25908457

  16. Progressive apraxia of speech as a window into the study of speech planning processes.

    Science.gov (United States)

    Laganaro, Marina; Croisier, Michèle; Bagou, Odile; Assal, Frédéric

    2012-09-01

    We present a 3-year follow-up study of a patient with progressive apraxia of speech (PAoS), aimed at investigating whether the theoretical organization of phonetic encoding is reflected in the progressive disruption of speech. As decreased speech rate was the most striking pattern of disruption during the first 2 years, durational analyses were carried out longitudinally on syllables excised from spontaneous, repetition and reading speech samples. The crucial result of the present study is the demonstration of an effect of syllable frequency on duration: the progressive disruption of articulation rate did not affect all syllables in the same way, but followed a gradient that was a function of the frequency of use of syllable-sized motor programs. The combination of data from this case of PAoS with previous psycholinguistic and neurolinguistic data points to a frequency organization of syllable-sized speech-motor plans. In this study we also illustrate how studying PAoS can be exploited in theoretical and clinical investigations of phonetic encoding, as it represents a unique opportunity to investigate speech while it progressively deteriorates. Copyright © 2011 Elsevier Srl. All rights reserved.

  17. Musicians do not benefit from differences in fundamental frequency when listening to speech in competing speech backgrounds

    DEFF Research Database (Denmark)

    Madsen, Sara Miay Kim; Whiteford, Kelly L.; Oxenham, Andrew J.

    2017-01-01

    Recent studies disagree on whether musicians have an advantage over non-musicians in understanding speech in noise. However, it has been suggested that musicians may be able to use differences in fundamental frequency (F0) to better understand target speech in the presence of interfering talkers. Here we studied a relatively large (N=60) cohort of young adults, equally divided between nonmusicians and highly trained musicians, to test whether the musicians were better able to understand speech either in noise or in a two-talker competing speech masker. The target speech and competing speech were presented with either their natural F0 contours or on a monotone F0, and the F0 difference between the target and masker was systematically varied. As expected, speech intelligibility improved with increasing F0 difference between the target and the two-talker masker for both natural and monotone...

  18. Novel Techniques for Dialectal Arabic Speech Recognition

    CERN Document Server

    Elmahdy, Mohamed; Minker, Wolfgang

    2012-01-01

    Novel Techniques for Dialectal Arabic Speech Recognition describes approaches to improve automatic speech recognition for dialectal Arabic. Since speech resources for dialectal Arabic speech recognition are very sparse, the authors describe how existing Modern Standard Arabic (MSA) speech data can be applied to dialectal Arabic speech recognition, while assuming that MSA is always a second language for all Arabic speakers. In this book, Egyptian Colloquial Arabic (ECA) has been chosen as a typical Arabic dialect. ECA is the first-ranked Arabic dialect in terms of number of speakers, and a high-quality ECA speech corpus with accurate phonetic transcription has been collected. MSA acoustic models were trained using news broadcast speech. In order to cross-lingually use MSA in dialectal Arabic speech recognition, the authors have normalized the phoneme sets for MSA and ECA. After this normalization, they have applied state-of-the-art acoustic model adaptation techniques like Maximum Likelihood Linear Regression (MLLR) and M...

  19. Speech and Communication Disorders

    Science.gov (United States)

    ... to being completely unable to speak or understand speech. Causes include Hearing disorders and deafness Voice problems, ... or those caused by cleft lip or palate Speech problems like stuttering Developmental disabilities Learning disorders Autism ...

  20. The incompatibility of the United Nations' goals and conventionalist ethical relativism.

    Science.gov (United States)

    Kopelman, Loretta M

    2005-09-01

    The Universal Draft Declaration on Bioethics and Human Rights seeks to provide moral direction to nations and their citizens on a series of bioethical concerns. In articulating principles, it ranks respect for human rights, human dignity and fundamental freedoms ahead of respect for cultural diversity and pluralism. This ranking is controversial because it entails the rejection of the popular theory, conventionalist ethical relativism. If consistently defended, this theory also undercuts other United Nations activities that assume member states and people around the world can reach trans-cultural judgments having moral authority about health, pollution, aggression, rights, slavery, and so on. To illustrate problems with conventionalist ethical relativism and the importance of rejecting it for reasons of health, human rights, human dignity and fundamental freedoms, the widespread practice of female genital circumcision or cutting is discussed. These surgeries are virtually a test case for conventionalist ethical relativism since they are widely supported within these cultures as religious and health practices and widely condemned outside them, including by the United Nations.

  1. Speech of people with autism: Echolalia and echolalic speech

    OpenAIRE

    Błeszyński, Jacek Jarosław

    2013-01-01

    Speech of people with autism is recognised as one of the basic diagnostic, therapeutic and theoretical problems. One of the most common symptoms of autism in children is echolalia, described here as being of different types and severity. This paper presents the results of studies into different levels of echolalia, both in normally developing children and in children diagnosed with autism, discusses the differences between simple echolalia and echolalic speech - which can be considered to b...

  2. The United Nations disarmament yearbook. V. 25: 2000

    International Nuclear Information System (INIS)

    2001-01-01

    The 2000 edition of The United Nations Disarmament Yearbook provides a descriptive narrative of events at the United Nations in the field of disarmament during the year of the historic Millennium Assembly. Though The Yearbook is now in its 25th edition, its more distant roots date back to the Armaments Year-Books issued by the League of Nations. Then, as now, nation-States and members of the concerned public have found it useful to have in one place a handy shelf reference documenting the triumphs and setbacks of the world community's efforts to reduce and eliminate the deadliest of weapons. The year 2000 marked a crucial juncture in the history of disarmament. During the Millennium Summit, 22 States responded to the Secretary-General's invitation to ratify six key legal instruments in the field of disarmament. Over the course of the year, 86 States chose to advance their security interests by ratifying or acceding to a wide range of disarmament treaties. The solemn 'ends' of disarmament also guided the deliberation of roughly 50 resolutions in the General Assembly as well as the work of many institutions throughout the United Nations disarmament machinery, including the Disarmament Commission, the Department for Disarmament Affairs and its three regional centres, the United Nations Institute for Disarmament Research, and the Secretary-General's Advisory Board on Disarmament Matters. Even the Conference on Disarmament, which has been deadlocked for so many years, has persisted in its efforts to forge a new consensus on a multilateral agenda for this difficult field. The Security Council also devoted attention to aspects of disarmament pertaining to peace-keeping and peace-building. With respect to the 'means' of disarmament, the world community reaffirmed its determination to implement agreed disarmament commitments and to work out arrangements in new areas. The States parties to the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) after four weeks of

  3. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech from Speech Delay: Introduction

    Science.gov (United States)

    Shriberg, Lawrence D.; Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose: The goal of this article is to introduce the pause marker (PM), a single-sign diagnostic marker proposed to discriminate early or persistent childhood apraxia of speech (CAS) from speech delay.

  4. International law and United Nations

    Directory of Open Access Journals (Sweden)

    Savić Matej

    2012-01-01

    Full Text Available Along with centuries-lasting open military pretensions of world superpowers, modern diplomacy has developed, as beginning a war, as well as coming to peace demanded political activity which resulted, first in signing, and then coming into effect of international documents, on the basis of which, a foundation for the modern international order has been cast. Further on, by the formation of international organizations, codification has been allowed, as well as a progressive development of international law. Additionally, in the sense of preserving international peace and security, first the League of Nations was formed, and following the ending of World War II, the UN. Generally, the functioning of the United Nation's organs, has been regulated by legal rules, however political goals, tendencies, and mechanisms which the member states are using determine greatly the activity above all of the Security Council, but furthermore of the General Assembly, as a plenary organ. Nevertheless, the achieved results of the Commission for International Law in the meaning of creation of international conventions, as well as state adhering to the same, present unassailable achievements in the sense of development of international law. On the other hand, tendencies of motion of international relationships are aimed at establishing a multi-polar system in the international community. Today, the political scene is assuming a new appearance, by which the nearly built international system is already awaiting further progressive development.

  5. The Origin of the United Nations "Global Counter-Terrorism System"

    Directory of Open Access Journals (Sweden)

    William B. Messmer

    2010-11-01

    Full Text Available This article explains the origins of the United Nations' global counter-terrorism system. We argue that three factors shaped the system's decentralized and state-centered characteristics. The first is the UN's reactions to terrorism prior to the attacks of 11 September 2001. The second factor is the growing relevance of transnational governance networks. The third force is the interests and concerns of the Security Council's permanent members, which ultimately shaped the system's architecture. Keywords: 9/11; United Nations; Security Council; transnational governance networks; counter-terrorism system

  6. United Nations International Drug Control Programme responds

    Directory of Open Access Journals (Sweden)

    Michael Platzer

    2002-01-01

    Full Text Available [First paragraph] We would like to reply to the article written by Axel Klein entitled, "Between the Death Penalty and Decriminalization: New Directions for Drug Control in the Commonwealth Caribbean" published in NWIG 75 (3&4 2001. We have noted a number of factual inaccuracies as well as hostile comments which portray the United Nations International Drug Control Programme in a negative light. This reply is not intended to be a critique of the article, which we find unbalanced and polemical, but rather an alert to the tendentious statements about UNDCP, which we feel should be corrected.

  7. A speech production model including the nasal Cavity: A novel approach to articulatory analysis of speech signals

    DEFF Research Database (Denmark)

    Olesen, Morten

    In order to obtain an articulatory analysis of speech production, the model is improved. The standard model, as used in LPC analysis, to a large extent models only the acoustic properties of the speech signal, as opposed to articulatory modelling of speech production. In spite of this, the LPC model is by far the most widely used model in speech technology.
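    Since the abstract contrasts LPC's acoustic modelling with articulatory modelling, a minimal sketch of how standard LPC coefficients are obtained may be useful: the autocorrelation method solved with the Levinson-Durbin recursion. This is the generic textbook procedure, not the thesis's improved model; the frame and model order below are made-up illustrations.

```python
import math
import random

def autocorrelation(frame, max_lag):
    """Biased autocorrelation r[0..max_lag] of one analysis frame."""
    n = len(frame)
    return [sum(frame[i] * frame[i + lag] for i in range(n - lag))
            for lag in range(max_lag + 1)]

def levinson_durbin(r, order):
    """Solve the LPC normal equations; returns (a[0..order], residual energy)."""
    a = [1.0] + [0.0] * order
    error = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient for stage i.
        k = -(r[i] + sum(a[j] * r[i - j] for j in range(1, i))) / error
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        error *= 1.0 - k * k
    return a, error

# Hypothetical frame: a noisy sinusoid standing in for voiced speech.
random.seed(42)
frame = [math.sin(2 * math.pi * 0.1 * t) + 0.1 * random.gauss(0.0, 1.0)
         for t in range(200)]
r = autocorrelation(frame, 10)
coeffs, prediction_error = levinson_durbin(r, 10)   # 10th-order LPC
```

    The residual energy `prediction_error` drops well below `r[0]` when the all-pole model fits the frame, which is exactly what makes LPC a good acoustic, if not articulatory, model.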

  8. Maternal and paternal pragmatic speech directed to young children with Down syndrome and typical development.

    Science.gov (United States)

    de Falco, Simona; Venuti, Paola; Esposito, Gianluca; Bornstein, Marc H

    2011-02-01

    The aim of this study was to compare functional features of maternal and paternal speech directed to children with Down syndrome and developmental age-matched typically developing children. Altogether 88 parents (44 mothers and 44 fathers) and their 44 young children (22 children with Down syndrome and 22 typically developing children) participated. Parents' speech directed to children was obtained through observation of naturalistic parent-child dyadic interactions. Verbatim transcripts of maternal and paternal language were categorized in terms of the primary function of each speech unit. Parents (both mothers and fathers) of children with Down syndrome used more affect-salient speech compared to parents of typically developing children. Although parents used the same amounts of information-salient speech, parents of children with Down syndrome used more direct statements and asked fewer questions than did parents of typically developing children. Concerning parent gender, in both groups mothers used more language than fathers, and specifically more descriptions. These findings held when controlling for child age, MLU, and family SES. This study highlights strengths and weaknesses of parental communication to children with Down syndrome and helps to identify areas of potential improvement through intervention. Copyright © 2010 Elsevier Inc. All rights reserved.

  9. Successful and rapid response of speech bulb reduction program combined with speech therapy in velopharyngeal dysfunction: a case report.

    Science.gov (United States)

    Shin, Yu-Jeong; Ko, Seung-O

    2015-12-01

    Velopharyngeal dysfunction in cleft palate patients following primary palate repair may result in nasal air emission, hypernasality, articulation disorder and poor intelligibility of speech. Among conservative treatment methods, a speech aid prosthesis combined with speech therapy is a widely used approach. However, because treatment takes a long time (more than a year) and its predictability is low, some clinicians prefer surgical intervention. Thus, the purpose of this report is to draw attention to the effectiveness of the speech aid prosthesis by presenting a case that was successfully treated. In this clinical report, a speech bulb reduction program with intensive speech therapy was applied in a patient with velopharyngeal dysfunction, and treatment was completed within 5 months, an unusually short period for speech aid therapy. Furthermore, the advantages of pre-operative speech aid therapy are discussed.

  10. Speech Intelligibility Evaluation for Mobile Phones

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Cubick, Jens; Dau, Torsten

    2015-01-01

    In the development process of modern telecommunication systems, such as mobile phones, it is common practice to use computer models to objectively evaluate the transmission quality of the system, instead of time-consuming perceptual listening tests. Such models have typically focused on the quality of the transmitted speech, while little or no attention has been paid to speech intelligibility. The present study investigated to what extent three state-of-the-art speech intelligibility models could predict the intelligibility of noisy speech transmitted through mobile phones. Sentences from the Danish Dantale II speech material were mixed with three different kinds of background noise, transmitted through three different mobile phones, and recorded at the receiver via a local network simulator. The speech intelligibility of the transmitted sentences was assessed by six normal-hearing listeners...

  11. PTSD: National Center for PTSD

    Medline Plus

    Full Text Available

  12. Is the Human Development Index (HDI) of the United Nations ...

    African Journals Online (AJOL)

    Is the Human Development Index (HDI) of the United Nations Development Programme (UNDP) a relevant indicator? Jean Claude Saha. Abstract. No Abstract. African Journal of Economic Policy Vol. 12(1) 2005: 1-27.

  13. Model United Nations and Deep Learning: Theoretical and Professional Learning

    Science.gov (United States)

    Engel, Susan; Pallas, Josh; Lambert, Sarah

    2017-01-01

    This article demonstrates that the purposeful subject design, incorporating a Model United Nations (MUN), facilitated deep learning and professional skills attainment in the field of International Relations. Deep learning was promoted in subject design by linking learning objectives to Anderson and Krathwohl's (2001) four levels of knowledge or…

  14. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Directory of Open Access Journals (Sweden)

    Alena Galilee

    Full Text Available Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  15. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Science.gov (United States)

    Galilee, Alena; Stefanidou, Chrysi; McCleery, Joseph P

    2017-01-01

    Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  16. Radiological evaluation of esophageal speech on total laryngectomee

    International Nuclear Information System (INIS)

    Chung, Tae Sub; Suh, Jung Ho; Kim, Dong Ik; Kim, Gwi Eon; Hong, Won Phy; Lee, Won Sang

    1988-01-01

    Total laryngectomees require some form of alaryngeal speech for communication. Generally, esophageal speech is regarded as the most accessible and comfortable technique for alaryngeal speech. But esophageal speech is difficult to train, so many patients are unable to attain esophageal speech for communication. To understand the mechanism of esophageal speech in total laryngectomees, evaluation of anatomical changes of the pharyngoesophageal segment is very important. We used video fluoroscopy for evaluation of the pharyngoesophageal segment during esophageal speech. Eighteen total laryngectomees were evaluated with video fluoroscopy from Dec. 1986 to May 1987 at Y.U.M.C. Our results were as follows: 1. The pseudoglottis is the most important factor for esophageal speech; it was visualized in 7 of the 8 cases in the excellent esophageal speech group. 2. The two cases with a longer A-P diameter at the pseudoglottis had better quality of esophageal speech than the others. 3. Two cases with mucosal vibration at the pharyngoesophageal segment could produce excellent esophageal speech. 4. The causes of failed esophageal speech were poor aerophagia in 6 cases, absence of a pseudoglottis in 4 cases and poor air ejection in 3 cases. 5. Aerophagia synchronized with diaphragmatic motion in 8 cases of excellent esophageal speech.

  17. Automatic Speech Recognition Systems for the Evaluation of Voice and Speech Disorders in Head and Neck Cancer

    Directory of Open Access Journals (Sweden)

    Andreas Maier

    2010-01-01

    Full Text Available In patients suffering from head and neck cancer, speech intelligibility is often restricted. For assessment and outcome measurements, automatic speech recognition systems have previously been shown to be appropriate for objective and quick evaluation of intelligibility. In this study we investigate the applicability of the method to speech disorders caused by head and neck cancer. Intelligibility was quantified by speech recognition on recordings of a standard text read by 41 German laryngectomized patients with cancer of the larynx or hypopharynx and 49 German patients who had suffered from oral cancer. The speech recognition provides the percentage of correctly recognized words of a sequence, that is, the word recognition rate. Automatic evaluation was compared to perceptual ratings by a panel of experts and to an age-matched control group. Both patient groups showed significantly lower word recognition rates than the control group. Automatic speech recognition yielded word recognition rates which agreed with the experts' evaluation of intelligibility at a significant level. Automatic speech recognition serves as a good, low-effort means to objectify and quantify the most important aspect of pathologic speech: intelligibility. The system was successfully applied to voice and speech disorders.
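    The word recognition rate used here (the percentage of correctly recognized words in a sequence) can be computed from a word-level edit-distance alignment between the reference text and the recognizer output. A small sketch, with made-up sentences rather than the study's German read text:

```python
def word_recognition_rate(reference, hypothesis):
    """Percent of reference words correctly recognized, via word-level
    edit-distance alignment (ties broken in favour of more exact matches)."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] = (edit_cost, hits) for aligning ref[:i-1] with hyp[:j].
    prev = [(j, 0) for j in range(len(hyp) + 1)]
    for i in range(1, len(ref) + 1):
        curr = [(i, 0)]
        for j in range(1, len(hyp) + 1):
            match = ref[i - 1] == hyp[j - 1]
            sub_cost, sub_hits = prev[j - 1]
            candidates = [
                (sub_cost + (0 if match else 1), sub_hits + (1 if match else 0)),
                (prev[j][0] + 1, prev[j][1]),         # deletion from reference
                (curr[j - 1][0] + 1, curr[j - 1][1]),  # insertion by recognizer
            ]
            curr.append(min(candidates, key=lambda c: (c[0], -c[1])))
        prev = curr
    hits = prev[-1][1]
    return 100.0 * hits / len(ref)

wrr = word_recognition_rate("the patient reads a standard text",
                            "the patient reads standard text")
# 5 of the 6 reference words recognized -> ~83.3%
```

    Comparing such rates between a patient group and an age-matched control group is then a straightforward statistical test on the per-speaker scores.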

  18. On speech recognition during anaesthesia

    DEFF Research Database (Denmark)

    Alapetite, Alexandre

    2007-01-01

    This PhD thesis in human-computer interfaces (informatics) studies the case of the anaesthesia record used during medical operations and the possibility to supplement it with speech recognition facilities. Problems and limitations have been identified with the traditional paper-based anaesthesia...... and inaccuracies in the anaesthesia record. Supplementing the electronic anaesthesia record interface with speech input facilities is proposed as one possible solution to a part of the problem. The testing of the various hypotheses has involved the development of a prototype of an electronic anaesthesia record...... interface with speech input facilities in Danish. The evaluation of the new interface was carried out in a full-scale anaesthesia simulator. This has been complemented by laboratory experiments on several aspects of speech recognition for this type of use, e.g. the effects of noise on speech recognition...

  19. Annual Report of the United Nations Joint Staff Pension Board. Report for the Year ending on 30 September 1963

    International Nuclear Information System (INIS)

    1965-01-01

    Pursuant to the requirement in Article XXXV of the Regulations of the United Nations Joint Staff Pension Fund that the United Nations Joint Staff Pension Board (JSPB) present an annual report to the General Assembly of the United Nations and to the member organizations of the Fund, the United Nations has published a report containing statistical data for the year ending on 30 September 1963, as well as an account of the twelfth session of JSPB in July 1964, as Supplement No. 8 to the Official Records of the General Assembly: 19th Session (A/5808)

  20. Annual Report of the United Nations Joint Staff Pension Board. Report for the Year ending on 30 September 1965

    International Nuclear Information System (INIS)

    1967-01-01

    Pursuant to the requirement in Article XXXV of the Regulations of the United Nations Joint Staff Pension Fund that the United Nations Joint Staff Pension Board (JSPB) present an annual report to the General Assembly of the United Nations and to the member organizations of the Fund, the United Nations has published a report containing statistical data for the year ending on 30 September 1965, as well as an account of the thirteenth session of JSPB in July 1966, as Supplement No. 8 to the Official Records of the General Assembly: Twenty-first Session (A/6308)

  1. AUTOMATIC SPEECH RECOGNITION SYSTEM CONCERNING THE MOROCCAN DIALECTE (Darija and Tamazight)

    OpenAIRE

    A. EL GHAZI; C. DAOUI; N. IDRISSI

    2012-01-01

    In this work we present an automatic speech recognition system for Moroccan dialects, mainly Darija (Arabic dialect) and Tamazight. Many approaches have been used to model the Arabic and Tamazight phonetic units. In this paper, we propose to use the hidden Markov model (HMM) for modeling these phonetic units. Experimental results show that the proposed approach further improves the recognition.
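    HMM phonetic units of the kind mentioned above are typically scored with the forward algorithm, which sums over all hidden-state paths to obtain the likelihood of an observation sequence given a model. A toy discrete sketch (the two-state model, symbols, and probabilities are invented for illustration; real systems use Gaussian emissions over acoustic features):

```python
def forward_likelihood(obs, start_p, trans_p, emit_p):
    """P(obs | model): forward algorithm summing over all state paths."""
    n_states = len(start_p)
    # Initialisation with the first observation.
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in range(n_states)]
    # Induction over the remaining observations.
    for o in obs[1:]:
        alpha = [sum(alpha[prev] * trans_p[prev][s] for prev in range(n_states))
                 * emit_p[s][o]
                 for s in range(n_states)]
    return sum(alpha)

# Invented two-state left-to-right "phoneme" over two observation symbols.
start = [1.0, 0.0]
trans = [[0.6, 0.4],
         [0.0, 1.0]]
emit = [[0.9, 0.1],   # state 0 mostly emits symbol 0
        [0.2, 0.8]]   # state 1 mostly emits symbol 1

likely = forward_likelihood([0, 0, 1], start, trans, emit)    # ~0.242
unlikely = forward_likelihood([1, 1, 0], start, trans, emit)  # ~0.010
```

    Recognition then amounts to picking, for each segment, the phonetic-unit model that assigns the highest likelihood.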

  2. 78 FR 63152 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Science.gov (United States)

    2013-10-23

    ...] Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities... for telecommunications relay services (TRS) by eliminating standards for Internet-based relay services... comments, identified by CG Docket No. 03-123, by any of the following methods: Electronic Filers: Comments...

  3. Speech Alarms Pilot Study

    Science.gov (United States)

    Sandor, A.; Moses, H. R.

    2016-01-01

    Currently on the International Space Station (ISS) and other space vehicles Caution & Warning (C&W) alerts are represented with various auditory tones that correspond to the type of event. This system relies on the crew's ability to remember what each tone represents in a high stress, high workload environment when responding to the alert. Furthermore, crew receive training a year or more in advance of the mission, which makes remembering the semantic meaning of the alerts more difficult. The current system works for missions conducted close to Earth where ground operators can assist as needed. On long duration missions, however, crews will need to handle off-nominal events autonomously. There is evidence that speech alarms may be easier and faster to recognize, especially during an off-nominal event. The Information Presentation Directed Research Project (FY07-FY09) funded by the Human Research Program included several studies investigating C&W alerts. The studies evaluated tone alerts currently in use with NASA flight deck displays along with candidate speech alerts. A follow-on study used four types of speech alerts to investigate how quickly various types of auditory alerts with and without a speech component - either at the beginning or at the end of the tone - can be identified. Even though crew were familiar with the tone alert from training or direct mission experience, alerts starting with a speech component were identified faster than alerts starting with a tone. The current study replicated the results from the previous study in a more rigorous experimental design to determine if the candidate speech alarms are ready for transition to operations or if more research is needed. Four types of alarms (caution, warning, fire, and depressurization) were presented to participants in both tone and speech formats in laboratory settings and later in the Human Exploration Research Analog (HERA).
In the laboratory study, the alerts were presented by software and participants were

  4. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Science.gov (United States)

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training. ... Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  5. Visualizing structures of speech expressiveness

    DEFF Research Database (Denmark)

    Herbelin, Bruno; Jensen, Karl Kristoffer; Graugaard, Lars

    2008-01-01

    Speech is both beautiful and informative. In this work, a conceptual study of speech, through investigation of the tower of Babel, the archetypal phonemes, and a study of the reasons for the use of language, is undertaken in order to create an artistic work investigating the nature of speech. ... The artwork is presented at the Re:New festival in May 2008.

  6. United nations Supported principles for Responsible Management Education

    DEFF Research Database (Denmark)

    Godemann, Jasmin; Moon, Jeremy; Haertle, Jonas

    2014-01-01

    The expectation that management education institutions should be leading thought and action on issues related to corporate responsibility and sustainability has been reinforced in the light of their association with business leaders' failings, including corporate corruption, the financial crisis, and various ecological system crises. The United Nations supported Principles for Responsible Management Education (PRME) initiative is an important catalyst for the transformation of management education and a global initiative to change and reform management education in order to meet the increasing...

  7. A Clinician Survey of Speech and Non-Speech Characteristics of Neurogenic Stuttering

    Science.gov (United States)

    Theys, Catherine; van Wieringen, Astrid; De Nil, Luc F.

    2008-01-01

    This study presents survey data on 58 Dutch-speaking patients with neurogenic stuttering following various neurological injuries. Stroke was the most prevalent cause of stuttering in our patients, followed by traumatic brain injury, neurodegenerative diseases, and other causes. Speech and non-speech characteristics were analyzed separately for…

  8. Automatic Speech Signal Analysis for Clinical Diagnosis and Assessment of Speech Disorders

    CERN Document Server

    Baghai-Ravary, Ladan

    2013-01-01

    Automatic Speech Signal Analysis for Clinical Diagnosis and Assessment of Speech Disorders provides a survey of methods designed to aid clinicians in the diagnosis and monitoring of speech disorders such as dysarthria and dyspraxia, with an emphasis on the signal processing techniques, statistical validity of the results presented in the literature, and the appropriateness of methods that do not require specialized equipment, rigorously controlled recording procedures or highly skilled personnel to interpret results. Such techniques offer the promise of a simple and cost-effective, yet objective, assessment of a range of medical conditions, which would be of great value to clinicians. The ideal scenario would begin with the collection of examples of the clients’ speech, either over the phone or using portable recording devices operated by non-specialist nursing staff. The recordings could then be analyzed initially to aid diagnosis of conditions, and subsequently to monitor the clients’ progress and res...

  9. Processing United Nations Documents in the University of Michigan Library.

    Science.gov (United States)

    Stolper, Gertrude

    This guide provides detailed instructions for recording documents in the United Nations (UN) card catalog which provides access to the UN depository collection in the Harlan Hatcher Graduate Library at the University of Michigan. Procedures for handling documents when they are received include stamping, counting, and sorting into five categories:…

  10. Temporal modulations in speech and music.

    Science.gov (United States)

    Ding, Nai; Patel, Aniruddh D; Chen, Lin; Butler, Henry; Luo, Cheng; Poeppel, David

    2017-10-01

    Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25-32Hz) temporal modulations in sound intensity and compare the modulation properties of speech and music. We analyze these modulations using over 25h of speech and over 39h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent modulation spectrum is observed for music, including classical music played by single instruments of different types, symphonic, jazz, and rock. The temporal modulations of speech and music show broad but well-separated peaks around 5 and 2Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility which should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and its neural processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
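    The slow intensity modulations the authors analyze can be illustrated with a toy computation: extract an intensity envelope from a signal and evaluate its spectrum over the modulation range. The synthetic signal, sample rate, smoothing window, and frequency grid below are illustrative assumptions, not the authors' pipeline.

```python
import math

FS = 1000   # sample rate in Hz (illustrative)
N = 2 * FS  # 2 s of signal

# Synthetic "speech-like" signal: a 100 Hz carrier amplitude-modulated at 5 Hz,
# near the dominant modulation rate the paper reports for speech.
signal = [(1.0 + 0.8 * math.sin(2 * math.pi * 5 * t / FS))
          * math.sin(2 * math.pi * 100 * t / FS)
          for t in range(N)]

# Intensity envelope: rectify, then a 25 ms moving average as a crude low-pass.
rectified = [abs(x) for x in signal]
WIN = 25
env = [sum(rectified[i:i + WIN]) / WIN for i in range(N - WIN)]
mean_env = sum(env) / len(env)
env = [e - mean_env for e in env]

def dft_magnitude(x, freq, fs):
    """Magnitude of the envelope's spectrum at one modulation frequency."""
    re = sum(v * math.cos(2 * math.pi * freq * i / fs) for i, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * freq * i / fs) for i, v in enumerate(x))
    return math.hypot(re, im)

# Modulation spectrum on a 0.5-32 Hz grid (within the paper's 0.25-32 Hz range).
freqs = [0.5 * k for k in range(1, 65)]
spectrum = {f: dft_magnitude(env, f, FS) for f in freqs}
peak_freq = max(spectrum, key=spectrum.get)   # recovers the 5 Hz modulation
```

    Averaging such spectra over many recordings is, in spirit, how a corpus-level modulation spectrum with a peak near 5 Hz (speech) or 2 Hz (music) would emerge.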

  11. Upaya United Nations World Tourism Organization (Unwto) Menangani Sex Tourism Di Thailand (2009-2013)

    OpenAIRE

    Rani, Faisyal; Oktavia, Raesa

    2015-01-01

    This research explains the efforts of the United Nations World Tourism Organization (UNWTO) in dealing with sex tourism in Thailand. It focuses on the role of UNWTO in addressing the sex tourism problem in Thailand, because sex tourism is one of the most popular forms of tourism in the world. UNWTO focuses on protecting children because they are the biggest victims of sex tourism. This research intends to show the role of the United Nations World Tourism Organization to handle the sex tou...

  12. Song and speech: examining the link between singing talent and speech imitation ability

    Science.gov (United States)

    Christiner, Markus; Reiterer, Susanne M.

    2013-01-01

    In previous research on speech imitation, musicality, and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and auditory memory with singing fitting better into the category of “speech” on the productive level and “music” on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. (3) The ability to sing improves the memory span of the auditory working memory. PMID:24319438

  13. Dysfluencies in the speech of adults with intellectual disabilities and reported speech difficulties.

    Science.gov (United States)

    Coppens-Hofman, Marjolein C; Terband, Hayo R; Maassen, Ben A M; van Schrojenstein Lantman-De Valk, Henny M J; van Zaalen-op't Hof, Yvonne; Snik, Ad F M

    2013-01-01

In individuals with an intellectual disability, speech dysfluencies are more common than in the general population. In clinical practice, these fluency disorders are generally diagnosed and treated as stuttering rather than cluttering. The aim was to characterise the type of dysfluencies in adults with intellectual disabilities and reported speech difficulties, with an emphasis on manifestations of stuttering and cluttering, a distinction intended to help optimise treatment aimed at improving fluency and intelligibility. The dysfluencies in the spontaneous speech of 28 adults (18-40 years; 16 men) with mild and moderate intellectual disabilities (IQs 40-70), who were characterised as poorly intelligible by their caregivers, were analysed using the speech norms for typically developing adults and children. The speakers were subsequently assigned to different diagnostic categories by relating their resulting dysfluency profiles to mean articulatory rate and articulatory rate variability. Twenty-two (75%) of the participants showed clinically significant dysfluencies: 21% were classified as cluttering, 29% as cluttering-stuttering and 25% as clear cluttering at normal articulatory rate. The characteristic pattern of stuttering did not occur. The dysfluencies in the speech of adults with intellectual disabilities and poor intelligibility show patterns that are specific to this population. Together, the results suggest that in this specific group of dysfluent speakers, interventions should be aimed at cluttering rather than stuttering. The reader will be able to (1) describe patterns of dysfluencies in the speech of adults with intellectual disabilities that are specific to this group of people, (2) explain that a high rate of dysfluencies in speech is potentially a major determiner of poor intelligibility in adults with ID and (3) describe suggestions for intervention focusing on cluttering rather than stuttering in dysfluent speakers with ID. Copyright © 2013 Elsevier Inc.

  14. An exploratory study on the driving method of speech synthesis based on the human eye reading imaging data

    Science.gov (United States)

    Gao, Pei-pei; Liu, Feng

    2016-10-01

With the development of information technology and artificial intelligence, speech synthesis plays a significant role in human-computer interaction. However, the main problem of current speech synthesis techniques is a lack of naturalness and expressiveness, so that synthetic speech does not yet approach the standard of natural language. Another problem is that human-computer interaction based on speech synthesis is too monotonous to support a mechanism of subjective user drive. This paper introduces the historical development of speech synthesis and summarizes the general process of the technique, pointing out that the prosody generation module is an important part of speech synthesis. On this basis, the use of eye-activity patterns during reading to control and drive prosody generation is introduced as a new human-computer interaction method that enriches the synthetic form. The present state of speech synthesis technology is reviewed in detail. Based on real-time extraction of eye-gaze data, a speech synthesis method is proposed that can express the real speech rhythm of the speaker: while the reader silently reads a corpus, reading information such as the eye-gaze duration per prosodic unit is captured, and a hierarchical prosodic duration model is established to determine the duration parameters of the synthesized speech. Finally, an analysis verifies the feasibility of the proposed method.
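The duration-modelling step described in this record can be sketched roughly as follows. The function name and the proportional redistribution scheme are illustrative assumptions, not the authors' actual model.

```python
# Hypothetical sketch of a gaze-driven duration model: redistribute a
# baseline utterance duration across prosodic units in proportion to how
# long the reader fixated each unit while silently reading.

def duration_targets(gaze_ms, base_total_ms):
    """gaze_ms: eye-gaze dwell time per prosodic unit (ms).
    base_total_ms: neutral total duration the synthesizer would use (ms).
    Returns a target duration for each prosodic unit."""
    total_gaze = sum(gaze_ms)
    return [base_total_ms * g / total_gaze for g in gaze_ms]

# Example: the second of three prosodic units was fixated twice as long,
# so it receives twice the synthesized duration.
print(duration_targets([300, 600, 300], 2000))  # [500.0, 1000.0, 500.0]
```

A real system would, of course, smooth the gaze signal and map it through a trained hierarchical duration model rather than a direct proportion.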

  15. A frequency bin-wise nonlinear masking algorithm in convolutive mixtures for speech segregation.

    Science.gov (United States)

    Chi, Tai-Shih; Huang, Ching-Wen; Chou, Wen-Sheng

    2012-05-01

    A frequency bin-wise nonlinear masking algorithm is proposed in the spectrogram domain for speech segregation in convolutive mixtures. The contributive weight from each speech source to a time-frequency unit of the mixture spectrogram is estimated by a nonlinear function based on location cues. For each sound source, a non-binary mask is formed from the estimated weights and is multiplied to the mixture spectrogram to extract the sound. Head-related transfer functions (HRTFs) are used to simulate convolutive sound mixtures perceived by listeners. Simulation results show our proposed method outperforms convolutive independent component analysis and degenerate unmixing and estimation technique methods in almost all test conditions.
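The core masking operation in this record (weighting each time-frequency unit of the mixture spectrogram per source) can be illustrated in a few lines. The weight values here are taken as given random placeholders; in the paper they come from a nonlinear function of location cues, which this sketch does not implement.

```python
import numpy as np

# Illustrative non-binary time-frequency masking: each source's contributive
# weights are normalized into a soft mask and multiplied with the mixture
# spectrogram to extract that source.

def apply_soft_masks(mix_spec, weights):
    """mix_spec: complex mixture spectrogram, shape (freq, time).
    weights: per-source non-negative weights, shape (n_src, freq, time).
    Returns one masked spectrogram per source, shape (n_src, freq, time)."""
    # Normalize so the weights of all sources sum to 1 in every T-F unit.
    w = weights / np.maximum(weights.sum(axis=0, keepdims=True), 1e-12)
    return w * mix_spec  # broadcasting: (n_src, F, T) * (F, T)

rng = np.random.default_rng(0)
mix = rng.standard_normal((4, 5)) + 1j * rng.standard_normal((4, 5))
w = rng.random((2, 4, 5))
srcs = apply_soft_masks(mix, w)
# Because the soft masks sum to 1, the masked sources sum back to the mixture.
print(np.allclose(srcs.sum(axis=0), mix))  # True
```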

  16. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech from Speech Delay: III. Theoretical Coherence of the Pause Marker with Speech Processing Deficits in Childhood Apraxia of Speech

    Science.gov (United States)

    Shriberg, Lawrence D.; Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose: Previous articles in this supplement described rationale for and development of the pause marker (PM), a diagnostic marker of childhood apraxia of speech (CAS), and studies supporting its validity and reliability. The present article assesses the theoretical coherence of the PM with speech processing deficits in CAS. Method: PM and other…

  17. Speech and language support: How physicians can identify and treat speech and language delays in the office setting.

    Science.gov (United States)

    Moharir, Madhavi; Barnett, Noel; Taras, Jillian; Cole, Martha; Ford-Jones, E Lee; Levin, Leo

    2014-01-01

Failure to recognize and intervene early in speech and language delays can lead to multifaceted and potentially severe consequences for early child development and later literacy skills. While routine evaluations of speech and language during well-child visits are recommended, there is no standardized (office) approach to facilitate this. Furthermore, extensive wait times for speech and language pathology consultation represent valuable lost time for the child and family. Using speech and language expertise, and paediatric collaboration, key content for an office-based tool was developed. The tool aimed to help physicians achieve three main goals: early and accurate identification of speech and language delays as well as children at risk for literacy challenges; appropriate referral to speech and language services when required; and teaching and, thus, empowering parents to create rich and responsive language environments at home. Using this tool, in combination with the Canadian Paediatric Society's Read, Speak, Sing and Grow Literacy Initiative, physicians will be better positioned to offer practical strategies to caregivers to enhance children's speech and language capabilities. The tool represents a strategy to evaluate speech and language delays. It depicts age-specific linguistic/phonetic milestones and suggests interventions. The tool represents a practical interim treatment while the family is waiting for formal speech and language therapy consultation.

  18. To Speak or Not to Speak: Developing Legal Standards for Anonymous Speech on the Internet

    Directory of Open Access Journals (Sweden)

    Tomas A. Lipinski

    2002-01-01

This paper explores recent developments in the regulation of Internet speech, specifically injurious or defamatory speech, and the impact such speech has on the rights of anonymous speakers to remain anonymous as opposed to having their identity revealed to plaintiffs or other third parties. The paper proceeds in four sections. First, a brief history of the legal attempts to regulate defamatory Internet speech in the United States is presented. As discussed below, this regulation has altered the traditional legal paradigm of responsibility and as a result creates potential problems for the future of anonymous speech on the Internet. As a result, plaintiffs are no longer pursuing litigation against service providers but taking their disputes directly to the anonymous speaker. Second, several cases have arisen in the United States where plaintiffs have requested that the identity of an anonymous Internet speaker be revealed. These cases are surveyed. Third, the cases are analyzed in order to determine the factors that courts require to be present before the identity of an anonymous speaker will be revealed. The release is typically accomplished by the enforcement of a discovery subpoena instigated by the party seeking the identity of the anonymous speaker. The factors courts have used are as follows: jurisdiction, good faith (both internal and external), necessity (basic and sometimes absolute), and at times proprietary interest. Finally, these factors are applied in three scenarios (e-commerce, education, and employment) to guide institutions when adopting policies that regulate when the identity of an anonymous speaker (a customer, a student or an employee) would be released as part of an internal initiative, but would nonetheless be consistent with developing legal standards.

  19. Real-time speech-driven animation of expressive talking faces

    Science.gov (United States)

    Liu, Jia; You, Mingyu; Chen, Chun; Song, Mingli

    2011-05-01

In this paper, we present a real-time facial animation system in which speech drives mouth movements and facial expressions synchronously. Considering five basic emotions, a hierarchical structure with an upper layer of emotion classification is established. Based on the recognized emotion label, the lower-layer classification at the sub-phonemic level is modelled on the relationship between acoustic features of frames and audio labels in phonemes. Using certain constraints, the predicted emotion labels of speech are adjusted to obtain the facial expression labels, which are combined with sub-phonemic labels. The combinations are mapped into facial action units (FAUs), and audio-visual synchronized animation with mouth movements and facial expressions is generated by morphing between FAUs. The experimental results demonstrate that the two-layer structure succeeds in both emotion and sub-phonemic classification, and the synthesized facial sequences reach a comparatively convincing quality.
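The final animation stage, morphing between FAU configurations, amounts to interpolating parameter vectors over time. The FAU vector layout below is an illustrative assumption, not the paper's actual parameterization.

```python
# Minimal sketch of morphing between facial action unit (FAU) parameter
# vectors: each animation frame is a linear blend of the start and end poses.

def morph(fau_a, fau_b, steps):
    """Linearly interpolate between two FAU parameter vectors,
    returning `steps` frames from fau_a to fau_b inclusive."""
    frames = []
    for i in range(steps):
        t = i / (steps - 1)
        frames.append([(1 - t) * a + t * b for a, b in zip(fau_a, fau_b)])
    return frames

neutral = [0.0, 0.0, 0.0]   # hypothetical FAUs: jaw open, lip pull, brow raise
smile   = [0.2, 1.0, 0.1]
seq = morph(neutral, smile, 5)
print(seq[0], seq[-1])  # first frame is neutral, last frame is the smile pose
```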

  20. Abortion and compelled physician speech.

    Science.gov (United States)

    Orentlicher, David

    2015-01-01

    Informed consent mandates for abortion providers may infringe the First Amendment's freedom of speech. On the other hand, they may reinforce the physician's duty to obtain informed consent. Courts can promote both doctrines by ensuring that compelled physician speech pertains to medical facts about abortion rather than abortion ideology and that compelled speech is truthful and not misleading. © 2015 American Society of Law, Medicine & Ethics, Inc.

  1. Speech enhancement

    CERN Document Server

    Benesty, Jacob; Chen, Jingdong

    2006-01-01

We live in a noisy world! In all applications (telecommunications, hands-free communications, recording, human-machine interfaces, etc.) that require at least one microphone, the signal of interest is usually contaminated by noise and reverberation. As a result, the microphone signal has to be "cleaned" with digital signal processing tools before it is played out, transmitted, or stored. This book is about speech enhancement. Different well-known and state-of-the-art methods for noise reduction, with one or multiple microphones, are discussed. By speech enhancement, we mean not only noise reduction…
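A classic single-microphone noise-reduction step in the family this book covers is magnitude spectral subtraction. This is a textbook sketch, not an algorithm taken from the book itself, and the noise magnitude estimate is assumed given.

```python
import numpy as np

# Magnitude spectral subtraction: subtract a noise magnitude estimate from
# the noisy magnitude spectrum, clamping to a small spectral floor so that
# no bin goes negative.

def spectral_subtract(noisy_mag, noise_mag, floor=0.01):
    """noisy_mag, noise_mag: magnitude spectra of the noisy signal and the
    noise estimate (same shape).  Returns the enhanced magnitude spectrum."""
    cleaned = noisy_mag - noise_mag
    return np.maximum(cleaned, floor * noisy_mag)

noisy = np.array([1.0, 0.5, 0.2])
noise = np.array([0.3, 0.4, 0.4])
# The last bin would go negative (0.2 - 0.4), so it is clamped to the
# spectral floor 0.01 * 0.2 = 0.002.
print(spectral_subtract(noisy, noise))
```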

  2. Effect of speech rate variation on acoustic phone stability in Afrikaans speech recognition

    CSIR Research Space (South Africa)

    Badenhorst, JAC

    2007-11-01

The authors analyse the effect of speech rate variation on Afrikaans phone stability from an acoustic perspective. Specifically, they introduce two techniques for the acoustic analysis of speech rate variation, apply these techniques to an Afrikaans…

  3. Denmark's national inventory report 2006 - Submitted under the United Nations framework convention on climate change, 1990-2004. Emission inventories

    International Nuclear Information System (INIS)

    Illerup, J.B.; Lyck, E.; Nielsen, Ole-Kenneth

    2006-08-01

This report is Denmark's National Inventory Report reported to the Conference of the Parties under the United Nations Framework Convention on Climate Change (UNFCCC), due by 15 April 2006. The report contains information on Denmark's inventories for all years from 1990 to 2004 for CO₂, CH₄, N₂O, HFCs, PFCs and SF₆, CO, NMVOC, SO₂. (au)

  4. Annual Report of the United Nations Joint Staff Pension Board. Report for the Year ending on 30 September 1964

    International Nuclear Information System (INIS)

    1966-01-01

Pursuant to the requirement in Article XXXV of the Regulations of the United Nations Joint Staff Pension Fund that the United Nations Joint Staff Pension Board (JSPB) present an annual report to the General Assembly of the United Nations and to the member organizations of the Fund, the United Nations has published a report containing statistical data for the year ending on 30 September 1964, as well as a summary of action taken on behalf of JSPB by its Standing Committee since the former's last session in July 1964, as Supplement No. 8 to the Official Records of the General Assembly: 20th Session (A/6008)

  5. Speech, "Inner Speech," and the Development of Short-Term Memory: Effects of Picture-Labeling on Recall.

    Science.gov (United States)

    Hitch, Graham J.; And Others

    1991-01-01

    Reports on experiments to determine effects of overt speech on children's use of inner speech in short-term memory. Word length and phonemic similarity had greater effects on older children and when pictures were labeled at presentation. Suggests that speaking or listening to speech activates an internal articulatory loop. (Author/GH)

  6. Phonetic recalibration of speech by text

    NARCIS (Netherlands)

    Keetels, M.N.; Schakel, L.; de Bonte, M.; Vroomen, J.

    2016-01-01

Listeners adjust their phonetic categories to cope with variations in the speech signal (phonetic recalibration). Previous studies have shown that lipread speech (and word knowledge) can adjust the perception of ambiguous speech and can induce phonetic adjustments (Bertelson, Vroomen, & de Gelder in…

  7. Epoch-based analysis of speech signals

    Indian Academy of Sciences (India)

on speech production characteristics, but also helps in accurate analysis of speech. … include time delay estimation, speech enhancement from single and multi- … log(E[k] / Σ_{l=0}^{K−1} E[l]),  (7)  where K is the number of samples in the …

  8. Automatic Speech Recognition Systems for the Evaluation of Voice and Speech Disorders in Head and Neck Cancer

    OpenAIRE

    Andreas Maier; Tino Haderlein; Florian Stelzle; Elmar Nöth; Emeka Nkenke; Frank Rosanowski; Anne Schützenberger; Maria Schuster

    2010-01-01

    In patients suffering from head and neck cancer, speech intelligibility is often restricted. For assessment and outcome measurements, automatic speech recognition systems have previously been shown to be appropriate for objective and quick evaluation of intelligibility. In this study we investigate the applicability of the method to speech disorders caused by head and neck cancer. Intelligibility was quantified by speech recognition on recordings of a standard text read by 41 German laryngect...

  9. Analysis of Parent, Teacher, and Consultant Speech Exchanges and Educational Outcomes of Students With Autism During COMPASS Consultation.

    Science.gov (United States)

    Ruble, Lisa; Birdwhistell, Jessie; Toland, Michael D; McGrew, John H

    2011-01-01

The significant increase in the numbers of students with autism combined with the need for better trained teachers (National Research Council, 2001) call for research on the effectiveness of alternative methods, such as consultation, that have the potential to improve service delivery. Data from 2 randomized controlled single-blind trials indicate that an autism-specific consultation planning framework known as the collaborative model for promoting competence and success (COMPASS) is effective in increasing child Individual Education Programs (IEP) outcomes (Ruble, Dalrymple, & McGrew, 2010; Ruble, McGrew, & Toland, 2011). In this study, we describe the verbal interactions, defined as speech acts and speech act exchanges, that take place during COMPASS consultation, and examine the associations between speech exchanges and child outcomes. We applied the Psychosocial Processes Coding Scheme (Leaper, 1991) to code speech acts. Speech act exchanges were overwhelmingly affiliative, failed to show statistically significant relationships with child IEP outcomes and teacher adherence, but did correlate positively with IEP quality.

  10. Nobel peace speech

    Directory of Open Access Journals (Sweden)

    Joshua FRYE

    2017-07-01

The Nobel Peace Prize has long been considered the premier peace prize in the world. According to Geir Lundestad, Secretary of the Nobel Committee, of the roughly 300 peace prizes awarded worldwide, “none is in any way as well known and as highly respected as the Nobel Peace Prize” (Lundestad, 2001). Nobel peace speech is a unique and significant international site of public discourse committed to articulating the universal grammar of peace. Spanning over 100 years of sociopolitical history on the world stage, Nobel Peace Laureates richly represent an important cross-section of domestic and international issues increasingly germane to many publics. Communication scholars’ interest in this rhetorical genre has increased in the past decade. Yet, the norm has been to analyze a single speech artifact from a prestigious or controversial winner rather than examine the collection of speeches for generic commonalities of import. In this essay, we analyze the discourse of Nobel peace speech inductively and argue that the organizing principle of the Nobel peace speech genre is the repetitive form of normative liberal principles and values that function as rhetorical topoi. These topoi include freedom and justice and appeal to the inviolable, inborn right of human beings to exercise certain political and civil liberties and the expectation of equality of protection from totalitarian and tyrannical abuses. The significance of this essay to contemporary communication theory is to expand our theoretical understanding of rhetoric’s role in the maintenance and development of an international and cross-cultural vocabulary for the grammar of peace.

  11. United Nations conference on the human environment, Stockholm, June 5--16, 1972

    Energy Technology Data Exchange (ETDEWEB)

    None

    1972-07-03

    Recommendations of the working group of the United Nations conference on the preservation and improvement of the human environment are presented. Emphasis was placed on conservation of natural resources. (CH)

  12. Speech-Language Dissociations, Distractibility, and Childhood Stuttering

    Science.gov (United States)

    Conture, Edward G.; Walden, Tedra A.; Lambert, Warren E.

    2015-01-01

    Purpose This study investigated the relation among speech-language dissociations, attentional distractibility, and childhood stuttering. Method Participants were 82 preschool-age children who stutter (CWS) and 120 who do not stutter (CWNS). Correlation-based statistics (Bates, Appelbaum, Salcedo, Saygin, & Pizzamiglio, 2003) identified dissociations across 5 norm-based speech-language subtests. The Behavioral Style Questionnaire Distractibility subscale measured attentional distractibility. Analyses addressed (a) between-groups differences in the number of children exhibiting speech-language dissociations; (b) between-groups distractibility differences; (c) the relation between distractibility and speech-language dissociations; and (d) whether interactions between distractibility and dissociations predicted the frequency of total, stuttered, and nonstuttered disfluencies. Results More preschool-age CWS exhibited speech-language dissociations compared with CWNS, and more boys exhibited dissociations compared with girls. In addition, male CWS were less distractible than female CWS and female CWNS. For CWS, but not CWNS, less distractibility (i.e., greater attention) was associated with more speech-language dissociations. Last, interactions between distractibility and dissociations did not predict speech disfluencies in CWS or CWNS. Conclusions The present findings suggest that for preschool-age CWS, attentional processes are associated with speech-language dissociations. Future investigations are warranted to better understand the directionality of effect of this association (e.g., inefficient attentional processes → speech-language dissociations vs. inefficient attentional processes ← speech-language dissociations). PMID:26126203

  13. Vascular access and infection prevention and control: a national survey of routine practices in Irish haemodialysis units.

    Science.gov (United States)

    McCann, Margaret; Clarke, Michael; Mellotte, George; Plant, Liam; Fitzpatrick, Fidelma

    2013-04-01

National and international guidelines recommend the use of effective vascular access (VA) and infection prevention and control practices within the haemodialysis environment. Establishing an arterio-venous fistula (AVF) and preventing central venous catheter (CVC)-related infections are ongoing challenges for all dialysis settings. We surveyed VA and routine infection prevention and control practices in dialysis units, to provide national data on these practices in Ireland. A descriptive survey was emailed to nurse managers at all adult (n = 19) and children's (n = 1) outpatient haemodialysis units in the Republic of Ireland. Data collected included AVF formation, CVC insertion and maintenance practices, VA use and surveillance of infection and screening protocols. Nineteen of the 20 units responded to the survey. The AVF prevalence was 49% for 1370 patients in 17 units who provided these data [mean prevalence per unit: 45.7% (SD 16.2)]; the CVC mean prevalence per unit was 52.5% (SD 16.0). Fourteen dialysis units experienced inadequate access to vascular surgical procedures, due either to a lack of dedicated theatre time or to a lack of hospital beds. Six units administered intravenous prophylactic antimicrobials prior to CVC insertion, with only two units using a CVC insertion checklist at the time of catheter insertion. In general, dialysis units in Ireland show strong adherence to national guidelines. Compared with the 12 countries participating in the Dialysis Outcomes and Practice Patterns Study (DOPPS 4) in 2010, AVF prevalence in Irish dialysis units is the second lowest. Recommendations include establishing an AVF national prevalence target rate, discontinuing the administration of intravenous prophylactic antimicrobials prior to CVC insertion and promoting the use of CVC insertion checklists.

  14. International aspirations for speech-language pathologists' practice with multilingual children with speech sound disorders: development of a position paper.

    Science.gov (United States)

    McLeod, Sharynne; Verdon, Sarah; Bowen, Caroline

    2013-01-01

A major challenge for the speech-language pathology profession in many cultures is to address the mismatch between the "linguistic homogeneity of the speech-language pathology profession and the linguistic diversity of its clientele" (Caesar & Kohler, 2007, p. 198). This paper outlines the development of the Multilingual Children with Speech Sound Disorders: Position Paper created to guide speech-language pathologists' (SLPs') facilitation of multilingual children's speech. An international expert panel was assembled comprising 57 researchers (SLPs, linguists, phoneticians, and speech scientists) with knowledge about multilingual children's speech, or children with speech sound disorders. Combined, they had worked in 33 countries and used 26 languages in professional practice. Fourteen panel members met for a one-day workshop to identify key points for inclusion in the position paper. Subsequently, 42 additional panel members participated online to contribute to drafts of the position paper. A thematic analysis was undertaken of the major areas of discussion using two data sources: (a) face-to-face workshop transcript (133 pages) and (b) online discussion artifacts (104 pages). Finally, a moderator with international expertise in working with children with speech sound disorders facilitated the incorporation of the panel's recommendations. The following themes were identified: definitions, scope, framework, evidence, challenges, practices, and consideration of a multilingual audience. The resulting position paper contains guidelines for providing services to multilingual children with speech sound disorders (http://www.csu.edu.au/research/multilingual-speech/position-paper). The paper is structured using the International Classification of Functioning, Disability and Health: Children and Youth Version (World Health Organization, 2007) and incorporates recommendations for (a) children and families, (b) SLPs' assessment and intervention, (c) SLPs' professional…

  15. Free Speech. No. 38.

    Science.gov (United States)

    Kane, Peter E., Ed.

    This issue of "Free Speech" contains the following articles: "Daniel Schoor Relieved of Reporting Duties" by Laurence Stern, "The Sellout at CBS" by Michael Harrington, "Defending Dan Schorr" by Tome Wicker, "Speech to the Washington Press Club, February 25, 1976" by Daniel Schorr, "Funds…

  16. South Africa: ANC Youth League President issues apology following conviction for hate speech.

    Science.gov (United States)

    Thomas, Shalini

    2011-10-01

In June 2011, fifteen months after he had been found guilty of hate speech and discrimination, African National Congress (ANC) Youth League President Julius Malema issued a formal apology and agreed to pay a R50,000 (CAN$7,120) fine that was part of the conviction.

  17. United States National Waste Terminal Storage argillaceous rock studies

    International Nuclear Information System (INIS)

    Brunton, G.D.

    1981-01-01

    The past and present argillaceous rock studies for the US National Waste Terminal Storage Program consist of: (1) evaluation of the geological characteristics of several widespread argillaceous formations in the United States; (2) laboratory studies of the physical and chemical properties of selected argillaceous rock samples; and (3) two full-scale in situ surface heater experiments that simulate the emplacement of heat-generating radioactive waste in argillaceous rock

  18. United States National Waste Terminal Storage argillaceous rock studies

    International Nuclear Information System (INIS)

    Brunton, G.D.

    1979-01-01

    The past and present argillaceous rock studies for the US National Waste Terminal Storage Program consist of: (1) evaluation of the geological characteristics of several widespread argillaceous formations in the United States; (2) laboratory studies of the physical and chemical properties of selected argillaceous rock samples; and (3) two full-scale in-situ surface heater experiments that simulate the emplacement of heat-generating radioactive waste in argillaceous rock

  19. APPRECIATING SPEECH THROUGH GAMING

    Directory of Open Access Journals (Sweden)

    Mario T Carreon

    2014-06-01

This paper discusses the Speech and Phoneme Recognition as an Educational Aid for the Deaf and Hearing Impaired (SPREAD) application and the ongoing research on its deployment as a tool for motivating deaf and hearing-impaired students to learn and appreciate speech. This application uses the Sphinx-4 voice recognition system to analyze the vocalization of the student and provide prompt feedback on their pronunciation. The packaging of the application as an interactive game aims to provide additional motivation for the deaf and hearing-impaired student, through visual means, to learn and appreciate speech.

  20. Global Freedom of Speech

    DEFF Research Database (Denmark)

    Binderup, Lars Grassme

    2007-01-01

…, as opposed to a legal norm, that curbs exercises of the right to free speech that offend the feelings or beliefs of members of other cultural groups. The paper rejects the suggestion that acceptance of such a norm is in line with liberal egalitarian thinking. Following a review of the classical liberal egalitarian reasons for free speech - reasons from overall welfare, from autonomy and from respect for the equality of citizens - it is argued that these reasons outweigh the proposed reasons for curbing culturally offensive speech. Currently controversial cases such as that of the Danish Cartoon Controversy…

  1. Extensions to the Speech Disorders Classification System (SDCS)

    Science.gov (United States)

    Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.

    2010-01-01

    This report describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). Part I describes a classification extension to the SDCS to differentiate motor speech disorders from speech delay and to differentiate among three sub-types of motor speech disorders.…

  2. Frontal and temporal contributions to understanding the iconic co-speech gestures that accompany speech.

    Science.gov (United States)

    Dick, Anthony Steven; Mok, Eva H; Raja Beharelle, Anjali; Goldin-Meadow, Susan; Small, Steven L

    2014-03-01

In everyday conversation, listeners often rely on a speaker's gestures to clarify any ambiguities in the verbal message. Using fMRI during naturalistic story comprehension, we examined which brain regions in the listener are sensitive to speakers' iconic gestures. We focused on iconic gestures that contribute information not found in the speaker's talk, compared with those that convey information redundant with the speaker's talk. We found that three regions, the left inferior frontal gyrus triangular (IFGTr) and opercular (IFGOp) portions and the left posterior middle temporal gyrus (MTGp), responded more strongly when gestures added information to nonspecific language, compared with when they conveyed the same information in more specific language; in other words, when gesture disambiguated speech as opposed to reinforced it. An increased BOLD response was not found in these regions when the nonspecific language was produced without gesture, suggesting that IFGTr, IFGOp, and MTGp are involved in integrating semantic information across gesture and speech. In addition, we found that activity in the posterior superior temporal sulcus (STSp), previously thought to be involved in gesture-speech integration, was not sensitive to the gesture-speech relation. Together, these findings clarify the neurobiology of gesture-speech integration and contribute to an emerging picture of how listeners glean meaning from gestures that accompany speech. Copyright © 2012 Wiley Periodicals, Inc.

  3. Freedom of racist speech: Ego and expressive threats.

    Science.gov (United States)

    White, Mark H; Crandall, Christian S

    2017-09-01

Do claims of "free speech" provide cover for prejudice? We investigate whether this defense of racist or hate speech serves as a justification for prejudice. In a series of 8 studies (N = 1,624), we found that explicit racial prejudice is a reliable predictor of the "free speech defense" of racist expression. Participants endorsed free speech values for singing racist songs or posting racist comments on social media; people high in prejudice endorsed free speech more than people low in prejudice (meta-analytic r = .43). This endorsement was not principled: high levels of prejudice did not predict endorsement of free speech values when identical speech was directed at coworkers or the police. Participants low in explicit racial prejudice actively avoided endorsing free speech values in racialized conditions compared to nonracial conditions, but participants high in racial prejudice increased their endorsement of free speech values in racialized conditions. Three experiments failed to find evidence that defense of racist speech by the highly prejudiced was based in self-relevant or self-protective motives. Two experiments found evidence that the free speech argument protected participants' own freedom to express their attitudes; the defense of others' racist speech seems motivated more by threats to autonomy than threats to self-regard. These studies serve as an elaboration of the Justification-Suppression Model (Crandall & Eshleman, 2003) of prejudice expression. The justification of racist speech by endorsing fundamental political values can serve to buffer racial and hate speech from normative disapproval. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Speech and language support: How physicians can identify and treat speech and language delays in the office setting

    Science.gov (United States)

    Moharir, Madhavi; Barnett, Noel; Taras, Jillian; Cole, Martha; Ford-Jones, E Lee; Levin, Leo

    2014-01-01

    Failure to recognize and intervene early in speech and language delays can lead to multifaceted and potentially severe consequences for early child development and later literacy skills. While routine evaluations of speech and language during well-child visits are recommended, there is no standardized (office) approach to facilitate this. Furthermore, extensive wait times for speech and language pathology consultation represent valuable lost time for the child and family. Drawing on speech-language expertise and paediatric collaboration, key content for an office-based tool was developed. The tool aimed to help physicians achieve three main goals: early and accurate identification of speech and language delays as well as children at risk for literacy challenges; appropriate referral to speech and language services when required; and teaching and, thus, empowering parents to create rich and responsive language environments at home. Using this tool, in combination with the Canadian Paediatric Society’s Read, Speak, Sing and Grow Literacy Initiative, physicians will be better positioned to offer practical strategies to caregivers to enhance children’s speech and language capabilities. The tool provides a strategy to evaluate speech and language delays, depicts age-specific linguistic/phonetic milestones, and suggests interventions. It also serves as a practical interim measure while the family is waiting for formal speech and language therapy consultation. PMID:24627648

  5. Application of wavelets in speech processing

    CERN Document Server

    Farouk, Mohamed Hesham

    2014-01-01

    This book provides a survey of the widespread use of wavelet analysis in different applications of speech processing. The author examines developments and research in these applications and summarizes the state-of-the-art research on wavelets in speech processing.
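
The wavelet analyses surveyed in such work build on the discrete wavelet transform. As a minimal illustration (not drawn from the book), here is a one-level Haar DWT in plain NumPy, splitting a signal into approximation (low-pass) and detail (high-pass) coefficients and reconstructing it exactly:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar discrete wavelet transform (x must have even length)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-pass: scaled local averages
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass: scaled local differences
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar transform: perfect reconstruction."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(signal)
reconstructed = haar_idwt(a, d)
```

Speech applications typically apply such a decomposition recursively to the approximation coefficients and use longer wavelets (e.g. Daubechies families) for denoising, coding, or feature extraction.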

  6. AWARENESS OF CULTURAL REALITIES AND SPEECH COMMUNITIES IN TRANSLATION

    Directory of Open Access Journals (Sweden)

    Monica-Marcela ȘERBAN

    2013-06-01

    Full Text Available It has been stated that the word “culture” and the syntagm “cultural realities” have influenced both communication and translation to a great extent. Moreover, the syntagm “speech community” has been tackled from many perspectives. One of them is that a speech community cannot be determined by static physical location: it may represent an insight into a nation state, a village, a religious institution, and so on. Although speech communities may take any and all of these shapes and more, the concept is not so flexible as to alter shape and meaning with any new gathering of people. Linguists have offered different definitions of the syntagm “speech community”, each definition representing a new perspective on the term. Translating cultural realities constitutes not only a challenge but also an audacity on the part of the translator. In this respect, we have chosen to examine religious communities and survey both their language and cultural realities and how they are mediated in translation. Consequently, translating religious terminology requires the translator’s competence, since it encompasses the Truth that has to be accurately reproduced in the TC (target culture). His/her task is also to raise the target reader’s awareness of such realities and language.

  7. PTSD: National Center for PTSD

    Medline Plus


  8. PTSD: National Center for PTSD

    Medline Plus


  9. Recent advances in nonlinear speech processing

    CERN Document Server

    Faundez-Zanuy, Marcos; Esposito, Antonietta; Cordasco, Gennaro; Drugman, Thomas; Solé-Casals, Jordi; Morabito, Francesco

    2016-01-01

    This book presents recent advances in nonlinear speech processing that go beyond nonlinear signal-processing techniques alone, exploiting heuristic and psychological models of human interaction to implement socially believable VUIs and applications for human health and psychological support. The book takes into account the multifunctional role of speech and what is “outside of the box” (see Björn Schuller’s foreword). To this aim, the book is organized in 6 sections, each collecting a small number of short chapters reporting advances “inside” and “outside” themes related to nonlinear speech research. The themes emphasize theoretical and practical issues for modelling socially believable speech interfaces, ranging from efforts to capture the nature of sound changes in linguistic contexts and the timing nature of speech; labors to identify and detect speech features that help in the diagnosis of psychological and neuronal disease; and attempts to improve the effectiveness and performa...

  10. Speech and non-speech processing in children with phonological disorders: an electrophysiological study

    Directory of Open Access Journals (Sweden)

    Isabela Crivellaro Gonçalves

    2011-01-01

    Full Text Available OBJECTIVE: To determine whether neurophysiological auditory brainstem responses to clicks and repeated speech stimuli differ between typically developing children and children with phonological disorders. INTRODUCTION: Phonological disorders are language impairments resulting from inadequate use of adult phonological language rules and are among the most common speech and language disorders in children (prevalence: 8 to 9%). Our hypothesis is that children with phonological disorders have basic differences in the way that their brains encode acoustic signals at the brainstem level when compared to normal counterparts. METHODS: We recorded click and speech evoked auditory brainstem responses in 18 typically developing children (control group) and in 18 children who were clinically diagnosed with phonological disorders (research group). The age range of the children was from 7 to 11 years. RESULTS: The research group exhibited significantly longer latency responses to click stimuli (waves I, III and V) and speech stimuli (waves V and A) when compared to the control group. DISCUSSION: These results suggest that the abnormal encoding of speech sounds may be a biological marker of phonological disorders. However, these results cannot define the biological origins of phonological problems. We also observed that speech-evoked auditory brainstem responses had a higher specificity/sensitivity for identifying phonological disorders than click-evoked auditory brainstem responses. CONCLUSIONS: Early stages of the auditory pathway processing of an acoustic stimulus are not similar in typically developing children and those with phonological disorders. These findings suggest that there are brainstem auditory pathway abnormalities in children with phonological disorders.

  11. Conflict monitoring in speech processing : An fMRI study of error detection in speech production and perception

    NARCIS (Netherlands)

    Gauvin, Hanna; De Baene, W.; Brass, Marcel; Hartsuiker, Robert

    2016-01-01

    To minimize the number of errors in speech, and thereby facilitate communication, speech is monitored before articulation. It is, however, unclear at which level during speech production monitoring takes place, and what mechanisms are used to detect and correct errors. The present study investigated

  12. The National Legal Framework of the United States

    International Nuclear Information System (INIS)

    Crosland, Martha S.

    2017-01-01

    Ms Crosland presented the United States legal framework regarding public participation. Under the Administrative Procedure Act, the primary way of conducting public participation is through 'notice and comment rulemaking'. A proposed rule is published in the Federal Register and is open to comment by the general public; the final publication of the rule includes the answers to the comments received. The various agencies in the United States make use of several digital tools to expand effective public participation and manage the process. The Atomic Energy Act established an adjudicatory process including 'trial-type' hearings, providing participation opportunities to any individual or group whose interests may be affected by a Nuclear Regulatory Commission licensing action. The National Environmental Policy Act requires several levels of review for all actions with potentially significant environmental impacts. An environmental assessment (EA) is conducted, to determine whether there is no significant impact or if a more detailed environmental impact statement (EIS) is needed. The EA requires notification of the host state and/or tribe, and the agency in charge has discretion as to the level of public involvement. The EIS requires public notification, a period for public comments on the draft EIS, and at least one public hearing. Ms Crosland presented stakeholder involvement initiatives carried out beyond the legal requirements, such as Citizen Advisory Boards at certain Department of Energy nuclear sites or the National Transportation Stakeholders Forum

  13. Religious Speech in the Military: Freedoms and Limitations

    Science.gov (United States)

    2011-01-01

    ... abridging the freedom of speech.” Speech is construed broadly and includes both oral and written speech, as well as expressive conduct and displays when ... intended to convey a message that is likely to be understood. Religious speech is certainly included. As a bedrock constitutional right, freedom of speech has ... to good order and discipline or of a nature to bring discredit upon the armed forces), the First Amendment’s freedom of speech will not provide them ...

  14. Perceived Speech Quality Estimation Using DTW Algorithm

    Directory of Open Access Journals (Sweden)

    S. Arsenovski

    2009-06-01

    Full Text Available In this paper a method for speech quality estimation is evaluated by simulating the transfer of speech over packet-switched and mobile networks. The proposed system uses the Dynamic Time Warping (DTW) algorithm to compare the test and received speech. Several tests were made on a test speech sample from a single speaker, with simulated packet (frame) loss effects on the perceived speech. The achieved results were compared with measured PESQ values on the transmission channel used, and their correlation was observed.
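
The DTW comparison described above can be sketched as follows. This is a deliberately minimal illustration on 1-D feature sequences; the paper's actual feature extraction and scoring details are not specified here:

```python
import numpy as np

def dtw_distance(ref, deg):
    """Dynamic Time Warping distance between two 1-D feature sequences."""
    n, m = len(ref), len(deg)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(ref[i - 1] - deg[j - 1])
            # extend the cheapest of the three allowed warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

clean = [0.0, 1.0, 2.0, 1.0, 0.0]
stretched = [0.0, 0.0, 1.0, 2.0, 2.0, 1.0, 0.0]  # time-warped copy: aligns at zero cost
corrupted = [0.0, 1.0, 0.0, 1.0, 0.0]            # simulated frame loss: alignment is costly

print(dtw_distance(clean, stretched))  # 0.0
print(dtw_distance(clean, corrupted))  # > 0
```

A real quality-estimation system would compare per-frame spectral feature vectors (e.g. MFCCs) rather than raw values, then map the accumulated warping cost to a quality score for correlation with PESQ.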

  15. Audiovisual Asynchrony Detection in Human Speech

    Science.gov (United States)

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  16. Detection of target phonemes in spontaneous and read speech.

    Science.gov (United States)

    Mehta, G; Cutler, A

    1988-01-01

    Although spontaneous speech occurs more frequently in most listeners' experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ considerably, however, which suggests that laboratory results may not generalise to the recognition of spontaneous speech. In the present study listeners were presented with both spontaneous and read speech materials, and their response time to detect word-initial target phonemes was measured. Responses were, overall, equally fast in each speech mode. However, analysis of effects previously reported in phoneme detection studies revealed significant differences between speech modes. In read speech but not in spontaneous speech, later targets were detected more rapidly than targets preceded by short words. In contrast, in spontaneous speech but not in read speech, targets were detected more rapidly in accented than in unaccented words and in strong than in weak syllables. An explanation for this pattern is offered in terms of characteristic prosodic differences between spontaneous and read speech. The results support claims from previous work that listeners pay great attention to prosodic information in the process of recognising speech.

  17. Voice Activity Detection. Fundamentals and Speech Recognition System Robustness

    OpenAIRE

    Ramirez, J.; Gorriz, J. M.; Segura, J. C.

    2007-01-01

    This chapter has shown an overview of the main challenges in robust speech detection and a review of the state of the art and applications. VADs are frequently used in a number of applications including speech coding, speech enhancement and speech recognition. A precise VAD extracts a set of discriminative speech features from the noisy speech and formulates the decision in terms of a well-defined rule. The chapter has summarized three robust VAD methods that yield high speech/non-speech discri...
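
As a toy illustration of the feature-plus-rule structure such a VAD formulates (a deliberately simple frame-energy rule, not one of the robust methods the chapter summarizes):

```python
import numpy as np

def energy_vad(signal, frame_len=160, threshold_db=-30.0):
    """Frame-level energy VAD: a frame is 'speech' if its log-energy,
    relative to the loudest frame, exceeds threshold_db."""
    n_frames = len(signal) // frame_len
    frames = np.reshape(signal[:n_frames * frame_len], (n_frames, frame_len))
    energy = np.sum(frames ** 2, axis=1) + 1e-12   # avoid log of zero
    log_e = 10.0 * np.log10(energy / energy.max())
    return log_e > threshold_db

# synthetic example: half a second of silence, a tone burst, silence again
fs = 8000
t = np.arange(fs) / fs
sig = np.concatenate([np.zeros(fs // 2),
                      0.5 * np.sin(2 * np.pi * 440 * t[:fs // 2]),
                      np.zeros(fs // 2)])
decisions = energy_vad(sig)   # True for the middle (active) frames only
```

Practical VADs replace the single energy feature with several discriminative features (spectral, periodicity, long-term envelope) and an adaptive noise-tracking threshold.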

  18. Religion, hate speech, and non-domination

    OpenAIRE

    Bonotti, Matteo

    2017-01-01

    In this paper I argue that one way of explaining what is wrong with hate speech is by critically assessing what kind of freedom free speech involves and, relatedly, what kind of freedom hate speech undermines. More specifically, I argue that the main arguments for freedom of speech (e.g. from truth, from autonomy, and from democracy) rely on a “positive” conception of freedom intended as autonomy and self-mastery (Berlin, 2006), and can only partially help us to understand what is wrong with ...

  19. Modelling speech intelligibility in adverse conditions

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    Jørgensen and Dau (J Acoust Soc Am 130:1475-1487, 2011) proposed the speech-based envelope power spectrum model (sEPSM) in an attempt to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII) in conditions with nonlinearly processed speech ... subjected to phase jitter, a condition in which the spectral structure of the speech signal is strongly affected, while the broadband temporal envelope is kept largely intact. In contrast, the effects of this distortion can be predicted successfully by the spectro-temporal modulation ... suggest that the SNRenv might reflect a powerful decision metric, while some explicit across-frequency analysis seems crucial in some conditions. How such across-frequency analysis is "realized" in the auditory system remains unresolved ...
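
The SNRenv decision metric compares the power of the temporal-envelope fluctuations of speech with those of the noise. A crude NumPy sketch of the idea follows (rectify-and-smooth envelope extraction; the actual sEPSM uses a modulation filterbank and is considerably more elaborate):

```python
import numpy as np

def envelope_ac_power(x, fs, win_ms=20.0):
    """Crude envelope: rectify then moving-average; return the AC power
    of the envelope, i.e. the strength of its fluctuations."""
    win = max(1, int(fs * win_ms / 1000.0))
    env = np.convolve(np.abs(x), np.ones(win) / win, mode="same")
    return np.mean((env - env.mean()) ** 2)

def snr_env_db(sig_env_power, noise_env_power):
    """Envelope-domain SNR in dB, the quantity underlying the sEPSM decision metric."""
    return 10.0 * np.log10(sig_env_power / noise_env_power)

fs = 8000
t = np.arange(fs) / fs
carrier = np.sin(2 * np.pi * 440 * t)
speechlike = (1.0 + np.sin(2 * np.pi * 4 * t)) * carrier  # strong 4 Hz envelope fluctuation
steady = carrier                                          # nearly flat envelope
snr = snr_env_db(envelope_ac_power(speechlike, fs), envelope_ac_power(steady, fs))
```

The speech-like signal, with its syllable-rate (4 Hz) envelope modulation, yields far higher envelope power than the steady carrier, so the envelope-domain SNR is strongly positive.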

  20. Teaching about the United Nations through the Hunger Issue in an English as a Foreign Language Class.

    Science.gov (United States)

    Iino, Atsushi

    1994-01-01

    Reports on the views of 73 secondary school Japanese students toward the United Nations. Finds that most tend to think of the UN as relevant to conflicts. Describes how the hunger issue was used in an English-as-a-Second-Language class to teach about the United Nations. (CFR)

  1. Speech and audio processing for coding, enhancement and recognition

    CERN Document Server

    Togneri, Roberto; Narasimha, Madihally

    2015-01-01

    This book describes the basic principles underlying the generation, coding, transmission and enhancement of speech and audio signals, including advanced statistical and machine learning techniques for speech and speaker recognition, with an overview of the key innovations in these areas. Key research undertaken in speech coding, speech enhancement, speech recognition, emotion recognition and speaker diarization is also presented, along with recent advances and new paradigms in these areas. · Offers readers a single-source reference on the significant applications of speech and audio processing to speech coding, speech enhancement and speech/speaker recognition; · Enables readers involved in algorithm development and implementation issues for speech coding to understand the historical development and future challenges in speech coding research; · Discusses speech coding methods yielding bit-streams that are multi-rate and scalable for Voice-over-IP (VoIP) networks; · ...

  2. Sensorimotor influences on speech perception in infancy.

    Science.gov (United States)

    Bruderer, Alison G; Danielson, D Kyle; Kandhadai, Padmapriya; Werker, Janet F

    2015-11-03

    The influence of speech production on speech perception is well established in adults. However, because adults have a long history of both perceiving and producing speech, the extent to which the perception-production linkage is due to experience is unknown. We addressed this issue by asking whether articulatory configurations can influence infants' speech perception performance. To eliminate influences from specific linguistic experience, we studied preverbal, 6-mo-old infants and tested the discrimination of a nonnative, and hence never-before-experienced, speech sound distinction. In three experimental studies, we used teething toys to control the position and movement of the tongue tip while the infants listened to the speech sounds. Using ultrasound imaging technology, we verified that the teething toys consistently and effectively constrained the movement and positioning of infants' tongues. With a looking-time procedure, we found that temporarily restraining infants' articulators impeded their discrimination of a nonnative consonant contrast but only when the relevant articulator was selectively restrained to prevent the movements associated with producing those sounds. Our results provide striking evidence that even before infants speak their first words and without specific listening experience, sensorimotor information from the articulators influences speech perception. These results transform theories of speech perception by suggesting that even at the initial stages of development, oral-motor movements influence speech sound discrimination. Moreover, an experimentally induced "impairment" in articulator movement can compromise speech perception performance, raising the question of whether long-term oral-motor impairments may impact perceptual development.

  3. Prediction and constraint in audiovisual speech perception

    Science.gov (United States)

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech, listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported

  5. Regulation of speech in multicultural societies

    NARCIS (Netherlands)

    Maussen, M.; Grillo, R.

    2015-01-01

    This book focuses on the way in which public debate and legal practice intersect when it comes to the value of free speech and the need to regulate "offensive", "blasphemous" or "hate" speech, especially, though not exclusively where such speech is thought to be offensive to members of ethnic and

  6. Speech Errors as a Window on Language and Thought: A Cognitive Science Perspective

    Directory of Open Access Journals (Sweden)

    Giulia M.L. Bencini

    2017-04-01

    Full Text Available We are so used to speaking in our native language that we take this ability for granted. We think that speaking is easy and thinking is hard. From the perspective of cognitive science, this view is wrong. Utterances are complex things, and generating them is an act of linguistic creativity, in the face of the computational complexity of the task. On occasion, utterance generation goes awry and the speaker’s output is different from the planned utterance, such as a speaker who says “Fancy getting your model renosed!” when “fancy getting your nose remodeled” was intended. With some notable exceptions (e.g., Fromkin 1971), linguists have not taken speech error data to be informative about speakers’ linguistic knowledge or mental grammars. The paper strives to put language production errors back onto the linguistic data map. If errors involve units such as phonemes, syllables, morphemes and phrases, which may be exchanged, moved around or stranded during spoken production, this shows that they are both representational and processing units. If similar units are converged upon via multiple methods (e.g. native speaker judgments, language corpora, speech error corpora, psycholinguistic experiments), those units have stronger empirical support. All other things being equal, theories of language that can account for both representation and processing are to be preferred.

  7. ACOUSTIC SPEECH RECOGNITION FOR MARATHI LANGUAGE USING SPHINX

    Directory of Open Access Journals (Sweden)

    Aman Ankit

    2016-09-01

    Full Text Available Speech recognition, or speech-to-text processing, is the process of recognizing human speech by computer and converting it into text. In speech recognition, transcripts are created by taking recordings of speech as audio together with their text transcriptions. Speech-based applications that include Natural Language Processing (NLP) techniques are popular and an active area of research; input to such applications is in natural language, and output is obtained in natural language. Speech recognition mostly revolves around three approaches, namely the acoustic-phonetic approach, the pattern recognition approach and the artificial intelligence approach. Creation of an acoustic model requires a large database of speech and training algorithms. The output of an ASR system is the recognition and translation of spoken language into text by computers and computerized devices. ASR today finds enormous application in tasks that require human-machine interfaces, such as voice dialing. Our key contribution in this paper is to create corpora for the Marathi language and explore the use of the Sphinx engine for automatic speech recognition.
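
Corpus preparation of the kind described (recordings paired with their text transcriptions) can be sketched as below. The `<s> ... </s> (utterance-id)` transcript line format follows common CMU SphinxTrain conventions; the romanized Marathi utterances and file ids are made-up placeholders, not data from the paper:

```python
# Pair each recording id with its transcription, then emit SphinxTrain-style
# transcript and fileids listings. The utterance texts below are hypothetical
# romanized Marathi placeholders, purely illustrative.
recordings = [
    ("marathi_0001", "namaskar"),
    ("marathi_0002", "tumhi kase aahat"),
]

transcript = "\n".join(f"<s> {text} </s> ({utt_id})" for utt_id, text in recordings)
fileids = "\n".join(utt_id for utt_id, _ in recordings)

print(transcript)
print(fileids)
```

Training an acoustic model then amounts to pointing the Sphinx training tools at the audio files, these two listings, and a pronunciation dictionary covering every word in the transcripts.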

  8. Children with Speech Difficulties: A survey of clinical practice in the Western Cape

    Directory of Open Access Journals (Sweden)

    Michelle Pascoe

    2010-12-01

    Full Text Available This paper is based on a study by Joffe and Pring (2008), which investigated assessment and therapy methods used by Speech Language Therapists (SLTs) in the United Kingdom for children with phonological difficulties. Joffe and Pring reported SLTs’ most favoured assessments and therapy approaches in that context. Children with speech difficulties are likely to form a considerable part of SLT caseloads in South Africa, but the choice of assessments may not be so clear-cut given the linguistic diversity of the region and the fact that few assessments have been developed specifically for the SA population. Linked to difficulties with assessment, selection of intervention approaches may also pose challenges. This study aimed to investigate the methods of assessment and intervention used by SLTs in the Western Cape when working with children with speech difficulties. A questionnaire was sent to SLTs working with pre- and/or primary-school-aged children. Twenty-nine clinicians of varying experience responded. The majority of SLTs (89% use informal assessment tools in combination with formal assessment. When using formal assessments, more than 50% of SLTs make modifications to better suit the population. Participants use a variety of intervention approaches, often in combination, and based on a child’s individual profile of difficulties and available resources. Forty-six percent of SLTs felt unsure about the selection of assessments and intervention for bi/multilingual children with speech difficulties. SLTs suggested that guidelines about accepted/typical speech development in the region would be helpful for their clinical practice. Clinical implications of the findings are discussed together with some suggestions for developing knowledge of children’s speech difficulties in the South African context.

  9. Is Birdsong More Like Speech or Music?

    Science.gov (United States)

    Shannon, Robert V

    2016-04-01

    Music and speech share many acoustic cues, but not all are equally important. For example, harmonic pitch is essential for music but not for speech. When birds communicate, is their song more like speech or music? A new study contrasting pitch and spectral patterns shows that birds perceive their song more like humans perceive speech. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Effect of "developmental speech and language training through music" on speech production in children with autism spectrum disorders.

    Science.gov (United States)

    Lim, Hayoung A

    2010-01-01

    The study compared the effect of music training, speech training and no training on the verbal production of children with Autism Spectrum Disorders (ASD). Participants were 50 children with ASD, age range 3 to 5 years, who had previously been evaluated on standard tests of language and level of functioning. They were randomly assigned to one of three 3-day conditions. Participants in music training (n = 18) watched a music video containing 6 songs and pictures of the 36 target words; those in speech training (n = 18) watched a speech video containing 6 stories and pictures, and those in the control condition (n = 14) received no treatment. Participants' verbal production including semantics, phonology, pragmatics, and prosody was measured by an experimenter-designed verbal production evaluation scale. Results showed that participants in both music and speech training significantly increased their pre- to posttest verbal production. Results also indicated that both high and low functioning participants improved their speech production after receiving either music or speech training; however, low functioning participants showed a greater improvement after the music training than the speech training. Children with ASD perceive important linguistic information embedded in music stimuli organized by principles of pattern perception, and produce functional speech.

  11. Speech networks at rest and in action: interactions between functional brain networks controlling speech production

    Science.gov (United States)

    Fuertinger, Stefan

    2015-01-01

    Speech production is one of the most complex human behaviors. Although brain activation during speaking has been well investigated, our understanding of interactions between the brain regions and neural networks remains scarce. We combined seed-based interregional correlation analysis with graph theoretical analysis of functional MRI data during the resting state and sentence production in healthy subjects to investigate the interface and topology of functional networks originating from the key brain regions controlling speech, i.e., the laryngeal/orofacial motor cortex, inferior frontal and superior temporal gyri, supplementary motor area, cingulate cortex, putamen, and thalamus. During both resting and speaking, the interactions between these networks were bilaterally distributed and centered on the sensorimotor brain regions. However, speech production preferentially recruited the inferior parietal lobule (IPL) and cerebellum into the large-scale network, suggesting the importance of these regions in facilitation of the transition from the resting state to speaking. Furthermore, the cerebellum (lobule VI) was the most prominent region showing functional influences on speech-network integration and segregation. Although networks were bilaterally distributed, interregional connectivity during speaking was stronger in the left vs. right hemisphere, which may have underlined a more homogeneous overlap between the examined networks in the left hemisphere. Among these, the laryngeal motor cortex (LMC) established a core network that fully overlapped with all other speech-related networks, determining the extent of network interactions. Our data demonstrate complex interactions of large-scale brain networks controlling speech production and point to the critical role of the LMC, IPL, and cerebellum in the formation of speech production network. PMID:25673742

  12. Speech Synthesis Applied to Language Teaching.

    Science.gov (United States)

    Sherwood, Bruce

    1981-01-01

    The experimental addition of speech output to computer-based Esperanto lessons using speech synthesized from text is described. Because of Esperanto's phonetic spelling and simple rhythm, it is particularly easy to describe the mechanisms of Esperanto synthesis. Attention is directed to how the text-to-speech conversion is performed and the ways…
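    The abstract notes that Esperanto's phonemic spelling makes synthesis easy to describe: each letter maps to exactly one sound, and stress always falls on the penultimate syllable. The sketch below is a minimal, hypothetical letter-to-sound pass (the IPA-like symbols are assumptions, not the article's actual synthesis rules).

```python
# Hypothetical sketch: Esperanto spelling is phonemic, so only the
# accented letters and 'c' need a special mapping; all other letters
# stand for themselves.
ESPERANTO_PHONEMES = {
    'c': 'ts', 'ĉ': 'tʃ', 'ĝ': 'dʒ', 'ĥ': 'x', 'ĵ': 'ʒ', 'ŝ': 'ʃ', 'ŭ': 'w',
}
VOWELS = 'aeiou'

def to_phonemes(word):
    """Map each letter to its single sound."""
    return [ESPERANTO_PHONEMES.get(ch, ch) for ch in word.lower()]

def stress_position(word):
    """Esperanto stress always falls on the penultimate syllable; return
    its 0-based index (syllables counted by vowels)."""
    n_syllables = sum(ch in VOWELS for ch in word.lower())
    return max(n_syllables - 2, 0)
```

    For example, "paco" maps to p-a-ts-o, and "esperanto" (four syllables) is stressed on the third.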

  13. The Functional Connectome of Speech Control.

    Directory of Open Access Journals (Sweden)

    Stefan Fuertinger

    2015-07-01

    In the past few years, several studies have examined the complexity of functional interactions between different brain regions during various human behaviors. Among these, neuroimaging research has established the notion that speech and language require an orchestration of brain regions for comprehension, planning, and integration of a heard sound with a spoken word. However, these studies have been largely limited to mapping the neural correlates of separate speech elements and examining distinct cortical or subcortical circuits involved in different aspects of speech control. As a result, the complexity of the brain network machinery controlling speech and language remained largely unknown. Using graph theoretical analysis of functional MRI (fMRI) data in healthy subjects, we quantified the large-scale speech network topology by constructing functional brain networks of increasing hierarchy from the resting state to motor output of meaningless syllables to complex production of real-life speech, as well as compared to non-speech-related sequential finger tapping and pure tone discrimination networks. We identified a segregated network of highly connected local neural communities (hubs) in the primary sensorimotor and parietal regions, which formed a commonly shared core hub network across the examined conditions, with the left area 4p playing an important role in speech network organization. These sensorimotor core hubs exhibited features of flexible hubs based on their participation in several functional domains across different networks and their ability to adaptively switch long-range functional connectivity depending on task content, resulting in a distinct community structure of each examined network. Specifically, compared to other tasks, speech production was characterized by the formation of six distinct neural communities with specialized recruitment of the prefrontal cortex, insula, putamen, and thalamus, which collectively…
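    The "flexible hubs" this abstract describes are commonly quantified with the participation coefficient, which measures how evenly a node's edges are spread across network modules. The plain-Python sketch below illustrates that measure on a toy adjacency matrix; it is not the study's pipeline, and the community labels are assumed inputs.

```python
def participation_coefficient(adj, communities):
    """Participation coefficient for each node:
    P_i = 1 - sum over modules s of (k_is / k_i)^2, where k_is is node
    i's number of links into module s and k_i its total degree.
    P near 0: edges confined to one module (provincial node);
    P near 1: edges spread evenly across modules (connector hub).
    """
    n = len(adj)
    p = []
    for i in range(n):
        k_i = sum(adj[i])
        if k_i == 0:
            p.append(0.0)          # isolated node: define P = 0
            continue
        frac = 0.0
        for s in set(communities):
            k_is = sum(adj[i][j] for j in range(n) if communities[j] == s)
            frac += (k_is / k_i) ** 2
        p.append(1.0 - frac)
    return p

# Toy graph: node 0 links equally into two modules, nodes 1 and 2 each
# link into a single module.
adj = [[0, 1, 1],
       [1, 0, 0],
       [1, 0, 0]]
communities = [0, 0, 1]
p = participation_coefficient(adj, communities)
```

    Here node 0 gets P = 0.5 (edges split across both modules) while nodes 1 and 2 get P = 0, matching the intuition that only node 0 behaves like a connector.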

  14. Private Speech in Ballet

    Science.gov (United States)

    Johnston, Dale

    2006-01-01

    Authoritarian teaching practices in ballet inhibit the use of private speech. This paper highlights the critical importance of private speech in the cognitive development of young ballet students, within what is largely a non-verbal art form. It draws upon research by Russian psychologist Lev Vygotsky and contemporary socioculturalists, to…

  15. 31 CFR 585.218 - Trade in United Nations Protected Areas of Croatia and those areas of the Republic of Bosnia and...

    Science.gov (United States)

    2010-07-01

    ... HERZEGOVINA SANCTIONS REGULATIONS Prohibitions § 585.218 Trade in United Nations Protected Areas of Croatia... importation from, exportation to, or transshipment of goods through the United Nations Protected Areas in the...

  16. Speech versus singing: Infants choose happier sounds

    Directory of Open Access Journals (Sweden)

    Marieve Corbeil

    2013-06-01

    Infants prefer speech to non-vocal sounds and to non-human vocalizations, and they prefer happy-sounding speech to neutral speech. They also exhibit an interest in singing, but there is little knowledge of their relative interest in speech and singing. The present study explored infants’ attention to unfamiliar audio samples of speech and singing. In Experiment 1, infants 4-13 months of age were exposed to happy-sounding infant-directed speech versus hummed lullabies by the same woman. They listened significantly longer to the speech, which had considerably greater acoustic variability and expressiveness, than to the lullabies. In Experiment 2, infants of comparable age who heard the lyrics of a Turkish children’s song spoken versus sung in a joyful/happy manner did not exhibit differential listening. Infants in Experiment 3 heard the happily sung lyrics of the Turkish children’s song versus a version that was spoken in an adult-directed or affectively neutral manner. They listened significantly longer to the sung version. Overall, happy voice quality rather than vocal mode (speech or singing) was the principal contributor to infant attention, regardless of age.

  17. Familiar units prevail over statistical cues in word segmentation.

    Science.gov (United States)

    Poulin-Charronnat, Bénédicte; Perruchet, Pierre; Tillmann, Barbara; Peereman, Ronald

    2017-09-01

    In language acquisition research, the prevailing position is that listeners exploit statistical cues, in particular transitional probabilities between syllables, to discover words of a language. However, other cues are also involved in word discovery. Assessing the weight learners give to these different cues leads to a better understanding of the processes underlying speech segmentation. The present study evaluated whether adult learners preferentially used known units or statistical cues for segmenting continuous speech. Before the exposure phase, participants were familiarized with part-words of a three-word artificial language. This design allowed the dissociation of the influence of statistical cues and familiar units, with statistical cues favoring word segmentation and familiar units favoring (nonoptimal) part-word segmentation. In Experiment 1, performance in a two-alternative forced choice (2AFC) task between words and part-words revealed part-word segmentation (even though part-words were less cohesive in terms of transitional probabilities and less frequent than words). By contrast, an unfamiliarized group exhibited word segmentation, as usually observed in standard conditions. Experiment 2 used a syllable-detection task to remove the likely contamination of performance by memory and strategy effects in the 2AFC task. Overall, the results suggest that familiar units overrode statistical cues, ultimately questioning the need for computation mechanisms of transitional probabilities (TPs) in natural language speech segmentation.
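    The transitional-probability (TP) cue this abstract weighs against familiar units can be sketched concretely: TP(x→y) is the count of the syllable pair xy divided by the count of x, and the statistical-segmentation account posits word boundaries where the forward TP dips. The following is a minimal sketch under assumed syllables and an assumed threshold, not the study's materials.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(x -> y) = count(xy) / count(x), over a continuous syllable stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

def segment(syllables, tps, threshold=0.9):
    """Insert a word boundary wherever the forward TP drops below threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append(''.join(current))
            current = [b]
        else:
            current.append(b)
    words.append(''.join(current))
    return words

# Toy three-word language ("tupi", "gola", "bida") concatenated without pauses.
syls = "tu pi go la bi da go la tu pi bi da bi da tu pi go la".split()
tps = transitional_probabilities(syls)
```

    In this stream, within-word TPs are 1.0 while between-word TPs fall below it, so thresholded segmentation recovers exactly the three words, which is what a familiarized part-word unit would compete against in the experiments above.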

  18. Digitized Ethnic Hate Speech: Understanding Effects of Digital Media Hate Speech on Citizen Journalism in Kenya

    Directory of Open Access Journals (Sweden)

    Stephen Gichuhi Kimotho

    2016-06-01

    Ethnicity in Kenya permeates all spheres of life. However, it is in politics that ethnicity is most visible. Election time in Kenya often leads to ethnic competition and hatred, often expressed through various media. Ethnic hate speech characterized the 2007 general elections in party rallies and through text messages, emails, posters and leaflets. This resulted in widespread skirmishes that left over 1200 people dead and many displaced (KNHRC, 2008). In 2013, however, the new battle zone was the war of words on social media platforms. More than at any other time in Kenyan history, Kenyans poured vitriolic ethnic hate speech through digital media like Facebook, Twitter and blogs. Although scholars have studied the role and effects of mainstream media like television and radio in proliferating ethnic hate speech in Kenya (Michael Chege, 2008; Goldstein & Rotich, 2008a; Ismail & Deane, 2008; Jacqueline Klopp & Prisca Kamungi, 2007), little has been done in regard to social media. This paper investigated the nature of digitized hate speech by describing: the forms of ethnic hate speech on social media in Kenya; the effects of ethnic hate speech on Kenyans’ perception of ethnic entities; ethnic conflict; and the ethics of citizen journalism. This study adopted a descriptive interpretive design and utilized Austin’s Speech Act Theory, which explains the use of language to achieve desired purposes and direct behaviour (Tarhom & Miracle, 2013). Content published between January and April 2013 from six purposefully identified blogs was analysed. Questionnaires were used to collect data from university students, as they form a good sample of the Kenyan population, are most active on social media and are drawn from all parts of the country. Qualitative data were analysed using NVivo 10 software, while responses from the questionnaire were analysed using IBM SPSS version 21. The findings indicated that Facebook and Twitter were the main platforms used to…

  19. Speech and nonspeech: What are we talking about?

    Science.gov (United States)

    Maas, Edwin

    2017-08-01

    Understanding of the behavioural, cognitive and neural underpinnings of speech production is of interest theoretically, and is important for understanding disorders of speech production and how to assess and treat such disorders in the clinic. This paper addresses two claims about the neuromotor control of speech production: (1) speech is subserved by a distinct, specialised motor control system and (2) speech is holistic and cannot be decomposed into smaller primitives. Both claims have gained traction in recent literature, and are central to a task-dependent model of speech motor control. The purpose of this paper is to stimulate thinking about speech production, its disorders and the clinical implications of these claims. The paper poses several conceptual and empirical challenges for these claims - including the critical importance of defining speech. The emerging conclusion is that a task-dependent model is called into question as its two central claims are founded on ill-defined and inconsistently applied concepts. The paper concludes with discussion of methodological and clinical implications, including the potential utility of diadochokinetic (DDK) tasks in assessment of motor speech disorders and the contraindication of nonspeech oral motor exercises to improve speech function.

  20. Denmark's national inventory report 2008 - Submitted under the United Nations framework convention on climate change, 1990-2006. Emission inventories

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Ole-Kenneth; Lyck, E.; Hjorth Mikkelsen, M. [and others]

    2008-05-15

    This report is Denmark's National Inventory Report to the Conference of the Parties under the United Nations Framework Convention on Climate Change (UNFCCC), due by 15 April 2008. The report contains information on Denmark's inventories for all years from 1990 to 2006 for CO₂, CH₄, N₂O, HFCs, PFCs, SF₆, CO, NMVOC and SO₂. (au)